A multi-network friendly Openstack VM image with netplugd

A while ago (a long while, sorry about that), I posted about setting up a VM in an Openstack cloud so that it worked nicely with multiple networks. I also promised that I would explain how to automate this process.
So, in this post, I will explain how to create an Openstack VM image that will automatically set up networking for all of the Openstack networks that are connected to the VM.
The principles and techniques explained here should work similarly in other clouds that support multiple networks (such as Amazon VPC).

TL;DR

  • Create a VM from your base image, with a single network. 
  • SSH into the instance and install netplugd.
  • Configure netplugd with an appropriate script, tweaked for your choice of Linux distribution.
  • Delete files that are not required, stop the instance and create a new image from the instance.
  • The new image will work correctly with all connected networks.

The Problem – VM Images with one configured NIC

As discussed in the previous post, setting up the networks in Neutron and starting a VM that is attached to them is not enough. The VM has to cooperate with the network setup, somehow. 
In many of the images provided by cloud vendors and Linux distributions, only one Network Interface Card – eth0 – is defined, so only the first network is available to the VM.
You can set up the other NICs using ssh (if network access is possible with only one NIC), or with a user-data script that runs during VM start-up (I'm not a fan of this approach; more on that in a different post). But the best way, in my opinion anyway, is to bake this behavior into your image.

The Solution – VM Images with NICs configured by netplugd

netplugd is a daemon that monitors the link status of one or more Ethernet interfaces and calls an external script when something changes.
And that is exactly what we need. (Another option is ifplugd.)

Let’s get to work.

Network Setup

I'll be using the same network setup as in the previous post.

Base VM

Start by creating a VM that is attached to a single network, and attach a floating IP to the instance. The command line is similar to the 'nova boot' command from the previous post; you just need to pass a single network ID instead of two. We'll be using the same security group and key-pair as well.
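For reference, the command looks roughly like this (a sketch; the flavor, base image name and network ID are placeholders for your own environment):
nova boot --flavor standard.medium --image "Ubuntu 12.04" --security-groups demo-security-group --key-name demo-keypair --nic net-id=XXXXX-XXXX-XXXX-XXXX-XXXXXXXX demo-vm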
And ssh into the instance:
ssh -i demo-keypair.pem ubuntu@<Floating-IP>
Now to set up netplugd:
sudo apt-get install netplug
sudo mv netplug /etc/netplug/netplug
sudo chmod +x /etc/netplug/netplug
You should read the netplug file. It is a straightforward script that detects network cards that are plugged in/out, and then edits the interfaces file accordingly.
Note: your choice of Linux distribution might keep the interfaces configuration in a different location or file format. Some tweaking of this file might be necessary. The script as provided works on Ubuntu 12.04. Feel free to send in a pull request if you want to add your configuration to the repo.
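To give you an idea of what such a hook does, here is a simplified sketch for Ubuntu 12.04 (not the exact script from the repo). netplugd invokes the policy script with the interface name and an in/out/probe action:

#!/bin/sh
# netplugd calls this policy script as: <script> <interface> <in|out|probe>
dev="$1"
action="$2"

case "$action" in
    in)
        # If the interface has no stanza yet, add a DHCP one, then bring it up.
        if ! grep -q "iface $dev" /etc/network/interfaces; then
            printf "\nauto %s\niface %s inet dhcp\n" "$dev" "$dev" >> /etc/network/interfaces
        fi
        exec ifup "$dev"
        ;;
    out)
        exec ifdown "$dev"
        ;;
    probe)
        # Bring the link up so netplugd can watch for a carrier on it.
        exec ip link set "$dev" up
        ;;
esac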
Now, let’s clean up the VM and close the session:
rm ~/.ssh/authorized_keys
exit
Power off the instance:
nova stop demo-vm
Wait until the VM fully shuts down. You can check its status with:
nova list --name demo-vm
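The output should look something like this once the instance has stopped (illustrative only; the ID, network name and address are placeholders):
+----------+---------+---------+------------+-------------+-------------------+
| ID       | Name    | Status  | Task State | Power State | Networks          |
+----------+---------+---------+------------+-------------+-------------------+
| <VM-ID>  | demo-vm | SHUTOFF | None       | Shutdown    | demo-net=10.0.0.4 |
+----------+---------+---------+------------+-------------+-------------------+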
Create a snapshot of the image:
nova image-create demo-vm "Ubuntu 12.04 with netplugd"

We can now delete the VM:
nova delete demo-vm
 
Your image is now ready!
Let's take it out for a spin. We'll use both networks this time:
nova boot --flavor standard.medium --image "Ubuntu 12.04 with netplugd" --security-groups demo-security-group --key-name demo-keypair --nic net-id=XXXXX-XXXX-XXXX-XXXX-XXXXXXXX --nic net-id=YYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY demo-vm
Attach a floating IP to the instance, then ssh into it and run ifconfig to check the interfaces:
ssh -i demo-keypair.pem ubuntu@<Floating-IP> ifconfig

See how both NICs are up and running? You now have a multi-network friendly Openstack image. Enjoy.

One last thing, and this one always gets first-time users.
Just because the NICs are up doesn't mean all is well in networking land. You will probably need to set up routing rules to determine which NIC should be used for each of your network packets. These rules depend on what you are trying to accomplish with this network configuration, so we'll leave the details for another post. For now, consult your sysadmin; the sketch below should give you a rough idea of what's involved.
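For example (placeholder addresses, not values from this post): if eth1 sits on a 10.0.1.0/24 network, you could give it its own routing table so that traffic arriving on it is answered through it:

# Replies to traffic that arrived on eth1 should leave through eth1 (policy routing).
sudo ip route add default via 10.0.1.1 dev eth1 table 100
sudo ip rule add from 10.0.1.0/24 table 100
# eth0 keeps the main default route; check the result with:
ip route show
ip rule show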

 

A multi-cloud console with mist.io and Quickbuild

I’m taking a break from blogging about my various Openstack related shenanigans. This time I want to talk about monitoring cloud usage.

I work with IaaS clouds – lots of them. The list includes several different AWS accounts and several HP cloud accounts. It also includes occasional ‘guest’ accounts on Softlayer as well as private Openstack and Cloudstack accounts. With so many resources, and so many projects, stuff gets dropped once in a while. VMs are left running, forgotten by the developer that was using them or missed by the automation that was supposed to shut them down. And in the cloud, if you forget something, you pay for it. Literally. VMs are billed for the time they are active, whether or not you were actually doing something useful with them. And before you know it, the bills start mounting, and the people from accounting show up at my desk.

So I need some way to monitor my cloud usage across multiple clouds and accounts. I spent a while looking online for a good cloud monitoring solution that would give me a quick overview of my current cloud usage. I love dashboards, by the way. Being able to see everything that interests me in one place makes things so much easier.

After looking around at a bunch of cloud monitoring options, I finally found mist.io. It had just the features I needed, and all that was left was to plug it into my existing dashboard system.

So let’s start by reviewing the pieces.

Mist.io

Mist.io (http://www.mist.io) is a cool open-source project that monitors VM usage across multiple clouds. The list of supported cloud providers is pretty extensive (see the up-to-date list here: http://mistansible.readthedocs.org/en/latest/mist_backends_module.html) and at this time includes:
EC2, Rackspace, Openstack, Linode, Google Compute Engine, SoftLayer, Digital Ocean, Nephoscale, HP Cloud, bare metal servers, Docker containers and KVM hypervisors.

There is also a mist.io website, which offers essentially the same product with some premium additions. I'll be using this service for my purposes, though you can always install the open-source version locally. Mist.io also offers a Python SDK (http://mistclient.readthedocs.org/en/latest/), which makes it a very scriptable system; I'll be using the client SDK shortly.

Mist.io includes a console for your current cloud usage which is pretty useful by itself:

[Screenshot: mist.io console (redacted)]

But for my project I also need to maintain history and statistics about my usage.

Quickbuild

The CI system we use for Cloudify is Quickbuild (http://www.pmease.com/). We’ve been using it for a very long time and it has proven to be a rock-solid system. Quickbuild also has a flexible dashboard system, where I can plug in my own custom data sources.

While this post deals a lot with the specifics of Quickbuild, the principles should be pretty much the same for any CI/Automation system.

The glue

The general idea is to define a configuration in Quickbuild that will poll the mist.io API for running VMs, collect historical data and display the latest results on a Quickbuild dashboard.

So let’s get to work!

Mist.io account

First up, you'll need to set up your account on http://mist.io (it's free).

Cloud credentials

For each cloud account that you want to monitor, create a dedicated user and give it the minimal permissions required to view the currently running instances. This is good practice for all integrations: use a dedicated account for each integration (using something like Openstack Keystone or AWS IAM) and give it only the permissions it MUST have.
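For example, on AWS this could be a dedicated IAM user with a read-only policy along these lines (a minimal sketch; mist.io may need a few more Describe* actions depending on what you want it to show):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:Describe*"],
            "Resource": "*"
        }
    ]
}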

Configure mist.io backends

In mist.io, a backend is a monitorable target that can host compute instances. These can be Openstack tenants and AWS regions, for instance. You will need to set up a backend for each one of these. If, like me, you work across a lot of AWS regions and Openstack tenants, this part can get a bit tedious. So I wrote a couple of scripts using the mist client SDK to speed things up a bit.

Set up HP Cloud Backends:


from mistclient import MistClient

client = MistClient(email="MY_MIST_EMAIL", password="MY_MIST_PASSWORD")

hp_username = "HP_CLOUD_USERNAME"
hp_password = "HP_CLOUD_PASSWORD"

hp_regions = [
    ["hpcloud:region-a.geo-1", "HP – US West"],
    ["hpcloud:region-b.geo-1", "HP – US East"]
]

# list of HP tenants to monitor
hp_tenants = ["my-first-tenant", "my-other-tenant"]


def create_hp_backends():
    for region, region_name in hp_regions:
        for tenant in hp_tenants:
            print "Creating HP backend for tenant %s in region %s" % (tenant, region)
            try:
                client.add_backend(title="%s – %s" % (region_name, tenant), provider=region,
                                   key=hp_username, secret=hp_password, tenant_name=tenant)
            except Exception as e:
                print "Failed to create backend: %s" % e.message

create_hp_backends()

Set up AWS Backends:


from mistclient import MistClient

client = MistClient(email="MY_MIST_EMAIL", password="MY_MIST_PASSWORD")

ec2_demo_access_key = "AWS_ACCESS_KEY"
ec2_demo_secret_key = "AWS_SECRET_KEY"
ec2_account_name = "AWS_ACCOUNT_NAME"


def create_ec2_backends():
    # creates backends for all ec2 regions
    for provider in client.supported_providers:
        if "EC2" in provider["title"]:
            title = "%s – %s" % (provider["title"], ec2_account_name)
            print "Creating backend: %s" % title
            try:
                client.add_backend(title=title, provider=provider["provider"],
                                   key=ec2_demo_access_key, secret=ec2_demo_secret_key)
            except Exception as e:
                print "Failed to create backend: %s" % e.message

create_ec2_backends()

Just grab the scripts and tweak the credentials and tenants to match what you need.

BTW: I do wish mist.io would make this bit a little easier. There really should be an easier way to just give them the credentials and say 'monitor everything'.

Create a script to collect the current compute instance details

I wrote another quick script using the mist SDK to do this. The whole thing is on GitHub:
https://github.com/barakm/mist-monitor

This is the interesting bit:

https://github.com/barakm/mist-monitor/blob/master/mist_monitor/mist_monitor.py

Note how the Python script generates an XML file with the details of the compute instances; Quickbuild likes XML files as input.
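I won't reproduce the exact schema here, but based on the XPath expressions used in the Quickbuild configuration below, the report is essentially a list of machine elements carrying a state attribute, roughly along these lines (the element and attribute names other than state are illustrative):

<machines>
  <machine name="demo-vm" backend="EC2 AP Northeast - my-account" state="running"/>
  <machine name="old-test-vm" backend="HP - US West - my-first-tenant" state="stopped"/>
</machines>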

Configure Quickbuild

First thing we need to do is set up Quickbuild to accept the XML file format our script generates. This is a one-time operation, but you need to be an administrator of the Quickbuild server to be able to do this.

  • As a Quickbuild administrator, go to Administration -> Plugin Management -> Custom Statistics Report -> Configure
  • Click “Add New Category”
  • Give your category a name, like “Running Cloud Instances” and an appropriate description
  • Add two ‘indicators’ (fields in the report)
    • running – This gives us the total number of running machines, and is usually the most interesting value. Set the XPath expression to: count(//machine[@state='running'])
    • all – This gives us the total number of machines, so it includes machines that are shut down or deleted. Not as interesting, but can be useful. Set the XPath expression to: count(//machine)

It should look something like this:

[Screenshot: Quickbuild custom statistics category configuration]

Set up the recurring task

With the custom Quickbuild category all set up, we can create the task that actually polls the mist.io API. This can and should be done with a regular Quickbuild user, not an administrator.

I have made the configuration available as a gist. You can import it from: https://gist.github.com/barakm/3927cc0e8930b259c69e

Or you can create it manually using the following instructions:

  • Create a new Quickbuild configuration somewhere in the build tree. I called mine ‘CloudNodeMonitor’
  • In the configuration definitions screen, go to Settings->Repositories
  • Click the ‘+’ icon to add a new repository and choose a git repo
  • Set the git repository URL to
    https://github.com/barakm/mist-monitor.git
    (you can always fork this repo if you want to add something). Make sure to give your Quickbuild repository a name you will remember.
  • In the Quickbuild configuration screen, go to settings->steps
  • Add a new step (it’s the ‘+’ icon) and choose repository->checkout.
  • In the step editing screen, make sure to choose the repository you created previously
  • Add a new step, and choose build->shell/batch command
  • Set the command field to:
    ./mist_monitor_runner.sh ${vars.getValue("mistUsername")} ${vars.getValue("mistPassword")}
    Note how we are passing the mist.io credentials as Quickbuild variables – we will configure them later.
  • Set the Working Directory field to: mist_monitor
  • Add a new step, and choose Publish -> Custom Statistics Report
  • Set the Statistics Category to the name of the custom statistics category you created previously (something like “Running Cloud Instances”)
  • Set the Files to Process field to:
    mist_monitor/output.xml
  • Set the Report Set Name to: All_Machines
  • In the Configuration editing screen, go to Settings->Variables
  • Add a new variable. Call it mistUsername, and set its value to the username of your mist.io account.
  • Add a new variable. Name it mistPassword (you may want to set the value to be displayed as a secret value, not a cleartext one) and set its value to the password of your mist.io account.
  • Set the task execution schedule. Go to: Settings->General Settings->Edit and schedule the periodic execution of the task. Once an hour works out fine for me.

Your new Quickbuild configuration should look something like this:

[Screenshot: Quickbuild task steps]

Run the task a couple of times from the Quickbuild console to make sure it works as expected. Have a look at the ‘Latest Build’ tab to see the results.

Set up the dashboard widget

Quickbuild has a built-in dashboard system that is pretty straightforward.

  • Choose the dashboard you want to use (or create a new one)
  • Select Add Gadget -> Others -> Custom Statistics
  • Choose a relevant title and set the configuration to the task you created
  • Set the Build field to: Latest Successful Build
  • Set the Category Name field to the Custom Category you created (“Running Cloud Instances”)
  • Select the “All_Machines” Report set and click Save.

You should see the latest results from your cloud monitor show up on the dashboard.

[Screenshot: Running Cloud Instances dashboard widget]
Click ‘View Report’ and choose the ‘Statistics’ tab, and you can see statistics on your cloud usage:

[Screenshot: instances statistics graph]
Now you have yourself a Quickbuild dashboard showing you the number of compute instances running on all of your clouds, courtesy of mist.io, plus some nice historical data as well.