Swift and quotas in the upcoming 1.8.0 (Grizzly) release.

There are two new nifty middlewares for doing quotas in the upcoming Swift 1.8.0 release: container_quotas and account_quotas.

They are two separate middlewares because they address different use cases.

container_quotas is typically used by end users; the use case here is to let a user set a limit on one of their containers.

Why would you want to restrict yourself, you may ask? Because when you allow public uploads to a container, for example with tempurl and/or formpost, you want to make sure people are not uploading an unlimited amount of data.

The headers to configure a container quota are:

X-Container-Meta-Quota-Bytes – the maximum size of the container, in bytes.
X-Container-Meta-Quota-Count – the maximum object count of the container.
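
As an end user you set these like any other container metadata, with a POST on the container. Here is a minimal sketch using python-swiftclient; the auth URL, credentials, container name and limits are just placeholders:

from swiftclient import client as swift

# Authenticate as the owner of the container (v1.0 style auth,
# placeholder endpoint and credentials).
conn = swift.Connection(
    authurl="http://swift.example.com/auth/v1.0",
    user="account:user",
    key="MY_KEY",
)

# Cap the container at 10 GiB and 10000 objects.
conn.post_container("uploads", headers={
    "X-Container-Meta-Quota-Bytes": str(10 * 1024 * 1024 * 1024),
    "X-Container-Meta-Quota-Count": "10000",
})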

account_quotas is more of a typical quota implementation. A “super user” with the ResellerAdmin group/role can set a byte limit on an account, and the account will not be able to store new objects/containers until someone cleans it up to get back under the quota.

The header to configure the account quota is:

X-Account-Meta-Quota-Bytes – the maximum size of the account, in bytes.
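
Since this has to be done by the reseller admin on the target account, with python-swiftclient you would POST directly against that account's storage URL with an admin token. A minimal sketch, where the storage URL and token are placeholders:

from swiftclient import client as swift

# Storage URL of the account to limit, and a token obtained by a user
# holding the ResellerAdmin role (both placeholders).
storage_url = "http://swift.example.com/v1/AUTH_target_account"
admin_token = "ADMIN_TOKEN"

# Cap the whole account at 50 GiB.
swift.post_account(storage_url, admin_token, headers={
    "X-Account-Meta-Quota-Bytes": str(50 * 1024 * 1024 * 1024),
})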

The commit for the container quotas is here :

Basic container quotas

and account quotas commit :

Account quotas

Enjoy.

 

Using python-novaclient against Rackspace Cloud next generation (powered by OpenStack)

With the modular auth plugin system merged into python-novaclient it is now very easy to use the nova CLI against the Rackspace Public Cloud powered by OpenStack.

We even have a metapackage that installs all the needed bits. It should be as easy as doing this:

pip install rackspace-novaclient

and all dependencies and extensions will be installed. To actually use the CLI you just need to specify the right arguments (or set the equivalent environment variables, see nova --help) like this:

nova --os_auth_system rackspace --os_username $USER --os_tenant_name $USER --os_password $KEY

On the Rackspace cloud the username is usually also the tenant name, so this should match.

For the UK cloud you just need to change the auth_system to rackspace_uk, like this:

nova --os_auth_system rackspace_uk --os_username $USER --os_tenant_name $USER --os_password $KEY
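
The same auth plugins can be used from Python. Here is a rough sketch, assuming your novaclient version exposes the auth_system keyword on the 1.1 client; the exact signature has moved around between releases, so check help(client.Client) for yours:

import os

from novaclient.v1_1 import client

# Credentials pulled from the environment just for the example
# (placeholder variable names).
USER = os.environ["OS_USERNAME"]
KEY = os.environ["OS_PASSWORD"]

# The rackspace auth plugin knows its own auth endpoint, so no auth_url
# is passed here; some novaclient versions may still want one explicitly.
cx = client.Client(USER, KEY, USER, auth_system="rackspace")

for s in cx.servers.list():
    print(s.name)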

Rackspace CloudDNS python binding

I have released a Python binding for Rackspace CloudDNS which allows you to create/update/delete domains and records. It is available on GitHub:

https://github.com/rackspace/python-clouddns/

The binding is pretty simple but unfortunately has no documentation (or even proper tests); you can figure out most of it from here:

https://github.com/rackspace/python-clouddns/blob/master/tests/t.py
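
To give an idea, here is a rough sketch of what usage looks like, pieced together from that test file; treat the exact signatures as assumptions and double-check them in t.py:

import clouddns

# Authenticate against Cloud DNS (placeholder credentials).
conn = clouddns.connection.Connection("MY_USERNAME", "MY_API_KEY")

# Create a domain and an A record inside it.
domain = conn.create_domain(name="example.com", ttl=300,
                            emailAddress="admin@example.com")
domain.create_record("www.example.com", "1.2.3.4", "A")

# List the domains on the account.
for d in conn.get_domains():
    print(d.name)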

Pull requests adding a bit of documentation are very welcome.

Mass editing firewalls on Rackspace Cloud.

A lot of our customers on the Rackspace cloud have been asking how to mass edit the firewalls of multiple servers without doing it manually.

As part of my cloudservers-api-demo I have written a simple firewall script that abstracts the operating system's firewall software to enable/disable the firewall and allow ports/networks.

The script has been kept very simple by design and currently only allows you to:

  • enable the firewall

  • disable the firewall

  • allow or disallow a port or a network

  • see firewall status

PREREQUISITES

  • A management server running Ubuntu Maverick.

  • A supported operating system for the clients, which includes:

  • Debian.

  • Ubuntu.

  • RHEL.

  • Fedora

  • My patched python-cloudservers library (see below for install instructions).

  • Your SSH key installed on all VMs for the root user.

Install

  • After you have kicked off a VM with Ubuntu Maverick and connected to it as root, the first thing to do is install some prerequisite packages:

apt-get update && apt-get -y install python-stdeb git

Check out my python-cloudservers library:

git clone git://github.com/chmouel/python-cloudservers.git

Once it is checked out, go into the python-cloudservers directory that has just been created and do this:

cd python-cloudservers/
python setup.py install

This should automatically install all the dependencies.

Now you can install my api-demo, which includes the firewall script:

cd ../
git clone git://github.com/chmouel/cloudservers-api-demo

You first need to configure some environment variables which hold information about your Rackspace account.

Edit your ~/.bashrc (or /etc/environment if you want to make it global) and configure those variables:

export RCLOUD_DATACENTER=UK
export UK_RCLOUD_USER="MY_USERNAME"
export UK_RCLOUD_KEY="MY_API_KEY"
export UK_RCLOUD_AURL="https://lon.auth.api.rackspacecloud.com/v1.0"

or for the US you would have :

export RCLOUD_DATACENTER=US
export US_RCLOUD_USER="MY_USERNAME"
export US_RCLOUD_KEY="MY_API_KEY"
export US_RCLOUD_AURL="https://auth.api.rackspacecloud.com/v1.0"

Source your ~/.bashrc or log back into your account to have those variables set up. You can test that it works by going to:

~/cloudservers-api-demo/python

and launch the command :

./list-servers.py

to check that everything is working properly (it should list the servers for your datacenter).

You are now basically ready to mass update the firewall on all your servers.

Let’s say you have two web servers named web1 and web2 and two db servers named db1 and db2, and you would like to allow port 80 on the web servers and port 3306 on the db servers.

You would have to go to this directory :

~/cloudservers-api-demo/firewall/

and first execute this command to see the help/usage:

./fw-control.py --help

So, to first enable the firewall on all the web and db servers, you can do:

./fw-control.py -s "web db" enable

It will connect to and enable the firewall on all the servers whose names match web or db.

Now let’s say we want to allow port 80 on the web servers:

./fw-control.py -s "web" allow port 80

If you log into the servers you can check with

iptables -L -n

that it has been enabled properly. You can do the same with port 3306 on the db servers.

The script is simple enough for you to modify to your liking and make it more modular for your specific environment.

How to shut down your Cloud Server and not get billed for it.

Currently on the Rackspace Cloud, when you shut down a Cloud Server you are still paying for it.

The reason is that when a Cloud Server is shut down it is still sitting on the hypervisor and still uses resources on the cloud, so you still get billed for it.

There is a way to get around this by having the Cloud Server stored as an image in Cloud Files.

The caveat with this solution is that every time you create a server out of the stored image you get a new IP, and in certain cases you will need to make a change in your application for the new IP.

If you only use domain names instead of IPs in your application you are not affected by the IP change. To update the domain with the new IP after creating the VM you can either:

– have a dynamic DNS or Cloud DNS record updated just after you create the server out of the image, or

– have a script log into the server and update the IP directly in /etc/hosts.

In programming terms, these are the steps you would take. I am using the python-novaclient binding, which allows you to connect to the Rackspace Cloud.

First I am going to create a connection object, which we are going to authenticate with:

import novaclient
cx = novaclient.OpenStack(USERNAME,
                            API_KEY)

or for the UK :

import novaclient
cx = novaclient.OpenStack("USERNAME",
                            "API_KEY",
                            'https://lon.auth.api.rackspacecloud.com/v1.0')

cx is the object from which we can do things. Let’s first find the server that we want; assuming your server is called test, you would get it like this:

server = cx.servers.find(name='test')

The variable ‘server’ contains our server ‘object’ and we can get its ID out of it:

server_id = server.id

We have the function cx.images.create to create an image from a server; it accepts the image name as its first argument and the server ID we just got as its second. This starts the creation of the image:

cx.images.create("backup_server", server_id)

The server has started to be backed up into your Cloud Files account; you can see it directly in the “My Server Images” tab of the Hosting => Cloud Servers section.
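
Before deleting the server you probably want to wait for the image to finish saving. Here is a minimal polling sketch, assuming the image object exposes a status attribute that ends up as ACTIVE once the save completes (check your version of the binding):

import time

# Wait until the freshly created image is fully saved before we
# remove the server it was taken from.
image = cx.images.find(name="backup_server")
while image.status != "ACTIVE":
    time.sleep(30)
    image = cx.images.get(image.id)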

You can now delete the server since it is backed up into Cloud Files:

server.delete()

At this point you are not billed for your Cloud Server anymore, only for the storage usage in Cloud Files.

When you want to restore the image as a server, you first get the ID of your image:

image = cx.images.find(name='backup_server')
image_id = image.id

and create the server out of this image :

server = cx.servers.create(image=image_id,
                           flavor=1,
                           name="test")

The flavor argument is the size of the server you want; 1 is the minimal 256MB flavor. The full list is:


In [14]: for x in cx.flavors.list():
   ....:     print x.id, '-', x.name
   ....:     
   ....:     
1 - 256 server
2 - 512 server
3 - 1GB server
4 - 2GB server
5 - 4GB server
6 - 8GB server
7 - 15.5GB server

When the server has been created it should be exactly the same as what you had before creating the image. You can now run a script using SSH keys to log into your servers and make adjustments for the new IP.
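
For example, here is a small sketch that waits for the new server to become active and prints its public IP, assuming the server object exposes status and addresses attributes as the binding of that era did:

import time

# Poll the newly created server until it is ready, then grab its
# public IP so a script can update DNS or /etc/hosts with it.
while server.status != "ACTIVE":
    time.sleep(30)
    server = cx.servers.get(server.id)
print("new public IP: %s" % server.addresses["public"][0])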

Uploading to Rackspace Cloud Files via FTP

Some time ago I wrote an FTP proxy for Rackspace Cloud Files, which exposes Cloud Files as an FTP server.

Thanks to the open source community, a user on GitHub took it over and added support for OpenStack and all the latest features available in Cloud Files.

It is now pretty robust and works well via Nautilus, even with the pseudo-hierarchical folder feature. The fun part is that it effectively gives you a cloud drive where you can easily store your files/backups from your Linux desktop via Nautilus' built-in FTP support.

I have made a video that shows how it works:

Upload to the Cloud via FTP from Chmouel Boudjnah on Vimeo.

Installing python-cloudfiles from pypi

I have just uploaded python-cloudfiles to PyPI, available here.

This makes it easy to add as a dependency of your project; for example you can have something like this in your setup.py:

requirements = ['python-cloudfiles']

and it will automatically be downloaded as part of the dependencies with easy_install or pip.
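
Put together, a minimal setup.py sketch (the project name, version and module are just placeholders):

from setuptools import setup

requirements = ['python-cloudfiles']

setup(name='myproject',          # placeholder project name
      version='0.1',
      py_modules=['myproject'],
      install_requires=requirements)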

Cool kids on the latest Debian/Ubuntu can do something like this (from the python-stdeb package):

pypi-install python-cloudfiles

which will automatically download the tarball from PyPI and install it as a package (the way it should be on a production machine!).

If you have a virtualenv environment you can easily do this (needs the python-pip package):

pip -E /usr/local/myvirtualenvroot install python-cloudfiles

and the magic will be done to get you the latest python-cloudfiles.

As a bonus you can browse the python-cloudfiles documentation online:

http://packages.python.org/python-cloudfiles/
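
Once installed, basic usage looks roughly like this; a minimal sketch with placeholder credentials and names:

import cloudfiles

# Authenticate and upload a small object into a container.
conn = cloudfiles.get_connection('MY_USERNAME', 'MY_API_KEY')
container = conn.create_container('backups')
obj = container.create_object('hello.txt')
obj.write('Hello from Cloud Files')

# List the objects we now have in the container.
for name in container.list_objects():
    print(name)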


[Update] This has been renamed back to python-cloudfiles, please update your setup.py or scripts.

How to use fireuploader with the Rackspace Cloud UK

Fireuploader is a Firefox addon that gives you a nice GUI to upload files from your browser.

I have made a special version of the extension to make it work with the Rackspace Cloud UK.

Install the addon from here :

http://www.chmouel.com/pub/firefox_universal_uploader__fireuploader_-0.4.5-fx+mz+ukcf.xpi

Allow the website by clicking Allow in the yellow bar at the top, as seen in this screenshot:

Click Allow on the top

Click on Install Now and restart Firefox.

After Firefox has restarted, go to Tools => Fireuploader and choose Rackspace Cloud UK in the dropdown list:

and click on Manage Account :

Enter your UK username and UK API key, tick “Save Password” if you like, and it should log you into your UK cloud:

How to access the UK Rackspace Cloud with the PHP binding

One of the last libraries I did not document in my earlier post was php-cloudfiles. You need at least version 1.7.6 to have support for different auth servers; once you have that, you can get access to Cloud Files via the library like this:

<?php
require_once("cloudfiles.php");

# Your Rackspace Cloud UK credentials
$USER = "MY_API_USERNAME";
$API_KEY = "MY_API_KEY";

$auth = new CF_Authentication($USER, $API_KEY, NULL, UK_AUTHURL);
$auth->authenticate();
?>

Backup with duplicity on Rackspace CloudFiles (including UK) script.

It seems that my post about using duplicity to back up your data on Rackspace Cloud Files got popular, and people may be interested in using it with the newly (beta) released Rackspace Cloud UK. You just need to have an environment variable exported at the top of your backup script, like this:

export CLOUDFILES_AUTHURL=https://lon.auth.api.rackspacecloud.com/v1.0

and it will use the UK auth server (the same goes for an OpenStack auth server if you have your own Swift install).

To make things easier I have taken this script from :

http://damontimm.com/code/dt-s3-backup

and adapted it to make it work with Rackspace Cloud Files.

This is available here :

https://github.com/chmouel/dt-cf-backup

You need to make sure that you have python-cloudfiles installed; on a Debian or Ubuntu system you can do it like this:

sudo apt-get -y install python-stdeb 
sudo pypi-install python-cloudfiles

For other operating systems check their documentation; usually it is very easy to do via pip (pip install python-cloudfiles).

When you have installed duplicity and checked out the script (see the GitHub page for documentation on how to do it) you can start configuring it.

At the top there is a detailed explanation of the different variables that need to be configured. You can change them in the script itself or configure them in an external configuration file in your home directory called ~/.dt-cf-backup.conf; here is an example:

export CLOUDFILES_USERNAME="MY_USERNAME"
export CLOUDFILES_APIKEY="MY_APIKEY"
export PASSPHRASE="MY_PASSPHRASE"
GPG_KEY="8D643162"
ROOT="/home/chmouel"
export DEST="cf+http://duplicity_backup"
INCLIST=( /home/chmouel/ )
EXCLIST=( "/home/chmouel/tmp" "/**.DS_Store" "/**Icon?" "/**.AppleDouble" )
LOGDIR="/tmp/"
LOG_FILE_OWNER="chmouel:"

You can then just run :

./dt-cf-backup.sh --backup  

to do your backup.

There is much more documentation in the README.txt.

I would just like to thank the author of dt-s3-backup again for this script; I have only made a few modifications for Rackspace Cloud Files.