Category Archives: Rackspace

How to use the Rackspace Cloud UK API

Rackspace just released the public beta of the UK version of the Rackspace Cloud. The UK Rackspace Cloud doesn't have the same auth server as the US Cloud, so there are a few changes you need to make to support it. Much like Amazon has different regions for EC2, we now have different geographical zones between the US and the UK.

If you access the API directly, you just need to adjust the auth URL in your code to point to:

instead of :

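As a sketch, the choice of endpoint can be centralised in one place. The URLs below are what I believe the US and UK auth endpoints to be at the time of writing; treat them as assumptions and verify against the official Rackspace documentation:

```python
# Hypothetical helper: pick the right auth endpoint per region.
# The URLs are assumptions -- check the official docs before relying on them.
AUTH_URLS = {
    "us": "https://auth.api.rackspacecloud.com/v1.0",
    "uk": "https://lon.auth.api.rackspacecloud.com/v1.0",
}

def auth_url_for(region):
    """Return the auth URL for a given region ('us' or 'uk')."""
    try:
        return AUTH_URLS[region.lower()]
    except KeyError:
        raise ValueError("unknown region: %s" % region)
```

You can then pass `auth_url_for("uk")` as the `authurl`/`auth_url` argument shown in the binding examples below.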
The language bindings provided by Rackspace have all been updated and are available from GitHub:

For Python CloudFiles :

 cnx = cloudfiles.Connection(api_username, api_key, authurl="")

For Python CloudServers :

    cloudservers.CloudServers("USERNAME", "API_KEY", auth_url="")

For Ruby CloudFiles :

require 'cloudfiles'

  # Log into the Cloud Files system
cf = CloudFiles::Connection.new(
                                :username => "USERNAME",
                                :api_key => "API_KEY",
                                :authurl => "")

For C# CloudFiles :

UserCredentials userCreds = new UserCredentials(new Uri(""), username, api_key, null, null);
Connection connection = new com.mosso.cloudfiles.Connection(userCreds);

For Java CloudFiles, add a file in your classpath with this content:


and you will then be able to access it like this, without any arguments to the constructor:

FilesClient client = new FilesClient();

For non-Rackspace bindings, I have sent a patch to Apache libcloud:

which, when integrated, will allow you to do something like this:

For jclouds, you can just pass the auth server like this for CloudFiles:

and like this for CloudServers:

For Ruby Fog:

require 'rubygems'
require 'fog'

rackspace = Fog::Storage.new(
  :provider => 'Rackspace',
  :rackspace_api_key => "",
  :rackspace_username => "",
  :rackspace_auth_url => "")

Automatically spawn Rackspace Cloud Servers and customise them.

Lately I had to spawn some cloud servers and automatically customise them.

I used the python-cloudservers library and installed it with pypi-install (this works for Debian/Ubuntu; you may want to check for other distros):

pypi-install python-cloudservers

From there, writing the script was pretty straightforward. I needed to know which flavour of Cloud Server I wanted; in my case the smallest was good enough, which is ID 1.

If you want to see all flavours, you can do something like this from the Python prompt:

import cloudservers
cs = cloudservers.CloudServers("API_USERNAME", "API_KEY")
for i in cs.flavors.list():
    print "ID: %s = %s" % (i.id, i.name)

which should output something like this at the time this article was written:

ID: 1 = 256 server
ID: 2 = 512 server
ID: 3 = 1GB server
ID: 4 = 2GB server
ID: 5 = 4GB server
ID: 6 = 8GB server
ID: 7 = 15.5GB server
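Rather than hard-coding flavour ID 1, you could also pick the smallest flavour programmatically. This is just a sketch; it assumes the flavour objects expose `ram` and `id` attributes, as python-cloudservers flavours do:

```python
def smallest_flavor(flavors):
    """Given flavour objects with a .ram attribute, return the one
    with the least RAM (e.g. the 256MB flavour above)."""
    return min(flavors, key=lambda f: f.ram)
```

Used as `smallest_flavor(cs.flavors.list()).id`, this keeps working even if the flavour IDs change.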

You need to figure out the image ID as well, which is basically the operating system; in this case I wanted Ubuntu Maverick, which is 69. If you want to see all image types, you can do:

import cloudservers
cs = cloudservers.CloudServers("API_USERNAME", "API_KEY")
for i in cs.images.list():
    print "ID: %s = %s" % (i.id, i.name)

which prints something like this for me at this time:

ID: 29 = Windows Server 2003 R2 SP2 x86
ID: 69 = Ubuntu 10.10 (maverick)
ID: 41 = Oracle EL JeOS Release 5 Update 3
ID: 40 = Oracle EL Server Release 5 Update 4
ID: 187811 = CentOS 5.4
ID: 4 = Debian 5.0 (lenny)
ID: 10 = Ubuntu 8.04.2 LTS (hardy)
ID: 23 = Windows Server 2003 R2 SP2 x64
ID: 24 = Windows Server 2008 SP2 x64
ID: 49 = Ubuntu 10.04 LTS (lucid)
ID: 14362 = Ubuntu 9.10 (karmic)
ID: 62 = Red Hat Enterprise Linux 5.5
ID: 53 = Fedora 13
ID: 17 = Fedora 12
ID: 71 = Fedora 14
ID: 31 = Windows Server 2008 SP2 x86
ID: 51 = CentOS 5.5
ID: 14 = Red Hat Enterprise Linux 5.4
ID: 19 = Gentoo 10.1
ID: 28 = Windows Server 2008 R2 x64
ID: 55 = Arch 2010.05
ID: 6719676 = Backup-Image

Now, to make things automatic, we send our ~/.ssh/id_rsa.pub to '/root/.ssh/authorized_keys'. Assuming you have a properly configured ssh-agent with the key already loaded, you get passwordless access and can launch commands.

I have a script that does basic customisations at:

but you get the idea; from there you can launch commands the way you want. You could also scp the script over and run it via ssh afterwards if you want to keep some non-public bits in it.

Here is the full script. You need to adjust a few variables at the top of the file and customise it the way you want, but it should get you started:
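A minimal sketch of the core of such a script (not the actual one; it assumes python-cloudservers' `servers.create()` accepts a `files` mapping, which is how the key-injection "personality" feature is exposed, and that image 69 / flavour 1 are still valid IDs):

```python
import os
import time

def spawn_server(cs, name, image_id=69, flavor_id=1,
                 pubkey_path="~/.ssh/id_rsa.pub"):
    """Create a server and inject our public SSH key so we get
    passwordless root access once it is up."""
    pubkey = open(os.path.expanduser(pubkey_path)).read()
    server = cs.servers.create(
        name,
        image=image_id,    # 69 = Ubuntu 10.10 (maverick) at time of writing
        flavor=flavor_id,  # 1 = 256 server
        files={"/root/.ssh/authorized_keys": pubkey},
    )
    # Poll until the server leaves the BUILD state.
    while server.status == "BUILD":
        time.sleep(10)
        server = cs.servers.get(server.id)
    return server

# Usage (with real credentials):
#   import cloudservers
#   cs = cloudservers.CloudServers("API_USERNAME", "API_KEY")
#   server = spawn_server(cs, "test-server")
```

Once `spawn_server()` returns, you can ssh in as root and run your customisation script.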

Upload a file to Rackspace Cloud Files from Windows

I don't use the Windows operating system much, except when I have to synchronize my Garmin GPS in order to use the excellent SportsTrack software for my fitness training.

I wanted to safely back up my SportsTrack 'logbook' directly to Rackspace Cloud Files. While this is easy to do from Linux using another script I made, I didn't have anything at hand for Windows that didn't require installing a bunch of Unix tools.

So I made a quick C# CLI binary to do just that, and run my backups via a 'Scheduled Task' (the Windows equivalent of cron).

It’s available here :

and note that you will need NAnt to compile it.

Upload to Rackspace Cloud Files in a shell script

I don't use the GUI much and am always on the command line, so I don't really use the Cloud Files plugin I created for Nautilus.

So here is a shell script that uploads to Rackspace Cloud Files and gives you back a shortened version of the file's public URL. Great for quick sharing… You have to install the zenity binary first.

[Update: this is now available here]
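For reference, the core of such an upload comes down to something like this in Python with python-cloudfiles (a sketch; `upload_and_share` is a hypothetical helper name, and I believe `make_public()` and `public_uri()` are the calls the library exposes for CDN-enabling a container and getting the public URL):

```python
import os

def upload_and_share(conn, container_name, path):
    """Upload a local file to a container, make the container
    public (CDN-enabled) and return the object's public URI."""
    container = conn.create_container(container_name)  # no-op if it exists
    container.make_public()
    obj = container.create_object(os.path.basename(path))
    obj.load_from_filename(path)
    return obj.public_uri()

# Usage (with real credentials):
#   import cloudfiles
#   conn = cloudfiles.get_connection("USERNAME", "API_KEY")
#   url = upload_and_share(conn, "shared", "/tmp/photo.jpg")
```

The shell script essentially does this, then passes the resulting URL through a URL shortener and shows it via zenity.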

New GNOME plugin for uploading to Rackspace Cloud Files and APT/PPA repo for CF tools.

Some time ago I made a shell script to upload directly to Rackspace Cloud Files using the scripting capability of Nautilus. While it worked well, it did not offer a progress bar and was hard to update. I have now made a proper Python Nautilus plugin which offers these features.

The code is available here :

The old version is here; it is still a good example of uploading to Rackspace Cloud Files from the shell:

To make it easier to install all the tools I have made for Rackspace Cloud Files, I have made available a PPA repository for Ubuntu Karmic, which should also work on Debian unstable:

It also contains the APIs packaged, until they get uploaded to the official Debian/Ubuntu repositories.

FTP server for Cloud Files

I have just committed an experimental FTP server backed by Cloud Files. It acts completely transparently, so you can use any FTP client to connect to Cloud Files.

There are probably a couple of bugs in there, but the basics seem to be working; please let me know if you find any problems.


By default it binds to localhost on port 2021 so it can be launched by an unprivileged user; this can be changed via the command-line option -p. The username and password are your API username and key.

Manual Install

FTP-Cloudfs requires pyftpdlib, which can be installed from here:

and python-cloudfiles :

You can then check out FTP-Cloudfs from here:

Installing a Python package is pretty simple: just run 'python setup.py install' after uncompressing the downloaded tarball.

Automatic Install:

You can generate a Debian package directly from the source if you have dpkg-buildpackage installed on your system. It also gives you a nice init script to start the ftp-cloudfs process automatically.


Although I work for Rackspace Cloud, this is not supported by Rackspace, but please feel free to leave a comment here if you have any problems.

Accessing to Rackspace Cloud Files via servicenet (update)

Last week I posted an article explaining how to connect to Rackspace Cloud Files via the Rackspace ServiceNet, but I actually got it wrong, as pointed out by my great colleague exlt, so I took it down until I figured out how to fix it.

I have now added that feature properly to the PHP and Python APIs in version 1.5.0, as a 'servicenet' argument to the connection, and updated the blog post here:

It should give you all the information on how to use that feature.

I have also released a minor update in 1.5.1 that lets you define the environment variable RACKSPACE_SERVICENET to force the use of the Rackspace ServiceNet; this means you don't have to modify the tools and can keep the code clean between production and testing.
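The environment-variable override boils down to something like this inside the connection code (a sketch of the mechanism, not the library's exact code; `use_servicenet` is a hypothetical helper name):

```python
import os

def use_servicenet(servicenet_arg, environ=os.environ):
    """ServiceNet is used when either the connection argument asks
    for it or the RACKSPACE_SERVICENET environment variable is set."""
    return bool(servicenet_arg) or "RACKSPACE_SERVICENET" in environ
```

With this, `export RACKSPACE_SERVICENET=1` in your production environment is enough to flip every tool over to the internal network.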

How to connect to Rackspace Cloud Files via ServiceNET

If you are a Rackspace customer and are planning to use Rackspace Cloud Files via its internal network (ServiceNet), so you don't get billed for the bandwidth going to Cloud Files, this is how you can do it.

The first thing is to check with your support team that your servers are connected to ServiceNet; if you have that connection, then there is a small change to make in your code.

The second thing is to use the just-released 1.5.0 version on GitHub, for PHP:

and for python :

(you need to click on the download link at the top to download the tarball of the release).

After this it is just a matter of setting the servicenet argument; for example in PHP:


$auth = new CF_Authentication($user, $api_key);
$conn = new CF_Connection($auth, $servicenet=true);

In Python you can do it like this:


cnx = cloudfiles.get_connection(username, api_key, servicenet=True)

Rackspace Cloud Files helper scripts

Since I work a lot with Rackspace Cloud Files, I have put together some quick scripts using the python-cloudfiles API to do ls, rm, and cp of containers or objects.

It's really a collection of quickly written stuff put in the same file, but even if it is not useful to you, it should give you an idea of how python-cloudfiles works. It is all available here:
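For example, an ls over containers and objects comes down to something like this (`cf_ls` is a hypothetical helper name; `get_all_containers()`, `get_container()` and `get_objects()` are the python-cloudfiles calls I believe are used here):

```python
def cf_ls(conn, container_name=None):
    """List container names, or the object names inside one container."""
    if container_name is None:
        return [c.name for c in conn.get_all_containers()]
    container = conn.get_container(container_name)
    return [o.name for o in container.get_objects()]

# Usage (with real credentials):
#   import cloudfiles
#   conn = cloudfiles.get_connection("USERNAME", "API_KEY")
#   print(cf_ls(conn))            # all containers
#   print(cf_ls(conn, "backup"))  # objects in one container
```

rm and cp follow the same pattern with `delete_object()` and a download/upload pair.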

rsync like backup to Rackspace Cloud File with duplicity

It seems that there is not much documentation about how to do rsync-like backups with duplicity, so here it is:

UPLOAD_TO_CONTAINER="backup" # adjust it as you like
export CLOUDFILES_USERNAME="Your Username"
export CLOUDFILES_APIKEY="Your API key"
export PASSPHRASE="The passphrase for your encrypted backup"

duplicity /full/path cf+http://${UPLOAD_TO_CONTAINER}

This takes care of uploading the backup files to the backup container. It does so incrementally, detecting the changes to your file system and uploading only those. There are many more options for duplicity; look at the manpage for more info.