Sometimes there are ideas that are so obviously good that everyone has been waiting for them. When Monty mentioned on the OpenStack development mailing list a tool he was hacking on to integrate Docker containers into testing, it was clearly one of those ideas everybody had been thinking about: it would be awesome if it were implemented and started to get used.
The idea of dox, as the name implies, is to behave much like the tox tool, but running Docker containers instead of virtualenvs.
Testing in the OpenStack world is a bit different from other unit testing. Since OpenStack inherently works with local system components, we have to abstract away the local developer box so that tests run against exactly the system components we expect. In other words, if we run our tests against a ZooKeeper daemon of a specific version, we want to make it sure and easy that this exact version has been installed.
And that’s where Docker can help: you can easily specify different images and how to build them, making sure those tools are installed when we run our testing targets.
There are other issues with tox that we have encountered in our extensive use of it in the OpenStack world and that we hope to solve here. virtualenv has been slow for us, and we have come up with all sorts of hacks to get around it. And as Monty mentioned in his mailing list post, Docker itself does an EXCELLENT job at handling caching and reuse. We can easily see, in the future, standard images built by the openstack-infra folks, validated by upstream openstack-ci and published on Dockerhub, that everyone else (and dox) can use to run tests.
The tool is available on Stackforge here :
with a handy README to get you started :
It’s not quite ready yet, but you can start running tests with it. If you want a fun project to work on that can help the whole Python development community (and not just OpenStack), come hack with us. We are also on the Freenode IRC servers, in the #dox channel.
If you are not familiar with the Stackforge/OpenStack contribution process, this wiki page should guide you through it :
Sometimes you just need a long transatlantic flight and a stupidly long stop-over in a random city to do one of those tasks that would improve your day-to-day but that you never take the time to do.
When using Emacs I wanted a simple way to launch nosetests on the function my cursor is currently in. The nosetests syntax is a bit tricky, and I always have to look at my shell history to remember the proper form (nosetests directory/filename.py:Class.function).
I created a simple Emacs wrapper for that, which lets you hit a key to copy the nosetests command to feed to your shell, or to run it in the compile buffer.
It’s available from here :
I have bound these keys in my python-mode hook :
(local-set-key (kbd "C-S-t") 'nosetests-copy-shell-comand)
(local-set-key (kbd "C-S-r") 'nosetests-compile)
UPDATE: There is already another nose mode that does much more, available here: https://bitbucket.org/durin42/nosemacs/
Lately I have done quite a bit of work with python-novaclient, the (nova/keystone) OpenStack client. I often experiment with it in ipython in the console.
There is a nice debugging facility in novaclient, which you can see when using the --debug argument on the command line. If you want to use it from ipython, you can put this at the beginning of your session :
This gives you the details of the session, showing the REST requests and responses, including the headers. It even shows you the curl commands you can use on the command line to experiment with it.
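Since the original snippet is not reproduced here, this is only a minimal sketch of one way to surface that output, assuming python-novaclient routes its HTTP request/response dump through the standard logging module under a logger named 'novaclient.client':

```python
import logging

# assumption: novaclient logs its HTTP traffic (requests, responses,
# headers, curl equivalents) on the 'novaclient.client' logger
logger = logging.getLogger('novaclient.client')
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())
```

With this in place at the top of an ipython session, every call made through the client object should print its debug trace to the console.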
I have just uploaded python-cloudfiles to PyPI, available here
This makes it easy to add as a dependency of your project; for example, you can have something like this in your setup.py :
requirements = ['python-cloudfiles']
and it will automatically be downloaded as a dependency with easy_install or pip.
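Putting it together, a minimal setup.py could look like this (the project name and version are hypothetical placeholders):

```python
# minimal setup.py sketch; 'myproject' and '0.1' are placeholders
from setuptools import setup

requirements = ['python-cloudfiles']

setup(
    name='myproject',
    version='0.1',
    install_requires=requirements,
)
```

Running easy_install or pip against this project then pulls python-cloudfiles from PyPI automatically.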
Cool kids on the latest Debian/Ubuntu can do things like this (from the python-stdeb package) :
which automatically downloads the tarball from PyPI and installs it as a package (the way it should be done on prod machines!)
If you have a virtualenv environment you can easily do (needs the python-pip package) :
pip -E /usr/local/myvirtualenvroot install python-cloudfiles
and magic happens to get you the latest python-cloudfiles.
As a bonus, you can browse the python-cloudfiles library online :
[Update] This has been renamed back to python-cloudfiles; please update your setup.py or scripts.
I am sure billions of people have already done this, but I needed it quickly for my project and did not feel like googling around :
Since I haven’t seen many scripts like this around the web, here is a quick one to grab a bunch of albums from Facebook (your own); nothing fancy, just something to get you started with pyfacebook.
import os
import urllib

from facebook import Facebook

# see http://developers.facebook.com/get_started.php
API_KEY = 'YOUR_API_KEY'        # Your API key
SECRET_KEY = 'YOUR_SECRET_KEY'  # Application secret key

cnx = Facebook(API_KEY, SECRET_KEY)
# NOTE: the pyfacebook session setup (auth token / login) is omitted here;
# see the get_started page above.

def choose_albums(cnx):
    ret = {}
    cnt = 1
    # list all albums of the logged-in user
    for row in cnx.photos.getAlbums(cnx.uid):
        ret[cnt] = row['name'], row['aid'], row['link']
        print "%d) %s - %s" % (cnt, row['name'], row['link'])
        cnt += 1
    ans = raw_input("Choose albums (separated by ,): ")
    return [ret[int(row)] for row in ans.split(',')]

chosen_albums = choose_albums(cnx)
for album in chosen_albums:
    name, aid, _ = album
    print "Album: ", name
    ddir = "fbgallery/%s" % name
    if not os.path.exists(ddir):
        os.makedirs(ddir)
    for photo in cnx.photos.get(aid=aid):
        url = photo['src_big']
        dest = "%s/%s.jpg" % (ddir, photo['pid'])
        if not os.path.exists(dest):
            print "Getting: ", url
            urllib.urlretrieve(url, dest)
I don’t check my Twitter very often to see when someone replies, and I find it hard to figure out what’s going on when I check a couple of days later, even using a client that shows only the replies (my client of choice lately is the Emacs twittering-mode client).
I have made a script that checks your direct replies and emails them to you. It is meant to be set up via cron on a server that has a mail server configured locally. You can get it from here :
Like a lot of people I run my irssi on a server in a screen. This has
been working great so far, but my only concern is getting notifications
on the desktop when something is happening.
Over time I have found different solutions, with mixed
results for me :
– Use the fnotify script with libnotify-bin and SSH, as mentioned here.
– Set up your irssi (or another client) as an IRC proxy/bouncer and connect with
your desktop client (like xchat) to get notifications.
The fnotify approach is quite hacky on a laptop with an intermittent
connection, and having a cron doing an ssh every minute or so is not
ideal, not to mention passphrase-less SSH keys or having to snoop the
SSH_AGENT variable to connect without a password.
The proxy method is not my thing; I don’t feel like having
xchat open all the time just for this, and I usually forget to
launch it anyway.
My solution is a plugin for irssi that notifies me via XMPP when there
is a direct message addressed to me. I usually have my Pidgin or Gmail
always open, and if I don’t, since it goes to a Gmail account, Gmail
ends up sending me an email about it.
You can find all the information about the installation and configuration here :
Last week I posted an article explaining how to connect to Rackspace Cloud Files via the Rackspace ServiceNET, but I actually got it wrong, as pointed out by my great colleague exlt, so I had to take it down until I figured out how to fix it.
I have now added that feature properly to the PHP and Python APIs in version 1.5.0, adding a ‘servicenet’ argument to the connection, and updated the blog post here :
It should give you all the information on how to use the feature.
I have also released a minor update, 1.5.1, which lets you define the environment variable RACKSPACE_SERVICENET to force the use of the Rackspace ServiceNET. This way you don’t have to modify the tools, and the code stays clean between prod and testing.
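As a sketch of what this looks like in practice (the credentials are placeholders and the connection call is shown commented out, so check the python-cloudfiles README for the exact signature):

```python
import os

# force ServiceNET for every python-cloudfiles connection (1.5.1+),
# without modifying the calling code:
os.environ['RACKSPACE_SERVICENET'] = 'True'

# equivalently, per connection (1.5.0+), assuming cloudfiles is installed:
#   import cloudfiles
#   conn = cloudfiles.get_connection('username', 'api_key', servicenet=True)
```

The environment-variable route is what keeps prod and testing code identical: the same script runs over the public network on your laptop and over ServiceNET on a Rackspace server.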