Preparing custom images for OpenStack

This article will show you how to use libvirt to create base images that can be uploaded to OpenStack.

Why would you want to do this?

Linux distributions like Fedora and Ubuntu already ship “cloud” images, and most providers also have their own custom images for you to use. But I find it much more comforting to have full control of the software that is installed, and I like the ability to easily apply new security patches to base images.

I wouldn’t use images to replace config management (CM) with something like Salt or Ansible, but they are a nice way to provide sane system defaults in things like grub.conf and sysctl.conf, and to ship a Chef or Salt agent so that your CM engine can communicate with your server right away.

Setting up your environment

The first thing you need to do is get a minimal install disk for the Linux distribution you want to use. I prefer using Fedora netinst disks but another popular option is Ubuntu Server.

To get the latest Fedora, choose “netinst” under Direct Downloads on the Fedora download site. To get the latest Ubuntu, grab the Server install image from the Ubuntu download site.

Once you have acquired your distribution of choice, you just need to verify that you have virt-install and virt-viewer installed.

On Fedora:

yum install virt-install virt-viewer

On Ubuntu:

apt-get install virtinst virt-viewer

If you prefer a graphical user interface, you may use virt-manager instead, but I try to keep everything in the CLI; that way it can be repeated easily.

Preparing your disk

Now that you have a base ISO and the tools necessary, let’s get started by creating a disk to install the virtual server into. Resizing an image isn’t an impossible task but it is much easier to choose a reasonably sized disk for the purpose it will be used for.

I primarily use 8 GB disks – that way we can fit all the system components required as well as our own web applications. Any large files should be placed on a SAN or in an object store like DreamHost’s DreamObjects.

The other big decision you must make upfront is what disk format you want to use – the trade-off is disk space vs performance. The two primary formats are qcow2 (QEMU Copy on Write) and Raw. qcow2 is great if you have limited disk space and don’t want to allocate the full 8 GB up front. Raw is preferred if you want the best performance.

If you choose qcow2, you’ll also need to make sure you have the qemu-img tools.

On Fedora:

yum install qemu-img

On Ubuntu:

apt-get install qemu-utils

Create a raw disk:

fallocate -l 8192M server.img

Create a qcow2 disk:

qemu-img create -f qcow2 server.qcow2 8G

Installing your distribution onto the disk

We will use the virt-install command to get the distribution installed onto the disk image.

To install Fedora on a qcow2 disk image:

virt-install --name base_server --ram 1024 --cdrom=./Fedora-20-x86_64-netinst.iso \
--disk path=./server.qcow2,format=qcow2

To install Ubuntu Server on a raw disk image:

virt-install --name base_server --ram 1024 --cdrom=./ubuntu-12.04.4-server-amd64.iso \
--disk path=./server.img,format=raw

You should follow the standard install steps that you normally would when setting up your distribution. But here are some tips for each:


  • Choose a minimal install – by default the installer selects “GNOME”.

  • Be sure to select OpenSSH server – it won’t be installed by default.

  • On Ubuntu 12.04, there is a bug that makes the boot hang after running fsck. You will need to edit grub to get it to boot: hit _e_ at the boot prompt and add “nomodeset” to the linux line. You will know you need to do this if your boot hangs on fsck:

    fsck from util-linux 2.20.1
    /dev/mapper/ubuntu--vg-root: clean, 57106/441504 files, 286779/1764352 blocks
    /dev/sda1: clean, 230/62248 files, 39833/248832 blocks

Preparing the image for OpenStack

To prepare a virtual machine for the cloud, you will need to install the cloud-init package, which allows the cloud providers to inject certain system settings when creating servers based on the image. These are things like hostname and ssh keys.

On Fedora:

yum install cloud-init

On Ubuntu:

apt-get install cloud-init

Then you just need to configure cloud-init by editing /etc/cloud/cloud.cfg and updating the datasource_list section to include Ec2. OpenStack exposes EC2-compatible metadata, which cloud-init consumes.

You should also verify the user setting in this same config and define the user you plan to use; this is where the authorized_keys file will be set up when the cloud provider injects your SSH key into the server.

cloud-init will not create the user for you; it will just assign the SSH keypair and reset the password. So make sure the user defined in cloud.cfg is also created on the system.
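A minimal sketch of the relevant /etc/cloud/cloud.cfg entries might look like this (the user name fedora is just an example – use whatever account you created during the install):

```yaml
# Let cloud-init read the EC2-compatible metadata that OpenStack exposes
datasource_list: [ Ec2, None ]

# The account that receives the injected SSH key (must already exist)
user: fedora
```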

Once you have your cloud-init settings the way you want them, shut down the guest and run the virt-sysprep command.

On the guest machine:

shutdown -h now

On the host machine:

virt-sysprep -d base_server

Uploading your image to OpenStack

Using the glance API it is very straightforward to upload the image to OpenStack. Just run the following command:

glance image-create --name base_server --disk-format=qcow2 \
--container-format=bare --is-public=True --file server.qcow2 --progress

Once the image upload completes you will be able to use it immediately within nova. You can reference it by name or by the id from glance image-list.

To create your first instance from the image:

nova boot --flavor m1.tiny --image base_server --key-name devops \
--security-groups free_for_all test_server

Obviously the security groups, key name, and flavors are based on your installation of OpenStack but can all easily be queried from the nova API:

nova flavor-list
nova secgroup-list
nova keypair-list

And you are done! You’ll be able to re-use your new image as a base for all new instances you spin up in OpenStack!

Writing tests for Pyramid and SQLAlchemy

TL;DR: Putting it all together – the full code for this setup is available online.


Pyramid’s documentation doesn’t cover the preferred way to test with SQLAlchemy, because Pyramid tries to stay out of your way and let you make your own decisions. However, I feel it’s necessary to document what I think is the best way to test.

When I first started writing tests with SQLAlchemy I found plenty of examples of how to get started by doing something like this:

from db import session  # probably a contextbound sessionmaker
from db import model

from sqlalchemy import create_engine

def setup():
    engine = create_engine('sqlite:///test.db')
    session.configure(bind=engine)
    model.metadata.create_all(engine)

def teardown():
    model.metadata.drop_all(session.bind)

def test_something():
    # every test pays the create/drop cost above
    pass
I have seen this done so many times, but I feel there is so much wrong with it! So let’s establish some base rules when testing:

  • Always test your system like it would be used in production. SQLite does not enforce the same rules or have the same features as Postgres or MySQL and will allow tests to pass that would otherwise fail in production.
  • Tests should be fast! You should be writing tests for all your code. Speed is the main reason people test against SQLite, but we can’t violate rule number one. We have to make sure tests against Postgres are fast, so we shouldn’t be tearing down and recreating tables for every single test.
  • You should be able to execute tests in parallel to speed things up when you have thousands of tests. Dropping and creating tables per test would not work in a parallel environment.

As an example, I have a project with 600+ tests that took two and a half minutes to execute against SQLite. But when we swapped our test configuration to execute against Postgres, testing took well over an hour. That is unacceptable!

But running them in parallel gives us a huge speed up. Check out the results of the tests running in single-proc mode vs using all 4 cores:

$ py.test
======= 616 passed in 143.67 seconds =======

$ py.test -n4
======= 616 passed in 68.12 seconds =======

The right way

So what is the proper way to setup your tests? You should initialize the database when you start your test runner and then use transactions to rollback any data changes your tests made. This allows you to keep a clean database for each test in a very efficient way.

In py.test, you just have to create a file called conftest.py that looks similar to this:

import os

ROOT_PATH = os.path.dirname(__file__)

def pytest_sessionstart():
    from py.test import config

    # Only run database setup on master (in case of xdist/multiproc mode)
    if not hasattr(config, 'slaveinput'):
        from models import initialize_sql
        from pyramid.config import Configurator
        from paste.deploy.loadwsgi import appconfig
        from sqlalchemy import engine_from_config

        settings = appconfig('config:' + os.path.join(ROOT_PATH, 'test.ini'))
        engine = engine_from_config(settings, prefix='sqlalchemy.')

        print 'Creating the tables on the test database %s' % engine

        config = Configurator(settings=settings)
        initialize_sql(settings, config)

With py.test, when you are running in parallel mode, the pytest_sessionstart hook fires on each node, so we check that we are on the master node. Then we just grab our test.ini configuration file and execute the initialize_sql function.

Now that you have your initial test configuration finished, you have to define a base test class that does the transaction management in setUp and tearDown.

First, let’s set up the base testing class that will manage our transactions:

import unittest
from pyramid import testing
from paste.deploy.loadwsgi import appconfig

from webtest import TestApp
from mock import Mock

from sqlalchemy import engine_from_config
from sqlalchemy.orm import sessionmaker
from app.db import Session
from app.db import Entity  # base declarative object
from app import main
import os
here = os.path.dirname(__file__)
settings = appconfig('config:' + os.path.join(here, '../../', 'test.ini'))

class BaseTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.engine = engine_from_config(settings, prefix='sqlalchemy.')
        cls.Session = sessionmaker()

    def setUp(self):
        connection = self.engine.connect()

        # begin a non-ORM transaction
        self.trans = connection.begin()

        # bind an individual Session to the connection
        self.session = self.Session(bind=connection)
        Entity.session = self.session

    def tearDown(self):
        # rollback - everything that happened with the
        # Session above (including calls to commit())
        # is rolled back.
        testing.tearDown()
        self.trans.rollback()
        self.session.close()

This base test case will wrap all your sessions in an external transaction so that you still have the ability to call flush/commit/etc and it will still be able to rollback any data changes you make.
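The per-test lifecycle can be illustrated with a minimal, self-contained sketch (using the stdlib sqlite3 module purely so the example runs anywhere – the point is the transaction lifecycle, not the database):

```python
import sqlite3

# one-time setup, analogous to pytest_sessionstart
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT)')
conn.commit()

# setUp/test: the "test" makes changes inside an open transaction
conn.execute("INSERT INTO users (name) VALUES ('sontek')")
count_during = conn.execute('SELECT COUNT(*) FROM users').fetchone()[0]

# tearDown: roll everything back, leaving a clean database for the next test
conn.rollback()
count_after = conn.execute('SELECT COUNT(*) FROM users').fetchone()[0]

print(count_during, count_after)  # → 1 0
```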

Unit Tests

Now there are a few different types of tests you will want to run. First, you will want to write unit tests, which are small tests that only test one thing at a time. This means you will skip the routes, templates, etc. So let’s set up our unit test base class:

class UnitTestBase(BaseTestCase):
    def setUp(self):
        self.config = testing.setUp(request=testing.DummyRequest())
        super(UnitTestBase, self).setUp()

    def get_csrf_request(self, post=None):
        csrf = 'abc'

        if post is None:
            post = {}

        if u'csrf_token' not in post:
            post.update({
                'csrf_token': csrf
            })

        request = testing.DummyRequest(post)

        request.session = Mock()
        csrf_token = Mock()
        csrf_token.return_value = csrf

        request.session.get_csrf_token = csrf_token

        return request

We built in a utility function to help us test requests that require a csrf token as well. Here is how we would use this class:

class TestViews(UnitTestBase):
    def test_login_fails_empty(self):
        """ Make sure we can't login with empty credentials"""
        from app.accounts.views import LoginView
        self.config.add_route('index', '/')
        self.config.add_route('dashboard', '/')

        request = testing.DummyRequest(post={
            'submit': True,
        })

        view = LoginView(request)
        response = view.post()
        errors = response['errors']

        assert errors[0].node.name == u'csrf_token'
        assert errors[0].msg == u'Required'
        assert errors[1].node.name == u'Username'
        assert errors[1].msg == u'Required'
        assert errors[2].node.name == u'Password'
        assert errors[2].msg == u'Required'

    def test_login_succeeds(self):
        """ Make sure we can login """
        from app.db import User  # adjust to wherever your User model lives
        admin = User(username='sontek', password='temp', kind=u'admin')
        admin.activated = True
        self.session.add(admin)
        self.session.flush()

        from app.accounts.views import LoginView
        self.config.add_route('index', '/')
        self.config.add_route('dashboard', '/dashboard')

        request = self.get_csrf_request(post={
            'submit': True,
            'Username': 'sontek',
            'Password': 'temp',
        })

        view = LoginView(request)
        response = view.post()

        assert response.status_int == 302

Integration Tests

The second type of test you will want to write is an integration test. This will integrate with the whole web framework, actually hit the defined routes, render the templates, and test the full stack of your application.

Luckily this is pretty easy to do with Pyramid using WebTest:

class IntegrationTestBase(BaseTestCase):
    @classmethod
    def setUpClass(cls):
        cls.app = main({}, **settings)
        super(IntegrationTestBase, cls).setUpClass()

    def setUp(self):
        self.app = TestApp(self.app)
        self.config = testing.setUp()
        super(IntegrationTestBase, self).setUp()

In setUpClass, we run the main function of the application, which sets up the WSGI app, and then we wrap it in a TestApp that gives us the ability to call get/post on it.

Here is an example of it in use:

class TestViews(IntegrationTestBase):
    def test_get_login(self):
        """ Call the login view, make sure routes are working """
        res = self.app.get('/login')
        self.assertEqual(res.status_int, 200)

    def test_empty_login(self):
        """ Empty login fails """
        res = self.app.post('/login', {'submit': True})

        assert "There was a problem with your submission" in res.body
        assert "Required" in res.body
        assert res.status_int == 200

    def test_valid_login(self):
        """ Call the login view, make sure routes are working """
        from app.db import User  # adjust to wherever your User model lives
        admin = User(username='sontek', password='temp', kind=u'admin')
        admin.activated = True
        self.session.add(admin)
        self.session.flush()

        res = self.app.get('/login')

        csrf = res.form.fields['csrf_token'][0].value

        res = self.app.post('/login', {
            'submit': True,
            'Username': 'sontek',
            'Password': 'temp',
            'csrf_token': csrf
        })

        assert res.status_int == 302

Problems with this approach

If a test causes an error that will prevent the transaction from rolling back, such as closing the engine, then this approach will leave your database in a state that might cause other tests to fail.

If this happens, tracing the root cause could be difficult, but you should be able to just look at the first failed test – unless you are running the tests in parallel.

If you are good about writing and running your tests regularly you should be able to catch individual tests causing issues like this fairly quickly.

Turning Vim into a modern Python IDE


$ git clone
$ cd dotfiles
$ ./ vim


Back in 2008, I wrote the article Python with a modular IDE (Vim). Years later, I have people e-mailing me and commenting daily asking for more information, even though most of the information in it is outdated. Here is the modern way to work with Python and Vim to achieve the perfect environment.

Because one of the most important parts about a development environment is the ability to easily reproduce across machines, we are going to store our vim configuration in git:

$ mkdir ~/.vim/
$ mkdir ~/.vim/{autoload,bundle}
$ cd ~/.vim/
$ git init

The purpose of the autoload directory is to automatically load the vim plugin Pathogen, which we’ll then use to load all other plugins that are located in the bundle directory. So download pathogen and put it in your autoload folder.

You’ll need to add the following to your ~/.vimrc so that pathogen will be loaded properly. Filetype detection must be off when you run the commands, so it’s best to execute them first:

filetype off
call pathogen#runtime_append_all_bundles()
call pathogen#helptags()

Now let’s add all of the vim plugins we plan on using as submodules to our git repository:

git submodule add bundle/fugitive
git submodule add bundle/snipmate
git submodule add bundle/surround
git submodule add bundle/git
git submodule add bundle/supertab
git submodule add bundle/minibufexpl
git submodule add bundle/command-t
git submodule add
git submodule add bundle/ack
git submodule add bundle/gundo
git submodule add bundle/pydoc
git submodule add bundle/pep8
git submodule add bundle/py.test
git submodule add bundle/makegreen
git submodule add bundle/tasklist
git submodule add bundle/nerdtree
git submodule add bundle/ropevim
git submodule init
git submodule update
git submodule foreach git submodule init
git submodule foreach git submodule update

That’s it! Now we’ve got our vim configuration in git.

Now let’s look at how to use each of these plugins to improve the power of vim.

Basic Editing and Debugging

Code Folding

Let’s first enable code folding. This makes it a lot easier to organize your code and hide the portions that you aren’t interested in working on. This is quite easy for Python, since its whitespace is significant.

In your ~/.vimrc just add:

set foldmethod=indent
set foldlevel=99

Then you will be able to be inside a method and type ‘za’ to open and close a fold.

Window Splits

Sometimes code folding isn’t enough; you may need to start opening up multiple windows and working on multiple files at once or different locations within the same file. To do this in vim, you can use these shortcuts:

Vertical Split: Ctrl+w + v
Horizontal Split: Ctrl+w + s
Close current window: Ctrl+w + q

I also like to bind Ctrl+<movement> keys to move around the windows, instead of using Ctrl+w + <movement>:

map <c-j> <c-w>j
map <c-k> <c-w>k
map <c-l> <c-w>l
map <c-h> <c-w>h


Snippets

The next tweak that really speeds up development is using snipmate. We’ve already included it in our bundle/ folder, so it’s already enabled. Try opening up a python file and typing ‘def<tab>’. It should stub out a method definition for you and allow you to tab through and fill out the arguments, doc string, etc.

I also like to create my own snippets folder to put in some custom snippets:

$ mkdir ~/.vim/snippets
$ vim ~/.vim/snippets/python.snippets

Put this in the file:

snippet pdb
    import pdb; pdb.set_trace()

Now you can type pdb<tab> and it’ll insert your breakpoint!

Task lists

Another really useful thing is to mark some of your code as TODO or FIXME! I know we all like to think we write perfect code, but sometimes you just have to settle and leave a note for yourself to come back later. One of the plugins we included was the tasklist plugin that will allow us to search all open buffers for things to fix. Just add a mapping to open it in ~/.vimrc:

map <leader>td <Plug>TaskList

Now you can hit <leader>td to open your task list and hit ‘q’ to close it. You can also hit enter on the task to jump to the buffer and line that it is placed on.

Revision History

The final basic editing tweak I suggest everyone start utilizing is the Gundo plugin. It’ll allow you to view diffs of every save you’ve made on a file and to quickly revert back and forth.

Just bind a key in your .vimrc to toggle the Gundo window:

map <leader>g :GundoToggle<CR>

Syntax Highlighting and Validation

Simply enable syntax highlighting in your ~/.vimrc:

syntax on                 " syntax highlighting
filetype on               " try to detect filetypes
filetype plugin indent on " enable loading indent file for filetype

Because we enabled pyflakes when we added it as a submodule in ~/.vim/bundle, it will notify you about unused imports and invalid syntax. It will save you a lot of time saving and running just to find out you missed a colon. I like to tell it not to use the quickfix window:

let g:pyflakes_use_quickfix = 0


Pep8

The final plugin that really helps validate your code is the pep8 plugin; it’ll make sure your code is consistent across all projects. Add a key mapping to your ~/.vimrc and then you’ll be able to jump to each of the pep8 violations in the quickfix window:

let g:pep8_map='<leader>8'

Tab Completion and Documentation

Vim has many different code completion options. We are going to use the SuperTab plugin to check the context of the code you are working on and choose the best for the situation. We’ve already enabled the SuperTab plugin in the bundle/ folder, so we just have to configure it to be context sensitive and to enable omni code completion in your ~/.vimrc:

au FileType python set omnifunc=pythoncomplete#Complete
let g:SuperTabDefaultCompletionType = "context"

Now we just enable the menu and pydoc preview to get the most useful information out of the code completion:

set completeopt=menuone,longest,preview

We also enabled the pydoc plugin at the beginning with all the submodules; that gives us the ability to hit <leader>pw when our cursor is on a module and have a new window open with the whole documentation page for it.

Code Navigation


Buffers

The most important part about navigating code within vim is to completely understand how to use buffers. There is no reason to use tabs. Open files with :e <filename> to place them in a buffer. We already installed the minibufexpl plugin, so you will visually see every buffer opened. You can also get a list of the buffers with :buffers.

You can switch between the buffers using :b<number>, such as :b1 for the first buffer. You can also use the buffer’s name to match, so you can type :b mod<tab> to autocomplete opening the buffer. You need to make sure you are using the minibufexpl from my github, since it has patches that make it much better to work with.

To close a buffer you use :bd or :bw.

File Browser

NERD Tree is a project file browser. I must admit I used this heavily back when I was migrating from Visual Studio and was used to the Solution Explorer, but I rarely use it anymore; Command-T is usually all you’ll need. It is useful when you are getting to know a new codebase for the first time, though. Let’s bind a shortcut key for opening it:

map <leader>n :NERDTreeToggle<CR>

Refactoring and Go to definition

Ropevim is also a great tool that will allow you to navigate around your code. It supports automatically inserting import statements, goto definition, refactoring, and code completion. You’ll really want to read up on everything it does, but the two big things I use it for are to jump to function or class definitions quickly and to rename things (including all their references).

For instance, if you are using django and you place your cursor over the models.Model class you reference and then call :RopeGotoDefinition, it will jump you straight to that class definition in the django library. We already have it installed in our bundles, so we just bind it to a key to use it:

map <leader>j :RopeGotoDefinition<CR>
map <leader>r :RopeRename<CR>


Searching

The final tool that really speeds up navigating your code is the Ack plugin. Ack is similar to grep, but much better in my opinion. You can fuzzy text search for anything in your code (variable name, class, method, etc.) and it’ll give you a list of files and line numbers where they appear, so you can quickly cycle through them. Just bind the searching to a key:

nmap <leader>a <Esc>:Ack!

We use ! at the end of it so it doesn’t open the first result automatically.

Integration with Git

We installed two plugins, git.vim and fugitive, that give us all the integration we need. git.vim provides syntax highlighting for git configuration files; fugitive provides a great interface for interacting with git, including getting diffs, status updates, committing, and moving files.

Fugitive also allows you to view what branch you are working in directly from vim. Add this to your statusline in ~/.vimrc:

set statusline=%{fugitive#statusline()}


The big commands you need to know:

  • Gblame: This allows you to view line-by-line annotations of who last touched each line of code.
  • Gwrite: This will stage your file for commit, basically doing git add <filename>
  • Gread: This will basically run a git checkout <filename>
  • Gcommit: This will just run git commit. Since it’s in a vim buffer, you can use keyword completion (Ctrl-N), like test_all<Ctrl-N>, to find the method name in your buffer and complete it for the commit message. You can also use + and - on the filenames in the message to stage/unstage them for the commit.

Test Integration

django nose

Test runner integration really depends on the testing library you are using and what type of tests you are running, but we included a great generic plugin called MakeGreen that executes off of vim’s makeprg variable. For instance, if you are using django with django-nose, you could define a shortcut key in your ~/.vimrc like this:

map <leader>dt :set makeprg=python\ manage.py\ test\|:call MakeGreen()<CR>

This will just give you a green bar at the bottom of vim if your tests pass, or a red bar with the message of the failing test if they don’t. Very simple.


py.test

I also included the py.test vim plugin for those who prefer it. This plugin has a lot more functionality, including executing individual tests by class, file, or method. You can also cycle through the individual assertion errors. I have the following bindings:

" Execute the tests
nmap <silent><Leader>tf <Esc>:Pytest file<CR>
nmap <silent><Leader>tc <Esc>:Pytest class<CR>
nmap <silent><Leader>tm <Esc>:Pytest method<CR>
" cycle through test errors
nmap <silent><Leader>tn <Esc>:Pytest next<CR>
nmap <silent><Leader>tp <Esc>:Pytest previous<CR>
nmap <silent><Leader>te <Esc>:Pytest error<CR>


Virtualenv

Vim doesn’t realize that you are in a virtualenv, so it won’t give you code completion for libraries installed only there. Add the following script to your ~/.vimrc to fix it:

" Add the virtualenv's site-packages to vim path
py << EOF
import os.path
import sys
import vim
if 'VIRTUAL_ENV' in os.environ:
    project_base_dir = os.environ['VIRTUAL_ENV']
    sys.path.insert(0, project_base_dir)
    activate_this = os.path.join(project_base_dir, 'bin/activate_this.py')
    execfile(activate_this, dict(__file__=activate_this))
EOF


Django

The only true django tweak I make is that before I open vim I’ll export the DJANGO_SETTINGS_MODULE environment variable, so that I get code completion for django modules as well:

export DJANGO_SETTINGS_MODULE=project.settings

Random Tips

If you want to find a new color scheme, there are sites that let you preview a large selection of them.

© John Anderson 2011

Tips and Tricks for the Python Interpreter

I have seen a lot of people switch over to using ipython, bpython, etc. to get auto-complete support, without realizing that the standard interpreter also has this functionality.

To enable auto-complete support in the python interpreter you need to create a python startup file that enables readline support. A python startup file is just python code that gets executed at startup of the interpreter. To do this, point the PYTHONSTARTUP environment variable in your ~/.bashrc at a startup file, and then create that file:
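For example (the path ~/.pythonrc.py here is just a hypothetical name – use whatever file you like):

```shell
# In ~/.bashrc: point the interpreter at your startup file.
# The filename below is only an example; any path works.
export PYTHONSTARTUP="$HOME/.pythonrc.py"
```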

try:
    import readline
except ImportError:
    print("Module readline not available.")
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")

Now when you are in python you have tab completion on imports, methods of a module, etc.

>>> import o
object(  oct(     open(    or       ord(     os

I always end up using the pretty print module for viewing long lists and strings in the interpreter, so I prefer to just use it by default:

# Enable Pretty Printing for stdout
import sys
import pprint

def my_displayhook(value):
    if value is not None:
        try:
            import __builtin__
            __builtin__._ = value
        except ImportError:
            __builtins__._ = value

        pprint.pprint(value)

sys.displayhook = my_displayhook

It is also very useful to be able to load up your favorite editor to edit lines of code from the interpreter; you can do this by adding the following to the same startup file:

import os
import sys
from code import InteractiveConsole
from tempfile import mkstemp

EDITOR = os.environ.get('EDITOR', 'vi')
EDIT_CMD = '\e'

class EditableBufferInteractiveConsole(InteractiveConsole):
    def __init__(self, *args, **kwargs):
        self.last_buffer = [] # This holds the last executed statement
        InteractiveConsole.__init__(self, *args, **kwargs)

    def runsource(self, source, *args):
        self.last_buffer = [ source.encode('latin-1') ]
        return InteractiveConsole.runsource(self, source, *args)

    def raw_input(self, *args):
        line = InteractiveConsole.raw_input(self, *args)
        if line == EDIT_CMD:
            fd, tmpfl = mkstemp('.py')
            os.write(fd, b'\n'.join(self.last_buffer))
            os.close(fd)
            os.system('%s %s' % (EDITOR, tmpfl))
            line = open(tmpfl).read()
            os.unlink(tmpfl)
            lines = line.split( '\n' )
            for i in range(len(lines) - 1): self.push( lines[i] )
            line = lines[-1]
        return line

c = EditableBufferInteractiveConsole(locals=locals())
c.interact(banner='')

# Exit the Python shell on exiting the InteractiveConsole
sys.exit()
For Django developers, when you load up the ./manage.py shell it is nice to have access to all your models and settings for testing:

# If we're working with a Django project, set up the environment
if 'DJANGO_SETTINGS_MODULE' in os.environ:
    from django.db.models.loading import get_models
    from django.test.client import Client
    from django.test.utils import setup_test_environment, teardown_test_environment
    from django.conf import settings as S

    class DjangoModels(object):
        """Loop through all the models in INSTALLED_APPS and import them."""
        def __init__(self):
            for m in get_models():
                setattr(self, m.__name__, m)

    A = DjangoModels()
    C = Client()

After these tweaks the python interpreter is a lot more powerful, and you really lose the need for the more interactive shells like ipython and bpython. All of these settings work in both python2 and python3.

If you want to see my complete startup file, you can get it on github.

Convert a string to an integer in Python

A fun interview question some developers like to ask is to have you convert ASCII characters to an integer without using built-in methods like string.atoi or int().

So using python, the obvious ways to convert a string to an integer are these:

>>> int('1234')
1234
>>> import string
>>> string.atoi('1234')
1234

The interesting thing here is finding out where on the ASCII character table the number is. Luckily python already has this built in with the ord method:

>>> help(ord)

    ord(c) -> integer

    Return the integer ordinal of a one-character string.

>>> ord('1')
49
>>> ord('2')
50

You can see that the numbers are grouped together on the ASCII table, so you just have to grab ‘0’ as the base and subtract the rest:

>>> ord('1')-ord('0')
1

So if we have the string ‘1234’, we can get each of the individual numbers by looping over it:

>>> num_string = '1234'
>>> num_list = []
>>> base = ord('0')
>>> for num in num_string:
...   num_list.append(ord(num) - base)
...
>>> print num_list
[1, 2, 3, 4]

But now how do we combine all of these together to get 1234? You can’t just add them up, because you’ll just get 1+2+3+4 = 10.

So we have to compute 1000 + 200 + 30 + 4, which is a simple problem to solve. It’s just each digit times 10 to the nth power, so the final solution is:

num = '1234'
new_num = 0
base = ord('0')

for i,n in enumerate(reversed(num)):
      new_num += (ord(n) - base) * (10**i)

print new_num

This code is a little verbose though, so let’s make it a dirty, nasty one-liner!

>>> sum([(ord(n)-ord('0')) * (10 ** i) for i,n in enumerate(reversed('1234'))])
1234
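A variant not in the original post, just for comparison: the same fold can be written with Horner’s method, which avoids computing powers of ten entirely (the helper name atoi is only for illustration):

```python
from functools import reduce  # a builtin in Python 2, in functools on Python 3

def atoi(s):
    # Horner's method: multiply the accumulator by 10, then add each digit
    base = ord('0')
    return reduce(lambda acc, ch: acc * 10 + (ord(ch) - base), s, 0)

print(atoi('1234'))  # → 1234
```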

Caesar Cipher in Python

I’m currently teaching my wife to code, and one of the problems we worked on to teach her some fundamental programming concepts was re-implementing the caesar cipher in python. It was fun not only to code, but also to start sending each other “secret” messages!

The caesar cipher is a rather simple encoding: you just shift the alphabet a certain number of characters. For example, if you are using a shift of 2:

a => c
b => d
y => a
z => b

Using this as an interview-type question raises a few interesting problems and gives you a good perspective on a developer’s problem-solving skills and how knowledgeable they are in the language of their choice.

The first issue is handling the beginning and end of the alphabet: if you are encoding ‘z’, you have to wrap your shift back around to ‘a’. The second problem is to only encode letters, since the Romans had no ASCII table defining an order in which other characters would shift.
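As an aside (this is not the approach the article builds below), the wrap-around can also be handled with modular arithmetic, which removes the explicit boundary checks:

```python
def shift_letter(ch, shift):
    # Wrap within each case using mod 26; everything else passes through.
    if 'a' <= ch <= 'z':
        return chr((ord(ch) - ord('a') + shift) % 26 + ord('a'))
    if 'A' <= ch <= 'Z':
        return chr((ord(ch) - ord('A') + shift) % 26 + ord('A'))
    return ch

print(''.join(shift_letter(c, 2) for c in 'xyz!'))  # → zab!
```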

Without using too much of the built in python niceties you could do something similar to this:

def decode_shift_letter(current_ord, start, end, shift):
    if current_ord - shift < start:
        new_ord = (current_ord + 26) - shift
        return chr(new_ord)
    else:
        return chr(current_ord - shift)

def encode_shift_letter(current_ord, start, end, shift):
    if current_ord + shift > end:
        new_ord = (current_ord - 26) + shift
        return chr(new_ord)
    else:
        return chr(current_ord + shift)

def decode(input, shift):
    return modify_input(input, shift, decode_shift_letter)

def encode(input, shift):
    return modify_input(input, shift, encode_shift_letter)

def modify_input(input, shift, shift_letter):
    new_sentence = ''

    for letter in input:
        # we only encode letters, random characters like +!%$ are not encoded.
        # Lower and Capital letters are not stored near each other on the
        # ascii table
        lower_start = ord('a')
        lower_end = ord('z')
        upper_start = ord('A')
        upper_end = ord('Z')
        current_ord = ord(letter)

        if current_ord >= lower_start and current_ord <= lower_end:
            new_sentence += shift_letter(current_ord, lower_start, lower_end, shift)
        elif current_ord >= upper_start and current_ord <= upper_end:
            new_sentence += shift_letter(current_ord, upper_start, upper_end, shift)
        else:
            new_sentence += letter

    return new_sentence

def get_shift():
    try:
        shift = int(raw_input('What shift would you like to use?\n'))
    except ValueError:
        print 'Shift must be a number'
        shift = get_shift()

    if not (shift > 0 and shift <= 25):
        print 'Shift must be between 1 and 25'
        shift = get_shift()

    return shift

def main():
    try:
        task = int(raw_input('1) Encode \n'+ \
                             '2) Decode \n'))
    except ValueError:
        print 'Invalid task, try again!'
        return main()

    shift = get_shift()
    input = raw_input('What message would you like to %s\n' % ('Encode' if task == 1 else 'Decode'))

    if task == 1:
        print encode(input, shift)
    elif task == 2:
        print decode(input, shift)

if __name__ == '__main__':
    main()
This would prove that you are a decent problem solver and know enough of the language to get things done. But if you want to prove you have mastered python, you might take advantage of slicing and some methods from the string module, and change your code to look something like this:

from string import letters, maketrans

def decode(input, shift):
    return modify_input(input, -shift)

def encode(input, shift):
    return modify_input(input, shift)

def modify_input(input, shift):
    trans = maketrans(letters, letters[shift:] + letters[:shift])
    return input.translate(trans)
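Note that string.letters and string.maketrans are Python 2-only. A rough Python 3 equivalent (not from the original post) shifts each case separately with str.maketrans, which also keeps a lowercase ‘z’ from wrapping into the uppercase range:

```python
from string import ascii_lowercase as lower, ascii_uppercase as upper

def encode(text, shift):
    # One translation table that rotates each case independently
    shifted = (lower[shift:] + lower[:shift] +
               upper[shift:] + upper[:shift])
    return text.translate(str.maketrans(lower + upper, shifted))

def decode(text, shift):
    # Decoding is just encoding with the opposite shift
    return encode(text, -shift)

print(encode('Hello, z!', 2))  # → Jgnnq, b!
```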

To get more information on string.letters and string.maketrans, you can visit the python documentation for the string module.