How to Run an Awesome Code Dojo

Python Code Dojo

Alistair Roche recently tweeted the following question:

@ntoll Hey Nicholas, any chance I could get some advice from you on how to run an awesome coding dojo?

This blog post is my answer.

I helped start the London Python Code Dojo with Jonathan Hartley and Bruce Durling because I was learning Python and wanted to meet people I could learn from. Therein lies the essence of the dojo: a place to learn with/from other people.

The original dojo concept is explained at the website. We started out with every intention of following the instructions. However, we've let our imaginations run and evolved the meetings to suit our group's dynamics and interests. If you run a dojo, you should adapt it too.

Whereas the original dojo format is quite formal and focussed on test-driven development (TDD), we have changed things in three major ways:

  1. expanding the pair-programming "randori" to encompass groups of no more than five participants,
  2. dropping the TDD dogma, and
  3. encouraging noise, debate and creativity.

Working in groups was suggested by Ciarán and makes the dojo a software version of Scrapheap Challenge. Every group is given about an hour and a half to solve the same problem; at the end of the evening each group does a show and tell, which ends up being a public code review cum question-and-answer session.

Although I practise TDD, I personally dislike pushing any "one true" methodology on people. There are times when TDD is the worst thing to do (I'll explain why in another blog post) and I feel it's important to let people discover so-called good practice for themselves by observing it in use rather than being told to adopt it.

It soon became clear that encouraging noise, debate and "creativity" leads to a buzz in the room and lots of energetic intellectual interaction between participants. I once overheard someone (it might have been Tom Viner but I might be wrong) explain that the London Python code dojo is to the original dojo concept as an improvisation in a jazz club is to a string quartet recital.

So here's the London dojo recipe:

  • We limit the number of tickets to around thirty to make the evening easier to manage (it's like herding cats).
  • Start with a social element. For us it's pizza and beer. Not only is it a good way to welcome new members but it is also a great community building mechanism. Several people have found jobs via the connections made in the pre-code social.
  • During the social element we encourage people to write ideas for the evening's problem on a white/black-board or flip chart. Good ideas are usually algorithmic in nature and very specific. Recent examples include:
    • Game solving algorithms: Boggle, Mastermind and Tic-Tac-Toe.
    • Creating a simple game: Hunt the Wumpus clone.
    • Problem solving: Maze navigation.
    • An adventure game where each month we create a new aspect of the game: world representation, parser, keeping game state, puzzles and so on.
    Sometimes we start the evening with a presentation of some sort of problem or Python module and we use that as the basis of the group coding. For example, we all created animated Christmas cards with PyGame.
  • The "organiser" convenes the coding session by calling for a vote on the problems suggested during the social part of the evening. We usually work out the top three ideas then take a second vote to decide a winner.
  • The "organiser" randomly assigns everyone a group.
  • There follows one and a half hours of furious coding with the organiser calling out half-hour intervals.
  • Finally, once the time is up every group does a show and tell where we need to see their code running (or not as the case may be) followed by a code review. For me, this is the best (and funniest) part of the evening.
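To give a flavour of the kind of problem that works well, here's a minimal sketch of a Tic-Tac-Toe win checker of the sort a group might produce during the evening. The function name and board representation are my own illustrative choices, not code from an actual dojo:

```python
# A minimal sketch of a typical dojo problem: checking a noughts-and-crosses
# (Tic-Tac-Toe) board for a winner. The board is a 3x3 list of lists
# containing "X", "O" or None.

def winner(board):
    """Return "X" or "O" if a line is complete, otherwise None."""
    lines = []
    lines.extend(board)                                # the three rows
    lines.extend(zip(*board))                          # the three columns
    lines.append([board[i][i] for i in range(3)])      # the diagonal
    lines.append([board[i][2 - i] for i in range(3)])  # the anti-diagonal
    for line in lines:
        line = list(line)
        if line[0] is not None and line.count(line[0]) == 3:
            return line[0]
    return None

board = [["X", "O", None],
         ["O", "X", None],
         ["O", None, "X"]]
print(winner(board))  # X
```

Small, self-contained puzzles like this leave plenty of room for the groups to argue about representation and approach, which is exactly where the energy of the evening comes from.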

If you want to know more, come to a dojo! Alternatively, you may be interested in the slides from a presentation I gave at Europython in 2011 on running the code dojo:

Last, but definitely not least, many thanks to my co-dojo-cat-herders Tom Viner, Tim Golden, Jonathan Hartley and Bruce Durling.

Baking with Raspberry[Pi/Py]

Raspberry Pi[e]

This weekend is PyconUK and I'll be bringing along an alpha version of the RaspberryPi device. As the RaspberryPi foundation's website explains, they "exist to promote the study of computer science and related topics, especially at school level, and to put the fun back into learning computing". They go on to mention that the device will run "open" software and list Python as being part of their software stack.

Given that RaspberryPi is based in the UK I thought it'd be good if the UK's Python community found out about it with a view to community contributions. As a result, I emailed RaspberryPi to ask if they'd like to send someone to PyconUK. Unfortunately no-one from the foundation was available but they did offer to send us the alpha-board so we could play with it during the sprints and hacking that takes place at these types of conference.

So I have a RaspberryPi device sitting on my desk. I've spent a couple of hours playing to ensure we have something that works which we can hack on/with. What follows is the story of what I've done and what attendees can expect to see at PyconUK.

The device arrived on Monday and my very first impression was that kids find it fascinating. My kids were intrigued when I unpacked it (although I didn't switch it on). Just the look of the board seems to be intrinsically interesting to children (a fact that is often lost on us adults) and they wanted to prod, poke and switch it on. Unfortunately for them, I didn't have the right leads to connect the thing to a monitor. They had to wait. :-(

I took some photos (see below) and decided to try the device out at Monday's NortHACKton meeting since there's quite a number of hardware-hacker types who attend (I'm most definitely a software-hacker type) and they could offer sage advice in the event of a problem.

Raspberry Pi alpha board

Given the correct monitor leads and an excited crowd of geeks we booted the device into the Debian Squeeze flavour of Linux, discovered the framebuffer resolution was set to something really weird (so we had the left and right hand sides of the display chopped off) and realised we didn't have a username and password with which to log in. In any case, when the login prompt appeared there were spontaneous "Ooohs" and "Ahhhs" from the assembled geeks (as if they'd never seen a Linux box boot up).

So my second impression of the RaspberryPi device is that it's a good candidate as the next "shiny" thing to obsess geeks. It's always a good sign if just booting the device up produces a positive response.

After a bit of head scratching, password guessing and general faffing about with the monitor we switched the device off, removed the SD card and mounted it on one of our laptops in order to add a user by hand. Unfortunately, we forgot to examine the sudoers file so although we could log in, we couldn't really do anything "dangerous" (i.e. interesting).

Fast forward to today: now that I've purchased the required leads and other gubbins I've booted the device this afternoon, created a "pyconuk" user, made sure it has a home directory and is in the sudoers list (actually, that just means adding it to the appropriate group).

Remembering my first explorations with Linux back in 1997, I also hesitantly typed startx and waited for things to blow up. Astonishingly it worked and a desktop appeared (at the rather amazing resolution of 1920×1080). It appears that the window manager is OpenBox, and IceWeasel (Firefox) is also installed.

I also decided to check out Python support. It turns out Python 2.6.6 is installed. I've made sure we have pip and virtualenv installed, apt-got a whole bunch of other Python-related packages and Git (to allow us to grab code). I even managed to get PyGame working in a stuttery sort of a way. Sound appears not to work though; I'm not sure why, and my focus this afternoon has been to get a passable Python environment working rather than to investigate hardware problems and configuration issues.

Finally, I noticed this post about a bunch of Ruby guys who've managed to get KidsRuby running on the device. One of my aims for this weekend is to get something similar done for Python. Hey look, I've even got a Github repository set up where we can coordinate.

Can the UK's Python community create an interactive set of programming lessons for Python, written in Python for the RaspberryPi?

I'll keep you posted.

Raspberry image credit: used under a Creative Commons license.

Teach our kids to code (or not)?

Three things have prompted me to write my first blog post in almost a year:

  1. This petition encouraging the government to mandate the teaching of software development to ten year old kids in the UK.
  2. The RaspberryPi project who are creating a cheap (£20) computer to encourage kids to tinker.
  3. Digging out my old BBC Micro for my kids on a recent trip to my parents' house (see the picture below).


All three points are related to the question: how do we encourage children to engage in programming? Before continuing I want to make it unambiguously clear that I believe children should be encouraged to program.

When I first saw the petition on Twitter my first reaction was, "yeah, right on! I'll sign up" and duly signed.

However, my experience as both a professional computer programmer and a former senior teacher in the UK's state sector leads me to believe I made a mistake in signing the petition.

It states,

"Start teaching coding as a part of the curriculum in Yr 5. If it can be introduced as a part of the central curriculum in Year 5, then by the time those kids are drawn up through the education system, there would be far less of a disparity between the sexes - and maybe even an increased number of young people with an ability to manipulate open data, relate to code and challenge each other to design and build the digital products that we have not even begun to imagine. Year 8 is too late, we are losing the female coders and we need this generation to help us code a better country."

All laudable aims.

However, if the UK's Department of Education becomes involved and tells teachers to "start teaching coding as part of the curriculum in Yr 5" then we're doomed. Here's why…

First of all, where do you put such lessons in an already crowded curriculum? Secondly, who should teach "coding"? Thirdly (and most importantly), who decides what to teach?

The first problem is a question of priority. It has been my experience that in the UK priorities are directly linked to school results. For primary schools (who teach Yr 5) it means the SAT results for English, Maths and Science for the year 6 cohort leaving to embark on their secondary education. All through years 3, 4, 5 and 6 the priority seems to be focussed on getting the kids prepared for these tests (in years 1 and 2 the priority is for the SAT tests taken at the end of year 2). I suspect that if "coding" became part of the year 6 school results then schools would pay attention. Otherwise it'd be something the kids did on a Friday afternoon between P.E. and Music after a morning full of English, Numeracy (not Maths) and Science.

The second problem concerns skills and resources. Teaching is the hardest job I've ever had (I was a secondary head of music). Teaching in key stages 1 and 2 (Primary) strikes me as being even harder than teaching in key stages 3 and 4 (Secondary). You have to know how to do so much (which is why good teachers are so rare). Add all the bureaucracy, fads and government directives then you get a well-meaning unholy mess that works sometimes but more often than not fails spectacularly. Can you imagine the reaction from teachers when "coding" is introduced into this pot of educational stew? If you think teachers have lots of holidays in which they could learn how to program then think again: when do you think they do their planning, re-organisation of classrooms and marking of course-work? The only reasonable solution I can see would be for a specialist teacher of programming to take a lesson each week. But then the school would have to pay for them and it becomes a question of what each individual school sees as being a priority (see point one).

The third problem is the most interesting: who decides what to teach? In the UK we have a national curriculum that tells teachers what they should teach, so it's the government who decides. This has the advantage of ensuring all schools teach a worked-out basic curriculum. It also has the disadvantage that all schools teach a worked-out basic curriculum. Given the rapid change in the technology world and the slow rate of change in the educational world, is it possible that a worked-out national curriculum would be any good? I have my doubts. I also worry that making kids have programming lessons is a great way to dissuade the next generation. Perhaps it should be like learning a musical instrument: everyone should have the opportunity to learn, but only if they want to. Programming is not everyone's cup of tea.

This leads me to RaspberryPi – a cheap computer for kids to tinker with. I personally think this is a far superior way to engage kids in programming. To my (musician's) eyes the RaspberryPi device is like giving kids musical instruments. They're fun, kids can play with them and make them their own with nothing more than a chance to experiment without adult intervention. This builds upon what kids do naturally and is exactly what happened when I found my old BBC at the weekend: my kids had a go themselves and were, within minutes, playing around and having fun programming.

Does this require a change in the national curriculum? Of course not.

Finally there is the essential consideration of kids who don't have access to introductory books on programming, a computer or supportive parents.

Perhaps a better petition would call for a RaspberryPi to be given to every child in year 5 along with pre-installed self-paced software that teaches programming.

New Features in FluidDB


I now work for Fluidinfo, the company behind FluidDB. I'm employee number three. Actually, "Guy #3" is an excellent job title and description of what I'm doing at Fluidinfo. (The lack of any posts to this blog is also an indication of how busy we are.)

The fruits of our work are a new version of FluidDB that comes with several enhancements to the API. I want to describe two of them in this blog post: the addition of "/about" and "/values" based paths.


Up until now, the only way to reference an object in a URL was through its ID (a rather hard-to-remember value called a uuid) similar to this:

As the example shows, remembering the URL to get specific tag values is difficult.

The addition of /about based paths allows you to reference an object using its unique fluiddb/about value (percent encoded in the URL). This means URLs suddenly become more meaningful and easy to remember. For example:

as an alternative to the previous example.
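As a rough Python sketch of how such an /about URL might be assembled: the host name, about value ("book:dune") and tag path ("ntoll/rating") below are illustrative assumptions of mine, not values taken from a real FluidDB instance.

```python
from urllib.parse import quote

def about_url(about_value, tag_path):
    # Percent-encode the fluiddb/about value so it is safe in a URL path.
    # The host name is an assumption; check the FluidDB API documentation
    # for the real endpoint.
    base = "http://fluiddb.fluidinfo.com/about"
    return "%s/%s/%s" % (base, quote(about_value, safe=""), tag_path)

# Hypothetical about value and tag path, for illustration only.
url = about_url("book:dune", "ntoll/rating")
print(url)  # http://fluiddb.fluidinfo.com/about/book%3Adune/ntoll/rating
```

The point is simply that a human-chosen about value survives percent-encoding in a recognisable form, whereas a uuid never does.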

Of course, it's possible to use all the usual HTTP methods to manipulate resources in much the same way as the existing /objects based API.

Full details can be found in the FluidDB API documentation.


Until now, if you wanted to get, delete or update a set of values in FluidDB you had to do a request for each tag-value. This was a severe limitation.

For example, to return six fields on a result set of, say, 100 objects resulted in 601 requests to FluidDB (one initial query request to retrieve the result set and another 600 to get all the values). Not only did this make FluidDB work harder, but it meant the client had to wait for all the requests to travel over the network, introducing a painfully large amount of latency.

Happily, the new /values based API turns the example given above into a single request. Here's how:


Those of you familiar with queries specified in the SQL language of conventional relational databases will know that they take the form:

SELECT column_name(s) FROM table_name WHERE column_name operator value

where a concrete example might be:

SELECT firstname, lastname, email FROM users WHERE group_id = 2

that returns a result similar to this:

firstname lastname email
Terry Jones
Esteve Fernandez
Nicholas Tollervey

This demonstrates that to return multiple results you need to select values that belong to a record that matches some sort of constraint.

Here's how to do a similar query using an HTTP GET request to FluidDB's new /values api.

Obviously, since this is an HTTP GET request, we're passing all the important arguments in the URL. Let's break down what each segment means:

  • – indicates we're using the new /values api.
  • ?query=has+fluidinfo%2Femployee – is the constraint used to identify the result set. The constraint is written in FluidDB's über-minimalist query language. Notice how the query has been percent encoded.
  • &tag=fluiddb/about&tag=fluiddb/users/username&tag=fluiddb/users/name&tag=fluidinfo/staffpic – for each object in the result set return the values associated with the following tags:
    • fluiddb/about
    • fluiddb/users/username
    • fluiddb/users/name
    • fluidinfo/staffpic
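As a sketch, the query string above can be assembled with Python's standard library. The parameter values are taken from the example; note that urlencode also percent-encodes the slashes in the tag names, which is an equally valid encoding:

```python
from urllib.parse import urlencode

# A list of pairs (rather than a dict) lets us repeat the "tag" parameter,
# one entry per tag value we want returned for each matching object.
params = [
    ("query", "has fluidinfo/employee"),
    ("tag", "fluiddb/about"),
    ("tag", "fluiddb/users/username"),
    ("tag", "fluiddb/users/name"),
    ("tag", "fluidinfo/staffpic"),
]
query_string = urlencode(params)
print(query_string)
# query=has+fluidinfo%2Femployee&tag=fluiddb%2Fabout&tag=...
```

Appending this to the /values endpoint URL gives the single GET request that replaces the hundreds of per-tag requests described earlier.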

In plain English, we're asking FluidDB to return the about value, username, real name and staff picture attached to all objects that represent employees of Fluidinfo Inc. The (truncated) result is some json like this:


    "results" : 

        {'id': {

        "05eee31e-fbd1-43cc-9500-0469707a9bc3" : {

            "fluiddb/about" : {

                "value" : "Object for the user named terrycojones"


            "fluiddb/users/username" : {

                "value" : "terrycojones"


            "fluiddb/users/name" : {

                "value" : "Terry Jones"


            "fluidinfo/staffpic" : {

                "value-type" : "image/png",

                "size" : 79393



        "8af015f1-dbe3-46d0-855e-5e3c2b4a2ca5" : {

            "fluiddb/about" : {

                "value" : "Object for the user named esteve"


            "fluiddb/users/username" : {

                "value" : "esteve"


            "fluiddb/users/name" : {

                "value" : "esteve"


            "fluidinfo/staffpic" : {

                "value-type" : "image/png",

                "size" : 61325



        "a694f2d0-428e-4aaf-85d1-58e903f56b30" : {

            "fluiddb/about" : {

                "value" : "Object for the user named ntoll"


            "fluiddb/users/username" : {

                "value" : "ntoll"


            "fluiddb/users/name" : {

                "value" : "Nicholas Tollervey"


            "fluidinfo/staffpic" : {

                "value-type" : "image/png",

                "size" : 81673





The actual data is under the "results" key in the json dictionary. We've added this extra level of depth because we might add further keys alongside "results" at a later date. These will be used to indicate other useful information, for example paging or the time taken to retrieve the result.

Each individual result is identified by the object's uuid. Results contain tags that match the values selected in the query. If the tag does not exist on an object it will not appear in the result. Tags that do exist will be represented in one of two ways:

  1. Primitive values (such as strings, numbers, booleans, null and lists of strings) will contain a single "value" entry that gives the actual value.
  2. Opaque values (anything else), will contain two entries:
    • "value-type" – an indication of the MIME type of the data.
    • "size" – an indication (in bytes) of how big the tag value is.

Should you wish to get the "opaque" value of a tag you'll need to use the original /objects based GET request.
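A hypothetical helper (not part of any FluidDB client library) shows how the two cases can be told apart when walking a /values result:

```python
def classify_tag(entry):
    """Classify a single tag entry from a /values result dictionary.

    Returns ("primitive", value) for primitive values, or
    ("opaque", mime_type, size) for opaque ones.
    """
    if "value" in entry:
        return ("primitive", entry["value"])
    return ("opaque", entry["value-type"], entry["size"])

# Entries shaped like those in the example result above.
print(classify_tag({"value": "ntoll"}))
# ('primitive', 'ntoll')
print(classify_tag({"value-type": "image/png", "size": 81673}))
# ('opaque', 'image/png', 81673)
```

The presence or absence of the "value" key is the discriminator, so no extra type information is needed in the response.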


To continue with our SQL to /values example, to update a record in a traditional relational database you use the appropriately named "update" statement:

UPDATE table_name SET column1=value, column2=value2 WHERE some_column=some_value

In other words, you need to provide values for fields that belong to a record that matches some sort of constraint.

To do this in FluidDB use a PUT HTTP request with a query in the URL (just like the GET request described above) and a json dictionary of tags and values to add/update on objects that match.

For example, the URL might be:

This contains exactly the same query as the URL used in GET – in other words, I'm interested in all objects that represent employees of Fluidinfo Inc.

The payload of the request might be a json dictionary like this:


    "ntoll/met" : {

        "value" : true


    "ntoll/work/colleague" : {

        "value" : "Fluidinfo"



Notice how the structure of the dictionary is similar to that of the results returned from a GET request to /values: each tag is associated with a new value to add or update on the matching objects.

It's only possible to update/create tags with "primitive" values (strings, numbers, booleans, null and lists of strings). To update/create tags on an object with "opaque" values then use the original /objects based PUT request.
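As a sketch, the payload can be built and checked client-side against that primitive-value rule before sending. The is_primitive helper is my own illustration, not part of FluidDB or any client library:

```python
import json

def is_primitive(value):
    # Primitive values: strings, numbers, booleans, null, or lists of strings.
    if value is None or isinstance(value, (str, bool, int, float)):
        return True
    return isinstance(value, list) and all(isinstance(i, str) for i in value)

payload = {
    "ntoll/met": {"value": True},
    "ntoll/work/colleague": {"value": "Fluidinfo"},
}

# Refuse to build a /values PUT body containing opaque values.
assert all(is_primitive(tag["value"]) for tag in payload.values())
body = json.dumps(payload)  # the request body for the PUT to /values
```

Anything that fails the check (binary data, nested dictionaries and so on) has to go through the original /objects based PUT instead.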


In SQL you delete a record using the "DELETE" statement:

DELETE FROM table_name WHERE some_column=some_value

This will remove a record that matches the constraint from the referenced table.

Unfortunately (for our SQL to /values example), that's not how it works in FluidDB.

In FluidDB objects are indestructible so they can't be deleted. However, it is possible to delete tags from objects and this works in almost exactly the same way as a GET request (hint: just change the HTTP method from GET to DELETE).

In other words, if you called the following URL with an HTTP DELETE request:

The following tags:

  • fluiddb/about
  • fluiddb/users/username
  • fluiddb/users/name
  • fluidinfo/staffpic

... would be deleted from all objects that match the constraint:

has fluidinfo/employee

You'll know that the request was a success because you'll get a result with the 204 (No Content) code.



I've only explained a couple of the new features we've recently rolled out. There are quite a lot more that have arrived or are in the pipeline:

  • Text indexing is being phased in (but is definitely a work in progress). We're only taking the very first step: the fluiddb/about tag will be indexed with other tags to follow. This will allow users to search string values within FluidDB.
  • MD5 checking of payload data: if you provide an MD5 checksum, FluidDB will validate your data.
  • Cross Origin Resource Sharing (CORS) makes it possible to make cross origin requests from your browser rather than rely on JSONP. FluidDB will have an almost complete implementation of this emerging standard although we expect to make changes and improvements as the specification matures.
  • OAuth support for third party applications will be arriving soon. If you're familiar with the way Twitter works with third party applications you'll know what to expect.
  • Updates and improvements to the /values API will also arrive soon.
  • We're starting to look at providing notifications of events within FluidDB, e.g. a certain tag has been used, a particular user has tagged something or an interesting object has been tagged (probably via webhooks – but it's early days).

Lots of great stuff! I'd better get back to work… ;-)

Stephen Fry Groks Software

I must be on a roll - two blog posts in a matter of hours.

Like many who live in the UK I know Stephen Fry as a "celebrity" with a love for "gadgets". However, this recent interview demonstrates that he also groks software, design and usability.

My favourite section contrasts what he calls functional software with Apple's offerings (starting around 6 minutes in):

"[It's] as if these devices are only function objects, and that's what Apple realise, supremely, and others are now beginning to realise. The point is they're made for human beings and human beings are first and foremost emotional creatures. We are creatures of emotion. Our emotion hits the brain, this is study-able - you can see this on encephalographs and things. Emotions hit us before cognitive thought and that means that if we have an object that's in our pocket all the time [...] and we take it out it's something we have a relationship with: we touch it, we feel it, we look at it - whether or not we want [to], that means we have an emotional engagement. And therefore anybody who produces it merely to say, "press this it does this, press that it does that" doesn't understand what it is to be human.

Apple understand that we are all, as human beings, people who want to cradle, stroke, fondle, smile, get annoyed, treat "as alive" an object. That's not pretentious, it's not what Ruskin would call "the pathetic fallacy" it is the way all humans respond to what is around us. Apple got this by making us smile, by making us delight in the things they offered. And now, fortunately, all the other big players: Google and HTC and even Microsoft (god bless them), Motorola and Palm - they're all understanding that users want (to use that terribly hackneyed word) an "experience". Not just a series of functions that "this provides that" as if it's a cupboard, a filing system and that's it - they want to hug it and technology finally allows that."

I agree with him: good software puts the human user at the heart of what it does.