FluidDB 101

On Monday Terry and Esteve "switched on" the alpha version of their FluidDB offering. Congratulations guys!

I've written about FluidDB before – I find the philosophy behind the project so interesting. Now that I (and you) can get to play with it I'm recording some quick take-aways for the sake of my memory and hopefully your education (I'm assuming you're au fait with FluidDB). ;-)

Currently, FluidDB is in read-only mode: none of the pillaging horde that is the alpha testers has yet been given their account credentials, so no "writes" are possible. According to Terry, alpha testers will get their account details by the end of the week.

Nevertheless, we can still "read" what little is in there to get a feel of what interacting with FluidDB is like:

Example Interactions

FluidDB currently communicates with the outside world via an HTTP-based API. You could use tools like wget or curl to interact with FluidDB, but simple libraries are already being written (kudos to Sanghyeon Seo for the quick work – my fork includes fixes for Python < 2.6). I'll use my fork in the following Python doctest-like examples:

Return a list of objects that have the tag "username" from the "fluiddb/users" namespace:




>>> import fluiddb

>>> fluiddb.call('GET', '/objects', query='has fluiddb/users/username')

(200, {'ids': ['8b57277a-09f6-485d-9108-761b7848c913', ...SNIP... '62fe2cca-7ad2-4dd8-9b0a-b3909c0709e8']})



Return an object where the tag "username" from the "fluiddb/users" namespace has the value "ntoll":




>>> fluiddb.call('GET', '/objects', query='fluiddb/users/username = "ntoll"')

(200, {'ids': ['5873e7cc-2a4a-44f7-a00e-7cebf92a7332']})



Find out about a specific object:




>>> fluiddb.call('GET', '/objects/5873e7cc-2a4a-44f7-a00e-7cebf92a7332', {"showAbout": True})

(200, {'about': 'Object for the user named ntoll', 'tagPaths': ['fluiddb/about', 'fluiddb/users/name', 'fluiddb/users/username']})



Get the value of the tag "fluiddb/users/name" from the object with the uuid "5873e7cc-2a4a-44f7-a00e-7cebf92a7332":




>>> fluiddb.call('GET', '/objects/5873e7cc-2a4a-44f7-a00e-7cebf92a7332/fluiddb/users/name')

(200, 'ntoll')



Get the result as JSON (GET and PUT responses default to a raw HTTP payload):




>>> fluiddb.call('GET', '/objects/5873e7cc-2a4a-44f7-a00e-7cebf92a7332/fluiddb/users/name', body=None, format='json')

(200, {'value': 'ntoll'})



So far so good…

Until we get our API keys that allow us to "write" stuff in our namespaces, this is all we've got to play with. I suspect that we might be able to create new objects (I've not tested this yet) as these don't "belong" to anyone – remember, it's the namespaces/tags and the associated values that make FluidDB so interesting.
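
If that hunch is right, creating a new object would presumably just be a POST to /objects. Here's a completely untested sketch – the payload shape and the response are my guesses rather than anything confirmed by the docs:

>>> import fluiddb
>>> # untested guess: hoping for something like (201, {'id': 'some-new-uuid', ...}) back
>>> fluiddb.call('POST', '/objects', body={'about': 'http://ntoll.org/'})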

Finally, you might be asking where one might find out more… the most useful pages I've found are:

You might also want to join the #fluiddb channel on Freenode IRC or subscribe to the two Google groups.

I'll update with more as soon as I've worked it out / been given access… :-)

Augmented Reality - A Developer's Perspective

I've been amazed at the various reactions to my last post. Since then I've been collecting my thoughts about Augmented Reality (AR). What follows is a first attempt to distil them into a coherent commentary on AR and its potential.

What is Augmented Reality?

Augmented Reality is a means of superimposing digital assets upon the "real" world as seen through an appropriately configured device.

For example, in my last post I described how I put the locations of "real world" geocaches as digital representations into the Wikitude AR viewer provided by Mobilizy.

Alternatively, an AR enabled device might recognise the face of a new acquaintance, retrieve their contact details and display them or import them into your address book. Something similar to this is demonstrated in the following video (from the Swedish company TAT):

In both cases, within an augmented reality, digital assets have two essential qualities:

  1. A location in the real world – identified by longitude/latitude/altitude (as with the geocaches) or by some other means (such as facial recognition).
  2. A context to give them meaning – provided by the digital asset's representation in the augmented world.

How does it work?

The basic recipe for Wikitude is simply…

  • An AR enabled device (like my Android based mobile phone) has GPS capabilities, a compass and accelerometers that enable it to work out where I am, where I'm pointing and how I'm moving / holding the device in the real world.
  • Given this information it is possible to work out which digital assets are close by, whether I'm facing in their direction and whether the device is being held in such a way that it is looking at them (see the sketch after this list).
  • Finally, by capturing the output of the device's camera it is possible to add such digital assets to the image displayed on the screen of the device.
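
To make that second step concrete, here's a minimal Python sketch of the underlying trigonometry (my own illustration, not Wikitude's code): is a given asset within range, and within the camera's horizontal field of view?

from math import radians, degrees, sin, cos, atan2, asin, sqrt

EARTH_RADIUS_M = 6371000

def distance_m(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance between two points, in metres.
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial compass bearing from point 1 to point 2, in degrees.
    dlon = radians(lon2 - lon1)
    y = sin(dlon) * cos(radians(lat2))
    x = cos(radians(lat1)) * sin(radians(lat2)) - sin(radians(lat1)) * cos(radians(lat2)) * cos(dlon)
    return (degrees(atan2(y, x)) + 360) % 360

def in_view(my_lat, my_lon, my_heading, poi_lat, poi_lon, max_range_m=1000, fov_deg=60):
    # Within range, and within half the field of view either side of
    # where the compass says the camera is pointing?
    if distance_m(my_lat, my_lon, poi_lat, poi_lon) > max_range_m:
        return False
    offset = (bearing_deg(my_lat, my_lon, poi_lat, poi_lon) - my_heading + 180) % 360 - 180
    return abs(offset) <= fov_deg / 2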

...but there is more…

Digital assets need to be understood as representing something. This can be achieved in several ways:

  • The context of the application. For example, it's obvious that you're looking at geocaches in the AR view of GeoBeagle.
  • Visual clues. To continue with the geocaching example, one might represent different types of cache with different icons in the AR view. One might even represent distance with alpha compositing – as an asset gets further away it becomes more transparent until, finally, it disappears (sketched after this list). Other dimensions that could be represented include the relative speed of an asset (via blue/red shift) and an asset's "importance" (related to its on-screen size). I expect conventions to emerge as AR technology matures.
  • Layers/feeds/channels that filter assets. Imagine you're looking at a scene that is cluttered with many assets but you only want to see telephone kiosks (how ironic). One might filter out the "shops", "attractions", "hotels" and "transport" layers leaving only the "utilities" layer that shows things like public toilets and telephone kiosks. Companies should be able to make money by providing subscription based layers.
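
Both ideas are easy to prototype. A toy sketch (invented names, not any real AR API) of distance-based transparency and layer filtering:

def alpha_for(distance_m, max_visible_m=6000.0):
    # Opaque up close, fading linearly to invisible at the limit.
    return max(0.0, 1.0 - distance_m / max_visible_m)

def filter_layers(assets, active_layers):
    # Keep only the assets whose layer the user has switched on.
    return [a for a in assets if a['layer'] in active_layers]

assets = [{'name': 'telephone kiosk', 'layer': 'utilities', 'distance': 120.0},
          {'name': 'hotel', 'layer': 'hotels', 'distance': 450.0}]
for asset in filter_layers(assets, set(['utilities'])):
    print asset['name'], alpha_for(asset['distance'])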

The current state (for developers)

I've only had experience of using Wikitude so I'll limit my comments to that platform.

Wikitude is beta software but if you want to play with an existing version to see how it performs then download the Wikitude World Browser application available in the Android market.

Wikitude is also in "closed" beta – meaning you'll have to register in order to get the documents and associated code / libraries. It is my understanding that eventually one will need a developer key to use the API.

Wikitude already seems to be very stable – I've not had it crash (yet) – but I'm sure that as more people start to use it, more opportunities for breakage will appear.

The API is very simple. As I explained in my previous post, this is both good and bad. To paraphrase Albert Einstein: "As simple as possible, but no simpler". Wikitude is currently too simple (but in a good way). I'd like to be able to:

  • Define the menus and associated event handlers associated with each PoI (Point of Interest – a digital asset within the AR world) so that I can customise what happens when a user clicks a specific PoI.
  • Define areas rather than just points and choose how such areas are "filled" – colour, texture etc…
  • Have an elegant solution for missing altitude information. For example, geocaches only have a longitude and latitude but no altitude. Wikitude assumes 0 meters altitude if none is provided, so an early version of the geocaching application had nearby geocaches appearing underground (as I'm at an altitude of 157 meters). My solution was to make the world flat by giving everything within 6000 meters the same altitude as the current user (see the sketch after this list) – although this isn't at all ideal.
  • Be able to display pathways based on data from OpenStreetMap.org (for example). It'd be good to superimpose public rights of way, footpaths and other navigation information. Something like an AR SatNav.
  • Add 3d models to the AR. Imagine visiting an ancient monument and being able to see an artist's impression that could be viewed in situ. What a great educational resource that would be and Architects would find it useful on site during the pre-build phase.
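
For what it's worth, the flat-world hack from the altitude bullet above boils down to something like this (my paraphrase in Python; the real code lives in the Java event handler):

from math import radians, cos, hypot

def approx_distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation - plenty good enough at geocaching ranges.
    x = radians(lon2 - lon1) * cos(radians((lat1 + lat2) / 2.0))
    y = radians(lat2 - lat1)
    return 6371000 * hypot(x, y)

def flatten_altitudes(my_lat, my_lon, my_alt, pois, radius_m=6000):
    # Give every nearby PoI the user's own altitude so caches with no
    # altitude data (which Wikitude treats as 0m) don't end up underground.
    for poi in pois:
        if approx_distance_m(my_lat, my_lon, poi['lat'], poi['lon']) <= radius_m:
            poi['alt'] = my_alt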

Nevertheless, Mobilizy have the right attitude because "simple" is a good place to start. I can only assume they have various features up their sleeves that they'll add when finished and properly tested.

I don't want to give the wrong impression because one can already do quite a lot:

  • Add points according to longitude / latitude / altitude.
  • Change the label associated with the PoI and the description that is displayed when the item is tapped.
  • Change the icon displayed in the AR view.

Potential

So what happens now..? I can imagine all sorts of uses for this technology and I'll work my ideas out into a blog post in the not-too-distant future.

However, I'd caution against making every location based application viewable via AR. Often the top-down Google Maps view is all that is needed. For example, why show houses for sale in AR when houses for sale (in the UK at least) always have an estate agent's "For Sale" sign placed prominently outside the property? One should only use a technology because it is the best fit for a problem, not because it is the latest and greatest.

Hello Android!

Android

Last month I bought my first new mobile phone since 2001 (honest).

As a developer I wanted something I could play with, so after looking at the iPhone, Blackberry and Windows based offerings I ended up going for the HTC Magic running Google's Android platform.

So far, I've found it to be a fantastic phone: easy to use, lots of very cool features and small enough to fit in your pocket. It's definitely on a par with Apple's OS (I own an iPod Touch) – and with other Android devices coming to market it looks like Google is onto a winner (again).

There are several fantastic applications available for Android phones. Two of my favourites are GeoBeagle and Wikitude – World Browser (both available in the Marketplace).

GeoBeagle

GeoBeagle is an open-source Geocaching application. What is Geocaching..? As the website explains:

"Geocaching is a high-tech treasure hunting game played throughout the world by adventure seekers equipped with GPS devices. The basic idea is to locate hidden containers, called geocaches, outdoors and then share your experiences online. Geocaching is enjoyed by people from all age groups, with a strong sense of community and support for the environment."

As Android phones come with GPS (Global Positioning System) built in – just like the SatNav in your car – they can be used to determine your location and how far away you are from a cache. GeoBeagle is an excellent application that makes this easy – and it's what I use when I go geocaching with my kids.

Wikitude

Wikitude World Browser is an example of augmented reality. I'll let their website explain,

"[It] presents the user with data about their surroundings, nearby landmarks, and other points of interest by overlaying information on the real-time camera view of a smart-phone."

When colleagues have asked about my new phone, this is the one application I always show, quite simply because it provokes such incredulity and positive feedback that it's hard not to feel like a time-traveller from the future demonstrating a gizmo from Star Trek.

Happily, Mobilizy, the creators of Wikitude, have released an API (currently in closed Beta) and I requested to get on board. My intention was to mashup GeoBeagle and Wikitude as a means of learning Android development.

Yesterday evening I started experimenting. This evening I finished it off and managed to find enough time before sunset to run to my nearest geocache for testing, the results of which can be seen in the video below:

What does this show..?

  1. The freedom to learn from open-source projects such as GeoBeagle is invaluable for a newcomer to a platform such as Android. Bottom line: being open is an immediate win.
  2. Wikitude is an amazingly simple API to use. As you'll read below, the Mobilizy guys have made developers' lives very easy – my only concern being that such a simple API will make customisation difficult (although this is only the very first Beta release – I'm mentioning this for the purposes of feedback and fully expect Wikitude to mature).
  3. The Android platform is incredibly easy to learn, use, develop against and deploy to.

How..?

My background in software development includes C/C++, C#/.NET and Python. Android applications are written in Java and deployed to the Dalvik virtual machine – a "sandbox" for running Java that is specially designed for mobile devices. As a result I didn't know what to expect. I got from nothing to the content of the video above with the following steps:

Step 1

You'll need to download and install the Android SDK. Full instructions can be found on Android's developer site. This will probably involve installing and configuring the Eclipse IDE. Don't worry, the instructions are very clear and I managed to do it… ;-) (Also, the Android developer site is excellent with lots of tutorials, documentation and videos for the hungry mind.)

Step 2

Once you have the basic development environment installed you'll probably need to tweak it and check it with a simple "Hello World" application. In my case I needed to make sure that Eclipse was targeting Java 1.6 (I'm on OS X where Java 1.5 seemed to be the default choice). I bought an e-book version of Hello, Android and chugged through the opening chapters happy that things seemed to be building and working as expected. I also made sure I could deploy / develop against my "real life" mobile phone (rather than the emulator provided). This simply involved plugging it into the computer. You'll need to go to Settings -> Applications on your phone and make sure "Unknown Sources" is checked and that the appropriate settings are ticked under "Development" (I have "USB debugging" and "Stay awake" both checked).

Step 3

Get hold of GeoBeagle. The website's wiki has excellent instructions but you need to be aware of the following:

  • You'll need to have Subversion installed in order to get the source.
  • Don't use the "trunk" as suggested in the documents. Use the "sng" branch. Grab the code by issuing the following command:



svn checkout http://geobeagle.googlecode.com/svn/branches/sng GeoBeagle



  • Follow the instructions on the wiki but be aware that you'll have to make sure the di, gen and src directories are all referenced in the project's build path (Properties for GeoBeagle -> Java Build Path -> Source).
  • The same goes for when you set up the unit tests. I found that the Android 1.5 jar wasn't referenced in the Libraries section of the unit test project's Build Path configuration.
  • I could only get GeoBeagle to run on my phone, not in the emulator. I'm assuming the lack of appropriate hardware emulation (such as GPS) is the problem here. In any case, developing on a "real" phone is a breeze.

Step 4

Sign up for the Wikitude Beta programme and wait for the code and documentation to arrive via email. To get Wikitude and GeoBeagle to play nicely together I simply did the following:

  • Reference wikitudearintent.jar in the Libraries section of the Java build path for GeoBeagle.
  • Add an appropriate button (I put mine in the menu for the cache list) and join up the event handling code.
  • In the event handling code do something like this:



// Create the intent for the Wikitude AR view
WikitudeARIntent intent = new WikitudeARIntent(myActivity.getApplication(), null, null);

// Build the POIs (points of interest) from the geocache list
Collection<WikitudePOI> pois = new ArrayList<WikitudePOI>();
for (GeocacheVector gv : this.mGeocacheVectors.getGeocacheVectorsList()) {
    Geocache geocache = gv.getGeocache();
    float longitude = (float) geocache.getLongitude();
    float latitude = (float) geocache.getLatitude();
    String name = (String) geocache.getIdAndName();
    String description = "A description - grab info from geocache instance";
    // Geocaches have no altitude data, so pass 0 for now
    WikitudePOI poi = new WikitudePOI(latitude, longitude, 0, name, description, null, null);
    pois.add(poi);
}
intent.addPOIs(pois);

// Add a title
intent.addTitleText("Augmented Reality View");

// Launch the intent
myActivity.startActivity(intent);



There are only two types of object a developer need worry about:

  1. WikitudeARIntent – the "intent" for the Wikitude World Browser (in Android an "intent" represents a request to do a specific sort of thing – like displaying information using augmented reality, as in this case).
  2. WikitudePOI – a Point Of Interest to display with the Wikitude intent. You supply information such as the longitude / latitude, altitude, name, description and so on.

I'm sure the code example above and the description of the two classes are more than enough to see how the API works. Even the Wikitude documentation is only three pages long (it doesn't need to be any longer).

As I mentioned earlier, my main concern is that with such simplicity comes a lack of potential for customisation. For example, it would be great to override the menu that pops up if you double-tap a POI (as mistakenly happens in the video). Also, I'd like to be able to define areas as well as individual points. Why? Consider the following use-case: wouldn't it be great to be standing at the top of the Eiffel Tower and look down on Paris with all the various neighbourhoods highlighted and perhaps colour coded (indicating traffic congestion, for example)?

Wrapping Up…

Many thanks to Stephen Ng, whose advice and patient help were most gratefully received while I was trying to get GeoBeagle to build. Without open-source developers like Stephen, great tools like GeoBeagle would never exist. He's worth his weight in gold! ;-)

Over the past two evenings I've had a lot of fun. I suppose the reason for this is that both applications share the same outlook, expressed in Wikitude's tag-line:

"The World IS the Platform!"

...and who wouldn't want to develop for that platform..?

Counterpoint, "In C" and Code

I recently found some old teaching materials I used to use for an adult-education class in beginners' musical theory. It was targeted at interested non-musicians, so everything was simply defined with concise examples. I'd forgotten I'd written my own musical fragments for explaining counterpoint and was pleasantly surprised at what I'd produced (inevitably in the style of J.S.Bach). Yesterday I had reason to use the free music typesetting programme LilyPond for the first time in five years. I was having so much fun that I decided to input my old examples into LilyPond, and here are the results (with appropriate commentary):

Counterpoint…

...is the pleasing combination of two (or more) different melodies, often of contrasting nature (for example, in melodic shape or rhythmic texture). The first melody to be introduced is called the "Subject", with additional melodies called "Counter Subjects" (numbered 1..n if there is more than one).

Here are two melodies:

Subject

Contrapuntal Subject

Listen to example as MIDI

Counter Subject

Counter-subject

Listen to example as MIDI

Notice how they differ from each other: the "Subject" mostly moves by leaps and contains many quaver notes, making it sound very "busy", whereas the Counter Subject mostly moves by step and has an altogether less urgent rhythmic feel (lots of minims). Notice also that the shapes of the melodies are similar in the first four bars (a general rise then fall) and contrary in the final bars.

It is these differences between the melodies that make the resulting counterpoint interesting to listen to:

Combined Melodies

Counterpoint

Listen to example as MIDI

This is all very clever and hopefully sounds good, but there is more! In contrapuntal musical forms such as the fugue, once the melodies have been introduced, the composer will often "develop" the material in cunning and surprising ways. Below is a rather incomplete list of melodic transformations one might employ to keep the listener on their toes (you need to know what you're listening for – but it can turn into quite a good game):

  • Inversion – the original melody is turned upside down. Where the pitch moves up by three notes in the original, it moves down by three notes in the inversion.
  • Retrograde – the melody is played backwards.
  • Augmentation – the duration of the notes in the original melody is lengthened by a constant amount (for example, all note lengths are doubled).
  • Diminution – the opposite of augmentation, the notes are shortened by a constant amount.

Obviously, these tricks can be combined to produce transformations such as retrograde-inversion (backwards and upside-down) or augmented-retrograde (lengthened and backwards).

Composers will then start to combine these combinations so you might end up with the subject being played in counterpoint to itself as a retrograde-inversion above a third part that is an augmentation of the counter subject. Fantastic stuff!

To illustrate this point, here is our original contrapuntal example followed by itself in retrograde with the parts swapped between bass and treble (making it a musical palindrome):

With Retrograde

Counterpoint with retrograde

Listen to example as MIDI

Notice how I make sure the melodies conform to the melodic minor and introduce a Tierce de Picardie at the end, so some of the notes are not exactly the same in either direction.

"In C"

Fast forward to yesterday. I recently discovered a great musical experiment on YouTube called: In Bb

As you'll see when you click the link, you are presented with YouTube clips of performers playing improvised fragments in the key of B flat major. As the visitor to the site you create the "counterpoint" by starting, pausing and stopping the videos in a sequence of your choosing. You may even "mix" them – to a limited extent – by changing the volumes associated with each video.

Although not the formal "classical" counterpoint I describe above, this is definitely the pleasing combination of two (or more) contrasting melodies. I was very impressed.

After mentioning this on Twitter I was helpfully reminded that it was similar to Terry Riley's In C (the second link is to the score). I was a big fan of this piece when I was at music college and it explained why "In Bb" sounded so familiar.

In C is an aleatoric composition (i.e. it includes elements of chance in its performance) and consists of fifty-three short musical "subjects" to be performed repeatedly, an arbitrary number of times, but in sequence. Riley suggests that musicians keep within two or three fragments of each other.

Once again, this isn't the formal "classical" counterpoint I described above, but it is the combination of thematic subjects into what has been described as the world's first minimalist piece.

I've found several performances on the Internet: this one is a MIDI realisation, whereas the following excerpt from a live concert is performed by a symphony orchestra.

Code

This brings me onto my current vocation (software developer)...

You can't have failed to notice that what makes counterpoint (and music in general) interesting is the introduction of melodies (fragments, subjects or whatever) and their combination and transformation through time. Replace the word "melody" with "data structure" in the preceding sentence and you have a pretty close analogy to how software works.

Like any analogy, I accept that it isn't a completely "sane" mapping from one world to the other, but I can't help but want to be able to compose music in code. Something along the lines of this (in Python):




>>> subject = melody([c(4), c(4), g(4), g(4), a(4), a(4), g(2)])
>>> subject.rhythm()
[4, 4, 4, 4, 4, 4, 2] # 6 crotchets followed by a minim
>>> subject.pitch()
[c, c, g, g, a, a, g]
>>> subject.intervals()
[0, 7, 0, 2, 0, -2] # intervals between the notes in semi-tones
>>> subject.retrograde()
[g, a, a, g, g, c, c] # the melody reversed



Obviously, I'd need to work out the API in a lot more detail – the above example is just off the top of my head – but I think it'd be great fun to implement something like this.
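
Purely to scratch the itch, here's a minimal sketch of how such an API might start out – a toy, not a real library, with pitches as semitone numbers and LilyPond-style durations (4 = crotchet, 2 = minim):

class Melody(object):
    # A toy melody: a list of (pitch_in_semitones, duration) pairs.

    def __init__(self, notes):
        self.notes = list(notes)

    def pitch(self):
        return [p for p, d in self.notes]

    def rhythm(self):
        return [d for p, d in self.notes]

    def intervals(self):
        # Semitone steps between consecutive notes.
        pitches = self.pitch()
        return [b - a for a, b in zip(pitches, pitches[1:])]

    def retrograde(self):
        # The melody played backwards.
        return Melody(reversed(self.notes))

    def invert(self):
        # Turn the melody upside down around its first note.
        first = self.notes[0][0]
        return Melody([(2 * first - p, d) for p, d in self.notes])

    def augment(self):
        # Double every duration (halve the LilyPond-style value).
        return Melody([(p, d // 2) for p, d in self.notes])

# C C G G A A G - six crotchets and a minim
subject = Melody([(0, 4), (0, 4), (7, 4), (7, 4), (9, 4), (9, 4), (7, 2)])
print subject.intervals()           # [0, 7, 0, 2, 0, -2]
print subject.retrograde().pitch()  # [7, 9, 9, 7, 7, 0, 0]
print subject.invert().pitch()      # [0, 0, -7, -7, -9, -9, -7]

Combined transformations then fall out for free: retrograde-inversion is just subject.retrograde().invert().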

Hang on… what's this..? :-)

Europython 2009

Europython logo

I've recently got back from this year's Europython conference. It's a bit of a long post (there was so much good stuff), but here's my round-up of what I learned, found interesting, intriguing and cool.

FluidDB

FluidDB is a fascinating (if unintentionally secretive) project that I've known about since watching this video. The "talking head" is one of the founders, Terry Jones. I was so intrigued by what I saw that I emailed Terry a bunch of daft questions and he was kind enough to reply.

As a result, I was looking forward to two talks about FluidDB: Terry's own Introducing FluidDB and Esteve Fernandez's Twisted, AMQP and Thrift. The former was a high-level "philosophical" view of the project, whilst the latter was more concerned with some of the underlying (Python based) technology.

I have to admit that it is the "philosophical" aspect of this project that has me hooked. "A database with the heart of a wiki" is the tag line of the project, but, as far as I can tell, that only scratches the surface (Terry also describes it as a metadata engine for everyone and everything). If I understand Terry correctly…

  • FluidDB is not a relational database but stores sets of tag / value pairs.
  • Each set represents a "thing". All "things" are public in that anyone can add tag / value pairs to it.
  • All tag / value pairs are protected by a strong yet simple permissions based system.
  • Tags are also "things" that can themselves be tagged (higher order / meta tagging).
  • Tags are organised in namespaces that are owned by an account (for a person, organisation or application [for example]).
  • It is schema-less in that the tag / value pairs associated with a "thing" are not predefined.
  • It is an "open" system in that any account can add data without having to ask permission.
  • Retrieving data is easy through the use of a simple query language.
  • It's been designed to scale.

Given such a back-of-a-postcard (and probably inaccurate) description – why is FluidDB so interesting? Two things immediately strike me:

  1. Free to write, with fine-grained contributions (by looking at the tag/namespace one can tell who has contributed what).
  2. Evolution as a means of database development.

Free to Write – Anyone can add a tag / value pair to a "thing". However, you might not be able to see – nor want to see – everything associated with a "thing": you might not have permission to view some of the tags from certain namespaces, and you might only be interested in those from others.

Evolutionary development – Because of the "fluid" nature of FluidDB, conventions and practices emerge under evolutionary pressure in exactly the same way our wider social conventions do. This is important because, until now, database development (and thus the way information is organised and presented) has been decided by the likes of me – a software developer – and there's no guarantee I'll do it in a way that is useful to you, either because of my lack of skill, because I want to retain or impose control, or simply because it's impossible to anticipate what people will want to do with things.

Perhaps an example might shed some light…

So a "thing" (Set) consists of tag / value pairs from various namespaces..?

Absolutely!

But what does it represent?

Whatever the tag / value pairs seem to imply.

Actually, there is already a helpful convention for working out what a thing represents: a special immutable tag called "about" whose value is unique and can only be set when the thing to which it refers is created. It allows you [and everyone else] to ask for an object about X where X is something helpfully unique like "NASDAQ:GOOG".

So, if a "thing" had a "nicholas/title" tag with the associated value "Seven Pillars of Wisdom", a "nicholas/author" tag with the associated value "T.E.Lawrence" and another tag called "nicholas/ISBN" with the value "0954641809" then you can be pretty sure that I am attempting to describe a book (the tags exist within my "nicholas" namespace). Furthermore, the special "about" tag might have been set to "ISBN:0954641809" when the "thing" was first created – indicating the thing is a book with a particular ISBN.

Contrast this with a regular database schema or API (such as that from Amazon.com) and you'll notice that they have already defined the concept of a "book" as represented in a "books" table with certain fields defined with certain types and perhaps a many-to-many relation to an "authors" table or other "helpful" stuff. The conventional system is imposing structure whereas FluidDB does not: you invent your own.

For example, the same set might also contain the following additional tag / value pairs along the lines of:

  • wahida/location – "Wadi Rum"
  • wahida/lat – 29.5765
  • wahida/long – 35.419928
  • amazon.com/Product Description – "Seven Pillars of Wisdom is the monumental work that assured T.E. Lawrence's place in history as "Lawrence of Arabia." Not only a consummate military history, but also a colorful epic and a lyrical exploration of the mind of a great man, this is one of the indisputable classics of 20th century English literature. Line drawings throughout."
  • amazon.com/Average Customer Review – 4.5
  • fred/rating – "10/10"
  • jill/score – "5 stars"
  • bill/genre – "Autobiography, War"

Wahida is obviously describing the rock formation that is also called the Seven Pillars of Wisdom and has provided the coordinates for the location.

Amazon.com and Bill have attached some product information and Fred and Jill have both indicated positive opinions.

How do we know the namespace Amazon.com is associated with the bookseller with that domain name? I'll let Terry answer that question (quoted from an email):

"Fluidinfo will only give namespaces that match domains to the actual domain owner, [so] you'll know that's an official amazon tag. That's part of allowing the evolution of reputation and trust – with a giant headstart seeing as we get to import the trust associated with internet domain names."

Interestingly, Fred and Jill use different tag-names and scoring systems to represent their opinions. Furthermore, they don't make it clear to what they refer that has the name "Seven Pillars of Wisdom" (although we'll probably assume they're referring to the book rather than the location).

By adding such information we have an example of "Free to write" and an evolving schema.

Assuming all these tag / value pairs are public, I could start to cross-reference information, from the rather obvious "best reviewed books by T.E.Lawrence" to the not-so-obvious "books associated with places in Arabia". I might also be buddies with Fred and want to find out what he likes to read, but find the publicity bumph from Amazon misleading, so choose to search using only Fred's tags.
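
For illustration, using the simple query language and the hypothetical tags above, that kind of cross-referencing might look something like this (sketched syntax, untested):

>>> import fluiddb
>>> fluiddb.call('GET', '/objects', query='nicholas/author = "T.E.Lawrence" and fred/rating = "10/10"')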

Over a period of time Wahida might create a new thing about the rock formation, move her tags / values over and associate the original thing (the book by T.E.Lawrence) with the new one via a tag called "named after" that stores the unique ID of her new "thing". She does this because she's noticed that other people are making a distinction between a place and things associated with that place.

Also, Jill decides to use the tag name "rating" rather than "score" and to give her marks "out of ten" because that's just what more people do. Notice that the database "schema" changes as conventions become apparent over time. This is evolution at work as the fitness of the convention is determined by how useful it is to the people namespace authors care about.

Is this such a strange idea? Absolutely not, it's exactly how we get stuff done in our wider "social" world – for example, like the philosopher John Searle talks about when he refers to social reality.

I'm pretty sure I've missed something or not fully understood everything. But then, with no documentation or implementation to explore I'm doing nothing more than recollecting and guessing. Nevertheless, we were told we would get both these things "in a month". [Edit – Terry has been in touch and given me a sneak peek at the docs – unfortunately I haven't had chance to read them yet.]

Finally, I spoke to both Terry (just after his talk) and Esteve (in one of those really interesting corridor conversations) and their attitude reminds me of a quote from the introduction to T.E.Lawrence's book mentioned above:

"All men dream; but not equally. Those who dream by night in the dusty recesses of their minds wake in the day to find that it was vanity; but the dreamers of the day are dangerous men, for they may act out their dreams with open eyes, to make it possible."

Terry and Esteve are dangerous men!

Best of luck with your venture guys and I look forward to playing with it in the not-too-distant future.

Pythonic Music

I'm a classically trained musician, so I'm always interested to hear about tools for musical composition / development. There were two examples of this at the conference: RjDj presented by Chris McCormick and a display in the foyer for Ableton Live (who were looking for developers to join their team).

RjDj is an iPhone application that uses "scenes" (akin to musical "recipes" that act as proxies for the original composer) to take input (via the microphone or motion sensors) and mix it with other sounds provided by the composer to produce what I'd call an "auditory experience" that is always unique.

Chris's boss demonstrates the product in this video (at about 4 minutes in you get to see him use pens, staplers and his mug as the sources of input sounds into a scene):

All very cool.

What has this to do with Python..? Whilst the audio libraries are coded in C++ and the iPhone application is obviously in Obj-C, all the glue code is in Python (including a Django based website).

I get the impression that Chris hates developing for the iPhone as we were treated to a very entertaining rant targeted at Apple (I can't wait to get this on Android!).

I also ended up chatting to the Ableton guys during lunch on Thursday. When I explained I had a musical background they gave me a quick demo, handed me a demo CD and told me to check out the site. Apparently, much of their software is written in Python with only the high-performance audio functionality being written in C/C++.

I've had a lot of fun playing with the demo version – I followed a couple of tutorials and then did my own thing. I also had a look around the web-site and their promo-video pretty much sums up what their product is capable of.

Un-conference / Corridor Chats

For me, it is often the corridor chats and conversations over lunch that are the most rewarding aspects of a conference. Europython 2009 was no exception: speech recognition and natural language processing, Zen Buddhism, FOSS in a corporate environment, Django / Pinax, music and software apprenticeship were just some of the topics covered.

Bruce Eckel gave a quick pre-keynote presentation on unconferences – conferences based around a theme and consisting of participant-led talks and presentations. Sounds like my kind of place and, happily, PyConUK this year will be organised along these principles (happening at the same location as EuroPython sometime in September). Can't wait!

Bletchley Park

Enigma

I live about 12 miles north of Bletchley Park (in Towcester) and visit several times a year with my kids. As well as being important from a historical perspective (the centre of British code-breaking during WWII and home of the world's first programmable electronic computer) it is also a "grand day out" with lots of things to see and do.

So I was especially pleased to hear the keynote by Sue Black and Simon Greenish about Bletchley Park. Unfortunately, not enough people know about this gem of a museum or its continuing financial woes. The attendees of EuroPython struck me as a sympathetic audience to pitch to, and the resulting interest in the authentic Enigma machine (see photo above) and a trip to Bletchley organised on the conference mailing list demonstrated that Sue and Simon are onto something.

Let's hope they continue to make progress with their fundraising efforts.

Turtle

As you might have noticed, my daughter and I have an interest in Logo, turtles and other such fun. I was especially pleased to attend the talk turtle.py – a Teaching Tool given by turtle.py's creator Gregor Lingl.

Put simply, Python now has a module that engages with kids like my daughter. Furthermore, she can play – as kids should – in an environment that encourages play as a means of learning. Finally, she can "graduate", should she choose to do so, into other modules and the wider Python language. Turtle.py is worth its weight in gold (yeah – I know source code doesn't weigh anything) simply because it facilitates the transition from childlike playfulness to the playful creativity that is programming.
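
If you've not seen it, here's the flavour of the thing – a few lines using the standard library's turtle module to draw a square spiral:

from turtle import Turtle, done

tess = Turtle()
tess.speed(0)  # draw as fast as possible
for step in range(100):
    tess.forward(step * 2)  # a little further each time...
    tess.left(91)           # ...turning just past a right angle
done()  # keep the window open until it is closed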

I was also entertained by Gregor's examples: As he showed off ever increasingly complex software written with the turtle.py module I came to the realisation that everything in Gregor's world is a turtle. This appealed to my sense of humour – especially when he put up a slide with the text "A Website" and I mistakenly thought he'd written a simple web-server "out of turtles" (I was wrong – he was asking for help with the project website).

IronPython

Prior to my Python work I was a .NET guy so I was particularly looking forward to Michael Foord's Introduction to IronPython (it turns out Michael lives quite close to me: two Pythonistas in rural Northamptonshire..? There must be something in the water).

I have a little knowledge of IronPython from my days at Barclays Capital: we looked at including it in a tool I had built so our end customers (other developers in the bank) could customize various aspects of the software. In the end we had other priorities and I left the bank to learn Python.

Michael managed to pitch it just right to newbies like me with feet in both "camps". I was especially interested to hear how IronPython accommodates non-Pythonic aspects of the CLR (method overloading, for example) and integrates with the wider .NET framework. It certainly whetted my appetite and I'm now chomping my way through some of the chapters in his recently published IronPython in Action (a very good read – although I'm still trying to work out what the guy on the front cover is all about; there isn't a colophon like in O'Reilly books).
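
To give a tiny flavour of that (my own toy example, not Michael's material): in IronPython the .NET framework is a regular import away, and the CLR's overloaded methods just work, with IronPython picking the right one from the argument types:

# Runs under IronPython, not CPython
from System import Console

Console.WriteLine("Hello from .NET")  # resolves to the WriteLine(String) overload
Console.WriteLine(42)                 # resolves to the WriteLine(Int32) overload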

Thanks

I want to end with a big "thank you" to all those involved in the organisation and execution of the conference – I had a great time and I'm already looking forward to next year. ;-)