The Experience of Music

This is a second blog post resulting from emails about music exchanged with my buddy Francis (the first one can be found here). I also felt compelled to write because of this recent fragment of conversation on Twitter with another collaborator and buddy, Dr. Laura-Jane Smith, concerning how one measures the efficacy or effect of an artistic encounter.

What follows are personal reflections. Please take them with a large pinch of salt. I'm merely creating a context in which you may think about music.

The video (by FinallyStudio) claims there are lots of ways to understand music because there are lots of aspects of music to consider (and you don't have to know about any of them to enjoy music). The evidence for this argument is all around us: people with no musical training enjoy music all the time. Be it listening to the radio, singing in the shower or picking out melodies on a self-taught instrument, music is somehow an innate part of the human experience, no matter your level of education.

Both Francis's emails and Laura-Jane's question about positivism struck me as overtly intellectual in outlook: there had to be some thing to be identified, processed and (in LJ's case) measured and interpreted. As Francis explained,

I think I'm *so bad* at this particular music, I'm just not noticing any patterns. My instinctive feeling is it is just a whiney, drifty noise.

In these two sentences Francis identifies the nub of the matter (and the title of this post): the experience of music.

How does music feel? What are the sensations that music arouses in you? What do you think when you encounter music?

When reading this post I want you to wonder about your raw, intuitive or instinctive reactions to music rather than any higher order thinking about music.

Why?

I believe there is a danger of over-thinking music.

As far as I can tell, there's no need to identify patterns in order to appreciate music. Perhaps the composer wanted to create whiney, drifty noise. Even if this is not the case, who's to say Francis's reaction is wrong? I find plenty of well-loved music hard to enjoy (or even to listen to).

In the context of health, MRI scanners can't measure qualia to provide evidence of music's efficacy or influence. Surely, enjoyment of the sensation is evidence enough? (After all, from a utilitarian perspective, the net sum of "happiness" in the world grows because of music.) Nevertheless, I'm reminded of an episode of the BBC programme Imagine from 2008 where Alan Yentob subjected himself to an MRI scan while listening to music. Upon hearing Jessye Norman perform one of Strauss's Four Last Songs - music to which he acknowledged he had a special emotional attachment - his brain was suffused with blood. As one reviewer put it,

He listened to her and his mind blushed.

While the MRI scanner allowed viewers to watch Yentob's apparent physical reaction to Norman's performance it didn't give us a sense of Yentob's obvious emotional reaction (obvious because he talks about how the music makes him feel).

Furthermore, each of us has a different, very personal reaction to music. What may seem like a heavenly performance to one person is a turgid noise to another. The infant school recorder ensemble may bring tears to the eyes for very different reasons, depending on who is listening. Some people love to play certain pieces while others can't stand the thought of another damn performance.

For these reasons, attempting to pin down the experience of "music" feels like an impossible task.

Even generalisations don't work. For example, we may assert that music is just a particular sort of sound; but then we'd need to explain Beethoven's composition technique. His deafness forced him to compose his later works solely in his head - no actual sound was involved in the process despite it being fundamentally musical.

In fact, some musicians disagree with the sentiment that music is in some way a sub-set of sound. Take the renegade American composer Charles Ives who famously asked,

What has sound got to do with music!?

Ives believed that music is in some way the underlying "spirit" of the composer and (especially) the performer expressed through sound rather than the sound itself. An alternative way to answer his question is to claim that music is the feeling you get from sounds rather than the sounds themselves.

In some sense, it is rewarding to "understand" music and know what's going on. Discovering, interpreting and describing the conceptual world of music is an interesting, enhancing and intensifying experience. But there is a danger that we become distracted by such intellectual diversions in a similar way that one might become fixated by the form of a Sonnet while missing its meaning:

Silent Noon

Your hands lie open in the long fresh grass, --
The finger-points look through like rosy blooms:
Your eyes smile peace. The pasture gleams and glooms
'Neath billowing skies that scatter and amass.
All round our nest, far as the eye can pass,
Are golden kingcup-fields with silver edge
Where the cow-parsley skirts the hawthorn-hedge.
'Tis visible silence, still as the hour-glass.

Deep in the sun-searched growths the dragon-fly
Hangs like a blue thread loosened from the sky: --
So this wing'd hour is dropt to us from above.
Oh! clasp we to our hearts, for deathless dower,
This close-companioned inarticulate hour
When twofold silence was the song of love.

~ Dante Gabriel Rossetti

Rossetti's poem was beautifully set to music by Ralph Vaughan Williams (my wife and I enjoy playing it together ~ me on piano, Mary on 'cello). Listen to the recording below; sit back and let the music and words wash over you.

I challenge you to be unmoved.

Nothing to hide..?

On several occasions I have had to explain my position on privacy and surveillance (especially in a digital context). To save me the task of repeating myself, here it is in as simple a form as possible.

"If you have nothing to hide, you have nothing to fear" is a common argument in support of the mass surveillance of citizens by the government or the harvesting of user data by private corporations. It is often made in a reassuring manner, as demonstrated by William Hague.

A common re-statement of the argument goes, "only if you're doing something wrong should you worry, and then you don't deserve to keep it private".

I'm guessing many people will immediately sympathise with these sentiments. After all, we don't want the bad guys to gain the upper hand, you're probably a fine upstanding citizen and we should be happy that innocents are protected from the evil-doers such a dragnet will identify.

I beg to differ.

For a start, this position is a classic false dichotomy: two seemingly black and white choices are given yet there are many ways to address the subject. Such either/or thinking excludes the potential for a more nuanced and subtle debate. Furthermore, such false dichotomies are a favourite tactic in argument and, unless you know what you're looking for, can hoodwink many who take things at face value and stifle debate.

Leaving this aside, the actual choices presented in the argument hide various nasty "home truths":

  • It's not you who determines if you have anything to hide. It doesn't matter how upstanding a citizen you think you are; your point of view doesn't matter in this context since it's the law or (more dangerously) public opinion that judges you (viz. misjudged attacks on paediatricians). For example, teenagers growing up in the Eastern Bloc had to hide their enjoyment of "corrupting capitalist music" (such as Rock and Roll). While one's taste in music is a relatively harmless attribute please consider the plight of LGBT persons living in Singapore and elsewhere who risk legal challenges and remember the many curtailments of religious freedoms throughout history.
  • It assumes surveillance results in correct data and sound judgement. People make mistakes and sometimes agents of the state or employees of corporations are really stupid and don't act in the public or customer's interest. For example, remember the Twitter joke trial? Given the context of the tweet it was obviously a joke, but the Police interpreted it differently. Is demonstrating a sense of humour a crime? I'd argue it's a healthy sign of a liberal tradition of free speech.
  • Rules change. For example, the UK's RIPA law gives the government powers to investigate and intercept communications on the grounds of national security. Sounds reasonable. Yet the number of public authorities empowered to use this legislation has increased four times (in 2003, 2005, 2006 and 2010) and many such bodies use their powers for totally unintended things (such as tracking dog fouling). The result is supposedly water-tight legislation being subverted by local councils (I hardly think dog shit is an issue of national security).
  • Breaking the law isn't necessarily bad. Mohandas Gandhi, Emmeline Pankhurst, Jesus, Nelson Mandela, Socrates, and [insert any well known and well respected historical person who many believe to have been a beacon of hope, progress and morality] all broke the law. The law is often an Ass and breaking it is often the only way in which "progress" is made. Imagine how such persons and the causes they represent would have thrived in a digital panopticon. (They wouldn't.)
  • Those who say privacy is dead are the ones that gain the most from surveillance. This one's rather obvious but worth re-stating. Perhaps I'm being cynical, but when the CEOs of Facebook and Google both state privacy is dead you've got to wonder if this has something to do with their business interests (where they sell your data through targeted advertising).
  • Privacy is a fundamental human right. Yes, I realise this is an argument from authority - specifically, the UN's Universal Declaration of Human Rights - but I believe (and I'm guessing you do too) that intimate declarations of love, doctors discussing a patient, engineers developing a new top secret world changing product and journalists planning an exposé of government corruption are just a few scenarios where privacy is both a reasonable and legitimate requirement.

Am I suggesting privacy trumps all? No. I would strongly argue for openness when it comes to public institutions, the machinations of government, our political representatives and corporations that deal with personal data. How else are we to hold such entities to account?

Am I saying there should be no surveillance? Of course not, that would be silly: I can think of plenty of legitimate reasons for surveillance but none of them legitimise the blanket surveillance of everyone. Furthermore, I'm not the only one who believes this:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Yes, I know, the irony isn't lost on me either.

This article was written in haste over lunch and tidied up after work. Think of it as a first draft and please feel free to take pot shots - I welcome constructive comments, critique and ideas. ;-)

EDIT #1

A detective I know has pointed out that,

Ripa isn't (and never was) just for national security, that is only one part of the act.

I stand corrected! :-)

I was trying to demonstrate how laws can suffer from "scope creep": once-legitimate and sensible legislation being used in quite unintended and nefarious ways. My detective buddy (who also happens to have a philosophy degree and is exactly the sort of ethical, thoughtful and smart person you'd hope would be working as a detective) understood exactly what I was getting at and pointed out that,

Law creep might be better with terrorism stop and search powers where more people are stopped and searched without any actual individual justification. That seems a more clean cut issue to my mind; specific powers being used to blanket cover areas. Or laws set up to manage serious sex offenders also catching idiots - the guy who drops his trousers while drunk on a night out in town now has to notify any change of address and register as a sex offender!

Great stuff!

Asynchronous Python

Python version 3.4 was recently released. For me, the most interesting update was the inclusion of the asyncio module. The documentation states it,

...provides infrastructure for writing single-threaded concurrent code using coroutines, multiplexing I/O access over sockets and other resources, running network clients and servers, and other related primitives.

While I understand all the terminology from the documentation I don't yet have a feel for the module nor do I yet comprehend when to use one feature rather than another. Writing about this module and examining concrete examples is my way to grok asyncio. I'll be concise and only assume familiarity with Python.

So, what is asyncio?

It's a module that enables you to write code that concurrently handles asynchronous network based interactions.

What precisely do I mean?

Concurrency is when several things happen simultaneously. When something is asynchronous it is literally not synchronised: there is no way to tell when something may happen (in this case, network based I/O). I/O (input/output) is when a program communicates with the "outside world" and network based I/O simply means the program communicates with another device (usually) on the internet. Messages arrive and depart via the network at unpredictable times - asyncio helps you write programs that deal with all these interactions simultaneously.

How does it work?

At the core of asyncio is an event loop. This is simply code that keeps looping (I'm trying to avoid the temptation of using a racing car analogy). Each "lap" of the loop (dammit) checks for new I/O events and does various other "stuff" that we'll come onto in a moment. Within the asyncio module, the _run_once method encapsulates a full iteration of the loop. Its documentation explains:

This calls all currently ready callbacks, polls for I/O, schedules the resulting callbacks, and finally schedules 'call_later' callbacks.

A callback is code to be run when some event has occurred and polling is discovering the status of something external to the program (in this case network based I/O activity). When a small child constantly asks, "are we there yet..?" on a long car journey, that's polling. When the unfortunate parent replies, "I'll tell you when we arrive" they are creating a sort of callback (i.e. they promise to do something when some condition is met). The _run_once method processes the I/O events that occurred during the time it took to complete the previous "lap", ensures any callbacks that need to be run are done so during this lap and carries out "housekeeping" needed for callbacks that have yet to be called.
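The "are we there yet?" exchange can be replayed in a few lines of plain Python. This is a toy illustration of the polling-and-callbacks idea only, not asyncio's actual implementation; the event names and lists are invented for the example.

```python
completed = []   # events discovered by "polling" the outside world
callbacks = {}   # event name -> callbacks registered against it
results = []

def on_arrival(event):
    results.append("handling %s" % event)

# Register a callback: "I'll tell you when we arrive."
callbacks.setdefault("arrived", []).append(on_arrival)

# Pretend some external I/O completed between laps of the loop.
completed.append("arrived")

# One "lap": poll for completed events, then run the callbacks that
# became ready, one after the other.
for event in completed:
    for callback in callbacks.pop(event, []):
        callback(event)
completed.clear()

print(results)
# ['handling arrived']
```

The real `_run_once` does far more bookkeeping, but the shape is the same: poll, then drain the list of ready callbacks.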

Importantly, the pending callbacks are executed one after the other - stopping the loop from continuing. In other words, the next "lap" cannot start until all the sequentially executed callbacks finish (in some sense).

I imagine you're thinking, "Hang on, I thought you said asyncio works concurrently?" I did and it does. Here's the problem: concurrency is hard and there's more than one way to do it. So it's worth taking some time to examine why asyncio works in the way that it does.

If concurrent tasks interact with a shared resource they run the risk of interfering with each other. For example, task A reads a record, task B reads the same record, both A and B change the retrieved record in different ways, task B writes the record, then task A writes the record (causing the changes made by task B to be lost). Such interactions between indeterminate "threaded" tasks result in painfully hard-to-reproduce bugs and complicated mechanisms required to mitigate such situations. This is bad because the KISS (keep it simple, stupid) principle is abandoned.
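The lost-update interleaving just described can be replayed deterministically in plain Python. The shared "record" here is hypothetical; the point is only the ordering of the reads and writes.

```python
record = {"count": 0}

# Task A and task B both read the record...
a_copy = dict(record)
b_copy = dict(record)

# ...each changes its own copy in a different way...
a_copy["count"] += 1    # task A adds 1
b_copy["count"] += 10   # task B adds 10

# ...then task B writes back, followed by task A.
record = b_copy
record = a_copy

# Task B's change is lost: we end with 1, not the 11 both updates imply.
print(record["count"])
# 1
```

With real threads the interleaving is nondeterministic, which is exactly why such bugs are so hard to reproduce.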

One solution is to program in a synchronous manner: tasks are executed one after the other so they have no chance to interfere with each other. Such programs are easy to understand since they're simply a deterministic sequential list of things to do: first A, then B, followed by C and so on. Unfortunately, if A needs to wait for something - for example, a reply from a machine on the network - then the whole program waits. As a result, the program can't handle any other events that may occur while it waits for A's network call to complete; in such a case, the program is described as "blocked". The program becomes potentially slow and unresponsive - an unacceptable condition if we're writing something that needs to react quickly to things (such as a server - precisely the sort of program asyncio is intended to help with).

Because asyncio is event driven, network related I/O is non-blocking. Instead of waiting for a reply from a network call before continuing with a computation, programmers define callbacks to be run only when the result of the network call becomes known. In the meantime, the program continues to respond to other things: the event loop keeps polling for and responding to network I/O events (such as when the reply to our network call arrives and the specified callbacks are executed).

This may sound abstract and confusing but it's remarkably close to how we make plans in real life: when X happens, do Y. More concretely, "when the tumble dryer finishes, fold the clothes and put them away". Here, "the tumble dryer finishes" is some event we're expecting and "fold the clothes and put them away" is a callback that specifies what to do when the event happens. Once this plan is made, we're free to get on with other things until we discover the tumble dryer has finished.

Furthermore, as humans we work on concurrent tasks in a similar non-blocking manner. We skip between the things we need to do while we wait for other things to happen: we know we'll have time to squeeze the orange juice while the toast and eggs are cooking when we make breakfast. Put in a programmatic way, execute B while waiting on the result of the network call made by A.

Orange juice, toast and eggs
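This skipping between tasks can be sketched with plain generators and a round-robin loop - no asyncio involved. Each yield marks a point where a task is waiting, letting the loop move on to something else; the breakfast task names are invented for the example.

```python
log = []

def make_toast():
    log.append("toast: start")
    yield                     # waiting for the toaster
    log.append("toast: done")

def squeeze_juice():
    log.append("juice: squeeze")
    yield                     # waiting for more oranges
    log.append("juice: pour")

# A minimal round-robin scheduler: run each task until it waits (yields),
# then move on to the next; drop tasks that have finished.
tasks = [make_toast(), squeeze_juice()]
while tasks:
    task = tasks.pop(0)
    try:
        next(task)
        tasks.append(task)    # the task is waiting; come back to it later
    except StopIteration:
        pass                  # the task has finished

print(log)
# ['toast: start', 'juice: squeeze', 'toast: done', 'juice: pour']
```

Notice the two tasks interleave, yet within each "turn" the code runs strictly sequentially - which is the guarantee the next paragraph describes.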

Such familiar concepts mean asyncio avoids potentially confusing and complicated "threaded" concurrency while retaining the benefits of strictly sequential code. In fact, the specification for asyncio states that callbacks are,

[...] strictly serialized: one callback must finish before the next one will be called. This is an important guarantee: when two or more callbacks use or modify shared state, each callback is guaranteed that while it is running, the shared state isn't changed by another callback.

Therefore, from a programmer's perspective, it is important to understand how asynchronous concurrent tasks are created, how such tasks pause while waiting for non-blocking I/O, and how the callbacks that handle the eventual results are defined. In other words, you need to understand coroutines, futures and tasks.

The asyncio module is helpfully simple about these abstractions:

  • asyncio.coroutine - a decorator that indicates a function is a coroutine. A coroutine is simply a type of generator that uses the yield from, return or raise syntax to generate results.
  • asyncio.Future - a class used to represent a result that may not be available yet. It is an abstraction of something that has yet to be realised. Callback functions that process the eventual result are added to instances of this class (like a sort of to-do list of functions to be executed when the result is known). If you're familiar with Twisted they're called deferreds and elsewhere they're sometimes called promises.
  • asyncio.Task - a subclass of asyncio.Future that wraps a coroutine. The resulting object is realised when the coroutine completes.

Let's examine each one of these abstractions in more detail:

A coroutine is a sort of generator function. A task defined by a coroutine may be suspended, thus allowing the event loop to get on with other things (as described above). The yield from syntax is used to suspend a coroutine. A coroutine can yield from other coroutines or instances of the asyncio.Future class. When the other coroutine has a result or the pending Future object is realised, execution of the coroutine continues from the yield from statement that originally suspended the coroutine (this is sometimes referred to as re-entry). The result of a yield from statement will be either the return value of the other coroutine or the result of the Future instance. If the referenced coroutine or Future instance raises an exception, it will be propagated. Ultimately, at the end of the yield from chain will be a coroutine that actually returns a result or raises an exception (rather than yielding from some other coroutine).
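Because yield from is ordinary Python 3 generator syntax, this suspension and re-entry can be demonstrated without an event loop at all. The following sketch is my own (not from the asyncio documentation) and drives a two-generator chain by hand:

```python
def inner():
    yield "pending"          # suspend: pretend we're waiting on I/O here
    return 42                # the eventual result

def outer():
    result = yield from inner()   # suspends until inner() returns
    return result * 2

gen = outer()
print(next(gen))             # 'pending' - the suspension propagates outward
try:
    gen.send(None)           # resume; the chain now runs to completion
except StopIteration as stop:
    print(stop.value)        # 84 - inner's return value, doubled by outer
```

The event loop plays the role of the `next`/`send` calls here: it resumes the chain when the awaited result becomes known, and the return value surfaces via StopIteration.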

A helpful (yet not entirely accurate) metaphor is the process of calling a customer support line. Perhaps you want to know why your order for goods is late. The person at the end of the phone explains they can't continue with your query because they need to check something with their accounts department. They promise to call you back. This pause is similar to the yield from statement: they're suspending the work while they wait for something else, thus allowing you to get on with other stuff. At some point, their accounts department will provide a result and the customer support agent will re-enter the process of handling your query and when they're done, will fulfil their promise and give you a call (hopefully with good news about your order).

The important concept to remember is that yield from suspends coroutines pending a result so the event loop is able to get on with other things. When the result becomes known, the coroutine resumes.

The following example (like many of the examples in this post, it's an annotated modification of code in the Python documentation on asyncio) illustrates these concepts by chaining coroutines that ultimately add two numbers together:

"""
Two coroutines chained together.

The compute() coroutine is chained to the print_sum() coroutine. The
print_sum() coroutine waits until compute() is completed before it returns a
result.
"""
import asyncio


# Notice the decorator!
@asyncio.coroutine
def compute(x, y):
    print("Compute %s + %s ..." % (x, y))
    # Pause the coroutine for 1 second by yielding from asyncio's built in
    # sleep coroutine. This simulates the time taken by a non-blocking I/O
    # call. During this time the event loop can get on with other things.
    yield from asyncio.sleep(1.0)
    # Actually return a result!
    return x + y


@asyncio.coroutine
def print_sum(x, y):
    # Pause the coroutine until the compute() coroutine has a result.
    result = yield from compute(x, y)
    # The following print() function won't be called until there's a result.
    print("%s + %s = %s" % (x, y, result))


# Reference the event loop.
loop = asyncio.get_event_loop()
# Start the event loop and continue until print_sum() is complete.
loop.run_until_complete(print_sum(1, 2))
# Shut down the event loop.
loop.close()

Notice that the coroutines only execute when the loop's run_until_complete method is called. Under the hood, the coroutine is wrapped in a Task instance and a callback is added to this task that raises the appropriate exception needed to stop the loop (since the task is realised because the coroutine completed). The task instance is conceptually the same as the promise the customer support agent gave to call you back when they finished processing your query (in the helpful yet inaccurate metaphor described above). The return value of run_until_complete is the task's result or, in the event of a problem, its exception will be raised. In this example, the result is None (since print_sum doesn't actually return anything to become the result of the task).

The following sequence diagram illustrates the flow of activity:

Sequence diagram of a coroutine

So far we've discovered that coroutines suspend and resume tasks in such a way that the event loop can get on with other things. Yet this only addresses how concurrent tasks co-exist through time given a single event loop. It doesn't tell us how to deal with the end result of such concurrent tasks when they complete and the result of their computation becomes known.

As has already been mentioned, the results of such pending concurrent tasks are represented by instances of the asyncio.Future class. Callback functions are added to such instances via the add_done_callback method. Callback functions have a single argument: the Future instance to which they have been added. They are executed when their Future's result eventually becomes known (we say the Future is resolved). Resolution involves setting the result using the set_result method or, in the case of a problem, setting the appropriate exception via set_exception. The callback can access the Future's result (be it something valid or an exception) via the result method: either the result will be returned or the exception will be raised.
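To make this machinery concrete, here is a drastically simplified toy future of my own devising - not asyncio.Future itself, which handles cancellation, exceptions and much more - showing how callbacks queue up until set_result resolves the instance:

```python
class ToyFuture:
    """A toy sketch of the future/promise idea (not asyncio.Future)."""

    def __init__(self):
        self._result = None
        self._resolved = False
        self._callbacks = []

    def add_done_callback(self, fn):
        if self._resolved:
            fn(self)                  # already resolved: run immediately
        else:
            self._callbacks.append(fn)

    def set_result(self, value):
        # Resolve the future and run every queued callback, passing the
        # future itself as the single argument (as asyncio does).
        self._result = value
        self._resolved = True
        for fn in self._callbacks:
            fn(self)

    def result(self):
        if not self._resolved:
            raise RuntimeError("result is not yet available")
        return self._result

seen = []
future = ToyFuture()
future.add_done_callback(lambda f: seen.append(f.result()))
future.set_result("hello")
print(seen)
# ['hello']
```

The to-do-list analogy above is literally the `_callbacks` list: functions wait there until the result is known.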

Another example (again, an annotated modification of code from the Python documentation) illustrates how this works:

"""
A future and coroutine interact. The future is resolved with the result of
the coroutine causing the specified callback to be executed.
"""
import asyncio


@asyncio.coroutine
def slow_operation(future):
    """
    This coroutine takes a future and resolves it when its own result is
    known
    """
    # Imagine a pause from some non-blocking network based I/O here.
    yield from asyncio.sleep(1)
    # Resolve the future with an arbitrary result (for the purposes of
    # illustration).
    future.set_result('A result set by the slow_operation coroutine!')


def got_result(future):
    """
    This function is a callback. Its only argument is the resolved future
    whose result it prints. It then causes the event loop to stop.
    """
    print(future.result())
    loop.stop()


# Get the instance of the event loop (also referenced in got_result).
loop = asyncio.get_event_loop()
# Instantiate the future we're going to use to represent the as-yet unknown
# result.
future = asyncio.Future()
# Wrap the coroutine in a task to schedule it for execution when the
# event loop starts.
asyncio.Task(slow_operation(future))
# Add the callback to the future. The callback will only be executed when the
# future is resolved by the coroutine. The future object is passed into the
# got_result callback.
future.add_done_callback(got_result)

# Run the event loop until loop.stop() is called (in got_result).
try:
    loop.run_forever()
finally:
    loop.close()

This example of futures and coroutines interacting probably feels awkward (at least, it does to me). As a result, and because such interactions are so fundamental to working with asyncio, one should use the asyncio.Task class (a subclass of asyncio.Future) to avoid such boilerplate code. The example above can be simplified and made more readable as follows:

"""
A far simpler and easy-to-read way to do things!

A coroutine is wrapped in a Task instance. When the coroutine returns a result
the task is automatically resolved causing the specified callback to be
executed.
"""
import asyncio


@asyncio.coroutine
def slow_operation():
    """
    This coroutine *returns* an eventual result.
    """
    # Imagine a pause from some non-blocking network based I/O here.
    yield from asyncio.sleep(1)
    # A *lot* more conventional and no faffing about with future instances.
    return 'A return value from the slow_operation coroutine!'


def got_result(future):
    """
    This function is a callback. Its only argument is a resolved future
    whose result it prints. It then causes the event loop to stop.

    In this example, the resolved future is, in fact, a Task instance.
    """
    print(future.result())
    loop.stop()


# Get the instance of the event loop (also referenced in got_result).
loop = asyncio.get_event_loop()
# Wrap the coroutine in a task to schedule it for execution when the event
# loop starts.
task = asyncio.Task(slow_operation())
# Add the callback to the task. The callback will only be executed when the
# task is resolved by the coroutine. The task object is passed into the
# got_result callback.
task.add_done_callback(got_result)

# Run the event loop until loop.stop() is called (in got_result).
try:
    loop.run_forever()
finally:
    loop.close()

To my eyes, this is a lot more comprehensible, easier to read and far simpler to write. The Task class also makes it trivial to execute tasks in parallel, as the following example (again, taken from the Python documentation) shows:

"""
Three tasks running the same factorial coroutine in parallel.
"""
import asyncio


@asyncio.coroutine
def factorial(name, number):
    """
    https://en.wikipedia.org/wiki/Factorial
    """
    f = 1
    for i in range(2, number+1):
        print("Task %s: Compute factorial(%s)..." % (name, i))
        yield from asyncio.sleep(1)
        f *= i
    print("Task %s: factorial(%s) = %s" % (name, number, f))


# Instantiating tasks doesn't cause the coroutine to be run. It merely
# schedules the tasks.
tasks = [
    asyncio.Task(factorial("A", 2)),
    asyncio.Task(factorial("B", 3)),
    asyncio.Task(factorial("C", 4)),
]


# Get the event loop and cause it to run until all the tasks are done.
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))
loop.close()

So far, all our examples have used the asyncio.sleep function to simulate arbitrary amounts of time to represent the wait one might expect for non-blocking network I/O. This is convenient for examples, but now that we understand coroutines, futures and tasks we'd better examine how networking fits into the picture.

There are two approaches one can take to network based operations: the high level Streams API or the lower level Transports and Protocols API. The following example (based on this original implementation) shows how a coroutine works with non-blocking network I/O in order to retrieve HTTP headers using the stream based API:

"""
Use a coroutine and the Streams API to get HTTP headers. Usage:

python headers.py http://example.com/path/page.html
"""
import asyncio
import urllib.parse
import sys


@asyncio.coroutine
def print_http_headers(url):
    url = urllib.parse.urlsplit(url)
    # An example of yielding from non-blocking network I/O.
    reader, writer = yield from asyncio.open_connection(url.hostname, 80)
    # Re-entry happens when the connection is made. The reader and writer
    # stream objects represent what you'd expect given their names.
    query = ('HEAD {url.path} HTTP/1.0\r\n'
             'Host: {url.hostname}\r\n'
             '\r\n').format(url=url)
    # Write data out (does not block).
    writer.write(query.encode('latin-1'))
    while True:
        # Another example of non-blocking network I/O for reading asynchronous
        # input.
        line = yield from reader.readline()
        if not line:
            break
        line = line.decode('latin1').rstrip()
        if line:
            print('HTTP header> %s' % line)


# None of the following should be at all surprising.
url = sys.argv[1]
loop = asyncio.get_event_loop()
task = asyncio.async(print_http_headers(url))
loop.run_until_complete(task)
loop.close()

Note how, instead of yielding from asyncio.sleep, the coroutine yields from the built in open_connection and readline coroutines that handle the asynchronous networking I/O. Importantly, the call to write does not block, but buffers the data and sends it out asynchronously.

The lower level API should feel familiar to anyone who has written code using the Twisted framework. What follows is a trivial server (based on this example) that uses transports and protocols.

Transports are classes provided by asyncio to abstract TCP, UDP, TLS/SSL and subprocess pipes. Instances of such classes are responsible for the actual I/O and buffering. However, you don't usually instantiate such classes yourself; rather, you call the event loop instance to set things up (and it'll call you back when it succeeds).
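To make the callback arrangement concrete, here's a minimal sketch (the class names are my own, invented for illustration) in which the event loop is asked to set up both ends of a loopback connection. Note that create_server and create_connection each take a protocol factory: the loop performs the actual connection, builds the transport, and calls connection_made on a fresh protocol instance once everything is ready.

```python
import asyncio


class OneShotClient(asyncio.Protocol):
    """Client protocol: the loop instantiates this and calls us back."""

    def connection_made(self, transport):
        # Invoked by the event loop once the connection succeeds.
        transport.write(b'hi\n')
        transport.close()


class Recorder(asyncio.Protocol):
    """Server protocol: resolves a future with the first data received."""

    def __init__(self, on_data):
        self.on_data = on_data

    def data_received(self, data):
        if not self.on_data.done():
            self.on_data.set_result(data)


loop = asyncio.new_event_loop()
got = loop.create_future()
# Ask the loop to set up a listening transport on an ephemeral port.
server = loop.run_until_complete(
    loop.create_server(lambda: Recorder(got), '127.0.0.1', 0))
port = server.sockets[0].getsockname()[1]
# Ask the loop to connect; it builds both the transport and the protocol.
loop.run_until_complete(
    loop.create_connection(OneShotClient, '127.0.0.1', port))
data = loop.run_until_complete(got)
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
print(data)
```

Notice that nowhere do we instantiate a transport ourselves; we only hand the loop a recipe for making protocols.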

Once the connection is established, a transport is always paired with an instance of the Protocol class. You subclass Protocol to implement your own network protocols; it parses incoming data and writes outgoing data by calling the associated transport's methods for such purposes. Put simply, the transport handles the sending and receiving of things down the wire, while the protocol works out what the actual message means.

To implement a protocol, override the appropriate methods of the Protocol parent class. Each time a connection is made (be it incoming or outgoing) a new instance of the protocol is created, and the various overridden methods are called depending on what network events have been detected. For example, every protocol will have its connection_made and connection_lost methods called when the connection begins and ends. Between these two calls one might expect to handle data_received events and use the paired Transport instance to send data. The following simple echo server demonstrates the interaction between protocol and transport without the distraction of coroutines and futures.

"""
A simple (yet poetic) echo server. ;-)

- ECHO -

Use your voice - say what you mean
Do not stand in the shadow
Do not become an echo of someone else's opinion
We must accept ourselves and each other
Even the perfect diamond
may have cracks and faults

A-L Andresen, 2014. (http://bit.ly/1nvhr8T)
"""
import asyncio


class EchoProtocol(asyncio.Protocol):
    """
    Encapsulates the behaviour of the echo protocol. A new instance of this
    class is created for each new connection.
    """

    def connection_made(self, transport):
        """
        Called only once when the new connection is made. The transport
        argument represents the connection to the client.
        """
        self.transport = transport

    def data_received(self, data):
        """
        Called when the client sends data (represented by the data argument).
        """
        # Write the incoming data immediately back to the client connection.
        self.transport.write(data)
        # Calling self.transport.close() disconnects. If you want the
        # connection to persist simply comment out the following line.
        self.transport.close()


loop = asyncio.get_event_loop()
# Create the coroutine used to establish the server.
echo_coroutine = loop.create_server(EchoProtocol, '127.0.0.1', 8888)
# Run the coroutine to actually establish the server.
server = loop.run_until_complete(echo_coroutine)

try:
    # Run the event loop forever, waiting for new connections.
    loop.run_forever()
except KeyboardInterrupt:
    # Unless we get Ctrl-C keyboard interrupt.
    print('exit')
finally:
    # Stop serving (existing connections remain open).
    server.close()
    # Shut down the loop.
    loop.close()

An example interaction with this server using netcat is shown below:

$ python echo.py &
[1] 7486
$ nc localhost 8888
Hello, World!
Hello, World!
$ fg
python echo.py
^Cexit

Yet this only scratches the surface of asyncio, and I'm cherry-picking the parts that most interest me. If you want to find out more, the Python documentation for the module is a great place to start, as is PEP 3156, which specifies the module.

In conclusion, asyncio feels like Twisted on a diet with the added fun and elegance of coroutines. I've generally had good experiences using Twisted but always felt uncomfortable with its odd naming conventions (for example, calling the secure shell implementation "conch" is the world's worst programming pun), and I suffer from an uneasy feeling that it exists in a slightly different parallel Pythonic universe. Personally, I feel asyncio is a step in the right direction because such a lot of the "good stuff" from Twisted has made it into the core language in a relatively small and obvious module. I'm also looking forward to using it in my own projects (specifically, the drogulus).

As I become more adept at using this module I may write up more.

Image credits: Breakfast © 2010 Pankaj Kaushal under a Creative Commons License. Sequence Diagram © 2014 The Python Software Foundation.

Autonomy

I believe autonomy is important. I like to think I have personal autonomy. I also think it important as a political end to be promoted by institutions in our wider society.

This article is a reflection upon autonomy.

The word "autonomy" derives from the Greek "autos" (self) and "nomos" (rule or law). It was originally applied to city states whose citizens were free to live by their own laws rather than laws imposed upon them. Unsurprisingly, my dictionary defines autonomy as, "the power or right of self-government; self-government, or political independence, of a city or a state".

However, it is personal autonomy, as applied to individuals, that I am interested in exploring.

An autonomous person is, "obedient to a law that one prescribes to oneself" (to paraphrase Rousseau). Such a person leads their life in a manner that is consistent with a set of beliefs, values and principles that are the outcome of reflection and self evaluation. As Socrates famously said, "the unexamined life is not worth living". Furthermore, an autonomous person is not only capable of considering, deciding and acting but does so in all three cases. To act without self-reflection - perhaps out of habit or because of an unthinking obedience to received "normal" modes of behaviour - is not autonomy.

Socrates

For someone to possess autonomy, not only should they have the capacity for self-reflection but they should also meet two external requirements: freedom to act without imposition and freedom to act according to their principles. Perhaps the most famous description of such freedoms can be found in Isaiah Berlin's essay, "Two Concepts of Liberty" (Berlin uses the words "freedom" and "liberty" interchangeably), in which he defines positive and negative freedom.

Berlin introduces these concepts by relating each to a question:

...the negative sense, is involved in the answer to the question "What is the area within which the subject - a person or a group of persons - is or should be left to do or be what he is able to do or be, without interference by other persons?" The second, ...the positive sense, is involved in the answer to the question "What or who, is the source of control or interference that can determine someone to do, or be, this rather than that?"

Let's unpack these two aspects of freedom / liberty:

Put simply, negative liberty is freedom from coercion or interference. Berlin qualifies this by claiming that coercion implies deliberate interference from others. As Berlin puts it,

If I say I am unable to jump more than ten feet in the air [...] it would be eccentric to say that I am to that degree enslaved or coerced. You lack political liberty or freedom only if you are prevented from attaining the goal by human beings.

In the context of autonomy, negative liberty affords the opportunity for self-reflection or decision-making on one's own terms. As Berlin points out, in a wider sense one's "freedom" to act may be limited by laws of nature, but this is not our focus. Rather, we're concerned with coercion interfering with our intent to act in such-and-such a way.

Alternatively, positive liberty is the freedom to act in a particular way. It is the capacity to act based upon one's own choices and reasons. Berlin insists,

I wish to be the instrument of my own, not of other men's acts of will. I wish to be a subject, not an object; to be moved by reasons, by conscious purposes, which are my own, not by causes which affect me, as it were, from outside. I wish to be somebody, not nobody; a doer - deciding, not by external nature or by other men as if I were a thing, or an animal, or a slave incapable of playing a human role, that is, of conceiving goals and policies of my own and realising them.

Put simply, one can be free from interference (negative liberty) and free to act (positive liberty); in order to act autonomously, at least one of these freedoms is required.

Why only one? It's possible to act autonomously yet enjoy only one of the two sorts of liberty. For example, I have the positive freedom to choose to drive on the wrong side of the road and, under certain rare circumstances, may choose to do so, although I ought to expect curtailment of my negative freedom via coercion in the form of the police. Alternatively, I may enjoy the negative liberty to be free from coercion when applying to one or another university, but the decision concerning my offer of a place (the positive freedom to attend) does not rest with me. It's the decision of the institutions to which I applied.

But are these freedoms the same as autonomy?

No. The will to act is missing - the capacity to take advantage of such liberties. Consider the following: as a result of misinformation, misunderstanding, habit or deception, a person may not realise they are at liberty to act in some way or another. Perhaps they come from a town where there are "Keep off the Grass" signs in every park, and so they mistakenly believe the rule applies in their current location when no such rule is advertised or even exists. They are at liberty to walk where they will but don't because of their ignorance.

Therefore, an open, enquiring and imaginative mind, used to engaging in reflection, is needed in order to discover such liberties and decide to take advantage of them.

And so we come full circle back to the capacity for self-reflection.

Unsurprisingly, I believe such an outlook ought to be encouraged at school, via institutions such as professional organisations and facilitated by the technology we invent, build and deploy.

Autonomy can also be paradoxical: strangely, I may act autonomously to curtail the freedoms needed to act autonomously. For example, I may freely choose to join the army causing me to live a necessarily un-autonomous life.

Furthermore, we regularly curtail our own and each other's autonomy in order to act autonomously: this is perhaps best encapsulated in the term "wage slave", a situation where one is forced to work in order to earn enough to pay for some other thing perceived to be more valuable than one's current autonomy (such as a mortgage, family holiday abroad or merely earning enough to make ends meet).

More puzzling still, many are paid to curtail other people's autonomy: I'm often asked to design software that is purposely limited until you pay money - and even then it is only designed to work in a specific pre-defined manner (for example, you have to pay to join a dating website, and even then, you're limited by not being able to contact your potential dates directly). This is politely (and mistakenly) called "business", but an (understandably) cynical person would prefer the term "exploitation".

Finally, exercising personal autonomy is how we decide and act upon life's big questions of a political, ethical and personal nature. It is this reason, ultimately, that explains why I believe it to be such an important attribute. It's also why the promotion of autonomy is the stated primary aim of the drogulus.

Image credits: Socrates. © 2005 Eric Gaba (under a creative commons license). Keep off the Grass. © 2012 Mirsasha (under a creative commons license).

Pachelbel's Canon

I've been exchanging emails with my buddy Francis. We've been discussing classical music. Much of the discussion has been about discerning the "clever stuff" that may not be immediately obvious when listening to a piece of music.

It's possible to listen to any sort of music and simply enjoy the sensation of the sounds. I do this quite often. However, there's also a hidden world of clever, sneaky and fun tricks that composers and performers use to bring about a performance. To use a mathematical analogy, it's the difference between just appreciating the beautiful picture of the fractal (below) compared to having the additional understanding of the simple and rather beautiful mathematics that bring it into existence.

An image of a fractal

Put simply, sometimes it's fun to know exactly what's going on (in a musical sense). Unfortunately, in the context of classical music, this usually requires both historical and theoretical knowledge. So what do you do if you don't have this knowledge? In the case of Francis, an obvious starting point is to explore the classical pieces he already enjoys. When I asked him what classical recordings he owned he replied,

I have and quite like Faure's Pavane, and Pachelbel's Canon. Frustratingly though when I play them now they feel like cliches. This is partly because I know them from childhood, so there's nothing new to me. Partly because I don't know how to engage deeper beyond them.

So, I want to lift the curtain on Pachelbel's Canon and give you a sense of some of the non-obvious "clever stuff" happening in the piece. I assume no prior musical knowledge.

Rob Paravonian's humorous rant (above) centres around the fact that the poor cellist (playing the lowest part) plays the same sequence of eight notes 28 times. This is a special type of ostinato (repeated pattern) called a ground bass. As Rob demonstrates, the ground bass is a technique that's (over) used in a lot of pop music. In the Baroque period (the historical era during which Pachelbel was writing) composers used it as an anchor within a piece so that the listener had some sort of expectation of what was coming next. The clever bit is how the composer plays with our expectations by setting contrasting, surprising and clever melodies over the top of the ground bass.

In Pachelbel's case he starts by setting a very simple, regular step-wise melody above the ground bass. As the piece continues each new repetition of the ground bass has an increasingly complex setting above it until the melody is moving quite quickly, is rhythmically more interesting (it's uneven) and jumps around in pitch. Following this "climax" (yes, that is the correct term) the melody gradually transforms into a slower moving, simpler and more relaxed state that draws the piece to a close.

This is a sketch of the form of the piece at the macro (high) level. But there are other levels of "resolution" that can be viewed under our musical microscope.

For example, have you wondered why the piece is called "Pachelbel's Canon" and not "Pachelbel's Ground Bass"? Well, a musical canon is a contrapuntal technique where a single melody is played on top of itself, with voices coming in one after the other. You probably already know several canons and have perhaps sung some at school: "Row, Row, Row Your Boat" and "Frère Jacques" are well known examples. In any case, the way a canon is performed is always the same: a first voice starts to sing or play the melody and, at some regular interval of time, the remaining voices join in one after the other, each playing the same melody.

It is this "canonical" technique that Pachelbel uses. The piece is written in four parts: the ground bass and three melody parts. However, each of the melody parts plays exactly the same melody but offset by one repetition of the ground bass. So, not only does the melody mysteriously fit with the ground bass, but it fits with itself. Furthermore, Pachelbel has organised things so that the macro form of the piece (the feeling of a gradual build up to a climax followed by a relaxation to the end) emerges from such micro level interactions between the different parts.

Cool huh..?
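Since this is normally a programming blog, here's a toy model of that structure (purely my own illustration - the note names are those of the canon's ground bass, but everything else is invented): each melody part plays the same material, shifted by one repetition of the eight-note ground bass.

```python
# The ground bass of Pachelbel's Canon: eight notes, endlessly repeated.
GROUND_BASS = ['D', 'A', 'B', 'F#', 'G', 'D', 'G', 'A']

OFFSET = len(GROUND_BASS)  # each voice enters one repetition later


def entry_point(voice):
    """Beat on which a given melody part (0, 1 or 2) begins."""
    return voice * OFFSET


entries = {'violin %d' % (v + 1): entry_point(v) for v in range(3)}
print(entries)
```

Whatever violin 1 plays over repetition N of the bass, violin 2 plays over repetition N+1 and violin 3 over repetition N+2 - which is why the melody has to fit both the bass and itself.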

All of the features I describe above can be heard if you listen very carefully. However, don't expect to hear them all at once! This is why so much classical music rewards repeated listening: there's always something new to spot! The following video is particularly useful because the Musanim project have superimposed a graphical version of the score that follows the performance. The ground bass is represented by the blue blocks at the bottom of the screen and the red, brown and yellow blocks represent each of the three melody parts. Basically, the vertical axis is pitch (how high or low a note sounds), the horizontal axis is time, and colour represents the part being played (sometimes the colours change if parts play the same note). If you watch and listen very carefully you'll be able to spot the canonical relationship between each of the melody parts.

Hang on a minute, if there are only four parts in the piece why are there six performers..?

Obviously the cello is playing the ground bass. Each of the violinists is playing one of the melody parts. The first violin is on the left, second in the middle and the third to the right of the organ. Right of the organ? What's an organ doing in there and what on earth is that funky guitar like thing on the far right?

Well, these are the continuo group - a sort of Baroque-era rhythm section (the instrument on the far right is, in fact, a theorbo - a member of the lute family of instruments). They follow the bass part (in this piece it's the ground bass played by the cello) and improvise harmonic "filling" in much the same way that a rhythm guitarist in a pop group or a pianist in a jazz trio does.

Now, watch the performance again, but this time I want you to concentrate on what the performers are actually doing. Notice how they move a lot, swaying from side to side, and often change where they're facing. Since they're all reading their parts they have to keep together in some way. Obviously they're listening very carefully to each other, but they're also aware of each other's movements out of the corners of their eyes. As a result they're able to get a sense of what everyone else is doing and collectively work as an ensemble. At certain points you'll see them glance, make eye contact with each other and even smile. This is another means of non-verbal communication the players use to keep their sense of ensemble (you see the third violinist do this about halfway through the performance). Finally, at the end, when they need to coordinate the close of the performance, everyone looks to the first violinist on the far left to get a sense of how the music is slowing down and ultimately coming to a stop.

Each group works in a different way, and watching an ensemble's group dynamics is yet another aspect of classical music that allows for repeated listening. It's fun to compare and contrast the different ways in which performers react to and perform a piece of music.

I hope you enjoy listening!

Image credits: Fractal © wikimedia.org under a Creative Commons license.