Perception, Abstraction and Culture

Why Computers Work (part 5)

(Part 1, part 2, part 3, part 4)

The Treachery of Images by René Magritte is a thought-provoking visual brain twist. It shows a pipe, under which is written in French, "this is not a pipe". Magritte is correct: it is not a pipe, but a painting:

This is not a painting
Source - educational fair use.

In a similar vein, you're not currently looking at a painting ~ rather, you're looking at a multitude of pixels, each one acting like a small tile in a huge mosaic of millions of tesserae, each tile one of many millions of possible colours. If you were to zoom into the image you'd see something like this:

This is still not a painting

Yet, you're not even looking at an assortment of tessellated pixels! Rather, your screen actually consists of repeating cells, each in turn split into red, green and blue (RGB) sections. If you used a magnifying glass to look at your screen, you'd see something similar to this:

This is definitely not a painting
Source - Licensed under CC BY-SA 3.0.

The upper half of the image contains blocks of the RGB cells arranged to display the basic colours: red, green and blue. The lower half contains RGB cells arranged into two blocks for white and black. By adjusting the amount of constituent red, green and blue emitted in each individual RGB cell, many millions of colours can be generated. If you step back from your screen, squint your eyes and look at the blocks of colour you'll see this effect in action.
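A back-of-the-envelope sketch in Python makes the "many millions" concrete. It assumes the common case of 8 bits (256 intensity levels) per channel, which is typical but not universal:

```python
# Each RGB cell mixes three independently controlled channels.
levels_per_channel = 2 ** 8        # 256 intensity levels for red, green and blue
colours = levels_per_channel ** 3  # one level chosen per channel

print(f"{colours:,} possible colours")  # 16,777,216 - the "many millions"
```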

For completeness, here's a close-up of one of the individual RGB cells:

An RGB square pixel

There are likely to be several hundred of these minute electrical components per square inch of the screen you're using to read this article.

Our perception of what we encounter depends on its scale. Furthermore, what we encounter may not be what we see, as René Magritte forces us to acknowledge with The Treachery of Images (we see a pipe when we're actually encountering a painting, or pixels, or a screen, and so on). There is a phenomenological aspect to our relationship to things: we can't help but add meaning coloured by our unique experience of the world. We have a unique and personal perspective.

This applies to other encounters with the world too.

Our perception of time depends on a sense of scale or, perhaps more accurately, tempo. Each of the following eight images shows a different snapshot of a stick man.

Individual frames for a running stick figure animation
Source - educational fair use.

Yet if we repeatedly and speedily place them on top of each other, we no longer see eight individual images, but a single image in motion. In fact, because we can't help but add meaning, we see a stick man running.

Running stick man animation

Scale also affects other senses, such as our perception of sound. Consider the following musical experiment: Beethoven's mighty 9th Symphony stretched to last twenty-four hours. By drastically slowing down the tempo (all other aspects of the piece remaining the same ~ notes, instrumentation, and so on), it becomes something completely different. It sounds like an ambient musical experiment by the likes of Brian Eno, and our recognition of melody, form or harmonic structure disintegrates - even though the original melody, form and harmonic structures are still present. Such a temporal zoom, as well as hiding aspects of the music, also reveals new details: I find myself concentrating on the timbre of the instruments and enjoying the indistinct transitions between pitches ~ it's like listening through fog.

Why are these examples important..?

In the first post I challenged you to acquire new perspectives about seemingly everyday things. I called these brain twists because they cause "aha" moments. Such subjective shifts provide a new, and hopefully deeper, understanding of what we're encountering or how we relate to something. The external world remains as it was before, but it is we who have changed perspective.

Yet the previous examples show that brain twists are something we do naturally, even if we're not always aware we're performing such twists. These examples were carefully chosen because they bring such changes of perspective into focus.

Put simply, I'm inviting you to shift your perspective about how we change perspective. Or, put another way, can you "brain twist" brain twists..?

Our capacity to shift perspective due to scale is a fundamental reason why computers appear to work. They work so fast at such a small scale that our sense of time and space means we don't see our computer re-drawing a static image on a flat screen made up of millions of RGB cells at a rate of around 64 images per second. Rather, we see this blog post consisting of words, images, sound and video - things that are meaningful to us. The computer is working at a completely different scale of time (3 billion instructions a second), space (running on microscopic electrical components) and with an absence of meaning (it's just physics relating to electrical circuits). Yet you understand the words, appreciate the images and engage with the music in this blog post: you bring your unique culturally informed and meaningful perspective to the human scale of things.

Here's the brain twist: such tricks of perspective due to scale also apply to thinking.

Consider learning to ride a bike ~ it's a challenge because the learner has to think about lots of different things at once. For instance, turning the pedals, steering, keeping balance, posture, the brakes and coordinating all these things together so the beginner cyclist moves safely in the right direction. When we become proficient at cycling, this bundle of thinking simply becomes "riding a bike". All the constituent aspects I describe above are subsumed into a larger concept.

Such diverse dexterous details, through careful practice and familiarity, become a single named activity. In a sense, we have zoomed out in the scale of our thinking. Such generalisations are useful as placeholders in further thoughts... the building blocks of our generalised thinking are at a different scale.

For instance, I could say "I'm just going to cycle to the shops, do you want me to get anything for you?". The concept of cycling to the shops is a placeholder for the rather complicated activity of riding a bike, whose specific details are not important for the meaning of the sentence.

This is, in a computing sense, what we mean by "abstraction".

In this sense of "abstraction", functional units that fulfil a certain role are organised into larger, or are composed of yet smaller, functional units. Such units are used together to achieve some valuable end when their relative scale allows complementary usage.
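A minimal Python sketch of this idea, reusing the cycling example (all the function names here are hypothetical, invented purely for illustration): small functional units are composed into a larger named unit, and the caller only deals with the larger name:

```python
# Each constituent skill is a small functional unit...
def pedal():
    return "turning the pedals"

def steer():
    return "steering"

def balance():
    return "keeping balance"

# ...subsumed into a single, larger named abstraction.
def ride_bike():
    return [pedal(), steer(), balance()]

# At this scale of thinking, "riding a bike" is just a placeholder.
def cycle_to_shops():
    return "Cycling to the shops: " + ", ".join(ride_bike())

print(cycle_to_shops())
```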

Because computers process billions of instructions a second, this hierarchy of abstraction is hidden in the blink of an eye with the end result being something comprehensible at our human scale (except when things go wrong and the computer becomes incomprehensible or appears confused... commonly known as a bug).

Often the skill of the programmer or engineer is to work out the arrangement and coordination of such abstractions to achieve something meaningful at the human scale.

A wonderful visual example of such an arrangement and coordination of abstraction is Conway's Game of Life.

The Game of Life is an automaton: rules that define a process for how certain states of affairs transition to a new state (sound familiar?). In the case of the Game of Life, the states of affairs describe a huge grid. Each square cell in the grid can be alive (white) or dead (black). Another way to imagine the Game of Life is as many parallel tapes lined up on a huge Turing machine with some squares white (on) and others black (off).

There are only three rules to work out the next state of affairs, and they are disarmingly simple:

  1. Any live cell with two or three live neighbours survives.
  2. Any dead cell with three live neighbours becomes a live cell.
  3. All other live cells die in the next generation. Similarly, all other dead cells stay dead.
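The three rules above can be sketched in a few lines of Python. This is a minimal sketch, assuming the grid is represented as a set of (x, y) coordinates of live cells:

```python
from collections import Counter

def next_generation(live):
    """Apply the three rules to a set of (x, y) live cell coordinates."""
    # Count the live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbour_counts.items()
        # Rule 1: live cells with two or three neighbours survive.
        # Rule 2: dead cells with exactly three neighbours become alive.
        # Rule 3: everything else dies or stays dead (by omission).
        if count == 3 or (count == 2 and cell in live)
    }

# A "blinker": three live cells in a row oscillate between a horizontal
# and a vertical line, returning to the start every two generations.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(next_generation(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```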

It feels as if nothing of much interest could be made with such a game. But consider the following Game of Life grid state:

By following the simple rules, the next state of affairs must look like this (pick a white cell from the first grid, follow the rules in your head, and work out if you agree with this second grid):

Of course, this grid can transition to a further new state of affairs (notice anything interesting? It's an upside-down version of the first grid, but moved one cell to the left and one cell up):

The next state of affairs is, unsurprisingly, the same as the second state but upside down:

A further outcome of such mirroring is that this final state must transition back to the first state of affairs but moved by one cell to the left.

Such an arrangement and coordination of cells, combined with the rules of the Game of Life, create a looping pattern of four steps that always moves in a single direction until it bumps into another pattern of cells. It appears to humans as if something is flying across the grid of squares (which is why this pattern is called a spaceship).

Skilful combination and arrangement of such small groups of cells in the Game of Life show how abstraction, scale and perspective interact. In the following video, as the camera zooms out the tempo of change speeds up to reveal structures within structures, and something rather amazing.

A similar stack of abstraction is required for you to read this blog post on your computer.

My website is created using an easy-to-read-and-write programming language called Python. Python is, itself, written in another programming language called C. C is less easy to read but still comprehensible to a trained software engineer. Yet C has to be compiled for it to work. When C code is compiled, the relatively understandable C code is translated into instructions written in assembly language, which work in a way that is closer to how the circuitry of the computer works. But we're not finished yet..! The assembly language is itself translated into machine code, a representation of assembly language instructions as binary numbers. This is the lowest level to which software engineers tend to go when zooming in to the computer. But the numeric machine code instructions physically stored on the computer hardware are usually further refined into microcode - a series of circuit-level instructions that describe how the hardware should behave to complete the computation. Of course, someone will have organised and designed the millions of transistors and other microscopic components that make up the (Turing complete) hardware. This is the level of computing at which electrical engineers, chip designers and, ultimately, physicists can be found.

Along the way, I didn't have to write everything from scratch. Coders use software libraries in the same way a chef may re-use pre-existing recipes. Someone else will have figured out how to do some valuable task and organised the steps needed to fulfil it into, say, a module of re-usable Python code. Just like the bundle of thinking needed to ride a bike becomes subsumed into the concept of cycling, I don't have to know the implementation details, but can refer to the relevant re-usable code when it is needed in my own program. Such re-use of existing instructions happens in all the levels of the computing stack I describe above.

All these levels of abstraction are required to get to this fragment of Python:

Hello World in Python
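For readers who can't see the image, the classic first Python program it depicts is presumably something very close to this single statement, which displays a greeting on the screen:

```python
# The canonical first program: display a greeting.
greeting = "Hello, World!"
print(greeting)
```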

As I come to the end of this exploration of computing, I can't help but feel there is one final brain twist ~ something missing or yet undiscovered about our relationship with computers.

To use a musical metaphor, I feel like I've explained why a piano works (the tuning, acoustic properties, action of the keyboard and hammers) and shown why this relates to musical theory. Yet I've said nothing about the art or performance of music, the emotions we feel when we hear or play music, or that music moves us at a fundamental and human level.

We are missing the "music" of computers (and we miss it at our peril).

Wittgenstein best sums up my feeling:

Die Menschen, die immerfort ›warum‹ fragen, sind wie die Touristen, die, im Bädeker lesend, vor einem Gebäude stehen & durch das Lesen der Entstehungsgeschichte etc etc daran gehindert werden, das Gebäude zu sehen.

(People who are constantly asking 'why' are like tourists, who stand in front of a building, reading Baedeker, & through reading about the history of the building's construction etc etc are prevented from seeing it.)

~ Ludwig Wittgenstein, Culture and Value (MS 124:93)

We should, rightly, be amazed by the technical marvel of computers; but it is all too easy to be overwhelmed by, and focus on, the apparent cleverness (or Rube Goldberg-iness) of it all. Sadly, in so doing we miss the opportunity for an enlarged, more creative and expressive encounter with computing.

Computers are not just machines for rapidly evaluating logical instructions. They are a medium through which we share and express our values, culture, social world and forms of life. They are only valuable because we are able to create and express things of significance through them. Furthermore, the way we express things with computers, and the fact that we choose this form of expression at all, is also of cultural significance. Like other forms of creative endeavour, computers reflect those who make, inhabit and consume in such a medium.

Computers shine when they enlarge us in an affirmative, fulfilling, humane, creative and expressive way. Yet computing for the sake of computers, with no regard for culture, is a diminished, inhumane and insular form of ignorance and, sadly, I can't help but feel disappointed with how, in the mainstream, we currently use and think of computers.

Social media is anything but social... it's an efficient advertising laboratory that turns humanity into lab rats caged in echo chambers of digitally digested packets of small-mindedness. "Artificial intelligence", "machine learning", "blockchain" and other such buzzwords are nerdy PR for the clever use of computers to unsatisfactorily automate human activity (including our latent prejudices). Computer games are mostly beautiful looking yet formulaic variations on a theme, target driven and hardly allowing a player to express themselves. Our world is polluted by complicated and unpleasant computerised gizmos: "intelligent" washing machines, programmable coffee makers, automatic point-of-sale machines... most of which are banal or frustrating to use. And, of course, no aspect of life is too small to be "solved" by an app available on your mobile device (a peculiarly problematic outlook ~ it's as if we're all broken and need technology to fix us).

Such a shallow, invasive and unfulfilling world of computing is perhaps inevitable. It reflects our failure to think creatively while we focus on automating things in the most trivial, complicated or inconvenient manner. Yet highlighting such a problematic state of affairs is helpful, for then we can compensate and re-balance.

Personally, I can't help but feel the missing part of our relationship with computers is an emphasis on things like culture, contemplation and creativity ~ activities and aspects of our lives that are affirmative, healing, empowering, raise up our existence or provoke useful reflection and personal growth.

The final brain twist I have for you is to rise above a merely technical view of computers. I hope you imagine, create and participate in a meaningful, authentic and engaging culture expressed with and through computers. One in which computers become a medium for affirmative, liberating and expressive activities that enlarge ourselves, our world and our place in it.

In other words, let's together compose the oft-missing "music" of computing.

After all, we are only just getting started with these strange instruments of automation.

Thank you for reading.


Automated Rule Following Machines

Why Computers Work (part 4)

(Part 1, part 2, part 3, part 5)

What follows isn't so much a brain twist, as an extended make-pretend that points the way to what a computer is.

Imagine a strange looking contraption sitting before you on a table.

It consists of an exceptionally long reel-to-reel tape that passes through an electro-mechanical device of some sort. The tape is subdivided into square "frames". Some of the squares contain symbols, others are empty. As it operates, the tape moves through the device either to the left or to the right, like a badly behaved cinema projector.

A Turing Machine
Source - Licensed under CC BY 3.0

The "head" (where the tape passes through the device) covers exactly one square's worth of the tape. There's a flash of light from the head when a new square is completely contained therein. Sometimes this is followed by a clicking sound. When the click is heard the symbol in the square is changed: it's either different or completely rubbed out. Then the tape moves to the left or right to a new square before continuing its strange flashing and clicking operation.

On the front of the machine is a little window. The window is labelled "STATE" and contains a number that changes when each square is "processed" by the head.

After a while the machine comes to a stop. The state window contains the number 0 (zero). By the head, on the tape, is a sequence of squares containing some familiar looking symbols:

H E L L O   W O R L D

Two other objects are on the table. On one side of the contraption is a black and white photograph of a thoughtful looking man with an old fashioned side parting and enigmatic smile, sitting in a sturdy deck chair. On the other is a thick ring-bound pile of paper whose title you can just make out: "Instructions".

The instructions listed on the front page consist of just four rules:

  1. Set the machine to state "1".
  2. Read the symbol in the current frame on the tape.
  3. Given the current state of the machine and the symbol read from the current frame, look up what to do next. "What to do next" is defined by three transitions:
    • Change to a new numbered state.
    • Replace or delete the symbol in the current frame.
    • Move the tape by some number of frames to the left or the right.
  4. If the new state is "0" then stop, otherwise resume from rule 2.
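The four rules above can be sketched as a loop in Python. This is a minimal sketch under stated assumptions: the "what to do next" table is modelled as a dictionary mapping (state, symbol read) to (next state, symbol to write, tape movement), and the tiny example table is hypothetical ~ not the thick ring-bound one on the table:

```python
def run(table, tape, position=0, state=1):  # rule 1: start in state 1
    """Follow the four rules until the machine reaches state 0."""
    tape = dict(enumerate(tape))           # frame number -> symbol
    while state != 0:                      # rule 4: stop when the state is 0
        symbol = tape.get(position, " ")   # rule 2: read the current frame
        # Rule 3: look up what to do next given the state and symbol.
        state, write, move = table[(state, symbol)]
        tape[position] = write             # replace the symbol in the frame
        position += move                   # positive moves right, negative left
    return "".join(tape[i] for i in sorted(tape)).strip()

# A hypothetical table: capitalise every "a" on the tape, halting at a blank.
table = {
    (1, "a"): (1, "A", 1),  # state 1, read "a": write "A", stay in state 1, move right
    (1, " "): (0, " ", 0),  # state 1, read blank: write blank back and halt (state 0)
}
print(run(table, "aaa"))  # AAA
```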

On the many pages that follow is a single huge table entitled "What to do next". It contains thousands of entries under the same five columns, all looking something like these extracts:

CURRENT STATE  HEAD READ  NEXT STATE  HEAD WRITE  MOVE TAPE
1              "A"        785         "5"         1L
516            " "        657         "!"         86R
516            "#"        657         "!"         86R
1023           "#"        1020        " "         23R

A note, just above the table, explains:

Each combination of the "CURRENT STATE" and "HEAD READ" values is unique. There is one row for each possible combination. Identify the row that corresponds to the current state of affairs. The new state of affairs is defined by the "NEXT STATE", "HEAD WRITE" and "MOVE TAPE" values on the same row.

Therefore, the first line of the extract means that if the machine is in state 1 and the head reads the symbol "A" then the next state is 785, the symbol "5" should be written to the current square, and the tape should move 1 square to the left. The final line of the extract means that if the machine is in state 1023 and the head reads the symbol "#" then the next state is 1020, the symbol should be deleted from the current square and the tape should move 23 squares to the right.

This is propositional logic..!

It's a table of conditionals whose two premises (the "CURRENT STATE" and "HEAD READ" values) are conjuncted (logical AND) to identify what the resulting behaviour should be (the "NEXT STATE", "HEAD WRITE", and "MOVE TAPE" values). IF the state is this AND the value read by the head is that THEN go to the specified next state, write such-and-such to the tape and move the tape by however many squares in a certain direction.

If you look carefully, hidden within the extract are also disjunctions (logical OR). The middle two lines (for state 516) have the same state and outcome, but different symbols read from the head. This is the same as checking IF (the state is 516 AND the head reading is empty) OR (the state is 516 AND the head reading is "#") THEN the next state is 657, write "!" to the tape and the tape should move 86 squares to the right.

But what is the purpose of this machine? What does it do?

It follows the four rules, that start the instruction manual, to repeatedly perform the logical steps defined in the "What to do next" table. The steps in the table react to and change the state of the symbols on the tape in order to arrive at a desired end result. What that end result is depends upon what's on the tape when the machine is switched on and how the steps in the table interact with that starting state and subsequent states of affairs.

In other words, given a certain "what to do next" table, a starting state and some input on the tape, it computes a result.

It is a computer!

If you had the instructions, a copy of the table, a physical tape, a pencil and an eraser you might follow the rules from the start state (state 1) until the end state (state 0) was achieved. But the machine would do it much faster and wouldn't get tired, bored or make mistakes.

A cardboard computer

This process might feel familiar: deliberately precise rules describe how, given certain states of affairs, such and such things must happen. States of affairs unambiguously describe how things are in the world. For example, "the machine is in state 284 and the current square on the tape contains an H". When I say that such and such must happen, I mean clear and unambiguous instructions describe how the machine proceeds given a certain state of affairs.

This is how I described the game of Snap in the first article!

It turns out that computing a result is not that different to playing a children's game. Computing a result is the same as following a set of instructions for changing states of affairs. Such unambiguous instructions are called algorithms.

Since the rules in the "what to do next" table are logical, they can be represented by the sorts of electrical circuitry described in the previous post. Such circuitry receives input signals from the components in the head, and controls other components such as the motors that control the tape.

For this machine to do anything meaningful, someone will have carefully crafted the "what to do next" rules so the symbols on the tape when the machine is switched on in state 1, are transformed into a completely-new-yet-useful set of symbols when the machine achieves state 0. The algorithm is defined by the "what to do next" rules written in the table and encoded in the circuitry. The machine transitions from state 1 and what may be found on the tape at that moment, via a huge number of intervening states, to state 0 and a computed result written on the tape.

Alan Turing
Source - this image is in the public domain.

What I've described is a Turing machine. Such an imaginary device was invented by the British mathematician Alan Turing (the chap in the photograph). It turns out that anything that can be computed by a set of instructions that manipulate symbols (the algorithm) can be computed by a Turing machine. Anything that works in a way that is equivalent to a Turing machine is described as Turing complete because it, too, is able to compute outcomes from an algorithm. To be Turing complete is to be a programmable computer.

Turing explained how computation works by describing his machine. If you create something that can work like a Turing machine then you have also created a computer. You don't have to prove your computer is complete, because Turing already did that for you. This equivalence of capability is why we learn about Turing machines: they make it clear what it is to be a computer.

There are many variations on a Turing machine. The one I've described could be modified in a couple of ways:

  1. The machine could use multiple tapes with multiple heads rather than a single tape and a single head. The information used to make a computation is available in corresponding squares on several tapes, rather than as a sequence of individual values on one single tape.
  2. The alphabet of symbols that could be written to or read from the tape need only consist of two symbols: one representing "on", and the other representing "off". Put simply, information is represented in a binary fashion as shown in the following illustration (white or "1" is on, black or "0" is off).
1 0 1 1 0 0

Each of these changes wouldn't make the machine any more powerful than the one I've described, but they may make the machine easier to build and program. The important property of the machine is its Turing completeness.

The computer you're using to read this article obviously works in a completely different way to the seemingly ramshackle contraption I describe. While there is no tape or head inside your computing device, there are transistors etched into silicon chips that react to and change the state of values stored in memory, billions of times a second. The chips connect to other parts of your device via input/output pins. The other parts of your device, connected via specialist hardware, might include things like a screen, keyboard, mouse, speaker or microphone.

Just as a Turing machine iterates over the same cycle of reading from the head, writing to the tape and transitioning to a new state, a silicon-based chip, synchronised by a clock, carries out a similar iterative cycle of work one instruction after another. In fact, the clock speed of your machine's CPU tells you how many instructions the chip will carry out each second (for instance, a 3GHz chip will manage 3 billion instructions a second).

If you're interested in how an actual chip behaves, this link takes you to a simulation of an ARM1 chip -- a forerunner to those that run your mobile phone. Click on the "play" button to watch the chip "operate". It's possible to zoom in and out, move around, speed up or slow down the clock and see how individual transistors are connected to and interact with each other.

If a computer is simply something that is Turing complete, why is it that we're able to use such devices to read and write words, hear music, draw pictures, watch videos and all the other meaningful stuff for which we find computers so useful?

The answer relates to how we perceive things, our talent for abstraction and our cultural context. The next and final post will explore these concepts.


Why Computers Work (part 3)

(Part 1, part 2, part 4, part 5)

It's easy to imagine one thing representing another. We do this all the time.

For example, in the UK the Royal Standard always flies from buildings containing the monarch. This flag is understood to represent the Queen's presence and somebody ensures it is flown at the right moment from the buildings she visits.

The Royal Standard over Buckingham Palace
Source - Licensed under CC BY-SA 3.0.

Here's a twist...

Instead of the Royal Standard, a light on top of a building's flag pole could represent the Queen's presence. If the light is on, she is present.

Notice there are two possible states for the Queen (she's either present or not); this is mirrored by the two possible states for the light (it is either illuminated or not), and we are playing along in a sort of game that provides meaning for such a representation (we understand that the Queen is present if the light is illuminated, just like we do when we see the flying of the Royal Standard). One thing (a light) is representing something else (the Queen's presence).

Such a device could be built using the circuit shown in the diagram below:

Simple switch circuit

At the top is the symbol representing a battery that supplies current for the other components. Lines show how the battery and other components connect. The circle containing a cross represents a bulb and the gap created by the line veering off at an angle represents a switch. In the following photograph the real-world components, shown next to their symbols, connect to form the circuit:

Electrical components

The switch can be in two possible states: on (creating a circuit so current moves through the components, thus lighting the bulb) or off (where the circuit is broken, stopping the current and extinguishing the bulb).

Here's a royal brain twist for you: can you think of something else that deals with only two possible states?

If you thought, "propositional logic" then you deserve a knighthood!

The two possible states of the circuit, as controlled by the switch, mirror the two possible states encountered in propositional logic: on/true and off/false. We could play along with the logical game and say that the circuit represents the truth value of the proposition, "the Queen is present".

Here's another royal brain twist: it's also possible to make simple circuits that mimic the logical operations found in propositional logic ("and", "or", "not" and all the rest).

By re-arranging the physical components of the circuit, from a logical point of view, the illuminated bulb could represent the presence of the Queen and/or her heir, the Prince of Wales:

  • The Queen is present AND the Prince of Wales is present.
  • The Queen is present OR the Prince of Wales is present.

Remember, propositional logic doesn't care about the meaning of the propositions ("the Queen is present", "the Prince of Wales is present"), so the circuits' behaviour could be generalised to represent "A and B" or "A or B".

Here's the diagram for the "and" circuit:

And circuit

The two switches are labelled to show how they represent the propositions "A" and "B". Because of the consecutive arrangement of the switches, the states of the switches and the resulting behaviour of the bulb match the truth table for "and": if both switches are on, then the bulb is on (representing "true"); otherwise, in all other combinations of switch states, the bulb is off (representing "false").

Here's the diagram for the "or" circuit:

Or circuit

In this case, the parallel arrangement of the switches causes the circuit to mirror the truth table for the logical "or" operation. If one (or both) of the switches is on, then the bulb is on; otherwise, when both switches are off, the bulb is off.
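In Python terms, the behaviour of these two circuits can be sketched with the built-in "and" and "or" operators: each switch state is a boolean and the returned value represents whether the bulb is lit. (The function names are mine, purely for illustration.)

```python
def and_circuit(switch_a, switch_b):
    # Switches in series: current reaches the bulb only if both are closed.
    return switch_a and switch_b

def or_circuit(switch_a, switch_b):
    # Switches in parallel: current reaches the bulb if either is closed.
    return switch_a or switch_b

# Print the truth tables the circuits mirror.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", and_circuit(a, b), or_circuit(a, b))
```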

The circuit for "not" is slightly different and allows me to introduce a new (but important) electrical component: the transistor.

Transistors control the flow of electrical current with three connections called "gate", "source" and "drain". Electrical current flows between "source" and "drain" only if current is also applied to the "gate" connection. Rather than controlling the flow of current with a switch operated by a person, a transistor is controlled by another electrical component via the "gate".

Transistors work because they're made from two different types of silicon, a material that only conducts electricity under certain conditions (which is why it's called a semi-conductor). The silicon is mixed with certain impurities to create p-type and n-type silicon. In the diagram below the p-type silicon is shown in blue, the n-type silicon in red and the gate connection in green.

Transistor off

In a similar way to how "and" and "or" circuits behave as a result of the physical arrangement of their components, the cleverness of transistors comes about because of how the p-type and n-type parts are physically arranged. The "source" terminal is connected to an n-type layer (in red, on the left of the diagram), the "gate" (in green) to the p-type barrier layer (in blue) and "drain" to another n-type layer (in red, on the right of the diagram).

In very simplistic terms, if a voltage is applied to p-type silicon via the "gate" connection, it behaves like n-type silicon. When in this state it is no longer an insulating barrier between the "source" and "drain" and electricity can flow. This is shown in the diagram below.

Transistor on
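In other words, a transistor behaves like a switch operated by a voltage rather than a finger. Here's a loose sketch of that behaviour in Python (an analogy with names of my own invention, not real electronics):

```python
def transistor(gate, source_current=True):
    """Current flows from "source" to "drain" only while a
    voltage is applied to the "gate"; otherwise the p-type
    barrier blocks it."""
    drain_current = source_current and gate
    return drain_current
```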

Transistors come in many shapes and sizes, but are most common as extraordinarily small components etched onto silicon (i.e. microchips). The image below was taken by an electron microscope and shows a transistor on a microchip with the parts labelled.

A Transistor on a microchip
Source - educational fair use.

Back to the "not" circuit: remember that electrical current always takes the easiest path to "ground" (this is why a lightning rod works). The following (very simplified) circuit diagram shows how a transistor and a connection to ground are used to make something that behaves like a logical "not".

The transistor is represented by a circle pierced by three lines. The source is represented by the line entering the circle at the 1 o'clock position, the drain by the line at the 5 o'clock position and the gate by the line at 9 o'clock.

Not circuit

On the left, the button labelled "A", attached to the gate, is off (representing false) so current is unable to pass through the transistor to ground. As a result, the electrical current (in red) flows through the lamp to illuminate it (representing true). On the right, the button is on (in green, representing voltage applied to the transistor's gate). The current is able to flow through the transistor from the source to the drain and then to ground (so no current flows to the bulb to illuminate it). The red arrows make it clear how the current flows.

Thus, the lamp is always in the opposite logical state to the switch and the circuit mirrors the behaviour of a logical "not". If the switch is off, the light is on and if the switch is on, the light is off.
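The whole arrangement can be sketched as a function (again, the names are mine): when the switch at the gate is on, the current drains away to ground and bypasses the lamp.

```python
def not_circuit(switch_a):
    # When the switch at the gate is on, the transistor conducts
    # and the current takes the easy path to ground...
    current_drains_to_ground = switch_a
    # ...so the lamp lights only when no current is diverted.
    lamp = not current_drains_to_ground
    return lamp
```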

The important brain twist is understanding that we can design physical systems (i.e. electrical circuits) that appear to behave as logical ones.

Yet another seemingly contradictory brain twist is to remove "logic" from any explanation of the circuits.

From a scientific point of view, an observer of the circuit can only describe physical properties and behaviours in terms of electrical current, the mechanical behaviour of switches, the incandescent properties of the filament and the electrochemical nature of batteries and transistors. You won't find any formal rules of propositional logic here, nor meaning ascribed to physical states of affairs. The scientific view of the circuit is the "how" of the circuit.

Yet if we agree the on/off states of the switches represent the true/false values of propositions "A" and "B" and the on/off status of the light represents a resulting logical outcome (true/false) then the behaviour of the first circuit undoubtedly mirrors the truth table for "and". Remember, it's not logic that makes the circuit behave in this way (it's actually physics!), but because we play along in a game where physical states mirror and thus accurately represent logical ones, then the circuit acquires an additional layer of meaning in terms of the rules of logic.

This is the "why" of the logical circuit: a human is needed to understand what the behaviour of a physical object apparently represents. Without the meaningful human-ascribed behaviour there is just an object described by physics.

Here's another instance of this phenomenon: there is no physical law described by science that explains why green means "go" and red means "stop" at traffic lights - yet the traffic lights are very much a physical system that can be described by science. If meaning is involved, then humans and their cultural norms are needed to make sense of it to explain what it represents.

Teleology (using meaningful purpose or design to explain phenomena) isn't a part of the "how" of physics-based descriptions of the world - yet humans commonly retrofit purpose or design to describe physical systems that exhibit behaviour appearing to represent something meaningful, such as logical operations or traffic lights. The brain twist is to realise where and when such human interventions assign meaning to the physical world, such as in the case of the logic circuits I've described.

Imagine an arrangement of many transistor-based circuits (representing various logical operations), chained together so the output of one circuit provides the input to the next. This yet-more-complicated arrangement of logical circuits defines the meaningful behaviour of a device. Such a device, like its constituent logical circuits, connects to the outside world via inputs and outputs and has a strange capability: to change its own state in order to store and process information.
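To make the chaining concrete, here's an illustrative sketch in Python (the gate functions are my own stand-ins, not a real device): treat each logical circuit as a function and feed the output of one into the input of the next.

```python
def and_gate(a, b):
    return a and b

def or_gate(a, b):
    return a or b

def not_gate(a):
    return not a

# Chaining: the output of the "and" circuit becomes the input
# of the "not" circuit, yielding a new compound operation (NAND).
def nand_gate(a, b):
    return not_gate(and_gate(a, b))
```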

The next post will explore such a machine.

Movements of Thought

Why Computers Work (part 2)

(Part 1, part 3, part 4, part 5)

Just as one can describe rules for a card game, it is possible to describe rules for thinking: rules for movement of thought.

This is the study of Logic, and using these rules is called reasoning. Reasoning with logical rules allows others, who know such rules, to follow the movements of thought that brought you to a certain conclusion. Furthermore, because there are rules, it's possible to notice when they're ignored or used incorrectly (when movements of thought don't make logical sense).

There are many types of logic, but the one we're going to examine is called propositional (or sentential, or boolean) logic.

"Propositional" and "sentential" are just descriptive names for a type of logic that deals with forming sentences by combining propositions. In logic, propositions make assertions that are either true or false. George Boole (1815-1864), shown below, invented a system of mathematical algebra that works like propositional logic (it also deals with values that are either true or false) and so the term "boolean" is often used synonymously.

George Boole
Source - this image is in the public domain.

Here's a contrived example of a sentence in propositional logic:

"If it is sunny and I am wearing a thick coat, then I am hot."

I want to draw your attention to some important aspects of this sentence that may not, at first, appear obvious:

  • It contains two propositions called premises ("it is sunny" and "I am wearing a thick coat"). Premises make assertions about states of affairs.
  • It ends with a conclusion ("I am hot"). A conclusion is a proposition that depends upon the premises in some way.
  • The sentence is in the form of a conditional ("if [premises], then [conclusion]"). A conditional is a rule saying that if its premises are true then the conclusion must be true.
  • The premises are related to each other by a logical operator called a conjunction ("and"). This is another rule: if the related premises are evaluated together, collectively they are true only if all of them are true.

Propositional logic describes the rules of a "game" to construct sentences that make logical sense. Playing by the rules of logic forces everyone to reach the same inevitable conclusion: if we accept that the premises are true (it is sunny and I am wearing a coat) propositional logic dictates the conclusion must be true (I am hot). The object of propositional logic is to use the rules to evaluate (work out) if a sentence is true or false.

Here's the brain twist: propositional logic doesn't care about meaning. The important logical aspects of the example above don't concern my state in the real world (which is why the sentence sounds slightly odd). Propositional logic only cares about truth and how propositions fit together. I could revise the example to:

"If A and B then C."

It doesn't matter what A, B or C stand for nor what they may mean -- from the perspective of propositional logic all that matters is that there are two conjoined premises (A and B) and a conclusion (C) expressed in a conditional (if ... then ...). If one or both of the premises is false, then the conjunction "A and B" is false and the conditional no longer guarantees the conclusion. Why? Because the logical rules pertaining to conditionals and conjunctions (and nothing else outside those rules) make it that way!

The brain twist is divorcing yourself from meaning -- just concern yourself with truth values of the propositions, the structure of the sentence and following the rules.

In the same way the rules of Snap explain what must happen, given card related states of affairs, so the rules of propositional logic do the same with propositions and sentences. The rules of Snap don't care what the specific values of cards are, just that such values may match. Similarly, propositional logic doesn't care what the specific meanings of the propositions may be, only that such propositions connect in a sentence that can be evaluated with rules dealing in just two possible states: true and false.

The simplest way to express the rules that govern such logical operations for connecting propositions is with a truth table.

Here's the definition of "and" (conjunction):

 A | B | A and B
 F | F |    F
 F | T |    F
 T | F |    F
 T | T |    T

Can you see how it works?

The first two columns represent propositions labelled "A" and "B". The third column represents the outcome of the "and" operation, given the values of "A" and "B" in the first two columns. Each possible combination of truth value for "A" and "B" is enumerated as a row with the resulting truth value for the "and" operation in the third column. It's a simple tabular way to express the rules of propositional logic. If you had any doubt how "and" (conjunction) worked, you'd find the definitive answer in this truth table.

For instance, take the first row: if propositions "A" and "B" are both false (as expressed in the first two columns), then the outcome for this rule (expressed in the third column) must be false.
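Such a table can be generated mechanically. Here's a short sketch in Python (the function name is my own) that enumerates every row of the "and" truth table:

```python
from itertools import product

def truth_table_and():
    """Each row is (A, B, A and B), enumerating every possible
    combination of truth values, just like the printed table."""
    return [(a, b, a and b) for a, b in product([False, True], repeat=2)]

for row in truth_table_and():
    print(row)
```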

Here's another truth table that defines the rule for the logical operation called "or" (disjunction):

 A | B | A or B
 F | F |   F
 F | T |   T
 T | F |   T
 T | T |   T

Let's pretend that "A" is false but "B" is true. How would you evaluate the truth value of "A or B"? Is it true or false?

The answer is that "A or B" is true. Why? Because the second row tells us so: the "A" column is false, the "B" column is true so the result, expressed in the third column, is true.

Here's one final truth table. It's a bit different to the other two since it only works on a single proposition:

 A | not A
 T |   F
 F |   T

It's easy to see what the "not" (negation) operation does to a proposition: it flips its truth value so false becomes true, and true becomes false.

While these logical operations have familiar names ("and", "or" and "not") that appear to relate to how they work, it is the truth table and only the truth table that defines how they behave in propositional logic, not any similarity to how we may use such words in everyday English. There are further rules, expressed as truth tables, for connecting propositions that you may wish to look into. They are XOR (eXclusive OR), NAND (Not AND), NOR (Not OR) and XNOR (eXclusive NOR).
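For the curious, each of those further operations can be built from just "and", "or" and "not". A sketch in Python (my own formulations, worth checking against the truth tables yourself):

```python
def xor(a, b):
    # True when exactly one of a, b is true.
    return (a or b) and not (a and b)

def nand(a, b):
    # The negation of "and".
    return not (a and b)

def nor(a, b):
    # The negation of "or".
    return not (a or b)

def xnor(a, b):
    # True when a and b share the same truth value.
    return not xor(a, b)
```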

Logical puzzles become fun when you combine such logical operations to build more complicated structures. Take for example:

(A and B) or (C or not D)

I've put parentheses ("(" and ")") around propositions so you can see how they relate to the logical operators (the and, or and not). If we pretend all the propositions represented by letters are false, what is the overall truth value of the sentence?

To find the answer we play the logical "game" in the same way we would with Snap: we follow the rules.

Start by evaluating the operators within the parentheses. If we replace the propositions "A" and "B" with their truth values (remember, all the propositions are false), we get:

(false and false) or (C or not D)

Given the rule set out in the truth table for "and", the propositions in the first parentheses evaluate to false. Here's how the sentence looks as a result:

false or (C or not D)

To evaluate the "or" in the remaining parentheses we should first evaluate the "not" operator to find the truth value of the proposition on the right. If "D" is false, then the truth table for "not" tells us that "not D" must evaluate to true. Since "C" is false, the sentence looks like this when "C" and "not D" are replaced by their truth values:

false or (false or true)

The truth table for "or", when applied to values in the remaining parentheses tells us that if one of the propositions is true, then the "or" operation must evaluate to true, giving us:

false or true

Re-using this rule to evaluate the remaining "or" operation gives the result:

true

The only way to solve such logical puzzles is by following the inevitable steps dictated by the rules. That's how logic works!
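You can check the worked evaluation by letting Python follow the same rules -- its `and`, `or` and `not` operators obey the same truth tables (the variable names simply mirror the propositions):

```python
# All four propositions are false, as in the worked example.
A = B = C = D = False

# Evaluate the parentheses first, then combine with "or",
# exactly as in the step-by-step evaluation.
result = (A and B) or (C or not D)
print(result)  # prints True
```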

But why does logic work?

For the same reason why the game of Snap works: we modify our behaviour to follow unambiguous rules because we want to determine the truth of sentences in propositional logic (the aim of playing this sort of logic game). Furthermore, by pretending propositions represent states of affairs in the real world, we can use logic to describe and, in a sense, encode aspects of the real world. For instance, we could describe the rules of Snap with logic.

But why is logic useful?

Because logic clearly and unambiguously describes the relationships between states of affairs (premises), and allows us to take action given a resulting truth value. Consider, for example, this conditional statement: IF the card on top of stack A is an ace AND the card on top of stack B is an ace, THEN shout "SNAP!". Logic describes the movement of thought needed to participate in the card game (or any other useful structured activity).
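That conditional translates almost word for word into code. A sketch in Python (the stack names and card values are illustrative, not a full implementation of Snap):

```python
def shout_snap(stack_a, stack_b):
    """IF the top card of stack A is an ace AND the top card of
    stack B is an ace, THEN shout "SNAP!" (otherwise stay quiet)."""
    if stack_a[-1] == "ace" and stack_b[-1] == "ace":
        return "SNAP!"
    return None
```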

Here's a final "brain twist" for you:

If Logic provides an inevitable, almost mechanical framework for organising and describing movements of thought then we should be able to build machines to work with such seemingly mechanical logical rules.

It turns out that we can, and describing such machines is the next step in describing why computers work!

Why Computers Work: An Introduction to Following Rules

Why Computers Work (part 1)

(Part 2, part 3, part 4, part 5)

This is the first of five short blog posts exploring why computers work.

I'm going to present a friendly introductory overview for laypeople, from zero to Turing machines, automata, abstraction and more. My aim is conciseness and clarity, so I've necessarily missed out, glossed over and simplified things. There are plenty of more advanced resources online should you wish to investigate this subject further.

Most importantly, I hope to stimulate your thinking about computers by creating a place to explore ideas.

I hope you enjoy these articles and, as always, I love getting constructive feedback via email. Unless otherwise stated, all images and diagrams were created by the author.

αἰὼν παῖς ἐστι παίζων, πεσσεύων· παιδὸς ἡ βασιληίη.

(The universe is a child's game. A child's kingdom.)

~ Heraclitus (Fragment B52)


Computers are ubiquitous.

They touch all aspects of our lives, from mediating our social interactions to modelling aspects of our culture and managing the everyday infrastructure of society.

Therefore, computers are interesting and, understandably, folks want to know how the machines that automate and control so much of our world work. This is a typical response.

I'm going to take a different approach.

I'm going to explore why computers work.

What's the difference?

Answers to "how?" questions tell us what method or steps make something happen. In contrast, the answers to "why?" questions describe what makes something possible ~ an opportunity to encounter a more fundamental perspective.

It's the difference between a car mechanic who may understand how an engine works (so they can fix broken engines), and a mechanical engineer who understands the physics and chemistry relating to why engines work (so they can improve the design of engines). Knowing how something works means your frame of reference is within the system, whereas if you understand why something works you're not bound by existing products, solutions or cultural practices.

Most importantly, cultivating an understanding of why something is possible is an invitation for playful creativity, fearless exploration and careful refinement of alternatives to the current crop of answers to "how?". It is an opportunity to enlarge and change our world ~ a form of intellectual empowerment and growth.

Knowing how is good, but understanding why is better.

With the scene suitably set, to start our journey, we'll learn all about...

Following Rules

Do you know how to play the children's card game, Snap?

(Bear with me, I promise it'll be worth it.)

Using a standard deck of cards, the aim is to win all the cards by taking turns to play.

Let's follow along as siblings Penelope (11), Sam (8) and Will (5) play a game:

Penelope (for no other reason than she's the oldest) acts as dealer. She shuffles the deck of cards so they're in a random order and deals them, face down and in equal quantity, to each player until there are none left to deal. This is the starting state from which all games of Snap begin.

Sam, the player to the left of the dealer, takes the first turn.

Players take a turn by moving the top card from their face-down stack onto the top of an adjacent stack of face-up cards, so the newly moved card is also facing up. Since this is Sam's first turn, the face-up stack doesn't exist, so a new stack is created with Sam's first face-up card taken from his pile of face-down cards.

So ends Sam's turn.

A game of Snap!
Source - From WikiHow and licensed under CC BY-NC-SA 3.0

The next player to the left, William, takes his turn in a similar fashion and so the game continues from one player to the next.

After a while Sam shouts "SNAP". He's noticed that two face-up cards on top of different stacks have the same value (for example, two aces are visible on top of Penelope and Will's stacks). Since he was first to shout "SNAP" he wins all the cards in the matching stacks of cards. He gleefully scoops up all the cards in the face-up stacks belonging to Penelope and Will.

Play continues to the left of the player who turned up the matching card.

A few moments later Penelope shouts "SNAP". But there is a problem: there are no matching face-up cards! William points this out to Penelope and reminds her she has to pay a forfeit by giving one card to each of the other players from her face-down stack. Sam and Will get a card each and Penelope is two cards down.

At this point in the game Sam has the most cards. Will and Penelope have fewer, with only a few cards' difference between the quantities in their face-down stacks.

Things are hotting up and the three children become more excited: when two matching face-up cards appear again all three of them shout "SNAP" at the same time. Since nobody can claim to have shouted "SNAP" first, and to avoid arguments, both the matching stacks are placed in the middle and added to the "Snap pool". When "SNAP" is next called unambiguously and correctly, the winner will get both the matching stacks and any cards in the "Snap pool". The stakes have suddenly got higher.

Things quieten down for a few rounds until Penelope runs out of face-down cards to put onto her face-up stack. At this point she simply flips over her face-up stack and it becomes her face-down stack. Play continues as before.

Eventually, after a few more calls of "SNAP", Penelope finds that she's run out of cards and so out of the game.

Sam and Will play together until, through luck and fast reactions, Sam finds himself with all the cards and he's declared the winner of the game.

That's how you play Snap..!

The rules illustrated in this deliberately precise story describe how, given certain states of affairs, such-and-such things must happen. States of affairs unambiguously describe how things are in the world. For example, "there are two stacks of face-up cards whose top cards are of matching value". When I describe how such-and-such must happen, I mean clear and unambiguous instructions describe how play proceeds given a certain state of affairs.

Put simply, this is how the game of Snap is played.

But why is Snap played?

Because, in addition to knowing the rules, we understand that playing the game of Snap means modifying one's behaviour to follow these rules. If everyone modifies their behaviour in accordance with the rules then folks can play together. Obviously, we don't explain card games in such a formal manner to very young children. But, as my description shows, children discover it's lots of fun to informally learn and skilfully follow rules that bring about exciting situations in games.

Knowing how to play Snap is analogous to knowing how computers work. Looking beyond the rules of Snap to appreciate that human behaviour and enjoyment motivates people to follow the rules of the game, shows us why Snap is played. This insight may also inspire us to invent new games. Perhaps we become inspired to explore further aspects of human behaviour relating to following rules, or we may even reflect upon the nature of play or games. In any case, exploring why rather than how provides a fascinating perspective from outside the system (of Snap, in this case).

We need to achieve an analogous enlargement of perspective beyond how computers work in order to understand why computers work. Only by acquiring a perspective outside current norms can we possibly hope to invent or improve technology and explore the nature and use of such devices. The alternative is churn and recycling of existing ideas within the current system ~ a situation that stifles innovation or positive change.

The computing-related rules and states of affairs I'm going to describe are not much more complicated than the game of Snap explained above. However, what follows may include a few brain twists where I expect you to use your imagination to see things in a seemingly unusual or unintuitive way! I'm challenging you to acquire a new perspective about seemingly everyday things. The external world remains as it was before, but you will have changed. Your newly acquired perspective reveals a deeper understanding of what you're observing and why this makes computing possible. It's fun and can result in rather pleasant "aha" moments (if you were a cartoon character, it's that feeling you get when a lightbulb appears above your head).

The next chapter in our story is a beautiful brain twist: to think about thinking...