JOY      SORROW      BIRTH      SEX      ART      ILLNESS      ANIMALS      LOVE      WAR

Commuter - Snakes
An installation by Rob Godfrey


Commuter's nervous system has been designed to enable consciousness, to give Commuter the ability to 'think'. This will not be achieved by algorithmic means. Commuter is not artificial intelligence; rather, 'non-biological intelligence'. Before discussing how this will be achieved, let's take a brief look at the history of AI. (Note: the term 'artificial intelligence' gets applied to everything from room thermostats to computer chess programmes to speech recognition software, and so on. In the context of the following discussion, artificial intelligence is taken to mean a machine that has a near-human level of consciousness.)

nothing contributes so much to tranquillize the mind as a steady purpose
  Mary Shelley, Frankenstein

Alan Mathison Turing (1912-1954) is often described as the 'founding father' of artificial intelligence. In 1936, while at Cambridge University, Turing described a computing machine which consisted of a limitless memory and the ability to read and write to that memory. The actions of the machine are dictated by a programme stored in its memory. Turing's hypothetical computing machine is now known simply as the universal Turing machine. All modern computers are, in essence, universal Turing machines.

To trace the roots of our modern computers we actually have to go back to the Victorian age: in 1854 an English mathematician by the name of George Boole published a work called An Investigation of the Laws of Thought, on Which are Founded the Mathematical Theories of Logic and Probabilities, in which he proposed that the logic of everyday situations is subject to mathematical laws and can be written into an algebra. Boolean algebra is binary, which means something can have the value 1 or 0; likewise, yes or no, or true or false, or high or low, etc, etc. There are no intermediate states. It has to be one or the other. Boolean algebra is the nuts and bolts of Boolean logic and was further developed by the likes of Augustus De Morgan, a colleague of George Boole. Boolean logic can, quite logically, be boiled down to three basic principles: And, Or and Not.

          AND: all must be true for the result to be true.
          OR: any must be true for the result to be true.
          NOT: produces the opposite; ie, if true is the input, false will be the output.
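These three principles can be sketched in a few lines of Python (an illustrative aside of mine, not part of the installation; the function names are my own):

```python
# The three basic Boolean operations, with True/False standing in for 1/0.

def AND(a, b):
    return a and b   # true only when both inputs are true

def OR(a, b):
    return a or b    # true when at least one input is true

def NOT(a):
    return not a     # produces the opposite of its single input

# AND: all must be true for the result to be true.
assert AND(True, True) is True
assert AND(True, False) is False

# OR: any must be true for the result to be true.
assert OR(False, True) is True
assert OR(False, False) is False

# NOT: if true is the input, false will be the output.
assert NOT(True) is False
```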

Boolean algebra is not exactly a riveting subject, and thus for a number of years Boolean logic went largely unnoticed by the world at large; that is, until an American called Claude Shannon realised that it was perfectly suited to electronic circuits. Electronic circuits exist in one of two states: On or Off (or High or Low), and the three principles of Boolean logic can act as 'gates', or switches; which might all sound pretty mundane, but from small acorns do mighty oaks grow, or perhaps slither: in 1937, shortly after Alan Turing had come up with the concept we now know as the 'computer', Claude Shannon showed in his master's thesis at the Massachusetts Institute of Technology (MIT) how Boolean logic gates could be built from electrical switching circuits. The computer age was born.
[Diagram: a half-adder built from Boolean logic gates]
The diagram above shows what is known as a 'half-adder' and illustrates the basic principles involved in applying logic gates to computers. This example shows two inputs of "1". You can try the other possible additions, 1 + 0, or 0 + 1, or 0 + 0, but that's not very exciting, is it. So, what's exciting about 1 + 1..? Well, Boolean logic gates arranged in the correct way will give the answer: 10. At this point you're no doubt scratching your head, but remember, we're dealing with the binary number system here: "10" in binary is "2" in decimal. What that half-adder is showing you, using simple Boolean logic, is that One plus One = Two... It's enough to make you believe in God, or at least, to quote from the Bible:
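The gate arrangement of a half-adder can be sketched in Python (a hypothetical illustration of mine): an XOR gate produces the sum bit and an AND gate produces the carry bit.

```python
def half_adder(a, b):
    """Add two single bits; return (carry, sum)."""
    s = a ^ b   # XOR gate: the sum bit
    c = a & b   # AND gate: the carry bit
    return c, s

# 1 + 1 in binary:
carry, total = half_adder(1, 1)
print(carry, total)   # prints the bits "1 0", i.e. binary 10 = decimal 2
```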

 
the serpent was more subtle than any other beast of the field that the Lord God had made
Genesis Ch 3:1-24


A half adder can only add two single bits, which is a tad tedious. However, join two half-adders together (plus an OR gate) and you get a full adder, which can also take in a carry bit from a previous addition. Now we can go on to chain full adders together to create things like ripple-carry adders and carry-lookahead adders, which can perform large sums. By adjusting the logic gates, binary adders can be used not only for addition, but also for subtraction, multiplication and division, which allows them to control a computer's data flow and process instructions. You are reading these words on the screen of a universal Turing machine, and that machine works by using simple 1s and 0s and Mr Boole's logic (a modern microchip can contain well over a million logic gates), which is rather amazing, when you think about it.
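The chaining works exactly as described, and can be sketched in Python (again, an illustration of mine under the standard textbook construction): two half-adders plus an OR gate make a full adder, and a row of full adders makes a ripple-carry adder.

```python
def full_adder(a, b, carry_in):
    """Add two bits plus a carry-in; return (carry_out, sum)."""
    s1 = a ^ b            # first half-adder: sum
    c1 = a & b            # first half-adder: carry
    s = s1 ^ carry_in     # second half-adder: sum
    c2 = s1 & carry_in    # second half-adder: carry
    return c1 | c2, s     # OR gate joins the two carries

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        carry, s = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)     # final carry becomes the top bit
    return out

# 6 + 7 = 13:  6 = 0110 and 7 = 0111 (written LSB-first below)
print(ripple_add([0, 1, 1, 0], [1, 1, 1, 0]))  # [1, 0, 1, 1, 0], i.e. binary 01101 = 13
```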

The fact that the Turing machine has a stored programme of instructions, and can write to its own memory, suggested that it could be allowed to modify its own programme - ie, the machine could act under its own volition - and thus was born the field of research known as 'artificial intelligence'. In the early days of AI, back in the 1950s and 60s, hopes were high and many people believed that within a short time we'd have machines which were intelligent and could think. This hasn't happened. In 1965, Gordon Moore, co-founder of Intel, predicted that microprocessors would double in complexity roughly every two years. This has happened and it is known as Moore's Law. However, there's only so much that you can cram on to a silicon chip, and within the next decade or so the circuits contained in a microprocessor will be measured on an atomic scale; and you can't get any smaller than that; at least, not in the Newtonian Universe. In the quest for ever faster processing speeds, researchers have been looking at alternatives to the traditional microprocessor design, such as DNA computers and quantum computers. These alternatives will still be Turing machines.

The Turing machine is largely to blame for the hopes and disappointments of the artificial intelligence community, most of whom have placed their bets on super powerful digital computers as the solution to artificial intelligence. Is this the right approach? I don't think so. Let's use a goldfish as an example: could a goldfish calculate the cube root of 42,000? I doubt it very much, because when it comes to evolution/survival, knowing the cube root of 42,000 is not high on the list of priorities (although I do know some very clever fish). A human with a gift for maths, using pencil and paper, could probably calculate it in a couple of minutes. Most humans, though, would be unable to do it. A modern day computer would take about ten nanoseconds (a nanosecond is a thousand-millionth of a second) and would spit out something like this:

34.760266448864497867398652190045
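For comparison, here is that cube root done the algorithmic way (a one-line Python sketch, not part of the installation; the precision is limited by floating point, so it gives fewer digits than the answer above):

```python
x = 42_000 ** (1 / 3)   # cube root of 42,000 via exponentiation
print(round(x, 6))      # prints 34.760266
```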

In terms of memory and processing power, you could compare the brain of a goldfish to what are called 'first generation digital computers', from the 1950s, yet the motor actions and behaviour of a goldfish are way beyond anything those first generation computers could do. Half a century down the line, a fish is still a fish and computers now have incredible processing power and memory, yet the fish is basically still more intelligent than the computer (in that computers still can't match the motor actions and behaviour of a simple creature like a goldfish). This seems to suggest that there's something else going on in the fish's brain, beyond mere processing power. Of course, it's something called 'thought', albeit very primitive fishy thought; and 'thought' is something we're going to have to think about when it comes to creating artificial intelligence.

No one has been able to build a machine that can think because there is a fundamental problem with regard to producing 'thought' inside a digital computer. The digital computer operates by using algorithms (programmes), and algorithms work by using maths: anything which can be explained mathematically can also be expressed algorithmically by a computer. And here's the rub: we still don't understand what 'thought' is and thus can't explain it in mathematical terms, which leaves the digital computer sinking thoughtlessly into the water with our aforementioned goldfish. To make matters worse, the flip side of this problem is that the Turing machine is a deterministic device - algorithms can only work with predictability - whereas it seems highly probable that the mysterious process we call 'thought' does not follow the same rules of predictability. So even if you could somehow formularise thought, a digital computer would still be unable to express it. This is explained in more detail in the next section, Dishwash Theory.

If you accept the proposition that 'thought' will not be formularised in a laboratory-near-you anytime soon, does this leave artificial intelligence dead in the water? No: even though you cannot explain 'thought' it is still possible to build a machine that can think. To explore this further we have to dip into the heady world of quantum mechanics.

The universe is not only queerer than we suppose, it is queerer than we can suppose.
  JBS Haldane

Put simply, quantum mechanics (which is a wider term for quantum theory and quantum physics) is the study and theory of subatomic particles; ie, particles smaller than atoms. This, of course, encompasses electrons, neutrons and protons, the old textbook idea of what an atom is made of, but it also involves even smaller particles such as quarks (the building blocks of protons and neutrons) and a whole host of others which are still being discovered. The thing about these subatomic particles is that they don't obey the laws of classical physics. In the quantum universe, particles pop into and out of existence. Time as we know it has no meaning. There are strange concepts like the weak force and zero-point energy. In the quantum universe particles can appear to influence each other instantaneously over considerable distances without any observed physical connection - a phenomenon known as entanglement (in Einstein's universe nothing is supposed to travel faster than the speed of light). In effect, it could be said that the quantum universe resides in the 'fourth dimension', and beyond. If you want an example of just how mind-boggling quantum mechanics can be, do a search on "Transactional interpretation of quantum mechanics", but not before you finish reading this page.


vii.  The Yank In The Tank

We chased limelight to Tower Bridge
to see our dreams hang in the air,
where once stews sucked on Eckett's ridge
and cholera took Bill Sikes' lair.
I caught my breath, your gymslip dare,
as little girls sang songs to Dave,
the thrusting piles of finance there
now plunged into Fagin's moist grave.
We thought it was rather quite brave
to swing with dollymops and rats
in such a very taboo place, save
for chic bistros and yuppy flats;
and I gave you a crooked grin;
you said: "shut-up and drink your gin".

Some scientists have theorised that the mind works at a quantum level (see, for example, Shadows of the Mind by Roger Penrose), where the determinism of classical physics is replaced by spontaneity. I, too, believe in the quantum mind (see the third in this series of essays, Non-Biological Intelligence). However, the quantum mind is irrelevant in the context of this discussion, because subatomic particles are not fully understood and so we end up in the same blind alley that the artificial intelligence community has found itself in with the brain-as-a-digital-computer concept. However, what we do know is that both subatomic particles and thought exist, even though they may never be proven to exist using the laws of classical physics.


So, where does this leave us..? It leaves us with particle accelerators. Physicists use these devices to study the behaviour of known subatomic particles and to find new particles (by smashing known particles together using very high energy electrical fields). In other words, because subatomic particles do not obey the laws of classical physics, and thus can't be neatly formularised and created at will, the only way of studying them is to create the conditions under which they can exist.

Commuter is not a particle accelerator, yet it follows the same principle: at our present level of understanding 'thought' cannot be formularised, so instead of directly creating the entity known as 'thought', Commuter creates the conditions under which thought can exist.

Which begs the obvious question: what are the conditions which give rise to thought? This is explained in the next section, Dishwash Theory.

 



e-mail: rob@spiderbomb.com

Spiderbomb.com