Emergent Behaviours: Towards computational aesthetics
Overview of the electronic arts moving from the 1960s to the creation of artificial intelligences. Discusses various artists and their projects.
In the 1960s many artists using computers were playing about with random numbers and using them as an analogy for intuition or the "creative mistake". It was a fairly limited area and by the late '60s most artists were expressing frustration with this simplistic metaphor. The catalyst for change was Martin Gardner's regular column in Scientific American which many system and computer-based artists read on a regular basis.
In 1967 Gardner reported on the work of Cambridge mathematician John Horton Conway. Conway had been working on a cellular automaton he called "The Game of Life". It was a simulated machine based on simple rules which govern the way individual cells in a matrix relate to each other over time. Cellular automata were not new: John von Neumann, one of the inventors of the modern digital computer, had experimented with them. They had inspired his hypothesis of self-replicating machines, which was not proved until after his death in 1957 (and is the origin of the "Terminator" legend).
Conway's automaton was new and different. Von Neumann's had been complex systems with complex rule-based behaviour; the Game of Life was simple and elegant but nevertheless manifested a very high degree of variety in its behaviour.
The Game of Life is played on a square matrix like a chequerboard. It can be any size. Each cell of the matrix can be either occupied or empty. The game begins when the matrix is 'seeded' with occupied cells. This is like the first frame in an animated sequence. The next 'frame' is calculated by first creating an identical matrix of cells and then filling it according to the following rules:
1. Examine each cell in the first matrix and count the number of neighbours that are occupied;
2. if the cell is occupied and has two or three occupied neighbours it will be occupied in the next frame;
3. if the cell is empty and has exactly three occupied neighbours it will be occupied in the next frame;
4. otherwise the cell will be empty in the next frame.
When all the cells in the new matrix have been calculated it replaces the original matrix and the cycle begins again.
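The rules above are short enough to sketch directly. The following is a minimal illustration; the function names and the decision to wrap the matrix at its edges are my own choices, not part of Conway's definition:

```python
# A minimal sketch of Conway's Game of Life on a fixed-size wrapping
# grid, following the rules described above. Cell values: 1 = occupied,
# 0 = empty.

def step(grid):
    """Compute the next 'frame' from the current one."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]          # fresh, identical matrix
    for r in range(rows):
        for c in range(cols):
            # Rule 1: count occupied neighbours (the 8 surrounding
            # cells, wrapping at the edges for simplicity).
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            if grid[r][c] and n in (2, 3):           # Rule 2: survival
                new[r][c] = 1
            elif not grid[r][c] and n == 3:          # Rule 3: birth
                new[r][c] = 1
            # Rule 4: otherwise the cell stays empty in `new`
    return new

# Seed a 5x5 matrix with a 'blinker', a simple period-2 oscillator:
# a horizontal row of three occupied cells.
seed = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    seed[2][c] = 1

frame = step(seed)          # the blinker flips to its vertical phase
assert step(frame) == seed  # after two generations it repeats
```

Running a larger seed for many generations is exactly the "frame after frame" recalculation described below, only without the graph paper.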
Back in 1967 people were astonished that such a simple deterministic system could produce such an amazing variety of behaviour. Marvin Minsky's fledgling Artificial Intelligence lab at MIT is rumoured to have dropped all its projects to investigate the Game of Life.
Unlike Conway, a mathematician who was more interested in proving hypotheses about the game, the MIT researchers were hands-on computer hackers. They coded up software versions of the game then let the multimillion-dollar defence department computers at MIT grind out endless generations. Soon they were discovering the strange inhabitants of this new computational space. The Glider was a stable creature that could perambulate across the matrix until it was consumed by a block. Then there were the curious life cycles of the pentominoes. A whole universe had been created out of just three simple instructions.
I remember my own excitement when I read Gardner's article, and my frustration at working on large sheets of graph paper with pencil and eraser as I laboriously recalculated frame after frame. The idea of using a system to create an artwork appealed to my logical mind and I began exploring other alternatives like video feedback and symmetry systems. Six years later, in 1974, I began to learn how to use computers and quickly recognised a perfect medium for explorations of this kind. In 1977 I began postgraduate studies in the Computer and Experimental Department of the Slade School of Fine Art at University College, London, and was astonished to discover other artists who shared my fascination with logical systems.
Chris Briscoe was creating both drawings and music sequences using a system that used mark/sound generating automata that based their future behaviour on their proximity to their neighbours in both 2 and 3-D space. His drawings were lyrical and surreal. The late Julian Sullivan shared my fascination with cellular automata and developed code for boundary detection that was eventually integrated into an early computer vision system developed by UCL's Engineering School. Steve Bell stuck a disposable sumi brush onto the plotter and began generating calligraphic sequences.
In 1978 a Polish mathematician and researcher called Andre Lissowski visited on his way home from Harvard and brought news of a French researcher called Benoit Mandelbrot and his work on iterative systems. Chaos, or non-linear, theory was a welcome insight into the processes that many of us had been working with for ten years.
Harold Cohen was an occasional tutor and was already well on his way to creating Aaron, the autonomous drawing system that is now on permanent exhibit at the Computer Museum in Boston, USA. The late Edward Ihnatowicz was also a regular visitor. In 1967 Edward had created SAM - the Sound Activated Mobile, a petalled flower (each petal was a parabolic microphone) that followed people's movements as they walked past its plinth at the Cybernetic Serendipity exhibition at London's ICA in 1968. Then in 1969, with a multimillion-dollar budget from Dutch electronics giant Philips, Edward created the Senster, which is arguably the first great masterwork of the computer art convergence.
The Senster was a 16-foot articulating arm based on a lobster claw and operated by a hydraulic system under the control of a Honeywell-8 computer. At the end of the arm, on the Senster's 'head', was an array of sensing instruments: directional microphones, radar and sonar. It lived in a large geodesic dome in Eindhoven, Philips' headquarters city, in Holland. If you made a noise, or moved, it came over to 'look' at you. If you made a loud noise or an aggressive movement it backed away. The geodesic dome could hold about 200 people. Each one was a variable in the Senster's behaviour which, not surprisingly given that variety of 'input', was amazingly complex. Behavioural scientists queued up to do experiments with the system and couldn't believe that something so simple (the Honeywell was a 12-bit computer with 4K of memory) could produce behaviour so lifelike.
Sadly the Senster was expensive to keep alive and Philips scrapped the system in 1975. Edward never got a major budget again and died in 1986.
Cohen's Aaron is a self-referential system that has no concept of anything outside of itself and its own rule structures. By comparison Ihnatowicz's Senster was computationally much simpler but had a prototype awareness of its environment. This reflected Edward's interest in the work of Piaget on infant learning. He often spoke about artificial intelligence and his belief that computers would attain intelligence in similar ways to young children and animals - by interacting with a complex environment that they had to accommodate and decode.
By the early '80s postmodern dogma had begun to bite and there was little sympathy, and even less support, for the high-modernist formalisms of system art. Lawrence Gowing, then Professor, closed down the Slade's Computer Department in 1981. Chris Briscoe and I left to set up Digital Pictures, hoping that we could underwrite our research by doing commercial computer animation. It didn't work out.
1981 was also the year that IBM released the PC and, by the mid-'80s, affordable computers with lots of 'user friendly' software were on the market. Ironically the art mainstream, which had never endorsed the work of the systems artists, fell over itself to accommodate the neat little postmodern appropriations created using digital darkroom software (and with a singular lack of consideration for the unique and intrinsic capabilities of the computational metamedium). Baudrillard said it was OK and postmodernism, in its guise as romantic self-indulgence, concurred.
Benoit Mandelbrot had meanwhile provided a focus that, after some struggle, established the study of non-linear systems as a credible discipline. Now it was scientists, mathematicians and engineers, and not artists, who were playing with postmodern science and 'artificial life'.
There were a number of notable exceptions, all working in relative isolation from the mainstream artworld. In the UK William Latham began a long-standing relationship with IBM as artist in residence, where he produced a genetic system that 'evolved' sculptures. It was based on his postgraduate research in the sculpture department at the Royal College of Art. In 1990 an exhibition of his work (fully funded by IBM) was offered to the Australian Sculpture Triennial, which rejected it because it wasn't "real" sculpture.
Latham's system was developed in collaboration with colleagues who were computer scientists and involves formal language theory as well as a genetic system similar to the "blind watchmaker" algorithm described by Richard Dawkins in his book of the same name. William selects an initial object and the system creates 8 mutated siblings. He then selects one of these and the process continues. When an acceptable object results William 'harvests' it and then continues the search. Over time he accumulates a collection of objects, each of which can be described by a unique formal 'sentence' of operators. These sentences can then be interpolated, and the result is a complex animation in which strange surreal objects metamorphose into each other.
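The selection loop described above can be sketched as a toy interactive-evolution program. This is only an illustration of the "blind watchmaker" idea, under loose assumptions: the genotype here is a plain list of numbers and an automatic scoring function stands in for the artist's choice, whereas Latham's actual system mutates formal sentences of shape operators and relies on his own eye.

```python
import random

random.seed(1)  # make the sketch repeatable

def mutate(genes, amount=0.2):
    """Return a slightly altered copy of a parent genotype."""
    return [g + random.uniform(-amount, amount) for g in genes]

def evolve(parent, choose, generations=50):
    """Each generation: breed 8 mutated siblings and let `choose`
    pick the one that becomes the next parent - the role the artist
    plays in Latham's system."""
    for _ in range(generations):
        siblings = [mutate(parent) for _ in range(8)]
        parent = choose(siblings)
    return parent

# Stand-in for the artist's eye: prefer genotypes whose values
# approach 1.0 (an arbitrary 'aesthetic' target for this demo).
target = lambda g: -sum((x - 1.0) ** 2 for x in g)

best = evolve([0.0, 0.0, 0.0],
              choose=lambda sibs: max(sibs, key=target))
# `best` has drifted from all zeros towards the target values.
```

Replacing the `target` function with a human picking from eight images on screen gives exactly the harvest-and-continue search the essay describes.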
Karl Sims survived for several years as artist in residence at supercomputer manufacturer Thinking Machines. His major work to date "Panspermia" is the story of a lifeform evolving throughout a galaxy. Although his work is quite different from Latham's it shares similar processes. Sims has said about his work:
"for me it is important to consider the computer as not just a fine arts medium, but as an artistic tool whose limits can be expanded. Ideally, a computer would allow the realisation of virtual worlds and images without limiting the levels of complexity of the resulting style". (SIM91)
Michael Tolson is a scientist turned artist and founder of Xaos Tools, which produces the Paint Alchemy filters for the graphic arts software Photoshop. His own 2-D painterly work has been exhibited widely and won the Computer Graphics Award at Ars Electronica in 1993.
"I imagine the computer as what Deleuze calls a 'pure interior' and its screen less as a window than as a petri dish. Considered from this vantage point, surface becomes a pretty exciting place indeed. Far from being an appliqué, it is more a substantial domain which is frothing with emergence". (TOL93)
It is also interesting to note that the Paint Alchemy filters his company developed are not the planned work of humans but are harvested from a genetic software farm that Tolson and his colleagues have created.
And it's here that we come full circle to the experiments of Cohen and Ihnatowicz in the late '60s. Aaron creates 'original' Cohen drawings, and Cohen's success was precisely that he managed to completely codify and externalise his own creative behaviour. Is it possible instead to create an aesthetic behaviour: an automaton that can itself create unique artworks that reflect a 'personal' and evolving aesthetic, one distinguishable from that of its human builder? I think it is, and that the quest for this goal is one of the most interesting challenges that artists face in these brave new cyberworlds.