A revolutionary new computer based on the apparent chaos of nature can reprogram itself if it finds a fault
OUT of chaos comes order. A computer that mimics the apparent randomness found in nature can instantly recover from crashes by repairing corrupted data.
Dubbed a “systemic” computer, the self-repairing machine now operating at University College London (UCL) could keep mission-critical systems working. For instance, it could allow drones to reprogram themselves to cope with combat damage, or help create more realistic models of the human brain.
Everyday computers are ill suited to modelling natural processes such as how neurons work or how bees swarm. This is because they plod along sequentially, executing one instruction at a time. “Nature isn’t like that,” says UCL computer scientist Peter Bentley. “Its processes are distributed, decentralised and probabilistic. And they are fault tolerant, able to heal themselves. A computer should be able to do that.”
Today’s computers work steadily through a list of instructions: one is fetched from memory and executed, then the result of the computation is stashed in memory. That is then repeated – all under the control of a sequential timer called a program counter. While the method is great for number-crunching, it doesn’t lend itself to simultaneous operations. “Even when it feels like your computer is running all your software at the same time, it is just pretending to do that, flicking its attention very quickly between each program,” Bentley says.
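The fetch-execute cycle described above can be sketched in a few lines. This is a toy illustration, not any real machine's instruction set: a program counter steps through a list of instructions strictly one at a time, stashing each result back in memory.

```python
# A minimal sketch of the sequential fetch-execute cycle: the program
# counter (pc) is the "sequential timer" that forces one-at-a-time
# execution. Instruction names and memory layout are illustrative.
memory = {"a": 2, "b": 3, "result": None}
program = [
    ("load", "a"),       # fetch a value into the accumulator
    ("add", "b"),        # add another value to it
    ("store", "result"), # stash the result back in memory
]

pc = 0   # program counter
acc = 0  # accumulator register
while pc < len(program):
    op, arg = program[pc]  # fetch the next instruction
    if op == "load":
        acc = memory[arg]
    elif op == "add":
        acc += memory[arg]
    elif op == "store":
        memory[arg] = acc
    pc += 1  # advance strictly one instruction at a time

print(memory["result"])  # → 5
```

However many programs appear to run "at once", a loop like this is ultimately what a conventional processor core is doing, switching rapidly between tasks.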
He and UCL’s Christos Sakellariou have created a computer in which data is married up with instructions on what to do with it. For example, it links the temperature outside with what to do if it’s too hot. It then divides the results up into pools of digital entities called “systems”.
Each system has a memory containing context-sensitive data that means it can only interact with other, similar systems. Rather than using a program counter, the systems are executed at times chosen by a pseudorandom number generator, designed to mimic nature’s randomness. The systems carry out their instructions simultaneously, with no one system taking precedence over the others, says Bentley. “The pool of systems interact in parallel, and randomly, and the result of a computation simply emerges from those interactions,” he says.
It doesn’t sound like it should work, but it does. Bentley will tell a conference on evolvable systems in Singapore in April that it works much faster than expected.
Crucially, the systemic computer contains multiple copies of its instructions distributed across its many systems, so if one system becomes corrupted the computer can access another clean copy to repair its own code. And unlike conventional operating systems that crash when they can’t access a bit of memory, the systemic computer carries on regardless because each individual system carries its own memory.
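The repair mechanism described above can be sketched as follows. This is a simplified stand-in, not the systemic computer's actual code: several systems each carry their own copy of an instruction, and a corrupted copy is restored from the clean majority.

```python
# Hedged sketch of self-repair via redundant copies: the majority-vote
# repair rule here is an illustrative assumption, not UCL's algorithm.
from collections import Counter

CLEAN_CODE = "if temp > threshold: cool()"

# Multiple systems, each carrying its own copy of the instructions.
systems = [{"id": i, "code": CLEAN_CODE} for i in range(5)]

# Simulate corruption of one system's copy.
systems[2]["code"] = "if temp > thr#%!old: c@@l()"

def repair(systems):
    """Restore any corrupted copy from the majority of clean copies."""
    majority, _ = Counter(s["code"] for s in systems).most_common(1)[0]
    for s in systems:
        if s["code"] != majority:
            s["code"] = majority

repair(systems)
print(all(s["code"] == CLEAN_CODE for s in systems))  # → True
```

Because every system carries its own memory and its own copy of the code, losing one copy never takes down the whole computation.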
The pair are now working on teaching the computer to rewrite its own code in response to changes in its environment, through machine learning.
“It’s interesting work,” says Steve Furber at the University of Manchester, UK, who is developing a billion-neuron, brain-like computer called SpiNNaker (see “Build yourself a brain”). Indeed, he could even help out the UCL team. “SpiNNaker would be a good programmable platform for modelling much larger-scale systemic computing systems,” he says.