Emergent behaviors are pervasive in physics, chemistry, and other natural sciences. For example, as snowflakes fall to the ground, their shapes change due to intricate interactions under varying thermal and humidity conditions.

One of the most interesting experiments concerning emergent behavior is John Conway's Game of Life. In this game, a simple set of rules is pre-defined for each unit of the map. At each step, a unit interacts with its nearby units. As time goes on, complex patterns and behaviors can be observed.
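The update rule fits in a few lines. A minimal sketch in Python/NumPy (the toroidal wrap-around boundary is my own simplification):

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbours of every cell by summing shifted copies.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with 3.
    return ((neighbours == 3) | (grid & (neighbours == 2))).astype(np.uint8)

# A "blinker" oscillates between a horizontal and a vertical bar.
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1                  # horizontal bar
after_one = life_step(grid)       # becomes a vertical bar
after_two = life_step(after_one)  # back to the horizontal bar
```

Even this three-cell blinker already shows the flavor of emergence: the global oscillation is nowhere stated in the local rule.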

Stephen Wolfram gave a systematic review of simple rules that generate complex patterns, which he called cellular automata. Cellular automata have also been used to explain the formation of snowflakes and have been applied in areas like image processing. However, as Ray Kurzweil pointed out in his book "The Singularity Is Near", the patterns generated by cellular automata are limited: no patterns more complex than those depicted in Wolfram's book have been observed. For example, no recognizable images, trees, or humans can be produced through Wolfram's approach.
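Wolfram's elementary automata are one-dimensional: each cell looks at itself and its two neighbours, so a rule is just a number from 0 to 255 whose binary digits fill in the transition table. A minimal sketch, using Rule 30 (one of the chaotic rules he highlights; the width and step count here are arbitrary):

```python
def elementary_ca(rule, width=31, steps=15):
    """Run one of Wolfram's 256 elementary CAs from a single live cell."""
    # Bit i of the rule number is the next state for the neighbourhood
    # whose (left, centre, right) bits encode the integer i.
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1           # start from a single black cell
    history = [row]
    for _ in range(steps):
        row = [
            table[(row[(i - 1) % width] << 2)
                  | (row[i] << 1)
                  | row[(i + 1) % width]]
            for i in range(width)
        ]
        history.append(row)
    return history

rows = elementary_ca(30)          # Rule 30: a chaotic triangular pattern
```

Printing `rows` (e.g. mapping 1 to `#`) reproduces the familiar irregular triangle from Wolfram's book.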

From an information-theoretic point of view, complex patterns contain much more information. However, if this information can be generated through simple rules in an emergent manner, then much of it is redundant. As a result, these rules could be recovered by observing the patterns through machine learning, just as we humans learn from our daily experiences.

I tried using machine learning to derive a cellular automaton that generates a complex pattern, say the Chrome logo.

This can be achieved if we treat each node in the map as a computing node, say a neuron in a neural network. Then we can learn the model with the backpropagation algorithm, applying a critic (a loss signal) at each pixel of the image. At first, no constraint such as "rules should be as simple as possible" is imposed, so the weights of the neural network are unconstrained. We start with the initial value:
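As a rough sketch of this setup, assuming a toy 8x8 target in place of the actual logo and a single-step update rule (the real experiment presumably iterates the automaton for more steps):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative, not the actual experiment): a random
# initial state and a small binary target instead of the Chrome logo.
H = W = 8
state = rng.random((H, W))
target = np.zeros((H, W))
target[2:6, 2:6] = 1.0

def neighbourhoods(x):
    """Stack the 3x3 toroidal neighbourhood of every cell: shape (H, W, 9)."""
    return np.stack([
        np.roll(np.roll(x, dy, 0), dx, 1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ], axis=-1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pixel_loss(w, b):
    """Per-pixel squared error of one CA update step: the 'critic'."""
    out = sigmoid(neighbourhoods(state) @ w + b)
    return float(((out - target) ** 2).mean())

# One shared rule: 9 weights and a bias, identical at every cell,
# trained by gradient descent on the per-pixel loss.
w, b = 0.1 * rng.normal(size=9), 0.0
loss_before = pixel_loss(w, b)
for _ in range(500):
    nb = neighbourhoods(state)               # (H, W, 9)
    out = sigmoid(nb @ w + b)
    grad = (out - target) * out * (1 - out)  # backprop through the sigmoid
    w -= 0.5 * np.einsum('hw,hwk->k', grad, nb) / (H * W)
    b -= 0.5 * grad.mean()
loss_after = pixel_loss(w, b)
```

The per-pixel error signal plays the role of the critic; with unconstrained real-valued weights, gradient descent steadily drives the loss down.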

After several iterations, we ended up with a neural-network output like this:

However, this result is not satisfactory, since our automaton does not generate complex patterns through simple rules. Instead, its rules are quite complicated. In fact, much of the information is stored directly in the weights of the neural network.

To restrict the rules the automaton can use, we can constrain the neural network by reducing its receptive field and tying the values the weights can take. After doing this, I found that the learning algorithm fails to fit the image. This probably explains why no more complex patterns are observed in Wolfram's book.
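The effect of such restrictions can be illustrated on a toy fitting problem: once the weights are forced onto a small shared set of values, the rule space shrinks from a continuum to at most 3^9 choices here, and the fitting error can only go up. A sketch, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fitting problem (illustrative names and sizes): per-pixel targets
# modelled as a linear function of 3x3 neighbourhood features.
features = rng.normal(size=(64, 9))   # one row of features per pixel
target = rng.random(64)               # one desired value per pixel

# Unconstrained rule: least-squares weights, free to take any real value.
w_free, *_ = np.linalg.lstsq(features, target, rcond=None)
err_free = float(((features @ w_free - target) ** 2).mean())

# Constrained rule: snap every weight to the nearest value in a small
# shared set, mimicking tied/quantised ("synchronised") weights.
levels = np.array([-1.0, 0.0, 1.0])
w_tied = levels[np.abs(w_free[:, None] - levels).argmin(axis=1)]
err_tied = float(((features @ w_tied - target) ** 2).mean())
```

Since the free weights are the exact least-squares optimum, any quantised version can only match or increase the error, which mirrors the failure to fit the image once the rules are restricted.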

In the future, I hope to apply theories of machine learning to prove the limitations of Wolfram's cellular automata.

I was thinking about the brain, in which the stuff that is doing the learning is apparently changed by what it has learned, and which can also learn purely by introspection, using stored information obtained earlier via the senses. This means that it must have rules for changing its current state to a new state, and for me the similarity between the brain and a cellular automaton became evident. When I wanted to know whether other people had similar ideas, I googled "cellular automata machine learning" and stumbled upon this website.

The idea of using a CA to generate a specific pattern is interesting, but I fail to see where the learning is.

What you try to do is to let an initial pattern in a CA evolve toward the Chrome logo using an initial rule. If the result is not satisfying, you (and not the CA) change the rule and run the CA again. By repeating this procedure, you (and not the CA) eventually come up with a rule that yields a reasonable rendering of the logo. Hence the question: who is learning here?

What I had in mind was a rule that could be modified by the CA itself; the CA itself should evaluate the quality of the rendering, so that it could throw away rules that do not improve this quality. This means that the initial pattern of the CA needs to contain a reference logo pattern that remains stable during the operation of the CA, just as is done in the brain.

A CA could be given input senses by allowing cells outside the grid to influence cells within the grid under the same transition rules. I wonder if one could implement AI via this route.
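A minimal sketch of this idea, assuming a simple threshold rule of my own choosing and "sense" cells clamped along the top border of the grid:

```python
import numpy as np

def step_with_senses(interior, senses):
    """One CA step where a border of 'sense' cells, held fixed by the
    environment, influences the interior under the same counting rule."""
    # Pad the grid and write the external sense values onto the top edge.
    padded = np.pad(interior, 1)
    padded[0, 1:-1] = senses
    # 3x3 sum (including the cell itself) at every position.
    nb = sum(
        np.roll(np.roll(padded, dy, 0), dx, 1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    )[1:-1, 1:-1]
    # Illustrative rule: a cell turns on if its 3x3 sum is at least 3.
    return (nb >= 3).astype(np.uint8)

interior = np.zeros((4, 4), dtype=np.uint8)
senses = np.ones(4, dtype=np.uint8)   # the environment "shows" a bar
out = step_with_senses(interior, senses)
```

After one step, the sensed bar has already imprinted itself onto the top row of the interior, so information flows from outside the grid inward under the ordinary transition rule.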

I think many of these ingredients are also present in Stephen Wolfram's CAs, but I am afraid I can't quite follow him; moreover, his ambition is to explain the universe via CAs.

Other applications of CAs are described, e.g., in "Cellular Automata Approaches to Biological Modeling" (http://www.math.pitt.edu/~bard/pubs/jtb_ca.pdf). I know that various stages in the early development of an embryo, like cleavage, blastulation, and gastrulation, have been emulated by CA models, but these applications of CAs have nothing to do with AI.

A far more ambitious idea is the Goedel machine proposed by Juergen Schmidhuber, but I think it is a bit far-fetched to call this machine a cellular automaton.

(http://www.idsia.ch/~juergen/goedelmachine.html)

Like a CA, the Goedel machine incorporates self-reference. I think this machine will incorporate an artificial recurrent neural network (RNN), but one that uses advanced concepts like "Long Short-Term Memory", which are inspired by the workings of the brain. (http://www.idsia.ch/~juergen/rnn.html)

One of the consequences of self-reference in the Goedel machine is that it will be conscious.

Since a lot is known about the way brains become conscious, I think this consequence is logical; moreover, it implies that many animals must be conscious as well.

Exciting, is it not?

The cellular automata proposed by Wolfram show that extremely simple rules can generate fairly complex patterns. My question is whether these underlying simple rules can be learnt from the observed complex patterns through AI techniques. In that effort, no cleverness is needed to figure out which rule corresponds to which pattern, nor do we need to try out all possible rules as Wolfram did.

Cellular automata, however, fail to explain patterns more complex than those claimed in Wolfram's book, for example living creatures or intelligence. In particular, given an arbitrary digital image, no cellular automaton can generate that image with extremely simple rules. What I found in my experiment is that the model complexity of cellular automata fails to cover complex patterns like images.

What I see as critical in a capable cellular automaton is "memory". In the theory of computation, devices with more memory usually have more computational power. For example, a push-down automaton recognizes more languages than a deterministic finite automaton because the PDA has unbounded (stack) memory; similarly, a Turing machine is more powerful than a context-free grammar. I think the work you mentioned on self-reference and recurrent neural networks is interesting because it suggests efforts toward incorporating memory into CAs.
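To make the memory hierarchy concrete with the classic example: the language a^n b^n needs only a single unbounded counter (the depth of a PDA's stack), yet no fixed-memory device can recognise it, because it cannot count arbitrarily high. A sketch:

```python
def accepts_anbn(s):
    """Recognise a^n b^n (n >= 0) with one unbounded counter, i.e. a
    pushdown automaton's stack reduced to its depth."""
    depth, seen_b = 0, False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False      # an 'a' after any 'b' is rejected
            depth += 1            # push
        elif ch == 'b':
            seen_b = True
            depth -= 1            # pop
            if depth < 0:
                return False      # more b's than a's so far
        else:
            return False
    return depth == 0             # all pushes matched by pops
```

The single integer `depth` is exactly the extra memory a finite automaton lacks; capping it at any fixed bound would break recognition for larger n.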

In Wolfram's vision, the universe could start from simple rules and evolve to a complexity that contains not only human beings, but also much more trivial patterns like this logo.

This does not imply that your initial pattern was somehow involved when the original logo was created.

I think that a more plausible initial pattern was formed in the brain of the original designer and that this pattern was formed using vectors instead of rasters.

Maybe it is interesting to repeat your experiment using a vector image, starting e.g. with a single dot.