Emergent behaviors are pervasive in physics, chemistry, and the other natural sciences. For example, as snowflakes fall to the ground, their shapes change due to intricate interactions under varying thermal and humidity conditions.

One of the most interesting experiments concerning emergent behavior is John Conway's Game of Life. In this game, a set of simple rules is pre-defined for each unit of the map. At each step, a unit interacts with its nearby units. As time goes on, complex patterns and behaviors can be observed.
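The rules are short enough to state in a few lines. Here is a minimal sketch of one step of the game (the toroidal edges and the blinker example are my own choices for illustration):

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a 2D 0/1 array (toroidal edges)."""
    # Count the 8 neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth: dead cell with exactly 3 neighbors. Survival: live cell with 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "blinker": three cells in a row oscillate with period 2.
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
print(life_step(life_step(grid)))  # two steps return the blinker to its start
```

Even this tiny rule set already produces oscillators, gliders, and other structured behavior that is nowhere explicit in the rules themselves.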

Stephen Wolfram gave a systematic review of simple rules that generate complex patterns, which he called cellular automata. Cellular automata have also been used to explain the formation of snowflakes and have been applied to tasks like image processing. However, as Ray Kurzweil pointed out in his book "The Singularity Is Near", the patterns generated by cellular automata are limited: no patterns more complex than those depicted in Wolfram's book have been observed. For example, no recognizable images, no trees, and no humans can be produced through Wolfram's approach.
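For concreteness, a minimal sketch of a one-dimensional elementary automaton in Wolfram's rule numbering (the choice of Rule 30, the grid width, and the wrap-around edges here are arbitrary):

```python
def rule_step(row, rule=30):
    """One step of a 1D elementary cellular automaton (Wolfram numbering)."""
    n = len(row)
    out = []
    for i in range(n):
        # Each cell looks at its left neighbor, itself, and its right neighbor.
        pattern = (row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]
        # The rule number's binary digits give the next state for each pattern.
        out.append((rule >> pattern) & 1)
    return out

row = [0] * 31
row[15] = 1  # start from a single live cell in the middle
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = rule_step(row)
```

Rule 30 grows a famously irregular triangle from a single cell, yet the entire rule fits in one byte: that is exactly the kind of complexity-from-simplicity Wolfram catalogued.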

From an information-theoretic point of view, complex patterns contain much more information. However, if this information can be generated by simple rules in an emergent manner, then much of it is redundant. As a result, the rules can be recovered by observing the patterns through machine learning, just as we human beings learn from our daily experiences.
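The redundancy point can be made concrete with a small illustrative experiment of my own (the stripe pattern and sizes are arbitrary): data produced by a trivial rule compresses far better than random data of the same length, because a compressor only needs to store something like the rule.

```python
import random
import zlib

random.seed(0)

# A structured pattern generated by a trivial rule: alternating 8-byte stripes.
structured = bytes((i // 8) % 2 for i in range(4096))
# Essentially incompressible data: independent random bytes.
noise = bytes(random.randrange(256) for _ in range(4096))

print(len(zlib.compress(structured)))  # tiny: the rule is all there is to keep
print(len(zlib.compress(noise)))       # close to the original 4096 bytes
```

The same asymmetry is what a learner exploits: a pattern that looks large but was made by a small rule has a small description hiding inside it.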

I tried using machine learning to generate a cellular automaton from a complex pattern, say the Chrome logo.

This can be achieved if we consider each node in the map as a computing node, say a neuron in a neural network. We can then use the back-propagation algorithm to learn the model by applying a critic at each pixel of the image. At first, no constraint like "the rules should be as simple as possible" is imposed, so the weights of the neural network are free. We start with the initial value:

After several iterations, we end up with a neural network output like this:

However, this result is not satisfactory, since our automaton does not generate a complex pattern through simple rules. Instead, its rules are quite complicated: in fact, much of the information is stored right in the weights of the neural network.
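A minimal sketch of this unconstrained failure mode, with a random target standing in for the real image (none of these names come from my actual code): when every pixel gets its own free parameter, gradient descent simply memorizes the target, so the "rules" are as large as the image itself.

```python
import numpy as np

rng = np.random.default_rng(0)
target = (rng.random((16, 16)) > 0.5).astype(float)  # stand-in for the image

# One free parameter per pixel, squashed through a sigmoid.
w = np.zeros_like(target)
lr = 1.0
for _ in range(500):
    out = 1.0 / (1.0 + np.exp(-w))           # forward pass
    grad = (out - target) * out * (1 - out)  # d(MSE/2)/dw via the chain rule
    w -= lr * grad                           # gradient descent step

print(np.abs(out - target).mean())  # small residual: the weights memorized it
```

Nothing emergent is happening here; the information content of the picture has just been copied into the parameters.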

To restrict the rules the learned automaton can use, we can constrain the neural network by reducing its receptive field and tying the weights so that every cell shares the same ones. With these constraints in place, I found that the learning algorithm fails to fit the image: a high bias exists. This probably explains why no more complex patterns appear in Wolfram's book: no model with a low bias exists.
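A sketch of the constrained case under the same stand-in target (again, the names and the crude finite-difference training are my own illustrative choices, not the original experiment): a single 3x3 kernel shared by every cell gives only 9 parameters for 256 target pixels, and the loss plateaus well above zero.

```python
import numpy as np

rng = np.random.default_rng(0)
target = (rng.random((16, 16)) > 0.5).astype(float)  # stand-in for the image
seed = np.zeros((16, 16))
seed[8, 8] = 1.0  # fixed initial state of the automaton

OFFSETS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def run_ca(kernel, steps=8):
    """Iterate a shared 3x3 kernel + sigmoid over the grid (toroidal edges)."""
    state = seed
    for _ in range(steps):
        pre = sum(k * np.roll(np.roll(state, dy, 0), dx, 1)
                  for k, (dy, dx) in zip(kernel, OFFSETS))
        state = 1.0 / (1.0 + np.exp(-np.clip(pre, -30, 30)))
    return state

def loss(kernel):
    return ((run_ca(kernel) - target) ** 2).mean()

# Train the 9 shared weights with a crude finite-difference gradient.
k = rng.normal(0, 0.1, 9)
for _ in range(100):
    grad = np.array([(loss(k + 1e-4 * np.eye(9)[i]) - loss(k)) / 1e-4
                     for i in range(9)])
    k -= 0.5 * grad
print(loss(k))  # stuck far from zero: 9 shared weights cannot memorize 256 bits
```

The contrast with the unconstrained run is the whole point: once the rule is forced to be small and uniform, it can no longer absorb the image's information, and the fit exhibits high bias.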

In the future, I hope to apply machine learning theory to prove this limitation of Wolfram's cellular automata.