



                                              Artificial Life and Emergence

          Biology explains life in its carbon form: matter in highly organized entities with complex forms, composed of cells and complex molecules. Life, through a long process of evolution, has emerged out of interactions between great numbers of non-living molecules. Artificial life aims to mimic this biological phenomenon in different media: biochemistry (wet alife), robotics (hard alife), and silicon inside the computer (soft alife). This may lead to the creation of similar or even identical instances of life. The agenda of Alife is to study life-as-it-could-be, not only life-as-we-know-it-on-earth.

        The creation of life in simulated environments has its roots in John von Neumann, who designed the first artificial-life model when he created his famous self-reproducing, computation-universal cellular automaton. His work opened the way to simulating the increase in complexity observed in nature, emerging out of interactions between entities at a lower level. The basic intuition is that, instead of taking a top-down, reductive explanation of constituent structure, we take a bottom-up approach: we define systems at the lowest level, with simple but non-trivial rules, and allow them to interact; if this results in something organized, with increased complexity, we look for "emergence".
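        As a toy illustration of this bottom-up style (not von Neumann's own self-reproducing automaton, which is far too large to reproduce here), the following Python sketch runs a one-dimensional elementary cellular automaton; the rule number, grid size and names are arbitrary, illustrative choices.

# A minimal sketch of an elementary cellular automaton (e.g. Wolfram's Rule 110),
# used only to illustrate the bottom-up idea: cells obey a simple local rule,
# yet structure emerges at the global level.

def step(cells, rule=110):
    """Apply one synchronous update; each cell looks only at itself and its two neighbors."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        nxt.append((rule >> neighborhood) & 1)               # look up the rule bit
    return nxt

# Start from a single "on" cell and watch a global pattern build up.
width, generations = 64, 32
cells = [0] * width
cells[width // 2] = 1
for _ in range(generations):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)

        Starting from a single live cell, purely local three-cell rules already produce a structured global pattern; the same bottom-up idea underlies more elaborate Alife models.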


      Within soft Alife the focus is the natural generation of complex objects through the study of interacting systems and behavioral models, and the search for emergent behavior. It is difficult to construct a theory or model that describes "emergence" or emergent phenomena, because a property is emergent precisely if it cannot be comprehended from the underlying system model. Emergence occurs if "more is different", i.e. if there are properties of the group that cannot be explained by the properties of the parts, entities or agents alone. It occurs when a simple set of causal rules forms complex patterns as they are played out in the system. The patterns formed are not reducible to the simple rules that form them. An interesting aspect of the process of emergence is the observation of an effect without an apparent cause, which distinguishes between local, low-level components and global, high-level patterns.



                                                                         


        Complexity theory identifies four types of emergence, distinguished by their degree of predictability and by the types of roles the components can take:

1) Simple/Nominal Emergence: It is totally predictable; each component and element has a fixed and constant role, which is not allowed to change over time. A machine, for instance, has a function different from the functions of its parts and components, but the overall function is well known and matches only the planned and designed function. There are no unpredicted or unexpected behavior patterns.

2) Weak Emergence: This is the form of emergence related to swarms, flocks and other social groups. It describes forms of emergence with simple feedback, which are predictable in principle but not in every detail. Top-down feedback from the group in turn imposes constraints on the local interactions. An example is a flock of geese, which limits the possible movements of the individual birds.

3) Multiple Emergence: This is a form of emergence with multiple positive and negative feedback loops. The behavior is not predictable and can be chaotic. Completely new roles can appear while old roles disappear. Typical examples of multiple emergence are bubbles and droplets.

4) Strong Emergence: This is the strongest possible sense of emergence and the weakest form of causal dependence. It is not predictable, even in principle, because it describes the appearance of a new code, or a completely new system, in a multi-level or multi-scale system with many levels. Any attempt to explain emergent macroscopic, high-level phenomena in terms of microscopic, low-level phenomena is futile. Strong emergence is the emergence of a whole new system, with new building blocks and interaction laws. The "strong" emergence of a system is identical, or at least closely related, to the simulation or representation of a system through another system: simulation is the attempt to represent certain features of the behavior of a system by the behavior of another system. The interface between the new and the old system is described by a new code or language.



          Programming emergence in computer environments rests on a twofold intuition: (a) what we can construct we are also able to explain; for example, a detailed procedure for assembling a machine may give us enough information to construct an explanation of its workings in the form of an algorithmic description of the rules for its change of state; and (b) complex things in nature construct themselves as wholes via a long process of local interactions between simple entities, and this emergence of a whole, or collective behavior of units, should be mimicked in our algorithms. A good example of programmed emergent behavior is "Langton's Ant", a two-dimensional cellular automaton with a very simple set of rules but complicated emergent behavior. It was invented by Chris Langton in 1986. The rules are as follows:

Squares on a plane are colored either black or white. We arbitrarily identify one square as the "ant". The ant can travel in any of the four cardinal directions at each step it takes, and it moves according to the rules below:


1) At a white square, turn 90 degrees right, flip the color of the square, move forward one unit
2) At a black square, turn 90 degrees left, flip the color of the square, move forward one unit


         Langton’s ant would seem to be a simple animal; after all, the rules are hardly complex. In fact the ant displays behavior that can be termed emergent. Suppose you start the ant facing east on an all-white grid. The first move turns the ant right so that it faces south and takes it forward one square, turning the starting square black. As it is now on a white square, the ant turns right again so that it faces west and moves onto another white square, turning the previous square black. After a few moves the ant starts revisiting earlier squares that have turned black, and very quickly its movements become quite complicated. Every so often during the first few hundred moves the ant produces a nice symmetrical pattern. After this things get rather chaotic for a few thousand moves, and then something remarkable happens: the ant gets locked into a cycle that repeats the same sequence of 104 moves, the overall result being to move the ant two squares diagonally. It continues like this forever (or until it encounters some previous trail), systematically building a broad diagonal “highway”. This behavior is interesting in itself, but experiments show that if you scatter any number of black squares around before the ant sets off, it still ends up building the highway. The problem baffling mathematicians is that nobody has proved that the ant always ends up building a highway for every possible starting configuration of black squares, though it certainly seems that it does.
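         To make these rules concrete, here is a minimal Python sketch of the simulation, assuming an unbounded grid stored as a set of black cells; the function and variable names are illustrative rather than taken from any particular library.

# A minimal sketch of Langton's Ant on an unbounded grid.
# Black squares are stored in a set; every other square is white.

# Headings in clockwise order: north, east, south, west, as (row, col) offsets.
DIRECTIONS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def run_langtons_ant(steps):
    """Run the ant for the given number of moves and return the set of black cells."""
    black = set()
    row, col = 0, 0        # the ant starts at the origin
    heading = 1            # facing east, as in the walkthrough above

    for _ in range(steps):
        if (row, col) in black:
            heading = (heading - 1) % 4    # black square: turn 90 degrees left
            black.remove((row, col))       # flip the square to white
        else:
            heading = (heading + 1) % 4    # white square: turn 90 degrees right
            black.add((row, col))          # flip the square to black
        drow, dcol = DIRECTIONS[heading]
        row, col = row + drow, col + dcol  # move forward one unit

    return black

# After the chaotic phase described above, the repeating 104-move "highway" cycle takes over.
print(len(run_langtons_ant(12000)), "black squares after 12000 moves")

         Plotting the returned set of black cells after a few hundred moves shows the symmetrical phase, while a much longer run makes the diagonal highway visible.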
 


           Langton’s Ant provides a good example of ‘emergent behavior’, but it is nonetheless tempting to ask: what is the relation between explaining life by constructing computational models of lifelike behavior and defining the ‘emergent’ patterns so created as true instances of life? Is Artificial Life redefining the notion of living systems in biology, or does it for the first time give a universally valid definition of life? Should we really believe the explanatory strategy of Artificial Life, that life in a genuine sense (not just representations, but the very phenomenon) can be artificially created in vitro or in silico, so that ALife research helps us explore life from a much more universal point of view? How can one be sure that life can simply be defined as "the emergence (in any kind of medium) of complex structures with certain life-like properties"? What makes this notion counterintuitive to some biologists? Do organisms have to be material? And why have biologists been so reluctant to give clear definitions of life that could be used as a measure to hold up against a simulation when the alifer claims it to be a ‘real’ living thing? We may even ask: what might be the meaning of the fact that all attempts to formulate a satisfactory definition of life have failed?