TDT22: Complex and Biologically-Inspired Systems
This compendium is the 2015 version of TDT22
The following papers form the curriculum:
- Bar-Yam, The Dynamics of Complex Systems - Examples, Questions, Methods and Concepts
- Heylighen, The Science of Self-Organization and Adaptivity
- Langton, Computation at the Edge of Chaos: Phase Transitions and Emergent Computation
- Gershenson, Introduction to Random Boolean Networks
- Newman, The Structure and Function of Complex Networks (excluding sections 4, 5, and 6)
- Perez, Modeling Mountain Pine Beetle Infestation with an Agent-Based Approach at Two Spatial Scales
- Garnier, The Biological Principles of Swarm Intelligence
- Doursat, A Review of Morphogenetic Engineering
- Bar-Yam, Human Civilization II: A Complex(ity) Transition
- Sayama, Introduction to the Modeling and Analysis of Complex Systems, Chapters 1 and 2
- Sayama, PyCX: A Python-Based Simulation Code Repository for Complex Systems Education
- Sayama, Introduction to the Modeling and Analysis of Complex Systems, Chapter 10
- Mini project (text not part of the syllabus)
The Dynamics of Complex Systems - Examples, Questions, Methods and Concepts
What are complex systems?
Complex systems consist of multiple interwoven parts, each of which has simple behaviour on its own, but which together display complex and often unexpected behaviour.
Central (universal) properties
There are two types of emergence: Emergent complexity, where simple parts form complex behaviour; and emergent simplicity where complex parts form simple behaviour. It is emergent complexity that is the subject of this course, and will be denoted simply as "emergence".
Two categories of emergence: Local emergence, where the same behaviour is displayed in both small and large subsets of the system; and global emergence, where the emergent behaviour of a large subset of the system differs from that of a small subset.
A complex system has the highest entropy when it is in equilibrium. The number of bits needed to describe a complex system is defined by
I = log2(N)
where N is the number of possible states, i.e. no two distinct strings of information describe the same state of a complex system.
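The relation between the number of states and the description length can be checked directly; a minimal sketch (the function name `bits_needed` is my own, not from Bar-Yam):

```python
import math

def bits_needed(num_states: int) -> float:
    """Information (in bits) needed to specify one of `num_states` states."""
    return math.log2(num_states)

# A system of 8 equally likely states needs log2(8) = 3 bits.
print(bits_needed(8))        # → 3.0
# The state count grows exponentially with the number of parts:
# 10 binary parts give 2**10 states, but only 10 bits of description.
print(bits_needed(2 ** 10))  # → 10.0
```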
Analysis of complex systems
There are two main ways to study a complex system: Analyze the individual elements and observe their interactions (two of the central properties mentioned above), or classify the system based on characteristics it has in common with other systems and analyze it statistically.
The Science of Self-Organization and Adaptivity
A self-organizing system is a system where a global order or structure appears without the influence of an external agent, in other words only through local interactions. The opposite of a self-organizing system would be a centralized system, but in every centralized system there is an element of self-organization at the lower levels of interaction.
Two examples of self-organizing systems: Magnetization and Bénard rolls
A global order has high correlation between the separate elements of the system, which provides the system with robustness (ability to function despite damage) and resilience (ability to adapt to damage). Their non-linear nature, however, makes it difficult to anticipate the global effect when altering the local behaviour, and a change to input or local behaviour may give both positive and negative feedback simultaneously from the system.
The thermodynamical paradox
Self-organizing systems not reaching an equilibrium retain their low entropy by exporting high entropy to the environment.
There may be several possible stable configurations of the system, and which it enters is based on fluctuations between unstable states early on. This branching between possible states is called bifurcation.
When the system can maintain its structure despite external stimuli, it has organizational closure. That is, the system is self-sufficient through rigid internal feedback loops. Despite this, such a system will often exchange energy or matter with its environment.
Far-from-equilibrium dynamics denotes systems that do not stabilize in a minimum-energy state, but rather depend on an external energy input in order to maintain their self-organization. An example of this is Bénard rolls, where heat is required for the water to maintain its movement. This makes the system fragile because of its dependence on the environment, but also more capable of reacting to external changes.
Being adaptive implies the system has robustness and resilience. In order to adapt to changes in the environment, a system needs variety of actions or variable features to cope with these changes, and the ability to select the most appropriate action. It's here the "edge of chaos" comes in: Too much variety and the system becomes chaotic, too little and it becomes ordered; too many competing actions to select from and the system becomes chaotic, too few and it becomes ordered. The selection mechanism is highly dependent on the variety of actions, but also on having a good fitness measure of an action.
The goal of adapting is to maintain or improve the fitness of the system, where fitness is defined as the ability to survive under the given conditions. As such, a system is said to be "fit" if it survives under the current conditions. Improving the fitness leads the system into an attractor, where it will ultimately reach an equilibrium (pictured as the bottom of a valley in the fitness landscape).
Computation at the Edge of Chaos: Phase Transitions and Emergent Computation
The goal of the article was to find out under what conditions a physical system supports computation. In order to test this, the problem was brought into the context of cellular automata, which can to some degree model a thermodynamical system.
- CA: Cellular Automaton (plural: Cellular Automata)
- Quiescent state: Inactive or stable state
- Period length: Number of time steps required to reach a cycle in the CA
- Transient time: Number of time steps required for the system to stabilize
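The transient time and period length of any deterministic system can be measured by iterating until a state repeats. A minimal sketch (the toy map and the function name are my own illustration, not from Langton's paper):

```python
def transient_and_period(step, initial_state):
    """Iterate a deterministic `step` function from `initial_state` and
    return (transient_time, period_length): the number of steps before the
    trajectory enters a cycle, and the length of that cycle."""
    seen = {}                       # state -> time step of first visit
    state, t = initial_state, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    first_visit = seen[state]       # the cycle starts here
    return first_visit, t - first_visit

# Toy map on integers mod 20: 3 -> 6 -> 12 -> 4 -> 8 -> 16 -> 12 -> ...
print(transient_and_period(lambda x: (x * 2) % 20, 3))  # → (2, 4)
```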
A cellular automaton consists of a grid (in any finite number of dimensions) of cells, with each cell being in one of a finite number of states.
The state of each cell is decided by a transition function, which takes the states of the cells in the cell's neighbourhood as input. The neighbourhood can be any combination of nearby cells (including the cell itself), but usually consists of the adjacent cells. The transition function, in the context of the chosen neighbourhood, decides the behaviour of the cellular automaton, i.e. the next state in the state space. If the transition function is random, the CA will end in chaos, while if it always (independently of the neighbourhood) chooses the same state, the CA will become ordered. To find the edge of chaos is to find the balance between these two extremes.
In order to find the correct amount of chaos, some method of controlling orderliness needs to be defined. This is done by choosing a quiescent state, i.e. an inactive or stable state, and adjusting how often the transition function leads a cell to this state. By letting all transitions lead to this state and slowly increasing randomness, the transition to chaos in the CA can be observed.
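As an illustration of a transition function driving a CA, here is a minimal one-dimensional update step. The use of elementary rule 110 as the lookup table is my own example, not taken from the paper:

```python
def ca_step(cells, rule):
    """One synchronous update of a 1-D CA on a ring; `rule` maps the
    neighbourhood tuple (left, self, right) to the cell's next state."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Elementary rule 110 (K = 2 states, N = 3 neighbourhood) as a lookup table.
rule110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}
state = [0, 0, 0, 1, 0, 0, 0, 0]
state = ca_step(state, rule110)
print(state)  # → [0, 0, 1, 1, 0, 0, 0, 0]
```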
- K^N: Total number of entries in the transition table (K possible cell states, N cells in the neighbourhood)
- n: Number of those entries leading to the quiescent state
Then the degree of chaos can be approximated by
lambda = (K^N - n) / K^N
where lambda = 0 means all entries lead to the quiescent state. Note that the same lambda value will give different behaviour for different configurations of a CA, so it cannot be used as a fixed recipe for creating emergent behaviour.
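The lambda of a given transition table can be computed directly from this definition; a small sketch (the function name and example table are my own):

```python
from itertools import product

def lambda_param(table, quiescent=0):
    """Langton's lambda: the fraction of transition-table entries that do
    NOT map to the quiescent state."""
    non_quiescent = sum(1 for out in table.values() if out != quiescent)
    return non_quiescent / len(table)

# K = 2, N = 3: the 8 neighbourhoods of an elementary CA.  A table sending
# everything to 0 has lambda = 0; activating 2 of 8 entries gives 0.25.
table = {nbhd: 0 for nbhd in product((0, 1), repeat=3)}
table[(1, 1, 1)] = 1
table[(1, 1, 0)] = 1
print(lambda_param(table))  # → 0.25
```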
Using the bias defined in the previous subsection, two sampling strategies are proposed:
- Random-table method: Fill the transition table entry by entry, where lambda is the probability that an entry does not lead to the quiescent state
- Table-walk-through: Start with a table in which all entries lead to the quiescent state, and randomly replace entries with transitions to non-quiescent states until the desired lambda is reached
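The random-table method can be sketched as follows, assuming each table entry is filled independently (the function name and details are my own):

```python
import random
from itertools import product

def random_table(k, n, lam, quiescent=0, rng=random):
    """Random-table method: each entry maps to the quiescent state with
    probability 1 - lam, otherwise to a uniformly chosen other state."""
    other_states = [s for s in range(k) if s != quiescent]
    return {nbhd: (rng.choice(other_states) if rng.random() < lam
                   else quiescent)
            for nbhd in product(range(k), repeat=n)}

# Langton's setting: K = 4 states, N = 5 neighbourhood -> 4**5 = 1024 entries,
# of which roughly 30% are non-quiescent at lambda = 0.3.
table = random_table(k=4, n=5, lam=0.3)
```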
In order to make the studies more tractable, the author imposed further restrictions on the rule space:
- Quiescence condition: If all neighbours (including the cell itself) are in the same state, remain in the current state
- Isotropy condition: Orientation of neighbourhood states does not matter
Wolfram's qualitative CA classes
The following is a categorization of the different evolutions a CA can display:
Class I: Homogeneous state (i.e. a single uniform cell state)
Class II: Simple periodic structures
Class III: Chaos
Class IV: Complex patterns
This categorization can be used to classify the system's behaviour at different lambda.
Structure and parameters
A one-dimensional CA of 128 cells was used, where the two ends of the array are connected, i.e. a circular array. The neighbourhood consists of the two closest cells on each side of a cell, plus the cell itself, and each cell can be in one of 4 states. In short:
K = 4
N = 5
Two kinds of initial state distributions were tested: a uniform random distribution, and a random distribution in the central 20 cells with the remaining cells initialized to 0. The following were the most significant observations (for both distributions) at different intervals of lambda:
- 0 - 0.15 : Dynamics die within few time steps
- 0.2 : Periodic structure (infinite), transient of 7-10 steps
- 0.25 : Longer transients, period of 1
- 0.35 : Dynamical possibilities broadening with longer transients and a new periodic structure
- 0.4 : Transient length 60 steps, period 40 steps
- 0.45 : Transient length 1000 steps, period 14848 steps. Solitary waves (propagating structures) are observed
- 0.5 : Transient length 12000 steps, then settles down to periodic behavior
- 0.55 : Shorter transient length, "settles" in chaotic dynamics
- 0.6 : Shorter transient length, broader dynamical activity
- 0.65 : Chaotic after 10 steps, width increase with 1 cell each time step
- 0.70 : Chaotic after 2 steps
- 0.75 : Chaotic after 1 step, maximum disorder for this K
Furthermore, at lambda = 0.5, a range of array sizes up to 512 cells was tested in order to observe the change in transient length: it grows exponentially with a linear increase in array size.
In order for a system to support computation, it needs to be able to both store and transmit information, which are opposite dynamics. A balance between the two needs to be found in order to support both functionalities.
The ability to transmit information can be described by the system's entropy (i.e. higher entropy means more chaos), here measured by the Shannon entropy. Given a cell A and the probability p_i that it uses transition i:
H(A) = -SUM(p_i log2(p_i)), for i = 1, ..., K
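The Shannon entropy above can be estimated from observed cell behaviour; a minimal sketch (estimating each p_i from sample frequencies is an assumption of this example, not a prescription from the paper):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """H(A) = -sum p_i log2 p_i, with p_i estimated from frequencies."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A cell that uses two transitions equally often carries 1 bit.
print(shannon_entropy([0, 1, 0, 1]))  # → 1.0
# A cell frozen in a single state carries 0 bits of information.
print(shannon_entropy([0, 0, 0, 0]))
```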
In order to measure one cell's ability to affect the behaviour of another, we need a measure of their correlation, or mutual information. Given two cells A and B (either distinct cells, or the same cell at different time steps), the mutual information is defined as:
I(A, B) = H(A) + H(B) - H(A, B)
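This definition of mutual information can be computed from paired observations of two cells; a small sketch with made-up sample data:

```python
import math
from collections import Counter

def entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(a_samples, b_samples):
    """I(A, B) = H(A) + H(B) - H(A, B), from paired observations."""
    joint = list(zip(a_samples, b_samples))
    return entropy(a_samples) + entropy(b_samples) - entropy(joint)

# B copies A exactly: all of A's 1 bit of entropy is shared.
a = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(a, a))  # → 1.0
# B is statistically independent of A: no shared information.
b = [0, 0, 1, 1, 0, 0, 1, 1]
print(mutual_information(a, b))  # → 0.0
```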
Structure and parameters
A two-dimensional CA with 64x64 cells was used, with 8 possible cell states and the adjacent cells plus the cell itself as neighbourhood. In short:
K = 8
N = 5
First, a gap between 0 <= lambda <= 0.6 and 0 <= H <= 0.84 was observed, which implies a first-order transition between having too little entropy to transmit information and too much entropy to store it. (Note that the transition occurred at different lambda on different runs, which, looking at this measure alone, could suggest that storage and transmission of information are supported at different lambda. Comparison with the mutual information shows that this is not the case, though.)
Second, the mentioned gap, although significant in size, was not completely empty. This implies that a second-order transition is present, which means there are greater dynamics (and greater possibilities of emergent computation) than these experiments fully capture.
Third, the range of H values observed decreased rapidly when lambda > 0.6, which means the cells have less variety and lose their ability to support storage.
Fourth, when comparing entropy and mutual information, it was found that mutual information was largest around H = 0.32 (H normalized), which would indicate an optimal trade-off between ability to transmit and store data in this vicinity.
The results show a potential for emergent computation, but are there analogies between the dynamics found in CA and conventional computation? Some dynamics that are speculated to be similar:
- Transient length grows exponentially with increased array size -> exponential computation problems
- Halting problem, which is deciding whether a computation will freeze (low lambda), complete (high lambda), or is undecidable (phase transition)
- Simulated annealing spends a lot of computation time in a "freeze" area of computation. Could this be necessary for emergent computation?
- Can CA dynamics support the analogy of matter to liquid to gas phase transitions?
- Is evolution a process of adaptability on the edge of chaos?
Introduction to Random Boolean Networks
Although the structure of random Boolean networks seems limiting, several natural systems may be modelled by RBNs due to the approximate firing thresholds found in their processes, which can be modelled with only two values. The model can be extended to use more than two values, though, which is briefly touched upon in the subsection "Multi-valued networks".
- Node: Takes a value of 0 or 1, which is initially random and later updated according to an internal logical function (usually a lookup table).
- Edge: Connects a node with other nodes, and possibly itself. The connections are created randomly at initialization. If the number K of incoming edges is equal across all nodes, the network is called homogeneous; if not, it is called non-homogeneous.
- Descendants: All nodes a node affects
- Ancestor: All nodes that affect a node
- Linkage loop: A circuit of nodes that activates itself
- Relevant elements: Nodes that form the linkage loop without having a constant logical function
- Linkage tree: A path of activation that has no feedback to itself
The state of the system is one of all the possible combinations of node states, i.e. there are 2^N possible states.
Classification of states
- Successors: The states that a state can lead to
- Predecessors: The states that leads to a state
- In-degree: Number of predecessors
- Garden-of-Eden: States without predecessors, i.e. with in-degree 0
Order, chaos, and the edge
In an RBN the edge of chaos, or the critical phase, can be visualized as a square lattice where collections of nodes ("islands") continuously change between stable and unstable states, and where all nodes at some point are perturbed (changed from their "normal" state due to external influence).
Damage spread is a measure of stability for a system, where a damaged, i.e. altered, node state or connection will propagate changes through the network: In ordered networks the changes stop early, as it has no ability to vary its state; in chaotic networks, the change will propagate through the entire network and have drastic effects to the future states; on the edge of chaos the change can propagate through parts of the network, but not necessarily through the whole network.
Ordered networks tend to have high convergence, meaning that (many) nearby states flow to the same state. In the chaotic phase, nearby states tend to diverge. In the critical phase, nearby states tend to neither converge nor diverge, but retain differences equivalent to their initial differences.
Two methods for characterizing the phase of a network are mentioned: G-density, i.e. the density of garden-of-Eden states, and the in-degree frequency distribution.
- Ordered: High G-density and high in-degree frequency, which leads to short transient times and high convergence
- Critical: In-degree distribution approximates a power law, and medium convergence
- Chaotic: High frequency of low in-degrees, and long attractor lengths resulting in low convergence
An attractor is a set of repeated states. It is called a cycle attractor if the set contains multiple states, and a point attractor if it consists of only one state. In order to have a cycle attractor, at least one node needs to be its own ancestor. The set of states that leads to an attractor is called its attractor basin.
There are different models with different properties, but the main focus of the paper is the classical model.
It has a synchronous updating scheme, and each state only has one successor (because the next state is deterministically decided).
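A classical RBN with synchronous updating can be sketched in a few lines; the representation (one lookup table per node, indexed by the binary pattern of its inputs) is one possible implementation choice, not prescribed by the paper:

```python
import random

def random_rbn(n, k, rng):
    """Build a classical RBN: each node gets K random input nodes and a
    random Boolean function stored as a lookup table of 2**K entries."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def rbn_step(state, inputs, tables):
    """Synchronous update: every node reads its inputs from the current
    state, so each state has exactly one successor."""
    next_state = []
    for node in range(len(state)):
        idx = 0
        for inp in inputs[node]:       # pack input bits into a table index
            idx = (idx << 1) | state[inp]
        next_state.append(tables[node][idx])
    return tuple(next_state)

rng = random.Random(1)
inputs, tables = random_rbn(n=6, k=2, rng=rng)
state = rbn_step((0, 1, 0, 1, 1, 0), inputs, tables)
```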
The following results have been observed for a probability p = 0.5 (equal probability of the logical function returning 0 or 1), for networks of N nodes with K incoming edges per node, in the different connectivity regimes:
- The probability of having long attractors decreases exponentially. The average number of cycles seems to be independent of N. The median length is of the order sqrt(N/2).
- The average attractor length grows exponentially. The typical cycle length grows proportionally to 2^(N/2).
- The typical attractor length and the average number of attractors grow algebraically with N (though these results are disputed).
Multi-valued networks
An extension of the classical model that allows a node to have more than 2 values. Some natural systems are better modelled with more than 2 values per node, but for theoretical purposes several Boolean models can be combined to achieve the same result.
A scale-free topology is non-homogeneous, which means each node can have any number of incoming connections. The scale-free topology is considered to model real-world applications more accurately. Although such networks are not well understood, they have been shown to have several beneficial properties:
- Shorter attractors
- Higher entropy
- More mutual information
- Greater adaptivity (in some space of connection variability)
Synchronous (classical) RBNs
All nodes are updated simultaneously in time step t+1 based on the state at time step t.
Asynchronous RBNs
A node is iteratively picked at random to be updated. This also makes the model non-deterministic.
Deterministic Async RBNs
Nodes are updated periodically, but not all at the same time: each node is given its own (randomly chosen, but thereafter fixed) update interval, which makes the updating deterministic.
The Structure and Function of Complex Networks
A network is a set of items, which are usually called vertices or nodes, with connections between them, usually called edges.
- Degree: The number of edges connected to a node. Nodes in a directed graph have an in-degree and an out-degree.
- Component: A maximal subgraph in which any two vertices are connected to each other by paths. In a directed graph, each node has both an in-component and an out-component.
- Geodesic path: Shortest path from one node to another.
- Diameter: Longest geodesic path in the graph.
- Density: Ratio between the number of edges present and the maximum possible number of edges
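Geodesic paths and the diameter can be computed with breadth-first search; a minimal sketch for unweighted graphs stored as adjacency dicts (my own example graph):

```python
from collections import deque

def geodesic_lengths(adj, source):
    """BFS: shortest path length from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in adj[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def diameter(adj):
    """Longest geodesic path, assuming the graph is connected."""
    return max(max(geodesic_lengths(adj, v).values()) for v in adj)

# A path graph 0-1-2-3 has diameter 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(diameter(path))  # → 3
```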
Real world networks
- Social networks: E.g. Facebook
- Information networks: E.g. article citations, WWW, or preference networks
- Technological networks: E.g. electrical grid, or the internet
- Biological networks: E.g. metabolic pathways, or neural networks
- Small world effect: Low mean distance between nodes
- Transitivity/clustering: Two nodes with a shared neighbour will often be neighbours themselves. The probability can be modelled as a clustering probability coefficient
- Degree distributions: Probability of a random node having K connections (see also "Scale-free networks")
- Network resilience: How the network degrades as nodes are removed; random removal has a smaller impact than targeted removal
- Mixing patterns: Equal types of nodes have a higher probability of being connected
- Degree correlation: Nodes with equal (or similar) connection degree have a higher probability of being connected
- Community structures: Clusters of nodes with high internal degree and low external degree
- Largest component: Largest part of communication network where communication is possible
- Distribution of "betweenness centrality": The number of geodesic paths between pairs of nodes that run through a given node
- Recurring subgraphs: Subgraphs that occur often in a larger network
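The clustering coefficient mentioned above can be computed as the fraction of connected triples that close into triangles; a small sketch (this is the "global" variant of the coefficient, and the example graph is my own):

```python
def clustering_coefficient(adj):
    """Global clustering coefficient: the probability that two neighbours
    of a node are themselves connected, computed as (triangle corners) /
    (connected triples)."""
    triangle_corners = 0
    triples = 0
    for node, neighbours in adj.items():
        k = len(neighbours)
        triples += k * (k - 1) // 2            # pairs of neighbours
        for i in range(k):
            for j in range(i + 1, k):
                if neighbours[j] in adj[neighbours[i]]:
                    triangle_corners += 1      # each triangle counted 3x
    return triangle_corners / triples if triples else 0.0

# A triangle (0-1-2) with one pendant edge (2-3): 1 triangle, 5 triples.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(clustering_coefficient(adj))  # → 0.6
```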
Social networks will typically be assortative (nodes prefer other nodes with equal type or degree), while information networks are disassortative. Community structures can be extracted by a technique called hierarchical clustering, where edges are added iteratively to the network based on high connection strength between nodes.
A scale-free network has a degree distribution that is scale-free, meaning that scaling the argument only rescales the function by a constant factor: p(bk) = g(b) p(k) for any b. The only distribution with this property is a power law, so the network has a power-law degree distribution.
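The scale-free property can be checked numerically for a power law: rescaling the argument k by a factor b rescales p(k) by the constant b^(-alpha), independently of k. A tiny illustration (alpha = 2.5 is an arbitrary choice):

```python
# A power law p(k) = k**(-alpha) is scale-free: p(b*k) / p(k) is the
# constant b**(-alpha), no matter which k we look at.
alpha = 2.5

def p(k):
    return k ** -alpha

b = 3.0
ratios = [p(b * k) / p(k) for k in (1.0, 2.0, 10.0, 100.0)]
# All ratios equal b**(-alpha): the distribution has no characteristic scale.
```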
Models of growth
Most networks do not just "come to be", but are gradually grown through the process of adding nodes and edges. The following models are theories for how these are added.
Price's growth model
A type of preferential attachment (or cumulative advantage) model for directed acyclic graphs. Each node gains new connections at a rate proportional to its existing number of connections; e.g. an article with many citations is preferred over an equivalent article with few citations, and will thus gain even more citations. The resulting degree distribution is a power-law degree distribution, where many nodes have low in-degree and few have high in-degree. When a new node is added, it has a fixed out-degree, e.g. the number of references in the article, and zero in-degree. This gives zero probability of ever increasing the in-degree of a node that starts with zero in-degree, so Price patched the imperfection by adding a constant to each in-degree (saying that an article can be considered to cite itself).
The model works, though, and agrees with observations from real-world networks.
Barabási and Albert's model
The same as Price's model, but for undirected graphs. This simplifies the model (and avoids the initial zero-in-degree problem), but makes it less realistic for real-world networks.
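Preferential attachment in the undirected (Barabási-Albert-style) setting can be sketched as follows; the repeated-endpoint list is a common implementation shortcut, and all parameter values are illustrative:

```python
import random

def preferential_attachment(n_nodes, m, rng):
    """Growth sketch: each new node attaches m undirected edges, choosing
    targets with probability proportional to current degree (sampled from
    a list where each node appears once per incident edge endpoint)."""
    seed_targets = list(range(m))   # targets for the very first new node
    endpoints = []                  # node repeated once per incident edge
    edges = []
    for new in range(m, n_nodes):
        chosen = set()
        while len(chosen) < m:      # m distinct, degree-biased targets
            pool = endpoints if endpoints else seed_targets
            chosen.add(rng.choice(pool))
        for target in chosen:
            edges.append((new, target))
            endpoints.extend([new, target])
    return edges

rng = random.Random(42)
edges = preferential_attachment(n_nodes=200, m=2, rng=rng)
# High-degree nodes keep gaining edges: the degree distribution is heavy-tailed.
```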
Vertex copying model
When adding a new vertex, either assign its edges randomly or copy the edges of an existing vertex.
Edges (bonds) or vertices (sites) are assigned a status of either "occupied" or "unoccupied", with the aim of studying the properties of the resulting subgraphs of occupied and unoccupied sites or bonds. The process is called percolation, by analogy with a fluid percolating through a porous material. It has been used to test the resilience of networks, i.e. how much of the graph can be removed before functionality degrades, or before the components shrink significantly in size.
Disease spreads through networks with a power-law degree distribution (a person with many contacts is more likely to catch and pass on a disease than a person with few), which makes the reasoning behind quarantine evident: disconnect these nodes from other nodes, and the epidemic will not spread further. Percolation theory can model the effect of a spreading disease as it disables clusters of the network. By identifying the highest-degree vertices and the most sensitive parts of the network's functionality, measures can be taken to limit the damage from the disease.
The SIR model
- Susceptible: Can catch the disease
- Infective: Can transmit the disease
- Recovered: Neither of the above (immune)
The SIS model
As SIR, but instead of recovering after being infected, the person becomes susceptible again. This model reflects, for example, a network infected by a computer virus.
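The SIS dynamics can be sketched as an agent-based simulation on a network; the ring topology and the parameter values are arbitrary choices for illustration:

```python
import random

def sis_step(adj, infected, beta, gamma, rng):
    """One step of the SIS model: each infective transmits to each
    susceptible neighbour with probability beta, then recovers (becoming
    susceptible again) with probability gamma."""
    newly_infected = set()
    for node in infected:
        for neighbour in adj[node]:
            if neighbour not in infected and rng.random() < beta:
                newly_infected.add(neighbour)
    still_infected = {n for n in infected if rng.random() >= gamma}
    return still_infected | newly_infected

rng = random.Random(0)
# A ring of 20 people, patient zero at node 0.
adj = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
infected = {0}
for _ in range(50):
    infected = sis_step(adj, infected, beta=0.5, gamma=0.2, rng=rng)
# Unlike SIR, the infection can persist indefinitely by reinfection.
```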
An exhaustive search creates an index of the network and uses this index to process future queries. The indexing is usually performed by crawling the whole network initially, then updating the index every time a change occurs. A guided search must query the network on every instance, using multiple crawlers guided by some heuristic so that only promising parts of the network are searched. Network navigation is the idea that networks can be designed to enable faster search, based on observations such as Milgram's small-world experiment, where people were connected through a short path unknown to them.
A network property changing due to altering edges and/or node values is called a phase transition. Altering the graph in a graph coloring problem from low density to high density will trigger a phase transition: With few edges the coloring problem is easy, with many edges the coloring problem is impossible. This concept has been used to illustrate the Ising model as a network, and opinion forming in social networks.
Analysis of networks is performed by studying statistical properties of the network in order to establish models. These models enable behaviour prediction. One such technique is the hierarchical clustering mentioned earlier.
Modeling Mountain Pine Beetle Infestation with an Agent-Based Approach at Two Spatial Scales
Mountain pine beetles kill trees.
By modeling the local behaviour of beetles, the researchers hope to be able to predict future behaviour.
Two types of agents: beetles and trees.
Both are modelled at the scale of individuals and at the scale of the landscape. On the landscape scale, each agent represents a group of its kind: for beetles, such a group is all individuals inhabiting a tree; for trees, all trees making up a forest stand.
The hard part of modeling entire systems based on many individuals is creating models for their behaviour at a local scale. An accurate model of the environment, i.e. the forest, is often hard to create because of insufficient data about local differences, which makes the model even harder to develop.
Moderately infested areas spread beetles faster than lightly infested areas, due to scarce resources in dense colonies. Using their model, the researchers are able to approximate the location of the pine beetles 10 years into the future.
The Biological Principles of Swarm Intelligence
Social insects have a strong structure of self-organization, and have inspired numerous algorithms for controlling the collective behaviour of artificial systems. Further development should aim to emulate the self-adaptation these insects show, so that the individuals in such algorithms respond to the needs of the colony and increase its flexibility. This does not entail increased complexity at the level of individuals, but rather the introduction of variable probabilities (modulation) instead of fixed probabilities for performing actions.
- Modulation: Probability for a given behaviour varies
- Stigmergy: Mechanism of indirect communication
Stigmergy is a mechanism of indirect communication between individuals, such as the pheromone trails of ants. Stigmergic cues can trigger actions, where each action is given a probability of being performed. This motivates collaboration among the individuals, as each individual will most of the time perform the action it perceives as the most useful to the colony.
An example of this is bees building cells in their nest, where the current structure functions as the stigmergic cue: there is a high probability of building another wall in a corner between cells, and a low probability of starting a new cell. That way the bees prioritize finishing cells over starting new ones, but will always build new cells once all commenced cells have been finished.
Principles of self-organization
There are some "ingredients" required to maintain a self-organized system:
- Positive feedback: Promote a certain behaviour
- Negative feedback: Mechanism counteracting the positive feedback (e.g. pheromones evaporate if not maintained)
- Random fluctuations: Have a probability of not choosing a task, to motivate exploration
- Multiple stigmergic actions: Even if one individual fails to perform a task, others will "save the day"
Basically, this makes a system of checks and balances that relies on no single individual for the colony to survive.
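The interplay of positive feedback, negative feedback and random fluctuations can be illustrated with a toy "double bridge" pheromone model (the setup and all parameters are my own, not from the paper):

```python
import random

def double_bridge(steps, evaporation, rng):
    """Ants choose between two paths with probability proportional to the
    pheromone on each; the chosen path is reinforced (positive feedback)
    while both trails evaporate (negative feedback)."""
    pheromone = [1.0, 1.0]
    for _ in range(steps):
        p_first = pheromone[0] / (pheromone[0] + pheromone[1])
        choice = 0 if rng.random() < p_first else 1
        pheromone[choice] += 1.0                         # reinforcement
        pheromone = [p * (1 - evaporation) for p in pheromone]
    return pheromone

rng = random.Random(7)
trails = double_bridge(steps=500, evaporation=0.01, rng=rng)
# Random early fluctuations are amplified until one trail dominates.
```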
Categories of collective behaviour
These are the functions that organize the insects' tasks (not mutually exclusive):
- Coordination: Organization in space and time to complete a task, e.g. a swarm flying
- Cooperation: Combined effort to solve a problem, e.g. killing large enemy
- Deliberation: Making a collective decision between several possibilities, e.g. which food source
- Collaboration: Different individuals performing different activities, e.g. foragers and builders
Modulation of self-organized behaviour
Modulation of behaviour means that the probabilities that govern the actions of individuals change. These changes are triggered by sensed changes in outer or inner factors.
An outer factor of change is independent of the colony, i.e. changes in the environment. This includes food distribution, predators, and weather.
Inner factors relate to changes in the colony or the individual. These include the size of the colony (e.g. with respect to the space available per individual) and the ratio between castes. An individual may also modulate its behaviour based on experience (previously successful actions) and age.
Managing uncertainty and complexity
One of the more interesting aspects of natural complex systems is their robustness and flexibility, which allow the systems to perform under a wide range of conditions and failures. The robustness of colonies stems from the multiplicity of interactions, which means the failure of individuals does not affect the colony. Flexibility stems both from modulation of behaviour and from the basic principles of self-organization (as mentioned above), which allow colonies to respond appropriately to events.
A Review of Morphogenetic Engineering
Nature's designs are based on self-organization, where simple rules followed by small individuals create large complex systems. Human design is architectural: the rules of the whole system are designed top-down.
Morphogenetic engineering is found at the crossing of these two ways of thinking.
Endowing physical systems with information
In nature there are several self-organized systems that, when endowed with information, can show architecture. Complex systems found in nature consist of networks upon networks of functionality, all of which have the ability to adapt to their environment. The drive to adapt comes from success, i.e. fitness, and has resulted in natural "innovations".
Augmented complex systems
Embedding information into natural complex systems in order to guide them towards certain behaviour.
Embedding informational systems in physics
Artificial systems show architecture, but have no self-organization until information that allows such properties is embedded. The science of Artificial Life (ALife) approaches the challenge of incorporating natural processes into programming by looking at the elements bottom-up instead of top-down, as traditional (symbolic) artificial intelligence does. ALife can be approached at three different scales:
- Micro-scale: Chemical processes in the individual
- Meso-scale: The individual
- Macro-scale: Population or societal systems
Typically, but not necessarily, all scales need to be considered in order to make a functional system. Examples of implementations include the class of algorithms called swarm intelligence, where whole colonies of individuals, such as ants, act at the individual level according to simple chemical processes in order to achieve societal goals.
Approaches to create ME
These concepts facilitate non-homogeneity, reproducibility, development, and modularity in complex systems:
- Constructing: Assemble agents into something else, like Lego bricks, in order to achieve some additional property. Typically each agent retains some of its functionality, but gives up its freedom of movement. Example: robots linking together to build a bridge.
- Coalescing: Create clusters or networks that adopt a certain shape, such as an insect swarm. One property a swarm can achieve is extended sensory input by transitivity, which can be used for evasion or food discovery.
- Developing: Create new agents through division and aggregation, like living cells.
- Generating: Evolve the system by changing, adding or removing elements, or by changing the rules it operates under.
Human Civilization II: A Complex(ity) Transition
The goal of this article is to show similarities and dissimilarities between the human civilization and complex systems.
Comparing human civilization with a complex system
In answer to the question "is human civilization a complex system?", the article concludes "yes, probably": there are both similarities and dissimilarities to a complex system.
- Many elements (e.g. human beings, machines)
- Many interactions
  - Social gatherings
  - Through time
  - Family and communities
  - Regionally (e.g. countries, religion, language)
- Processes supporting organization
  - Biological evolution
  - Social evolution
- Interdependence (see subsection below)
- Complex behaviour (see "Transition from centralized control to self-organization")
Interdependence means there is a mutual dependence between elements. Without interdependence, a system could not be complex, because some part of the system could survive without the rest. Historically, regions of humanity have operated independently of each other, but the dependency is ever growing: in the early history of man, humans relied only on their next of kin, while today each person relies on many others all around the world in order to function as a part of society. As an example, economic sanctions would clearly not be effective if not for some degree of interdependence. Some other elements supporting the theory of interdependence in human civilization:
- Political interests
- Military interventions
- Economical propagation of events
- Human-made environmental changes
- Information sharing
Dissimilarities to a complex system:
- No interaction with equivalent complex systems
- Humanity's response to environmental changes is not complex
Regarding the last bullet point: In complex systems, the system adapts to external changes in order to survive. Humanity, on the other hand, will to a significant degree adapt the environment to its needs. While this could be viewed as the "optimal form of adaptation", it does not fit with the general characteristics of a complex system. The possibility of adapting the environment comes from humanity's unique position as a system that has surpassed its environment in complexity, or at least parts of it.
Transition from centralized control to self-organization
Decrease in central control
Before the industrial revolution, the usual organization structure was very homogeneous: one person at the top managing several, possibly thousands, of workers all performing the same task. After machines replaced much of the manual labor, organizations became more heterogeneous, and thus the complexity of the organization increased. Workers possessed different knowledge and skill sets, while the person at the top had the same capabilities as before. This required more layers of management, so that central management only received the essential information and issued major commands.
From hierarchy to networked organizations
As competition increased and more specialization was needed, ever more levels of management were required to cope with the increased complexity of the organization. Although the information age made management more capable than before, it only mitigated the effect. Today some businesses look more like a network of workers with equal responsibility than a hierarchy of control.
Without central control one might think that the system is fragile, but rather the opposite holds: if one part of the system fails, it is an opportunity (e.g. economical) for another, and competition motivates adaptation. Take food distribution as an example: serving millions of people with different requirements concerning assortment, price, and amount of food is an incredibly complex problem, but the hundreds of distributors, restaurants, and stores satisfy this need (and create something similar to an emergent behaviour). In the communist regimes where central control was responsible for satisfying this demand, the only way to cope with the problem was to keep the variety to a minimum and have only a few stores, making distribution of the correct amounts manageable.
Consequences of the transition
Higher complexity in technology and industry requires more specialized education, and faster adaptation on the businesses' part. It also means that more people need to adapt their specialization and/or change careers during their lives, which becomes harder in a more complex system. The article speculates that a more complex society will even require "specialization" in social life, as people from different backgrounds will find it difficult to interact on a meaningful level, as well as "specialized" news that fits your areas of knowledge and interest once the amount of news becomes unmanageable.
The individual's relationship to civilization
There was more acceptance of people dying before and during the early industrial revolution, because factory accidents and the like just happened, and for most people odds of 1 in 100 of dying were an acceptable risk. Today, society tolerates hardly any risk of a person dying in accidents, and several governments have issued "zero accident" goals for both traffic and industry. The point is that a complex society makes life safer for the individual. The change in life expectancy of the average person is proof enough that a complex society is beneficial to the lives of individuals.
As for economic safety, more people are changing careers than before, often several times during their lives. Although this instability may lower individuals' quality of life, the unemployment rate in the US has been relatively stable for several decades, which shows that where some businesses and professions are abandoned, others emerge. The general tendency is that fewer people work for Fortune 500 companies (and the profits of those companies are correspondingly lower) than before, which means more small businesses flourish.
These are speculations based on imperfect models, but in short:
- Less centralized, but not fully self-organized (e.g. needs central control of public services)
- Interplanetary colonies will be interdependent with the current civilization
The last bullet point entails that we will never see a complex system equivalent to human civilization.
Introduction to Modeling and Analysis of Complex Systems, Chapter 1 and 2
About complex systems
Complex systems can be categorized as problems of organized complexity, fitting right between problems of disorganized complexity (independent components) and problems of simplicity (according to the book this is a system of "dependent components", but in the context of the known interdependence between components in complex systems, this might rather be interpreted as one-way dependencies).
Different fields of study in, or roots of, complex systems may be categorized as follows (the categorization is the author's). Nonlinear Dynamics, Systems Theory, and Game Theory are considered by the author to be the roots of research on complex systems.
- Nonlinear Dynamics: Outputs are not given by a linear computation of the inputs. Such systems allow both stability and chaos, which implies an "edge of chaos" in between.
- Systems Theory: Tools to solve real-world complex problems.
- Game Theory: Can be categorized as complex in the context of nonlinear dynamics, where global behaviour with many agents may be hard to predict from the individual agents' rules.
- Pattern Formation: Self-organizing processes involving space and time, where interactions between components can produce emergent behaviour (such as cellular automata).
- Evolution and Adaptation
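The contrast between stability and chaos in nonlinear systems can be made concrete with the logistic map, a textbook one-line example; the parameter values below are illustrative choices.

```python
# The logistic map x' = r*x*(1-x): a textbook nonlinear system that is
# stable for some parameter values and chaotic for others.
def logistic(r, x, steps):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Stable regime: for r = 2.5 trajectories converge to the fixed point
# x* = 1 - 1/r = 0.6, regardless of the starting point.
print(round(logistic(2.5, 0.2, 200), 6))  # -> 0.6

# Chaotic regime: for r = 4.0 two nearly identical starting points end
# up far apart (sensitive dependence on initial conditions).
print(logistic(4.0, 0.2, 50), logistic(4.0, 0.2000001, 50))
```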
Fundamentals of modeling
Models in Science and Engineering
Science is an endeavor to understand the world around us by discovering fundamental laws that describe how it works.
How to be a scientist
A typical example of science at work goes something like this:
- Observe nature
- Develop a hypothesis that could explain your observation.
- Make predictions from your hypothesis and test them with an experiment.
- See if experiment “proves” the hypothesis.
- If yes, you can say you were correct and publish a paper
- If no, hang your head in shame and gather more data for another hypothesis
Unfortunately, step 4 is not as easy as it sounds. To show this, here is some logic:
- We observe that the driveway is wet. Let us call this phenomenon P.
- We develop the hypothesis that it has been raining. This is hypothesis H. We say that H -> P.
- We make a prediction that your neighbor’s driveway is also wet. This is Q, and H -> Q.
- Unfortunately, we can’t really “prove” anything. Just because P or Q happen, doesn’t mean H is the cause. Maybe the hypothesis K, that the sprinklers were on earlier, is the correct hypothesis.
How to be wrong
The only way we can say anything definite about H is by taking the contrapositive of H -> Q, which is not Q -> not H. If not Q is true, then the hypothesis is wrong; but if Q is true, that does not prove the hypothesis, it only gives some evidence for it. In the end you only have supportive evidence for your hypothesis, together with the fact that you have not been able to disprove it.
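The logical point — that H -> Q is equivalent to not Q -> not H, while Q being true proves nothing about H — can be checked mechanically over all truth assignments:

```python
# Verify by truth table that an implication equals its contrapositive,
# and that (H -> Q) together with Q does NOT force H to be true.
from itertools import product

def implies(a, b):
    return (not a) or b

for h, q in product([False, True], repeat=2):
    # H -> Q is logically equivalent to not Q -> not H
    assert implies(h, q) == implies(not q, not h)

# Affirming the consequent is invalid: here H -> Q holds and Q is true,
# yet H is false (e.g. the sprinklers, not rain, wet the driveway).
h, q = False, True
assert implies(h, q) and q and not h
print("contrapositive holds; Q alone proves nothing about H")
```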
What is a model?
All the laws of nature we have models for are well-tested hypotheses at best: scientists have repeatedly failed to disprove them, but there is no guarantee of universal, permanent correctness. In the end, all we do is build models of nature. A model is just a simplified representation of a system.
We can say that science is an endless effort to create models of nature, and that engineering uses those models to control or influence nature. This is why modeling occupies an essential part of an endeavor in science and engineering.
Two types of models
Modeling approaches can be put into two families:
- Descriptive modeling, where researchers try to specify the actual state of a system at a given point in time.
- Rule-based modeling, where researchers try to specify the rules that can explain the dynamics of a system.
Both approaches are equally important, and they rely on each other. For instance, observations of planetary motion were later used to derive the rules describing how the planets move.
The book focuses on rule-based modeling, which follows this loop:
- Observe the system of your interest.
- Reflect on which underlying rules might cause the system to behave like it does.
- Derive predictions from those rules and compare with reality.
- Repeat above step until you are satisfied with the model (or run out of time or funding)
Different people might arrive at different models, depending on their experience and knowledge.
Modeling complex systems
Modeling cause and effect in complex systems is complicated compared to traditional science and engineering, and requires the analyst to become familiar with the dynamics of the complex system in order to understand them. Because of this, computational modeling has had a significant effect on complex systems research.
Still, some trial and error will be required to get a model right. Some important things to consider when modeling a complex system:
- What are the key questions?
- At what scale will the basic individuals operate?
- How is the system structured?
- What are the possible states of the system?
- How does the state of the system change over time?
PyCX: A Python-based Simulation Code Repository for Complex Systems Education
Why create PyCX?
To provide an easy-to-use, general-purpose framework giving students the flexibility needed to be creative and thorough in their learning of complex systems.
Problems or limitations of previous GUI-based simulation tools:
- Attention diverted from general "marketable" skills towards learning the tools
- Different tool preferences in different fields
- Details hidden from the user
- Limits user creativity
The main problems with previous programming frameworks:
- General-purpose frameworks are few and hard to use; the rest are limited-purpose only
- Difficult to use (especially for non-CS students)
Advantages of Python
- Easily accessible, and free
- Easy to use
Limitations of Python
- Difficult to install (for non-CS students)
- (Relatively) difficult to create GUI
Some of the simulators available in PyCX:
- Iterative Maps
- Cellular Automata
- Dynamical Networks
- Agent-Based Models
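As a flavor of what such simulators compute, here is a cellular automaton in plain Python. It is written independently of PyCX — the rule number and grid size are illustrative choices, and the code does not use PyCX's actual API.

```python
# A 1-D binary cellular automaton (Wolfram rule 110), self-contained
# and independent of PyCX; rule number and grid size are illustrative.
RULE = 110

def step(cells):
    """One synchronous update with periodic boundary conditions."""
    n = len(cells)
    return [
        # The 3-cell neighborhood (left, center, right) indexes one bit
        # of the rule number, which gives the cell's next state.
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # a single live cell in the middle
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```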
Introduction to the Modeling and Analysis of Complex Systems, Chapter 10
Shows how to use PyCX (described in "PyCX: A Python-Based Simulation Code Repository for Complex Systems Education"), and nothing more. Unless you're very interested in PyCX, it is not worth the read.
An important aspect when modeling complex systems is choosing the correct model, and knowing the assumptions and limitations of each model.
Interactive Simulation with PyCX
The framework offers some interactive actions to use when the simulation is running:
- Step once (i.e. perform one iteration of the simulation)
Interactive parameter control
A feature was later added to PyCX that allows you to control parameters during a running simulation.
Simulation without PyCX
You can run the simulation without initializing the GUI. Instead, it is possible to output each state as an image (and combine the images into a video!).
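A minimal way to do this without any GUI library is to dump each state as a plain-text image file. The PGM format and file names here are illustrative choices, not PyCX's own convention.

```python
# Dump each simulation state as a plain-text PGM image; a tool such as
# ffmpeg can then assemble the frames into a video. The format and file
# names are illustrative, not PyCX's own convention.
def save_pgm(state, filename):
    """Write a 2-D grid of 0/1 cells as a portable graymap (PGM) image."""
    h, w = len(state), len(state[0])
    with open(filename, "w") as f:
        f.write(f"P2\n{w} {h}\n255\n")
        for row in state:
            f.write(" ".join("255" if cell else "0" for cell in row) + "\n")

state = [[(x + y) % 2 for x in range(8)] for y in range(8)]  # toy state
for t in range(3):
    save_pgm(state, f"frame_{t:03d}.pgm")
    # ...advance the simulation state here...
```

The numbered frames can then be combined with a tool like ffmpeg, e.g. `ffmpeg -i frame_%03d.pgm out.mp4`.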