Glossary of ECCO Concepts

This is a first draft of a glossary defining the most fundamental concepts of the ECCO ontology. Eventually, this glossary should evolve into a kind of semantic network of nodes (concepts) connected by links (relationships) of different types.

The present draft focuses on the concepts that are unique to, or at least typical of, the ECCO world view, which centers around the self-organization of agent collectives. In our research, we make use of many other concepts from the domains of complexity, cybernetics, evolution and cognition, such as feedback, control, emergence, hierarchy, chaos, and non-linearity. However, most of these concepts have already been defined elsewhere, both in our own work (e.g. Gershenson & Heylighen (2004): How can we think the complex?) and in that of others. They will therefore be added to the glossary later.


Agents and Evolution


action

a change in the state of the world governed by a causal relationship. This cause-effect relation can be represented as a condition -> action rule: whenever a certain condition (a state of affairs, functioning as cause) is encountered, a particular action (a change of that state, the effect) is performed, deterministically or probabilistically, modifying that state of affairs. Although we informally explained action as a change of state, action is the true primitive of our ontology; all other concepts, including "state", must therefore be defined in terms of action or concepts derived from it.


state

the set of all actions that could be performed on the world at a particular instant. (If actions are probabilistic, the state includes the probability distribution over these actions.) This maps to the more intuitive notion of state as the set of all properties that are actual or "true" at a certain moment, if we remember that a property needs to be observed to be deemed "true", and that an observation, as shown by quantum mechanics, is itself an action, which typically changes the observed state.


agent

an autonomous, persistent producer of actions. Agents can be people, animals, robots, organizations, cells, or even molecules. Agents have preferences for certain actions over others, in the sense that when offered a choice they are more likely to perform the "preferred" actions. Preference functions like a gradient or force field that pushes the agent in a particular direction.


goal

an end state or "attractor" to which an agent's actions lead, i.e. a state that the agent has no further preference for changing. For a physical object, the implicit goal is to minimize potential or free energy (this is equivalent to maximizing equilibrium or stability). For a living organism, the implicit goal is to maximize fitness, i.e. survival and reproduction.


utility

a measure of the "success" of an action, i.e. the degree to which the action has made the agent advance towards its goals; the amount of "reward", "benefit" or "satisfaction" that an agent obtains from an action. When confronted with different options for action, agents will normally choose the one from which they expect the highest utility.


cost

the amount of utility consumed or wasted by performing an action. An action normally uses energy, and some of this energy will be wasted or dissipated and therefore no longer be available to perform further actions.


event

an action that is not produced by a (goal-directed) agent. Examples are the spontaneous decay of a radioactive particle, or the mutation of a string of DNA during replication.


variation

an on-going change in the state of the world caused by subsequent actions. Variation produced by events is in general undirected; variation produced by an agent is directed towards the agent's goal. However, since the agent has merely short-term, local knowledge of the effects of its actions (bounded rationality), there is no guarantee that variation will reach the goal in the long term. Therefore, variation is always to some degree blind, since the agent cannot foresee all the consequences of its actions.


selection

the selective retention of a particular state, because no further actions occur to change that state. Such a state typically corresponds to a (local) maximum of the utility function for the agents involved, i.e. a state that they cannot improve by further actions. Such maxima of the utility function define the "attractors" of the dynamics.


evolution

the long-term, directed change in the state of the world towards higher overall utility, which results from the interplay of the variation of states and the selection of states with higher utility. Evolution can be seen as a search for utility based on trial-and-error, where variation produces the trials, and selection eliminates the errors.
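The interplay of blind variation and selective retention can be illustrated with a minimal sketch; the bit-string state and the bit-counting utility function below are purely hypothetical examples, not part of the glossary's definitions:

```python
import random

def evolve(state, utility, steps=1000, seed=42):
    """Trial-and-error search: variation proposes a change,
    selection retains it only if utility does not decrease."""
    rng = random.Random(seed)
    best = list(state)
    for _ in range(steps):
        trial = list(best)
        i = rng.randrange(len(trial))
        trial[i] = 1 - trial[i]              # blind variation: flip one bit
        if utility(trial) >= utility(best):  # selective retention
            best = trial
    return best

# Toy utility: the number of 1-bits in the state.
start = [0] * 20
end = evolve(start, sum)
```

Variation here is undirected (any bit may flip), yet selection makes the overall state drift towards higher utility.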


Cognition

uncertainty

the degree to which an agent is unsure about what to do or to expect. The larger the number of options that can potentially occur, the larger the uncertainty, and therefore the larger the amount of trial-and-error that the agent will have to perform before it can be certain to have made a satisfactory decision. Agents with high uncertainty will therefore be very inefficient in accumulating utility. Uncertainty is normally measured using Shannon's formula for entropy, which is based on the probability distribution of the different options.
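Shannon's entropy formula mentioned here can be computed directly from the probability distribution over the agent's options; a straightforward sketch:

```python
from math import log2

def entropy(probs):
    """Shannon uncertainty H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

two_even = entropy([0.5, 0.5])    # two equally likely options: 1 bit
certain = entropy([1.0])          # a single certain option: 0 bits
four_even = entropy([0.25] * 4)   # four equally likely options: 2 bits
```

The more options, and the more evenly their probabilities are spread, the higher the uncertainty.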


information

anything that reduces uncertainty. Using Shannon's formula, the amount of information in a message can be calculated as the initial uncertainty minus the new uncertainty (after the message has been received). The value of information can in principle be calculated as the expected increase in utility made possible by applying that information to the selection of actions. In practice, this calculation is rarely feasible, since the outcome depends on the agent's intelligence, which is much more difficult to quantify.
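Following this definition, the information in a message can be sketched as the uncertainty before receiving it minus the uncertainty after, both computed with Shannon's formula:

```python
from math import log2

def entropy(probs):
    """Shannon uncertainty in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

def information(before, after):
    """Uncertainty reduced by a message, in bits."""
    return entropy(before) - entropy(after)

# Four equally likely options; the message rules out two of them.
gain = information([0.25] * 4, [0.5, 0.5, 0.0, 0.0])
```

Here the message halves the number of live options, yielding one bit of information.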


intelligence

given certain information, the degree to which an agent is able to make good decisions, i.e. selections of actions that maximally accumulate utility in the long term. A zero-intelligence agent is one that selects actions at random. Intelligence has two components: knowledge (or "crystallized intelligence"), and fluid intelligence.


knowledge

the ability, typically derived from experience or communication, to anticipate the consequences of a given action or event. Knowledge can be represented in the form of condition -> action or condition -> condition rules. The latter specify which new condition can be expected to follow a given condition. Knowledge differs from information in that it produces general predictions or expectancies, applicable in many different situations, while information strictly speaking only applies to the present situation.

fluid intelligence

the ability to internally explore many different combinations of possible events and actions in order to find the one that, according to the existing knowledge, would produce the largest utility. This requires a mechanism of inference, such as the concatenation of condition -> condition rules, e.g. A -> B, B -> C, therefore A -> C.
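The concatenation of condition -> condition rules can be sketched as simple forward chaining over a rule table; the rule contents here are hypothetical:

```python
def infer(rules, start):
    """Chain condition -> condition rules to collect every condition
    that can be expected to follow from the start condition."""
    reachable = {start}
    frontier = [start]
    while frontier:
        cond = frontier.pop()
        for nxt in rules.get(cond, []):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    return reachable

# A -> B and B -> C, therefore A -> C:
expected = infer({"A": ["B"], "B": ["C"]}, "A")
```

The agent can thus anticipate condition C from condition A without ever having observed the two together.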


cognition

the acquisition, processing, storage, and use of information and knowledge to support intelligent decision-making.


rationality

the hypothetical ability of an agent to always choose the best action. In reality, rationality is restricted or bounded, as an agent never has enough information, knowledge or intelligence to accurately determine the utility of all possible courses of action. Bounded rationality implies that there is always an element of uncertainty or trial-and-error involved in making decisions; no decision can be a priori proven to be the best one.

intelligence amplification

a process that increases the ability of an agent to make good decisions. Intelligence can be amplified by providing more or better knowledge (e.g. an encyclopedia in which facts can be checked), by increasing the ability to explore many different possibilities (e.g. by means of a computer program that can make more and faster inferences than a human brain, or via drugs that improve thinking in the brain), or by some combination of these.


Navigation
course of action

the trajectory that an agent would describe through its state space if left undisturbed, by performing subsequent actions that bring the present state closer to a goal state.


diversion

any change in the agent's situation that makes the agent deviate from its present course of action. This deviation can be positive (moving it closer to the goals), negative (moving it away from the goals), or neutral. The defining characteristic of a diversion is that the agent has no control over it (although the agent may try to control its subsequent effects): it does not originate from the agent's decision-making, but is unexpected, coming from an initially unknown origin. Examples are a sudden discovery, an obstacle appearing on the road, an apple falling from a tree, an unexpected phone call.


disturbance

a negative diversion: a phenomenon that, if left unchecked, would make the agent's situation deviate from its goals, i.e. reduce its utility. Disturbances typically originate in the environment, but can also appear because of some malfunctioning within the agent itself. Examples are obstacles, accidents, encounters with predators, parasites or otherwise hostile agents, diseases, poor weather conditions, etc.


affordance

a positive diversion: an unexpected change in the situation that creates an opportunity for the agent to perform an action that increases its utility, so that it can reach its goals more quickly or easily than expected. Affordances can be tools, means or resources (e.g. a phone, a hammer, food, someone who can give advice) that help the agent achieve its goals, or the disappearance of obstacles or constraints (e.g. a clearing up of the weather, a reduction in the price of energy).


counteraction

an action performed by an agent that suppresses or compensates for a disturbance, so as to minimize any deviation from the goal or course of action.

regulation or control

the process by which an agent constantly minimizes deviations from its goals, by appropriately counteracting disturbances. Regulation makes use of negative feedback: deviations in one direction are compensated by actions that push the state in the opposite direction.
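Negative feedback can be sketched as a minimal proportional regulator; the gain value and the constant disturbance are illustrative assumptions:

```python
def regulate(state, goal, disturbance, gain=0.5, steps=40):
    """Each step, a disturbance pushes the state away from the goal,
    and a corrective action pushes part of the deviation back."""
    for _ in range(steps):
        state += disturbance             # disturbance: deviation grows
        state -= gain * (state - goal)   # negative feedback: push back
    return state

# Without regulation the state would drift to 40; with it, the state
# settles at 11, close to the goal of 10 (the residual offset is the
# classic steady-state error of purely proportional control).
settled = regulate(state=0.0, goal=10.0, disturbance=1.0)
```

Each correction is proportional to, and opposite in sign to, the current deviation: that opposition is what makes the feedback "negative".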


exploitation

the use of known affordances in order to maximize the increase in utility they can bring about. Examples are harvesting fruit, mining for coal, cultivating crops.


exploration

the process by which an agent searches for affordances, trying out actions without a specific expectation of what they will bring about, in the hope that one of them will uncover an affordance. Examples are animals foraging for food, children playing, or people browsing magazines.

the exploration-exploitation trade-off

the difficult decision for an agent about how much energy to invest in exploration rather than exploitation. While exploitation of known affordances makes the agent advance to the goal most reliably, affordances can become exhausted, lose their usefulness because of a change in the situation, or lose their competitive edge relative to new affordances. Therefore it is wise to invest in discovering new affordances before the old ones have lost their power. But exploration alone is too risky and inefficient, and must be complemented by exploitation. The general rule is that a more variable, unpredictable environment necessitates more exploration, while a more stable environment lends itself more to exploitation.
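One standard formalization of this trade-off, borrowed from the multi-armed bandit literature rather than from ECCO itself, is the epsilon-greedy strategy; the payoff means below are hypothetical:

```python
import random

def eps_greedy(means, eps=0.1, trials=2000, seed=1):
    """With probability eps explore a random option; otherwise exploit
    the option with the highest average payoff observed so far."""
    rng = random.Random(seed)
    totals = [0.0] * len(means)
    counts = [0] * len(means)
    for _ in range(trials):
        if rng.random() < eps or 0 in counts:
            i = rng.randrange(len(means))                 # explore
        else:
            i = max(range(len(means)),
                    key=lambda j: totals[j] / counts[j])  # exploit
        totals[i] += rng.gauss(means[i], 1.0)  # noisy payoff of option i
        counts[i] += 1
    return counts

# Two noisy affordances with true mean payoffs 1.0 and 2.0;
# the better one ends up being chosen far more often.
pulls = eps_greedy([1.0, 2.0])
```

The small, constant fraction of exploration keeps the agent sampling alternatives, so a better affordance is eventually discovered and then exploited.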



navigation

the process by which an agent constantly adjusts its course of action so as to maximally counteract disturbances and exploit affordances, i.e. so as to dynamically maximize its advance in utility, taking into account the diversions it encounters. Navigation includes regulation, exploration and exploitation.


Systems

system

a cohesive group of agents held together by a network of strong interactions. This cohesion distinguishes it from the environment, which groups any other agents with which there is a weak(er) interaction. If the agents in the system share a goal, the system functions like a higher-order agent.


environment

everything that is considered to be external to a given agent or system, but that still interacts with it.

complex adaptive system

a system consisting of many interacting agents, whose interactions are not rigidly fixed, preprogrammed or controlled, but continuously adapt to changes in the system and in its environment.

collective intelligence

the degree to which the agents in a system collectively can make good decisions as to their future course of action; in particular, the degree to which the agents collectively can make better decisions than any of them individually.

distributed cognition

the acquisition, storage and use of information and knowledge distributed over different agents in a system, so as to support their collective intelligence.


medium

the substrate that carries or supports the interactions between agents; that part of the world that is changed by an action, and whose changed state is perceived as a condition for a subsequent action by another agent. Examples of media are air for acoustic interaction, the electromagnetic field for electric interactions, and the physical surroundings for collaborative building. The medium is often the environment shared by the interacting agents, but can also be internal to the agents.




Interaction

interaction

the reciprocal effect of two agents (say, A and B) on each other: the action performed by A creates a condition that triggers another action (reaction) from B, which in turn affects the condition of A, stimulating it to react in turn, and so on. An interaction can go on indefinitely, or stop when the final condition does not trigger any further action.


zero-sum interaction

an interaction in which every gain in utility for one agent is counterbalanced by an equal loss in utility for the other agent. This typically occurs when utility is proportional to the amount of "material" resources (such as food, money or energy) that an agent acquires: when the total amount of resources is conserved, the sum of gains (positive changes) and losses (negative changes) must equal zero.


synergy

an increase in overall utility caused by interaction; characteristic of an interaction with a positive sum, i.e. a win-win situation where all parties gain in utility. This typically happens when the action performed by one agent to advance towards its goals makes it easier for another agent to achieve its goals as well. An example is the sharing of information or knowledge, so that every advance or discovery made by one agent can benefit the other agents as well. Unlike material resources, information is not conserved, and therefore a gain for one agent can be accompanied by a gain for the other.


friction

the opposite of synergy; a decrease in overall utility caused by interaction; characteristic of an interaction with a negative sum, where all parties together lose (although one may gain at the expense of a larger loss by the others). This typically occurs when resources are dissipated or wasted during the interaction. An example is a traffic jam, where enormous amounts of fuel, time and energy are wasted because of mutual obstruction between vehicles. The dissipation can be physical (dissipation of energy or thermodynamic entropy, because of diffusion or physical friction), or informational (waste of resources because of uncertainty leading to many trials ending in error).
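The three kinds of interaction (synergy, zero-sum, friction) differ only in the sign of the total utility change, which a small sketch can make explicit:

```python
def classify(payoffs):
    """Classify an interaction by the sum of its utility changes."""
    total = sum(payoffs)
    if total > 0:
        return "synergy"     # win-win: overall utility increases
    if total < 0:
        return "friction"    # overall utility is dissipated
    return "zero-sum"        # gains exactly balanced by losses

a = classify([+3, +1])   # both agents gain
b = classify([+2, -2])   # one gains what the other loses
c = classify([+1, -4])   # one gains at a larger loss to the other
```

Note that in the friction case one agent still gains individually, even though the interaction as a whole destroys utility.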


cooperation

the relation between agents involved in a synergetic or positive-sum interaction. Usually, cooperation is assumed to be intentional, i.e. the agents act in the expectation of a positive-sum result (now or later). If the positive-sum interaction is unintentional, we may just call it "synergy".


competition

the relation between agents involved in a zero-sum interaction.


conflict

the relation between agents involved in an interaction with friction or a negative sum. Usually, conflict is assumed to be intentional, i.e. the agents act in the expectation of inflicting a loss on the other party. If a negative-sum interaction is unintentional, as in a traffic jam, we may just call it "friction".

transaction costs

the degree to which utility in a positive-sum interaction is lost to friction. Even when the interaction is synergetic overall, some of the generated utility will be dissipated during the process. Typical transaction costs are the effort invested in finding the right partner to interact with, negotiating who will contribute what to the transaction, and making sure that everything happens as planned. According to some estimates, in our present economic system more than half of the economic value generated is lost to transaction costs. The most fundamental source of transaction costs is uncertainty: since the agent does not know what transaction to enter into, what to agree upon, or what to expect, it will need to spend a lot of energy in search, negotiation, and enforcement of agreements.


Self-organization

coordination

the arrangement or mutual alignment of actions so as to maximize synergy and minimize friction in their overall pattern of interaction. It implies that two actions performed simultaneously or subsequently are selected so as to maximally complement and minimally obstruct each other. This requires a minimization of the uncertainty that would otherwise dissipate resources in needless trial-and-error.


self-organization

the spontaneous emergence or evolution of coordination in a complex adaptive system. Self-organization reduces uncertainty. The driving force behind self-organization is co-evolution based on variation and selection: actions and reactions produce a continuously changing configuration of interactions (variation); however, the more synergetic a configuration, the more "satisfied" the agents will be with the situation, and thus the less they will act to produce further changes (selective retention of, or preference for, synergetic configurations); vice versa, the more friction there is, the more the agents will be pressured to intervene and change course in order to increase their utility (elimination of high-friction configurations).
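This variation-and-selective-retention dynamic can be illustrated with a toy model, assuming (purely for illustration) agents on a ring who experience friction only when their action matches neither neighbour:

```python
import random

def self_organize(n=20, steps=2000, seed=3):
    """Agents hold action 0 or 1. An agent whose action differs from
    both neighbours suffers friction and changes course (variation);
    an agent matching at least one neighbour is satisfied and stays
    put (selective retention of synergetic configurations)."""
    rng = random.Random(seed)
    acts = [rng.randrange(2) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        left, right = acts[i - 1], acts[(i + 1) % n]
        if acts[i] != left and acts[i] != right:
            acts[i] = 1 - acts[i]   # dissatisfied: try something else
    return acts

# Aligned patches of agents emerge without any central controller.
acts = self_organize()
```

Only the dissatisfied agents keep varying, so the system settles into a configuration where every agent is locally coordinated with a neighbour.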


organization

a relatively stable arrangement or structure of agents inside a system that functions to ensure the coordination of their actions. This structure specifies the specific roles of, and interactions between, the system's agents. Its function is to maximize synergy and minimize friction (including transaction costs) in their further interactions. For example, in a human organization the different individuals each have their own responsibilities, and the rules of the organization specify who interacts with whom in what way. This minimizes transaction costs, since it is no longer necessary to search for partners, negotiate with them, or strictly monitor whether they do what they are expected to do.


mediator

a regulatory structure external to the agents that promotes coordination between them. An example is the system of roads, traffic lights, traffic signs, and lanes that coordinates the movement of vehicles so as to minimize mutual obstruction (i.e. friction). Mediation may emerge from self-organization (e.g. vehicles spontaneously moving to the side in order to let others pass), or be imposed by an inside or outside agent (e.g. a policeman regulating traffic).


stigmergy

a form of indirect coordination via the medium, where the trace left by an action in the medium stimulates the performance of a subsequent action that is complementary to the preceding one. Stigmergy is typically the result of the self-organization of a mediator out of the medium. It is probably the simplest way to achieve coordination in a complex system, because it does not make any cognitive demands on the agents (such as remembering who is to do what when), and therefore functions even with agents of very low intelligence.
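Stigmergy can be sketched as agents reading and reinforcing traces in a shared medium; the sites and unit trace deposits below are a hypothetical toy model:

```python
import random

def stigmergy(steps=500, sites=10, seed=0):
    """Each action deposits a trace at a site in the medium; later
    agents are more likely to act where traces are strongest, so
    activity self-reinforces without any direct communication."""
    rng = random.Random(seed)
    trace = [1.0] * sites
    for _ in range(steps):
        # read the medium: choose a site proportionally to its trace
        site = rng.choices(range(sites), weights=trace)[0]
        trace[site] += 1.0   # write the medium: reinforce the trace
    return trace

# Activity concentrates unevenly across sites even though all sites
# started identical and no agent ever observed another agent.
trace = stigmergy()
```

No agent needs memory or knowledge of the others: the medium itself carries all the coordinating information, as in ant pheromone trails.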

evolution of cooperation

the general tendency for interactions to become more synergetic through variation and selection, thus reducing competition and conflict