This project tackles a particularly demanding class of composite systems: complex systems of interacting subcomponents in which local interactions between subcomponents combine with longer-range interactions operating at a different scale.
In contrast to traditional methods, typically based on large-scale agent-based simulations or on numerical solutions of coupled non-linear deterministic or stochastic differential equations, our framework is based on inherently distributive and approximative probabilistic approaches. Our methods for describing uncertainty, information transfer and emergent properties in complex systems are based on complex connected graphs. The techniques for analysing such graphs derive from extensions of methods in statistical physics that decompose high-dimensional joint distributions into simpler, computable quantities. The novelty of our method stems from the focus on systems that exhibit this composite character: a combination of localised and long-range, sparse and dense, weak and strong interactions between subcomponents in such graphs.
We are interested in exploiting complexity in information systems that can be described by such graphs, adapting techniques from physics so that they are applicable to inference in, and analysis of, complex systems.
This framework leads to new insights and fundamental tools and techniques for generic systems, applicable to many important current real-world problems, including CDMA coding methods, ad-hoc sensor networks, distributive traffic-light management, embedded intelligent sensors, and cortical functioning.
The investigators of this project are Prof David Saad and Prof David Lowe.
The study of complexity has been identified by both the UK research councils and the EU as a key research area of significant future demand, and is supported through several dedicated programmes.
However, current methods for studying complexity are based on large-scale simulations, for instance of interacting agents, or on numerical solutions of coupled non-linear deterministic or stochastic differential equations. Because the systems involved are large, highly non-linear and characterised by multi-level interactions, these methods are unlikely to provide generic theoretical insight, or robust, principled and reliable solutions for specific instantiations of such systems. The main problems with these approaches are their sensitivity to model parameters and external observations, their sheer scale, which prevents reliable numerical modelling, and the difficulty of modelling and analysing systems with multi-level interactions.
The general approach we propose for understanding such systems is based on probabilistic, approximative and distributive methods that are local, scale well (linearly, or at most quadratically) with system size, accommodate variable and measurement uncertainties, and readily provide confidence levels for the inferred variables. In specific problems, a macroscopic description can be provided using methods of equilibrium statistical mechanics that exploit the large system size to provide typical and robust expectation values characterising the system's behaviour. This gives a generic theoretical and analytical description of the system and further understanding of its capabilities and limitations that are difficult to obtain numerically because of the computational effort required. For instance, the macroscopic framework will identify threshold values for the ratios between the strengths of dense and sparse interactions under which a composite system can provide optimal solutions, the types and quality of solutions obtained, and the conditions under which collaborative behaviour is likely to emerge.
The problem we focus on here is that of composite systems comprising huge numbers of elements with multi-level interactions. These are particularly challenging in complex systems: although principled approaches have been devised for very dense or very sparse interaction systems, they fail for the composite multi-level systems we investigate. We aim to obtain insights into information inference and knowledge structures by taking inspiration from the physics of coupled, interacting many-body systems, normally analysed by methods of equilibrium statistical physics.
The exemplar application domains we have in mind span novel embedded sensor systems, control of transportation flow systems such as networks of traffic lights, and information flow in nervous systems, all of which exhibit the phenomenon of local connections combined with long-range interactions. For instance, a distributive inference method for traffic-light management would include information on the state of neighbouring traffic lights (strong interactions) as well as weak interactions with all traffic lights related to traffic coming into and out of town. Similarly, individual oscillators in a MEMS array of sensors will inevitably have local-neighbourhood mechanical coupling combined with parasitic long-range interactions, leading to possible novel emergent behaviours and new self-organising sensor modalities.
A principled approach is required to determine the optimal state and to analyse such composite systems. We propose that insight into the complexity of such systems will derive from the interdisciplinary approach of extending concepts from physics across into the information/inference domain.
For instance, in the inference domain, probabilistic graphical models (Bayes belief networks) provide a powerful framework for modelling statistical dependencies between variables in systems that can be mapped onto sparse graphs (i.e., where the number of direct probabilistic dependencies per variable is much smaller than the number of system variables, and interactions are characteristically strong). These methods play an essential role in providing principled approximate probabilistic inference in a broad range of applications, from medical expert systems to telecommunications. (These methods, which have largely been developed independently in the computer science and information theory literature, can be derived from early results in statistical physics: belief propagation on graphs, for instance, relates to stationary points of the Bethe approximation of the free energy, dating back to 1935.)
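To make the sparse-graph setting concrete, here is a minimal sketch (a toy model of our own construction, not taken from the proposal) of sum-product belief propagation on a three-variable chain MRF. On a loop-free graph such as a chain, the messages are exact, so the BP marginals coincide with brute-force enumeration:

```python
import itertools
import numpy as np

# Hypothetical pairwise potentials on a chain x0 - x1 - x2 of binary variables;
# psi[i] is the potential between node i and node i+1.
psi = [np.array([[2.0, 1.0], [1.0, 3.0]]),
       np.array([[1.0, 2.0], [2.0, 1.0]])]

def bp_chain_marginals(psi):
    """Sum-product belief propagation on a chain (exact, since there are no loops)."""
    n = len(psi) + 1
    # Forward messages: m_f[i] is the message arriving at node i from the left.
    m_f = [np.ones(2) for _ in range(n)]
    for i in range(1, n):
        m_f[i] = psi[i - 1].T @ m_f[i - 1]
    # Backward messages: m_b[i] is the message arriving at node i from the right.
    m_b = [np.ones(2) for _ in range(n)]
    for i in range(n - 2, -1, -1):
        m_b[i] = psi[i] @ m_b[i + 1]
    # Node marginal is the normalised product of incoming messages.
    return [m_f[i] * m_b[i] / (m_f[i] * m_b[i]).sum() for i in range(n)]

def brute_marginals(psi):
    """Exhaustive enumeration of the joint distribution, for comparison."""
    n = len(psi) + 1
    p, Z = np.zeros((n, 2)), 0.0
    for x in itertools.product([0, 1], repeat=n):
        w = 1.0
        for i in range(n - 1):
            w *= psi[i][x[i], x[i + 1]]
        Z += w
        for i in range(n):
            p[i, x[i]] += w
    return p / Z
```

The brute-force check costs 2^n evaluations while BP costs O(n) message updates, which is precisely why message passing is the method of choice on large sparse graphs.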
However, these methods and their variations are generally inappropriate for systems that are globally more densely connected (and where the interactions are characteristically weak), as the computational effort grows exponentially with the connectivity. Moreover, dense systems include a high number of loops in the corresponding graphical representation, which cannot generally be tolerated by these methods (since one of the main approximations used is the weak dependence between variables that are not directly connected, an assumption clearly violated in the presence of loops).
On the other hand, different methods, based on mean-field approximations, have been suggested for obtaining solutions in the case of densely connected systems, where the number of connections is large and of the same order as the number of variables (while the connection strength is relatively weak compared to the system size). However, these methods rely heavily, through a sequence of approximations, on the fact that the system is very large and the interactions weak.
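The dense, weak-coupling regime can be illustrated with a naive mean-field sketch (a hypothetical Curie-Weiss-like toy model, not from the proposal): with couplings of order 1/N, one iterates the self-consistency equations m_i = tanh(beta * (sum_j J_ij m_j + h_i)) to a fixed point, which becomes exact as N grows:

```python
import numpy as np

N, beta = 200, 0.5
# Dense, weak couplings: every pair interacts with strength of order 1/N
# (the ferromagnet-like scaling for which naive mean field is asymptotically exact).
J = np.full((N, N), 1.0 / N)
np.fill_diagonal(J, 0.0)
h = 0.1 * np.ones(N)  # small uniform external field

def mean_field_magnetisations(J, h, beta, iters=500, damping=0.5):
    """Damped fixed-point iteration of the naive mean-field equations."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m_new = np.tanh(beta * (J @ m + h))
        m = damping * m + (1.0 - damping) * m_new  # damping stabilises the iteration
    return m
```

Each sweep costs O(N^2) (one matrix-vector product), which is why these methods scale to dense systems where exact message passing, exponential in the connectivity, cannot.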
The same dichotomy exists in the methods used for analysing disordered physical systems, and systems that can be mapped onto disordered systems. The statistical physics methods used for analysing densely connected models, such as the Sherrington-Kirkpatrick model, require different assumptions from those used in the study of sparsely connected systems. Although the models originated in the statistical physics literature, very similar models have been used to study other complex systems such as neural networks, CDMA signal detection and error-correcting codes.
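As a small illustration of the dense-model setup (hypothetical parameters, for exposition only), the Sherrington-Kirkpatrick model draws Gaussian couplings between every pair of spins with the 1/sqrt(N) scaling that keeps the energy per spin of order one as the system grows:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
# SK couplings: dense, symmetric, zero mean, standard deviation ~ 1/sqrt(N).
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)

def energy_per_spin(s, J):
    # H = -sum_{i<j} J_ij s_i s_j, written with the full matrix and a factor 1/2.
    return -0.5 * (s @ J @ s) / len(s)

s = rng.choice([-1.0, 1.0], size=N)  # a random spin configuration
```

Because every spin interacts weakly with every other, the analysis of such models rests on the large-N limit, in contrast to the strong, sparse interactions handled by message passing.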
We have identified a particularly challenging area of systems complexity: composite systems that include both naturally sparse, strong connections and weak, dense interactions. The proposal includes both the development of inference methods for composite systems, aimed at finding solutions for particular instances, and the analysis of generic systems with multi-level interactions, where computational studies are intrinsically intractable.

Objectives
In this project we will:
Formalise the relationship between the theory and techniques of inference in composite systems and specific real-world complex problems, with the aim of developing generic tools.
Develop inference methods for composite systems that include both strong and weak interactions, randomly connected in a sparse and dense manner, respectively.
Extend the new inference algorithms to problems that include more complex structures, such as local strongly connected regions with weak long range interactions.
Analyse composite models via methods of statistical physics to gain insight by studying the behaviour of such a composite system and to demonstrate the efficacy of such tools to the study of generic composite systems.
The success of the inference techniques developed will be assessed by the ability of the algorithms, derived from principled theory, to successfully find exact solutions for composite systems, compared with the performance of heuristic search algorithms. A successful analysis of generic composite physical systems, comprising interactions of various strengths and connectivities, will enable us to gain insight into the behaviour of such generic complex systems, demonstrating the usefulness of the method for analysing complex systems. The insight obtained will include the identification of thresholds for the system's parameters (e.g., ratios between the strengths of dense and sparse interactions) under which a composite system can provide optimal solutions both practically and theoretically, the solutions obtained in other regimes and parameter values, and the conditions under which collaborative behaviour is likely to emerge.