The architecture of complexity: The generative secret behind emergence
In 1962, a polymath named Herbert Simon wrote a paper called ‘The Architecture of Complexity’. And to us, it is one of the most important papers ever written.
Complexity isn’t a lot of stuff, it’s organized interactions
When people talk about ‘complexity’, they usually mean a lot of interacting parts, like the butterfly effect. That’s not wrong, but it’s not useful. The more useful definition is structural: complexity is what you get when many parts are organized so that coherent behavior emerges at higher levels from the collective. Cells, organs, cities. The real question isn’t why there are so many parts; it’s how nature could repeatedly build systems that don’t simply collapse under their own combinatorial entropy. After all, if you asked any human to put together >100K parts without an explicit wiring diagram, there would be zero chance of assembling something that works.
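A back-of-envelope calculation makes the fragility concrete. Suppose every part placed carries some small independent chance of an interruption that scatters the whole work-in-progress; a monolithic build then finishes only if every placement in a row goes uninterrupted. The numbers below (part counts, interruption rate) are illustrative choices, not figures from the essay:

```python
# Sketch with illustrative numbers: if each placement has an independent
# chance p of an interruption that scatters all work-in-progress, a build
# finishes in one go only if every one of its n placements is uninterrupted.

def p_finish(n_parts: int, p_interrupt: float) -> float:
    """Probability that n_parts placements all happen without interruption."""
    return (1 - p_interrupt) ** n_parts

flat = p_finish(100_000, 0.001)   # one monolithic >100K-part build
module = p_finish(10, 0.001)      # one 10-part stable subassembly
print(f"monolithic build finishes in one go with prob ~{flat:.2e}")
print(f"10-part subassembly finishes with prob ~{module:.4f}")
```

Even with a one-in-a-thousand interruption rate, the monolithic build succeeds with probability around 10⁻⁴⁴, while a 10-part subassembly succeeds about 99% of the time. That asymmetry is the whole argument in miniature.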
Simon: Complexity has an architecture
Herbert Simon’s ‘The Architecture of Complexity’ lays out a clear worldview. His central claim is that complex systems share a single common blueprint: hierarchies of subassemblies. Molecules organize into pathways, pathways into processes, processes into cells, cells into tissues, tissues into organs. People organize into teams, teams into departments, departments into firms. Functions into libraries, libraries into services, services into platforms. This shape is invariant to scale, and it is general because it is the only shape that makes complexity buildable in finite time.
Testing this idea
Our studies have shown that biological systems spanning bacterial genomes to tumor microenvironments can be described using statistical frameworks that are hierarchical by design. But Simon’s idea goes deeper. The claim is not simply that biological systems can be represented by hierarchy, but that the generative process itself must construct such systems in a hierarchically modular way in order to remain robust to external perturbations. Showing this requires going beyond statistical analysis of extant systems: it means testing whether hierarchically modular functional systems are robust to changing environments ‘for free’, so to speak.
The concept of ‘constructive models’ instead of generative models
It is an important distinction: Simon’s point is one of construction, not simply of generation. There is likely a large degeneracy of possible models whose outputs are statistically consistent with complex systems. Simon illustrates the point with a parable of two watchmakers, Hora and Tempus: Hora builds watches from stable subassemblies, while Tempus assembles them one part at a time. Only Hora’s method is a constructive process consistent with finishing in finite time under perturbations (like a customer knocking at the door). This thought experiment highlights a critical difference that is noteworthy in the age of AI: there is a fundamental difference between construction and generation. Generation creates systems consistent with the statistical structure of the input distribution; construction infers an instruction manual for how to build synthetic systems consistent with that distribution. In this sense, one could imagine constructive models being far more useful than generative models in many settings, particularly those involving the engineering of complex systems.
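Simon backs the parable with a simple probabilistic argument, which can be sketched as a small Monte Carlo simulation. All the numbers here (part count, interruption rate, subassembly size, trial count) are hypothetical, chosen small enough to run quickly:

```python
import random

random.seed(0)       # reproducible sketch

P_INTERRUPT = 0.05   # hypothetical chance of an interruption per placement
N_PARTS = 100        # hypothetical parts per finished watch
MODULE = 10          # size of one stable subassembly (Hora's method)

def flat_cost() -> int:
    """Placements needed when any interruption scatters ALL progress (Tempus)."""
    placed, done = 0, 0
    while done < N_PARTS:
        placed += 1
        if random.random() < P_INTERRUPT:
            done = 0                  # the whole assembly falls apart
        else:
            done += 1
    return placed

def stage_cost() -> int:
    """Placements to join MODULE stable pieces into one larger stable piece."""
    placed, done = 0, 0
    while done < MODULE:
        placed += 1
        if random.random() < P_INTERRUPT:
            done = 0                  # only the current subassembly is lost
        else:
            done += 1
    return placed

def hierarchical_cost() -> int:
    # 100 parts = 10 modules of 10 parts, plus 1 stage joining the 10 modules:
    # 11 ten-piece joining stages in total (Hora's method).
    return sum(stage_cost() for _ in range(11))

TRIALS = 200
flat_avg = sum(flat_cost() for _ in range(TRIALS)) / TRIALS
hier_avg = sum(hierarchical_cost() for _ in range(TRIALS)) / TRIALS
print(f"flat builder (Tempus):       ~{flat_avg:,.0f} placements per watch")
print(f"hierarchical builder (Hora): ~{hier_avg:,.0f} placements per watch")
```

With these parameters the flat builder pays an order of magnitude more placements per finished watch, because every interruption sends it back to zero, while the hierarchical builder only ever loses one ten-piece stage. The gap widens explosively as the part count grows, which is exactly Simon’s point about finishing in finite time.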
The future
Can constructive models actually be learned? This will likely require a departure from transformer-based architectures, whose constructive logic is one token at a time (a logic arguably ill-suited to complex, non-linear systems, but one that works given the scale of data fed into such models). We are actively pursuing such alternative architectures, and time will tell whether these types of AI, inspired by complexity theory and emergence, have a place in the rapidly changing universe of models.
