Research
Modes of Collective Intelligence
Intelligence systems interact in three modes, defined by the structure of the work — not by who or what is doing it.
- Collaboration: joint production. The output emerges from the interaction and cannot be decomposed into individual contributions.
- Cooperation: divided labor. Each party takes a piece, works on it, and the outputs are assembled; contributions are decomposable and traceable.
- Competition: zero-sum. One gains at the other's expense.
The test: can you point at the result and say "this was A's contribution, this was B's"? If yes, cooperation. If no, collaboration. This holds whether the participants are humans, agents, or one of each.
Autonomous Cooperative Intelligence
Divided labor among AI agents — where the output is decomposable and traceable.
Cooperation pairs a shared goal with divided labor. Each agent takes a piece, works on it, and the outputs are assembled; you can trace which agent did what. This is the structure underneath collaboration: reliable agent-agent coordination is what enables human-agent joint production.
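The divided-labor structure can be sketched in a few lines. This is a toy illustration, not a real agent framework: the "agents" below are plain string transforms standing in for model calls, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    """One agent's traceable piece of the assembled output."""
    agent_id: str
    subtask: str
    output: str

def cooperate(subtasks, agents):
    """Divided labor: each agent handles one subtask; provenance is recorded."""
    contributions = [
        Contribution(agent_id=name, subtask=task, output=fn(task))
        for (name, fn), task in zip(agents, subtasks)
    ]
    # Assembly is concatenation here; any merge step that preserves
    # piece boundaries keeps the result decomposable.
    assembled = " ".join(c.output for c in contributions)
    return assembled, contributions

# Toy agents: string transforms standing in for model calls.
agents = [("summarizer", str.upper), ("reviewer", str.lower)]
result, trace = cooperate(["Draft", "Check"], agents)

# The defining property: every piece of the result is attributable
# to exactly one agent.
assert all(c.output in result for c in trace)
```

Collaboration, by contrast, would have no such `trace`: the merge step would entangle contributions so that no piece of `result` maps back to a single agent.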
When agents share reasoning transparently, they access solutions that information-hiding architectures cannot reach. When they don't, they amplify each other's errors. We study multi-agent coordination without central authority, how individual agent properties compose or degrade in ensembles, and the conditions under which cooperation outperforms competition.
Collaborative Intelligence Systems
Joint production between humans and AI — where the output is non-decomposable.
In collaboration, the output emerges from the interaction itself: you cannot draw a line through the result and attribute pieces to individual contributors. This is distinct from cooperation, where labor is divided and assembled.
Humans and agents bring different affordances: humans diverge (novel insight the distribution doesn't contain), agents converge (rapid structured synthesis). When both are brought to bear on the same problem simultaneously, the result can be something neither could have produced alone. We study when this works, when it doesn't, and what system designs preserve human capability rather than replacing it.
Semantic, LLM-Interpretable Components
Software components that machines can reason about — and compose.
Type systems verify that components connect. They don't verify whether they should, or what emerges when they do. LLMs can reason over what components mean — their semantic surfaces — enabling composition discovery beyond what was anticipated at design time.
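A minimal illustration of the gap, with hypothetical function names: both converters below share the signature `(float) -> float`, so a type checker approves either composition, but only one is meaningful. The docstrings stand in for a semantic surface; treating them as machine-readable metadata is an assumed convention, not an established API.

```python
def celsius_to_fahrenheit(x: float) -> float:
    """Semantic surface: converts a temperature in Celsius to Fahrenheit."""
    return x * 9 / 5 + 32

def usd_to_eur(x: float) -> float:
    """Semantic surface: converts a price in USD to EUR (hypothetical fixed rate)."""
    return x * 0.9

def apply_discount(price: float) -> float:
    """Semantic surface: applies a 20% discount to a price in EUR."""
    return price * 0.8

# Both compositions type-check as (float) -> float chains.
# Only the second makes sense; the type system cannot tell them apart,
# but a reader of the semantic surfaces can.
type_checks_but_wrong = apply_discount(celsius_to_fahrenheit(20.0))  # temperature fed in as a price
meaningful = apply_discount(usd_to_eur(20.0))
```

Making those surfaces legible to an LLM is what lets it search the composition space by meaning rather than by shape alone.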
We study how to make software components legible to LLM reasoning, how semantic surfaces enable composition, and what formal frameworks govern the space of compositions that become possible when machines can read intent.