Colonies of social insects, such as honeybees and ants, are often regarded as “superorganisms”: They have specialized parts — the individual workers — that act together for the common good. Insects in a colony work in concert to reproduce, migrate and sense their environment, and even make collective decisions about what to do next.
The creatures’ hive-mind nature led Stephen Pratt, a behavioral biologist at Arizona State University in Tempe, to see if he could study the behavior of ant colonies the same way psychologists study human behavior, such as with experiments forcing the ants to make difficult decisions and trade-offs. In the 2018 Annual Review of Entomology, Pratt explores the usefulness of this approach. He spoke with Knowable Magazine about what he learned. This conversation has been edited for length and clarity.
How is an ant colony like a brain?
In the analogy, an ant is a brain cell, or neuron, and a colony is a brain. Neurons are simple relative to the whole to which they belong. Their interactions with each other generate the highly complex output of the brain as a whole. Cognition emerges from the interactions among very large numbers of neurons.
The same thing happens in an ant colony. Colonies make decisions, they allocate labor, they move cohesively. All of those group-level properties emerge from interactions among large numbers of individual colony members. A brain or a colony processes information about its environment and about its own state, and then produces some adaptive behavior that’s appropriate for the conditions. You can call that cognitive.
Psychology has developed a lot of precise and rigorous methods, analytical approaches and concepts to understand decision-making, learning and other aspects of cognition that have been applied to humans and other animals — but generally they’ve been applied to individuals, not to collectives. We’re using this deep well of ideas and methods to address a different kind of intelligence, a collective intelligence rather than an individual intelligence.
Of course, there are some obvious differences. Neurons are in a relatively fixed neuroanatomy with very specific connections. No two ants have a fixed relationship to one another. They’re constantly moving. It’s a much more fluid, dynamic kind of system. That’s got to be important for how they function.
And a neuron is not as independent as an ant. An individual ant is herself a kind of decision-making entity. She has a brain of her own. In that sense, the colony is a brain made up of brains. One of the interesting things about colonies is you can take them apart and do quite sophisticated tests on the parts — on the individual ants — as well as on the colony as a whole. That has implications for the kinds of experiments that we can do.
Much of your work focuses on nest choice by ant colonies. Can you describe what’s going on here?
We study Temnothorax ants, which live in rock crevices or hollow nuts. When they have to move to a new nest, they send out scouts who find new cavities, choose the best one and organize a move. It’s a very orderly system, and it’s easy to make an artificial version. The colonies are tiny, just a few hundred workers at most, so they can live in a little dish that fits in the palm of your hand.
You destroy their nest by taking the roof off. Then you provide them with one or more potential new nests to choose among and move into. We know the ants prefer dark nests with small entrances, probably because those are easier to defend. That makes it easy to pose challenges and observe in fine detail how they solve the problem.
What are some of the key things you’ve found so far?
One of the experiments we did compared how well individual ants and whole colonies distinguish light levels. We found that when light differences are tiny, the colonies make better decisions than individual ants do. It's a nice example of the wisdom of crowds — groups can improve their acuity or precision by combining the efforts of many relatively noisy, imprecise individuals.
But where there’s a big difference in brightness, the individuals working on their own actually get the question right more often than the colony does. That surprised us, because we were expecting the wisdom of crowds to work across the board.
Our best guess is that there is a downside to the way these colonies integrate information. One animal finds a dark nest, and she’ll recruit a nestmate with a probability that depends on how dark the nest is. Another ant will find a competing nest that’s maybe a little bit brighter, so she’ll recruit a nestmate to that with a probability that’s a little bit lower, because the nest is not quite as good.
The ant who’s been recruited then decides herself whether she’ll stick with that nest and start recruiting still others to it. That allows the colony to detect a small difference in light level and move to the darker and better nest.
But sometimes, just by chance, one ant makes a mistake and recruits others to a nest that’s not very good. If the ants she recruits also make a mistake and get too excited about this not-very-good nest, then pretty soon you can have an amplification of the error — the madness of crowds. That doesn’t happen often, but it’s a danger that’s always present.
For an individual ant, that can’t happen, because she has to do everything on her own. If it’s obvious which nest is brighter, then an individual ant can solve it on her own with a high probability of being right. And since she doesn’t have this danger of falling into a positive feedback cascade, then maybe she can do better in those cases than a colony can.
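To make that trade-off concrete, here is a minimal simulation sketch (not Pratt's actual model): it assumes each scout perceives a nest's darkness with some random noise, that a lone ant simply compares the two nests and picks whichever seems darker, and that a colony runs a quorum race in which ants already committed to a nest recruit nestmates with a probability set by that nest's perceived darkness. The noise level, quorum size and recruitment rule are all invented for illustration; depending on how they are set, pooling many noisy assessments can beat a single noisy ant when the difference is tiny, while an early run of lucky recruitment to the worse nest can snowball into exactly the amplified error described above.

```python
import random

def individual_choice(dark_a, dark_b, noise=0.1):
    """A lone ant compares noisy perceptions of both nests and picks whichever seems darker."""
    return 'A' if random.gauss(dark_a, noise) > random.gauss(dark_b, noise) else 'B'

def colony_choice(dark_a, dark_b, noise=0.1, quorum=20):
    """Recruitment race: ants already committed to a nest try to recruit nestmates,
    succeeding with probability equal to that nest's perceived darkness. Bigger camps
    get more recruitment attempts (positive feedback); the first camp to quorum wins."""
    committed = {'A': 1, 'B': 1}                     # one scout has found each nest
    while max(committed.values()) < quorum:
        total = committed['A'] + committed['B']
        nest = 'A' if random.random() < committed['A'] / total else 'B'
        perceived = random.gauss(dark_a if nest == 'A' else dark_b, noise)
        if random.random() < min(1.0, max(0.0, perceived)):
            committed[nest] += 1                     # one more recruiter for that nest
    return 'A' if committed['A'] >= quorum else 'B'

def accuracy(chooser, dark_a, dark_b, trials=2000):
    """Fraction of trials in which the truly darker nest (A) gets chosen."""
    return sum(chooser(dark_a, dark_b) == 'A' for _ in range(trials)) / trials

if __name__ == '__main__':
    random.seed(1)
    for dark_a, dark_b, label in [(0.52, 0.48, 'tiny difference'),
                                  (0.90, 0.30, 'big difference')]:
        print(f"{label}: individual {accuracy(individual_choice, dark_a, dark_b):.2f} "
              f"vs colony {accuracy(colony_choice, dark_a, dark_b):.2f}")
```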
What if you give the ants a more complex problem?
Sometimes you have trade-offs where there’s no obvious right answer. You’ve got nest A which is darker than nest B but has a larger entrance, so you can’t have it all. That’s a very hard decision to make, for any entity — for humans, for animals, for colonies.
The interesting thing here is that there are some well-known “decoy effects.” If you’re faced with a tough decision and a third option is presented which is clearly worse than one of the first two, that can make the option it’s compared to more appealing. In formal economic terms, that kind of behavior is irrational, because the third option — which is obviously a terrible choice — should not influence how someone rates the other two options. But humans do this all the time.
And ants?
Individual ants do the same thing. You have two options that involve a trade-off, and a third option that is clearly worse than one — the same high light level, say, but a larger entrance too. It makes one of the other options, the bright nest with the small entrance, look much more appealing to the individual ants. If you put an individual ant in a box by herself, she'll behave just like a person and switch her preference when the decoy is present.
But when we give exactly the same challenge to the colony as a whole, they behave as the economists say they should — they ignore the decoy. The colony is more rational because no single ant visits all three options and goes through the comparisons. Instead, some ants visit the first option, totally different ants visit the second, and still other ants visit the third. That means the assessment stage, where the colony is getting data about the quality of each option, is totally independent, which is what the economists say you should do.
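Why should independent assessment protect against the decoy? A toy comparison, invented for illustration rather than taken from the ant studies, makes the logic visible: suppose a lone chooser scores each nest by its advantages and disadvantages relative to the other nests on offer, counting disadvantages more heavily (a loss-aversion-style rule sometimes used to model context effects in people), while a colony-style chooser scores every nest on an absolute scale and ignores the rest of the menu. The nest values and the weighting factor below are made up; with them, the comparative rule flips its preference when the decoy is added, and the independent rule does not.

```python
LOSS_WEIGHT = 2.0   # assumed factor: disadvantages count double (made up for this sketch)

def comparative_score(option, choice_set):
    """Individual-style evaluation: score a nest by its attribute-by-attribute
    advantages and disadvantages relative to the other nests in the set,
    weighting disadvantages more heavily."""
    def f(diff):
        return diff if diff >= 0 else LOSS_WEIGHT * diff
    rivals = [o for o in choice_set if o is not option]
    return sum(f(option[a] - r[a]) for r in rivals for a in option) / len(rivals)

def independent_score(option, choice_set):
    """Colony-style evaluation: each nest is assessed on an absolute scale by
    different scouts, so the rest of the choice set never enters the score."""
    return sum(option.values())

def best(choice_set, scorer):
    return max(choice_set, key=lambda o: scorer(o, choice_set))

# Attribute scores are on a 0-1 "goodness" scale: higher = darker / smaller entrance.
A = {'darkness': 0.90, 'small_entrance': 0.25}   # dark nest, big entrance
B = {'darkness': 0.30, 'small_entrance': 0.80}   # bright nest, small entrance
D = {'darkness': 0.30, 'small_entrance': 0.50}   # decoy: as bright as B, bigger entrance than B

names = {id(A): 'A (dark, big entrance)',
         id(B): 'B (bright, small entrance)',
         id(D): 'D (the decoy)'}

for scorer, style in [(comparative_score, 'individual-style (comparative)'),
                      (independent_score, 'colony-style (independent)')]:
    print(f"{style}: without decoy -> {names[id(best([A, B], scorer))]}, "
          f"with decoy -> {names[id(best([A, B, D], scorer))]}")
```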
So the ant colonies are more rational than people are?
In this formal sense, they are.
What about learning and memory? Do colonies have an equivalent of that, too?
There’s been less work done on this. One study we did found a relatively subtle effect. We made colonies move again and again and again. For some colonies, the feature that distinguished better from worse nests was smaller entrance size — that was always the most informative attribute. For other colonies, the attribute that distinguished good from bad nests was lower light level. So the two sets of colonies were making decisions again and again, but using different criteria.
After that, we gave each colony a test choice between two nests: A was better in entrance size, and B was better in light level. We found that colonies that had been using entrance size as their criterion now weighted entrance size more heavily, and the ones that had repeatedly been using light level now weighted light level more heavily. So they learned something in making these repeated decisions.
But does that mean a bunch of individual ants learned, or did the learning somehow emerge at the colony level? We haven't figured that out yet. To do that, we'd have to run more experiments with individually marked ants, which is a lot harder.
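One simple way to picture this "weighting" is as a pair of attribute weights that get nudged upward whenever an attribute carries the useful signal during an emigration, and that then feed a weighted-sum choice at test time. The sketch below is a toy model with an invented update rule, learning rate and nest scores; the experiment only establishes that the preference shifts, not that anything like these explicit weights exists in the ants or the colony.

```python
def train(informative_attr, rounds=10, lr=0.2):
    """Toy learning rule: each emigration nudges up the weight of the attribute
    that reliably separated good nests from bad ones in that environment."""
    weights = {'small_entrance': 1.0, 'darkness': 1.0}
    for _ in range(rounds):
        weights[informative_attr] += lr
    return weights

def test_choice(weights, nest_x, nest_y):
    """Test: pick the nest with the higher weighted sum of attribute scores."""
    def score(nest):
        return sum(weights[a] * nest[a] for a in nest)
    return 'X' if score(nest_x) > score(nest_y) else 'Y'

# Test nests: X wins on entrance size, Y wins on darkness (0-1 "goodness" scores).
nest_x = {'small_entrance': 0.8, 'darkness': 0.4}
nest_y = {'small_entrance': 0.4, 'darkness': 0.8}

for attr in ('small_entrance', 'darkness'):
    w = train(attr)
    print(f"trained on {attr}: weights {w} -> picks nest {test_choice(w, nest_x, nest_y)}")
```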
Psychologists often speak of personality. Do ant colonies have personalities?
We just published a paper on a different species of ants that lives in mutualism with trees, where the trees provide nutrients and living spaces for the ants and the ants in turn defend the trees from herbivores. It’s a very well-established mutualism, very intimate. My student has found a continuum from very bold, aggressive colonies that react strongly to any challenge (by herbivorous insects or climbing vines, for example), to ones that are more shy. Trees that have the bolder colonies are less frequently attacked by herbivores. So differences in personality affect not just the colony itself, but also its mutualistic partner.
Is this “colony as brain” approach unique to social insects, or can you apply the same insights to other group-living animals, such as herds of cattle?
I think the assumption of a high degree of common interest is important to this idea of a superorganism, the expectation that it acts as a unitary cognitive entity. In other societies where the integration is not as tight, it might be less fruitful to treat them as a single mind. But there are evolved mechanisms of cooperation and information-sharing in other organisms, so there's potential there to learn something from them.
What about humans? Can we learn something from the ants?
Obviously, there are a lot of differences between insect societies and human societies. But if we find commonalities in how they behave despite those differences, maybe that's revealing.