Work Culture

What happens when AI joins your team?

Published on July 10, 2025

Ever get the sense that one of your coworkers is using AI on the down low? Do they suddenly love the em dash? Did you catch them using words like “certainly” or “delve”?

According to a Gallup poll from mid-June, there’s a decent chance you’re on to them. Nearly 20% of U.S. employees now use generative AI at least a few times a week, almost double the share from two years ago. Daily use has also doubled in the past year, rising from 4% to 8%. And while further increases could be a boon for businesses (McKinsey estimates that wider adoption could add up to $4.4 trillion to annual global productivity), what about for their employees? What actually changes about the way you work when AI joins your team?

Harvard researcher Fabrizio Dell’Acqua, PhD, has been teaming up with the brightest minds in AI research (like Ethan Mollick at the University of Pennsylvania’s Wharton School) to find out. His most recent working paper—The Cybernetic Teammate—had 776 professionals from Procter & Gamble work on real product innovations individually and in teams of two, both with and without help from ChatGPT. The results are fascinating. Individuals with AI were able to match the performance of teams without AI, which seems like a pretty big deal. Even more shocking: Individuals working with AI also reported some of the emotional benefits typically attributed to human collaboration. But the highest-quality outputs—the top 10% of ideas—came from teams that used AI, suggesting that you still need human teamwork to achieve the highest degree of excellence. 

In this conversation, Dell’Acqua helps unpack some of the big challenges still ahead for AI adoption—both from an organizational perspective, and from the perspective of someone who just received an email from a colleague that contained the word “moreover.”

One recent study from Microsoft found that 75% of knowledge workers are using AI, but I’ve seen all kinds of numbers. Where are we in terms of wider adoption?
If what you're measuring is whether people have used it, then adoption is very high. Inside universities, I'd be surprised if fewer than 98% of students have used it. That’s traditional adoption. But then there is sustained use over time, and that’s actually pretty uneven. There are companies where a lot of people have tried it—but if you actually check whether they regularly use it, it's very low.

Why do you think that is?
There aren’t always very clear use cases. There is a bit of a distance between the huge potential [and what we’re seeing in practice]. That’s also true in the literature on the AI productivity paradox: We have hundreds of billions of dollars in investment, but when you check activity statistics in the US economy, they haven’t spiked. We don’t see [meaningful] productivity gains yet.

It took 30 years for US manufacturers to restructure their production processes when electricity was first introduced. The first approach was to substitute electricity for coal-fired steam power, which brought some advantages. But then they restructured the whole process. It’s similar with AI: to write an essay for a class, the productivity benefit is clearly there. But for a knowledge worker’s workflow, it’s a bit harder. You may need to rethink how teams are structured.

It's very hard, even for experts, to know exactly which tasks an LLM will be especially good at. So we have these huge productivity benefits, above 30% or 40% in some areas. At the same time, with other tasks in the same workflow, we have a productivity drop.

"We need to rethink our processes of learning now that this technology is available," says Dell’Acqua

When I use AI, I find the actual process of learning doesn't go any faster. I still have to get the information in and process it and make the connections that I can only really make by reading and writing. I learn best while writing. Could relying on AI slow human learning?
That's a crucial observation. There is a claim that AI will actually reduce human capabilities. This happened with technology like GPS. Some people don’t walk two blocks without using Google Maps, right? By extension, some of this may happen with AI. Do we need to know how to write citations or bother with spelling? No. But there is this quote by [chess grandmaster] Magnus Carlsen that I like. He has said AI inspired changes in his strategy, and he adjusted his game. He was at the very peak of human potential. But he can go a little bit higher, right? 

We need to rethink our processes of learning now that this technology is available. In the short term, it may actually be negative for learning. The question is, what do we need to learn? In theory, we should learn something that is complementary to AI skills, so that the human-AI combination is greater. But it’s very hard to predict.

In the Cybernetic Teammate study, you suggest that there’s a switch underway in the role of AI: from tool to teammate.
As long as you think of these as tools, you're going to find a lot of limitations. Of course you can do things that you would have done with Excel even better. Great. But this goes beyond that. We’re seeing the declining cost of expertise. When expertise is more accessible, the skills that knowledge workers will need are very different. It's much less about direct knowledge. It's more about being able to ask the right questions, to validate, to push back when needed.

I’ve done some research on humans collaborating with AI and “falling asleep at the wheel.” That’s when performance goes down because humans using AI start to copy-paste answers without checking. They disengage and don’t use their skills. Avoiding that trap will be an important skill.

"From the perspective of the individual knowledge worker, the question is what to do yourself and what to allocate to AI," says Dell’Acqua

Let’s talk about what happens when one of your teammates is literally AI. In another experiment you ran, the “Super Mario” study, human teams played a game on Nintendo Switch where the goal was to collect ingredients for a recipe. After six rounds, some teams replaced one human with an AI teammate. At first, performance on the teams with an AI teammate dropped. Over time, however, the human teammates adapted to the AI and their scores recovered. What does this tell you about the future of work environments where we’re collaborating alongside AI teammates?
Even when introducing a new human teammate, there’s inevitably some disruption. We introduced AI agents that outperformed 97% of humans at the game, and we saw a short-term [performance dip]. This brings us back to the point about restructuring workflows: This is a disruption. There can definitely be negative short-term effects, especially on teamwork. The other point in the Super Mario paper is that in a relatively short time frame, the teams with an AI teammate managed to recover. They restructured and adapted. Ultimately, scores may even improve further, once the AI’s capabilities are fully incorporated.

This is a classic problem. You have some adjustment costs that will be paid just by introducing new technology. And depending on who pays them, and what the structure inside the company is, the process may take more or less time.

What do these findings say about the way that knowledge work is changing in the AI era?
From the perspective of the individual knowledge worker, the question is what to do yourself and what to allocate to AI. We have seen two kinds of modalities: the centaur and the cyborg. The cyborg does everything through AI. The centaur picks and chooses. 

We’re probably going towards a world where more people adopt the cyborg approach. The important thing is not to shut off your brain. Check, validate, push back. In the Cybernetic Teammate study, the best workers prompted 15-20 times before they actually got to the solution. Don’t just ask a question and copy-paste the answer.

This interview has been edited and condensed from two conversations.