Working Smarter

Episode 1: Kate Darling on human-machine partnerships

May 1, 2024
"It's so clear to me from the research that people are going to treat these things like they're alive. We can't stop that. But what we can do is shift people's thinking a little bit—and animals are such a great example for a non-human that we have been interacting with for a long time."

For our first episode of Working Smarter, we’re talking to Kate Darling, a research scientist at MIT’s Media Lab and author of The New Breed: What Our History with Animals Reveals about Our Future with Robots.

Darling has spent more than a decade studying human-robot interaction through a social, legal, and ethical lens. She’s interested in how people relate to robots and digital constructs, socially and emotionally—whether it’s an AI-powered chatbot or one of the many robotic dinosaurs she keeps in her home.

Hear Darling talk about the bonds we’re already forming with our smart—and not-so-smart—devices at work and at home, and why our relationship with animals might be a better way to frame the interactions we’re having with increasingly intelligent machines.

Show notes:

~ ~ ~
 

Working Smarter is a new podcast from Dropbox about how AI is changing the way we work and get stuff done.

You can listen to more episodes of Working Smarter on Apple Podcasts, Spotify, YouTube Music, Amazon Music, or wherever you get your podcasts. To read more stories and past interviews, visit workingsmarter.ai.

This show would not be possible without the talented team at Cosmic Standard, namely: our producers Samiah Adams and Aja Simpson, technical director Jacob Winik, and executive producer Eliza Smith. Special thanks to Benjy Baptiste for production assistance, our marketing and PR consultant Meggan Ellingboe, and our illustrators, Fanny Luor and Justin Tran. Our theme song was created by Doug Stuart. Working Smarter is hosted by Matthew Braga.

Thanks for listening!

Full episode transcript


I am the proud caretaker of two large, beautiful cats: Kubrick and Moose.

Kubrick is orange and white and extremely fluffy, while Moose is your typical salt and pepper tabby. They are adorable, and I love them, but that love comes at a cost.

My home is perpetually covered in hair.

It's on my bed, on my clothes. I find clumps of it just floating, mid-air. I watch, each day, as cat hair tumbleweeds migrate across the floor.

So, a few years ago, I did the only sensible thing. I bought a robot vacuum. It’s a squat, round disc, and I call it: Eufy. It’s literally just the brand name, but, eh—it stuck.

Eufy can’t do everything. It struggles with my rug. It can’t clean the couch—and my cats love the couch. But as in any good partnership, we each have our strengths. Eufy can slip underneath various obstacles with relative ease. It’s methodical when it comes to the baseboards. And, most importantly, it keeps the tumbleweeds at bay.

It means that when I do have to vacuum, there’s less for me to do—and it takes less time than I’d otherwise spend.

If you listened to our last podcast, Remotely Curious, then you know that this is something we think about a lot. Not robot vacuums, per se, but having the right tools to do our jobs.

At Dropbox—where I work—we’re on this mission to design a more enlightened way of working. Early on, that meant making it easier for people to store their stuff in the cloud. More recently, it’s meant rethinking the very nature of how and when and where we work—not quite hybrid and not quite remote, but an approach we call Virtual First. 

And now with artificial intelligence, or AI, we think we can finally build the tools we’ve been dreaming about all this time.

The kinds of tools that can help us find exactly what we need, when we need it—and maybe even before we know what we’re looking for. Tools that can keep us organized, and help us find focus—that can take all the repetitive, tedious tasks off our plates and leave us more time for work that actually matters.

Creative work. Impactful work. Human work.

Obviously, we think there’s a lot of potential here—whether you use Dropbox or not. And so we thought, why not start a new show, where we ask founders, researchers, and engineers about the things they’re building and the problems they’re solving with the help of AI?

We’re calling it Working Smarter, and it’s a podcast about how AI is changing the way we work and how we get stuff done. How people write, how they run their businesses—even practice law. And not just in the future. We’re talking about stuff that people are already doing and thinking about today.

Because we want to help you work smarter, too.

My little round assistant isn’t particularly smart. It doesn’t have any cameras, and it doesn’t connect to Wi-Fi. It certainly doesn’t have AI.

I know it’s just a robot. But I’ve become quite fond of Eufy all the same.

Sometimes it gets stuck on a slipper or a cord, and I’ll say “Oh, Eufy!” like it’s a mischievous puppy or a wayward child. Once, when Eufy’s battery failed, it made the saddest, most plaintive chime each time I tried to turn it back on. 

It really bummed me out!

And if that’s the way I feel about a vacuum, what happens when our assistants become even more advanced? As our apps and devices start to behave even more like…us?

I’m your host, Matthew Braga, and today’s episode of Working Smarter is all about the leap from working on screens to working with our machines.

I’ll be talking to Kate Darling, a research scientist at MIT’s Media Lab, who has spent more than a decade studying human-robot interaction through a social, legal, and ethical lens.

Kate is interested in how people relate to robots and digital constructs, socially and emotionally—whether it’s a chatbot, or one of the many robotic dinosaurs she has in her home.

What do recent advances in AI mean for us, our workplaces, and society at large? That’s coming up next on this episode of Working Smarter.
 

~ ~ ~


Kate, thank you so much for joining us today.

Thank you so much for having me. 
 

People have worked alongside physical robots in all sorts of contexts for quite a while now—you know, in warehouses, assembly plants, even at home with our robot vacuums. I have a robot vacuum. I'm wondering, though, does it feel like we're at a similar tipping point when it comes to digital bots and digital work? Like, the kind of work that's done primarily on a computer screen?

There's all this research showing that people will develop emotional connections to even the very, very simple robots that we have right now. But I think what we've really seen in the past year or two with some of these new AI applications is that, with the new language capabilities of artificial agents, it's becoming much more obvious that people treat automated technology differently than other kinds of machines.

And that oftentimes we don't just treat it like a tool, but we'll also treat it like a social agent. And so I feel like we just reached that tipping point in the AI world where people are starting to understand that we treat these things like agents. And I think robotics now has to catch up a little.
 

Oh, interesting. Can you elaborate a little bit on that? How so? 

Well, I do think that robots have this special effect on people because, in addition to mimicking cues that people kind of recognize, they're doing this on a physical level, which is much more visceral to people. But robots still aren't very sophisticated in what they can do. And so we're seeing this proliferation of chatbots and virtual agents, and there's going to be mass adoption of these systems in that realm.

In order to really get robots into homes and workplaces in a way that is more ubiquitous, I think the technology has to get a little bit cheaper and a little bit better. We're just not at that place yet because it takes so much more to interact with the physical world than it does to have something on a screen.
 

Right. Now, I'm curious: why study this? Like, why does it matter how we interact with machines, whether they're designed to look like us or merely exist as software, like some of the bots that people have been playing with recently?

I think it matters because, you know, as these machines come into more shared spaces—which is happening—I think it's really important to understand and anticipate that people treat them differently than other devices. I think that helps with the design of the technology. It helps with understanding how to integrate it.

In robotics, the whole field of human-robot interaction has known for quite some time now that if you design a robot to look or feel in a way that people don't like, or that feels a little bit off to them, they're going to hate that robot way more than a different kind of machine that does the exact same thing but isn't robotic.

The fact that people treat these things like they're alive and project agency onto them is also something that you can harness to get people to really enjoy interacting with a robot.

I also think it's important to understand these projections that we have, because we're constantly comparing the technology to human ability and skill because we are anthropomorphizing it, because we are projecting human-like qualities and traits and behaviors onto it.

And I've always said that that's just the wrong comparison when we're thinking about robotics and AI and automated systems, because it really limits us in terms of thinking about some of the possibilities for what we could be building.
 

A recurring theme in your writing and your speaking is that robots don't have to look like us. They don't have to act like us. I know that you wrote a whole book about this called The New Breed where you suggest a better analogy might be to think of robots as more akin to animals or pets. Why do you think that?

Because it's so clear to me from the research that people are going to treat these things like they're alive. We can't stop that. But what we can do is shift people's thinking a little bit—and animals are such a great example for a non-human that we have been interacting with for a long time.

We've used animals for work, for weaponry, for companionship. Animals have autonomous behavior. We've partnered with animals, not because they do what we do, but because their skill sets are different and that's really useful to us. It seems like that would be a much more fruitful way to be thinking about some of these automated technologies that we're seeing.

Because despite some of the incredible advances in AI that we're seeing, I would still argue that artificial intelligence is not like human intelligence. And I would also argue that that shouldn't be the goal in the first place. Like, why are we trying to recreate something we already have when we can create something that's actually different and useful to us?

So I feel like the animal analogy opens people's minds to other possibilities for what we could be doing with the technology.
 

Do you think that analogy applies equally as well to digital bots as it does to physical ones? 

I mean, I think it does. Obviously, it's not a perfect analogy, and I'm not trying to say that robots and animals are the same or that we should treat all of these artificial agents just like animals. 
 

Of course.

It's just, like, the idea that we could be partnering with these things in what we're trying to achieve.

And I do think that that holds in the digital realm as well. Of course, it's become more difficult. So I wrote this book before LLMs were a thing, right? So now we have this language element, and that's something that animals lack. And so it's a little more difficult to get people away from this, like, comparison to people when you have something that can interact with you on the language level.

But I still think it's true. I still think these systems have capabilities and, like, immense potential and advantages that are different from what we are able to do. And if we could be leaning into that space of difference, I think we could have a much more creative way of designing and using these systems that isn't just trying to replace what a person does.
 

Definitely. And you've hit on something that I find very interesting. I mean, you used the phrase “agents” a second ago. There are so many different names that people are using to refer to the tools and the apps and the systems that have emerged over the last year. I've seen assistants, agents, co-pilots, collaborators, companions. What do you make of this struggle, this lack of consensus, to accurately characterize what it is that we've been building?

It's such an interesting time to be watching what's unfolding because I do think that something robotics has long struggled with is: how do you set user expectations in the right way?

It's such a challenge because people project so much of human behavior onto the systems that they have certain expectations. I feel like the way that we describe the system is a great way to frame it for people and frame those expectations. It's not clear to me that the people putting the systems out into the world are always thinking as deeply about how they're framing it as they should.

Whether you call something a companion or you call something a co-pilot, what you want to do is set the user expectations so that they understand what the technology can do and can't do so that they're not ultimately disappointed. 


I think the optimistic case is that these things can be collaborators. They can actually be co-pilots. Maybe they can't replace a co-worker, but they can fulfill the sorts of tasks you might have turned to a real person for before.

And I'm wondering, what are some of the things that need to happen before we can begin to not only accept but even trust these kinds of agents to do the things that we want them to do?

Yeah. I mean, it's funny. I feel like we already maybe trust the agents too much. Well, it kind of depends on the context. 
 

Sure. 

One of the things that has been observed in human-robot interaction—there are some limited studies on it—is that when people are interacting with a social robot, not only do they treat it like a social agent and interact with it as though it has an inner state and a mind and stuff, they also—because they're nevertheless aware that it is a robot—feel less judged by it, and they are more willing to tell it things than they might even be willing to tell a human.

So there's this weird space of difference, because people understand that they're interacting with something that isn't alive, and they trust it in certain ways. Like, we trust computers to be really good at math, for example—better than asking your neighbor to do a calculation.

We might not trust them as much with relationship advice—although I think that that's going to start changing, because the LLMs are pretty good at even giving people personal advice like that.

I think that we do need to figure out a way to make the systems trustworthy, so that people aren't trusting them too much, or giving them more information than they realize they're giving, only for that information to then be misused. Because, of course, they're not talking to a dog or a neighbor. They're talking to a corporation that is collecting data to improve the capabilities of the chatbot, but also maybe collecting personal data that could be used in other ways—not in people's own best interest, but, you know, in someone else's best interest.

So I think we need to have some protections so that people understand how their data is being used and is not being used against them.
 

Is it the fact that these systems, at the moment, sound so much like us, and we're able to interact with them in a way that is so much like we interact with other people, that makes us so much more trusting of them?

Yeah, I think that's part of it. It's happened with much simpler systems, though. I mean, there was a chatbot called ELIZA back in the ’60s that Joseph Weizenbaum created at MIT, and it was extremely simple. It would just answer everything you said with a question.
 

Like it would flip it back around on you.

Yeah. It would be like, "Well, how do you feel about that?" And people would tell it all sorts of things, right? So it doesn't take much. But now I think it's just becoming more salient to everyone, because everyone has more experience. Everyone has now interacted with ChatGPT or tried it out for themselves, and so I think people are actually seeing that this is a thing.
 

You're also making me think of how Google search does not look anything like another person, and that hasn't stopped tons of people from asking it very sensitive health-related questions thousands, even millions, of times a day. So, I guess it doesn't matter necessarily in that sense either.

Earlier I mentioned that we've obviously had physical robots in workplaces for quite some time now, and we've developed rules, guidelines, ways in which we work with these robots in those kinds of capacities.

In the purely digital space, what are some of the questions we're going to have to ask ourselves as more intelligent agents, assistants, whatever you want to call them, become bigger parts of our workplaces and how we work?

Well, one question I have is: how much of the effects of introducing automated technologies into the workplace are actually about the technologies and their capabilities, and how much are about the political and economic system surrounding them?

The Luddites always get a bad rap for being anti-technology. But if you go back and look at what the Luddites were actually protesting, it was employers using new technologies as an excuse for poor labor practices. And I think that's something that is somewhat observable if you look at automation in some of the areas where we've had it for a few decades—in countries with strong labor protection laws, it's not as much of an issue; there's less job loss and less misperception of the technology. And in countries where there's not a lot of labor protection, there's a lot more disruption, and depending on whether you care about workers or not, you might find that a good or a bad thing.

But I do think that one thing to be really mindful of that we don't talk enough about is that it's not just technology being deployed. It's technology being placed into an existing system. And depending on how that existing system is set up, that's going to have huge effects on what happens. I don't know if that makes sense. 
 

I think so. And I think it kind of tracks with some of what I've seen you write and speak about in the past, which is that we shouldn't be thinking about these agents as “how do we replace jobs or replace people with these agents?” but “how do we help people essentially augment the tasks that we already do, or do things differently?” Like, work with them, in a sense.

Is that a better way to think about it? How to use these tools in a way that helps to amplify or augment the skills that we already have rather than simply just replace us?

I mean, one of the things that we've seen in manufacturing and automation, where we've had robotics for some time now, is that you can't fully replace people, because the skill sets that humans have are so different from the skill sets of the robots, and it's really so much more effective if you can harness both of those.

So I do think that augmentation, finding new ways to help people be more productive rather than automating them away, is not only a better future because I like it better, but it also makes sense from a business perspective.
 

What are some elements of your day-to-day work that you wish could be assisted or changed with some of the autonomous agents that we're starting to develop now?

Email. 
 

Email? 

Email. Yeah.
 

How so? 

Oh just like—I know that this is probably not what's going to happen, because when it becomes easier to answer email, then just more email will happen, right?
 

Of course.

Or we'll just have, like, bots emailing each other, trying to sound as human as possible with no human in the middle.

I think there's still a lot to be said for human judgment and human creativity. I mean, yes, generative AI could come up with ideas and could be a great tool. But David Autor at MIT did some work—or his lab did some experiments—where they paired people doing creative writing with an AI tool. And what they found was not that the AI was able to replace human skill if someone had no training, but rather that if you had the skills in advance, it would enhance what you could do.

So I think that that speaks, again, a little bit to the fact that there is a human skill set that we bring to the table. That, if we could use the machines to enhance it, we're going to see much better results than just trying to recreate that skill set in the AI system.
 

Yeah. That makes a lot of sense. 

I understand that you have a number of robots in your home already. I've read you have seals, you have dinosaurs, you have robot dogs. What have you learned from living with these robots that perhaps other people are about to discover as they welcome more digital bots into their lives? 

Not only do I have robots, I also have small children. My son is six, my daughter's two and a half, and it's just… it is so obvious how intuitively they respond to them—and, by the way, animals too. Like, my kids, the pets around us, they will all treat the robots like living things, no question.

They're getting used to Amazon's Alexa, and they know how to issue commands, etcetera. But just the fact that Alexa is in a stationary thing that doesn't move around—they treat it so differently than the robots. It's really interesting to me, having read all the research and had my personal experiences, to see my kids completely validate everything that I've been arguing for so long. Because it's very clear that the robots are in such a different realm than any other kind of machine.
 

Where did these robots come from? The ones that you have in your home.

Oh, gosh. Well, some of them I bought. Some of them people have sent me. Some of them are, like, left over from, like, studies that I've done. But yeah, if anyone wants to send me a robot. [Laughs]
 

You’ll add it to your menagerie.

I will use it.
 

You also co-authored, I think, a really interesting paper, published in early 2023, that looked at what you were kind of referring to a second ago—whether people could still form bonds with robots that don't resemble us, right? And I'm wondering if you can elaborate a little bit on what you found from doing that research?

Yeah, so that's a project that I advised on. This was a very talented student who did all of the work for it. And it was really cool. Like, he created a robot that—it's not even a robot, it's an artifact that has, like, the minimum viable social affect.

So it's just a box that had like a little smiley face on it. 
 

Uh huh.

And he gave it to people and, like, it had a phone number on it so people could take it with them and they could text the box's owners—or parents—what the box was doing. And they could also pass it along to someone else. And so he tracked these artifacts and how people interacted with them in a really social way even though, yeah, they didn't look super special. Like, they had just enough of a cue to get people to be like, “Yeah! The artifact is enjoying being outside in the sun.”

Like, it's amazing how little it takes to get people to socialize with something—and we see that with the Roomba too. I mean, the Roomba is just a disc, and 85% of people name their Roombas.
 

Yeah. I mean, you're making me wish that I gave mine a more unique name than its brand name, but…

Well, it makes you unique, right? You're in the, like, 15%.
 

You've been researching how humans and machines interact now for well over a decade. What has surprised you most about this present moment that we're living in? 

I love this present moment. I think even I was surprised that engineers will work so hard to create a system that is, like, robust and reliable and safe enough to put out in the world. And like, I've seen people spend decades on huge feats of engineering and put out an amazing product from an engineering perspective, and just not at all take into account how people are going to react to it.

And suddenly, they are facing all of these issues with deployment, whether that's PR issues or people hating the robot. And it's just fascinating to watch how important human-robot interaction is, and how important it is to understand our psychology around robotics, or even these AI systems. Because the research has been very clear, but now we're seeing it in action, and it's pretty funny.
 

Well, not only are we seeing it in action, but I'm curious—you mentioned this a little bit a second ago—you have two young kids who are starting to engage with this technology as they grow older. How do you think this is going to figure into their lives as they continue to grow up?

Oh, it's wild to me that they're not going to remember a time before they could have actual conversations with devices. I can't believe—like, I did not predict the most recent developments in AI. I don't think anyone did. Even the people working on it didn't predict it.

And it's just… I think it's a game changer. I don't make predictions anymore, but I do think that my kids are going to live a very different life. So I was working with a student, Daniela DiPaola, who did some interesting research where she showed that kids who interact with a social robot—they'll anthropomorphize it, and they'll treat it like a social agent, and they'll tell it stuff and whatever. And you can teach them exactly how the thing works, and that makes no difference in how they're willing to interact with it.

They don't care. They treat the robot like a friend. 
 

Well, and to that point: as people who grew up before this age, before this era, we bring our own preconceived notions about what robots are and how they should act and how they should look and how they should be. And your kids don't have any of that yet, and I wonder, how is the way they interact with some of these systems different from how you see adults interact with them, especially in some of the research you're doing?

The funny thing is that I don't see much difference in the behavior between adults and kids. I see differences in what people claim to think, or how they claim to behave. 
 

How so?

Well, in human-robot interaction there's a lot of experimental work, and it's a known thing that you can't have someone interact with a robot and then ask them about it. That's not good data, because people will justify their behavior. They'll try to rationalize why they treated the robot like a living thing or why they did XYZ, and they do that because they feel sheepish about it subconsciously, or whatever it is. You have to have a behavioral measure.

My kids don't feel self-conscious in that way, so you could probably ask them and they would be very honest, but… I think that the behavior is the same! 

Like, I was just in the office the other day and there's this stupid robot. It's a very successful product. It's the Hasbro Joy For All cat.
 

Ok.

And it's like—it's great. I mean, it's cheap enough that people can buy it. It's less than a hundred dollars. And it's just like this cat and it purrs and it says “meow.” And I was watching one of the roboticists interact with it, and how he was, like, stroking it and treating it like a cat. And this is like a hardcore engineer. It's just, it's so funny to watch people's behavior. Like, this is so ingrained in us.

We can tell ourselves as much as we want that it's not rational. It's still going to happen.
 

You mentioned a second ago how people interact with some of these things almost a little sheepishly, and I wonder: are people going to be sheepish about, you know, asking their autonomous assistant, their AI agent, to clean out their inbox? Like, “Hey, can you correct all the fields in this Excel file I'm working on?” All that kind of drudge work that I think people are really excited for this stuff to take away. Are people going to feel a different way about actually having these systems do that in practice?

What we've seen so far from even very primitive AI systems is that people will say “please” and “thank you.” I talked to this company a while back that makes an AI-powered virtual assistant, and they said that people would send gifts to the office for the virtual assistant. And that's not even, like, advanced technology! Now that we can actually have a deep conversation with, like, a chatbot, I think people will still ask the assistant to do the drudge work, but they might be more grateful than rational.
 

I'm thinking about my own life, and it's not even work, but I have a window that has some automated blinds in it that I can control with Siri. And I'll say, “Siri, can you close the blinds?” And then after Siri closes the blinds, I'll just instinctively, reflexively say, “Thank you! Thank you for closing the blinds.” And it feels so silly, but I don't know, I also kind of… I kind of like it.

I think it's nice. I always say, when I see a child being nice to a robot, or even a soldier bonding with a robot on the battlefield, I don't see people being silly. I see people being kind. I think that instinct to socialize with or be nice to an artificial agent is not something that we need to beat out of ourselves, unless it's something that starts getting taken advantage of.
 

Right. I imagine you've been doing a lot of interviews that touch on LLMs and a lot of the agents that we've been seeing more and more over the last year. What do people often—and when I say people, I guess I just mean the general public—what do people often not understand or get wrong about what's being built so far?

I think that so many people right now are interested in how these systems can help their business. Or even me, who's like, oh yeah, how can this system help me write emails or whatever. But I think what we're underestimating, or not talking enough about, is how game-changing this is going to be for human-machine relationships in general.
 

How so? 

Because people are going to socialize with these systems. There are going to be systems that are specifically designed for that. I mean, there already are, like, artificial companions. And then there are also just people who are already finding companionship in talking to ChatGPT. And I think that… I don't even know all of the effects that that's going to have. But I do think it is a profound shift in how we interact with machines, and it's something that we should be thinking more about.
 

Can I say "how so" again? 

Sure. I feel like even in the workplace, rather than thinking about, oh, you know, “Can we give our workers a chatbot to help them be more productive?” I think we need to also be thinking about the social and the interaction element. Is there a way that we can harness the ways that people will treat the system more like a work colleague than a tool? Could that be helpful in some areas? Does that raise any ethical questions? 

There was a company that came to me a long time ago and said, “We have this internal chatbot, and it helps onboard people.” And they noticed that one person was extremely verbally abusive to the chatbot. And not just, like, once in a testing-the-limits kind of way, but repeatedly, over the course of a longer period of time. And they were like, “Is this an HR issue?”

We don't know if that's an HR issue, but you know what? It seems like a good thing to start thinking about and figuring out. And the fact that this person didn't know, or didn't stop to think, that what he was inputting into the system would be seen by others? I think there's a lot of education that needs to happen around that too. So… Yeah, I think there's a lot to think about.
 

Right, because I guess there’s not that much of a leap from asking an agent for “notes from the meeting that we worked on last week” to asking, like, “Hey, how do I navigate this tricky work situation?” or “How do I better, you know, work with a colleague?” Like the things you kind of instinctively talk to colleagues about the more that you get to know them. I can imagine those are starting to be topics of conversation that we don’t know how to deal with yet.

Yeah. And because there’s this intermediary agent, I think sometimes even the most technically minded people who understand how the system works—even they sometimes forget that the data is going somewhere else.
 

Right. That is a very good point.

Kate, this has been really great talking to you. Thank you so much for joining us. 

Thank you again as well. This has been so much fun. 
 

~ ~ ~
 

Sometimes, when my robot vacuum Eufy is humming along the floor, I try to guess what it’s going to do next. Whether it’ll go left or right, or finally get that spot. Whether it’ll do what I would do.

As my software and my apps get smarter, I find myself playing this game in other parts of my life too. I compare meeting notes with my transcription AI. I feed my search assistant the most tenuous keywords to see what it can find. I’m looking, for lack of a better word, for signs of humanity.

But like Kate says: isn’t humanity a limiting lens?

I kind of like the idea that working alongside our AI helpers could be more akin to collaborating with something distinctly non-human. Something more like…my cats.

Just without the tumbleweeds of hair.

If you want to learn more about Kate and her work, you can visit katedarling.org.

Kate’s book The New Breed: What Our History with Animals Reveals about Our Future with Robots is available now.

We’ll drop a link to both in the show notes. 

Working Smarter is brought to you by Dropbox. We make AI-powered tools that help knowledge workers get things done, no matter how or where they work.

You can listen to more episodes on Apple Podcasts, YouTube Music, Spotify, or wherever you get your podcasts.

And you can also find more interviews on our website, workingsmarter.ai.

This show would not be possible without the talented team at Cosmic Standard:

Our producers Samiah Adams and Aja Simpson, our technical director Jacob Winik, and our executive producer Eliza Smith.

At Dropbox, special thanks to Benjy Baptiste for production assistance and our illustrators, Fanny Luor and Justin Tran.

Our theme song was created by Doug Stuart. 

And I’m your host, Matthew Braga. Thanks for listening.

~ ~ ~
 

This transcript has been lightly edited for clarity.