Animation by Justin Tran

Working Smarter

Can a chatbot learn what makes you laugh?

Published on August 09, 2023

In our new Working Smarter series, we hear from AI experts about how they’re leveraging machine learning to solve interesting problems and dramatically change the way we work for the better. 

Robots have been making humans laugh for years. Long before Bender became a fan favorite on Futurama, Rosey the Robot was throwing shade on The Jetsons, R2-D2 and C-3PO were bringing slapstick vibes between lightsaber fights in Star Wars, and HAL from 2001: A Space Odyssey with those pod bay door pranks? Classic.

So it’s not surprising that so many expected superhuman humor right out of the gate when ChatGPT launched. Some tried prompting it to mimic the styles of stand-up comedians. Others tried performing bits it had written. So far, chatbots have proven impressive at puns and dad jokes, but they have yet to grok the nuances of topical humor and cultural satire.

Some say humor is the “last frontier” for AI. If that’s true, what will it take to get there?

Dr. Julia Taylor Rayz is a Professor of Computer and Information Technology at Purdue who’s been researching the potential to teach humor to computers for nearly 20 years. She started her career as a software engineer, but when coding began to feel like a slog, she became fascinated with solving one of her field’s most enduring mysteries: Why can’t computers recognize jokes?

“The field of humor studies is actually quite old,” says Rayz. “There is a lot of research on what humor is and what we appreciate. You start looking at it and go, ‘How difficult is it? Surely we can come up with an algorithm to detect something like a silly knock-knock joke.’”

New large language models seem to hold the promise of cracking the comedy code. And studies have shown the benefits of humor in the workplace. So when we interact with AI collaborators, could they become even better brainstorming partners if they learn to be as hilarious as humans?

We asked Dr. Rayz to catch us up on the status of computational humor and what funny chatbots might mean for people welcoming AI into the workplace.

Dropbox: Why is it challenging for chatbots to be as adept at humor as they are at other tasks? 
Rayz: Because when we are trying to be humorous, we don't make all of the information explicit. When you’re searching for information, you’re going to give it absolutely everything, right? “I am looking for an apartment in Florence. I need an elevator slash lift.” I'm going to give you both terms just in case you don't recognize one or the other. In jokes, on the other hand, I’m going to give you as little information as possible because part of the fun is putting it all together. If everything is explicit, most people will simply not appreciate it.

“The interaction is such that, if you're following up, it thinks it needs to change its mind and give you additional information.”—Dr. Julia Taylor Rayz

We can write an algorithm like, “Here are two situations. You have to find the two situations.” Then, according to some humor theories, they have to be compatible with each other, but opposed in some weird sense. Something unexpected has to happen. So you have to jump from point A to point B, except I'm not going to tell you how you're going to jump there [or] how the situations relate. So you can posit that A and B have to overlap, and that A and B have to oppose… but how do you get to that information?
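
Her description maps onto the classic overlap-plus-opposition test from script-based humor theories (Raskin’s Semantic Script Theory of Humor). Here is a purely illustrative Python sketch of that test, with hand-coded “scripts” and an invented antonym table; the hard part she is pointing at, recovering A and B from the text of a joke, is exactly what this toy version skips.

```python
# Toy sketch of the "overlap + opposition" test from script-based humor
# theories. Scripts are hand-coded feature sets; the antonym table is a
# made-up stand-in for a real lexical resource.
OPPOSITIONS = {("doctor", "lover"), ("alive", "dead"), ("cheap", "expensive")}

def overlaps(a: set[str], b: set[str]) -> bool:
    """Scripts are 'compatible' if they share enough features (Jaccard > 0.3)."""
    return len(a & b) / len(a | b) > 0.3

def opposes(a: set[str], b: set[str]) -> bool:
    """Scripts 'oppose' if some cross-script feature pair is antonymous."""
    return any((x, y) in OPPOSITIONS or (y, x) in OPPOSITIONS
               for x in a for y in b)

def could_be_a_joke(a: set[str], b: set[str]) -> bool:
    # Points A and B must both overlap AND oppose, per the theory.
    return overlaps(a, b) and opposes(a, b)

# Raskin's textbook example: the DOCTOR script vs. the LOVER script.
doctor = {"house", "visit", "whisper", "patient", "doctor"}
lover = {"house", "visit", "whisper", "secrecy", "lover"}
print(could_be_a_joke(doctor, lover))  # True: shared setup, opposed roles
```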

Which gets us to the advances in natural language processing. At least now we should be able to understand where we can fill in the blank. If I say, “I got up in the morning, and I drove to work,” I did not drive to work in my pajamas. I did a series of activities, like taking a shower and brushing my teeth, and you know that’s going to happen without my verbalizing it. Now there is enough in these large language models to recover the information and fill the gaps… at least the most obvious ones. Then you can go with the chain of thought, correct certain things, and it will get better and better. So it feels like we’re close, right? No.
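
The gap-filling she describes is easy to demonstrate with a masked language model. A minimal sketch using the Hugging Face transformers library (the model choice is illustrative, and this shows the fill-in-the-blank capability she mentions, not how any particular chatbot works internally):

```python
# Recovering an unstated step between "got up" and "drove to work"
# with a masked language model (model choice is illustrative).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill("I got up in the morning, [MASK], and drove to work."):
    # Each guess is a dict holding the predicted token and its probability.
    print(f"{guess['token_str']:>12}  (score {guess['score']:.3f})")
```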

What do you think is holding them back?
I think the interaction is such that, if you're following up, it thinks that you need something extra, or it needs to change its mind and give you additional information. I played with ChatGPT because it's fast. You log in, give it a joke, and ask it to explain why something is a joke. I gave it a very old pun taken from the dissertation of another researcher, a good friend of mine, Kiki Hempelmann from Texas A&M: “It's a dime good deal.” I asked ChatGPT why it’s funny. It was changing its mind so many times. “It's funny because it uses wordplay to create a humorous twist. In everyday language, the phrase ‘a dime a dozen’ is a common expression.”

Twenty years ago, when we tried to write an algorithm, it didn't have background knowledge. It couldn't fill in the blank, but at least it gave you a consistent response. If you asked it 10 times and gave exactly the same input, it would give you exactly the same output.

Granted, the same joke is not going to be funny exactly the same way if you say it 10 times, but at least the reasoning was there. You look at it now and go: What can I rely on? How do I know that the information I get isn’t going to be a random choice out of these four responses until I hear what I actually like? Is it going to have some reasoning that is reliable?
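
That “random choice” has a concrete mechanism behind it: chat models sample from a probability distribution over possible outputs rather than always returning the single highest-scoring one. A toy illustration, with invented candidate explanations and scores rather than real model internals:

```python
# Why the same prompt no longer yields the same answer: sampling.
# The candidate explanations and their scores are invented for illustration.
import math
import random

candidates = {
    "It plays on the idiom 'a dime a dozen'.": 2.1,
    "It substitutes 'dime' into a fixed phrase.": 1.8,
    "The wordplay creates a humorous twist.": 1.7,
    "It subverts the listener's expectation.": 1.2,
}

def sample(scores: dict[str, float], temperature: float) -> str:
    """Softmax sampling; lower temperature concentrates on the top answer."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

# Ask "why is it funny?" ten times: the explanation drifts run to run.
for _ in range(10):
    print(sample(candidates, temperature=1.0))

# The 20-years-ago behavior: deterministic, same input -> same output.
print(max(candidates, key=candidates.get))
```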

In terms of AI in the workplace, what are the potential benefits of teaching computers to detect and generate humor? Could it help workers build trust with their chatbot collaborators?
There is some research showing that people with similar values share similar jokes. So on the human level, there's that bonding element. Is it relevant to a computer? Yes, it potentially can make [you] feel a little bit better about communicating with it. It’s significantly better than where we were even six months ago, but it’s still not quite smooth enough, at least not at the personal level.

It can write text and it is amazing. Often enough, I ask it to edit my own text. Some of the suggestions are very good, but that's on the editing level. In order for it to be humor, it has to be creative, right? I don't want to see the seams, even if it's a computer. I would rather not see a joke than see something that is stale or awful. 

“This paradigm of computers figuring out our sense of humor is going to depend on how open we are with them about why we like something and why we don't.”

Now that some chatbots can recall personal preferences based on previous chats, do you think they could develop a kind of water cooler rapport by learning individual senses of humor?
I'm assuming that we’re talking about a single modality, without other sensors: it doesn’t see my face, doesn’t see when I smile, doesn’t see when my eyes are moving. It doesn't have the necessary feedback to learn what we actually enjoy and what we don't. What it can do is learn what it is that we communicate with it about. But I’m not at all sure that this is going to be a truthful approximation of our interests, or what we care about.

The odds are, if something happens to me that changes my sensitivities to a particular topic, I'm not going to talk to ChatGPT about it. If it tells me a joke based on that, I am probably not going to respond in a way that lets it understand why I am sensitive to it. So this paradigm of computers figuring out our sense of humor is going to depend on how open we are with them about why we like something and why we don't.

Computers have become better at detecting humor over the last five years. Where do you think we’ll be five years from now?
What I would like to see is the ability to reason and explain why something is funny [or] not funny. I want that rationale to be solid. If I got hit by a car tomorrow morning with minor injuries, I [would] probably have a totally different sense of humor about car accident jokes than I have today. So I should be able to plug that variable in, whatever happened to me, and the output should change. I want that decision mechanism.
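
One way to read “plug that variable in” is as an explicit, inspectable profile that both gates the output and states its rationale. Everything in this sketch, names and structure alike, is hypothetical:

```python
# Hypothetical sketch of the decision mechanism Rayz describes: an
# updatable sensitivity profile whose changes change the output, with a
# stated rationale either way.
from dataclasses import dataclass, field

@dataclass
class HumorProfile:
    sensitive_topics: set = field(default_factory=set)

def decide(joke_topics: set, profile: HumorProfile) -> str:
    blocked = joke_topics & profile.sensitive_topics
    if blocked:
        return f"Withhold joke: touches sensitive topic(s) {sorted(blocked)}."
    return "Tell joke: no known sensitivities apply."

profile = HumorProfile()
joke = {"car accidents", "commuting"}
print(decide(joke, profile))                  # Tell joke: ...

# "If I got hit by a car tomorrow": plug the new variable in.
profile.sensitive_topics.add("car accidents")
print(decide(joke, profile))                  # Withhold joke: output changes.
```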

This interview has been edited and condensed for clarity.