In our new Working Smarter series, we hear from AI experts about how they’re leveraging machine learning to solve interesting problems and dramatically change the way we work for the better.
Author and futurist Kevin Kelly has been calling out the limits of thinkism for years. Problems can’t be solved by greater intelligence alone, he says, because analyzing data only gets you so far. To actually solve problems, you have to run experiments—you have to take action.
As the Senior Maverick at WIRED, co-chair of the Long Now Foundation, and author of 14 books, including What Technology Wants and The Inevitable, Kelly has been thinking about the impact of AI for a long time. But as a writer, photographer, and creator himself, he’d rather be experimenting with the latest tools than worrying about worst-case scenarios.
Kelly is well aware of the risks. He just doesn’t believe we’re heading for dystopia. Instead, he’s doubling down on optimism and participating in the possibility of protopia—a better world built on incremental progress.
When he zooms out to look at the world from a 100-year horizon, he’s hopeful about where we’re going because he can see how far we’ve come. In his new book, Excellent Advice for Living—a collection of “little proverbs and telegraphic adages” originally conceived as a gift for his children—Kelly draws on that long-term view and a lifetime of personal lessons.
Recently, Kelly began integrating ChatGPT, Bing, Bard, Midjourney, DALL-E and Stable Diffusion into his creative pursuits. That’s when he realized he’s mentoring his AI co-pilots as well—and seeing his perspective reflected in how they respond. Here, Kelly describes how he’s working with AI to kickstart first drafts, explore text-to-image generation, and envision improbable worlds he couldn’t imagine on his own.
You’ve been using AI to help you create first drafts for years. How has your use of AI evolved since the release of ChatGPT?
One of the epiphanies I had recently is that a lot of the capabilities these AIs are now demonstrating are things they’ve been capable of for years. The real revolution is that now we have a conversational interface. It’s not just language [recognition] like speaking to Siri or Alexa, but going back and forth, having them follow the conversation, being able to ask additional questions and follow up.
That's been huge. It reminds me of the same kind of explosion we had from the Internet to the web. The Internet was around for years. It was really hard to use. There were no pictures. Then when the web came along [with] a graphical user interface—suddenly it took off.
“The real revolution is now we have a conversational interface.”
But here's where it gets interesting. I've been doing AI art as well. It’s easy to have these models generate something instantly. It's much harder to get them to do something specific. [Chatbots] are trained on the entire corpus of human writing—the best and the worst. It's amazing for the amount of work, but it's still kind of intern level. To get them to go beyond that, you have to kind of push them, provoke them into doing better than average work. That's true of the image generators, too. Getting them to produce an image that's unpredictable requires a lot of nudging.
My realization was that even though these models can theoretically produce any possible image, there is a whole lot of stuff they cannot make because there's no language for it. With a lot of the greatest art in the world, if you tried to reduce it to a description, you couldn't—and that's why it's great art. There are things that we want to do that are beyond language. To me, that was kind of exhilarating and revealing at the same time.
How does the feeling of collaborating with AI compare to collaborating with humans?
My framing of the AIs is that we should think of relating to them as an artificial alien. They're kind of alien creatures. Even if they get to the point of having some kind of self-awareness and complex varieties of intelligence, it's going to be like interacting with Spock or Yoda. We'll be frustrated at times because we expect them to do things they won't be able to do. But there'll be many different kinds of AIs in our lives, just as we have all kinds of gadgets and machines in our lives. We'll have dozens, maybe hundreds of varieties of AIs.
The quality of the results seems to depend on having the patience to tweak and iterate. What advice would you give to people who mainly want to use AI to accelerate their work?
There are some studies right now showing that the kind of work that's most impacted is, no surprise, the more repetitive kind of work that can be easily auto-completed. A programmer remarked that these AIs can do 90% of what he does, then amplify the 10% he's left to do. That's the genius of it—you're doing both at once. They can take over and replace a lot of the repetitive stuff. They can also help you magnify the 10% that only you can do as a partner. That’s one of the reasons why they're so exciting—they're actually working at both ends.
Your vision of protopia involves getting there through slow, incremental progress. So I’m curious how you feel about this era of accelerated technological progress.
Exponential growth is very slow in the beginning. I think protopia does sort of rely on the fact that progress may not be visible. It may seem to be hidden in a way, but it's still there if you look for it, particularly if you turn around and look in the past.
“They can replace a lot of the repetitive stuff. They can also help you magnify the 10% that only you can do as a partner.”
A lot of success is almost indistinguishable from being patient. If you’re willing to take a 10-year horizon, you can do amazing things. At the Long Now Foundation, where I'm co-chair, we're suggesting taking a 100-year or 1,000-year horizon. Then you can really get things done as a society. We're not quite there yet, but I think we can get there. We can make a societal commitment to being a good ancestor over generations.
Elevating the horizon does several things. It allows compounding forces to work. When you take the longer view, you can offset fairly large setbacks and mistakes that would normally be an obstacle. So I think protopia does rely on a longer perspective which does work at a different pace. It’s not the same speed as FOMO!
You’ve said optimism is a crucial engine for the future because the best things happen when we “imagine the improbable.” Do you think we can train AI to be optimistic?
There's a lot of work going on in AI alignment and safety right now. People tend to think of that as aligning away from the bad stuff, but it could also be aligning towards the good stuff. That research on instilling values and guidance might not just be about preventing AIs from being harmful or racist; it might actually be useful in asking, “Well, how do I make an optimistic version?”
Last time we spoke, you said a knowledge worker’s next job probably doesn't have a title yet. What’s the most important skill we can learn now to prepare for the next 10 years?
I think for all content producers, working with an AI is a skill. It'll be ever changing. But we're already seeing that some people are really fantastic at this. The AI whisperers spend their 1,000 hours fooling around with them, learning how these AIs are thinking, and building up their own language that works with the AIs.
Even though they're in theory conversational, it's like talking to a dog. You still have to reshape what you're saying to maximize it: understanding what works and what doesn't, what kinds of prompts give you results and which ones don't. For whatever reason, the AIs also have individual personalities. Midjourney is different from DALL-E and Stable Diffusion. ChatGPT is different from Bing and Bard.
So you might have on your resume that you can do Photoshop or Premiere or Final Cut Pro. People will be saying, “I'm really good at talking to Bard. I know all the conversational interfaces.” That skill is going to be essential. It used to be that people would brag they were a power searcher with Google. Now we assume you know how to use Google. That ability to mine these other minds is going to be an ongoing part of the process.
What would you say to people who are hesitant about exploring the new AI tools?
There are artists who say, “I'm never going to use the AI. I'm going to do everything by hand.” Well, back in the '80s, writers were saying they were never going to use word processors. There was a belief that word processing ruined your writing, that people could detect when you were using a word processor versus a typewriter. I’m sure it changed writing in some capacity, but I don't know anybody who's not using a word processor. Neal Stephenson will write his first drafts out by hand, then he types them in and does the editing because it's crazy not to.
I think we're going to get to the same place, where everybody is using these two different lenses. I think disclosure is going to be part of it if you're using [AI] in ways that aren't the standard. There are rules in newspapers about what you can Photoshop and what you can't. You can adjust the contrast and the colors, but you can't move things around. Same thing with AIs—there will be some level of standard operating procedure that we consider the norm, and other uses where you have to disclose that you're doing it.
This interview has been edited and condensed for clarity.