The field of artificial intelligence is being transformed by giant new systems with a remarkable ability to generate text and images. But Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans, warns against reading too much into what is going on inside these silicon brains. Despite recent claims to the contrary by a Google engineer, the machines are not becoming sentient, she says.

However, Mitchell predicts that this powerful new form of AI could have profound effects, from changing the way many workers go about their jobs to reshaping our understanding of what intelligence is and what machines might be capable of.

In this wide-ranging discussion with the FT’s west coast editor, Richard Waters, she explains the potential and limits of the latest AI — as well as the technical and social challenges that lie ahead to ensure the technology will be genuinely beneficial.

Richard Waters: Since GPT-3 [Generative Pre-trained Transformer 3, a new language-generation model] came along . . . it feels like things have moved very fast. Should we think about this as a new field of AI? Is it taking AI in a new direction?

Melanie Mitchell: People are characterising this as ‘generative AI’ — so it’s AI systems that can generate human language, human text, images, videos, computer code, etc. And it’s not really new. But what is new is how well it’s working. It’s something that people have been working on for a long time. But, nowadays, because of some new techniques, very fast computers and the huge amounts of data available on the internet, these programs have access to enormous amounts of human-created text and images, [from] things that people have posted online and all the things that have been digitised. [So] the systems are able to work incredibly well all of a sudden.

RW: How well are they working? Are there any objective tests that give us a sense of how effective they are?

MM: It’s a little bit hard to measure quantitatively either how effective they are or how fast they’re improving. There are certain kinds of evaluation methods that people use to assess these systems, but they’re not very good at quantitative evaluation. But [with] qualitative evaluation, you can look at GPT-3, for example, and the text that it can generate, and then look at some of the more recent, much larger systems, such as Google’s. The text that’s generated is just astoundingly good: it’s much more coherent, [with] far fewer laughable errors, and so on. So it’s a qualitative assessment.

And, in terms of image generation, that seems to have improved enormously in the last couple of years. We see systems like OpenAI’s Dall-E and some other, more recent ones — you can give them a text prompt and they can generate seemingly anything you want them to, although they do have limitations.

RW: The rule of thumb in the AI world these days seems to be larger is better. They are scaling up [but] you’re not getting any diminishing returns as they get bigger. They are getting startlingly better. Are we on an accelerating path to much stronger capabilities?

MM: I would say yes. But the question is how far is that going to go? Some people say, OK, our ultimate goal is AGI [artificial general intelligence] or human-level intelligence where machines can do everything that humans can do in the cognitive realm. And some people think that just the scaling procedure is going to lead to this magical AGI that we’ve been promised for so long.

But other people — including myself — are more sceptical. I think that we’ve seen systems that can do very human-like text generation, very human-like image generation, but we can also look at the flaws of these systems and some of the flaws are not going to be solved by just pure scaling. It’s a big debate; this is one of the biggest current debates in the whole field of AI.

RW: You mentioned Google, which is at the very edge of advanced research, but is also applying this stuff right now in its search engine and other products. So how general purpose might this technology be and how might it be used?

MM: It’s definitely being used in search engines. It’s being used in language translation, like Google Translate, and other systems. It’s being used to create chatbots, for customer service applications. It’s been used to generate computer code to assist programmers. People are using it to generate images for whatever purpose you want an image for — your book cover or your advertisement. So there’s a lot of places it’s been applied.

More recently, I saw — still in the experimental stage — the use of so-called language models to translate human language into instructions for a robot. So if you want your household robot to bring [something], you can say that in natural language to these language models and the language model would translate it into some computer code that the robot could follow.

A lot of that is still in the research/experimental phase. But that’s the direction all of this is going. I think there are going to be a lot of applications. These companies are now getting to see how these large AI models might be commercialised.
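
For illustration, here is a minimal Python sketch of the pattern Mitchell describes: a language model is prompted to translate a natural-language request into a sequence of robot commands. The `complete()` function and the command names (`move_to`, `pick_up`, `hand_to`) are hypothetical placeholders, not any real model or robotics API.

```python
# A minimal sketch of the language-to-robot-commands idea described above.
# `complete()` is a placeholder for a large-language-model call, and the
# command set is purely illustrative, not any particular system's API.

def complete(prompt: str) -> str:
    """Placeholder for a large-language-model call; swap in a real model here."""
    return 'move_to("kitchen")\npick_up("water bottle")\nhand_to("user")'

def plan_robot_actions(request: str) -> list[str]:
    """Ask the model to translate a natural-language request into commands."""
    prompt = (
        "Translate the user's request into a sequence of robot commands.\n"
        "Available commands: move_to(location), pick_up(object), hand_to(person).\n\n"
        f'Request: "{request}"\n'
        "Commands (one per line):"
    )
    generated = complete(prompt)
    # Parse the model's free-text output into a list of command strings.
    return [line.strip() for line in generated.splitlines() if line.strip()]

if __name__ == "__main__":
    print(plan_robot_actions("Please bring me the water bottle from the kitchen"))
```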

RW: Before we get into what they can and can’t do, maybe we can look a little more at the philosophical issues. What is your definition of AI? And do these new systems challenge that, or help us to get closer to it?

MM: AI is one of those terms that can mean many different things. And people use it in different ways, which makes it very confusing. I think the definition of AI is computer systems that can produce intelligent behaviour. But the question is: what do we mean by intelligence? Our idea of what requires intelligence keeps changing.

Back in the old days, people used to think that playing chess at grandmaster level was the pinnacle of intelligence. But we found that computers with brute-force searching could play chess without anything that we would consider to be intelligence. Suddenly, chess no longer required intelligence, and chess-playing programs became the equivalent of practice tools — like a baseball pitching machine, which might be better than a human pitcher but isn’t considered to be intelligent.

Now, being able to speak a language and conversing and dealing with human language has become synonymous with intelligence in a lot of people’s minds. With that definition, certainly, these machines seem to produce intelligent language behaviour. Notice, that’s not the same thing as saying that they’re intelligent — because that’s a harder thing to define.

RW: Do you think it’s right to think of language as the ultimate test, the thing that really sets humans apart? Is it a good place to look for intelligence?

MM: It’s definitely one of the things that sets humans apart: this whole ability to manipulate symbols. Language is just a bunch of symbols, right? Words, phrases, we can use those. In fact, we use language to make ourselves more intelligent. We can communicate things to each other and learn from each other and articulate our thoughts. But it’s a really hard question because it seems like these systems, like GPT-3 and its successors, have some of the attributes of language: they’re able to spit out convincing paragraphs or dialogues or whatever, but it doesn’t seem like they have the understanding of language that humans have. Philosophers have a term for this: it’s competence without comprehension.

So [if you say] ‘I had eggs for breakfast’, I have a strong model in my mind of what that means, why you might have had that, and what it meant to you, and that’s something these language models don’t really have. They’ve never had eggs. They don’t know what breakfast is. They only learned language.

So is there going to be anything that we can do that a system that only learns from language cannot do? That’s a big debate. People are getting very, very heated about these philosophical questions with respect to current AI that are really hard to answer.

RW: To some people, just the very idea of using words like “understanding” and “intelligence” is complete nonsense, whereas other people want to stretch the definition of these terms. Is there any better way of trying to think about what these machines are doing?

MM: I think it’s dangerous for us to assume that they understand just because they seem to. There are dangers in attributing too many human-like characteristics to them. We saw that clearly with the recent incident where a Google engineer decided the system he was working with was sentient. He was very, very convinced just by the fact that it was telling him it was sentient, and it seemed very human-like.

That attribution of human-like characteristics to machines goes way back — to the early days of language-generation systems. Even when they were very bad, people still often thought that they had some understanding, which they very clearly did not. But now they’ve just gotten better and better. And we still have that problem where we’re programmed [to think that if] something’s talking to us and sounds like a person, we attribute personhood to it. Even on the thinnest evidence.

RW: So you would say this is a Google engineer ascribing personhood to something and it’s just a fallacy; he’s simply falling for the oldest trick in the book?

MM: Yes, I do think that. It is hard to define these terms like sentience, personhood, consciousness, understanding. We don’t have scientific definitions or tests for these things. But it’s very clear that some systems do not have those characteristics. The current language models do not have those characteristics.

I understand, in some sense, how they’re working. I know that they don’t have any memory outside of a single conversation, so they can’t get to know you. And they don’t have any notion of what words signify in the real world. But the question is: could that change with more data? If you start to give them visual input, you start to give them auditory input, you start to connect them more and more with the world, is that going to change? I don’t know the answer.

I think eventually, perhaps, we will have machines that we could attribute these characteristics to. But I don’t think that the current scaling up of models that only interact with language and digitised information is going to get us there.

RW: Are other breakthroughs, other techniques, and whole new directions going to have to be added to these models to go the next step?

MM: Yeah, that’s what I believe. But other people in the field don’t believe that. They think we have everything we need — we just need more.

RW: Reasoning is not quite understanding, but we can define what reasoning is. And we’re starting to see people trying to train these systems to reason by giving them models of how thought processes move from one step to another and reach a conclusion. Do you think that represents something new and does that push these machines in a different direction?

MM: People have been trying to get machines to reason since the beginning of the field. More recently, these language models — even though they had not been trained to do reasoning — seem able to do some reasoning tasks. But they’re what people call ‘brittle’ — meaning you can easily make them make mistakes, and reason incorrectly.

People were playing around with reasoning tasks in GPT-3, with word problems: ‘If you have four apples, and I have one apple, and you give me your apples, how many apples do I have?’ Things like that. And the system was getting it wrong. But if they added something to the prompt, like, ‘Let’s think step by step’, then the system would go through the steps and get it right. It’s very sensitive to the kinds of prompts you give it. But even with that, adding that prompt, these systems are still very prone to make mistakes.
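
For illustration, here is a minimal Python sketch of the prompting pattern Mitchell describes: the same word problem sent to a language model with and without the ‘Let’s think step by step’ cue. The `complete()` function is a stand-in for a call to whichever model is being tested, not a real library API.

```python
# A minimal sketch of the prompting pattern described above: the same word
# problem is sent to a language model with and without the "Let's think step
# by step" cue. `complete()` is a stand-in for an actual model call.

def complete(prompt: str) -> str:
    """Placeholder for a large-language-model call; swap in a real model here."""
    return "<model output would appear here>"

PROBLEM = ("If you have four apples, and I have one apple, "
           "and you give me your apples, how many apples do I have?")

# Plain prompt: models often get these word problems wrong.
plain_answer = complete(f"Q: {PROBLEM}\nA:")

# Step-by-step cue: nudges the model to write out intermediate reasoning
# before the final answer, which often (but not always) improves accuracy.
cot_answer = complete(f"Q: {PROBLEM}\nA: Let's think step by step.")
```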

This gets to what we were talking about before: the difference between this statistical approach — where systems learn statistical correlations between words, sentences and phrases — and human language understanding. How it’s doing that reasoning seems to be very different from what we humans do. And that’s why it’s prone to errors. It doesn’t make the same mental simulation that we do, where I imagine you giving me these apples, and I know that four plus one equals five, and I can do that addition fairly reliably. The system seemed to be doing it a different way, and we don’t really understand how.

So I would say that reasoning by machine is still very much an unsolved problem. These models are doing some interesting but hard-to-understand things that look like reasoning, but are different from what we do.

RW: Let’s go on to some of those limitations. With GPT-3, if you ask it to explain something, maybe the first sentence it produces will be sensible. But the second paragraph probably won’t be. And, by the time it gets to the end of a page, it’ll have wandered off to some point that is completely irrelevant. Can you ever, just through a statistical approach, produce reliable, valuable information?

MM: My intuition is no — there’s something else needed. But I’ve been wrong before. One of the problems is that these systems don’t know whether what they are saying is true or false. They’re not connected to facts. Some people have tried to get them to verify the things that they say by showing evidence from websites but those things are riddled with errors, too.

You can actually get these systems to contradict themselves very easily. I was playing around with prompts about vaccines. And I got it to say in the same paragraph vaccines are totally safe, and vaccines are very dangerous. So they don’t have a sense of what’s true, what’s not true, whether they’ve contradicted themselves or not. They’re not reliable as sources of information.

RW: These flaws make it hard to use them in practical ways, in business settings, in important decision-making. Does that just disqualify them?

MM: It’s a little bit like if you’ve ever used machine translation systems: they’re typically really good but, occasionally, will make some really glaring error; you really need a human in the loop to make sure everything’s OK and to make corrections. I think the same thing is true here. You can’t really use them just autonomously to spit out text and publish the text. You need a human in the loop. But, in the commercial sense, they’re meant to assist humans. Maybe you as a journalist, eventually, could have your first draft spat out by GPT-8 or whatever, and then you would edit it. I could see that. That’s very feasible. I think probably some journalists are already doing that.

RW: This boundary between decision support systems and decision-making systems is obviously a fine one and it just depends on how much people trust the system. So how do we calibrate our trust in these systems?

MM: It’s hard because they’re not very transparent about how they’re doing what they do. We know that they’re completing prompts, like ‘Write a 300-word essay on the American Civil War’. But how do they decide on those particular sentences, and whether to spit out something from training data that a human wrote, maybe changing a word or two? Or coming up with something totally new? Or saying something that is partially true, partially not? We don’t know how it’s deciding on what to spit out.

I know people are working on trying to improve the trustworthiness of these systems but, as you say, I don’t know if it could ever be completely trustworthy.

RW: Another area that you touched on is bias. Humans are biased — we’re all biased in our decision-making. I guess that’s just part of our social interaction. Is bias in machines trained on human data just going to be a natural part of these systems — and something we just have to live with?

MM: As you say, humans are biased. We stereotype. We have default beliefs about things, whether it be demographics, like gender, or race, or age. And those [AI] systems have been trained on human-generated data and absorbed the biases. So one example here: a system that generates an image. If you say ‘Draw a scientist’, it will almost always draw a white male. Because this is the data it has been trained on.

You can play around with changing the training data, but then you run into other, unexpected problems. I read somewhere that some of these companies will add terms to your query, terms like ‘African American’ or ‘female’, to make it more likely that results will be diverse. It’s a kind of top-down, after-the-fact de-biasing, and it can generate weird results.
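
For illustration, here is a rough Python sketch of the kind of after-the-fact query augmentation Mitchell mentions. The term list, the sampling rate and the `augment_query` helper are illustrative assumptions, not a description of any company’s actual system.

```python
import random

# A rough sketch of the top-down, after-the-fact de-biasing described above:
# silently appending a diversity term to a fraction of image-generation
# queries. The term list and the sampling rate are illustrative assumptions,
# not any company's actual implementation.

DIVERSITY_TERMS = ["female", "African American", "Asian", "elderly"]

def augment_query(query: str, rate: float = 0.5) -> str:
    """Append a randomly chosen diversity term to some queries."""
    if random.random() < rate:
        return f"{query}, {random.choice(DIVERSITY_TERMS)}"
    return query

# Example: augment_query("a scientist working in a lab") might return
# "a scientist working in a lab, female" on some calls and the unchanged
# query on others, which is why the results can come out looking weird.
```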

So, this whole idea of de-biasing is really difficult. You want your system to reflect reality but you also want it to not magnify biases. And those two things are difficult to do at the same time.

RW: One criticism we’ve heard about the field of AI is that it’s being led by mostly male, mostly white researchers, many of them employed by corporations. And the field has not opened up enough to other voices, other points of view. Is that a fair criticism?

MM: I think it is. Some of the biases are overt but there are a lot of subtle biases that come into play. And it turns out that a lot of those have been revealed by people who are outside of the mostly white, mostly male engineers — often by women, or by black women, or other under-represented people. Without having people with diverse backgrounds, you’re going to get tunnel vision at these companies.

RW: We’re also seeing examples that really strike me as surprising. For instance, Facebook pushing out an AI chatbot that is producing very biased results for some people. In many ways, it looks like the same mistake that Microsoft made with its Tay chatbot a few years ago. It looks like they haven’t learned anything. Do you see any signs that people are learning the lessons of the impact of these technologies at large scale?

MM: As you say, the Tay chatbot, which was on Twitter, was Microsoft’s and it was embarrassing, but I don’t think it hurt Microsoft in any financial way. Similarly, I don’t think this new thing called BlenderBot is going to hurt the company even though all these embarrassing things are coming out. So there are these different pressures: let’s deploy these things even if they have flaws and we’ll fix the flaws as they come up — rather than having to spend a huge amount of time and delay deployment of the product to try and fix problems. I don’t work in industry but it does seem that people are repeating a lot of the mistakes of the past. There’s no punishment, really. Maybe we need more outside regulation for some products.

RW: Another cause of concern is misinformation [from] any generative system that can produce words or images.

MM: I think it’s a huge concern. We’ve always had misinformation factories. If we only look back to the 2016 US presidential election, we had misinformation factories going on without these AI models. But AI models might make it easier to create this misinformation, especially the more visual kind, like very realistic photos of people. It’s starting to be common to be able to do videos, to mimic people’s voices saying whatever you want them to say. I think people aren’t really aware of how concerning these things are and how impactful they might be in very negative ways. We’ll see in the next few years how people are going to be using them to try and spread misinformation. And I think it’s going to be a huge problem. I don’t know how to solve it.

RW: It feels like we’ve spent years talking about whether AI is going to replace people or replace jobs. But now you can see some very direct effects on work — maybe the image-generating systems are the most direct: we’re starting to see articles online that are illustrated with images generated by AI. Are we now on the cusp of seeing many more jobs potentially being done by a machine?

MM: The answer is probably yes. But, on the other hand, in the past at least, this kind of technology has taken over a lot of jobs, but it has also created new kinds of jobs. It’s hard to predict. I don’t think these systems can be left alone to write articles or generate images. We need humans to be in the loop to edit them or guide them. So they’re not going to be totally autonomous for a long time.

RW: So the role of humans is to know the right question to ask and to know how to interpret and edit the answers?

MM: I think that that’s going to be the case.

RW: We’ve been living with search engines for a while, and maybe what we’re talking about here are much more efficient systems that can give us back fuller, more complete answers.

MM: That’s absolutely correct. We will become editors and, if you will, question askers — which is really what artificial intelligence needs.
