Google search chief Prabhakar Raghavan, pictured in 2018, had to contend with a challenging launch for the Bard AI chatbot last year © Bloomberg

Prabhakar Raghavan, Google’s search chief, was preparing for the Paris launch of its much-anticipated artificial intelligence chatbot in February last year when he received some unpleasant news.

Two days earlier, his chief executive, Sundar Pichai, had boasted that the chatbot, Bard, “draws on information from the web to provide fresh, high-quality responses”. But, within hours of Google posting a short GIF on Twitter demonstrating Bard in action, observers spotted that the bot had given a wrong answer.

Bard’s response to “What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?” was that the telescope had taken the very first pictures of a planet outside the Earth’s solar system. In fact, those images were generated by the European Southern Observatory’s Very Large Telescope nearly two decades before. It was an error that harmed Bard’s credibility and wiped $100bn off the market value of Google’s parent company, Alphabet.

The incident highlighted the dangers of the high-pressure arms race around AI. The technology has the potential to improve accuracy, efficiency and decision-making. But while developers are expected to set clear boundaries for what they will do and to act responsibly when bringing technology to market, the temptation is to prioritise profit over reliability.

The genesis of the AI arms race can be traced back to 2019, when Microsoft chief executive Satya Nadella realised that Google’s AI-powered auto-complete function in Gmail was becoming so effective that his own company was at risk of being left behind in AI development.

Test yourself

This article is part of a collection of ‘instant teaching case studies’ exploring business challenges. Read the piece then consider the questions at the end.

About the author: David De Cremer is the Dunton Family Dean and a professor of management and technology at D’Amore-McKim School of Business at Northeastern University in Boston. He is author of ‘The AI-Savvy Leader: 9 ways to take back control and make AI work’ (Harvard Business Review Press, 2024).

Technology start-up OpenAI, which needed external capital to secure additional computing resources, provided an opportunity. Nadella quietly made an initial $1bn investment. He believed that a collaboration between the two companies would allow Microsoft to commercialise OpenAI’s future discoveries, making Google “dance” and eating into its dominant market share. He was soon proved right.

Microsoft’s swift integration of OpenAI’s ChatGPT into Bing marked a strategic coup, projecting an image of technological ascendancy over Google. In an effort not to be left behind, Google rushed to release its own chatbot — even though the company knew that Bard was not ready to compete with ChatGPT. That haste produced the error that wiped $100bn off Alphabet’s market capitalisation.

Nowadays, the prevailing modus operandi in the tech industry seems to be a myopic fixation on pioneering ever-more-sophisticated AI software. Fear of missing out compels companies to rush unfinished products to market, disregarding inherent risks and costs. Meta, for example, recently confirmed its intention to double down on the AI arms race, despite rising costs and a nearly 12 per cent drop in its share price.

There appears to be a conspicuous absence of purpose-driven initiatives, with a focus on profit eclipsing societal welfare considerations. Tesla, for example, rushed to launch its AI-based “Full Self-Driving” (FSD) features with technology nowhere near the maturity needed for safe deployment on public roads. FSD, combined with driver inattention, has been linked to hundreds of crashes and dozens of deaths.

As a result, Tesla has had to recall more than 2mn vehicles over FSD and Autopilot issues. And despite regulators identifying concerns about drivers’ ability to reverse the necessary software updates, they argue that Tesla did not make the suggested changes part of the recall.

Compounding the issue is the proliferation of sub-par “so-so technologies”. Two new GenAI-based portable gadgets, the Rabbit R1 and the Humane AI Pin, for example, triggered a backlash, with critics calling them unusable, overpriced and incapable of solving any meaningful problem.

Unfortunately, this trend shows no sign of slowing: driven by a desire to capitalise as quickly as possible on incremental improvements to ChatGPT, some start-ups are rushing to launch “so-so” GenAI-based hardware devices. They appear to show little interest in whether a market exists; the goal seems to be winning any available AI race, regardless of whether it adds value for end users. OpenAI has warned such start-ups against this opportunistic, short-term strategy of pursuing purposeless innovation, noting that more powerful versions of ChatGPT are coming that could easily replicate the GPT-based apps they are launching.

Governments, in response, are preparing regulations to govern AI development and deployment, and some tech companies are starting to act more responsibly. A recent open letter signed by industry leaders endorsed the idea that “It is our collective responsibility to make choices that maximise AI’s benefits and mitigate the risks, for today and for the future generations”.

As the tech industry grapples with the ethical and societal implications of AI proliferation, some consultants, customers and external groups are making the case for purpose-driven innovation. While regulators offer a semblance of oversight, progress will require industry stakeholders to take responsibility for fostering an ecosystem that gives greater priority to societal welfare.

Questions for discussion

  • Do tech companies bear responsibility when businesses deploy artificial intelligence in potentially harmful or unethical ways?

  • What strategies can tech companies follow to keep purpose centre stage and see profit as an outcome of purpose?

  • Should bringing AI to market be more regulated? And if so, how?

  • How do you expect the tendency to race to the bottom to play out over the next five to 10 years in businesses working with AI? Which factors are most important?

  • What risks do companies face by not joining the race to the bottom in AI development? How can these risks be managed by adopting a more purpose-driven strategy? What factors are important in that scenario?

Copyright The Financial Times Limited 2024. All rights reserved.