What is artificial intelligence?

Artificial intelligence is the ability of machines to carry out intelligent tasks typically performed by humans. It involves the use of computers to reproduce or undertake such actions, often faster and more accurately than was previously possible.

AI typically combines computer science with data to solve problems or make predictions. Its processes involve algorithms, which are a series of rules written into computer code.
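As a toy illustration (not from the article), an algorithm really can be as simple as a short series of rules written into code. The sketch below is a hypothetical spam filter: each rule is one line, and the program flags a message if any rule fires.

```python
# A toy "algorithm": a series of rules, written into computer code,
# that decides whether a message looks like spam.
def looks_like_spam(message: str) -> bool:
    message = message.lower()
    rules = [
        "winner" in message,       # rule 1: suspicious keyword
        "free money" in message,   # rule 2: suspicious phrase
        message.count("!") > 3,    # rule 3: excessive punctuation
    ]
    # The message is flagged if any single rule fires.
    return any(rules)

print(looks_like_spam("You are a WINNER! Claim your free money!!!!"))  # True
print(looks_like_spam("Lunch at noon?"))                               # False
```

Note that every rule here was written by a human in advance; machine learning, discussed next, is about deriving such rules from data instead.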

Historically, AI has been used to carry out complex mathematical tasks, or to play strategy games such as chess, often beating human competitors. In general, the more specific the application, the more effective an AI can be. But it has limitations, including bias in outcomes, a high cost of computing power, and a lack of transparency over why a system makes a particular decision.

What is machine learning?

Machine learning (ML) is an application of AI whereby computer programs automatically learn from, and adapt to, new information without being specifically programmed to do so. Algorithms detect patterns in the data a computer is trained on and make predictions or recommendations without explicit instructions from humans. ML programs improve over time with training, learning from past experiences and mistakes, as well as spotting patterns in new data.
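The idea above, deriving a decision from past data rather than from hand-written rules, can be sketched in a few lines. The example below is a minimal, made-up illustration: a nearest-centroid classifier that "learns" to tell two invented fruit categories apart from training examples of weight (grams) and diameter (cm).

```python
# A minimal sketch of machine learning: the program is never told the
# rule for telling apples from oranges; it derives one from training data.

def train(examples):
    """Compute the average point (centroid) of each label's examples."""
    grouped = {}
    for features, label in examples:
        grouped.setdefault(label, []).append(features)
    return {
        label: tuple(sum(vals) / len(vals) for vals in zip(*points))
        for label, points in grouped.items()
    }

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new data point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Made-up training data: (weight in g, diameter in cm) -> label.
training_data = [
    ((150, 7), "apple"), ((170, 8), "apple"),
    ((120, 6), "orange"), ((130, 6), "orange"),
]
model = train(training_data)
print(predict(model, (160, 7)))  # prints "apple"
```

Adding more labelled examples moves the centroids, which is one very simple sense in which such a program "improves with training".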

What is deep learning?

Deep learning (DL) is a subset of ML that solves complex problems, such as speech recognition or image classification. It learns from high volumes of unstructured data in various mediums, including text, images and video. DL models are built on software called neural networks, modelled loosely on the human brain.

The main difference between ML and DL is that ML requires human input to understand the data and learn from it. DL can ingest unstructured data in raw form and distinguish between different categories of data on its own.

What are neural networks?

Deep learning uses neural networks, which are systems for processing data inspired by the way neurons interact in the human brain. Information goes into the system, the neurons communicate to understand it, and the system creates an output.

For example, an AI could learn to recognise a turnip by identifying multiple turnips in its training data. It will then be able to spot new turnips it encounters. If it comes across a carrot for the first time, it might mistake it for a turnip. But the more carrots it sees, the better it will discern the difference. Humans can help refine an AI’s judgment by marking its output as correct or incorrect, for example by labelling images of any new vegetables fed to the AI.
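The flow described above (information goes in, neurons pass signals between them, an output comes out) can be sketched with a hypothetical, hand-sized network. The weights below are fixed and invented purely for illustration; in a real system, training would adjust them so the output becomes useful.

```python
import math

# A minimal sketch of a neural network's forward pass. Each artificial
# "neuron" takes a weighted sum of its inputs, adds a bias, and squashes
# the result through an activation function.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation, output in (0, 1)

def tiny_network(inputs):
    # Hidden layer: two neurons, each receiving both input values.
    h1 = neuron(inputs, [0.5, -0.6], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], 0.0)
    # Output layer: one neuron combining the two hidden activations.
    return neuron([h1, h2], [1.2, -1.1], 0.2)

# Two feature values go in; one score between 0 and 1 comes out.
print(tiny_network([0.9, 0.4]))
```

Training a network means nudging those weights, typically across millions of neurons rather than three, until the outputs match the labels humans (or the data itself) provide.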

How does generative AI work?

Generative AI takes vast amounts of raw data — for example, the entire works of Shakespeare — and learns the patterns within it, in order to generate the most likely correct response when prompted with a question. For example, if you asked it to write a Shakespearean sonnet, the AI would use its learning to generate the most likely sequence of words, with the correct number of lines and rhyming pattern. It would create a version similar to, but not a direct replica of, his poems.
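The core trick, learning patterns from a corpus and then emitting the most likely continuation, can be shown with a drastically simplified sketch. Real generative AI uses deep neural networks, not the word-pair counts below, but the principle of "predict the most likely next word" is the same. The eight-word corpus here stands in for "the entire works of Shakespeare".

```python
from collections import defaultdict

# A vastly simplified sketch of generative text AI: count which word
# tends to follow which in the training text, then repeatedly emit the
# most likely next word.

corpus = "to be or not to be that is the question".split()

# Learn the patterns: for every word, count its observed successors.
following = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=6):
    words = [start]
    for _ in range(length - 1):
        candidates = following[words[-1]]
        if not candidates:
            break  # no known continuation for this word
        # Pick the most frequent continuation (ties broken arbitrarily).
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("to"))  # prints "to be or not to be"
```

The output resembles the training text without being a stored copy of it; the model holds only the learned word-to-word statistics, which is, in miniature, why generative AI produces a version "similar to, but not a direct replica of" its sources.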

Generative models have been used on numerical data for a number of years. But, as deep learning and natural language processing have become more advanced, generative AI has been applied to images, audio and text.

The term became widely known after the Microsoft-backed company OpenAI released its chatbot ChatGPT in November 2022, which can produce humanlike paragraphs of text. GPT-4, the AI model behind the technology, has been trained on millions of text sources, including websites, newspapers and books.

Generative AI marks a turning point in natural language processing — the ability of computers to process and generate text and other language-based mediums, including software code, images and scientific structures.

Early examples include GPT-4; Google’s PaLM, which powers its chatbot Bard; and image-generation models such as DALL-E 2 and Midjourney.

This focus on generative AI is causing a shift towards AI systems trained on large, unlabelled data sets, which can be fine-tuned for different purposes, rather than AI systems that execute specific tasks in a single area.

Reducing the need to label the data makes the AI more accessible, as consumers or companies can deploy it in different circumstances.

What are large language models?

Generative AI tends to rely on large language models (LLMs), a type of AI system that works with language and is built on neural networks. LLMs represent the current cutting edge of neural network research.

They are called “large” language models because of the vast amounts of data they hold. Today’s LLMs can draw on vastly more data than those trained just a few years ago, due mainly to increasing computational capacity.

GPT-4, PaLM and Meta’s LLaMa are all examples of LLMs. Adding language models to Google’s search engine, for example, was what the company called “the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of search”.

Although the potential for LLMs is huge, so are the resources needed to design, train and deploy the models. All require vast amounts of data, energy to power the computers, and engineering talent.

What is AGI?

AGI stands for Artificial General Intelligence — an AI that is capable of the same level of intelligence as humans, or an even higher level.

So far, AI has been able to outperform humans in standardised tests but still stumbles over common knowledge, producing so-called “hallucinations” in which it states falsehoods as facts. Examples of these hallucinations include creating fake book citations, or answering “elephant” when asked which mammal lays the largest eggs.

However, Geoffrey Hinton, known as the “godfather of AI”, has said that AGI could arrive in as little as five years. He and others have warned about the risks that such a level of AI could pose to society and humankind. Ian Hogarth, a tech investor and chair of the UK government’s newly created AI task force, recently wrote an essay in the Financial Times warning against the race to AGI, or as he calls it, “God-like AI”.

Copyright The Financial Times Limited 2024. All rights reserved.