Algorithm: A sequence of rules that a computer follows to complete a task — it takes an input, for example, from a data set, then performs a test or calculation on it and generates an output. Algorithms can be used in this way to spot patterns in data and make predictions.

Algorithmic bias: Decision-making errors or unfair outcomes that can arise from problems with an algorithm’s processing of data, or flaws and biases in the underlying data itself. A bias can result in an algorithm inadvertently privileging or disadvantaging one group of users over another group. Examples include customers being treated differently because of systemic prejudices around race, gender, sexuality, disability, or ethnicity.

Alignment: An area of research that aims to ensure that artificial general intelligence — or so-called God-like AI — systems have goals that align with human values. For example, alignment researchers have helped to train AI models to refuse questions about self-harm and to avoid bigoted language.

Artificial General Intelligence (AGI): A computer system capable of generating new scientific knowledge and performing any task that humans can. It would enable the creation of a superintelligent computer that learns and develops autonomously, understands its environment without the need for supervision, and becomes capable of transforming the world around it (see Artificial Intelligence, God-like AI and Superintelligence).

Artificial Intelligence (AI): The science of enabling machines to perform tasks that would previously have required human brainpower — for example, reasoning, decision-making, telling words and images apart, learning from mistakes, predicting outcomes and solving problems. It involves the use of computers to reproduce or undertake such actions, often with greater speed and accuracy than previously achieved (see Machine Learning).

Big data: Very large data sets that may be analysed computationally to reveal patterns, trends, and associations. Businesses may use analysis of big data to identify common human behaviours, transactions, and interactions.

Chatbot: A software application that can respond to text questions and mimic human conversation, by analysing text and predicting the answer that is required. Chatbots are mostly used as virtual assistants on customer service websites, but generative AI has enabled them to be used to create different forms of writing (see Generative AI and Generative Pre-trained Transformer).

ChatGPT: A natural language processing chatbot driven by AI technology developed by OpenAI. ChatGPT is based on a large language model that the developer fine-tunes with the help of user feedback. In response to text-based questions and ‘prompts’ describing the type of written output required, the chatbot can compose articles and essays, write emails, tell creative stories, and generate programming code.

Compute: The computational power required for AI systems to perform tasks, such as processing data, training machine learning models, and making predictions. Hardware speed is measured in Floating-point Operations Per Second, or FLOPS, while the scale of a training run is often measured in the total number of floating-point operations performed.

Computer vision: A field of research that uses computers to obtain useful information from digital images or videos. Applications include object recognition; facial recognition; medical imaging; navigation; video surveillance. It uses machine learning models that can analyse and distinguish different images based on their attributes.
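
As a minimal illustration of the idea that software can analyse images by their attributes, the sketch below treats an image as an array of pixel values and classifies it with a trivial hand-written rule; real computer vision systems learn such rules from data, and these tiny images are invented for illustration:

```python
import numpy as np

# To software, an image is just an array of pixel intensities.
dark_img = np.array([[10, 20], [30, 20]])        # 2x2 grayscale, values 0-255
bright_img = np.array([[200, 230], [210, 250]])

def classify(img):
    """Label an image 'bright' or 'dark' from its mean pixel intensity."""
    return "bright" if img.mean() > 127 else "dark"

print(classify(dark_img), classify(bright_img))  # dark bright
```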

DALL-E: A deep-learning model developed by OpenAI that can generate digital images from text-based natural language descriptions, called prompts, which are input by users.

Data science: Research involving the processing of large amounts of data in order to identify patterns, spot trends and outliers, and provide insights into real-world problems.

Deepfake: Synthetic audio, video or imagery that can convincingly represent a real person, or create a realistic impression of a person who has never existed. Created by machine learning algorithms, deepfakes can make real people appear to say and do whatever the creators wish. Deepfakes have raised concerns over their ability to enable financial fraud, and to spread false political information (see Generative Adversarial Network).

Deep learning (DL): A subset of machine learning that can be used to solve complex problems, such as speech recognition or image classification. Unlike machine learning, which requires human input to understand the data and learn from it, DL can ingest unstructured data in a raw form — from text, music or video — and distinguish between different categories of data. DL models are built on layered software structures called neural networks, loosely modelled on the human brain.

Floating-point Operations Per Second (FLOPS): The number of floating-point arithmetic operations a machine can perform each second, a standard measure of computing speed that is commonly used to rate the power of supercomputers.
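
As a concrete sketch, the unit is easy to work with in code. The figures below are invented for illustration, not the specifications of any real machine:

```python
def to_petaflops(flops: float) -> float:
    """Convert raw FLOPS to petaFLOPS (1 petaFLOPS = 10**15 FLOPS)."""
    return flops / 1e15

# A hypothetical supercomputer performing 2 x 10**17 operations per second:
machine_flops = 2e17
print(to_petaflops(machine_flops))  # 200.0 petaFLOPS

# How long a task needing 10**21 operations in total would take on it:
print(1e21 / machine_flops)         # 5000.0 seconds
```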

Generative AI: A subset of machine learning models that can generate media, such as writing, images or music. Generative AI is trained on vast amounts of raw data — for example, the text of millions of web pages or books — and learns the patterns within it in order to generate the most likely response to a written prompt (see Machine Learning).

Generative Adversarial Network (GAN): A machine learning technique that can generate data, such as realistic ‘deepfake’ images, that is difficult to distinguish from the data it is trained on. A GAN is made up of two competing elements: a generator and a discriminator. The generator creates fake data, which the discriminator compares with real ‘training’ data, feeding back where it has detected differences. Over time, the generator learns to create more realistic data, until the discriminator can no longer tell what is real and what is fake.
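
The generator-discriminator loop can be sketched numerically. The stand-ins below are deliberately crude (a one-parameter 'generator' and a midpoint-threshold 'discriminator'); a real GAN trains two neural networks against each other by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 5.0                          # the distribution to be imitated
reals = rng.normal(REAL_MEAN, 1.0, 5000) # the 'training' data
real_centre = reals.mean()               # all the discriminator knows

def fooled_fraction(theta):
    """Fraction of generated samples the discriminator labels 'real'.
    The discriminator splits the number line at the midpoint between
    the real data's centre and the generator's mean, and calls the
    real side 'real'."""
    fakes = rng.normal(theta, 1.0, 5000)
    mid = (real_centre + theta) / 2.0
    if theta <= real_centre:
        return np.mean(fakes > mid)
    return np.mean(fakes < mid)

theta, step = 0.0, 0.25                  # generator starts far from the truth
for _ in range(60):
    # Generator update: move in whichever direction fools the
    # discriminator more often.
    if fooled_fraction(theta + step) >= fooled_fraction(theta - step):
        theta += step
    else:
        theta -= step

print(round(theta, 2))  # drifts close to REAL_MEAN
```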

Generative Pre-trained Transformer (GPT): A family of large language models developed by OpenAI since 2018, used to power its ChatGPT chatbot.

God-like AI: A popular term for Artificial General Intelligence.

Hallucination: A flaw in Generative AI models that can result in chatbots stating falsehoods as facts, or “inventing” realities. Examples of hallucinations include creating fake book citations or answering “elephant” when asked which mammal lays the largest eggs.

Human In The Loop (HITL): A system comprising a human and an AI component, in which the human can intervene by training, tuning or testing the system’s algorithm, so that it produces more useful results.

Large Language Model (LLM): A machine learning model, trained on vast quantities of text, that can recognise, summarise, translate, predict and generate language.
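
The core task, predicting the next piece of text, can be sketched with a toy bigram counter. Real LLMs use deep neural networks trained on vast corpora; the miniature corpus below is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the
# most frequent continuation.
corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog sat on the rug ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # on
print(predict_next("the"))  # cat
```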

Machine Learning (ML): An application of AI whereby computer programs can automatically learn from, and adapt to, new data without being specifically programmed to do so. ML programs improve over time with training, learning from past experiences or mistakes, as well as spotting patterns in new data.

Multi-Agent System: A computer system involving multiple, interacting software programs known as ‘agents’. Agents often actively help and work with humans to complete a task — the most common everyday examples are virtual assistants on smartphones and personal computers such as Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana.

Natural Language Processing: A field of AI that uses computer algorithms to analyse or synthesise human speech and text. The algorithms look for linguistic patterns in how sentences and paragraphs are constructed, and how the words, context and structure work together to create meaning. It is used in the development of customer service chatbots, speech recognition, and automatic translation.

Neural networks: Computer systems for processing data, inspired by the way neurons interact in the human brain. Data enters the system, passes through layers of interconnected artificial ‘neurons’ that transform it, and emerges as an output. The more data the system is trained on, the better it becomes at discerning differences in a data set — for example, at distinguishing images.
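
The flow of data through layers can be sketched with a tiny two-layer network. The weights below are set by hand so that the network computes the XOR function exactly; in practice a network learns its weights from data:

```python
import numpy as np

W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])      # input -> hidden weights
b1 = np.array([0.0, -1.0])       # hidden biases
W2 = np.array([1.0, -2.0])       # hidden -> output weights

def relu(z):
    """Rectified linear unit: each 'neuron' fires only above zero."""
    return np.maximum(z, 0.0)

def forward(x):
    hidden = relu(x @ W1 + b1)   # first layer transforms the input
    return hidden @ W2           # second layer combines the results

for x in ([0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]):
    print(x, "->", forward(np.array(x)))
# XOR: 0.0, 1.0, 1.0, 0.0
```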

Open Source: Software and data that are free to use, edit and share, helping researchers to collaborate, to check and replicate findings, and to share new developments with the wider community of developers.

Singularity: A currently hypothetical point in time when artificial general intelligence surpasses human intelligence, leading to an acceleration in technological progress and the potential automation of all knowledge-based work.

Superintelligence: An AI system that is self-aware and possesses a higher level of intelligence than humans.

Supervised learning: A form of machine learning that uses labelled data to train an algorithm to classify data or predict outcomes accurately. Because the inputs are labelled, the model can measure its accuracy in recognising or distinguishing between them, and therefore learn over time.
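
A minimal sketch of the idea, using an invented toy data set and a simple nearest-centroid classifier in place of a full learning algorithm:

```python
import numpy as np

# Labelled training data: 2-D points, each tagged with its class.
train_x = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # label 0
                    [8.0, 8.0], [8.5, 9.0], [9.0, 8.5]])  # label 1
train_y = np.array([0, 0, 0, 1, 1, 1])

# 'Training': summarise each label by its centroid (mean point).
centroids = np.array([train_x[train_y == c].mean(axis=0) for c in (0, 1)])

def predict(point):
    """Assign an unseen point the label of the nearest centroid."""
    dists = np.linalg.norm(centroids - point, axis=1)
    return int(dists.argmin())

print(predict(np.array([2.0, 2.0])))  # 0
print(predict(np.array([8.0, 9.0])))  # 1
```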

Turing Test: A test of a machine’s ability to demonstrate humanlike intelligence. It was first devised by mathematician and computing pioneer Alan Turing as the “imitation game” in his 1950 paper “Computing Machinery and Intelligence”. The test involves a human evaluator asking questions to another human and to a machine via a computer keyboard and monitor. If the evaluator cannot tell from the written responses which is the human and which is the machine, then the machine has passed the Turing test.

Unsupervised learning: A form of machine learning in which algorithms analyse and cluster unlabelled data sets, by looking for hidden patterns in the data — without the need for human intervention to train or correct them.
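
A minimal sketch using k-means, a classic unsupervised algorithm, on an invented, unlabelled toy data set:

```python
import numpy as np

# Unlabelled 2-D points: the algorithm must discover the groups itself.
points = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],
                   [8.0, 8.0], [8.5, 9.0], [9.0, 8.5]])

centres = points[[0, 3]].astype(float)   # crude initial guesses
for _ in range(10):
    # Assign each point to its nearest centre...
    labels = np.array([np.linalg.norm(centres - p, axis=1).argmin()
                       for p in points])
    # ...then move each centre to the mean of its assigned points.
    centres = np.array([points[labels == k].mean(axis=0) for k in (0, 1)])

print(labels)  # the first three points form one cluster, the last three another
```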

Definitions derived from Financial Times articles and The Alan Turing Institute Data Science and AI Glossary

Copyright The Financial Times Limited 2024. All rights reserved.
