This is an audio transcript of the Rachman Review podcast episode: How social media platforms put profits before people

[MUSIC PLAYING]

Madhumita Murgia
Hello and welcome to the Rachman Review. I’m Madhumita Murgia, European technology correspondent at the Financial Times, filling in for Gideon Rachman, who’s away on leave. In this week’s podcast, we’re looking at social media and its role in real-world harms and violence. My guest is Cori Crider, a human rights lawyer. She set up a not-for-profit organisation called Foxglove, which is focused on litigating and campaigning against the misuse of technology by Big Tech companies like Facebook and by governments around the world. So how has social media endangered our safety, and what are governments gonna do about it?

Over the past few years, I’ve spent time reporting on the real-life consequences that misinformation and hate speech online can have. In Myanmar, military leaders used social media as a tool to demonise the Rohingya Muslim minority during a campaign of ethnic cleansing. In India, rumours spreading on WhatsApp groups have led to religious rioting. And in New Zealand, a terrorist who shot at worshippers in a mosque cited YouTube as his source of inspiration. My interest in the impact of social media on society took me to Brussels last year, where I met Frances Haugen, a former Facebook product manager and whistleblower. She released thousands of pages of leaked memos to US regulators, which indicated that Facebook places profits over the safety of its 2.9bn users. They also showed how Facebook plays down the harm it can cause to society. Here’s what Frances Haugen told the Senate commerce committee nine months ago.

Frances Haugen
I’m here today because I believe Facebook’s products harm children, stoke division, and weaken our democracy. The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people.

Madhumita Murgia
Haugen ignited a discussion that had repercussions all over the world — amongst regulators, the media and human rights activists. I started our conversation by asking Cori Crider to discuss how she came to take on Big Tech after spending her career defending Guantánamo prisoners.

Cori Crider
Before I or my co-founders got into this Big Tech area, we had worked a lot in the national security and human rights space. I didn’t do technology stuff at all for the first decade-plus of my work as a human rights lawyer. I did Guantánamo detainees and I did people who had lost loved ones in drone attacks. And that’s what really got me started thinking about mass data systems and how they can be used to abuse human rights, basically. So at my old non-profit, which is called Reprieve, we’d interviewed people who had lost innocent loved ones in these drone attacks in places like Yemen or Pakistan. Time and again, somebody who was clearly not just not a terrorist, but not even a kind of low-level member of a militant group, would have been targeted and killed in these attacks. And we started to wonder how it happened. And then one day, a man called Michael Hayden, who was previously the head of the CIA and the NSA at different times, sat on a panel and said something extremely revealing. He said, “We kill people based on metadata”. And we thought, hang on, what does that mean? Metadata is, of course, information on your phone that isn’t the content of a text message that you sent me, but it’s about when you sent a message, maybe where you were, all of those other little bits of detail that the government can use to paint a picture of your life, your movements, maybe who you are. Or so the Obama administration thought. So anyway, it turns out some of the very first disclosures made by the whistleblower Edward Snowden about the NSA and how the surveillance programme worked involved the way that surveillance data fed into the drone killing programme. And so under the Obama administration, there were these things euphemistically referred to as “signature strikes”. That was the government’s shorthand for an attack on a group of people whose identities the government did not know. It was an attack on a phone or a pattern of phone behaviours. And as a result of this, people were repeatedly losing their lives or were losing their loved ones who had no affiliation with militant groups, basically because of an overweening confidence in the power of big data, as it was then called. Now it just gets called AI, doesn’t it? But it’s the same broad issue. And then I think once you see a frame around a problem like that, then, you know, you see the problem popping up everywhere. And we realised that the UK government was using these mass data systems to make incredibly important decisions about people in ways that people weren’t asked about and didn’t understand and probably wouldn’t support. But finally, I think we started to realise that while lots of us had been worried in civil society and human rights groups about the NSA or GCHQ and mass surveillance by governments, over the past 10 years a handful of California corporations had amassed a level of information and power over us and about us that would honestly be the envy of most states. And so we thought, well, hang on a minute. What are people doing about Facebook? What are people doing about Google?

Madhumita Murgia
I’m curious, when you started looking at the private sector and its use of data, how you came to realise the parallels in your two worlds. So not just how companies are collecting data about individuals, but also how that translates into a human rights issue or human rights abuse. You know, when did you start realising that a Facebook or a YouTube or a, you know, social media company could actually facilitate these types of harms in different parts of the world?

Cori Crider
Two things, I think. I mean, one is that it was partly about how they treat their staff. This is really about abuses of power and fundamentally, I just don’t like a bully. And I used to think there could be no greater imbalance of power than between, say, a Guantánamo detainee and the defence department that was holding him without charge or trial. And then I met, for the first time, a content moderator working for Facebook. Here you have people, often migrant labourers, whose right to live and work in the country where they are depends on their hanging on to this job. And they work under a cloak of secrecy. They’re made to sign these incredible NDAs that are almost like they work for the CIA. They’re not meant to tell their family members that they work for Facebook. They’re not meant to talk to people outside about the work they do. And Facebook will say, oh, this is for your security and for your protection. But in point of fact, it stops them organising. It makes people feel afraid to talk, even to lawyers like us, about what their options are and what their rights are. It really has a chilling effect on these people. And I suppose the other thing that surprised me about it was the seriousness of the trauma that the people who moderate content for Facebook all day, every day were experiencing. Because the sad fact is people around the world upload absolutely horrific stuff to these social media platforms. They upload terrorism, beheadings, mass shootings, child abuse, you name it. If it’s out there in the world, it gets translated on to social media and someone will post it on the internet. So if you think that social media is bad now, without the labour of this group of people called content moderators you would not set foot in it, and you certainly wouldn’t allow your loved ones to. So a content moderator is somebody essentially who firefights the absolute worst of what people put on social media so that you and I don’t have to see it, so that advertisers don’t flee these platforms and so that the platform is at least vaguely usable. Without the labour that these people do, we quickly realised, social media doesn’t exist. And so the myth that Facebook, but also other tech giants, kind of peddled from their inception, which is that it was a programming genius in his dorm room and a little bit of business acumen that caused these companies to become these colossi, is actually just total mythology. The platforms’ spread, their usability, their very power actually stand on the backs of these workers. So in terms of how it relates to Guantánamo prisoners, I would never have thought until I sat down and talked to a content moderator that you could really get post-traumatic stress disorder just from sitting in front of a screen. I thought, really? I mean, you know, I used to talk to people about, you know, the defence department beating them up or force-feeding them or kidnapping them. It didn’t occur to me before I investigated it with Foxglove that actually this kind of experience could genuinely give people PTSD. But it can. It is well-known now. I think you yourself reported at one point that one of the big outsourcing companies that Facebook uses, Accenture, now requires people to sign a sort of acknowledgment that says: this work is extremely dangerous; I acknowledge that I might get post-traumatic stress disorder in the course of the work. Because it just turns out that sitting in front of a huge number of context-free bits of toxic content really is psychotoxic for some people and it really gives them PTSD.

Madhumita Murgia
The other strand of this is the content itself. Can we talk about some of the examples of how that sort of content has facilitated real-world violence, hate, other types of human rights abuses?

Cori Crider
I think this is one of the most important things that arises from the disclosures from the whistleblower, Frances Haugen, for example. So it’s bad in the United States, it’s bad in the west. We’ve seen all kinds of problems. So we saw, for example, the January 6 riots at the Capitol, definitely organised and incited over Facebook and these “Stop the Steal” groups claiming that Donald Trump didn’t really lose the 2020 election, and so forth. But if you think it’s bad in the United States, what Frances Haugen’s disclosures really show is that it is vastly, vastly worse in the rest of the world, and that’s basically because of resource. To give you one of the most striking statistics from the Haugen disclosures, I think something like 87 per cent of Facebook’s misinformation budget goes to English-language content in the US and the west.

Madhumita Murgia
When you’re talking about the misinformation budget, you mean to moderate that?

Cori Crider
Exactly. As I understand it, that would be the amount of money they would spend on staffing, on systems, algorithmic filters, but also people, human beings. It’s just that the overall wodge of resource is overwhelmingly going to the US, to English, as if Facebook wasn’t used by nearly 3bn people. So for many societies, including societies in conflict, at risk of conflict, facing elections, there’s just nobody home or there’s almost nobody home. And that has had devastating effects in various countries around the world. So there’s one really very well-known example where Facebook basically admits that they didn’t, quote, “act quickly enough”, which is Myanmar. And there have been multiple investigations about this. Facebook has commissioned its own independent human rights reports that show that members of Myanmar’s military junta essentially incited incredible violence against the Rohingya using the Facebook platform. That Facebook was, for all intents and purposes, the internet in Burmese society. And so when they did this in a society that had been very closed and was starting to become more connected, to have more kind of online public discourse, it was happening on Facebook, and the military regime, or pro-regime influencers if you will, were incredibly successfully inciting and fomenting hatred and violence against the Rohingya people.

Madhumita Murgia
Through misinformation.

Cori Crider
Through misinformation, exactly that. And that directly contributed to violence and to people’s deaths. And Facebook then said, “OK, well, we accept that we didn’t move quickly enough on this”. And they also then say — this is the usual Facebook move these days, isn’t it? It’s the kind of Zuckerberg apology tour, as I like to think of it, which is, “We got it wrong. We’ve invested in systems. It’s not a problem now”. Lather, rinse, repeat. And so what they have said about Myanmar is, “We got it wrong. We have invested in systems and we have improved it, including more human content moderators,” and so forth. So then fast forward a few years and Global Witness does an investigation that asks, OK, what’s it looking like now? The junta then obviously engaged in another coup in 2021, and it’s gotten a lot worse again. And so Global Witness said, OK, what’s the situation with hate speech and incitement to violence spreading, given that this is the signal example where Facebook admits they got it wrong and should have done something to fix it? And so they bought ads using language that had previously been flagged as inciting violence in Myanmar, just to see what would happen in the advertising process. Ads are pre-approved on Facebook, I hasten to add, so the fact that they bought these ads doesn’t mean that they posted them. They did not incite genocide on Facebook, but they bought ads that were essentially doing that to see whether they would be caught by Facebook’s new improved whizz-bang filters in Burmese. And Facebook approved all eight of the ads that they sought to buy. All of them.

Madhumita Murgia
So what would stop those ads from being posted? Would they have to be approved again by Facebook?

Cori Crider
No. They cleared the process. They passed the reviews. The only thing that stopped them was the fact that it was Global Witness, as opposed to a member of the Burmese military, doing it. So then, working with us actually, we thought, well, hang on, let’s run this experiment in a couple of other places. So we then did it with them in Ethiopia, which experts say is a whisper from genocide itself. It’s a nation of 117mn people with very few people moderating content, and they did the same thing. So they bought about a dozen ads, this time in Amharic, one of Ethiopia’s major languages, again inciting violence, trying to play this test on easy mode for Facebook as well. So it’s not as if we came up with some new language. We used language that had previously been taken down from actual organic posts. So, you know, if there is anything that you might have thought would go into the filter, it would be something that has already flunked the hate speech policy. Anyway, bought the ads, 12 ads. Again, all approved. We then raised this in a letter with Facebook. What do you think they said? “They shouldn’t have gotten through. We’re investing heavily in systems to protect the public and we’re gonna hire more moderators.” OK, well, great. Thank you very much. You know, we’ve now flagged this as an issue with you in Amharic. So let’s see what happens if we buy two more ads. (Makes whooshing sound) They went through. They went through again. So it just shows you, I think, that when Zuckerberg does go on one of these official apology tours in front of Congress, inevitably, what does he say is gonna fix it? He doesn’t really talk about people very much, except to kind of basically say, oh, we have 30,000 people working on safety around the world. By the way, when he talks about how many of those staff are his, the number suddenly halves to 15,000, because he doesn’t want to have to deal with his content moderators. But anyway, he says essentially, “Senator, the AI will fix it. We will improve our artificial intelligence systems in all of these languages around the world. And we’re gonna have amazing content filters. That means that this isn’t a problem”. That, I have to say, has been pretty obviously exploded as a myth. It just is not possible. It is not possible to develop sophisticated enough language filters, even in English, where they’re putting a lot of time and resource into it, that are gonna catch these incidents of hate speech. It takes people, and people are very expensive. And again, I would say it’s not just about Facebook. I think the same goes for any of these social media platforms as they scale, be it TikTok, Facebook or YouTube, which is a mass-upload platform, not quite social media. All of the problems that we see in the world, of violence, political propaganda, hate, the abuse of children, all that stuff is just gonna be reflected in the platforms themselves, and it is especially gonna happen because of the decision to prioritise and promote content that is engaging, because, sad to say, humans tend to engage with and react to material that is incendiary, that is hateful, that invokes your kind of fight or flight responses.

Madhumita Murgia
Speaking of the examples of misinformation and what it can do, you know, we talked about Myanmar. You mentioned Ethiopia. Obviously, now we’re in the middle of a conflict in Ukraine. And I think you’ve also worked a bit with Ukrainian content moderators to look at how Russian misinformation is being spread via Facebook.

Cori Crider
So we talked to people who moderate content coming out of the Ukrainian war. These are, again, outsourced, absolute bottom-of-the-totem-pole content moderators. And they say that they will notice content coming from the conflict that looks to them like, maybe not a bot, maybe it’s a troll, somebody who’s just paid by the Russian government to spout misinformation. But the same phrases come up in different accounts again and again and again: you say that you don’t support the invasion, but what about, you know, when the Ukrainians were firing missiles into the Donbas, whatever it is. They will see it come up and it will be eerily similar, the phrasing. And they think, well, hang on a minute, this looks to us like what Facebook would call co-ordinated, inauthentic behaviour, which is, you know, Facebook-ese for state-sponsored fake news or state-sponsored propaganda. And so they would love to at least flag it so that somebody could take a bird’s-eye view of it and say, well, hang on a minute, is this actually a state-sponsored propaganda operation? And Facebook, with all of its incredible money, all of its incredible supposed sophistication in programming, doesn’t even build into its systems the ability for a frontline content moderator just to tick a box that says, I think this may be somebody else. I think this may be state-sponsored misinformation, for example. It’s just not there. Nobody wants to listen, which I think is a symptom of the fact that they see these people as sort of working in a call centre. You know, disposable jobs that aren’t the core function of their business, as opposed to valuing them in the way that they really need to.

Madhumita Murgia
Or people who come with local knowledge and context and can actually add to the quality of the job that’s being done, right?

Cori Crider
Exactly that. I think it’s a real blind spot in all of these social media companies, to be honest. It’s one of the things that bothered me about the first breakout documentary about social media content moderators. I think it was called The Cleaners. Good documentary. But the thing that bothers me about it is the title, because the reality is, the work they’re doing is editorial work. They’re actually doing something more like an editor or a judge. Is that satire or is it hate speech? Is that movie investigative journalism that should stay up in the public interest or is it just a snuff film? That’s a fine-grained distinction that requires some real cultural competence and knowledge, but Facebook doesn’t treat it that way. And in fact, let’s take a point of comparison for a moment. These guys that we talked to are at the very bottom of the totem pole. At the top end of the totem pole, of course, is the so-called Facebook oversight board, to whom Facebook have just announced a grant of, what is it, $150mn or something? And it’s got all kinds of luminaries, Nobel Prize winners, Alan Rusbridger, the former editor of The Guardian, people on six-figure consultancy salaries for a bit of part-time work, to kind of pore over an individual piece of content. And endless ink is spilled over the oversight board’s content appeal decisions, whereas the decision that they write out is the final judgment in a chain that goes right back somewhere to an underpaid, outsourced content moderator who does that all day, every day, at scale and in conditions that probably gave her PTSD. And I think, well, there’s a real disconnect there.

Madhumita Murgia
What’s changed? Has anything changed in terms of foreign-language content or how they approach things in more vulnerable communities?

Cori Crider
So what Facebook say is that they have invested, by which they mean they’ve hired some more people, and that they have invested in improving their filters in other languages, so in Ethiopia, for example, trying to get more classifiers for hate speech, to try to improve the automated filtering systems in some of these languages that at least are supposed to flag some stuff for content moderators. It is pretty obviously an order of magnitude short of what would be required to actually effectively moderate the platform outside English, I would say. I mean, the tests run by Global Witness in Amharic really show you that, I think, because that’s a place where they claim that they have invested, because it’s one of their high-risk areas. And yet a dozen ads explicitly inciting genocide, using identical text from material that was taken offline before, don’t somehow get caught in the net. So whatever their investment is, it just clearly ain’t enough.

Madhumita Murgia
Isn’t the problem here that this is a closed black box? These so-called algorithms, the engagement systems on Facebook, and also the moderation algorithms, nobody knows how they work. You know, maybe not even the people building them. So how do we ever know if it’s good enough? Does there need to be external regulation? Does there need to be, like, an oversight board, some kind of a technical board of experts who kind of look in and mark the homework of these algorithms? You know, what is the solution?

[MUSIC PLAYING]

Cori Crider
The top line is, as one of those financiers once said, it wasn’t Warren Buffett, I feel like it was Warren Buffett’s friend, but anyway: show me the incentive and I will show you the outcome. If it’s expensive, they will have to be dragged kicking and screaming to do it. And so self-regulation will always fail, let’s be clear about that. There are things that the company can do and occasionally, in an emergency moment, they do do, so to give you some examples. When the rioters stormed the Capitol on January 6, there was a series of what they call “break-glass measures”, which were a list of software interventions that they made to dial down the virality of various kinds of hateful and inciting content. So it’s not quite correct to say, well, Facebook has no idea how the thing works. Of course, they don’t control it completely. But there are definitely things that they choose to do. They only really do that in a very, very tiny percentage of cases. I mean, basically in their home country, when there was a clear and obvious constitutional crisis, but there turn out to be other places also having massive constitutional crises. And it’s not clear to me when, if ever, they have used the break-glass measures in those other places. So there are things you can do within the software to, quote, “decrease engagement”. Because remember, the engagement itself is partly a creature of their own devising. And then, of course, staffing. I actually don’t think that there’s any way around that: if you’ve got a thousand times as many people on your network, you may well need a thousand times as many content moderators. And that’s just an answer that Facebook and other tech giants don’t wanna hear. So yeah, I think there is an obvious role for governments to play here. One, as you say, in opening up the black box of these systems. Two, in looking at these labour conditions. Anybody who thinks, with things like, I don’t know, the Online Safety Bill here or the Digital Services Act, that it will be enough simply to pronounce that the companies must do more to take unlawful or harmful content down, without addressing themselves to the labour side of this, is, I think, setting themselves up to fail. This is work done not simply by computers but by people, and it’s always gonna be.

Madhumita Murgia
So what regulation do you think goes hand-in-hand with that, that addresses the human component?

Cori Crider
One of the things that the EU are looking at right now is something called the Platform Workers Directive. So it may well be that some of the gaps and shortfalls in the DSA could be made up there. I don’t accept, for example, that the conditions that they currently labour in are inevitable or can’t be regulated, any more than factory conditions were a hundred years ago. The original way that factories behaved was also to tell workers to kind of keep their arm out of the machine or whatever. And then ultimately we said, well no, actually, it’s not the worker’s job to not lose their arm, it’s your job not to take it. And I think there’s something similar that could be done here. So, one, it may well be right to set a quantity of people that is proportionate to the user base in a language. That may be a matter for national or regional regulation, you know, that’s something to be explored, but there may well be a number there that is required for scale. Two, on conditions, there are analogies in other contexts. Who says that there should be no limit on the amount of psychotoxic content that people should have to swim in? If you are a police officer in this country investigating a child abuse imagery case, there will be limits set by your force on your exposure to that material because it is harmful. And you will have sessions with a psychiatrist, a clinician, I think effectively mandatory as part of your work. Not what Facebook offers, which are these wellness coaches who are not qualified to diagnose and treat PTSD. And in some places, we have heard, they don’t even respect patient confidentiality. (Laughter) So there, I mean, they’re not doctors and they don’t behave like doctors. The fact that it is difficult, and I would never say that it isn’t, doesn’t mean we should just throw up our hands. We as a society, given the extent to which these places really are where our political debate and our public debate is happening, have got to get to grips with it, just as we did previously with things like broadcast.

Madhumita Murgia
And actually, speaking of the Facebook oversight board, they’re quite interesting to me. Do you think the role they’re playing is useful in any way? I mean, I know you said they’re kind of, you know, just looking at individual pieces of content and making pronouncements on this. But do you think there can be a non-governmental role for people from civil society to understand better how this is working and to be the voice of communities outside of just, you know, the UK and Europe, which obviously have these strong regulatory regimes?

Cori Crider
I think there absolutely can be a role for civil society and a non-governmental role, but I would generally tend to prefer a role for people who are not wholly funded by Facebook. So I would rather, for example, that Facebook hadn’t taken all of those independent academic researchers who were trying to build APIs so that they could meaningfully investigate the prevalence of hateful ads, for example, or political ads on the platform, and chucked them off. But Facebook didn’t like what they were doing, because it wasn’t something that was within Facebook’s control, so inevitably they locked them out of the platform. That kind of independent scrutiny, I think, is meaningful accountability. Look, you’ve got very senior people on the oversight board, but the fact of the matter is, the thing is totally funded by Facebook. To be boring lawyer-ese about it, the jurisdiction, the questions that they are supposed to address themselves to, are essentially set by Facebook. So how can that work? So take somebody like Daniel Motaung, the South African content moderator we at Foxglove have been working with, who was moderating content in Nairobi in Facebook’s east African hub, and who is bringing a constitutional petition against Facebook for union busting and for the horrific conditions of work for him and hundreds of other people. The entry-level person over whom the oversight board sits as the final court of appeal. Are the oversight board going to address themselves to those conditions? Are they interested in what is happening at scale and at pace at the bottom end of the systems that they’re reviewing? I haven’t yet seen any sign that they are. I’m prepared, of course, to reconsider whether it is essentially a PR exercise. But to do so, I think the oversight board would need to think a little bit more carefully and a lot more critically about the scope of their functions and what they’re actually there to achieve.

Madhumita Murgia
So looking ahead, we’ve looked at Facebook. I think it’s a really interesting case study because of its scale and the emerging evidence of the harm that it has caused and the work that it’s trying to do to change that, or at least the perception of that. But there are others. TikTok is something you mentioned briefly. This is growing really quickly, and, you know, they similarly are hiring moderators to kind of work on this content. So what, going forward, do you see as a solution to a growing problem not only on Facebook but across many platforms?

Cori Crider
I think the regulation has to be sectoral. This isn’t about picking on Facebook, because as we all know, the kids are not on Facebook anymore because the kids think that their Nazi uncle is on Facebook, right? (Laughter) The kids are on TikTok, apparently. And so, of course, we’re gonna have to try and apply these regulations more generally. I really am a strong believer that these things require a labour regulation solution in part. And that if these companies are forced to recognise that content moderation is not some kind of outsourced adjunct to their business, it is the business, then you’re gonna see a platform that looks very different. That’s not enough, but it’s part of it. I think the other thing I would be looking at, and the United States has started to lap European and British enforcers on this, frankly, is the law of antitrust and competition. Because if you think about what’s going on in Ethiopia, for example, how does that happen? It’s partly a problem of size and scale. Why have we gotten to a place where we’re content for Mark Zuckerberg essentially to be the absentee landlord of the public square for societies he knows nothing about and cares about possibly even less? And in fact, one of the reasons that people are stuck with it is because of these companies’ dominance. Right? They get a lesser service. They get a poorly moderated service, partly because they don’t have any alternative.

Madhumita Murgia
So do you think break-up is something that can solve the emerging political problems too?

Cori Crider
I think that monopoly power is one of the greatest problems that we as a society face and that the power of Big Tech is a huge part of that, absolutely. I think that we have allowed these corporations now to become so large and so powerful that no one legal or policy instrument will be sufficient to solve the range of problems that they cause in our society. But yes, I’m definitely a fellow traveller of the kind of, forgive this incredibly pretentious word, but the neo-Brandeisian stance in the United States. The break-uppers, basically. The people who say monopoly is a problem not simply because of a risk of price gouging, which is what people have tended to say since the 1970s. The problem of monopoly is that it provides an unaccountable and unelected counterweight to what democratic societies want. So people in the UK, people in Europe, people in the United States, they definitely don’t have the same politics. They definitely don’t agree on much, but they actually do tend to agree that the power of these corporations has outstripped in many ways the power of democracies to regulate them. And people are incredibly worried about that.

Madhumita Murgia
And just coming back to you, are you gonna stick with taking on Big Tech? Do you feel like this is your sort of mission until you see change here?

Cori Crider
I would say that the use and abuse of mass data systems about us, but without us, is the core fight, along with climate change, of our time and of our generation. And I wish I could say that it would be won by us within five to ten years, but I’m sad to say that I think these are generational challenges, and I think it’s gonna take quite a lot of us quite a long time to unpick what has started.

[MUSIC PLAYING]

Madhumita Murgia
That was Cori Crider ending this edition of the Rachman Review. Next week, my colleague John Paul Rathbone will be standing in for Gideon, talking to former Colombian defence minister Sergio Jaramillo about the benefits of starting a peace process early between Ukraine and Russia. So please join us then.

[MUSIC PLAYING]

This transcript has been automatically generated. If by any chance there is an error please send the details for a correction to: typo@ft.com. We will do our best to make the amendment as soon as possible.

Copyright The Financial Times Limited 2024. All rights reserved.