In 2018, Alexandr Wang went to China. On a visit to a company building facial recognition technology, the co-founder of US start-up Scale AI saw early signs of a “tech cold war” brewing and decided to act. 

When Wang set up Scale AI in 2016, aged 19, his plan was to use a combination of human labour and technology to accurately label the reams of data underpinning artificial intelligence tools. His early customers were autonomous vehicle companies. 

The company has grown rapidly, hitting a $7.3bn valuation in 2021, as its roster of customers has expanded to include financial technology companies and others building AI models. After the China trip, Wang began assiduously building relationships with the US government, motivated, he suggests, as much by a sense of duty as by the prospect of reward.  

As tensions between the US and China have escalated and AI has rapidly developed, 26-year-old Wang has picked up contracts from the US Department of Defense worth tens of millions of dollars and become a leading voice on the impact of AI on the future of warfare — a topic he expands on in this interview.  


George Hammond: How significant a technology is AI for the military? 

Alexandr Wang: You know, I think it’s extremely significant. If you look at the history of warfare or the history of military power, you notice that, by and large, the countries that most rapidly integrate new technologies have the high ground when it comes to hard power. My personal view is that, at the end of the day, for any country, hard power is the most important power; the most critical power.

If I look back to where I grew up, Los Alamos, it’s certainly true that the atomic bomb decisively led to the end of the second world war. The decades since then have had much less war, much more peace than, you know, the hundreds of years before that — [a period] called Pax Americana. I think a large factor in that is that the US has been able, since the atomic bomb, to very rapidly integrate and develop new technologies that improve our hard power projection.

Then you look at something like artificial intelligence, where, even in our daily lives, it clearly touches everything in a way that I think not very many technologies have. You can imagine the same is true for war fighting. It’s one of the few technologies that potentially undergirds every component of war fighting, from weapons deployment and development to back-office functions, such as “how efficient are the logistics of the military?”, “how efficient are the country’s personnel practices or intelligence capabilities in areas such as cyber security?”

Let’s imagine a world where some other country — whether it’s Russia or China, or the UAE or some other country — is able to more rapidly transform their operation with AI. That’s very scary because then you have an adversary whose capabilities you truly have no understanding of. 

GH: What are some of the military risks that are created or exacerbated by AI? 

AW: The goal here fundamentally is not to go to war: it’s to project deterrence. The history of the US over the past 80 years has been: we’ve invested in and developed powerful war fighting technologies to deter conflict globally. We have seen that in a very real way in the war in Ukraine. We’ve been able to fend off a Russian invasion through many of those technologies and capabilities that the US has developed and funded and built.

Compare AI to atomic bombs. Nuclear technology is very visible: it is apparent to the entire world when an atomic bomb has been dropped. That is extremely good for deterrence because we can all agree we don’t want atomic bombs to be launched. And if anyone breaks that rule, it’s blatantly obvious to everyone. 

AW: AI is a technology that is widely disseminated, available and varied, and in many cases difficult to detect. So you can imagine the use of AI systems in ways that are incredibly damaging to national security. If we knew about it, there could be escalatory actions, but we aren’t able to detect it. So you can imagine there basically being an increase in activity by foreign adversaries and in foreign interference, just because it’s a difficult technology to detect.

If you look at how China has applied AI within its borders, primarily around facial recognition, global surveillance, suppression of minorities, you can imagine it figuring out how to export that technology in ways that are undetectable and, ultimately, could be quite concerning for the world. 

GH: If AI is going to be as big for the military as the atomic bomb and has the potential for the kind of asymmetric warfare that you’re describing, is that potentially an accelerant of global warfare?

AW: I can certainly see an argument where it’s an accelerant to countries finding more ways to interfere with one another than purely through warfare. There are a number of ways for countries to influence or attack other countries that don’t result in human lives being lost per se: cyber attacks, disinformation campaigns. AI can increase the level of that activity and the efficacy of those activities.

The flipside is, I think, that for any country imagining the possibility of an AI-enabled war, where you have hardware and drone swarms that are fully autonomous, the toll is far greater than that of a pre-AI war — and therefore that will still act as a deterrent for any sort of conflict.

GH: How important is it to have a person who’s there and willing to pull the trigger, and who knows what the context is? With AI, you remove that. Is the risk that AI makes tools of war more powerful and less moral?

AW: In my mind, it’s simply an increase in capability. The US Department of Defense has made very clear commitments: decisions that could result in the loss of human life require a human. I think we’ve been very clear as a country that we’re going to adhere to our moral standards.

GH: Do you think that the US’s main rivals, including China, have the same attitude?

AW: This is a key question — it gets to the philosophical question of democracy versus totalitarianism. Throughout the history of humanity, in societies that are controlled by one dictator, the willingness to engage in activities that are globally deplorable is much higher. If you look at countries like China and Russia, because of the structure of government, there’s a high propensity to use AI in ways that are amoral.

Look at public statements made by the CCP [Chinese Communist party] with respect to AI and how it affects their military. They talk quite explicitly about it being a potential leapfrog technology. They believe that, if they overinvest in AI relative to their adversaries, they could produce capabilities and technologies that leapfrog other countries. And, in particular, they view incumbents such as the US as stuck in an innovator’s dilemma of sorts, where we continue to maintain and upgrade our existing systems, and don’t invest in the new technology and, five or 10 years down the line, end up being outdone. That’s how they talk about it.

GH: I’ve heard you speak before about the visit to China that you took in 2018, hearing about AI development there and how that worried you at the time. Can you talk about what you saw?

AW: So we walked into the office of Face++, a company that builds facial recognition technology. In the main lobby there’s a big camera and you walk in and then there’s this big screen that shows the camera feed. It shows a box around your face with all of your summary statistics on the screen. Age, gender, race, all these kinds of stats. It’s already a dystopian [scene] to walk into.

And then we listened to a presentation, where they were touting that much of their AI talent had been educated in America and had come from companies like Microsoft and Google and Facebook. Then they talked through their business use case which, as we have since seen in the news, is surveillance and Uyghur suppression. [In 2019, the US blacklisted Face++’s owner, Megvii, citing violations of Uyghurs’ human rights; the company said the move lacked “any factual basis”.]

This was at the same moment when, in the US, tech companies were pulling back from working with the military over moral concerns. And in my head I played this forward: if China is moving full speed ahead in integrating AI and the US doesn’t, this is going to end really badly.

GH: That gets into the question of where you think we are in this race between the US and China to develop AI in general, but then to apply it militarily in particular. Who do you think is ahead and, beyond dollars spent, what are the components that go into being successful? 

AW: If you look at the overall AI race from a pure technology development perspective, the western world writ large, and the US in particular, is certainly ahead. The large language model technology that is so important today was developed in the US; same with the image generation technology that wowed the world last year. So these are all American technologies built by the American innovation system, which is a reason to be hopeful.

And there’s good reason to believe that China will actually hamstring its ability to be successful in these areas. China recently released its AI regulations. One of those regulations is that AI must adhere to socialist principles and the subtext is, you know, they can’t have LLMs saying bad things about the CCP and bad things about President Xi. Anyone who’s played with ChatGPT realises that these systems are hard to control. So, as a country, they invest so much in censorship that I think it’s hard for them to fully invest in LLM technology, culturally speaking.

That said, if you look towards military applications, I think we need to be pretty mindful: facial recognition, and computer-vision technology more broadly, has played out in China’s favour because they took the technology and were able to develop, through civil-military fusion and a domestic tech industry, differentiated military capabilities. 

GH: You have said before that data is everything in this race for AI. The Chinese approach to data privacy is very different to the US approach. Can the US keep ahead without compromising privacy?

AW: An interesting thing is that not all data is created equal for AI. People often cite this point: China has no [concern for] privacy or the personal liberties of its citizens and so will be able to amass far more data. And I think that probably is true. But what is the purpose of the data? If you want to develop a global surveillance state, obviously that’s highly valuable data. But if they want to build a differentiated military capability, then they would need military data.

The US should have a decisive advantage here. We’ve invested far more into our military platforms. We have more military hardware than any country in the world. We have more satellites in space than any other country in the world. If we can translate all the data coming off all these platforms into a central data set we can use for our own development, that’s a decisive advantage that would be impossible for China to catch up with because they would have to invest trillions and trillions of dollars into military hardware before they could compete. 

Now, there are practical realities that mean that isn’t true today. You know, the DOD generates 22 terabytes of data on a daily basis. That is much more than, you know, Chinese military hardware could produce. But right now, in the US, most of that data is thrown away.

GH: When you say they throw away most of that data, what does that mean? 

AW: Most of it gets uploaded to hard drives off these military platforms — off of a plane, for example, or a vessel. But there is no operational practice by which the data from these hard drives gets uploaded into a central repository. And therefore most of the time they end up just reusing the hard drives and overwriting old data. So it ends up being lost. 

GH: Like wiping a VHS? 

AW: Yeah. The DOD has a lot of stated plans for building a central data repository. The US can win and win quite decisively. But it does require action and does require a plan to be implemented. 

GH: Do you get the sense that there is a plan developing at this point? 

AW: There is certainly a plan developing. I think the intent is clear and I think sometimes where we fall down as a country is we water down our plans as they move through a bureaucratic process. I think we need to make sure we don’t water this one down. 

GH: The White House recently issued an executive order limiting US investment in Chinese technologies that could threaten national security, including AI. Do you think that’s the right approach?

AW: If you look at some of what has been happening in practice, one thing that became of particular interest was chips. These incredibly high-powered chips were being developed in the US and then were being disseminated quite broadly around the world, including many of them going to China. Then there were examples where US venture capital firms were investing in the Chinese OpenAIs [ChatGPT’s developer], if you will.

US capital, US retirement funds were going to fund Chinese AI capabilities, which the CCP certainly has no intention of using for anything other than power projection throughout the world. Clearly that is not something that should be happening. So I think there are extreme cases where you can look at it and say, “that does not make any sense for the US.” 

GH: The question, I guess, is where you draw the line on this tech decoupling. 

AW: I think we’re in the early innings of watching a quite monumental, prolonged tech cold war between the two greatest economic powers in the world. I think what we see today is a glimpse of what we’re going to continue seeing in the future. 

GH: Beyond abiding by domestic rules, what do you think the role of private start-up investors should be? Should they be looking to play a role here?

AW: I think American technologists — which extends to those at research labs, in addition to those at start-ups — should develop a point of view on whether we should ensure that America continues to have military and economic leadership. That’s a tough question for everyone. I’m of the camp that says yes, because I think it’s the best way to ensure democracy and our way of life for centuries to come. But technologists should form a point of view. And if you believe yes, then I think it’s a pretty sharp moment to be investing your time and resources in ensuring that the US has those capabilities. 

GH: You started off at Scale doing data labelling for autonomous vehicles. When did the shift into national security come and how big a body of work is that today? 

AW: It really happened on the tails of the trip to China that I mentioned. It was a trip that didn’t quite sit right with me. Roughly a year later, we started making a concerted attempt to build ties and collaborate with the US government.

It’s an important part of our work. If I were to zoom out fully on what the enduring impact for Scale is likely to be — looking back centuries from now, say — there are two things. One is that we have been a critical enabler of everything we’re seeing in the current AI revolution, and the second is ensuring American leadership through AI capabilities for the US government. From a mission perspective, it’s quite significant to what we do at Scale. Even if it’s not the best business decision, my belief is it’s critical work that we have to invest in.

GH: Why is it not the best business decision?

AW: Part of the reason more tech companies don’t work with the US government is that the US government is not the easiest organisation to do business with. There are a bunch of very well-documented challenges: I’d say they make decisions very slowly, they’re quite bureaucratic, they tend to favour doing work with the old guard, the Beltway bandits, as they’re called, the Lockheeds and Boeings and whatnot. They’re a very difficult organisation to interact with; they don’t make it easy by any means. But we’ve chosen to invest in it regardless.

This transcript has been edited for brevity and clarity

