Why are AI companies so insistent on government regulation?

Why are technology companies so insistent on government regulation, especially when it comes to existential risk from AI? Consider, for example, OpenAI CEO Sam Altman's appearance before the US Senate, or the recent open letter published by the Center for AI Safety.

Of course, some of the desire to prevent existential risks (x-risks) is sincere. I think many x-risks are live possibilities, but the risks of AI lie primarily in the ways it will reshape labour markets and entrench existing inequalities and power relations. These changes, and the technological unemployment that results from them, are already occurring. What is really at stake here is preventing any brake on these changes. Realistically, replacing jobs is the business model of artificial intelligence: it allows individuals to do substantially more and reduces reliance on costly specialist expertise. Energy consumed by preventing x-risks will draw attention away from these nearer-term risks. But asking for regulation is also a form of political terraforming.

Asking for regulation around AI risk allows the current players in AI to shape the institutions that form the market and the political environment in which they will operate. The boards of these government-stamped or internationally constituted organisations will be stuffed with known quantities, some of them highly critical, but whose views remain within a known envelope of criticism. The agenda, shape, powers and concerns of these institutions will be set by AI companies. Government expertise in technology in general, and advanced technology in particular, is shallow. Governments are used to this sort of corporate co-writing of laws, and here AI companies genuinely are the subject matter experts, given the closed nature of their research. Shaping the market allows existing players to prevent new technology from surpassing them. Licensing AI means increased visibility into emerging technological innovations and trends; it levels the playing field and permits “fair” competition by making the landscape more known. It also permits AI companies to control intellectual property and ensures that open source models can be regulated out of existence if they truly become a threat to proprietary models and downstream services.

The shaping of the political environment is more profound. The writers' strike is a portent of things to come: the near-term risks of AI will doubtless come under heavy scrutiny from trade unions and other political actors. Having forums in which these sorts of concerns can be arbitrated allows such questions to be sharply referred to the relevant body and firmly kicked into the long grass. This is a form of depoliticisation, similar to how, in neoliberal governance, key elements of economic control are taken out of politics and handed to independent institutions: the World Bank or the IMF, for example, or, closer to home, the independence of the Bank of England. Quinn Slobodian calls this "market encasement": institutions protect markets from substantial political oversight.

Silicon Valley seems to have learnt lessons from the regulation of startups that intervened in existing markets, such as Uber, Deliveroo and Airbnb. Increasingly, these services find themselves in lengthy legal disputes over their operating models, as in those between Deliveroo and trade unions. Further back, there are the antitrust cases brought against Microsoft, now a substantial investor in OpenAI. The disruption to labour markets caused by AI will be far greater than that caused by any individual startup. Better to get ahead of regulation by writing it yourselves.
