There have been many comments in the last year about the potential dangers of Artificial Intelligence, from such AI luminaries as Elon Musk, Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Gary Marcus, and others. But they might not be the right people to listen to in this regard, because the threats of AI are fundamentally political. Most scientists and technical experts, however intelligent, do not have training in politics. They generally do not have the mindset to think about politics, except where regulation affects their own sector. Nobody expects an inventor to grasp the political and social implications of his invention.
The Blind Spot of AI Threats
This explains why these AI experts usually make rather naïve and unimaginative comments regarding the threats of AI, such as “we need to urge companies to pause AI”, “the government definitely needs to be involved”, “humans can hurt others with AI”, we don’t want “AI to fall into the wrong hands” because “bad actors” could use AI, and so on. Moreover, the potential threats of AI are sometimes minimized and sometimes exaggerated.
What all these AI threat assessments have in common is that they never
recognize the “bad actor” with the worst record of all: the state.
This is clearly a blind spot. For these AI scientists, the fundamental distinction between state and society is nonexistent; it is always a collective “we” that needs to manage the potential threats of AI. This is precisely the warning that Murray Rothbard expressed so clearly in Anatomy of the State (1974): “With the rise of democracy, the identification of the State with society has been redoubled… The useful collective term ‘we’ has enabled an ideological camouflage to be thrown over the reality of political life.”
Though it is difficult to distinguish the state
from society in this age of statist interventionism and crony capitalism, it is
essential to do so. The state, according to the standard Weberian definition,
is “a human community that (successfully) claims the monopoly of the legitimate
use of physical force within a given territory”. The state is, thus, by its
very nature radically different from the rest of society. As Ludwig von Mises warned
in Omnipotent Government (1944): “Government is essentially the negation of
liberty.” In other words, freedom suffers when state coercion increases. Though
crony corporate power can influence government in order to get preferential
treatment when the rule of law can be bent (as it often can), it is clear who
holds the reins. It is necessary to abandon the myth of the “benevolent
state”.
Seen in this light, it is necessary to ask, for every new technology, to what extent the state controls that technology and its development. In this respect, the record of AI is poor, since most major AI players (Google, Microsoft, OpenAI, Meta, Anthropic, etc.), their founders, and their core technologies have been supported since their inception in important ways by US government funding, research grants, and infrastructure. DARPA (the Defense Advanced Research Projects Agency) and the NSF (National Science Foundation) funded the early research that made neural networks viable, and neural networks remain the core technology of all major AI labs today.
This evolution is not in the least surprising,
since the state naturally tries to use all possible means in order to maintain
and expand its power. Rothbard again: “What the State fears above all, of
course, is any fundamental threat to its own power and its own existence.” Thus,
the threats of AI should be seen from two sides. On the one hand, the state can actively use AI to enhance its power and its control over society (as discussed above); on the other hand, AI could also challenge the state by empowering society both economically and politically.
Will AI Tilt the Balance of Power?
The threat of AI should be assessed,
therefore, in terms of the potential impact it can have on the uncertain balance
of power between state and society, or to express it more sociologically, between
the ruling
minority and the ruled majority. This relationship depends on who benefits most from new instruments of power, such as the printing press, modern banking, television, the internet, social media, and… artificial intelligence. In some cases, the state has used these tools to enhance its control, but some of them may empower society. For instance, television
was a medium that arguably strengthened the position of the ruling
minority, while social media is currently enhancing the majority’s political influence
at the expense of the ruling minority. The same question, therefore, concerns AI:
will AI empower the state at the expense of society, or vice versa?
As seen above, the state got involved in AI long ago, as early as the theoretical and inception stage. Today, fake
libertarian Peter Thiel’s Palantir
is providing
AI analytics software to US government agencies to enhance their power of
surveillance and control of the population by building a centralized, national
citizen database (including the nightmarish possibility of “predictive policing”). Anthropic is
also teaming up with Palantir and Amazon Web Services to provide US intelligence
and defense agencies access
to its AI models. And Meta will make its
generative AI models available to the US government. It is
true that such initiatives might, in theory, make state bureaucracy more
efficient, but this might only increase the threat to individual freedom. Worryingly, this development is considered “normal” and raises no eyebrows among AI industry journalists and experts.
From the point of view of society, AI will eventually lead to radical corporate changes and productivity increases, far beyond the internet’s information revolution. The political consequences could be significant, since AI can give each individual a personal research assistant and provide simpler access to knowledge, even in fields with gatekeepers. Routine tasks can be taken over by AI, freeing up time for higher-value tasks, including political engagement. For instance, AI can make it easier to understand and check government activity by summarizing legislation in plain language, analyzing budgets and spending data, and fact-checking claims in real time, thereby reducing the knowledge gap between governments and ordinary citizens.
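As a concrete illustration of what such a “personal research assistant” could look like in practice, consider the minimal sketch below. It is purely hypothetical: it assumes a locally hosted, OpenAI-compatible model server (for example, Ollama’s default endpoint at http://localhost:11434/v1), and the model name and input file are placeholders, not references to any real bill or government data source.

```python
# Hypothetical sketch: summarize a piece of legislation in plain language
# using a locally hosted, OpenAI-compatible model server (e.g., Ollama).
# The endpoint, model name, and input file are placeholder assumptions.

from openai import OpenAI

# A local endpoint keeps the query on the citizen's own machine;
# Ollama ignores the API key, so any placeholder string will do.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

# Load the full text of a bill (placeholder file name).
with open("proposed_bill.txt", encoding="utf-8") as f:
    bill_text = f.read()

response = client.chat.completions.create(
    model="llama3",  # placeholder: any capable local model
    messages=[
        {
            "role": "system",
            "content": (
                "You summarize legislation for ordinary citizens in plain "
                "language, noting who is affected and what would change."
            ),
        },
        {"role": "user", "content": bill_text},
    ],
)

print(response.choices[0].message.content)
```

The particular tooling is incidental; the relevant point is that a few lines running entirely on an individual’s own hardware can already narrow the knowledge gap described above, without depending on access that the state or a crony provider could condition.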
Of course, this increased political
empowerment of society could be stymied if access to AI is conditioned. If the state
keeps the upper hand in AI, it could weaken dissidents and discredit independent journalists who use AI, through surveillance, manipulation, or worse, particularly where the state feels only loosely bound by its constitutional limitations. This is unfortunately the case not only in the US but also in most other states and supranational organizations.
Future forms of AI, such as AGI, agentic AI, and physical AI, will only make this discussion of AI threats more important. These developments will increase the possibility of rights violations by the state, but also expand the opportunities and possible countermeasures at the individual and community level. A lot could depend on whether the numerous AI functions
of the future will be mostly open, decentralized, and encrypted.
This future is still uncertain, but the political framework
presented here arguably remains valid.
The political stakes involved with AI are far more consequential than the data scientists developing AI seem to recognize. The
threats of AI are consistent with the threats that all new technologies represent
if they are used nefariously by the state. It is essential, therefore, for the
public not only to learn about AI and embrace its potential, but also to see it
in the larger context of the political struggle for freedom.