LONDON (Bywire News) - There is a bracing idealism in Michael J. Casey's recent commentary on the inherent dangers of artificial intelligence (AI), a call to arms that is as striking as it is admirable. However, I find it necessary to contest some of the premises that underpin his arguments, not least the assertion that embracing a 'decentralisation mindset' is the panacea for our AI worries.
Casey’s piece largely hinges on the idea that decentralisation, in the style of blockchain and cryptocurrency, could mitigate the threat of artificial intelligence falling into the wrong hands. Yet it overlooks a fundamental geopolitical reality: powerful technologies are often exploited by adversaries.
Casey rightly notes the importance of international cooperation in dealing with the challenges posed by AI. But let's not forget that history is replete with instances of technologies developed for benign purposes being repurposed for nefarious ends. Does the phrase 'nuclear technology for peaceful purposes' ring any bells?
This point leads us to a core question: can we trust that AI, regardless of how decentralised it may be, will be used solely for benevolent purposes? The answer, in all likelihood, is a resounding no. A darker yet entirely plausible scenario is the creation of a super artificial general intelligence (AGI) by adversarial nations. One can imagine an arms race in AGI development, in which such systems are produced not for the betterment of humanity, but as tools of national strategy.
The worry, then, is not just about 'Alice and Bob' thought-experiments; it's about the real-world geopolitical chessboard where the game is being played out. It's not just about rogue actors finding a loophole in the AI code; it's about state actors who, with all the resources at their disposal, might direct the technology towards harmful objectives.
Moreover, the assumption that a super AGI might 'live' within a decentralised network and thus not feel threatened enough to eradicate us is, at best, a speculative hypothesis, and at worst, a dangerous gamble. Even if AGIs 'live' in a decentralised network, what's to stop them from prioritising their survival over ours?
Therefore, while we must applaud efforts to regulate and manage AI in a globally cooperative way, such as the 'AI for Good' initiative by the United Nations' International Telecommunication Union, the Federal Trade Commission's guidelines on AI bias, and the European Union's proposed regulations on high-risk AI systems, we must also recognise the inherent limitations and potential risks of a wholly decentralised approach to AI.
In the face of a rapidly evolving technology landscape and intractable global politics, what we need is a robust, vigilant and pragmatic approach to AI development and governance. As with nuclear or chemical technologies, a keen awareness of who has access to AI and for what purpose is key.
In conclusion, the argument for a more regulated, less anarchic AI landscape is a compelling one. But it must be grounded in a realistic understanding of the world we live in. And that includes acknowledging the fact that, like it or not, adversarial nations may leverage this technology against us.
In such a scenario, one is forced to ponder: is decentralisation really the answer to our AI conundrum? Or is it merely a shift of the risk from a single entity to a network, thereby increasing the potential points of failure?
As we move forward in our understanding of AI and its implications, it is crucial to ask these hard questions, to scrutinise our assumptions, and to continuously refine our strategies to ensure the safe, beneficial use of AI for all. Let us never forget: in the game of AI, the stakes are sky-high.
(By Michael O'Sullivan)