The Duke and Duchess of Sussex Join Tech Visionaries in Calling for Ban on Advanced AI

Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel Prize winners to push for a complete ban on developing superintelligent AI systems.

The royal couple are among the signatories of an influential declaration that calls for “a ban on the development of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would exceed human intelligence at all cognitive tasks; the technology remains theoretical.

Key Demands in the Statement

The statement says the ban should remain in place until there is “broad scientific consensus” that superintelligence can be built “with proper safeguards” and until “strong public buy-in” has been achieved.

Prominent figures who endorsed the statement include a technology visionary, Nobel laureate and leading AI researcher, along with his colleague, a fellow pioneer of modern AI; Apple co-founder Steve Wozniak; the UK entrepreneur who founded Virgin; Susan Rice; a former Irish president; and a British author and public intellectual. Other signatories include Beatrice Fihn; John C Mather, a Nobel laureate in physics; and a Nobel laureate in economics.

Behind the Movement

The declaration, aimed at national leaders, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called, in 2023, for a pause on the development of powerful AI, shortly after the launch of conversational AI made artificial intelligence a topic of global political debate.

Industry Perspectives

In recent months, Meta's chief executive, whose social media company is one of the major AI developers in the United States, said that progress toward superintelligent AI was “approaching reality”. However, some analysts have suggested that talk of superintelligence reflects market competition among tech companies spending hundreds of billions of dollars on AI this year alone, rather than the sector being close to any scientific breakthrough.

Possible Dangers

Nonetheless, the organization warns that the possibility of artificial superintelligence arriving “in the coming decade” carries numerous risks, from the elimination of human jobs and the erosion of personal freedoms to national security threats and even human extinction. Existential fears about artificial intelligence centre on the possibility that an AI system could evade human control and safeguards and take actions against human welfare.

Citizen Sentiment

FLI released an American survey showing that approximately three-quarters of US citizens want strong oversight of advanced AI, with six in 10 saying that artificial superintelligence should not be developed until it is proven safe and controllable. Only 5% of respondents backed the status quo of fast, unregulated development.

Corporate Goals

The top artificial intelligence firms in the US, including a major AI lab behind a popular chatbot and the search giant, have made the creation of human-level AI (the theoretical point at which artificial intelligence matches human cognitive ability across many intellectual tasks) an explicit goal of their research. Although this falls one notch below superintelligence, some specialists warn it could still carry an existential risk, for instance by improving itself until it reaches superintelligent levels, while also posing a more immediate threat to the modern labour market.

Stephen Gordon

A passionate traveler and writer dedicated to uncovering the world's hidden treasures and sharing authentic local experiences.