Navigating the Transformative Potential and Risks of Artificial Intelligence
Over the past few months, the field of artificial intelligence (AI) has seen remarkable advances, most visibly OpenAI's ChatGPT, a large language model. In January, ChatGPT became the fastest-growing consumer application in history, reaching 100 million users within just two months.
The future of AI remains uncertain, with much of its development taking place behind closed doors on various fronts. One thing, however, is certain: AI is now accessible worldwide, and as a result the world is poised for transformation on a significant scale.
The transformative potential of AI lies in its status as a general-purpose technology. It possesses the ability to adapt and function autonomously, capturing some of the essence that has historically enabled humans to reshape the world around them.
AI represents one of the few practical technologies capable of facilitating a comprehensive restructuring of our economies to achieve a Net Zero future. Researchers and collaborators have already harnessed AI to forecast the output of intermittent renewable energy sources, optimize the placement of electric vehicle chargers for equitable access, and improve the management and control of batteries.
However, even amid the potential economic gains brought about by AI, some workers stand to lose out. AI is already being used to automate tasks performed by copywriters, software engineers, and even fashion models, a profession that economist Carl Frey and I estimated in 2013 to have a 98% probability of being automatable.
A study conducted by OpenAI found that nearly one in five workers in the United States could see half of their tasks become automatable through large language models. While AI is likely to create new jobs, many workers may face prolonged job insecurity and wage reductions. For instance, following the introduction of Uber, taxi drivers in London experienced a wage decline of approximately 10%.
Furthermore, AI introduces alarming new tools for propaganda and misinformation. Amnesty International reports that Meta's algorithms, by promoting hate speech, played a substantial role in the atrocities committed by the Myanmar military against the Rohingya people in 2017. Can our democracies effectively combat the onslaught of targeted disinformation?
At present, AI remains opaque, unreliable, and difficult to steer, with harmful consequences. AI policing programs have led to wrongful arrests, such as that of Michael Williams, falsely implicated by ShotSpotter. Sexist hiring algorithms, as acknowledged by Amazon in 2018, and the Dutch tax authority's false accusations of benefits fraud have ruined the lives of numerous individuals, disproportionately those from ethnic minorities.
Perhaps the most disconcerting aspect is the potential threat AI poses to our survival as a species. A survey conducted in 2022 (although potentially subject to selection bias) found that 48% of AI researchers believe there is at least a 10% chance that AI will lead to human extinction. The rapid progress and uncertain trajectory of AI could also disrupt global peace: for example, AI-powered underwater drones capable of locating nuclear submarines might lead a nation to believe it could launch a successful nuclear first strike.
For those who believe that AI could never attain the intelligence required to dominate the world, it is worth remembering that our world was recently upended by a relatively simple coronavirus. The alignment of human incentives (e.g., "I need to work despite having a cough to support my family") with the pathogen's spread resulted in the loss of 20 million lives and the incapacitation of tens of millions more. Similarly, AI, viewed as an invasive species, could impoverish or even eradicate humanity by initially operating through our existing institutions.
How can we address these risks? We require innovative and robust governance strategies to mitigate the risks associated with AI while maximizing its potential benefits. This entails ensuring that complex regulatory burdens are not solely borne by large corporations. Current attempts at AI governance either lack sufficient depth, such as the UK's regulatory approach, or suffer from sluggish progress, like the EU's AI Act, which has been in development for two years, twelve times the duration it took for ChatGPT to reach 100 million users.
International cooperation is essential in developing shared principles and standards to prevent a "race to the bottom." It is crucial to recognize that AI encompasses various technologies, each demanding specific regulations. Above all, even though the future of AI remains uncertain, it is imperative that we take precautionary measures now.
In conclusion, recent advancements in AI, particularly the widespread adoption of ChatGPT, have propelled us into a new era. While the potential benefits of AI are significant, we must be mindful of the risks it poses. From job displacement to the spread of propaganda and the potential threat to our very existence, AI necessitates careful governance and international collaboration. By taking proactive action today, we can harness the transformative power of AI while mitigating its negative consequences, ensuring a future in which technology serves humanity's best interests.