Navigating the AI Regulation Revolution
OpenAI’s CEO Sam Altman recently stated that OpenAI might ‘cease to operate’ in the EU, sparking widespread discussion and political debate on data and AI regulation. This came in the wake of a broader discussion on how AI might cause human extinction, which prompted international leaders to weigh the risks of AI alongside those of pandemics and war.
Sam Altman visited South Korea to discuss international regulation, saying:
“As these systems get very, very powerful, that does require special concern, and it has global impact. So it also requires global cooperation.”
Amidst these discussions and his now-famous AI tour, Sam Altman disagreed with the designation of “high-risk” systems in the EU’s recently drafted AI Act. If OpenAI’s applications are deemed high risk, the company would have to comply with ‘additional safety requirements’, which Altman argued might hinder the progress of ChatGPT and GPT-4; if that turns out to be the case, he said, OpenAI might back out.
In reality, it is highly unlikely that Sam Altman would ever pull out of the EU, as its AI market is worth billions. What is worrying is the power and influence he single-handedly holds over the progress of AI and disruptive technologies in the EU. While the EU has been putting up guardrails, the ten highest-funded AI start-ups are all based outside it. If things continue in this direction, the EU will keep losing geopolitical influence, funding, and research, with significant consequences for the progress and state of disruptive technologies in the future. Truth be told, it is much easier to build an AI start-up in the United States or China. And even if the EU were somehow to press the brakes momentarily, the AI itself would not stop learning, as it already operates largely independently of its creators.
As Geoffrey Hinton, the ‘godfather of AI’, said:
“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
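A toy sketch can make Hinton’s point concrete. The snippet below is a hypothetical illustration, not any production system: identical copies of a model learn separately on different data, then pool what they learnt by averaging their weights, so every copy instantly knows what any one of them discovered (real data-parallel training shares gradients between replicas to the same effect).

```python
# Hypothetical toy model of Hinton's point, not a real system:
# copies of a model learn separately, then merge their knowledge
# by averaging their weights.

def share_knowledge(replicas):
    """Element-wise mean of every replica's weight vector."""
    n = len(replicas)
    return [sum(values) / n for values in zip(*replicas)]

# Three copies start identical, then each diverges slightly after
# learning from its own slice of data (weights invented for show).
replica_a = [0.10, 0.90, 0.00]
replica_b = [0.20, 0.80, 0.30]
replica_c = [0.15, 0.85, 0.60]

merged = share_knowledge([replica_a, replica_b, replica_c])
print(merged)  # ~[0.15, 0.85, 0.30]; every copy adopts this instantly
```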
Another glaring issue is that the EU lacks a clear strategic focus on AI, unlike the United States and China. Its stance is currently reactive: it is paving the way for AI regulation but not setting up the frameworks needed for AI innovation. Digital entrepreneurship is seriously lacking in the EU, and even essential modern hardware, such as chips from Nvidia or TSMC, must be imported from outside the bloc.
To understand how this happened, we must rewind to 2017, when the government of the People’s Republic of China published its “Next Generation AI Development Plan”, kick-starting the AI race. What is noteworthy here is that China invested billions in AI companies and trained 1.4 million engineers. Just two years before, in 2015, Elon Musk had co-founded OpenAI with Sam Altman, pumping a total of $50 million into the company.

What is worrying is how both China and the United States approached the AI-training process. AI apps like ChatGPT work through machine learning: copious amounts of data must be fed in for the model to ‘learn’ how to respond correctly to questions. The catch is that 1) to date, OpenAI has not been transparent about where exactly it gets its data from, and 2) we do not know the calibre of the information that is fed in. Since the data we feed into AI algorithms is human-made, the AI generates output shaped by real-world socio-economic issues and may indirectly reproduce racist, discriminatory, and biased content, leading to misinformation, disinformation, and fake news.

Another issue is how OpenAI used Kenyan workers to make ChatGPT ‘less toxic’, paying them less than $2 per hour. Besides the glaring ethical issues, this underlines that pumping enormous amounts of data into a system like ChatGPT is anything but straightforward. The need for human intervention to identify phrases that should be banned from the system also reflects the inherent dangers of AI.
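To make the “biased data in, biased output out” point concrete, here is a minimal, hypothetical sketch in Python. It is not how GPT-4 works internally (that is a large transformer network), but the principle is the same: the invented four-sentence corpus below carries a skewed association, and a model trained purely on its statistics reproduces that skew.

```python
# A toy "language model" trained on an invented, deliberately skewed
# corpus. An illustration of the principle only, not OpenAI's method:
# the model can only echo the statistics of its training data.
from collections import Counter, defaultdict

corpus = [
    "the nurse said she was busy",
    "the nurse said she was tired",
    "the engineer said he was busy",
    "the engineer said he was late",
]

# Count which word follows each two-word context (a trigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        follows[context][words[i + 2]] += 1

# The skew in the training text becomes the model's "knowledge":
print(follows[("nurse", "said")].most_common(1))     # [('she', 2)]
print(follows[("engineer", "said")].most_common(1))  # [('he', 2)]
```

Scaled up from four sentences to billions of words scraped from the web, the same mechanism is what allows biased or false source material to resurface as confident-sounding output.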
ChatGPT’s exponential growth post-Covid-19 flourished within a non-GDPR US environment. However, once it started operating in the EU, it began facing data protection issues amidst a flurry of ethical and legislative concerns. ChatGPT was recently banned in Italy over privacy concerns. The Italian watchdog stated that on 20 March “the app had experienced a data breach involving user conversations and payment information”, which caused an uproar amid data privacy concerns. Since then, OpenAI has updated ChatGPT’s terms of service and become more transparent about data processing. However, such changes tend to come only when government gatekeepers and watchdogs notice and act on what is going on. Unfortunately, Italy stands alone at EU level in actually applying regulatory pressure on OpenAI. Amid this controversy, the EU’s draft Artificial Intelligence Act (AIA) aims to set up basic rules on AI within Europe. Given the speed at which AI is developing, this risk-based approach might even be outdated by the time it is actually enforced.
The AIA may very well produce a “Brussels effect”, whereby international bodies are pressured to adhere to EU rules if they want to operate in EU markets. Beyond that, transparency will be a huge factor in the development of chatbots and emotion-detection systems, which might lead to global disclosure requirements.
We think that AI regulation is of utmost importance to protect the livelihoods of future generations. Through legislation and awareness, 3CL supports AI governance for a safer digital world.