Kicking the AI regulation can down the road to the frontier

Jaspreet Bindra- Founder, The Tech Whisperer Ltd., UK

The past month has been filled with two kinds of AI news: the OpenAI-Sam Altman saga, and the race to regulate AI. US President Joe Biden started it with his Executive Order on safe, secure, and trustworthy AI, requiring the AI majors to be more transparent and careful in their development. A day later came the Global AI Summit convened by UK PM Rishi Sunak, attended by 28 countries including China, as well as Elon Musk, Demis Hassabis, and Sam Altman; it led to a joint statement on regulating Frontier AI. The EU might be next, China already has principles governing GenAI, and India is not far behind. OpenAI, too, recently announced a team to tackle superalignment, stating that, “We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

The race among countries to develop AI has transformed into a race to regulate it. This is certainly good news: countries and corporations are aware of the dangers this powerful technology can pose to humankind, and the major countries and AI companies are proactively trying to manage the attendant risks. One hopes they have learned their lessons from the ills that social media begat us, and will do better this time. Hopefully, we will not need an AI Hiroshima-Nagasaki before people wake up to the dangers.

However, if you look more carefully, most of this concern and regulation seems focused on what is loosely called Frontier AI: an AGI-like future in which AI becomes more powerful than humans and perhaps escapes our control. The UK AI Summit was crystal clear in its focus on Frontier AI. The OpenAI announcement is likewise about aligning superintelligent AI with human values, or superalignment. Most of the discourse around regulating AI seems fixated on managing this future eventuality. My belief, however, is that we need to worry far more about the clear and present dangers AI represents than about what AGI might do in the future. LLMs hallucinate today and are optimized for plausibility rather than truth. AI-powered ‘driverless’ cars cause accidents and kill pedestrians today. Many GenAI models suffer from racial and gender bias, since they are trained on supersets of data that are themselves biased. Copyright and plagiarism issues abound, leading to unhappy human creators and dozens of lawsuits in courts. Training these humongous large language models continues to spew out CO2 and degrade the environment, as I have noted in my earlier articles (https://bit.ly/3QsM2Wx). Gary Marcus, a noted AI scientist and author, agrees with this sentiment: “…the (UK AI) summit appears to be focusing primarily on long-term AI risk – the risk of future machines that might in some way go beyond our control.

We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them.” (https://bit.ly/3tQXVy5). A recent Politico article has an intriguing take on this situation (https://politi.co/3tUSzli): it describes a conscious effort by Silicon Valley AI honchos to lobby the US Government to focus on just ‘one slice of the overall AI problem’ – “the long-term threats that future AI systems might pose to human survival.” Critics say that focusing on this ‘science fiction’ shifts the policy discourse away from more pressing present issues, ones that leading AI firms might prefer to keep off the policy agenda. “There’s a push being made that the only thing we should care about is long-term risk because ‘It’s going to take over the world, Terminator, blah blah blah,’” AI professor Suresh Venkatasubramanian says in Politico. “I think it’s important to ask, what is the basis for these claims? What is the likelihood of these claims coming to pass? And how certain are we about all this?” This is exactly my point: instead of superintelligence-caused doomsday scenarios, which have a comparatively small probability, we need to focus on the many immediate threats of AI. It is not a swarm of Terminator drones arising from a datacentre that will cause the destruction of humanity.

It is far more probable that it will be a state actor with the wrong intentions using deepfakes and false content at scale to subvert democracy, or a cornered dictator turning to AI-based lethal autonomous weapons to win a war he is losing. It will be the unbridled race to build the next massive LLM that further accelerates global warming, or a deluge of fake, provocative news that turns communities on each other. AI will not kill us, but a human using AI could. We need to regulate humans using AI, not AI itself.

The journey into industry

Jaspreet Bindra, the Founder of TechWhisperer UK Limited, has significantly impacted the global technology landscape. Over a distinguished career, Jaspreet has held pivotal roles such as Group Chief Digital Officer at the Mahindra Group, Regional Director at Microsoft India, and General Manager in the Tata Group.

A founding member of Baazee.com, which later evolved into eBay India, Jaspreet channelled his entrepreneurial spirit into establishing Tech Whisperer Limited. As its Founder, he provides strategic counsel to global firms such as ThoughtWorks, PwC, and Mahindra Holidays, offering insights into technology, AI, and Generative AI.

Jaspreet's influence extends to the Advisory Board of Findability Sciences, a Boston-based enterprise AI firm. Recognized as the inaugural 'Digitalist of the Year' by Mint and SAP in 2017, he imparts knowledge at Ashoka University and serves as an Expert/Faculty for Singularity University and Harvard Business, holding the esteemed accreditation of PCC ICF Coach.

Beyond his corporate prowess, Jaspreet is a prolific author with acclaimed works like "The Tech Whisperer" (2019) and "The Immune Organisation" (2021), celebrated globally for their profound insights. Armed with an MBA, a background in Chemical Engineering, and a recent Master's Degree in AI, Ethics, and Society from the University of Cambridge, UK, Jaspreet Bindra is a trailblazer shaping the future of technology with unparalleled expertise and passion.