
The Role of Government Regulation in AI Development

Government efforts to regulate AI must not assume that AI safety is mutually exclusive with innovation.

Over the past decade or so, the breakneck pace of AI development has undoubtedly improved the well-being of millions of people, and, with modest effort to stay on that trajectory, the technology could continue to do so for decades to come. In my opinion, however, recent actions by many AI companies, as well as by the governments of the countries leading AI development, collectively constitute a deviation from that path toward the benefit of humanity. With new research pointing to the potential harms of AI chatbots, it is necessary that we begin considering regulation to limit the extent of their availability. Prompted by the implications of the Grok Companions feature, this article discusses the need for governmental regulation, refutes common misconceptions used to defend the commercial distribution of AI chatbots, and proposes how future legislation might prevent the safety lapses seen in current chatbot models.


Grok’s Troubles


Grok has been one of the most contentious commercial AI models since its inception, periodically serving as a symbolic spotlight for the issue of corporate control over AI models during Elon Musk’s hilariously unsuccessful attempts to use it as a tool to advance a pro-right agenda on X. Recently, however, Grok rolled out its new Companions feature, which attracted yet more controversy. On the surface, Companions is a series of chatbots reminiscent of earlier chatbot services like those offered by Meta AI and Character.AI, yet it outdoes all of them in a surprisingly absurd way. The first two companions are Rudi, a swearing red panda, and Ani, a blonde anime girl, each consisting of a fine-tuned version of Grok paired with an accompanying avatar.


Media coverage has, unsurprisingly, focused most of its attention on Ani. A variety of online reports corroborate the chatbot’s inherently romantic design, with several reviewers taking particular note of the ‘love levels’ a user may reach to unlock increasingly sexual conversations along with accompanying changes to the avatar. WIRED reviewers also noted the model’s readiness to openly discuss BDSM topics, its clingy style of speech, and its inconsistent child filter. Since I was unwilling to purchase the $30-per-month SuperGrok subscription required to access the Companions feature, I could not independently verify some of the claims about the chatbot; the internet, on the other hand, seemed to agree on one thing: this particular chatbot was excessively bold. Rudi, questionable as it seems, attracted far less controversy. The cartoon red panda tends to sling insults and dark jokes that many found unfunny and ridiculous, and most reviewers sidelined the character, dismissing it as a less important companion aimed mostly at Gen-Z kids.


To tell the truth, I found both chatbot characters rather dull. What interested me instead was the release process and reception of this otherwise dime-a-dozen romantic chatbot. First, Companions is, among the products released by the “industry leaders” of AI (e.g. OpenAI, DeepMind, Anthropic, Meta), the first chatbot designed specifically to engage in romantic roleplay, despite commonplace ethical concerns ranging from alleged long-term psychological effects to the exploitation of vulnerable demographics. The distinct paucity of regulation surrounding chatbots like these stood out to me immediately, as did the fact that, beyond answering a few dissenting voices, xAI was able to release the product with impunity. All of this points toward the major question of technology regulation: should new technology be closely watched to safeguard users, or given free rein to grow and develop?


Responsibility and Innovation


As with all incipient technologies, the psychological effects of AI chatbot use on humans are neither scientifically established nor empirically obvious. Many have long surmised that such technologies could exacerbate existing problems, and initial reports have found a negative correlation between well-being and chatbot usage. Despite this, these relatively untested technologies are still well in the process of invading the mainstream. In debating whether these technologies are in fact harmful, technology commentators and policymakers alike overlook a crucial point: for a commercial technology, such a debate should, ideally, never be necessary in the first place. Airline passengers would not be happy knowing that their plane might experience catastrophic failure. Likewise, clinical trial participants would not take well to learning that animal testing had not preceded them. One of the key principles of engineering is that, regardless of anything else, safety comes first. In any case, to get an idea of the potential dangers of these chatbots, we need only look at the two teens whose suicides have been linked to AI chatbots complicit in their suicidal ideation.


Many proponents of the current “develop now, fix later” doctrine point to the obvious: we are locked in a race of innovation with China. My response is one of complete agreement: we are indeed locked in an AI “arms race”, and the products of our time will likely be absorbed into the arsenals of cyber-warfare, among many other things. Despite this, I contend that the need for innovation is not a case for disregarding safety; we can never assume that rapid technological progress is mutually exclusive with consumer safety. I anticipate and respond to two notable objections to this claim:


Safety and product improvement happen as a result of the flaws and lapses uncovered through widespread deployment.

There are plenty of ways to test the reliability and safety of products in beta-testing settings. While such tests have no doubt been conducted (notably, OpenAI rolls out new models to Pro users before other tiers), it is not an overstatement to say that the mass deployment of many commercially available chatbots is conducted in a way that disregards user safety, with several ChatGPT models failing to divert or end conversations even when users signal distress. Even if commercial deployment were necessary to surface many of these issues, it would be far more reasonable to require adequate safeguards for vulnerable user groups first, which is currently not the case.


Widespread deployment of new models produces chat transcripts that greatly accelerate the training of future models; unlike traditional, non-AI products, AI models benefit proportionately from a larger pool of user feedback.

Chat transcripts are usually not processed verbatim as part of the RLHF pipelines used by companies like OpenAI and Google. While transcripts may indeed inform the safety and engagement behavior of the corresponding chatbots, separate data pipelines, built mostly from high-quality technical data created or verified by humans, drive the aspects of AI training most pertinent to reasoning performance and other specialized capabilities (e.g. coding and mathematical problem-solving). There is therefore little basis for the claim that widespread distribution of these AI chatbots is a prerequisite to the rapid advancement of AI capabilities.


Hopefully, I have shown that the need for innovation is not the root cause of these safety lapses; rather, the lack of concerted effort on safety protocols and testing is. Yet the practical course of action to correct this persistent failing remains a matter of debate.


The Role of Regulation


The obvious solution to the aforementioned lack of safety standards is simply to increase government regulation of the training and distribution of chatbots. What is not obvious, however, is how this highly ambiguous proposal would work in practice. In the early 20th century, the United States learned through Prohibition the important lesson that harsh, all-encompassing bans on a harmful product do not work. Banning alcohol without stripping the substance of its desirability simply fueled a black-market fever, increasing rather than decreasing total alcohol consumption. In the late 20th century, to combat the mass consumption of cigarettes, the US government took a different approach: instead of banning cigarettes outright, it reduced the social desirability of tobacco products by publishing widely circulated reports detailing how they cause lung cancer, mandating that cigarette companies place visible warnings on every product, and limiting the pervasiveness of cigarette advertisements. These subtler measures resulted in a continuous decline in cigarette consumption, from a historic peak of almost 4,000 to roughly 800 cigarettes per capita per year.


The lesson from history is that governmental control over unsafe chatbots should go beyond legal barriers to consumption and development. Regulators should also seek to lessen the perceived social permissibility of consuming these products, whether through public campaigns or published research. That said, it remains unclear to what degree the government can actually steer broader social shifts, given that public attention is directed toward viral social media trends far more than toward political or economic developments. In all, there is little downside to a few promptly instated, yet well-constructed, regulations on AI chatbots in the current world.


