What the Take it Down Act Means for AI
- Thomas Yin

On May 19th, President Trump signed the ‘Take it Down Act’ into law, setting a precedent for the criminalization of distributing nonconsensual explicit content. Praised for its attempt to protect the innumerable victims of revenge pornography and criticized for lumping such content into a single category, the Act (which includes a subclause prohibiting AI-generated explicit content) constitutes a much-needed first step in active government regulation of AI-generated deepfake content. In this article, we explore the legal ramifications that arise from the Act's loose wording, including the biggest question of all: what does it mean for AI content to be harmful?
Ship in a Bottle
The actual text of the Act is strikingly short; coupled with the phrasing of the bill (which defines only one or two procedures for identifying and removing the “intimate visual depictions” it outlaws), it is clear that the legislation is meant to be read broadly, as a legal foundation on which future laws and court decisions can build. This ambiguity accounts for much of the dissent directed at the bill, and for good reason: there is little to no concrete detail on many of the facets of its grand goal of outlawing nonconsensual explicit depictions.
For example, while the Act includes a provision allowing victims (subjects) of nonconsensual posts to ask a social media platform to delete those posts, it contains no language requiring platforms to aid federal officials in actually prosecuting the creator or distributor of the images, a stipulation that is just as important as the removal of the material itself. Even granting that the bill may have been kept deliberately modest to ease its passage through Congress (more comprehensive legislation tends to get locked up by the animosity that nuanced laws attract), it still fails to adequately define the content it attempts to ban.
Suppose a hypothetical offender distributes AI-generated explicit content loosely based on a certain set of features (e.g., height, build, hair and eye color, ethnicity). Does this give every individual whose characteristics roughly match that set legal standing to sue the offender? Although many such cases may be settled in court (e.g., with the prosecution carrying the burden of proving that the offender did in fact intend the content to depict a specific person, or distributed it as a calculated attempt to harm a specific person), it is my opinion that such legal boundaries, especially in issues as nuanced as AI deepfakes, should be made as transparent as possible to minimize the loopholes that would-be offenders can exploit.
Weighing the Scales
Deepfakes existed for years before the advent of sophisticated image-generation AI models, but they were previously laborious to make, often requiring time, money, and professional expertise to recreate an event or image from scratch. With AI deepfakes, in contrast, ill-meaning individuals can produce this kind of content far more easily, exacerbating the potential harm the technology can deliver. Yet not all deepfakes are harmful: looking past the slanderous portrayals of victims, there is a surprisingly creative use of deepfake videos and images in entertainment, as exemplified by the 2023 “American Presidents” trend, in which AI voices of Trump, Biden, and Obama were juxtaposed with anything from Minecraft to music tierlists.
The spectrum of AI deepfakes thus invites the question: at what point does deepfaking become harmful? Indeed, this is one of the biggest conceptual ambiguities in the Take it Down Act. History has shown time and again that almost every technological advancement is a double-edged sword, yet unlike inventions with legitimate and objective benefits (the steam engine and LLMs, for example), deepfaking offers very little positive use compared to the vast realm of consequences that seems to attend every mention of the technology. This logic applies just as prominently to deepfake technology that isn't directly meant to slander.
The philosophical answer to how harmful AI deepfakes are lies in analyzing the various purposes for which generative AI that mimics human facial features is used. While the Take it Down Act blocks the generation of such content to slander or exploit specific individuals through nonconsensual intimate depictions, it does not consider the harms indirectly produced by the widespread use of deepfake technology for purposes such as roleplaying and fake news.
As the technology for making faces or events appear photorealistic improves to the point where its output is virtually indistinguishable from reality to the average observer, deepfake AI will exacerbate the negative impacts of fake news by enabling malicious groups to propagate fabricated events in order to further their own agendas. Less maliciously, an increase in the number of such videos created by well-intentioned people, if not correctly labelled or identified, may also dilute public awareness of important events by diverting attention from them.
There is always a small segment of people who frame AI advances as an important step toward recreating an ideal humanoid replacement intended to serve as a companion or partner. Many of these products are unfortunately consumed by people who wish to satisfy their desire for genuine human interaction with a digital alternative, and marketing deepfake technology in tandem with preexisting chatbots may aggravate the problems that already plague AI roleplaying.
Regardless of the ambiguities and holes it may contain, we must acknowledge the Take it Down Act for what it is: an early and very much appreciated attempt at regulating AI deepfaking, a problem that will most likely emerge as one of the prime issues of AI ethics sooner or later.