
AI needs to be regulated, period.

Updated: Apr 24

AI is new and exciting tech, but it has the potential to do serious damage without proper safeguards in place.


Written by Kasey Sheridan





In recent years, AI has been popping up everywhere: in online chatbots, image generators and faulty “immersive experiences.”


As it currently stands, with practically zero government regulation, AI is a dangerous tool that’s making life harder for artists, educators and victims of non-consensual deepfakes and image generation. It poses a threat to everyone — including you.


So, what, exactly, is the issue, and what steps can be taken to limit the dangers posed by it?


AI Art is Theft


“Comedian” by Maurizio Cattelan. (Credit: Sarah Cascone)

Above this paragraph is a picture of “Comedian,” an artwork by Maurizio Cattelan that generated a decent amount of controversy when it sold for $120,000 in 2019.


Am I really going to tell you that the banana taped to a wall is art, but a painting generated by AI isn’t?


Yes. Yes, I am.


We’re not going to dwell on the banana; “eye of the beholder” and whatnot. My point is, art needs that human component to be art.


AI doesn’t just take “inspiration” from artists. Humans can be inspired; machines can’t. AI literally steals from artists, mashing images together to create the Frankenstein’s monsters that some call AI-generated “art.”


A number of companies that use Stable Diffusion, the software behind most mainstream AI image generators, were sued in 2023 by a group of artists and, separately, by Getty Images (both cases are still pending) for using artists’ works and stock photos to “train” their image generators.


Shutterstock, an incredibly popular stock photo provider, began taking steps to pay artists whose works were used to train image generators. While that’s a step in the right direction, it doesn’t change the fact that AI image generators do not and cannot credit the artists they take “inspiration” from.


AI art theft isn’t just limited to photos, either. There’s a chance that even this article could be scraped by an AI text generator for “training purposes,” so don’t be surprised if you see direct quotes from this piece in a future “story” about AI written by a tired college student who plugged a prompt into a generator. 


The bottom line is: if it’s on the internet, it’s basically fair game for AI. Unlike natural, human intelligence, it can’t actually generate ideas on its own. Those ideas have to come from somewhere, and they come from people like you and me who might’ve never guessed that a robot could rip us off.


" How can you even be sure the content you rely on is factual and written by humans?"

This article wasn’t AI-generated — or was it?


Remember Sports Illustrated? That magazine that used to publish the swimsuit issues tabloids would rave about?


Yeah, me neither.


Before it faded into irrelevance, Sports Illustrated was a giant in the sports and entertainment news world. In November of last year, the publication admitted to deleting several articles that were “written” by authors who didn’t exist.



The profile of Drew Ortiz, a completely fabricated author who published stories for Sports Illustrated. (Credit: Sports Illustrated/Internet Archive)

Sports Illustrated claimed that the articles, most of which were product reviews, were produced by a third-party company called AdVon Commerce, a partnership it has since (unsurprisingly) terminated.


“Authors” with AI-generated headshots, names and bios started popping up on Sports Illustrated’s website. AdVon claimed the articles were “written and edited by humans,” but journalistic integrity was thrown out the window the moment someone decided it was ethical to publish under AI-generated bylines.


If Sports Illustrated, a once-renowned publication, could publish AI-generated content without thinking twice about it, what does that imply for other organizations? How can you even be sure the content you rely on is factual and written by humans?


A May 2023 report from NewsGuard, a watchdog organization dedicated to tracking misinformation, found that at least 49 “news” sites were pumping out AI-generated articles.


CelebritiesDeaths.com, a now-defunct site, published a piece claiming President Biden had died and that Kamala Harris had taken over as acting president. The article was obviously false: no other site reported the news, and it’s impossible to believe that every major outlet would miss a story that big. But who’s to say somebody couldn’t stumble across it and believe it?


Additionally, as mentioned earlier, AI relies on internet content to “learn,” so there’s nothing stopping it from training on sites full of AI-generated content. It has the potential to become a full-fledged misinformation machine.


NewsGuard found that most of the sites it flagged were churning out hundreds of “articles” a day, many of which contained blatant errors. The ads plastered all over these sites suggest that the clickbaity “articles” are just a way for their owners to make an easy buck through ad revenue.


I’ll leave you with some words of wisdom from a student journalist: if it seems too crazy to be true, if you’re reading it on a site like “crazynews123.com,” if it doesn’t have a byline, if words or phrases repeat themselves and if no major outlet is reporting on it, it’s probably not real news.


That being said, it’s getting harder and harder to tell. An AI-generated article about a politician who doesn’t exist winning a hot-dog eating contest might seem laughable now, but the technology is improving. Look at the Sports Illustrated incident.


As of March 2024, NewsGuard has found that at least 750 sites are pumping out “unreliable, AI-generated news” in a number of languages. With a major election on the horizon, expect that number to rise. Stay vigilant.


While we’re on the topic of gullibility…


Come with me, and you’ll be, in a world of AI-generated fabrication…

 


A look into the less-than-magical world of the botched “Willy’s Chocolate Experience” in Glasgow. (Credit: Stuart Sinclair)

If you live on Earth and use social media, there’s a good chance you’ve heard about the botched Glasgow Willy Wonka experience.


Just in case you haven’t, I’ll give you a quick rundown: “Willy’s Chocolate Experience” (for legal purposes, no relation to Willy Wonka), hosted by House of Illuminati, was supposed to be an “immersive experience” that promised a “journey filled with delicious treats, enchanting adventures and moments worth capturing.” Tickets were $44 per person, and the event was so bad that parents ended up calling the police.


As the hilariously horrible event went viral, new details about it began to emerge, and even the actors joined the conversation.


According to the cast, actors were given a “15-page script,” and the entire thing seemed to be AI-generated. It featured a completely new villain called “The Unknown,” an evil chocolate maker who lived in the walls. The show was supposed to end with Willy sucking The Unknown up with a giant, magical vacuum, which would’ve been impressive to pull off even by Broadway standards.


Children were promised chocolate, “a pasadise [sic] of sweet teats,” and a “catcagating [sic]” experience — whatever that means. They were instead offered a sip of lemonade and a single jelly bean.



An AI generated image of "Willy's Chocolate Experiene." Featuresa carnial-dressed man in the center surrounded by giant candies.
Even Willy’s Chocolate Experience’s website was AI-generated. (Credit: Willy’s Chocolate Experience)

I mention the Glasgow Willy Wonka disaster because it’s hilarious, but it’s also concerning.


Parents saw the poster above on the event’s website and forked over $44 a head. If the promise of “exaserdray lollipops” was enough to convince people to take the day off and promise their kids a fun time, imagine what other scams people are inevitably going to fall for.


As I mentioned earlier, AI is getting better every day. The Willy’s Chocolate Experience incident was a real laugh fest, but will it still be funny when more and more people are tricked by AI-generated scams?


Educators have their hands tied


I’m not writing this section to judge you as a student. In fact, I’ve been using AI to help me with my writing for years now. 


Grammarly is an AI program that reads over your text, checks your grammar and offers suggestions for improving your writing. I love it. I generally don’t have any qualms with non-generative AI. But, in the world of higher education, generative AI is where things get… messy.


In general, AI regulation in the classroom varies by university and, even more so, by professor. I encourage you to actually read the AI policies in your syllabi and to compare and contrast them. In my experience, many of my professors encourage the use of AI “tools” (such as Grammarly) to assist with classwork, but many prohibit the use of AI to complete an assignment entirely.


Some professors encourage AI in the classroom; some professors hate it. But, one thing’s true across the board — it’s really, really, really difficult to prove that a student submitted an AI-generated assignment. Is it possible that a student’s essay could look eerily similar to another student’s? Of course. And, could these essays have some repeated keywords that are dead giveaways of AI-generated content? Also yes. But, if neither essay has “this essay was generated by AI” explicitly written anywhere, it’s just too hard to prove that the student cheated at all.


Educators are in a tough spot right now. AI will inevitably be a part of our lives, so we should learn to utilize it — but should we also learn to work and to exist in spite of it?


Deepfakes, exploitation and Taylor Swift


This is the part of the article where it gets scary. 


In October of last year, Francesca Mani, a then-14-year-old high school student, was just one of 30 female students at Westfield High School in New Jersey who had AI-generated pornographic deepfake images made of them and shared among the boys at school.


Mani pressured her school, her state and her country to take charge and do something about the harmful deepfake images affecting her and so many other young girls across the country, but the reaction to the incident on all levels was… quiet.


In January 2024, deepfake pornographic images of Taylor Swift went viral on X (formerly Twitter). Just a week after the incident, the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act was proposed. The act would allow victims of non-consensual, sexually explicit deepfakes to sue those who created the images.


A 2019 study found that 96% of deepfake videos online are non-consensual pornography. The DEFIANCE Act would be the first federal law to protect victims. As of 2024, only ten states actually have laws that criminalize non-consensual pornographic deepfakes.


Absolutely nobody deserves to go through what Mani or Swift went through, regardless of celebrity status. Is it concerning that it took a celebrity victim for lawmakers to start paying attention? I think the answer goes without saying. But, it is a relief that lawmakers are finally taking the issue seriously.


However, a law allowing victims to sue isn’t enough. 


The social platforms that allow these images to spread and the easily accessible AI generators that create them need to be held accountable. All it takes to generate a deepfake image or video is a single picture of a victim’s face posted to social media.


It’s not just images and videos, either. AI voice deepfakes are becoming more and more popular online. Scammers have even begun cloning the voices of people’s relatives and loved ones for impersonation phone calls, often demanding a ransom for a nonexistent kidnapping or asking for money to help with a fabricated emergency.


Even the deceased aren’t safe from deepfakes.


Soul Machines recently unveiled its all-new Marilyn Monroe chatbot. The avatar, which bears an uncanny resemblance to the late starlet, is supposed to grant “tailored experiences” to the users who chat with her.



Marilyn Mon-no: a look at Soul Machines’ ironically soulless Marilyn Monroe machine. (Credit: Soul Machines)

During last year’s SAG-AFTRA strike, which was partially a reaction to a lack of AI regulation in the entertainment industry, the late Robin Williams’s daughter, Zelda, spoke out about the disturbing nature of AI deepfakes on her Instagram story.


“I’ve witnessed for YEARS how many people want to train these models to create/recreate actors who cannot consent, like Dad,” Williams said. “These recreations are, at their very best, a poor facsimile of real people, but at their worst, a horrendous Frankensteinian monster.”


AI, across the board, must be regulated: in the arts, in the classroom, online and especially when it comes to creating deepfakes of those who have not or cannot consent. As a student, as a journalist, as an artist and as a woman, I just don’t feel protected.

