
How to Opt Out of A.I. Online

by DIGITAL TIMES


Last week, like the Jews of Exodus painting blood on their lintels, hundreds of thousands of Instagram users posted a block of text to their accounts hoping to avoid the plague of artificial intelligence online. “Goodbye Meta AI,” the message began, referring to Facebook’s parent company, and continued, “I do not give Meta or anyone else permission to use any of my personal data, profile information or photos.” Friends of mine posted it; artists I follow posted it; Tom Brady posted it. In their eagerness to combat the encroachment of A.I., all of them seemed to overlook the fact that merely sharing a meme would do nothing to change their legal rights vis-à-vis Meta or any other tech platform.

It is, in fact, possible to prevent Meta from training its A.I. models on your personal data. In the United States, there is no law giving users the right to protect their public posts against A.I., but you can set your Instagram account to private, which will prevent models from scraping your data. (Users in the United Kingdom and the European Union, which have stronger data regulation, can also file a “right to object” form through Meta’s account settings, refusing the use of their data for A.I. training.) Going private presents a dilemma, though: Are you willing to limit the reach of your profile just to avoid participating in the new technology? Other platforms have more targeted data preferences buried in their settings menus. On X, you can click Privacy, then Data sharing and personalization; there you’ll find a permission checkbox that you can uncheck to stop X’s Grok A.I. model from using your account data “for training and fine-tuning,” as well as an option to clear past personal data that may have been used before you opted out. LinkedIn includes an opt-out button in its data-privacy settings. In general, though, digital platforms are using the content we’ve uploaded over the years as raw material for the rapid development of A.I. tools, so it’s in their best interests not to make it too convenient for us to cut them off.

Even if your data isn’t going to train artificial intelligence, you will be peppered more and more frequently with invitations to use A.I. tools. Google search now often puts A.I. answers above Web site results. Google Chrome, Facebook, and Instagram prompt us to use A.I. to create images or write messages. The newest iPhone models incorporate generative A.I. that can, among other things, summarize the contents of your text threads. Meta recently announced that it is testing a new feature that will add personalized A.I.-generated imagery directly into users’ feeds—say, your likeness rendered as a video-game character. (According to the company, this feature will require you to opt in.) Mark Zuckerberg recently told Alex Heath, of The Verge, that such content represented a “logical jump” for social media, but added, “How big it gets is kind of dependent on the execution and how good it is.” As yet, all these A.I. experiences are nascent features in search of fans, and the investment in A.I. is vastly greater than the organic demand for it appears to be. (OpenAI expects 3.7 billion dollars in revenue this year, but five billion dollars in gross losses.) Tech companies are building the cart without knowing whether the horse exists, which may account for some users’ feelings of paranoia. Who asked for this, and to what end? The main people benefitting from the launch of A.I. tools so far are not everyday Internet users trying to communicate with one another but those producing the cheap, attention-grabbing A.I.-generated content that is monetizable on social platforms.

It is this torrent of spammy stuff—what some have taken to calling “slop”—that none of us can opt out of on today’s Internet. There is no toggle that allows us to turn off A.I.-generated content in our feeds. There are no filters that sort out A.I.-generated junk the way e-mail in-boxes sift out spam. Facebook and TikTok theoretically require users to note when a post has been made with generative A.I., and both platforms are refining systems that automatically label such content. But so far neither measure has made A.I. materials identifiable with any consistency. When I recently logged in to Facebook for the first time in years, I found my feed populated with generically named groups—Farmhouse Vibes, Tiny Homes—posting A.I.-generated images that were just passable enough to attract thousands of likes and comments from users who presumably did not realize that the images were fake. Those of us who have no interest in engaging with slop find ourselves performing a new kind of labor every time we go online—call it a mental slop tax. We look twice to see whether a “farmhouse” has architecturally nonsensical windows, or whether an X account posts a suspiciously high volume of bot-ishly generic replies, or whether a Pinterest board features portraits of people with too many fingers. Being online has always involved searching for the needles of “real” content in a large and messy haystack of junk. But never has the hay been as convincingly disguised as needles. In a recent investigation of the “slop economy” for New York, Max Read writes that from Facebook’s perspective slop posts are “neither scams nor enticements nor even, as far as Facebook is concerned, junk. They are precisely what the company wants: highly engaging content.”
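What would such a filter even look like? The e-mail precedent the essay invokes is, at bottom, word-frequency statistics. Below is a minimal sketch: a toy naive-Bayes spam classifier in Python, with invented training data. It is meant only to illustrate the mechanism; the systems platforms actually run are far more elaborate, and nothing here reflects any company’s real code.

```python
import math
from collections import Counter

# Toy training data, entirely invented for illustration.
spam = ["win a free prize now", "free money click now"]
ham = ["lunch at noon tomorrow", "draft attached for review"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    # Log-odds that a message is spam, with add-one smoothing
    # so unseen words don't zero out the probabilities.
    score = math.log(len(spam) / len(ham))
    for word in message.split():
        p_spam = (spam_counts[word] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham = (ham_counts[word] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("free prize now"))    # positive: reads as spam
print(spam_score("see you at lunch"))  # negative: reads as legitimate
```

The trick does not transfer neatly to slop, which is one reason no such toggle exists yet: spam betrays itself through telltale keywords, while A.I.-generated posts are built to resemble the real thing.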

Read concludes that slop is ultimately what people want—we consume it, so we must on some level like it. Among the vital participants in the slop economy are “all of us,” he writes. But it’s hard to accurately gauge the appetite for something that is being forced upon us. Social media has remained largely unregulated for decades, and it seems unlikely that legal interventions will curb our exposure to slop anytime soon. (Gavin Newsom, the governor of California, recently vetoed a state Senate bill that would have constituted the country’s first major A.I. safety regulation, mandating safety-testing regimes and so-called kill switches for the most powerful A.I. tools.) But we might look, instead, to e-mail spam as a precedent for how tech companies could become motivated to regulate themselves. In the nineties and two-thousands, spam made e-mail nearly unusable; one 2009 report from Microsoft found that ninety-seven per cent of e-mails were unwanted. Eventually, filtering tools allowed us to keep our in-boxes at least somewhat decluttered of junk. Tech companies may eventually help solve the slop problem that they are creating. For the time being, though, avoiding A.I. is up to you. If it were as easy as posting a message of objection on Instagram, many of us would already be seeing a lot less of it. ♦



