Are Corporations Using AI to Invade Our Privacy? WriCampians Dissect YouTube’s Latest Controversial Endeavor by Sophie Katz, age 17


Last Wednesday, YouTube began rolling out its latest child protection feature in the United States, one that will see YouTube users, most of whom do not pay to use the platform, evaluated using artificial intelligence. Artificial intelligence, commonly known as AI, has long been a controversial fixture in technological and societal conversations, particularly since the generative AI model ChatGPT became available to the masses in late 2022.

Since then, AI has shaken up educational pursuits, displaced the work of many artists and writers, and drawn controversy over its heavy water usage, which often strains the water supplies of marginalized communities, as the Harvard Business Review reports.

As AI itself has become a buzzword of sorts, seeing it coupled with YouTube and child protection was sure to prompt polarizing thoughts in children and adults alike. YouTube’s new policy, which uses AI to estimate a user’s age and then flags a viewer as under or over 18, was met with anger by counselors and campers at WriCampia. The technology will enable child protection features on accounts it deems likely to belong to minors. If YouTube’s AI is incorrect, users will have to verify their age through methods including, but not limited to, credit cards and government IDs. While YouTube maintains that the policy is focused on child protection (many users enter falsified birthdates), privacy, ethical, and environmental concerns have dominated the conversation around it. Many other social media companies, including Meta and TikTok, use similar technology.

WriCampians with different levels of YouTube usage evaluated the dilemma. Leda V., a 13-year-old WriCampian, lacks a YouTube account and, due in part to parental limits, watches YouTube only occasionally. She let out a strong “Ugh!” when asked about the ethics of the policy, and she noted the detrimental effects of AI on creatives. “I want to be a writer and AI is taking away from musicians’, writers’, and artists’ careers,” she said with passion.

Both Leda and Ashley (a counselor whom we found assisting the Page 2 Stage actors) agree that parents are the best people to moderate a child’s actions on the internet. Evaluating why corporations may want the biometric data, Ashley said that “they want it so they can sell your information to marketers.”

Ashley’s sentiment is strong and rooted in corporations’ past actions, although a YouTube spokesperson stated that the platform will not use the personal data for advertising purposes. Ashley suggests leaving content regulation to uploaders’ own judgment of the nature of their videos.

Finn S., 12, also has a personal distaste for AI. “I’d rather have a handmade animation of a cat holding a car than a sloppy AI video!” he said adamantly.

Interestingly, he also implied that the government should take a more hands-on role in social media content regulation. “It’s harder to breach because they have good computer people,” he said earnestly.

One government that has recently taken state-initiated action on child protection online is that of the United Kingdom. The country’s approach illustrates the differences between pressure from corporations, governments, and parents, and how each is implemented. This government influence may push companies to change their policies, or vice versa, with trickle-down effects in the United States.

Senior Camper Quinn, 17, worried that these laws may infringe on the First Amendment right to freedom of speech. They compared the situation to local government book bans in the United States. Quinn grew up with a blind grandfather and expressed that books can be instrumental in providing connective experiences that help children understand the lives of those around them. “When you open up the Chromebooks here at WriCampia, a pop-up comes up that has ads,” Quinn recounted. “People scrolled down and said OK, but what it is is Chrome is facilitating and aiding the transfer of data across websites to improve the ads that are being shown to you.”

Quinn also recalled that “some videos present as childish,” referencing a mature program entitled The Amazing Digital Circus that parents and algorithms may miss when regulating content for their children. “AI servers use up a lot of energy that is for the most part in the United States gained through coal and fossil fuel combustion,” sources which make up a significant share of the energy the country generates, according to Quinn. “There needs to be a disclaimer for these kinds of AI that are saying this information may be incorrect,” they said, tying into their call for YouTube to stop letting AI encroach on personal decisions about its users’ habits and personal data. ✎
