Panel Emphasizes that Buzzy AI Tech Should be Seen as ‘Deployed for Surveillance’

A Bloomberg conference in San Francisco drew prominent figures in AI, including OpenAI's Sam Altman and Stability AI founder Emad Mostaque. Yet one of the most captivating discussions of the day came in the afternoon, during a panel on AI ethics.

The panel featured Meredith Whittaker, president of the secure messaging app Signal; Navrina Singh, co-founder and CEO of Credo AI; and Alex Hanna, director of research at the Distributed AI Research Institute. Their shared message to the audience: don't get too absorbed in the future promise and peril of AI. AI is not magic, it is far from fully automated, and, according to Whittaker, it is already more invasive than most Americans seem to realize.

For instance, Hanna pointed to the global workforce that helps train today's large language models. She noted that these contributors receive little recognition in the enthusiastic discussion around generative AI, partly because the work is unglamorous and partly because it clashes with the prevailing narrative about AI.

Hanna emphasized, “From the reports we have, there is a large workforce performing annotation behind the scenes to enable these technologies to function even to a basic extent. These workers are engaged with platforms like Amazon Mechanical Turk and companies like Sama. They come from various parts of the world, including Venezuela, Kenya, and the United States. They are the ones doing the labeling, and individuals like Sam Altman and Emad Mostaque, who may describe these technologies as magical, need to acknowledge that human element. These technologies may appear autonomous, but there’s a significant amount of human labor supporting them.”

Whittaker’s Remarks 

Whittaker, whose background includes a stint at Google, co-founding NYU’s AI Now Institute, and advising the Federal Trade Commission, delivered even more pointed remarks. She argued that while the world may be enchanted by chatbots like ChatGPT and Bard, the technology underneath is dangerous, especially as power becomes increasingly concentrated in the hands of those at the top of the advanced AI hierarchy. The audience responded enthusiastically to her message.

Whittaker stated, “Perhaps some in this audience are users of AI, but the majority of the population is subjected to AI… It’s not a matter of individual choice. Many of the ways AI influences our lives, makes decisions, and shapes our access to resources and opportunities happen behind the scenes, often without our awareness.”

Whittaker offered the example of someone seeking a loan from a bank. That person could be denied without ever knowing that a system in the background, likely powered by a Microsoft API, used scraped social media data to deem them uncreditworthy. “I will never find out because there’s no mechanism for me to discover this,” she explained. She went on to say that while there are potential solutions, challenging the current power structure to implement them is extremely difficult. “I’ve been at the table for about 15 to 20 years, but being at the table without any influence is meaningless,” she emphasized.

Many individuals without influence might share Whittaker’s perspective, including current and former employees of OpenAI and Google who have reportedly expressed concerns about their companies’ AI product launches.

Meanwhile, there’s a lot that regular people may not fully grasp about what’s unfolding, according to Whittaker. She characterized AI as “a surveillance technology,” explaining to the audience that AI requires surveillance in the form of vast datasets, which in turn entrenches the need for more, and more intrusive, data collection. She stressed that these systems are deployed for surveillance purposes whether the AI’s output is a statistical estimate or location data from cell towers. Either way, the data feeds into an individual’s profile, wielding significant power over their life, and that power is concentrated in the hands of tech companies.
