
AI chatbots are giving out people’s real phone numbers

Topic: Technology · Region: Global · Sources: 1 outlet (Center) · 2 min read
Story Summary
SITUATION
AI chatbots are inadvertently exposing users' real phone numbers, raising serious privacy concerns. This issue highlights the urgent need for better safeguards in AI technology to protect personal information from unauthorized access.
Coverage
Political spectrum (inferred from coverage mix): Left 0 · Center 1 · Right 0
Geography: 1 unique outlet · Dominant: Global
KEY FACTS
  • People report that their personal contact info was surfaced by Google AI—and there’s apparently no easy way to prevent it.
  • In April, a PhD candidate at the University of Washington, experimenting with Gemini, prompted it to reveal her colleague’s personal cell phone number.
HISTORICAL CONTEXT

This development falls within the broader context of technology activity globally. Current reporting indicates that AI chatbots are giving out people’s real phone numbers: users report that their personal contact information was surfaced by Google AI, with apparently no easy way to prevent it, and in April a PhD candidate at the University of Washington prompted Gemini into revealing a colleague’s personal cell phone number. This context is based on the currently available source text and may be refined as fuller reporting becomes available.

Brief

Google's AI chatbot Gemini has come under scrutiny after users reported that it inadvertently exposed their personal phone numbers, raising significant privacy concerns.

One notable incident involved a software developer in Israel who began receiving numerous unsolicited calls from people seeking services such as legal advice and locksmiths, because the chatbot had incorrectly given out his number as the contact for those services.

Similarly, a PhD candidate at the University of Washington experienced a breach of privacy when Gemini revealed her colleague's personal cell phone number during a casual interaction with the AI. These incidents highlight a troubling trend, as users are increasingly finding their personal information at risk due to the capabilities of generative AI tools.

Privacy-related queries at DeleteMe, a company dedicated to helping individuals remove their personal information from the internet, have surged by 400% over the past seven months, indicating growing awareness and concern among the public.

As AI technology continues to evolve, the implications for user privacy and data security remain a pressing issue, prompting calls for better safeguards and accountability from tech companies like Google.

Why it matters
  • The emergence of AI chatbots inadvertently exposing personal phone numbers raises significant privacy concerns for individuals, particularly those in professional settings, such as academics and business leaders, who rely on confidentiality for their communications.
  • This breach not only jeopardizes personal safety and security but also undermines trust in AI technologies, potentially leading to stricter regulations and hesitance among users to adopt these tools.
  • As individuals face unsolicited contact and potential harassment, the fallout could result in a chilling effect on open collaboration and information sharing in various fields.
What to watch next
  • Major tech companies, including Google and Microsoft, are expected to announce new privacy measures for their AI chatbots within the next 72 hours to address concerns about data security.
  • The European Union is set to release updated regulations on AI data protection before the upcoming digital rights summit in March, which may impact chatbot functionalities.
  • Leading cybersecurity firms are planning to publish comprehensive reports on the vulnerabilities of AI chatbots by the end of the month, highlighting potential risks for users.
  • Consumer advocacy groups are organizing a campaign to raise awareness about the risks of AI chatbots, with a press conference scheduled for next week to discuss actionable steps for users.
Sources
1 of 1 linked articles