AI chatbots are giving out people’s real phone numbers
- People report that their personal contact info was surfaced by Google AI, and there is apparently no easy way to prevent it.
- In April, a PhD candidate at the University of Washington was experimenting with Gemini when it produced her colleague's personal cell phone number.
Google's AI chatbot Gemini has come under scrutiny after users reported that it inadvertently exposed their personal phone numbers, raising significant privacy concerns.
One notable incident involved a software developer in Israel who began receiving numerous unsolicited calls from people seeking services such as legal advice and locksmiths, because the chatbot had incorrectly associated his number with those services.
Similarly, a PhD candidate at the University of Washington experienced a breach of privacy when Gemini revealed her colleague's personal cell phone number during a casual interaction with the AI. These incidents highlight a troubling trend, as users are increasingly finding their personal information at risk due to the capabilities of generative AI tools.
Privacy-related queries at DeleteMe, a company dedicated to helping individuals remove their personal information from the internet, have surged 400% over the past seven months, indicating growing public awareness and concern.
As AI technology continues to evolve, the implications for user privacy and data security remain a pressing issue, prompting calls for better safeguards and accountability from tech companies like Google.
- AI chatbots inadvertently exposing personal phone numbers raises significant privacy concerns, particularly for people in professional settings, such as academics and business leaders, who rely on confidentiality in their communications.
- This breach not only jeopardizes personal safety and security but also undermines trust in AI technologies, potentially leading to stricter regulations and hesitance among users to adopt these tools.
- As individuals face unsolicited contact and potential harassment, the fallout could result in a chilling effect on open collaboration and information sharing in various fields.
- Major tech companies, including Google and Microsoft, are expected to announce new privacy measures for their AI chatbots within the next 72 hours to address concerns about data security.
- The European Union is set to release updated regulations on AI data protection before the upcoming digital rights summit in March, which may impact chatbot functionalities.
- Leading cybersecurity firms are planning to publish comprehensive reports on the vulnerabilities of AI chatbots by the end of the month, highlighting potential risks for users.
- Consumer advocacy groups are organizing a campaign to raise awareness about the risks of AI chatbots, with a press conference scheduled for next week to discuss actionable steps for users.
