Leading firms are racing to release and upgrade AI products while rolling out features that may draw on public profiles or private content.
Meta, Google and LinkedIn have introduced AI functions with varying approaches to data use and opt-out options.
“Gmail just flipped a dangerous switch on October 10, 2025 and 99% of Gmail users have no idea,” a November 8 Instagram post said.
Other viral posts warned that new AI tools make most private information available for corporate harvesting.
“Every conversation, every photo, every voice message, fed into AI and used for profit,” a November 9 X video about Meta said.
Technology companies are seldom fully transparent about what data they collect and how they use it, Krystyna Sikora, a research analyst with the Alliance for Securing Democracy at the German Marshall Fund, told PolitiFact.
“Unsurprisingly, this lack of transparency can create significant confusion that in turn can lead to fear mongering and the spread of false information about what is and is not permissible,” Sikora said.
She said the most reliable way for users to understand and protect their privacy is to read platform terms and conditions, which often specify how data is used and shared.
The United States has no comprehensive federal data privacy law for technology companies.
Meta
A widely shared claim said Meta would start reading direct messages from December 16 and feed them into AI systems for profit.
Meta announced a new policy taking effect on December 16, but it does not mean private messages, photos or voice notes will automatically be used to train its AI.
The policy mainly concerns how Meta customises content and advertising based on user interactions with Meta AI.
For example, if a user chats with Meta AI about hiking, the platform may suggest hiking groups or gear.
Meta says it does not use private messages from Instagram, WhatsApp or Messenger to train its AI models.
However, it does collect content set to “public,” including posts, photos, comments and reels.
If users discuss sensitive topics such as religion, sexual orientation or race with Meta AI, the company says its system is designed not to convert those interactions into ads.
Meta also says its AI only accesses a device microphone when users give permission.
The company added that its AI may use information about people without Meta accounts if they appear in other users’ public posts.
It said deleting accounts does not remove the possibility of past public content being used.
David Evan Harris, who teaches AI ethics at the University of California, Berkeley, told PolitiFact that the lack of US federal privacy rules means users have no standardised right to opt out of AI training.
He said opt-out tools, where they exist, are often hard to find.
Meta does not provide a universal opt-out for AI training across Instagram, Facebook and Threads.
WhatsApp users can disable Meta AI per chat through advanced privacy settings.
A form circulating online as a supposed opt-out method does not stop AI training; it only lets users report instances where Meta AI exposes their personal information.
Google
A social media post claimed Google’s AI can now read every Gmail message and attachment by default.
Google said its AI product Gemini Deep Research, announced on November 5, can connect to services such as Gmail, Drive and Chat only after user permission.
Users can choose which data sources Gemini can access, including Gmail and Google Drive.
Google also collects data through Gemini prompts, uploaded images and videos, and interactions with apps such as YouTube or Spotify, if permission is granted.
It may also collect call and message logs if users allow access.
A Google spokesperson told PolitiFact the company does not use data from registered users under 13 to train its AI.
Google can access email data when smart features in Gmail and Google Workspace are enabled, and they are on by default in the United States.
These features help draft emails or suggest calendar events.
Turning off smart features can block AI access to Gmail, but it does not disable Gemini when used separately in apps or browsers.
A lawsuit in California alleges that an October policy change gave Gemini default access to private content in Gmail, Chat and Meet.
The suit claims the change violates California’s 1967 Invasion of Privacy Act by enabling access without clear consent.
Before the change, users had to manually opt in, according to the complaint.
To limit how their chats are used for AI training, users can open temporary Gemini chats or chat without signing in, neither of which saves chat history.
Opting out of Gmail and other product integrations requires disabling smart features in settings.
LinkedIn
Another claim said LinkedIn would start using user data to train AI from November 3.
LinkedIn, owned by Microsoft, confirmed it is using some US members’ data to train generative AI models.
The data includes information from user profiles and public posts.
LinkedIn said it does not use private messages for this training.
The company also said Microsoft began receiving certain LinkedIn member data from November 3 to support personalized advertising.
Autumn Cobb, a LinkedIn spokesperson, told PolitiFact that members can opt out of having their content used for AI training.
They can also opt out of targeted advertising.
To disable AI training use, members must go to data privacy settings, select “Data for Generative AI Improvement” and turn off the training option.
To stop personalized ads, they must turn off ad personalization and data sharing with affiliates in advertising data settings.