
OpenAI recently revealed a troubling trend: several China‑linked actors are exploiting ChatGPT for covert influence operations, cybercrime, and phishing campaigns.
- Influence Operations: Groups are using AI-generated text—and even profile images—to post political content that amplifies U.S. domestic tensions. One campaign actively promoted both sides of divisive topics such as tariffs, foreign aid cuts, and China–Taiwan tensions, deliberately stoking discord.
- Fake Personas and Deception: In a scheme known as “Uncle Spam,” accounts leveraging ChatGPT posed as U.S. veterans with fabricated names (“Veterans For Justice”), using believable logos to lend authenticity and sow confusion.
- Cyber‑enabling Tools: These actors also asked ChatGPT to help build social‑media bots, develop phishing scripts, conduct open‑source intelligence, and even assist with malware testing—activities that escalate the danger of fraud and hacking.
OpenAI took action by banning dozens of malicious accounts—four of the ten most notable cases were linked to China. However, the real concern lies in the downstream impact: how these activities can exploit trust, especially among vulnerable populations.
Who Are the Targets?
- General U.S. Public: Political influence operations aim to polarize communities by spreading emotionally charged misinformation—making people distrustful of institutions, one another, and online content.
- Senior Citizens: Grandparent scams and spoofed crises are becoming alarmingly sophisticated. AI now enables voice and image deepfakes that impersonate family members in distress and deliver urgent requests by phone or social media.
Older adults, often less practiced at questioning digital content and less aware of AI threats, have already lost billions to scams: one 2023 estimate put fraud losses at $10 billion, with seniors disproportionately affected.
Protecting The American Public, Especially Seniors
- Spot The Signs of AI‑driven Scams
- Voice or video messages that claim to come from loved ones in urgent situations—verify through a separate, known contact method first.
- Emotional manipulation: pressure to “act now” or “don’t tell anyone” is a hallmark of fraud.
- Pause & Verify
- If a dramatic story unfolds—like legal trouble or medical emergency—call the family member, attorney, or hospital to confirm independently.
- Never send money or buy gift cards for someone whose identity you can’t independently verify.
- Educate on Digital Hygiene
- Encourage seniors to install scam-blocking tools, fraud filters on social media, and caller‑ID spoof protection.
- Teach them how to spot phishing attempts—unsolicited links, emotional triggers, or requests to share personal data.
- Strengthen Community Awareness
- Loved ones, caretakers, and friends should have honest conversations about scams. Family is often the first line of defense.
- Know local resources: AARP, Better Business Bureau, and state attorney general offices offer scam‑reporting services and educational materials.
- Stay Informed on AI Risks
- As cybercriminals adopt generative AI, continue updating scam-prevention habits. Subscribe to trusted sources—government alerts, financial institutions, consumer-protection agencies.
Final Takeaway
While AI like ChatGPT brings incredible promise, it also equips malicious actors—including China-linked operations—to create disinformation, craft phishing scams, and mimic loved ones. The American public must stay alert, with seniors at the center of protection efforts. Vigilance, verification, and open dialogue across families and communities are key.
-Nguyễn Bách Khoa-