
AI, not A1: A taste of the future

Published 1:30 am Wednesday, July 30, 2025

By Ruby E. Carlino

for the Sequim Gazette

At an event focused on educational innovation, the secretary of education, who’s working hard to put herself out of a job, made news when she talked about a school system “that’s going to start making sure that first graders, or even pre-Ks, have A1 teaching in every year.”

Who can blame Kraft Heinz’s A.1. Sauce for capitalizing on its unexpected shout-out at a panel on the future of education?

Artificial Intelligence, or AI, is here — and it’s here to stay. The potential for good or bad cannot be overstated.

The AI frontier may feel like the wild, wild west. More than 1,000 state laws relating to AI reportedly have been introduced this year — California leading the pack with the most enacted state-level AI laws — but only this May did Congress start considering its first federal regulation.

Earlier this month, the White House, in an apparent shift, released its AI Action Plan which focuses on accelerating innovation and infrastructure. Advocacy groups are warning that the emphasis on deregulation and innovation is at the expense of public interest protections, worker rights, and environmental safeguards.

Let’s look at the good. The largest hospital-based research enterprise in the United States, Mass General Brigham, has an AI tool called FaceAge that reportedly can estimate a person’s biological age and improve cancer survival prediction. This could help inform treatment decisions in cancer care and other chronic diseases.

Microsoft says that its Aurora AI predicted a 2022 Iraqi sandstorm and a 2023 typhoon’s landfall in the Philippines four days in advance.

Researchers at the University of Tokyo have developed an AI-powered microscope system that can detect dangerous blood clots forming in real time, without invasive procedures.

In Saudi Arabia, a Chinese startup established an AI medical clinic featuring an AI “doctor” that reportedly diagnoses and prescribes treatments with human oversight.

Now, the bad.

“Grandparent scams” are evolving into AI-enabled schemes that generate panicked calls from a “grandchild” claiming to have been arrested or to need bail money. This is AI voice cloning, and it’s just getting started. In one recent advance, a photo and a snippet of voice are enough to generate a high-quality video in two minutes.

There’s a viral TikTok video of Tom Cruise biting into a lollipop and saying, “That is incredible. How come nobody ever told me there’s bubblegum?” It was meant to look and sound like him, but it wasn’t him. It’s a deepfake: AI-generated video that can mimic voice, expressions, and mannerisms, making detection harder.

Credit reporting company Experian says that scammers are now using AI to write messages, create images and generate videos that can enhance their scams.

AI can be programmed to be emotionally manipulative, which will likely supercharge romance scams. In the past, you might have spotted the odd phrasing and grammatical mistakes of the infamous Nigerian prince scams; those days may be over. The next romance scammer may come across as trilingual, empathetic, and looking like a French model — all while sitting at a scam farm in Myanmar.

AI-driven robocall systems will likely intensify those already annoying intrusions. Unlike the old robocalls that were clearly recordings, the new systems not only can spoof phone numbers, they can also mimic human pauses and filler talk, making the deception harder to detect.

Artificial intelligence tools like ChatGPT made news for being used in college essays and court filings. These generative AI systems can write reports, emails, texts, or letters tailored to a potential target’s interests and background. We should expect phishing attempts to become more polished and more frequent as automated tools become more available and affordable.

So, what can be done to protect ourselves? One suggested strategy is to agree on a code word that family members can use for voice verification. I have a code word with my husband that he knows to ask for should somebody sounding like me ever call to say, “Honey, I’ve been kidnapped…”

If your cell provider has caller ID verification and anti-robocall features, turn them on. Nothing offers 100% protection, of course, but any degree of protection will help.

Some U.S. 911 centers are reportedly piloting or have already deployed video verification for emergency calls. It is usually one-way video: the dispatcher can see you, but you can’t see them.

There does not appear to be any announced timeline for 911 centers in our state to adopt video verification. That’s a gap our elected representatives need to address.

As we confront this fast-changing world, I think we need to stay in closer touch with our older relatives and friends. Some elderly loved ones may become more isolated as they age, making them prime targets for scams offering companionship. Staying connected with people we trust and having a strong sense of community can help protect us all.

Report fraud

If you’re not able to use ReportFraud.ftc.gov to file a report, you may call the FTC’s Consumer Response Center at 877-382-4357.

___________

Ruby E. Carlino is a published writer with over a decade of blogging experience and a background as a technology analyst. She has lived in Sequim since 2018, after spending years in Asia, Central America, Europe, and the Washington, D.C. area during her husband’s diplomatic assignments. She can be reached at nextchaptercolumn@proton.me.