AI Scams Are Getting Nasty, and Healthcare AI Has a Big Blind Spot

I’ve been watching the AI scam landscape evolve since the early ChatGPT days, and frankly, it’s getting ugly out there. When OpenAI dropped ChatGPT in late 2022, the first thing security folks noticed was how effortlessly it could write convincing phishing emails. That was the opening shot.

Now we’re past that. Cybercriminals have moved from simple email scams to fully automated vulnerability scans, hyperrealistic deepfakes, and turbocharged phishing campaigns that adapt in real time. Rhiannon Williams over at MIT Technology Review calls them “supercharged scams,” and that’s not hyperbole. The volume is the scary part. Organizations that used to handle a few dozen attacks a week are now drowning in hundreds. AI makes everything faster, cheaper, and easier to scale. And we’re still in the early innings. As more criminals adopt these tools and the models keep improving, it’s going to get worse before it gets better.

Meanwhile, over in healthcare, we’ve got a different kind of problem. Doctors are using AI for note-taking, scanning patient records, flagging people who might need specific treatments, even interpreting X-rays and lab results. A growing pile of studies says these tools are accurate. Great. But here’s the question nobody seems to be asking loudly enough: does that accuracy actually translate into better health outcomes for patients? Jessica Hamzelou dug into this for The Checkup, and the answer is basically “we don’t know.”

It’s a weird gap. We’re deploying AI tools into clinical workflows, spending money, changing how doctors work, but we haven’t done the hard work of measuring whether patients actually benefit. You can have a model that reads X-rays with 99% accuracy, but if it leads to more false alarms, more unnecessary procedures, or just adds noise to an already overloaded system, what have you really gained? The tech industry loves to move fast and break things, but breaking healthcare is a different game.

On the research front, DeepSeek finally dropped its long-awaited V4 model. Preview versions are out, and the company is claiming it’s the most powerful open-source model available, capable of rivaling closed-source systems from OpenAI and DeepMind. That’s a bold claim, but what’s more interesting is that V4 is adapted for Huawei’s chip technology. That’s a strategic move with geopolitical implications beyond raw model performance. China is clearly building its own AI stack from the ground up, and DeepSeek is a key piece.

Also worth noting: more countries are moving to restrict children’s social media access. Norway is enforcing a new ban, the Philippines might follow, and there’s a growing push in the US to get AI out of schools entirely. The backlash against putting AI in front of kids is real and gaining momentum. I think that’s healthy. We should be skeptical about deploying unproven technology to the most vulnerable users.
