
The AI news cycle doesn’t do slow weeks. I’ve been writing about this beat for over a year now, and this one still caught me off guard. Not because of a big breakthrough or a single headline, but because of what felt like a shift in mood. People are using AI more than ever, yet a growing number seem to wish they didn’t have to.
One of the stories I’ve been following closely is Anthropic teasing its new model, Claude Mythos. It is being described as a big step forward, yet the hype is running ahead of the evidence. Elsewhere, the focus is less on capability and more on broader consequences, like Val Kilmer’s return via AI, which raises questions about consent and the future of entertainment.
Once again, I’ve rounded up the key stories you need to know, along with some practical tips for getting the most out of tools like ChatGPT. Think you know the biggest AI stories of the past week? Take my AI news quiz below to see what you remember.
Last week’s top AI headlines
Welcome to ICYMI AI, your weekly roundup of the most important developments in artificial intelligence. Here are the biggest AI stories from the past week and why they matter.
The first full performance of a dead actor in AI is here – and it won’t be the last
The upcoming movie As deep as the grave used AI to recreate Val Kilmer in a major role. This is not a brief cameo, but a full performance, made with his family’s approval.
This feels like a threshold moment. With AI, a person’s likeness can become an asset that outlives them. The filmmakers emphasize that Kilmer’s family was involved throughout the process. But that doesn’t settle the deeper questions around consent, ownership, and what a performance means when the actor is never on set.
Another detail strikes me here: a key AI-generated shot reportedly took only a few minutes once the assets were ready. That suggests this kind of AI filmmaking won’t stay rare or expensive. We may soon reach a point where AI-generated films are commonplace, but will audiences accept them?
- Learn more: “Fear not the dead and fear not me”: AI brings a digital Val Kilmer back to the screen
Gemini’s writing slips past AI detectors where ChatGPT gets caught

A recent test found that Google Gemini produces writing that is harder to flag as AI-generated than that of its competitors, particularly ChatGPT. Several AI detection tools failed to identify Gemini-generated text at all.
This matters because it undermines the AI detection tools that many people are asked to rely on. Schools, publishers, and some online platforms are investing in AI detection. But if the results are inconsistent or just plain wrong, those detectors’ verdicts carry no real authority.
The bigger issue is what happens when the line between human-written and AI-generated content completely collapses. Many people think they can still spot AI writing, but I’m starting to wonder if we’re only picking up on the most obvious clues and everything else is already going unnoticed.
- Learn more: “ChatGPT gets flagged again and again” — Gemini is the best AI at imitating human writing and evading detection
LinkedIn says AI is yet to impact jobs, but on-the-ground evidence tells a different story
A LinkedIn executive said this week that AI does not appear to be behind the decline in hiring. But reports from individual companies tell a different story. The latest example is British supermarket chain Morrisons, which announced hundreds of office job cuts and explicitly linked them to AI-driven restructuring.
This is part of a growing pattern across industries and countries: targeted, AI-linked job cuts that do not yet show up in the aggregate data. If your workplace has started mentioning “efficiency” and “restructuring” in the same sentence as AI, you may already be seeing this happen, even if it doesn’t register in the hiring trend data.
Interestingly, it’s not just jobs that could change: the internet itself could split in two. In an opinion piece, we explored the idea of an 80/20 internet, where 80% of web traffic is AI agents and only 20% is human. There would essentially be two internets: one optimized for machines, and one for humans still in search of reality and authenticity.
- Learn more: “Is AI having an impact on jobs right now? We looked and, honestly, we didn’t see it”: LinkedIn exec says AI isn’t yet causing a big decline in hiring
More AI news you may have missed
- After a developer recently discovered that a popular plugin was collecting their AI prompts, it only seemed right to share my favorite tips from the past week: this simple change in how you prompt the AI, and this prompt hack that fixes ChatGPT’s biggest weakness.
- Do you like data and research? Me too. Stanford University released its 2026 AI Index report, which finds that AI is evolving rapidly while our ability to measure and manage it is not keeping pace. And while we talk about AI’s rapid growth, a former programmer’s speech at a planning meeting in Ohio is going viral for highlighting what that growth is actually costing on the ground, from drained water reserves to massive energy demands.
- The Bank of England is running simulations on how AI could destabilize financial markets. More specifically, what happens if AI systems amplify a crash?
- Anthropic’s new model, Claude Mythos, has had its deployment restricted after it demonstrated an ability to find and exploit software vulnerabilities. It’s worth reading this explainer on Claude Mythos.
Follow TechRadar on Google News and add us as a favorite source to get our news, reviews, and expert opinions in your feeds.