The Last Time You Trusted a Video
WAN 2.5 preview shows what’s coming next

You’re not ready for what AI video models are pulling off right now.
No, really. You think you've seen it all: deepfakes, some meme edits, maybe a funny TikTok filter.
Cute.
Forget all of that.
What’s happening in the labs (and already on Twitter timelines) is on a different level.
WAN 2.5. Kling 2.5. Wanimate.
Each one is bending the rules of what’s possible on a laptop or a cloud server, and the speed of this progress feels like cheating.
And you can run some of this at home.
Free. Open source.
No permission slips from Silicon Valley.
That’s disruptive in the truest sense of the word.
The shock of Wanimate
Let’s start with the one that actually made me put down my coffee: WAN 2.2 Animate (yes, I agree, Wanimate is the better name).
It takes the movement from one piece of footage and maps it, near-flawlessly, onto an entirely different character.
Think: swap Sonic into a Marvel fight scene, and the lighting, shadows, clothing, even the folds in the fabric all match.
You look at the side-by-side comparisons and your brain stalls.
The AI doesn’t just replace faces; it replaces the whole actor while preserving the background. One demo swapped in Sam Altman as a martial arts master, and I swear, if you weren’t paying attention to the lip sync, you’d believe he trained at Shaolin Temple.
Sure, the close-up lip movements aren’t always perfect.
Some shots get plasticky.
A nunchuck floats where it shouldn’t.
But the fact that this is open source means anyone with a gaming rig can generate Hollywood-grade edits from their bedroom.
Once the tool is in the wild, every film student, every meme maker, every random teenager on Discord has access to technology that used to cost a studio millions.
Kling 2.5 Turbo: fast, cheap, scary good
Then there’s Kling 2.5 Turbo. Not open source, but still mind-bending.
For about thirty-five cents, you can spin up a five-second 1080p video. The speed is nuts. The quality? Honestly, shockingly stable for a “turbo” model.
Think cars drag-racing down a runway, or a gorilla ice skating with cinematic lighting and dust particles in the air.
That’s not a fever dream, it’s actual prompts people are running. Yes, sometimes a backflipping sloth morphs mid-air into something Cronenberg would appreciate, but at this pace, those glitches feel like temporary hurdles.
The detail (the reflections, the layered-in fake audio, the physics of water splashes) is already at the point where you stop asking, “Can the AI do this?” and start asking, “How soon before someone uses this for something messy?”
WAN 2.5: the monster in preview

And then there’s WAN 2.5 Preview.
Still locked behind a queue (seriously, people are waiting in line like it’s Black Friday), but the claims are enormous: seamless audio-visual syncing, richer motion dynamics, 1080p fidelity, and text-based editing.
Here’s the crucial difference: WAN models often go fully open source after preview.
That’s the ticking time bomb. If Wanimate is already this good, WAN 2.5 fully unleashed will make Kling look like a warm-up act.
Studios will hate this. Indie creators will worship it.
And the rest of us?
We’ll be stuck in the middle, questioning whether that campaign ad, that viral clip, that news footage is authentic or stitched together last night in someone’s dorm room.
Why this matters
This isn’t “fun AI toys.” This is the gut-check moment where we realize that video, our most trusted medium, has lost its monopoly on reality.
We already doubted still images. Now video is joining the “fakeable in seconds” club.
The question is no longer if this technology will break into mainstream culture.
It’s how fast, and who gets to wield it first. Will it empower indie filmmakers to bypass Hollywood?
Or drown us in a flood of synthetic nonsense until we stop trusting our eyes altogether?
Either way, you don’t get to sit this one out.
VIDEO OF THE WEEK:
Why you should care: Because it’s the closest thing to seeing the future of media before it blindsides you.
This isn’t theory, and it isn’t ten years away; it’s live demos of people swapping characters into movies, conjuring realistic creatures from text prompts, and editing footage with the kind of fidelity that used to take a team of VFX artists.
Watch it, not just to be impressed, but to understand the sheer velocity of change bearing down on your life, your job, and your sense of what’s real.
The loop I want to leave open for you is this: if video itself is no longer proof, what replaces it? What do we lean on for trust?
That answer isn’t here yet. But the tech is, and it’s not slowing down.
Thanks for reading!
Best,
Miroslav from The Zilahut
AI TOOLS YOU SHOULD KNOW
Disclaimer: This article contains affiliate links. If you purchase through one, we receive a commission at no extra cost to you.