Cut 90-min podcasts to a 3-min weekly digest
Save hours. Get smarter in 3 minutes every Monday — free.
Follow the team on X for highlight drops, episode threads, and launch updates.
Highlight drops, launch threads, and behind-the-scenes from the Podmark team.
Follow Podmark on X
Podcasts are handpicked by our team.
Latest Highlights

I have a really good one. My boyfriend works at Anthropic, so take what I say with an 80% discount. Before 4.5, I wasn't using Claude daily, though I tried it occasionally like other models. ChatGPT and Gemini were my daily drivers. When Opus 4.5 launched, I tested it with an unpublished study, asking it to write a column in the style of Casey's Platformer. ChatGPT 5.1 still fails this, giving bullet points I'd never use. Gemini 3's output is somewhat structured like mine but has obvious AI tells. When I first tried this with Opus 4.5, it sent a chill down my spine. For the first time, I saw sentences I could have written, especially the conclusion. Earlier this year, we discussed style transfer, like the Studio Ghibli moment for images. I've been waiting for that in text, and this was it. I thought, "Oh my God, it's starting to happen." That was the first thing Opus 4.5 did that made me think they might have something here.

AI, as a field, is largely inspired by human intelligence. We are the most intelligent animals known in the universe, and human intelligence is multifaceted. The psychologist Howard Gardner proposed the theory of "multiple intelligences" in 1983 to describe this, including linguistic, spatial, logical, and emotional intelligence. I view spatial intelligence as complementary to linguistic intelligence, not in opposition to a vague "traditional" concept. Spatial intelligence is the ability to reason, understand, move, and interact in space. Consider the deduction of DNA's structure: much of it involved spatially reasoning about molecules and chemical bonds in 3D to conjecture a double helix. This ability, demonstrated by Francis Crick and James Watson, is difficult to reduce to pure language. Even daily tasks, like grasping a mug, are deeply spatial. Seeing the mug, its context, and my hand, geometrically matching my hand to the mug, then touching the correct points, is all profoundly spatial. While I can use language to describe it, language alone cannot enable you to pick up a mug.

We're starting to implement this now. At DeepMind, much of my RL research focused on this. When we first got access to GPT-4, we had a strong intuition that we'd be able to string together many model calls, or eventually create reasoning models where the entire agent is differentiable. On the very first day we had access, Winston spent 14 hours in his room redoing many of his associate tasks. His work essentially involved a hacky, agentic process: he'd look up case law, summarize it, and then use that summary to draft documents. Witnessing this gave us early insight into the future direction of this technology.
See What You'll Get
A sample of our weekly curated podcast highlights
© 2025 Podmark. All rights reserved.
How it works
Discover the best podcast insights, curated by AI and refined by humans. Here's how Podmark transforms every episode into bite-sized brilliance.