Discussion about this post

Sam Illingworth:

Awesome piece. Thanks, team. So many takeaways, but for me this hit hardest: "you can’t generalize what you don’t deeply understand." It's why a slow and steady approach is the way to critical thinking. 🙏

Saida Chakri:

Most firms claiming to be “AI-first” are really just “LLM-assistant-first”, and that’s a dangerous illusion.

This article nails it, but the uncomfortable truth is that practitioners like me spend more time re-educating senior executives than actually building. I took courses with Sara and Tyler (the BEST in the market) and I use their frameworks daily. In global organizations, however, progress is limited to rigid LLM usage and shallow agentic capabilities. Leaders mistake pilots and demos for transformation, while the real moat (evals and workflow intelligence) barely makes it past PowerPoint.

If we want to break the cycle, leadership must stop treating AI like a compliance checkbox and start treating it as a cultural reset. Otherwise, competitors will be encoding expertise at scale while they're still running "safe" sandbox experiments.

The hopeful part? People like Sara and Tyler are playing the role that great thought leaders throughout history have always played, sparking cultural shifts by equipping a new generation of practitioners with knowledge and frameworks that ripple outward. That’s how we move from AI hype to lasting transformation.
