1 counterintuitive sign your AI strategy is working
Why early velocity drops predict AI success
In this issue:
CRAFT: The productivity dip that signals your team is doing AI right
CODE: Why you should focus on workflow orchestration instead of task automation
NEWS: MIT finds AI coding revolution still distant, tech companies regret AI-driven layoffs, and Google turns NotebookLM into media destination with publisher content
LEARN: Frameworks leaders need for successful (and human-centered) AI integration
CRAFT NOTES
What we’re thinking about
There’s no question that AI has accelerated the pace of change in the SDLC. But as we race to integrate these powerful new tools, we need to ensure we’re not abandoning the fundamentals of our craft: the very things that have made us effective engineers and leaders.
For example, we all know there’s a learning curve with new technology, yet we expect AI to deliver efficiencies immediately, without a lag in productivity. It’s as if once AI enters the picture, things should only move faster. If AI adoption slows your team down, something must be wrong.
But what if that initial velocity drop isn't a red flag, but rather, a healthy sign your team is taking AI seriously?
In our recent webinar on human-centered AI adoption, we shared data that might reshape how you think about those early velocity drops:
87% of AI projects never make it to production, and
73% of engineering teams report AI adoption anxiety
But here's the crucial insight—the teams that push through those initial productivity dips consistently outperform teams that abandon AI at the first sign of friction.
The difference? They frame the learning period correctly from the start.
Instead of measuring immediate feature velocity, successful teams track what we call "AI confidence metrics"—how quickly developers can spot AI-generated bugs, how effectively they're prompting, whether they're building better mental models of AI capabilities. These leading indicators predict long-term success far better than sprint velocity ever will.
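If you want a concrete way to start tracking these, a lightweight per-sprint snapshot is usually enough. The sketch below (in Python) is purely illustrative: the metric names, scales, and the trend check are assumptions made for this example, not a prescribed framework.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean


@dataclass
class AIConfidenceSnapshot:
    """One sprint's worth of leading indicators (names and scales are illustrative)."""
    sprint_end: date
    bug_detection_minutes: float   # median time to spot a known AI-generated bug
    prompt_acceptance_rate: float  # share of AI suggestions usable without rework, 0-1
    mental_model_score: float      # team's self-rated grasp of AI behavior, 1-5


def confidence_trending_up(snapshots: list[AIConfidenceSnapshot]) -> bool:
    """Rough check: are recent sprints improving on the earlier ones?"""
    if len(snapshots) < 4:
        return False  # not enough history to call a trend
    half = len(snapshots) // 2
    early, recent = snapshots[:half], snapshots[half:]
    faster_detection = mean(s.bug_detection_minutes for s in recent) < mean(
        s.bug_detection_minutes for s in early
    )
    better_prompting = mean(s.prompt_acceptance_rate for s in recent) > mean(
        s.prompt_acceptance_rate for s in early
    )
    return faster_detection and better_prompting
```

However you define the numbers, the point is the same: watch indicators that move during the learning period, not the delivery metrics that only recover after it.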
As our Director of Engineering Pradeepa Dhanasekar recently discovered during our own AI transition, the breakthrough moment came when the team stopped asking "Can we trust this AI code?" and started asking "How can we use AI to explore solutions faster, then apply our expertise to make them production-ready?"
That reframe changed everything.
Remember, you're not just implementing tools—you're building institutional AI capability. And like any capability worth having, it requires patience, practice, and protection from quarterly delivery pressure.
So if your velocity has dipped since introducing AI, don’t take it as a signal that something is wrong. Instead, know that you’re investing in the future speed of your entire organization.
CODE LAB
What we’re discovering
"The real unlock for the future of the SDLC is rethinking workflows from a principled perspective. That means creating conditions where leaders can emerge, own decisions, and execute without friction. In a practical sense, that means orchestrating the use of agents to allow our people to stay focused on what matters most—being the creative leaders we hire them to be. Approaching AI-driven software development through a lens of co-creation means we can design systems where agents and people work together to produce ultimate value."
Why this matters for your team: Your best engineers are too talented to spend 30% of their time on coordination overhead, status updates, and routine decision-making. By strategically deploying AI agents to handle workflow orchestration—not just individual tasks—you create space for your team to focus on the high-impact problem-solving and creative work that actually drives your business forward. The question isn't whether AI can automate parts of your process; it's whether you're designing systems that amplify human creativity.
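To make the distinction between automating tasks and orchestrating workflows concrete, here is a minimal, hypothetical sketch in Python. The agent call is a stub, and the routing rule (does this item need human judgment?) is a deliberately simplified stand-in for whatever policy your team actually adopts.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class WorkItem:
    title: str
    kind: str              # e.g. "status_update", "triage", "design_decision"
    requires_judgment: bool


def agent_handle(item: WorkItem) -> str:
    """Stand-in for an AI agent call (summarize, draft, route); stubbed here."""
    return f"[agent] drafted response for: {item.title}"


def human_queue(item: WorkItem) -> str:
    """Escalate to a person: creative or judgment-heavy work stays with the team."""
    return f"[human] queued for engineer review: {item.title}"


def orchestrate(items: list[WorkItem]) -> list[str]:
    """Route whole workflows, not single tasks: agents absorb coordination
    overhead, people keep ownership of decisions."""
    results = []
    for item in items:
        handler: Callable[[WorkItem], str] = (
            human_queue if item.requires_judgment else agent_handle
        )
        results.append(handler(item))
    return results


if __name__ == "__main__":
    backlog = [
        WorkItem("Weekly status rollup", "status_update", requires_judgment=False),
        WorkItem("Label incoming bug reports", "triage", requires_judgment=False),
        WorkItem("Choose caching strategy for API v2", "design_decision", requires_judgment=True),
    ]
    for line in orchestrate(backlog):
        print(line)
```

The design choice worth copying is the split itself: coordination overhead flows to the agent by default, while anything requiring judgment lands back in a person's queue, so decisions stay with your engineers.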
AI NEWS
What we’re paying attention to
A major new MIT study argues that while AI tools have made "tremendous progress," there's still "a long way to go toward really getting the full promise of automation" because current AI models struggle profoundly with large codebases spanning millions of lines and often "hallucinate" code that looks plausible but doesn't work. The research maps software engineering tasks beyond simple code generation, identifying bottlenecks like poor human-machine communication and AI's tendency to retrieve incorrect code based on similar syntax rather than functionality. The findings suggest the dream of fully autonomous software engineering remains tantalizingly out of reach.
With more than 64,000 people laid off across the tech sector this year, and AI cited as a major factor, one analyst argues that companies are making a fundamental mistake: "AI doesn't invent. It recycles. It's trained on other people's ideas, imitates patterns, and doesn't jump the curve." Meanwhile, a new survey found that 55% of companies that laid off staff due to AI automation now regret the decision, with confidence in AI's ability to replace human workers falling from 54% to 48% year-over-year. The warning: companies juicing short-term profits by cutting creative talent may find themselves playing catch-up when they need genuine innovation.
Google is transforming its AI research assistant NotebookLM into more of a destination by adding featured notebooks from The Economist, The Atlantic, and other publishers, allowing users to explore curated content with AI-powered analysis and citation. The move builds on NotebookLM's recently launched public sharing feature, which has already seen more than 140,000 public notebooks shared since its debut last month. Users can now read original source material while asking questions, exploring topics, and getting cited answers—or listen to pre-generated Audio Overviews that turn documents into podcast-style discussions.
LEARNING OPPORTUNITIES
The human-centered AI roadmap: frameworks CTOs need for successful AI integration
Webinar on-demand: The most successful AI integrations aren't technical projects—they're leadership challenges. Discover the frameworks that help CTOs navigate role evolution, maintain team cohesion, and accelerate AI adoption. Join Pradeepa Dhanasekar, Director of Engineering; Jose Rodriguez, AI Solutions Lead; and Fede Garcia, Senior Software Engineer, as they share real experiments and practical strategies for building teams where AI amplifies human capability.
Get the recording now
SKIP THE ENDLESS EXPLORATION, START BUILDING
Done with endless AI meetings? Ready for practical implementation?