Able Craft & Code
The AI security threat hiding in plain sight
Don’t let complex AI security risks distract you from the everyday ones
In this issue:
CRAFT: The real AI security risks aren't coming from hackers—they're happening in everyday workflows
CODE: How to approach AI strategically when your toolkit is already too complex
NEWS: Amazon's $100M agent bet, Microsoft's AI payoff validation, and why AI tools made experienced developers slower
CRAFT NOTES
What we’re thinking about
AI security risks are evolving every day.
For example, prompt injection currently tops the OWASP Top 10 for LLM Applications. Talented hackers across the globe are, no doubt, delighting in discovering new LLM capabilities that can help them infiltrate systems large and small.
Safeguarding your operations from external threats has to be a priority.
But here's what we're seeing: the biggest AI security risks aren't necessarily coming from external attacks. They're happening in everyday workflows that seem harmless.
A developer debugging an authentication issue pastes a JSON response into ChatGPT without thinking twice—customer PII and all.
An engineer uses Claude to optimize some code and accidentally shares proprietary algorithm details.
Your team treats AI tools like Google Search, when they're actually more like shared whiteboards that never get erased.
You're not alone if this sounds familiar. Research shows that 11% of what employees paste into ChatGPT contains confidential data, and Samsung discovered employees had leaked sensitive semiconductor source code and internal meeting transcripts to ChatGPT in three separate incidents within just 20 days. It's not about malicious intent—it's about cognitive overload in moments when security isn't top of mind.
The tricky part? Traditional security approaches don't work here. You can't just block AI tools—they're already essential to your team's productivity. And you can't rely on policies alone, because risky behaviors happen in split-second decisions during problem-solving.
What works instead:
Create simple rules like "sanitize data before pasting" and "use dummy values for debugging."
Provide teams with approved AI tools that have built-in data protection.
Train people to recognize when they're about to share something sensitive—not through lengthy policies, but through quick, practical examples they'll actually remember.
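Those rules can even be turned into a lightweight guardrail. Below is a sketch of a pre-paste check, assuming a few illustrative patterns (email addresses, AWS-style access keys, private-key headers); a real deployment would lean on a dedicated secret scanner with far broader coverage.

```python
import re

# Hypothetical pre-paste check. The patterns are illustrative only --
# purpose-built secret scanners cover many more credential formats.
CHECKS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def pre_paste_warnings(text: str) -> list[str]:
    """Return the reasons this text shouldn't go into a public LLM."""
    return [label for label, pattern in CHECKS.items() if pattern.search(text)]

snippet = "debug log: user=ada@example.com key=AKIA1234567890ABCDEF"
print(pre_paste_warnings(snippet))
# ['email address', 'AWS access key']
```

Wired into a clipboard hook or an internal AI proxy, a check like this turns the split-second decision into a prompt to pause, which is exactly the moment the training above is aiming at.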
The companies that get AI security right aren't the ones with the most sophisticated policies—they're the ones that make secure practices feel natural. Start with one workflow, fix the obvious gaps, and build from there. Your future self will thank you for solving this now, before it becomes a headline with your company's name on it.
CODE LAB
What we’re discovering
"Great engineers know that it's not about using every tool — it's about using the right one for the job. With LLMs and AI agents, we now have an incredibly powerful and flexible toolkit that can adapt to a wide range of challenges. A lot of this is still new and that’s exactly what makes it exciting: we have a chance to explore uncharted territory and apply our creativity and engineering knowledge to figure out where these technologies can deliver the most impact. We're still in the process of matching tools to problems, but for the first time we have a toolset that's expansive enough to meet the complexity of the problems we solve. Engineers who embrace and master this will not only stand out today, but will be shaping the future of software."
Why this matters for your team: Your engineering teams are already drowning in tool complexity; adding AI shouldn't make that worse. The teams that will pull ahead are those that approach AI strategically, identifying specific high-impact use cases rather than trying to implement every new capability. Start by auditing where your current toolchain creates the most friction, then evaluate whether AI can solve those specific problems better than your existing solutions.
AI NEWS
What we’re paying attention to
AWS launched its AI agent marketplace with over 900 pre-built solutions and a dedicated investment fund, signaling that the industry believes you're moving beyond pilots to actual procurement. The marketplace includes agents from major vendors like Anthropic, Salesforce, and IBM, suggesting AI agents are becoming standardized enterprise tools rather than experimental projects.
Microsoft's $27.2 billion quarterly AI infrastructure investment delivered better-than-expected cloud growth, providing the first major validation that massive AI spending translates to measurable business results. For leaders questioning whether AI investments are worth it, Microsoft's numbers offer concrete evidence that the math can work.
Researchers tested 16 experienced developers working on real issues in their own open-source repositories and found AI tools increased completion time by 19%. The slowdown occurred because developers spent extra time validating AI-generated code and correcting subtle errors that weren't immediately obvious—suggesting that AI's productivity gains may be overstated for complex, real-world tasks requiring deep domain knowledge.
SKIP THE ENDLESS EXPLORATION, START BUILDING
Done with endless AI meetings? Ready for practical implementation?