Yesterday's AI
Week of October 20th, 2025
It’s FOMO either way, so why not just take a cup of coffee and enjoy Yesterday’s News 😃
I’ve organized this issue into multiple sections:
General News - meant for all levels of technical, just to know what’s been cooking
Big Money Deals - cause who doesn’t want to share with their colleague what an amazing investment/acquisition/deal happened
Technical - all news that are related to technical advances in the AI industry
Skeptical - a spoonful of skeptical to keep us sane
Choose your section to focus on, save this for later, share it with your colleagues and let me know if you like it or not! 😃
📰 GENERAL NEWS
IBM Partners with Anthropic to Integrate Claude
IBM announced a strategic partnership to integrate Anthropic’s Claude models into its software portfolio, starting with a new AI-first integrated development environment. Early testing with over 6,000 IBM developers showed productivity gains averaging 45%.
Hot take: IBM is still trying to prove it can surf the AI wave without drowning in its legacy portfolio.
The companies also co-authored an enterprise AI implementation guide focused on the Agent Development Lifecycle, which sounds like corporate buzzword bingo but is probably actually useful. The 45% productivity gains are impressive if real, though I’m always suspicious when companies test their own tools and report glowing numbers.
Honestly, IBM’s history with “strategic AI initiatives” is a graveyard (hello Watson Health). At least this time they’ve got a partner with working models.
EU Launches €1 Billion “Apply AI” Strategy
The European Commission announced its Apply AI Strategy, mobilizing approximately €1 billion from existing programs like Horizon Europe and Digital Europe to accelerate AI adoption across 10 key sectors including healthcare, manufacturing, energy, and defense. The initiative aims to push AI adoption from current 13.5% of European businesses to 75% by 2030.
Hot take: Europe looked at America and China spending $180B and $140B on AI respectively and said “we can do that too!” then allocated... €1 billion. That’s not even a rounding error in the AI arms race. Commission President Ursula von der Leyen emphasized an “AI first” policy approach, which is adorable. Here’s Europe’s strategy: spend 1% of what the US spends, regulate 100x harder, then wonder why all the AI companies are in San Francisco. There are two possible futures here: (1) the AI bubble bursts spectacularly, America’s $180B goes up in smoke, and Europe looks wise for sitting it out with their measured €1B and sensible regulations, or (2) AI actually works, Europe falls a decade behind, and we get another round of “why doesn’t Europe have any tech giants?” think pieces. My money’s on option 2, but Europe’s playing the long game where “we told you so” counts as victory even if you lost the war. The 75% adoption target by 2030 is ambitious when you’re bringing a billion euros to a hundred-billion-dollar fight.
NVIDIA’s “Personal AI Supercomputer” Goes on Sale
NVIDIA launched DGX Spark for purchase starting at $3,999, a compact desktop “AI supercomputer” delivering approximately 1 petaflop of AI performance with the Grace-Blackwell superchip and 128GB of unified memory.
Hot take: NVIDIA is selling you a desktop computer that costs as much as a used car and calling it a “personal AI supercomputer.”
$3,999 is both “consumer-ish” and absurd for actual consumers.
CEO Jensen Huang hand-delivered one of the first units to Elon Musk, because of course he did - can’t have a tech product launch without Elon getting the first one. The specs are genuinely impressive though: a petaflop of compute on your desk is wild. This is either the future of AI development or an expensive paperweight for enthusiasts, depending on whether local AI actually takes off.
Anthropic Ships Claude Haiku 4.5
Haiku 4.5 delivers performance comparable to Sonnet 4 at approximately one-third the cost and more than twice the speed, scoring 73.3% on SWE-bench Verified. The model is priced at $1/$5 per million input/output tokens.
Hot take: Anthropic just made their previous flagship-level performance cheap and fast. What was recently at the frontier is now cheaper and faster - that’s the entire AI race in one sentence. Haiku 4.5 matching Sonnet 4’s coding performance while being 2x faster and 3x cheaper is genuinely impressive. The economics are wild: what cost you $15 last quarter now costs $5. This is great for consumers and terrifying for anyone building a business on AI API margins. The real play here is Sonnet 4.5 can break down complex problems into multi-step plans, then orchestrate a team of multiple Haiku 4.5s to complete subtasks in parallel. That’s the future - one expensive smart model coordinating many cheap fast models. AWS Lambda but make it AI agents. What could go wrong?
💰 BIG MONEY DEALS
Meta Hires Andrew Tulloch for Reported $1.5 Billion
Meta successfully recruited Andrew Tulloch, co-founder of Mira Murati’s AI startup Thinking Machines Lab, with a compensation package that is claimed to reach $1.5 billion over six years including performance bonuses and stock incentives. Meta denied the Wall Street Journal’s reported compensation figure as “inaccurate and ridiculous,” though the company confirmed the hire.
Hot take: Let me get this straight: Meta couldn’t buy the startup, so they just bought one founder for potentially $1.5 BILLION over six years? That’s $250 million per year. For ONE person. Meta’s denial of the WSJ figure as “inaccurate and ridiculous” while refusing to provide the actual number is the corporate equivalent of “I’m not saying what I make but it’s definitely not that much” while driving a Lamborghini. This is the AI talent war reaching peak absurdity. We’ve gone from acqui-hires to just... billion-dollar hires. No company. No team. Just one very expensive human and whatever’s in his brain. Remember when talent wars meant free lunches and stock options? Now it’s billion-dollar golden handcuffs.
Mira Murati must be wondering if she should’ve just sold her co-founder instead of the whole startup. The real story: Meta is so desperate to compete with OpenAI they’re basically paying GDP-of-small-countries money for talent. Or this is yet another atempt to pour oil into the AI-hype.
Salesforce Acquires Apromore for Process Intelligence Play
Salesforce signed a definitive agreement to acquire Apromore, an Australian process intelligence platform, though financial terms weren’t disclosed (read: probably not enough zeros to make headlines). The acquisition aims to enhance Salesforce’s “agentic process automation” capabilities by adding Apromore’s process mining and task mining technology to the Agentforce platform.
Hot take: Salesforce looked at the AI agent hype cycle and decided what it really needs is... process mining software from Australia. To be fair, this actually makes sense: before you can automate a process with AI agents, you need to understand what that process actually is. Apromore maps workflows, finds bottlenecks, and identifies automation opportunities - basically it tells you where your humans are wasting time so AI can waste it more efficiently. Founded in 2014, Apromore has been doing process intelligence since before it was cool to slap “AI” on everything. They’ve got customers like T-Mobile and Vodafone, so this isn’t a acqui-hire, it’s an actual product with actual revenue (shocking!). Salesforce CEO Marc Benioff presumably sees this as the missing piece for Agentforce to actually do something useful in enterprises beyond answering support tickets. The real question: how long until Salesforce rebrands it as “Agentforce Process Intelligence Powered by Einstein” and charges 3x for it? My money’s on Q2 2026.
🔬 TECHNICAL
Google Unveils CodeMender Security Agent
Google DeepMind announced CodeMender, an AI agent designed to automatically find, fix, and prevent security vulnerabilities across large codebases. The system can migrate APIs, add bounds safety annotations, and preserve behavior by judging functional equivalence. Unlike traditional vulnerability scanners, CodeMender produces validated patches that go through human review before implementation.
Hot take: Google built an AI that automatically fixes security bugs in your code. This is either the future of secure software or the beginning of an AI that writes patches for vulnerabilities created by other AIs, which will then need patches from a meta-AI, and so on until we achieve peak recursion. Examples include resolving heap overflow reports that concealed deeper lifetime bugs, which is genuinely impressive - most static analysis tools would’ve called it a day after finding the surface issue. The “human review before implementation” part is key. Nobody wants their production code automatically patched by an AI at 3 AM on a Saturday. This is Google’s play to own the AI-powered DevSecOps market, which is smart because security is one of those things enterprises will actually pay for. Unlike, you know, search quality.
DeepMind + Commonwealth Fusion Systems for Tokamak Control
AI for tokamak control in SPARC fusion project.
If you understood more than half of this sentence, carry on reading. If not, just know: Google is teaching AI to wrangle angry plasma donuts.
Hot take: DeepMind is using AI to control plasma in fusion reactors, which is the most “we live in the future” sentence possible. Fusion energy has been “20 years away” for the last 60 years, but maybe AI is the thing that finally makes it work? Or maybe AI will just be really good at managing the plasma while fusion remains 20 years away forever. Either way, this is the kind of AI application that actually matters - not generating marketing copy, but controlling superheated plasma to potentially solve humanity’s energy crisis. Respect to DeepMind for working on hard problems that could actually change the world. Though I’m sure someone will still find a way to use this research to generate better cat pictures.
🛡️ SKEPTICAL
Bitdefender Report: 58% of Security Pros Told to Hide Breaches
Bitdefender’s 2025 Cybersecurity Assessment Report revealed that 58% of security professionals were instructed to keep breaches confidential even when they believed disclosure was necessary—a 38% jump since 2023. The report also found that 84% of attacks exploit existing tools and that there’s a growing disconnect between executives prioritizing AI adoption and frontline managers prioritizing cloud security and identity management.
Hot take: Let me get this straight: 58% of security professionals are being told to hide breaches, up from 20% two years ago. This isn’t a trend, it’s a crisis. Companies are so terrified of disclosure that they’re literally telling their security teams to shut up and cover it up. The kicker? Executives prioritize AI adoption while frontline managers prioritize actual security - classic disconnect between C-suite “innovation” and people dealing with actual threats. Everyone’s racing to deploy AI while their systems are actively compromised and management is telling security teams to keep quiet about it. This is fine. Everything is fine. The house is on fire but at least we have an AI chatbot! This report is a damning indictment of corporate security culture, and the fact that it’ll get less attention than “new AI model drops!” tells you everything about our priorities.
LLM Poisoning: Just 250 Docs Enough
Anthropic, UK AI Security Institute, and Turing Institute research shows approximately 250 poisoned documents can implant backdoors in large language models.
Hot take: It takes just 250 poisoned documents to backdoor an LLM. TWO HUNDRED AND FIFTY. That’s not a sophisticated nation-state attack, that’s “motivated teenager with time” territory. Every company training on web data, user uploads, or really anything from the internet should be absolutely terrified right now. The entire premise of “we’ll just train on all the data” is collapsing because it turns out all the data includes a tiny percentage of poisoned data that can compromise the entire model. And how do you even detect this? Manual review of billions of documents? AI to review AI training data? It’s poisoned turtles all the way down. This research is Anthropic essentially saying “by the way, this thing we’re all building can be trivially compromised” and everyone’s going to ignore it and keep training anyway because what’s the alternative, not building AI?
Ongoing: Agentic/RAG Fragility
New “Phantom” paper shows RAG + agents still vulnerable to adversarial data.
Hot take: Retrieval-augmented generation and AI agents, the two hottest things in AI, are fundamentally fragile against adversarial inputs. Shocking! Who could have predicted that systems that trust whatever data they retrieve might be vulnerable to poisoned data? (Everyone. Everyone predicted this.) The “Phantom” paper is just the latest in a long line of research showing that RAG systems will confidently retrieve and use malicious data if it’s ranked highly enough. This is the AI version of “Google bombing” except now it’s “RAG bombing” and the stakes are way higher because enterprises are using this stuff for critical decisions. Every company deploying RAG: “it’s fine, we trust our data sources!” Narrator: Their data sources were not fine.
That was this week in AI: where IBM discovered enterprise partnerships, Europe’s €1B can’t buy love (or compete with $180B), NVIDIA put a supercomputer on every desk (for only $4k!), Meta paid $1.5B for one human brain (maybe?) and it turns out AI security is just regular security with extra steps and everyone’s bad at both.
See you next week, assuming the AI agents haven’t poisoned each other’s training data into mutual incomprehensibility. YAI 👋
Disclaimer: I use AI to help aggregate and process the news. I do my best to cross-check facts and sources, but misinformation may still slip through. Always do your own research and apply critical thinking (with anything you consume these days).


