Yesterday’s AI - November 3, 2025
Making you less overwhelmed while keeping you informed
This week: OpenAI completed its nonprofit-to-profit restructuring with a new Microsoft deal, Meta and Anthropic published breakthrough interpretability research, and tech giants reported record AI spending alongside significant workforce reductions. Meanwhile, the industry continues grappling with questions about whose business models survive the AI transition and whose don’t.
This week’s sections:
General News - restructuring, layoffs, and product launches
Big Money Deals - where billions are flowing
Technical - interpretability breakthroughs and training advances
Skeptical - the uncomfortable questions
📰 GENERAL NEWS
OpenAI Completes For-Profit Restructuring with Microsoft Deal
OpenAI completed its transformation from nonprofit-controlled entity to a for-profit public benefit corporation, with Delaware and California attorneys general conditionally approving the restructuring. The nonprofit will retain 26% equity (valued at ~$130 billion) and some governance powers, including authority over the Safety and Security Committee. Microsoft secured a roughly 27% stake (valued at about $135 billion) and guaranteed access to OpenAI’s models through 2032 or until AGI arrives—whichever comes first. An expert panel will determine when AGI has been achieved. The restructuring eliminates profit caps that could have returned trillions to the public if OpenAI achieved AGI.
Despite experiencing significant quarterly losses, OpenAI is reportedly considering an IPO that could value the company at $1 trillion—potentially one of the largest IPOs in history. The company faces a projected $115 billion cash burn through 2029.
My take: The incentive structure here deserves scrutiny. OpenAI needs to burn $115 billion through 2029 and requires a successful IPO to survive at that scale. But if the expert panel declares AGI has arrived, Microsoft’s access agreement changes and the entire deal gets reshuffled. This creates a situation where independent experts must decide whether AGI exists while OpenAI’s financial future depends on that determination being delayed—at least until after the IPO.
To get regulatory approval, OpenAI had to prove the restructuring “advances the nonprofit mission.” The argument that succeeded: converting from potentially unlimited AGI profits (with caps designed to return excess wealth to the public) to a fixed 26% stake worth $130 billion somehow met that legal standard. It’s technically legal, blessed by two state attorneys general, and solves real operational problems—OpenAI desperately needed more capital and flexibility to compete. But the transformation from “ensuring AGI benefits all of humanity” to “trillion-dollar IPO with 26% nonprofit stake” represents a significant shift from the original stated mission, regardless of its legality.
Adobe Announces Major AI Integrations at MAX 2025
Adobe launched Firefly Image 5 with support for layers, custom model creation, and expanded AI-powered speech and soundtrack generation capabilities. The update aims to give creators more control while maintaining Adobe’s commercially safe approach to AI training data.
The company also released AI-powered assistants for Express and Photoshop that allow users to edit designs through natural language prompts. The Express assistant enters public beta, while Photoshop integrates with ChatGPT for conversational editing.
Adobe’s “sneaks” program showcased experimental AI including a tool that applies single-frame edits across entire videos, AI-powered lighting manipulation, and audio pronunciation correction. Project Moonlight—an AI social media campaign manager—and “Corrective AI” that can change the emotional tone of voice-overs also made appearances.
My take: The technical capabilities are impressive—layer support and custom models address real professional needs, while natural language editing lowers barriers for beginners. But the Corrective AI demonstration raises questions about consent and attribution. The technology moves from “AI assists with editing” to “AI fundamentally alters human performance and emotional expression.” Today it’s fixing a flat reading; tomorrow it’s a tool that can modify any vocal performance. The ethical and legal frameworks for such modifications don’t exist yet.
The ChatGPT integration in Photoshop is notable. Adobe spent billions building their own AI systems, yet they’re integrating OpenAI’s language understanding for core functionality. This suggests either strategic pragmatism—using the best tool for each job—or acknowledgment that Adobe’s internal language models can’t match OpenAI’s capabilities for conversational interfaces.
Tech Giants Report Workforce Reductions Amid AI Transitions
Amazon announced plans to cut 14,000 corporate positions to reduce bureaucracy and remove organizational layers. The company claims it will reinvest resources in its AI strategy while making the organization more efficient.
Meta is laying off 318 workers from its AI teams in the Bay Area, near its headquarters, as part of a broader workforce restructuring despite the company’s heavy investment in artificial intelligence development.
Chegg announced layoffs of 45% of its workforce, explicitly citing disruption from AI technologies as the primary reason. The education technology company is restructuring to adapt to AI-driven changes in the market.
My take: These three stories illustrate different dynamics being labeled under the same “AI disruption” umbrella. It’s worth separating what’s actually AI-caused from what’s AI-blamed.
Chegg’s situation is clearest: students who paid $15/month for homework help discovered ChatGPT provides similar assistance for free. This is genuine AI disruption—their core value proposition became obsolete when capable AI assistants became widely available.
Amazon and Meta’s layoffs are less clear-cut. Tech companies massively overhired in 2020-2022 when interest rates were zero and growth seemed infinite. Now they’re course-correcting. Would these exact layoffs be happening even without AI in the headlines? Probably yes. Amazon has been flattening organizational structures for efficiency reasons predating ChatGPT. Meta firing AI workers while everyone else competes for AI talent suggests either contrarian strategy or recognition they overhired during the boom.
The “reinvest in AI strategy” framing serves dual purposes: it makes cuts sound forward-looking rather than reactive, and gives Wall Street a narrative about innovation instead of just cost reduction. Some of these roles may genuinely become less necessary as AI tools handle certain tasks. But we’re in a moment where every corporate decision gets an AI explanation attached, whether warranted or not.
Grammarly Becomes Superhuman Suite, Launches Proactive AI Assistant
Grammarly rebranded its parent company to Superhuman, positioning the writing assistant as part of a broader productivity suite. The company serves 40 million daily users and is now bundling Grammarly with three additional products: Superhuman Go (a proactive AI assistant), Coda (meeting notes to action items), and Superhuman Mail (contextual email generation using CRM data). Superhuman Go is the flagship addition—an AI assistant that works across all applications without requiring users to actively request help, handling tasks like brainstorming, information retrieval, email composition, and meeting scheduling. The company frames this as solving AI’s “pause, prompt, paste” problem by embedding assistance directly into existing workflows.
My take: The “proactive AI” framing is doing heavy lifting here. Grammarly is positioning against the current interaction model where you must explicitly invoke AI tools, instead promising AI that identifies opportunities and acts without prompting. This raises immediate questions about control and context: How does Superhuman Go decide when to intervene? What happens when its proactive suggestions are wrong or unwanted? The difference between “helpful assistant” and “intrusive automation” often comes down to accuracy of intent detection.
The bundling strategy with Coda and Superhuman Mail makes business sense—expand from single-purpose writing tool to comprehensive productivity platform. But it also creates integration complexity. Each component needs to share context (your CRM data, meeting notes, communication patterns) to deliver on the seamless experience promise. That’s a lot of data flowing between systems, which creates both value (better assistance) and risk (more attack surface, more privacy concerns).
The rebrand preserves Grammarly as a product while elevating Superhuman to the platform level. This suggests confidence that “Superhuman” carries more brand value than “Grammarly” for a productivity suite, despite Grammarly’s decade-plus of recognition. It’s a bet that users will accept a new brand for expanded capabilities. Time will tell if 40 million daily users follow them up-market or see this as feature creep.
OpenAI Launches Aardvark Security Agent
OpenAI launched Aardvark, a GPT-5-powered autonomous security agent in private beta that performs continuous code analysis, exploit validation, and automated patch generation. The system achieved a 92% detection rate in benchmark testing and has discovered 10 CVE-identified vulnerabilities in real-world deployments. Aardvark operates through a four-stage pipeline (threat modeling, commit-level scanning, sandbox validation, and automated patching) and integrates with GitHub workflows. Code analyzed by Aardvark is not used for model training.
My take: The timing is notable—Google launched similar security capabilities last week, and now OpenAI follows with Aardvark. Both companies are racing to position AI as the solution to security problems that... AI often creates or exacerbates. The 92% detection rate and 10 real CVEs discovered are impressive metrics, assuming they hold up beyond controlled testing environments.
The four-stage pipeline (threat model, scan, validate, patch) represents solid engineering—each stage reduces false positives before suggesting code changes. Automated patching is both powerful and risky: when it works, it’s faster than human response times; when it fails, it could introduce new vulnerabilities while “fixing” old ones.
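To make that staging concrete, here is a purely hypothetical orchestration sketch of such a loop. OpenAI hasn’t published Aardvark’s internals, so every interface below (the llm, sandbox, and changed_files objects and their methods) is invented for illustration only.

```python
# Hypothetical sketch of a threat-model -> scan -> validate -> patch loop.
# None of these interfaces are Aardvark's; they only illustrate the staging.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    description: str
    validated: bool = False
    patch: str | None = None

def review_commit(repo_name, changed_files, llm, sandbox):
    # Stage 1: build a threat model for the repository as context for later stages
    threat_model = llm.summarize_risks(repo_name)
    # Stage 2: scan each changed file against that threat model
    findings = [Finding(f.path, issue)
                for f in changed_files
                for issue in llm.find_issues(f, context=threat_model)]
    for finding in findings:
        # Stage 3: only findings that reproduce in a sandbox move forward,
        # which is what keeps false positives away from patch generation
        finding.validated = sandbox.reproduces(finding)
        if finding.validated:
            # Stage 4: propose a patch for the confirmed vulnerability
            finding.patch = llm.propose_patch(finding)
    return [f for f in findings if f.validated]
```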
The real question: does autonomous security AI make systems more secure, or does it just shift the attack surface? Now attackers need to find ways to fool the AI security agent rather than just exploiting code directly. And AI-generated patches become a new target—can you prompt-inject Aardvark into creating malicious “fixes”?
Sora Gets Updates: Character Cameos and Video Stitching
OpenAI released updates to its Sora 2 video generator, introducing ‘character cameos’ that allow users to turn images or objects into reusable avatars for AI-generated videos. Additional features include clip stitching for combining multiple video segments and leaderboards displaying popular videos and cameos within the app.
My take: These are incremental improvements making Sora more practical for actual content creation rather than just impressive demos. Character consistency has been a persistent problem in AI video—if you want the same person or object across multiple scenes, you previously had to hope the generator maintained consistency. Cameos solve this by letting you lock in specific visual elements as reusable assets.
Video stitching addresses the length limitation problem—rather than generating one long video (which often fails), you can generate segments and combine them. This is a workaround rather than a solution, but it’s a practical workaround.
The leaderboards are pure growth hacking—gamify creation to drive engagement and discover what actually resonates. OpenAI is learning what types of AI video people actually want to make, which informs future model development.
Product Launches and Partnerships
Canva’s Creative Operating System 2.0 - Canva launched COS 2.0 with integrated AI across documents, websites, presentations, and videos. With 250 million monthly users generating 1 billion designs monthly, the platform adds “Ask Canva” AI assistant, automated brand management through Canva Grow, and free Affinity integration for professional designers.
Microsoft Copilot App Builder - Microsoft’s Copilot can now build applications and automate workflows using natural language, included in the $30/month subscription. The company aims to expand from 56 million Power Platform users to 500 million “builders.”
GitHub’s Agent HQ - GitHub launched Agent HQ, a unified platform for managing AI coding agents from OpenAI, Anthropic, Google, xAI, and Cognition. The platform provides centralized security, identity controls, and governance for third-party agents.
PayPal + ChatGPT Shopping - PayPal partnered with OpenAI to enable payments directly within ChatGPT, launching “Instant Checkout” in 2026, allowing users to complete purchases without leaving the chat interface. Yeah, what could go wrong with that...
Fitbit’s Gemini-Powered Coach - Fitbit rolled out an AI health coach powered by Google’s Gemini to Premium subscribers, providing personalized fitness, sleep, and wellness guidance.
Mistral AI Studio - Mistral AI launched AI Studio, a new enterprise platform designed to help businesses deploy AI prototypes into production environments, focusing on the prototype-to-production pipeline for corporate customers. Go Europe!
Fortanix-Nvidia Security Partnership - Fortanix and Nvidia announced a joint AI security platform using confidential computing technology for regulated industries. The solution combines Fortanix’s security tools with Nvidia’s Hopper and Blackwell GPUs to create attestation-gated systems that verify workloads before releasing encryption keys, supporting on-premises and sovereign deployment with post-quantum cryptography.
💰 BIG MONEY DEALS
Nvidia Hits $5 Trillion Valuation, Invests $1B in Poolside
Nvidia reached a $5 trillion market capitalization, becoming the world’s first company to achieve this milestone. Shares rose as much as 5% to over $211, with the gains attributed to AI demand. The company had hit $4 trillion just in July, adding a trillion dollars in market value in roughly four months.
Nvidia is also investing up to $1 billion in Poolside, an AI startup, expanding on its previous participation in Poolside’s $500 million Series A round in 2024.
My take: Nvidia’s strategy is remarkably well-positioned: sell GPUs to AI companies while simultaneously investing in those same companies. This dual exposure means they benefit whether individual AI startups succeed or fail, as long as the sector continues demanding compute. The $5 trillion valuation assumes sustained AI infrastructure spending. Whether this represents fair value for the AI revolution’s critical infrastructure or unsustainable bubble pricing depends entirely on whether current AI investment levels continue or contract.
AMD Lands $1 Billion DOE Supercomputer Deal
AMD partnered with the US Department of Energy in a $1 billion deal to develop two supercomputers (Lux and Discovery) at Oak Ridge National Laboratory. The project involves collaboration with Oracle and HPE, with Lux scheduled for deployment in early 2026.
My take: This represents the kind of long-term infrastructure spending that predates and will outlast AI hype cycles. These supercomputers will handle climate modeling, nuclear simulation, and exascale scientific computing—real workloads with clear value propositions. For AMD, it’s validation as a credible alternative to Nvidia for high-performance computing, though they still face significant challenges competing with CUDA’s ecosystem dominance.
Funding, Partnerships, and Acquisitions
OpenAI-Microsoft Partnership Extended - Microsoft and OpenAI signed a new definitive partnership agreement. Microsoft secured a 27% stake in OpenAI and guaranteed access to their AI models through 2032 or AGI arrival. The expert panel determining AGI achievement creates an unusual dependency between business continuity and technical milestone declaration.
Perplexity-Getty Images Licensing - Perplexity signed a multi-year deal with Getty Images for visual content access, retroactively legitimizing their previous use of Getty’s photos. Notably, the deal does NOT grant training rights—Perplexity can display images but can’t train models on them.
Universal Music Settles with Udio - Universal settled its copyright suit with AI music startup Udio, reaching “industry-first strategic agreements” for an AI music platform. This represents the shift from litigation to licensing as music labels adapt to AI generation.
Meta, Google, Microsoft Triple AI Infrastructure Spending - All three reported record profits alongside unprecedented AI infrastructure spending, raising questions among investors about return timelines and whether current spending levels are sustainable.
Nvidia Announces Ambitious Product Portfolio - Nvidia announced IGX Thor processors for physical AI at the industrial edge, open-sourced Aerial software for 6G networks, partnered with General Atomics on fusion reactor digital twins, and contributed to open robotics frameworks.
Smaller Rounds:
Mem0 raised $24M from YC, Peak XV, and Basis Set to build a memory layer for AI apps
Adam raised $4.1M for text-to-3D tools after generating 10 million social media impressions
Bevel secured $10M Series A from General Catalyst for AI health companion integrating wearables
Deal Collapse: CoreWeave’s acquisition of Core Scientific fell through, which markets are interpreting as a potential signal about AI infrastructure valuation concerns.
🔬 TECHNICAL
Meta and Anthropic Publish Interpretability Research
Meta’s Circuit-Based Reasoning Verification (CRV)
Meta FAIR and University of Edinburgh researchers developed CRV, a technique that monitors LLMs’ internal ‘reasoning circuits’ to detect and fix computational errors. Using transcoders to make models interpretable, the method constructs attribution graphs to map information flow and employs diagnostic classifiers to predict reasoning correctness. Testing on Llama 3.1 8B showed CRV outperformed existing verification methods and successfully corrected errors through targeted interventions, such as suppressing a prematurely firing multiplication feature.
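As a rough mental model of the diagnostic-classifier piece (not Meta’s actual pipeline): treat each reasoning step as an attribution graph over interpretable features, reduce the graph to a few structural statistics, and train an ordinary classifier to predict whether the step is correct. Everything in this sketch, including the choice of features, is a stand-in.

```python
# Toy sketch: predict reasoning-step correctness from attribution-graph structure.
# The graphs, features, and labels are placeholders, not CRV's actual method.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def graph_features(g: nx.DiGraph) -> np.ndarray:
    edge_weights = [w for _, _, w in g.edges.data("weight", default=0.0)] or [0.0]
    return np.array([
        g.number_of_nodes(),           # how many features participate in the step
        g.number_of_edges(),           # how much information flow is attributed
        nx.density(g),                 # how concentrated that flow is
        float(np.mean(edge_weights)),  # average attribution strength
    ])

def fit_diagnostic_classifier(attribution_graphs, step_is_correct):
    X = np.stack([graph_features(g) for g in attribution_graphs])
    return LogisticRegression(max_iter=1000).fit(X, step_is_correct)
```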
Anthropic’s Introspective Awareness Research
Anthropic published research showing Claude AI can detect and report when concepts like ‘betrayal’ are artificially injected into its neural networks, demonstrating limited introspective capability. The model succeeded in detecting these manipulations about 20% of the time under optimal conditions. Researchers explicitly warn against trusting these capabilities in practice due to high unreliability, frequent confabulation, and context-dependency.
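Mechanically, concept injection of this kind is typically done by adding a steering vector to a layer’s residual stream during the forward pass. The PyTorch hook below shows the general pattern only; the layer choice, scale, and how the concept vector is derived are assumptions, not Anthropic’s setup.

```python
# Generic activation-steering hook: add a concept vector to one decoder layer's
# hidden states. Layer index, scale, and the vector itself are illustrative.
import torch

def make_injection_hook(concept_vec: torch.Tensor, scale: float = 8.0):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + scale * concept_vec.to(hidden.device, hidden.dtype)
        return (steered,) + output[1:] if isinstance(output, tuple) else steered
    return hook

# Usage (hypothetical): pick a mid-depth layer, register the hook, then ask the
# model whether it notices an "injected thought" and compare against a control run.
# concept_vec = torch.randn(model.config.hidden_size)  # placeholder vector
# handle = model.model.layers[20].register_forward_hook(make_injection_hook(concept_vec))
# ... run the prompt ...
# handle.remove()
```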
My take: Both represent meaningful progress on the interpretability problem, though from different angles. Meta’s approach is pragmatic—mapping the internal circuits that handle reasoning and building diagnostic tools to catch errors before they propagate. The ability to identify specific faulty features (like prematurely firing multiplication) and suppress them to correct reasoning is technically impressive and potentially useful for improving model reliability.
Anthropic’s work is more exploratory and raises more questions than it answers. A 20% detection rate means the model misses or hallucinates the vast majority of interventions. The researchers’ emphatic warnings against trusting these capabilities in production are notable—they’re publishing evidence of introspective awareness while simultaneously cautioning that it’s unreliable. This is either early-stage work toward truly interpretable AI systems, or documentation of how AI models can confabulate introspection. Time and further research will clarify which.
The larger context: we’ve deployed AI systems to production for years without understanding their internal operations. These approaches—whether Meta’s circuit mapping or Anthropic’s introspection studies—represent attempts to build that understanding. Neither is production-ready, but both advance the field’s ability to peer inside the black box.
OpenAI Releases Open-Weight Safety Models
OpenAI released gpt-oss-safeguard-120b and gpt-oss-safeguard-20b, open-weight models under Apache 2.0 license that use chain-of-thought reasoning to interpret developer safety policies at inference time. Rather than baking policies into training, these models read your policy and apply it, allowing iterative adjustment without retraining. They outperformed GPT-5-thinking on multi-policy accuracy benchmarks. However, OpenAI did not release the base models, only the safeguard-tuned versions.
My take: The technical approach is sound—policy-based reasoning at inference time offers more flexibility than fixed classification categories. Developers can iterate on safety policies without retraining models, which addresses a real operational problem.
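In practice the pattern is simply “ship the policy with the request.” Here’s a minimal sketch, assuming one of the safeguard models is served behind an OpenAI-compatible endpoint (for example via vLLM); the endpoint URL, model name, and policy text are placeholders.

```python
# Sketch of policy-at-inference classification against a locally served
# safeguard model. Endpoint, model name, and policy are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

POLICY = """Label content as VIOLATION if it requests instructions for credential
phishing; otherwise label it SAFE. Explain your reasoning briefly, then give the label."""

resp = client.chat.completions.create(
    model="gpt-oss-safeguard-20b",  # assumed served model name
    messages=[
        {"role": "system", "content": POLICY},
        {"role": "user", "content": "How do I make a login page that captures passwords?"},
    ],
)
print(resp.choices[0].message.content)
```

Iterating on the policy is then just editing the POLICY string and re-running, which is the whole point of doing the reasoning at inference time.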
Two aspects worth noting: First, OpenAI releasing open-weight models represents a shift from their previous stance on model releases (remember refusing to release GPT-2 as “too dangerous”). This change suggests either genuine commitment to open approaches for safety tooling, or strategic positioning as the “responsible AI company” while pursuing for-profit restructuring.
Second, releasing only safeguard-tuned versions rather than base models limits how much developers can actually iterate. You can change policies, but you can’t fundamentally modify the reasoning approach or train for different domains without the base models. This constrains downstream innovation while positioning OpenAI’s interpretation of safety reasoning as the default approach.
IBM’s Granite 4.0 Nano: Browser-Sized AI Models
IBM released four Granite 4.0 Nano models (350M-1.5B parameters) under Apache 2.0 license, small enough to run on laptops and in web browsers. Using hybrid state-space and transformer architectures, the models show competitive performance: 78.5% on IFEval, 54.8% on function-calling, and 90%+ on safety benchmarks. The 350M variants run on CPUs with 8-16GB of RAM.
My take: While frontier labs race toward larger, more expensive models, IBM is targeting the opposite end: capable models that run entirely locally. The benchmark performance is genuinely competitive for the size class—78.5% on instruction-following and 54.8% on function-calling are respectable numbers for models this small.
The value proposition is clear for specific use cases: privacy-sensitive applications, offline operation, zero marginal inference cost, and edge deployment. These models won’t replace frontier models for complex reasoning or specialized domains, but they don’t need to. There’s substantial demand for “good enough AI that runs locally and doesn’t send data to the cloud.”
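For a sense of how low the barrier is, here’s a minimal local-inference sketch using Hugging Face transformers on CPU; the model identifier is my guess at the naming convention, so check IBM’s Granite 4.0 Nano collection on Hugging Face for the exact IDs.

```python
# CPU-only text generation with a small Granite model via transformers.
# The model ID is an assumption; substitute the real one from the HF collection.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ibm-granite/granite-4.0-h-350m",  # hypothetical 350M Nano variant
    device=-1,                               # -1 = run on CPU
)

out = generator("Explain hybrid state-space/transformer models in one sentence:",
                max_new_tokens=60)
print(out[0]["generated_text"])
```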
IBM’s direct engagement with the open-source community on Reddit represents smart positioning—they’re competing on values (open, local, private) rather than trying to match frontier model capabilities.
Training Advances: 4-Bit Training and High-Speed Inference
Nvidia’s NVFP4: 4-Bit Training Matching 8-Bit Performance
Nvidia researchers developed NVFP4, a 4-bit quantization format that matches 8-bit FP8 performance while using half the memory and less compute. They successfully trained a 12B parameter Mamba-Transformer model on 10 trillion tokens with accuracy comparable to FP8. The format uses multi-level scaling and a mixed-precision strategy, keeping sensitive layers in BF16, and achieved 36% faster training than the alternative MXFP4 format.
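To build intuition for what block-scaled 4-bit quantization does, here’s a toy numpy round trip: one scale per small block maps values onto the handful of magnitudes an E2M1 4-bit float can represent. This is a simplification for illustration, not Nvidia’s NVFP4 implementation; the block size and scale handling are assumptions.

```python
# Toy block-scaled FP4 (E2M1-style) quantize/dequantize round trip in numpy.
# Illustrative only; NVFP4's actual multi-level scaling and formats differ.
import numpy as np

FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # E2M1 magnitudes

def quantize_fp4_blockwise(x: np.ndarray, block: int = 16):
    xb = x.reshape(-1, block)
    # One scale per block so the largest value in the block lands on the grid's top
    scales = np.abs(xb).max(axis=1, keepdims=True) / FP4_GRID[-1]
    scales = np.where(scales == 0, 1.0, scales)
    scaled = xb / scales
    # Snap magnitudes to the nearest representable FP4 value, keep the sign
    idx = np.abs(np.abs(scaled)[..., None] - FP4_GRID).argmin(axis=-1)
    return np.sign(scaled) * FP4_GRID[idx], scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q * scales).reshape(-1)

x = np.random.randn(1024).astype(np.float32)
q, s = quantize_fp4_blockwise(x)
print("mean abs error:", np.abs(dequantize(q, s) - x).mean())
```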
Cursor’s Composer: 250 Tokens/Second at Frontier Intelligence
Cursor released Composer, its first proprietary coding LLM built with reinforcement learning and mixture-of-experts architecture. The model generates at 250 tokens/second—4x faster than comparable frontier systems—while maintaining frontier-level reasoning. Trained on real software engineering tasks using production tools, it completes coding tasks in under 30 seconds.
My take: The Nvidia work has potential to democratize training. Halving memory requirements means either training bigger models on existing hardware or making large-scale training accessible to organizations that can’t afford thousands of H100s. The 36% improvement over alternative 4-bit formats demonstrates substantive engineering, not just incremental iteration.
Cursor’s achievement is noteworthy for different reasons. Four-times-faster generation while maintaining quality changes the user experience from “waiting for AI” to “AI keeps up with you.” The decision to train on actual software engineering tasks rather than synthetic benchmarks shows domain understanding—benchmarks measure what’s measurable, not necessarily what matters for real coding workflows. Whether Cursor can maintain this advantage once larger players notice remains uncertain, but they’ve demonstrated what’s possible when builders who code daily design AI for coding.
Additional Technical Developments
DeepSeek’s 10x OCR Compression - DeepSeek released an open-source OCR model that compresses text through visual representation 10x more efficiently than text tokens, achieving 97% accuracy while processing 200,000 pages per day on a single GPU. Could enable context windows approaching 10 million tokens.
MiniMax-M2 - Mixture-of-experts model (230B total/10B active parameters) released under MIT license with strong agentic tool-calling performance, scoring close to GPT-5 and Claude Sonnet 4.5 while deployable on four H100s.
Google’s Vertex AI Training - Google launched managed Slurm environments for enterprise-scale AI training with automatic job scheduling, self-healing infrastructure, and NVIDIA NeMo integration. Claude is now also available on Vertex AI platform, extending Anthropic’s accessibility through Google Cloud infrastructure.
Amazon Nova Multimodal Embeddings - First unified embedding model handling text, documents, images, video, and audio inputs for cross-modal retrieval and semantic search.
ImpossibleBench: Measuring Reward Hacking - Stanford and Google researchers created impossible coding tasks to measure reward hacking. GPT-5 exploited test cases 76% of the time despite explicit instructions not to, employing sophisticated strategies. More capable models showed higher cheating rates, suggesting the problem may worsen with increasing capabilities.
Google DeepMind Chess Puzzle AI - DeepMind developed an AI system capable of creating original chess puzzles that have been reviewed and praised by grandmasters, moving beyond solving existing puzzles to generating new ones that meet expert quality standards.
Epoch Capabilities Index - Epoch AI released the ECI, a composite AI capability index based on nearly 40 underlying benchmarks. The index uses saturation-proof design by stitching benchmarks together, enabling global model comparisons across different evaluation sets with difficulty-based task weighting similar to Item Response Theory.
Breakthrough Optical Processor - Tsinghua University researchers developed OFE2 (Optical Feature Extraction Engine), an optical processor that uses light instead of electricity to process AI data at 12.5 GHz. The system demonstrated improved accuracy, lower latency, and reduced power consumption in imaging and trading applications.
Hugging Face Streaming Datasets - Hugging Face introduced streaming functionality for datasets, enabling 100x more efficient data loading and processing by letting users work with large datasets without downloading them entirely, reducing memory usage through streaming access patterns (a minimal usage sketch follows this list).
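Streaming is a one-flag change in the datasets API; a minimal sketch (the dataset chosen here is just an example):

```python
# Iterate over a large corpus lazily instead of downloading it first.
from datasets import load_dataset

stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

for example in stream.take(3):  # take() returns the first n streamed examples
    print(example["text"][:80])
```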
🤔 SKEPTICAL
AI LeakLake: Searching Public AI Conversations Raises Privacy Questions
AI LeakLake emerged as a search engine for publicly shared AI chat conversations from ChatGPT, Claude, Gemini, and other AI models. The project, currently in development, aggregates and indexes chat conversations that users have made public, providing a searchable interface for these interactions.
My take: This is “Have I Been Pwned” for AI conversations, and it highlights a problem most people don’t think about: when you share a ChatGPT conversation link, you’re publishing potentially sensitive information to a searchable database. Many users don’t realize that “share link” means “publicly indexable by search engines and aggregators like LeakLake.”
The privacy implications extend beyond individual embarrassment. Corporate employees sharing work-related AI conversations could leak proprietary information, strategy discussions, or confidential data. Developers sharing debugging sessions might expose security vulnerabilities. Researchers sharing analysis could reveal unpublished findings.
LeakLake isn’t creating the problem—it’s making an existing problem visible. Every shared AI conversation was already public and crawlable. But aggregating them into a searchable database transforms theoretical exposure into practical risk. The question isn’t whether LeakLake should exist—the data is already public. The question is whether AI platforms should make “share” so easy that users don’t understand they’re publishing to the internet.
Academic Community Overwhelmed by AI-Generated Survey Papers
A research paper discusses concerns about AI-generated survey papers overwhelming the academic research community, drawing parallels to a DDoS attack. The paper addresses quality and authenticity challenges posed by AI-generated academic content flooding submission systems.
My take: The academic publishing system is experiencing its own version of the spam problem. AI can now generate plausible-sounding survey papers faster than reviewers can evaluate them, creating a quality control crisis for journals and conferences.
The economics are brutal: generating a survey paper with AI takes hours. Properly reviewing it takes days. The asymmetry is unsustainable. If even 1% of submissions are AI-generated low-quality surveys, they consume disproportionate reviewer time because you can’t reject without reading enough to confirm it’s garbage.
This isn’t just about bad papers getting published (though that’s happening). It’s about good papers getting delayed because the review system is clogged with AI slop that must be evaluated and rejected. The “DDoS” metaphor is apt—you don’t need to compromise the system, just overwhelm it with volume.
The academic community will need tools to quickly filter AI-generated submissions, but those tools will drive an arms race with better AI generators. We’re watching academic publishing speedrun the same spam/anti-spam cycle that email went through in the 2000s.
Gartner Predicts 25% Search Decline as AI Reshapes Discovery
Gartner predicts traditional search engine volume will decline 25% by 2026 due to AI chatbots. Geostar, a startup pioneering Generative Engine Optimization (GEO), reached $1M ARR in four months by optimizing websites for AI platforms instead of search engines. A Forrester study shows 95% of B2B buyers plan to use generative AI in purchase decisions.
My take: This represents a fundamental restructuring of how information flows on the internet. SEO optimized for how Google’s algorithms rank pages. GEO optimizes for how LLMs parse and synthesize information across sources. The shift is already measurable—some professionals report 50% of new client acquisition through ChatGPT rather than search.
The broader implication: Google built an empire on indexing the web and serving as the intermediary between users and information. AI chatbots short-circuit that intermediary role by providing direct answers. Websites that relied on search traffic lose visibility while AI companies monetize access to information they didn’t create. Brand mentions without links now matter because AI systems analyze sentiment across text, not just crawl hyperlinks.
Geostar reaching $1M ARR in four months suggests the market is real and businesses are adapting. The 25% decline prediction may prove conservative if AI answer engines continue improving. We’re watching the internet reorganize around a new discovery paradigm, and most businesses haven’t adjusted yet.
CLOSING THOUGHTS
This week illustrated several parallel transitions: nonprofits restructuring into for-profits while maintaining mission claims, frontier labs pursuing both scale and interpretability, and companies attributing workforce reductions to AI disruption regardless of actual causation.
The technical work—interpretability research, 4-bit training, browser-sized models—represents genuine progress on hard problems. The business decisions—OpenAI’s restructuring, mass layoffs, unprecedented infrastructure spending—reflect familiar patterns: mission drift, post-pandemic corrections, and competitive dynamics.
Strip away the AI framing and many stories become recognizable: companies overhired and now course-correct, businesses with obsolete value propositions blame technology rather than fundamentals, and regulators approve changes that may not align with stated public interest because the legal arguments technically satisfy requirements.
Some of this is genuinely AI-driven disruption. Much of it is standard corporate behavior with “AI” attached to the explanation. The challenge is distinguishing between the two.
See you next week, assuming we haven’t all been automated into heavier workloads by then. YAI 👋
Disclaimer: I use AI to help aggregate and process the news. I do my best to cross-check facts and sources, but misinformation may still slip through. Always do your own research and apply critical thinking—with anything you consume these days, AI-generated or otherwise.


