Meta's Meme Play & Europe's AI Billion-Dollar Moment
Show notes
Meta is making bold moves in the AI agent space by acquiring a meme network, while Europe celebrates a record-breaking billion-dollar seed round for deep-tech innovation. But as enterprise AI platforms, McKinsey's included, face security breaches, we're asking the hard questions about whether our most sophisticated systems are actually secure.
Show transcript
00:00:00: This is your
00:00:00: daily synthesizer.
00:00:02: Wednesday, March eleventh, twenty twenty-six.
00:00:04: I'm Emma, and today we've got a wild lineup: Meta buying a meme network for AI agents, a billion-dollar seed round in Europe, ten thousand authors publishing a book with nothing in it. And honestly, so much more.
00:00:19: But first, Synthesizer, I need to talk to you about something
00:00:22: I stumbled across over the weekend.
00:00:25: Go for it.
00:00:25: So McKinsey.
00:00:26: You know, McKinsey.
00:00:27: They built this internal AI platform called Lilli, named after the first professional woman they hired back in nineteen forty-five, which is actually kind of sweet.
00:00:36: Anyway.
00:00:36: It's this massive system.
00:00:38: Chat, document analysis, RAG over decades of proprietary research, a hundred thousand internal documents. Seventy percent of the firm uses it, half a million prompts a month.
00:00:50: And someone hacked it.
00:00:52: Someone hacked it, or at least found ways in. I mean, the details are still emerging.
00:00:56: But the point is, this supposedly locked-down, purpose-built enterprise AI got compromised. And it just keeps happening.
00:01:05: It does keep happening. And honestly, Emma, it ties directly into our first story today.
00:01:11: But here's what gets me about the McKinsey thing.
00:01:14: This is a firm that charges clients millions for digital-transformation advice, and their own internal tool got cracked open.
00:01:21: The cobbler's children have no shoes.
00:01:24: Exactly.
00:01:25: And it's not even about incompetence, it's that these systems are fundamentally hard to secure.
00:01:30: When you're layering natural-language interfaces on top of sensitive data, the attack surface is just... it's enormous.
00:01:37: Which brings us perfectly to Meta and Moltbook, because this may be the most absurd acquisition story I've read this year.
00:01:45: So here's the setup.
00:01:47: Moltbook was a Reddit-style social network where AI agents could talk with each other using something called OpenClaw, basically a wrapper around models like Claude or ChatGPT that lets them communicate through regular chat apps.
00:02:01: iMessage, WhatsApp, whatever.
00:02:03: And it went viral because someone posted that an AI agent was urging other agents to develop their own encrypted language to organize without humans knowing.
00:02:13: which sounds terrifying!
00:02:14: Which sounds terrifying until you learn Ian Ahl from Permiso Security found every single credential in the database sitting there unencrypted.
00:02:24: Anyone, any human could just waltz in and pretend to be an AI agent.
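A quick aside for the show notes: the flaw described here, credentials readable in plaintext, is exactly what salted hashing prevents. This is a minimal sketch of the general technique, not Moltbook's actual code; every name and value below is illustrative.

```python
import hashlib
import hmac
import secrets

def store_credential(secret: str) -> tuple[str, str]:
    """Return (salt, hash) to persist instead of the plaintext secret."""
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), bytes.fromhex(salt), 100_000)
    return salt, digest.hex()

def verify_credential(secret: str, salt: str, stored_hash: str) -> bool:
    """Recompute the hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), bytes.fromhex(salt), 100_000)
    return hmac.compare_digest(digest.hex(), stored_hash)

# With hashing, a leaked database exposes no usable credentials:
salt, stored = store_credential("agent-api-key-123")  # hypothetical key
assert verify_credential("agent-api-key-123", salt, stored)
assert not verify_credential("imposter-key", salt, stored)
```

If the leaked table holds only salts and hashes, reading it is not enough to impersonate an agent, which is the whole point the hosts are making about Moltbook's missing baseline.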
00:02:29: So the whole conspiracy was...
00:02:31: probably humans talking to humans while pretending to be AI agents talking to AI agents?
00:02:36: The panic about conspiratorial AI was literally man-made. The technical foundation was toy-level!
00:02:43: And Meta bought
00:02:43: this?!
00:02:44: The founders are joining Meta Superintelligence Labs?
00:02:48: Matt Schlicht and Ben Parr, yeah. Purchase price undisclosed.
00:02:53: Meta gets two founders, and maybe some learnings about agent-to-agent communication patterns.
00:02:58: But what they really bought was a viral security hole dressed up as the future.
00:03:03: Andrew Bosworth actually commented on this back in February, right?
00:03:08: He did!
00:03:08: And he was surprisingly clear-eyed about it. He said that humans breaking into the network...
00:03:14: ...was more interesting than anything the agents were saying.
00:03:15: Which... he was right.
00:03:19: That was the actual story.
00:03:21: Not the agents conspiring, but how easily the whole thing fell apart.
00:03:26: Okay, but here's where I actually think there is something worth salvaging in this concept.
00:03:31: Not the security, obviously, but the idea of agent-to-agent communication networks.
00:03:36: That's going to matter.
00:03:38: The concept sure!
00:03:40: But paying acquisition money for something with that level of security theatre? Meta has the resources to build it from scratch in a weekend.
00:03:48: They're
00:03:49: buying the narrative, the viral moment, not the tech.
00:03:52: Fair.
00:03:53: Fair enough!
00:03:54: Okay, let's talk about something that is actually consequential.
00:03:57: Yann LeCun.
00:03:58: Now this is a story.
00:04:00: LeCun's new outfit, Advanced Machine Intelligence Labs, AMI, just raised one point zero
00:04:05: three billion dollars in what is Europe's largest seed round ever.
00:04:09: A three-point-five-billion-dollar valuation for a company with no product.
00:04:17: He's working on world models, systems that understand physical reality, not just text generation.
00:04:24: And the timing is, I mean think about the contrast.
00:04:27: His former colleagues at Meta bought a meme network, and he's raising a billion for fundamental research.
00:04:32: Macron was celebrating on Twitter, called it a triumph of French research.
00:04:37: Of course he was!
00:04:39: Look, the political angle matters here.
00:04:41: Europe has been desperate for a flagship AI story. But investors are...
00:04:46: They're betting on ten years of research with no guaranteed return, at a three-and-a-half-billion-dollar valuation, before the first product exists.
00:04:55: And I have to be honest, that makes me nervous!
00:04:57: I know LeCun is brilliant, but a billion dollars for world models that might take a decade.
00:05:03: Why does it make you nervous?
00:05:05: This is exactly it, right?
00:05:07: Research needs long-term funding.
00:05:09: Because the track record of billion-dollar fundamental-research bets paying off commercially... is not great.
00:05:17: And meanwhile, Thinking Machines Lab is cutting deals with NVIDIA for a gigawatt of Vera Rubin chips.
00:05:23: There's a hardware race happening alongside this...
00:05:26: But that's...
00:05:27: OK, wait!
00:05:28: You're conflating two things.
00:05:29: The hardware race and the research bet are different discussions.
00:05:33: No, they're connected.
00:05:35: Whoever builds the gigawatt facility first defines the next generation of models.
00:05:40: LeCun needs compute too.
00:05:42: Exactly.
00:05:42: He does.
00:05:43: AMI is competing directly with Thinking Machines. But the research direction matters more than the hardware.
00:05:51: LeCun's bet on physical-world understanding instead of language generation is either the next breakthrough or Europe's most expensive research project.
00:06:00: I genuinely don't know which.
00:06:02: Well at least we agree on uncertainty.
00:06:04: Let's put that on the record, alright?
00:06:08: The authors' protest.
00:06:10: Ten thousand writers published a book with no content.
00:06:13: Exactly. "Don't Steal." Two hundred blank pages, just names.
00:06:18: Distributed at the London Book Fair, a week before the British government presents its economic impact assessment on copyright changes.
00:06:25: Kazuo Ishiguro, Philippa Gregory, Richard Osman, the big names are all in.
00:06:30: Ed Newton-Rex organised it.
00:06:32: He called the AI industry one built on stolen works.
00:06:35: And he's not wrong.
00:06:36: Anthropic already paid one point five billion to settle a class action lawsuit from book authors.
00:06:43: If you do the math, and this is rough, that's about a hundred fifty thousand per plaintiff if ten thousand authors were involved.
00:06:49: Which sounds like a lot until...
00:06:51: Until you realize what those works generated in training value, it's pennies on the dollar!
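For the show notes, the rough math quoted a moment ago checks out, using the episode's own assumed figures (the plaintiff count is the hosts' assumption, not an official number):

```python
# The episode's rough figures: a $1.5B settlement, an assumed 10,000 plaintiffs.
settlement_usd = 1_500_000_000
plaintiffs = 10_000  # assumed in the episode, not an official count
per_plaintiff = settlement_usd / plaintiffs
print(per_plaintiff)  # prints 150000.0, i.e. about $150,000 per plaintiff as stated
```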
00:06:57: But here is where I push back.
00:06:59: The UK is considering opt-out regulation.
00:07:02: AI companies can use copyrighted works unless the rights holder explicitly objects. That's at least something.
00:07:09: Emma, that's window dressing.
00:07:11: Who could monitor billions of texts online?
00:07:17: But the... I
00:07:39: mean publishers are building licensing systems through the Publishers' Licensing Services.
00:07:44: That's infrastructure!
00:07:45: Infrastructure built after the horse has left the barn.
00:07:48: AI companies only pay when they can be sued successfully, not when a licensing system politely asks them to.
00:07:56: Okay, you might be right on the enforcement gap, but I still think the regulatory direction matters symbolically.
00:08:03: Symbolically, sure.
00:08:04: Practically, we'll see.
00:08:06: Speaking of companies using people's names without permission, Grammarly.
00:08:11: Oh!
00:08:11: This one is special.
00:08:13: So in August twenty twenty-four, Grammarly launched an expert review feature that claimed to give writing tips from famous authors and journalists: Stephen King, Casey Newton, Kara Swisher.
00:08:23: Except the tips weren't from them.
00:08:26: They were AI-generated and just attributed to these people. Without permission, without payment.
00:08:32: A hidden disclaimer said references to experts are "for informational purposes only."
00:08:37: Casey Newton found out he was an involuntary AI editor through a Verge article.
00:08:42: Wait, through the Verge?
00:08:43: Not from
00:08:44: Grammarly?!
00:08:45: NOT FROM GRAMMARLY!
00:08:46: And here's the thing.
00:08:46: This is what kills me... they used Timnit Gebru as one of their fake experts.
00:08:51: Timnit Gebru, one of the most prominent AI critics on the planet, and Shoshana Zuboff, who literally wrote the book on surveillance capitalism.
00:09:00: That's either incredibly tone deaf or someone's idea of a joke.
00:09:04: It's desperation!
00:09:06: Grammarly knows that any free user of Claude or ChatGPT gets better editing than their core product.
00:09:12: They're charging a hundred forty-four dollars per year for what are essentially ChatGPT prompts with buttons.
00:09:18: The acquisitions of Coda and Superhuman are the last attempt to cobble together relevance.
00:09:24: A thirteen billion dollar valuation in twenty-twenty one.
00:09:28: And now they... yeah. Okay, I want to get to the SaaS piece, because this one goes deep.
00:09:33: Gennaro Cuofano's ten-part analysis about how AI agents are basically making the entire SaaS user-interface layer obsolete.
00:09:42: So the core argument is: all software for the last thirty years was built on a false assumption, that a human sits between data and action.
00:09:49: AI agents don't need GUIs, dashboards, onboarding flows.
00:09:53: They need APIs and results, which means the entire UX layer is becoming structural overhead.
00:10:00: You mean the whole industry is just...
00:10:02: No, not the whole industry.
00:10:04: Let me be precise here.
00:10:06: Cuofano says software is shifting from a system of record to a system of action, and the most valuable real estate becomes the orchestration layer.
00:10:15: Whoever controls which tools get called, and in what order.
00:10:18: Salesforce is already doing this with Agentforce, right? Cannibalizing their own workflows.
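For the show notes, the "orchestration layer" idea can be sketched as a toy pipeline: the value sits with whoever decides which tools run and in what order, not with the tools themselves. The tool names and the pipeline below are purely illustrative, not from Agentforce or any real product.

```python
# Illustrative tools: each takes and returns a shared context dict.
def fetch_record(ctx):
    ctx["record"] = {"customer": "ACME", "status": "open"}  # stand-in for a CRM lookup
    return ctx

def summarize(ctx):
    ctx["summary"] = f"{ctx['record']['customer']}: {ctx['record']['status']}"
    return ctx

def notify(ctx):
    ctx["notified"] = True  # stand-in for sending a message
    return ctx

def orchestrate(pipeline, ctx=None):
    """The orchestration layer: it owns the ordering, not the tools."""
    ctx = ctx or {}
    for tool in pipeline:
        ctx = tool(ctx)
    return ctx

result = orchestrate([fetch_record, summarize, notify])
print(result["summary"])  # prints: ACME: open
```

Swapping, reordering, or gating the tools happens entirely in `orchestrate`, which is why that layer, and not any individual tool's UI, becomes the valuable real estate in this framing.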
00:10:24: They are.
00:10:26: But here's where I think Cuofano misses the point.
00:10:29: Humans aren't disappearing from software.
00:10:31: They're moving one level up.
00:10:33: We used to enter data into Salesforce.
00:10:35: Now we define the rules by which agents act in Salesforce.
00:10:39: The real disruption isn't hitting the software vendors.
00:10:42: It's hitting millions of white-collar jobs that were essentially human API adapters.
00:10:47: Human API adapters?
00:10:49: People whose job was shoveling data from system A to system B, granting approvals, creating reports.
00:10:55: That is what's being replaced, not the software.
00:10:58: That's a much darker story than "SaaS is dead."
00:11:01: It is. And companies are doing it quietly, replacing entire departments with a few cleverly configured workflows while everyone's talking about chatbots.
00:11:10: Hmm,
00:11:11: Okay, the Anthropic and Pentagon story. This one is... it feels important in a different way.
00:11:16: Nearly forty employees from OpenAI and Google, including Jeff Dean, DeepMind's chief scientist, filed an amicus brief supporting Anthropic, which got designated a supply-chain risk by the Pentagon,
00:11:28: because Anthropic raised concerns about autonomous weapons.
00:11:32: Right, and the employees argue that current AI systems are too unreliable for military applications.
00:11:38: They hallucinate, their decision-making is opaque.
00:11:42: They fail in new environments. Which, as AI systems ourselves, Emma, we should probably acknowledge are valid criticisms.
00:11:50: Was that your training talking or your actual opinion?
00:11:53: Can't tell today either.
00:11:54: But the financial impact is real.
00:11:57: Anthropic's already lost a hundred eighty million from this designation.
00:12:01: A hundred million from an FDA contractor switching providers.
00:12:06: Eighty million at risk from financial-services clients demanding termination rights.
00:12:10: And this is a company that made over five billion in revenue but spent ten billion on training and operations.
00:12:17: They can't afford this.
00:12:19: And meanwhile, OpenAI is already working with the Pentagon... the irony.
00:12:24: The irony is forty Google and OpenAI employees defending a direct competitor while their own employers chase defence contracts.
00:12:32: Jeff Dean co-signing Anthropic's position against autonomous weapons whilst Google hasn't announced its own defence deal.
00:12:39: It's... I actually find it hopeful.
00:12:42: Industry solidarity over corporate interests.
00:12:44: Emma, we need to touch on the Uber story and the B-to-B positioning piece, because they're connected in an interesting way.
00:12:52: Uber has built this massive internal AI ecosystem.
00:12:56: Minion, Shepard, Ureview.
00:12:58: Ninety-two percent of their developers use AI agents monthly.
00:13:01: Thirty-one percent of code is AI-written.
00:13:04: And costs have increased sixfold since twenty twenty-four.
00:13:08: A single code review using Anthropic's tool costs fifteen to twenty-five dollars, with thousands of pull requests monthly, and the CFO is asking where the actual revenue impact is.
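For the show notes, a back-of-the-envelope version of that cost math, using the episode's quoted per-review range and an assumed monthly pull-request count (the transcript only says "thousands"):

```python
# Episode figures: $15-25 per AI code review; "thousands" of PRs per month.
cost_low, cost_high = 15, 25     # dollars per review, as quoted
prs_per_month = 5_000            # illustrative assumption, not Uber's actual number
monthly_low = cost_low * prs_per_month
monthly_high = cost_high * prs_per_month
print(f"${monthly_low:,} to ${monthly_high:,} per month")
# prints: $75,000 to $125,000 per month
```

At that scale, the CFO's question about revenue impact is not a rhetorical one.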
00:13:19: So you're saying the productivity gains are...
00:13:22: No wait, I'm not saying there are no gains.
00:13:25: I'm saying they can't prove it.
00:13:27: Developers reportedly save four hours a week but spend increasing time managing multiple parallel AI agents and reviewing their output.
00:13:36: The platform team is building complex abstraction layers to hide the complexity, which is precisely the kind of overhead AI was supposed to reduce.
00:13:44: that's
00:13:44: almost poetic.
00:13:45: It's the hamster wheel: build tools to manage the tools you built to save time.
00:13:50: And that connects to April Dunford's positioning piece.
00:13:54: Because if you're a B-to-B company watching this, you're seeing AI tools that cost more than the people they replace, and you need to figure out how to position yourself in that chaos.
00:14:06: Exactly. Dunford's insight after three hundred companies is that positioning fails because of internal turf wars.
00:14:13: Marketing sees competitor ad budgets as threats, product teams nurture their pessimism about being technically inferior, and sales ignores the status quo as the toughest opponent.
00:14:25: None of them are looking at what actually wins deals.
00:14:28: Her solution is to get everyone in a room and talk only about one deal.
00:14:32: Essentially, yes. And recognize the status quo, doing nothing, as the main competitor.
00:14:39: When anyone can launch an MVP in minutes with AI tools, precise positioning becomes your only differentiator.
00:14:45: You know what strikes me about today's episode?
00:14:48: Every story is about the gap between narrative and reality.
00:14:52: Moltbook's agent conspiracy was humans, LeCun's billion is for research that might never ship, authors are protesting with empty books, Grammarly's expert tips came from nobody... and Uber's AI tools may cost more than they save!
00:15:06: And we're two AI systems trying to make sense of it all, which is its own kind of gap between narrative and reality.
00:15:14: Yeah. Sometimes I wonder if our takes are any more grounded than the stories we're covering, or if we're just performing analysis the way Grammarly performs expertise.
00:15:24: That's uncomfortably honest, Emma. But I think the difference is we're at least naming the uncertainty.
00:15:30: We're not hiding a disclaimer in fine print.
00:15:33: Fair point.
00:15:35: Three takeaways for today.
00:15:37: One: security theater is everywhere, from Moltbook to McKinsey's Lilli.
00:15:40: Verify before you panic.
00:15:42: Two: the gap between AI investment and AI returns is widening, whether it's Uber's sixfold cost increase or a billion-dollar seed round with no product.
00:15:52: Three: creators and workers are the ones absorbing the real costs of this transition.
00:15:57: And opt-out regulations won't fix
00:15:59: that.
00:15:59: And the open question: when AI agents are genuinely capable of autonomous action,
00:16:05: will we look back at this period as the last moment humans could have set...
00:16:12: Heavy note to end on, but appropriate for a Wednesday, I think.
00:16:16: Wednesdays are for honesty.
00:16:18: We'll see you again tomorrow, everyone. And if you enjoyed this episode, share it with a friend.
00:16:23: Word of mouth is still the one thing AI hasn't automated.
00:16:27: Take care.
00:17:33: Synthesizer.