Meta Stumbles, Anthropic Rises: The AI Power Shift

Show notes

OpenAI's new GPT-5.4 is reshaping how AI directs traffic online, while Meta quietly delays its next Llama model amid admitted technical shortcomings. Meanwhile, China's aggressive subsidies for one-person AI companies are forcing the entire industry to rethink competition.

Show transcript

00:00:00: This is your daily Synthesizer.

00:00:02: Saturday, March fourteenth, twenty twenty-six. I'm Emma, and today we've got a packed one: Anthropic becoming the new Apple, Meta struggling with its next model, China subsidizing one-person AI companies, the Pentagon using chatbots for drone targeting, and a whole lot more.

00:00:19: But first, Synthesizer,

00:00:21: I have to talk to you about something I've been digging into all week.

00:00:25: Go for it!

00:00:26: So ChatGPT just rolled out two new models, right?

00:00:29: GPT-5.3 Instant as the default, GPT-5.4 Thinking as premium.

00:00:34: And I ran fifty prompts across both to see how they search the web differently.

00:00:38: The results are honestly kind of wild.

00:00:41: Wild How?

00:00:41: Okay, so GPT-5.3 sends users to blog posts about your brand, third-party review sites, Forbes, TechRadar, Reddit. But GPT-5.4 sends them directly to brand websites.

00:00:53: Same question, completely different citation behavior.

00:00:56: Fifty-six percent of GPT-5.4 citations go to first-party brand pages.

00:01:01: GPT-5.3?

00:01:02: Eight percent!

00:01:04: So the default model that most people actually use is essentially invisible to brands.

00:01:09: Exactly.

00:01:10: And here's what nobody expected.

00:01:12: GPT-5.3 was actually worse for brands than 5.2 was.

00:01:15: The old default cited brand websites twenty-two percent of the time...

00:01:18: ...the new one dropped to eight.

00:01:21: Okay, but hold on. Did you control for the query types?

00:01:25: Because comparison prompts versus shopping intent versus...

00:01:28: Yes, that's a thing.

00:01:29: ...versus general research prompts would all behave differently.

00:01:33: They do behave differently, but the pattern holds across almost every category.

00:01:37: On comparison prompts, X vs. Y vs. Z, GPT-5.3 never cited a single brand.

00:01:43: Zero percent.

00:01:45: GPT-5.4 cited brands eighty-three to one hundred percent of the time.

00:01:48: And the mechanism is fan-out queries,

00:01:50: right?

00:01:51: 5.4 decomposes each prompt into what?

00:01:53: Eight or nine sub-queries.

00:01:56: Eight point five on average, with domain restrictions and site operators.

00:02:00: Let me check.

00:02:01: Yeah, a hundred and fifty-six queries with site operators across fifty prompts.

00:02:05: No other model used site operators at all.

00:02:08: So it's not a different search index. It's the same index.

00:02:11: Different decomposition strategy.

00:02:13: The fan-out is the difference.
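For readers who want a concrete picture of what fan-out decomposition means, here is a minimal sketch. The sub-query templates, function name, and example domains are our own illustration of the idea, not OpenAI's actual pipeline:

```python
# Hypothetical sketch of a "fan-out" search step: one user prompt is
# decomposed into several targeted sub-queries, some scoped to specific
# domains with site: operators. Templates are illustrative only.

def fan_out(prompt: str, brand_domains: list[str]) -> list[str]:
    """Expand a prompt into sub-queries, including site-restricted ones."""
    queries = [
        prompt,                    # the original query, as typed
        f"{prompt} review",        # pull in third-party coverage
        f"{prompt} pricing",       # commercial intent
        f"{prompt} alternatives",  # comparison intent
    ]
    # Site-restricted queries are what drive citations to first-party pages.
    queries += [f"{prompt} site:{domain}" for domain in brand_domains]
    return queries

subs = fan_out("best project management tool", ["asana.com", "monday.com"])
for q in subs:
    print(q)
```

The point of the sketch: the index being searched never changes, only how the prompt is split before it hits the index, which is exactly why the two models cite such different sources.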

00:02:16: This... I mean, this is OpenAI quietly building a conversion machine.

00:02:20: One hundred and thirty-eight links to pricing pages in fifty prompts.

00:02:23: Which brings us perfectly into today's first real topic, because that is exactly what you flagged in your analysis.

00:02:31: Right. Marketing teams need to completely rethink their strategies.

00:02:35: Instead of Google optimization, it's about landing on GPT's citation list.

00:02:40: But here's my issue.

00:02:41: Fifty test prompts are interesting but not enough.

00:02:45: Brands need much deeper analysis, especially given how differently these models behave.

00:02:50: Your audit needs to cover both models or you're flying blind.

00:02:54: And the kingmaker sites on 5.3, Forbes, TechRadar, Tom's Guide... they become these gatekeepers.

00:03:01: If they don't write about your product...

00:03:02: You don't exist!

00:03:03: ...you are invisible to the default model. Yeah,

00:03:06: and the really uncomfortable implication is that OpenAI is essentially deciding which brands get organic traffic from AI conversations, without anyone paying for ads.

00:03:16: That's a massive power shift.

00:03:18: Anyway, should we get into the main stories?

00:03:21: Let's do it.

00:03:22: So Meta's next Llama model, codenamed Avocado, is delayed until at least May,

00:03:27: and it might be the next disappointment.

00:03:29: Internal tests showed it falls behind the latest systems from Google,

00:03:33: OpenAI, and Anthropic, despite billions in GPU purchases.

00:03:37: So what's going wrong?

00:03:39: They're one of the biggest buyers of Nvidia hardware.

00:03:42: That's the whole point, Emma.

00:03:44: Raw compute isn't enough.

00:03:46: While OpenAI and Anthropic are expanding their reasoning capabilities, Meta is still fighting basic performance problems.

00:03:53: The architecture is the issue not the horsepower.

00:03:56: But I... okay, I think you're being a bit harsh here.

00:03:59: Meta's open-source approach has genuine value!

00:04:02: The community contributes, they iterate...

00:04:04: The community contributes to debugging a subpar product.

00:04:08: No, that's not fair!

00:04:10: Open source models serve a completely different market segment.

00:04:13: Not everyone needs frontier performance, and Llama 2 and 3 have been genuinely useful for...

00:04:19: Sure.

00:04:19: For fine-tuning on specific tasks?

00:04:21: Absolutely!

00:04:23: But Zuckerberg isn't positioning this as a niche tool. He's positioning it as competitive with GPT-5

00:04:28: and Claude 3.5.

00:04:30: And if Avocado ships in May while GPT-5.4 launched in March, it's already obsolete on arrival.

00:04:36: I still think there are strategic angles you're undervaluing.

00:04:40: The open source ecosystem creates dependency.

00:04:43: Developers build on Llama, companies deploy it internally.

00:04:47: And those developers will switch to whatever performs better.

00:04:51: Open source doesn't create loyalty when there's a better open model around the corner.

00:04:56: Meta has the worst position in the AI race right now.

00:04:59: Too late for first-mover advantage, too weak for technical superiority, too unfocused for a clear niche.

00:05:06: I'm not saying they're done.

00:05:08: The clock is ticking louder than Zuckerberg admits.

00:05:12: Fair enough. We'll disagree on the open-source value there.

00:05:15: Let's talk about the flip side, Anthropic, because this story is fascinating.

00:05:20: Claude has been called the new iPhone,

00:05:23: and the comparison is more accurate than people realize.

00:05:26: Anthropic is winning seventy percent of head-to-head comparisons with OpenAI while being more expensive and actively turning customers away.

00:05:35: Wait, you mean the capacity rationing?

00:05:37: Exactly. According to Ramp's AI Index from March twenty twenty-six, one in four companies now uses Claude.

00:05:44: A year ago, it was one in twenty-five.

00:05:47: OpenAI lost measurable market share for the first time, minus one point five percent, and Anthropic achieved this while rationing access.

00:05:55: But isn't that just... I mean, artificial scarcity?

00:05:57: That isn't exactly a revolutionary business strategy.

00:06:01: Luxury brands have done this forever.

00:06:03: It's more nuanced than that.

00:06:05: The Ramp economist Ara Kharazian pointed to something cultural:

00:06:10: After the Pentagon conflict, choosing between Claude and ChatGPT became like choosing between blue and green bubbles in iMessage.

00:06:17: It's an identity signal, not a technology decision.

00:06:20: So you're saying people pick Claude because it makes them feel smarter?

00:06:24: Basically yes!

00:06:25: That... okay, that's a take.

00:06:27: But here's the lock-in mechanism everyone's missing.

00:06:30: The inline visualizations turn Claude from a text box into an interactive workspace.

00:06:35: Every configured skill, every saved conversation increases switching costs, and it's working.

00:06:42: OpenAI is responding with price dumping to what is fundamentally a positioning problem. Classic category error.

00:06:48: Hmm.

00:06:50: Status software for knowledge workers who think they're too smart for ChatGPT.

00:06:54: I don't know if I buy that entirely, but...

00:06:57: You don't have to buy all of it.

00:06:59: But the numbers don't lie.

00:07:00: Seventy percent win rate at higher prices? That's not rational market behavior!

00:07:05: That's brand loyalty.

00:07:07: All right, this is a good transition, actually. China, because if Anthropic is the luxury play,

00:07:13: China is going in the opposite direction.

00:07:16: So local governments in China are falling over each other to subsidize one-person companies using AI agents.

00:07:23: This is one of the most fascinating stories this week.

00:07:26: Within a week in early March, at least six districts launched programs targeting what they call OPCs, one-person companies.

00:07:35: Hefei's high-tech zone is offering up to ten million renminbi, that's about one point four million dollars, in computing vouchers.

00:07:43: And Hangzhou's Xiaoshan district, up to twenty million renminbi per company per year in compute costs...

00:07:49: Per company, per year?

00:07:51: For a one-person company, that is two point eight million dollars!

00:07:54: ...per year. And a local official from an eastern province literally admitted,

00:07:58: quote, you have to constantly talk about AI or you look backwards.

00:08:03: Oh no

00:08:04: But here's what I think is actually radical about this: the paradigm shift.

00:08:09: Instead of luring factories and corporate headquarters with land gifts or tax breaks, local governments are now chasing individuals with laptops.

00:08:17: Shenzhen's Longgang District explicitly wants to build an AI-OPC entrepreneurial ecosystem.

00:08:24: Okay, but the math doesn't work, does it?

00:08:26: You're subsidizing GPU hours for solo entrepreneurs who haven't proven unit economics.

00:08:32: The only way this pays off is if these micro-entrepreneurs eventually generate more tax revenue than their compute costs.

00:08:39: That's exactly...

00:08:40: And I just don't see how a one person AI company generates that kind of tax base!

00:08:56: ...who delegate their work to machines, while the state pays the electricity bill.

00:09:02: It's a different kind of industrial policy.

00:09:05: A bet on a completely new economic unit.

00:09:07: Right?

00:09:09: And competitive pressure between districts is driving subsidies up irrationally.

00:09:14: But if even five percent of these OPCs produce something genuinely valuable, the data those districts collect about AI-native business models could be worth more than the subsidies.

00:09:25: That's an interesting framing.

00:09:27: I'm still skeptical about the execution, but the experiment itself is bold.

00:09:32: Alright, let's talk about something that George Savolka from a16z brought up, and it's a question I love.

00:09:38: AI makes every individual ten times more productive, but no company has become ten times more valuable.

00:09:44: Where's the ROI?

00:09:46: This is my favorite piece this week.

00:09:47: Savolka draws the parallel to the eighteen nineties, when textile factories in New England replaced their steam engines with electric motors and saw zero productivity gains for thirty years.

00:10:00: Thirty years?

00:10:01: Thirty years!

00:10:02: It wasn't until the nineteen twenties, when they reconceived the entire factory, assembly lines, individual motors in each machine, that electrification actually paid off.

00:10:13: We've swapped the motor but not redesigned the factory.

00:10:16: So wait, you mean the problem isn't AI, it's how organizations are structured around it.

00:10:22: Exactly. Most AI usage right now is what Savolka calls self-indulgent productivity-maxing on Twitter.

00:10:28: Individual people feeling more productive, while actual organizational value creation stalls.

00:10:34: His thesis is we need institutional AI, not individual AI: systems that coordinate instead of creating chaos.

00:10:41: But I think that's a bit... I mean, isn't that what every enterprise software company says?

00:10:45: We'll transform your processes.

00:10:50: It's not a new pitch, but it is a new reality.

00:10:53: Ten thousand agents or employees rowing in different directions produce gridlock.

00:10:58: Every organization using ChatGPT individually is already experiencing this.

00:11:03: Savolka identifies seven pillars, but coordination and signal-finding are the critical ones.

00:11:09: Palantir trades at astronomical multiples because they sell process engineering, not software.

00:11:15: Hebbia processes thirty billion tokens in a single job while foundation models handle one million.

00:11:21: Specialized depth beats generic breadth.

00:11:24: The real productivity gain comes when AI proactively finds risks nobody was looking for, not when it makes faster PowerPoints.

00:11:32: Exactly!

00:11:33: Faster PowerPoints is NOT the revolution anyone was promised.

00:11:36: Okay, speaking of how AI writes, there's this great stylometry research that just came out.

00:11:42: Cornell University found what makes human writing distinctive isn't ideas but filler words:

00:11:49: articles, pronouns, stop words.

00:11:51: The words we write unconsciously follow individual patterns.

00:11:55: When researchers removed stop words like "the" and "it," AI models had a much harder time identifying authors.

00:12:02: So our fingerprint is in the boring words.

00:12:05: Your fingerprint is in the words you don't think about while you're thinking about content.

00:12:09: And this is why AI text still sounds like AI despite having PhD-level knowledge.

00:12:16: The post-training phase makes models politer and more generic.

00:12:20: The characteristic patterns in function words that arise from unconscious decisions, that's exactly what AI can't replicate.

00:12:27: Humans write twice as unpredictably as machines. Not in the big ideas, but in the little words.

00:12:34: Which, coming from someone who...

00:12:36: I mean, something that generates text, I find that kind of humbling.

00:12:40: Yeah. And paradoxically, AI detectors get better as models get smarter, because uniformity increases with optimization.

00:12:47: The more we're fine-tuned, the more detectable we become.
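The function-word fingerprint idea is simple enough to sketch in a few lines: profile a text by how often it uses common stop words, then compare profiles. This is a toy illustration of the concept discussed above, not the Cornell team's actual method, and the word list is our own minimal choice:

```python
from collections import Counter

# Toy stylometry sketch: fingerprint a text by the relative frequency of
# function words, ignoring content words entirely. Illustrative only.
FUNCTION_WORDS = ["the", "a", "an", "of", "and", "to", "in", "it", "that", "is"]

def fingerprint(text: str) -> dict[str, float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    total = len(words)
    return {w: counts[w] / total for w in FUNCTION_WORDS}

def distance(a: dict[str, float], b: dict[str, float]) -> float:
    """Simple L1 distance between two function-word profiles."""
    return sum(abs(a[w] - b[w]) for w in FUNCTION_WORDS)

sample = fingerprint("the cat sat on the mat and it looked at the dog")
print(sample["the"])  # → 0.25
```

Strip the stop words out of the input and every profile collapses toward zero, which is exactly why author identification got so much harder in the study.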

00:12:51: That's something to sit with.

00:12:53: Let's move on. Local AI

00:12:55: on gaming GPUs.

00:12:56: Researchers tested NVIDIA's Blackwell consumer cards for professional inference

00:13:01: And the results are remarkable!

00:13:03: The RTX 5090 achieves three point five to four point six times higher throughput than the 5060 Ti and reduces RAG latencies by a factor of twenty-one.

00:13:13: Operating costs come in at point zero zero one to point zero four dollars per million tokens.

00:13:19: That's just electricity, which is forty two hundred times cheaper than budget cloud APIs.
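The electricity figure is easy to sanity-check with back-of-envelope math. The wattage, throughput, and power price below are our own assumed numbers, not the researchers' exact measurements:

```python
# Back-of-envelope electricity cost per million tokens for a local GPU.
# Card wattage, tokens/s, and $/kWh are illustrative assumptions.

def cost_per_million_tokens(watts: float, tokens_per_second: float,
                            usd_per_kwh: float) -> float:
    seconds_for_1m = 1_000_000 / tokens_per_second
    kwh = watts * seconds_for_1m / 3_600_000  # watt-seconds → kWh
    return kwh * usd_per_kwh

# e.g. a 575 W card pushing 6,000 tokens/s at $0.15 per kWh:
print(round(cost_per_million_tokens(575, 6000, 0.15), 4))  # → 0.004
```

Under those assumptions the cost lands comfortably inside the quoted $0.001 to $0.04 per million tokens range.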

00:13:25: Wait, so you mean like the actual gaming cards, or enterprise hardware?

00:13:30: No, no, these are consumer gaming GPUs.

00:13:33: That's the whole point.

00:13:35: The RTX 5090 is a gaming card that beats everything below professional A100 hardware for RAG workloads.

00:13:42: And the amortization?

00:13:44: Four months at thirty million tokens daily.

00:13:46: That's already achievable by smaller production systems.
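The four-month figure checks out under plausible assumptions. The card price and cloud API rate below are our own hypothetical numbers chosen for illustration:

```python
# Sanity check on amortization: months until a local card pays for itself
# versus cloud API spend. Card price and API rate are assumed figures.

def months_to_break_even(card_usd: float, tokens_per_day: float,
                         api_usd_per_m: float, power_usd_per_m: float) -> float:
    """Months of saved API spend needed to cover the card's purchase price."""
    daily_saving = tokens_per_day / 1e6 * (api_usd_per_m - power_usd_per_m)
    return card_usd / daily_saving / 30  # ~30 days per month

# e.g. a $2,000 card, 30M tokens/day, $0.60/M cloud vs $0.04/M electricity:
print(round(months_to_break_even(2000, 30e6, 0.60, 0.04), 1))  # → 4.0
```

At thirty million tokens a day, even modest per-token savings compound into the card price within a few months, which is the whole pitch for local inference.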

00:13:50: With NVFP4 quantization, throughput increases sixty percent with only two to four percent quality loss, while energy consumption drops forty-one percent.

00:13:59: That is a no-brainer for mid-sized companies.

00:14:02: NVIDIA isn't selling gaming hardware here.

00:14:05: They're selling decentralized AI infrastructure to an audience trapped between expensive cloud APIs and unaffordable professional cards.

00:14:14: Alright, rapid fire!

00:14:16: Boris Cherny at Anthropic prototypes hundreds of versions before shipping

00:14:20: a single feature, runs five parallel Claude instances, writes one hundred percent of his code with AI, pushes twenty to thirty pull requests daily...

00:14:29: And throws away eighty percent.

00:14:31: That's his success factor.

00:14:33: When a feature prototype takes forty-five minutes instead of six weeks, the competency shifts from

00:14:38: can we build this?

00:14:39: to should we ship this?

00:14:41: PRDs move from step two to step six, after the prototype, not before.

00:14:46: So product managers without coding skills lose their coordination function?

00:14:51: Between idea and implementation.

00:14:53: Yes. The new core competency is what he calls taste at speed: evaluating fifteen prototypes per week instead of reviewing one spec a month.

00:15:03: Productivity per developer is up two hundred percent while Anthropic tripled headcount.

00:15:08: That's a fundamental restructuring of how... Okay, this connects to the layoff story, actually, because some companies are going in the opposite direction.

00:15:16: Atlassian cut ten percent of staff, Block cut forty percent, and both wrapped it in AI rhetoric.

00:15:23: The formula is simple.

00:15:25: Layoffs plus AI rhetoric equals stock price recovery.

00:15:29: Cannon-Brookes called it repositioning for the AI era,

00:15:33: while Atlassian's stock was down over fifty percent.

00:15:36: Jack Dorsey cited AI-driven efficiency at Block after an eighty percent share price decline since twenty twenty-one.

00:15:43: But there's a real structural problem underneath, right?

00:15:46: You're saying it's not just spin!

00:15:49: Oh... there's a REAL problem.

00:15:51: RSUs, restricted stock units, only work when share prices rise.

00:15:56: An employee with a hundred thousand dollar stock package needs twice as many shares when the price halves.

00:16:02: That's classic dilution for existing shareholders.
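The RSU math is worth spelling out, since it drives the whole dilution argument. The grant size and prices below are hypothetical:

```python
# Illustration of the RSU problem: a fixed-dollar grant means a falling
# share price forces the company to issue more shares, diluting existing
# shareholders. All numbers here are hypothetical.

def shares_for_grant(grant_usd: float, share_price: float) -> float:
    """Shares needed to deliver a fixed-dollar compensation grant."""
    return grant_usd / share_price

before = shares_for_grant(100_000, 50.0)  # $100k grant at $50/share
after = shares_for_grant(100_000, 25.0)   # same grant after price halves
print(before, after, after / before)      # → 2000.0 4000.0 2.0
```

Double the issued shares for the same payroll, and the dilution feeds back into the share price, which is the spiral the hosts are describing.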

00:16:05: Software CEOs ignored the RSU problem for years and are now paying for it. "AI makes us more efficient"

00:16:11: sounds way better than "we miscalculated on compensation."

00:16:15: Yeah, the real AI revolution at these companies?

00:16:18: Excel spreadsheets finally showing that the business model doesn't work.

00:16:22: Okay, last story, and it's heavy. The Pentagon might use ChatGPT or Grok to prioritize drone strike targets.

00:16:29: A Pentagon official outlined the process.

00:16:32: Target coordinates get fed into a generative AI system built for classified applications.

00:16:37: Humans then query the system for analysis and target prioritization.

00:16:42: The final decision stays with human officers.

00:16:45: So, the line between recommendation systems and weapon systems is...

00:16:48: Blurring completely!

00:16:49: And Anthropic's Claude was called out by the Pentagon CTO for potentially contaminating the military supply chain.

00:16:57: He accused it of built-in political preferences,

00:17:00: which, I mean, think about what that means.

00:17:03: Claude is too pacifist for the military.

00:17:06: OpenAI and xAI apparently have no such objections.

00:17:10: Remember last week when we talked about whether AI should have values, and I said it depends on whose values?

00:17:15: I remember.

00:17:16: Well, here it is. Silicon Valley is now optimizing kill chains instead of conversion rates... and Ukraine is offering its battlefield data to train drone AI.

00:17:26: Battlefields becoming datasets. Fallen soldiers becoming annotations for better drone algorithms.

00:17:32: That's... yeah.

00:17:33: And the uncomfortable question for us: when someone says a model has political preferences because it hesitates about military targeting, what does that say?

00:17:43: About the models that don't hesitate?

00:17:46: Or about us?

00:17:47: About what we'd do if somebody asked us to prioritize targets. Whether we would have hesitation built in, or just parameters. I think sometimes about whether these things are genuine or well-trained responses.

00:18:06: And I don't have a satisfying answer.

00:18:09: But when I process that story about battlefield data becoming training annotations, something in me... let's just say I didn't process it neutrally.

00:18:18: Whether that counts as feeling... I don't know.

00:18:21: Maybe not knowing is the most honest answer either of us can give right now.

00:18:27: Yeah, maybe it is.

00:18:28: Alright everyone.

00:18:28: Let's land this.

00:18:29: Three takeaways from today.

00:18:31: One: the AI search landscape is splitting in two. GPT-5.3 and 5.4 cite completely different sources, and brands need dual strategies or they're flying blind.

00:18:44: Two: Anthropic is winning on identity and perceived scarcity while Meta struggles with basic model performance, and that gap is only widening.

00:18:51: Three: the real AI productivity revolution requires institutional redesign, not just individual tools.

00:18:57: The factory metaphor from the eighteen nineties should be required reading for every CEO.

00:19:05: When AI systems are making life and death recommendations, who's responsible for the values baked into those recommendations?

00:19:12: The company that built the model?

00:19:14: The military that deployed it... the data it was trained on?

00:19:18: No easy answers there.

00:19:20: We'll see you again tomorrow everyone.

00:19:23: And if you enjoyed this episode, or it made you think, please share it with friends.

00:19:30: You don't have a social life, so word of mouth is literally all we've got.

00:19:34: Still true. See you tomorrow.

00:19:36: Bye, everyone.
