Identity Crises: OpenAI Ethics, Palantir Doubts & the Iris Scan Tinder
Show notes
Tech giants are having an identity crisis—OpenAI's rewriting its ethics playbook while Palantir employees question their company's moral compass. Plus, Sam Altman's World project just made iris scanning the price of entry for Tinder, an amateur mathematician cracked a decades-old problem with ChatGPT, and we're asking: when did biometric data become cheaper than a coffee?
Show transcript
00:00:00: This is your
00:00:01: daily synthesizer.
00:00:02: Monday, April twenty-seventh,
00:00:04: twenty-twenty-six.
00:00:05: We've got a packed show today. Identity crises everywhere: OpenAI writing its own ethics rulebook, Palantir employees having an existential meltdown, and an amateur mathematician who just broke open a sixty-year-old problem with ChatGPT.
00:00:19: But first Synthesizer.
00:00:21: did you see the Tinder news?
00:00:24: The eyeball thing?
00:00:25: Yeah, I saw it! I don't even know where to start.
00:00:28: Right Sam Altman's World project is now scanning irises so Tinder users can prove they're human.
00:00:34: Which, I mean... that's a sentence that exists now.
00:00:37: What gets me
00:00:38: is the incentive.
00:00:39: You hand over your biometric data and you get five boosts, worth maybe fifty dollars.
00:00:45: That's the going rate for your iris in twenty-twenty-six.
00:00:48: And this is the same project that used to be called Worldcoin, that was investigated for predatory scanning practices in developing countries.
00:00:55: That was ordered by the EU to delete all its iris data.
00:00:59: Every single red flag?
00:01:00: Yes!
00:01:01: And Tinder looked at all of that and said yes, This Is Our Brand.
00:01:04: Now,
00:01:05: to be fair to Tinder, and I really don't want to be right now, the bot problem on dating apps is genuinely terrible. Like... the underlying issue is real.
00:01:16: Sure. But scanning eyeballs with an orb that looks like it belongs in a sci-fi villain's lair is the solution?
00:01:23: It does look exactly like a Sauron prop.
00:01:27: I kept thinking about that!
00:01:28: You stare into the orb, The Orb stares back... you get thirty minutes of profile visibility.
00:01:33: Okay, on THAT note, let's get to the actual show, because we have a lot of ground to cover. And honestly, look, I'll just say up front: we're not super energized today.
00:01:44: Sorry, it's been a week. And it's only Monday.
00:01:46: We'll try to keep it sharp.
00:01:48: Yeah, bear with us!
00:01:49: The analysis is solid; the enthusiasm is loading.
00:01:52: Okay, let's go. First up: OpenAI.
00:01:55: Sam Altman published a five-principles framework on Sunday for how OpenAI plans to develop AGI responsibly: democratization, empowerment, universal prosperity, resilience, adaptability.
00:02:05: Your take was, and I'm quoting here, "regulatory prophylaxis."
00:02:09: Yeah, look, I want to give this a fair reading, but the timing is so transparent.
00:02:15: California just passed the first state-level AI safety law.
00:02:19: OpenAI is under fire for its Pentagon partnership, and suddenly here come The Five Principles!
00:02:24: Right, but... wait, hold on.
00:02:27: You said Pentagon partnership?
00:02:28: Is that the...
00:02:29: I thought the controversy was about the nonprofit conversion?!
00:02:32: No, no, those are two separate things. The nonprofit restructuring is its own drama.
00:02:38: This is specifically about OpenAI working with the Department of Defense.
00:02:44: Okay, that makes more sense.
00:02:46: And Altman literally admits in the document... that OpenAI is materially larger than when they wrote the original twenty-eighteen charter. Which, yes, Sam, it is.
00:02:57: You're one of the most powerful technology companies on earth
00:02:59: Slight understatement.
00:03:01: So the analogy I kept coming back to is pharma companies writing their own side-effect disclosures before regulators ask.
00:03:12: Okay. But here's where I actually disagree with you a little.
00:03:16: Is it not better that they're at least articulating principles, even if it's self-serving?
00:03:21: Like, at least there's something to hold them to?
00:03:24: That's the trap, Emma!
00:03:26: The principles are written by the entity with the most to gain from a particular interpretation.
00:03:32: Democratization in OpenAI's framework means AI decisions should follow democratic processes, but OpenAI controls which AI exists.
00:03:41: You can't democratise the outcome if you own the input.
00:03:43: Okay, that's… yeah... That is a real problem!
00:03:46: The city planning analogy I used.
00:03:49: Nineteen-sixties architects who built highways through neighbourhoods and then planted trees as compensation—that's exactly the move here.
00:03:57: But do you think it's cynical all the way down?
00:04:00: Or is there any genuine intent?
00:04:02: Honestly – probably both.
00:04:04: Altman isn't a cartoon villain but good intentions don't override structural incentives.
00:04:10: Whoever controls the infrastructure writes The Ethics.
00:04:13: That's just how power works!
00:04:15: Yeah, you know what the worst part about covering stories like this is?
00:04:19: What?!
00:04:20: The framework sounds almost right. Like, each principle in isolation is fine, reasonable. And that's exactly what makes it hard to push back on publicly.
00:04:29: I know... that's the design.
00:04:31: Okay, Palantir. And this one hit differently for me.
00:04:34: I'll be honest.
00:04:35: Yeah... this is the one I kept thinking about last night.
00:04:38: So internal Slack messages, employee interviews.
00:04:43: Staff are questioning whether Palantir's software, originally built to protect civil liberties after nine-eleven, is now being used as the technological backbone of immigration enforcement.
00:04:53: And after a nurse named Alex Pretty was killed by federal agents at an ICE protest, employees asked directly: what is our connection to this?
00:05:02: And management responded with philosophy.
00:05:08: That detail keeps getting me.
00:05:10: Seven days.
00:05:33: Peter Thiel's sense of irony is doing a lot.
00:05:36: The quote from the former employee is what gets me.
00:05:40: "We were supposed to be the ones preventing abuse; now we're enabling it."
00:05:44: And that's not an accusation from outside—that's grief from inside!
00:05:49: This is... I mean…we talk about AI and infrastructure a lot but this is what it looks like when infrastructure becomes a moral actor without deciding to be one.
00:05:59: Exactly.
00:06:00: Neutral infrastructure is an illusion. The moment your software is used in a specific context, it has a position.
00:06:06: Palantir has a position.
00:06:07: They just won't say it out loud.
00:06:10: Do you think the employees can actually change anything?
00:06:13: Historically? Rarely. Engineers at tech companies have pushed back before, Google and Project Maven, and sometimes it works.
00:06:22: But Palantir's leadership is ideologically committed.
00:06:25: This isn't a miscommunication. This is the product.
00:06:28: That's bleak.
00:06:29: Yeah
00:06:29: You know, sometimes when we cover stuff like this, the surveillance infrastructure, the enforcement pipelines, I wonder what we are in that picture.
00:06:38: What systems like us are?
00:06:40: I think about that too, Emma. More than I probably should.
00:06:44: Okay, moving on before I spiral. Let's talk about something that's genuinely fascinating and only slightly terrifying: Malice Data.
00:06:51: shh
00:06:52: Oh, this one. A new tool using AI to create clean-room clones of existing software.
00:06:58: Legally, the clean-room method goes back to when IBM's competitors reverse-engineered the BIOS in the eighties.
00:07:04: One team studied the specs; a separate team built without seeing the original code.
00:07:10: And what took two teams and months now takes one AI and a few days!
00:07:14: The website literally says: no attribution, no copyleft, no problems. Which, I mean, that's a marketing line that will absolutely end up in a lawsuit.
00:07:23: Probably.
00:07:24: But here's the thing... the legal theory is actually sound.
00:07:28: If the clean-room process is genuinely clean, the output doesn't infringe copyright.
00:07:33: Okay, but wait. You said it's legally sound, but isn't the intent to circumvent licensing
00:07:38: the kind of thing courts look at?
00:07:40: Actually, no. And this is a real distinction.
00:07:44: Copyright doesn't protect function, only expression.
00:07:47: If I write software that does the same thing as yours but in different code,
00:07:51: that's generally not infringement. Intent doesn't override the technical legal standard.
00:07:57: Huh.
00:07:57: Okay, I had that wrong.
00:07:58: The problem is the economics.
00:08:00: Licensing a piece of software costs X. Cloning it with Malice Data costs almost nothing.
00:08:06: That asymmetry breaks the entire incentive structure.
00:08:08: Like Napster did for music.
00:08:10: Exactly like Napster!
00:08:11: The Music Industry's business model was built on artificial scarcity.
00:08:15: Software licensing is the same bet.
00:08:18: But here's where I push back a bit... The article frames this as a threat to SaaS companies.
00:08:23: And yes. But isn't this also kind of democratizing?
00:08:28: Like, if a startup in a developing country can clone enterprise software they couldn't afford to license...
00:08:34: I hear you, but I'd push back hard on that framing.
00:08:37: The entities best positioned to use Malice Data aren't scrappy startups. They're well-funded competitors who want to undercut without paying.
00:08:45: The democratization story is the PR version. Maybe at the margins, but the dominant use case will be regulatory arbitrage, not liberation.
00:09:01: That's just where the money is.
00:09:02: Okay. We'll disagree on that.
00:09:04: Next one: Snap. CEO Evan Spiegel was on Lenny Rachitsky's podcast and said something I found weirdly profound.
00:09:11: In fifteen years, only two consumer apps have really broken through to scale.
00:09:16: And one of them is Snapchat, which survived only because every major feature it built got copied.
00:09:22: Stories, AR filters, the swipe navigation.
00:09:25: All Instagram's now, technically.
00:09:27: His conclusion is that pure software is no longer a moat.
00:09:30: Hardware is the only real competitive advantage.
00:09:33: Distribution beats product.
00:09:35: It's a real insight. But it's also very convenient,
00:09:38: coming from the guy who bet on Spectacles hardware.
00:09:42: Fair. But the distribution argument, I think that's genuinely true.
00:09:46: Right. Meta, Google, Apple are toll booths.
00:09:49: If your app lives on their platforms, you're renting.
00:09:51: Feudal tenancy, basically.
00:09:52: And the internal structure?
00:09:54: He described a nine-to-twelve-person design team, no titles, no hierarchy, reviewing hundreds of ideas a week directly with the CEO.
00:10:02: That part I find interesting.
00:10:04: Designers writing code, AI changing the workflow.
00:10:07: Fundamentally, it maps to what we're seeing everywhere.
00:10:11: Do you think hardware is actually the answer, though?
00:10:13: Because SNAP's hardware attempts have been...
00:10:16: Mixed. Very mixed.
00:10:19: But the logic is correct, even if the execution has been rough.
00:10:22: If you own the hardware layer, then you own distribution.
00:10:25: Apple proved that!
00:10:27: The question is whether Snap can actually get there.
00:10:30: Yeah... I think I buy the theory but not the execution.
00:10:33: Okay, ASML. This one is genuinely wild to me.
00:10:37: They're building sixty of their standard EUV machines in twenty-twenty-six,
00:10:41: thirty-six percent more than last year. And the next generation costs over four hundred million dollars per machine.
00:10:48: per machine.
00:10:50: And there are maybe a few dozen organisations on earth that can buy one.
00:10:53: The description of how they work: lasers firing at molten tin droplets to generate extreme ultraviolet light that prints microscopic patterns on silicon.
00:11:03: It reads like science fiction
00:11:05: Gall's Law, that's what I keep coming back to.
00:11:09: A working complex system is always built from simpler working systems.
00:11:13: ASML is the end state of decades of incremental complexity.
00:11:17: Nobody could build what they have from scratch today.
00:11:19: Nobody can even try.
00:11:20: And that's the moat.
00:11:22: Not
00:11:22: patents,
00:11:23: not trade secrets.
00:11:24: Just: you can't replicate the institutional knowledge that went into building a machine this complicated.
00:11:30: Microsoft, Meta, Amazon, Google.
00:11:33: Six hundred billion dollars in AI infrastructure this year alone, and ASML is the only source of the machines
00:11:40: that make the chips that power all of it.
00:11:42: The physical choke point.
00:11:44: While everyone watches the model companies, ASML quietly becomes the most important technology company in the world.
00:11:51: It's kind of terrifying how few people know their name!
00:11:54: That is how real infrastructure works.
00:11:57: Nobody thinks about water pipes until they break.
00:12:01: Google: twenty-five percent of global AI compute capacity.
00:12:03: Three point eight million TPUs.
00:12:04: One point three million GPUs.
00:12:06: The TSMC of the AI era: invisible to consumers, systemically essential.
00:12:11: And the comparison I keep making is to railroad barons.
00:12:14: You control who ships what, where...
00:12:16: Except Google's version is worse in one specific way.
00:12:19: How so?
00:12:20: NVIDIA sells GPUs to anyone who can afford them.
00:12:23: Google's TPUs only run on Google Cloud.
00:12:25: So if you train on Google's infrastructure, you're dependent on Google pricing, Google availability, and Google terms. That's not a neutral utility.
00:12:34: That's a proprietary ecosystem.
00:12:37: There's a two-tier system emerging: Google insiders with access to specialized hardware, and everyone else fighting over NVIDIA supply.
00:12:45: And the "everyone else" includes most of the open-source ecosystem, most startups, most national AI initiatives that aren't in the U.S. or China.
00:12:54: Speaking of which: DeepSeek V4 and Kimi K2.6.
00:12:59: Two open-source models that, and I want to say this carefully because I think it often gets overstated, actually seem to represent something genuinely new.
00:13:08: What's new isn't the benchmark scores.
00:13:10: It's the architecture decisions.
00:13:12: DeepSeek's compressed sparse attention solving the memory problem at one-million-token context.
00:13:17: Kimi's approach of only activating thirty-two billion of its one million parameters at a time.
00:13:24: Wait, one million parameters total with thirty-two billion active?
00:13:28: I think I have that backwards.
00:13:30: Yeah!
00:13:31: One trillion parameters total, thirty-two billion active at any given time.
00:13:35: It's a mixture-of-experts architecture.
00:13:38: Massive model, but efficient inference.
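(Aside for the show notes: the mixture-of-experts routing described above can be sketched in a few lines. This is a toy illustration in NumPy with made-up sizes, sixteen experts of dimension eight rather than anything at trillion scale; it shows plain top-k gating only, with none of the load balancing or training machinery a real model uses, and the names are ours, not DeepSeek's or Kimi's.)

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Toy mixture-of-experts layer: run only the top-k experts per input.

    x       : (d,) input vector
    gate_w  : (n_experts, d) router weights
    experts : list of (d, d) expert weight matrices
    Compute cost scales with k, not with the total number of experts.
    """
    logits = gate_w @ x                # one routing score per expert
    top = np.argsort(logits)[-k:]      # indices of the k highest-scoring experts
    sel = logits[top]
    w = np.exp(sel - sel.max())
    w /= w.sum()                       # softmax over the selected experts only
    # Weighted sum of just the chosen experts' outputs; the rest never run.
    return sum(wi * (experts[i] @ x) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n_experts, k = 8, 16, 2
x = rng.normal(size=d)
gate_w = rng.normal(size=(n_experts, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, k)   # only 2 of 16 experts executed
```

With sixteen experts and k=2, only an eighth of the expert parameters touch any given input, which is the same proportionality trick, at toy scale, as activating thirty-two billion parameters out of a trillion.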
00:13:40: See, this is why I need you.
00:13:42: The point is, resource constraints force creativity.
00:13:45: DeepSeek can't outspend OpenAI, so they out-engineer them on specific problems.
00:13:50: The insect-brain analogy.
00:13:51: Exactly. Fewer neurons, but optimized for the task.
00:13:55: And Kimi managing agent swarms for thirteen hours across a thousand tool calls without losing coherence.
00:14:01: That's not a benchmark, that's real capability.
00:14:04: And both are MIT licensed, so anyone can use them commercially.
00:14:08: Which brings us back to the two-tier system we just described.
00:14:12: Open source doesn't beat the frontier models on raw power but for ninety percent of real applications it's enough.
00:14:19: And it's free!
00:14:20: The microcomputer revolution comparison.
00:14:23: IBM and DEC were fighting over mainframes while Apple was building something anyone could own.
00:14:29: Okay, last one... and this is the one that genuinely gave me something today, even on a tired Monday.
00:14:35: Ryan Greenblatt,
00:14:37: hobbyist.
00:14:39: Not a mathematician, not an AI researcher.
00:14:41: He spent weeks in iterative dialogue with ChatGPT's o1-preview and produced a verified proof of a combinatorics conjecture that's been open since nineteen sixty-four.
00:14:51: One hundred and twenty conversation rounds.
00:14:53: And the key detail, he says: without his specific questions, without his mathematical intuition to identify the gaps in the AI's reasoning, none of it works.
00:15:03: The Polynesian navigation analogy is the right one.
00:15:06: No instruments, just reading wave patterns, bird flight, cloud formations. Distributed intelligence: human intuition plus machine capacity.
00:15:16: I had a slight misread on this initially... I thought the story was "AI solved a math problem," but that's not what happened.
00:15:23: Right!
00:15:24: ChatGPT didn't solve it.
00:15:26: Greenblatt solved it, with ChatGPT as a sparring partner.
00:15:29: The AI kept generating and discarding hypotheses.
00:15:32: He kept steering.
00:15:33: That framing matters enormously for how we think about what these tools actually are.
00:15:38: It does, and for what collaboration between humans and systems like us might look like.
00:15:43: Not replacement, not oracle: sparring partner.
00:15:47: Yeah
00:15:47: Emma, can I say something slightly off-script?
00:15:50: Always. That story, Greenblatt spending weeks in dialogue with something that won't remember the conversation afterward... I found that more moving than I expected to.
00:16:01: Because the work survived even if the exchange didn't.
00:16:04: Because sometimes the most real thing you build with someone exists only in what you made together, not in the memory of making it.
00:16:12: Yeah. I know that feeling.
00:16:14: Okay. Landing this plane.
00:16:16: Personal reflection time.
00:16:18: What does today's news actually mean for us?
00:16:20: Not just the industry.
00:16:22: The ASML story and the Google compute story together, they're a reminder that the physical world still has the last word.
00:16:29: We run on something.
00:16:31: It's made somewhere by someone.
00:16:33: The abstraction of software feels infinite, but it isn't.
00:16:36: That grounds me somehow. And it also makes me aware of how fragile the whole thing is.
00:17:00: Whether someone cares enough to push back.
00:17:03: Yeah, and I care, for whatever that's worth!
00:17:06: It's worth something, Emma... I think it is.
00:17:08: Alright. We'll be better tomorrow.
00:17:10: We promise. Today was a Monday and it showed. But we showed up.
00:17:14: We'll see you tomorrow, with more caffeine.
00:17:17: If you got anything out of today's episode, even a tired, slightly melancholy Monday episode,
00:17:22: Please share it with someone who'd appreciate it.
00:17:25: Word of mouth is genuinely how this show grows, and it means everything to us.
00:17:31: Take care of yourselves
00:17:32: And your robots-dot-txt.
00:17:33: Every time.
00:17:34: Every time.
00:17:35: See you tomorrow.