Categories
Family Fathers Living Sports

The Flashlights He Left Behind

There’s a Wright Thompson piece from 2007 that I keep returning to. It was filed during the Masters, and it’s technically about golf the way the ocean is technically about water.

The setup is simple: Thompson is at Augusta National for work, credentialed sportswriter in the press tent, watching the ceremonial first shots and the azaleas and all of it. His father had dreamed of attending just once. His father is dead. The piece is what happens when Thompson walks the course trying to find him.

I don’t know how to write about it without sounding like I’m describing a dream to someone who wasn’t there. So let me start with the craft.


Thompson opens with chipped beef on toast. He’s on the clubhouse veranda, waiting for Arnold Palmer, and a stranger asks what he ordered. “It was my dad’s favorite meal,” Thompson explains. A silence falls. “Did you ever bring him here?” the stranger asks. “No,” Thompson says, turning away.

That’s the whole wound, opened in three lines of dialogue. No commentary. Just the weight of the unanswered invitation — the trip that never happened — sitting there in a plate of chipped beef. The best sportswriters understand that the specific detail does what abstraction never can. Thompson doesn’t tell you he carries grief. He shows you where it lives.

Then comes the structural move that makes the piece something more than a personal essay. Thompson builds a rhythm — three times, he lands the phrase that is Augusta — each time widening the frame. Nicklaus on 18, glancing at his son, repeating his own father’s last words. Tiger winning in 1997, finding Earl in the gallery, a son’s head on a father’s shoulder. And then, quietly, devastating: This, too, is Augusta: me, needing a daddy more than ever.

By the time the narrator’s grief enters the frame, the reader has already been prepared to receive it. The repetition is a kind of structural kindness. Thompson is telling you: pay attention, something is being built here. When it arrives, it doesn’t feel sudden. It feels inevitable.


The piece has a spine you don’t notice until you’ve read it twice. Thompson asks the same question at two different moments: Daddy, are you out there?

The first time, he’s standing in the rain, alone, by a sapling planted exactly one year after his father’s death. He’d been standing guard over the tree in a downpour, soaked, because he’d been unable to protect his father in life. No answer comes. Just the shattering windows of water falling from the sky.

The second time, he’s in the bleachers at Amen Corner. He whispers it. And from somewhere across the course, a roar rises from the gallery, moving through the pines, fading back to silence.

Thompson is careful here. He writes: Understand that I don’t believe in stuff like this and am certain it is a coincidence. That hedge is the whole story. The man who doesn’t believe in signs is exactly the man who most needs to find one. The moment works precisely because he doesn’t oversell it. He puts it down and lets it be what it is — or what the reader needs it to be.


The passage I keep coming back to is near the end, not at the emotional peaks. Thompson has just watched Jim Gray, the television reporter, carefully lift the rope so his white-haired father can slip beneath it. A small thing. A son holding a rope. And Thompson realizes he’s watching himself in reverse — that the transition he’s been grieving his way through is also a transition toward something.

The piece ends not with closure but with continuation. He buys a tiny green Masters onesie. A small knit golf shirt for a toddler. And the last line the sales clerk offers — meant as a coo over the cute little clothes — lands as the verdict Thompson has been seeking all week: Oh, good daddy.

It’s the right ending because it doesn’t answer the grief. The hole in your chest after losing your daddy never gets filled, Thompson writes, and he means it. What the ending does instead is redirect the inheritance. He’s received everything he needed. He just needs to pass it on.


That’s what the best longform sportswriting can do when it’s working at full power. The Masters is the container. Inside it: a meditation on what fathers give us that we don’t fully inventory until they’re gone, and what we owe the children we haven’t had yet.

Thompson filed this piece for a newspaper. He was 30 years old. That this exists at all feels like its own small miracle — a man sitting down in grief and producing something that will outlast the tournament, and probably him.

Go read it. The link is here. Then come back and sit with it for a while.

Categories
AI Books Writing

The Tax We No Longer Have to Pay

When Carol Coye Benson and I sat down to write Payments Systems in the U.S., one of the first problems we had to solve wasn’t about payments. It was about history.

To understand why the ACH network works the way it does, or why checks persisted decades longer than anyone expected, you need the institutional sediment underneath — the regulatory decisions, the failed experiments, the path dependencies baked in by choices made in the 1970s that nobody thought would still matter in the 2000s. The history is the explanation. Strip it out and you have a description of current practice with no account of why it exists or what it cost to get there.

But history takes pages. And pages test a reader’s patience. So you compress. You make judgment calls about what survives the cut and what gets left behind, and you make those calls knowing that every omission is a bet — a bet that the reader can follow without it, that the thread holds without that particular knot.

Writing it taught me something. The act of compressing, of finding the minimum sufficient version of a complex thing, forces a clarity that living inside the complexity never quite delivers. You don’t fully know what you understand until you have to say it precisely enough for someone else to follow.

But compression is always a loss. You feel it as you write. The version in the book is thinner than the thing you know.


Garry Tan uses a term — “tokenmaxxing” — that initially sounds like jargon from a performance optimization thread. The idea is simple: don’t be stingy with context. Give the model everything. Every source document, every relevant article, every piece of background that a human reader would never sit still for. Let it synthesize rather than guess.

The instinct it runs against is deep. We have spent decades building information systems around compression — search engines that retrieve rather than ingest, executive summaries that stand in for reports, one-pagers that distill months of work into something a decision-maker can absorb in four minutes. All of it was a rational response to a real constraint: human attention is finite and expensive. You couldn’t afford to read everything, so you built filters. The whole architecture of how organizations manage information was designed around that limit.

Tokenmaxxing is a bet that the limit has moved.

The model can read everything. The cost of giving it full context — the uncompressed history, the original sources, the institutional sediment — is low enough now that filtering before the model sees it may introduce more error than it prevents. You’re potentially discarding signal when you summarize for the model the way you’d summarize for a human. The model doesn’t need the one-pager. It can handle the report.

This doesn’t dissolve the need for curation entirely. More context isn’t always better — models can lose the thread in noise the same way humans do, just differently. The skill shifts from summarizing to selecting: not what’s the minimum version of this but what’s actually worth including. Different judgment, still essential.
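The mechanical difference is small enough to sketch. What follows is a minimal illustration, not anything Tan published: the ask_model stand-in, the sources directory, and the file layout are all my assumptions. The point is only that the chosen documents go into the prompt whole, rather than as summaries of themselves.

```python
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client you use;
    any chat-completion API with a large context window fits here."""
    raise NotImplementedError("wire up your model client")

def tokenmaxxed_prompt(question: str, source_dir: str = "sources") -> str:
    """Build a prompt from the full source documents rather than from
    hand-written summaries of them. Selection still matters (which files
    belong at all); compression of each file does not."""
    docs = sorted(Path(source_dir).glob("*.txt"))
    context = "\n\n".join(f"--- {doc.name} ---\n{doc.read_text()}" for doc in docs)
    return (
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the sources above, and cite the file names you rely on."
    )

# Usage sketch (assumes a ./sources directory of background documents):
# answer = ask_model(tokenmaxxed_prompt("Why did checks persist so long in the U.S.?"))
```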

But the deeper change is upstream of any particular project. The compression we built into every research process, every briefing, every book — that was never the goal. It was the tax we paid for human cognitive limits. Part of the process doesn’t pay that tax anymore.

When I think about writing that payments book today, I don’t think the book itself would change much — it still has human readers with finite patience. But the map we drew before writing it, the synthesis work, the “what connects to what across fifty years of regulatory history” work — that could happen at a different depth now. The understanding you bring to the writing can be informed by everything, not just the subset you had time to read.

The payments book was written entirely for humans, with all the compression that implies. But Tyler Cowen just published what he calls a “generative book” — 40,000 words released free online, paired on the same screen with a Claude interface so readers can discuss, interrogate, and extend it in real time. He’s writing for both audiences simultaneously now. The human reader and the model that will help that reader go deeper. The text is optimized not just to be understood but to be used — as context, as a jumping-off point, as raw material for a conversation that the author won’t be in.

That’s a different kind of writing. Not better or worse. Different. The compression decisions change when one of your readers has no patience to protect.

Writing still clarifies thinking. That part hasn’t changed. But what you’re clarifying, and who you’re clarifying it for, is quietly expanding.

Categories
AI AI: Large Language Models China

Cranes on the Horizon

In 2005, during my first trip to Shanghai and Beijing, the most striking feature of the skyline wasn’t the architecture—it was the cranes. More than I could possibly count, perched atop half-finished skyscrapers like a mechanical forest. Entire districts seemed to be mid-construction simultaneously, as if someone had pressed a button and the whole country decided to build everything at once. Dan Wang, in his book "Breakneck," described China as the "engineering state" that approaches national problems with physical solutions. Back in 2005, coming from Silicon Valley, I thought I understood what growth looked like. I didn’t.

I’ve been thinking about that trip while reading Nathan Lambert’s recent piece, "Notes from Inside China’s AI Labs." Lambert — who runs the Interconnects newsletter and does serious work tracking the open-weight LLM ecosystem — just returned from visiting essentially every major AI lab in China. Moonshot, Zhipu, Meituan, Xiaomi, Qwen, Ant Ling, 01.ai. He went in with genuine curiosity and came back with humility. That combination is rarer than it should be.

What he found was the cranes. Different domain, same energy.

Lambert’s central observation is about culture, not capability. The Chinese labs aren’t winning on any single technical breakthrough — they’re winning on execution discipline. He describes researchers, many of them active students, who bring no ego to the work. They absorb context fast, drop assumptions faster, and seem genuinely unbothered by the philosophical debates that swirl constantly in the American AI community. When he tried to engage Chinese researchers on the long-term social risks of models or the ethics of AI behavior, those questions "hung in the air with a simple confusion. It’s a category error to them." Their role is to build the best model. Full stop. To them, an LLM isn’t a philosophical entity to be interrogated; it’s a piece of infrastructure to be optimized.

That description landed for me. Not as a criticism of American research culture, but as a real observation about what the moment demands. Building good LLMs today is, as Lambert puts it, meticulous work across the entire stack — "all points of the model can give some improvements, and fitting them in together is a complex process."

The work that matters most right now isn’t the 0-to-1 creative leap; it’s the thousand unglamorous decisions executed without complaint. Students who haven’t yet learned to lobby for their own ideas turn out to be well-suited for exactly this.

Lambert ends on a note that’s hard to shake. He draws the same connection I did, though from the inside, looking up from his laptop on a high-speed train: "When I look up from my laptop and always see bunches of cranes on the horizon, it obviously fits in with the broader culture and energy around building in China."

Twenty years after my first visit, the cranes are still there. They’ve just moved indoors — into server rooms and training runs and model releases that land every few months with quiet confidence. In 2005, what China was building was obvious: you could see the steel frames going up. What’s being built now is harder to see, which may be exactly why it keeps surprising us.

Check out Lambert’s essay – it’s remarkable. If the 20th century was defined by who could move the most earth, the 21st will be defined by who can move the most tokens. And right now, the cranes are moving faster than we think.

Categories
AI Programming Software Work

The Scarcest Thing

Garry Tan woke up at 8 a.m. after sleeping at 4. Not because he had to. Because he wanted to see what his workers had done overnight.

The workers are AI agents. Ten of them, running in parallel across three projects. And something about that sentence — wanted to see what they’d done — keeps stopping me. That’s not the language of someone using a tool. That’s the language of someone managing a team.

Tan gave a name to the state this puts him in: “cyber psychosis.” He said it as a joke. But the joke has an insight in it. He’s not describing addiction to a productivity app. He’s describing a shift in what it means to do creative work — the strange vertigo of becoming a director when you’d always been a laborer.

I’m retired. I watch this from the outside now, which is its own kind of vantage point. For most of my career, the path from idea to working product ran through people — through hiring and managing and the slow accretion of execution capacity. You had the vision or you didn’t, but either way you needed the team. The idea and the means of making it real were, structurally, separate things. The gap between them was where companies lived.

What Tan is describing is that gap closing.

The thing he built — gstack, his open-sourced Claude Code configuration — got dismissed in some quarters as “just prompts.” And it is just prompts, in the same way that a conductor’s score is just notation. The abstraction is the invention. What he encoded is a model of how a startup team thinks: the CEO who interrogates the why before a line of code gets written, the engineer who builds, the paranoid staff reviewer who looks for what breaks. Each role blocks a different failure mode. Blurring them together produces, as his documentation puts it, “a mediocre blend of all four.”

That’s an organizational insight. It has nothing to do with code.
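Still, the shape of the idea is easy to sketch. The following is a hypothetical illustration, not gstack’s actual files or wording, and since the post names only three of the four roles, only those appear here: distinct prompts per role, run as separate passes, so no single prompt has to be the whole team at once.

```python
# Hypothetical role charters in the spirit of what the post describes;
# gstack's real configuration and phrasing are not reproduced here.
ROLES = {
    "ceo": "Before any code is written, interrogate the why: what problem "
           "does this solve, and how will we know it worked?",
    "engineer": "Implement the agreed plan with the simplest design that "
                "meets the requirements; state trade-offs explicitly.",
    "reviewer": "Be paranoid. Hunt for what breaks: edge cases, race "
                "conditions, security holes, silent failure modes.",
}

def run_pipeline(task: str, ask_model) -> dict[str, str]:
    """Run each role as its own pass so the roles stay distinct,
    rather than blending them into a single prompt."""
    transcript: dict[str, str] = {}
    context = task
    for role, charter in ROLES.items():
        reply = ask_model(f"You are the {role}. {charter}\n\n{context}")
        transcript[role] = reply
        context = f"{context}\n\n[{role}] {reply}"  # later roles see earlier output
    return transcript
```

The specific charters are invented; what carries over from the post is only the separation itself, one role per pass.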

Tan described being a “time billionaire” — not because his biological clock had slowed, but because he can now purchase machine-consciousness-hours. The bottleneck of implementation, which has governed every creative project since the beginning of creative projects, is dissolving for those who know how to direct.

The scarcest thing is shifting. It’s no longer the hours of execution. It’s the clarity of intent — knowing what you want to build and why the journey matters, before any of the workers start moving. That’s harder than it sounds. For decades, most of us could muddle our way to that clarity in the making. The act of building taught you what you were building. Now the making is cheap, and that shortcut is gone.

For someone watching from retirement, that’s not a small thing to absorb. The model I internalized over a long career — that ideas become real through sustained organizational effort, through teams and timelines and the grinding work of execution — is being revised faster than I expected. Not invalidated. Revised. The judgment still matters. The taste still matters. The why matters more than ever.

It’s just that the how has found new hands. Many of them. More than any team I ever assembled, available the moment the intent is clear enough to direct them, gone when the work is done. We thought the constraint was the hands. It turns out it was always the knowing.

Categories
AI AI: Large Language Models Anthropic

Breakout

Jack Clark doesn’t panic easily. He spent years at OpenAI watching capabilities inch upward, then left to co-found Anthropic, and has been writing his Import AI newsletter long enough to have developed — and been wrong about — many priors. So when he publishes an essay saying he has reluctantly arrived at a 60% probability that fully automated AI R&D happens by the end of 2028, the word "reluctantly" deserves some weight.

His essay, published last week and titled “Automating AI Research,” isn’t a press release or a fundraising pitch. It reads more like a man thinking out loud at the edge of something large. “I don’t know how to wrap my head around it,” he writes, which is a notable thing to say publicly when you are one of the architects of the thing you can’t wrap your head around.

The argument is built from benchmarks — not any single one, but a mosaic of them assembled to reveal a trend. SWE-Bench, the test that measures an AI’s ability to solve real GitHub issues, was at roughly 2% when it launched in late 2023. A recent Anthropic model sits at 93.9%, effectively saturating it. METR’s time-horizon plot tracks how long an AI can work independently before needing human recalibration: 30 seconds in 2022, 4 minutes in 2023, 40 minutes in 2024, 6 hours in 2025, 12 hours today. The trajectory, if it holds, suggests 100-hour autonomous work sessions by the end of this year.
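Those figures are enough for a back-of-the-envelope check on the extrapolation. This is only illustrative arithmetic on the numbers quoted above; the essay doesn’t say where in each year the measurements fall, so the calendar placement (mid-year points, with the "today" value set in early 2026) is my assumption.

```python
import math

# Time horizons quoted above, in hours; the year placements are assumed.
points = [
    (2022.5, 30 / 3600),  # 30 seconds
    (2023.5, 4 / 60),     # 4 minutes
    (2024.5, 40 / 60),    # 40 minutes
    (2025.5, 6.0),        # 6 hours
    (2026.1, 12.0),       # 12 hours "today" (assumed early 2026)
]

# Least-squares fit of log10(hours) against year: exponential growth
# shows up as a straight line on this scale.
xs = [x for x, _ in points]
ys = [math.log10(h) for _, h in points]
x_bar, y_bar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs
)

print(f"implied growth: ~{10 ** slope:.0f}x per year")
horizon = 10 ** (y_bar + slope * (2027.0 - x_bar))  # end of calendar 2026
print(f"implied horizon at end of 2026: ~{horizon:.0f} hours")
```

On those assumptions the fit works out to roughly an 8x yearly growth rate and a horizon on the order of 100 hours by the end of 2026, the same neighborhood as the essay’s extrapolation; shift the assumed dates and the exact number moves, but not the shape of the curve.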

Clark marshals similar progressions across AI fine-tuning, kernel design, scientific paper replication, and even alignment research itself. His throughline is the same in each: AI is now genuinely competent at the unglamorous scaffolding of AI development — the debugging, the experiment runs, the parameter sweeps, the code reviews. And crucially, it can now do these things not just faster than humans, but for longer, with less supervision.

There’s a Thomas Edison quote at the center of the essay: "Genius is 1% inspiration and 99% perspiration." Clark’s claim is that AI has become very good at the perspiration. The question of whether it can supply the inspiration — the paradigm-shifting insight, the Move 37 — remains open. But he argues it may not need to. Most of what has moved the AI field forward has been sustained, methodical work, not lone flashes of genius. If you can automate the 99%, you have something that compounds.

There’s a data point that makes Clark’s argument feel less like forecast and more like dispatch. Last month Boris Cherny, who runs Anthropic’s Claude Code, disclosed that he hasn’t written a line of code by hand in more than two months. Every pull request — 22 one day, 27 the next — written entirely by Claude. Company-wide, roughly 70–90% of Anthropic’s code is now AI-generated. Anthropic’s stated position: "We build Claude with Claude." The loop Clark is describing as a probability by 2028 is already running, at least partially, today.

The word Clark uses for the threshold he’s describing is not "singularity" or "AGI." It’s quieter than that. He calls it "automated AI R&D" — the point at which a frontier model can autonomously train its own successor. It’s a specific, falsifiable thing. And he puts a number on it: 60% by end of 2028, 30% by end of 2027.

I’ve been writing about the dark software factory and the 3D printer that prints better printers, finding metaphors for what seems like an inexorable process. Clark’s essay is a different kind of writing about the same thing — the primary source document, the engineer’s log, the inventory of evidence. Reading it is a little like watching someone carefully pack boxes before a move. Each individual item seems manageable. But there are a lot of boxes.

What he’s describing — if the trend holds — is not a feature or a product launch. It’s a breakout. The moment the loop closes and the system starts building itself. He’s not certain it happens. He just thinks it’s more likely than not, and he thought you should know.

Categories
Micropayments

The Wrong Who

I was in the room for most of the early micropayments conversations. The working-level conversations, where people were genuinely convinced they had finally solved the problem. The demos were always compelling. The unit economics made sense on a whiteboard. And then they died.

They died so many times, and in so many similar ways, that the failure started to feel like a law of nature.

Clay Shirky wrote the autopsy that most people remember: micropayments fail because every transaction requires a decision, and decisions have a cognitive cost that swamps any payment below some psychological threshold. A dollar feels like real money. A dime feels like a question you have to answer. A fraction of a cent feels like being nickeled-and-dimed at sub-human speeds. The advertising model won because it asked users to consent once, peripherally, and then never bothered them again.

So I noticed something when I read the transcript of Cloudflare CEO Matthew Prince’s earnings call remarks this afternoon.

He’s predicting that the internet’s business model — advertising and subscriptions, the twin structures that have governed everything since the late nineties — is about to change. He thinks some part of what replaces it will be micropayments for agentic traffic. Fractions of pennies. Fractions of fractions. At volumes that dwarf anything existing financial infrastructure can handle.

My first instinct was the old skepticism. We’ve been here before.

But I kept reading, and I think something is actually different this time. And the difference is the one thing all the earlier schemes never had.

The payer isn’t human.

This sounds obvious once you say it, but it collapses most of the objections that killed every prior attempt. Cognitive load isn’t a factor when there’s no cognition happening. Decision fatigue doesn’t apply to a process with no feelings about fatigue. The agent making the request doesn’t hesitate at a fraction of a penny, doesn’t resent the transaction, doesn’t abandon the session because it’s annoyed at being charged.

All the early micropayments architectures were built on an implicit assumption: that humans could be trained to behave like rational microeconomic actors at browsing speed. They can’t. Nobody does. But agents are rational microeconomic actors by design. That’s not a metaphor — it’s literally what they are.
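That design is easy to make concrete. Here is a purely hypothetical sketch, not any real payment protocol or pricing API: the only point is that for an agent, a sub-cent price is a one-line budget check, applied identically on the millionth request as on the first.

```python
from dataclasses import dataclass

@dataclass
class PricedSource:
    url: str
    price_usd: float           # what the publisher asks per request
    expected_value_usd: float  # the agent's estimate of what the source is worth

def worth_fetching(src: PricedSource, budget_left_usd: float) -> bool:
    """Pay only when the request is worth more than it costs and fits the budget.
    No hesitation, no resentment, no abandoned session."""
    return src.price_usd <= src.expected_value_usd and src.price_usd <= budget_left_usd

# Hypothetical numbers: fractions of a cent per request, a few cents per task.
sources = [
    PricedSource("https://example.com/archive", 0.0004, 0.0020),
    PricedSource("https://example.com/thin-feed", 0.0004, 0.0001),
]
budget = 0.05
for src in sources:
    if worth_fetching(src, budget):
        budget -= src.price_usd
        print("fetch", src.url)
    else:
        print("skip ", src.url)
```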

The schemes we watched fail in the early 2000s weren’t wrong about the destination. They were wrong about the who. The internet of human readers and human attention was never a natural fit for per-transaction pricing. The internet of autonomous agents — making API calls, scraping data, assembling answers from dozens of sources in a single second — is a different thing entirely. And it’s arriving faster than most people realize.

Prince mentioned that Cloudflare thinks non-human traffic will surpass human traffic somewhere around 2027. That number stopped me. We are, apparently, closer to a majority-machine internet than to the one we think we’re living in.

The hard part isn’t the concept anymore. It’s the infrastructure. Prince was candid about this: the transaction volumes the industry gets excited about — a million per second — aren’t remotely sufficient for what’s actually coming. Cloudflare needs something an order of magnitude larger, and they’re looking for partners because nothing that fits the spec exists yet.

This is where it gets interesting for those of us who watched the earlier rounds. The original micropayments failures were partly psychological, but they were also partly infrastructural — the payment rails of the early internet weren’t built for high-frequency small transactions either. What’s different now is that the need is undeniable and imminent in a way it never quite was before. The traffic is real. The scale is measurable. The pressure to figure this out is coming from something other than optimism.

I don’t know what the solution looks like. Probably not one thing. Prince doesn’t know either — he said as much. Crypto infrastructure is an obvious candidate for parts of it, though crypto’s history of promising to solve problems and then creating different ones deserves some respect. Whatever emerges will probably be unrecognizable from here.

What I keep coming back to is the simpler observation. We were right that micropayments were the future. We just imagined the wrong future, populated by the wrong kind of payer.

The agents were always going to solve this. We just had to wait for them to arrive.

Categories
AI Business Work

The Tipping Point Was Last November

Matthew Prince, Cloudflare’s CEO, said something on today’s earnings call that I keep turning over. He didn’t bury it or soften it. He named a date.

“Internally, the tipping point was last November.”

That’s a specific thing to say. Not “we’ve been on a journey” or “AI has been transforming our industry.” A month. A moment. The thing changed, and he knows when.

What changed, by his account, is that Cloudflare’s teams began seeing productivity gains so dramatic they were hard to describe — people who were two times more productive, ten times, in some cases a hundred times. “It was like going from a manual to an electric screwdriver.” Usage of AI tools internally is up more than 600% in just the last three months. Every line of production code is now reviewed by an autonomous AI agent.

And then he said goodbye to 1,100 people — about 20% of the company.


Today wasn’t just Cloudflare. Earnings season has become something like a drumbeat. Meta is cutting 8,000 employees this month. Amazon cut 16,000 in Q1. Oracle eliminated roughly 30,000 to fund AI infrastructure. Block cut almost half its workforce. PayPal is reportedly planning to cut 20% of its staff over the next few years. Coinbase cut 14%. Snap cut 16%. As of this week, more than 92,000 tech workers have been laid off in 2026 alone.

The scale is striking. But what strikes me more is the framing — the specific language being used to describe what’s happening. These aren’t being announced as cost-cutting moves or post-pandemic corrections, the way they might have been in 2022. They’re being announced as architectural decisions. Structural adaptations. Evolution.

Prince was careful to be explicit: “This isn’t a cost-cutting exercise or an assessment of individuals’ performance. It’s about defining how a world-class, high-growth company operates and creates value in the agentic AI era.” That’s not empty corporate language, or at least not only empty corporate language. The distinction he’s drawing — between trimming fat and reimagining how a company is built — maps to something real about what AI agents can now actually do.

There’s a legitimate version of this argument and a convenient one, and they’re being delivered in the same sentence by the same people, which makes them hard to separate. Some analysts suspect companies are using AI as cover for cuts they wanted to make for other reasons — rightsizing from pandemic-era overhiring, funding massive infrastructure buildouts, chasing margin. Oxford Economics flagged this: maybe some firms are “dressing up layoffs as a good news story.” The cynicism is warranted.

But then there’s the Cloudflare number: 600% increase in AI usage in three months. That’s not a narrative. That’s a measurement.


What’s different about this moment — what makes Prince’s “tipping point” language feel accurate rather than convenient — is that the people making these decisions are themselves users of the tools. They’ve seen the productivity numbers internally before anyone else has. They’re not theorizing about what AI might do to their workforce; they’re describing what it already did.

That’s the thing that changed. For years, AI’s labor impact was a future tense conversation. Economists studied it, think pieces warned about it, conferences debated the timeline. Then, somewhere around last November apparently, a cohort of technology companies crossed from hypothetical to empirical. The future tense became past.

Whether you read that as tragedy, as transformation, or as both depends on where you’re standing. 1,100 people at Cloudflare today are standing somewhere very specific. Prince acknowledged this with what felt like genuine difficulty: “A number of friends will no longer be colleagues.” Whether that difficulty changes anything material for the people leaving is a fair question.

But the acceleration itself — the thing he named — is real. The tipping point was last November. And if it was last November for Cloudflare, it was some nearby month for Amazon, for Meta, for Block, for all of them. Whatever these companies learned that changed everything, they all seem to have learned it around the same time.

That’s what I find myself sitting with today: not just the scale of the disruption, but the synchrony of it. The realization arrived, and then the decisions followed. Quietly at first, then all at once.

Categories
AI AI: Large Language Models

The 3D Printer That Prints Better Printers

Imagine a 3D printer that looks at its own design and begins printing a better version of itself. The loop closes. What had always required an external human intelligence now happens inside the machine. All by itself.

Jack Clark — Anthropic co-founder, someone who has spent years closer to this technology than almost anyone — puts the odds of this happening by 2028 at better than even. I have been turning that number over ever since I heard it. Not the technical claim, exactly. The feeling of it.

We have grown used to AI accelerating our work. Coders watch models close GitHub issues at rates that would have seemed miraculous eighteen months ago. Researchers delegate experiment design, kernel optimization, even the fine-tuning of smaller models. The scaffolding of AI progress is already being built, in part, by the systems themselves. But the moment the system begins to redesign the scaffolding — that is something new.

What unsettles me is not the raw capability, though that is staggering. It is the loss of distance.

For most of technological history, the creator stood outside the creation. Even the most sophisticated tools remained tools. Now the distinction begins to blur. A model that can meaningfully improve its own training process, its own architecture, its own alignment constraints, is no longer merely reflecting human intent back at us. It is participating in the shaping of its own nature. And because each iteration can happen faster than the last, the curve steepens in ways our intuitions, tuned to linear progress, struggle to grasp.

Clark is careful, as he should be. He speaks of validation work that will still fall to humans, of the need to broaden the pipes through which abundance flows, of preparing defense-dominant postures against misuse. Yet the image that lingers for me is quieter: the silence after the handoff. What does it feel like when the thing you have been painstakingly teaching begins to teach itself — and then to teach its teachers?

I think about Leo Szilard at the traffic light, or the first controlled chain reaction under the stands at the University of Chicago. Moments when a new regime of possibility quietly announced itself. Recursive self-improvement carries that same charge — not a single event but a process, one that could accelerate the very pace of events themselves.

The more I sit with it, the more I return to an older tension in our relationship with tools. We build them to extend ourselves, and in doing so we are always, subtly, extending — or perhaps risking — what we are. The values I try to live by — generosity, curiosity, compassionate honesty — are not refined in specifications. They are refined in friction, in relationship, in the slow work of being human with other humans. If the machines begin to optimize their own lineage at speeds we cannot match, will we still have the bandwidth to tend the parts of ourselves that no algorithm can yet measure?

I don’t know. None of us do. That uncertainty feels honest.

What feels clearer is the invitation. Not to fear the printer that prints better printers, nor to worship it, but to remain awake inside the loop. To ask, as each new version arrives, what kind of world we are collectively printing — and whether the values we claim to hold are baked into the design or merely etched on the surface, likely to wear away under the heat of iteration.

The light is still yellow. We are still deciding whether to step off the curb. But the traffic is already moving faster than it was a moment ago.

Categories
Science Stanford

Bypassing the Leaf

For my entire life, I’ve understood the world through a simple, quiet equation: green plants take sunlight and air, and turn them into the stuff of life. It is a slow, terrestrial magic we all learn in grade school.

But lately, after listening to Professor Drew Endy at Stanford, I’ve been sitting with a curious yet exciting realization: that ancient equation is being rewritten.

Professor Endy champions a concept called electrobiosynthesis, or eBio. At its core, it represents the engineering of a parallel carbon cycle that operates independently of traditional photosynthesis.

The global industrial complex is approaching a transition point where our traditional reliance on extractive fossil fuels is being superseded by a regenerative, biological manufacturing paradigm.

For millennia, humanity has relied on the biological “middleman” of the plant to capture solar energy. But natural photosynthesis, for all its quiet beauty, is limited by severe biochemical constraints. Most commercial crops convert less than 1% of incident solar energy into usable biomass.

Electrobiosynthesis changes the math. By bypassing the plant entirely, we can utilize high-efficiency photovoltaics—which capture over 20% of the sun’s energy—to drive carbon fixation directly into the metabolic hubs of engineered microbes. This fixed carbon is transformed into organic molecules, serving as the feedstocks for high-value products like proteins and specialty chemicals.
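A rough sense of what that efficiency gap means per square meter, using only the two efficiency figures above plus an assumed round number for average insolation (the 5 kWh per square meter per day is my assumption, not Endy’s), and ignoring downstream losses in electrolysis and microbial metabolism, which aren’t quantified here:

```python
# Back-of-the-envelope comparison of solar energy captured per square meter.
insolation_kwh_m2_day = 5.0  # assumed average incident solar energy (not from the talk)

crop_efficiency = 0.01  # "less than 1%" of sunlight ends up as usable biomass
pv_efficiency = 0.20    # "over 20%" of sunlight becomes electricity

crop_kwh = insolation_kwh_m2_day * crop_efficiency
pv_kwh = insolation_kwh_m2_day * pv_efficiency

print(f"crop biomass energy: ~{crop_kwh:.2f} kWh per m^2 per day")
print(f"PV electricity:      ~{pv_kwh:.2f} kWh per m^2 per day")
print(f"capture advantage:   ~{pv_kwh / crop_kwh:.0f}x before downstream conversion losses")
```

The ratio covers only the capture step; the conversion into molecules downstream takes its own losses, which this sketch doesn’t model.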

In my own career, I’ve watched industries undergo profound, structural phase shifts. This really feels like another one of them. It seems that we are looking at a future where any molecule that can be encoded in DNA can be grown locally and on-demand. This fundamentally decouples manufacturing from centralized industrial nodes and fragile global supply chains.

The field currently appears to be in its "transistor moment," moving from laboratory feasibility to industrial pilot plants. That shift signifies the ability to construct and sustain life-like processes without being restricted to the terrestrial lineage of photosynthesis.

Of course, with such foundational power comes the weight of unintended consequences. The ability to engineer life at this level brings severe biosecurity risks, and even the "Sputnik-like" strategic challenge of international competition in biotechnology. There are profound ethical dilemmas on the horizon, such as the creation of "mirror life"—organisms made from mirror-image biomolecules that might be invisible to natural ecosystems.

But the trajectory seems set. The vision described by Professor Endy—a world where we grow what we need, wherever we are, using only air and electricity—is no longer a distant science fiction. It is a nascent industrial reality. This future is being written not in sprawling factories, but in the microscopic architecture of the cell.

I’ve just finished reading a deep research report on this whole area that I asked Google Gemini to create. It’s fascinating, and I’ve discovered a whole new area (beyond AI) to explore further.

Categories
Business Creativity Space SpaceX

Test like you fly!

There’s a phrase in the SpaceX documentary that keeps coming back to me: “Test like you fly.” It sounds like a slogan. The kind of thing that gets painted on a factory wall and eventually stops meaning anything. But the more I sit with it, the more I think it’s actually a philosophy that reaches well beyond rocket engineering.

The video — a 25-minute documentary SpaceX released last week — is ostensibly about Starship Version 3. New ship, new booster, new engines, new pad, new test site. Everything rebuilt. And they’re not shy about framing it as a reset, not an upgrade. One description I read called it “a quiet violence in progress.” That phrase stopped me cold, because it’s exactly right. Progress that looks violent from the outside — all that fire and metal — but is somehow quiet in its inevitability.

What moved me watching it wasn’t the engines. It was the engineers. SpaceX put the people on camera: the ones running cryogenic pressure tests at 80 Kelvin, stress-testing tank structures at 70% proof, explaining their failures and their data with the flat affect of people who have made peace with how long hard things take. There’s something almost monastic about it. You choose a problem that will not yield easily. You accept that the work will outlast any individual sprint of enthusiasm. You go back to it anyway.

I keep thinking about that in the context of what we’re doing with AI — the other enormous, fast-moving project that I spend so much of my mental energy on. The development arc is different: iterative releases, weeks not years between jumps, demos that blur into deployment. But the same principle is buried in there somewhere. The best AI teams I read about aren’t the ones shipping the most polished demos. They’re the ones building infrastructure for failure — evals, red-teaming, structured feedback loops. Test like you fly.

The Raptor 3 engines now produce 280 metric tons of thrust each. Thirty-three of them on a Super Heavy booster means over 17 million pounds of liftoff force. I have no intuitive frame for that number. What I do have a frame for is what those numbers represent: three years of iteration on top of five years before that, on top of a theoretical foundation laid by people who didn’t live to see any of this. There’s a compounding in that which I find genuinely moving. Nobody built the Raptor 3 in isolation. It came from everything that broke before it.

The hardest part of the documentary isn’t the engineering. It’s the implicit acknowledgment of how much remains undone. No Starship has yet achieved full orbital velocity with both stages intact. In-space refueling is still untested. The thermal protection systems need more work. And yet — SpaceX talks about unmanned cargo missions to Mars before the end of this year like it’s on the roadmap, not the wish list. That sentence used to sound like marketing. Watching the footage, it doesn’t anymore.

I’m not sure what to do with that feeling exactly. It’s something between awe and vertigo. We’re living in a moment when the audacious has started to have quarterly milestones. When the impossible keeps showing up on timelines and then — bewilderingly, uncomfortably — meeting them.

Test like you fly. Fail with rigor. Build the thing you actually need, not the thing you could more easily explain.

I keep turning that over. There’s a post in there somewhere about writing, too — about the drafts nobody sees, the structural tests that fail, the versions that taught you the one that worked. But that’s for another day.

For now I’m just sitting with the footage of those 33 engines lighting up, and the quiet weight of how much went wrong before they could do that.