Tech's Manifesto Problem

By Anna Muratova

Tech has a manifesto problem. Not a shortage. A quality control issue. When an industry starts writing manifestos, it means the informal arguments have stopped working.

In 2023 and 2024, two landed: Marc Andreessen's "The Techno-Optimist Manifesto" and Jason Crawford's introduction to "The Techno-Humanist Manifesto." Both defend progress. Both push back against degrowth and cultural pessimism. Both arrived the moment large language models forced the conversation from theoretical to urgent.

Andreessen writes a sermon. Crawford writes a case brief.

Andreessen's manifesto reads like a creed. "We believe" repeated dozens of times. No citations, no counterarguments engaged. The structure borrows from religious texts: articles of faith, a named enemy, a call to conversion.

His AI position is doctrine:

"Artificial Intelligence is our alchemy, our Philosopher's Stone."

Any deceleration of AI "will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder." That turns a policy debate into a moral absolute. It makes cost-benefit analysis heresy by definition.

Crawford opens with data. Productivity slowdowns across decades. 75% of young people saying "the future is frightening." Specific institutional failures: FDA testing delays, California's rail disaster, Golden Rice blocked for decades.

On AI, he's measured:

LLMs have "created a general kind of artificial intelligence" that is either "the next big thing in software" or "the next dominant species that will replace humanity."

He names the range without collapsing it.

The machine question, literally

Andreessen frames AI as evolution's next step. The "techno-capital machine," borrowed from Nick Land, treats intelligence and energy as inputs to a feedback loop driven "to infinity." AI is a "universal problem solver." The prescription: build it, accelerate it, remove friction.

Crawford flags the philosophical trap underneath. Accelerationism's goal is to follow "the will of the universe" or "preserve the light of consciousness," but not your consciousness, necessarily. When the technology is intelligence itself, that stops being abstract. If the optimization target is "intelligence in the universe" rather than "human well-being," the logical endpoint doesn't require humans.

Both share a blind spot. Andreessen cites Nordhaus's finding that technology creators capture only 2% of generated value. That number comes from a 2004 paper studying 20th-century innovations. Whether the same ratio holds for foundation models, where training runs cost hundreds of millions and three to five companies control the infrastructure, is the central economic question of the decade. Citing a pre-transformer-era ratio as settled proof is the kind of move that makes skeptics stop reading.

Crawford sidesteps distribution too. His framework centers agency: humans should shape AI, not be shaped by it. But agency requires access to the levers, and those levers sit inside organizations with very specific incentive structures.

The enemy problem

Andreessen names enemies in bulk: ESG, the Precautionary Principle, "trust and safety," degrowth. All framed as "zombie ideas, many derived from Communism."

Crawford names specific failures: permitting delays, FDA bottlenecks, Sri Lanka's fertilizer ban. Every claim is checkable. The criticism lands because it's falsifiable.

For AI policy, this distinction matters. When you list "trust and safety" as an enemy alongside Communism, you bundle people who want to ban AI with people building evaluation frameworks for it. Anyone who has shipped production ML knows the difference between safety theater and risk assessment. Andreessen's framing collapses that. Crawford preserves it: "The right message is not 'don't worry!' but 'here's how we will solve it.'"

What neither manifesto builds

Neither engages with the labor question. What happens when LLMs automate significant portions of knowledge work? Andreessen waves it away with comparative advantage. Crawford acknowledges "technological unemployment" but defers. For manifestos published after GPT-4, that's a conspicuous gap.

Neither addresses the structural questions: who trains the frontier models, who prices API access, who decides the alignment targets. A manifesto about AI that doesn't address the structure of the AI industry is like a manifesto about electricity that doesn't mention who owns the grid.

Neither manifesto answers that. One of them has the structure to try.
