Thoughts on Agentic AI, Crypto and Blockchain
1. The Shifting Role of Humans in an AI-Augmented World
As AI agents become capable of handling routine implementation tasks, much of what junior software engineers do today will be automated. But this isn't purely a loss - it dramatically lowers the barrier to entry. People can spend less time on mechanical work and focus on harder, more creative problems: designing systems, defining requirements, and making architectural decisions.
This shift makes domain knowledge more important, not less. AI can implement, but it can't make informed trade-offs on your behalf. To build the right system, you need to understand the problem space deeply enough to evaluate options, weigh constraints, and make judgment calls. You can delegate implementation to AI. You cannot delegate understanding.
Unit tests reinforce this: if AI generates code, the human's job is to define what correctness looks like - which requires knowing the domain. The future favors people who learn broadly and deeply, not people who memorize syntax.
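A minimal illustration of this division of labor, using a hypothetical `net_price` function: the AI can generate the implementation, but only someone who knows the domain's tax and rounding rules can write the tests that define what correctness means.

```python
# Hypothetical example: the human encodes domain knowledge as a
# correctness specification; the AI-generated implementation must satisfy it.

def net_price(gross: float, vat_rate: float) -> float:
    """AI-generated implementation under test: strip VAT from a gross price."""
    return round(gross / (1 + vat_rate), 2)

# Human-written tests: knowing the domain (which VAT rate applies, how
# rounding works, what the edge cases are) is what lets us judge correctness.
assert net_price(120.0, 0.20) == 100.0   # standard 20% VAT
assert net_price(0.0, 0.20) == 0.0       # zero price stays zero
assert net_price(99.99, 0.0) == 99.99    # zero-rated goods unchanged
```

The tests, not the implementation, are where the domain understanding lives - which is exactly the part that cannot be delegated.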
2. Verifiable Computing as a Trust Layer
In a world of AI-generated code and increasing demand for transparency, verifiable computing becomes critical. It allows you to prove that a specific computation was executed correctly, with a proof that is succinct and cheap to check.
This matters because trust should be minimized, not assumed. Rather than asking society to trust every execution environment, we can verify procedures cryptographically. Blockchain's role here is as a ledger for storing verified states, not as a general-purpose compute platform.
3. Against On-Chain Computation
Moving all computation on-chain is the wrong direction. It's too heavy, and no amount of scaling - rollups, L2s, or otherwise - changes the fundamental overhead of replicated execution across consensus nodes.
The better approach is opt-in trust:
- Preserve existing centralized infrastructure - it's fast, mature, and cost-effective.
- Integrate a library that collects a witness (execution trace) alongside normal program execution.
- Generate a proof after the run that attests to the correctness of the state transition.
- Post the proof to a blockchain only when the user wants verifiability or doesn't trust the executing entity - the proof is checked before the state change lands on-chain.
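The flow above can be sketched as follows. Every name here (`Witness`, `traced`, `prove`) is invented for illustration; a real system would use a concrete proving backend and a chain client rather than these placeholders.

```python
# Sketch of the opt-in trust flow: normal execution with a witness
# collected alongside it, proof generated afterward, posted only on demand.

from dataclasses import dataclass, field

@dataclass
class Witness:
    """Execution trace collected alongside the normal run."""
    steps: list = field(default_factory=list)

def traced(fn, witness: Witness):
    """Wrap a function so each call is recorded in the witness."""
    def wrapper(*args):
        result = fn(*args)
        witness.steps.append((args, result))
        return result
    return wrapper

def prove(witness: Witness) -> bytes:
    """Placeholder: a real prover turns the witness into a succinct proof."""
    return repr(witness.steps).encode()

# 1. Normal, fast, centralized execution -- with tracing enabled.
w = Witness()
transfer = traced(lambda balance, amount: balance - amount, w)
new_balance = transfer(100, 30)

# 2. Proof generated after the run; 3. posted only if the user opts in.
user_wants_verifiability = True
if user_wants_verifiability:
    proof = prove(w)   # in reality: post proof + state transition on-chain
```

The key property is that tracing rides along with the single normal run - the centralized infrastructure keeps doing the work, and verifiability is layered on top only for users who ask for it.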
4. On Crypto and Blockchain's Real Value
Crypto is not a scam. But the ecosystem is overwhelmingly fixated on crypto as a financial instrument, at the expense of building anything meaningful on top of the infrastructure.
4.1. Why blockchains need tokens at all.
Every blockchain requires a native token to align validator incentives and prevent malicious behavior. Validators must be compensated for honest work and penalized for dishonesty, and that mechanism requires a token with real economic value.
For example, in Proof-of-Stake, security is directly proportional to the economic cost of attack - if the token is cheap, the chain is cheap to corrupt.
4.2. The natural value model.
As a chain grows - more blocks, more nodes, more decentralization, more valuable information stored immutably - the ledger itself becomes inherently more useful and harder to replace. The token that pays for gas on this ledger should appreciate naturally alongside this growth, because access to a more valuable, more secure ledger is worth more. This is a sustainable model: the blockchain's utility drives the token's value, not the other way around.
4.3. Where it goes wrong.
The problem begins when crypto is treated primarily as a speculative financial instrument. Once tokens behave like stocks - traded, leveraged, and pumped - they absorb external volatility: panic, macroeconomics, wars, speculation.
This destabilizes the very infrastructure the token is supposed to secure. A chain whose security budget fluctuates with market sentiment has a structural fragility that undermines its original purpose. Worse, it distorts the entire ecosystem's incentives. What we see today is largely a crypto casino. Most participants do not care about the infrastructure underneath. They are not building real applications or storing meaningful state on-chain. They are gambling - entering with the sole aim of extracting financial value from token price movements. They parasitize the blockchain's credibility without contributing to its utility. Meme coins are the purest expression of this: tokens with no underlying state, no application, no purpose beyond speculation.
The blockchain needs crypto to function - human greed is precisely the lever that makes a trustless, decentralized system sustainable. But it is the blockchain that gives the token its original, durable source of value, not the market. And the reason the blockchain has value is that it guarantees immutability and transparency of the states stored on it. The states stored on a blockchain are the ultimate source of its value.
4.4. Blockchain is a ledger, not a casino.
If we look at blockchain at this level, its real value is the data stored on it - which is its original purpose. A world ledger. A persistent, tamper-proof store for critical information. Blockchain was never only about crypto.
The proliferation of the crypto casino - despite the ledger storing nothing more than numbers governed by pre-defined rules - is itself evidence of how powerful the immutable ledger property is. Even with no real applications built on top, the mere guarantee of tamper-proof, decentralized state is enough to sustain an entire speculative economy. Consider the counterfactual: if Bitcoin's ledger were maintained by a centralized bank, following the exact same issuance rules and supply cap, it would not command remotely the same value. People would not fully trust it, because a centralized entity can change the rules, freeze accounts, or falsify records. What makes the ledger valuable is not the rules alone, but the fact that no single party can violate them. That is what decentralized immutability provides, and it is so powerful that even a casino can run on it.
The tragedy is not that speculation exists - it is that speculation is nearly all that exists. Part of the reason only the financial use case has developed is that we never found a good way to scale transaction throughput on-chain. The early vision was that everything could live on-chain - smart contracts enforcing arbitrary procedures, entire applications migrated from "web2" to "web3." That vision failed because on-chain computation does not scale (see previous section).
4.5. Verifiable computing as the missing bridge.
With the rise of AI agents and verifiable computing, there is finally a viable bridge between existing centralized services and blockchain infrastructure. Instead of forcing computation on-chain, we can run it off-chain - in agentic codebases, in existing systems - and post only the verified state transitions to the ledger. This lets blockchain do what it does well (store immutable, transparent state) without asking it to do what it does poorly (execute arbitrary programs at scale).
4.6. The next step of programming in the age of agentic AI.
As AI agents become capable of generating code at unprecedented speed and scale, the bottleneck shifts. The question is no longer "can we write enough code?" - it is "can we trust the code that runs?"
The next step of programming is to make each piece of code running on a machine inherently verifiable. The user does not need to understand every line - just as most users of smart contracts never read the Solidity source. But like smart contracts, the code must be open-sourced, so that anyone who cares can inspect the logic and confirm what it claims to do.
The verifiable computing library then closes the gap between source code and execution: it embeds into the program, collects a witness during runtime, and produces a proof that the open-sourced version of the code - for that specific function, with those specific inputs - was executed correctly. The proof, along with the resulting state transition, is recorded on the blockchain.
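As a loose illustration of what the proof must bind together, the sketch below uses a plain hash commitment as a stand-in for a real cryptographic proof. It ties the published source, the specific inputs, and the output into one record that any verifier holding the open-sourced code can recompute - though unlike a real proof, it does not by itself establish that the execution was correct.

```python
# Hash commitment as a stand-in for a verifiable-computing proof:
# it binds (open-sourced code, inputs, output) into one checkable record.

import hashlib
import json

SOURCE = "def add(a, b): return a + b"   # the open-sourced logic, as published

# Execute the published source exactly as anyone auditing it would.
namespace = {}
exec(SOURCE, namespace)
result = namespace["add"](2, 3)

# The record ties code, inputs, and output together; a verifier with the
# public source and the claimed inputs can recompute this same digest.
payload = json.dumps({"code": SOURCE, "inputs": [2, 3], "output": result})
record = hashlib.sha256(payload.encode()).hexdigest()
```

In the full model, this record-plus-proof is what lands on the blockchain: the chain stores the commitment and the verified state transition, while the execution itself stayed off-chain.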
This is the model: open-sourced logic, verified execution, immutable state.
4.7. Why low overhead matters.
For this to be practical, proof generation cannot impose significant overhead on the program itself. The program should execute once, in a native environment - no VM layer, no replicated execution, no interpretive slowdown. During that single native run, the embedded library collects the witness (the execution trace). After the run completes, the library generates the proof from the collected witness.
This is a dual-purpose execution model: the program runs once, natively, for its result; the library instruments that same run to capture the trace. The overhead lies in witness collection and proof generation, not in re-executing or simulating the program. With GPU acceleration and optimized protocols (such as GKR with a linear-time sumcheck), this becomes practical for real workloads.
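A toy illustration of instrumenting a single native run, using Python's `sys.settrace` as a stand-in for the embedded witness collector (a real library would record low-level operations suitable for a prover, not line events):

```python
# One native execution; the trace hook rides along and records a witness.

import sys

witness = []

def tracer(frame, event, arg):
    """Record which function and line executed; stand-in for witness collection."""
    if event == "line":
        witness.append((frame.f_code.co_name, frame.f_lineno))
    return tracer

def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

sys.settrace(tracer)      # instrument the single native run
result = fib(10)          # the program executes exactly once
sys.settrace(None)

# Proof generation would happen here, afterward, from the collected
# witness -- no second execution, no simulation.
```

The shape matches the model in the text: the cost of producing the trace is paid alongside the one real run, and everything cryptographic happens after the program has already finished.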
4.8. From casino to world ledger.
When speculation shrinks to a small part of the picture - today's meme coins fading into irrelevance - and real applications emerge with meaningful state stored across blockchains, connected to modern software systems through verifiable proofs, that is when blockchain finally moves in the right direction.
The arc is clear: agentic AI generates the code, verifiable computing proves the execution, and blockchain stores the resulting state immutably. Each layer does what it is good at. None tries to do the others' job. That is where blockchain can actually fulfill its original promise - not as a casino, but as a world ledger.