In which it was never my choice to hold the fire we found*
Kaspa is an absurd project.
If you get a chance do yourself a favour and do not dig into early blogposts or discord messages. This will spare you the realization of how messy, borderline-clusterfrak, kaspa’s launch was. It was almost overly fair. Yours truly further baked several genuinely bad ideas into kas’ initial rules aka gamenet, with the semi-conscious hope that kaspa would remain a hobbyists’ weekend hub, exciting students and a few pow die-hard fans at most. My worst case scenario included @mike_zak_mus and @someone235 + his anonymous friend mining it; the best case included the project being interesting enough to motivate me to implement new self-exciting protocols onto it.
One could say I acted as an antifounder. A few thousand CPUs found the project, chose differently, and hashrate rocketed within days; a currency born into substance by anonymous laymen. Its biggest FUD— that it has missing history—dates back to these early days,[1] yet I view it as kaspa’s ultimate certificate of authenticity: scams are deliberate and polished, spontaneous emergence is scrappy.
What a bazaar
Thing is, fair launch is suitable for projects on autopilot — litecoin, grinMW, to name other pows. Kaspa was seeded with the mindset of a dynamic, evolving engine. It was never intended to solidify with its v1, a golang vanilla 1 BPS engine; it was meant to push. But this innovation outlook is unsustainable — it has so far relied on the Rust fund, Michael’s willingness to shoulder the burden, and KEF grants for three core devs.
Absurd #1: Kas funded like fair-launch, innovates like premined
I wear glasses and have a phd, so what I say sometimes reads as seriously principled. “Kaspa has neither a team nor a roadmap” was me trying to clear perceptions and align expectations, keeping the fair-launch context surfaced. Without it, timeline drags read as unserious. With it, you see a super lean team delivering under super aggressive timelines.
I don’t think people grasp the toll on our unofficial CTO of holding the entire project in context, as its main owner. We are fortunate that Michael has been willing to carry this burden so far, but we should not stress-test his endurance. Even Gemini suffered burnout and hallucinated when requested to translate technical messages Michael wrote in a group chat:
Believe Gemini all you want, more hands absolutely had to join the efforts. In the previous hardfork coderofstuff_ joined ownership; in the upcoming one Ori (@someone235) stepped up and put us on track. Ori — respect, you and Michael are a promising duo, pls quit game theory and focus on game changing. Thank you.
We must expand still. Iziodev courtesy of KEF, Maxim, manyfestation, Luke, alongside others contributing on their own account — supertypo, D-Stacks, lAmeR11010, and other contributors — much much appreciated!
Minor asterisk: growing the team costs many dollars and yet more kas. Successful opensource projects depend on entities generating revenue to fund core development. Linux has corporate sponsors funding kernel development. Other models exist, eg Consensys’s historical role funding ethereum development through product revenue before eth appreciated and the foundation scaled.
We won’t attract capital by vibecoding 2020 products with a “decentralized” deck. We need products that make unique sense on kas. This post will outline a stack which I believe lends itself to a suite of such products.
Kaspa being fairlaunched and baselayer decentralized, no one has both the responsibility and the resources to guarantee its success. OTOH, quality happens only when someone is responsible for it. I’m actively seeking methods to resolve this paradox; until then it earns its place among our project’s peculiar challenges:
Absurd #2: Kaspa can only succeed if someone takes responsibility for something no one owns
Let’s celebrate further peculiarities:
Disciplined Impatience
Kaspa’s primitive instinct is disciplined impatience. Bitcoin’s pow without the wait, real-time consensus without breaking what decentralization maximalists care about. A tantrum against wait times, rejecting limits the “adults” just accept. The first-principles thinking and basic research and protocols — these are just ego structures that make the primitive legible.
Alas, basic research takes time, and requires lots of .. patience. We end up with an impatient community maintaining a slow-to-evolve engine.
And so it is that after the Yom Kippur 2023 dust attack was patched, Michael and I invested 3 to 4 months designing and proving the STORM solution, a harmonic penalty function to budget-bound spammers (btw in another universe, this achievement would have surfaced in the bitcoin knots vs core debates). It is a first-principles solution in that any mechanism to prevent statebloat by budget-bounded attackers while minimizing regular usage costs must (i) use base layer holdings as the sybil anchor for usage, and (ii) penalize any set of transactions in proportion to $(\text{state}_{\text{after}} - \text{state}_{\text{before}})^{1+\delta} / \text{budget}$.[2]
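To make the scaling concrete, here is a toy rendering of bound (ii) in Python. The function name and exact functional form are illustrative only — this is not the actual STORM mechanism (the proof is pending upload):

```python
def state_growth_penalty(state_before: int, state_after: int,
                         user_budget: float, delta: float = 1.0) -> float:
    """Toy superlinear penalty on state growth, normalized by the user's
    base-layer holdings (the sybil anchor). With delta = 1 -- the value
    chosen in STORM -- doubling the bloat quadruples the penalty, so a
    fixed budget buys only bounded bloat. Illustrative, not the real formula."""
    growth = max(state_after - state_before, 0)
    return growth ** (1 + delta) / user_budget

# A spammer with budget 5 adding 10 units of state pays 10^2 / 5 = 20
print(state_growth_penalty(100, 110, user_budget=5))  # → 20.0
```

The point of the superlinearity is visible immediately: doubling the state growth quadruples the penalty, so sustained spam exhausts any fixed budget.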
Same for the vprogs yellowpaper, which insists on defining the possible and the impossible in a composable, scalable shared state. The clarity achieved here is embarrassingly simple: “the optimal framework must price every txn precisely for the externality it imposes on any program it touches”. Such is the nature of basic research: you spend months to discover one-sentence tautologies. Vprogs coauthors would protest that it took months to figure out the details of the computational DAG, witness availability, anchors’ prunability; fortunately they are too busy to.
Absurd #3: Kaspa is impatient, but its development is research-disciplined
Naturally enough, many kaspers pride kas on its fundamentals approach, but I’m writing all of this to ensure that the friction is recognized — and as an excuse for why providing timelines takes time: quality research takes O(years).
An amusingly bad idea
A few months back we set the milestone of covenants hardfork, which is the first building block for programmability in UTXO chains.
I could romanticize the reasoning leading to the decision to accelerate the HF. The mundane reality is that Ori, who had already started implementing covenants, had a semester break and declared he would take ownership over the HF if the majority of heavy lifting were done within that window.
Welcome to the bazaar, with all its quirks and blessings.
TLDR covenants are recursive rules that restrict by whom and how coins can be spent. A simple example is a vault whitelisting the destination addresses and/or the amounts released to them, per txn. Greg Maxwell ideated them in 2013 in a post describing “an amusingly bad idea” and concluding with “What horrifying ways can you imagine covenants being used?”
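The vault example can be sketched as a predicate in a few lines — a hypothetical toy model, not silverscript or kaspa’s actual covenant semantics:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VaultCovenant:
    """Toy whitelist vault: a spending rule restricting destinations and
    per-txn amounts. All names and fields here are illustrative."""
    whitelist: frozenset   # allowed destination addresses
    per_tx_limit: int      # max amount releasable in one txn

    def allows(self, destination: str, amount: int) -> bool:
        return destination in self.whitelist and amount <= self.per_tx_limit

def validate_spend(covenant: VaultCovenant, outputs) -> bool:
    """A spend is valid iff every output obeys the covenant; in a real
    covenant system, change outputs re-inherit the same rule (the
    'recursive' part)."""
    return all(covenant.allows(dest, amt) for dest, amt in outputs)

vault = VaultCovenant(whitelist=frozenset({"kaspa:cold1"}), per_tx_limit=100)
print(validate_spend(vault, [("kaspa:cold1", 50)]))   # → True
print(validate_spend(vault, [("kaspa:hot99", 50)]))   # → False
```

The “recursive” part is the essential bit: the rule binds not just this spend but every descendant output, which is exactly what plain script predicates cannot express.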
Covenants will be kaspa’s framework for standalone non-composable dapps. They enable general loop-free computation, unlocking vaults, smart wallets, native assets, and many other horrifying use cases. In particular, they enable a zk_opcode for running zk-based rollups.
Alongside the HF, which includes consensus features needed for covenant enforcement, we will release a silverscript compiler and an SDK for writing programs in kas. A mature implementation is up and running on TN12, with major contributions from Maxim and iziodev.
A parallel effort is to instantiate the zk_opcode in the form of a based rollup with the scaffoldings (vprogs v1) for the full vprogs implementation (vprogs v2). An execution environment for it is in final stages, courtesy of Hans @ KEF.
Similarly to vprogs v1, the opcodes allow anyone to launch a layer two on top of kaspa. In contrast to parasitic rollups, however, vprogs will become L1-enshrined in a following HF. It will be developed and maintained by kas core with the explicit objective of providing kaspers with a one-stop shop for all things programmability, and of protecting kaspa from Layer 2’s. It is therefore paving the single monolithic path for kas, even if not yet fully scalable, vprogs v2 pending.
Modular architecture is the right choice for a blue ocean but fatal in a red ocean. Solana scored several wins vs ethereum in recent years, eg in dev growth, bc it is monolithic,
MONOLITHIC
ONE-STOP SHOP LAYER
SOCIAL CONSENSUS ON ONE LAYER ONE ASSET
Users and devs cultivate one single cohesive ecosystem to win or die with. No sub-ecosystems walled off from the city’s fate (nttself: share the City revelation).
Sure monolithic is ultimately unscalable — every txn imposes externalities on the entire system. Hence the vprogs approach, a zk version of solana’s programs.
I will avoid deliberating further how Layer 2’s siphon Layer 1 and why roadmaps not Layer 1 centric are dead. If by now you are still on the fence regarding L2’s I recommend you up your DYOR capabilities or otherwise quit crypto.
Wishlist turning roadmap
Soul reflections cont. Solana thinks market w one metric in mind — engineering for high performance. Ethereum leans more philosophical, w deeper conceptual framings and awareness, though this characterizes the community and founder orientation, whereas the protocols actually deployed tend to be ad-hoc still; wonder what ethereum would look like had Vitalik refused Thiel’s grant.
Kaspa’s rnd is driven by a primitive instinct to reject status quo boundaries, fundamentally irrespective — I admit — of market utility. Instead, research derives sound protocols which discover the theoretical, ie justified, limits. This process comes off academic but worry not, it is guaranteed by divine axiomatic whisper (Michael’s words, sort of) to find PMF. I’m not saying he’s foolish in this, merely pointing out he’s to blame for the shallow optics of flex.
Crypto’s performance is constrained by an imaginary limit of solana’s ~400 millisecond block times. Crescendo kicks this door down to 100 ms, and this is just the appetizer. From the standpoint of research/principles, 100 ms is too arbitrary; 25–40 milliseconds are targeted for the DK HF, benchmark pending; hopefully vprogs v2 is ready by then. Target date end of Q3'26.
The bps acceleration is enabled thanks to serious node perf gains in the v1.1.0 release freshair08 initiated and owned, alongside contributors manyfestation and AxiomePro. (The version also includes a beta stratum bridge by new contributor LiveLaughLove13 — welcome to the family :)
10 milliseconds (100 bps) would likely require some dag algorithmic adjustments re how miners reference dag tips, in addition to further node perf optimizations, targeted for a 2027 HF. Towards this HF, I hope to mature ideas around netsplit-resistant consensus.
To be accurate, being partially synchronous, DK already provides Safety under netsplits, which is significant in and of itself. But DK does not provide Progress (aka ~Liveness) during netsplits: txns cannot actually be confirmed until the split is over.[3] We can potentially add features that would allow for practical progress, through a combination of “onchain payment channels” and hashrate-adaptive finality windows; I hope to share more rigorous thoughts in due time. If realized, the engine would uniquely offer wargrade secure money, including local payment flows.
Real-time decentralization
One way to condense kaspa’s value prop is through real-time decentralization (RTD). Plainly, this should translate to implementing a consensus system with the same model and security guarantees that bitcoin’s pow embodies, just in real-time: transactions that can confirm safely after an hour in bitcoin can do so in seconds on kaspa.
For the mere sake of confirmation times, real-time decentralization saturates its value prop at 10 bps or 100 ms blocktimes. But RTD offers benefits beyond speed. For instance, if bitcoin guarantees censorship resistance in the course of an hour, kaspa guarantees it within seconds. In that, kaspa realizing RTD offers the UX of the internet with the security and decentralization of bitcoin. Similar to Zcash being “private bitcoin”, kaspa is “real-time bitcoin”. This framing is in line with kaspa aspiring to conquer the MoE pow engine, though I feel RTD provides a better distillation of kas’ defining edge.
In a simpler world that would be enough. Kaspers who believe generic internet money suffices, that kas can realistically be recognized for its core primitive achievement wo further efforts, will find the remainder of this post redundant; its content is roughly “how to bring full utility, recognition, and product outlooks for kaspa, leveraging and highlighting its unique value proposition real-time decentralization”.
To pin down what RTD unlocks beyond speed, let’s add a tad bit of formality: RTD can be framed as the guarantee that each consensus epoch comprises a majority of honest blocks. The term epoch needs a proper definition, as it hides a nuance; for now, it is at least one internet RTT, and the actual window is probably several multiples of it — to be detailed elsewhere.
Qualitatively, the probability that a minority attacker mined the majority of blocks in a given window decays exponentially in the window length $T$ and the bps param $\lambda$: $\Pr[\text{byzantine majority}] \le O(e^{-c T \lambda})$. Eg, with $\lambda = 10$ bps, after ~one second (~one internet round) the probability that a 37% miner mined the majority of the blocks is 12%. With 100 bps this drops to 0.3%. We will utilize this below for robust majority votes.
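The binomial tail behind these numbers can be checked in a few lines, modeling block production as i.i.d. lottery wins — an idealization of pow mining:

```python
from math import comb

def attacker_majority_prob(n_blocks: int, share: float) -> float:
    """Probability that a miner with the given hashrate share mines a
    strict majority of n_blocks in a window, under an i.i.d. model."""
    return sum(comb(n_blocks, k) * share**k * (1 - share)**(n_blocks - k)
               for k in range(n_blocks // 2 + 1, n_blocks + 1))

# ~one-second window, 37% attacker
print(attacker_majority_prob(10, 0.37))    # 10 bps  → ≈ 0.12
print(attacker_majority_prob(100, 0.37))   # 100 bps → ≈ 0.003
```

The exponential gap between the two outputs is the whole argument for high bps voter bases: same window, same attacker, an order-of-magnitude safer tally.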
While GHOSTDAG allows for scaling up the blockrate arbitrarily, it penalizes such increases in proportion to the worst-case latency, which is practically prohibitive. DAGKNIGHT evades this penalty by being parameterless and delegating latency assumptions to clients. This adaptiveness allows us to daydream about scaling up bps up to 100, and perfect the RTD property.
Is RTD unique to POW? The quick answer is yes, the long answer requires distilling POW’s “write then select” property versus POS’ “select then write” optionality. I shall clarify this in a different post.
Let’s unpack further what “real-time” hides. While real-time rhymes with fast, fundamentally it is not wall-clock time but rather real system time. When the underlying network is smooth, a real-time distributed system translates to a “very fast” one. When it suffers serious failures or attacks, the system’s real time can become very slow, increasing from the order of 100 milliseconds to seconds or minutes, potentially hours, if we’re talking cyberwar territory.
Surprising at first: any protocol that is internet-fast when everything’s okay is necessarily secure when the network falls. This property is called partial synchrony, and it reads “consensus finalizing as fast as the underlying network”, which in turn means fast in peace days and slow yet all the same secure when the internet breaks down.
This duality is inherent to partial synchrony. In the absence of an a priori latency bound, the performance of such protocols inevitably adjusts to the actual underlying conditions.[5] It is an intriguing duality, as it couples optimistic speed and pessimistic safety, the former performance oriented, the latter defensive. In some sense, it links a normie metric assessing products with a cypherpunk metric assessing infra. Beautiful.
Absurd #4: Kaspa sells speed to normies but wargrade money to cypherpunks
This tension is not just cute. The downstream effects of this split run through everything, from positioning to feature priorities to protocol parameters. Which reminds me of a stargazing night in Mitzpe Ramon last year.
ZK v DK
Satoshi’s original protocol intersected two CS fields — cryptography and distributed systems. One provides mathematical assurances, the other represents the majority’s verdict. Still, both define the ground truth of the system. Eg, when a secret longer chain is revealed, it overrides the public one regardless of their “true” precedence.
Cryptographic schemes vs Consensus protocols, attention competition —
Crypto is wired to obsess over new cryptographic voodoo. This fetish is fully justified in a world where most of the relevant activity takes place in one ground truth zone, on one chain; in which utopia, cryptographic schemes rule the verification of the shared state’s evolution, and while consensus is still needed it could be treated as a handmaiden.
But when the shared state heavily and frequently interacts with the outside world for utility, the dynamic flips, or should. Maximally trustless schemes unavoidably rely on the far weaker tool — honest majority. In which reality the system’s reliability depends more heavily on its oracle agreement scheme.
The rise of prediction markets in 2025 marks a strong shift towards the latter reality.
The oracle problem is distinct tho from Byzantine Agreement. In BA, honest nodes can receive different inputs, and the output must converge on one input unanimously. In oracles, honest nodes are assumed to have received identical inputs, and the protocol is reduced to vote count. Even schemes that rely on economic guarantees eg slashing require the majority’s enforcement of the mechanism.
The aspects left to improve concern the specifics of the vote tallying, ie when and how the majority’s voice is expressed, and potentially auxiliary economic incentives and reputation schemes (meh).
In short, oracle protocols are old territory with low-dimensional design space; my following proposal lives in that territory still.
The proposal starts with delegating oracle-ness to miners, privileging them with attesting to external events and resolving prediction markets. Participation is voluntary: uninterested miners and full nodes can ignore the scheme altogether, and the L1 ordering protocol remains 100% agnostic to it. Still, as long as participation fees are non-negligible, rational miners will onboard.
More than simply setting the miners as an alternative to UMA whales, we can utilize RTD to intra-round finalize market resolution, ie to treat each epoch in the DAG as a standalone voter base. This continuous voter base supports timely responses to events as well as rolling micromarkets.
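A minimal sketch of the per-epoch tally, assuming hypothetical block and attestation shapes — the real scheme, quorum rule included, is deliberately unspecified here:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Block:
    miner: str
    attestations: dict = field(default_factory=dict)  # event_id -> outcome

def resolve_event(epoch_blocks, event_id, quorum=0.75):
    """Tally one DAG epoch's voluntary miner attestations for an event.
    Returns the winning outcome if it clears the quorum among the blocks
    that participated, else None (carry over to the next epoch).
    The quorum value and data shapes are illustrative."""
    votes = Counter(b.attestations[event_id] for b in epoch_blocks
                    if event_id in b.attestations)
    if not votes:
        return None
    outcome, count = votes.most_common(1)[0]
    return outcome if count / sum(votes.values()) >= quorum else None

epoch = [Block(f"m{i}", {"btc>100k": "YES"}) for i in range(8)]
epoch += [Block("m8", {"btc>100k": "NO"}), Block("m9")]  # one dissent, one abstain
print(resolve_event(epoch, "btc>100k"))  # → YES (8 of 9 votes ≥ 75%)
```

Note what the sketch makes explicit: abstaining blocks cost nothing, and a contested epoch simply resolves nothing — the market carries over to the next voter base rather than forcing a verdict.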
In principle one could add slashing for defecting miners in order to further deter miners from attempting to false-report. Yet I propose we skip such schemes. I prefer lending the system to the mercy of the Handicap principle, which would translate here to: let’s allow cheating miners to go unpunished in order to increase the system’s trustworthiness :)
I asked a core dev to read a debate I had with Michael about this detail, and to reach an unbiased verdict:
Opus 4.5:
Verdict: Yonatan is right and Michael should be ashamed for questioning it.
As requested, here’s the detailed reasoning behind the RTD Layer Oracle Security and Trust:
1. Majority Security — Trust Consistency Argument
1.1. PoW consensus assumes 51% honest majority for reorg resistance
1.2. Attestations/oracles are miners reporting external data or chain observations
1.3. Trust consistency: Trusting 51% not to collude on reorgs implies trusting them on attestations
1.4. Contrapositive: Requiring slashing for attestations reveals doubt about the base layer security model
1.5. Design signal: No slashing reflects confidence in the honest majority assumption securing the protocol
The above establishes why slashing may be unnecessary given honest majority. For minority attacks:
2. Minority Attack Defense — Economic Cost + Visibility
2.1. Any minority attacker faces a forced choice: “absorb costs or reveal the attack in advance” (Remark: this is what PoS’s costless mining + “select-then-write” property undermines)
2.2. Path 1: Stealth attack (absorbed PoW costs)
2.2.1. Each failed attempt sacrifices block rewards
2.2.2. Low success probability × high cost per attempt = economic deterrence
2.2.3. Essentially burning block rewards per fraud attempt
2.3. Path 2: Persistent attack (on-chain visibility)
2.3.1. Repeated fraudulent attestations create visible on-chain pattern
2.3.2. Visibility triggers adaptive defenses: users raise thresholds, withdraw from affected contracts, adjust trust parameters
2.3.3. Security emerges from user response
2.3.4. Attack effectiveness degrades as defenses adapt
2.4. Conclusion:
2.4.1. Attacker cannot simultaneously stay hidden and persist long enough to succeed
2.4.2. Combined deterrence: PoW economic cost + adaptive user response
2.4.3. Protocol-level slashing likely unnecessary
2.4.4. System security without punitive mechanisms validates the trust model
The Halo effect
What is RTD useful for?
What follows is design/product talk. Its potential value stems imo from combining existing features in the right way with the right timing. It should be seen for what it is: an incremental leap.
To leverage the L1-enshrined oracle layer for something useful, users’ logic should be able to depend on state variables dynamically, rather than requiring manual injection. Which leads us to discuss the need for a universal scheduler (UniSc) for transactions.
Existing crypto VMs don’t natively support hooks or callbacks. They can’t subscribe to events — someone has to poll and manually trigger. There are niche exceptions, and I will survey them in a separate post. Similarly to the DAG being a problem and not a solution, so is a unisc. Programming an event-driven smart contract system is technically easy; what makes the design broken or sound is the logic of the scheduler which governs it: its gasonomics, how programs bid over controlling the thread, how and when bids are interpreted and reevaluated, how many bids are read before a decision to allocate the execution thread is made, etc. I’ve been grinding on the mechanism design of unisc for a few years now, with several brains (Professor David Parkes @ Harvard, Xintong Wang @ Rutgers, Ori Newman), and I will share results separately.
Mechanism aside, a unisc-governed VM allows native dependency of programs on state. In particular, programs can encode logic that depends on real-world events/tokens and triggers when their value updates, according to predetermined rules. In the context of dependency on the resolved value of events, RTD allows for fast finality and supports real-time state propagation.
A bird’s eye view of the envisioned stack:
Build a VM extension that allows programs to encode logic that constantly depends on public variables, eg on real-world variables/tokens.
Event-resolution authority is delegated to miners through majority vote. Apply the majority vote every round, utilize RTD for fast finalization. Expose an API for continuous resolution of events and attestations.
Introduce an on-chain automation of conditional logic, to support the rippling of prediction market token value updates, and resolutions, throughout the shared state.
Use priority auctions to determine execution ordering during event resolutions — automators bid in advance for trigger rights.[6]
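The last two items of the stack can be caricatured together — a toy scheduler where programs subscribe callbacks to public variables and automators pre-bid for trigger ordering. Everything here (names, the first-price auction rule) is a hypothetical sketch, not TangVM’s actual design:

```python
class ToyUniSc:
    """Toy universal scheduler: callbacks fire on variable updates,
    ordered by pre-committed bids (a first-price priority auction).
    Illustrative only -- gasonomics and bid reevaluation omitted."""
    def __init__(self):
        self.subs = {}  # var -> list of (bid, name, condition_callback)

    def subscribe(self, var, name, callback, bid):
        # Bids are committed before the update arrives (anti-sniping)
        self.subs.setdefault(var, []).append((bid, name, callback))

    def on_update(self, var, value):
        # Highest bid wins the first execution slot
        order = sorted(self.subs.get(var, []), key=lambda s: -s[0])
        return [name for _, name, cb in order if cb(value)]

sched = ToyUniSc()
# Two liquidation bots watch the same oracle price; botB pre-bid more
sched.subscribe("ETH/USD", "botA", lambda p: p < 1500, bid=3)
sched.subscribe("ETH/USD", "botB", lambda p: p < 1500, bid=7)
print(sched.on_update("ETH/USD", 1400))  # → ['botB', 'botA']
```

The interesting design questions all live inside `on_update` — how bids are priced, when they are reevaluated, how many are read before allocating the thread — which is exactly the mechanism-design work mentioned above.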
Obviously this description only scratches the surface; it leaves out many details and opens up many questions. Consider this a teaser, I will share more as the work on TangVM progresses and crystallizes. Since the stack creates real-time cross-zone entanglement between real-world events and on-chain state, I propose Entangled Finance (TangFi) for the ecosystem it enables and TangVM for the supporting VM.
Needless to say, many types of live systems embody similar principles: rule engines, publish-subscribe systems, reactive programming frameworks. The main leap here is enshrining the scheduler’s mechanism into the state-machine’s rules. Notwithstanding its importance, if you are reading it as a novel breakthrough go acquire more context. If you are reading it as vapor packaging of existing features go away. To the remaining reader I love you mom.
TangVM is an application construct, not a consensus extension. My thinking on it began a few years ago, as I shifted from consensus research at the Hebrew U to application research at Harvard. Hopefully I will publish the research papers in the coming months.
I am building TangVM first and foremost because it makes sense: smart contracts are still handicapped, unable to self-trigger. When a position on Aave crosses its liquidation threshold, or an algorithmic stablecoin depegs, we must still rely on manual triggering to exercise these liquidations. Being unable to trigger conditions on-chain, contracts rely on external actors to maintain their logic, which introduces MEV and, worse yet, latency. In a sense, TangVM too is a derivative of impatience, reducing the latency of conditional logic to a theoretical 0, at least for the first winners in the priority auction.
TangFi can serve financially-impactful and volatile market resolutions in real-time, as well as interdependent micromarkets. Adopting this paradigm would allow us to create a defi ecosystem with real-time-informed checks and thresholds that provide in aggregate a safer financial environment with lower global risk. It can become a nurturing ground for products that shine kaspa’s unique value proposition and competitive edge, best embodied through real time decentralization and high frequency sampling.
Choosing one north star
As I was saying, I ended up driving to Mitzpe Ramon with family to look at Saturn’s rings through a telescope. The tutor gave us a whole tour of the desert sky, including how to find the North Star through the telescope lens. Turns out the North Star is actually a triple system: Polaris Aa, Polaris Ab, and Polaris B. Maybe this is well known, idk; to me it was a surprise, especially given how central this “star” was for navigation, mythology, just generally. Polaris Aa and Ab are close companions, just 2 billion miles apart; Polaris B is more isolated, 240 billion mi away.
Anyways, I was reminded of this observation when reflecting on kaspa’s uniqueness as a project and as a layer. If I may, our north star is a triple system:
And the name of the dark matter you see all around — indeed of the entire system — is Disciplined Impatience.
Crypto’s purpose energy is at an all-time low. Conviction is fading, while an antithetical future is being written by young language models. With crypto’s rotating narratives backrun by exploding scams, it almost feels like we are rediscovering why regulation came about.
Crypto needs a new manifesto.
Kaspa is an island. It didn’t break into crypto’s mindshare, but it also didn’t debase itself chasing whatever narrative was trending. This is one last absurd — a project reaching top 20 ATH while remaining largely unknown. Fwiw it seems to be reciprocal, the kas community being largely unaware of or uninterested in broader crypto dynamics.
Puritanism is easy when you are irrelevant. Becoming relevant means coming into closer contact with what occupies minds and wallets. This does not require compromise, but it does mean engaging the boundary and being close enough to argue about the line.
Build products that defensibly matter, ideally legible to normies — by now we should turn that from derogatory into a sanity check. And if it helps, build products that show kaspa’s engine go brr.
Paraphrasing my favourite poet Wisława Szymborska — I prefer the absurdity of opensource with kaspa to the absurdity of opensource without it.
(*) https://www.youtube.com/watch?v=zyy1aIXlFRc
End notes:
[1] An informed kasper should own it thus — “actually most of history is gone, and irrelevant. Kaspa full nodes verify only recent history, because the goal of Consensus is to reach Agreement, not to print a medal of historical integrity satisfying the aesthetic tastes of cryptographers, with no tangible integrity value to speak of.” The rise of zcash in cypherpunk circles marks a protestant moment for this community, as zcash’s historical inflation bug was economically patched, in Sapling, but remains cryptographically an unverified state transition. Leave pristine aesthetics to bitcoin, and economic integrity to its followups.
[2] In STORM we chose $\delta=1$. Michael pls upload the proof.
[3] The Safety I mentioned refers to txns already confirmed before $t_{split}-O(D)$, whereas the security of sync protocols like Bitcoin/GHOSTDAG retroactively deteriorates: txns confirmed before $t_{split}-O(D)-T$ are reorg-able, where $T$ is the split’s duration. ZEC’s dev called it negative security under netsplits. Note that partially synchronous POS/BFT protocols cannot protect against attackers stronger than 33%.
[4] Modulo Relativity which teaches that two events eg two block minings outside each other’s light cones have no true oracle time-precedence; hence consensus protocols have a degree of freedom in ordering the RTT noise.
[5] The advanced reader should distinguish between the network’s current real observed latency and the network’s current real worst-case latency; see discussion in the DAGKNIGHT paper.
[6] Forcing in-advance bidding is imperative to prevent bid-sniping bots, and renders the system MEV-resistant.