[2026-04-01] [staghunt] [coordination-markets] [RTD]
Project Staghunt
A Six Pager on Coordination Markets

1. Azazel

Every generation a new social theory offers a fresh “solution” to “society”, galvanizing our freshmen and the existentially unmoored. Workers of the world unite, tax the rich, check your privilege. But before we Greta ourselves into the abyss, we should less enthusiastically analyze when and how our current system - capitalism - fails or underperforms: What specifically fails when you give agents the autonomy to gather information, make judgment calls, choose actions, bear consequences? Oh! you cry, The Prisoner’s Dilemma! Aha The Moloch!! But the games we actually play suggest a different response, one that promises less than social-welfare optimization, but also demands less than rewiring our selfish instincts and slaying the Moloch. Namely, mechanisms for committable and actionable coordination.

Consider Rousseau’s stag hunt: Two hunters can collaborate on a stag and eat well, or each hunt a hare and eat for a few hours. Both are better off coordinating on a stag – wherein neither is incentivized to defect! – but moving from hare to stag alone makes you worse off. If they are stuck in a (hare,hare) equilibrium, the challenge is not defection from cooperation but failure to reach the (stag,stag) coordination. If Moloch is the demon of defection, this is a different demon. I call it Azazel - the deity of the wilderness, pre-civilization, where humans wander alone and no associations form.

The distinction between Moloch and Azazel matters because these two demons require very different remedies. If you believe society is trapped mostly by Moloch, the natural response is to build central institutions that restrain selfish behavior. Central planning, regulation, social-justice activism, superintelligence singletons. But if the problem is mainly Azazel, the remedy changes: people’s behaviour doesn’t need to be corrected or coerced; they already want to collaborate. What we lack, in the Azazel-Staghunt framework, is “merely” mechanisms to communicate and bind shared actions. In short, we need Coordination Markets.

2. Why Cheap Talk Is Not Enough

Coordination Markets (CMs) are a category of markets built around binding primitives for coordinated action. These primitives compose, condition on external state, respond dynamically to market activity, and settle atomically. Improving the internet through better communication channels does not suffice, since cheap talk is non-binding. In many scenarios, moving alone is risky: hunting a stag alone risks hunger and injury, rioting alone against a despot risks death, bootstrapping a liquidity pool alone risks getting eaten alive by arbitrageurs. The missing layer, therefore, is not communication but binding intents, otherwise known as assurance contracts (Bagnoli and Lipman 1989). An assurance contract is a primitive allowing one to express and enforce conditional commitments of the form “I will do X if N others do X.” In our stack, these conditional commitments will be called intendos.

None of this is fundamentally new. PledgeBank ran conditional commitments from 2005, but the binding there was enforced by social pressure in small groups. The Point built assurance contracts in 2007, then pivoted to Groupon - more demand aggregation than assurance contracts. Kickstarter owned assurance contracts for creative projects, but remained niche.
Each of these attempts either collapsed into a single vertical or was heavily biased toward petitions and activism, which is an esoteric death sentence. Instead, we should take a market approach to coordination. A market framing has many implications; three immediate ones:

First, endogenous incentives. Participants communicate their intents and commit not (merely) because they believe in the cause, but (also) because the mechanism makes commitment individually rational.

Second, a market approach is unopinionated: it should engineer for no specific agenda or world order (e/acc, radical markets). Market rails should remain open to whatever coordination problems and causes users initiate. In fact, if CMs are to unlock even a fraction of their potential, the rails should be optimized for unopinionatedness - designed for hard resilience to pressure, extortion, manipulation. This also means the mechanism should support heterogeneous preferences and risk thresholds, rather than hard-code a single safety threshold. A tenured professor might come out of the closet provided 5 colleagues do so; an assistant professor might require 50; one LP is comfortable with a liquidity bootstrap floor of $1M; another is risk-averse and demands $10M.

Third, the mechanism should be able to condition on external state. Conditions should be able to reference on-chain data, real-world events, and other facts verifiable on the web. Intendos should also be able to condition on other intendos, chaining coordination markets into multi-stage sequences.

But before any of that, CMs must allow shielding one’s intendo. Even before the move-alone risk, being the first to signal willingness is itself a primary source of risk: revealing one’s political preferences in a hostile environment, disclosing one’s financial intention in the face of arb bots, exposing a social cause when it is still small and can be killed. Allowing for confidential intents should receive axiomatic treatment. On the other hand, collaborating in a dead dark forest is unlikely to catch momentum. Our attention is scarce, and the design should therefore borrow from and interleave with attention markets. It must foster some social p2p dynamic and/or incentive structures that outweigh the mental load of considering one’s stance and opportunities.

3. The Mechanism

A social or financial entrepreneur believes she spotted a Stag - a certain opportunity that increases the payoff of its rational (selfish) participants. She posts a Hunt, and individual users or agents discover it and contemplate. An agent deciding to join cryptographically signs an intent to join - an intendo - conditioned on a sufficient number of others joining too. The signed intendos accumulate and Pack, in an opaque process, during which more agents opt in but also may opt out at will. Once some subset of agents crosses the threshold, internally satisfying the threshold conditions of all of its members, the Hunt snaps and executes together.

Glossary.
Stag - coordination market instance; defines target outcome, execution logic, and community eligibility.
Pack - accumulation phase; intendos cluster around a Stag, aggregate state is opaque.
Hunt - resolution, the pack hunts: a qualifying subset is found and execution triggers atomically; remaining intendos persist for future resolution.

Hunts can compose (through expressive intendos) - users switch platforms and LP positions migrate to the same venue; public endorsements commit and capital deploys to the endorsed cause - all in one atomic event.
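To make the primitive concrete, here is a minimal sketch of an expressive intendo in Python. Everything in it is an illustrative assumption rather than a spec - the field names, the `World` lookup keys, the idea of representing conditions as predicates; a real intendo is a signed, enforceable message, not an in-memory object:

```python
from dataclasses import dataclass, field
from typing import Callable

# World = whatever verifiable state conditions may reference:
# on-chain data, attested real-world events, resolved hunts.
World = dict

@dataclass
class Intendo:
    agent: str                                    # illustrative; really a signature
    action: str                                   # e.g. "subscribe $5/mo", "move LP"
    min_others: int                               # the assurance threshold
    conditions: list[Callable[[World], bool]] = field(default_factory=list)

    def ready(self, world: World, others_committed: int) -> bool:
        return (others_committed >= self.min_others
                and all(cond(world) for cond in self.conditions))

# Chaining: this intendo fires only if another hunt already resolved
# and an externally attested fact holds - a two-stage coordination market.
i = Intendo(
    agent="alice",
    action="deploy $1M liquidity",
    min_others=50,
    conditions=[
        lambda w: w.get("hunt:new-platform-launch") == "resolved",
        lambda w: w.get("oracle:platform-users", 0) >= 100_000,
    ],
)
```

The `conditions` list is what makes intendos chain: the first predicate references the resolution of another hunt, the second an external fact, which is how two markets compose into a multi-stage sequence.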
Four properties are axiomatic for the core primitive of CMs:

i. Coordinated Atomicity. The commitments of the qualifying subset - the participants whose thresholds are met together - execute simultaneously across all participants; no partial execution, no gradual commit, as this would undermine the assurance.

ii. Accumulation Opacity. No-one - neither participants nor operators - can determine how close the initiative is to activation. The threshold crosses or it does not.

iii. Capital Multiplexing. Users can co-commit the same capital across different markets (eg a user backing a liquidity bootstrap and a supply lock with the same $1000); whichever commitment activates first applies. Crucial for capital efficiency, UX scalability, and participation in overlapping markets.

iv. Composability. The output of one market is a valid input to another, and can compose with other shared state contracts. Coordination markets as lego blocks.

Both composability and capital multiplexing are unique to crypto rails. Traditional payment services operate behind isolated APIs: you can’t lego-box API calls, and you can’t pre-authorize pull-based payments without statically delegating your funds to the service provider. Any mechanism satisfying these four properties can facilitate large-scale coordination between people and agents who share similar preferences but couldn’t, so far, safely and practically express actionable coordination. It can compose populations with different resources and risk profiles, allow them to feed each other’s assurance thresholds, and trigger downstream execution that neither population could bootstrap alone.

4. Stags in the Wild

A broad category where CMs have particular utility is escaping network effect traps. Apps that “suck but everyone’s using it”, or the media platform that’s blatantly lying but everyone’s watching it. CMs allow the public’s true preference to materialize as concrete switching plans; in formal terms, they aggregate demand and alter the focality.

Example 1: Liquidity migration. Superior DeFi machinery exists but liquidity is stuck on legacy tech - $5.7B on BNB, for instance. LPs don’t move because moving alone means providing liquidity to an empty pool. Utilizing CMs, each LP can set a threshold for migrating to kaspa - $5M, $50M, $500M - whatever they need to feel comfortable. Their capital stays productive on BNB until a subset’s thresholds are met and resolved. The same capital can back multiple campaigns to exit BNB to different platforms - solana, kaspa, tempo. Individuals can double-sign conflicting intendos (!) and whichever triggers first applies.

Example 2: Content platform bootstrap. Netflix and Disney+ push a woke agenda, and large segments of the user base - parents - resent it. Sure, HBO and Amazon exist, but they are not huge for family content. As a concerned parent you need to know the other people ranting on the internet are willing to actually act, not just complain. Entrepreneurs face the same obscurity from the other side: If I launch a streaming service with a neutral or conservative agenda, how many of those ranting parents will actually subscribe? This pure coordination failure is solvable by CMs: Users can intendo-commit subscription fees to a new platform, and provided a sufficient number join, the platform launches and charges.
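The double-signing rule in Example 1 is worth stating precisely. A minimal sketch of the first-trigger-wins semantics, with invented names (`MultiplexedCapital` is not a real construct, just an illustration of the check-and-set the settlement layer would have to enforce atomically):

```python
class MultiplexedCapital:
    """One pot of capital co-signed into several hunts; first snap wins."""

    def __init__(self, amount: int):
        self.amount = amount
        self.consumed_by: str | None = None   # hunt that triggered first, if any

    def backs(self, campaign: str) -> int:
        # Until some hunt snaps, the full amount is pledged everywhere at once.
        return 0 if self.consumed_by else self.amount

    def consume(self, campaign: str) -> bool:
        # Atomic check-and-set: only the first triggering hunt gets the funds;
        # every other intendo signed with this capital silently lapses.
        if self.consumed_by is None:
            self.consumed_by = campaign
            return True
        return False

pot = MultiplexedCapital(1_000)
pot.consume("exit-BNB-to-kaspa")    # True: this hunt snapped first
pot.consume("exit-BNB-to-solana")   # False: capital already deployed
```

That check-and-set is exactly what isolated payment APIs cannot provide across providers, which is why multiplexing was listed above as unique to crypto rails.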
Attention markets are designed to amplify organic propagation dynamics of some underlying social network. Ideally, we would wish for coordination initiatives to propagate spontaneously, via group chats, community forums, DMs. Yet whenever we enforce Accumulation Opacity (Axiom ii) we are shielding participants’ intendos from their peers, which goes against organic social discoverability. Can we preserve p2p propagation while maintaining privacy and deniability? Designated Verifier Proofs (DVPs) to the rescue. DVPs enable off-the-record messaging: they allow a prover to convince a peer, or a set of peers (Multi-DVP / MDVS), that a certain statement is correct whilst maintaining full deniability in case the proof is leaked. In practice this looks like group chat messages that are internally verifiable yet practically unleakable.

Besides attention dynamics, initiators of stags should fund the continuous computation of the state of the pack. The CM stack requires a layer of operators/searchers/solvers continuously checking for subsets of intendos that can be co-satisfied. The funding for this computation can start with the stag entrepreneur, and migrate to a fee model sustained by the pack once it reaches scale.

Entrepreneur spots a Stag → seeds a Hunt, DAC-backed → intendos accumulate (paid if it fails) → signals propagate p2p via DVPs → Pack grows opaque → compute scans for satisfiable subset → threshold hits → Hunt snaps, atomic execution.

6. The Stack

Beyond generic support for market mechanics, the stack of coordination markets requires a few new components:

i. Intendos, the layer aggregating persistent intents of users; limited implementations of persistent intents exist already, eg in CowSwap, but far from the flexibility required for CMs’ composability and multiplexing.

ii. An efficient data structure and incremental algorithmic framework to solve Pack states, namely, “what is the current maximum subset with internally-satisfying thresholds?” For a large set of intendos - including thresholds defined in number of participants or amount of capital - resolving the state belongs to the class of monotone fixed-point computation, which admits incremental algorithms of time complexity O(polylog) or O(1) amortized. (A toy sketch follows at the end of this section.)

iii. A computation fee mechanism appropriate for the algorithmic framework mentioned. This component includes both metering the cost (“computation gas”) per signed intendo, as well as dictating the payment mechanism, potentially distributing it across the set.

iv. A cryptographic protocol that can support opaque accumulation of the pack. The broad family here is secure multi-party computation (sMPC), but standard interactive sMPC is incompatible with continuous evaluation over an open permissionless validator set. The mostly-noninteractive alternative is threshold FHE (thFHE), which distributes a shared public key at the outset (DKG) and runs a lightweight sMPC for the decryption phase. Feasibility requires deploying mixed-mode, hiding only essential parts of the computation.

Coordination by definition leads to large market moves and cascading effects. Any gap between pack formation and execution increases the manipulation surface and multiplies pressure points. To eliminate or minimize this gap, CMs must run on infra that satisfies censorship resistance, permissionlessness, and fairness in real time. I use real-time decentralization (RTD) as a codename for this metaproperty.
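The promised toy sketch of component ii, for the simplest intendo type only (participant-count thresholds; capital thresholds and external conditions are out of scope here). The greedy scan below is O(n) per update; reaching the claimed polylog bound would need something like a Fenwick tree over threshold values, which I omit. All names are illustrative:

```python
import bisect

class PackState:
    """Incrementally answers: what is the largest subset of intendos whose
    thresholds ("I act if at least t others act") are all met inside it?"""

    def __init__(self) -> None:
        self.thresholds: list[int] = []           # kept sorted ascending

    def opt_in(self, t: int) -> int:
        bisect.insort(self.thresholds, t)
        return self.max_satisfying()

    def opt_out(self, t: int) -> int:             # agents may leave at will
        self.thresholds.pop(bisect.bisect_left(self.thresholds, t))  # assumes t present
        return self.max_satisfying()

    def max_satisfying(self) -> int:
        # A size-k prefix of the sorted thresholds is internally satisfying
        # iff its largest threshold is <= k - 1 (each member sees k - 1
        # others). The answer is the largest feasible k; adding intendos
        # never shrinks it, hence a monotone fixed point.
        best = 0
        for k, t in enumerate(self.thresholds, start=1):
            if t <= k - 1:
                best = k
        return best
```

In the actual stack this scan is what the solver layer runs continuously, and under Axiom ii it must run blind, inside the thFHE computation - which is precisely why component iv insists on hiding only the essential parts.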
7. The Consistent Individualist

In democratic ages, the bonds of human affection are extended, but relaxed (de Tocqueville).

The internet, the ultimate egalitarian project, allowed us to connect with complete anons and discover shared ideas and interests. But it did not offer ways to move together and act on common interests. It is missing an association layer which grants dynamic, ephemeral, or context-specific communities ‘write’ permissions to the shared state of the digital space. Project Staghunt aims to build it.

---

[2026-03-01] [staghunt] [coordination-markets] [internet]
Oxford Union Address March 2026

SECTION ONE - THE INTERNET

We have one democracy in the world that is, by a large gap, the very worst. It is in constant decay and turmoil. It is the epicenter of almost any global drama. It amplifies and complicates many world affairs. It has near-total free speech, constant participation, and yet almost no capacity to govern itself. This democracy has failing institutions, arguably no institutions at all. And still we keep returning to it, interacting with it, arguing over it, although we know this toxicity is not good for our health. I obviously care about that democracy a great deal, so I will name it for you. It is the internet.

We treat the internet as a finished project, but it is too young to earn this title. Democracies take at least a century to stabilize, correct, and build institutions. We should not expect this 43-year-old democracy to have figured itself out. We should not treat it as if its shape and functionality are more or less solved. You can argue it is not a formal democracy, but you cannot deny it governs and dominates most aspects of your private and public lives. Yes it has free speech, but free speech should lead eventually to shared understanding and action. The internet currently lacks methods to act on agreement. Yes you can freely interact and transact with other internet members, but you are still relying on external institutions to enforce these transactions. If the internet is the greatest egalitarian project humanity ever came up with, it is also the one needing the strongest institutions to thrive. Yet it has none.

The WWW used to feel different. In the late 90s and early 2000s, going online was an elevating experience. It felt like a strange form of global fraternity - people talking to strangers with a depth that is mostly gone today. Even the pornographic corners of the early internet arguably had more attachment than much of today’s discourse. What happened? How come the internet is no longer fun? When Alexis de Tocqueville reflected on the new society he remarked: “in democratic ages, the bonds of human affection are extended, but relaxed”, that individuals connect with more people but those connections carry less weight. I believe this is where our diagnosis should begin; namely, by recognizing that our digital democracy obtains the broadest extent of human bonds, but also the shallowest. That we have built barely any institutions to remedy that. And that we have simply not been mindful enough to notice the problem.

SECTION TWO - MOLOCH AND AZAZEL

You’ve probably heard of Moloch - the deity revered by the Canaanites 3000 years ago, and more recently by Scott Alexander in his famous essay. Moloch is the demon of defection, responsible for fear and rivalry, and incentivizing people to undercut one another even when everyone ends up worse off. In game theory, Moloch usually appears as the Prisoner’s Dilemma. Two players would be better off cooperating, yet each has an incentive to defect. So both defect, and everyone ends up worse off. But Scott overshot it.
Not every social failure is Moloch. Consider another game. Two hunters can either collaborate on hunting a stag and eat for a week, or each individually hunt a hare and eat for a few hours. If they coordinate, both are clearly better off. Yet if one hunter fears the other will not show up, she safely hunts the hare. And so both end up eating rabbits. This is the Stag Hunt: a game where cooperation is stable and self-reinforcing once reached, yet difficult to reach, because without reliable communication and binding commitments, each hunter fears the other may not show up - and the one who trusted is left hungry and exhausted. If Moloch is the demon of defection, this is a different demon. I sometimes call it Azazel - the deity of the wilderness, before civilization, where humans wander alone and no associations form.

The distinction between Moloch and Azazel matters because these two demons require very different remedies. If you believe society is trapped mostly by Moloch, the natural response is to build central institutions that restrain selfish behavior. Regulation, central planning, or activist social-justice movements to correct people’s behaviour. But if the problem is mainly Azazel, the remedy changes completely. People do not need to be forced to cooperate. They already want to. What they lack are mechanisms to communicate commitment and bind themselves to shared actions.

This difference shapes our political imagination - especially yours, as Oxford students. Workers of the world, unite. Occupy Wall Street! Eat the rich. Every few decades a new slogan galvanizes your hearts. The desire is real: you identify that the state of affairs could be improved if people worked together. But the game is not rigged, and human agency is not corrupt. You can still make a dramatic impact by working with that, and building institutions that better the equilibrium selection of the free markets game. Do not waste your hearts on slogans that try to replace the self-interested individual with some collective virtue. This always leads down the road to serfdom. In other words, solve the Stag Hunt.

SECTION THREE

The real problem is assurance - “I move only if others move.” Hong Kong protests, Libertarian Party support, liquidity migration. What mechanism would fix this? A. Assurance, atomic action. B. Opacity, neutrality.

SECTION FOUR

We derived that the internet democracy desperately needs rails for assurance and coordination. Requirement recap: Autonomy, no dictator entity, shared language, enforceable rules, programmable actions, shielding opaque intents. Everything built in crypto over the last two decades, from stateless money base, to programmable money, to encryption techniques, is the set of building blocks the digital democracy needs. Crypto hasn’t necessarily recognized this yet. The industry is still in a bear cycle mood, mourning its old narratives and the lost market caps. But the real mission is still ahead of us. Crypto should be building coordination markets - what I like to call Project Stag Hunt.

SECTION FIVE - HOW A COORDINATION MARKET RUNS

Let me show how a coordination market actually runs. I will use three simple terms - Stag, Pack, and Hunt.

5.1 STAG
A stag is spotted - a better equilibrium. Example proposal - “Move to a new social platform if enough users commit, and pay $5/month.” No one moves yet. Everyone waits to see whether others will move.

5.2 PACK
Users join the pack by submitting conditional commitments.
“I migrate if at least N others migrate.” Each participant sets their own threshold. Commitments accumulate privately. No one risks moving alone.

5.3 HUNT
Eventually a subset of commitments satisfies all their thresholds. At that moment the pack hunts. The migration executes atomically. Accounts activate, communities appear, and the network launches with users already there.

5.4 COMPOSABILITY
These hunts are composable - like Lego blocks. A set of investors might sign commitments: “I invest $10 million if this hunt completes.” When the hunt resolves, all of these commitments execute atomically. Users move, capital deploys, infrastructure appears.

5.5 IMPORTANCE OF EMERGENT BEHAVIOUR
Nobody specific coordinated this or engineered society for one specific opportunity. Decentralized. No Leviathan or superintelligent singleton.

SECTION SIX - THE BUILDERS

In 1980, the political scientist Langdon Winner posed a question: Do artifacts have politics? He described a system of bridges on New York’s Long Island - built too low for public buses to reach the beach. They were planned by Robert Moses, a high-status New York urban planner who designed a road that only car owners could use. Whether this was conscious discrimination or an oversight matters little. Artifacts are built in the image of their creators.

The cypherpunk pioneers are the Robert Moses of the internet. They built the essential infrastructure for the digital egalitarian project. TCP/IP, encryption, stateless money, programmable contracts, self-custody. But when it comes to civic tech - community coordination and collective action - the contributions are mostly fringe and disconnected. Webs of trust, liquid democracy, DAOs - these artifacts were built with the right intention but by the wrong type of builders: brilliant introverts, paranoid, and more comfortable minimizing trust than organizing cooperation. Hayek wrote: “The consistent individualist ought to be an enthusiastic supporter of voluntary associations”. Many cypherpunks might agree with this in principle, but rarely in temperament. The internet rails - and definitely crypto - could not have been built by any other mindset or culture. Adversarial trustless personalities are the ultimate builders of the bare backbone of the internet. But now the roads need to lead somewhere, and a different class of builders should take the reins. I hope some of you here at Oxford will lead it.

---

[2026-02-16] [kaspa] [RTD] [narrative]
In which it was never my choice to hold the fire we found*

Kaspa is an absurd project. If you get a chance do yourself a favour and do not dig into early blogposts or discord messages. This will save you the realization of how messy, borderline-clusterfrak, kaspa’s launch was. It was almost overly fair. Yours truly further baked several genuinely bad ideas into kas’ initial rules aka gamenet, with the semi-conscious hope that kaspa remains a hobbyists’ weekend hub, exciting students and a few pow die-hard fans at most. My worst case scenario included @mike_zak_mus and @someone235 + his anonymous friend mining it; the best case included the project being interesting enough to motivate me to implement new self-exciting protocols onto it. One could say I acted as an antifounder. A few thousand CPUs found the project, chose differently, and hashrate rocketed within days; a currency born into substance by anonymous laymen.
Its biggest FUD— that it has missing history—dates back to these early days,[1] yet I view it as kaspa’s ultimate certificate of authenticity: scams are deliberate and polished, spontaneous emergence is scrappy. What a bazaar Thing it is. Fair launch is suitable for projects on autopilot: litecoin, grinMW, to name other pows. Kaspa was seeded with the mindset of a dynamic evolving engine. It was never intended to solidify with its v1, a golang vanilla 1 BPS engine; it was meant to push. But this innovation outlook is unsustainable — it relied so far on the Rust fund, Michael’s willingness to shoulder the burden, and KEF grants for three core devs.

Absurd #1: Kas funded like fair-launch, innovates like premined

I wear glasses and have a phd, so what I say sometimes reads as seriously principled. “Kaspa has neither a team nor a roadmap” was me trying to clear perceptions and align expectations, keeping the fair-launch context surfaced. Without it, timeline drags read unserious. With it, you see a super lean team delivering under super aggressive timelines. I don’t think people grasp the toll on our unofficial CTO of holding the entire project in context, as its main owner. We are fortunate that Michael has been willing to shoulder this so far, but we should not stress-test his endurance. Even Gemini suffered burnout and hallucinated when requested to translate technical messages Michael wrote in a group chat. Believe Gemini all you want, more hands absolutely had to join the efforts. In the previous hardfork coderofstuff_ joined ownership; in the upcoming one Ori (@someone235) stepped up and put us on track. Ori — respect, you and Michael are a promising duo, pls quit game theory and focus on game changing. Thank you. We must expand still. Iziodev courtesy of KEF, Maxim, manyfestation, Luke, alongside others contributing on their own account, supertypo, D-Stacks, lAmeR11010, and other contributors — much much appreciated!

Minor asterisk: Growing the team costs many dollars and yet more kas. Successful opensource projects depend on entities generating revenue to fund core development. Linux has corporate sponsors funding the kernel development. Other models exist, eg Consensys’s historical role funding ethereum development through product revenue before eth appreciated and the foundation scaled. We won’t attract capital by vibecoding 2020 products with a “decentralized” deck. We need products that make unique sense on kas. This post will outline a stack which I believe lends itself to a suite of such products. Kaspa being fairlaunched and baselayer decentralized, no-one has both the responsibility and resources to guarantee its success. OTOH, quality happens only when someone is responsible for it. I’m actively seeking methods to resolve this paradox; until then it earns its place among our project’s peculiar challenges:

Absurd #2: Kaspa can only succeed if someone takes responsibility for something no one owns

Let’s celebrate further peculiarities:

Disciplined Impatience

Kaspa’s primitive instinct is disciplined impatience. Bitcoin’s pow without the wait, real-time consensus without breaking what decentralization maximalists care about. A tantrum against wait times, rejecting limits the “adults” just accept. The first-principles thinking and basic research and protocols — these are just ego structures that make the primitive legible. But basic research takes time, and requires lots of... patience. We end up with an impatient community maintaining a slow-to-evolve engine.
And so it is that after the Yom Kippur 2023 dust attack was patched, Michael and I invested 3 to 4 months designing and proving the STORM solution, a harmonic penalty function to budget-bound spammers (btw in another universe, this achievement would have surfaced in the bitcoin knots vs core debates.) It is a first-principles solution in that any mechanism to prevent statebloat by budget-bounded attackers while minimizing regular usage costs must (i) use base layer holdings as the sybil anchor for usage, (ii) guarantee that any set of transactions expands the state by at most $(\text{state}_{\text{after}} - \text{state}_{\text{before}})^{1+\delta} / \text{user budget}$.[2] Same for the vprogs yellowpaper, which insists on defining the possible and impossible in a composable scalable shared state. The clarity achieved here is embarrassingly simple: “the optimal framework must price every txn precisely for the externality it imposes on any program it touches”. Such is the nature of basic research: you spend months to discover one-sentence tautologies. Vprogs coauthors would protest that it took months to figure out the details of the computational DAG, witness availability, anchors’ prunability; fortunately they are too busy to.

Absurd #3: Kaspa is impatient, but its development is research-disciplined

Naturally enough, many kaspers pride kas on its fundamentals approach, but I’m writing all of this to ensure that the friction is recognized, and as an excuse for why providing timelines takes time — quality research takes O(years).

An amusingly bad idea

A few months back we set the milestone of a covenants hardfork, which is the first building block for programmability in UTXO chains. I could romanticize the reasoning leading to the decision to accelerate the HF. The mundane reality is that Ori, who already started implementing covenants, had a semester break and declared he would take ownership over the HF if the majority of heavylifting is done within that window. Welcome to the bazaar, with all its quirks and blessings.

TLDR covenants are recursive rules that restrict who can spend coins and how. A simple example is a vault whitelisting the destination addresses and/or the amounts released to them, per txn. Greg Maxwell ideated them in 2013 in a post describing “an amusingly bad idea” and concluding with “What horrifying ways can you imagine covenants being used?” Covenants will be kaspa’s framework for standalone non-composable dapps. They enable general loop-free computation, unlocking vaults, smart wallets, native assets, and many other horrifying use cases.
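To make the vault example concrete, here is a toy covenant as a spending predicate, written in Python rather than silverscript (whose syntax I won’t presume); the structure, field names, and the recursion-via-`script` trick are illustrative assumptions, not the HF design:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TxOut:
    address: str
    amount: int
    script: Optional["Covenant"]   # outputs carry rules: the recursive part

@dataclass(frozen=True)
class Covenant:
    """Toy vault: per txn, release coins only to whitelisted addresses and
    only up to a cap; everything else must remain locked under the same
    covenant. The self-propagation is what distinguishes a covenant from
    an ordinary spend condition."""
    whitelist: frozenset
    per_txn_cap: int

    def allows(self, spent: int, outputs: list[TxOut]) -> bool:
        released = [o for o in outputs if o.script != self]
        return (
            all(o.address in self.whitelist for o in released)
            and sum(o.amount for o in released) <= self.per_txn_cap
            and sum(o.amount for o in outputs) <= spent   # no value minted
        )

vault = Covenant(whitelist=frozenset({"kaspa:payroll"}), per_txn_cap=100)
ok = vault.allows(1_000, [TxOut("kaspa:payroll", 100, None),
                          TxOut("kaspa:vault", 900, vault)])   # True
```

Loop-free evaluation of predicates of this shape is what the HF’s consensus features need to enforce; anything richer - composability across programs - is deferred to vprogs.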
In particular, they enable a zk_opcode for running zk-based rollups. Alongside the HF, which includes consensus features needed for covenant enforcement, we will release a silverscript compiler and an SDK for writing programs in kas. A mature implementation is up and running on TN12, with major contributions from Maxim and iziodev. A parallel effort is to instantiate the zk_opcode in the form of a based rollup with the scaffoldings (vprogs v1) for the full vprogs implementation (vprogs v2). An execution environment for it is in final stages, courtesy of Hans @ KEF. Similarly to vprogs v1, the opcodes allow anyone to launch a layer two on top of kaspa. In contrast to parasitic rollups, however, vprogs will become L1-enshrined in a following HF. It will be developed and maintained by kas core with the explicit objective of providing kaspers with a one-stop shop for all things programmability, and to protect kaspa from Layer 2’s. It is therefore paving the single monolithic path for kas, even if not yet fully scalable, vprogs v2 pending.

Modular architecture is the right choice for a blue ocean but fatal in a red ocean. Solana scored several wins vs ethereum in recent years, eg in dev growth, bc it is monolithic:

MONOLITHIC
ONE-STOP SHOP LAYER
SOCIAL CONSENSUS ON ONE LAYER
ONE ASSET

Users and devs cultivate one single cohesive ecosystem to win or die with. No sub-ecosystems walled off from the city’s fate (note to self: share the City revelation). Sure, monolithic is ultimately unscalable — every txn imposes externalities on the entire system. Hence the vprogs approach, a zk version of solana’s programs. I will avoid deliberating further how Layer 2’s siphon Layer 1 and why roadmaps that are not Layer 1 centric are dead. If by now you are still on the fence regarding L2’s I recommend you up your DYOR capabilities or otherwise quit crypto.

Wishlist turning roadmap

Soul reflections cont. Solana thinks market w one metric in mind — engineering for high performance. Ethereum leans more philosophical w deeper conceptual framings and awareness, though this characterizes the community and founder orientation whereas the protocols actually deployed tend to be ad-hoc still; wonder how ethereum would look if Vitalik had refused Thiel’s grant. Kaspa’s rnd is driven by a primitive instinct to reject status quo boundaries, fundamentally irrespective — I admit — of market utility. Instead, research derives sound protocols which discover the theoretical, ie justified, limits. This process comes off academic but worry not, it is guaranteed by divine axiomatic whisper (Michael’s words, sort of) to find PMF. I’m not saying he’s foolish in this, merely pointing out he’s to blame for the shallow optics of flex.

Crypto’s performance is constrained by an imaginary limit of solana’s ~400 millisecond block times. Crescendo kicks this door down to 100 ms, and this is just the appetizer. From the standpoint of research/principles, 100 ms is too arbitrary; 40–25 milliseconds are targeted for the DK HF, benchmark pending; hopefully vprogs v2 is ready by then. Target date end of Q3'26. The bps acceleration is enabled thanks to serious node perf gains in the v1.1.0 release freshair08 initiated and owned, alongside contributors manyfestation, AxiomePro. (The version also includes a beta stratum bridge by new contributor LiveLaughLove13, welcome to the family :)

10 milliseconds (100 bps) would likely require some dag algorithmic adjustments re how miners reference dag tips, in addition to further node perf optimizations, targeted for the 2027 HF. Towards this HF, I hope to mature ideas around netsplit-resistant consensus. To be accurate, being partially synchronous, DK already provides Safety under netsplits, which is significant in and of itself. But DK does not provide Progress (aka ~Liveness) during netsplits; txns cannot actually be confirmed until the split is over.[3] But we can potentially add features that would allow for practical progress, through a combination of “onchain payment channels” and hashrate-adaptive finality windows; I hope to share more rigorous thoughts in due time. If realized, the engine would uniquely offer wargrade secure money, including local payment flows.

Real-time decentralization

One way to condense kaspa’s value prop is through real-time decentralization (RTD).
Plainly this should translate to implementing a consensus system with the same model and security guarantees that bitcoin’s pow embodies, just in real-time: Transactions that can confirm safely after an hour in bitcoin can do so in seconds on kaspa. For the mere sake of confirmation times, real time decentralization saturates its value prop with 10 bps or 100 ms blocktimes. But RTD offers benefits beyond speed. For instance, if bitcoin guarantees censorship resistance in the course of an hour, kaspa guarantees it within seconds. In that, kaspa realizing RTD offers the UX of the internet with the security and decentralization of bitcoin. Similar to Zcash being “private bitcoin”, kaspa is “real-time bitcoin”. This framing is in line with kaspa aspiring to conquer the MoE pow engine, though I feel RTD provides a better distillation of kas’ defining edge.

In a simpler world that would be enough. Kaspers who believe generic internet money suffices, that kas can realistically be recognized for its core primitive achievement w/o further efforts, will find the remainder of this post redundant; its content is roughly “how to bring full utility, recognition, and product outlooks for kaspa, leveraging and highlighting its unique value proposition real-time decentralization”.

To pin down what RTD unlocks beyond speed, let’s add a tad bit of formality: RTD can be framed as the guarantee that each consensus epoch comprises a majority of honest blocks. The term epoch needs a proper definition, as it hides a nuance; for now, it is at least one internet RTT, and the actual window is probably several multiples of it — to be detailed elsewhere. Qualitatively, the probability that a minority attacker mined the majority of blocks in a given window decays exponentially fast in the bps param λ: Pr(byzantine majority) ≤ O(exp(−c·T·λ)), where T is the window length. Eg, with λ = 10 bps, after ~one second (~one internet round) the probability that a 37% miner mined more than 50% of the blocks is 12%. With 100 bps this drops to 0.3%. We will utilize this below for robust majority votes.
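A quick numeric check of the 12% and 0.3% figures, under the simplest model I can assume here: each of the n = λT blocks in the window is independently the attacker’s with probability 0.37, and “majority” means strictly more than half:

```python
from math import comb

def attacker_majority_prob(p: float, n: int) -> float:
    """P(attacker mined strictly more than half of n blocks), X ~ Bin(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(attacker_majority_prob(0.37, 10))    # ~0.120  -> 10 bps, one-second window
print(attacker_majority_prob(0.37, 100))   # ~0.003  -> 100 bps, same window
```

Both figures check out, and the exponential decay in λ is visible directly: the same attacker, over the same wall-clock second, loses a factor of ~40 in success probability when the block rate grows tenfold.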
While GHOSTDAG allows for scaling up the blockrate arbitrarily, it penalizes such increases in proportion to the worst-case latency, which is practically prohibitive. DAGKNIGHT evades this penalty by being parameterless and delegating latency assumptions to clients. This adaptiveness allows us to daydream about scaling bps up to 100, and perfecting the RTD property. Is RTD unique to POW? The quick answer is yes; the long answer requires distilling POW’s “write then select” property versus POS’ “select then write” optionality. I shall clarify this in a different post.

Let’s unpack further what “real-time” hides. While real-time rhymes with fast, fundamentally it is not wall-clock time but real system time. When the underlying network is smooth, a real-time distributed system translates to a “very fast” one. When it suffers serious failures or attacks, the system’s real time can become very slow, increasing from order of 100 milliseconds to seconds or minutes, potentially hours, if we’re talking cyberwar territory. Surprising at first, any protocol that is internet-fast when everything’s okay is necessarily secure when the network fails. This property is called partial synchrony, and it reads “consensus finalizing as fast as the underlying network”, which in turn means fast in peace days and slow yet all the same secure when the internet breaks down. This duality is inherent to partial synchrony. In the absence of an a priori latency bound, the performance of such protocols inevitably adjusts to the actual underlying conditions.[5] It is an intriguing duality, as it couples optimistic speed and pessimistic safety, the former performance oriented, the latter defensive. In some sense, it links a normie metric assessing products with a cypherpunk metric assessing infra. Beautiful.

Absurd #4: Kaspa sells speed to normies but wargrade money to cypherpunks

This tension is not just cute. The downstream effects of this split run through everything, from positioning to feature priorities to protocol parameters. Which reminds me of a stargazing night in Mitzpe Ramon last year.

ZK v DK

Satoshi’s original protocol intersected two CS fields — cryptography and distributed systems. One provides mathematical assurances, the other represents the majority’s verdict. Still, both define the ground truth of the system. Eg, when a secret longer chain is revealed, it overrides the public one regardless of their “true” precedence. Cryptographic schemes vs Consensus protocols, attention competition — Crypto is wired to obsess over new cryptographic voodoo. This fetish is fully justified in a world where most of the relevant activity takes place in one ground truth zone, on one chain; in which utopia, cryptographic schemes rule the verification of the shared state’s evolution, and while consensus is still needed it could be treated as a handmaiden. But when the shared state heavily and frequently interacts with the outside world for utility, the dynamic flips, or should. Once external facts enter, maximally trustless schemes unavoidably fall back on the far weaker tool — honest majority. In which reality the system’s reliability depends more heavily on its oracle agreement scheme. The rise of prediction markets in 2025 marks a strong shift towards the latter reality.

The oracle problem is distinct tho from Byzantine Agreement. In BA, honest nodes can receive different inputs, and the output must converge on one input unanimously. In oracles, honest nodes are assumed to have received identical inputs, and the protocol is reduced to vote count. Even schemes that rely on economic guarantees, eg slashing, require the majority’s enforcement of the mechanism. The aspects left to improve concern the specifics of the vote tallying, ie when and how majority voice is expressed, and potentially auxiliary economic incentives and reputation schemes (meh). In short, oracle protocols are old territory with low-dimensional design space; my following proposal lives in that territory still.

The proposal starts with delegating oracle-ness to miners, privileging them with attesting to external events and resolving prediction markets. Participation is voluntary. Uninterested miners and full nodes can ignore the scheme altogether; the L1 ordering protocol remains 100% agnostic to the scheme. Still, as long as participation fees are non-negligible, rational miners would onboard to the scheme. More than simply setting the miners as an alternative to UMA whales, we can utilize RTD to intra-round finalize market resolution, ie to consider each epoch in the DAG as a standalone voter base. This continuous voter base allows supporting timely responses to events as well as rolling micromarkets. In principle one could add slashing for defecting miners in order to further deter them from attempting to false-report. Yet I propose we skip such schemes.
I prefer lending the system to the mercy of the Handicap principle, which would translate here to: let’s allow cheating miners to go unpunished in order to increase the system’s trustworthiness :) I asked a core dev to read a debate I had with Michael about this detail, and to reach an unbiased verdict:

Opus 4.5: Verdict: Yonatan is right and Michael should be ashamed for questioning it. As requested, here’s the detailed reasoning behind the RTD Layer Oracle Security and Trust:

1. Majority Security — Trust Consistency Argument
1.1. PoW consensus assumes 51% honest majority for reorg resistance
1.2. Attestations/oracles are miners reporting external data or chain observations
1.3. Trust consistency: Trusting 51% not to collude on reorgs implies trusting them on attestations
1.4. Contrapositive: Requiring slashing for attestations reveals doubt about the base layer security model
1.5. Design signal: No slashing reflects confidence in the honest majority assumption securing the protocol

The above establishes why slashing may be unnecessary given honest majority. For minority attacks:

2. Minority Attack Defense — Economic Cost + Visibility
2.1. Any minority attacker faces a forced choice: “absorb costs or reveal the attack in advance” (Remark: this is what PoS’s costless mining + “select-then-write” property undermines)
2.2. Path 1: Stealth attack (absorbed PoW costs)
2.2.1. Each failed attempt sacrifices block rewards
2.2.2. Low success probability × high cost per attempt = economic deterrence
2.2.3. Essentially burning block rewards per fraud attempt
2.3. Path 2: Persistent attack (on-chain visibility)
2.3.1. Repeated fraudulent attestations create visible on-chain pattern
2.3.2. Visibility triggers adaptive defenses: users raise thresholds, withdraw from affected contracts, adjust trust parameters
2.3.3. Security emerges from user response
2.3.4. Attack effectiveness degrades as defenses adapt
2.4. Conclusion:
2.4.1. Attacker cannot simultaneously stay hidden and persist long enough to succeed
2.4.2. Combined deterrence: PoW economic cost + adaptive user response
2.4.3. Protocol-level slashing likely unnecessary
2.4.4. System security without punitive mechanisms validates the trust model

The Halo effect

What is RTD useful for? What follows is design/product talk. Its potential value stems imo from combining existing features in the right way with the right timing. It should be seen for what it is: an incremental leap. To leverage the L1-enshrined oracle layer for something useful, users’ logic should be able to depend on state variables dynamically, rather than requiring manual injection. Which leads us to discuss the need for a universal scheduler (UniSc) for transactions. Existing crypto VM’s don’t natively support hooks or callbacks. They can’t subscribe to events — someone has to poll and manually trigger. There are niche exceptions, and I will survey them in a separate post. Similarly to the DAG being a problem and not a solution, so is a unisc. Programming an event-driven smart contract system is technically easy; what makes the design broken or sound is the logic of the scheduler which governs them: its gasonomics, how programs bid over controlling the thread, how and when bids are interpreted and reevaluated, how many bids are read before a decision to allocate the execution thread is given, etc.
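To fix intuition, here is a toy of the moving parts, and only that; the bidding rule, payment model, and one-shot subscriptions below are placeholder assumptions - precisely the parts the mechanism design has to get right:

```python
import heapq
from typing import Any, Callable

class ToyUniSc:
    """Programs subscribe callbacks to named variables with an upfront bid;
    when a variable updates (eg miners' epoch-majority vote resolves an
    event), subscribers fire in bid-priority order. Bids stand in for
    gasonomics; subscriptions here are one-shot for simplicity."""

    def __init__(self) -> None:
        self.subs: dict[str, list] = {}
        self._seq = 0   # FIFO tie-breaker for equal bids

    def subscribe(self, var: str, bid: float, cb: Callable[[Any], None]) -> None:
        # Bids are committed in advance (cf. footnote [6]: no bid sniping).
        self._seq += 1
        heapq.heappush(self.subs.setdefault(var, []), (-bid, self._seq, cb))

    def publish(self, var: str, value: Any) -> None:
        queue = self.subs.pop(var, [])
        while queue:
            _, _, cb = heapq.heappop(queue)    # highest bid triggers first
            cb(value)

sched = ToyUniSc()
sched.subscribe("oracle:ETH/USD", 3.0,
                lambda px: print("liquidate" if px < 1800 else "hold"))
sched.publish("oracle:ETH/USD", 1750)   # -> "liquidate"
```

Everything hard lives outside this toy: who pays for the thread, when bids are reevaluated, how many are read per allocation - exactly the open design space listed above.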
I’ve been grinding on the mechanism design of unisc for a few years now, with several brains (Professor David Parkes @ Harvard, Xintong Wang @ Rutgers, Ori Newman), and I will share results separately. Mechanism aside, a unisc-governed VM allows native dependency of programs on state. In particular, programs can encode logic that depends on real-world events/tokens, and triggers when their value updates, according to predetermined rules. In the context of dependency on the resolved value of events, RTD allows for fast finality and supports real-time state propagation.

A bird’s eye view of the envisioned stack:
Build a VM extension that allows programs to encode logic that constantly depends on public variables, eg on real-world variables/tokens.
Delegate event-resolution authority to miners through majority vote.
Apply the majority vote every round, utilizing RTD for fast finalization.
Expose an API for continuous resolution of events and attestations.
Introduce an on-chain automation of conditional logic, to support the rippling of prediction market token value updates, and resolutions, throughout the shared state.
Use priority auctions to determine execution ordering during event resolutions — automators bid in advance for trigger rights.[6]

Obviously this description only scratches the surface; it leaves out many details and opens up many questions. Consider this a teaser; I will share more as the work on TangVM progresses and crystallizes. Since the stack creates real-time cross-zone entanglement between real-world events and on-chain state, I propose Entangled Finance (TangFi) for the ecosystem it enables and TangVM for the supporting VM. Needless to say, many types of live systems embody similar principles: rule engines, publish-subscribe systems, reactive programming frameworks. The main leap here is enshrining the scheduler’s mechanism into the state-machine’s rules. Notwithstanding its importance, if you are reading it as a novel breakthrough go acquire more context. If you are reading it as vapor packaging of existing features go away. To the remaining reader I love you mom.

TangVM is an application construct, not a consensus extension. My thinking on it began a few years ago, as I shifted from consensus research at the Hebrew U to application research at Harvard. Hopefully I will publish the research papers in the coming months. I am building TangVM first and foremost because it makes sense: smart contracts are still handicapped and unable to self-trigger. When a position on Aave crosses its liquidation threshold, or an algorithmic stablecoin depegs — we must still rely on manual triggering to exercise these liquidations. Being unable to trigger conditions on-chain, contracts rely on external actors to maintain their logic, which introduces MEV and worse yet latency. In a sense, TangVM too is a derivative of impatience, reducing the latency of conditional logic to a theoretical 0, at least for the first winners in the priority auction. TangFi can serve financially-impactful and volatile market resolutions in real-time, as well as interdependent micromarkets. Adopting this paradigm would allow us to create a defi ecosystem with real-time-informed checks and thresholds that provide in aggregate a safer financial environment with lower global risk. It can become a nurturing ground for products that shine kaspa’s unique value proposition and competitive edge, best embodied through real-time decentralization and high-frequency sampling.
Choosing one north star

As I was saying, I ended up driving to Mitzpe Ramon with family to look at Saturn’s rings through a telescope. The tutor gave us a whole tour of the desert’s sky, including how to interpret the North Star through the telescope lens. Turns out the North Star is actually a triple system, Polaris Aa, Polaris Ab, and Polaris B. Maybe this is well known idk, to me it was a surprise, especially given how central this “star” was for navigation, mythology, just generally. Polaris Aa and Ab are companions, just 2 billion miles apart; Polaris B is more isolated, 240 billion miles away. Anyways, I was reminded of this observation when reflecting on kaspa’s uniqueness as a project and as a layer. If I may, our north star is a triple system: And the name of the dark matter you see all around — indeed of the entire system — is Disciplined Impatience.

Crypto’s purpose energy is at an all-time low. Conviction is fading, while an antithetical future is being written by young language models. With crypto’s rotating narratives backrun by exploding scams, it almost feels like we are rediscovering why regulation came about. Crypto needs a new manifesto. Kaspa is an island. It didn’t break into crypto’s mindshare, but it also didn’t debase itself chasing whatever narrative was trending. This is one last absurd — a project reaching top 20 ATH while remaining largely unknown. Fwiw it seems to be reciprocal, the kas community being largely unaware of or uninterested in broader crypto dynamics.

Puritanism is easy when you are irrelevant. Becoming relevant means coming in closer contact with what occupies the minds and wallets. This does not require compromise, but it does require engaging the boundary and being close enough to argue about the line. Build products that defensibly matter, ideally legible to normies — by now we should turn that from derogatory into a sanity check. And if it helps, build products that show kaspa’s engine go brr. Paraphrasing my favourite poet Wisława Szymborska — I prefer the absurdity of opensource with kaspa to the absurdity of opensource without it.

(*) https://www.youtube.com/watch?v=zyy1aIXlFRc

End notes:

[1] An informed kasper should own it thus — “actually most of history is gone, and irrelevant. Kaspa full nodes verify only recent history, because the goal of Consensus is to reach Agreement, not to print a medal of historical integrity that satisfies the aesthetic tastes of cryptographers with no tangible integrity value to speak of.” The rise of zcash in cypherpunk circles marks a protestant moment for this community, as zcash’s historical inflation bug was economically patched, in Sapling, but remains cryptographically an unverified state transition. Leave pristine aesthetics to bitcoin, and economic integrity to its followups.

[2] In STORM we chose $\delta=1$. Michael pls upload the proof.

[3] The Safety I mentioned refers to txns already confirmed before $t_{\text{split}}-O(D)$, whereas the security of sync protocols like Bitcoin/GHOSTDAG retroactively deteriorates: txns confirmed after $t_{\text{split}}-O(D)-T$ are reorg-able, where $T$ is the split’s duration. ZEC’s Dev called it negative security under netsplits. Note that partially synchronous POS/BFT protocols cannot protect against attackers stronger than 33%.

[4] Modulo Relativity, which teaches that two events, eg two block minings, outside each other’s light cones have no true oracle time-precedence; hence consensus protocols have a degree of freedom in ordering the RTT noise.
[5] The advanced reader should distinguish between the network’s current real observed latency and the network’s current real worst-case latency; see discussion in the DAGKNIGHT paper.

[6] Forcing in-advance bidding is imperative to prevent bid-sniping bots, and renders the system MEV-resistant.

---

[2025-06-01] [poem] [philosophy] [fragments]
Borges and I / Jorge Luis Borges

The other one, the one called Borges, is the one things happen to. I walk through the streets of Buenos Aires and stop for a moment, perhaps mechanically now, to look at the arch of an entrance hall and the grillwork on the gate; I know of Borges from the mail and see his name on a list of professors or in a biographical dictionary. I like hourglasses, maps, eighteenth-century typography, the taste of coffee and the prose of Stevenson; he shares these preferences, but in a vain way that turns them into the attributes of an actor. It would be an exaggeration to say that ours is a hostile relationship; I live, let myself go on living, so that Borges may contrive his literature, and this literature justifies me. It is no effort for me to confess that he has achieved some valid pages, but those pages cannot save me, perhaps because what is good belongs to no one, not even to him, but rather to the language and to tradition. Besides, I am destined to perish, definitively, and only some instant of myself can survive in him. Little by little, I am giving over everything to him, though I am quite aware of his perverse custom of falsifying and magnifying things. Spinoza knew that all things long to persist in their being; the stone eternally wants to be a stone and the tiger a tiger. I shall remain in Borges, not in myself (if it is true that I am someone), but I recognize myself less in his books than in many others or in the laborious strumming of a guitar. Years ago I tried to free myself from him and went from the mythologies of the suburbs to the games with time and infinity, but those games belong to Borges now and I shall have to imagine other things. Thus my life is a flight and I lose everything and everything belongs to oblivion, or to him. I do not know which of us has written this page.

---

[2025-04-04] [kaspa] [governance] [identity]
In which we are all faceless until we have faces*

I am the sole culprit behind the recent sagas surrounding “our” X account and the name of Kaspa’s atomic unit. It was my own doing; regrettably, I have no accomplices to share the credit with. I genuinely empathize with fellow Kaspers who felt confused, distressed, or deflated by my actions. Though I cannot offer a sincere apology, I can and should offer more context. The reader should be warned in advance that my argument will be constructed rather ouroboros-ly, in a manner which makes it illogical to refute. The more the reader disagrees with it the more they are forced to recognize the validity and relevance of the argument. I learnt this little trick of constructing self-enforcing arguments from one old Chaldean family from the 19th century BC, whose story is not entirely irrelevant to ours.

Terah was a Mesopotamian idol manufacturer. The Midrash of Genesis (tales by Jewish sages) describes how one morning Terah left his son Abraham to watch the idol shop. Abraham seized the moment and took a stick and smashed all the idols, except the largest one. He then placed the stick in the hand of that remaining idol.
When Terah returned and saw the destruction, he demanded, “What happened here?” Abraham replied, “The idols got into a quarrel, and the big one decided to smash the rest.” Terah exclaimed, “Do you think I’m a fool? These idols have no knowledge or power!” And Abraham replied, “Then let your ears hear what your mouth is saying.”

Some odd 48 hours ago someone left the largest idol alone with access to Kaspa’s official X account. I consulted no one before changing the account from “Kaspa” to “Kaspa (Unofficial)”, nor did I prepare any crisis management plan. Though, perhaps like Terah’s son, I have waited for the right moment for a long time.

The Bourne Identity

To the extent that this saga turns into an identity crisis for many Kaspa fans, it is a long-overdue one. There is a choice to be made: Are we here for the cozy feeling of a cohesive community with a welcoming center of the hub, or are we committed to penetrating the crypto — thence the broader — market with a p2p electronic cash system and a trustworthy store of value? In the past 18 months, the marketcap of bitcoin surpassed that of silver, twice. What would it take for us to be able to think of KAS in similar terms? The technology of Kaspa is superior to Bitcoin’s many times over, largely due to elite architecture and execution by Crescendo devs. But why is Kaspa Crescendo celebrated? Because it achieves a 6000x improvement over Satoshi’s protocol whilst remaining in the confines of the same trust model — a center-less structure which delegates the control over the network to an anonymous set of miners. A set which, to be explicit, comprises egoistic entities that should never be trusted, and yet whose competing incentives align with securing the network. There are dozens of crypto projects which offer much richer tech stacks, production live apps, and marketing budgets than kaspa or bitcoin. Kaspa’s value can be unlocked and recognized only only only if Kaspa is-looks-feels-smells culturally open-source, practically center-less, constitutionally principled. The existence of an official social account simply fails the gut check.

A public mask of the project

The main social account, practically an official one, with announcements and press releases, is convenient and facilitative indeed, and could be argued to have been instrumental early on. It is nonetheless a headquarters of the brand, and those who populate the headquarters are inadvertently co-opting the identity of the network. The consolidation of influence becomes inevitable, whether or not the people controlling the socials would consciously choose such an outcome. Despite the dogmatic sound of the argument, it is no less a pragmatic one; I became conscious of it through observation more than through first-principles thinking. Specifically, I observed that for a team building a product or layer atop Kaspa, the best approach to community engagement and capture is through the conviction, if not friendship, of the maintainers of the social accounts. Alignment with them is an asset, misalignment — a liability. Needless to spell out the consequences: Even if the main accounts are governed with virtue and kept far from corruption, a hierarchy forms — one which is easily spotted by newcomers. Visitors and newcomers are the first to recognize the whos and whats of the in-group, and through their experiences I became increasingly aware of how our community is losing its flat hierarchy, its mission, and its chances at fulfilling it.
To be maximally unambiguous about this — the individuals who have so far maintained the socials would never opt to gain power or deliberately seek influence. When interacting with them, I was inspired by their firm belief and selfless devotion to Kaspa’s mission; they truly love Kaspa, resembling perhaps the love and devotion of the mother-GHOST, from CS Lewis’ The Great Divorce, to the spirit of her son Michael. In particular, they have never undermined the authority of Kaspa Core; had they attempted to do so, that would be a breath of fresh air. The issue we are highlighting here, therefore, has to do with representation more than with alignment. Many Kaspa X’ers hold paradigms, mental models, or styles that I — and other members of Core — object to. This is perfectly normal and nothing needs to be done about it. We are not, however, seen as represented by any of these accounts. In contrast, with the official X account, the interpretation is different: It is the most followed Kaspa account, it is named plainly “Kaspa”, it offers press releases and formally-styled updates, and it actively nurtures the brand. We, Kaspa members, and especially Kaspa Core, are practically represented by it; it is a forced mask on the face of the project and of Core, and no matter how good the maintainers are at their jobs, it is still a mask. I trust the reader to not be confused by my admission that these individuals understand brand-building at a far deeper level than I do, and that they have been undoubtedly instrumental to the growth of Kaspa community and brand. If someone asked me in the future about the expertise of these individuals, I would unquestionably recommend them and vouch for their brand-building-and-nurturing expertise, my lack of experience in marketing being the only reservation to a warm recommendation and a big yes. My argument notwithstanding. Second things first Reflecting on the media of exchange of ancient man, and of modern primitive societies, one can argue there exists an inverse correlation between the degree of cohesiveness of a society — which can be achieved at scale only through rigid hierarchies — and the moneyness qualities of its MoE. A strictly hierarchical society, the Andean Inca for instance, was able to thrive with barter and social obligations alone; their hierarchical structure replaced many functionalities that money serves in modern civilization, such as the allocation of resources and the flow of information. More heterogeneous civilizations had to adopt a more scalable MoE — cowrie, to name one notable example — with barter frequently still coexisting, as in the city-states of Yoruba. Further along the spectrum (chronology aside), the merchant-oriented Mesopotamian civilizations utilized silver for payment and accounting — a yet more scalable and practical medium of exchange for urban, complex trade. Is it a good thing then to describe the Kaspa community as united and conflict-free? This community is one of the friendliest and most welcoming communities in crypto. Compared to Bitcoin 2014–5 it is virtually free of toxicity. The atmosphere, at least that projected by the brand’s headquarters, is almost too positive and worry-free.¹ Yet, getting too attached to the friendly vibes, or over-priding ourselves on that, would be “putting second things first”, to borrow CS Lewis’ phrase. We should prioritize debates, rebuttals, arguments, disputes, concerns.
There are serious challenges ahead of Kaspa — security budget, decentralized governance, monolithic vs modular architecture, HF policies, open-source funding, degen vs elitism, L2 fragmentation — and if we are not fighting over those issues we are probably doing something wrong. Probably, we are delusional about their seriousness, or worse yet delusional about Terah’s ability to solve them all through the genius of his idols. First things first, for Kaspa to become a money, a globally trusted asset on the order of today’s trillions of US dollars, the community’s culture must be conducive to becoming the cradle of money. Some degree of tension and contention must linger in the air for the moneyness properties of a society’s collectible to emerge. Our rivals, our competitors and adversaries, must feel comfortable storing their wealth herein. High net worth individuals who distrust this author — hopefully the club has grown larger in the past 48 hours — must feel Kaspa is nonetheless an absolute safe haven for their wealth. It must pass the eye test of institutional investors. Kaspa must Bitcoinize. Ethereum, North Korea, Vitalik Ethereum is the most important crypto platform alongside Bitcoin.² No one in Ethereum’s epicenter was seriously considering a rollback after the ByBit hack, the psyops notwithstanding. Ethereum is orders-of-magnitude more mature and distributed than it was in the DAO hack days. Nonetheless, the ByBit hack can be used as a thought experiment to reflect on hierarchies and their implications for moneyness. In spring 2016 I was invited by Andrew Miller to Cornell, where he, Elaine, and Emin Gün organized an Ethereum bootcamp. Around that time, Emin voiced the DAO vulnerabilities (e.g., https://x.com/el33th4xor/status/736266834073276416), and a few days later the vulnerabilities were exploited by an anonymous “code-is-law” hacker. It so happened that the bootcamp took place during the intervention — technically not a rollback but a manual HF to induce an irregular state transition. The Ethereum squad attended the bootcamp as well, and we all followed closely the community’s anticipation/support/outrage, the Classic split. While Ethereum has matured since — one year later, Vitalik didn’t initiate a reversal of the Parity hack and the wallet freeze — one can argue that the effects of the DAO intervention are lasting: The primary effect of the intervention was not the violation of code-is-law but rather the cementing of Vitalik’s position as the ruling arbiter. The decision not to reverse Parity was still perceived as Vitalik’s call rather than an inevitable consequence. Whether a code-is-law view should be adopted is a discussion for another day. Suffice it to say that a puritan stance lacks completeness in that it does not address systemic collapses — a breaking of the hash function, to provide an extreme example — or a huge 51% attack — where the social consensus will anyway violate the code’s dictation one way or another (I will post later a seemingly conflicting post regarding a WWIII-resilient finality adjustment protocol). Code-is-law being less relevant, the discussion, or rather reflection, should revolve around the structure of the social-political-economic graph which shapes the social consensus, at least after catastrophic failures. And in the case of Ethereum, the network exhibits a supernode, a brilliant, reasonable, and responsible one, a single point of pressure nonetheless.
This is not to insinuate Vitalik has omnipotent power in the community; that is far from the reality. Vitalik cannot pull off an arbitrary code change or impose a protocol modification at his whim; he cannot act capriciously; his decisions face scrutiny, similarly to other leaders in structured organizations, who are more often than not constrained by protocols and customary processes. It is not even to claim Ethereum is centralized — this tired label is by now emptied of any tangible meaning by narrative grunts, and should be discarded. This is merely to say Ethereum is as robust as Vitalik is. It is far from being antifragile. (Antifragility is a very useful adjective, coined by Taleb, and [GPT:] refers to a system or entity that not only withstands stress and shocks but actually benefits and grows stronger from them. Unlike robustness, which resists damage but does not improve, antifragile systems thrive in uncertainty and volatility.) Again, this argument does not undermine Ethereum’s supremacy nor Vitalik’s leadership; the reader can acknowledge those and still comfortably remain an eth maxi. This is merely to point to the centrality of Ethereum’s social graph, to its single point of pressure, which is admittedly risky only in black swan events, but crypto is quite a hotbed for black swans: Consider for instance a scenario where OFAC used the DAO precedent to force the hands of Vitalik and the EF to rollback the ByBit hack, lest they be deemed liable for being complicit in violating US sanctions. It is difficult to predict how Vitalik and the EF would react in such a turn of events, and what the community’s expectations and social consensus would be in this scenario. Conquest’s second law Can Ethereum rearrange its social graph? I recollect Conquest’s second law of politics, which states that an organization not constitutionally right-wing (or freedom maxi) will inevitably become left-wing (admin/intervention maxi), which should translate in crypto terms to: A project not constitutionally decentralized will inevitably centralize and ossify. In my head, I call this the reverse second law of cryptodynamics. Vitalik has historically evaded the question of Ethereum’s “moneyness”. In discussions I had with him in the past he rather dismissed this value proposition, and his public stance too was and still is non-committal. To me, this vague stance suggested not a lack of rigor but a recognition of the elusiveness of money — one can argue money is more of an emergent property of a commodity or asset rather than a defined checklist that an asset must satisfy in order to deserve the money title. Ethereum’s mindset is by and large consistent with this. The tech-oriented stance is quite reasonable and pragmatic, albeit shortsighted. Historically, widespread adoption has indeed upgraded many commodities into moneys, irrespective of their soundness traits (whatever exact criteria this entails). In the long term, however, these moneys were replaced by collectibles which were optimized for moneyness properties to begin with. Cf. Nick Szabo’s essay on the origin of money (not coincidentally, Nick was one of the biggest supporters of the code-is-law stance towards the DAO hack). To complete the context, the current argument in Bitcoin circles regarding an OP_CAT plugin pertains, on a deeper level, to the question of whether Bitcoin’s path from collectible to money is secured by the mere demand for it as a store of value/e-gold; or whether an additional demand as a utility, e.g.
defi rails, is needed to support its moneyness, and without such demand it will remain “a Rolex”. Crescendo mainnet launch Kaspa humbly suggests that defi rails should satisfy the property of intra-round (simply: real-time) censorship-resistant sequencing with subsecond confirmation (/inclusion) times, rather than inclusion times on the order of 10 minutes and censorship-resistance latency on the order of 1 hour. There is a sense, therefore, that Crescendo feels like a second mainnet launch of Kaspa. For the reasons mentioned, a pure proof-of-work 10 BPS consensus constitutes a qualitative upgrade to Kaspa’s consensus engine; it ranks high in the ordered list of the value propositions of our money (refer further to Michael’s braindump, https://x.com/MichaelSuttonIL/status/1905387292853703157). Whether the timing of the trouble in Terah’s family has to do with this “mainnet launch” is left to the reader as an exercise. Alongside Michael (and the symbolically pseudonymous @someone235), another magician, the truly pseudonymous contributor @coderofstuff_, took ownership, and I wish him or her to remain faceless throughout our adventurous journey to silverness. Kaspa Terah The mainstream libertarian vision of governments expects them to intervene when the need arises to break monopolistic entities, particularly those whose existence hinges on uneven grounds and structured leverage. Drawing the relevance of this axiom to the context with which this blogpost started is, too, left to the reader’s imagination. Trust me, don’t trust me. As promised, it is very difficult to save the ouroboros creature. If I have disappointed you, perhaps you shouldn’t have appointed me in the first place. If you wished for the masks to remain, I have merely pointed at the faces overseeing the masquerade. *https://www.youtube.com/watch?v=y1SeStQ1g4Y [1.] I’m not hinting that this is not unrelated to the current maintainers being Canadian and Australian. [2.] For now. --- [2021-05-05] [kaspa] [trust] [sync] [pow] In which mayday mayday we are syncing about* Don’t trust, terrify! The “don’t trust, verify!” slogan is beyond my comprehension. I board airplanes without verifying anything about the pilot or the aircraft; I visit restaurants and foolishly eat — without verifying what will be transmitted to my blood; I take medicines without verifying the supply chain. Why would I protect my money with measures I am not taking to protect my life?! This reminds me of Jacob, the forefather of Israel, who crossed the Jordan river some 3,500 years ago, back to his homeland. Albeit, he forgot a few small pitchers (think of dust UTXOs), so we are told by later Talmudic Rabbis, and so he went back to the east bank, in the middle of the night, alone, to fetch the pitchers. This was apparently unsafe, and he met a mysterious figure who wrestled with him, a battle from which he came out with a limp, a divine blessing, and a new name — Israel. The Talmud concludes: “From here it is derived that the possessions of the righteous are dearer to them than their bodies. And why do they care so much about their possessions? It is because they do not stretch out their hands to partake of stolen property.”
I hope the connection between this story and SPV sync mode based on multiplicative-hash UTXO-commitments is self-explanatory but, just in case, do note that verifying by yourself the proper functioning of all external systems which you rely on is infeasible, unscalable, and anticivil — civilization is the scaling up of function and trust — whereas doing so only with respect to money is peculiar and will cause you to limp. Why not fiat? Because fiat supply is inflated unpredictably; because the political printing of money is corrupting the democratic sphere, centering the political debate on the allocation of money instead of the creation of wealth; because regulators are attempting to control financial transfers, eliminate cash, restrict economic freedom. We reiterate cryptocurrencies’ golden traits — predictable issuance and censorship resistance. In contrast, the property “no fraudulent transaction ever occurred in this currency system” is not the thing (or at least: not the main thing) users do or should care about. Consequently, when joining a cryptocurrency system, checking its issuance and its decentralization is far more important than checking the historical validity of transactions. Yes, we go so far as to say that users do/should not care about historical corruption. For the user is interested in knowing that the state of her node is the one that is most likely to prevail by the economic majority, not that the state of her node is the valid one in some abstract sense. The primary goal of consensus systems is facilitating agreement, not enforcing consistency. And if the economic majority happens to follow and maintain a ledger to which an invalid transaction entered at some point in history, so be it; the system can still serve its purpose of facilitating agreement even if its constitution was breached at some point in time. Indeed, if the code rules — the constitution — are violated time and again, or even not extremely rarely, the wise user should definitely refrain from joining said network. Users are therefore protected from unsafe networks as long as they can safely assume the existence of efficient ways to communicate and diffuse information on such mishaps. This novel way to diffuse information is called The Internet, or its manual predecessor — Civilization. As with airplanes and restaurants, users rightly assume that they arrived at a functioning system, and that information on past security breaches would have reached them in the conventional information channels. These information channels are typically reliable thanks to highly-staked, ideological, or philanthropic whistleblowers that continuously verify the ledger — or to previous users that have been harmed by the system’s malfunctioning — granting the system its herd immunity. Running full nodes is thus very important for the ledger’s society; the marginal utility, however, decreases rapidly. To summarize: If you are going to join the IOTA network, google it first. Why not Bitcoin? Initial Blockchain Download (IBD) is the process by which new nodes join the network. Since Bitcoin core devs’ ethos is “don’t trust, verify!”, the default behaviour of the new node is to download and verify the entire history of the Bitcoin ledger. Consequently, Bitcoin’s throughput is deliberately limited to enable fast IBD — indeed, processing too many transactions per second today would make it difficult for a user joining in 2040 to verify that today’s transactions were valid. I kid you not.
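To put rough numbers on this tradeoff, here is a back-of-envelope sketch in Python; the transaction size and the 20-year horizon are illustrative assumptions of mine, not protocol constants:

    # Back-of-envelope: how history-verifying IBD scales with L1 throughput.
    # Numbers are illustrative assumptions, not protocol constants.
    tx_size_bytes = 400                # assumed average transaction size
    seconds = 20 * 365 * 24 * 3600     # ~20 years of history

    for tps in (3, 100, 3000):         # transactions per second on L1
        tb = tps * seconds * tx_size_bytes / 1e12
        print(f"{tps:>4} TPS -> ~{tb:,.1f} TB of history to download and verify")

At Bitcoin-like single-digit TPS, twenty years of history stays under a terabyte; at thousands of TPS, the 2040 newcomer faces hundreds of terabytes, which is exactly why the rest of this post argues for pruning and SPV sync by default.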
Back in 2013 my advisor sent me a paper titled “Bitcoin: a p2p electronic cash system”. The new narrative seems to have changed into “Bitcoin: an electronic store of value” coupled with “the attempt to create a p2p electronic cash coin is a scam”. An interesting development that is. Why Kaspa? First, to make Satoshi great again. PHANTOM is really a neat generalization of Nakamoto Consensus (when k=0 PHANTOM coincides with the longest chain rule); it follows the same principles, just with support for concurrency. It is Satoshi at its best, and the only path to fulfil His electronic cash vision. We make DAGs because we know how to and because no one else does. We implement PHANTOM because we want to ping with “send txn” and be ponged “txn mined” in the same manner we get the results of a Google search or send an email. We picked up this challenge in the same way Bitcoin core devs chose to work on Taproot — it is cool and not entirely useless. Secondly, to make myself rich. Third, because we need a base layer whose tradeoffs are centered around the crypto-informed user’s needs and asks. This implies, in particular, and according to the worldview I suggested above — implementing the default node to skip historical verification, making it an optional operation, one that relies on the few archival nodes that some entities choose to maintain and make available (this is the same situation as in Bitcoin, since most Bitcoin full nodes are pruning the data necessary for syncing newcomers, but be advised to not share this info with Bitcoiners — it’s gonna ruin their day and, by implication, your week). Importantly, these archival nodes maintain the ability to prove to newcomers that a certain transaction was fraudulent, by providing the Merkle witnesses for inclusion in the blocks involved in the claimed collusion. Accordingly, Kaspa nodes prune block data by default, and new nodes by default do not request historical data; rather, they sync in SPV mode, i.e., by downloading and verifying only block headers. I reiterate that this is not a stronger trust assumption than that of a history-verifying node, but rather a different requirement. The node then requests the UTXO set from untrusted peers in the network, and verifies it against the UTXO commitment embedded inside the latest received header (technically, this is done against the latest pruning point). If those do not match, the node bans the sending peers, requests the UTXO set from new untrusted peers, and repeats the process. If those match, the node verifies that no unexpected inflation occurred by comparing the sum of UTXOs to the specified minting schedule, a comparison for which block headers suffice. Make Satoshi Great Again My not-novel suggestion to scale up Nakamoto Consensus: Latency constraint on consensus — move from longest-chain to PHANTOM, which is tolerant to and compatible with any predetermined upper bound on the latency. CPU consumption — process few transactions per second on L1, while supporting large payloads, which are cheap CPU-wise, and which enable easy and healthy L2 (e.g., SNARK/STARK proofs for ZK rollups). Bandwidth consumption — design sharding of data and data availability proofs, similar to Eth 2.0’s design, which stopped after phase 1 (see Ethereum’s revisited rollup-centric roadmap); this is an open research question, since PoW has no native identities to serve as the basis for sharding.
Memory consumption and disk I/O — implement class-group-based accumulators that require no trusted setup, and which allow pruning the UTXO set and running as a stateless client. Challenges include: UX of storing and updating the witnesses; weighing memory savings against higher CPU consumption. Storage — prune block data, reducing the storage requirement from O(block header size)*O(num of blocks in history) + O(block size)*O(num of blocks in history) to O(block header size)*O(num of blocks in history) + O(block size)*O(num of blocks in pruning window); additionally, consider pruning block headers, further reducing the requirement to O(block size)*O(num of blocks in pruning window). Pruning block headers is an open research question, since it is then unclear how a new node will be guaranteed it is syncing to the consensus state and not to a stale or malicious branch. However, arguably, any system with (deterministic) finality enjoys/suffers/relies on weak subjectivity, and therefore reading the entire history of PoW might be redundant. IBD time — implement a DAG-adapted version of FlyClient to reduce the cost of syncing a new node from O(num of blocks in history) to O(log(num of blocks in history)). This does not reduce the storage at the serving node, but does allow the syncing node to sync without downloading the entire history of block headers. What are you syncing about? Concluding today’s topic. Kaspa is PoW on steroids. It is optimized for the informed users, not for the ideologues. Its throughput ought to be constrained by real-time performance considerations, not by the performance of downloading and verifying the historical ledger, which is an auxiliary trust gateway, not the primary pillar of trust in the system. Kaspa should reach 100 blocks per second, and to that end, Kaspa nodes should be pruning data by default. When new nodes join the network, they should sync against the heaviest PoW DAG, as it is the one with max likelihood to represent the current consensus view. Node operators should take measures to connect to sufficiently many peers so as not to be eclipsed from the network (as in Bitcoin), so that they hear about historical mishaps, e.g., invalid blocks, finality violations, etc., which are recorded and propagated by full nodes to ensure that new nodes know what they are syncing about. I’m reminded of the very best ad in the history of ads. Pardon my associative memory. The ad pictures a coast guard trainee on his first day on the job. He’s apparently not too well versed English-wise, which is unfortunate for the desperate pilot trying to communicate with him that the plane is … See the ad in the title reference below. * Title reference --- [2020-12-28] [kaspa] [pow] [confirmation] [asic] In which I have no patience to wait ’til by and by* A personal take Fun fact: Many Bitcoiners believe that Bitcoin’s 10-minute block time is not too slow and that having a fast block rate is not useful. This is a very disturbing stance, since I’ve invested lots of conceptual and practical efforts into accelerating block times based on the premise that fast confirmation times — or “Liveness” in consensus terminology — matter, and it will break my spirit if this proves in vain. It is with the noble objective of making sure I’m needed that I present to you below the case for fast Liveness. I believe it matters. The case against fast confirmation times The elephant in the room: No one needs Bitcoin to be user-friendly; it is here to be HODLed, not used.
Bitcoiners rarely pay with their bitcoins, a true Bitcoiner never sells them, and we can all just hope to be as sacred as He who never even touched his bitcoins.¹ While obnoxious, this position is not ridiculous. The most difficult part solved by Bitcoin was the creation of a stateless algorithmically-controlled money system. Of course, this money should be transferable and, ideally, conveniently so; but the main achievement is not “good UX money.” And if improving Bitcoin’s UX would compromise its resilience, decentralization, or social scalability, it’s not worth it. This position does imply, though, that a different cryptocurrency will prevail as the cryptocurrency intended to be used by the masses. The case against fast block times If you ever wondered why you are not invited to VIP Bitcoin parties, it’s probably because you let it slip that you think fast blocks are cool, which proves your deep lack of understanding of our cult’s core principles. This is where continuing to read this post can become very helpful for you. In order to understand what system parameters improve confirmation times, and when, we need to describe the flow of a crypto payment authorization. If you are a merchant receiving payments in crypto, you must wait until transactions are sufficiently irreversible before confirming them to the payers. There are two paradigms on how this waiting is done. In the first paradigm, you wait for a sufficient number of blocks to pile atop the transaction in the ledger; here, accelerating block liveness will shorten your waiting, namely, the confirmation time. In the second paradigm, you wait for a sufficient amount of (absolute) time to pass before confirming; accelerating block times will not shorten your waiting. Essentially, in the first paradigm you wait until enough blocks are mined so that the [hypothetical or invisible] attacker’s chain is shorter than the ledger’s main branch, with a sufficiently high degree of certainty; in the second paradigm you wait until enough time passes so that the attacker’s budget has drained. The two paradigms correspond to two threat models. Illiquid vs. liquid mining Bitcoin miners spend only a few 10^5 USD/hour on protecting our transactions. Their revenue from mining — a.k.a. the security budget — is on the order of a few 10^6 USD/hour. Contrast this with Bitcoin’s market cap of a few 10^11 USD, and you’ve got yourself wondering how 10^6 can protect a 10^11 asset — surely there are ways to make much more than a few millions by attacking the Bitcoin giant, by reversing the ledger after an hour or two. For instance, if you are a financial whale that has the ability to short Bitcoin,² or if your name is Vitalik and you want to double spend bitcoins to keep Bitcoiners obsessed with something other than Ethereum’s success. Definitely worth a few billions. Well, since the Bitcoin mining market is highly illiquid, an attacker cannot get hold of temporary hashrate even if he has a few spare 10^6 USD. Instead, the attacker will need to invest in manufacturing or purchasing mining ASICs, which are priced for long-term mining, and are indeed on the order of magnitude of 10^11 USD. It is the capital expenditure of Bitcoin miners (CapEx), therefore, that protects our Bitcoin transactions, not their operational expenditure (OpEx). The high CapEx is what lends credence to our assumption that the majority of hashrate is held by honest miners (on which I’ll elaborate below), whereas the OpEx seems too low relative to the potential gain from an attack.
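As an aside, the “enough blocks” of the first paradigm can be quantified with Satoshi’s own race analysis from the whitepaper; a minimal transcription (the 0.1% risk tolerance below is my arbitrary choice):

    # Satoshi's race analysis: the probability that an attacker with a
    # fraction q of the hashrate ever overtakes a chain that is z blocks
    # ahead, i.e., the residual risk after z confirmations.
    from math import exp, factorial

    def attacker_success(q, z):
        p = 1.0 - q
        if q >= p:
            return 1.0
        lam = z * q / p
        risk = 1.0
        for k in range(z + 1):
            poisson = exp(-lam) * lam**k / factorial(k)
            risk -= poisson * (1.0 - (q / p) ** (z - k))
        return risk

    for q in (0.10, 0.30):
        z = next(z for z in range(1, 1000) if attacker_success(q, z) < 0.001)
        print(f"q = {q:.0%}: {z} confirmations suffice")  # 5 and 24, as in the whitepaper

The point is that z counts block creation events, not wall-clock time: with one-second blocks, the 24 confirmations demanded against a 30% attacker arrive in under half a minute rather than in four hours.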
But many cryptocurrencies operate in liquid mining³ environments wherein significant hashpower is available for temporary rent on NiceHash; NiceHash is a marketplace for renting hashpower, typically relating to GPU-dominated coins. Now, if attackers can temporarily rent large amounts of hashrate, there’s no reason to assume that they do not control a majority of it, during the attack window. Indeed, most of the 51% attacks on small cryptocurrencies have occurred in GPU-dominated mining environments. In these environments, one needs to think about the security of the system in different terms: Instead of the honest majority assumption, the security assumption underlying liquid mining environments is that a 51%-attacker’s budget is limited and therefore he is unable to spend OpEx — by mining with GPUs or with rented hashpower — for long attack windows. The following table compares illiquid and liquid mining environments:

Illiquid (ASIC-dominated, Bitcoin-like) — hashrate cannot be temporarily rented; security rests on the miners’ CapEx; honest-majority assumption; payments confirmed after enough block creation events.
Liquid (GPU-dominated, Grin-like) — hashrate is temporarily rentable (e.g., on NiceHash); security rests on the attacker’s limited OpEx budget; bounded-budget assumption; payments confirmed after enough elapsed time.

Payment authorization flows In the liquid setup, since the attacker’s budget is affected by the duration of the attack, payments can be confirmed safely after enough time has passed. In the illiquid setup, since the block race between the honest majority and the attacking minority is governed by block creation events, payments can be confirmed safely after enough block creation events have occurred. We conclude that block liveness matters in illiquid ASIC-dominated environments. QED. Here is a payment authorization flow comparison between Bitcoin and Grin:

Bitcoin-like (illiquid) — wait until num_of_confs blocks pile atop the transaction, such that func(num_of_confs, attacker_rhashrate) is below your risk tolerance.
Grin-like (liquid) — wait until attack_cost(elapsed_time) exceeds the assumed attacker_budget.

The above compares the payment authorization flows in illiquid, or Bitcoin-like, vs. liquid, or Grin-like, mining environments.⁵ Observe that the attack cost — hence the waiting time for confirmation — does not depend on the block rate, in the liquid setup. In contrast, in an illiquid environment, func(num_of_confs, attacker_rhashrate) does depend on block creation events; so accelerating block times does in fact shorten confirmation times. On the altruistic majority assumption It is useful to recall Satoshi’s original security analysis wherein neither honest miners nor the attacker regard their own incentives: The honest 51% follow the honest mining strategy altruistically, and the malicious 49% are irrational (aka byzantine) and attempt to maximize the damage inflicted or the success probability of the attack (and not the profit!⁶). This is not to say incentives do not underlie the core logic of the system, but rather that questions about why a miner would mine honestly or maliciously are discussed outside the security model; they are macro level questions, whereas a client accepting transactions should concern herself with micro level dynamics. It is these macro level arguments that may justify our honest majority assumption. Beware! My thought process here makes “the dumbest assumption of all,” according to Nick Szabo — namely, that Bitcoin’s security strengthens as more CapEx is invested into it. I admit I don’t get it. I’m emphasizing Satoshi’s altruistic majority assumption because some people argue that OpEx is essential to security and that without it an attacker could use the same mining resource to mine both good and bad blocks simultaneously (similar to the nothing-at-stake criticism of PoS). My counterargument is that the costless simulation phenomenon is noncontinuous and holds only in the absolute zero OpEx case. But either way, this micro level reasoning with respect to mining requires a different — and, in fact, weaker — security model, namely, one which treats malicious (and honest?)
miners as rational profit-maximizing agents participating in a game. Understanding these mining dynamics requires thorough analysis of optimal mining strategies, and an attempt at that was started here; also see this paper. These results show that a rational attacker can in fact maintain long-term profitable attacks, which questions the assumption that attacks are costly and which invites rephrasing or rethinking the security model. In other words, I disagree with Bob. ASICs are energy-efficient PoW I’m not a fan of liquid mining systems. Yes, they are more democratic; anyone can plug in their GPUs, opt in, plug out, opt out. No barrier-to-entry, no barrier-to-exit. But this is a double-edged sword; “OpEx-heavy” means you burn the security budget by using it. It means the security — by implication, the ability to accelerate confirmation times — can only scale linearly with the amount of resources burnt on the system. To be very very very secure, we need lots and lots and lots of computational resources burnt for us. It works, but it’s inefficient. In contrast, “CapEx-heavy” means you are protected by the miners’ investment, which locks them into the system and which is not burnt with every new mined block (outside of negligible hardware wear). It’s like the difference between buying a house and renting it. Your monthly mortgage payment might be similar to the rent, but you never enjoy a month’s rental payment beyond living in the residence for that specific month. An ironic implication: Green PoW attempts which use available commodity hardware, such as Chia’s and Spacemesh’s proof-of-space, are in fact more wasteful than ASIC-able PoW, because they constantly burn the security budget. Pro tip: When dating a Bitcoiner, never tell her you care about energy-efficient PoW. An immediate, irreparable turn-off. Optimizing for the happy path I would like to finish the blog by making a fool of myself and claiming that the most important threat model is when a no-attacker assumption is applied, formally, attacker_budget = $0 and attacker_rhashrate = 0%. In many use cases the payee is not really concerned by the payer carrying out a double spend attack. E.g., in cup-of-coffee-type purchases, or when the merchant knows and trusts the payer — remittance, in-person payments, etc. In such instances, good UX implies a very fast first confirmation time, that is, a short time span between the user broadcasting the payment and the receiver observing it has been mined into the ledger. This category also includes cases where the good or service paid for will only be delivered to the buyer at a later date. Such is the case in e-commerce which takes several days to ship, thus providing enough time for the seller before real harm can be done. And such is the case when trading crypto against a centralized exchange, or a crypto-to-fiat gateway, which settles the IOU at a later date, off-chain. This time-to-first-confirmation metric is almost always overlooked, as researchers and devs tend to analyze the usability of respective blockchains with a principled methodology, and treat the first confirmation as merely one step towards sufficient (probabilistic) irreversibility of the state. However, while the system is only usable due to its robust worst-case guarantees, real-world usage frequently requires, simply, speedy inclusion in the ledger, which establishes the importance of the time-to-first-confirmation metric.
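A toy model makes the metric tangible: under idealized Poisson block arrivals (ignoring propagation delay and fee-based mempool queueing, both simplifying assumptions), the chance that a freshly broadcast transaction is mined within t seconds is 1 - exp(-bps * t):

    # Time-to-first-confirmation under idealized Poisson block arrivals.
    from math import exp

    def p_first_conf(t_seconds, bps):   # bps = blocks per second
        return 1.0 - exp(-bps * t_seconds)

    t = 10                              # the payer's patience, in seconds
    for name, bps in (("Bitcoin", 1 / 600), ("1 BPS", 1.0), ("10 BPS", 10.0)):
        print(f"{name}: P(mined within {t}s) = {p_first_conf(t, bps):.1%}")

A payer willing to stare at the screen for ten seconds almost never sees a Bitcoin confirmation in that window, and essentially always sees one on a 1+ BPS blockDAG.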
Importantly, note that in PHANTOM and many other DAG-based protocols, valid transactions that entered the DAG and that do not admit double spends will be accepted and admitted to the state regardless of the final ordering; in database language, they are commutative with respect to other transactions co-mined in the same level in the DAG. This implies that an attacker can reverse a payment only if he engaged directly with the victim, i.e., paid the merchant in exchange for another commodity or service and then went on to reverse the transaction. This is in contrast to a single-chain-based protocol, where overriding the selected chain automatically overrides all transactions in it.⁷ When we refer to an attacker with attacker_rhashrate = 0 we do not assume, therefore, that 100% of the mining is done by honest nodes, but rather that no attacker node engaged with the merchant directly in the to-be-confirmed transaction (or in very recent transactions on which this one depends). Finally, and perhaps most importantly, a fast first confirmation is crucial for a good UX for the payer, regardless of the merchant’s waiting and authorization policy. Paying with crypto is an anxiety-inducing process, and an ordinary end-user wants to see the result of her attempts as fast as possible. Bitcoin’s ten minutes is just way too slow, it’s not the way money goes. That’s the way the money goes Impatience is a core trait of the Kaspa community. The main reason we are obsessed with DAGs is we want to see our transactions confirmed at Internet speed, similar to other web-based services. For use cases that require more than one confirmation we know to revisit the app later to check for the final status. Nevertheless, we need this speedy responsiveness out of the crypto app; we want a cryptocurrency to ping and be ponged, and we want it now. Impatience will also motivate the smart contract layer design I will propose here [WIP], where I will suggest optimizing for speed rather than for decentralization. Stay tuned, and start getting used to instant confirmations. *Title reference [1] Or maybe he just kept losing his private keys out of sheer cumbrousness, and he’s now very upset about this whole Bitcoin thing. [2] One could challenge this claim if one could provide evidence that there aren’t enough financial instruments and liquidity to profit, in the mentioned order of magnitude, from an hours-long attack on Bitcoin; then one would need to provide a model for confirming transactions based on the existing financial liquidity for such manipulations. Until then, we’re good. [3] This term has nothing to do with liquidity mining in DeFi. [4] CapEx dominates in the initial days of the ASIC. Of course, as time goes by the miner pays more and more OpEx. A mining-informed friend told me that a very rough approximation of the CapEx to OpEx ratio, at the end of the machine’s lifetime, is 1:1. A PoW system which goes more extreme on the CapEx side would use an optical PoW function, which uses cutting-edge optical computing chips. These chips run the central part of the computation on photons, whose interaction does not emit heat, in contrast to electrons running on wires. Also relevant is a paper by Ganesh et al., suggesting virtual ASICs, a protocol that mimics any point on the CapEx-OpEx spectrum. [5] (*) elapsed_time is the time that passed between the transaction’s inclusion in a block and now.
(**) A more nuanced approach to the attack_cost paradigm would additionally consider the attack_success_probability, for the case of an attacker renting less than 50% of the hashrate. This does not change the qualitative differences between the paradigms, and so to keep things simple I assumed instead that the attacker is >50%, during the attack window, when attacking a liquid mining system. And vice versa: elapsed_time can be used in illiquid mining systems as well, to shorten confirmation times, if the client applies more sophisticated confirmation policies; see this paper. Still, accelerating block times would accelerate confirmation times in these policies as well. [6] For instance, the bounds given by Satoshi on the attacker’s probability of successfully overriding the longest chain are calculated for an attacker that never abandons his one-shot attack, even if many blocks behind. Such an attacker is clearly not concerned with cost and profit. [7] Admittedly, transactions removed due to chain reorgs can be, and in many systems are, pushed back to the mempool. However, these transactions are not guaranteed to enter the blockchain in a relevant timeframe, either due to insufficient fees in the post-reorg congestion, or due to users keeping on transacting and innocently double-spending previous payments. --- [2020-12-14] [kaspa] [launch] [mining] [fairness] In which I love my truly, truly fair* DAGlabs is a for-profit entity whose business model is based on mining Kaspa. DAGlabs is additionally funding many core Kaspa devs and researchers. Fair launch Kaspa contains no premine or founders’ rewards. At the same time, Kaspa is neither attempting nor pretending to be a “fair launch” coin, as the term “fair” is subjective, too broad, and absurd; early- and late-comers are at odds and cannot be co-satisfied. Bitcoin is extremely unfair to people born in 2030. The last major attempt at a fair launch — the Grin MimbleWimble project — was not successful, disappointed too many early members, and effectively buried the project; the community and coin are influencing nothing in crypto today, which is sad and wasteful, as it is a beautiful protocol and was even built in Rustlang.¹ Arguably, two properties of Grin contributed to its failure: it is GPU-based, hence has no barrier to entry and required no long-term commitment from its early investors; and it has a constant block reward, leading to high inflation and hindering the store-of-value property for its early participants. Grin’s attempt at mimicking Bitcoin’s organic growth of value and community therefore failed, since the crypto market at that time (and certainly today) was already mature and developed, investment vehicles were well-employed, and participants’ patience had significantly shortened due to other competing innovations. Community launch In contrast, Kaspa is a “community launch” coin. The goal is that early members would (1) be able to secure for themselves an allocation of Kaspa, with some degree of certainty, and (2) enjoy a low inflation rate on their holdings. A mechanism to achieve this would need to (a) let the market price the opportunity (rather than, e.g., the core devs price it), and (b) tune the inflation parameters in accordance with the “size” of the community as represented by some verifiable quantity.
That is, the larger the community gathering around the coin, the lower the inflation rate you want this community to suffer from, which translates to a steeper inflation curve.² Since the protocol itself is faceless and organization-less, the mechanism will be deployed outside the protocol, by DAGlabs, as the largest miner on day one, on a best-effort basis, and still entailing a large degree of uncertainty. Initial Hashrate Offering DAGlabs will oversee a pre-launch customization of ASICs that will support a hash function specific to Kaspa, providing a head start on mining. These machines will dominate the market for an unknown period until other players decide to enter. The value of these machines must be distributed across early contributors, investors, the community, and a future development fund in some form of ASIC presale. The benefits of an “ASIC presale” model are described thoroughly in Nic Carter’s post, In support of the proof of work [un]fair launch. TL;DR: Kaspa will have good and well-understood PoW security from the start, the regulatory risk normally surrounding token presales is reduced, and early adopters are rewarded and development is monetized at a limited capacity, among other benefits. Unfortunately, and diverging from Carter’s ideal, it is impractical for DAGlabs to support a logistical pipeline of physical distribution and shipment of machines, due to several factors: (i) it would be too expensive and cause logistic havoc, (ii) there is not enough demand for pre-sold ASICs — average miners wouldn’t acquire machines that can only mine a coin that does not yet exist,³ and (iii) the ASICs are not particularly friendly to home users, so even folks with a few GPUs — who would normally be a good audience for an ASIC presale — would find it slightly more challenging to operate an ASIC. Thus, DAGlabs’ main efforts would be (*) distributing the mining machines both geographically and politically, and (**) facilitating economic participation of the community in mining, namely, creating a financial vehicle for early community members to be invested in the coins mined by the machines. Later efforts would include (***) further distributing the full nodes that control the mining process, enabling community members to run the software that dictates the block templates to be mined, and (****) creating a thin logistical pipeline to sell machines to interested parties and enthusiasts. No free launch Before mainnet launch, a DeFi contract will auction out the right to future Kaspa coins. Interested community members will lock some coins (say, ETH) on said DeFi contract and receive Kaspa at a future date. These Kaspa would originate from the machines operated by DAGlabs. The coins will flow to the contract, and in exchange the contract will release the locked ETH and send it to DAGlabs. The money will (1) repay DAGlabs for the capital expenses of the ASICs, (2) fund the operational expenses of these ASICs, and (3) be saved for future development. Importantly, in case DAGlabs does not flow Kaspa in a relevant timeframe (for whatever reason), the contract would automatically unlock the ETH and return it to the participants; thus, even in the worst case, a participant will not pay without receiving the promised Kaspa. To discourage DAGlabs from maliciously keeping Kaspa instead of flowing it to the DeFi contract, there will be an off-chain tool that continuously demonstrates the hashrate of the ASICs, providing transparency around mining.
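The described flow is, in essence, a time-locked escrow. A toy state machine, to fix ideas; the names, the deadline mechanics, and the KAS-per-ETH rate are my own placeholders, not the actual contract:

    # Illustrative state machine for the presale escrow described above.
    # Purely a sketch: names and mechanics are hypothetical placeholders.
    import time

    class PresaleEscrow:
        def __init__(self, deadline_ts):
            self.deadline_ts = deadline_ts
            self.locked_eth = {}        # participant -> locked ETH
            self.delivered = False

        def lock(self, participant, eth):
            self.locked_eth[participant] = self.locked_eth.get(participant, 0.0) + eth

        def deliver_kaspa(self, kas_per_eth):
            # DAGlabs flows mined KAS in: KAS goes to participants, and the
            # locked ETH is released to DAGlabs (CapEx, OpEx, dev fund).
            self.delivered = True
            return {p: eth * kas_per_eth for p, eth in self.locked_eth.items()}

        def refund(self):
            # Worst case: no KAS flowed in time, so ETH returns to participants.
            assert not self.delivered and time.time() > self.deadline_ts
            refunds, self.locked_eth = self.locked_eth, {}
            return refunds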
Parties that participated in this contract-based ASIC pre-sale and are interested in owning and holding the ASICs physically should contact DAGlabs to arrange this; DAGlabs won’t be able to cover the shipping cost. Finally, as argued above, participation in the pre-sale importantly determines the size of the early community and the timeline of community growth, which in turn should determine the aggressiveness of the minting curve, or the disinflation rate. The larger the size of the early community (i.e., higher participation in the DeFi contract), the more aggressive we can and need to be with disinflation in order to protect the investment and participation of the community. Thus, the end state of the DeFi presale will determine the disinflation parameters of Kaspa’s monetary policy embedded in its Genesis block, according to a precommitted formula. A concrete description of the contract and mechanics appears here. Title reference [1] The excitement around Grin was composed as follows: 15% — writing in Rust, 25% — author’s pseudonym “Mimblewimble” from Harry Potter, 25% — the protocol itself, allowing cut-through of STxOs when syncing new nodes, 35% — a known Blockstream dev contributing to the project. [2] To some extent, the to-be-described mechanism replaces one of the roles of PoW, namely, a slow and organic price-discovery and distribution mechanism. Indeed I argue that in today’s crypto climate PoW can no longer fully serve this role properly, and is instead “just” the best infrastructure for a secure, sovereign cryptocurrency. [3] Most miners, as it turns out, follow the market and do not lead it. They are not particularly interested in the software and the protocol, which explains why they won’t be a proper target audience for an Initial ASIC Offering. --- [2020-11-24] [ghost] [kaspa] [pow] [blockdag] [origin] In which I run with the GHOSTs of my hometown* In his youth he reasoned: Since gold is more valuable, it ranks as money; whilst silver, which is of lesser value, is regarded as commodity… But at a later age he reasoned: Since silver [coin] is current, it ranks as money; whilst gold, which is not current, is accounted as commodity. - Babylonian Talmud, Tractate Baba Mezi’a, pg 44 Mistakes happen In 2013 I entered a master’s degree program in computer science at the Hebrew University. I was recommended by the head of school to work with Aviv Zohar, but Aviv suggested we work on Bitcoin and, coming from mathematics undergrad studies, I found Bitcoin awfully boring and practical. Half a year later, during Shabbat dinner, a friend told me about Bitcoin, and since I hadn’t done anything useful in the time that passed, I decided to return to Aviv (I spent the preceding semester trying to solve the Goldbach conjecture in order to win a $1M prize. In retrospect, I could have gotten the money by investing a few bucks into Bitcoin. Mistakes happen.¹) My lab project turned into a Financial Crypto paper analyzing the security-speed tradeoff of Nakamoto Consensus and proposing the GHOST protocol. GHOST is an alternative to Bitcoin’s longest chain rule, using the proof of work embedded in off-chain blocks by traversing the tree structure (resulting from forks under high speed) and selecting the main chain differently.
The idea was to count the number of blocks that extend a certain block rather than the length of the chain above it. This was supposed to alleviate the need to suppress the block time and to allow for very speedy blocks without suffering from orphans and deteriorating security. Some time after the protocol’s publication, Vitalik Buterin incorporated GHOST into the Ethereum white paper and roadmap. Ethereum didn’t end up implementing GHOST. However, they did end up implementing a variant of our Inclusive Blockchain Protocols, also published in Financial Crypto that year (coauthored with Yoad Lewenberg). This paper first proposed the directed acyclic graph structure for blocks, the “blockDAG,” but its focus was on increasing throughput (for the same security) and linearizing the block rewards across miners. Around the same time we developed a more thorough approach towards PoW consensus, and as part of it we realized a weakness in GHOST that does not allow it to scale regardless of network parameters — meaning, GHOST’s block times and throughput, like Bitcoin’s, are still limited by the network’s latency. We then directed our efforts towards devising a PoW protocol, or family of protocols, that would alleviate the speed-security tradeoff, and would allow for arbitrarily short block times (provided the network’s capacity limit is not exceeded). These protocols eventually became my PhD thesis, as well as my personal obsession (there are others, but I find this particular one less useless). Longest chain is largest 0-cluster The proposition underlying my PhD research was that PoW can be made into an internet-speed service, if done correctly. Other than achieving speedy confirmation times, high block rates reduce the variance of block rewards, mitigate the need to join mining pools, and thereby contribute to the decentralization of mining. This vision is reinforced by the observation that Nakamoto Consensus is a special case of a larger family of protocols, PHANTOM (in fact, Nakamoto Consensus is the slowest member in this family), which can support a high block rate — subsecond block times, say. These protocols are based on blockDAGs. The core idea behind using a PoW-based DAG is to replace the mining paradigm of Nakamoto Consensus, in which miners propagate and extend the winning chain only, with a more informative paradigm, in which miners propagate and extend the entire history of blocks — each new block points at all recent blocks in the history, rather than at the winning one. This history — the blockDAG ledger — may contain conflicts, so the protocol needs to provide a preference ordering over all blocks in order to resolve inconsistencies. The properties of the resulting blockDAG system, and specifically its resilience to 49% attackers and its speed of convergence, are determined by this ordering protocol. The latest protocol that we devised is called GHOSTDAG, and it is a member of a family of protocols called PHANTOM. In PHANTOM, we select (i.e., give preference to in the ordering protocol) the largest k-cluster in the DAG, where a k-cluster is a sufficiently connected set — “sufficiently” as defined by the k parameter. This pre-encoded parameter represents an upper bound over the expected width of the DAG, informally the expected degree of asynchrony. When you choose k=0, PHANTOM coincides with the longest chain rule, and so it is a family of protocols of which Nakamoto Consensus is a special case. GHOSTDAG is a practical, efficient variant of PHANTOM which chooses a large enough k-cluster.
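For the tree case, the original GHOST selection rule (follow the child with the heaviest subtree) can be sketched in a few lines; a minimal illustration, not the production algorithm, and note that PHANTOM/GHOSTDAG generalize the idea to DAGs via k-clusters:

    # GHOST chain selection: descend from genesis, at each fork following the
    # child whose entire subtree contains the most blocks (the most embedded
    # PoW), rather than following the longest chain.
    def subtree_size(block, children):
        return 1 + sum(subtree_size(c, children) for c in children.get(block, []))

    def ghost_chain(genesis, children):
        chain, block = [genesis], genesis
        while children.get(block):
            block = max(children[block], key=lambda c: subtree_size(c, children))
            chain.append(block)
        return chain

    # B's subtree embeds 4 blocks vs. 3 on C's side, so GHOST prefers B even
    # though the longest chain (G-C-C1-C2) runs through C.
    children = {"G": ["B", "C"], "B": ["B1", "B2", "B3"], "C": ["C1"], "C1": ["C2"]}
    print(ghost_chain("G", children))   # ['G', 'B', 'B1']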
Block liveness matters Some people argue that “faster block times” is meaningless, and that confirmation times do not depend on block times. I agree that in a highly liquid, GPU-based mining platform a transaction’s time to (probabilistic) settlement does not shorten due to short block times. Having conceded this, in a different post (WIP) I will elaborate on the importance of a high block rate for faster confirmations, especially in an ASIC-powered system. The path to preminelessness A few years into my PhD, there was interest from the community to build these protocols into a working platform. Guy Corem, and later Sizhao Yang, helped me raise a few million dollars from crypto VCs — a small amount in crypto 2018 terms — and we bootstrapped a small team of developers and researchers called DAGlabs. The money was given with the understanding that there would be no presale of tokens; premine or founders’ rewards models are quite antithetical to a decentralized cryptocurrency project, as they usually (read: always) imply the control of a central organization, be it a for-profit company or a “non-profit” foundation. Instead, DAGlabs used part of the money to purchase mining equipment, and it will participate in the mining of the new token with a plainspoken advantage of being the first to its market. More on this, and on mechanisms for community members to participate in mining, in a separate post. The major part of the funding went into R&D. DAGlabs developed code that is based on a fork of the BTCD full node codebase. A primary objective was keeping the design of the system as close to Bitcoin’s architecture and assumptions as possible: PoW, blocks, UTXO, transaction fees, etc. The codebase underwent significant refactoring by the core devs, who then adapted it to a blockDAG governed by the GHOSTDAG protocol. Kaspa it is The vision behind this project is to build a Nakamoto-like service that operates at internet speed. We wanted to build a system that surpasses the limits of Satoshi’s v1 protocol (aka Nakamoto Consensus) yet adheres to the same principles embedded in Bitcoin. Contrary to Satoshi’s vision, Bitcoin did not become a peer-to-peer electronic cash system. Instead, it is solidifying as the ultimate store of value, or e-gold, and that’s pretty much it from the Bitcoin side.² This is not a mild achievement by any measure — it’s one of the most important financial revolutions in human history. Yet it leaves lots of room for improvement (of L1) and/or for choosing different tradeoffs (for L1). For a thorough inspection of Satoshi’s vision, I highly recommend Examining a Conspiracy Theory about Satoshi’s Intent by Elliot Old. We look to silver, which presented a different tradeoff vs. gold. As demonstrated in the prefacing quote about gold vs. silver (in the original Aramaic text: Dahava vs. Kaspa), silver was historically treated as less precious than gold but more circulative, less valuable yet more acceptable as payment. With this prospect I suggested the name “Kaspa” for the project; “kaspa” is the Aramaic word for “silver” and “money.” There are no solutions, only tradeoffs Aside from optimizing for speed and accelerating the block rate, there are several axes on which I suggest Kaspa trades off differently than Bitcoin: Increased throughput — gain: lower transaction fees; loss: increased CPU and bandwidth consumption for full nodes.
I suggest that Kaspa, unlike Bitcoin, not be optimized for home users being able to run full nodes (even though they most probably would be able to do so, with 2020 commodity hardware) but rather for flexibility and user convenience (manifested in speed of confirmations and low transaction fees). Specifically, Kaspa aims at supporting a few thousand transactions per second, similar to credit card companies’ transaction volume. An alternative is to keep the throughput of the base layer low, in terms of transaction count and specifically the number of cryptographic primitives per block, yet allow for large payload data, with the intention to support smart contracts in L2. This would keep the CPU requirement for full nodes low, but increase the bandwidth requirement; though one can imagine demand for a new type of node, a partial node, which validates all base layer transactions similarly to a full node, yet skips the data availability checks with respect to transactions’ payload. I will continue this discussion in a separate post (WIP). Pruning historical data by default — gain: faster sync for new nodes; loss: new nodes sync in SPV mode, by default, or, if they wish to validate the entire history, through resource-heavy (hence more centralized) archival nodes. More on this in a different post (WIP), in which I argue that validating historical data, thereby protecting against historical corruption, is not meaningful to user sovereignty. Support for L2 expressiveness — gain: a plethora of use cases; loss: slight increase in the attack surface of the base layer due to the addition of scripts. The base consensus of Kaspa should be limited, Bitcoin-like, with scripts that unlock specific users’ outputs and that have no read/write permissions to the entire state. At the same time, innovation around Ethereum L2, specifically Rollups, allows for the base layer to enforce consensus without being aware of [anything other than a hash commitment to the] state. Kaspa should incorporate this, and should participate in the amusing innovation of DeFi going on over Ethereum. I will propose a new design that optimizes for speed of confirmation of L2 transactions in a different post (WIP). On a zero-sum game vs Litecoin Admittedly, as Nick Szabo likes to remind us, any new cryptocurrency gets a negative on the social scalability part. This can be said of any new social network, not only money-powered ones. The sentiment against the proliferation of coins is shared among almost everyone — we are all maximalists with respect to most cryptocurrencies created by other teams; non-coiners just go one coin further.³ In a sense, Kaspa does not aim to become just another PoW coin but to replace an existing one — Litecoin. Litecoin was supposed to be the silver to Bitcoin’s gold, so we were told. However, it turned out to be quite totally useless, and provided no meaningful functionality over Bitcoin — it is essentially a basket of minor parameter changes from Bitcoin. From the perspective of a 2020 end-user, there’s no major difference between waiting for 15 minutes and waiting for an hour; both are equivalent forms of “forever.” I admit to not fully understanding the “lite” aspect in Litecoin. Despite the meme, nothing in Litecoin’s design makes it lighter; it bears the same block size limit as Bitcoin, and a theoretically higher throughput capacity.
The only reason it is currently “lite” is that it is empty… But a silver should be used more, not less, frequently than gold to justify itself, and a crypto silver’s ledger should in fact be heavier than Bitcoin’s. Perhaps it is widely understood that Litecoin is simply a light-minded version of Bitcoin and its community, being more agile and less principled than Satoshi’s adamant followers. The founder going by @satoshilite speaks more to this than any further discussion would. I suppose this light-mindedness was a positive thing at the time, as a testbed for SegWit was very much needed. (Just saying: if serving as a testbed for one patch to Bitcoin’s protocol provided Litecoin with an intrinsic value of 10¹⁰ USD, imagine the potential market cap of Kaspa serving as a testbed for ten Bitcoin protocol patches!) My hope is for Kaspa to become a home for the principled community of devs, researchers, and PoW fans, to be a vibrant testbed for new ideas, a place where innovation is welcome (outside the money base!) and the path to integration of tested and justified features is in sight. This agility is one of the privileges of being the self-proclaimed “little brother,” and we should take full advantage of it. The burden of PoW is on us.

Title reference

[1] I did solve the conjecture though, which was really fun and satisfying. It started by first breaking RSA using a hypothetical machine that transmitted waves whose crest points were supposed to coincide with the solutions to the relevant parabolic Diophantine equation. It turns out that, for this to work in practice, the size of the machine had to exceed the size of the observable universe, as explained to me by our math professor. He claimed that my solution was equivalent to using an infinite registry, hence trivial. Although he was empathetic and all, I decided to leave the production-ready version for another time.

[2] Notably, there are efforts to make Bitcoin usable for payments at large scale: the Lightning project. It is an interesting albeit unproven path for scaling, one which is more complex conceptually and UX-wise, which requires more trust (watchtowers, etc.), and whose economics and decentralization dynamics are yet unknown.

[3] By the way, Richard Dawkins’ original statement assumes a very specific ordering of arguments. But if you first poll for the question of whether Nature was created or rather created itself into being, most of us would probably take the former stance, despite disagreements about the pseudoidentity of its Creator. If you then run the GHOST protocol on the nodes of statements you will arrive at a different conclusion than the one Dawkins hoped for. So ordering matters.

---

[2020-03-04] [ethereum] [rollups] [defi] [scalability]

In which we’ll be reduced to a spectrum of gray*

Why is Ethereum Ethereum

Sure, Ethereum is great and all, but why? What exactly made it into this vibrant and innovative ecosystem? And can it last? I’m writing this post in an attempt to clarify to myself and pinpoint Ethereum’s unique value offering and the emergent decentralized finance ecosystem. Ethereum’s success is commonly ascribed to its composability, the ability to compose one smart contract on top of another like Lego bricks, leaving little-to-no need for development coordination: once a dev team has deployed a smart contract, any other smart contract can access it, permissionlessly.
However, while Ethereum does harness the power of openness and permissionlessness to reuse, extend, and compose different code bases and code components — this is by no means new or unique; it characterizes open-source values and structure in general. Almost by definition, open source allows one to build software composed of several code bases and libraries developed by others. So shared functionality, or composability of functionalities, doesn’t quite pin down the matter.

Composability — everyone’s talking about it

The composability everyone’s referring to is, I suppose, the ability to compose different functionalities while acting on and inside the same state. Ethereum combines open-source shared functionality with Nakamoto Consensus to enable shared alterable state. Without this shared state, the DeFi movement is reduced to a rising demand for financial products that offer permissionless access to open APIs which communicate over a shared network (otherwise known as: the internet). If it were just transparent finance and open APIs, users wouldn’t be able to issue transactions that access several dapps or financial products simultaneously in a manner whose consequences are predictable and pre-understood. Rather, they would need to transact separately with each dapp, and while these interactions could be automated, the cascade of events across these composable contracts wouldn’t be predictable. And Ethereum is a game changer for DeFi precisely because it enables predictable cascading transactions, transactions of the form “Alice locks Token Ali and Token Baba in the autonomous contract Compos, Compos increases its pool of Token Ali and Baba, their respective exchange rates adjust, and Alice is added to the list of shareholders benefiting from fees collected by Compos” — all these assertions necessarily happening together (or together not happening). This property is known as atomicity, namely, the guarantee that a set of events and consequences will happen or not happen together. Let your imagination be carried away by these endless composability options and you’re on the right track to becoming a DeFi dev. Of course, transparent and permissionless finance is still an improvement over traditional finance, but these do not capture Ethereum’s ecosystem and the flywheel responsible for growing its network effect.

CAP

In this section I want to illustrate and emphasize that composability + shared state = synchronous composability. Indeed, acting on a shared consensus state effectively requires that each transaction, at its turn, locks the state, fully executes, and completes its full update of the state, as in the Alice and Compos example above. (I disagree with Georgios on this point.) To press this point further, and to start unfolding Ethereum’s roadmap, imagine an Ethereum variant where synchronous composability is violated; as we will see, unfortunately this thought experiment is not hypothetical, and there are good chances Ethereum is going down this road. We’ll come back to this shortly. Concretely, imagine that smart contracts wouldn’t communicate synchronously. That is, smart contracts would still be composable in the sense that they automatically interact with each other and trigger mutual function calls, but these function calls would not happen within the same transaction or block; rather, in this for-now-hypothetical Ethereum, it’d take a few hours or days for one contract to consider and update upon messages sent by other contracts.
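To make the contrast concrete, here is a minimal sketch (toy Python, not EVM semantics; all names and step functions are illustrative): in the synchronous regime a composite transaction commits or reverts as one unit, while in the hypothetical asynchronous variant only the first step executes now and the rest land later, against whatever the state has become.

```python
import copy

def sync_composite_tx(state, steps):
    # Synchronous composability: all steps act on one snapshot, atomically.
    snapshot = copy.deepcopy(state)
    try:
        for step in steps:
            step(snapshot)            # every step reads/writes the same state
    except Exception:
        return False                  # revert: none of it happened
    state.clear()
    state.update(snapshot)            # commit: all of it happened
    return True

def async_composite_tx(state, steps, inbox):
    # The hypothetical variant: step 1 executes now; the rest are queued and
    # applied hours or days later, against a future, unknown state.
    steps[0](state)
    inbox.extend(steps[1:])

# A stripped-down version of Alice's composite transaction from above:
state = {"ali": {"alice": 100}, "compos": {"pool_ali": 0, "shareholders": []}}
def lock_tokens(s):
    s["ali"]["alice"] -= 100
    s["compos"]["pool_ali"] += 100
def add_shareholder(s):
    s["compos"]["shareholders"].append("alice")

print(sync_composite_tx(state, [lock_tokens, add_shareholder]))  # True
```

The point of the sketch is the snapshot-then-commit discipline: atomicity is what makes the cascade predictable; queue the tail of the cascade in an inbox and that predictability is gone.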
It is highly questionable whether such a system would be as impactful a dapp platform as Ethereum has become. Among other difficulties, one obvious one is the great degree of unpredictability and indeterminism in the hours- or days-delayed execution of the transaction — the violation of synchronous composability is a violation of atomicity, and in turn of predictability: the consequences of the cascade of events triggered by a transaction would be prohibitively unpredictable. While Ethereum users already suffer from some degree of uncertainty regarding the outcome of their transactions — a caveat inherent to acting inside a shared asynchronous environment — delaying parts of the execution for hours, let alone days, would make it rarely useful to issue rich composite transactions. Following these observations, and returning to Ethereum’s original shared-state principle, we may conclude that Ethereum’s main value proposition is not mere composability but synchronous composability, i.e., the ability to compose different functionalities while acting on and inside the same state. So far so good; we can summarize Ethereum as satisfying CAP: Composability, Atomicity, Predictability. (The CAP theorem is anyway lame and should be abused in good conscience.)

The bad news

But shared state is inherently unscalable, as every individual transaction imposes externality costs on the entire network. This is the why and what of sharding — splitting the state across different zones so that a transaction imposes a computational burden on the validators of its shard only. Almost any proposed roadmap for scaling Ethereum contains, in one way or another, sharding. While sharding is associated with the Eth 2.0 vision, the updated rollup-centric vision for scaling Ethereum will silo the state across different rollups. (I use siloing to describe spontaneous grouping of smart contracts and validators, to distinguish it from random uniform allocation of validators, as in classical sharding.) The by-design independence of a rollup (or shard) implies that its state can depend on events of another rollup only after the latter’s state has reached finality. From rollup Y’s perspective, we replace the question “Is the state of rollup X correct?” with “Is the state of rollup X finalized?”, which is sufficient for rollup Y and the system as a whole to operate and cooperate (users of rollup Y do not care, at least at the micro level, if the state of rollup X was corrupted). This means that transactions that touch a few smart contracts across different rollups will need to wait a long time before they are fully executed — if Alice’s tokens are managed in rollup X, and Compos is managed in rollup Y, the execution of Alice’s transaction will begin in rollup X, then wait for its finality, and only then complete inside rollup Y. We have reached the crux of rollup-centric Ethereum’s composability problem: the finality delay of the underlying sharding system dictates the degree of asynchrony, and with it the degree of unpredictability of transaction execution. If we had finality periods of 2 seconds, as in Cosmos’ utilization of Tendermint (which is not free; stronger assumptions are made), you could argue that maybe async composability is useful. But with optimistic rollups, finality is said to be hours or days long, maybe even weeks. This renders async composability useless for an optimistic-rollup-centric Ethereum.¹ Folks have proposed to circumvent the finality delay using cross-rollup liquidity providers (LPs).
The solution is designed for fungible assets only, and it operates as follows (a minimal sketch of this flow appears at the end of this section): (1) Alice wishes to withdraw her funds from rollup X and transfer them to rollup Y (or to a base layer). (2) Alice contacts an LP that has sufficient funds in the relevant fungible token on both rollups. (3) Alice sends her tokens to the LP on rollup X. (4) The LP sends Alice the same amount of the corresponding token (minus a commission) on rollup Y. The transactions in steps 3 and 4 are executed immediately on rollups X and Y, respectively, but are actionable outside each rollup only after the finality delay. (BTW, similarly to trustless staking pools, trustless liquidity pools don’t make sense. Happy to help.)

Ethereum future.0

After this deliberation, and reflecting on the possible outcomes, I see a few possibilities for the Ethereum system henceforth:

Monopolistic Scenario: One rollup takes it all, sync composability wins, sharding loses. In this path, there is a scalability gain from decoupling computation from data availability (the former handled by the rollup, the latter by base consensus), which is somewhat beneficial, but users’ transactions still impose full externalities on the entire network, so state bloat is not addressed. Since different rollups introduce different assumptions (e.g., ZKRUs don’t rely on censorship resistance for correctness), a winner-take-all outcome could be riskier. I will elaborate on why this is still somewhat beneficial in another post.

Optimistic Scenario: Multiple rollups thrive, sync composability is ruined, async composability is useless, the Ethereum network is fragmented into rollup subnetworks, and liquidity providers facilitate the interoperability of the subnetworks with respect to fungible tokens. Ethereum’s base layer would remain the kernel of the ecosystem, its settlement layer, serving the dispute resolution (DR) and snark/stark verification functionalities, and potentially the data availability (DA) one as well. While Ethereum’s base consensus is okay for DR, these scenarios would/should probably lead to using a different DA layer that is optimized for this functionality; Vitalik half-jokingly once suggested BCH as a candidate, and some side-chain projects offer their own DA layer. (This topic is outside the scope of this post, but do remind me to tell you about the ideal requirements from a DA layer, in light of the MEV crisis. Thanks.)

All hell breaks loose scenario: A supernova event. With the ecosystem built on the foundations of a shared-state kernel, and with this kernel losing its power due to unscalability and surging gas prices, natural and genuine forces in the community will clash amid efforts to coordinate migration into specific rollups and recenter around a shared state. Considering the size of the community, coordinating migration will fail, clashes will increase, and the system and its original vision will collapse onto itself, leading to what I dub a supernova of the Ethereum giant. The network effect of Ethereum as a cohesive ecosystem will evaporate into the ether, and the remnants of this event would form few-to-many subnetworks interoperating through designated LPs, ignoring or skipping Ethereum’s base layer, implementing their own data availability and dispute resolution layers. Sociodevologically, projects and devs would still identify themselves as hardcore Ethereans. However, the social consensus around this identity, and the legitimacy of proclaiming it, will naturally fracture.
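As promised, a minimal sketch of the LP fast-exit flow (toy Python; the Rollup class and every name here are illustrative assumptions, not any project's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Rollup:
    # Toy model: a rollup is just a balance table; "immediate" transfers
    # inside it become actionable outside only after the finality delay.
    name: str
    balances: dict = field(default_factory=dict)

    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount, "insufficient funds"
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

def lp_fast_exit(rollup_x, rollup_y, alice, lp, amount, fee):
    # Step 3: Alice pays the LP on rollup X.
    rollup_x.transfer(alice, lp, amount)
    # Step 4: The LP fronts Alice the funds on rollup Y, minus a commission.
    # It is now the LP, not Alice, who waits out rollup X's finality delay.
    rollup_y.transfer(lp, alice, amount - fee)

x = Rollup("X", {"alice": 100, "lp": 0})
y = Rollup("Y", {"lp": 100})
lp_fast_exit(x, y, "alice", "lp", amount=100, fee=1)
assert y.balances["alice"] == 99   # Alice is liquid on Y immediately
```

The design choice worth noticing: the mechanism prices the finality delay into the LP's commission rather than eliminating it, and it works only for fungible assets, exactly as noted above.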
“The world will spin and the color will fade, and we’ll be reduced to a spectrum of gray.”

*Title reference

[1] In contrast to optimistic rollups, ZKRUs rely on (the stronger) validity proofs, and therefore do not require a finality delay. However, there is some delay until proofs are generated, and the tech seems infeasible as a short-to-mid-term solution for general-purpose smart contracts. Efforts here are pioneered by MatterLabs, Starkware, and Aztec.

---

[2023-05-03] [kaspa] [asic] [opow] [mining]

Kaspa where to (Part IV)

Exciting times for Kaspa! Kaspa is gaining traction, more eyes on us, different edges, intentions, interests. These interests may be at odds, but we are very early in our growth path, still categorically a positive-sum game. In particular, the future arrival of ASICs will be an overall positive for Kaspers, GPU miners included, similarly to GPU miners’ arrival being a big win for the CPU miners whom they outmined. Decentralization has more to do with the open, permissionless, level-playing-field nature of the market than with the degree of heterogeneity in outcomes. Fewer entities dominating mining is not in and of itself a sign of centralization, as long as they are unable to impose nonlinear rich-get-richer effects; for an example of the latter in longest-chain consensus, see Theorem 4 in https://www.ifaamas.org/Proceedings/aamas2015/aamas/p919.pdf We are all romantically biased towards a visually egalitarian hashrate-distribution pie chart (a sentiment which leads many, in sociopolitical contexts, to wrongly expect fair systems to demonstrate equal outcomes). And, admittedly, capital itself is a barrier to entry and brings some nonlinearity to the game. However, this is the price of victory, paid by each and every ecosystem when passing the tipping point. Moreover, Kaspa uniquely requires CapEx-heavy mining to fulfill its vision, as will be explained shortly. And so, when the time comes and Kaspa shifts into “heavy” mining, we will welcome its maturity phase with great satisfaction, albeit with a tinge of sadness.

On Kaspa and CapEx

(A previous post on this topic: https://hashdag.medium.com/in-which-i-have-no-patience-to-wait-til-by-and-by-b79ce53726b3) Kaspa perfects the consensus layer, primarily in terms of speed of confirmation; secondary — throughput capacity, decentralization; down the pipeline — MEV resistance. Speed of confirmation is contingent on the mining market being illiquid, since in liquid mining environments 51% attackers are theoretically — and, at low market caps, practically — feasible, and can be fended off by waiting time (and/or finality gadgets), not by number of confirmations. Typically, liquid vs. illiquid is characterized as GPU vs. ASIC; more inherently, it is OpEx vs. CapEx. The more CapEx dominates mining costs, the less feasible it is for an attacker to rent temporary hashrate, e.g., via NiceHash; currently, about 5.3% of the Kaspa network’s hashrate is NiceHash-able, so we are seemingly still in safe illiquid territory. CapEx-heavy mining is also more efficient (aka “energy efficient”), as a smaller fraction of the security budget is burnt with every new block. This efficiency is important both for the deflationary (no KAS minting) phase of Kaspa and, mainly, for addressing the wastefulness of mining head-on, which is imperative if we are to aim at mass adoption.
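A back-of-the-envelope sketch of this OpEx/CapEx argument (my own toy framing, not a formal model; all numbers except the 5.3% NiceHash figure are illustrative):

```python
def rental_attack_feasible(network_hashrate, rentable_hashrate):
    # A renter out-mines the honest network once it can marshal more
    # hashrate than everyone else combined (a 51% attack).
    return rentable_hashrate > network_hashrate

def security_budget_burnt(block_reward, opex_share):
    # OpEx-dominated mining (e.g., GPUs paying for electricity) burns most
    # of the reward with each block; CapEx-heavy mining locks it into
    # hardware that stays and keeps securing the chain.
    return block_reward * opex_share

print(rental_attack_feasible(100.0, 5.3))          # False: illiquid territory
print(security_budget_burnt(500, opex_share=0.8))  # OpEx-heavy: 400 of 500 burnt
print(security_budget_burnt(500, opex_share=0.1))  # CapEx-heavy: 50 of 500 burnt
```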
Notwithstanding the good arguments defending the energy consumption dynamics of POW, in its current form POW is politically infeasible, and adoption considerations should supersede fundamentalism. https://twitter.com/musalbas/status/1359972560738406401 All in all, CapEx-heavy mining is essential for virtually instantaneous confirmation times in permissionless consensus, and thus for Kaspa to fulfill Satoshi’s p2p electronic cash vision. Note: Indeed, PoS is pure CapEx, but the security considerations at the limit itself are non-continuous — a system with epsilon OpEx behaves materially differently from one with 0 OpEx, as the latter requires internal, state-dependent Sybil protection whereas the former hinges on external sources to distribute voting power.

Optical POW

Optical computation is a technology that utilizes interactions of photons, rather than electrons, to perform computation. Optical POW (OPOW), envisioned by Michael Dubrovsky, is a POW function optimized for optical machines. The low energy consumption would render OPOW extremely CapEx-heavy, and would thus be ideal for Kaspa, following the above reasoning. The current POW function of Kaspa, kHeavyHash, is already friendly to optical ASICs (it is, of course, computable by CPUs, GPUs, and regular ASICs too!). This function can probably be further optimized; more R&D is needed here. For the original OPOW paper see https://arxiv.org/abs/1911.05193; for a recent Stanford lab paper on OPOW see https://techfinder.stanford.edu/technology_detail.php?ID=44752 OPOW is a decentralizing force. It levels the mining field by centering competition around capital rather than energy, the former being orders of magnitude easier to transport, convert, and distribute. Aside from geographical decentralization, the low energy (and heat) signature additionally allows for stealth mining operations, the existence of which is essential for censorship resistance (recall Ethereum’s OFAC-compliant effective censorship of Tornado Cash transactions). AFAIK optical ASICs will not enter the game in the short-to-mid term; the pace depends on R&D efforts and funding, which is fortunately above my pay grade (and I will obviously have no dog in this fight). Nevertheless, it is important to recollect and reaffirm the original vision we had when choosing Kaspa’s kHeavyHash, and to ensure community alignment around it. Changes to the current version of kHeavyHash are probably necessary in order to optimize for OPOW, ideally parameter adjustments only, and with reasonable heads-up to the mining community. Governance of such changes is TBD, and will depend on whether and how we can avoid centralization on the manufacturing end. One way or another, optical tech initiatives should be first-class citizens in the Kaspa colony, as they are vital for a p2p electronic cash system to scale up while maintaining Satoshi principles: efficient security budget, geographical decentralization, political feasibility.

TL;DR: Kaspa operates at the speed of light.
---

[2022-12-14] [kaspa] [roadmap] [funding]

Kaspa where to (Part III)

Sharing below quick thoughts on development funding models and sustainability, and on the next grant request, with the hope that my incentive biases do not contaminate my thought process too much: DAGKNIGHT (DK) was an academic effort by Sutton and myself, conceived on the New Year of Trees,¹ and released into the open three years later, on the 14th anniversary of Satoshi’s WP release. As its birth-givers, we obviously wish to observe its impact on real-world systems, and it doesn’t get more real than Kaspa. The protocol still requires some (presumably, our) attention before standing on its own feet, and we detailed the main TODOs in KIP #2. One operation mode would be to work on this for the sole sake of bringing our research to fruition, in “weekend project” mode, with no other incentive in play. This path is perhaps the default one in the gift culture of open source, indeed one which DAGlabs (RIP) converged on when releasing Kaspa mainnet with neither a premine nor a heads-up on mining nor any founders’ rewards or the like. Another operation mode, popularized by Gitcoin, is to receive a grant from the community in exchange for code contribution, occasionally with some milestones and/or timeline commitment. This mode is the prevailing custom in the Kaspa community, and accordingly we are publishing today a grant proposal. The grant request is for 70 MKAS, which is ~0.5% of the circulating supply. For comparison, the previous funded grant was 100 MKAS, which was ~1% of the circulating supply. My thought process here is to consider a hypothetical senior software engineer, Alice, with relevant domain expertise. When planning her path forward, Alice will seek either stability or opportunity, and in particular will typically not consider a short-term day job for the same paycheck she’d receive for a long-term one. With this in mind, it makes little sense to denominate deep-tech Kaspa grants in USD (instability); rather, they should be expressed in KAS terms (opportunity). The same mental mode was behind the denomination of the previous grant in KAS, which was at the time well off the ballpark salary for the relevant devs. Hence the KAS denomination of this grant proposal too. The 0.5% ask seems reasonable to biased-me, especially when benchmarked against the previous grant. The community accepting the grant is by no means a necessary condition for having DK implemented on Kaspa — Sutton and I will attempt to implement it regardless, since, again, there already exists a non-materialistic incentive for us to do so. However, since we are not saints, this will be done in our spare time with a non-committal or unspecified timeline. A third operation mode, which also exists in open-source environments, is to found a for-profit enterprise around an open-source layer, which naturally incentivizes the entity to allocate resources to further development of the kernel, e.g., IBM’s symbiotic relationship with Linux. This path has the highest potential for long-term sustainability, though, of course, a for-profit entity would have its own priorities, and thus the timeline for DK execution would remain equally ambiguous. I imagine this model could more realistically materialize around smart contracts, which are within arm’s reach of financial innovation opportunities.
In contrast, DK is strictly an infra development, and has little to do with the app layer.

[1] More on trees’ new year here. We submitted a JIP to upgrade this holiday and include DAGs, but the committee couldn’t reach consensus.

---

[2022-11-26] [kaspa] [roadmap] [dagknight]

Kaspa where to (Part II)

Crypto winters are warm for projects with character. Last month Michael Sutton and I published the DAGKNIGHT protocol (DK), which to the best of our knowledge is the first POW consensus protocol that is responsive to the network’s actual (*adversarial) latency while being resilient to 49% Byzantine attackers. DK is the culmination of nearly three years of research, a period in which we weren’t at all sure whether the aforementioned property was even achievable.¹ Some work still needs to be done before considering DK for Kaspa: (i) completing several missing details in the proof section; (ii) preparing the paper for peer review (depends on conference target); (iii) devising efficient algorithms — the current pseudocode is highly inefficient, as it was optimized for ease of reasoning rather than real-world implementation; (iv) adapting the consensus algorithm to meet additional requirements of an actual cryptocurrency, e.g., the need to regulate minting, control difficulty, and enforce pruning, all of which require a responsive synchronous protocol (rather than DK’s partially synchronous operation mode). Similarly to GHOSTDAG, DK enables high bps (blocks per second), just with much faster confirmation times. Some research needs to be done in order to suggest the optimal bps — increasing the rate indefinitely doesn’t necessarily shorten confirmation times, as it increases the relative latency and, with it, the DAG width. The increase is due both to more blocks being created per second and to these blocks’ headers being larger. Regarding the latter factor, one can envision a scenario where confirmation times improve by reducing the number of block references inside a block (either in consensus or as a default mining rule), but whether or not this is the case is pending further research. DK enables additional features, on which more research is needed. (i) One example that comes to mind is flexcaps, a proposal to allow miners to create blocks of different sizes and difficulties. While proposed originally for Bitcoin, at high block rates flexcaps require the DK consensus. To see the connection, observe that larger blocks → higher propagation delay of blocks → more blocks created in parallel → wider DAG; and the DK protocol uniquely does not need to bound the width of the DAG in advance, and can cope with it varying even across short timespans. One motivation for flexcaps is to support, in times of peak demand, a higher throughput than the system can support on average. Indeed, large blocks impose both an instantaneous load on the system (CPU, network congestion, etc.) and an accumulating load (larger UTXO set → higher RAM and disk I/O for later block processing), which justifies a gap between the maximum limit on resource consumption and the average one.² This gap is enabled through flexcaps (or through the similar elastic block cap proposal). (ii) Another potential feature is stealth txns, a construction which utilizes the asynchrony caused by high bps to protect users from MEV (relevant once smart contracts are developed).
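To see why block rate and propagation delay jointly set the DAG width, here is a small sketch of the standard Poisson mining approximation (my simplification, not the DK paper's analysis; numbers are illustrative):

```python
def expected_parallel_blocks(bps, delay_s):
    # Block creation is (approximately) a Poisson process: blocks mined
    # within one propagation delay of each other cannot reference each
    # other, so roughly bps * delay_s blocks sit in parallel in the DAG.
    return bps * delay_s

# Same 10 bps, but flexcap-style larger blocks that propagate more slowly:
print(expected_parallel_blocks(10, 2.0))  # ~20 parallel blocks
print(expected_parallel_blocks(10, 5.0))  # ~50: a wider DAG to cope with
```

This is exactly the regime flexcaps create on short timescales, and why a protocol that need not bound the DAG's width in advance is the enabling ingredient.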
More generally, and still in the context of MEV, the fact that many miners can create a block in the “next round” can be utilized to facilitate richer transaction fee mechanisms, bearing some resemblance to Flashbots’ recent SUAVE. Similarly to the Rust upgrade, this consensus upgrade will require a new grant request from the community. A suggested scope and raise amount will follow. I hope miners and other Kaspa whales will find this initiative as desirable and long-term profitable as the previous grant. Our community is ever-growing in size and interests, and raising large funds in the future might become harder and harder, at which point we will need to find other structures to maintain development. Hopefully we haven’t yet reached the tipping point. Speaking of community size, a Discord moderator pressed a wrong button this week and accidentally kicked all inactive users from the server, reducing the Kaspa Discord community (~12k) by about 25%. The good news is that we learnt that ~9k members were apparently active in the last 30 days, which speaks volumes about the quality of the community. In the spirit of Thanksgiving, I am grateful to all 9k of you for turning Kaspa from a sound protocol into a sound money.

[1] More accurately, one of us was positive throughout that a possibility was within a hand’s reach, and the other was skeptical and believed an impossibility result was lurking in the dark. Interestingly, the roles reversed with respect to the question of whether Kaspa can take off. We swore not to reveal the corresponding identities of the weak in faith, but the reader can be assured that between the two of us the truth always lies.

[2] In Bitcoin, where there’s no pruning of historical data, the gap is even larger, due to the initial blockchain download that full nodes go through by default when onboarding.

---

[2022-07-05] [kaspa] [roadmap] [pow]

Kaspa where to (Part I)

This is a concrete version of a longer post which I started writing but had too little spare time to complete yet. Context: One of Kaspa’s core devs, Michael Sutton, suggested a plan to enhance Kaspad full-node performance by an order of magnitude by refactoring the codebase and rewriting it in Rustlang. (https://discord.com/channels/599153230659846165/844142778232864809/993245032670842991) My twosats on the matter: Kaspa was initialized as a live proof of an idea, a demonstration of a novel (and very cool, but that’s beside the point) paradigm for permissionless consensus. In bootstrapping Kaspa, I was fully aware that we do not have the resources — or the manpower — or the organization machinery — required to unlock even 5% of Kaspa’s potential, and that some external strategic move will need to happen for that aspiration to come true (think Project Serum and Solana https://defirate.com/ftx-serum-solana/). For these reasons, as Kaspa OGs recall, I considered launching Kaspa in testnet mode, then opted for a novel (aka failed) mode, which I coined “gamenet”, and which can be thought of as “testnet with incentives”. While this attempt was apparently naïve and, in retrospect, flawed, I am mentioning it to recollect the mindset with which Kaspa was released. (For the same aforementioned reasons I kept myself uninvolved with the exchange listing efforts of Kaspa; I fully appreciate their value to the community, and at the same time I preferred erring on the side of caution.)
I agree with concerns voiced by some community members that, in principle, to bring real value, efforts should be focused on integration, adoption, marketing, etc. In particular, at this stage, high bps and high tps are imho not a meaningful step towards building a non-hypothetical financial system. However, our community is still far from the scale and organization necessary to reach actual adoption, if only for its still-modest treasury size (which is contribution-based, and managed by a few volunteer OGs). With that in mind, I believe the most correct usage of the funds donated by miners is to continue the path of demonstrating the original DAG vision live, by improving the base-layer node and later on perhaps upgrading its consensus; this is precisely the proposal put forward. The current Kaspad codebase is an adaptation of the Bitcoin client btcd, written in Golang. It enjoys a fine amount of technical debt and great code complexity, which make it difficult for new folks to contribute. The proposed refactoring explicitly aims at writing the codebase in a modular and legible manner, which is arguably even more important than the performance enhancement. For full disclosure, I have been working closely with Sutton for a few years now, and am highly biased in favour of any R&D task or project to which he dedicates his rare talent (cf. Proverbs 27:14). Him availing a few months of his full capacity to Kaspa is exceptional, and I hope the community (and esp. miners) will match this generosity. I believe a ballpark of 1% of the circulating supply would be very reasonable.

---

[2021-11-23] [kaspa] [network] [incident]

Kaspa (Black Tuesday)

This post assumes reader context on the crash of the Kaspa network in the course of the last 48 hours, and provides some additional notes and perspective. (1) To simplify logic and debugging, and since the gamenet concept didn’t really catch on, I removed the random block reward and replaced the average block reward of 500 with a deterministic block reward of 500. Thus, if so far we mined on average 86400x500=43200000 Kaspa per day, we will henceforth mine deterministically 86400x500=43200000 Kaspa per day. (2) Many people expressed their concern about the future rebase that will take place soon. I want to reiterate that the rebase is cosmetic only; it’s a rename, an altering of representation. If you mined so far, say, 10% of the (total or circulating) supply of Kaspa, you will possess 10% after the rebase as well. (3) The deflationary monetary policy HF that I mentioned here (https://discord.com/channels/599153230659846165/909907923084382218/911015904144420895 or https://hashdag.medium.com/kaspa-launch-plan-responding-to-reality-6b4bec449037) will be specified next week, after syncing and mining resume for a few days, the network remains fully operational and confident, and the voluntary Kaspa magicians (Ori, Michael, Elichai) get some sleep and catch up with their own research and ventures. (4) The community by and large reacted solidly to the crash. Thank you! No one took it lightheartedly, and at the same time most focus was on providing datadirs and other useful info, getting instructions, funny memes. Let’s hope we won’t need to test ourselves again in similar circumstances. (5) There was some genuine misunderstanding regarding the approaches we were looking into.
Specifically, we were never considering a rollback in the sense of pointing at an early state which we were satisfied with, reverting to it, and discarding blocks and transactions appended after it. Rather, we were searching for the latest state for which we have a certainly-valid UTXO commitment. While many users shared with us up-to-date datadirs, and while we had our own datadirs, we had to spend effort and time to ensure that the UTXO commitment builds correctly. Thus, we (read: the aforementioned devs) had to rebuild the state afresh, feed it with such datadirs, and compare the commitments. Fortunately, the UTXO set that we built hashed into the same UTXO commitment string embedded in the latest block in the datadir, producing 710f27df423e63aa6cdb72b89ea5a06cffa399d66f167704455b5af59def8e20, which proved that the DAG UTXO algebra was not erroneous, but “merely” a victim of the memory problem. This is not to say the architecture of this module should not be revised and improved — a more correct architecture would protect it from DB failures. (6) Kaspa is here to stay, in case you were wondering.

---

[2021-11-18] [kaspa] [launch] [governance]

Kaspa launch plan (responding to reality)

First and foremost I wanted to thank you all for joining and forming this community, for the interest, excitement, and involvement around the project. Seeing my PhD obsession — POW DAG consensus — realize itself into a live network and a spontaneous community is thrilling yet humbling. Thank you, Todah! I’m definitely going to start valuing members-count over citation-count, so please bring more crypto friends to the party! Every few years a new fair-launched POW cryptocurrency captures the excitement of the community — Litecoin, Monero, Grin, and now Kaspa. May the force be with us. Since we didn’t anticipate this rapid growth, I didn’t prepare accessible answers to several FAQs. I hope to write a longer post about all this in the coming days, but for now here are some answers: Monetary policy will be deflationary. Halving will be more aggressive than Bitcoin’s since market conditions are different (order-of-magnitude faster market discovery). When the deflationary schedule will be activated, and what the initial block reward will be (compared to the current avg of 500) — TBD. We will try to seal this next week or so. These numbers will imply the finite cap on supply. BTW we should rebase the term Kaspa to refer to today’s 1000 Kaspa, say; the current representation feels not so scarce :) Our proof-of-work is a Kaspa variant of heavy-hash; let’s call it k-heavy-hash. My goal here was to create a CapEx-heavy POW function, since I believe this concept is both energy-efficient and provides more miner commitment (stronger than ASIC since less OpEx is burnt). Whether k-heavy-hash is actually CapEx-heavy, and whether a different POW function would better serve this goal for Kaspa, is a question I’m open to discuss. The project is maintained by a few devs, all of whom have other full-time dealings, and some of whom are funded by DAGlabs (but totally self-managed). In particular, there’s no company or entity behind the project that is responsible for your wallet, full node, funds, miners. We are here during our spare time.
I, for one, am a full-time postdoc at Harvard University, and while this project is my PhD baby, I am at the same time dedicated to my postdoc baby. So this is a purely community project; please take that into consideration. What can you do to help? Arrange for more dev-power to learn the codebase and join the efforts; DAGlabs can potentially fund additional devs, as long as they have the ability to manage themselves, open issues, fix bugs, manage PRs, etc. Roadmap. There is no official roadmap as there’s no organized development. I can write down what devs should be working on IMO, post bug fixing and version updates (HFs). In short, IMO priorities should be accelerating gradually to 10 blocks per second, then implementing an amazing upgrade to the consensus protocol, pending theoretical research results of Michael Sutton. In parallel, if someone can promote a privacy gadget (e.g., bulletproofs) and an implementation for Kaspa, that would definitely leap us forward. Next week there will be a hard fork (HF) in order to fix a bug and decrease header size. Tune in on this, especially if you are running a mining full node. A few weeks later there will be a HF to embed said monetary policy. Will we list the coin on exchanges? There is no “we”, there’s “you”. And I suggest waiting for the community to grow more organically before bringing in retail. When is it recommended to stress test for UTXO throughput? You are free to stress test as you wish. However, note that even if bugs are discovered, some time will pass before the devs can make themselves available to fix them. Therefore, I suppose it’s better to wait for the current system to prove itself stable for more than two weeks, say. What about a block explorer? I believe next week devs will deploy one.

---

[2021-09-22] [kaspa] [launch] [governance]

Kaspa launch plan (proposal)

tldr
- launch Kaspa in gamenet mode, a research-oriented experimental network
- inject deliberate fragility into Kaspa’s launch via a random, semi-scarce monetary policy
- construct a battlefield for reward-based and MEV-based reorgs
- as the community matures and hashrate grows, go full scarcity mode and transition from game- to main-net mode, rendering early (gamenet) stage miners profitable in retrospect
- gamenet 2.0 requires developing an Ethereum bridge to simulate and practice MEV-reorgs, in order to test and demonstrate DAG consensus’ antifragility

Why Kaspa shouldn’t launch as an ordinary cryptocurrency [L1, POW, lack of EVM]

There seems to be little room now for a new L1, especially one powered by PoW such as Kaspa. The market has matured, and with it the scope of attacks and manipulations has expanded from direct attacks on block ordering (e.g., double spends via reorgs, liveness attacks) to attacks that regard txn content — typically in the zero-to-one-confirmations phase — by miners or bots (aka flashbots). Moreover, Kaspa as of now lacks EVM support, rendering it significantly less relevant to the current market.

Gamenet: A proposal to launch Kaspa in a novel experimental mode

Kaspa may be launched as a research-oriented consensus engine that is focused on experimentation, novel testing of dynamics, and a vibrant battlefield for real-world cryptoeconomic attacks. I call it gamenet mode.

Cryptoeconomics phase 1 [CPU/GPU mining, uncertain scarcity, low hashrate and security, non-commercializable]

The platform should be CPU/GPU-mineable, to facilitate the base activity; I believe Ethash is a neutral candidate that fits our needs.
However, its token should be deliberately unfit for commercialization, in order to penetrate hardcore communities, individuals, and zones that refrain from cooperating with non-BTC or non-ETH tokens. Accordingly, the token’s supply should challenge the ordinary notion of scarcity, and should be unfit for exchange listing. This implies that the platform will obtain low hashrate and low security at the initial stages.

Gamenet activity [battlefield, simulation, real-world game, selfish mining, reorgs, MEV]

The theme of gamenet’s activity can be thought of as a continuous hackathon over a live network which serves as a battle-test field for simulating real-world attacks, manipulations, and dynamics of a multi-player network. The block rewards will incentivize occasional reorg/selfish-mining attacks by strategic and sophisticated miners of the gamenet, whereas transactions in the network will implicitly or directly reflect MEV exploits from real-world DeFi systems such as Ethereum.

Community [R&D groups and individuals, testbed for innovation, recognition by broad crypto community]

The goal is to attract research and dev groups (e.g., flashbots fans) to play, compete, and/or collude over the live system, and to extract insights on the real-world dynamics of other live systems such as Ethereum. Further, commercial L2 projects that propose solutions to certain exploits, such as using cryptographic primitives against MEV, can implement those over Kaspa gamenet and prove the robustness of their solution, while others may attempt to refute it. I hope Kaspa will become the center of a vibrant R&D community, and that the general community will look to Kaspa gamenet as a credible source of insights regarding cryptoeconomic dynamics in the wild. An example of the community’s interest in the topic may be found in this recent summit: http://reorg.wtf/

Cryptoeconomics phase 2 [monetary policy solidification, recover scarcity, compensate early community]

As the community and the platform mature, we will want to transition to a solid monetary policy, regulate the supply, and trade over exchanges. To this end, we may set an automatic trigger in consensus that will eliminate the randomness in the monetary policy, and the uncertainty of the supply, once a certain hashrate is reached. This will automatically compensate early participants — the early miners of the network — by rendering the mined tokens scarce in retrospect. See below for specifics. See https://fc21.ifca.ai/papers/222.pdf by Lucianna Kiffer for a related design.

Kaspa development and support [community+product management, further development and support]

As a continuous real-world hackathon, Kaspa gamenet would significantly benefit from some product and community management, conveying and demonstrating to the community the rules of the game and example dynamics. Community members are welcome to take the lead on these fronts. On the development side, and closer to the backend, explicit MEV activity can be simulated via a bridge from Ethereum. While not necessary for gamenet, I believe gamenet+MEV will make the platform much more attractive a battlefield. Individuals capable and willing to take upon themselves some of these efforts, and who require funding, may DM me.

Timeline [Kaspad ready mid October, Kaspa launch end of October, full gamenet activity TBD]

Kaspad — the core consensus component of Kaspa — will be production-ready (though untested) by mid October.
The remaining features are (1) implementation of the monetary policy described below, and (2) plugging in the chosen PoW function. The execution of components that enhance gamenet activity depends on community engagement.

Monetary policy and block rewards

Requirements

The block reward should incorporate randomness so as to: (1) test selfish mining and reorg attacks; (2) introduce uncertainty regarding the supply; (3) incentivize extending the main chain 95% of the time (say); (4) incentivize forking the chain when the reward — or the MEV opportunity — is exceptionally high.

Concrete proposal

Have each block mint a random amount of Kaspa, where the randomness is a function of the last $M$ blocks. The result of the randomness, the “sampling”, should depend on the block’s hash, to ensure that it cannot be gamed. The randomness should not be a function of the previous block only ($M=1$), because that would lend itself to frequent forking attacks. At the same time, it should be responsive to recent blocks, to ensure a sufficient degree of uncertainty with respect to the supply (so $M=10^{12}$ won’t do either). I propose $M \approx 100$ blocks.

Formal description

Given a DAG $G$, GHOSTDAG outputs a chain $C(G) \subseteq G$. For each block $B \in G$, let $merging(B)$ be the earliest block in $C(G)$ that contains $B$ in its past. For each block $B \in G$, $B.selectedParent$ is the tip of the chain $C(past(B))$. For each block $B \in G$, $mergeSet(B,G)$ is defined as $past(B) \setminus past(B.selectedParent)$ if $B$ is in $C(G)$, and the empty set otherwise. The reward of blocks in $G$, $rew(\cdot)$, is defined by:

$rew(genesis) = const_1$

$rew(B) = const_2 \cdot \left( \mathrm{avg}_{D \,\in\, \text{previous } M \text{ blocks of } B}\, rew(D) \right) \cdot 4^x + const_3 \cdot \sum_{D \in mergeSet(B)} rew(D)$

where $x$ is a random variable drawn from the normal distribution (mean 0, std 1), and which is uncontrollable by the miner (it is the result of the block hash). $const_1=1$, and $0