A decentralized, unbiasable randomness beacon
Randomness.cloud hosts a draft specification for a public randomness beacon that combines multiparty entropy, verifiable delay functions (VDFs), and Bitcoin anchoring—without consensus, staking, tokens, or trusted hardware.
This document is intended for cryptographers, protocol designers, and engineers who want to scrutinize the design, security assumptions, and failure modes of a next-generation randomness beacon.
Design in one paragraph
In each round, a set of Entropy Providers (EPs) publishes signed entropy contributions to a gossip network. Nodes deterministically aggregate them (XOR) to obtain a unique input, then apply a VDF to derive the round's randomness output. The VDF makes grinding and withholding attacks ineffective. Round records (entropy root, VDF output, proof, and hash chain) are periodically batched and anchored to Bitcoin, yielding a tamper-evident, globally auditable randomness log. Access is metered using LSAT and Lightning, with no native token.
High-level properties
- No consensus, no token: No blockchain, no staking, no governance token. Just signed contributions, VDFs, and Bitcoin anchoring.
- Unbiasable randomness: Multiparty entropy plus a VDF prevents grinding and selective withholding.
- Minimal trust assumption: Security reduces to a single condition: at least one honest EP per round.
- Economically sustainable: LSAT + Lightning micropayments fund EPs, VDF workers, and gateways.
- Globally verifiable: Any node can recompute entropy aggregates, verify VDF proofs, and check anchors.
Protocol sketch
Rounds
Time is divided into fixed-length rounds r = 0, 1, 2, .... Each round has:
- a contribution window, during which EPs submit signed entropy for round r, and
- a VDF window, during which workers compute VDF(E_r).
Nodes only need to agree on round numbers, not exact wall-clock time.
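Round numbering can be derived purely from elapsed time since a fixed genesis timestamp. The sketch below illustrates this; the constants `GENESIS` and `ROUND_SECONDS` are hypothetical placeholders, since the draft does not fix concrete values.

```python
# Hypothetical parameters; the draft does not specify concrete values.
GENESIS = 1_700_000_000   # beacon genesis timestamp, Unix seconds (assumption)
ROUND_SECONDS = 60        # fixed round length (assumption)

def round_number(now: float) -> int:
    """Derive the round number from elapsed time since genesis.

    Nodes with roughly synchronized clocks agree on r without
    needing to agree on exact wall-clock time.
    """
    return int((now - GENESIS) // ROUND_SECONDS)
```

Because r is a pure function of time, two nodes whose clocks differ by less than the window margins still assign contributions to the same round.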
Entropy aggregation
From the set of valid contributions for round r, each node computes:
E_r = XOR(e_1, e_2, ..., e_n)
and a Merkle root over all contributions. Validity rules (correct signatures, registry membership, per-round limits, window cutoff) ensure there is exactly one canonical aggregate, even if nodes saw contributions at slightly different times.
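The aggregation step above can be sketched in a few lines. This is an illustrative sketch, not the spec's wire format: it assumes 32-byte contributions, SHA-256, and sorting contributions before hashing so every node derives the same Merkle root regardless of arrival order.

```python
import hashlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def aggregate_entropy(contribs: list[bytes]) -> bytes:
    """E_r = XOR(e_1, ..., e_n) over all valid 32-byte contributions."""
    acc = bytes(32)
    for e in contribs:
        acc = xor_bytes(acc, e)
    return acc

def merkle_root(leaves: list[bytes]) -> bytes:
    """Merkle root over the contributions.

    Sorting the leaves first makes the root order-independent
    across nodes (an assumption of this sketch, not the spec).
    """
    if not leaves:
        return bytes(32)
    level = [hashlib.sha256(leaf).digest() for leaf in sorted(leaves)]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

XOR is commutative, so E_r is automatically order-independent; the Merkle root needs the explicit sort (or another canonical ordering) to get the same property.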
VDF and round record
VDF workers compute:
(V_r, π_r) = VDF(E_r)
where computing V_r takes a fixed delay and verifying π_r is fast. The round record is:
R_r = { r, MerkleRoot, E_r, V_r, π_r, hash(R_{r-1}) }
Any node that verifies π_r and the Merkle root obtains the same randomness output for round r.
Why not drand / VRF / block hashes?
- drand: Requires a DKG and threshold BLS signatures, uses no VDF, and its committee sizing and key rotation are operationally heavy.
- VRF oracles: Verifiable but biasable; providers see outputs before publishing and can withhold.
- Block hashes: Miners/validators can grind and selectively publish blocks to steer outcomes.
This design removes DKG, shared secrets, and consensus, and uses a VDF to make grinding and withholding uneconomical.
Status & review
The current document is a draft summary intended for review and critique. A fuller specification will include:
- Formal message formats and validity rules
- Concrete VDF scheme selection and parameters
- Threat model and attack analysis
- Reference implementation notes
The goal is to keep the protocol small enough to be implementable and auditable, while achieving strong security guarantees.
Contact
If you are a cryptographer, protocol designer, or engineer and want to review or attack the design, you are exactly the intended audience.
For feedback, discussion, or implementation interest, reach out at:
hello@randomness.cloud