DFINITY is a public network of client computers providing a "decentralized world compute cloud" where software can be installed and run with all the usual benefits expected of "smart contract" systems hosted on a traditional blockchain. The underlying technology is also designed to support highly resilient tamperproof private clouds that provide the added benefit that hosted software can call into smart contracts on the public cloud.
DFINITY is an Ethereum-family technology and is fully compatible with the public Ethereum network - if you can run a Dapp on Ethereum, you can run it on DFINITY too. However, there are several fundamental differences between the networks, and they are really sister systems offering different things. DFINITY introduces new crypto protocols and techniques that aim to deliver extreme performance, unlimited scalability, interoperability and other benefits. Another difference is that whereas in Ethereum "The Code is Law", DFINITY introduces governance by a decentralized intelligence called the Blockchain Nervous System. These differences involve tradeoffs, and DFINITY is best understood as an exciting new extension of the Ethereum ecosystem that will make it much, much stronger.
Technology components used by the DFINITY project are being released into the public domain for those interested in decentralized technology (for example, software related to a crucial technique known as Threshold Relay is already in the public domain). A beta network created using the "Copper Release" client software is expected towards the end of Q1 2018. The expectation is that the Copper network will launch at the end of Q2 2018. A supporting foundation, DFINITY Stiftung, has been created in Zug and will assist with the work.
DFINITY is conceived as an extension of the Ethereum ecosystem - a sister world computer network that prioritizes performance and scalability, and where smart contracts are subject to a decentralized intelligence rather than "The Code is Law". This will bring into the ecosystem people with different needs, needs that performance and decentralized governance by a distributed AI can address. Of course, the features we provide involve some design tradeoffs, but the ecosystem will be broader and more attractive because users can choose whichever system suits them better. All our engineers and researchers care deeply about Ethereum and open source. We will maintain the maximum possible level of compatibility, and Stiftung DFINITY will contribute funding and effort to Ethereum projects.
This is not a zero sum game. Right now numerous decentralized platforms are vying for dominance, and the Ethereum ecosystem can win by eschewing monoculture. For example, during the 1990s many different hardware platforms vied for dominance, including the PowerPC, SPARC and 8086 family architectures. In the end 8086 won largely because its ecosystem was more diverse and provided more options. Dell, HP and many other operations individually became much bigger than the dominant or monopoly operations on the other platforms, which eventually disappeared or switched to 8086. This is the vision we have for the EVM ("Ethereum Virtual Machine") and systems that run on it. We believe that DFINITY will drive the value of the Ethereum network upwards.
DFINITY's Blockchain Nervous System (or "BNS") solves several critical problems for certain users, and it will also allow us to accelerate development far beyond the limitations of current architectures. With respect to business, many organizations cannot easily move significant systems and assets onto the decentralized cloud when, if their systems deadlock or they are hacked, the "The Code is Law" approach prevents them finding a solution. In many impactful potential consumer applications, it is also arguably unfair to hold users responsible for flaws in smart contract code they cannot read themselves. The BNS can address and often resolve such situations by executing arbitrary privileged code. Generally speaking, the BNS will tend to make decisions that maximize the value of "dfinities", and we expect this will result in it maintaining an initial "genesis" constitution declaring that systems whose primary purpose is vice or violence should be frozen, since this will make its appeal broadest. There is no concept of a hard fork - traditional client software such as geth or parity (the two main Ethereum clients) is wrapped in a proxy that is BNS aware, which can continuously upgrade the inner client without interrupting dependent applications and users.
Yes. DFINITY protocol research began with the assumption that the network must contain a million or more mining computers, and that together these will be required to provide a massive virtual compute capacity (i.e. scale out). Research objectives also include considerations regarding how the network can meet different kinds of computational requirements. For example, Web search is in many ways very suited to vast decentralized networks, but the need for results to be returned quickly requires that engineers be given flexibility in how they schedule validated computations. These kinds of considerations are baked into DFINITY's thinking.
If a business is fundamentally an intermediary that processes information and money, it will eventually face competition from open decentralized systems running on world compute platforms. These can provide great benefits to the world. Currently, we suffer a winner-takes-all model where whoever locks in network effects and monopolistic domination first can often maintain their position for a very long time. Users lose ownership of their data and destiny, the monopolies become rent seekers and the rate of development and progress slows down. We envisage open versions of services such as Gmail and Uber that provide better guarantees of fairness to users, are less monopolistic because users own their own data, and develop faster through open contribution programs. We also want to see more traditional intermediary businesses challenged. For example, PHI involves completely replacing commercial banking with autonomous systems that give out loans algorithmically by using human validators as proxies. World compute platforms can also help existing businesses reengineer themselves to become more efficient and grow into new sectors. DFINITY is an enabler of this future.
Unsurprisingly, DFINITY can be traced back to cypherpunk and decentralization thinking, but there are some twists. Back in 1999 Dominic Williams was using Wei Dai's crypto++ library and came across his bMoney proposal. The idea struck him as important, although he was consumed with working on a Dot Com era technology and had no time to follow up. He independently developed deep interests in distributed computing and scalability - he launched an MMO game in 2010 that grew to 3MM users and relied heavily on technology he created. In 2013 Dominic abandoned everything he was doing to concentrate on decentralization technology, working through 2014 on theory. DFINITY came out of an earlier project called Pebble that involved demanding scaling needs.
In 2015 Dominic teamed up with Tom Ding, a crypto entrepreneur, and co-founded a crypto studio, incubator and investor in Palo Alto called String Labs. Because extensive theoretical groundwork already existed, and because of pressing demand for the functionality only an AI-governed world compute "decentralized cloud" can easily deliver, String Labs decided DFINITY would be the first protocol it helped incubate to production. The core team was later joined by Timo Hanke, a cryptographer who had previously designed ASIC Boost. Stiftung DFINITY has been formed to take the project to the next stage.
We stay carefully in touch but are only indirectly linked. We are located near Stanford University, and Dominic's designs rely heavily on applying the BLS algorithm, which was designed by Dan Boneh and his PhD students, to create randomness. Dominic attends events and occasionally talks on campus. The Decentralized and Distributed Systems (DEDIS) Group at École Polytechnique Fédérale de Lausanne (EPFL) has two members working full time on DFINITY at any given time. Before DFINITY, in 2014, the Pebble project included several academics now well known for their interest in crypto, including Andrew Miller, Elaine Shi, Steve Omohundro and Ferdinando Ametrano. Although DFINITY uses very different systems, Honey Badger is closely related to an approach to distributed consensus originally used by Pebble. The DFINITY project counts several academics as contributors - and interested parties should contact us to see how they can help.
We can be found in Silicon Valley and all around the world, especially at crypto conferences! Feel free to drop us a line: [email protected]
Yes. The first three client releases are:
So far, the only means we have found to organize a vast number of mining clients into an attack-resistant network that produces a virtual computer is to apply cryptographically produced randomness. Of course, Satoshi also relied on randomness by having miners race to solve a current puzzle whose solutions can only be found randomly using brute force computation, then allowing winners to append blocks of Bitcoin transactions to his chain. DFINITY needs stronger and less manipulable randomness that is produced more efficiently in fixed time. Randomness does not only ensure that consensus power and rewards are fairly distributed among all miners. Turing-complete blockchains like DFINITY require a higher standard of randomness, since smart applications may enable high-volume transactions that hinge on aleatory conditions, so that the potential gain from manipulation could be arbitrarily high.
The solution we found is Threshold Relay, which applies cryptography to create randomness on demand, given the participation of a sufficient number of network participants, in a manner that is almost incorruptible, totally unmanipulable and unpredictable. Using Threshold Relay, DFINITY network participants produce a deterministic Verifiable Random Function (or VRF) that powers network organization and processing.
Expert. It's worth noting that randomness played a key role in distributed computing long before the advent of Satoshi's blockchain. The most powerful Byzantine Fault Tolerant consensus protocols, which can bring participants to agreement without a leader in an asynchronous network where no assumptions can be made about how long it takes to deliver messages, also depend on a construct called the "common coin" (no pun intended). This produces a series of random coin tosses and is typically implemented using a unique deterministic threshold signature system, with IBM Research applying RSA this way in 2000. During 2014, Dominic worked on a scalable crypto project that involved derivatives of a more recent best-of-breed protocol. This is the origin of his application of BLS signatures to produce random values in decentralized networks, and of his subsequent thinking about their numerous powerful applications. Whereas RSA threshold systems depend on a trusted dealer for setup, BLS threshold systems can easily be set up without one.
Yes, a source of randomness can be essential within an open cloud platform. For example, beyond trivial applications in fair lottery and games systems, randomness can be used to randomize the order of transactions submitted to a financial exchange to make "front running" by miners harder. But perhaps the most powerful applications are within autonomous systems. A great example is provided by the PHI decentralized commercial banking system, which is currently being developed by the String Labs team. PHI is fully autonomous but is able to judiciously give out loans algorithmically using human validators as proxies, who are randomly selected one after another to prevent collusion. Arguably, most autonomous systems that need to make decisions on external data they cannot self-validate necessarily depend upon random selection to validate propositions about the outside world and resist attack.
DFINITY has introduced a novel mechanism called "Threshold Relay". This produces a deterministic source of randomness that is almost incorruptible, totally unmanipulable and unpredictable.
Many of the protocols DFINITY uses to scale out depend on randomness being unmanipulable and unpredictable. In a Proof-of-Work system there is an expense associated with creating several candidate blocks so as to "select" the hash, but it can be done. In a Proof-of-Stake system where no brute force computation is involved, a miner can easily modify the content of a block to determine its hash, making a block hash completely useless as a source of randomness. Either way, a block hash does not suffice for our purposes.
These schemes generally involve a "last revealer" who can choose not to play ball and therefore influence the result if the others proceed anyway. Levying fines on those who withhold their commitments doesn't really work since the rewards gained by manipulating the randomness might be far higher (after all, any number of apps might be depending upon the randomness produced by the cloud). Apart from the flawed security, such schemes are also necessarily slow and prone to failure because they depend on all participants supplying their commitments before they can proceed.
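The last-revealer bias is easy to demonstrate. The toy commit-reveal scheme below (the XOR combiner and the participant count are illustrative assumptions, not any particular production design) shows how the final participant gets a free choice between two possible outcomes simply by deciding whether to reveal:

```python
import hashlib
import secrets

def commit(value: bytes) -> bytes:
    # a commitment is just the hash of the secret value
    return hashlib.sha256(value).digest()

def combine(reveals):
    # the shared randomness is the XOR of all revealed values
    out = bytes(32)
    for r in reveals:
        out = bytes(a ^ b for a, b in zip(out, r))
    return out

# Honest participants commit first, then reveal.
honest = [secrets.token_bytes(32) for _ in range(4)]
commitments = [commit(v) for v in honest]

# The last revealer waits until it has seen every other reveal. By
# choosing to reveal or to withhold (and forfeit a fine), it selects
# between two outcomes it can compute in advance.
attacker_secret = secrets.token_bytes(32)
outcome_if_reveal = combine(honest + [attacker_secret])
outcome_if_abort = combine(honest)  # protocol proceeds without its share

# The attacker picks whichever result benefits it, e.g. the lower value:
chosen = min(outcome_if_reveal, outcome_if_abort)
```

The fine for withholding bounds the attacker's cost, but not the value of the applications depending on the result, which is why such schemes are unsafe for a general-purpose cloud.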
It would be impossible to decide who should run the hardware and it might be turned off!
That would require everyone to agree on a single winning block first, which means running a full consensus protocol, for which there is not enough time. Furthermore, the process of agreement would introduce a surface for bias (manipulation), because it would open up a choice between two or more messages. As a matter of defense in depth we want to avoid that. If the agreement and signing phases are clearly separated and the signing is unpredictable, then there can be no bias.
A DFINITY network is composed of mining clients - often referred to as "processes" - that are connected in a peer-to-peer broadcast network. Each client must have a "mining identity" it uses to sign its communication messages and participate, which is recorded in the globally maintained network state. In the public/open DFINITY network, a mining identity is created by making a security deposit paid in a quantity of dfinities set by the decentralized Blockchain Nervous System governance mechanism, whereas in a private DFINITY network valid mining identities are defined by a trusted dealer such as a corporate systems administrator. Each client is expected to make available some standard quantity of computational resource - data processing capacity, network bandwidth and storage - to which it is held using mechanisms such as USCIDs, explained in a later FAQ. As the network grows, the broadcast network is sharded into many sub-networks to prevent communications bottlenecks from forming.
Expert. Connections between processes are organized in a Kademlia-like structure using derivatives of their public identities proven as genuine using zero knowledge proofs. Each process maintains connections to some number of other processes, and each consequently has a very high chance of having its message broadcasts propagate throughout the network by gossip and of receiving messages broadcast by other processes. The properties of such broadcast mechanisms are essential to the operation of decentralized networks generally. An adversary can try to subvert this using an "eclipse attack", which involves surrounding a correct process with faulty processes that then filter which messages it can send and receive. In the Tungsten release of DFINITY we plan to make such attacks much harder by constraining the peers to which processes can connect using our endogenous random beacon and cryptographic operations derived from the identities themselves. The network will be forced to continually reorganize into constrained random forms, making it almost impossible for an adaptive adversary to perform attacks on targeted sectors.
Note: also see the technical papers and introductory decks.
A network of clients is organized as described in the foregoing FAQ. Threshold Relay produces an endogenous random beacon, and each new value defines random group(s) of clients that may independently try to form a "threshold group". The composition of each group is entirely random, such that groups can intersect and clients can be represented in multiple groups. In DFINITY, each group comprises 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they then register the public key ("identity") created for their group on the global blockchain using a special transaction, such that it will become part of the set of active groups in a following mining "epoch". The network begins at "genesis" with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values - if they were not, then the group's signatures on messages would be predictable and the threshold signature system insecure - and each random value produced thus is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values.
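The relay structure just described can be sketched in a few lines. Here a plain hash stands in for the group's unique BLS threshold signature, and group selection is a simple modulus over the beacon value; both are illustrative stand-ins, since the real scheme's security rests on threshold BLS rather than hashing:

```python
import hashlib

def next_random(group_key: bytes, prev_random: bytes) -> bytes:
    # stand-in for the group's deterministic threshold signature on the
    # previous random value (a hash, purely for illustration)
    return hashlib.sha256(group_key + prev_random).digest()

def select_group(groups, rand: bytes) -> int:
    # each beacon value deterministically selects the successor group
    return int.from_bytes(rand, "big") % len(groups)

groups = [b"group-%d" % i for i in range(10)]  # registered group identities
rand = hashlib.sha256(b"genesis").digest()     # nominated default value
beacon = []
for _ in range(5):
    g = select_group(groups, rand)             # beacon picks the next group
    rand = next_random(groups[g], rand)        # that group signs, relaying on
    beacon.append(rand)
```

Because each step is a deterministic function of the previous value and the selected group's key, any observer can replay and verify the whole sequence, which is the essence of the VRF property.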
In a cryptographic threshold signature system, a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message individually (here the preceding group's threshold signature), creating individual "signature shares" that are then broadcast to other group members. The group threshold signature can be constructed upon combination of a sufficient threshold of signature shares. For example, if the group size is 400 and the threshold is set at 201, any client that collects that many shares will be able to construct the group's signature on the message. Each signature share can be validated by other group members, and the single group threshold signature produced by combining them can be validated by any client using the group's public key. The magic of the BLS scheme is that it is "unique and deterministic", meaning that whatever subset of group members the required number of signature shares are collected from, the single threshold signature created is always the same and only a single correct value is possible.
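The "unique and deterministic" recombination property has an accessible analogue in classical secret sharing. The sketch below is not BLS (which relies on elliptic curve pairings), but it shows via Shamir sharing and Lagrange interpolation why any sufficient subset of shares reconstructs exactly the same value; all parameters are toy values chosen for illustration:

```python
P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def eval_poly(coeffs, x):
    # evaluate the sharing polynomial at x, mod P (Horner's method)
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_at_zero(points):
    # recombine shares by interpolating the polynomial at x = 0
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 123456789                      # stands in for the group's secret
coeffs = [secret, 9876, 5432]           # degree 2 -> threshold of 3 shares
shares = [(x, eval_poly(coeffs, x)) for x in range(1, 8)]  # 7 "members"

# Any 3 of the 7 shares reconstruct the identical value:
a = lagrange_at_zero([shares[0], shares[2], shares[5]])
b = lagrange_at_zero([shares[1], shares[3], shares[6]])
```

In BLS the same interpolation happens "in the exponent" over signature shares, so the combined signature is likewise independent of which threshold subset supplied the shares.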
Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and the signatures generated by relaying between groups produce a Verifiable Random Function, or VRF. Although the sequence of random values is pre-determined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, in order for relaying to stall because a random number was not produced, the number of correct processes must be below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400, and the threshold is 201, 200 or more of the processes must become faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability this will occur is less than 10^-16 (you can verify this and experiment with group sizes and fault ratios using a hypergeometric probability calculator). This is due to the law of large numbers - even though individual actors might be unpredictable, the greater their number the more predictably they behave in aggregate.
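You can reproduce the quoted figure directly. The sketch below computes the exact hypergeometric tail probability for the stated parameters (10,000 processes, 3,000 faulty, groups of 400, threshold 201, so 200 or more faulty members stall the group):

```python
from math import comb

def prob_at_least(k: int, group: int, faulty: int, total: int) -> float:
    """P(at least k faulty members in a randomly sampled group of `group`
    processes, drawn without replacement from `total` processes of which
    `faulty` are bad) - the hypergeometric upper tail."""
    denom = comb(total, group)
    numer = sum(comb(faulty, i) * comb(total - faulty, group - i)
                for i in range(k, group + 1))
    return numer / denom

# Relaying stalls only if 200+ of a 400-member group are faulty:
p_stall = prob_at_least(200, 400, 3000, 10000)
```

Python's arbitrary-precision integers make the exact computation straightforward; the result lands comfortably below the bound quoted above.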
As well as being incredibly robust, such systems are also highly efficient. In a broadcast gossip network, a group of 400 can produce its threshold signature by relaying only about 20KB of communications data. Meanwhile the BLS threshold cryptography libraries DFINITY was involved in creating can perform the computation for the necessary operations in fractions of a millisecond on modern hardware.
Note: also see the technical papers and introductory decks.
A Threshold Relay "blockchain" is created by taking Threshold Relay, using the randomness to define a priority list of "forgers" at each block height, then having the current group also "notarize" the blocks produced. So for example, at block height h the random number produced at block height h-1 would randomly order all mining client processes in the network into a priority list, with the first process being in slot 0, the second in slot 1 and so on. When the members of group h first receive the preceding signature that selected their group, they set their stop watches running (these will be slightly out of sync, naturally, since they will receive the preceding group signature at different times). They then wait for the network's current block time to expire before they begin processing blocks produced by the priority list of mining processes.
For optimization purposes, slot 0 is allowed to produce a block immediately after the block time expires, and successive slots can produce blocks after additional small increments in time. The slots themselves are weighted, with blocks from slot 0 having a score of 1, blocks from slot 1 having a score of 0.5, and so on (forger rewards are weighted correspondingly when a block is included in the chain). The members of the current group produce signature shares on blocks they receive according to the following rules: (i) they have not previously signed a block representing a higher-scoring chain, (ii) the block references a block signed by the previous group, (iii) the block is valid with respect to its content and their local clock, and (iv) they have not seen their group's signature on a valid block.
Group members thus continue creating signature shares on blocks until their group has successfully signed a block and they have received the signature, whereupon they sign the previous randomness and relay to the next group (and stop signing blocks that they see). Of course, in practice the highest priority block from slot 0 will normally be waiting in members' network input queues for processing upon expiry of the block time, and this will be signed and no others. The scoring of blocks from different slots exists to help forgers and groups choose between candidate chains, but it is the group notarization that accelerates and cements convergence, since new blocks can only build on blocks the previous group has signed.
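The slot weighting lends itself to a tiny sketch. The halving pattern (a weight of 2^-slot) and the idea that a chain's score is the sum of its blocks' weights are assumptions extrapolated here from the two values given in the text:

```python
def block_score(slot: int) -> float:
    # slot 0 scores 1, slot 1 scores 0.5, and so on (assumed halving)
    return 2.0 ** -slot

def chain_score(slots_of_blocks) -> float:
    # a chain's score taken as the sum of its blocks' slot weights
    return sum(block_score(s) for s in slots_of_blocks)

# A chain built entirely from highest-priority (slot 0) blocks outranks
# one that had to fall back to lower-priority forgers:
best = chain_score([0, 0, 0])      # every block from slot 0
fallback = chain_score([0, 1, 2])  # two blocks from fallback slots
```

Under this scoring, honest members following rule (i) converge on the slot-0 chain whenever it is available.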
The block notarization process provides enormous advantages. Whereas in traditional Proof-of-Work and Proof-of-Stake blockchains it is always possible to go back in time and create a new branch of the chain, in Threshold Relay chains only blocks that have been broadcast at the correct time and notarized by the then correct group can be included in valid chains. This addresses key attacks and vulnerabilities such as "selfish mining" and "nothing at stake" that greatly increase the number of confirmations required before a block's inclusion in the chain is fully secure or "finalized". By contrast, Threshold Relay chains build consistency at a furious rate - normally there will only be a single candidate chain whose head is in slot 0, and once this has been signed it can be trusted for most purposes. Finality is usually provided in seconds.
The advantages of Threshold Relay blockchains are overwhelming. They don't depend upon expensive Proof-of-Work processes. If required, a network can run multiple chains in parallel without undermining their security properties. They finalize transactions far faster than any other system making it possible to create superior user experiences. And, because a fixed block time is allotted to forgers, far more transactions can be included (by contrast in Proof-of-Work systems, the faster a miner can broadcast a new block the greater the chance another will build upon it, encouraging him to build on empty blocks that he does not have to validate and thus also encouraging him to make his block empty - which is why 50% of Ethereum's blocks are currently empty). Services such as SPV can also be provided to clients if they have a Merkle root hash of the current set of groups in the network. Meanwhile, security is more predictable, since viable chains must always be notarized and visible.
In and of themselves, Threshold Relay blockchains cannot "scale out", although their performance properties certainly provide "scale-up" gains when compared with existing systems. DFINITY, however, applies their properties in a three-level scale-out architecture that addresses, in order, consensus, validation and storage. The consensus layer involves a Threshold Relay chain that creates a random heartbeat that drives a Validation Tree of Validation Towers in the validation layer, which does for validation what a Merkle tree does for data and provides almost infinitely scalable global validation. The random beacon also defines the organization of mining clients into storage (state) shards in the storage layer, which use their own Threshold Relay chains to quickly reach consensus on received transactions and resulting state transitions that are passed up to the validation layer. The top-level Threshold Relay consensus blockchain then records state roots provided by the Validation Tree that anchor all the storage in the network.
You will notice no mention of blocks of transactions is made, and this is because there are none. A DFINITY cloud is intended to store exabytes of state and process millions or billions of transactions a second. No process would be able to view more than an almost infinitesimally small fraction anyway. What the network does instead is focus on ensuring that recorded state - as defined by its root hash - only progresses through valid transitions upon application of valid transactions. Thereafter, the correct provenance of any data, the execution of any transaction, or the performance of required actions by the mining clients themselves, can be proven using Merkle paths to the current global root.
History tip. This architecture and supporting protocols were devised by Dominic Williams in early 2015 and were briefly introduced along with other technical innovations at a Bitcoin Devs meetup in San Francisco and during an "Introduction to Consensus" talk given at Devcon1 in London (if you blinked you'd have missed it!!!).
Note: also see the technical papers and introductory decks.
In a traditional blockchain system such as Bitcoin or Ethereum, the blocks record every transaction from genesis forwards and each member of the network participates in checking that the contents of blocks and recorded updates to state are valid. This would never work in a system such as DFINITY because the volumes of data and transactions are too large for any individual process to handle - potentially involving exabytes of state and billions of transactions each second. Therefore, DFINITY needs a way to securely validate updates to shards of its state using relatively small subsets of the processes in its network. It will then store just a single Merkle root anchoring the global state in its top-level chain (making it, ironically, far lighter weight than that of Bitcoin or Ethereum).
To construct the Merkle root we need a Validation Tree, which helps a decentralized network validate unlimited things in a similar way to how a Merkle tree makes it possible to notarize the existence of unlimited data using a single root hash. At each node of this tree will be a Validation Tower that validates assigned inputs and produces output digests attesting to their processing. At the lowest level in the tree, towers will receive transactions and the proposed consequential transformations occurring to assigned shards of state data. The purpose of a tower is to validate things using a relatively small subset of processes in the network, but to do so with similar security as if all the processes in the network had been used - as occurs, in theory at least, in Bitcoin and Ethereum. At first sight this sounds impossible, but luckily it is not.
A validation tower proceeds through an infinite sequence of levels, with each new level introducing an attestation that some transformation is valid. For example, a validation tower might be assigned to validate updates made to some shard of storage by transactions submitted to the network. Each level of a tower is constructed by a new group of processes that has been selected by the random beacon produced by the top-level Threshold Relay chain. When a group builds a new level of the tower, it attests that some new transformations are valid and that the transformations represented by some number of lower levels to a depth d are valid too.
Transformations are considered "validated" once the level first attesting to them has been buried to depth d, which indicates that the d-1 levels above also attest to them. Therefore, in normal circumstances, whenever a group of processes builds a new level, they also transition the transformations in the level previously buried to depth d-1 and now buried to depth d into a fully validated and irreversible state. The natural question is why an attacker can't somehow get an invalid transformation buried to depth d. This might take a lot of attempts since he will have to control the groups building d consecutive levels, but if getting his faulty transformation validated wins him a trillion dollars, it will be worth it!
To understand why this is not possible, first consider that once a process takes part in signing a new tower level, it enters a dormant state. Here it will remain until such time as the level it participated in building has been buried sufficiently deeply - which in normal circumstances will be to depth d - after which time it will be reactivated (in practice, processes will need to prove that their last tower level has been validated when participating in network roles such as block forging, and this is how they are excluded). The challenge for the adversary, therefore, is to overcome the expense involved in having his processes frozen, since they must be joined to the network with a substantial security deposit in participation tokens or some other expensive operation. Once he has committed processes to a faulty level, he can only free them by finding other processes that will build on top of it and get it buried d levels deep.
The adversary may hope to play the odds and trade the expense against some massive gain. His first problem is that he does not know which processes will be called upon by the random beacon to build the next level, nor the levels after that, since the randomness is unmanipulable and unpredictable. If correct process(es) are selected to build the next level they will reject the adversary's level since it is invalid. The adversary might therefore hope to withhold his faulty level from the next processes assigned by the randomness until such time as the next processes are also controlled by him and they will validate his faulty level. However, the Validation Tower protocol prevents this happening: whenever a new level is not built in lock step with the random beacon, each unvalidated level beneath it resets to needing a further d-1 levels built on top before they become valid.
Therefore, the adversary needs to commit his faulty processes to building a faulty level (that, say, validates an invalid state transition awarding him a trillion dollars) and then hope that, by complete chance, he will control the following d-1 consecutive groups that the random heartbeat selects to build additional levels on the tower. But it should be clear that the math will not stack up well for him (no pun intended). For example, if each level must be built by 10 processes, 6 of whom must cooperate to create a new level, and the network has 10,000 processes of which 3,000 are faulty (and, in an already highly unlikely event, controlled by this single adversary), the chance that he controls a single level-building group is 0.047. If he commits to a bad level, he will need - by luck - a further 9 levels built by groups he controls to succeed in having his fraud made valid. This will only occur with a probability of 0.047^9 = 0.000000000001119, or to put it another way, once every 893,550,862,955 tries!
You may now see the problem the adversary has. Each time he attempts the fraud by committing to bad levels and hoping his processes will be selected to build the d-1 following levels, with overwhelming probability the processes he uses will be decommissioned (meanwhile, correct processes below will not be decommissioned since they can build out from under him). In this example, each level he builds that is never validated by the tower costs him 6 processes. If the participation token security deposit associated with a process is valued at $10,000, each failed attempt costs him $60,000, and since success is expected only once in roughly 893 billion attempts, the expected cost of getting a fraud validated is around $5.36e16. Clearly, in the real world he will run out of resources long before he manages to fraudulently award himself a trillion dollars of crypto!
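The figures quoted above can be checked with a few lines of back-of-the-envelope Python (a hypergeometric tail for the group-control probability, which is almost identical to the binomial approximation at this scale):

```python
from math import comb

# Check of the worked example above: 3,000 of 10,000 processes are
# faulty, each level is built by a random group of 10 of which 6 must
# cooperate, and a bad level needs 9 further adversary-controlled
# levels on top before the fraud is validated.

def p_controls_group(total=10_000, faulty=3_000, group=10, threshold=6):
    # Probability at least `threshold` of a random group are faulty.
    return sum(
        comb(faulty, k) * comb(total - faulty, group - k)
        for k in range(threshold, group + 1)
    ) / comb(total, group)

p = p_controls_group()                # ~0.047
p_fraud = p ** 9                      # ~1.1e-12
tries = 1 / p_fraud                   # hundreds of billions of attempts
cost_per_try = 6 * 10_000             # 6 processes lost at $10,000 each
expected_cost = tries * cost_per_try  # ~$5e16
```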
What is fantastic about Validation Towers is that they enable the network to apply a small subset of processes to validate state transitions with complete security. All that is required for their operation is an incorruptible, unmanipulable and unpredictable source of randomness: supplied courtesy of Threshold Relay chains.
Note: also see the technical papers and introductory decks.
The purpose of a Validation Tree is to produce a Merkle tree over the current state data stored by the virtual computer and key events that network processes (mining clients and full nodes) must prove have occurred. The power of a Merkle tree is that it produces a single "root hash" digest that, while being as small as 20 bytes, can act as a signature for a virtually unlimited input data set. The input data is arranged in some suitable, well-defined fashion and becomes the leaves of the tree, and the hashes of the leaves are then themselves combined pairwise (or n-ary-wise) hierarchically in a tree until a single root hash is produced. Thereafter, the existence of a data leaf can be proven by providing a "Merkle path" up through the tree to the root, which comprises every higher hash whose value is partly dependent on its data.
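The Merkle tree mechanics just described can be sketched in a few lines of Python. This is a generic illustration of pairwise Merkle hashing and path proofs, not DFINITY's actual construction (which, for example, may use a different hash and arity):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last hash if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])      # paired node at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, index, path, root):
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

A 20-or-32-byte root thus anchors any number of leaves, and each leaf is provable with a path only logarithmic in the leaf count.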
By producing a Merkle root, the Validation Tree can anchor virtually unlimited quantities of data the network either stores as state or needs to hold for purposes of its internal functioning. As a distributed data structure, it involves an arrangement of Validation Towers that act as the nodes of the tree. These are driven by the heartbeat of a random beacon such as a Threshold Relay system, and validate inputs producing "fully validated" hashes as outputs. At the bottom level of the tree, Validation Towers sit directly above the state leaves, which will typically be shards of state managed by subsets of network processes. These pass up state transitions (updates to state created by computation performed by transactions) to their assigned Validation Towers, and the towers eventually produce hashes describing their validated new state. The power of the system is that a Validation Tower can also validate and combine the outputs of other towers to produce a Merkle tree.
For example, a bottom-level Validation Tower will produce attestations to a current root hash anchoring some range ("shard") of state. In a Merkle tree, its parent node would combine this hash with those of its sibling(s) recursively up through the hierarchy until the root hash is produced. The challenge in an enormous decentralized network such as DFINITY is that there may be too many leaf hashes for individual processes to combine into a Merkle tree. We might hope to simply assign subsets of processes to construct different parts of the Merkle tree and have the protocol assemble it from components, but in this case we would have no way of knowing that the components, and thus the overall tree, were correct. The solution is for Validation Towers to be used to combine the leaf hashes, upwards in a tree, until the root hash is produced. Thus higher towers receive and combine the hashes of their respective child nodes, producing a new fully validated hash that is passed to their parents, recursively, until the root is produced.
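Structurally, the tree of towers reduces to a recursive combine, which the following sketch shows. Each combine step here is a plain hash; in the real design every such step would itself be a Validation Tower validating its inputs d levels deep, so this is only the data-flow skeleton:

```python
import hashlib

# Skeleton of a Validation Tree: each node combines the fully
# validated hashes of its children into one hash, recursively, until
# a single root hash anchors every shard of state below.

def combine(hashes: list) -> bytes:
    return hashlib.sha256(b"".join(hashes)).digest()

def tree_root(shard_hashes: list, arity: int = 2) -> bytes:
    level = list(shard_hashes)
    while len(level) > 1:
        level = [combine(level[i:i + arity]) for i in range(0, len(level), arity)]
    return level[0]
```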
Thus, there is some root Validation Tower that produces valid root hashes, and it is from this that fully validated root hashes are taken and recorded in a network's top level record of consensus (such as a Threshold Relay blockchain). Each individual tower operates independently and can proceed at a different rate, which prevents the progression of the network being dependent upon some subset of the processing it is performing. The most recent root hash recorded by consensus then anchors the global state stored in any number of shards, and is also used to anchor critical events that have occurred as though they too are simply data. An individual process that was required to have participated in producing a level of some validation tower can thus prove performance of the action in communications with other processes by supplying a Merkle path to some root recorded in the consensus record. In this way we can anchor exabytes of data, and restrict participation in the network to processes whose behavior is correct.
Of course, a considerable journey is involved between an update to state being applied and the transformed state being anchored by a root hash recorded by the master consensus layer (since the combination of hashes must proceed upwards through towers in the hierarchy). This is unavoidable, since the master record can never be incorrect, but it does not have to reduce the speed with which computations are finalized. If a shard is maintained by a sufficiently large set of processes, many clients of the network will treat a transaction as final the moment the shard advertises it as decided. Meanwhile, finality is certainly achieved the moment the lowest tower has validated the transaction, even if it takes a while before the master consensus notarizes it. In the applications envisaged for decentralized cloud systems, the additional computational expense is also of no consequence: they provide enormous reductions in the costs associated with running cloud services through the properties of autonomy, unstoppability and tamperproofing, among others, which dramatically lower requirements for supporting human capital.
Note: also see the technical papers and introductory decks.
A key purpose of the decentralized cloud is to provide a compute platform where unstoppable applications can be built and run. This depends upon its capacity to securely store state in the protocol buffers of clients. In Bitcoin and Ethereum, there is a single blockchain recording transactions, and the Ethereum state database is replicated across all clients. Networks such as DFINITY are designed to store unlimited quantities of state as needed, and therefore it is not possible for clients to maintain copies of everything held - it might, after all, be many exabytes or more. Therefore it is necessary to partition (shard) the storage of state across clients, naturally raising questions about what factor of replication is needed to provide the necessary level of security. In turn, this depends upon the answer to another crucial question: how can we know that the data really is replicated?
The challenge that must be addressed is that although numerous clients might appear to hold replicas of data, the impression might only be a chimera constructed by an adversary for the purposes of earning mining rewards without doing any work. The problem is well illustrated by systems such as IPFS and its associated incentive system, FileCoin. IPFS is a decentralized file store, where files and other objects are addressed by the hash of their data. The problem is that when a user uploads their file, it is not clear how many times it is replicated, nor whether those clients currently storing - or caching - their file will continue to do so. FileCoin aims to solve this by paying participation token rewards to clients that can show that they hold copies of data, creating the necessary incentive for its widespread maintenance. The system involves challenges being made that clients can satisfy using copies of the data they hold. However, the unaddressed problem is that the protocol cannot be sure whether the clients are in fact just proxies for some giant mainframe where all the data is stored without any replication at all!
To provide realistic guarantees about the safety of data, networks such as DFINITY need to be much more sure about the underlying replication factor involved. This will also enable them to ensure that replication is not too high - after all, it would be ridiculous to replicate a file across 1M mining clients. The solution is provided by USCIDs, which require clients to maintain copies of the state data assigned to them in a unique form - hence the acronym "Unique State Copy ID". These work by requiring each client to store all data encrypted using a key derived from their identity, about which all other clients are necessarily aware. A specially tuned symmetric encryption algorithm is used that makes encryption relatively slow and decryption extremely fast. It is designed so that while it is possible to encrypt data when it is updated and written, it would not be practical to encrypt all the assigned state in reasonable time.
The USCID system requires that clients make attestations to their uniquely encrypted state during protocol communications. For example, when a client produces a candidate block in a Threshold Relay chain PSP, this must contain such an attestation. In order for the block to have a chance of being included in the chain and a reward returned, it must be broadcast within a limited time window of a few seconds, and here the cheating client has a problem. The attestation is the output of a hash chain produced by a random walk over their uniquely encrypted state - starting at some random block dictated by the random beacon present in Threshold Relay networks, the block is added to a hash digest that then selects another random block, and on, until the data of all the blocks in a random chain of some required length have all been added to the digest. Since hashing is fast, producing the attestation correctly will be easy so long as the data is encrypted using the derived key as required. However, if it is held in plaintext, for example on the imagined central mainframe, blocks will have to be encrypted on the fly before being fed into the digest. Because of the properties of the specially designed symmetric encryption algorithm used, production of the attestation will take too long for it to be useful.
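The attestation walk described above can be sketched as follows. The hash, the block-selection rule and the walk length are illustrative assumptions; the real scheme also depends on the specially tuned slow-encrypt/fast-decrypt cipher, which is not modelled here:

```python
import hashlib

# Toy sketch of a USCID attestation: starting from a beacon-derived
# block index, repeatedly fold the client's uniquely encrypted block
# into a running digest, and let the digest choose the next block.
# An honest client (data already encrypted) only hashes, which is
# fast; a cheater holding plaintext must encrypt each visited block
# on the fly with a deliberately slow cipher, and misses the window.

def attest(encrypted_blocks, beacon_seed: bytes, walk_len: int) -> bytes:
    digest = hashlib.sha256(beacon_seed).digest()
    index = int.from_bytes(digest, "big") % len(encrypted_blocks)
    for _ in range(walk_len):
        digest = hashlib.sha256(digest + encrypted_blocks[index]).digest()
        index = int.from_bytes(digest, "big") % len(encrypted_blocks)
    return digest
```

A validator holding a replica of the same data can reproduce the walk (re-encrypting under the attestor's derived key, slowly but offline) and compare digests.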
During normal communications a client will continually produce such attestations, which will rarely ever be validated. However, when the random beacon randomly requires validation, or when a reward is being earned during block origination, validation can be performed by other clients that hold replicas of the same data. An individual client with the same data can validate the attestation for itself by starting at the same block, decrypting the data to plaintext and then re-encrypting it using the attestor's derived key, and so on, until the same output hash should have been created, whereupon it can be compared. This will necessarily take a while due to the properties of the chosen encryption scheme, but this does not matter, as it can be performed independently of the short-term progression of the network. Of course, clients must in any case maintain earlier versions of the state using a special database in case of a chain reorganization, so walking the version of a copy from some earlier moment in time does not present a challenge. A structure similar to a Validation Tower is used to decide definitively whether an attestation is valid. If it is not, the attestor's security deposit will be lost and the job of holding replicas will be assigned to another client.
The Blockchain Nervous System (BNS) has access to special op codes in the virtual machine. This allows the BNS to freeze, unfreeze and modify otherwise independent software objects (smart contracts). It can also configure the DFINITY client software run by users, for example to make them upgrade to a new version of the network protocol.
DFINITY's BNS is not a traditional AI like a neural network or Bayesian classifier. On one hand it needs input from human-controlled "neurons" to make decisions on proposals, but on the other decisions result from decentralized "follow" relationships between neurons and non-deterministic algorithmic processes. The BNS improves its ability to make decisions as neurons are reconfigured by owners when new information comes to light and feedback is received. The actual process behind decisions is unknowable: neuron follow relationships exist only in neuron client software run by users on their own computers, and the distributed state of neuron client software cannot be captured. The process is non-deterministic because timing affects the way the neurons cascade to deliver decisions. The purpose of the BNS is to leverage crowd wisdom and knowledge to decide wisely on complex proposals such as "Adopt protocol upgrade X" or "freeze contract Y".
A neuron has voting power proportional to the dfinities locked inside it. Each neuron can either vote under the direction of its owner, or alternatively automatically seek to follow the voting of other neurons whose addresses the owner has configured. This is similar to longstanding "liquid democracy" concepts. In the BNS the follow relationships exist only on client computers and are unknowable, which is why the system might better be described as "opaque" liquid democracy. The BNS uses a system called "Wait For Quiet" to decide when it has received sufficient input to make a decision. Other information and algorithms can be used to assist with decision making, and "influential" neurons could potentially be driven by more traditional AI systems (whose designers are encouraged to come forward and make proposals).
To see why, imagine, conversely, that the follow relationships between neurons - and the follows that occur - were knowable, and that some controversial decision was made. It might be possible to show that some particular neuron caused a cascade of follows and that its owner was "responsible" for the decision, resulting in social opprobrium or even legal or government action against them. In extremis, public follow relationships might result in out-of-band pressure being applied - even by kidnapping or extortion - to the owners of neurons high in the influence graph. This would degrade the ability of the BNS to make good decisions in pursuit of its objectives.
You create a neuron by making a security deposit of dfinities. The influence of the neuron is proportional to the deposit size. Deposited dfinities can only be released by dissolving the neuron, which takes 3 months - giving neuron owners a strong incentive to help drive good decision making as bad decisions may devalue the dfinities they have locked up. Meanwhile, you can earn additional dfinities by making your neuron vote. You do this by taking the "delegate key" released when you created your neuron, and installing it into neuron client software you run on a computer (such as your laptop). This will detect and report proposals made to the BNS. Initially the neuron client will ignore proposals for a default period to provide you with a chance to direct it how to vote. However, after this time it will look at the neuron follow list you have defined for the decision category. This is a list of the addresses of other neurons, in priority order, that should be followed. Once the default period is up, your neuron will begin trying to follow other neurons rather than waiting for you. You can update your follow lists at any time. For example, if you follow a talented coder on reddit, and they advertise their neuron address, you might insert it into your follow list for technical decisions. Of course, your follow list is invisible to the world as it only exists on your computer. If you want more time to decide how a proposal should be handled, you can temporarily freeze the neuron to prevent it following automatically.
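The client behaviour described above - owner direction first, then an automatic fall back to the follow list once the default period expires - can be sketched like this. The function and parameter names are hypothetical, not the real neuron client API:

```python
# Hypothetical sketch of a neuron client's voting decision. The
# follow list lives only on the owner's machine; `observed_votes`
# stands in for the votes of other neurons the client has seen.

def decide_vote(owner_vote, follow_list, observed_votes, default_period_over):
    if owner_vote is not None:
        return owner_vote            # explicit owner direction wins
    if not default_period_over:
        return None                  # still waiting for the owner
    for neuron_addr in follow_list:  # priority order
        vote = observed_votes.get(neuron_addr)
        if vote is not None:
            return vote              # follow the first neuron that voted
    return None                      # nobody followed has voted yet
```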
Yes: creating and running a neuron is "thought mining". At the end of each DFINITY mining epoch you will receive a thought mining reward proportional to the number of dfinities you locked inside your neuron(s). The reward will be factored down by the proportion of decisions your neuron failed to vote on. But since you can configure your neuron client to follow the votes of other neurons specified by address, your neurons should reliably earn you dfinities so long as your client software runs regularly. Note that your configuration should be done carefully - as mentioned above, if the BNS makes bad decisions, the value of the dfinities you have locked up in your neurons could be adversely affected.
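The reward rule just stated - proportional to the locked dfinities, factored down by missed votes - amounts to a one-line formula. The per-epoch rate below is an illustrative placeholder, not a real network parameter:

```python
# Sketch of the thought mining reward rule described above.
# `epoch_rate` is an assumed illustrative value, not a network figure.

def thought_mining_reward(locked_dfn, decisions_total, decisions_voted,
                          epoch_rate=0.01):
    if decisions_total == 0:
        participation = 1.0
    else:
        participation = decisions_voted / decisions_total
    return locked_dfn * epoch_rate * participation
```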
The constitution is a written document that guides neuron owners regarding system objectives. Currently, the constitution directs and corrals the community around three main objectives: scheduling appropriate protocol upgrades in a timely way, reversing and mitigating hacks such as The DAO, and freezing prohibited system types. Some level of subjectivity is involved, particularly in the third objective. The initial constitution requires that systems whose primary purpose is vice or violence be frozen (note that the constitution makes no requirements regarding law, since the virtual DFINITY computer created by the decentralized network is inherently without geography and jurisdiction). The constitution makes carve-outs to clarify thinking. For example, games of pure chance can be evaluated with respect to gambling, but neuron holders are directed to pass prediction markets that provide benefits to society. Similarly, a prostitution exchange should be frozen, but a network of genuine sex therapists is explicitly acceptable. The constitution aims to clarify these matters. Where the constitution is not clear, ultimately it will be for the creators of systems whose status is unclear to either have the constitution amended, or persuade the community of neuron holders of their case and then take their chances with the BNS.
Yes. Proposals can be submitted to the BNS to amend the Constitution. Thus ultimately the BNS decides its own objectives.
Generally speaking, the use of quorums is problematic in decentralized voting for two reasons. First, it creates an edge that can be exploited - for example by last-minute "ambush" voting that changes the decision outcome on a controversial proposal in a manner that gives people no chance to respond. Second, it is very difficult to know how many people will participate in voting. The BNS uses Wait For Quiet to address attacks related to the first issue, and because neurons can automatically follow others, it is able to set quorums much higher, for example at 40%.
The Blockchain Nervous System is a "responsible super user" rather than a Big Brother. It can freeze forbidden system types, but the community can submit proposals to amend the Constitution if they feel something shouldn't be forbidden. The BNS also never destroys anything - if it makes a "mistake" freezing a system it can unfreeze it later. The purpose is not to be moralistic or even to enforce the law. For example, the Constitution takes no moral view on whether a SilkRoad exchange is a good or bad thing. Its main aim - currently - is simply to create a mainstream environment that is attractive for brand-sensitive businesses as well as users generally, and pure "Code is Law" systems exist for alternative use. Nonetheless, the BNS can also simply amend the constitution to lift restrictions any time it wants, and ultimately it is for the community of neuron holders to drive how it behaves.
You mine DFINITY by running instances of mining client software, each of which must have a "mining identity". DFINITY mining clients are expected to supply a relatively small but approximately fixed amount of computational and storage capacity to the network. For this reason, mining operations will run many, many clients.
DFINITY mining is very different to proof-of-work mining, where hashing puzzles are solved. In the DFINITY network, mining clients play roles processing data and are rewarded for performance of those roles. Consequently there is no need to add your clients to some kind of pooling system (this is not even possible), and each client you run will receive regular rewards as it participates in supporting the network, which it will do in various ways.
You must provide a "dfinities" security deposit to the Blockchain Nervous System, which is at risk if your client does not perform properly or gets hacked. The Blockchain Nervous System adjusts the current size of security deposit required to account for fluctuations in the value of dfinities and other factors.
In contrast to traditional decentralized networks like Bitcoin and Ethereum where new tokens are issued according to some predefined schedule, economic matters such as payment of mining rewards are subject to the Blockchain Nervous System, which wishes to create stability. It is also possible that whereas initially the DFINITY network will pay mining rewards in dfinities, eventually it will switch to using a price stable cryptofiat token such as PHI.
DFINITY uses new cryptography to hold clients to their promises. For example, the network determines whether clients have correctly maintained a unique copy of assigned state data using USCIDs (Unique State Copy IDs). Unless a client can produce a correct USCID when - for example - it creates a block, it will not be able to claim its reward. A more serious problem would occur if a client computer were hacked, since the crypto it holds is a honeypot that can be stolen, and the client can even be permanently expelled from the network by the protocol if it performs a provably "Byzantine" act.
You will prefer and be expected to have fast connectivity. For example, you might install 10 server machines in your basement and connect them with consumer fibre. These might host 100 mining clients.
You might start using cloud hosting, but will prefer to migrate to bespoke arrangements to maximize profits. You will distribute these and carefully firewall them from each other to make it harder for an attacker to gain widespread access as this would result in major losses.
DFINITY Stiftung will provide a procedure whereby people who are recommended genesis allocations of dfinities can assign them to mining identities in the genesis state it will propose. Before the network goes live, such miners must run special software that joins their mining identities to groups that will bootstrap DFINITY Threshold Relay (thus allowing the Copper release of the network to launch with a PSP blockchain).
Dfinities are participation tokens that in current designs perform four well-defined roles within the network:
Although dfinities will have value and might be exchanged, the general view within DFINITY is that currency needs to be stable and should either be created by existing financial institutions using the colored coin model (where, e.g., a bank stands behind tokens that it issues into the system) or by next-generation cryptofiat schemes that piggyback the economies where they are used, e.g. PHI (although PHI is unlikely to be available before 2018).
To acquire dfinities/DFN before the network goes live, you will need to participate in funding DFINITY Stiftung by making donations. Such donations will result in recommended allocations being made in a special smart contract on Ethereum (DFINITY will literally boot itself off Ethereum) that records part of the genesis state of the public DFINITY network. Note that DFINITY Stiftung cannot control participants in a decentralized network, and therefore, when it judges the client software is sufficiently mature to launch the public network, it can only recommend to the worldwide mining community that they use the "official" software version that boots the network from the special Ethereum smart contract system.
There will be three funding rounds: "pre-seed", "seed" and "main". The pre-seed round is limited to early contributors, some of whom have already donated years of research work or significant funding and resources to assist DFINITY Stiftung and the wider project to come into being. A total of 9.5% of all dfinities that are created will be reserved for this early contributor round. Another 12.5% of all dfinities created will be allocated as an endowment for the Foundation to support its operations. The remaining 78% will be allocated to those making donations in the seed and main rounds. Both rounds have target maximums, and if they are somehow reached, the autonomous decentralized systems used to orchestrate the process will automatically stop collecting new donations after 12 or 24 hours, depending upon the system configuration.
It will be possible to make donations in ETH, BTC or CHF (if a substantial amount is being donated, please make contact to arrange a donation to the Foundation in CHF). The date for the seed round has not yet been decided, but it is expected to take place during January 2017, and will be announced on Twitter and other places. It is expected that the target maximums for the seed and main rounds will be 1M and 20M CHF respectively. The baseline allocation will be 10 DFN per 1 CHF (Swiss Franc) in value donated, calculated using the prevailing exchange rates.
In current discussions, the seed round will provide a 3X multiplier (1 CHF donated results in a recommended allocation of 30 DFN), which reflects that a test DFINITY network is not yet running and the general lack of information and publicity that exists at the seed stage. The main round will start with a 1.25X multiplier, which decreases linearly over the 6 weeks the round can run.
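The allocation arithmetic stated above (10 DFN per CHF baseline, scaled by the round multiplier) is simply:

```python
# Arithmetic from the figures quoted above. The linear-decay endpoint
# of the main-round multiplier is not specified in the text, so only
# the stated starting multipliers are used here.

BASELINE_DFN_PER_CHF = 10

def recommended_allocation(chf_value, multiplier):
    return chf_value * BASELINE_DFN_PER_CHF * multiplier
```

So 1 CHF in the seed round yields a recommended 30 DFN, and 100 CHF at the start of the main round yields 1,250 DFN.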
Note that you should only consider making donations if you wish to see the network launched and participate for your own reasons. DFN are unsuitable as a speculative investment and are not intended to be used that way. Many factors, including undiscovered flaws in the new theories being applied by DFINITY, could lead to failure of the project, making DFN participation tokens useless and thus valueless. The worldwide mining community might even ignore the allocation recommendations of DFINITY Stiftung, which is not in a position to issue DFN. Currently, due to a lack of regulatory clarity, DFINITY Stiftung is not planning on accepting donations from the USA. This is regrettable, but there are numerous agencies in the USA whose positions are ambiguous, which might place participants in jeopardy. Please contact us directly if you have specific questions.
In DFINITY all economic measures are subject to the Blockchain Nervous System, including inflation. Initially, it will issue new dfinities as mining rewards and thought mining rewards (provided to those running neurons). The precise amounts of dfinities issued will relate to fluctuations in the value of dfinities, whether the BNS wants to create an incentive for miners to join additional clients and other factors. Eventually though, the BNS might decide rewards should be paid using a stable currency such as PHI or some other system - since it has complete control over the protocol - effectively ending inflation. The BNS is driven by neurons, and the owners of neurons will tend to make them favor decisions that maximize the value of deposited dfinities through driving effective network operation and mass adoption.
Stay tuned to the latest updates in development and sign up for our newsletter. You’ll be the first to know when we launch DFINITY.