Frequently Asked Questions

General

The Internet Computer extends the functionality of the public Internet so that it can host backend software, transforming it into a global compute platform.

Using the Internet Computer, developers can create websites, enterprise IT systems and internet services by installing their code directly on the public Internet, dispensing with server computers and commercial cloud services.

Enabling systems to be built directly onto the Internet is undoubtedly cool, but the Internet Computer is about more than just this. It seeks to address serious, long-standing technical problems that trouble IT, such as security, and also to provide the means to undo the ever-increasing monopolization of internet services, user relationships and data, and to restore the Internet to its permissionless, innovative and creative roots.

For example, the Internet Computer hosts its native software within an unstoppable, tamperproof environment, making it possible to create systems that don’t depend on firewalls, backup systems and failover for their security. This framework also makes interoperability between different systems as simple as a function call, and it automatically persists memory, removing the need for traditional files and enabling organizations to dispense with standalone infrastructure such as database servers. These features enable internet software systems to be completely self-contained, address today’s IT security challenges, and dramatically reduce the exorbitant complexity and cost of IT.

Increasing monopolization of internet services is addressed through support for “autonomous software”, which runs without an owner. This allows “open” versions of mainstream internet services to be created, such as social media websites or SaaS business services, that run as part of the fabric of the Internet itself. These new open services can provide users with far superior guarantees about how their data is processed. Moreover, they can share their user data and functionality with other internet services using permanent APIs that can never be revoked, eliminating “platform risk” and allowing the ecosystem to be extended dynamically and collaboratively. The resulting mutualized network effects enable open services to compete with Big Tech monopolies, providing tremendous new opportunities to entrepreneurs and investors.

The Internet Computer is formed by an advanced decentralized protocol called ICP (Internet Computer Protocol), which independent data centers around the world run to combine the power of individual computers into an unstoppable, seamless universe where internet-native software is hosted and run with the same security guarantees as smart contracts. It is integrated with Internet standards such as DNS, and can serve user experiences directly to Web browsers and smartphones.

The Internet Computer makes the world better by providing solutions to the critical problems facing tech today. Here are some key ways it makes things better:

Ending The Captive Customer Trap

The Internet Computer makes it possible to build websites, enterprise systems and internet services by uploading software into a seamless open universe where it runs securely and can easily interact with users and other software. By contrast, builders using the legacy IT stack must roll their own platforms by selecting from a multitude of commercial cloud services, cloud tools, proprietary and open source variations of operating systems, components such as databases and firewalls, virtualization technologies, software development platforms and more. The resulting complexity, the highly custom nature of the systems assembled, the special developer knowledge needed to maintain them, and the associated vendor relationships make it expensive and difficult to migrate and adapt legacy systems as needs change. The effect is compounded by legacy stack vendors strategizing to create captive customers, for example by encouraging dependence on custom features and using restrictive licensing. In the future, developers building on the Internet Computer will marvel that anything used to get built at all, and will jealously guard their freedom.

Powering Systems That Are Secure By Default

Using the legacy stack it is almost impossible to build and maintain systems that are truly secure. Once systems have been built to provide the desired functionality, additional hardening work must be performed to make them safe, which involves protecting them from the outside world using firewalls, and carefully configuring and administering their components. A single mistake by just one member of the IT team, a malicious insider, or a failure to apply software updates in time can result in hackers jumping firewalls and creating havoc. Consequently, the legacy stack is behind a rolling global meltdown in security, with ever-increasing hacks, data thefts, and incidents where entire infrastructures cease to function after ransomware encrypts server machines. By contrast, the Internet Computer provides a tamperproof environment for unstoppable software that does not depend on firewalls and hardening for security, in which installed software systems are secure by default and run with the same security guarantees provided to smart contracts. In the future, when systems get hacked or experience downtime, people will fairly ask, “why didn’t they build on the Internet Computer?”.

Fixing Debilitating IT Complexity, Costs and Delays

The legacy stack is always evolving, but the problem of overarching complexity in IT isn’t going away, and some would say it is worsening. Complexity drives costs, slows down system development, and of course is a contributing factor in security woes that cost yet more money to mitigate. Today, 85% of IT costs at a typical Fortune 500 company reside with IT Operations (i.e. people), who often have to spend more than 90% of their time dealing with system complexity that is unrelated to the functionality they are trying to provide, such as configuring infrastructure components so they will talk to each other. Reducing complexity can deliver huge dividends by cutting costs and time to market. The Internet Computer has dramatically reimagined software in a way that addresses the challenge. For example, when developers write code that describes data (such as the profile of a person, say), that data is automatically and securely persisted within the memory pages hosting their software, removing the need for the developer to marshal it in and out of databases, or even to think much about how persistence works at all (the feature is called “orthogonal persistence”). Without the need for legacy components such as databases, and working with reimagined software, Internet Computer developers focus on coding up “what” they want to achieve, rather than the traditional complexities of “how” systems are constructed and interoperate, driving incredible efficiencies.
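
The difference in programming model can be sketched schematically in plain Python (this is not Internet Computer code, and the function and file names are purely illustrative): with a database, the developer marshals data in and out on every call; with orthogonal persistence, the state is just a program variable that the platform keeps alive.

```python
# Schematic contrast of the two programming models (plain Python, not Internet
# Computer code). With a database, profile data must be explicitly loaded and
# saved; with orthogonal persistence, state is simply part of program memory.
import json

# Legacy style: marshal the data in and out of external storage on every call.
def set_bio_with_database(user_id: str, bio: str, db_path: str = "profiles.json") -> None:
    try:
        with open(db_path) as f:
            profiles = json.load(f)
    except FileNotFoundError:
        profiles = {}
    profiles[user_id] = {"bio": bio}
    with open(db_path, "w") as f:
        json.dump(profiles, f)

# Orthogonal-persistence style: the dictionary is just an in-memory variable.
# On the Internet Computer the runtime would persist it automatically across
# calls; in this plain Python sketch it only lives as long as the process.
profiles: dict[str, dict[str, str]] = {}

def set_bio(user_id: str, bio: str) -> None:
    profiles[user_id] = {"bio": bio}
```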

Powering “Open Internet Services” And Ending Big Tech Monopolies

The primary aim of technology companies has always been to establish monopolistic positions, which then allow vast profits to be generated. Originally the strategy was pursued by operating system vendors, such as Microsoft, but it also became the goal for internet platforms and services. This has been seen with everything from online auction sites, through social gaming, apartment rentals, ride sharing, email, search, online advertising, cloud services and SaaS business systems, to social networks and much more. Venture capitalists fund entrepreneurial startups they believe can execute well and capture enough user relationships and data to create compounding network effects that make competition almost impossible within their respective fields, but in recent years this system has been coming undone. The challenge is that the largest players in Big Tech have hijacked sufficient user relationships that creating new internet services is becoming more and more difficult, and consolidation is making the situation worse as the largest monopolies buy up the smaller monopolies.

The problem arises from the way internet services share their user relationships, data and functionality via APIs on the Programmable Web. In recent years, many opportunities have involved building on the APIs provided by Big Tech. For example, Zynga became the largest social games company primarily by publishing via Facebook, but one day Facebook changed the rules, and within 3 months 85% of Zynga’s $15 billion value had been lost. More recently, LinkedIn had been allowing thousands of startups to query its database of professional profiles and incorporate them into their own services, but after it was purchased by Microsoft it revoked API access for all but a few fellow Big Tech players, such as Salesforce, causing widespread damage. These are examples of “platform risk” at play. In 2019, when Mark Zuckerberg, the CEO of Facebook, rejected a meeting request from the CEO of Tinder, the world’s biggest dating service, he said “I don’t think he’s that relevant. He probably just wants to make sure we won’t turn off their API”. This kind of thing has become the norm, and even the smaller tech monopolies are worried.

Nowadays, most venture capitalists will not invest in startups creating services that depend on the APIs of Big Tech, even if they are otherwise exciting propositions, greatly limiting opportunity, competition and innovation, which harms all of us. The Internet Computer addresses this by providing technology that supports the creation of a new kind of “open internet service” that runs as part of the fabric of the Internet without an owner. These can provide better guarantees to users about how their data is processed, but an equally important superpower is that they can create “permanent” APIs that are guaranteed never to be revoked or degraded (because they cannot be, whatever those running the service decide). Thus, for example, an open equivalent to LinkedIn might be created that provides an API other internet services can use, without risk, to incorporate the professional profiles it hosts, creating powerful “mutualized network effects” that help it out-compete the monopolistic LinkedIn: since the API is guaranteed, thousands of new services might extend its functionality, making its services and the professional profiles it hosts more valuable, and leaving those other services comfortable forwarding new users to it. A key purpose of the Internet Computer is to power the reengineering of key Internet services in open form, in a way that completely reverses the incentives driving the hijacking of data, and so to drive the formation of a far more dynamic, collaborative, richer and ultimately more successful internet ecosystem.

To participate in the open network that creates the Internet Computer, a data center must first get a DCID (Data Center Identity). This is obtained through an algorithmic governance system that is part of the Internet Computer itself, called the Network Nervous System, which plays a roughly analogous role to ICANN (a not-for-profit organization that helps govern the Internet) but has broader functionality and drives many aspects of network management. Once a data center has obtained a DCID, it can make node machines, which are essentially server computers with a standardized specification, available to the network. When the Internet Computer needs more capacity, the Network Nervous System inducts these machines, forming them into “subnetworks” that can host software canisters. Special software running on node machines interacts with other nodes using ICP and applies advanced computer science to allow their compute capacity to be securely added to the Internet Computer.

Data centers receive remuneration in the form of tokens, which canisters also use to create the “gas” that powers computation and storage. Token payouts are moderated so that approximately constant returns are received per node over time. Participation in the Network Nervous System algorithmic governance system is possible by locking tokens to create “neurons”, which provide their holders with voting rewards and enable automatic voting in a liquid-democracy-style scheme. ICP has been designed so that the Internet Computer is managed in an open way and can grow without bound, such that it may eventually incorporate millions of node machines.
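
To make the “approximately constant returns” idea concrete, here is a minimal, purely illustrative sketch: the number of tokens paid to a node is a fixed fiat-denominated target divided by the current token price, so the token-denominated payout falls as the token price rises. The target figure, currency and function name are assumptions made for this example, not protocol parameters.

```python
# Purely illustrative sketch of "approximately constant returns per node":
# pay each node a roughly fixed fiat-denominated amount by adjusting the number
# of tokens paid as the token price moves. The 1,000 CHF target and the example
# prices below are made-up values, not protocol parameters.

MONTHLY_TARGET_CHF = 1_000.0  # hypothetical target return per node, in Swiss francs

def monthly_token_reward(token_price_chf: float) -> float:
    """Tokens paid to one node this month so its reward is ~MONTHLY_TARGET_CHF."""
    return MONTHLY_TARGET_CHF / token_price_chf

print(monthly_token_reward(2.0))  # 500.0 tokens when the token trades at 2 CHF
print(monthly_token_reward(4.0))  # 250.0 tokens when the token trades at 4 CHF
```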

Today, people create enterprise IT systems and internet services using a legacy IT stack that is intrinsically insecure. They must first arrange hosting, typically using cloud services provided by Big Tech, and then create assemblies from their own software and configurations of legacy components such as web and database servers, which they must regularly patch to fix vulnerabilities, and which they typically protect with firewalls, VPNs and other anti-intrusion systems. The problem is that these systems contain so many pathways of attack that they cannot be made secure, and this weakness of the legacy IT stack cannot be fixed. The Internet Computer provides a completely different kind of environment that cannot be hacked or stopped, which does not depend upon Big Tech vendors, where systems can be created without legacy components such as databases, and where software logic always runs as designed against the expected data.

Essentially this is possible because the Internet Computer is created by independent data centers around the world running a mathematically secure protocol that combines their computational capacity using advanced computer science, creating a new kind of environment for building and hosting enterprise IT systems and internet services using a new kind of software that does not rely upon today’s legacy IT stack at all.

Today, if you want to build a new internet service, very often you will need to incorporate user data, user relationships or functionality contained and corralled within monopolistic established services owned and run by Big Tech. Essentially, this means that to function correctly, new services you or others build often depend heavily upon access to APIs (Application Programming Interfaces) provided by companies such as Microsoft, Google, Amazon and Facebook. Sadly, the history of the past decade shows that building on these APIs is like building on sand, which is why we are seeing fewer and fewer interesting new startup services being launched and succeeding.

The Internet Computer provides a means to create internet services in a totally new way using “autonomous software”. These internet services can provide guarantees to users about how their data is processed, but even more importantly, can provide guarantees about the availability of the APIs they provide to other services that wish to incorporate the data or functionality they share. Ultimately, this will enable entrepreneurs and communities of developers to create a new breed of service that benefits from better network effects. These network effects will be mutualized and allow vast numbers of services to be built out collaboratively because they can build on and extend each other without trust, creating a far more diverse, dynamic, fertile and eventually dominant ecosystem.

Dominic Williams is the Founder, President and Chief Scientist of the DFINITY Foundation and Internet Computer project. He hails from the UK but moved to Palo Alto, California in 2012, and continues to travel among the various operations DFINITY maintains around the globe. He has a background as a technology entrepreneur, distributed systems engineer and theoretician. His last major venture was an MMO game that grew to millions of users, which ran on a novel horizontally scalable game server technology he created. More recently, he has distinguished himself through contributions to distributed computing and crypto theory, with works including Threshold Relay, Probabilistic Slot Consensus, Validation Towers and Trees, Puzzle Towers, The 3 E's of Sybil Resistance, and the core architecture of the Internet Computer and the framework it provides for hosted software.

Technical

Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, that is designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language that is highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, it is expected there will be many different SDKs that target the Internet Computer.

The Motoko canister SDK can be found at http://sdk.dfinity.org

The Internet Computer is not a blockchain, although it might fairly be considered an evolution of blockchain technology, and was inspired years ago by the “blockchain computer” concept pioneered by Ethereum. A similarity is that the platform is formed by a highly fault tolerant decentralized network protocol that uses tokens and gas to mediate participation and control, although in the case of the Internet Computer, these aspects are mostly hidden from users and developers.

The platform hosts software systems and services with the same kinds of strong availability and security guarantees that smart contracts receive. However, whereas blockchains have terrible performance, fixed or very expensive capacity, and often require augmentation with websites running on untrusted platforms such as Amazon Web Services to be useful, the Internet Computer is fast, can increase its compute and storage capacity without bound as demand increases, is relatively inexpensive to build on, and supports completely self-contained systems that can securely serve user experiences directly to web browsers such that secure end-to-end systems can be created without problematic dependencies.

The Internet Computer leans more towards the internet model, and incorporates computing power provided by independent conventional data centers rather than traditional cryptocurrency miners, although participation by amateurs and small operations is also possible. Data center participation is permissioned by an onboard open algorithmic governance system called the Network Nervous System, which runs as part of the Internet Computer and acts as an algorithmic alternative to ICANN and IANA (which, for example, assign the ASN numbers used with BGP routers). Data centers are remunerated for providing compute power using tokens, in amounts that provide approximately constant returns per compute unit lent, relative to their local currency, such as the Swiss franc.

Development of the decentralized protocol, initial software implementations and support of the public network is pursued by the not-for-profit DFINITY Foundation, based in Zurich, Switzerland, and its various research centers around the world, and partner organizations. The Internet Computer depends on advanced cryptography, distributed computing, a deterministic virtual machine based on WebAssembly, aspects of computer hardware, and many other technologies.

Important Note: This Answer Is Very Out of Date

So far, the only means we have found to organize a vast number of node clients into an attack-resistant network that produces a virtual computer is to apply cryptographically produced randomness. Of course, Satoshi also relied on randomness, by having miners race to solve a current puzzle whose solutions can only be found randomly using brute force computation - then allowing winners to append blocks of Bitcoin transactions to his chain. DFINITY needs stronger and less manipulable randomness that is produced more efficiently, in fixed time. Randomness does not only play an important role in ensuring that consensus power and rewards are fairly distributed among all miners: Turing-complete blockchains like DFINITY require an even higher standard of randomness, since smart contract applications may enable high-volume transactions that hinge on aleatory conditions, so that the potential gain from manipulation could be arbitrarily high.

The solution we found is Threshold Relay, which applies cryptography to create randomness on the demand of a sufficient number of network participants, in a manner that is almost incorruptible, totally unmanipulable and unpredictable. Using Threshold Relay, DFINITY network participants produce a deterministic Verifiable Random Function (or VRF) that powers network organization and processing.

Important Note: This Answer Is Very Out of Date

A network of clients is organized as described in the foregoing answer. Threshold Relay produces an endogenous random beacon, and each new value defines a random group (or groups) of clients that may independently try to form a "threshold group". The composition of each group is entirely random, such that groups can intersect and a client can appear in multiple groups. In DFINITY, each group comprises 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they register the public key ("identity") created for their group on the global blockchain using a special transaction, such that it becomes part of the set of active groups in a following "epoch". The network begins at "genesis" with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values - if they were not, the group's signatures on messages would be predictable and the threshold signature system insecure - and each random value thus produced is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values.
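
The relay loop can be sketched in a few lines of Python. In this toy model, SHA-256 stands in for the unique, deterministic BLS threshold signature that a real group would produce (so it carries no cryptographic security), and the number of groups is arbitrary; everything else mirrors the description above: the current group "signs" the previous random value, the output becomes the next beacon value, and that value selects the successor group.

```python
# Toy sketch of the Threshold Relay random beacon. SHA-256 stands in for the
# unique, deterministic BLS threshold signature of each group, so this has no
# cryptographic security - it only illustrates the relaying structure.
import hashlib

NUM_GROUPS = 10  # arbitrary number of registered groups for the toy model

def group_sign(group_id: int, message: bytes) -> bytes:
    """Stand-in for the group's threshold signature on `message` (deterministic)."""
    return hashlib.sha256(group_id.to_bytes(4, "big") + message).digest()

def relay(rounds: int) -> list[bytes]:
    beacon = []
    value = b"genesis default value"  # default value signed by the nominated genesis group
    group = 0                         # index of the nominated genesis group
    for _ in range(rounds):
        value = group_sign(group, value)                   # group signs previous value
        beacon.append(value)                               # new random beacon output
        group = int.from_bytes(value, "big") % NUM_GROUPS  # output selects successor group
    return beacon

for output in relay(5):
    print(output.hex())
```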

In a cryptographic threshold signature system, a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message individually (here the preceding group's threshold signature), creating individual "signature shares" that are then broadcast to other group members. The group threshold signature can be constructed upon combination of a sufficient threshold of signature shares. So, for example, if the group size is 400 and the threshold is set at 201, any client that collects 201 shares will be able to construct the group's signature on the message. Each signature share can be validated by other group members, and the single group threshold signature produced by combining them can be validated by any client using the group's public key. The magic of the BLS scheme is that it is "unique and deterministic", meaning that whichever subset of group members the required number of signature shares is collected from, the single threshold signature created is always the same, and only a single correct value is possible.
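
The "unique and deterministic" combination property can be illustrated with plain Shamir secret sharing over a prime field, which uses the same Lagrange interpolation that BLS threshold signatures apply "in the exponent": whichever qualifying subset of shares is combined, the result is identical. This is an analogy rather than BLS itself, and the field modulus below is chosen only for illustration.

```python
# Analogy for threshold combination using Shamir secret sharing: any 201 of the
# 400 shares reconstruct exactly the same value, mirroring how any 201 BLS
# signature shares combine into the same unique group signature.
import random

P = 2**127 - 1  # a Mersenne prime, used here as an illustrative field modulus

def make_shares(secret: int, threshold: int, num_shares: int) -> list[tuple[int, int]]:
    """Split `secret` into shares; any `threshold` of them suffice to recombine."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, num_shares + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0; the result does not depend on which
    qualifying subset of shares is supplied."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, threshold=201, num_shares=400)
print(combine(random.sample(shares, 201)))  # 123456789
print(combine(random.sample(shares, 201)))  # same value from a different random subset
```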

Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and the signatures generated by relaying between groups produce a Verifiable Random Function, or VRF. Although the sequence of random values is pre-determined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, in order for relaying to stall because a random number was not produced, the number of correct processes must be below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400 and the threshold is 201, 200 or more of the processes must become faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability this will occur is less than 10e-17.
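
This probability can be computed directly, since the number of faulty members in a randomly sampled group of 400 follows a hypergeometric distribution. The short calculation below uses only the figures quoted above; the helper functions are ours, written for illustration, not DFINITY code.

```python
# Probability that a randomly selected group of 400, drawn from 10,000 processes
# of which 3,000 are faulty, contains at least 200 faulty members (the condition
# for relaying to stall when the threshold is 201).
import math

def log_comb(n: int, k: int) -> float:
    """Natural log of the binomial coefficient C(n, k)."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def hypergeom_tail(N: int, K: int, n: int, k_min: int) -> float:
    """P[X >= k_min], where X counts faulty members in a sample of size n
    drawn without replacement from N processes containing K faulty ones."""
    log_den = log_comb(N, n)
    return sum(math.exp(log_comb(K, k) + log_comb(N - K, n - k) - log_den)
               for k in range(k_min, min(K, n) + 1))

p_stall = hypergeom_tail(N=10_000, K=3_000, n=400, k_min=200)
print(f"P(>= 200 faulty members in a group of 400) = {p_stall:.2e}")
```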

As well as being incredibly robust, such systems are also highly efficient. In a broadcast gossip network, a group of 400 can produce its threshold signature by relaying only about 20KB of communications data. Meanwhile the BLS threshold cryptography libraries DFINITY was involved in creating can perform the computation for the necessary operations in fractions of a millisecond on modern hardware. You can learn more about Threshold Relay by reading the DFINITY Consensus Whitepaper.
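
For a rough sense of where a figure of about 20KB can come from, note that each of the 400 group members broadcasts one signature share. Assuming compressed 48-byte BLS signature shares (our assumption for illustration; the actual share encoding and message framing may differ), the arithmetic is:

```python
# Back-of-the-envelope check of the ~20KB figure, assuming each of the 400
# group members relays one 48-byte compressed BLS signature share. The share
# size is an illustrative assumption, not a figure taken from the text.
GROUP_SIZE = 400
SHARE_BYTES = 48  # e.g. a compressed BLS12-381 G1 signature

total_bytes = GROUP_SIZE * SHARE_BYTES
print(f"{total_bytes} bytes = {total_bytes / 1024:.1f} KB")  # 19200 bytes, i.e. about 20KB
```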