Frequently Asked Questions

General

The DFINITY Foundation is a not-for-profit scientific research organization based in Zurich, Switzerland, that oversees research centers in Palo Alto, San Francisco, and Zurich, as well as teams in Japan, Germany, the UK, and across the United States. The Foundation’s mission is to build, promote, and maintain the Internet Computer. Its accomplished team — which includes many of the world’s top cryptographers and distributed systems and programming language experts, with nearly 100,000 academic citations and 200 patents collectively — is committed to building advanced experimental technologies to improve the public internet.

The Internet Computer extends the functionality of the public Internet so that it can host backend software, transforming it into a global compute platform.

Using the Internet Computer, developers can create websites, enterprise IT systems and internet services by installing their code directly on the public Internet, dispensing with server computers and commercial cloud services.

Enabling systems to be built directly onto the Internet is undoubtedly cool, but the Internet Computer is about more than this alone. It seeks to address serious long-standing problems that bedevil IT, including system security, to provide a means to reverse the ever-increasing monopolization of internet services, user relationships and data, and to restore the Internet to its permissionless, innovative and creative roots.

For example, the Internet Computer hosts its native software within an unstoppable, tamperproof environment, making it possible to create systems that don’t depend on firewalls, backup systems and failover for their security. This framework also makes interoperability between different systems as simple as a function call, and it automatically persists memory, removing the need for traditional files and enabling organizations to dispense with standalone infrastructure such as database servers. These features enable internet software systems to be completely self-contained and address today’s security challenges while dramatically reducing the exorbitant complexity and cost of IT.

The increasing monopolization of internet services is addressed through support for “autonomous software”, which runs without an owner. This allows “open” versions of mainstream internet services, such as social media websites or SaaS business services, to be created that run as part of the fabric of the Internet itself. These new open services can provide users with far superior guarantees about how their data is processed. Moreover, they can share their user data and functionality with other internet services using permanent APIs that can never be revoked, eliminating “platform risk” and allowing the ecosystem to be extended dynamically and collaboratively. The resulting mutualized network effects enable open services to compete with Big Tech monopolies, providing tremendous new opportunities to entrepreneurs and investors.

The Internet Computer is formed by an advanced decentralized protocol called ICP (Internet Computer Protocol) that independent data centers around the world run to combine the power of individual computers into an unstoppable, seamless universe where internet-native software is hosted and run with the same security guarantees as smart contracts. It is integrated with Internet standards such as DNS, and can serve user experiences directly to Web browsers and smartphones.

The Internet Computer makes the world better by providing solutions to the critical problems facing tech today. Many of these critical problems arise from issues with the current structure of the public internet. Here are some key ways the Internet Computer makes the public internet better:

Ending The Captive Customer Trap

The Internet Computer makes it possible to build websites, enterprise systems and internet services by uploading software into a seamless open universe where it runs securely and can easily interact with users and other software. By contrast, builders using the legacy IT stack must roll their own platforms by selecting from a multitude of commercial cloud services, cloud tools, proprietary and open source variations of operating systems, components such as databases and firewalls, virtualization technologies, software development platforms and more. The resulting complexity, the highly custom nature of the systems assembled, the special developer knowledge needed to maintain them, and the associated vendor relationships make it expensive and difficult to migrate and adapt legacy systems as needs change. The effect is compounded by legacy stack vendors strategizing to create captive customers, for example by encouraging dependence on custom features and using restrictive licensing. In the future, developers building on the Internet Computer will marvel at how anything used to get built at all, and jealously guard their freedom.

Powering Systems That Are Secure By Default

Using the legacy stack, it is almost impossible to build and maintain systems that are truly secure. Once systems have been built to provide desired functionality, additional hardening work must be performed to make them safe, which involves protecting them from the outside world using firewalls, and careful configuration and administration of their components. A single mistake by just one member of the IT team, a malicious insider, or a failure to apply software updates in time can result in hackers jumping firewalls and creating havoc. Consequently, the legacy stack is behind a rolling global meltdown in security, with ever-increasing hacks, data thefts, and incidents where entire infrastructures cease to function after ransomware encrypts server machines. By contrast, the Internet Computer provides a tamperproof environment where unstoppable software runs that does not depend on firewalls and hardening for security, and in which installed software systems are secure by default and run with the same security guarantees provided to smart contracts. In the future, when systems get hacked or experience downtime, people will fairly ask, “why didn’t they build on the Internet Computer?”.

Fixing Debilitating IT Complexity, Costs and Delays

The legacy stack is always evolving, but the problem of overarching complexity in IT isn’t going away, and some would say it is worsening. Complexity drives costs, slows down system development, and of course is a contributing factor in security woes that cost yet more money to mitigate. Today, 85% of IT costs at a typical Fortune 500 company reside with IT Operations (i.e. people), who often have to spend more than 90% of their time dealing with system complexity that is unrelated to the functionality they are trying to provide, such as configuring infrastructure components so they will talk to each other. Solving for complexity can deliver huge dividends by reducing costs and time to market. The Internet Computer has dramatically reimagined software in a way that addresses the challenge. For example, when developers write code that describes data (such as the profile of a person, say), that data is automatically and securely persisted within the memory pages hosting their software, removing the need for the developer to marshal it in and out of databases or even think much about how persistence works at all (the feature is called “orthogonal persistence”). Without the need for legacy components such as databases, and working with reimagined software, Internet Computer developers focus on coding up “what” they want to achieve, rather than the traditional complexities of “how” systems are constructed and interoperate, driving incredible efficiencies.
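
As a rough illustration of orthogonal persistence, here is a minimal Motoko sketch (the actor, type and function names are hypothetical, not taken from any DFINITY template). The profiles map simply lives in the canister's memory across calls; there is no database, file or ORM to manage:

    import Text "mo:base/Text";
    import HashMap "mo:base/HashMap";

    // Illustrative only: profile data kept directly in canister memory.
    actor Registry {
      type Profile = { name : Text; bio : Text };

      // Persists across calls with no explicit storage code.
      let profiles = HashMap.HashMap<Text, Profile>(16, Text.equal, Text.hash);

      public func put(id : Text, p : Profile) : async () {
        profiles.put(id, p);
      };

      public query func get(id : Text) : async ?Profile {
        profiles.get(id)
      };
    }

Variables of stable types can additionally be marked stable so that they also survive canister upgrades.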

Powering “Open Internet Services” And Ending Big Tech Monopolies

The primary aim of technology companies has always been to establish monopolistic positions, which then allow vast profits to be generated. Originally the strategy was pursued by operating system vendors, such as Microsoft, but it also became the goal for internet platforms and services. This has been seen with everything from online auction sites, through social gaming, apartment rentals, ride sharing, email, search, online advertising, cloud services, SaaS business systems, social networks and much more. Venture capitalists fund entrepreneurial startups they believe can execute well and capture enough user relationships and data to create compounding network effects that make competition almost impossible within their respective fields, but in recent years the system has been coming undone. The challenge is that the largest players in Big Tech have hijacked sufficient user relationships that creating new internet services is becoming more and more difficult, and consolidation is making the situation worse as the largest monopolies buy up the smaller monopolies.

The problem arises from the way internet services share their user relationships, data and functionality via APIs on the Programmable Web. In recent years, many opportunities have involved building on the APIs provided by Big Tech. For example, Zynga became the largest social games company primarily by publishing via Facebook, but one day Facebook changed the rules and within 3 months 85% of Zynga’s $15 billion value had been lost. More recently, LinkedIn had been allowing thousands of startups to query its database of professional profiles and incorporate them into their own services, but when it was purchased by Microsoft it revoked API access for all but a few fellow Big Tech players, such as Salesforce, causing widespread damage. These are examples of “platform risk” at play. In 2019, when Mark Zuckerberg, the CEO of Facebook, rejected a meeting request from the CEO of Tinder, the world’s biggest dating service, he said “I don’t think he’s that relevant. He probably just wants to make sure we won’t turn off their API”. This kind of thing has become the norm, and even the smaller tech monopolies are worried.

Nowadays, most venture capitalists will not invest in startups creating services that depend on the APIs of Big Tech, even if they are otherwise exciting propositions, greatly limiting opportunity, competition and innovation, which will harm all of us. The Internet Computer addresses this by providing technology that supports the creation of a new kind of “open internet service” that runs as part of the fabric of the Internet without an owner. These can provide better guarantees to users about how their data is processed, but an equally important superpower is that they can create “permanent” APIs that are guaranteed never to be revoked or degraded (because they cannot be, whatever the service decides). Thus, for example, an open equivalent to LinkedIn might be created that provides an API other internet services can use without risk to incorporate the professional profiles it hosts, laying the foundations for powerful “mutualized network effects” that can help it out-compete the monopolistic LinkedIn: since it can guarantee the availability of the API it shares in perpetuity, thousands of new services can safely build on top and extend its functionality, driving the value of its core services and the professional profiles it hosts, while also encouraging those services to refer new users and to adopt the open LinkedIn as their own user repository. A key purpose of the Internet Computer is to power the reengineering of key internet services in open form, in a way that completely reverses the monopolistic incentive to hijack data, driving the formation of a far more dynamic, collaborative, richer and ultimately more successful internet ecosystem.

How do I know if my device supports Internet Identity?

Internet Identity takes advantage of the Web Authentication (WebAuthn) API to provide secure cryptographic authentication. While WebAuthn is supported by several of the most popular browsers, it is not supported by all. If you have any issues, we recommend you do the following:

  1. Check if your browser supports WebAuthn
  2. If your browser is supported, you may have encountered a bug. Please let us know by filing a ticket to help us resolve the issue: https://support.dfinity.org/hc/en-us/requests/new

To see a walkthrough of Internet Identity watch this demo: https://www.youtube.com/watch?v=oxEr8UzGeBo

What if I lose my phone, can I still use my Internet Identity?

If your Internet Identity is tied to only one device and you lose that device, you will be locked out. As a best practice, we recommend adding multiple devices to your Internet Identity.

How do I add more devices to my Internet Identity?

To add more devices under your existing Internet Identity, please see the guide here.

I can’t log back into my NNS apps, what should I do?

This may be a simple issue the DFINITY Foundation can help with. Please let us know by filing a ticket here: https://support.dfinity.org/hc/en-us/requests/new

My users do not have WebAuthn-supported devices, can I still build apps for them on the IC?

Since the Internet Computer is a general computing environment, you can still build more traditional login systems (username/password) for your app. Several of our community developers are building authentication models for the Internet Computer that do not use WebAuthn.

Where can I learn about the Internet Computer consensus layer?

How many nodes are in each subnet on the Internet Computer?

The number of nodes per subnet on the Internet Computer is a dynamic number, continuing to expand as more nodes come online. Refer to the Internet Computer Dashboard for more information.

At Genesis, the NNS subnet launched with 28 nodes, and application subnets launched with 7 nodes each. These are initial numbers, chosen so that more subnets can be tested as nodes come online.

The NNS governance system (which is governed by the votes of neuron holders) decides how large a given subnet on the Internet Computer will be. Subnets do not need to be limited to 7 or 28 nodes; these are just the sizes in the initial implementation of the Internet Computer. The DFINITY Foundation has successfully tested larger arrangements.

The continuous goal of the Internet Computer Protocol is to maximize node ownership independence as well as geographic variance.

One of the Internet Computer Protocol's key features is resumability. This means that it is relatively trivial for a node to leave or join the network without affecting the liveness of a subnet or taking a long time. This is important for the success of the Internet Computer, since some blockchain networks shrink in node count as it becomes harder and harder to fully participate. Resumability also matters for performance, allowing a node to resume quickly without downloading the entire state. For more info, read about catch-up packages as part of Chain Key Technology.

Is there anything the Internet Computer Protocol is doing or anything about the technology that makes it an environmentally sustainable option?

The Internet Computer's sustainability impact compared to other blockchains:

The IC's consensus algorithm uses a voting mechanism that does not waste power computing hashes of artificial difficulty; instead, that power is devoted to the real computation of user programs that do useful work.

We expect to see some real-world numbers of power usage soon, but the IC network is much more energy efficient than Proof-of-Work-based blockchains.

The Internet Computer's sustainability impact compared to classical distributed systems:

The IC has overhead due to:

  1. Replication (identical instances of each canister are replicated across multiple nodes).
  2. IC Protocol (including P2P gossip, consensus, DKG, certification).

The overhead is not dissimilar to that of any distributed system that involves replication. But on a subnet with a reasonable workload, the majority of resources (CPU, memory) would be used by actual canister code execution.

Are all blocks stored permanently on the Internet Computer?

They are not. We do not rely on historical blocks for verification purposes. Each replica only stores enough blocks to keep the network healthy (e.g., to help other replicas catch up), and will delete old ones when they are no longer needed.

This is also a consequence of the finalization algorithm. Once an input block is finalized, the new state can be computed deterministically, and only the latest canister state needs to be kept. Older blocks and older states are not that useful.

Is there room for further reduction of latency with an updated protocol?

The exact latency of a subnet depends on a number of factors, including the number of replicas, network topology, and real-world latency between them. Block time is not exactly equivalent to latency, in terms of when you send an ingress message and when you receive a reply. It takes time for a replica to receive your message, and it will only include it in the next block. Then it takes time to finalize and execute. A client (e.g., a browser) also polls to see whether the computation result is ready, which takes additional time. There is still room for continuous optimization, but as far as the consensus protocol is concerned, it will be difficult to do significantly better.

Where is the code for random beacon and DKG?

  • The random beacon is created in this GitHub repo.
  • You can find the DKG algorithm implementation in this GitHub repo.
  • You can find the main code responsible for orchestrating the DKG (sending, receiving, and validating DKG messages, and making sure that they make it into the blockchain) in this GitHub repo.

Is there a mechanism to prevent splits and hardforks?

We have designed an extensive upgrade protocol for the Internet Computer that is controlled by the NNS. Honest node operators (i.e., data centers) can be sure that their nodes will follow this protocol if they have done a correct installation. The upgrade protocol can also cover OS upgrades in addition to upgrading the replica software. Issues are resolved through NNS voting, and once approved, upgrades are automatically rolled out.

As for hard forks, while nothing prevents individuals from starting a new Internet Computer by themselves, they would be unable to obtain the same key material that drives the NNS network on the Internet Computer, so they would have to choose a different public key. Furthermore, it would be next to impossible for them to hard fork with the identical state of the Internet Computer, because there is no way to obtain such data, even for node operators. Data privacy is taken very seriously.

What is the maximum number of "transactions per second" you estimate the Internet Computer can handle as it currently exists, and what do you estimate it will be able to handle a year from now?

The Internet Computer can scale capacity without any bounds simply by adding more node machines to the network and creating more subnets. There is no limit to how high the TPS can go.

Is there any type of global consensus mechanism? It seems that each subnet is an island of consensus, and doesn't really inherit security from other subnets.

Correct, there is no explicit global consensus, only in the subnets. This is what allows the network to scale out. Subnets can always be added to increase the capacity of the Internet Computer.

How does latency increase as subnet replication increases?

The block rate goes down a bit with higher replication factors because a higher number of replicas are needed to send messages for the block to be agreed upon, and the Distributed Key Generation (DKG) work is more computationally expensive. This shouldn't be very significant yet, but once there are so many replicas that they can't all talk to each other and need to forward messages, the message latency will go up, and this will add a bit to the round time (should be linear in the "hops" of communication in the subnet).

How does throughput decrease as subnet replication increases?

This is similar to how the block rate decreases.

Will cross-canister or cross-subnet calls ever be atomic?

Cross-subnet calls cannot be, because the subnets are "islands of consensus." Cross-canister calls within a subnet theoretically could be, but this would make the parallel execution of different canisters more difficult, so they are not really atomic.

What is the key innovation here? How or why is the Internet Computer better than Ethereum with rollups and sharding or similar protocols?

The fact that the subnets are islands of consensus is the main innovation. With the chain keys, subnets can securely communicate with each other and form a single Internet Computer, but because there is no need for a "global" consensus, the network can scale out without any bounds effectively by adding subnets.

Why is the random beacon necessary or useful for low-replication subnets?

The random beacon is useful in small subnets to randomly rank the block makers. This means that even if some replicas are malicious, they cannot always be the top-ranked block maker, so we'll get honestly generated blocks most of the time. With regard to the replication factor: we can always create subnets with different replication factors, so you can choose the replication factor you want and run your canisters only on such subnets.

Are there any plans on obfuscating or hiding the canisters from node operators?

On our roadmap, we have plans to employ trusted execution environments (e.g., SGX/SEV) to get both attestation guarantees (that the replicas have not been tampered with, e.g., by the node operator) and privacy guarantees (namely that the state of the replicas/canisters is accessed only through their public interfaces, so nothing should be leaked besides the results of the computation).

How is the now() system call deterministic if it is based on system time?

Our consensus implementation also reaches agreement on (an approximation of) the current time, which is passed on to the deterministic processing layer. If a canister asks for the current time, it would be answered deterministically using the time included in the agreed-upon block.
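
As a minimal Motoko sketch of this (the actor name is illustrative), Time.now() from the base library returns the consensus-agreed time in nanoseconds, so every replica executing the same update call observes the same value:

    import Time "mo:base/Time";

    actor Clock {
      // Returns the block time agreed through consensus, not the local clock.
      public func whatTimeIsIt() : async Time.Time {
        Time.now()
      };
    }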

By which rule are block makers initially chosen? Can each replica potentially be a block maker in a subnet?

A replica has to join a subnet first before it can become a block maker or a notary. Once a replica successfully joins, the selection is just a random shuffle of all participants using the random beacon. So each round, a new random beacon is generated and then all replicas can calculate their ranks.
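
Conceptually, the ranking is a deterministic shuffle keyed by the beacon, so every honest replica computes the same order. The Motoko sketch below is purely illustrative: it assumes textual replica identities and uses Text.hash as a stand-in, whereas the real protocol derives ranks from threshold-cryptographic beacon output:

    import Array "mo:base/Array";
    import Nat32 "mo:base/Nat32";
    import Order "mo:base/Order";
    import Text "mo:base/Text";

    module {
      // Rank replicas for one round: hash each identity together with the
      // round's beacon value, then sort by that hash.
      public func rank(beacon : Text, replicas : [Text]) : [Text] {
        let keyed = Array.map<Text, (Nat32, Text)>(
          replicas,
          func (r : Text) : (Nat32, Text) { (Text.hash(beacon # r), r) }
        );
        let sorted = Array.sort<(Nat32, Text)>(
          keyed,
          func (a : (Nat32, Text), b : (Nat32, Text)) : Order.Order {
            Nat32.compare(a.0, b.0)
          }
        );
        Array.map<(Nat32, Text), Text>(sorted, func (p : (Nat32, Text)) : Text { p.1 })
      };
    }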

Do rank-(n>1) block makers start broadcasting blocks immediately, or only after they have timed out without receiving a block from a rank-(n-1) block maker?

The rank-0 block maker gets priority. Other block makers will wait a brief moment before proposing; honest replicas also gossip the rank-0 block immediately, while blocks of other ranks are gossiped only after some delay.

Does this mean subnet membership cannot change dynamically during a consensus round?

Exactly. Membership can only be changed at fixed intervals because we require a round of decentralized key generation for new members (as well as existing members) to generate keys to use in a future interval. This interval is adjustable for each subnet through the NNS.

How are identities assigned to replicas prior to starting the consensus algorithm, and how are public keys distributed to all nodes?

A new replica first has to submit its identity to the NNS to join a subnet. The NNS has to vote on and approve the request. The data is stored in the registry canister, and all replicas monitor the registry, and hence learn of replicas being added to or removed from subnets, including their public keys.

So is this consensus algorithm only used to agree on inputs to computations, or is it also used to agree on outputs of computations?

For inputs, they are blocks. For outputs, they are certifications. Both inputs and outputs have to go through consensus, otherwise we risk divergence.

Is the limit of potentially malicious replicas (<33%) theoretically proven under your assumptions, or is there room to improve in future versions?

Yes, the protocol is proven secure only with n >= 3f+1 replicas, where n is the total number of replicas and f is the number of faulty/malicious replicas.
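
In concrete numbers, that standard Byzantine fault tolerance bound works out as:

    \[
      n \ge 3f + 1
      \quad\Longleftrightarrow\quad
      f \le \left\lfloor \frac{n - 1}{3} \right\rfloor
    \]

So, for example, a 7-node subnet tolerates up to 2 faulty replicas, and a 28-node subnet up to 9.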

The Internet Computer is formed by an advanced decentralized protocol called ICP (Internet Computer Protocol) that independent data centers around the world run to combine the power of individual computers into an unstoppable, seamless universe where internet-native software is hosted and run with the same security guarantees as smart contracts. It is integrated with Internet standards such as DNS, and can serve user experiences directly to Web browsers and smartphones.

To participate in the open network that creates the Internet Computer, a data center must first get a DCID (Data Center Identity). This is obtained through an algorithmic governance system that is part of the Internet Computer itself, called the Network Nervous System, which plays a roughly analogous role to ICANN (a not-for-profit organization that helps govern the Internet) but has broader functionality and drives many aspects of network management. Once a data center has obtained a DCID, it can make node machines, which are essentially server computers with a standardized specification, available to the network. When the Internet Computer needs more capacity, the Network Nervous System inducts these machines, forming them into “subnetworks” that can host software canisters. Special software running on node machines interacts with other nodes using ICP and applies advanced computer science to allow their compute capacity to be securely added to the Internet Computer.

Data centers receive remuneration in the form of tokens, which are also used by canisters to create cycles that power computation and storage, in moderated amounts such that approximately constant returns are received per node over time. Participation in the Network Nervous System algorithmic governance system is possible by locking tokens to create “neurons”, which provides voters with voting rewards and enables automatic voting in a liquid democracy style scheme. ICP has been designed so that the Internet Computer is managed in an open way and can grow without bound so that it may eventually incorporate millions of node machines.

Today, people create enterprise IT systems and internet services using a legacy IT stack that is intrinsically insecure. They must first arrange hosting, typically using cloud services provided by Big Tech, and then create assemblies from their own software and configurations of legacy components such as web and database servers, which they must regularly patch to fix insecurities, and typically protect by firewalls, VPNs and other anti-intrusion systems. The problem is that these systems contain so many pathways that they cannot be made secure, and the security of the legacy IT stack cannot be fixed. The Internet Computer provides a completely different kind of environment that cannot be hacked or stopped, which does not depend upon Big Tech vendors, where systems can be created without legacy components such as databases, and software logic always runs as designed against the expected data.

Essentially this is possible because the Internet Computer is created by independent data centers around the world running a mathematically secure protocol to combine their computational capacity using advanced computer science, which creates a new kind of environment for building and hosting enterprise IT systems and internet services using a new kind of software that does not rely upon today’s legacy IT stack at all.

Today, if you want to build a new internet service, very often you will need to incorporate user data, user relationships or functionality contained and corralled within monopolistic established services owned and run by Big Tech. Essentially, this means that to function correctly, new services you or others build often depend heavily upon access to APIs (Application Programming Interfaces) provided by companies such as Microsoft, Google, Amazon and Facebook. Sadly, the history of the past decade shows that building on these APIs is like building on sand, which is why we are seeing fewer and fewer interesting new startup services being launched and succeeding.

The Internet Computer provides a means to create internet services in a totally new way using “autonomous software”. These internet services can provide guarantees to users about how their data is processed, but even more importantly, can provide guarantees about the availability of the APIs they provide to other services that wish to incorporate the data or functionality they share. Ultimately, this will enable entrepreneurs and communities of developers to create a new breed of service that benefits from better network effects. These network effects will be mutualized and allow vast numbers of services to be built out collaboratively because they can build on and extend each other without trust, creating a far more diverse, dynamic, fertile and eventually dominant ecosystem.

Dominic Williams is the Founder, President and Chief Scientist of the DFINITY Foundation and Internet Computer project. He hails from the UK but moved to Palo Alto, California in 2012, and continues to travel among the various operations DFINITY maintains around the globe. He has a background as a technology entrepreneur, distributed systems engineer and theoretician. His last major venture was an MMO game that grew to millions of users, which ran on a novel horizontally scalable game server technology he created. More recently, he has distinguished himself through contributions to distributed computing and crypto theory, with works including Threshold Relay, Probabilistic Slot Consensus, Validation Towers and Trees, Puzzle Towers, The 3 E's of Sybil Resistance, and the core architecture of the Internet Computer and the framework it provides for hosted software.

If you are an entrepreneur, developer, or enterprise looking for better ways to build software, the Internet Computer presents an unprecedented, unfolding opportunity. All of this will accelerate because the Internet Computer is expanding the functionality of the public internet — from a global network to also becoming a public compute platform. Many will use the Internet Computer to reduce complexity when building websites and enterprise systems without the need for legacy technologies such as cloud services, databases, and firewalls. Entrepreneurs and developers will take advantage of the emerging “open internet boom” to create pan-industry platforms, DeFi apps, and open internet services by building reimagined software directly on the public internet. Through the use of canisters, users can simply deploy code, store data, and process computation through the Internet Computer Protocol (ICP). The Internet Computer is infinitely scalable as well as decentralized, allowing for applications to exist on the Internet in perpetuity. This enables users to build and deploy open internet services that can “run on the network itself rather than on servers owned by Facebook, Google or Amazon.” -MIT Technology Review

Motoko compiles to WebAssembly (Wasm) because of Wasm's versatility for operating across the internet. The Internet Computer leverages Wasm canisters to store data and execute code directly on the internet. Wasm is a binary instruction format for a stack-based virtual machine. It is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.

The Wasm stack machine is designed to be encoded in a size- and load-time-efficient binary format. Wasm aims to execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms. Wasm describes a memory-safe, sandboxed execution environment that may even be implemented inside existing JavaScript virtual machines. When embedded in the web, Wasm will enforce the same-origin and permissions security policies of the browser.

Wasm is designed to maintain the versionless, feature-tested, and backwards-compatible nature of the web. Wasm modules will be able to call into and out of the JavaScript context and access browser functionality through the same web APIs accessible from JavaScript. Wasm also supports non-web embeddings. Source: WebAssembly.org.

For these reasons, as well as its compatibility across browsers, Motoko compiles down to Wasm. Additionally, any language that can compile down to Wasm can be used to write software for deployment on the Internet Computer.

The Internet Computer supports both open canisters and private canisters, which enable the owners of a canister to make their technology open or to keep it private. Private canisters will require more cycles to maintain than open canisters.

The DFINITY SDK Documentation is the main resource for all developer needs.

The DFINITY Foundation is continuously building tools to support developers and improve their experience in deploying technology to the Internet Computer.

The DFINITY Canister SDK is a software development kit that developers can use to create applications for the Internet Computer.

The Rust CDK is a canister development kit that developers can use to program software for the Internet Computer in Rust.

The Vessel Package Manager is a simple package manager for the Motoko programming language.

The Motoko VS Code Extension provides syntax highlighting for the Motoko programming language in VS Code.

When you write source code for an application that runs on the Internet Computer, you compile the source code into a WebAssembly module. To deploy the WebAssembly module that contains your program on the Internet Computer, the program is executed inside of a conceptual computational unit called a software canister. Once deployed, end-users can interact with the software canister by accessing the entry point functions you have defined for that canister through a front-end client such as a browser.
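
As a minimal sketch of such an entry point (the actor and function names are hypothetical, not from any particular sample project), a Motoko canister might expose a single shared function:

    // Illustrative canister with one entry-point function that a front end can call.
    actor Greeter {
      public query func greet(name : Text) : async Text {
        "Hello, " # name # "!"
      };
    }

Once deployed (for example with dfx deploy), the greet entry point can be invoked from a browser front end through an agent library, or exercised directly with dfx canister call.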

Nodes for the Internet Computer are currently deployed through independent data centers located across the world that combine their computing power by running the Internet Computer Protocol.

Unlike projects like Bitcoin or Ethereum where a machine of any caliber can run a “node,” the Internet Computer has a minimum standard of technical specifications for each machine to ensure higher speeds, lower latency, and greater reliability. Additionally, unlike proprietary cloud and compute systems that are run by an individual organization, the Internet Computer is decentralized and governed by the Network Nervous System (NNS).

In order to deploy a node to the Internet Computer, the node machine must meet a minimum set of specifications to ensure a minimum level of computational efficiency across the Internet Computer. As the NNS scales the Internet Computer’s capacity, more and more decentralized nodes running the Internet Computer Protocol will continue to be added, allowing capacity to scale infinitely through subnets.

Terminology

What is the Network Nervous System?

The NNS is the autonomous software that governs the Internet Computer and manages everything from economics to network structure. The NNS is hosted within the network itself, and is part of the system of protocols that securely weaves together the compute capacity of node machines to create the Internet Computer blockchain network, allowing the network to be autonomous and adaptive. The NNS acts as an autonomous “master” blockchain with a public key to validate all ICP transactions.

The NNS will put proposals to a vote, such as whether to expand the network by adding subnets or onboarding new node machines, which are then voted upon by holders of ICP utility tokens who have locked up their tokens.

The NNS also combines nodes from independent data centers to create subnets, which are used to host canisters, an evolution of smart contracts. The NNS continues to create subnets based on the capacity demands of hosting canisters, and the ability of subnets to connect to one another allows the Internet Computer to scale indefinitely.

What is Internet Identity?

Internet Identity is an innovation researched and developed by the DFINITY Foundation. Internet Identity allows you to use a single login across the open web. Any service built on the Internet Computer can take advantage of this cryptographic innovation, which allows users to log in without the need for a password, seed phrase or any personally identifying information. Instead, you create authentication profiles that use the authentication methods you choose, such as facial recognition from a smartphone, your computer unlock password, or a security key.

The Internet Identity service enables you to authenticate securely and anonymously when you access applications that use the service as an authentication method. A different identity is created for each application you log in to, and you will be able to use all of your registered devices or authentication methods to log in to the same account.

For FAQs on using Internet Identity and the NNS app (or other applications that use Internet Identity), please refer to this FAQ: https://dfinity.org/faq/internet-identity-and-nns-app-faqs

What is a subnet?

A subnet is a special kind of blockchain within the Internet Computer network that can seamlessly integrate with other blockchains to increase capacity. The Network Nervous System combines nodes from independent data centers to create subnets, which are used to host canisters, an evolution of smart contracts. Each subnet runs its own blockchain, and canisters run on-chain in such a way that they can transparently call canisters on other subnets/chains. In fact, there is no difference between calling a shared canister function on the same subnet or on a different subnet; it is just a function call within a seamless universe of secure code.

The subnets are transparent to canister code and users of canisters — developers and users just interact with the Internet Computer, while the ICP protocol securely and transparently replicates data and computation across subnet nodes in the background. The system is more secure than a traditional blockchain because the precise decentralization of data and computation is controlled by the protocol rather than left to chance. Pooling (as in traditional PoW or PoS), or the creation of validator nodes with vast amounts of stake that create more blocks (as in PoS), is not possible. Direct interaction with and between subnets is enabled by "Chain Keys," which are made possible by novel advanced cryptography developed by DFINITY.
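
As a sketch of how this looks in practice (the Greeter interface and its greet method are hypothetical names), a Motoko canister calls a shared function on another canister with an ordinary await, and the code is identical whether the callee is on the same subnet or a different one:

    import Principal "mo:base/Principal";

    actor Caller {
      // The interface we expect the other canister to expose.
      type Greeter = actor { greet : (Text) -> async Text };

      public func sayHelloVia(greeterId : Principal) : async Text {
        // Bind the remote canister by its principal and call it like a function.
        let greeter : Greeter = actor (Principal.toText(greeterId));
        await greeter.greet("Internet Computer")
      };
    }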

What is Chain Key Technology?

Chain Key Technology gives the Internet Computer a single 48-byte public chain key and renders old blocks unnecessary for validation, which enables the Internet Computer to operate at web speed.

Chain Key Technology allows the Internet Computer to finalize transactions that update smart contract state in 1–2 seconds. This is an enormous improvement, but still insufficient alone to allow blockchain to provide competitive user experiences, which require that responses be provided to users in milliseconds. The Internet Computer solves this by splitting smart contract function execution into two types, known as “update calls” and “query calls.” Update calls are those we are already familiar with, and take 1–2 seconds to finalize their execution, while query calls work differently because any changes they make to state (in this case, the memory pages of canisters) are discarded after they run. Essentially, this allows query calls to execute in milliseconds.
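
A toy Motoko counter (names are illustrative) shows the two call types side by side:

    actor Counter {
      var count : Nat = 0;

      // Update call: goes through consensus, finalizes in 1-2 seconds,
      // and its change to `count` is persisted.
      public func increment() : async Nat {
        count += 1;
        count
      };

      // Query call: answered in milliseconds; any state changes made while
      // it runs are discarded afterwards, so it is effectively read-only.
      public query func get() : async Nat {
        count
      };
    }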

One of the primary complaints about blockchain technology is its lack of speed, and one of the greatest preconceptions is that it is intentionally slow. The roots of such thinking date back to the first blockchain, Bitcoin, which typically takes roughly 30–60 minutes to finalize transactions. Developed a few years later, Ethereum used an updated form of Proof-of-Work to speed things up, but it is still far from achieving the web speed that’s needed to deliver compelling online user experiences.

The Internet Computer is the third great innovation in blockchain. The first innovation was Bitcoin, which introduced cryptocurrency, and is now playing the role of digital gold. The second innovation was Ethereum, which introduced smart contracts, which are now powering the DeFi revolution. This third major innovation, the Internet Computer, introduces the first true blockchain computer, which will enable the world to reimagine how we build everything — using a blockchain with seamless, infinite capacity. Inside the Internet Computer Protocol, Chain Key Technology makes this all possible — a combination of dozens of computer science breakthroughs such as Random Beacon, Probabilistic Slot Consensus, Advanced Consensus Mechanism, Network Nervous System, subnets, etc., that allows the Internet Computer to be the first blockchain computer that runs at web speed with unbounded capacity.

What are Canisters?

Canisters are computational units that include both program and state. A software canister is similar to a container in that both are deployed as a software unit that contains compiled code and dependencies for an application or service.

Containerization allows for applications to be decoupled from the environment, allowing for easy and reliable deployment. The canister differs from a container, however, in that it also stores information about the current software state with a record of preceding events and user interactions.

While a containerized application might include information about the state of the environment in which the application runs, a software canister is able to persist a record of state changes that resulted from an application’s functions being used.

This concept of a canister consisting of both program and state is an important one because when a canister function is invoked by sending a message to one of its entry points, there are only two types of calls: non-committing query calls and committing update calls.

What is an ICP utility token?

The ICP utility token (formerly known as “DFN”) is the primary mechanism that allows the broader internet community to participate in the governance of the Internet Computer network. ICP can also be dissolved and converted into cycles, which are then used to run websites and applications as well as power computations on the Internet Computer via canisters.

What is a Neuron?

Neurons allow users to “time-lock” their ICP utility tokens in order to be rewarded with voting power, which can be used to vote on proposals submitted to the system that can be automatically executed. Neurons can be made to follow each other in various ways such that they vote automatically, representing a form of liquid democracy. Users can also dissolve their neurons to release the tokens inside the neuron and convert them into cycles to power computation.

What are Cycles?

Cycles act as the computational resource to execute actions on the Internet Computer. In general, all canisters consume resources in the form of CPU cycles for execution, bandwidth for routing messages, and memory for persisted data. Canisters maintain an account balance to pay for the cost of communication, computation, and the storage consumed by their applications. The cost of computation is expressed in units of cycles.

Cycles reflect the real costs of operations, including resources such as physical hardware, rack space, energy, storage devices, and bandwidth. In simple terms, a cycle unit represents the cost of executing a single WebAssembly instruction. Programs must be able to pay for complete execution (all or nothing), but the cost associated with cycles will make efficient programs cost-effective.

By setting limits on how many cycles a canister can consume, the platform can prevent malicious code from draining resources. The relative stability of operational costs also makes it easier to predict the cycles that are required to process a million messages, for example.

Cycles can be compared to “gas” for Ethereum and “credits” for AWS, but have much farther reaching uses with regard to data, compute, and execution. Their design also inherently accounts for technological pitfalls, such as rapidly rising costs of usage.
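
As a small illustration (using the Motoko base library's ExperimentalCycles module; the actor and function names are made up), a canister can inspect its own remaining cycle balance from code:

    import Cycles "mo:base/ExperimentalCycles";

    actor CycleWatcher {
      // Reports how many cycles this canister currently holds.
      public query func cyclesRemaining() : async Nat {
        Cycles.balance()
      };
    }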

What is Motoko?

Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, that is designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language that is highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, it is expected there will be many different SDKs that target the Internet Computer.

The Motoko canister SDK can be found at http://sdk.dfinity.org

What is an open internet service?

An open internet service is a technology deployed to the Internet Computer that is “ownerless” — meaning the code itself is deployed to the very fabric of the internet, allowing the service to operate autonomously based on user support. When a developer wishes to create an “Open Internet Service”, they sign over control of its canisters to tokenized public governance canisters that thereafter take responsibility for upgrades and configuration.

These proposals are then governed by the NNS, enabling users of the application or service to vote and make decisions about its code, policies, and functions. An open internet service can mark its shared functions (i.e. its APIs) as “permanent”. In such cases, canister upgrades cannot overwrite the shared functions, and if an upgrade degrades the functionality they provide, constructively revoking the API, the Internet Computer’s own governance system can make progressive modifications to the governance system of the internet service until the expected functionality is restored.

The purpose of permanent APIs is to allow developers to build services that rely on data or functionality supplied by other services without platform risk. Platform risk is a problem created by dependence on a public or private technology company’s API or data for the success of one’s product. For example, when LinkedIn revoked access to its API from thousands of companies that depended on it, they lost their entire data and authentication source overnight. Open internet services operate without an owner, providing continued access to their APIs for the entrepreneurs who depend on them, without fear of being cut off.