Frequently Asked Questions

General

The DFINITY Foundation is a not-for-profit scientific research organization based in Zurich, Switzerland, that oversees research centers in Palo Alto, San Francisco, and Zurich, as well as teams in Japan, Germany, the UK, and across the United States. The Foundation’s mission is to build, promote, and maintain the Internet Computer. Its accomplished team — which includes many of the world’s top cryptographers and distributed systems and programming language experts, with nearly 100,000 academic citations and 200 patents collectively — is committed to building advanced experimental technologies to improve the public internet.

The Internet Computer extends the functionality of the public Internet so that it can host backend software, transforming it into a global compute platform.

Using the Internet Computer, developers can create websites, enterprise IT systems and internet services by installing their code directly on the public Internet, dispensing with server computers and commercial cloud services.

Enabling systems to be built directly onto the Internet is undoubtedly cool, but the Internet Computer is about more than this alone. It seeks to address serious long-standing problems that bedevil IT, including system security, to provide a means to reverse the ever-increasing monopolization of internet services, user relationships and data, and to restore the Internet to its permissionless, innovative and creative roots.

For example, the Internet Computer hosts its native software within an unstoppable tamperproof environment, making it possible to create systems that don't depend on firewalls, backup systems and failover for their security. This framework also makes interoperability between different systems as simple as a function call, and automatically persists memory, removing the need for traditional files and enabling organizations to dispense with standalone infrastructure such as database servers. These features enable internet software systems to be completely self-contained and address today's security challenges while dramatically reducing the exorbitant complexity and cost of IT.

The increasing monopolization of internet services is addressed through support for "autonomous software", which runs without an owner. This allows "open" versions of mainstream internet services to be created, such as social media websites or SaaS business services, that run as part of the fabric of the Internet itself. These new open services can provide users with far superior guarantees about how their data is processed. Moreover, they can share their user data and functionality with other internet services through permanent APIs that can never be revoked, eliminating "platform risk" and allowing the ecosystem to be extended dynamically and collaboratively. The resulting mutualized network effects enable open services to compete with Big Tech monopolies, creating tremendous new opportunities for entrepreneurs and investors.

The Internet Computer is formed by an advanced decentralized protocol called ICP (Internet Computer Protocol) that independent data centers around the world run to combine the power of individual computers into an unstoppable seamless universe where internet-native software is hosted and run with the same security guarantees as smart contracts. It is integrated with Internet standards such as DNS, and can serve user experiences directly to Web browsers and smartphones.

The Internet Computer makes the world better by providing solutions to the critical problems facing tech today. Many of these critical problems arise from issues with the current structure of the public internet. Here are some key ways the Internet Computer makes the public internet better:

Ending The Captive Customer Trap

The Internet Computer makes it possible to build websites, enterprise systems and internet services by uploading software into a seamless open universe where it runs securely and can easily interact with users and other software. By contrast, builders using the legacy IT stack must roll their own platforms by selecting from a multitude of commercial cloud services, cloud tools, proprietary and open source variations of operating systems, components such as databases and firewalls, virtualization technologies, software development platforms and more. The resulting complexity, the highly custom nature of the systems assembled, the special developer knowledge needed to maintain them, and the associated vendor relationships make it expensive and difficult to migrate and adapt legacy systems as needs change. The effect is compounded by legacy stack vendors strategizing to create captive customers, for example by encouraging dependence on custom features and using restrictive licensing. In the future, developers building on the Internet Computer will marvel at how anything used to get built at all, and jealously guard their freedom.

Powering Systems That Are Secure By Default

Using the legacy stack it is almost impossible to build and maintain systems that are truly secure. Once systems have been built to provide desired functionality, additional hardening work must be performed to make them safe, which involves protecting them from the outside world using firewalls, and careful configuration and administration of their components. A single mistake by just one member of the IT team, a malicious insider, or a failure to apply software updates in time can result in hackers jumping firewalls and creating havoc. Consequently, the legacy stack is behind a rolling global meltdown in security, with ever-increasing hacks, data thefts, and incidents where entire infrastructures cease to function after ransomware encrypts server machines. By contrast, the Internet Computer provides a tamperproof environment for unstoppable software that does not depend on firewalls and hardening for security, in which installed software systems are secure by default and run with the same security guarantees provided to smart contracts. In the future, when systems get hacked or experience downtime, people will fairly ask, "why didn't they build on the Internet Computer?"

Fixing Debilitating IT Complexity, Costs and Delays

The legacy stack is always evolving, but the problem of overarching complexity in IT isn't going away, and some would say it is worsening. Complexity drives costs, slows down system development, and of course is a contributing factor in security woes that cost yet more money to mitigate. Today, 85% of IT costs at a typical Fortune 500 company reside with IT Operations (i.e. people), who often have to spend more than 90% of their time dealing with system complexity that is unrelated to the functionality they are trying to provide, such as configuring infrastructure components so they will talk to each other. Solving for complexity can deliver huge dividends by reducing costs and time to market. The Internet Computer has dramatically reimagined software in a way that addresses the challenge. For example, when developers write code that describes data (such as the profile of a person, say), that data is automatically and securely persisted within the memory pages hosting their software, removing the need for the developer to marshal it in and out of databases or even think much about how persistence works at all (the feature is called "orthogonal persistence"). Without the need for legacy components such as databases, and working with reimagined software, Internet Computer developers focus on coding up "what" they want to achieve, rather than the traditional complexities of "how" systems are constructed and interoperate, driving incredible efficiencies.
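To make the idea concrete, here is a minimal, hypothetical sketch of orthogonal persistence in Motoko (the Registry actor and its function names are illustrative, not a prescribed API). The profile data lives in ordinary actor state, and the Internet Computer persists it between calls automatically, with no database or file code:

```motoko
import Text "mo:base/Text";
import HashMap "mo:base/HashMap";

// A hypothetical profile registry. The `profiles` map is ordinary actor
// state: the Internet Computer persists it automatically between calls,
// so no database, file, or serialization code is needed.
actor Registry {
  let profiles = HashMap.HashMap<Text, Text>(8, Text.equal, Text.hash);

  // Store or overwrite a person's profile.
  public func setProfile(name : Text, bio : Text) : async () {
    profiles.put(name, bio);
  };

  // Look up a profile; returns null if none exists.
  public query func getProfile(name : Text) : async ?Text {
    return profiles.get(name);
  };
};
```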

Powering “Open Internet Services” And Ending Big Tech Monopolies

The primary aim of technology companies has always been to establish monopolistic positions, which then allow vast profits to be generated. Originally the strategy was pursued by operating system vendors, such as Microsoft, but it also became the goal for internet platforms and services. This has been seen with everything from online auction sites, through social gaming, apartment rentals, ride sharing, email, search, online advertising, cloud services, SaaS business systems, social networks and much more. Venture capitalists fund entrepreneurial startups they believe can execute well and capture enough user relationships and data to create compounding network effects that make competition almost impossible within their respective fields, but in recent years the system has been coming undone. The challenge is that the largest players in Big Tech have hijacked sufficient user relationships that creating new internet services is becoming more and more difficult, and consolidation is making the situation worse as the largest monopolies buy up the smaller monopolies.

The problem arises from the way internet services share their user relationships, data and functionality via APIs on the Programmable Web. In recent years, many opportunities have involved building on the APIs provided by Big Tech. For example, Zynga became the largest social games company primarily by publishing via Facebook, but one day Facebook changed the rules, and within 3 months 85% of Zynga's $15 billion value had been lost. More recently, LinkedIn had been allowing thousands of startups to query its database of professional profiles and incorporate them into their own services, but when it was purchased by Microsoft it revoked API access for all but a few fellow Big Tech players, such as Salesforce, causing widespread damage. These are examples of "platform risk" at play. In 2019, when Mark Zuckerberg, the CEO of Facebook, rejected a meeting request from the CEO of Tinder, the world's biggest dating service, he said, "I don't think he's that relevant. He probably just wants to make sure we won't turn off their API." This kind of thing has become the norm, and even the smaller tech monopolies are worried.

Nowadays, most venture capitalists will not invest in startups creating services that depend on the APIs of Big Tech, even if they are otherwise exciting propositions, which greatly limits opportunity, competition and innovation, harming all of us. The Internet Computer addresses this by providing technology that supports the creation of a new kind of "open internet service" that runs as part of the fabric of the Internet without an owner. These can provide better guarantees to users about how their data is processed, but an equally important superpower is that they can create "permanent" APIs that are guaranteed never to be revoked or degraded (because they cannot be, whatever their creators decide). Thus, for example, an open equivalent to LinkedIn might be created that provides an API that other internet services can use, without risk, to incorporate the professional profiles it hosts, laying the foundations for powerful "mutualized network effects" that can help it out-compete the monopolistic LinkedIn. Since the availability of its shared API is guaranteed in perpetuity, thousands of new services can safely build on top of it and extend its functionality, driving the value of its core services and the professional profiles it hosts, while also encouraging those services to refer new users to it and adopt the open LinkedIn as their own user repository. A key purpose of the Internet Computer is to power the reengineering of key Internet services in open form, in a way that completely reverses the monopolistic incentive to hijack data, driving the formation of a far more dynamic, collaborative, richer and ultimately more successful internet ecosystem.

To participate in the open network that creates the Internet Computer, a data center must first get a DCID (Data Center Identity). This is obtained through an algorithmic governance system that is part of the Internet Computer itself, called the Network Nervous System, which plays a roughly analogous role to ICANN (a not-for-profit organization that helps govern the Internet) but has broader functionality and drives many aspects of network management. Once a data center has obtained a DCID, it can make node machines, which are essentially server computers with a standardized specification, available to the network. When the Internet Computer needs more capacity, the Network Nervous System inducts these machines, forming them into “subnetworks” that can host software canisters. Special software running on node machines interacts with other nodes using ICP and applies advanced computer science to allow their compute capacity to be securely added to the Internet Computer.

Data centers receive remuneration in the form of tokens, which are also used by canisters to create cycles that power computation and storage, in moderated amounts such that approximately constant returns are received per node over time. Participation in the Network Nervous System algorithmic governance system is possible by locking tokens to create "neurons", which provides those voting with voting rewards and enables automatic voting in a liquid democracy style scheme. ICP has been designed so that the Internet Computer is managed in an open way and can grow without bound, so that it may eventually incorporate millions of node machines.

Today, people create enterprise IT systems and internet services using a legacy IT stack that is intrinsically insecure. They must first arrange hosting, typically using cloud services provided by Big Tech, and then create assemblies from their own software and configurations of legacy components such as web and database servers, which they must regularly patch to fix vulnerabilities, and typically protect with firewalls, VPNs and other anti-intrusion systems. The problem is that these systems contain so many attack pathways that they cannot be made secure, and the security of the legacy IT stack cannot be fixed. The Internet Computer provides a completely different kind of environment that cannot be hacked or stopped, which does not depend upon Big Tech vendors, where systems can be created without legacy components such as databases, and where software logic always runs as designed against the expected data.

Essentially this is possible because the Internet Computer is created by independent data centers around the world running a mathematically secure protocol to combine their computational capacity using advanced computer science, which creates a new kind of environment for building and hosting enterprise IT systems and internet services using a new kind of software that does not rely upon today's legacy IT stack at all.

Today, if you want to build a new internet service, very often you will need to incorporate user data, user relationships or functionality contained and corralled within monopolistic established services owned and run by Big Tech. Essentially, this means that to function correctly, new services you or others build often depend heavily upon access to APIs (Application Programming Interfaces) provided by companies such as Microsoft, Google, Amazon and Facebook. Sadly, the history of the past decade shows that building on these APIs is like building on sand, which is why we are seeing fewer and fewer interesting new startup services being launched and succeeding.

The Internet Computer provides a means to create internet services in a totally new way using “autonomous software”. These internet services can provide guarantees to users about how their data is processed, but even more importantly, can provide guarantees about the availability of the APIs they provide to other services that wish to incorporate the data or functionality they share. Ultimately, this will enable entrepreneurs and communities of developers to create a new breed of service that benefits from better network effects. These network effects will be mutualized and allow vast numbers of services to be built out collaboratively because they can build on and extend each other without trust, creating a far more diverse, dynamic, fertile and eventually dominant ecosystem.

Dominic Williams is the Founder, President and Chief Scientist of the DFINITY Foundation and Internet Computer project. He hails from the UK but moved to Palo Alto, California in 2012, and continues to travel among the various operations DFINITY maintains around the globe. He has a background as a technology entrepreneur, distributed systems engineer and theoretician. His last major venture was an MMO game that grew to millions of users, which ran on a novel horizontally scalable game server technology he created. More recently, he has distinguished himself through contributions to distributed computing and crypto theory, with works including Threshold Relay, Probabilistic Slot Consensus, Validation Towers and Trees, Puzzle Towers, The 3 E's of Sybil Resistance, and the core architecture of the Internet Computer and the framework it provides for hosted software.

If you are an entrepreneur, developer, or enterprise looking for better ways to build software, the Internet Computer presents an unprecedented, unfolding opportunity. All of this will accelerate because the Internet Computer is expanding the functionality of the public internet — from a global network to also becoming a public compute platform. Many will use the Internet Computer to reduce complexity when building websites and enterprise systems without the need for legacy technologies such as cloud services, databases, and firewalls. Entrepreneurs and developers will take advantage of the emerging "open internet boom" to create pan-industry platforms, DeFi apps, and open internet services by building reimagined software directly on the public internet. Through the use of canisters, users can simply deploy code, store data, and process computation through the Internet Computer Protocol (ICP). The Internet Computer is infinitely scalable as well as decentralized, allowing for applications to exist on the Internet in perpetuity. This enables users to build and deploy open internet services that can "run on the network itself rather than on servers owned by Facebook, Google or Amazon." -MIT Technology Review

Motoko compiles to WebAssembly (Wasm) because of Wasm's versatility across the internet. The Internet Computer leverages Wasm canisters to store data and execute code directly on the internet. Wasm is a binary instruction format for a stack-based virtual machine. It is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.

The Wasm stack machine is designed to be encoded in a size- and load-time-efficient binary format. Wasm aims to execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms. Wasm describes a memory-safe, sandboxed execution environment that may even be implemented inside existing JavaScript virtual machines. When embedded in the web, Wasm will enforce the same-origin and permissions security policies of the browser.

Wasm is designed to maintain the versionless, feature-tested, and backwards-compatible nature of the web. Wasm modules will be able to call into and out of the JavaScript context and access browser functionality through the same web APIs accessible from JavaScript. Wasm also supports non-web embeddings. Source: WebAssembly.org.

For these reasons as well as its compatibility across browsers, Motoko compiles down to Wasm. Additionally, any language that can compile down to Wasm can then be deployed on the Internet Computer.

The Internet Computer supports both open canisters and private canisters, which enable the owners of a canister to make their technology open or to keep it private. Private canisters will require more cycles to maintain than open canisters.

The DFINITY SDK Documentation is the main resource for all developer needs.

The DFINITY Foundation is continuously building tools to support developers and improve their experience in deploying technology to the Internet Computer.

The DFINITY Canister SDK is a software development kit that developers can use to create applications for the Internet Computer.

The Rust CDK is a canister development kit that developers can use to program software for the Internet Computer in Rust.

The Vessel Package Manager is a simple package manager for the Motoko programming language.

The Motoko VS Code Extension provides syntax highlighting for the Motoko programming language in VS Code.

When you write source code for an application that runs on the Internet Computer, you compile the source code into a WebAssembly module. To deploy the WebAssembly module that contains your program on the Internet Computer, the program is executed inside of a conceptual computational unit called a software canister. Once deployed, end-users can interact with the software canister by accessing the entry point functions you have defined for that canister through a front-end client such as a browser.
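As a simple illustration, the following minimal Motoko canister (the actor and function names are illustrative) exposes a single entry point, greet, that a front-end client could call after deployment:

```motoko
// A minimal canister with one entry point. After deployment, a front-end
// client (such as a browser) can call `greet` and receive the returned text.
actor HelloWorld {
  public query func greet(name : Text) : async Text {
    return "Hello, " # name # "!";
  };
};
```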

Nodes for the Internet Computer are currently deployed through independent data centers located across the world that combine their computing power by running the Internet Computer Protocol.

Unlike projects like Bitcoin or Ethereum where a machine of any caliber can run a “node,” the Internet Computer has a minimum standard of technical specifications for each machine to ensure higher speeds, lower latency, and greater reliability. Additionally, unlike proprietary cloud and compute systems that are run by an individual organization, the Internet Computer is decentralized and governed by the Network Nervous System (NNS).

In order to deploy a node to the Internet Computer, the node machine must meet a minimum set of specifications to ensure a minimum level of computational efficiency across the Internet Computer. As the NNS scales the Internet Computer’s capacity, more and more decentralized nodes running the Internet Computer Protocol will continue to be added, allowing capacity to scale infinitely through subnets.

Terminology

What is the Network Nervous System?

The NNS is the autonomous software that governs the Internet Computer and manages everything from economics to network structure. The NNS is hosted within the network itself, and is part of the system of protocols that securely weaves together the compute capacity of node machines to create the Internet Computer blockchain network, allowing the network to be autonomous and adaptive. The NNS acts as an autonomous “master” blockchain with a public key to validate all ICP transactions.

The NNS will put proposals to a vote, such as whether to expand the network by adding subnets or onboarding new node machines, which are then voted upon by holders of ICP utility tokens who have locked up their tokens.

The NNS also combines nodes from independent data centers to create subnets, which are used to host canisters, an evolution of smart contracts. The NNS continues to create subnets as the capacity demands of hosting canisters grow, and because subnets can connect to one another, the Internet Computer can scale indefinitely.

What is a subnet?

A subnet is a special kind of blockchain within the Internet Computer network that can seamlessly integrate with other blockchains to increase capacity. The Network Nervous System combines nodes from independent data centers to create subnets, which are used to host canisters, an evolution of smart contracts. Each subnet runs its own blockchain, and canisters run on-chain in such a way that they can transparently call canisters on other subnets/chains. In fact, there is no difference between calling a shared canister function on the same subnet or a different subnet, it's just a function call within a seamless universe of secure code.

The subnets are transparent to canister code and users of canisters: developers and users just interact with the Internet Computer, while the ICP protocol securely and transparently replicates data and computation across subnet nodes in the background. The system is more secure than a traditional blockchain because the precise decentralization of data and computation is controlled by the protocol rather than left to chance. Pooling (as in traditional PoW and PoS networks), and the creation of validator nodes that produce more blocks because they hold vast amounts of stake (as in PoS), are not possible. Direct interaction between subnets, and with subnets, is enabled by "Chain Keys," which are made possible by novel advanced cryptography developed by DFINITY.
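The following hypothetical Motoko sketch illustrates the point (the RemoteCounter interface and function names are assumptions for illustration): the call is written identically whether the target canister runs on the same subnet or a different one.

```motoko
actor Caller {
  // A hypothetical interface for a counter canister that may be hosted
  // on this subnet or any other.
  type RemoteCounter = actor {
    increment : () -> async Nat;
  };

  // The call below looks the same regardless of which subnet hosts the
  // target canister; the ICP protocol routes it transparently.
  public func bump(counter : RemoteCounter) : async Nat {
    return await counter.increment();
  };
};
```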

What is Chain Key Technology?

Chain Key Technology gives the Internet Computer a single 48-byte public chain key that renders old blocks unnecessary, enabling the Internet Computer to operate at web speed.

Chain Key Technology allows the Internet Computer to finalize transactions that update smart contract state in 1–2 seconds. This is an enormous improvement, but still insufficient alone to allow blockchain to provide competitive user experiences, which require that responses be provided to users in milliseconds. The Internet Computer solves this by splitting smart contract function execution into two types, known as “update calls” and “query calls.” Update calls are those we are already familiar with, and take 1–2 seconds to finalize their execution, while query calls work differently because any changes they make to state (in this case, the memory pages of canisters) are discarded after they run. Essentially, this allows query calls to execute in milliseconds.
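A minimal, hypothetical Motoko sketch of the two call types (the Counter actor and its functions are illustrative): increment is an update call whose committed state change takes 1–2 seconds to finalize, while peek is a query call that returns in milliseconds.

```motoko
actor Counter {
  var count : Nat = 0;

  // Update call: finalized in 1-2 seconds; the new value of `count`
  // is committed to the canister's state.
  public func increment() : async Nat {
    count += 1;
    return count;
  };

  // Query call: returns in milliseconds; any state change made during
  // its execution would be discarded afterwards.
  public query func peek() : async Nat {
    return count;
  };
};
```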

One of the primary complaints about blockchain technology is its lack of speed, and one of the greatest misconceptions is that it is intentionally slow. The roots of such thinking date back to the first blockchain, Bitcoin, which typically takes roughly 30–60 minutes to finalize transactions. Developed a few years later, Ethereum used an updated form of Proof-of-Work to speed things up, but it is still far from achieving the web speed that's needed to deliver compelling online user experiences.

The Internet Computer is the third great innovation in blockchain. The first innovation was Bitcoin, which introduced cryptocurrency, and is now playing the role of digital gold. The second innovation was Ethereum, which introduced smart contracts, which are now powering the DeFi revolution. This third major innovation, the Internet Computer, introduces the first true blockchain computer, which will enable the world to reimagine how we build everything — using a blockchain with seamless, infinite capacity. Inside the Internet Computer Protocol, Chain Key Technology makes this all possible — a combination of dozens of computer science breakthroughs such as Random Beacon, Probabilistic Slot Consensus, Advanced Consensus Mechanism, Network Nervous System, subnets, etc., that allows the Internet Computer to be the first blockchain computer that runs at web speed with unbounded capacity.

What are Canisters?

Canisters are computational units that include both program and state. A software canister is similar to a container in that both are deployed as a software unit that contains compiled code and dependencies for an application or service.

Containerization allows for applications to be decoupled from the environment, allowing for easy and reliable deployment. The canister differs from a container, however, in that it also stores information about the current software state with a record of preceding events and user interactions.

While a containerized application might include information about the state of the environment in which the application runs, a software canister is able to persist a record of state changes that resulted from an application’s functions being used.

This concept of a canister consisting of both program and state is an important one because when a canister function is invoked by sending a message to one of its entry points, there are only two types of calls: non-committing query calls and committing update calls.

What is an ICP utility token?

The ICP utility token (formerly known as “DFN”) is the primary mechanism that allows the broader internet community to participate in the governance of the Internet Computer network. ICP can also be dissolved and converted into cycles, which are then used to run websites and applications as well as power computations on the Internet Computer via canisters.

What is a Neuron?

Neurons allow users to “time-lock” their ICP utility tokens in order to be rewarded with voting power, which can be used to vote on proposals submitted to the system that can be automatically executed. Neurons can be made to follow each other in various ways such that they vote automatically, representing a form of liquid democracy. Users can also dissolve their neurons to release the tokens inside the neuron and convert them into cycles to power computation.

What are Cycles?

Cycles act as the computational resource to execute actions on the Internet Computer. In general, all canisters consume resources in the form of CPU cycles for execution, bandwidth for routing messages, and memory for persisted data. Canisters maintain an account balance to pay for the cost of communication, computation, and the storage consumed by their applications. The cost of computation is expressed in units of cycles.

Cycles reflect the real costs of operations, including resources such as physical hardware, rack space, energy, storage devices, and bandwidth. In simple terms, a cycle unit represents the cost of executing a single WebAssembly instruction. Programs must be able to pay for complete execution (all or nothing), but the cost associated with cycles will make efficient programs cost-effective.

By setting limits on how many cycles a canister can consume, the platform can prevent malicious code from draining resources. The relative stability of operational costs also makes it easier to predict the cycles that are required to process a million messages, for example.
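As a small, hedged sketch, a canister can inspect its own cycle balance from Motoko via the base library's ExperimentalCycles module (the actor name is illustrative):

```motoko
import Cycles "mo:base/ExperimentalCycles";

actor CycleMeter {
  // Report how many cycles this canister currently holds to pay for
  // its computation, storage, and messaging.
  public query func balance() : async Nat {
    return Cycles.balance();
  };
};
```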

Cycles can be compared to "gas" on Ethereum and "credits" on AWS, but they have much farther-reaching uses with regard to data, compute, and execution. Their design also inherently accounts for technological pitfalls, such as rapidly rising costs of usage.

What is Motoko?

Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, that is designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language that is highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, it is expected there will be many different SDKs that target the Internet Computer.

The Motoko canister SDK can be found at http://sdk.dfinity.org