Liberating Ethereum Performance: The Innovation Road Beyond the EVM Bottleneck
About the performance of the Ethereum Virtual Machine (EVM)
Every operation on the Ethereum mainnet costs a certain amount of gas. If all the computation required to run even a basic application were put on-chain, either the app would grind to a halt or its users would go bankrupt.
This gave birth to L2s. Optimistic rollups introduce a sequencer that bundles a batch of transactions before committing them to mainnet. This lets applications inherit Ethereum's security while giving users a better experience: transactions are confirmed faster and fees are cheaper. Yet even though operations become cheaper, these rollups still use the native EVM as the execution layer. ZK rollups such as Scroll and Polygon zkEVM similarly use (or will use) EVM-based zk circuits, generating a zk proof on their prover for every transaction or for large batches of transactions. While this lets developers build "full-chain" applications, is it still efficient and cost-effective for running high-performance applications?
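To make the batching idea concrete, here is a minimal Python sketch of a sequencer that collects transactions off-chain and posts only a single commitment per batch to L1. It illustrates the pattern rather than any real rollup's code; the `Sequencer` class and its methods are hypothetical names.

```python
import hashlib
import json

def h(data: bytes) -> str:
    """SHA-256 hex digest, standing in for an on-chain commitment."""
    return hashlib.sha256(data).hexdigest()

class Sequencer:
    """Toy sequencer: collects L2 transactions and commits a whole batch at once."""
    def __init__(self):
        self.pending = []

    def submit_tx(self, tx: dict):
        # Users get fast off-chain inclusion; nothing touches L1 yet.
        self.pending.append(tx)

    def commit_batch(self) -> str:
        # Only a single commitment (here a hash) is posted to L1,
        # amortizing the fixed cost over every transaction in the batch.
        batch = json.dumps(self.pending, sort_keys=True).encode()
        commitment = h(batch)
        self.pending = []
        return commitment  # in a real rollup this would go into L1 calldata or a blob

seq = Sequencer()
seq.submit_tx({"from": "alice", "to": "bob", "value": 10})
seq.submit_tx({"from": "carol", "to": "dave", "value": 3})
print("batch commitment posted to L1:", seq.commit_batch())
```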
**What are these high-performance applications?**
Games, on-chain order books, Web3 social, machine learning, genome modeling, and so on come to mind first. All of these are computationally intensive and expensive to run on L2. Another problem with the EVM is that its computation speed and efficiency lag behind other current systems, such as the SVM (Sealevel Virtual Machine).
While an L3 EVM can make computation cheaper, the structure of the EVM itself may not be the best way to perform heavy computation, because it cannot execute operations in parallel. Moreover, every time a new layer is built on top, preserving the spirit of decentralization requires new infrastructure (a new network of nodes): either the existing providers must scale up, an entirely new set of node providers (individuals or companies) must contribute resources, or both.
Therefore, whenever a more advanced solution is built, the existing infrastructure has to be upgraded or a new layer built on top of it. To solve this problem, we need a post-quantum secure, decentralized, trustless, high-performance computing infrastructure that can genuinely and efficiently perform the computation decentralized applications need.
Alt-L1s like Solana, Sui, and Aptos are capable of parallel execution, but given market sentiment, lack of liquidity, and a shortage of developers, they are unlikely to challenge Ethereum. Trust in them is lacking, and the moat Ethereum has built through network effects is formidable. So far, no "ETH/EVM killer" exists. The question, then, is why all computation should be on-chain at all. Is there an execution system that is equally trustless and decentralized? This is what a DCompute system can achieve.
A DCompute infrastructure must be decentralized, post-quantum secure, and trustless. It does not need to be, and arguably should not be, blockchain/distributed-ledger technology itself, but verifying computation results, correct state transitions, and finality remains essential, just as it is on EVM chains. In this way, decentralized, trustless, secure computation can be moved off-chain while the security and immutability of the network are preserved.
One issue we largely set aside here is data availability. This post does not focus on it, because solutions such as Celestia and EigenDA are already moving in that direction.
1. Only computation is outsourced
(Source: Off-chaining Models and Approaches to Off-chain Computations, Jacob Eberhardt & Jonathan Heiss)
2. Computation and data availability are both outsourced
(Source: Off-chaining Models and Approaches to Off-chain Computations, Jacob Eberhardt & Jonathan Heiss)
Looking at Type 1, zk-rollups are already doing this, but they are either limited by the EVM or require developers to learn an entirely new language/instruction set. The ideal solution should be efficient, effective (in cost and resources), decentralized, private, and verifiable. ZK proofs can be generated on AWS servers, but that is not decentralized. Solutions like Nillion and Nexus are trying to solve general-purpose computation in a decentralized way, but without ZK proofs these solutions are not verifiable.
Type 2 combines an off-chain computation model with a separate data availability layer, but the computation still needs to be verified on-chain.
Let's take a look at the different decentralized computing models available today, ranging from those that are not yet fully trustless to those that may become completely trustless.
Alternative Computation Models
Ethereum Outsourced Computing Ecosystem Map (Source: IOSG Ventures)
- Secure Enclave Computation / Trusted Execution Environments
A TEE (Trusted Execution Environment) is like a special box inside a computer or smartphone. It has its own lock and key, and only certain programs (called trusted applications) can access it. When these trusted applications run inside the TEE, they are protected from other programs and even from the operating system itself.
It's like a secret hideout that only a few special friends can enter. The most common example of a TEE is the secure enclave found on the devices we already use, such as Apple's T1 chip and Intel's SGX, which run critical operations such as FaceID inside the device.
Because a TEE is an isolated system, its attestation process is assumed to be uncompromisable, but only under its trust assumptions. Think of it as a security door you believe is safe because Intel or Apple built it, while there are enough capable attackers in the world (hackers and other computers alike) to breach that door. TEEs are also not "post-quantum secure," meaning a quantum computer with unlimited resources could break their security. As computers rapidly become more powerful, we must design long-term computing systems and cryptographic schemes with post-quantum security in mind.
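As a rough illustration of the trust flow described above, the sketch below mimics enclave attestation with a symmetric vendor key: the "enclave" signs its result, and a verifier accepts it only because it trusts that key. Real TEEs (SGX, Apple's Secure Enclave) use hardware-backed asymmetric keys and far more elaborate attestation protocols; every name here is hypothetical.

```python
import hmac
import hashlib

# Hypothetical: the device vendor provisions this key into the enclave at manufacture time.
VENDOR_KEY = b"vendor-provisioned-secret"

def enclave_run_and_attest(program: bytes, inputs: bytes) -> tuple[bytes, bytes]:
    """Inside the enclave: run the trusted application and attest to the result."""
    result = hashlib.sha256(program + inputs).digest()  # stand-in for the real computation
    attestation = hmac.new(VENDOR_KEY, program + result, hashlib.sha256).digest()
    return result, attestation

def verify_attestation(program: bytes, result: bytes, attestation: bytes) -> bool:
    """Outside the enclave: trust rests entirely on the vendor key not being broken."""
    expected = hmac.new(VENDOR_KEY, program + result, hashlib.sha256).digest()
    return hmac.compare_digest(expected, attestation)

res, att = enclave_run_and_attest(b"face-id-model", b"sensor-data")
print(verify_attestation(b"face-id-model", res, att))  # True, if and only if you trust the vendor key
```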
- Secure Multi-Party Computation (SMPC)
SMPC (Secure Multi-Party Computation) is another well-known computing approach in the blockchain industry. The general workflow of an SMPC network consists of three parts: splitting the inputs into secret shares, computing on those shares across job nodes, and recombining the partial results into the final output.
Imagine a car production line: building and manufacturing the car's components (engine, doors, mirrors) is outsourced to original equipment manufacturers (the job nodes), and an assembly line then puts all the components together into the finished car (the result node).
Secret sharing is essential to a privacy-preserving decentralized computing model. It prevents any single party from obtaining the full "secret" (the input, in this case) and maliciously producing an erroneous output. SMPC is probably one of the simplest and safest decentralized approaches. While a fully decentralized model does not yet exist, it is logically possible.
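Here is a minimal sketch of additive secret sharing over a prime field, showing how job nodes can compute on shares without any single node ever seeing the input. This is the generic textbook construction, not Sharemind's or any particular network's protocol.

```python
import secrets

P = 2**61 - 1  # a prime modulus; all arithmetic happens in this field

def share(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Each "job node" holds one share of each input and can add them locally;
# the "result node" only ever sees shares of the output, then recombines them.
x_shares = share(123, 3)
y_shares = share(456, 3)
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
print(reconstruct(sum_shares))  # 579, computed without any node seeing 123 or 456
```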
MPC providers such as Sharemind offer MPC infrastructure for computation, but the provider itself remains centralized. How do we ensure privacy, and how do we ensure the network (or Sharemind) is not malicious? This is where zk proofs and zk-verifiable computation come in.
- Nil Message Compute (NMC)
NMC is a new distributed computing approach developed by the Nillion team. It is an upgrade of MPC in which nodes do not need to communicate by exchanging intermediate results. To achieve this, it uses a cryptographic primitive called One-Time Masking (OTM), which masks a secret with a series of random numbers called blinding factors, similar to a one-time pad. OTM is designed to provide correctness efficiently, meaning NMC nodes do not need to exchange any messages to perform a computation, so NMC avoids the scalability problems of SMPC.
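The masking primitive itself is simple; the sketch below blinds a secret with a one-time random factor, so the masked value alone reveals nothing. This is only the blinding idea, not Nillion's actual OTM/NMC protocol.

```python
import secrets

P = 2**61 - 1  # prime field modulus

def mask(secret: int) -> tuple[int, int]:
    """Mask a secret with a fresh random blinding factor (used only once)."""
    blinding = secrets.randbelow(P)
    masked = (secret + blinding) % P
    return masked, blinding

def unmask(masked: int, blinding: int) -> int:
    return (masked - blinding) % P

masked, blinding = mask(42)
# The masked value on its own is statistically independent of the secret,
# so a node holding it learns nothing and never needs to message other nodes.
print(unmask(masked, blinding))  # 42
```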
- Zero-Knowledge Verifiable Computing
ZK verifiable computation means generating a zero-knowledge proof, for a given function and set of inputs, that the computation was executed correctly. Although ZK verifiable computation is new, it is already a critical part of the Ethereum network's scaling roadmap.
ZK proofs come in various implementation forms (as shown in the figure below, following the summary in the paper "Off-chaining Models and Approaches to Off-chain Computations"):
(Source: IOSG Ventures; Off-chaining Models and Approaches to Off-chain Computations, Jacob Eberhardt & Jonathan Heiss)
Now that we have a basic understanding of how zk proofs are implemented, what does it take to use ZK proofs to verify computation?
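Before answering that, it helps to see the basic asymmetry that makes verifiable computation attractive: checking a result can be far cheaper than recomputing it. As a purely illustrative, non-ZK example (real ZK systems add succinctness and zero-knowledge on top), here is Freivalds' classic probabilistic check for outsourced matrix multiplication.

```python
import random

def freivalds_check(A, B, C, rounds: int = 10) -> bool:
    """Probabilistically check that A @ B == C in O(n^2) work per round,
    instead of recomputing the O(n^3) product."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught a wrong result
    return True  # a wrong result slips through with probability <= 2**-rounds

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]                         # correct product
print(freivalds_check(A, B, C))                  # True
print(freivalds_check(A, B, [[0, 0], [0, 0]]))   # False (with overwhelming probability)
```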
The Developer Dilemma and the Proof Efficiency Dilemma
Another point worth making is that the barrier to building circuits remains very high. Learning Solidity is already not easy for developers; asking them to learn Circom to build circuits, or a specialized language such as Cairo to build zk-apps, makes broad adoption look like a distant prospect.
Judging from developer statistics, retrofitting the Web3 environment to be friendlier to existing developers seems more sustainable than pulling developers into an entirely new Web3 development environment.
If ZK is the future of Web3, and Web3 applications need to be built with existing developer skills, then ZK circuits must be designed to support computations expressed in languages like Java or Rust, from which proofs can then be generated.
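To see why circuit languages feel alien, the sketch below contrasts an ordinary function with the flattened, one-multiplication-per-constraint form that circuit compilers ultimately target. The encoding is deliberately simplified and hypothetical; it only illustrates the gap that general-purpose zkVMs try to hide from developers.

```python
# Ordinary code a Java/Rust/Python developer would write:
def compute(x: int) -> int:
    return x * x * x + x + 5     # y = x^3 + x + 5

# Roughly what a circuit compiler must flatten it into: one multiplication per constraint.
# Each multiplicative constraint says left * right == output over the witness values.
def check_witness(x: int, y: int) -> bool:
    t1 = x * x              # constraint 1: x * x == t1
    t2 = t1 * x             # constraint 2: t1 * x == t2
    return t2 + x + 5 == y  # constraint 3 (linear): t2 + x + 5 == y

x = 3
y = compute(x)
print(y, check_witness(x, y))  # 35 True

# A general-purpose zkVM hides this flattening step, so developers can keep writing
# in an ordinary language while proofs are generated underneath.
```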
Such solutions do exist, and two teams come to mind: RiscZero and Lurk Labs. Both share a very similar vision: letting developers build zk-apps without going through a steep learning curve.
It is still early days for Lurk Labs, but the team has been working on the project for a long time. They focus on generating Nova proofs via general-purpose circuits. Nova was proposed by Abhiram Kothapalli of Carnegie Mellon University, Srinath Setty of Microsoft Research, and Ioanna Tzialla of New York University. Compared with other SNARK systems, Nova has particular advantages for incrementally verifiable computation (IVC), a concept in computer science and cryptography that allows a computation to be verified without re-executing it from scratch. Proofs need to be optimized for IVC when computations are long and complex.
(Source: IOSG Ventures)
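Below is a toy sketch of the incremental structure behind IVC: each step advances the state and folds the transition into a constant-size accumulator, so the latest accumulator summarizes the whole history. A hash chain stands in for Nova's folding step purely as an analogy; a real folded instance would additionally let a verifier check the latest step without replaying anything.

```python
import hashlib

def F(state: int) -> int:
    """The step function being iterated, e.g. one step of a long computation."""
    return (state * state + 1) % (2**61 - 1)

def step(state: int, acc: bytes) -> tuple[int, bytes]:
    """Advance one step and fold the transition into a running accumulator.
    In real IVC (e.g. Nova), `acc` would be a folded proof instance certifying
    *all* previous transitions; here a hash chain only illustrates the shape."""
    new_state = F(state)
    new_acc = hashlib.sha256(acc + str(state).encode() + str(new_state).encode()).digest()
    return new_state, new_acc

state, acc = 7, b"genesis"
for _ in range(1000):
    state, acc = step(state, acc)
print(state, acc.hex()[:16])  # final state plus a constant-size accumulator
```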
Nova proofs are not "out of the box" like other proof systems; Nova itself is just a folding scheme, and developers still need a proof system to generate proofs. That is why Lurk Labs built Lurk, a Lisp dialect. Because Lisp is a low-level, minimal language, it is easy to generate proofs for it on general-purpose circuits and to translate between it and Java, which should help Lurk Labs win support from the 17.4 million Java developers. Translation from other common languages such as Python is also supported.
All in all, Nova looks like an excellent native proof system. Its drawback is that proof size grows linearly with the size of the computation, but Nova proofs still have room for further compression.
RiscZero, by contrast, builds on STARKs. STARK proof sizes grow only very slowly with the size of the computation, which makes them better suited to verifying very large computations. To further improve the developer experience, the RiscZero team also released the Bonsai Network, a distributed computing network whose results are verified with proofs generated by RiscZero. In simplified form, Bonsai works as described below.
The beauty of the Bonsai network design is that a computation can be initiated, verified, and have its output consumed entirely on-chain. This all sounds utopian, but STARK proofs bring a problem of their own: verification costs are high.
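A hedged sketch of that request/prove/verify round trip: an on-chain request, off-chain proving, and an on-chain verification step (the expensive part for STARK-sized proofs). Function names like `request_computation`, `prove_offchain`, and `verify_onchain` are hypothetical, and the "proof" is a placeholder hash rather than a real STARK receipt.

```python
from dataclasses import dataclass
import hashlib

@dataclass
class Receipt:
    """Stand-in for a proof receipt: the claimed output plus a proof blob."""
    output: int
    proof: bytes

def request_computation(program_id: str, x: int) -> dict:
    # Step 1 (on-chain): a contract records the request and emits an event.
    return {"program": program_id, "input": x}

def prove_offchain(req: dict) -> Receipt:
    # Step 2 (off-chain): the proving network runs the program inside a zkVM
    # and produces a proof of correct execution (mocked here with a hash).
    output = req["input"] ** 2 + 1
    proof = hashlib.sha256(f"{req['program']}:{req['input']}:{output}".encode()).digest()
    return Receipt(output, proof)

def verify_onchain(req: dict, receipt: Receipt) -> bool:
    # Step 3 (on-chain): a verifier contract checks the proof; this is the step
    # whose gas cost the article flags as expensive for large proofs.
    expected = hashlib.sha256(
        f"{req['program']}:{req['input']}:{receipt.output}".encode()).digest()
    return expected == receipt.proof

req = request_computation("square_plus_one", 11)
receipt = prove_offchain(req)
print(receipt.output, verify_onchain(req, receipt))  # 122 True
```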
Nova proofs, on the other hand, seem well suited to repeated computations (the folding scheme amortizes cost) and to small computations, which may make Lurk a good solution for verifying ML inference.
**Who is the winner?**
(Source: IOSG Ventures)
Some zk-SNARK systems require a trusted setup during an initial phase to generate a set of parameters. The trust assumption is that this setup is performed honestly, without malicious behavior or tampering; if it is compromised, invalid proofs could be created.
STARK proofs assume the security of low-degree tests for verifying the low-degree properties of polynomials, and they assume that hash functions behave like random oracles.
Proper implementation of both systems is also a security assumption.
The security of an SMPC network relies on the following:
- SMPC participants may include "honest but curious" parties who try to learn the underlying information by communicating with other nodes.
- The security of the SMPC network rests on the assumption that participants execute the protocol correctly and do not deliberately introduce errors or malicious behavior.
OTM is a multi-party computation protocol designed to protect participant privacy: participants never disclose their input data during the computation. As a result, "honest but curious" participants are not a concern, since nodes cannot learn the underlying information by communicating with one another.
Is there a clear winner? We don't know. But each method has its own advantages. While NMC looks like an obvious upgrade to SMPC, the network is not yet live or battle-tested.
The benefit of ZK verifiable computation is that it is secure and privacy-preserving, but it has no built-in secret sharing. The asymmetry between proof generation and verification makes it an ideal model for verifiably outsourced computation. In a system that relies purely on zk-proof computation, a single machine (or node) must be very powerful to handle the heavy workload. To enable load sharing and balancing while preserving privacy, secret sharing is required. In that case, a system like SMPC or NMC can be combined with a zk proof generator like Lurk or RiscZero to create a powerful distributed, verifiable, outsourced computing infrastructure, as sketched below.
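A toy end-to-end sketch of that combination: inputs are secret-shared across job nodes, each node computes on its share and attaches a placeholder validity proof, and the results are recombined only after every per-node proof checks out. All names are hypothetical, and the "proofs" stand in for real ZK receipts, which would also avoid revealing the share to the verifier.

```python
import secrets
import hashlib

P = 2**61 - 1

def share(secret: int, n: int) -> list[int]:
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def node_compute(share_value: int, scalar: int) -> tuple[int, bytes]:
    """Each job node multiplies its share by a public scalar and returns a
    placeholder proof of correct execution (a real system would attach a ZK receipt)."""
    result = (share_value * scalar) % P
    proof = hashlib.sha256(f"{share_value}:{scalar}:{result}".encode()).digest()
    return result, proof

def verify_node(share_value: int, scalar: int, result: int, proof: bytes) -> bool:
    # With a real ZK proof, the verifier would not need to see the share at all.
    expected = hashlib.sha256(f"{share_value}:{scalar}:{result}".encode()).digest()
    return expected == proof

secret, scalar, n = 1234, 7, 3
shares = share(secret, n)
outputs = [node_compute(s, scalar) for s in shares]
assert all(verify_node(s, scalar, r, p) for s, (r, p) in zip(shares, outputs))
print(sum(r for r, _ in outputs) % P)  # 8638 == 1234 * 7, with no node ever seeing 1234
```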
This matters all the more because today's MPC/SMPC networks are centralized. The largest MPC provider right now is Sharemind, and a ZK verification layer on top of it could prove useful. The economics of a decentralized MPC network have not yet been worked out. In theory, NMC is an upgrade of the MPC model, but we have not yet seen it succeed.
In the race between ZK proof schemes, there may be no winner-take-all outcome. Each proof method is optimized for a particular type of computation, and no single one fits every model. There are many kinds of computational tasks, and the outcome also depends on the trade-offs developers make with each proof system. In the author's view, both STARK-based and SNARK-based systems, along with their future optimizations, have a place in the future of ZK.