Why Does Web3 Need an Independent Data Availability Layer?

As the data economy matures, people will participate in it broadly and deeply, and everyone will inevitably take part in data storage in some form. With the advent of the Web3 era, most technology fields will gradually upgrade or transform in the coming years. As a key piece of Web3 infrastructure, decentralized storage will be applied to ever more scenarios: the storage networks behind familiar applications such as social data, short videos, live streaming, and smart vehicles will also adopt the decentralized storage model in the future.

Data is the core asset of the Web3 era, and user-owned data is Web3's defining feature. Allowing users to securely own their data, and the assets that data represents, removes many of the concerns ordinary users have about asset security and will help bring the next billion users to Web3. An independent data availability layer will be an integral part of that.

From Decentralized Storage to the Data Availability Layer

In the past, data was stored in the cloud in the traditional centralized way, typically residing entirely on centralized servers. Amazon Web Services (AWS) pioneered cloud storage and remains the world's largest cloud storage provider. Over time, users' demands for personal information security and data storage have kept growing; in particular, after data leaks at several large data operators, the drawbacks of centralized storage began to surface, and the traditional approach could no longer meet market demand. Moreover, as the Web3 era advances and blockchain applications expand, data has diversified and its scale keeps growing. Personal network data now spans more dimensions and carries more value, making data security and data privacy more important and raising the requirements for data storage.

This is where decentralized data storage comes in. Decentralized storage is one of the earliest and most popular pieces of infrastructure in the Web3 space, starting with Filecoin in 2017, and it differs fundamentally from centralized services like AWS. AWS builds and maintains its own data centers with many servers, and users who need storage pay AWS directly. Decentralized storage instead follows the sharing-economy model, aggregating a large number of edge storage devices to provide the service: data actually resides on storage contributed by provider nodes, so the decentralized storage project itself has no control over it. The essential difference from AWS is whether users retain control of their own data; in a system without centralized control, data security is much higher.

Decentralized storage is a storage business model in which files, or sets of files, are split into fragments and distributed across storage space. It matters because it addresses the pain points of Web2's centralized cloud storage, better fits the needs of the big-data era, can store unstructured edge data at lower cost and higher efficiency, and enables a range of emerging technologies. Decentralized storage is therefore a cornerstone of Web3.
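As a minimal sketch of the fragment-and-distribute idea described above (the node names and the content-hash placement scheme below are hypothetical illustrations, not any project's actual protocol):

```python
import hashlib

def fragment(data: bytes, chunk_size: int) -> list[bytes]:
    """Split a file into fixed-size fragments for distribution to storage nodes."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def assign_providers(fragments: list[bytes], providers: list[str]) -> dict[str, list[int]]:
    """Deterministically map each fragment to a provider by hashing its content."""
    placement: dict[str, list[int]] = {p: [] for p in providers}
    for idx, frag in enumerate(fragments):
        digest = int(hashlib.sha256(frag).hexdigest(), 16)
        placement[providers[digest % len(providers)]].append(idx)
    return placement

file_data = b"example payload" * 100
frags = fragment(file_data, 256)
layout = assign_providers(frags, ["nodeA", "nodeB", "nodeC"])
assert b"".join(frags) == file_data  # reassembling all fragments recovers the file
```

No single node holds the whole file, which is the property that removes the centralized operator's control over user data.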

At present there are two common kinds of decentralized storage project. The first combines storage with block production and mining. The problem with this mode is that storing and downloading via the chain slows real-world usage; downloading a single photo can take hours. The second uses one or several nodes as centralized verifiers: data can be stored and downloaded only after those nodes verify it, and if a centralized node is attacked or damaged, stored data can be lost.

Compared with the first kind of project, MEMO's storage stratification mechanism solves the speed problem, bringing storage and download latency down to seconds. Compared with the second kind, MEMO uses the Keeper role to randomly select verification nodes, avoiding centralization and preserving security. Moreover, MEMO's proprietary RAFI technology multiplies its repair capability, yielding greater security, reliability, and availability of storage.

Data Availability (DA) essentially means that nodes that do not participate in consensus need not store all data or track the state of the entire network in real time; such nodes instead need an efficient way to confirm that data is available and accurate. The core of a blockchain is the immutability of its data, and the blockchain guarantees that data is consistent across the network. Consensus nodes tend to be more centralized in order to preserve performance, so other nodes must obtain the consensus-confirmed data through DA. An independent data availability layer removes this single point of failure and maximizes data security.

In addition, Layer2 scaling solutions such as zkRollup also need a data availability layer. As the execution layer, Layer2 uses Layer1 as the consensus layer. Besides posting the resulting state of each transaction batch to Layer1, it must also guarantee the availability of the original transaction data, so that the Layer2 network state can still be reconstructed even if no prover is willing to generate a proof, avoiding the extreme case of user assets being locked in Layer2. However, storing the raw data directly on Layer1 conflicts with Layer1's role as the consensus layer in a modular blockchain design. It is therefore a more reasonable design, and in the long run an inevitable trend, to store the data in a dedicated data availability layer and record only a commitment to it, such as a Merkle root, on the consensus layer.
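The commitment idea can be sketched in a few lines: the full batch of transaction blobs goes to the DA layer, while the consensus layer records only a fixed-size Merkle root over them. This is an illustrative toy (unpadded leaves, last node duplicated on odd levels), not any specific rollup's commitment scheme:

```python
import hashlib

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root over the raw transaction blobs of a batch."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The full batch goes to the DA layer; only this 32-byte root goes to Layer1.
batch = [b"tx1", b"tx2", b"tx3"]
root = merkle_root(batch)
assert len(root) == 32
```

However large the batch, the consensus layer stores a constant 32 bytes, which is why this split is so much cheaper than posting raw data to Layer1.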

Figure 1 shows the generic model of an independent Layer2 data availability layer from Fox Tech. Fox is a zkEVM-based zkRollup project that uses MEMO as a separate data availability layer and adopts this architecture.

Figure 1: General Layer2 independent data availability layer model

Analysis of Celestia as an Independent Data Availability Layer

An independent data availability layer is a public chain, and it is superior to a data availability committee made up of a small group of known parties. If enough committee members' private keys are stolen (as happened in both the Ronin Bridge and Harmony Horizon Bridge attacks), the off-chain data availability becomes unusable, and users might be able to withdraw their funds from Layer2 only after paying a sufficient ransom.

Since an off-chain data availability committee is not secure enough, what if a blockchain were introduced as the trust anchor to guarantee the availability of off-chain data?

What Celestia does is make the data availability layer more decentralized, providing a dedicated DA public chain with its own validator set, block producers, and consensus mechanism to improve security.

Layer2 publishes its transaction data to the Celestia main chain, where Celestia's validators sign the Merkle root of a DA attestation and send it to the DA Bridge Contract on the Ethereum main chain for verification and storage. The Merkle root of the DA attestation thus stands in for, and effectively proves, the availability of all the underlying data. The DA Bridge Contract on Ethereum only needs to verify and store the Merkle root, so the overhead is greatly reduced.

Celestia's fraud proofs are optimistic: as long as no one misbehaves, no fraud proof ever needs to be produced, and the network runs highly efficiently. A light node needs to do nothing beyond receiving the data and reconstructing it according to the encoding; when the whole process goes smoothly, this optimistic design is very efficient.
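The optimistic pattern can be illustrated with a toy state machine: claims are accepted without re-execution by default, and a fraud proof is simply a re-execution that exposes a mismatch. This illustrates the general idea only, not Celestia's actual protocol:

```python
def apply_batch(state: int, batch: list[int]) -> int:
    """Toy state transition: the state is a running sum of batch values."""
    return state + sum(batch)

def optimistic_accept(prev_state: int, batch: list[int], claimed_state: int) -> int:
    """Accept the claimed post-state without re-execution (the happy path)."""
    return claimed_state

def challenge(prev_state: int, batch: list[int], claimed_state: int) -> bool:
    """A challenger re-executes the batch; a mismatch constitutes a fraud proof."""
    return apply_batch(prev_state, batch) != claimed_state

honest = apply_batch(0, [1, 2, 3])            # the correct post-state, 6
assert not challenge(0, [1, 2, 3], honest)    # honest claim: no fraud proof needed
assert challenge(0, [1, 2, 3], 7)             # dishonest claim is provably wrong
```

In the common case nothing beyond acceptance happens; the cost of verification is paid only when someone actually cheats.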

Analysis of MEMO as an Independent Data Availability Layer

MEMO is a new-generation, high-capacity, high-availability enterprise-grade storage network built by aggregating global edge storage devices through its algorithms. Founded in September 2017, the team focuses on decentralized storage. MEMO is a highly secure and reliable large-scale distributed data storage protocol based on blockchain peer-to-peer technology. Unlike one-to-many centralized storage, MEMO enables many-to-many storage operations without a data center. MEMO's main chain primarily stores the smart contracts that constrain all nodes: key operations such as uploading stored data, matching storage nodes, keeping the system running normally, and enforcing the punishment mechanism are all governed by smart contracts.

On the technical side, existing distributed storage systems such as Filecoin, Arweave, and Storj let any computer user connect and rent out unused hard drive space for a fee or tokens. Although all of these are decentralized storage, each has its own characteristics. What sets MEMO apart is its use of erasure codes and data repair technology to strengthen storage, making data more secure and storage and download more efficient, because creating a more purely functional distributed storage system is MEMO's ultimate goal.

MEMO improves the usability of storage and optimizes the provider incentive mechanism. In addition to the User and Provider roles, the Keeper role is introduced to protect nodes from malicious attacks. The system maintains economic balance through the mutual constraints of these roles and can support enterprise-grade commercial storage with high capacity and high availability. It can provide secure, reliable cloud storage services for NFT, GameFi, DeFi, SocialFi, and more, and is compatible with Web2. It is the product of a seamless integration of blockchain and cloud storage.
