In 1998, when Greg Linden applied for a patent for “item-to-item collaborative filtering”, he could not have imagined the social impact it would have. Previously, Amazon had recommended products based on an individual’s purchase history; however, this approach was inefficient because it surfaced items nearly identical to what the customer had already bought. In fact, this first method was so ineffective that Amazon was arguably better off having its team of book critics pick what to promote on the landing page. Linden’s technique was revolutionary because it made associations across products and customers: if there was a correlation between buying a bike and buying sneakers, then a customer who bought a bike would be shown sneakers as well.
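To make the mechanism concrete, here is a minimal sketch of the co-occurrence idea behind item-to-item recommendations: count how often pairs of products appear in the same basket, then suggest the items most often bought alongside what a customer already owns. The products and baskets are invented for illustration, and Amazon’s production algorithm used more robust similarity measures (such as cosine similarity between item vectors) rather than raw counts.

```python
from collections import defaultdict
from itertools import combinations

# Illustrative purchase histories: one set of items per customer.
purchases = [
    {"bike", "sneakers", "water bottle"},
    {"bike", "sneakers"},
    {"sneakers", "socks"},
    {"bike", "helmet"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(int)
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=3):
    """Return the items most often bought together with `item`."""
    scored = [(other, n) for (i, other), n in co_counts.items() if i == item]
    return [other for other, n in sorted(scored, key=lambda x: -x[1])[:k]]

print(recommend("bike"))  # ['sneakers', 'water bottle', 'helmet']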
Blockchain and AI Meet in the Metaverse
The term “Metaverse” is a compound of “meta” (transcendence) and “universe”. The Acceleration Studies Foundation (ASF) has classified the Metaverse into four categories: a virtual world that tells a fully virtual story, a mirror world that reflects the current real world, an augmented reality that overlays virtual information on the real world, and lifelogging, which captures and stores everyday information about people and things. As the technology develops, more and more people are using the Metaverse, and as activities just as realistic as those in the real world are performed there, vast amounts of data are being generated. This data has value in and of itself, but it can also generate substantial economic value for participants and their real-world counterparts. Blockchain technology is required to guarantee the reliability of data in the Metaverse, and artificial intelligence is used to secure the diversity and richness of its content.
In this chapter, we will explore the human instinct for creation, the collision of the real and virtual worlds, and the reliability of data in the virtual world. Blockchain and NFT technologies are described as trust technologies, and we will explore the Metaverse platforms built on them. Finally, we will look at how blockchain and artificial intelligence can be combined to create a better world.
2. Virtual world and the desire for creation
2.1 Human desire for creation
Creativity is an important factor that distinguishes humans from other animals, and it has helped humans create culture. A paper published in 2004 introduced the SeaCircle as a new concept of culture, regarding it as the human cultural activity of creating. In the SeaCircle concept, humans are spiritual beings, and only humans constitute a culture; the concept explains the elements of insight in culture [2]. According to the SeaCircle theory, creativity is explained as an element of Open Mind and Spirit [3]. This theory has changed the paradigm of the concept of culture.
The SeaCircle concept envisions the Metaverse as a space where people can be more immersed in creative activities, without the limitations of physical space and resources.
2.2 Connection between the virtual world and the real world
Recently, the virtual world and the real world have developed side by side. The First and Second Industrial Revolutions maximized efficiency through the division of labor, so the production and consumption of materials were separated. In the Third Industrial Revolution, as online transactions became common, data became an important commodity, and offline transactions were gradually replaced by online ones. In the Fourth Industrial Revolution, an intelligent revolution is occurring as things and humans become hyper-connected. There is a convergence in which production and consumption occur at the same time, as in social customization or digital DIY (Design It Yourself). The offline world, composed of materials, is dominated by Pareto’s law: limited resources push ownership to concentrate in the core 20%. The online world, composed of data, operates mostly under conditions of abundance, so Pareto’s law does not apply; traditional economics, rooted in the offline world, tries to apply it anyway, but the abundance of data makes it very difficult to maximize efficiency that way. The Fourth Industrial Revolution is thus creating a world where the offline and online worlds converge, in manufacturing, logistics, finance, automotive, sports, healthcare, education, food, and everyday life. In addition, as the problems of material production and supply were solved in the first three Industrial Revolutions, interest in personal human desire and spirit has increased in the Fourth, creating a new convergence between the offline world and the online world.
2.3 Combination of virtual and real in Metaverse
The Metaverse is a reflection of the real world, with political, economic, social, and cultural interactions all appearing in the virtual space. Figure 1 shows the process of interworking and convergence between the real world and the Metaverse [4]. The Metaverse can also express an alternative world that cannot easily be realized in the real world.
Data is the new oil
Big data is being used by an ever-increasing number of companies to inform their business decisions. Amazon was one of the first companies to use big data in this way, but now almost every company with an online presence does so to some degree. For example, Walmart uses data on which products are often bought together, as well as data on the weather conditions when certain products are purchased, to decide how to organize its stores and increase sales. Similarly, Google uses data on users’ search habits to help advertisers target their ads to users’ interests.
The amount of data available will continue to grow; Dr. Mark van Rijmenam predicts that “the average person will interact with a connected device every 18 seconds” by 2035. However, as van Rijmenam also notes, there is a growing demand from consumers to maintain their privacy. A solution is needed that increases security and individual privacy during data transfers, but no such solution will see widespread adoption unless it lets companies continue to monetize their data profitably.
How data flows today
Many companies around the world sell data to other companies in order to make a profit. The buyers use the data to make better decisions and improve their services. For example, Facebook has reportedly bought data from about 150 companies to make its advertisements more accurate. Even though this process can benefit all parties, the data is at a higher risk of unauthorized access whenever it is transferred between companies and across borders.
Data flows through many different marketplaces and companies. Acxiom, Nielsen, Experian, Equifax, and CoreLogic all gather data on millions of people, but each specializes in certain types of data. For example, Acxiom focuses on consumer behavior, and the data it sells is used in 12% of targeted marketing sales in the US each year. Another example is CoreLogic, which collects data on property transactions and sells it to real estate companies and banks so they can better determine mortgage rates. The average consumer has very little knowledge of where their information goes and even less control over it. Corporations that want this data for any reason can buy it directly from these big data brokers, with no transparency for the consumer, creating an environment prone to hacks and data leaks.
A New Model: Decentralized Data Marketplaces
Decentralized data marketplaces use blockchain technology to improve security and privacy when connecting potential buyers and sellers of data. With decentralized systems, entities such as companies, data unions, or individuals can publish datasets while maintaining complete control over who gets access to them. These systems are powerful for facilitating data transactions because their inherent transparency increases user trust and their cryptographic security makes hacks and data leaks far harder to pull off. Although the technology is still at the beta stage, it has plenty of room to grow. Startups like the non-profit Ocean Protocol have shown that decentralized data marketplaces can work and benefit users.
Ocean Protocol is a decentralized data marketplace that lets users grant access to datasets in exchange for tokens on Ethereum. Tokenizing data matters because it makes data interoperable with the rest of the crypto and DeFi space. Each dataset is represented by an NFT, which acts as a copyright claim over that dataset, so ownership is never compromised when the data is shared. Each dataset is also given one or more fungible datatokens; holding a datatoken grants access to a particular service, so sellers grant access to their dataset by sending a token to a buyer.
When a seller publishes a dataset on Ocean, a unique ERC-20 datatoken and an ERC-721 data NFT are created automatically. The seller can fix the price for access or let Ocean Market’s Automated Market Maker discover the price. Balancer provides the proof-of-liquidity service, allowing anyone to stake OCEAN in a “datatoken AMM pool” corresponding to a specific dataset.
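Because access is simply ERC-20 ownership, a service gating a dataset only needs a standard balance check. The sketch below uses web3.py; the RPC endpoint, the contract address, and the one-token access threshold are illustrative assumptions, not Ocean’s actual gatekeeping code.

```python
from web3 import Web3

# Minimal ERC-20 ABI: only the balanceOf view function is needed here.
ERC20_ABI = [{
    "name": "balanceOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

RPC_URL = "https://rpc.example.org"                          # hypothetical endpoint
DATATOKEN = "0x1111111111111111111111111111111111111111"     # hypothetical datatoken contract

w3 = Web3(Web3.HTTPProvider(RPC_URL))
token = w3.eth.contract(address=DATATOKEN, abi=ERC20_ABI)

def has_access(buyer: str, threshold_wei: int = 10**18) -> bool:
    """Grant access if the buyer holds at least 1.0 datatoken (18 decimals assumed)."""
    return token.functions.balanceOf(buyer).call() >= threshold_wei
```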
If more OCEAN is staked in a specific AMM pool, that dataset becomes more expensive to access. Stakers are incentivized to pick good datasets because they earn a percentage of the fees on every transaction involving that dataset, and the more OCEAN a staker has in the pool, the larger their share of those fees. Stakers therefore also act as curators, backing datasets they believe are high quality, which makes the amount of OCEAN staked in a dataset a proxy for its quality. As more OCEAN flows to better datasets, the overall quality of datasets should improve.
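The price effect follows from the spot-price formula for Balancer-style weighted pools, spot price = (B_in / W_in) / (B_out / W_out), ignoring swap fees. A small worked example with invented balances and weights:

```python
def spot_price(b_ocean, w_ocean, b_dt, w_dt):
    """Balancer weighted-pool spot price, fees ignored:
    OCEAN paid per datatoken = (B_in / W_in) / (B_out / W_out)."""
    return (b_ocean / w_ocean) / (b_dt / w_dt)

# Illustrative numbers: a 50/50 pool holding 100 datatokens.
print(spot_price(10_000, 0.5, 100, 0.5))   # 100.0 OCEAN per datatoken

# Staking more OCEAN against the same datatoken balance raises the price.
print(spot_price(20_000, 0.5, 100, 0.5))   # 200.0 OCEAN per datatoken
```

Doubling the OCEAN side while the datatoken side stays fixed doubles the OCEAN cost of one datatoken, which is the “more stake, more expensive” dynamic described above.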
Compute-to-data and Increasing Utility
The most interesting potential utility of Ocean is the ability to train algorithms on datasets without ever exposing the individual data points. The transparency of the blockchain prevents buyers from running any algorithm the seller has not approved, ensuring the data is never compromised. Ocean also gives sellers the option to show potential buyers sample data from the dataset, so buyers know how much “cleaning up” of the data they will need to do.
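Schematically, compute-to-data sends the algorithm to the data rather than the data to the algorithm, and only the results cross the trust boundary. Here is a hedged sketch of that pattern; the types and function names are hypothetical, not Ocean’s API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ComputeJob:
    """Hypothetical compute-to-data job, run inside the seller's environment."""
    algorithm: Callable[[list], dict]   # code submitted by the buyer
    approved: bool                      # seller must whitelist the algorithm first

def run_compute_to_data(job: ComputeJob, private_dataset: list) -> dict:
    if not job.approved:
        raise PermissionError("Seller has not approved this algorithm")
    # The algorithm sees the data only here, on the seller's premises.
    result = job.algorithm(private_dataset)
    # Only aggregate results cross the boundary; raw rows never do.
    return result

# Example: the buyer's "algorithm" just computes an average.
job = ComputeJob(algorithm=lambda rows: {"mean": sum(rows) / len(rows)},
                 approved=True)
print(run_compute_to_data(job, private_dataset=[3, 5, 7]))  # {'mean': 5.0}
```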
The compute-to-data technique could revolutionize healthcare by letting doctors learn from otherwise private and sensitive data, such as health records, without ever seeing the raw records. Today’s healthcare system does not allow hospitals to share patient records, which limits what can be learned from them. If every hospital in the world could make its patient data available for computation, doctors would have a far better picture of correlations between symptoms and early indicators of disease, potentially saving millions of lives.
Compute-to-data also makes our data more secure and private by letting consumers choose whether to make their data available in the first place. While this is not an easy solution, it is more realistic than expecting companies to stop recording and selling our data.
Even if consumers continue to express their dislike of the current system and governments increase regulation, companies will still be incentivized to collect data because it is so profitable. The first layer of compute-to-data matters because it lets companies continue to monetize their data without giving up any of their customers’ personal information. Additionally, GDPR restricts personal data from being transferred across borders, which is an obstacle for many companies that want to buy user data. Ocean, however, allows access to data without transferring it off-premise, keeping it compliant with this regulation.
Ocean’s Traction
The adoption of blockchain-based marketplaces by brokers and buyers of data is the first step toward a secure, decentralized method of sharing data. As of this writing, Ocean’s transaction volume is relatively low; however, judging by Ocean’s grants platform, there appear to be at least a dozen active teams building on Ocean, and perhaps many more. There was a spike in usage after V3 was released, followed by a decline due to issues with the marketplace’s staking feature and a rise in Ethereum gas fees. V4, which is still in beta, promises safe staking and integration with other networks that have low gas fees, such as Polygon and Binance Smart Chain.
The founder of Ocean, Trent McConaghy, sees a few ways to increase the number of users. The most important in the near term is incentivization through liquidity mining; Ocean will release this shortly after V4. In the longer term, Ocean is working on catalyzing people to create their own projects on the platform through the grant program and other initiatives. This has created a community that is passionate about the success of the new data economy.
Beyond marketplaces: Data Unions
Trent has explained how Ocean’s technology can have applications beyond the first level. For example, data unions powered by Ethereum DAOs could pay royalties to individuals for the use of their data. In addition, compute-to-data could be integrated with decentralized social media platforms.
A data union is an organization that pools private data from individual users, with their consent, and negotiates on their behalf. These organizations help build the new data economy, and if they succeed, the users who contribute data will share in the value it creates.
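One way to picture the royalty mechanics: a data union pools members’ contributions and splits each sale’s revenue pro rata. The members, contribution counts, and sale amount below are invented; a real data union would settle this on-chain through a DAO:

```python
# Hypothetical members and how many data points each contributed.
contributions = {"alice": 600, "bob": 300, "carol": 100}

def split_revenue(sale_amount: float, contributions: dict) -> dict:
    """Pay each member in proportion to their share of the pooled data."""
    total = sum(contributions.values())
    return {m: sale_amount * n / total for m, n in contributions.items()}

print(split_revenue(50.0, contributions))
# {'alice': 30.0, 'bob': 15.0, 'carol': 5.0}
```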
Motivation for Adoption: An Ideal Future
There are many reasons for both consumers and businesses to be drawn to decentralized data marketplaces. The main ones are government pressure, hacks, and changing consumer habits driven by greater awareness of how their data is abused. Companies will only abandon the existing business model under a combination of new laws that restrict current data practices and consumers hurting their profits by choosing other services. As more and more data is collected, the existing centralized system will keep being exposed for its lack of security and privacy, driving consumers to seek a new model.
As more people learn about Ocean, data unions, and the usefulness of blockchain technology, the choice between the existing data business model and decentralized data marketplaces will be up to consumers. For people to switch to products like Ocean and Swash, however, the interfaces must be user-friendly and people need a basic understanding of, and trust in, blockchain technology. Even if that takes time, the advantages of decentralization are unmistakable: there is a chance for a future where privacy is still possible.