The role of decentralized networks in a data-abundant, hyperconnected world

When it comes to storing computer data, it seems like we’re running out of numbers. If you’re old enough, you may remember when floppy disk space was measured in kilobytes, back in the 1980s. If you are a little younger, you are probably more familiar with USB sticks denominated in gigabytes, or hard drives that now hold terabytes.

The unfathomable data footprint of humanity

But we are now producing data at an unprecedented rate. As a result, we must be able to grasp numbers so large that they seem almost beyond human comprehension. To get a taste of the new territory we’re entering, consider the following: research firm IDC estimates that total global data creation and consumption in 2020 amounted to 59 zettabytes – that’s 59 trillion gigabytes in old money.
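To make that figure concrete, the short back-of-the-envelope sketch below converts IDC’s estimate into more familiar units; the four-gigabyte size assumed for a single HD film is purely illustrative and not part of IDC’s research.

```python
# Back-of-the-envelope conversion of IDC's 2020 estimate (decimal SI units assumed).
ZETTABYTE_IN_GB = 10 ** 12          # 1 ZB = 10^21 bytes = 10^12 GB
HD_MOVIE_GB = 4                     # assumed ballpark size of one HD film

total_gb = 59 * ZETTABYTE_IN_GB
print(f"59 ZB = {total_gb:,} GB")                     # 59,000,000,000,000 GB
print(f"That is roughly {total_gb // HD_MOVIE_GB:,} HD movies")
```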

While the total volume of data is now almost unfathomable, the rate of growth is even more remarkable. As early as 2012, IBM calculated that 90% of the world’s data had been created in the preceding two years. Since then, the exponential growth in global data volume has only accelerated, and the trend is set to continue: IDC anticipates that humanity will create more data in the next three years than it did in the previous three decades.

The obvious question is: what has changed? Why are we suddenly producing so much more data than ever before? Smartphones are, of course, part of the story. Practically everyone now carries a mobile computer in their pocket that dwarfs the desktop machines of previous generations. These devices are constantly connected to the Internet, continuously receiving and sending data even when idle. The average American Generation Z adult unlocks their phone 79 times a day, roughly once every 13 minutes. The constant availability of these devices has contributed to an avalanche of new data: 500 million new tweets, 4,000 terabytes of Facebook posts, and 65 billion new WhatsApp messages are sent into cyberspace every 24 hours.

Smartphones are just the tip of the iceberg

However, smartphones are only the most visible manifestation of the new data reality. While you might assume that video services like Netflix and YouTube account for the lion’s share of global data, the total consumer share is only around 50%, and that percentage is expected to decline gradually in the years to come. So what makes up the rest?

The rise of the Internet of Things and connected devices has further expanded our global data footprint. The fastest year-over-year growth is in a category known as embedded and productivity data: information generated by sensors, connected machines, and automatically created metadata that sits behind the scenes, invisible to end users.

Take, for example, autonomous vehicles, which use technologies like cameras, sonar, LIDAR, radar, and GPS to monitor the traffic environment, plot a route, and avoid hazards. Intel has calculated that the average autonomous vehicle using current technologies produces four terabytes of data per day. To put this in perspective: a single vehicle generates as much data every day as almost 3,000 people. It is also vital that this data is stored securely, for several reasons.
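For a rough sense of where that "almost 3,000 people" comparison comes from, the sketch below redoes the arithmetic; the roughly 1.5 gigabytes of data per average internet user per day is an illustrative assumption, not a figure taken from Intel.

```python
# Rough reconstruction of Intel's comparison (the per-person figure is an assumption).
VEHICLE_GB_PER_DAY = 4_000    # 4 TB/day from cameras, sonar, LIDAR, radar and GPS
PERSON_GB_PER_DAY = 1.5       # assumed daily data footprint of an average internet user

people_equivalent = VEHICLE_GB_PER_DAY / PERSON_GB_PER_DAY
print(f"One autonomous vehicle produces as much data as about {people_equivalent:,.0f} people per day")
# Prints roughly 2,700, i.e. "almost 3,000" people.
```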

For one, the stored data is useful for planning maintenance intervals and diagnosing technical problems efficiently. It could also be used as part of a decentralized system to coordinate traffic flow and minimize energy consumption across a city. Finally, and probably most importantly in the short term, it will be needed to resolve legal disputes in the event of an accident or injury.

Autonomous vehicles are only a tiny part of the story. According to McKinsey & Company, the proportion of companies using IoT technology increased from 13% to 25% between 2014 and 2019, and by 2023 the total number of connected devices is expected to reach 43 billion. From industrial IoT to entire smart cities, the economy of the future will contain an enormously increased number of connected devices that potentially produce highly sensitive or even critical data.

Is the end of Moore’s law in sight?

There are two factors to consider, and both point to the increasing utility of decentralized networks. First, while we have more data than ever to tackle global challenges like climate change, financial instability, and the spread of airborne viruses like COVID-19, we may be approaching a hard technical limit on how much of that information centralized computers can process in real time. While the volume of data has grown exponentially in recent years, processing power has not kept pace.

In the 1960s, Intel co-founder Gordon Moore formulated what became known as Moore’s Law: the observation that computing power doubles as the number of transistors on a microchip doubles roughly every two years. But Moore himself admitted that this was not a scientific law so much as a statistical observation. In 2010, he acknowledged that computing power would hit a hard technical limit within a few decades, as transistors approach the size of individual atoms. Beyond that point, more cores can be added to processors to increase speed, but doing so increases the size, cost, and power consumption of the device. To avoid a bottleneck, therefore, we need to find new ways to monitor and act on data.
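To see how powerful that doubling assumption is, the short sketch below projects a transistor count forward under the textbook two-year doubling rule; the Intel 4004’s roughly 2,300 transistors in 1971 is a commonly cited starting point, and the projection is an illustration of the exponential trend rather than industry data.

```python
# Toy projection of Moore's Law: transistor counts doubling every two years.
def moores_law(start_count: float, start_year: int, end_year: int, period_years: float = 2.0) -> float:
    """Project a transistor count forward, doubling once every `period_years`."""
    doublings = (end_year - start_year) / period_years
    return start_count * 2 ** doublings

# Example: the Intel 4004's ~2,300 transistors (1971) projected to 2021.
print(f"{moores_law(2_300, 1971, 2021):,.0f} transistors")
# Prints about 77 billion, the right order of magnitude for the largest chips of 2021.
```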

The second factor to consider is cybersecurity. In an increasingly connected world, millions of new devices are coming online, and the data they provide may influence how power grids are controlled, how healthcare is delivered, and how traffic is managed. As a result, edge security – the security of data that resides outside the network core – is of paramount importance. This poses a complex challenge for cybersecurity professionals, as the many different combinations of devices and protocols create new attack surfaces and opportunities for man-in-the-middle attacks.

Learning from networks in nature

What is the alternative when centralized processing is too slow and too insecure for the data-rich economies to come? Some experts have looked to nature for inspiration, arguing that we should move from a top-down to a bottom-up model of monitoring and responding to data. Take ant colonies, for example. While each individual ant has relatively modest intelligence, ant colonies as a whole manage to create and maintain complex, dynamic networks of foraging paths that can connect multiple nests to temporary food sources. They do this by following a few simple behaviors and responding to stimuli in their local environment, such as the pheromone trails left by other ants. Over time, evolution has honed instincts and behaviors at the individual level that produce a system that is highly effective and robust at the macro level. When a path is destroyed by wind or rain, the ants find a new route, without any single ant being aware of the overall goal of maintaining the network.
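To see that feedback loop in miniature, the toy simulation below has ants choose between a short and a long path purely on the basis of pheromone levels, loosely following the classic "double bridge" experiment; every parameter in it is an illustrative assumption rather than a measurement of real colonies.

```python
import random

# Toy simulation of stigmergy: ants pick a path in proportion to its pheromone level,
# and shorter trips are completed (and therefore reinforced) more often per round.
# All numbers are illustrative assumptions, not data about real ant colonies.
TRIP_TIME = {"short": 1.0, "long": 2.0}   # relative traversal times
EVAPORATION = 0.05                        # fraction of pheromone lost each round
ANTS_PER_ROUND = 100

pheromone = {"short": 1.0, "long": 1.0}

def choose_path() -> str:
    """Each ant picks a path with probability proportional to its pheromone level."""
    return random.choices(list(pheromone), weights=list(pheromone.values()))[0]

for _ in range(200):
    for _ in range(ANTS_PER_ROUND):
        path = choose_path()
        # A shorter path is completed faster, so it is reinforced more per round.
        pheromone[path] += 1.0 / TRIP_TIME[path]
    for path in pheromone:
        pheromone[path] *= (1 - EVAPORATION)

share = pheromone["short"] / sum(pheromone.values())
print(f"Share of pheromone on the short path: {share:.0%}")  # typically well above 50%
```

Run repeatedly, the short path almost always ends up carrying the bulk of the pheromone, even though no individual ant ever compares the two routes.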

What if the same logic could be applied to the organization of computer networks? As in an ant colony, many nodes of modest processing power can be combined in a blockchain network to produce a global result that is greater than the sum of its parts. And just as instincts and behaviors are vital to the colony, the rules governing how nodes interact are critical to whether a network achieves its macro-level goals.

It took nature thousands of years to align the incentives of individual, decentralized actors into mutually beneficial networks, so it is not surprising that this is also a difficult challenge for the human designers of decentralized networks. But while the genetic mutations of animals are essentially random with respect to their potential benefits, we have the advantage of being able to deliberately model and design incentives to achieve shared overall goals. This has been our priority: removing any perverse incentives for individual actors that would undermine the usefulness and security of the entire network.

By carefully designing incentive structures in this way, decentralized networks can significantly improve edge security. Just as an ant colony’s pathfinding network keeps working when individual ants are lost, a well-designed decentralized network remains fully functional even if individual nodes crash or go offline. Moreover, no single node has to process or understand all of the data for the network as a whole to respond. In this way, some researchers believe, we can create an economic incentive structure that automatically detects and reacts to shared challenges in a decentralized manner.
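As a rough illustration of that robustness, the sketch below stores each record on a handful of randomly chosen nodes, knocks out a large fraction of the network, and checks how much of the data is still reachable; the node count, replication factor, and failure rate are arbitrary assumptions chosen only to make the point, not the parameters of any particular network.

```python
import random

# Minimal sketch of why replication makes a decentralized network fault-tolerant.
# Each record is stored on k randomly chosen nodes out of N; we then fail a fraction
# of nodes and check whether every record is still retrievable somewhere.
N_NODES = 100
REPLICAS_PER_RECORD = 5
FAILURE_RATE = 0.30      # 30% of nodes crash or go offline

nodes = list(range(N_NODES))
records = {f"record-{i}": random.sample(nodes, REPLICAS_PER_RECORD) for i in range(1_000)}

failed = set(random.sample(nodes, int(N_NODES * FAILURE_RATE)))
available = sum(
    1 for replicas in records.values() if any(n not in failed for n in replicas)
)
print(f"{available / len(records):.1%} of records still retrievable "
      f"after losing {len(failed)} of {N_NODES} nodes")
# With 5 replicas each, the chance that a given record loses every copy is only ~0.2%.
```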

Conclusion

The volume of data we produce is exploding, and our ability to monitor and react to it using centralized computer networks is reaching its limits. For this reason, decentralized networks are uniquely suited to the challenges ahead. Much remains to be explored, tested, and refined, but the basic robustness and usefulness of the underlying technology has already been demonstrated. On the way to a data-rich, hyperconnected world, decentralized networks could play an important role in extracting the maximum economic and social benefit from the Internet of Things.

The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Stephanie So is an economist, policy analyst, and co-founder of Geeq, a blockchain security company. During her career, she has applied technology in her specialist fields. In 2001, she was the first to use machine learning for social science data at the National Center for Supercomputing Applications. More recently, she explored the use of distributed network processes in healthcare and patient safety in her role as a lecturer at Vanderbilt University. Stephanie is a graduate of Princeton University and the University of Rochester.
