
Today, technology seems to be evolving at an exponential rate. And with the ever-increasing compression of time and space that we are experiencing globally, it's not only logical but expected for the IT industry to take the lead in easing the pain. Time-space compression, a term coined by David Harvey in his 1989 book “The Condition of Postmodernity”, is the idea that our perception of space is time-dependent: all distances are understood in terms of the time it takes to traverse them. So, if the time needed to carry out a space-related task is reduced, the corresponding space is perceived to have shrunk as well. In short, it describes an accelerated frame in which people experience time and space, a condition owed largely to advances in transport and communication technology.

Speed and Volume

In today's postmodern world, this concept has evolved well beyond what Harvey described in the 1980s: the time needed to carry out a space-related task has been reduced so drastically that our perception of physical distance has become virtually nonexistent. We are living in the “instant” age of communications. There are currently over 15 billion internet-connected devices worldwide, and by 2025 that number is projected to reach 150 billion. While this is not a major problem yet, existing equipment, storage, and infrastructure will not be able to absorb that increase in volume over the next two to three years. Network congestion, latency, and storage limitations are all expected to become pressing issues as the volume of data sent and received over the internet increases by 900%.
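
For readers who want to check where that 900% figure comes from, here is a quick back-of-the-envelope calculation using the device counts cited above (the variable names are purely illustrative):

```python
# Quick check of the growth figure cited above: going from roughly 15 billion
# to a projected 150 billion connected devices is a tenfold jump, i.e. a
# 900% increase in traffic-generating endpoints.
current_devices = 15e9       # approximate devices online today (figure cited above)
projected_devices = 150e9    # projected total by 2025 (figure cited above)

percent_increase = (projected_devices - current_devices) / current_devices * 100
print(f"Projected increase: {percent_increase:.0f}%")  # -> Projected increase: 900%
```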

IoT and AI

One factor driving these increases in speed and volume requirements is the Internet of Things (IoT). Smart home devices and environmental sensors are everywhere, and each one works on the same premise: gather data, then send it to a centralized data center or cluster of servers for processing and reporting (a pattern sketched below). In addition to the IoT, Artificial Intelligence (AI) applications are growing in popularity. Autonomous (self-driving) cars, navigational aids, surveillance systems, and speech-recognition devices all must gather and process data instantly, and any delay in transmission caused by network congestion or latency can pose serious risks.
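
To make that pattern concrete, here is a minimal sketch of the "gather data, then send it to a centralized data center" flow, assuming a plain HTTP collection endpoint; the URL, device ID, and payload fields are hypothetical, and real deployments often use protocols such as MQTT instead.

```python
# Minimal sketch of the centralized IoT pattern described above: every raw
# reading is shipped across the network for remote processing.
# The endpoint URL and payload fields are hypothetical.
import json
import time
import urllib.request

CENTRAL_ENDPOINT = "https://datacenter.example.com/ingest"  # hypothetical collector

def read_sensor() -> dict:
    """Stand-in for a smart-home or environmental sensor reading."""
    return {"device_id": "sensor-42", "temperature_c": 21.7, "timestamp": time.time()}

def send_reading(reading: dict) -> None:
    """POST a single raw reading to the central data center."""
    request = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()  # each reading costs a full round trip to the data center

if __name__ == "__main__":
    send_reading(read_sensor())
```

Billions of devices doing this at once is exactly what drives the congestion and latency concerns described above.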

Enter Edge Computing

Edge computing is one possible answer to the extreme speed that networks must now deliver under this compressed time-space. Edge computing is a distributed IT architecture that reduces latency for end users by moving compute and storage closer to the data source. It relies on nodes at the edge of the network that process data and run applications. These nodes, or clusters, handle data close to the end user that would otherwise have to travel to the cloud. The result is greater efficiency, lower costs, shorter data transmission times, and a better digital experience. In real-time applications, this approach can sharply reduce lag and bandwidth usage, which is critical in many scenarios.
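
As a rough illustration of how that works in practice, the sketch below shows an edge node that aggregates raw readings locally and forwards only a compact summary upstream; the cloud URL, field names, and aggregation choices are assumptions, not a prescription for any particular edge platform.

```python
# Minimal sketch of edge-side processing: raw readings are aggregated locally,
# and only a small summary travels to the cloud. Names and the upstream URL
# are illustrative.
import json
import statistics
import urllib.request

CLOUD_ENDPOINT = "https://cloud.example.com/summaries"  # hypothetical upstream service

def summarize(readings: list) -> dict:
    """Aggregate raw sensor readings at the edge instead of shipping each one."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }

def forward_summary(summary: dict) -> None:
    """Send one small payload upstream instead of many raw readings."""
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()

if __name__ == "__main__":
    raw_readings = [21.4, 21.7, 22.1, 21.9]  # values an edge node might collect locally
    forward_summary(summarize(raw_readings))
```

Compared with the centralized sketch earlier, the network carries one small summary instead of a stream of raw readings, which is where the latency and bandwidth savings come from.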

Edge computing holds promise for adapting the postmodern world to our new model of compressed time-space. Going forward, SaaS providers, network architects, cloud engineers, and data center operators must all be mindful of the need to push a portion of the compute load out to the edge in order to deliver a quality end-user experience and prevent catastrophic failures in network design. At Reese Data Center, we understand the need for complex, distributed network architectures. Contact us today to find out how we can help you simplify yours.