What is Edge Computing?
If you are curious about edge computing and want to learn more about it, this article is for you. Adapted from a more formal document, which can be found here, it serves to inform the reader about edge computing. We cover not only what edge computing is, but also how we got here, the benefits of edge computing, and some of its use cases.
Today’s Information Technology (IT) field has many challenging aspects to consider. One of the most prominent is the ever-growing amount of data – specifically, data from the numerous devices, several per person in the world’s population, that request, send, and process it.
Every day, people connect to the Internet to search, buy, read, or simply browse; in doing so, we create requests, which are then processed by servers somewhere. A multitude of requests must be handled by a multitude of servers, and the magnitude is only increasing. The result is vast amounts of data that must be stored somewhere safe and processed as quickly as possible before being sent back – all with minimal waiting time.
What we have described above is the centralized model: one or more data centers in a single location, where organizations use provisioned hardware, software, and network capacity. No matter how advanced or modern a data center is, it simply cannot keep up with this exponential expansion. Building, maintaining, securing, and running a data center is expensive: it requires a very high initial investment, along with ongoing costs for modernization, cooling, electricity, physical security, and technically competent personnel. Predicting business needs and provisioning accordingly is a complex task, and resources often end up either unused or insufficient.
It is all about virtualization! At the root of today’s most significant tech changes is virtualization: a combination of hardware and software that enables the most efficient utilization of resources to increase performance. Virtualization also offers an added layer of security; if one virtualized environment fails, the remaining environments, and the host OS, can still function. Virtualization is the technology that makes “the cloud” possible.
New Terminology, Old Technology
When we speak about “the Cloud”, we are talking about using one or more data centers where services and resources are supplied by a vendor or provider. In the networking world, long before the modern “cloud”, a cloud icon was used in diagrams to represent the Internet or another network.
Organizations choose to utilize a Cloud provider because it offers almost immediate provisioning of resources, such as servers and storage, with minimal waiting time.
An organization pays only for what it uses.
The infrastructure architecture of Cloud providers is identical to what one finds in any data center – network, servers, virtualization, security, and technical personnel.
Organizations do not need to purchase equipment, wait for it to arrive, and then configure and enable it; the Cloud provider has already done this. We can refer to this as “ready-to-use” technology.
The Cloud offers an agility that a data center alone cannot. It scales based on an organization’s need, demand, and usage scope. The agility a business gains when moving to the Cloud is incomparable.
However, the Cloud cannot yet provide the storage, performance, bandwidth, and latency that can be achieved in a data center.
Sensitive data and applications often require high availability (99.999%), which is best obtained in an on-premises data center; in the Cloud, the same high availability requires extra steps and costs. In some situations, it remains more efficient to keep certain operations in a data center.
Alternatively, businesses can consider a private cloud platform, such as Nutanix, to harness the best of both worlds and create a hybrid cloud spanning their on-premises data center and a cloud provider.
So, What is Edge Computing?
The way we produce, consume, and request data has only been intensified by unpredictable events such as the Covid-19 pandemic. Many organizations needed to support remote work, while others were overwhelmed by exponential growth. Grocery stores, for example, saw a major shift from in-store shopping to online ordering and delivery, and remote meetings through platforms such as Zoom have become the new normal.
Additionally, Machine Learning, hand in hand with Artificial Intelligence and the quiet but stable presence of IoT, continues to change the landscape.
For organizations, it is becoming clear that a decentralized infrastructure is a way to cope with such data expansion. To understand edge computing, we should understand the need for data to be processed faster, with little or no latency, and closer to the user.
Essentially, edge computing is decentralized computing at the edge of a network, offering cost and latency benefits along with real-time computation. This is accomplished by placing compute and storage in locations near the users, at the perimeter. Edge computing adds an extra layer of agility and scalability to a hybrid or multi-cloud environment.
In a decentralized model, there would be edge computing solutions and edge computing locations.
The solutions can range from health monitors to Internet of Things (IoT) devices to any device that can collect and process data in real time (even a simple Raspberry Pi); many of the “smart” devices we already use are examples. The locations can take the form of small-scale data centers placed in strategic proximity to users.
For example, in a place where data is created by various devices (for instance, IoT sensors) to support a specific function, it is more practical and less costly to process those great amounts of data locally than to send them to a distant data center. Doing so also reinforces data sovereignty. The devices, apps, and equipment at the edge location remain connected to a centralized source (Cloud or data center) for major updates, maintenance, and health checks.
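The idea of processing locally and forwarding only what the cloud needs can be sketched in a few lines. This is a minimal, hypothetical example – the function name, sample values, and summary fields are illustrative, not from any specific product – showing an edge node condensing a stream of sensor readings into a small summary before it is sent upstream.

```python
import statistics

def summarize_readings(readings):
    """Aggregate raw sensor samples at the edge so that only a small
    summary, not every individual sample, is forwarded to the cloud."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

# A minute of local temperature samples (hypothetical data)
samples = [21.0, 21.2, 20.9, 21.3, 21.1]
summary = summarize_readings(samples)
print(summary)  # four values travel upstream instead of the full stream
```

In a real deployment the raw stream might be thousands of samples per second; the edge node keeps the detail locally and the central site receives only the aggregate.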
Most of the technology we have today has evolved to be faster, smaller, and higher performing, but the logic behind it has not changed. Edge computing is not a new technology: remote computing is an early form of it. There, too, computing resources were located closer to users, in remote and branch offices, removing the dependency on a single, usually distant, central location and adding flexibility, efficiency, and reliability.
Based on Cisco and Gartner research, 17% of businesses were not oriented toward an edge computing infrastructure; however, 54% were clearly interested in learning more and exploring its potential.
What are the benefits of edge computing for both organizations and individuals?
Benefits that can help organizations provide better services and grow:
- Latency: can approach the performance of a traditional on-premises environment
- Bandwidth: traffic can be managed – varying how many devices send at once, or sending at intervals, to a data center
- Cost: heavy-duty tasks are completed in the Cloud – pay for what you use, only when you use it
- Savings: core tasks are processed locally, saving the cost of sending that data to a remote site
- Functionality: the infrastructure can be split into independent elements, each placed where it serves content best
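The bandwidth and savings points above come largely from batching: sending one message per minute instead of one per sample amortizes the fixed per-message overhead. Here is a rough back-of-the-envelope sketch – the device counts, sample sizes, and 64-byte header are hypothetical assumptions, not measurements:

```python
def hourly_bytes(num_devices, samples_per_hour, bytes_per_sample,
                 batch_size=1, header_bytes=64):
    """Rough estimate of upstream traffic per hour. Batching samples
    at the edge amortizes the per-message header overhead."""
    messages = num_devices * samples_per_hour // batch_size
    payload = num_devices * samples_per_hour * bytes_per_sample
    return messages * header_bytes + payload

# 1,000 sensors, one 16-byte sample per second (hypothetical numbers)
per_sample = hourly_bytes(1000, 3600, 16)                 # a message per sample
per_minute = hourly_bytes(1000, 3600, 16, batch_size=60)  # batched at the edge
print(per_sample, per_minute)  # 288,000,000 vs 61,440,000 bytes per hour
```

Under these assumptions, local batching cuts upstream traffic by nearly 80% while the payload itself stays identical – the saving is entirely in message overhead.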
Use Cases for Edge Computing
When we look across industries, a clear picture emerges: more organizations need to constantly collect and process more data.
In transportation, traffic data collected at intersections can be used to improve traffic flow and traveler information: when the next bus arrives, how traffic lights are timed, whether a scooter or bike is available to rent nearby, and so on.
In healthcare, for instance, when remotely monitoring patients to ensure their well-being, we know that a delay or interruption, even for a moment, can be fatal. Here, edge computing can speed up model training and proactively alert and update users.
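The alerting side of this is exactly where edge processing pays off: the decision is made on the device, with no round trip to a distant data center. A minimal sketch, assuming entirely hypothetical vital-sign thresholds (real clinical limits would come from medical guidance, not this illustration):

```python
def check_vitals(heart_rate, spo2):
    """Make the alarm decision on the edge device itself, instead of
    waiting on a round trip to a remote data center."""
    if heart_rate < 40 or heart_rate > 140:
        return "ALERT: heart rate out of range"
    if spo2 < 90:
        return "ALERT: low blood oxygen"
    return "OK"

print(check_vitals(72, 97))   # normal readings
print(check_vitals(150, 97))  # triggers an immediate local alarm
```

The cloud still receives the data for long-term trends and model training, but the life-critical decision never depends on the network being up.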
In agriculture, too, complex algorithms and heavy processing can improve efficiency, but they are data intensive. Developed further and implemented properly, edge computing and IoT can let machinery run without human interaction – harvesting, watering, controlling temperature and humidity, caring for the animals on a farm, and forecasting possible environmental disasters.
In manufacturing, plants and warehouses, mining, and the oil sector, edge computing can help keep processes running with minimal latency and delay. High latency is not permissible in an industrial plant, and high availability is critical. This is where edge computing is beneficial, relaying and processing data from various equipment to ensure efficient and timely operation. The Cloud can still be used to host machine learning for data mining, model training, or Business Intelligence (BI) workloads.
In retail, edge computing can help process transactions at the stores, with the cloud used for redundancy; additionally, a single edge location can serve multiple nearby retail sites.
Why would organizations use Edge Computing?
- Decouples: processing and data transmission
- Processes: in (near) real time
- Locates: data in a nearby Point of Presence (PoP)
- Optimizes: the collection, pre-processing, and use of data
- Secures: governance by keeping data in a known place, giving greater control and visibility
In any use case where data must be collected and processed right there and right now, an organization will benefit from edge computing. There is also a parallel and very promising development to take into consideration: hyperconverged infrastructure (HCI).
With an HCI platform such as Nutanix, some of the so-called “heavy-duty tasks” could be performed in an on-premises data center running a Nutanix node or a cluster of them.
The cloud, hyperconverged infrastructure, and edge computing are here to stay.
They will continue to enhance one another’s functionality, and none of them can bypass or substitute for the others. There are simply too many critical tasks that need to be performed in loco – that is, here and now – and their number will only continue to increase.
Edge computing requires thorough optimization and evaluation of resources and functionality; in some cases, implementation can be costly and complex. To fully benefit, performance-wise and in service to users, organizations need to thoroughly plan, analyze, and implement strategies that fit their specific needs. Not everything can live in the cloud, and not everything can live at the edge. An enterprise or cloud architect needs to pick the best services to run at the edge location: those that provide the best processing, performance, and data optimization, and deliver immediate results without interruption in connectivity.
Finally, since everything is a question of cost and interest, let’s look at that. Fundamentally, if edge computing is planned and implemented properly, costs can be contained within a reasonable frame. As already mentioned, a hybrid cloud (for example, Nutanix together with a Cloud provider and on-premises resources) can also be a solution for industries that require high security over their information.
Marketing research from the IBM Institute for Business Value and Oxford Economics points to growth in edge computing from $3.5 billion in 2019 to $43.4 billion in 2027.
The report estimates that by using edge computing, organizations could see an average ROI of 24% in the first three years after the investment.
As economies of scale tell us, when there is more demand for a service or product, its price can be reduced – and edge computing applies to all industries.
Further on, the report from IBM Institute for Business Value states that:
“A majority of respondents tell us edge computing will help them reduce operating costs (57 percent) and automate workflows (56 percent) in the next five years. Close to half expect edge capabilities to increase productivity (47 percent) and accelerate decision making (46 percent).”
What must we consider when thinking of edge computing? Edge computing can help with the constantly increasing number of devices, access points, and data – growth that is simply unmanageable in a centralized model. Edge computing is not a stand-alone solution; it is part of a larger solution and should fall under the same security policies and organizational structure as the primary cloud or data center. Organizations must ensure that the edge location is set up for their specific needs and expected usage; otherwise, they may not see the ROI they expect.