The cloud services market is increasingly driven by customers with highly targeted IT requirements. Organizations want solutions built for a variety of edge and distributed computing use cases, for example, and as a result are no longer willing to accept one-size-fits-all technologies that fall short of their needs.
This is an understandable perspective. While cloud computing has become an attractive option for centralized business functions, it has proven less beneficial for organizations that rely on infrastructure outside the central data center. Processing and protecting data at the edge is a good example: some organizations have adopted cloud-first strategies without any on-site support, only to find that mission-critical remote applications suffer from performance and reliability issues, with the knock-on effect that cloud contracts become costly and inefficient.
Difficult challenges
In these circumstances, organizations often choose to bring on-premises IT infrastructure deployment and support back in-house. On the positive side, this can deliver the high levels of reliability and performance they need, but it also reintroduces some of the key challenges the outsourced cloud model was meant to address. These include the cost of deploying hardware, power and cooling systems at each remote location – and, in some cases, the question of whether there is even space to house the required technology at the edge.
Beyond these issues, management and maintenance costs can become prohibitive, whether for a small organization managing one remote location or an enterprise with dozens. The availability of local expertise can also be a major challenge, especially for organizations operating specialist systems where fully trained staff are essential. And even with all these requirements in place, most remote locations will still need some level of connectivity to the cloud or an enterprise data center, and IT teams must decide which data should be stored at the edge, in the cloud, or in their data center.
Organizations in this situation can easily end up dependent on a complex and disjointed strategy, when what they really need is a cost-effective approach with the flexibility to meet their specific edge requirements. The problem has only grown over the past twelve months following Broadcom's acquisition of VMware. After a series of product bundling changes and new subscription fees, not to mention the termination of several existing VMware partner agreements, many customers have been left feeling adrift.
The path forward
For organizations operating in industries such as retail, manufacturing, healthcare and utilities, these problems will be all too familiar. Such companies rely on access to real-time data to inform decision-making, meet performance standards and keep supply chains running efficiently.
At the same time, the range and complexity of edge applications is increasing enormously. From patient health monitoring devices in healthcare to smart shelves and self-checkouts in retail and digital twins at manufacturing sites, these innovations are generating massive data sets and putting even more pressure on existing data centers and cloud computing services.
To address these issues, companies are looking to digital transformation technologies and AI analytics to drive the performance improvements they need. In many cases, the data generated at these edge sites is so time-sensitive that AI systems must be deployed locally if decision-making is to keep pace with operational requirements.
The problem is that there simply isn’t time to send all the data to the cloud for AI processing, so the answer is to implement more of this functionality efficiently at the edge. This is contributing to a major increase in edge investment, with research from IDC showing that global spending on edge computing is expected to reach $232 billion in 2024, up 15% from 2023, and rise to almost $350 billion by 2027.
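To make the timing argument concrete, here is a minimal back-of-the-envelope sketch in Python comparing cloud and local processing for a time-sensitive edge decision. All figures (the network round trip, inference times and decision deadline) are illustrative assumptions, not measurements from the IDC research or any specific deployment.

```python
# Hypothetical comparison of cloud vs. local (edge) inference latency.
# Every figure below is an illustrative assumption, not a measurement.

CLOUD_RTT_MS = 80          # assumed network round trip to a cloud region
CLOUD_INFERENCE_MS = 20    # assumed model inference time on cloud hardware
EDGE_INFERENCE_MS = 35     # assumed inference time on a modest edge server
DEADLINE_MS = 50           # assumed decision deadline for the edge application

cloud_total = CLOUD_RTT_MS + CLOUD_INFERENCE_MS
edge_total = EDGE_INFERENCE_MS

for name, total in [("cloud", cloud_total), ("edge", edge_total)]:
    status = "meets" if total <= DEADLINE_MS else "misses"
    print(f"{name}: {total} ms -> {status} the {DEADLINE_MS} ms deadline")
```

Under these assumed numbers, the cloud path misses the deadline on network latency alone, which is the core reason for moving inference to the edge.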
A streamlined approach
In practical terms, organizations can achieve these goals by deploying full-stack hyperconverged infrastructure (HCI) at the edge as part of a wider cloud strategy. HCI consolidates compute, network and storage resources into a single, streamlined data center architecture.
Unlike traditional approaches that rely on specialized hardware and software for each function, virtualization reduces server requirements without impacting performance, delivering the equivalent of enterprise-grade infrastructure in a much smaller package. Applications can run and data can be stored at any remote location, with cloud and data center connectivity used as needed – and all without the hardware architecture and implementation challenges associated with traditional edge technologies.
A particular benefit is that today’s HCI solutions are designed with the limitations of smaller remote locations in mind, including features that simplify the process of connecting edge technologies to cloud services and enterprise data centers. The most effective HCI solutions can provide these capabilities with as few as two servers, without compromising availability or performance: failover can occur in as little as thirty seconds, maintaining data integrity and keeping operations running, all while reducing hardware expenditure.
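As an illustration of the kind of two-server failover behavior described above, the sketch below shows simple heartbeat-based failover logic in Python. The node names, the 30-second threshold and the overall design are hypothetical and do not describe any particular vendor's HCI implementation.

```python
import time
from dataclasses import dataclass

# Minimal sketch of heartbeat-based failover between two edge nodes.
# Node names, intervals and the 30-second threshold are illustrative only.

HEARTBEAT_TIMEOUT_S = 30  # declare a peer failed if no heartbeat for this long

@dataclass
class Node:
    name: str
    last_heartbeat: float  # timestamp of the last heartbeat received from this node
    active: bool = False   # whether this node currently runs the workload

def check_failover(primary: Node, standby: Node, now: float) -> None:
    """Promote the standby if the primary has missed heartbeats for too long."""
    if primary.active and now - primary.last_heartbeat > HEARTBEAT_TIMEOUT_S:
        primary.active = False
        standby.active = True
        print(f"{primary.name} unresponsive; failing over to {standby.name}")

# Example: the primary's last heartbeat is 31 seconds old, so the standby takes over.
now = time.time()
primary = Node("edge-node-1", last_heartbeat=now - 31, active=True)
standby = Node("edge-node-2", last_heartbeat=now)
check_failover(primary, standby, now)
```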
For organizations with limited space at their remote locations, HCI also reduces the physical footprint required to install hardware. HCI systems consume less power and require less cooling, fewer spare parts and less on-site maintenance than traditional technologies.
This is all possible thanks to the simplicity built into modern HCI systems, where the elimination of complexity also allows for easy installation and remote management. In fact, HCI installations can be managed by IT generalists rather than dedicated experts, and systems can typically be deployed within an hour, avoiding disruption to daily operations and allowing new sites or applications to become operational quickly and effectively.
Given the growing reliance on edge computing, many organizations are also likely to see their needs increase over time. HCI systems can meet these scaling requirements, allowing users to respond to changes in demand without delay or the need for complex reconfiguration exercises.
Centralized management tools allow administrators to remotely manage and secure all edge sites from a single console. The system then automatically allocates compute and storage resources in real time, optimizing hardware utilization for maximum efficiency and avoiding unnecessary, costly overprovisioning.
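To show the idea behind automatic resource allocation, here is a small Python sketch that places workloads onto edge nodes using a simple first-fit policy and flags when extra capacity would be needed. The site, node and workload names and all capacity figures are hypothetical, and real HCI schedulers are considerably more sophisticated.

```python
# Illustrative first-fit placement of workloads onto edge nodes, showing how a
# central scheduler might pack resources and surface capacity shortfalls.
# Node and workload names plus all capacities are hypothetical.

nodes = [
    {"name": "site-a-node-1", "cpu_free": 8, "ram_free_gb": 32},
    {"name": "site-a-node-2", "cpu_free": 8, "ram_free_gb": 32},
]

workloads = [
    {"name": "pos-app", "cpu": 2, "ram_gb": 8},
    {"name": "video-analytics", "cpu": 6, "ram_gb": 24},
    {"name": "inventory-db", "cpu": 4, "ram_gb": 16},
]

for wl in workloads:
    # Pick the first node with enough spare CPU and memory for this workload.
    placed = next(
        (n for n in nodes if n["cpu_free"] >= wl["cpu"] and n["ram_free_gb"] >= wl["ram_gb"]),
        None,
    )
    if placed:
        placed["cpu_free"] -= wl["cpu"]
        placed["ram_free_gb"] -= wl["ram_gb"]
        print(f"placed {wl['name']} on {placed['name']}")
    else:
        print(f"no capacity for {wl['name']}: more hardware (or rebalancing) needed")
```

Packing workloads against actual free capacity in this way, rather than sizing each site for peak demand in isolation, is what lets a central console avoid overprovisioning.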
Putting all this together, organizations that rely on effective edge infrastructure now have a proven alternative to inefficient legacy solutions. As a result, it is now possible to create a win-win edge strategy that delivers the benefits of high-performance remote computing combined with the value and flexibility of cloud computing.