In A Christmas Carol, the main character is visited by three ghosts who show him the error of his ways, while also offering him an opportunity to correct his course.
Network cybersecurity teams may find themselves in a similar pattern of reflection as the year draws to a close and we look ahead.
Looking back on the year, some may be haunted by the ghosts of past disruptions, as well as by the ghost of present-day architectural concerns.
In that context, a visit from the Ghost of Network Operations Yet to Come could be welcome, especially if it charts a path forward to improve performance and resilience.
Over to you, spirits.
The ghost of network operations past
The first ghostly visit conjures flickering memories of network failures with cascading consequences, and of the manual interventions that followed.
Ops teams don't have to think back very far to recall problems caused by a lack of resiliency in network routes, where the failure of a single link set off a domino effect for everyone downstream, or for anyone with an interconnection or dependency on that connectivity. Engineers are probably still haunted by incidents like these: the blast radius was well beyond any conceivable comfort threshold.
Another memory comes into focus: the manual intervention that affected organizations needed in order to respond. When systems went down and the organization lost connectivity, troubleshooting began immediately: what was the cause? How can connectivity be restored? Do we have to manually switch to a backup link, re-advertise our routes, and reroute traffic to get back online, knowing it could take many hours?
It is a stark reminder of a position neither providers nor customer organizations ever want to find themselves in again.
The ghost of network operations present
The arrival of the Ghost of Network Operations Present prompts engineers to see the sweet spots some organizations have hit today: built-in resiliency, redundant routes, and a degree of automation made possible by the shift to software-defined networking.
Observing other organizations, it's clear that software-defined networking makes teams happier. Upgrades and maintenance no longer require physically disconnecting a network device, and the impact of changes can be tested non-destructively in advance to understand their effect before they are applied to the production environment. Teams have noticeably fewer 'unknowns' to deal with during upgrades, and a better understanding of the environment a change will affect.
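To make the idea of non-destructive change testing concrete, here is a minimal, hypothetical Python sketch (the network model, the proposed change, and the validation rules are all invented for illustration): a change is applied to an in-memory copy of the network model and validated before anything touches production.

```python
# Hypothetical sketch of non-destructive change testing: the proposed
# change is applied to a copy of the network model and validated, so
# its impact is understood before it reaches production.
import copy

def apply_change(model, change):
    """Apply a change to a copy of a simple network model (a dict)."""
    updated = copy.deepcopy(model)  # never mutate the live model
    updated.update(change)
    return updated

def validate(model):
    """Invented validation rules for illustration."""
    problems = []
    if model["mtu"] < 1280:
        problems.append("MTU below IPv6 minimum")
    if not model["redundant_uplinks"]:
        problems.append("no redundant uplink configured")
    return problems

production = {"mtu": 1500, "redundant_uplinks": True}
proposed = {"mtu": 1000}

candidate = apply_change(production, proposed)
issues = validate(candidate)
print(issues or "safe to apply")  # the bad MTU is caught before rollout
```

The key design point is that validation runs against a copy, so the production model is untouched whether the change passes or fails.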
But it quickly becomes clear that this visibility has a limitation: it extends only to what is hosted within their own domain. In that regard, the Ghost of Network Operations Present serves one of its purposes: reminding us that the present is constantly in flux, and that with enough time, the present becomes the past.
At this point, there is a natural turning point in the ghostly encounter, and the question arises in the mind of the network ops engineer: what does the future hold?
The vision then transitions to an organization heavily dependent on hosting environments outside its directly controlled domain, and therefore beyond the reach of its software-defined capabilities. Applications are instead hosted in the cloud, and while the cloud infrastructure is resilient, the applications themselves appear brittle and prone to performance degradation, with critical functions suddenly becoming unavailable.
Cut to an image of frantic engineers as payments suddenly stop completing; then workplace communication channels break down; then employees request data and get nothing in return.
Make it stop.
The ghost of network operations yet to come
If the future direction was not yet clear, the third apparition makes it so. There is a need to address the root cause of application outages and instability by extending visibility beyond the directly controlled domain to all domains hosting components of the end-to-end application architecture.
There must be oversight of the complex orchestration of components that allows an application to function. Only by mapping that entire service chain can individual points of dependency be identified: the same points that cause parts of the application to become temporarily inaccessible.
With that clarity comes a path forward: Network Assurance. This focuses on the end-to-end 'network' of interconnected private environments, service providers, and services that together form the user experience of an application or service, spanning Internet Service Providers (ISPs), public cloud, SaaS, and more. It provides a holistic view of the digital experience by showing each connected element, such as a router, DNS resolver, or web server, and its impact in relation to other elements and on performance as a whole.
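That holistic view can be illustrated with a simplified, hypothetical model (the element names and latency figures are invented): each element in the service chain reports its own contribution, and the end-to-end picture attributes overall delay to individual elements, making the bottleneck visible.

```python
# Simplified, hypothetical model of an end-to-end service chain:
# each element reports its latency contribution, and the holistic
# view attributes overall delay to the worst-performing element.
from dataclasses import dataclass

@dataclass
class Element:
    name: str          # e.g. "ISP edge router", "DNS resolver"
    latency_ms: float  # measured contribution of this element

def end_to_end_view(chain):
    """Return total latency and the element contributing the most."""
    total = sum(e.latency_ms for e in chain)
    worst = max(chain, key=lambda e: e.latency_ms)
    return total, worst

chain = [
    Element("ISP edge router", 8.0),
    Element("DNS resolver", 42.0),      # a slow resolver stands out
    Element("Cloud load balancer", 5.0),
    Element("Web server", 17.0),
]
total, worst = end_to_end_view(chain)
print(f"end-to-end: {total:.1f} ms, bottleneck: {worst.name}")
```

A real Network Assurance platform derives these per-element measurements from active and passive probing across domains it does not own; the sketch only shows why the decomposition, not just the end-to-end total, is what makes a dependency actionable.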
But end-to-end visibility alone is not enough: there is also a requirement to use telemetry and insights as a basis for initiating positive, automated action. Understanding which parts of the monitoring and remediation process can and cannot be automated is important to staying forward-looking. It also brings closed-loop recovery into focus: the idea that systems can recognize the causes of an application failure and automatically initiate escalation and resolution, without human intervention.
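The closed-loop idea can be sketched in a few lines of Python. Everything here is hypothetical and invented for illustration (the telemetry fields, cause names, and remediation actions): telemetry is classified into a known cause, recognized causes trigger an automated remediation, and anything unrecognized is escalated to a human.

```python
# Hypothetical sketch of closed-loop recovery: classify telemetry into
# a known root cause; recognized causes trigger automated remediation,
# unknown causes are escalated to a human.

def classify(telemetry):
    """Map raw telemetry to a known root cause, or None."""
    if telemetry.get("link_state") == "down":
        return "primary_link_down"
    if telemetry.get("dns_errors", 0) > 10:
        return "resolver_failure"
    return None

def failover_to_backup():
    return "rerouted via backup link"

def switch_resolver():
    return "switched to secondary DNS resolver"

REMEDIATIONS = {
    "primary_link_down": failover_to_backup,
    "resolver_failure": switch_resolver,
}

def closed_loop(telemetry):
    cause = classify(telemetry)
    action = REMEDIATIONS.get(cause)
    if action is None:
        return "escalate to human"  # unrecognized cause: keep a human in the loop
    return action()

print(closed_loop({"link_state": "down"}))
```

The explicit fallback branch captures the point made above: knowing which failures can be remediated automatically matters just as much as the automation itself.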
We've listed the best network monitoring tools.
This article was produced as part of Ny BreakingPro's Expert Insights channel, where we profile the best and brightest minds in today's technology industry. The views expressed here are those of the author and are not necessarily those of Ny BreakingPro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro