
Why NPM Tools Need to Work Across On-Prem, Cloud, and Hybrid Environments | eWEEK

The networking landscape has shifted dramatically over the past two years as remote work, cloud migration, and container-based architectures have augmented network infrastructure. This digital transformation has put added pressure on NetOps teams to gain visibility into on-premises, cloud, and hybrid environments to ensure performance of the entire network and applications, regardless of location.

As a result, traditional network performance monitoring (NPM) tools are no longer enough for organizations that want to proactively plan, monitor, and optimize their network services or that want to find and fix network performance problems quickly.

In fact, according to Gartner's Market Guide for Network Performance Monitoring 2021, by 2025, 60% of organizations will have seen a reduction in traditional network monitoring tool needs due to increases in remote work and cloud migration, compared to 2021.

In essence, organizations can no longer afford to have visibility gaps across infrastructure. To overcome this challenge, they need to ensure their infrastructure is equipped to handle issues both on and off premises as well as leverage a combination of data sources to provide a holistic end-to-end view of the entire network.

How is this done? Let's dive into more details on the state of NPM and how it's adapting to offer cloud and hybrid visibility.

Also see: The New Focus on CloudOps: How Enterprise Cloud Migration Can Succeed

NPM tools leverage a combination of data sources, including network-device-generated traffic data; raw network packets; and network-device-generated health metrics and events to monitor, diagnose, and find performance issues. This includes giving NetOps teams forensic data to identify the root cause of performance issues and insights into the end-user experience.
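As a minimal illustration of how flow-style traffic data feeds this kind of analysis, the sketch below aggregates hypothetical flow records into per-conversation byte totals. The field names and addresses are invented for illustration; real collectors would ingest NetFlow/IPFIX records with many more fields.

```python
from collections import defaultdict

# Hypothetical flow records, loosely modeled on NetFlow key fields.
flows = [
    {"src": "10.0.0.5", "dst": "10.0.1.9", "dport": 443, "bytes": 1200},
    {"src": "10.0.0.5", "dst": "10.0.1.9", "dport": 443, "bytes": 800},
    {"src": "10.0.0.7", "dst": "10.0.1.9", "dport": 53,  "bytes": 150},
]

def aggregate_by_conversation(records):
    """Sum bytes per (src, dst, dport) conversation key."""
    totals = defaultdict(int)
    for r in records:
        totals[(r["src"], r["dst"], r["dport"])] += r["bytes"]
    return dict(totals)

totals = aggregate_by_conversation(flows)
# The two HTTPS records collapse into one 2,000-byte conversation.
```

Per-conversation rollups like this are the raw material for the top-talker and root-cause views NPM tools present to NetOps teams.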

Traditional NPM tools focus on the core network and data center, capturing information from within the traditional network perimeter. The traffic types, flow rates, and traffic patterns are largely known quantities, apart from the occasional network anomaly.

Customers with on-premises network designs have a robust, scalable, and stable environment built from firewalls, reliable WAN edge devices, and other network components. In these environments, NPM tools are aware of nearly all the issues that could occur, and they monitor and act based on this analysis.

With customers migrating to cloud and container-based architectures (and other technologies like SD-WAN and microservices), it's become more difficult to capture traffic and isolate problems. Today, organizations need tools that can monitor the LAN, the WAN, and into the cloud, so let's dive into each area that's impacting this shift around visibility.

It's no secret that the remote workforce has resulted in organizations redesigning network infrastructure, but these changes often don't account for monitoring or visibility.

For example, with the increase in the remote workforce, the number of connections coming in through VPN concentrators or firewalls has increased tremendously for most large enterprises. These devices and network designs must be reworked to accommodate this increase in scale, throughput, and number of user access licenses.

With more remote workers, enterprises are looking for NPM tools that can monitor and analyze traffic patterns, utilization, and application performance from the VPN concentrators. NPM tools can now read useful data (flow, SNMP, API, etc.) from these devices to help analyze and monitor remote user traffic.
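One concrete check this data enables is license-capacity monitoring on a VPN concentrator. The sketch below is purely illustrative; the field names and the 80% threshold are assumptions, not any vendor's API.

```python
def vpn_license_alert(active_sessions, licensed_sessions, threshold=0.8):
    """Flag when VPN session usage nears its license cap.

    Hypothetical check: field names and the default 80% threshold
    are invented for illustration, not taken from any vendor.
    """
    utilization = active_sessions / licensed_sessions
    return utilization, utilization >= threshold

# 900 active sessions against a 1,000-session license: 90% utilization,
# which crosses the 80% alert threshold.
util, alert = vpn_license_alert(active_sessions=900, licensed_sessions=1000)
```

An NPM tool would feed values like `active_sessions` from SNMP polls or vendor APIs and raise an alarm before users are locked out.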

Cloud migration has been happening for years but, due to recent events, has accelerated at a staggering pace. The drive toward more cloud-enabled applications and services, which may not be owned by the organizations, further complicates monitoring and troubleshooting.

For example, there are cloud solutions offered by multiple cloud providers (like Google Cloud, Amazon AWS, and Microsoft Azure) that also provide services and applications hosted on their portals or managed by other vendors.

With this mix of vendors, producing useful, readable data for NPM tools to understand and monitor can be challenging. But NPM tools can now capture raw cloud data and convert it to readable IPFIX data, or use APIs to create useful reports for monitoring and analysis.
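To make this concrete, the sketch below parses one AWS VPC Flow Log line (the documented default version-2 field order) into a normalized, IPFIX-style record. The sample values are invented; a real pipeline would also handle custom log formats and `NODATA`/`SKIPDATA` lines.

```python
# Field order follows AWS's documented default v2 VPC Flow Log format.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]
INT_FIELDS = {"version", "srcport", "dstport", "protocol",
              "packets", "bytes", "start", "end"}

def parse_vpc_flow_line(line):
    """Map a space-separated flow-log line onto named, typed fields."""
    record = dict(zip(FIELDS, line.split()))
    return {k: int(v) if k in INT_FIELDS else v for k, v in record.items()}

# Invented sample line: one TCP/443 flow of 10 packets and 8,400 bytes.
sample = ("2 123456789012 eni-0a1b2c3d 10.0.0.5 10.0.1.9 "
          "49152 443 6 10 8400 1609459200 1609459260 ACCEPT OK")
rec = parse_vpc_flow_line(sample)
```

Once normalized like this, cloud flow records can be merged with on-premises NetFlow/IPFIX data in a single reporting pipeline.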

Some NPM tools have global or private agents deployed in the cloud at various sites, which use synthetic traffic to monitor and analyze network SLAs. Cloud vendors are also trying to add more ways (advanced APIs, service tags, etc.) for raw data to be easily accessible to NPM tools.
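A much-simplified version of the SLA evaluation such synthetic agents perform might look like the following. The latency samples and SLA target are invented, and the nearest-rank percentile is a rough approximation of what production agents compute.

```python
def sla_met(latency_samples_ms, sla_ms, percentile=0.95):
    """Check whether a given percentile of synthetic-probe latencies
    stays within the SLA target (nearest-rank approximation).

    Sketch only: real agents would also track loss, jitter, and path data.
    """
    ordered = sorted(latency_samples_ms)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[idx] <= sla_ms

# Hypothetical round-trip times in ms from probes to a cloud endpoint;
# one probe (95 ms) hit a transient slowdown.
samples = [22, 25, 24, 30, 28, 26, 23, 95, 27, 24]
```

Running `sla_met(samples, sla_ms=100)` passes, while a tighter 50 ms target fails because the slow probe falls inside the evaluated percentile.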

Containerization and microservices allow an organization to package software and its dependencies in an isolated unit, either on-premises or in the cloud. Having visibility into these services is crucial for managing performance across a user base, but it's fundamentally different due to changes in traffic flow.

NPM tools need to be able to access the raw data from these containers and microservices, analyze it, and export it in a useful, user-reportable format. Each vendor has its own way of implementing how containers or services are hosted on-premises or in the cloud, and its own format in which raw data can be accessed. Most vendors have APIs to get access to this data; it's simply a matter of NPM tools implementing each API format to fetch it.
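The per-vendor translation step can be sketched as a mapping from each vendor's native field names onto one common schema. All vendor names and payload fields below are invented for illustration.

```python
# Hypothetical mappings from vendor-native field names to a common schema.
VENDOR_MAPPINGS = {
    "vendor_a": {"pod": "name", "rx_bytes": "net_in", "tx_bytes": "net_out"},
    "vendor_b": {"containerName": "name", "bytesReceived": "net_in",
                 "bytesSent": "net_out"},
}

def normalize(vendor, payload):
    """Rename a vendor API payload's fields into the common schema."""
    mapping = VENDOR_MAPPINGS[vendor]
    return {common: payload[native] for native, common in mapping.items()}

# Two differently shaped payloads normalize to the same field names.
a = normalize("vendor_a", {"pod": "web-1", "rx_bytes": 1024, "tx_bytes": 2048})
b = normalize("vendor_b", {"containerName": "web-1",
                           "bytesReceived": 512, "bytesSent": 256})
```

Keeping the schema mapping as data (rather than per-vendor code paths) makes adding a new container platform a configuration change instead of a rewrite.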

Different cloud solutions host the end-to-end solution in different ways. For example, deploying an NPM tool and reading raw data from Amazon AWS is completely different from hosting the tool on Google's GCP platform. The data formats provided by different vendors use different data variables, service names, region formats, etc. Large new enterprise private cloud solutions are also emerging (like SAP or Salesforce private clouds), which add to the complexity of reading data from hybrid cloud-hosted enterprises.
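The region-format difference alone is a small but real example: AWS writes `us-east-1` while GCP writes `us-east1`. The sketch below normalizes only this spelling difference (it does not claim the regions refer to the same physical location); a production tool would need full, maintained per-provider tables.

```python
import re

def normalize_region(provider, region):
    """Normalize region spellings to the hyphenated AWS-style form.

    GCP omits the hyphen before the trailing digit (us-east1), so we
    reinsert it; other providers pass through unchanged. Sketch only.
    """
    if provider == "gcp":
        return re.sub(r"([a-z])(\d+)$", r"\1-\2", region)
    return region
```

Normalizing labels like this lets reports group traffic by region consistently across clouds.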

The shift to cloud and the changing network architecture is putting a premium on NPM solutions that work across all these technologies. This means capturing data that includes streaming telemetry, flow data, packet data, and SNMP, which can be used for such outputs as predictive analysis, real-time monitoring, AI/ML-assisted analytics, and historical analysis.
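As a toy example of the historical-analysis side, a simple statistical baseline can stand in for the AI/ML-assisted analytics mentioned above: flag any sample that deviates too far from the historical mean. The sample values and the three-sigma threshold are illustrative assumptions.

```python
import statistics

def is_anomalous(history, current, sigma=3.0):
    """Flag a sample deviating more than `sigma` standard deviations
    from its historical baseline. A deliberately simple stand-in for
    real anomaly-detection models; threshold is an assumption.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(current - mean) > sigma * stdev

# Hypothetical link-throughput history in Mbps, hovering around 100.
history = [100, 102, 98, 101, 99, 100, 103, 97]
```

Against this baseline, a reading of 150 Mbps is flagged while 102 Mbps is treated as normal variation; real NPM analytics layer seasonality and multi-metric models on top of this idea.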

Enterprises now need to think about how NPM tools plan to address newly hosted technologies and the migration from an on-premises design to a complete cloud solution.

About the Author:

Jubil Mathew is a Technical Engineer at LiveAction.
