Picture a world where various aspects of your life can be effortlessly attended to through a single device. Take Fitbit technology, for example. In the future, a Fitbit may be able to send health data to the wearer’s personal trainer, who will analyze it and devise a customized workout. The wearer receives the new workout, and it is automatically downloaded to a treadmill the next time they visit the gym. In the meantime, their nutritionist plans a suitable post-workout meal. The wearer’s fridge identifies whether any of the meal’s ingredients are missing and orders them from an online retailer in time for the wearer’s return from the gym.

The evolution of services, in which billions of things will be connected over networks that are largely virtualized, is a powerful combination. It will have significant implications for the way digital services are created, delivered, and assured.

Service Assurance Today

To get a better grasp of where we are headed, we first need to understand where we are today, and in particular how services are currently assured. Every operator typically has multiple network domains: specific areas of the network made up of thousands of network elements. To manage a network domain, there must be an inventory system in place. These systems are usually populated manually, which results in serious data integrity issues. Yet despite this untrustworthy data, that inventory is the heart and soul of the association between the resources in the network and the services the domain provides.
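To make the role of that inventory concrete, here is a minimal sketch of a per-domain inventory as a mapping from network resources to the services that ride on them. The element and service names are purely illustrative, not any vendor's actual schema; in practice these records are populated by hand, which is where the integrity problems creep in.

```python
# Minimal sketch of a per-domain inventory: a manually maintained mapping
# from network resources to the customer services that depend on them.
# All identifiers are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field

@dataclass
class Resource:
    element_id: str                                  # e.g. a router or switch identifier
    site: str
    services: list = field(default_factory=list)     # service IDs carried by this element

inventory = {
    "router-001": Resource("router-001", "toronto-core", ["svc-ethernet-42", "svc-volte-7"]),
    "switch-017": Resource("switch-017", "ottawa-agg", ["svc-ethernet-42"]),
}

def services_at_risk(element_id: str) -> list:
    """Look up which services are affected if a given resource fails."""
    resource = inventory.get(element_id)
    return resource.services if resource else []

print(services_at_risk("router-001"))  # ['svc-ethernet-42', 'svc-volte-7']
```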

These network domains are instrumented so that, when one of those resources fails, an alarm is raised. The biggest issue with the fault systems typically in place is that a single network domain can generate hundreds of alarms a day. People are therefore relied upon to look at the raw alarm output, prioritize it, and decide what to work on first and what to fix as problems occur. You end up with a team of detectives for each aspect of the network, digging through information to try to understand what is going on.
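As a rough illustration of the triage those human detectives perform, the sketch below takes a batch of raw alarms and ranks them by severity and by how many services each affected resource carries. The field names and severity scale are assumptions made for the example, not a real fault-management schema.

```python
# Illustrative alarm triage: worst severity and widest service impact first.
# Field names and severity levels are assumptions for this sketch.
SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2, "warning": 3}

raw_alarms = [
    {"resource": "switch-017", "severity": "major", "services_impacted": 1},
    {"resource": "router-001", "severity": "critical", "services_impacted": 2},
    {"resource": "router-009", "severity": "minor", "services_impacted": 0},
]

def triage(alarms):
    """Order alarms the way an operator would: highest severity, then biggest impact."""
    return sorted(
        alarms,
        key=lambda a: (SEVERITY_RANK.get(a["severity"], 99), -a["services_impacted"]),
    )

for alarm in triage(raw_alarms):
    print(alarm["resource"], alarm["severity"], alarm["services_impacted"])
```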

NFV Stage 1: Virtualization of Existing Network Elements

Now, what happens when Network Functions Virtualization (NFV) comes along? The first stage of NFV can be characterized as the virtualization of existing network elements. The impact of this stage on service assurance is threefold. First, the virtualization layers of the underlying NFV infrastructure (NFVI) need to be managed. Second, these virtual network functions (VNFs) cannot be managed in the same fashion as existing physical networks, since they have attributes that existing OSS cannot handle. Third, and most importantly, there will be a mix of existing physical network elements and virtual network functions, which brings a substantial increase in complexity and in the potential failure modes that somehow need to be addressed.
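One way to picture that third point is a single inventory that now has to hold both kinds of records, with the virtual entries carrying attributes such as host and lifecycle state that a legacy OSS schema has no place for. The sketch below is a hypothetical illustration of that mix; none of the field names come from an actual OSS.

```python
# Hypothetical illustration of a mixed inventory after NFV stage 1:
# physical network elements alongside virtual network functions whose
# extra attributes legacy OSS schemas were never designed to hold.
physical_elements = [
    {"id": "router-001", "type": "PNF", "site": "toronto-core"},
]

virtual_functions = [
    {
        "id": "vfw-cust-123",
        "type": "VNF",
        "host": "compute-node-7",           # NFVI attribute with no PNF equivalent
        "lifecycle_state": "instantiated",  # can change at any moment
    },
]

def describe(entity):
    extra = {k: v for k, v in entity.items() if k not in ("id", "type")}
    return f"{entity['id']} ({entity['type']}): {extra}"

for e in physical_elements + virtual_functions:
    print(describe(e))
```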

So what can be done, in terms of OSS strategy, to address this first stage of NFV? First, new systems must be added to manage the new virtualized network elements; existing OSSs cannot manage these new network functions because significant gaps exist. There is also substantially more complexity: for example, CPU utilization down in the compute infrastructure can cause delay and delay variation in VoLTE calls. However, this approach still requires significant investment and does not even provide the agility that is the main rationale for introducing NFV in the first place.
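To see why a compute-layer metric matters to a service-layer KPI, the hedged sketch below correlates a made-up CPU-utilization series from the NFVI with a made-up VoLTE delay-variation series. A strong correlation between the two is exactly the kind of cross-layer link existing OSSs were never built to draw.

```python
# Assumption-laden sketch: correlate NFVI CPU utilization with VoLTE delay
# variation (jitter) to show why compute-layer metrics matter to service quality.
# The sample data is fabricated purely for illustration.
from statistics import correlation  # available in Python 3.10+

cpu_utilization_pct = [35, 42, 55, 71, 88, 93, 90, 60]   # compute host samples
volte_jitter_ms     = [4, 5, 7, 11, 19, 24, 22, 9]       # matching call-quality samples

r = correlation(cpu_utilization_pct, volte_jitter_ms)
print(f"CPU vs. jitter correlation: {r:.2f}")  # close to 1.0 => compute load tracks call quality
```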

NFV Stage 2: Micro-Services

The second stage of NFV occurs when micro-services start being deployed. Today there are, for example, firewalls or DNS servers shared across multiple customers. With micro-services, each customer instead instantiates their own VNF for their own firewall or DNS. In this world of separate VNFs per customer service, rather than shared multi-tenant VNFs, there will be a migration to container-based virtualization, and the network topology becomes extremely dynamic. The virtualization layers must be fully automated and assured, as one simply could not manage all of this manually. In addition, the dynamic topology fundamentally breaks the existing OSS paradigm.
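A toy way to picture the shift from shared VNFs to per-customer instances is sketched below. The instantiation calls and naming are purely hypothetical, not a real orchestrator API, but they show how every new customer adds nodes and links to the topology, and every teardown changes it again, which is why a static, manually populated model cannot keep up.

```python
# Toy illustration (not a real orchestrator API): each new customer gets its
# own firewall and DNS VNF instances, and the service topology graph grows
# and changes with every instantiation or teardown.
topology = {}   # VNF instance id -> list of connected instance ids

def instantiate_customer_vnfs(customer_id):
    """Spin up per-customer firewall and DNS instances and wire them together."""
    fw = f"vfw-{customer_id}"
    dns = f"vdns-{customer_id}"
    topology[fw] = [dns]
    topology[dns] = [fw]
    return fw, dns

def teardown_customer_vnfs(customer_id):
    """Remove a customer's instances; the topology changes yet again."""
    for node in (f"vfw-{customer_id}", f"vdns-{customer_id}"):
        topology.pop(node, None)

instantiate_customer_vnfs("cust-123")
instantiate_customer_vnfs("cust-456")
teardown_customer_vnfs("cust-123")
print(topology)  # only cust-456's instances remain
```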

Even if service providers manage to deliver the first stage of NFV without a large OSS overhaul, the emergence of micro-services will force change, because it fundamentally breaks the existing OSS paradigm. How do you automate at that level of scale? How do you inform the fault and performance OSS of entities as they dynamically appear and disappear? How do you sectionalize issues as the service path changes? The topology of the network will be continually changing as customers add and create services dynamically. This is what is driving the need for a fundamental OSS re-architecture.
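The sketch below hints at the kind of automation those questions imply: the assurance layer subscribes to lifecycle events and updates its own model as entities appear and disappear, rather than waiting for a manual inventory refresh. The event shapes and handler names here are assumptions for illustration only.

```python
# Hedged sketch of event-driven assurance: lifecycle events from the
# virtualization layer keep the assurance model current without manual
# inventory updates. Event fields and actions are illustrative assumptions.
assurance_model = set()   # entities the fault/performance OSS currently monitors

def on_lifecycle_event(event):
    """React to a VNF appearing or disappearing."""
    entity, action = event["entity_id"], event["action"]
    if action == "instantiated":
        assurance_model.add(entity)       # start collecting faults/metrics for it
    elif action == "terminated":
        assurance_model.discard(entity)   # stop monitoring; keep the model in sync

events = [
    {"entity_id": "vfw-cust-123", "action": "instantiated"},
    {"entity_id": "vdns-cust-123", "action": "instantiated"},
    {"entity_id": "vfw-cust-123", "action": "terminated"},
]

for e in events:
    on_lifecycle_event(e)

print(assurance_model)  # {'vdns-cust-123'}
```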

Today, CENX helps service providers with solutions for real-time orchestrated service assurance. Exanova Service Intelligence collects all of this real-time data and populates the service information model dynamically, addressing the challenges of the new network. In essence, Exanova helps deliver service agility and quality services, and reduces opex for service providers around the world.
