The network today has become a dynamic system, consisting of both physical and virtual infrastructure. As a result of this new environment, service providers are faced with additional challenges when it comes to their network operations. During the MEF16 event in Baltimore, CENX’s CTO, Chris Purdy, had the opportunity to sit down with TelecomTV to talk about the challenge of assuring services across next generation networks. Here is part two of that interview:
TelecomTV: I wanted to ask you about some of the challenges faced in this transition to virtual network services, specifically for service providers.
Chris Purdy: Virtualization makes it a lot harder to assure your services. Before, things were very deterministic on physical boxes. Now what’s happening, of course, is the VNFs are going across a shared compute infrastructure. I think there are two major ways that it makes it more complex.
The first is, historically, whatever function a particular network element performed ran on a single box, and it was all tightly tied together. So, if a failure or issue occurred, it was on that box and the cause was very clear. Now what happens is there's an underlying compute infrastructure that's a shared resource for a large number of VNFs. And the mapping of those VNFs to the compute infrastructure is absolutely a very significant challenge.
If you look at data centers, we ourselves are a big application with many components that runs in a virtualized environment, and there have been a number of times where we've had degradation in the performance of our application because of underlying compute, storage, or network issues that are very difficult to sectionalize and understand. That's going to be the first big challenge: how do I understand an SLA between the whole compute, storage, and network infrastructure in the NFVI, and make sure the VNF is getting what it needs to be successful, so you can sectionalize those problems?
The second way it adds complexity is that any one NFV domain will be delivering VNFs of many different types. I might be delivering a firewall to my IP VPN service, I might be delivering a virtual EPC to my mobile core, and I might be delivering a virtual x gateway to my mobile private network. So I’ve got many different network domains in a service provider, and each domain is now going to have a network topology, which consists of both physical network elements and virtual network functions to deliver an end-to-end service. The complexity of sectionalizing troubles in that combined physical and virtual environment is going to be one of the greatest challenges that service providers face. This is an area where we have invested significantly.
TelecomTV: I also wanted to ask you about CENX’s role in the coalescence between standardization and open initiatives.
Chris Purdy: It’s fascinating the way this is all happening. The MEF has discussed how it’s trying to take an agile approach to standards, which is very important. We’ve been an agile vendor for a long time. But it’s interesting that when we sell to the big service providers, we often develop the product in an agile way and then shift to a waterfall process when it comes to putting it into production. So there is a lot of change that has to happen.
But, I think there has to be a much tighter working relationship between the standards bodies and the open source community. It has to happen in an agile and evolutionary way. I do personally have some concerns that when you do these things incrementally, you don’t step back and do the high-level design often enough to make sure that you’ve got your framework and information models right. Instead, it’s constant evolution. Unfortunately, that is the world we find ourselves in. It’s going to be an interesting world, because it’s certain that we aren’t going to arrive at locked-down standards that survive for long periods of time; we’re going to see continuous evolution.