What actually goes on behind the glowing LEDs?

In 1919 the Irish poet WB Yeats wrote ‘The Second Coming’, a work that conjured the image of a “Spiritus Mundi”: a vast warehouse containing all the archetypes of human concepts. This enormous storage facility was located somewhere out in the inhospitable desert, yet magically accessible to every person walking the earth. Almost a century later, in the age of high speed data transport, intelligent networks and virtualisation, it’s easy to forget that behind the almost magical connection delivering information to the screen in front of the end user’s eyes there is a solid, squat building full of humming electrical equipment. The datacentre is almost an abstract concept in itself. It sits at the heart of the network and carries out many of the critical tasks that keep services flowing, yet rarely, if ever, occupies the attention of the millions of customers it serves.

The vision of a datacentre as a hulking steel warehouse packed with racks and racks of servers studded with flashing lights isn’t far wrong. But what actually goes on behind the glowing LEDs? A telecom operator’s datacentre houses critical applications such as OSS and BSS and everything essential for running the Master Control Centre. As a result, a datacentre requires 24/7 uninterrupted availability, high security, high speed connectivity and lots and lots of power. That power is by far the biggest cost in running a datacentre, so a carrier that can cut its electricity bill by 30 per cent can dramatically reduce its operating costs. This consideration has influenced a number of approaches to datacentre building. Scale, a perennial concern in the telecoms industry, is another important dynamic affecting which approach an operator takes.

UK-based fixed and mobile service provider TalkTalk, which serves both the enterprise and consumer markets, favours the ‘build big’ approach. The company recently opened a new facility in Corsham, built on 30 acres of Wiltshire scrubland. This datacentre covers a very large physical area in a campus-style build, which at the moment is only one-tenth occupied. With “significant” amounts of power available on site, TalkTalk can continue to expand and build out at this location over the next ten years, a very different proposition to urban build-outs, where operations are restricted by the availability of physical building sites.

Dave Mullender, head of network services at TalkTalk, says that as a fairly large consumer of datacentre space—both for the company’s own use and for delivery of services out to its customers—the way TalkTalk uses datacentres is a key consideration. “We need to ensure they are of the right quality and sustainability. We own some facilities outright and some we use on a rental model. But with installations like Corsham we wanted to find some facilities where we could expand and scale as we continue to grow our business. Our datacentre requirements grow as we do and we didn’t want to get into a ‘traditional building’. We needed something that was able to scale efficiently, something that was datacentre-specific and financially sensible in terms of investment.” TalkTalk uses very modern techniques centred around modular building, in much the same way as oil rigs are constructed before they are floated out to sea and assembled. Mullender says his company can build the entire datacentre in a factory and have it delivered as several separate modules which are then ‘plugged together’ on site. “This means rather than taking a year on site, we can build the datacentre in one or two months. We can scale much faster and flex to the business requirements of ourselves and our partner. We can also control investments so we are only investing in the space that we need and offer better value to our partners and customers,” says Mullender.

There is a mind-boggling array of requirements and efficiencies to be considered in datacentre planning. For campus deployments at this level “a hell of a lot of power” is required, Mullender says, although these modular build techniques also include cooling systems which use fresh air to make the datacentre much more efficient. According to TalkTalk, its PUE (power usage effectiveness) is about 1.25. That means for each unit of power delivered to the servers, the company “wastes” a quarter of a unit on cooling and other overhead: power never delivered to the actual equipment. Mullender reckons an average datacentre would have a PUE of about two, highlighting the economic and environmental benefits of TalkTalk’s latest build.
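The arithmetic behind those figures is simple: PUE is total facility power divided by the power reaching the IT equipment. A minimal sketch, using illustrative numbers rather than TalkTalk’s actual meter readings:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT load."""
    return total_facility_kw / it_equipment_kw

# Illustrative figures only: a 1 MW IT load plus 250 kW of cooling
# and other overhead gives the quoted ratio of 1.25.
print(pue(1250.0, 1000.0))  # 1.25

# An "average" facility at PUE 2.0 burns a full extra unit of
# overhead for every unit that reaches the servers.
print(pue(2000.0, 1000.0))  # 2.0
```

At PUE 1.25 versus 2.0, the same IT load draws 1,250 kW from the grid instead of 2,000 kW, which is where the 30 per cent electricity saving mentioned above comes from.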

However, there are also considerable planning permission requirements that need to be met in order to keep expanding such a datacentre. In fact, while technology has sped up the construction of the facilities themselves, long, drawn-out planning permission applications can easily become the bottleneck. New builds are not always possible, though, and a carrier may well end up with multiple variations on datacentre models brought in through acquisitions, some more suited to specific roles and others multi-purpose. Whatever the case, Mullender warns that the operator has to be prepared to invest in these kinds of facilities, although another trend is also at work in the background.

Mike Sapien, industry analyst with Ovum, believes that there is a high level of consolidation going on in the datacentre space, with operators simultaneously improving the quality of the facilities they are moving to. “France Telecom went from 30 datacentres down to ten and introduced a state of the art super centre,” he says, noting that there is a consolidation of usage taking place too. At one point it may have been that a datacentre was designed for either in-house use, to help the operator run its own operations, or for retail purposes where capacity would be sold to customers. But now operators focused on external markets have realised that their internal needs are similar and have started to use single datacentres for both purposes.

“AT&T announced a datacentre build in North Carolina and they use it for both internal and external projects—whatever fills it up, fills it up,” Sapien says. “After all, you still need to staff the place whether it’s full or empty. And because datacentre real estate is enormously expensive, there has been a pruning of datacentre needs and a building of premium centres as well as operators creating regional hubs that cater to all operations within a given geographic region.” But are operators always best placed to be building and constructing these datacentres? Sapien believes that in general, 50 per cent of operators are building their own datacentres outright, with the other half outsourcing the creation and running of the installations to third parties.

Scale plays a major role, and even some operators that manage their own datacentre installations are still going to specialists for their hardware or real estate needs; either approach is welcomed by European backbone network operator and cloud services firm Interoute.

Matthew Finnie, CTO of Interoute, claims his firm is best placed to consolidate the platform from which it derives its services. “The economics of the network are far more efficient than the economics of datacentres. With the latter, you’re placing content in a physical datacentre where the availability of that content is limited by the availability of that location. Yet if you adopt a network model you can take a workload and primarily route it through that location, then if that location isn’t there for whatever reason, you route it somewhere else. This ‘Virtual Datacentre’ [what Interoute refers to as the VDC] becomes the platform for everything else, because the next evolution in networking is Software Defined Networks,” Finnie says. Interoute has Virtual Datacentre installations in Amsterdam, London, Berlin and Geneva. Because of the scale of its network, the firm doesn’t charge its users for network usage; instead they pay for compute only. Against the background of explosive adoption of cloud computing services, this seems a sensible approach. “Cloud computing is all about being able to distribute workloads, not just about virtualising stuff in the datacentre. But that alone doesn’t do anything for availability or the agility of platforms.

However, with network integration all this falls into place naturally,” he says. “If you don’t own your infrastructure then you probably don’t need all the infrastructure you’ve got. This is an old, old telecoms problem. The days of the integrated netco and servco are over. We sell the same thing to big content players as we do to our telco customers—fibre, waves and chunks of IP transit. This game is a scale game. If you have the assets you can do something with it,” he says.
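Finnie’s routing model can be reduced to a simple availability check: place a workload at its preferred location if that site is reachable, otherwise route it to any other healthy site. The sketch below is a hypothetical illustration of that idea, not Interoute’s actual system; only the site names come from the article.

```python
from typing import Optional

# The four Virtual Datacentre locations named in the article.
SITES = ["amsterdam", "london", "berlin", "geneva"]

def place_workload(healthy: set, preferred: str) -> Optional[str]:
    """Route a workload to its preferred site if up, else any healthy site."""
    if preferred in healthy:
        return preferred
    for site in SITES:
        if site in healthy:
            return site
    return None  # no site available anywhere

# If Amsterdam is down, the workload is simply routed elsewhere.
print(place_workload({"london", "geneva"}, "amsterdam"))  # london
```

The point of the model is that availability becomes a property of the network, not of any one building: the caller never needs to know which physical facility ended up running the workload.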

Finnie says that, while other carriers might not have the assets to compete with Interoute, they may still have 25,000 people working for them. So they will become a service management company and focus on managing their customers’ expectations and experience. “It’s all about getting rid of a supply chain model where a guy turns up on your doorstep with a box,” he says.

The crux of Interoute’s approach dovetails nicely with the big concept of cloud computing. Carriers like TalkTalk are building out huge datacentres, allowing them to deliver multiple tier three [top end] facilities to different customers all in the same site, and hooking those locations up to the core network with 4Tbps pipes. But Interoute is essentially knitting together the capabilities of several disparate datacentres over the network itself, making the building of these ‘super centres’ unnecessary. To paraphrase early Sun Microsystems employee John Gage, the network has finally become the computer, and the growth in cloud adoption is driving this movement. Finnie says: “If you thought the network opportunity was big, then the compute opportunity is even bigger and the big driver for capacity will be compute resources.

“Datacentres used to be a customer of the network. A datacentre has to have people there 24/7, but the trend now is that the flexibility of the platforms we are putting in will mean the need for these flash, fancy datacentres is no longer. If your service becomes self healing, instead of the building it’s housed in being self healing, then your cost is a fraction of what it was and the economics will shift.”

The datacentre owner’s challenge, for those who don’t have networks, is to yield a higher revenue per installation through managed services. But if those managed services become itinerant workloads, then you’re left with only real estate assets and an infrastructure play. “People don’t need fancy locations as they can build out over two separate datacentre locations and connect them via the network. Networks and communications are completely virtualised already, so why not datacentres?” Finnie asks. There’s an element of ‘meta-virtualisation’ in this argument. Datacentres and cloud offerings work by virtualising the servers and processes that used to perform these tasks on an individual basis. What a virtual datacentre does is expand this idea out to the macro level—adding another layer of virtualisation.

Ovum’s Mike Sapien backs the concept. “All these services are virtual now. In the old days of TDM networks, everything was a one-for-one relationship,” he says. “You had a physical circuit going into a physical fibre pair, going to a physical customer location. These networks were very simple and it was easy to identify where the connections went because they were tied to a single phone number. But cloud services—and any service where things are shared—are an order of difficulty harder because you have to manage and maintain services that are not tied to any particular customer.

“This is more like the early days of ISDN when you suddenly had numerous services running on top of one copper pair, but the systems were built around one connection to one customer,” he says. “The network, hosting, datacentre and co-location all have to be in the mix for the service that encompasses it all. Datacentre operators are really making a big soup that the customer pays to drink from.”

According to Sapien, all global carriers mention the advantage of having or owning the network, but few have done much to integrate or enhance it for datacentre-based services. “The real differentiation will come when the network is fully integrated into the cloud service and provides features that others cannot match. This can be an intelligent, on-demand network service model that gets close to the cloud, or a pay-as-you-go model,” he says. Ovum expects all the global carriers to develop deeper integration of the intelligent network into advanced services, such as hosted vertical applications, and to begin developing intelligent network services that are fully integrated into cloud-based services. To achieve this, there are many ways to use the mix of ecosystem partners within and outside the carrier datacentre space. Ovum believes that global carriers will have to develop this ecosystem in each datacentre and for each major region over time to provide global availability of their respective advanced cloud services. One additional party not mentioned so far is the enterprise customer, which may also be a potential partner, either in sharing the risk or in providing services to other non-carrier customers.

“There are usually a select few customers that global carriers have a very deep, strategic relationship with that can be leveraged to expand new services. These customers generally are a catalyst of new services but there are situations that make it more strategic and more interactive to allow the global carrier to expand with the help of customers who understand that the new service may involve special investment and skills that can be shared,” Sapien says.

By this token, TalkTalk’s Mullender believes the datacentre model affects the way people consume infrastructure, because they can consume less hardware and more ‘service’. “If you’re going down that route then you want to consume not just the virtualised platform but you also need connectivity to that environment out to where it needs to be delivered,” he says.

It’s clear that while compute and connectivity go hand in hand on one level, they are still two very distinct services. But in a vision of the not too distant future these services will merge to the point where the physical locations and capabilities of individual datacentres become almost irrelevant and the slogan once seen on Sun Microsystems advertising hoardings will come to fruition.

