AT&T, IBM bring cloud-to-cloud connectivity down to minutes

Cloud-to-cloud connectivity can be super fast

James Middleton

July 29, 2014

AT&T Labs, IBM Research and consultancy Applied Communication Sciences (ACS) said this week that a project which began as a hypothetical network study in 2007 has culminated in a real-world proof of concept, demonstrated this May.

The technology, co-developed under the US government’s DARPA CORONET program, aims to make cloud and datacentre interconnectivity more dynamic and elastic, cutting cloud-to-cloud setup times from days or even weeks down to minutes.

The shift relies heavily on software-defined networking (SDN) to create a smart network that AT&T imaginatively calls the User-Defined Network Cloud (UDNC). A key area it will impact is cloud-to-cloud connectivity. As AT&T explained, under the traditional network model the connection between clouds is static, meaning it cannot expand or contract as bandwidth needs change. Moreover, these cloud-to-cloud connections have traditionally been labour-intensive, expensive and time-consuming to set up.

The carrier believes that as the cloud’s potential grows, the networking between datacentres – or between the clouds themselves – needs to be similarly dynamic. So, the proof-of-concept technology can set up a cloud-to-cloud connection in under a minute. “This prototype uses just the right amount of bandwidth, and can enable setup times as short as 40 seconds, compared with the previous setup time of several days,” AT&T said.
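AT&T has not published the prototype’s interface, but as an illustration of the idea, here is a minimal Python sketch of how a client might ask an SDN controller for an on-demand cloud-to-cloud link over a REST API. The endpoint, payload fields, link states and datacentre names are all assumptions invented for this sketch, not the prototype’s actual API.

    # Hypothetical sketch: requesting an on-demand cloud-to-cloud link
    # from an SDN controller. The URL and payload schema are invented
    # for illustration; the real prototype's interface is not public.
    import time
    import requests

    CONTROLLER = "https://sdn-controller.example.net/api/v1"  # assumed endpoint

    def request_link(src_dc, dst_dc, bandwidth_gbps):
        """Ask the controller to provision a link and return its ID."""
        resp = requests.post(f"{CONTROLLER}/links", json={
            "source": src_dc,
            "destination": dst_dc,
            "bandwidth_gbps": bandwidth_gbps,  # ask for only what is needed
        })
        resp.raise_for_status()
        return resp.json()["link_id"]

    def wait_until_active(link_id, timeout_s=60):
        """Poll until the link comes up; the prototype reports ~40s setup."""
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            state = requests.get(f"{CONTROLLER}/links/{link_id}").json()["state"]
            if state == "active":
                return
            time.sleep(2)
        raise TimeoutError(f"link {link_id} not active within {timeout_s}s")

    link = request_link("dc-east", "dc-west", bandwidth_gbps=10)
    wait_until_active(link)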

Why is this important? The developers use the example of an emergency: imagine a hurricane headed toward a datacentre. Ideally an operator would quickly provision enough bandwidth to transfer all of that data to another datacentre. That wasn’t possible before, but this proof-of-concept technology makes it achievable. Looking further ahead, flexible, on-demand bandwidth for cloud applications – such as load balancing, remote datacentre backup and elastic workload scaling – will provide major service flexibility and efficiencies for businesses.
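To make that scenario concrete, here is a hypothetical sketch of the evacuation workflow, built on the controller sketch above: burst the link to high bandwidth, migrate the data, then release the capacity. The migrate_volume helper and the datacentre names are stand-ins invented for illustration.

    # Hypothetical evacuation workflow built on the sketch above.
    def migrate_volume(volume, src_dc, dst_dc):
        """Stand-in for the real bulk-copy mechanism (e.g. block replication)."""
        print(f"copying {volume}: {src_dc} -> {dst_dc}")

    def evacuate(src_dc, dst_dc, volumes):
        # Burst to high bandwidth only for the duration of the transfer.
        link = request_link(src_dc, dst_dc, bandwidth_gbps=100)
        wait_until_active(link)
        try:
            for volume in volumes:
                migrate_volume(volume, src_dc, dst_dc)
        finally:
            # Elastic teardown: release the bandwidth once the data is safe.
            requests.delete(f"{CONTROLLER}/links/{link}")

    evacuate("dc-gulf-coast", "dc-inland", ["customers", "billing", "logs"])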

Researchers at IBM’s labs revealed a related development back in December: a technology for storing and moving data across multiple cloud platforms in real time. IBM said the method for dynamic data migration and backup uses a “cloud-of-clouds” approach – a multi-cloud distributed storage system that can link data across more than 20 different public and private clouds.

Researchers at the company have developed a software toolkit that lets users drag and drop block or file storage across almost any cloud platform with little data replication. The approach avoids service outages because it can tolerate crashes of any number of clients: the independence of the multiple clouds, linked by a distributed storage algorithm, increases overall dependability.
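The article does not describe IBM’s algorithm in detail, so the following Python sketch illustrates the general cloud-of-clouds idea rather than IBM’s actual toolkit: write each object to several independent providers in parallel and treat the write as durable once a majority acknowledge it, so any single cloud (or client) can crash without losing data. The MemoryCloud class is an in-memory stand-in for a real provider’s object store.

    # Generic illustration of a cloud-of-clouds quorum write; the
    # provider class is an in-memory stand-in, not IBM's toolkit.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    class MemoryCloud:
        """In-memory stand-in for one provider's object store."""
        def __init__(self, name):
            self.name = name
            self.objects = {}
        def put(self, key, data):
            self.objects[key] = data

    def quorum_put(clouds, key, data):
        """Treat a write as durable once a majority of clouds acknowledge it."""
        needed = len(clouds) // 2 + 1
        acks = 0
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(cloud.put, key, data) for cloud in clouds]
            for future in as_completed(futures):
                try:
                    future.result()
                    acks += 1
                except Exception:
                    pass  # a crashed or unreachable cloud is simply tolerated
        return acks >= needed

    clouds = [MemoryCloud(n) for n in ("provider-a", "provider-b", "provider-c")]
    print(quorum_put(clouds, "invoice-42", b"payload"))  # True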

The storage services don’t talk to one another directly but instead go through the cloud service for authentication and storage synchronisation; data is encrypted as it leaves one storage platform and decrypted before reaching the next. If one cloud fails, the backup responds immediately.
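As a rough sketch of that encrypt-on-egress, decrypt-on-ingress flow with failover, reusing the MemoryCloud stand-in from the previous sketch: the Fernet cipher from the open-source cryptography package is an assumption made for illustration, since IBM’s actual cipher and key handling are not described.

    # Hedged sketch: data is encrypted as it leaves one cloud and
    # decrypted before it lands in the next; if the primary target
    # fails, the write falls back to a backup cloud.
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # key management is out of scope for this sketch
    cipher = Fernet(key)

    def transfer(data, obj_key, primary, backup):
        token = cipher.encrypt(data)            # encrypted as it leaves the source
        for target in (primary, backup):        # fail over if the primary is down
            try:
                target.put(obj_key, cipher.decrypt(token))  # decrypted on arrival
                return target.name
            except Exception:
                continue
        raise RuntimeError("both clouds unavailable")

    primary, backup = MemoryCloud("primary"), MemoryCloud("backup")
    print(transfer(b"records", "snapshot-1", primary, backup))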


About the Author

James Middleton

James Middleton is managing editor of telecoms.com | Follow him @telecomsjames
