To The Cloud!

“To the Cloud!” exclaims a couple in one of Microsoft’s TV commercials for Windows Live, advertising how the couple can use Windows Live while at the airport to watch a TV show that they have recorded on their PC at home. Perhaps commercials like these contribute to the pervasive buzz surrounding the term “cloud” that has swept the globe. Despite this buzz, it has become clear to me that many people do not really understand what it means to be “hosted on the cloud.”

Recently, at an industry networking event focused on managed hosting and outsourced IT, I met the CEO of a local company. He was actually already familiar with InfoRelay, and he mentioned that he had planned to contact us because he was tired of hosting servers at the office on his business cable connection, and wanted to “plug into the cloud.” When I asked him which features of the cloud interested him the most, he promptly and confidently replied, “well, the cloud has a very direct and reliable connection to the Internet, which is one of the reasons why I think it is important for us to host in the cloud.”

This man, the CEO of a multi-million dollar business, was describing the cloud as some sort of centralized holy grail of connectivity and trustworthiness; a system to which InfoRelay had apparently managed to connect directly in an effort to offer this service to our clients. Now, let me assure you: even the nicest guy might have found it difficult to avoid laughing out loud in this situation. And apparently I must be the nicest guy, because I found it very hard to suppress the hearty snickering I felt brewing within my rib cage.

What I’ve realized, though, is that this CEO is not in the minority; the general public does not truly understand the cloud. Although “cloud” began as a buzzword, it is becoming evident that the cloud has staying power: real companies are spending hundreds of millions of dollars, if not more, to develop robust cloud offerings. But just as suspended water droplets need an atmosphere to form puffy clusters in the sky, servers and SAN storage often require complex connectivity to form a redundant, high-performance computing environment.

As the company selected by CoreSite to sponsor CloudCommunity, a program which began in San Jose and Los Angeles and is now spreading to Reston, InfoRelay has gained even more insight into some of the technical challenges and requirements facing cloud-based service providers, and the carriers that provide these clouds with connectivity. While the true definition of cloud computing varies depending on who you ask, most agree that the main tenets include virtualization, scalability, and redundancy. Clouds don’t rely on a single server, and have no single point of failure.

From the carrier point of view, one of the first requirements clients have is some level of redundant connectivity to the Internet. Some clients, the ones with the budget, staff, and wherewithal to purchase and operate their own routing equipment and manage BGP sessions with multiple providers, prefer to handle network redundancy on their own. The vast majority, however, have no interest in reinventing the wheel; they are happy to accept multiple diverse connections from InfoRelay, with VRRP or another redundancy protocol enabled to ensure maximum uptime.
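For readers unfamiliar with how that failover works, here is a minimal, purely illustrative Python sketch of the behavior a first-hop redundancy protocol like VRRP automates on real routers: keep a prioritized list of gateways and use the highest-priority one that still answers health checks. The gateway addresses are hypothetical documentation-range IPs, not InfoRelay's, and the ping flags assume a Linux host.

```python
#!/usr/bin/env python3
"""Toy illustration of gateway failover: prefer the highest-priority
gateway that still answers a health check. Real VRRP runs on the routers
themselves; this sketch only mimics the idea from a host's perspective."""

import subprocess

# Hypothetical upstream gateways, highest priority first (documentation-range IPs).
GATEWAYS = ["203.0.113.1", "203.0.113.2"]

def is_reachable(ip, timeout_s=1):
    """Return True if the gateway answers one ICMP echo request (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def active_gateway():
    """Pick the first reachable gateway, mimicking master election by priority."""
    for gw in GATEWAYS:
        if is_reachable(gw):
            return gw
    return None  # both uplinks down

if __name__ == "__main__":
    print("Active gateway:", active_gateway())
```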

Of course, as it turns out, redundant connectivity to the Internet is one of the simpler requirements. Today’s clouds are expected to meet strict criteria usually specified within service level agreements (SLAs), and while most consumers don’t realize it, based upon my observations and those of my peers, most clouds cannot withstand the catastrophic failure of a single data center. This is changing, but as of early 2011 it remains true for most, though not all, cloud providers. This is one aspect of cloud computing with which our clients have sought our assistance.
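A quick back-of-the-envelope calculation shows why a second, independent site matters so much. The availability figures below are illustrative assumptions, not any provider’s actual SLA numbers, and the math assumes the two sites fail independently.

```python
# Illustrative availability math: what a second, independently failing site buys you.
single_site = 0.999  # assume one data center is up 99.9% of the time (illustrative)

# The service survives as long as at least one site is up.
independent_pair = 1 - (1 - single_site) ** 2

hours_per_year = 365 * 24
print(f"one site:  {single_site:.4%} uptime, "
      f"~{(1 - single_site) * hours_per_year:.1f} hours of downtime per year")
print(f"two sites: {independent_pair:.4%} uptime, "
      f"~{(1 - independent_pair) * hours_per_year * 60:.1f} minutes of downtime per year")
```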

In InfoRelay’s three main multi-facility metropolitan markets (Washington, DC / Northern Virginia, San Jose/Silicon Valley, and Los Angeles/Southern California), we use dark fiber to connect multiple data centers. This dark fiber has proven useful not only for carrying basic Internet connectivity to remote POPs, but also for linking the locations and providing a medium for direct site-to-site IP connectivity. This allows our clients to get creative. While it may be impractical to replicate all client data across the country, replication within the same metropolitan area, or even within several hundred miles, is often technically feasible and cost effective. This is one of the main reasons we launched our New York City location: to provide a replication site for our Reston, Virginia cloud that is more than 200 miles from the primary location, yet still within 6 milliseconds round trip via the Internet.
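That 6 ms figure is consistent with simple propagation math. The fiber route length below is an assumption for illustration (real fiber paths run longer than the straight-line distance), and light in fiber travels at roughly two-thirds of its speed in a vacuum.

```python
# Rough sanity check on a ~6 ms Reston-to-New York round trip.
FIBER_PATH_MILES = 250        # assumed route length, not a measured figure
METERS_PER_MILE = 1609.34
SPEED_IN_FIBER_M_S = 2.0e8    # ~2/3 of the speed of light in vacuum

one_way_seconds = FIBER_PATH_MILES * METERS_PER_MILE / SPEED_IN_FIBER_M_S
round_trip_ms = 2 * one_way_seconds * 1000

# ~4 ms of pure propagation delay; routing, serialization, and queuing
# overhead account for the remaining couple of milliseconds.
print(f"propagation-only round trip: {round_trip_ms:.1f} ms")
```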

Of course, some of our Northern Virginia clients would never sleep knowing that their data is in the city that never sleeps; in other words, New York isn’t geographically diverse enough for them to feel comfortable. So the question our clients then ask is whether we can provide direct long-haul layer 2 connectivity between the coasts. Yes, this can be done, though it’s a bit more expensive than standard Internet bandwidth. As we’ve explained to our clients, however, there is no way to get around latency, generally in the range of 50-60 ms round trip. Clients generally aren’t concerned about data taking roughly 1/20th of a second to reach the other side of the country; what they usually do not realize is that on a TCP/IP network (the Internet), latency and throughput, or transfer speed, are directly related. Even if we provide one gigabit (1,000 Mbps) or ten gigabits (10,000 Mbps) of cross-country capacity, a TCP sender can only keep one window’s worth of unacknowledged data in flight at a time, so without advanced TCP acceleration techniques or other tuning, clients will only be able to use a small fraction of the available pipe (or “bandwidth”).
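The arithmetic behind that limitation is straightforward. The sketch below uses a classic 64 KB TCP window and a 60 ms round trip as illustrative assumptions; modern stacks can negotiate much larger windows via window scaling, but the latency-throughput relationship itself does not go away.

```python
# Why a single TCP flow cannot fill a coast-to-coast gigabit pipe:
# throughput <= window_size / round_trip_time.
WINDOW_BYTES = 64 * 1024      # classic 64 KB window (no window scaling), illustrative
RTT_SECONDS = 0.060           # ~60 ms cross-country round trip

max_throughput_bps = WINDOW_BYTES * 8 / RTT_SECONDS
print(f"max per-flow throughput: {max_throughput_bps / 1e6:.1f} Mbps")  # ~8.7 Mbps

# Window needed to actually fill a 1 Gbps link at this RTT
# (the bandwidth-delay product):
LINK_BPS = 1_000_000_000
bdp_bytes = LINK_BPS / 8 * RTT_SECONDS
print(f"window needed for 1 Gbps: {bdp_bytes / (1024 * 1024):.1f} MB")  # ~7.2 MB
```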

As the cloud has evolved, I believe that carriers have followed suit, or are at least beginning to do so. This has been largely demand-driven, and the carriers that choose to remain in touch with the needs of their clientele will ultimately be rewarded. Just as the unique requirements of cloud-based offerings have served as a catalyst for changing paradigms amongst other carriers, they have also pushed us at InfoRelay to see things a bit differently. Years ago, our Network Engineering department’s mission was to ensure that our network was highly optimized for performance, extremely reliable, and redundant with no single point of failure. Today, those points remain requisite for our network, but engineering the proper solution often also requires a detailed understanding of client requirements as they relate to global cloud deployments. It seems like a lot of work, so why do we do it? I concede: I suppose it’s all in the name of creating that holy grail of connectivity and trustworthiness, allowing others to “plug into the cloud.”

Russel Weiss is the President of InfoRelay Online Systems, Inc., and has over 16 years of experience in the colocation and web hosting industry.
