Friday, April 12, 2013

"Intercloud" - Not all the same! Federation versus Multicloud

Types of Intercloud: Federation and Multi-Cloud

Lately, more and more people are talking about cloud interoperability.

Unfortunately, it's become almost a "marketing war": who can gain the most momentum with "their" approach, the soonest. There are two camps, and it's genuinely confusing, because the two approaches really are different!

Executive Summary 

Multi-Cloud is User APIs

One approach, which has the support of the OGF and is roughly equivalent to the approach the FP7 Helix Nebula project has taken, is called the "Multi-Cloud" approach. This is ideal in situations where there is a user (like a Grid or HPC user) wanting to access several clouds to fulfill his/her computing requirements. Generally this serves academic and research computing constituencies, as this technology architecture is a "from the User into the Network" type of "explicit demand for resources", where the user very specifically controls the computing they want. It also works for companies which absolutely must access different public clouds and have the IT staff to operate a specific gateway box, or to write code, to do so.

Basically, Multi-cloud "leaves it to the user", even when an intermediate layer hides that fact.
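
To make the pattern concrete, here is a minimal sketch of the Multi-Cloud style using the Apache Libcloud library as one plausible "user API" layer; the credentials and regions are placeholders, and the exact driver arguments may differ by Libcloud version:

    # Multi-cloud, user-side: one abstraction layer in front of
    # several independent clouds, with separate credentials for each.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    ec2 = get_driver(Provider.EC2)("MY_AWS_KEY", "MY_AWS_SECRET",
                                   region="us-east-1")
    rax = get_driver(Provider.RACKSPACE)("MY_RAX_USER", "MY_RAX_KEY")

    # The "merged view" exists only on the user's side of the APIs;
    # the clouds themselves remain completely independent.
    for cloud in (ec2, rax):
        for node in cloud.list_nodes():
            print(cloud.name, node.name, node.state)

Note that nothing here required the two providers to cooperate: the user supplies credentials for each cloud, and the layer simply fans the calls out.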

Federation is like the Internet

Another approach, which has the support of several Tier 1 Telcos, several commercial labs, and different FP7 projects including those from universities in Naples, Amsterdam, and Helsinki, is called the "Intercloud" approach; it is the subject of the work of the IEEE P2302 Working Group as well as the IEEE Intercloud Testbed. This is ideal for large public/commercial Mobile and Internet scenarios, or Enterprise cloud deployments in conjunction with Telco MPLS/VPN. This technology architecture is "from the Network to the User" with "implicit demand for resources", where the user is unaware of what is happening behind the scenes. Think of it as similar technology to Mobile Roaming, or the public Internet's ability to let any browser access any web site.

Basically, Federation "makes it invisible", just like the Internet or Phone network.

A Closer Look

Professor Raj Buyya from the University of Melbourne has produced some great explanations for this in a recently published paper. I draw on his work below to dive into what really differentiates the two, Federation and Multi-Cloud.
  • A Federation is achieved when a set of cloud providers voluntarily interconnect their infrastructures in order to allow sharing of resources among each other.
  • Multi-Cloud denotes the usage of multiple, independent clouds by a client or a service. Unlike a federation, a multi-cloud environment does not imply volunteer interconnection and sharing of providers’ infrastructures. Clients or their representatives are directly responsible for managing resource provisioning and scheduling.
Both federations and multi-clouds are types of Inter-Clouds!

Federations are like the Internet, or the Phone System

In a Federation, the clouds have decided to join together and create mechanisms which are largely transparent to the users. Connections between clouds are made underneath, via special protocols, from cloud to cloud. Actually, it's quite hard to view the independent clouds as independent any more! For examples of Federations, think of the Internet, where any browser can access any website: this is enabled by DNS, routing protocols, and peering/exchange agreements, set up by the IP transit providers in advance and transparent to the users. In the phone network, standards for interconnection of phone companies utilize SS7 networking, standardized numbering plans, and origination/termination agreements, resulting in a system where any phone can dial any other phone worldwide. The mobile phone system adds a roaming layer on top of this, providing an even more comprehensive notion of Federation.

Multi-clouds are like Social Networks, or like Calling Cards

In a Multi-cloud, the underlying separate clouds are still quite visible as separate clouds. Connections between clouds are made over the top, via user APIs. In other words, the user has placed a mechanism (a box or a software API) in front of the multiple clouds, unbeknownst to them, which enables that user to view and use them all at once. It's like a Social Network of today. When you participate in one social network, it's completely separate from another social network. You might be a member of many social networks, but they are "walled gardens" and don't have any substantial interoperability across them. If you want a "merged" friends or contact list, you must use a utility, perhaps found in your email program, or a program designed to "aggregate" social networks. It will use the different APIs of the social networks to access each one, using your credentials on each, and provide a layer merging together the most important features (say, "contacts") of each social network. Another example is the Calling Card. In the phone network, you may choose not to use the Federation capabilities, perhaps because they are too expensive (direct-dial long distance and mobile roaming can be expensive!). In this case you can use a Calling Card: you manually use the phone network at hand, say through a "toll free" mechanism, to connect to your Calling Card system, and then manually direct it to dial the end phone. In this way you are using the "user APIs" of the phone system (phone numbers) to construct an over-the-top, end-to-end connection.

Hybrid Cloud is not Intercloud

Another term used is Hybrid Cloud. It has been defined as a composition of two or more different cloud infrastructures, e.g. a private and a public cloud. Thus a hybrid cloud is a type of Multi-Cloud, one that connects clouds with different deployment models. Often hybrid clouds are used for cloud bursting: the use of external cloud resources when local ones are insufficient.
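
The bursting decision itself is simple to state. Here is a toy sketch of the logic; every helper name in it (local_free_slots, provision_local, provision_public) is a hypothetical stand-in for a real private/public cloud API:

    # Toy sketch of hybrid-cloud "bursting": prefer the private
    # cloud, spill to the public cloud when local capacity runs out.
    def place_workload(vm_request):
        if local_free_slots() >= vm_request.slots:
            return provision_local(vm_request)   # stay in-house
        # local resources insufficient: burst to the public cloud
        return provision_public(vm_request)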

Intercloud Brokers/Exchanges

The term Inter-Cloud broker or exchange has been used with different meanings. In most cases it means a service that acts on behalf of the client in order to provision resources and deploy application components. A Cloud broker or exchange is an automated entity with the following responsibilities (a minimal sketch in code follows the list):
  • Automatic resource provisioning and management across multiple clouds for a given application. This would include allocation and de-allocation of resources (e.g. VMs and storage).
  • Automatic deployment of application components in the provisioned resources.
  • Scheduling and load balancing of the incoming requests to the allocated resources.
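
As promised, here is a minimal sketch of a broker covering those three responsibilities. The cloud driver objects and their methods (create_vm, deploy, destroy_vm, handle) are hypothetical placeholders, not any real product's API:

    import itertools

    class InterCloudBroker:
        def __init__(self, clouds):
            self.clouds = clouds        # one driver per participating cloud
            self.placements = []        # (cloud, vm) pairs we provisioned

        def provision(self, app_spec):
            # 1. allocate resources (VMs) across multiple clouds
            for cloud in self.clouds:
                vm = cloud.create_vm(app_spec.vm_size)
                # 2. deploy the application component on the new VM
                cloud.deploy(vm, app_spec.image)
                self.placements.append((cloud, vm))
            self._rr = itertools.cycle(self.placements)

        def route(self, request):
            # 3. schedule/load-balance incoming requests (round robin)
            cloud, vm = next(self._rr)
            return vm.handle(request)

        def teardown(self):
            # de-allocation: release everything we provisioned
            for cloud, vm in self.placements:
                cloud.destroy_vm(vm)
            self.placements = []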

Intercloud Architectural and Topological Taxonomy

Now, let's follow Prof. Buyya's scientific classification methodology to better understand all this.

We can broadly classify Inter-Clouds as:
  • Volunteer federation - when a group of cloud providers voluntarily collaborate with each other to exchange resources. This type of Inter-cloud is mostly viable for governmental clouds, private cloud portfolios, or a public cloud system.
  • Independent - when multiple clouds are used in aggregation by an application or its broker/exchange. This approach is essentially independent of the cloud provider and can be used to utilize resources from both governmental and private clouds. Another term used for this is Multi-Cloud.
From an architectural perspective, Volunteer federations can be further classified as:
  • Peer-to-Peer - in the architectures from this group clouds communicate and negotiate directly with each other without mediators.
  • Centralized - in every instance of this group of architectures there is a central entity that either performs or facilitates resource allocation. Usually this central entity acts as a repository where available cloud resources are registered, but may also have other responsibilities like acting as a market place for resources.
From an architectural perspective, Independent (Multi-Cloud) developments can be further classified as:
  • Services - application provisioning is done by a service which can be hosted either externally or in-house by the cloud clients. Most such services include broker components in themselves. Typically application developers specify an SLA or a set of provisioning rules, and the service performs the deployment and execution in the background, in a way that respects these predefined attributes.
  • Libraries - often custom application brokers that directly take care of provisioning and scheduling application components across clouds are needed. Typically such approaches make use of inter-cloud libraries that facilitate the usage of multiple clouds in a uniform way.
The whole taxonomy is depicted below, showing example projects falling into each category:

[Figure: Inter-Cloud architectural taxonomy, with example projects in each category]

And we can consider the topology of the different Inter-Cloud architectures as follows:

[Figure: Topologies of the different Inter-Cloud architectures]

How to Choose the Right Intercloud Architecture

To answer this, let's look to a formal definition of Inter-cloud computing (from the GICTF):

“A cloud model that, for the purpose of guaranteeing service quality, such as the performance and availability of each service, allows on-demand reassignment of resources and transfer of workload through a [sic] interworking of cloud systems of different cloud providers based on coordination of each consumers requirements for service quality with each providers SLA and use of standard interfaces.”

Which to choose?

The "Multi-Cloud" approach, which is the subject of the work of the OGF, several academic projects, and also proprietary "CloudSwitch"-like boxes for enterprises, is ideal in situations where there is a user (like a Grid or HPC user) wanting to access several clouds to fulfill his/her computing requirements. Generally this serves academic and research computing constituencies, as this technology architecture is a "from the User into the Network" type of "explicit demand for resources", where the user very specifically controls the computing they want.

The "Intercloud" approach, which is the subject of the work of the IEEE P2302 Working Group as well as the IEEE Intercloud Testbed, is ideal for large public/commercial Mobile and Internet scenarios, or Enterprise cloud deployments in conjunction with Telco MPLS/VPN. This technology architecture is "from the Network to the User" with "implicit demand for resources", where the user is unaware of what is happening behind the scenes. Think of it as similar technology to Mobile Roaming, or the public Internet's ability to let any browser access any web site.

Saturday, November 17, 2012

Carrier Cloud Strategies: Advice from GigaOM Pro


Last month, Jo Maitland at GigaOM Pro referenced the work Cloudscaling is doing in a report titled, “How carriers can catch up in the cloud race.” (subscription required, and worth it) Her work provides practical advice for Communication Service Providers (CSPs) asking this very question.
How did the CSPs let themselves get in the situation of needing to catch up in the first place?
Looking back, most carriers had hosting and co-location offerings, which provided a level of security, reliability, and performance usually a notch above the same type of offering from the low-cost, internet-centric public datacenter providers. CSPs believed that there would be an upper crust of enterprises which would value the premier service being offered and stick with the CSP for their hosting needs. In the earlier days of the Internet, the network connection to the enterprise was expensive, not as fast as the LAN, and almost always provided by the CSP. The CSP likely connected the multiple offices of the enterprise together with an MPLS VPN, and so for both bandwidth reasons and for the convenience of having the outsourced datacenter within your MPLS VPN network, the CSP was a logical choice for most.
Much has changed since then. Today, most mainstream datacenters have the same advanced security, reliability, and performance capabilities CSPs offer. As Jo points out, “The larger players have been able to buy their way into the market: We saw Verizon acquire Terremark”. One could argue that the multi-carrier network of the “carrier neutral” hosting and co-location companies is actually better connectivity than any one CSP could offer. Some believe this strategic multi-carrier connectivity is the real reason Verizon acquired Terremark, with its “NAP of the Americas.” Finally, performance improvements and cost reductions in encryption-capable routers are making customer-configurable, IPsec-based VPN capability over multiple carrier networks an attractive alternative to the carrier-configured and controlled MPLS-based VPN.
Jo goes on to point out that CSPs find themselves looking at a $40-50 billion market led by such offerings as AWS, GCE and Rackspace Cloud. It’s no wonder that many CSPs are beginning to build out cloud offerings themselves. As Jo puts it, “with mixed successes: Telstra, SingTel, French operator SFR, British Telecom, KT Corporation, Orange, AT&T, KDDI Corporation and Chunghwa Telecom, among many others, have all taken a shot at creating a cloud-services business.”
So, why have carriers experienced mixed success? Because building cloud infrastructure at scale is not easy. Solving this problem is Cloudscaling’s raison d’ĂȘtre.
Jo’s report outlines three areas in which carrier clouds have fallen short: cost competitiveness, acumen in selling cloud services, and bad technology bets. She then points out that, in an effort to address those shortcomings, many carriers “are now turning to commercial products or well-supported open-source systems to take another crack at it.” That’s what we’re seeing, too.
Cost competitiveness is partially about equipment cost and software licensing, but equally important is operational efficiency. Achieving operational efficiency starts with cloud design, and that’s where the discomfort starts for many carriers. One need look no further than comparing the architecture of EC2 or GCE with that of Terremark or Savvis. They’re completely different. Our approach is to help CSPs understand how cloud design and architecture profoundly impact their long-term ability to deliver services cost competitively.
The operational efficiency of cloud virtualization and automation ultimately provides pricing advantages over virtually any other model in hosting and co-location.
Cloud developers and application technicians know how they want to acquire cloud services, what they should pay for them, and how they should work. Amazon has been quite successful in listening to the community and delivering a constant stream of innovation. CSPs usually set out to offer their cloud services in their home regions, where Amazon is not strong. We suggest a go-to-market model drafting on this runaway success, delivering what the early adopters want, but in the countries, and using the special capabilities, that the CSP has.
And finally, it’s hard to argue with “open cloud” technology, as epitomized by projects like OpenStack, OpenFlow, and Open Compute. At this point in the maturity cycle of cloud technology, there is strong momentum and investment in these directions. The early pioneers on a different technology base are going to be marginalized into niche market segments (at best), and OpenStack, like the Linux OS or IP networking, will become the standard “open cloud” technology direction that dominates the industry. It’s not a difficult prediction to make at this point, but if a CSP isn’t going that way yet, it’s time to try a new approach.
(Excerpted with permission from GigaOM Pro.)

Tuesday, March 30, 2010

Thinking about Intercloud Topology, and Using XMPP as a transport in Intercloud Protocols

I have been thinking about the topology for the Intercloud. There are Intercloud Exchanges (analogous to Internet Exchanges and Peering Points) where clouds can interoperate, and there is an Intercloud Root, containing services such as Naming Authority, Trust Authority, Directory Services, and other “root” capabilities. It is envisioned that the Intercloud Root is, of course, physically not a single entity but a globally replicated and hierarchical system similar to DNS. All elements in the Intercloud topology contain some gateway capability analogous to an Internet Router, implementing Intercloud protocols in order to participate in Intercloud interoperability. Let's call these Intercloud Gateways.

The Intercloud Root and Intercloud Exchanges would facilitate and mediate the initial Intercloud negotiating process among Clouds and all these elements. That negotiation needs a Presence and Messaging capability, and it's this capability that's got me thinking: just what protocol would we use for that?

If you think about it, cloud instances must be able to dialog with each other. One cloud must be able to find one or more other clouds which, for a particular interoperability scenario, are ready, willing, and able to accept an interoperability transaction with it, and furthermore to exchange whatever subscription or usage related information might be needed as a precursor to the transaction. Thus, an Intercloud Protocol for presence and messaging needs to exist which can support the 1-to-1, 1-to-many, and many-to-many Cloud-to-Cloud use cases.

Extensible Messaging and Presence Protocol (XMPP) is exactly such a protocol. XMPP is a set of open XML technologies for presence and real-time communication, developed by the Jabber open-source community in 1999, formalized by the IETF in 2002-2004, and continuously extended through the standards process of the XMPP Standards Foundation. XMPP supports presence, structured conversation, lightweight middleware, content syndication, and generalized routing of XML data. XMPP root services would be located in the Intercloud Root in the topology explained above.

XMPP defines protocols for communicating between groups of entities which register with an XMPP server. Registration is dynamic and provides the basis for Presence. In a large implementation, such as the global Intercloud envisioned herein, XMPP servers are connected together, identically to the way service providers already connect XMPP servers to support cross-domain Instant Messaging. In this way, XMPP facilitates both presence and many-to-many messaging across service provider domains. XMPP messages are extensible and can carry messages of different types; for example, an XMPP message can carry Instant Messaging (IM) type traffic. We could define a Cloud extension to XMPP for Intercloud control traffic.
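
To give a feel for how little machinery this requires, here is a minimal sketch of an Intercloud Gateway as an XMPP client, written against the modern slixmpp Python library. The JID, the server's placement at the Intercloud Root, and the "ACK" reply convention are all assumptions for illustration:

    import slixmpp

    class IntercloudGateway(slixmpp.ClientXMPP):
        # A cloud's gateway registers with an XMPP server (which, in
        # the topology above, would live at the Intercloud Root).
        def __init__(self, jid, password):
            super().__init__(jid, password)
            self.add_event_handler("session_start", self.start)
            self.add_event_handler("message", self.on_message)

        async def start(self, event):
            self.send_presence()     # announce: ready, willing, and able
            await self.get_roster()  # discover which peer clouds are online

        def on_message(self, msg):
            if msg["type"] in ("chat", "normal"):
                # a peer cloud opening an interoperability conversation;
                # the payload convention here is purely hypothetical
                msg.reply("ACK: " + msg["body"]).send()

    gw = IntercloudGateway("cloud-a@intercloud-root.example.net", "secret")
    gw.connect()
    gw.process(forever=True)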

Deepak Vij and I were thinking about all this, and we put some detail into a whole design for XMPP as a core transport in the Intercloud. We wrote a paper and posted it on the CSP site. Check it out. It seems XMPP might be just what the doctor ordered for the Intercloud.

Tuesday, December 29, 2009

Intercloud is coming

I've been running around the world talking about Intercloud for well over a year now. I'm thankful that companies such as Cisco, and now Huawei, have been supportive in this adventure. Just as interoperability between the global carriers of IP traffic gave us what we now call the "Internet", the interoperability of Cloud Computing is an important development which needs to occur, and that's what I call "the Intercloud".

I am happy to report that the concept of the Intercloud is finally picking up steam!

First off, it was great to see that somebody wrote a Wikipedia entry for Intercloud (no, it wasn't me!), which you can see here. You will notice that the first official reference to the word Intercloud in the Wikipedia entry is the paper I wrote with my team at Cisco. If you want to read the whole paper, and also see related presentations and so on, please check it out at my consulting site here.

The end of the year was quite active for me on the Intercloud front, as I was fortunate enough to give a keynote address on the subject at the 6th IFIP International Conference on Network and Parallel Computing (NPC 2009), October 19-21, 2009, in Gold Coast, Australia; and then another keynote at IARIA's First International Conference on Advanced Service Computing (SERVICE COMPUTATION 2009), November 15-20, 2009, in Athens/Glyfada, Greece. I gave similar talks at both events; you can download the talk here.

But the big end-of-year wrap-up was the mecca of all the Cloud Standards people getting together in Long Beach, at a Cloud Standards Workshop hosted by OMG. The event agenda can be seen here. What a great line-up of many of the groups working on different angles of the Cloud Standards problem.

Two very important groups, especially focused on the Intercloud problem, have stepped forward and are of particular interest to me (as an Intercloud guy).

The first is a group I've been working with for some time, called the Open Cloud Consortium. They finally announced an Intercloud Workgroup! This is an idea we've been working on for some time, where the Open Cloud Testbed could be used to host some root Intercloud services. This is especially exciting as the testbed shares physical infrastructure with the National Lambda Rail project and has ties with the next-generation Internet. I hope to be able to publish more info on this as soon as it is available.

The next group of interest is the newly announced Global Inter-Cloud Technology Task Force, started in Tokyo and expanding globally. They have picked up the work on Intercloud and have gotten support from carriers in the Asia-Pac region to really take this forward. They describe some of the charter and work here. I'll be teaming with them also to progress the Intercloud in Asia-Pac.

So the year closes with great progress on the Intercloud. I hope 2010 shows as much progress.

Tuesday, June 16, 2009

IP Addressing is Broken for VM-Mobility-Capable Clouds

I have been thinking that IP addressing is sort of broken when
it comes to Clouds. A lot of people in my former company have been thinking
about this (needless to say). I am going to paraphrase some thoughts and
write-ups we've had in this space. Although extending Layer 2 as widely
as possible solves a lot of problems, it doesn't solve the general
"private to public" or "public to public" problem. You always get back
to routing in the capital-I Internet.

It all starts with the fact that, in a highly virtualized environment, IP address space explodes. Everything has multiple IP addresses: servers have IP addresses for management, for the physical NICs, and for all of the virtual machines and the virtual NICs therein; and if any virtual appliances are installed, they have multiple IP addresses as well.

Several areas are of concern here. On the one hand, the IPv4 address space simply starts to run out. Consider an environment inside the Cloud which has 1M actual servers. Assuming a 16-core server, each server could host 32 VMs, and each VM could have a handful of IP addresses associated with it (virtual NICs, etc). That easily explodes to a Cloud with well over 32M IP addresses. Even using Network Address Translation (NAT), the reserved Class A private network range (10.0.0.0/8, with 24 host bits) provides a total address space of only about 16.7M unique IP addresses!
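
The arithmetic is worth making explicit; here is a quick back-of-the-envelope check (the per-VM address count is my assumption for "a handful"):

    # Back-of-the-envelope check of the address-explosion claim.
    servers = 1_000_000            # 1M physical servers
    vms_per_server = 32            # 16 cores, 2 VMs per core
    ips_per_vm = 2                 # "a handful": virtual NICs, etc.

    needed = servers * vms_per_server * ips_per_vm
    available = 2 ** 24            # 10.0.0.0/8 has 24 host bits

    print(f"{needed:,} needed")    # 64,000,000 needed
    print(f"{available:,} free")   # 16,777,216 free

Even at two addresses per VM, demand overruns the entire 10.0.0.0/8 private range several times over.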

For this reason many Cloud operators are considering switching to IPv6, which provides a vastly larger local address space. Switching to IPv6 is quite an undertaking, and some believe that switching from one static addressing scheme to another static addressing scheme (e.g. IPv4 to IPv6) might not be the right approach in a large, highly virtualized environment such as Cloud Computing. If one is reconsidering addressing, one should consider the Mobility aspects of VMs in the Cloud.

VM Mobility provides new challenges for any static addressing scheme. When one moves a running VM from one location to another, the IP address goes with the running VM and any application runtimes hosted by the VM. IP addresses (of either traditional type) embody both Location and Identity: routers and switches use the form of the IP address not only to uniquely identify the endpoint, but, by virtue of decoding the address, to infer the Location of the endpoint (and how to reach that endpoint using switching and routing protocols). So while an addressing scheme is being reconsidered, let's consider two schemes which embody Mobility.

You might think that the Mobile IPv4 <http://www.ietf.org/rfc/rfc3344.txt> and Mobile IPv6 <http://www.ietf.org/rfc/rfc3775.txt> mechanisms can be used in this case. However, because IP addresses in either case are still provider-supplied and follow top-level address allocations, we still find VM mobility issues when a VM attempts more general mobility, from one Cloud provider to another for example.

In an attempt to completely generalize the addressing solution, a completely dynamic scheme, where Location and Identification have been separated, has been developed. This new scheme is called the Locator/Identifier Separation Protocol <http://tools.ietf.org/html/draft-farinacci-lisp-10> (LISP). LISP-based systems can interwork with both IPv4 and IPv6 based networks, through protocol support on edge routers. However, internal to a Cloud, which may in itself span several geographies, LISP addressing may be used.

The basic idea behind the Loc/ID split is that the current Internet routing and addressing architecture combines two functions in a single numbering space, the IP address: Routing Locators (RLOCs), which describe how a device is attached to the network, and Endpoint Identifiers (EIDs), which define "who" the device is. Proponents of the Loc/ID split argue that this "overloading" of functions places the constraints on end-system use of addresses that we detailed above. Splitting these functions apart, by using different numbering spaces for EIDs and RLOCs, yields several advantages, including improved scalability of the routing system through greater aggregation of RLOCs. To achieve this aggregation, we must allocate RLOCs in a way that is congruent with the topology of the network. EIDs, on the other hand, are typically allocated along organizational boundaries.

Because the network topology and organizational hierarchies are rarely
congruent, it is difficult (if not impossible) to make a single
numbering space efficiently serve both purposes without imposing
unacceptable constraints (such as requiring renumbering upon provider
changes) on the use of that space. LISP, as a specific instance of the
Loc/ID split, aims to decouple location and identity. This decoupling
will facilitate improved aggregation of the RLOC space, implement
persistent identity in the EID space, and hopefully increase the
security and efficiency of network mobility.
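
A toy illustration of the idea (the "mapping system" here is just a dict; real LISP uses Map-Servers and Map-Resolvers, and the addresses are placeholders):

    # Loc/ID split in miniature: a stable EID ("who") resolves to a
    # changeable RLOC ("where") through a mapping lookup.
    eid_to_rloc = {
        "10.1.0.5": "203.0.113.7",   # VM's EID -> RLOC of its current site
    }

    def send_packet(eid, payload):
        rloc = eid_to_rloc[eid]      # who -> where
        print(f"encapsulate toward {rloc}: {payload}")

    send_packet("10.1.0.5", "hello")          # reaches the VM at site A

    # VM mobility: the EID never changes, only the mapping is updated
    eid_to_rloc["10.1.0.5"] = "198.51.100.9"  # VM moved to site B
    send_packet("10.1.0.5", "hello again")    # reaches the VM at site B

The running system keeps its identity (the EID) across the move; only the mapping system learns the new location.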

Although LISP isn't in routers yet, it is alive <http://www.lisp4.net/> and open <http://gforge.info.ucl.ac.be/projects/openlisp>, and it may be just what the doctor ordered for the IP addressing 'challenge' in Clouds.

Thinking about Cloud to Cloud Interoperability Use Cases

Cloud computing is a term applied to large, hosted datacenters, usually geographically distributed, which offer various computational services on a “utility” basis. Most typically the configuration and provisioning of these datacenters, as far as the services for the subscribers go, is highly automated, to the point of the service being delivered within seconds of the subscriber request. Additionally, the datacenters typically use hypervisor-based virtualization as a technique to deliver these services. The concept of a cloud operated by one service provider or enterprise interoperating with clouds operated by another is a powerful idea. So far that is limited to use cases where code running on one cloud explicitly references a service on another cloud. There is no implicit and transparent interoperability. In this article, I write about use cases for interoperability, and an architecture for Intercloud standards.

Of course, from within one cloud, explicit instructions can be issued over the Internet to another cloud. For example, code executing within Google AppEngine can also reference storage residing on AWS. However, there are no implicit ways that cloud resources and services can be exported or caused to interoperate.

In this blog I come up with two main use cases for Cloud Interoperability. The first is a use case involving a physical metaphor (servers, disks, network segments, etc.); the second involves an abstract metaphor (blob storage functions, message queues, email functions, multicast functions, etc.). We look at cloud interoperability challenges using use cases illustrating these two major personality types of clouds.

Virtual Machine Instantiation and Mobility

One of the most basic resources which cloud computing delivers is the Virtual Machine, which is a physical-metaphor type of resource. One way or another, a subscriber requests the provisioning of a particularly configured virtual machine with certain quantities of resources, such as memory, processor speed, and core count. The format of this request varies widely by cloud computing platform and is also somewhat specific to the type of hypervisor (the virtualization layer of the operating system inside the cloud computing platform). In a few seconds the subscriber receives pointers and credentials with which to access it. The pointers are usually the MAC and IP addresses and sometimes a DNS name given to the VM. The credentials are usually a pair of RSA keys (a public key and a private key, which one uses in the API to speak with the VM). Most often, the VM presents an x86 PC machine architecture. On that VM, one boots a system image yielding a running system, and uses it in a similar manner as one would use a running system in one's own datacenter.
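
As one concrete (and present-day) illustration of such a request, and of the pointers that come back, here is a sketch using boto3 against AWS EC2; the AMI ID and key-pair name are placeholders:

    # Sketch of a VM provisioning request; values are placeholders.
    import boto3

    ec2 = boto3.resource("ec2")
    instance = ec2.create_instances(
        ImageId="ami-12345678",    # system image to boot (placeholder)
        InstanceType="t2.micro",   # the requested resource quantities
        KeyName="my-rsa-keypair",  # RSA key pair acting as credentials
        MinCount=1,
        MaxCount=1,
    )[0]
    instance.wait_until_running()
    instance.reload()

    # The "pointers" the subscriber gets back:
    print(instance.private_ip_address, instance.public_dns_name)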

VM Mobility is that feature in a particular hypervisor which allows a running system to be moved from one VM to another VM. As far as the running system is concerned it does not need to be reconfigured, all of the elements such as MAC and IP address and DNS name stay the same; any of the ways storage may be referenced (such as a World Wide Name in a SAN) stay the same. Whatever needs to happen to make this work is not the concern of the running system.

VM Mobility has been implemented with several hypervisors, but there are limitations. Usually these limitations are a result of the “scope” of applicability of the network and storage addressing. Typically, VM Mobility is restricted to a Layer 3 subnet and a Layer 2 domain (for VLANs), because the underlying network will not support the VM operating outside the local scope of those addresses. Needless to say, the network addressing scheme in a cloud operated by an entirely different service provider is not only a different subnet but a different Class B or Class A network altogether. Routers and switches simply would not know how to cope with the “rogue” running system.

Another aspect is that the instantiation instructions for the running system's VM are very specific to that cloud computing platform and the hypervisor which it uses. We would want to re-issue some of these instructions to the new cloud, so that the VM it delivers, onto which the running system would move, is as suitable as the first VM which was provisioned for us. If the new cloud takes an entirely different set of instructions, this is another barrier to VM Mobility.

All of this assumed that, in the universe of cloud computing systems out there, we were able to find another cloud which was ready, willing, and able to accept a VM mobility transaction with us; that we were able to have a reliable conversation with that cloud, perhaps exchanging whatever subscription or usage related information might have been needed as a precursor to the transaction; and finally, that we had a reliable transport on which to move the VM itself.

Storage Interoperability and Federation

Now let us consider an interoperability use case involving an abstract metaphor. In this case, we are running script or code, in our datacenter or in the cloud, which is utilizing Cloud-based storage functions. In cloud computing, storage is not like disk access; there are several parameters around the storage which are inherent to the system, and one decides whether they meet one's needs or not. For example, object storage is typically replicated to several places in the cloud; in AWS and in Azure it is replicated to three places. The storage API is not explicit in this, but implicitly we know that a write will return as successful when one replicate of the storage has been affected, and then a “lazy” internal algorithm is used to replicate the object to two additional places. If one or two of the object replicates are lost, the cloud platform will replicate it to another place or two, such that it is again in three places. A user has some control over where the storage is, physically; for example, one can restrict the storage to replicate entirely in North America or in Europe.

There is no ability to vary from these parameters; that is what the storage system provides. One would have thought that there might be several APIs, each with a different underlying characteristic, and that you could always use a “better” service implementation than the API demanded. To this end, we envision other providers implementing, say, five replicates, or a deterministic replication algorithm, or a replicated (DR) write which doesn't return until and unless n replicates are persisted. One can create a large number of variations around “quality of storage” for the Cloud.
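
To make "quality of storage" concrete, here is a hypothetical write API parameterized by how many replicates must be persisted before the call returns; nothing in it corresponds to any real cloud's API:

    # Hypothetical "quality of storage" write. ack_after=1 mimics
    # today's lazy replication; ack_after=len(replicas) is a
    # DR-style write that returns only once all copies are durable.
    import concurrent.futures

    def put_object(key, data, replicas, ack_after):
        pool = concurrent.futures.ThreadPoolExecutor()
        futures = [pool.submit(r.write, key, data) for r in replicas]
        acked = 0
        for f in concurrent.futures.as_completed(futures):
            f.result()               # surfaces any replica-write failure
            acked += 1
            if acked >= ack_after:
                break                # durability target met; return now
        pool.shutdown(wait=False)    # remaining writes finish lazily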

In the interoperability scenario, suppose AWS is running short of storage, or wants to provide a geographic storage location for an AWS customer where AWS does not have a datacenter; in either case, it would be sub-contracting the storage to another service provider. AWS would need to find another cloud which was ready, willing, and able to accept a storage subcontracting transaction with them. AWS would have to be able to have a reliable conversation with that cloud, again exchanging whatever subscription or usage related information might have been needed as a precursor to the transaction, and finally have a reliable transport on which to move the storage itself.

Although the addressing issues are not as severe in this case, where an abstract metaphor is used, the naming, discovery, and conversation-setup challenges all remain.

What Makes a Cloud - A Cloud - and not just a Datacenter

Cloud computing has emerged recently as a label for a particular type of datacenter. It can be hosted by anyone: an enterprise, a service provider, or a government. I have been thinking that a way to define cloud computing is to realize that a Cloud is just a special kind of datacenter. We list seven key characteristics which make a large datacenter into a cloud:

1. Implement a pool of computing resources and services which are shared amongst subscribers.

2. Charge for resources and services using an “as used” metered and/or capacity based model.

3. Are usually geographically distributed, in a manner which is transparent to the subscriber (unless they explicitly ask for visibility of that).

4. Are automated, in that the provisioning and configuration (and de-configuration and un-provisioning) of resources and services occur on the “self service”, usually programmatic, request of the subscriber, in an automated way with no human operator assistance, and are delivered within one or two orders of seconds.

5. Resources and services are delivered virtually, that is, although they may appear to be physical (servers, disks, network segments, etc) they are actually virtual implementations of those on an underlying physical infrastructure which the subscriber never sees.

6. The physical infrastructure changes rarely. The virtually delivered resources and services are changing constantly.

7. Resources and services may be of a physical metaphor (servers, disks, network segments, etc) or they may be of an abstract metaphor (blob storage functions, message queue functions, email functions, multicast functions, etc). These may be intermixed.

Cloud computing services as defined above are best exemplified by the Amazon Web Services (AWS) or Google AppEngine. Both of these systems exhibit all seven characteristics as detailed above. Various companies are beginning to offer similar services, such as the Microsoft Azure Service, and software companies such as VMware and open source projects such as UCSB Eucalyptus are creating software for building a cloud service. Each of these offerings embody cloud computing with a self-contained set of conventions, file formats, and programmer interfaces. If one wants to utilize that variation of cloud, one must create configurations and code specific to that cloud.