Tuesday, December 29, 2009

Intercloud is coming

I've been running around the world talking about Intercloud for well over a year now. I'm thankful that companies such as Cisco, and now Huawei, have been supportive in this adventure. Just as interoperability between the global Carriers of IP traffic produced what we now call the "Internet", interoperability of Cloud Computing is an important development which needs to occur, and that's what I call "the Intercloud".

I am happy to report that the concept of the Intercloud is finally picking up steam!

First off, it was great to see that somebody wrote a Wikipedia entry for Intercloud (no, it wasn't me!), which you can see here. You will notice that the first official reference to the word Intercloud in the Wikipedia entry is the paper I wrote with my team at Cisco. If you want to read the whole paper, and also see related presentations and so on, please check it out at my consulting site here.

The end of the year was quite active for me on the Intercloud front, as I was fortunate enough to give a keynote address on the subject at the 6th IFIP International Conference on Network and Parallel Computing (NPC 2009), October 19-21, 2009, in Gold Coast, Australia; and then another keynote at IARIA's First International Conferences on Advanced Service Computing, SERVICE COMPUTATION 2009, November 15-20, 2009, in Athens/Glyfada, Greece. I gave similar talks at both events; you can download the talk here.

But the big end-of-year wrap-up was the mecca of all the Cloud Standards people getting together in Long Beach, at a Cloud Standards Workshop hosted by OMG. The event agenda can be seen here. What a great line-up of many of the groups working on different angles of the Cloud Standards problem.

Two very important groups that have stepped forward, especially focused on the Intercloud problem, are of particular interest to me (as an Intercloud guy).

The first is a group I've been working with for some time, called the Open Cloud Consortium. They finally announced an Intercloud Workgroup! This is an idea we've been working on for a while, where the Open Cloud Testbed could be used to host some Root Intercloud services. This is especially exciting as the testbed runs on the same physical infrastructure as the National Lambda Rail project and has ties with the next generation Internet. I hope to be able to publish more info on this as soon as it is available.

The next group of interest is the newly announced Global Inter-Cloud Technology Task Force, started in Tokyo and expanding globally. They have picked up the work on Intercloud and have gotten support from carriers in the Asia-Pac region to really take this forward. They describe some of the charter and work here. I'll be teaming with them also to progress the Intercloud in Asia-Pac.

So the year closes with great progress on the Intercloud. I hope 2010 shows as much progress.

Tuesday, June 16, 2009

IP Addressing is Broken for VM-Mobility-Capable Clouds

I have been thinking that IP addressing is sort of broken when
it comes to Clouds. A lot of people in my former company have been thinking
about this (needless to say). I am going to paraphrase some thoughts and
write-ups we've had in this space. Although extending Layer 2 as widely
as possible solves a lot of problems, it doesn't solve the general
"private to public" or "public to public" problem. You always get back
to routing in the capital-I Internet.

It all starts with the fact that, in a highly virtualized environment,
IP address space explodes. Everything has multiple IP addresses; servers
have IP addresses for management, for the physical NICs, for all of the
virtual machines and the virtual NICs therein, and if any virtual appliances are installed, they have multiple IP addresses as well.

Several areas are of concern here. For one, the IPv4 address space simply starts to run out. Consider an environment inside the Cloud which has 1M actual servers. As explained above, assuming a 16-core server, each server could have 32 VMs, and each VM could have a handful of IP addresses associated with it (virtual NICs, etc). That could easily explode to a Cloud with well over 32M IP addresses. Even using Network Address Translation (NAT), the 24-bit Class A reserved private network range (10.0.0.0/8) provides a total address space of only about 16M unique IP addresses!
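
To make the arithmetic concrete, here is a quick back-of-the-envelope sketch; the server, VM, and per-VM address counts are the assumptions from the paragraph above, with "a handful" taken as just two addresses per VM:

    # Back-of-the-envelope arithmetic for the address explosion described
    # above. The inputs are the assumptions from the text, not measurements.
    servers = 1000000          # physical servers in the Cloud
    vms_per_server = 32        # e.g. a 16-core server running 32 VMs
    addrs_per_vm = 2           # "a handful": virtual NICs, management, etc.

    total = servers * vms_per_server * addrs_per_vm
    print("addresses needed: %s" % total)        # 64,000,000 -> well over 32M

    private_class_a = 2 ** 24                    # 10.0.0.0/8 host space
    print("10.0.0.0/8 capacity: %s" % private_class_a)   # 16,777,216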

For this reason many Cloud operators are considering switching to IPv6, which provides a vastly larger address space (2^128 possible addresses). Switching to IPv6 is quite an undertaking, and some believe that switching from one static addressing scheme to another static addressing scheme (e.g., IPv4 to IPv6) might not be the right approach in a large, highly virtualized environment such as Cloud Computing. If one is reconsidering addressing, one should consider the Mobility aspects of VMs in the Cloud.

VM Mobility presents new challenges for any static addressing scheme. When one moves a running VM from one location to another, the IP address goes with the running VM and any application runtimes hosted by the VM. IP addresses (of either traditional type) embody both Location and Identity: routers and switches use the form of the IP address not only to identify the endpoint uniquely, but, by virtue of decoding the address, to infer the Location of the endpoint (and how to reach that endpoint using switching and routing protocols). So while an addressing scheme is being reconsidered, let's consider two schemes which embody Mobility.

You might think that Mobile IPv4 <http://www.ietf.org/rfc/rfc3344.txt> and Mobile IPv6 <http://www.ietf.org/rfc/rfc3775.txt> mechanisms can be used in this case. However, because IP addresses in either case are still provider-supplied and follow top-level address allocations, we still find VM mobility issues when a VM attempts more general mobility, from one Cloud provider to another for example.

In an attempt to completely generalize the addressing solution, a completely dynamic scheme where Location and Identification have been separated has been developed. This new scheme is called the Locator/Identifier Separation Protocol <http://tools.ietf.org/html/draft-farinacci-lisp-10> (LISP). LISP-based systems can interwork with both IPv4- and IPv6-based networks through protocol support on edge routers. Meanwhile, internal to a Cloud, which may in itself span several geographies, LISP addressing may be used.

The basic idea behind the Loc/ID split is that the current Internet
routing and addressing architecture combines two functions: Routing
Locators (RLOCs), which describe how a device is attached to the
network, and Endpoint Identifiers (EIDs), which define "who" the device
is, in a single numbering space, the IP address. Proponents of the
Loc/ID split argue that this "overloading" of functions places on end-system use of addresses the constraints we detailed above. Splitting
these functions apart by using different numbering spaces for EIDs and
RLOCs yields several advantages, including improved scalability of the
routing system through greater aggregation of RLOCs. To achieve this
aggregation, we must allocate RLOCs in a way that is congruent with the
topology of the network. EIDs, on the other hand, are typically
allocated along organizational boundaries.

Because the network topology and organizational hierarchies are rarely
congruent, it is difficult (if not impossible) to make a single
numbering space efficiently serve both purposes without imposing
unacceptable constraints (such as requiring renumbering upon provider
changes) on the use of that space. LISP, as a specific instance of the
Loc/ID split, aims to decouple location and identity. This decoupling
will facilitate improved aggregation of the RLOC space, implement
persistent identity in the EID space, and hopefully increase the
security and efficiency of network mobility.
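
As a toy illustration of the idea (and only the idea; this is not the LISP wire protocol or its map-resolver machinery), a separate mapping from EIDs to RLOCs means mobility reduces to updating one entry while the identity stays constant:

    # Toy sketch of the Loc/ID split; not the actual LISP protocol.
    # The EID names "who" the endpoint is; the RLOC says where it is
    # currently attached. All names here are invented for illustration.
    mapping_system = {
        "eid:vm-0042": "rloc:provider-a.pop-1",   # EID -> current RLOC
    }

    def send(eid, payload):
        rloc = mapping_system[eid]            # map lookup, then encapsulate
        print("tunneling to %s: %s" % (rloc, payload))

    send("eid:vm-0042", "hello")

    # The VM migrates, even to another provider; only the mapping changes,
    # and the EID (and anything bound to it) stays the same.
    mapping_system["eid:vm-0042"] = "rloc:provider-b.pop-7"
    send("eid:vm-0042", "hello again")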

Although LISP isn't in routers yet, it is alive <http://www.lisp4.net/> and open <http://gforge.info.ucl.ac.be/projects/openlisp>; it may be just what the doctor ordered for the IP addressing 'challenge' in Clouds.

Thinking about Cloud to Cloud Interoperability Use Cases

Cloud computing is a term applied to large, hosted datacenters, usually geographically distributed, which offer various computational services on a "utility" basis. Most typically the configuration and provisioning of these datacenters, as far as the services for the subscribers go, is highly automated, to the point of the service being delivered within seconds of the subscriber request. Additionally, the datacenters typically use hypervisor-based virtualization as a technique to deliver these services. The concept of a cloud operated by one service provider or enterprise interoperating with a cloud operated by another is a powerful idea. So far that is limited to use cases where code running on one cloud explicitly references a service on another cloud. There is no implicit and transparent interoperability. In this article, I write about use cases for interoperability, and an architecture for Intercloud standards.

Of course, from within one cloud, explicit instructions can be issued over the Internet to another cloud. For example, code executing within Google AppEngine can also reference storage residing on AWS. However, there are no implicit ways that cloud resources and services can be exported or caused to interoperate.
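
To make the explicit style concrete, here is a minimal sketch of code reaching S3 directly from any Python runtime (AppEngine included), using the boto library; the credentials, bucket, and object names are placeholders:

    # Explicit cross-cloud access: reach S3 directly from code running
    # anywhere, including on another cloud. Credentials, bucket, and key
    # names are placeholders, not real values.
    from boto.s3.connection import S3Connection

    conn = S3Connection("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")
    bucket = conn.get_bucket("example-bucket")
    data = bucket.get_key("example-object").get_contents_as_string()

The point is that every such call names the foreign cloud explicitly; nothing about the platform makes the remote storage appear as a local resource.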

In this Blog I come up with two main use cases for Cloud Interoperability: the first involves a physical metaphor (servers, disks, network segments, etc), and the second involves an abstract metaphor (blob storage functions, message queue functions, email functions, multicast functions, etc). We look at cloud interoperability challenges using use cases illustrating these two major personality types of clouds.

Virtual Machine Instantiation and Mobility

One of the most basic resources which cloud computing delivers is the Virtual Machine, which is a physical metaphor type of resource. One way or another, a subscriber requests the provisioning of a particularly configured virtual machine with certain quantities of resources, such as memory and processor speeds and counts. The format of this request varies widely by cloud computing platform and is also somewhat specific to the type of hypervisor (the virtualization layer of the operating system inside the cloud computing platform). In a few seconds the subscriber receives pointers and credentials with which to access it. The pointers are usually the MAC and IP addresses and sometimes a DNS name given to the VM. The credentials are usually a pair of RSA keys (a public key and a private key, which one uses in the API to speak with the VM). Most often, the VM presents an x86 PC machine architecture. On that VM, one boots a system image yielding a running system, and uses it in a similar manner as one would use a running system in one's own datacenter.
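
As one concrete example of the shape such a request takes, here is a sketch against EC2 via the boto library; the AMI id, instance type, and key pair name are placeholders, and every other platform exposes a different, incompatible request format:

    # One cloud's flavor of "provision me a VM": EC2 via boto. The image
    # id, instance type, and key pair name below are placeholders.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")
    reservation = conn.run_instances(
        "ami-12345678",             # which system image to boot
        instance_type="m1.small",   # the bundle of resource quantities
        key_name="my-keypair",      # key pair used to access the VM
    )
    instance = reservation.instances[0]
    print(instance.id)   # the pointers (IP, DNS name) appear once running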

VM Mobility is that feature in a particular hypervisor which allows a running system to be moved from one VM to another VM. As far as the running system is concerned, it does not need to be reconfigured: all of the elements such as MAC and IP address and DNS name stay the same, and any of the ways storage may be referenced (such as a World Wide Name in a SAN) stay the same. Whatever needs to happen to make this work is not the concern of the running system.

VM Mobility has been implemented with several hypervisors, but there are limitations. Usually these limitations are a result of the "scope" of applicability of the network and storage addressing. Typically, VM Mobility is restricted to a Layer 3 subnet and a Layer 2 domain (for VLANs) because the underlying network will not support the VM operating outside of the local scope of those addresses. Needless to say, the network addressing scheme in a cloud operated by an entirely different service provider is not only a different subnet but a different Class B or Class A network altogether. Routers and switches simply would not know how to cope with the "rogue" running system.

Another aspect is that the instantiation instructions of the VM for the running system are very specific to that cloud computing platform and the hypervisor which it uses. We would want to re-issue some of these instructions to the new cloud so that the VM it delivered, onto which the running system would move, was as suitable as the first VM which was provisioned for us. If the new Cloud takes an entirely different set of instructions, this is another barrier to VM Mobility.

All of this assumed that, in the universe of cloud computing systems out there, we were able to find another cloud which was ready, willing, and able to accept a VM mobility transaction with us; that we were able to have a reliable conversation with that cloud, perhaps exchanging whatever subscription- or usage-related information might have been needed as a precursor to the transaction; and finally that we had a reliable transport on which to move the VM itself.
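
A hypothetical sketch of that missing precursor conversation might look like the following; every message name and field here is invented, since no such protocol is standardized today, which is precisely the gap:

    # Hypothetical sketch of the precursor conversation; nothing here is
    # a real protocol. All message names and fields are invented.
    class RemoteCloud:
        """Stand-in for a peer cloud reached over some reliable transport."""
        def query_capability(self, request):
            return {"willing": True, "terms": {"rate": "per-GB-hour"}}
        def agree_terms(self, terms):
            return {"transfer_endpoint": "xfer://peer.example.net/vm-inbox"}

    def negotiate_vm_move(peer, vm_requirements):
        # 1. Discovery: is the peer ready, willing, and able?
        offer = peer.query_capability({"operation": "vm-mobility",
                                       "requirements": vm_requirements})
        if not offer["willing"]:
            return None
        # 2. Exchange subscription/usage terms as a precursor.
        deal = peer.agree_terms(offer["terms"])
        # 3. The result: a reliable transport on which to move the VM itself.
        return deal["transfer_endpoint"]

    print(negotiate_vm_move(RemoteCloud(), {"cores": 4, "memory_gb": 8}))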

Storage Interoperability and Federation

Now let us consider an interoperability use case involving an abstract metaphor. In this case, we are running script or code in our datacenter or in the cloud, which is utilizing Cloud-based storage functions. In cloud computing, storage is not like disk access; there are several parameters around the storage which are inherent to the system, and one decides if they meet one's needs or not. For example, object storage is typically replicated to several places in the cloud; in AWS and in Azure it is replicated to three places. The storage API is not explicit about this, but implicitly we know that a write will return as successful when one replicate of the storage has been persisted, and then a "lazy" internal algorithm is used to replicate the object to two additional places. If one or two of the object replicates are lost, the cloud platform will replicate it to another place or two such that it is again in three places. A user has some control over where the storage is, physically; for example, one can restrict the storage to replicate entirely in North America or in Europe.

There is no ability to vary from these parameters; that is what the storage system provides. One would have thought that there might be several APIs, each with a different underlying characteristic, and that you could always use a "better" service implementation than the API demanded. To this end, we do envision other providers implementing, say, five replicates, or a deterministic replication algorithm, or a replicated (DR) write which doesn't return until and unless n replicates are persisted. One can create a large number of variations around "quality of storage" for the Cloud.
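
A toy model of those semantics (not any real cloud's implementation) makes the distinction clear: a plain write returns after the first replicate persists and replication proceeds lazily, while a "quality of storage" variant blocks until n replicates persist:

    # Toy model of the write semantics described above; not any real
    # cloud's implementation. Replicate counts stand in for actual copies.
    import threading

    class BlobStore:
        def __init__(self, target_replicates=3):
            self.target = target_replicates
            self.replicates = {}        # object key -> persisted copies

        def put(self, key, value):
            self.replicates[key] = 1    # returns after the first replicate
            t = threading.Thread(target=self._lazy_replicate, args=(key,))
            t.start()                   # "lazy" background replication

        def _lazy_replicate(self, key):
            while self.replicates[key] < self.target:
                self.replicates[key] += 1

        def durable_put(self, key, value, n):
            # "Quality of storage" variant: block until n replicates persist.
            self.replicates[key] = n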

In the interoperability scenario, suppose AWS is running short of storage, or wants to provide a geographic storage location for an AWS customer in a region where AWS does not have a datacenter; it would then be sub-contracting the storage to another service provider. In either of these scenarios, AWS would need to find another cloud which was ready, willing, and able to accept a storage subcontracting transaction with them. AWS would have to be able to have a reliable conversation with that cloud, again exchanging whatever subscription- or usage-related information might have been needed as a precursor to the transaction, and finally have a reliable transport on which to move the storage itself.

Although the addressing issues are not as severe in this case where an abstract metaphor is used, the naming, discovery, and conversation-setup challenges all remain.

What Makes a Cloud - A Cloud - and not just a Datacenter

Cloud computing has emerged recently as a label for a particular type of datacenter. It can be hosted by anyone: an enterprise, a service provider, or a government. I have been thinking that a way to define cloud computing is to realize that a Cloud is just a special kind of datacenter. We list seven key characteristics which make a large datacenter into a cloud:

1. Implement a pool of computing resources and services which are shared amongst subscribers.

2. Charge for resources and services using an “as used” metered and/or capacity based model.

3. Are usually geographically distributed, in a manner which is transparent to the subscriber (unless they explicitly ask for visibility of that).

4. Are automated, in that the provisioning and configuration (and de-configuration and un-provisioning) of resources and services occur on the "self-service", usually programmatic, request of the subscriber, happen with no human operator assistance, and are delivered within seconds or tens of seconds.

5. Resources and services are delivered virtually, that is, although they may appear to be physical (servers, disks, network segments, etc) they are actually virtual implementations of those on an underlying physical infrastructure which the subscriber never sees.

6. The physical infrastructure changes rarely, while the virtually delivered resources and services change constantly.

7. Resources and services may be of a physical metaphor (servers, disks, network segments, etc) or they may be of an abstract metaphor (blob storage functions, message queue functions, email functions, multicast functions, etc). These may be intermixed.

Cloud computing services as defined above are best exemplified by Amazon Web Services (AWS) or Google AppEngine. Both of these systems exhibit all seven characteristics as detailed above. Various companies are beginning to offer similar services, such as the Microsoft Azure Service, and software companies such as VMware and open source projects such as UCSB Eucalyptus are creating software for building a cloud service. Each of these offerings embodies cloud computing with a self-contained set of conventions, file formats, and programmer interfaces. If one wants to utilize that variation of cloud, one must create configurations and code specific to that cloud.