The Cloud – Top Down – Future Down

So everyone is grabbing their infrastructure and looking at ways to propel their stuff “into the cloud”. Different vendors have different visions of what the cloud should look like. But why not shoot ourselves beyond the Star Trek era, and then look back? Let’s try to reverse engineer the cloud of the future back into current and future solutions!



In Star Trek the toilet is never clogged

Very true. In Star Trek the toilets are never clogged. They probably solved this issue at least 400 years ago. In fact, I do not even know what a toilet looks like in Star Trek. Artificial gravity never fails either, which, considering the toilet story, is probably a good thing. One problem though: if Spock took a REALLY big dump and the toilet clogged up, probably no one would know how to solve the problem. Dump the core???


The cloud seen from Star Trek down

To find out where the cloud is going, we need to step away from technology and simply ask what we actually want to get out of the cloud. In the end, you need to get your applications delivered. Nothing more. You have some kind of spacey device that shows your apps, which run anywhere (as long as they work).

In order to run those apps, you’d probably need to build them. Not on some operating system, but on a framework that has already done 99% of the work for you. You’d be looking at a framework that lets you build your app out of 6 lines of code (or voice commands, thought commands??). This application would have to run on some infrastructure determined by some form of Service Level Agreement or SLA (hence the “as long as they work”), and it would have to scale to whatever needs you have for it.

So think of the cloud as a black box. What would we input and output?


Input: your application and its SLA.
Output: your application, running at the required scale.
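
To make that black box concrete, here is a minimal sketch in Python. Everything here is hypothetical (no real cloud exposes exactly this interface); it just captures the contract: application and SLA in, running application out.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    """Hypothetical service level agreement that travels with the app."""
    availability: float   # e.g. 0.999 for 99.9% uptime
    max_latency_ms: int   # worst acceptable response time
    min_instances: int    # scale floor
    max_instances: int    # scale ceiling

class Cloud:
    """The cloud as a black box: no knobs, no dials, just a contract."""

    def deploy(self, application: bytes, sla: SLA):
        # Where this runs, on which hardware, behind which hypervisor:
        # deliberately invisible. The SLA is the only interface.
        raise NotImplementedError("this is the part the industry is still building")
```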




So, that’s it? It could never be THAT simple, right? Well actually, it is. There are no buttons to press, no dials to turn. No obvious vMotion or linked clones, no deploying and tuning of databases, no workflows to install new hardware. No RAID types, no scale-out arrays. No obvious ones, at least.

Of course, all this detailed stuff still has to be there. It is the way we are progressing today, but on steroids. We build more and more workflows, we automate more and more. At some point, we will be done automating. No more risk of things going wrong. Fully transparent failovers. Automated and mitigated replacement requests.


From here to Star Trek: Waves of revolution

You may think this pinnacle of automation is way out in the future. But think of this: how exciting is replacing a broken hard drive nowadays? The system flags a failure. The disk has already been copied over to a spare one, or a RAID set is already in the process of being rebuilt. Replacing the disk is an administrative task, nothing more. And we move on and on. Systems like a Vblock are starting to be able to do exactly this for compute blades as well. We are on the verge of having the ability to detect, deal with, replace and reconfigure all hard- and software components. All automatically. All without risk.

Where your stuff lives is also going through waves of revolution. I remember only too well the visit I got from the head of finance, in a previous life. He wanted to see where HIS servers were. Like a kid in a candy store he watched the happy green LEDs blink on and off. Yes, this was before the age in which we write “LEDs” as “leds”. Also, blue LEDs were not available, or cost a fortune. So all the LEDs were happily blinking yellowish green.

You can probably guess the rest of this story: we introduced VMware, and with it came vMotion. The look on his face when I told him I had no clue where “his” stuff was running! He could not wrap his head around the fact that I couldn’t care less, as long as it was functional according to his demands (we call that an SLA nowadays).

The next phase was basically the same thing all over again, this time round for storage. Storage vMotion, auto-tiering. Who cares where the blocks are that make up your VM? As long as it performs, in accordance with the given SLA.

Nowadays we are moving up the stack once again: VMs are sitting in private clouds like baby birds in their nest. The light shines bright through the peephole where their parents come and go to check up on them, maybe feed them a caterpillar. Can you smell the freedom that lies beyond that bright hole? At some point, your VMs may leave that nest, to find their way in the clouds. Maybe this cloud today, maybe some other cloud tomorrow. If your application’s SLA allows for that, of course.

Applications and their VMs will live anywhere and everywhere. The main thing will be: They WORK. They just work. Do you really care where they run? I don’t. Let go… Just let go.


SLA stuff

In the previous story, we moved through the waves of revolution into the future. The one thing that kept coming back was the SLA. Coincidentally, the SLA is also part of the “looking down from Star Trek” exercise. And this is no surprise: if you run an application, you need to make sure it performs, is secure, replicated, backed up, you name it. There should be some metadata included with each workload, as sketched below.
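
As a thought experiment, that metadata could look something like the snippet below. The field names are invented for illustration and do not come from any real product.

```python
# Hypothetical per-workload metadata: the SLA travels with the application.
workload_metadata = {
    "application": "finance-reporting",
    "performance": {"max_latency_ms": 50, "min_iops": 2000},
    "availability": "99.9%",
    "security": {"encrypt_at_rest": True, "network_zone": "internal"},
    "replication": {"copies": 2, "max_distance_km": 100},
    "backup": {"frequency": "daily", "retention_days": 30},
    "placement": {"allowed_clouds": ["private", "trusted-partner"]},
}
```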


Back to the Future

Jumping back to the future, we see a cloud that accepts applications with their metadata, and in return these applications are run on demand and sized on demand. So how far along are we in this vision today? Looking at the immense portfolio of tools, we have come a long way already.

If we had to match products to the features discussed, there are a lot of good things in products like vCloud Director, vCloud Connector and AppSync, but also in hardware appliances like VCE’s Vblock, with best-of-breed products inside. And both standardization and best-of-breed are what it takes to drive workflow and automation further and further.

Looking at storage as an example, we see that more and more gets automated. We can migrate workloads between datastores, we can migrate blocks of data between tiers. SSD is the new disk, disk is the new tape. Tape is the… uhm, well, actually in Star Trek there is no more place for tape. As soon as we start producing holographic storage, we’ll probably be saying the same about spinning disks 🙂

Taking things one step further, we can now migrate virtual workloads into storage appliances like Isilon, or migrate workloads to servers that have local caching mechanisms like VFCache. If we were real workflow fans, we could probably come up with ways to analyze VMs and migrate them to the appropriate tier of compute and storage automatically. Not too hard to do actually; a naive sketch follows below.
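
Here is one naive way such an analysis could look. The tier names and thresholds are made up for illustration; a real workflow would pull these statistics from vCenter or the array, and trigger a (Storage) vMotion whenever the chosen tier changes.

```python
def choose_tier(avg_iops: float, read_latency_ms: float, working_set_gb: float) -> str:
    """Toy placement policy: map a VM's observed I/O profile to a tier.
    All thresholds are invented; tune them against your own SLAs."""
    if read_latency_ms > 10 and working_set_gb < 200:
        # Hot VM with a small working set: server-side flash cache
        # (think VFCache) gives the biggest win.
        return "server-side-flash"
    if avg_iops > 5000:
        return "ssd-tier"
    if avg_iops > 500:
        return "fc-tier"
    # Cold data: scale-out NAS (think Isilon) is cheap and good enough.
    return "capacity-tier"
```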


A great opportunity: Workflow automation, root-cause analysis and self-healing

Looking at things we can do today, we have two major things to do. The first one is to automate, and automate more. We used to think about storage, spindles and RAID groups. Now storage is automating these things for us. We used to think about sizing VMs. We are now able to auto-grow a VM on the fly, or spin up another one and add it to the load balancer if we want (a sketch of that idea follows below). Automation is the next wave, and it can propel us into the future of the cloud.
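
A minimal sketch of that “spin up another one” idea, assuming your orchestration layer hands you a provisioning function and a load balancer object (both hypothetical here):

```python
def scale_out_if_needed(pool, load_balancer, provision_vm, cpu_threshold=0.80):
    """Toy scale-out rule: if the pool runs hot on average, provision
    one more VM from the same template and put it behind the load
    balancer. 'provision_vm' and 'load_balancer' are whatever your
    orchestration layer offers; no real product API is implied."""
    avg_cpu = sum(vm.cpu_usage for vm in pool) / len(pool)
    if avg_cpu > cpu_threshold:
        new_vm = provision_vm(template=pool[0].template)
        load_balancer.add_member(new_vm)
        pool.append(new_vm)
```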

Wait, did I say TWO major things? I guess I did. The second one is this: what should we do if the Star Trek toilet DOES clog up? Today we start fixing it ourselves. Tomorrow we’ll rely on the system to fix it for us. So maybe Star Trek does have an automated decloaking -eh- declogging device? Root cause analysis is the next thing we’d need: something that scans through the infrastructure automatically, relates hard- and software components to each other, and self-heals when things break.
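
In code terms, the shape of such a loop can be surprisingly simple. The sketch below assumes an `infrastructure` object with three hypothetical methods; the hard part, of course, is hidden inside `diagnose`, the root-cause analysis itself.

```python
import time

def self_healing_loop(infrastructure, interval_s=60):
    """Sketch of a monitor -> diagnose -> remediate loop.
    'infrastructure' is assumed to expose the three methods used
    below; they stand in for real monitoring and workflow tooling."""
    while True:
        for event in infrastructure.failure_events():
            # Relate hard- and software components to find the real culprit.
            root_cause = infrastructure.diagnose(event)
            # Fail over, rebuild, or file the replacement request automatically.
            infrastructure.remediate(root_cause)
        time.sleep(interval_s)
```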

Think all of this is the future? Well, a large part of this future is here. Today. And more is coming! Ask your local vExpert or vSpecialist!
