I’ve been mulling over this post for some time, trying to find the right way to convey the thoughts in my head. Hopefully, I do so adequately.
Some background:
At work, I’ve been taking several different AWS trainings as the company continues to drive its Cloud-first vision. A great many people like to tease me as a result of my heavy affiliation with all things VMware (Note: that teasing has stopped very suddenly – more on that in the future). They assume I’m skeptical and will ultimately refuse to embrace Cloud, because they know that VMware’s core market and products have typically been focused on the Data Center.
They’re right – I am skeptical. Not because I don’t see what Cloud can offer; I see opportunities to optimize our operations and increase scalability. I’m skeptical because of the work that’s only happening now – the push to optimize things for cloud consumption – when it should have happened long ago. There was never a comparable push to optimize for the Data Center. If we had all optimized for the Data Center from the beginning, Cloud might not even be as far along as it is today. I don’t mean that only in relation to this anecdote; it’s more of a global thought. If we had all been automating then the way we are now, would the Cloud have risen as it has?
In all honesty, I like to think I’m a little more pro-Cloud than people realize. I can easily see the VMware stack abstracted as different service names (admittedly, a little over-simplified). EC2 is just a running Virtual Machine. EBS is just vSAN. VPCs, Transit Gateways, etc. are all just different parts of NSX. It’s interesting, as the vAdmin, to see that the use of the Cloud is so simple – it’s just consumption of services. There are no limits. No need for forecasting. Just endless consumption and endless cost.
I can’t tell you for certain that all Cloud Providers are easy to consume – I haven’t had an opportunity to take training or play with any other cloud services at this point. That said – I feel as though VMware has put themselves in a really interesting position. It may take a while to realize it, but I think we may be on the verge of seeing VMware transform itself (again, one might say) to be a giant in how all applications are architected, delivered, and secured in the future. Allow me to explain that line of thinking.
A few years ago, VMware announced VMware Cloud on AWS – a strategic partnership with AWS that gives customers a Hybrid Cloud environment in which a traditional vSphere environment in a Data Center can easily extend into a Managed Service environment in the Cloud. Further, once those systems were in VMConAWS, there was an opportunity to begin using native AWS services for some legacy applications which, before then, would have been completely unavailable (short of re-writing the application to be cloud-native). This was a pretty ground-breaking announcement, in my opinion.
Fast forward to the last month or so – VMware has recently announced similar partnerships with Azure and Google Cloud Platform. These aren’t quite the same as VMConAWS and are instead branded as Azure VMware Solutions and Google Cloud VMware Solution. I have no insight as to the differences here, but I suspect that AWS actually sold VMware hardware in their Data Centers, whereas Azure and GCP wouldn’t sell them hardware but instead allowed them to install on bare-metal. Extrapolating that thought just a bit further – my guess is that those services will not include a VMware-managed “Management Cluster” or “Cloud vCenter” like the VMConAWS offering. Not quite a Managed Service, but still a way to run things in other clouds and not in the Data Center. It is entirely possible that more information will become available during VMworld (which I hope to post this prior to…).
“James, why does this even matter? You’re rambling a little.”
These announcements are super important for the next phase of the Cloud (at least, this is how I see it coming to be). VMware now has partnerships to run the vSphere stack on five of the major cloud providers. This gives the vAdmin the ability to move workloads to and from different cloud providers as necessary. While I don’t know that Azure VMware Solutions (to pick one example) will offer the same features as VMConAWS, I imagine they’ll have to be pretty similar. Assuming that these solutions ARE similar, this gives customers the flexibility of choice – the option to choose VMConAWS, Azure VMware Solutions, or Google Cloud VMware Solution. This is an absolutely huge win for the customer.
So, if I haven’t lost you in this post yet, technology has gone from a Private Cloud in a Data Center to a Hybrid Cloud that can manage vSphere workloads side by side with cloud-native workloads. As of the recent announcements, customers now have a choice in where they can place those vSphere workloads in the cloud. Instead of having one pipe to a cloud provider, we now have (at least) three different pipes to different cloud providers – all while simultaneously running cloud-native workloads.
Where do we go from here?
In my opinion, VMware has positioned themselves very strategically for the coming cloud battles. To me, VMware is looking to be the company that orchestrates the entirety of the Cloud. We’re only missing a few pieces, and I think they’ve already started working on them.
First – I anticipate VMware will double down on Cost metrics for deploying to these various Cloud Providers. At one point, vRealize Business for Cloud offered a way to compare cloud costs. Nowadays, I believe that functionality has been folded into vRealize Operations and vRBC has been deprecated. I fully expect that, with VMware’s dedication to AI/ML, we’ll begin to see more of these Cost Metrics – specifically when it comes to running applications in the Cloud. The way I see it, vROps (or whatever new product might be released to do this) provides a Dashboard which, after compiling all the Cloud Cost data across all service providers in use, suggests where to run applications based on total cost for the day. In my dream world, this is delivered directly to a CFO who decides whether or not to press a button. Pressing a button in that dashboard results in the next piece.
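To make the dashboard idea concrete, here is a toy sketch of the suggestion logic I have in mind. Everything in it is invented for illustration – the application names, providers, and daily-cost figures are made-up placeholders, not real rate cards or any actual vROps feature.

```python
# Hypothetical daily cost estimates per application, per provider.
# All names and numbers are invented for this sketch.
DAILY_COST_ESTIMATES = {
    "app-frontend": {"VMConAWS": 41.50, "Azure": 38.20, "GCP": 44.10},
    "app-reporting": {"VMConAWS": 12.75, "Azure": 15.00, "GCP": 9.90},
}

def cheapest_placement(costs_by_provider):
    """Return (provider, daily_cost) with the lowest estimated cost."""
    return min(costs_by_provider.items(), key=lambda kv: kv[1])

for app, costs in DAILY_COST_ESTIMATES.items():
    provider, cost = cheapest_placement(costs)
    print(f"{app}: run on {provider} (~${cost:.2f}/day)")
```

The real product would obviously pull live billing data and weigh far more than raw cost, but the core decision – rank providers per workload, surface the cheapest – is this simple at heart.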
Second – I anticipate VMware will, again, double down on the work they’ve done on Cloud Automation Services to enable movement between clouds. Cloud Automation Services is effectively the SaaS version of vRealize Automation (which I expect to hear more about at VMworld). One of the really interesting pieces of vCAS is that it allows you to create “cloud-agnostic” blueprints. You submit a request for a given blueprint and, based on approval workflows or tags (among other things), your deployment appears in your on-prem Private Cloud or any of the major Public Clouds, with all of the cloud-specific API translations being done for you by vCAS. This piece would be instrumental in the ability to deploy an application to multiple clouds at once OR move an entire application from one cloud provider to another (including moving it back to your own on-prem presence).
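The tag-based placement idea is easier to see in miniature. The sketch below is a toy illustration only – the real vCAS blueprints are declarative documents with far richer semantics, and the zone names and tags here are invented – but it captures the essence: a request carries constraint tags, and the platform picks a cloud zone whose tags satisfy them.

```python
# Invented cloud zones, each advertising a set of capability tags.
CLOUD_ZONES = [
    {"name": "onprem-dc1", "tags": {"cloud:private", "region:us-east"}},
    {"name": "vmc-aws-east", "tags": {"cloud:aws", "region:us-east"}},
    {"name": "azure-west", "tags": {"cloud:azure", "region:us-west"}},
]

def place_deployment(constraint_tags):
    """Pick the first zone whose tags satisfy every constraint tag."""
    for zone in CLOUD_ZONES:
        if constraint_tags <= zone["tags"]:  # subset test
            return zone["name"]
    raise ValueError(f"no zone satisfies {constraint_tags}")

print(place_deployment({"cloud:aws"}))                        # vmc-aws-east
print(place_deployment({"cloud:private", "region:us-east"}))  # onprem-dc1
```

Because the request only names constraints, not a provider’s API, the same blueprint can land on-prem today and in a public cloud tomorrow by changing nothing but the tags – which is exactly the portability story above.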
Lastly – This is the part that gets me fired up, because I legitimately believe we could be on the cusp of changing cloud consumption. Now that we have a mechanism for deploying and managing our workloads across the different cloud providers… what if we could deploy a single workload across all three of those providers, using different services in each, and ensure that it was orchestrated and secured in a way where we, the customers, see only the benefit?
I know that this can likely be accomplished already, but it involves significant time in building that automation and security. In this scenario, all of that is done by the hard-working engineers at VMware who write the code to translate your request automatically to the right service from the right cloud. For example – maybe I submit a job to grab some data from a Data Lake in Azure. I want to run a handful of Lambda functions to pull that data out of the Data Lake and send it over to GCP, where I can run some AI/ML on GPUs for a fraction of the cost of the other clouds. I want all of that to be done automagically, based on a series of automated approval workflows that evaluate overall cost and estimated time to execute, all orchestrated and secured for me.
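A toy orchestrator for that pipeline might look like the sketch below. Every service name, price, and runtime is invented for the example, and the “approval workflow” is reduced to a single rule (cheapest option that fits a time budget) – a deliberate oversimplification of what a real cross-cloud broker would do.

```python
# Each pipeline step lists candidate services as
# (service_name, estimated_cost_usd, estimated_minutes).
# All values are made up for illustration.
PIPELINE = [
    ("extract", [("Azure Data Lake query", 2.10, 5),
                 ("S3 Select scan", 1.80, 12)]),
    ("transform", [("AWS Lambda fan-out", 0.90, 8),
                   ("Azure Functions", 1.10, 7)]),
    ("train", [("GCP GPU batch", 14.00, 60),
               ("AWS GPU batch", 22.50, 55)]),
]

def approve(candidates, max_minutes):
    """Cheapest candidate whose runtime fits the time budget."""
    eligible = [c for c in candidates if c[2] <= max_minutes]
    return min(eligible, key=lambda c: c[1])

total = 0.0
for step, candidates in PIPELINE:
    service, cost, minutes = approve(candidates, max_minutes=60)
    total += cost
    print(f"{step}: {service} (${cost:.2f}, {minutes} min)")
print(f"estimated pipeline cost: ${total:.2f}")
```

The point isn’t the arithmetic – it’s that once every provider’s services sit behind one request format, “which cloud runs this step” becomes a policy decision the platform can make for you.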
Right now, we have pipes out to different cloud providers that allow us a hybrid cloud with each individual provider. I think we’re about to see those pipes turn into a cone of Cloud Services where the line between Cloud Providers begins to blur. For all I know, this could already exist and I’m just too immature in my cloud journey to see it.
What do you think? I’m writing this immediately prior to VMworld 2019 – do you think we’ll hear anything about this at the conference?