At re:Invent last year, AWS launched Outposts and finally validated the hybrid cloud concept. Not that validation was really necessary, but still …
At the same time, what was once defined as a cloud-first strategy (the idea of launching each new initiative in the cloud, frequently with a single service provider) has evolved into a multi-cloud strategy, one that draws on a wide range of options, from deployments in public clouds to on-premises infrastructure.
Getting everything from a single service provider is very simple and solves many problems. In the end, however, it means accepting a lock-in that does not pay off in the long run. Last month, I talked to the IT director of a large manufacturing company in Italy who described how his company had enthusiastically embraced one of the major cloud providers for nearly every significant business project in recent years. He reported that the strategy had led to a runaway IT budget, even accounting for new initiatives such as IoT projects. The company's main goal for 2019 is to regain control by repatriating some applications, building a multi-cloud strategy, and avoiding the past mistake of going all-in with a single vendor.
Not all multi-cloud is alike
I did not recommend choosing a different provider for each project, but rather working on a solution that abstracts applications and services from the underlying infrastructure. This means you can buy a service from a vendor, but you can also opt for raw compute power and storage and build the service yourself. Such a service is optimized for your needs and can easily be replicated and migrated across different clouds.
Let me give an example. You can consume a NoSQL database as a managed service from your provider, or you can build your own NoSQL DB service from products available in the marketplace. The former is easier to operate, while the latter is more flexible and less expensive. Containers and Kubernetes make it easier to deploy, manage, and migrate such a service from cloud to cloud.
Kubernetes is now available from all major providers in various flavors. The core is the same, and it is fairly easy to migrate from one platform to another. Once your application is containerized, you will find plenty of ready-made images, and others can be prepared for practically any need.
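To illustrate the point about portability, here is a minimal sketch: a standard Kubernetes Deployment manifest, built programmatically, that can be applied unchanged to any conformant cluster, whether it runs on EKS, AKS, GKE, or on-premises. The function name and the Cassandra image are just illustrative choices, not a recommendation.

```python
import json

def nosql_deployment(name: str, image: str, replicas: int = 3) -> dict:
    """Build a minimal Kubernetes Deployment manifest (apps/v1) as a dict.

    Because the manifest uses only core Kubernetes resources, the same
    object can be applied to any conformant cluster with `kubectl apply`.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

# Serialize for `kubectl apply -f -` (kubectl accepts JSON as well as YAML)
manifest = nosql_deployment("cassandra", "cassandra:3.11")
print(json.dumps(manifest, indent=2))
```

The point is that nothing in the manifest names a cloud provider; the provider-specific parts (node pools, load balancers, storage classes) stay at the edges, which is what makes migration feasible.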
Storage is, as always, a bit more complicated than compute. Data has gravity and is therefore difficult to move; however, there are some tools that help for multi-cloud purposes.
Block storage is the easiest to move. Volumes are usually smaller, and there are now several tools to protect, manage, and migrate them, both at the application and infrastructure levels. In fact, almost every vendor offers a virtual version of its storage appliances running in the cloud, along with other tools that facilitate migration between clouds and on-premises infrastructures; think of Pure Storage or NetApp, just to name a couple. At the application level, it's even easier. Going back to the aforementioned NoSQL example, solutions such as Datos IO or Imanis Data can help with migration and data management.
File and object storage is a much bigger beast. If you do not plan ahead, moving it can be a bit complicated (but still feasible). Start by adopting standard protocols and APIs. Those who choose the S3 API for object storage will find it very easy to select a compatible storage system both in the cloud and on premises. At the same time, there are now many interesting products that let you access data and move it transparently across multiple repositories (the list grows by the day, but to give you an idea, look at Hammerspace, Scality Zenko, Red Hat NooBaa, and SwiftStack 1Space). I recently wrote a GigaOm report on this topic; more information here.
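The value of standardizing on the S3 API is easy to show in code. With boto3, switching between AWS, an S3-compatible cloud, or an on-premises system is just a matter of pointing the client at a different endpoint; the application code does not change. The endpoint table below is illustrative (the on-premises hostname is hypothetical), and actually talking to a backend requires boto3 and valid credentials.

```python
# Illustrative endpoint map; any S3-compatible target works the same way.
# The "onprem" hostname is hypothetical.
S3_ENDPOINTS = {
    "aws": None,  # None -> boto3's default endpoint (AWS itself)
    "wasabi": "https://s3.wasabisys.com",
    "onprem": "https://s3.storage.example.internal",
}

def s3_client_kwargs(backend: str) -> dict:
    """Return keyword arguments for boto3.client('s3', **kwargs).

    Only the endpoint differs between backends; uploads, downloads,
    and listings use the exact same S3 API calls everywhere.
    """
    endpoint = S3_ENDPOINTS[backend]
    kwargs = {}
    if endpoint is not None:
        kwargs["endpoint_url"] = endpoint
    return kwargs

# Usage (requires boto3 and credentials for the chosen backend):
#   import boto3
#   s3 = boto3.client("s3", **s3_client_kwargs("onprem"))
#   s3.upload_file("backup.tar", "my-bucket", "backup.tar")
```

This is exactly why planning ahead matters: if the application only ever speaks S3, repatriating the data later is a configuration change, not a rewrite.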
The same applies to other solutions. Why stick to a single cloud storage backend when you can have several, get the most out of each, keep control of your data, and manage everything from a single overlay platform that hides complexity and optimizes data placement with policies? Take a look at what Cohesity is doing to get an idea of what I mean here.
The human factor of multi-cloud
Regaining control of your infrastructure is good for the budget and for the freedom of choice it offers in the long run. However, doing more on the infrastructure side requires investing in people and their skills. I would call that an advantage, but not everyone sees it that way.
In my opinion, it is very likely that a more competent team can make better decisions, respond faster, and build optimized infrastructures that positively impact the competitiveness of the entire organization. If the organization is too small, though, it can be hard to strike the right balance.
Closing the circle
AWS, Microsoft Azure, and Google Cloud offer impressive ecosystems, and you may decide you want to stay with just one of them. Maybe your cloud bill is not that high and you can still afford it.
You may also decide that multi-cloud simply means multiple cloud silos. That, however, is a very bad strategy.
Alternatively, there are several options for building your cloud 2.0 infrastructure while keeping control of the entire stack and all your data. It may not be the easiest way, nor the least expensive, but it is probably the one that will pay off in the long term and increase the agility and competitiveness of your infrastructure. On March 26th I will co-host a Wasabi-sponsored GigaOm webinar on this topic, and I recently interviewed Zachary Smith, CEO of Packet, to discuss new ways of thinking about cloud infrastructures. It's worth a listen if you want to learn more about a different approach to cloud and multi-cloud.
Originally published on Juku.it