April 26th, 2021
A multi-cloud deployment strategy gives you the ability to deploy your application to any cloud, saving you time and money. It gives you the freedom to choose a cloud provider based on the features it offers, the location of its data centers, and the cost of the computing itself. If your setup is done correctly, you can deploy your application to any cloud provider with little trouble.
To allow for smooth multi-cloud deployment, Translucent Computing’s multi-cloud architecture patterns rely on one common abstraction layer across all clouds: Kubernetes, a container orchestration system.
All major cloud platforms now support Kubernetes, and the platform is a must for any modern cloud-native app. Kubernetes is not the same thing as a product like Anthos, a management platform that lets you operate multiple clouds as one, but the difference is not a very big one. Both have a place in the current cloud-native world, so your solution will depend on your needs.
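Because every conformant cluster exposes the same Kubernetes API, the code that deploys your application doesn't care which cloud the cluster runs on. Here is a minimal sketch using the official Kubernetes Python client; the Deployment name, image, and namespace are placeholder assumptions for the example.

```python
# Minimal sketch: the same Deployment is created through the standard Kubernetes
# API regardless of which cloud hosts the cluster. The names, labels, image,
# and namespace below are illustrative placeholders.
from kubernetes import client, config

def deploy_hello_app(context=None):
    # Load credentials from the local kubeconfig; `context` selects the cluster
    # (e.g. a Minikube, GKE, or EKS context) without changing any other code.
    config.load_kube_config(context=context)

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello-app"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="hello-app",
                            image="nginxdemos/hello:latest",
                            ports=[client.V1ContainerPort(container_port=80)],
                        )
                    ]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_hello_app()  # defaults to the current kubeconfig context
```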
Your multi-cloud deployment strategy must account for each cloud provider and how it handles its infrastructure setup: storage, networking, load balancers, virtual machines, monitoring, and any cloud-proprietary managed services (which we try to avoid when the other providers don't offer good alternatives). Some of these cloud infrastructure tasks can be handled by Terraform, an open-source infrastructure-as-code tool. Other abstractions come from working with Kubernetes itself. We want to get our apps into the Kubernetes world as quickly as possible, because that's where all the fun starts!
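As a rough illustration of how that infrastructure layer can be automated, the sketch below drives Terraform for more than one cloud from a single script; the per-cloud module directories and variable-file names are assumptions made for the example, not a prescribed layout.

```python
# Hedged sketch: run the same Terraform workflow against more than one cloud by
# pointing it at per-provider root modules. Directory and file names are assumed.
import subprocess

def provision(cloud: str) -> None:
    """Provision storage, networking, and cluster resources for one cloud."""
    workdir = f"clouds/{cloud}"          # hypothetical per-cloud Terraform module
    var_file = f"{cloud}.tfvars"         # hypothetical variable file in that module
    subprocess.run(["terraform", f"-chdir={workdir}", "init"], check=True)
    subprocess.run(
        ["terraform", f"-chdir={workdir}", "apply", "-auto-approve", f"-var-file={var_file}"],
        check=True,
    )

if __name__ == "__main__":
    for cloud in ("gcp", "aws"):
        provision(cloud)
```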
Through the challenges we've encountered while working on many cloud-native projects, Translucent has put together best practices for an open-source, production-ready technology stack that supports the full cloud-native journey, which we call TEKStack AI.
Our technology stack has best-of-breed tools that take us from development to building to deployment, with monitoring and observability throughout the stack. It takes a lot of effort and investment to keep current, since new tools come out every day. At Translucent, “learning is in our DNA”. We take our inspiration from the Cloud Native Computing Foundation, a community with a good sense of direction when it comes to cloud-native tools and practices. Some of the fundamental tools we use are Minikube, Jenkins, Spinnaker, Helm, Prometheus, and Loki. Because the list is vast, it's important to keep the underlying concepts steady so that you fully understand each tool you adopt.
Yes, if done right, cloud-native architecture saves money! Thinking of your application in terms of container-based environments and packaged microservices allows you to optimize your business and development workflow. Cloud-native architecture enables you to start thinking about a multi-cloud strategy, and a cloud-native strategy leads your company to greater team collaboration and flexibility. You can then build new features around your business and scale your applications when business demand requires it, which can lead to thousands if not millions of dollars in savings.
Cost savings and extra revenue will also come from new API-driven collaborations and continuous integration. New APIs will create new business units and new revenue. Non-traditional tech companies will become data-driven tech companies! Agile DevOps automation will give you a stable system that can run 24/7, with observability and reporting that require fewer people to maintain the system and lessen the support burden.
The biggest part of going cloud-native is the adoption of Kubernetes and containers. There has been rapid growth in the number of companies adopting Kubernetes, and for good reason. Kubernetes is revolutionizing application development, bringing previously unimagined flexibility and efficiency to the development process. The speed and agility that Kubernetes offers improve the customer experience by delivering quality features to the market faster.
There are many benefits to having Kubernetes and containers as the main pillar of your multi-cloud deployment strategy: you get an immutable infrastructure, with predictable, repeatable, and faster development and deployments. This strategy will introduce strong dev/prod parity that allows you to keep consistent builds across all cloud environments. Strong dev/prod parity leads to greater team collaboration and less time spent on infrastructure. The spotlight is on what’s really important: bringing new business features to the market and greater value to your company.
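To make "consistent builds" concrete: the application image is built and tagged once, and that exact immutable tag is what every environment, local or cloud, runs. Below is a rough sketch using the Docker SDK for Python; the registry name, application name, and version are placeholders, not our actual setup.

```python
# Illustrative sketch: build the application image once and push it to a
# registry; dev, staging, and production all reference this exact tag, which
# is what keeps builds consistent across clouds. Names below are placeholders.
import docker

REGISTRY_REPO = "registry.example.com/demo-app"  # hypothetical registry/repository

def build_and_publish(version: str) -> str:
    client = docker.from_env()
    tag = f"{REGISTRY_REPO}:{version}"
    client.images.build(path=".", tag=tag)    # same Dockerfile for every environment
    client.images.push(REGISTRY_REPO, tag=version)
    return tag                                # this tag is referenced in the manifests

if __name__ == "__main__":
    print(build_and_publish("1.0.0"))
```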
Kubernetes and GitOps go hand in hand. Choosing GitOps as a prescriptive style of Infrastructure as Code drives agile collaboration, observability, system configuration, and DevOps best practices. Gone are the days when infrastructure knowledge was hidden in silos. Every single piece of code becomes auditable, be it application code or infrastructure. Everyone can observe and contribute (with the correct permissions, of course).
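In its simplest form, the GitOps loop looks something like the sketch below: the desired state lives in a Git repository of Kubernetes manifests, and a reconciler keeps pulling it and applying it to the cluster. In practice a controller such as Argo CD or Flux plays this role; the repository URL and manifest path here are purely illustrative.

```python
# Hedged sketch of the GitOps idea: Git holds the auditable desired state, and
# a loop keeps the cluster in sync with it. Real deployments use a controller
# such as Argo CD or Flux; the repo URL and paths below are placeholders.
import os
import subprocess
import time

REPO_URL = "https://example.com/acme/platform-config.git"  # hypothetical config repo
CLONE_DIR = "platform-config"

def sync_once() -> None:
    # Pull the latest committed (and therefore reviewable) desired state ...
    subprocess.run(["git", "-C", CLONE_DIR, "pull", "--ff-only"], check=True)
    # ... and apply it to the cluster; kubectl apply is idempotent.
    subprocess.run(["kubectl", "apply", "-R", "-f", f"{CLONE_DIR}/manifests"], check=True)

if __name__ == "__main__":
    if not os.path.isdir(CLONE_DIR):
        subprocess.run(["git", "clone", REPO_URL, CLONE_DIR], check=True)
    while True:
        sync_once()
        time.sleep(60)  # reconcile every minute
```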
Kubernetes is becoming the common language for cloud-native applications. It reduces friction when a new team member joins a team or company: training is easier, and new members can get up and running and start contributing right away.
The bad news: if you're using the cloud, you're already locked in (or soon will be)! This doesn't mean you can't pursue a multi-cloud deployment strategy. Even if you're already on one cloud provider, you can still use others for current or new initiatives. It does mean you'll need to set up a few things yourself if you want to switch providers, but most of the time that can be automated with tools like Terraform.
On each cloud, we aim to automate everything. Some of the infrastructure tasks we automate per cloud are storage, networking, load balancers, virtual machines, monitoring, and that cloud's managed services. We try to avoid managed services that don't have easily compatible alternatives elsewhere.
In our work with one of our clients, we found some great cloud-specific alternatives for a database. Locally and in Google Cloud, we use a Helm-deployed MySQL database; on AWS, we use RDS. Transitioning to Google Cloud SQL wouldn't be an issue either.
Using a database change-management tool like Liquibase across the whole system also helps, because the same schema migrations can be run against any of these databases.
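As a sketch of how that fits together (connection strings, credentials, and file paths below are illustrative only): the application reads its database endpoint from the environment, so the same code can point at a Helm-deployed MySQL locally or at RDS or Cloud SQL in the cloud, and one shared Liquibase changelog migrates whichever database sits behind that endpoint.

```python
# Illustrative sketch: the app only sees a JDBC URL from its environment, so the
# database behind it can be a Helm-deployed MySQL, Amazon RDS, or Cloud SQL.
# The Liquibase CLI then runs the shared changelog against whichever endpoint
# is configured. Hostnames, credentials, and file names are placeholders.
import os
import subprocess

def database_url() -> str:
    # Injected per environment, e.g. via a Kubernetes Secret or ConfigMap.
    return os.environ.get(
        "DATABASE_URL",
        "jdbc:mysql://mysql.default.svc.cluster.local:3306/appdb",  # local/Helm default
    )

def migrate() -> None:
    """Apply the shared Liquibase changelog to the configured database."""
    subprocess.run(
        [
            "liquibase",
            "--changelog-file=db/changelog-master.xml",   # hypothetical changelog path
            f"--url={database_url()}",
            f"--username={os.environ.get('DB_USER', 'app')}",
            f"--password={os.environ.get('DB_PASSWORD', '')}",
            "update",
        ],
        check=True,
    )

if __name__ == "__main__":
    migrate()
```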
Businesses have many decisions to make when it comes to cloud-native architecture, and they're not easy ones, because cloud-native applications are hard to build. Clouds are very complex, but your platform doesn't have to be! Cloud-native architecture, proper planning, and the right tools to support your cloud journey can take away a lot of that complexity.
Thinking about and getting your application ready for multi-cloud deployment from the start is a good idea, because it will lead you to make the right architectural choices. Using the Kubernetes container orchestration system from local development all the way to production will get you there quickly. With this setup, you'll be able to run your application in most clouds. There's still the infrastructure part to deal with, but that can come as you build out your system, and if you adopt infrastructure-as-code principles early on, this step will also be easier and save costs in the long run.
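A minimal sketch of that progression, assuming your kubeconfig already holds contexts for a local Minikube cluster and for the managed clusters in each cloud (the context names and manifest directory are placeholders): the same manifests get applied everywhere, and only the target context changes.

```python
# Hedged sketch: promote the same Kubernetes manifests from local development
# to cloud clusters simply by switching kubeconfig contexts. Context names and
# the manifest directory are placeholders for whatever your setup uses.
import subprocess

CONTEXTS = ["minikube", "gke-prod", "eks-prod"]  # hypothetical kubeconfig contexts

def deploy_everywhere(manifest_dir: str = "k8s/") -> None:
    for context in CONTEXTS:
        subprocess.run(
            ["kubectl", "--context", context, "apply", "-R", "-f", manifest_dir],
            check=True,
        )

if __name__ == "__main__":
    deploy_everywhere()
```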
We have many options when it comes to clouds, their features, and their physical locations. With the growth of Kubernetes, we can make use of all of these options if needed, by applying a multi-cloud deployment strategy throughout.
by Robert Golabek in Kubernetes In Action