Hybrid multi-cloud can be difficult. This is my study of a real customer use case: their journey using GitOps, a multi-cluster management system, and secured dynamic infrastructure secrets.

The Intro, 

More companies are adopting hybrid cloud or even multi-cloud strategies for higher flexibility and resiliency, and sometimes simply because it's too risky to put all their eggs in one basket. This is a study based on real solutions built with Red Hat's open source technology. This article abstracts the common, generic components of the implementation, giving you an overall idea of the flow and why we chose certain technologies, to set you off at the right place to begin your own journey into hybrid multi-cloud environments.
The idea of distributed computing is not new: leverage the combined processing power, memory, and storage of multiple software components on multiple machine instances to achieve better performance. The problem now is how to scale out the deployment of these software components quickly across clouds, with the stability of actual machines. Teams want the freedom to bring up clusters close to the clients issuing requests, and close to the data stores because of data gravity. Sometimes they also want to deploy the part of the application supporting cognitive services on a specific cloud provider.



Hosting platforms on multiple clouds can be difficult, because it introduces extra complexity: finding people with knowledge of every cloud vendor, securing workloads across clouds, and governing across the board. The most common customer concerns we found are automation, security, and uniformity. I have broken down how this study tackles these concerns, using Red Hat and its partners' technologies, into four sections: 


The Overview,

We have logically separated the solution into three main areas: 

Unified management hub, which hosts the management platform that manages all clusters, a vault that secures and issues infrastructure credentials, a repository that stores the infrastructure code, and a CI/CD controller that continuously monitors and applies updates. Many customers decided to host the hub in their own data center, on top of their existing virtualization infrastructure. 

Managed clusters, the clusters that run the customer's applications, scaling up and down to meet distributed computing needs. Metrics and status are constantly synchronized back to the unified management hub. These clusters are deployed across major cloud vendors such as Azure, AWS, and Google Cloud. 

Bootstrap automation, a temporary instance used to bootstrap the unified management hub. It consists of multiple Ansible playbooks that install all the components on the hub and set up the assigned administrative roles. 



The Technology Stack, 

In the case study, customers chose the following technologies, for these reasons: 

  • Red Hat OpenShift Platform
    • Instead of directly using and learning each vendor's offering, or even learning the subtle differences between their Kubernetes distributions, a platform that sits on top of data centers, private clouds, and public clouds provides a unified way to deploy, monitor, and automate all the clusters. 
    • OpenShift GitOps
      • Automates delivery through DevOps practices across multicluster OpenShift and Kubernetes infrastructure, with the choice of either automatically or manually synchronizing cluster deployments to what's in the repository. 
    • Core Monitoring
      • OpenShift has a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components. On top of that, we can also define monitoring for user-defined projects as well. 
    • Grafana Loki
      • A horizontally scalable log aggregation system that is more cost-effective and easier to operate, especially in a multi-cluster environment. 
    • External Secrets 
      • Enables the use of external secret management systems (HashiCorp Vault in this case) to securely add secrets into the OpenShift platform. 
  • Red Hat Advanced Cluster Management for Kubernetes
    • Controls clusters and applications from a single unified management hub console, with built-in security policies, cluster provisioning, and application lifecycle management. Especially important when managing on top of multiple clouds. 
  • Red Hat Ansible Automation
    • Used to automate the configuration and installation of the management hub. 
  • HashiCorp Vault 
    • A secure, centralized store for dynamic infrastructure and application secrets across clusters, suited for the low-trust networks between clouds and data centers. 
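To make the External Secrets piece concrete, here is a minimal sketch of an ExternalSecret resource that syncs a credential out of Vault into an OpenShift Secret. All names, namespaces, and Vault paths below are illustrative assumptions, not taken from the actual case study:

```yaml
# Sketch: sync a credential from HashiCorp Vault into a Kubernetes Secret
# via the External Secrets Operator. Names and paths are hypothetical.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: cloud-creds
  namespace: openshift-gitops
spec:
  refreshInterval: 1h              # re-read Vault periodically
  secretStoreRef:
    name: vault-backend            # a (Cluster)SecretStore configured for Vault
    kind: ClusterSecretStore
  target:
    name: cloud-creds              # the Secret the operator creates and keeps in sync
  data:
    - secretKey: aws_access_key_id
      remoteRef:
        key: secret/data/infra/aws # Vault KV path (illustrative)
        property: access_key_id
```

The operator, not the application, holds the Vault connection, so managed clusters never need long-lived cloud credentials baked into Git.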

The Setup,

The key to automation is “infrastructure as code”: by versioning and storing clusters, networks, servers, data stores, and even applications as code in a centralized, controlled repository, the environment becomes agile, consistent, and less error prone. Creation and updates are pre-configured and applied simply by executing the code, with fewer human errors, and can be replicated across different environments. 
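As one concrete illustration of infrastructure as code in this setup, an OpenShift GitOps (Argo CD) Application can point at the repository and keep the clusters converged to it. The repository URL and paths below are hypothetical placeholders:

```yaml
# Sketch: an Argo CD Application that continuously applies whatever is in
# the infra repository. repoURL and path are illustrative, not real.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: managed-clusters
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/infra.git  # hypothetical repo
    targetRevision: main
    path: clusters
  destination:
    server: https://kubernetes.default.svc          # the hub cluster itself
    namespace: open-cluster-management
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to what Git declares
```

With `automated` sync enabled, any change merged to the repository is rolled out without a human touching the clusters; switching to manual sync gives operators an approval gate instead.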

We will start by bootstrapping the management hub. Here are the steps: 
  1. First, set up the Red Hat OpenShift Platform (OpenShift) cluster that hosts the management hub. The OpenShift installation program provides flexible ways to get OpenShift installed; an Ansible playbook kicks off the installation with our configurations.
  2. Ansible playbooks are used again to deploy and configure Red Hat Advanced Cluster Management for Kubernetes (RHACM), and later the other supporting components (such as external secret management), on top of the provisioned OpenShift cluster. 
  3. Install Vault with an Ansible playbook. The vault we chose is from our partner HashiCorp; it manages secrets for all the OpenShift clusters.
  4. An Ansible playbook is used again to configure and trigger the OpenShift GitOps operator on the hub cluster, and to deploy the OpenShift GitOps instance for continuous delivery. 
For identity management, we use the customer's existing identity provider as a source for OpenShift groups, and later use it to authenticate users logging into the hub and the managed clusters. 
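The bootstrap steps above can be sketched as a single Ansible playbook. The role names and variables here are illustrative placeholders for the real playbooks, not the actual case-study code:

```yaml
# Sketch of the hub bootstrap flow. Role names and install_dir are
# hypothetical; each role would wrap the corresponding install steps.
- name: Bootstrap the unified management hub
  hosts: localhost
  connection: local
  tasks:
    - name: 1. Install the hub OpenShift cluster
      ansible.builtin.command: >
        openshift-install create cluster --dir {{ install_dir }}

    - name: 2. Deploy RHACM and supporting operators
      ansible.builtin.include_role:
        name: rhacm_install          # illustrative role name

    - name: 3. Install and configure HashiCorp Vault
      ansible.builtin.include_role:
        name: vault_install          # illustrative role name

    - name: 4. Deploy the OpenShift GitOps operator and instance
      ansible.builtin.include_role:
        name: gitops_install         # illustrative role name
```

Keeping each component in its own role makes the bootstrap repeatable: rerunning the playbook against a fresh environment reproduces the same hub.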




Now that the centralized unified management hub is ready to go, we can deploy clusters across multiple clouds to serve developers and end users. In my next article, I will go over my study on GitOps and how it simplifies provisioning and updating in this complex setting.