Some time ago I posted a blog about achieving fault-tolerant messaging; in that article I mentioned using fabric registry discovery to hide all the broker IP address and port details from the client side. In the new Fuse 6.2 you will not find this option anymore. So how do we achieve fault-tolerant messaging in the new Fuse?

In JBoss Fuse 6.2, we can do it by starting up an MQ Gateway. It provides a single IP and port that accepts connections from clients. The individual IPs and ports of the brokers are hidden away from the client; the client only needs to know the IP and port of the MQ Gateway (default port: 61616). The gateway discovers all the brokers in the fabric, no matter what the incoming protocol is: it can be OpenWire, MQTT, STOMP or AMQP. The gateway checks which brokers are available in the host group specified for the protocol and connects the client to one of them. If multiple brokers are available in the host group, the gateway can dispatch requests across them, which gives us load balancing. There are three load-balancing strategies: Random, Round Robin and Sticky.

If anything goes wrong with the broker a client is connected to and the client gets disconnected, the gateway will look for another available broker in the group. This gives your service high availability.

To start up an MQ Gateway, simply add the gateway-mq profile to a container.

It will automatically discover the brokers in the fabric.
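For example, this can be done from the Fabric command shell; the container name mygateway below is just an illustration:

    # add the MQ Gateway profile to an existing container
    fabric:container-add-profile mygateway gateway-mq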



I updated my failover demo: instead of using fabric registry discovery, I changed my application to connect to the MQ Gateway. In this demo, you first need to create a broker called failoverMS, choose the Master/Slave type and use blogdemo as the group.
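For reference, the broker profile can also be created from the Fabric shell instead of the console. A minimal sketch, assuming the fabric:mq-create command shipped with Fuse 6.2 (check the exact options against your installation):

    # creates a broker profile named failoverMS in the blogdemo group;
    # assigning the resulting profile to two containers gives a master/slave pair
    fabric:mq-create --group blogdemo failoverMS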





Click on the red triangle to start provisioning the brokers. It will take you to the page for creating containers for the broker profile.


And it will start creating the number of containers you specified in the configuration.


Then we can start up the MQ Gateway. Before we do that, go to the gateway-mq profile and edit the io.fabric8.gateway.detecting.properties file. Change port to 8888 and defaultVirtualHost to blogdemo.
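After the edit, the two entries mentioned above look like this (the file contains other settings as well, which are left at their defaults):

    # io.fabric8.gateway.detecting.properties (gateway-mq profile)
    port = 8888
    defaultVirtualHost = blogdemo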



And add the gateway-mq profile to a container, as previously stated.
Let's take a look at the client application; it's written with Camel. Look at the configuration of the Camel routes: whether it is the OpenWire or the MQTT protocol, the endpoint is set to tcp://localhost:8888, which is the IP and port of the MQ Gateway.
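The routes boil down to something like the sketch below (Java DSL; the queue and topic names are made up for illustration, the actual code is in the GitHub repository linked at the end):

    import org.apache.activemq.camel.component.ActiveMQComponent;
    import org.apache.camel.builder.RouteBuilder;

    public class GatewayClientRoutes extends RouteBuilder {

        @Override
        public void configure() throws Exception {
            // The OpenWire connection points at the MQ Gateway, never at an individual broker.
            getContext().addComponent("activemq",
                    ActiveMQComponent.activeMQComponent("tcp://localhost:8888"));

            // OpenWire client: send a message every 5 seconds.
            from("timer:openwire?period=5000")
                    .setBody(constant("Hello from the OpenWire client"))
                    .to("activemq:queue:blogdemo.openwire");

            // MQTT client: same gateway address, different wire protocol.
            from("timer:mqtt?period=5000")
                    .setBody(constant("Hello from the MQTT client"))
                    .to("mqtt:mqttClient?host=tcp://localhost:8888&publishTopicName=blogdemo.mqtt");
        }
    }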

The application sends a message to the broker through both the OpenWire and the MQTT protocol every 5 seconds. Deploy the profile "" to a new container.



Go to the container console of testcon, and you will find the Camel routes running.



In the MQ Gateway, you can see the requests being dispatched to the broker we created.
And the messages being sent to the broker.






Video can be found here.


The application code can be found here:
https://github.com/jbossdemocentral/jboss-fuse-mqgateway-failoverdemo

Enjoy!

Quick recap, 

In the last article I talked about the study I did among Red Hat customers that have made the jump towards deploying their workloads on hybrid and multi-cloud environments. These articles are abstractions of the common, generic components, summarized from the actual implementations.



To overcome the common obstacles of going hybrid and multi-cloud, such as finding talent with multi-cloud knowledge, securing and protecting workloads across low-trust networks, or simply running day-to-day operations across the board, I have identified some solutions from the study, which I will be covering in this series of articles:


Customize monitoring and logging strategy,

One of the keys to managing multiple clusters is observability. In a single-cluster environment, metrics and logs are already segregated across different layers: application, cluster components and nodes. Adding the complexity of multiple clusters on different clouds makes it more chaotic than ever. Customizing how you gather and view the metrics makes operations more effective and lets you pinpoint and locate problems quickly. To do that, we need to decide how to aggregate the collected data: by region, by operational domain, or at a centralized point (this depends on your HA strategy and on how much data you are collecting). Then we decide how to scrape or collect the metrics and logs. Historical data is valuable not only for viewing and identifying problems; many vendors now support AI-based remediation, which validates and extracts the data to build operational models. Therefore we also need to persist the data.



How it works, 

Check out my previous article on the setup of the hybrid and multi-cloud environment, and, if you are interested, the other articles on GitOps and secure dynamic infrastructure. This time, let's look at how observability works. First, to view everything in a single pane of glass, we host a Grafana dashboard that streams from and queries the Observatorium service in all managed clusters.


When bootstrapping the managed clusters on each cloud, we need to install the following:



For Monitoring, 


Prometheus is installed to scrape metrics from the cluster components as well as from the applications. A Thanos sidecar is deployed alongside each Prometheus instance to persist metrics to storage and to allow instant queries against the Prometheus data. You may run multiple Prometheus instances per cluster, depending on how you want to distribute the workload.
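As a rough sketch, the sidecar runs in the same pod as Prometheus, shares its data volume and ships TSDB blocks to object storage. The flags below are standard Thanos sidecar options; the object-store config path and image tag are illustrative:

    # extra container in the Prometheus pod (sketch only)
    - name: thanos-sidecar
      image: quay.io/thanos/thanos:v0.28.0
      args:
        - sidecar
        - --tsdb.path=/prometheus                         # same volume Prometheus writes to
        - --prometheus.url=http://localhost:9090          # the local Prometheus instance
        - --objstore.config-file=/etc/thanos/objstore.yml # bucket configuration (mounted secret)
        - --grpc-address=0.0.0.0:10901                    # Store API consumed by the Querier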


Observatorium is installed based on your defined strategy for observability (in this case, per region). This deploys a set of service instances on the cluster that are responsible for aggregating and storing the incoming metrics, storing the data efficiently, and providing endpoints (APIs) for observation tools like Grafana to query the persisted data.



  1. Queries from the Grafana dashboard in the Hub cluster: the central Querier component in Observatorium processes the PromQL queries and aggregates the results (see the example query after this list).

  2. Prometheus scrapes metrics in the local cluster; the Thanos sidecar pushes metrics to Observatorium to persist them in storage.

  3. The Thanos sidecar also acts as a proxy that serves Prometheus's local data to the Querier over Thanos's gRPC API.
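For example, a dashboard panel comparing CPU usage across clusters might send the Querier a PromQL query like the one below; the cluster label depends on the external labels you configure, so treat it as an assumption:

    sum by (cluster) (rate(container_cpu_usage_seconds_total[5m]))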



For Logging, 


Promtail collects log files with fine-grained control over what to ingest, what to drop, and the final metadata to attach to each log line. Similar to Prometheus, multiple Promtail instances can be installed per cluster, depending on how you want to distribute the logging workload.
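A trimmed-down Promtail configuration might look like the sketch below; the push URL is a placeholder for wherever your Observatorium/Loki endpoint is exposed:

    # promtail config (sketch)
    server:
      http_listen_port: 9080
    positions:
      filename: /run/promtail/positions.yaml    # remembers how far each file has been read
    clients:
      - url: https://observatorium.example.com/loki/api/v1/push   # placeholder endpoint
    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod                            # discover pods on the local cluster
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace]
            target_label: namespace              # attach the namespace as a stream label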


Observatorium in each defined region is also configured with Grafana Loki, which aggregates logs by labeling each log stream pushed from Promtail. Not only does it persist logs in storage, it also lets you query the high-cardinality data for better analysis and visualization.



  1. Promtail is used to collect logs and push them to the Loki API (Observatorium).

  2. In Observatorium, the Loki distributor sends logs in batches to the ingesters, where they are persisted. One thing to be aware of: both the ingester and the querier consume a lot of memory and will need more replicas.

  3. The Grafana dashboard in the Hub cluster displays logs by requesting (see the example query after this list):

    1. Real-time display (tail) over a WebSocket

    2. Time-series based queries over HTTP
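Both paths accept LogQL; for instance, a panel tailing application errors might use a query like this (the label names are illustrative):

    {cluster="aws-east", namespace="myapp"} |= "error"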



This summarizes how observability was implemented in our customer's hybrid multi-cloud environment. If you want to dive deeper into the technology, check out a video by Ales Nosek.

 





