
Red Hat A-MQ - Create a cluster of brokers supporting AMQP in Fabric

In this tutorial, I am going to show you how to create a cluster of message brokers using the built-in management platform Fabric. This makes life much easier when configuring and managing brokers.

Please obtain a copy of JBoss A-MQ by downloading it from Red Hat.

Here is a summary of the steps:

1. Unzip to install
2. Create a user
3. Start up the A-MQ server
4. Create a Fabric
5. Create an A-MQ broker profile
6. Create the containers
7. Create a new version and apply the AMQP setting
8. Roll the containers up to the new version
9. Test it

To install, simply unzip the file to a directory.

Before starting up JBoss A-MQ, we need to set a default user. In etc/users.properties, remove the # before admin=admin,admin,

or add your own preferred user id, password, and role.
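After uncommenting, the relevant line in etc/users.properties looks like this (the user, password, and role shown are the defaults; this is an illustration, not necessarily the exact file from the post):

```properties
# format: user = password, role
admin=admin,admin
```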

Go to $A-MQ_INSTALLATION_PATH/bin and start up A-MQ by executing

./amq



Because we are going to manage brokers from Fabric,
create the Fabric with this command:

fabric:create --wait-for-provisioning

There are two different ways to create an MQ broker: using the Hawtio GUI or the command line. I am going to show you the command-line approach first in this tutorial.

Type in:

mq-create --group demo mybroker

This creates an A-MQ profile called mybroker that belongs to a group called demo.

Type profile-list or profile-list | grep mybroker to see whether the profile was created successfully.

Add broker settings to the mq-broker-demo.mybroker profile by typing

profile-edit --resource broker.xml mq-broker-demo.mybroker 

and paste the following content into the file, then save.
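As a rough sketch, a broker.xml resource for a Fabric MQ profile typically looks like the following (the ${...} placeholders are resolved by Fabric at provisioning time; treat this as an illustration, not necessarily the exact file from the post):

```xml
<beans xmlns="http://www.springframework.org/schema/beans">

  <broker xmlns="http://activemq.apache.org/schema/core"
          brokerName="${broker-name}" dataDirectory="${data}">

    <!-- message store on local disk -->
    <persistenceAdapter>
      <kahaDB directory="${data}/kahadb"/>
    </persistenceAdapter>

    <!-- default OpenWire connector; the AMQP connector is added later -->
    <transportConnectors>
      <transportConnector name="openwire" uri="tcp://${bindAddress}:${bindPort}"/>
    </transportConnectors>

  </broker>

</beans>
```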







You should be able to see the new resource by checking with

profile-display mq-broker-demo.mybroker 

Now that the profile is ready, we are going to create 3 containers (instances) that will run this messaging profile.

container-create-child --profile mq-broker-demo.mybroker root demoContainer1
container-create-child --profile mq-broker-demo.mybroker root demoContainer2
container-create-child --profile mq-broker-demo.mybroker root demoContainer3

All 3 containers should start up automatically. You can check whether they installed correctly by listing all the containers with container-list.

Now you have successfully created a cluster of A-MQ brokers with a Master-Slave topology, but it doesn't run AMQP yet.

Next, I want to show you how to do a version upgrade.

Create version 1.1
fabric:version-create 1.1

Add the AMQP connector to the 1.1 version of our profile mq-broker-demo.mybroker:
profile-edit --resource broker.xml mq-broker-demo.mybroker 1.1

Add the AMQP connector in the transportConnectors section of the XML:
<transportConnector name="amqp" uri="amqp://"/>
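In context, the transportConnectors section of broker.xml ends up with both the existing OpenWire connector and the new AMQP one (a sketch; the exact OpenWire uri in your profile may differ):

```xml
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://${bindAddress}:${bindPort}"/>
  <transportConnector name="amqp" uri="amqp://"/>
</transportConnectors>
```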

For the last part of the setup, we need to upgrade the containers to our updated version 1.1:
fabric:container-upgrade 1.1 demoContainer1

And do the rest:
fabric:container-upgrade 1.1 demoContainer2
fabric:container-upgrade 1.1 demoContainer3

Now, let's test it.

git clone

In activemq-amqp-example/src/main/resources/, change the connectionfactory.myJmsFactory setting to
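Assuming the brokers' AMQP connectors listen on the default AMQP port 5672, the jndi.properties entry would look something like this (localhost and the port are placeholders for your environment):

```properties
connectionfactory.myJmsFactory = amqp://localhost:5672
```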


Test AMQP by running the consumer, under the activemq-amqp-example/ directory:

mvn -P consumer

And start up the producer to test:

mvn -P producer

You should see the consumer picking up the message created by the producer. 


Chisoon Chang said…
Tried this demo so I could understand and trace what went wrong with your bio-sensor demo. However, I ran into a problem at your slide 7, where profile-display should show the new configuration of mybroker. The mybroker does not have the new PID; even after I did a profile-refresh, it still carried that of the original activemq.xml from the JBoss A-MQ source. I followed your steps twice (profile-delete mybroker and mq-create mybroker again) and got the same result. Something seems wrong with my local set-up. Suggestions?
Chisoon Chang said…
Although I could not repeat this demo on my local RHEL 6.x box, I have a fundamental question about the purpose of setting up a cluster of brokers. Under what conditions or program requirements do we need to set up such a cluster of brokers?
