
Fuse Integration Service - Auto Dealership Management Demo, Part One

This blog series walks through building an auto dealership management system on Fuse Integration Service. We will be creating three major functions in the system:
  • Sales report tracking 
  • Vehicle inventory status
  • Customer IoT service
We will export a sales report to a web page, provide the current inventory status of available cars through a web service, and collect customer data from IoT devices on their cars to alert nearby shops. Some basic knowledge of Apache Camel will help before you begin, because I will not explain it in great detail; my focus is on how it works with the base platform, OpenShift. For Camel basics, you can check out my previous JBoss Fuse workshop. 

I thought this would be the perfect chance to show how Fuse Integration Service can benefit and support a microservice architecture, so yes, this application fits my category of microservice integration. Each function in the application will be divided into a small, separate deployment module, independent from the others. 

In part one, we will look at how to create a RESTful web service as the API endpoint for the sales tracking report application. 
The content will be in JSON format. The data comes from XML files submitted by the reporting application in each branch; each branch uploads its XML file to a folder in our system. 
Our first microservice is just two simple Camel routes: one takes in the XML file, parses the content, transforms it into a Java bean, and stores it in a temporary pool; the other outputs the pool of data in JSON format.  
First things first: we need to create a project that can be packaged and deployed as a Fuse Integration Service application. 
The XML provided is a list of sales opportunities. To break them down into individual pieces, we are going to split the file with Camel's built-in Splitter pattern.  
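As an illustration, here is what one of the uploaded report files might look like. The field names are taken from the Dozer mapping used later; the wrapper element name and all values are assumptions, since the real sample file is not shown.

```xml
<!-- Hypothetical sample report: a list of sales opportunities.
     Field names match the Dozer mapping; element names and values are assumed. -->
<opportunities>
  <Oppo>
    <custName>John Doe</custName>
    <phone>555-0101</phone>
    <openDate>2016-06-01</openDate>
    <stage>Negotiation</stage>
    <type>New</type>
    <vehicleId>VH-1001</vehicleId>
    <discount>5</discount>
  </Oppo>
  <Oppo>
    <custName>Jane Smith</custName>
    <phone>555-0102</phone>
    <openDate>2016-06-02</openDate>
    <stage>Closed</stage>
    <type>Used</type>
    <vehicleId>VH-1002</vehicleId>
    <discount>0</discount>
  </Oppo>
</opportunities>
```

The Splitter then iterates over each opportunity element, so every opportunity is transformed and stored individually.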

Developing Camel Application

This is the first Camel route. It reads from a file folder, logs the content, splits the XML file, transforms the data into a Java bean, and then stores everything in a temporary pool. 


 <route id="inputReport">  
  <from uri="file:reportfolder"/>  
  <log message="${body}"/>  
  <!-- split out each opportunity element (XPath expression assumed) -->  
  <split>  
   <xpath>//Oppo</xpath>  
   <to uri="dozer:opptrans?sourceModel=oppxml.Oppo&amp;targetModel=com.redhat.fis.dms.model.Opportunity&amp;unmarshalId=oppxml&amp;mappingFile=transformation.xml"/>  
   <bean ref="opportunityGenerator" method="addOpportunity(${body})"/>  
  </split>  
 </route>  

Dozer transformation:
 <mappings xmlns="" xmlns:xsi="" xsi:schemaLocation="">   
  <mapping>  
   <class-a>oppxml.Oppo</class-a>  
   <class-b>com.redhat.fis.dms.model.Opportunity</class-b>  
   <field> <a>discount</a> <b>discount</b> </field>  
   <field> <a>custName</a> <b>custName</b> </field>  
   <field> <a>openDate</a> <b>opendate</b> </field>  
   <field> <a>phone</a> <b>phone</b> </field>  
   <field> <a>stage</a> <b>stage</b> </field>  
   <field> <a>type</a> <b>type</b> </field>  
   <field> <a>vehicleId</a> <b>vehicleId</b> </field>  
  </mapping>  
 </mappings>  

Java Bean:
 package com.redhat.fis.dms.mockprocessor;  
 import java.util.ArrayList;  
 import java.util.HashMap;  
 import java.util.List;  
 import com.redhat.fis.dms.model.Opportunity;  
 public class OpportunityGenerator {  
  List<Opportunity> opportunityList = new ArrayList<Opportunity>();  
  public void addOpportunity(Opportunity opportunity){ opportunityList.add(opportunity); }  
  public HashMap<String, ArrayList<ArrayList<String>>> getAllList(){  
   HashMap<String, ArrayList<ArrayList<String>>> resultMap = new HashMap<String, ArrayList<ArrayList<String>>>();  
   ArrayList<ArrayList<String>> data = new ArrayList<ArrayList<String>>();   
   for(Opportunity opportunity : opportunityList){    
    // copy the bean's fields into a row of strings (getters assumed on the Opportunity model)  
    ArrayList<String> opportunityinList = new ArrayList<String>();  
    opportunityinList.add(opportunity.getCustName());  
    opportunityinList.add(opportunity.getPhone());  
    opportunityinList.add(opportunity.getStage());  
    data.add(opportunityinList);  
   }  
   resultMap.put("data", data);   
   return resultMap;  
  }  
 }  
And the Dozer converter configuration in the Camel context:
 <bean id="dozer" class="org.apache.camel.converter.dozer.DozerTypeConverterLoader"/>  

The second Camel route gets the data from the pool and transforms it into JSON format.

 <restConfiguration component="jetty" port="9191"/>  
 <rest path="/AutoDMS">  
  <get uri="/salesTracking">  
   <to uri="direct:salesTracking"/>  
  </get>  
 </rest>  
 <route id="salesTracking">  
  <from uri="direct:salesTracking"/>  
  <bean ref="opportunityGenerator" method="getAllList"/>  
  <marshal>  
   <json prettyPrint="true" library="Jackson"/>  
  </marshal>  
 </route>  

In this part of the demo, we are exposing a RESTful web service endpoint, so we configure it with the Rest DSL in Camel. 
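For illustration, a response from /AutoDMS/salesTracking would look something like the following. The values are made up, but the overall shape, a single "data" key holding rows of strings, comes from the structure the getAllList() method builds.

```json
{
  "data" : [
    [ "John Doe", "555-0101", "Negotiation" ],
    [ "Jane Smith", "555-0102", "Closed" ]
  ]
}
```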

And that's all. We can then deploy the application onto our PaaS platform, OpenShift. If you don't know how to set up Fuse Integration Service, please refer to my older post.

Deploying Application

Since OpenShift is based on Docker and Kubernetes technology, we need to package our application as a Docker image. Don't worry if you don't know what those are; you don't need to for now. Basically, just think of them as a new way of packaging an application that bundles the runtime environment, plus a way to orchestrate and manage those running applications. Fuse Integration Service hides all this complexity by providing a "Source to Image" mode of deployment, with help from Maven plugins.

Take a look at the pom.xml we generated in the project; notice a few Maven plugins were automatically added.

First is the docker-maven-plugin, which helps you create the Docker image and talk to the Docker registry.

The other is the fabric8 plugin, which generates the setup file for Kubernetes.

And lastly, it puts together some useful profiles so it is easier to commit and deploy your application.
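As a sketch of what such a profile can look like, here is a minimal local-deploy profile. The profile name comes from the deploy command used later in this post; the exact goal list in the generated pom may differ, so treat this as an assumption rather than the generated content.

```xml
<!-- Hypothetical profile wiring the build and deploy goals together -->
<profile>
  <id>f8-local-deploy</id>
  <build>
    <defaultGoal>clean install docker:build fabric8:json fabric8:apply</defaultGoal>
  </build>
</profile>
```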

We are going to deploy with Maven using the local deploy profile. As you can see, it contains many goals; other than common Maven goals like clean and install, which build the application, we see a few unfamiliar commands:

This goal builds the Docker image; in our case, it is built on the Fuse Integration Service "jboss-fuse-6/fis-java-openshift:1.0" base image. Here you can also set any environment variables for the container, and describe the exposed ports with the port elements. In our case, it also generates the run script that starts our Spring Camel application.
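A minimal configuration for this step might look like the following. The base image name comes from the text above, and the port matches the Rest DSL configuration; the image name and environment values are illustrative assumptions.

```xml
<!-- Sketch of a docker-maven-plugin configuration (image name and env values assumed) -->
<plugin>
  <groupId>org.jolokia</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>demo/salesreportfile:latest</name>
        <build>
          <from>jboss-fuse-6/fis-java-openshift:1.0</from>
          <env>
            <!-- container environment variables go here -->
            <JAVA_OPTIONS>-Xmx256m</JAVA_OPTIONS>
          </env>
          <ports>
            <port>9191</port>
          </ports>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
```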

This goal generates the kubernetes.json file. This JSON file is actually an OpenShift template: it describes a set of OpenShift objects, for example services, build configurations, and deployment configurations. The goal helps us create this template rather than having to write it from scratch.
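To give a feel for it, a heavily trimmed kubernetes.json might contain entries like the following. The service name is taken from the pod name seen later in this post; the labels and exact fields are assumptions.

```json
{
  "apiVersion" : "v1",
  "kind" : "List",
  "items" : [ {
    "apiVersion" : "v1",
    "kind" : "Service",
    "metadata" : { "name" : "salesreportfile" },
    "spec" : {
      "ports" : [ { "port" : 9191, "targetPort" : 9191 } ],
      "selector" : { "project" : "salesreportfile" }
    }
  } ]
}
```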

The last goal takes the JSON file we created in the previous step and pushes it up to OpenShift.
Normally we would have to set up the Kubernetes environment settings on the machine to tell the tool where to push the code. Since we have already installed the OpenShift client, it finds the current login token and namespace by parsing the user's ~/.kube/config.

That's the theory. Now, back to our project: on the command line, under the project folder, log in to OpenShift with its client tool. Make sure you have created a project called demo and deployed the mq-basic application on it, if you have not previously done so; this saves us from having to set the Kubernetes settings manually. Then simply type in the console:

  • mvn -Pf8-local-deploy

After successfully deploying the application on OpenShift, you will see it on the console.

Let's start playing with this application by uploading a sales report XML file to feed the sales report into the route.  
  • oc rsync ./reports/ salesreportfile-hzmya:/deployments/reportfolder

Go to the pod view and click Connect; this will take us to the application console.

And in the report route, notice it has picked up the XML file and processed it into our bean.

I made a web application that takes the JSON from the web service we published and displays it. (/sales/sales.html)

If you go back to the application console, the endpoint diagram shows all the routes that have been called.

That's all! Thanks!


