Recently I had an opportunity to work with Sanket Taur (IBM UK) and his team on a demo showcasing how Red Hat products can help speed up innovation with SAP landscapes. To be honest, I was shocked at how little time we were given to create the entire demo from scratch: less than a week, while still doing our day jobs, with only a couple of hours per day to work on it. If that doesn't convince you, I don't know what stronger proof there is of how quickly a cloud solution can go from development to production.

I encourage you to attend Sanket's session for more details; this blog is JUST my view on the demo and the things I did to make it run. The demo was a simple approval process for Sales Orders. The SOs are created in the core SAP platform (in this case ES5), so we need to create an application that talks to the core SAP platform and retrieves all the data needed.


First things first, we need a Kubernetes (k8s) platform. I then used Camel K, an enhanced framework based on Apache Camel (part of the Red Hat Integration product), to create the application. There was a mix-up during setup: instead of an OData v4 endpoint from ES5 for the sales orders, line items and customer details, I was given an OData v2 endpoint. (Needless to say, OData v4 is far more efficient than v2, so please do update it when you have the chance.) Note that Camel K only supports OData v4. HOWEVER, we can still get the results using plain REST API calls, so you are still covered.

This is how Camel helps you retrieve all the information needed. As you can see, I make several requests to gather the data and do some transformation to extract the results to return.

 

from("direct:getSO")

   .setHeader("Authorization").constant("Basic XXXX")
   .setHeader("Accept").constant("application/json")
   .toD("https://sapes5.sapdevcenter.com/sap/opu/odata/iwbep/GWSAMPLE_BASIC/SalesOrderSet('${header.SalesOrderID}')?bridgeEndpoint=true")
   .unmarshal().json()
   .setHeader("CustomerID").simple("${body[d][CustomerID]}")
   .marshal().json()
   .bean(this, "setSO(\"${body}\",\"${headers.CustomerID}\")")

;

 from("direct:getItems")
     .setHeader("Authorization").constant("Basic XXXX")
     .setHeader("Accept").constant("application/json")
     .toD("https://sapes5.sapdevcenter.com/sap/opu/odata/iwbep/GWSAMPLE_BASIC/SalesOrderSet('${header.SalesOrderID}')/ToLineItems?bridgeEndpoint=true")
     .unmarshal().json()
     .marshal().json()
     .bean(this, "setPO(\"${body}\")")
 ;

 from("direct:getCustomer")
     .setHeader("Authorization").constant("Basic XXXX")
     .setHeader("Accept").constant("application/json")
     .toD("https://sapes5.sapdevcenter.com/sap/opu/odata/iwbep/GWSAMPLE_BASIC/BusinessPartnerSet('${header.CustomerID}')?bridgeEndpoint=true")
     .unmarshal().json()
     .marshal().json()     
     .bean(this, "setCust(\"${body}\")")

 ;
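The routes hand their results to a few helper bean methods (setSO, setPO and setCust) that are not shown above. As a rough sketch of what they might look like, assuming they simply cache the payloads on the integration class so the API routes can combine them later (the fields and method bodies here are my illustration, not the demo's actual code):

// Hypothetical helper beans the routes call; the real implementations are in the repo
private String salesOrder;
private String customerId;
private String lineItems;
private String customer;

public void setSO(String body, String customerId) {
    // keep the raw Sales Order payload and the CustomerID extracted by the route
    this.salesOrder = body;
    this.customerId = customerId;
}

public void setPO(String body) {
    // keep the line-item payload
    this.lineItems = body;
}

public void setCust(String body) {
    // keep the customer (business partner) payload
    this.customer = body;
}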

 

The endpoints that trigger the calls to SAP are exposed as an API. Here I use Apicurio Studio to define the API contract, with two endpoints, fetch and fetchall: one returns the SO, PO and customer data for a single order, and the other returns a collection of them.

 

We can now export the definition as an OpenAPI Specification contract in YAML form (link to see the YAML). Save the file into the folder where your Camel application is, add the YAML file name to your Camel K application modeline, and Camel K will automatically map your code to this contract.

   // camel-k: language=java dependency=camel-openapi-java open-api=ibm-sap.yaml dependency=camel-jackson
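For reference, a stripped-down contract along those lines could look roughly like the sketch below; the paths, parameter and response details are my assumptions, and the real ibm-sap.yaml lives in the repo. Camel K generates the REST endpoints from the spec and forwards each operation to a direct: endpoint named after its operationId, which is how the contract gets bound to the Camel routes.

openapi: 3.0.2
info:
  title: SAP Sales Order API      # hypothetical title
  version: 1.0.0
paths:
  /fetch/{salesOrderId}:
    get:
      operationId: fetch
      parameters:
        - name: salesOrderId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: SO, PO and customer data for a single order
  /fetchall:
    get:
      operationId: fetchall
      responses:
        '200':
          description: A collection of sales orders with their details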

 

Using the Camel K CLI tool, run the following command to deploy the code to the OpenShift platform.

   kamel run SapOdata.java

 

And you should now see a microservice running. Notice how Camel K helps you: not only does it detect and load the libraries you need, it also containerises your code and runs it as an instance on the platform.

Go to my git repo to see the full code and running instructions.

 

Kafka sits in the middle to enable the event-driven architecture, so the SO approval application can notify the shopping cart client when an order has been approved.

Since everything was put together in a week, with everyone in different time zones, miscommunication was bound to happen. What I did not realize was that the client applications, the SO approval app and the shopping carts, were all written in JavaScript and had to communicate over HTTP. But Kafka only speaks the Kafka protocol! So I set up an HTTP Bridge in front of the Kafka cluster to translate between HTTP and the Kafka protocol.
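If the Kafka cluster is managed by the AMQ Streams (Strimzi) operator, the bridge can be deployed with a KafkaBridge custom resource. A minimal sketch, assuming a cluster named my-cluster and the default bridge port (the names here are placeholders, not the demo's actual values):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: sap-bridge                                # placeholder name
spec:
  replicas: 1
  # bootstrap address of the Kafka cluster created by the operator
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080                                    # HTTP port the bridge listens on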


And now clients can access the topics via HTTP endpoints. For more information on how to set it up, go to my GitHub repo for detailed instructions.
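To give an idea of what the JavaScript clients do, producing a record through the bridge's REST API looks roughly like this (the route hostname, topic name and payload are placeholders):

  curl -X POST http://sap-bridge-route.example.com/topics/so-approved \
    -H 'Content-Type: application/vnd.kafka.json.v2+json' \
    -d '{"records":[{"value":{"SalesOrderID":"0500000001","status":"APPROVED"}}]}'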

 

Last but not least, we need to migrate the SAP UI5 web applications to OpenShift. A UI5 app is basically a Node.js app, so we first create a Dockerfile to containerize it and push the resulting image to a container registry.
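A minimal Dockerfile for the UI5 Node.js app might look like the following; the base image, exposed port and start command are my assumptions rather than the exact ones from the demo repo:

FROM node:16-alpine
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install
COPY . .
# port the UI5 tooling serves on (adjust to your setup)
EXPOSE 8080
CMD ["npm", "start"]

After building and tagging the image (for example with docker build -t quay.io/<YOUR_REPO>/socreate .), push it to the registry: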

  docker push quay.io/<YOUR_REPO>/socreate

 

And deploy the application to OpenShift.

  oc new-app quay.io/<YOUR_REPO>/socreate:latest --as-deployment-config

 

BUT WAIT!! Since the UI5 server only binds to *localhost* (weird..), we need a proxy that can tunnel traffic to it. So I added an NGINX sidecar proxy running right next to the Node.js application, by adding the following container to the deployment configuration.

spec:
  containers:
    # NGINX sidecar added alongside the existing UI5 (Node.js) container
    - name: nginx
      image: quay.io/weimei79/nginx-sidecar
      ports:
        - containerPort: 8081
          protocol: TCP
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: Always
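The sidecar image ships with its own proxy configuration, but conceptually it does something like the nginx.conf below; this is my own sketch, assuming the UI5 server inside the pod listens on localhost:8080.

# Assumed proxy configuration, not the actual contents of quay.io/weimei79/nginx-sidecar
events {}
http {
  server {
    listen 8081;                          # port exposed by the sidecar container
    location / {
      proxy_pass http://localhost:8080;   # forward traffic to the UI5 app bound to localhost
    }
  }
}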


 

This starts the proxy. Since the NGINX proxy listens on port 8081, make sure you update the related service and route settings on OpenShift accordingly.

  oc expose dc socreate --port=8081
  oc expose svc socreate

 

And this is how you would migrate the UI5 application from a local SAP instance onto OpenShift. For more detailed migration instructions, check out my GitHub repo.

 

Once it's done, you can see all the applications running as containers on the cloud, ready to approve the SOs.

 

This is the actual developer view of our demo OpenShift platform.

 

Thank you Sanket for this fun ride and all the nail-biting moments, but that's the fun of IT, right? We work through problems, tackle issues and ultimately get everything done! 🙂 If you are a SAPer and want to join the container world of the cloud, what are you waiting for? Join the ride! This is the story of how we made SAP cloud native and event driven in four days.

To see the full version, be sure to attend Sanket’s session:

SAP & OpenShift: From classic ABAP development to cloud native applications: Use cases and reference architecture to implement with SAP Landscapes to unlock innovation enabled by Hybrid Cloud and Red Hat OpenShift.

Register here:

https://events.redhat.com/profile/form/index.cfm?PKformID=0x373237abcd&sc_cid=7013a000002w6W7AAI
