It's been a year and a half since I blogged about "Agile Integration architecture" (Gosh, time just flies). With the "microservices" and "cloud-native" hype, I was especially curious about how all these new concepts and technologies affect how we architect integration systems. If you pay close attention to the latest and greatest news from the Kubernetes community, I am sure you will hear a lot about the new "service mesh". And rumor has it that this is how integration can/should be done in the cloud-native world, but is that so? Anyone who has ever worked on an integration project will tell you it's a LOT more COMPLEX and can get worse over time. I did a talk with Christian Posta at Red Hat Tech Exchange, taking a more comprehensive view of how different Red Hat technologies are applied under different patterns when building integration solutions. In fact, he also wrote a great blog about it.


Since then, another topic has been brought up numerous times: what about serverless, and how would it impact integration? Does it mean the death of services? If we have all our microservices connected via serverless mechanisms in a service mesh, does that mean we go back to the old days of writing all the integration and transformation logic in our applications again? Obviously, this is a complicated question to answer, and new features keep popping up, but I am going to do my best to explain how I see everything fitting into the Agile Integration vision. Honestly, there is no single right answer when it comes to architecting systems; it requires constant refactoring. My aim is to describe a more general and flexible way of doing it, one that has less impact when change comes and that can adapt to change quickly. (That is what I call Agile Integration.)

Service Mesh 

I remember that last time I started on the agile integration reference architecture simply because of the lack of organization among microservices, and that still stands today. JUST because we have a service mesh doesn't mean it will magically solve the spaghetti connectivity if you are not careful. To set the ground: what a service mesh does is relieve developers of the repetitive, boring work in a distributed environment. Being cloud native with microservices means the system is more vulnerable to a chain-reaction disaster if not designed correctly; you are working with small bolts and gears in the system, and you never know how any small mishandling can impact the whole. Therefore we need to make each component more robust, failure-proof, and ideally able to recover from damage. A service mesh helps you get the BASICs ready by adding a sidecar next to your running application. Every microservice you create, no matter what it does, core business or content orchestration, needs some kind of failure-proofing, and a service mesh gives you all of that at the application networking level. As a developer, you no longer have to wrap every single one of your microservices with a circuit breaker and error retries, or handle versioning, deployment routing, and authorization yourself, concerns that have NOTHING to do with what the app is actually responsible for. These are common rules that apply to the whole cluster, and a service mesh is best for that: the policies are detached from the actual applications, can be managed centrally, and are applied individually on each sidecar, so the microservice running your app is protected behind it.
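To make that concrete, here is a minimal sketch (plain Java, no framework; the class and all its names are invented for this example) of the kind of retry and circuit-breaker boilerplate each microservice would otherwise carry in its own code, exactly the sort of concern a sidecar lifts out of the application:

```java
// Illustrative sketch: the retry + circuit-breaker boilerplate a service mesh
// sidecar removes from application code. All names are made up for this example.
import java.util.function.Supplier;

class NaiveCircuitBreaker {
    private final int failureThreshold;   // consecutive failures before the circuit opens
    private int consecutiveFailures = 0;
    private boolean open = false;

    NaiveCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    boolean isOpen() { return open; }

    // Attempt a remote operation with up to `maxRetries` extra tries.
    // Once too many calls fail in a row, fail fast with the fallback
    // instead of hammering the struggling downstream service.
    <T> T call(Supplier<T> remoteCall, int maxRetries, T fallback) {
        if (open) {
            return fallback;              // fail fast while the circuit is open
        }
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                T result = remoteCall.get();
                consecutiveFailures = 0;  // a success resets the failure count
                return result;
            } catch (RuntimeException e) {
                consecutiveFailures++;
                if (consecutiveFailures >= failureThreshold) {
                    open = true;          // trip the circuit
                    return fallback;
                }
            }
        }
        return fallback;                  // retries exhausted
    }
}
```

Multiply this by every service in the cluster, and by every other cross-cutting policy (authorization, routing, TLS), and the appeal of handling it once, centrally, at the sidecar becomes obvious.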

Another great feature of a service mesh is the ability to trace and observe incoming requests across the entire system. This certainly helps developers and operations gain more insight into the complex, spider-web-like connectivity.

Dealing with REAL Life Integration
But when it comes to actual application implementation, some folks think we can now rely on the service mesh to configure and connect everything. MAYBE, if you are developing only a couple of small microservices, you can probably get away with that. BUT that is not what happens in real life. For instance: formatting data to the right granularity (splitting/aggregation), routing based on the processing outcome of some content, more complex orchestration of service calls that requires precise rollback and business handling (Saga), and collecting triggered events; we can't possibly write all of this into the service mesh's config YAML file (I am just being practical :p). So I am sure we now have mutual agreement that integration logic still needs to be written somewhere. DON'T try to do it in your microservice alongside the business logic. That is why we need the conceptual layers in agile integration: the composite and core layers. Remember, I mean CONCEPTUAL, not physically one on top of another, but separating the responsibilities of the microservices so it's easier to locate, maintain, and organize your applications. That is a lesson learned from SOA. You will still need some kind of integration patterns to compose services with the right granularity for the receiving end.
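To illustrate what can't live in mesh config, here is a minimal plain-Java sketch of a composite-layer service that splits a batch and routes each record by its content. The names and the record format are invented for the example; in Camel, the Splitter and Content-Based Router enterprise integration patterns express the same thing declaratively:

```java
// Illustrative sketch of two classic EIPs that mesh config cannot express:
// a Splitter (break a batch into records) and a Content-Based Router
// (pick a destination for each record based on its content).
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CompositeService {
    // Splitter: one inbound comma-separated batch becomes individual records.
    static List<String> split(String batch) {
        List<String> records = new ArrayList<>();
        for (String record : batch.split(",")) {
            records.add(record.trim());
        }
        return records;
    }

    // Content-based router: choose a destination per record.
    static String route(String record) {
        if (record.startsWith("ORDER")) return "orders-service";
        if (record.startsWith("REFUND")) return "refunds-service";
        return "dead-letter";
    }

    // Compose the two: split the batch, then group records by destination
    // (the aggregation half of a splitter/aggregator pair).
    static Map<String, List<String>> dispatch(String batch) {
        Map<String, List<String>> byDestination = new HashMap<>();
        for (String record : split(batch)) {
            byDestination
                .computeIfAbsent(route(record), k -> new ArrayList<>())
                .add(record);
        }
        return byDestination;
    }
}
```

Even this toy version makes the point: the routing decision depends on message content, which is application-level logic, not network-level policy.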

Monitoring and tracing are crucial in ANY integration system, and let's face it, there are NOT JUST HTTP calls. The majority of the time there are events, and honestly, an event-driven reactive system is 100 times more flexible and modular than sticking a bunch of APIs together. Being able to collect this data requires going beyond simple request tracing.
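One common technique for tracing beyond HTTP is to carry a correlation ID inside the event itself, so hops across queues and topics can still be stitched together afterwards. Here is a minimal sketch, with all names invented for the example:

```java
// Illustrative sketch: propagating a correlation ID through event headers
// so non-HTTP hops (queues, topics) can still be traced end to end.
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

class TracedEvent {
    static final String CORRELATION_ID = "correlation-id";

    final Map<String, String> headers = new HashMap<>();
    final String body;

    TracedEvent(String body) { this.body = body; }

    // Stamp a new correlation ID at the edge of the system.
    static TracedEvent atIngress(String body) {
        TracedEvent e = new TracedEvent(body);
        e.headers.put(CORRELATION_ID, UUID.randomUUID().toString());
        return e;
    }

    // Every intermediate service copies the ID onto the events it emits,
    // so all hops of one business transaction share the same ID.
    TracedEvent derive(String newBody) {
        TracedEvent e = new TracedEvent(newBody);
        e.headers.put(CORRELATION_ID, headers.get(CORRELATION_ID));
        return e;
    }
}
```

A tracing backend can then group everything that shares one ID into a single end-to-end view, even when no sidecar ever saw the traffic.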


Serverless
I had to talk about this. The whole serverless development, deployment, and runtime concept can take agile integration to another level, not just conceptually but physically too. The ability to quickly spin up a piece of integration logic and to scale freely up and down, that is what I call true agility. I was thrilled to see Nicola demonstrate Camel K on Knative. Camel opens up a wide range of connectivity possibilities for serverless calls. To me, the core spirit of serverless is (for now):

  • Being able to quickly produce and start an application without complex configuration or a heavy runtime. 
  • Elastic resource allocation that responds to load. 

And what integration brings to the table is the ability for the system to respond to more events from a broader range of endpoints, for instance IoT, SOAP, messaging, and other protocols that are not simple HTTP calls.
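As a sketch of that role, assuming two invented payload formats (an IoT JSON reading and a legacy pipe-delimited record), a small stateless function can normalize whatever arrives into one canonical event before it enters the system; in practice a Camel component would handle the protocol side for you:

```java
// Illustrative sketch: a stateless, serverless-friendly normalizer that turns
// payloads from different kinds of endpoints into one canonical
// "deviceId:value" measurement. Both input formats are invented for this
// example; real code would use a proper JSON library and protocol components.
class EventNormalizer {
    // An IoT-style JSON reading, e.g. {"id":"sensor-7","value":21.5}
    static String fromIotJson(String json) {
        // naive regex extraction, good enough for the sketch
        String id = json.replaceAll(".*\"id\"\\s*:\\s*\"([^\"]+)\".*", "$1");
        String value = json.replaceAll(".*\"value\"\\s*:\\s*([0-9.]+).*", "$1");
        return id + ":" + value;
    }

    // A legacy pipe-delimited record, e.g. sensor-7|21.5|C
    static String fromLegacyRecord(String record) {
        String[] parts = record.split("\\|");
        return parts[0] + ":" + parts[1];
    }
}
```

Because the function holds no state, it can scale to zero between bursts and fan out under load, which is exactly the serverless sweet spot described above.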


This is just a HIGH-level overview of how I see integration taking shape in the future of a serverless, cloud-native world. Of course, I have not touched upon many topics, like APIs, event-driven architecture, and self-service ability; they deserve separate blogs. I might do another later this week. Again, these are just my 2 cents, coming from a more practical, real-life point of view.





Dynamic Views theme. Powered by Blogger.