Serverless should not be optional; it should be available in every cloud native environment. Of course, not every application should adopt serverless. But if you look closer, many modules in an application are stateless, often stashed away in a corner and needed only occasionally. Some need to handle loads that fluctuate wildly. These are the perfect candidates to run as serverless.

Serverless lets developers focus on code instead of worrying about infrastructure setup. Providing this environment means pairing it with proper monitoring and a reliable backbone that can handle a large throughput of events.

This is what Serverless Integration (Kubernetes-based) looks like:

Everything is based on containers. Why care about the underlying technology of a serverless platform? Shouldn't it all be transparent? If your goal is to build or host on a hybrid/multi-cloud that is free from vendor lock-in, then developers are not the only people in the picture. You will eventually need cooperation between teams and will work with all sorts of applications, from traditional services to microservices. A unified, standardized technology base flattens the learning curve for teams adopting new kinds of applications and makes maintenance less complex.

From development to the platform, everything should work together seamlessly and be easy to automate and manage.



Let’s break down all the elements. 


The Platform:  Provides full infrastructure and platform management, with self-service capability, service discovery, and enforcement of container policy and compliance.


The Serverless Platform:  Handles autoscaling of the functions/applications, abstracts away the underlying infrastructure, keeps revisions of deployments for easy rollback, and unifies events for publishers and consumers.
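
As a concrete illustration, here is a minimal sketch of such a deployment on Knative Serving, a common choice for this layer on Kubernetes. The service name and image are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter                                  # hypothetical function name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"   # cap the burst
    spec:
      containers:
        - image: quay.io/example/greeter:v1      # hypothetical image
```

Each change to the template creates a new immutable revision, which is what makes rollback a simple pointer switch rather than a redeployment.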


The Event Mesh:  Events are published to the mesh and delivered to distributed consumers. The basic structure of the events is consistent and should be portable across platforms. All events are flexible, governed, and delivered quickly. Powered by a reliable streaming network, the mesh can also store event streams for tracing, auditing, or later replay into big data and AI/ML datasets.
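
As an example, assuming the mesh is backed by a Knative Eventing Broker carrying CloudEvents, a consumer could subscribe to a single event type with a Trigger like this (the event type and service name are hypothetical):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created    # match on the CloudEvents "type" attribute
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor              # the consuming function
```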


The Integration Functions:  Typical characteristics of serverless integration functions are that they are small, lightweight, stateless, and event driven. These characteristics make the application elastic, tackling the under/over-provisioning problems we face today. From the operations side, these are applications that cease when idle and quickly spin up when triggered by events, giving better resource optimization. For developers, they are simple, modular code snippets that get spun up automatically, so they can focus on code instead of deployment issues. Integration functions typically handle routing and transformation of the data payload in events, along with other composition and orchestration concerns. They are also commonly used for connecting to external services and bridging between systems.
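
As a sketch, a routing-and-transformation function of this kind could be written as an Apache Camel K integration in the YAML DSL; the event name and downstream endpoint are hypothetical stand-ins:

```yaml
# routing-function.yaml -- deploy with: kamel run routing-function.yaml
- from:
    uri: "knative:event/order.created"     # spun up when this CloudEvent arrives
    steps:
      - transform:
          simple: "routed order: ${body}"  # transform the event payload
      - to: "log:orders"                   # stand-in for the real downstream system
```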


The Microservices, or long-running applications: These are the long-running applications that hold state, are heavier, or are being called constantly. Some of them publish events to the mesh that trigger serverless functions to spin up; others are simply additional consumers of the events.
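
If such a service happens to use Camel as well, publishing to the mesh could look like the following sketch, where the timer is a hypothetical stand-in for real business logic:

```yaml
- from:
    uri: "timer:order-feed?period=60000"   # stand-in for real business logic
    steps:
      - setBody:
          simple: "order ${exchangeId}"
      - to: "knative:event/order.created"  # published as a CloudEvent to the mesh
```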


The Service Registry:  For sharing standard event schemas and API designs across API and event-driven architectures, whether the events are consumed by serverless functions or by regular applications. It decouples the data structure from the code and manages data types at runtime.
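
For example, publishers and consumers could agree on a hypothetical Avro schema like the one below, stored and versioned in a registry such as Apicurio:

```json
{
  "type": "record",
  "name": "OrderCreated",
  "namespace": "com.example.events",
  "fields": [
    { "name": "id",     "type": "string" },
    { "name": "amount", "type": "double" }
  ]
}
```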






The API Management:  A gateway to secure and manage exposed API endpoints, with access control and rate limits for consumers, management consoles, and data analytics for the endpoints being accessed.


These are, again, my two cents on the components you need in order to deliver a complete serverless application environment.
