Written by Poonam Chandersy, Technical Content Writer
Royal Cyber’s Cloud Enterprise Architect, Sumair Baloch, helps us dive deep into the features that set event-driven serverless architecture apart from other architectures and gives us insight into its benefits, limitations, and everything in between! With hands-on experience on GCP and AWS, Sumair has worked on projects at both the architect and developer level, using cloud services to design and develop enterprise-level applications that integrate with different business flows and overall dynamics.
Event-driven serverless architecture combines two separate ways of designing systems and is yet unique. The architecture is counterintuitive at first: the typical approach of a server (or mini servers) sitting in front of the database is eliminated. Instead, a client-facing application is in direct contact with the database, and cloud functions (mini servers) are triggered only after the information is stored in the database. This architecture can be achieved in any public cloud environment and can also be run on-premises. During a tête-à-tête with Sumair, he shared some unique aspects of this architecture.
Q1. Given the direct interaction between the application and the database, how does one keep the database secure from someone simply copying the database keys?
Ans: This is a frequently asked question among cloud architects as well. The lack of a middleman between the UI and the database is seen as a huge vulnerability. Google Cloud Platform provides a toolset called “Rules” where one can write custom rules for who is allowed to access the data – whether to add, modify, or delete documents in a collection. You can create multiple groups and define user authentication; thus, if a person accessing the database does not have the required role, they cannot interact with the database at all. Basically, “Rules” acts as a filter for every interaction with the database.
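As an illustrative sketch of such rules (the collection name and the custom `role` claim here are hypothetical, not from the interview), a Firestore security rule that lets any signed-in user read but restricts writes to users holding a specific role might look like:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical "orders" collection: any authenticated user may read,
    // but only users whose auth token carries the custom claim
    // role == 'editor' may add, modify, or delete documents.
    match /orders/{orderId} {
      allow read: if request.auth != null;
      allow write: if request.auth != null
                   && request.auth.token.role == 'editor';
    }
  }
}
```

With a rule like this in place, a stolen client key alone is not enough: every request is evaluated against the caller’s authenticated identity before it reaches the data.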
Q2. When it comes to triggering a cloud function, is it like the concept of flagging/toggling as seen in DevOps?
Ans: Yes, it is flagging and tagging, where we use flags to trigger our functions because we do not want any useless cloud function running. We also do not want a cloud function to start automatically and then sit in an idle state, as that would be a waste of resources. In this case, the flags act as a filter that lets the cloud function know action needs to be performed only if the flag exists.
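The flag-as-filter idea can be sketched in a few lines of plain Python (the function and field names here are hypothetical, not a real Firebase API):

```python
# Minimal sketch of how a cloud function can inspect a flag on the
# written document and exit immediately when no work is needed, so it
# never wastes compute sitting idle on events that require no action.

def on_document_write(event: dict) -> str:
    """Simulated trigger handler: act only when the flag is set."""
    doc = event.get("data", {})
    if not doc.get("needs_processing", False):
        # No flag present: return right away.
        return "skipped"
    # Flag present: do the real work here.
    return "processed"

print(on_document_write({"data": {"needs_processing": True}}))  # processed
print(on_document_write({"data": {"other_field": 1}}))          # skipped
```

The early return is the whole point: the function is invoked by the event, but it only performs work when the flag tells it to.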
Q3. How can industries like banking benefit from this architecture?
Ans: In my experience with banking applications, stability and security are a high priority in their systems. One of the significant concerns banking applications deal with is compliance, such as PCI compliance. Through triggers on Firestore, one can follow the transaction flow – whether it is pending or completed – a handy feature for banking applications. One can also expand beyond cloud functions and connect other GCP services via cloud functions, adding more capabilities to the general application. In short, the flexibility of this architecture makes it viable for almost any business need.
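A hedged sketch of following the transaction flow through a trigger (the status values and field names are assumptions for illustration, modeled on the before/after document snapshots a Firestore trigger receives):

```python
# Simulated trigger handler that classifies what happened to a banking
# transaction between two snapshots of the same document, as described
# above: did it enter a pending state, or did it complete?

def on_transaction_update(before: dict, after: dict) -> str:
    """Compare the old and new snapshots of a transaction document."""
    old, new = before.get("status"), after.get("status")
    if old == new:
        return "no-op"
    if new == "completed":
        # e.g. write an audit record or notify a downstream service here.
        return "transaction settled"
    if new == "pending":
        return "transaction in flight"
    return f"status changed: {old} -> {new}"

print(on_transaction_update({"status": "pending"}, {"status": "completed"}))
```

In a real deployment the audit or notification step would fan out to other GCP services, which is where the answer’s point about extending the architecture comes in.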
Q4. Can you add as many cloud functions as you want?
Ans: When adding cloud functions, you can go as high as you want, but the best approach is to keep it between 15 and 20, because each cloud function is a separate entity. Since it is a loosely coupled resource, you would not want too many loose ends. You can also customize a cloud function to do multiple things, depending on the use cases.
Q5. What would be the effect of eliminating the Firebase and Firestore services when developing this architecture?
Ans: Well, you employ Pub/Sub with various topics connected to the cloud functions; based on these topics, you could trigger the cloud functions, and the database could sit behind the cloud functions, store the data, and send a response back to the user.
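The topic-per-function routing described above can be sketched as follows (topic names and handlers are hypothetical; a real deployment would use the Pub/Sub service rather than an in-process dictionary):

```python
# Illustrative sketch of the Pub/Sub-style design: each topic routes
# messages to a dedicated function, which stores the data and responds.

from typing import Callable

def handle_order(message: dict) -> str:
    # In the real architecture this function would write to the database.
    return f"stored order {message['id']}"

def handle_refund(message: dict) -> str:
    return f"stored refund {message['id']}"

# One subscriber function per topic.
TOPIC_HANDLERS: dict[str, Callable[[dict], str]] = {
    "orders": handle_order,
    "refunds": handle_refund,
}

def publish(topic: str, message: dict) -> str:
    """Simulate publishing: route the message to the subscribed function."""
    handler = TOPIC_HANDLERS.get(topic)
    if handler is None:
        raise ValueError(f"no subscriber for topic {topic!r}")
    return handler(message)

print(publish("orders", {"id": 42}))  # stored order 42
```

The key structural point survives the simplification: the database sits behind the functions, and the topic name alone decides which function runs.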
Q6. Given its unique design, how does one go about debugging within this architecture?
Ans: Firebase provides a toolchain that can be downloaded and installed on your system. You can add your project to it, and any cloud function can be mocked in your local environment. So, you can do what you want with a specific cloud function, see the results on your local machine, and deploy it to the cloud once everything is working. If it is not working, you can use the logs to figure out what is going on and then debug the error.
Q7. What are the limitations of Event-Driven Serverless Architecture?
Ans: A significant limitation of this architecture is vendor lock-in: once you start using these systems, you are tied to one vendor, making it difficult to move your application to a different environment. In the GCP use case, for example, the general application relies heavily on the Firebase and Firestore services in order to work.
However, you can create a middleware layer and connect your cloud functions to it on one side; you could then move your UI front end to another environment and connect any service to that middleware, such as a microservice or an Azure Function. Reworking the cloud functions will still be required, though, because they have moved to a different environment.
Q8. What are the benefits of this architecture?
Ans: One significant benefit is speed, particularly UI responsiveness. Another advantage is that you can scale your applications horizontally, at both the hardware and software levels, since multiple copies can be created via cloud functions. Usually, we have a single core running various processes in parallel, but now you can have multiple cores running various processes. With this, you can scale your cloud functions and processes exponentially.
Through cloud functions, you can also control how the chunking of resources takes place. For instance, if you have 1,000 resources and break them into 10 chunks, each chunk has 100 resources. So, ten cloud functions each scan 100 resources, together covering all 1,000, thus bringing down the latency compared with running these processes on a single core.
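The chunking scheme above can be sketched in a few lines (the helper name is our own; in the architecture each chunk would be handed to its own cloud function instance):

```python
# Split 1,000 resources into 10 chunks of 100 each, so ten function
# instances can scan one chunk apiece in parallel instead of a single
# core scanning all 1,000 serially.

def chunk(resources: list, num_chunks: int) -> list[list]:
    """Split resources into num_chunks roughly equal chunks."""
    size = (len(resources) + num_chunks - 1) // num_chunks  # ceiling division
    return [resources[i:i + size] for i in range(0, len(resources), size)]

resources = list(range(1000))
chunks = chunk(resources, 10)
print(len(chunks), len(chunks[0]))  # 10 100
```

Each chunk is independent, which is exactly what makes it safe to fan the work out across separate, loosely coupled function invocations.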
Q9. What is the ideal project team required to deploy such an environment?
Ans: First, you will need an architect to design the architecture and a DevOps member to create the CI/CD pipelines before deploying the code to the cloud functions. The ideal team consists of four to five members: an Architect, a DevOps engineer, two to three Developers, and one QA.
We at Royal Cyber provide GCP consulting services, including re-platforming, re-architecting, and re-factoring of a cloud environment to be more cloud-native. In addition, our Google Cloud Practice has certified GCP professional Architects, DevOps engineers, Developers, and Data Engineers who can help grow your business at an accelerated rate and unlock your business's true potential. Join us for an Event-Driven Serverless Webinar to learn more, or contact us for further details.