My Double Life in Microservices: From Homegrown to MuleSoft

As a software engineer who has been building microservices for more than six years now, I’ve led sort of a double life. Before I explain this dichotomy, I should provide a little background about myself. I’ve spent my whole career as a software plumber writing infrastructure code for big service platforms and more recently as a back-end developer implementing APIs and REST services. In this post, I will compare my experience building microservices through two very different approaches, which I think of as two different lives.

Reusing Knowledge and Experience in Technology

Throughout my career, I typically reused my knowledge of and experience with a technology to move from one thing to another, quite unrelated, thing. For example, at IBM I spent a lot of time working on systems management platforms using Java and all things related to that environment. Then I jumped to a server product that synchronized a mail system with mobile devices, again in the Java ecosystem.

In my most recent job switch, to Green Irony, I was hired because I have extensive experience building microservices, particularly for solutions in the insurance industry. For the first time in my career, I was asked to work on the same thing, microservices for insurance, but with a completely different technology stack.

Building an Integration Network from Scratch

In my previous job, we initially implemented a monolithic Enterprise Service Bus (ESB) to build an insurance solution. It became cumbersome to maintain and enhance, so we migrated to microservices, rolling our own implementation. To do this, we:

  • Used Node.js running on AWS EC2.
  • Created templates for new microservices, including code templates for the Node.js-based service as well as Terraform scripts for building and packaging each microservice into a Docker container.
  • Built an open-source Node.js server stack that was the basis of each microservice.
  • Used Swagger to define our APIs, along with a library that provided request validation based on the Swagger specification. This was one of the best decisions we made (see the sketch after this list).
  • Built a set of reusable libraries for things like configuration management, authentication and authorization, and request-based security, to name a few.
  • Built up a list of NPM dependencies that we relied on to perform various tasks.
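
To give a flavor of what that Swagger-driven validation bought us, here is a minimal TypeScript sketch. It is illustrative only: the schema is inlined rather than resolved from a spec document, Ajv stands in for the Swagger library we actually used, and names like `validateCreatePolicy` are invented.

```typescript
// Minimal sketch of schema-driven request validation, the kind of thing
// our Swagger layer gave us for free. All names here are hypothetical;
// in our real stack the schema came from the Swagger document itself.
import Ajv from "ajv";
import type { Request, Response, NextFunction } from "express";

const ajv = new Ajv();

// Inlined for brevity; imagine this resolved from the Swagger spec.
const createPolicyBodySchema = {
  type: "object",
  required: ["policyNumber", "holderName"],
  properties: {
    policyNumber: { type: "string" },
    holderName: { type: "string" },
    premium: { type: "number" },
  },
  additionalProperties: false,
};

const validateBody = ajv.compile(createPolicyBodySchema);

// Express middleware that rejects malformed requests before they ever
// reach the business logic.
export function validateCreatePolicy(req: Request, res: Response, next: NextFunction): void {
  if (!validateBody(req.body)) {
    res.status(400).json({ errors: validateBody.errors });
    return;
  }
  next();
}
```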

Looking back at that list of accomplishments, only one item directly contributed to solving the customer’s business needs: defining our APIs in Swagger. Had we known about MuleSoft, we could have eliminated a lot of that work, along with the implicit cost of maintaining it and keeping it current as dependencies and technology changed.

Understanding the Value of MuleSoft

When I first considered moving to Green Irony, I was given a demo of MuleSoft. I was shown how to build flows using Anypoint Studio, dragging and dropping connectors. My first thought was, “What do you need me for? Almost anybody can build stuff with a drag-and-drop IDE like that.”

It took me some time to ponder and absorb MuleSoft. Then, at a second, more technical demonstration, I was shown how DataWeave makes it fairly easy to access and transform data, converting between third-party API payloads and our project’s data types. And it all clicked: MuleSoft’s System, Process, and Experience API (SAPI, PAPI, and EAPI) architecture matched what we had been doing for our microservices.

We built APIs called “bindings” that were responsible for communicating directly with third-party APIs. These were our SAPIs. We built mediation services that contained business logic and utilized one or more bindings. These were our PAPIs. Finally, we built an API that used one or more bindings and was provided to the ultimate consumer of our APIs. These were our EAPIs.
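
In our old Node.js terms, that layering looked roughly like the sketch below. Every name, endpoint, and calculation here is hypothetical; it only shows how a binding (SAPI), a mediation service (PAPI), and the consumer-facing API (EAPI) stacked on one another.

```typescript
// Hypothetical sketch of our three layers; assumes Node 18+ for global fetch.

// SAPI ("binding"): a thin wrapper that talks directly to one third-party API.
async function carrierRatingBinding(policyId: string): Promise<number> {
  const res = await fetch(`https://carrier.example.com/rates/${policyId}`);
  const data = (await res.json()) as { rate: number };
  return data.rate;
}

// PAPI (mediation service): business logic composed from one or more bindings.
async function quoteMediation(policyId: string): Promise<{ policyId: string; premium: number }> {
  const rate = await carrierRatingBinding(policyId);
  return { policyId, premium: rate * 1.1 }; // e.g. apply a markup rule
}

// EAPI: the API actually handed to the ultimate consumer.
export async function getQuote(policyId: string): Promise<{ policyId: string; premium: number }> {
  return quoteMediation(policyId);
}
```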

Once all this clicked for me, I realized that MuleSoft was less about building flows with Anypoint Studio and more about accelerating our ability to build solutions for our customers.

Old Life vs. New Life

In my previous “roll your own” (RYO) life, we spent a lot of time and effort integrating various libraries to read and parse Swagger, provide request validation, and map data to and from TypeScript definitions. As Swagger grew into the OpenAPI Specification, we were stuck on Swagger 2 because we never had the time to pay down the technical debt required to update our Swagger integration. We spent quite a bit of time and effort trying to find usable tools to generate TypeScript definitions from Swagger docs, or vice versa, to generate Swagger documentation from TypeScript types. On top of that, we had to build client code that understood how to talk to the various services we were building, and we had to figure out how to secure the various APIs so that only authorized users had access.

In my new MuleSoft life, we design APIs using RAML in Design Center. When we publish the API to Exchange, a library is automatically generated that provides connectors for consuming the API. When we implement the API, Studio and APIkit automatically create the routes needed for each of the API’s endpoints and provide automatic request validation, ensuring that incoming requests are properly formed and have all required values.

In my previous RYO life, much of our effort went into hand-writing code to transform and map data from one API to another. We had a homegrown data-mapper library, but it was often difficult to figure out how to make it do exactly what you needed. That difficulty led to varied and at times convoluted implementations, which made reading and understanding another developer’s mappings confusing, and enhancing and maintaining them cumbersome.
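
For flavor, here is what that hand-written mapping code tended to look like. The types and field names are invented; the point is that every rename, coercion, and default had to be spelled out, and every developer spelled them out a little differently.

```typescript
// Hypothetical example of a hand-rolled mapping from a third-party
// payload to our internal type; every field had to be handled manually.

interface ThirdPartyPolicy {
  PolicyNum: string;
  InsuredParty: { FirstName: string; LastName: string };
  AnnualPremium: string; // this vendor sent numbers as strings
}

interface InternalPolicy {
  policyNumber: string;
  holderName: string;
  premium: number;
}

function mapPolicy(src: ThirdPartyPolicy): InternalPolicy {
  return {
    policyNumber: src.PolicyNum,
    holderName: `${src.InsuredParty.FirstName} ${src.InsuredParty.LastName}`.trim(),
    premium: Number(src.AnnualPremium) || 0, // coerce, with a defensive default
  };
}
```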

In my new MuleSoft life, there is DataWeave, MuleSoft’s expression language for accessing and transforming the data that travels through a Mule application. It makes it very easy to read XML and write JSON, and vice versa, and you can easily combine data from multiple sources and multiple formats. It’s functional in nature, so it provides a lot of powerful features for implementing complex mappings.

In RYO life, unit testing was difficult. Typically, more effort was spent figuring out how to scaffold code and mock external resources than actually writing the test cases and verifying the code. Often, instead of unit tests, we wrote integration tests that relied on access to actual services or on fairly kludgy mock solutions. And when we ran unit tests, the code was not in the same runtime environment as when it ran in the server container on AWS; sometimes code passed under test but failed in the actual runtime, because the environment was too hard to simulate.
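
Here’s a small sketch of the kind of HTTP mocking we leaned on, assuming a Jest-style runner with axios and nock installed. The service, route, and function are all invented for illustration.

```typescript
// Sketch of intercepting an outbound HTTP call in a test so it never
// touches the network. Assumes Jest globals plus axios and nock.
import axios from "axios";
import nock from "nock";

// The code under test: a hypothetical binding that calls a real service.
async function fetchRate(policyId: string): Promise<number> {
  const res = await axios.get<{ rate: number }>(`https://carrier.example.com/rates/${policyId}`);
  return res.data.rate;
}

test("fetchRate returns the carrier's rate", async () => {
  // Stub the third-party endpoint with a canned response.
  nock("https://carrier.example.com").get("/rates/P-123").reply(200, { rate: 42 });

  expect(await fetchRate("P-123")).toBe(42);
});
```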

In MuleSoft life, unit testing is significantly easier. In Studio, you right-click a flow and select MUnit -> “Create blank test for this flow”. If the flow uses connectors to issue HTTP or SOAP requests, or uses a custom Mule connector to invoke a PAPI or a SAPI, you just mock that connector and provide a mocked response. If you need more granularity to facilitate testing, you can refactor larger flows into sub-flows. You can easily set up the Mule event to mimic actual incoming requests, and/or initialize flow variables to further emulate the request environment. When you run the MUnit tests, Studio actually starts up your Mule application, so your code runs in the same environment as when it’s live. There’s no need to spend lots of time figuring out how to emulate and scaffold your runtime environment.

MuleSoft for the Win

In my RYO life, I spent lots of time implementing system infrastructure and building custom solutions for security, authentication and authorization on services and endpoints, parsing Swagger, and providing runtime request validation. As libraries and technologies progressed, we automatically incurred technical debt just to bring our custom implementations up to the latest and greatest version of each library or technology we depended on. Often we stayed behind on older technology, like Swagger 2 instead of OpenAPI, because we couldn’t afford the time or effort to modernize our custom solution.

MuleSoft is a wonderful accelerator that allows us to focus on the problem we are trying to solve for our customers instead of building and maintaining a custom technical stack just to enable working on the actual solution. You still need experienced developers who understand how to tackle enterprise-level problems, but MuleSoft allows for robust solutions with much less time and effort. Developers can focus on the important parts of building solutions for their customers.

If you’d like to dive deeper into the topic, check out the “Adapt vs. Adopt: The Value of Anypoint Platform” whitepaper. It’s a deep dive into the technical, business, and financial considerations an organization must weigh when evaluating building a homegrown network versus investing in MuleSoft.