The Good, the Bad and the VertX

Shay Dratler
6 min read · May 31, 2021

Throughout my years as a software developer, I have been fortunate enough to ride many waves in the tech industry. Learning how to detect a storm before its arrival became a skill I perfected over time, and it ensured smooth sailing on more than one occasion. One of these storms appeared in the form of a system reaching the end of its life. Since the system served many customers, the team and I understood that we needed to act quickly and find a replacement with minimal customer impact.

How do we replace a production system in the most optimal way?

Realizing that we had to replace our current solution, we set out to understand which technology would guarantee the smoothest transition and the best long-term solution for our customers.

PROCESS

As an initial step, we started investigating various technologies, including serverless options (such as AWS Lambda) and other Java frameworks (such as Spring Boot and Jooby). We also wanted a non-blocking, microservice-oriented framework, and once we came across VertX, we felt we had hit the jackpot. We soon realized VertX had great potential and became curious about what it has to offer. Our goal was to consume input from Kafka, enrich it with more data, submit a request to a 3rd party API, and send the outcome to another Kafka topic.
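To make that concrete, here is a minimal sketch of such a flow. It assumes the vertx-kafka-client and vertx-web-client modules, local topics named "input" and "output", and a hypothetical enrichment endpoint; the real services were of course more involved.

import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.client.WebClient;
import io.vertx.kafka.client.consumer.KafkaConsumer;
import io.vertx.kafka.client.producer.KafkaProducer;
import io.vertx.kafka.client.producer.KafkaProducerRecord;

import java.util.Map;

public class PipelineVerticle extends AbstractVerticle {
    @Override
    public void start() {
        Map<String, String> consumerConfig = Map.of(
            "bootstrap.servers", "localhost:9092",
            "group.id", "pipeline",
            "key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        Map<String, String> producerConfig = Map.of(
            "bootstrap.servers", "localhost:9092",
            "key.serializer", "org.apache.kafka.common.serialization.StringSerializer",
            "value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, consumerConfig);
        KafkaProducer<String, String> producer = KafkaProducer.create(vertx, producerConfig);
        WebClient client = WebClient.create(vertx);

        consumer.handler(record -> {
            // Enrich the incoming message through a 3rd party API (hypothetical host and path),
            // then forward the outcome to the output topic
            client.get(443, "third-party.example.com", "/enrich/" + record.value())
                  .ssl(true)
                  .send(response -> {
                      if (response.succeeded()) {
                          producer.write(
                              KafkaProducerRecord.create("output", response.result().bodyAsString()));
                      }
                  });
        });
        consumer.subscribe("input");
    }
}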

From there on out, VertX became the obvious choice — and we have been working with it for the last eight months.

Although VertX proved worthy and suitable as a production system, we still worried about how to approach the phase of replacing the system while millions of live users were still using it in real time. Compromising our customers' needs and ongoing work was out of the question; we needed to find the right technology stack. To put things simply, we needed to replace a working solution, one that had to be retired, without impacting our users in the slightest.

BUT HOW?

First, we mapped out our current solution at a high level and broke down what the previous system did and what our target system would do.

The High-Level Design (HLD) looked like this:

We then zoomed into the solution, observing the services at a high-level overview, once again:

We decided to break the transition down into four major stages, each handling a different part of the request's life cycle within the solution:

  1. The Kafka Service stage will consume and produce messages
  2. The Request Enrichment stage will adapt App A’s request to App B’s needs and vice versa.
  3. The Cache Data stage will store the data needed to enrich requests, fetched from 3rd party APIs that don’t handle high volume well.
  4. The Request Submitter stage will submit requests to App B and then wait until each request has been fulfilled by App B (synchronously); a single request can take up to 5 minutes.

Moving on from this high-level design, we started breaking our services down into smaller verticles. Here is a brief, abstract view of how each of the four services was broken down into Verticles and Worker Verticles.

By breaking the original plan down into Verticles and Worker Verticles, we were able to take advantage of VertX's ability to scale, as sketched below.
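As a rough sketch, with hypothetical verticle class names matching the stages above, the deployment could look like this, with the slow Request Submitter running as worker instances so it never blocks an event loop:

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Bootstrap {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Hypothetical verticle classes, one per stage of the solution
        vertx.deployVerticle(new KafkaServiceVerticle());
        vertx.deployVerticle(new RequestEnrichmentVerticle());
        vertx.deployVerticle(new CacheDataVerticle());

        // The slow, synchronous submissions run on worker threads and scale out horizontally
        vertx.deployVerticle(RequestSubmitterVerticle.class.getName(),
            new DeploymentOptions().setWorker(true).setInstances(4));
    }
}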

RESULTS

Looking back at this decision, the pros of moving to VertX clearly outweigh the cons. That is not to say the adoption did not present challenges along the way; as always, though, these growing pains quickly turned into learning opportunities for our team.

Benefits

  • Flexible: VertX presents solutions for many scenarios, whether it’s working with Redis, using an RDBMS, or simply making web client calls. These solutions come in all shapes and sizes, so you can pick exactly what you need.
  • Lightweight: VertX’s footprint is very small, especially compared to other frameworks on the market.
  • Time-saving: VertX starts quickly, which shortens restart and deployment time, whether on Kubernetes or other virtualized environments.
  • Best of both worlds: VertX combines the structure of Java with the asynchronous, event-driven style of JavaScript.
  • Modular: much like in surgery, without understanding all the moving parts, the process is doomed from the start. This may seem obvious to non-Java developers, but those coming from Java frameworks (such as Play or Spring) will appreciate that VertX exposes only the mandatory, simple properties, which keeps the whole setup easy to comprehend.
  • Many more exciting features: worker verticles, the EventBus, the event-loop flow, the Promise/Future syntax, the built-in JSON parser, and more. A small taste of the event bus and JSON support is sketched below.
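For that taste, here is a minimal sketch of an event-bus request/reply round trip using the built-in JSON support; the "ping" address is just an example:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class PingVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Reply to every request arriving on the "ping" address
        vertx.eventBus().consumer("ping", message ->
            message.reply(new JsonObject().put("status", "ok")));

        // Send a request and handle the JSON reply, no external parser needed
        vertx.eventBus().request("ping", new JsonObject().put("hello", "vertx"), reply -> {
            if (reply.succeeded()) {
                JsonObject body = (JsonObject) reply.result().body();
                System.out.println(body.encodePrettily());
            }
        });
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new PingVerticle());
    }
}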

Challenges

  • Bottlenecks: while working with VertX, you are expected to break your work down into small microservices and follow the event-loop flow. Otherwise, you might encounter a bottleneck, or worse, crosstalk between requests.

For example:

public class SampleVerticle extends AbstractVerticle {
    // Shared mutable state: every message handled writes to the same field
    private String crossTalkStringOne;

    @Override
    public void start(Promise<Void> promise) {
        WebClient client = WebClient.create(vertx, options); // options defined elsewhere
        vertx.eventBus().consumer("address").handler(message -> {
            crossTalkStringOne = message.body().toString();
            // Slow code / API call: by the time the response arrives, another
            // message may already have overwritten crossTalkStringOne (crosstalk)
            client.get("/api/id/" + crossTalkStringOne).send(handler -> {
                // Here you might get problems
            });
        });
        promise.complete();
    }
}

The solution we propose is to monitor with metrics. VertX’s metrics SPI is very rich and has many integrations, so bottlenecks are easy to spot. We used the Micrometer integration, which helped tremendously.
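As a sketch, enabling the Micrometer SPI with a Prometheus backend takes only a few lines, assuming the vertx-micrometer-metrics module is on the classpath:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.micrometer.MicrometerMetricsOptions;
import io.vertx.micrometer.VertxPrometheusOptions;

public class MetricsBootstrap {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx(new VertxOptions().setMetricsOptions(
            new MicrometerMetricsOptions()
                .setPrometheusOptions(new VertxPrometheusOptions().setEnabled(true))
                .setEnabled(true)));
        // Event-loop, event-bus and client metrics are now collected,
        // which makes bottlenecks visible on a dashboard
    }
}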

  • Scope Mixture: VertX takes the same approach as JavaScript to code blocks and closures. This gets tricky when a slow third-party API blocks you, which is not the VertX way; consequently, you might run into issues such as crosstalk.

Here’s how we resolved it:

public class SampleVerticle extends AbstractVerticle {
    @Override
    public void start(Promise<Void> promise) {
        WebClient client = WebClient.create(vertx, options); // options defined elsewhere
        vertx.eventBus()
             .consumer("address")
             .handler(message -> {
                 // Move the String into the consumer scope: each message gets its own id
                 String id = message.body().toString();
                 vertx.getOrCreateContext().runOnContext(v ->
                     // Slow code / API call
                     client.get("/api/id/" + id).send(handler -> {
                         // the slow API request runs on a separate context
                     }));
             });
        promise.complete();
    }
}

To put things simply, VertX provides many solutions for many different cases, but we found that if we move the slow scope into a new context, crosstalk no longer occurs: VertX associates the new context with its own thread, and the variables stay local to each message.
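For completeness, VertX also offers executeBlocking for this kind of problem: it hands the slow work to the worker pool and comes back to the event loop with the result. A minimal sketch, where callSlowApi is a hypothetical placeholder for the slow 3rd party call:

import io.vertx.core.AbstractVerticle;

public class SlowApiOffloadVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.eventBus().<String>consumer("address").handler(message -> {
            String id = message.body();
            vertx.executeBlocking(promise -> {
                // slow or blocking work runs on a worker-pool thread
                promise.complete(callSlowApi(id));
            }, result -> {
                // back on the event loop with the outcome
                if (result.succeeded()) {
                    message.reply(result.result());
                }
            });
        });
    }

    private String callSlowApi(String id) {
        // hypothetical placeholder for the slow 3rd party call
        return "enriched-" + id;
    }
}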

  • Scope variables: as the example above shows, variables declared outside a handler are shared across invocations, so declare them inside the scope that actually uses them.
  • Pyramid of Death: when working with VertX for the first time, with multiple callbacks nested inside callbacks, it is sometimes very hard to follow a request’s flow.

For example:

public class SampleVerticle extends AbstractVerticle {
    @Override
    public void start(Promise<Void> promise) {
        WebClient client = WebClient.create(vertx, options); // options defined elsewhere
        vertx.eventBus().consumer("address").handler(message ->
            client.get("/api").send(h ->
                doSomething(h2 -> {                 // placeholder async helper
                    String variableInScope = "hi";
                    doSomethingElse(h3 -> {         // yet another nested callback
                        String variableInScopeTwo = "Hi there, I'm in another scope";
                        // this code can get ugly fast
                    });
                })));
        promise.complete();
    }
}

For this matter, there are several approaches you should consider:

  • You can keep the code modular by splitting the flow into verticles that talk to each other over the event bus
  • You can wrap the callback in a handler class, as sketched below, but beware: such classes might increase memory usage.
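Here is a rough sketch of that second option, with a hypothetical handler name: the nested lambda becomes a named class that can be reused and tested on its own.

import io.vertx.core.AsyncResult;
import io.vertx.core.Handler;
import io.vertx.core.buffer.Buffer;
import io.vertx.ext.web.client.HttpResponse;

// A named class replaces one level of nesting; usage would be
// client.get("/api/id/" + id).send(new EnrichmentResponseHandler());
public class EnrichmentResponseHandler implements Handler<AsyncResult<HttpResponse<Buffer>>> {
    @Override
    public void handle(AsyncResult<HttpResponse<Buffer>> result) {
        if (result.succeeded()) {
            // continue the flow with result.result().bodyAsString()
        } else {
            // log the failure and recover here
        }
    }
}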
  • Versioning: when I started working with VertX it was at version 3.8.5. Now, eight months later, the current releases stand at 3.9.6 in the 3.x series and 4.0.6 in the 4.x series.

Moving to the 4.x series is worthwhile; its Future-style API significantly reduces the complexity of the code.

For example:

public class SampleVerticle extends AbstractVerticle {
    @Override
    public void start(Promise<Void> promise) {
        WebClient client = WebClient.create(vertx, options); // options defined elsewhere
        vertx.eventBus().consumer("address").handler(message ->
            client.get("/api").send()
                  .onSuccess(success -> {
                      // success flow is here
                      doSomething()                          // placeholder returning a Future
                          .onSuccess(successOnSomething ->
                              doSomethingElse().onSuccess(successHandlerOnElse));
                  })
                  .onFailure(fail -> {
                      // failure flow is here
                  }));
        promise.complete();
    }
}

It’s important to see the silver lining in receiving so many updates: it means the framework is alive. And although there are breaking changes between the 3.x and 4.x series, there is a migration guide right on the VertX front page.

  • Rx-Java means thinking outside the box: VertX is reactive, but it is not RxJava. This is a somewhat grey area in my opinion; I would recommend getting comfortable with plain VertX first and only then adding RxJava. A small taste of the Rxified API is sketched below.
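For reference, here is a small sketch of what the Rxified API looks like, using the vertx-rx-java2 bindings with hypothetical endpoint and address names:

import io.vertx.reactivex.core.AbstractVerticle;
import io.vertx.reactivex.ext.web.client.WebClient;

public class RxSampleVerticle extends AbstractVerticle {
    @Override
    public void start() {
        WebClient client = WebClient.create(vertx);
        vertx.eventBus().<String>consumer("address").handler(message ->
            client.get(443, "third-party.example.com", "/api/id/" + message.body())
                  .ssl(true)
                  .rxSend()                                  // returns a Single<HttpResponse<Buffer>>
                  .map(response -> response.bodyAsString())
                  .subscribe(message::reply,
                             err -> message.fail(500, err.getMessage())));
    }
}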

CONCLUSIONS & AFTERMATH

Personally, I have learned a lot from this experience and from working with VertX, including how to break a problem into smaller verticles, how to measure properly, and much more.

If you haven’t tried working with VertX, I highly recommend starting with a small project and gradually building up to something larger. Working with VertX can get tricky, but if you approach it with a microservice mindset, focus on one verticle at a time, use Worker Verticles where needed, and read the documentation, you can expect greatness.

Happy Coding!


Shay Dratler

Developer, Problem lover, Eager to learn new stuff