Last time we discussed how we use microservices in our architecture to build, scale, and maintain a rock-solid platform. Today we want to extend that idea a bit more and introduce you to one of our core players: RabbitMQ.
RabbitMQ is an open-source message broker written in Erlang. Erlang is a programming language developed at Ericsson in the 1980s (also released as open source) that provides a robust, reliable, and distributed platform, which makes it great for horizontally scaling your applications when they are designed the right way. Yeah, we love that too!
So let’s see how we are using this kick-ass technology in our own infrastructure to bring an amazing SMS service to our customers!
What Do We Handle With RabbitMQ?
Everything! Let’s see a few examples of the kind of events and commands that flow through RabbitMQ:
- When our downstream carriers send us an inbound SMS for you.
- When you send us an HTTP request to send one or thousands of SMS.
- When you add funds to your account, or we decide to give you a bonus.
- Invoice generation.
- … and everything else, actually 🙂
So What is a Message Broker?
A message broker is a specific piece of software written to implement routing and queueing of messages across different components in a distributed architecture.
On one side “of the wire”, messages are produced in a specific format (JSON, for example) and sent to a specific “routing key” (for example, “clients.refund”). On the other side there are “consumers” that “listen” for messages going to a specific set of keys.
The broker is responsible for getting messages from producers to consumers in a safe way (although this depends on the exact product and configuration). This is what makes it possible to implement a rich ecosystem of microservices and workers on top of RabbitMQ (which, by the way, recently achieved one million messages per second on Google Compute Engine!).
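To make the producer/consumer idea concrete, here is a broker-free sketch in plain Python. The tiny `Broker` class and its methods are purely illustrative (not RabbitMQ’s actual API); only the JSON format and the “clients.refund” routing key come from the example above:

```python
import json
from collections import defaultdict

class Broker:
    """A toy in-memory stand-in for a message broker (illustration only)."""
    def __init__(self):
        self.consumers = defaultdict(list)  # routing key -> list of callbacks

    def subscribe(self, routing_key, callback):
        self.consumers[routing_key].append(callback)

    def publish(self, routing_key, message):
        # Deliver the message to every consumer listening on this key.
        for callback in self.consumers[routing_key]:
            callback(json.loads(message))

broker = Broker()
received = []
broker.subscribe("clients.refund", received.append)

# The producer serializes its payload (JSON here) and publishes it to a key.
broker.publish("clients.refund", json.dumps({"client_id": 42, "amount": 9.99}))
print(received)  # → [{'client_id': 42, 'amount': 9.99}]
```

The point is the decoupling: the producer only knows the key and the format, never which consumers (if any) are on the other side.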
Queues For The Win
So at its core, RabbitMQ acts as a router, where messages can be sent to different consumers based on different topologies. A topology describes how messages are delivered to their consumers (perhaps every consumer gets a copy, or maybe just one consumer gets a given message at a time, etc.).
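The two topologies mentioned above can be sketched in a few lines of Python. These toy classes are illustrative only; in RabbitMQ the first behaves like a fanout exchange and the second like competing consumers sharing one queue:

```python
import itertools

class FanoutTopology:
    """Every consumer gets a copy of each message."""
    def __init__(self, consumers):
        self.consumers = consumers

    def deliver(self, message):
        for consumer in self.consumers:
            consumer(message)

class WorkQueueTopology:
    """Each message goes to exactly one consumer, round-robin."""
    def __init__(self, consumers):
        self.consumers = itertools.cycle(consumers)

    def deliver(self, message):
        next(self.consumers)(message)

a, b = [], []
fanout = FanoutTopology([a.append, b.append])
fanout.deliver("inbound-sms")  # both a and b receive a copy

work = WorkQueueTopology([a.append, b.append])
work.deliver("task-1")  # only a receives this one
work.deliver("task-2")  # only b receives this one
print(a, b)  # → ['inbound-sms', 'task-1'] ['inbound-sms', 'task-2']
```

The work-queue shape is also what makes scaling easy later on: adding one more consumer to the rotation immediately spreads the load further.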
Our services are basically queue consumers: some of them reply back, while others don’t need to.
RPC And Fire-And-Forget
Our microservices are written in one of two ways: they can reply back, or they can just receive a command and act upon it without any further need for a confirmation or response.
In the first case, we are talking about RPC (Remote Procedure Call) services. These are services that receive a request and send a response back, and we use them to decouple some of our software services (this works just like calling a local function in your code, except that instead of living in your own libraries, the code “lives” somewhere else, perhaps on another host in the network, maybe even in a different country!).
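Queue-based RPC typically tags each request with a correlation id so the caller can match the reply that comes back. A minimal sketch, with hypothetical `RpcServer`/`RpcClient` names and an addition standing in for the real work:

```python
import uuid

class RpcServer:
    """Receives a request, does the work, and sends a reply (sketch)."""
    def handle(self, request):
        # Echo the correlation id so the caller can match the reply
        # to the request it sent.
        return {"correlation_id": request["correlation_id"],
                "result": request["a"] + request["b"]}

class RpcClient:
    """Looks like a local function call, but the work happens elsewhere."""
    def __init__(self, server):
        self.server = server

    def call(self, a, b):
        request = {"correlation_id": str(uuid.uuid4()), "a": a, "b": b}
        reply = self.server.handle(request)
        # Only accept the reply that matches our request.
        assert reply["correlation_id"] == request["correlation_id"]
        return reply["result"]

client = RpcClient(RpcServer())
print(client.call(2, 3))  # → 5
```

From the caller’s point of view, `client.call(2, 3)` is indistinguishable from a local function, which is exactly the decoupling we want.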
For the second case, we follow the “fire and forget” philosophy: we just enqueue a task that needs to be accomplished no matter what. We don’t need a response back; we just want our system to do a specific task, and that’s it. We trust that it will “eventually” get done.
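Fire-and-forget can be sketched with Python’s standard-library `queue` and a worker thread standing in for a queue consumer (the task name and the `None` shutdown signal are just for this illustration):

```python
import threading
from queue import Queue

tasks = Queue()
done = []

def worker():
    # Consume tasks until told to stop; the producer never waits on us.
    while True:
        task = tasks.get()
        if task is None:
            break  # shutdown signal, only needed for this sketch
        done.append(f"processed {task}")

t = threading.Thread(target=worker)
t.start()

# Fire and forget: enqueue the command and move on; no response expected.
tasks.put("generate-invoice-123")

tasks.put(None)  # stop the worker so the sketch terminates
t.join()
print(done)  # → ['processed generate-invoice-123']
```

The producer returns immediately after `put()`; whether the task runs now or in five minutes is the consumer’s business, which is what “eventually done” means in practice.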
Retrying Failed Operations
Using queues and routing has another advantage: we can re-route failed tasks to different queues and services, then analyze what happened, generate logs, or trigger alerts and notifications as needed, and perhaps decide to automatically retry the operation after some reasonable time has passed.
Let’s say our database is down (perhaps a temporary network issue). There are certain operations we can retry later without any human intervention, giving our ops team some time to fix the issue while we keep accepting new requests from clients when possible. Cool, isn’t it?
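The retry idea can be sketched as a loop that re-attempts a failing operation a few times before escalating. This is a simplification: with a real broker this is often wired up with error queues and delays rather than an in-process loop, and the `flaky_db_write` function below is a made-up stand-in for a database call during an outage:

```python
import time

def process_with_retry(operation, max_attempts=3, delay_seconds=0.0):
    """Retry a failed operation a few times before giving up (sketch)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError as error:
            print(f"attempt {attempt} failed: {error}")
            if attempt == max_attempts:
                raise  # give up: route to an error queue and alert ops
            time.sleep(delay_seconds)  # back off before retrying

calls = {"n": 0}

def flaky_db_write():
    # Simulate a database that is unreachable for the first two attempts.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("database unreachable")
    return "saved"

print(process_with_retry(flaky_db_write))  # → saved (after two failed attempts)
```

Because the task sits safely in a queue between attempts, nothing is lost while the database is down; it just gets done a little later.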
When we want to scale, we just add more consumers for the service we want to grow, and then we can shut them down when we feel we don’t need them anymore. If we want to scale RabbitMQ we can make use of its clustering and federation features. So we can scale our services, but we can also scale RabbitMQ itself if needed.
Working with such reliable, flexible, and scalable technologies is a joy. We really thank all the Erlang and RabbitMQ developers who make this possible and make it easier for us to develop new features for our customers and bring a high-quality service based on this rock-solid software. As you can see, your business is in good hands 🙂
— The PortaText Team.