Deploying a Platform Independent Microservice Solution

A startup needed a scalable, distributed, highly available deployment solution for their new conferencing application. Platform independence was a key requirement.

Why Microservices?

Modern web applications demand maximum flexibility in their production environments; microservices satisfy stringent requirements for scalability, availability, and fault tolerance.

The microservice revolution, enabled by key technologies like Docker, is taking the IT world by storm. While monolithic solutions are still in widespread use, many new startups, and even well-established Silicon Valley heavyweights, are adopting microservice architectures for quite specific reasons. This isn’t just a fad.

Microservices have branched off from traditional SaaS (Software as a Service) by decoupling and distributing processes within a system to a much greater extent, holding to the tenets that services should be light, ephemeral, rapidly deployable, immutable, and ultimately disposable. This means each service should be small and discrete, with minimal dependencies. With fewer dependencies, microservices move us toward tech-stack independence: each service carries its own dependencies, so we are no longer tied to any particular set of technologies.

Thus, with a microservice approach, organizations can more easily achieve operational goals like continuous integration, delivery, and deployment, while enjoying the benefits of a system that is easily scalable (scaling individual services rather than the whole application is a huge advantage), portable, and fault-tolerant. Furthermore, microservices free developers to choose the most suitable technologies for any particular purpose.

That is not to say that microservices don’t have their downsides, and anyone venturing to implement a microservice solution will have to contend with the fallacies of distributed computing to some extent. Luckily, with the state of today’s tech, these aren’t an insurmountable obstacle. Still, containerization with Docker presents its own challenges: besides latency, it is easy to get swamped in containers and images and lose track of what is running where, and networking containers so they work together can be intimidating. Making container management (and scaling) simpler has been a key focus of development over the last few years, with new cloud services and container management software such as Amazon’s EC2 Container Service, Google’s Container Engine, CoreOS, and Kubernetes all providing deployment and management solutions.

For our client, the advantages offered by a microservice approach outweighed the risks: they needed a platform-independent system, with services hosted simultaneously on AWS and Google Cloud. With Docker, CoreOS, and Kubernetes, we were able to design a system with great fault tolerance and high availability, in which services could be easily migrated from one cloud platform to another. Let’s take a look at the challenges the project faced, and how our devops team implemented a solution to address them.

Technical Challenges

The application needed to handle a large number of users and cope with surges in demand without interruptions. Meanwhile, achieving continuous delivery was paramount for our client.

Newcomers in video conferencing need to offer more than a snazzy UI and innovative features; to keep users engaged, these products demand rock-solid reliability, scalability, and cross-platform flexibility. Our client contracted Datamart to create a cutting-edge devops solution to power their web-based video conferencing and screen sharing application.

Any system that would perform satisfactorily in production, while remaining reasonably cost-effective, had to meet the following requirements:

  • Responsive and dynamically scalable environment
  • Automated application version delivery to staging and production environments
  • Compatibility with the most trusted cloud-hosting platforms (Amazon AWS and Google Cloud)
  • Maximal solution reliability (independent of platform stability)

A microservice approach is especially well suited to such an application given the requirements for availability, fault tolerance, and scalability. Take, for example, an auto-complete feature we implemented for a search tab. As an individual service, this is a small, discrete feature that can be quickly implemented and packaged in a Docker image. That image can then be uploaded to a repository accessed by container management software running on a cloud platform, and automatically replicated and run in the proper zones as needed. If it fails, users might not even notice that one small piece of the application stopped working before the afflicted container is automatically re-launched. Deploying an updated version is as simple as taking down the current containers and launching new ones in their place.
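
To make that concrete, here is a minimal sketch of how such a service might be described to Kubernetes, the container management software we used (introduced below). The image name, registry, and port are hypothetical, and the manifest uses the current apps/v1 API for clarity:

    # Hypothetical Kubernetes manifest for the auto-complete service described above.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: autocomplete
    spec:
      replicas: 3                  # run three identical copies of the container
      selector:
        matchLabels:
          app: autocomplete
      template:
        metadata:
          labels:
            app: autocomplete
        spec:
          containers:
          - name: autocomplete
            image: registry.example.com/autocomplete:1.0  # hypothetical image in a private registry
            ports:
            - containerPort: 8080                         # port the service listens on (assumed)

With a declaration like this, the cluster keeps three replicas running at all times, relaunching any container that crashes; deploying a new version is a matter of pointing the manifest at a new image tag.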

Besides easing deployment and simplifying continuous delivery, software containers free your application from dependence on any particular hosting platform. With Docker, you can run your containers in any Linux environment - it makes no difference whether it’s a local server or a virtual machine instance on a cloud platform. From a devops and systems administration perspective, this is a radical development. Different parts of an application can be hosted in different places, and transferred from platform to platform as needed.

Engineered Solution

Our devops team designed and managed the deployment of a dynamic, fault-tolerant microservice solution built from a variety of complementary technologies.

We designed a platform-independent solution architecture, with a single deployed production environment running simultaneously on the Amazon and Google clouds. We achieved this using software containers, VPN, REST, WebSockets, and a variety of technologies to manage container deployment.

For containerization we naturally used Docker, currently the most obvious choice. To manage deployment and scaling, and to ensure continuous integration and continuous delivery, we assembled a formidable technology stack: Kubernetes and EC2 Container Service (container management for Google Cloud and AWS, respectively), Firebase (discovery service and mobile app integration), Jenkins (automation server), Elasticsearch (metrics analysis and monitoring), Git (version control), and HockeyApp (build distribution repository).

Although we utilized a variety of cloud services from both Google and Amazon, the nucleus of our solution was a set of CoreOS clusters running the Kubernetes container management software. Kubernetes provides a management and scheduling interface for working with services as logical groupings on top of clustered server infrastructure, while CoreOS is a specialized Linux operating system optimized for distributed environments built from clusters of individual server nodes. With CoreOS handling the clustered infrastructure and Kubernetes handling container management, scaling, and automated deployment, we had a powerful base on which to develop our system.
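
As a brief sketch of what that interface looks like, a Kubernetes Service gives a logical group of containers a single stable endpoint, no matter which CoreOS node the scheduler places them on. This example reuses the hypothetical autocomplete Deployment from earlier:

    # Hypothetical Service exposing the autocomplete replicas behind one stable address.
    apiVersion: v1
    kind: Service
    metadata:
      name: autocomplete
    spec:
      selector:
        app: autocomplete   # matches the pod labels from the Deployment sketched above
      ports:
      - port: 80            # port other services inside the cluster connect to
        targetPort: 8080    # container port traffic is forwarded to

Other services address the group by name and never need to know how many replicas exist or where they run, which is what makes migrating containers between nodes (or clouds) transparent to the rest of the system.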

Another key problem to overcome with microservices is data gravity. Monolithic architectures with centralized databases were designed for a reason: applications depend on data, and the closer they are to that data, the better. Microservices, however, take a different approach. Centralized databases tend to couple services together in unpredictable ways, slowing down development and complicating production operations. To fight data gravity and keep services close to the data they need, while avoiding the erratic dynamics a centralized database can produce, we used many smaller, dedicated, discrete databases for specific services.

To achieve this, we used Redis, a popular in-memory key-value (NoSQL) database that is seeing increasing use in containerized applications. Redis allowed us to give our containerized services small, dedicated databases. Because Redis runs and stores its data in memory, database transactions execute quite rapidly; however, the volume of data it can hold is limited. That is often not a problem for synchronous services with discrete functions, and if needed we can always turn to any number of other solutions from the Apache Hadoop ecosystem.
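
As a sketch of this pattern, a service’s pod can bundle its own dedicated Redis container, so the data store lives and dies with the one service that owns it. The service name and image here are hypothetical; redis:3.2 is the official Redis image:

    # Hypothetical pod pairing a service with its own small, dedicated Redis instance.
    apiVersion: v1
    kind: Pod
    metadata:
      name: search-history
    spec:
      containers:
      - name: search-history
        image: registry.example.com/search-history:1.0  # hypothetical service image
        env:
        - name: REDIS_HOST
          value: "localhost"   # Redis shares the pod's network namespace
      - name: redis
        image: redis:3.2       # official Redis image; data is held in memory
        ports:
        - containerPort: 6379  # default Redis port

Because the Redis container is private to the pod, no other service can quietly couple itself to this data, which is exactly the decoupling the data-gravity argument calls for.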

Value Delivered

Our client received a cost-effective, dynamically scaled solution optimized for deployment on cloud platforms; our devops team proved their readiness to implement complex systems architecture and manage challenging deployments.

As a result of our efforts, our client obtained a cutting-edge solution delivering high performance and reliability, with a technology stack they can depend on for the foreseeable future. With dynamic scaling, we were able to create a responsive system that runs as cost-effectively as possible on cloud hosting platforms. By taking advantage of containers and container management software, the resulting system was highly fault-tolerant: if a container crashed, the failure was immediately noticed and the container promptly re-deployed. As it turned out, a microservice approach was the right one for our client’s needs.
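
As one example of how such dynamic scaling can be expressed, Kubernetes lets a HorizontalPodAutoscaler grow and shrink a Deployment with load. This sketch again assumes the hypothetical autocomplete Deployment from earlier and that CPU metrics are being collected:

    # Hypothetical autoscaling rule for the autocomplete Deployment sketched earlier.
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: autocomplete
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: autocomplete
      minReplicas: 2                      # keep a baseline for availability
      maxReplicas: 10                     # cap spend during demand surges
      targetCPUUtilizationPercentage: 70  # add replicas when average CPU exceeds 70%

Scaling individual services this way, rather than the whole application, is what keeps a surge in one feature from forcing the entire system to be over-provisioned.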

By the project’s conclusion, Datamart’s DevOps team had demonstrated their readiness to manage complex deployments, oversee a challenging continuous deployment effort, and implement cutting-edge systems architecture requiring thorough knowledge of a wide variety of software.