Being able to support high spikes in traffic is critical to the health of your online applications. But what do you do when traffic is 50 times higher than you expected? This challenge was very real for developer Niantic, whose Pokémon GO application became an overnight sensation.
Here we explore why a major UK publication is following Niantic’s example.
Why Pokémon uses Kubernetes
Pokémon GO may be past its peak, but it’s left a lasting impact. Not only is the game a landmark in augmented-reality gaming, it’s also one of the most inspiring examples of the benefits of a containerized infrastructure built on Google’s container orchestration platform, Kubernetes.
Sure, Niantic had some teething problems when around 0.6% of the world’s total population attempted to log into the game at once. But Pokémon GO is ultimately a success story about bringing a digital product to millions of people at the same time. And this was made possible by using containers to scale at record pace (while also lowering engineering costs and enabling developers to continually adapt and improve the service).
For a simple introduction to containers and Kubernetes, check out this video.
Of course, you don’t need a hit game on your hands to benefit from a containerized infrastructure – as one leading news brand has discovered.
The business benefits of containerization
Here at Inviqa we’re a technology partner and consultant to a major UK publication that has decided to make the same technical choices that paved the way for Pokémon GO.
Why? Because the decision to use a containerized infrastructure comes with clear business benefits:
- Improve your scalability. Kubernetes provides the publication with a fluid, flexible infrastructure that adapts based on its changing needs.
- Save on engineering costs. By standardizing your infrastructure and bypassing the server-level engineering normally needed to launch applications, containers reduce the engineering effort required. With cloud-based containers, you no longer have to invest time and money in configuring servers with dependencies, supporting libraries, and so on.
- Iterate faster. With containers, developers can create, test, and build an app within one container, then hand it over to a container management platform. Platforms like Kubernetes only need to know a container’s network and storage requirements – not much about what the container actually does – which helps teams iterate faster and get to market sooner.
Containers essentially fence off an application from the other applications on your servers, while giving the application inside direct access to the underlying server hardware without the overhead of virtualization. They’re portable, self-contained, and they allow you to standardize your infrastructure across the board.
- Save on hardware costs. In the legacy way of running a website, you’re hampered by the physical limitations of the machines you use to house your app and its environment (the latter of which may include a webserver, readable file system, and so on). What’s more, it’s costly and complex to add machines when the capacity of your existing ones is stretched. Kubernetes can significantly cut hardware costs by making better use of the hardware you pay for: it takes each container and automatically decides which machine it should run on, whether that machine belongs to you or a cloud provider.
- Simplify development. Kubernetes clusters can automate storage, networking, autoscaling, and more. They’re low maintenance and can get your applications up and running with very little downtime or support. What’s more, life is made even easier for this publication by ContinuousPipe, a product by Inviqa designed to accelerate development workflows and support continuous delivery by automating deployments of your containerized applications to Kubernetes.
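To make the benefits above concrete, here is a minimal sketch of what declaring an application to Kubernetes might look like. Everything in it is illustrative – the name `web-app`, the image reference, and the numbers are hypothetical, not the publication’s actual configuration – but it shows the point made earlier: Kubernetes only needs the container image plus its network and resource requirements, and it handles the rest.

```yaml
# Hypothetical Deployment manifest – names, image, and figures are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # run three copies, spread across the cluster's machines
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # the container the dev team built and tested
          ports:
            - containerPort: 80                     # the network requirement Kubernetes needs to know
          resources:
            requests:
              cpu: 250m                             # lets the scheduler pick a machine with spare capacity
              memory: 256Mi
```

Notice that nothing here describes *how* the application works – only what it needs. That separation is what lets the platform place, scale, and restart containers without involving the developers.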
This major UK publication is benefitting from a containerized infrastructure
By using this workflow, the publication’s project team can now make almost 100 deployments a day – to the production environment and to the temporary environments the publication uses for quality assurance (QA), removing bottlenecks in the release process.
With these environments, the publication can rest assured that, should one machine fail, containers running on the other machines will handle incoming requests – so the end user will experience no downtime. At the time of writing, the publication’s production environments are backed by 147 running containers, and its staging and development environments by more than 250 containers.
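This resilience comes from Kubernetes continuously reconciling the declared number of replicas: if a machine dies, its containers are rescheduled elsewhere. The replica count can also track demand automatically. As a hedged sketch (assuming a Deployment named `web-app` exists – the name and thresholds are hypothetical, not the publication’s real setup), a horizontal autoscaler might look like this:

```yaml
# Hypothetical autoscaler: keeps between 3 and 20 copies of web-app running,
# adding or removing replicas to hold average CPU use around 70%
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

This is the mechanism behind handling traffic spikes like Pokémon GO’s: capacity grows and shrinks with demand, with no one manually provisioning servers.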
What’s more, using a container-based infrastructure allows the team to run experiments with new software versions, programming languages, and so on, without having to change anything about the way they operate and manage deployments.
So, as this publisher is demonstrating, media brands and organisations of all kinds can benefit from the example set by Pokémon GO. There are clear business benefits to a containerized infrastructure, including the flexibility to scale in line with fluctuating demand for digital products and services, and the ability to continually test new features in isolated environments for faster, more reliable applications.
What’s not to like?