Launched in 2008, Spotify is one of the largest audio-streaming platforms in the world, with over 345 million users, including 155 million subscribers, across 170 markets. An early partnership with Facebook gave the platform instant reach and helped it rise to prominence.
Even with that many users, Spotify still manages to provide seamless audio streaming within seconds. It does so by integrating multiple technologies, such as machine learning for recommendations, cloud computing for data storage, cybersecurity, and databases, and, just as importantly, DevOps practices in the form of Kubernetes, which allow its teams to deliver features quickly.
What is Kubernetes?
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.
Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
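As a minimal sketch of that canary pattern, one common approach is two Deployments behind a single Service: the stable version runs most of the replicas while the canary runs one, so only a small fraction of traffic reaches the new image. (The service name `music-api` and the image tags below are hypothetical, purely for illustration.)

```yaml
# The Service routes to every pod labeled app: music-api,
# regardless of which Deployment created it.
apiVersion: v1
kind: Service
metadata:
  name: music-api
spec:
  selector:
    app: music-api
  ports:
    - port: 80
      targetPort: 8080
---
# Stable version: 9 replicas receive roughly 90% of traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: music-api-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: music-api
      track: stable
  template:
    metadata:
      labels:
        app: music-api
        track: stable
    spec:
      containers:
        - name: music-api
          image: example/music-api:v1
---
# Canary: 1 replica receives roughly 10% of traffic; scale it up
# (or roll it back) depending on what its metrics show.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: music-api-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: music-api
      track: canary
  template:
    metadata:
      labels:
        app: music-api
        track: canary
    spec:
      containers:
        - name: music-api
          image: example/music-api:v2
```

Because both Deployments share the `app: music-api` label, the Service load-balances across them in proportion to replica counts, which is what makes the gradual rollout work.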
So, you might be thinking, “What is the use of Kubernetes at Spotify?”
Before answering that, let’s look at some facts:
Spotify has seen rapid growth in both users and subscribers, from approximately 5 million active users in 2012 to 100 million in 2016, and on to a huge 345 million mark in 2020, with the numbers still climbing.
“Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future,” says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations at Spotify.
Spotify adopted microservices and container technology early on, running containers across its fleet of virtual machines and orchestrating them with a homegrown system called Helios. The challenge was that only a small team was maintaining and building features for Helios, which was not very efficient; meanwhile, delivery times needed to stay short, since customers expect the service to be as fast as possible. To address these challenges, Spotify decided to adopt Kubernetes in late 2017.
“We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that,” says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, “we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti.
The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. “A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we’ve heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify,” says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
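To illustrate the autoscaling and bin-packing benefits mentioned above, here is a hedged sketch (the name `music-api` and every number are made up for illustration): CPU and memory requests tell the scheduler how densely it can bin-pack pods onto shared, multi-tenant nodes, and a HorizontalPodAutoscaler adds or removes replicas based on measured CPU utilization.

```yaml
# Resource requests/limits let the Kubernetes scheduler
# bin-pack workloads densely onto shared (multi-tenant) nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: music-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: music-api
  template:
    metadata:
      labels:
        app: music-api
    spec:
      containers:
        - name: music-api
          image: example/music-api:v1
          resources:
            requests:
              cpu: "500m"      # scheduler reserves half a core per pod
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
---
# The autoscaler grows or shrinks the Deployment so that average
# CPU utilization across its pods stays near 70% of the request.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: music-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: music-api
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

This is the mechanism behind the quoted gains: declarative requests replace manual capacity provisioning, and the autoscaler reacts to load in seconds or minutes rather than hours.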
I will discuss some of these technologies in future blogs. Until then, thanks for reading.
Link for my LinkedIn profile: