Music streaming giant Spotify hosts more than 70 million songs and close to 3 million podcast titles on its platform.
Delivering this without the support of AI and machine learning is probably best compared to attempting to traverse the Amazon rainforest armed with nothing but a spoon.
To cut through this jungle of music, Spotify’s research team deploy hundreds of machine learning models that improve the user experience, all the while trying to balance the needs of users and creators.
AI News caught up with Spotify research lead Rishabh Mehrotra at the AI & Big Data Expo Global on September 7 to learn more about how AI supports the platform.
AI News: How important is AI to what Spotify does?
Rishabh Mehrotra: AI is at the centre of what we do. Machine learning specifically has become an indispensable tool for powering personalised music and podcast recommendations to more than 365 million users across the world. It enables us to understand user needs and intents, which then helps us to deliver personalised recommendations across various touch points on the app.
It’s not just about the actual models we deploy in front of users, but also the various AI techniques we use to adopt a data-driven process around experimentation, metrics, and product decisions.
We use a broad range of AI methods to understand our listeners, creators, and content. Some of our core ML research areas include understanding user needs and intents, matching content and listeners, balancing user and creator needs, using natural language understanding and multimedia information retrieval methods, and developing models that optimise long term rewards and recommendations.
What’s more, our models power experiences across around 180 countries, so we have to consider how they perform across markets. Striking a balance between promoting global music and supporting local musicians and music culture is one of our most important AI initiatives.
AN: As a Spotify user, I was surprised to realise just how much the platform incorporates AI into almost every aspect of what it offers. It’s so seamless that I feel like most people don’t even notice it’s there. How crucial is AI to the user experience on Spotify?
RM: If you look at Spotify as a user, you typically view it as an app that gives you the content you’re looking for. However, if you really zoom in, you see that each of these recommendation tools is a different machine learning product. If you look at the homepage, we have to understand user intent in a far more subtle way than we would with a search query. The homepage is about giving recommendations based on a user’s current needs and context, which is very different from search, where users are explicitly asking for something. Even in search, a user can make an open, non-focused query like ‘relaxing music’, or search for a specific song by name.
Looking at sequential radio sessions, our models try to balance familiar music with discovery content, aiming not only to recommend content users would enjoy in the moment, but also to optimise for long term listener-artist connections.
A good number of our ML models are becoming multi-objective. Over the past two years, we have deployed a lot of models that try to fulfil listener needs whilst also enabling creators to connect with and grow their audiences.
AN: Are artists’ wants and needs a big consideration for Spotify or is the focus primarily on the user experience?
RM: Our goal is to match the creators with the fans in an enriching way. While understanding user preferences is key to the success of our recommendation models, it really is a two-sided market in a lot of ways. We have the users who want to consume audio content on one side and the creators looking to grow their audiences on the other. Thus a lot of our recommendation products have a multi-stakeholder thinking baked into them to balance objectives from both sides.
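The two-sided balancing described above can be illustrated with a toy scalarisation, where a listener-relevance score and a creator-exposure score are blended into one ranking score. Every name, value, and the weight `alpha` below is invented for illustration; this is a minimal sketch of the general idea, not Spotify’s actual system:

```python
# Illustrative sketch of multi-objective ranking: blend a
# listener-relevance objective with a creator-exposure objective.

def score_track(relevance, creator_boost, alpha=0.8):
    """Scalarise two objectives into a single ranking score.

    relevance: predicted listener satisfaction, in [0, 1]
    creator_boost: value of exposure to the creator, in [0, 1]
    alpha: trade-off weight between the two sides of the marketplace
    """
    return alpha * relevance + (1 - alpha) * creator_boost

candidates = [
    {"track": "A", "relevance": 0.9, "creator_boost": 0.1},
    {"track": "B", "relevance": 0.7, "creator_boost": 0.8},
]
ranked = sorted(
    candidates,
    key=lambda c: score_track(c["relevance"], c["creator_boost"]),
    reverse=True,
)
print([c["track"] for c in ranked])  # ['A', 'B']
```

Tuning `alpha` shifts the ranking between pure listener satisfaction and creator exposure, which is the trade-off a multi-stakeholder recommender has to manage.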
AN: Apart from music recommendations and suggestions, does AI support Spotify in any other ways?
RM: AI plays an important role in driving our algotorial approach: expert curators with an excellent sense for what’s up and coming quite literally teach our machine learning systems. Through this approach, we can create playlists that not only look at past data but also reflect cultural trends as they’re happening. Across all regions, we have editors who bring deep domain expertise about music culture that we use proactively in our products. This allows us to develop and deploy human-in-the-loop AI techniques that leverage editorial input to bootstrap decisions that ML models can then scale.
AN: What about podcasts? Do you utilise AI differently when applying it to podcasts rather than music?
RM: Users’ podcast journeys can differ in a lot of ways from their music journeys. While music is a lot about the audio and acoustic properties of songs, podcasts depend on a whole different set of parameters. For one, it’s much more about content understanding: understanding speakers, types of conversations, and topics of discussion.
That said, we are seeing some very interesting results using music taste for podcast recommendations too. Members of our group have recently published work that shows how our ML models can leverage users’ music preferences to recommend podcasts, and some of these results have demonstrated significant improvements, especially for new podcast users.
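One rough way to picture reusing music taste for podcast recommendations is to place users and podcasts in a shared embedding space and score podcasts by similarity to a user’s music-taste vector. The vectors, names, and the cosine-similarity choice below are entirely made up for illustration; the published work referenced above is not this simple:

```python
# Hypothetical sketch of cross-domain recommendation: score podcasts
# by cosine similarity to a user's music-taste embedding (toy data).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

user_music_taste = [0.8, 0.1, 0.3]  # e.g. averaged track embeddings
podcasts = {
    "indie-music-history": [0.7, 0.2, 0.2],
    "true-crime": [0.1, 0.9, 0.4],
}
best = max(podcasts, key=lambda p: cosine(user_music_taste, podcasts[p]))
print(best)  # indie-music-history
```

For a brand-new podcast listener, a signal like this is attractive because the music-side embedding already exists, sidestepping the cold-start problem on the podcast side.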
AN: With so many models already turning the cogs at Spotify, it’s difficult to see how new and exciting use cases could be introduced. What are Spotify’s AI plans for the next few years?
RM: We’re working on a number of ways to elevate the experience even further. Reinforcement learning will be an important focus point as we look into ways to optimise for a lifetime of fulfilling content, rather than optimising for the next stream. In a sense, this isn’t just about giving users what they want right now; it’s about evolving their tastes and looking at their long term trajectories.
AN: As the years go on and your models have more and more data to work with, will the AI you use naturally become more advanced?
RM: A lot of our ML investments are not only about incorporating state-of-the-art ML into our products, but also extending the state-of-the-art based on the unique challenges we face as an audio platform. We are developing advanced causal inference techniques to understand the long term impact of our algorithmic decisions. We are innovating in the multi-objective ML modelling space to balance various objectives as part of our two-sided marketplace efforts. We are gravitating towards learning from long term trajectories and optimising for long term rewards.
To make data-driven decisions across all such initiatives, we rely heavily on solid scientific experimentation techniques, which themselves rely heavily on machine learning.
Reinforcement learning broadens the scope of longer term decisions: it brings that long term perspective into our recommendations. A quick example would be facilitating discovery on the platform. As a marketplace platform, we want users not only to consume familiar music but also to discover new music through our recommendations. There are 70 million tracks on the platform and only a few thousand will be familiar to any given user, putting aside the fact that it would take an individual several lifetimes to actually go through all this content. So tapping into that remaining 69.9 million tracks and surfacing content users would love to discover is a key long-term goal for us.
How to fulfil users’ long term discovery needs, when to surface such discovery content, how much of it to show, and for which users and which recommended sets: these are a few examples of the higher-abstraction, long term problems that RL approaches allow us to tackle well.
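The explore/exploit trade-off behind discovery can be sketched with a toy epsilon-greedy policy that mostly plays familiar tracks but reserves a fraction of slots for discovery content. This is a deliberate simplification of the idea, with invented track names; real RL recommenders are far more sophisticated than a fixed epsilon:

```python
# Toy illustration of explore/exploit in a listening session: with
# probability epsilon, pick from the discovery pool instead of the
# familiar pool. Seeded for reproducibility.
import random

def build_session(familiar, discovery, length=5, epsilon=0.2, seed=42):
    rng = random.Random(seed)
    session = []
    for _ in range(length):
        pool = discovery if rng.random() < epsilon else familiar
        session.append(rng.choice(pool))
    return session

session = build_session(
    familiar=["fav-1", "fav-2", "fav-3"],
    discovery=["new-1", "new-2"],
)
print(session)
```

The questions in the interview (when to explore, how much, for whom) correspond to making `epsilon` adaptive per user and per context, which is where RL earns its keep over a fixed heuristic like this one.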
AN: Finally, considering the involvement Spotify has in directing users’ musical experiences, does the company have to factor in any ethical issues surrounding its usage of AI?
RM: Algorithmic responsibility and causal influence are topics we take very seriously and we actively work to ensure our systems operate in a fair and responsible manner, backed by focused research and internal education to prevent unintended biases.
We have a team dedicated to ensuring we approach these topics with the right research-informed rigour and we also share our learnings with the research community.
AN: Is there anything else you would like to share?
RM: On a closing note, one thing I love about Spotify is that we are very open with the wider industry and research community about the advances we are making with AI and machine learning. We actively publish at top tier venues, give tutorials, and we have released a number of large datasets to facilitate academic research on audio recommendations.
For anyone who is interested in learning more about this I would recommend checking out our Spotify Research website which discusses our papers, blogs, and datasets in greater detail.