The ConversAItion: Season 6, Episode 34

How Lyft Revolutionizes Ridesharing With Machine Learning

Lyft, the popular ridesharing company, has powered on-demand transportation for over a decade—and AI has been along for the ride from the start. This week, Craig Martell, Lyft’s Head of Machine Learning, joins us on The ConversAItion to discuss how AI enables his team to make accurate predictions when it comes to matching riders and drivers, the unexpected connection between social sciences and machine learning and why the AI industry is poised for “hockey stick growth.”
“AI is everywhere. You open up the app, and there’s AI when you start to search for a destination. Then we have to match you with a driver, we have to present you with different products. Pricing is an AI model, we’re guessing on a price. And one of the most important ones is when we tell you the ETA—that ETA is a guess. We’re going to guess the path the driver is going to take. We’re going to guess how long it’s going to take to get there, whether or not there’s an accident, or a street light, or if it’s raining. [AI] drives almost everything.”
Craig Martell

About Craig Martell

Craig Martell is the Head of Machine Learning at Lyft, where he oversees two machine learning teams. Prior to Lyft, he served as the Head of Machine Intelligence at Dropbox and LinkedIn. Craig is also an adjunct professor of machine learning at Northeastern University in Seattle. He earned his PhD in computer science at the University of Pennsylvania, and completed graduate studies in philosophy, political science and political theory. Craig can be found on LinkedIn.

Short on time? Here are 4 quick takeaways:

  1. AI and the social sciences both center on predicting human behavior. 

    Craig believes AI’s primary goal is to predict human behavior—a perspective informed by a unique background spanning computer science, philosophy, political science and political theory. With advanced degrees in each field, he’s spent time studying how people think, and how to build robust models that predict a specific outcome. This experience enabled Craig to make a seamless transition from academia to the tech industry, landing at companies like LinkedIn and Dropbox prior to joining Lyft in 2020.

  2. In a fast-moving industry like transportation, accurate estimations are critical—and AI is there to help.

    At first glance, the concept behind Lyft seems simple: matching people who need rides with people who can provide them. Looking deeper, however, the process is quite complex. When a user requests a ride, Lyft has to simultaneously make a variety of estimations: where the closest driver is, their ETA to a rider’s pick-up location, the ETA to the final destination, the final price and more. All of these estimations depend on a number of moving parts, including inclement weather that may slow a driver down or accidents along the expected route.

    When it comes to the accuracy of those predictions, the stakes are high. If Lyft makes the right estimates, riders feel satisfied with their experience. But if Lyft predicts too long a wait for a driver, a rider might cancel the ride and use Uber instead. And, on the flip side, if Lyft predicts a two-minute wait for a driver, but the driver ends up taking three minutes, Craig says a rider might grow frustrated with Lyft and be less likely to use the app next time. It’s in these dynamic, high-stakes situations that machine learning comes into play—those models are critical in helping ensure that each of these estimations is as accurate as possible to keep customers happy.
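Purely as an illustration of the estimate-chaining described above—not Lyft’s actual model—an ETA can be thought of as a sum of per-segment travel-time guesses, adjusted by further guesses about weather and traffic. The segment times and adjustment factors below are invented:

```python
# Toy sketch of chained ETA estimation. Every input is itself a guess:
# the route, the per-segment travel times, and the weather/traffic effects.
# The constants are invented for illustration, not drawn from any real system.

RAIN_FACTOR = 1.3      # assumed slowdown multiplier when it's raining
ACCIDENT_DELAY = 240   # assumed extra seconds if an accident is on the route

def estimate_eta(segment_times, raining=False, accident=False):
    """Combine per-segment travel-time guesses into a single ETA (seconds)."""
    eta = sum(segment_times)
    if raining:
        eta *= RAIN_FACTOR
    if accident:
        eta += ACCIDENT_DELAY
    return round(eta)

# A predicted 3-segment route on a clear day vs. a rainy one:
clear = estimate_eta([60, 90, 45])
rainy = estimate_eta([60, 90, 45], raining=True)
print(clear, rainy)  # the rainy estimate is ~30% longer
```

The point of the sketch is how quickly small uncertainties compound: each segment time, the rain flag, and the accident flag are all predictions, so the final ETA inherits the error of every one of them.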

  3. Every machine learning model is a hypothesis, and design plays a critical role in realizing that hypothesis.

    Craig likes to think of AI modeling as a cognitive science experiment. He believes that every machine learning model is a scientific hypothesis about what action will create a certain desired behavior in your users. Think, for instance, of an online clothing store sidebar that reads “People who bought these jeans also bought these shoes,” or a carousel of featured pairs of shoes, with the ultimate goal of driving users to a specific pair the store wants to highlight.

    A critical component of building these hypotheses is design; the way those sidebars and carousels look to an end user has an extraordinary impact on what a consumer ends up clicking on. For this reason, Craig’s philosophy is that AI is less about algorithms and more about applications. Why? It’s easier than ever to build and deploy basic algorithms. What’s more important—and harder to accomplish—is ensuring that the design around those algorithms actually produces the outcome your hypothesis predicts.
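To make the model-as-hypothesis idea concrete, here is a minimal, hypothetical sketch of testing whether carousel design B outperforms design A on click-through rate, using a standard two-proportion z-test. The traffic numbers and the two-variant setup are invented for illustration:

```python
# Hypothetical A/B sketch: does carousel design B get more clicks than
# design A? The hypothesis is stated up front ("B will lift click-through")
# and then tested on observed traffic. Numbers below are invented.
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """z-statistic for the difference in click-through rates (B minus A)."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p = (clicks_a + clicks_b) / (views_a + views_b)   # pooled rate
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

def p_value_one_sided(z):
    """One-sided p-value from the standard normal CDF."""
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

# Invented traffic: 5% CTR for design A vs. 6% for design B.
z = two_proportion_z(500, 10_000, 600, 10_000)
print(round(z, 2), round(p_value_one_sided(z), 4))
```

If the p-value is small, the hypothesis that design B changes user behavior survives the experiment; if not, the model of user cognition behind design B was wrong, which is exactly the scientific framing Craig describes.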

  4. As demand grows for AI, tech companies are changing not only their machine learning models, but also their hiring models. 

    Today, top machine learning talent is hard to come by, and expensive to retain. To cope, many companies are designing machine learning platforms that allow for what Craig calls “AI practitioners”—employees who aren’t necessarily experts in the technology, but are capable of building the knowledge needed to correctly deploy it and ensure that it’s driving business value. 

    As demand continues to skyrocket for both AI experts and “practitioners,” Craig sees more robust training in both academia and the workplace as a way to narrow the talent gap. If universities continue to offer and encourage participation in more machine learning courses, and businesses begin to see the value in deploying flexible AI platforms, Craig foresees exponential, “hockey stick” growth in the AI industry.


Read the transcript



Jim Freeze Hi! And welcome. I’m Jim Freeze, and this is The ConversAItion, a podcast airing viewpoints on the impact of artificial intelligence on business and society. 


It’s great to be back in my virtual recording studio for a brand new conversation about the latest AI applications shaping the way we live, learn and work. Every season, I’m energized by the people, products and companies fueling the AI economy, and the ever-evolving range of topics to cover. From my perspective, there has never been a better time to educate people about the growing impact of this increasingly pervasive technology – and I’m thrilled to share that, once again, we have an impressive line-up of AI experts this season.

We’ll dive into some of the most timely and important AI issues, from how AI is redefining the future of work to mitigating the supply chain crisis, and we’ll do it with some of today’s best-known brands. Whether you’re a tech enthusiast, enterprise executive or just curious about how the technology works, each episode will bring you valuable insights about the transformative impact of AI on both our personal and professional lives. As always, thank you for joining us – it’s great to have you here!

For our first episode, I’m excited to sit down with Craig Martell, the Head of Machine Learning at Lyft. Craig is an AI veteran with prior stints at LinkedIn and Dropbox; today, he oversees two teams focused on Lyft’s machine learning platform, and the applied technology powering everything from live ETAs to location recommendations. 

We’ll discuss why Craig sees AI as a “cognitive science test,” how the AI talent landscape has evolved and much more. Craig, welcome to The ConversAItion!

Craig Martell Thanks Jim, glad to be here.

Jim Freeze We’re thrilled to have you. So, you have a PhD in computer science and graduate degrees in philosophy, political science and political theory. Your background spans both humanities and technology; can you walk us through your diverse background and ultimately what brought you to Lyft?

Craig Martell Happy to. So I started off in political theory and I ended up in computer science. But I ended up in computer science wanting to do what I called “testable philosophy,” because I really think that what machine learning is doing for us is testing the way humans behave. What do I mean by that? Well, most of artificial intelligence is trying to predict human behavior. So, in machine learning, we build a model of that behavior and we say, if we present this to people, they will click on this shirt, or if we present this driver to someone, they will accept the ride. So what we’re really saying is, this is a model of the way people think. So for me, it’s a pretty consistent transition from thinking about people in different ways—political theory, political science, philosophy—to building systems that test the way people behave. Does that make sense?

Jim Freeze It makes total sense. It’s really interesting. So throughout your career, then, somehow you’ve got to Lyft by, you know, applying that kind of ethos, if you will.

Craig Martell Prior to LinkedIn, which was in 2013, I was a professor studying machine learning, working for the Naval Postgraduate School. And then I was at LinkedIn for a long time, and then when the folks at Lyft approached me, that was really exciting for a couple of reasons. One was that the problem is really hard, and it might seem easy, but it’s actually really hard. You say to us you want a ride, and you want a ride to a particular location, and we have to instantaneously—or apparently instantaneously to you—find a driver that you’ll want to say yes to, that’s not too far away, whose estimated time of arrival is within your comfort level. Because if you think about it, it really costs you nothing to hang up the app, and open up Uber. So if we get the ETA too long, then you’re gonna say, I don’t want to wait that long and you’ll move on to a different app. If we get the ETA too short—meaning we estimate that it only takes 2 minutes for the driver to arrive, but the driver takes 3 minutes—you’ll be really angry with us. And halfway through that extra minute, you’ll hang up and go to Uber. So the problem space is really fascinating, and the way it’s presented to you has a great deal to do with how you decide whether or not you want to take the ride. So, there’s all kinds of cognitive aspects to machine learning for something as dynamic and as fast as Lyft, and then that feeds into the second reason that I was really excited about talking with the folks at Lyft. It’s really transforming transportation, and that’s a pretty exciting thing just in and of itself.

Jim Freeze Yeah, it’s amazing how you and your competitors have just completely changed the landscape for transportation. You’ve kind of already started to touch on this, but how do you apply machine learning and the company’s machine learning platform? What role do the teams play within the company and your services?

Craig Martell So the machine learning platform is a pretty fascinating task these days. What do I mean by that? Well, it used to be the case that if you wanted machine learning to be successful in your company, you would hire as many people as possible, very expensive people with PhDs in machine learning, and not only were they expensive, but they’re few and far between. Even though we’re churning out many more of them, folks like Google and Amazon and Microsoft are sucking them up. So it’s no longer the case that what was successful ten years ago, like hiring as many experts as you can, is successful today. So instead you have to, in a very real way, replicate those experts. You have to build a platform that allows non-experts—what I call AI practitioners—to be as impactful as an expert.

So you have to help them with how to gather data. You have to help them with how to do the training. You have to help them with the selection of the algorithm that might work best for their problem. Then, after you’ve shipped an ML service, you have to help them evaluate whether that service is bringing business value. You have to help them retrain their model as the world changes around us—and over the last couple of years the world has changed quite a bit—and you have to help them know when it’s time to rebuild the model and try something new. So what we’re trying to do with our machine learning platform stack is impose correct ML operational behavior on an ML practitioner to make it as easy as possible for them to deliver value. So that’s what we’re doing with the machine learning platform team. Does that make sense?
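The lifecycle Craig outlines—gather data, train, evaluate, retrain as the world drifts, rebuild when retraining stops helping—can be sketched in miniature. This is a toy illustration only; no real platform API, and none of Lyft’s tooling, is implied:

```python
# Toy sketch of the gather -> train -> evaluate -> retrain loop a platform
# would automate for a practitioner. All components are deliberately trivial.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def gather_data(n=200):
    """Stand-in for data collection: (feature, label) pairs."""
    return [(x := random.random(), x > 0.5) for _ in range(n)]

def train(data):
    """Stand-in for training: learn a threshold on the feature."""
    positives = [x for x, y in data if y]
    return min(positives) if positives else 0.5

def evaluate(model, data):
    """Fraction of fresh examples the threshold model classifies correctly."""
    return sum((x >= model) == y for x, y in data) / len(data)

# One turn of the loop: train, check business value, retrain on drift.
model = train(gather_data())
accuracy = evaluate(model, gather_data())
if accuracy < 0.9:              # quality dropped -> retrain on fresh data
    model = train(gather_data())
```

The value of a platform is that the practitioner only supplies the domain pieces; the gathering, evaluation, and retrain-on-drift plumbing is imposed as "correct ML operational behavior" by the platform itself.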

Jim Freeze That makes total sense. I mean, working for an AI company, trying to keep pace with the hiring—it doesn’t scale because there’s so many companies in search of that limited talent pool. I’d like to kind of go on to a notion that I heard about in a podcast in which you discuss AI modeling as a cognitive science test. I think that’s a testament to your diverse background. Can you share a little bit more about this ethos and how you apply it at Lyft?

Craig Martell So, there are 2 pieces to that. One of them is that it’s a cognitive science experiment and the other part is that it’s an experiment. So, let’s talk about the experiment part first because I think most of your listeners will understand that. Every machine learning model is a hypothesis about what’s going to create the desired behavior in your users. It is a hypothesis that says, if I present text in the following way, I will get the following response. Or, it’s a hypothesis that says, if I present “other people who bought those pants also bought these things,” and I present them with a particular carousel structure with particular UI features, that will increase people’s purchasing of these other items. So, you’re really making a scientific hypothesis about human behavior, about how these things that you’re building will impact human behavior. 

Well, that’s cognitive science. I mean, what you’re saying is, I can tap into your cognition in a certain way by showing you a screen that looks like this, or one that looks like this, or a button that’s blue versus a button that’s green, or the text over the button or underneath the picture. Whatever the experiment you’re doing, you’re really saying: presenting it in this way is going to have a particular cognitive impact on our users. And as soon as you start thinking about it that way, I think you think more deeply about what you’re doing. So if you’re just an engineer and you’re going to try 10 different things, well, that’s fine. You try 10 different things. But if you think that you’re trying to impact cognition, well maybe you talk to the design team more. Maybe you ask the design team: Hey, you’re the experts on what people like and don’t like—of these 2, how can we tweak them? What’s your hypothesis about how we can tweak them to get an even greater impact? 

So I think it’s really important that machine learning practitioners understand that they’re not in a bubble, that design can really help them. The way the front of the mobile app is presented can really help them, and it’s not just themselves in isolation doing a sort of hyper-nerdy experiment, and so that’s why I call it the cognitive science experiment, because I think it really should bring in almost all of the teams at your company.

Jim Freeze That’s really interesting. I’m actually jumping ahead in my mind to a question I wanted to ask you: you believe that AI is less about algorithms and more about the applications. Is that the notion you’re talking about when you mention talking to a designer, for example?

Craig Martell Well, so just to be clear, I can hear the AI experts in the audience screaming at that statement. So, it’s not less about the algorithms in the sense that the algorithms are extremely important, and still are the core of what we’re doing. The reason I say that is today, it’s less about the algorithms because the algorithms themselves have been packaged up, commoditized, made really easy to access. But what hasn’t been made really easy to access is the experimental design: what are you trying to achieve with this algorithm, what’s the data that you’re gathering to feed this algorithm, and what is the experiment between A and B? Is it just the text, or is it the list of things you’re recommending in a new order, or is it a way to get people to engage with your app differently? So when I say that, I’m trying to say there’s so much more that a practitioner is now free to think about, because the algorithms themselves have been, as I said, commoditized. And they’ve also been wrapped in APIs. So, it’s really easy to use them without needing that expertise. 

Jim Freeze Yeah, right. It’s the whole low code/no code theory, right? Which is trying to democratize AI, correct?

Craig Martell Yeah, I’m not buying into the low code/no code. I also hear the engineers screaming. I’m saying there are really easy APIs for a non-machine learning expert to be able to access those algorithms.

Jim Freeze I’m glad you said that, because I actually don’t fully buy into the low code/no code either. PowerPoint to reality is very different. I appreciate you saying that. Could you talk just a little bit more about how AI works in the background, specifically in Lyft’s ridesharing app? Maybe a couple examples that our listeners could relate to.

Craig Martell It’s actually everywhere. I know it doesn’t seem like it. But you open up the app, and we recommend destinations for you. That’s AI. When you start to search for a destination, driving the search is AI. Once you’ve said, this is the destination that I want, then we have to present to you different products. If you noticed, some of the products are above the fold and some are below the fold, meaning you have to scroll down to see them—well, that ordering of those products is personalized to you. Pricing is an AI model, because we have to tell you the pricing before we have even found a driver. We have to guess on a driver, but that driver might get matched with somebody else by the time you say yes, so we’re guessing on a price that we’re committing to you once you click on it, and that may or may not be good for us, right? Once you’ve clicked on a product, we have to match you with a driver. We then tell you—here’s one of the coolest ones, and as I said before I think one of the most important ones—the ETA. Well, that ETA is a guess. We know where the driver is, we know where you are. We’re gonna guess the path they’re going to take. We’re gonna guess how long it’s going to take to get there. We’re gonna guess whether or not there’s an accident, or a street light, or if it’s raining, whether that’s slower. So there’s so many inferences between when you open the app and the driver shows up. It drives almost everything.

Jim Freeze That’s fascinating. It’s obvious that AI is very pervasive throughout the app.

Craig Martell Yeah, and I just want to be clear, because I want to give credit where credit’s due. My teams are centralized, and we provide centralized services. So the Applied ML team, which we didn’t really talk about before, they dive in with expert help on really difficult and challenging problems. But each of the product teams has its own science organization that does AI independently of my team. So I just want to make sure I’m not hogging all the credit. All of those things I just described take an army of people distributed across the company to get done right.

Jim Freeze I don’t doubt that at all. So how do you see AI evolving specifically in the transportation industry in the next 5 to 10 years?

Craig Martell I don’t know if I can answer specifically in the transportation industry, but I think we’re going to continue down this path of practitioners over experts. We’ll always need experts. There are going to be some problems like a massive real-time rider-driver marketplace. That’s a really hard problem. That’s not an off-the-shelf problem, and we’re going to need experts, economists, statisticians, expert machine learners to help us solve those problems. So we’ll always need experts, but the demand for AI is going to continue to grow throughout everything that we use. 

We now get very frustrated if the artifacts we use aren’t smart enough, and I think that that’s going to continue. And simultaneously, AI training is getting significantly more robust. So if you grab any CS undergrad who’s graduating today, they’ve had 2 or 3 machine learning classes; five years ago maybe they had one, and ten years ago they had none. CS itself has kept up with this demand, and so now you have 3 things combined: a massive increase in demand, an increase in these platforms that make it easier for non-experts to be successful in machine learning, and CS undergrads who come to the table with a really good practitioner’s experience in how to build these models. So I think we’re poised for even more hockey stick growth in AI, because I think all the pieces are in place. And again, we’re still gonna need experts, and ideally there are still a lot of people going to get PhDs in machine learning, because we’re still going to need that. But I think we also have the tools in place now to start to meet the demand that I see coming.

Jim Freeze That’s a very good prognostication. I appreciate that. Craig, it’s been an absolute pleasure. Thank you very much.

Craig Martell I really enjoyed it. Thanks so much.

Jim Freeze That’s all for this episode of The ConversAItion. Join us next time for an episode featuring Laura Patel, Principal Data Scientist at UPS. We’ll discuss how Laura’s background in physics informs her work at UPS today, the ways in which UPS uses AI to help manage millions of packages every day and how today’s supply chain challenges accelerate innovation.

This episode of The ConversAItion podcast was produced by Interactions, a conversational AI company. I’m Jim Freeze, signing off, and we’ll see you next time. 




Check out more episodes of The ConversAItion.