A Taste for Acceleration: DoorDash Revs Up AI with GPUs

On-demand food platform runs a fine-tuned recommendation engine.
by Rick Merritt

When it comes to bringing home the bacon — or sushi or quesadillas — DoorDash is shifting into high gear, thanks in part to AI.

The company got its start in 2013, offering deals such as delivering pad thai to Stanford University dorm rooms. Today, with a phone tap, customers can order a meal from more than 310,000 vendors — including Chipotle, Walmart and Wingstop — across 4,000 cities in the U.S., Canada and Australia.

Part of its secret sauce is a digital logistics engine that connects its three-sided marketplace of merchants, customers and independent contractors the company calls Dashers. Each community taps into the platform for different reasons.

Using a mix of machine-learning models, the logistics engine serves personalized restaurant recommendations and delivery-time predictions to customers who want on-demand access to their local businesses. Meanwhile, it assigns Dashers to orders and sorts through trillions of options to find their optimal routes while calculating delivery prices dynamically.
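At its core, assigning Dashers to orders is a matching problem. The sketch below is a toy illustration, not DoorDash's production method: it uses SciPy's Hungarian-algorithm solver on a hypothetical cost matrix of estimated delivery minutes, while the real system optimizes dynamically over vastly larger, constantly changing inputs.

```python
# Illustrative sketch of the core assignment problem a logistics engine solves:
# match Dashers to orders so total estimated delivery time is minimized.
# SciPy's Hungarian-algorithm solver handles the toy case; DoorDash's system
# is far more sophisticated (dynamic, streaming, multi-objective).
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows are Dashers, columns are orders,
# entries are estimated minutes to complete each delivery.
eta_minutes = np.array([
    [12.0, 30.0, 22.0],
    [18.0,  9.0, 25.0],
    [27.0, 16.0, 11.0],
])

dasher_idx, order_idx = linear_sum_assignment(eta_minutes)
for d, o in zip(dasher_idx, order_idx):
    print(f"Dasher {d} -> order {o} ({eta_minutes[d, o]:.0f} min)")
print("Total minutes:", eta_minutes[dasher_idx, order_idx].sum())
```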

The work requires a complex set of related algorithms embedded in numerous machine-learning models, crunching ever-changing data flows. To accelerate the process, DoorDash has turned to NVIDIA GPUs in the cloud to train its AI models.

Training in One-Tenth the Time

Moving from CPUs to GPUs for AI training netted DoorDash a 10x speed-up. Migrating from a single GPU to multiple GPUs accelerated its work another 3x, said Gary Ren, a machine-learning engineer at DoorDash who will describe the company’s approach to AI in an online talk at GTC Digital.

“Faster training means we get to try more models and parameters, which is super critical for us — faster is always better for training speeds,” Ren said.

“A 10x training speed-up means we spin up cloud clusters for a tenth the time, so we get a 10x reduction in computing costs. The impact of trying 10x more parameters or models is trickier to quantify, but it gives us some multiple of increased overall business performance,” he added.
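For readers curious about the mechanics, here is a minimal PyTorch sketch of the migration Ren describes: moving a model from CPU to a single GPU, then spreading each batch across multiple GPUs. The model and data are placeholders, since DoorDash hasn’t published its training code.

```python
# Minimal PyTorch sketch of the CPU -> GPU -> multi-GPU progression described
# above. The model and data are placeholder stand-ins, not DoorDash's code;
# this only illustrates the mechanics of the migration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 1))

# Step 1: single GPU -- move the model (and each batch) onto the device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Step 2: multiple GPUs -- DataParallel splits each batch across devices.
# (torch.nn.parallel.DistributedDataParallel scales better in practice.)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

inputs = torch.randn(1024, 128).to(device)   # placeholder batch
targets = torch.randn(1024, 1).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
```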

Making Great Recommendations

So far, DoorDash has publicly discussed one of its deep-learning applications: its recommendation engine, which has been in production for about two years. Recommendations are becoming more important as companies such as DoorDash realize consumers don’t always know what they’re looking for.

Potential customers may “hop on our app and explore their options so — given our huge number of merchants and consumers — recommending the right merchants can make a difference between getting an order or the customer going elsewhere,” he said.

Because its recommendation engine is so important, DoorDash continually fine-tunes it. For example, in its engineering blogs, the company describes how it crafts an n-dimensional embedding vector for each merchant to find nuanced similarities among vendors.
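As a rough illustration of how such embeddings enable similarity lookups, the sketch below ranks merchants by cosine similarity. The merchant names and random vectors are hypothetical stand-ins; DoorDash learns its embeddings from real interaction data, and this shows only the lookup, not the training.

```python
# Toy illustration of finding similar merchants via embedding vectors.
# The vectors here are random stand-ins; real embeddings are learned from
# interaction data. This sketch shows only the similarity lookup.
import numpy as np

rng = np.random.default_rng(0)
merchants = ["thai_spot", "sushi_bar", "taqueria", "noodle_house"]
# Hypothetical n-dimensional embedding per merchant (n = 16 here).
embeddings = {name: rng.normal(size=16) for name in merchants}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embeddings["thai_spot"]
ranked = sorted(
    ((name, cosine_similarity(query, vec))
     for name, vec in embeddings.items() if name != "thai_spot"),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)  # most similar merchants first
```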

It also adopts so-called multi-level, multi-armed bandit algorithms that let AI models simultaneously exploit choices customers have liked in the past and explore new possibilities.
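To make the exploit/explore trade-off concrete, here is a simplified single-level epsilon-greedy bandit with made-up reward rates. DoorDash’s multi-level variant is more elaborate, but the core loop is the same: usually serve the best-known option, occasionally try something new.

```python
# Simplified epsilon-greedy bandit illustrating the exploit/explore trade-off.
# The arms and their hidden "order" rates are made up for this sketch; the
# multi-level bandit DoorDash describes is more involved.
import random

arms = {"cuisine_a": 0.30, "cuisine_b": 0.55, "cuisine_c": 0.40}
counts = {arm: 0 for arm in arms}
values = {arm: 0.0 for arm in arms}  # running mean reward per arm
epsilon = 0.1  # fraction of the time we explore

random.seed(42)
for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.choice(list(arms))    # explore: pick a random option
    else:
        arm = max(values, key=values.get)  # exploit: pick the current best
    reward = 1.0 if random.random() < arms[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values)  # estimates approach the hidden rates, favoring cuisine_b
```

Over many rounds the running estimates converge toward the hidden reward rates, while the occasional exploration keeps lesser-tried options in play.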

Speaking of New Use Cases

While it optimizes its recommendation engine, DoorDash is exploring new AI use cases, too.

“There are several areas where conversations happen between consumers and Dashers or support agents. Making those conversations quick and seamless is critical, and with improvements in NLP (natural-language processing) there’s definitely potential to use AI here, so we’re exploring some solutions,” Ren said.

NLP is one of several use cases that will drive future performance needs.

“We deal with data from the real world and it’s always changing. Every city has unique traffic patterns, special events and weather conditions that add variance — this complexity makes it a challenge to deliver predictions with high accuracy,” he said.

The company’s growing business presents other challenges, too, such as making recommendations for first-time customers and planning delivery routes in newly entered cities.

“As we scale, those boundaries get pushed — our inference speeds are good enough today, but we’ll need to plan for the future,” he added.