Scraping Uber Eats Restaurant Data with Python (2026)
Uber Eats doesn't offer a public API for consumer data. No docs, no keys, no free tier. But their web app makes calls to an internal API that returns everything you see on screen -- restaurant details, full menus with prices, delivery time estimates, ratings, and more.
The trick is intercepting those requests. Here's how to do it reliably with Python and Playwright, plus how to work around Uber's bot detection at scale.
How Uber Eats Loads Data
When you browse a restaurant page, the frontend fires a POST to https://www.ubereats.com/api/getStoreV1 with a JSON payload containing the store UUID. The response is a massive JSON blob with everything: menu sections, item names, prices, customization options, delivery fee estimates, and store metadata.
You can't just hit this endpoint with requests though. Uber uses aggressive bot detection -- device fingerprinting, cookie validation, and behavioral analysis. You need a real browser context.
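To orient yourself, here is a hypothetical sketch of the request payload and a heavily truncated response shape. The field names match what the extraction code in this article looks for, but they come from observed responses -- Uber can rename or restructure them at any time, so treat this as an illustration, not a schema.

```python
# Hypothetical getStoreV1 request body: the store UUID from the page URL.
example_request = {"storeUuid": "abc123-..."}

# Heavily truncated, illustrative response shape.
example_response = {
    "status": "success",
    "data": {
        "uuid": "abc123-...",
        "title": "Example Pizza Co.",
        "rating": {"ratingValue": 4.7, "reviewCount": "500+"},
        "catalogSectionsMap": {
            "section-1": {
                "title": "Pizzas",
                "itemsMap": {
                    "item-1": {"title": "Margherita", "price": {"amount": 1299}},
                },
            },
        },
    },
}

# Prices come back as integer cents:
price_usd = (
    example_response["data"]["catalogSectionsMap"]["section-1"]
    ["itemsMap"]["item-1"]["price"]["amount"] / 100
)
```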
Setting Up Playwright
Playwright gives you a real Chromium instance that Uber's bot detection has a harder time flagging.
# pip install playwright playwright-stealth
# playwright install chromium
import asyncio
import json

from playwright.async_api import async_playwright


async def setup_browser(proxy_config=None):
    """Set up a Playwright browser context with anti-detection settings."""
    pw = await async_playwright().start()
    browser = await pw.chromium.launch(
        headless=True,
        args=[
            "--disable-blink-features=AutomationControlled",
            "--no-sandbox",
            "--disable-setuid-sandbox",
            "--disable-dev-shm-usage",
            "--disable-accelerated-2d-canvas",
            "--no-first-run",
            "--no-zygote",
        ],
    )
    context_options = {
        "viewport": {"width": 1366, "height": 768},
        "user_agent": (
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/126.0.0.0 Safari/537.36"
        ),
        "locale": "en-US",
        "timezone_id": "America/New_York",
        "geolocation": {"latitude": 40.7128, "longitude": -74.0060},
        "permissions": ["geolocation"],
    }
    if proxy_config:
        context_options["proxy"] = proxy_config
    context = await browser.new_context(**context_options)
    # Remove automation indicators
    await context.add_init_script("""
        Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
        Object.defineProperty(navigator, 'plugins', {get: () => [1, 2, 3, 4, 5]});
        Object.defineProperty(navigator, 'languages', {get: () => ['en-US', 'en']});
    """)
    return pw, browser, context
Intercepting the Store API
Instead of parsing HTML, intercept the network calls. Playwright makes this straightforward with response listeners:
async def scrape_restaurant(url: str, proxy_config=None) -> dict:
    """Scrape a restaurant page and capture its API response."""
    pw, browser, context = await setup_browser(proxy_config)
    store_data = {}
    api_responses = []
    page = await context.new_page()

    # Apply stealth patches if playwright-stealth is installed
    try:
        from playwright_stealth import stealth_async
        await stealth_async(page)
    except ImportError:
        pass

    # Capture all API responses we care about
    async def capture_response(response):
        url_lower = response.url.lower()
        if any(endpoint in url_lower for endpoint in ["getstorev1", "getmenu", "storev2"]):
            try:
                body = await response.json()
                api_responses.append({
                    "url": response.url,
                    "status": response.status,
                    "data": body,
                })
                if "data" in body:
                    store_data.update(body["data"])
            except Exception:
                pass

    page.on("response", capture_response)
    try:
        # Visit homepage first to get cookies set
        await page.goto("https://www.ubereats.com", wait_until="domcontentloaded", timeout=30000)
        await page.wait_for_timeout(2000)
        # Navigate to the restaurant
        await page.goto(url, wait_until="networkidle", timeout=45000)
        await page.wait_for_timeout(3000)  # wait for async API calls
        # Scroll to trigger lazy loading
        await page.evaluate("window.scrollTo(0, document.body.scrollHeight / 2)")
        await page.wait_for_timeout(1000)
    except Exception as e:
        print(f"Navigation error: {e}")
    finally:
        await browser.close()
        await pw.stop()
    return store_data
Extracting Restaurant Metadata
The store data response contains restaurant details at the top level:
def extract_restaurant_info(store_data: dict) -> dict:
    """Extract restaurant metadata from the API response."""
    return {
        "uuid": store_data.get("uuid"),
        "name": store_data.get("title"),
        "description": store_data.get("description"),
        "rating": store_data.get("rating", {}).get("ratingValue"),
        "review_count": store_data.get("rating", {}).get("reviewCount"),
        "price_range": store_data.get("priceRange"),  # "$", "$$", "$$$"
        "categories": store_data.get("categories", []),
        "delivery_fee": store_data.get("farePlan", {}).get("deliveryFee"),
        "is_surge_pricing": store_data.get("farePlan", {}).get("isExpensiveDelivery", False),
        "estimated_time_min": store_data.get("etaRange", {}).get("min"),
        "estimated_time_max": store_data.get("etaRange", {}).get("max"),
        "estimated_time_text": store_data.get("etaRange", {}).get("text"),
        "address": store_data.get("location", {}).get("address"),
        "city": store_data.get("location", {}).get("city"),
        "latitude": store_data.get("location", {}).get("latitude"),
        "longitude": store_data.get("location", {}).get("longitude"),
        "phone": store_data.get("phoneNumber"),
        "is_open": store_data.get("isOpen"),
        "accepts_instructions": store_data.get("acceptsDeliveryInstructions", False),
        "slug": store_data.get("slug"),
    }
Extracting Menu Items and Prices
The store data response nests menus inside sections. Each section has a list of items with UUIDs, titles, prices, and optional image URLs.
def extract_menu(store_data: dict) -> list[dict]:
    """Extract all menu items with prices from the store data."""
    items = []
    # Menu structure can vary -- try both known layouts
    sections = store_data.get("catalogSectionsMap", {})
    if not sections:
        # Try alternate structure
        menu = store_data.get("menu", {})
        sections = menu.get("sections", {}) if menu else {}
    for section_id, section in sections.items():
        section_title = section.get("title", "Uncategorized")
        section_description = section.get("description", "")
        # Items can be in different keys depending on API version
        item_sources = (
            section.get("itemsMap", {}).values()
            or [section.get("items", [])]
        )
        for item_list in item_sources:
            if not isinstance(item_list, list):
                item_list = [item_list]
            for entry in item_list:
                item = entry.get("catalogItem", entry)  # handle both wrapped and unwrapped
                if not item:
                    continue
                price_info = item.get("price", {})
                # Price is in cents
                unit_price = price_info.get("amount", 0) / 100
                # Customization options (modifiers)
                customizations = []
                for group in item.get("customizationList", []):
                    customizations.append({
                        "name": group.get("title"),
                        "min_selections": group.get("minNumOptions", 0),
                        "max_selections": group.get("maxNumOptions"),
                        "options": [
                            {
                                "name": opt.get("title"),
                                "price": opt.get("price", {}).get("amount", 0) / 100,
                            }
                            for opt in group.get("options", [])
                        ],
                    })
                items.append({
                    "section": section_title,
                    "name": item.get("title", ""),
                    "description": item.get("itemDescription", ""),
                    "price_usd": unit_price,
                    "image_url": item.get("imageUrl", ""),
                    "uuid": item.get("uuid", ""),
                    "is_available": item.get("isAvailable", True),
                    "is_popular": item.get("isPopular", False),
                    "customizations_count": len(customizations),
                    "customizations": customizations,
                })
    return items
async def main():
    url = "https://www.ubereats.com/store/example-restaurant/abc123"
    data = await scrape_restaurant(url)
    if not data:
        print("No data captured -- check if URL is valid")
        return
    info = extract_restaurant_info(data)
    menu = extract_menu(data)
    print(f"Restaurant: {info['name']}")
    print(f"Rating: {info['rating']} ({info['review_count']} reviews)")
    print(f"Delivery: ${info['delivery_fee']} -- ETA {info['estimated_time_text']}")
    print(f"Address: {info['address']}, {info['city']}")
    print(f"Menu items: {len(menu)}")
    # Show popular items
    popular = [i for i in menu if i["is_popular"]]
    print(f"\nPopular items ({len(popular)}):")
    for item in popular[:5]:
        print(f"  ${item['price_usd']:.2f} -- {item['name']}")


if __name__ == "__main__":
    asyncio.run(main())
Handling Anti-Bot Detection
Uber invests heavily in bot detection. Here's what you're up against and how to deal with it.
Fingerprint detection. Uber checks navigator.webdriver, canvas fingerprints, and WebGL renderer strings. Playwright's default Chromium leaks automation signals. The add_init_script trick above patches the most obvious flags. For tougher fingerprinting, use playwright-stealth:
# pip install playwright-stealth
from playwright_stealth import stealth_async
page = await context.new_page()
await stealth_async(page)
await page.goto(url)
IP-based rate limiting. Hitting the same endpoints from one IP gets you blocked fast -- usually after 20-50 requests. For any serious data collection, you need residential proxies that rotate per request. ThorData's residential proxy network is well-suited for food delivery scraping because their proxy pool covers the same geolocations as Uber's service areas, which matters for getting accurate local pricing and delivery estimates.
# Proxy rotation with Playwright
proxy_config = {
"server": "http://proxy.thordata.com:9000",
"username": "YOUR_USER",
"password": "YOUR_PASS",
}
pw, browser, context = await setup_browser(proxy_config=proxy_config)
Cookie and session validation. Uber sets tracking cookies on first visit. Always load the homepage first (as shown above), wait for cookies to set, then navigate to the target restaurant. Skipping this step and going straight to the restaurant URL results in 403s or empty responses.
Geo-restriction. Uber checks your IP's geolocation against the delivery area. A proxy in San Francisco won't return NYC restaurant data reliably. Use geo-targeted proxies matching the market you're scraping.
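One workable pattern is to key proxy configs by market slug, so every scrape for a city goes out through an IP in that city. The server addresses and credentials below are placeholders -- the exact geo-targeting syntax depends on your proxy provider, so check its documentation:

```python
# Hypothetical per-market proxy table; substitute your provider's real
# geo-targeted endpoints and credentials.
MARKET_PROXIES = {
    "new-york-ny": {
        "server": "http://us-nyc.proxy.example.com:9000",
        "username": "YOUR_USER",
        "password": "YOUR_PASS",
    },
    "san-francisco-ca": {
        "server": "http://us-sfo.proxy.example.com:9000",
        "username": "YOUR_USER",
        "password": "YOUR_PASS",
    },
}


def proxy_for_market(market: str):
    """Return the proxy config for a market slug, or None to go direct."""
    return MARKET_PROXIES.get(market)
```

Passing the result straight into setup_browser keeps the geo decision in one place.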
Collecting Multiple Restaurants by Location
To scrape restaurant listings for a city, start from the feed endpoint that loads when you set a delivery address:
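The feed URL carries the delivery location in a pl query parameter, which decodes as Base64-wrapped, percent-encoded JSON. A sketch of building one -- the field set here matches a decoded browser-session value, but decode a pl from your own session to confirm it before relying on it:

```python
import base64
import json
import urllib.parse


def build_pl_param(lat: float, lng: float) -> str:
    """Build the `pl` location parameter: JSON -> percent-encode -> Base64."""
    payload = json.dumps(
        {
            "address": "",
            "reference": "",
            "referenceType": "google_places",
            "latitude": lat,
            "longitude": lng,
        },
        separators=(",", ":"),  # compact JSON, like the browser sends
    )
    return base64.b64encode(urllib.parse.quote(payload, safe="").encode()).decode()


# Usage:
# feed_url = f"https://www.ubereats.com/feed?diningMode=DELIVERY&pl={build_pl_param(40.7128, -74.006)}"
```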
async def get_restaurants_by_location(lat: float, lng: float, proxy_config=None) -> list[dict]:
    """Get restaurant listings for a geographic coordinate."""
    pw, browser, context = await setup_browser(proxy_config)
    restaurants = []
    page = await context.new_page()
    feed_data = []

    async def capture_feed(response):
        if "getFeedV1" in response.url or "feed" in response.url.lower():
            try:
                body = await response.json()
                if "data" in body:
                    feed_data.append(body["data"])
            except Exception:
                pass

    page.on("response", capture_feed)
    try:
        await page.goto("https://www.ubereats.com", wait_until="domcontentloaded")
        await page.wait_for_timeout(2000)
        # Set the delivery location by navigating to a city page. The lat/lng
        # can alternatively be passed via the Base64-encoded `pl` query
        # parameter on the /feed URL.
        await page.goto(
            "https://www.ubereats.com/category/new-york-ny",
            wait_until="networkidle",
            timeout=30000,
        )
        await page.wait_for_timeout(3000)
    except Exception as e:
        print(f"Error: {e}")
    finally:
        await browser.close()
        await pw.stop()

    # Parse collected feed data
    for feed in feed_data:
        for item in feed.get("feedItems", []):
            store = item.get("store", {}) or item.get("storeInfo", {})
            if store and store.get("title"):
                restaurants.append({
                    "uuid": store.get("storeUuid") or store.get("uuid"),
                    "name": store.get("title"),
                    "rating": store.get("rating", {}).get("ratingValue"),
                    "review_count": store.get("rating", {}).get("reviewCount"),
                    "delivery_time_text": store.get("etaRange", {}).get("text"),
                    "price_range": store.get("priceRange"),
                    "slug": store.get("slug"),
                    "categories": store.get("categories", []),
                })

    # Deduplicate by UUID
    seen = set()
    unique = []
    for r in restaurants:
        if r["uuid"] and r["uuid"] not in seen:
            seen.add(r["uuid"])
            unique.append(r)
    return unique
Saving to a Database
For ongoing price tracking, dump results into SQLite:
import sqlite3


def init_db(db_path: str = "ubereats.db") -> sqlite3.Connection:
    conn = sqlite3.connect(db_path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS restaurants (
            uuid TEXT PRIMARY KEY,
            name TEXT,
            address TEXT,
            city TEXT,
            latitude REAL,
            longitude REAL,
            rating REAL,
            review_count INTEGER,
            price_range TEXT,
            slug TEXT,
            first_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS menu_snapshots (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            restaurant_uuid TEXT,
            scraped_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            delivery_fee REAL,
            estimated_time_min INTEGER,
            estimated_time_max INTEGER,
            is_open INTEGER,
            FOREIGN KEY (restaurant_uuid) REFERENCES restaurants(uuid)
        );
        CREATE TABLE IF NOT EXISTS menu_items (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            restaurant_uuid TEXT,
            item_uuid TEXT,
            section TEXT,
            name TEXT,
            description TEXT,
            price_usd REAL,
            is_available INTEGER,
            is_popular INTEGER,
            scraped_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            FOREIGN KEY (restaurant_uuid) REFERENCES restaurants(uuid)
        );
        CREATE INDEX IF NOT EXISTS idx_items_restaurant ON menu_items(restaurant_uuid);
        CREATE INDEX IF NOT EXISTS idx_items_scraped ON menu_items(scraped_at);
    """)
    conn.commit()
    return conn
def save_restaurant_data(conn: sqlite3.Connection, info: dict, items: list[dict]):
    """Save restaurant metadata and menu items."""
    # Upsert restaurant
    conn.execute(
        """INSERT OR REPLACE INTO restaurants
           (uuid, name, address, city, latitude, longitude, rating, review_count, price_range, slug)
           VALUES (?,?,?,?,?,?,?,?,?,?)""",
        (
            info["uuid"], info["name"], info.get("address"), info.get("city"),
            info.get("latitude"), info.get("longitude"), info.get("rating"),
            info.get("review_count"), info.get("price_range"), info.get("slug"),
        ),
    )
    # Snapshot
    snapshot_id = conn.execute(
        """INSERT INTO menu_snapshots
           (restaurant_uuid, delivery_fee, estimated_time_min, estimated_time_max, is_open)
           VALUES (?,?,?,?,?)""",
        (info["uuid"], info.get("delivery_fee"), info.get("estimated_time_min"),
         info.get("estimated_time_max"),
         int(bool(info.get("is_open")))),  # None (unknown) counts as closed
    ).lastrowid
    # Menu items
    for item in items:
        conn.execute(
            """INSERT INTO menu_items
               (restaurant_uuid, item_uuid, section, name, description, price_usd, is_available, is_popular)
               VALUES (?,?,?,?,?,?,?,?)""",
            (
                info["uuid"], item["uuid"], item["section"], item["name"],
                item.get("description", ""), item["price_usd"],
                int(item.get("is_available", True)), int(item.get("is_popular", False)),
            ),
        )
    conn.commit()
    return snapshot_id
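With snapshots accumulating, pulling an item's price history is a single query against the schema above. A minimal helper:

```python
import sqlite3


def price_history(conn: sqlite3.Connection, restaurant_uuid: str, item_name: str):
    """Return (scraped_at, price_usd) rows for one item, oldest first."""
    return conn.execute(
        """SELECT scraped_at, price_usd
           FROM menu_items
           WHERE restaurant_uuid = ? AND name = ?
           ORDER BY scraped_at ASC""",
        (restaurant_uuid, item_name),
    ).fetchall()
```

Matching on the item's name rather than its UUID makes the history survive Uber reissuing item UUIDs, at the cost of missing renamed dishes.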
What to Watch Out For
A few practical notes from running this at scale:
Prices vary by address. Uber adjusts delivery fees and sometimes menu prices based on your delivery location. Set coordinates that match a real residential address in the target city. Use geo-targeted proxies from ThorData that match the market you're scraping.
Menu availability changes. Items show as unavailable during off-hours. Scrape during peak lunch/dinner windows (11am-2pm and 5pm-9pm local time) for complete menus with accurate availability status.
Rate yourself. Even with proxies, don't hammer the API. One restaurant every 5-10 seconds is sustainable. Faster than that and you'll burn through proxy IPs quickly, and each new IP has to establish a session again.
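A minimal pacing helper -- a hard floor plus random jitter, so requests don't land on a predictable clock:

```python
import asyncio
import random


async def polite_sleep(min_s: float = 5.0, max_s: float = 10.0) -> float:
    """Sleep a random interval between scrapes; returns the delay used."""
    delay = random.uniform(min_s, max_s)
    await asyncio.sleep(delay)
    return delay


# Usage inside a scrape loop:
# for url in restaurant_urls:
#     data = await scrape_restaurant(url, proxy_config)
#     await polite_sleep()
```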
Store UUIDs are stable. Once you have a restaurant's UUID, you can track it over time without re-discovering it from search. Build your UUID database first, then schedule regular menu snapshots.
Handle the "store temporarily unavailable" case. Some restaurants deactivate temporarily. The API still returns data but isOpen is false and the menu may be incomplete. Track this in your snapshots.
Session warmup matters. Always navigate to the Uber Eats homepage before visiting a restaurant page. Skipping the homepage navigation causes cookie issues that result in empty API responses.
Practical Use Cases
Restaurant price comparison -- Track how the same dish varies in price across multiple delivery platforms (Uber Eats, DoorDash, Grubhub) and the restaurant's own app.
Market research for restaurant owners -- Monitor competitor pricing, popular items, and delivery fee structures in a local market.
Food delivery analytics -- Track restaurant ratings over time, correlate rating changes with menu updates, and identify newly popular restaurants.
Building a local food directory -- Aggregate restaurant data across a city, normalize categories and cuisines, and build a searchable directory with richer filtering than what Uber Eats offers natively.
Pricing anomaly detection -- Alert when a restaurant significantly raises prices (common pattern before discontinuing a promo) or when delivery fees spike.
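The anomaly-detection idea reduces to comparing each item's latest price against its previous snapshot. A sketch with an illustrative 15% threshold (the data structure and cutoff are assumptions, not part of any Uber API):

```python
def find_price_jumps(history, threshold: float = 0.15):
    """Given {item name: chronological price list}, return (name, pct_change)
    for items whose latest price rose more than `threshold` (0.15 = 15%)."""
    jumps = []
    for name, prices in history.items():
        if len(prices) < 2 or prices[-2] <= 0:
            continue  # need two snapshots and a sane baseline
        change = (prices[-1] - prices[-2]) / prices[-2]
        if change > threshold:
            jumps.append((name, round(change, 4)))
    return jumps
```

Feeding it the output of the price-history queries from the SQLite section turns the tables above into an alerting pipeline.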
Conclusion
Uber Eats data is accessible with the right setup -- Playwright handles the JavaScript rendering and session management, and intercepting the internal API calls gives you clean structured data without HTML parsing. The main ongoing challenges are Uber's fingerprinting-based bot detection (mitigated by playwright-stealth) and IP-based blocking (mitigated by residential proxies).
For production scraping, ThorData's residential proxy network with geo-targeting is the reliable solution for staying under Uber's radar while getting accurate local pricing and availability data. Combined with SQLite for storage and scheduled Playwright runs, you'll have a robust Uber Eats intelligence pipeline that can track hundreds of restaurants across multiple cities.