How to Scrape Depop Listings in 2026 (Fashion Data, Seller Stats, Trends)


Depop has become the default marketplace for vintage and secondhand fashion. If you're building trend analysis tools, doing price research on resale fashion, or tracking seller performance, you need access to listing data at scale.

Depop has no official public API, but its mobile app communicates with a REST API that is straightforward to reverse-engineer. The endpoints return clean JSON and cover everything from search to individual product details to seller stats. In 2026 the API is stable and well-structured; the challenge is staying under the radar when scraping at scale.

This guide covers the full stack: API discovery, working Python code, error handling, pagination across thousands of listings, anti-detection techniques, proxy strategy, and data storage with SQLite. By the end you'll have a production-ready scraper for Depop data.

What Data You Can Extract

Depop's internal API exposes an impressive breadth of data:

Listing data:
  - Title and full description
  - Price and currency
  - Condition (new, used, like new, good, fair)
  - Brand (from brand field and auto-detected)
  - Category and subcategory
  - Size and size system (UK, US, EU, one size)
  - Color
  - Multiple product photos with full-resolution URLs
  - Like count and save count
  - Date listed and date last updated
  - Shipping options and cost per method
  - Hashtags and style tags

Seller data:
  - Username and display name
  - Follower count and following count
  - Total items sold and active listing count
  - Review rating (1-5) and total review count
  - Verification status
  - Last active timestamp
  - Bio and location

Search and discovery:
  - Category-filtered search
  - Price range filtering
  - Sort by relevance, newest, or price ascending/descending
  - Trending searches and browse categories

What you won't get: purchase history, buyer identities, message thread content, or payment data. Everything else is accessible.

Understanding Depop's Architecture

Depop's apps (iOS, Android, web) communicate with two API systems:

  1. webapi.depop.com/api/v2/ — The primary REST API used by the mobile apps. Returns clean JSON, handles search, listings, and profiles. No authentication required for public reads.

  2. api.depop.com/api/v2/ — A secondary API used for some features. Same structure; it occasionally exposes endpoints that webapi.depop.com does not.

The v2 REST API is what we focus on. It's stable — the same endpoints that worked in 2023 still work in 2026 with identical response schemas.

Key Endpoints

Base URL: https://webapi.depop.com/api/v2/

Search:
  GET /search/products/             -- keyword search with filters
  GET /search/suggested/            -- autocomplete suggestions
  GET /search/top_searches/         -- trending search terms

Products:
  GET /products/{id}/               -- individual listing detail
  GET /products/{id}/similar/       -- similar/recommended listings

Sellers:
  GET /shop/{username}/             -- seller profile and stats
  GET /shop/{username}/products/    -- all listings from a seller
  GET /shop/{username}/reviews/     -- seller reviews
  GET /shop/{username}/followers/   -- follower list

Categories:
  GET /categories/                  -- full category tree
  GET /categories/{id}/products/    -- browse by category

Anti-Bot Protections

Depop's defenses are lighter than platforms like DoorDash or Amazon, but they're real and will stop naive scrapers:

Rate limiting: Around 60 requests per minute per IP before HTTP 429 responses. Limits reset after a 60-second window. Progressive backoff: first offense is a soft 429, repeated violations lead to temporary IP bans.
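
A simple client-side pacer keeps you under that ceiling proactively instead of reacting to 429s. This sketch assumes the roughly 60 requests/minute figure above; the class name and the headroom margin are illustrative choices, not part of Depop's API:

```python
import time
import random

class RequestPacer:
    """Enforce a minimum interval between requests to stay under an
    assumed requests-per-minute ceiling (~60/min observed)."""

    def __init__(self, max_per_minute: int = 50):
        # Target slightly below the observed limit for headroom.
        self.min_interval = 60.0 / max_per_minute
        self.last_request = 0.0

    def wait(self) -> None:
        """Sleep just long enough to respect the minimum interval,
        plus a little jitter so timing isn't perfectly uniform."""
        elapsed = time.monotonic() - self.last_request
        gap = self.min_interval - elapsed
        if gap > 0:
            time.sleep(gap + random.uniform(0, 0.3))
        self.last_request = time.monotonic()

pacer = RequestPacer(max_per_minute=50)
# Call pacer.wait() before each API request.
```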

Header validation: Missing or incorrect headers trigger 403 responses. The API checks for a specific set of mobile-like headers. X-Depop-Client header is required. User-Agent must look like the mobile app, not a Python script.

IP reputation: Datacenter IPs (AWS, GCP, DigitalOcean ranges) work initially but get flagged after sustained scraping. The IP reputation check appears to be on a 24-hour cycle — a flagged IP recovers by the next day. Residential IPs from real ISPs are rarely blocked.

Geo-restrictions: Some endpoints return different data based on IP location. Search results are personalized by region — scraping from a US IP gives US-centric results. Use location-specific IPs when you need data from specific Depop markets (UK, US, AU).

Session tracking: Depop tracks request patterns across sessions. Making the same search repeatedly with identical parameters can trigger soft blocks. Vary your request patterns: different sort orders, slight parameter variations.

Setting Up Your Environment

pip install httpx

Only httpx is third-party here; it's faster than requests and supports HTTP/2. Everything else the scraper uses (sqlite3, logging, random, datetime) ships with Python's standard library. If you prefer, tenacity can replace the hand-rolled retry loop and sqlite-utils can thin out the storage code, but neither is required for what follows.

Core Scraping Code

Here's a full, production-ready scraper with proper error handling:

import httpx
import time
import random
import json
import sqlite3
import logging
from datetime import datetime
from typing import Optional

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s"
)
logger = logging.getLogger(__name__)

# --- Configuration ---

PROXY_URL = "http://USER:[email protected]:9000"

# Mobile app headers -- critical for avoiding 403s
HEADERS = {
    "User-Agent": "Depop/3.100.0 (com.depop.depop; iOS 17.5; iPhone14,5)",
    "Accept": "application/json",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
    "X-Depop-Client": "ios",
    "X-Depop-Client-Version": "3.100.0",
    "X-Depop-Client-Platform": "ios",
}

BASE_URL = "https://webapi.depop.com/api/v2"

# --- HTTP Client with retry logic ---

def make_client(use_proxy: bool = True) -> httpx.Client:
    """Create an HTTP client with optional proxy."""
    kwargs = {
        "headers": HEADERS,
        "timeout": httpx.Timeout(20.0, connect=10.0),
        "follow_redirects": True,
    }
    if use_proxy:
        kwargs["proxy"] = PROXY_URL
    return httpx.Client(**kwargs)


def safe_get(client: httpx.Client, url: str, params: Optional[dict] = None,
             max_retries: int = 5) -> Optional[dict]:
    """
    GET request with exponential backoff retry.
    Returns parsed JSON or None on failure.
    """
    for attempt in range(max_retries):
        try:
            resp = client.get(url, params=params)

            if resp.status_code == 200:
                return resp.json()

            elif resp.status_code == 429:
                # Rate limited -- exponential backoff
                wait = (2 ** attempt) + random.uniform(0, 1)
                logger.warning(f"Rate limited (429), waiting {wait:.1f}s (attempt {attempt+1}/{max_retries})")
                time.sleep(wait)
                continue

            elif resp.status_code == 403:
                logger.error("Forbidden (403) -- check headers or proxy")
                return None

            elif resp.status_code == 404:
                logger.debug(f"Not found (404): {url}")
                return None

            else:
                logger.warning(f"HTTP {resp.status_code}: {url}")
                time.sleep(2)

        except httpx.TimeoutException:
            wait = (2 ** attempt) + 1
            logger.warning(f"Timeout, retrying in {wait}s (attempt {attempt+1}/{max_retries})")
            time.sleep(wait)

        except httpx.NetworkError as e:
            logger.error(f"Network error: {e}")
            time.sleep(5)
            continue

    logger.error(f"Failed after {max_retries} attempts: {url}")
    return None

Search Function with Full Pagination

def search_listings(
    client: httpx.Client,
    query: str,
    limit: int = 50,
    offset: int = 0,
    sort: str = "relevance",
    min_price_cents: Optional[int] = None,
    max_price_cents: Optional[int] = None,
    category_id: Optional[int] = None,
    condition: Optional[str] = None,
) -> dict:
    """
    Search Depop listings.

    sort options: relevance, newlyListed, priceAscending, priceDescending
    condition options: new, used, likeNew, good, fair

    Returns dict with 'products' list and pagination 'meta'.
    Prices in cents (multiply USD by 100).
    """
    params = {
        "what": query,
        "limit": min(limit, 50),
        "offset": offset,
        "sort": sort,
    }

    if min_price_cents is not None:
        params["priceMin"] = min_price_cents
    if max_price_cents is not None:
        params["priceMax"] = max_price_cents
    if category_id is not None:
        params["categoryId"] = category_id
    if condition:
        params["condition"] = condition

    data = safe_get(client, f"{BASE_URL}/search/products/", params=params)
    if not data:
        return {"products": [], "meta": {}}

    products = []
    for item in data.get("products", []):
        products.append(parse_listing_summary(item))

    return {
        "products": products,
        "meta": data.get("meta", {}),
    }


def parse_listing_summary(item: dict) -> dict:
    """Parse a listing summary from search results."""
    price_data = item.get("price", {})
    return {
        "id": item.get("id"),
        "slug": item.get("slug"),
        "title": item.get("description", "")[:200],
        "price": price_data.get("amount", 0),
        "currency": price_data.get("currency", "USD"),
        "condition": item.get("condition", {}).get("slug"),
        "brand": (item.get("brand") or {}).get("name"),
        "category_id": (item.get("category") or {}).get("id"),
        "seller_id": (item.get("seller") or {}).get("id"),
        "seller_username": (item.get("seller") or {}).get("username"),
        "likes": item.get("likes", 0),
        "photo_count": len(item.get("pictures", [])),
        "main_photo": item.get("pictures", [{}])[0].get("url") if item.get("pictures") else None,
        "date_updated": item.get("dateUpdated"),
        "status": item.get("status"),
    }


def scrape_all_results(
    client: httpx.Client,
    query: str,
    max_results: int = 500,
    sort: str = "newlyListed",
    delay_range: tuple = (1.0, 2.5),
    **filters,
) -> list:
    """
    Paginate through search results to collect up to max_results listings.
    Handles rate limits with delays between pages.
    """
    all_products = []
    offset = 0
    page = 1

    logger.info(f"Starting search: '{query}', max_results={max_results}, sort={sort}")

    while len(all_products) < max_results:
        logger.info(f"Page {page}: fetching offset={offset}")

        result = search_listings(
            client, query,
            limit=50,
            offset=offset,
            sort=sort,
            **filters
        )

        products = result["products"]
        if not products:
            logger.info(f"No more results at offset {offset}")
            break

        all_products.extend(products)
        logger.info(f"Collected {len(all_products)} total listings")

        # Check if more pages exist
        meta = result["meta"]
        total_count = meta.get("totalCount", 0)
        if offset + 50 >= total_count:
            logger.info(f"Reached end of results (total={total_count})")
            break

        offset += 50
        page += 1

        # Polite delay between pages
        delay = random.uniform(*delay_range)
        time.sleep(delay)

    return all_products[:max_results]

Fetching Full Product Details

The search endpoint returns summaries. For full data (description, all photos, shipping, hashtags), fetch each product individually:

def get_product_details(client: httpx.Client, product_id: int) -> Optional[dict]:
    """Fetch full details for a single listing."""
    data = safe_get(client, f"{BASE_URL}/products/{product_id}/")
    if not data:
        return None

    price_data = data.get("price", {})
    shipping = data.get("nationalShipping", {})

    return {
        "id": data.get("id"),
        "slug": data.get("slug"),
        "title": data.get("description", ""),
        "price": price_data.get("amount", 0),
        "currency": price_data.get("currency", "USD"),
        "original_price": (data.get("originalPrice") or {}).get("amount"),
        "condition": (data.get("condition") or {}).get("slug"),
        "brand": (data.get("brand") or {}).get("name"),
        "brand_id": (data.get("brand") or {}).get("id"),
        "category": (data.get("category") or {}).get("name"),
        "category_id": (data.get("category") or {}).get("id"),
        "size": (data.get("size") or {}).get("name"),
        "size_system": (data.get("size") or {}).get("sizeSystem"),
        "color": data.get("colour"),
        "likes": data.get("likes", 0),
        "seller_id": (data.get("seller") or {}).get("id"),
        "seller_username": (data.get("seller") or {}).get("username"),
        "shipping_national": shipping.get("free", False),
        "shipping_cost": (shipping.get("price") or {}).get("amount"),
        "hashtags": data.get("hashtags", []),
        "photos": [p.get("url") for p in data.get("pictures", [])],
        "photo_count": len(data.get("pictures", [])),
        "date_listed": data.get("dateUpdated"),
        "status": data.get("status"),
        "is_sold": data.get("status") == "sold",
    }


def get_product_details_batch(
    client: httpx.Client,
    product_ids: list,
    delay_range: tuple = (0.8, 1.5),
) -> list:
    """
    Fetch details for multiple products with polite delays.
    Skips failures and continues.
    """
    results = []
    total = len(product_ids)

    for i, pid in enumerate(product_ids):
        logger.debug(f"Fetching product {pid} ({i+1}/{total})")
        detail = get_product_details(client, pid)
        if detail:
            results.append(detail)

        time.sleep(random.uniform(*delay_range))

        # Log progress every 50 products
        if (i + 1) % 50 == 0:
            logger.info(f"Detail progress: {i+1}/{total} ({len(results)} successful)")

    return results

Seller Profile Scraping

def get_seller_profile(client: httpx.Client, username: str) -> Optional[dict]:
    """Fetch seller stats and metadata."""
    data = safe_get(client, f"{BASE_URL}/shop/{username}/")
    if not data:
        return None

    return {
        "id": data.get("id"),
        "username": data.get("username"),
        "display_name": data.get("displayName", ""),
        "bio": data.get("bio", ""),
        "followers": data.get("followers", 0),
        "following": data.get("following", 0),
        "items_sold": data.get("itemsSold", 0),
        "active_listings": data.get("activeListings", 0),
        "review_rating": data.get("reviewRating"),
        "review_count": data.get("reviewCount", 0),
        "verified": data.get("verified", False),
        "verified_seller": data.get("verifiedSeller", False),
        "last_active": data.get("lastSeen"),
        "location": data.get("location"),
        "joined": data.get("created"),
    }


def get_seller_listings(
    client: httpx.Client,
    username: str,
    max_items: int = 200,
    delay_range: tuple = (1.0, 2.0),
) -> list:
    """
    Fetch all active listings from a seller.
    Paginates automatically.
    """
    all_listings = []
    offset = 0

    while len(all_listings) < max_items:
        params = {
            "offset_id": offset,
            "limit": 12,
        }
        data = safe_get(client, f"{BASE_URL}/shop/{username}/products/", params=params)
        if not data:
            break

        products = data.get("products", [])
        if not products:
            break

        for item in products:
            all_listings.append(parse_listing_summary(item))

        meta = data.get("meta", {})
        next_offset = meta.get("lastOffsetId")
        if not next_offset or next_offset == offset:
            break

        offset = next_offset
        time.sleep(random.uniform(*delay_range))

    return all_listings[:max_items]


def get_seller_reviews(client: httpx.Client, username: str, limit: int = 50) -> list:
    """Fetch reviews for a seller."""
    params = {"limit": min(limit, 50), "offset": 0}
    data = safe_get(client, f"{BASE_URL}/shop/{username}/reviews/", params=params)
    if not data:
        return []

    reviews = []
    for r in data.get("reviews", []):
        reviews.append({
            "rating": r.get("rating"),
            "message": r.get("message", ""),
            "reviewer": (r.get("reviewer") or {}).get("username"),
            "created": r.get("created"),
        })
    return reviews

Data Storage with SQLite

def init_database(db_path: str = "depop.db") -> sqlite3.Connection:
    """Initialize SQLite database with appropriate schema."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row

    conn.executescript("""
        CREATE TABLE IF NOT EXISTS listings (
            id INTEGER PRIMARY KEY,
            slug TEXT,
            title TEXT,
            price INTEGER,
            currency TEXT,
            original_price INTEGER,
            condition TEXT,
            brand TEXT,
            brand_id INTEGER,
            category TEXT,
            category_id INTEGER,
            size TEXT,
            size_system TEXT,
            color TEXT,
            likes INTEGER DEFAULT 0,
            seller_id INTEGER,
            seller_username TEXT,
            shipping_national BOOLEAN,
            shipping_cost INTEGER,
            hashtags TEXT,
            photos TEXT,
            photo_count INTEGER DEFAULT 0,
            date_listed TEXT,
            status TEXT,
            is_sold BOOLEAN DEFAULT 0,
            search_query TEXT,
            scraped_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );

        CREATE INDEX IF NOT EXISTS idx_brand ON listings(brand);
        CREATE INDEX IF NOT EXISTS idx_category ON listings(category_id);
        CREATE INDEX IF NOT EXISTS idx_seller ON listings(seller_username);
        CREATE INDEX IF NOT EXISTS idx_price ON listings(price);
        CREATE INDEX IF NOT EXISTS idx_query ON listings(search_query);
        CREATE INDEX IF NOT EXISTS idx_date ON listings(date_listed);

        CREATE TABLE IF NOT EXISTS sellers (
            id INTEGER PRIMARY KEY,
            username TEXT UNIQUE,
            display_name TEXT,
            bio TEXT,
            followers INTEGER DEFAULT 0,
            following INTEGER DEFAULT 0,
            items_sold INTEGER DEFAULT 0,
            active_listings INTEGER DEFAULT 0,
            review_rating REAL,
            review_count INTEGER DEFAULT 0,
            verified BOOLEAN DEFAULT 0,
            last_active TEXT,
            location TEXT,
            joined TEXT,
            scraped_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );

        CREATE TABLE IF NOT EXISTS trend_snapshots (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            query TEXT,
            snapshot_date TEXT,
            listing_count INTEGER,
            avg_price REAL,
            min_price REAL,
            max_price REAL,
            total_likes INTEGER,
            median_price REAL,
            scraped_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
    """)

    conn.commit()
    return conn


def save_listings(conn: sqlite3.Connection, listings: list, search_query: str = ""):
    """Save listing records to database."""
    saved = 0
    for item in listings:
        try:
            conn.execute("""
                INSERT OR REPLACE INTO listings
                (id, slug, title, price, currency, original_price,
                 condition, brand, brand_id, category, category_id,
                 size, size_system, color, likes, seller_id,
                 seller_username, shipping_national, shipping_cost,
                 hashtags, photos, photo_count, date_listed, status,
                 is_sold, search_query)
                VALUES
                (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
            """, (
                item.get("id"), item.get("slug"), item.get("title"),
                item.get("price"), item.get("currency"), item.get("original_price"),
                item.get("condition"), item.get("brand"), item.get("brand_id"),
                item.get("category"), item.get("category_id"), item.get("size"),
                item.get("size_system"), item.get("color"), item.get("likes"),
                item.get("seller_id"), item.get("seller_username"),
                item.get("shipping_national"), item.get("shipping_cost"),
                json.dumps(item.get("hashtags", [])),
                json.dumps(item.get("photos", [])),
                item.get("photo_count"), item.get("date_listed"),
                item.get("status"), item.get("is_sold"), search_query
            ))
            saved += 1
        except sqlite3.Error as e:
            logger.error(f"DB error saving listing {item.get('id')}: {e}")

    conn.commit()
    logger.info(f"Saved {saved}/{len(listings)} listings to database")
    return saved

Trend Tracking Over Time

Depop trends move fast. Track them by running regular snapshots:

def take_trend_snapshot(
    client: httpx.Client,
    conn: sqlite3.Connection,
    query: str,
    sample_size: int = 100,
) -> dict:
    """
    Capture current market state for a query.
    Run daily to build trend data over time.
    """
    results = scrape_all_results(client, query, max_results=sample_size, sort="newlyListed")

    if not results:
        return {}

    prices = [r["price"] for r in results if r.get("price", 0) > 0]
    prices_sorted = sorted(prices)

    # Median price
    mid = len(prices_sorted) // 2
    if len(prices_sorted) % 2 == 0 and len(prices_sorted) > 0:
        median_price = (prices_sorted[mid - 1] + prices_sorted[mid]) / 2
    elif prices_sorted:
        median_price = prices_sorted[mid]
    else:
        median_price = 0

    snapshot = {
        "query": query,
        "snapshot_date": datetime.now().strftime("%Y-%m-%d"),
        "listing_count": len(results),
        "avg_price": sum(prices) / len(prices) if prices else 0,
        "min_price": min(prices) if prices else 0,
        "max_price": max(prices) if prices else 0,
        "total_likes": sum(r.get("likes", 0) for r in results),
        "median_price": median_price,
    }

    conn.execute("""
        INSERT INTO trend_snapshots
        (query, snapshot_date, listing_count, avg_price, min_price,
         max_price, total_likes, median_price)
        VALUES
        (:query, :snapshot_date, :listing_count, :avg_price, :min_price,
         :max_price, :total_likes, :median_price)
    """, snapshot)
    conn.commit()

    return snapshot


def get_trend_history(conn: sqlite3.Connection, query: str, days: int = 30) -> list:
    """Retrieve historical trend data for a query."""
    cursor = conn.execute("""
        SELECT snapshot_date, listing_count, avg_price, median_price,
               min_price, max_price, total_likes
        FROM trend_snapshots
        WHERE query = ?
        ORDER BY snapshot_date DESC
        LIMIT ?
    """, (query, days))
    return [dict(zip([d[0] for d in cursor.description], row)) for row in cursor.fetchall()]

Proxy Strategy with ThorData

For casual scraping (a few hundred listings), datacenter proxies or no proxy works fine. For production-scale trend analysis across thousands of listings per day, you need residential proxies.

Depop's rate limiting is IP-reputation based. Datacenter IPs get flagged because they're shared across thousands of scrapers. Residential IPs from real ISPs look like normal users and rarely get blocked.

ThorData provides residential proxies with geographic targeting — critical for Depop because listings are geo-personalized. US IPs see US-centric fashion, UK IPs see UK vintage stock. If you're building a comprehensive dataset, scrape from multiple regions.

THORDATA_USER = "your_username"
THORDATA_PASS = "your_password"
THORDATA_HOST = "proxy.thordata.com"
THORDATA_PORT = 9000

def get_proxy_url(country: str = None, session_id: str = None) -> str:
    """
    Generate ThorData proxy URL with optional country targeting.
    country: ISO 2-letter code (US, GB, AU, DE, etc.)
    session_id: Use same session_id to keep the same IP for a session
    """
    user = THORDATA_USER
    if country:
        user += f"-country-{country}"
    if session_id:
        user += f"-session-{session_id}"

    return f"http://{user}:{THORDATA_PASS}@{THORDATA_HOST}:{THORDATA_PORT}"


def scrape_multi_region(
    query: str,
    regions: list = None,
    max_per_region: int = 200,
) -> dict:
    """
    Scrape the same query from multiple regional IPs.
    Useful for building a comprehensive picture across markets.
    """
    if regions is None:
        regions = ["US", "GB", "AU"]

    results = {}
    for region in regions:
        proxy_url = get_proxy_url(country=region)
        with httpx.Client(
            headers=HEADERS,
            proxy=proxy_url,
            timeout=httpx.Timeout(20.0, connect=10.0),
        ) as client:
            region_results = scrape_all_results(client, query, max_results=max_per_region)
            results[region] = region_results
            logger.info(f"Region {region}: {len(region_results)} listings")

    return results

Anti-Detection Best Practices

Beyond proxies, vary your scraping behavior to avoid pattern detection:

# Vary User-Agent across iOS versions and Depop app versions
USER_AGENTS = [
    "Depop/3.100.0 (com.depop.depop; iOS 17.5; iPhone14,5)",
    "Depop/3.98.0 (com.depop.depop; iOS 17.4; iPhone15,2)",
    "Depop/3.96.1 (com.depop.depop; iOS 17.3; iPhone14,8)",
    "Depop/3.95.0 (com.depop.depop; iOS 17.2; iPhone14,5)",
]

def random_headers() -> dict:
    """Generate slightly varied headers to avoid fingerprinting."""
    ua = random.choice(USER_AGENTS)
    version = ua.split("/")[1].split(" ")[0]
    return {
        **HEADERS,
        "User-Agent": ua,
        "X-Depop-Client-Version": version,
    }

def adaptive_delay(base: float = 1.0, jitter: float = 0.5) -> float:
    """Calculate a randomized delay."""
    return base + random.uniform(0, jitter)

# Rotate sort orders to vary search patterns
SORT_ORDERS = ["relevance", "newlyListed", "priceAscending", "priceDescending"]

def scrape_with_variation(query: str, client: httpx.Client, max_results: int = 100) -> list:
    """Scrape with varied patterns to reduce bot detection signals."""
    # Randomly pick one of the two non-price sort orders
    sort = random.choice(SORT_ORDERS[:2])

    # Trim the target count slightly so runs don't always request identical totals
    effective_limit = max_results - random.randint(0, 5)

    return scrape_all_results(
        client, query,
        max_results=effective_limit,
        sort=sort,
        delay_range=(1.2, 3.0),
    )

Complete Usage Example

def main():
    conn = init_database("depop_fashion.db")

    with make_client(use_proxy=True) as client:

        # 1. Check trending search terms
        trending_data = safe_get(client, f"{BASE_URL}/search/top_searches/")
        if trending_data:
            trending = [t.get("name") for t in trending_data.get("topSearches", [])]
            logger.info(f"Trending searches: {trending[:10]}")

        # 2. Build price dataset for a category
        queries = [
            "vintage carhartt",
            "y2k jeans",
            "90s windbreaker",
            "vintage levis 501",
            "coquette dress",
        ]

        for query in queries:
            logger.info(f"\nScraping: {query}")
            listings = scrape_all_results(client, query, max_results=200)
            save_listings(conn, listings, search_query=query)
            snapshot = take_trend_snapshot(client, conn, query, sample_size=100)
            logger.info(
                f"Snapshot: avg ${snapshot.get('avg_price', 0)/100:.2f}, "
                f"{snapshot.get('listing_count', 0)} listings"
            )
            time.sleep(5)

        # 3. Get seller analysis for top likers
        cursor = conn.execute("""
            SELECT seller_username, SUM(likes) as total_likes, COUNT(*) as listing_count
            FROM listings
            GROUP BY seller_username
            ORDER BY total_likes DESC
            LIMIT 20
        """)
        top_sellers = cursor.fetchall()

        for seller_row in top_sellers:
            username = seller_row[0]
            profile = get_seller_profile(client, username)
            if profile:
                logger.info(
                    f"@{username}: {profile['followers']} followers, "
                    f"{profile['items_sold']} sold"
                )
            time.sleep(2)

        # 4. Price analysis report
        print("\n=== Price Analysis ===")
        for query in queries:
            cursor = conn.execute("""
                SELECT COUNT(*), AVG(price), MIN(price), MAX(price)
                FROM listings WHERE search_query = ?
            """, (query,))
            row = cursor.fetchone()
            if row and row[0]:
                print(f"\n{query}:")
                print(f"  Listings: {row[0]}")
                print(f"  Price range: ${(row[2] or 0)/100:.2f} - ${(row[3] or 0)/100:.2f}")
                print(f"  Average price: ${(row[1] or 0)/100:.2f}")

    conn.close()


if __name__ == "__main__":
    main()

What You Can Build

Resale price guides — What is a vintage Carhartt jacket actually worth? Collect 500+ sold listings and you have a real market price, not a guess. Update weekly to track price movements.

Trend spotting — Detect emerging styles before they peak. "Coquette", "dark academia", and "gorpcore" all had detectable signals on Depop months before they went mainstream. Monitor new listing velocity and likes-per-listing ratio as leading indicators.
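
The likes-per-listing signal is simple to compute from rows in the trend_snapshots table. A sketch comparing two snapshots; the sample numbers are invented for illustration:

```python
def likes_per_listing(snapshot: dict) -> float:
    """Engagement ratio: total likes divided by listing count."""
    count = snapshot.get("listing_count", 0)
    return snapshot["total_likes"] / count if count else 0.0

def engagement_trend(older: dict, newer: dict) -> float:
    """Percent change in likes-per-listing between two snapshots.
    A sustained positive value is a possible leading indicator."""
    old_ratio = likes_per_listing(older)
    new_ratio = likes_per_listing(newer)
    if old_ratio == 0:
        return 0.0
    return (new_ratio - old_ratio) / old_ratio * 100

# Illustrative snapshot rows (fields match the trend_snapshots schema)
week1 = {"listing_count": 100, "total_likes": 450}
week2 = {"listing_count": 120, "total_likes": 720}
print(f"{engagement_trend(week1, week2):+.1f}%")  # +33.3%
```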

Seller analytics — Identify top performers and their pricing strategies. High-volume sellers often have pricing patterns that you can reverse-engineer.

Inventory monitoring — Track when specific items get listed or go to "sold". Useful for sneakers, rare vintage pieces, or limited-edition items where timing matters.
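
One way to detect those transitions is to diff consecutive scrapes by listing id. A minimal sketch; the status values mirror the `status`/`is_sold` fields parsed earlier, and the input data is invented for illustration:

```python
def find_status_changes(previous: dict, current: dict) -> dict:
    """Diff two {listing_id: status} maps from consecutive scrapes.
    Returns newly listed ids, newly sold ids, and delisted ids."""
    prev_ids, curr_ids = set(previous), set(current)
    newly_sold = [
        lid for lid in prev_ids & curr_ids
        if previous[lid] != "sold" and current[lid] == "sold"
    ]
    return {
        "new": sorted(curr_ids - prev_ids),
        "sold": sorted(newly_sold),
        "delisted": sorted(prev_ids - curr_ids),
    }

# Illustrative scrape states keyed by listing id
yesterday = {101: "active", 102: "active", 103: "active"}
today = {101: "active", 102: "sold", 104: "active"}
print(find_status_changes(yesterday, today))
# {'new': [104], 'sold': [102], 'delisted': [103]}
```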

Brand resale value analysis — Which brands hold value best on Depop? Compare original retail prices (if listed) against sold prices across categories.
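
Where a listing includes the original_price field captured earlier, value retention per brand reduces to a simple aggregation. A sketch over rows shaped like the listings table; the sample data is invented and prices are in cents, as stored by the scraper:

```python
from collections import defaultdict

def brand_value_retention(rows: list) -> dict:
    """Average asking price / original price per brand.
    Rows mirror the listings schema; skips rows without original_price."""
    ratios = defaultdict(list)
    for row in rows:
        original = row.get("original_price")
        if original:  # skip missing or zero original prices
            ratios[row["brand"]].append(row["price"] / original)
    return {
        brand: round(sum(vals) / len(vals), 2)
        for brand, vals in ratios.items()
    }

# Illustrative rows (prices in cents)
rows = [
    {"brand": "Carhartt", "price": 6000, "original_price": 8000},
    {"brand": "Carhartt", "price": 6500, "original_price": 10000},
    {"brand": "Shein", "price": 500, "original_price": 2000},
    {"brand": "Shein", "price": 300, "original_price": None},
]
print(brand_value_retention(rows))
```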

Cross-platform arbitrage — Compare Depop prices against eBay, Vinted, and Grailed for the same items. Pricing gaps represent buying opportunities.
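
The arbitrage comparison boils down to joining price points by item across platforms and filtering for meaningful gaps. A sketch with invented prices (in cents); matching the same physical item across marketplaces is the hard part and is assumed solved here:

```python
def price_gaps(depop: dict, other: dict, min_gap_cents: int = 1000) -> list:
    """Find items cheaper on Depop than on another platform by at least
    min_gap_cents. Inputs map item key -> price in cents."""
    gaps = []
    for item, depop_price in depop.items():
        other_price = other.get(item)
        if other_price is not None and other_price - depop_price >= min_gap_cents:
            gaps.append((item, depop_price, other_price, other_price - depop_price))
    # Largest gap first
    return sorted(gaps, key=lambda g: -g[3])

# Illustrative prices in cents
depop_prices = {"levis 501 32x32": 3500, "carhartt detroit M": 6000}
grailed_prices = {"levis 501 32x32": 5200, "carhartt detroit M": 6500}
print(price_gaps(depop_prices, grailed_prices))
# [('levis 501 32x32', 3500, 5200, 1700)]
```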

The secondhand fashion market is data-poor compared to traditional retail. Structured, regularly updated access to Depop's listing data is a genuine competitive advantage for anyone operating in this space, whether you're a reseller, a trend researcher, or building tools for either.

Key Takeaways

  1. Use the mobile API headers — The API is designed for the app. Missing X-Depop-Client or wrong User-Agent causes 403s.

  2. Respect the 60 req/min limit — Add 1-2 second delays between requests. Use exponential backoff on 429s.

  3. Paginate with offset — The API caps at 50 results per page; use offset to walk through results.

  4. Use residential proxies for scale — ThorData provides consistent residential IPs with geographic targeting.

  5. Cache seller profiles — They change infrequently. No need to re-fetch daily.

  6. Store everything in SQLite — The trend value compounds over time. A month of daily snapshots is worth far more than a one-time scrape.

  7. Vary your patterns — Rotate User-Agents, vary delays, mix sort orders. Uniform patterns are detectable.