Scraping Skyscanner Flight Prices with Python (2026)
Skyscanner is one of the most useful flight price aggregators on the web, and unlike some of its competitors it exposes a fairly consistent internal API that the frontend depends on. That API is undocumented and unofficial, but it has been stable enough that you can build reliable data collection around it. This guide covers the practical approach: hitting the indicative search and browse endpoints with httpx, constructing valid request payloads, parsing price calendar responses, and surviving Skyscanner's bot detection layer.
The target audience is developers who want raw flight price data — for dashboards, travel alert tools, price comparison apps, or data pipelines — and who want to do it without paying for a third-party flight data API.
What Data Is Available
Skyscanner's internal endpoints surface more than just one-way price quotes. Through the indicative search and browse routes you can pull:
- Price calendars — cheapest price per day for a given route over a rolling window (useful for "flexible dates" tooling)
- Cheapest month grids — month-by-month pricing for an origin/destination pair
- Browse routes — aggregated pricing for all destinations from a given origin, useful for building "anywhere" searches
- Quote summaries — carrier, stops, min/max price per route for a date range
- Multi-city combinations — pricing for triangle routes and open-jaw itineraries
- Currency and locale variants — same route priced in different markets often shows meaningfully different rates
None of this requires authenticated sessions. The endpoints are called by Skyscanner's own frontend, so they accept the same session tokens and headers that any real browser request would carry. That said, Skyscanner's bot protection layers are non-trivial and require specific handling — covered below.
How Skyscanner's Internal API Works
The primary endpoint for price calendar and cheapest-month data is:
POST https://www.skyscanner.net/api/v3/flights/indicative/search
This endpoint powers the flexible date search shown on Skyscanner's UI. The request body is a JSON payload that specifies the query type, locale, market, currency, and route details. The response returns a grid of prices keyed by date or month.
There are also browse endpoints that return aggregated quote summaries:
GET https://www.skyscanner.net/api/v3.0/flights/browse/browsequotes/v1.0/{market}/{currency}/{locale}/{origin}/{destination}/{outbound_date}
GET https://www.skyscanner.net/api/v3.0/flights/browse/browseroutes/v1.0/{market}/{currency}/{locale}/{origin}/{destination}/{outbound_date}
GET https://www.skyscanner.net/api/v3.0/flights/browse/browsedates/v1.0/{market}/{currency}/{locale}/{origin}/{destination}/{outbound_date}
The browsequotes endpoint returns the cheapest quotes found for a route; browseroutes returns aggregated results grouped by carrier route; browsedates returns a grid of prices by date — similar to the indicative search but without the flexible query syntax.
For IATA codes, Skyscanner uses standard airport codes (LHR, JFK, CDG) but also supports city codes and its own internal place codes. You can look up place codes via GET https://www.skyscanner.net/api/v3/place/search.
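Since the three browse endpoints share the same path structure, a small helper keeps the URL assembly in one place. A sketch that mirrors the URL templates listed above:

```python
def build_browse_url(
    kind: str,           # "browsequotes", "browseroutes", or "browsedates"
    market: str,
    currency: str,
    locale: str,
    origin: str,
    destination: str,
    outbound_date: str,  # "yyyy-mm-dd", "yyyy-mm", or "anytime"
) -> str:
    """Assemble a browse endpoint URL from its path segments."""
    return (
        f"https://www.skyscanner.net/api/v3.0/flights/browse/{kind}/v1.0/"
        f"{market}/{currency}/{locale}/{origin}/{destination}/{outbound_date}"
    )

# Example: cheapest quotes for LHR -> JFK, any date
url = build_browse_url("browsequotes", "UK", "GBP", "en-GB", "LHR", "JFK", "anytime")
```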
Request Payload Structure
The indicative search endpoint expects a specific payload shape. The query object contains a nested `dateTimeGroupingType` field (either `"DATE_TIME_GROUPING_TYPE_BY_MONTH"` or `"DATE_TIME_GROUPING_TYPE_BY_DATE"`), locale/market/currency fields, and the route specifics.
import httpx
import json
INDICATIVE_URL = "https://www.skyscanner.net/api/v3/flights/indicative/search"
def build_monthly_payload(
origin: str,
destination: str,
market: str = "UK",
locale: str = "en-GB",
currency: str = "GBP",
) -> dict:
"""Build payload for month-by-month pricing grid."""
return {
"query": {
"currency": currency,
"locale": locale,
"market": market,
"dateTimeGroupingType": "DATE_TIME_GROUPING_TYPE_BY_MONTH",
"queryLegs": [
{
"originPlace": {
"queryPlace": {
"iata": origin
}
},
"destinationPlace": {
"queryPlace": {
"iata": destination
}
},
"anytime": True,
}
],
}
}
def build_date_range_payload(
origin: str,
destination: str,
year: int,
month: int,
market: str = "UK",
locale: str = "en-GB",
currency: str = "GBP",
) -> dict:
"""Build payload for day-level calendar pricing within a specific month."""
import calendar
last_day = calendar.monthrange(year, month)[1]
return {
"query": {
"currency": currency,
"locale": locale,
"market": market,
"dateTimeGroupingType": "DATE_TIME_GROUPING_TYPE_BY_DATE",
"queryLegs": [
{
"originPlace": {
"queryPlace": {"iata": origin}
},
"destinationPlace": {
"queryPlace": {"iata": destination}
},
"dateRange": {
"startDate": {"year": year, "month": month, "day": 1},
"endDate": {"year": year, "month": month, "day": last_day},
},
}
],
}
}
`market` controls which regional pricing Skyscanner uses — this affects currency availability and carrier results. `locale` drives the language of carrier names and place labels in the response. Always set these three fields to match one another. Running the same route query with different markets (UK, US, DE) often reveals price disparities worth tracking.
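The market-comparison idea can be sketched by generating one payload per market triple. This is self-contained but uses the same payload shape as `build_monthly_payload` above:

```python
# Market / locale / currency triples worth comparing for the same route
MARKETS = [
    ("UK", "en-GB", "GBP"),
    ("US", "en-US", "USD"),
    ("DE", "de-DE", "EUR"),
]

def market_variant_payloads(origin: str, destination: str) -> list[dict]:
    """One monthly-grid payload per market, identical except for the locale fields."""
    payloads = []
    for market, locale, currency in MARKETS:
        payloads.append({
            "query": {
                "currency": currency,
                "locale": locale,
                "market": market,
                "dateTimeGroupingType": "DATE_TIME_GROUPING_TYPE_BY_MONTH",
                "queryLegs": [{
                    "originPlace": {"queryPlace": {"iata": origin}},
                    "destinationPlace": {"queryPlace": {"iata": destination}},
                    "anytime": True,
                }],
            }
        })
    return payloads
```

Fire each payload through the same fetch path and compare the resulting grids; normalising the currencies with a daily FX rate makes the disparities directly comparable.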
Anti-Bot Measures: Understanding Akamai Bot Manager
Skyscanner runs on Akamai Bot Manager — one of the more aggressive enterprise bot detection systems in production today. Understanding what it does is necessary before writing any code that will run at volume.
TLS Fingerprinting
Python's httpx and requests present TLS handshakes that are identifiable as non-browser clients. Akamai's edge nodes fingerprint the TLS ClientHello: cipher suite ordering, extension presence, extension ordering, elliptic curves, and compression methods. A standard Python HTTP client's TLS fingerprint is distinct from Chrome's or Firefox's, and Akamai blocks based on this before any application-level scoring runs.
The mitigation is either to use a proxy that terminates and re-initiates the TLS connection with a browser-matching fingerprint (which is what residential proxies typically do at their egress point), or to use a library like curl_cffi that impersonates specific browser TLS fingerprints.
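A minimal sketch of the curl_cffi route (the `impersonate="chrome"` target is a real curl_cffi feature, but which fingerprints are available depends on the installed version; the function name here is my own):

```python
def fetch_with_browser_tls(url: str, headers: dict) -> str:
    """Fetch a URL presenting a Chrome-like TLS ClientHello via curl_cffi.

    Requires `pip install curl_cffi`; imported lazily so the rest of the
    pipeline still works without it installed.
    """
    from curl_cffi import requests as curl_requests  # lazy import

    # impersonate="chrome" selects the latest Chrome fingerprint curl_cffi ships
    resp = curl_requests.get(url, headers=headers, impersonate="chrome")
    resp.raise_for_status()
    return resp.text
```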
Akamai Sensor Data
Akamai's JavaScript agent injects a sensor data collection script that monitors browser behavior: mouse movement paths, touch events, scroll patterns, keystroke cadence, WebGL renderer strings, screen metrics, font availability, and dozens of other signals. This encrypted payload is sent back to Akamai's scoring service, which rates the session's legitimacy. A session token from a "real" browser will have a behavioral profile; a session token from a curl-based scraper won't — and Akamai will correlate the two.
Session-Based Tokens
Skyscanner requires valid headers including x-skyscanner-channelid and often a session cookie established during an initial page load. These tokens are tied to the session's behavioral profile. A token harvested from a real browser session will work for a period, but using that token from a different IP or with anomalous timing gets flagged. Sticky sessions (same IP across the initial page load and subsequent API calls) are necessary to maintain token validity.
Datacenter IPs
Skyscanner blocks datacenter IP ranges aggressively. AWS, GCP, Azure, DigitalOcean, Hetzner, and similar ranges are effectively banned. You will receive 403 responses or silent redirects to CAPTCHAs. This is the most immediate practical problem for any new scraping project targeting Skyscanner.
The solution for all of the above is ThorData's residential proxy network. Residential IPs originate from real ISP customers and pass Akamai's IP reputation check that kills datacenter requests outright. ThorData supports sticky sessions — you can pin a session to a specific exit IP for as long as needed, which matters for Skyscanner because the token harvest and the API call need to originate from the same IP. Their US, UK, and EU residential pools are all well-represented, covering Skyscanner's major market segments.
Making the Request
import httpx
import json
import time
import random
PROXY = "http://USER:[email protected]:9000"
HEADERS = {
"accept": "*/*",
"accept-encoding": "gzip, deflate, br",
"accept-language": "en-GB,en;q=0.9",
"content-type": "application/json",
"origin": "https://www.skyscanner.net",
"referer": "https://www.skyscanner.net/",
"user-agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/125.0.0.0 Safari/537.36"
),
"x-skyscanner-channelid": "website",
"x-skyscanner-devicetype": "DESKTOP",
}
def fetch_indicative(
origin: str,
destination: str,
grouping: str = "monthly",
    year: int | None = None,
    month: int | None = None,
) -> dict:
"""Fetch indicative pricing from Skyscanner."""
if grouping == "monthly":
payload = build_monthly_payload(origin, destination)
else:
payload = build_date_range_payload(origin, destination, year, month)
    with httpx.Client(
        proxy=PROXY,  # `proxies=` was removed in httpx 0.28; use `proxy=`
        headers=HEADERS,
        timeout=30.0,
        follow_redirects=True,
        http2=True,  # requires `pip install httpx[http2]`
    ) as client:
# Brief pause mimicking human think time
time.sleep(random.uniform(1.5, 3.5))
resp = client.post(
INDICATIVE_URL,
content=json.dumps(payload),
)
resp.raise_for_status()
return resp.json()
Key implementation notes:
- `content=json.dumps(payload)` is used instead of `json=payload` — this ensures the serialization is fully under your control and the `content-type` header is set explicitly. Using `json=payload` can result in subtle encoding differences that trigger validation errors on some Skyscanner endpoints.
- `http2=True` enables HTTP/2, which better matches what browsers send and helps pass Akamai's protocol-level fingerprinting.
- The `x-skyscanner-channelid` and `x-skyscanner-devicetype` headers are required; without them the endpoint returns a 400.
- The `accept-encoding` header should explicitly include `br` (Brotli) to match Chrome's default.
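The serialization-control point is easy to see with the standard library alone — default `json.dumps` and compact separators produce different bytes for the same dict, which is exactly the kind of difference explicit serialization pins down:

```python
import json

payload = {"query": {"market": "UK", "currency": "GBP"}}

default_body = json.dumps(payload)                         # spaces after ':' and ','
compact_body = json.dumps(payload, separators=(",", ":"))  # no whitespace at all

# Same data, different bytes on the wire
assert default_body != compact_body
assert json.loads(default_body) == json.loads(compact_body)
```

Whichever form you pick, pick it explicitly and keep it constant across requests.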
Parsing Price Calendar Responses
The response structure from the indicative endpoint nests pricing data under content.results.quotes and date/month grid data under content.results.grid. The quotes object is a flat map keyed by quote ID; the grid references those quote IDs per cell.
def parse_monthly_prices(response: dict) -> list[dict]:
"""Extract monthly price data from an indicative search response."""
results = response.get("content", {}).get("results", {})
quotes = results.get("quotes", {})
grid = results.get("grid", {})
parsed = []
for row in grid.get("grid", []):
for cell in row:
month_year = cell.get("monthYearDate", {})
month = month_year.get("month")
year = month_year.get("year")
quote_ids = cell.get("quoteIds", [])
if not quote_ids:
continue
prices = []
currency_unit = ""
for qid in quote_ids:
quote = quotes.get(qid, {})
price_data = quote.get("minPrice", {})
amount = price_data.get("amount")
if amount is not None:
prices.append(float(amount))
currency_unit = price_data.get("unit", currency_unit)
if prices:
parsed.append({
"month": month,
"year": year,
"min_price": min(prices),
"avg_price": round(sum(prices) / len(prices), 2),
"quote_count": len(prices),
"currency": currency_unit,
})
return sorted(parsed, key=lambda x: (x["year"], x["month"]))
def parse_daily_prices(response: dict) -> list[dict]:
"""Extract day-level pricing from an indicative search response."""
results = response.get("content", {}).get("results", {})
quotes = results.get("quotes", {})
grid = results.get("grid", {})
parsed = []
for row in grid.get("grid", []):
for cell in row:
date_data = cell.get("date", {})
year = date_data.get("year")
month = date_data.get("month")
day = date_data.get("day")
if not all([year, month, day]):
continue
quote_ids = cell.get("quoteIds", [])
prices = []
currency_unit = ""
for qid in quote_ids:
quote = quotes.get(qid, {})
price_data = quote.get("minPrice", {})
amount = price_data.get("amount")
if amount is not None:
prices.append(float(amount))
currency_unit = price_data.get("unit", currency_unit)
if prices:
parsed.append({
"date": f"{year}-{month:02d}-{day:02d}",
"min_price": min(prices),
"currency": currency_unit,
})
return sorted(parsed, key=lambda x: x["date"])
Browse Endpoints for Route Discovery
Beyond the indicative search, the browse endpoints are useful for discovering which routes are cheapest from a given origin. The browseroutes endpoint gives you a flat list of all available destinations with their cheapest prices:
def fetch_browse_routes(
origin: str = "UK",
currency: str = "GBP",
locale: str = "en-GB",
departure_date: str = "anytime",
) -> dict:
"""Fetch browse routes — cheapest prices to all destinations from an origin."""
market = "UK"
url = (
f"https://www.skyscanner.net/api/v3.0/flights/browse/browseroutes/v1.0/"
f"{market}/{currency}/{locale}/{origin}/anywhere/{departure_date}"
)
    with httpx.Client(
        proxy=PROXY,  # `proxies=` was removed in httpx 0.28; use `proxy=`
        headers=HEADERS,
        timeout=30.0,
    ) as client:
time.sleep(random.uniform(1.0, 2.5))
resp = client.get(url)
resp.raise_for_status()
return resp.json()
def parse_browse_routes(response: dict) -> list[dict]:
"""Parse browse routes response into a list of destination prices."""
routes = response.get("Routes", [])
quotes = {q["QuoteId"]: q for q in response.get("Quotes", [])}
places = {p["PlaceId"]: p for p in response.get("Places", [])}
carriers = {c["CarrierId"]: c for c in response.get("Carriers", [])}
results = []
for route in routes:
quote_ids = route.get("QuoteIds", [])
if not quote_ids:
continue
quote = quotes.get(quote_ids[0], {})
dest_id = route.get("DestinationId")
dest = places.get(dest_id, {})
outbound = quote.get("OutboundLeg", {})
carrier_ids = outbound.get("CarrierIds", [])
carrier_names = [carriers.get(cid, {}).get("Name", "") for cid in carrier_ids]
results.append({
"destination": dest.get("Name", ""),
"destination_code": dest.get("IataCode", ""),
"country": dest.get("CountryName", ""),
"min_price": quote.get("MinPrice"),
"direct": quote.get("Direct", False),
"carriers": carrier_names,
})
return sorted(results, key=lambda x: (x["min_price"] or 9999))
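The parsed browse results drop straight into simple filters — for example, direct-only destinations under a price cap. Shown here on hand-written sample rows in the same shape as `parse_browse_routes` output:

```python
def direct_under(results: list[dict], max_price: float) -> list[dict]:
    """Keep direct-flight destinations at or below max_price, preserving order."""
    return [
        r for r in results
        if r["direct"] and r["min_price"] is not None and r["min_price"] <= max_price
    ]

# Sample rows in the parse_browse_routes output shape
sample = [
    {"destination": "Barcelona", "min_price": 34, "direct": True, "carriers": ["Vueling"]},
    {"destination": "Bangkok", "min_price": 410, "direct": False, "carriers": ["Qatar Airways"]},
    {"destination": "Amsterdam", "min_price": 52, "direct": True, "carriers": ["KLM"]},
]
cheap_direct = direct_under(sample, 60)  # Barcelona and Amsterdam
```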
Finding the Cheapest Month
Once you have the parsed monthly prices, identifying the cheapest travel window is straightforward:
def cheapest_month(prices: list[dict]) -> dict | None:
if not prices:
return None
return min(prices, key=lambda x: x["min_price"])
def cheapest_n_months(prices: list[dict], n: int = 3) -> list[dict]:
return sorted(prices, key=lambda x: x["min_price"])[:n]
def run_monthly_analysis(origin: str, destination: str):
print(f"\nFetching monthly prices: {origin} -> {destination}")
raw = fetch_indicative(origin, destination, grouping="monthly")
monthly = parse_monthly_prices(raw)
if not monthly:
print("No pricing data returned.")
return
print(f"\n{'Month':<12} {'Min Price':>12} {'Quotes':>8}")
print("-" * 35)
for entry in monthly:
print(
f"{entry['year']}-{entry['month']:02d} "
f"{entry['currency']} {entry['min_price']:>8.2f} "
f"{entry['quote_count']:>6}"
)
cheapest = cheapest_month(monthly)
top3 = cheapest_n_months(monthly, 3)
print(f"\nCheapest month: {cheapest['year']}-{cheapest['month']:02d} "
f"at {cheapest['currency']} {cheapest['min_price']:.2f}")
print("\nTop 3 cheapest months:")
for i, m in enumerate(top3, 1):
print(f" {i}. {m['year']}-{m['month']:02d}: {m['currency']} {m['min_price']:.2f}")
if __name__ == "__main__":
run_monthly_analysis("LHR", "JFK")
Multi-Route Batch Fetching
For price monitoring across many routes, run batches with per-route delays:
import time
import random
ROUTES = [
("LHR", "JFK"), ("LHR", "LAX"), ("LHR", "BKK"),
("LGW", "BCN"), ("MAN", "DXB"), ("EDI", "AMS"),
]
def batch_fetch_routes(routes: list[tuple], delay_range=(3.0, 7.0)) -> dict:
"""Fetch monthly prices for multiple routes with polite delays."""
results = {}
for origin, destination in routes:
key = f"{origin}-{destination}"
try:
raw = fetch_indicative(origin, destination)
monthly = parse_monthly_prices(raw)
results[key] = monthly
print(f" {key}: {len(monthly)} months of data")
except httpx.HTTPStatusError as e:
print(f" {key}: HTTP {e.response.status_code}")
results[key] = []
except Exception as e:
print(f" {key}: Error — {e}")
results[key] = []
# Randomized delay between routes to avoid pattern detection
time.sleep(random.uniform(*delay_range))
return results
Error Handling and Production Patterns
Skyscanner's API returns a few specific error patterns you need to handle:
import httpx
import time
import logging
logger = logging.getLogger(__name__)
def fetch_with_retry(
origin: str,
destination: str,
max_attempts: int = 5,
base_delay: float = 2.0,
) -> dict | None:
"""Fetch indicative prices with exponential backoff retry."""
for attempt in range(1, max_attempts + 1):
try:
resp_data = fetch_indicative(origin, destination)
# Skyscanner sometimes returns 200 with an error object
if "error" in resp_data or resp_data.get("status") == "error":
raise ValueError(f"API error in response: {resp_data.get('error')}")
return resp_data
except httpx.HTTPStatusError as e:
status = e.response.status_code
if status == 400:
# Bad request — likely a payload issue, no point retrying
logger.error(f"400 Bad Request for {origin}-{destination}: {e.response.text[:200]}")
return None
elif status == 403:
# Bot detection triggered — back off longer
wait = base_delay * (3 ** attempt)
logger.warning(f"403 Forbidden attempt {attempt}/{max_attempts}. Waiting {wait:.0f}s")
time.sleep(wait)
elif status == 429:
# Rate limited — respect Retry-After header
retry_after = int(e.response.headers.get("Retry-After", base_delay * attempt))
logger.warning(f"429 Rate limited. Waiting {retry_after}s")
time.sleep(retry_after)
elif status >= 500:
# Server error — retry with backoff
wait = base_delay * (2 ** attempt)
logger.warning(f"5xx server error attempt {attempt}/{max_attempts}. Waiting {wait:.0f}s")
time.sleep(wait)
else:
logger.error(f"Unhandled HTTP {status}")
return None
except httpx.TimeoutException:
wait = base_delay * attempt
logger.warning(f"Timeout attempt {attempt}/{max_attempts}. Waiting {wait:.0f}s")
time.sleep(wait)
except Exception as e:
logger.error(f"Unexpected error: {e}")
if attempt == max_attempts:
return None
time.sleep(base_delay * attempt)
logger.error(f"All {max_attempts} attempts failed for {origin}-{destination}")
return None
Storing the Output
For ongoing price tracking, SQLite handles the load without any infrastructure overhead:
import sqlite3
from datetime import datetime
def init_db(path: str = "flights.db") -> sqlite3.Connection:
conn = sqlite3.connect(path)
conn.execute("""
CREATE TABLE IF NOT EXISTS monthly_prices (
origin TEXT NOT NULL,
destination TEXT NOT NULL,
year INTEGER NOT NULL,
month INTEGER NOT NULL,
min_price REAL,
avg_price REAL,
quote_count INTEGER,
currency TEXT,
fetched_at TEXT DEFAULT (datetime('now')),
PRIMARY KEY (origin, destination, year, month, fetched_at)
)
""")
conn.execute("""
CREATE TABLE IF NOT EXISTS daily_prices (
origin TEXT NOT NULL,
destination TEXT NOT NULL,
travel_date TEXT NOT NULL,
min_price REAL,
currency TEXT,
fetched_at TEXT DEFAULT (datetime('now')),
PRIMARY KEY (origin, destination, travel_date, fetched_at)
)
""")
conn.commit()
return conn
def save_monthly_prices(
conn: sqlite3.Connection,
origin: str,
destination: str,
prices: list[dict],
):
conn.executemany(
"""INSERT OR REPLACE INTO monthly_prices
(origin, destination, year, month, min_price, avg_price, quote_count, currency)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
[
(
origin, destination,
p["year"], p["month"],
p["min_price"], p.get("avg_price"),
p.get("quote_count"), p["currency"],
)
for p in prices
],
)
conn.commit()
def get_price_history(
conn: sqlite3.Connection,
origin: str,
destination: str,
year: int,
month: int,
) -> list[dict]:
"""Get historical price readings for a specific route/month."""
rows = conn.execute(
"""SELECT min_price, avg_price, currency, fetched_at
FROM monthly_prices
WHERE origin=? AND destination=? AND year=? AND month=?
ORDER BY fetched_at""",
(origin, destination, year, month),
).fetchall()
return [
{"min_price": r[0], "avg_price": r[1], "currency": r[2], "fetched_at": r[3]}
for r in rows
]
def get_biggest_price_drops_in_month(
    conn: sqlite3.Connection,
    year: int,
    month: int,
) -> list[dict]:
    """Find which routes had the biggest price drops in a given month."""
rows = conn.execute(
"""SELECT origin, destination, MIN(min_price) as current_min,
AVG(min_price) as avg_min, currency
FROM monthly_prices
WHERE year=? AND month=?
GROUP BY origin, destination
ORDER BY (AVG(min_price) - MIN(min_price)) DESC
LIMIT 20""",
(year, month),
).fetchall()
return [
{
"route": f"{r[0]}-{r[1]}",
"current_min": r[2],
"avg_min": round(r[3], 2),
"drop": round(r[3] - r[2], 2),
"currency": r[4],
}
for r in rows
]
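A companion table for raw responses supports the re-parse-without-re-crawl pattern: store the response body verbatim next to the parsed rows, so a schema shift on Skyscanner's side only costs a re-parse. A sketch alongside `init_db`'s schema (the table name is my own):

```python
import json
import sqlite3

def init_raw_table(conn: sqlite3.Connection) -> None:
    """Add a table holding raw response JSON for later re-parsing."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS raw_responses (
            origin TEXT NOT NULL,
            destination TEXT NOT NULL,
            response_json TEXT NOT NULL,
            fetched_at TEXT DEFAULT (datetime('now'))
        )
    """)
    conn.commit()

def save_raw_response(conn: sqlite3.Connection, origin: str, destination: str, response: dict) -> None:
    """Store one raw API response verbatim."""
    conn.execute(
        "INSERT INTO raw_responses (origin, destination, response_json) VALUES (?, ?, ?)",
        (origin, destination, json.dumps(response)),
    )
    conn.commit()

def load_raw_responses(conn: sqlite3.Connection, origin: str, destination: str) -> list[dict]:
    """Reload stored raw responses for re-parsing against an updated parser."""
    rows = conn.execute(
        "SELECT response_json FROM raw_responses WHERE origin=? AND destination=?",
        (origin, destination),
    ).fetchall()
    return [json.loads(r[0]) for r in rows]
```

Call `save_raw_response` right after each successful fetch, before parsing; when the parser changes, iterate `load_raw_responses` instead of hitting the API again.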
Complete Price Alert Pipeline
Putting it all together into a working alert system:
import json
from datetime import datetime
def run_price_alerts(
routes: list[tuple],
alert_threshold_pct: float = 15.0,
db_path: str = "flights.db",
):
"""
Monitor routes and alert when prices drop significantly vs historical average.
alert_threshold_pct: trigger alert when price is this % below historical avg
"""
conn = init_db(db_path)
alerts = []
for origin, destination in routes:
prices_data = fetch_with_retry(origin, destination)
if not prices_data:
continue
monthly = parse_monthly_prices(prices_data)
if not monthly:
continue
save_monthly_prices(conn, origin, destination, monthly)
# Check each month for price drops vs history
for price_point in monthly:
history = get_price_history(
conn, origin, destination,
price_point["year"], price_point["month"]
)
if len(history) < 3:
# Not enough historical data yet
continue
historical_prices = [h["min_price"] for h in history[:-1]] # exclude today
hist_avg = sum(historical_prices) / len(historical_prices)
current = price_point["min_price"]
drop_pct = (hist_avg - current) / hist_avg * 100
if drop_pct >= alert_threshold_pct:
alerts.append({
"route": f"{origin} -> {destination}",
"month": f"{price_point['year']}-{price_point['month']:02d}",
"current_price": current,
"historical_avg": round(hist_avg, 2),
"drop_pct": round(drop_pct, 1),
"currency": price_point["currency"],
})
time.sleep(random.uniform(4.0, 8.0))
conn.close()
if alerts:
print(f"\n=== PRICE ALERTS ({len(alerts)} drops >= {alert_threshold_pct}%) ===")
for alert in sorted(alerts, key=lambda x: -x["drop_pct"]):
print(
f" {alert['route']} in {alert['month']}: "
f"{alert['currency']} {alert['current_price']:.2f} "
f"(was avg {alert['currency']} {alert['historical_avg']:.2f}, "
f"-{alert['drop_pct']}%)"
)
else:
print("No significant price drops detected.")
return alerts
# Example: monitor 6 routes, alert on 20%+ drops from historical avg
if __name__ == "__main__":
WATCH_ROUTES = [
("LHR", "JFK"), ("LHR", "BKK"), ("LGW", "BCN"),
("MAN", "DXB"), ("STN", "PMI"), ("BHX", "FCO"),
]
run_price_alerts(WATCH_ROUTES, alert_threshold_pct=20.0)
Legal and Ethical Considerations
Skyscanner's terms of service prohibit automated scraping for commercial purposes. Scraping for personal use, research, or building tools that don't compete directly with Skyscanner sits in a grayer area. The practical reality is that flight price data is generally considered factual information not protectable by copyright, and Skyscanner aggregates it from airline APIs and GDS systems rather than creating it themselves.
If you're building a commercial product that resells Skyscanner's aggregated pricing, you should use their official Affiliate API instead — it provides the same data through a legitimate channel with proper rate limits and commercial terms. The unofficial API approach is more appropriate for:
- Personal price monitoring tools
- Academic or market research
- Developer experimentation and learning
- Building dashboards for your own travel planning
For commercial data products, investigate whether the airlines themselves offer direct feeds (IATA NDC, airline APIs) or whether a legitimate B2B flight data provider fits your needs better.
Regardless of purpose, be a good citizen: rate-limit your requests, use polite delays, avoid peak traffic times, and don't run concurrent scrapers against the same IP.
Key Takeaways
- The `/api/v3/flights/indicative/search` endpoint is the right target for price calendar and cheapest-month data — it powers Skyscanner's own flexible search UI.
- Locale, market, and currency must be consistent; mismatches cause 400 errors or empty results.
- Skyscanner's Akamai protection is aggressive at the IP reputation layer — datacenter IPs get blocked before the request reaches the application. ThorData's residential proxies with sticky session support are the practical fix, since session correlation between the token harvest and API call requires the same exit IP.
- Use `httpx` with explicit `content=json.dumps(payload)` and HTTP/2 enabled, with the required Skyscanner channel headers.
- Parse the `quotes` map first, then dereference quote IDs from the grid cells — the response is a join table, not inline pricing per cell.
- Always store raw response JSON alongside parsed rows so you can re-parse when Skyscanner's response schema shifts without re-crawling.
- Implement exponential backoff for 403 and 429 responses — Skyscanner's bot detection occasionally flags legitimate patterns, and backing off is often enough to recover.
- Track prices over time using the historical comparison pattern: same route, same month, across multiple fetch runs. Price drop alerting only becomes meaningful with at least a week of baseline data.