
Scraping Google Trends: Methods, Tools, and Best Practices

Google Trends shows how search interest in specific keywords rises and falls over time, across regions, and within custom date ranges, all without exposing sensitive individual search data. In this guide, we'll explain the kinds of data available from Google Trends, compare scraping techniques, and demonstrate two methods of gathering Google Trends data.

Kipras Kalzanauskas

Oct 27, 2025

10 min read

Why scrape Google Trends data?

While the Google Trends interface is great for exploring a few keywords interactively, it's not designed for large-scale research or automation.

If you want to track hundreds of products, compare regions, or collect daily metrics for analysis, manual downloads quickly become impractical. You can automate the process with scraping and get the same data in structured formats like JSON or CSV.

Here are the main reasons professionals scrape Google Trends data:

  • Keyword research. Google Trends is a powerful tool for SEO specialists and content marketers. It helps identify which search terms are gaining traction, how interest shifts over time, and where audiences are most active. Armed with this data, professionals can improve their marketing strategies to gain more website traffic.
  • Market research. Marketers use Google Trends to understand consumer interests, detect seasonal demand changes, and predict upcoming market shifts. Google Trends also helps track search patterns, giving insights into what customers want and when.
  • Brand monitoring. Businesses can use Google Trends to track their brand's popularity, compare it with competitors, and respond quickly to changes in public interest or measure campaign effectiveness in near real time.
  • Content generation and planning. Writers and publishers can scrape topic trends to identify what audiences care about most. For example, collecting trending searches from multiple regions can help you localize blog topics, YouTube ideas, or ad content to match rising interests.
  • Exploring historical data. Businesses can also explore historical data by adjusting the time frame to view trends over a few years or even specific date ranges. This flexibility allows for a more granular analysis of search volume trends, which can be useful for identifying seasonal patterns.

Understanding Google Trends data

Google Trends is designed to help users understand:

  • The popularity of certain topics
  • Geographical interest in various terms
  • Seasonality of interest

Imagine analyzing the keyword "iPhone" in the United States. The Google Trends chart will display a line graph showing how interest has fluctuated over time.

Here's what the numbers mean:

  • 100 – Peak popularity for the selected parameters
  • 50 – Half as popular as the peak
  • 25 – Quarter as popular
  • 0 – Interest below the threshold, or insufficient data

You can even compare topics across different countries or platforms because the data points are normalized against total search volume and scaled on a 0–100 range.
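To make that scaling concrete, here's a minimal sketch with made-up numbers. Google first normalizes each data point against the total search volume for the chosen time and region, then scales the results so the peak equals 100; the sketch illustrates only that final scaling step:

import math

# Hypothetical normalized values for one keyword over five weeks
raw_values = [120, 300, 600, 450, 150]

# Scale each point relative to the peak in the selected window
peak = max(raw_values)
scaled = [round(value / peak * 100) for value in raw_values]

print(scaled)  # [20, 50, 100, 75, 25]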

Google Trends includes a "Compare" feature where you can input several keywords and compare their relative interest levels on the same chart. For example, comparing "iPhone", "Samsung Galaxy", and "Pixel" will instantly reveal which brand dominates user attention and how that shifts over time.

If you only need to analyze one or two keywords, you can easily export that data manually. However, if you want to monitor dozens of keywords across multiple regions or timeframes, manual downloads can quickly become unmanageable.

Overview of scraping methods

There are a few ways to scrape Google Trends data, each with its own trade-offs in difficulty, flexibility, and reliability. The best method for you depends on your technical experience and the scale of data you want to collect.

The three main approaches include:

  • Using Google's official Trends API
  • Building a DIY scraper
  • Using a web scraper API

Google Trends API

After years of demand, Google finally launched its official Trends API in alpha on July 24, 2025. For the first time, developers can now access normalized, consistently scaled data directly from Google's backend without relying on third-party solutions.

The API supports:

  • Consistent 5-year trend data that can be merged across requests
  • Clean JSON responses with stable scaling for all time ranges
  • Native authentication and query structure

While this is an exciting milestone, it's still in a limited-access alpha stage.

Only a handful of testers can use it at the moment, and general availability hasn't been announced. If you want to try it, you can apply to join the alpha tester program on Google's developer site.

Benefits:

  • Reliable, accurate data directly from Google
  • No need to bypass anti-scraping systems
  • Proper documentation and support

Challenges:

  • Very limited access (invite-only at this stage)
  • No public timeline for general availability
  • May have usage quota or pricing when publicly released
  • Limited to a 5-year historical window

Building a DIY scraping solution

You can also scrape Google Trends data by developing a scraper from scratch. This is the most challenging option, but it's also the cheapest and gives you the most control over your scraper, provided you're technically proficient enough to build and maintain it.

You can use any programming language to build a scraper, but Python is the most popular for web scraping. All you need is a workflow to download the Trends data, clean it, and store it.

Benefits:

  • Full control over the scraping process and data pipeline
  • Flexible customization for specific data needs or formats
  • No dependency on third-party APIs or rate limits

Challenges:

  • Google frequently updates its HTML structure, breaking selectors
  • Risk of IP blocking without proper safeguards
  • Requires ongoing maintenance and debugging

Prerequisites

Let's build a custom scraper. First, ensure that your development environment is properly set up.

1. Install Python. Download and install the latest stable version of Python from the official website. It's recommended to use Python 3.10 or newer for better library support. You can verify that Python is installed by running:

python --version

2. Choose an IDE. You can use Cursor, Visual Studio Code, or PyCharm to follow along. Any text editor or IDE that supports Python will work.

3. Create a virtual environment. A virtual environment allows you to isolate Python packages into separate directories to avoid version conflicts. To create a new virtual environment, execute the following command in your terminal:

python -m venv google_trends

This will generate a folder named "google_trends," which contains the isolated environment. Activate it using the appropriate command for your operating system:

On Windows, run:

google_trends\Scripts\activate

On macOS and Linux, run:

source google_trends/bin/activate

4. Sign up for Decodo. To avoid frequent blocks from Google while scraping, you'll need a reliable proxy service. Decodo offers premium rotating residential and ISP proxies optimized for scraping Google properties like Google Trends.

To get started, simply sign up and activate a 3-day free trial to test out the service. Once you create an account, note down your credentials and proxy endpoint, as these will be required when configuring your scraper later in the guide.
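Before building the scraper, it's worth a quick check that your proxy credentials work. Here's a minimal sketch using the requests library (the gateway address matches the one used later in this guide; httpbin.org is just a convenient IP-echo service):

import requests

# Placeholder credentials - replace with the values from your Decodo dashboard
proxy = "http://YOUR_PROXY_USERNAME:YOUR_PROXY_PASSWORD@gate.decodo.com:7000"

# Request a page that echoes your IP to confirm traffic is routed through the proxy
response = requests.get(
    "https://httpbin.org/ip",
    proxies={"http": proxy, "https": proxy},
    timeout=15,
)
print(response.text)  # should show a residential IP, not your own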

Method 1: Scraping Google Trends with Playwright

Google Trends is a JavaScript-heavy site that blocks automated scraping through strict anti-bot systems. To access its data, we’ll use Playwright, a browser automation framework capable of executing JavaScript and mimicking real browsing behavior. Playwright can run headless or visible browser sessions, allowing us to load dynamic elements such as charts and export buttons.

1. Install dependencies

Start by installing Playwright and setting up the required browsers:

pip install playwright pandas
playwright install

This will download Chromium, Firefox, and WebKit. We'll use Chromium for the best compatibility with Google services.
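If you don't need Firefox and WebKit, you can install Chromium alone to save time and disk space:

playwright install chromium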

2. Handling Google's anti-bot mechanisms

Google Trends enforces rate limits and detects most automated browsers. To avoid detection, we use:

  • Residential proxies. These make our requests appear to come from real users' IPs.
  • Stealth settings. We disable Playwright's automation flags and spoof the user agent.
  • Session cleanup. Each run clears cookies and local storage to start fresh.

Why you need proxies for Google Trends scraping

Google Trends enforces strict rate limits, and repeated access from the same IP will quickly trigger temporary bans or CAPTCHA challenges.

To stay under the radar, it's best to route your traffic through residential proxies. These proxies make your requests appear like those of real users coming from regular networks instead of datacenters.

Premium residential proxies rotate your IP address with each request, distributing your scraping activity across thousands of legitimate residential IPs.

This makes your requests appear as if they're coming from different real users around the world, rather than a single automated bot.

If you don't already have proxies set up, you can sign up on Decodo and activate a free 3-day trial to follow along. Now let's create a more sophisticated scraper that bypasses Google's detection:

browser = await p.chromium.launch(
    headless=False,
    args=['--no-sandbox', '--disable-blink-features=AutomationControlled']
)
context = await browser.new_context(
    accept_downloads=True,
    proxy={
        "server": PROXY_SERVER,
        "username": PROXY_USERNAME,
        "password": PROXY_PASSWORD
    },
    user_agent='Mozilla/5.0 (...) Chrome/120.0.0.0 Safari/537.36'
)

Get residential proxies for Google Trends

Claim your 3-day free trial of residential proxies and explore full features with unrestricted access.

This combination helps bypass Google’s detection while maintaining stable connections through Decodo proxies.

3. Downloading Google Trends data

Once the page loads, Playwright automatically locates and clicks the export buttons on the charts. Each button corresponds to one of four CSV files:

  • Interest_Over_Time.csv
  • Interest_By_SubRegion.csv
  • Related_Topics.csv
  • Related_Queries.csv

The script scrolls through the page to ensure all widgets are visible and waits for them to load before triggering downloads. Playwright’s expect_download() context ensures each file is saved correctly into a downloads folder.

If any file fails to download, the page is reloaded and the process repeats.

4. Retry logic and main execution

Google often returns temporary errors like HTTP 429 (Too Many Requests). The script includes retry loops that wait and reload the page if rate-limited or if chart widgets don’t appear.

Each attempt waits several seconds between retries, incrementally increasing the delay to appear more human-like. The script stops after several reloads or once all four CSV files are saved.

5. Understanding and cleaning the output

When successful, you’ll get up to four CSV files:

1. Interest by subregion data. This CSV shows geographic distribution of search interest across different regions (states, provinces, or cities depending on your query). Each row contains a location and its relative interest value (0-100).

2. Interest over time data. This CSV contains time-series data showing how search interest changed during your specified date range. Each row represents a time point (hour, day, week, or month, depending on the range) with corresponding interest values.

3. Related topics. This CSV lists the topics that people searched for alongside your main query. Each entry shows a related subject and its relative popularity score, helping reveal broader themes or associations behind user interest. The data is divided into "Top" and "Rising" categories, showing consistently popular topics and those gaining momentum, respectively.

4. Related queries. This CSV captures specific search phrases users entered in connection with your keyword. It helps identify trending questions or alternative formulations of the same interest. Like related topics, it’s split into "Top" and "Rising" sections, reflecting both stable and emerging search patterns.

The script also performs automatic cleaning:

  • Related data is split into "Top" and "Rising" categories.
  • Time and region datasets are restructured and saved in a cleaned folder.

These cleaned CSVs are ready for analysis or integration into your data pipeline.
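As a quick sanity check, you can load one of the cleaned files with pandas. A small sketch (file names match the cleaning functions in the full script below, and the interest column is assumed to parse as numbers):

import pandas as pd

# Load the cleaned time-series data produced by the scraper
df = pd.read_csv("cleaned/cleaned_interest_over_time.csv")

# Find the week with peak search interest
peak = df.loc[df["Search Interest"].idxmax()]
print(f"Peak interest of {peak['Search Interest']} in week {peak['Week']}")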

The complete custom scraper code

Here's the full Playwright script that brings together all the steps we've covered:

import os
import asyncio
import pandas as pd
import csv
import time
from playwright.async_api import async_playwright

download_dir = os.path.join(os.getcwd(), "downloaded")
os.makedirs(download_dir, exist_ok=True)
cleaned_dir = os.path.join(os.getcwd(), "cleaned")
os.makedirs(cleaned_dir, exist_ok=True)

# Proxy configuration
PROXY_SERVER = "http://gate.decodo.com:7000"
PROXY_USERNAME = "YOUR_PROXY_USERNAME"
PROXY_PASSWORD = "YOUR_PROXY_PASSWORD"

async def download_google_trends_data():
    print("Starting download...")
    async with async_playwright() as p:
        browser = await p.chromium.launch(
            headless=False,
            args=['--no-sandbox', '--disable-blink-features=AutomationControlled', '--disable-dev-shm-usage']
        )
        context = await browser.new_context(
            accept_downloads=True,
            proxy={
                "server": PROXY_SERVER,
                "username": PROXY_USERNAME,
                "password": PROXY_PASSWORD
            },
            user_agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            viewport={'width': 1920, 'height': 1080},
            extra_http_headers={'Accept-Language': 'en-US,en;q=0.9', 'Referer': 'https://www.google.com/'}
        )
        await context.add_init_script("""
            Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
            window.chrome = {runtime: {}};
        """)
        page = await context.new_page()

        # Visit Google.com first
        await page.goto("https://www.google.com", wait_until='domcontentloaded', timeout=20000)
        await asyncio.sleep(3)

        # Clear cookies and cache before loading Google Trends
        await context.clear_cookies()
        await page.evaluate("() => { localStorage.clear(); sessionStorage.clear(); }")

        # Load Google Trends with cache-busting timestamp
        timestamp = int(time.time())
        url = f"https://trends.google.com/trends/explore?geo=US&q=louvre&hl=en-GB&t={timestamp}"
        for attempt in range(5):
            print(f"Attempt {attempt + 1}/5")
            response = await page.goto(url, wait_until='domcontentloaded', timeout=30000)
            if response.status == 429:
                print("Rate limited, waiting...")
                await asyncio.sleep(15 + (attempt * 10))
                continue
            elif response.status == 200:
                await asyncio.sleep(5)
                # Check for error messages
                error_count = await page.locator('text=/oops|something went wrong/i').count()
                if error_count > 0:
                    print(f"Error detected on page, reloading (attempt {attempt + 1}/5)...")
                    await asyncio.sleep(10 + (attempt * 5))
                    continue
                else:
                    print("Page loaded successfully")
                    break

        # Handle cookies
        try:
            for selector in ['button:has-text("Accept all")', 'button:has-text("I agree")']:
                button = page.locator(selector)
                if await button.count() > 0:
                    await button.first.click()
                    await asyncio.sleep(3)
                    break
        except Exception:
            pass

        # Scroll to trigger lazy-loaded widgets
        for _ in range(3):
            await page.mouse.wheel(0, 300)
            await asyncio.sleep(2)
        await asyncio.sleep(15)

        # Download files
        print("Downloading files...")
        await page.wait_for_selector("button.export", state="visible", timeout=20000)
        file_names = ["Interest_Over_Time.csv", "Interest_By_SubRegion.csv", "Related_Topics.csv", "Related_Queries.csv"]
        downloaded_files = []
        max_reload_attempts = 3
        reload_count = 0

        # Initial download attempt
        download_buttons = [btn for btn in await page.query_selector_all("button.export") if await btn.is_visible()]
        for idx, button in enumerate(download_buttons[:4]):
            try:
                await button.scroll_into_view_if_needed()
                await asyncio.sleep(2)
                async with page.expect_download(timeout=30000) as download_info:
                    await button.click()
                download = await download_info.value
                file_path = os.path.join(download_dir, file_names[idx])
                await download.save_as(file_path)
                print(f"Downloaded: {file_path}")
                downloaded_files.append(file_names[idx])
                await asyncio.sleep(2)
            except Exception:
                print(f"Failed to download {file_names[idx]}")
                continue

        # Retry up to 3 times for missing files
        while reload_count < max_reload_attempts and len(downloaded_files) < 4:
            missing_files = [f for f in file_names if f not in downloaded_files]
            print(f"Reloading page (attempt {reload_count + 1}/{max_reload_attempts}) to get missing files: {', '.join(missing_files)}")
            await asyncio.sleep(5)
            await page.reload(wait_until='domcontentloaded')
            # Wait for widgets to load after reload
            await asyncio.sleep(10)
            # Scroll again
            for _ in range(3):
                await page.mouse.wheel(0, 300)
                await asyncio.sleep(2)
            await asyncio.sleep(15)
            # Try downloading missing files
            download_buttons = [btn for btn in await page.query_selector_all("button.export") if await btn.is_visible()]
            for idx, button in enumerate(download_buttons[:4]):
                if file_names[idx] in downloaded_files:
                    continue
                try:
                    await button.scroll_into_view_if_needed()
                    await asyncio.sleep(2)
                    async with page.expect_download(timeout=30000) as download_info:
                        await button.click()
                    download = await download_info.value
                    file_path = os.path.join(download_dir, file_names[idx])
                    await download.save_as(file_path)
                    print(f"Downloaded: {file_path}")
                    downloaded_files.append(file_names[idx])
                    await asyncio.sleep(2)
                except Exception:
                    continue
            reload_count += 1

        # Final status
        if len(downloaded_files) == 4:
            print("Success: Downloaded all 4 files")
        else:
            critical_files = ["Interest_Over_Time.csv", "Interest_By_SubRegion.csv"]
            has_critical = all(f in downloaded_files for f in critical_files)
            if has_critical:
                print(f"Success: Downloaded {len(downloaded_files)}/4 files (critical files present)")
            else:
                print(f"Warning: Downloaded {len(downloaded_files)}/4 files (critical files missing)")
        await browser.close()

def clean_related_data(file_path, output_top_path, output_rising_path, columns):
    if not os.path.exists(file_path) or os.path.getsize(file_path) < 10:
        return
    try:
        with open(file_path, "r", encoding="utf-8-sig") as file:
            lines = [line for line in csv.reader(file)]
        top_start = next(i for i, line in enumerate(lines) if line and line[0] == "TOP") + 1
        rising_start = next(i for i, line in enumerate(lines) if line and line[0] == "RISING") + 1
        top_df = pd.DataFrame(lines[top_start:rising_start - 1], columns=columns)[:-1]
        rising_df = pd.DataFrame(lines[rising_start:], columns=columns)[:-1]
        top_df.to_csv(output_top_path, index=False, encoding="utf-8-sig")
        rising_df.to_csv(output_rising_path, index=False, encoding="utf-8-sig")
    except Exception:
        pass

def clean_interest_by_subregion_data(file_path, output_path):
    if not os.path.exists(file_path) or os.path.getsize(file_path) < 10:
        return
    try:
        with open(file_path, "r", encoding="utf-8-sig") as file:
            lines = [line for line in csv.reader(file)]
        region_data = pd.DataFrame(lines[3:-1], columns=["Region", "Interest"])
        region_data.to_csv(output_path, index=False, encoding="utf-8-sig")
    except Exception:
        pass

def clean_interest_over_time_data(file_path, output_path):
    if not os.path.exists(file_path) or os.path.getsize(file_path) < 10:
        return
    try:
        df = pd.read_csv(file_path, skiprows=2)
        if df.shape[1] >= 2:
            cleaned_df = df.iloc[:-1, [0, 1]]
            cleaned_df.columns = ["Week", "Search Interest"]
            cleaned_df.to_csv(output_path, index=False, encoding="utf-8-sig")
    except Exception:
        pass

if __name__ == "__main__":
    asyncio.run(download_google_trends_data())
    print("Cleaning files...")
    clean_related_data(
        os.path.join(download_dir, "Related_Topics.csv"),
        os.path.join(cleaned_dir, "cleaned_top_topics.csv"),
        os.path.join(cleaned_dir, "cleaned_rising_topics.csv"),
        ["Topics", "Interest"]
    )
    clean_related_data(
        os.path.join(download_dir, "Related_Queries.csv"),
        os.path.join(cleaned_dir, "cleaned_top_queries.csv"),
        os.path.join(cleaned_dir, "cleaned_rising_queries.csv"),
        ["Query", "Interest"]
    )
    clean_interest_by_subregion_data(
        os.path.join(download_dir, "Interest_By_SubRegion.csv"),
        os.path.join(cleaned_dir, "cleaned_region_data.csv")
    )
    clean_interest_over_time_data(
        os.path.join(download_dir, "Interest_Over_Time.csv"),
        os.path.join(cleaned_dir, "cleaned_interest_over_time.csv")
    )
    print("Completed")

Note: Remember to replace YOUR_PROXY_USERNAME and YOUR_PROXY_PASSWORD with your actual proxy credentials from your Decodo dashboard.

Method 2: Scraping Google Trends using the backend API

Instead of automating the browser with Playwright, we can directly interact with Google Trends' internal API endpoints using Python. This method is faster, more lightweight, and doesn't require a browser or Chrome driver.

You'll need two key components:

  • API URLs (captured from browser DevTools)
  • HTTP client (e.g., httpx or Requests)

1. Inspect Google Trends network requests

Google Trends loads its data dynamically using background API calls. We can intercept these calls to identify the exact endpoints the website uses, then replicate those requests in our own code.

To start, open Google Trends in your browser and launch Developer Tools (press F12 or right-click → Inspect). Then navigate to the Network tab and filter by Fetch/XHR.

Look for requests named relatedsearches, or filter the panel by widgetdata. Several API requests will appear in the Network panel; these contain the actual JSON data that populates the charts and tables on the Google Trends interface.

Once you identify the relevant requests, right-click each one and select Copy → Copy link address. This gives you the exact endpoint you'll use to query Google's backend directly.

2. Send requests to the API

Once you've captured the URLs, you can use Python's Requests or httpx library to retrieve the JSON data. The copied URLs will look complex, with encoded parameters like this:

https://trends.google.com/trends/api/widgetdata/relatedsearches?hl=en-US&tz=-60&req=%7B%22restriction%22:%7B...%7D&token=APP6_UEAAAA...

Some key components of the URL are:

  • hl – Language (e.g., en-US)
  • tz – Timezone offset
  • req – URL-encoded JSON containing search parameters (keyword, geo, time range, etc.; see the decoding sketch after this list)
  • token – Authentication token (generated per request, expires quickly)
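To see exactly what a captured URL asks for, you can decode its req parameter back into readable JSON with Python's standard library. A small sketch (the URL placeholder is yours to fill in with a real captured endpoint):

import json
from urllib.parse import urlparse, parse_qs

url = "PASTE_A_CAPTURED_WIDGETDATA_URL_HERE"

# parse_qs decodes the percent-encoded query string for us
query = parse_qs(urlparse(url).query)
req = json.loads(query["req"][0])
print(json.dumps(req, indent=2))  # keyword, geo, time range, etc.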

Let's set up a basic request:

import requests
import json
# Set the geographical location
geo_location = "US"
# API URLs captured from Developer Tools
topics_url = "https://trends.google.com/trends/api/widgetdata/relatedsearches?hl=en-US&tz=-60&req=%7B%22restriction%22:%7B%22geo%22:%7B%22country%22:%22US%22%7D,%22time%22:%222024-10-13+2025-10-13%22,%22originalTimeRangeForExploreUrl%22:%22today+12-m%22%7D,%22keywordType%22:%22ENTITY%22,%22metric%22:%5B%22TOP%22,%22RISING%22%5D,%22trendinessSettings%22:%7B%22compareTime%22:%222023-10-13+2024-10-12%22%7D,%22requestOptions%22:%7B%22property%22:%22%22,%22backend%22:%22IZG%22,%22category%22:0%7D,%22language%22:%22en%22,%22userCountryCode%22:%22NG%22,%22userConfig%22:%7B%22userType%22:%22USER_TYPE_LEGIT_USER%22%7D%7D&token=APP6_UEAAAAAaO5qV362vksfqDx0cSRxqDfcd_xhiGHa"
queries_url = "https://trends.google.com/trends/api/widgetdata/relatedsearches?hl=en-US&tz=-60&req=%7B%22restriction%22:%7B%22geo%22:%7B%22country%22:%22US%22%7D,%22time%22:%222024-10-13+2025-10-13%22,%22originalTimeRangeForExploreUrl%22:%22today+12-m%22%7D,%22keywordType%22:%22QUERY%22,%22metric%22:%5B%22TOP%22,%22RISING%22%5D,%22trendinessSettings%22:%7B%22compareTime%22:%222023-10-13+2024-10-12%22%7D,%22requestOptions%22:%7B%22property%22:%22%22,%22backend%22:%22IZG%22,%22category%22:0%7D,%22language%22:%22en%22,%22userCountryCode%22:%22NG%22,%22userConfig%22:%7B%22userType%22:%22USER_TYPE_LEGIT_USER%22%7D%7D&token=APP6_UEAAAAAaO5qV4tE2aqGXa6bdn_sGMtdStDwuOP7"
# Fetch the data from the API
topics_response = requests.get(topics_url)
queries_response = requests.get(queries_url)

At this point, we're making direct HTTP requests to Google's backend API without a browser. This is already faster than using Playwright or Selenium, as there's no page rendering, JavaScript execution, or DOM manipulation involved.

3. Parse and structure the JSON data

Google prefixes its JSON responses with )]}', to prevent JSON hijacking attacks. We need to strip this prefix before parsing:

# Clean the JSON responses
topics_data = json.loads(topics_response.text.replace(")]}',", ""))
queries_data = json.loads(queries_response.text.replace(")]}',", ""))

Each endpoint returns ranked lists for related topics and queries. We can loop through these lists, extract the titles, search volumes, and links, and combine everything into a structured dataset:

result = []

# Parse the topics data
for topic in topics_data["default"]["rankedList"][1]["rankedKeyword"]:
    topic_object = {
        "Title": topic["topic"]["title"],
        "Search Volume": topic["value"],
        "Link": "https://trends.google.com/" + topic["link"],
        "Geo Location": geo_location,
        "Type": "search_topic",
    }
    result.append(topic_object)

# Parse the queries data
for query in queries_data["default"]["rankedList"][1]["rankedKeyword"]:
    query_object = {
        "Title": query["query"],
        "Search Volume": query["value"],
        "Link": "https://trends.google.com/" + query["link"],
        "Geo Location": geo_location,
        "Type": "search_query",
    }
    result.append(query_object)

4. Saving to CSV

import pandas as pd
# Create a Pandas DataFrame and save to CSV
df = pd.DataFrame(result)
df.to_csv("google_trends_keywords.csv", index=False)
print("Data saved to google_trends_keywords.csv")
print(df.head(10))

This will produce a google_trends_keywords.csv file containing the explore page data.

The complete API code

Here's the full script combining all the steps:

import requests
import json
import pandas as pd

# Set the geographical location
geo_location = "US"

# API URLs from browser Network tab
topics_url = "https://trends.google.com/trends/api/widgetdata/relatedsearches?hl=en-US&tz=-60&req=%7B%22restriction%22:%7B%22geo%22:%7B%22country%22:%22US%22%7D,%22time%22:%222024-10-13+2025-10-13%22,%22originalTimeRangeForExploreUrl%22:%22today+12-m%22%7D,%22keywordType%22:%22ENTITY%22,%22metric%22:%5B%22TOP%22,%22RISING%22%5D,%22trendinessSettings%22:%7B%22compareTime%22:%222023-10-13+2024-10-12%22%7D,%22requestOptions%22:%7B%22property%22:%22%22,%22backend%22:%22IZG%22,%22category%22:0%7D,%22language%22:%22en%22,%22userCountryCode%22:%22NG%22,%22userConfig%22:%7B%22userType%22:%22USER_TYPE_LEGIT_USER%22%7D%7D&token=APP6_UEAAAAAaO5qV362vksfqDx0cSRxqDfcd_xhiGHa"
queries_url = "https://trends.google.com/trends/api/widgetdata/relatedsearches?hl=en-US&tz=-60&req=%7B%22restriction%22:%7B%22geo%22:%7B%22country%22:%22US%22%7D,%22time%22:%222024-10-13+2025-10-13%22,%22originalTimeRangeForExploreUrl%22:%22today+12-m%22%7D,%22keywordType%22:%22QUERY%22,%22metric%22:%5B%22TOP%22,%22RISING%22%5D,%22trendinessSettings%22:%7B%22compareTime%22:%222023-10-13+2024-10-12%22%7D,%22requestOptions%22:%7B%22property%22:%22%22,%22backend%22:%22IZG%22,%22category%22:0%7D,%22language%22:%22en%22,%22userCountryCode%22:%22NG%22,%22userConfig%22:%7B%22userType%22:%22USER_TYPE_LEGIT_USER%22%7D%7D&token=APP6_UEAAAAAaO5qV4tE2aqGXa6bdn_sGMtdStDwuOP7"

# Get the data from the API URLs
topics_response = requests.get(topics_url)
queries_response = requests.get(queries_url)

# Remove the extra symbols and parse JSON
topics_data = json.loads(topics_response.text.replace(")]}',", ""))
queries_data = json.loads(queries_response.text.replace(")]}',", ""))

result = []

# Parse the topics data and add to result list
for topic in topics_data["default"]["rankedList"][1]["rankedKeyword"]:
    topic_object = {
        "Title": topic["topic"]["title"],
        "Search Volume": topic["value"],
        "Link": "https://trends.google.com/" + topic["link"],
        "Geo Location": geo_location,
        "Type": "search_topic",
    }
    result.append(topic_object)

# Parse the queries data and add to result list
for query in queries_data["default"]["rankedList"][1]["rankedKeyword"]:
    query_object = {
        "Title": query["query"],
        "Search Volume": query["value"],
        "Link": "https://trends.google.com/" + query["link"],
        "Geo Location": geo_location,
        "Type": "search_query",
    }
    result.append(query_object)

print(result)

# Create a Pandas DataFrame and save the data into a CSV
df = pd.DataFrame(result)
df.to_csv("keywords.csv", index=False)

Note: The API URLs shown here are temporary and will expire after a short while. Each time you want to scrape new data, you need to open Google Trends, inspect the network requests in your browser's DevTools, and extract fresh URLs.

Common challenges and troubleshooting

Even with the right tools and setup, scraping Google Trends comes with its share of challenges. Here are the most common issues you'll encounter and how to resolve them.

1. Outdated or poorly maintained libraries

Tools like PyTrends, while once popular, are no longer actively maintained and often break due to frequent changes in Google's response structure. Developers commonly report issues like:

  • HTTP 400/429 errors
  • Inconsistent JSON data
  • Failed authentication or deprecated endpoints

Solution: Avoid relying solely on PyTrends. Instead, use a headless browser like Selenium or Playwright, or use our Web Scraping API to abstract away the complexity of maintaining scripts.

2. Frequent CAPTCHA or 429 errors

Google enforces rate limits and bot detection aggressively. After several consecutive requests from the same IP, or requests without human-like behavior, you may see 429 errors or CAPTCHA pages.

Solution: Rotate IPs using residential proxies and randomize delays between requests.
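Here's a minimal sketch of both mitigations using the requests library (the gateway address mirrors the Decodo endpoint used earlier; the URL list is a placeholder for whatever endpoints you captured):

import random
import time
import requests

# Rotating gateway: each request exits through a different residential IP
proxies = {
    "http": "http://YOUR_PROXY_USERNAME:YOUR_PROXY_PASSWORD@gate.decodo.com:7000",
    "https": "http://YOUR_PROXY_USERNAME:YOUR_PROXY_PASSWORD@gate.decodo.com:7000",
}

urls_to_scrape = ["https://trends.google.com/..."]  # your captured endpoints

for url in urls_to_scrape:
    response = requests.get(url, proxies=proxies, timeout=30)
    print(url, response.status_code)
    # Randomized delay so requests don't arrive at a machine-like cadence
    time.sleep(random.uniform(3, 8))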

3. Changing HTML or JSON structure

Google Trends occasionally updates its frontend layout and backend JSON schema. This breaks CSS selectors, JSON keys, or download button identifiers in static scripts.

Solution: Regularly inspect the HTML structure and network requests in DevTools to confirm endpoints still work.

4. Download directory and permission issues

When automating CSV exports with Playwright, incorrect browser configurations can prevent downloads or cause permission errors.

Solution: Double-check your browser launch options, ensure your scraper runs with write permissions in the working directory, and verify that files are downloading to the expected folder before scaling the scraper.
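One way to fail fast is to verify the download directory is writable before launching the browser, for example:

import os

download_dir = os.path.join(os.getcwd(), "downloaded")
os.makedirs(download_dir, exist_ok=True)  # create the folder if it's missing

# Abort early if Playwright won't be able to save files here
if not os.access(download_dir, os.W_OK):
    raise PermissionError(f"Cannot write to {download_dir}")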

5. Slow performance or timeouts

Rendering Google Trends pages with JavaScript-heavy content can be resource-intensive.

Solution: Use headless mode for faster loading, and prefer Playwright over Selenium since it's lighter on resources.
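In the Playwright script above, that's a one-line change. Note that some anti-bot systems flag headless sessions more readily, so test both modes:

browser = await p.chromium.launch(
    headless=True,  # no visible window; faster and lighter on resources
    args=['--no-sandbox', '--disable-blink-features=AutomationControlled']
)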

To sum up

Following this guide, you can efficiently download Google Trends data, manage proxy rotation, and bypass Google's bot protection mechanisms using either Playwright automation or direct backend API scraping.

For those seeking a simpler approach, using our ready-made Web Scraping API lets you scrape Google Trends quickly and easily without extensive coding.

It's ideal for users seeking a low-code or no-code solution that handles all the technical complexity for you. Curious to see how Decodo simplifies web scraping? Sign up now for a 3-day free trial.


Get high-quality residential IPs

Choose a residential proxy plan from Decodo and enjoy industry-leading performance and reliability.

About the author

Kipras Kalzanauskas

Senior Account Manager

Kipras is a strategic account expert with a strong background in sales, IT support, and data-driven solutions. Born and raised in Vilnius, he studied history at Vilnius University before spending time in the Lithuanian Military. For the past 3.5 years, he has been a key player at Decodo, working with Fortune 500 companies in eCommerce and Market Intelligence.


Connect with Kipras on LinkedIn.

All information on Decodo Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Decodo Blog or any third-party websites that may be linked therein.

Frequently asked questions

Is Google Trends free to use?

Yes, Google Trends is completely free and offers real-time insights that marketers, researchers, and analysts can use for keyword research, market analysis, and trend forecasting.

Does Google Trends offer an official API?

Yes, Google released an official API in July 2025, though it's still in limited alpha testing. Most users still rely on third-party libraries or scraping methods until public access is available.

Can you scrape historical data?

Yes, by adjusting the date range, you can access multi-year data for long-term trend analysis.

Is PyTrends still reliable?

Not entirely. PyTrends is an unofficial API wrapper, and it often breaks as it's no longer being maintained. For stable scraping, use Playwright, Selenium, direct API requests, or Decodo's Web Scraping API.
