
How to Scrape Google Maps: A Step-By-Step Tutorial 2025

Ever wondered how to extract valuable business data directly from Google Maps? Whether you're building a lead list, analyzing local markets, or researching competitors, scraping Google Maps can be a goldmine of insights. In this guide, you’ll learn how to automate the process step by step using Python – or skip the coding altogether with Decodo’s plug-and-play scraper.

Dominykas Niaura

Aug 18, 2025

10 min read


The benefit of scraping Google Maps

Let’s start with the "why." Google Maps is a rich source of data that’s continually updated and grows richer every day. There are restaurants, cafes, bars, supermarkets, hotels, pharmacies, auto repair shops, gyms, historical landmarks, theaters, parks… you name it. Google Maps covers virtually every category of interest.

The data extracted from Google Maps can be a pivotal resource for businesses and analysts alike. It’s used for many applications, such as market research, price aggregation, brand monitoring, competitor analysis, and more. Furthermore, this wealth of information can support customer engagement strategies, location planning, and service optimization, which is helpful for competitive positioning in various industries.

How to scrape Google Maps with Python and proxies

One way to retrieve data from Google Maps is via the official Google Maps API, but that approach comes with limitations like usage quotas, data restrictions, and fees for high-volume access. For developers who want more flexibility, scraping the Google Maps frontend is still a powerful workaround, especially when combined with proxies to stay under the radar.

In this guide, we’ll use Playwright – a fast, modern alternative to Selenium. It pairs well with proxies and handles dynamic websites like Google Maps more reliably. We’ll also use regular expressions to extract structured data and export our results to a CSV file.

Our example target will be Google Maps results for London establishments serving the famous West Asian dish – falafel.

Setting up your environment and imports

Make sure you have a coding setup that allows you to write and run scripts. This could be through a platform like Jupyter Notebook, an Integrated Development Environment (IDE) such as Visual Studio Code, or a basic text editor paired with a command-line tool.

You’ll need Python 3.7+ installed on your system. Run the following commands in Command Prompt (Windows) or Terminal (macOS, Linux) to install the necessary libraries for the script we’ll be using to scrape Google Maps:

pip install playwright
playwright install

Now create a new Python script file and import these libraries:

from playwright.sync_api import sync_playwright
import csv
import re
from typing import List, Dict

Getting residential proxies

Using proxies in a scraping project is essential for smooth and continuous data collection. Proxies mask your scraping activity by routing requests through various IP addresses, helping to maintain anonymity and avoid IP detection and bans from target websites like Google Maps.

Furthermore, proxies enable users to scale their efforts without hitting rate limits and access content across different regions. For this tutorial, we recommend using our residential proxies, but you can try datacenter, mobile, or ISP proxies, depending on your case.

  1. Create a Decodo account on our dashboard.
  2. Find residential proxies by choosing Residential on the left panel.
  3. Choose a subscription, Pay As You Go plan, or opt for a 3-day free trial.
  4. In the Proxy setup tab, configure the location, session type, and protocol according to your needs.
  5. Copy your proxy address, port, username, and password for later use. Alternatively, you can click the download icon in the lower right corner of the table to download the proxy endpoints (10 by default).
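Before wiring the credentials into the scraper, you may want to confirm they work by routing a single request through the endpoint. The sketch below is optional and makes a few assumptions: the hostname and port follow this tutorial's example (eu.decodo.com:10001), `build_proxy_url` is a helper name of our own, and httpbin.org/ip is simply a convenient third-party service that echoes the IP it sees (not part of Decodo).

```python
def build_proxy_url(username: str, password: str,
                    host: str = "eu.decodo.com", port: int = 10001) -> str:
    """Assemble a proxy URL from the credentials copied in step 5."""
    return f"http://{username}:{password}@{host}:{port}"

if __name__ == "__main__":
    import requests  # pip install requests

    proxy = build_proxy_url("YOUR_PROXY_USERNAME", "YOUR_PROXY_PASSWORD")
    # httpbin echoes the IP it sees; it should differ from your real IP
    response = requests.get(
        "https://httpbin.org/ip",
        proxies={"http": proxy, "https": proxy},
        timeout=15,
    )
    print(response.json())
```

If the printed IP matches one of your proxy exit nodes rather than your own address, the credentials are good to go.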

Integrating proxies

Let’s integrate residential proxies into this Playwright script. You can adjust the server address in the Decodo dashboard based on your desired location or session type, and make sure to replace the placeholder credentials with your own proxy username and password in the code. For this tutorial, we’ve chosen Europe as the general region and a sticky session of up to 10 minutes:

class GoogleMapsScraper:
    def __init__(self, headless: bool = False):
        self.proxy_config = {
            "server": "http://eu.decodo.com:10001",
            "username": "YOUR_PROXY_USERNAME",  # Replace
            "password": "YOUR_PROXY_PASSWORD"  # Replace
        }
        self.headless = headless

Extracting place information

This function pulls key details from each Google Maps result (like name, rating, review count, and address). It combines HTML attribute access with regular expressions to clean and standardize the data.

Google recently updated their web structure so that much of a listing’s information is bundled into the same text block, rather than being split into separate elements. That means we can’t just target ratings or addresses directly – we need logic to break these chunks apart and filter out the noise (like service options, phone numbers, or marketing blurbs).

The multiple patterns and keyword lists in this function make that possible. You can easily adapt the function to your needs by adding more keywords or patterns to the lists inside – great for targeting specific business types or cleaning up region-specific formats.
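To see how these patterns pick a bundled text block apart, here’s a quick standalone check against a made-up result string (the business name and street are invented for illustration). It applies the same rating, review-count, and numbered-address regexes used in the function:

```python
import re

# A fabricated Google Maps text block for illustration
sample = "Falafel King 4.6(1,204) · Middle Eastern restaurant · 12 Berwick St · Open ⋅ Closes 11 PM"

rating = re.search(r'(\d+[.,]\d+)\s*\(\d+', sample)                   # -> "4.6"
reviews = re.search(r'\((\d+(?:[.,]\d+)*)\)', sample)                 # -> "1,204"
address = re.search(r'[·•]\s*(\d+\s+[A-Za-z][^·•\n]{3,35})', sample)  # -> "12 Berwick St"

print(rating.group(1), reviews.group(1), address.group(1).strip())
```

Note how the address pattern anchors on the "·" separator and stops before the next one, which is what lets the function skip the category ("Middle Eastern restaurant") and the opening-hours noise.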

def extract_place_info(self, text: str, element) -> Dict:
    """Extract place information from text"""
    place_info = {'title': 'N/A', 'address': 'N/A', 'rating': 'N/A', 'reviews': 'N/A'}
    try:
        # Get place name from aria-label
        aria_label = element.get_attribute('aria-label')
        if aria_label:
            place_info['title'] = aria_label.strip()
        # Extract rating: "4.9(1,017)" format – works with both . and , decimals
        rating_match = re.search(r'(\d+[.,]\d+)\s*\(\d+', text)
        if rating_match:
            rating = rating_match.group(1).replace(',', '.')
            place_info['rating'] = f"{rating}/5"
        # Extract review count: "(1,017)" format – handles different number formats
        review_match = re.search(r'\((\d+(?:[.,]\d+)*)\)', text)
        if review_match:
            place_info['reviews'] = review_match.group(1)
        # Extract address: universal patterns for any business type
        address_patterns = [
            # Numbered addresses: "4 Strutton Ground", "123 Main Street"
            r'[·•]\s*(\d+\s+[A-Za-z][^·•\n]{3,35})',
            # Any category followed by separator and address: "Category · Address"
            r'[A-Za-z][^·•\n]*[·•]\s*([^·•\n]{4,50})',
            # Common address keywords (universal)
            r'[·•]\s*([^·•\n]*(?:Street|St|Road|Rd|Lane|Ln|Avenue|Ave|Boulevard|Blvd|Drive|Dr|Way|Place|Pl|Court|Ct|Circle|Cir|Square|Sq|Market|Centre|Center|Unit|Floor|Ground|Close|Crescent|Gardens|Park|Terrace|Row|Hill|Bridge|Station|Mall|Plaza|Building|Tower|House|Hall)\s*[^·•\n]*)',
            # Addresses with postal codes or area codes
            r'[·•]\s*([^·•\n]*\b(?:[A-Z]{1,2}\d{1,2}[A-Z]?\s*\d[A-Z]{2}|\d{5}(?:-\d{4})?)\b[^·•\n]*)',
            # Generic pattern for medium-length text segments (likely addresses)
            r'[·•]\s*([^·•\n]{8,45})',
            # Numbers + text (often addresses)
            r'[·•]\s*(\d+[^·•\n]{4,40})'
        ]
        for pattern in address_patterns:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                address = match.group(1).strip()
                # Clean up the address – remove everything after opening hours/status indicators
                address = re.split(r'\s+(?:Open|Closed|⋅|Closes?|Opens?)\s', address, flags=re.IGNORECASE)[0]
                address = re.split(r'\s+(?:\d{1,2}:\d{2}\s*[AP]M|\d{1,2}[AP]M)', address, flags=re.IGNORECASE)[0]
                # Remove business descriptions – stop at common business descriptor words
                business_descriptors = [
                    r'\s+(?:No-frills|Traditional|Modern|Upscale|Casual|Fine|Fast|Quick|Family|Authentic|Popular|Local|Trendy|Cozy|Cosy|Elegant|Contemporary|Vibrant|Lively|Bustling|Quiet|Relaxed|Intimate|Spacious|Compact|Small|Large|Friendly|Welcoming)\s+(?:cafe|restaurant|bar|shop|store|market|deli|bistro|eatery|venue|place|spot|counter|service|dining)',
                    r'\s+(?:with|offering|serving|specializing|featuring|known for)\s+.*',
                    r'\s+(?:cafe|restaurant|bar|shop|store|market|deli|bistro|eatery|venue|establishment)\s+with\s+.*',
                    r'\s+(?:Specializes?|Serves?|Offers?|Features?|Known for)\s+.*',
                    r'\s+(?:menu of|selection of|variety of|range of)\s+.*',
                    r'\s+(?:counter-serve|counter serve|takeaway|take-away|dine-in|sit-down)\s+.*',
                    r'\s+(?:for|serving)\s+(?:vegan|vegetarian|halal|kosher|organic|fresh|homemade|traditional)\s+.*'
                ]
                for descriptor_pattern in business_descriptors:
                    address = re.split(descriptor_pattern, address, flags=re.IGNORECASE)[0]
                # Remove email addresses from the address string
                address = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b[,\s]*', '', address)
                # Remove phone numbers from the address string (various formats)
                phone_patterns = [
                    r'\+\d{1,4}[\s\-\.]?\d{1,4}[\s\-\.]?\d{1,4}[\s\-\.]?\d{1,9}',  # International: +970 8 282 9468, +1-555-123-4567
                    r'\(\d{3}\)[\s\-\.]?\d{3}[\s\-\.]?\d{4}',  # US format: (555) 123-4567
                    r'\b\d{3}[\s\-\.]\d{3}[\s\-\.]\d{4}\b',  # US format: 555-123-4567, 555.123.4567
                    r'\b\d{3}\s\d{4}\s\d{4}\b',  # UK format: 020 7123 4567
                    r'\b\d{4}\s\d{3}\s\d{3}\b',  # Alternative format: 1234 567 890
                    r'\b\d{2}\s\d{4}\s\d{4}\b',  # Another format: 02 1234 5678
                    r'\b\d{10,15}\b'  # Simple 10-15 digit sequences
                ]
                for phone_pattern in phone_patterns:
                    address = re.sub(phone_pattern, '', address)
                # Remove separators and symbols
                address = re.sub(r'^[·•♿\s]+|[·•♿\s]+$', '', address)
                address = re.sub(r'\s+', ' ', address)  # Normalize whitespace
                address = address.strip()  # Remove leading/trailing whitespace
                # Additional cleanup – stop at standalone descriptive words
                stop_words = [
                    'No-frills', 'Traditional', 'Modern', 'Upscale', 'Casual', 'Fine', 'Fast', 'Quick',
                    'Family', 'Authentic', 'Popular', 'Local', 'Trendy', 'Cozy', 'Cosy', 'Elegant', 'Contemporary',
                    'Vibrant', 'Lively', 'Bustling', 'Quiet', 'Relaxed', 'Intimate', 'Spacious', 'Compact',
                    'Small', 'Large', 'Friendly', 'Welcoming', 'Specializing', 'Featuring', 'Serving', 'Offering', 'Known',
                    'counter-serve', 'counter', 'takeaway', 'take-away', 'dine-in', 'sit-down'
                ]
                for stop_word in stop_words:
                    if stop_word in address:
                        address = address.split(stop_word)[0].strip()
                        break
                # Ensure it's a valid address (not service options, hours, prices, emails, phone numbers, etc.)
                if (len(address) > 3 and
                        not re.search(r'@[A-Za-z0-9.-]+\.[A-Za-z]{2,}', address) and  # No email addresses
                        not re.search(r'\+\d{1,4}[\s\-\.]?\d{1,4}[\s\-\.]?\d{1,4}[\s\-\.]?\d{1,9}', address) and  # No international phone numbers
                        not re.search(r'\b\d{10,15}\b', address) and  # No long digit sequences (phone numbers)
                        not re.match(r'^(Open|Closed|€\d+|\$\d+|£\d+|\d+:\d+|Mon|Tue|Wed|Thu|Fri|Sat|Sun)', address, re.IGNORECASE) and
                        address.lower().strip() not in [
                            'takeaway', 'delivery', 'dine-in', 'pickup', 'drive-through', 'no-contact delivery',
                            'open', 'closed', 'temporarily closed', 'permanently closed',
                            'accepts credit cards', 'cash only', 'wheelchair accessible',
                            'free wifi', 'parking available', 'reservations recommended'
                        ]):
                    place_info['address'] = address
                    break
        # Fallback: get title from first line if aria-label failed
        if place_info['title'] == 'N/A':
            first_line = text.split('\n')[0].strip()
            if first_line and len(first_line) > 2:
                place_info['title'] = first_line
    except Exception:
        pass
    return place_info

Navigating and scraping Google Maps

This method handles the full scraping workflow, from launching the browser and navigating to Google Maps to scrolling through results and extracting listing data. It uses Playwright to automate interactions, such as accepting cookie prompts, simulating scrolls to load more businesses, and identifying the HTML elements that contain the data we need.

It’s also flexible: you can modify the search query, number of scrolls, or Google domain (like .com, .co.uk, .de, etc.) to adapt it for different regions and languages. Combined with the extract_place_info method, it loops through each result, extracts structured data, and appends it to a list for saving.
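The domain switch comes down to one URL template. As a minimal illustration (the helper name maps_search_url is ours, not part of the script), the method builds its target like this:

```python
def maps_search_url(query: str, domain: str = "com") -> str:
    # Same construction as in scrape_google_maps: spaces become '+' in the path
    return f"https://www.google.{domain}/maps/search/{query.replace(' ', '+')}"

print(maps_search_url("gyms in London", domain="co.uk"))
# https://www.google.co.uk/maps/search/gyms+in+London
```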

def scrape_google_maps(self, search_query: str = "restaurants in london", max_scrolls: int = 5, domain: str = "com") -> List[Dict]:
    """Scrape Google Maps places - supports different domains and languages"""
    places = []
    try:
        with sync_playwright() as playwright:
            browser = playwright.chromium.launch(
                headless=self.headless,
                proxy=self.proxy_config
            )
            context = browser.new_context(
                viewport={'width': 1280, 'height': 800},
                user_agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36'
            )
            page = context.new_page()
            # Navigate to Google Maps with specified domain
            url = f"https://www.google.{domain}/maps/search/{search_query.replace(' ', '+')}"
            page.goto(url, timeout=30000)
            page.wait_for_timeout(3000)
            # Handle cookies - try specific XPath first, then fallbacks
            cookie_selectors = [
                '/html/body/div/div[2]/div[1]/div[3]/form[2]/input[14]',  # Current working XPath
                '//*[@id="yDmH0d"]/c-wiz/div/div/div/div[2]/div[1]/div[3]/div[1]/div[1]/form[2]/div/div/button',
                'button:has-text("Accept")', 'button:has-text("I agree")', 'button:has-text("Aceitar")',
                'button:has-text("Akzeptieren")', 'button:has-text("Accepter")', 'button:has-text("Accetta")',
                'button:has-text("Aceptar")', 'input[value*="Accept"]', 'input[value*="agree"]'
            ]
            for selector in cookie_selectors:
                try:
                    element = page.locator(selector).first
                    if element.is_visible(timeout=3000):
                        element.click()
                        page.wait_for_timeout(1000)
                        break
                except Exception:
                    continue
            page.wait_for_timeout(5000)
            page.screenshot(path='./screenshot.png')
            # Scroll to load more results
            try:
                panel = page.locator('//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[1]/div[1]')
                for i in range(max_scrolls):
                    panel.press('PageDown')
                    page.wait_for_timeout(2000)
            except Exception:
                pass
            # Get all place elements
            selectors = [
                '//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[1]/div[1]/div/div/a',
                'a[data-value="Directions"]',
                'div[role="article"] a'
            ]
            place_elements = []
            for selector in selectors:
                elements = page.locator(selector).all()
                if len(elements) > 1:
                    place_elements = elements
                    break
            # Extract data from each place
            for element in place_elements:
                try:
                    text = element.inner_text()
                    parent_text = element.locator('..').inner_text() if element.locator('..') else ""
                    combined_text = f"{text}\n{parent_text}"
                    place_info = self.extract_place_info(combined_text, element)
                    if place_info['title'] != 'N/A':
                        places.append(place_info)
                except Exception:
                    continue
            browser.close()
    except Exception as e:
        print(f"Error: {e}")
    return places

Saving results to a CSV file

Once the data has been scraped and structured, we need a way to store it for later use. The save_to_csv method handles this by writing all collected places into a CSV file with four columns: Title, Address, Rating, and Reviews.

By default, the file is saved as places.csv in your working directory, but you can change the filename or path to fit your project. Each place is written as a new row, making the data easy to open in Excel, Google Sheets, or any data analysis tool.

def save_to_csv(self, places: List[Dict], filename: str = './places.csv'):
    """Save to CSV"""
    try:
        with open(filename, 'w', newline='', encoding='utf-8') as f:
            writer = csv.writer(f)
            writer.writerow(['Title', 'Address', 'Rating', 'Reviews'])
            for place in places:
                writer.writerow([place['title'], place['address'], place['rating'], place['reviews']])
        print(f"Data saved to {filename}")
    except Exception as e:
        print(f"Failed to save CSV: {e}")
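As a quick sanity check on the output, you can read the file back with csv.DictReader, which maps each row to a dict keyed by the header. The helper below (load_places is our name, not part of the tutorial script) is a small sketch of that:

```python
import csv
from typing import Dict, List

def load_places(filename: str = './places.csv') -> List[Dict[str, str]]:
    """Read the scraper's CSV back into a list of dicts keyed by column name."""
    with open(filename, newline='', encoding='utf-8') as f:
        return list(csv.DictReader(f))
```

For example, `load_places()[0]['Title']` would give you the name of the first scraped place.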

Running the scraper

Finally, the main function ties everything together. It initializes the scraper, runs a search (in this case, falafel in London), and prints out the results in the terminal with titles, addresses, ratings, and review counts.

You can easily swap the query or domain to target different business types and countries. For example, gyms in London (.co.uk) or supermarkets in Rome (.it). If results are found, the function also saves them to a CSV file for later use, completing the full scraping workflow from query to clean dataset.

def main():
    scraper = GoogleMapsScraper(headless=False)
    # Example: Use for any type of business in different countries
    # places = scraper.scrape_google_maps("pharmacies in Berlin", domain="de")
    # places = scraper.scrape_google_maps("gyms in London", domain="co.uk")
    # places = scraper.scrape_google_maps("auto repair shops in New York", domain="com")
    # places = scraper.scrape_google_maps("supermarkets in Rome", domain="it")
    places = scraper.scrape_google_maps("falafel in London", max_scrolls=5, domain="com")
    print(f"\nFound {len(places)} places:")
    for i, place in enumerate(places, 1):
        print(f"\n{i}. {place['title']}")
        print(f"   Address: {place['address']}")
        print(f"   Rating: {place['rating']}")
        print(f"   Reviews: {place['reviews']}")
    if places:
        scraper.save_to_csv(places)

if __name__ == "__main__":
    main()

The full Google Maps scraping code

Here’s our full script for scraping Google Maps to find falafel places in London:

from playwright.sync_api import sync_playwright
import csv
import re
from typing import List, Dict


class GoogleMapsScraper:
    def __init__(self, headless: bool = False):
        self.proxy_config = {
            "server": "http://eu.decodo.com:10001",
            "username": "YOUR_PROXY_USERNAME",  # Replace
            "password": "YOUR_PROXY_PASSWORD"  # Replace
        }
        self.headless = headless

    def extract_place_info(self, text: str, element) -> Dict:
        """Extract place information from text"""
        place_info = {'title': 'N/A', 'address': 'N/A', 'rating': 'N/A', 'reviews': 'N/A'}
        try:
            # Get place name from aria-label
            aria_label = element.get_attribute('aria-label')
            if aria_label:
                place_info['title'] = aria_label.strip()
            # Extract rating: "4.9(1,017)" format – works with both . and , decimals
            rating_match = re.search(r'(\d+[.,]\d+)\s*\(\d+', text)
            if rating_match:
                rating = rating_match.group(1).replace(',', '.')
                place_info['rating'] = f"{rating}/5"
            # Extract review count: "(1,017)" format – handles different number formats
            review_match = re.search(r'\((\d+(?:[.,]\d+)*)\)', text)
            if review_match:
                place_info['reviews'] = review_match.group(1)
            # Extract address: universal patterns for any business type
            address_patterns = [
                # Numbered addresses: "4 Strutton Ground", "123 Main Street"
                r'[·•]\s*(\d+\s+[A-Za-z][^·•\n]{3,35})',
                # Any category followed by separator and address: "Category · Address"
                r'[A-Za-z][^·•\n]*[·•]\s*([^·•\n]{4,50})',
                # Common address keywords (universal)
                r'[·•]\s*([^·•\n]*(?:Street|St|Road|Rd|Lane|Ln|Avenue|Ave|Boulevard|Blvd|Drive|Dr|Way|Place|Pl|Court|Ct|Circle|Cir|Square|Sq|Market|Centre|Center|Unit|Floor|Ground|Close|Crescent|Gardens|Park|Terrace|Row|Hill|Bridge|Station|Mall|Plaza|Building|Tower|House|Hall)\s*[^·•\n]*)',
                # Addresses with postal codes or area codes
                r'[·•]\s*([^·•\n]*\b(?:[A-Z]{1,2}\d{1,2}[A-Z]?\s*\d[A-Z]{2}|\d{5}(?:-\d{4})?)\b[^·•\n]*)',
                # Generic pattern for medium-length text segments (likely addresses)
                r'[·•]\s*([^·•\n]{8,45})',
                # Numbers + text (often addresses)
                r'[·•]\s*(\d+[^·•\n]{4,40})'
            ]
            for pattern in address_patterns:
                match = re.search(pattern, text, re.IGNORECASE)
                if match:
                    address = match.group(1).strip()
                    # Clean up the address – remove everything after opening hours/status indicators
                    address = re.split(r'\s+(?:Open|Closed|⋅|Closes?|Opens?)\s', address, flags=re.IGNORECASE)[0]
                    address = re.split(r'\s+(?:\d{1,2}:\d{2}\s*[AP]M|\d{1,2}[AP]M)', address, flags=re.IGNORECASE)[0]
                    # Remove business descriptions – stop at common business descriptor words
                    business_descriptors = [
                        r'\s+(?:No-frills|Traditional|Modern|Upscale|Casual|Fine|Fast|Quick|Family|Authentic|Popular|Local|Trendy|Cozy|Cosy|Elegant|Contemporary|Vibrant|Lively|Bustling|Quiet|Relaxed|Intimate|Spacious|Compact|Small|Large|Friendly|Welcoming)\s+(?:cafe|restaurant|bar|shop|store|market|deli|bistro|eatery|venue|place|spot|counter|service|dining)',
                        r'\s+(?:with|offering|serving|specializing|featuring|known for)\s+.*',
                        r'\s+(?:cafe|restaurant|bar|shop|store|market|deli|bistro|eatery|venue|establishment)\s+with\s+.*',
                        r'\s+(?:Specializes?|Serves?|Offers?|Features?|Known for)\s+.*',
                        r'\s+(?:menu of|selection of|variety of|range of)\s+.*',
                        r'\s+(?:counter-serve|counter serve|takeaway|take-away|dine-in|sit-down)\s+.*',
                        r'\s+(?:for|serving)\s+(?:vegan|vegetarian|halal|kosher|organic|fresh|homemade|traditional)\s+.*'
                    ]
                    for descriptor_pattern in business_descriptors:
                        address = re.split(descriptor_pattern, address, flags=re.IGNORECASE)[0]
                    # Remove email addresses from the address string
                    address = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b[,\s]*', '', address)
                    # Remove phone numbers from the address string (various formats)
                    phone_patterns = [
                        r'\+\d{1,4}[\s\-\.]?\d{1,4}[\s\-\.]?\d{1,4}[\s\-\.]?\d{1,9}',  # International: +970 8 282 9468, +1-555-123-4567
                        r'\(\d{3}\)[\s\-\.]?\d{3}[\s\-\.]?\d{4}',  # US format: (555) 123-4567
                        r'\b\d{3}[\s\-\.]\d{3}[\s\-\.]\d{4}\b',  # US format: 555-123-4567, 555.123.4567
                        r'\b\d{3}\s\d{4}\s\d{4}\b',  # UK format: 020 7123 4567
                        r'\b\d{4}\s\d{3}\s\d{3}\b',  # Alternative format: 1234 567 890
                        r'\b\d{2}\s\d{4}\s\d{4}\b',  # Another format: 02 1234 5678
                        r'\b\d{10,15}\b'  # Simple 10-15 digit sequences
                    ]
                    for phone_pattern in phone_patterns:
                        address = re.sub(phone_pattern, '', address)
                    # Remove separators and symbols
                    address = re.sub(r'^[·•♿\s]+|[·•♿\s]+$', '', address)
                    address = re.sub(r'\s+', ' ', address)  # Normalize whitespace
                    address = address.strip()  # Remove leading/trailing whitespace
                    # Additional cleanup – stop at standalone descriptive words
                    stop_words = [
                        'No-frills', 'Traditional', 'Modern', 'Upscale', 'Casual', 'Fine', 'Fast', 'Quick',
                        'Family', 'Authentic', 'Popular', 'Local', 'Trendy', 'Cozy', 'Cosy', 'Elegant', 'Contemporary',
                        'Vibrant', 'Lively', 'Bustling', 'Quiet', 'Relaxed', 'Intimate', 'Spacious', 'Compact',
                        'Small', 'Large', 'Friendly', 'Welcoming', 'Specializing', 'Featuring', 'Serving', 'Offering', 'Known',
                        'counter-serve', 'counter', 'takeaway', 'take-away', 'dine-in', 'sit-down'
                    ]
                    for stop_word in stop_words:
                        if stop_word in address:
                            address = address.split(stop_word)[0].strip()
                            break
                    # Ensure it's a valid address (not service options, hours, prices, emails, phone numbers, etc.)
                    if (len(address) > 3 and
                            not re.search(r'@[A-Za-z0-9.-]+\.[A-Za-z]{2,}', address) and  # No email addresses
                            not re.search(r'\+\d{1,4}[\s\-\.]?\d{1,4}[\s\-\.]?\d{1,4}[\s\-\.]?\d{1,9}', address) and  # No international phone numbers
                            not re.search(r'\b\d{10,15}\b', address) and  # No long digit sequences (phone numbers)
                            not re.match(r'^(Open|Closed|€\d+|\$\d+|£\d+|\d+:\d+|Mon|Tue|Wed|Thu|Fri|Sat|Sun)', address, re.IGNORECASE) and
                            address.lower().strip() not in [
                                'takeaway', 'delivery', 'dine-in', 'pickup', 'drive-through', 'no-contact delivery',
                                'open', 'closed', 'temporarily closed', 'permanently closed',
                                'accepts credit cards', 'cash only', 'wheelchair accessible',
                                'free wifi', 'parking available', 'reservations recommended'
                            ]):
                        place_info['address'] = address
                        break
            # Fallback: get title from first line if aria-label failed
            if place_info['title'] == 'N/A':
                first_line = text.split('\n')[0].strip()
                if first_line and len(first_line) > 2:
                    place_info['title'] = first_line
        except Exception:
            pass
        return place_info

    def scrape_google_maps(self, search_query: str = "restaurants in london", max_scrolls: int = 5, domain: str = "com") -> List[Dict]:
        """Scrape Google Maps places - supports different domains and languages"""
        places = []
        try:
            with sync_playwright() as playwright:
                browser = playwright.chromium.launch(
                    headless=self.headless,
                    proxy=self.proxy_config
                )
                context = browser.new_context(
                    viewport={'width': 1280, 'height': 800},
                    user_agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36'
                )
                page = context.new_page()
                # Navigate to Google Maps with specified domain
                url = f"https://www.google.{domain}/maps/search/{search_query.replace(' ', '+')}"
                page.goto(url, timeout=30000)
                page.wait_for_timeout(3000)
                # Handle cookies - try specific XPath first, then fallbacks
                cookie_selectors = [
                    '/html/body/div/div[2]/div[1]/div[3]/form[2]/input[14]',  # Current working XPath
                    '//*[@id="yDmH0d"]/c-wiz/div/div/div/div[2]/div[1]/div[3]/div[1]/div[1]/form[2]/div/div/button',
                    'button:has-text("Accept")', 'button:has-text("I agree")', 'button:has-text("Aceitar")',
                    'button:has-text("Akzeptieren")', 'button:has-text("Accepter")', 'button:has-text("Accetta")',
                    'button:has-text("Aceptar")', 'input[value*="Accept"]', 'input[value*="agree"]'
                ]
                for selector in cookie_selectors:
                    try:
                        element = page.locator(selector).first
                        if element.is_visible(timeout=3000):
                            element.click()
                            page.wait_for_timeout(1000)
                            break
                    except Exception:
                        continue
                page.wait_for_timeout(5000)
                page.screenshot(path='./screenshot.png')
                # Scroll to load more results
                try:
                    panel = page.locator('//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[1]/div[1]')
                    for i in range(max_scrolls):
                        panel.press('PageDown')
                        page.wait_for_timeout(2000)
                except Exception:
                    pass
                # Get all place elements
                selectors = [
                    '//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[1]/div[1]/div/div/a',
                    'a[data-value="Directions"]',
                    'div[role="article"] a'
                ]
                place_elements = []
                for selector in selectors:
                    elements = page.locator(selector).all()
                    if len(elements) > 1:
                        place_elements = elements
                        break
                # Extract data from each place
                for element in place_elements:
                    try:
                        text = element.inner_text()
                        parent_text = element.locator('..').inner_text() if element.locator('..') else ""
                        combined_text = f"{text}\n{parent_text}"
                        place_info = self.extract_place_info(combined_text, element)
                        if place_info['title'] != 'N/A':
                            places.append(place_info)
                    except Exception:
                        continue
                browser.close()
        except Exception as e:
            print(f"Error: {e}")
        return places

    def save_to_csv(self, places: List[Dict], filename: str = './places.csv'):
        """Save to CSV"""
        try:
            with open(filename, 'w', newline='', encoding='utf-8') as f:
                writer = csv.writer(f)
                writer.writerow(['Title', 'Address', 'Rating', 'Reviews'])
                for place in places:
                    writer.writerow([place['title'], place['address'], place['rating'], place['reviews']])
            print(f"Data saved to {filename}")
        except Exception as e:
            print(f"Failed to save CSV: {e}")


def main():
    scraper = GoogleMapsScraper(headless=False)
    # Example: Use for any type of business in different countries
    # places = scraper.scrape_google_maps("pharmacies in Berlin", domain="de")
    # places = scraper.scrape_google_maps("gyms in London", domain="co.uk")
    # places = scraper.scrape_google_maps("auto repair shops in New York", domain="com")
    # places = scraper.scrape_google_maps("supermarkets in Rome", domain="it")
    places = scraper.scrape_google_maps("falafel in London", max_scrolls=5, domain="com")
    print(f"\nFound {len(places)} places:")
    for i, place in enumerate(places, 1):
        print(f"\n{i}. {place['title']}")
        print(f"   Address: {place['address']}")
        print(f"   Rating: {place['rating']}")
        print(f"   Reviews: {place['reviews']}")
    if places:
        scraper.save_to_csv(places)


if __name__ == "__main__":
    main()

After running this script, the terminal will display the number of places found along with each place's title, address, rating, and review count. It will also confirm that the data has been saved to a CSV file.

You’ve now scraped Google Maps for falafel in London, but you can easily adapt this script to any other target of interest in any other location.

How to use our ready-made Google Maps scraper

Another option for scraping Google Maps is to use our Web Scraping API with the Advanced plan or free trial, both of which include ready-made scrapers designed specifically for this target. This approach removes the need to build custom code; however, the results are returned in HTML, so you'll probably want to parse them for better readability. Here's how to get started:

  1. Log in or create an account on our dashboard.
  2. On the left panel, click Scraping APIs and Pricing.
  3. Choose the Advanced plan or claim a 7-day free trial to test our service.
  4. In the Scraper tab, set the target to be Google Maps.
  5. Enter your query and configure parameters such as location, language, website domain, device type, browser type, pagination, JavaScript rendering, and more.
  6. Click Send Request or click the three dots to schedule your task.
  7. Copy or export the result in HTML format.

Using this ready-made scraper for Google Maps simplifies the data-gathering process, making it a convenient choice for those who prefer a no-code solution.

To sum up

Congratulations on learning how to scrape Google Maps using Python and residential proxies! The key takeaway is to tailor your script to the specific needs of your target, and stay alert for any changes Google may make that could affect your scraping logic. With reliable proxies on your side, you'll be well-equipped to gather business data (or even that elusive falafel spot). Prefer to skip the coding? Decodo’s ready-made scraper is here to help.

About the author

Dominykas Niaura

Technical Copywriter

Dominykas brings a unique blend of philosophical insight and technical expertise to his writing. Starting his career as a film critic and music industry copywriter, he's now an expert in making complex proxy and web scraping concepts accessible to everyone.


Connect with Dominykas via LinkedIn

All information on Decodo Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Decodo Blog or any third-party websites that may be linked therein.

Frequently asked questions

Is it possible to scrape data from Google Maps?

It's possible to scrape data from Google Maps using various tools and techniques to extract information like place names, reviews, addresses, and contact information.

What is the best scraping tool for Google Maps?

The best tool for scraping Google Maps depends on your specific needs. Some use the official Google Maps API, while others prefer dedicated third-party APIs. Another option is creating custom code tailored to your specific data collection goals.

How do I extract data from Google Maps?

Extracting data from Google Maps involves using a scraping API, where you can request data on locations, or a custom scraping code with proxies to retrieve the information of interest and bypass possible blocking.

How do I scrape Google Maps in Python?

To scrape Google Maps in Python, you can use Playwright to automate browser interactions and extract the raw HTML or text directly from page elements. Playwright handles tasks like navigation, scrolling, and clicking, making it well-suited for dynamic sites such as Google Maps. Since scraping at scale can trigger rate limits or IP blocks, it’s important to use proxies to keep your data collection reliable.


Alternatively, you can use a scraping tool like our Web Scraping API, which offers a ready-made scraper for Google Maps.

What is the best tool to scrape Google Maps?

While the official Google Maps API is the most direct way to access Google Maps data, the choice of the best tool may depend on the complexity of the data you’re looking to scrape and your coding skills. However, to avoid blocks, you’ll need proxies.

What are ready-made scrapers?

Ready-made scrapers are pre-configured tools within our Web Scraping API, designed for easy and quick data collection. They eliminate the need for extensive technical knowledge, custom scraper development, and proxy management, making them ideal for users seeking a low/no-code solution. By using ready-made scrapers, you can access and structure large data sets efficiently.

