How to Scrape Bing Search with Python

Web scraping is the art of extracting data from websites, and it's become a go-to tool for developers, data analysts, and startup teams. While Google gets most of the spotlight, scraping Bing search results can be a smart move, especially for regional insights or less saturated SERPs. In this guide, we'll show you how to scrape Bing using Python with tools like Requests, Beautiful Soup, and Playwright.

Zilvinas Tamulis

May 16, 2025

12 min read

Why scrape Bing search?

While Google often takes center stage, Bing has its own set of perks that make it worth a look, especially for those digging into unique data. Scraping Bing search results can unlock insights you might miss elsewhere, thanks to its expanded content variety, cleaner search results, and stronger regional relevance.

Bing's algorithm often surfaces pages that are different from those of Google, which can be especially useful when you're scouting competitors or trying to find less mainstream content. For example, researching niche industry blogs might uncover gems on Bing that never crack Google's top 10.

Because fewer companies actively target Bing for SEO, its results also tend to be less influenced by aggressive keyword-stuffing or content farms. This means you're more likely to get genuinely informative pages rather than a sea of clickbait and affiliate-heavy articles.

The final benefit is that Bing is the default search engine on Microsoft devices, giving it a stronger foothold in specific regions and enterprise environments. If you're analyzing user behavior in the United States or among corporate audiences, Bing might actually give you a clearer picture than Google.

In short, Bing isn't just the "other" search engine – it's a valuable data source with unique advantages, especially when you're looking for fresh perspectives, cleaner results, or region-specific insights.

Use cases for scraping Bing search results

Now that we've covered why Bing is worth scraping, let's look at how you can actually put that data to work.

For SEO geeks out there, Bing provides a fresh angle for understanding how your site – or your client's – appears in search results. You can monitor keyword rankings, track changes over time, and spot pages that perform well on Bing but not Google.

Bing search results can also help uncover audience preferences, trending topics, and content gaps. For example, a startup preparing to launch a product can analyze Bing SERPs to see what questions users are asking and which solutions currently dominate the space.

Want to know who's gaining traction in your specific market? Scraping Bing makes it easy to track which competitors are ranking for particular keywords or getting featured in results. This helps businesses fine-tune their messaging or identify opportunities others have missed.

In a nutshell, scraping Bing isn't just about gathering simple data – it’s about gaining an edge in how you optimize, strategize, and expand your business.

Tools and methods to scrape Bing search

So you've got a solid reason and a clear use case – now it's time to talk tools. There are several ways to scrape Bing search results, depending on your goals, budget, and level of technical know-how. Here's a list of the most common methods:

  • Manual scraping. Copying and pasting search results into a spreadsheet might work for one-off research, but it quickly becomes unsustainable. It's slow, error-prone, and definitely not what you'd call developer-friendly. Excellent for a demo, terrible for data at scale.
  • Python (Requests + Beautiful Soup). For simple HTML pages, Python's Requests and Beautiful Soup libraries are lightweight and practical. This approach is perfect for quick scripts where JavaScript rendering isn't a factor, like grabbing titles, URLs, and snippets from basic result pages.
  • Playwright. Playwright lets you automate entire browser sessions, making it ideal for scraping JavaScript-heavy or dynamic content. It's great for more advanced use cases, such as extracting rich snippets or simulating real user behavior across pages.
  • APIs and third-party scrapers. If you want to save time (and a few headaches), using a dedicated scraping API is a wise choice. Decodo's Web Scraping API, for example, handles everything from rotating proxies to parsing HTML – so you can focus on the data, not the infrastructure.

When scraping Bing at scale or on a frequent basis, proxy usage is essential to avoid being blocked. Proxies mask your IP address and help distribute requests across multiple locations, making it harder for Bing to detect scraping activity. Rotating residential or datacenter proxies, like those offered by Decodo, can dramatically increase success rates and keep your scraping smooth and uninterrupted.
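To give a sense of what this looks like in code, here's a minimal sketch of rotating requests across a small pool of proxy endpoints with the Requests library. The endpoint URLs below are placeholders, and if you use a rotating gateway (like the one shown later in this guide), the provider handles rotation for you:

import random
import requests

# Placeholder endpoints - replace with the proxies from your provider's dashboard
proxy_pool = [
    "http://user:pass@proxy1.example.com:7000",
    "http://user:pass@proxy2.example.com:7000",
    "http://user:pass@proxy3.example.com:7000",
]

def fetch_through_random_proxy(url):
    # Pick a different proxy for each request to spread your requests across IPs
    proxy = random.choice(proxy_pool)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

response = fetch_through_random_proxy("https://www.bing.com/search?q=samsung")
print(response.status_code)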

Regardless of whether you're building your scraper with Python or leaning on a third-party API, there's no shortage of ways to extract Bing search data. Just be sure to pick the method that matches your scale – and don't skip the proxies unless you like 403 errors.

Reliable residential proxies for scraping

Kick off your 3-day free trial and scrape Bing without hitting roadblocks or rate limits.

How to scrape Bing search results using Python

Setting up your environment

Now that you know the why and the how, it's time to set up your Python environment for scraping Bing search results. We'll start with Requests and Beautiful Soup, then move on to Playwright for more dynamic pages. Here's how to get everything ready:

  1. Install Python. First, make sure Python 3.7 or later is installed on your machine. You can download it from the official Python website. To check if it's installed, run:
python --version

2. Create and activate a virtual environment (recommended). It's a good practice to isolate your scraping project using a virtual environment to avoid clutter and library conflicts:

python -m venv bing-scraper-env
source bing-scraper-env/bin/activate # On Windows use: bing-scraper-env\Scripts\activate

3. Install the required libraries. You'll need a few Python packages to get started:

pip install requests beautifulsoup4 playwright

4. Install the browser binaries. After installing Playwright, run the following command to install the necessary browser binaries:

playwright install

5. Test your setup. Here's a simple script to verify everything is working. This script uses Requests and Beautiful Soup to fetch and parse Bing's homepage:

import requests
from bs4 import BeautifulSoup

# Fetch Bing's homepage and parse the HTML
response = requests.get("https://www.bing.com")
soup = BeautifulSoup(response.text, "html.parser")

# Print the page title to confirm everything works
title = soup.title.string
print("Bing page title:", title)

If you see a title like "Search - Microsoft Bing" printed in your terminal, congrats – you're all set to start scraping! Next, we'll dive into how to actually extract search results.

Basic Bing search scraping with Python

Before we dive into browser automation, let’s start with the basics – making an HTTP request and parsing the HTML response. This method is excellent for simple scraping tasks where JavaScript isn't heavily involved. We'll use Python's Requests library to fetch the page and Beautiful Soup to extract the data.

Note that Bing may return different HTML or block the request entirely if it detects automated access, so this approach works best for small-scale tests or when paired with rotating user-agent headers and proxies.
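For example, a simple way to vary the user-agent between requests is to pick one at random from a small list. This is just a sketch; the strings below are ordinary desktop browser user agents and can be swapped for any set you prefer:

import random
import requests

# A small pool of common desktop user-agent strings to rotate through
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
]

# Send each request with a randomly chosen user-agent header
headers = {"User-Agent": random.choice(USER_AGENTS)}
response = requests.get("https://www.bing.com/search?q=samsung", headers=headers, timeout=10)
print(response.status_code)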

For proxy details and credentials, visit the Decodo dashboard, purchase a plan that suits your needs, and get the username, password, and endpoint information.

import requests
from bs4 import BeautifulSoup

# Replace with your actual proxy credentials
proxy_user = "user"
proxy_pass = "pass"
proxy_host = "gate.decodo.com"
proxy_port = "7000"

# Build proxy dictionary
proxies = {
    "http": f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}",
    "https": f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}",
}

# User-Agent to simulate a real browser
headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/120.0.0.0 Safari/537.36"
    )
}

# Search query
query = "samsung"
url = f"https://www.bing.com/search?q={query.replace(' ', '+')}"

# Send request through the proxy with the browser-like headers
response = requests.get(url, headers=headers, proxies=proxies, timeout=10)

# Parse HTML
soup = BeautifulSoup(response.text, "html.parser")
results = soup.find_all("li", class_="b_algo")

# Extract all results
for result in results:
    title_tag = result.find("h2")
    url_tag = title_tag.find("a") if title_tag else None
    desc_tag = result.find("p")

    title = title_tag.get_text(strip=True) if title_tag else "No title"
    link = url_tag["href"] if url_tag else "No URL"
    description = desc_tag.get_text(strip=True) if desc_tag else "No description"

    print(f"Title: {title}")
    print(f"URL: {link}")
    print(f"Description: {description}")
    print("-" * 80)

The script above does the following:

  • Defines the prerequisites. Proxy credentials, user-agent header, query, and target URL are all written here and are later used in the script.
  • Makes a request. The script sends an HTTP GET request through a proxy server, together with a browser-like user-agent header. This helps the request blend in with regular traffic and lets you repeat it multiple times without being blocked as quickly.
  • Parses the response. Once Requests grabs the HTML page, Beautiful Soup steps in to analyze it and parse the required information. Here, it finds all <li> elements with the class "b_algo", which is the container where each search result is located.
  • Finds the title, URL, and description. Once all containers have been found, the script then loops through them and finds the title (h2), URL (href), and description (p).
  • Prints the results. While still inside the loop, results are printed, then the process is repeated until all results from the search result page are found.
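
The script above only prints each result to the terminal. If you'd rather keep the data for later analysis, one option is to collect the results into a list of dictionaries and write them to a CSV file, mirroring what the Playwright example later in this guide does. A minimal sketch, continuing from the parsed results above:

import csv

# Collect results into a list of dictionaries instead of printing them
rows = []
for result in results:
    title_tag = result.find("h2")
    url_tag = title_tag.find("a") if title_tag else None
    desc_tag = result.find("p")
    rows.append({
        "Title": title_tag.get_text(strip=True) if title_tag else "No title",
        "URL": url_tag["href"] if url_tag else "No URL",
        "Description": desc_tag.get_text(strip=True) if desc_tag else "No description",
    })

# Write everything to a CSV file
with open("bing_results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Title", "URL", "Description"])
    writer.writeheader()
    writer.writerows(rows)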

Important: When using proxies, your IP location can affect Bing's language and locale settings. Unlike Google, Bing may return no results if your query is in English but your proxy appears to be located in a region like France or Germany. To avoid this, either:

  • Use universal search terms (e.g., brand names), or
  • Manually set the language and region in your search request by adding the setlang and cc (country code) parameters:
url = (
    f"https://www.bing.com/search?q={query.replace(' ', '+')}"
    f"&setlang=en&cc=US"
)

Advanced techniques and common challenges when scraping Bing

As you expand your Bing scraping setup, you'll quickly hit limitations with simple HTTP requests and HTML parsing. Bing's search engine results page includes dynamic elements, paginated content, and bot-detection mechanisms that make scraping with just Requests and Beautiful Soup unreliable at scale. This is where Playwright comes in, offering a browser automation layer that behaves much closer to a real user.

Here’s why Playwright is a better choice for scraping Bing:

  1. JavaScript rendering. Bing uses JavaScript to load certain rich elements (such as knowledge panels and news cards). Playwright executes JS like a real browser, letting you scrape the complete, rendered page, not just the raw HTML.
  2. Pagination control. Unlike basic scraping, where pagination can be inconsistent or fail altogether, Playwright allows you to click "Next" buttons and dynamically load additional search result pages with full browser context.
  3. Simulated human behavior. With support for keyboard input, scrolling, mouse movement, and delays, Playwright makes your bot mimic a real user, helping you avoid detection and bans.
  4. Better CAPTCHA avoidance. While not foolproof, Playwright can bypass some of Bing's light anti-bot measures simply by acting more like a browser than a bot.
  5. Screenshot and debugging capabilities. Playwright can capture screenshots or even record videos of the scraping session, making it easier to debug changes in the DOM or scraping failures.

Here's an example Playwright script that navigates to https://bing.com/, enters the search query, and scrapes the first 3 result pages. It also takes a screenshot of each page, so you can see what the pages looked like during the process and if any issues were encountered:

import asyncio
import csv
from playwright.async_api import async_playwright

# Proxy configuration
PROXY_SERVER = "http://gate.decodo.com:7000"
proxy_config = {
    "server": PROXY_SERVER,
    "username": "user",  # Replace with your actual proxy details
    "password": "pass"
}

async def scrape_bing(query="Samsung", pages=3):
    results = []
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False, proxy=proxy_config)
        context = await browser.new_context()
        page = await context.new_page()

        # Go to Bing and search
        await page.goto("https://www.bing.com/", wait_until="networkidle")

        # Focus the search bar by clicking
        await page.wait_for_selector("#sb_form_q", timeout=10000)
        await page.click("#sb_form_q")

        # Simulate real typing
        await page.keyboard.type(query, delay=100)

        # Wait a moment for suggestions/events to settle
        await page.wait_for_timeout(500)

        # Press Enter twice
        await page.keyboard.press("Enter")
        await page.wait_for_timeout(200)
        await page.keyboard.press("Enter")
        await page.wait_for_selector("li.b_algo", timeout=30000)

        for page_number in range(1, pages + 1):
            await page.wait_for_selector("li.b_algo", timeout=10000)

            # Take a full-page screenshot of each result page for debugging
            await page.screenshot(path=f"bing_page_{page_number}.png", full_page=True)

            # Extract the title, URL, and description from each result container
            elements = await page.query_selector_all("li.b_algo")
            for el in elements:
                title_el = await el.query_selector("h2")
                link_el = await title_el.query_selector("a") if title_el else None
                desc_el = await el.query_selector("p")

                title = await title_el.inner_text() if title_el else "No title"
                url = await link_el.get_attribute("href") if link_el else "No URL"
                desc = await desc_el.inner_text() if desc_el else "No description"

                results.append({
                    "Title": title,
                    "URL": url,
                    "Description": desc
                })

            # Move to the next results page, if there is one
            next_button = await page.query_selector("a.sb_pagN")
            if next_button and page_number < pages:
                await next_button.click()
                await page.wait_for_timeout(3000)
            else:
                break

        await browser.close()

    # Save to CSV
    with open("bing_results.csv", "w", newline='', encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["Title", "URL", "Description"])
        writer.writeheader()
        writer.writerows(results)

    print(f"Scraped {len(results)} results across {pages} page(s). Saved to 'bing_results.csv'.")

asyncio.run(scrape_bing())

Using APIs for scraping Bing

Let’s face it – while the Playwright script we just built gets the job done, it’s long, a bit finicky, and definitely not the most beginner-friendly. Setting up browser automation, handling page loads, navigating pagination, rotating proxies, and praying Bing doesn't throw a CAPTCHA mid-run... It's a lot. Honestly, even experienced developers don't love maintaining scraping scripts that can break overnight with a minor UI change.

That’s where scraping APIs come in. Instead of juggling libraries and debugging selectors, scraping APIs handle everything for you – HTTP requests, JavaScript rendering, proxy rotation, CAPTCHA bypassing, and more. They're convenient when you need scale, reliability, and fast iteration.

If you're looking for a rock-solid solution, Decodo's Web Scraping API is a great choice. It's built for performance, includes premium proxy rotation and built-in error handling, and works out of the box with popular targets like Bing. Here's what a simple API request looks like in Python:

import requests

url = "https://scraper-api.decodo.com/v2/scrape"

payload = {
    "target": "bing_search",
    "query": "samsung",
    "page_from": "1",
    "num_pages": "10",
    "parse": True
}

headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": "Basic [your basic auth token]"
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)

You won't believe it, but this short API call does almost exactly what the previous, much longer Playwright script does. It even comes with a user-friendly web UI where you can easily configure and schedule requests, then export the results in JSON or table format with just a few clicks.
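
If you'd rather work with the structured output directly in Python, you can load the response body as JSON instead of printing raw text. A minimal sketch, continuing from the request above (the exact response structure depends on your parameters, so inspect it before relying on specific keys):

import json

# Load the response body as JSON
data = response.json()

# Pretty-print it to inspect the structure before picking out fields
print(json.dumps(data, indent=2))

# Or save it for later processing
with open("bing_api_results.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)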

Final notes

Scraping Bing search results with Python opens up a world of opportunities – from uncovering regional insights to tracking competitors in a less saturated search space. With tools like Python's Requests, Beautiful Soup, Playwright, and Decodo's Web Scraping API, you've got plenty of options to fit your needs and scale. Just remember: if you're scraping frequently or at volume, don't forget proxies – they're your best friend on this adventure. Try out the methods covered here and see what kind of insights Bing has that Google is hiding.

About the author

Zilvinas Tamulis

Technical Copywriter

A technical writer with over 4 years of experience, Žilvinas blends his studies in Multimedia & Computer Design with practical expertise in creating user manuals, guides, and technical documentation. His work includes developing web projects used by hundreds daily, drawing from hands-on experience with JavaScript, PHP, and Python.


Connect with Žilvinas via LinkedIn

Frequently asked questions

Can I use Python to scrape Bing search results?

Yes, you can use Python with libraries like Requests and Beautiful Soup for basic scraping, or browser automation tools like Playwright (covered in this guide) or Selenium for JavaScript-heavy pages.

How do I avoid getting blocked while scraping Bing?

Rotate proxies, use realistic headers, mimic real user behavior with delays, and switch up your user agents. Decodo's Web Scraping API handles this out of the box, so you don't have to play cat and mouse with Bing's defenses.

What's the best proxy for scraping Bing search?

Residential proxies are harder to detect but are more expensive; datacenter proxies are faster but more easily blocked. For consistent scraping, rotating IPs regularly is essential, especially when extracting multiple Bing search engine results at scale.

Does Bing have an API for search results?

Yes, Microsoft offers the Bing Web Search API, which returns structured JSON data. It’s a reliable alternative if you want official access and don’t mind the usage limits and cost.

Why would someone scrape Bing instead of Google?

Bing is helpful for region-specific results or when diversifying data sources beyond Google. It's a smart move for niche research and lightweight projects.

What data can I extract from Bing search results?

You can extract page titles, URLs, meta descriptions/snippets, and sometimes rich results like site links. With Decodo's Web Scraping API, you can parse and structure this data effortlessly.
