Scrapy is an open-source web crawling framework for Python that extracts data from websites, processes it, and stores it. Pairing it with a proxy middleware lets Scrapy bypass IP-based rate limiting and access geographically restricted content, improving the efficiency and reach of web scraping and crawling tasks.
Asynchronous scraping
Ensure lightning-fast data collection by allowing multiple requests to be processed simultaneously.
Built-in selectors
Streamline data extraction by quickly and accurately gathering the information you need from web pages (see the example after this list).
Middleware support
Integrate proxies in just minutes thanks to simple middleware support.
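As a quick illustration of these built-in selectors, here's a minimal parse callback sketch; the CSS paths are placeholder assumptions rather than selectors from any real page:

# A spider callback using Scrapy's built-in CSS selectors.
# The page structure (div.quote, span.text, a::attr(href)) is assumed.
def parse(self, response):
    for quote in response.css('div.quote'):
        yield {
            'text': quote.css('span.text::text').get(),
            'link': quote.css('a::attr(href)').get(),
        }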
Why residential proxies?
A residential proxy serves as a mediator, giving users an IP address from an authentic desktop or mobile device connected to a local network. Due to their origin, residential proxies are a perfect match for overcoming geo-restrictions, bypassing CAPTCHAs, managing multiple accounts, and conducting web testing.
Decodo offers top-notch residential proxies with an extensive pool of 115M+ IPs across 195+ locations. With an average response time under 0.6 seconds, a 99.86% success rate, and an affordable Pay As You Go entry point, Decodo is a great deal for hustlers and fast-growing companies.
Set up Decodo proxies with Scrapy
To install the Scrapy proxy middleware, you'll need to set up a Scrapy project first. Follow the official installation guide and documentation to create your project. Then, follow the instructions below to set up the middleware:
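In short, installing Scrapy and creating a fresh project comes down to two commands:

# Install Scrapy, then generate a new project skeleton
pip install scrapy
scrapy startproject yourprojectname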
Getting residential proxies
Log in to your Decodo dashboard, find residential proxies by choosing Residential under the Residential Proxies column on the left panel, and select a plan that suits your needs. Then, follow these steps:
Open the Proxy setup tab.
Configure the parameters: set your authentication method, location, session type, and protocol.
Select the number of proxy endpoints you want to generate (default – 10).
Copy the endpoints by clicking the Copy button.
Installation
Once you’ve obtained the endpoint information, you can install the middleware:
Open your terminal tool and navigate to the project folder using cd yourprojectname.
Download the proxy middleware using the following command, or get it from the GitHub repository:
curl https://raw.githubusercontent.com/Decodo/Scrapy-Middleware/master/decodo_auth.py > decodo_auth.py
Configuration
1. Open the settings.py file in your project folder.
2. Edit the file by adding the following properties at the bottom:
SMARTPROXY_USER = 'username' ## Decodo username
SMARTPROXY_PASSWORD = 'password' ## Decodo password
SMARTPROXY_ENDPOINT = 'gate.decodo.com' ## Endpoint you'd like to use
SMARTPROXY_PORT = '7000' ## Port of the endpoint you're using
3. Change yourprojectname in the middleware registration (see the sketch after this list) to the name of your project folder.
4. Enter the Decodo credentials and endpoint information you’ve received from the dashboard.
5. Your web crawlers will now route their requests through Decodo proxies.
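For the credentials above to take effect, the downloaded middleware also has to be registered in your project's DOWNLOADER_MIDDLEWARES setting so Scrapy routes requests through it. A minimal sketch, assuming the class inside decodo_auth.py is named ProxyMiddleware (open the file or check the GitHub repository to confirm the exact name):

DOWNLOADER_MIDDLEWARES = {
    'yourprojectname.decodo_auth.ProxyMiddleware': 100, ## Assumed class name - verify in decodo_auth.py
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}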
Configurations & Integrations
Easy Decodo proxy setup with popular applications and free third-party tools. Check out these guides to get started right away.
Country, state, city, ZIP code, and ASN-level targeting
Rotating and sticky session options
<0.6s avg. response time
99.86% success rate
99.99% uptime
Seamless integration with scraping tools and bots
24/7 tech support
14-day money-back option
SSL Secure Payment
Your information is protected by 256-bit SSL
What people are saying about us
We're thrilled to have the support of our 130K+ clients and the industry's best.
Clients
Awards
Industry experts
Attentive service
The professional expertise of the Decodo solution has significantly boosted our business growth while enhancing overall efficiency and effectiveness.
Novabeyond
Easy to get things done
Decodo provides great service with a simple setup and friendly support team.
RoiDynamic
A key to our work
Decodo enables us to develop and test applications in varied environments while supporting precise data collection for research and audience profiling.
Cybereg
Best Usability 2025
Awarded for the ease of use and fastest time to value for proxy and scraping solutions.
Best User Adoption 2025
Praised for the seamless onboarding experience and impactful engagement efforts.
Best Value 2025
Recognized for the 5th year in a row for top-tier proxy and scraping solutions.
A proxy is an intermediary between your device and the internet: it forwards your requests to their destination while masking your IP address.
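In Scrapy, for instance, a single request can be sent through a proxy by setting the proxy key in the request's meta; the endpoint and credentials below are illustrative placeholders:

# Inside a spider callback: route one request through a proxy.
# The username, password, and endpoint are placeholder values.
yield scrapy.Request(
    url='https://example.com',
    meta={'proxy': 'http://username:password@gate.decodo.com:7000'},
    callback=self.parse,
)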
Residential Proxies
from $1.5/GB
Real, physical device IPs that provide a genuine online identity and enhance your anonymity online.
Pagination is the system websites use to split large datasets across multiple pages for faster loading and better navigation. In web scraping, handling pagination is essential to capture complete datasets rather than just the first page of results. This guide explains what pagination is, the challenges it creates, and how to handle it efficiently with Python.
Ever wondered how to extract valuable business data directly from Google Maps? Whether you're building a lead list, analyzing local markets, or researching competitors, scraping Google Maps can be a goldmine of insights. In this guide, you’ll learn how to automate the process step by step using Python – or skip the coding altogether with Decodo’s plug-and-play scraper.
At first glance, residential and datacenter proxies may seem the same. Both types act as intermediaries that hide your IP address, allowing you to access restricted websites and geo-blocked content. However, there are some important differences between residential and datacenter proxies that you should know before making a decision. We’re happy to walk you through the differences so you can choose what's right for you.
In this article, we'll explore the different kinds of Python errors and exceptions, what causes them, and how to solve them. No more headaches and cursing your code until it gets scared and starts working: master the language of Python to understand precisely what it wants from you.
Artificial intelligence is transforming various fields, ushering in new possibilities for automation and efficiency. As one of the leading AI tools, ChatGPT can be especially helpful in the realm of data collection, where it serves as a powerful ally in extracting and parsing information. So, in this blog post, we provide a step-by-step guide to using ChatGPT for web scraping. Additionally, we explore the limitations of using ChatGPT for this purpose and offer an alternative method for scraping the web.
What is Scrapy?
Scrapy is an open-source web crawling framework for Python designed to extract, process, and store data from websites.
What is Scrapy used for?
Scrapy is used to create web crawlers (spiders) that define how to navigate and scrape web pages, making it a powerful tool for web scraping and data mining tasks.
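As a minimal sketch of such a spider, the example below collects product names and prices and follows pagination; the URL and CSS selectors are placeholder assumptions, not taken from any real site:

import scrapy

class ExampleSpider(scrapy.Spider):
    # A spider defines where crawling starts and how each page is parsed.
    name = 'example'
    start_urls = ['https://example.com/products'] # Placeholder URL

    def parse(self, response):
        # Yield one item per product card (placeholder selectors).
        for product in response.css('div.product'):
            yield {
                'name': product.css('h2::text').get(),
                'price': product.css('span.price::text').get(),
            }
        # Follow the next-page link, if any, and parse it the same way.
        next_page = response.css('a.next::attr(href)').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

Running scrapy crawl example -O products.json from the project folder would execute the spider and save the results to a JSON file.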
Be aware that any tools mentioned in this article belong to a third party, not Decodo. Therefore, Decodo will not be responsible for any of the services offered by the third party. Make sure to thoroughly review the third party's policies and practices, or do your due diligence, before using or accessing any of their services.
The Fastest Residential Proxies
Dive into a 115M+ ethically-sourced residential IP pool from 195+ locations worldwide.