nodriver Explained: How Undetected Chromedriver's Successor Actually Works
nodriver is a Python package for browser automation and web scraping built as the successor to undetected-chromedriver. It skips the usual WebDriver layer, talks to Chrome more directly than Selenium, and uses an async-first design. In this guide, you'll learn what nodriver is, how it works in Python, and where it fits for scraping JavaScript-heavy pages when basic browser automation starts showing its limits.
Lukas Mikelionis
Last updated: Apr 01, 2026
8 min read

TL;DR
- nodriver is a Python browser automation library built as the successor to Undetected Chromedriver.
- It doesn't use Selenium or a separate Chromedriver binary.
- It communicates with Chrome more directly through DevTools-style browser protocols.
- Less setup and an async-first workflow.
- It can work better than traditional browser automation on JavaScript-heavy sites with lighter bot checks.
- Headless mode can still be unreliable in some cases.
- Stronger anti-bot protections can still block it.
What is nodriver?
nodriver is a Python browser automation library for controlling Chrome without the traditional WebDriver layer. Instead, it communicates with the browser more directly through Chrome DevTools Protocol-style mechanisms. That makes nodriver behave more like a regular browser session.
nodriver is also fully asynchronous, so you can structure browser actions with Python's async and await patterns instead of the synchronous flow common in older automation tools.
In day-to-day use, that translates into 3 practical benefits:
- You don't need Selenium
- You don't need to keep a matching driver binary installed
- You can start each run with a fresh browser profile when you want a cleaner session state
That doesn't make nodriver invisible to anti-bot systems. But it does make it a strong option when you want lightweight, Python-native browser automation for JavaScript-heavy pages without the usual WebDriver overhead.
How does nodriver differ from other tools?
nodriver fits best when you work in Python, want less setup than Selenium, and need a lighter way to automate JavaScript-heavy pages.
nodriver vs. Selenium
Selenium is built around the W3C WebDriver standard, which means your code talks to a browser through a WebDriver implementation.
nodriver bypasses the traditional WebDriver layer, which reduces setup overhead and makes browser control feel closer to working with Chrome directly.
As a result, nodriver is often the leaner choice when your goal is simple Python-based scraping rather than broad automation. Selenium, however, is the more mature and standardized ecosystem. Its documentation is broader, its browser support is stronger, and it remains the safer choice when you need long-term stability, cross-browser consistency, or a widely adopted testing stack.
nodriver vs. undetected-chromedriver
undetected-chromedriver still lives inside the Selenium model and works by patching Chromedriver to reduce obvious automation fingerprints. nodriver moves away from that approach. It drops the traditional driver layer and uses a fully async design instead.
nodriver vs. Playwright
Playwright also uses direct browser protocols and modern control patterns instead of the classic WebDriver model. But it is much more polished around reliability features such as locators, auto-waiting, retryability, isolated browser contexts, and browser management.
That makes it the stronger default choice for robust automation and testing workflows.
Installing nodriver and setting up your environment
To run nodriver, you need Python 3.9+ and a local Chrome installation. Start in a virtual environment so the scraper stays isolated from the rest of your Python tooling.
Create the environment:
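Using Python's built-in venv module, with the nodriver_env name this guide references later:

```shell
python3 -m venv nodriver_env
```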
Then activate it with the command for your OS:
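For example:

```shell
# macOS / Linux
source nodriver_env/bin/activate

# Windows (PowerShell)
# nodriver_env\Scripts\Activate.ps1
```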
With the environment active, install nodriver:
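```shell
pip install nodriver
```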
Note: Don't name your script nodriver.py. Python may import your local file instead of the installed package, which causes an avoidable import conflict.
Keep the first test simple: start the browser, open a page, save a screenshot, and close the session.
If the script saves a screenshot without errors, your environment is working.
You may encounter a syntax error when running on Python 3.13 or 3.14. To fix it, follow these steps:
- Open …/nodriver_env/lib/python3.14/site-packages/nodriver/cdp/network.py
- Go to line 1365.
- You will see a comment that looks like #: JSON (±Inf) or contains a strange symbol.
- Delete that line or change the symbol to a standard + or -.
- Save and run your script.
Once that local setup works, the next step is preparing it for production-style scraping.
Proxy setup with Decodo (for production use)
nodriver can reduce some browser-level automation signals, but it doesn't solve IP-based rate limits, bans, or geo-restrictions on its own.
Residential proxies help with that because they use real ISP-assigned IPs, which generally makes them a better fit than datacenter IPs for demanding web data collection workflows.
Environment variables are the cleanest way to manage proxy settings during development. python-dotenv loads key-value pairs from a .env file, which makes it a practical choice for proxy credentials and target settings.
Install it with:
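```shell
pip install python-dotenv
```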
Then create a .env file like this in the root of the project:
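Something along these lines, with placeholder values (take the actual host, port, and credentials from your Decodo dashboard):

```
PROXY_HOST=your.proxy.host
PROXY_PORT=7000
PROXY_USER=your_username
PROXY_PASS=your_password
```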
A clean way to pass a proxy into nodriver is through browser_args:
Built for runs that don’t break halfway
As targets tighten controls, stable IPs matter more than ever. Decodo keeps sessions steady under load.
Build a simple nodriver scraper: Navigating pages and extracting elements
Once nodriver is installed, the basic scraping flow is straightforward:
- Start a browser session with await nodriver.start()
- Open a page with await browser.get(url)
- Use the returned tab object to find elements and pull data from the rendered DOM.
That is the core loop you'll repeat in almost every scraper:
Navigating to pages
Pass the target URL to await browser.get(url), and use the returned page object as your working surface.
Because nodriver is asynchronous, you can also open multiple pages and coordinate them concurrently instead of processing every URL in strict sequence.
Finding elements on a page
nodriver gives you a few overlapping ways to find elements. In practice, the simplest approach is to pick 1 style and stay consistent.
For CSS selectors, use page.select() and page.select_all(). And for text-based lookup, use page.find() and page.find_all().
Here is a simple example that extracts link text from a page:
This is enough to show the main idea. Select the elements, loop through them, and shape the output into a Python structure you can reuse later.
Extracting text and attributes
Text extraction in nodriver can feel slightly unusual if you are coming from another framework. In current usage patterns, element.text behaves like an attribute rather than an awaited method call, so do not assume it works like Playwright or Selenium.
Attribute extraction is rougher. element.attributes is not always exposed as a clean dictionary, so if you need something specific, such as an href, you may need to inspect the attribute list and extract the value manually.
A slightly more realistic example is collecting article titles and URLs from a listing page:
The workaround is clunky, but that is part of the point. nodriver can be effective, but its API still feels uneven in places. Knowing that early helps you work around the rough edges instead of assuming every convenience method will behave perfectly.
Handling dynamic content and JavaScript-rendered pages
Dynamic content is where simple scraping setups usually fail. Many modern sites do not include the data you want in the initial HTML response. They load it later through JavaScript, often after the page renders, after you scroll, or after you click something.
That is why plain HTTP-based scraping with Requests and Beautiful Soup 4 often falls short on modern targets. nodriver addresses that by controlling a real browser session, which means JavaScript runs, the page renders, and the DOM updates the way it would in a normal browser session.
Instead of parsing whatever came back from a single request, you wait for the page to reach the state you need, then extract the final rendered content.
The timing problem
A page may finish loading in the browser while the content you need is still missing because scripts are still fetching or rendering it. nodriver gives you 2 basic ways to handle that: pause for a fixed amount of time with await page.sleep(seconds), or wait for a specific element to appear with await page.wait_for("selector").
That works, but it's blunt. It can make your scraper slower than necessary, and it still fails when the page takes longer than expected.
A better option is to wait for the actual content you need:
That is usually the better strategy. It's faster when content appears quickly and more reliable when the page is inconsistent. In practice, wait_for() should be your default, while sleep() is better as a debugging tool or fallback when no stable page marker exists.
Lazy loading
Pages can load only the first batch of items, then fetch more when you scroll. If you scrape too early, you get only a fraction of the page. In nodriver, the usual way to handle that is to scroll the page with injected JavaScript, wait for new items to load, then repeat until you reach the end.
The pattern is simple: scroll, wait, then check whether more content appeared. On real targets, you usually combine it with a stopping condition. That may be a "Load more" button disappearing, an end-of-list message appearing, or the number of loaded items no longer increasing.
Handling clicks
Many sites hide data behind tabs, accordions, modal triggers, or pagination controls. In those cases, scraping isn't just about reading the page. It's about driving the interface to reveal the data first.
A complete example
Here is a realistic example of scraping a JavaScript-rendered product listing that lazy-loads more cards as you scroll:
This example shows the main pattern clearly. Wait for the first batch of cards. Scroll. Let the next batch load. Check whether the page actually added more items. Then extract the rendered results once the list stabilizes.
Advanced scraping techniques with nodriver
Once you move past demo scripts, the problem usually changes. You start trying to make the scraper run repeatedly, at scale, without falling apart. That is where a few more advanced patterns start to matter.
Concurrency
nodriver is built on asyncio, so you can scrape multiple pages at the same time instead of waiting for each one to finish before starting the next. That's one of its real advantages. If you are collecting data from a category page, product detail pages, or search result URLs, asyncio.gather() can help you parallelize the work and improve throughput.
This pattern is useful, but it needs discipline. More parallelism means more memory use, more open tabs, and more chances for a session to stall. If you push concurrency without limits, the scraper will slow down or become unstable.
That is why resource management matters. Close browsers properly. Do not leave sessions hanging. For larger jobs, it is often better to run several smaller browser batches than one oversized session that tries to do everything at once.
User agents
A user agent is the browser identification string that a site sees in the request and browser session. Rotating it between sessions can reduce repeated fingerprints and make your traffic look less predictable. In nodriver, the usual approach is to pass a custom user agent through browser_args when starting the browser.
The important part is consistency. If you rotate user agents, the rest of the session should still make sense. A desktop Chrome user agent paired with a clearly mobile viewport is the kind of mismatch that creates unnecessary signals.
Rotation helps most when it looks realistic, not when it is random for the sake of randomness.
Common limitations, issues, and troubleshooting
nodriver is useful, but it's not fully polished. That's worth mentioning because some failures come from the tool itself, not from your code. If you expect Playwright-level polish, you'll waste time debugging issues that are not entirely yours to fix.
Unpredictable headless mode
You can configure headless mode behavior with options such as user_data_dir, browser_executable_path, browser_args, and lang. But in practice, headless execution has been a weak point in recent community reports. This may be a side effect of how the project approaches stealth, or it may simply be a bug in the current version.
So, it's safer to begin in a visible browser while you build and debug the scraper.
Some lower-level APIs are clunky
Attribute extraction is a good example. Instead of returning a clean dictionary, element.attributes returns a flat list, which makes basic parsing less convenient than it should be.
Related to that, get_attribute() may not work reliably in some versions, so you should test attribute access early instead of assuming it behaves as it does in other frameworks.
Page interactions can also be inconsistent. Some documented methods, such as click_mouse or mouse_click, may not behave the way you expect.
That doesn't make nodriver unusable, but it does mean you should verify browser actions one by one instead of trusting method names alone. If a click fails, inspect the visible page state, confirm the element is actually ready, and test whether another interaction path works better.
Other errors and bugs
A few errors come up often enough to call out directly:
- If you get import errors, check your filename first. Don't name your script nodriver.py, because Python may import your local file instead of the installed package.
- If you hit a "maximum recursion depth exceeded" error, headless mode is one of the first things worth testing.
- If you get "element not found," the problem is often timing rather than selector syntax, so add a proper wait_for() strategy before rewriting the scraper.
- If you get "connection refused," the browser may not be installed correctly or may not be accessible from your environment.
How to debug nodriver?
The best way to debug nodriver is to make each step visible.
- Save screenshots at different stages so you can see what the browser actually rendered.
- Add logging so you know which step failed instead of guessing.
- Wrap fragile actions in try/except blocks so one failed interaction doesn't hide the real issue.
- Before writing selectors into code, test them in browser DevTools against the rendered DOM, not just the initial HTML response.
When nodriver isn't enough
Stronger anti-bot systems can still detect browser automation even when you avoid the usual WebDriver path. At that stage, the issue is usually broader than browser control alone.
For example, if a site starts returning error 1020, that is a sign the target is blocking access at the protection layer, not that your selectors suddenly stopped working. If the browser begins triggering Google CAPTCHAs, that usually points to a broader fingerprinting or traffic-quality problem rather than a simple scripting mistake. And if repeated requests lead to your IP getting banned, the bottleneck is clearly your network identity, not the browser API.
nodriver can help with lighter bot checks, but it is not enough on its own for every target. High-volume scraping still needs proxy rotation, which is where a residential proxy network such as Decodo becomes relevant. More aggressive anti-bot stacks still require stronger infrastructure. And if a site exposes an API, that will usually be more reliable than scraping the rendered page. The honest takeaway is simple: nodriver is a capable tool, but it is not a complete scraping strategy by itself.
Fix blocks at the source
Stop losing time to bans and CAPTCHAs. Use high-quality residential IPs so your scraping keeps moving.
nodriver alternatives
If nodriver is not the right fit, the best alternative depends on your actual goal. Some tools are better for stable automation and testing. Others are better for JS-heavy scraping at scale, or for reducing the operational work around proxies, retries, and blocking.
| Tool | Best fit for | Main strength | Main tradeoff |
|------|--------------|---------------|---------------|
| Selenium | Legacy automation stacks, cross-browser workflows, and teams that want a widely adopted standard | Mature ecosystem, broad support, strong long-term stability | More setup, relies on the WebDriver model, easier to detect on protected sites |
| Playwright | Modern browser automation, larger scraping projects, and teams that value reliability and maintainability | Strong waiting logic, robust locators, clean browser context handling, polished developer experience | Less focused on stealth as a core value proposition |
| Puppeteer | JavaScript or TypeScript teams already working in the Node.js ecosystem | Mature Chrome automation model, natural fit for frontend-heavy or full-stack JS workflows | Less convenient for Python developers, narrower fit if your workflow is already Python-first |
| Decodo Web Scraping API | Production-scale data collection, heavily protected targets, and teams that want simpler operations | Handles more of the blocking, retry, proxy, and extraction overhead for you | Less direct browser-level control |
| nodriver | Python developers who want async-first, stealth-minded browser automation with less setup than Selenium | Lightweight setup, no separate Chromedriver binary, direct browser communication | Less mature than Playwright and Selenium, rougher around edge cases |
Conclusion
nodriver gives you a lighter alternative to Selenium, drops the separate Chromedriver dependency, and makes browser automation feel more direct. For JavaScript-heavy pages and lighter anti-bot defenses, that can be enough to make your workflow simpler and less brittle.
At the same time, nodriver isn't as polished as Playwright, nor as established as Selenium, and not strong enough on its own for every protected target.
If your target gets more aggressive with anti-bot measures, your volume goes up, or reliability starts to matter more than experimentation, you'll usually need more than browser automation alone. That may mean proxies, a managed unblocking layer, or a scraping API. But for learning, prototyping, and scraping dynamic pages without the usual WebDriver overhead, nodriver is worth understanding.
About the author

Lukas Mikelionis
Senior Account Manager
Lukas is a seasoned enterprise sales professional with extensive experience in the SaaS industry. Throughout his career, he has built strong relationships with Fortune 500 technology companies, developing a deep understanding of complex enterprise needs and strategic account management.
Connect with Lukas via LinkedIn.
All information on Decodo Blog is provided on an as-is basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Decodo Blog or any third-party websites that may be linked therein.

