How to Bypass CreepJS and Spoof Browser Fingerprinting

CreepJS is a browser fingerprinting audit tool used to test how detectable your automated browser is. If you’re trying to bypass CreepJS or make your browser fingerprint more convincing, it helps you spot inconsistencies across signals like WebGL, fonts, and navigator data. This guide shows what actually gets flagged and how to fix the parts that still give your browser away.

TL;DR

  • CreepJS is an open-source browser fingerprinting audit tool that shows how detectable your browser setup looks to anti-bot systems.
  • It checks signals such as canvas, WebGL, audio, fonts, navigator properties, timezone, language, screen data, and signs of browser tampering.
  • Raw Playwright, Puppeteer, and Selenium setups are easy to detect because patching a few obvious leaks doesn't make the whole fingerprint believable.
  • Stealth plugins and patched browsers can improve results, but they often leave inconsistencies that still make the browser look modified.
  • Tools that shape the fingerprint more deeply tend to perform better because their browser signals line up more naturally.
  • CreepJS is most useful as a diagnostic benchmark. It helps you catch fingerprinting problems before deploying against real targets.
  • A strong CreepJS result doesn't guarantee you'll bypass every anti-bot system, because real sites also inspect network and behavioral signals.

What is CreepJS and how does it work?

CreepJS shows what a browser reveals when a website runs JavaScript to inspect its properties and environment. Instead of just collecting fingerprint data, it highlights the inconsistencies, leaks, and patched signals that can make an automated browser stand out.

At a high level, it does 4 things:

  • Collects signals from browser APIs such as Canvas, WebGL, Audio, Fonts, and navigator properties.
  • Converts parts of that data into fingerprint values that are easier to compare.
  • Checks whether those signals make sense together.
  • Renders the results in sections so you can inspect what looks normal, unusual, or patched.

The third step is where CreepJS becomes especially useful. A browser can look fine at first glance and still fall apart once its signals are compared side by side. If it claims to be Chrome on Windows but exposes a Linux-looking WebGL renderer, that mismatch stands out. If one obvious automation clue is hidden but deeper APIs still suggest patching, that stands out too.
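To make that cross-checking step concrete, here is a minimal sketch of a consistency audit. The field names, regexes, and rules are our own illustration, not CreepJS's actual code:

```javascript
// Sketch: flag contradictions between a browser's claimed identity and
// what its deeper APIs report. Fields and heuristics are illustrative.
function auditProfile(profile) {
  const issues = [];
  const claimsWindows = /Windows NT/.test(profile.userAgent);
  // llvmpipe, Mesa, and SwiftShader are common Linux/software renderers
  const rendererLooksLinux = /llvmpipe|Mesa|SwiftShader/i.test(profile.webglRenderer);
  if (claimsWindows && rendererLooksLinux) {
    issues.push('UA claims Windows, but WebGL renderer looks like a Linux/software stack');
  }
  if (claimsWindows && profile.platform !== 'Win32') {
    issues.push(`UA claims Windows, but navigator.platform is "${profile.platform}"`);
  }
  return issues;
}
```

A coherent profile returns an empty list; a headless Linux server claiming to be Windows Chrome trips both checks.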

CreepJS uses three indicators, like headless, headless, and stealth, to show the level of suspicion it detects.

  • Like headless - the browser behaves in ways that resemble a headless environment.
  • Headless - CreepJS sees stronger evidence of actual headless behavior.
  • Stealth - CreepJS detects signs that the browser may have been modified to hide automation.

Lower is better across all 3. A score close to 0% means little evidence. A score close to 100% means the browser clearly behaves that way. A high headless score indicates it appears to be a headless browser. A high stealth score indicates it appears patched.

That distinction matters when comparing tools. A browser can reduce its headless score and still raise its stealth score. That means it looks less openly automated but more obviously patched. The result isn't just about whether one number went down. It's about whether the full browser profile still looks coherent.

That's what makes CreepJS useful in practice. It doesn't just tell you something looks off. It gives you a clearer view of how it looks off, which is exactly what you need before moving on to real anti-bot systems.

CreepJS fingerprinting techniques: A developer's deep dive

CreepJS works because it checks the browser signals that are hardest to fake cleanly. Many automation tools can hide obvious clues, such as navigator.webdriver or the user agent. CreepJS looks past that and checks whether the rest of the browser still looks believable.

That is why your browser environment can still fail fingerprinting checks, even after the obvious automation signals have been patched.

Canvas and WebGL fingerprinting

Canvas and WebGL matter because they reflect how the browser actually renders graphics.

Canvas fingerprinting works by drawing text or shapes and hashing the result. Tiny pixel-level differences can appear depending on the GPU, drivers, OS, browser engine, and font rendering pipeline. WebGL goes further by exposing values such as the GPU vendor, the renderer string, and supported extensions.
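Since a real canvas can't be rendered in a short snippet here, the draw-and-hash idea can be shown with a toy simulation: hashing a byte array with FNV-1a (our choice of hash for illustration; real tools use their own) makes it obvious why a single sub-pixel difference changes the fingerprint completely:

```javascript
// Toy demo: why tiny pixel differences yield different fingerprints.
// Real canvas fingerprinting hashes actual rendered pixel data; here we
// hash a plain byte array to show the effect of a one-unit change.
function fnv1a(bytes) {
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (const b of bytes) {
    h ^= b;
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in 32 bits
  }
  return h.toString(16);
}

const pixelsA = new Uint8Array([120, 64, 200, 255, 119, 64, 200, 255]);
const pixelsB = Uint8Array.from(pixelsA);
pixelsB[0] += 1; // one sub-pixel off, e.g. a different anti-aliasing path

console.log(fnv1a(pixelsA) === fnv1a(pixelsB)); // prints false
```

Each hash step is a bijection on the running state, so any byte difference is guaranteed to produce a different final hash. That's exactly why GPU, driver, and font-rendering differences show up so reliably.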

This is where headless setups often stand out. A real consumer laptop usually exposes a normal desktop graphics profile. A headless Chrome session on a Linux server often exposes something very different, such as software rendering or a server-like renderer. Even if the browser claims to be regular Chrome on Windows or macOS, the WebGL data can tell a different story.

The lesson is simple: a believable setup needs a believable graphics profile, not just a patched browser string.

Audio context fingerprinting

Audio fingerprinting is easier to ignore and harder to fake.

Using the Web Audio API, a page can generate a signal, process it through an audio context, and measure the floating-point output. Those results can vary slightly depending on the browser engine, OS audio stack, and processing path.

That's what makes audio useful. It reflects lower-level system behaviour, not just surface-level browser values. A setup can patch obvious browser properties while still leaking a very distinctive audio profile.

Font enumeration

Fonts are among the clearest indicators of whether an environment feels real or stripped down.

CreepJS probes for installed fonts using browser-side measurement tricks. It checks how text renders under different font fallbacks and uses that to infer which fonts are present.

A real consumer OS usually exposes a broad font set shaped by the operating system, language support, installed apps, and normal usage. A minimal Docker container or bare server image often exposes a much shorter and more uniform list. That doesn't just look sparse. It looks artificial.

A believable browser profile needs a font environment that matches the platform it claims to be running on.

Navigator properties and user agent

This is where many developers start, and where many stop too early.

CreepJS checks whether the browser’s declared identity matches the features it actually exposes. That includes the user agent, platform hints, device information, language settings, and other navigator-related values.

Overriding navigator.userAgent is easy. Making the rest of the environment agree with it is much harder. If a browser claims to be a standard Chrome build on Windows, then values like platform, hardwareConcurrency, deviceMemory, and overall feature availability should align with that identity as well. If they don't, the user agent starts to look like a costume.
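As a sketch, here is what an aligned Chrome-on-Windows identity might look like, with a simple plausibility check. The specific values and ranges are illustrative assumptions, not hard rules:

```javascript
// Sketch: values that should agree for a "Chrome on Windows" identity.
const claimedIdentity = {
  userAgent: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0',
  platform: 'Win32',      // not 'Linux x86_64'
  hardwareConcurrency: 8, // a plausible consumer core count, not 96
  deviceMemory: 8,        // Chrome reports powers of two, capped at 8
};

function navigatorLooksConsistent(id) {
  const winUA = /Windows NT/.test(id.userAgent);
  return winUA
    && id.platform === 'Win32'
    && id.hardwareConcurrency >= 2 && id.hardwareConcurrency <= 32
    && [0.25, 0.5, 1, 2, 4, 8].includes(id.deviceMemory);
}
```

The point is not any single field, but that all of them describe the same kind of machine.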

Timezone and language consistency

Timezone and language seem small, but they help complete the story.

A believable browser fingerprint isn't just a list of values. It's a coherent identity. If the browser looks like a US English desktop session but reports a Japanese timezone, or if its locale and region signals don't match the rest of the setup, that inconsistency raises suspicion.

This is one reason fingerprinting is harder than it first appears. You're not just hiding automation; you're making many small details line up in a way that feels natural.
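A region-coherence check can be sketched with a small lookup table. The mapping below is a deliberately tiny illustration; real systems use far richer geo data:

```javascript
// Sketch: a minimal coherence check between locale and timezone.
// The mapping is illustrative, not exhaustive.
const plausibleZones = {
  'en-US': ['America/New_York', 'America/Chicago', 'America/Denver', 'America/Los_Angeles'],
  'ja-JP': ['Asia/Tokyo'],
  'de-DE': ['Europe/Berlin'],
};

function localeMatchesTimezone(locale, timeZone) {
  return (plausibleZones[locale] || []).includes(timeZone);
}
```

A US English session reporting Asia/Tokyo fails this kind of check immediately.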

Extension and permission leaks

Some of the clearest fingerprinting clues come from the very tools meant to hide them.

Privacy extensions and anti-fingerprinting tools often patch browser APIs to reduce exposure, but those changes can leave side effects behind that make the browser look modified rather than natural. Tools like JShelter are a good example. They can reduce API exposure, but they still require careful testing because extension-level changes can introduce detectable patterns.

Chromium-based automation has a similar problem. Tools like Playwright communicate with the browser through the CDP, and that architecture can still leave traces even when obvious flags are hidden. That is why stealth plugins often improve results without fully fixing them. They may remove the loudest automation signals, while the underlying browser behaviour still looks patched.
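One concrete way such patches leak is through Function.prototype.toString: native built-ins stringify as "[native code]", while a naive JavaScript override exposes its own source. You can see the effect in plain Node:

```javascript
// Demo: naive JS patches are visible through Function.prototype.toString.
// Native built-ins stringify as "function name() { [native code] }";
// a JavaScript override exposes its own source instead.
const original = Math.random;
console.log(String(original).includes('[native code]')); // prints true

Math.random = function random() { return 0.5; }; // a naive "patch"
console.log(String(Math.random).includes('[native code]')); // prints false

Math.random = original; // restore the real function
```

This is why careful stealth tooling also patches toString itself, and why that second patch can in turn become detectable.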

What a resilient setup needs to do differently

The main takeaway is simple: fingerprint resilience isn't about hiding one bad value. It's about making the whole environment hold together.

A stronger setup does not just spoof a user agent. It lines up graphics signals, audio behaviour, fonts, navigator properties, timezone, language, and feature availability so they all point to the same kind of machine.

If you're already working with JavaScript-heavy targets, it also helps to understand how to scrape websites with dynamic content, because many of the browser behaviours needed to render those pages are the same ones anti-bot systems inspect closely.

Bypassing CreepJS: Why and how?

If your setup can pass CreepJS cleanly, it will probably hold up against most real anti-bot checks, too. CreepJS inspects the same signals that those systems care about, so it works well as a quick sanity check before deployment.

But passing it means more than hiding a single flag. There's a spectrum of how deeply a tool shapes the browser's fingerprint, and where your setup falls on that spectrum determines how convincing it looks. In practice, bypassing CreepJS means masking browser automation well enough that the browser still appears to be a normal, internally consistent user environment.

A useful way to think about bypassing CreepJS is to look at the different levels of fingerprint spoofing:

  • User agent override only. This just changes the browser label. It might say "Chrome on Windows," but if the WebGL profile, fonts, timezone, or hardware values still suggest something else, the browser will not look convincing.
  • Patched automation browsers. Tools such as Patchright or undetected-chromedriver hide some of the most obvious automation leaks. They're better than changing the user agent alone, but they still often leave deeper signals untouched.
  • Privacy-focused browsers. These give you more control over fingerprinting exposure, but they usually need more manual tuning to look believable.
  • Purpose-built fingerprint spoofing browsers. Tools like Camoufox and Rebrowser aim to shape the entire browser profile rather than patching a few obvious leaks. That usually makes the browser look more convincing because more of the signals line up from the start.

Right now, Camoufox stands out because it patches fingerprinting APIs at the browser level. That means spoofing happens before page JavaScript starts inspecting the environment, which usually produces a cleaner and more convincing profile than patching browser values after launch.

The key idea is consistency. A believable browser fingerprint doesn't contain a hidden WebDriver flag. It's one where the GPU, fonts, timezone, language, screen size, and user agent all tell the same story.

That is also how you should test. Run your setup against CreepJS before deployment, capture the headless and stealth results, and use them as a simple benchmark in your workflow. This is especially useful if you're working with Playwright, Selenium, or other automation tools that need patching to look more natural.

The last thing to keep in mind is maintenance. Manual patching can help, but it gets expensive fast. For teams that don't want to keep fixing fingerprint leaks one by one, a managed or purpose-built solution is often the more practical path. It's also worth pairing this with broader anti-detection practices covered in this guide on bypassing anti-bot systems.

Skip the detection arms race

Decodo's Web Scraping API handles proxy rotation, fingerprinting, and CAPTCHAs so you don't have to.

Using CreepJS with browser automation tools

The best way to use CreepJS is as a repeatable testing tool, not a one-off comparison.

Instead of checking one browser once and moving on, run the same CreepJS test against each setup you're considering, save the results, and compare them over time. That makes it easier to spot fingerprint regressions before deployment.

If you're new to this part of browser automation, it also helps to understand what a headless browser is.

Benchmarking setup

The workflow is simple:

  • Launch a browser with your chosen tool
  • Open https://abrahamjuliot.github.io/creepjs/
  • Wait for the page to finish computing its fingerprint
  • Extract the like headless, headless, and stealth values from the page
  • Append the result to a log file so you can compare runs later

Because CreepJS renders these scores as text, it's more reliable to parse document.body.innerText rather than to depend on fragile selectors.

Puppeteer baseline test (Node.js)

Start with plain Puppeteer to set a baseline. These are Node.js scripts, so you'll need Node.js installed before running any of them.

Install the package:

npm install puppeteer

Run it against CreepJS:

Save the following as baseline-creepjs.js and run it in the terminal with node baseline-creepjs.js.

// baseline-creepjs.js
const puppeteer = require('puppeteer');
const fs = require('fs');

function extractScores(text) {
  const likeHeadlessMatch = text.match(/(\d+)%\s+like headless:/i);
  const headlessMatch = text.match(/(\d+)%\s+headless:/i);
  const stealthMatch = text.match(/(\d+)%\s+stealth:/i);
  return {
    likeHeadless: likeHeadlessMatch ? Number(likeHeadlessMatch[1]) : null,
    headless: headlessMatch ? Number(headlessMatch[1]) : null,
    stealth: stealthMatch ? Number(stealthMatch[1]) : null,
  };
}

(async () => {
  const browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox'],
  });
  const page = await browser.newPage();
  await page.goto('https://abrahamjuliot.github.io/creepjs/', {
    waitUntil: 'networkidle2',
  });
  await page.waitForFunction(() => {
    const text = document.body.innerText;
    return /%\s+headless:/i.test(text) && /%\s+stealth:/i.test(text);
  }, { timeout: 30000 });
  const text = await page.evaluate(() => document.body.innerText);
  const scores = extractScores(text);
  const result = {
    tool: 'puppeteer-baseline',
    timestamp: new Date().toISOString(),
    ...scores,
  };
  fs.appendFileSync('creepjs-results.jsonl', JSON.stringify(result) + '\n');
  console.log(result);
  await browser.close();
})();

In our baseline run, raw Puppeteer returned:

{
  tool: 'puppeteer-baseline',
  timestamp: '2026-03-24T01:53:25.710Z',
  likeHeadless: 38,
  headless: 100,
  stealth: 0
}

That's exactly what you would expect from an unmodified headless browser. CreepJS identifies it as fully headless, but not stealth-patched.

Puppeteer with puppeteer-extra and the stealth plugin

Next, add puppeteer-extra and the stealth plugin. puppeteer-extra is a wrapper around Puppeteer that supports plugins. The stealth plugin patches common automation signals, such as navigator.webdriver and the plugins array, and Chromium-specific runtime clues to make the browser appear less automated.

Install the packages:

npm install puppeteer puppeteer-extra puppeteer-extra-plugin-stealth

Run it against CreepJS:

Save the following as stealth-creepjs.js, then run it in your terminal with node stealth-creepjs.js.

// stealth-creepjs.js
const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');
const fs = require('fs');

puppeteer.use(StealthPlugin());

function extractScores(text) {
  const likeHeadlessMatch = text.match(/(\d+)%\s+like headless:/i);
  const headlessMatch = text.match(/(\d+)%\s+headless:/i);
  const stealthMatch = text.match(/(\d+)%\s+stealth:/i);
  return {
    likeHeadless: likeHeadlessMatch ? Number(likeHeadlessMatch[1]) : null,
    headless: headlessMatch ? Number(headlessMatch[1]) : null,
    stealth: stealthMatch ? Number(stealthMatch[1]) : null,
  };
}

(async () => {
  const browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox'],
  });
  const page = await browser.newPage();
  await page.goto('https://abrahamjuliot.github.io/creepjs/', {
    waitUntil: 'networkidle2',
  });
  await page.waitForFunction(() => {
    const text = document.body.innerText;
    return /%\s+stealth:/i.test(text) && /%\s+headless:/i.test(text);
  }, { timeout: 30000 });
  const text = await page.evaluate(() => document.body.innerText);
  const lines = text.split('\n').filter(line =>
    /like headless:|headless:|stealth:/i.test(line)
  );
  console.log(lines);
  const scores = extractScores(text);
  const result = {
    tool: 'puppeteer-stealth',
    timestamp: new Date().toISOString(),
    ...scores,
  };
  fs.appendFileSync('creepjs-results.jsonl', JSON.stringify(result) + '\n');
  console.log(result);
  await browser.close();
})();

In our test, Puppeteer with the stealth plugin returned:

[
  '31% like headless: 0cc8cb28',
  '33% headless: 6ed45504',
  '80% stealth: 1b90e96f'
]
{
  tool: 'puppeteer-stealth',
  timestamp: '2026-03-22T18:57:25.609Z',
  likeHeadless: 31,
  headless: 33,
  stealth: 80
}

This is a meaningful improvement over raw Puppeteer. The browser looks far less headless than the baseline. But there's a trade-off: the stealth score jumps to 80%, which means CreepJS strongly detects signs of patching.

That is the main pattern to understand here: the stealth plugin reduces obvious headless signals, but the very act of patching them makes the browser look more modified, and CreepJS detects that modification. For broader anti-detection work, it also pairs naturally with a Puppeteer CAPTCHA bypass workflow.

Playwright with Patchright

Patchright is a patched Playwright replacement for Chromium automation. It's useful when you want something closer to Playwright’s normal workflow, but with fewer obvious automation leaks.

Install the packages:

npm i patchright
npx patchright install chromium

Run it against CreepJS:

Save the following as patchright-creepjs.js and run it with node patchright-creepjs.js.

// patchright-creepjs.js
const { chromium } = require('patchright');
const fs = require('fs');

function extractScores(text) {
  const likeHeadlessMatch = text.match(/(\d+)%\s+like headless:/i);
  const headlessMatch = text.match(/(\d+)%\s+headless:/i);
  const stealthMatch = text.match(/(\d+)%\s+stealth:/i);
  return {
    likeHeadless: likeHeadlessMatch ? Number(likeHeadlessMatch[1]) : null,
    headless: headlessMatch ? Number(headlessMatch[1]) : null,
    stealth: stealthMatch ? Number(stealthMatch[1]) : null,
  };
}

(async () => {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://abrahamjuliot.github.io/creepjs/', {
    waitUntil: 'networkidle',
  });
  await page.waitForTimeout(8000);
  const text = await page.evaluate(() => document.body.innerText);
  const scores = extractScores(text);
  const result = {
    tool: 'patchright',
    timestamp: new Date().toISOString(),
    ...scores,
  };
  fs.appendFileSync('creepjs-results.jsonl', JSON.stringify(result) + '\n');
  console.log(result);
  await browser.close();
})();

In our Patchright test, the browser returned:

{
  tool: 'patchright',
  timestamp: '2026-04-03T15:41:50.515Z',
  likeHeadless: 88,
  headless: 67,
  stealth: 0
}

That makes Patchright an interesting middle ground. It looks less obviously patched than Puppeteer with the stealth plugin, but it still looks noticeably headless. So, compared with the Puppeteer stealth setup, Patchright hides patching better, while Puppeteer stealth reduces headless detection more aggressively.

It's also worth testing different Chromium headless modes here. Newer headless implementations can behave differently from legacy headless, and those differences can affect what CreepJS detects.

Camoufox (best result)

If you want the best results with CreepJS, Camoufox is the tool to look at. It takes a different approach from Puppeteer stealth plugins or patched Chromium setups. Instead of overriding values after the browser is running, it spoofs fingerprinting signals at the C++ level, before page scripts ever inspect the environment. That means page JavaScript sees a coherent set of signals from the start, not a standard browser with values patched on top.

Camoufox is built on Firefox and uses BrowserForge under the hood to generate realistic device fingerprints. It spoofs navigator properties, screen and window dimensions, WebGL vendor and renderer pairs, AudioContext data, fonts, headers, timezone, and locale. It also avoids injecting JavaScript into the main world, which is one of the more common ways automation gets detected.

Install the package:

pip install -U camoufox

Fetch the browser binary:

python -m camoufox fetch

Camoufox ships its own Firefox build. This step downloads it separately from the Python package.

Run it against CreepJS:

Save the following as a Python script, for example, creepjs_test.py, and run it with python creepjs_test.py.

# creepjs_test.py
from camoufox.sync_api import Camoufox
import re
import json
from datetime import datetime

def extract_scores(text):
    like_headless = re.search(r"(\d+)%\s+like headless:", text, re.I)
    headless = re.search(r"(\d+)%\s+headless:", text, re.I)
    stealth = re.search(r"(\d+)%\s+stealth:", text, re.I)
    return {
        "likeHeadless": int(like_headless.group(1)) if like_headless else None,
        "headless": int(headless.group(1)) if headless else None,
        "stealth": int(stealth.group(1)) if stealth else None,
    }

with Camoufox() as browser:
    page = browser.new_page()
    page.goto("https://abrahamjuliot.github.io/creepjs/", wait_until="networkidle")
    page.wait_for_timeout(8000)
    text = page.locator("body").inner_text()
    scores = extract_scores(text)
    result = {
        "tool": "camoufox",
        "timestamp": datetime.utcnow().isoformat(),
        **scores,
    }
    with open("creepjs-results.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(result) + "\n")
    print(result)

In our test, Camoufox returned:

{
  'tool': 'camoufox',
  'timestamp': '2026-04-03T15:37:45.773388',
  'likeHeadless': 0,
  'headless': 0,
  'stealth': 0
}

The reason Camoufox tends to achieve near-zero headless and stealth scores is simple: it shapes the fingerprint before page scripts inspect it. Its feature docs list spoofing for navigator values, screen and window sizes, WebGL parameters, AudioContext data, voices, timezone, locale, headers, and more, while also avoiding main-world execution leaks that can expose automation.

The trade-off is convenience. Camoufox can be slower to start than a plain Puppeteer or Playwright script, and it may take more effort to fit neatly into some CI pipelines. So while it often gives the best fingerprint result, it's not always the easiest tool to drop into an existing stack.

Tool                      Like headless   Headless   Stealth
Raw Playwright            88%             100%       0%
Patchright                88%             67%        0%
Raw Selenium              44%             100%       0%
undetected-chromedriver   44%             67%        0%
Raw Puppeteer             38%             100%       0%
Puppeteer + stealth       31%             33%        80%
Camoufox                  0%              0%         0%

Fortifying web scrapers against CreepJS detection

Once you know what CreepJS checks, the next step is hardening your setup so the browser profile actually holds together.

The easiest way to do that is to treat it like a checklist.

Choose the right browser and operating system to impersonate

Not all browser profiles attract the same level of scrutiny. 1920×1080 remains the most common desktop resolution, and Chrome on Windows is by far the most common desktop browser/OS combination. That makes it the safest profile to impersonate; it blends into the largest crowd.

A few guidelines:

  • Chrome on Windows 10 or 11 is the lowest-risk combination. It's the most common real-world profile, so it raises the fewest questions.
  • Firefox on Windows is another solid option, and Camoufox is built on Firefox, making it a natural fit if you're already using that tool.
  • Safari on macOS is harder to spoof convincingly. Safari exposes a narrower set of APIs and has distinctive behaviour that's easy to get wrong from a non-Mac environment.
  • Avoid impersonating mobile browsers from a desktop environment. The mismatch between touch event support, screen dimensions, and device APIs is hard to fake cleanly.

Use realistic viewport and screen values

Strange or perfectly round screen dimensions are common telltale signs. Stick to resolutions that real users actually have. The most common desktop resolutions are 1920×1080, 1366×768, and 1536×864.

But the resolution alone isn't enough. Your window.outerWidth and window.outerHeight should be slightly larger than window.innerWidth and window.innerHeight - the difference accounts for the browser's toolbar and frame. A viewport that exactly matches the screen size is a headless tell, because real browsers always have some chrome around the content area.

For example, if you're claiming a 1920×1080 screen:

  • screen.width: 1920, screen.height: 1080
  • window.outerWidth: 1920, window.outerHeight: 1040 (accounting for the taskbar)
  • window.innerWidth: 1903, window.innerHeight: 969 (accounting for scrollbar and browser UI)

The exact numbers don't matter as much as the relationships between them. If all 4 values are identical, the profile looks headless.
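Those relationships can be captured in a small validation helper. The margins are illustrative; only the inequalities matter:

```javascript
// Sketch: sanity-check the window geometry relationships described above.
// A real browser leaves room for the taskbar, toolbar, and scrollbar.
function windowGeometryLooksReal(g) {
  return g.outerWidth <= g.screenWidth
    && g.outerHeight < g.screenHeight   // room for the OS taskbar
    && g.innerWidth < g.outerWidth      // room for borders/scrollbar
    && g.innerHeight < g.outerHeight    // room for tabs and toolbar
    && !(g.innerWidth === g.screenWidth && g.innerHeight === g.screenHeight);
}
```

The example profile above (1920×1080 screen, 1920×1040 outer, 1903×969 inner) passes; a headless profile where all four sizes are identical fails.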

Align timezone, locale, and Intl output with your proxy IP

If your traffic exits through a German residential IP, the browser needs to match that region across every signal:

  • navigator.language → de-DE
  • navigator.languages → ["de-DE", "de", "en-US", "en"]
  • Timezone → Europe/Berlin
  • Intl.DateTimeFormat().resolvedOptions().timeZone → Europe/Berlin
  • Intl.NumberFormat().resolvedOptions().locale → de-DE

The Intl API is easy to overlook. Even if you set the timezone and language correctly, Intl.DateTimeFormat and Intl.NumberFormat can still return default en-US values if they aren't explicitly configured. CreepJS checks these, and so do production anti-bot systems.
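Node exposes the same Intl APIs as the browser, so you can see locally what a target page would read, and what explicit configuration looks like:

```javascript
// What a page reads from Intl when locale and timezone are set explicitly.
// (Recent Node versions bundle full ICU data by default.)
const dtf = new Intl.DateTimeFormat('de-DE', { timeZone: 'Europe/Berlin' });
console.log(dtf.resolvedOptions().timeZone); // prints "Europe/Berlin"
console.log(dtf.resolvedOptions().locale);   // prints "de-DE"

// Without explicit options, host defaults leak through. This is the
// mismatch CreepJS catches when only navigator.language is spoofed:
const defaultTz = Intl.DateTimeFormat().resolvedOptions().timeZone;
console.log(defaultTz); // whatever the host OS reports, e.g. "UTC" on many servers
```

If the spoofing layer only overrides navigator values, that last line is where the real environment shows through.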

WebGL and canvas noise

This is where many setups make things worse, not better. Adding totally random noise to the canvas or WebGL on every page load is a bad idea. Real hardware is stable. If the fingerprint changes constantly, that instability becomes a strong signal in its own right.

A better approach is seeded noise. That means the small changes are tied to a stable identifier, such as a browser profile ID, so the fingerprint stays consistent across refreshes and within the same session. The key idea is simple: a believable fingerprint should be unique enough to look real, but stable enough to behave like actual hardware.
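Seeded noise can be sketched with any deterministic PRNG. Here we use mulberry32 seeded from a profile ID (the hashing and ±1 offset scheme are illustrative assumptions, not a specific tool's implementation):

```javascript
// Sketch of seeded noise: derive a small, stable perturbation from a
// profile ID so the fingerprint varies across profiles but not across runs.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) >>> 0;
    let t = seed;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Simple string hash to turn a profile ID into a numeric seed
function seedFromProfileId(id) {
  let h = 0;
  for (const ch of id) h = (Math.imul(h, 31) + ch.charCodeAt(0)) >>> 0;
  return h;
}

// ±1 offset per channel, stable for a given profile ID and pixel
function noiseForPixel(profileId, pixelIndex) {
  const rand = mulberry32(seedFromProfileId(profileId) + pixelIndex);
  return Math.floor(rand() * 3) - 1; // -1, 0, or +1
}
```

The same profile ID always produces the same offsets, so the fingerprint looks like stable hardware, while different profiles get different fingerprints.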

Removing automation artifacts

You also need to remove the obvious automation clues.

That usually means:

  • disabling Chromium automation flags where possible
  • patching navigator.webdriver
  • removing known automation properties and CDP-related traces
  • avoiding browser behaviour that looks obviously scripted

This is where tools like stealth plugins and patched browsers help, but they only work well if the rest of the fingerprint is consistent as well.

Proxy integration

A clean browser fingerprint still looks suspicious if the network layer tells a different story.

That is why proxy choice matters. Rotating residential proxies are useful because they make it easier to match the browser’s region, timezone, and locale to the exit IP. Datacenter IPs can still work in some cases, but they often attract more scrutiny even when the browser fingerprint itself looks reasonable.

If you want to go deeper on that side of the setup, see what a residential proxy is, why rotating proxies are often the better choice, and what Decodo residential proxies are.

How CreepJS can be used to bypass fingerprinting

One of the most useful things about CreepJS is that it's not just a test to pass. It's also a debugging tool.

If your browser setup leaks somewhere, CreepJS usually gives you a good clue about where to look. A suspicious WebGL section points you toward graphics-related inconsistencies. A bad audio result points to deeper system-level behaviour. Navigator or language mismatches point to identity problems. That makes it much easier to fix the right thing rather than mindlessly patch everything.

This is also why CreepJS works well in a CI workflow. If you update Chromium, change a dependency, switch proxy providers, or modify your browser config, you can rerun the test to see whether the fingerprint improves or worsens. That is much more useful than finding out after deployment that a previously stable setup now gets blocked faster.

It's also useful for benchmarking third-party tools. If you're comparing scraping APIs, anti-detection browsers, or browser automation services, CreepJS gives you a quick way to see how convincing their browser layer looks before you commit to them. That is one reason it's helpful alongside broader tools and services, such as the Decodo Web Scraping API.

You can even use it to think more clearly about target sites that are not running CreepJS at all. If a target site seems to care about fingerprint quality, and CreepJS is showing clear problems in your setup, there's a good chance those same weak points matter there too. That does not mean the site uses the same checks. It means the same categories of leaks remain relevant.

For a related example of using detection systems as a diagnostic signal, see How to Bypass AI Labyrinth.

CreepJS limitations

CreepJS is useful, but it only evaluates browser-side signals, not all the real-world signals sites use to detect bots.

As a detection tool

CreepJS only sees what client-side JavaScript can see.

It does not measure IP reputation, TLS fingerprinting, request timing, HTTP/2 behaviour, or other network-level signals that production anti-bot systems often use. It also does not evaluate behaviour the way a behavioural system would. As a result, a browser can look clean in CreepJS and still get flagged later if its network or interaction patterns look automated.

It's also open source, which means people can patch specifically against its checks without actually improving their overall fingerprint quality.

As a benchmark

A good CreepJS result doesn't guarantee that a site will not block you.

A target may use different fingerprinting logic, behavioural analysis, reputation systems, or its own scoring model. CreepJS is best treated as a proxy for browser realism, not as a universal pass-or-fail test.

Scores can also vary between runs. Browser updates, local state, dependency changes, and updates to the hosted CreepJS page can all affect results. A 0% headless score only means the browser passed the checks CreepJS is running. It does not mean the browser is indistinguishable from a human user in every setting.

As a maintenance target

The benchmark can move over time.

CreepJS evolves, browsers evolve, and fingerprint outputs shift as engines change. A setup that scores well today may score worse after the next browser update or after changes to the test itself. That is another reason to treat it as a continuous testing tool instead of a one-time test.

CreepJS alternatives

CreepJS is one of the best tools for auditing a browser’s static fingerprint, but it works best alongside other tests.

  • FingerprintJS Pro demo is useful when you want to see how a commercial fingerprinting system identifies your browser. It's closer to a production service than CreepJS, but less diagnostic. It tells you more about the outcome than the reason.
  • BrowserLeaks is better when you need to isolate a specific leak. It breaks the browser down into separate surfaces, such as WebRTC, canvas, WebGL, fonts, and client hints, making debugging much easier.
  • Sannysoft is a faster, simpler bot test focused on common Selenium and Puppeteer artifacts. It's not as deep as CreepJS, but it's useful as a quick sanity check.
  • pixelscan.net adds something CreepJS does not really cover: network context. It checks whether your browser profile, IP address, and geographic signals are consistent.
  • Incolumitas bot test covers another gap. It focuses more on behavioural signals like timing, movement, and interaction patterns instead of just static fingerprinting.

The easiest way to use these tools is by role:

  • Use CreepJS for a broad static fingerprint audit
  • Use BrowserLeaks to isolate specific API leaks
  • Use Pixelscan to check browser and IP consistency
  • Use Incolumitas to test the behavioural layer that CreepJS largely ignores

So if you're building a practical testing workflow, CreepJS should be your starting point, but not your only check.
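As a rough illustration, the layered workflow above can be encoded as an ordered checklist that a test harness walks through. The role labels and the idea of running them in this fixed order are assumptions for illustration; the URLs point at the public test pages mentioned in this article.

```python
# Ordered audit checklist for a layered fingerprint-testing workflow.
# Role labels are illustrative assumptions, not an official taxonomy.
AUDIT_STAGES = [
    ("static fingerprint audit", "https://abrahamjuliot.github.io/creepjs/"),
    ("specific API leaks",       "https://browserleaks.com/"),
    ("browser/IP consistency",   "https://pixelscan.net/"),
    ("behavioural signals",      "https://bot.incolumitas.com/"),
]

def audit_order(stages=AUDIT_STAGES):
    """Return the test-page URLs in visiting order:
    broad static audit first, behavioural checks last."""
    return [url for _, url in stages]
```

In practice you would point Playwright or a regular browser at each URL in turn and review the results manually; the value of the list is simply keeping the order consistent across runs, with CreepJS always first.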

Use cases for developers and researchers

CreepJS becomes much more useful once you stop thinking of it as just a demo page and start treating it like a testing tool.

Web scraping and data collection

One of the clearest use cases is auditing a scraper before it ever touches a high-value target. Catching obvious fingerprint problems early reduces the chances of burning an IP or account in the first session.

It's also useful for benchmarking third-party scraping solutions, including services like Decodo Web Scraping API, and comparing which one produces a more believable browser layer. For teams working across different target types, it can also be worth keeping a small library of fingerprint profiles for different site categories.
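One way to make that pre-deployment audit automatic is to gate a scraper build on a parsed result. The field names below (a CreepJS-style trust percentage and a count of detected "lies") are hypothetical stand-ins for whatever you extract from the results page; the thresholds are arbitrary examples.

```python
def passes_audit(result, min_trust=80.0, max_lies=0):
    """Decide whether a scraper build looks believable enough to deploy.

    `result` is a dict with hypothetical fields:
      - "trust_score": trust percentage (0-100), higher is better
      - "lies": number of detected tampering signals, lower is better
    """
    return (result.get("trust_score", 0.0) >= min_trust
            and result.get("lies", 99) <= max_lies)

# Example snapshots: a clean profile versus a patched-looking one.
clean = {"trust_score": 92.5, "lies": 0}
patched = {"trust_score": 40.0, "lies": 7}
```

Wiring a check like this into CI means a browser or dependency upgrade that silently degrades the fingerprint fails the build instead of burning an IP in production.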

Security research and browser privacy auditing

You can use CreepJS to check whether a privacy browser or browser extension is actually reducing fingerprint exposure, or whether it's just introducing a different set of detectable changes. It can also help test enterprise browser policies to see whether certain locked-down settings accidentally make users more fingerprintable.

Anti-bot solution development

If you're building anti-bot tooling, CreepJS is useful for regression testing. You can run it before and after a browser upgrade, dependency update, or patch to confirm that the change did not introduce new leaks.
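A minimal sketch of that regression workflow: hash a fingerprint snapshot so identical runs compare cheaply, and diff two snapshots to see which surfaces moved after an upgrade. The snapshot shape (a flat dict of signal names to values) is an assumption; real CreepJS output is richer.

```python
import hashlib
import json

def fingerprint_hash(snapshot):
    """Stable hash of a fingerprint snapshot. Keys are sorted so that
    dict ordering doesn't change the result between runs."""
    blob = json.dumps(snapshot, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def regression_diff(baseline, current):
    """Return the signal names whose values changed between two
    snapshots, e.g. before and after a browser upgrade."""
    keys = set(baseline) | set(current)
    return sorted(k for k in keys if baseline.get(k) != current.get(k))
```

If `regression_diff` comes back non-empty after a patch, you know exactly which fingerprint surfaces to re-inspect in CreepJS before shipping the change.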

Ad verification and brand protection

For ad verification work, CreepJS can help confirm that a geo-targeted browsing session still appears believable. If you're simulating impressions from different regions using residential proxies, you can use CreepJS to verify that the browser fingerprint still matches the geography you're emulating.
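That geography check can be partly automated before a session ever runs. The sketch below compares what the proxy's exit IP implies against what the browser profile claims; the dict field names are assumptions for illustration, not the output of any particular geolocation API.

```python
def geo_mismatches(ip_info, browser_profile):
    """List inconsistencies between the exit IP and the browser profile.

    Hypothetical field shapes:
      ip_info:         {"country": "DE", "timezone": "Europe/Berlin"}
      browser_profile: {"locale": "de-DE", "timezone": "Europe/Berlin"}
    """
    problems = []
    if ip_info.get("timezone") != browser_profile.get("timezone"):
        problems.append("timezone does not match IP region")
    country = ip_info.get("country", "").lower()
    locale = browser_profile.get("locale", "").lower()
    # Crude heuristic: "de-DE" should end with the IP country code "de".
    if country and not locale.endswith(country):
        problems.append("locale region does not match IP country")
    return problems
```

An empty list doesn't prove the session looks local, but a non-empty one flags exactly the kind of region mismatch CreepJS-style audits tend to surface.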

Academic and privacy research

Researchers can use CreepJS to compare browser configurations, document which features leak the most unique information, and track how those patterns change across browser versions.

For related reading, see Anti-scraping techniques and how to outsmart them, How to scrape Google without getting blocked, and Decodo Web Scraping API.

Final thoughts

So, can you bypass CreepJS and spoof browser fingerprinting?

Yes, but not in the simplistic way people often hope. You don't get there by changing a single browser setting or hiding a single automation flag. You get there by making the entire browser profile look coherent, from graphics and fonts to language, timezone, hardware hints, and runtime behaviour.

That's also why some tools do better than others. Basic automation setups leak too much. Stealth plugins can help, but often introduce their own tells. The strongest results usually come from setups that shape the fingerprint more deeply and more consistently.

CreepJS works best as a way to audit how believable your browser looks before it reaches a real target. A strong result means the browser profile looks convincing overall and removes one of the most common reasons automated browsers get flagged early, but it still leaves the behavioural layer, network reputation, and site-specific scoring to deal with.

If maintaining that level of consistency becomes too time-consuming, managed solutions like the Decodo Web Scraping API can be a more practical option for handling browser rendering, proxy rotation, and anti-detection in a single place.

Stay undetected at scale

Pair clean residential IPs with your anti-detect stack and collect data without getting blocked.

About the author

Vilius Sakutis

Head of Partnerships

Vilius leads performance marketing initiatives with expertise rooted in affiliates and SaaS marketing strategies. Armed with a Master's in International Marketing and Management, he combines academic insight with hands-on experience to drive measurable results in digital marketing campaigns.


Connect with Vilius via LinkedIn

All information on Decodo Blog is provided on an "as is" basis and for informational purposes only. We make no representations and disclaim all liability with respect to your use of any information contained on Decodo Blog or any third-party websites that may be linked therein.

Frequently asked questions

Does CreepJS store my browser fingerprint?

CreepJS runs as a public browser audit page, so you should assume you're voluntarily exposing your browser to the test. For exact logging or storage behaviour, check the project documentation and hosted page details.

Is bypassing CreepJS the same as bypassing all anti-bot systems?

No. A good CreepJS result only means your browser fingerprint looks cleaner against the checks CreepJS runs. Real anti-bot systems can still use network reputation, behavioural analysis, account history, and other signals that CreepJS doesn't cover.

Which automation tool performs best against CreepJS?

From the tools covered in this article, Camoufox produced the strongest overall result because it shapes fingerprint signals at the browser level rather than relying mostly on JavaScript-side patching.

Can CreepJS detect if I am using a proxy or VPN?

Not directly in the same way as a network-level detection system can. But if your browser locale, timezone, and fingerprint don't match the region your IP suggests, that inconsistency can still make the setup look suspicious.


© 2018-2026 decodo.com (formerly smartproxy.com). All Rights Reserved