How to Use cURL in JavaScript: Fetch, Axios, and Best Practices
Your cURL command works flawlessly in the terminal. It has for weeks. Then your boss asks, "Can you make this run in JavaScript?" and suddenly you're here. Good news: you have options. You can run the system cURL binary directly from Node.js, or you can ditch cURL entirely and use a native JavaScript HTTP client that does the same job. This article walks through both paths: child_process, node-libcurl, Fetch, and Axios, plus a flag-by-flag cURL-to-JS translation guide and a decision framework so you don't pick the wrong one.
Zilvinas Tamulis
Last updated: Apr 22, 2026
25 min read

TL;DR
- Fetch is the built-in JavaScript equivalent of cURL and works in both browsers and Node.js 18+ with no dependencies
- Axios is the recommended default for Node.js projects thanks to automatic JSON parsing, better error handling, and built-in timeouts
- Running cURL via child_process works, but it's best kept for quick scripts or when you already have a working command
- node-libcurl gives you full libcurl power, but it's a niche choice for advanced networking needs
- Translating cURL to JavaScript is a flag-by-flag mapping, not a one-to-one copy-and-paste
- Proxies are essential for scraping at scale, and neither Fetch nor Axios supports them natively in Node.js, so both need extra setup
- If you're dealing with CAPTCHAs or heavy bot protection, switching to a scraping API is more practical than tweaking HTTP clients
What "cURL in JavaScript" actually means
Before writing any code, it helps to understand the split. cURL is a command-line tool. It runs on your system, not in a browser. You can't call it from browser-side JavaScript, because browsers simply don't allow your code to run system programs.
The answer depends on where your JavaScript runs.
In the browser, you use the Fetch API. It can do the same things cURL does: send GET and POST requests, set custom headers, and include a request body. The one catch is CORS. If you're making a request to a different domain, the server on the other end has to allow it explicitly. A cURL command might fail in the browser, not because your code is wrong, but because the server never said, "yes, other websites can call me." There's no client-side workaround for this.
In Node.js, you have two options. You can run the actual cURL program from your script using the built-in child_process module, basically telling Node.js "run this terminal command for me and give me the result." Or you can skip cURL entirely and use a JavaScript HTTP library (Fetch, Axios, or node-libcurl) that makes the same kinds of requests without needing cURL installed at all.
The practical takeaway: for almost every JavaScript project, using a JavaScript HTTP client is the better choice. It works everywhere your code runs, it's easier to test, and it doesn't break when someone deploys to a server that doesn't have cURL installed. Shelling out to cURL has its place, but it's the exception.
If you're coming from a cURL-heavy workflow, it's worth brushing up on how cURL GET requests work and how cURL handles proxies before diving into the JavaScript equivalents below.
Running system cURL from Node.js with child_process
The most literal way to use cURL in JavaScript is to tell Node.js to run a cURL command in a shell and hand back the output. It's not elegant, but it gets the job done when you already have a working cURL command and just need it to run from a Node.js script.
You'd reach for this approach when you're migrating a shell script to Node.js, automating something quick, or dealing with a cURL command so complex that translating it to Fetch or Axios isn't worth the effort.
Basic GET request with exec()
The child_process module is built into Node.js with no installation required. The exec() function runs a shell command and gives you the output as a string.
The -s flag tells cURL to run silently (no progress bar). Without it, you'll get download stats mixed into your output, which makes JSON parsing a mess.
Step-by-step breakdown:
- exec() runs the cURL command in a child shell process.
- When the command finishes, the callback fires with three arguments: error (if the command itself failed), stdout (the response body), and stderr (any error output from cURL).
- Since stdout is a raw string, you need to JSON.parse() it yourself. There's no automatic parsing here.
Synchronous version with execSync()
If you're writing a simple script and don't need async behavior, execSync() blocks until the command finishes and returns the output directly.
Simpler to read, but it freezes your entire Node.js process until cURL returns. Fine for a quick script, not something you want in a server.
Sending a POST request
When you need to send data, like creating a new resource, the cURL command gets longer. Here's a POST request with a JSON body:
Notice how the command string is getting unwieldy. You're building a shell command inside a JavaScript string, juggling quotes inside quotes, escaping things. It works, but you can probably feel why this approach doesn't scale.
Safer execution with spawnSync()
When your cURL command has many flags and headers, building it as a single string invites shell-injection problems. spawnSync() takes an array of arguments instead, so each flag is a separate element.
Each argument is its own array element, so there's no risk of a malformed string accidentally running something you didn't intend. If you're passing any dynamic input into a cURL command (URLs from user input, variable header values), always use spawnSync() or spawn() over exec().
When this approach falls apart
cURL inside JavaScript works, but let's be honest about the trade-offs:
- cURL must be installed. If your code runs on a minimal Docker image, a serverless function, or a CI environment that strips non-essential binaries, cURL might not be there.
- Output is just a string. You're parsing raw text every time. If cURL returns an unexpected error page instead of JSON, your script crashes unless you've wrapped everything carefully.
- It's hard to test. Mocking a child process is significantly more annoying than mocking an HTTP client.
- Error handling is clunky. HTTP status codes aren't surfaced automatically; you'd need to add -w "%{http_code}" to the cURL command and parse it out of the output yourself.
For a one-off script or a quick migration, shelling out to cURL is perfectly fine. For anything that'll run in production, the other options covered below handle all of this more cleanly.
Skip the boilerplate
Decodo's Web Scraping API handles proxies, CAPTCHAs, and anti-bot detection so your code stays short and your requests actually land.
Using node-libcurl for direct libcurl bindings
If child_process is "run cURL from JavaScript," node-libcurl is "put cURL's engine inside JavaScript." It's a native addon that wraps libcurl and exposes it directly to Node.js. You get full access to libcurl's feature set without shelling out to anything.
That said, this is a specialist tool. For most projects, Fetch or Axios is simpler and more than enough. You'd reach for node-libcurl when you specifically need low-level TLS configuration, custom cipher suites, CURLOPT_* options, or multi-handle concurrency for high-throughput scraping. If none of that means anything to you yet, you can safely skip to the next section and come back here if you ever need it.
Installation
Install node-libcurl with:
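The package is published on npm under the same name:

```shell
npm install node-libcurl
```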
One thing to know upfront: node-libcurl is a native addon, which means it compiles C++ code during installation. It needs node-gyp and a C++ build toolchain on your system. On most dev machines, this is already there. In CI pipelines or Docker containers, you might need to install build tools first, which can turn a 5-second npm install into a minor adventure.
GET request with the curly interface
node-libcurl has a few API levels. The simplest is curly, a convenience wrapper that gives you async/await syntax and feels closer to what you'd expect from a modern JavaScript HTTP client.
curly automatically parses JSON responses, so data is already an object. The statusCode comes back as a number. It's fairly clean for what's happening under the hood: a full libcurl request cycle.
POST request with curly
Send a POST request with a JSON body:
Notice the differences from Fetch or Axios. Headers are passed as an array of strings ("Key: Value" format) rather than an object. The body goes into postFields as a string. It's not hard, just different; the naming mirrors libcurl's C API, which is why it feels a bit alien if you're used to JavaScript conventions.
Using the Curl class for fine-grained control
When you need access to specific CURLOPT_* options, the lower-level Curl class gives you full control:
This is where node-libcurl earns its keep. Every option you can set in a cURL command with --something, you can set here with curl.setOpt(). Verbose output, custom DNS resolution, specific TLS versions, and proxy tunneling are all available.
Fetch API: The native JavaScript HTTP client
If you're starting a new project and wondering which HTTP client to use, start here. Fetch is built into every modern browser and comes with Node.js 18+ as a global: no packages, no imports, no npm install. It's just there.
For most developers looking for a cURL equivalent in JavaScript, Fetch is the answer. It won't do everything cURL can, but it covers the vast majority of use cases with zero dependencies.
Basic GET request
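For example, against httpbin.org (a placeholder endpoint); top-level await works in ES modules and modern runtimes:

```javascript
const response = await fetch("https://httpbin.org/get");
const data = await response.json();

console.log(data.url);
```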
Two lines to make a request and parse the response. That's it. No modules to require, no clients to instantiate. If you're on Node.js 16 or below, you'll need the node-fetch package to get the same API, but from Node.js 18 onward, fetch() is global just like it is in the browser.
POST request with a JSON body
Sending data works the same way a cURL POST does. You specify the method, set the content type, and pass the body. The syntax is just more verbose than a one-liner in the terminal.
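A sketch against httpbin.org's echo endpoint:

```javascript
const response = await fetch("https://httpbin.org/post", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "Hello", views: 100 }), // body must be a string
});

const data = await response.json();
console.log(data.json); // httpbin echoes the parsed body back
```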
One thing that trips people up: the body must be a string. You can't pass a plain JavaScript object, and you need JSON.stringify() every time. Axios handles this automatically, which is one reason people reach for it instead.
Custom headers
The headers option takes a plain object. Each key-value pair is equivalent to an -H flag in cURL.
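A sketch; the token is a placeholder, and httpbin.org's /headers endpoint echoes back whatever it receives:

```javascript
const response = await fetch("https://httpbin.org/headers", {
  headers: {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "application/json",
    "Authorization": "Bearer YOUR_TOKEN", // placeholder token
  },
});

const data = await response.json();
console.log(data.headers);
```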
You can also use the Headers constructor if you need to build headers programmatically, but for most cases, the plain object works fine.
Handling errors
Here's the thing about Fetch that catches almost everyone the first time: it doesn't throw on HTTP errors. Fetch considers a 404, a 500, or a 403 a successful response, because the server did respond. It only throws on actual network failures, like the server being unreachable.
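A sketch that forces a 404 via httpbin.org's status endpoint:

```javascript
const response = await fetch("https://httpbin.org/status/404");

if (!response.ok) {
  // response.ok is true only for 2xx status codes
  console.error(`HTTP error: ${response.status}`);
} else {
  const data = await response.json();
  console.log(data);
}
```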
Always check response.ok before parsing the body. If you skip this, you'll eventually try to JSON.parse() an HTML error page and spend 20 minutes wondering why your data is undefined.
Timeouts
cURL has --max-time. Axios has a timeout option. Fetch has... nothing. By default, a Fetch request will hang indefinitely if the server never responds. You need to wire up an AbortController yourself.
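A sketch of the standard pattern, with httpbin.org as a placeholder:

```javascript
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5000); // abort after 5 seconds

let response;
try {
  response = await fetch("https://httpbin.org/get", { signal: controller.signal });
  console.log(response.status);
} catch (err) {
  if (err.name === "AbortError") {
    console.error("Request timed out");
  } else {
    throw err;
  }
} finally {
  clearTimeout(timer);
}
```

On newer runtimes (Node.js 17.3+ and current browsers), AbortSignal.timeout(5000) replaces the controller-plus-setTimeout dance with a single signal.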
It's not a lot of code, but it's code you have to write every time, or abstract into a helper. This is one of the biggest practical arguments for Axios over raw Fetch in production scraping work.
CORS in the browser
If you're running Fetch in the browser and making requests to a different domain, CORS rules apply. The server has to include Access-Control-Allow-Origin in its response headers, or the browser blocks the response before your code ever sees it.
This is why a cURL command can work perfectly in the terminal but fail the moment you paste the same logic into browser-side JavaScript. cURL doesn't care about CORS; it's not a browser. But fetch() in the browser absolutely does.
There's no client-side fix. Your options are:
- Ask the API provider to add CORS headers (sometimes possible, often not)
- Proxy the request through your own server, where CORS doesn't apply
- Use a serverless function as a middleman between your frontend and the target API
If you're doing web scraping with JavaScript, this is a non-issue: scraping runs server-side in Node.js, where CORS doesn't exist.
Axios: the developer-friendly HTTP library
Note: On March 31, 2026, an attacker compromised the npm credentials of a lead Axios maintainer and published two backdoored versions, 1.14.1 and 0.30.4. Both contained a hidden dependency that silently installed a cross-platform remote access trojan on any system that ran npm install. The malicious versions were live for roughly three hours before npm pulled them. If you installed Axios during that window, treat the system as compromised, roll back to 1.14.0 or 0.30.3, and rotate any credentials that were accessible on the affected machine.
With that out of the way, Axios is still one of the most popular HTTP libraries in the JavaScript ecosystem for good reason. It works in both Node.js and the browser, parses JSON automatically, has built-in timeout support, and gives you cleaner error handling than raw Fetch. For Node.js projects that need more ergonomics than fetch() offers, it's been the go-to choice for years.
Installation
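Assuming npm, with the version pinned explicitly (per the advisory note above):

```shell
npm install axios@1.14.0
```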
We're pinning to 1.14.0 here deliberately.
Basic GET request
Notice what's missing compared to Fetch: no .json() call. Axios detects the Content-Type header and parses the response body automatically. response.data is already a JavaScript object. It's a small thing, but it adds up over hundreds of requests.
POST request
Sending a POST request is similarly streamlined. Pass a JavaScript object directly, and Axios serializes it to JSON and sets the Content-Type header for you.
No JSON.stringify(), no manual Content-Type header. Compare this to the Fetch version of the same request, and you'll see why people reach for Axios.
Custom headers
Setting headers works like cURL's -H flag: pass them as an object in the config parameter.
Timeout
One of Axios's biggest practical advantages over Fetch is the built-in timeout option:
If the server doesn't respond within the limit, Axios throws an error with code: 'ECONNABORTED'.
Error handling
Unlike Fetch, Axios actually throws errors on 4xx and 5xx responses. This means your catch block handles both network failures and HTTP errors, which is usually what you want.
The three-tier structure (error.response, error.request, error.message) covers every failure mode. You'll know exactly where things went wrong without parsing status codes out of a raw string.
Interceptors
This is where Axios pulls ahead for any project with more than a handful of requests. Interceptors let you run logic on every request or response globally: add auth headers, log requests, implement retry logic, all without touching individual calls.
For scraping, a response interceptor that retries on 429 (rate limit) or 503 (server overload) with exponential backoff is practically essential. You wire it up once and forget about it.
Creating a reusable instance
When all your requests share the same base URL, auth headers, and timeout, create a configured instance with axios.create() instead of repeating the same config everywhere.
This also keeps connection pools alive between requests, which matters when you're hitting the same API hundreds of times. Create the instance once, reuse it everywhere.
Translating cURL commands to JavaScript
This is the section you'll probably need to bookmark. You have a cURL command, and you need it in JavaScript. Rather than guessing, here's a systematic way to translate any cURL command flag by flag.
The "copy as cURL" workflow
Before you translate anything, you need the cURL command. If you're trying to replicate a request your browser made, Chrome and Firefox hand it to you for free:
- Open DevTools (F12 or Ctrl+Shift+I).
- Go to theĀ Network tab.
- Find the request you want to replicate.
- Right-click it and selectĀ Copy, thenĀ Copy as cURL.
That gives you the exact command as a cURL string. This is the fastest way to get a working starting point, especially when you're debugging why your JavaScript request behaves differently from what the browser sent.
Flag-by-flag mapping
Here's how each common cURL flag translates to Fetch and Axios. If you've been following along with the earlier sections, most of these will look familiar.
| cURL flag | What it does | Fetch equivalent | Axios equivalent |
| --- | --- | --- | --- |
| -X METHOD | Sets the HTTP method | method: "METHOD" | method: "METHOD" |
| -d '{"key":"value"}' | Sends a request body | body: JSON.stringify({key: "value"}) | data: { key: "value" } |
| --data-urlencode | Sends URL-encoded data | body: new URLSearchParams({key: "value"}) | data: new URLSearchParams({key: "value"}) |
| -u user:pass | Basic authentication | headers: { "Authorization": "Basic " + btoa("user:pass") } | auth: { username: "user", password: "pass" } |
| --compressed | Accepts gzip/deflate | Automatic, no action needed | Automatic, no action needed |
| -x host:port | Routes through a proxy | Custom agent (see proxy section) | httpAgent / httpsAgent with proxy agent |
| -k / --insecure | Skips TLS verification | agent: new https.Agent({ rejectUnauthorized: false }) | httpsAgent: new https.Agent({ rejectUnauthorized: false }) |
| -L / --location | Follows redirects | On by default (disable with redirect: "manual") | On by default (disable with maxRedirects: 0) |
| -o filename | Saves response to a file | fs.writeFile() after fetching as ArrayBuffer | fs.writeFile() with responseType: "arraybuffer" |
Full example: translating a real cURL command
Let's take a GitHub API request that creates an issue, the kind of command you'd realistically copy from documentation or DevTools.
The cURL version:
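A representative command; OWNER, REPO, and YOUR_TOKEN are placeholders:

```shell
curl -X POST "https://api.github.com/repos/OWNER/REPO/issues" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  -d '{"title":"Bug report","body":"Something broke."}'
```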
Translated to Fetch:
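Flag by flag, that becomes the following (same placeholders):

```javascript
const response = await fetch("https://api.github.com/repos/OWNER/REPO/issues", {
  method: "POST",                                // -X POST
  headers: {
    "Authorization": "Bearer YOUR_TOKEN",        // -H
    "Accept": "application/vnd.github+json",     // -H
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ title: "Bug report", body: "Something broke." }), // -d
});

if (!response.ok) throw new Error(`HTTP ${response.status}`);
const issue = await response.json();
console.log(issue.html_url);
```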
Translated to Axios:
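The same request in Axios (same placeholders):

```javascript
const axios = require("axios");

axios.post(
  "https://api.github.com/repos/OWNER/REPO/issues",
  { title: "Bug report", body: "Something broke." }, // -d, serialized for you
  {
    headers: {
      "Authorization": "Bearer YOUR_TOKEN",          // -H
      "Accept": "application/vnd.github+json",       // -H
    },
  }
).then((response) => console.log(response.data.html_url));
```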
Same result, different trade-offs. The Fetch version is more explicit, as you see every step. The Axios version is shorter because it handles JSON serialization and POST body formatting automatically. Neither is wrong; it depends on what your project already uses and how much boilerplate you're willing to manage.
Translating basic auth
cURL's -u flag trips people up because there's no direct one-to-one option in Fetch. You need to construct the Authorization header with a Base64-encoded string manually.
cURL:
Fetch:
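A sketch against the same httpbin.org test endpoint:

```javascript
// Buffer in Node.js; in the browser, use btoa("user:passwd") instead
const credentials = Buffer.from("user:passwd").toString("base64");

const response = await fetch("https://httpbin.org/basic-auth/user/passwd", {
  headers: { "Authorization": `Basic ${credentials}` },
});

const data = await response.json();
console.log(data.authenticated);
```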
Meanwhile, Axios has a dedicated auth option that does this for you:
Downloading files
cURL's -o flag saves the response directly to a file. In JavaScript, you fetch the response as binary data and write it yourself.
cURL:
Fetch (Node.js):
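A sketch (ES module syntax; httpbin.org's sample image stands in for your file):

```javascript
import fs from "node:fs/promises";

const response = await fetch("https://httpbin.org/image/png");
const buffer = Buffer.from(await response.arrayBuffer()); // raw bytes, not JSON
await fs.writeFile("image.png", buffer);
```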
Axios:
The key detail is responseType: 'arraybuffer' in Axios. Without it, Axios tries to parse the response as JSON, and you get garbage.
Online converters
If you'd rather skip the manual translation, curlconverter.com takes a pasted cURL command and generates Fetch, Axios, or plain Node.js code automatically. It's useful for long, complex commands where counting flags and quotes by hand is tedious.
That said, understanding the mapping yourself is more practical in the long run. Converters are great for one-off translations. When you're iterating on a scraper or debugging a failing request, knowing which flag maps to which option means you can fix things in seconds instead of switching between tabs.
Adding proxy support for web scraping
Here's where your JavaScript code meets the real world. You've got a working script, you've translated your cURL command, everything runs perfectly on localhost, and then you hit a site that rate-limits you after 20 requests, blocks your IP entirely, or serves different content depending on the visitor's country. Welcome to scraping.
Why proxies matter for JavaScript scrapers
Any site worth scraping has some form of anti-bot protection. At the simplest level, that's rate limiting: too many requests from one IP in a short window, and you're throttled or blocked outright. More sophisticated sites track IP reputation, detect datacenter ranges, analyze request patterns, and serve CAPTCHAs the moment anything looks automated.
Rotating your exit IP sidesteps most of this. Instead of millions of requests hitting a server from one address, you get a bunch of requests from different residential IPs spread across multiple countries.
cURL proxy integration
Before we get into the JavaScript setup, here's the cURL equivalent for reference, useful if you're debugging a proxy config from the terminal before translating it to code.
curl -x http://username:password@gate.decodo.com:7000 https://ip.decodo.com/ip
The -x flag specifies the proxy. Run this a few times, and you'll see a different IP each time.
You can also set the HTTPS_PROXY environment variable, so cURL picks it up automatically without the -x flag:
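The username and password are placeholders for your own proxy credentials:

```shell
export HTTPS_PROXY="http://username:password@gate.decodo.com:7000"
curl https://ip.decodo.com/ip
```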
This same pattern translates directly to Node.js, with one catch: neither Fetch nor Axios has native proxy support in Node. You'll need a helper package.
Proxy setup in Axios
Install the https-proxy-agent package:
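The package is on npm:

```shell
npm install https-proxy-agent
```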
Now wire it into Axios:
Run this a handful of times, and you'll see the exit IP change on each call; that's the rotating proxy doing its job.
Proxy setup in Fetch (Node.js)
Fetch needs the agent to be passed slightly differently depending on your Node.js version. Using the undici built-in (which powers Fetch in Node.js 18+):
undici ships with Node.js 18+ and gives you proper proxy support through the dispatcher option. If you're on an older Node.js, node-fetch paired with https-proxy-agent works the same way as the Axios example above.
Residential vs. datacenter proxies
Not all proxies are equal, and the type you pick matters a lot for scraping.
Datacenter proxies live in server farms. They're fast, cheap, and easy to get in bulk, but they're also easy for target sites to identify. Any decent bot detection system has a list of known datacenter IP ranges and treats them with suspicion by default.
Residential proxies are IPs assigned by actual ISPs to actual homes. From the target server's perspective, a request through a residential proxy looks exactly like a real person browsing from their couch. They're harder to detect, harder to block, and the only realistic option for scraping well-defended sites.
For most serious scraping work, residential proxies are the default choice. Decodo's residential pool covers 195+ countries and 115M+ IPs, which means you can geo-target specific regions without constantly running into blocks.
Choosing the right approach
Pick based on your environment and constraints. Here's a basic breakdown:
| Approach | Works in browser | Works in Node.js | Dependencies | Proxy support | Recommended for |
| --- | --- | --- | --- | --- | --- |
| Fetch (native) | Yes | Yes (18+) | None | Manual setup | Simple API calls, lightweight scripts |
| Axios | Yes | Yes | 1 package | Via agent | Default choice for scraping and APIs |
| child_process + cURL | No | Yes | None | Native cURL flags | Reusing existing cURL commands |
| node-libcurl | No | Yes | Native addon | Built-in | Advanced networking control |
| Web Scraping API | Yes | Yes | API call | Handled for you | Anti-bot, CAPTCHA, JS-heavy sites |
General takeaway:
- Default to Axios for most Node.js scraping work
- Use Fetch when you want minimalism, or when you're in the browser
- Treat child_process and node-libcurl as edge cases, not starting points
- If you're fighting rate limits, CAPTCHAs, or fingerprinting, the HTTP client isn't the bottleneck anymore. Offload it to a scraping solution
Best practices and common mistakes
These are the things that quietly break scraping scripts in production. Fix them up front, and you avoid hours of debugging later.
- Set a timeout on every request. Fetch has no timeout by default, and Axios will happily wait forever unless you configure it. Use AbortController with Fetch or the timeout option in Axios to prevent hung requests from blocking your process.
- Use a realistic User-Agent. Many sites reject requests with missing or default headers like axios/1.x.x. Set a User-Agent that looks like a real browser to reduce unnecessary blocks.
- Handle 4xx and 5xx responses properly. Fetch doesn't throw on HTTP errors, so you need to checkĀ response.ok manually. Axios throws by default, but you should still log status codes and response bodies to understand failures quickly.
- Don't hardcode credentials. API keys, tokens, and proxy credentials should live in environment variables, not in your code. This keeps secrets out of version control and makes deployments safer.
- Implement retries with exponential backoff. Temporary failures like network hiccups, 429s, and 503s are normal in scraping. Retry with increasing delays instead of hammering the server at fixed intervals.
- Reuse Axios instances. Creating a new client for every request kills connection reuse and adds overhead. Use axios.create() once and share it across your app to keep connections warm and configs consistent.
- Watch out for CORS in the browser. If a request works in cURL but fails in the browser, CORS is usually the culprit. There's no client-side fix; you need to route the request through your own backend.
- Don't overbuild around blocked requests. If you're stacking retries, rotating headers, and still getting blocked, the issue isn't your HTTP client. At that point, switch to a scraping API or a proxy-backed solution instead of patching symptoms.
The pattern is simple: control timeouts, handle failures, and avoid leaking state. Do that consistently, and your HTTP layer stops being the problem.
Final thoughts
You have four ways to bring cURL-style requests into JavaScript: child_process for running raw cURL commands, node-libcurl for low-level control, Fetch for a built-in and dependency-free option, and Axios for a cleaner, production-friendly experience. In practice, most API and scraping work is covered by Fetch or Axios, with Axios being the default when you want less boilerplate and better error handling. The other two exist for edge cases, not everyday use. And once you start dealing with heavy bot protection, retries, and proxy rotation, the HTTP client stops being the interesting part; that is where handing things off to a scraping API makes more sense than building around limitations.
Scraping shouldn't be this hard
Replace proxy configs, retry logic, and fingerprint workarounds with a single API call that returns clean data.
About the author

Zilvinas Tamulis
Technical Copywriter
A technical writer with over 4 years of experience, Žilvinas blends his studies in Multimedia & Computer Design with practical expertise in creating user manuals, guides, and technical documentation. His work includes developing web projects used by hundreds daily, drawing from hands-on experience with JavaScript, PHP, and Python.
Connect with Žilvinas via LinkedIn
All information on Decodo Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Decodo Blog or any third-party websites that may be linked therein.


