
How to Use cURL in JavaScript: Fetch, Axios, and Best Practices

Your cURL command works flawlessly in the terminal. It has for weeks. Then your boss asks, "Can you make this run in JavaScript?" and suddenly you're here. Good news: you have options. You can run the system cURL binary directly from Node.js, or you can ditch cURL entirely and use a native JavaScript HTTP client that does the same job. This article walks through both paths – child_process, node-libcurl, Fetch, and Axios – plus a flag-by-flag cURL-to-JS translation guide and a decision framework so you don't pick the wrong one.

TL;DR

  • Fetch is the built-in JavaScript equivalent of cURL and works in both browsers and Node.js 18+ with no dependencies
  • Axios is the recommended default for Node.js projects thanks to automatic JSON parsing, better error handling, and built-in timeouts
  • Running cURL via child_process works, but it's best kept for quick scripts or when you already have a working command
  • node-libcurl gives you full libcurl power, but it's a niche choice for advanced networking needs
  • Translating cURL to JavaScript is a flag-by-flag mapping, not a one-to-one copy-and-paste
  • Proxies are essential for scraping at scale, and neither Fetch nor Axios supports them natively without extra setup
  • If you're dealing with CAPTCHAs or heavy bot protection, switching to a scraping API is more practical than tweaking HTTP clients

What "cURL in JavaScript" actually means

Before writing any code, it helps to understand the split. cURL is a command-line tool. It runs on your system, not in a browser. You can't call it from browser-side JavaScript – browsers simply don't allow your code to run system programs.

The answer depends on where your JavaScript runs.

In the browser, you use the Fetch API. It can do the same things cURL does – send GET and POST requests, set custom headers, and include a request body. The one catch is CORS. If you're making a request to a different domain, the server on the other end has to allow it explicitly. A cURL command might fail in the browser, not because your code is wrong, but because the server never said, "yes, other websites can call me." There's no client-side workaround for this.

In Node.js, you have two options. You can run the actual cURL program from your script using the built-in child_process module – basically telling Node.js "run this terminal command for me and give me the result." Or you can skip cURL entirely and use a JavaScript HTTP library (Fetch, Axios, or node-libcurl) that makes the same kinds of requests without needing cURL installed at all.

The practical takeaway: for almost every JavaScript project, using a built-in HTTP library is the better choice. It works everywhere your code runs, it's easier to test, and it doesn't break when someone deploys to a server that doesn't have cURL installed. Shelling out to cURL has its place, but it's the exception.

If you're coming from a cURL-heavy workflow, it's worth brushing up on how cURL GET requests work and how cURL handles proxies before diving into the JavaScript equivalents below.

Running system cURL from Node.js with child_process

The most literal way to use cURL in JavaScript is to tell Node.js to run a cURL command in a shell and hand back the output. It's not elegant, but it gets the job done when you already have a working cURL command and just need it to run from a Node.js script.

You'd reach for this approach when you're migrating a shell script to Node.js, automating something quick, or dealing with a cURL command so complex that translating it to Fetch or Axios isn't worth the effort.

Basic GET request with exec()

The child_process module is built into Node.js with no installation required. The exec() function runs a shell command and gives you the output as a string.

const { exec } = require('child_process');
exec('curl -s https://jsonplaceholder.typicode.com/posts/1', (error, stdout, stderr) => {
  if (error) {
    console.error('Command failed:', error.message);
    return;
  }
  if (stderr) {
    console.error('cURL error output:', stderr);
    return;
  }
  const data = JSON.parse(stdout);
  console.log(data.title);
});

The -s flag tells cURL to run silently (no progress bar). Without it, you'll get download stats mixed into your output, which makes JSON parsing a mess.

Step-by-step breakdown:

  • exec() runs the cURL command in a child shell process.
  • When the command finishes, the callback fires with three arguments – error (if the command itself failed), stdout (the response body), and stderr (any error output from cURL).
  • Since stdout is a raw string, you need to JSON.parse() it yourself. There's no automatic parsing here.

Synchronous version with execSync()

If you're writing a simple script and don't need async behavior, execSync() blocks until the command finishes and returns the output directly.

const { execSync } = require('child_process');
try {
  const output = execSync('curl -s https://jsonplaceholder.typicode.com/users/1');
  const user = JSON.parse(output.toString());
  console.log(user.name, user.email);
} catch (error) {
  console.error('Request failed:', error.message);
}

Simpler to read, but it freezes your entire Node.js process until cURL returns. Fine for a quick script, not something you want in a server.

Sending a POST request

When you need to send data, like creating a new resource, the cURL command gets longer. Here's a POST request with a JSON body:

const { exec } = require('child_process');
const command = `curl -s -X POST https://jsonplaceholder.typicode.com/posts \
  -H "Content-Type: application/json" \
  -d '{"title": "Test post", "body": "Hello from cURL", "userId": 1}'`;
exec(command, (error, stdout, stderr) => {
  if (error) {
    console.error('Command failed:', error.message);
    return;
  }
  const response = JSON.parse(stdout);
  console.log('Created post with ID:', response.id);
});

Notice how the command string is getting unwieldy. You're building a shell command inside a JavaScript string, juggling quotes inside quotes, escaping things. It works, but you can probably feel why this approach doesn't scale.

Safer execution with spawnSync()

When your cURL command has many flags and headers, building it as a single string invites shell-injection problems. spawnSync() takes an array of arguments instead, so each flag is a separate element.

const { spawnSync } = require('child_process');
const result = spawnSync('curl', [
  '-s',
  '-X', 'GET',
  '-H', 'Accept: application/json',
  '-H', 'User-Agent: my-node-script/1.0',
  'https://jsonplaceholder.typicode.com/posts?_limit=3'
]);
if (result.error) {
  console.error('Failed to run cURL:', result.error.message);
} else if (result.status !== 0) {
  console.error('cURL exited with code:', result.status);
  console.error(result.stderr.toString());
} else {
  const posts = JSON.parse(result.stdout.toString());
  posts.forEach(post => console.log(`- ${post.title}`));
}

Each argument is its own array element, so there's no risk of a malformed string accidentally running something you didn't intend. If you're passing any dynamic input into a cURL command (URLs from user input, variable header values), always use spawnSync() or spawn() over exec().

When this approach falls apart

cURL inside JavaScript works, but let's be honest about the trade-offs:

  • cURL must be installed. If your code runs on a minimal Docker image, a serverless function, or a CI environment that strips non-essential binaries, cURL might not be there.
  • Output is just a string. You're parsing raw text every time. If cURL returns an unexpected error page instead of JSON, your script crashes unless you've wrapped everything carefully.
  • It's hard to test. Mocking a child process is significantly more annoying than mocking an HTTP client.
  • Error handling is clunky. HTTP status codes aren't surfaced automatically – you'd need to add -w "%{http_code}" to the cURL command and parse it out of the output yourself (a rough sketch of that follows below).
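
If you do need the status code from a shelled-out cURL call, one workaround is to append it to the output with -w and split it back apart afterwards. This is only a sketch of that idea, reusing the placeholder endpoint from the earlier examples:

const { exec } = require('child_process');
// Ask cURL to print the body, then a newline, then the HTTP status code.
exec('curl -s -w "\\n%{http_code}" https://jsonplaceholder.typicode.com/posts/1', (error, stdout) => {
  if (error) {
    console.error('Command failed:', error.message);
    return;
  }
  const lines = stdout.trim().split('\n');
  const statusCode = Number(lines.pop()); // last line is the status code we appended
  const body = lines.join('\n');
  if (statusCode >= 400) {
    console.error('HTTP error:', statusCode);
    return;
  }
  console.log(JSON.parse(body).title);
});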

For a one-off script or a quick migration, shelling out to cURL is perfectly fine. For anything that'll run in production, the other options covered below handle all of this more cleanly.

Skip the boilerplate

Decodo's Web Scraping API handles proxies, CAPTCHAs, and anti-bot detection so your code stays short and your requests actually land.

Using node-libcurl for direct libcurl bindings

If child_process is "run cURL from JavaScript," node-libcurl is "put cURL's engine inside JavaScript." It's a native addon that wraps libcurl and exposes it directly to Node.js. You get full access to libcurl's feature set without shelling out to anything.

That said, this is a specialist tool. For most projects, Fetch or Axios is simpler and more than enough. You'd reach for node-libcurl when you specifically need low-level TLS configuration, custom cipher suites, CURLOPT_* options, or multi-handle concurrency for high-throughput scraping. If none of that means anything to you yet, you can safely skip to the next section and come back here if you ever need it.

Installation

Install node-libcurl with:

npm install node-libcurl

One thing to know upfront: node-libcurl is a native addon, which means it compiles C++ code during installation. It needs node-gyp and a C++ build toolchain on your system. On most dev machines, this is already there. In CI pipelines or Docker containers, you might need to install build tools first – and that can turn a 5-second npm install into a minor adventure.
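
As a rough example, on a Debian or Ubuntu-based image the toolchain install might look like this (package names vary by distro, so treat it as a starting point rather than a recipe):

apt-get update && apt-get install -y python3 make g++
npm install node-libcurl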

GET request with the curly interface

node-libcurl has a few API levels. The simplest is curly, a convenience wrapper that gives you async/await syntax and feels closer to what you'd expect from a modern JavaScript HTTP client.

const { curly } = require('node-libcurl');
async function getPost() {
  try {
    const { statusCode, data } = await curly.get(
      'https://jsonplaceholder.typicode.com/posts/1'
    );
    console.log('Status:', statusCode);
    console.log('Title:', data.title);
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
getPost();

curly automatically parses JSON responses, so data is already an object. The statusCode comes back as a number. It's fairly clean for what's happening under the hood – a full libcurl request cycle.

POST request with curly

Send a POST request with a JSON body:

const { curly } = require('node-libcurl');
async function createPost() {
  try {
    const { statusCode, data } = await curly.post(
      'https://jsonplaceholder.typicode.com/posts',
      {
        postFields: JSON.stringify({
          title: 'Test post',
          body: 'Sent via node-libcurl',
          userId: 1,
        }),
        httpHeader: [
          'Content-Type: application/json',
        ],
      }
    );
    console.log('Status:', statusCode);
    console.log('Created post ID:', data.id);
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
createPost();

Notice the differences from Fetch or Axios. Headers are passed as an array of strings ("Key: Value" format) rather than an object. The body goes into postFields as a string. It's not hard, just different – the naming mirrors libcurl's C API, which is why it feels a bit alien if you're used to JavaScript conventions.

Using the Curl class for fine-grained control

When you need access to specific CURLOPT_* options, the lower-level Curl class gives you full control:

const { Curl } = require('node-libcurl');
const curl = new Curl();
curl.setOpt('URL', 'https://jsonplaceholder.typicode.com/posts/1');
curl.setOpt('FOLLOWLOCATION', true);
curl.setOpt('TIMEOUT', 10);
curl.setOpt('HTTPHEADER', [
  'Accept: application/json',
  'User-Agent: my-node-script/1.0',
]);
curl.on('end', (statusCode, body, headers) => {
  console.log('Status:', statusCode);
  console.log('Title:', JSON.parse(body).title);
  curl.close();
});
curl.on('error', (error, errorCode) => {
  console.error('Request failed:', error.message);
  console.error('cURL error code:', errorCode);
  curl.close();
});
curl.perform();

This is where node-libcurl earns its keep. Every option you can set in a cURL command with --something, you can set here with curl.setOpt(). Verbose output, custom DNS resolution, specific TLS versions, and proxy tunneling are all available.

Fetch API: The native JavaScript HTTP client

If you're starting a new project and wondering which HTTP client to use, start here. Fetch is built into every modern browser and comes with Node.js 18+ as a global – no packages, no imports, no npm install. It's just there.

For most developers looking for a cURL equivalent in JavaScript, Fetch is the answer. It won't do everything cURL can, but it covers the vast majority of use cases with zero dependencies.

Basic GET request

async function getPost() {
  try {
    const response = await fetch('https://jsonplaceholder.typicode.com/posts/1');
    const data = await response.json();
    console.log(data.title);
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
getPost();

Two lines to make a request and parse the response. That's it. No modules to require, no clients to instantiate. If you're on Node.js 16 or below, you'll need the node-fetch package to get the same API, but from Node.js 18 onward, fetch() is global just like it is in the browser.

POST request with a JSON body

Sending data works the same way a cURL POST does. You specify the method, set the content type, and pass the body. The syntax is just more verbose than a one-liner in the terminal.

async function createPost() {
  try {
    const response = await fetch('https://jsonplaceholder.typicode.com/posts', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        title: 'Test post',
        body: 'Sent via Fetch',
        userId: 1,
      }),
    });
    const data = await response.json();
    console.log('Created post ID:', data.id);
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
createPost();

One thing that trips people up: the body must be a string. You can't pass a plain JavaScript object, and you need JSON.stringify() every time. Axios handles this automatically, which is one reason people reach for it instead.

Custom headers

The headers option takes a plain object. Each key-value pair is equivalent to an -H flag in cURL.

const response = await fetch('https://api.github.com/user', {
  headers: {
    'Authorization': 'Bearer ghp_your_token_here',
    'Accept': 'application/vnd.github.v3+json',
    'User-Agent': 'my-node-script/1.0',
  },
});

You can also use the Headers constructor if you need to build headers programmatically, but for most cases, the plain object works fine.
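
For instance, here's a quick sketch of the programmatic route, reusing the same GitHub endpoint from above (the token is read from an environment variable as an example):

const headers = new Headers({
  'Accept': 'application/vnd.github.v3+json',
  'User-Agent': 'my-node-script/1.0',
});
headers.set('Authorization', `Bearer ${process.env.GITHUB_TOKEN}`);
const response = await fetch('https://api.github.com/user', { headers });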

Handling errors

Here's the thing about Fetch that catches almost everyone the first time: it doesn't throw on HTTP errors. A 404, a 500, a 403 – Fetch considers all of these successful responses because the server did respond. It only throws on actual network failures, like the server being unreachable.

async function getPost() {
  try {
    const response = await fetch('https://jsonplaceholder.typicode.com/posts/9999');
    if (!response.ok) {
      console.error(`Server returned ${response.status}: ${response.statusText}`);
      return;
    }
    const data = await response.json();
    console.log(data.title);
  } catch (error) {
    // This only fires on network errors, not HTTP errors
    console.error('Network error:', error.message);
  }
}
getPost();

Always check response.ok before parsing the body. If you skip this, you'll eventually try to JSON.parse() an HTML error page and spend 20 minutes wondering why your data is undefined.

Timeouts

cURL has --max-time. Axios has a timeout option. Fetch has... nothing. By default, a Fetch request will hang indefinitely if the server never responds. You need to wire up an AbortController yourself.

async function fetchWithTimeout(url, timeoutMs = 5000) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, { signal: controller.signal });
    clearTimeout(timeoutId);
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    clearTimeout(timeoutId);
    if (error.name === 'AbortError') {
      throw new Error(`Request timed out after ${timeoutMs}ms`);
    }
    throw error;
  }
}

It's not a lot of code, but it's code you have to write every time, or abstract into a helper. This is one of the biggest practical arguments for Axios over raw Fetch in production scraping work.
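
Using the helper is then a one-liner – for example, with the same placeholder API and a 3-second limit:

// Throws if the server takes longer than 3 seconds to respond.
const post = await fetchWithTimeout('https://jsonplaceholder.typicode.com/posts/1', 3000);
console.log(post.title);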

CORS in the browser

If you're running Fetch in the browser and making requests to a different domain, CORS rules apply. The server has to include Access-Control-Allow-Origin in its response headers, or the browser blocks the response before your code ever sees it.

This is why a cURL command can work perfectly in the terminal but fail the moment you paste the same logic into browser-side JavaScript. cURL doesn't care about CORS – it's not a browser. But fetch() in the browser absolutely does.

There's no client-side fix. Your options are:

  • Ask the API provider to add CORS headers (sometimes possible, often not)
  • Proxy the request through your own server, where CORS doesn't apply (a sketch of this follows the list)
  • Use a serverless function as a middleman between your frontend and the target API
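
As a rough illustration of the second option, here's a minimal Express endpoint that forwards the request server-side, so the browser only ever talks to your own domain. Express, the route, and the target URL are all assumptions for the example:

const express = require('express');
const app = express();

app.get('/api/posts/:id', async (req, res) => {
  try {
    // The server makes the cross-origin request; CORS doesn't apply here (Node.js 18+ for global fetch).
    const upstream = await fetch(`https://jsonplaceholder.typicode.com/posts/${req.params.id}`);
    const data = await upstream.json();
    res.json(data); // served back to the browser from the same origin
  } catch (error) {
    res.status(502).json({ error: error.message });
  }
});

app.listen(3000);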

If you're doing web scraping with JavaScript, this is a non-issue – scraping runs server-side in Node.js, where CORS doesn't exist.

Axios: the developer-friendly HTTP library

Note: On March 31, 2026, an attacker compromised the npm credentials of a lead Axios maintainer and published two backdoored versions – 1.14.1 and 0.30.4. Both contained a hidden dependency that silently installed a cross-platform remote access trojan on any system that ran npm install. The malicious versions were live for roughly three hours before npm pulled them. If you installed Axios during that window, treat the system as compromised, roll back to 1.14.0 or 0.30.3, and rotate any credentials that were accessible on the affected machine.

With that out of the way, Axios is still one of the most popular HTTP libraries in the JavaScript ecosystem for good reason. It works in both Node.js and the browser, parses JSON automatically, has built-in timeout support, and gives you cleaner error handling than raw Fetch. For Node.js projects that need more ergonomics than fetch() offers, it's been the go-to choice for years.

Installation

npm install axios@1.14.0

We're pinning to 1.14.0 here deliberately – it's the last known-good release before the compromised versions mentioned above.

Basic GET request

const axios = require('axios');
async function getPost() {
  try {
    const response = await axios.get(
      'https://jsonplaceholder.typicode.com/posts/1'
    );
    console.log(response.data.title);
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
getPost();

Notice what's missing compared to Fetch: no .json() call. Axios detects the Content-Type header and parses the response body automatically. response.data is already a JavaScript object. It's a small thing, but it adds up over hundreds of requests.

POST request

Sending a POST request is similarly streamlined. Pass a JavaScript object directly, and Axios serializes it to JSON and sets the Content-Type header for you.

const axios = require('axios');
async function createPost() {
  try {
    const response = await axios.post(
      'https://jsonplaceholder.typicode.com/posts',
      {
        title: 'Test post',
        body: 'Sent via Axios',
        userId: 1,
      }
    );
    console.log('Created post ID:', response.data.id);
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
createPost();

No JSON.stringify(), no manual Content-Type header. Compare this to the Fetch version of the same request, and you'll see why people reach for Axios.

Custom headers

Setting headers works like cURL's -H flag – pass them as an object in the config parameter.

const axios = require('axios');
async function getGitHubUser() {
  try {
    const response = await axios.get('https://api.github.com/user', {
      headers: {
        'Authorization': 'Bearer ghp_your_token_here',
        'Accept': 'application/vnd.github.v3+json',
        'User-Agent': 'my-node-script/1.0',
      },
    });
    console.log(response.data.login);
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
getGitHubUser();

Timeout

One of Axios's biggest practical advantages over Fetch is the built-in timeout option:

const response = await axios.get('https://slow-api.example.com/data', {
  timeout: 5000, // 5 seconds
});

If the server doesn't respond within the limit, Axios throws an error with code: 'ECONNABORTED'.

Error handling

Unlike Fetch, Axios actually throws errors on 4xx and 5xx responses. This means your catch block handles both network failures and HTTP errors, which is usually what you want.

const axios = require('axios');
async function getPost() {
  try {
    const response = await axios.get(
      'https://jsonplaceholder.typicode.com/posts/9999'
    );
    console.log(response.data);
  } catch (error) {
    if (error.response) {
      // Server responded with a non-2xx status
      console.error('Status:', error.response.status);
      console.error('Body:', error.response.data);
    } else if (error.request) {
      // Request was sent, but no response received
      console.error('No response from server');
    } else {
      // Something went wrong setting up the request
      console.error('Setup error:', error.message);
    }
  }
}
getPost();

The three-tier structure (error.response, error.request, error.message) covers every failure mode. You'll know exactly where things went wrong without parsing status codes out of a raw string.

Interceptors

This is where Axios pulls ahead for any project with more than a handful of requests. Interceptors let you run logic on every request or response globally – add auth headers, log requests, implement retry logic, all without touching individual calls.

const axios = require('axios');
// Add an auth header to every outgoing request
axios.interceptors.request.use((config) => {
  config.headers['Authorization'] = `Bearer ${process.env.API_TOKEN}`;
  return config;
});
// Log every failed response
axios.interceptors.response.use(
  (response) => response,
  (error) => {
    if (error.response) {
      console.error(
        `[${error.response.status}] ${error.config.method.toUpperCase()} ${error.config.url}`
      );
    }
    return Promise.reject(error);
  }
);

For scraping, a response interceptor that retries on 429 (rate limit) or 503 (server overload) with exponential backoff is practically essential. You wire it up once and forget about it.
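
Here's one way such a retry interceptor could look – a minimal sketch assuming a shared Axios instance, a cap of three retries, and a simple doubling delay. Tune the cap and the base delay for your target:

const axios = require('axios');
const client = axios.create({ timeout: 10000 });

client.interceptors.response.use(
  (response) => response,
  async (error) => {
    const config = error.config || {};
    const status = error.response ? error.response.status : null;
    config.__retryCount = config.__retryCount || 0;
    // Only retry rate-limit and overload responses, and give up after three attempts.
    if ((status !== 429 && status !== 503) || config.__retryCount >= 3) {
      return Promise.reject(error);
    }
    config.__retryCount += 1;
    const delayMs = 500 * 2 ** config.__retryCount; // 1s, 2s, 4s
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    return client(config); // re-issue the original request
  }
);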

Creating a reusable instance

When all your requests share the same base URL, auth headers, and timeout, create a configured instance with axios.create() instead of repeating the same config everywhere.

const axios = require('axios');
const api = axios.create({
  baseURL: 'https://api.github.com',
  timeout: 10000,
  headers: {
    'Authorization': `Bearer ${process.env.GITHUB_TOKEN}`,
    'Accept': 'application/vnd.github.v3+json',
    'User-Agent': 'my-node-script/1.0',
  },
});
// Now every call uses the shared config
const user = await api.get('/user');
const repos = await api.get('/user/repos');

This also keeps connection pools alive between requests, which matters when you're hitting the same API hundreds of times. Create the instance once, reuse it everywhere.

Translating cURL commands to JavaScript

This is the section you'll probably need to bookmark. You have a cURL command, and you need it in JavaScript. Rather than guessing, here's a systematic way to translate any cURL command flag by flag.

The "copy as cURL" workflow

Before you translate anything, you need the cURL command. If you're trying to replicate a request your browser made, Chrome and Firefox hand it to you for free:

  1. Open DevTools (F12 or Ctrl+Shift+I).
  2. Go to the Network tab.
  3. Find the request you want to replicate.
  4. Right-click it and select Copy, then Copy as cURL.

That gives you the exact command as a cURL string. This is the fastest way to get a working starting point, especially when you're debugging why your JavaScript request behaves differently from what the browser sent.

Flag-by-flag mapping

Here's how each common cURL flag translates to Fetch and Axios. If you've been following along with the earlier sections, most of these will look familiar.

| cURL flag | What it does | Fetch equivalent | Axios equivalent |
|---|---|---|---|
| -X METHOD | Sets the HTTP method | method: "METHOD" | method: "METHOD" |
| -H "Key: Value" | Adds a header | headers: { "Key": "Value" } | headers: { "Key": "Value" } |
| -d '{"key":"value"}' | Sends a request body | body: JSON.stringify({key: "value"}) | data: { key: "value" } |
| --data-urlencode | Sends URL-encoded data | body: new URLSearchParams({key: "value"}) | data: new URLSearchParams({key: "value"}) |
| -u user:pass | Basic authentication | headers: { "Authorization": "Basic " + btoa("user:pass") } | auth: { username: "user", password: "pass" } |
| --compressed | Accepts gzip/deflate | Automatic, no action needed | Automatic, no action needed |
| -x host:port | Routes through a proxy | Custom agent (see proxy section) | httpAgent / httpsAgent with proxy agent |
| -k / --insecure | Skips TLS verification | agent: new https.Agent({ rejectUnauthorized: false }) | httpsAgent: new https.Agent({ rejectUnauthorized: false }) |
| -L / --location | Follows redirects | On by default (disable with redirect: "manual") | On by default (disable with maxRedirects: 0) |
| -o filename | Saves response to a file | fs.writeFile() after fetching as ArrayBuffer | fs.writeFile() with responseType: "arraybuffer" |

Full example: translating a real cURL command

Let's take a GitHub API request that creates an issue – the kind of command you'd realistically copy from documentation or DevTools.

The cURL version:

curl -X POST https://api.github.com/repos/octocat/hello-world/issues \
  -H "Authorization: Bearer ghp_your_token_here" \
  -H "Accept: application/vnd.github.v3+json" \
  -H "User-Agent: my-script/1.0" \
  -d '{"title": "Bug report", "body": "Something is broken", "labels": ["bug"]}'

Translated to Fetch:

async function createIssue() {
  const response = await fetch(
    'https://api.github.com/repos/octocat/hello-world/issues',
    {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer ghp_your_token_here',
        'Accept': 'application/vnd.github.v3+json',
        'User-Agent': 'my-script/1.0',
      },
      body: JSON.stringify({
        title: 'Bug report',
        body: 'Something is broken',
        labels: ['bug'],
      }),
    }
  );
  if (!response.ok) {
    throw new Error(`GitHub API returned ${response.status}`);
  }
  const issue = await response.json();
  console.log('Created issue:', issue.html_url);
}

Translated to Axios:

const axios = require('axios');
async function createIssue() {
  const response = await axios.post(
    'https://api.github.com/repos/octocat/hello-world/issues',
    {
      title: 'Bug report',
      body: 'Something is broken',
      labels: ['bug'],
    },
    {
      headers: {
        'Authorization': 'Bearer ghp_your_token_here',
        'Accept': 'application/vnd.github.v3+json',
        'User-Agent': 'my-script/1.0',
      },
    }
  );
  console.log('Created issue:', response.data.html_url);
}

Same result, different trade-offs. The Fetch version is more explicit, as you see every step. The Axios version is shorter because it handles JSON serialization and POST body formatting automatically. Neither is wrong; it depends on what your project already uses and how much boilerplate you're willing to manage.

Translating basic auth

cURL's -u flag trips people up because there's no direct one-to-one option in Fetch. You need to construct the Authorization header with a Base64-encoded string manually.

cURL:

curl -u admin:secretpass https://api.example.com/protected

Fetch:

const response = await fetch('https://api.example.com/protected', {
  headers: {
    'Authorization': 'Basic ' + btoa('admin:secretpass'),
  },
});

Meanwhile, Axios has a dedicated auth option that does this for you:

const response = await axios.get('https://api.example.com/protected', {
  auth: {
    username: 'admin',
    password: 'secretpass',
  },
});

Downloading files

cURL's -o flag saves the response directly to a file. In JavaScript, you fetch the response as binary data and write it yourself.

cURL:

curl -o report.pdf https://example.com/files/report.pdf

Fetch (Node.js):

const fs = require('fs');
async function downloadFile() {
  const response = await fetch('https://example.com/files/report.pdf');
  const buffer = Buffer.from(await response.arrayBuffer());
  fs.writeFileSync('report.pdf', buffer);
  console.log('Downloaded report.pdf');
}

Axios:

const axios = require('axios');
const fs = require('fs');
async function downloadFile() {
  const response = await axios.get('https://example.com/files/report.pdf', {
    responseType: 'arraybuffer',
  });
  fs.writeFileSync('report.pdf', response.data);
  console.log('Downloaded report.pdf');
}

The key detail is responseType: 'arraybuffer' in Axios. Without it, Axios tries to parse the response as JSON, and you get garbage.

Online converters

If you'd rather skip the manual translation, curlconverter.com takes a pasted cURL command and generates Fetch, Axios, or plain Node.js code automatically. It's useful for long, complex commands where counting flags and quotes by hand is tedious.

That said, understanding the mapping yourself is more practical in the long run. Converters are great for one-off translations. When you're iterating on a scraper or debugging a failing request, knowing which flag maps to which option means you can fix things in seconds instead of switching between tabs.

Adding proxy support for web scraping

Here's where your JavaScript code meets the real world. You've got a working script, you've translated your cURL command, everything runs perfectly on localhost – and then you hit a site that rate-limits you after 20 requests, blocks your IP entirely, or serves different content depending on the visitor's country. Welcome to scraping.

Why proxies matter for JavaScript scrapers

Any site worth scraping has some form of anti-bot protection. At the simplest level, that's rate limiting – too many requests from one IP in a short window, and you're throttled or blocked outright. More sophisticated sites track IP reputation, detect datacenter ranges, analyze request patterns, and serve CAPTCHAs the moment anything looks automated.

Rotating your exit IP sidesteps most of this. Instead of millions of requests hitting a server from one address, you get a bunch of requests from different residential IPs spread across multiple countries.

cURL proxy integration

Before we get into the JavaScript setup, here's the cURL equivalent for reference – useful if you're debugging a proxy config from the terminal before translating it to code.

curl -x http://username:password@gate.decodo.com:7000 https://ip.decodo.com/ip

The -x flag specifies the proxy. Run this a few times, and you'll see a different IP each time.

You can also set the HTTPS_PROXY environment variable, so cURL picks it up automatically without the -x flag:

export HTTPS_PROXY="http://user:pass@gate.decodo.com:7000"
curl https://ip.decodo.com/ip

This same pattern translates directly to Node.js, with one catch: neither Fetch nor Axios has native proxy support in Node. You'll need a helper package.

Proxy setup in Axios

Install the https-proxy-agent package:

npm install https-proxy-agent

Now wire it into Axios:

const axios = require('axios');
const { HttpsProxyAgent } = require('https-proxy-agent');
const proxyAgent = new HttpsProxyAgent(
  'http://user:pass@gate.decodo.com:7000'
);
async function scrapeWithProxy() {
  try {
    const response = await axios.get('https://ip.decodo.com/ip', {
      httpsAgent: proxyAgent,
      httpAgent: proxyAgent,
    });
    console.log('Exit IP:', response.data.trim());
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
scrapeWithProxy();

Run this a handful of times, and you'll see the exit IP change on each call – that's the rotating proxy doing its job.

Proxy setup in Fetch (Node.js)

Fetch needs the agent to be passed slightly differently depending on your Node.js version. Using the undici built-in (which powers Fetch in Node.js 18+):

const { ProxyAgent, fetch } = require('undici');
const proxyAgent = new ProxyAgent(
  'http://user:pass@gate.decodo.com:7000'
);
async function scrapeWithProxy() {
  try {
    const response = await fetch('https://ip.decodo.com/ip', {
      dispatcher: proxyAgent,
    });
    const ip = await response.text();
    console.log('Exit IP:', ip.trim());
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
scrapeWithProxy();

undici ships with Node.js 18+ and gives you proper proxy support through the dispatcher option. If you're on an older Node.js, node-fetch paired with https-proxy-agent works the same way as the Axios example above.
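
For reference, on older Node.js versions the node-fetch route might look like this – a sketch assuming node-fetch v2 (the CommonJS release), which accepts an agent option:

const fetch = require('node-fetch'); // v2 for require() support
const { HttpsProxyAgent } = require('https-proxy-agent');
const proxyAgent = new HttpsProxyAgent(
  'http://user:pass@gate.decodo.com:7000'
);
async function scrapeWithProxy() {
  try {
    // Same rotating gateway as the Axios example, passed via the agent option.
    const response = await fetch('https://ip.decodo.com/ip', { agent: proxyAgent });
    console.log('Exit IP:', (await response.text()).trim());
  } catch (error) {
    console.error('Request failed:', error.message);
  }
}
scrapeWithProxy();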

Residential vs. datacenter proxies

Not all proxies are equal, and the type you pick matters a lot for scraping.

Datacenter proxies live in server farms. They're fast, cheap, and easy to get in bulk – but they're also easy for target sites to identify. Any decent bot detection system has a list of known datacenter IP ranges and treats them with suspicion by default.

Residential proxies are IPs assigned by actual ISPs to actual homes. From the target server's perspective, a request through a residential proxy looks exactly like a real person browsing from their couch. They're harder to detect, harder to block, and the only realistic option for scraping well-defended sites.

For most serious scraping work, residential proxies are the default choice. Decodo's residential pool covers 195+ countries and 115M+ IPs, which means you can geo-target specific regions without constantly running into blocks.

Choosing the right approach

Pick based on your environment and constraints. Here's a basic breakdown:

| Approach | Works in browser | Works in Node.js | Dependencies | Proxy support | Recommended for |
|---|---|---|---|---|---|
| Fetch (native) | Yes | Yes (18+) | None | Manual setup | Simple API calls, lightweight scripts |
| Axios | Yes | Yes | 1 package | Via agent | Default choice for scraping and APIs |
| child_process + cURL | No | Yes | None | Native cURL flags | Reusing existing cURL commands |
| node-libcurl | No | Yes | Native addon | Built-in | Advanced networking control |
| Web Scraping API | Yes | Yes | API call | Handled for you | Anti-bot, CAPTCHA, JS-heavy sites |

General takeaway:

  • Default to Axios for most Node.js scraping work
  • Use Fetch when you want minimalism, or when you're in the browser
  • Treat child_process and node-libcurl as edge cases, not starting points
  • If you're fighting rate limits, CAPTCHAs, or fingerprinting, the HTTP client isn't the bottleneck anymore. Offload it to a scraping solution

Best practices and common mistakes

These are the things that quietly break scraping scripts in production. Fix them up front, and you avoid hours of debugging later.

  • Set a timeout on every request. Fetch has no timeout by default, and Axios will happily wait forever unless you configure it. Use AbortController with Fetch or the timeout option in Axios to prevent hung requests from blocking your process (a combined sketch follows this list).
  • Use a realistic User-Agent. Many sites reject requests with missing or default headers like axios/1.x.x. Set a User-Agent that looks like a real browser to reduce unnecessary blocks.
  • Handle 4xx and 5xx responses properly. Fetch doesn't throw on HTTP errors, so you need to check response.ok manually. Axios throws by default, but you should still log status codes and response bodies to understand failures quickly.
  • Don't hardcode credentials. API keys, tokens, and proxy credentials should live in environment variables, not in your code. This keeps secrets out of version control and makes deployments safer.
  • Implement retries with exponential backoff. Temporary failures like network hiccups, 429s, and 503s are normal in scraping. Retry with increasing delays instead of hammering the server with fixed intervals.
  • Reuse Axios instances. Creating a new client for every request kills connection reuse and adds overhead. Use axios.create() once and share it across your app to keep connections warm and configs consistent.
  • Watch out for CORS in the browser. If a request works in cURL but fails in the browser, CORS is usually the culprit. There's no client-side fix; you need to route the request through your own backend.
  • Don't overbuild around blocked requests. If you're stacking retries, rotating headers, and still getting blocked, the issue isn't your HTTP client. At that point, switch to a scraping API or a proxy-backed solution instead of patching symptoms.
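
To make the first few points concrete, here's a minimal sketch of a shared Axios instance with a timeout, a browser-like User-Agent, and a token read from the environment (the header string, endpoint, and variable name are just examples):

const axios = require('axios');

// Shared instance: timeout set once, realistic User-Agent, secrets from the environment.
const client = axios.create({
  timeout: 10000,
  headers: {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36',
    'Authorization': `Bearer ${process.env.API_TOKEN}`,
  },
});

const response = await client.get('https://api.example.com/data');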

The pattern is simple: control timeouts, handle failures, and avoid leaking state. Do that consistently, and your HTTP layer stops being the problem.

Final thoughts

You have four ways to bring cURL-style requests into JavaScript – child_process for running raw cURL commands, node-libcurl for low-level control, Fetch for a built-in and dependency-free option, and Axios for a cleaner, production-friendly experience. In practice, most API and scraping work is covered by Fetch or Axios, with Axios being the default when you want less boilerplate and better error handling. The other two exist for edge cases, not everyday use. And once you start dealing with heavy bot protection, retries, and proxy rotation, the HTTP client stops being the interesting part – that is where handing things off to a scraping API makes more sense than building around limitations.

Scraping shouldn't be this hard

Replace proxy configs, retry logic, and fingerprint workarounds with a single API call that returns clean data.

About the author

Zilvinas Tamulis

Technical Copywriter

A technical writer with over 4 years of experience, Žilvinas blends his studies in Multimedia & Computer Design with practical expertise in creating user manuals, guides, and technical documentation. His work includes developing web projects used by hundreds daily, drawing from hands-on experience with JavaScript, PHP, and Python.


Connect with Žilvinas via LinkedIn

All information on Decodo Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Decodo Blog or any third-party websites that may be linked therein.

Frequently asked questions

Can I use cURL directly in JavaScript?

In Node.js, you can run cURL directly using the built-in child_process module. It shells out to the system's cURL binary, captures the output, and hands it back to your script. It works, but it comes with strings attached: cURL has to be installed on the host, the output is raw text you'll need to parse yourself, and it's not portable to serverless or slim container environments. For most new Node.js projects, a native HTTP client is the better path.

What is the JavaScript equivalent of cURL?

The closest built-in equivalent is the Fetch API. It's available natively in all modern browsers and in Node.js 18+ with no imports or packages needed. You get the same core capabilities – GET, POST, custom headers, request bodies – just expressed in JavaScript syntax instead of command-line flags.

If you want something with more batteries included, Axios is the most popular library alternative. It handles JSON parsing automatically, has built-in timeout support, and throws proper errors on 4xx/5xx responses instead of making you check the response yourself.

Is Axios better than Fetch for web scraping in Node.js?

For scraping specifically, Axios tends to be the smoother experience. A few reasons why:

  • It parses JSON automatically, so you skip the .json() step on every response.
  • It has a built-in timeout option. Fetch doesn't, and you'd need to wire up an AbortController yourself.
  • It throws on 4xx/5xx status codes, which means your error handling catches failed requests without extra checks.
  • Interceptors let you bolt on retry logic, rotating headers, or auth tokens globally instead of repeating the same code in every request.

Fetch absolutely works for scraping, but you'll end up writing more boilerplate to get the same reliability. If you're making a handful of API calls, Fetch is fine. If you're making thousands of requests across rotating proxies with retry logic, Axios saves you real time.

How do I add a proxy to a JavaScript HTTP request?

Neither Fetch nor Axios has native proxy support in Node.js, so you'll need the https-proxy-agent package. Install it, create an agent with your proxy URL, and pass it into your request config.

With Axios, that looks like setting httpAgent and httpsAgent in the request options. With Fetch, you pass the agent as a dispatcher (or agent, depending on your Node.js version).

For scraping at any real scale, you'll want rotating residential proxies rather than a single static IP – otherwise you'll burn through that one address fast. Decodo's residential proxies rotate the exit IP on each request (or on a schedule you set), which keeps your scraper running without constantly hitting rate limits or blocks. The proxy setup section walks through the full configuration for both Axios and Fetch.

How do I convert a cURL command to JavaScript?

The fastest way is to break the cURL command down flag by flag. Each flag has a direct JavaScript equivalent: -X maps to the method option, -H becomes a key in the headers object, -d turns into the request body (or data in Axios), and so on.

If you're starting from a real request, Chrome and Firefox DevTools can do the heavy lifting. Open the Network tab, right-click any request, and choose "Copy as cURL." That gives you the exact command, which you can then translate to Fetch or Axios using the flag mapping.

