HTTPX vs. Requests vs. AIOHTTP: How to Choose the Right Python HTTP Client
Requests, HTTPX, and AIOHTTP all make HTTP requests, but they differ in how they handle concurrency. Requests is synchronous and has been the default since 2011. HTTPX gives you both sync and async with HTTP/2 support. AIOHTTP is async-only and faster at high concurrency, but has a steeper learning curve. The right choice depends on your async model, whether you need WebSockets or HTTP/2, and how much code you're willing to rewrite. This article covers architecture, performance data, proxy setup, migration paths, and common mistakes in production scraping setups.
Justinas Tamasevicius
Last updated: Mar 03, 2026
12 min read

TL;DR
- Requests is the simplest option, synchronous only, with the broadest third-party ecosystem of the three. Use it for scripts, prototypes, and low-concurrency workloads.
- HTTPX gives you both sync and async clients with the same API, plus HTTP/2 support. It's used by the OpenAI and Anthropic Python SDKs, and powers FastAPI's TestClient for async testing.
- AIOHTTP is async-only, designed from the ground up for high-concurrency workloads, and the only library here with a native WebSocket client. Choose it when throughput at scale is the priority.
What do Requests, HTTPX, and AIOHTTP each do best?
The core divide is the I/O model: Requests is sync-only, HTTPX is sync+async, AIOHTTP is async-only with no sync client.
Requests: synchronous and stable
Requests has been the default Python HTTP client since 2011 and is the baseline both HTTPX and AIOHTTP are measured against. It wraps urllib3, which handles connection pooling and keep-alive.
Architecture – synchronous and single-threaded. The Session object reuses TCP connections across requests. The third-party ecosystem (middleware, caching, auth adapters) is broader than either alternative's, built up over more than a decade.
Best for – quick scripts, simple API integrations, and synchronous codebases. For a deeper dive into Requests, see Requests guide.
HTTPX: sync and async from the same API
HTTPX (released 2019) was designed as a Requests-compatible HTTP client with native async support.
Architecture – two client classes: httpx.Client (sync) and httpx.AsyncClient (async). Both expose the same interface and feature set. HTTP/2 support (pip install httpx[http2]) enables multiplexing, meaning multiple in-flight requests over a single connection, which matters when making many requests to a single host. Neither Requests nor AIOHTTP offers HTTP/2.
It defaults to a 5-second timeout applied independently to each phase (connect, read, write, pool), whereas Requests has no default timeout at all. It requires an explicit follow_redirects=True to follow redirects – a breaking change from Requests that regularly causes confusion during migration. And its pluggable transport layer supports response mocking in tests without a live server.
Use HTTPX when you're migrating from Requests, building on FastAPI, or need HTTP/2 or a single API for sync and async.
AIOHTTP: the async native
AIOHTTP is async-only, no sync client, and has been in production use since 2014.
Architecture – AIOHTTP is built directly on asyncio's internals. ClientSession manages a connection pool via a TCPConnector. It includes a WebSocket client (neither Requests nor HTTPX does) and doubles as a server framework via aiohttp.web. At high concurrency it outperforms HTTPX in raw throughput (see the performance section for benchmark context).
Best for – high-throughput scrapers, data pipelines, real-time applications, WebSocket clients, any workload where async throughput is the top priority.
Synchronous vs. asynchronous in Requests, HTTPX, and AIOHTTP
The most important architectural difference between these libraries is how they handle I/O, and that determines which concurrency model you can use.
Blocking I/O: 1 thread per concurrent request
Synchronous calls block the calling thread until the server responds. At 500 concurrent requests, you need 500 threads, and GIL contention, context switching, and memory overhead add up.
Requests is synchronous-only. HTTPX's Client is also synchronous. At low concurrency or in linear scripts, the blocking overhead is negligible.
Non-blocking I/O: 1 event loop, many concurrent requests
With async, the coroutine suspends on await and the event loop resumes another pending coroutine. When the server responds, your coroutine resumes.
AIOHTTP is async-only; HTTPX's AsyncClient is async. Above 50-100 concurrent requests, async (either HTTPX AsyncClient or AIOHTTP) almost always beats a thread pool for I/O-bound scraping, though the crossover shifts with latency and payload size. The performance section covers when AIOHTTP outperforms HTTPX's AsyncClient; the break-even point between the two is around 200 concurrent requests.
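The mechanism is easy to see without any HTTP library at all – a pure-asyncio sketch where asyncio.sleep stands in for a 1-second network wait:

```python
import asyncio
import time

async def fake_request(i: int) -> int:
    # Stand-in for a network call: suspends for 1s, and while this
    # coroutine is suspended the event loop runs the other 99.
    await asyncio.sleep(1)
    return i

async def main() -> float:
    start = time.perf_counter()
    results = await asyncio.gather(*(fake_request(i) for i in range(100)))
    assert len(results) == 100
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"100 one-second waits completed in {elapsed:.2f}s")  # ~1s, not 100s
```

One thread, one event loop, 100 concurrent waits – the same shape a real async scraper takes, just with HTTP calls in place of the sleep.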
Why async isn't free
Async isn't free. Every awaited I/O call is a point where the event loop can switch to another coroutine, so shared state needs care around each await. The async requirement also propagates through your whole stack: the test suite, error handling, and every sync-only third-party library become integration problems.
Creating ClientSession outside an async context causes trouble: pre-3.7 AIOHTTP required a running event loop at creation time. Later versions relaxed that, but async with is still required for cleanup. Skip it and you leak connections. On Python 3.10+, creating a ClientSession outside a running event loop emits a DeprecationWarning (and raises RuntimeError in Python 3.12+).
With AIOHTTP, when the session closes, underlying connections may not close immediately due to asyncio internals. If your event loop exits immediately after the session closes, you get ResourceWarning: unclosed transport. AIOHTTP 3.9+ handles connector cleanup automatically on session close – if you're on 3.9+, drop the sleep workaround below, it's no longer needed. If you're still seeing warnings on 3.9+, you've got a bare ClientSession somewhere not wrapped in async with. On versions before 3.9, a workaround allows the SSL transport's connection_lost callback to fire before the loop exits:
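A minimal sketch of that pre-3.9 pattern – the URL is a placeholder, and the 0.250s value follows aiohttp's graceful-shutdown guidance:

```python
import asyncio
import aiohttp

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        async with session.get("https://example.com") as response:
            await response.read()
    # Pre-3.9 only: give the SSL transport's connection_lost callback
    # a chance to run before the loop exits. Unnecessary on aiohttp 3.9+.
    await asyncio.sleep(0.250)

if __name__ == "__main__":
    asyncio.run(main())
```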
HTTPX doesn't have the creation-time event-loop restriction, but leaving AsyncClient unclosed leaks file descriptors. Always use async with or call .aclose().
Feature comparison: the full matrix
Each row reflects default behavior. ✅ = supported. ❌ = not available. ⚠️ = present with caveats (see cell notes).
| Feature | Requests | HTTPX | AIOHTTP |
|---|---|---|---|
| Sync support | ✅ Yes | ✅ Yes | ❌ No |
| Async support | ❌ No | ✅ Yes | ✅ Yes |
| HTTP/2 | ❌ No | ✅ Yes (requires httpx[http2]) | ❌ No |
| HTTP/3 (QUIC) | ❌ No | ❌ No | ❌ No |
| WebSocket client | ❌ No | ❌ No | ✅ Yes |
| Server framework | ❌ No | ❌ No | ✅ Yes |
| Proxy support | ✅ Yes | ✅ Yes | ✅ Yes |
| SOCKS5 proxy | ✅ Via requests[socks] | ✅ Via httpx[socks] | ✅ Via aiohttp-socks |
| SSL/TLS verify | ✅ Yes | ✅ Yes | ✅ Yes |
| Redirect following | ✅ Auto | ⚠️ Requires follow_redirects=True | ✅ Auto |
| Timeout config | ⚠️ No default; hangs indefinitely on established-but-stalled connections. Set an explicit timeout on every call. | ✅ 5s default, applied independently per phase (connect/read/write/pool) | ⚠️ Lenient 5-minute total default; pass aiohttp.ClientTimeout for per-phase control (connect and sock_read are independently configurable) |
| Cookie handling | ✅ Yes | ✅ Yes | ✅ Yes |
| Multipart uploads | ✅ Yes | ✅ Yes | ✅ Yes (streaming) |
| Streaming responses | ✅ Yes | ✅ Yes | ✅ Yes |
| Connection pooling | ✅ Via urllib3 | ✅ Built-in | ✅ Via Connector |
| Event hooks | ✅ Yes | ✅ Yes | ✅ Signal-based |
| Custom transport/adapter | ✅ HTTPAdapter | ✅ Transport | ✅ Connector |
| Prepared requests | ✅ Yes | ✅ Yes | ❌ No |
| 3rd-party ecosystem | ✅ Largest – decade of extensions | ⚠️ Small but growing (respx, pytest-httpx) | ⚠️ Smaller (aioresponses) |
Where HTTPX stands out
HTTPX is the only library with HTTP/2 support. With pip install httpx[http2], HTTPX negotiates HTTP/2 automatically when the server supports it, multiplexing multiple requests over a single TCP connection. The 5s-per-phase default also prevents pool exhaustion hangs. HTTPX's pool timeout stops requests from queuing indefinitely behind a saturated connection pool.
httpx.Client and httpx.AsyncClient share the same method signatures — useful if you're building an SDK or migrating a sync codebase incrementally. (HTTPX also supports trio instead of asyncio; AIOHTTP does not.)
Where AIOHTTP stands out
AIOHTTP is the only one with a built-in WebSocket client. If your scraper connects to WebSocket endpoints like live data feeds or push APIs, AIOHTTP is the straightforward choice.
The built-in server framework (aiohttp.web) lets a scraping orchestrator expose its own HTTP API or webhook endpoint from the same process.
What none of them handle: TLS fingerprinting
All three libraries use Python's ssl module by default, which produces TLS fingerprints (JA3/JA4) that bot-detection systems flag as non-browser. If your target site checks TLS fingerprints, you'll need a dedicated library like curl_cffi or rnet to impersonate browser TLS behavior. For a broader look at detection methods beyond TLS, see how to bypass anti-bot protection.
Practical code examples: common HTTP operations
Let's look at the operations that come up in every scraper: basic GETs, JSON POSTs, session management, and timeouts.
Basic GET requests
The basic GET API is nearly identical across all three – the meaningful differences are in session management, timeouts, and error handling covered below. AIOHTTP uses .status instead of .status_code. For body reads, AIOHTTP requires await; Requests and HTTPX buffer synchronously.
AIOHTTP has no module-level convenience functions; ClientSession is always required.
POST with JSON payload
All three handle JSON POST the same way: json= serializes the payload and sets Content-Type: application/json automatically. The differences are in timeout syntax – AIOHTTP requires aiohttp.ClientTimeout objects, and plain numbers raise TypeError – and in how AIOHTTP reads the response body.
AIOHTTP requires aiohttp.ClientTimeout; plain numbers and tuples raise TypeError. Note: httpx.Timeout(30.0) applies the same value to all 4 phases. Use named arguments to control each independently.
Session and client management
Pool configuration, connection limits, and thread-safety differ across all three libraries – and the differences aren't cosmetic; they matter in production.
The pool configuration API differs: HTTPAdapter(pool_maxsize=) in Requests, httpx.Limits(max_connections=) in HTTPX, and TCPConnector(limit=) in AIOHTTP.
Note: HTTPAdapter.pool_connections caps the number of distinct host pools (not total connections), which is different from HTTPX's global max_connections cap.
Timeout configuration
Requests has no default timeout at all, and AIOHTTP's session default is a lenient 5-minute total – stalled connections can block for a long time unless you set explicit timeouts at the client level.
Use split timeouts: connect at 3-5s, read at 20-30s. Connection failures are a different failure mode from slow server responses. A tight connect timeout detects dead proxies quickly, while a longer read timeout accommodates legitimately slow pages. For HTTPX, also set pool timeout (3-5s).
Without a pool timeout on HTTPX, requests can block indefinitely waiting for a free connection from an exhausted pool. With AIOHTTP, total is a wall-clock ceiling for the entire request including body read. Set it higher than your sock_read, or it will truncate long responses even when the socket is active.
HTTPX vs. AIOHTTP performance: concurrency arithmetic and order-of-magnitude estimates
These are rough estimates based on zero retries, zero parse time, and a stable server. Use them as order-of-magnitude guidance.
For synchronous workloads, Requests and HTTPX perform comparably – any per-request difference is single-digit milliseconds, invisible behind network latency; a 100ms round-trip makes sub-millisecond library differences irrelevant. Here the choice comes down to feature surface (type hints, HTTP/2, retry hooks) and dependency footprint, not throughput.
Concurrency at scale: illustrative arithmetic
Consider a common workload: 1M product pages per day from eCommerce sites, assuming 1s response time per request and serial requests per connection. Assumptions: no retries, no rate-limit backoff, 100% success rate, constant 1 req/s per connection.
| Approach | Library | Concurrency | Time to complete | Compute-hours/day |
|---|---|---|---|---|
| Threaded sync | Requests (50 threads) | 50 | ~5.5 hours | ~5.5 |
| Async moderate | HTTPX AsyncClient | 200 | ~1.4 hours | ~1.4 |
| Async aggressive | AIOHTTP | 1,000 | ~17 minutes | ~0.3 |
HTTPX AsyncClient can sustain higher concurrency, but 200 reflects a practical range before httpcore overhead starts compounding (see the benchmark table below). At 1,000 concurrent requests, HTTPX would finish in roughly 30-50 minutes – still far faster than threads, but 3-5x slower than AIOHTTP at that level.
seconds = total_requests ÷ (concurrency × req_per_second_per_connection). At 1 req/s this simplifies to 1,000,000 ÷ concurrency. Substitute your actual request rate to get your own estimate. A 10% retry rate at 1s per retry adds roughly 100,000 extra seconds (~28 hours), which roughly doubles the threaded-sync time. Model retries explicitly.
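The arithmetic as a helper function you can plug your own numbers into – the printed values match the table above under the same idealized assumptions:

```python
def completion_hours(total_requests: int, concurrency: int,
                     req_per_second_per_connection: float = 1.0) -> float:
    """Idealized wall-clock hours: no retries, no backoff, 100% success."""
    seconds = total_requests / (concurrency * req_per_second_per_connection)
    return seconds / 3600

for concurrency in (50, 200, 1_000):
    hours = completion_hours(1_000_000, concurrency)
    print(f"{concurrency:>5} concurrent: {hours:.2f} h")
# 50 → ~5.56 h, 200 → ~1.39 h, 1000 → ~0.28 h (~17 min)
```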
On pay-per-second infrastructure, the difference between 17 minutes and 5.5 hours matters. Most of that gain comes from running 20x more concurrent connections, not from the library itself. On a reserved or always-on VM, finishing faster frees the machine for other jobs and you scale horizontally less.
Asynchronous performance: HTTPX vs AIOHTTP
At high concurrency AIOHTTP outperforms HTTPX; the multipliers below are derived from community benchmarks (encode/httpx#3215) and are consistent across workloads. Exact ratios shift with payload size and server behavior.
| Concurrent requests | HTTPX AsyncClient | AIOHTTP ClientSession |
|---|---|---|
| 50 | ~1.2x slower | Baseline |
| 100 | ~1.5x slower | Baseline |
| 500 | ~2-3x slower | Baseline |
| 1,000 | ~3-5x slower | Baseline |
The 10x figure in encode/httpx#3215 comes from a microbenchmark that removes network latency entirely. It measures raw transport overhead in isolation. That number does not apply to real scraping workloads where even a 50ms round-trip masks library-level differences.
AIOHTTP's transport layer sits closer to raw asyncio sockets; HTTPX routes through httpcore as an intermediate layer, which adds call overhead that compounds at high concurrency. Below 200 concurrent requests, the difference rarely justifies choosing AIOHTTP over HTTPX for that reason alone.
Memory usage
At the same concurrency level, per-request memory differences between libraries are small. The big jump is between threads and coroutines – and that jump is determined by library choice.
With threaded requests, each thread holds its own urllib3 pool (default 10 keepalive connections per host); at 50 threads hitting many hosts, you can have up to 500 open connections. HTTPX uses a similar connection pool model; Limits(max_keepalive_connections=20) caps keepalive connections. AIOHTTP's TCPConnector(limit=100, limit_per_host=10) gives fine-grained control.
1,000 coroutines consume roughly 2-5 MB of resident memory. The equivalent 1,000-thread pool reserves roughly 8 GB of virtual address space (8 MB stack per thread on Linux, configurable via ulimit -s). Actual RSS is lower since most of that stack goes untouched, but the kernel's thread scheduling limits become the real ceiling before memory does.
When to ignore performance differences: proxy latency dominates
With 2-3s proxy response times, per-request library overhead is negligible, but concurrency ceiling still matters. Any async client sustaining 1,000 concurrent connections through a proxy tier finishes the same job roughly 19x faster than 50 threads, regardless of per-request latency.
Proxy integration
All 3 libraries handle proxy configuration differently, and SOCKS5 support requires an extra dependency in some cases. If you don't use proxies, skip to Error handling and retry strategies.
Swap in your own credentials and host; the configuration patterns are identical across providers. For the proxy type decision, see SOCKS5 vs. HTTP proxy.
Requests proxy configuration
Requests uses a proxies dict keyed by scheme or by scheme + host, set at the session level or passed per-request. Scheme-only keys ("https") match all HTTPS traffic; scheme + host keys ("https://target.com") override routing for a specific domain.
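A sketch with placeholder credentials – proxy.example.com and target.com stand in for your provider's endpoint and a specific target domain:

```python
import requests

# Placeholder credentials and hosts – swap in your provider's values.
proxies = {
    "http": "http://user:pass@proxy.example.com:8080",
    "https": "http://user:pass@proxy.example.com:8080",
    # Scheme + host key overrides routing for one domain only:
    "https://target.com": "http://user:pass@other-proxy.example.com:8080",
}

session = requests.Session()
session.proxies.update(proxies)   # session-level
# ...or per request: requests.get(url, proxies=proxies, timeout=30)
```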
HTTPX proxy configuration
HTTPX 0.28.0 replaced proxies= with proxy= (a single string) or mounts= (a dict of transports); if you upgrade HTTPX without pinning, any code still using proxies= will raise a TypeError.
proxy= routes all outbound traffic through a single proxy. To restrict to one scheme, or to route different hosts through different proxies, use mounts=.
For per-scheme or per-host routing, use mounts= with URL patterns:
AIOHTTP proxy configuration
AIOHTTP passes the proxy URL per-request for HTTP(S); SOCKS5 requires ProxyConnector from aiohttp-socks because AIOHTTP's built-in connector doesn't support the SOCKS protocol. Unlike Requests and HTTPX, AIOHTTP has no session-level proxy parameter.
You must pass proxy= on every individual request, or use ProxyConnector for connector-level routing. aiohttp-socks is a third-party package, not part of aiohttp core. Check that its release cycle matches your aiohttp version before adding it to production.
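Both routing styles as a sketch – proxy hosts and credentials are placeholders, and aiohttp-socks is imported lazily so the HTTP-proxy path works without it:

```python
import aiohttp

HTTP_PROXY = "http://user:pass@proxy.example.com:8080"     # placeholder
SOCKS_PROXY = "socks5://user:pass@proxy.example.com:1080"  # placeholder

async def fetch_via_http_proxy(url: str) -> int:
    async with aiohttp.ClientSession() as session:
        # proxy= goes on every request – there is no session-level parameter.
        async with session.get(url, proxy=HTTP_PROXY) as response:
            return response.status

async def fetch_via_socks(url: str) -> int:
    # Third-party: pip install aiohttp-socks (imported lazily here).
    from aiohttp_socks import ProxyConnector
    connector = ProxyConnector.from_url(SOCKS_PROXY)   # connector-level routing
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get(url) as response:
            return response.status
```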
SOCKS5 adds an extra dependency for all 3 libraries (see import comments above). Use SOCKS5 when your proxy provider requires it. For most scraping, HTTP proxies are the simpler choice, with no extra package required and native support across all 3 libraries.
Error handling and retry strategies
The common failure modes in scraping pipelines are network timeouts, 429 rate limits, 5xx server errors, and SSL errors.
Exception hierarchies
The exception hierarchies differ enough to break copy-pasted error handlers: Requests roots everything in requests.exceptions.RequestException, HTTPX in httpx.HTTPError, and AIOHTTP in aiohttp.ClientError.
aiohttp.ServerTimeoutError inherits from both aiohttp.ClientError (via ServerConnectionError) and asyncio.TimeoutError. Catching aiohttp.ClientError covers it via the inheritance chain, while catching asyncio.TimeoutError covers it more narrowly, which is useful when you want to isolate timeout handling and ignore other ClientError subclasses. One thing to keep in mind: this will also catch asyncio.TimeoutError raised by non-HTTP code in the same try block.
Retry strategies
Requests integrates with urllib3's Retry class directly; HTTPX and AIOHTTP have no built-in retries, so a wrapper library such as tenacity is the usual choice.
These two approaches operate at different layers. Retries configured via urllib3.Retry happen internally, before the response reaches your application code. In contrast, Tenacity wraps your function and retries only after an exception is raised.
Because of this difference, you must call raise_for_status() or raise an error manually for non-successful status codes. Otherwise, Tenacity will not detect a failure and no retry will occur.
Tenacity also retries only the exception types you explicitly configure. If a 403 response triggers raise_for_status() and HTTPStatusError is included in your retry list, it will retry that request, even if retrying 403 responses was not your intention.
Requests (urllib3 retry)
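A sketch of mounting urllib3's Retry on a Session – the status list and attempt count are illustrative defaults:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(
    total=3,
    backoff_factor=1,                             # exponential backoff between attempts
    status_forcelist=[429, 500, 502, 503, 504],   # retry these status codes
    allowed_methods=["GET", "HEAD"],              # don't blindly retry POSTs
)
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))
session.mount("http://", HTTPAdapter(max_retries=retry))
# Retries now happen inside urllib3, before the response reaches your code.
```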
HTTPX (tenacity)
AIOHTTP (tenacity)
Note that aiohttp.ClientResponseError carries status and message attributes, not the full response object, so you can't call .text on it directly in an except block.
For proxy-specific failures or multi-endpoint retry logic, see the full retry guide for Requests. For proxy-specific error codes (407, CONNECT failures), see the proxy error codes reference.
Production use cases
Real-world projects using each library in production. The patterns show what each library actually handles at scale.
Requests in production
Requests is the most widely deployed of the three. The ecosystem reflects that: requests-oauthlib for OAuth 1/2, requests-cache for response caching, requests-ratelimiter for rate limiting, and responses / requests-mock for test mocking. HTTPX has respx for mocking; AIOHTTP has aioresponses, but neither ecosystem comes close to the breadth of Requests' extensions.
HTTPX in production
The OpenAI Python SDK adopted HTTPX in v1.0 for its unified sync/async interface (it also supports aiohttp as an optional async backend for higher concurrency). The concurrency issue documented in openai/openai-python#1596 stemmed from sharing a single AsyncClient across threads. It's a documented anti-pattern that the HTTPX docs warn against explicitly. The Anthropic Python SDK also uses HTTPX as its default HTTP backend. FastAPI and Starlette share a TestClient built on HTTPX for testing async web apps without a live server.
AIOHTTP in production
CCXT, the cryptocurrency exchange connectivity library, uses AIOHTTP for its async exchange clients, a workload with dozens of simultaneous WebSocket feeds and sub-second polling across exchanges. Home Assistant uses AIOHTTP as its core HTTP client across a heavily async platform with thousands of concurrent integrations.
How to choose: a 4-step decision framework
Work through the steps in order and stop at the first one that fits your project.
Step 1: Do you need async?
No → Use Requests. Sequential code doesn't benefit from async. It just adds complexity you don't need.
Yes → Go to Step 2.
Step 2: Do you need WebSocket support?
Yes → Use AIOHTTP. AIOHTTP is the only one with a native WebSocket client. No extra packages, no workarounds.
No → Go to Step 3.
Step 3: Is your concurrency above ~200 concurrent requests?
Yes → Use AIOHTTP. AIOHTTP's transport layer is closer to raw asyncio sockets than HTTPX's, and the benchmark data shows a 1.5x gap at 100 that grows to 3-5x at 1,000. The exact crossover depends on your latency and payload size, so benchmark with your actual workload.
No → Go to Step 4.
Step 4: Do you need HTTP/2?
Yes → Use HTTPX. HTTP/2 multiplexes requests over a single TCP connection, which cuts the connection overhead HTTP/1.1 can't avoid. HTTPX is the only one of the 3 that supports it.
No → Use HTTPX. HTTPX has a shallower learning curve than AIOHTTP, a sync fallback for testing, and you can switch to AIOHTTP later if you hit the concurrency ceiling. Already using AIOHTTP in this codebase? Stay on it.
5 common mistakes when scraping with Python
The issues mentioned in the list below show up in scraping code across all three libraries. Most are easy to fix once you know what to look for:
- Not setting a default timeout. All three libraries handle timeouts differently: Requests has no timeout and will wait indefinitely; HTTPX defaults to 5s (applied separately to connect, read, write, and pool phases), but pass timeout=None anywhere and that default disappears; AIOHTTP's ClientSession defaults to a lenient 5-minute total. Always set an explicit timeout at the session or client level, not just per-request.
- Creating a new session or client per request. If you instantiate requests.Session(), httpx.Client(), or aiohttp.ClientSession() inside a per-request function or loop, you're creating a new connection pool on every call, so you pay for a fresh TCP handshake and DNS lookup every time. Instantiate once at startup and reuse across requests. Worth noting: requests.get() and other module-level functions create a new Session on every call, so there's no connection pooling between calls.
- Migrating from Requests to HTTPX without auditing redirect behavior. HTTPX doesn't follow redirects by default; Requests does. If you're hitting redirect-heavy endpoints, you'll get the 3xx response back instead of the target content. Audit your redirect-dependent endpoints and add follow_redirects=True explicitly where needed.
- Using AIOHTTP without async with for responses. Without async with session.get(url) as response: or an explicit response.release() call, the connection stays checked out of the pool. Under sustained concurrency, available connections drop to zero and new requests block waiting for a free slot, eventually raising asyncio.TimeoutError.
- Loading large responses fully into memory. Calling response.text or response.json() on a multi-MB payload buffers the entire response before you process a single byte. For large HTML pages or bulk API responses, stream instead: response.iter_content() in Requests, response.content.iter_chunked() in AIOHTTP. For HTTPX, streaming requires the client.stream() context manager. If you construct the response object outside of it, the body is already fully buffered regardless of how you iterate it.
Migrating between libraries
Migration cost depends on direction – Requests-to-HTTPX is mostly drop-in with five breaking changes; Requests-to-AIOHTTP is a full async rewrite.
Requests to HTTPX migration
If you’re moving from Requests to HTTPX, note these differences that can break existing code when not accounted for:
- Redirects. HTTPX does not follow redirects automatically. In Requests, redirects are enabled by default. In HTTPX, you must explicitly set follow_redirects=True in your request call or client configuration.
- Proxy configuration. The proxies={} dictionary pattern used in Requests isn't supported in HTTPX 0.28.0 and later. Instead, use the proxy= parameter for a single proxy or mounts= for more advanced routing. Existing proxy setups will need refactoring.
- URL object type. In Requests, response.url returns a string. In HTTPX, it returns an httpx.URL object. Direct string comparisons like response.url == "https://example.com" can fail silently because of URL normalization (the stored URL may carry a trailing slash or default port). Convert with str(response.url) if you rely on string logic.
- Exception types. HTTPX raises httpx.HTTPStatusError when calling response.raise_for_status(), instead of requests.exceptions.HTTPError. If you catch specific exception classes, update your error handling accordingly.
- Test mocking. The popular responses library, commonly used to mock Requests, doesn't work with HTTPX. For HTTPX, use respx for transport-level mocking or pytest-httpx when working with pytest fixtures.
Basic GET operations work identically:
| Requests | HTTPX | Notes |
|---|---|---|
| requests.Session() | httpx.Client() | Renamed; same concept |
| timeout=(3, 30) (tuple) | httpx.Timeout(connect=3, read=30, write=30, pool=5) | HTTPX's Timeout covers 4 phases (connect, read, write, pool) and requires either a default value or all four set explicitly; httpx.Timeout(30.0) applies 30s to all 4 phases – use named arguments to control each independently. Requests' tuple sets only connect+read and has no pool timeout. |
| Redirects followed by default | follow_redirects=True required | Breaking change |
| response.url → str | response.url → httpx.URL | httpx.URL supports attribute access (.host, .path, .params). Comparing against a raw string can fail due to URL normalization (trailing slash, default port); always convert with str(response.url) before string comparison. |
| proxies={"http": ..., "https": ...} | proxy="..." or mounts={} | The proxies={} dict was removed in HTTPX 0.28.0 and raises TypeError: Client.__init__() got an unexpected keyword argument 'proxies' on upgrade. Use proxy='...' for a single proxy across all schemes, or mounts= for per-scheme/per-host routing. |
| requests.auth.AuthBase | httpx.Auth | Different base class |
Requests to AIOHTTP migration
Async is contagious – once one function becomes async def, every caller up the chain needs to change too. CLI handlers, Flask routes, test functions – all of them. If your codebase is synchronous today, consider whether HTTPX's sync interface covers your needs before starting.
Four things will break if you just add async and hope for the best:
- Body reads require await. In AIOHTTP, response.json(), response.text(), and response.read() are coroutines. Call them with await, otherwise you get an unawaited coroutine object instead of the body.
- Different exception types. Calling raise_for_status() triggers aiohttp.ClientResponseError rather than requests.exceptions.HTTPError. Timeouts raise asyncio.TimeoutError instead of requests.exceptions.Timeout, and connection failures raise aiohttp.ClientConnectionError instead of requests.exceptions.ConnectionError. Review your try and except blocks when migrating.
- Timeout configuration uses structured objects. Instead of simple floats or (connect, read) tuples, AIOHTTP expects an aiohttp.ClientTimeout object. The connect parameter covers both pool acquisition and socket connection, while sock_connect applies only to the low level socket connection. The closest equivalent to Requests’ timeout=(3, 30) is aiohttp.ClientTimeout(connect=3, sock_read=30).
- Testing requires async support. Any test interacting with AIOHTTP must be defined with async def. Add pytest-asyncio to your dependencies and mark async tests with @pytest.mark.asyncio.
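A before/after sketch of the same fetch – the function names are ours, and the timeout mapping follows the equivalence above:

```python
# Before: synchronous Requests.
import requests

def fetch_page_sync(url: str) -> str:
    response = requests.get(url, timeout=(3, 30))
    response.raise_for_status()
    return response.text

# After: AIOHTTP – and every caller up the chain becomes async too.
import aiohttp

async def fetch_page_async(url: str) -> str:
    timeout = aiohttp.ClientTimeout(connect=3, sock_read=30)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(url) as response:
            response.raise_for_status()
            return await response.text()   # body read is a coroutine now
```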
AIOHTTP to HTTPX migration
Before migrating: HTTPX buffers response bodies into memory, so at high concurrency (500+ concurrent requests) you'll use more RAM per request than AIOHTTP's streaming model. Benchmark both before committing – you're trading throughput for fewer context managers and no await on body reads.
Both are asyncio-based, so your call stack stays the same:
HTTPX buffers the response body automatically, which removes the double async with.
Connection pooling is where the two libraries diverge. AIOHTTP's TCPConnector pool settings map to httpx.Limits in some cases, but two things don't map:
- TCPConnector(limit_per_host=N) has no direct HTTPX equivalent; httpx.Limits(max_connections=N) caps connections globally. If you're scraping many hosts at once, a slow host holds connections open and blocks fast ones.
- TCPConnector(ttl_dns_cache=300) has no HTTPX equivalent; HTTPX delegates DNS caching to the OS. For most use cases this doesn't matter, but if you're running a long-lived scraper hitting thousands of short-TTL domains, DNS resolution adds to your request latency.
If per-host connection limits or DNS cache TTL control are requirements, those are reasons to stay on AIOHTTP.
HTTPX has no built-in WebSocket support. If your codebase uses AIOHTTP's WebSocket client (session.ws_connect()), you have two options: add httpx-ws (a third-party wrapper that adds WebSocket support on top of HTTPX's transport layer) or add the websockets library and rewrite the WebSocket-handling code against its API. Neither is a simple rename – if your WebSocket code is in one or two modules, manageable; if it's spread across the codebase, this gets painful quickly.
When NOT to migrate: If your current library works and you have no actual need (async, HTTP/2, WebSocket, performance target), stay where you are. Migration always has costs – redirects, exception types, proxy config, and test mocking all work differently. Migrate only when you have a specific reason. Not because a newer library exists.
Final thoughts
Start with Requests if your project has no async requirement. Use HTTPX for async web services, SDK work, or when you need HTTP/2. Use AIOHTTP when you need more than ~200 concurrent connections or native WebSocket support. At moderate concurrency, the library choice matters less than your proxy setup and downstream processing – pick based on API compatibility and team familiarity, not benchmarks.
About the author

Justinas Tamasevicius
Head of Engineering
Justinas Tamaševičius is Head of Engineering with over two decades of expertise in software development. What started as a self-taught passion during his school years has evolved into a distinguished career spanning backend engineering, system architecture, and infrastructure development.
Connect with Justinas via LinkedIn.


