Executing HTTP requests in parallel with asyncio


If you’re making multiple HTTP requests in Python, you’ll usually want to perform them concurrently. The alternative is to wait for each blocking request to complete before starting the next one, which wastes time the program could spend issuing the other requests.

Making the requests in parallel

As an example, let’s see how long performing 5 requests synchronously would take:

import time
import requests

start = time.time()
for _ in range(5):
    requests.get("https://www.neelsomani.com")

print(time.time() - start)

On my computer, this takes about 1.86 seconds. Making it asynchronous only requires a few changes. We create a Future for each HTTP request with loop.run_in_executor, running the requests on our own ThreadPoolExecutor so we can control the number of threads. Waiting on those Futures has to happen inside an asynchronous function, which we then hand to the event loop to run until completion.

import asyncio 
from concurrent.futures import ThreadPoolExecutor
import time
import requests

executor = ThreadPoolExecutor(max_workers=5)
loop = asyncio.get_event_loop()

async def make_requests():
    futures = [loop.run_in_executor(executor, requests.get, "https://www.neelsomani.com") for _ in range(5)]
    await asyncio.wait(futures)

start = time.time()
loop.run_until_complete(make_requests())
print(time.time() - start)

This code runs in about .41 seconds for me.
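On Python 3.9+, the same pattern can be written without managing an executor or event loop by hand, using asyncio.run and asyncio.to_thread. The sketch below is a minimal, hedged equivalent: a blocking_call function that sleeps stands in for requests.get so the example runs without a network, and the 0.2-second delay and five-call count are arbitrary choices for illustration.

```python
import asyncio
import time

def blocking_call(i):
    # Stand-in for a blocking call like requests.get; sleeps instead
    # of hitting the network.
    time.sleep(0.2)
    return i

async def main():
    # asyncio.to_thread (Python 3.9+) runs each blocking call on the
    # default thread pool, so the five calls overlap.
    return await asyncio.gather(
        *(asyncio.to_thread(blocking_call, i) for i in range(5))
    )

start = time.time()
results = asyncio.run(main())
elapsed = time.time() - start
print(results, round(elapsed, 2))
```

Because the five 0.2-second calls overlap, the total elapsed time stays close to the duration of a single call rather than the 1-second sum.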


Let’s compare the code above to a single request:

start = time.time()
requests.get("https://www.neelsomani.com")
print(time.time() - start)

The single request takes about .375 seconds. Running five requests asynchronously is only marginally slower than running one (likely due to minor overhead from scheduling tasks on the ThreadPoolExecutor and a small delay in asyncio.wait), and it’s substantially faster than performing the 5 requests synchronously.
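One caveat with asyncio.wait is that it returns sets of done and pending futures, so response order is lost. If you need the responses themselves, asyncio.gather returns them in the order the futures were submitted. The sketch below keeps the run_in_executor structure from above but, so it runs without a network, swaps in a hypothetical fetch function that sleeps in place of requests.get; the example.com URLs are placeholders.

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for requests.get(url); sleeps to simulate network latency.
    time.sleep(0.2)
    return f"response from {url}"

executor = ThreadPoolExecutor(max_workers=5)
loop = asyncio.new_event_loop()

async def make_requests():
    urls = [f"https://example.com/{i}" for i in range(5)]
    futures = [loop.run_in_executor(executor, fetch, u) for u in urls]
    # Unlike asyncio.wait, gather preserves submission order and
    # returns the results directly.
    return await asyncio.gather(*futures)

responses = loop.run_until_complete(make_requests())
print(responses)
```

With requests.get in place of fetch, each element of responses would be a Response object instead of a string, matched to its URL by position.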

Tags: asyncio python asynchronous http


Neel Somani

About the Author

I'm the founder of Eclipse. You can follow me on Twitter.