Concurrency
Difference Between Asynchronous and Multi-Threading Programming
Asynchronous programming runs only one part of a program at any given moment, typically on a single thread.
Consider three functions in a Python program: fn1(), fn2(), and fn3().
In asynchronous programming, if fn1() is not actively executing (e.g., it’s asleep, waiting, or has completed its task), it won’t block the entire program.
Instead, the program optimizes CPU time by allowing other functions (e.g., fn2()) to execute while fn1() is inactive.
Only when fn2() finishes or sleeps does the third function, fn3(), start executing.
Asynchronous programming thus runs one task at a time, but lets a task hand off control whenever it would otherwise sit idle, so other tasks can proceed.
In contrast, in multi-threading or multi-processing, all three functions run concurrently without waiting for each other to finish.
With asynchronous programming, specific functions are designated as asynchronous using the async keyword, and the asyncio Python library helps manage this asynchronous behavior.
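The interleaving described above can be sketched with three coroutines. Here fn1, fn2, and fn3 are illustrative stand-ins, with asyncio.sleep simulating a wait (e.g., a network call):

```python
import asyncio

async def fn1():
    print("fn1: start")
    await asyncio.sleep(0.2)  # yields control to the event loop while waiting
    print("fn1: done")

async def fn2():
    print("fn2: start")
    await asyncio.sleep(0.1)
    print("fn2: done")

async def fn3():
    print("fn3: start")  # no await, so it runs straight through
    print("fn3: done")

async def main():
    # Schedule all three together; whenever one awaits, the event loop
    # switches to another coroutine that is ready to run.
    await asyncio.gather(fn1(), fn2(), fn3())

asyncio.run(main())
```

fn3 finishes first because it never awaits; fn2's shorter sleep elapses next, and fn1 completes last, even though fn1 was scheduled first.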
Asyncio
asyncio is a library for writing concurrent code using the async/await syntax.
asyncio is often a perfect fit for IO-bound and high-level structured network code.
If your code is CPU-bound, consider using the multiprocessing library instead.
import asyncio
import aiohttp  # third-party: pip install aiohttp


async def fetch(session, url, headers=None, params=None):
    """
    Fetch data from a single API endpoint asynchronously.
    """
    try:
        async with session.get(url, headers=headers, params=params) as response:
            response.raise_for_status()
            return await response.json()
    except aiohttp.ClientError as e:
        print(f"Request to {url} failed: {e}")
        return None


async def fetch_all(urls, headers=None, params=None):
    """
    Fetch data from multiple API endpoints concurrently.
    """
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url, headers, params) for url in urls]
        return await asyncio.gather(*tasks)


# 🔧 Example usage
if __name__ == "__main__":
    urls = [
        "https://api.github.com/repos/python/cpython",
        "https://api.github.com/repos/django/django",
        "https://api.github.com/repos/pallets/flask",
    ]
    results = asyncio.run(fetch_all(urls))
    for i, result in enumerate(results):
        if result:
            print(f"{urls[i]} ➜ {result.get('description')}")
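For CPU-bound work, where asyncio offers no speedup, the multiprocessing library noted earlier spreads computation across CPU cores. A minimal sketch, where count_primes is an illustrative CPU-heavy helper, not part of any library:

```python
import math
from multiprocessing import Pool

def count_primes(n):
    """CPU-bound work: count primes below n by trial division."""
    count = 0
    for i in range(2, n):
        if all(i % d for d in range(2, math.isqrt(i) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Each input is handled by a separate worker process, so the work
    # runs in parallel across cores instead of time-slicing one thread.
    with Pool(processes=3) as pool:
        results = pool.map(count_primes, [10_000, 20_000, 30_000])
    print(results)
```

Unlike coroutines, the workers here never yield to each other; they genuinely run at the same time on separate cores.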
concurrent.futures
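The concurrent.futures module offers a middle ground: thread- or process-based concurrency behind a simple futures API, without rewriting blocking code as coroutines. A minimal sketch using ThreadPoolExecutor, with time.sleep standing in for an IO-bound call such as an HTTP request:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    """Simulates an IO-bound call (e.g., a network request) with sleep."""
    time.sleep(0.1)
    return n * n

# map() dispatches each input to a worker thread; the four 0.1 s
# sleeps overlap, so the whole batch takes about 0.1 s, not 0.4 s.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_square, [1, 2, 3, 4]))
elapsed = time.perf_counter() - start

print(results)  # [1, 4, 9, 16]
```

ProcessPoolExecutor has the same interface and is the drop-in choice when the work is CPU-bound rather than IO-bound.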
Reference
How to make 2500 HTTP requests in 2 seconds with Async & Await: https://www.youtube.com/watch?v=Ii7x4mpIhIs