Requests in Python - Interview Questions and Answers
The requests module is a third-party library in Python used to send HTTP requests and handle responses in a simple, user-friendly way. It supports GET, POST, PUT, DELETE, and other HTTP methods.
You can install it using pip:
pip install requests
import requests
response = requests.get("https://api.example.com/data")
print(response.text) # Prints the response content
The module supports methods like:
GET – Retrieve data
POST – Submit data
PUT – Update data
DELETE – Remove data
PATCH – Partially update data
HEAD – Retrieve headers only
OPTIONS – Find allowed methods on a server
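Each of these verbs maps to a module-level helper (requests.get, requests.post, and so on). As a sketch, preparing a Request without actually sending it shows the method and final URL that would go over the wire; the URL below is a placeholder, not a real endpoint:

```python
import requests

# Prepare (but do not send) one request per HTTP verb. PreparedRequest
# exposes the method and the fully resolved URL.
for method in ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"]:
    prepared = requests.Request(method, "https://api.example.com/items").prepare()
    print(prepared.method, prepared.url)
```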
Use the params argument:
params = {'q': 'python', 'page': 2}
response = requests.get("https://api.example.com/search", params=params)
print(response.url) # Output: https://api.example.com/search?q=python&page=2
Use the data parameter:
data = {'username': 'admin', 'password': '1234'}
response = requests.post("https://api.example.com/login", data=data)
print(response.text)
Use the json parameter:
import requests
data = {"name": "Ankit", "age": 30}
response = requests.post("https://api.example.com/users", json=data)
print(response.json())
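The practical difference between data= and json= can be seen offline by preparing the two requests instead of sending them (a sketch; the URL is a placeholder): data= form-encodes the body, while json= serializes the dict and sets the matching Content-Type header automatically.

```python
import requests

payload = {"name": "Ankit", "age": 30}

# data= produces a form-encoded body with the form Content-Type.
form = requests.Request("POST", "https://api.example.com/users", data=payload).prepare()
# json= produces a JSON body and sets Content-Type: application/json.
js = requests.Request("POST", "https://api.example.com/users", json=payload).prepare()

print(form.headers["Content-Type"])  # application/x-www-form-urlencoded
print(js.headers["Content-Type"])    # application/json
```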
Use response.raise_for_status():
try:
    response = requests.get("https://api.example.com/data")
    response.raise_for_status()
except requests.exceptions.HTTPError as err:
    print(f"HTTP Error: {err}")
Use the headers parameter:
headers = {'Authorization': 'Bearer token123', 'User-Agent': 'my-app'}
response = requests.get("https://api.example.com/protected", headers=headers)
Use the timeout parameter:
try:
    response = requests.get("https://api.example.com/data", timeout=5)
except requests.exceptions.Timeout:
    print("Request timed out")
Stream the response and write it to disk in chunks:
response = requests.get("https://example.com/file.zip", stream=True)
with open("file.zip", "wb") as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)
By default, requests follows redirects. You can disable this using allow_redirects=False:
response = requests.get("https://example.com", allow_redirects=False)
Check response.status_code:
response = requests.get("https://api.example.com/data")
print(response.status_code) # 200 for success
Use requests.Session():
session = requests.Session()
session.get("https://example.com/login")
response = session.get("https://example.com/dashboard")
Pass cookies via the cookies parameter:
cookies = {'session_id': 'abc123'}
response = requests.get("https://example.com", cookies=cookies)
Response headers are available in response.headers:
response = requests.get("https://example.com")
print(response.headers)
Using httpx (an alternative to requests that supports async):
import asyncio
import httpx

async def fetch():
    async with httpx.AsyncClient() as client:
        response = await client.get("https://example.com")
        print(response.text)

asyncio.run(fetch())
Disable SSL certificate verification using verify=False (not recommended in production):
response = requests.get("https://example.com", verify=False)
Use the proxies parameter:
proxies = {
    "http": "http://proxy.com:8080",
    "https": "https://proxy.com:8080"
}
response = requests.get("https://example.com", proxies=proxies)
Using requests.auth:
from requests.auth import HTTPBasicAuth
response = requests.get("https://api.example.com", auth=HTTPBasicAuth('user', 'pass'))
Using tenacity:
from tenacity import retry, stop_after_attempt
@retry(stop=stop_after_attempt(3))
def fetch():
    response = requests.get("https://example.com")
    return response.text
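Besides tenacity, retries can also be configured at the transport level with urllib3's Retry class mounted on a Session (a sketch; the retry counts and status codes are illustrative):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry failed requests up to 3 times with exponential backoff,
# also retrying on the listed server-side status codes.
retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[429, 500, 502, 503])
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
session.mount("https://", adapter)
session.mount("http://", adapter)
```

Every request made through this session now retries transparently; no per-call changes are needed.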
Use response.iter_content() for streaming:
response = requests.get("https://example.com/largefile", stream=True)
for chunk in response.iter_content(chunk_size=1024):
    print(chunk)
Enable logging:
import logging
import http.client as http_client
http_client.HTTPConnection.debuglevel = 1
logging.basicConfig(level=logging.DEBUG)
requests.get("https://example.com")
Both send a GET request and return the same response; requests.get(url) is a convenience wrapper around requests.request("GET", url). The generic form is useful when the HTTP method must be chosen dynamically.
Use requests.delete():
requests.delete("https://api.example.com/resource/1")
Many APIs enforce rate limits. To handle them, you can implement a delay between requests:
import time
import requests
for i in range(5):  # Example loop sending multiple requests
    response = requests.get("https://api.example.com/data")
    if response.status_code == 429:  # HTTP 429: Too Many Requests
        time.sleep(10)  # Wait before retrying
        continue
    print(response.json())
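Many APIs also send a Retry-After header along with the 429 response. A small helper (hypothetical, not part of requests) can honor it, falling back to a fixed delay when the header is missing or malformed:

```python
def retry_after_seconds(headers, default=10):
    # Parse the Retry-After header (integer-seconds form); fall back to a
    # default when the header is missing or not a plain integer.
    value = headers.get("Retry-After")
    try:
        return max(0, int(value))
    except (TypeError, ValueError):
        return default

print(retry_after_seconds({"Retry-After": "30"}))  # 30
print(retry_after_seconds({}))                     # 10 (fallback)
```

In the loop above you would call time.sleep(retry_after_seconds(response.headers)) instead of a hard-coded time.sleep(10).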
Many APIs return paginated results. You can loop through pages like this:
import requests
url = "https://api.example.com/data"
params = {"page": 1}
while True:
    response = requests.get(url, params=params)
    data = response.json()
    if not data["results"]:  # No more results
        break
    print(data["results"])
    params["page"] += 1  # Move to the next page
requests.get(url, **kwargs) is a shorthand function specifically for GET requests. requests.request("GET", url, **kwargs) is a more generic function that allows specifying different HTTP methods dynamically.
Example:
response1 = requests.get("https://example.com")
response2 = requests.request("GET", "https://example.com") # Same result
Digest authentication is used for more secure API authentication. You can use requests.auth.HTTPDigestAuth:
from requests.auth import HTTPDigestAuth
response = requests.get("https://api.example.com/protected",
                        auth=HTTPDigestAuth("user", "pass"))
Use requests.Session() to maintain cookies and authentication across requests:
session = requests.Session()
session.get("https://example.com/login") # Login request
response = session.get("https://example.com/dashboard") # Authenticated request
Use try-except with requests.exceptions.ConnectionError:
import requests
try:
    response = requests.get("https://example.com", timeout=5)
except requests.exceptions.ConnectionError:
    print("Connection failed. Retrying...")
To suppress the InsecureRequestWarning raised when using verify=False:
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
response = requests.get("https://example.com", verify=False)
You can use the requests-oauthlib library:
from requests_oauthlib import OAuth2Session
client_id = "your_client_id"
client_secret = "your_client_secret"
token_url = "https://example.com/oauth/token"
oauth = OAuth2Session(client_id)
token = oauth.fetch_token(token_url, client_secret=client_secret)
response = oauth.get("https://example.com/api/data")
Use the files parameter:
with open("example.txt", "rb") as f:
    files = {"file": f}
    response = requests.post("https://example.com/upload", files=files)
Use the timeout parameter:
response = requests.get("https://example.com", timeout=(3, 10)) # Connect timeout: 3s, Read timeout: 10s
Use tenacity:
from tenacity import retry, stop_after_attempt
@retry(stop=stop_after_attempt(3))
def fetch():
    return requests.get("https://example.com").text
Check response.status_code or response.ok:
response = requests.get("https://example.com")
if response.ok:  # True for any status code below 400
    print("Success")
Stream the image and write the binary content to a file:
response = requests.get("https://example.com/image.jpg", stream=True)
with open("image.jpg", "wb") as f:
    for chunk in response.iter_content(1024):
        f.write(chunk)
Inspect or override the response encoding:
response = requests.get("https://example.com")
print(response.encoding) # Default encoding
response.encoding = "utf-8"
Note that requests.Session() has no built-in default timeout; assigning session.timeout = 5 is silently ignored. Pass timeout explicitly on each call instead:
session = requests.Session()
response = session.get("https://example.com", timeout=5)
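requests.Session does not read a timeout attribute, so a session-wide default has to be injected another way. One common pattern, sketched here (TimeoutSession is a hypothetical name, not part of requests), is a subclass that applies a default unless the caller passes one:

```python
import requests

class TimeoutSession(requests.Session):
    """A Session that applies a default timeout to every request."""

    def __init__(self, timeout=5):
        super().__init__()
        self.default_timeout = timeout

    def request(self, method, url, **kwargs):
        # Only inject the default when the caller did not pass a timeout.
        kwargs.setdefault("timeout", self.default_timeout)
        return super().request(method, url, **kwargs)
```

Calls like session.get(url) then time out after the default, while session.get(url, timeout=30) still overrides it per request.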
Use response.json() to parse a JSON response body:
response = requests.get("https://example.com/data")
data = response.json()
Check response.history to see whether a request was redirected:
response = requests.get("https://example.com")
if response.history:
    print("Request was redirected")
Set a custom User-Agent via the headers parameter:
headers = {"User-Agent": "my-app"}
response = requests.get("https://example.com", headers=headers)
Use stream=True with iter_content():
response = requests.get("https://example.com", stream=True)
for chunk in response.iter_content(chunk_size=1024):
    print(chunk)
Catch the relevant exceptions:
try:
    response = requests.get("https://example.com", timeout=5)
except requests.exceptions.ConnectionError:
    print("Connection error")

try:
    data = response.json()
except ValueError:
    print("Invalid JSON")
Embed credentials in the proxy URL:
proxies = {
    "http": "http://user:pass@proxy.com:8080",
    "https": "https://user:pass@proxy.com:8080"
}
response = requests.get("https://example.com", proxies=proxies)
Use requests.options() and inspect the Allow header:
response = requests.options("https://example.com")
print(response.headers["Allow"]) # Allowed methods
Use concurrent.futures:
import requests
from concurrent.futures import ThreadPoolExecutor

urls = ["https://example.com/api1", "https://example.com/api2"]
with ThreadPoolExecutor() as executor:
    results = executor.map(requests.get, urls)
    for result in results:
        print(result.text)