When doing network programming and web development in Python, you often need to send HTTP requests and process the responses returned by the server. Obtaining the response body is one of the most common requirements. This article explains in detail how to get the response body of an HTTP request in Python, covering the built-in urllib library, the third-party requests library, and some advanced usage, with plenty of examples to help newcomers understand and master this skill.
1. Introduction
HTTP is one of the most important protocols in web development: it defines how clients and servers communicate. In an HTTP exchange, the response body contains the data the server returns to the client, which may be an HTML document, a JSON object, an image, and so on. Python offers several ways to send HTTP requests and read the response body.
2. Use the urllib library to obtain the Response Body
urllib is a package in the Python standard library for working with URLs and HTTP requests. Although its API is comparatively cumbersome, it is built into Python and requires no additional installation.
1. Basic usage
Here is an example of using urllib to send a GET request and get the response body:
import urllib.request

url = 'https://example.com'  # placeholder URL
request = urllib.request.Request(url)
with urllib.request.urlopen(request) as response:
    body = response.read()        # Get the response body (as bytes)
    print(body.decode('utf-8'))   # Decode the byte data into a string
In this example, we first create a Request object and then send it with the urlopen function. urlopen returns a file-like object, so we can read the response body with its read method. Note that read returns bytes, which we decode into a string with the decode method.
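The read-then-decode flow can be verified end to end without a real website. The sketch below is a self-contained, stdlib-only illustration that serves a known body from a local test server; the handler class, the payload, and the use of port 0 are all illustrative choices, not from the article:

```python
import http.server
import threading
import urllib.request

# A tiny local server with a known body, so the round trip is checkable.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        payload = 'hello, world'.encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; charset=utf-8')
        self.send_header('Content-Length', str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/' % server.server_address[1]
with urllib.request.urlopen(url) as response:
    body = response.read()        # raw bytes
    text = body.decode('utf-8')   # decoded string

server.shutdown()
print(text)
```

Because the server's body is known in advance, you can confirm that read really returns bytes and that decode produces the expected string.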
2. Send a POST request
An example of using urllib to send a POST request and obtain a response body is as follows:
import urllib.parse
import urllib.request

url = 'https://example.com/post'  # placeholder URL
data = {'key': 'value'}
data_encoded = urllib.parse.urlencode(data).encode('utf-8')  # Encode the data into bytes
request = urllib.request.Request(url, data=data_encoded)
with urllib.request.urlopen(request) as response:
    body = response.read()
    print(body.decode('utf-8'))
In this example, we use the urlencode function to encode the data as a URL-encoded string and convert it to bytes. We then pass the bytes to the data parameter of the Request object, which makes urllib send a POST request.
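To see what urlencode actually produces before the bytes conversion, here is a quick standard-library check (the sample dictionary is illustrative):

```python
import urllib.parse

# urlencode turns a dict into a form-encoded string; .encode() then
# produces the bytes that urllib's data parameter expects.
data = {'key': 'value', 'q': 'a b'}
encoded = urllib.parse.urlencode(data)   # spaces become '+'
payload = encoded.encode('utf-8')
print(encoded)
```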
3. Use the requests library to obtain the Response Body
requests is a third-party library for sending HTTP requests. Compared with urllib, its API is more concise and easier to use.
1. Install the requests library
Before using the requests library, you need to install it. You can install it with pip:
pip install requests
2. Basic usage
Here is an example of using requests to send a GET request and get the response body:
import requests

url = 'https://example.com'  # placeholder URL
response = requests.get(url)
body = response.text  # Get the response body as a string directly
print(body)
In this example, we call requests.get directly to send the GET request. The response.text attribute contains the response body as a string, with no manual decoding required.
3. Send a POST request
An example of using requests to send a POST request and obtaining a response body is as follows:
import requests

url = 'https://example.com/post'  # placeholder URL
data = {'key': 'value'}
response = requests.post(url, data=data)
body = response.text
print(body)
In this example, we call requests.post to send a POST request, passing the data dictionary. The text attribute again contains the response body as a string.
4. Handle JSON response
If the server returns a response body in JSON format, we can parse it into a Python dictionary with the response.json() method:
import requests

url = 'https://example.com/api/data'  # placeholder URL
response = requests.get(url)
data = response.json()  # Parse the JSON response body into a Python dictionary
print(data)
In this example, the json() method parses the JSON response body into a Python dictionary, which makes subsequent processing easy.
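Under the hood, response.json() essentially applies JSON parsing to the decoded body; the standard library's json module performs the same parsing, as this small sketch with a made-up sample body shows:

```python
import json

# A made-up sample body standing in for what a server might return.
body = '{"id": 42, "tags": ["a", "b"]}'
data = json.loads(body)  # the same kind of parsing response.json() performs
print(data['id'], data['tags'])
```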
4. Advanced usage
In addition to basic GET and POST requests, Python's HTTP clients also support more advanced features, such as setting request headers, setting timeouts, and handling cookies.
1. Set request headers
We can use the headers parameter to pass the request header information:
import requests

url = 'https://example.com'  # placeholder URL
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
}
response = requests.get(url, headers=headers)
body = response.text
print(body)
In this example, we pass a headers dictionary containing a custom User-Agent.
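One way to confirm the header really goes out is to echo it back from a local test server. The stdlib-only sketch below does this with urllib; the echo handler and the my-client/1.0 header value are illustrative assumptions:

```python
import http.server
import threading
import urllib.request

# A local server that echoes back the User-Agent header it received.
class EchoHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get('User-Agent', '').encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Length', str(len(ua)))
        self.end_headers()
        self.wfile.write(ua)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/' % server.server_address[1]

# Send a custom User-Agent and read back what the server saw.
request = urllib.request.Request(url, headers={'User-Agent': 'my-client/1.0'})
with urllib.request.urlopen(request) as response:
    seen = response.read().decode('utf-8')

server.shutdown()
print(seen)
```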
2. Set timeout
When sending a request, we can set a timeout to avoid waiting indefinitely for an unresponsive server:
import requests

url = 'https://example.com'  # placeholder URL
try:
    response = requests.get(url, timeout=5)  # Set the timeout to 5 seconds
    body = response.text
    print(body)
except requests.exceptions.Timeout:
    print("Request timed out")
In this example, we set the timeout to 5 seconds. If the server does not respond within 5 seconds, requests raises a requests.exceptions.Timeout exception.
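The same timeout behaviour can be demonstrated with the standard library alone. In this sketch a deliberately slow local handler stalls past the client's timeout; the handler, the 1-second stall, and the 0.2-second client timeout are all illustrative values:

```python
import http.server
import socket
import threading
import time
import urllib.error
import urllib.request

# A local handler that stalls longer than the client is willing to wait.
class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(1.0)  # stall past the client's 0.2 s timeout
        try:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'too late')
        except OSError:
            pass  # client has already gone away

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/' % server.server_address[1]

timed_out = False
try:
    urllib.request.urlopen(url, timeout=0.2)
except (TimeoutError, socket.timeout, urllib.error.URLError):
    timed_out = True

server.shutdown()
print(timed_out)
```

Note that urllib surfaces timeouts as socket.timeout (an alias of TimeoutError since Python 3.10), sometimes wrapped in URLError, whereas requests raises its own requests.exceptions.Timeout.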
3. Handle Cookies
We can use the cookies parameter to pass cookie information, or use a Session object to maintain cookies:
import requests

url = 'https://example.com'  # placeholder URL
cookies = {'session_id': 'abc123'}
response = requests.get(url, cookies=cookies)
body = response.text
print(body)

# Use a Session object to maintain cookies
session = requests.Session()
response = session.get(url)
# Cookies set by the server are carried automatically in subsequent requests
response = session.get('https://example.com/another-page')
body = response.text
print(body)
In this example, we first pass cookie information with the cookies parameter. We then use a Session object to maintain cookies, so that they are automatically carried in subsequent requests.
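The standard library has an analogue of a requests Session: an opener with a cookie jar. In this stdlib-only sketch, the local server and the session_id cookie are illustrative; the cookie set by the first response is carried automatically on the second request:

```python
import http.cookiejar
import http.server
import threading
import urllib.request

# A local server that sets a cookie and echoes back any cookie it received.
class CookieHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        got = self.headers.get('Cookie', '').encode('utf-8')
        self.send_response(200)
        self.send_header('Set-Cookie', 'session_id=abc123')
        self.send_header('Content-Length', str(len(got)))
        self.end_headers()
        self.wfile.write(got)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), CookieHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/' % server.server_address[1]

# An opener with a CookieJar plays the role of requests.Session here.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
first = opener.open(url).read().decode('utf-8')   # no cookie sent yet
second = opener.open(url).read().decode('utf-8')  # cookie from the first response

server.shutdown()
print(repr(first), repr(second))
```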
5. Case: Crawling web page content
Here is an example of using the requests library to fetch web content and extract specific information:
import requests
from bs4 import BeautifulSoup

url = 'https://example.com'  # placeholder URL
response = requests.get(url)
if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')
    title = soup.title.string  # Extract the web page title
    print("Web page title:", title)
    # Extract other information...
else:
    print("Request failed, status code:", response.status_code)
In this example, we first send the GET request with requests.get and check whether the response status code is 200 (success). We then parse the HTML content with the BeautifulSoup library and extract the page title.
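If BeautifulSoup is not available, the standard library's html.parser can extract a title on its own. The sketch below, with a made-up sample page, shows the idea that BeautifulSoup generalizes:

```python
from html.parser import HTMLParser

# A minimal parser that collects the text inside the <title> element.
class TitleParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ''

    def handle_starttag(self, tag, attrs):
        if tag == 'title':
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == 'title':
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

parser = TitleParser()
parser.feed('<html><head><title>Example Page</title></head><body></body></html>')
print(parser.title)
```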
6. Summary
This article has explained in detail how to get the response body of HTTP requests in Python, covering the basic usage and advanced features of the built-in urllib library and the third-party requests library. Through the examples above, we showed how to send GET and POST requests, handle JSON responses, set request headers, set timeouts, and handle cookies.