
How to Set Up a Proxy in Python — Requests, Scrapy & Playwright
Setting up proxies in Python is straightforward once you know the pattern. Whether you're using Requests for simple HTTP calls, Scrapy for large-scale scraping, or Playwright for browser automation, the configuration follows the same core idea: tell your HTTP client to route traffic through a proxy endpoint with authentication credentials.
TL;DR
For Requests: pass a proxies dict with http/https keys. For Scrapy: set request.meta['proxy'] per request, or enable the built-in HttpProxyMiddleware for a single global proxy. For Playwright: pass a proxy config at browser launch. All examples use IP:PORT:USERNAME:PASSWORD format, which is how Tensor Proxies delivers credentials.
Proxy Format: IP:PORT:USERNAME:PASSWORD
Tensor Proxies delivers credentials in IP:PORT:USERNAME:PASSWORD format. Before plugging them into your code, you'll need to construct the proxy URL your HTTP client expects. The standard format for authenticated proxies is: http://USERNAME:PASSWORD@IP:PORT for HTTP, and socks5://USERNAME:PASSWORD@IP:PORT for SOCKS5.
For example, if your credentials are 192.168.1.1:8080:myuser:mypass, the proxy URL would be http://myuser:[email protected]:8080. This format works across virtually all Python HTTP libraries.
Python Requests Library
The Requests library is the simplest way to use proxies in Python. Pass a proxies dictionary to any request method. The dictionary maps protocol schemes (http, https) to proxy URLs.
For basic usage: import requests, define your proxy URL, create a proxies dict with both 'http' and 'https' keys pointing to your proxy, then pass proxies=proxies to requests.get() or requests.post(). The library handles authentication and tunneling automatically.
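A minimal sketch of that pattern (the credentials are placeholders, and httpbin.org/ip is just a common way to confirm the proxy is in use):

```python
import requests

proxy_url = "http://myuser:[email protected]:8080"  # placeholder credentials

# Map both schemes to the same proxy; Requests picks the entry that
# matches the scheme of the target URL.
proxies = {
    "http": proxy_url,
    "https": proxy_url,  # HTTPS traffic is tunneled through the proxy via CONNECT
}

def fetch_via_proxy(url: str) -> requests.Response:
    # A timeout keeps a dead proxy from hanging the call indefinitely.
    return requests.get(url, proxies=proxies, timeout=10)

# Example (requires a live proxy):
# print(fetch_via_proxy("https://httpbin.org/ip").json())
```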
For SOCKS5 with Requests, install the SOCKS extra (pip install "requests[socks]" — the quotes keep some shells from interpreting the brackets), then use socks5://user:pass@ip:port as the proxy URL. Use socks5h:// instead if you want DNS resolution to happen through the proxy rather than locally. Everything else stays the same.
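A sketch of the SOCKS5 variant (placeholder credentials again; the commented call assumes a live proxy):

```python
# pip install "requests[socks]"   (pulls in PySocks)
import requests

# socks5h:// (note the "h") resolves hostnames through the proxy as well;
# plain socks5:// resolves them locally before connecting.
socks_url = "socks5h://myuser:[email protected]:1080"
proxies = {"http": socks_url, "https": socks_url}

# requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
```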
Scrapy Framework
Scrapy uses middleware to handle proxy configuration. The simplest approach is to set the proxy in your spider's request meta, or configure it globally in settings.py. For per-request proxying, set request.meta['proxy'] = 'http://user:pass@ip:port' in your spider.
For rotating through multiple proxies, you can write a custom downloader middleware that picks a random proxy from your pool for each request. This is the standard pattern for large-scale scraping with Scrapy — distribute requests across your proxy list to avoid rate limiting on any single IP.
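A minimal sketch of such a middleware, assuming a custom PROXY_POOL setting that holds full proxy URLs (the class and setting names are ours; `process_request` and `from_crawler` are the standard Scrapy middleware hooks):

```python
import random

class RotatingProxyMiddleware:
    """Downloader middleware sketch: attach a random proxy to each request."""

    def __init__(self, proxy_pool):
        self.proxy_pool = proxy_pool

    @classmethod
    def from_crawler(cls, crawler):
        # PROXY_POOL is an assumed custom setting: a list of proxy URLs
        # in http://user:pass@ip:port form.
        return cls(crawler.settings.getlist("PROXY_POOL"))

    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware honours request.meta["proxy"],
        # so setting it here is all the routing work needed.
        request.meta["proxy"] = random.choice(self.proxy_pool)
```

Register it under DOWNLOADER_MIDDLEWARES in settings.py to activate it.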
Scrapy also honours the standard http_proxy and https_proxy environment variables through its built-in HttpProxyMiddleware (toggled by the HTTPPROXY_ENABLED setting), which suits simpler setups where all traffic should route through a single proxy.
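A settings.py fragment illustrating both options (the middleware path, priority, and PROXY_POOL setting are hypothetical names for your own rotating middleware):

```python
# settings.py

# HttpProxyMiddleware is enabled by default; this just makes the intent
# explicit. With it on, the http_proxy / https_proxy environment
# variables are picked up automatically.
HTTPPROXY_ENABLED = True

# For the rotating approach, register your own downloader middleware:
# DOWNLOADER_MIDDLEWARES = {
#     "myproject.middlewares.RotatingProxyMiddleware": 610,
# }
# PROXY_POOL = [
#     "http://myuser:[email protected]:8080",
# ]
```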
Playwright Browser Automation
Playwright supports proxy configuration at the browser level. When launching a browser, pass a proxy object with server, username, and password fields. All pages opened in that browser context will route through the proxy.
The proxy server format for Playwright is just ip:port (or http://ip:port). Username and password are separate fields. This works with Chromium, Firefox, and WebKit in Playwright.
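A sketch of that setup using the sync API (the helper name is ours; the Playwright import is kept inside the function so the credential parsing stays usable without Playwright installed):

```python
def make_proxy_config(creds: str) -> dict:
    """Turn IP:PORT:USERNAME:PASSWORD into Playwright's proxy dict."""
    ip, port, username, password = creds.split(":", 3)
    return {
        "server": f"http://{ip}:{port}",  # server holds only scheme, IP, and port
        "username": username,              # credentials go in separate fields
        "password": password,
    }

def fetch_with_proxy(url: str, creds: str) -> str:
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(proxy=make_proxy_config(creds))
        page = browser.new_page()
        page.goto(url)
        content = page.content()
        browser.close()
        return content

# Example (requires Playwright and a live proxy):
# html = fetch_with_proxy("https://httpbin.org/ip", "192.168.1.1:8080:myuser:mypass")
```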
For rotating proxies across different browser contexts, create a new context for each proxy. This also isolates cookies and storage, which is useful for multi-account scenarios. Each context gets its own proxy, its own cookie jar, and its own browsing state.
Troubleshooting Common Issues
The most common problems when setting up proxies in Python are authentication failures, protocol mismatches, and timeout issues. Here are quick fixes:
- 407 Proxy Authentication Required — double-check username and password, ensure special characters are URL-encoded
- Connection refused — verify the IP and port are correct, check that the proxy is active
- SSL errors with HTTPS — make sure you're using http:// (not https://) for the proxy URL itself, even when accessing HTTPS sites
- Timeout errors — increase your timeout setting, try a different proxy from your pool, check if the target site is blocking the proxy IP
- SOCKS5 not working — install PySocks (pip install pysocks) and requests[socks] if using the Requests library
Quick Reference
Requests is the fastest way to get started — three lines of code and you're proxied. Scrapy is best for large-scale scraping projects where you need proxy rotation middleware. Playwright is the choice for browser automation, JavaScript-rendered pages, and multi-account scenarios.
All three approaches work with Tensor Proxies' IP:PORT:USERNAME:PASSWORD credentials. Both HTTP and SOCKS5 protocols are supported across all packages. Start with the Datacenter package ($8/25 proxies) for development and testing, then scale up to Residential ISP for production workloads on protected targets.