Author: 张华  Published: 2020-02-12
Copyright: this article may be freely reproduced, provided any reproduction links back to the original source and retains the author attribution and this notice
The machine has two NICs, 192.168.99.135 and 192.168.8.101. For certain Chrome tabs I want to flexibly send traffic out via 192.168.8.101, similar to what ForceBindIp[1] does, without a global routing-table change. How? Run a local squid HTTP proxy and use squid's tcp_outgoing_address feature.
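Both ForceBindIp and squid's tcp_outgoing_address ultimately rely on the same mechanism: binding the socket to one NIC's address before connecting. A minimal sketch, using 127.0.0.1 in place of a real NIC address such as 192.168.8.101:

```python
import socket

# Bind to one NIC's address before connect(); the kernel then routes via,
# and stamps outgoing packets with, that source IP. This is the mechanism
# behind both ForceBindIp and squid's tcp_outgoing_address.
# 127.0.0.1 stands in here for a real NIC address like 192.168.8.101.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 0))        # port 0: let the kernel pick an ephemeral port
bound_ip = s.getsockname()[0]
print(bound_ip)                 # -> 127.0.0.1
s.close()
```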
1, First set up policy routing so that the 192.168.8.0/24 network egresses via 192.168.8.1
$ ip rule list |grep ssh
32765: from 192.168.8.0/24 lookup ssh
$ ip route list table ssh
default via 192.168.8.1 dev wlan0
# rc.local contains the following
#sudo systemctl restart rc.local
ip rule del from 192.168.8.0/24 table ssh >/dev/null 2>&1 &
#ip rule del to <ip> table ssh >/dev/null 2>&1 &
ip rule del fwmark 8 table ssh >/dev/null 2>&1 &
sleep 1 #must sleep to wait for above del lines to finish
ip rule add from 192.168.8.0/24 table ssh >/dev/null 2>&1 &
#ip rule add to <ip> table ssh >/dev/null 2>&1 &
sleep 2
ip route add default via 192.168.8.1 dev wlan0 table ssh >/dev/null 2>&1 &
sleep 3
Note: the policy route above is not enough on its own; you must also add the traffic that should use it, e.g. with 'ip rule add to <ip> table ssh'. For wireguard, set AllowedIPs = 10.0.8.0/24,/32 on the client; if you want to layer squid's tcp_outgoing_address on top of wireguard, you must set AllowedIPs = 0.0.0.0/0, and after setting that global route you can of course still send some Chinese networks back out via the default main route.
2, squid uses tcp_outgoing_address. The minimal squid.conf ends up as follows; for the underlying mechanism see [2]:
sudo apt install squid -y
cat << EOF | sudo tee /etc/squid/squid.conf
#sudo rm -rf /dev/shm/squid*
#sudo mkdir -p /var/log/squid && sudo chown -R proxy:proxy /var/log/squid/
http_port 192.168.99.136:3128
dns_v4_first on
dns_nameservers 192.168.99.1
acl gfw localport 3128
tcp_outgoing_address 192.168.8.101 gfw
http_access allow all
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 7
coredump_dir /var/cache/squid
#coredump_dir /var/log/squid/cache
max_filedescriptors 3200
#prevent squid from adding HTTP headers that let some websites detect the proxy and deny access
via off
forwarded_for delete
EOF
sudo systemctl reload squid
sudo systemctl enable squid
#Ipc::Mem::Segment::create failed to shm_open(/squid-cf__metadata.shm): (17) File exists
sudo rm -rf /dev/shm/squid-cf__*
sudo mkdir -p /var/log/squid && sudo chown -R proxy:proxy /var/log/squid/
sudo mkdir -p /var/cache/squid && sudo chown -R proxy:proxy /var/cache/squid/
#add the following lines in systemd file
ExecStartPre=mkdir -p /var/log/squid
ExecStartPre=chown -R proxy:proxy /var/log/squid/
ExecStartPre=mkdir -p /var/cache/squid
ExecStartPre=chown -R proxy:proxy /var/cache/squid/
NOTE: updated 2020-06-26
With the policy route in place, ssh can use it as well:
ssh ubuntu@xxx -b 192.168.8.101 -D192.168.99.135:3128 -fN -o ServerAliveInterval=30 -o ServerAliveCountMax=1
To require a username and password:
#sudo htpasswd -c /etc/squid/passwd quqi99
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl auth_user proxy_auth REQUIRED
http_access allow auth_user
Note: the above is an HTTP proxy only, not HTTPS; for HTTPS see the attachment.
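For reference, the Proxy-Authorization header a client sends for the basic auth above is just the base64 of user:password. A sketch using the quqi99/password pair from the htpasswd example:

```python
import base64

# HTTP basic proxy auth: the client sends
#   Proxy-Authorization: Basic base64(user:password)
# Encode user:password with no trailing newline (i.e. echo -n, not echo,
# when producing the value in shell).
creds = base64.b64encode(b'quqi99:password').decode('ascii')
header = 'Proxy-Authorization: Basic ' + creds
print(header)   # -> Proxy-Authorization: Basic cXVxaTk5OnBhc3N3b3Jk
```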
3, For ssh, use the -b option to take the policy route
ssh ubuntu@vpn_bos -b 192.168.8.101
4, What about openvpn? Specify --local, but remove nobind
sudo openvpn --config xxx.conf --local 192.168.8.101
5, When the box acts as a router, combine this with ipset. See: https://blog.csdn.net/quqi99/article/details/104641605
ip rule del fwmark 8 table ssh > /dev/null 2>&1
ip rule add fwmark 8 table ssh
ipset create ssh hash:ip -exist
ipset add ssh <IP> -exist
iptables -t mangle -D PREROUTING -m set --match-set ssh dst -p tcp -m multiport --dport 22 -j MARK --set-mark 8 >/dev/null 2>&1 &
iptables -t mangle -D OUTPUT -m set --match-set ssh dst -p tcp -m multiport --dport 22 -j MARK --set-mark 8 >/dev/null 2>&1 &
#sleep 1
iptables -t mangle -I PREROUTING -m set --match-set ssh dst -p tcp -m multiport --dport 22 -j MARK --set-mark 8
iptables -t mangle -I OUTPUT -m set --match-set ssh dst -p tcp -m multiport --dport 22 -j MARK --set-mark 8
#need to set rp_filter=2 when using mangle
sysctl net.ipv4.conf.all.rp_filter=2
sysctl net.ipv4.conf.default.rp_filter=2
Aside
How to persist the policy routes above turns out to be a big topic of its own, still unsolved at the time of writing:
1, First tried this; it does not work
hua@t440p:~$ cat /etc/network/if-up.d/wlp4s0-routes
#!/bin/sh
set -e
if [ "$IFACE" != 'wlp4s0' ]; then
exit 0
fi
if [ "$METHOD" = loopback ]; then
exit 0
elif [ "$METHOD" = dhcp ]; then
IF_ADDRESS="$(echo "$IP4_ADDRESS_0" | cut -d'/' -f1)"
IF_GATEWAY="$(echo "$IP4_ADDRESS_0" | cut -d' ' -f2)"
elif [ "$METHOD" = static ]; then
if [ ! "$GATEWAY" ]; then
IF_GATEWAY="$(echo "$IF_ADDRESS" | cut -d. -f1-3).1"
fi
fi
ip route flush table "$IFACE"
ip route add default via "$IF_GATEWAY" table "$IFACE"
ip rule del lookup "$IFACE" || true
ip rule add from "$IF_ADDRESS" lookup "$IFACE"
2, Then tried this; it still does not work
hua@t440p:~$ cat /etc/network/if-up.d/wlp4s0-routes
if [ "$IFACE" = "wlp4s0" ]; then
ip rule add from 172.20.10.0/24 table ssh
ip rule add from 192.168.8.0/24 table ssh
fi
3, Policy routing cannot be persisted in here (/etc/NetworkManager/system-connections/huawei4G) either
4, Switching to netplan should work in theory, but in practice it failed with a different error
sudo cp /etc/netplan/01-network-manager-all.yaml /etc/netplan/01-network-manager-all.yaml_bak
cat << EOF | sudo tee /etc/netplan/01-network-manager-all.yaml
network:
  version: 2
  renderer: networkd  # change this from 'NetworkManager' if it's set.
  ethernets:
    eth0:
      #dhcp6: no
      #accept-ra: no
      #gateway6: 2a01:xxx:xxxx:xx::x
      #addresses: [81.94.xx.xx/28, "2a01:xxx:xxxx:xx::xx/64"]
      dhcp4: no
      addresses: [192.168.99.135/24]
      gateway4: 192.168.99.1
      nameservers:
        addresses: [192.168.99.1]
    wlp4s0:
      dhcp4: no
      addresses: [192.168.8.101/24]
      gateway4:  # unset, since we configure the route below
      routes:
        - to: 91.189.0.0/16
          via: 192.168.8.1
          metric: 600
          #table: 9  #echo "9 ssh" >> /etc/iproute2/rt_tables
      routing-policy:
        - from: 172.20.10.0/24
          table: 9
        - from: 192.168.8.101
          table: 9
EOF
sudo netplan try # rolls back to the previous configuration after 120 seconds
sudo systemctl stop netplan
sudo netplan -d apply
sudo systemctl restart network-manager
#sudo systemctl restart systemd-networkd #for ubuntu server version
hua@t440p:~$ sudo netplan apply
bind: Address already in use
netplan: fatal error: cannot bind to port 2983, is another daemon running?, exiting
The netplan route is a dead end. According to this page ( https://blogs.gnome.org/thaller/category/networkmanager/ ), network-manager supports policy routing starting with 1.18 (usage: http://devemmeff.blogspot.com/2016/02/howto-policy-based-routing-using.html), but building network-manager from source as below ran into far too many package dependency problems, still unsolved:
git clone https://gitlab.freedesktop.org/NetworkManager/NetworkManager.git && cd NetworkManager
git checkout -b 1.18.4 1.18.4
sudo apt install gtk-doc-tools build-essential automake
sudo apt install wireless-tools libiw-dev libdbus-glib-1-dev libgudev-1.0-dev uuid-dev uuid libnss-db libnss3-dev ppp-dev libjansson-dev libcurl4-openssl-dev libndp-dev
sudo apt install gtk-doc-tools libglib2.0-dev libudev-dev uuid-dev libnss3-dev ppp-dev libjansson-dev libcurl4-nss-dev libndp-dev libreadline-dev intltool
sudo apt install libnm-dev libnm-util-dev
./autogen.sh --prefix=/usr --sysconfdir=/etc --localstatedir=/var --enable-introspection=no --disable-ppp --disable-json-validation --enable-gtk-doc=no
In the end it was solved via rc.local:
sudo bash -c 'cat >/etc/rc.local' <<EOF
#!/bin/sh -e
ip rule del from 172.20.10.0/24 table ssh >/dev/null 2>&1 &
ip rule del from 192.168.8.0/24 table ssh >/dev/null 2>&1 &
#ip rule del to <IP> table ssh >/dev/null 2>&1 &
sleep 1 #must have this line
ip rule add from 172.20.10.0/24 table ssh >/dev/null 2>&1 &
ip rule add from 192.168.8.0/24 table ssh >/dev/null 2>&1 &
#ip rule add to <IP> table ssh >/dev/null 2>&1 &
sleep 1 #must have it
ip route add default via 192.168.8.1 dev wlan0 table ssh >/dev/null 2>&1 &
sleep 1 #must have it
exit 0
EOF
sudo systemctl restart rc.local
sudo systemctl enable rc.local
Note: there must be a sleep between the del and add lines above, because they all run asynchronously (in the background).
Then verify with:
sudo tcpdump -ni wlan0 host <IP>
Updated 2022-04-14
Use openconnect to create the second NIC, e.g. tun0=192.168.11.84
$ cat /etc/systemd/system/openconnect.service
[Unit]
Description=openconnect
After=network.target
[Service]
Type=simple
Environment=password=xxxx
ExecStart=/bin/sh -c 'echo $password | sudo openconnect --no-dtls -u <user> a02.xxx.com:1443 --interface=vpn0 --script /bak/bin/vpnc-custom-dic2.sh'
Restart=always
[Install]
WantedBy=multi-user.target
$ cat /bak/bin/vpnc-custom-dic2.sh
#!/bin/sh
export CISCO_SPLIT_INC=0
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_ADDR=192.168.11.0
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASKLEN=24
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASK=255.255.255.0
export CISCO_SPLIT_INC=$((${CISCO_SPLIT_INC}+1))
for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
echo 2 > $i
done
# echo "11 vpn" >> /etc/iproute2/rt_tables
#ip rule list table vpn && ip route list table vpn
ip rule del from 192.168.11.0/24 table vpn >/dev/null 2>&1
ip rule del fwmark 11 table vpn >/dev/null 2>&1
sleep 1
ip rule add from 192.168.11.0/24 table vpn >/dev/null 2>&1
# put the following line to /etc/vpnc/post-connect.d/set-policy-route
#ip route add default via 192.168.11.1 dev vpn0 table vpn >/dev/null 2>&1 &
export INTERNAL_IP4_DNS=127.0.0.53
# for openvpn: opkg install vpnc-script vpnc
#exec /lib/netifd/vpnc-script
exec /etc/vpnc/vpnc-script
exit 0
$ cat /etc/vpnc/post-connect.d/set-policy-route
#!/bin/sh
ip route add default via 192.168.11.1 dev vpn0 table vpn >/dev/null 2>&1
ip route flush cache
exit 0
20221221 - squid https
Install the openssl build of squid (squid-openssl)
sudo apt-get install openssl libssl-dev ssl-cert squid-openssl -y
squid -v |grep -E 'with-openssl|enable-ssl-crtd'
Or, if squid-openssl is unavailable (e.g. on armbian), build it like this:
apt install openssl libssl-dev ssl-cert devscripts build-essential fakeroot -y
apt source squid
apt build-dep squid
cd squid3-3.5.23
# modify DEB_CONFIGURE_EXTRA_FLAGS in debian/rules
vim debian/rules
--enable-ssl \
--enable-ssl-crtd \
--with-openssl \
--disable-ipv6
./configure
debuild -us -uc -b
dpkg -i xxx.deb
sudo apt-mark hold squid3
squid -v |grep -E 'with-openssl|enable-ssl-crtd'
But on armbian the above failed with a 'gadgets.h:83:61' error, and building from source as below then failed with "BCP 177 violation. IPv6 transport forced OFF by build parameters."
#gadgets.h:83:61: error: template argument 3 is invalid
export LC_ALL=en_US.UTF-8
#update-alternatives --set editor /usr/bin/vim.basic
export EDITOR=vim
apt install libtool-bin automake autoconf ed -y
git clone https://github.com/squid-cache/squid.git squid
cd squid
git checkout v5
./bootstrap.sh
mkdir build; cd build
../configure --enable-ssl --enable-ssl-crtd --with-openssl --disable-ipv6 \
--prefix=/opt/squid --with-default-user=proxy --disable-inlined \
--disable-optimizations --enable-arp-acl --disable-wccp --disable-wccp2 --disable-htcp \
--enable-delay-pools --enable-linux-netfilter --disable-translation --disable-auto-locale \
--with-logdir=/opt/squid/log/squid --with-pidfile=/opt/squid/run/squid.pid \
--with-filedescriptors=65536 --with-large-files --enable-async-io=8
make && make install
/opt/squid/sbin/squid -v |grep -E 'with-openssl|enable-ssl-crtd'
cp /opt/squid/etc/squid.conf /opt/squid/etc/squid.conf_bak
cat << EOF |tee /opt/squid/etc/squid.conf
#htpasswd -c /etc/squid/passwd quqi99
#auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic program /opt/squid/libexec/basic_ncsa_auth /etc/squid/passwd
acl auth_user proxy_auth REQUIRED
http_access allow auth_user
http_access deny all
via off
forwarded_for delete
#dns_v4_first on
http_port 0.0.0.0:3128
https_port 0.0.0.0:3129 cert=/root/ca/quqi.com.crt key=/root/ca/quqi.com.key
EOF
adduser proxy
mkdir -p /opt/squid/log
chown -R proxy:proxy /opt/squid/log
chown -R proxy:proxy /opt/squid/var
chown -R proxy:proxy /opt/squid/run
#su proxy
sudo -u proxy /opt/squid/sbin/squid
#but reports - BCP 177 violation. IPv6 transport forced OFF by build parameters.
Create the certificates:
openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes -out ca.crt -keyout ca.key -subj "/C=CN/ST=BJ/O=STS/CN=CA"
for DOMAIN in quqi.com
do
openssl genrsa -out $DOMAIN.key
openssl req -new -key $DOMAIN.key -out $DOMAIN.csr -subj "/C=CN/ST=BJ/O=STS/CN=$DOMAIN"
openssl x509 -req -in $DOMAIN.csr -out $DOMAIN.crt -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650
done
The squid config is as follows:
$ cat /etc/squid/squid.conf
#sudo htpasswd -c /etc/squid/passwd quqi99
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl auth_user proxy_auth REQUIRED
http_access allow auth_user
http_access allow all
via off
forwarded_for delete
dns_v4_first on
http_port 127.0.0.1:3128
https_port 127.0.0.1:3129 cert=/etc/squid/cert/quqi.com.crt key=/etc/squid/cert/quqi.com.key
For Chrome you would import the certificate via chrome://settings/certificates -> 'Server' tab -> Export ca.pem, but it still doesn't work: it pops up ERR_CERT_AUTHORITY_INVALID, unclear why. But here we are driving programs, not a browser, so ignore that for now and import it as follows:
CA_CERT_D=/usr/local/share/ca-certificates
rm -rf $CA_CERT_D/*
mkdir -p $CA_CERT_D
openssl x509 -in /etc/squid/cert/ca.pem -out /etc/squid/cert/ca.crt
sudo cp /etc/squid/cert/ca.crt $CA_CERT_D/squid_ca.crt
sudo chmod 644 $CA_CERT_D/squid_ca.crt
sudo update-ca-certificates --fresh
Test: 3128 is the http_port (it can still tunnel https automatically via CONNECT); 3129 is the https_port. Also note the SNI involved is squid's, not api.snapcraft.io's, and since the certificate belongs to squid you should use --proxy-cacert rather than --cacert.
$ curl -x http://quqi99:password@127.0.0.1:3128 https://api.snapcraft.io
snapcraft.io store API service - Copyright 2018-2022 Canonical.
$ curl --resolve quqi.com:3128:127.0.0.1 --proxy-cacert /etc/squid/cert/ca.crt -x http://quqi99:password@quqi.com:3128 https://api.snapcraft.io
snapcraft.io store API service - Copyright 2018-2022 Canonical.
$ curl --resolve quqi.com:3129:127.0.0.1 --proxy-cacert /etc/squid/cert/ca.crt -x https://quqi99:password@quqi.com:3129 https://api.snapcraft.io
snapcraft.io store API service - Copyright 2018-2022 Canonical.
Add ca.crt to the system trust store (not yet to the browser)
$ curl --resolve quqi.com:3129:127.0.0.1 -x https://quqi99:password@quqi.com:3129 https://api.snapcraft.io
curl: (60) SSL certificate problem: unable to get local issuer certificate
CA_CERT_D=/usr/local/share/ca-certificates
rm -rf $CA_CERT_D/*
mkdir -p $CA_CERT_D
sudo cp /etc/squid/cert/ca.crt $CA_CERT_D/squid_ca.crt
sudo update-ca-certificates --fresh
$ curl --resolve quqi.com:3129:127.0.0.1 -x https://quqi99:password@quqi.com:3129 https://api.snapcraft.io
snapcraft.io store API service - Copyright 2018-2022 Canonical
Then add '127.0.0.1 quqi.com' to /etc/hosts:
echo '127.0.0.1 quqi.com' >> /etc/hosts
$ curl -x https://quqi99:password@quqi.com:3129 https://api.snapcraft.io
snapcraft.io store API service - Copyright 2018-2022 Canonical.
But why doesn't the following python3 program work?
$ cat test.py
#!/usr/bin/env python
# coding=utf-8
import ssl
from urllib import request
proxy_handler = request.ProxyHandler({"https": "https://quqi99:password@quqi.com:3129"})
context = ssl.SSLContext()
#context.verify_mode = ssl.CERT_REQUIRED
#context.check_hostname = True
secure_handler = request.HTTPSHandler(context = context)
opener = request.build_opener(proxy_handler, secure_handler)
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; Win64;x64; rv:54.0) Gecko/20100101 Firefox/54.0')]
#import rpdb;rpdb.set_trace()
#response = request.urlopen("https://api.snapcraft.io", context=context)
req = request.Request("https://api.snapcraft.io", method="HEAD")
response = opener.open(req)
print(response)
$ python3 test.py
Traceback (most recent call last):
File "/usr/lib/python3.10/urllib/request.py", line 1348, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/usr/lib/python3.10/http/client.py", line 1282, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.10/http/client.py", line 1328, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.10/http/client.py", line 1277, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.10/http/client.py", line 1037, in _send_output
self.send(msg)
File "/usr/lib/python3.10/http/client.py", line 975, in send
self.connect()
File "/usr/lib/python3.10/http/client.py", line 1447, in connect
super().connect()
File "/usr/lib/python3.10/http/client.py", line 951, in connect
self._tunnel()
File "/usr/lib/python3.10/http/client.py", line 920, in _tunnel
(version, code, message) = response._read_status()
File "/usr/lib/python3.10/http/client.py", line 279, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.10/socket.py", line 705, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [Errno 104] Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/hua/test.py", line 14, in <module>
opener.open(req)
File "/usr/lib/python3.10/urllib/request.py", line 519, in open
response = self._open(req, data)
File "/usr/lib/python3.10/urllib/request.py", line 536, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/usr/lib/python3.10/urllib/request.py", line 1391, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/usr/lib/python3.10/urllib/request.py", line 1351, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 104] Connection reset by peer>
Switching to urllib3 raises ProxySchemeUnsupported, which says fetching an HTTPS resource through an HTTPS proxy is unsupported; that is, an https proxy cannot proxy https sites (proxying https://baidu.com fails, while proxying http://baidu.com works).
$ cat test2.py
#!/usr/bin/env python
# coding=utf-8
import ssl
import urllib3
ssl_verify = False
if (ssl_verify):
    cert_reqs = ssl.CERT_REQUIRED
else:
    cert_reqs = ssl.CERT_NONE
urllib3.disable_warnings()
headers = urllib3.make_headers(proxy_basic_auth='quqi99:password')
proxy = urllib3.ProxyManager('https://quqi.com:3129', proxy_headers=headers, cert_reqs=cert_reqs)
r = proxy.request('HEAD', 'https://api.snapcraft.io')
print(r.status)
urllib3.exceptions.ProxySchemeUnsupported: TLS in TLS requires support for the 'ssl' module
But the requests version does support an https proxy in front of an https site:
$ python3 test3.py
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (5.1.0) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
snapcraft.io store API service - Copyright 2018-2022 Canonical.
$ cat test3.py
#!/usr/bin/env python
# coding=utf-8
import requests
proxies = {"https": "https://quqi99:password@quqi.com:3129"}
r = requests.get("https://api.snapcraft.io", proxies=proxies)
print(r.text)
Since curl supports TLS over TLS, so does pycurl:
$ cat test4.py
#!/usr/bin/env python
# coding=utf-8
import pycurl
from io import BytesIO
import certifi
def make_pycurl_head(username, password, host, port, target_url='https://api.snapcraft.io', method='HEAD'):
    header_output = BytesIO()
    body_output = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.CAINFO, certifi.where())
    # set proxy-insecure
    c.setopt(c.PROXY_SSL_VERIFYHOST, 0)
    c.setopt(c.PROXY_SSL_VERIFYPEER, 0)
    # set proxy
    c.setopt(pycurl.PROXY, f"https://{host}:{port}")
    # proxy auth
    c.setopt(pycurl.PROXYUSERPWD, f"{username}:{password}")
    # set proxy type = "HTTPS"
    c.setopt(pycurl.PROXYTYPE, 2)
    # target url
    c.setopt(c.URL, target_url)
    c.setopt(pycurl.CUSTOMREQUEST, method)
    c.setopt(pycurl.NOBODY, True)
    c.setopt(c.WRITEDATA, body_output)
    c.setopt(pycurl.HEADERFUNCTION, header_output.write)
    c.setopt(pycurl.CONNECTTIMEOUT, 3)
    c.setopt(pycurl.TIMEOUT, 8)
    return (c, header_output, body_output)
#https://gist.github.com/tumb1er/b7fdf9c257b78d30b6a004149bbd9981
c, header_output, body_output = make_pycurl_head('quqi99', 'password', 'quqi.com', '3129')
c.perform()
status_code = int(c.getinfo(pycurl.HTTP_CODE))
content = body_output.getvalue().decode()
print(status_code)
print(content)
c.close()
A java program; it too only supports an http proxy, not an https proxy
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.UnsupportedEncodingException;
import java.net.Authenticator;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.PasswordAuthentication;
import java.net.Proxy;
import java.net.Socket;
import java.net.URL;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
public class Test {
//curl --resolve quqi.com:3129:127.0.0.1 --proxy-cacert /etc/squid/cert/ca.crt -x https://quqi99:password@quqi.com:3129 https://api.snapcraft.io
private static String proxyHost = "127.0.0.1";
private static int proxyPort = 3129;
private static String user = "quqi99";
private static String pass = "password";
public static final String target_url = "https://api.snapcraft.io";
public static void main(String[] args) {
System.setProperty("jdk.http.auth.tunneling.disabledSchemes", "false");
System.setProperty("jdk.http.auth.proxying.disabledSchemes", "false");
sendGet();
}
public static StringBuffer sendGet() {
BufferedReader in = null;
StringBuffer sb = new StringBuffer();
try {
HttpsURLConnection connection = null;
URL readUrl = new URL(target_url);
@SuppressWarnings("static-access")
Proxy proxy = new Proxy(Proxy.Type.DIRECT.HTTP, new InetSocketAddress(proxyHost, proxyPort));
connection = (HttpsURLConnection) readUrl.openConnection(proxy);
Authenticator.setDefault( new Authenticator() {
@Override
protected PasswordAuthentication getPasswordAuthentication() {
if (getRequestorType().equals( RequestorType.PROXY )) {
return new PasswordAuthentication( user, pass.toCharArray() );
}
return super.getPasswordAuthentication();
}
});
SSLContext sc = SSLContext.getInstance("SSL");
sc.init(null, new TrustManager[] { new TrustAnyTrustManager() }, new java.security.SecureRandom());
connection.setSSLSocketFactory(sc.getSocketFactory());
connection.setHostnameVerifier(new TrustAnyHostnameVerifier());
connection.connect();
in = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
System.out.println(in.readLine());
} catch (Exception e) {
e.printStackTrace();
}
finally {
try {
if (in != null) {
in.close();
}
} catch (Exception e2) {
e2.printStackTrace();
}
}
return sb;
}
private static class TrustAnyTrustManager implements X509TrustManager {
public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
}
public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
}
public X509Certificate[] getAcceptedIssuers() {
return new X509Certificate[] {};
}
}
private static class TrustAnyHostnameVerifier implements HostnameVerifier {
public boolean verify(String hostname, SSLSession session) {
return true;
}
}
}
20221225 - build python with openssl
Rebuilt python with openssl following the steps below, but there were still problems. (Note: socket.ssl was removed in Python 3, so the hasattr(socket, "ssl") check below always prints False regardless of how python was built; probe the ssl module instead.)
python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
$ python3 -c 'import socket; print(hasattr(socket, "SSL")); socket.ssl'
False
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: module 'socket' has no attribute 'ssl'
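A check that actually works on Python 3 (a sketch; socket.ssl only ever existed on Python 2):

```python
import ssl

# On Python 3 the old socket.ssl attribute no longer exists, so
# hasattr(socket, "ssl") being False proves nothing about the build.
# The ssl module itself reports which OpenSSL the interpreter links against.
print(ssl.OPENSSL_VERSION)   # e.g. "OpenSSL 1.1.1l  24 Aug 2021"
print(ssl.HAS_TLSv1_3)
```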
#https://docs.python.org/3/using/unix.html
PYDIR=$HOME/opt/python-3.10.6
export PATH=$PYDIR/bin:$PATH
export CPPFLAGS="-I$PYDIR/include $CPPFLAGS"
mkdir -p $PYDIR/src
cd $PYDIR/src
# openssl
wget https://www.openssl.org/source/openssl-1.1.1l.tar.gz
tar zxf openssl-1.1.1l.tar.gz
cd openssl-1.1.1l
./config --prefix=$PYDIR
make
make install
# python
cd ..
wget https://www.python.org/ftp/python/3.10.6/Python-3.10.6.tar.xz
tar xf Python-3.10.6.tar.xz
cd Python-3.10.6
./configure --prefix=$PYDIR
make
make install
$HOME/opt/python-3.10.6/bin/python3 -c 'import socket; print(hasattr(socket, "SSL")); socket.ssl'
#configure: error: C compiler cannot create executables
$ cat config.log |grep error |head -n1
ccache: error while loading shared libraries: libhiredis.so.0.14: cannot open shared object file: No such file or directory
#Could not build the ssl module! Python requires a OpenSSL 1.1.1 or newer
20221223 - nginx + squid + https
The original idea was to have nginx proxy traffic on to squid, but it failed. Notes below:
The squid setup above implements https like this: L7 https between client and proxy, then the proxy passes the client's https through to the target at L4 (no certificate needed) rather than terminating it at L7 (certificate needed).
nginx proxies at L7 by default, but with the stream module (ngx_stream_ssl_preread_module) it can also pass the TLS bytes above straight through over tcp/ip at L4 (no certificate needed). The nginx packages from the official nginx repo include ngx_stream_ssl_preread_module.
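ssl_preread works by peeking at the SNI carried in the plaintext ClientHello before forwarding any bytes. A minimal sketch of that parsing in Python, exercised against a ClientHello generated in memory (the offsets follow the TLS record and handshake layout; error handling is omitted):

```python
import ssl

def extract_sni(hello: bytes):
    """Pull server_name out of a raw TLS ClientHello (what ssl_preread does)."""
    if not hello or hello[0] != 0x16:                # 0x16 = handshake record
        return None
    pos = 5 + 4                                      # record header + handshake header
    pos += 2 + 32                                    # legacy_version + random
    pos += 1 + hello[pos]                            # session_id
    pos += 2 + int.from_bytes(hello[pos:pos+2], 'big')   # cipher_suites
    pos += 1 + hello[pos]                            # compression_methods
    end = pos + 2 + int.from_bytes(hello[pos:pos+2], 'big')
    pos += 2
    while pos + 4 <= end:                            # walk the extensions
        ext_type = int.from_bytes(hello[pos:pos+2], 'big')
        ext_len = int.from_bytes(hello[pos+2:pos+4], 'big')
        pos += 4
        if ext_type == 0:                            # server_name extension
            name_len = int.from_bytes(hello[pos+3:pos+5], 'big')
            return hello[pos+5:pos+5+name_len].decode('ascii')
        pos += ext_len
    return None

# Generate a real ClientHello in memory (no network) and parse it back.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
inc, out = ssl.MemoryBIO(), ssl.MemoryBIO()
conn = ctx.wrap_bio(inc, out, server_hostname='quqi.com')
try:
    conn.do_handshake()
except ssl.SSLWantReadError:                         # expected: we only want the ClientHello
    pass
sni = extract_sni(out.read())
print(sni)                                           # -> quqi.com
```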
#install nginx from upstream to include ngx_stream_ssl_preread_module
cat << EOF |sudo tee -a /etc/apt/sources.list
deb http://nginx.org/packages/mainline/ubuntu/ jammy nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ jammy nginx
EOF
#sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv ABF5BD827BD9BF62
sudo gpg --keyserver keyserver.ubuntu.com --recv-key ABF5BD827BD9BF62
sudo gpg -a --export ABF5BD827BD9BF62 | sudo apt-key add -
sudo apt install nginx -y
nginx -V |grep with-stream_ssl_preread_module
Add the following stream config between the events and http sections of /etc/nginx/nginx.conf
stream {
map $ssl_preread_server_name $stream_map {
#ssh hua@127.0.0.1 -p443 -v
default ssh;
#curl --resolve quqi.com:443:127.0.0.1 --cacert /etc/squid/cert/ca.crt https://quqi.com:4443/ss
#curl --resolve quqi.com:443:127.0.0.1 --cacert /etc/squid/cert/ca.crt https://quqi.com:443/ss
#why it doesn't work: curl --resolve quqi.com:443:127.0.0.1 --proxy-cacert /etc/squid/cert/ca.crt -x https://quqi.com:443/proxy https://api.snapcraft.io
quqi.com https;
}
upstream ssh {
server 127.0.0.1:22;
}
upstream https {
server 127.0.0.1:4443;
}
server {
listen 443 reuseport;
ssl_preread on;
proxy_pass $stream_map;
}
}
The /proxy configuration below does not work
# And add one line 'include /etc/nginx/sites-enabled/*;' into http section /etc/nginx/nginx.conf
sudo ln -s /etc/nginx/sites-available/https /etc/nginx/sites-enabled/https
sudo nginx -t
sudo systemctl reload nginx
$ cat /etc/nginx/sites-available/https
server {
listen 4443 ssl;
server_name quqi.com;
ssl_certificate /etc/squid/cert/quqi.com.crt;
ssl_certificate_key /etc/squid/cert/quqi.com.key;
location /ss {
default_type text/html;
return 200 'ok';
}
location /proxy {
proxy_pass http://127.0.0.1:3127;
#echo -n 'quqi99:password' |base64  (note -n: plain echo bakes a trailing newline into the value)
proxy_set_header Proxy-Authorization "Basic cXVxaTk5OnBhc3N3b3Jk";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Request-URI $request_uri;
proxy_redirect off;
}
error_log /var/log/nginx/error.log debug;
}
The reason this fails is easy to understand: this is L7 proxying, and nginx's L7 proxying is reverse proxying; a reverse proxy cannot steer nginx traffic into a squid proxy, even with proxy_pass + proxy_set_header.
To send nginx traffic into the squid proxy you can only use forward proxying, i.e. the stream proxy at the top (see also: https://community.f5.com/t5/technical-forum/how-to-tell-nginx-to-use-a-forward-proxy-to-reach-a-specific/td-p/303548).
One more example: putting username/password authentication in front of a socks5 or http proxy exposed to the public internet cannot be done this way.
1, First, adding the following to /etc/nginx/sites-available/default shows that with a reverse proxy, proxy_pass must point at an http page address, not an http proxy address (even with proxy_set_header, because it is a reverse proxy)
server {
listen 8000;
server_name xxx.publicvm.com;
root /var/www/html;
index index.html;
#location / {
# try_files $uri $uri/ =404;
#}
location / {
#auth_basic is used for L7 proxy so it can't be used in L4 stream proxy
#apt install apache2-utils && htpasswd -c /etc/nginx/passwd quqi99
#http://192.168.99.194:8000/ss
#auth_basic "Please enter your password:";
#auth_basic_user_file /etc/nginx/passwd;
proxy_pass http://$http_host$request_uri;
#proxy_pass http://127.0.0.1:8118;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Request-URI $request_uri;
proxy_redirect off;
}
}
2, In this case only a forward proxy (L4 stream proxy) works, i.e. add the following to /etc/nginx/nginx.conf. But there is still a problem: L4 does not support the L7 auth_basic, so password authentication remains impossible
stream {
upstream group1 {
hash $remote_addr consistent;
server 192.168.99.1:7071 weight=100;
server 192.168.99.1:7070 weight=50;
}
server {
#auth_basic is used for L7 proxy so it can't be used in L4 stream proxy
#auth_basic "Please enter your password:";
#apt install apache2-utils && htpasswd -c /etc/nginx/passwd quqi99
#auth_basic_user_file /etc/nginx/passwd;
#resolver 8.8.8.8;
listen 444;
proxy_pass group1;
}
}
3, So there is currently no solution for putting password authentication in front of a proxy exposed to the public internet this way. Besides, even if there were, nginx is a poor fit as a password-authenticated http forward proxy: it has no module to set an authentication interval, so the password must be entered for every page visited, a nightmare for browser users without a password-saving proxy extension.
20221227 - problem solved
Wrote an http/https proxy program to make debugging easier, plus a socket-based client program.
The https proxy program:
#!/usr/bin/env python
# coding=utf-8
# encoding:utf-8
import ssl
import socket
import _thread
#https://raw.githubusercontent.com/python-trio/trio/master/notes-to-self/ssl-handshake/ssl-handshake.py
class ManuallyWrappedSocket:
def __init__(self, ctx, sock, **kwargs):
self.incoming = ssl.MemoryBIO()
self.outgoing = ssl.MemoryBIO()
self.obj = ctx.wrap_bio(self.incoming, self.outgoing, **kwargs)
self.sock = sock
def _retry(self, fn, *args):
finished = False
while not finished:
want_read = False
try:
ret = fn(*args)
except ssl.SSLWantReadError:
want_read = True
except ssl.SSLWantWriteError:
# can't happen, but if it did this would be the right way to
# handle it anyway
pass
else:
finished = True
# do any sending
data = self.outgoing.read()
if data:
self.sock.sendall(data)
# do any receiving
if want_read:
data = self.sock.recv(4096)
if not data:
self.incoming.write_eof()
else:
self.incoming.write(data)
# then retry if necessary
return ret
def do_handshake(self):
self._retry(self.obj.do_handshake)
def recv(self, bufsize):
return self._retry(self.obj.read, bufsize)
def sendall(self, data):
self._retry(self.obj.write, data)
def unwrap(self):
self._retry(self.obj.unwrap)
return self.sock
def wrap_socket_via_wrap_bio(ctx, sock, **kwargs):
return ManuallyWrappedSocket(ctx, sock, **kwargs)
class Header:
def __init__(self, conn):
self._method = None
header = b''
try:
while 1:
data = conn.recv(4096)
header = b"%s%s" % (header, data)
if header.endswith(b'\r\n\r\n') or (not data):
break
except Exception as err:
print('__init__: ', err)
pass
self._header = header
self.header_list = header.split(b'\r\n')
self._host = None
self._port = None
def get_method(self):
if self._method is None:
self._method = self._header[:self._header.index(b' ')]
return self._method
def get_host_info(self):
if self._host is None:
method = self.get_method()
line = self.header_list[0].decode('utf8')
if method == b"CONNECT":
host = line.split(' ')[1]
if ':' in host:
host, port = host.split(':')
else:
port = 443
else:
for i in self.header_list:
if i.startswith(b"Host:"):
host = i.split(b" ")
if len(host) < 2:
continue
host = host[1].decode('utf8')
break
else:
host = line.split('/')[2]
if ':' in host:
host, port = host.split(':')
else:
port = 80
self._host = host
self._port = int(port)
return self._host, self._port
#return '185.125.188.58', 443
@property
def data(self):
return self._header
def is_ssl(self):
if self.get_method() == b'CONNECT':
return True
return False
def __repr__(self):
return str(self._header.decode("utf8"))
def communicate(sock1, sock2):
try:
while 1:
data = sock1.recv(1024)
if not data:
return
sock2.sendall(data)
except Exception as err:
print(sock1)
#for http over https proxy: <socket.socket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('192.168.99.186', 48246), raddr=('185.125.188.54', 80)>
#for https over https proxy: <socket.socket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('192.168.99.186', 47916)>
print('communicate: ', err)
pass
def handle(client, isbio=False):
    timeout = 60
    if not isbio:
        client.settimeout(timeout)
    server = None
    try:
        header = Header(client)
        print(header)
        if not header.data:
            return
        print(*header.get_host_info(), header.get_method())
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        print('connecting to target: %s:%s' % header.get_host_info())
        server.connect(header.get_host_info())
        #if we wrapped the server socket in TLS here, https over https proxy would not
        #error out, but the client would no longer see the content encrypted here either
        #ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
        #ctx.check_hostname = True
        #ctx.verify_mode = ssl.CERT_NONE
        #server = ctx.wrap_socket(server, server_side=False)
        server.settimeout(timeout)
        use_nio = False
        if header.is_ssl():
            data = b"HTTP/1.0 200 Connection Established\r\n\r\n"
            client.sendall(data)
            print('-------------------client -> target')
            print(client)
            print(server)
            if not use_nio:
                _thread.start_new_thread(communicate, (client, server))
        else:
            server.sendall(header.data)
        print('-------------------target -> client:')
        print(server)
        print(client)
        if not use_nio:
            communicate(server, client)
        else:
            fdset = [client, server]
            while not stop:
                r, w, e = select.select(fdset, [], [], 5)
                if client in r:
                    if server.send(client.recv(1024)) <= 0:
                        break
                if server in r:
                    if client.send(server.recv(1024)) <= 0:
                        break
    except Exception as err:
        print('handle: ', err)
    finally:
        #close both legs once relaying ends
        if server:
            server.close()
        if not isbio:
            client.close()
def serve(ip, port, is_https_proxy):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((ip, port))
    s.listen(10)
    print('proxy start...')
    if is_https_proxy:
        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile='/etc/squid/cert/ca.crt')
        #ctx.verify_mode = ssl.CERT_REQUIRED
        ctx.verify_mode = ssl.CERT_NONE
        ctx.load_cert_chain('/etc/squid/cert/quqi.com.crt', '/etc/squid/cert/quqi.com.key')
        ctx.load_verify_locations(cafile='/etc/squid/cert/ca.crt')
    while True:
        #import rpdb;rpdb.set_trace()
        client, addr = s.accept()
        print("Client connected: {}:{}".format(addr[0], addr[1]))
        wrap = None
        if is_https_proxy:
            client = ctx.wrap_socket(client, server_side=True, do_handshake_on_connect=False)
            client.do_handshake()
            #wrap = wrap_socket_via_wrap_bio(ctx, client, server_side=True)
            #getpeercert() will be empty when verify_mode=ssl.CERT_NONE
            print("SSL established. Peer: {}".format(client.getpeercert()))
        if wrap:
            _thread.start_new_thread(handle, (wrap, True))
        else:
            _thread.start_new_thread(handle, (client, False))
stop = False

if __name__ == '__main__':
    #HTTP Proxy:  curl --resolve quqi.com:7070:127.0.0.1 -x http://quqi.com:7070 http://api.snapcraft.io:80
    #             curl --resolve quqi.com:7070:127.0.0.1 -x http://quqi.com:7070 https://api.snapcraft.io:443
    #HTTPS Proxy: curl --resolve quqi.com:7070:127.0.0.1 --proxy-cacert /etc/squid/cert/ca.crt -x https://quqi.com:7070 https://api.snapcraft.io:443
    #             curl --resolve quqi.com:7070:127.0.0.1 --proxy-cacert /etc/squid/cert/ca.crt -x https://quqi.com:7070 http://api.snapcraft.io:80
    is_https_proxy = True
    IP = "127.0.0.1"
    PORT = 7070
    #serve() loops on accept() forever, so run it in a background thread and
    #keep the main thread free to wait for the stop keystroke
    _thread.start_new_thread(serve, (IP, PORT, is_https_proxy))
    try:
        input('Enter any key to stop.\r\n')
    finally:
        stop = True
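The host/port extraction done by Header.get_host_info() can also be exercised on its own. Below is a condensed sketch of the same rules (parse_target is a hypothetical helper written for illustration, not part of the program above):

```python
def parse_target(request_line, host_header=None):
    # CONNECT carries host:port directly in the request line; plain HTTP
    # proxy requests carry it in the Host: header or in the absolute URL
    method, target = request_line.split(' ')[:2]
    if method == 'CONNECT':
        host, _, port = target.partition(':')
        return host, int(port or 443)
    host = host_header or target.split('/')[2]
    host, _, port = host.partition(':')
    return host, int(port or 80)

print(parse_target('CONNECT api.snapcraft.io:443 HTTP/1.1'))  # ('api.snapcraft.io', 443)
print(parse_target('GET http://api.snapcraft.io/ HTTP/1.1'))  # ('api.snapcraft.io', 80)
```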
The client program is as follows:
#!/usr/bin/env python
# coding=utf-8
import socket, ssl

#MUST use a domain if using SNI (ctx.check_hostname=True)
HOST, PORT = 'quqi.com', 7070

class ManuallyWrappedSocket:
    def __init__(self, ctx, sock, **kwargs):
        self.incoming = ssl.MemoryBIO()
        self.outgoing = ssl.MemoryBIO()
        self.obj = ctx.wrap_bio(self.incoming, self.outgoing, **kwargs)
        self.sock = sock

    def _retry(self, fn, *args):
        finished = False
        while not finished:
            want_read = False
            try:
                ret = fn(*args)
            except ssl.SSLWantReadError:
                want_read = True
            except ssl.SSLWantWriteError:
                # can't happen, but if it did this would be the right way to
                # handle it anyway
                pass
            else:
                finished = True
            # do any sending
            data = self.outgoing.read()
            if data:
                self.sock.sendall(data)
            # do any receiving
            if want_read:
                data = self.sock.recv(4096)
                if not data:
                    self.incoming.write_eof()
                else:
                    self.incoming.write(data)
            # then retry if necessary
        return ret

    def do_handshake(self):
        self._retry(self.obj.do_handshake)

    def recv(self, bufsize):
        return self._retry(self.obj.read, bufsize)

    def sendall(self, data):
        self._retry(self.obj.write, data)

    def unwrap(self):
        self._retry(self.obj.unwrap)
        return self.sock

def wrap_socket_via_wrap_bio(ctx, sock, **kwargs):
    return ManuallyWrappedSocket(ctx, sock, **kwargs)

def handle(conn):
    #https://stackoverflow.com/questions/32792333/python-socket-module-connecting-to-an-http-proxy-then-performing-a-get-request
    #To make an HTTP request through a proxy, open a connection to the proxy server
    #and then send an HTTP proxy request. This request is mostly the same as
    #a normal HTTP request, but contains the absolute URL instead of the relative URL
    #conn.sendall(b'CONNECT api.snapcraft.io:80 HTTP/1.1\r\nHost: api.snapcraft.io\r\n\r\n')
    #print(conn.recv(4096).decode())
    #conn.sendall(b'GET / HTTP/1.1\r\nHost: api.snapcraft.io\r\n\r\n')
    #print(conn.recv(4096).decode())
    #To make an HTTPS request, open a tunnel using the CONNECT method and then
    #proceed inside this tunnel normally, that is, do the SSL handshake and
    #then a normal non-proxy request inside the tunnel
    conn.sendall(b'CONNECT api.snapcraft.io:443 HTTP/1.1\r\nHost: api.snapcraft.io\r\n\r\n')
    print(conn.recv(4096).decode())
    conn.sendall(b'GET / HTTP/1.1\r\nHost: api.snapcraft.io\r\n\r\n')
    print(conn.recv(4096).decode())
    #while True:
    #    #import rpdb;rpdb.set_trace()
    #    #it will hang here the second time because the proxy side hasn't sent that much data yet
    #    data = conn.recv(4096)
    #    print(data.decode())
    #    if not data:
    #        break

def main():
    is_https_proxy = True
    sock = socket.socket(socket.AF_INET)
    wrap = None
    if is_https_proxy:
        ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile='/etc/squid/cert/ca.crt')
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  #server side should also use ssl.CERT_NONE
        #ctx.verify_mode = ssl.CERT_REQUIRED
        ctx.load_cert_chain('/etc/squid/cert/quqi.com.crt', '/etc/squid/cert/quqi.com.key')
        #Must use server_hostname to pass SNI if the proxy side supports L4 proxying
        sock = ctx.wrap_socket(sock, server_side=False, server_hostname=HOST)
        #wrap = wrap_socket_via_wrap_bio(ctx, sock, server_side=False, server_hostname=HOST)
    try:
        sock.connect((HOST, PORT))
        if is_https_proxy:
            print("Client requested https-proxy: {}".format(sock.getpeercert()))
        else:
            print("Client requested http-proxy: {}:{}".format(HOST, PORT))
        if wrap:
            handle(wrap)
        else:
            handle(sock)
    finally:
        sock.close()

if __name__ == '__main__':
    main()
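The _retry() pump above exists because with wrap_bio() the SSLObject raises SSLWantReadError whenever it needs bytes that have not yet been fed into the incoming BIO, while anything it wants to transmit accumulates in the outgoing BIO. That mechanism can be observed without any network at all (a standalone sketch; 'quqi.com' here is just an SNI value):

```python
import ssl

inc, out = ssl.MemoryBIO(), ssl.MemoryBIO()
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
obj = ctx.wrap_bio(inc, out, server_hostname='quqi.com')

want_read = False
try:
    obj.do_handshake()        # no server reply has been fed in yet
except ssl.SSLWantReadError:
    want_read = True          # _retry() would now recv() from the raw socket
first_flight = out.read()     # _retry() would sendall() this to the raw socket
print(want_read, first_flight[0] == 0x16)  # True True: a TLS handshake record is pending
```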
client <-> proxy({http|https}://127.0.0.1:7070) <-> target(https://api.snapcraft.io)
When the target is https, the traffic between client and target is https and end-to-end encrypted (both header and payload are encrypted, so the URL in the header is encrypted too; only the SNI is in the clear). If the proxy is a reverse proxy for https, it obviously has to hold the backend's certificate so it can decrypt the client's https data before forwarding it to the backend. But what if the proxy is a forward proxy? Then there are two options (see "Using NGINX as an HTTPS forward proxy server" - https://zhuanlan.zhihu.com/p/70459013?utm_id=0):
1, The proxy works at L7 (HTTP): the client first issues an HTTP CONNECT request to establish an HTTP TUNNEL (socket) from client to proxy. Being an L7 proxy, it can read the target host out of the CONNECT request (conn.sendall(b'CONNECT api.snapcraft.io:443 HTTP/1.1\r\nHost: api.snapcraft.io\r\n\r\n')), and then opens a TCP TUNNEL (socket) to the target. Together these form a tunnel from client to target (HTTP TUNNEL + TCP TUNNEL) that passes the https traffic through transparently (transparently meaning the proxy simply relays bytes between the client socket and the target socket without unwrapping any packets)
2, The proxy works at L4 (TCP): at L4 there is no HTTP CONNECT, so the target host has to be recovered from the SNI (which requires the client to send SNI). nginx stream (ngx_stream_ssl_preread_module) is implemented on exactly this principle. Here is another example of extracting the target host from the SNI (https://github.com/nICEnnnnnnnLee/proxy.git)
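For the L4 case, "recover the target host from the SNI" just means walking the ClientHello record to the server_name extension. Below is a minimal sketch (the ClientHello is synthetic and the parser skips all error handling; ngx_stream_ssl_preread_module does the real thing):

```python
def build_client_hello(server_name):
    """Assemble a minimal, synthetic TLS ClientHello carrying an SNI
    extension (just enough structure for the parser below)."""
    name = server_name.encode('ascii')
    sni_entry = b'\x00' + len(name).to_bytes(2, 'big') + name        # type 0 = host_name
    sni_list = len(sni_entry).to_bytes(2, 'big') + sni_entry
    ext = b'\x00\x00' + len(sni_list).to_bytes(2, 'big') + sni_list  # extension 0 = server_name
    exts = len(ext).to_bytes(2, 'big') + ext
    body = (b'\x03\x03' + bytes(32)      # client_version + random
            + b'\x00'                    # empty session_id
            + b'\x00\x02\x13\x01'        # one cipher suite
            + b'\x01\x00'                # one compression method (null)
            + exts)
    handshake = b'\x01' + len(body).to_bytes(3, 'big') + body        # type 1 = client_hello
    return b'\x16\x03\x01' + len(handshake).to_bytes(2, 'big') + handshake

def extract_sni(record):
    """Walk a TLS ClientHello record and return the server_name, or None."""
    if len(record) < 43 or record[0] != 0x16:
        return None
    pos = 5 + 4 + 2 + 32                 # record hdr + handshake hdr + version + random
    pos += 1 + record[pos]               # session_id
    pos += 2 + int.from_bytes(record[pos:pos + 2], 'big')  # cipher_suites
    pos += 1 + record[pos]               # compression_methods
    end = pos + 2 + int.from_bytes(record[pos:pos + 2], 'big')
    pos += 2
    while pos + 4 <= end:
        ext_type = int.from_bytes(record[pos:pos + 2], 'big')
        ext_len = int.from_bytes(record[pos + 2:pos + 4], 'big')
        pos += 4
        if ext_type == 0:                # server_name: list_len(2) type(1) name_len(2) name
            name_len = int.from_bytes(record[pos + 3:pos + 5], 'big')
            return record[pos + 5:pos + 5 + name_len].decode('ascii')
        pos += ext_len
    return None

print(extract_sni(build_client_hello('api.snapcraft.io')))   # api.snapcraft.io
```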
But the above only works when the proxy itself speaks plain HTTP, because it needs to read the target host from the HTTP CONNECT request, so CONNECT must be in plaintext. What if the proxy is https? In that case:
1, With an https-based proxy, the client-to-proxy connection is no longer a Socket but an SSLSocket, and the https (SSLSocket) traffic then has to run on top of that SSLSocket. Python's ssl library does not support this two-layer tls-over-tls structure (wrap_socket can only wrap a Socket, not an SSLSocket, see: https://bugs.python.org/issue29610), which makes it impossible to issue HTTP CONNECT over the SSLSocket (https://stackoverflow.com/questions/60217507/creating-an-https-proxy-server-in-python)
When that happens, the proxy side reports the following while reading from the client socket: ssl.SSLError: [SSL: HTTPS_PROXY_REQUEST] https proxy request (_ssl.c:997)
Python's ssl is buggy when doing CONNECT on top of tls: https://github.com/urllib3/urllib3/pull/1121#issuecomment-281247652
2, The proxy-to-target leg is still a plain tcp tunnel (Socket), so that side is fine
A fix is mentioned here: https://github.com/urllib3/urllib3/pull/1121#issuecomment-281874055, but it does not seem to work:
>>> import requests
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (5.1.0) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
>>> import requests_httpsproxy
>>> https_proxy = 'https://quqi.com:7070'
>>> sess = requests.Session()
>>> print(sess.get('https://httpbin.org/ip', proxies={'http':https_proxy, 'https':https_proxy}).text)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 412, in send
conn = self.get_connection(request.url, proxies)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 310, in get_connection
conn = proxy_manager.connection_from_url(url)
File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 298, in connection_from_url
return self.connection_from_host(
File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 245, in connection_from_host
return self.connection_from_context(request_context)
File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 258, in connection_from_context
pool_key = pool_key_constructor(request_context)
File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 124, in _default_key_normalizer
return key_class(**context)
But this one works:
import tlslite, socket
# outer TLS session: client <-> https proxy
sock = tlslite.TLSConnection(socket.create_connection(('127.0.0.1', 7070)))
sock.handshakeClientCert()
sock.sendall(b'CONNECT api.snapcraft.io:443 HTTP/1.1\r\nHost: api.snapcraft.io\r\n\r\n')
sock.recv(1024)
# inner TLS session to the target, tunneled inside the outer one
conn = tlslite.TLSConnection(sock)
conn.handshakeClientCert()
conn.sendall(b'GET / HTTP/1.1\r\nHost: api.snapcraft.io\r\n\r\n')
print(conn.recv(4096).decode())
>>> import tlslite, ssl, socket
>>> sock = tlslite.TLSConnection(socket.create_connection(('127.0.0.1', 7070)))
>>> sock.handshakeClientCert()
>>> sock.sendall(bytes('CONNECT api.snapcraft.io:443 HTTP/1.1\r\nHost: api.snapcraft.io:443\r\n\r\n', 'ascii'))
>>> sock.recv(1024)
b'HTTP/1.0 200 Connection Established\r\n\r\n'
>>>
>>> conn = tlslite.TLSConnection(sock)
>>> conn.handshakeClientCert()
>>> conn.sendall(b'GET / HTTP/1.1\r\nHost: api.snapcraft.io\r\n\r\n')
>>> print(conn.recv(1024).decode())
HTTP/1.1 200 OK
server: gunicorn/20.0.4
date: Tue, 27 Dec 2022 08:57:37 GMT
content-type: text/html; charset=utf-8
content-length: 64
snap-store-version: 52
x-view-name: snapdevicegw.webapi.root
x-vcs-revision: c113a817
x-request-id: C6D30429EABE0A83255C01BB63AAB3816E7FE277
snapcraft.io store API service - Copyright 2018-2022 Canonical.
Debugging _tunnel() with the following program shows that once the proxy becomes https, the client-to-proxy connection is an sslsocket; inside _tunnel the client successfully sends CONNECT to the proxy, but then hangs while reading the response back from that sslsocket (some reports online say the CONNECT is still sent in plaintext at this point, so the sslsocket refuses to read it).
#!/usr/bin/env python
# coding=utf-8
import ssl
import socket
from urllib import request
import http.client

class TLSProxyConnection(http.client.HTTPSConnection):
    """Like HTTPSConnection but more specific"""
    def __init__(self, host, **kwargs):
        http.client.HTTPSConnection.__init__(self, host, **kwargs)

    def connect(self):
        print('socket:%s:%s' % (self.host, self.port))
        sock = socket.create_connection((self.host, self.port),
                                        self.timeout, self.source_address)
        if getattr(self, '_tunnel_host', None):
            self.sock = sock
            print(self.sock)
            ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile='/etc/squid/cert/ca.crt')
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            ctx.load_cert_chain('/etc/squid/cert/quqi.com.crt', '/etc/squid/cert/quqi.com.key')
            self.sock = ctx.wrap_socket(sock, server_side=False, server_hostname=self.host)
            print(self.sock)
            #import rpdb;rpdb.set_trace()
            self._tunnel()

class TLSProxyHandler(request.HTTPSHandler):
    def __init__(self, proxies=None):
        request.HTTPSHandler.__init__(self)

    def https_open(self, req):
        return self.do_open(TLSProxyConnection, req)

proxy_handler = request.ProxyHandler({"https": "https://quqi.com:7070"})
opener = request.build_opener(proxy_handler, TLSProxyHandler())
req = request.Request("https://api.snapcraft.io", method="GET")
request.install_opener(opener)
#data = opener.open(request).read()
with opener.open(req) as f:
    print(f.read().decode('utf-8'))
print('ok')
The packet capture is as follows:
sudo tcpdump -ni lo "port 7070" -XXvvvnn -w ssl.cap
$ sudo tshark -r ssl.cap
Running as user "root" and group "root". This could be dangerous.
1 0.000000 127.0.0.1 → 127.0.0.1 TCP 74 56794 → 7070 [SYN] Seq=0 Win=65495 Len=0 MSS=65495 SACK_PERM=1 TSval=4023797308 TSecr=0 WS=128
2 0.000019 127.0.0.1 → 127.0.0.1 TCP 74 7070 → 56794 [SYN, ACK] Seq=0 Ack=1 Win=65483 Len=0 MSS=65495 SACK_PERM=1 TSval=4023797308 TSecr=4023797308 WS=128
3 0.000032 127.0.0.1 → 127.0.0.1 TCP 66 56794 → 7070 [ACK] Seq=1 Ack=1 Win=65536 Len=0 TSval=4023797308 TSecr=4023797308
4 0.002578 127.0.0.1 → 127.0.0.1 TLSv1 583 Client Hello
5 0.002595 127.0.0.1 → 127.0.0.1 TCP 66 7070 → 56794 [ACK] Seq=1 Ack=518 Win=65024 Len=0 TSval=4023797311 TSecr=4023797311
6 0.004416 127.0.0.1 → 127.0.0.1 TLSv1.3 3005 Server Hello, Change Cipher Spec, Application Data, Application Data, Application Data, Application Data
7 0.004456 127.0.0.1 → 127.0.0.1 TCP 66 56794 → 7070 [ACK] Seq=518 Ack=2940 Win=63488 Len=0 TSval=4023797312 TSecr=4023797312
8 0.006229 127.0.0.1 → 127.0.0.1 TLSv1.3 146 Change Cipher Spec, Application Data
9 0.006257 127.0.0.1 → 127.0.0.1 TCP 66 7070 → 56794 [ACK] Seq=2940 Ack=598 Win=65536 Len=0 TSval=4023797314 TSecr=4023797314
10 0.006442 127.0.0.1 → 127.0.0.1 TLSv1.3 321 Application Data
11 0.006455 127.0.0.1 → 127.0.0.1 TCP 66 56794 → 7070 [ACK] Seq=598 Ack=3195 Win=65408 Len=0 TSval=4023797314 TSecr=4023797314
12 0.006519 127.0.0.1 → 127.0.0.1 TLSv1.3 321 Application Data
13 0.006527 127.0.0.1 → 127.0.0.1 TCP 66 56794 → 7070 [ACK] Seq=598 Ack=3450 Win=65280 Len=0 TSval=4023797315 TSecr=4023797315
14 0.006573 127.0.0.1 → 127.0.0.1 TLSv1.3 129 Application Data
15 0.006581 127.0.0.1 → 127.0.0.1 TCP 66 7070 → 56794 [ACK] Seq=3450 Ack=661 Win=65536 Len=0 TSval=4023797315 TSecr=4023797315
16 0.010423 127.0.0.1 → 127.0.0.1 TLSv1.3 127 Application Data
17 0.010435 127.0.0.1 → 127.0.0.1 TCP 66 56794 → 7070 [ACK] Seq=661 Ack=3511 Win=65280 Len=0 TSval=4023797318 TSecr=4023797318
18 0.010552 127.0.0.1 → 127.0.0.1 TLSv1.3 208 Application Data
19 0.010563 127.0.0.1 → 127.0.0.1 TCP 66 7070 → 56794 [ACK] Seq=3511 Ack=803 Win=65408 Len=0 TSval=4023797319 TSecr=4023797319
Appendix - squid.conf
A squid.conf from a production environment:
acl localnet src 127.0.0.0/8
http_access allow localnet
http_access allow localhost manager
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
auth_param basic realm proxy
acl authenticated proxy_auth ${squid_username}
http_access allow authenticated
http_access deny all
include /etc/squid/conf.d/*
http_port ${squid_listen_port}
shutdown_lifetime 5 seconds
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern . 0 20% 4320
error_directory /etc/squid/pages/
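The three numeric columns of refresh_pattern are MIN (minutes), PERCENT, and MAX (minutes). For an object without an explicit expiry, squid's freshness heuristic is roughly the following (a simplified sketch of the LM-factor rule, ignoring options like refresh-ims):

```python
def is_fresh(age_min, lm_age_min, min_m, percent, max_m):
    """Simplified sketch of squid's freshness heuristic for an object
    without an explicit Expires header: stale above MAX, fresh below MIN,
    and in between fresh while age stays under PERCENT of the object's
    last-modified age (the LM-factor)."""
    if age_min > max_m:
        return False
    if age_min <= min_m:
        return True
    return age_min <= lm_age_min * percent / 100

# with "refresh_pattern . 0 20% 4320": an object last modified 10 hours
# before it was cached stays fresh for 20% of that (2 hours)
print(is_fresh(60, 600, 0, 20, 4320))    # True: 1h old, under the 2h LM-factor
print(is_fresh(180, 600, 0, 20, 4320))   # False: 3h old, past the 2h LM-factor
```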
The actual configuration used with MAAS:
#sudo apt-get install openssl libssl-dev ssl-cert squid-openssl -y
acl proxy_manager proto cache_object
acl localnet src 127.0.0.0/8
acl localnet src 192.168.0.0/16
acl localnet src 172.16.0.0/12
acl localnet src 10.0.0.0/8
acl localnet src 240.0.0.0/8
acl localnet src fd42:cb34:c66e:ff16::/64
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 1025-65535 # unregistered ports
acl CONNECT method CONNECT
http_access allow proxy_manager localhost
http_access deny proxy_manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
#http_port 3128 transparent
#http_port 8000
http_port 3128
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern . 0 20% 4320
forwarded_for delete
visible_hostname minipc.lan
cache_mem 512 MB
minimum_object_size 0 MB
maximum_object_size 1024 MB
maximum_object_size_in_memory 100 MB
coredump_dir /var/spool/squid
cache_dir aufs /var/spool/squid 40000 16 256
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
Reference
[1] https://stackoverflow.com/questions/43140360/how-can-i-force-outgoing-ip-for-specific-applications-forcebindip-doesnt-seem
[2] http://xiaorui.cc/2015/04/07/%E5%88%86%E5%B8%83%E5%BC%8F%E7%88%AC%E8%99%AB%E4%B9%8Bpython%E5%8A%A8%E6%80%81%E8%8E%B7%E5%8F%96%E9%9A%8F%E6%9C%BA%E9%80%89%E6%8B%A9%E5%87%BA%E5%8F%A3ip/