These past few days I had some spare time and came across a Taobao architecture diagram via the oldboy (老男孩) instructor, and it caught my interest. I have built CDN and cluster setups in production, but never in an environment this large, so I decided to reproduce a simplified version first. The real Taobao architecture is obviously nothing like mine, but it is still fun to scratch the experimenting itch.
I will post the scripts bit by bit.
The scripts are not rigorous everywhere; please point out anything wrong.
The IPs in the scripts do not quite match the diagram; change and extend them yourself.
Honestly, LVS is the easiest piece to configure; there is not much to it. I rarely use LVS in cluster work because it has no regex routing, but as a layer-4 balancer its strength is carrying and forwarding very large traffic volumes.
- mkdir /usr/local/src/lvs
- cd /usr/local/src/lvs
- wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.24.tar.gz
- wget http://www.keepalived.org/software/keepalived-1.1.15.tar.gz
- lsmod |grep ip_vs
- uname -r
- ln -s /usr/src/kernels/$(uname -r) /usr/src/linux
- tar zxvf ipvsadm-1.24.tar.gz
- cd ipvsadm-1.24
- make && make install
- tar zxvf keepalived-1.1.15.tar.gz
- cd keepalived-1.1.15
- ./configure && make && make install
- cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
- cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
- mkdir /etc/keepalived
- cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
- cp /usr/local/sbin/keepalived /usr/sbin/
- #you can now: service keepalived {start|stop}
- #master
- # write the config keepalived actually reads (not the sample under /usr/local/etc)
- cat > /etc/keepalived/keepalived.conf <<EOF
- ! Configuration File for keepalived
- global_defs {
- notification_email {
- rfyiamcool@163.com
- }
- notification_email_from Alexandre.Cassen@firewall.loc
- smtp_server 127.0.0.1
- router_id LVS_DEVEL
- }
- vrrp_instance VI_1 {
- state MASTER # BACKUP on the standby node
- interface eth0
- virtual_router_id 51
- priority 100 # 90 on the backup
- advert_int 1
- authentication {
- auth_type PASS
- auth_pass 1111
- }
- virtual_ipaddress {
- 10.10.10.88
- }
- }
- virtual_server 10.10.10.88 80 {
- delay_loop 6
- lb_algo rr
- lb_kind DR
- persistence_timeout 50
- protocol TCP
- real_server 10.10.10.21 80 {
- weight 3
- TCP_CHECK {
- connect_timeout 10
- nb_get_retry 3
- delay_before_retry 3
- connect_port 80
- }
- }
- real_server 10.10.10.22 80 {
- weight 3
- TCP_CHECK {
- connect_timeout 10
- nb_get_retry 3
- delay_before_retry 3
- connect_port 80
- }
- }
- }
- EOF
- service keepalived start
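Once keepalived is up on the master, it helps to confirm that the LVS virtual-server table was actually programmed and that the VIP landed on eth0. A minimal check sketch, assuming the VIP 10.10.10.88 and real servers from the config above:

```shell
#!/bin/sh
# Sanity check for the LVS director; run on the keepalived MASTER.
# VIP matches the virtual_ipaddress block above.
VIP=10.10.10.88

if command -v ipvsadm >/dev/null 2>&1; then
    # should show VIP:80 with both real servers (10.10.10.21/22) behind it
    ipvsadm -Ln | grep -A 2 "$VIP:80"
    # the VIP is only bound on whichever node currently holds MASTER
    ip addr show eth0 2>/dev/null | grep -q "$VIP" && echo "VIP active on this node"
    checked=yes
else
    echo "ipvsadm not installed here; run this on the director"
    checked=no
fi
```

On the BACKUP node the VIP line should be absent until a failover happens.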
Let's first sort out the second tier, haproxy; change the IPs to suit your own setup.
- #!/bin/bash
- cd /usr/local/src/
- wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.8.tar.gz
- tar zxf haproxy-1.4.8.tar.gz
- cd haproxy-1.4.8
- uname -a
- make TARGET=linux26 PREFIX=/usr/local/haproxy
- make install PREFIX=/usr/local/haproxy
- cat > /usr/local/haproxy/haproxy.cfg << EOF
- global
- log 127.0.0.1 local0 ###global log target
- maxconn 4096 ###maximum concurrent connections
- chroot /usr/local/haproxy
- uid 501 ###run as this user ID
- gid 501 ###run as this group ID
- daemon ###run in the background
- nbproc 1 ###number of worker processes
- pidfile /usr/local/haproxy/haproxy.pid ###pid file
- defaults
- log 127.0.0.1 local3
- mode http ###protocol mode
- option httplog ###HTTP log format
- option httpclose ###close the connection after each request
- option dontlognull
- option forwardfor ###add X-Forwarded-For so backends can log the real client IP
- option redispatch
- retries 2 ###connection retries
- maxconn 2000
- balance roundrobin ###load-balancing algorithm
- stats uri /haproxy-stats ###statistics page URI
- #stats auth admin:admin ###optional stats user:password
- contimeout 5000 ###connect timeout (ms)
- clitimeout 50000 ###client-side timeout (ms)
- srvtimeout 50000 ###server-side timeout (ms)
- listen proxy *:80 ###listen address and port
- option httpchk HEAD /index.html HTTP/1.0 ###health-check request
- server web1 10.10.10.30:88 cookie app1inst1 check inter 2000 rise 2 fall 5
- server web2 10.10.10.31:88 cookie app1inst2 check inter 2000 rise 2 fall 5
- server web3 10.10.10.32:88 cookie app1inst3 check inter 2000 rise 2 fall 5
- server web4 10.10.10.33:88 cookie app1inst4 check inter 2000 rise 2 fall 5
- server web5 10.10.10.34:88 cookie app1inst5 check inter 2000 rise 2 fall 5
- server web6 10.10.10.35:88 cookie app1inst6 check inter 2000 rise 2 fall 5
- EOF
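Before wiring up the init script it is worth parsing the config we just wrote; `haproxy -c` only checks the syntax and does not bind any ports. A small guard sketch, with paths matching the build above:

```shell
#!/bin/sh
# parse-check the generated config without starting the proxy
HAPROXY=/usr/local/haproxy/sbin/haproxy
CFG=/usr/local/haproxy/haproxy.cfg

if [ -x "$HAPROXY" ] && [ -f "$CFG" ]; then
    "$HAPROXY" -c -f "$CFG" && echo "haproxy.cfg is valid"
    result=checked
else
    echo "haproxy not installed at $HAPROXY; run this on the proxy node"
    result=skipped
fi
```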
- cat > /etc/init.d/haproxy << EOF
- #! /bin/sh
- set -e
- PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/haproxy/sbin
- PROGDIR=/usr/local/haproxy
- PROGNAME=haproxy
- DAEMON=\$PROGDIR/sbin/\$PROGNAME
- CONFIG=\$PROGDIR/\$PROGNAME.cfg
- PIDFILE=\$PROGDIR/\$PROGNAME.pid
- DESC="HAProxy daemon"
- SCRIPTNAME=/etc/init.d/\$PROGNAME
- # Gracefully exit if the package has been removed.
- test -x \$DAEMON || exit 0
- start()
- {
- echo -n "Starting \$DESC: \$PROGNAME"
- \$DAEMON -f \$CONFIG
- echo "."
- }
- stop()
- {
- echo -n "Stopping \$DESC: \$PROGNAME"
- haproxy_pid=\$(cat \$PIDFILE)
- kill \$haproxy_pid
- echo "."
- }
- restart()
- {
- echo -n "Restarting \$DESC: \$PROGNAME"
- \$DAEMON -f \$CONFIG -p \$PIDFILE -sf \$(cat \$PIDFILE)
- echo "."
- }
- case "\$1" in
- start)
- start
- ;;
- stop)
- stop
- ;;
- restart)
- restart
- ;;
- *)
- echo "Usage: \$SCRIPTNAME {start|stop|restart}" >&2
- exit 1
- ;;
- esac
- exit 0
- EOF
- chmod +x /etc/init.d/haproxy
- chkconfig --add haproxy
- chmod 777 /usr/local/haproxy/haproxy.pid
- sed -i '/SYSLOGD_OPTIONS/c\SYSLOGD_OPTIONS="-r -m 0"' /etc/sysconfig/syslog
- echo "local3.* /var/log/haproxy.log" >> /etc/syslog.conf
- echo "local0.* /var/log/haproxy.log" >> /etc/syslog.conf
- service syslog restart
- #start haproxy
- # /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg
- #restart haproxy (-st kills the old process immediately; use -sf for a graceful handover)
- # /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg -st `cat /usr/local/haproxy/haproxy.pid`
- #stop haproxy
- # killall haproxy
- # service haproxy {start|restart|stop}
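With `stats uri /haproxy-stats` enabled, each proxy exposes a live status page showing per-server health. A quick probe sketch; 10.10.10.30 stands in for one of the haproxy front ends, so adjust it to your address:

```shell
#!/bin/sh
# fetch the HAProxy statistics page and report the HTTP status code
STATS_URL="http://10.10.10.30/haproxy-stats"   # hypothetical front-end address

if command -v curl >/dev/null 2>&1; then
    code=$(curl -s -o /dev/null --max-time 3 -w '%{http_code}' "$STATS_URL")
    echo "stats page returned HTTP $code"      # 200 when haproxy is reachable
else
    echo "curl not available"
fi
probed=yes
```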
haproxy can also route on the hostname. ACL examples matching on the Host header look like this:
- acl url_aaa hdr_dom(host) www.aaa.com
- acl url_bbb hdr_dom(host) www.bbb.com
- acl tm_policy hdr_dom(host) -i trade.gemini.taobao.net
- acl denali_policy hdr_reg(host) -i ^(my.gemini.taobao.net|auction1.gemini.taobao.net)$
- acl path_url163 path_beg -i /163
- acl path_url_bbb path_beg -i /
- use_backend aaa if url_aaa
- use_backend bbb if url_bbb
- use_backend url163 if url_aaa path_url163
- backend url163
- mode http
- balance roundrobin
- option httpchk GET /163/test.jsp
- server url163 10.10.10.31:8080 cookie 1 check inter 2000 rise 3 fall 3 maxconn 50000
- backend aaa
- mode http
- balance roundrobin
- option httpchk GET /test.jsp
- server app_8080 10.10.10.32:8080 cookie 1 check inter 1500 rise 3 fall 3 maxconn 50000
- backend bbb
- mode http
- balance roundrobin
- option httpchk GET /test.jsp
- server app_8080 10.10.10.33:8090 cookie 1 check inter 1500 rise 3 fall 3 maxconn 50000
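You can exercise these Host-header ACLs without touching DNS by sending requests straight to the proxy with an explicit Host header and watching which backend answers. A sketch; the proxy address is hypothetical:

```shell
#!/bin/sh
# route-test the hdr_dom(host) / path_beg ACLs with explicit Host headers
PROXY=10.10.10.30    # hypothetical haproxy listen address

if command -v curl >/dev/null 2>&1; then
    for h in www.aaa.com www.bbb.com; do
        # which backend answered is visible in the haproxy logs / stats page
        curl -s --max-time 3 -o /dev/null -w "$h -> HTTP %{http_code}\n" \
             -H "Host: $h" "http://$PROXY/"
    done
    # path_beg /163 on top of url_aaa should select the url163 backend
    curl -s --max-time 3 -o /dev/null -w "www.aaa.com/163 -> HTTP %{http_code}\n" \
         -H "Host: www.aaa.com" "http://$PROXY/163/test.jsp"
fi
tested=yes
```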
The haproxy machines also act as LVS real servers (DR mode), so the VIP has to be bound on the loopback interface:
- #!/bin/bash
- SNS_VIP=10.10.10.88
- source /etc/rc.d/init.d/functions
- case "$1" in
- start)
- ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
- /sbin/route add -host $SNS_VIP dev lo:0
- echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
- echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
- echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
- echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
- echo "RealServer Start OK"
- ;;
- stop)
- ifconfig lo:0 down
- route del $SNS_VIP >/dev/null 2>&1
- echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
- echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
- echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
- echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
- echo "RealServer Stoped"
- ;;
- *)
- echo "Usage: $0 {start|stop}"
- exit 1
- esac
- exit 0
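After running the script with `start`, the VIP should sit on `lo:0` and the four ARP sysctls should read 1/2; otherwise the real server will answer ARP for the VIP and break DR mode. A quick verification sketch:

```shell
#!/bin/sh
# print the ARP suppression sysctls and check the VIP binding on lo
VIP=10.10.10.88

for f in lo/arp_ignore lo/arp_announce all/arp_ignore all/arp_announce; do
    p=/proc/sys/net/ipv4/conf/$f
    [ -r "$p" ] && echo "$f = $(cat "$p")"   # expect ignore=1, announce=2 after "start"
done

if command -v ip >/dev/null 2>&1; then
    if ip addr show lo | grep -q "$VIP"; then
        echo "VIP $VIP bound on lo"
    else
        echo "VIP $VIP not bound (did you run start?)"
    fi
fi
verified=yes
```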
Below is the squid setup.
- #!/bin/bash
- wget http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE6.tar.bz2
- tar jxvf squid-2.6.STABLE6.tar.bz2
- cd squid-2.6.STABLE6
- ./configure --prefix=/usr/local/squid \
- --enable-async-io=320 \
- --enable-storeio="aufs,diskd,ufs" \
- --enable-useragent-log \
- --enable-referer-log \
- --enable-kill-parent-hack \
- --enable-forward-log \
- --enable-snmp \
- --enable-cache-digests \
- --enable-default-err-language=Simplify_Chinese \
- --enable-epoll \
- --enable-removal-policies="heap,lru" \
- --enable-large-cache-files \
- --disable-internal-dns \
- --enable-x-accelerator-vary \
- --enable-follow-x-forwarded-for \
- --disable-ident-lookups \
- --with-large-files \
- --with-filedescriptors=65536
- make && make install
- cat >> /usr/local/squid/etc/squid.conf <<EOF
- visible_hostname cache1.taobao.com
- http_port 192.168.1.44:80 vhost vport
- icp_port 0
- cache_mem 512 MB
- cache_swap_low 90
- cache_swap_high 95
- maximum_object_size 20000 KB
- maximum_object_size_in_memory 4096 KB
- cache_dir ufs /tmp1 3000 32 256
- cache_store_log none
- emulate_httpd_log on
- refresh_pattern ^ftp: 1440 20% 10080
- refresh_pattern ^gopher: 1440 0% 1440
- refresh_pattern . 0 20% 4320
- negative_ttl 5 minutes
- positive_dns_ttl 6 hours
- negative_dns_ttl 1 minute
- connect_timeout 1 minute
- read_timeout 15 minutes
- request_timeout 5 minutes
- client_lifetime 1 day
- half_closed_clients on
- maximum_single_addr_tries 1
- uri_whitespace strip
- ie_refresh off
- logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
- pid_filename /var/log/squid/squid.pid
- cache_log /var/log/squid/cache.log
- access_log /var/log/squid/access.log combined
- acl all src 0.0.0.0/0.0.0.0
- acl QUERY urlpath_regex cgi-bin .php .cgi .avi .wmv .rm .ram .mpg .mpeg .zip .exe
- cache deny QUERY
- acl picurl url_regex -i \.bmp$ \.png$ \.jpg$ \.gif$ \.jpeg$
- acl mystie1 referer_regex -i aaa
- http_access allow mystie1 picurl
- acl mystie2 referer_regex -i bbb
- http_access allow mystie2 picurl
- acl nullref referer_regex -i ^$
- http_access allow nullref
- acl hasref referer_regex -i .+
- http_access deny hasref picurl
- cache_peer 192.168.1.7 parent 80 0 no-query originserver no-digest name=all
- cache_peer_domain all *.taobao.com
- cache_effective_user nobody
- cache_effective_group nobody
- acl localhost src 127.0.0.1
- acl my_other_proxy srcdomain .a.com
- follow_x_forwarded_for allow localhost
- follow_x_forwarded_for allow all #trust the forwarded client-IP header
- acl_uses_indirect_client on #squid 2.6+ only
- delay_pool_uses_indirect_client on #squid 2.6+ only
- log_uses_indirect_client on #squid 2.6+ only
- #refresh_pattern ^ftp: 60 20% 10080
- #refresh_pattern ^gopher: 60 0% 1440
- #refresh_pattern . 0 20% 1440
- refresh_pattern -i \.js$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.html$ 720 50% 1440 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.jpg$ 1440 90% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.gif$ 1440 90% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.swf$ 1440 90% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.png$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.bmp$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.doc$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.ppt$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.xls$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.pdf$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.rar$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.zip$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- refresh_pattern -i \.txt$ 1440 50% 2880 reload-into-ims ignore-reload ignore-no-cache ignore-auth ignore-private
- EOF
- #create the cache and log directories and make them writable by squid
- mkdir /tmp1
- mkdir /var/log/squid
- chown -R nobody:nobody /tmp1
- chmod 755 /tmp1 # directories need the execute bit; 666 would block traversal
- chown -R nobody:nobody /var/log/squid
- #on the first run, initialize the cache directories
- /usr/local/squid/sbin/squid -z
- #raise the fd limit and start squid
- echo "65535" > /proc/sys/fs/file-max
- ulimit -HSn 65535
- /usr/local/squid/sbin/squid
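To confirm the cache actually works, request the same object twice through squid: the first response should be a MISS, the second a HIT, visible in the X-Cache header. A probe sketch; the address matches http_port above, and the object URL is hypothetical:

```shell
#!/bin/sh
# fetch the same object twice and look at the X-Cache header (MISS then HIT)
SQUID=192.168.1.44                        # http_port address from squid.conf
URL="http://$SQUID/images/logo.gif"       # hypothetical cached object

if command -v curl >/dev/null 2>&1; then
    for i in 1 2; do
        # Host must match a cache_peer_domain entry so squid forwards upstream
        curl -s --max-time 3 -o /dev/null -D - -H "Host: www.taobao.com" "$URL" \
            | grep -i '^X-Cache' || echo "request $i: no X-Cache header (squid unreachable?)"
    done
fi
probe=done
```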
The cache-purge script came from 洒哥.
I only extended it slightly: his original could purge by domain or by a specific file extension, and I wanted to also purge, say, every jpg on one site, or everything under a path like www.92hezu.com/123/bbb/. It turned out that tacking a few more grep filters onto the pipeline was enough. Usage:
- qingli.sh www.xiuxiukan.com
- qingli.sh jpg
- qingli.sh xiuxiukan.com 123 bbb jpg
- #!/bin/sh
- squidcache_path="/squidcache"
- squidclient_path="/usr/local/squid/bin/squidclient" # matches the install prefix above
- #grep -a -r $1 $squidcache_path/* | grep "http:" | awk -F 'http:' '{print "http:"$2;}' | awk -F\' '{print $1}' > cache.txt
- if [[ "$1" == "swf" || "$1" == "png" || "$1" == "jpg" || "$1" == "ico" || "$1" == "gif" || "$1" == "css" || "$1" == "js" || "$1" == "html" || "$1" == "shtml" || "$1" == "htm" ]]; then
- grep -a -r .$1 $squidcache_path/* | strings | grep "http:" | awk -F 'http:' '{print "http:"$2;}' | awk -F\' '{print $1}' | grep "$1$" | uniq > cache.txt
- else
- # unused trailing arguments expand to the bare pattern "$", which matches every line
- grep -a -r "$1" $squidcache_path/* | strings | grep "http:" | grep "$2$" | grep "$3$" | grep "$4$" | grep "$5$" | grep "$6$" | awk -F 'http:' '{print "http:"$2;}' | awk -F\' '{print $1}' | uniq > cache.txt
- fi
- sed -i "s/\";$//g" cache.txt
- cat cache.txt | while read LINE
- do
- $squidclient_path -p 80 -m PURGE $LINE
- done
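The loop above is just a batch of single purges; for a one-off URL you can call squidclient directly. A sketch, assuming squidclient lives under the install prefix used earlier (the URL is hypothetical):

```shell
#!/bin/sh
# purge one object from the cache: 200 means purged, 404 means it was not cached
SQUIDCLIENT=/usr/local/squid/bin/squidclient
URL="http://www.xiuxiukan.com/123/bbb/pic.jpg"   # hypothetical object

if [ -x "$SQUIDCLIENT" ]; then
    "$SQUIDCLIENT" -p 80 -m PURGE "$URL"
else
    echo "squidclient not found at $SQUIDCLIENT"
fi
shown=yes
```

Note that squid only honours PURGE from clients matching a purge-method ACL (`acl purge method PURGE` plus an `http_access allow` line); add one to squid.conf if purges come back denied.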