FastDFS Clustering and Load Balancing, the Painful Way (Part 5): Configuring a Reverse Proxy on the Trackers

###Interesting things

Continuing from the previous post.

###What did you do today

  • We need to configure a reverse proxy service on tracker1 and tracker2. You will surely ask: what is a reverse proxy?

A reverse proxy is a proxy server that accepts connection requests from the internet, forwards them to servers on an internal network, and returns those servers' responses to the requesting clients. To the outside world, the proxy server itself appears to be the origin server.

  • On tracker1 and tracker2, extract ngx_cache_purge-2.3.tar.gz into /usr/local/fast/. Command: tar -zxvf ngx_cache_purge-2.3.tar.gz -C /usr/local/fast/

  • You should now see a new ngx_cache_purge-2.3 directory under /usr/local/fast/.

  • Install the build dependencies (all four can go in one command): yum install pcre pcre-devel zlib zlib-devel

  • Extract nginx-1.6.2.tar.gz into /usr/local/. Command: tar -zxvf nginx-1.6.2.tar.gz -C /usr/local/

  • Enter the source directory: cd /usr/local/nginx-1.6.2/

  • Add the ngx_cache_purge-2.3 module and run the configure checks. Command: ./configure --add-module=/usr/local/fast/ngx_cache_purge-2.3/

  • Then the usual: make && make install to compile and install nginx.
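nginx 1.6.x has no dynamic modules, so ngx_cache_purge must be compiled into the binary. After make install you can confirm this from the configure arguments embedded in the binary; a minimal check, assuming the default /usr/local/nginx install prefix from the build above:

```shell
# Confirm the purge module was compiled into the nginx binary.
NGINX_BIN=/usr/local/nginx/sbin/nginx
if [ -x "$NGINX_BIN" ]; then
    # `nginx -V` prints the version and configure arguments (on stderr);
    # the --add-module path should appear among them.
    MODULES=$("$NGINX_BIN" -V 2>&1)
else
    MODULES="nginx not installed yet at $NGINX_BIN"
fi
echo "$MODULES"
```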

  • Go to /usr/local/nginx/conf/, open nginx.conf, and configure the reverse proxy:

#user  nobody;
worker_processes  1;

error_log  /usr/local/nginx/logs/error.log  info;

pid        /usr/local/nginx/logs/nginx.pid;


events {
    worker_connections  1024;
    use epoll;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /usr/local/nginx/logs/access.log  main;

    sendfile        on;
    tcp_nopush      on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;
    server_names_hash_bucket_size   128;
    client_header_buffer_size       32k;
    large_client_header_buffers     4 32k;
    client_max_body_size            300m;

    proxy_redirect          off;
    proxy_set_header        Host    $http_host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout   90;
    proxy_send_timeout      90;
    proxy_read_timeout      90;
    proxy_buffer_size       16k;
    proxy_buffers           4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;

    proxy_cache_path /fastdfs/cache/nginx/proxy_cache levels=1:2
                     keys_zone=http-cache:200m max_size=1g inactive=30d;
    proxy_temp_path  /fastdfs/cache/nginx/proxy_cache/tmp;

    upstream fdfs_group1 {
        server 192.168.12.33:8888 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.12.44:8888 weight=1 max_fails=2 fail_timeout=30s;
    }

    upstream fdfs_group2 {
        server 192.168.12.55:8888 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.12.66:8888 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {

        listen      8000;
        server_name  localhost;

        #charset koi8-r;

        access_log  /usr/local/nginx/logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        location /group1/M00 {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_cache http-cache;
                proxy_cache_valid 200 304 12h;
                proxy_cache_key $uri$is_args$args;
                proxy_pass http://fdfs_group1;
                expires 30d;

        }

        location /group2/M00 {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_cache http-cache;
                proxy_cache_valid 200 304 12h;
                proxy_cache_key $uri$is_args$args;
                proxy_pass http://fdfs_group2;
                expires 30d;
        }

        location ~ /purge(/.*) {
                allow 127.0.0.1;
                allow 192.168.12.0/24;
                deny all;
                proxy_cache_purge http-cache $1$is_args$args;

        }

       # location ~/group([0-9])/M00 {  
            # ngx_fastdfs_module;   
        # }

        #error_page  404              /404.html;

        #redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;

        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}
  • Create /fastdfs/cache/nginx/proxy_cache and /fastdfs/cache/nginx/proxy_cache/tmp. proxy_cache_path and proxy_temp_path point at these paths, so the directories must exist before nginx starts.
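Since the temp path nests inside the cache path, both directories can be created with one mkdir call. A sketch (the chown to nobody is an assumption; match it to the `user` directive in your nginx.conf):

```shell
# -p creates the whole chain of parents, so this covers both paths at once.
mkdir -p /fastdfs/cache/nginx/proxy_cache/tmp
# Hand the tree to the nginx worker user; `nobody` is an assumption here,
# match it to the `user` directive in nginx.conf.
chown -R nobody /fastdfs/cache/nginx 2>/dev/null || true
ls -ld /fastdfs/cache/nginx/proxy_cache/tmp
```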

  • tracker1 and tracker2 listen on port 8000, so open port 8000 in the firewall: -A INPUT -p tcp -m state --state NEW -m tcp --dport 8000 -j ACCEPT. Then restart the firewall so the rule takes effect.
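On CentOS 6 that rule goes into /etc/sysconfig/iptables and must sit above the final REJECT rule, or it never matches. A sketch against a copy of the file (the two sample rules are placeholders standing in for your real ruleset):

```shell
# Work on a copy of the rules file; on a real tracker you would edit
# /etc/sysconfig/iptables in place and run `service iptables restart` afterwards.
RULES=$(mktemp)
cat > "$RULES" <<'EOF'
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
EOF
# Insert the port-8000 rule just before the catch-all REJECT.
sed -i '/-j REJECT/i\-A INPUT -p tcp -m state --state NEW -m tcp --dport 8000 -j ACCEPT' "$RULES"
cat "$RULES"
```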

  • Start nginx on tracker1 and tracker2. Command: /usr/local/nginx/sbin/nginx
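It's worth validating the configuration before starting; `nginx -t` catches syntax errors and also complains if the proxy_cache_path directories are missing. A sketch, assuming the default /usr/local/nginx prefix:

```shell
NGINX=/usr/local/nginx/sbin/nginx
if [ -x "$NGINX" ]; then
    # -t validates nginx.conf; only start the daemon if the check passes.
    "$NGINX" -t && "$NGINX" && RESULT="nginx started"
else
    RESULT="nginx binary not found at $NGINX"
fi
echo "${RESULT:-nginx failed to start}"
```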

  • Upload two images through tracker1; one ends up stored in group1 and the other in group2.

  • Both images can now be accessed through port 8000 on tracker1 (192.168.12.11) and tracker2 (192.168.12.22).

  • Visit http://192.168.12.11:8000/group1/M00/00/00/wKgMIVpEgoSAcs8VAADRd6mMX3g514.jpg

  • Visit http://192.168.12.22:8000/group2/M00/00/00/wKgMQlpEgoaABUrWAADRd6mMX3g168.jpg
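To check whether a request was actually served through the proxy cache, inspect the response headers with curl. Note that nginx only exposes the cache status if you add `add_header X-Cache $upstream_cache_status;` to the location blocks; that directive is an optional addition, not part of the config above. A sketch using one of the test image URLs (the host is only reachable inside the lab network):

```shell
URL="http://192.168.12.11:8000/group1/M00/00/00/wKgMIVpEgoSAcs8VAADRd6mMX3g514.jpg"
# -s silences the progress meter, -I asks for headers only,
# and a short timeout keeps the probe quick when the host is unreachable.
HEADERS=$(curl -s -I --connect-timeout 3 "$URL" || echo "tracker not reachable from this host")
echo "$HEADERS"
```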

###Summary

When nginx serves traffic directly, it can still go down on its own. To survive that, we need an nginx + keepalived setup that makes the nginx layer highly available; that is the topic of the next post.
