With user numbers growing, and my code not being particularly well optimized, I was worried that once traffic reached a million users a day the single instance would fall over. The client team had also been pushing me to use load balancing, so I sat down and learned it. It turns out that setting up load balancing is actually quite simple; I used nginx to do it. The steps are as follows:
I downloaded nginx-1.12.2.
After downloading, unzip it and open the nginx.conf file under the conf directory.
The file's contents are shown below, already with my modifications:
#user  nobody;
worker_processes  4;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream netitcast.com {
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
    }

    server {
        listen       8081;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            proxy_pass http://netitcast.com;
            proxy_redirect default;
        }

        #error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root  html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass  http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
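For completeness, here is a minimal sketch of what the two backend instances on ports 8082 and 8083 could look like, using Python's standard-library HTTP server. The real application behind those ports is not shown in this post, so treat the handler body as a placeholder; the only point is that each instance answers on its own port.

```python
# Minimal stand-ins for the two app instances on ports 8082 and 8083.
# Hypothetical sketch -- the real application is not shown in this post.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

def make_handler(port):
    """Build a handler class whose responses identify the serving port."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = f"served by instance on port {port}".encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # silence per-request console logging

    return Handler

def start_instance(port):
    """Start one backend instance on 127.0.0.1:port in a daemon thread."""
    server = HTTPServer(("127.0.0.1", port), make_handler(port))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    for port in (8082, 8083):
        start_instance(port)
    with urllib.request.urlopen("http://127.0.0.1:8082/") as resp:
        print(resp.read().decode())  # -> served by instance on port 8082
```

With both instances up, nginx on port 8081 has live backends to forward to.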
The file above is complete and can be copied and used directly; in fact only a few things were changed:
1. I changed worker_processes from its default of 1 to 4.
2. Here:
server {
    listen       8081;
    server_name  localhost;

    #charset koi8-r;
I changed the listen port to 8081, meaning outside requests come in on port 8081 and are then dispatched to the instances running on the other ports.
3.
upstream netitcast.com {
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}
This block is the one I added. The upstream name netitcast.com corresponds to the one referenced in
location / {
    proxy_pass http://netitcast.com;
    proxy_redirect default;
}
i.e. the http://netitcast.com in proxy_pass; if you change the name, you must change it in both places.
That's all there is to it: once a request hits port 8081, nginx hands it off to the instances on ports 8082 and 8083. With this configuration the first request goes to the instance on port 8082, the next to the instance on port 8083, and so on in rotation (nginx's default round-robin policy).
Finally, start nginx: double-clicking nginx.exe is enough. Once it's running, start the instances on ports 8082 and 8083.
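Besides double-clicking nginx.exe, the server can also be managed from a command prompt opened in the directory containing nginx.exe; these are nginx's standard command-line switches:

```shell
nginx -t          # check nginx.conf for syntax errors before starting
start nginx       # start nginx in the background (Windows)
nginx -s reload   # re-read nginx.conf without dropping connections
nginx -s stop     # shut the server down
```

Running nginx -t first is a cheap way to catch a typo in the upstream or server blocks before it takes the front end offline.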
Now two instances are running, and with each instance handling a few hundred thousand users a day, the load should basically be fine.
The result is shown below: