Installing and Configuring nginx for a CAS Cluster Deployment

These installation notes cover configuring an SSL certificate and the sticky module for nginx session persistence.

Environment check

Before installing nginx, confirm that gcc, pcre-devel, zlib-devel, and openssl-devel are installed on the system:

  1. Packages installed via rpm can be listed with rpm -qa; to check whether a specific package is installed, use rpm -qa | grep "package name"
  2. Packages installed via deb can be listed with dpkg -l; for a specific package, use dpkg -l | grep "package name"
  3. Packages installed via yum can be listed with yum list installed; for a specific package, use yum list installed | grep "package name"
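The checks above can be sketched as a small shell script. This assumes an rpm-based system (matching the yum install command below); the package list comes from this article, and on Debian/Ubuntu you would swap the query for dpkg -l:

```shell
# Check build prerequisites for nginx on an rpm-based system.
# On Debian/Ubuntu, use: dpkg -l | grep "<package>" instead of rpm -qa.
check_pkg() {
    if rpm -qa 2>/dev/null | grep -q "^$1"; then
        echo "$1: installed"
    else
        echo "$1: MISSING"
    fi
}

for pkg in gcc pcre-devel zlib-devel openssl-devel; do
    check_pkg "$pkg"
done
```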

Install command

yum -y install gcc pcre-devel zlib-devel openssl openssl-devel

Download the nginx source

nginx download page: https://nginx.org/download/

cd /opt/gdsapp/cluster/
tar -zxvf nginx-1.13.11.tar.gz

Download the nginx-sticky-module

Download: https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/get/08a395c66e42.zip

cd /opt/gdsapp/cluster/tools
unzip nginx-goodies-nginx-sticky-module-ng-08a395c66e42.zip 
mv nginx-goodies-nginx-sticky-module-ng-08a395c66e42 nginx-sticky-module-ng

Installation

1. Enter the nginx source directory
cd /opt/gdsapp/cluster/nginx-1.13.11
2. Configure the nginx install prefix and modules
./configure --prefix=/opt/gdsapp/cluster/nginx-1-13-11 --with-http_stub_status_module --with-http_ssl_module --add-module=/opt/gdsapp/cluster/tools/nginx-sticky-module-ng
3. Compile and install nginx
make
make install
4. Create the tmp/www directory

The installed nginx tree has no tmp directory (referenced by proxy_temp_path in the configuration below), so create it manually under /opt/gdsapp/cluster/nginx-1-13-11:

mkdir -p /opt/gdsapp/cluster/nginx-1-13-11/tmp/www

5. Edit nginx.conf

Open /opt/gdsapp/cluster/nginx-1-13-11/conf/nginx.conf and replace its contents with the configuration below (note: this configuration targets nginx 1.13.11; for other versions, use it as a reference and adapt as needed).


#user  nobody;
#Number of worker processes; usually set according to the number of CPU cores, often a multiple of it (e.g. two quad-core CPUs -> 8).
worker_processes  4;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
error_log  logs/error.log  info;
pid        logs/nginx.pid;
#Maximum number of file descriptors an nginx worker may open; best kept in line with the value of ulimit -n
worker_rlimit_nofile 65535;

events {
	#Use the epoll I/O event model
	use epoll;
	#Maximum connections per worker; in theory a server's limit is worker_processes*worker_connections
    worker_connections  65535;
	accept_mutex on;
    multi_accept on;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
	server_names_hash_bucket_size 128;
    server_names_hash_max_size 512;
	#Buffer size for client request headers. A single request header is normally under 1k, but since the system page size is usually at least 1k, the page size is a common choice (query it with: getconf PAGESIZE).
    client_header_buffer_size 2048k;
    large_client_header_buffers 8 256k;
    client_max_body_size 1000m;
    client_header_timeout 60s;
    client_body_timeout 60s;
    client_body_buffer_size 512k;

    ##proxy cache settings##
    proxy_connect_timeout 3;  
    proxy_read_timeout 60;  
    proxy_send_timeout 5;  
    proxy_buffer_size 128k;  
    proxy_buffers 4 64k;  
    proxy_busy_buffers_size 128k;  
    proxy_temp_file_write_size 128k; 
    
    #Local directories for nginx's proxy temp files and cache
    proxy_temp_path  tmp/www;  
    proxy_cache_path tmp/cache_cas levels=1:2 keys_zone=cache_cas:200m inactive=1d max_size=10g;
	
    ##end##  

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
	#Log format
	log_format main
                 '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent $http_x_forwarded_for '
                 'upstream_addr:$upstream_addr '
                 'req_body:$request_body '
                 'request_time:$request_time';
				 
    access_log  logs/access.log  main;

    sendfile        on;
    tcp_nopush     on;
	#keepalive timeout.
    keepalive_timeout  65;
	keepalive_requests 50000;
	
	send_timeout 15;
	
	tcp_nodelay on;
	
    gzip  on;
    #Minimum response size to compress, read from the Content-Length header. The default 0 compresses everything; setting it above 1k is recommended, since compressing smaller responses can make them larger.
    gzip_min_length 1k;
    #Compression buffer size
    gzip_buffers 4 32k;
    #HTTP version required for compression
    gzip_http_version 1.1;
    #Compression level
    gzip_comp_level 9;
    #MIME types to compress
    gzip_types text/plain application/x-javascript text/css application/xml;
    #Vary header support
    gzip_vary on;
    #Hide the nginx version number
    server_tokens off;
	
	map $http_upgrade $connection_upgrade {
		default upgrade;
		'' close;
	}
	
	#Load-balancing strategies
	#sticky: cookie-based session affinity (this nginx module)
	#ip_hash: hash of the client IP (use case: session consistency)
	#url_hash: third-party module (use case: static resource caching; saves storage and speeds up cache hits)
	#least_conn: fewest active connections
	#least_time: lowest average response time; the fastest-responding node is weighted higher
	upstream cas {
		sticky;
		server 192.168.4.1:8443 weight=2 max_fails=3 fail_timeout=30s;
		server 192.168.4.2:6443 weight=2 max_fails=3 fail_timeout=30s;
    }
	
	# HTTPS server
	server {
		listen 443 ssl;
		server_name localhost;
		ssl on;
		ssl_certificate /opt/gdsapp/cluster/nginx-1-13-11/cacerts/xxx.com.pem;
		ssl_certificate_key /opt/gdsapp/cluster/nginx-1-13-11/cacerts/xxx.com.key;
		location ~ ^/(images|javascript|js|css|flash|media|static|jpg|jpeg|png|ico|map|json)/ {
			proxy_pass https://cas;
			proxy_redirect off;
			proxy_cache_valid 200 302 404 202 30d;
			proxy_cache_valid any 5m;
			proxy_cache cache_cas;
			expires 360d;
		}
        location / {
			proxy_pass https://cas;
			proxy_set_header   Host   $host:$server_port;
			proxy_set_header   X-Real-IP  $remote_addr;
			proxy_set_header   X-Forwarded-For	$proxy_add_x_forwarded_for;
			proxy_ssl_session_reuse  off;
        }
    }

}
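To make the strategy comments in the upstream block above concrete, here is a toy shell sketch of what hash-based affinity (ip_hash style) does. It is an illustration only, not nginx's actual algorithm; the backend addresses mirror the article's example:

```shell
# Toy illustration of hash-based affinity: the same client IP always
# maps to the same backend, so the session stays on one node.
pick_backend() {
    # Hash the client IP and pick one of the two upstream servers.
    hash=$(printf '%s' "$1" | cksum | awk '{print $1}')
    if [ $((hash % 2)) -eq 0 ]; then
        echo "192.168.4.1:8443"
    else
        echo "192.168.4.2:6443"
    fi
}

pick_backend "10.0.0.1"    # always the same backend for this IP
```

The sticky module achieves the same pinning with a cookie instead of the client IP, which behaves better when many clients share one NAT address.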

*Note: the following settings in nginx.conf must be adjusted manually to match your environment (backend server addresses and SSL certificate paths):

	upstream cas {
		sticky;
		server 192.168.4.1:8443 weight=2 max_fails=3 fail_timeout=30s;
		server 192.168.4.2:6443 weight=2 max_fails=3 fail_timeout=30s;
    }
	
	# HTTPS server
	server {
		ssl_certificate /opt/gdsapp/cluster/nginx-1-13-11/cacerts/xxx.com.pem;
		ssl_certificate_key /opt/gdsapp/cluster/nginx-1-13-11/cacerts/xxx.com.key;
    }
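If the sticky module cannot be built, a cookie-free fallback for session affinity is nginx's built-in ip_hash directive. This is a sketch reusing the article's backend addresses; note that with ip_hash, all clients behind one NAT land on the same node:

```nginx
	upstream cas {
		ip_hash;
		server 192.168.4.1:8443 weight=2 max_fails=3 fail_timeout=30s;
		server 192.168.4.2:6443 weight=2 max_fails=3 fail_timeout=30s;
	}
```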
6. Start nginx

cd /opt/gdsapp/cluster/nginx-1-13-11/sbin
Start, stop, and reload commands:
./nginx            (start)
./nginx -s stop    (stop)
./nginx -s reload  (reload the configuration)
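The commands above can be wrapped in a small helper that validates the configuration before reloading (a sketch; NGINX_HOME defaults to the install prefix used in this article):

```shell
# Control helper: validates the config with `nginx -t` before a live reload,
# so a bad edit does not take down a running server.
nginx_ctl() {
    nginx_bin="${NGINX_HOME:-/opt/gdsapp/cluster/nginx-1-13-11}/sbin/nginx"
    case "$1" in
        start)  "$nginx_bin" ;;
        stop)   "$nginx_bin" -s stop ;;
        reload) "$nginx_bin" -t && "$nginx_bin" -s reload ;;
        *)      echo "usage: nginx_ctl {start|stop|reload}"; return 1 ;;
    esac
}
```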

