Adding multiple health check URLs in OpenResty

Most tutorials only show a single health check example, but what many users actually need is several upstreams, each with its own health check URL, and the tutorials don't explain how to configure that.

The typical tutorial's health check configuration:

```
http {
    #---------------------
    # test health check
    #---------------------
    lua_package_path "/usr/local/openresty/lualib/resty/?.lua;/usr/local/openresty/lualib/resty/upstream/?.lua;;";
    
    upstream tomcat {
        server 127.0.0.1:48080;
        server 127.0.0.1:58080;
    }

    lua_shared_dict healthcheck 1m;
    lua_socket_log_errors off;
    init_worker_by_lua_block {
        local hc = require "resty.upstream.healthcheck"
        local ok, err = hc.spawn_checker {
            shm = "healthcheck",
            upstream = "tomcat",
            type = "http",
            http_req = "GET /health.txt HTTP/1.0\r\nHost: tomcat\r\n\r\n",
            interval = 2000,
            timeout = 5000,
            fall = 3,
            rise = 2,
            valid_statuses = {200, 302},
            concurrency = 1,
        }

        if not ok then
            ngx.log(ngx.ERR, "=======> failed to spawn health checker: ", err)
            return
        end
    }
    
    server {
        listen      38080;
        server_name localhost;
    
        access_log logs/access-38080.log  main;
        error_log   logs/error-38080.log  debug;
        
        location / {
            proxy_pass   http://tomcat;
        }
               
        location /server/status {
            access_log off;
            default_type text/plain;
            content_by_lua_block {
                local hc = require "resty.upstream.healthcheck"
                ngx.say("Nginx Worker PID: ", ngx.worker.pid())
                ngx.print(hc.status_page())
            }
        }
    }
}
```

The init_worker_by_lua_block above sets up a health checker for exactly one upstream.
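To confirm the checker is actually running, hit the /server/status endpoint defined above. The exact layout comes from the module's status_page() helper and may differ between versions, but the output looks roughly like this:

```
$ curl http://localhost:38080/server/status
Nginx Worker PID: 12345
Upstream tomcat
    Primary Peers
        127.0.0.1:48080 up
        127.0.0.1:58080 DOWN
    Backup Peers
```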

So how do you give several upstreams different checks? Say there is also a tomcat2 upstream whose health check URL is /health2.txt. init_worker_by_lua_block is effectively just a function body, so we simply call hc.spawn_checker again inside it, once per upstream:

```
http {
    #---------------------
    # test health check
    #---------------------
    lua_package_path "/usr/local/openresty/lualib/resty/?.lua;/usr/local/openresty/lualib/resty/upstream/?.lua;;";
    
    upstream tomcat {
        server 127.0.0.1:48080;
        server 127.0.0.1:58080;
    }
    upstream tomcat2 {
        server 127.0.0.1:48081;
        server 127.0.0.1:58081;
    }

    lua_shared_dict healthcheck 1m;
    lua_socket_log_errors off;
    init_worker_by_lua_block {
        local hc = require "resty.upstream.healthcheck"
        local ok, err = hc.spawn_checker {
            shm = "healthcheck",
            upstream = "tomcat",
            type = "http",
            http_req = "GET /health.txt HTTP/1.0\r\nHost: tomcat\r\n\r\n",
            interval = 2000,
            timeout = 5000,
            fall = 3,
            rise = 2,
            valid_statuses = {200, 302},
            concurrency = 1,
        }

        if not ok then
            ngx.log(ngx.ERR, "=======> failed to spawn health checker: ", err)
            return
        end

        ok, err = hc.spawn_checker {
            shm = "healthcheck",
            upstream = "tomcat2",
            type = "http",
            http_req = "GET /health2.txt HTTP/1.0\r\nHost: tomcat\r\n\r\n",
            interval = 2000,
            timeout = 5000,
            fall = 3,
            rise = 2,
            valid_statuses = {200, 302},
            concurrency = 1,
        }

        if not ok then
            ngx.log(ngx.ERR, "=======> failed to spawn tomcat2 health checker: ", err)
            return
        end
    }
    
    server {
        listen      38080;
        server_name localhost;
    
        access_log logs/access-38080.log  main;
        error_log   logs/error-38080.log  debug;
        
        location / {
            proxy_pass   http://tomcat;
        }
               
        location /server/status {
            access_log off;
            default_type text/plain;
            content_by_lua_block {
                local hc = require "resty.upstream.healthcheck"
                ngx.say("Nginx Worker PID: ", ngx.worker.pid())
                ngx.print(hc.status_page())
            }
        }
    }
}
```

That's all there is to it. A careful read of the module's code makes the configuration approach easy to work out.
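If the copy-paste bothers you, note that multiple checkers can share the same shm zone, so the whole block can be driven from a table instead. A minimal sketch, reusing this article's upstream names and health check URIs:

```
init_worker_by_lua_block {
    local hc = require "resty.upstream.healthcheck"

    -- one entry per upstream to monitor
    local checks = {
        { upstream = "tomcat",  uri = "/health.txt"  },
        { upstream = "tomcat2", uri = "/health2.txt" },
    }

    for _, c in ipairs(checks) do
        local ok, err = hc.spawn_checker {
            shm = "healthcheck",   -- all checkers share one lua_shared_dict
            upstream = c.upstream,
            type = "http",
            http_req = "GET " .. c.uri .. " HTTP/1.0\r\nHost: " .. c.upstream .. "\r\n\r\n",
            interval = 2000,
            timeout = 5000,
            fall = 3,
            rise = 2,
            valid_statuses = {200, 302},
            concurrency = 1,
        }
        if not ok then
            ngx.log(ngx.ERR, "failed to spawn health checker for ", c.upstream, ": ", err)
        end
    end
}
```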

Watch others step in the pits to pave your own road; feel free to follow 猿界汪汪队.


An alternative to the approach above is Lua-Resty-Checkups, a Lua-based upstream management and health check module open-sourced by UPYUN (又拍云). Its features:

- periodic upstream service management
- both management and health checking
- dynamic upstream updates
- weighted round-robin or hash balancing
- synchronization with Nginx C upstreams
- clustering by levels and keys

A quick look at usage:

```
-- config.lua
_M = {}

_M.global = {
    checkup_timer_interval = 15,
    checkup_shd_sync_enable = true,
    shd_config_timer_interval = 1,
}

_M.ups1 = {
    cluster = {
        {
            servers = {
                {host="127.0.0.1", port=4444, weight=10, max_fails=3, fail_timeout=10},
            }
        },
    },
}

return _M   -- required so that `require "config"` hands back this table
```

```
lua_package_path "/path/to/lua-resty-checkups/lib/checkups/?.lua;/path/to/config.lua;;";

lua_shared_dict state 10m;
lua_shared_dict mutex 1m;
lua_shared_dict locks 1m;
lua_shared_dict config 10m;

server {
    listen 12350;
    return 200 12350;
}

server {
    listen 12351;
    return 200 12351;
}

init_worker_by_lua_block {
    local config = require "config"
    local checkups = require "resty.checkups.api"
    checkups.prepare_checker(config)
    checkups.create_checker()
}

server {
    location = /12350 {
        proxy_pass http://127.0.0.1:12350/;
    }
    location = /12351 {
        proxy_pass http://127.0.0.1:12351/;
    }

    location = /t {
        content_by_lua_block {
            local checkups = require "resty.checkups.api"

            local callback = function(host, port)
                local res = ngx.location.capture("/" .. port)
                ngx.say(res.body)
                return 1
            end

            local ok, err

            -- connect to a dead server, no upstream available
            ok, err = checkups.ready_ok("ups1", callback)
            if err then ngx.say(err) end

            -- add server to ups1
            ok, err = checkups.update_upstream("ups1", {
                {
                    servers = {
                        {host="127.0.0.1", port=12350, weight=10, max_fails=3, fail_timeout=10},
                    }
                },
            })
            if err then ngx.say(err) end

            ngx.sleep(1)
            ok, err = checkups.ready_ok("ups1", callback)
            if err then ngx.say(err) end
            ok, err = checkups.ready_ok("ups1", callback)
            if err then ngx.say(err) end

            -- add server to a new upstream
            ok, err = checkups.update_upstream("ups2", {
                {
                    servers = {
                        {host="127.0.0.1", port=12351},
                    }
                },
            })
            if err then ngx.say(err) end

            ngx.sleep(1)
            ok, err = checkups.ready_ok("ups2", callback)
            if err then ngx.say(err) end

            -- add another server to ups2, resetting the round-robin state
            ok, err = checkups.update_upstream("ups2", {
                {
                    servers = {
                        {host="127.0.0.1", port=12350, weight=10, max_fails=3, fail_timeout=10},
                        {host="127.0.0.1", port=12351, weight=10, max_fails=3, fail_timeout=10},
                    }
                },
            })
            if err then ngx.say(err) end

            ngx.sleep(1)
            ok, err = checkups.ready_ok("ups2", callback)
            if err then ngx.say(err) end
            ok, err = checkups.ready_ok("ups2", callback)
            if err then ngx.say(err) end
        }
    }
}
```

A fuller Lua configuration example:

```
_M = {}

-- the global part
_M.global = {
    checkup_timer_interval = 15,
    checkup_timer_overtime = 60,
    default_heartbeat_enable = true,
    checkup_shd_sync_enable = true,
    shd_config_timer_interval = 1,
}

-- the remaining parts are per-cluster configurations
_M.redis = {
    enable = true,
    typ = "redis",
    timeout = 2,
    read_timeout = 15,
    send_timeout = 15,
    protected = true,
    cluster = {
        {   -- level 1
            try = 2,
            servers = {
                { host = "192.168.0.1", port = 6379, weight=10, max_fails=3, fail_timeout=10 },
                { host = "192.168.0.2", port = 6379, weight=10, max_fails=3, fail_timeout=10 },
            }
        },
        {   -- level 2
            servers = {
                { host = "192.168.0.3", port = 6379, weight=10, max_fails=3, fail_timeout=10 },
            }
        },
    },
}

_M.api = {
    enable = false,
    typ = "http",
    http_opts = {
        query = "GET /status HTTP/1.1\r\nHost: localhost\r\n\r\n",
        statuses = {
            [500] = false,
            [502] = false,
            [503] = false,
            [504] = false,
        },
    },
    mode = "hash",
    cluster = {
        dc1 = {
            servers = {
                { host = "192.168.1.1", port = 1234, weight=10, max_fails=3, fail_timeout=10 },
            }
        },
        dc2 = {
            servers = {
                { host = "192.168.1.2", port = 1234, weight=10, max_fails=3, fail_timeout=10 },
            }
        }
    }
}

_M.ups_from_nginx = {
    timeout = 2,
    cluster = {
        {   -- level 1
            upstream = "api.com",
        },
        {   -- level 2
            upstream = "api.com",
            upstream_only_backup = true,
        },
    },
}

return _M
```
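Since update_upstream works at runtime, an obvious use is a small admin endpoint for adding servers on the fly. A minimal sketch reusing the API calls above; the /upstream/update route and its query arguments are hypothetical:

```
location = /upstream/update {
    content_by_lua_block {
        local checkups = require "resty.checkups.api"
        local args = ngx.req.get_uri_args()

        -- hypothetical interface: /upstream/update?name=ups1&host=127.0.0.1&port=12350
        local ok, err = checkups.update_upstream(args.name, {
            {
                servers = {
                    { host = args.host, port = tonumber(args.port) },
                }
            },
        })
        if not ok then
            ngx.status = 500
            ngx.say("update failed: ", err)
            return
        end
        ngx.say("ok")
    }
}
```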
OpenResty is a web platform based on Nginx that extends it with the Lua scripting language, enabling more flexible and efficient web application development. In production, health checks are needed to detect backend failures promptly and keep the service stable and available.

There are two common ways to configure health checks with OpenResty: through the active check directives of the third-party nginx_upstream_check_module (the stock open-source Nginx upstream module has no active health checks, so the build must include this module), or through custom health checks written in Lua, as shown earlier.

Example with nginx_upstream_check_module:

1. Define the upstream in the Nginx configuration:

```
upstream backend {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;

    # health check configuration
    check interval=3000 rise=2 fall=5 timeout=1000 type=http;
    check_http_send "HEAD / HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}
```

Here interval is the check interval in milliseconds, rise the number of consecutive successes before a peer is considered up, fall the number of consecutive failures before it is considered down, timeout the per-check timeout in milliseconds, and type the kind of check (http here). check_http_send is the HTTP request to send, and check_http_expect_alive lists the response status classes treated as alive. See the module's documentation for the full parameter reference.

2. Use the upstream in a server block:

```
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

Requests are forwarded to the backend cluster. When a node fails its health checks, Nginx automatically routes traffic to the remaining healthy nodes, keeping the service available.
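nginx_upstream_check_module also ships a built-in status page through its check_status directive. A minimal sketch, assuming your OpenResty/Nginx build actually includes the module (the listen port here is arbitrary):

```
server {
    listen 38081;

    location /check_status {
        access_log off;
        check_status;   # renders the module's status page (per-peer up/down state)
    }
}
```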