openresty lua-resty-http API requests
Project page: https://github.com/ledgetech/lua-resty-http
Connection
new: create the connection object
Syntax: httpc, err = http.new()
Creates the HTTP connection object. In case of failures,
returns nil and a string describing the error
* Creates the HTTP connection object
* On failure, returns nil and an error message
connect: establish a connection
Syntax: ok, err, ssl_session = httpc:connect(options)
Attempts to connect to the web server while incorporating
the following activities:
TCP connection
SSL handshake
HTTP proxy configuration
* Establishes a connection to the web server
In doing so it will create a distinct connection pool name that
is safe to use with SSL and / or proxy based connections, and as
such this syntax is strongly recommended over the original (now
deprecated) TCP only connection syntax
# options fields
scheme: the protocol scheme
host: the host to connect to
port: the port
pool: connection pool name
pool_size: connection pool size
backlog: backlog size for the pool's connection queue
proxy_opts: proxy options
ssl_reused_session: an SSL session to reuse
ssl_verify: verify the SSL certificate, defaults to true
ssl_server_name: the SNI server name
ssl_send_status_req: whether to send the OCSP status request during the SSL handshake
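# Example: a minimal sketch of connect() using a few of the options above; the host, port, and pool settings are placeholder values, not part of the original post
local http = require("resty.http")
local httpc = http.new()

local ok, err, ssl_session = httpc:connect({
    scheme = "https",
    host = "example.com",             -- placeholder host
    port = 443,
    ssl_verify = true,                -- requires lua_ssl_trusted_certificate to be configured
    ssl_server_name = "example.com",
    pool_size = 10,                   -- illustrative pool settings
    backlog = 100,
})
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return
end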
set_timeout: set the socket timeout
Syntax: httpc:set_timeout(time)
Sets the socket timeout (in ms) for subsequent operations.
See set_timeouts below for a more declarative approach
* Sets the socket timeout in milliseconds for subsequent operations
set_timeouts: set the connect, send, and read timeouts
Syntax: httpc:set_timeouts(connect_timeout, send_timeout, read_timeout)
Sets the connect timeout threshold, send timeout threshold,
and read timeout threshold, respectively, in milliseconds,
for subsequent socket operations (connect, send, receive,
and iterators returned from receiveuntil)
* Sets the connect, send, and read timeouts in milliseconds
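# Example: illustrative values only (1 s to connect, 5 s to send, 10 s to read)
local httpc = require("resty.http").new()
httpc:set_timeouts(1000, 5000, 10000)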
set_keepalive: place the connection into the keepalive pool
Syntax: ok, err = httpc:set_keepalive(max_idle_timeout, pool_size)
Either places the current connection into the pool for future reuse,
or closes the connection. Calling this instead of close is "safe" in
that it will conditionally close depending on the type of request.
Specifically, a 1.0 request without Connection: Keep-Alive will be
closed, as will a 1.1 request with Connection: Close.
* Places the connection into the pool for future reuse (or closes it), with the given max idle timeout
In case of success, returns 1. In case of errors, returns nil, err.
In the case where the connection is conditionally closed as described
above, returns 2 and the error string connection must be closed, so
as to distinguish from unexpected errors
* On success, returns 1
* If the connection had to be closed, returns 2 and the string "connection must be closed"
* On failure, returns nil and an error message
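# Example: a sketch of pooling the connection once the response body has been read; the timeout and pool size are illustrative
local ok, err = httpc:set_keepalive(60000, 10)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
elseif ok == 2 then
    -- the connection was conditionally closed; err is "connection must be closed"
    ngx.log(ngx.INFO, "connection not pooled: ", err)
end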
set_proxy_options: configure proxy options
Syntax: httpc:set_proxy_options(opts)
Configure an HTTP proxy to be used with this client instance.
The opts table expects the following fields:
http_proxy: an URI to a proxy server to be used with HTTP requests
http_proxy_authorization: a default Proxy-Authorization header
value to be used with http_proxy, e.g. Basic
ZGVtbzp0ZXN0, which will be overridden if the
Proxy-Authorization request header is present.
https_proxy: an URI to a proxy server to be used with HTTPS requests
https_proxy_authorization: as http_proxy_authorization but for use with
https_proxy (since with HTTPS the authorisation is
done when connecting, this one cannot be overridden
by passing the Proxy-Authorization request header).
no_proxy: a comma separated list of hosts that should not be proxied.
* Configures an HTTP proxy for this client instance
* http_proxy: proxy server URI for HTTP requests
* http_proxy_authorization: default Proxy-Authorization header value for the HTTP proxy
* https_proxy: proxy server URI for HTTPS requests
* https_proxy_authorization: default Proxy-Authorization header value for the HTTPS proxy
* no_proxy: comma-separated list of hosts that should not be proxied
Note that this method has no effect when using the deprecated
TCP only connect connection syntax
* set_proxy_options has no effect with the deprecated TCP-only connect syntax
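# Example: a sketch of configuring a proxy on the client instance before the connection is established; the proxy address and credentials are placeholders
httpc:set_proxy_options({
    http_proxy                = "http://127.0.0.1:3128",
    http_proxy_authorization  = "Basic ZGVtbzp0ZXN0",
    https_proxy               = "http://127.0.0.1:3128",
    https_proxy_authorization = "Basic ZGVtbzp0ZXN0",
    no_proxy                  = "localhost,127.0.0.1",
})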
get_reused_times: get the number of times the connection has been reused
Syntax: times, err = httpc:get_reused_times()
close: close the connection
Syntax: ok, err = httpc:close()
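# Example: checking whether the connection came from the keepalive pool, and closing it explicitly when it should not be reused
local times, err = httpc:get_reused_times()
if times and times > 0 then
    -- the connection was taken from the keepalive pool
end

local ok, err = httpc:close()
if not ok then
    ngx.log(ngx.ERR, "failed to close: ", err)
end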
Request
request: send a request
Syntax: res, err = httpc:request(params)
Sends an HTTP request over an already established connection.
Returns a res table or nil and an error message.
* Sends an HTTP request over an already established connection and returns a res table
* On failure, returns nil and an error message
The params table expects the following fields:
version: The HTTP version number. Defaults to 1.1.
method: The HTTP method string. Defaults to GET.
path: The path string. Defaults to /.
query: The query string, presented as either a literal string or Lua table.
headers: A table of request headers.
body: The request body as a string, a table of strings, or an iterator function yielding strings until nil when exhausted. Note that you must specify a Content-Length for the request body, or specify Transfer-Encoding: chunked and have your function implement the encoding. See also: get_client_body_reader.
* Optional params fields:
* version: HTTP version, defaults to 1.1
* method: request method, defaults to GET
* path: request path, defaults to /
* query: query string (a literal string or a Lua table)
* headers: table of request headers
* body: request body (a string, a table of strings, or an iterator function)
When the request is successful, res will contain the following fields:
status: The status code.
reason: The status reason phrase.
headers: A table of headers. Multiple headers with the same field name will be presented as a table of values.
has_body: A boolean flag indicating if there is a body to be read.
body_reader: An iterator function for reading the body in a streaming fashion.
read_body: A method to read the entire body into a string.
read_trailers: A method to merge any trailers underneath the headers, after reading the body
* On success, res contains the following fields:
* status: the status code
* reason: the status reason phrase
* headers: table of response headers (repeated headers become a table of values)
* has_body: whether there is a body to be read
* body_reader: an iterator function for streaming the response body
* read_body: a method that reads the entire body into a string
* read_trailers: a method that merges any trailers into the headers, after the body has been read
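# Example: a minimal sketch of connect() + request(); the host and port are placeholders, and the /hello endpoint simply mirrors the demo backend shown later in this post
local http = require("resty.http")
local httpc = http.new()

local ok, err = httpc:connect({ scheme = "http", host = "127.0.0.1", port = 8080 })
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return
end

local res, err = httpc:request({
    method = "GET",
    path = "/hello",
    query = { name = "gtlx" },
    headers = { ["Accept"] = "*/*" },
})
if not res then
    ngx.log(ngx.ERR, "request failed: ", err)
    return
end

ngx.say(res.status)
local body, err = res:read_body()   -- read the whole body into a string
ngx.say(body)

httpc:set_keepalive()               -- return the connection to the pool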
request_uri: single-shot request; accepts both connect and request parameters
Syntax: res, err = httpc:request_uri(uri, params)
The single-shot interface (see usage). Since this method performs
an entire end-to-end request, options specified in the params can
include anything found in both connect and request documented above.
Note also that fields path, and query, in params will override
relevant components of the uri if specified (scheme, host, and
port will always be taken from the uri).
* params can include anything found in both connect and request
* path and query in params override the corresponding parts of the uri; scheme, host, and port are always taken from the uri
There are 3 additional parameters for controlling keepalives:
keepalive: Set to false to disable keepalives and immediately
close the connection. Defaults to true.
keepalive_timeout: The maximal idle timeout (ms). Defaults to
lua_socket_keepalive_timeout.
keepalive_pool: The maximum number of connections in the pool.
Defaults to lua_socket_pool_size.
* Three additional keepalive parameters:
* keepalive: set to false to disable keepalives and close the connection immediately; defaults to true
* keepalive_timeout: maximum idle timeout (ms), defaults to lua_socket_keepalive_timeout
* keepalive_pool: maximum number of connections in the pool, defaults to lua_socket_pool_size
If the request is successful, res will contain the following fields:
status: The status code.
headers: A table of headers.
body: The entire response body as a string
* On success, res contains the following fields:
* status: the status code
* headers: table of response headers
* body: the entire response body as a string
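# Example: a sketch of a single-shot request_uri() call using the keepalive options above; the URI and values are illustrative
local http = require("resty.http")
local httpc = http.new()

local res, err = httpc:request_uri("http://127.0.0.1:8080/hello2", {
    method = "GET",
    query = "name=gtlx&age=20",
    keepalive_timeout = 60000,
    keepalive_pool = 10,
})
if not res then
    ngx.log(ngx.ERR, "request failed: ", err)
    return
end

ngx.say(res.status)
ngx.say(res.body)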
request_pipeline: send multiple pipelined requests
Syntax: responses, err = httpc:request_pipeline(params)
This method works as per the request method above, but params is
instead a nested table of parameter tables. Each request is sent
in order, and responses is returned as a table of response handles
* Each request is sent in order; responses is returned as a table of response handles
Due to the nature of pipelining, no responses are actually read
until you attempt to use the response fields (status / headers
etc). And since the responses are read off in order, you must
read the entire body (and any trailers if you have them), before
attempting to read the next response
* The entire body (and any trailers) of one response must be read before reading the next
Be sure to test at least one field (such as status) before trying
to use the others, in case a socket read error has occurred
* Test at least one field (such as status) before using the others, in case a socket read error has occurred
# Example
local responses = httpc:request_pipeline({
    { path = "/b" },
    { path = "/c" },
    { path = "/d" },
})

for _, r in ipairs(responses) do
    if not r.status then
        ngx.log(ngx.ERR, "socket read error")
        break
    end

    ngx.say(r.status)
    ngx.say(r:read_body())
end
Response
res.body_reader: stream the response body
Syntax: reader = res.body_reader
The body_reader iterator can be used to stream the response body
in chunk sizes of your choosing
* The body_reader iterator can be used to stream the response body in chunks of a chosen size
If the reader is called with no arguments, the behaviour depends
on the type of connection. If the response is encoded as chunked,
then the iterator will return the chunks as they arrive. If not,
it will simply return the entire body.
* When called with no arguments, the behaviour depends on the connection type
* If the response is chunked, the iterator returns chunks as they arrive
* Otherwise, it returns the entire body
Note that the size provided is actually a maximum size. So in the
chunked transfer case, you may get buffers smaller than the size
you ask, as a remainder of the actual encoded chunks
* The size passed to body_reader is a maximum size
* The buffers actually returned may be smaller than the requested size
# Example
local reader = res.body_reader
local buffer_size = 8192

repeat
    local buffer, err = reader(buffer_size)
    if err then
        ngx.log(ngx.ERR, err)
        break
    end

    if buffer then
        -- process
    end
until not buffer
res:read_body: read the entire response body into a string
Syntax: body, err = res:read_body()
Reads the entire body into a local string
* Reads the entire response body into a local string
res:read_trailers: merge trailers into the response headers
Syntax: res:read_trailers()
This merges any trailers underneath the res.headers table
itself. Must be called after reading the body
* Merges any trailers into the res.headers table; must be called after reading the body
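# Example: a short sketch, assuming res came from httpc:request() and the body has just been read
local body, err = res:read_body()
res:read_trailers()   -- any trailers are now merged into res.headers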
Parsing URIs
parse_uri: parse a URI
Syntax: local scheme, host, port, path, query? = unpack(httpc:parse_uri(uri, query_in_path?))
This is a convenience function allowing one to more easily use the
generic interface, when the input data is a URI.
* Parses a URI into its components
As of version 0.10, the optional query_in_path parameter was added,
which specifies whether the querystring is to be included in the path
return value, or separately as its own return value. This defaults to
true in order to maintain backwards compatibility. When set to false,
path will only include the path, and query will contain the URI args,
not including the ? delimiter
* query_in_path: whether the query string is included in the path return value; defaults to true
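# Example: a sketch of parse_uri() with query_in_path set to false; the URI is a placeholder
local httpc = require("resty.http").new()

local parsed, err = httpc:parse_uri("http://example.com:8080/hello?name=gtlx", false)
if not parsed then
    ngx.log(ngx.ERR, "failed to parse uri: ", err)
    return
end

local scheme, host, port, path, query = unpack(parsed)
-- scheme = "http", host = "example.com", port = 8080, path = "/hello", query = "name=gtlx"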
Reading the client request body
get_client_body_reader: read the downstream client request body
Syntax: reader, err = httpc:get_client_body_reader(chunksize?, sock?)
Returns an iterator function which can be used to read the downstream
client request body in a streaming fashion. You may also specify an
optional default chunksize (default is 65536), or an already established
socket in place of the client request
* Returns an iterator function for streaming the downstream client request body
* chunksize: defaults to 65536
* sock: an already established socket to use in place of the client request socket
This iterator can also be used as the value for the body field in
request params, allowing one to stream the request body into a
proxied upstream request
* The iterator can also be used as the body field in request params, to stream the client request body to an upstream request
# Example
local req_reader = httpc:get_client_body_reader()
local buffer_size = 8192

repeat
    local buffer, err = req_reader(buffer_size)
    if err then
        ngx.log(ngx.ERR, err)
        break
    end

    if buffer then
        -- process
    end
until not buffer
# Example
local client_body_reader, err = httpc:get_client_body_reader()

local res, err = httpc:request({
    path = "/helloworld",
    body = client_body_reader,
})
Usage example
*************
Backend application
Person
import lombok.Data;

@Data
public class Person {
    private String name;
    private Integer age;
}
HelloController
import java.util.HashMap;
import java.util.Map;

import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello(String name) {
        System.out.println(name);
        return "hello " + name;
    }

    @RequestMapping("/hello2")
    public Map<String, Object> hello2(String name, Integer age) {
        Map<String, Object> result = new HashMap<>();
        result.put("name", name);
        result.put("age", age);
        return result;
    }

    @RequestMapping("/hello3")
    public Map<String, Object> hello3(@RequestBody Person person) {
        Map<String, Object> result = new HashMap<>();
        result.put("person", person);
        return result;
    }
}
Dockerfile
FROM java:8
WORKDIR /usr/local/jar
COPY hello.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
Docker configuration: Edit Configuration ==> Docker
Click Run
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.7.1)
2022-07-26 03:44:55.189 INFO 1 --- [ main] com.example.demo.DemoApplication : Starting DemoApplication v0.0.1-SNAPSHOT using Java 1.8.0_111 on eefb07353e70 with PID 1 (/usr/local/jar/app.jar started by root in /usr/local/jar)
2022-07-26 03:44:55.195 INFO 1 --- [ main] com.example.demo.DemoApplication : No active profile set, falling back to 1 default profile: "default"
2022-07-26 03:44:56.705 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2022-07-26 03:44:56.733 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2022-07-26 03:44:56.733 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.64]
2022-07-26 03:44:56.834 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-07-26 03:44:56.834 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1562 ms
2022-07-26 03:44:57.473 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2022-07-26 03:44:57.486 INFO 1 --- [ main] com.example.demo.DemoApplication : Started DemoApplication in 2.964 seconds (JVM running for 3.546)
*************
openresty
default.conf
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/local/openresty/nginx/html;
        index index.html index.htm;
    }

    location /test {
        content_by_lua_block {
            local http = require 'resty.http';
            local cjson = require 'cjson';
            local con = http.new();
            -- GET request, parameters appended directly to the path
            local res, err = con:request_uri("http://172.18.0.4:8080", {
                method = "GET",
                path = "/hello?name=gtlx"
            });
            if not res then
                ngx.say("请求出错 ==> ", err);
                return;
            end
            ngx.say(res.body);
        }
    }

    location /test2 {
        content_by_lua_block {
            local http = require 'resty.http';
            local cjson = require 'cjson';
            local con = http.new();
            -- GET request, parameters passed via the query field
            local res, err = con:request_uri("http://172.18.0.4:8080", {
                method = "GET",
                path = "/hello2",
                query = "name=gtlx&age=20"
            });
            if not res then
                ngx.say("请求出错 ==> ", err);
                return;
            end
            ngx.say(res.body);
        }
    }

    location /test3 {
        content_by_lua_block {
            local http = require 'resty.http';
            local cjson = require 'cjson';
            local con = http.new();
            -- POST request with a JSON body
            local res, err = con:request_uri("http://172.18.0.4:8080", {
                method = "POST",
                path = "/hello3",
                headers = {
                    ["Content-Type"] = "application/json"
                },
                body = '{"name":"瓜田李下", "age":20}'
            });
            if not res then
                ngx.say("请求出错 ==> ", err);
                return;
            end
            ngx.say(res.body);
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/local/openresty/nginx/html;
    }
}
Create the container
docker run -it -d --net fixed --ip 172.18.0.2 -p 9000:80 \
-v /Users/huli/lua/openresty/http/default.conf:/etc/nginx/conf.d/default.conf \
--name open-http lihu12344/openresty
Testing
# GET request: parameters passed in the path
huli@localhost http % curl localhost:9000/test
hello gtlx
# GET request: parameters passed via query
huli@localhost http % curl localhost:9000/test2
{"name":"gtlx","age":20}
# POST request
huli@localhost http % curl localhost:9000/test3
{"person":{"name":"瓜田李下","age":20}}