Linux RPC architecture ------ a brief analysis of the rpcbind source code, with test examples

1. How to use RPC?

First install the rpcbind package (portmap support may be required).

Start the rpcbind service:

    service rpcbind start/status/stop   (you may be prompted for a password), or

    systemctl start/status/stop rpcbind

Check whether the service is running. On my Ubuntu 16.04 machine it looks like this:

systemctl status rpcbind.*
● rpcbind.socket - RPCbind Server Activation Socket
   Loaded: loaded (/lib/systemd/system/rpcbind.socket; enabled; vendor preset: enabled)
   Active: active (running) since 三 2019-12-25 16:03:06 CST; 1 day 17h ago
   Listen: /run/rpcbind.sock (Stream)

12月 25 16:03:06 neo systemd[1]: Listening on RPCbind Server Activation Socket.
12月 26 15:13:31 neo systemd[1]: Listening on RPCbind Server Activation Socket.

● rpcbind.service - RPC bind portmap service
   Loaded: loaded (/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
  Drop-In: /run/systemd/generator/rpcbind.service.d
           └─50-rpcbind-$portmap.conf
   Active: active (running) since 三 2019-12-25 16:03:08 CST; 1 day 17h ago
 Main PID: 13037 (rpcbind)
    Tasks: 1
   Memory: 244.0K
      CPU: 185ms
   CGroup: /system.slice/rpcbind.service
           └─13037 /sbin/rpcbind -f -w

12月 25 16:03:08 neo systemd[1]: Starting RPC bind portmap service...
12月 25 16:03:08 neo systemd[1]: Started RPC bind portmap service.
12月 26 15:13:32 neo systemd[1]: Started RPC bind portmap service.

● rpcbind.target - RPC Port Mapper
   Loaded: loaded (/etc/insserv.conf.d/rpcbind; static; vendor preset: enabled)
  Drop-In: /run/systemd/generator/rpcbind.target.d
           └─50-hard-dependency-rpcbind-$portmap.conf
   Active: active since 三 2019-12-25 16:03:08 CST; 1 day 17h ago
     Docs: man:systemd.special(7)

12月 25 16:03:08 neo systemd[1]: Reached target RPC Port Mapper.

The service command prints less detail than systemctl:

service rpcbind status
● rpcbind.service - RPC bind portmap service
   Loaded: loaded (/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
  Drop-In: /run/systemd/generator/rpcbind.service.d
           └─50-rpcbind-$portmap.conf
   Active: active (running) since 三 2019-12-25 16:03:08 CST; 1 day 17h ago
 Main PID: 13037 (rpcbind)
    Tasks: 1
   Memory: 292.0K
      CPU: 185ms
   CGroup: /system.slice/rpcbind.service
           └─13037 /sbin/rpcbind -f -w

12月 25 16:03:08 neo systemd[1]: Starting RPC bind portmap service...
12月 25 16:03:08 neo systemd[1]: Started RPC bind portmap service.
12月 26 15:13:32 neo systemd[1]: Started RPC bind portmap service.

2. Listing the services registered with rpcbind

Use the rpcinfo command (see man rpcinfo for the options). Run with no arguments, or with -p, it lists the registrations on the local host:
   program version netid     address                service    owner
    100000    4    tcp6      ::.0.111               portmapper superuser
    100000    3    tcp6      ::.0.111               portmapper superuser
    100000    4    udp6      ::.0.111               portmapper superuser
    100000    3    udp6      ::.0.111               portmapper superuser
    100000    4    tcp       0.0.0.0.0.111          portmapper superuser
    100000    3    tcp       0.0.0.0.0.111          portmapper superuser
    100000    2    tcp       0.0.0.0.0.111          portmapper superuser
    100000    4    udp       0.0.0.0.0.111          portmapper superuser
    100000    3    udp       0.0.0.0.0.111          portmapper superuser
    100000    2    udp       0.0.0.0.0.111          portmapper superuser
    100000    4    local     /run/rpcbind.sock      portmapper superuser
    100000    3    local     /run/rpcbind.sock      portmapper superuser
 824377344    1    udp       0.0.0.0.169.226        -          unknown
 824377344    1    tcp       0.0.0.0.206.254        -          unknown
 939524112    1    udp       0.0.0.0.211.132        -          unknown
 939524112    2    udp       0.0.0.0.211.132        -          unknown
 939524112    1    tcp       0.0.0.0.200.190        -          unknown
 939524112    2    tcp       0.0.0.0.200.190        -          unknown
 939524113    1    udp       0.0.0.0.164.104        -          unknown
 939524113    1    tcp       0.0.0.0.237.171        -          unknown
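
In this output the address column uses rpcbind's "universal address" format: the IP address octets followed by two more octets that encode the port, so port = high_octet * 256 + low_octet. For example, 0.0.0.0.0.111 is 0.0.0.0 port 0*256 + 111 = 111 (the well-known rpcbind/portmapper port), and 0.0.0.0.169.226 is port 169*256 + 226 = 43490.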

3. How to build the rpcbind source?

    Run ./configure to generate the Makefile (./configure -h lists the available options), then run make to build the rpcbind and rpcinfo binaries; detailed build walkthroughs are easy to find online.

4. A brief walk through the rpcbind source:

--------------------------------rpcbind architecture----------------------------------------------------------
1. How does a server register itself?
2. How does a client look up (and call) an interface a server provides?
3. How does rpcbind keep track of the registered servers and map a client's request to the right one?

Lookup:
    An entry is located by the triple [program, version, netid].
Registering a server:
    The new entry is inserted at the head of the singly linked rpcb list, as sketched below.
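
The sketch below illustrates that lookup/registration scheme with a deliberately simplified structure. The type and field names (rb_entry, prog, vers, netid, uaddr) are placeholders chosen for this example; the real source keeps the same information in its rpcblist entries but with more bookkeeping.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for a rpcbind registration entry (field names are illustrative). */
struct rb_entry {
    unsigned long prog;      /* program number */
    unsigned long vers;      /* version number */
    char *netid;             /* transport, e.g. "udp", "tcp" */
    char *uaddr;             /* universal address, e.g. "0.0.0.0.0.111" */
    struct rb_entry *next;
};

static struct rb_entry *rb_head;     /* head of the singly linked list */

/* Register a server: allocate a node and insert it at the head of the list. */
static void rb_register(unsigned long prog, unsigned long vers,
                        const char *netid, const char *uaddr)
{
    struct rb_entry *e = malloc(sizeof(*e));
    if (e == NULL)
        return;
    e->prog  = prog;
    e->vers  = vers;
    e->netid = strdup(netid);
    e->uaddr = strdup(uaddr);
    e->next  = rb_head;              /* head insertion */
    rb_head  = e;
}

/* Look a server up by [program, version, netid] and return its address. */
static const char *rb_lookup(unsigned long prog, unsigned long vers, const char *netid)
{
    struct rb_entry *e;
    for (e = rb_head; e != NULL; e = e->next)
        if (e->prog == prog && e->vers == vers && strcmp(e->netid, netid) == 0)
            return e->uaddr;
    return NULL;
}

int main(void)
{
    rb_register(100000, 2, "udp", "0.0.0.0.0.111");
    rb_register(0x38000011, 1, "udp", "0.0.0.0.169.226");

    const char *addr = rb_lookup(0x38000011, 1, "udp");
    printf("found: %s\n", addr ? addr : "(none)");
    return 0;
}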

Entry point:
    rpcbind.c -> main()
    0. Parse the arguments and make sure rpcbind is not already running (decided via a lock file, #define RPCBINDDLOCK "/var/run/rpcbind.lock"); see the sketch after this list.
    1. General setup
        1.1 Adjust the limit on the number of open files: RLIMIT_NOFILE (resource.h)
        1.2 Check with geteuid() that it is being run by root
        1.3 Use the local services file for service lookups
        1.4 Read the network configuration (/etc/netconfig), parse the relevant information (socket type, addresses, etc.) and register a few default services
        1.5 Initialize syslog
        1.6 Install the signal handlers
    2. Enter the main loop (my_svc_run)
        ......
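
For step 0, the usual single-instance trick is to take an exclusive, non-blocking lock on the lock file and keep the descriptor open for the life of the daemon. The sketch below shows that idea using flock(); it is only an illustration of the technique, not the exact code in rpcbind.c (which may use a different locking primitive).

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>

#define RPCBINDDLOCK "/var/run/rpcbind.lock"    /* same path as the define in rpcbind.c */

/* Try to take an exclusive, non-blocking lock on the lock file.
 * If another instance already holds the lock, flock() fails immediately. */
static int single_instance_check(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open lockfile");
        return -1;
    }
    if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
        fprintf(stderr, "another instance appears to be running\n");
        close(fd);
        return -1;
    }
    return fd;      /* keep the fd open for the lifetime of the process */
}

int main(void)
{
    if (single_instance_check(RPCBINDDLOCK) < 0)
        exit(1);

    printf("lock acquired, continuing startup ...\n");
    pause();        /* stand-in for the real service loop (my_svc_run) */
    return 0;
}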

    Example: write a *.x file describing the remote server's interface and let rpcgen generate the framework code (rpcgen -C *.x).
    It produces the server-side stub and the client-side proxy automatically, much like the Android binder mechanism.
    Server side:
        Implement the PROJECTproc_1_svc function, i.e. the business logic.
        The rest is generated automatically:
            svc_register registers the server for both the UDP and TCP transports (see the sketch after this block).
    Client side:
        Call clnt_create() to obtain a CLIENT handle, fill in the argument data and call PROJECTproc_1, which in turn invokes PROJECTproc_1_svc on the server.
    Here PROJECTproc_1 is the proxy and PROJECTproc_1_svc is the stub.
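
For context, the registration code that rpcgen puts into the generated chat_svc.c looks roughly like the sketch below (reconstructed from typical classic rpcgen output, not copied from the generated file, so details will differ): it clears any stale mapping, creates one UDP and one TCP transport, registers both with svc_register, and then blocks in svc_run().

#include <stdio.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <rpc/pmap_clnt.h>
#include "chat.h"        /* defines MY_RPC_PROG and MY_RPC_VERS1, includes <rpc/rpc.h> */

/* Dispatcher generated by rpcgen in chat_svc.c; declared here only for the sketch. */
extern void my_rpc_prog_1(struct svc_req *rqstp, SVCXPRT *transp);

int main(void)
{
    SVCXPRT *transp;

    /* Drop any stale registration left behind by a previous run. */
    pmap_unset(MY_RPC_PROG, MY_RPC_VERS1);

    /* Create and register the UDP transport. */
    transp = svcudp_create(RPC_ANYSOCK);
    if (transp == NULL ||
        !svc_register(transp, MY_RPC_PROG, MY_RPC_VERS1, my_rpc_prog_1, IPPROTO_UDP)) {
        fprintf(stderr, "unable to register (MY_RPC_PROG, MY_RPC_VERS1, udp)\n");
        exit(1);
    }

    /* Create and register the TCP transport. */
    transp = svctcp_create(RPC_ANYSOCK, 0, 0);
    if (transp == NULL ||
        !svc_register(transp, MY_RPC_PROG, MY_RPC_VERS1, my_rpc_prog_1, IPPROTO_TCP)) {
        fprintf(stderr, "unable to register (MY_RPC_PROG, MY_RPC_VERS1, tcp)\n");
        exit(1);
    }

    svc_run();       /* normally never returns */
    fprintf(stderr, "svc_run returned\n");
    return 1;
}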

-------------------------------------end----------------------------------------------------------------

5. Test examples. There are three of them; they differ only in how they are built.

    Makefile

include Make.defines

chatcfile=chat_cli.c
chatsfile=chat_srv.c

PROGS =	client server myrpc_cli myrpc_srv chat_cli chat_srv

all:	${PROGS}

# PRG.x -> PRG.h PRG_clnt.c PRG_svc.c PRG_xdr.c
# client-side proxy: PRG_clnt.o <= PRG_clnt.c PRG.h
# server-side stub:  PRG_svc.o  <= PRG_svc.c PRG.h
# client binary: client <= PRG.h + client business code + client proxy + PRG_xdr.o
# server binary: server <= PRG.h + server business code + server stub  + PRG_xdr.o

# Generate the framework files
chat.h chat_clnt.c chat_svc.c chat_xdr.c: chat.x
			rpcgen chat.x

# Generate the client entry file only if a hand-written one is not already present
ifeq (${chatcfile},${wildcard ${chatcfile}})
# ${chatcfile} already exists, keep the hand-written version
else
chat_cli.c:	chat.x
			rpcgen -Sc -o $@ chat.x
endif

# Generate the server entry file only if a hand-written one is not already present
ifeq (${chatsfile},${wildcard ${chatsfile}})
# ${chatsfile} already exists, keep the hand-written version
else
chat_srv.c:	chat.x
			rpcgen -Ss -o $@ chat.x
endif
# Build the client
chat_cli: chat_clnt.c chat_cli.c chat_xdr.c
			${CC} ${CFLAGS} -o $@ chat_clnt.c chat_cli.c chat_xdr.c \
				${LIBS} ${LIBS_RPC}
# Build the server
chat_srv: chat_svc.c chat_srv.c chat_xdr.c
			${CC} ${CFLAGS} -o $@ chat_svc.c chat_srv.c chat_xdr.c \
				${LIBS} ${LIBS_RPC}

myrpc.h myrpc_clnt.c myrpc_svc.c myrpc_xdr.c:	myrpc.x
			rpcgen -C myrpc.x

myrpc_clnt.o:	myrpc_clnt.c myrpc.h

myrpc_svc.o:	myrpc_svc.c myrpc.h

myrpc_cli: myrpc.h myrpc_client.o myrpc_clnt.o myrpc_xdr.o
			${CC} ${CFLAGS} -o $@ myrpc_client.o myrpc_clnt.o myrpc_xdr.o \
				${LIBS} ${LIBS_RPC}

myrpc_srv: myrpc.h myrpc_server.o myrpc_svc.o myrpc_xdr.o
			${CC} ${CFLAGS} -o $@ myrpc_server.o myrpc_svc.o myrpc_xdr.o \
				${LIBS} ${LIBS_RPC}

square.h square_clnt.c square_svc.c square_xdr.c:	square.x
			rpcgen -C square.x

square_clnt.o: square_clnt.c square.h

square_svc.o: square_svc.c square.h

client:	square.h client.o square_clnt.o square_xdr.o
			${CC} ${CFLAGS} -o $@ client.o square_clnt.o square_xdr.o \
				${LIBS} ${LIBS_RPC}

server:	square.h server.o square_svc.o square_xdr.o
			${CC} ${CFLAGS} -o $@ server.o square_svc.o square_xdr.o \
				${LIBS} ${LIBS_RPC}

clean:
		rm -f ${PROGS} ${CLEANFILES} *_clnt.c *_svc.c *_xdr.c square.h myrpc.h  chat.h

Make.defines (ported over from uniipc):

#
# This file is generated by autoconf from "Make.defines.in".
#
# This is the "Make.defines" file that almost every "Makefile" in the
# source directories below this directory include.
# The "../" in the pathnames actually refer to this directory, since
# "make" is executed in all the subdirectories of this directory.
#
# System = 

CC = gcc
CFLAGS = -g -O2 -D_REENTRANT -Wall
LIBS = -lrt -lpthread 
LIBS_RPC = 
RANLIB = ranlib
RPCGEN_OPTS = -C

# Following is the main library, built from all the object files
# in the lib/ directories.
LIBUNPIPC_NAME = 

# Following are all the object files to create in the lib/ directory.
#LIB_OBJS =  daemon_inetd.o daemon_init.o error.o gf_time.o lock_reg.o lock_test.o my_shm.o px_ipc_name.o readable_timeo.o readline.o readn.o set_concurrency.o set_nonblock.o signal.o signal_intr.o sleep_us.o signal_rt.o signal_rt_intr.o timing.o tv_sub.o wrappthread.o wrapsunrpc.o wrapstdio.o wrapunix.o writable_timeo.o writen.o

CLEANFILES = core core.* *.core *.o temp.* *.out typescript* \
		*.[234]c *.[234]h *.bsdi *.sparc *.uw

chat.x

#define MY_RPC_PROG_NUM         0x38000011   /* program number */

struct request
{        /* request message structure */
    int mtype;
    int len;
    char req[1024];
};

struct response
{        /* response message structure */
    int mtype;
    int len;
    char resp[1024];
    int status;
};

program MY_RPC_PROG { 
    version MY_RPC_VERS1 {
        response MY_RPCC(request) = 1;    /* procedure number = 1 */
    } = 1;        /* Version number = 1 */
} = MY_RPC_PROG_NUM;    /* Program number */

/*
 *  Program number range         Description
 *  0x00000000 - 0x1FFFFFFF     Defined by Sun, for well-known services
 *  0x20000000 - 0x3FFFFFFF     Defined by the user, for local services or debugging
 *  0x40000000 - 0x5FFFFFFF     For transient programs, e.g. callback programs
 *  0x60000000 - 0xFFFFFFFF     Reserved
 */

chat_cli.c

/*
 * This is sample code generated by rpcgen.
 * These are only templates and you can use them
 * as a guideline for developing your own functions.
 */

#include "chat.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void my_rpc_prog_1(char *host)
{
    CLIENT *clnt;
    response *result_1;
    request my_rpcc_1_arg;

#ifndef DEBUG
    clnt = clnt_create(host, MY_RPC_PROG, MY_RPC_VERS1, "udp");
    if (clnt == NULL)
    {
        clnt_pcreateerror(host);
        exit(1);
    }
#endif /* DEBUG */
    my_rpcc_1_arg.mtype = 0;
    strncpy(my_rpcc_1_arg.req, "hello kitty", sizeof(my_rpcc_1_arg.req));
    my_rpcc_1_arg.len = strlen(my_rpcc_1_arg.req);

    result_1 = my_rpcc_1(&my_rpcc_1_arg, clnt);
    if (result_1 == (response *)NULL)
    {
        clnt_perror(clnt, "call failed");
        exit(1);    /* do not fall through and dereference a NULL result */
    }
    printf("status=[%d],type=[%d],response=[%s](%d)\n", result_1->status, result_1->mtype, result_1->resp, result_1->len);
#ifndef DEBUG
    clnt_destroy(clnt);
#endif /* DEBUG */
}

int main(int argc, char *argv[])
{
    char *host;

    if (argc < 2)
    {
        printf("usage: %s server_host\n", argv[0]);
        exit(1);
    }
    host = argv[1];
    my_rpc_prog_1(host);
    exit(0);
}

chat_srv.c implements just a simple echo:

/*
 * This is sample code generated by rpcgen.
 * These are only templates and you can use them
 * as a guideline for developing your own functions.
 */

#include "chat.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

response *
my_rpcc_1_svc(request *argp, struct svc_req *rqstp)
{
    static response result;

    /*
	 * insert server code here
	 */

    int len = argp->len;

    result.status = 0;
    result.mtype = argp->mtype;
    /* echo the request back; clamp the length and NUL-terminate the reply */
    if (len < 0 || len >= (int)sizeof(result.resp))
        len = sizeof(result.resp) - 1;
    memcpy(result.resp, argp->req, len);
    result.resp[len] = '\0';
    result.len = len;

    return &result;
}

6. More complex examples may be added later.
