O-RAN notes(10)---Bronze nonrtric/ricplt IoT using A1 for Policy Management(1)

Reference: https://wiki.o-ran-sc.org/display/GS/Running+A1+and+O1+Use+Case+Flows

There are two ways to manage policies: from nonrtric or from ricplt(aka, Near-RT RIC).

I will cover the following topics in this and subsequent posts:

(1) HOW-TO: connect nonrtric to ricplt(aka, Near-RT RIC)

(2) HOW-TO: manage Policy using nonrtric-controlpanel

(3) HOW-TO: manage Policy using Policy-Agent(aka, the policymanagementservice) 

(4) HOW-TO: manage Policy using A1-Controller

 

First of all, you should have SMO and RICPLT deployed on separate VMs:

(12:31 dabs@smobronze bin) > ip addr | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.43.172/24 brd 192.168.43.255 scope global dynamic noprefixroute ens33

smobronze is the SMO VM.

(13:16 dabs@ricpltbronze dep) > ip addr | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.43.210/24 brd 192.168.43.255 scope global dynamic noprefixroute ens33

ricpltbronze is the RICPLT VM, and its cluster IP is 192.168.43.210.

 

Let's begin with the first topic:

(1) HOW-TO: connect nonrtric to ricplt(aka, Near-RT RIC)

First we need to re-deploy nonrtric.

 

First step: update the example_recipe.yaml of nonrtric with the correct IP/port of the ricplt cluster:

(20:20 dabs@smobronze NONRTRIC) > pwd
/home/dabs/oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC

(20:20 dabs@smobronze NONRTRIC) > sudo vim example_recipe.yaml 
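In my copy of the Bronze recipe, the part that needs editing is the ric list under the policymanagementservice block. The exact key names may differ between it/dep versions, so treat the following as a sketch and match it against your own file; the essential change is pointing each RIC entry's base URL at the ricplt cluster IP (192.168.43.210) and the kong-proxy nodePort (32080):

policymanagementservice:
  # ...image/service settings left as-is...
  ric:
    - name: ric1
      # key may be named 'baseUrl' or 'link' depending on the recipe version
      baseUrl: http://192.168.43.210:32080/a1mediator
      managedElementIds:
        - kista_1
        - kista_2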

Note: Since we are going to access ricplt from outside its cluster, we will use the nodePort (=32080). As shown below, the kong-proxy service of ricplt has nodePort/port/targetPort all set to 32080.

(13:50 dabs@ricpltbronze dep) > sudo kubectl get service r4-infrastructure-kong-proxy -n ricplt -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-08-12T12:18:43Z"
  labels:
    app.kubernetes.io/instance: r4-infrastructure
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: "1.4"
    helm.sh/chart: kong-0.36.6
  name: r4-infrastructure-kong-proxy
  namespace: ricplt
  resourceVersion: "1809"
  selfLink: /api/v1/namespaces/ricplt/services/r4-infrastructure-kong-proxy
  uid: 755e92d3-9e61-4ea0-93fb-56b7c4911ce8
spec:
  clusterIP: 10.108.26.221
  externalTrafficPolicy: Cluster
  ports:
  - name: kong-proxy
    nodePort: 32080
    port: 32080
    protocol: TCP
    targetPort: 32080
  - name: kong-proxy-tls
    nodePort: 32443
    port: 32443
    protocol: TCP
    targetPort: 32443
  selector:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: r4-infrastructure
    app.kubernetes.io/name: kong
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
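
Before re-deploying, a quick sanity check that the A1 Mediator is reachable from the SMO VM through kong does not hurt. Assuming the default ingress path /a1mediator on the ricplt side (the same prefix used in the recipe sketch above), something like this should return HTTP 200 when run from the SMO VM:

curl -v http://192.168.43.210:32080/a1mediator/a1-p/healthcheck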

 

Next step: undeploy and re-deploy nonrtric:

(20:20 dabs@smobronze bin) > pwd
/home/dabs/oran/dep/smo/bin/smo-deploy/smo-dep/bin

(20:27 dabs@smobronze bin) > sudo ./undeploy-nonrtric && sudo ./deploy-nonrtric -f ../RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml

Note: I suggest removing the 'stable' helm repository to speed up the deployment.

(20:27 dabs@smobronze bin) > sudo helm repo list
NAME  	URL                                             
stable	https://kubernetes-charts.storage.googleapis.com
local 	http://127.0.0.1:8879/charts                    
(20:28 dabs@smobronze bin) > sudo helm repo remove stable
"stable" has been removed from your repositories
(20:28 dabs@smobronze bin) > sudo helm repo list
NAME 	URL                         
local	http://127.0.0.1:8879/charts

 

Wait until all Pods in ns nonrtric are Running. Below are all the k8s services in ns nonrtric:

(13:35 dabs@smobronze bin) > sudo kubectl get service -n nonrtric
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
a1-sim                    ClusterIP   None             <none>        8085/TCP,8185/TCP               3h19m
a1controller              ClusterIP   10.108.251.201   <none>        8282/TCP,8383/TCP               3h19m
controlpanel              NodePort    10.97.215.21     <none>        8080:30091/TCP,8081:30092/TCP   3h19m
dbhost                    ClusterIP   10.105.39.221    <none>        3306/TCP                        3h19m
policymanagementservice   NodePort    10.103.125.57    <none>        9080:30093/TCP,9081:30094/TCP   3h19m
sdnctldb01                ClusterIP   10.103.47.243    <none>        3306/TCP                        3h19m

As we can see, the policymanagementservice(aka, the Policy-Agent) has port=9080/nodePort=30093 for HTTP, and its ClusterIP is 10.103.125.57.

 

Let's check whether nonrtric can connect to ricplt:

(13:35 dabs@smobronze bin) > curl -X GET http://10.103.125.57:9080/rics
[{"ricName":"ric1","managedElementIds":["kista_1","kista_2"],"policyTypes":["20008"],"state":"AVAILABLE"}]

We have ric1 with state 'AVAILABLE', and it has policy type 20008 defined.
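
To see what policy type 20008 actually contains, the Policy-Agent also exposes the type list and the JSON schemas it pulled from the RIC. The endpoint names below are from the Bronze Policy-Agent v1 API as I used it; adjust if your version differs:

curl -X GET "http://10.103.125.57:9080/policy_types?ric=ric1"
curl -X GET "http://10.103.125.57:9080/policy_schemas?ric=ric1"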

 

Let's continue with the next topic.

(2) HOW-TO: manage Policy using nonrtric-controlpanel 

Check the info of service controlpanel in ns nonrtric:

(13:56 dabs@smobronze bin) > sudo kubectl get service controlpanel -n nonrtric -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-08-13T02:16:01Z"
  labels:
    app: nonrtric-controlpanel
    chart: controlpanel-2.0.0
    heritage: Tiller
    release: r2-dev-nonrtric
  name: controlpanel
  namespace: nonrtric
  resourceVersion: "141471"
  selfLink: /api/v1/namespaces/nonrtric/services/controlpanel
  uid: 4129523f-ba47-4b87-bb8a-6ed8c638378d
spec:
  clusterIP: 10.97.215.21
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30091
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30092
    port: 8081
    protocol: TCP
    targetPort: 8082
  selector:
    app: nonrtric-controlpanel
    release: r2-dev-nonrtric
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

The controlpanel service has port=8080/nodePort=30091 with targetPort=8080 for HTTP.
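
Since controlpanel is a NodePort service, the GUI should in principle also be reachable directly at http://192.168.43.172:30091 from outside the cluster; below I use kubectl port-forward instead.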

 

Set up port-forwarding for the controlpanel Pod:

(14:37 dabs@smobronze bin) > sudo kubectl port-forward controlpanel-8bd4748f4-j55sf 8088:8080 -n nonrtric
[sudo] password for dabs: 
Forwarding from 127.0.0.1:8088 -> 8080
Forwarding from [::1]:8088 -> 8080

[TODO]: The policymanagementservice Pod keeps crashing because its liveness/readiness probes on http://10.244.0.236:8081/status fail. I haven't figured out the root cause yet.

(14:49 dabs@smobronze bin) > sudo kubectl get pod -n nonrtric -o wide | grep policymanagementservice
NAME                                       READY   STATUS             RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
policymanagementservice-5ffc4cf5f6-wvrqq   0/1     CrashLoopBackOff   21         4h34m   10.244.0.236   smobronze   <none>           <none>

(14:57 dabs@smobronze bin) > sudo kubectl describe pod policymanagementservice-5ffc4cf5f6-wvrqq -n nonrtric
<----omitted----->
Events:
  Type     Reason     Age                     From                Message
  ----     ------     ----                    ----                -------
  Warning  Unhealthy  39m (x3 over 40m)       kubelet, smobronze  Readiness probe failed: dial tcp 10.244.0.236:8081: i/o timeout
  Normal   Killing    38m (x12 over 4h41m)    kubelet, smobronze  Container container-policymanagementservice failed liveness probe, will be restarted
  Normal   Created    38m (x13 over 4h42m)    kubelet, smobronze  Created container container-policymanagementservice
  Warning  Unhealthy  37m (x31 over 4h41m)    kubelet, smobronze  Liveness probe failed: Get http://10.244.0.236:8081/status: dial tcp 10.244.0.236:8081: connect: connection refused
  Warning  Unhealthy  37m (x34 over 4h41m)    kubelet, smobronze  Readiness probe failed: dial tcp 10.244.0.236:8081: connect: connection refused
  Normal   Started    34m (x16 over 4h42m)    kubelet, smobronze  Started container container-policymanagementservice
  Normal   Pulled     9m16s (x22 over 4h42m)  kubelet, smobronze  Container image "nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-policy-agent:2.0.0" already present on machine
  Warning  Unhealthy  68s (x35 over 4h24m)    kubelet, smobronze  Liveness probe failed: Get http://10.244.0.236:8081/status: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
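
A reasonable next debugging step (not done here yet) would be to pull the log of the previously crashed container, e.g.:

sudo kubectl logs policymanagementservice-5ffc4cf5f6-wvrqq -n nonrtric --previous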

 

Open your web browser and go to http://localhost:8088, which is the frontend of nonrtric-controlpanel.

You can add a policy instance by clicking the PLUS symbol on the right. Click 'Submit' to deploy it.
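
What the GUI does behind the scenes is a PUT towards the Policy-Agent (see the controlpanel log further below). Roughly the same thing can be done by hand against the Policy-Agent's HTTP port; the instance id here is just one I made up for illustration:

curl -X PUT --header "Content-Type: application/json" --data '{"threshold": 10}' "http://10.103.125.57:9080/policy?type=20008&id=my-test-policy-1&ric=ric1&service=controlpanel"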

 

Now we have a new policy instance.

Here is the related log from the controlpanel Pod in nonrtric:

(14:32 dabs@smobronze bin) > sudo kubectl logs -f controlpanel-8bd4748f4-j55sf -n nonrtric
2020-08-13T06:31:10.323Z [http-nio-8080-exec-1] DEBUG o.o.p.n.c.c.PolicyController - putPolicyInstance ric: ric1, typeId: 20008, instanceId: 7a4fa685-7af7-437d-8d8c-f84c5da137de, instance: {
  "threshold": 10
}
2020-08-13T06:31:10.324Z [http-nio-8080-exec-1] DEBUG o.o.p.n.c.util.AsyncRestClient - 27 PUT uri = 'https://policymanagementservice:9081/policy?type=20008&id=7a4fa685-7af7-437d-8d8c-f84c5da137de&ric=ric1&service=controlpanel''

Here is the related log from the a1mediator Pod in ricplt:

(14:34 dabs@ricpltbronze ~) > sudo kubectl logs -f deployment-ricplt-a1mediator-66fcf76c66-9cwr5 -n ricplt
::ffff:10.244.0.37 - - [2020-08-13 06:31:12] "PUT /a1-p/policytypes/20008/policies/7a4fa685-7af7-437d-8d8c-f84c5da137de HTTP/1.1" 202 116 0.009829
::ffff:10.244.0.1 - - [2020-08-13 06:31:13] "GET /a1-p/healthcheck HTTP/1.1" 200 110 0.002360
{"ts": 1597300273332, "crit": "DEBUG", "id": "a1.a1rmr", "mdc": {}, "msg": "_send_msg: sending: {'payload': b'{\"operation\": \"CREATE\", \"policy_type_id\": 20008, \"policy_instance_id\": \"7a4fa685-7af7-437d-8d8c-f84c5da137de\", \"payload\": {\"threshold\": 10}}', 'payload length': 140, 'message type': 20010, 'subscription id': 20008, 'transaction id': b'956b2596dd2e11eab4bc4a1bb6cf4f67', 'message state': 0, 'message status': 'RMR_OK', 'payload max size': 4096, 'meid': b'', 'message source': 'service-ricplt-a1mediator-rmr.ricplt:4562', 'errno': 0}"}
{"ts": 1597300273334, "crit": "DEBUG", "id": "a1.a1rmr", "mdc": {}, "msg": "_send_msg: result message state: 0"}

Here is the policy query result from the policymanagementservice(aka, the Policy-Agent):

(15:19 dabs@smobronze bin) > curl -X GET --header "Content-Type: application/json" "http://10.103.125.57:9080/policies"
[{"id":"73216514-d4c4-4c1f-96a1-e255840204c8","type":"20008","ric":"ric1","json":{"threshold":10.0},"service":"controlpanel","lastModified":"2020-08-13T07:21:57.257059Z"}]

(to be continued) 
