Summary of frequently used commands and notes:
Record keeping
Keep a README for anything built manually with docker commit (record the image IDs).
Example: install Anaconda3 in an openEuler Docker container, then export a new image (a sketch follows the list):
- Start a container from the openEuler image.
- Install Anaconda inside the container.
- Delete the installer so that, apart from the environment you need, the container is clean.
- docker commit old_container new_image:tag
- Check docker images for the new image.
- Start a container from the new image and verify that the Anaconda environment works.
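A minimal sketch of the workflow above (container name, image tag, and the Anaconda install path are hypothetical):
# start a container from the base image, then install Anaconda inside it
docker run -it --name anaconda-build openeuler/openeuler:22.03 bash
# inside the container: bash Anaconda3-*.sh, then rm -f Anaconda3-*.sh
# snapshot the container into a new image and verify it
docker commit anaconda-build my-openeuler-anaconda:1.0
docker images | grep my-openeuler-anaconda
docker run --rm -it my-openeuler-anaconda:1.0 /root/anaconda3/bin/conda --version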
Oracle command checks:
Check the listener status:
lsnrctl status
Start it if it is not running:
lsnrctl start
ps -ef | grep oracle
hive:
- Connect (used for imports):
beeline -u "jdbc:hive2://oe-ambari-10-1-66-20:2181,oe-ambari-10-1-66-21:2181,oe-ambari-10-1-66-22:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
rpm operations:
- Unpack an RPM package:
rpm2cpio bd-java-common-1.5.3-0.20240919220830.el7.x86_64.rpm | cpio -idmv
vim commands
- set paste -- paste while preserving the original formatting
prom password
admin
Pwd@Prom#01Admin
web-remote-code-ide can be used as a build environment:
- You need your own VM: https://share.bd.com/share/vagrant_box/openEuler-22.03-LTS/basebox/
http://daily-build-release.bd.com/daily_releases_new/apps/web-remote-code-ide/openEuler-22.03-LTS/202408252221/
KeyDB query commands
Remote connection:
redis-cli -h host -p port -a "passwd"
Supported data types:
string, list, set, hash, sorted set
Solr: delete all data
<delete><query>*:*</query></delete>
<commit/>
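A minimal sketch of posting that delete to the Solr update handler (host, port, and collection name are hypothetical):
curl "http://solr-host:8983/solr/my_collection/update?commit=true" -H "Content-Type: text/xml" -d "<delete><query>*:*</query></delete>"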
hugegraph
Initialization:
cd /opt/conf-rocksdb
chmod 777 data_dir1/*
cp crowgraph.properties hugegraph.properties spirit.properties /opt/hugegraph/conf
bash /opt/hugegraph/bin/init-store.sh
cp -r dev prod
/opt/conf-rocksdb
Create vertices and edges
http://daily-build-release.bd.com/daily_releases_new/apps/knowledge-graph/crow-1.4.x-dev-sh/202407220108/knowledgegraph-release-1.4.3-dev/docs/install/html/index/ch03.html
curl http://10.1.61.30:15003/knowledgegraph-service/api/schema/createHugeGraphSchema -X POST -H "Content-Type:application/json" -d '{"schemaId":"SYS_INIT_SCHEMA"}'
ZrPkndjD6AVB4BYkbc7gbdhOE2CkVpWr
curl http://10.1.61.30:15003/knowledgegraph-service/api/schema/createAllHugeGraphSchema -X POST
3eRbP1YU8nO8IQPvuxiy89w8025Dus51
curl http://10.1.61.30:15003/knowledgegraph-service/api/schema/queryHugeGraphSchemaStatus?taskId=TY2jT7vp6NO99EwZuIeO0IAqXAqd0LJ3
http://10.1.61.31:8384/graphs/crowgraph/schema/vertexlabels
http://10.1.61.31:8384/graphs/crowgraph/schema/edgelabels
http://10.1.61.31:8384/graphs/crowgraph/schema/propertykeys
http://10.1.61.31:8384/graphs/crowgraph/schema/indexlabels
Local Oracle client
yum install oracle-instantclient11.2-sqlplus
yum install oracle-instantclient11.2-basic
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib:$LD_LIBRARY_PATH
export PATH=/usr/lib/oracle/11.2/client64/bin:$PATH
export NLS_LANG='american_america.AL32UTF8'
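With the instant client installed, a connection looks roughly like this (user, password, host, and service name are hypothetical):
sqlplus myuser/mypass@//db-host:1521/ORCL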
Mailbox password change:
Username: denglong@bd.com
Initial password: denglong@bd.com@123456
Change it at: http://mail.bd.com/admin/user/password
Map plotting:
http://10.1.61.66:15003/map-client-demos/map-client
hbase
http://confluence.bd.com:8090/pages/viewpage.action?pageId=11567208
http://confluence.bd.com:8090/pages/viewpage.action?pageId=11567149
common-import:
10.1.66.21
Update it first
Create the table:
bash -x /opt/whale-common-convert/bin/whale_common_convert_importer.sh --schema-path=/opt/whale-common-convert/conf/workPlace/schema.sfts --config-path=/opt/whale-common-convert/conf/workPlace/config.converter --action=createSchema
# Send data
bash -x /opt/whale-common-convert/bin/whale_common_convert_importer.sh --input-path=/opt/whale-common-convert/conf/user.csv --schema-path=/opt/whale-common-convert/conf/schema.sfts --config-path=/opt/whale-common-convert/conf/config.converter
# Update the schema
bash -x /opt/whale-common-convert/bin/whale_common_convert_importer.sh --schema-path=/opt/whale-common-convert/conf
## crow-import
bash /opt/knowledge-graph-manager/graph-importer/bin/graph_importer.sh \
--meta=/opt/install/crow-import-test/extract_task_config.json --data=/opt/install/crow-import-test/test1.bcp --bad-path=/opt/install/crow-import-test/bad --stat-path=/opt/install/crow-import-test/stat
nagios
To cope with on-site vulnerability scans, the default nagios password has been changed to Pwd@Nagios#01Admin (only for projects that depend on the latest ansible-common). If you need to configure mntos, use the URL-encoded form Pwd%40Nagios%2301Admin.
Jira: check for issues that have no original estimate
https://jira.lzy.com/browse/WHAL-607?jql=
status not in (closed, Resolved) AND duedate <= endofweek() AND assignee in (currentUser()) AND originalEstimate is EMPTY ORDER BY originalEstimate DESC, remainingEstimate DESC, key ASC
uuid
facb62e7-82b7-4fb2-85b8-99c0b9ofdda4
Xu: e0cbb77e-487e-478e-9c6f-61f5e386cd8c
Kong: 0087e99d-5424-403b-94c6-7104406783e8
hive beeline:
Set the task queue name with --hiveconf tez.queue.name=titan
beeline -u "jdbc:hive2://oe-ambari-10-1-66-20:2181,oe-ambari-10-1-66-21:2181,oe-ambari-10-1-66-22:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" --hiveconf tez.queue.name=titan
Repair a Hive table (recover missing partitions):
msck repair table simba.trail_lage;
git: making a patch (a sketch follows below)
git reset --hard xxxxdasdasd
Commit locally first,
create the patch,
then reset --hard back to the last commit ID.
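A minimal sketch of that flow (the upstream ref and output paths are hypothetical):
git add -A && git commit -m "wip: local change"   # commit locally first
git format-patch -1 HEAD -o /tmp/patches/         # write the last commit out as a patch
git reset --hard origin/master                    # reset back to the last upstream commit
git apply --check /tmp/patches/0001-*.patch       # verify the patch applies cleanly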
Packaging:
./gradlew -Pprod assembleDist
./gradlew -Phw_mrs assembleDist
tar -zcvf whale-jingshu-hbase-importer-1.0.0-SNAPSHOT.tar.gz whale-jingshu-hbase-importer-1.0.0-SNAPSHOT/
md5sum.exe whale-jingshu-hbase-importer-1.0.0-SNAPSHOT.tar.gz >whale-jingshu-hbase-importer-1.0.0-SNAPSHOT.tar.gz.md5
md5sum.exe -c whale-jingshu-hbase-importer-1.0.0-SNAPSHOT.tar.gz.md5
certutil -hashfile xxx MD5
mvn com.coveo:fmt-maven-plugin:format
mvn clean install
mvn fmt:format
Decompress .gz:
gunzip -c "${input}"
gzip -d
JVM monitoring:
jmap -heap 101623
jstat -gcutil 101623
jstack -l 101623
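jstat can also sample continuously; a sketch using the PID above (1-second interval, 10 samples):
jstat -gcutil 101623 1000 10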
Linux: transfer a file to the local machine:
yum install lrzsz
sz <file>
CSV stats (count distinct values in the first column):
find . -name "*.csv" -type f | xargs cat | awk -F ',' '{print $1}' | sort | uniq | wc -l
Joiner usage
To guard against null values:
Joiner.on(",").useForNull("").join(manufacturer, device, model_name); --ok
VM: configure a dedicated IP
vim /etc/sysconfig/network-scripts/ifcfg-enp0s8
ONBOOT=yes
IPADDR=192.168.56.56
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no
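The interface has to be restarted before the new address takes effect; a sketch, assuming the legacy network service (use nmcli on NetworkManager-only systems):
systemctl restart network
# or restart just this interface:
ifdown enp0s8 && ifup enp0s8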
Start jingsu-c-master\preprocess
Spark parameters:
spark.shuffle.registration.maxAttempts 5
spark.shuffle.registration.timeout 50000
Start a Redis and map it to the local machine:
docker stop test-keydb-cluster
docker rm test-keydb-cluster
# The unit tests use EXPIREMEMBER, so this must be started with KeyDB
# The IP must be 127.0.0.1 so the PC, the VM, and the Docker container all see the same address and cluster (broadcast) commands work
docker run --name test-keydb-cluster -d \
-e IP=127.0.0.1 \
--publish 6380-6382:6380-6382 \
bd/docker-keydb-cluster
# or
docker start test-keydb-cluster
# Confirm it started
docker logs test-keydb-cluster -f
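A quick connectivity check (assumes a local redis-cli; KeyDB speaks the Redis protocol):
redis-cli -h 127.0.0.1 -p 6380 cluster info | head -3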
jingshu startup, run on the command line (port-forwarding tunnels):
D:\tools\putty\PLINK.EXE -L 6380:127.0.0.1:6380 -L 6381:127.0.0.1:6381 -L 6382:127.0.0.1:6382 root@127.0.0.1 -P 2222
D:\tools\putty\PLINK.EXE -L 6380:127.0.0.1:6380 -L 6381:127.0.0.1:6381 -L 6382:127.0.0.1:6382 root@192.168.56.56
D:\tools\putty\PLINK.EXE -L 6380:127.0.0.1:6380 -L 6381:127.0.0.1:6381 -L 6382:127.0.0.1:6382 root@192.168.56.100
netstat -nat
Check: --ok
TCP 127.0.0.1:6380 0.0.0.0:0 LISTENING InHost
TCP 127.0.0.1:6381 0.0.0.0:0 LISTENING InHost
TCP 127.0.0.1:6382
jdk-set
D:\Project\javaenv_17.bat
D:\Project\javaenv_8.bat
Python: downloading things on Linux:
export REQUESTS_CA_BUNDLE=/opt/LZY_ROOT.crt
tox -e linux_release
Test the Kafka connection
Commands:
List topics:
/opt/cuttlefish-restore/bin
/opt/cuttlefish-restore/bin/kaf topics -b kafka1:6667
Create a topic:
/opt/cuttlefish-restore/bin/kaf topic create kaf_test_1 -b kafka1:6667 --partitions 4 --replicas 1
Send data to a Kafka topic:
echo test | /opt/cuttlefish-restore/bin/kaf -b kafka1:6667 produce kaf_test_1
Consume the topic and inspect the data:
/opt/cuttlefish-restore/bin/kaf -b kafka1:6667 consume kaf_test_1
/opt/cuttlefish-restore/bin/kaf -b kafka1:6667 consume kaf_test_1 -g ttt
/opt/cuttlefish-restore/bin/kaf -b kafka1:6667 consume --output raw zeek | wc -l
find . -name "*.csv" | sort | uniq | wc -L
Convert a pcap file to JSON:
mergecap -w - ok_1.pcap | tshark -r - -o 'tls.keys_list:any,443,http,/opt/cuttlefish-restore-test/test_pkcs1.key' -T ek | python xxx -out xxx
Add a Python dependency: poetry add requests
squid quick start:
-
Create a repo (repo name: repo_taiwan_renkou; Solr collection name: test).
Call the API:
curl -X POST -H "Content-Type: multipart/form-data" \
--form "defaultBackendConfig=@./test.yaml" \
"http://10.1.65.52:15003/squidBackendService/api/v1/repo/createNewRepo?repoEnName=test_aaa&autoCreateCollection=true&fulltextIndexPrefix=test_aaa&fulltextSearchIndex=test_aaa_search_collection&fulltextRedirectIndex=test_aaa_redirect_collection&autoClean=true&dataSaveDays=100&routeName=compositeId"
Note: test.yaml can be based on 06sourcecode/backend-service/src/test/resources/com/lzy/squid/backendservice/repo_taiwan_renkou.yml
Import data into the repo:
Call the API:
curl -X POST -H "Content-Type: multipart/form-data" \
--form "updateFile=@./data.csv" \
--form "fulltextImportConfigFile=@./test.yaml" \
'http://10.1.65.52:15003/squidBackendService/api/v1/task/import/uploadFileIntoRepo?repoEnName=test_aaa&fulltextRedirectIndex=test_aaa_redirect_collection'
Note: test.yaml can be based on 06sourcecode/backend-service/src/test/resources/com/lzy/squid/backendservice/repo_taiwan_renkou.yml
data.csv is the data to import.
Use time curl to see how long a request takes:
time curl "http://localhost:8081/common-query-service/api/v1/quickTools/getMobileInfoBatch?phoneNumberList=13785910190"
for i in $(seq 1000);
do
xxxxx
done
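A sketch of timing each request in such a loop, reusing the URL above (-w "%{time_total}" is standard curl):
for i in $(seq 1000); do
  curl -s -o /dev/null -w "%{time_total}\n" "http://localhost:8081/common-query-service/api/v1/quickTools/getMobileInfoBatch?phoneNumberList=13785910190"
done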
shellcheck / shfmt
shfmt -w -d -l -i 2 -ci ambari_service_check.sh
shellcheck abc.sh
Suppress a shellcheck check (as a comment directive in the script):
# shellcheck disable=SC1091
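The directive goes on the line directly above the statement it silences; a sketch (the sourced path is hypothetical):
#!/usr/bin/env bash
# shellcheck disable=SC1091   # SC1091: shellcheck cannot follow a sourced file
source /etc/profile.d/custom_env.sh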
Shell log output
log() {
  # print the time
  time=$(date "+%D %T")
  echo "[${time}] $*"
}
if [ ! -e "${ROOT_DIR}"/logs/ ]; then
  log "${ROOT_DIR}/logs/ directory does not exist, creating..."
  mkdir -p "${ROOT_DIR}"/logs/
fi
touch "${ROOT_DIR}"/logs/backup_user_db.log
exec > >(tee --append "${ROOT_DIR}"/logs/backup_user_db.log) 2>&1
solr field.query
range:
'{"postcode": {"type": "range","field": "postcode_i_i_s_dv","start": "200","end": "10"}}'
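A sketch of sending a range facet through Solr's JSON Facet API (host, collection, and the bounds are hypothetical; note that the API also requires a gap):
curl "http://solr-host:8983/solr/my_collection/select" \
  --data-urlencode 'q=*:*' --data-urlencode 'rows=0' \
  --data-urlencode 'json.facet={"postcode":{"type":"range","field":"postcode_i_i_s_dv","start":0,"end":1000,"gap":100}}'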
k8s
k8s: check a DB's port
kubectl get svc | grep anole
anole-lzy-anole-db-tcp NodePort 10.43.44.183 5432:31933/TCP
# View the database password
kubectl edit secret anole-lzy-anole
data:
anole-data-db-password: SjkzMWJ2QXloNw==
anole-example-db-password: TmlSdUl0NjlaOA==
anole-user-db-password: Y0FkR3BhZEg5Tg==
echo -n Y0FkR3BhZEg5Tg==|base64 -d
NiRuIt69Z8
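The same lookup as a one-liner (secret and key names taken from the example above):
kubectl get secret anole-lzy-anole -o jsonpath='{.data.anole-user-db-password}' | base64 -d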
Download daily build files:
wget <file path>
View the guard log:
/root/lcm-guard/logs/lcm.guard.log
Run the script and write its output to a file, then follow it:
tail -f /tmp/log.log
Common Ansible commands
Add hosts:
ansible all -i host -m ping
ansible all -i host -m shell -a "mkdir -p /opt/install"
ansible all -i host -m copy -a "src=/opt/install/test_copy.xxx dest=/opt/install"
-
ansible-playbook -i environments/prod site.yml --tags "service" --start-at-task "dataflow-db : Create postgresql database"
-
View all variables (facts):
ansible all -m setup | grep "ansible_distribution"
Run the common-related automated deployment: --ok
ansible-playbook -i environments/prod site.yml --tags "common" -v
Result: dataflow.web.com : ok=110 changed=27 unreachable=0 failed=0 skipped=26 rescued=0 ignored=0
Confirm that every machine is reachable:
ansible -i environments/prod all -m ping
Run the service-related automated deployment:
ansible-playbook -i environments/prod site.yml --tags "service" -v
If something goes wrong, you can resume from the current position by adding a flag, e.g.:
ansible-playbook -i environments/prod site.yml --tags "service" --start-at-task "dataflow-db : Create postgresql database"
View the last task:
grep TASK ansible.log | tail -1 | sed -e "s/.*\[//" -e "s/].*$//"
-
List the tasks, find the one you need, and copy its name:
ansible-playbook -i environments/prod site.yml --list-tasks
Or copy the role and task name listed in TASK [] from the run log. -
Run from a specified task: --skip-tags skips the given tags; --tags limits the run to the given tags
ansible-playbook -i environments/prod site.yml --skip-tags "<tags to skip>" --tags "<tags to run>" -v --start-at-task "<starting task>" -
Example 1: run from the Install NetworkManager task in the common role
ansible-playbook -i environments/prod site.yml --skip-tags "bind" --tags "common" -v --start-at-task "common : Install NetworkManager"
Common Docker commands
docker ps | grep “apollo”
docker exec -it a08a61025dde bash
- docker-compose up -d
- yum install docker-ce --nogpgcheck
linux
Get the local IP:
addressIp=$(ifconfig | grep 'inet' | grep -v '127.0.0.1' | awk '{print $2}')
echo "addressIp = ${addressIp}"
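An alternative that avoids parsing ifconfig output (hostname -I lists the host's addresses; the first is usually the primary):
addressIp=$(hostname -I | awk '{print $1}')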
kubectl get pods | grep hawk
kubectl exec -it hawk-lzy-hawk-web-5b8f67f94c-zvkvw bash
Check whether a service exists:
systemctl is-active ambari-agent  # check whether the service is alive
systemctl list-unit-files --type=service | grep -q "ambari-agent.service" && exist="yes" || exist="not"
systemctl list-unit-files --type=service | grep -q "ambari-server.service" && exist="yes" || exist="not"
echo "ambari-server exists: ${exist}"
-
Extract the release package into /opt:
tar zxvf dataflow-release-2.1.1-dev.tar.gz -C /opt/
-
Copy:
cp /opt/dataflow-release-2.1.1-dev/*.tar /opt/dataflow-deploy/install/
-
Rename:
mv prod-template prod
-
systemctl reload nginx
-
netstat -nap | grep 15004  # check port 15004
-
ps aux | grep java | grep -i importer
-
tail -n <N> a.sh  # view the last N lines of a.sh
-
head -n <N> a.sh  # view the first N lines of a.sh
maven
- mvn install
- mvn
mvn install -DskipTests
ant
-
ant init
-
ant compile
-
ant common_compile
-
ant bac_release  # package bac
-
In general, run ant init first; after that you can compile just the module you are working on (projects with a web module can run before init).
lcm_guard
- lcm_guard
-h, --help     show this help message and exit
--kill KILLS   kill a monitored program; the next guard pass will restart it
--killall      kill all monitored programs and restart (default: False)
--list         list all monitored programs (default: False)
--reload       reload all xml from conf.d and generate the tmp file again (default: False)
- lcm_guard --l | grep bac
ssh
- ssh root@ip
- ssh-copy-id -i ~/.ssh/id_rsa.pub root@127.0.0.1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.56.1