How to check whether data is a string in Python

```python
# -*- coding: utf-8 -*-
L1 = ['Hello', 'World', 18, 'Apple', None]
L2 = [x for x in L1 if type(x) == type('x')]
print(L2)
```

Output: `['Hello', 'World', 'Apple']`; the non-string items are filtered out. The more idiomatic check is `isinstance(x, str)`.
XXL-JOB Study Notes (2)

xxl-job supports multiple routing strategies. A routing strategy determines, for a single job, which executor in its group runs it: when several executor instances with the same name are started at the same time, the group contains more than one executor. For example, once two processes are running under xxl-job-executor-sample, you configure the routing strategy for the jobs in that group. The available strategies are defined in com.xxl.job.admin.core.route.ExecutorRouteStrategyEnum:

```java
FIRST(I18nUtil.getString("jobconf_route_first"), new Executo...
```
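As a rough illustration of what the FIRST strategy does (a hypothetical sketch with made-up names, not xxl-job's actual code): given the registered executor addresses of a group, it always dispatches to the first one.

```java
import java.util.List;

// Hypothetical sketch of a FIRST-style router; the class and method names
// are illustrative, not xxl-job's real API.
public class FirstRouteSketch {
    // Always pick the first registered executor address in the group.
    static String route(List<String> addressList) {
        return addressList.get(0);
    }

    public static void main(String[] args) {
        List<String> group = List.of("192.168.0.1:9999", "192.168.0.2:9999");
        System.out.println(route(group)); // 192.168.0.1:9999
    }
}
```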
ThreadLocal and InheritableThreadLocal

ThreadLocal provides thread-local storage: it keeps a separate copy of the variable for each thread, and every thread accesses only its own copy. A ThreadLocal example:

```java
public class ThreadLocalTest {
    private static ThreadLocal<Integer> threadLocal = new ThreadLocal<>();

    public static void main(String[] args) {
        // Minimal completion of the truncated excerpt: this thread
        // reads back the value from its own copy.
        threadLocal.set(1);
        System.out.println(threadLocal.get()); // 1
    }
}
```
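The title also mentions InheritableThreadLocal, which the excerpt cuts off before reaching. A minimal sketch of the difference (my own example, not taken from the post): a value set in an InheritableThreadLocal by a parent thread is visible in threads the parent creates, while a plain ThreadLocal value is not.

```java
// Sketch: InheritableThreadLocal copies the parent thread's value into
// child threads when they are created; a plain ThreadLocal does not.
public class InheritableDemo {
    private static final ThreadLocal<String> plain = new ThreadLocal<>();
    private static final ThreadLocal<String> inheritable = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        plain.set("parent");
        inheritable.set("parent");
        Thread child = new Thread(() -> {
            System.out.println("plain:       " + plain.get());       // null
            System.out.println("inheritable: " + inheritable.get()); // parent
        });
        child.start();
        child.join();
    }
}
```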
XXL-JOB Study Notes (1)

Download the source from GitHub and first run the SQL scripts to create the tables. The project has three main parts: 1. the admin console, 2. the core library, 3. the executor samples. Run xxl-job-admin and open the console once it starts. Then run an executor sample (I used the Spring Boot demo): configure application.properties and start it.

```properties
# web port
server.port=8089
# no web
#spring.main.web-environment=false
# log config
logging.c...
```
Installing and using DataX

1. Prepare the packages: apache-maven-3.3.9-bin.tar.gz, datax.tar.gz, jdk-8u221-linux-x64.tar.gz, Python-3.7.1.tgz
2. Install the JDK, Maven, and Python:

```bash
tar -zxvf jdk-8u221-linux-x64.tar.gz -C ~/app/
tar -zxvf apache-maven-3.3.9-bin.tar.gz -C ~/app/
...
```
MySQL error "Illegal mix of collations (utf8_general_ci,IMPLICIT) and (utf8_german2_ci,IMPLICIT) for oper..."

1. Set the collation directly when creating the table: `ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_german2_ci`
2. Or change the default COLLATE:

```sql
mysql> SHOW VARIABLES LIKE 'collation_%';
+----------------------+-----------------+
| Variable_name        | Value           |
+----------------------+-----------------+
...
```
Switching RHEL 7.5 to the Aliyun yum repository

1. Force-remove all yum packages: `rpm -qa | grep yum | xargs rpm -e --nodeps`
2. Check that everything was removed: `rpm -qa | grep yum`
3. Download the required RPMs:

```bash
mkdir software
cd software
wget https://mirrors.aliyun.com/centos/7/os/x86_64/Packages/python-chardet-2.2.1-3.el7.noarch.rpm
wget https://mirrors.aliyun...
```
spark-sql fails with "No suitable driver found for jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=..."

Cause: `--driver-class-path` was not specified. Correct command:

```bash
spark-sql --master local[2] \
  --jars ~/software/mysql-connector-java-5.1.27-bin.jar \
  --driver-class-path ~/software/mysql-connector-java-5.1.27-bin.jar
```
Installing the JDK on Linux

1. Download jdk-8u221-linux-x64.tar.gz
2. Extract it: `tar -zxvf jdk-8u221-linux-x64.tar.gz -C ~/app/`
3. Configure the environment: `vi ~/.bash_profile` and add:

```bash
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_221
export PATH=$JAVA_HOME/bin:$PATH
```

4. Reload the profile: `source ~/.bash_profile`
5. Verify with the `java` and `javac` commands.
Uploading files from Windows to Linux

1. If the Windows command line supports scp, use scp:

```bash
scp <path/filename> <linux-user>@<hostname-or-ip>:<directory>
# example:
scp F:\BaiduNetdiskDownload\mysql-connector-java-5.1.27-bin.jar hadoop@hadoop001:~/app/apache-hive-2.3.7-bin/lib
```

2. Use lrzsz. Install the package first with `yum install lrzsz`; then running `rz` opens a dialog for uploading a local file, and `sz <filename>` ...
Setting up spark-sql

1. Install and configure the JDK, MySQL, and Scala
2. Install and configure Hadoop
3. Install and configure Hive
4. Install and configure Spark
5. Copy hive-site.xml from Hive's conf directory into Spark's conf directory:
   `cp hive-site.xml /home/hadoop/app/spark-2.4.6-bin-hadoop2.7/conf`
6. Start Spark with the MySQL driver jar specified to get a Scala prompt:
   `spark-shell --master local[2] --jars ~/software/my...`
spark.sql("show tables").show fails

Error message:

```
20/07/17 20:04:06 WARN Hive: Failed to access metastore. This class should not accessed in runtime.
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveM...
```
Configuring Scala development in a Spring Boot project

```xml
<!-- add the dependency -->
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
</dependency>

<!-- add the plugin -->
<plugin>
    <groupId>net.alchim31.maven</groupId>
    ...
</plugin>
```
scp upload fails with "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!"

When uploading a file from Windows to Linux with scp, I got:

```
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING...
```
IDEA fails when downloading a file from Hadoop: java.io.IOException: (null) entry in command string: null chmod 0644 D:\Users\Admi...

Full error:

```
java.io.IOException: (null) entry in command string: null chmod 0644 D:\Users\Administrator\Desktop\11.txt
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:770)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java...
```
Hadoop upload fails: org.apache.hadoop.ipc.RemoteException(java.io.IOException)

Upload error:

```
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /b.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
```

Checking the Hadoop status showed the hostname was Hostname: lo...