
Hive in Depth: Parameter and Variable Settings

Jeremy_Lee123 · 2019-10-05

The full list of Hive configuration properties: https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties

When developing Hive applications, you inevitably need to set Hive parameters, either to tune how HQL executes or to help track down problems. A question that comes up constantly in practice, however, is: why did the parameter I set not take effect? The answer is almost always that it was set in the wrong way.

1. How to set Hive parameters

1.1 Configuration files

Hive reads two configuration files:

  • User-defined configuration: $HIVE_CONF_DIR/hive-site.xml
  • Default configuration: $HIVE_CONF_DIR/hive-default.xml

User-defined settings override the defaults. In addition, because Hive starts up as a Hadoop client, it also loads the Hadoop configuration, and Hive's settings override Hadoop's. Settings made in the configuration files apply to every Hive process started on this machine.
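For reference, a user-defined property in hive-site.xml is declared as follows (the property and value shown here are only an illustration; persist whichever settings you want applied to all sessions):

  <configuration>
    <property>
      <!-- print column headers for query results in the CLI -->
      <name>hive.cli.print.header</name>
      <value>true</value>
    </property>
  </configuration>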

1.2 Command-line options

When starting Hive (the CLI or server mode), you can set parameters on the command line with --hiveconf param=value.

For example: bin/hive --hiveconf hive.root.logger=INFO,console

A common one: hive --hiveconf hive.cli.print.header=true (to check a value afterwards, run set followed by the property name)

Settings made this way apply to the session started by that command (for server mode, to all sessions served by that server).
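A quick sketch of what this looks like in practice (the paths and values are illustrative):

  # start the CLI with per-session overrides
  bin/hive --hiveconf hive.cli.print.header=true --hiveconf hive.root.logger=INFO,console

  # inside the session, confirm which value is actually in effect
  hive> set hive.cli.print.header;
  hive.cli.print.header=true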

1.3 Parameter declarations (SET)

Parameters can also be set inside HQL with the SET keyword.

For example: set mapred.reduce.tasks=100;

Settings declared this way are likewise scoped to the current session.
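For instance, within a session (the reducer count of 100 is only an example):

  hive> set mapred.reduce.tasks=100;   -- takes effect for this session only
  hive> set mapred.reduce.tasks;       -- print the value currently in effect
  mapred.reduce.tasks=100
  hive> set;                           -- dump all parameters and variables (see the appendix below)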

Summary:

The three methods above are listed in order of increasing precedence: a SET declaration overrides a command-line option, and a command-line option overrides the configuration files. Note that certain system-level parameters, such as the log4j settings, must be set with the first two methods, because they are read before the session is established.

  1. Hive also loads an initialization file, ~/.hiverc. If it does not exist, simply create it and put the settings you need into it; Hive executes this file every time it starts (a sample file is sketched below).
  2. To review the commands you have run in past sessions: cat ~/.hivehistory
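A minimal ~/.hiverc might look like this (the specific properties are just examples; use whichever settings you want applied to every session):

  -- ~/.hiverc: executed automatically each time the Hive CLI starts
  set hive.cli.print.header=true;
  set hive.cli.print.current.db=true;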

2. How to set Hive variables

  • Parameters and variables in Hive are all organized under a namespace (hiveconf, hivevar, system, env).
  • They are referenced with the ${} syntax; variables in the system and env namespaces must be referenced with their namespace prefix.

Note: to define a variable for a single session, use hive -d val=1, hive --define val=1, or hive --hivevar val=1 (a sketch follows below).
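A sketch of how a command-line variable is then referenced in HQL (the table psn is taken from the query shown in the appendix; the variable val is only an illustration):

  # define a variable in the hivevar namespace and reference it in the query
  hive --hivevar val=1 -e 'select * from psn where id = ${hivevar:val}'

  # system and env variables are referenced with their namespace prefix
  hive> select '${system:user.name}', '${env:HOME}' from psn limit 1;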

Appendix: all Hive-related parameters (the output of hive> set; shown below)

 
  1. hive> set;

  2. _hive.hdfs.session.path=/tmp/hive/root/9a85e67f-c351-4f98-8589-7c1b2b7d17dd

  3. _hive.local.session.path=/tmp/root/9a85e67f-c351-4f98-8589-7c1b2b7d17dd

  4. _hive.tmp_table_space=/tmp/hive/root/9a85e67f-c351-4f98-8589-7c1b2b7d17dd/_tmp_space.db

  5. datanucleus.autoCreateSchema=true

  6. datanucleus.autoStartMechanismMode=checked

  7. datanucleus.cache.level2=false

  8. datanucleus.cache.level2.type=none

  9. datanucleus.connectionPoolingType=BONECP

  10. datanucleus.fixedDatastore=false

  11. datanucleus.identifierFactory=datanucleus1

  12. datanucleus.plugin.pluginRegistryBundleCheck=LOG

  13. datanucleus.rdbms.useLegacyNativeValueStrategy=true

  14. datanucleus.storeManagerType=rdbms

  15. datanucleus.transactionIsolation=read-committed

  16. datanucleus.validateColumns=false

  17. datanucleus.validateConstraints=false

  18. datanucleus.validateTables=false

  19. dfs.block.access.key.update.interval=600

  20. dfs.block.access.token.enable=false

  21. dfs.block.access.token.lifetime=600

  22. dfs.blockreport.initialDelay=0

  23. dfs.blockreport.intervalMsec=21600000

  24. dfs.blockreport.split.threshold=1000000

  25. dfs.blocksize=134217728

  26. dfs.bytes-per-checksum=512

  27. dfs.cachereport.intervalMsec=10000

  28. dfs.client-write-packet-size=65536

  29. dfs.client.block.write.replace-datanode-on-failure.best-effort=false

  30. dfs.client.block.write.replace-datanode-on-failure.enable=true

  31. dfs.client.block.write.replace-datanode-on-failure.policy=DEFAULT

  32. dfs.client.block.write.retries=3

  33. dfs.client.cached.conn.retry=3

  34. dfs.client.context=default

  35. dfs.client.datanode-restart.timeout=30

  36. dfs.client.domain.socket.data.traffic=false

  37. dfs.client.failover.connection.retries=0

  38. dfs.client.failover.connection.retries.on.timeouts=0

  39. dfs.client.failover.max.attempts=15

  40. dfs.client.failover.sleep.base.millis=500

  41. dfs.client.failover.sleep.max.millis=15000

  42. dfs.client.file-block-storage-locations.num-threads=10

  43. dfs.client.file-block-storage-locations.timeout.millis=1000

  44. dfs.client.https.keystore.resource=ssl-client.xml

  45. dfs.client.https.need-auth=false

  46. dfs.client.mmap.cache.size=256

  47. dfs.client.mmap.cache.timeout.ms=3600000

  48. dfs.client.mmap.enabled=true

  49. dfs.client.mmap.retry.timeout.ms=300000

  50. dfs.client.read.shortcircuit=false

  51. dfs.client.read.shortcircuit.skip.checksum=false

  52. dfs.client.read.shortcircuit.streams.cache.expiry.ms=300000

  53. dfs.client.read.shortcircuit.streams.cache.size=256

  54. dfs.client.short.circuit.replica.stale.threshold.ms=1800000

  55. dfs.client.slow.io.warning.threshold.ms=30000

  56. dfs.client.use.datanode.hostname=false

  57. dfs.client.use.legacy.blockreader.local=false

  58. dfs.client.write.exclude.nodes.cache.expiry.interval.millis=600000

  59. dfs.datanode.address=0.0.0.0:50010

  60. dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction=0.75f

  61. dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold=10737418240

  62. dfs.datanode.balance.bandwidthPerSec=1048576

  63. dfs.datanode.block.id.layout.upgrade.threads=12

  64. dfs.datanode.bp-ready.timeout=20

  65. dfs.datanode.cache.revocation.polling.ms=500

  66. dfs.datanode.cache.revocation.timeout.ms=900000

  67. dfs.datanode.data.dir=/opt/lxk/hadoop/hdfs/data

  68. dfs.datanode.data.dir.perm=700

  69. dfs.datanode.directoryscan.interval=21600

  70. dfs.datanode.directoryscan.threads=1

  71. dfs.datanode.dns.interface=default

  72. dfs.datanode.dns.nameserver=default

  73. dfs.datanode.drop.cache.behind.reads=false

  74. dfs.datanode.drop.cache.behind.writes=false

  75. dfs.datanode.du.reserved=0

  76. dfs.datanode.failed.volumes.tolerated=0

  77. dfs.datanode.fsdatasetcache.max.threads.per.volume=4

  78. dfs.datanode.handler.count=10

  79. dfs.datanode.hdfs-blocks-metadata.enabled=false

  80. dfs.datanode.http.address=0.0.0.0:50075

  81. dfs.datanode.https.address=0.0.0.0:50475

  82. dfs.datanode.ipc.address=0.0.0.0:50020

  83. dfs.datanode.max.locked.memory=0

  84. dfs.datanode.max.transfer.threads=4096

  85. dfs.datanode.readahead.bytes=4193404

  86. dfs.datanode.shared.file.descriptor.paths=/dev/shm,/tmp

  87. dfs.datanode.slow.io.warning.threshold.ms=300

  88. dfs.datanode.sync.behind.writes=false

  89. dfs.datanode.use.datanode.hostname=false

  90. dfs.default.chunk.view.size=32768

  91. dfs.encrypt.data.transfer=false

  92. dfs.encrypt.data.transfer.cipher.key.bitlength=128

  93. dfs.ha.automatic-failover.enabled=false

  94. dfs.ha.log-roll.period=120

  95. dfs.ha.tail-edits.period=60

  96. dfs.heartbeat.interval=3

  97. dfs.http.policy=HTTP_ONLY

  98. dfs.https.enable=false

  99. dfs.https.server.keystore.resource=ssl-server.xml

  100. dfs.image.compress=false

  101. dfs.image.compression.codec=org.apache.hadoop.io.compress.DefaultCodec

  102. dfs.image.transfer.bandwidthPerSec=0

  103. dfs.image.transfer.chunksize=65536

  104. dfs.image.transfer.timeout=60000

  105. dfs.journalnode.http-address=0.0.0.0:8480

  106. dfs.journalnode.https-address=0.0.0.0:8481

  107. dfs.journalnode.rpc-address=0.0.0.0:8485

  108. dfs.namenode.accesstime.precision=3600000

  109. dfs.namenode.acls.enabled=false

  110. dfs.namenode.audit.loggers=default

  111. dfs.namenode.avoid.read.stale.datanode=false

  112. dfs.namenode.avoid.write.stale.datanode=false

  113. dfs.namenode.backup.address=0.0.0.0:50100

  114. dfs.namenode.backup.http-address=0.0.0.0:50105

  115. dfs.namenode.checkpoint.check.period=60

  116. dfs.namenode.checkpoint.dir=file://${hadoop.tmp.dir}/dfs/namesecondary

  117. dfs.namenode.checkpoint.edits.dir=${dfs.namenode.checkpoint.dir}

  118. dfs.namenode.checkpoint.max-retries=3

  119. dfs.namenode.checkpoint.period=3600

  120. dfs.namenode.checkpoint.txns=1000000

  121. dfs.namenode.datanode.registration.ip-hostname-check=true

  122. dfs.namenode.decommission.interval=30

  123. dfs.namenode.decommission.nodes.per.interval=5

  124. dfs.namenode.delegation.key.update-interval=86400000

  125. dfs.namenode.delegation.token.max-lifetime=604800000

  126. dfs.namenode.delegation.token.renew-interval=86400000

  127. dfs.namenode.edit.log.autoroll.check.interval.ms=300000

  128. dfs.namenode.edit.log.autoroll.multiplier.threshold=2.0

  129. dfs.namenode.edits.dir=${dfs.namenode.name.dir}

  130. dfs.namenode.edits.journal-plugin.qjournal=org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager

  131. dfs.namenode.edits.noeditlogchannelflush=false

  132. dfs.namenode.enable.retrycache=true

  133. dfs.namenode.fs-limits.max-blocks-per-file=1048576

  134. dfs.namenode.fs-limits.max-component-length=255

  135. dfs.namenode.fs-limits.max-directory-items=1048576

  136. dfs.namenode.fs-limits.max-xattr-size=16384

  137. dfs.namenode.fs-limits.max-xattrs-per-inode=32

  138. dfs.namenode.fs-limits.min-block-size=1048576

  139. dfs.namenode.handler.count=10

  140. dfs.namenode.http-address=0.0.0.0:50070

  141. dfs.namenode.https-address=0.0.0.0:50470

  142. dfs.namenode.inotify.max.events.per.rpc=1000

  143. dfs.namenode.invalidate.work.pct.per.iteration=0.32f

  144. dfs.namenode.kerberos.internal.spnego.principal=${dfs.web.authentication.kerberos.principal}

  145. dfs.namenode.lazypersist.file.scrub.interval.sec=300

  146. dfs.namenode.list.cache.directives.num.responses=100

  147. dfs.namenode.list.cache.pools.num.responses=100

  148. dfs.namenode.list.encryption.zones.num.responses=100

  149. dfs.namenode.logging.level=info

  150. dfs.namenode.max.extra.edits.segments.retained=10000

  151. dfs.namenode.max.objects=0

  152. dfs.namenode.name.dir=/opt/lxk/hadoop/hdfs/name

  153. dfs.namenode.name.dir.restore=false

  154. dfs.namenode.num.checkpoints.retained=2

  155. dfs.namenode.num.extra.edits.retained=1000000

  156. dfs.namenode.path.based.cache.block.map.allocation.percent=0.25

  157. dfs.namenode.path.based.cache.refresh.interval.ms=30000

  158. dfs.namenode.path.based.cache.retry.interval.ms=30000

  159. dfs.namenode.reject-unresolved-dn-topology-mapping=false

  160. dfs.namenode.replication.considerLoad=true

  161. dfs.namenode.replication.interval=3

  162. dfs.namenode.replication.min=1

  163. dfs.namenode.replication.work.multiplier.per.iteration=2

  164. dfs.namenode.resource.check.interval=5000

  165. dfs.namenode.resource.checked.volumes.minimum=1

  166. dfs.namenode.resource.du.reserved=104857600

  167. dfs.namenode.retrycache.expirytime.millis=600000

  168. dfs.namenode.retrycache.heap.percent=0.03f

  169. dfs.namenode.safemode.extension=30000

  170. dfs.namenode.safemode.min.datanodes=0

  171. dfs.namenode.safemode.threshold-pct=0.999f

  172. dfs.namenode.secondary.http-address=node04:50090

  173. dfs.namenode.secondary.https-address=0.0.0.0:50091

  174. dfs.namenode.stale.datanode.interval=30000

  175. dfs.namenode.startup.delay.block.deletion.sec=0

  176. dfs.namenode.support.allow.format=true

  177. dfs.namenode.write.stale.datanode.ratio=0.5f

  178. dfs.namenode.xattrs.enabled=true

  179. dfs.permissions.enabled=true

  180. dfs.permissions.superusergroup=supergroup

  181. dfs.replication=2

  182. dfs.replication.max=512

  183. dfs.secondary.namenode.kerberos.internal.spnego.principal=${dfs.web.authentication.kerberos.principal}

  184. dfs.short.circuit.shared.memory.watcher.interrupt.check.ms=60000

  185. dfs.storage.policy.enabled=true

  186. dfs.stream-buffer-size=4096

  187. dfs.support.append=true

  188. dfs.user.home.dir.prefix=/user

  189. dfs.webhdfs.enabled=true

  190. dfs.webhdfs.user.provider.user.pattern=^[A-Za-z_][A-Za-z0-9._-]*[$]?$

  191. fs.har.impl=org.apache.hadoop.hive.shims.HiveHarFileSystem

  192. hadoop.bin.path=/opt/lxk/hadoop-2.6.5/bin/hadoop

  193. hadoop.fuse.connection.timeout=300

  194. hadoop.fuse.timer.period=5

  195. hadoop.hdfs.configuration.version=1

  196. hive.analyze.stmt.collect.partlevel.stats=true

  197. hive.archive.enabled=false

  198. hive.auto.convert.join=true

  199. hive.auto.convert.join.noconditionaltask=true

  200. hive.auto.convert.join.noconditionaltask.size=10000000

  201. hive.auto.convert.join.use.nonstaged=false

  202. hive.auto.convert.sortmerge.join=false

  203. hive.auto.convert.sortmerge.join.bigtable.selection.policy=org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ

  204. hive.auto.convert.sortmerge.join.to.mapjoin=false

  205. hive.auto.progress.timeout=0s

  206. hive.autogen.columnalias.prefix.includefuncname=false

  207. hive.autogen.columnalias.prefix.label=_c

  208. hive.binary.record.max.length=1000

  209. hive.cache.expr.evaluation=true

  210. hive.cbo.costmodel.cpu=0.000001

  211. hive.cbo.costmodel.extended=false

  212. hive.cbo.costmodel.hdfs.read=1.5

  213. hive.cbo.costmodel.hdfs.write=10.0

  214. hive.cbo.costmodel.local.fs.read=4.0

  215. hive.cbo.costmodel.local.fs.write=4.0

  216. hive.cbo.costmodel.network=150.0

  217. hive.cbo.enable=true

  218. hive.cbo.returnpath.hiveop=false

  219. hive.cli.errors.ignore=false

  220. hive.cli.pretty.output.num.cols=-1

  221. hive.cli.print.current.db=false

  222. hive.cli.print.header=true

  223. hive.cli.prompt=hive

  224. hive.cluster.delegation.token.store.class=org.apache.hadoop.hive.thrift.MemoryTokenStore

  225. hive.cluster.delegation.token.store.zookeeper.znode=/hivedelegation

  226. hive.compactor.abortedtxn.threshold=1000

  227. hive.compactor.check.interval=300s

  228. hive.compactor.cleaner.run.interval=5000ms

  229. hive.compactor.delta.num.threshold=10

  230. hive.compactor.delta.pct.threshold=0.1

  231. hive.compactor.initiator.on=false

  232. hive.compactor.worker.threads=0

  233. hive.compactor.worker.timeout=86400s

  234. hive.compat=0.12

  235. hive.compute.query.using.stats=false

  236. hive.compute.splits.in.am=true

  237. hive.conf.restricted.list=hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role

  238. hive.conf.validation=true

  239. hive.convert.join.bucket.mapjoin.tez=false

  240. hive.counters.group.name=HIVE

  241. hive.debug.localtask=false

  242. hive.decode.partition.name=false

  243. hive.default.fileformat=TextFile

  244. hive.default.fileformat.managed=none

  245. hive.default.rcfile.serde=org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe

  246. hive.default.serde=org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  247. hive.display.partition.cols.separately=true

  248. hive.downloaded.resources.dir=/tmp/${hive.session.id}_resources

  249. hive.enforce.bucketing=false

  250. hive.enforce.bucketmapjoin=false

  251. hive.enforce.sorting=false

  252. hive.enforce.sortmergebucketmapjoin=false

  253. hive.entity.capture.transform=false

  254. hive.entity.separator=@

  255. hive.error.on.empty.partition=false

  256. hive.exec.check.crossproducts=true

  257. hive.exec.compress.intermediate=false

  258. hive.exec.compress.output=false

  259. hive.exec.concatenate.check.index=true

  260. hive.exec.copyfile.maxsize=33554432

  261. hive.exec.counters.pull.interval=1000

  262. hive.exec.default.partition.name=__HIVE_DEFAULT_PARTITION__

  263. hive.exec.drop.ignorenonexistent=true

  264. hive.exec.dynamic.partition=true

  265. hive.exec.dynamic.partition.mode=strict

  266. hive.exec.infer.bucket.sort=false

  267. hive.exec.infer.bucket.sort.num.buckets.power.two=false

  268. hive.exec.job.debug.capture.stacktraces=true

  269. hive.exec.job.debug.timeout=30000

  270. hive.exec.local.scratchdir=/tmp/root

  271. hive.exec.max.created.files=100000

  272. hive.exec.max.dynamic.partitions=1000

  273. hive.exec.max.dynamic.partitions.pernode=100

  274. hive.exec.mode.local.auto=false

  275. hive.exec.mode.local.auto.input.files.max=4

  276. hive.exec.mode.local.auto.inputbytes.max=134217728

  277. hive.exec.orc.block.padding.tolerance=0.05

  278. hive.exec.orc.compression.strategy=SPEED

  279. hive.exec.orc.default.block.padding=true

  280. hive.exec.orc.default.block.size=268435456

  281. hive.exec.orc.default.buffer.size=262144

  282. hive.exec.orc.default.compress=ZLIB

  283. hive.exec.orc.default.row.index.stride=10000

  284. hive.exec.orc.default.stripe.size=67108864

  285. hive.exec.orc.dictionary.key.size.threshold=0.8

  286. hive.exec.orc.encoding.strategy=SPEED

  287. hive.exec.orc.memory.pool=0.5

  288. hive.exec.orc.skip.corrupt.data=false

  289. hive.exec.orc.split.strategy=HYBRID

  290. hive.exec.orc.zerocopy=false

  291. hive.exec.parallel=false

  292. hive.exec.parallel.thread.number=8

  293. hive.exec.perf.logger=org.apache.hadoop.hive.ql.log.PerfLogger

  294. hive.exec.rcfile.use.explicit.header=true

  295. hive.exec.rcfile.use.sync.cache=true

  296. hive.exec.reducers.bytes.per.reducer=256000000

  297. hive.exec.reducers.max=1009

  298. hive.exec.rowoffset=false

  299. hive.exec.scratchdir=/tmp/hive

  300. hive.exec.script.allow.partial.consumption=false

  301. hive.exec.script.maxerrsize=100000

  302. hive.exec.script.trust=false

  303. hive.exec.show.job.failure.debug.info=true

  304. hive.exec.stagingdir=.hive-staging

  305. hive.exec.submit.local.task.via.child=true

  306. hive.exec.submitviachild=false

  307. hive.exec.tasklog.debug.timeout=20000

  308. hive.exec.temporary.table.storage=default

  309. hive.execution.engine=mr

  310. hive.exim.strict.repl.tables=true

  311. hive.exim.uri.scheme.whitelist=hdfs,pfile

  312. hive.explain.dependency.append.tasktype=false

  313. hive.explain.user=false

  314. hive.fetch.output.serde=org.apache.hadoop.hive.serde2.DelimitedJSONSerDe

  315. hive.fetch.task.aggr=false

  316. hive.fetch.task.conversion=more

  317. hive.fetch.task.conversion.threshold=1073741824

  318. hive.file.max.footer=100

  319. hive.fileformat.check=true

  320. hive.groupby.mapaggr.checkinterval=100000

  321. hive.groupby.orderby.position.alias=false

  322. hive.groupby.skewindata=false

  323. hive.hadoop.supports.splittable.combineinputformat=false

  324. hive.hashtable.initialCapacity=100000

  325. hive.hashtable.key.count.adjustment=1.0

  326. hive.hashtable.loadfactor=0.75

  327. hive.hbase.generatehfiles=false

  328. hive.hbase.snapshot.restoredir=/tmp

  329. hive.hbase.wal.enabled=true

  330. hive.heartbeat.interval=1000

  331. hive.hmshandler.force.reload.conf=false

  332. hive.hmshandler.retry.attempts=10

  333. hive.hmshandler.retry.interval=2000ms

  334. hive.hwi.listen.host=0.0.0.0

  335. hive.hwi.listen.port=9999

  336. hive.hwi.war.file=${env:HWI_WAR_FILE}

  337. hive.ignore.mapjoin.hint=true

  338. hive.in.test=false

  339. hive.in.tez.test=false

  340. hive.index.compact.binary.search=true

  341. hive.index.compact.file.ignore.hdfs=false

  342. hive.index.compact.query.max.entries=10000000

  343. hive.index.compact.query.max.size=10737418240

  344. hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat

  345. hive.insert.into.external.tables=true

  346. hive.insert.into.multilevel.dirs=false

  347. hive.int.timestamp.conversion.in.seconds=false

  348. hive.io.rcfile.column.number.conf=0

  349. hive.io.rcfile.record.buffer.size=4194304

  350. hive.io.rcfile.record.interval=2147483647

  351. hive.io.rcfile.tolerate.corruptions=false

  352. hive.jobname.length=50

  353. hive.join.cache.size=25000

  354. hive.join.emit.interval=1000

  355. hive.lazysimple.extended_boolean_literal=false

  356. hive.limit.optimize.enable=false

  357. hive.limit.optimize.fetch.max=50000

  358. hive.limit.optimize.limit.file=10

  359. hive.limit.pushdown.memory.usage=-1.0

  360. hive.limit.query.max.table.partition=-1

  361. hive.limit.row.max.size=100000

  362. hive.localize.resource.num.wait.attempts=5

  363. hive.localize.resource.wait.interval=5000ms

  364. hive.lock.manager=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager

  365. hive.lock.mapred.only.operation=false

  366. hive.lock.numretries=100

  367. hive.lock.sleep.between.retries=60s

  368. hive.lockmgr.zookeeper.default.partition.name=__HIVE_DEFAULT_ZOOKEEPER_PARTITION__

  369. hive.log.every.n.records=0

  370. hive.log.explain.output=false

  371. hive.map.aggr=true

  372. hive.map.aggr.hash.force.flush.memory.threshold=0.9

  373. hive.map.aggr.hash.min.reduction=0.5

  374. hive.map.aggr.hash.percentmemory=0.5

  375. hive.map.groupby.sorted=false

  376. hive.map.groupby.sorted.testmode=false

  377. hive.mapjoin.bucket.cache.size=100

  378. hive.mapjoin.check.memory.rows=100000

  379. hive.mapjoin.followby.gby.localtask.max.memory.usage=0.55

  380. hive.mapjoin.followby.map.aggr.hash.percentmemory=0.3

  381. hive.mapjoin.hybridgrace.hashtable=true

  382. hive.mapjoin.hybridgrace.memcheckfrequency=1024

  383. hive.mapjoin.hybridgrace.minnumpartitions=16

  384. hive.mapjoin.hybridgrace.minwbsize=524288

  385. hive.mapjoin.localtask.max.memory.usage=0.9

  386. hive.mapjoin.optimized.hashtable=true

  387. hive.mapjoin.optimized.hashtable.wbsize=10485760

  388. hive.mapjoin.smalltable.filesize=25000000

  389. hive.mapper.cannot.span.multiple.partitions=false

  390. hive.mapred.local.mem=0

  391. hive.mapred.mode=nonstrict

  392. hive.mapred.partitioner=org.apache.hadoop.hive.ql.io.DefaultHivePartitioner

  393. hive.mapred.reduce.tasks.speculative.execution=true

  394. hive.mapred.supports.subdirectories=false

  395. hive.merge.mapfiles=true

  396. hive.merge.mapredfiles=false

  397. hive.merge.orcfile.stripe.level=true

  398. hive.merge.rcfile.block.level=true

  399. hive.merge.size.per.task=256000000

  400. hive.merge.smallfiles.avgsize=16000000

  401. hive.merge.sparkfiles=false

  402. hive.merge.tezfiles=false

  403. hive.metadata.move.exported.metadata.to.trash=true

  404. hive.metastore.aggregate.stats.cache.clean.until=0.8

  405. hive.metastore.aggregate.stats.cache.enabled=true

  406. hive.metastore.aggregate.stats.cache.fpp=0.01

  407. hive.metastore.aggregate.stats.cache.max.full=0.9

  408. hive.metastore.aggregate.stats.cache.max.partitions=10000

  409. hive.metastore.aggregate.stats.cache.max.reader.wait=1000ms

  410. hive.metastore.aggregate.stats.cache.max.variance=0.01

  411. hive.metastore.aggregate.stats.cache.max.writer.wait=5000ms

  412. hive.metastore.aggregate.stats.cache.size=10000

  413. hive.metastore.aggregate.stats.cache.ttl=600s

  414. hive.metastore.archive.intermediate.archived=_INTERMEDIATE_ARCHIVED

  415. hive.metastore.archive.intermediate.extracted=_INTERMEDIATE_EXTRACTED

  416. hive.metastore.archive.intermediate.original=_INTERMEDIATE_ORIGINAL

  417. hive.metastore.authorization.storage.checks=false

  418. hive.metastore.batch.retrieve.max=300

  419. hive.metastore.batch.retrieve.table.partition.max=1000

  420. hive.metastore.cache.pinobjtypes=Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order

  421. hive.metastore.client.connect.retry.delay=1s

  422. hive.metastore.client.drop.partitions.using.expressions=true

  423. hive.metastore.client.socket.lifetime=0s

  424. hive.metastore.client.socket.timeout=600s

  425. hive.metastore.connect.retries=3

  426. hive.metastore.direct.sql.batch.size=0

  427. hive.metastore.disallow.incompatible.col.type.changes=false

  428. hive.metastore.dml.events=false

  429. hive.metastore.event.clean.freq=0s

  430. hive.metastore.event.db.listener.timetolive=86400s

  431. hive.metastore.event.expiry.duration=0s

  432. hive.metastore.execute.setugi=true

  433. hive.metastore.expression.proxy=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore

  434. hive.metastore.failure.retries=1

  435. hive.metastore.filter.hook=org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl

  436. hive.metastore.fs.handler.class=org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl

  437. hive.metastore.integral.jdo.pushdown=false

  438. hive.metastore.kerberos.principal=hive-metastore/_HOST@EXAMPLE.COM

  439. hive.metastore.orm.retrieveMapNullsAsEmptyStrings=false

  440. hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore

  441. hive.metastore.sasl.enabled=false

  442. hive.metastore.schema.verification=false

  443. hive.metastore.schema.verification.record.version=true

  444. hive.metastore.server.max.message.size=104857600

  445. hive.metastore.server.max.threads=1000

  446. hive.metastore.server.min.threads=200

  447. hive.metastore.server.tcp.keepalive=true

  448. hive.metastore.stats.ndv.densityfunction=false

  449. hive.metastore.thrift.compact.protocol.enabled=false

  450. hive.metastore.thrift.framed.transport.enabled=false

  451. hive.metastore.try.direct.sql=true

  452. hive.metastore.try.direct.sql.ddl=true

  453. hive.metastore.warehouse.dir=/usr/hive_remote/warehouse

  454. hive.multi.insert.move.tasks.share.dependencies=false

  455. hive.multigroupby.singlereducer=true

  456. hive.new.job.grouping.set.cardinality=30

  457. hive.optimize.bucketingsorting=true

  458. hive.optimize.bucketmapjoin=false

  459. hive.optimize.bucketmapjoin.sortedmerge=false

  460. hive.optimize.constant.propagation=true

  461. hive.optimize.correlation=false

  462. hive.optimize.distinct.rewrite=true

  463. hive.optimize.groupby=true

  464. hive.optimize.index.autoupdate=false

  465. hive.optimize.index.filter=false

  466. hive.optimize.index.filter.compact.maxsize=-1

  467. hive.optimize.index.filter.compact.minsize=5368709120

  468. hive.optimize.index.groupby=false

  469. hive.optimize.listbucketing=false

  470. hive.optimize.metadataonly=true

  471. hive.optimize.null.scan=true

  472. hive.optimize.ppd=true

  473. hive.optimize.ppd.storage=true

  474. hive.optimize.reducededuplication=true

  475. hive.optimize.reducededuplication.min.reducer=4

  476. hive.optimize.remove.identity.project=true

  477. hive.optimize.sampling.orderby=false

  478. hive.optimize.sampling.orderby.number=1000

  479. hive.optimize.sampling.orderby.percent=0.1

  480. hive.optimize.skewjoin=false

  481. hive.optimize.skewjoin.compiletime=false

  482. hive.optimize.sort.dynamic.partition=false

  483. hive.optimize.union.remove=false

  484. hive.orc.cache.stripe.details.size=10000

  485. hive.orc.compute.splits.num.threads=10

  486. hive.orc.row.index.stride.dictionary.check=true

  487. hive.orc.splits.include.file.footer=false

  488. hive.outerjoin.supports.filters=true

  489. hive.parquet.timestamp.skip.conversion=true

  490. hive.plan.serialization.format=kryo

  491. hive.ppd.recognizetransivity=true

  492. hive.ppd.remove.duplicatefilters=true

  493. hive.prewarm.enabled=false

  494. hive.prewarm.numcontainers=10

  495. hive.query.id=root_20191002081124_28d18779-aa35-40e6-b209-99234305058d

  496. hive.query.result.fileformat=TextFile

  497. hive.query.string=select * from psn

  498. hive.querylog.enable.plan.progress=true

  499. hive.querylog.location=/tmp/root

  500. hive.querylog.plan.progress.interval=60000ms

  501. hive.reorder.nway.joins=true

  502. hive.repl.task.factory=org.apache.hive.hcatalog.api.repl.exim.EximReplicationTaskFactory

  503. hive.resultset.use.unique.column.names=true

  504. hive.rework.mapredwork=false

  505. hive.rpc.query.plan=false

  506. hive.sample.seednumber=0

  507. hive.scratch.dir.permission=700

  508. hive.script.auto.progress=false

  509. hive.script.operator.env.blacklist=hive.txn.valid.txns,hive.script.operator.env.blacklist

  510. hive.script.operator.id.env.var=HIVE_SCRIPT_OPERATOR_ID

  511. hive.script.operator.truncate.env=false

  512. hive.script.recordreader=org.apache.hadoop.hive.ql.exec.TextRecordReader

  513. hive.script.recordwriter=org.apache.hadoop.hive.ql.exec.TextRecordWriter

  514. hive.script.serde=org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  515. hive.security.authenticator.manager=org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator

  516. hive.security.authorization.enabled=false

  517. hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider

  518. hive.security.authorization.sqlstd.confwhitelist=hive\.auto\..*

  519. hive.server.read.socket.timeout=10s

  520. hive.server.tcp.keepalive=true

  521. hive.server2.allow.user.substitution=true

  522. hive.server2.async.exec.keepalive.time=10s

  523. hive.server2.async.exec.shutdown.timeout=10s

  524. hive.server2.async.exec.threads=100

  525. hive.server2.async.exec.wait.queue.size=100

  526. hive.server2.authentication=NONE

  527. hive.server2.enable.doAs=true

  528. hive.server2.global.init.file.location=/root/hive/apache-hive-1.2.1-bin/conf

  529. hive.server2.idle.operation.timeout=5d

  530. hive.server2.idle.session.check.operation=true

  531. hive.server2.idle.session.timeout=7d

  532. hive.server2.logging.operation.enabled=true

  533. hive.server2.logging.operation.level=EXECUTION

  534. hive.server2.logging.operation.log.location=/tmp/root/operation_logs

  535. hive.server2.long.polling.timeout=5000ms

  536. hive.server2.map.fair.scheduler.queue=true

  537. hive.server2.max.start.attempts=30

  538. hive.server2.session.check.interval=6h

  539. hive.server2.support.dynamic.service.discovery=false

  540. hive.server2.table.type.mapping=CLASSIC

  541. hive.server2.tez.initialize.default.sessions=false

  542. hive.server2.tez.sessions.per.default.queue=1

  543. hive.server2.thrift.exponential.backoff.slot.length=100ms

  544. hive.server2.thrift.http.cookie.auth.enabled=true

  545. hive.server2.thrift.http.cookie.is.httponly=true

  546. hive.server2.thrift.http.cookie.is.secure=true

  547. hive.server2.thrift.http.cookie.max.age=86400s

  548. hive.server2.thrift.http.max.idle.time=1800s

  549. hive.server2.thrift.http.path=cliservice

  550. hive.server2.thrift.http.port=10001

  551. hive.server2.thrift.http.worker.keepalive.time=60s

  552. hive.server2.thrift.login.timeout=20s

  553. hive.server2.thrift.max.message.size=104857600

  554. hive.server2.thrift.max.worker.threads=500

  555. hive.server2.thrift.min.worker.threads=5

  556. hive.server2.thrift.port=10000

  557. hive.server2.thrift.sasl.qop=auth

  558. hive.server2.thrift.worker.keepalive.time=60s

  559. hive.server2.transport.mode=binary

  560. hive.server2.use.SSL=false

  561. hive.server2.zookeeper.namespace=hiveserver2

  562. hive.session.history.enabled=false

  563. hive.session.id=9a85e67f-c351-4f98-8589-7c1b2b7d17dd

  564. hive.session.silent=false

  565. hive.skewjoin.key=100000

  566. hive.skewjoin.mapjoin.map.tasks=10000

  567. hive.skewjoin.mapjoin.min.split=33554432

  568. hive.smbjoin.cache.rows=10000

  569. hive.spark.client.connect.timeout=1000ms

  570. hive.spark.client.future.timeout=60s

  571. hive.spark.client.rpc.max.size=52428800

  572. hive.spark.client.rpc.sasl.mechanisms=DIGEST-MD5

  573. hive.spark.client.rpc.threads=8

  574. hive.spark.client.secret.bits=256

  575. hive.spark.client.server.connect.timeout=90000ms

  576. hive.spark.job.monitor.timeout=60s

  577. hive.ssl.protocol.blacklist=SSLv2,SSLv3

  578. hive.stageid.rearrange=none

  579. hive.start.cleanup.scratchdir=false

  580. hive.stats.atomic=false

  581. hive.stats.autogather=true

  582. hive.stats.collect.rawdatasize=true

  583. hive.stats.collect.scancols=false

  584. hive.stats.collect.tablekeys=false

  585. hive.stats.dbclass=fs

  586. hive.stats.dbconnectionstring=jdbc:derby:;databaseName=TempStatsStore;create=true

  587. hive.stats.deserialization.factor=1.0

  588. hive.stats.fetch.column.stats=false

  589. hive.stats.fetch.partition.stats=true

  590. hive.stats.gather.num.threads=10

  591. hive.stats.jdbc.timeout=30s

  592. hive.stats.jdbcdriver=org.apache.derby.jdbc.EmbeddedDriver

  593. hive.stats.join.factor=1.1

  594. hive.stats.key.prefix.max.length=150

  595. hive.stats.key.prefix.reserve.length=24

  596. hive.stats.list.num.entries=10

  597. hive.stats.map.num.entries=10

  598. hive.stats.max.variable.length=100

  599. hive.stats.ndv.error=20.0

  600. hive.stats.reliable=false

  601. hive.stats.retries.max=0

  602. hive.stats.retries.wait=3000ms

  603. hive.stats.tmp.loc=hdfs://192.168.18.103:9000/tmp/hive/root/9a85e67f-c351-4f98-8589-7c1b2b7d17dd/hive_2019-10-02_08-11-24_573_6621744203570378937-1/-mr-10000/.hive-staging_hive_2019-10-02_08-11-24_573_6621744203570378937-1/-ext-10002

  604. hive.support.concurrency=false

  605. hive.support.quoted.identifiers=column

  606. hive.support.sql11.reserved.keywords=true

  607. hive.test.authz.sstd.hs2.mode=false

  608. hive.test.mode=false

  609. hive.test.mode.prefix=test_

  610. hive.test.mode.samplefreq=32

  611. hive.tez.auto.reducer.parallelism=false

  612. hive.tez.container.size=-1

  613. hive.tez.cpu.vcores=-1

  614. hive.tez.dynamic.partition.pruning=true

  615. hive.tez.dynamic.partition.pruning.max.data.size=104857600

  616. hive.tez.dynamic.partition.pruning.max.event.size=1048576

  617. hive.tez.exec.inplace.progress=true

  618. hive.tez.exec.print.summary=false

  619. hive.tez.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat

  620. hive.tez.log.level=INFO

  621. hive.tez.max.partition.factor=2.0

  622. hive.tez.min.partition.factor=0.25

  623. hive.tez.smb.number.waves=0.5

  624. hive.transform.escape.input=false

  625. hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager

  626. hive.txn.max.open.batch=1000

  627. hive.txn.timeout=300s

  628. hive.txn.valid.txns=9223372036854775807:

  629. hive.typecheck.on.insert=true

  630. hive.udtf.auto.progress=false

  631. hive.unlock.numretries=10

  632. hive.user.install.directory=hdfs:///user/

  633. hive.variable.substitute=true

  634. hive.variable.substitute.depth=40

  635. hive.vectorized.execution.enabled=false

  636. hive.vectorized.execution.mapjoin.minmax.enabled=false

  637. hive.vectorized.execution.mapjoin.native.enabled=true

  638. hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled=false

  639. hive.vectorized.execution.mapjoin.native.multikey.only.enabled=false

  640. hive.vectorized.execution.mapjoin.overflow.repeated.threshold=-1

  641. hive.vectorized.execution.reduce.enabled=true

  642. hive.vectorized.execution.reduce.groupby.enabled=true

  643. hive.vectorized.groupby.checkinterval=100000

  644. hive.vectorized.groupby.flush.percent=0.1

  645. hive.vectorized.groupby.maxentries=1000000

  646. hive.warehouse.subdir.inherit.perms=true

  647. hive.zookeeper.clean.extra.nodes=false

  648. hive.zookeeper.client.port=2181

  649. hive.zookeeper.connection.basesleeptime=1000ms

  650. hive.zookeeper.connection.max.retries=3

  651. hive.zookeeper.namespace=hive_zookeeper_namespace

  652. hive.zookeeper.session.timeout=1200000ms

  653. javax.jdo.PersistenceManagerFactoryClass=org.datanucleus.api.jdo.JDOPersistenceManagerFactory

  654. javax.jdo.option.ConnectionDriverName=com.mysql.jdbc.Driver

  655. javax.jdo.option.ConnectionPassword=123

  656. javax.jdo.option.ConnectionURL=jdbc:mysql://node04/hive_remote?createDatabaseIfNotExist=true

  657. javax.jdo.option.ConnectionUserName=root

  658. javax.jdo.option.DetachAllOnCommit=true

  659. javax.jdo.option.Multithreaded=true

  660. javax.jdo.option.NonTransactionalRead=true

  661. mapreduce.input.fileinputformat.input.dir.recursive=false

  662. mapreduce.input.fileinputformat.split.maxsize=256000000

  663. mapreduce.input.fileinputformat.split.minsize=1

  664. mapreduce.input.fileinputformat.split.minsize.per.node=1

  665. mapreduce.input.fileinputformat.split.minsize.per.rack=1

  666. mapreduce.job.committer.setup.cleanup.needed=false

  667. mapreduce.job.committer.task.cleanup.needed=false

  668. mapreduce.job.name=

  669. mapreduce.job.reduces=-1

  670. mapreduce.workflow.id=hive_root_20191002081124_28d18779-aa35-40e6-b209-99234305058d

  671. mapreduce.workflow.name=select * from psn

  672. nfs.allow.insecure.ports=true

  673. nfs.dump.dir=/tmp/.hdfs-nfs

  674. nfs.mountd.port=4242

  675. nfs.rtmax=1048576

  676. nfs.server.port=2049

  677. nfs.wtmax=1048576

  678. parquet.memory.pool.ratio=0.5

  679. rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB=org.apache.hadoop.ipc.ProtobufRpcEngine

  680. silent=off

  681. stream.stderr.reporter.enabled=true

  682. stream.stderr.reporter.prefix=reporter:

  683. env:CLASSPATH=/opt/lxk/hadoop-2.6.5/etc/hadoop:/opt/lxk/hadoop-2.6.5/share/hadoop/common/lib/*:

  684. env:FLUME_HOME=/opt/lxk/apache-flume-1.6.0-bin

  685. env:G_BROKEN_FILENAMES=1

  686. env:HADOOP_CLIENT_OPTS=-Xmx512m

  687. env:HADOOP_COMMON_HOME=/opt/lxk/hadoop-2.6.5

  688. env:HADOOP_CONF_DIR=/opt/lxk/hadoop-2.6.5/etc/hadoop

  689. env:HADOOP_DATANODE_OPTS=-Dhadoop.security.logger=ERROR,RFAS

  690. env:HADOOP_HDFS_HOME=/opt/lxk/hadoop-2.6.5

  691. env:HADOOP_HEAPSIZE=256

  692. env:HADOOP_HOME=/opt/lxk/hadoop-2.6.5

  693. env:HADOOP_HOME_WARN_SUPPRESS=true

  694. env:HADOOP_IDENT_STRING=root

  695. env:HADOOP_MAPRED_HOME=/opt/lxk/hadoop-2.6.5

  696. env:HADOOP_NAMENODE_OPTS=-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender

  697. env:HADOOP_NFS3_OPTS=

  698. env:HADOOP_OPTS= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/lxk/hadoop-2.6.5/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/lxk/hadoop-2.6.5 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/lxk/hadoop-2.6.5/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dhadoop.security.logger=INFO,NullAppender

  699. env:HADOOP_PID_DIR=

  700. env:HADOOP_PORTMAP_OPTS=-Xmx512m

  701. env:HADOOP_PREFIX=/opt/lxk/hadoop-2.6.5

  702. env:HADOOP_SECONDARYNAMENODE_OPTS=-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender

  703. env:HADOOP_SECURE_DN_LOG_DIR=/

  704. env:HADOOP_SECURE_DN_PID_DIR=

  705. env:HADOOP_SECURE_DN_USER=

  706. env:HADOOP_YARN_HOME=/opt/lxk/hadoop-2.6.5

  707. env:HBASE_HOME=/root/hbase/hbase0.98

  708. env:HISTCONTROL=ignoredups

  709. env:HISTSIZE=1000

  710. env:HIVE_AUX_JARS_PATH=

  711. env:HIVE_CONF_DIR=/root/hive/apache-hive-1.2.1-bin/conf

  712. env:HIVE_HOME=/root/hive/apache-hive-1.2.1-bin

  713. env:HOME=/root

  714. env:HOSTNAME=node03

  715. env:JAVA_HOME=/usr/local/src/java/jdk1.8.0_181

  716. env:LANG=en_US.UTF-8

  717. env:LD_LIBRARY_PATH=:/opt/lxk/hadoop-2.6.5/lib/native

  718. env:LESSOPEN=|/usr/bin/lesspipe.sh %s

  719. env:LOGNAME=root

  720. env:MAIL=/var/spool/mail/root

  721. env:MALLOC_ARENA_MAX=4

  722. env:PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/src/java/jdk1.8.0_181/bin:/opt/lxk/hadoop-2.6.5/bin:/opt/lxk/hadoop-2.6.5/sbin:/root/hive/apache-hive-1.2.1-bin/bin:/root/hbase/hbase0.98/bin:/opt/huawei/zookeeper-3.4.6/bin:/opt/lxk/apache-flume-1.6.0-bin/bin:/root/bin

  723. env:PWD=/root

  724. env:SERVICE_LIST=beeline cli help hiveburninclient hiveserver2 hiveserver hwi jar lineage metastore metatool orcfiledump rcfilecat schemaTool version

  725. env:SHELL=/bin/bash

  726. env:SHLVL=1

  727. env:SSH_CLIENT=192.168.18.1 6609 22

  728. env:SSH_CONNECTION=192.168.18.1 6609 192.168.18.103 22

  729. env:SSH_TTY=/dev/pts/0

  730. env:TERM=xterm

  731. env:USER=root

  732. env:ZOOKEEPER_HOME=/opt/huawei/zookeeper-3.4.6

  733. system:awt.toolkit=sun.awt.X11.XToolkit

  734. system:file.encoding=UTF-8

  735. system:file.encoding.pkg=sun.io

  736. system:file.separator=/

  737. system:hadoop.home.dir=/opt/lxk/hadoop-2.6.5

  738. system:hadoop.id.str=root

  739. system:hadoop.log.dir=/opt/lxk/hadoop-2.6.5/logs

  740. system:hadoop.log.file=hadoop.log

  741. system:hadoop.policy.file=hadoop-policy.xml

  742. system:hadoop.root.logger=INFO,console

  743. system:hadoop.security.logger=INFO,NullAppender

  744. system:java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment

  745. system:java.awt.printerjob=sun.print.PSPrinterJob

  746. system:java.class.path=/opt/lxk/hadoop-2.6.5/etc/hadoop:/opt/lxk/hadoop-2.6.5/share/hadoop/common/lib/htrace-core-3.0.4.jar

  747. system:java.class.version=52.0

  748. system:java.endorsed.dirs=/usr/local/src/java/jdk1.8.0_181/jre/lib/endorsed

  749. system:java.ext.dirs=/usr/local/src/java/jdk1.8.0_181/jre/lib/ext:/usr/java/packages/lib/ext

  750. system:java.home=/usr/local/src/java/jdk1.8.0_181/jre

  751. system:java.io.tmpdir=/tmp

  752. system:java.library.path=/opt/lxk/hadoop-2.6.5/lib/native

  753. system:java.net.preferIPv4Stack=true

  754. system:java.runtime.name=Java(TM) SE Runtime Environment

  755. system:java.runtime.version=1.8.0_181-b13

  756. system:java.specification.name=Java Platform API Specification

  757. system:java.specification.vendor=Oracle Corporation

  758. system:java.specification.version=1.8

  759. system:java.vendor=Oracle Corporation

  760. system:java.vendor.url=http://java.oracle.com/

  761. system:java.vendor.url.bug=http://bugreport.sun.com/bugreport/

  762. system:java.version=1.8.0_181

  763. system:java.vm.info=mixed mode

  764. system:java.vm.name=Java HotSpot(TM) 64-Bit Server VM

  765. system:java.vm.specification.name=Java Virtual Machine Specification

  766. system:java.vm.specification.vendor=Oracle Corporation

  767. system:java.vm.specification.version=1.8

  768. system:java.vm.vendor=Oracle Corporation

  769. system:java.vm.version=25.181-b13

  770. system:line.separator=

  771. system:os.arch=amd64

  772. system:os.name=Linux

  773. system:os.version=2.6.32-431.el6.x86_64

  774. system:path.separator=:

  775. system:sun.arch.data.model=64

  776. system:sun.boot.class.path=/usr/local/src/java/jdk1.8.0_181/jre/lib/resources.jar:/usr/local/src/java/jdk1.8.0_181/jre/lib/rt.jar:/usr/local/src/java/jdk1.8.0_181/jre/lib/sunrsasign.jar:/usr/local/src/java/jdk1.8.0_181/jre/lib/jsse.jar:/usr/local/src/java/jdk1.8.0_181/jre/lib/jce.jar:/usr/local/src/java/jdk1.8.0_181/jre/lib/charsets.jar:/usr/local/src/java/jdk1.8.0_181/jre/lib/jfr.jar:/usr/local/src/java/jdk1.8.0_181/jre/classes

  777. system:sun.boot.library.path=/usr/local/src/java/jdk1.8.0_181/jre/lib/amd64

  778. system:sun.cpu.endian=little

  779. system:sun.cpu.isalist=

  780. system:sun.io.unicode.encoding=UnicodeLittle

  781. system:sun.java.command=org.apache.hadoop.util.RunJar /root/hive/apache-hive-1.2.1-bin/lib/hive-cli-1.2.1.jar org.apache.hadoop.hive.cli.CliDriver

  782. system:sun.java.launcher=SUN_STANDARD

  783. system:sun.jnu.encoding=UTF-8

  784. system:sun.management.compiler=HotSpot 64-Bit Tiered Compilers

  785. system:sun.os.patch.level=unknown

  786. system:user.country=US

  787. system:user.dir=/root

  788. system:user.home=/root

  789. system:user.language=en

  790. system:user.name=root

  791. system:user.timezone=PRC

 
