The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-

It is easy to follow the instructions on http://spark.apache.org/docs/latest/ and download Spark 1.6.0 (Jan 04 2016) with the "Pre-built for Hadoop 2.6 and later" package type from http://spark.apache.org/downloads.html.

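After extracting the archive, you can start the shell from a Command Prompt. The path below assumes you extracted to C:\spark-1.6.0-bin-hadoop2.6; adjust it to your location:

    cd C:\spark-1.6.0-bin-hadoop2.6
    bin\spark-shell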

However, when you try to run spark-shell on your Windows 10 (64-bit) machine, you may receive a java.lang.RuntimeException: java.lang.NullPointerException followed by "not found: value sqlContext" errors:

java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:194)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:218)
at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:208)
at org.apache.spark.sql.hive.HiveContext.functionRegistry$lzycompute(HiveContext.scala:462)
at org.apache.spark.sql.hive.HiveContext.functionRegistry(HiveContext.scala:461)
at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40)
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:330)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
... 62 more

<console>:16: error: not found: value sqlContext
import sqlContext.implicits._
       ^
<console>:16: error: not found: value sqlContext
import sqlContext.sql
       ^


This issue is often caused by a missing winutils.exe, which Spark needs in order to initialize the Hive context; the Hive context in turn depends on Hadoop, and Hadoop requires native helper binaries to work properly on Windows. You can see this in the stack trace above: Hive's SessionState checks the permissions of its scratch directory through Hadoop's RawLocalFileSystem, which shells out to a command that does not exist on a plain Windows installation, hence the NullPointerException from ProcessBuilder.start. Unfortunately, this happens even if you are using Spark in local mode without directly using any HDFS features.

To resolve this problem, you need to:

  • download the 64-bit winutils.exe (106KB)
  • copy the downloaded file winutils.exe into a folder like C:\hadoop\bin (or C:\spark\hadoop\bin)
  • set the environment variable HADOOP_HOME to point to the directory above, without the trailing \bin (a setx example follows this list). For example:
    • if you copied the winutils.exe to C:\hadoop\bin, set HADOOP_HOME=C:\hadoop
    • if you copied the winutils.exe to C:\spark\hadoop\bin, set HADOOP_HOME=C:\spark\hadoop
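A convenient way to set the variable persistently is the setx command, which applies to the current user (add /M in an elevated prompt to set it machine-wide); open a new Command Prompt afterwards for the change to take effect. Using the C:\hadoop location from above:

    setx HADOOP_HOME C:\hadoop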


Double-check that the environment variable HADOOP_HOME is set properly by opening a Command Prompt and running echo %HADOOP_HOME%.
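For example, if you used the C:\hadoop location from above, the session should look like this:

    C:\>echo %HADOOP_HOME%
    C:\hadoop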


You will also notice that when spark-shell.cmd starts, Hive creates a C:\tmp\hive folder. If you receive any errors related to the permissions of this folder, use the following commands to set the permissions on it (a full transcript follows the list):

  • List current permissions: %HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
  • Set permissions: %HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp\hive
  • List updated permissions: %HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
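Put together, a session might look like the following; the permission strings are the part that matters (note how drw-rw-rw- matches the rw-rw-rw- quoted in the error message at the top), while the owner, size, and date columns that winutils also prints are elided here as ...:

    C:\>%HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
    drw-rw-rw- ... \tmp\hive

    C:\>%HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp\hive

    C:\>%HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
    drwxrwxrwx ... \tmp\hive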


When you re-run spark-shell, it should work as expected, and the sqlContext value should be available.
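As a quick check that the Hive-backed SQL context initialized correctly, run a trivial query; on a fresh installation it should print an empty table along these lines:

    scala> sqlContext.sql("SHOW TABLES").show()
    +---------+-----------+
    |tableName|isTemporary|
    +---------+-----------+
    +---------+-----------+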


Related Spark JIRAs and StackOverflow threads:

I’m looking forward to your feedback and questions via Twitter https://twitter.com/ArsenVlad

Tags: Spark, Windows 10