The ./bin/spark-shell command

[beifeng@hadoop-senior02 spark-1.6.1-bin-2.5.0-cdh5.3.6]$ ./bin/spark-shell

Q> The first spark-shell launch fails while the SparkContext is being initialized:

18/03/16 23:12:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/16 23:12:38 INFO spark.SecurityManager: Changing view acls to: beifeng
18/03/16 23:12:38 INFO spark.SecurityManager: Changing modify acls to: beifeng
18/03/16 23:12:38 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(beifeng); users with modify permissions: Set(beifeng)
18/03/16 23:12:39 INFO spark.HttpServer: Starting HTTP Server
18/03/16 23:12:39 INFO server.Server: jetty-8.y.z-SNAPSHOT
18/03/16 23:12:39 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:39311
18/03/16 23:12:39 INFO util.Utils: Successfully started service 'HTTP class server' on port 39311.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/


Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated.
Type :help for more information.
18/03/16 23:12:44 INFO spark.SparkContext: Running Spark version 1.6.1
18/03/16 23:12:44 WARN spark.SparkConf: 
SPARK_WORKER_INSTANCES was detected (set to '2').
This is deprecated in Spark 1.0+.


Please instead use:
 - ./spark-submit with --num-executors to specify the number of executors
 - Or set SPARK_EXECUTOR_INSTANCES
 - spark.executor.instances to configure the number of instances in the spark config.
        
18/03/16 23:12:44 INFO spark.SecurityManager: Changing view acls to: beifeng
18/03/16 23:12:44 INFO spark.SecurityManager: Changing modify acls to: beifeng
18/03/16 23:12:44 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(beifeng); users with modify permissions: Set(beifeng)
18/03/16 23:12:45 INFO util.Utils: Successfully started service 'sparkDriver' on port 41194.
18/03/16 23:12:45 INFO slf4j.Slf4jLogger: Slf4jLogger started
18/03/16 23:12:45 INFO Remoting: Starting remoting
18/03/16 23:12:45 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.159.129:34927]
18/03/16 23:12:45 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 34927.
18/03/16 23:12:45 INFO spark.SparkEnv: Registering MapOutputTracker
18/03/16 23:12:45 INFO spark.SparkEnv: Registering BlockManagerMaster
18/03/16 23:12:45 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-3c1d13f2-2788-48ca-b92a-787759d885a4
18/03/16 23:12:45 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
18/03/16 23:12:46 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/03/16 23:12:46 INFO server.Server: jetty-8.y.z-SNAPSHOT
18/03/16 23:12:46 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
18/03/16 23:12:46 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/03/16 23:12:46 INFO ui.SparkUI: Started SparkUI at http://192.168.159.129:4040
18/03/16 23:12:46 INFO executor.Executor: Starting executor ID driver on host localhost
18/03/16 23:12:46 INFO executor.Executor: Using REPL class URI: http://192.168.159.129:39311
18/03/16 23:12:46 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42848.
18/03/16 23:12:46 INFO netty.NettyBlockTransferService: Server created on 42848
18/03/16 23:12:46 INFO storage.BlockManagerMaster: Trying to register BlockManager
18/03/16 23:12:46 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:42848 with 517.4 MB RAM, BlockManagerId(driver, localhost, 42848)
18/03/16 23:12:46 INFO storage.BlockManagerMaster: Registered BlockManager
18/03/16 23:12:48 ERROR spark.SparkContext: Error initializing SparkContext.
java.net.ConnectException: Call From hadoop-senior02/192.168.159.129 to hadoop-senior02:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        at org.apache.hadoop.ipc.Client.call(Client.java:1415)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:744)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1912)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1089)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)
        at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:549)
        at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
        at $line3.$read$$iwC$$iwC.<init>(<console>:15)
        at $line3.$read$$iwC.<init>(<console>:24)
        at $line3.$read.<init>(<console>:26)
        at $line3.$read$.<init>(<console>:30)
        at $line3.$read$.<clinit>(<console>)
        at $line3.$eval$.<init>(<console>:7)
        at $line3.$eval$.<clinit>(<console>)
        at $line3.$eval.$print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
        at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
        at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
        at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
        at org.apache.hadoop.ipc.Client.call(Client.java:1382)
        ... 66 more
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
18/03/16 23:12:48 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
18/03/16 23:12:48 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.159.129:4040
18/03/16 23:12:48 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/03/16 23:12:48 INFO storage.MemoryStore: MemoryStore cleared
18/03/16 23:12:48 INFO storage.BlockManager: BlockManager stopped
18/03/16 23:12:48 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/03/16 23:12:48 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/03/16 23:12:48 INFO spark.SparkContext: Successfully stopped SparkContext
18/03/16 23:12:48 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/03/16 23:12:48 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
18/03/16 23:12:48 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
java.net.ConnectException: Call From hadoop-senior02/192.168.159.129 to hadoop-senior02:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        at org.apache.hadoop.ipc.Client.call(Client.java:1415)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:744)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1912)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1089)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)
        at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:549)
        at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
        at $iwC$$iwC.<init>(<console>:15)
        at $iwC.<init>(<console>:24)
        at <init>(<console>:26)
        at .<init>(<console>:30)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
        at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
        at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
        at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
        at org.apache.hadoop.ipc.Client.call(Client.java:1382)
        ... 66 more


java.lang.NullPointerException
        at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
        at $iwC$$iwC.<init>(<console>:15)
        at $iwC.<init>(<console>:24)
        at <init>(<console>:26)
        at .<init>(<console>:30)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
        at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
        at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
        at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


<console>:16: error: not found: value sqlContext
         import sqlContext.implicits._
                ^
<console>:16: error: not found: value sqlContext
         import sqlContext.sql
                ^
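Why the launch fails: spark-shell is configured to write its event log to HDFS (the successful run below logs to hdfs://hadoop-senior02:8020/spark/history/...), so EventLoggingListener.start() has to reach the NameNode at hadoop-senior02:8020 while the SparkContext is being built. Nothing is listening on that port yet, so the SparkContext is torn down and neither sc nor sqlContext ever gets created in the REPL. A quick way to confirm the NameNode is down before blaming Spark (a minimal sketch; jps ships with the JDK and netstat is assumed to be available):

[beifeng@hadoop-senior02 ~]$ jps                       # no NameNode process in the list
[beifeng@hadoop-senior02 ~]$ netstat -tln | grep 8020  # nothing listening on the NameNode RPC port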
A> Start the HDFS NameNode and DataNode, then relaunch spark-shell:

[beifeng@hadoop-senior02 hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/logs/hadoop-beifeng-namenode-hadoop-senior02.out
[beifeng@hadoop-senior02 hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/logs/hadoop-beifeng-datanode-hadoop-senior02.out
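Before relaunching spark-shell, it is worth confirming that both daemons actually came up and that HDFS answers RPC calls. A minimal check, assuming the same hadoop-2.5.0-cdh5.3.6 working directory as the commands above:

[beifeng@hadoop-senior02 hadoop-2.5.0-cdh5.3.6]$ jps                 # expect NameNode and DataNode entries
[beifeng@hadoop-senior02 hadoop-2.5.0-cdh5.3.6]$ bin/hdfs dfs -ls /  # a simple round trip to hadoop-senior02:8020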

[beifeng@hadoop-senior02 spark-1.6.1-bin-2.5.0-cdh5.3.6]$ ./bin/spark-shell
18/03/16 23:40:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/16 23:40:03 INFO spark.SecurityManager: Changing view acls to: beifeng
18/03/16 23:40:03 INFO spark.SecurityManager: Changing modify acls to: beifeng
18/03/16 23:40:03 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(beifeng); users with modify permissions: Set(beifeng)
18/03/16 23:40:03 INFO spark.HttpServer: Starting HTTP Server
18/03/16 23:40:03 INFO server.Server: jetty-8.y.z-SNAPSHOT
18/03/16 23:40:03 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:42663
18/03/16 23:40:03 INFO util.Utils: Successfully started service 'HTTP class server' on port 42663.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/


Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated.
Type :help for more information.
18/03/16 23:40:08 INFO spark.SparkContext: Running Spark version 1.6.1
18/03/16 23:40:08 WARN spark.SparkConf: 
SPARK_WORKER_INSTANCES was detected (set to '2').
This is deprecated in Spark 1.0+.


Please instead use:
 - ./spark-submit with --num-executors to specify the number of executors
 - Or set SPARK_EXECUTOR_INSTANCES
 - spark.executor.instances to configure the number of instances in the spark config.
        
18/03/16 23:40:08 INFO spark.SecurityManager: Changing view acls to: beifeng
18/03/16 23:40:08 INFO spark.SecurityManager: Changing modify acls to: beifeng
18/03/16 23:40:08 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(beifeng); users with modify permissions: Set(beifeng)
18/03/16 23:40:09 INFO util.Utils: Successfully started service 'sparkDriver' on port 42153.
18/03/16 23:40:09 INFO slf4j.Slf4jLogger: Slf4jLogger started
18/03/16 23:40:09 INFO Remoting: Starting remoting
18/03/16 23:40:10 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.159.129:41137]
18/03/16 23:40:10 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 41137.
18/03/16 23:40:10 INFO spark.SparkEnv: Registering MapOutputTracker
18/03/16 23:40:10 INFO spark.SparkEnv: Registering BlockManagerMaster
18/03/16 23:40:10 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-ee2e7098-a3f3-46bf-b929-08ec150c59e6
18/03/16 23:40:10 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
18/03/16 23:40:10 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/03/16 23:40:10 INFO server.Server: jetty-8.y.z-SNAPSHOT
18/03/16 23:40:10 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
18/03/16 23:40:10 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/03/16 23:40:10 INFO ui.SparkUI: Started SparkUI at http://192.168.159.129:4040
18/03/16 23:40:10 INFO executor.Executor: Starting executor ID driver on host localhost
18/03/16 23:40:10 INFO executor.Executor: Using REPL class URI: http://192.168.159.129:42663
18/03/16 23:40:10 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44871.
18/03/16 23:40:10 INFO netty.NettyBlockTransferService: Server created on 44871
18/03/16 23:40:10 INFO storage.BlockManagerMaster: Trying to register BlockManager
18/03/16 23:40:10 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:44871 with 517.4 MB RAM, BlockManagerId(driver, localhost, 44871)
18/03/16 23:40:10 INFO storage.BlockManagerMaster: Registered BlockManager
18/03/16 23:40:12 INFO scheduler.EventLoggingListener: Logging events to hdfs://hadoop-senior02:8020/spark/history/local-1521214810802
18/03/16 23:40:12 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
18/03/16 23:40:13 INFO hive.HiveContext: Initializing execution hive, version 1.2.1
18/03/16 23:40:13 INFO client.ClientWrapper: Inspected Hadoop version: 2.5.0-cdh5.3.6
18/03/16 23:40:13 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.5.0-cdh5.3.6
18/03/16 23:40:14 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/03/16 23:40:14 INFO metastore.ObjectStore: ObjectStore, initialize called
18/03/16 23:40:14 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/03/16 23:40:14 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/03/16 23:40:15 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/03/16 23:40:15 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/03/16 23:40:18 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/03/16 23:40:20 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/03/16 23:40:20 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/03/16 23:40:22 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/03/16 23:40:22 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/03/16 23:40:22 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
18/03/16 23:40:22 INFO metastore.ObjectStore: Initialized ObjectStore
18/03/16 23:40:22 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/03/16 23:40:22 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
18/03/16 23:40:23 INFO metastore.HiveMetaStore: Added admin role in metastore
18/03/16 23:40:23 INFO metastore.HiveMetaStore: Added public role in metastore
18/03/16 23:40:23 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
18/03/16 23:40:23 INFO metastore.HiveMetaStore: 0: get_all_databases
18/03/16 23:40:23 INFO HiveMetaStore.audit: ugi=beifeng ip=unknown-ip-addr      cmd=get_all_databases
18/03/16 23:40:23 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
18/03/16 23:40:23 INFO HiveMetaStore.audit: ugi=beifeng ip=unknown-ip-addr      cmd=get_functions: db=default pat=*
18/03/16 23:40:23 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/03/16 23:40:23 INFO session.SessionState: Created local directory: /tmp/41456e4a-609f-4072-97b3-6d6319e3effa_resources
18/03/16 23:40:23 INFO session.SessionState: Created HDFS directory: /tmp/hive/beifeng/41456e4a-609f-4072-97b3-6d6319e3effa
18/03/16 23:40:23 INFO session.SessionState: Created local directory: /tmp/beifeng/41456e4a-609f-4072-97b3-6d6319e3effa
18/03/16 23:40:23 INFO session.SessionState: Created HDFS directory: /tmp/hive/beifeng/41456e4a-609f-4072-97b3-6d6319e3effa/_tmp_space.db
18/03/16 23:40:23 INFO hive.HiveContext: default warehouse location is /hive
18/03/16 23:40:23 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
18/03/16 23:40:23 INFO client.ClientWrapper: Inspected Hadoop version: 2.5.0-cdh5.3.6
18/03/16 23:40:23 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.5.0-cdh5.3.6
18/03/16 23:40:24 INFO hive.metastore: Trying to connect to metastore with URI thrift://hadoop-senior02:9083
18/03/16 23:40:24 INFO hive.metastore: Connected to metastore.
18/03/16 23:40:25 INFO session.SessionState: Created local directory: /tmp/4e7e1e9e-e3d2-43f8-9829-78c031fd6aa7_resources
18/03/16 23:40:25 INFO session.SessionState: Created HDFS directory: /tmp/hive/beifeng/4e7e1e9e-e3d2-43f8-9829-78c031fd6aa7
18/03/16 23:40:25 INFO session.SessionState: Created local directory: /tmp/beifeng/4e7e1e9e-e3d2-43f8-9829-78c031fd6aa7
18/03/16 23:40:25 INFO session.SessionState: Created HDFS directory: /tmp/hive/beifeng/4e7e1e9e-e3d2-43f8-9829-78c031fd6aa7/_tmp_space.db
18/03/16 23:40:25 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala> 
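This time the line "Logging events to hdfs://hadoop-senior02:8020/spark/history/local-1521214810802" shows why the shell needed a live NameNode in the first place: event logging points at HDFS. The relevant settings presumably live in conf/spark-defaults.conf and look roughly like the following (the property names are standard Spark configuration keys, but the values here are inferred from the log line, not taken from the post):

spark.eventLog.enabled   true
spark.eventLog.dir       hdfs://hadoop-senior02:8020/spark/history

If HDFS cannot be started, an alternative is to point spark.eventLog.dir at a local file:/// path, or to set spark.eventLog.enabled to false, after which spark-shell starts without touching the NameNode.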

