The Road to Advanced Big Data: Basic Spark SQL Configuration (2)

[hadoop@hadoop001 spark-2.1.0]$ pwd
/home/hadoop/source/spark-2.1.0

==> ./build/mvn -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver -Dhadoop.version=2.6.0-cdh5.7.0 -DskipTests clean package

Build a runnable distribution package:

./dev/make-distribution.sh --name 2.6.0-cdh5.7.0 --tgz -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver -Dhadoop.version=2.6.0-cdh5.7.0

make-distribution.sh names the resulting tarball spark-$VERSION-bin-$NAME.tgz, which here becomes spark-2.1.0-bin-2.6.0-cdh5.7.0.tgz

If the build fails with:
Failed to execute goal on project ...: Could not resolve dependencies for project ...

add the Cloudera repository to pom.xml:

<repository>
      <id>cloudera</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>

If you need Scala 2.10, run ./dev/change-scala-version.sh 2.10 before building.

Environment setup

Local

  • tar -zxvf spark-2.1.0-bin-2.6.0-cdh5.7.0.tgz -C ~/app/
  • Configure the SPARK_HOME environment variable in ~/.bash_profile
  • source ~/.bash_profile

Run:
spark-shell --master local[2]

	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
	at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
	...
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
	at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
	...
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:624)
	at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
	at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
	at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
	at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
	...
Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
	at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
	at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
	at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
	... 145 more

Cause: the MySQL JDBC driver is not on the classpath. Pass the driver jar to spark-shell with --jars:

spark-shell --master local[2] --jars /home/hadoop/software/mysql-connector-java-5.1.27-bin.jar

[hadoop@hadoop001 software]$ spark-shell --master local[2] --jars /home/hadoop/software/mysql-connector-java-5.1.27-bin.jar 
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/10/16 20:42:32 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
20/10/16 20:42:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/10/16 20:42:35 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
20/10/16 20:42:50 ERROR ObjectStore: Version information found in metastore differs 1.1.0 from expected schema version 1.2.0. Schema verififcation is disabled hive.metastore.schema.verification so setting version.
20/10/16 20:42:53 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.43.214:4041
Spark context available as 'sc' (master = local[2], app id = local-1602906155852).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
Type in expressions to have them evaluated.
Type :help for more information.



Standalone

The architecture of Spark Standalone mode is very similar to Hadoop HDFS/YARN:
1 master + n workers

spark-env.sh

SPARK_MASTER_HOST=hadoop001
SPARK_WORKER_CORES=2
SPARK_WORKER_MEMORY=2g
SPARK_WORKER_INSTANCES=1

master:

hadoop1 

slaves:

hadoop2
hadoop3
hadoop4
....
hadoop10

==> start-all.sh starts the master process on hadoop1 and a worker process on every host listed in the slaves file.

Spark WordCount example

val file = spark.sparkContext.textFile("file:///home/hadoop/data/wc.txt")
val wordCounts = file.flatMap(line => line.split(",")).map(word => (word, 1)).reduceByKey(_ + _)
wordCounts.collect
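
As an optional follow-up in the same shell, the pairs can be sorted by count to show the most frequent words first; a small sketch using the core RDD API on the wordCounts RDD built above:

// Print the three most frequent words, highest count first
wordCounts.sortBy(_._2, ascending = false).take(3).foreach(println)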

Local IDE

A master URL must be set in your configuration

Click Edit Configurations, select this project's run configuration on the left, and enter -Dspark.master=local in the VM options field on the right. This tells the program to run locally in a single thread; run it again and the error disappears.
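
Alternatively, the master can be hard-coded while debugging in the IDE instead of using the VM option. A minimal sketch follows; the object name LocalDebugApp is only for illustration, and the setMaster call should be removed before packaging so that spark-submit controls the master:

package org.example

import org.apache.spark.{SparkConf, SparkContext}

object LocalDebugApp {
  def main(args: Array[String]): Unit = {
    // setMaster("local[2]") plays the same role as -Dspark.master=local during IDE debugging
    val conf = new SparkConf().setAppName("LocalDebugApp").setMaster("local[2]")
    val sc = new SparkContext(conf)
    println(s"Running with master = ${sc.master}")
    sc.stop()
  }
}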

package org.example

import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object SQLContextApp {
  def main(args: Array[String]): Unit = {
    // 1. Create the SparkContext and SQLContext
    val path = args(0)
    val sparkConf = new SparkConf()
    sparkConf.setAppName("SQLContextApp")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)

    // 2. Process the data: read the JSON file passed as the first argument
    val people = sqlContext.read.format("json").load(path)
    people.printSchema()
    people.show()

    // 3. Release the resources
    sc.stop()
  }
}



root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)

+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+
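
The loaded DataFrame can also be queried with SQL through the same SQLContext. A short sketch, assuming the people DataFrame and sqlContext from the program above (the view name people is arbitrary):

// Register the DataFrame as a temporary view and query it with SQL
people.createOrReplaceTempView("people")
sqlContext.sql("SELECT name FROM people WHERE age > 20").show()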



If the Windows cmd console reports "mvn is not recognized as an internal or external command, operable program or batch file", configure the Maven environment variables first:

Variable name: MAVEN_HOME
Variable value: E:\apache-maven-3.2.3

Variable name: Path
Variable value (append): ;%MAVEN_HOME%\bin

Then run the build from the project directory:

C:\Users\jacksun\IdeaProjects\SqarkSQL> mvn clean package -DskipTests

Testing on the cluster with spark-submit

spark-submit \
--name SQLContextApp \
--class org.example.SQLContextApp \
--master local[2] \
/home/hadoop/lib/sql-1.0.jar \
/home/hadoop/app/spark-2.1.0-bin-2.6.0-cdh5.7.0/examples/src/main/resources/people.json




HiveContextApp

Notes:
1) To use a HiveContext, you do not need to have an existing Hive setup
2) hive-site.xml must be visible to Spark (copy it into Spark's conf/ directory)

package org.example

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HiveContextApp {
  def main(args: Array[String]): Unit = {
    // 1. Create the SparkContext and HiveContext
    val sparkConf = new SparkConf()

    // In testing and production the app name and master are supplied by the spark-submit script
    //sparkConf.setAppName("HiveContextApp").setMaster("local[2]")

    val sc = new SparkContext(sparkConf)
    val hiveContext = new HiveContext(sc)

    // 2. Process the data: read the Hive table "emp"
    hiveContext.table("emp").show

    // 3. Release the resources
    sc.stop()
  }
}
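
Besides table(), the HiveContext can also run queries written in SQL. A brief sketch, assuming the same emp table exists in the metastore:

// The same table queried through SQL instead of the table() API
hiveContext.sql("SELECT empno, ename, sal FROM emp WHERE deptno = 10").show()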



spark-submit \
--name HiveContextApp \
--class org.example.HiveContextApp \
--master local[2] \
--jars /home/hadoop/software/mysql-connector-java-5.1.27-bin.jar \
/home/hadoop/lib/sql-1.0.jar



SparkSession

package org.example

import org.apache.spark.sql.SparkSession

object SparkSessionApp {
  def main(args: Array[String]) {

    val spark = SparkSession.builder().appName("SparkSessionApp")
      .master("local[2]").getOrCreate()

    val people = spark.read.json("people.json")
    people.show()

    spark.stop()
  }
}
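
In Spark 2.x the SparkSession also covers what the HiveContext did. A minimal sketch, assuming hive-site.xml is in Spark's conf/ directory and the emp table exists (the object name SparkSessionHiveApp is only for illustration):

package org.example

import org.apache.spark.sql.SparkSession

object SparkSessionHiveApp {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() gives the session access to the Hive metastore,
    // taking over the role of the HiveContext in the earlier example
    val spark = SparkSession.builder()
      .appName("SparkSessionHiveApp")
      .master("local[2]")
      .enableHiveSupport()
      .getOrCreate()

    spark.table("emp").show()

    spark.stop()
  }
}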


Spark Shell
  • Start Hive
[hadoop@hadoop001 bin]$ pwd
/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin
[hadoop@hadoop001 bin]$ hive
ls: cannot access /home/hadoop/app/spark-2.1.0-bin-2.6.0-cdh5.7.0/lib/spark-assembly-*.jar: No such file or directory
which: no hbase in (/home/hadoop/app/spark-2.1.0-bin-2.6.0-cdh5.7.0/bin:/home/hadoop/app/scala-2.11.8/bin:/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/app/apache-maven-3.3.9/bin:/home/hadoop/app/jdk1.7.0_51/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin)

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.1.0-cdh5.7.0/lib/hive-common-1.1.0-cdh5.7.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> 



  • Copy hive-site.xml into Spark's conf directory
    [hadoop@hadoop001 conf]$ cp hive-site.xml ~/app/spark-2.1.0-bin-2.6.0-cdh5.7.0/conf/
  • Start spark-shell with the MySQL driver jar
    spark-shell --master local[2] --jars /home/hadoop/software/mysql-connector-java-5.1.27-bin.jar
scala> spark.sql("show tables").show
+--------+------------+-----------+
|database|   tableName|isTemporary|
+--------+------------+-----------+
| default|        dept|      false|
| default|         emp|      false|
| default|hive_table_1|      false|
| default|hive_table_2|      false|
| default|           t|      false|
+--------+------------+-----------+
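
The same listing is also available programmatically through the Catalog API; a one-line sketch for the same spark-shell session:

// List the tables of the current database via the Catalog API
spark.catalog.listTables().show()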


hive> show tables;
OK
dept
emp
hive_wordcount



scala> spark.sql("select * from emp e join dept d on e.deptno=d.deptno").show
+-----+------+---------+----+----------+------+------+------+------+----------+--------+
|empno| ename|      job| mgr|  hiredate|   sal|  comm|deptno|deptno|     dname|     loc|
+-----+------+---------+----+----------+------+------+------+------+----------+--------+
| 7369| SMITH|    CLERK|7902|1980-12-17| 800.0|  null|    20|    20|  RESEARCH|  DALLAS|
| 7499| ALLEN| SALESMAN|7698| 1981-2-20|1600.0| 300.0|    30|    30|     SALES| CHICAGO|
| 7521|  WARD| SALESMAN|7698| 1981-2-22|1250.0| 500.0|    30|    30|     SALES| CHICAGO|
| 7566| JONES|  MANAGER|7839|  1981-4-2|2975.0|  null|    20|    20|  RESEARCH|  DALLAS|
| 7654|MARTIN| SALESMAN|7698| 1981-9-28|1250.0|1400.0|    30|    30|     SALES| CHICAGO|
| 7698| BLAKE|  MANAGER|7839|  1981-5-1|2850.0|  null|    30|    30|     SALES| CHICAGO|
| 7782| CLARK|  MANAGER|7839|  1981-6-9|2450.0|  null|    10|    10|ACCOUNTING|NEW YORK|
| 7788| SCOTT|  ANALYST|7566| 1987-4-19|3000.0|  null|    20|    20|  RESEARCH|  DALLAS|
| 7839|  KING|PRESIDENT|null|1981-11-17|5000.0|  null|    10|    10|ACCOUNTING|NEW YORK|
| 7844|TURNER| SALESMAN|7698|  1981-9-8|1500.0|   0.0|    30|    30|     SALES| CHICAGO|
| 7876| ADAMS|    CLERK|7788| 1987-5-23|1100.0|  null|    20|    20|  RESEARCH|  DALLAS|
| 7900| JAMES|    CLERK|7698| 1981-12-3| 950.0|  null|    30|    30|     SALES| CHICAGO|
| 7902|  FORD|  ANALYST|7566| 1981-12-3|3000.0|  null|    20|    20|  RESEARCH|  DALLAS|
| 7934|MILLER|    CLERK|7782| 1982-1-23|1300.0|  null|    10|    10|ACCOUNTING|NEW YORK|
+-----+------+---------+----+----------+------+------+------+------+----------+--------+

hive> select * from emp e join dept d on e.deptno=d.deptno
    > ;
Query ID = hadoop_20201020054545_f7fbda3e-439e-409e-b2ce-3c553d969ed4
Total jobs = 1
20/10/20 05:48:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Execution log at: /tmp/hadoop/hadoop_20201020054545_f7fbda3e-439e-409e-b2ce-3c553d969ed4.log
2020-10-20 05:48:49	Starting to launch local task to process map join;	maximum memory = 477102080
2020-10-20 05:48:51	Dump the side-table for tag: 1 with group count: 4 into file: file:/tmp/hadoop/5c8577b3-c00d-4ece-9899-c0e3de66f2f2/hive_2020-10-20_05-48-27_437_556791932773953494-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
2020-10-20 05:48:51	Uploaded 1 File to: file:/tmp/hadoop/5c8577b3-c00d-4ece-9899-c0e3de66f2f2/hive_2020-10-20_05-48-27_437_556791932773953494-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable (404 bytes)
2020-10-20 05:48:51	End of local task; Time Taken: 2.691 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1602849227137_0002, Tracking URL = http://hadoop001:8088/proxy/application_1602849227137_0002/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1602849227137_0002
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2020-10-20 05:49:13,663 Stage-3 map = 0%,  reduce = 0%
2020-10-20 05:49:36,950 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 13.08 sec
MapReduce Total cumulative CPU time: 13 seconds 80 msec
Ended Job = job_1602849227137_0002
MapReduce Jobs Launched: 
Stage-Stage-3: Map: 1   Cumulative CPU: 13.08 sec   HDFS Read: 7639 HDFS Write: 927 SUCCESS
Total MapReduce CPU Time Spent: 13 seconds 80 msec
OK
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20	20	RESEARCH	DALLAS
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30	30	SALES	CHICAGO
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30	30	SALES	CHICAGO
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20	20	RESEARCH	DALLAS
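
For comparison, the join run above with spark.sql can also be expressed with the DataFrame API in the same spark-shell session. A short sketch, assuming the emp and dept tables from the metastore:

// DataFrame-API version of the emp/dept join
val emp = spark.table("emp")
val dept = spark.table("dept")
emp.join(dept, "deptno").select("empno", "ename", "dname", "loc").show()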