A Spark-Based Movie Recommendation System (Recommender Systems, Part 2)

Part 4 - Recommender System - Data ETL

  • This module cleans the source data and loads the cleaned data into Hive tables.
Prerequisites:

Spark + Hive

vim $SPARK_HOME/conf/hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://hadoop001:9083</value>
    </property>
</configuration>
  • Start the Hive metastore server

[root@hadoop001 conf]# nohup hive --service metastore &

[root@hadoop001 conf]# netstat -tanp | grep 9083
tcp 0 0 0.0.0.0:9083 0.0.0.0:* LISTEN 24787/java
[root@hadoop001 conf]#

Test:
[root@hadoop001 ~]# spark-shell --master local[2]

scala> spark.sql("select * from liuge_db.dept").show;
+------+-------+-----+                                                          
|deptno|  dname|  loc|
+------+-------+-----+
|     1|  caiwu| 3lou|
|     2|  renli| 4lou|
|     3|  kaifa| 5lou|
|     4|qiantai| 1lou|
|     5|lingdao|4 lou|
+------+-------+-----+

==> This confirms that Spark SQL can access Hive's metadata, which is required for everything that follows.
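Inside an application (rather than spark-shell, which wires this up for you), the same metadata access is obtained by calling enableHiveSupport() on the SparkSession builder. A minimal sketch, reusing the test table queried above; the app name is arbitrary:

import org.apache.spark.sql.SparkSession

// hive-site.xml on the classpath ($SPARK_HOME/conf) tells Spark where
// the metastore lives; enableHiveSupport() makes spark.sql() use it.
val spark = SparkSession.builder()
  .appName("hive-access-check")   // arbitrary name for this sketch
  .master("local[2]")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("select * from liuge_db.dept").show()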

However, since we are running in standalone mode, we also need to start the Master and Worker processes:
[root@hadoop001 sbin]# pwd
/root/app/spark-2.4.3-bin-2.6.0-cdh5.7.0/sbin
[root@hadoop001 sbin]# ./start-all.sh

[root@hadoop001 sbin]# jps
26023 Master
26445 Worker

Commonly used Spark ports

8080	spark.master.ui.port	Master WebUI
8081	spark.worker.ui.port	Worker WebUI
18080	spark.history.ui.port	History server WebUI
7077	SPARK_MASTER_PORT	    Master port
6066	spark.master.rest.port	Master REST port
4040	spark.ui.port	        Driver WebUI

Now open: http://hadoop001:8080/

(Screenshot: Spark Master WebUI)

Start Coding the Project

Build the project with IDEA + Scala + Maven

Step 1: After creating a new Scala project, use the following pom as a reference for its configuration:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.csylh</groupId>
  <artifactId>movie-recommend</artifactId>
  <version>1.0</version>
  <inceptionYear>2008</inceptionYear>
  <properties>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.4.3</spark.version>
  </properties>

  <repositories>
    <repository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </repository>
  </repositories>

  <dependencies>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.6.0</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-mllib_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>1.1.1</version>
    </dependency>
    <!-- alternative kafka-clients version: 0.10.2.1 -->

    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.39</version>
    </dependency>

    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>

  </dependencies>

  <build>
    <!--<sourceDirectory>src/main/scala</sourceDirectory>-->
    <!--<testSourceDirectory>src/test/scala</testSourceDirectory>-->
    <plugins>
      <plugin>
        <!-- see http://davidb.github.com/scala-maven-plugin -->
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.1.3</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
            <configuration>
              <args>
                <arg>-dependencyfile</arg>
                <arg>${project.build.directory}/.scala_dependencies</arg>
              </args>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.13</version>
        <configuration>
          <useFile>false</useFile>
          <disableXmlReport>true</disableXmlReport>
          <!-- If you have classpath issue like NoDefClassError,... -->
          <!-- useManifestOnlyJar>false</useManifestOnlyJar -->
          <includes>
            <include>**/*Test.*</include>
            <include>**/*Suite.*</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

Step 2: Create com.csylh.recommend.dataclearer.SourceDataETLApp

import com.csylh.recommend.entity.{Links, Movies, Ratings, Tags}
import org.apache.spark.sql.{SaveMode, SparkSession}
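The original listing breaks off after these imports. Below is a minimal sketch of the kind of ETL such an app performs, not the author's actual code: it assumes ml-1m-style "::"-delimited raw files under a hypothetical HDFS path and a hypothetical Hive database named movie; file names, field types, and table names are all assumptions. Links and Tags would be handled the same way as Movies and Ratings.

import org.apache.spark.sql.{SaveMode, SparkSession}

// Stand-ins for the entity case classes imported above; in the real
// project they live in com.csylh.recommend.entity. Fields follow the
// MovieLens schema (assumption).
case class Movies(movieId: String, title: String, genres: String)
case class Ratings(userId: String, movieId: String, rating: String, timestamp: String)

object SourceDataETLApp {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() reads $SPARK_HOME/conf/hive-site.xml, so
    // saveAsTable() below writes into the Hive warehouse.
    val spark = SparkSession.builder()
      .appName("SourceDataETLApp")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Hypothetical HDFS directory holding the raw files.
    val basePath = "hdfs://hadoop001:8020/movie/source-data"

    // Clean and type the raw "::"-delimited text into DataFrames.
    val movies = spark.sparkContext.textFile(s"$basePath/movies.dat")
      .map(_.split("::"))
      .filter(_.length == 3)                      // drop malformed rows
      .map(f => Movies(f(0).trim, f(1).trim, f(2).trim))
      .toDF()

    val ratings = spark.sparkContext.textFile(s"$basePath/ratings.dat")
      .map(_.split("::"))
      .filter(_.length == 4)
      .map(f => Ratings(f(0).trim, f(1).trim, f(2).trim, f(3).trim))
      .toDF()

    // Overwrite keeps the ETL repeatable; database name "movie" is
    // an assumption -- create it in Hive first.
    movies.write.mode(SaveMode.Overwrite).saveAsTable("movie.movies")
    ratings.write.mode(SaveMode.Overwrite).saveAsTable("movie.ratings")

    spark.stop()
  }
}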
