Hadoop Friend Recommendation System: Project Architecture Setup and User Login Implementation

Original source: https://blog.csdn.net/xiaokang123456kao/article/details/74567932

Project index: Overview of the Hadoop-based Friend Recommendation System project

I. Creating the Maven Project

Create a Maven web project (i.e., a WAR project) and add the following dependencies:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>cn.edu.lnu</groupId>
  <artifactId>ssh_hadoop</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>war</packaging>
   <!-- Centralized dependency version definitions -->
    <properties>
        <junit.version>4.12</junit.version>
        <spring.version>4.1.3.RELEASE</spring.version>
        <mybatis.version>3.2.8</mybatis.version>
        <mybatis.spring.version>1.2.2</mybatis.spring.version>
        <mybatis.paginator.version>1.2.15</mybatis.paginator.version>
        <pagehelper.version>3.4.2-fix</pagehelper.version>
        <mysql.version>5.1.32</mysql.version>
        <druid.version>1.0.9</druid.version>
        <jstl.version>1.2</jstl.version>
        <servlet-api.version>2.5</servlet-api.version>
        <jsp-api.version>2.0</jsp-api.version>
        <hadoop.version>2.7.3</hadoop.version>
        <commons-fileupload.version>1.3.2</commons-fileupload.version>    
        <commons-io.version>2.5</commons-io.version>
        <jackson.version>2.4.2</jackson.version>
    </properties>
    
    <dependencies>
            <!-- Unit testing -->
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>${junit.version}</version>
                <scope>test</scope>
            </dependency>
            <!-- Jackson JSON processing library -->
            <dependency>
                <groupId>com.fasterxml.jackson.core</groupId>
                <artifactId>jackson-databind</artifactId>
                <version>${jackson.version}</version>
            </dependency>
            <!-- Mybatis -->
            <dependency>
                <groupId>org.mybatis</groupId>
                <artifactId>mybatis</artifactId>
                <version>${mybatis.version}</version>
            </dependency>
            <dependency>
                <groupId>org.mybatis</groupId>
                <artifactId>mybatis-spring</artifactId>
                <version>${mybatis.spring.version}</version>
            </dependency>
            <dependency>
                <groupId>com.github.miemiedev</groupId>
                <artifactId>mybatis-paginator</artifactId>
                <version>${mybatis.paginator.version}</version>
            </dependency>
            <dependency>
                <groupId>com.github.pagehelper</groupId>
                <artifactId>pagehelper</artifactId>
                <version>${pagehelper.version}</version>
            </dependency>
            <!-- MySql -->
            <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
                <version>${mysql.version}</version>
            </dependency>
            <!-- Connection pool -->
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>druid</artifactId>
                <version>${druid.version}</version>
            </dependency>
            <!-- Spring -->
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-context</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-beans</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-aspects</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <!-- JSP-related -->
            <dependency>
                <groupId>jstl</groupId>
                <artifactId>jstl</artifactId>
                <version>${jstl.version}</version>
            </dependency>
            <dependency>
                <groupId>javax.servlet</groupId>
                <artifactId>servlet-api</artifactId>
                <version>${servlet-api.version}</version>
                <scope>provided</scope>
            </dependency>
            <dependency>
                <groupId>javax.servlet</groupId>
                <artifactId>jsp-api</artifactId>
                <version>${jsp-api.version}</version>
                <scope>provided</scope>
            </dependency>
            <!-- File upload components -->
            <dependency>
                <groupId>commons-fileupload</groupId>
                <artifactId>commons-fileupload</artifactId>
                <version>${commons-fileupload.version}</version>
            </dependency>
            <dependency>
            <groupId>commons-io</groupId>
            <artifactId>commons-io</artifactId>
            <version>${commons-io.version}</version>
        </dependency>
        <!-- Hadoop dependencies -->
        <dependency>  
            <groupId>org.apache.hadoop</groupId>  
            <artifactId>hadoop-common</artifactId>  
            <version>${hadoop.version}</version>  
        </dependency>  
        <dependency>  
            <groupId>org.apache.hadoop</groupId>  
            <artifactId>hadoop-hdfs</artifactId>  
            <version>${hadoop.version}</version>  
        </dependency>  
        <dependency>  
            <groupId>org.apache.hadoop</groupId>  
            <artifactId>hadoop-client</artifactId>  
            <version>${hadoop.version}</version>  
        </dependency> 
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-examples</artifactId>
            <version>${hadoop.version}</version>
        </dependency> 
    </dependencies>
    
  <build>
      <finalName>ssh_hadoop</finalName>
         <plugins>
             <plugin>
                 <artifactId>maven-war-plugin</artifactId>
                 <version>3.0.0</version>
             </plugin>
             <plugin>  
                <groupId>org.apache.maven.plugins</groupId>  
                <artifactId>maven-compiler-plugin</artifactId>  
                <version>3.2</version>  
                <configuration>  
                    <source>1.7</source>  
                    <target>1.7</target>  
                </configuration>  
            </plugin>  
         </plugins>
     </build>
</project>

Setting Up the SSM Framework

1. Database properties file

db.properties
This file configures the JDBC driver class, the MySQL URL, and the MySQL username and password.

jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/friend?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
jdbc.username=root
jdbc.password=password

2. Spring MVC configuration file

springmvc.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:mvc="http://www.springframework.org/schema/mvc"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-4.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <context:component-scan base-package="cn.edu.lnu.controller" />
    <mvc:annotation-driven />
    <bean
        class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix" value="/WEB-INF/jsp/" />
        <property name="suffix" value=".jsp" />
    </bean>
    
    <!-- Static resource mapping -->
    <!-- <mvc:resources location="/WEB-INF/css/" mapping="/css/**"/>
    <mvc:resources location="/WEB-INF/js/" mapping="/js/**"/> -->
    
    <!-- File upload resolver -->
    <!-- <bean id="multipartResolver"
        class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
        Set the default encoding
        <property name="defaultEncoding" value="UTF-8"></property>
        Set the maximum upload size to 5MB (5*1024*1024)
        <property name="maxUploadSize" value="5242880"></property>
    </bean>
     -->
    
</beans>

3. MyBatis configuration file

SqlMapConfig.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration
        PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
<!-- Pagination plugin configuration -->
    <!-- <plugins>
        <plugin interceptor="com.github.pagehelper.PageHelper">
            Set the database dialect; Oracle, MySQL, MariaDB, SQLite, Hsqldb, and PostgreSQL are supported
            <property name="dialect" value="mysql"/>
        </plugin>
    </plugins> -->
</configuration> 

Ehcache is used here for caching; the corresponding configuration file is as follows:
ehcache.xml

<ehcache>

    <!-- Sets the path to the directory where cache .data files are created.

         If the path is a Java System Property it is replaced by
         its value in the running VM.

         The following properties are translated:
         user.home - User's home directory
         user.dir - User's current working directory
         java.io.tmpdir - Default temp file path -->
    <diskStore path="java.io.tmpdir"/>


    <!--Default Cache configuration. These will applied to caches programmatically created through
        the CacheManager.

        The following attributes are required for defaultCache:

        maxInMemory       - Sets the maximum number of objects that will be created in memory
        eternal           - Sets whether elements are eternal. If eternal,  timeouts are ignored and the element
                            is never expired.
        timeToIdleSeconds - Sets the time to idle for an element before it expires. Is only used
                            if the element is not eternal. Idle time is now - last accessed time
        timeToLiveSeconds - Sets the time to live for an element before it expires. Is only used
                            if the element is not eternal. TTL is now - creation time
        overflowToDisk    - Sets whether elements can overflow to disk when the in-memory cache
                            has reached the maxInMemory limit.

        -->
    <defaultCache
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        overflowToDisk="true"
        />

    <!--Predefined caches.  Add your cache configuration settings here.
        If you do not have a configuration for your cache a WARNING will be issued when the
        CacheManager starts

        The following attributes are required for defaultCache:

        name              - Sets the name of the cache. This is used to identify the cache. It must be unique.
        maxInMemory       - Sets the maximum number of objects that will be created in memory
        eternal           - Sets whether elements are eternal. If eternal,  timeouts are ignored and the element
                            is never expired.
        timeToIdleSeconds - Sets the time to idle for an element before it expires. Is only used
                            if the element is not eternal. Idle time is now - last accessed time
        timeToLiveSeconds - Sets the time to live for an element before it expires. Is only used
                            if the element is not eternal. TTL is now - creation time
        overflowToDisk    - Sets whether elements can overflow to disk when the in-memory cache
                            has reached the maxInMemory limit.

        -->

    <!-- Sample cache named sampleCache1
        This cache contains a maximum in memory of 10000 elements, and will expire
        an element if it is idle for more than 5 minutes and lives for more than
        10 minutes.

        If there are more than 10000 elements it will overflow to the
        disk cache, which in this configuration will go to wherever java.io.tmp is
        defined on your system. On a standard Linux system this will be /tmp"
        -->
    <cache name="sampleCache1"
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="300"
        timeToLiveSeconds="600"
        overflowToDisk="true"
        />

    <!-- Sample cache named sampleCache2
        This cache contains 1000 elements. Elements will always be held in memory.
        They are not expired. -->
    <cache name="sampleCache2"
        maxElementsInMemory="1000"
        eternal="true"
        timeToIdleSeconds="0"
        timeToLiveSeconds="0"
        overflowToDisk="false"
        />

    <!-- Place configuration for your caches following -->

</ehcache>
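
The ehcache.xml above only defines the caches themselves; the original post does not show where Ehcache is actually plugged into MyBatis. As a rough sketch (an assumption, not shown in the original project), the usual wiring is to add the mybatis-ehcache adapter to the pom and declare a cache element in each mapper XML that should use the Ehcache-backed second-level cache:

<!-- Hypothetical pom.xml addition: MyBatis/Ehcache adapter (the version is an assumption) -->
<dependency>
    <groupId>org.mybatis.caches</groupId>
    <artifactId>mybatis-ehcache</artifactId>
    <version>1.0.3</version>
</dependency>

<!-- Hypothetical addition to a mapper XML (e.g. UserMapper.xml): back this mapper's second-level cache with Ehcache -->
<cache type="org.mybatis.caches.ehcache.EhcacheCache"/>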

4. Adding the Spring configuration file

applicationContext-*.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:context="http://www.springframework.org/schema/context" xmlns:p="http://www.springframework.org/schema/p"
    xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.0.xsd
    http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-4.0.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.0.xsd
    http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-4.0.xsd">

    <!-- Load the database properties file -->
    <context:property-placeholder location="classpath:resource/*.properties" />
    <!-- Database connection pool -->
    <bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource"
        destroy-method="close">
        <property name="url" value="${jdbc.url}" />
        <property name="username" value="${jdbc.username}" />
        <property name="password" value="${jdbc.password}" />
        <property name="driverClassName" value="${jdbc.driver}" />
        <property name="maxActive" value="10" />
        <property name="minIdle" value="5" />
    </bean>
    <!-- Configure the SqlSessionFactory -->
    <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
        <!-- Entity classes: register package-level type aliases -->
        <property name="typeAliasesPackage" value="cn.edu.lnu.pojo"></property>
        <property name="configLocation" value="classpath:mybatis/SqlMapConfig.xml"></property>
        <property name="dataSource" ref="dataSource"></property>
    </bean>
    <!-- Scan the mapper package and create mapper proxy objects -->
    <bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
        <property name="basePackage" value="cn.edu.lnu.mapper"></property>
    </bean>

    <!-- Scan the package containing the Service implementation classes -->
    <context:component-scan base-package="cn.edu.lnu.service"></context:component-scan>
</beans>
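
The context above declares the tx namespace but never configures transactions. If declarative transactions are wanted for the service layer, a minimal sketch (a hypothetical addition, not part of the original post) would add a DataSourceTransactionManager plus annotation-driven support:

<!-- Hypothetical addition to applicationContext-*.xml -->
<bean id="transactionManager"
    class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <!-- Reuses the Druid dataSource bean defined above -->
    <property name="dataSource" ref="dataSource" />
</bean>
<!-- Enables @Transactional on service classes -->
<tx:annotation-driven transaction-manager="transactionManager" />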

5. Modifying web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    id="ssh_hadoop" version="2.5">
    <display-name>ldmemory-manager</display-name>
    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
        <welcome-file>index.htm</welcome-file>
        <welcome-file>index.jsp</welcome-file>
        <welcome-file>index1.jsp</welcome-file>
        <welcome-file>default.html</welcome-file>
        <welcome-file>default.htm</welcome-file>
        <welcome-file>default.jsp</welcome-file>
    </welcome-file-list>
    
    <!-- Load the Spring container -->
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:spring/applicationContext-*.xml</param-value>
    </context-param>
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <!-- Fix garbled characters in POST requests -->
    <filter>
        <filter-name>CharacterEncodingFilter</filter-name>
        <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
        <init-param>
            <param-name>encoding</param-name>
            <param-value>utf-8</param-value>
        </init-param>
    </filter>
    <filter-mapping>
        <filter-name>CharacterEncodingFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    <!-- Spring MVC front controller -->
    <servlet>
        <servlet-name>ldmemory-manager</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <!-- contextConfigLocation is optional; if it is not configured, the Spring MVC configuration file defaults to /WEB-INF/{servlet-name}-servlet.xml -->
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:spring/springmvc.xml</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>ldmemory-manager</servlet-name>
        <url-pattern>*.action</url-pattern>
    </servlet-mapping>
</web-app>

6. Creating the packages the project needs

 
These are the packages referenced by the configuration above: cn.edu.lnu.controller, cn.edu.lnu.service, cn.edu.lnu.service.impl, cn.edu.lnu.mapper, and cn.edu.lnu.pojo. They will be covered in detail later; for now, just create them.

II. A Simple Implementation of User Login

1. POJO layer

The user entity class is as follows:
package cn.edu.lnu.pojo;
/**
 * User entity class
 * @author yxk
 * 2018-06-11
 */
public class User {
	private int id;
	private String description;
	private String username;
	private String password;
	public int getId() {
		return id;
	}
	public void setId(int id) {
		this.id = id;
	}
	public String getDescription() {
		return description;
	}
	public void setDescription(String description) {
		this.description = description;
	}
	public String getUsername() {
		return username;
	}
	public void setUsername(String username) {
		this.username = username;
	}
	public String getPassword() {
		return password;
	}
	public void setPassword(String password) {
		this.password = password;
	}
	public User() {
	}
	
}

2. DAO layer

Extract the UserMapper interface:

package cn.edu.lnu.mapper;

import org.apache.ibatis.annotations.Param;

import cn.edu.lnu.pojo.User;

public interface UserMapper {
	User login(@Param("username")String username,@Param("password")String password); 
}
Then write the mapping file UserMapper.xml:
<?xml version="1.0" encoding="UTF-8"?> 
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"  
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">

<mapper namespace="cn.edu.lnu.mapper.UserMapper">
	<!-- User login -->
	<select id="login" resultType="User">
		select * from loginuser where username = #{username} and password = #{password}
	</select>
</mapper>
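
Note that the SqlSessionFactoryBean shown earlier does not set mapperLocations, so MyBatis will look for UserMapper.xml on the classpath in the same package as the UserMapper interface (cn/edu/lnu/mapper/). Placing the XML under src/main/resources/cn/edu/lnu/mapper/ works with the pom as-is; if you instead keep it next to the Java source, a resources entry along these lines would be needed (a hypothetical addition to the <build> section of pom.xml):

<!-- Hypothetical: package mapper XML files that live under src/main/java -->
<resources>
    <resource>
        <directory>src/main/java</directory>
        <includes>
            <include>**/*.xml</include>
        </includes>
    </resource>
    <resource>
        <directory>src/main/resources</directory>
    </resource>
</resources>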

3. Service layer

Write the UserService interface:
package cn.edu.lnu.service;

public interface UserService {
	public boolean login(String username, String password);
}
Write the UserServiceImpl implementation class:
package cn.edu.lnu.service.impl;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import cn.edu.lnu.mapper.UserMapper;
import cn.edu.lnu.pojo.User;
import cn.edu.lnu.service.UserService;

/**
 * User module
 * 
 * @author yxk 2018-06-11
 */
@Service
public class UserServiceImpl implements UserService {
	@Autowired
	private UserMapper mapper;
	/*
	 * User login
	 * 
	 * @see cn.edu.lnu.service.UserService#login(java.lang.String,
	 * java.lang.String)
	 */
	@Override
	public boolean login(String username, String password) {
		User user = mapper.login(username, password);
		if (user == null) {
			return false;
		}
		return true;
	}

}

4. Controller layer

package cn.edu.lnu.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

import cn.edu.lnu.service.UserService;

/**
 * User login module
 * 
 * @author 尹显坤 2018-06-11
 */
@Controller
@RequestMapping("/login")
public class UserController {

	@Autowired
	private UserService service;

	/*
	 * System user login
	 */
	@RequestMapping("/loginUser_login.action")
	@ResponseBody
	public String login(String username, String password, Model model) {
		boolean flag = service.login(username, password);
		return String.valueOf(flag);
	}

}

5. Front-end logic

<%@ page language="java" pageEncoding="UTF-8" contentType="text/html; charset=UTF-8"%>

<script type="text/javascript">
    $(function() {

        var formParam = {
            url : 'login/loginUser_login.action',
            success : function(result) {
            //  var r = $.parseJSON(result);
                //if (r.success) {
                console.info(result);
                if(result=='true'){
                    $('#user_login_loginDialog').dialog('close');

                //  $('#sessionInfoDiv').html(formatString('[<strong>{0}</strong>], welcome!', r.obj.loginName));

                //  $('#layout_east_onlineDatagrid').datagrid('load', {});
                } else {
                    $.messager.show({
                        title : 'Notice',
                        msg : 'Incorrect username or password!'
                    });
                }
            }
        };

        $('#user_login_loginInputForm').form(formParam);



        $('#user_login_loginDialog').show().dialog({
            modal : true,
            title : 'System Login',
            closable : false,
            buttons : [
            // {
            //  text : 'Register',
            //  handler : function() {
            //      $('#user_reg_regDialog').dialog('open');
            //  }
            //},
             {
                text : 'Log in',
                handler : function() {
                    $('#user_login_loginInputForm').submit();
                }
            } ]
        });


        var sessionInfo_userId = '${sessionInfo.userId}';
        if (sessionInfo_userId) {/* If the user is already logged in, do not pop up the login dialog again after a page refresh */
            $('#user_login_loginDialog').dialog('close');
        }

    });
</script>
<div id="user_login_loginDialog" style="display: none;width: 300px;height: 210px;overflow: hidden;">
    <div id="user_login_loginTab">
        <div style="overflow: hidden;">
            <div>
                <div class="info">
                    <div class="tip icon-tip"></div>
                    <div>The username is "admin" and the password is "admin"</div>
                </div>
            </div>
            <div align="center">
                <form method="post" id="user_login_loginInputForm">
                    <table class="tableForm">
                        <tr>
                            <th>Username</th>
                            <td><input name="username" class="easyui-validatebox" data-options="required:true" value="admin" />
                            </td>
                        </tr>
                        <tr>
                            <th>Password</th>
                            <td><input type="password" name="password" class="easyui-validatebox" data-options="required:true" style="width: 150px;" value="admin" /></td>
                        </tr>
                    </table>
                </form>
            </div>
        </div>
    </div>
</div>

6. Testing

Start the project with Maven's Tomcat plugin and visit http://localhost:8080/ssh_hadoop/basic.jsp to test.
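
The pom.xml shown earlier does not actually declare a Tomcat plugin, so a Maven-based launch needs one to be added. A minimal sketch of the tomcat7-maven-plugin configuration (the port and context path below are assumptions matching the URL above) would be:

<!-- Hypothetical addition to the <plugins> section of pom.xml; run with "mvn tomcat7:run" -->
<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.2</version>
    <configuration>
        <port>8080</port>
        <path>/ssh_hadoop</path>
    </configuration>
</plugin>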




