Tutorial: Configuring Single-Node Hadoop, Spark, and PySpark and Installing Anaconda on Ubuntu

This article walks through configuring the Java JDK, Hadoop, Scala, Spark, and PySpark and installing Anaconda on Ubuntu, covering environment variable setup, configuration-file changes, and how to verify that each installation succeeded, so that readers can smoothly set up a big-data development environment.

(This post is simply a record of some lessons learned while installing and configuring the environment; I hope it helps.)

1. Install the Java JDK

For the JDK version, JDK 1.8 is generally the best choice because of its broad compatibility.
(Download link: https://pan.baidu.com/s/11Y_dum09skPRspHNjhaBwA
Extraction code: 3loc)
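If you downloaded the archive on another machine, one way to copy it onto the Ubuntu host is scp; the user name, host, and destination path below are placeholders, so adjust them to your own setup:

scp jdk-8u221-linux-x64.tar.gz demo@ubuntu-host:/home/demo/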
Once the archive is on the Linux machine, extract it with:

tar -zxvf <archive-path> -C <target-directory>

For example:

tar -zxvf /home/demo/jdk-8u221-linux-x64.tar.gz -C /home/demo/hadoopApp/

After extraction, add the environment variables:

vi /etc/profile

In the vim editor, press i to enter insert mode, then add your own JDK path to the file, for example (note that shell variable assignments must not contain spaces around the = sign):

export JAVA_HOME=/home/demo/hadoopApp/jdk1.8.0_221
export PATH=$PATH:$JAVA_HOME/bin

After adding these lines, press ESC and type :wq to save the file.
Then reload the configuration file so the changes take effect:

source /etc/profile
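To quickly confirm that the variables are visible in the current shell, you can check:

echo $JAVA_HOME   # should print /home/demo/hadoopApp/jdk1.8.0_221
which java        # should point into the JDK's bin directory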

Then verify that the JDK itself runs:

java -version

If the command prints the installed Java version (a line such as java version "1.8.0_221"), the JDK is configured successfully.

2. Install Hadoop

Hadoop download link: https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/. The Tsinghua mirror is used here because it carries the Hadoop 2.x releases, whereas the official site mostly lists only the latest versions; the mirror is also faster to download from.
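For example, fetching release 3.1.2 (the version extracted below) directly from the mirror might look like this; the exact URL depends on which releases the mirror currently hosts:

wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.1.2/hadoop-3.1.2.tar.gz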
After downloading, extract it the same way as the JDK:

tar -zxvf /home/demo/hadoop-3.1.2.tar.gz -C /home/demo/hadoopApp/

After extraction, a few configuration files need to be modified; they are all located in the etc/hadoop/ folder inside the Hadoop installation directory (not the system /etc).
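With the paths used in this tutorial, that directory would be:

cd /home/demo/hadoopApp/hadoop-3.1.2/etc/hadoop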
The first file is hadoop-env.sh, which mainly configures the JDK environment.
In that directory, run:

vi hadoop-env.sh

Add the following line (the sketch below reuses the JDK path from step 1; substitute your own installation path):
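export JAVA_HOME=/home/demo/hadoopApp/jdk1.8.0_221   # same JDK path as set in /etc/profile; replace with your own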
