Hadoop Study Notes (3)

Originally posted January 9, 2011, 19:23

Hadoop Core Server Configuration

Default Shared File System URI and NameNode Location for HDFS
The default value is file:///, which instructs the framework to use the local file system. An example of an HDFS URI is hdfs://NamenodeHost[:8020]/, which instructs the framework to use the shared file system (HDFS).
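As a sketch, this URI is supplied through the fs.default.name property; the host name NamenodeHost below is a placeholder:

```xml
<!-- hadoop-site.xml (split into core-site.xml in later releases) -->
<property>
  <name>fs.default.name</name>
  <!-- hdfs://host:port/ selects HDFS; file:/// is the default -->
  <value>hdfs://NamenodeHost:8020/</value>
</property>
```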
JobTracker Host and Port
The URI specified in this parameter informs the Hadoop Core framework of the JobTracker's location. The default value is local, which indicates that no JobTracker server is to be run and all tasks will run in a single JVM. JobtrackerHost is the host on which the JobTracker server process will be run. This value may be altered by individual jobs.
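A minimal sketch of the corresponding configuration entry; the host and port below are placeholders:

```xml
<property>
  <name>mapred.job.tracker</name>
  <!-- host:port of the JobTracker; the default is the string "local" -->
  <value>JobtrackerHost:8021</value>
</property>
```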
Maximum Concurrent Map Tasks per TaskTracker
The mapred.tasktracker.map.tasks.maximum parameter sets the maximum number of map tasks that a TaskTracker server process may run on a host at one time. Each TaskTracker slot runs one map task at a time, but a single map task may itself use many threads when a multithreaded map runner is used. The thread count can be set as follows:
    JobConf.setInt("mapred.map.multithreadedrunner.threads", threadCount);
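The per-TaskTracker map slot count itself is set in the configuration file; the value 4 below is only an example:

```xml
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <!-- maximum number of map tasks run concurrently by one TaskTracker -->
  <value>4</value>
</property>
```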
Maximum Concurrent Reduce Tasks per TaskTracker
Reduce tasks tend to be I/O bound, and it is not uncommon to have the per-machine maximum reduce task value set to 1 or 2.
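Following that guideline, a sketch of the configuration entry:

```xml
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <!-- reduce tasks are often I/O bound, so a small value is common -->
  <value>2</value>
</property>
```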
JVM Options for the Task Virtual Machines
During the run phase of a job, there may be up to mapred.tasktracker.map.tasks.maximum map tasks and mapred.tasktracker.reduce.tasks.maximum reduce tasks running simultaneously on each TaskTracker node, as well as the TaskTracker JVM.
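The options passed to each task JVM are commonly supplied through mapred.child.java.opts; the heap size below is an example value, not a recommendation:

```xml
<property>
  <name>mapred.child.java.opts</name>
  <!-- passed to every task JVM launched by the TaskTracker -->
  <value>-Xmx200m</value>
</property>
```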
Enable Job Control Options on the Web Interfaces
Both the JobTracker and the NameNode provide a web interface for monitoring and control. By default, the JobTracker provides web service on http://JobtrackerHost:50030 and the NameNode provides web service on http://NamenodeHost:50070.
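A sketch, assuming the webinterface.private.actions property of contemporary Hadoop releases (default false) gates these job control actions:

```xml
<property>
  <name>webinterface.private.actions</name>
  <!-- true enables job-kill and similar controls on the web UIs -->
  <value>true</value>
</property>
```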


Interprocess Communications (IPC)

Configuration Requirements

Network Requirements
Hadoop Core uses Secure Shell (SSH) to launch the server processes on the slave nodes. Hadoop Core requires that passwordless SSH work between the master machines and all of the slave and secondary machines.
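A minimal sketch of creating a passwordless key pair. To keep the example self-contained it writes into a temporary directory; in practice the key would live in ~/.ssh, and the public key would be appended to each slave's ~/.ssh/authorized_keys:

```shell
# Create a passwordless RSA key pair (temp dir here; normally ~/.ssh)
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$KEYDIR/id_rsa" -q

# The public key is what gets appended to each slave's authorized_keys, e.g.:
#   cat "$KEYDIR/id_rsa.pub" | ssh user@slave 'cat >> ~/.ssh/authorized_keys'
ls "$KEYDIR"
```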
Advanced Networking: Support for Multihomed Machines
dfs.datanode.dns.interface: If set, this parameter is the name of the network interface to be used for HDFS transactions to the DataNode. The IP address of this interface will be advertised by the DataNode as its contact address.
dfs.datanode.dns.nameserver: If set, this parameter is the hostname or IP address of a machine to use to perform a reverse host lookup on the IP address associated with the specified network interface.
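A sketch of both parameters together; the interface name eth1 and the nameserver address below are placeholders:

```xml
<property>
  <name>dfs.datanode.dns.interface</name>
  <!-- interface whose IP the DataNode advertises as its contact address -->
  <value>eth1</value>
</property>
<property>
  <name>dfs.datanode.dns.nameserver</name>
  <!-- host used for the reverse lookup on that interface's IP -->
  <value>192.168.1.1</value>
</property>
```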

rsync is the Unix remote synchronization command; it can be used to push the configuration files out to the other nodes.
