Hadoop core-site.xml Configuration File Explained

core-site.xml holds cluster-wide Hadoop settings, including the default filesystem and the common I/O options used by HDFS and MapReduce.
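Before walking through the individual properties, it helps to see the file format. Below is a minimal core-site.xml sketch; the host, port, and path values are illustrative placeholders, not recommendations. Every setting discussed in this article is expressed as one such <property> element inside <configuration>.

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!-- Minimal core-site.xml: each setting is a <property> with a <name> and a <value>. -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode-host:9000</value> <!-- illustrative host:port -->
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/tmp</value> <!-- illustrative path with ample space -->
      </property>
    </configuration>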

The core-site.xml configuration file
core-site.xml in Hadoop 2.x
Each entry below lists the property name, its description, and its default value.
hadoop.tmp.dir: A base for other temporary directories. Only one value may be set; point it at a location with enough space rather than the default /tmp. Server-side parameter; changing it requires a restart. Default: /tmp/hadoop-${user.name}
io.native.lib.available: Should native hadoop libraries, if present, be used. Native libraries speed up basic operations such as I/O and compression. Default: true
hadoop.http.filter.initializers: A comma-separated list of class names, each of which must extend org.apache.hadoop.http.FilterInitializer. The corresponding filters are initialized and then applied to all user-facing JSP and servlet web pages; the ordering of the list defines the ordering of the filters. Default: org.apache.hadoop.http.lib.StaticUserWebFilter
hadoop.security.authorization: Is service-level authorization enabled? The setup is fairly involved, so on a trusted internal network it is usually left off. Default: false
hadoop.security.instrumentation.requires.admin: Indicates if administrator ACLs are required to access instrumentation servlets (JMX, METRICS, CONF, STACKS). Default: false
hadoop.security.authentication: Possible values are simple (no authentication) and kerberos. Default: simple
hadoop.security.group.mapping: Class for user-to-group mapping (getting the groups for a given user) for ACLs. The default implementation, org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, determines whether the Java Native Interface (JNI) is available; if so, it uses the API within hadoop to resolve a user's groups. If JNI is not available, the shell implementation ShellBasedUnixGroupsMapping is used, which shells out to the Linux/Unix environment with the bash -c groups command to resolve a user's groups. Default: org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
hadoop.security.groups.cache.secs: Controls how long entries in the user->group mapping cache stay valid. When this duration expires, the group mapping provider is invoked again and the result is cached anew. Default: 300
hadoop.security.groups.cache.warn.after.ms: If looking up a single user's groups takes longer than this many milliseconds, a warning message is logged. Default: 5000
hadoop.security.group.mapping.ldap.url: The URL of the LDAP server to use for resolving user groups when using LdapGroupsMapping. Default: (empty)
hadoop.security.group.mapping.ldap.ssl: Whether or not to use SSL when connecting to the LDAP server. Default: false
hadoop.security.group.mapping.ldap.ssl.keystore: File path to the SSL keystore that contains the SSL certificate required by the LDAP server. Default: (empty)
hadoop.security.group.mapping.ldap.ssl.keystore.password.file: The path to a file containing the password of the LDAP SSL keystore. IMPORTANT: this file should be readable only by the Unix user running the daemons. Default: (empty)
hadoop.security.group.mapping.ldap.bind.user: The distinguished name of the user to bind as when connecting to the LDAP server. May be left blank if the LDAP server supports anonymous binds. Default: (empty)
hadoop.security.group.mapping.ldap.bind.password.file: The path to a file containing the password of the bind user. IMPORTANT: this file should be readable only by the Unix user running the daemons. Default: (empty)
hadoop.security.group.mapping.ldap.base: The search base for the LDAP connection. This is a distinguished name, typically the root of the LDAP directory. Default: (empty)
hadoop.security.group.mapping.ldap.search.filter.user: An additional filter to use when searching for LDAP users. The default is usually appropriate for Active Directory installations; when connecting to an LDAP server with a non-AD schema, replace it with something like (&(objectClass=inetOrgPerson)(uid={0})). {0} is a special string denoting where the username fits into the filter. Default: (&(objectClass=user)(sAMAccountName={0}))
hadoop.security.group.mapping.ldap.search.filter.group: An additional filter to use when searching for LDAP groups. Change this when resolving groups against a non-Active Directory installation. posixGroups are currently not a supported group class. Default: (objectClass=group)
hadoop.security.group.mapping.ldap.search.attr.member: The attribute of the group object that identifies the users that are members of the group. The default is usually appropriate for any LDAP installation. Default: member
hadoop.security.group.mapping.ldap.search.attr.group.name: The attribute of the group object that identifies the group name. The default is usually appropriate for all LDAP systems. Default: cn
hadoop.security.group.mapping.ldap.directory.search.timeout: The maximum time limit, applied to the LDAP SearchControl properties, when searching and awaiting a result. Set to 0 for an infinite wait. Units are milliseconds. Default: 10000 (10 seconds)
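As a worked example of the LDAP properties above, the sketch below switches group resolution from the JNI/shell default to LdapGroupsMapping. The server URL, bind DN, and search base are placeholder values for a hypothetical directory.

    <!-- Hypothetical LDAP group mapping; ldap.example.com and the DNs are placeholders. -->
    <property>
      <name>hadoop.security.group.mapping</name>
      <value>org.apache.hadoop.security.LdapGroupsMapping</value>
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.url</name>
      <value>ldap://ldap.example.com:389</value>
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.bind.user</name>
      <value>cn=hadoop,ou=services,dc=example,dc=com</value>
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.base</name>
      <value>dc=example,dc=com</value>
    </property>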
hadoop.security.service.user.name.key: For cases where the same RPC protocol is implemented by multiple servers, this specifies the principal name to use for the service when the client makes an RPC call. Rarely needs changing. Default: (empty)
hadoop.security.uid.cache.secs: Controls how long entries stay valid in the cache of userId-to-userName and groupId-to-groupName mappings used by NativeIO getFstat(). Rarely needs changing. Default: 14400
hadoop.rpc.protection: A comma-separated list of protection values for secured SASL connections. Possible values are authentication, integrity and privacy: authentication means authentication only, with no integrity or privacy; integrity implies authentication and integrity; and privacy implies all three. hadoop.security.saslproperties.resolver.class can be used to override hadoop.rpc.protection for a connection on the server side. Default: authentication
hadoop.security.saslproperties.resolver.class: The SaslPropertiesResolver used to resolve the QOP for a connection. If not specified, the full set of values in hadoop.rpc.protection is used when determining the connection's QOP; if a class is specified, the QOP values returned by that class are used instead. Default: (empty)
hadoop.work.around.non.threadsafe.getpwuid: Some operating systems or authentication modules are known to have broken, non-thread-safe implementations of getpwuid_r and getpwgid_r. Symptoms of this problem include JVM crashes with a stack trace inside these functions. If your system exhibits this issue, enable this parameter to wrap the calls in a lock as a workaround. An incomplete list of affected systems is available at http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations. Default: false
hadoop.kerberos.kinit.command: Used to periodically renew Kerberos credentials when provided to Hadoop. The default assumes that kinit is on the PATH of users running the Hadoop client; otherwise, set this to the absolute path of kinit. Default: kinit
hadoop.security.auth_to_local: Maps Kerberos principals to local user names. Default: (empty)
io.file.buffer.size: The size of the buffer for use in sequence files; it determines how much data is buffered during read and write operations. It should probably be a multiple of the hardware page size (4096 on Intel x86); values around 1 MB are commonly recommended. Default: 4096
io.bytes.per.checksum: The number of bytes per checksum. Must not be larger than io.file.buffer.size. Default: 512
io.skip.checksum.errors: If true, when a checksum error is encountered while reading a sequence file, entries are skipped instead of an exception being thrown. Default: false
io.compression.codecs: A comma-separated list of compression codec classes that can be used for compression/decompression. In addition to any classes specified with this property (which take precedence), codec classes on the classpath are discovered using a Java ServiceLoader. Default: (empty)
io.compression.codec.bzip2.library: The native-code library used for compression and decompression by the bzip2 codec. The library can be specified by name or by full pathname; in the former case, it is located by the dynamic linker, usually by searching the directories in LD_LIBRARY_PATH. The value "system-native" selects the default system library; "java-builtin" makes the algorithm run entirely in Java. Default: system-native
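As a sketch, the codecs that ship with Hadoop can be registered explicitly like this (listing them is optional, since ServiceLoader discovery also finds codecs on the classpath):

    <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
    </property>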
io.serializations: A list of serialization classes that can be used for obtaining serializers and deserializers. Default: org.apache.hadoop.io.serializer.WritableSerialization, org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
io.seqfile.local.dir: The local directory where sequence files store intermediate data files during merge. May be a comma-separated list of directories on different devices in order to spread disk I/O; directories that do not exist are ignored. Default: ${hadoop.tmp.dir}/io/local
io.map.index.skip: The number of index entries to skip between each read entry. Setting this to a value larger than zero makes it possible to open large MapFiles with less memory. (A MapFile is a sorted SequenceFile pair with an internal index.) Default: 0
io.map.index.interval: A MapFile consists of two files: a data file (tuples) and an index file (keys). For every io.map.index.interval records written to the data file, an entry (record-key, data-file-position) is written to the index file. This allows a later binary search within the index file to look up records by key and find their closest positions in the data file. Default: 128
fs.defaultFS: The name of the default file system, given as a URI whose scheme and authority determine the FileSystem implementation: the scheme selects the config property (fs.SCHEME.impl) naming the implementation class, and the authority determines the host, port, etc. HDFS clients need this parameter to reach HDFS; in an HA setup, use the service name here, e.g. hdfs://mycluster1. Default: file:///
fs.default.name: Deprecated; use fs.defaultFS instead. Default: file:///
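For instance, pointing clients at an HA nameservice rather than a single NameNode looks like the sketch below; mycluster1 is the example service name from the description above.

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster1</value> <!-- HA nameservice ID rather than host:port -->
    </property>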
fs.trash.interval: The number of minutes after which a trash checkpoint is deleted; if zero, the trash feature is disabled. May be configured on both server and client: if trash is disabled server-side, the client-side configuration is checked, while if it is enabled server-side, the server value is used and the client value ignored. Enabling it is recommended, e.g. 4320 (3 days). If a file with the same name as a deleted one already sits in the trash, the newly deleted file is numbered, e.g. a.txt, a.txt(1). Default: 0
fs.trash.checkpoint.interval: The number of minutes between trash checkpoints; should be smaller than or equal to fs.trash.interval. If zero, the value of fs.trash.interval is used. Every time the checkpointer runs, it creates a new checkpoint from the current trash and removes checkpoints older than fs.trash.interval minutes. A value of 60 (1 hour) is a reasonable choice. Default: 0
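A sketch following the recommendations above (3-day retention, hourly checkpoints):

    <property>
      <name>fs.trash.interval</name>
      <value>4320</value> <!-- keep deleted files in the trash for 3 days -->
    </property>
    <property>
      <name>fs.trash.checkpoint.interval</name>
      <value>60</value> <!-- roll a trash checkpoint every hour -->
    </property>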
fs.AbstractFileSystem.file.impl: The AbstractFileSystem for file: URIs. Default: org.apache.hadoop.fs.local.LocalFs
fs.AbstractFileSystem.hdfs.impl: The FileSystem for hdfs: URIs. Default: org.apache.hadoop.fs.Hdfs
fs.AbstractFileSystem.viewfs.impl: The AbstractFileSystem for viewfs: URIs (i.e. the client-side mount table). Default: org.apache.hadoop.fs.viewfs.ViewFs
fs.ftp.host: The server the FTP filesystem connects to. Default: 0.0.0.0
fs.ftp.host.port: The port on fs.ftp.host the FTP filesystem connects to. Default: 21
fs.df.interval: Disk usage statistics refresh interval in msec. Default: 60000
fs.du.interval: File space usage statistics refresh interval in msec. Default: 600000
fs.s3.block.size: Block size to use when writing files to S3. Default: 67108864
fs.s3.buffer.dir: Determines where on the local filesystem the S3 filesystem should store files before sending them to S3 (or after retrieving them from S3). Default: ${hadoop.tmp.dir}/s3
fs.s3.maxRetries: The maximum number of retries for reading or writing files to S3 before signaling failure to the application. Default: 4
fs.s3.sleepTimeSeconds: The number of seconds to sleep between each S3 retry. Default: 10
fs.swift.impl: The implementation class of the OpenStack Swift filesystem. Default: org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
fs.automatic.close: By default, FileSystem instances are automatically closed at program exit using a JVM shutdown hook. Setting this to false disables that behavior. This is an advanced option for server applications that require a more carefully orchestrated shutdown sequence; do not disable it otherwise. Default: true
fs.s3n.block.size: Block size to use when reading files using the native S3 filesystem (s3n: URIs). Default: 67108864
fs.s3n.multipart.uploads.enabled: Setting this to true enables multipart uploads to the native S3 filesystem: when uploading a file, it is split into blocks if its size is larger than fs.s3n.multipart.uploads.block.size. Default: false
fs.s3n.multipart.uploads.block.size: The block size for multipart uploads to the native S3 filesystem. Default: 67108864 (64 MB)
fs.s3n.multipart.copy.block.size: The block size for multipart copy in the native S3 filesystem. Default: 5368709120 (5 GB)
io.seqfile.compress.blocksize: The minimum block size for compression in block-compressed SequenceFiles; blocks are compressed only once they reach this size. Default: 1000000
io.seqfile.lazydecompress: Whether values of block-compressed SequenceFiles should be decompressed only when necessary. Default: true
io.seqfile.sorter.recordlimit: The limit on the number of records kept in memory during a spill in SequenceFiles.Sorter. Default: 1000000
io.mapfile.bloom.size: The size of the BloomFilters used in a BloomMapFile. Each time this many keys have been appended, the next BloomFilter is created (inside a DynamicBloomFilter). Larger values minimize the number of filters, which slightly increases performance, but may waste too much space if the total number of keys is usually much smaller than this number. Default: 1048576
io.mapfile.bloom.error.rate: The rate of false positives in the BloomFilters used in a BloomMapFile. As this value decreases, the size of the BloomFilters increases exponentially. This value is the probability of encountering false positives. Default: 0.005 (0.5%)
hadoop.util.hash.type: The default implementation of Hash: 'murmur' selects MurmurHash, 'jenkins' selects JenkinsHash. Default: murmur
ipc.client.idlethreshold: The threshold number of connections after which connections are inspected for idleness. Default: 4000
ipc.client.kill.max: The maximum number of clients to disconnect in one go. Default: 10
ipc.client.connection.maxidletime: The maximum time in msec after which the client brings down its connection to the server. Default: 10000
ipc.client.connect.max.retries: The number of retries a client makes to establish a server connection. Default: 10
ipc.client.connect.retry.interval: The number of milliseconds a client waits before retrying to establish a server connection. Default: 1000
ipc.client.connect.timeout: The number of milliseconds a client waits for the socket to establish a server connection. Default: 20000
ipc.client.connect.max.retries.on.timeouts: The number of retries a client makes, on socket timeout, to establish a server connection. Default: 45
ipc.server.listen.queue.size: The length of the listen queue for servers accepting client connections. Default: 128
ipc.server.tcpnodelay: Turn Nagle's algorithm on or off for the server-side TCP socket connection. Setting this to true disables the algorithm and may decrease latency at the cost of more (and smaller) packets. Nagle's algorithm delays small packets to use the network more efficiently, which hurts latency-sensitive small messages; the source recommends leaving this false (Nagle enabled). Default: false
ipc.client.tcpnodelay: Turn Nagle's algorithm on or off for the client-side TCP socket connection. Setting this to true disables the algorithm and may decrease latency at the cost of more (and smaller) packets. Default: false
hadoop.rpc.socket.factory.class.default: Default SocketFactory to use, formatted as "package.FactoryClassName". Default: org.apache.hadoop.net.StandardSocketFactory
hadoop.rpc.socket.factory.class.ClientProtocol: SocketFactory to use to connect to a DFS. If null or empty, hadoop.rpc.socket.class.default is used. This socket factory is also used by DFSClient to create sockets to DataNodes. Default: (empty)
hadoop.socks.server: Address (host:port) of the SOCKS server to be used by the SocksSocketFactory. Default: (empty)
net.topology.node.switch.mapping.impl: The default implementation of DNSToSwitchMapping, used for rack awareness. It invokes the script specified in net.topology.script.file.name to resolve node names; if that property is not set, DEFAULT_RACK is returned for all node names. Default: org.apache.hadoop.net.ScriptBasedMapping
net.topology.impl: The default implementation of NetworkTopology, the classic three-layer one. Default: org.apache.hadoop.net.NetworkTopology
net.topology.script.file.name: The script to invoke to resolve DNS names (or IP addresses) to NetworkTopology (rack) names. Example: given host.foo.bar as an argument, the script would return /rack1 as output. Used together with ScriptBasedMapping. Default: (empty)
net.topology.script.number.args: The maximum number of arguments that the script configured with net.topology.script.file.name is run with. Each argument is an IP address. Default: 100
net.topology.table.file.name: The topology file used when net.topology.node.switch.mapping.impl is set to org.apache.hadoop.net.TableMapping. The format is a two-column text file, with columns separated by whitespace: the first column is a DNS or IP address, the second is the rack where that address maps. If no entry for a host is found, /default-rack is assumed. Default: (empty)
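A sketch of script-based rack awareness using the two topology properties above; /etc/hadoop/topology.sh is a placeholder path for a script that prints a rack path such as /rack1 for each address it is given.

    <property>
      <name>net.topology.node.switch.mapping.impl</name>
      <value>org.apache.hadoop.net.ScriptBasedMapping</value>
    </property>
    <property>
      <name>net.topology.script.file.name</name>
      <value>/etc/hadoop/topology.sh</value> <!-- placeholder: maps addresses to rack paths -->
    </property>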
file.stream-buffer-size: The size of the buffer to stream files, determining how much data is buffered during read and write operations. Should probably be a multiple of the hardware page size (4096 on Intel x86). Default: 4096
file.bytes-per-checksum: The number of bytes per checksum. Must not be larger than file.stream-buffer-size. Default: 512
file.client-write-packet-size: Packet size for clients to write. Default: 65536
file.blocksize: Block size. Default: 67108864
file.replication: Replication factor. Default: 1
s3.stream-buffer-size: Buffer size for streaming files, as for file.stream-buffer-size. Default: 4096
s3.bytes-per-checksum: The number of bytes per checksum. Must not be larger than s3.stream-buffer-size. Default: 512
s3.client-write-packet-size: Packet size for clients to write. Default: 65536
s3.blocksize: Block size. Default: 67108864
s3.replication: Replication factor. Default: 3
s3native.stream-buffer-size: Buffer size for streaming files, as for file.stream-buffer-size. Default: 4096
s3native.bytes-per-checksum: The number of bytes per checksum. Must not be larger than s3native.stream-buffer-size. Default: 512
s3native.client-write-packet-size: Packet size for clients to write. Default: 65536
s3native.blocksize: Block size. Default: 67108864
s3native.replication: Replication factor. Default: 3
ftp.stream-buffer-size: Buffer size for streaming files, as for file.stream-buffer-size. Default: 4096
ftp.bytes-per-checksum: The number of bytes per checksum. Must not be larger than ftp.stream-buffer-size. Default: 512
ftp.client-write-packet-size: Packet size for clients to write. Default: 65536
ftp.blocksize: Block size. Default: 67108864
ftp.replication: Replication factor. Default: 3
tfile.io.chunk.size: Value chunk size in bytes; defaults to 1 MB. Values shorter than the chunk size are guaranteed to have a known value length at read time (see TFile.Reader.Scanner.Entry.isValueLengthKnown()). Default: 1048576
tfile.fs.output.buffer.size: Buffer size used for FSDataOutputStream, in bytes. Default: 262144
tfile.fs.input.buffer.size: Buffer size used for FSDataInputStream, in bytes. Default: 262144
hadoop.http.authentication.type: Defines the authentication used for Hadoop's HTTP endpoints. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#. Default: simple
hadoop.http.authentication.token.validity: How long (in seconds) an authentication token is valid before it has to be renewed. Default: 36000
hadoop.http.authentication.signature.secret.file: The file holding the signature secret for signing authentication tokens. The same secret should be used across the JT/NN/DN/TT configurations. Default: ${user.home}/hadoop-http-auth-signature-secret
hadoop.http.authentication.cookie.domain: The domain to use for the HTTP cookie that stores the authentication token. For authentication to work correctly across the web consoles of all Hadoop nodes, the domain must be set correctly. IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings; for this setting to work properly, all nodes in the cluster must be configured to generate URLs with hostname.domain names in them. Default: (empty)
hadoop.http.authentication.simple.anonymous.allowed: Indicates whether anonymous requests are allowed when using 'simple' authentication. Default: true
hadoop.http.authentication.kerberos.principal: The Kerberos principal to be used for the HTTP endpoint. The principal MUST start with 'HTTP/' as per the Kerberos HTTP SPNEGO specification. Default: HTTP/_HOST@LOCALHOST
hadoop.http.authentication.kerberos.keytab: Location of the keytab file with the credentials for the principal. Default: ${user.home}/hadoop.keytab
dfs.ha.fencing.methods: The list of fencing methods to use for HDFS HA service fencing (split-brain prevention). May contain built-in methods (shell and sshfence) or a user-defined method. A common choice is sshfence(user:port), e.g. sshfence(hadoop:9922); note that the two NameNode machines must be able to SSH to each other without a password. Default: (empty)
dfs.ha.fencing.ssh.connect-timeout: SSH connection timeout, in milliseconds, for the built-in sshfence fencer. Default: 30000
dfs.ha.fencing.ssh.private-key-files: The SSH private key files to use with the built-in sshfence fencer; must be specified when sshfence is used. Default: (empty)
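A sketch of the sshfence setup described above; the hadoop:9922 user/port pair is the example from the description, and the key path is a placeholder.

    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence(hadoop:9922)</value> <!-- user:port for SSH to the other NameNode -->
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/hadoop/.ssh/id_rsa</value> <!-- placeholder private key path -->
    </property>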
hadoop.http.staticuser.user: The user name to filter as, on static web filters, while rendering content. An example use is the HDFS web UI (the user used for browsing files). Default: dr.who
ha.zookeeper.quorum: A comma-separated list of ZooKeeper server addresses used by the ZKFailoverController for automatic failover. Default: (empty)
ha.zookeeper.session-timeout.ms: The session timeout the ZKFC uses when connecting to ZooKeeper. A lower value means server crashes are detected more quickly, but risks triggering failover too aggressively on a transient error or network blip. Values in the 10s-30s range are a reasonable compromise. Default: 5000
ha.zookeeper.parent-znode: The ZooKeeper znode under which the ZK failover controller stores its information. The nameservice ID is automatically appended to this znode, so it is not normally necessary to configure this, even in a federated environment. Default: /hadoop-ha
ha.zookeeper.acl: A comma-separated list of ZooKeeper ACLs to apply to the znodes used by automatic failover, in the same format as used by the ZooKeeper CLI. If the ACL itself contains secrets, you may instead specify a path to a file, prefixed with the '@' symbol, and the value of this configuration will be loaded from within. Default: world:anyone:rwcda
ha.zookeeper.auth: A comma-separated list of ZooKeeper authentications to add when connecting to ZooKeeper, in the same format as used by the "addauth" command in the ZK CLI. The authentications specified here must be sufficient to access znodes with the ACL specified in ha.zookeeper.acl. If the auths contain secrets, you may instead specify a path to a file, prefixed with the '@' symbol, and the value of this configuration will be loaded from within. Default: (empty)
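A sketch of the ZooKeeper quorum for automatic failover; the three zk hosts are placeholders.

    <property>
      <name>ha.zookeeper.quorum</name>
      <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
    </property>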
hadoop.ssl.keystores.factory.class: The keystores factory to use for retrieving certificates. Default: org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
hadoop.ssl.require.client.cert: Whether client certificates are required. Default: false
hadoop.ssl.hostname.verifier: The hostname verifier to provide for HttpsURLConnections. Valid values are: DEFAULT, STRICT, STRICT_I6, DEFAULT_AND_LOCALHOST and ALLOW_ALL. Default: DEFAULT
hadoop.ssl.server.conf: Resource file from which SSL server keystore information is extracted. Looked up on the classpath; typically it should be in the Hadoop conf/ directory. Default: ssl-server.xml
hadoop.ssl.client.conf: Resource file from which SSL client keystore information is extracted. Looked up on the classpath; typically it should be in the Hadoop conf/ directory. Default: ssl-client.xml
hadoop.ssl.enabled: Deprecated. Use dfs.http.policy and yarn.http.policy instead. Default: false
hadoop.jetty.logs.serve.aliases: Enable/disable serving aliases from Jetty. Default: true
fs.permissions.umask-mode: The umask used when creating files and directories. Can be octal or symbolic, e.g. "022" (octal for u=rwx,g=r-x,o=r-x in symbolic) or "u=rwx,g=rwx,o=" (symbolic for 007 in octal). Default: 022
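For example, to keep new files group-writable but closed to others, either form below works; per the description above they are equivalent.

    <property>
      <name>fs.permissions.umask-mode</name>
      <value>007</value> <!-- or the symbolic form: u=rwx,g=rwx,o= -->
    </property>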
ha.health-monitor.connect-retry-interval.ms: How often to retry connecting to the service. Default: 1000
ha.health-monitor.check-interval.ms: How often to check the service. Default: 1000
ha.health-monitor.sleep-after-disconnect.ms: How long to sleep after an unexpected RPC error. Default: 1000
ha.health-monitor.rpc-timeout.ms: Timeout for the actual monitorHealth() calls. Default: 45000
ha.failover-controller.new-active.rpc-timeout.ms: Timeout that the FC waits for the new active (NameNode) to become active. Default: 60000
ha.failover-controller.graceful-fence.rpc-timeout.ms: Timeout that the FC waits for the old active to go to standby. Default: 5000
ha.failover-controller.graceful-fence.connection.retries: FC connection retries for graceful fencing. Default: 1
ha.failover-controller.cli-check.rpc-timeout.ms: Timeout that the CLI (manual) FC waits for monitorHealth and getServiceState. Default: 20000
ipc.client.fallback-to-simple-auth-allowed: When a client is configured to attempt a secure connection but connects to an insecure server, that server may instruct the client to switch to SASL SIMPLE (unsecure) authentication. This setting controls whether the client accepts this instruction from the server. When false (the default), the client will not allow the fallback to SIMPLE authentication and will abort the connection. Default: false
fs.client.resolve.remote.symlinks: Whether to resolve symlinks when accessing a remote Hadoop filesystem. Setting this to false causes an exception to be thrown upon encountering a symlink. This setting does not apply to local filesystems, which automatically resolve local symlinks. Default: true
nfs3.server.port: The port number used by Hadoop NFS. Default: 2049
nfs3.mountd.port: The port number used by the Hadoop mount daemon. Default: 4242
hadoop.user.group.static.mapping.overrides: Static mapping of users to groups. This overrides any groups available in the system for the specified user; in other words, group lookup will not happen for these users, and the groups mapped here are used instead. The format is user1=group1,group2;user2=;user3=group2. The default, "dr.who=;", treats "dr.who" as a user without groups. Default: dr.who=;
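A sketch using the mapping format above; the user "etl" and its groups are made-up examples.

    <property>
      <name>hadoop.user.group.static.mapping.overrides</name>
      <value>dr.who=;etl=hadoop,analytics;</value> <!-- hypothetical user pinned to two groups -->
    </property>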
rpc.metrics.quantile.enable: Setting this to true, together with rpc.metrics.percentiles.intervals set to a comma-separated list of granularities in seconds, adds the 50/75/90/95/99th-percentile latencies for RPC queue/processing time, in milliseconds, to the RPC metrics. Default: false
rpc.metrics.percentiles.intervals: The comma-separated list of granularities, in seconds, over which the percentile latencies above are computed (see rpc.metrics.quantile.enable). Default: (empty)
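For instance, to publish percentile RPC latencies over 60-second windows (the interval value is an arbitrary example):

    <property>
      <name>rpc.metrics.quantile.enable</name>
      <value>true</value>
    </property>
    <property>
      <name>rpc.metrics.percentiles.intervals</name>
      <value>60</value> <!-- compute percentiles over 60-second windows -->
    </property>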
core-site.xml in Hadoop 1.x
Each entry below lists the property name, an example value, and a description.
fs.default.name: hdfs://hadoopmaster:9000. Defines the URI and port of the Hadoop master (NameNode).
fs.checkpoint.dir: /opt/data/hadoop1/hdfs/namesecondary1. Defines the path for namespace checkpoints; per the official documentation, checkpoints are read from here and written into dfs.name.dir.
fs.checkpoint.period: 1800. The checkpoint interval, in seconds. Only affects the SecondaryNameNode; the default is one hour.
fs.checkpoint.size: 33554432. Triggers a checkpoint once the edit log reaches this size, independent of the interval. Only affects the SecondaryNameNode; the default is 64 MB.
io.compression.codecs: org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec (a single comma-separated line in the actual config). The codecs available to Hadoop: gzip and bzip2 ship with it, LZO requires installing hadoop-gpl-compression or kevinweil's hadoop-lzo, and Snappy also needs a separate install.
io.compression.codec.lzo.class: com.hadoop.compression.lzo.LzoCodec. The compression codec class used for LZO.
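A sketch of the codec registration described above, assuming the LZO packages are installed; keep the codec list on a single line in the real file.

    <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
    </property>
    <property>
      <name>io.compression.codec.lzo.class</name>
      <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>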
topology.script.file.name: /hadoop/bin/RackAware.py. The location of the rack-awareness script.
topology.script.number.args: 1000. The maximum number of hosts (IP addresses) handed to the rack-awareness script per invocation.
fs.trash.interval: 10800. The HDFS trash setting, in minutes, which allows recovering accidentally deleted files; 0 disables it. Adding this property does not require restarting Hadoop.
hadoop.http.filter.initializers: org.apache.hadoop.security.AuthenticationFilterInitializer (a single line in the actual config). Enables user authentication on the HTTP ports of the JobTracker, TaskTracker, NameNode, DataNode, etc.; must be configured on all nodes.
hadoop.http.authentication.type: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#. The authentication method; the default is simple, and a custom handler class may also be supplied. Must be configured on all nodes.
hadoop.http.authentication.token.validity: 36000. The validity of an authentication token, in seconds. Must be configured on all nodes.
hadoop.http.authentication.signature.secret: May be left unset; if so, a secret signature is generated automatically when Hadoop starts. Must be configured on all nodes.
hadoop.http.authentication.cookie.domain: domain.tld. The domain for the cookie used by HTTP authentication. It has no effect when accessing by IP address; every node must be addressable by a domain name for this to work.
hadoop.http.authentication.simple.anonymous.allowed: true | false. For simple authentication only; anonymous access is allowed by default (true).
hadoop.http.authentication.kerberos.principal: HTTP/localhost@$LOCALHOST. For Kerberos authentication only; machines taking part in authentication must use HTTP as the service name of their Kerberos principal.
hadoop.http.authentication.kerberos.keytab: /home/xianglei/hadoop.keytab. For Kerberos authentication only; the location of the keytab file.
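A sketch tying the HTTP authentication properties above together: simple authentication with anonymous access turned off (users then have to identify themselves to the web consoles).

    <property>
      <name>hadoop.http.filter.initializers</name>
      <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
    </property>
    <property>
      <name>hadoop.http.authentication.type</name>
      <value>simple</value>
    </property>
    <property>
      <name>hadoop.http.authentication.simple.anonymous.allowed</name>
      <value>false</value>
    </property>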
hadoop.security.authorization: true | false. Hadoop service-level authorization, used together with hadoop-policy.xml; after configuring it, refresh with dfsadmin -refreshServiceAcl and mradmin -refreshServiceAcl for it to take effect.
io.file.buffer.size: 131072. The read/write buffer size used when processing sequence files.
hadoop.security.authentication: simple | kerberos. Hadoop's own (non-HTTP) authentication: simple or kerberos.
hadoop.logfile.size: 1000000000. The maximum log file size; the log rolls over beyond it.
hadoop.logfile.count: 20. The maximum number of log files.
io.bytes.per.checksum: 1024. The number of bytes covered by each checksum; must not be larger than io.file.buffer.size.
io.skip.checksum.errors: true | false. Skip checksum errors when processing sequence files rather than throwing an exception; the default is false.
io.serializations: org.apache.hadoop.io.serializer.WritableSerialization. The serialization codecs.
io.seqfile.compress.blocksize: 1024000. The minimum block size, in bytes, for block-compressed sequence files.
webinterface.private.actions: true | false. When true, the JobTracker and NameNode web UIs show links for actions such as killing jobs and deleting files; the default is false.