
Hadoop HA Configuration Files

core-site.xml:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://rongxinhadoop</value>
</property>


Here rongxinhadoop is the logical name of the HA cluster; it must match the dfs.nameservices setting in hdfs-site.xml.

<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop1/HAtmp3</value>
</property>

By default this path is the common directory under which the NameNode, DataNode, JournalNode and other services store their data; each kind of data can also be given its own directory. The directory tree has to be created in advance.
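The data directories used here (and in hdfs-site.xml below) are not created automatically. A minimal sketch of preparing them, using the paths from this article and assuming the daemons run as the hadoop1 user; each directory only needs to exist on the nodes that actually host the corresponding service:

# run on each node, as the user that will run the Hadoop daemons (assumed: hadoop1)
mkdir -p /data/hadoop1/HAtmp3        # hadoop.tmp.dir
mkdir -p /data/hadoop1/HAname3       # dfs.namenode.name.dir (NameNode hosts)
mkdir -p /data/hadoop1/HAdata3       # dfs.datanode.data.dir (DataNode hosts)
mkdir -p /data/hadoop1/HAjournal3    # dfs.journalnode.edits.dir (JournalNode hosts)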

<property>
  <name>ha.zookeeper.quorum</name>
  <value>master:2181,slave1:2181,slave2:2181</value>
</property>

The address and port of each node in the ZooKeeper ensemble.
Note: the number of ZooKeeper nodes must be odd and must match the servers configured in zoo.cfg.
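Because the quorum above has to agree with the server list in zoo.cfg, it is worth confirming the ensemble before going further. A rough sketch, assuming ZooKeeper's bin directory is on the PATH of each ZooKeeper node:

# run on master, slave1 and slave2; one node should report "leader", the others "follower"
zkServer.sh status
# optional liveness probe (requires nc); a healthy node answers "imok"
echo ruok | nc master 2181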

--------------------------------------------------------------------------------------------------

hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
Number of block replicas.

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/data/hadoop1/HAname3</value>
</property>
Directory where the NameNode stores its metadata.

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/data/hadoop1/HAdata3</value>
</property>
Directory where DataNodes store their data.

<property>
  <name>dfs.nameservices</name>
  <value>rongxinhadoop</value>
</property>
The HA nameservice; the name is arbitrary, but fs.defaultFS in core-site.xml must reference it.

<property>
  <name>dfs.ha.namenodes.rongxinhadoop</name>
  <value>nn1,nn2</value>
</property>
Logical names of the NameNodes in this nameservice.

<property>
  <name>dfs.namenode.rpc-address.rongxinhadoop.nn1</name>
  <value>master:9000</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.rongxinhadoop.nn2</name>
  <value>slave1:9000</value>
</property>

<property>
  <name>dfs.namenode.http-address.rongxinhadoop.nn1</name>
  <value>master:50070</value>
</property>

<property>
  <name>dfs.namenode.http-address.rongxinhadoop.nn2</name>
  <value>slave1:50070</value>
</property>

<property>
  <name>dfs.namenode.servicerpc-address.rongxinhadoop.nn1</name>
  <value>master:53310</value>
</property>

<property>
  <name>dfs.namenode.servicerpc-address.rongxinhadoop.nn2</name>
  <value>slave1:53310</value>
</property>

<property>
  <name>dfs.ha.automatic-failover.enabled.rongxinhadoop</name>
  <value>true</value>
</property>
Whether failover happens automatically when the active NameNode fails.

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://master:8485;slave1:8485;slave2:8485/rongxinhadoop</value>
</property>
This configures the JournalNodes and has three parts:
 1. the qjournal prefix, which identifies the protocol;
 2. the host:port of each of the three machines running a JournalNode, separated by semicolons;
 3. the trailing segment (rongxinhadoop here), which is the JournalNode namespace and can be named freely.

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoop1/HAjournal3/</value>
</property>
Local directory where each JournalNode stores its data.

<property>
  <name>dfs.client.failover.proxy.provider.rongxinhadoop</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
Specifies the class that performs client-side failover when rongxinhadoop fails.

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
Use SSH to fence the failed NameNode during failover.

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop1/.ssh/id_rsa</value>
</property>
If SSH fencing is used, the location of the private key used for the SSH connection.

<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>1000</value>
</property>
SSH connection timeout for fencing, in milliseconds.

<property>
  <name>dfs.namenode.handler.count</name>
  <value>10</value>
</property>
Number of NameNode server threads that handle RPC requests.
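With hdfs-site.xml in place, the pieces above come together at first start-up. The following is only a sketch of one common start-up order for a QJM-based HA cluster; it assumes the stock Hadoop 2.x sbin scripts and the host roles used in this article, and an already-running cluster would skip the format steps:

# 1. on every JournalNode (master, slave1, slave2)
hadoop-daemon.sh start journalnode

# 2. on nn1 (master): format HDFS and the failover znode in ZooKeeper, then start the NameNode
hdfs namenode -format
hdfs zkfc -formatZK
hadoop-daemon.sh start namenode

# 3. on nn2 (slave1): copy nn1's metadata, then start the standby NameNode
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode

# 4. start the remaining HDFS daemons (DataNodes, ZKFCs) and check which NameNode is active
start-dfs.sh
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2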

--------------------------------------------------------------------------------------------------

mapred-site.xml:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>master:19888</value>
</property>

<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/data/hadoop1/mr_history/HAtmp3</value>
</property>
Directory where history files are written by MapReduce jobs.

<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/data/hadoop1/mr_history/HAdone3</value>
</property>
Directory where history files are managed by the MR JobHistory Server.
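The JobHistory server that these addresses point at is a separate daemon. A minimal sketch of starting it on master (as configured above), assuming the stock Hadoop 2.x sbin scripts:

# on master
mr-jobhistory-daemon.sh start historyserver
# the web UI should then answer at mapreduce.jobhistory.webapp.address
curl -s http://master:19888/ | head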

--------------------------------------------------------------------------------------------------

yarn-site.xml:

<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>

<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>clusterrm</value>
</property>

<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>

<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master</value>
</property>

<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>slave1</value>
</property>

<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>

<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>

<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>master:2181,slave1:2181,slave2:2181</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
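Before moving on to the Timeline service settings below, note that with ResourceManager HA enabled a ResourceManager has to run on each of the two hosts named above. A quick verification sketch, assuming the stock start scripts:

# start a ResourceManager on master (rm1) and on slave1 (rm2)
yarn-daemon.sh start resourcemanager
# then, from any node, query the HA state of each logical id
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2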

<property>
  <description>The hostname of the Timeline service web application.</description>
  <name>yarn.timeline-service.hostname</name>
  <value>master</value>
</property>

<property>
  <description>Address for the Timeline server to start the RPC server.</description>
  <name>yarn.timeline-service.address</name>
  <value>master:10200</value>
</property>

<property>
  <description>The http address of the Timeline service web application.</description>
  <name>yarn.timeline-service.webapp.address</name>
  <value>master:8188</value>
</property>

<property>
  <description>The https address of the Timeline service web application.</description>
  <name>yarn.timeline-service.webapp.https.address</name>
  <value>master:8190</value>
</property>

<property>
  <description>Handler thread count to serve the client RPC requests.</description>
  <name>yarn.timeline-service.handler-thread-count</name>
  <value>10</value>
</property>

<property>
  <description>Enables cross-origin support (CORS) for web services where cross-origin web response headers are needed. For example, javascript making a web services request to the timeline server.</description>
  <name>yarn.timeline-service.http-cross-origin.enabled</name>
  <value>false</value>
</property>

<property>
  <description>Comma separated list of origins that are allowed for web services needing cross-origin (CORS) support. Wildcards (*) and patterns allowed.</description>
  <name>yarn.timeline-service.http-cross-origin.allowed-origins</name>
  <value>*</value>
</property>

<property>
  <description>Comma separated list of methods that are allowed for web services needing cross-origin (CORS) support.</description>
  <name>yarn.timeline-service.http-cross-origin.allowed-methods</name>
  <value>GET,POST,HEAD</value>
</property>

<property>
  <description>Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support.</description>
  <name>yarn.timeline-service.http-cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
</property>

<property>
  <description>The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support.</description>
  <name>yarn.timeline-service.http-cross-origin.max-age</name>
  <value>1800</value>
</property>

<property>
  <description>Indicate to clients whether Timeline service is enabled or not. If enabled, the TimelineClient library used by end-users will post entities and events to the Timeline server.</description>
  <name>yarn.timeline-service.enabled</name>
  <value>true</value>
</property>

<property>
  <description>Store class name for timeline store.</description>
  <name>yarn.timeline-service.store-class</name>
  <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
</property>

<property>
  <description>Enable age off of timeline store data.</description>
  <name>yarn.timeline-service.ttl-enable</name>
  <value>true</value>
</property>

<property>
  <description>Time to live for timeline store data in milliseconds.</description>
  <name>yarn.timeline-service.ttl-ms</name>
  <value>604800000</value>
</property>
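The Timeline service configured above is also its own daemon. A minimal sketch of starting it on master and probing the web address from yarn.timeline-service.webapp.address (the /ws/v1/timeline path is the standard v1 REST endpoint, used here only as a liveness check):

# on master
yarn-daemon.sh start timelineserver
curl -s http://master:8188/ws/v1/timeline/ | head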

