Disaster (simulated loss of namenode metadata):
1. Shut down the secondary namenode
/etc/init.d/hadoop-hdfs-secondarynamenode stop
2. Force a checkpoint on the secondary namenode
hdfs secondarynamenode -checkpoint force
3. Shut down the namenode
/etc/init.d/hadoop-hdfs-namenode stop
4. On the namenode, move the contents of dfs.namenode.name.dir aside and create an empty directory in their place (the relevant paths are configured in hdfs-site.xml; see the sketch after this list).
[root@hdm name]# pwd
/data/nn/dfs/name
[root@hdm name]# mv current /tmp/backup_nn_current
[root@hdm name]# mkdir current
[root@hdm name]# chown hdfs:hadoop current
5. The namenode will now fail to start, because its metadata directory no longer contains a valid fsimage.
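For reference, the two directories this procedure touches are defined in hdfs-site.xml. A minimal sketch, assuming the paths used on this cluster (the property names are standard Hadoop 2.x; the values are this cluster's layout, and dfs.namenode.checkpoint.dir must also be set on the namenode for the import in the recovery steps below):
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/nn/dfs/name</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/data/secondary_nn/dfs/namesecondary</value>
</property>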
Recovery:
1. On the namenode, create an empty directory at the path given by the dfs.namenode.checkpoint.dir configuration property.
mkdir -p /data/secondary_nn/dfs/namesecondary
chown hdfs:hadoop /data/secondary_nn/dfs/namesecondary
2. Copy the fsimage and edit logs from the secondary namenode to the namenode's dfs.namenode.checkpoint.dir with scp.
[root@hdw3 namesecondary]# pwd
/data/secondary_nn/dfs/namesecondary
[root@hdw3 namesecondary]# scp -r current hdm:/data/secondary_nn/dfs/namesecondary/
3. On the namenode, change the owner and group of the copied files
chown -R hdfs:hadoop /data/secondary_nn/dfs/namesecondary/*
4. On the namenode, import the checkpoint
hdfs namenode -importCheckpoint
5. Restart the HDFS cluster.
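Once HDFS is back up, it is worth sanity-checking the restored namespace. A minimal check, run as the hdfs superuser (standard commands; the sudo invocation assumes a typical packaged install where hdfs is the service user):
sudo -u hdfs hdfs dfsadmin -report
sudo -u hdfs hdfs fsck /
Note that -importCheckpoint refuses to run if dfs.namenode.name.dir already contains a valid image; the namenode loads the checkpoint from dfs.namenode.checkpoint.dir, verifies it, and saves it into the name directory, leaving the checkpoint directory unmodified.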