Thursday, March 1, 2018

Hadoop Admin Interview Questions and Answers - 3

Q 1. In the Hadoop ecosystem we have HDFS, ZooKeeper, YARN/MapReduce2, Hive, Spark, and Oozie. In what sequence should the services be started, from first to last?
Ans: ZooKeeper, HDFS, YARN/MapReduce2, Hive, Spark, Oozie.
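
If the cluster is managed by Ambari, the services can be started in that order through the Ambari REST API. A minimal sketch; the cluster name, host, and credentials below are assumptions:

# Start ZooKeeper first (repeat for HDFS, YARN, MAPREDUCE2, HIVE, SPARK, and OOZIE, in the same order)
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start ZOOKEEPER"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/services/ZOOKEEPER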

Q 2. What services do you use for authentication and authorization?
Ans: We use Kerberos for authentication and HDFS ACLs for authorization.
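
For example, assuming Kerberos is already set up, granting a group access to an HDFS path with an ACL looks like this (the principal, group, and path are assumptions):

# Authenticate as an administrative principal
kinit hdfs-admin@EXAMPLE.COM

# Grant the 'analysts' group read/execute on a directory via an HDFS ACL
hdfs dfs -setfacl -m group:analysts:r-x /data/sales

# Verify the ACL entries
hdfs dfs -getfacl /data/sales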

Q 3. What is the size of your cluster, and what are the services you use?
Ans: Services: HDFS, ZooKeeper, YARN/MapReduce2, Hive, Spark, and Oozie (as in Q1). The cluster has 10 hosts:
6 datanodes, 2 edge nodes, 2 namenodes
6 datanode hosts with 12 TB each
Block size = 64 MB, replication = 3
Raw capacity: 12 TB * 6 hosts = 72 TB
Cluster capacity in MB: 72 * 1,000,000 = 72,000,000 MB
Disk space needed per block: 64 MB per block * 3 replicas = 192 MB per block
Total number of blocks: 72,000,000 / 192 = 375,000 blocks
Keeping 30% headroom, usable raw capacity is 70% of 72 TB ≈ 50 TB, i.e. about 50 / 3 ≈ 16-17 TB of user data after replication.
Actual data = 13 TB (39 TB of raw storage at replication 3, about 54% of capacity)
Ingest is roughly 20-35 GB per day, and we keep the last 12 months of data.

Note: Kindly correct me if I am wrong; this is only a rough sketch.
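
These numbers can be cross-checked on a live cluster: the HDFS admin report prints the configured capacity, used space, and per-datanode details.

hdfs dfsadmin -report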

Q 4. What is the architecture of Hive?
Ans: In brief: Hive clients (CLI/Beeline, JDBC/ODBC) talk to HiveServer2; the Driver and Compiler parse and optimize the query using schema information from the Metastore, and the resulting plan runs on an execution engine (MapReduce, Tez, or Spark). Details: https://selecthadoop.blogspot.in/search/label/Hive

Q 5. What are producers, consumers, and brokers in Kafka?
Ans: A producer publishes messages to Kafka topics; a consumer subscribes to topics and reads those messages; a broker is a Kafka server that stores the topic partitions and serves producer and consumer requests. A Kafka cluster is made up of one or more brokers.

Q 6. Explain the execution of a Hadoop job.
Ans: 1. The client application submits a job to the ResourceManager.
2. The ResourceManager takes the job from the job queue and allocates an ApplicationMaster for it. It also manages and monitors resource allocation to each ApplicationMaster and container on the datanodes.
3. The ApplicationMaster divides the job into tasks and assigns them to datanodes.
4. On each datanode, a NodeManager manages the containers in which the tasks run.
5. The ApplicationMaster asks the ResourceManager to allocate more resources to particular containers if necessary.
6. The ApplicationMaster keeps the ResourceManager informed of the status of the job, and the ResourceManager keeps the client application informed.
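
As a concrete illustration, a MapReduce job can be submitted and tracked from the command line; the jar path and queue name below are assumptions:

# Submit the example wordcount job to the 'default' queue
yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount \
  -Dmapreduce.job.queuename=default /input /output

# Watch the application state reported by the ResourceManager
yarn application -list -appStates RUNNING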

Q 7. What are the components of YARN?
Ans: The ResourceManager (scheduler plus ApplicationsManager), the NodeManager on each worker node, the per-application ApplicationMaster, and the containers in which the tasks execute.

Q 8. What are your roles and responsibilities?
Ans: https://selecthadoop.blogspot.in/search/label/Daily%20Activities%20of%20Hadoop%20Admin

Q 9. What happens to the active namenode when a standby namenode becomes active?
Ans: During failover the old active NameNode is fenced first (using the configured fencing method, such as sshfence or a shell script) so that it can no longer write to the shared edit log; in an automatic-failover setup the ZKFCs coordinate this through ZooKeeper. Once it is healthy again, the former active NameNode rejoins the cluster in the standby role.
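
The HA state can be checked, and a fenced failover triggered, from the command line; nn1 and nn2 below stand for the configured NameNode service IDs and are assumptions:

# Which NameNode is currently active?
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Manually initiate a fenced failover from nn1 to nn2
hdfs haadmin -failover nn1 nn2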

Q 10. What are views in Ambari? Does the Files View browse the local directory as well as the Hadoop directory?
Ans: Ambari views provide UIs for executing Hive queries and Pig scripts and for moving files between the local filesystem and HDFS. The Files View itself browses HDFS; local files are uploaded to (and downloaded from) HDFS through it.

Q 11. How does Hadoop save a 100 MB file if the block size is 64 MB?
Ans: Hadoop saves a 100 MB file in 2 blocks: the first block is 64 MB and the second is only 36 MB.
Hadoop stores each file as a sequence of blocks; all blocks in a file except the last are the same size.
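
The block layout of such a file can be inspected with fsck (the path is an assumption):

# Show the blocks of the 100 MB file: one 64 MB block and one 36 MB block
hdfs fsck /data/file100mb.dat -files -blocks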

Q 12. If we have 4 datanodes and the replication factor is 3, how do we decommission 2 datanodes from the cluster?
Ans: Decommissioning both at once would leave only 2 live datanodes, which cannot hold 3 replicas, so the process would never complete. Either add replacement nodes first or reduce the replication factor to 2 (or less); then add the hosts to the file referenced by dfs.hosts.exclude, refresh the NameNode, and decommission one node at a time, waiting for each to reach the "Decommissioned" state.
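
A minimal sketch of the commands involved; the exclude-file path and hostname are assumptions:

# Lower replication so the 2 remaining datanodes can hold all replicas
hdfs dfs -setrep -R -w 2 /

# Add the host to the exclude file referenced by dfs.hosts.exclude, then refresh
echo "datanode3.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes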

Q 13. What are the steps to upgrade a Hadoop cluster? What changes need to be made?
Ans: At a high level: finalize any previous upgrade, back up the NameNode metadata (fsimage and edit logs) and configuration, enter safe mode and save the namespace, stop the cluster, install the new binaries, start HDFS with the -upgrade option, validate the data and a few representative jobs, and only then finalize the upgrade. Configuration files should also be reviewed for properties that the new version deprecates or renames.
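
A rough sketch of the HDFS side of the procedure; the paths are assumptions and the daemon commands vary between Hadoop versions:

# Preserve the namespace and back up the metadata before touching anything
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
cp -r /hadoop/hdfs/namenode /backup/namenode-$(date +%F)

# After installing the new binaries, start the NameNode in upgrade mode
hadoop-daemon.sh start namenode -upgrade

# Once the data and jobs have been validated, make the upgrade permanent
hdfs dfsadmin -finalizeUpgrade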

Q 14. What happens when we do not give a snapshot name?
Ans: The snapshot name is an optional argument. When it is omitted, a default name is generated from a timestamp in the format "'s'yyyyMMdd-HHmmss.SSS", e.g. "s20130412-151029.033".
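
For example, using the standard snapshot commands on a hypothetical directory:

# Snapshots must first be allowed on the directory
hdfs dfsadmin -allowSnapshot /data/important

# No name given: HDFS generates one such as s20130412-151029.033
hdfs dfs -createSnapshot /data/important

# With an explicit name instead
hdfs dfs -createSnapshot /data/important before-upgrade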

Q 15. In a kerberized Hadoop cluster, what are the troubleshooting steps when a user is unable to log in to the cluster?
Ans: Check whether the user can obtain a ticket (kinit) and whether a valid one already exists (klist); confirm the principal exists in the KDC and the password or keytab is valid; verify /etc/krb5.conf on the client (realm and KDC host); make sure the clocks are in sync, since Kerberos rejects requests with too much clock skew (5 minutes by default); and confirm the hadoop.security.auth_to_local rules map the principal to the expected local user.
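
The checks above map onto a handful of commands; the principal and keytab path are assumptions:

# Does the user already hold a valid ticket?
klist

# Try to obtain a fresh ticket for the user
kinit alice@EXAMPLE.COM

# If a keytab is used, test it directly
kinit -kt /etc/security/keytabs/alice.keytab alice@EXAMPLE.COM

# Kerberos rejects requests with too much clock skew; check time sync
ntpstat   # or: chronyc tracking

# Check how the principal maps to a local short name via auth_to_local
hadoop org.apache.hadoop.security.HadoopKerberosName alice@EXAMPLE.COM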

Q 16. Why do we use HDFS for applications having large data sets and not when there are lots of small files?
Ans: HDFS is better suited to a large amount of data in a few big files than to the same data spread across many small files. The NameNode keeps the metadata for every file, directory, and block in memory (roughly 150 bytes per object), so millions of small files consume NameNode heap out of proportion to the data they hold: for example, 100 million 1 MB files need roughly 30 GB of heap for the file and block objects alone, while the same 100 TB stored in 64 MB blocks needs only a small fraction of that. Storing data in large files keeps the metadata footprint small, which is why HDFS is optimized for large data sets rather than many small files.

Q 17. The Web UI shows that half of the datanodes are in decommissioning mode. What does that mean? Is it safe to remove those nodes from the network?
Ans: Decommissioning means the NameNode is copying the replicas stored on those datanodes over to the remaining datanodes. It is not safe to remove the nodes before decommissioning finishes: because of the replication strategy, removing datanodes en masse before the process completes can cause data loss.
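
Progress can be watched from the CLI; hosts should be removed only after they report the "Decommissioned" state.

hdfs dfsadmin -report -decommissioning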
