Saturday, January 20, 2018

Hive Architecture

[Figure: major components of Hive and their interactions with Hadoop]

The figure shows the major components of Hive and their interactions with Hadoop. As shown in the figure, the main components of Hive are:

UI – The user interface through which users submit queries and other operations to the system. As of 2011 the system had a command-line interface, and a web-based GUI was being developed.

Driver – The component that receives the queries. It implements the notion of session handles and provides execute and fetch APIs modeled on JDBC/ODBC interfaces.
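
The execute/fetch pattern the Driver exposes is what clients see through the Hive JDBC driver. The sketch below is illustrative only: it assumes a HiveServer2 instance at localhost:10000 and a hypothetical page_views table.

    // Minimal sketch of the execute/fetch pattern, assuming a HiveServer2
    // instance at localhost:10000 and a hypothetical page_views table.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");  // HiveServer2 JDBC driver
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement();
                 // "execute": the Driver hands the query to the compiler and
                 // execution engine.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT page, COUNT(*) AS views FROM page_views GROUP BY page")) {
                // "fetch": result rows are streamed back to the client.
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }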

Compiler – The component that parses the query, performs semantic analysis on the different query blocks and query expressions, and eventually generates an execution plan using the table and partition metadata looked up from the metastore.
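
One way to see the compiler's output is Hive's EXPLAIN statement, which prints the plan as a DAG of stages with their operator trees. A minimal sketch, reusing the hypothetical connection and page_views table from the example above:

    // Sketch: inspect the compiler's plan with EXPLAIN (table name is illustrative).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExplainPlanExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement();
                 // EXPLAIN returns the plan as rows of text: the stage DAG
                 // (map/reduce, fetch, move, ...) and the operator trees.
                 ResultSet rs = stmt.executeQuery(
                     "EXPLAIN SELECT page, COUNT(*) FROM page_views GROUP BY page")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }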

Metastore – The component that stores all the structure information of the various tables and partitions in the warehouse, including column and column type information, the serializers and deserializers necessary to read and write data, and the corresponding HDFS files where the data is stored.
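
What the metastore records maps directly onto what a CREATE TABLE statement declares: columns and types, partition columns, the row format/SerDe, the file format, and the HDFS location. A hedged sketch with illustrative names and paths:

    // Sketch: DDL whose pieces become metastore entries (names/paths are illustrative).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateTableExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement()) {
                stmt.execute(
                    "CREATE TABLE IF NOT EXISTS page_views ("              // columns + types
                  + "  page STRING, user_id BIGINT) "
                  + "PARTITIONED BY (dt STRING) "                          // partition metadata
                  + "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t' "     // (de)serialization info
                  + "STORED AS TEXTFILE "                                  // file format
                  + "LOCATION '/user/hive/warehouse/page_views'");         // HDFS location
            }
        }
    }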

Execution Engine – The component which executes the execution plan created by the compiler. The plan is a DAG of stages. The execution engine manages the dependencies between these different stages of the plan and executes these stages on the appropriate system components.
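
Which backend runs those stages is controlled by configuration: the hive.execution.engine property selects the classic MapReduce backend or an alternative such as Tez, depending on what the cluster provides. A hedged per-session sketch, again with illustrative table names:

    // Sketch: choosing the execution backend for a session. The valid values
    // depend on the installation; "mr" and "tez" are common choices.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExecutionEngineExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("SET hive.execution.engine=tez");  // session-level override
                // Stages of subsequent queries in this session run on the chosen engine.
                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM page_views")) {
                    while (rs.next()) {
                        System.out.println(rs.getLong(1));
                    }
                }
            }
        }
    }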

How a typical query flows through the system

  1. The UI calls the execute interface to the Driver (step 1 in Figure). 
  2. The Driver creates a session handle for the query and sends the query to the compiler to generate an execution plan (step 2). 
  3. The compiler gets the necessary metadata from the metastore (steps 3 and 4). 
  4. This metadata is used to typecheck the expressions in the query tree as well as to prune partitions based on query predicates. 
  5. The plan generated by the compiler (step 5) is a DAG of stages with each stage being either a map/reduce job, a metadata operation or an operation on HDFS. For map/reduce stages, the plan contains map operator trees (operator trees that are executed on the mappers) and a reduce operator tree (for operations that need reducers). 
  6. The execution engine submits these stages to appropriate components (steps 6, 6.1, 6.2 and 6.3). 
  7. In each task (mapper/reducer) the deserializer associated with the table or intermediate outputs is used to read the rows from HDFS files and these are passed through the associated operator tree. 
  8. Once the output is generated, it is written to a temporary HDFS file through the serializer (this happens in the mapper if the operation does not need a reduce phase).
  9. The temporary files are used to provide data to subsequent map/reduce stages of the plan. For DML operations the final temporary file is moved to the table's location (see the sketch after this list).
  10. This scheme is used to ensure that dirty data is not read (file rename being an atomic operation in HDFS). For queries, the contents of the temporary file are read by the execution engine directly from HDFS as part of the fetch call from the Driver (steps 7, 8 and 9).
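
As a concrete illustration of steps 8–10, the sketch below runs a DML statement whose map/reduce output is first written to temporary HDFS files and, once every stage has succeeded, moved into the destination partition. Table and partition names are illustrative.

    // Sketch: a DML statement whose result is staged in temporary HDFS files
    // and then renamed into the table's partition location (illustrative names).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InsertOverwriteExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement()) {
                // The plan's map/reduce stages write into a staging directory;
                // only after they all succeed is the output renamed into
                // dt='2018-01-20', so readers never see partially written data.
                stmt.execute(
                    "INSERT OVERWRITE TABLE page_views PARTITION (dt='2018-01-20') "
                  + "SELECT page, user_id FROM raw_page_views WHERE dt='2018-01-20'");
            }
        }
    }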

