Structure Of Parquet File Format

In my previous article (Read here – All you need to know about ORC file structure in depth), I explained the ORC file structure. It received a huge response, which pushed me to write a new article on the Parquet file format.

In this article, I will explain the Parquet file structure. I hope that by the end of it you will understand the Parquet file format and how data is stored in it. Apache Parquet is a free and open-source column-oriented data storage format from the Apache Hadoop ecosystem. It is similar to the other columnar storage file formats available in Hadoop, namely the RCFile and ORC formats.

The Parquet file format consists of two parts –

  1. Data
  2. Metadata

Data is written first in the file and the metadata is written at the end, which allows for single-pass writing. Let’s look at the Parquet file layout first and then have a look at the metadata.
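As a quick illustration, here is a minimal sketch using the pyarrow library (the file name and column names are placeholders) that writes a small table to Parquet and then reads back the metadata the writer placed in the footer:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Build a small in-memory table (hypothetical column names).
table = pa.table({
    "id": [1, 2, 3, 4],
    "name": ["a", "b", "c", "d"],
})

# Writing streams the data pages first; the footer metadata is
# appended at the end in a single pass.
pq.write_table(table, "example.parquet")

# A reader only needs the footer to discover the schema,
# the row groups and the column chunks.
meta = pq.read_metadata("example.parquet")
print(meta.num_rows, meta.num_row_groups, meta.num_columns)
```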

File Format –

A sample Parquet file layout is described below –

At a high level, a Parquet file consists of a header, one or more blocks, and a footer. The file starts with a 4-byte magic number (PAR1) in the header and ends with the same 4-byte magic number at the end of the footer. This magic number indicates that the file is in Parquet format. All of the file metadata is stored in the footer section.

Later in the blog, I’ll explain the advantage of having the metadata in the footer section.
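As a simple sanity check, the two magic numbers can be verified with plain file I/O. This is a rough sketch, assuming a file named example.parquet as in the earlier example:

```python
import os

def looks_like_parquet(path):
    """Check the 4-byte PAR1 magic at both ends of the file."""
    with open(path, "rb") as f:
        header_magic = f.read(4)      # first 4 bytes of the file
        f.seek(-4, os.SEEK_END)       # jump to the last 4 bytes
        footer_magic = f.read(4)
    return header_magic == b"PAR1" and footer_magic == b"PAR1"

print(looks_like_parquet("example.parquet"))  # True for a valid file
```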

Blocks in the parquet file are written in the form of a nested structure as below –

  - Blocks
    - Row Groups
      - Column Chunks
        - Pages

Each block in the Parquet file holds a row group, so the data in a Parquet file is partitioned into multiple row groups. Each row group in turn consists of one or more column chunks, one per column in the dataset. The data for each column chunk is then written in the form of pages. Each page contains values for a particular column only, which makes pages very good candidates for compression since they contain similar values.
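This nesting can be observed with pyarrow. In the sketch below the row_group_size and data_page_size values are arbitrary choices for illustration:

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"id": list(range(100_000)),
                  "value": [i * 0.5 for i in range(100_000)]})

# Force small row groups and pages so the nesting is easy to see.
pq.write_table(table, "layout.parquet",
               row_group_size=25_000,      # rows per row group
               data_page_size=64 * 1024)   # target bytes per data page

pf = pq.ParquetFile("layout.parquet")
print(pf.metadata.num_row_groups)          # 4 row groups
rg = pf.metadata.row_group(0)
print(rg.num_rows, rg.num_columns)         # 25000 rows, 2 column chunks
```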

As we have seen above, the file metadata is stored in the footer.

The footer metadata includes the version of the format, the schema, any extra key-value pairs, and metadata for each column in the file. The column metadata includes the type, path, encodings, number of values, compressed size, and so on. Apart from the file metadata, the footer also has a 4-byte field encoding the length of the footer metadata, followed by the 4-byte magic number (PAR1).
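These fields can be inspected through pyarrow's metadata objects. A small sketch, reusing the layout.parquet file from the example above:

```python
import pyarrow.parquet as pq

meta = pq.read_metadata("layout.parquet")
print(meta.created_by)                 # writer / format version string
print(meta.schema)                     # the file schema
print(meta.metadata)                   # extra key-value pairs, if any

col = meta.row_group(0).column(0)      # metadata of the first column chunk
print(col.physical_type)               # e.g. INT64
print(col.path_in_schema)              # column path, e.g. 'id'
print(col.encodings)                   # encodings used for the pages
print(col.num_values)                  # number of values in the chunk
print(col.total_compressed_size)       # compressed size in bytes
```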

In the case of Parquet files, the metadata is written after the data has been written, to allow for single-pass writing.

Since the metadata is stored in the footer, a reader of a Parquet file first seeks to the end of the file to read the footer metadata length, and then seeks backwards by that length to read the footer metadata itself.
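In raw bytes, the last 8 bytes of the file hold a 4-byte little-endian footer length followed by the PAR1 magic, and a reader uses them to locate the Thrift-serialized footer. A minimal sketch of that seek pattern, assuming the example.parquet file from earlier:

```python
import os
import struct

with open("example.parquet", "rb") as f:
    # The file ends with: <4-byte footer length><PAR1>
    f.seek(-8, os.SEEK_END)
    footer_len = struct.unpack("<i", f.read(4))[0]
    assert f.read(4) == b"PAR1"

    # Seek backwards past the footer and read it in one go.
    f.seek(-(8 + footer_len), os.SEEK_END)
    footer_bytes = f.read(footer_len)   # Thrift-serialized FileMetaData

print(footer_len, len(footer_bytes))
```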

In other file formats like SequenceFile and Avro, the metadata is stored in the header and sync markers are used to separate blocks, whereas in Parquet the block boundaries are stored directly in the footer metadata. This is possible because the metadata is written after all the blocks have been written.
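Because the row-group boundaries are recorded in the footer, a reader can jump straight to any row group without scanning the rest of the file. A rough sketch with pyarrow, reusing the layout.parquet file from above:

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("layout.parquet")

# Each row group can be located and read independently; in engines
# such as Spark or Hive, separate tasks typically process separate
# row groups in parallel.
for i in range(pf.metadata.num_row_groups):
    rg_table = pf.read_row_group(i)
    print(i, rg_table.num_rows)
```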

Therefore, Parquet files are splittable: the block boundaries can be read from the footer metadata, so blocks can be easily located and processed in parallel. I hope this provides a good overview of the Parquet file structure.

For a comparative analysis of which file format to use, please refer to the article – ORC Vs Parquet Vs Avro : How to select a right file format for Hive?

Rohan Karanjawala
I work with Ellicium Solutions Pvt Ltd as an AVP looking after projects in the big data analytics area, helping clients stay ahead of the competition and, more importantly, serve their customers well.