{{Short description|Column-oriented data storage format}}
{{Primary sources|date=October 2016}}
{{Infobox software
| name = Apache Parquet
| logo = Apache Parquet logo.svg
| screenshot =
| caption = Apache Parquet
| developer =
| released = {{Start date and age|2013|03|13|df=yes}} <!-- https://web.archive.org/web/20130504133255/http://blog.cloudera.com/blog/2013/03/introducing-parquet-columnar-storage-for-apache-hadoop/ -->
| latest release version = 2.9.0
| latest release date = {{Start date and age|2021|10|06|df=yes}}<ref>{{cite web |url=https://parquet.apache.org/blog/ |title=Apache Parquet – Releases |website=Apache.org |access-date=22 February 2023 |archive-date=22 February 2023 |archive-url=https://web.archive.org/web/20230222213151/https://parquet.apache.org/blog/ |url-status=live }}</ref>
<!-- This is a comment block.
Previously, the version referred to Parquet-MR, an implementation of the Parquet format: -->
| latest preview date =
| operating system = [[Cross-platform]]
| programming language = [[Java (programming language)|Java]] (reference implementation)<ref>{{cite web|url=https://github.com/apache/parquet-mr|title=Parquet-MR source code|website=[[GitHub]]|access-date=2 July 2019|archive-date=11 June 2018|archive-url=https://web.archive.org/web/20180611015409/https://github.com/apache/parquet-mr|url-status=live}}</ref>
| genre = [[Column-oriented DBMS]]
| license = [[Apache License 2.0]]
| website = {{URL|https://parquet.apache.org}}
}}
'''Apache Parquet''' is a [[free and open-source]] [[Column-oriented DBMS|column-oriented]] data storage format in the [[Apache Hadoop]] ecosystem. It is similar to [[RCFile]] and [[Apache ORC|ORC]], the other columnar-storage file formats in [[Apache Hadoop|Hadoop]], and is compatible with most of the data processing frameworks around [[Hadoop]]. It provides efficient [[data compression]] and [[encoding]] schemes with enhanced performance to handle complex data in bulk.

==History==
The [[Open-source software|open-source]] project to build Apache Parquet began as a joint effort between [[Twitter]]<ref>{{cite web|url=https://blog.twitter.com/2013/announcing-parquet-10-columnar-storage-for-hadoop|title=Release Date|access-date=2016-09-12|archive-date=2016-10-20|archive-url=https://web.archive.org/web/20161020154829/https://blog.twitter.com/2013/announcing-parquet-10-columnar-storage-for-hadoop|url-status=live}}</ref> and [[Cloudera]].<ref>{{Cite web|url=http://blog.cloudera.com/blog/2013/03/introducing-parquet-columnar-storage-for-apache-hadoop/|archive-url=https://web.archive.org/web/20130504133255/http://blog.cloudera.com/blog/2013/03/introducing-parquet-columnar-storage-for-apache-hadoop/|url-status=dead|archive-date=2013-05-04|title=Introducing Parquet: Efficient Columnar Storage for Apache Hadoop - Cloudera Engineering Blog|date=2013-03-13|language=en-US|access-date=2018-10-22}}</ref> Parquet was designed as an improvement on the Trevni columnar storage format created by [[Doug Cutting]], the creator of Hadoop. The first version, Apache Parquet{{nbsp}}1.0, was released in July 2013. Since April 27, 2015, Apache Parquet has been a top-level Apache Software Foundation (ASF)-sponsored project.<ref>{{Cite web|url = http://www.infoworld.com/article/2915565/big-data/apache-parquet-paves-the-way-towards-better-hadoop-data-storage.html|title = Apache Parquet paves the way for better Hadoop data storage|date = 28 April 2015|access-date = 21 May 2017|archive-date = 31 May 2017|archive-url = https://web.archive.org/web/20170531130443/http://www.infoworld.com/article/2915565/big-data/apache-parquet-paves-the-way-towards-better-hadoop-data-storage.html|url-status = live}}</ref><ref>{{Cite web|url=https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces75|title=The Apache Software Foundation Announces Apache™ Parquet™ as a Top-Level Project : The Apache Software Foundation Blog|date=27 April 2015|access-date=21 May 2017|archive-date=20 August 2017|archive-url=https://web.archive.org/web/20170820074502/https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces75|url-status=live}}</ref>

==Features==
Apache Parquet is implemented using the record-shredding and assembly algorithm,<ref>{{cite web|title=The striping and assembly algorithms from the Google-inspired Dremel paper|url=https://github.com/julienledem/redelm/wiki/The-striping-and-assembly-algorithms-from-the-Dremel-paper|website=github|access-date=13 November 2017|archive-date=26 October 2020|archive-url=https://web.archive.org/web/20201026095824/https://github.com/julienledem/redelm/wiki/The-striping-and-assembly-algorithms-from-the-Dremel-paper|url-status=live}}</ref> which accommodates the complex [[data structures]] that can be used to store data.<ref name=":1">{{cite web|url=https://parquet.apache.org/documentation/latest/|title=Apache Parquet Documentation|access-date=2016-09-12|archive-date=2016-09-05|archive-url=https://web.archive.org/web/20160905094320/http://parquet.apache.org/documentation/latest/|url-status=dead}}</ref> The values in each column are stored in contiguous memory locations, providing the following benefits:<ref>{{cite web|title=Apache Parquet Cloudera|url=http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_ig_parquet.html|access-date=2016-09-12|archive-date=2016-09-19|archive-url=https://web.archive.org/web/20160919220719/http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_ig_parquet.html|url-status=live}}</ref>

* Column-wise compression is efficient and saves storage space
* Encoding and compression techniques specific to the type of data in each column can be used
* Queries that fetch specific column values need not read the entire row, thus improving performance (see the example below)
Apache Parquet is implemented using the [[Apache Thrift]] framework, which increases its flexibility; it can work with a number of programming languages, such as [[C++]], [[Java (programming language)|Java]], [[Python (programming language)|Python]], and [[PHP]].<ref>{{cite web|title=Apache Thrift|url=http://thrift.apache.org/|access-date=2016-09-14|archive-date=2021-03-12|archive-url=https://web.archive.org/web/20210312025614/http://thrift.apache.org/|url-status=live}}</ref>

As of August 2015,<ref>{{cite web|title=Supported Frameworks|url=https://cwiki.apache.org/confluence/display/Hive/Parquet|access-date=2016-09-12|archive-date=2015-02-02|archive-url=https://web.archive.org/web/20150202145641/https://cwiki.apache.org/confluence/display/Hive/Parquet|url-status=live}}</ref> Parquet supports the big-data-processing frameworks including [[Apache Hive]], [[Apache Drill]], [[Apache Impala]], [http://crunch.apache.org/ Apache Crunch], [[Apache Pig]], [[Cascading (software)|Cascading]], [[Presto (SQL query engine)|Presto]] and [[Apache Spark]]. It is one of the external data formats used by the [[pandas (software)|pandas]] [[Python (programming language)|Python]] data manipulation and analysis library.

== Compression and encoding ==
In Parquet, compression is performed column by column, which enables different encoding schemes to be used for text and integer data. This strategy also keeps the door open for newer and better encoding schemes to be implemented as they are invented.

=== [[Dictionary coder|Dictionary encoding]] ===
Parquet has automatic dictionary encoding, enabled dynamically for data with a ''small'' number of unique values (i.e. below 10<sup>5</sup>), which allows significant compression and boosts processing speed.<ref name=":0">{{Cite web|url=https://blog.twitter.com/2013/announcing-parquet-10-columnar-storage-for-hadoop|title=Announcing Parquet 1.0: Columnar Storage for Hadoop {{!}} Twitter Blogs|website=blog.twitter.com|access-date=2016-09-14|archive-date=2016-10-20|archive-url=https://web.archive.org/web/20161020154829/https://blog.twitter.com/2013/announcing-parquet-10-columnar-storage-for-hadoop|url-status=live}}</ref>

=== Bit packing ===
Storage of integers is usually done with dedicated 32 or 64 bits per integer. For small integers, packing multiple integers into the same space makes storage more efficient.<ref name=":0" />

=== Run-length encoding (RLE) ===
To optimize storage of multiple occurrences of the same value, a single value is stored once along with the number of occurrences.<ref name=":0" />

Parquet implements a hybrid of bit packing and RLE, in which the encoding switches based on which produces the best compression results. This strategy works well for certain types of integer data and combines well with dictionary encoding.<ref name=":0" />
== Comparison ==
{{Unreferenced section|date=October 2016}}

Apache Parquet is comparable to [[RCFile]] and [[Apache ORC|Optimized Row Columnar (ORC)]] file formats {{mdash}} all three fall under the category of columnar data storage within the Hadoop ecosystem. They all have better compression and encoding with improved read performance at the cost of slower writes. In addition to these features, Apache Parquet supports limited [[schema evolution]]{{Citation needed|date=April 2023}}, i.e., the schema can be modified according to the changes in the data. It also provides the ability to add new columns and merge schemas that do not conflict.

[[Apache Arrow]] is designed as an in-memory complement to on-disk columnar formats like Parquet and ORC. The Arrow and Parquet projects include libraries that allow for reading and writing between the two formats.{{Citation needed|date=January 2023}}

==See also==
{{Portal|Free and open-source software}}
* [[Apache Arrow]]
* [[Apache Pig]]
* [[Apache Hive]]
* [[Apache Impala]]
* [[Apache Spark]]
* [[Apache Thrift]]
* [[Trino (SQL query engine)]]
* [[Presto (SQL query engine)]]
* [[SQLite]] embedded database system
* [[DuckDB]] embedded OLAP database with Parquet support


==References==
{{Reflist}}
==External links==
* {{Official website}}
* {{YouTube|1j8SdS7s_NY|The Parquet Format and Performance Optimization Opportunities|}}
* [https://research.google.com/pubs/pub36632.html Dremel paper]


{{Apache Software Foundation}}