Parquet is a columnar format developed within the Apache project. Data is compressed on disk and read into memory before use. The file format is described at https://github.com/apache/parquet-format. This software is written with reference to version 2.10.0 of the format.
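One consequence of the format definition is that every parquet file begins and ends with the 4-byte magic string "PAR1", which is how readers recognise the format before parsing the footer. The following pure-JDK sketch illustrates that check; it is for illustration only and is not how STIL's own format detection is implemented:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

/** Illustrative sketch: recognise a parquet file by its magic bytes. */
public class ParquetMagic {

    /** Parquet files start and end with the 4-byte magic "PAR1". */
    private static final byte[] MAGIC = "PAR1".getBytes(StandardCharsets.US_ASCII);

    /** Returns true if the named file starts and ends with the parquet magic. */
    public static boolean looksLikeParquet(String filename) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(filename, "r")) {
            long len = raf.length();
            if (len < 2L * MAGIC.length) {
                return false;  // too short to contain header and footer magic
            }
            byte[] head = new byte[MAGIC.length];
            byte[] tail = new byte[MAGIC.length];
            raf.readFully(head);
            raf.seek(len - MAGIC.length);
            raf.readFully(tail);
            return Arrays.equals(head, MAGIC) && Arrays.equals(tail, MAGIC);
        }
    }
}
```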
The parquet file format itself defines only rather limited semantic metadata, so that there is no standard way to record column units, descriptions, UCDs etc. By default, additional metadata is written in the form of a DATA-less VOTable attached to the file footer, as described by the VOParquet convention. This additional metadata can then be retrieved by other VOParquet-aware software.
Note:
The parquet I/O handlers require large external libraries, which are not always bundled with the library/application software because of their size. In some configurations, parquet support may not be present, and attempts to read or write parquet files will result in a message like:

   Parquet-mr libraries not available

If you can supply the relevant libraries on the classpath at runtime, the parquet support will work. At time of writing, the required libraries are included in the topcat-extra.jar monolithic jar file (though not topcat-full.jar), and are included if you have the topcat-all.dmg file. They can also be found in the starjava github repository (https://github.com/Starlink/starjava/tree/master/parquet/src/lib), or you can acquire them from the Parquet MR package. These arrangements may be revised in future releases, for instance if parquet usage becomes more mainstream. The required dependencies are a minimal subset of those required by the Parquet MR submodule parquet-cli at version 1.13.1, in particular the files:
aircompressor-0.21.jar
commons-collections-3.2.2.jar
commons-configuration2-2.1.1.jar
commons-lang3-3.9.jar
failureaccess-1.0.1.jar
guava-27.0.1-jre.jar
hadoop-auth-3.2.3.jar
hadoop-common-3.2.3.jar
hadoop-mapreduce-client-core-3.2.3.jar
htrace-core4-4.1.0-incubating.jar
parquet-cli-1.13.1.jar
parquet-column-1.13.1.jar
parquet-common-1.13.1.jar
parquet-encoding-1.13.1.jar
parquet-format-structures-1.13.1.jar
parquet-hadoop-1.13.1.jar
parquet-jackson-1.13.1.jar
slf4j-api-1.7.22.jar
slf4j-nop-1.7.22.jar
snappy-java-1.1.8.3.jar
stax2-api-4.2.1.jar
woodstox-core-5.3.0.jar
zstd-jni-1.5.0-1.jar
These libraries support some, but not all, of the compression formats defined for parquet, currently uncompressed, gzip, snappy, zstd and lz4_raw. Supplying more of the parquet-mr dependencies at runtime would extend this list. Unlike the rest of TOPCAT/STILTS/STIL, which is written in pure java, some of these libraries (currently the snappy and zstd compression codecs) contain native code, which means they may not work on all architectures. At time of writing all common architectures are covered, but there is the possibility of failure with a java.lang.UnsatisfiedLinkError on other platforms if attempting to read/write files that use those compression algorithms.
The handler behaviour may be modified by specifying
one or more comma-separated name=value configuration options
in parentheses after the handler name, e.g.
"parquet(votmeta=false,compression=gzip)
".
The following options are available:
votmeta = true|false
   If true, rich metadata for the table is written into the parquet file footer in the form of a DATA-less VOTable, under the key IVOA.VOTable-Parquet.content, according to the VOParquet convention (version 1.0). This enables items such as Units, UCDs and column descriptions, that would otherwise be lost in the serialization, to be stored in the output parquet file. This information can then be recovered by parquet readers that understand this convention. (Default: true)
compression = uncompressed|snappy|zstd|gzip|lz4_raw
   Configures the type of compression used for output. Supported values are uncompressed, snappy, zstd, gzip and lz4_raw. Others may be available if the relevant codecs are on the classpath at runtime. If no value is specified, the parquet-mr library default is used, which is probably uncompressed. (Default: null)
groupArray = true|false
   Controls the low-level detail of how array-valued columns are written. For an int32 array-valued column named IVAL, groupArray=false will write it as "repeated int32 IVAL", while groupArray=true will write it as "optional group IVAL (LIST) {repeated group list {optional int32 element}}". Although setting it false may be slightly more efficient, the default is true, since if any of the columns have array values that either may be null or may have elements which are null, groupArray-style declarations for all columns are required by the Parquet file format:

   "A repeated field that is neither contained by a LIST- or MAP-annotated group nor annotated by LIST or MAP should be interpreted as a required list of required elements where the element type is the type of the field. Implementations should use either LIST and MAP annotations or unannotated repeated fields, but not both. When using the annotations, no unannotated repeated types are allowed."

   If this option is set false and an attempt is made to write null arrays or arrays with null values, writing will fail. (Default: true)
usedict = true|false|null
   Determines whether dictionary encoding is used for output. If no value is specified, the parquet-mr library default is used, which is probably true. (Default: null)
If no output format is explicitly chosen,
writing to a filename with
the extension ".parquet
" or ".parq
" (case insensitive)
will select parquet
format for output.
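The extension-based selection described above amounts to a case-insensitive suffix test, sketched below. The FormatGuess helper is invented for illustration; STIL's real handler auto-detection is more general than this:

```java
import java.util.Locale;

/** Illustrative sketch of extension-based output format selection;
 *  not how STIL itself resolves handlers. */
public class FormatGuess {

    /** Returns "parquet" for *.parquet or *.parq filenames, else null. */
    public static String guessFormat(String filename) {
        String lc = filename.toLowerCase(Locale.ROOT);
        return (lc.endsWith(".parquet") || lc.endsWith(".parq"))
             ? "parquet"
             : null;
    }
}
```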
The handler class for files of this format is ParquetTableWriter.