Parquet is a columnar format developed within the Apache project. Data is compressed on disk and read into memory before use. The file format is described at https://github.com/apache/parquet-format. This software is written with reference to version 2.10.0 of the format.
This input handler will read columns representing scalars, strings and one-dimensional arrays of the same. It is not capable of reading multi-dimensional arrays, more complex nested data structures, or some more exotic data types like 96-bit integers. If such columns are encountered in an input file, a warning will be emitted through the logging system and the column will not appear in the read table. Support may be introduced for some additional types if there is demand.
Parquet files typically do not contain rich metadata
such as column units, descriptions, UCDs etc.
To remedy that, this reader supports the
VOParquet convention (version 1.0),
in which metadata is recorded in a DATA-less VOTable
stored in the parquet file header.
If such metadata is present it will by default be used,
though this can be controlled using the votmeta
configuration option below.
Depending on the way that the table is accessed, the reader tries to take advantage of the column and row block structure of parquet files to read the data in parallel where possible.
Note:
The parquet I/O handlers require large external libraries, which are not always bundled with the library/application software because of their size. In some configurations, parquet support may not be present, and attempts to read or write parquet files will result in a message like "Parquet-mr libraries not available". If you can supply the relevant libraries on the classpath at runtime, the parquet support will work. At time of writing, the required libraries are included in the topcat-extra.jar monolithic jar file (though not topcat-full.jar), and are included if you have the topcat-all.dmg file. They can also be found in the starjava github repository (https://github.com/Starlink/starjava/tree/master/parquet/src/lib), or you can acquire them from the Parquet MR package. These arrangements may be revised in future releases, for instance if parquet usage becomes more mainstream. The required dependencies are a minimal subset of those required by the Parquet MR submodule parquet-cli at version 1.13.1, in particular the files:
aircompressor-0.21.jar
commons-collections-3.2.2.jar
commons-configuration2-2.1.1.jar
commons-lang3-3.9.jar
failureaccess-1.0.1.jar
guava-27.0.1-jre.jar
hadoop-auth-3.2.3.jar
hadoop-common-3.2.3.jar
hadoop-mapreduce-client-core-3.2.3.jar
htrace-core4-4.1.0-incubating.jar
parquet-cli-1.13.1.jar
parquet-column-1.13.1.jar
parquet-common-1.13.1.jar
parquet-encoding-1.13.1.jar
parquet-format-structures-1.13.1.jar
parquet-hadoop-1.13.1.jar
parquet-jackson-1.13.1.jar
slf4j-api-1.7.22.jar
slf4j-nop-1.7.22.jar
snappy-java-1.1.8.3.jar
stax2-api-4.2.1.jar
woodstox-core-5.3.0.jar
zstd-jni-1.5.0-1.jar
These libraries support some, but not all, of the compression formats defined for parquet: currently uncompressed, gzip, snappy, zstd and lz4_raw. Supplying more of the parquet-mr dependencies at runtime would extend this list. Unlike the rest of TOPCAT/STILTS/STIL, which is written in pure java, some of these libraries (currently the snappy and zstd compression codecs) contain native code, which means they may not work on all architectures. At time of writing all common architectures are covered, but there is the possibility of failure with a java.lang.UnsatisfiedLinkError on other platforms when attempting to read/write files that use those compression algorithms.
The handler behaviour may be modified by specifying
one or more comma-separated name=value configuration options
in parentheses after the handler name, e.g.
"parquet(cachecols=true,nThread=4)".
The following options are available:
cachecols = true|false|null
If true, then when the table is loaded, all data is read by column into local scratch disk files, which is generally the fastest way to ingest all the data. If false, the table rows are read as required, and possibly cached using the normal STIL mechanisms. If null (the default), the decision is taken automatically based on available information.
(Default: null)
nThread = <int>
Sets the number of read threads used for concurrently reading table columns when the data is cached to local disk files; see the cachecols option. If the value is <=0 (the default), a value is chosen based on the number of apparently available processors.
(Default: 0)
tryUrl = true|false
false
)
votmeta = true|false|null
If true, then a DATA-less VOTable stored in the parquet file under the key IVOA.VOTable-Parquet.content will be read to supply the metadata for the input table, following the VOParquet convention. If false, any such VOTable metadata is ignored. If set null, the default, then such VOTable metadata will be used only if it is present and apparently consistent with the parquet data and metadata.
(Default: null)
votable = <filename-or-url>
Gives the location of a VOTable document to supply the metadata for the input table, in place of any stored in the parquet file itself; it is only used if the votmeta configuration is not false.
(Default: null)
This format can be automatically identified by its content, so you do not need to specify the format explicitly when reading parquet tables, regardless of the filename.