This release includes:

- {parquetize} now has a new `get_parquet_info` function for retrieving metadata from parquet files. It is particularly useful for checking row group sizes (added by @nbc).
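A minimal sketch of how this could be used; the example file path and the exact columns of the returned metadata are assumptions, not guaranteed by this release note:

```r
library(parquetize)

# inspect a parquet file: get_parquet_info() should return one row of
# metadata per file (number of rows, row groups, size, ...)
get_parquet_info(system.file("extdata", "iris.parquet", package = "parquetize"))
```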
This release includes:

- bugfix by @leungi: remove single quotes in the SQL statement that generated incorrect SQL syntax for connections of type Microsoft SQL Server #45
- {parquetize} now requires a minimal version (2.4.0) of the {haven} dependency to ensure that conversions from SAS files compressed in BINARY mode are performed correctly #46
- `csv_to_parquet` now has a `read_delim_args` argument, allowing arguments to be passed to `read_delim` (added by @nikostr); see the sketch after this list
- `table_to_parquet` can now convert files with uppercase extensions (.SAS7BDAT, .SAV, .DTA)
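A hedged sketch of the new argument; the input file path is hypothetical and the options shown are simply the usual {readr} ones:

```r
library(parquetize)

# pass read_delim() options through read_delim_args, e.g. for a
# semicolon-separated file that uses a comma as decimal mark
csv_to_parquet(
  path_to_file = "data/my_file.csv",  # hypothetical path
  path_to_parquet = tempdir(),
  read_delim_args = list(delim = ";", locale = readr::locale(decimal_mark = ","))
)
```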
This release includes:

- a new `fst_to_parquet` function that converts a fst file to parquet format; see the sketch after this list
- Rely more on `@inheritParams` to simplify the documentation of function arguments #38. This leads to some renaming of arguments (e.g. `path_to_csv` -> `path_to_file`, ...)
- Arguments `compression` and `compression_level` are now passed to the `write_parquet_at_once` and `write_parquet_by_chunk` functions and are now available in the main conversion functions of {parquetize} #36
- Group `@importFrom` directives in a single file to facilitate their maintenance #37
- work on `download_extract` tests #43
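A sketch of the new function together with the compression arguments; the input path is hypothetical:

```r
library(parquetize)

# convert a .fst file to a single zstd-compressed parquet file
fst_to_parquet(
  path_to_file = "data/iris.fst",  # hypothetical path
  path_to_parquet = tempdir(),
  compression = "zstd",
  compression_level = 10
)
```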
This release includes:

You can convert to parquet any query you want on any DBI-compatible RDBMS, with the new `dbi_to_parquet` function:
```r
dbi_connection <- DBI::dbConnect(RSQLite::SQLite(),
  system.file("extdata", "iris.sqlite", package = "parquetize"))

# Reading iris table from local sqlite database
# and conversion to one parquet file:
dbi_to_parquet(
  conn = dbi_connection,
  sql_query = "SELECT * FROM iris",
  path_to_parquet = tempdir(),
  parquetname = "iris"
)
```
You can find more information in the `dbi_to_parquet` documentation.
- a new `check_parquet` function that checks if a dataset/file is valid and returns its columns and arrow types (see the sketch below)
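A minimal sketch, assuming `check_parquet()` simply takes a path and that the iris example file ships with the package:

```r
library(parquetize)

# validate a parquet file/dataset and inspect its columns and arrow types
check_parquet(system.file("extdata", "iris.parquet", package = "parquetize"))
```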
Two arguments are deprecated to avoid confusion with arrow concepts and to keep naming consistent:

- `chunk_size` is replaced by `max_rows` (chunk size is an arrow concept),
- `chunk_memory_size` is replaced by `max_memory` for consistency.
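For illustration, a hedged sketch of the renamed argument on `table_to_parquet` (the value is arbitrary):

```r
library(parquetize)

# split the output into parquet files of at most 50 rows each
table_to_parquet(
  path_to_table = system.file("examples", "iris.sas7bdat", package = "haven"),
  path_to_parquet = tempdir(),
  max_rows = 50
)
```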
- refactoring: extract the logic to write parquet files by chunk or at once into `write_parquet_by_chunk` and `write_parquet_at_once`
- a big refactoring of tests: all `*_to_parquet` output files are formally validated (readable as parquet, number of lines, partitions, number of files)
- use `cli_abort` instead of `cli_alert_danger` followed by `stop("")` everywhere
- some minor changes
- bugfix: `table_to_parquet` did not select columns as expected
- bugfix: skip tests that download files when offline (`skip_if_offline`)
This release includes:

Due to these numerous contributions, @nbc is now officially part of the project authors!
After a big refactoring, three arguments are deprecated:

- `by_chunk`: `table_to_parquet` will automatically chunk if you use one of `chunk_memory_size` or `chunk_size`,
- `csv_as_a_zip`: `csv_to_parquet` will detect if the file is a zip from its extension,
- `url_to_csv`: use `path_to_csv` instead, `csv_to_parquet` will detect if the file is remote from the file path.

For the moment they only raise a deprecation warning.
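A hedged before/after sketch of the `url_to_csv` deprecation (the URL is only illustrative):

```r
library(parquetize)

# before (now deprecated):
# csv_to_parquet(url_to_csv = "https://example.org/data.csv", path_to_parquet = tempdir())

# after: pass the URL directly, remote files are detected from the path
csv_to_parquet(
  path_to_csv = "https://example.org/data.csv",
  path_to_parquet = tempdir()
)
```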
The possibility to chunk parquet output by memory size with `table_to_parquet()`:

`table_to_parquet()` takes a `chunk_memory_size` argument to convert an input file into parquet files of roughly `chunk_memory_size` Mb each when data are loaded in memory.

The argument `by_chunk` is deprecated (see above).

Example of use of the argument `chunk_memory_size`:
```r
table_to_parquet(
  path_to_table = system.file("examples", "iris.sas7bdat", package = "haven"),
  path_to_parquet = tempdir(),
  chunk_memory_size = 5000 # this will create files of around 5 Gb when loaded in memory
)
```
The possibility for users to pass arguments to `write_parquet()` when chunking (through the ellipsis). This can be used, for example, to pass `compression` and `compression_level`.

Example:
```r
table_to_parquet(
  path_to_table = system.file("examples", "iris.sas7bdat", package = "haven"),
  path_to_parquet = tempdir(),
  compression = "zstd",
  compression_level = 10,
  chunk_memory_size = 5000
)
```
A new `download_extract` function, added to download and, if needed, unzip a file:
```r
file_path <- download_extract(
  "https://www.nomisweb.co.uk/output/census/2021/census2021-ts007.zip",
  filename_in_zip = "census2021-ts007-ctry.csv"
)

csv_to_parquet(
  file_path,
  path_to_parquet = tempdir()
)
```
Under the hood, this release also hardens the tests.
This release fixes an error when converting a SAS file by chunk.
This release includes:

- Added column selection to the `table_to_parquet()` and `csv_to_parquet()` functions #20 (see the sketch after this list)
- The example files of the iris table in parquet format have been migrated to the `inst/extdata` directory.
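A hedged sketch of the column selection; the argument name `columns` and the exact column names of the SAS example file are assumptions:

```r
library(parquetize)

# keep only two columns of the SAS table in the parquet output
table_to_parquet(
  path_to_table = system.file("examples", "iris.sas7bdat", package = "haven"),
  path_to_parquet = tempdir(),
  columns = c("Species", "Sepal_Length")
)
```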
This release includes:

- The behaviour of the `table_to_parquet()` function has been fixed when the argument `by_chunk` is TRUE.
This release removes the `duckdb_to_parquet()` function on the advice of Brian Ripley from CRAN.

Indeed, the storage format of DuckDB is not yet stable; it will be stabilized with the release of DuckDB 1.0.
This release includes corrections for CRAN submission.
This release includes an important feature:

The `table_to_parquet()` function can now convert tables to parquet format with less memory consumption. This is useful for huge tables and for computers with little RAM (#15). A vignette has been written about it. See here.
- Removal of the `nb_rows` argument in the `table_to_parquet()` function
- It is replaced by the new arguments `by_chunk`, `chunk_size` and `skip` (see the documentation and the sketch after this list)
- Progress bars are now managed with the {cli} package
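A hedged sketch of the chunking arguments as introduced in this release (note that `by_chunk` and `chunk_size` are deprecated again in the more recent releases above):

```r
library(parquetize)

# convert the SAS file by chunks of 50 rows, starting from the first row
table_to_parquet(
  path_to_table = system.file("examples", "iris.sas7bdat", package = "haven"),
  path_to_parquet = tempdir(),
  by_chunk = TRUE,
  chunk_size = 50,
  skip = 0
)
```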
- Added the `duckdb_to_parquet()` function to convert duckdb files to parquet format.
- Added the `sqlite_to_parquet()` function to convert sqlite files to parquet format.
- Added the `rds_to_parquet()` function to convert rds files to parquet format.
- Added the `json_to_parquet()` function to convert json and ndjson files to parquet format.
- Added the possibility to convert a csv file to a partitioned parquet file (see the sketch after this list).
- Improving code coverage (#9)
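A sketch of the partitioned csv conversion; the file path and the `partition`/`partitioning` argument names are assumptions for illustration:

```r
library(parquetize)

# write a partitioned parquet dataset, one directory per Species value
csv_to_parquet(
  path_to_csv = "iris.csv",  # hypothetical path
  path_to_parquet = tempdir(),
  partition = "yes",
  partitioning = "Species"
)
```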
- Check if `path_to_parquet` exists in the functions `csv_to_parquet()` or `table_to_parquet()` (@py-b)
- Added the `table_to_parquet()` function to convert SAS, SPSS and Stata files to parquet format.
- Added the `csv_to_parquet()` function to convert csv files to parquet format.
- Added the `parquetize_example()` function to get the path to the package data examples (see the sketch after this list).
- Added a `NEWS.md` file to track changes to the package.
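A hedged sketch of `parquetize_example()`, assuming it behaves like the usual `*_example()` helpers (listing files when called without an argument); the file name below is an assumption:

```r
library(parquetize)

# list the example files shipped with the package
parquetize_example()

# get the full path to one of them
parquetize_example("iris.parquet")
```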