I've been trying to use st_read by passing the same style of file paths that I would use with read_csv_auto: s3://bucket_name/path/file.extension. However, this doesn't seem to work. I'm currently using DuckDB 0.9.2, but I have observed the same failures in 0.9.0 and 0.9.1. In my setup, http://localhost:8333 is a local seaweedfs container compliant with the S3 API (and sba_sample.xlsx exists at that path), yet st_read on an s3:// path fails with an exception.
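A minimal sketch of the failing pattern, assuming the S3 settings are pointed at the local seaweedfs endpoint (the bucket name and directory here are placeholders; only sba_sample.xlsx and localhost:8333 come from the report):

-- assumed setup: httpfs and spatial are installed, S3 settings target the local container
LOAD httpfs;
LOAD spatial;
SET s3_endpoint='localhost:8333';
SET s3_use_ssl=false;
SET s3_url_style='path';
-- this style of path works with DuckDB's built-in readers
SELECT * FROM read_csv_auto('s3://bucket_name/path/file.csv');
-- but the same style fails with st_read in 0.9.0 through 0.9.2
SELECT * FROM st_read('s3://bucket_name/path/sba_sample.xlsx');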
Hi! Thanks for reporting this issue. This is indeed a regression. I'm working on a fix, but in the meantime you should be able to work around it by using the GDAL S3 filesystem instead, e.g. by prefixing the path with /vsis3/.
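For example (hypothetical bucket name; GDAL picks up credentials and a custom endpoint from environment variables such as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_S3_ENDPOINT):

SELECT * FROM st_read('/vsis3/bucket_name/path/sba_sample.xlsx');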
Note: for XLSX files in particular, using DuckDB's own filesystem through httpfs has a slower initial load time than /vsis3/ or /vsicurl/. I'm aware of the issue and will look into improving it in the future.
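For reference, a sketch of the /vsicurl/ form, which should work when the object is reachable over plain HTTP(S) without authentication (placeholder URL):

SELECT * FROM st_read('/vsicurl/http://localhost:8333/bucket_name/sba_sample.xlsx');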
In a couple of hours (when the CI finishes) you should be able to install the extension with this fix for 0.9.2 by running:
FORCE INSTALL spatial FROM 'http://nightly-extensions.duckdb.org';
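After the forced install, the updated extension still needs to be loaded in a fresh session:

LOAD spatial;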