We are using UpChunk to upload files into a Google Cloud bucket, using a signed URL generated by an external backend. This largely works: we can see the uploaded files appear in the GCP bucket. However, when the file size exceeds the configured chunk size, the library makes several requests to the GCS upload URL (as expected), but once the chunked requests are done, only the last chunk is stored in the bucket. It seems like each PUT request containing a single chunk overwrites the previous one.
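To illustrate, we set the chunk size to 4 MiB. Here is a minimal sketch of the setup, assuming `@mux/upchunk` is given the signed URL directly as its `endpoint` (the `signedUrl` and `fileInput` names are hypothetical stand-ins for our actual code):

```ts
import * as UpChunk from '@mux/upchunk';

// Hypothetical inputs: in our app the signed URL comes from an external
// backend and the file from an <input type="file"> element.
declare const signedUrl: string;
declare const fileInput: HTMLInputElement;

const upload = UpChunk.createUpload({
  endpoint: signedUrl,       // signed GCS upload URL from the backend
  file: fileInput.files![0],
  chunkSize: 4096,           // specified in KiB, so 4096 = 4 MiB
});

upload.on('success', () => console.log('upload complete'));
upload.on('error', (err) => console.error(err.detail));
```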
We then upload a file of ~5.2 MiB. The first chunk's PUT has request headers along these lines:
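```
# illustrative values, assuming a file of exactly 5,452,595 bytes (~5.2 MiB)
PUT <signed upload URL>
Content-Length: 4194304
Content-Range: bytes 0-4194303/5452595
```

The byte numbers above are illustrative for that file size; the Content-Range header is the part UpChunk attaches to every chunk PUT.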
And the second chunk has:
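```
# same illustrative file size; this chunk covers the remaining ~1.2 MiB
PUT <signed upload URL>
Content-Length: 1258291
Content-Range: bytes 4194304-5452594/5452595
```

The ranges are contiguous and the total size is declared in each request, so the chunking itself looks correct.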
But after both requests are completed, the object in the GCS bucket is only about 1.2 MiB, the size of the final chunk alone.
We've also confirmed that if you look in GCS after the first request is done, but before the second one completes, the file at that point is exactly 4 MiB. This seems to confirm that the file is being overwritten rather than appended to.
We've gone through both Google Cloud's documentation and UpChunk's docs, and nothing seems to indicate what might be causing this or how to configure the upload to append instead of overwrite. Any thoughts would be greatly appreciated!
Hi there! This is a weird one; I'll confess I haven't seen it before... This might be totally off base, but can you confirm which GCS upload type you're using? This library assumes you're using Resumable Uploads, and a mismatch there is the only thing that immediately comes to mind for what could be going on here.
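For context, a GCS resumable upload is a two-step flow: the client first exchanges an initiation request for an upload session URI, and it is that session URI that accepts sequential Content-Range PUTs. A plain signed PUT URL, by contrast, treats every PUT as a complete object upload, which would produce exactly the overwrite behavior described above. Here is a minimal sketch of the session exchange on the client, assuming a signed URL that authorizes the XML API's `x-goog-resumable: start` POST (function and variable names are hypothetical):

```ts
// Sketch only: assumes the backend signed this URL for a POST carrying
// the x-goog-resumable header (XML API resumable-upload initiation).
async function startResumableSession(signedUrl: string, file: File): Promise<string> {
  const res = await fetch(signedUrl, {
    method: 'POST',
    headers: {
      'x-goog-resumable': 'start',
      'Content-Type': file.type,
    },
  });
  // GCS returns the chunk-upload session URI in the Location header.
  // (For browser clients, the bucket's CORS config must expose it.)
  const sessionUri = res.headers.get('Location');
  if (!sessionUri) {
    throw new Error('GCS did not return a resumable session URI');
  }
  return sessionUri; // pass this, not the signed URL, as the upload endpoint
}
```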
We can confirm that uploadType=resumable was included in the POST request to GCS that generates this URL (we use the GCS Go client library for that). So from what I can tell, the URL is formatted correctly and the necessary parameters are there.
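If that POST is the JSON API's resumable-upload initiation, its shape is roughly the following (bucket and object names illustrative); GCS returns the upload session URI in the Location header of the response, and that session URI is what the chunked PUTs must target:

```
POST https://storage.googleapis.com/upload/storage/v1/b/example-bucket/o?uploadType=resumable&name=example-object
```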