While working on the automerge-repo-swift implementation, I ran into a bit of a snag with some unexpected responses.
I'm running the simple automerge-repo-sync-server code, but verified the same behaviour from sync.automerge.org. The sync-server is running in a Docker container locally, per PR automerge/automerge-repo-sync-server#7. The versions involved here:

"@automerge/automerge": "^2.1.13",
"@automerge/automerge-repo": "^1.1.5",
"@automerge/automerge-repo-network-websocket": "^1.1.5",
"@automerge/automerge-repo-storage-nodefs": "^1.1.5",
I've created an integration test that does the following:
1. Creates an ephemeral local repo, adds a document locally, and syncs it to the remote (JS) automerge-repo over WebSockets.
2. Retains the documentID and discards the local repo.
3. Creates a new ephemeral repo and requests that documentID from the remote (JS) automerge-repo (see the sketch just below).
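The test is roughly this shape. This is a sketch rather than the verbatim test code: the identifiers here (Repo, SharePolicy.agreeable, WebSocketProvider, addNetworkAdapter, create, find) follow my reading of the automerge-repo-swift API and may not match the shipped signatures exactly.

```swift
import Automerge
import AutomergeRepo
import Foundation

// Sketch of the failing integration test; treat the API names as
// illustrative, not exact.
func roundTripThroughSyncServer() async throws {
    // Local sync-server; port 3030 is assumed here as the default.
    let serverURL = URL(string: "ws://localhost:3030/")!

    // Step 1: an ephemeral repo creates a document and syncs it to the remote.
    let first = Repo(sharePolicy: SharePolicy.agreeable)
    let firstSocket = WebSocketProvider()
    await first.addNetworkAdapter(adapter: firstSocket)
    try await firstSocket.connect(to: serverURL)
    let handle = try await first.create()

    // Step 2: retain the documentID, discard the local repo.
    let documentId = handle.id
    await firstSocket.disconnect()

    // Step 3: a fresh ephemeral repo requests that documentID back.
    let second = Repo(sharePolicy: SharePolicy.agreeable)
    let secondSocket = WebSocketProvider()
    await second.addNetworkAdapter(adapter: secondSocket)
    try await secondSocket.connect(to: serverURL)

    // Expect the stored document back; instead the server answers with an
    // UNAVAILABLE message first, then a SYNC carrying the document bytes.
    let found = try await second.find(id: documentId)
    _ = found
}
```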
In the traces, I can see the client joining the repo, accepting the peer message, and sending a request for the documentID.
I then get two WebSocket message responses: the first an UNAVAILABLE message, and the second a SYNC message with the contents from the server:
Test log snippet from my tracing/diagnostics
WebSocket received: UNAVAILABLE[documentId: 2QJgUvf8oVRFZCiEo8CPsGbH3YHu, sender: storage-server-sync-automerge-org, target: 61851380-52F1-4675-ABA8-7ED98E74B9BE]
We've requested 2QJgUvf8oVRFZCiEo8CPsGbH3YHu from 1 peers:
- Peer: storage-server-sync-automerge-org
Removing the sending peer, there are 0 remaining:
No further peers with requests outstanding, so marking document 2QJgUvf8oVRFZCiEo8CPsGbH3YHu as unavailable
updating state of 2QJgUvf8oVRFZCiEo8CPsGbH3YHu to unavailable
WebSocket received: SYNC[documentId: 2QJgUvf8oVRFZCiEo8CPsGbH3YHu, sender: storage-server-sync-automerge-org, target: 61851380-52F1-4675-ABA8-7ED98E74B9BE, data: 25 bytes]
PEER: 61851380-52F1-4675-ABA8-7ED98E74B9BE - handling a sync msg from storage-server-sync-automerge-org to 61851380-52F1-4675-ABA8-7ED98E74B9BE
At the moment, I'm taking the first UNAVAILABLE response as a declarative result and marking the document as unavailable, which then trips an assertion for being in an unexpected state when the later SYNC message attempts to update it.
I believe the sending of the first UNAVAILABLE message is a bug in the current implementation.
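As an interim measure (the workaround in the commits referenced below), the Swift side can treat UNAVAILABLE as advisory rather than terminal, so the SYNC that follows can still resolve the request. A minimal sketch of that shape, using illustrative local types rather than the real automerge-repo-swift message enum:

```swift
// Illustrative message and state types, not the library's own.
enum PeerMsg {
    case unavailable(documentId: String)
    case sync(documentId: String, data: [UInt8])
}

enum PendingDocState {
    case requesting
    case unavailable  // soft state: a late SYNC may still revive the request
    case ready
}

struct PendingRequest {
    var state: PendingDocState = .requesting

    mutating func handle(_ msg: PeerMsg) {
        switch msg {
        case .unavailable:
            // Previously this finalized the document as unavailable, and the
            // later SYNC hit an assertion; now it only parks the request.
            if state == .requesting { state = .unavailable }
        case .sync(_, let data):
            // Accept SYNC even from .unavailable — the server may send the
            // double-response of UNAVAILABLE followed by the document bytes.
            state = .ready
            applySyncData(data)
        }
    }

    // Hypothetical hook: integrate the sync payload into the Automerge doc.
    func applySyncData(_ data: [UInt8]) { /* ... */ }
}
```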
heckj added a commit to automerge/automerge-repo-swift that referenced this issue on Apr 22, 2024:
- adds extensive log tracing
- trace full resolve paths
- work around found issue automerge/automerge-repo#343 to accept Automerge-repo double-response on REQ