I use this library extensively in some of my plugins and recently found that it has a major shortcoming when used within a multisite environment.
Issue
The library stores batches globally as site options on the network. When Site 1 dispatches a background process, it locks the process so no further processes can be dispatched anywhere on the network. Say Site 2 now tries to dispatch a process while Site 1's process is still running: nothing new is dispatched, and Site 2's batch is simply added to the stack to be processed later.
When the process started on Site 1 finishes its own batch, it looks for further batches to process and picks up Site 2's batch, but in the context of Site 1.
That's all fine if the tasks in the batch don't "report back" to the parent site, but it's quite likely you'd be updating post meta or site-specific options within the tasks. Those changes would then be made on the wrong site.
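For illustration, a minimal sketch of the kind of task that goes wrong (the class, action, and meta key names are made up for the example):

```php
<?php
// Illustrative only: class, action, and meta key names are hypothetical.
class Sync_Process extends WP_Background_Process {

	protected $action = 'sync_products';

	protected function task( $item ) {
		// update_post_meta() always writes to the *current* site. If this
		// batch was queued by Site 2 but is being processed by the request
		// spawned from Site 1, the meta ends up in Site 1's tables.
		update_post_meta( $item['post_id'], '_sync_status', 'done' );

		return false; // Remove the item from the queue.
	}
}
```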
Proposed solution
Of course we could append the site ID to each task and switch_to_blog()/restore_current_blog() if necessary, but that adds a lot of overhead: it would have to switch on every single task, because there is currently no way to hook into the beginning/end of a batch. A sketch of that per-task workaround is below.
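Roughly, the per-task workaround would look like this (item keys and class name are illustrative, not the proposed fix):

```php
<?php
// Sketch of the per-task workaround. Item keys ('blog_id', 'post_id')
// and the class name are made up for the example.
class Sync_Process extends WP_Background_Process {

	protected $action = 'sync_products';

	public function push_to_queue( $data ) {
		// Record the originating site with every single item.
		$data['blog_id'] = get_current_blog_id();

		return parent::push_to_queue( $data );
	}

	protected function task( $item ) {
		$switched = false;

		// Switch context on every item, since there is no batch-level hook.
		if ( is_multisite() && get_current_blog_id() !== $item['blog_id'] ) {
			switch_to_blog( $item['blog_id'] );
			$switched = true;
		}

		update_post_meta( $item['post_id'], '_sync_status', 'done' );

		if ( $switched ) {
			restore_current_blog();
		}

		return false;
	}
}
```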
I've created a fork to test this and added the site ID to the batch key. When querying the batches, it extracts the ID and passes it on to the batch stdClass. handle() then calls switch_to_blog() if the batch is being processed from a site other than the one that initiated it. When the batch has finished processing, restore_current_blog() is called.
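The key-related part of that idea looks roughly like this. Treat it as a sketch: the exact generate_key() signature differs between library versions, and get_blog_id_from_key() is a helper invented for this example, not part of the library.

```php
<?php
// Rough sketch of embedding the blog ID in the batch key and reading it
// back out so handle() can switch context. Not the exact fork code.
class Site_Aware_Process extends WP_Background_Process {

	protected $action = 'sync_products';

	/**
	 * Generate a batch key that carries the originating blog ID,
	 * e.g. "wp_example_sync_products_batch_2_ab12cd…".
	 */
	protected function generate_key( $length = 64 ) {
		$unique  = md5( microtime() . wp_rand() );
		$prepend = $this->identifier . '_batch_' . get_current_blog_id() . '_';

		return substr( $prepend . $unique, 0, $length );
	}

	/**
	 * Extract the blog ID from a batch key so handle() can switch_to_blog()
	 * before processing and restore_current_blog() afterwards.
	 */
	protected function get_blog_id_from_key( $key ) {
		$prefix = $this->identifier . '_batch_';

		if ( 0 !== strpos( $key, $prefix ) ) {
			return get_current_blog_id();
		}

		return (int) strtok( substr( $key, strlen( $prefix ) ), '_' );
	}
}
```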
Having the site ID in the batch key would also allow for further multisite optimizations (e.g. pausing/resuming processing for individual sites).