Mark use concurrent locks as GA
kfaraz committed Jan 9, 2025
1 parent 1a972ef commit 6186cb0
Showing 5 changed files with 109 additions and 102 deletions.
2 changes: 1 addition & 1 deletion docs/data-management/automatic-compaction.md
@@ -173,7 +173,7 @@ You can use concurrent append and replace to safely replace the existing data in
To do this, you need to update your datasource to allow concurrent append and replace tasks:

* If you're using the API, include the following `taskContext` property in your API call: `"useConcurrentLocks": true`
-* If you're using the UI, enable **Use concurrent locks (experimental)** in the **Compaction config** for your datasource.
+* If you're using the UI, enable **Use concurrent locks** in the **Compaction config** for your datasource.

You'll also need to update your ingestion jobs for the datasource to include the task context `"useConcurrentLocks": true`.
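A minimal sketch of how that task context might appear in an ingestion or supervisor spec follows; all other spec fields are omitted for brevity:

```json
{
  "context": {
    "useConcurrentLocks": true
  }
}
```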

29 changes: 18 additions & 11 deletions docs/ingestion/concurrent-append-replace.md
@@ -22,21 +22,21 @@ title: Concurrent append and replace
~ under the License.
-->

-Concurrent append and replace safely replaces the existing data in an interval of a datasource while new data is being appended to that interval. One of the most common applications of this feature is appending new data (such as with streaming ingestion) to an interval while compaction of that interval is already in progress. Druid segments the data ingested during this time dynamically. The subsequent compaction run segments the data into the granularity you specified.
+Concurrent append and replace safely replaces the existing data in an interval of a datasource while new data is being appended to that interval. One of the most common applications of this feature is appending new data (such as with streaming ingestion) to an interval while compaction of that interval is already in progress. Druid partitions the data ingested during this time using `dynamic` partitioning. The subsequent compaction run partitions the data according to the granularity specified in the compaction config.

To set up concurrent append and replace, use the context flag `useConcurrentLocks`. Druid will then determine the correct lock type for you, either append or replace. Although you can set the type of lock manually, we don't recommend it.

-## Update the compaction settings
+## Update compaction config to use concurrent locks

If you want to append data to a datasource while compaction is running, you need to enable concurrent append and replace for the datasource by updating the compaction settings.

-### Update the compaction settings with the UI
+### Update compaction config in the Druid web console

-In the **Compaction config** for a datasource, enable **Use concurrent locks (experimental)**.
+In the **Compaction config** for a datasource, enable **Use concurrent locks**.

For details on accessing the compaction config in the UI, see [Enable automatic compaction with the web console](../data-management/automatic-compaction.md#manage-auto-compaction-using-the-web-console).

-### Update the compaction settings with the API
+### Update compaction config using the REST API

Add the `taskContext` like you would any other automatic compaction setting through the API:

@@ -51,17 +51,17 @@
curl --location --request POST 'http://localhost:8081/druid/coordinator/v1/confi
}'
```
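For reference, a minimal request body for this call might look like the following sketch; the datasource name `wikipedia` is illustrative, and any other auto-compaction settings you already use would sit alongside these fields:

```json
{
  "dataSource": "wikipedia",
  "taskContext": {
    "useConcurrentLocks": true
  }
}
```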

-## Configure a task lock type for your ingestion job
+## Use concurrent locks in ingestion jobs

-You also need to configure the ingestion job to allow concurrent tasks.
+You also need to configure the ingestion job to allow concurrent locks.

You can provide the context parameter like any other parameter for ingestion jobs through the API or the UI.

-### Add a task lock using the Druid console
+### Use concurrent locks in the Druid web console

-As part of the **Load data** wizard for classic batch (JSON-based ingestion) and streaming ingestion, enable the following config on the **Publish** step: **Use concurrent locks (experimental)**.
+As part of the **Load data** wizard for classic batch (JSON-based) ingestion and streaming ingestion, enable the following config on the **Publish** step: **Use concurrent locks**.

-### Add the task lock through the API
+### Use concurrent locks in the REST API

Add the following JSON snippet to your supervisor or ingestion spec if you're using the API:

@@ -70,7 +70,14 @@
"useConcurrentLocks": true
}
```
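As an illustration, in a native batch (`index_parallel`) task the context map sits at the top level of the task payload; this is only a sketch, with the schema, I/O, and tuning sections elided:

```json
{
  "type": "index_parallel",
  "spec": {
    "dataSchema": { },
    "ioConfig": { },
    "tuningConfig": { }
  },
  "context": {
    "useConcurrentLocks": true
  }
}
```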


## Update Overlord properties to use concurrent locks for all ingestion and compaction jobs

Updating the compaction config and ingestion job for each datasource can be cumbersome if you have several datasources in your cluster. You can instead set the following property in the `runtime.properties` of the Overlord service to use concurrent locks across all ingestion and compaction jobs.

```properties
druid.indexer.task.default.context={"useConcurrentLocks":true}
```
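Assuming this Overlord property supplies a default that applies only when a job does not set the key itself (which matches my reading of Druid's default-context behavior), an individual job can still opt out by overriding the key in its own context:

```json
{
  "context": {
    "useConcurrentLocks": false
  }
}
```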

## Task lock types
