From 6186cb06f5fd027d91bb20cea7f018c23d179d8e Mon Sep 17 00:00:00 2001
From: Kashif Faraz
Date: Thu, 9 Jan 2025 09:26:20 +0530
Subject: [PATCH 1/3] Mark use concurrent locks as GA

---
 docs/data-management/automatic-compaction.md |   2 +-
 docs/ingestion/concurrent-append-replace.md  |  29 +--
 .../compaction-config-dialog.spec.tsx.snap   | 176 +++++++++---------
 .../compaction-config-dialog.tsx             |   2 +-
 .../views/load-data-view/load-data-view.tsx  |   2 +-
 5 files changed, 109 insertions(+), 102 deletions(-)

diff --git a/docs/data-management/automatic-compaction.md b/docs/data-management/automatic-compaction.md
index cf129ea1ee20..62e209317f81 100644
--- a/docs/data-management/automatic-compaction.md
+++ b/docs/data-management/automatic-compaction.md
@@ -173,7 +173,7 @@ You can use concurrent append and replace to safely replace the existing data in
 To do this, you need to update your datasource to allow concurrent append and replace tasks:
 
 * If you're using the API, include the following `taskContext` property in your API call: `"useConcurrentLocks": true`
-* If you're using the UI, enable **Use concurrent locks (experimental)** in the **Compaction config** for your datasource.
+* If you're using the UI, enable **Use concurrent locks** in the **Compaction config** for your datasource.
 
 You'll also need to update your ingestion jobs for the datasource to include the task context `"useConcurrentLocks": true`.
 
diff --git a/docs/ingestion/concurrent-append-replace.md b/docs/ingestion/concurrent-append-replace.md
index 5468bc28c5c2..607fbedfc488 100644
--- a/docs/ingestion/concurrent-append-replace.md
+++ b/docs/ingestion/concurrent-append-replace.md
@@ -22,21 +22,21 @@ title: Concurrent append and replace
  ~ under the License.
  -->
 
-Concurrent append and replace safely replaces the existing data in an interval of a datasource while new data is being appended to that interval. One of the most common applications of this feature is appending new data (such as with streaming ingestion) to an interval while compaction of that interval is already in progress. Druid segments the data ingested during this time dynamically. The subsequent compaction run segments the data into the granularity you specified.
+Concurrent append and replace safely replaces the existing data in an interval of a datasource while new data is being appended to that interval. One of the most common applications of this feature is appending new data (such as with streaming ingestion) to an interval while compaction of that interval is already in progress. Druid partitions the data ingested during this time using `dynamic` partitioning. The subsequent compaction run then partitions the data according to the granularity you specified in the compaction config.
 
 To set up concurrent append and replace, use the context flag `useConcurrentLocks`. Druid will then determine the correct lock type for you, either append or replace. Although you can set the type of lock manually, we don't recommend it.
 
-## Update the compaction settings
+## Update the compaction config to use concurrent locks
 
 If you want to append data to a datasource while compaction is running, you need to enable concurrent append and replace for the datasource by updating the compaction settings.
 
-### Update the compaction settings with the UI
+### Update the compaction config from the Druid web console
 
-In the **Compaction config** for a datasource, enable **Use concurrent locks (experimental)**.
+In the **Compaction config** for a datasource, enable **Use concurrent locks**.
 For details on accessing the compaction config in the UI, see [Enable automatic compaction with the web console](../data-management/automatic-compaction.md#manage-auto-compaction-using-the-web-console).
 
-### Update the compaction settings with the API
+### Update the compaction config using the REST API
 
 Add the `taskContext` like you would any other automatic compaction setting through the API:
 
@@ -51,17 +51,17 @@ curl --location --request POST 'http://localhost:8081/druid/coordinator/v1/confi
 }'
 ```
 
-## Configure a task lock type for your ingestion job
+## Use concurrent locks in ingestion jobs
 
-You also need to configure the ingestion job to allow concurrent tasks.
+You also need to configure the ingestion job to allow concurrent locks.
 
 You can provide the context parameter like any other parameter for ingestion jobs through the API or the UI.
 
-### Add a task lock using the Druid console
+### Use concurrent locks in the Druid web console
 
-As part of the **Load data** wizard for classic batch (JSON-based ingestion) and streaming ingestion, enable the following config on the **Publish** step: **Use concurrent locks (experimental)**.
+As part of the **Load data** wizard for classic batch (JSON-based) ingestion and streaming ingestion, enable the following config on the **Publish** step: **Use concurrent locks**.
 
-### Add the task lock through the API
+### Use concurrent locks in the REST API
 
 Add the following JSON snippet to your supervisor or ingestion spec if you're using the API:
 
@@ -70,7 +70,14 @@ Add the following JSON snippet to your supervisor or ingestion spec if you're us
 "useConcurrentLocks": true
 }
 ```
-
+
+## Update Overlord properties to use concurrent locks for all ingestion and compaction jobs
+
+Updating the compaction config and ingestion jobs for each datasource can be cumbersome if you have several datasources in your cluster. You can instead set the following property in the `runtime.properties` of the Overlord service to use concurrent locks across all ingestion and compaction jobs.
+
+```bash
+druid.indexer.task.default.context={"useConcurrentLocks":true}
+```
 ## Task lock types
 
diff --git a/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap b/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap
index 372153bfde00..60e506ff88d8 100644
--- a/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap
+++ b/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap
@@ -35,15 +35,15 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic
   },
   {
     "info":

- For perfect rollup, you should use either + For perfect rollup, you should use either hashed - (partitioning based on the hash of dimensions in each row) or + (partitioning based on the hash of dimensions in each row) or range - (based on several dimensions). For best-effort rollup, you should use + (based on several dimensions). For best-effort rollup, you should use dynamic @@ -86,27 +86,27 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic A target row count for each partition. Each partition will have a row count close to the target assuming evenly distributed keys. Defaults to 5 million if numShards is null.

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -128,33 +128,33 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic maxRowsPerSegment - renamed to + renamed to targetRowsPerSegment

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -175,11 +175,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -224,11 +224,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -247,11 +247,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic Maximum number of rows to include in a partition.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -358,7 +358,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic

For more information refer to the - + @@ -372,7 +372,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic > @@ -445,15 +445,15 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p }, { "info":

- For perfect rollup, you should use either + For perfect rollup, you should use either hashed - (partitioning based on the hash of dimensions in each row) or + (partitioning based on the hash of dimensions in each row) or range - (based on several dimensions). For best-effort rollup, you should use + (based on several dimensions). For best-effort rollup, you should use dynamic @@ -496,27 +496,27 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p A target row count for each partition. Each partition will have a row count close to the target assuming evenly distributed keys. Defaults to 5 million if numShards is null.

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -538,33 +538,33 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p maxRowsPerSegment - renamed to + renamed to targetRowsPerSegment

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -585,11 +585,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -634,11 +634,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -657,11 +657,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p Maximum number of rows to include in a partition.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -768,7 +768,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p

For more information refer to the - + @@ -782,7 +782,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p > @@ -855,15 +855,15 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa }, { "info":

- For perfect rollup, you should use either + For perfect rollup, you should use either hashed - (partitioning based on the hash of dimensions in each row) or + (partitioning based on the hash of dimensions in each row) or range - (based on several dimensions). For best-effort rollup, you should use + (based on several dimensions). For best-effort rollup, you should use dynamic @@ -906,27 +906,27 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa A target row count for each partition. Each partition will have a row count close to the target assuming evenly distributed keys. Defaults to 5 million if numShards is null.

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -948,33 +948,33 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa maxRowsPerSegment - renamed to + renamed to targetRowsPerSegment

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -995,11 +995,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -1044,11 +1044,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -1067,11 +1067,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa Maximum number of rows to include in a partition.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -1178,7 +1178,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa

For more information refer to the - + @@ -1192,7 +1192,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa > @@ -1265,15 +1265,15 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = }, { "info":

- For perfect rollup, you should use either + For perfect rollup, you should use either hashed - (partitioning based on the hash of dimensions in each row) or + (partitioning based on the hash of dimensions in each row) or range - (based on several dimensions). For best-effort rollup, you should use + (based on several dimensions). For best-effort rollup, you should use dynamic @@ -1316,27 +1316,27 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = A target row count for each partition. Each partition will have a row count close to the target assuming evenly distributed keys. Defaults to 5 million if numShards is null.

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -1358,33 +1358,33 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = maxRowsPerSegment - renamed to + renamed to targetRowsPerSegment

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -1405,11 +1405,11 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -1454,11 +1454,11 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -1477,11 +1477,11 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = Maximum number of rows to include in a partition.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -1588,7 +1588,7 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] =

For more information refer to the - + @@ -1602,7 +1602,7 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = > diff --git a/web-console/src/dialogs/compaction-config-dialog/compaction-config-dialog.tsx b/web-console/src/dialogs/compaction-config-dialog/compaction-config-dialog.tsx index 800d88f78b29..887acb7a280b 100644 --- a/web-console/src/dialogs/compaction-config-dialog/compaction-config-dialog.tsx +++ b/web-console/src/dialogs/compaction-config-dialog/compaction-config-dialog.tsx @@ -130,7 +130,7 @@ export const CompactionConfigDialog = React.memo(function CompactionConfigDialog } > { setCurrentConfig( diff --git a/web-console/src/views/load-data-view/load-data-view.tsx b/web-console/src/views/load-data-view/load-data-view.tsx index 3767a9852e99..25943929a187 100644 --- a/web-console/src/views/load-data-view/load-data-view.tsx +++ b/web-console/src/views/load-data-view/load-data-view.tsx @@ -3443,7 +3443,7 @@ export class LoadDataView extends React.PureComponent { this.updateSpec( From 9a4cb50ea8ba3bec124a1c5c599423d6893fe76b Mon Sep 17 00:00:00 2001 From: Kashif Faraz Date: Thu, 9 Jan 2025 09:29:05 +0530 Subject: [PATCH 2/3] Revert changes to web-console snapshot --- .../compaction-config-dialog.spec.tsx.snap | 176 +++++++++--------- 1 file changed, 88 insertions(+), 88 deletions(-) diff --git a/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap b/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap index 60e506ff88d8..372153bfde00 100644 --- a/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap +++ b/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap @@ -35,15 +35,15 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic }, { "info":

- For perfect rollup, you should use either + For perfect rollup, you should use either hashed - (partitioning based on the hash of dimensions in each row) or + (partitioning based on the hash of dimensions in each row) or range - (based on several dimensions). For best-effort rollup, you should use + (based on several dimensions). For best-effort rollup, you should use dynamic @@ -86,27 +86,27 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic A target row count for each partition. Each partition will have a row count close to the target assuming evenly distributed keys. Defaults to 5 million if numShards is null.

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -128,33 +128,33 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic maxRowsPerSegment - renamed to + renamed to targetRowsPerSegment

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -175,11 +175,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -224,11 +224,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -247,11 +247,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic Maximum number of rows to include in a partition.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -358,7 +358,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic

For more information refer to the - + @@ -372,7 +372,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic > @@ -445,15 +445,15 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p }, { "info":

- For perfect rollup, you should use either + For perfect rollup, you should use either hashed - (partitioning based on the hash of dimensions in each row) or + (partitioning based on the hash of dimensions in each row) or range - (based on several dimensions). For best-effort rollup, you should use + (based on several dimensions). For best-effort rollup, you should use dynamic @@ -496,27 +496,27 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p A target row count for each partition. Each partition will have a row count close to the target assuming evenly distributed keys. Defaults to 5 million if numShards is null.

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -538,33 +538,33 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p maxRowsPerSegment - renamed to + renamed to targetRowsPerSegment

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -585,11 +585,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -634,11 +634,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -657,11 +657,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p Maximum number of rows to include in a partition.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -768,7 +768,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p

For more information refer to the - + @@ -782,7 +782,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p > @@ -855,15 +855,15 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa }, { "info":

- For perfect rollup, you should use either + For perfect rollup, you should use either hashed - (partitioning based on the hash of dimensions in each row) or + (partitioning based on the hash of dimensions in each row) or range - (based on several dimensions). For best-effort rollup, you should use + (based on several dimensions). For best-effort rollup, you should use dynamic @@ -906,27 +906,27 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa A target row count for each partition. Each partition will have a row count close to the target assuming evenly distributed keys. Defaults to 5 million if numShards is null.

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -948,33 +948,33 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa maxRowsPerSegment - renamed to + renamed to targetRowsPerSegment

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -995,11 +995,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -1044,11 +1044,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -1067,11 +1067,11 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa Maximum number of rows to include in a partition.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -1178,7 +1178,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa

For more information refer to the - + @@ -1192,7 +1192,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa > @@ -1265,15 +1265,15 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = }, { "info":

- For perfect rollup, you should use either + For perfect rollup, you should use either hashed - (partitioning based on the hash of dimensions in each row) or + (partitioning based on the hash of dimensions in each row) or range - (based on several dimensions). For best-effort rollup, you should use + (based on several dimensions). For best-effort rollup, you should use dynamic @@ -1316,27 +1316,27 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = A target row count for each partition. Each partition will have a row count close to the target assuming evenly distributed keys. Defaults to 5 million if numShards is null.

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -1358,33 +1358,33 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = maxRowsPerSegment - renamed to + renamed to targetRowsPerSegment

- If + If numShards is left unspecified, the Parallel task will determine - + numShards - automatically by + automatically by targetRowsPerSegment .

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -1405,11 +1405,11 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data.

- Note that either + Note that either targetRowsPerSegment - or + or numShards @@ -1454,11 +1454,11 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = Target number of rows to include in a partition, should be a number that targets segments of 500MB~1GB.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -1477,11 +1477,11 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = Maximum number of rows to include in a partition.

- Note that either + Note that either targetRowsPerSegment - or + or maxRowsPerSegment @@ -1588,7 +1588,7 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] =

For more information refer to the - + @@ -1602,7 +1602,7 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = > From 9949362d3dad41ce65265f5c263a817ae862ddfc Mon Sep 17 00:00:00 2001 From: Kashif Faraz Date: Thu, 9 Jan 2025 10:12:17 +0530 Subject: [PATCH 3/3] Update snapshots --- .../__snapshots__/compaction-config-dialog.spec.tsx.snap | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap b/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap index 372153bfde00..b9b48cf6e58b 100644 --- a/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap +++ b/web-console/src/dialogs/compaction-config-dialog/__snapshots__/compaction-config-dialog.spec.tsx.snap @@ -372,7 +372,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (dynamic > @@ -782,7 +782,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (hashed p > @@ -1192,7 +1192,7 @@ exports[`CompactionConfigDialog matches snapshot with compactionConfig (range pa > @@ -1602,7 +1602,7 @@ exports[`CompactionConfigDialog matches snapshot without compactionConfig 1`] = >