HDDS-11898. Design doc: leader-side execution (#7583)

# Bucket quota

Earlier, a bucket-level lock was taken, quota validation was performed, and the quota was updated within the lock in the cache on all nodes.
During startup, before persistence to the DB, requests are re-executed from the Ratis log and the bucket quota cache is prepared again.
So once the bucket quota is updated in the cache, it remains the same (it is recovered during startup by the same re-execution).

Now requests are executed at the leader node, so the bucket quota cache cannot be recovered if a crash happens. It can therefore be updated in the BucketTable cache only after it is persisted.

![quota_reserve_flow.png](quota_reserve_flow.png)

For bucket quota in the new flow:
- When processing a key commit, the quota is `reserved` at the leader.
- Bucket quota changes are distributed to all other nodes via Ratis.
- On every node, key changes are flushed to the DB; at that time the quota change is applied to the BucketTable and the quota reservation is reset.
- On failure, the quota reserved for the request is reset.

`Bucket Resource Quota` stores quota information with respect to the request `index` as well, and the same index is used to reset the reservation during request handling (as sketched below):
- At the leader node, after the request is sent to Ratis, in both the success and failure paths (as a default), using the `request index`.
- On every node, on apply transaction, the quota is reset with the request index.
So in all cases, the reserved quota is removed while processing the request.
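
Below is a minimal Java sketch of such an index-keyed reservation tracker; the class name, fields, and quota parameters are illustrative assumptions, not Ozone's actual classes. On a follower, `reset` finds no entry and is a no-op, matching case 3 below.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch: reserved quota tracked per bucket and per request index, so a
 * reservation can be released on success, failure, or apply transaction.
 */
public class BucketQuotaResource {
  /** bucket -> (request index -> reserved bytes). */
  private final Map<String, ConcurrentHashMap<Long, Long>> reserved =
      new ConcurrentHashMap<>();
  /** bucket -> total bytes currently reserved. */
  private final Map<String, AtomicLong> reservedTotal = new ConcurrentHashMap<>();

  /** Reserve quota at the leader while processing a key commit. */
  public boolean reserve(String bucket, long requestIndex, long bytes,
      long usedBytes, long quotaLimit) {
    AtomicLong total = reservedTotal.computeIfAbsent(bucket, b -> new AtomicLong());
    if (usedBytes + total.addAndGet(bytes) > quotaLimit) {
      total.addAndGet(-bytes);  // over quota: roll back the reservation
      return false;
    }
    reserved.computeIfAbsent(bucket, b -> new ConcurrentHashMap<>())
        .put(requestIndex, bytes);
    return true;
  }

  /** Reset the reservation for a request index; a no-op when absent. */
  public void reset(String bucket, long requestIndex) {
    Map<Long, Long> byIndex = reserved.get(bucket);
    if (byIndex == null) {
      return;  // e.g. on a follower: no entry exists for this request
    }
    Long bytes = byIndex.remove(requestIndex);
    if (bytes != null) {
      reservedTotal.get(bucket).addAndGet(-bytes);
    }
  }
}
```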

Cases:
1. Quota is reserved at the leader node but sync to the other nodes fails: the reservation is always reset.
2. Quota is updated at the leader node in apply transaction: the reservation is reset to avoid a double quota increase.
3. Quota is updated at a follower node in apply transaction: the reset has no impact, as `Bucket Quota resource` has no entry for the request.

# Index generation
In the old flow, the Ratis index is used for the `object Id` of a key and the `update Id` on key update.
The new flow will not depend on the Ratis index, but will have its own **`managed index`**.

Index initialization / update:
- First-time startup: 0
- On restart (leader): last persisted index + 1
- On switch-over: last index + 1
- Request execution: index + 1
- Upgrade: last Ratis index + 1


## Index persistence

The index is persisted in the TransactionInfo table with a new key: `#KEYINDEX`.
Format: `<timestamp>#<index>`
- Timestamp: used to identify the last saved transaction executed.
- Index: index identifier of the request.
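
A small Java sketch of encoding and decoding that value; the class and method names are hypothetical, while the key name and format follow the text above.

```java
/** Sketch: codec for the "#KEYINDEX" value stored as <timestamp>#<index>. */
public final class KeyIndexCodec {
  public static final String KEY_INDEX_DB_KEY = "#KEYINDEX";

  private KeyIndexCodec() { }

  public static String encode(long timestampMillis, long index) {
    return timestampMillis + "#" + index;
  }

  /** Returns {timestamp, index} parsed from the persisted value. */
  public static long[] decode(String value) {
    int sep = value.lastIndexOf('#');
    long timestamp = Long.parseLong(value.substring(0, sep));
    long index = Long.parseLong(value.substring(sep + 1));
    return new long[] {timestamp, index};
  }
}
```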

Syncing the index to other nodes:
a special request body carrying metadata: [Execution Control Message](request-persist-distribution.md#control-request).


## Step-by-step incremental changes for the existing flow

1. For incremental changes, the dependency on the Ratis index needs to be removed. For this, the OM-managed index must be used in both the old and new flows.
2. `objectId` generation: must follow the old logic of index-to-objectId mapping.

# Background

Here is the summary of the challenges:

- The current implementation depends on consensus on the order of requests received and not on consensus on the processing of the requests.
- The double buffer implementation currently is meant to optimize the rate at which writes get flushed to RocksDB but the effective batching achieved is 1.2 at best. It is also a source of continuous bugs and added complexity for new features.
- The number of transactions that can be pushed through Ratis currently caps out around 25k.
- The current performance envelope for OM is around 12k transactions per second. Early testing pushes this to 40k transactions per second.

## Execution at the leader node needs to address the following cases
1. Parallel execution: Ratis serializes all execution in order. With explicit control, independent requests can be executed in parallel.
2. Optimized locking: locks are taken at the bucket level for both the read and write flows. The focus here is to remove locking between the read and write flows and to adopt more granular locking.
3. Cache optimization: caches are maintained for write operations, and reads also make use of them for consistency. This makes it complex for reads to return accurate results under parallel operations.
4. Double buffer code complexity: the double buffer provides batching for DB updates. This is done within the Ratis state machine and induces issues managing the Ratis state machine, cache, and DB updates.
5. Request execution flow optimization: optimize the request execution flow, removing unnecessary operations and improving testability.
6. Performance and resource optimization: currently, the same execution is repeated on all nodes, which adds failure points. With leader-side execution and parallelism, performance and resource utilization need to improve.

### Object ID generation
Currently, the Object ID is tied to Ratis transaction metadata. This has multiple challenges in the long run.

- If OM adopts multi-Ratis to scale writes further, object IDs will no longer be unique.
- If we shard OM, the object ID will not be unique across OMs.
- When batching multiple requests, we cannot use Ratis metadata to generate object IDs.

Longer term, we should move to UUID-based object ID generation. This will allow us to generate object IDs that are globally unique. In the meantime, we are moving to persistent-counter-based object ID generation. The counter is persisted during apply transaction and is incremented for each new object created.
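
A minimal sketch of the persistent-counter idea; the class and method names are hypothetical, and only the persistence hook is indicated.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch: persistent-counter-based object ID generation. The counter is
 * restored from the last value persisted during apply transaction and
 * incremented for each new object.
 */
public class ObjectIdCounter {
  private final AtomicLong counter;

  public ObjectIdCounter(long lastPersistedValue) {
    // On restart, resume from the value persisted with the last batch.
    this.counter = new AtomicLong(lastPersistedValue);
  }

  /** Called for each new object created by a request. */
  public long nextObjectId() {
    return counter.incrementAndGet();
  }

  /** Value to write to the DB as part of the apply-transaction batch. */
  public long snapshotForPersistence() {
    return counter.get();
  }
}
```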

## Prototype performance results

| S.no | Item                                       | Old flow result               | Leader execution result |
|------|--------------------------------------------|-------------------------------|-------------------------|
| 1    | Operations / second (key create / commit)  | 12k+                          | 40k+                    |
| 2    | Key commits / second                       | 5.9k+                         | 20k+ (3.3 times)        |
| 3    | CPU utilization, leader                    | 16% (unable to increase load) | 33%                     |
| 4    | CPU utilization, follower                  | above 6%                      | below 4%                |

Refer to the [performance prototype result](performance-prototype-result.pdf).

# Leader execution

![high-level-flow.png](high-level-flow.png)

```
Client --> OM --> Gatekeeper --> Executor --> Batching (Ratis request) --{Ratis sync to all nodes}--> Apply transaction {DB update}
```


### Gatekeeper
The gatekeeper acts as the entry point for request execution. Its functions are to:
1. orchestrate the execution flow
2. perform granular locking
3. execute the request
4. validate OM state, such as upgrade status
5. update metrics and return the response
6. handle client replay of requests
7. generate the managed index (removing the objectId dependency on the Ratis index)

### Executor
The executor prepares the context for execution, processes the request, communicates the DB changes to all nodes via Ratis, and clears up any cache.

### Batching (Ratis request)
All requests executed in parallel are batched and sent as a single request to the other nodes. Batching improves performance over the network; a minimal sketch of such a batching loop follows.
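
This sketch collects DB changes produced by parallel executors and ships each accumulated batch as one Ratis request; `DbChange` and `RatisClient` are hypothetical placeholders.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Sketch: batch parallel-executed requests into single Ratis requests. */
public class RequestBatcher {
  private final BlockingQueue<DbChange> pending = new LinkedBlockingQueue<>();

  /** Executors submit their recorded DB changes here. */
  public void submit(DbChange change) {
    pending.add(change);
  }

  /** Drain whatever has accumulated and send it as one Ratis request. */
  public void flushLoop(RatisClient ratis) throws InterruptedException {
    while (true) {
      List<DbChange> batch = new ArrayList<>();
      batch.add(pending.take());  // block for at least one change
      pending.drainTo(batch);     // grab everything else already queued
      ratis.send(batch);          // one network round trip per batch
    }
  }

  interface DbChange { }

  interface RatisClient {
    void send(List<DbChange> batch);
  }
}
```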

### Apply transaction (via Ratis on all nodes)
With the new flow:
- during Ratis apply transaction, all nodes only update the DB with the changes.
- there is no double buffer; all changes are flushed to the DB immediately (see the sketch below).
- a few specific actions, such as DB snapshot creation and upgrade handling, are still performed on each node.
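
A sketch of this simplified apply-transaction step using RocksDB's write batch; the edit encoding and class name are illustrative assumptions.

```java
import java.util.List;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

/**
 * Sketch: the replicated payload is just a list of key/value edits,
 * applied atomically in one RocksDB write batch and visible immediately
 * (no double buffer).
 */
public class ApplyTransactionSketch {

  /** Each edit is a {key, value} pair already routed to its table. */
  public void apply(RocksDB db, List<byte[][]> edits) throws RocksDBException {
    try (WriteBatch batch = new WriteBatch();
         WriteOptions options = new WriteOptions()) {
      for (byte[][] kv : edits) {
        batch.put(kv[0], kv[1]);
      }
      // One atomic write; changes are readable as soon as this returns.
      db.write(options, batch);
    }
  }
}
```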

## Description

### Index generation
Refer to [index generation and usages](index-generation-usages.md).

### No cache for write operations

In the old flow, a key creation/update is added to the PartialTableCache, and cleanup happens when the DoubleBuffer flushes the DB changes.
Since DB changes are done in batches, a cache is maintained until the DB flush completes, so that OM can serve further requests in the meantime.

This adds complexity when reading keys, as the read needs to ensure it gets the latest data, whether from the cache or the DB.
Since adding keys to the cache, removing them from the cache, and flushing to the DB can happen in parallel, bugs creep into the code if this is not handled properly.

In the new flow, the partial table cache is removed, and changes are visible as soon as they are flushed to the DB.
To achieve this:
- Granular locking for key operations prevents a parallel update until the existing operation completes. This removes the need for a cache, as data is available only after the changes are persisted.
- The double buffer is removed for the flow; the flush is done immediately, before the response is returned. The cache is no longer needed, since the next request is not served before the current reply completes.
- Bucket resources are handled in such a way that changes are visible only after the DB changes are flushed. This is required because quota is shared between different keys operating in parallel.

Note: for incremental changes, the quota count will be available immediately for reads, for compatibility with the old flow, until all flows are migrated to the new flow.

### Quota handling

Refer to [bucket quota](bucket-reserve-quota.md).

### Granular locking
Gateway: performs locking per the strategy below for OBS/FSO.
On lock success, it triggers execution of the request in the respective executor queue.

#### OBS Locking
Refer to [OBS locking](obs-locking.md).

#### FSO Locking
TODO

Challenges compared to OBS:
1. Implicit directory creation
2. The file ID depends on the parent directory: `/<volId>/<bucketId>/<parent ObjectId>/<file name>`

Due to the hierarchical nature and parallel operations at various levels, FSO locking is more complicated.

#### Legacy Locking
Not in scope.

### Optimized new flow

Currently, a request is handled as:
- Pre-execute: static request validation, authorization
- validateAndUpdateCache: locking, request handling, cache update
- Double buffer: updates the DB from the cache in the background

Request execution template: every request handler needs to follow the template below (see the interface sketch after the list).

- preProcess: basic request validation, parameter updates (e.g. user info), key normalization
- authorize: Ranger or native ACL validation
- lock: granular-level locking
- unlock: unlock the locked keys
- process: process the request, e.g.:
  - validation after locking, such as bucket details
  - retrieve the previous key, create the new key, update quota, and so on
  - record changes for the DB update
  - prepare the response
- Audit and logging
- Metrics update
- Request validator annotation: similar to the existing one; checks compatibility between the Ozone Manager feature version and the client feature version, and updates the request to support compatibility if needed.
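
The following Java interface sketches the template's phase ordering; the type parameters, method signatures, and driver method are illustrative assumptions, not Ozone's actual API.

```java
/** Sketch: the request-execution template as a Java interface. */
public interface RequestTemplate<REQ, RESP> {

  /** Basic validation, parameter updates (user info), key normalization. */
  REQ preProcess(REQ request) throws Exception;

  /** Ranger or native ACL validation. */
  void authorize(REQ request) throws Exception;

  /** Take granular locks; closing the handle releases them (unlock). */
  AutoCloseable lock(REQ request) throws Exception;

  /**
   * Post-lock validation (e.g. bucket details), read/create key state,
   * quota update, record DB changes, prepare the response.
   */
  RESP process(REQ request) throws Exception;

  /** Driver showing the phase ordering, with unlock guaranteed. */
  default RESP execute(REQ request) throws Exception {
    REQ prepared = preProcess(request);
    authorize(prepared);
    try (AutoCloseable locks = lock(prepared)) {
      return process(prepared);
    } // closing locks performs the unlock; audit/metrics hooks wrap this
  }
}
```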

Detailed request processing:
OBS:
- [Create key](request/obs-create-key.md)
- [Commit key](request/obs-commit-key.md)

### Execution persist and distribution

Refer to [request-persist-distribution](request-persist-distribution.md).

### Replay of client request handling

Refer to [request-replay-handling](request-replay.md).

### Testability framework

With the rework of the flow, a testability framework is proposed for better test coverage of request processing.

Complexity in the existing framework for requests:
1. Flow handling differs between 1-node and 3-node HA deployments
2. Checks for the double buffer cache flush
3. Cache-related behaviour testing
4. Ratis sync and failure handling
5. Too much mocking for unit testing

Proposed handling:
Since execution happens on the leader side and only the DB update is synchronized to the other nodes, unit tests will focus on behavior rather than on the environment.

1. Test data preparation
   - Utility to prepare keys, files, volumes, and buckets
   - Insert into the different DB tables
2. Short-circuit for the DB update (without Ratis)
3. Simplified mocking for Ranger authorization (just mock a method)
4. Behavior tests from an end-to-end perspective

This will have the following advantages:
- Faster test scenarios (sync waits and double buffer flush waits are avoided)
- Optimized test cases (duplicate test cases are avoided)
- Lower test code complexity

TODO: with echo, define test utils and sample test cases
- Capture test cases based on behavior.

## Step-by-step integration of existing requests (interoperability)

Leader-side execution changes all flows. The changes need to be made incrementally for better quality and testability.
This needs the following integration points in the current code:
1. Removal of the dependency on the Ratis index (for the old flow as well)
2. Bucket quota handling integration (such that the old flow is not impacted)
3. Granular locking for the old flow; this ensures that `no cache` for the new flow has no impact
4. OmStateMachine integration for the new flow, so that the old and new flows can work together
5. Request segregation for the new flow, added incrementally

With the above, enabling the old or new flow execution will be controlled by a feature flag, to switch between them seamlessly.
The old flow can then be removed once quality, performance, and compatibility are achieved for the new flow execution.

## Impacted areas
1. With leader-side execution, metrics and the information they capture can change.
   - Certain metrics may no longer be valid
   - New metrics need to be added
   - Metrics such as key create will now be updated on the leader side. On follower nodes, only the DB update happens, so the values will not be updated.
# OBS locking

The OBS case involves only volumes, buckets, and keys, so it is simpler in terms of locking.

For example, a key commit operation needs:
1. Volume: `no lock required, similar to existing`
2. Bucket: read lock
3. Key: write lock

There will be:
1. BucketStripLock: locks bucket operations
2. KeyStripLock: locks key operations

**Note**: when locking multiple keys (e.g. deleting multiple keys or a rename operation), locks need to be taken in order, i.e. using the striped-locking order, to avoid deadlock.

Striped locking ordering (a sketch follows):
- A stripe lock is obtained over a hash bucket.
- All keys need to be ordered by hash bucket.
- Locks are then taken in that sequence order.
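
A minimal Java sketch of this ordering using Guava's `Striped` locks, whose `bulkGet` returns the stripes for all keys in a consistent global order; the stripe count and class name are illustrative.

```java
import com.google.common.collect.ImmutableList;
import com.google.common.util.concurrent.Striped;
import java.util.Arrays;
import java.util.concurrent.locks.Lock;

/**
 * Sketch: ordered multi-key locking. Because every caller acquires the
 * stripes in the same order, two concurrent multi-key operations
 * cannot deadlock on each other.
 */
public class KeyStripLock {
  private static final Striped<Lock> KEY_LOCKS = Striped.lock(1024);

  public static void renameKey(String srcKey, String dstKey) {
    // Locks are returned ordered by stripe index, not argument order.
    ImmutableList<Lock> locks =
        ImmutableList.copyOf(KEY_LOCKS.bulkGet(Arrays.asList(srcKey, dstKey)));
    locks.forEach(Lock::lock);
    try {
      // ... perform the rename under both key locks ...
    } finally {
      for (Lock lock : locks.reverse()) {
        lock.unlock();
      }
    }
  }
}
```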

## OBS operations
A bucket read lock is taken by default.

For key operations in OBS buckets, the following concurrency control is proposed:

| API Name                | Locking Key                             | Notes                                                                                                       |
|-------------------------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------|
| CreateKey               | `No Lock`                               | Keys can be created in parallel by clients in the open key table and are all exclusive of each other         |
| CommitKey               | WriteLock: Key Name                     | Only one key can be committed at a time with the same name; without locking, OM can leave dangling blocks    |
| InitiateMultiPartUpload | `No Lock`                               | No lock is required, as the key is created with an upload ID and can proceed in parallel                     |
| CommitMultiPartUpload   | WriteLock: Part Key Name                | Only one part can be committed at a time with the same name; without locking, OM can leave dangling blocks   |
| CompleteMultiPartUpload | WriteLock: Key Name                     | Only one key can be completed at a time with the same name; without locking, OM can leave dangling blocks    |
| AbortMultiPartUpload    | `No Lock`                               | No lock is required for discarding a multipart upload                                                        |
| DeleteKey               | WriteLock: Key Name                     | Only one key can be deleted at a time with the same name; without locking, the write to the DB can fail      |
| RenameKey               | WriteLock: sort(Key Name 1, Key Name 2) | Only one key can be renamed at a time with the same name; without locking, OM can leave dangling blocks      |
| SetAcl                  | WriteLock: Key Name                     | Only one key can be updated at a time with the same name                                                     |
| AddAcl                  | WriteLock: Key Name                     | Only one key can be updated at a time with the same name                                                     |
| RemoveAcl               | WriteLock: Key Name                     | Only one key can be updated at a time with the same name                                                     |
| AllocateBlock           | WriteLock: Key Name                     | Only one key can be updated at a time with the same name                                                     |
| SetTimes                | WriteLock: Key Name                     | Only one key can be updated at a time with the same name                                                     |

Batch operations:
1. DeleteKeys: the batch is divided across multiple threads in the execution pool, each calling DeleteKey in parallel
2. RenameKeys: this is `deprecated`, but for compatibility it is divided across multiple threads in the execution pool, each calling RenameKey in parallel

For batch operations, atomicity is not guaranteed for the above APIs; this matches the behavior from the S3 perspective. A sketch of the fan-out follows.
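
A minimal Java sketch of fanning a DeleteKeys batch out to an execution pool, each key going through the single-key DeleteKey path (with its own key lock); the pool size, class, and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch: per-key fan-out for DeleteKeys; batches are not atomic. */
public class BatchKeyDelete {
  private final ExecutorService pool = Executors.newFixedThreadPool(8);

  /** Per-key results: some keys may succeed while others fail. */
  public List<Future<Boolean>> deleteKeys(List<String> keys) {
    List<Future<Boolean>> results = new ArrayList<>();
    for (String key : keys) {
      results.add(pool.submit(() -> deleteKey(key)));  // single-key path
    }
    return results;
  }

  private boolean deleteKey(String key) {
    // ... take the key write lock, record the DB delete, release ...
    return true;
  }
}
```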