Include healthcheck logic for helper scripts running as sidecars #1842

Draft: wants to merge 15 commits into base: alpha

Conversation

TrevorBenson
Collaborator

Description

Enhances the healthcheck.sh script to work for checking permissions on sidecar containers (helper scripts) via the ENTRYPOINT_PROCESS.

Where should the reviewer start?

  1. Copy the updated healthcheck.sh script into the sidecar container in /home/guild/.scripts/.
  2. Wait for the 5-minute check interval to occur and confirm the sidecar is now shown as healthy.

Testing different CPU Usage values

  • Define a CPU_THRESHOLD environment variable (defaults to 80%) to set the CPU usage percentage above which a container is marked unhealthy.

Testing a different number of retries (internal to the healthcheck.sh script).

  • Define a RETRIES environment variable (defaults to 20) to set the number of retries performed while CPU usage is above the CPU_THRESHOLD value before exiting non-zero.

There is currently a 3-second delay between checks, so 20 retries results in up to 60 seconds before the healthcheck exits as unhealthy due to CPU load.

Testing different healthcheck values (external to the healthcheck.sh script).

The current HEALTHCHECK of the container image is:

  • 5 minutes start period
  • 5 minutes interval
  • 100 seconds timeout
  • 3 retries (default value from being undefined)

Reducing the start period and interval to something more appropriate for the sidecar script will result in a much shorter period to determine the sidecar container's health.

Make sure to keep RETRIES * 3 (seconds) less than the container healthcheck timeout to avoid marking the container unhealthy before the script returns during periods of high CPU load.
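As an illustration of keeping RETRIES * 3 under the container timeout, an operator could set both the script variables and the container healthcheck options at container creation. This is only a sketch: the image name is a placeholder, the ENTRYPOINT_PROCESS value is illustrative, and the numbers are examples rather than recommendations.

    # RETRIES (10) * 3s delay = 30s, comfortably below the 100s --health-timeout
    docker run -d \
      -e ENTRYPOINT_PROCESS="cncli.sh" \
      -e CPU_THRESHOLD=80 \
      -e RETRIES=10 \
      --health-start-period 5m \
      --health-interval 5m \
      --health-timeout 100s \
      --health-retries 3 \
      <sidecar-image>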

Motivation and context

Issue #1841

Which issue does it fix?

Closes #1841

How has this been tested?

  1. docker cp the script into the preview network cncli sync, validate, and leaderlog containers and wait until the interval runs the script
  2. Execute the script with docker exec to confirm it reports healthy
  3. Monitor the containers until the healthcheck interval occurs and confirm that they are marked healthy.

Additional Details

There is a SLEEPING_SCRIPTS array which is used so that validate and leaderlog still check for the cncli binary, but a sleep period for validate and leaderlog is not considered unhealthy. I'm not 100% sure this is the best approach, but with sleep periods being variable I felt it was an acceptable middle ground.

Please do not hesitate to suggest an alternative approach to handling sleeping sidecars' healthchecks if you think you have an improvement.

@adamsthws if you could please copy this into your sidecar containers (and your pool) and report back any results. I am marking this as a draft PR for the time being until testing is completed, after which if things look good I will mark it for review and get feedback from others.

Thanks

@TrevorBenson
Collaborator Author

FWIW, here are my preview network pool and cncli containers showing healthy once the script was copied in and the healthcheck interval was reached:

# podman ps --filter 'name=preview-cncli-[slv]' --filter 'name=preview-ccio-pool' --format '{{ .Names }}\t{{ .Status }}'
preview-ccio-pool	Up 4 weeks (healthy)
preview-cncli-sync	Up 3 days (healthy)
preview-cncli-validate	Up 3 days (healthy)
preview-cncli-leaderlog	Up 3 days (healthy)

@adamsthws
Contributor

adamsthws commented Jan 7, 2025

Looks good!
How I have tested...

cp the script into cncli sync, validate, leaderlog, pt-send-slots, pt-send-tip containers

Execute the script with docker exec.

  • Result: exit 0. 'We're healthy - cncli'

Monitor the containers until the healthcheck interval occurs and confirm that they are marked healthy.

  • Result: Docker ps shows containers are healthy.

Adjusted RETRIES

  • Note
    • Setting RETRIES=0 results in exit 127, 'Max retries reached for cncli'.
    • Setting RETRIES=1 results in exit 0. 'We're healthy - cncli'

Adjusted CPU_THRESHOLD

  • Note
    • The cncli process uses very little CPU, so even setting the threshold as low as 1% I was unable to intentionally get the healthcheck to fail.
# docker ps --filter 'name=cncli' --format '{{ .Names }}\t{{ .Status }}' | column -t
cncli-pt-send-tip    Up  33  minutes  (healthy)
cncli-pt-send-slots  Up  33  minutes  (healthy)
cncli-sync           Up  33  minutes  (healthy)
cncli-validate       Up  33  minutes  (healthy)
cncli-leaderlog      Up  33  minutes  (healthy)

@adamsthws
Contributor

adamsthws commented Jan 7, 2025

Further testing...

I was able to test with higher cpu load after deleting the cncli db and re-syncing.

Result

# ./healthcheck.sh
Checking health for process: cncli 
./healthcheck.sh: line 44: ((: 67.9: syntax error: invalid arithmetic operator (error token is ".9")
We're healthy - cncli
# exit 0

Line 44 of healthcheck.sh:
The (( CPU_USAGE > cpu_threshold )) construct in Bash is used for arithmetic evaluation, but it only supports integer arithmetic. It doesn't handle floating-point numbers.

This seems to fix it...
Line 41 (round float to nearest integer):

CPU_USAGE=$(ps -C "$process_name" -o %cpu= | awk '{s+=$1} END {print int(s + 0.5)}')

With the above change, when cpu load is higher than CPU_THRESHOLD, this is the result:

 # ./healthcheck.sh
Checking health for process: cncli
Warning: High CPU usage detected for 'cncli' (68%)
Max retries reached for cncli
# exit 1
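A further (untested) alternative to the rounding fix above would be to keep the fractional value and let awk do the comparison, since bash's (( )) arithmetic is integer-only. This is only a sketch; the variable names follow the ones mentioned in this thread:

    # Sum %cpu as a float and compare against the threshold in awk
    CPU_USAGE=$(ps -C "$process_name" -o %cpu= | awk '{s+=$1} END {print s}')
    if awk -v usage="$CPU_USAGE" -v limit="$cpu_threshold" 'BEGIN {exit !(usage > limit)}'; then
        echo "Warning: High CPU usage detected for '$process_name' (${CPU_USAGE}%)"
    fi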

@TrevorBenson
Collaborator Author

TrevorBenson commented Jan 8, 2025

The cncli process uses very little CPU, so even setting the threshold as low as 1% I was unable to intentionally get the healthcheck to fail.

Yeah, there are rare instances where cncli percentage can be high, but this tends to be when resyncing the entire db and/or a cncli init is running. Occasionally if there is an issue with node process itself, like if it gets stuck chainsync/blockfetch and never completes, I have also seen cncli get a high percentage, but otherwise its quite rare to see it increase.

I figured with mithril-signer or db-sync, it might be more useful.

@TrevorBenson
Collaborator Author

@adamsthws Feel free to submit suggestions to adjust the SCRIPT_TO_BINARY_MAP entries. Otherwise this week when I look at some other issues I'll go through each helper script and update the map and set this PR to ready to review.

Thanks for the testing.

@adamsthws
Contributor

adamsthws commented Jan 8, 2025

Testing revealed that setting RETRIES=0 results in script exit 1 without running the loop... it would be preferable to run the loop once when RETRIES=0.

Suggestion - Modify the loop condition to handle RETRIES=0 by changing line 39 to the following:

    for (( CHECK=0; CHECK<=RETRIES; CHECK++ )); do
    
    # 'RETRIES=3' results in the loop running a total of 4 times
    # 'RETRIES=0' results in the loop running a total of 1 time

Or...

    for (( CHECK=1; CHECK<=RETRIES || (RETRIES==0 && CHECK==1); CHECK++ )); do
    
    # 'RETRIES=3' results in the loop running a total of 3 times
    # 'RETRIES=0' results in the loop running a total of 1 time

@adamsthws
Contributor

I started thinking about a cncli-specific check. The following function is an idea to check cncli status...

# Function to check cncli status
check_cncli_status() {
    CNCLI=$(which cncli)

    for (( CHECK=1; CHECK<=RETRIES || (RETRIES==0 && CHECK==1); CHECK++ )); do
        CNCLI_OUTPUT=$($CNCLI status \
            --byron-genesis "/opt/cardano/cnode/files/byron-genesis.json" \
            --shelley-genesis "/opt/cardano/cnode/files/shelley-genesis.json" \
            --db "/opt/cardano/cnode/guild-db/cncli/cncli.db")

        CNCLI_STATUS=$(echo "$CNCLI_OUTPUT" | jq -r '.status')
        ERROR_MESSAGE=$(echo "$CNCLI_OUTPUT" | jq -r '.errorMessage')

        if [ "$CNCLI_STATUS" == "ok" ]; then
            echo "We're healthy - cncli status is ok and synced."
            return 0  # Return 0 if the status is ok
        elif [ "$CNCLI_STATUS" == "error" ]; then
            if [ "$ERROR_MESSAGE" == "db not fully synced!" ]; then
                echo "cncli's sqlite database is not fully synced. Attempt $CHECK. Retrying in 3 minutes."
                sleep 180  # Wait 3 minutes then retry to allow time for the database to sync
            elif [ "$ERROR_MESSAGE" == "database not found!" ]; then
                echo "cncli's sqlite database not found. Attempt $CHECK. Retrying in 3 minutes."
                sleep 180  # Wait 3 minutes then retry to allow time for the database to be created
            else
                echo "Error - cncli status: $ERROR_MESSAGE. Attempt $CHECK. Retrying in 3 seconds."
                sleep 3  # Wait 3 seconds then retry for other errors
            fi
        else # If status is not "ok" or "error"
            echo "cncli status: $CNCLI_STATUS. Attempt $CHECK. Retrying in 3 seconds."
            sleep 3  # Wait 3 seconds then retry
        fi
    done

    echo "cncli status check failed after $RETRIES attempts."
    return 1  # Return 1 if retries are exhausted
}

Perhaps it would be improved further by also checking if sync is incrementing, so the healthcheck doesn't fail during initial sync.

How would you feel about adding me as a commit co-author if you decide to use this?

@TrevorBenson
Collaborator Author

TrevorBenson commented Jan 11, 2025

@adamsthws I'm happy to make you a co-author even for something simple. For example, if you know how to submit a suggestion, go ahead and apply one for `for (( CHECK=0; CHECK<=RETRIES; CHECK++ )); do` to modify the PR and I'll merge it as a commit.

@TrevorBenson
Collaborator Author

TrevorBenson commented Jan 11, 2025

In regards to the larger block for cncli checks, first it is clear lots of thought went into it.

This portion:

            if [ "$ERROR_MESSAGE" == "db not fully synced!" ]; then
                echo "cncli's sqlite database is not fully synced. Attempt $CHECK. Retrying in 3 minutes."
                sleep 180  # Wait 3 minutes then retry to allow time for the database to sync
            elif [ "$ERROR_MESSAGE" == "database not found!" ]; then
                echo "cncli's sqlite database not found. Attempt $CHECK. Retrying in 3 minutes."
                sleep 180  # Wait 3 minutes then retry to allow time for the database to be created

Sleeps of 180 exceed the current timeout period of 100. Options:

  • Additional documentation. My gut feeling is it would lead to additional support requests from operators who don't read the docs, and it also makes the monolithic container slightly more complex.
  • Increasing the containers timeout. Potential to reduce observability for node, or other processes.

With container settings of 3 retries and a 5-minute interval with a 100-second timeout, it is 15 minutes from the last healthy response, or 10 minutes from the first unhealthy response, before the container exhausts retries and is marked unhealthy. I think this covers the two 180-second sleeps, even if the operator reduces the interval and timeouts when not running the node.
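For reference, the arithmetic behind those numbers with the default settings:

    # interval 5m, --health-retries 3 (default):
    #   last healthy check ... T - 5m
    #   1st failing check .... T
    #   2nd failing check .... T + 5m
    #   3rd failing check .... T + 10m  -> container marked unhealthy
    # i.e. 15 minutes after the last healthy response,
    # or 10 minutes after the first unhealthy response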

Separately, conversations outside of this PR and thread have pointed to some of the logic used in KOIOS for db-sync, which could also be used for checking the sqlite DB for cncli.

#!/bin/bash

export PGPASSWORD=${POSTGRES_PASSWORD}
[[ $(( $(date +%s) - $(date --date="$(psql -U ${POSTGRES_USER} -d ${POSTGRES_DB} -qt -c 'select time from block order by id desc limit 1;')" +%s) )) -lt 3600 ]] || exit 1
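For readability, the same drift check expanded into separate steps (a sketch only; it assumes the PGPASSWORD/POSTGRES_* variables exported above and keeps the 3600-second threshold from the snippet):

    # Age of the most recent block recorded by db-sync, in seconds
    latest_block_time=$(psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -qt -c 'select time from block order by id desc limit 1;')
    drift=$(( $(date +%s) - $(date --date="${latest_block_time}" +%s) ))
    if (( drift >= 3600 )); then
      echo "Unhealthy: latest db-sync block is ${drift}s old"
      exit 1
    fi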

I haven't examined what the common drift might be for a db-sync instance from the last block produced and for cncli I suspect we could make it shorter than 1 hour.


These are just my thoughts. If you think that I overlooked some aspect please don't hesitate to continue the discussion.

@adamsthws
Contributor

adamsthws commented Jan 12, 2025

I haven't examined what the common drift might be for a db-sync instance from the last block produced and for cncli I suspect we could make it shorter than 1 hour.

Thank you for the feedback.

It seems the limitations of using 'cncli status' in the healthcheck are twofold...

1 -
'cncli status' can't be used to determine if sync is progressing... Rather, it only indicates that it is either synced or not-synced. The long timeout allows time for sync to complete but I see that has introduced observability shortcomings.

Instead, checking for cncli sync progress via sqlite db would seem to make more sense.

2 -
Not all cncli containers need sqlite. (E.g. send-tip).
In the cncli send-tip container (with no sqlite db), 'cncli status' always returns error, as it will never reach sync.
(The tip is successfully being sent to Pooltool without the need of sqlite).

{
  "status": "error",
  "errorMessage": "database not found!"
}

I think it's reasonable to say 'cncli status' is not a suitable way to check cncli container health.

@adamsthws
Contributor

adamsthws commented Jan 12, 2025

Here's a suggestion to check the health of cncli ptsendtip, I'd love your feedback...

# Function to check if the tip is successfully being sent to Pooltool
check_cncli_send_tip() {
    for (( CHECK=0; CHECK<=RETRIES; CHECK++ )); do

        # Define the retry wait time in seconds
        retry_wait=3
        error_suffix="Attempt $((CHECK + 1)). Retrying in $retry_wait seconds."

        # Get the process ID of cncli
        process_id=$(pgrep -o cncli) || {
            echo "Error: cncli process not found. $error_suffix"
            sleep $retry_wait  # Wait n seconds then retry
            continue # Retry if the process is not found
            }

        # Capture the next output from cncli that is related to Pooltool
        pt_log_entry=$(timeout 30 cat /proc/$process_id/fd/1 | grep -i --line-buffered "pooltool" | head -n 1) || {
            echo "Failed to capture output from cncli stdout. $error_suffix"
            sleep $retry_wait  # Wait n seconds then retry
            continue # Retry if the output capture fails
            }

        # Define the success message to check for
        success_message='.*"success":true.*'

        # Check if the success message exists in the captured log
        if echo "$pt_log_entry" | grep -q "$success_message"; then
            echo "Healthy: Tip is being sent to Pooltool."
            return 0  # Return 0 if the success message is found
        else # If the success message is not found
            echo "Success message not found in log output. $error_suffix"
            sleep $retry_wait  # Wait n seconds then retry
        fi
    done

    echo "Error: Max retries reached - Tip is not being sent to Pooltool."
    return 1  # Return 1 if retries are exhausted
}


@TrevorBenson
Collaborator Author

I think it's reasonable to say 'cncli status' is not a suitable way to check cncli container health.

Yep, this is where I was originally taking "the easy way out" with the healthcheck: checking whether the binary was running or not, and/or sleeping.

@TrevorBenson
Collaborator Author

TrevorBenson commented Jan 13, 2025

Here's a suggestion to check the health of cncli ptsendtip, I'd love your feedback...

General Thoughts

First, this part is more about the existing healthcheck than your code contributions. The history is that the node ping test was removed and the query tip check was implemented. Unfortunately, comparing tips takes at least 20 seconds for the next slot to see it progress, and in some cases much longer:
image

I added KOIOS later to try and reduce the 60-second sleep in the hope of improving observability: if we're matching KOIOS we don't even look for progression, we're in sync. But in case we weren't, we retry and check for being in sync; with 3-second retries it should usually take very few retries to match KOIOS.

As marking the container unhealthy requires exhausting the container's --health-retries (a single unhealthy response is not enough on its own), my preference would be to aim for a "fail early" methodology on any other checks.

Now, this is just my viewpoint on providing the fastest feedback loop and observability. It's always open for discussion. My viewpoint isn't "the law" when it comes to how it will work, especially as we get more contributors that actually use the container and its specific tools. Feel free to push back on my views at any time.


  1. If retry_wait is a variable (which I like) we should probably make it a global/constant so it can be adjusted by the user for any check function that does do retries.

    • While variable name collisions are probably unlikely I'm thinking of prefixing healthcheck variables just in case. Maybe HEALTHCHECK_, so like:
      • HEALTHCHECK_CPU_THRESHOLD
      • HEALTHCHECK_RETRIES
      • HEALTHCHECK_RETRY_WAIT
    • This provides the operator full control. For example, using HEALTHCHECK_RETRIES=0 for a single test and tuning everything else via the container's --health-interval and --health-retries, or vice versa.
  2. For process_id=$(pgrep -o cncli) do you find that cncli sendtip is not running?

    • Did you set --health-start-period 0 and observe a race where the healthcheck happened first and failed? I'd prefer removing the retry in the function and just return 1 for unhealthy with a message to fail fast for observability and let the operator choose retries at the container level as often as possible.
  3. Given the above example of 80 seconds or more between blocks I suspect that the timeout 30 may in some attempts result in not catching the entry at all. So this might force us to include some retries, otherwise the --health-retries could miss 3 passes and mark it unhealthy even when it's working.

  4. Have you tested and confirmed its working as intended?

    • I tried cat on the cncli sendtip file descriptor 1 while watching log entries. While I saw logs from the container showing a send operation, I never saw it from the file descriptor. Are you seeing something different in your tests?

I can add some logic that determines which subcommand cncli.sh was provided. Something like: cncli_subcmd=$(ps -p 1 -o cmd= | awk '{print $NF}') should allow us to separate sendtip from things where we would use sqlite.
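A rough sketch of what that dispatch might look like (check_cncli_db is a placeholder name, not an existing function):

    # Detect which subcommand cncli.sh (PID 1) was started with
    cncli_subcmd=$(ps -p 1 -o cmd= | awk '{print $NF}')
    case "${cncli_subcmd}" in
      ptsendtip) check_cncli_send_tip ;;          # stdout/log based check
      sync|validate|leaderlog) check_cncli_db ;;  # placeholder: sqlite-backed check
      *) check_process ;;                         # fall back to a simple process check
    esac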

I've got a commit I'll push tomorrow which includes db-sync and the cncli checks based on the db. Then figure out how to move forward on sendtip.

Thanks for the work on this!

@TrevorBenson
Collaborator Author

@adamsthws OK commit pushed, sorry for the delay. Spent a little time considering if healthcheck.sh could remain in /home/guild/.scripts/ and properly source env. There is a placeholder for your check to go in once the function is added. Feel free to add a suggestion for it, or open a PR against the source branch of this PR and I'll merge that, whatever is your preference.

Thanks.

@adamsthws
Contributor

  1. For process_id=$(pgrep -o cncli) do you find that cncli sendtip is not running?
  • Did you set --health-start-period 0 and observe a race where the healthcheck happened first and failed? I'd prefer removing the retry in the function and just return 1 for unhealthy with a message to fail fast for observability and let the operator choose retries at the container level as often as possible.

In my limited testing I have never found cncli sendtip to not be running, or a race when --health-start-period 0... I shall remove the retry.


  1. Given the above example of 80 seconds or more between blocks I suspect that the timeout 30 may in some attempts result in not catching the entry at all. So this might force us to include some retries, otherwise the --health-retries could miss 3 passes and mark it unhealthy even when its working.

If nothing is ever written to the file descriptor...

  • With a long timeout, it will fail slowly.
  • With a short timeout plus retries, it will also fail slowly

Do you recommend...

  • Lower timeout, with retries within the function that can echo information to the user on each retry.
  • Higher timeout to allow time for the next tip, and remove retries within the function?
  • Something else?
  1. Have you tested and confirmed its working as intended?

I tried cat on the cncli sendtip file descriptor 1 while watching log entries. While I saw logs from the container showing a send operation, I never saw from the file descriptor. Are you seeing something different in your tests?

Yes, I have tested... Might the discrepancy between your results and mine be that:

  • You're looking at the PID of the entrypoint script: pid=$(pgrep -f "${ENTRYPOINT_PROCESS}")
  • Whereas I'm looking at PID of cncli: pid=$(pgrep -o cncli)

I've got a commit I'll push tomorrow which includes db-sync and the cncli checks based on the db. Then figure out how to move forward on sendtip.

Great! I'll try to review quickly but may be a day or two. Thanks!

Contributor

@adamsthws adamsthws left a comment


I haven't had a chance to do any testing but here is a quick initial look over. Thanks!

@TrevorBenson
Collaborator Author

If nothing is ever written to the file descriptor...

* With a long timeout, it will fail slowly.

* With a short timeout plus retries, it will also fail slowly

Do you recommend...

* Lower timeout, with retries within the function that can echo information to the user on each retry.

* Higher timeout to allow time for the next tip, and remove retries within the function?

* Something else?

I suppose we could align with the original query tip (non KOIOS) and go for 60 seconds. If it fails on the first pass, then there are 3 retries before the container is marked unhealthy anyway. Thoughts?


Also I realize in my prior explanation I clearly had the timing off. I should have said 20 total and 15 from the first failure, since it would be defaulting to 3 retries with the 5 minute intervals, not 3 total attempts. At least when operators do not tune the interval, retries, etc. at container creation.

@TrevorBenson
Collaborator Author

Yes, I have tested... Might the discrepancy between your results and mine be that:

* You're looking at the PID of the entrypoint script: `pid=$(pgrep -f "${ENTRYPOINT_PROCESS}")`

Nope. This was a manual test inside my cncli sync container. So I manually checked the PID and used tmux, not the script at all. Because I wasn't using a valid API key I checked both fd 1 and 2 (and 0).

image

* Whereas I'm looking at PID of cncli: `pid=$(pgrep -o cncli)`

I'll chalk it up to some oddity with the manual test and using an XXXX value for the API ID. As long as you are seeing the output and its working in your test that is good enough for me.

I need to create a new account at PT to replace my old one based on email address anyway. I'll try to retest once I have a valid setup and start using ptsendtip as a container again.

Co-authored-by: Adam Matthews <[email protected]>
@adamsthws
Contributor

Nope. This was a manual test inside my cncli sync container. So I manually checked the PID and used tmux, not the script at all. Because I wasn't using a valid API key I checked both fd 1 and 2 (and 0).

I'll chalk it up to some oddity with the manual test and using an XXXX value for the API ID. As long as you are seeing the output and its working in your test that is good enough for me.

I need to create a new account at PT to replace my old one based on email address anyway. I'll try to retest once I have a valid setup and start using ptsendtip as a container again.

Here's the result I get when I exec into the container to test manually (with an invalid PT API Key):
Screenshot 2025-01-14 173152

@adamsthws
Contributor

adamsthws commented Jan 14, 2025

I suppose we could align with the original query tip (non KOIOS) and go for 60 seconds. If it fails on the first pass, then there are 3 retries before the container is marked unhealthy anyway. Thoughts?

Sounds like a reasonable approach, thanks!

Contributor

@adamsthws adamsthws left a comment


Added the function to check if the tip is successfully being sent to Pooltool

# Check if the DB is in sync
[[ -z "${PGPASSFILE}" ]] && PGPASSFILE="${CNODE_HOME}/priv/.pgpass"
if [[ ! -f "${PGPASSFILE}" ]]; then
echo "ERROR: The PGPASSFILE (${PGPASSFILE}) not found, please ensure you've followed the instructions on guild-operators website!" && exit 1
Contributor


For docker/podman, there aren't currently instructions - we possibly need to attach a sample reference compose file, wherein PGPASSFILE can be added (or instructions added for it to be part of the priv folder)

Collaborator Author


Is the current Guild Ops DBSync documentation which mentions this considered insufficient in this case? It documents it at the top of the page, and lower down shows examples of using the priv folder as the path.

image

Also, this healthcheck won't impose any new requirements for using dbsync.sh in a container that don't already exist using it on bare metal.

  • Line 120 comes from line 42 of set_defaults() function:

    set_defaults() {
      if [[ -z "${DBSYNCBIN}" ]]; then
        [[ -f "${HOME}/.local/bin/cardano-db-sync" ]] && DBSYNCBIN="${HOME}/.local/bin/cardano-db-sync" || DBSYNCBIN="$(command -v cardano-db-sync)"
      fi
      [[ -z "${PGPASSFILE}" ]] && PGPASSFILE="${CNODE_HOME}/priv/.pgpass"
      [[ -z "${DBSYNC_CONFIG}" ]] && DBSYNC_CONFIG="${CNODE_HOME}/files/dbsync.json"
      [[ -z "${DBSYNC_SCHEMA_DIR}" ]] && DBSYNC_SCHEMA_DIR="${CNODE_HOME}/guild-db/schema"
      [[ -z "${DBSYNC_STATE_DIR}" ]] && DBSYNC_STATE_DIR="${CNODE_HOME}/guild-db/ledger-state"
      [[ -z "${SYSTEMD_PGNAME}" ]] && SYSTEMD_PGNAME="postgresql"
    }

  • Lines 121-122 come from lines 52-53 of check_defaults() function:

    check_defaults() {
      if [[ -z "${DBSYNCBIN}" ]]; then
        echo "ERROR: DBSYNCBIN variable is not set, please set full path to cardano-db-sync binary!" && exit 1
      elif [[ ! -f "${PGPASSFILE}" ]]; then
        echo "ERROR: The PGPASSFILE (${PGPASSFILE}) not found, please ensure you've followed the instructions on guild-operators website!" && exit 1
        exit 1
      elif [[ ! -f "${DBSYNC_CONFIG}" ]]; then
        echo "ERROR: Could not find the dbsync config file: ${DBSYNC_CONFIG} . Please ensure you've run guild-deploy.sh and/or edit the DBSYNC_CONFIG variable if using a custom file." && exit 1
      elif [[ ! -d "${DBSYNC_SCHEMA_DIR}" ]]; then
        echo "ERROR: The schema directory (${DBSYNC_SCHEMA_DIR}) does not exist. Please ensure you've follow the instructions on guild-operators website" && exit 1
      fi
    }

A compose file example is definitely an option. However, I find it can also be a crutch by allowing users to skip reading documentation. If you consider it a "must have" then I think it should be part of another PR and have its own separate testing and review.

Contributor

@rdlrt rdlrt Jan 15, 2025


Is the current Guild Ops DBSync documentation which mentions this considered insufficient in this case? It documents it at the top of the page, and lower down shows examples of using the priv folder as the path.

I look at (at least a big % of) docker users as spoon-fed users - not intended in any demeaning way, but the primary requirement for them using docker [besides virtual env isolation] would be simplicity. As we're not talking about slightly more experienced/advanced users already taking the next step and deploying in professional environments, their expectation will be not having to go through documentation pages beyond the docker reference ones.

If someone running a manual setup is moving to docker, I don't expect them to have as many queries. It does kinda beg the question of whether docker users would even read the pre-reqs on the home page - in my experience of reading github issues on IO/intersectmbo, a very large section does not.

If you consider it a "must have" then I think it should be part of another PR and have its own separate testing and review.

Ye - I agree. That - if added - does not need to be part of this PR indeed

Contributor

@rdlrt rdlrt Jan 15, 2025


Talking about dbsync in particular, I would think this would likely be a separate docker image altogether (which is why I mentioned earlier there is some work to be done):

  • Would lighten size (by removing binaries) from current cardano-node image [and vice versa]
  • Add more steps in build, like having schema directory specific to dbsync version, ledger-state directory
  • Above mentioned PGPASSFILE will be part of it's package
  • A dedicated page for operations for dbsync docker image.
  • Deploy pg_bech32 as part of image
  • Have better provision for snapshot restoration
  • Have script modifications to allow for any type of snapshot restore to compare config options (as users may otherwise end up with a lot of wasted hours to find out incompatibility)

In theory, the same can be done for smaller components, but there is little value as their setups are mostly the same. For instance, for something like ogmios - we don't need a custom image, the official image is perfectly usable.

Once the above is available, a [docker|podman]-compose will essentially just give a reference on sharing volumes between the pods|containers

Collaborator Author

@TrevorBenson TrevorBenson Jan 15, 2025


Ye - I agree. That - if added - does not need to be part of this PR indeed

Sounds good, after this, and some other documentation issues/PR are resolved I'll open one and start a compose example.

Talking about dbsync in particular, I would think this would likely be a seperate docker image altogether [...]

Nice. So far I've been running the upstream db-sync image, as well as ogmios, and guild ops for node, cncli, and mithril. I'm hereby offering to assist with a new container if any help is needed.

In theory , same can be done for smaller components, but there is little value as their setups are mostly the same. For instance, for something like ogmios - we dont need a custom image, official image is perfectly usable.

Yep. The socket, configs (files) and just about every other subdir of CNODE_HOME lend themselves to volume usage. Not scripts though. The excessive work trying to make scripts a shared volume wasn't worth the time I initially spent to get it working. I've found a container environment file with all vars + a completely stock scripts directory and env file inside the container to be the simplest way to run separate containers per process with shared vars.

Comment on lines 90 to 96
# Capture the next output from cncli that is related to Pooltool
pt_log_entry=$(timeout $log_entry_timeout cat /proc/$process_id/fd/1 | grep -i --line-buffered "pooltool" | head -n 1)
if [ -z "$pt_log_entry" ]; then
  echo "Unable to capture cncli output within $log_entry_timeout seconds. $error_message_suffix"
  sleep $HEALTHCHECK_RETRY_WAIT # Wait n seconds then retry
  continue # Retry if the output capture fails
fi
Collaborator Author


This is the part I was mentioning earlier about "fail early". The reason is that we have compounded retries occurring, which reduces observability. I want to apologize (profusely) in advance, this comment is going to be pretty verbose to break down what I mean by compounding retries and reduced observability.


First, I can definitely see where my earlier statements about using the query tip 60 seconds and relying on the 3 "retries" were confusing. What I was referring to is this portion of the current healthcheck node query tip:

if [[ "${ENABLE_KOIOS}" == "N" ]] || [[ -z "${KOIOS_API}" ]]; then
  # when KOIOS is not enabled or KOIOS_API is unset, use default behavior
  sleep 60;
  SECOND=$($CCLI query tip --testnet-magic ${NWMAGIC} | jq .block)
  if [[ "$FIRST" -ge "$SECOND" ]]; then
    echo "there is a problem"
    exit 1
  else
    echo "we're healthy - node: $FIRST -> node: $SECOND"
  fi

It sleeps 60 seconds between the first block and the second, and has zero retries. It only relies on the --healthcheck-retries of the container configuration. This results in a minimum of 15 minutes and a maximum of 24 minutes before the container actually shows an unhealthy status. It does not compound retries, which would make the status change from healthy -> unhealthy take even longer.

I'll clarify what I mean by compounding retries:

  1. The container's own HEALTHCHECK defined in the Dockerfile automatically sets the container to use --health-start-period 5m --healthcheck-interval 5m; without being defined it also gets --healthcheck-retries 3 from the default number of retries. Here is the container inspect output with time in nanoseconds.

    podman container inspect preview-ccio-pool | jq '.[].Config.Healthcheck'
    {
      "Test": [
        "CMD-SHELL",
        "/home/guild/.scripts/healthcheck.sh"
      ],
      "StartPeriod": 300000000000,
      "Interval": 300000000000,
      "Timeout": 100000000000
    }
    
  2. The check for ptsendtip does a loop with HEALTHCHECK_RETRIES (default 20) and log_entry_timeout (set to 60); if it fails we wait HEALTHCHECK_RETRY_WAIT (default 3) and start again.

    • The check for ptsendtip has the potential to run for 21 minutes of runtime (60 seconds per loop + 3 second wait, times 20).
    • The container healthcheck timeout is set to 100 seconds. Exceeding the timeout causes an unhealthy response.
    • The status changing from healthy to unhealthy requires exhausting the container's retry value, multiplied by the interval value.

That is a minimum of 15 and maximum of almost 27 minutes (including runtime).

I see multiple options as a solution:

  1. Drop the retry logic from check_cncli_send_tip(). Let it run once for 60 seconds and exit 1 if it doesn't see a block. This makes it similar to the query tip in the timeout and all other checks in regards to no retries (except the KOIOS version of check_node).

  2. Keep the retry logic in check_cncli_send_tip() and set the HEALTHCHECK_RETRIES to default to 0.

  3. Add retry logic to every check function in the healthcheck script and set HEALTHCHECK_RETRIES to default to 0.

Each option results in a 15-21 minute window before knowing the container is unhealthy when the container configuration for healthcheck intervals & retries and all script variables are left at current/default values.

I'm torn between 3 and 1 as my favorite option. Improving observability even further requires either the operator to adjust the startup, interval, timeout and retries when creating the container OR for us to come to a consensus about adjusting the Dockerfile HEALTHCHECK statement when building the container image.

Contributor

@adamsthws adamsthws Jan 15, 2025


Great, thank you for the thorough explanation.

Drop the retry logic from check_cncli_send_tip()

My gut feeling is that this would be the preferred approach, minimising the code to add/maintain.

Let it run once for 60 seconds

To test how often it times out, I'm in the process of running the function 100 times with varying timeout values...

  • timeout=60 - Result: x% timeout frequency (x-fail/x-success).
  • timeout=90 - Result: x% timeout frequency (x-fail/x-success).
  • timeout=120 - Result: x% timeout frequency (x-fail/x-success).
    (Results TBD)

After removing retries from the function, if it times out, it then must wait another 5 minutes (--healthcheck-interval 5m) before retrying... I propose to change the timeout within the function to something larger, maybe 90s-120s (depending on the results of my testing).

E.g. timeout=90 (rather than timeout=60) increases the container time to failure by 30s while decreasing the container time to success by a relatively larger 4.5 min (by avoiding unnecessary timeouts and retries).

What would be the maximum time you would expect between tips?

Collaborator Author


What would be the maximum time you would expect between tips?

Anecdotally I have seen Tip (diff) reach 60 seconds quite often. Occasionally a bit higher at 100-120 seconds. The KOIOS addition was because, with the original query tip, I had twice over a few months hit 60 seconds on repeated retries and marked it unhealthy causing some investigations. After adding KOIOS, or even disabling KOIOS and using the original query tip, I don't remember seeing unhealthy on the node in the last year.


In rare, extreme events, it has exceeded 5 minutes. Node 1.35, or 1.34, had one event long ago that was exceeding 5 minutes. Not sure if this linked issue/comment showing an 8 minute delay between blocks is the one, or something else, as I didn't read the whole thread. The time frame seems like it could be the one I'm thinking of, and I don't recall at the moment another lasting longer than 5.

IntersectMBO/cardano-node#4421 (comment)

Again, these are rare "extreme" events, and the healthcheck in theory wouldn't even point this out with the current configuration anyway when it lasts less than the --healthcheck-retries multiplied by the interval. We should in theory only see this in grafana/prometheus, and the container runtimes would still be showing a healthy container even with timeout=60.

Contributor


To test how often it times out, I'm in the process of running the function 100 times with varying timeout values...

Here are the results:
timeout=60 - Result: 9% timeout frequency (9-fail/91-success).
timeout=90 - Result: 4% timeout frequency (4-fail/96-success).
timeout=120 - Result: 2% timeout frequency (2-fail/98-success).

I had twice over a few months hit 60 seconds on repeated retries and marked it unhealthy causing some investigations

In this limited testing sample, it has already failed successively (twice in a row) with the lower timeout of 60s... So I feel using a higher timeout would mean less chance of failing three times in a row and causing unnecessary investigations.

...How would you feel about making timeout=120?

Collaborator Author

@TrevorBenson TrevorBenson Jan 17, 2025


Do you happen to have details about consecutive failure/timeouts? For example based on the 300 second interval if it's never observed that we have 1 consecutive failures over 100 attempts, then it still results in 0 status changes to unhealthy.

To bump the timeout even higher I'd probably want to see 2, possibly 3, consecutive timeouts reached based on X interval being tested. If it reaches 50-66% of the required consecutive failures to cause the node to be marked unhealthy then I would definitely consider bumping it up.

However, if the results were similar to KOIOS where there was never more than 1 failure and 0 consecutive failures I would prefer leaving the internal timeout for the check at 60. Raising it to 120 would also require an increase of the Dockerfile HEALTHCHECK settings timeout (aka --healthcheck-timeout) value currently at 100.

If you don't have the details on how many consecutive failures were observed this gist contains the POC I did to gather the details for KOIOS. I won't get time to set up a ptsendtip until sometime this weekend, which would push my ability to run this test out to Sunday or sometime next week.

Contributor


Do you happen to have details about consecutive failure/timeouts? For example based on the 300 second interval if it's never observed that we have 1 consecutive failures over 100 attempts, then it still results in 0 status changes to unhealthy.

These tests had no interval... I didn't feel adding an interval would affect the results in any meaningful way.

  • timeout=60 - Result: 9% timeout frequency (9-fail/91-success).

    • failed consecutively 1 time. (interval=0)
  • timeout=90 - Result: 4% timeout frequency (4-fail/96-success).

    • failed consecutively 0 times. (interval=0)
  • timeout=120 - Result: 2% timeout frequency (2-fail/98-success).

    • failed consecutively 0 times. (interval=0)

To bump the timeout even higher I'd probably want to see 2, possibly 3, consecutive timeouts reached based on X interval being tested. If it reaches 50-66% of the required consecutive failures to cause the node to be marked unhealthy then I would definitely consider bumping it up.

I'm in the process of re-running it with timeout=60, interval=300s... I'll keep you posted.

Raising it to 120 would also require an increase of the Dockerfile HEALTHCHECK settings timeout (aka --healthcheck-timeout) value currently at 100.

Oh yes, thanks for pointing that out. Let's see how the re-test goes.

If you don't have the details on how many consecutive failures were observed this gist contains the POC I did to gather the details for KOIOS.

This looks great. Nice use of sqlite!
Here's my test, it's not as elegant but it gets the job done:
https://gist.github.com/adamsthws/dce0263bea2302047660b9c8ea458cbd

Removed retry logic from function: check_cncli_send_tip()
@TrevorBenson
Collaborator Author

TrevorBenson commented Jan 15, 2025

Separately, I've been considering for a while dropping retries (now HEALTHCHECK_RETRIES) from 20 to 0-3. I originally set it to 20 retries and the sleep (now HEALTHCHECK_RETRY_WAIT) to 3 seconds so that the KOIOS method would at max run as long as the 60 seconds of a query tip without KOIOS. This was never really required, but it's been there ever since.

I tested last night manually in a loop of 10 with a 20-second sleep; 10/10 returned in under 1 second, never needing a retry. 10/10 also incremented at least 1 block.

I didn't bother to do multiple passes and calculate the ratio. However given the retries at the container level, I am pretty confident we could drop this from 20 and improve the observability (slightly) by failing earlier.

I'm interested in your feedback. What do you think about changing the default values of HEALTHCHECK_RETRIES so even KOIOS will fail earlier?

@adamsthws
Contributor

adamsthws commented Jan 15, 2025

I tested last night manually in a loop of 10 with a 20-second sleep; 10/10 returned in under 1 second, never needing a retry. 10/10 also incremented at least 1 block.

Given the retries at the container level, I am pretty confident we could drop this from 20 and improve the observability (slightly) by failing earlier.

I'm interested in your feedback. What do you think about changing the default values of HEALTHCHECK_RETRIES so even KOIOS will fail earlier?

If it rarely (or never) fails and needs a retry, I would be in favor of removing the Koios retry loop from check_node() entirely. Simplifying the code, making it easier to maintain and understand. The only healthcheck function then that would have a retry would be check_process()

In which case, there might also be an argument to remove variables: $HEALTHCHECK_RETRY_WAIT, $HEALTHCHECK_RETRIES in favor of:

  • Hardcoded Sensible Retry Defaults (Within check_process())
    • Preventing Misconfiguration
    • Simplicity and Ease of Use
    • Reducing User Cognitive Overload
    • Reduced Support Overhead

@TrevorBenson
Collaborator Author

If it rarely (or never) fails and needs a retry, I would be in favor of removing the Koios retry loop from check_node() entirely. Simplifying the code, making it easier to maintain and understand. The only healthcheck function then that would have a retry would be check_process()

Well, not "never". There are some occasions where the node is ahead, like when a block is produced locally and the healthcheck is almost at the exact moment. Same in reverse, KOIOS being ahead, but rarely for more than 3 seconds. Since you're open to even removing the retries altogether I'll create a POC and give it a full 100 runs using a current interval of 5 minutes.

I'll track the total times:

  • That the node is ahead
  • That koios is ahead
  • That they are in sync
  • The number of consecutive out-of-sync responses that occur.

I'll also try 4, 3 and 2 minute intervals. We could potentially improve the observability, reducing the time something is wrong before the status changes to unhealthy. As long as multiple systems are not sharing a single outbound IP for KOIOS queries I don't believe this should push the limits of the free tier.

@adamsthws
Contributor

adamsthws commented Jan 16, 2025

There are some occasions where the node is ahead, like when a block is produced locally and the healthcheck is almost at the exact moment. Same in reverse, KOIOS being ahead, but rarely for more than 3 seconds.

I wonder if including an allowable drift for the koios check, (similar to check_db_sync()) would make sense in place of retries? ... But perhaps we're just swapping one type of complexity for another with that line of thought.

@TrevorBenson
Collaborator Author

I wonder if including an allowable drift for the koios check, (similar to check_db_sync()) would make sense in place of retries? ... But perhaps we're just swapping one type of complexity for another with that line of thought.

I like the thought, but you might be correct that we just swap complexity.

For a single-line "healthy" check, we get something like this for a drift of 1:

if [[ "$FIRST" -eq "$SECOND" || "$((FIRST - SECOND))" -le 1 && "$((FIRST - SECOND))" -ge -1 ]]; then

It's not terribly complex, but not the simplest either. It could be split up w/ elif though. The drift could also be a variable, though I think it brings us back to the discussion of cognitive overload from earlier.
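For illustration, split up with elif and the drift as a variable it might look like this (HEALTHCHECK_KOIOS_DRIFT is a hypothetical name, not an existing variable):

    HEALTHCHECK_KOIOS_DRIFT=${HEALTHCHECK_KOIOS_DRIFT:-1}
    DIFF=$(( FIRST - SECOND ))
    if (( DIFF == 0 )); then
      echo "We're healthy - node and KOIOS tips match"
    elif (( DIFF >= -HEALTHCHECK_KOIOS_DRIFT && DIFF <= HEALTHCHECK_KOIOS_DRIFT )); then
      echo "We're healthy - within allowed drift of ${HEALTHCHECK_KOIOS_DRIFT} block(s)"
    else
      echo "there is a problem"
      exit 1
    fi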

@TrevorBenson
Collaborator Author

TrevorBenson commented Jan 17, 2025

@adamsthws So the results were pretty interesting.

FWIW, I set the healthcheck_poc.sh so that it leverages the KOIOS_API_TOKEN to be sure I didn't get any 429's as this testbed shares the WAN IP with multiple systems querying koios.


Tests

  • Intervals of 300, 240, 180, 120
    • 300 and 240 used 100 runs each
    • 180 and 120 used 200 runs each
      • So that 180 and 120 had more samples with more overlap to the 300 interval test.
  • Tests performed in parallel
    • Over the same time periods
    • On the same chain (Preview)

Results

300 second intervals

  • Run time 2025-01-17-06:10:51 AM UTC - 2025-01-17-02:26:57 PM UTC
  • Results
    • 5/100 node less than koios: 5%
    • 95/100 node synched to koios: 95%
    • 0 node greater than koios: 0%
  • Consecutive unsynchronized checks
    • 0

240 second intervals

  • Run time 2025-01-17-06:10:53 UTC - 2025-01-17-12:47:58 PM UTC
  • Results
    • 1/100 node less than koios: 1%
    • 99/100 node synched to koios: 99%
    • 0/100 node greater than koios: 0%
  • Consecutive unsynchronized checks
    • 0

180 second intervals

  • Run time 2025-01-17-06:10:55 AM UTC - 2025-01-17-04:10:08 PM UTC
  • Results
    • 5/200 node less than koios: 2.5%
    • 195/200 node synched to koios: 97.5%
    • 0 node greater than koios: 0%
  • Consecutive unsynchronized checks
    • 0

120 second intervals

  • Run time 2025-01-17-06:10:58 AM UTC - 2025-01-17-12:51:10 PM UTC
  • Results
    • 5/200 node less than koios: 2.5%
    • 195/200 node synched to koios: 97.5%
    • 0 node greater than koios: 0%
  • Consecutive unsynchronized checks
    • 0

Summary

  • No tests experienced a single consecutive unsynchronized event. I think this confirms 4 consecutive unsynchronized events are fairly unlikely in the node's KOIOS test.
  • The unsynchronized results among each interval were not from the same issue/time period.
    • The node/koios tips in the DB show the chain had progressed, even on the shortest distance between two results for a different test (2 minutes)
      15|2025-01-17-06:20:52 AM|300|lt|1|2880268|2880269
      18|2025-01-17-06:22:58 AM|180|lt|1|2880273|2880274
      
  • The interval 300 being double the ratio of the other tests I believe to be just luck of the draw, I doubt it would come out the same if run again.

If reducing this interval does not negatively impact any other healthchecks being added, and the healthchecks start only after the ledger has finished loading (in case of unclean shutdowns, ledger replays during major version changes, etc.), I see no drawbacks to improving the observability of the container health with shorter intervals.

I can post the script to a gist if you want to replicate the tests yourself or test different networks.

@adamsthws
Contributor

I can post the script to a gist if you want to replicate the tests yourself or test different networks.

100% - running your test on my side now. I'll let you know the results.

@adamsthws
Contributor

adamsthws commented Jan 20, 2025

Running your test on my side now. I'll let you know the results.

Test Results
Results after running both of our two tests at my end...

Test: check_node()
(healthcheck_poc.sh - Gist)
(Interval=120) (Tries=300)

  • Where "Status" != "synchronized": 43/300 (14.33%)
  • Where "Status" == "lt": 4/300 (1.33%)
  • Where "Status" == "gt": 39/300 (13%)
  • Failed consecutively: 1/300 (0.33%)
    • (It failed twice in a row one time)

Test: check_cncli_send_tip()
(test_check_cncli_send_tip.sh - Gist)
(Interval=120) (Tries=300) (Timeout=60)

  • Failed checks: 30/300 (10%)
  • Failed consecutively (twice in a row): 4/300 (1.33%)
  • Failed consecutively (three times in a row): 1/300 (0.33%)
    • (It failed three times in a row one time)

Probabilities
I started getting interested in the probabilities...

If a check has a probability of failing 10% of the time, the probability of that check failing three times in a row is 0.1% (or 1 in 1,000) - (assuming the events are independent, meaning each failure is not influenced by previous failures).

Probability of one failure:
P(failure) = 10% = 0.1

Probability of two consecutive failures:
P(2 failures) = P(failure) × P(failure) = 0.1 × 0.1 = 0.01 (1%)

Probability of three consecutive failures:
P(3 failures) = P(failure) × P(failure) × P(failure) = 0.1 × 0.1 × 0.1 = 0.001 (0.1%)

So with timeout=60s and --healthcheck-interval 5m, check_cncli_send_tip() would (probabilistically) become unhealthy every 3.47 days
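As a quick sanity check of that figure (same independence assumption, 5-minute interval, three-failure streak to go unhealthy):

    # P(3 consecutive failures)    = 0.1^3       = 0.001 per check
    # checks per day at 300s       = 86400 / 300 = 288
    # expected unhealthy events    = 288 * 0.001 = 0.288 per day
    # expected time between events = 1 / 0.288   ~ 3.47 days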

adamsthws and others added 3 commits January 21, 2025 10:37
- Increased accuracy of "Pooltool" log scrape pattern.

- Included ($pt_log_entry) in output for improved logging/debugging.

- Improved variable names for clarification.
@TrevorBenson
Collaborator Author

TrevorBenson commented Jan 22, 2025

So with timeout=60s and --healthcheck-interval 5m, check_cncli_send_tip() would (probabilistically) become unhealthy every 3.47 days

It's clear there is a need to account for your findings.

I suspect the "middle ground" here may be adjusting current HEALTHCHECK values, as well as providing documentation for optimizing based on the use case (ENTRYPOINT_PROCESS).


Thoughts

Probabilities

Taking a quick glance at your formula for probabilities, I realized I misspoke in earlier statements 🤦🏼

I described one of our monitoring solution's concept of "retries" before a service status changes to offline/unhealthy, instead of Docker's concept. Instead of 1 initial failure + X retries to change a status, for Docker it's just X retries, or "FailingStreak: X" == --health-retries X. FailingStreak is available in docker container inspect under State.Health.
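For reference, the failing streak can be read directly from the inspect output (container name is a placeholder):

    docker container inspect <container> | jq '.[].State.Health.FailingStreak'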

I'm a bit tired ATM, but unless my mental math is off, doesn't that bring us to unhealthy status once per 25 hours / 1.04 days (at 300 interval) or once per 10 hours if we wanted to reduce to an interval of 120?

Node tests

Your node results are interesting.

My tests were all in the US from various systems, but still pretty low latency to a very high number of overall stake pools, and total delegation. In regions with even higher overall latency to the majority of nodes/delegation (possibly AU?), I suspect we'd see even more failures.

Additional Data

It might be interesting to see the following output providing details about the consecutive event observed:

SELECT * FROM checks WHERE status != 'synchronized';

This is not required, just "interesting". If you decide to grab/provide this for review, or any other queries you might find interesting among the data, a gist link would be fine.

Send Tip Tests

Given your current results from timeout=60 I doubt timeout=100 (or even timeout=120) would be sufficient to combat an event with such a high probability of occurring every few days (more than once an epoch, ~105 times annually).

Options

  1. We could retest w/ 100 and 120 to get more data points.
    • Possibly include 60 as well, running everything in parallel by starting the script more than once.
    • Running in parallel just confirms that failures at 60 don't drop while we evaluate the 100 & 120 results. If 60 shows fewer consecutive failures, our findings from 100/120 might be skewed as well.
  2. Consider some form of check enhancement.
    • Possibly reduce false positives by borrowing from the check_node logic, using the tip when needed, e.g.:
CCLI=$(which cardano-cli)
FIRST=$($CCLI query tip --testnet-magic ${NWMAGIC} | jq .block)
pt_log_entry=$(timeout $log_entry_timeout cat /proc/$process_id/fd/1 | grep --line-buffered "Pooltool" | head -n 1)
if [ -z "$pt_log_entry" ]; then
    SECOND=$($CCLI query tip --testnet-magic ${NWMAGIC} | jq .block)
    if [[ "$FIRST" -eq "$SECOND" ]]; then
        echo "Unable to capture cncli output within $log_entry_timeout seconds, but node has not moved tip."
        return 0  # Not cncli's fault: the node itself has not produced a new block
    else
        echo "Unable to capture cncli output within $log_entry_timeout seconds."
        return 1  # Return 1 if the output capture fails while the tip has moved
    fi
fi

EDIT: I mentioned ~105 times annually before I looked at the formulas, then backtracked to write the intro. That figure may well be off; it's quite possible my mental math was wrong since I didn't double-check.

I'll look at the numbers again today. In any case I still think adjustments are needed, even if it's only once every 3.47 days, or even if it were simply once a month.

@adamsthws
Copy link
Contributor

adamsthws commented Jan 22, 2025

I described the "retries" concept from one of our monitoring solutions (retries before a service status changes to offline/unhealthy) instead of Docker's concept. Rather than 1 initial failure + X retries to change a status, for Docker it is simply X failed checks, i.e. "FailingStreak: X" == --health-retries X. FailingStreak is available via docker container inspect under State.Health.

Thanks for pointing that out... I was thinking about it in terms of Docker's behaviour, where 3 consecutive failures = unhealthy.

I'm a bit tired ATM, but unless my mental math is off, doesn't that bring us to unhealthy status once per 25 hours / 1.04 days (at 300 interval) or once per 10 hours if we wanted to reduce to an interval of 120?

I recalculated to double-check and I believe my previous working was correct.
By my calculations, with a 10% failure rate (and "FailingStreak: 3" == --health-retries 3):

  • 120 interval would become unhealthy every 1.38 days
  • 300 interval would become unhealthy every 3.47 days

My tests were all in the US from various systems, but still with pretty low latency to a very large share of the overall stake pools and total delegation. In regions with higher overall latency to the majority of nodes/delegation (possibly AU?), I suspect we'd see even more failures.

That's a good point. FWIW I'm testing from the UK.

Additional Data
It might be interesting to see the following output providing details about the consecutive event observed:
SELECT * FROM checks WHERE status != 'synchronized';
This is not required, just "interesting". If you decide to grab/provide this for review, or any other queries you find interesting among the data, a gist link would be fine.

Unfortunately the container has been re-created since running your test and the results db has not persisted. Happy to re-run the test if you think it would be valuable?

Given your current results from timeout=60, I doubt that timeout=100 (or even timeout=120) would be sufficient to combat an event with such a high probability of occurring every few days (more than once an epoch, ~105 times annually).

I've tested with timeout=90 and am now awaiting the timeout=120 test to complete... Once we have these results, they should give us a better idea of the values to test in the next round (while testing all simultaneously). Here are the results of timeout=90...

  • Test: check_cncli_send_tip() - Gist
    (Interval=120) (Tries=300) (Timeout=90)
    Failed checks: 11/300 (3.67%)
    Probability of 3 consecutive failed checks: 0.00495%
    Expected frequency of unhealthy container status: 70.24 days

@TrevorBenson
Copy link
Collaborator Author

Unfortunately the container has been re-created since running your test and the results db has not persisted. Happy to re-run the test if you think it would be valuable?

No need. The parallel run with 60/90/120 would be the most useful, as we can see if timeout=60 results in the same level of failures as your first pass.

If you can persist the DB on the parallel run, it would be useful for investigating each consecutive failure. Comparing each overlapping check across the different timeout values should be invaluable for determining whether the adjustment really addresses the probabilities calculated previously, or whether you just had a lucky run; seeing the parallel timeout=60 results will settle that.

@TrevorBenson
Copy link
Collaborator Author

Thanks for pointing that out... I was thinking about it in terms of Docker's behaviour, where 3 consecutive failures = unhealthy.

Great. I must have multiplied by 100 when thinking about the % in the 2-failure scenario, then just carried that number over when I added the third failure.

Sleep is good, I should try it more often 😅.

@adamsthws
Copy link
Contributor

adamsthws commented Jan 23, 2025

I've tested with timeout=90 and am now awaiting the timeout=120 test to complete... Once we have these results, they should give us a better idea of the values to test in the next round (while testing all simultaneously). Here are the results of timeout=90...

* Test: check_cncli_send_tip() - [Gist](https://gist.github.com/adamsthws/dce0263bea2302047660b9c8ea458cbd)
  (Interval=120) (Tries=300) (Timeout=90)
  Failed checks: 11/300 (3.67%)
  Probability of 3 consecutive failed checks: 0.00495%
  Expected frequency of unhealthy container status: 70.24 days

Results of timeout=120:

  • Test: check_cncli_send_tip() - Gist
    (Interval=120) (Tries=300) (Timeout=120)
    Failed checks: 3/300 (1%)
    Probability of 3 consecutive failed checks: 0.0001%
    Expected frequency of unhealthy container status: 3472 days (9.5 years)

Quick observation:
As the timeout increases linearly, the failure rate decreases roughly exponentially.
As the failure rate decreases linearly, the probability of three consecutive failures decreases cubically (it is the failure rate cubed).

I will now test all of the following timeout values simultaneously to verify the results: 60, 90, 100, 110, 120.

@TrevorBenson
Copy link
Collaborator Author

Results of timeout=120:

* Test: check_cncli_send_tip() - [Gist](https://gist.github.com/adamsthws/dce0263bea2302047660b9c8ea458cbd)
  (Interval=120) (Tries=300) (Timeout=120)
  Failed checks: 3/300 (1%)
  Probability of 3 consecutive failed checks: 0.0001%
  Expected frequency of unhealthy container status: 3472 days (9.5 years)

Quick observation: As the timeout increases linearly, the failure rate decreases roughly exponentially. As the failure rate decreases linearly, the probability of three consecutive failures decreases cubically (it is the failure rate cubed).

I will now test all of the following timeout values simultaneously to verify the results: 60, 90, 100, 110, 120.

Send Tip

Fabulous. With the parallel data confirming whether 60 is close to (or the same as) your first run, and the ability to compare every other timeout in SQL over the same period, the choice should be quite simple.

Node

The only concerning thing remaining is your result from the check_node POC. The 13% failure rate is much higher than anything I observed (my worst was 5% with 0 consecutive failures). The probability increase is dramatic: 5% gives 0.000125 (0.0125%), 10% (send tip) gives 0.001 (0.1%), and 13% gives 0.002197 (0.22%). That is roughly 17× more likely than in any test I performed, and ~120% more likely than the send tip healthcheck failure ratio observed w/ timeout=60.

Due to this I'm thinking about an adjustment to the logic for check_node using KOIOS.

  • Allow up to 1 block of drift in either direction, or fully in sync, to return 0.
  • Use 1-2 retries (instead of the current 20) with a 3-4 second sleep (rough sketch below).

I suspect either will significantly reduce the POC failure rates. Do you have a preference or other options to suggest?
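
To make the first option concrete, something roughly along these lines (a sketch only, not a final implementation; the KOIOS_API base URL, the /tip response shape, and the variable names are assumptions, and NWMAGIC is taken from the existing environment):

# Rough sketch: compare local tip to the Koios tip, allowing 1 block of drift,
# with a couple of short retries instead of the current 20.
KOIOS_API="${KOIOS_API:-https://preview.koios.rest/api/v1}"   # assumed base URL
CCLI=$(which cardano-cli)
RETRIES="${RETRIES:-2}"
SLEEP_SECONDS="${SLEEP_SECONDS:-4}"
ALLOWED_DRIFT=1

check_node() {
  local attempt local_block koios_block drift
  for ((attempt=0; attempt<=RETRIES; attempt++)); do
    local_block=$($CCLI query tip --testnet-magic "${NWMAGIC}" | jq -r .block)
    koios_block=$(curl -sf "${KOIOS_API}/tip" | jq -r '.[0].block_no')
    if [[ -n "$local_block" && -n "$koios_block" ]]; then
      drift=$(( local_block - koios_block ))
      [[ ${drift#-} -le $ALLOWED_DRIFT ]] && return 0   # in sync, or within 1 block either way
    fi
    sleep "$SLEEP_SECONDS"
  done
  echo "Node tip is more than ${ALLOWED_DRIFT} block(s) away from Koios after $((RETRIES + 1)) attempts."
  return 1
}

The ${drift#-} just takes the absolute value, so being one block ahead of Koios is treated the same as being one block behind.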

After your response I'll create an adjustment to the node healthcheck POC and run them in parallel for easy comparison.


FWIW, I think this is quite close to moving out of draft. I do need to come up with something for the mithril signer, but I don't want that to delay this work getting reviewed, so if I don't find time to implement/test something before the next POC test finishes, I'll just add it to the list of other PRs I need to start work on.

Thanks for all the time and energy you've spent collaborating on this!
