Include healthcheck logic for helper scripts running as sidecars #1842
base: alpha
Conversation
FWIW, here are my preview network pool and cncli containers showing healthy once the script was copied in and the healthcheck interval was reached:
|
Looks good!
- cp the script into the cncli sync, validate, leaderlog, pt-send-slots and pt-send-tip containers
- Execute the script with docker exec
- Monitor the containers until the healthcheck interval occurs and confirm they are marked healthy
- Adjusted RETRIES
- Adjusted CPU_THRESHOLD
A rough sketch of the commands used is in the block below.
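A minimal sketch of those steps, assuming the docker CLI and a container named cncli-sync (the container name and source path are illustrative, not taken from the PR):

```bash
# Illustrative only - adjust the container name and source path to your setup.
docker cp files/docker/node/addons/healthcheck.sh cncli-sync:/home/guild/.scripts/healthcheck.sh

# Run the check by hand and show its exit code.
docker exec cncli-sync /home/guild/.scripts/healthcheck.sh; echo "exit code: $?"

# Watch the reported health state once the next healthcheck interval has fired.
docker inspect --format '{{.State.Health.Status}}' cncli-sync
```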
Further testing... I was able to test with higher CPU load after deleting the cncli DB and re-syncing. Result:
Line 44 of healthcheck.sh: This seems to fix it...
With the above change, when cpu load is higher than CPU_THRESHOLD, this is the result:
Yeah, there are rare instances where the cncli percentage can be high, but this tends to be when resyncing the entire DB and/or a cncli init is running. Occasionally, if there is an issue with the node process itself, like if it gets stuck in chainsync/blockfetch and never completes, I have also seen cncli get a high percentage, but otherwise it's quite rare to see it increase. I figured with mithril-signer or db-sync, it might be more useful.
@adamsthws Feel free to submit suggestions to adjust the … Thanks for the testing.
Testing revealed that setting RETRIES=0 results in script exit 1 without running the loop... it would be preferable to run the loop once when RETRIES=0. Suggestion - Modify the loop condition to handle RETRIES=0 by changing line 39 to the following:
Or...
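(The suggested diffs themselves aren't captured above; purely to illustrate the "run the loop at least once when RETRIES=0" idea, with a hypothetical helper and counter variable:)

```bash
# Illustration only - check_cpu_usage and attempt are hypothetical stand-ins,
# not the actual code or suggestion from this PR.
attempt=0
while [[ ${attempt} -le ${RETRIES} ]]; do  # -le runs the body once even when RETRIES=0
  check_cpu_usage && exit 0
  attempt=$((attempt + 1))
  sleep 3  # current delay between checks per the PR description
done
exit 1
```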
I started thinking about a cncli-specific check. The following function is an idea to check cncli status...
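(The proposed function isn't captured above; a rough sketch of the idea is below, where the cncli status flags, file paths and output pattern are assumptions rather than the actual suggestion:)

```bash
# Sketch only - paths and the expected JSON output of 'cncli status' are assumptions.
check_cncli_status() {
  if cncli status --db "${CNODE_HOME}/guild-db/cncli/cncli.db" \
       --byron-genesis "${CNODE_HOME}/files/byron-genesis.json" \
       --shelley-genesis "${CNODE_HOME}/files/shelley-genesis.json" 2>/dev/null \
     | grep -q '"ok"'; then
    return 0  # cncli considers the sqlite DB to be within sync tolerance
  fi
  return 1
}
```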
Perhaps it would be improved further by also checking if sync is incrementing, so the healthcheck doesn't fail during initial sync. How would you feel about adding me as a commit co-author if you decide to use this?
@adamsthws I'm happy to make you a co-author even for something simple; for example, if you know how to submit a suggestion, go ahead and apply one for
In regards to the larger block for cncli checks, first it is clear lots of thought went into it. This portion:
Sleeps of 180 exceed the current timeout period of 100. Options:
With container settings of 3 retries and a 5 minute interval w/ 100 second timeout, it is 15 minutes from the last healthy response, or 10 minutes from the first unhealthy response, before the container exhausts retries and is marked unhealthy. I think this covers the two 180 second sleeps, even if the operator reduces the interval and timeouts when not running the node. Separately, conversations outside of this PR and thread have pointed to some of the logic used in KOIOS for db-sync, and that it could also be used for checking the sqlite DB for cncli.
I haven't examined what the common drift might be for a db-sync instance from the last block produced, and for cncli I suspect we could make it shorter than 1 hour. These are just my thoughts. If you think that I overlooked some aspect, please don't hesitate to continue the discussion.
Thank you for the feedback. It seems the limitations of using 'cncli status' in the healthcheck are twofold... 1 - Instead, checking for cncli sync progress via the sqlite DB would seem to make more sense. 2 -
I think it's reasonable to say 'cncli status' is not a suitable way to check cncli container health.
Here's a suggestion to check the health of cncli ptsendtip, I'd love your feedback...
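(The suggested function isn't reproduced here in full; judging by the diff excerpt further down in this thread, the core idea is to scrape the ptsendtip process stdout for Pooltool responses. A minimal sketch under that assumption - the process match and "success" pattern are guesses, not the actual code:)

```bash
# Sketch only - the real check_cncli_send_tip() in healthcheck.sh differs; names and
# patterns below are assumptions.
check_cncli_send_tip() {
  local log_entry_timeout=60 process_id pt_log_entry
  process_id=$(pgrep -of "cncli.*sendtip") || { echo "cncli sendtip process not found"; return 1; }
  # Watch the process stdout for the next Pooltool-related log entry.
  pt_log_entry=$(timeout "${log_entry_timeout}" cat "/proc/${process_id}/fd/1" | grep -i --line-buffered "pooltool" | head -n 1)
  if [[ -z "${pt_log_entry}" ]]; then
    echo "No Pooltool output captured within ${log_entry_timeout}s"
    return 1
  fi
  # Treat an acknowledgement from Pooltool as healthy (the exact pattern is an assumption).
  grep -qi "success" <<< "${pt_log_entry}" && return 0
  echo "Pooltool response did not indicate success: ${pt_log_entry}"
  return 1
}
```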
Co-authored-by: Adam Matthews <[email protected]>
Yep, this is where I was originally taking "the easy way out" on the healthcheck of checking if the binary was running or not, and/or sleep.
(force-pushed from 13d8a50 to f752b1c)
@adamsthws OK, commit pushed, sorry for the delay. Spent a little time considering if healthcheck.sh could remain in … Thanks.
In my limited testing I have never found
If nothing is ever written to the file descriptor...
Do you recommend...
Yes, I have tested... Might the discrepancy between your results and mine be that:
Great! I'll try to review quickly but it may be a day or two. Thanks!
I haven't had a chance to do any testing but here is a quick initial look-over. Thanks!
Co-authored-by: Adam Matthews <[email protected]>
I suppose we could align with the original query tip (non-KOIOS) and go for 60 seconds. If it fails on the first pass, then there are 3 retries before the container is marked unhealthy anyway. Thoughts? Also, I realize in my prior explanation I clearly had the timing off. I should have said 20 total and 15 from the first failure, since it would be defaulting to 3 retries with the 5 minute intervals, not 3 total attempts. At least when operators do not tune the interval, retries, etc. at container creation.
Nope. This was a manual test inside my cncli sync container. So I manually checked the PID and used tmux, not the script at all. Because I wasn't using a valid API key I checked both fd 1 and 2 (and 0).
I'll chalk it up to some oddity with the manual test and using an XXXX value for the API ID. As long as you are seeing the output and it's working in your test, that is good enough for me. I need to create a new account at PT to replace my old one based on email address anyway. I'll try to retest once I have a valid setup and start using ptsendtip as a container again.
Co-authored-by: Adam Matthews <[email protected]>
Sounds like a reasonable approach, thanks!
Added the function to check if the tip is successfully being sent to Pooltool
Co-authored-by: Adam Matthews <[email protected]>
```bash
# Check if the DB is in sync
[[ -z "${PGPASSFILE}" ]] && PGPASSFILE="${CNODE_HOME}/priv/.pgpass"
if [[ ! -f "${PGPASSFILE}" ]]; then
  echo "ERROR: The PGPASSFILE (${PGPASSFILE}) not found, please ensure you've followed the instructions on guild-operators website!" && exit 1
```
For docker/podman, there aren't currently instructions - we possibly need to attach a sample reference compose file, wherein PGPASSFILE can be added (or instructions added for it to be part of the priv folder).
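(As an aside, the PGPASSFILE is a standard PostgreSQL password file, so any such instructions would boil down to something like the sketch below; the host/database/user/password values are placeholders, not taken from any docs:)

```bash
# Sketch only - replace the placeholder connection values with your own.
cat > "${CNODE_HOME}/priv/.pgpass" <<'EOF'
/var/run/postgresql:5432:cexplorer:postgres:yourpassword
EOF
chmod 0600 "${CNODE_HOME}/priv/.pgpass"  # libpq ignores password files with group/world access
```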
Is the current Guild Ops DBSync documentation which mentions this considered insufficient in this case? It documents it at the top of the page, and lower down shows examples of using the priv folder as the path.
Also, this healthcheck won't impose any new requirements for using dbsync.sh in a container that don't already exist using it on bare metal.
- Line 120 comes from line 42 of the set_defaults() function (guild-operators/scripts/cnode-helper-scripts/dbsync.sh, lines 38 to 47 in 349dffd):

```bash
set_defaults() {
  if [[ -z "${DBSYNCBIN}" ]]; then
    [[ -f "${HOME}/.local/bin/cardano-db-sync" ]] && DBSYNCBIN="${HOME}/.local/bin/cardano-db-sync" || DBSYNCBIN="$(command -v cardano-db-sync)"
  fi
  [[ -z "${PGPASSFILE}" ]] && PGPASSFILE="${CNODE_HOME}/priv/.pgpass"
  [[ -z "${DBSYNC_CONFIG}" ]] && DBSYNC_CONFIG="${CNODE_HOME}/files/dbsync.json"
  [[ -z "${DBSYNC_SCHEMA_DIR}" ]] && DBSYNC_SCHEMA_DIR="${CNODE_HOME}/guild-db/schema"
  [[ -z "${DBSYNC_STATE_DIR}" ]] && DBSYNC_STATE_DIR="${CNODE_HOME}/guild-db/ledger-state"
  [[ -z "${SYSTEMD_PGNAME}" ]] && SYSTEMD_PGNAME="postgresql"
}
```

- Lines 121-122 come from lines 52-53 of the check_defaults() function (guild-operators/scripts/cnode-helper-scripts/dbsync.sh, lines 49 to 60 in 349dffd):

```bash
check_defaults() {
  if [[ -z "${DBSYNCBIN}" ]]; then
    echo "ERROR: DBSYNCBIN variable is not set, please set full path to cardano-db-sync binary!" && exit 1
  elif [[ ! -f "${PGPASSFILE}" ]]; then
    echo "ERROR: The PGPASSFILE (${PGPASSFILE}) not found, please ensure you've followed the instructions on guild-operators website!" && exit 1
    exit 1
  elif [[ ! -f "${DBSYNC_CONFIG}" ]]; then
    echo "ERROR: Could not find the dbsync config file: ${DBSYNC_CONFIG} . Please ensure you've run guild-deploy.sh and/or edit the DBSYNC_CONFIG variable if using a custom file." && exit 1
  elif [[ ! -d "${DBSYNC_SCHEMA_DIR}" ]]; then
    echo "ERROR: The schema directory (${DBSYNC_SCHEMA_DIR}) does not exist. Please ensure you've follow the instructions on guild-operators website" && exit 1
  fi
}
```
A compose file example is definitely an option. However, I find it can also be a crutch by allowing users to skip reading documentation. If you consider it a "must have" then I think it should be part of another PR and have its own separate testing and review.
Is the current Guild Ops DBSync documentation which mentions this considered insufficient in this case? It documents it at the top of the page, and lower down shows examples of using the priv folder as the path.
I look at (at least a big % of) docker users as spoon-fed users - not intended in any demeaning way, but the primary requirement of them using docker [besides virtual env isolation] would be simplicity - as we're not talking about slightly more experienced/advanced users already taking the next step and deploying in professional environments, their expectation will be not having to go through document pages beyond the docker reference ones.
If someone running a manual setup is moving to docker, I don't expect them to have as many queries. It does kinda beg the question of whether docker users would also share the pre-reqs on the home page - in my experience of reading github issues on IO/intersectmbo repositories, a very large section does not.
If you consider it a "must have" then I think it should be part of another PR and have its own separate testing and review.
Ye - I agree. That - if added - does not need to be part of this PR indeed
Talking about dbsync in particular, I would think this would likely be a separate docker image altogether (which is why I mentioned earlier there is some work to be done):
- Would lighten size (by removing binaries) from current cardano-node image [and vice versa]
- Add more steps in build, like having schema directory specific to dbsync version, ledger-state directory
- Above mentioned PGPASSFILE will be part of it's package
- A dedicated page for operations for dbsync docker image.
- Deploy pg_bech32 as part of image
- Have better provision for snapshot restoration
- Have script modifications to allow for any type of snapshot restore to compare config options (as users may otherwise end up with a lot of wasted hours to find out incompatibility)
In theory, the same can be done for smaller components, but there is little value as their setups are mostly the same. For instance, for something like ogmios - we don't need a custom image, the official image is perfectly usable.
Once above is available, a [docker|podman]-compose will essentially just give a reference on sharing volumes between the pods|containers
Ye - I agree. That - if added - does not need to be part of this PR indeed
Sounds good. After this and some other documentation issues/PRs are resolved, I'll open one and start a compose example.
Talking about dbsync in particular, I would think this would likely be a seperate docker image altogether [...]
Nice. So far I've been running the upstream db-sync image, as well as ogmios, and guild ops for node, cncli, and mithril. I'm hereby offering to assist with a new container if any help is needed.
In theory , same can be done for smaller components, but there is little value as their setups are mostly the same. For instance, for something like ogmios - we dont need a custom image, official image is perfectly usable.
Yep. The socket, configs (files) and just about every other subdir of CNODE_HOME lend themselves to volume usage. Not scripts though. The excessive work trying to make scripts a shared volume wasn't worth the time I initially spent to get it working. I've found a container environment file with all vars + a completely stock scripts directory and env file inside the container to be the simplest way to run separate containers per process with shared vars.
```bash
# Capture the next output from cncli that is related to Pooltool
pt_log_entry=$(timeout $log_entry_timeout cat /proc/$process_id/fd/1 | grep -i --line-buffered "pooltool" | head -n 1)
if [ -z "$pt_log_entry" ]; then
  echo "Unable to capture cncli output within $log_entry_timeout seconds. $error_message_suffix"
  sleep $HEALTHCHECK_RETRY_WAIT # Wait n seconds then retry
  continue # Retry if the output capture fails
fi
```
This is the part I was mentioning earlier about "fail early". The reason is that we have compounded retries occurring, which reduces observability. I want to apologize (profusely) in advance: this comment is going to be pretty verbose to break down what I mean by compounding retries and reduced observability.
First, I can definitely see where my earlier statements about using the query tip 60 seconds and relying on the 3 "retries" was confusing. What I was referring to is this portion of the current healthcheck node query tip:
guild-operators/files/docker/node/addons/healthcheck.sh
Lines 13 to 22 in 349dffd
if [[ "${ENABLE_KOIOS}" == "N" ]] || [[ -z "${KOIOS_API}" ]]; then | |
# when KOIOS is not enabled or KOIOS_API is unset, use default behavior | |
sleep 60; | |
SECOND=$($CCLI query tip --testnet-magic ${NWMAGIC} | jq .block) | |
if [[ "$FIRST" -ge "$SECOND" ]]; then | |
echo "there is a problem" | |
exit 1 | |
else | |
echo "we're healthy - node: $FIRST -> node: $SECOND" | |
fi |
It sleeps 60 seconds between reading the first block and the second, and has zero retries. It only relies on the --healthcheck-retries of the container configuration. This results in a minimum of 15 minutes and a maximum of 24 minutes before the container actually shows an unhealthy status. It does not compound retries, which would make the status change from healthy -> unhealthy take even longer.
I'll clarify what I mean by compounding retries:
- The container's own HEALTHCHECK defined in the Dockerfile sets the container automatically to use --health-start-period 5m --healthcheck-interval 5m; without being defined it also gets --healthcheck-retries 3 from the default number of retries. Here is the container inspect output with time in nanoseconds:

```
podman container inspect preview-ccio-pool | jq '.[].Config.Healthcheck'
{
  "Test": [
    "CMD-SHELL",
    "/home/guild/.scripts/healthcheck.sh"
  ],
  "StartPeriod": 300000000000,
  "Interval": 300000000000,
  "Timeout": 100000000000
}
```
- The check for ptsendtip does a loop with HEALTHCHECK_RETRIES (default 20) and log_entry_timeout (set to 60); if it fails we wait HEALTHCHECK_RETRY_WAIT (default 3) and start again.
  - The check for ptsendtip has the potential to run for 21 minutes of runtime (60 seconds per loop + 3 second wait, times 20).
  - The container healthcheck timeout is set to 100 seconds. Exceeding the timeout causes an unhealthy response.
  - The status changing from healthy to unhealthy requires exhausting the container's retry value, multiplied by the Interval value.

That is a minimum of 15 and a maximum of almost 27 minutes (including runtime).
I see multiple options as a solution:
- Drop the retry logic from check_cncli_send_tip(). Let it run once for 60 seconds and exit 1 if it doesn't see a block. This makes it similar to the query tip in the timeout and all other checks in regards to no retries (except the KOIOS version of check_node).
- Keep the retry logic in check_cncli_send_tip() and set HEALTHCHECK_RETRIES to default to 0.
- Add retry logic to every check function in the healthcheck script and set HEALTHCHECK_RETRIES to default to 0.
Each option results in a 15-21 minute window before knowing the container is unhealthy when the container configuration for healthcheck intervals & retries and all script variables are left at current/default values.
I'm torn between 3 and 1 as my favorite option. Improving observability even further requires either the operator to adjust the startup, interval, timeout and retries when creating the container OR for us to come to a consensus about adjusting the Dockerfile HEALTHCHECK statement when building the container image.
Great, thank you for the thorough explanation.
Drop the retry logic from check_cncli_send_tip()
My gut feeling is that this would be the preferred approach, minimising the code to add/maintain.
Let it run once for 60 seconds
To test how often it times out, I'm in the process of running the function 100 times with varying timeout values...
- timeout=60 - Result: x% timeout frequency (x-fail/x-success).
- timeout=90 - Result: x% timeout frequency (x-fail/x-success).
- timeout=120 - Result: x% timeout frequency (x-fail/x-success).
(Results TBD)
After removing retries from the function, if it times out, it then must wait another 5 mins (--healthcheck-interval 5m) before retrying again... I propose changing the timeout within the function to something larger, maybe 90s-120s (depending on the results of my testing).
E.g. timeout=90 (rather than timeout=60) increases the container time to failure by 30s while decreasing the container time to success by a relatively larger 4.5 min (by avoiding unnecessary timeouts and retries).
What would be the maximum time you would expect between tips?
What would be the maximum time you would expect between tips?
Anecdotally, I have seen Tip (diff) reach 60 seconds quite often, and occasionally a bit higher at 100-120 seconds. The KOIOS addition came about because, twice over a few months, the original query tip hit 60 seconds on repeated retries and marked the container unhealthy, causing some investigations. After adding KOIOS, or even disabling KOIOS and using the original query tip, I don't remember seeing unhealthy on the node in the last year.
In rare, extreme events, it has exceeded 5 minutes. Node 1.35, or 1.34, had one event long ago that exceeded 5 minutes. Not sure if this linked issue/comment showing an 8 minute delay between blocks is the one, or something else, as I didn't read the whole thread. The time frame seems like it could be the one I'm thinking of, and I don't recall at the moment another lasting longer than 5.
IntersectMBO/cardano-node#4421 (comment)
Again, these are rare "extreme" events, and the healthcheck in theory wouldn't even point this out with the current configuration anyway when it lasts less than --healthcheck-retries multiplied by the interval; we should in theory only see this in grafana/prometheus, and the container runtimes would still be showing a healthy container even with timeout=60.
To test how often it times out, I'm in the process of running the function 100 times with varying timeout values...
Here are the results:
timeout=60 - Result: 9% timeout frequency (9-fail/91-success).
timeout=90 - Result: 4% timeout frequency (4-fail/96-success).
timeout=120 - Result: 2% timeout frequency (2-fail/98-success).
I had twice over a few months hit 60 seconds on repeated retries and marked it unhealthy causing some investigations
In this limited testing sample, it has already failed successively (twice in a row) - with the lower timeout of 60s... So I feel using a higher timeout would mean less chance of failure three times in a row, causing unnecessary investigations.
...How would you feel about making timeout=120?
Do you happen to have details about consecutive failures/timeouts? For example, based on the 300 second interval, if we never observe more than 1 consecutive failure over 100 attempts, then it still results in 0 status changes to unhealthy.
To bump the timeout even higher I'd probably want to see 2, possibly 3, consecutive timeouts reached based on the interval being tested. If it reaches 50-66% of the required consecutive failures to cause the node to be marked unhealthy then I would definitely consider bumping it up.
However, if the results were similar to KOIOS, where there was never more than 1 failure and 0 consecutive failures, I would prefer leaving the internal timeout for the check at 60. Raising it to 120 would also require an increase of the Dockerfile HEALTHCHECK settings timeout (aka --healthcheck-timeout) value, currently at 100.
If you don't have the details on how many consecutive failures were observed, this gist contains the POC I did to gather the details for KOIOS. I won't get time to set up a ptsendtip until sometime this weekend, which would push my ability to run this test out to Sunday or sometime next week.
Do you happen to have details about consecutive failures/timeouts? For example, based on the 300 second interval, if we never observe more than 1 consecutive failure over 100 attempts, then it still results in 0 status changes to unhealthy.
These tests had no interval... I didn't feel adding an interval would affect the results in any meaningful way.
- timeout=60 - Result: 9% timeout frequency (9-fail/91-success).
  - failed consecutively 1 time. (interval=0)
- timeout=90 - Result: 4% timeout frequency (4-fail/96-success).
  - failed consecutively 0 times. (interval=0)
- timeout=120 - Result: 2% timeout frequency (2-fail/98-success).
  - failed consecutively 0 times. (interval=0)
To bump the timeout even higher I'd probably want to see 2, possibly 3, consecutive timeoutes reached based on X interval being tested. If it reaches 50-66% of the required consecutive failures to cause the node to be marked unhealthy then I would definitely consider bumping it up.
I'm in the process of re-running it with timeout=60, interval=300s... I'll keep you posted.
Raising it to 120 would also require an increase of the Dockerfile HEALTHCHECK settings timeout (aka --healthcheck-timeout) value currently at 100.
Oh yes, thanks for pointing that out. Let's see how the re-test goes.
If you don't have the details on how many consecutive failures were observed this gist contains the POC I did to gather the details for KOIOS.
This looks great. Nice use of sqlite!
Here's my test, it's not as elegant but it gets the job done:
https://gist.github.com/adamsthws/dce0263bea2302047660b9c8ea458cbd
Removed retry logic from function: check_cncli_send_tip()
Separately, I've been considering for a while dropping retries (now HEALTHCHECK_RETRIES) from 20 to 0-3. I originally set it to 20 retries and (now HEALTHCHECK_RETRY_WAIT) to 3 second sleeps so that the KOIOS method would at max run as long as the 60 seconds of a query tip without KOIOS. This was never really required, but it's been there ever since. I tested last night manually in a loop of 10 with a 20 second sleep; 10/10 returned in under 1 second, never needing a retry, and 10/10 also incremented at least 1 block. I didn't bother to do multiple passes and calculate the ratio. However, given the retries at the container level, I am pretty confident we could drop this from 20 and improve the observability (slightly) by failing earlier. I'm interested in your feedback. What do you think about changing the default values of HEALTHCHECK_RETRIES so even KOIOS will fail earlier?
If it rarely (or never) fails and needs a retry, I would be in favor of removing the Koios retry loop from … In which case, there might also be an argument to remove these variables:
Well, not "never". There are some occasions where the node is ahead, like when a block is produced locally and the healthcheck runs at almost the exact moment. Same in reverse, KOIOS being ahead, but rarely for more than 3 seconds. Since you're open to even removing the retries altogether, I'll create a POC and give it a full 100 runs using the current interval of 5 minutes. I'll track the total times:
I'll also try 4, 3 and 2 minute intervals. We could potentially improve the observability, reducing the time something is wrong before the status changes to unhealthy. As long as multiple systems are not sharing a single outbound IP for KOIOS queries, I don't believe this should push the limits of the free tier.
I wonder if including an allowable drift for the koios check (similar to
I like the thought, but you might be correct that we just swap complexity. Instead of a single line for "healthy", we get something like this for a drift of 1:
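(The snippet itself isn't captured above; purely as an illustration of a drift-tolerant comparison, with the variable names being assumptions rather than the actual code:)

```bash
# Illustration only - NODE_BLOCK, KOIOS_BLOCK and ALLOWED_DRIFT are assumed names.
ALLOWED_DRIFT=1
if [[ $(( KOIOS_BLOCK - NODE_BLOCK )) -gt ${ALLOWED_DRIFT} ]]; then
  echo "there is a problem - node: ${NODE_BLOCK} vs koios: ${KOIOS_BLOCK} (allowed drift: ${ALLOWED_DRIFT})"
  exit 1
else
  echo "we're healthy - node: ${NODE_BLOCK}, koios: ${KOIOS_BLOCK}"
fi
```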
It's not terribly complex, but not the simplest either. It could be split up w/
@adamsthws So the results were pretty interesting. FWIW, I set up healthcheck_poc.sh so that it leverages the KOIOS_API_TOKEN to be sure I didn't get any 429's, as this testbed shares its WAN IP with multiple systems querying koios.

Tests
Results

300 second intervals
240 second intervals
180 second intervals
120 second intervals
Summary
If reducing this interval does not negatively impact any other healthchecks being added, and the healthchecks start only after the ledger has finished loading (in case of unclean shutdowns, ledger replays during major version changes, etc.), I see no drawbacks to improving the observability of the container health with shorter intervals. I can post the script to a gist if you want to replicate the tests yourself or test different networks.
100% - running your test on my side now. I'll let you know the results.
Test Results

Test: check_node()
Test: check_cncli_send_tip()
Probabilities

If a check has a probability of failing 10% of the time, the probability of that check failing three times in a row is 0.1% (or 1 in 1,000) - assuming the events are independent, meaning each failure is not influenced by previous failures.
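(Worked out: 0.10 × 0.10 × 0.10 = 0.001, i.e. 0.1%, or roughly 1 in 1,000.)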
So with …
- Increased accuracy of "Pooltool" log scrape pattern.
- Included ($pt_log_entry) in output for improved logging/debugging.
- Improved variable names for clarification.
Update healthcheck.sh
It's clear there is a need to account for your findings. I suspect the "middle ground" here may be adjusting the current HEALTHCHECK values, as well as providing documentation for optimizing based on the use case (ENTRYPOINT_PROCESS).

Thoughts

Probabilities

A quick glance at your formula for probabilities and I realized I misspoke in earlier statements 🤦🏼 I described one of our monitoring solution's concept of "retries" before a service status changes to offline/unhealthy, instead of Docker's concept. Instead of 1 initial failure + X retries to change a status, for Docker it's just X retries, or "FailingStreak: X". I'm a bit tired ATM, but unless my mental math is off, doesn't that bring us to an unhealthy status once per 25 hours / 1.04 days (at a 300 interval), or once per 10 hours if we wanted to reduce to an interval of 120?

Node tests

Your node results are interesting. My tests were all in the US from various systems, but still pretty low latency to a very high number of overall stake pools and total delegation. In regions with even higher overall latency to the majority of nodes/delegation (possibly AU?), I suspect we'd see even more failures.

Additional Data

It might be interesting to see the following output providing details about the consecutive event observed:
This is not required, just "interesting". If you decide to grab/provide this for review, or any other queries you might find interesting among the data, a gist link would be fine.

Send Tip Tests

Given your current results from timeout=60, I doubt timeout=100 (or even timeout=120) would be sufficient to combat an event with such a high probability of occurring every few days (more than once an epoch, ~105 times annually).

Options
EDIT: I mentioned 105 times annually before I looked at the formulas and backtracked to write the intro. It would be off, if my mental math wasn't, but that is quite possible since I didn't double check. I'll look at the numbers again today. In any case I still think adjustments are needed, even if it's only once every 3.47 days, or even if it was simply once a month.
Thanks for pointing that out... I was thinking about it in terms of docker's behaviour - where 3 consecutive failures = unhealthy.
I re-calculated to double check and I believe my previous working was correct.
That's a good point. FWIW I'm testing from the UK.
Unfortunately the container has been re-created since running your test and the results db has not persisted. Happy to re-run the test if you think it would be valuable?
I've tested with timeout=90 and am now awaiting the timeout=120 test to complete... Once we have these results it might give us a better idea of the values to test for in the next round (while testing all simultaneously). Here are the results of timeout=90...
No need. The parallel run with 60/90/120 would be the most useful, as we can see if timeout=60 results in the same level of failures as your first pass. If you can persist the DB on the parallel run it would be useful to investigate each consecutive failure. The comparison of each overlapping check for the different timeout values should be invaluable to determine if the adjustment really addresses the probabilities calculated previously, or if you just had a lucky run, which seeing the parallel timeout=60 will determine.
Great. I must have multiplied by 100 to think about the % on the 2 failure scenario, then just continued with that number when I added the third failure. Sleep is good, I should try it more often 😅.
Results of timeout=120:
Quick observation: I will now go on to test simultaneously to verify results, using the following timeout values: 60, 90, 100, 110, 120.
Send Tip

Fabulous. With confirmation from the parallel data showing 60 was close to or the same as your first run, while we can compare in SQL for every other timeout during the same period, the choice should be quite simple.

Node

The only concerning thing remaining was your results from the check_node POC, the 13% result being much higher than I observed (worst at 5% w/ 0 consecutive). The probability increase is dramatic: 5% being 0.000125/0.012%, 10% (send tip) being 0.001/0.1% and 13% being 0.002197/0.22%. Essentially 800% more likely than in any test I performed, and ~120% more likely than the send tip healthcheck failure ratio observed w/ timeout=60. Due to this I'm thinking about an adjustment to the logic for check_node using KOIOS.
I suspect either will significantly reduce the POC failure rates. Do you have a preference, or other options to suggest? After your response I'll create an adjustment to the node healthcheck POC and run them in parallel for easy comparison. FWIW, I think moving this out of draft is quite close. I do need to come up with something for mithril-signer. I don't want that to delay this work getting reviewed, so if I don't find time to implement/test something before the next POC test finishes, I'll just add that to the list of other PRs I need to start work on. Thanks for all the time and energy you've spent collaborating on this!
Description
Enhances the healthcheck.sh script to work for checking permissions on sidecar containers (helper scripts) via the ENTRYPOINT_PROCESS.
Where should the reviewer start?
healthcheck.sh in /home/guild/.scripts/.

Testing different CPU Usage values

Adjust CPU_THRESHOLD (default 80%) at a value you want to mark a container unhealthy when it is exceeded.

Testing different amount of retries (internal to healthcheck.sh script)

Adjust RETRIES (default 20) at a number of retries you want to perform if the CPU usage is above the CPU_THRESHOLD value before exiting non-zero. Currently there is a 3 second delay between checks, so 20 retries results in up to 60 seconds before the healthcheck will exit as unhealthy due to CPU load.
Testing different healthcheck values (external to healthcheck.sh script).
The current HEALTHCHECK of the container image is:
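(The statement itself isn't reproduced above. Reconstructed from the container inspect output quoted earlier in this thread - 5m start period, 5m interval, 100s timeout, healthcheck.sh as the command - it corresponds roughly to the following run-time flags; the flags and image name here are illustrative, not the actual Dockerfile contents:)

```bash
# Rough run-time equivalent of the image's HEALTHCHECK (illustrative only).
podman run -d --name cncli-sync \
  --health-cmd '/home/guild/.scripts/healthcheck.sh' \
  --health-start-period 5m \
  --health-interval 5m \
  --health-timeout 100s \
  --health-retries 3 \
  cardanocommunity/cardano-node
```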
Reducing the start period and intervals to something more appropriate for the sidecar script will result in a much shorter period to determine the sidecar container's health.
Make sure to keep the environment variable RETRIES * 3 < container healthcheck timeout, to avoid marking the container unhealthy before the script returns during periods of high CPU load.
Motivation and context
Issue #1841
Which issue it fixes?
Closes #1841
How has this been tested?
- docker cp the script into preview network cncli sync, validate and leaderlog containers and waiting until the interval runs the script
- docker exec to confirm it reports healthy

Additional Details
There is a SLEEPING_SCRIPTS array which is used for validate and leaderlog to still check for the cncli binary, but not consider a sleep period for validate and leaderlog to be unhealthy. Not 100% sure this is the best approach, but with sleep periods being variable I felt it was likely an acceptable middle ground.
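For illustration, a minimal sketch of how such an array could be used is below; the actual names and logic in healthcheck.sh may differ, and the script names are placeholders:

```bash
# Sketch of the idea only - SLEEPING_SCRIPTS entries and the matching logic are placeholders.
SLEEPING_SCRIPTS=("cncli-validate" "cncli-leaderlog")

entrypoint_is_sleeping_script() {
  # Treat a sidecar as healthy while its entrypoint is a known "sleeping" helper,
  # as long as the cncli binary is still available in the container.
  local entry="${1}"
  for script in "${SLEEPING_SCRIPTS[@]}"; do
    if [[ "${entry}" == *"${script}"* ]]; then
      command -v cncli >/dev/null 2>&1 && return 0
    fi
  done
  return 1
}
```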
Please do not hesitate to suggest an alternative approach to handling sleeping sidecars healthchecks if you think you have an improvement.
@adamsthws if you could please copy this into your sidecar containers (and your pool) and report back any results. I am marking this as a draft PR for the time being until testing is completed, after which if things look good I will mark it for review and get feedback from others.
Thanks