feat(scheduling): per-Deployment k8s scheduling options #223
base: main
Conversation
charts/cryostat/values.yaml
Outdated
## @param grafana.affinity [object] Affinity for the Grafana Pod. See: [Affinity](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling)
affinity: {}
Is this unused, since Grafana and Datasource are in the same pod as Cryostat?
charts/cryostat/values.yaml
Outdated
@@ -267,6 +277,8 @@ datasource:
  nodeSelector: {}
  ## @param datasource.tolerations [array] Tolerations for the JFR Datasource Pod. See: [Tolerations](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling)
  tolerations: []
  ## @param datasource.affinity [object] Affinity for the JFR Datasource Pod. See: [Affinity](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling)
  affinity: {}
Same here?
## @param affinity [object] default Affinity for the various Pods. See: [Affinity](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling)
affinity: {}
Would it be a better choice for this field to represent common affinity specs for the managed Pods? Then each Pod could overwrite them as needed. This seems a bit more intuitive to me, considering SecurityContext, where container-level specs overwrite pod-level ones where possible.
However, this means we would need to figure out a way to do a strategic merge on this field :( Not sure how that can be done currently.
A simpler way, I suppose, would be to remove this field, since we don't have such default options for tolerations or nodeSelectors. We could just remove it and let users define their own?
There are also default options for tolerations and nodeSelectors now, just a few lines above this one. Each of these is only used if there is no pod-specific attribute provided. There is no merge strategy: each Pod will try to use its specific setting, and only if there is none does it fall back to these global defaults.
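To make that fallback concrete, here is a rough sketch of what the values could look like under this scheme. The `affinity` and `datasource.affinity` keys are the ones shown in the diff above; the affinity contents themselves are purely illustrative.

```yaml
# Global default: used by any Pod whose section does not set its own affinity.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64

datasource:
  # Pod-specific setting: replaces (does not merge with) the global default.
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - zone-a
```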
Ahh right, oops. Sorry, not sure why I didn't see them :D
Also, considering #222, which defines a top-level field that behaves like a set of common annotations, I am wondering if it would be better to have this PR behave the same way? Or to adjust #222 to behave as this PR does?
Otherwise, I guess it's okay since they are different specs :D What do you think?
I figured it made sense for annotations to get merged together, since they are really just metadata tagging on the resources: the annotation attributes are probably shared between things, even if each has its own specific values to add on top, or overrides to apply.
Things like affinities (or anything that relates to node scheduling) are different: if a specific configuration is required, it is probably not something that is intended to be merged or shared with the others. So the default setting is there to allow customization, for example to ensure that all of Cryostat gets scheduled together onto one node, but if there are any more specific settings then those are probably being applied to make the scheduler put that Pod on a different node.
So I think it makes sense either for these specs to behave differently, or else for #222 to work like this one (replace, not merge). This one should not work like #222, IMO.
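As an illustration of that "schedule everything together" default, the top-level affinity could be set to something like the sketch below. The label selector values here are assumptions for the example, not necessarily the labels the chart actually applies to its Pods.

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      # Assumed label; substitute whatever labels the chart puts on its Pods.
      - labelSelector:
          matchLabels:
            app.kubernetes.io/part-of: cryostat
        topologyKey: kubernetes.io/hostname
```

Any Pod that instead needs to land elsewhere would set its own pod-specific affinity, which replaces this default entirely.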
Oh right, that makes sense! In that case, the current way fits better! I think #222 won't have to be adjusted, as merging metadata seems to be the common practice. As long as it is well documented, there won't be any issue^^
Thanks for the explanations! This sort of design situation bugs me at times :D
Related to #220
To test:
helm install cryostat ./charts/cryostat
and ensure everything comes up as usual
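To exercise the new scheduling options specifically, one approach (the values file name and its contents below are only a hypothetical sketch) is to render the chart with an override and check that the affinity shows up in the generated manifests:

```sh
# scheduling-values.yaml is a hypothetical override file, e.g.:
#
#   affinity:
#     nodeAffinity:
#       requiredDuringSchedulingIgnoredDuringExecution:
#         nodeSelectorTerms:
#           - matchExpressions:
#               - key: kubernetes.io/arch
#                 operator: In
#                 values:
#                   - amd64
#
# Render the templates locally and confirm the affinity block is emitted:
helm template cryostat ./charts/cryostat -f scheduling-values.yaml | grep -A 10 'affinity:'

# Or install with the override and verify against the live Deployment(s):
helm install cryostat ./charts/cryostat -f scheduling-values.yaml
kubectl get deployments -o yaml | grep -A 10 'affinity:'
```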