NVLINK Aware Scheduling #214
We've talked about adding attributes that list NVLink connections between GPUs, but have struggled to come up with something that would actually be useful. The challenge is that the only knob we currently have for aligning devices is matching them on a common attribute value. That would be sufficient if each GPU were paired with only one other GPU and users only ever wanted exactly 2 GPUs (and no more), but the moment a GPU has NVLink connections to multiple other GPUs (or users want to request more than just 2 GPUs), this simple matching is no longer expressive enough.

We have been talking about introducing an alternate field for listing constraints in a ResourceClaim that could express richer requirements.
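For concreteness, here is a minimal sketch of what attribute-based alignment looks like in a ResourceClaim, assuming the mechanism referred to is DRA's `matchAttribute` constraint and assuming a hypothetical `gpu.nvidia.com/nvlinkPairId` attribute (the attribute name, `deviceClassName`, and API version are illustrative and depend on the driver and Kubernetes release):

```yaml
# Minimal sketch: request 2 GPUs and require that they share a common
# value for a single (hypothetical) attribute. This can only express
# "both GPUs belong to the same named group", which is why it breaks
# down once a GPU is NVLink-connected to more than one peer.
apiVersion: resource.k8s.io/v1beta1    # API version varies by Kubernetes release
kind: ResourceClaim
metadata:
  name: nvlink-aligned-gpus
spec:
  devices:
    requests:
    - name: gpus
      deviceClassName: gpu.nvidia.com  # DeviceClass published by the NVIDIA DRA driver
      allocationMode: ExactCount
      count: 2
    constraints:
    - requests: ["gpus"]
      # Hypothetical attribute; no such attribute is published today.
      matchAttribute: gpu.nvidia.com/nvlinkPairId
```

A constraint like this only says "all allocated GPUs must report the same value for this attribute", so it cannot describe a GPU that participates in more than one NVLink domain, or a request for more GPUs than a single pair provides.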
When deploying workloads on a K8s cluster where NVLINK connects GPUs within a node, scheduling should let the user guarantee that their pod lands within a single NVLINK domain. With existing scheduling, on a node with 8 NVIDIA GPUs where each pair is connected over NVLINK, I cannot guarantee my pod lands on GPU2 and GPU3 in the event GPU0 is already occupied.
Example of NVLINK pair topology:
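As an illustration, on a hypothetical node with four NVLink pairs, `nvidia-smi topo -m` might report something along these lines (NV2 = connection over two NVLink links, SYS = traversal over PCIe and the SMP interconnect):

```
       GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7
GPU0    X    NV2   SYS   SYS   SYS   SYS   SYS   SYS
GPU1   NV2    X    SYS   SYS   SYS   SYS   SYS   SYS
GPU2   SYS   SYS    X    NV2   SYS   SYS   SYS   SYS
GPU3   SYS   SYS   NV2    X    SYS   SYS   SYS   SYS
GPU4   SYS   SYS   SYS   SYS    X    NV2   SYS   SYS
GPU5   SYS   SYS   SYS   SYS   NV2    X    SYS   SYS
GPU6   SYS   SYS   SYS   SYS   SYS   SYS    X    NV2
GPU7   SYS   SYS   SYS   SYS   SYS   SYS   NV2    X
```

In this layout, if GPU0 is occupied, a 2-GPU request could be satisfied with GPU1 and GPU2, which have no NVLink between them, rather than the intact pair GPU2/GPU3.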
Currently, the GPU Operator uses a best-effort policy, but this does not guarantee NVLINK pairs. With DRA, there is also some prior work that has been tested with MIG setups.