KEP-5027: DRA: admin-controlled device attributes #5034
/cc @KobayashiD27 for the "device priority" use case.
/cc @byako for device health.
```go
	Capacity map[QualifiedName]DeviceCapacity
}

// AttributeNamePriority is a standardized attribute name. Its value must be an integer.
```
Or this?

```suggestion
// AttributeNamePriority is an attribute name defined by Kubernetes. Its value must be an integer.
```
/cc @johnbelamaric
/wg device-management
The scheduler must merge these additional attributes with the ones provided by the DRA drivers. The "kubernetes.io/offline" string attribute contains a free-form explanation of why the device is not currently available. Such a device must be ignored by the scheduler. The "kubernetes.io/priority" integer defines
@eero-t asked in #5027 (comment):

> How should an admin run test workload(s) on a device whose scheduling has been disabled (e.g. for a firmware upgrade), to know whether it can be enabled again (for production workloads)?
> With node taints, one would use a taint toleration for this, but I don't see from the KEP description how a similar thing would be achieved for DRA devices.
This is indeed not possible as described here. How about making it configurable whether an offline device is used?

The "normal" DeviceClass that users should pick for production workloads could have a selector which excludes offline devices.

Then there is a second DeviceClass which doesn't exclude them. There's nothing that would prevent users from using that, but if they do, they do so at their own risk. This is on par with node taints.
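For illustration, a rough sketch of what those two DeviceClasses could look like. The class names, the driver name, and the exact way the presence of the attribute is checked are assumptions made for this example, not something defined by the KEP:

```yaml
# "Normal" class for production workloads: exclude devices that
# have any kubernetes.io/offline attribute set.
apiVersion: resource.k8s.io/v1beta1
kind: DeviceClass
metadata:
  name: gpu-production
spec:
  selectors:
  - cel:
      expression: >-
        device.driver == "gpu.example.com" &&
        !("offline" in device.attributes["kubernetes.io"])
---
# Maintenance/test class: same devices, but without the offline filter.
# Anyone may use it, but they do so at their own risk.
apiVersion: resource.k8s.io/v1beta1
kind: DeviceClass
metadata:
  name: gpu-maintenance
spec:
  selectors:
  - cel:
      expression: device.driver == "gpu.example.com"
```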
Matching all offline devices is not enough, as there can be multiple reasons for being offline, e.g. health and administration => the selection would need to be specific to a given offline reason, and not match if there are other reasons.

(With taints, one could use e.g. a `fw-upgrade` taint and its toleration. While one could still taint whole nodes, that could be rather disruptive, whereas by offlining devices one by one, upgrades would cause only slight service degradation while they are being performed / tested / verified.)
The admin can create a custom DeviceClass with a selector which matches exactly the reason they chose when taking the device offline. The ResourceSliceOverride then has `kubernetes.io/offline: fw-upgrade` and the workload's DeviceClass has `device.attributes["kubernetes.io"].offline == "fw-upgrade"`.
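A sketch of how that could look; the ResourceSliceOverride field names and API versions below are guesses based on this discussion, not the final API:

```yaml
# Admin takes one device offline for a firmware upgrade.
apiVersion: resource.k8s.io/v1alpha3
kind: ResourceSliceOverride
metadata:
  name: gpu-0-fw-upgrade
spec:
  devices:
  - name: gpu-0                 # hypothetical device filter
    attributes:
      kubernetes.io/offline:
        string: fw-upgrade      # reason chosen by the admin
---
# DeviceClass for the admin's test workloads: matches exactly that reason.
apiVersion: resource.k8s.io/v1beta1
kind: DeviceClass
metadata:
  name: gpu-fw-upgrade-test
spec:
  selectors:
  - cel:
      expression: >-
        device.attributes["kubernetes.io"].offline == "fw-upgrade"
```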
But that doesn't cover the case where a manually created ResourceSliceOverride contains such a `kubernetes.io/offline: fw-upgrade` and another, automatically created one has `kubernetes.io/offline: unhealthy`. The admin can make sure that "their" value wins via resourcesliceoverride.spec.rank, but then the `kubernetes.io/offline: unhealthy` gets lost.

We could specify a different merging strategy for this well-known attribute: instead of keeping exactly one entry, the different instances could be numbered, leading to `kubernetes.io/offline: fw-upgrade; kubernetes.io/offline-1: unhealthy`. The CEL expressions become a bit more complex, but it would work.

Yet another alternative is to extend the CEL environment so that `device.attributes["kubernetes.io"].offline` is a list of strings. This might be better than the attribute-name "hack".
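Assuming the list-of-strings variant, a production DeviceClass selector might look roughly like this (a sketch only; whether `offline` would really be surfaced as a CEL list is exactly the open question above):

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: DeviceClass
metadata:
  name: gpu-production
spec:
  selectors:
  - cel:
      # Match only devices with no offline reasons at all,
      # no matter how many overrides contributed one.
      expression: >-
        !("offline" in device.attributes["kubernetes.io"]) ||
        size(device.attributes["kubernetes.io"].offline) == 0
```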
Thanks for the KEP. And sorry to cut into this discussion. I'm curious about the use case of exposing device health via `ResourceSliceOverride`.

In this use case, are there any ideas about how online/offline status changes could affect running workloads? It might be useful for users to introduce device-level tolerations for more flexible control.

Below is an imaginary spec; I know this is just a rough suggestion (it definitely needs deeper consideration):
```yaml
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaim
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com
      # New field
      # - It might be better to introduce taint information on the device side, too?
      # - Should tolerations be defined on the ResourceClass side?
      tolerations:
      - cel:
          expression: 'device.attributes["kubernetes.io"].offline != ""'
        effect: NoExecute           # or NoSchedule
        tolerationSeconds: 30       # only effective with 'NoExecute'
```
Reacting to `offline` on the node for a running workload is currently out of scope, but I can see how it would be useful to do something, even if that means making the ResourceClaim API more complex. It also means that the kubelet needs to become aware of this, because a controller cannot force containers to stop, can it?

We could start without it in 1.33, then add such an API in 1.34 (still as alpha!).
> But devices also get deallocated when their consuming pod(s) are in a known final state where they won't run any containers anymore, so it's not necessary to fully remove a pod to reuse devices. The advantage would be that one can still retrieve logs or inspect the pod object to determine what it did (exit code, termination message).

Got it. That makes sense.

> I just don't see how an external controller can force a pod into that state, so we would have to go the same route as node tainting.

I agree, because there are no such APIs (to make pods reach a final state forcefully), as you stated.

> I think we can define `device.attributes["kubernetes.io"].offline != ""` as a check that, if true, means that the device cannot and/or should not be used. With that definition, not scheduling and evicting running pods seem like the right default behavior if a ResourceClaim doesn't list tolerations.

Yeah. That's simpler than introducing taints.
Here is my pros/cons analysis:

- Option 1: Unhealthiness via device attributes (e.g. `kubernetes.io/offline`):
  - Pros:
    - Simple (just adding a toleration in the ResourceClaim)
  - Cons:
    - Users (`ResourceClaim`) have to be aware of which attributes define device unhealthiness in order to define their tolerations. These should be documented in the DRA driver's documentation.
- Option 2: Unhealthiness via taints (in `ResourceSlice` (by the DRA driver) or `ResourceSliceOverride` (by an admin or external controller)), see the sketch after this list:
  - Pros:
    - Users (`ResourceClaim`) can follow standard taint/toleration semantics
    - Taints can express the default behavior (in case there are no tolerations) per failure/offline mode via `effect` (`NoSchedule | NoExecute | PreferNoSchedule | etc.`), e.g.:
      - `offline=suspect-failure:PreferNoSchedule`
      - `offline=investigation:NoSchedule`
      - `offline=fw-upgrade:NoSchedule` and `offline=fw-upgrade:NoExecute`
      - `offline=hardware-failure:NoSchedule` and `offline=hardware-failure:NoExecute`
  - Cons:
    - Complex
    - Even in this case, users (`ResourceClaim`) have to be aware of which taint keys/values are exposed by the DRA driver in order to define tolerations. That should be documented in the DRA driver's documentation.
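For illustration, a very rough sketch of Option 2. Every field name here is hypothetical; no such taint/toleration API exists in this KEP:

```yaml
# Hypothetical: an admin (or a health controller) taints one device.
apiVersion: resource.k8s.io/v1alpha3
kind: ResourceSliceOverride
metadata:
  name: gpu-0-fw-upgrade
spec:
  devices:
  - name: gpu-0
    taints:                     # hypothetical field
    - key: offline
      value: fw-upgrade
      effect: NoSchedule
---
# Hypothetical: the admin's test workload tolerates that taint.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com
      tolerations:              # hypothetical field
      - key: offline
        operator: Equal
        value: fw-upgrade
        effect: NoSchedule
```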
Hmm. It's actually difficult for me to choose which is better. I personally lean a little bit towards the standard taint/toleration approach, but I worry about the API complexity.

WDYT?
I had already put something into the KEP under alternatives about using fields instead of pre-defined attribute names. The original argument was that it would be a small step from having this override mechanism to standardizing some attributes for specific purposes.

But that argument is starting to break down: for `kubernetes.io/offline` we would already need special merging that combines all values into a list of strings, and one half of the API would be a pre-defined attribute name while the other half would be fields (`ResourceClaim.Spec.Tolerations`).

I'm leaning towards dropping `kubernetes.io/offline` and replacing it with a "proper" API. It still fits into this KEP because it relies on `ResourceSliceOverride`.
> I had already put something into the KEP under alternatives about using fields instead of pre-defined attribute names.

Oh, thanks for the clarification. I found it in the Alternatives section 🙇

> I'm leaning towards dropping kubernetes.io/offline and replacing it with a "proper" API.

👍

> It still fits into this KEP because it relies on ResourceSliceOverride.

Honestly, I think it depends on what kind of "proper" API design will be agreed on. If the agreed API were taint/toleration in ResourceSlice(Override)/ResourceClaim, then I would prefer to do this in a separate KEP to isolate the intention, even though it relies on ResourceSliceOverride, because taint/toleration can work without ResourceSliceOverride. WDYT?
You may be right. But it's tricky to have two separate KEPs in-flight at the same time. What if someone disables DRAAdminControlledDeviceAttributes (this KEP) and enables DRADeviceTaints (some new KEP)? Is that a valid setup? Perhaps... the device taint could be stored in the ResourceSlice, just not in the ResourceSliceOverride, because that's disabled.
Okay, let me try two different KEPs.
I think I'll leave out `kubernetes.io/priority`. The same argument against handling device health with taints here applies to it, too (separate feature!), and with the increased complexity of device taints I don't want to bite off more than I (and my reviewers) can chew in this release cycle.
Instead of ResourceSliceOverride as a separate type, new fields in the ResourceSlice status could be modified by an admin. That has the problem that the ResourceSlice object might get deleted while doing cluster maintenance like a driver update, in which case the admin intent would get lost. A driver would not be able to publish a new ResourceSlice where a device is immediately marked as offline because creating a ResourceSlice strips the status.
JFYI:
@johnbelamaric suggested another alternative here, a drop-in-file style way to override/extend device attributes:

> I also think it could be useful for the driver (actually, the base driver framework that we would prefer all drivers to use) to have a hook to allow VM architects to augment the device attributes published by the driver.
> For example, dropping a file on the node that can tell you which external network each NIC is plumbed to.
> Patrick's KEP gives the cluster admin an opportunity to enhance attributes. That could be sufficient to do what I am saying. But it may also be helpful to have an on-node way of doing this.
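As an illustration of the drop-in idea (purely hypothetical; neither the file location nor the format is defined anywhere), such an on-node file might look like:

```yaml
# Hypothetical drop-in file, e.g. /etc/kubernetes/dra/attributes.d/nic-topology.yaml,
# read by a driver framework and merged into the attributes it publishes.
devices:
- name: nic-0
  attributes:
    example.com/external-network:
      string: fabric-a
- name: nic-1
  attributes:
    example.com/external-network:
      string: fabric-b
```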
If DRA driver authors want to support a way of doing this, then they certainly can. But I don't think we as Kubernetes should standardize and require supporting such a feature. If we want to offer a common API, then this KEP looks like a better approach to me, in particular because accessing the apiserver is easier than creating files on nodes...
> If we want to offer a common API, then this KEP looks like a better approach to me, in particular because accessing the apiserver is easier than creating files on nodes...

I also support the ResourceSliceOverride approach. Although a drop-in file might fit node-local devices, DRA can now support broader device models, e.g. non-node-local devices (i.e. fabric-attached devices).
One-line PR description: DRA: admin-controlled device attributes
Issue link: DRA: admin-controlled device attributes (device health, maintenance, priority) #5027
Other comments: first revision
/cc @johnbelamaric