*** takashin has joined #openstack-placement | 00:24 | |
*** mriedem has quit IRC | 00:44 | |
*** ttsiouts has joined #openstack-placement | 01:00 | |
*** ttsiouts has quit IRC | 01:06 | |
*** Sundar has quit IRC | 02:08 | |
openstackgerrit | Merged openstack/placement master: Resource provider - request group mapping in allocation candidate https://review.opendev.org/657582 | 03:24 |
*** Sundar has joined #openstack-placement | 05:11 | |
*** e0ne has joined #openstack-placement | 05:43 | |
*** belmoreira has joined #openstack-placement | 05:49 | |
*** Sundar has quit IRC | 06:06 | |
*** evrardjp_ is now known as evrardjp | 06:43 | |
*** belmoreira has quit IRC | 06:43 | |
*** ttsiouts has joined #openstack-placement | 07:09 | |
*** e0ne has quit IRC | 07:09 | |
*** ttsiouts has quit IRC | 07:34 | |
*** ttsiouts has joined #openstack-placement | 07:35 | |
*** ttsiouts has quit IRC | 07:39 | |
*** ttsiouts has joined #openstack-placement | 08:02 | |
*** tetsuro has joined #openstack-placement | 08:07 | |
*** takashin has left #openstack-placement | 08:30 | |
*** e0ne has joined #openstack-placement | 08:32 | |
*** tssurya has joined #openstack-placement | 08:40 | |
*** ttsiouts has quit IRC | 09:01 | |
*** ttsiouts has joined #openstack-placement | 09:02 | |
*** ttsiouts has quit IRC | 09:06 | |
*** mugsie_ is now known as mugsie | 09:52 | |
*** cdent has joined #openstack-placement | 10:27 | |
openstackgerrit | Merged openstack/os-traits master: Create trait for NUMA subtree affinity https://review.opendev.org/657898 | 10:47 |
efried | cdent: https://review.opendev.org/663005 Release os-traits 0.14.0 for ^ | 10:52 |
* efried rolls | 10:52 | |
cdent | k | 10:52 |
openstackgerrit | Chris Dent proposed openstack/placement master: perfload with written allocations https://review.opendev.org/660754 | 11:11 |
openstackgerrit | Tetsuro Nakamura proposed openstack/placement master: PoC: DNM: resourceless request https://review.opendev.org/663009 | 11:17 |
*** ttsiouts has joined #openstack-placement | 11:22 | |
*** ttsiouts has quit IRC | 11:33 | |
*** ttsiouts has joined #openstack-placement | 11:34 | |
*** ttsiouts_ has joined #openstack-placement | 11:37 | |
*** ttsiouts has quit IRC | 11:38 | |
*** tetsuro has quit IRC | 11:39 | |
*** cdent has quit IRC | 11:54 | |
*** mugsie is now known as mugsie_ | 12:18 | |
*** mugsie_ is now known as mugsie | 12:18 | |
*** Sundar has joined #openstack-placement | 12:24 | |
*** cdent has joined #openstack-placement | 12:48 | |
efried | Sundar, cdent: o/ | 12:51 |
* cdent waves | 12:51 | |
Sundar | efried: What's up? | 12:52 |
efried | cdent: cf ~1h ago, Sundar wanted to talk through some of the ways in which nested magic will satisfy the "device affinity" use case | 12:52 |
efried | Sundar: This is purely from a "device colocation" standpoint. You would set your model up so your device was a child of the compute (or NUMA node); and then your PFs (or whatever you want to call them) are children of the device RP. | 12:54 |
Sundar | Yes. Before we make specific code changes like https://review.opendev.org/#/c/657419/, it would be good to have a sense of how it maps to all use cases | 12:54 |
efried | So the device RP would actually provide no resources. | 12:54 |
efried | But you would tag it with a trait like HW_DEVICE_ROOT (https://review.opendev.org/662811) | 12:55 |
efried | Now your device profile would list let's say two devices, each in their own request group: | 12:56 |
efried | resources_DEV_1=VF:1 | 12:56 |
efried | ... | 12:56 |
efried | resources_DEV_2=VF:1 | 12:56 |
efried | and then you would add a "resourceless request group" for the device: | 12:56 |
efried | required_DEV_COLOCATION=HW_DEVICE_ROOT | 12:56 |
efried | and then you would add a same_subtree to tie them together: | 12:57 |
efried | same_subtree=_DEV_1,_DEV_2,_DEV_COLOCATION | 12:57 |
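A sketch assembling the fragments above into a single request, assuming the string-suffix syntax and (per the correction later in this log) that same_subtree names each whole suffix, leading underscore included:

    GET /allocation_candidates
        ?resources_DEV_1=VF:1
        &resources_DEV_2=VF:1
        &required_DEV_COLOCATION=HW_DEVICE_ROOT
        &same_subtree=_DEV_1,_DEV_2,_DEV_COLOCATION

Any candidate must then draw both VFs from providers in a subtree rooted at a provider tagged HW_DEVICE_ROOT.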
efried | Sundar: Does that make sense? And satisfy the use case you had in mind? | 12:57 |
Sundar | efried: "required_DEV_COLOCATION=HW_DEVICE_ROOT" -- is this a new clause? Does not look like a trait | 12:58 |
efried | It's just a request group with a (newly-supported) string instead of numeric suffix | 12:58 |
efried | ...and no 'resources' component. | 12:58 |
Sundar | Why not: trait:HW_DEVICE_ROOT=required ? | 12:59 |
efried | that's flavor syntax. | 12:59 |
efried | and also, it's not granular | 12:59 |
efried | we need granular here. | 12:59 |
Sundar | Well, I am thinking from the perspective of flavors and request groups. | 13:00 |
efried | so like, if it weren't for the fact that the suffixes today are dynamically generated to guarantee uniqueness, you could have done the above with | 13:00 |
efried | resources1=VF:1&resources2=VF:1&required3=HW_NIC_ROOT&same_subtree=1,2,3 | 13:00 |
efried | sorry, HW_DEVICE_ROOT | 13:00 |
efried | Sundar: Yes, this example is only addressing one use case: device colocation when the devices are all listed in the same device profile. | 13:01 |
Sundar | I suppose you mean: resources1:VF=1&.... | 13:01 |
efried | eh? Isn't that what I said? | 13:02 |
Sundar | Maybe I am being nitpicky about the syntax. When you switch the punctuation around, I am not sure if it is a new thing | 13:02 |
efried | okay | 13:03 |
Sundar | Instead of required3, one would have expected: trait3:HW_DEVICE_ROOT=required | 13:03 |
efried | I am talking about placement querystring syntax now | 13:03 |
efried | expressing it from a flavor or a device profile is not a challenge | 13:03 |
efried | is that how we're doing granular in flavors? traitX:TRAIT_NAME=required => ?requiredX=TRAIT_NAME ? | 13:04 |
Sundar | OK. That goes to what I am saying. The use case (or usage model or whatever) consists of what is said in the flavors, device profiles and Neutron port RGs. That's what the operator says/sees. | 13:04 |
Sundar | If we start from that, there will be more clarity (at least for me) about how it all ties together | 13:04 |
efried | okay, then I'm going to have to remind myself what the proposed device profile syntax is... | 13:05 |
Sundar | NP, just say it in flavor syntax and I can translate to device profiles. They are very similar | 13:05 |
efried | okay, then I'm going to have to remind myself what the flavor syntax is... | 13:06 |
Sundar | That will help, lol | 13:06 |
efried | Okay, yes, so the flavor syntax today is | 13:10 |
efried | resources${suffix}:${resource_class}=${quantity} ==> ?resources${suffix}=${resource_class}:${quantity} | 13:10 |
efried | trait${suffix}:${trait_name}=required => ?required${suffix}=${trait_name} | 13:10 |
*** ttsiouts_ has quit IRC | 13:10 | |
efried | prior to placement microversion 1.33, ${suffix} is only allowed to be an integer. As of 1.33 it can be an arbitrary string. | 13:10 |
*** ttsiouts has joined #openstack-placement | 13:11 | |
efried | (I'm actually not sure if the flavor side is validating the pattern of ${suffix}, but I kinda think we would have not done that, left it up to placement) | 13:11 |
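A toy Python rendering of the mapping efried just spelled out, illustrative only (the helper name is made up, and this is not nova's actual translation code):

    def extra_specs_to_query(extra_specs):
        # resources<S>:<RC> = <N>      ->  resources<S>=<RC>:<N>
        # trait<S>:<TRAIT>  = required ->  required<S>=<TRAIT>
        params = {}
        for key, value in extra_specs.items():
            if key.startswith('resources'):
                suffix, rc = key[len('resources'):].split(':', 1)
                params.setdefault('resources' + suffix, []).append(rc + ':' + value)
            elif key.startswith('trait') and value == 'required':
                suffix, trait = key[len('trait'):].split(':', 1)
                params.setdefault('required' + suffix, []).append(trait)
        return {k: ','.join(v) for k, v in sorted(params.items())}

    extra_specs_to_query({'resources_G1:VF': '1', 'trait_G3:HW_DEVICE_ROOT': 'required'})
    # -> {'required_G3': 'HW_DEVICE_ROOT', 'resources_G1': 'VF:1'}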
Sundar | Yes, I got that. When stating the use case, we can skip the Placement syntax | 13:11 |
Sundar | I think we'd do: resources1:VF=1; resources2:VF=2; trait3:HW_DEVICE_ROOT=required | 13:12 |
efried | and you'll need a new syntax for same_subtree | 13:12 |
Sundar | Does that make sense (except for numeric suffixes)? | 13:12 |
Sundar | Yes, that's my next q. How do you say same_subtree in flavors? | 13:13 |
efried | Not sure I see a reason it shouldn't be: same_subtree=1,2,3 | 13:13 |
Sundar | Is that at the same level as group_policy, syntax-wise? | 13:13 |
efried | never really knew why we had to bass-ackwards the syntax in flavors, but here I don't even see a way you could reasonably do that. | 13:13 |
efried | yes | 13:13 |
efried | except it can be repeated. | 13:14 |
efried | (without suffixes) | 13:14 |
efried | so I guess for that reason, we would need to figure out a different way to express it in flavors :( :( :( | 13:14 |
efried | maybe that's why we did it that way. Unique keys. | 13:14 |
efried | same_subtree=1,2,3:4,5,6 | 13:14 |
efried | anyway, that's a solvable (if unpretty) problem. | 13:15 |
Sundar | Ha. Why not: resources_G1:VF=1; resources_G2:VF=2; trait_G3:.... | 13:15 |
Sundar | Then: same_subtree=G1,G2,G3 | 13:15 |
*** ttsiouts has quit IRC | 13:15 | |
efried | yes, exactly | 13:15 |
efried | except same_subtree=_G1,_G2,_G3 (the whole suffix required) | 13:15 |
efried | but that's the idea. | 13:16 |
efried | Point is, placement will allow you to use same_subtree more than once in the same querystring. | 13:16 |
efried | So you can have multiple distinct affinity groupings. | 13:16 |
efried | But doing that in flavor, where key has to be unique, will require some syntactic finagling. | 13:16 |
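Concretely, a hypothetical flavor using Sundar's named suffixes might carry:

    resources_G1:VF=1
    resources_G2:VF=1
    trait_G3:HW_DEVICE_ROOT=required
    same_subtree=_G1,_G2,_G3

The open question is how a flat key/value store expresses two distinct groupings, since the same_subtree key can appear only once; hence efried's colon-delimited strawman, which with string suffixes would read same_subtree=_G1,_G2,_G3:_G4,_G5,_G6.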
Sundar | Good so far. Now, I should add the same syntax to device profiles, so an operator can say that for Cyborg devices | 13:16 |
efried | Yes. But not in the current spec IMO | 13:17 |
*** mriedem has joined #openstack-placement | 13:17 | |
Sundar | You mean add a supplemental spec to the current device profiles spec in Cyborg? | 13:17 |
efried | "support for affinity/colocation is out of scope and will be addressed later" - again IMO | 13:17 |
efried | not on the cyborg side, on the nova side. But yes, a supplemental spec | 13:18 |
efried | because it has a dependency on nested magic, which your current spec does not IIUC. | 13:18 |
Sundar | Agreed, the current Nova-Cyborg spec will be left as is. No nested magic there | 13:19 |
Sundar | Anyway, back to the feature | 13:19 |
Sundar | If the device profile (or Neutron port RG) wants to express a NUMA dependency, how do we name the dependee group? | 13:20 |
efried | I see what you're saying about the cyborg side - yes, assuming you're doing pattern/schema validation of the device profile syntax, you'll need to tweak that on the cyborg side. | 13:20 |
efried | but that will be very minor | 13:20 |
Sundar | Yes | 13:20 |
efried | and I suppose somewhere you'll have to state the tree model cyborg will be pushing to placement. | 13:20 |
efried | Okay, so now you're saying you want your *devices* to have NUMA affinity with something coming from the *flavor*? | 13:21 |
efried | or with each other? | 13:21 |
Sundar | Yes, devices may have NUMA affinity with some resource expressed in a flavor RG | 13:21 |
Sundar | IOW, request groups are in different subsystems -- flavors, neutron ports, device profiles | 13:22 |
Sundar | Yet, latter 2 may want to refer to RGs in the flavor for NUMA affinity | 13:22 |
efried | Right, so here's where it'll get tricky. | 13:23 |
efried | Because the components are coming from different places, we'll have to agree on a naming convention for the suffixes | 13:24 |
Sundar | Exactly | 13:24 |
cdent | efried, tetsuro: interesting bug that I feel into while doing other stuff: https://storyboard.openstack.org/#!/story/2005822 | 13:24 |
cdent | fell, but feel works too | 13:24 |
efried | We can't dictate that they be exact values; it'll have to be substring of some kind. | 13:24 |
efried | ...maybe | 13:25 |
*** ttsiouts has joined #openstack-placement | 13:26 | |
Sundar | Yes. We don't have to solve that right now -- just identifying the scope of what I see as missing | 13:26 |
efried | we start getting pretty tightly coupled unfortunately. Like, a device profile that cites numa affinity won't be able to be used with a flavor that doesn't. | 13:26 |
cdent | that seems quite unfortunate | 13:27 |
Sundar | But the operator sets the device profile in the flavor, so anyway there is a linkage | 13:27 |
cdent | sorry meant to put a ? on the end of that | 13:27 |
Sundar | efried: On a different note -- Re. "state the tree model cyborg will be pushing to placement", no trees: It is still a bunch of RGs, just as if we wrote all that in the flavor. | 13:28 |
Sundar | Sorry, I have a call in a few min | 13:28 |
Sundar | But I hope I managed to convey where the disconnects are | 13:28 |
efried | I'm talking about the resource provider model, not the GET /a_c or device profile | 13:28 |
efried | cyborg is responsible for creating the providers in placement. | 13:29 |
efried | To support this colocation use case, it will have to create a (sub)tree structure | 13:29 |
Sundar | Yes, sure. Cyborg should create the tree in Placement appropriately | 13:29 |
efried | "convey where the disconnects are" - I'm not sure what you're getting at here. | 13:29 |
efried | We've at least superficially thought through all the use cases we know about and figured out how they can be satisfied at least on the placement call side. | 13:30 |
efried | We haven't designed the nova side in detail | 13:30 |
efried | but we have a sense for how it would work. | 13:30 |
efried | which fed the placement design | 13:30 |
efried | are we done yet? Of course not. | 13:31 |
cdent | and that's fine, we can adapt | 13:31 |
*** cdent has quit IRC | 13:38 | |
*** cdent has joined #openstack-placement | 14:02 | |
efried | cdent: Perhaps we should discuss group_policy=isolate (in context of https://review.opendev.org/#/c/662191/3/doc/source/specs/train/approved/2005575-nested-magic-1.rst@266) here in IRC for a bit. | 14:12 |
*** mriedem is now known as mriedem_away | 14:14 | |
cdent | if you like, but if there's some concept that's difficult to digest, writing it down would be better for more people | 14:14 |
efried | cdent: Yes, I assumed we could summarize and refer to this chat | 14:17 |
cdent | I guess what I'm trying to say is: if it is best to chat about it for you to be able to convey the information, then yes, let's, but my preference would be to use asynchronous modes | 14:18 |
cdent | (in all cases) | 14:18 |
efried | The point is that the real world use cases mriedem_drillface identified are broken by group_policy=isolate | 14:19 |
cdent | if you haven't seen this yet, there's some stuff about that here: https://anticdent.org/openstack-denver-summit-reflection.html | 14:19 |
cdent | right, but existing use cases are not | 14:19 |
cdent | it's okay to have two ways | 14:19 |
efried | yeah, I know your preference for async and agree with it to a certain extent | 14:19 |
efried | "existing use cases" - sorry, perhaps we should clarify which is what. AFAIK there are no existing use cases other than the ones mriedem_novocaine identified, for which he had to hack up something different because we didn't have this yet. | 14:20 |
cdent | the two i mentioned don't count? | 14:21 |
efried | you mean VCPU:1&VCPU:1&isolate? Not really, no. What are those useful for today? | 14:21 |
cdent | if you have numa inventory and all you care about is that you get two different nodes (which I understood as _the_ reason isolate was created), that provides it | 14:22 |
cdent | in any case: I'm not saying we should extend the syntax, I'm simply saying we can keep the existing because of two reasons: a) it appears to be useful for that concept (which is useful beyond nova, conceptually), b) it has to stay around anyway because of microversions | 14:23 |
cdent | s/should/shouldn't/ # sigh | 14:23 |
efried | Yeah, I think what I'm getting at is, as soon as we have >1.33, group_policy=isolate is going to cause more problems than it solves. If you want to address the above simple use case, where you aren't worrying about the anti-affinity policy bleeding into other request groups (like, where you aren't listing multiple devices and/or ports), you can use 1.33 or lower and group_policy=isolate. | 14:25 |
efried | put another way: "<1.33 is for simple shit. >=1.33 is for complicated shit." And a single global group_policy is "simple". | 14:26 |
cdent | that's essentially what I said too | 14:26 |
cdent | the difference might be that i wasn't thinking "turn off isolate after 1.33" | 14:26 |
cdent | instead: "don't use it most of the time after 1.33" | 14:27 |
cdent | (with the added subtext of "don't use >= 1.33 unless you are a crazy person who hates clouds") | 14:27 |
efried | so perhaps it's time to quit requiring it and default it to none. | 14:27 |
cdent | perhaps it is | 14:30 |
cdent | I don't see an issue with that. My statement: was we need to keep this functionality, that's all | 14:31 |
efried | Keep it as in not remove it from <1.33, agree of course. | 14:32 |
cdent | there's a difference between what you're saying here: "quit requiring it and default it to none" and what you're saying with "we should just get rid of group_policy at this microversion" | 14:33 |
efried | yes. I would prefer the latter, but the former is acceptable. | 14:34 |
efried | though weak | 14:34 |
cdent | I have mixed feelings on this | 14:34 |
cdent | because of my mixed feelings about microversions | 14:34 |
cdent | most of the time I would prefer to not have to think about microversions | 14:34 |
cdent | when we remove functionality at a microversion, I do | 14:35 |
efried | Okay, but at some point we're going to need to do something richer so we can express multiple affinity and anti-affinity subgroups. Do you agree that we'll need to deprecate/remove group_policy at that time? | 14:36 |
cdent | there's a third option here which is: don't mix group_policy with <some thing that got added at X.YY> | 14:36 |
efried | At the very least it would have to be mutually exclusive with ... yeah, that. | 14:37 |
cdent | I think mutually exclusive is better than removing | 14:37 |
efried | Okay, in this spec, let's move forward with "make group_policy optional, defaulting to none", you okay with that? (More below about the specific corner case) | 14:38 |
cdent | (also I wish we didn't have to express all this stuff, but you know, I have to accept it) | 14:38 |
cdent | i am okay with that | 14:39 |
cdent | not that I'm clear on what it really gains except making existing code need to change | 14:39 |
cdent | callers which are already writing group_policy have the option not to do so? | 14:40 |
efried | For the corner case in question, we could: | 14:40 |
efried | - Log a warning, but let it play out, which will likely result in no candidates - or possibly some really weird ones - but that's on you, cause you were warned. | 14:40 |
efried | or | 14:40 |
efried | - 400 | 14:40 |
cdent | or are we more concerned about code that doesn't exist yet? | 14:40 |
efried | code that doesn't exist yet | 14:40 |
efried | that will use the microversion that doesn't exist yet. | 14:40 |
efried | they shouldn't have to say group_policy=none if that's the only thing that makes sense. | 14:40 |
cdent | you make it sound like we have client code that _first_ branches on microversion and then does everything else, which is not the case | 14:41 |
cdent | but let's not get into that. I'm happy to accept the path described above | 14:41 |
efried | Which, warn or 400? | 14:42 |
cdent | what is the corner case you're thinking about? for the most part a warning log is something a client can never be guaranteed of seeing, so something informative is almost always better | 14:42 |
efried | yeah | 14:42 |
cdent | "you make it sound..." was in response to "code that doesn't exist yet" | 14:43 |
efried | the corner case is the one mriedem_nowspit was trying to solve when he was hacking up request filters to add traits to the request based on some configuration. | 14:43 |
efried | He ended up adding the trait to the instance's flavor, which is hacky | 14:44 |
efried | what he would have liked to do is add a unique request group that just identifies the trait | 14:44 |
efried | so he didn't have to mess with the flavor | 14:44 |
efried | but if group_policy=isolate was set (somewhere), that would result in not getting any candidates back IF the trait in question happened to be on the compute node resource provider (which it would have been) and the VCPU was in a different request group (which it would have been). | 14:45 |
efried | I can't remember whether group_policy=isolate forces the resources from the unsuffixed group to not overlap with other groups as well... | 14:46 |
* cdent tries to digest | 14:46 | |
efried | (looked it up, yes, they're allowed to overlap. So in this case, *assuming* the VCPU/MEMORY_MB ended up in the unnumbered group, it probably would have worked.) | 14:48 |
cdent | can you translate "but if..." into query params please | 14:49 |
efried | ?resources=VCPU:1,MEMORY_MB:1024&required2=COMPUTE_SUPPORTS_MULTIATTACH&group_policy=isolate ==> I think this would work, because group_policy doesn't apply to the unnumbered group's interaction with numbered groups. | 14:52 |
efried | ?resources1=VCPU:1,MEMORY_MB:1024&required2=COMPUTE_SUPPORTS_MULTIATTACH&group_policy=isolate ==> This would *not* work because resources1 and required2 would have to be satisfied by different providers, but VCPU+MEMORY_MB+COMPUTE_SUPPORTS_MULTIATTACH are all on the compute RP. | 14:52 |
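A toy illustration of the isolate semantics in those two queries (a sketch, not placement's real implementation): isolate forces the suffixed groups onto pairwise-distinct providers, while the unsuffixed group is exempt.

    def satisfies_isolate(group_to_provider):
        # Map of group suffix ('' for the unsuffixed group) to the provider
        # that satisfied it; isolate only constrains the suffixed groups.
        suffixed = [rp for suffix, rp in group_to_provider.items() if suffix]
        return len(suffixed) == len(set(suffixed))

    satisfies_isolate({'': 'compute_rp', '2': 'compute_rp'})   # True: first query
    satisfies_isolate({'1': 'compute_rp', '2': 'compute_rp'})  # False: second query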
cdent | and the latter is something you might want to express in a mixed numa/non-numa environment? | 14:54 |
cdent | because, to me, neither of those queries are things I would expect to see | 14:54 |
cdent | in the first one I would not use group_policy and nor would I use required2 | 14:55 |
cdent | (just required) | 14:55 |
cdent | and in the second I would target numa or not numa. It would work if the trait was on the compute-node and the inventory on numa. and it would _not_ work to target a non-numa host, which is the desired outcome | 14:56 |
cdent | (with a query of that form) | 14:56 |
cdent | just because someone can write grawlix when talking to placement doesn't mean we should accept it | 14:58 |
efried | Yeah, so the original desired solution in mriedem_temporarysunglasses's case was to do as you say, add the trait to the unnumbered group. | 14:58 |
efried | That turned out to be hard for mechanical reasons in nova, not because it wouldn't have been the right thing to do in this case. | 14:58 |
* cdent nods | 14:58 | |
efried | and assuming we support resourceless unsuffixed request group, it would still work - assuming the mechanical problem can be solved - in the general case where we can't know for sure that there will be any resources requested in unsuffixed groups. | 14:59 |
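So the resourceless-unsuffixed variant of that request might look like the following (hypothetical, assuming that support lands):

    ?required=COMPUTE_SUPPORTS_MULTIATTACH
    &resources1=VCPU:1,MEMORY_MB:1024
    &group_policy=isolate

The unsuffixed group carries only the trait, so isolate never forces it onto a different provider from whichever group supplies the compute resources.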
cdent | (before I forget to mention it and I time out, I'm going to be gone tomorrow and thursday for a funeral, but will back with the pupdate on friday) | 14:59 |
efried | ight | 15:00 |
cdent | right, and resourceless looks like a runner, yes? | 15:00 |
efried | ? | 15:00 |
efried | you mean we want to do it? Yes, definitely | 15:00 |
cdent | that is: it's not an assumption, we're going to support it | 15:00 |
cdent | jinx | 15:01 |
efried | so where does this get us? "group_policy=isolate will behave weirdly but correctly if you use it in the same breath with a suffixed resourceless request group - deal with it" ? | 15:01 |
cdent | weirdly according to who? from what I can tell it's doing what it says it will. | 15:02 |
efried | It will do what it says, which may or may not be what you mean | 15:02 |
efried | yeah | 15:02 |
cdent | I think this supports what you were saying before: we can make it default to none if it is not there and _if you choose to use it with isolate_ be aware that we've handed you some rope | 15:04 |
cdent | I think that's okay: most of the stuff beyond simple required and resources is rope | 15:04 |
efried | Corollary: "We will not make placement behave strangely/inconsistently to work around a weird code setup in nova that makes it hard to add shit to the unsuffixed group" | 15:04 |
*** ttsiouts has quit IRC | 15:04 | |
cdent | you can simplify that to | 15:04 |
cdent | We will not make placement behave strangely/inconsistently to work around nova | 15:05 |
*** ttsiouts has joined #openstack-placement | 15:05 | |
efried | or $consumer | 15:05 |
cdent | indeed | 15:05 |
cdent | but thus far nova is the one that needs to go to jail the most | 15:05 |
efried | I was forgetting that there was another solution to matt's case | 15:05 |
cdent | or let's make it even simpler | 15:06 |
cdent | Consistency is important to placement | 15:06 |
cdent | (with the subtopics of: "cuz, damn, it is already pretty hard to think about") | 15:06 |
efried | (I'm not actually planning to write any of those things down into like a Tao Te Ching of Placement, just useful to put them in my head while we talk about this.) | 15:07 |
cdent | btw: "consistency is important..." is pretty much what I was trying to say on the review where I said "That's a bit of a logic flaw ..." | 15:08 |
cdent | a Tao of Placement is a good idea | 15:08 |
cdent | in our copious free time | 15:08 |
*** ttsiouts has quit IRC | 15:09 | |
efried | you've mostly convinced me that it's not worth changing group_policy's required-ness. | 15:10 |
efried | except that it's a thing I've hated since the beginning. | 15:10 |
cdent | sometimes we just have to eat that. there are so many things that I've hated since the beginning. | 15:10 |
cdent | so | 15:10 |
cdent | many | 15:10 |
cdent | (including group_policy) | 15:16 |
*** e0ne has quit IRC | 15:32 | |
efried | cdent: See if I captured/summarized accurately if you please: https://review.opendev.org/#/c/662191/3/doc/source/specs/train/approved/2005575-nested-magic-1.rst@266 | 15:46 |
cdent | will do in a mo, in debug zone at the mo | 15:46 |
*** Sundar has quit IRC | 15:57 | |
*** cfriesen has joined #openstack-placement | 16:08 | |
cfriesen | when placement is using the nova_api DB, where are the "resource class IDs" defined? table "resource_classes" is empty, but I see "resource_class_id" used in the "inventories" and "allocations" tables. | 16:09 |
cdent | cfriesen: if you're back a few versions, the id comes from the index of an enum, defined either in a file called rc_fields, or if you're really far back, a field in the fields.py down in the objects hierarchy | 16:11 |
cfriesen | cdent: this would be Pike | 16:11 |
cdent | cfriesen: yeah, nova/objects/fields.py | 16:12 |
cfriesen | cdent: found it, thanks. | 16:13 |
cdent | and there's a cache defined in nova/db/sqlalchemy/resource_class_cache | 16:13 |
cdent | which sort of merges custom resource class ids and the ids from the enum | 16:14 |
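Roughly, the Pike-era scheme cdent describes (approximate, reconstructed from the description above rather than from the exact nova code):

    # Standard resource class ids are positional: the resource_class_id
    # stored in the inventories/allocations tables is the class's index
    # in the STANDARD enum in nova/objects/fields.py (first entries shown).
    STANDARD = ('VCPU', 'MEMORY_MB', 'DISK_GB', 'PCI_DEVICE', 'SRIOV_NET_VF')
    rc_id = STANDARD.index('VCPU')   # 0

Custom classes get ids from the resource_classes table, and the cache merges the two id spaces so they do not collide.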
cdent | efried: yeah, that's a good summary | 16:16 |
efried | k, thx | 16:16 |
*** mriedem_away is now known as mriedem | 16:20 | |
cdent | efried: if you have a moment to gaze at https://storyboard.openstack.org/#!/story/2005822 you might have an insight on where the problem lies and perhaps more importantly what the actual desired result is. I've got to dash to dinner with my folks, but will be back a bit later for a bit more work before skipping the next two days. if you've got no clue, ping tetsuro, perhaps | 16:27 |
*** cdent has quit IRC | 16:29 | |
*** tssurya has quit IRC | 17:38 | |
*** amodi has quit IRC | 18:23 | |
*** Sundar has joined #openstack-placement | 18:33 | |
*** e0ne has joined #openstack-placement | 18:54 | |
*** e0ne has quit IRC | 18:55 | |
*** cdent has joined #openstack-placement | 19:20 | |
*** Sundar has quit IRC | 19:23 | |
*** e0ne has joined #openstack-placement | 19:29 | |
*** e0ne has quit IRC | 19:30 | |
*** altlogbot_1 has joined #openstack-placement | 19:40 | |
*** Sundar has joined #openstack-placement | 19:50 | |
*** e0ne has joined #openstack-placement | 20:23 | |
openstackgerrit | Chris Dent proposed openstack/placement master: Stabilize AllocationRequest hash https://review.opendev.org/663137 | 20:34 |
*** e0ne has quit IRC | 20:41 | |
cdent | efried, mriedem : that's a fix for the bug discussed earlier. It's a fix as is, but I've left some suggestions on the patch for how to make it deluxe if people feel like they want to before I'm back on friday | 20:42 |
*** e0ne has joined #openstack-placement | 20:42 | |
*** e0ne has quit IRC | 20:43 | |
mriedem | i didn't pay attention to any earlier bug discussion | 20:43 |
mriedem | but i reckon i can look at this regardless | 20:43 |
mriedem | unless it involves NRP NUMA GPU TRAIT CITY | 20:43 |
cdent | it's a python problem, not a concept problem | 20:44 |
cdent | which is rare around these parts lately | 20:44 |
cdent | sort of refreshing really | 20:44 |
mriedem | heh right | 20:44 |
mriedem | "can we just get back to slice and index error bugs please" | 20:44 |
mriedem | oh i see py27 hashseed issues | 20:45 |
mriedem | fun | 20:45 |
cdent | not really hashseed, but similar | 20:45 |
mriedem | cdent: btw, are you going to a trump rally while he's somewhat local or something, is that why you'll be gone? | 20:46 |
mriedem | i can only assume it is | 20:46 |
cdent | heh. now I'm going to make you feel bad: I'm going to a funeral | 20:46 |
efried | I was going to say it's roughly the same thing, but thought that might be in poor taste. | 20:47 |
cdent | but if there was a trash trump rally nearby I'd totally hit that | 20:47 |
cdent | efried++ | 20:47 |
*** dklyle has quit IRC | 20:49 | |
*** dklyle has joined #openstack-placement | 20:49 | |
cdent | anyway, that's it, see yas | 20:55 |
*** cdent has quit IRC | 20:55 | |
mriedem | well i threw a question in there but don't feel qualified to vote on it | 20:58 |
*** mriedem is now known as mriedem_away | 21:22 | |
*** artom has quit IRC | 22:50 | |
*** takashin has joined #openstack-placement | 23:54 |