*** openstackgerrit has quit IRC | 04:52 | |
*** openstackgerrit has joined #openstack-placement | 04:57 | |
*** e0ne has joined #openstack-placement | 05:34 | |
*** e0ne has quit IRC | 05:50 | |
*** e0ne has joined #openstack-placement | 05:53 | |
*** e0ne has quit IRC | 06:12 | |
*** gibi is now known as giblet | 06:42 | |
*** helenafm has joined #openstack-placement | 07:18 | |
*** bauzas is now known as PapaOurs | 07:39 | |
giblet | efried, jaypipes: thank you for the reviews and approvals. :) Regarding the rest of the use-nested-allocation-candidates bp I put the patches in the runway queue but did not put them directly in the empty runway slot as I need one or two workdays to focus on some slide making tasks. Still, I will try to respond to the reviews you made | 07:41 |
giblet | jaypipes, efried: as https://review.openstack.org/#/c/585672 merged, the happy path works so we already unblocked reshape + vgpu for PapaOurs | 07:42 |
PapaOurs | \o/ | 07:43 |
giblet | jaypipes, efried: the rest of the open patches are test coverage and not-so-happy-path bugfixing | 07:43 |
PapaOurs | giblet: I'll rebase my reshaper change then | 07:43 |
giblet | PapaOurs: nothing better on Friday morning than a nice clean rebase with a cup of good coffee | 07:45 |
PapaOurs | I already have the coffee :) | 07:45 |
*** e0ne has joined #openstack-placement | 07:49 | |
*** ttsiouts has joined #openstack-placement | 07:59 | |
*** tssurya has joined #openstack-placement | 07:59 | |
PapaOurs | giblet: can I ask you a dummy question ? | 07:59 |
*** takashin has left #openstack-placement | 08:00 | |
*** ttsiouts has quit IRC | 08:02 | |
*** ttsiouts has joined #openstack-placement | 08:03 | |
giblet | PapaOurs: sure go ahead | 08:05 |
giblet | (there is no such thing as a dummy question) | 08:05 |
PapaOurs | giblet: when we call a-c for getting a list of potential candidates | 08:05 |
PapaOurs | giblet: then we PUT allocations | 08:06 |
PapaOurs | giblet: I guess the allocations are then fine because even if we have different RPs, it works | 08:06 |
*** ttsiouts has quit IRC | 08:07 | |
giblet | PapaOurs: what do you mean by different RPs? the RPs we are allocating from are in the allocation candidate response | 08:07 |
PapaOurs | ok so we're cool | 08:07 |
PapaOurs | giblet: I just wonder which specific RPs we pass down to the filters | 08:07 |
giblet | PapaOurs: the filter scheduler gets both the allocation requests (from the candidates) and the provider summaries in https://github.com/openstack/nova/blob/122702a5a8aaf3231f0ee416ba9bc6935547f0ab/nova/scheduler/filter_scheduler.py#L48-L50 | 08:11 |
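A minimal sketch of the two inputs giblet links to, and the per-RP lookup table the scheduler builds from them (all names and UUIDs are illustrative, not the actual nova data):

```python
# Hypothetical, simplified shapes of the data the filter scheduler receives
# from the GET /allocation_candidates response.

# One allocation request per candidate; each may span several providers.
alloc_reqs = [
    {"allocations": {
        "compute-root-rp-uuid": {"resources": {"VCPU": 2, "MEMORY_MB": 2048}},
    }},
]

# Provider summaries cover every provider involved in any candidate.
provider_summaries = {
    "compute-root-rp-uuid": {"resources": {"VCPU": {"capacity": 16, "used": 4}}},
}

# The scheduler indexes the candidates by resource provider UUID so it can
# later map a chosen host back to its allocation requests.
alloc_reqs_by_rp_uuid = {}
for ar in alloc_reqs:
    for rp_uuid in ar["allocations"]:
        alloc_reqs_by_rp_uuid.setdefault(rp_uuid, []).append(ar)
```

This keyed-by-RP-UUID table is the structure the rest of the discussion revolves around.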
PapaOurs | giblet: sorry, I'm just discussing on a separate internal issue downstream as of now, so I'm a bit on and off | 08:12 |
PapaOurs | giblet: lemme clarify | 08:12 |
PapaOurs | giblet: previously (pre-nested world), the allocation candidate was a single RP UUID right? | 08:12 |
PapaOurs | giblet: so when it was chosen, we were just looking at the corresponding host | 08:13 |
PapaOurs | lemme find the code | 08:13 |
giblet | PapaOurs: if we ignore the sharing RP case which was not really supported in nova then yes | 08:13 |
giblet | PapaOurs: yeah, the filters are still filtering hosts as far as I know | 08:13 |
PapaOurs | giblet: I'm talking of https://github.com/openstack/nova/blob/122702a5a8aaf3231f0ee416ba9bc6935547f0ab/nova/scheduler/filter_scheduler.py#L142 | 08:14 |
*** ttsiouts has joined #openstack-placement | 08:15 | |
PapaOurs | giblet: yeah, so https://github.com/openstack/nova/blob/122702a5a8aaf3231f0ee416ba9bc6935547f0ab/nova/scheduler/filter_scheduler.py#L481 is concerning me | 08:17 |
giblet | PapaOurs: yeah, host states are gathered by the RP uuids in the a_c response | 08:17 |
giblet | PapaOurs: yeah, that assumption does not hold for nested RP trees any more | 08:17 |
PapaOurs | giblet: because we assume that an RP UUID from the list of provider_summaries is necessarily a compute node | 08:17 |
PapaOurs | giblet: which is incorrect in terms of nested RPs with VGPU resource classes on a nested inventory | 08:18 |
giblet | PapaOurs: let me play with the functional env to see why it does not fail hard when we have a non compute RP in the a_c response | 08:18 |
PapaOurs | giblet: I think that since we only check classic resources that are not on a child RP, we're fine | 08:18 |
PapaOurs | I mean, I'm fine with us having +2d the main change, but that's not fully functional yet per my gut | 08:19 |
giblet | PapaOurs: but I have tests that have CUSTOM_MAGIC on a nested RP | 08:19 |
giblet | PapaOurs: and that seems to work | 08:19 |
PapaOurs | giblet: that's strange then | 08:19 |
PapaOurs | because when I look https://developer.openstack.org/api-ref/placement/#list-allocation-candidates | 08:19 |
giblet | PapaOurs: yeah, need to trace those tests a bit | 08:19 |
PapaOurs | I can see provider summaries for RPs which aren't necessarily root RPs | 08:20 |
PapaOurs | giblet: any idea where I could comment this in the series ? | 08:20 |
giblet | PapaOurs: you can comment on the test that it should fail based on the code you saw | 08:21 |
giblet | PapaOurs: here is the FakeDriver reporting nested tree https://review.openstack.org/#/c/604084/7/nova/virt/fake.py@686 | 08:21 |
giblet | PapaOurs: and here is the test setting up a flavor with resource request from the child RP | 08:22 |
giblet | PapaOurs: https://review.openstack.org/#/c/604084/7/nova/tests/functional/test_servers.py@4869 | 08:22 |
giblet | PapaOurs: basically the whole ServerMovingTestsWithNestedResourceRequests test class should fail on scheduling based on the assumption you found in the code above | 08:23 |
PapaOurs | giblet: yeah, even a classic boot request | 08:23 |
giblet | PapaOurs: exactly | 08:23 |
giblet | giblet: so I will go and run those tests with some extra tracing in the scheduler | 08:23 |
PapaOurs | giblet: you know what ? I feel I understand why it fails | 08:28 |
PapaOurs | giblet: well, technically it doesn't fail | 08:28 |
PapaOurs | giblet: it's just that we don't find the corresponding compute node UUID | 08:28 |
giblet | PapaOurs: do you mean that when we gather the computes by uuid and we have non-compute uuids in the list then the SQL still returns the matching computes without complaining? https://github.com/openstack/nova/blob/235d03ca95aa656957ac13e29ebb3b7515cdba8a/nova/objects/compute_node.py#L446 | 08:31 |
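A toy illustration of the IN-clause semantics giblet is asking about (hypothetical data, not the actual ComputeNodeList code): a query like `SELECT ... WHERE uuid IN (:uuids)` returns only the rows that match and silently ignores UUIDs with no matching row.

```python
# Known compute nodes, keyed by uuid (illustrative values).
compute_nodes = {"compute-1-uuid": "host1", "compute-2-uuid": "host2"}

def get_computes_by_uuids(uuids):
    # Mirrors the SQL IN-clause behaviour: unknown UUIDs (e.g. child RPs
    # mixed into the list) are dropped without any error being raised.
    return {u: compute_nodes[u] for u in uuids if u in compute_nodes}

result = get_computes_by_uuids(["compute-1-uuid", "child-rp-uuid"])
```

This is why feeding child RP UUIDs into the host-state lookup does not fail hard; the extras are simply not matched.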
PapaOurs | giblet: I need to look at the exact code we have for gathering the host states | 08:32 |
PapaOurs | giblet: but I imagine a crazy problem | 08:32 |
PapaOurs | giblet: with aggregates | 08:32 |
PapaOurs | giblet: if two providers can fit the request but are on two distinct hosts, we should in theory check those two hosts by the filters | 08:33 |
PapaOurs | giblet: don't take it the hard way, I just have put a -2 on the change | 08:34 |
PapaOurs | giblet: it's just for making sure we correctly understand the implications | 08:34 |
giblet | PapaOurs: no hard feelings. I also want to understand why it works | 08:34 |
PapaOurs | giblet: I'll remove it if we consider this concern as a non-issue | 08:34 |
giblet | PapaOurs: but if two RPs (on two distinct hosts) can fit the requests then the scheduler filter will have two distinct host state that it feeds to the filters | 08:36 |
giblet | PapaOurs: except if the two RPs are not compute RPS | 08:36 |
giblet | PapaOurs: but child RPs | 08:36 |
PapaOurs | giblet: what if one of the two RPs is a child RP ? | 08:36 |
PapaOurs | yeah | 08:36 |
giblet | PapaOurs: BUT in practice in nova every candidate will contain at least the compute RP and now maybe additional child RPs | 08:37 |
PapaOurs | giblet: can we easily get the tree from the scheduler perspective ? | 08:37 |
giblet | PapaOurs: we cannot have a candidate that only contains a single child RP | 08:37 |
PapaOurs | giblet: what if I'm only asking for resources:VGPU=1 ? | 08:37 |
PapaOurs | and I stupidely don't care about CPU, DISK and RAM ? | 08:38 |
giblet | PapaOurs: your flavor will have vcpu and memory_mb in it | 08:38 |
PapaOurs | or even worse, what if I have a NUMA topology with CPU and RAM on a child RP, and DISK on a shared RP ? | 08:38 |
giblet | PapaOurs: yeah NUMA will complicate this | 08:38 |
PapaOurs | giblet: yeah, you're right, my example is bad | 08:38 |
PapaOurs | giblet: so, my take on that is | 08:38 |
giblet | PapaOurs: there was a concept mentioned couple of times in placement "anchoring RP" | 08:39 |
giblet | PapaOurs: I feel it is related | 08:39 |
PapaOurs | 1/ I just think we silently only return compute nodes, which is okay for now | 08:39 |
PapaOurs | 2/ but that could become a problem once we model NUMA | 08:39 |
PapaOurs | if you have a resource that's on the root RP, you're safe | 08:40 |
PapaOurs | but if you only ask for allocations that are on child RPs, then you could have 0 potential hosts to filter | 08:40 |
giblet | PapaOurs: yeah I agree with your points | 08:41 |
PapaOurs | I just need to doublecheck 1/ | 08:42 |
giblet | PapaOurs: I have to dig a bit but I think the problem was discussed before from placement perspective | 08:42 |
*** cdent has joined #openstack-placement | 08:45 | |
*** tetsuro_ has quit IRC | 08:45 | |
PapaOurs | giblet: confirmed that 1/ is harmless | 08:45 |
giblet | PapaOurs: yeah I've just run the test and observed that even if we provide not just compute uuids to the get_host_states_by_uuids() call, it still returns the host states for the computes and ignores the rest of the uuids (the child RPs) | 08:49 |
PapaOurs | giblet: I just added a comment | 08:49 |
PapaOurs | giblet: https://github.com/openstack/nova/blob/c22b53c2481bac518a6b32cdee7b7df23d91251e/nova/objects/compute_node.py#L444-L445 isn't very conservative | 08:50 |
giblet | PapaOurs: I have an idea how to prove 2/ I think I can put together some functional test where only the child RP has resources | 08:50 |
PapaOurs | giblet: yeah | 08:50 |
PapaOurs | giblet: https://review.openstack.org/#/c/604084/7/nova/tests/functional/test_servers.py@4869 | 08:51 |
PapaOurs | I need to disappear, I have four internal bugs to triage :( | 08:51 |
giblet | PapaOurs: thanks for the nice discussion. Happy triaging! (I will work on the reproduction of 2/) | 08:52 |
PapaOurs | happy, happy, I'm not sure | 08:54 |
*** efried has quit IRC | 09:17 | |
*** efried1 has joined #openstack-placement | 09:17 | |
*** efried1 is now known as efried | 09:19 | |
giblet | PapaOurs: I can reproduce the assumed fault of 2/ but I think the situation is a bit better than we thought | 09:22 |
PapaOurs | ok cool | 09:23 |
giblet | PapaOurs: so the instance requests resources as normal, but every resource is provided by a child RP and the root RP has no inventory. | 09:23 |
PapaOurs | you mean the response we get from a-c ? | 09:23 |
giblet | PapaOurs: in this case the GET a_c response contains allocations only for the child RP BUT the provider_summaries part contains the root RP and the child RP as well | 09:23 |
PapaOurs | so we still have the provider_summaries having the root RP in it ? | 09:24 |
PapaOurs | oh ok cool then | 09:24 |
giblet | yes | 09:24 |
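A sketch of the response shape giblet describes (hypothetical UUIDs and values; the real response is the JSON from GET /allocation_candidates): allocations only against the child RP, but provider_summaries listing both root and child.

```python
# One candidate whose resources all come from a NUMA child RP.
candidate = {
    "allocations": {
        "numa-child-rp-uuid": {"resources": {"VCPU": 2, "MEMORY_MB": 2048}},
    },
}

# provider_summaries still includes the (inventory-less) root provider.
provider_summaries = {
    "compute-root-rp-uuid": {"resources": {}},
    "numa-child-rp-uuid": {"resources": {"VCPU": {"capacity": 8, "used": 0}}},
}

root_in_summaries = "compute-root-rp-uuid" in provider_summaries
root_in_allocations = "compute-root-rp-uuid" in candidate["allocations"]
```

The root being present in the summaries but absent from the allocations is exactly the mismatch that trips up the scheduler later.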
giblet | however the scheduling still fails :) | 09:24 |
PapaOurs | hah | 09:24 |
PapaOurs | why ? | 09:24 |
giblet | still looking... | 09:24 |
giblet | it ends up with NoValidHosts | 09:24 |
PapaOurs | ok | 09:24 |
giblet | I will push the reproduction patch after lunch | 09:25 |
giblet | PapaOurs: so the filters are finding the right host state, but when the filter scheduler tries to translate that back to an allocation candidate it does not find the candidate here https://github.com/openstack/nova/blob/122702a5a8aaf3231f0ee416ba9bc6935547f0ab/nova/scheduler/filter_scheduler.py#L216 | 09:31 |
giblet | PapaOurs: alloc_reqs_by_rp_uuid is built based on the candidates and therefore does not contain the root RP, just the child RP | 09:31 |
giblet | PapaOurs: but the selected host state only contains the root RP | 09:32 |
cdent | ugh | 09:32 |
giblet | PapaOurs: this causes no candidate to be found | 09:32 |
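A minimal sketch of the mismatch giblet traces (hypothetical UUIDs; simplified from the linked filter_scheduler.py logic): the lookup table is keyed by the RP UUIDs in the allocation requests, here only the child, while the selected host state carries the root/compute RP UUID.

```python
# Candidates indexed by the RP UUIDs that appear in their allocations.
alloc_reqs_by_rp_uuid = {
    "numa-child-rp-uuid": [
        {"allocations": {"numa-child-rp-uuid": {"resources": {"VCPU": 2}}}},
    ],
}

# The chosen HostState is identified by the compute (root) RP UUID.
selected_host_rp_uuid = "compute-root-rp-uuid"

# Translating the host back to a candidate finds nothing -> NoValidHost.
candidates = alloc_reqs_by_rp_uuid.get(selected_host_rp_uuid, [])
```

An empty `candidates` list at this step is what surfaces as the NoValidHost failure.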
giblet | cdent: while it is a problem, it is only a problem when the instance does not request resources from the root RP. And still placement is good as it returns the root RP in the provider summary. So only nova needs to adapt to this when NUMA RPs are introduced | 09:34 |
* cdent nods | 09:35 | |
cdent | I was mostly "ugh"ing about the many pieces needing to interact making it hard to get it right and simple | 09:35 |
*** stephenfin is now known as finucannot | 09:35 | |
giblet | cdent: it is just placement and the nova scheduler ;) | 09:37 |
cdent | host states v resource providers v allocation requests v provider summaries v root rp v child rp v post request filters v pre request filters v placement filtering v .... | 09:38 |
giblet | cdent: if you look at it that way... :) | 09:38 |
cdent | it's a tangled web we have woven | 09:38 |
cdent | when it works well it is great | 09:38 |
PapaOurs | giblet: I see | 09:40 |
PapaOurs | giblet: thanks for the update | 09:40 |
giblet | host state is nova-only and it is really just about serving the scheduler filters. I think the allocation request, allocation candidate, provider summaries triplet is the confusing part, and of course the translation between them | 09:41 |
giblet | ahh and alternates :) | 09:41 |
PapaOurs | giblet: cdent: I feel that while we say it's only needed for NUMA, we somehow need to fix it sooner than later | 09:41 |
PapaOurs | giblet: cdent: because that's just fortunate it works | 09:41 |
PapaOurs | and I don't want to hold NUMA because of this | 09:42 |
giblet | PapaOurs: yeah, at least we have to remove the bad assumptions from the code | 09:42 |
giblet | ~ assumptions that don't hold any more | 09:42 |
PapaOurs | giblet: what we could do is try to find a way to say "which is my root RP" ? | 09:42 |
PapaOurs | I feel it's probably a nova-only fix | 09:43 |
giblet | PapaOurs: yeah I think the solution could be a scheduler-only piece that replaces alloc_reqs_by_rp_uuid with a better lookup table | 09:43 |
cdent | would it make any sense to "give" provider summaries to their corresponding host states? | 09:43 |
PapaOurs | I need to go to the gym | 09:43 |
PapaOurs | I'll be back later in 2 hours-ish | 09:43 |
* cdent needs to go the gym for his entire openstack career | 09:43 | |
giblet | PapaOurs: I also run for lunch, so see you later | 09:44 |
PapaOurs | giblet: I leave my procedural -2 until we clarify a good solution path | 09:44 |
PapaOurs | ++ | 09:44 |
giblet | PapaOurs: OK, sure | 09:44 |
* giblet needs to insert the swimming pool visits back to his calendar | 09:45 | |
giblet | cdent: attaaching summaries to host states could be a way too | 09:45 |
cdent | giblet: if you get a chance to look at this stack from efried, the would merge some nice cleanups: https://review.openstack.org/#/c/602701/ | 10:00 |
cdent | and get many of the outstanding commits | 10:01 |
cdent | you might also have some ideas on the best way to deal with https://review.openstack.org/#/c/601866/ | 10:01 |
*** ttsiouts has quit IRC | 10:22 | |
giblet | cdent: do we have nova-placement integration test in place to see if such refactoring https://review.openstack.org/#/c/602701/ is really just a refactoring? | 10:40 |
cdent | the functional tests will cover all that, and if they don't they aren't good enough :). We need to rely as much as possible on the API fitting its contract and nova using that API, not random integrations. However, I've started the process for integration tests in this stack: https://review.openstack.org/#/c/601614/ but they have quite a few unresolved depends-on | 10:42 |
cdent | giblet: did you see: https://anticdent.org/gabbi-in-the-gate.html | 10:42 |
giblet | cdent: it is queued up for reading | 10:43 |
cdent | great, thanks | 10:43 |
cdent | there's also this: https://review.openstack.org/#/c/601412/ | 10:43 |
cdent | all of that is hung up on: https://review.openstack.org/#/c/600162/ and https://review.openstack.org/#/c/604454/ | 10:44 |
cdent | And the whole thing keys off this hack which needs to be turned into a real thing: https://review.openstack.org/#/c/600161/ | 10:46 |
cdent | biab | 10:46 |
giblet | cdent: nice, I queued up the patches for review | 10:47 |
*** tetsuro has quit IRC | 10:48 | |
*** s10 has joined #openstack-placement | 11:06 | |
*** helenafm has quit IRC | 11:21 | |
*** ttsiouts has joined #openstack-placement | 11:40 | |
*** e0ne has quit IRC | 11:54 | |
*** helenafm has joined #openstack-placement | 12:12 | |
*** jaypipes is now known as leakypipes | 12:12 | |
*** e0ne has joined #openstack-placement | 12:29 | |
*** dims_ has quit IRC | 12:58 | |
*** mriedem has joined #openstack-placement | 12:59 | |
PapaOurs | giblet: I'm back but I'm looking at some internal issue | 13:10 |
giblet | PapaOurs: no worries, I'm working on a fix for the issue you found | 13:10 |
giblet | PapaOurs: first I thought it would be easy so I dived in, but the get_alternate_hosts codepath gives me a headache | 13:11 |
PapaOurs | argh | 13:11 |
PapaOurs | ok | 13:11 |
PapaOurs | thanks for the fix | 13:11 |
*** efried is now known as fried_rice | 13:14 | |
openstackgerrit | Jay Pipes proposed openstack/os-traits master: Add COMPUTE_TIME_HPET trait https://review.openstack.org/608258 | 13:18 |
fried_rice | PapaOurs, giblet, cdent: Seems like there's two things we need wrt filters: | 13:23 |
fried_rice | 1) They need to be multi-provider allocation request-aware to do their filtering operations | 13:23 |
fried_rice | 2) They need to know how to find the root provider (i.e. host) from a multi-provider allocation request so they know where to send the boot request | 13:23 |
giblet | fried_rice: I'm not sure our filters need to know about anything other than the hoststate | 13:24 |
giblet | fried_rice: for the 2) I agree | 13:24 |
giblet | fried_rice: and there provider_summaries helps | 13:24 |
fried_rice | At least 2) is a simple thing: provider_summaries[random_provider_from_allocation_request.root_provider_uuid] | 13:24 |
giblet | fried_rice: I'm cooking up a solution right now for 2) | 13:25 |
giblet | fried_rice: exactly | 13:25 |
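The one-liner fried_rice proposes for 2) can be sketched as follows (hypothetical data; `root_provider_uuid` appears in provider summaries from placement microversion 1.29, and the helper name is illustrative):

```python
# provider_summaries lets any provider in an allocation request be mapped
# back to its root (i.e. the compute node / host).
provider_summaries = {
    "numa-child-rp-uuid": {"root_provider_uuid": "compute-root-rp-uuid"},
    "compute-root-rp-uuid": {"root_provider_uuid": "compute-root-rp-uuid"},
}

def root_of(rp_uuid):
    # All providers in one allocation request share the same root,
    # so any provider picked from the request yields the host.
    return provider_summaries[rp_uuid]["root_provider_uuid"]

root = root_of("numa-child-rp-uuid")
```

Because every provider in a single allocation request belongs to the same tree, picking a random provider and following its root is sufficient.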
fried_rice | giblet: If we only care to filter on the host, then we can do 2) first and 1) is unchanged. | 13:25 |
giblet | fried_rice: give me couple of hours and I will have specific code to talk about :) | 13:25 |
giblet | fried_rice: yeah, current code explicitly ignores 1) | 13:26 |
giblet | fried_rice: so I only want to solve 2) first as that is unavoidable | 13:26 |
giblet | fried_rice: then if there is a need then we need to talk about 1) separately | 13:26 |
fried_rice | ++ | 13:27 |
fried_rice | Which patch is -2'd for this currently? | 13:27 |
giblet | fried_rice: https://review.openstack.org/#/c/604084 | 13:28 |
giblet | fried_rice: which is just test coverage that PapaOurs and I thought should fail | 13:28 |
fried_rice | Cool. | 13:29 |
giblet | then we understood why it doesn't fail right now but it will as soon as there is an allocation candidate that only contains child RPs | 13:29 |
PapaOurs | giblet: honestly, my -2 is for making sure people see the concern | 13:30 |
PapaOurs | call it a flag :) | 13:30 |
PapaOurs | fried_rice: for 1/ we checked and that's not a problem, see my comments in the -2'd change | 13:31 |
fried_rice | ack | 13:31 |
giblet | PapaOurs: I think I will ask you to remove that _after_ I am able to provide tests that fail due to what you discovered and we see some way forward about them | 13:31 |
giblet | PapaOurs: 1/ could be a separate need. I mean if a scheduler filter wants to select not just between hosts but between alternative candidates targeting the same host | 13:32 |
PapaOurs | giblet: not sure I understand your point | 13:32 |
giblet | PapaOurs: e.g. there are two NUMA nodes on the same host and both can fulfill the request | 13:32 |
giblet | PapaOurs: then we will have two candidates but a single host | 13:32 |
PapaOurs | giblet: well, filters don't know about placement at all | 13:32 |
giblet | PapaOurs: but NUMA filter knows about NUMA nodes | 13:33 |
PapaOurs | if we have a single host to check for filters, that's cool | 13:33 |
giblet | PapaOurs: so the NUMA filter might want to select between NUMA nodes | 13:33 |
PapaOurs | giblet: sure we said to not change that :) | 13:33 |
giblet | PapaOurs: sure I don't want to change that | 13:33 |
giblet | PapaOurs: this is why I said it can be a future need | 13:33 |
PapaOurs | giblet: but NUMA nodes will be checked by placement so we could deprecate the filter | 13:35 |
PapaOurs | giblet: I mean, the filter will continue to do the same | 13:35 |
fried_rice | I wouldn't bet on NUMA filtering being done by placement any time soon. | 13:35 |
giblet | PapaOurs: deprecating NUMA filter would be really nice :) | 13:35 |
PapaOurs | giblet: but once some verifications are done by placement, we'll only leave some specific things | 13:35 |
fried_rice | We are likely to model NUMA in placement but do the filtering post-/a_c for at least a release or two. | 13:36 |
PapaOurs | yup | 13:38 |
PapaOurs | that's my plan | 13:38 |
PapaOurs | do the topology in placement | 13:38 |
PapaOurs | but just keep the filters for a few cycles | 13:39 |
PapaOurs | filter* | 13:39 |
*** ttsiouts has quit IRC | 13:39 | |
*** s10_ has joined #openstack-placement | 13:42 | |
*** s10 has quit IRC | 13:42 | |
giblet | ohh sh*t, nova assumes that a running server will always have an allocation on the compute RP to detect that such allocation needs to be moved to the migration_uuid during a move | 13:53 |
giblet | so in my test when a server only allocates from the NUMA child, the move operations stop moving the source allocation to the migration_uuid | 13:54 |
giblet | this will be fun | 13:54 |
giblet | still, technically in today's nova code base this assumption holds, as today even with nested enabled we cannot boot a server without allocating something from the compute RP | 13:55 |
giblet | so nothing is broken today, it just does not support the case when CPU and MEMORY are moved to the NUMA RP | 13:55 |
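The move-operation assumption giblet describes can be sketched like this (hypothetical UUIDs; a simplification of the real allocation-move logic, not nova's actual code):

```python
# A server whose entire allocation lives on a NUMA child RP.
instance_allocations = {
    "numa-child-rp-uuid": {"resources": {"VCPU": 2, "MEMORY_MB": 2048}},
}
compute_rp_uuid = "compute-root-rp-uuid"

# The move path looks for the source allocation against the compute RP
# in order to copy it over to the migration_uuid. With child-only
# allocations that entry does not exist, so nothing gets moved.
source_alloc = instance_allocations.get(compute_rp_uuid)
```

As long as every server allocates at least something from the compute RP (today's behaviour), `source_alloc` is always found; NUMA-modelled CPU/RAM would break that.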
giblet | ... and I thought that it is a happy Friday ... | 13:57 |
*** ttsiouts has joined #openstack-placement | 14:00 | |
cdent | I still think that we should have vcpu on the host, and some other thing on the numa node | 14:01 |
cdent | so that there is always something on the host | 14:01 |
cdent | mriedem: a) aren't you off, b) this stack of depends on is so much fun | 14:03 |
mriedem | yes and yes | 14:04 |
mriedem | extracting a required thing wasn't going to be simple... | 14:04 |
cdent | it's actually simpler than I feared, in some aspects | 14:09 |
cdent | it's the interrelationship between the various bits of qa/infra that is bewildering, but it fails in reasonable ways | 14:11 |
mriedem | well one must sacrifice a few virgin souls to the qa/infra gods to get things to work but yes | 14:16 |
fried_rice | wow, virgin soul must be even harder, what with reincarnation and all | 14:19 |
*** dansmith is now known as SteelyDan | 14:20 | |
*** ttsiouts has quit IRC | 14:31 | |
*** ttsiouts has joined #openstack-placement | 14:31 | |
*** ttsiouts has quit IRC | 14:36 | |
*** e0ne has quit IRC | 14:38 | |
openstackgerrit | Jay Pipes proposed openstack/os-traits master: Add COMPUTE_TIME_HPET trait https://review.openstack.org/608258 | 14:48 |
PapaOurs | cdent: sorry, was in a meeting so I missed your point when you said you feel VCPUs should be a root RP RC | 14:53 |
PapaOurs | cdent: but I proposed this on my spec as an alternative and got some solid disagreement on that being a thing | 14:53 |
PapaOurs | giblet: I'm back after 2 hours of meeting and internal stuff | 14:54 |
PapaOurs | giblet: I feel we all know a bit more about the problem we're trying to solve, I'm about to un-2 | 14:54 |
PapaOurs | no real need to hold this change since it doesn't regress | 14:54 |
giblet | PapaOurs: fine. I'm about to push the patch | 14:54 |
cdent | PapaOurs: yeah, I think I remember that. I'm mostly being nostalgic for "easy targeting" | 14:55 |
giblet | PapaOurs: I think I managed to solve issue 2/, but we can debate whether my solution is a nice one | 14:55 |
*** helenafm has quit IRC | 14:58 | |
PapaOurs | cool :) | 15:01 |
fried_rice | giblet, PapaOurs: Oh, I was thinking we should put the new test case into that patch. But can reissue +2 if we decide otherwise. | 15:11 |
PapaOurs | fried_rice: that's a good point, I'm not sure we really need to hold based on this | 15:12 |
PapaOurs | fried_rice: that can be a follow-up | 15:12 |
fried_rice | y'all just let me know what we decide. | 15:12 |
* giblet still working on the functional tests | 15:16 | |
*** ttsiouts has joined #openstack-placement | 15:21 | |
*** ttsiouts has quit IRC | 15:29 | |
*** ttsiouts has joined #openstack-placement | 15:30 | |
*** ttsiouts has quit IRC | 15:34 | |
giblet | PapaOurs, fried_rice, cdent: here is my first stab at fixing the children-only allocation issue found by PapaOurs https://review.openstack.org/#/c/608298/ | 15:43 |
giblet | PapaOurs: be aware that it is ugly at places :) | 15:43 |
PapaOurs | giblet: ack, thanks | 15:43 |
fried_rice | ack | 15:43 |
PapaOurs | giblet: that said, I'll call it a day soon | 15:43 |
giblet | PapaOurs: no worries, I'm calling it a day (and a week) right now :) | 15:44 |
giblet | have a nice weekend folks! | 15:44 |
fried_rice | See ya giblet. Great work this week | 15:44 |
PapaOurs | enjoy | 15:44 |
*** e0ne has joined #openstack-placement | 15:50 | |
*** e0ne has quit IRC | 15:55 | |
cdent | something about the breadth of that change doesn't seem right, but I'm rather out of time (for a week) to review it properly | 16:03 |
*** PapaOurs is now known as bauzas | 16:08 | |
*** mriedem has quit IRC | 16:09 | |
*** helenafm has joined #openstack-placement | 16:12 | |
*** s10_ has quit IRC | 16:12 | |
cdent | I think I'm done. See you all in a bit more than a week. | 16:30 |
* cdent waves | 16:30 | |
*** cdent has left #openstack-placement | 16:30 | |
*** dims has joined #openstack-placement | 16:46 | |
*** helenafm has quit IRC | 16:49 | |
*** sean-k-mooney has quit IRC | 17:16 | |
*** sean-k-mooney has joined #openstack-placement | 17:24 | |
*** tssurya has quit IRC | 17:44 | |
*** e0ne has joined #openstack-placement | 19:38 | |
*** e0ne has quit IRC | 19:43 | |
*** e0ne has joined #openstack-placement | 19:55 | |
*** e0ne has quit IRC | 19:56 | |
*** e0ne has joined #openstack-placement | 20:44 | |
*** e0ne has quit IRC | 20:49 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!