*** jaypipes has quit IRC | 00:09 | |
*** takashin has joined #openstack-placement | 00:17 | |
*** efried has quit IRC | 00:37 | |
*** efried has joined #openstack-placement | 00:45 | |
*** tetsuro has joined #openstack-placement | 01:12 | |
*** tetsuro has quit IRC | 02:45 | |
*** tetsuro has joined #openstack-placement | 02:51 | |
openstackgerrit | Merged openstack/placement master: Allow [a-zA-Z0-9_-]{1,64} for request group suffix https://review.opendev.org/657419 | 02:53 |
openstackgerrit | Merged openstack/placement master: Avoid traversing summaries in _check_traits_for_alloc_request https://review.opendev.org/660691 | 02:53 |
openstackgerrit | Merged openstack/placement master: Use trait strings in ProviderSummary objects https://review.opendev.org/660692 | 02:53 |
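That first merge implements the "verbose suffixes" idea discussed later in the day: granular request group suffixes in GET /allocation_candidates may now be short descriptive strings rather than just digits. An illustrative query (resource values made up):

    GET /allocation_candidates?resources_COMPUTE=VCPU:2,MEMORY_MB:512&resources_NET=SRIOV_NET_VF:1&group_policy=none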
*** tetsuro has quit IRC | 02:55 | |
*** tetsuro has joined #openstack-placement | 02:55 | |
*** tetsuro has quit IRC | 03:01 | |
*** tetsuro has joined #openstack-placement | 03:09 | |
*** tetsuro has quit IRC | 03:12 | |
*** tetsuro has joined #openstack-placement | 03:20 | |
*** tetsuro has quit IRC | 03:24 | |
*** altlogbot_3 has quit IRC | 03:44 | |
*** altlogbot_2 has joined #openstack-placement | 03:46 | |
*** tetsuro has joined #openstack-placement | 03:59 | |
*** tetsuro has quit IRC | 04:04 | |
*** tetsuro has joined #openstack-placement | 04:26 | |
*** altlogbot_2 has quit IRC | 04:38 | |
*** altlogbot_2 has joined #openstack-placement | 04:40 | |
*** altlogbot_2 has quit IRC | 04:40 | |
*** altlogbot_2 has joined #openstack-placement | 04:42 | |
*** tetsuro has quit IRC | 05:12 | |
*** tetsuro has joined #openstack-placement | 06:02 | |
*** tetsuro has quit IRC | 06:07 | |
*** tetsuro has joined #openstack-placement | 06:19 | |
*** tetsuro has quit IRC | 07:16 | |
*** tetsuro has joined #openstack-placement | 07:22 | |
*** tetsuro has quit IRC | 07:41 | |
*** tetsuro has joined #openstack-placement | 07:47 | |
*** helenafm has joined #openstack-placement | 07:50 | |
*** tetsuro has quit IRC | 07:56 | |
*** takashin has left #openstack-placement | 07:57 | |
*** e0ne has joined #openstack-placement | 07:59 | |
openstackgerrit | Chris Dent proposed openstack/placement master: Optionally run a wsgi profiler when asked https://review.opendev.org/643269 | 08:37 |
*** cdent has joined #openstack-placement | 09:05 | |
openstackgerrit | Chris Dent proposed openstack/placement master: Modernize CORS config and setup https://review.opendev.org/661922 | 09:32 |
*** jaypipes has joined #openstack-placement | 10:57 | |
*** helenafm has quit IRC | 12:05 | |
*** e0ne has quit IRC | 12:16 | |
*** helenafm has joined #openstack-placement | 12:55 | |
sean-k-mooney | cdent: replied to your replies :) um, but i think this might be the most interesting https://review.opendev.org/#/c/658510/4/doc/source/specs/train/approved/2005575-nested-magic.rst@363 | 13:05 |
sean-k-mooney | i'm not actually sure we need can_split if we do ^ | 13:05 |
sean-k-mooney | "... always report vcpus and memory_mb (4k memory only) on the compute node RP and always report hugepages and PCPUs on the numa nodes." | 13:06 |
sean-k-mooney | cdent: oh and i did not say it in the review but yes i'm fine with same_subtree, resourceless request groups, and verbose suffixes | 13:08 |
sean-k-mooney | so at least on my part it was not just because can_split was first in the document | 13:09 |
cdent | cool, thanks | 13:19 |
*** mriedem has joined #openstack-placement | 13:20 | |
*** e0ne has joined #openstack-placement | 13:24 | |
efried | cdent, sean-k-mooney: o/ | 13:40 |
sean-k-mooney | efried: o/ | 13:40 |
efried | I didn't manage to catch up with all the can_split comments even as of yesterday. | 13:40 |
efried | but the one thing that stands out as possibly bacon-saving is: | 13:40 |
efried | if we gotta have NUMA and non-NUMA guests on the same host, put VCPUs and non-huge pages on the root RP, and PCPUs and hugepages on the NUMA RPs. | 13:41 |
sean-k-mooney | efried: ya, so if we do that i think we don't need can_split | 13:42 |
efried | right | 13:42 |
efried | and cdent, to answer your question, no, I don't think existing flavors need to change. | 13:42 |
sean-k-mooney | we do need to handle that one edge case where you ask for hw:mem_page_size=4k|small | 13:42 |
efried | We were going to need to add code to interpret them anyway | 13:43 |
sean-k-mooney | but we can do that in the prefilter that constructs the placement request | 13:43 |
efried | sean-k-mooney: is that a done thing? | 13:43 |
sean-k-mooney | yes | 13:43 |
efried | Can we have an equivalent of cpu_dedicated_set for memory pages? | 13:43 |
sean-k-mooney | you can explicitly request small pages, but it also gives the vm a numa topology of 1 | 13:43 |
sean-k-mooney | so we already track memory pages in the resource tracker even for 4k | 13:44 |
sean-k-mooney | we just don't use that unless you set hw:mem_page_size=4k or small | 13:44 |
sean-k-mooney | when you set that we both correctly do the accounting and claim it in the resource tracker, and affinitize the vm to the numa node it was allocated from | 13:45 |
sean-k-mooney | but as i said that is only a problem if you set that + requested pinned cpus | 13:46 |
efried | Is there a config option that sets those things up, or do we slice 'em on the fly? | 13:46 |
sean-k-mooney | set what up? the tracking in the resource tracker etc.? | 13:46 |
sean-k-mooney | that is always done, we just don't always claim, as for non-numa guests we allow the kernel to float both the memory and the cpus across numa nodes as it sees fit | 13:47 |
efried | sean-k-mooney: So what we're suggesting here is to allow the operator to decide which CPUs are going to be for NUMA (cpu_dedicated_set) and non-NUMA (cpu_shared_set) guests. | 13:49 |
efried | That's a new (mostly) config setup per stephenfin's spec | 13:49 |
sean-k-mooney | no, cpu_shared_set does not define that | 13:49 |
sean-k-mooney | we would have to update the spec to state that if we want to use it that way | 13:49 |
efried | sean-k-mooney: I'm saying that's what we're talking about doing to solve the problem at hand. | 13:49 |
sean-k-mooney | not quite | 13:50 |
efried | isn't it? Or did I miss | 13:50 |
efried | apparently I missed | 13:50 |
sean-k-mooney | i am saying at the placement level, yes | 13:50 |
sean-k-mooney | i'm not saying that you can't have numa affinity with floating cpus | 13:50 |
sean-k-mooney | that would be a big regression | 13:50 |
efried | what does that mean, NUMA affinity with floating CPUs? | 13:51 |
sean-k-mooney | but that affinity would still be provided by the numa topology filter, not placement | 13:51 |
efried | you mean "my guest can float across all the CPUs in the NUMA node, it's not pinned to specific ones"? | 13:51 |
sean-k-mooney | if you request hugepages and nothing else | 13:51 |
efried | But it's still restricted to only CPUs in the NUMA node | 13:51 |
sean-k-mooney | we restrict your cores to the cores that the hugepage memory is allocated from and create a numa topology of 1 | 13:51 |
sean-k-mooney | so they still float, but just within the 1 numa node your hugepage memory came from | 13:52 |
efried | that ^ is how it's done today? | 13:52 |
sean-k-mooney | yes | 13:52 |
sean-k-mooney | when you set hw:mem_page_size to anything we treat it as if you also set hw:numa_nodes=1 | 13:53 |
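A minimal illustration of that coupling with today's flavor extra specs (flavor name and page size value are made up):

    # ask for hugepages; nova treats this as if hw:numa_nodes=1 were also set
    openstack flavor set m1.dpdk --property hw:mem_page_size=large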
efried | so in placement-land, if we put cpu_shared_set on the root RP, we would satisfy ^ this by allocating some number of those VCPUs from the root RP, and the memory thingies from a NUMA node. | 13:53 |
sean-k-mooney | yep | 13:54 |
sean-k-mooney | and checking the affinity of that would be up to the numa topology filter | 13:54 |
sean-k-mooney | i.e. not a placement problem | 13:54 |
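A sketch of what such a request might look like, assuming the modeling just described (hugepage inventory simplified to plain MEMORY_MB; per-page-size resource classes come up below):

    GET /allocation_candidates?resources=VCPU:4&resources1=MEMORY_MB:2048&required1=HW_NUMA_ROOT

The unsuffixed VCPUs come from the root RP and float; the suffixed memory group lands on a NUMA node RP; verifying the resulting affinity stays with the NUMATopologyFilter.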
efried | But that brings us back to the scenario where we have to fail late if there's not enough VCPUs in one NUMA node to fulfil the request. | 13:54 |
efried | how frequently is this ^ done, vs other NUMA-y configurations? | 13:55 |
sean-k-mooney | technically that could happen, but if we claim the cpus in the resource tracker in the conductor that also solves that issue | 13:55 |
sean-k-mooney | just requesting hugepages? | 13:56 |
sean-k-mooney | and not pinning | 13:56 |
sean-k-mooney | it's done anytime you have a dpdk network backend, as it's required for that to work | 13:56 |
sean-k-mooney | but cpu pinning is not | 13:56 |
efried | And how much of a regression is it really if we float *all* the CPUs in that case? | 13:56 |
sean-k-mooney | i have raised the idea of breaking that coupling many times and always been shut down | 13:57 |
efried | okay | 13:57 |
efried | by whom, out of curiosity? | 13:58 |
efried | Sounds like they would be good people to ask these questions we've been noodling. | 13:58 |
sean-k-mooney | intel, windriver and telcos | 13:59 |
efried | btw, I had another idea to mitigate "candidate explosion" | 13:59 |
efried | (cdent edleafe) | 13:59 |
efried | allow an optional :$chunk_size syntax in the value, so the caller can decide on a per-call basis. | 14:00 |
efried | can_split=VCPU,MEMORY_MB:1024 | 14:00 |
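Spelled out (proposed syntax only, from the spec discussion; numbers illustrative):

    GET /allocation_candidates?resources=VCPU:8,MEMORY_MB:4096&can_split=VCPU,MEMORY_MB:1024

i.e. both classes may be split across providers, but MEMORY_MB only in 1024-unit chunks.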
cdent | mleh | 14:00 |
cdent | how smart are we expecting the caller to be | 14:00 |
efried | that's halfway between a 'meh' and a 'bleh'? | 14:00 |
cdent | this has been a constant concern on my part | 14:00 |
cdent | pretty much | 14:01 |
sean-k-mooney | efried: by the way, what i think would be fair is not to have affinity for hw:mem_page_size but always have it if you set hw:numa_nodes | 14:01 |
efried | Well, the caller in this case is nova. I'm *not* expecting that syntax ever to be put in place by a human. | 14:01 |
sean-k-mooney | hum interesting | 14:01 |
sean-k-mooney | how would the operator decide that? or specify that | 14:01 |
efried | so for example cdent, hw:mem_page_size translates directly to $chunk_size for MEMORY_MB | 14:01 |
sean-k-mooney | efried: that is not enough to make hugepages work in placement | 14:02 |
efried | (course, it's normally being used to define a NUMA topology, which can_split is specifically not, but...) | 14:02 |
sean-k-mooney | we should have separate inventories of each size | 14:02 |
cdent | but then you would have to pre-partition | 14:02 |
efried | I don't understand how hugepages work. Are they preallocated in some way? | 14:02 |
efried | yeah, that. | 14:02 |
sean-k-mooney | well it depends on if you said i want 1G pages or any large page | 14:02 |
sean-k-mooney | efried: yes, preallocated in the kernel, usually on the kernel command line | 14:03 |
sean-k-mooney | we do not have to select a specific hugepage to give to the guest, but we have pools of them per numa node and just allocate x units of a given size | 14:03 |
efried | sean-k-mooney: and if my system is set up that way and I just ask for a random amount of MEMORY_MB in a non-NUMA configuration, does it actually consume whole pages? | 14:04 |
efried | Or does the memory somehow get to "float" like the CPUs do | 14:04 |
sean-k-mooney | efried: no, the default hw:mem_page_size is small, not any | 14:04 |
sean-k-mooney | so if that is not set it always uses the smallest page size, which is typically 4k | 14:05 |
sean-k-mooney | since the kernel cannot work with only hugepage memory you will never have a case where the only memory is hugepages | 14:05 |
sean-k-mooney | if you want guests to only use hugepages on a host then you set the host reserved memory equal to the total amount of 4k memory on the host | 14:06 |
sean-k-mooney | when the memory is allocated as hugepages, even when they are not used by anything, they are removed from the free memory reported on the host at the kernel level | 14:07 |
sean-k-mooney | so if you allocate 60 1G pages on a 64G host then free will say you only have 4G free | 14:08 |
sean-k-mooney | well, minus whatever the os is using | 14:08 |
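Concretely, matching those numbers (real kernel parameters, illustrative values):

    # kernel command line: preallocate 60 x 1G hugepages at boot
    default_hugepagesz=1G hugepagesz=1G hugepages=60
    # on a 64G host, `free` then reports only ~4G free (minus OS usage),
    # even before any guest consumes a hugepage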
sean-k-mooney | anyway back to chunksize | 14:08 |
sean-k-mooney | did you envision that in the flavor somewhere? or a config? | 14:08 |
efried | I figured the sched filter would add it to the request | 14:10 |
cdent | [t aPG6] | 14:10 |
purplerbot | <efried> Sounds like they would be good people to ask these questions we've been noodling. [2019-05-29 13:58:18.035937] [n aPG6] | 14:10 |
cdent | because we keep coming up with edges and corners where _something_ is going to change for someone, no matter what we do, unless we continue to say "for now, don't co-host numa and non-numa", whether we do can_split or not | 14:10 |
efried | based on either something already in the flavor (like a "page size" of some sort - not sure what's there); or perhaps calculated on the fly based on the total amount of memory being requested; or perhaps even a config option that defaults to something "sensible". | 14:11 |
sean-k-mooney | efried: well, even for 4k pages the minimum allocation in the flavor is 1MB chunks | 14:12 |
efried | cdent: I agree with you | 14:12 |
sean-k-mooney | and 1G for disk | 14:12 |
efried | sean-k-mooney: that's useful information, I would think that should be fed into the step_size in the inventory setup. | 14:12 |
sean-k-mooney | so there is a natural minimum granularity in most cases | 14:12 |
efried | But it's not today | 14:12 |
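A toy sketch of how a prefilter might derive the MEMORY_MB chunk size from the options efried lists above: the flavor's page size if set, else a configurable default. Everything here is hypothetical, not nova code:

    def memory_chunk_size_mb(extra_specs, default_chunk_mb=1024):
        """Hypothetical helper: pick a can_split chunk size for MEMORY_MB.

        If the flavor requests an explicit page size, split in multiples
        of it; for symbolic values or no value, use a configured default.
        """
        page = extra_specs.get('hw:mem_page_size')
        if page in (None, 'small', 'large', 'any'):
            return default_chunk_mb
        units = {'KB': 1, 'MB': 1024, 'GB': 1024 * 1024}  # suffix -> KiB
        for suffix, kib in units.items():
            if page.upper().endswith(suffix):
                return max(1, int(page[:-len(suffix)]) * kib // 1024)
        return max(1, int(page) // 1024)  # bare values are KiB

    # memory_chunk_size_mb({'hw:mem_page_size': '1GB'}) -> 1024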
efried | cdent: The issue I have is even knowing what are the reasonable subset of questions to ask. | 14:13 |
efried | If we could identify a small handful of knowledgeable stakeholders, and have an actual sit-down (hangout) with them for a couple of hours, we could get somewhere. | 14:13 |
cdent | Maybe we don't have to ask all the questions at once. Have you had a chance to look at what I said on the etherpad? | 14:13 |
efried | But we've speculated on sooo many things at this point, if we put it all in an email, is anyone actually going to read and process it? | 14:13 |
efried | yes | 14:14 |
cdent | My concern with a "small handful" of stakeholders is that they are often representative of a small but powerful segment that does not represent the commonweal. we need both. | 14:14 |
* edleafe catches up | 14:14 | |
efried | fair enough | 14:14 |
edleafe | Sounds like the classic "i want" vs. "we need" | 14:15 |
cdent | maybe start with "how much will you hate it if we continue to say 'do not mix numa and non-numa' and see where that goes" | 14:15 |
cdent | edleafe: yes | 14:15 |
efried | cdent: At this point I'm tempted to say we should do something of limited scope that supports a certain known subset... yes, kind of that --^ | 14:15 |
edleafe | cdent: when you say "mix": do you mean on the same host, or having numa/non-numa in the same deployment? | 14:15 |
cdent | that's where I'm at too, which is why I was trying to determine if "everything can_split" is a workable thing | 14:16 |
cdent | edleafe: same host | 14:16 |
edleafe | ugh - no, that's dumb | 14:16 |
edleafe | I think we agreed that that wasn't going to happen back in Atlanta | 14:16 |
cdent | damn big typo: "every feature _but_ can_split, from the spec" | 14:16 |
cdent | edleafe: are you up to date on the vast commentary on https://review.opendev.org/658510 ? if not, probably best to catch up there first | 14:17 |
edleafe | cdent: no, I'm not up to date on much lately. :) | 14:17 |
sean-k-mooney | edleafe: the people that care about numa and non-numa on the same host are typically the edge folks, for context, but even then i'm not totally sold on that argument as they tend to tune everything to use every last resource | 14:18 |
cdent | efried: so yeah: maybe an option, if it is not damaging, is: same_subtree, resourceless providers, mappings in allocations, complex suffixes and NOT can_split | 14:19 |
edleafe | sean-k-mooney: I'm not surprised | 14:19 |
sean-k-mooney | so in general i personally would be fine with saying please continue not to mix numa and non-numa instances | 14:19 |
efried | cdent: and a per-compute config option saying use_placement_for_numa | 14:19 |
sean-k-mooney | and as cdent mentioned earlier, the nova config option for what resources to report under numa nodes is enough to enable that | 14:19 |
cdent | efried: yeah, something like | 14:20 |
sean-k-mooney | efried: that is in the numa in placement spec | 14:20 |
efried | and when nova sees a flavor that asks for numa-y things, it adds a required trait for HW_NUMA_ROOT in its own group so we're sure to land on such a host. | 14:20 |
sean-k-mooney | there is a per-compute config option that lists the resource classes to report under a numa node | 14:20 |
efried | I need to read that spec again... | 14:20 |
efried | I've lost track of it. | 14:21 |
sean-k-mooney | i'll get the link to the right bit in a sec, i have it open | 14:21 |
sean-k-mooney | https://review.opendev.org/#/c/552924/14/specs/stein/approved/numa-topology-with-rps.rst@311 | 14:22 |
sean-k-mooney | [numa] | 14:22 |
sean-k-mooney | resource_classes = [VCPU, MEMORY_MB, VGPU] | 14:22 |
sean-k-mooney | the default was proposed as empty, so you had to opt in to things being reported as numa resources | 14:22 |
efried | ++ | 14:23 |
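In nova.conf terms the proposal reads roughly like this (spec still under review at this point, so the option name and default could change):

    [numa]
    # resource classes to report on NUMA node providers rather than the
    # compute node root RP; proposed default is empty (everything on root)
    resource_classes = [VCPU, MEMORY_MB, VGPU]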
sean-k-mooney | although i think PCPU should be in the list by default | 14:23 |
efried | Nah | 14:23 |
sean-k-mooney | given that you can only consume it if you have a numa topology | 14:23 |
sean-k-mooney | but i'm fine with it being empty too | 14:23 |
efried | I can think of a couple use cases right off the top that care about PCPU but don't care about NUMA. | 14:23 |
sean-k-mooney | and nova does not allow it today | 14:24 |
sean-k-mooney | i have been trying to get nova to allow it since before we introduced cpu pinning | 14:24 |
sean-k-mooney | the only reason we haven't is i have not convinced people it's ok to break backwards compatibility, since they can get the same behavior by also setting hw:numa_nodes=1 | 14:25 |
sean-k-mooney | anyway i need to go work on a static cache allocation spec... | 14:27 |
efried | cdent: Okay, so if we do this (no mixing, use_numa_for_placement type config), there should be a way for operators to continue to do things the way they are today, i.e. no NUMA topo in placement, rely completely on the NUMATopologyFilter. Conductor-level config opt use_placement_for_numa=$bool too? | 14:27 |
* cdent parses | 14:27 | |
sean-k-mooney | efried: um, the idea was to not enable the prefilter | 14:27 |
sean-k-mooney | i.e. the prefilter is what transforms the request into the numa version | 14:28 |
sean-k-mooney | but it will be a cluster-wide thing | 14:28 |
efried | yes, exactly sean-k-mooney. if the scheduler (sorry, not conductor) option is use_placement_for_numa=True, we use the prefilter to translate the request into the numa-y placement-y version. If not, not. | 14:29 |
efried | iow operator has to opt into placement-for-numa by | 14:29 |
efried | 1) setting the compute-level conf opt on each host they want NUMA instances to land on | 14:29 |
efried | 2) setting the sched-level opt to True | 14:29 |
sean-k-mooney | efried: sure but the prefilter was also going to have a bool flag | 14:29 |
efried | but they don't have to change their flavors at all, which I think is important. | 14:29 |
efried | sean-k-mooney: That *is* the bool flag. | 14:29 |
sean-k-mooney | efried: no, my point was all prefilters have a bool flag to enable them | 14:30 |
efried | we must be talking about different prefilters | 14:30 |
sean-k-mooney | so rather than use_placement_for_numa there would be an enable_numa_prefilter config | 14:30 |
efried | I'm talking about nova.scheduler.request_filter | 14:30 |
efried | sure, whatever you want to call it. | 14:31 |
sean-k-mooney | efried: the numa spec proposed a prefilter | 14:31 |
sean-k-mooney | efried: we were going to add a new one | 14:31 |
sean-k-mooney | just for numa | 14:31 |
efried | one boolean at the scheduler level that tells us whether to use placement-y numa-y GET /a_c or not. | 14:31 |
sean-k-mooney | and we were going to add a new one for cpu pinning too | 14:31 |
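Pulling the opt-in sketch together (every name here is hypothetical, taken from this conversation rather than merged config):

    # per compute that should host NUMA instances: report these classes
    # under NUMA node RPs (per the numa-topology-with-rps spec above)
    [numa]
    resource_classes = [VCPU, MEMORY_MB, VGPU]

    # scheduler level: enable the prefilter that rewrites the request spec
    # into the NUMA-aware GET /allocation_candidates form
    [scheduler]
    enable_numa_prefilter = True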
cdent | it's a shame that we have to turn on "use placementy numay" instead of the presence of appropriate inventory being enough. But I guess we have to translate request specs whatever we do... | 14:34 |
efried | yeah, the conductor has to have a way to know that the computes are representing NUMA topo in placement shapes. | 14:35 |
*** artom has quit IRC | 14:35 | |
*** artom has joined #openstack-placement | 14:35 | |
efried | If there were a way to use the *same* GET /a_c request syntax to request NUMA-y stuff whether the computes are treed or not... we'd be done. | 14:36 |
* cdent nods | 14:36 | |
efried | cdent: procedurally, we clearly aren't holding up code work pending merging the spec, since arbitrary suffixes already merged... | 14:36 |
cdent | yeah, this is sparta, or something | 14:37 |
efried | ...but (with all dramatic irony) do you think it would be a good or bad idea to split can_split into its own spec? | 14:37 |
cdent | I was thinking the same thing earlier today, if what we want is to be able to move independently. If that does happen I would keep can_split on the same gerrit review and move all the other stuff to the new one | 14:38 |
efried | yeah, to preserve the massive history | 14:41 |
efried | cdent: I don't know what the f to do now. Reached a point of thorough analysis paralysis. | 14:42 |
cdent | huh, I was thinking we just made something approaching a plan | 14:43 |
cdent | which is: | 14:43 |
cdent | a) split can_split to own spec | 14:43 |
cdent | b) say to ops our next steps on numa (pointing to both specs, indicating can_split is something we'd prefer not to do, ask if they will live) | 14:44 |
cdent | c) write code | 14:44 |
cdent | d) align nova's queries with whatever needs to happen (which has to happen anyway) | 14:45 |
efried | okay | 14:46 |
edleafe | [t hiCo] | 14:47 |
purplerbot | <efried> If there were a way to use the *same* GET /a_c request syntax to request NUMA-y stuff whether the computes are treed or not... we'd be done. [2019-05-29 14:36:00.313264] [n hiCo] | 14:47 |
cdent | you give off the vague air of someone not entirely convinced (which is probably totally appropriate) | 14:47 |
edleafe | efried: You seem to be teasing me... | 14:48 |
cdent | edleafe: c'mon, you're high, this problem doesn't go away with a different database | 14:48 |
edleafe | cdent: Whether I'm high or not is irrelevant :) | 14:48 |
cdent | efried: which of those tasks would you like me to take? | 14:50 |
efried | cdent: Thanks, I was about to ask | 14:51 |
efried | a), and possibly b)? | 14:51 |
efried | edleafe: I'm not teasing you. Graph db does not figure into my thinking at all, in any context. Because regardless of whether the theory is workable, I don't believe there is a practical path to making it a reality. (And also what cdent said.) | 14:52 |
cdent | efried: yeah can do both, but tomorrow | 14:52 |
efried | cdent: Okay, cool, thank you. I'm pretty far "behind" (cf. previously discussed definition) and that'll help me "catch up". | 14:53 |
cdent | rad | 14:53 |
edleafe | efried: And I don't believe that there is a practical path to making nested/shared/etc. a reality with sqla | 14:53 |
efried | I may be way off here, but I don't feel like our challenge right now is around "how would we make this work" | 14:54 |
efried | it's around defining what we want "this" to look like at all. | 14:54 |
cdent | yes | 14:54 |
edleafe | efried: I get that. The difficulty in making a particular choice is always influenced by the dread of implementing it, though | 14:55 |
efried | meh, I'm not worried about that. Mainly because I have faith in tetsuro :) | 14:55 |
*** artom has quit IRC | 14:55 | |
edleafe | efried: ...and magic | 15:00 |
efried | sean-k-mooney: In your commentary you say that we recommend not mixing numa and non-numa instances - can you point me to the doc that says that? | 15:15 |
efried | cause it seems to me that gives us a pretty decent case for saying we're going to simply not allow it at all anymore. | 15:15 |
sean-k-mooney | efried: i know it's in some of the downstream nfv tuning guides, i will check and see if it's upstream | 15:25 |
sean-k-mooney | stephenfin: ^ do you know where this would have been documented upstream? | 15:25 |
stephenfin | I've no idea if we document it or not | 15:27 |
stephenfin | If it was anywhere, it would be in https://docs.openstack.org/nova/latest/admin/cpu-topologies.html | 15:28 |
*** altlogbot_2 has quit IRC | 15:35 | |
*** altlogbot_0 has joined #openstack-placement | 15:36 | |
*** irclogbot_0 has quit IRC | 15:36 | |
*** irclogbot_3 has joined #openstack-placement | 15:38 | |
*** helenafm has quit IRC | 15:49 | |
cdent | efried, edleafe: anyone else, just a proof of concept for now but go to https://cdent.github.io/placeview/ and make uri something like https://burningchrome.com/placement/allocation_candidates?resources=DISK_GB:5&resources1=VCPU:1,MEMORY_MB:1024&resources2=VCPU:1,MEMORY_MB:1024&group_policy=isolate and auth 'admin' | 15:51 |
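For reference, the same query against placement directly, as a curl sketch ($PLACEMENT and $TOKEN are placeholders; numbered request groups and group_policy need microversion 1.25 or later):

    curl -s "$PLACEMENT/allocation_candidates?resources=DISK_GB:5&resources1=VCPU:1,MEMORY_MB:1024&resources2=VCPU:1,MEMORY_MB:1024&group_policy=isolate" \
      -H "X-Auth-Token: $TOKEN" -H "OpenStack-API-Version: placement 1.25"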
*** artom has joined #openstack-placement | 16:08 | |
efried | neat | 16:15 |
cdent | efried: I'll do the email and spec stuff tomorrow morning after any overnight fixups | 16:18 |
cdent | but for now, I'm away | 16:18 |
efried | o/ | 16:21 |
*** cdent has quit IRC | 16:22 | |
sean-k-mooney | efried: we have the note below about separating hosts for pinned instances from non-pinned: https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies | 16:51 |
sean-k-mooney | "Host aggregates should be used to separate pinned instances from unpinned instances as the latter will not respect the resourcing requirements of the former." | 16:51 |
sean-k-mooney | but that really should be more general | 16:52 |
*** e0ne has quit IRC | 16:58 | |
openstackgerrit | Kashyap Chamarthy proposed openstack/os-traits master: hw: cpu: Rework the directory layout; add missing traits https://review.opendev.org/655193 | 17:11 |
mriedem | can someone just ram this osc-placement stable/queens test only change through? https://review.opendev.org/#/c/556635/ | 17:49 |
*** dklyle has quit IRC | 19:21 | |
efried | mriedem: Looking... | 19:28 |
efried | mriedem: Why is osc-placement a required-project? | 19:28 |
*** dklyle has joined #openstack-placement | 19:29 | |
mriedem | this is (a) a backport and (b) the job definition stuff was originally pulled from openstack-zuul-jobs to move in-tree so this is what was in that repo when i originally copied it | 19:30 |
efried | so it's not actually necessary? Or we want to keep it there in case... some other project picks it up?? | 19:31 |
efried | mriedem: rammed | 19:34 |
mriedem | idk | 19:35 |
mriedem | if we remove it, we should remove it from master | 19:35 |
mriedem | realize that openstack/nova is in required-projects for nova's zuul yaml | 19:35 |
mriedem | as is placement for placement-perfload | 19:35 |
*** mriedem has quit IRC | 19:48 | |
*** mriedem has joined #openstack-placement | 19:55 | |
openstackgerrit | Merged openstack/osc-placement stable/queens: Migrate legacy-osc-placement-dsvm-functional job in-tree https://review.opendev.org/556635 | 20:10 |
openstackgerrit | Eric Fried proposed openstack/placement master: Add RequestGroupSearchContext class https://review.opendev.org/658778 | 20:23 |
openstackgerrit | Eric Fried proposed openstack/placement master: Move search functions to the research context file https://review.opendev.org/660048 | 20:23 |
openstackgerrit | Eric Fried proposed openstack/placement master: Cache provider ids in requested aggregates https://review.opendev.org/660049 | 20:23 |
openstackgerrit | Eric Fried proposed openstack/placement master: Remove normalize trait map func https://review.opendev.org/660050 | 20:23 |
openstackgerrit | Eric Fried proposed openstack/placement master: Move seek providers with resource to context https://review.opendev.org/660051 | 20:23 |
openstackgerrit | Eric Fried proposed openstack/placement master: Reuse cache result for sharing providers capacity https://review.opendev.org/660052 | 20:23 |
*** mriedem has quit IRC | 21:36 | |
*** jaypipes has quit IRC | 21:41 | |
*** efried has quit IRC | 21:49 | |
*** takashin has joined #openstack-placement | 21:49 | |
*** efried has joined #openstack-placement | 21:51 | |
*** artom has quit IRC | 23:05 | |
*** artom has joined #openstack-placement | 23:42 |