*** yedongcan has joined #openstack-nova | 00:03 | |
*** openstack has joined #openstack-nova | 13:12 | |
*** ChanServ sets mode: +o openstack | 13:12 | |
gibi | mriedem: the previous patches just refactored the heal allocation code path | 13:13 |
gibi | to make it easy to add the port healing | 13:14 |
*** mmethot has quit IRC | 13:15 | |
gibi | mriedem: the basic structure of the final patch | 13:15 |
gibi | mriedem: during _heal_allocations_for_instance, first we check if the instance has ports that need healing | 13:18 |
gibi | this _get_port_allocations_to_heal is the majority of the added code there | 13:18 |
gibi | as it detects if there is any port that needs healing and generates the allocation fragment for that port | 13:19 |
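The detection gibi describes can be sketched roughly as follows. This is a hypothetical standalone helper with made-up data shapes, not the actual nova code: a port needs healing when neutron reports a resource request for it but its binding profile records no 'allocation' yet.

```python
def get_port_allocations_to_heal(ports):
    """Return {port_id: resources} for ports that need healing.

    Hypothetical sketch: a port needs healing when neutron reports a
    resource request for it but its binding profile has no 'allocation'
    key yet.
    """
    to_heal = {}
    for port in ports:
        resource_request = port.get('resource_request') or {}
        profile = port.get('binding:profile') or {}
        if resource_request and 'allocation' not in profile:
            # The allocation fragment mirrors the port's requested
            # resources, e.g. {'NET_BW_EGR_KILOBIT_PER_SEC': 1000}.
            to_heal[port['id']] = dict(resource_request.get('resources', {}))
    return to_heal
```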
mriedem | ok i'm assuming just checking if the port has a resource request (if hitting neutron directly) or checking the info cache for a bound port with the allocation in the binding profile? | 13:19 |
gibi | we are hitting neutron directly | 13:19 |
mriedem | hmm, that's kind of expensive isn't it? | 13:20 |
mriedem | why can't we use the info cache to tell if the vif has a port allocation? | 13:20 |
mriedem | or is that some kind of chicken and egg issue? or we don't trust the cache? | 13:20 |
gibi | we need to hit neutron to see if there is resource request on the port | 13:20 |
gibi | that is not part of the vif cache | 13:21 |
mriedem | sure but the binding profile in the vif cache has a thing that tells us if it had a resource request right? | 13:21 |
gibi | mriedem: now, the binding profile in the cache could only tell us if there is an allocation for the port | 13:21 |
openstackgerrit | Eric Fried proposed openstack/nova-specs master: WIP: Physical TPM passthrough https://review.opendev.org/667926 | 13:21 |
gibi | but if there is no allocation then we don't know if there is a need for such allocation | 13:21 |
mriedem | so let's say the binding profile in the vif cache doesn't have that allocation marker, but we find a port with a resource request - we're going to heal the allocation by putting allocations to placement for that instance right? | 13:22 |
mriedem | are we also going to update the cache to indicate it's now got an allocation? | 13:23 |
gibi | mriedem: right, and also updating neutron with the rp uuid | 13:23 |
gibi | mriedem: hm, we are not updating the info cache | 13:23 |
gibi | :/ | 13:23 |
mriedem | yeah, so i'm wondering about the scenario that our cache is busted, we go tell placement there is an allocation, but then what happens when we teardown that port/server in the compute service? are we basing that on the cache? b/c if so, we'll leak the allocation correct? | 13:24 |
openstackgerrit | Eric Fried proposed openstack/nova master: WIP: Physical TPM passthrough https://review.opendev.org/667928 | 13:24 |
mriedem | similarly are we only creating the port allocation in heal_allocations if the port is bound? | 13:25 |
gibi | mriedem: we cannot leak allocation during delete as delete deletes everything for the consumer | 13:25 |
mriedem | gibi: what about when we detach the port though? | 13:25 |
mriedem | but don't delete the server | 13:25 |
gibi | I hope that code path works based on the allocation key in the binding profile, but let me check | 13:26 |
mriedem | https://github.com/openstack/nova/blob/7b769ad403751268c60b095f722437cbed692071/nova/network/neutronv2/api.py#L1672 | 13:26 |
mriedem | looks like it does... | 13:27 |
gibi | mriedem: on the other hand, is there any way to trigger an info_cache refresh or do we need to do it manually? | 13:27 |
mriedem | there is a periodic task in the compute service that will forcefully rebuild the cache from info in neutron, | 13:28 |
mriedem | but i don't think that code is doing anything with the resource request / allocation information yet | 13:28 |
mriedem | might have been something i brought up when reviewing the series in stein | 13:28 |
mriedem | this is the call from the compute task https://github.com/openstack/nova/blob/7b769ad403751268c60b095f722437cbed692071/nova/compute/manager.py#L7445 | 13:29 |
mriedem | this is the neutronv2 api method that builds the vif object in the cache https://github.com/openstack/nova/blob/7b769ad403751268c60b095f722437cbed692071/nova/network/neutronv2/api.py#L2842 | 13:30 |
mriedem | we copy the binding:profile directly off the port https://github.com/openstack/nova/blob/7b769ad403751268c60b095f722437cbed692071/nova/network/neutronv2/api.py#L2887 | 13:30 |
*** rcernin has quit IRC | 13:31 | |
mriedem | maybe that's enough? the 'allocation' is in the binding:profile yeah? and that maps to the resource provider uuid on which the allocation belongs | 13:31 |
*** jistr is now known as jistr|call | 13:31 | |
gibi | copying binding:profile is enough as it has the 'allocation' key | 13:31 |
gibi | and that points to the RP | 13:32 |
gibi | we allocate from | 13:32 |
gibi | mriedem: this is where the heal port allocation makes sure that neutron binding:profile is updated https://review.opendev.org/#/c/637955/28/nova/cmd/manage.py@1857 | 13:32 |
openstackgerrit | Merged openstack/nova stable/stein: libvirt: Rework 'EBUSY' (SIGKILL) error handling code path https://review.opendev.org/667389 | 13:33 |
mriedem | ok so going back to my original question, cache vs neutron directly, i guess heal_allocations is going to the source of truth first (neutron) and working based on that, which is more fool-proof if possibly less performant | 13:34 |
mriedem | if we healed based on the cache, and the cache was stale or missing information somehow, then we would fail to heal allocations for the ports in that busted cache | 13:34 |
gibi | mriedem: there is no way to figure out what to heal if we don't hit neutron. the info cache is only good to see if there is an allocation or not for a port, but only neutron knows if the port needs one | 13:35 |
mriedem | with the way you have it, heal_allocations would heal the allocations and the port binding profile and the _heal_instance_info_cache task in the compute service running that instance would heal the cache | 13:35 |
mriedem | sure, i realize that, | 13:35 |
mriedem | i was thinking of using the cache to pre-filter the list of ports we're asking neutron about | 13:35 |
mriedem | i.e. if we're iterating over 50 instances that's 50 calls to neutron | 13:36 |
gibi | mriedem: you mean if we see an allocation key in the cache then we assume that such a port doesn't need to be healed? | 13:36 |
mriedem | i do like that you added an option to skip this though | 13:36 |
mriedem | gibi: something like that | 13:37 |
mriedem | if we would have stored some flag in the vif cache originally that indicated the port had resource requests (not necessarily the resource request itself), heal_allocations could have just checked that flag in the cache and then call neutron for that port | 13:37 |
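The pre-filter mriedem is describing might have looked like the sketch below. The 'has_resource_request' flag is entirely hypothetical; as gibi points out, no such flag was ever stored in the vif cache, which is why this remains a what-if.

```python
def ports_to_query(network_info):
    """Hypothetical pre-filter over cached vifs: return only the port ids
    whose cached binding profile carries a (made-up) 'has_resource_request'
    flag, so heal_allocations would call neutron just for those ports."""
    return [vif['id'] for vif in network_info
            if (vif.get('profile') or {}).get('has_resource_request')]
```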
mriedem | and if ifs and buts were candy and nuts we'd all have a very fine christmas | 13:38 |
gibi | mriedem: these vifs were created _before_ there was any logic in nova about bandwidth resources. So I don't see how to have a flag in nova about it | 13:38 |
mriedem | oh right this is also for healing the case of ports attached with qos before stein... | 13:39 |
mriedem | i forgot about that | 13:39 |
gibi | this is exactly the case where the port was attached without an allocation | 13:39 |
mriedem | anyway, i'm trying to optimize something that we just want to work on the first go, so i can drop it | 13:39 |
gibi | because neutron had support for bandwidth way earlier | 13:39 |
*** takashin has joined #openstack-nova | 13:40 | |
mriedem | the other good thing is that heal_allocations now has the option to heal a single instance so if an operator is trying to fix one specific instance (b/c of some user ticket or something) then they can just target that one rather than all instances to heal 1 | 13:40 |
*** rajinir has joined #openstack-nova | 13:41 | |
gibi | yeah, I remember I had to adapt to that improvement at some point | 13:41 |
gibi | one extra complexity to note: as we need to update both placement and neutron, the update cannot be made atomic | 13:42 |
mriedem | and i'm assuming the earlier patches to refactor the code didn't really require changes to the functional tests that validate the interface (unlike you'd need with unit tests) | 13:43 |
*** munimeha1 has joined #openstack-nova | 13:43 | |
*** dave-mccowan has quit IRC | 13:43 | |
mriedem | i wrote all of the heal_allocations stuff with functional tests b/c i didn't trust unit tests for testing the interface | 13:43 |
gibi | mriedem: yes, no functional test was harmed in the refactoring process | 13:43 |
mriedem | if we fail to update one (placement or neutron) we could rollback the other... | 13:43 |
mriedem | heh, "no functional test was harmed in the making of this patch" | 13:44 |
mriedem | ok this has been a useful discussion before i dive in | 13:44 |
mriedem | thanks | 13:44 |
gibi | mriedem: yes, rollback is one option. I went for just printing what failed and how to update neutron manually | 13:45 |
gibi | mriedem: if you think rollback is better then I have to temporarily store the original allocation of the instance before the heal, and put that back if the neutron update fails | 13:46 |
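The rollback option gibi describes could look roughly like this. Function and parameter names are hypothetical, not the actual nova code (the real patch opted for printing manual-recovery instructions instead): placement is written first, and if any neutron port update fails the pre-heal allocations are restored.

```python
def heal_with_rollback(placement, neutron, consumer_uuid,
                       original_allocations, new_allocations, port_updates):
    """Hypothetical rollback flow: write the healed allocations to
    placement first, then update the neutron port bindings; if a
    neutron update fails, restore the pre-heal allocations so the two
    services stay consistent."""
    placement.put_allocations(consumer_uuid, new_allocations)
    try:
        for port_id, profile in port_updates.items():
            neutron.update_port(port_id, {'port': {'binding:profile': profile}})
    except Exception:
        # Put back what the instance had before the heal attempt.
        placement.put_allocations(consumer_uuid, original_allocations)
        raise
```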
*** pcaruana has quit IRC | 13:46 | |
*** tbachman has quit IRC | 13:46 | |
*** pcaruana has joined #openstack-nova | 13:47 | |
*** bbowen has joined #openstack-nova | 13:48 | |
openstackgerrit | Ghanshyam Mann proposed openstack/nova master: Multiple API cleanup changes https://review.opendev.org/666889 | 13:50 |
mriedem | we could also retry the port updates with a backoff loop, like 3 retries or something if it's a temporary network issue | 13:51 |
mriedem | could be follow ups though | 13:51 |
mriedem | i'll keep it in mind when i review and leave a comment | 13:51 |
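The retry-with-backoff mriedem suggests as a follow-up could be sketched as below. This is a hypothetical helper, not merged code; it retries a neutron port update a few times with exponential backoff in case the failure is transient.

```python
import time

def update_port_with_retry(neutron, port_id, body, retries=3, backoff=1.0):
    """Hypothetical follow-up: retry a neutron port update with
    exponential backoff in case the failure is a transient network
    issue, re-raising after the last attempt."""
    for attempt in range(retries):
        try:
            return neutron.update_port(port_id, body)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))
```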
gibi | mriedem: ack | 13:54 |
*** eharney has quit IRC | 13:58 | |
*** mkrai_ has joined #openstack-nova | 14:00 | |
mriedem | nova meeting happening | 14:02 |
*** Luzi has quit IRC | 14:02 | |
*** tbachman has joined #openstack-nova | 14:04 | |
*** _alastor_ has joined #openstack-nova | 14:06 | |
*** mkrai_ has quit IRC | 14:06 | |
mriedem | gibi: also in the back of my mind i've been thinking of adding something to the nova-next job post_test_hook, like creating a server, deleting its allocations in placement and then running heal_allocations just to make sure we have integration test coverage as well, but it's lower priority | 14:10 |
gibi | mriedem: I did similar manual testing for my patch in devstack and discovered bugs. So I agree that it would be useful | 14:11 |
*** eharney has joined #openstack-nova | 14:11 | |
*** lpetrut has quit IRC | 14:13 | |
*** mlavalle has joined #openstack-nova | 14:15 | |
*** ricolin_ has joined #openstack-nova | 14:20 | |
jrosser | guilhermesp: this lot https://review.opendev.org/#/q/topic:fix-octavia+(status:open+OR+status:merged) | 14:20 |
jrosser | guilhermesp: there are some jobs running with some depends-on, but it's difficult to see whats going on | 14:20 |
jrosser | oops -ECHAN | 14:21 |
* guilhermesp looking jrosser | 14:23 | |
*** dpawlik has quit IRC | 14:27 | |
*** mmethot has joined #openstack-nova | 14:28 | |
*** jistr|call is now known as jistr | 14:31 | |
*** mmethot has quit IRC | 14:33 | |
*** mmethot has joined #openstack-nova | 14:42 | |
openstackgerrit | Surya Seetharaman proposed openstack/nova stable/stein: Grab fresh power state info from the driver https://review.opendev.org/667948 | 14:50 |
*** NostawRm has quit IRC | 14:52 | |
*** xek__ has quit IRC | 14:53 | |
mriedem | http://status.openstack.org/reviews/#nova sure is fun for abandon fodder | 14:54 |
openstackgerrit | Vrushali Kamde proposed openstack/nova master: Support filtering of hosts by forbidden aggregates https://review.opendev.org/667952 | 14:55 |
*** xek has joined #openstack-nova | 14:55 | |
*** ratailor has joined #openstack-nova | 14:56 | |
*** xek_ has joined #openstack-nova | 14:58 | |
efried | abandon away | 14:58 |
*** ivve has quit IRC | 15:00 | |
*** takashin has left #openstack-nova | 15:01 | |
*** xek has quit IRC | 15:01 | |
*** dpawlik has joined #openstack-nova | 15:01 | |
*** tbachman has quit IRC | 15:02 | |
*** dpawlik has quit IRC | 15:07 | |
openstackgerrit | Surya Seetharaman proposed openstack/nova stable/rocky: Grab fresh power state info from the driver https://review.opendev.org/667955 | 15:09 |
*** cfriesen has joined #openstack-nova | 15:09 | |
*** hamzy has quit IRC | 15:13 | |
*** pcaruana has quit IRC | 15:14 | |
*** dpawlik has joined #openstack-nova | 15:17 | |
*** dpawlik has quit IRC | 15:22 | |
openstackgerrit | Matt Riedemann proposed openstack/nova master: RT: replace _instance_in_resize_state with _is_trackable_migration https://review.opendev.org/560467 | 15:24 |
mriedem | efried: another rebase on that one ^ | 15:25 |
*** udesale has joined #openstack-nova | 15:25 | |
mriedem | dansmith: can you come back on this instance.hidden patch https://review.opendev.org/#/c/631123/ ? | 15:25 |
dansmith | <insert hidden pun here> | 15:26 |
mriedem | dansmith.hidden = True | 15:26 |
*** _alastor_ has quit IRC | 15:27 | |
efried | mriedem: I can't even see where the conflict was on that one. | 15:27 |
*** _alastor_ has joined #openstack-nova | 15:27 | |
*** ohwhyosa has quit IRC | 15:28 | |
efried | oh, never mind | 15:28 |
dansmith | mriedem: is the hidden check in the api really required? aren't you filtering out hidden instances from the list query? | 15:30 |
mriedem | dansmith: i'd have to look again to confirm but i believe there is a window where both are not hidden | 15:31 |
mriedem | while swapping over | 15:31 |
*** NostawRm has joined #openstack-nova | 15:31 | |
mriedem | which, | 15:31 |
dansmith | mriedem: in that case, the check for hidden-ness doesn't help right? | 15:31 |
mriedem | arguably the api will still filter - it might pick the "wrong" one but... | 15:32 |
mriedem | the comment in there mentions something about that right? updated_at and such | 15:32 |
dansmith | right, but the check of the hidden field would be pointless in that case | 15:32 |
*** tbachman has joined #openstack-nova | 15:32 | |
dansmith | the comment is talking about the case where they're both not hidden | 15:32 |
dansmith | I'm talking about the case where one is.. what's the point of checking it if instance_list doesn't return them? | 15:33 |
*** bbowen has quit IRC | 15:34 | |
*** luksky has quit IRC | 15:34 | |
mriedem | ok so this is the point where we could have 2 copies where hidden=False https://review.opendev.org/#/c/635646/32/nova/conductor/tasks/cross_cell_migrate.py@593 so the DB API would return both from each cell while listing | 15:34 |
mriedem | now that DB API isn't returning hidden=True by default... | 15:34 |
dansmith | ...and so checking for instance.hidden in compute/api does what? | 15:34 |
mriedem | yeah in this case now b/c db api won't return the hidden one, "or instance.hidden" will always be false | 15:35 |
dansmith | right | 15:36 |
mriedem | the note above still applies, but the logical or doesn't | 15:36 |
mriedem | so are you ok with removing the or condition and leaving the comment? | 15:36 |
dansmith | yep, just said that in the review | 15:37 |
mriedem | yup, thanks | 15:37 |
*** hamzy has joined #openstack-nova | 15:40 | |
*** factor has quit IRC | 15:41 | |
*** hongbin has joined #openstack-nova | 15:41 | |
*** factor has joined #openstack-nova | 15:41 | |
openstackgerrit | melanie witt proposed openstack/nova master: Require at least cryptography>=2.7 https://review.opendev.org/667765 | 15:48 |
*** icarusfactor has joined #openstack-nova | 15:49 | |
*** factor has quit IRC | 15:50 | |
*** ttsiouts has quit IRC | 15:51 | |
*** itssurya has quit IRC | 15:52 | |
*** ttsiouts has joined #openstack-nova | 15:52 | |
*** ccamacho has quit IRC | 15:55 | |
*** ttsiouts has quit IRC | 15:56 | |
*** ratailor has quit IRC | 15:57 | |
*** artom|gmtplus3 has quit IRC | 15:59 | |
*** wwriverrat has joined #openstack-nova | 16:00 | |
*** wwriverrat has quit IRC | 16:00 | |
*** damien_r has quit IRC | 16:01 | |
*** wwriverrat has joined #openstack-nova | 16:01 | |
*** jangutter has quit IRC | 16:02 | |
*** jangutter has joined #openstack-nova | 16:02 | |
mriedem | dansmith: i'm going to drop https://review.opendev.org/#/c/631123/34/nova/tests/unit/compute/test_compute_api.py then since it's not really valid since the db api wouldn't return a hidden=True instance | 16:09 |
dansmith | yeah | 16:09 |
*** igordc has joined #openstack-nova | 16:11 | |
*** wwriverrat has left #openstack-nova | 16:13 | |
openstackgerrit | Miguel Ángel Herranz Trillo proposed openstack/nova master: Add support for 'initenv' elements https://review.opendev.org/667975 | 16:13 |
openstackgerrit | Miguel Ángel Herranz Trillo proposed openstack/nova master: Add support for cloud-init on LXC instances https://review.opendev.org/667976 | 16:13 |
mriedem | looks like someone is trying to make libvirt+lxc work again | 16:15 |
openstackgerrit | Miguel Ángel Herranz Trillo proposed openstack/nova master: Add support for cloud-init on LXC instances https://review.opendev.org/667976 | 16:16 |
*** xek__ has joined #openstack-nova | 16:20 | |
*** bbowen has joined #openstack-nova | 16:21 | |
openstackgerrit | Merged openstack/nova-specs master: support virtual persistent memory https://review.opendev.org/601596 | 16:21 |
*** xek_ has quit IRC | 16:23 | |
*** mdbooth_ has joined #openstack-nova | 16:29 | |
efried | mriedem: At some point recently you wrote a functional test that used a weigher to prefer host1 so that the assertion that we landed on host2 was provably valid every time (instead of just by chance). Can you put your finger on that easily? | 16:30 |
*** luksky has joined #openstack-nova | 16:30 | |
*** rdopiera has quit IRC | 16:31 | |
mriedem | efried: look for HostNameWeigher in the nova/tests/functional | 16:31 |
mriedem | there are several examples | 16:31 |
efried | thanks | 16:31 |
mriedem | ima push this big ass series b/c i've been rebasing it locally for weeks and want to flush it | 16:31 |
*** mdbooth has quit IRC | 16:32 | |
*** panda has quit IRC | 16:32 | |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add InstanceAction/Event create() method https://review.opendev.org/614036 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add Instance.hidden field https://review.opendev.org/631123 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add TargetDBSetupTask https://review.opendev.org/627892 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add CrossCellMigrationTask https://review.opendev.org/631581 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Execute TargetDBSetupTask https://review.opendev.org/633853 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add prep_snapshot_based_resize_at_dest compute method https://review.opendev.org/633293 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add PrepResizeAtDestTask https://review.opendev.org/627890 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add prep_snapshot_based_resize_at_source compute method https://review.opendev.org/634832 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add nova.compute.utils.delete_image https://review.opendev.org/637605 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add PrepResizeAtSourceTask https://review.opendev.org/627891 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Refactor ComputeManager.remove_volume_connection https://review.opendev.org/642183 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add power_on kwarg to ComputeDriver.spawn() method https://review.opendev.org/642590 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add finish_snapshot_based_resize_at_dest compute method https://review.opendev.org/635080 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add FinishResizeAtDestTask https://review.opendev.org/635646 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add Destination.allow_cross_cell_move field https://review.opendev.org/614035 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Execute CrossCellMigrationTask from MigrationTask https://review.opendev.org/635668 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Plumb allow_cross_cell_resize into compute API resize() https://review.opendev.org/635684 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Filter duplicates from compute API get_migrations_sorted() https://review.opendev.org/636224 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add cross-cell resize policy rule and enable in API https://review.opendev.org/638269 | 16:33 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: WIP: Enable cross-cell resize in the nova-multi-cell job https://review.opendev.org/656656 | 16:33 |
*** mdbooth_ has quit IRC | 16:36 | |
*** panda has joined #openstack-nova | 16:36 | |
*** whoami-rajat has quit IRC | 16:44 | |
*** davidsha has quit IRC | 16:47 | |
*** udesale has quit IRC | 16:54 | |
*** icarusfactor has quit IRC | 16:59 | |
efried | Anyone know artom's status? | 17:00 |
efried | seemed like he was traveling recently | 17:00 |
mnaser | efried: i think i remember seeing his nick with some timezone appended to it | 17:02 |
*** xek_ has joined #openstack-nova | 17:02 | |
efried | yeah, I recall that too, GMT+3 or something. But I wasn't sure where he was or whether he was going to be afk for some amount of time | 17:03 |
tbachman | p!spy artom | 17:03 |
tbachman | oops | 17:03 |
tbachman | lol | 17:03 |
efried | worst spy ever | 17:04 |
tbachman | lol | 17:04 |
edleafe | efried: https://leafe.com/timeline/%23openstack-nova/2019-06-27T08:57:05 | 17:04 |
tbachman | purplerbot: log at http://p.anticdent.org/logs/artom | 17:04 |
dansmith | efried: he's on GMT+3 | 17:04 |
efried | oh, so he's around, just out "early" (relative to me). Cool. | 17:04 |
dansmith | yar | 17:05 |
*** xek__ has quit IRC | 17:05 | |
*** whoami-rajat has joined #openstack-nova | 17:06 | |
*** hamzy has quit IRC | 17:07 | |
*** ricolin_ has quit IRC | 17:12 | |
*** ricolin has joined #openstack-nova | 17:12 | |
*** hamzy has joined #openstack-nova | 17:12 | |
*** martinkennelly has quit IRC | 17:13 | |
*** maciejjozefczyk has quit IRC | 17:17 | |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add integration testing for heal_allocations https://review.opendev.org/667994 | 17:26 |
*** kaisers has quit IRC | 17:27 | |
*** kaisers has joined #openstack-nova | 17:31 | |
*** ralonsoh has quit IRC | 17:35 | |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add integration testing for heal_allocations https://review.opendev.org/667994 | 17:35 |
*** _erlon_ has quit IRC | 17:42 | |
*** xek_ has quit IRC | 17:43 | |
*** ociuhandu has quit IRC | 17:48 | |
openstackgerrit | Merged openstack/nova master: Remove global state from the FakeDriver https://review.opendev.org/656709 | 17:50 |
openstackgerrit | Merged openstack/nova master: Enhance service restart in functional env https://review.opendev.org/512552 | 17:50 |
*** ociuhandu has joined #openstack-nova | 17:53 | |
*** pcaruana has joined #openstack-nova | 17:54 | |
*** ociuhandu has quit IRC | 17:58 | |
*** mvkr has quit IRC | 18:03 | |
*** ricolin has quit IRC | 18:09 | |
*** bbowen_ has joined #openstack-nova | 18:15 | |
*** bbowen has quit IRC | 18:18 | |
*** hamzy has quit IRC | 18:24 | |
*** hamzy has joined #openstack-nova | 18:25 | |
*** hamzy has quit IRC | 18:30 | |
openstackgerrit | Merged openstack/nova master: libvirt: flatten rbd images when unshelving an instance https://review.opendev.org/457886 | 18:43 |
*** bbowen__ has joined #openstack-nova | 19:08 | |
*** bbowen_ has quit IRC | 19:11 | |
*** raub has joined #openstack-nova | 19:28 | |
*** bbowen__ has quit IRC | 19:32 | |
*** tbachman has quit IRC | 19:34 | |
openstackgerrit | Merged openstack/nova master: reorder conditions in _heal_allocations_for_instance https://review.opendev.org/655458 | 19:47 |
*** whoami-rajat has quit IRC | 19:54 | |
*** shilpasd has quit IRC | 20:04 | |
*** ivve has joined #openstack-nova | 20:12 | |
*** eharney has quit IRC | 20:55 | |
*** pcaruana has quit IRC | 21:05 | |
*** mmethot has quit IRC | 21:11 | |
*** tesseract has quit IRC | 21:19 | |
*** rcernin has joined #openstack-nova | 21:24 | |
openstackgerrit | Merged openstack/nova master: Prepare _heal_allocations_for_instance for nested allocations https://review.opendev.org/637954 | 21:33 |
mriedem | efried: https://review.opendev.org/#/c/637955/28 tag teamed | 21:36 |
openstackgerrit | Merged openstack/nova master: pull out put_allocation call from _heal_* https://review.opendev.org/655459 | 21:36 |
mriedem | which do you want to be? https://toomanyposts.files.wordpress.com/2011/11/hartfound.jpg | 21:36 |
efried | dude, as long as I get to wear pink tights, who cares?? | 21:37 |
mriedem | ok i'm bret then | 21:38 |
sean-k-mooney | has the follow up spec for numa with pmem been submitted or is that for U | 21:39 |
efried | I haven't seen such a thing | 21:40 |
mriedem | that seems to be getting the cart way before the horse | 21:40 |
sean-k-mooney | the current spec says numa will be addressed in a follow up spec | 21:41 |
sean-k-mooney | mriedem: not really | 21:41 |
sean-k-mooney | i think it makes sense to proceed with the current spec that just merged | 21:41 |
sean-k-mooney | but im concerned the proposal is problematic | 21:41 |
sean-k-mooney | specifically im really not ok with the driver generating a numa object without going through the numa topology filter unless we ensure we dont restrict the guest ram and cpus to a numa node | 21:43 |
sean-k-mooney | if we generate a virtual numa node and do no affinity to a host numa node for any resources it's probably fine | 21:43 |
*** mriedem is now known as mriedem_afk | 21:45 | |
*** bbowen__ has joined #openstack-nova | 21:53 | |
*** hongbin has quit IRC | 22:01 | |
*** mvkr has joined #openstack-nova | 22:01 | |
openstackgerrit | Eric Fried proposed openstack/nova master: Un-safe_connect and publicize get_providers_in_tree https://review.opendev.org/668062 | 22:04 |
openstackgerrit | sean mooney proposed openstack/nova-specs master: Libvirt: add vPMU spec for train https://review.opendev.org/651269 | 22:08 |
*** tbachman has joined #openstack-nova | 22:20 | |
*** munimeha1 has quit IRC | 22:23 | |
*** slaweq has quit IRC | 22:23 | |
*** igordc has quit IRC | 22:36 | |
*** mlavalle has quit IRC | 22:40 | |
*** mlavalle has joined #openstack-nova | 22:41 | |
*** mlavalle has quit IRC | 22:41 | |
*** luksky has quit IRC | 23:00 | |
*** spatel has joined #openstack-nova | 23:13 | |
*** spatel has quit IRC | 23:14 | |
*** sean-k-mooney has quit IRC | 23:19 | |
*** sean-k-mooney has joined #openstack-nova | 23:20 | |
alex_xu | johnthetubaguy: mriedem_afk sean-k-mooney efried thanks for all the review, I will continue to look at the comment | 23:27 |
*** tonyb has quit IRC | 23:32 | |
*** tonyb has joined #openstack-nova | 23:32 | |
*** mmethot has joined #openstack-nova | 23:33 | |
*** slaweq has joined #openstack-nova | 23:42 | |
*** slaweq has quit IRC | 23:46 | |
*** efried has quit IRC | 23:47 | |
*** efried has joined #openstack-nova | 23:48 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!