16:00:05 #startmeeting nova
16:00:05 Meeting started Tue Jul 27 16:00:05 2021 UTC and is due to finish in 60 minutes. The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 The meeting name has been set to 'nova'
16:00:31 o/
16:00:39 howdy folks, I'll be your chair for this meeting given our Supreme Leader is on vacation
16:00:50 \o
16:00:53 o/
16:01:21 awesome, one more person than the last meeting I chaired \o/
16:01:46 agenda is up at https://wiki.openstack.org/wiki/Meetings/Nova
16:01:58 o/
16:02:27 feel free to add items you wanna discuss in the last section above ^
16:02:31 moving on now
16:02:33 #topic Bugs (stuck/critical)
16:02:41 No Critical bugs
16:02:48 #link 11 new untriaged bugs (+1 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
16:02:58 I'll try to look at some of them tomorrow
16:03:14 any other bugs people wanna raise ?
16:03:46 nope, we had a gate issue due to Sphinx 4.x but sean-k-mooney fixed that for us
16:04:11 we could have had a cinderclient bug, but the v3 change is now merged, right?
16:04:22 stephenfin: excellent, thanks sean-k-mooney
16:04:22 ya, I think that is merged now
16:04:28 Yes, the nova one landed last week and the novaclient one went in earlier today
16:04:41 oki doki
16:04:58 https://review.opendev.org/c/openstack/nova/+/802334
16:04:59 were we limiting the cinderclient version ?
16:05:02 that was the nova one
16:05:15 bauzas: no, we just had a reference to v2
16:05:22 ok
16:05:28 replaced it with v3
16:05:28 anyway, moving on
16:05:40 #topic Gate status
16:05:46 Nova gate bugs #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure
16:06:33 #topic Gate status
16:07:02 meh, maybe the meetbot works with the #topic section
16:07:03 anyway
16:07:14 nothing to say about any gate issue ?
16:07:40 nope, not beyond the above
16:07:42 I can see a new one from lyarwood https://bugs.launchpad.net/nova/+bug/1938021
16:08:13 we are still using the temporary workaround for the ovsdb issue. I'll try and find out how the OVS change is coming along before M3, but no other update on that
16:08:45 hum, interesting
16:08:52 I have noticed while working on placement consumer types that a generation conflict gets hit on my patches, let me find the (old) gate bug
16:08:53 was there a new oslo release?
16:09:24 sean-k-mooney: good question
16:09:37 this one http://bugs.launchpad.net/bugs/1836754
16:10:20 bauzas: the messaging issue might be related to something moving so that we are no longer mocking properly in the func tests
16:10:43 melwitt: heh, who is working on this one ?
16:10:46 it occurs in general too, but while working on placement to do more during a PUT it makes it happen a lot more. so a heads up that I think we'll need to address that before placement consumer types will be usable
16:10:48 although perhaps not, it is correctly using the fake implementation.
16:11:09 melwitt: oh, it's you
16:11:21 huh, I could see that conflict happening alright
16:11:39 bauzas: I have restored mriedem's old patch about the bug and will add tests to it for review
16:11:47 do we hit race conditions for this a lot ? (the conflict)
16:11:55 yeah, it was originally from tssurya and cdent but both moved off of openstack before it was finished, so I've been working on finishing it
16:12:06 or is it just for a few job runs ?
16:12:37 bauzas: I have seen it on other patches yes, but not nearly as often as I do on the placement patches. on the placement patches it looks pretty much guaranteed
16:12:46 well, I'm not sure the frequency matters; with the scale we run at, it's going to block patches at least temporarily and require a recheck
16:13:22 so I think we should try and fix it sooner rather than later
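[Editor's note: for context on the conflict discussed above: since placement microversion 1.28, writing allocations via PUT /allocations/{consumer_uuid} requires echoing back the consumer generation the caller last read, and placement answers HTTP 409 if another writer has bumped it in the meantime. A minimal sketch of that read-then-write pattern in Python with the requests library; the endpoint and token below are hypothetical placeholders, not values from this meeting:]

    import requests

    PLACEMENT_URL = "http://placement.example.com"  # hypothetical endpoint
    HEADERS = {
        "X-Auth-Token": "TOKEN",                    # hypothetical token
        "OpenStack-API-Version": "placement 1.28",  # consumer generations
    }

    def put_allocations(consumer_uuid, allocations, project_id, user_id):
        """Write a consumer's allocations, retrying on generation conflicts."""
        for _ in range(3):
            # Read the consumer's current allocations to learn its generation.
            resp = requests.get(
                "%s/allocations/%s" % (PLACEMENT_URL, consumer_uuid),
                headers=HEADERS)
            generation = resp.json().get("consumer_generation")
            body = {
                "allocations": allocations,
                "project_id": project_id,
                "user_id": user_id,
                # Echo back the generation we read; placement rejects the
                # write with 409 if the consumer changed since then.
                "consumer_generation": generation,
            }
            resp = requests.put(
                "%s/allocations/%s" % (PLACEMENT_URL, consumer_uuid),
                json=body, headers=HEADERS)
            if resp.status_code != 409:
                return resp
            # 409: another writer raced us; re-read and retry.
        raise RuntimeError("gave up after repeated consumer generation conflicts")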
16:13:32 melwitt: so we get a conflict when deleting the allocation, but why are we getting an exception ?
16:13:50 the allocation should just be orphaned, that's it
16:13:53 just wanted to give everyone a heads up about it, because back when the fix was proposed there was a lot of discussion on the review. so if anyone has concerns about DELETE for most allocation cases rather than PUT, comment on the review
16:14:10 melwitt: sure, will review your change if you want
16:14:14 because it was changed to a PUT instead of a DELETE when consumer generations were added
16:14:21 ah shit, I see
16:14:34 it's mriedem's change that I'm going to complete
16:14:37 melwitt: thanks for working on it either way
16:14:57 melwitt: wait, deleting an allocation was changed to a PUT?
16:15:35 because DELETE doesn't have a body and we did not want to include the consumer generation in a query arg?
16:15:38 sean-k-mooney: https://review.opendev.org/c/openstack/nova/+/688802/2/nova/scheduler/client/report.py#b2107
16:15:38 here's the review https://review.opendev.org/c/openstack/nova/+/688802
16:15:40 sean-k-mooney: yes, https://review.opendev.org/c/openstack/nova/+/591597
16:15:47 sean-k-mooney: I don't know, tbh
16:15:51 sean-k-mooney: we now call put()
16:16:02 well, that by itself is a bug
16:16:13 I tend to agree
16:16:21 we should not use PUT for delete, and I'm not convinced we even need to include the generation
16:16:44 let's discuss this after the meeting, if people want
16:16:53 sure
16:16:56 but I tend to agree too
16:17:08 I need to understand the *why* for PUT
16:17:18 so, looking at the original change
16:17:23 anyway
16:17:24 moving on
16:17:36 Placement periodic job status #link https://zuul.openstack.org/builds?project=openstack%2Fplacement&pipeline=periodic-weekly
16:17:40 the only thing you get with PUT is that if you issue a delete of your instance and someone else updates it while it's deleting, you have a chance to reconsider your decision to delete it. afaik that might be the reasoning
16:18:06 melwitt: yeah, that's what I think too
16:18:12 for a race
16:18:16 anyway
16:18:26 yeah sorry, can move on
16:18:28 about the placement periodic job, well, we merged stuff
16:18:31 ya, that is what I assume too, but I don't think that is the right design choice; if we delete it, we should just delete it
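[Editor's note: the two approaches debated above, side by side. Per the links in the discussion, nova's report client originally removed a consumer's allocations with a plain DELETE, and the linked change (https://review.opendev.org/c/openstack/nova/+/591597) switched it to a PUT of an empty allocations mapping plus the last-seen consumer generation, so a racing update fails the removal with a 409 instead of being silently wiped. A sketch of the contrast, using the same hypothetical endpoint/token placeholders as above:]

    import requests

    PLACEMENT_URL = "http://placement.example.com"  # hypothetical
    HEADERS = {"X-Auth-Token": "TOKEN",             # hypothetical
               "OpenStack-API-Version": "placement 1.28"}

    def delete_allocations_unguarded(consumer_uuid):
        # Pre-generation style: plain DELETE. No request body, so no
        # generation check; the removal wins even if another writer
        # just updated the consumer's allocations.
        return requests.delete(
            "%s/allocations/%s" % (PLACEMENT_URL, consumer_uuid),
            headers=HEADERS)

    def delete_allocations_guarded(consumer_uuid, generation,
                                   project_id, user_id):
        # Generation-aware style: PUT an empty allocations mapping
        # together with the last-seen consumer generation. A concurrent
        # update makes placement answer 409, giving the caller a chance
        # to reconsider instead of discarding the other writer's change.
        body = {
            "allocations": {},
            "project_id": project_id,
            "user_id": user_id,
            "consumer_generation": generation,
        }
        return requests.put(
            "%s/allocations/%s" % (PLACEMENT_URL, consumer_uuid),
            json=body, headers=HEADERS)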
16:18:40 now the job looks to be working
16:18:51 (we merged a new o-r-c)
16:18:59 *version
16:19:02 o-r-c
16:19:06 ??
16:19:20 oh, os-resource-classes
16:19:26 yes
16:20:14 placement was updated to account for the new os-resource-classes release
16:20:26 yeah, sorry, was trying to find the patch
16:20:37 https://review.opendev.org/c/openstack/placement/+/796595
16:20:52 anyway, nothing more to tell
16:21:13 one thing we might want to consider is preparing the patch when we are preparing the release
16:21:35 except maybe https://zuul.openstack.org/build/0e135bb912b240c8bc2aa96049727a1a
16:21:56 we know we have to do this every time we release it, so we can preemptively submit the placement patch with a Depends-On on the releases repo patch
16:21:56 oh never mind, that was fixed by the above
16:22:14 sean-k-mooney: you mean the placement release patch ?
16:22:19 yep
16:22:28 for m-3 ?
16:22:34 yes
16:22:40 well
16:22:48 when we go to release o-r-c again
16:23:00 we can prepare a patch to placement for it
16:23:23 to update the canary test and have it ready to go by depending on the patch to the releases repo
16:23:32 well, generally this is done by the release mgmt team, but we can surely prepare it
16:23:36 I'm not sure if we will have another o-r-c release at M3
16:24:00 they will open the patch if we don't, but they ask the PTL to approve
16:24:32 so at that point we can just do the housekeeping patch for placement and pre-approve it so it will merge when the release patch does
16:24:45 they ask either the release folks or the PTL, yup :)
16:24:58 anyway, we can move on, just a thought
16:25:20 sean-k-mooney: keep your thought for next week when we get our PTL back
16:25:32 moving on
16:25:34 time is flying
16:25:49 Please look at the gate failures, file a bug, and add an elastic-recheck signature in the opendev/elastic-recheck repo (example: #link https://review.opendev.org/#/c/759967)
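[Editor's note: an elastic-recheck signature is a small YAML file added under queries/ in the opendev/elastic-recheck repo, named after the Launchpad bug number and containing a Logstash query that fingerprints the failure in the job logs. An illustrative sketch only; the bug number and query string below are invented, not taken from the example review above:]

    # queries/1234567.yaml -- hypothetical bug number and query
    query: >-
      message:"some distinctive error line from the failing job" AND
      tags:"screen-n-cpu.txt"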
16:25:56 #topic Release Planning
16:26:02 We're past M2 and spec freeze. M3 is in 5 weeks.
16:26:10 We have 21 approved and open blueprints and we have 5 weeks to finish them. Please focus review effort on bps in the Needs Code Review state.
16:26:29 that reminds me, people have to make sure their blueprint is set to Needs Code Review
16:26:51 #link https://launchpad.net/nova/+milestone/xena-3
16:27:35 the delivery status doesn't really mean anything, but it can help reviewers know which series to look at
16:27:49 so, if you love reviews, you know what to do
16:28:19 Next deadline is the non-client library freeze on the 16th of August
16:28:43 think about it for os-resource-classes ;)
16:28:58 moving on
16:29:06 #topic PTG Planning
16:29:14 PTG timeslots booked by gibi, see #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023787.html
16:29:24 The PTG etherpad is ready to be filled with topics: #link https://etherpad.opendev.org/p/nova-yoga-ptg
16:29:37 If you see a need for a specific cross-project section then please let gibi know
16:30:22 I'm pretty sure this etherpad will be filled before we have the PTG :)
16:30:47 #topic Stable Branches
16:30:56 elodilles: flood is yours
16:31:01 floor*
16:31:06 (oh man)
16:31:09 stable gates are not blocked
16:31:10 :)
16:31:18 at least as far as I can tell
16:31:19 excellent, excellent :D
16:31:46 tbh, this is not really the time of the cycle when I look at stable changes
16:31:47 not so much activity around stable branches (M2, M3, vacations, etc...)
16:32:16 yup, most of the team's efforts are focused on feature delivery as we speak, I guess
16:32:18 bauzas: true :)
16:32:21 moving on
16:32:30 #topic Sub/related team Highlights
16:32:36 Libvirt (bauzas)
16:32:45 bauzas: floor is yours
16:32:52 bauzas: thanks
16:32:57 bauzas: nothing to report, sir.
16:33:01 bauzas: thanks.
16:33:05 moving on.
16:33:25 #topic Open discussion
16:33:40 I refreshed and nothing popped up in the wiki page while we were speaking
16:34:01 so, nothing to say on this today, unless someone wants to raise something now
16:34:35 nope
16:34:41 (I guess my fake dialog frightened a lot of people who disappeared)
16:34:55 oh wow, at least someone stayed \o/
16:35:08 I'm not that bad an actor
16:35:10 can we ever really leave
16:35:39 I could just pretend I'll keep the stick for the whole hour and prevent you from using this channel for the last 25 mins
16:35:41 I don't have anything more for today
16:35:51 privilege of the power, whahahah
16:36:39 but, heh,
16:36:42 #stopmeeting
16:36:53 #endmeeting