16:00:27 #startmeeting nova
16:00:27 Meeting started Tue Jan 31 16:00:27 2023 UTC and is due to finish in 60 minutes. The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:27 The meeting name has been set to 'nova'
16:00:33 hey ho
16:00:44 o/
16:00:56 o/
16:01:02 o/
16:01:04 o/
16:01:16 o7
16:01:40 okay let's start
16:01:49 #topic Bugs (stuck/critical)
16:01:56 #info No Critical bug
16:02:00 #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 27 new untriaged bugs (+1 since the last meeting)
16:02:04 #info Add yourself to the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:02:21 bug baton can be passed over to artom if he wants
16:02:30 Sure
16:02:38 I feel like I've skipped.... many weeks
16:02:43 (unintentionally)
16:03:24 thanks
16:03:36 #info bug baton is being passed to artom
16:03:46 artom: again, this is just tentative
16:03:52 don't feel that much obliged
16:04:04 Well don't say that
16:04:30 ok, so, before we move on, do people want to discuss some CVE bugfix per se, or is everything OK since we're on track ?
16:04:46 we know we need to progress with wallaby
16:05:06 but I wanted to know whether people wanted to discuss it by now ?
16:06:24 crickets ? ok, so, let's move on
16:06:43 #topic Gate status
16:06:47 #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:06:52 #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly Nova&Placement periodic jobs status
16:07:18 gate is still wedgy, but tbh, I had to focus on other priorities last week and this week :(
16:07:38 same here, I made no progress on the functional test instability
16:07:39 from what I see, the gate is quite difficult but still mergeable
16:07:47 it's better
16:07:57 I've been unable to reproduce any of the problems locally, unfortunately
16:07:59 on the functest saga ?
16:08:07 dansmith: that's the problem, I tried too
16:08:26 dansmith: running it the exact same way as the gate, using the testr subunits
16:08:36 but got nothing
16:09:13 hence me having pivoted, and tbh, there were people afraid of some security bug last week that diverted my attention
16:09:32 but ok, if things are better, that's cool
16:10:01 either way, I'll try to go back and look at it when I'm done with my own prios (including some feature implementation I promised)
16:10:15 #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:10:19 #info STOP DOING BLIND RECHECKS aka. 'recheck' https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures
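A minimal sketch of the subunit-replay approach mentioned above for chasing the functional test instability: it pulls the test ids, in run order, out of a gate worker's testrepository.subunit file so the same ordering can be retried locally. The file names, the _TestIdCollector helper, and the assumption that python-subunit and stestr are installed are all illustrative; this is not existing Nova tooling.

    # extract_gate_order.py -- illustrative helper, not part of Nova.
    # Reads a gate worker's subunit stream and writes the test ids in the
    # order they were run, so the same ordering can be replayed locally.
    import sys

    import subunit
    import testtools


    class _TestIdCollector(testtools.StreamResult):
        """Collect test ids in stream order, keeping each id only once."""

        def __init__(self):
            super().__init__()
            self.test_ids = []

        def status(self, test_id=None, **kwargs):
            if test_id and test_id not in self.test_ids:
                self.test_ids.append(test_id)


    def main(subunit_path='testrepository.subunit', out_path='gate-order.txt'):
        collector = _TestIdCollector()
        with open(subunit_path, 'rb') as stream:
            # Route any non-subunit bytes to a named attachment instead of failing.
            subunit.ByteStreamToStreamResult(stream, non_subunit_name='stdout').run(collector)
        with open(out_path, 'w') as out:
            out.write('\n'.join(collector.test_ids) + '\n')
        print('wrote %d test ids to %s' % (len(collector.test_ids), out_path), file=sys.stderr)


    if __name__ == '__main__':
        main()

The resulting list can then be replayed in a single worker with something like 'stestr run --serial --load-list gate-order.txt' from inside the functional tox environment.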
16:11:21 ok, let's move on then
16:11:28 #topic Release Planning
16:11:32 #link https://releases.openstack.org/antelope/schedule.html
16:11:36 #info Antelope-3 is in 2 weeks
16:11:43 /o\
16:12:01 #link https://etherpad.opendev.org/p/nova-antelope-blueprint-status Blueprint status for 2023.1
16:12:06 so I created a tracking etherpad
16:12:18 most of the blueprints are in 'unknown' territory
16:12:37 I need to investigate the crime scene and see whether the owners created some changes
16:13:20 so, use that etherpad as you want
16:13:41 ideally, blueprint owners are more than welcome to amend the etherpad with their comments, including patches to review
16:14:01 and cores are welcome to add comments on things they want to review or already did
16:14:58 I tried to add an 'API Impact' bullet on the blueprints that actually touch the API, not exclusively by a microversion
16:15:31 ok, if nothing else, moving on ?
16:16:30 sure
16:16:34 let's proceed
16:17:43 #topic Review priorities
16:17:50 #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+(label:Review-Priority%252B1+OR+label:Review-Priority%252B2)
16:17:59 #info As a reminder, cores eager to review changes can +1 to indicate their interest, +2 for committing to the review
16:18:09 nothing to mention here
16:18:12 moving on
16:18:17 #topic Stable Branches
16:18:25 passing the mic to elodilles
16:18:41 #info stable releases from last week: nova 26.1.0 (zed); nova 25.1.0 (yoga); nova 24.2.0 (xena) -- with CVE fix
16:18:54 #info stable branches seem to be unblocked / OK back till xena
16:19:05 #info wallaby branch is blocked by tempest (thanks gmann for working on it - https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031941.html )
16:19:16 #info ussuri and train gates are broken ('Could not build wheels for bcrypt, cryptography' errors)
16:19:36 stable/wallaby fix is not merged yet #link https://review.opendev.org/c/openstack/devstack/+/871782
16:19:51 ussuri/train might work with a newer pip version
16:19:51 gmann: thanks for having worked hard on that
16:19:51 the sdk job started failing, which I need to make non-voting to get the first fix merged
16:20:03 gmann: what a pile of cards
16:20:22 yeah, the tempest pin on stable EM is always like this :)
16:20:31 gmann: openstacksdk got released some hrs ago, isn't that fixing the gate?
16:20:57 just asking
16:21:01 elodilles: need to check, it is failing on <=stable/xena only
16:21:07 ack
16:21:17 #link https://zuul.openstack.org/builds?job_name=openstacksdk-functional-devstack&branch=stable%2Fxena&branch=stable%2Fwallaby&skip=0
16:21:53 anyway, last item from me:
16:21:57 #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:22:07 EOM
16:22:22 cool
16:22:34 well, not so cool, but moving on
16:22:42 #topic Open discussion
16:22:47 (gmann) PyPi additional maintainers audit for Nova repo
16:22:54 gmann: I've seen your email
16:22:58 ok
16:23:01 and I looked at nova
16:23:10 os-vif is impacted
16:23:53 os-resource-classes too
16:24:04 os-traits
16:24:28 and that's it IIRC
16:24:31 osc-placement also I think
16:25:30 oh, actually placement too
16:26:00 ah yes. it's 'openstack-placement'
16:26:07 gmann: can you summarize the issue and what's requested for each of the projects ?
16:26:13 not for me, but for others
16:26:18 sure
16:27:04 the TC discussed those additional maintainers on PyPi, who can release packages without OpenStack knowing about it, or can add more additional maintainers
16:27:15 which is nothing but a security issue
16:28:14 the TC decided to clean that up, but wanted projects to audit their additional maintainers first, to check whether they can be removed (if they are inactive or agree to it) or whether the package maintenance can be handed over to them (and then retire it from openstack)
16:28:56 the xstatic-font-awesome repo from horizon is one example of handing over maintenance to additional external maintainers and using it in horizon as one of the users
16:29:59 gmann: so I amended the tracking etherpad https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup#L141
16:30:02 but mostly the packages are openstack-initiated ones, and whoever set up the PyPi project initially is in the additional maintainers list, which should be easy to clean up
16:30:19 I'm personally OK with removing all marked people on those
16:30:36 they are no longer active contributors on the projects
16:30:48 do people disagree with this ?
16:30:58 pasting it
16:30:59 here in nova, we can check if any of those maintainers are active (in OpenStack or in PyPi maintenance) and we need to talk to them first
16:31:00 nova: os-vif has berrange and jaypipes as maintainers; os-resource-classes has cdent as maintainer; placement has cdent as maintainer; osc-placement has malor as maintainer; os-traits has o-tony as maintainer
16:31:10 bauzas: fine with me
16:31:38 fine by me too
16:31:39 tbh I don't even know malor
16:31:47 which is a bit of a security risk
16:32:10 +1
16:32:18 +1
16:32:29 I thought I got os-vif moved over to me at some point, but sure
16:32:35 gmann: OK, I'll just leave a comment in the etherpad saying that the nova community agrees on removing those names
16:32:46 bauzas: perfect. thanks
16:33:01 sean-k-mooney: tbh, I'm OK with leaving only openstackci as the sole maintainer
16:33:14 ya that's fine
16:33:19 from a security pov, I understand the concerns
16:34:00 and that reminds me of the beam accelerator security issue
16:34:09 oh, it was more that we needed to fix something, but it was launchpad I got them to amend
16:35:00 can we then consider this done ?
16:35:56 I think so, at least from our part
16:36:03 yes, that is needed from the project side. adding the audit result in the etherpad
16:36:04 the maintainers need to actually be updated
16:36:26 I mean, are we done discussing this topic now ?
16:36:39 if so, nothing left on the agenda
16:37:14 except maybe documenting the volume detach issues, but since they are actually unrelated to our releases, I think I'll just drop this item
16:37:36 so
16:37:42 any other item to raise by now ?
16:38:31 Can you quickly discuss https://review.opendev.org/c/openstack/nova/+/868089/8 ?
16:39:19 I would like to know if we would like to backport it and to which version.
16:40:43 * bauzas looks
16:41:34 isn't it changing the logic ?
16:41:37 based on the impl it is backportable
16:41:52 wait
16:42:09 it has a dependency on https://review.opendev.org/q/topic:bug%252F1628606 which is being backported
16:42:14 this is post_live_mig_at_dest()
16:42:27 which is run on the dest
16:42:54 my question is, in a rolling upgrade scenario with operators moving workloads from old compute to new compute
16:43:08 would that break them ?
16:43:27 bauzas: the bug is there until the dest is upgraded
16:44:00 we said we're gonna rely on https://review.opendev.org/c/openstack/nova/+/791135
16:44:26 but correct me if I'm wrong, this won't happen for an A to B livemig if A is old, nope ?
16:44:56 this chit-chat limbo dance between two hosts is confusing
16:45:09 I never know which compute runs which codepath
16:45:11 this patch is trying to fix a bug that happens on the dest after the live migration finished
16:45:27 at that point there is no return to the source host
16:45:56 Amit fixed that: in this case we set the instance to point to the dest host so a hard reboot can recover the instance
16:46:24 but we backported Amit's patch, right?
16:46:29 yes
16:46:36 ok, so we're safe
16:46:46 Uggla had a case where a very similar failure could leave the instance in Migrating state
16:47:00 this fix is putting the instance in error in this case too
16:47:12 I see
16:47:35 ok, then to answer Uggla's question, I don't see any controversy in backporting such a change once we merge it
16:47:58 yepp
16:48:24 down to ?
16:48:34 the same version as Amit's
16:48:40 yup
16:48:48 I think that is proposed back to train
16:48:49 which is train iirc
16:48:52 yup
16:49:04 but as we said, we're on hold on wallaby
16:49:13 nothing prevents us from doing the work tho
16:50:23 can we end the meeting then ?
16:50:55 ok for me.
16:51:08 nothing else from me
16:51:57 then
16:51:59 thanks all
16:52:04 #endmeeting
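For context on the recovery behaviour discussed above for https://review.opendev.org/c/openstack/nova/+/868089, here is a simplified, hypothetical sketch of the pattern (not the actual Nova code): once the live migration has finished on the destination there is no return to the source host, so a late failure points the instance at the destination host and puts it in error, rather than leaving it stuck in a migrating state, so that a hard reboot can recover it. The Instance dataclass and the finish_post_migration_setup helper are invented for illustration.

    # Hypothetical sketch only -- not the actual Nova implementation.
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class Instance:
        uuid: str
        host: str
        vm_state: str = 'active'
        task_state: Optional[str] = None


    def finish_post_migration_setup(instance, dest_host):
        """Stand-in for the destination-side plumbing (networking, volumes, ...)."""
        raise RuntimeError('simulated failure after the live migration finished')


    def post_live_migration_at_destination(instance, dest_host):
        instance.task_state = 'migrating'
        try:
            finish_post_migration_setup(instance, dest_host)
            instance.host = dest_host
            instance.vm_state = 'active'
        except Exception:
            # There is no return to the source host at this point: record the
            # destination as the owning host and flag the error so the operator
            # can hard reboot the guest to recover it, instead of leaving it
            # stuck in a migrating task state.
            instance.host = dest_host
            instance.vm_state = 'error'
        finally:
            instance.task_state = None


    if __name__ == '__main__':
        vm = Instance(uuid='demo', host='compute-src')
        post_live_migration_at_destination(vm, 'compute-dst')
        print(vm)  # host='compute-dst', vm_state='error', task_state=None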