16:01:01 <bauzas> #startmeeting nova
16:01:01 <opendevmeet> Meeting started Tue Nov 22 16:01:01 2022 UTC and is due to finish in 60 minutes. The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:01 <opendevmeet> The meeting name has been set to 'nova'
16:01:06 <elodilles> o/
16:02:40 <bauzas> sorry folks, forgot we had an internal meeting at the same time
16:02:52 <bauzas> not sure we'll have a lot of folks around
16:03:10 <jsanemet> hello o/
16:03:18 <sean-k-mooney> o/
16:03:30 <Uggla> o/
16:03:31 <bauzas> ok, let's start then
16:04:01 <bauzas> actually, I forgot to provide my PTG notes for the internal meeting :D
16:04:09 <bauzas> so I won't need to discuss :)
16:04:39 <bauzas> #topic Bugs (stuck/critical)
16:04:45 <bauzas> #info No Critical bug
16:04:49 <dansmith> o/
16:04:50 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 11 new untriaged bugs (+4 since the last meeting)
16:04:55 <bauzas> #info Add yourself to the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:05:09 <bauzas> sean-k-mooney: can you take the bug baton for next week?
16:05:15 <opendevreview> Balazs Gibizer proposed openstack/nova stable/train: Reproduce bug 1896463 in func env https://review.opendev.org/c/openstack/nova/+/841288
16:05:16 <opendevreview> Balazs Gibizer proposed openstack/nova stable/train: Set instance host and drop migration under lock https://review.opendev.org/c/openstack/nova/+/841444
16:05:35 <sean-k-mooney> I guess, but it's likely the last round I can do this year
16:05:44 <bauzas> all cool
16:07:04 <gibi> o/
16:07:05 <bauzas> #info bug baton is being passed to sean-k-mooney
16:07:21 <bauzas> any bug to discuss, or do we move on?
16:07:27 <bauzas> (please say the latter :) )
16:08:12 <bauzas> k, moving on
16:09:10 <bauzas> #topic Gate status
16:09:16 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:09:21 <bauzas> #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly Nova & Placement periodic jobs status
16:09:29 <bauzas> huzzah, the periodics are back \o/
16:09:42 <bauzas> #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:09:47 <bauzas> #info STOP DOING BLIND RECHECKS aka. 'recheck' https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures
16:10:02 <bauzas> anything to discuss about the gate?
16:11:43 <sean-k-mooney> just a note
16:11:49 <bauzas> sure
16:11:53 <sean-k-mooney> the gate should now be on jammy/22.04
16:12:09 <bauzas> have we merged the changes?
16:12:20 <sean-k-mooney> the base jobs changed at m1
16:12:21 <bauzas> sorry, was a bit off the IRC channel during those two days
16:12:26 <bauzas> kk
16:12:31 <sean-k-mooney> so I believe they merged last week
16:13:01 <sean-k-mooney> anyway, just keep an eye out and see if there are any new issues
16:13:16 <sean-k-mooney> that's all I wanted to raise
16:13:21 <bauzas> https://review.opendev.org/c/openstack/nova/+/861111
16:13:50 <bauzas> so the base jobs seem to have been merged indeed, but not our jobs
16:14:21 <sean-k-mooney> actually, that patch is keeping one job on focal
16:14:24 <bauzas> gmann: can you help us by explaining what we do atm?
16:14:36 <bauzas> yes, like 2 years ago
16:14:38 <sean-k-mooney> although I would argue grenade should be that job
16:14:39 <bauzas> we keep an old job
16:14:51 <sean-k-mooney> I don't think we need a new one
16:14:59 <bauzas> and then we run all the other tempest jobs with the latest ubuntu version
16:15:37 <bauzas> yeah so
16:15:38 <sean-k-mooney> this cycle the grenade jobs should be on focal, which should provide enough test coverage
16:15:45 <bauzas> #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/861116 Base jobs are migrated to Jammy
16:16:04 <sean-k-mooney> I'll comment on the review
16:16:09 <bauzas> #link https://review.opendev.org/c/openstack/nova/+/861111 Nova-specific jobs migration to Jammy needs to be reviewed
16:16:27 <bauzas> sean-k-mooney: me too
16:16:35 <bauzas> ok, anything else?
16:16:53 <sean-k-mooney> nope
16:16:57 <sean-k-mooney> not from me
16:17:05 <bauzas> moving on
16:18:23 <bauzas> #topic Release Planning
16:18:29 <bauzas> #link https://releases.openstack.org/antelope/schedule.html
16:18:34 <bauzas> #info Antelope-1 was last week
16:18:39 <bauzas> #info Spec review day had 6 specs merged and more than 10 others being discussed.
16:18:48 <bauzas> kudos to the reviewers who worked on it
16:19:21 <bauzas> as a reminder, we will have another spec review day in a couple of weeks, before milestone-2
16:19:45 <bauzas> we should also plan an implementation review day between a-1 and a-2
16:20:30 <bauzas> voilà, anything to discuss about either the spec review day or antelope-1?
16:21:32 <bauzas> fwiw, we merged a release for novaclient https://review.opendev.org/c/openstack/releases/+/861360
16:21:52 <bauzas> and ditto for os-vif https://review.opendev.org/c/openstack/releases/+/864527
16:22:29 <bauzas> if no questions, let's move on
16:26:47 <bauzas> sorry, I'm being dragged into a meeting, can someone continue?
16:26:49 <bauzas> gibi: ?
16:27:03 <gibi> sure, I will try
16:27:29 <gibi> #topic Review priorities
16:27:35 <gibi> #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+(label:Review-Priority%252B1+OR+label:Review-Priority%252B2)
16:27:40 <gibi> #info As a reminder, cores eager to review changes can +1 to indicate their interest, +2 for committing to the review
16:27:54 <gibi> is there anything we need to discuss on these priorities?
16:28:43 <gibi> #topic Stable Branches
16:28:52 <gibi> elodilles: your turn :)
16:29:04 <gibi> I have a question on stable/train
16:29:26 <elodilles> #info wallaby is blocked due to failing nova-ceph-multistore, workaround: https://review.opendev.org/c/openstack/nova/+/865134
16:29:45 <elodilles> this is an interesting one, stable cores, please review if possible ^^^
16:29:55 <elodilles> #info ussuri and train are blocked due to the nova-multi-cell job, possible fix: https://review.opendev.org/c/openstack/tempest/+/865300
16:30:15 <elodilles> ^^^ thanks sean-k-mooney for calling my attention to the issue :)
16:30:26 <elodilles> #info the rest of the stable branches should be OK
16:30:33 <elodilles> #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:30:43 <sean-k-mooney> I approved the first one; adding os-brick to required-projects will also make depends-on work
16:30:43 <elodilles> and that's it from my side
16:31:14 <gibi> elodilles: do you know about any issues on stable/train?
16:31:29 <bauzas> ok, I'm back
16:31:30 <gibi> I see multiple jobs failing with
16:31:30 <elodilles> sean-k-mooney: thanks. I think that is the most straightforward workaround for the situation
16:31:30 <gibi> rsync: [sender] link_stat "/var/lib/zuul/builds/eacfbbe59b6a4297a4aebb16cc7d2140/work/ca-bundle.pem" failed: No such file or directory (2)
16:31:33 <gibi> rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
16:31:55 <gibi> ie https://review.opendev.org/c/openstack/nova/+/841444/4
16:32:06 <gibi> s/ie/eg/
16:32:34 <elodilles> gibi: rsync problems usually happen due to infra issues, but those are usually temporary / intermittent
16:33:02 <elodilles> gibi: train is blocked, see above :)
16:33:05 <gibi> I got it multiple times. I did a rebase at the start of the meeting; if that hits the same thing, I will ping the team again, as that will mean it is blocked
16:33:08 <bauzas> #chair gibi
16:33:08 <opendevmeet> Current chairs: bauzas gibi
16:33:26 <bauzas> (just added gibi as chair, not sure the bot will correctly set the topics)
16:33:35 <gibi> elodilles: aaah,
16:33:45 <gibi> sorry, it is a bit hard to be in two meetings in parallel
16:33:51 <gibi> so we know that train is broken
16:33:53 <elodilles> :S
16:33:57 <gibi> then I have no other question :)
16:34:04 <elodilles> gibi: yep, but not with the rsync error
16:34:25 <elodilles> so that's a new thing if it is a permanent problem :(
16:34:26 <gibi> interesting
16:34:28 <bauzas> we should name the train bug "sncf", as the train is always broken :)
16:34:42 <gibi> anyhow, I will report back if the same failure happens in the recent rebase too
16:34:58 <elodilles> gibi: ack, thx
16:35:07 <gibi> I thank you :)
16:35:17 <gibi> any other topics on stable?
16:35:23 <bauzas> looks not
16:35:57 <bauzas> moving on?
16:36:01 <bauzas> I'll need to speak in a little while
16:36:26 <gibi> #topic Open discussion
16:36:30 <gibi> (jsanemet) Improve usage of privsep in Nova:
16:36:31 <gibi> Bug ticket: https://bugs.launchpad.net/nova/+bug/1996213 Proposal: https://etherpad.opendev.org/p/nova-privsep-review Does the change deserve a spec?
16:36:48 <bauzas> yeah so
16:37:22 <gibi> jsanemet: on the spec question: do you feel you have open questions that would need an upfront decision from the core team?
16:37:25 <bauzas> I discussed with jsanemet about his efforts and proposed that he introduce himself and discuss how we could handle the paperwork for the privsep effort during the meeting
16:37:40 <bauzas> the spec thing was more of an open question we had
16:37:52 <bauzas> I honestly don't feel it requires a spec
16:38:06 <bauzas> but we do still need to consider documenting the new context usages
16:38:12 <bauzas> hence the etherpad
16:38:52 <jsanemet> hello
16:38:53 <bauzas> I don't see any upgrade impact, or any other impact which would require us to accept a spec
16:39:38 <bauzas> anyone disagree with that plan?
16:40:21 <dansmith> what is the effort? just finishing some things?
16:40:28 <dansmith> or something more complex?
16:40:29 <bauzas> dansmith: https://etherpad.opendev.org/p/nova-privsep-review
16:40:42 <bauzas> dansmith: this is about fine-graining the privsep contexts to the right callers
16:40:43 <sean-k-mooney> there are several
16:40:58 <sean-k-mooney> this looks like the first low-hanging fruit
16:40:59 <dansmith> yeah, so splitting into different contexts?
16:41:06 <sean-k-mooney> yep
16:41:08 <bauzas> as a first step, yes
16:41:21 <dansmith> I dunno, I think that's going to be a bunch of review of which calls need which contexts, right?
16:41:31 <bauzas> that's my concern
16:41:35 <dansmith> some things might seem simple but require both net and sys, etc
16:41:35 <bauzas> I don't think we need a spec
16:41:38 <bauzas> but,
16:42:01 <dansmith> I guess I'm not sure why no spec
16:42:03 <bauzas> I'd appreciate it if we could agree on the contexts and the callers, rather than doing that only by reviewing the patches
16:42:07 <dansmith> this etherpad is half a spec already
16:42:15 <sean-k-mooney> there probably should be a spec to define the scope
16:42:21 <dansmith> agree
16:42:27 <bauzas> there, we have our answer
16:42:37 <bauzas> we can use the spec format to agree on the split
16:42:57 <sean-k-mooney> sure
16:43:30 <bauzas> jsanemet: are you ok with the outcome?
16:43:39 <bauzas> I can help you with the process-y things
16:44:05 <jsanemet> sure
16:44:22 <jsanemet> then I will translate the etherpad into a spec and we can discuss it further
16:44:26 <jsanemet> seems good to me
16:44:33 <bauzas> from a paperwork pov, you may also need to supersede the RFE bug by creating a launchpad blueprint
16:44:48 <bauzas> but that's easy peasy
16:45:01 <bauzas> anyway, looks like we have a direction
16:45:06 <jsanemet> yes, I am a little green regarding what documentation is required, so any help will be appreciated
16:45:39 <bauzas> jsanemet: as a reminder, there is a spec template https://specs.openstack.org/openstack/nova-specs/specs/2023.1/template.html
16:46:10 <bauzas> I guess that's it for today
16:46:11 <jsanemet> all right, I will save that
16:46:15 <jsanemet> thanks a lot
16:46:36 <bauzas> ok folks, any other item to raise before we wrap up?
16:46:58 <bauzas> looks not
16:47:01 <bauzas> perfect timing
16:47:05 <bauzas> thanks all
16:47:08 <bauzas> #endmeeting
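
[Editor's note] The "fine-graining the privsep contexts" discussion above can be sketched as follows. This is a minimal, hypothetical stand-in modeled on the shape of oslo.privsep's `PrivContext` (Nova's actual code lives under `nova/privsep/` and dispatches entrypoints to a privileged daemon); the context and function names below are invented for illustration, not taken from the meeting or from Nova:

```python
# Stand-in for oslo_privsep.priv_context.PrivContext, modeling the split
# discussed above: instead of one catch-all privileged context holding many
# Linux capabilities, each group of callers gets a context carrying only the
# capabilities it actually needs. All names here are hypothetical.

class PrivContext:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = frozenset(capabilities)

    def entrypoint(self, func):
        # oslo.privsep would forward the call to a privileged daemon here;
        # this sketch only records which context a function runs under.
        func.privsep_context = self
        return func

# Before the split: one broad context used by everything.
sys_admin = PrivContext('sys_admin',
                        {'CAP_SYS_ADMIN', 'CAP_NET_ADMIN',
                         'CAP_CHOWN', 'CAP_DAC_OVERRIDE'})

# After the split: finer-grained contexts per caller group.
net_admin = PrivContext('net_admin', {'CAP_NET_ADMIN'})
dac_admin = PrivContext('dac_admin', {'CAP_CHOWN', 'CAP_DAC_OVERRIDE'})

@net_admin.entrypoint
def set_device_mtu(dev, mtu):
    return (dev, mtu)  # placeholder for the real privileged operation

@dac_admin.entrypoint
def chown_file(path, uid, gid):
    return (path, uid, gid)  # placeholder

# Each entrypoint now carries strictly fewer capabilities than the old
# catch-all context did.
assert set_device_mtu.privsep_context.capabilities < sys_admin.capabilities
assert chown_file.privsep_context.capabilities < sys_admin.capabilities
```

This also illustrates dansmith's review concern: deciding, call by call, which context an entrypoint belongs to (and whether some calls need capabilities from more than one group) is exactly the scoping work a spec would pin down.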