16:00:07 <bauzas> #startmeeting nova
16:00:07 <opendevmeet> Meeting started Tue Aug 29 16:00:07 2023 UTC and is due to finish in 60 minutes.  The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:07 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:07 <opendevmeet> The meeting name has been set to 'nova'
16:00:18 <bauzas> hey folks, welcome
16:00:25 <auniyal> o/
16:00:26 <bauzas> I'm finally back from Canada
16:00:37 <bauzas> #link https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
16:00:44 <bauzas> who's around ?
16:01:00 <Uggla> o/
16:01:20 <gmann> o/
16:01:23 <elodilles> o/
16:01:32 * gibi is partially around (on a call in parallel)
16:01:36 <bauzas> we can softly start
16:01:49 <bauzas> #topic Bugs (stuck/critical)
16:01:53 <bauzas> #info No Critical bug
16:01:57 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 49 new untriaged bugs (+9 since the last meeting)
16:01:58 <dansmith> o/
16:02:02 <bauzas> #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:02:18 <bauzas> so, we would like to do some bug triage
16:02:50 <bauzas> auniyal: you're next in the roster list, fancy taking this?
16:02:58 <auniyal> ack bauzas
16:03:01 <auniyal> yes
16:03:05 <bauzas> very much appreciated
16:03:18 <bauzas> #info bug baton is auniyal
16:03:23 <bauzas> thanks
16:03:36 <bauzas> moving on, I don't see any critical bug to discuss
16:03:40 <bauzas> #topic Gate status
16:03:59 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:04:05 <bauzas> #link https://etherpad.opendev.org/p/nova-ci-failures
16:04:16 <bauzas> #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly Nova&Placement periodic jobs status
16:04:26 <bauzas> #info nova-emulation seems flakey
16:04:34 <bauzas> #info https://zuul.openstack.org/build/209dc9676c4149b281420d9ae18f889c
16:05:05 <dansmith> things are better lately, thanks to a lot of hard work by a number of people, most recently probably melwitt with an LVM fix that might help with some stability, and a functional fix from gibi
16:05:22 <bauzas> dansmith: ++, I was about to ask for news :)
16:05:49 <dansmith> still need to work on some random timeouts we get which I think is related to stress on the api workers (at least in part) and melwitt has a green/native threads refactor up which I hope will eventually help with that
16:06:00 <bauzas> I have to admit the issues I've seen are out of my knowledge area :(
16:06:25 <dansmith> also still occasional OOMs which are hard to address when we're legitimately running out of memory if we get unlucky with what gets started when
16:06:29 <bauzas> dansmith: ack, any change in particular for greenthreading ?
16:06:54 <dansmith> bauzas: we need to move to native threads in nova-api when under wsgi so that we can handle more than two things at once
16:07:11 <dansmith> *and* so we can enable it in devstack so that other services which can already do so can leverage that :)
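(The native-threads point above can be sketched outside Nova: with a pool of OS threads, several blocking requests proceed concurrently, which a single greenthread-bound WSGI worker cannot do for calls that aren't monkey-patched. This is an illustrative Python sketch, not Nova code; the handler name and timings are hypothetical.)

```python
# Illustrative only: why native threads let an API process "handle more
# than two things at once". Four blocking requests overlap in a native
# thread pool, so total wall time stays near one request's latency.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # Stand-in for a blocking call (DB query, RPC wait, ...).
    time.sleep(0.2)
    return f"response-{req_id}"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(4)))
elapsed = time.monotonic() - start

# Run serially (as unpatched blocking calls on one greenthread would be),
# the same four requests would take about 0.8 s; here they overlap.
print(results, elapsed)
```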
16:07:28 <bauzas> I see
16:08:13 <dansmith> anyway, that's pretty much the status.. I encourage everyone to look critically at any failure they see and try to resolve it or nail it down while waiting for your recheck :)
16:08:53 <bauzas> well, I'd say, I'm a bad doctor, I'm only able to tell what's wrong without any cure
16:09:14 <bauzas> and my diagnostics are always biased
16:09:25 <bauzas> but yeah, teamwork, agreed :)
16:09:28 <dansmith> well, doctors require continuing education to get better ;)
16:09:53 <bauzas> not only doctors :)
16:09:59 <bauzas> anyway, thanks for the update
16:10:41 <bauzas> I'm still wondering if the GPT errors I'm seeing in the console are somehow related to the timeouts
16:11:16 <dansmith> I don't think so,
16:11:26 <dansmith> we get those every time we attach a blank disk to an instance
16:11:38 <bauzas> ack, ok
16:11:44 <dansmith> the correlation is lots of timeouts come from volume tests,
16:11:52 <dansmith> which always dump console logs, and you see those things that look error-like
16:12:00 <dansmith> correlated, but not caused
16:12:17 <bauzas> yeah, correlation is not cause, I get it :)
16:13:06 <bauzas> I know sean-k-mooney was also investigating the alpine image usage in some job
16:13:14 <bauzas> have we moved forward on it ?
16:14:55 <bauzas> anyway, I think gate stability deserves some discussions at the PTG, if we haven't closed the gaps yet
16:15:28 <dansmith> not that I know of
16:16:13 <bauzas> moving on then
16:16:26 <sean-k-mooney> i havent looked at that in a while
16:16:31 <bauzas> #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:16:49 <sean-k-mooney> but it was failing because it did not have growfs, i.e. the root disk partition was not being expanded
16:16:56 <sean-k-mooney> easy fix, just have not had time to work on it
16:17:25 <bauzas> sean-k-mooney: ack, this week I'm busy but next week, I could help
16:17:31 <sean-k-mooney> we can move on but https://review.opendev.org/q/topic:alpine is the topic for context
16:17:58 <bauzas> moving on indeed
16:18:01 <bauzas> #topic Release Planning
16:18:10 <bauzas> #link https://releases.openstack.org/bobcat/schedule.html
16:18:17 <bauzas> #link https://etherpad.opendev.org/p/nova-bobcat-blueprint-status Etherpad for tracking blueprints status
16:18:22 <bauzas> #info 2 days before FeatureFreeze
16:18:33 <bauzas> so, I tried to update the etherpad based on my recent findings
16:18:50 <bauzas> we have about 5 series that are reviewable
16:19:11 <bauzas> and we only have 2 days to get them into some state
16:19:29 <bauzas> of those 5 series, 2 of them are quite large
16:20:08 <bauzas> so I don't expect a full merge, but I'd appreciate if some reviews could be made, particularly in order to let them move forward quickly once we open Caracal feature dev
16:20:45 <bauzas> and we can discuss whether we can partially merge some series or hold them entirely
16:20:59 <bauzas> so, please, take your pens
16:21:32 <bauzas> I'll continue to give a few review rounds till the last day, but I also need traction from others :)
16:22:50 <bauzas> and if you're an owner of one of those series, feel free to ping us on IRC if you think you'd like feedback
16:24:11 <bauzas> I think we're done on this topic
16:24:46 <bauzas> I'll probably ping a couple of folks after this meeting to get a proper status
16:25:30 <bauzas> #topic Review priorities
16:25:36 <bauzas> #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+(label:Review-Priority%252B1+OR+label:Review-Priority%252B2)
16:26:27 <bauzas> #topic Stable Branches
16:26:34 <bauzas> elodilles: floor is yours
16:26:46 <elodilles> well, i was on PTO last week,
16:27:02 <elodilles> and i didn't see much activity on stable branches
16:27:16 <elodilles> i guess people are more focused on Bobcat nowadays ;)
16:27:25 <bauzas> this week, for sure
16:27:34 <elodilles> anyway, i'm not aware of any stable gate failures
16:27:47 <elodilles> #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:28:02 <elodilles> feel free to add anything there if you find any issue ^^^
16:28:28 <elodilles> that's all from me
16:28:49 <auniyal> #info Please *review* these backport patches of stable 2023.1, zed and yoga for the next minor release
16:28:59 <auniyal> #info most of these already have one +2.
16:29:07 <auniyal> #link https://etherpad.opendev.org/p/release-liaison-PatchesToReview
16:30:15 <bauzas> yeah, it would be nice to deliver some stable releases while we reach the rc period
16:30:23 <elodilles> ++
16:30:53 <bauzas> I'll personally doublecheck if some are missing my +2
16:31:13 <auniyal> I added 2 more patches for stable/2023.1
16:31:34 <auniyal> earlier bauzas had merged all
16:31:44 <bauzas> a 2023.1 stable release before we cut rc1 is important
16:32:09 <bauzas> operators can then upgrade to the latest .z release on Antelope before upgrading smoothly to Bobcat
16:32:27 <bauzas> but ok, I'll try to chase them down
16:32:54 <bauzas> on a note, my usual stable branches item :
16:33:00 <bauzas> #info train-eol patch proposed https://review.opendev.org/c/openstack/releases/+/885365
16:33:08 <bauzas> I guess it's waiting for a second releases core
16:33:13 <elodilles> oh, it's still not merged... :S
16:33:23 <elodilles> yes, it is waiting for that
16:33:35 <bauzas> I know US trains are pretty long
16:33:43 <elodilles> :)
16:33:55 <bauzas> anyway, moving on, last topic
16:34:01 <bauzas> #topic Open discussion
16:34:21 <bauzas> nothing on my agenda, even after a wiki refresh
16:34:29 <bauzas> so, anything anyone ?
16:37:03 <bauzas> looks not
16:37:30 <bauzas> thanks all, and tbh I'm happy to see all of you back :)
16:37:36 <bauzas> #endmeetinh
16:37:38 <bauzas> doh
16:37:43 <bauzas> #endmeeting