16:00:07 #startmeeting nova
16:00:07 Meeting started Tue Aug 29 16:00:07 2023 UTC and is due to finish in 60 minutes. The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:07 The meeting name has been set to 'nova'
16:00:18 hey folks, welcome
16:00:25 o/
16:00:26 I'm eventually released from Canada
16:00:37 #link https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
16:00:44 who's around ?
16:01:00 o/
16:01:20 o/
16:01:23 o/
16:01:32 * gibi is partially around (on a call in parallel)
16:01:36 we can softly start
16:01:49 #topic Bugs (stuck/critical)
16:01:53 #info No Critical bug
16:01:57 #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 49 new untriaged bugs (+9 since the last meeting)
16:01:58 o/
16:02:02 #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:02:18 so, we would like to do some bug triage
16:02:50 auniyal: you're the next in the roster list, fancy with this ?
16:02:58 ack bauzas
16:03:01 yes
16:03:05 very much appreciated
16:03:18 #info bug baton is auniyal
16:03:23 thanks
16:03:36 moving on, I don't see any critical bug to discuss
16:03:40 #topic Gate status
16:03:59 #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:04:05 #link https://etherpad.opendev.org/p/nova-ci-failures
16:04:16 #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly Nova&Placement periodic jobs status
16:04:26 #info nova-emulation seems flakey
16:04:34 #info https://zuul.openstack.org/build/209dc9676c4149b281420d9ae18f889c
16:05:05 things are better lately, thanks to a lot of hard work by a number of people, most recently probably melwitt with an LVM fix that might help with some stability, and a functional fix from gibi
16:05:22 dansmith: ++, I was about to ask for news :)
16:05:49 still need to work on some random timeouts we get which I think is related to stress on the api workers (at least in part) and melwitt has a green/native threads refactor up which I hope will eventually help with that
16:06:00 I have to admit the issues I've seen are out of my knowledge area :(
16:06:25 also still occasional OOMs which are hard to address when we're legitimately running out of memory if we get unlucky with what gets started when
16:06:29 dansmith: ack, any change in particular for greenthreading ?
16:06:54 bauzas: we need to move to native threads in nova-api when under wsgi so that we can handle more than two things at once
16:07:11 *and* so we can enable it in devstack so that other services which can already do so can leverage that :)
16:07:28 I see
16:08:13 anyway, that's pretty much the status..
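(For context on the native-threads point above: the snippet below is a minimal, illustrative Python sketch, hypothetical and not Nova or devstack code, showing why blocking calls dispatched to native OS threads can overlap in wall-clock time, while the same calls handled one after another on a single thread take the sum of their durations.)

    # Hypothetical illustration only: blocking work dispatched to native OS
    # threads proceeds concurrently, whereas running the same work
    # sequentially (as happens when a single thread blocks without yielding)
    # multiplies the total wait.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def blocking_call(request_id):
        """Stand-in for a blocking operation such as a slow DB or RPC call."""
        time.sleep(1)  # blocks the calling thread for one second
        return "request %d done" % request_id

    def handle_natively(n):
        """Handle n 'requests' on native threads so their waits overlap."""
        with ThreadPoolExecutor(max_workers=n) as pool:
            return list(pool.map(blocking_call, range(n)))

    if __name__ == "__main__":
        start = time.monotonic()
        results = handle_natively(4)
        # roughly 1s elapses here; run sequentially, the same four calls
        # would take roughly 4s.
        print(results, "elapsed: %.1fs" % (time.monotonic() - start))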
I encourage everyone to look critically at any failure they see and try to resolve it or nail it down while waiting for your recheck :)
16:08:53 well, I'd say, I'm a bad doctor, I'm only able to tell what's wrong without any cure
16:09:14 and my diagnostics are always biased
16:09:25 but yeah, teamwork, agreed :)
16:09:28 well, doctors require continuing education to get better ;)
16:09:53 not only doctors :)
16:09:59 anyway, thanks for the update
16:10:41 I'm still wondering if the GPT errors I'm seeing in the console are somehow related to the timeouts
16:11:16 I don't think so,
16:11:26 we get those every time we attach a blank disk to an instance
16:11:38 ack, ok
16:11:44 the correlation is lots of timeouts come from volume tests,
16:11:52 which always dump console logs, and you see those things that look error-like
16:12:00 correlated, but not caused
16:12:17 yeah, correlation is not cause, I get it :)
16:13:06 I know sean-k-mooney was also investigating using an alping image in some job
16:13:14 have we moved forward on it ?
16:13:21 alpine*
16:14:55 anyway, I think gate stability deserves some discussions at the PTG, if we haven't closed the gaps yet
16:15:28 not that I know of
16:16:13 moving on then
16:16:26 i haven't looked at that in a while
16:16:31 #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:16:49 but it was failing because it did not have growfs, i.e. the disk root partition was not being expanded
16:16:56 easy fix, just have not had time to work on it
16:17:25 sean-k-mooney: ack, this week I'm busy but next week, I could help
16:17:31 we can move on but https://review.opendev.org/q/topic:alpine is the topic for context
16:17:58 moving on indeed
16:18:01 #topic Release Planning
16:18:10 #link https://releases.openstack.org/bobcat/schedule.html
16:18:17 #link https://etherpad.opendev.org/p/nova-bobcat-blueprint-status Etherpad for tracking blueprints status
16:18:22 #info 2 days before FeatureFreeze
16:18:33 so, I tried to update the etherpad based on my recent findings
16:18:50 we have about 5 series that are reviewable
16:19:11 and we only have 2 days to get them into some state
16:19:29 of those 5 series, 2 of them are quite large
16:20:08 so I don't expect a full merge, but I'd appreciate if some reviews could be made, particularly in order to let them move forward quickly once we open Caracal feature dev
16:20:45 and we can discuss whether we can half-merge some series or hold the whole
16:20:59 so, please, take your pens
16:21:32 I'll continue to give a few review rounds till the last day, but I also need traction from others :)
16:22:50 and if you're an owner of one of those series, feel free to ping us on IRC if you think you'd like feedback
16:24:11 I think we're done on this topic
16:24:46 I'll probably ping a couple of folks after this meeting to get a proper status
16:25:30 #topic Review priorities
16:25:36 #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+(label:Review-Priority%252B1+OR+label:Review-Priority%252B2)
16:26:27 #topic Stable Branches
16:26:34 elodilles: floor is yours
16:26:46 well, i was on PTO last week,
16:27:02 and i didn't see much activity on stable branches
16:27:16 i guess people are more focused on Bobcat nowadays ;)
16:27:25 this week, for sure
16:27:34 anyway, i'm not aware of
any stable gate failures
16:27:47 #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:28:02 feel free to add anything there if you find any issue ^^^
16:28:28 that's all from me
16:28:49 #info Please *review* these backport patches of stable 2023.1, zed and yoga for the next minor release
16:28:59 #info most of these already have one +2.
16:29:07 #link https://etherpad.opendev.org/p/release-liaison-PatchesToReview
16:30:15 yeah, it would be nice to deliver some stable releases as we approach the rc period
16:30:23 ++
16:30:53 I'll personally double-check if some are missing my +2
16:31:13 I added 2 more patches for stable/2023.1
16:31:34 earlier bauzas had merged all of them
16:31:44 a 2023.1 stable release before we cut rc1 is important
16:32:09 operators can then upgrade to the latest .z release on Antelope before upgrading smoothly to Bobcat
16:32:27 but ok, I'll try to chase them down
16:32:54 on a side note, my usual stable branches item:
16:33:00 #info train-eol patch proposed https://review.opendev.org/c/openstack/releases/+/885365
16:33:08 I guess it's waiting for a second releases core
16:33:13 oh, it's still not merged... :S
16:33:23 yes, it is waiting for that
16:33:35 I know US trains are pretty long
16:33:43 :)
16:33:55 anyway, moving on, last topic
16:34:01 #topic Open discussion
16:34:21 nothing on my agenda, even after a wiki refresh
16:34:29 so, anything anyone ?
16:37:03 looks not
16:37:30 thanks all, and tbh I'm happy to see all of you back :)
16:37:36 #endmeetinh
16:37:38 doh
16:37:43 #endmeeting