16:01:07 <bauzas> #startmeeting nova
16:01:07 <opendevmeet> Meeting started Tue Oct  1 16:01:07 2024 UTC and is due to finish in 60 minutes.  The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:07 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:07 <opendevmeet> The meeting name has been set to 'nova'
16:01:14 <bauzas> hey everyone
16:01:19 <tkajinam> o/
16:01:50 <elodilles> o/
16:01:56 <bauzas> #link https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
16:02:01 <dansmith> o/
16:02:56 <bauzas> ok let's start
16:03:04 <bauzas> #topic Bugs (stuck/critical)
16:03:08 <bauzas> #info No Critical bug
16:03:14 <bauzas> #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:03:20 <bauzas> anything else ?
16:04:22 <bauzas> ok, looks like there's nothing
16:04:25 <bauzas> moving on
16:04:30 <bauzas> #topic Gate status
16:04:36 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:04:41 <bauzas> #link https://etherpad.opendev.org/p/nova-ci-failures-minimal
16:04:47 <bauzas> #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly Nova&Placement periodic jobs status
16:04:53 <bauzas> #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:04:57 <bauzas> #info Please try to provide a meaningful comment when you recheck
16:05:08 <Uggla> o/
16:05:09 <bauzas> about the periodics, we have a failure on the c9s job
16:05:19 <bauzas> #link https://zuul.openstack.org/build/0e3b92cd874e4f29b4b252b6f4acb8e9
16:05:55 <bauzas> 2024-09-28 08:43:49.610448 | controller | + lib/rpc_backend:install_rpc_backend:61   :   sudo systemctl --now enable rabbitmq-server
16:05:57 <bauzas> 2024-09-28 08:43:49.646533 | controller | Failed to enable unit: Unit file rabbitmq-server.service does not exist.
16:06:08 <bauzas> looks like there is a regression
16:06:16 <bauzas> probably something people already know ?
16:07:24 <tkajinam> Is that seen in master ?
16:07:30 <bauzas> https://zuul.openstack.org/build/0e3b92cd874e4f29b4b252b6f4acb8e9/log/job-output.txt#4894
16:07:50 <bauzas> no, stable/2024.2
16:08:18 <bauzas> but, sec, checking master
16:08:30 * gibi joins late
16:08:40 <elodilles> CentOS9 - Error: Unable to find a match: rabbitmq-server
16:08:54 <elodilles> when running 'dnf install -y rabbitmq-server'
16:08:55 <tkajinam> I've not seen it in puppet jobs so I guess it's not a general problem.
16:09:38 <bauzas> indeed, that's weird, master job run worked fine
16:09:42 <tkajinam> No match for argument: centos-release-openstack-2024.2
16:09:47 <bauzas> https://zuul.openstack.org/builds?job_name=tempest-integrated-compute-centos-9-stream&project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly&skip=0
16:09:48 <tkajinam> I guess it's not yet created
16:09:56 <tkajinam> I remember there were some discussions about it in #rdo
16:10:11 <tkajinam> I guess we can leave it for now until devstack is updated or RDO publishes the 2024.2 contents completely
16:10:15 <bauzas> okay, let's then defer this discussion to next week
16:10:28 <bauzas> we'll see whether this fails again
16:10:44 <bauzas> thanks tkajinam for the help
16:10:44 <tkajinam> +1
16:10:50 <bauzas> moving on
16:10:59 <bauzas> nothing else about our gate stability ?
16:11:05 <bauzas> context : I wasn't around this week
16:11:37 <tkajinam> nothing I'm aware of
16:11:49 <sean-k-mooney> ya it sounds like the rdo release has not happened yet
16:12:12 <sean-k-mooney> we might be able to pin to the 2024.1 content as a workaround
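For context, sean-k-mooney's pinning workaround would look roughly like this on the CentOS 9 Stream node; a minimal sketch only, assuming the 2024.1 repo package follows the same naming scheme as the missing 2024.2 one in the log above:

    # see which RDO release repo packages are actually published
    dnf search centos-release-openstack
    # pin to the previous release's content until the 2024.2 repo package exists
    sudo dnf install -y centos-release-openstack-2024.1
    sudo dnf install -y rabbitmq-server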
16:12:12 <bauzas> okay, then moving to the next topic
16:12:26 <bauzas> sean-k-mooney: this is a periodic job, we can wait until next week
16:12:37 <sean-k-mooney> ah then yes lets move on
16:12:47 <bauzas> #topic Release Planning
16:12:53 <bauzas> #link https://releases.openstack.org/dalmatian/schedule.html
16:12:58 <bauzas> #info Dalmatian GA planned tomorrow
16:13:08 <tkajinam> \o/
16:13:08 <bauzas> that will probably help RDO ^
16:13:17 <bauzas> #link https://releases.openstack.org/epoxy/schedule.html
16:13:33 <bauzas> as a reminder, we don't yet have nova deadlines for Epoxy, that will be discussed at the PTG
16:13:54 <bauzas> #topic Review priorities
16:14:02 <bauzas> #link https://etherpad.opendev.org/p/nova-2025.1-status
16:14:22 <bauzas> I need to do some homework and add the already approved blueprints in there
16:15:30 <bauzas> at least gibi's series on igb is there
16:15:40 <bauzas> sounds like a quick win
16:15:48 <bauzas> #topic PTG planning
16:15:54 <bauzas> #info as a reminder, we'll meet (virtually) at the PTG on Oct 21-25 2024
16:16:00 <bauzas> #info Please register yourself to the virtual PTG
16:16:07 <bauzas> #link https://etherpad.opendev.org/p/nova-2025.1-ptg
16:16:16 <bauzas> please add your topics to the above etherpad
16:16:34 <bauzas> the more topics we have before the PTG, the better prepared we are, and the easier it is for me to write an agenda :)
16:16:56 <bauzas> we'll have the Austin room for the week
16:17:04 <bauzas> (but I think I already said that last week)
16:17:28 <bauzas> any questions about the PTG ?
16:17:54 <gibi> bauzas: will we have Asia slot?
16:18:05 <bauzas> for the translation docs thing with ian and seongsoo, there is a TC topic about it
16:18:28 <bauzas> gibi: I don't think so, seongsoo and ian had their topic in the TC etherpad
16:18:47 <bauzas> I'll clarify that tonight during the TC meeting
16:19:15 <gibi> OK, so TC won't have an Asia slot either?
16:19:31 <bauzas> unless of course I see a few topics in the nova PTG etherpad where the owners are in Asian TZ
16:19:52 <bauzas> gibi: that's the question I'll ask tonight :D
16:20:03 <gibi> cool:)
16:20:30 <bauzas> I'm fine with addressing the translation issues as a cross-project discussion
16:20:42 <bauzas> but I can propose nova for some experimental work
16:20:50 <gibi> me too, I just need to know in advance if there is an extra schedule
16:21:07 <bauzas> yeah, I want to prepare my mornings :)
16:21:23 <bauzas> should be hopefully settled next week
16:21:32 <gibi> those sweet lazy mornings of the PTG week :)
16:21:54 <bauzas> for the moment, we stick with the usual slots that are 1300-1700 UTC Tues to Fri
16:22:18 <gibi> ack
16:22:44 <bauzas> #topic Stable Branches
16:22:48 <bauzas> elodilles: heya
16:22:56 <elodilles> #info stable/202*.* gates seem to be OK
16:23:09 <elodilles> at least i'm not aware of any issue :)
16:23:20 <elodilles> and some patches merged, so probably OK
16:23:27 <elodilles> #info note that stable/2023.1 (antelope) will transition to unmaintained/2023.1 in a month
16:23:45 <elodilles> we might want a release in 1 or 2 weeks ^^^
16:23:55 <bauzas> time flies
16:23:56 <elodilles> *final* stable/2023.1 release
16:23:58 <elodilles> yepp
16:24:18 <elodilles> and that's all from me about stable for now
16:24:24 <bauzas> one open thought
16:24:44 <bauzas> but I don't want to start a flamewar now
16:24:49 <elodilles> :)
16:25:17 <bauzas> once we unmaintain 2023.1, we will only have one SLURP release
16:25:21 <bauzas> that's maintained
16:25:28 * gibi prepares the torches
16:26:31 <bauzas> giving an extra month before flipping the unmaintained flag would help people who *really* follow the upstream stable support envelope to have more time to upgrade to C
16:27:07 * bauzas awaits the tomatoes
16:27:23 <dansmith> can we move this along?
16:27:32 * gibi hands a torch to bauzas
16:27:40 <bauzas> I don't know, that's a blind question
16:29:24 <bauzas> you know what ? this isn't the right place to discuss that, and I first need to check the release support envelopes
16:30:31 <bauzas> but honestly, my thought was that I can imagine a SLURP release having a slightly longer support envelope than a non-SLURP one (and don't think LTSes, that's not what I mean)
16:31:01 <bauzas> aw, moving on
16:31:09 <elodilles> bauzas: well, now i'm a bit unsure as well. but i'll check the resolution...
16:31:18 <bauzas> #topic vmwareapi 3rd-party CI efforts Highlights
16:31:18 <elodilles> (unmaintained)
16:31:27 <bauzas> fwiesel: around ?
16:31:53 <fwiesel> Not much to update from my side. Still looking into the flakiness of the CI (an occasional port failing to get bound)
16:32:16 <fwiesel> dansmith: Gentle reminder about the open comment: https://review.opendev.org/c/openstack/nova/+/910627?tab=comments
16:32:21 <fwiesel> Otherwise, that's from my side.
16:32:29 <bauzas> cool
16:33:00 <bauzas> and thanks for the update, g'luck with the flakiness
16:33:07 <bauzas> #topic Open discussion
16:33:13 <bauzas> (tkajinam): How can we restore https://review.opendev.org/c/openstack/nova/+/845660 ?
16:33:39 <bauzas> tkajinam: your time
16:33:56 <tkajinam> I noticed I left that stale for long and I'm willing to resume that work
16:34:42 <tkajinam> the current problem is that os-brick silently falls back to a single-path connection in case multipath is not running. os-brick introduced the capability to hard-fail in case multipath is not running, and that is configurable in cinder and glance, but not in nova
16:34:49 <dansmith> fwiesel: ack, but it's not blocking anything
16:35:11 <gibi> tkajinam: yeah make sense to pick that up
16:35:28 <tkajinam> my main question here is whether this can be spec-less bp or need spec
16:35:53 <bauzas> would there be an upgrade impact ?
16:36:18 <tkajinam> no. the option is disabled by default so the existing behavior is kept during upgrade.
16:36:44 <gibi> seems like fully backward compatible opt-in behavior
16:36:46 <fwiesel> dansmith: No, the comment isn't blocking. I assumed you were volunteering for the second required core review and hoped I had addressed everything outstanding.
16:36:53 <gibi> so for me it is OK to have it as specless
16:36:59 <bauzas> in a rolling upgrade case, you'd then have *some* computes that would fail instance creations while others wouldn't ?
16:36:59 <tkajinam> yes > fully backward compatible.
16:37:35 <gibi> bauzas: the compute agent will fail to start up if you opt in to the new behavior but haven't installed multipathd
16:37:52 <bauzas> then a doc would be sufficient
16:38:14 <bauzas> do you plan to change the behaviour in a later cycle ?
16:38:36 <tkajinam> no. I think we can keep the behavior optional. that's consistent with cinder and glance for now.
16:39:18 <bauzas> then I'm indeed cool with a specless BP, provided we document the option correctly (only turn that flag on if you installed multipathd)
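A short operator-side sketch of the documentation point above; the multipathd checks are standard, but the nova option name is a placeholder since the blueprint hasn't landed yet:

    # verify multipathd is installed and running before opting in to the hard-fail behaviour
    systemctl is-active multipathd   # expect "active"
    sudo multipath -ll               # expect the device maps to list without errors
    # only then enable the opt-in flag in that compute's nova.conf, e.g.
    # [libvirt]
    # volume_enforce_multipath = true   # placeholder name for whatever the blueprint introduces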
16:40:19 <bauzas> any disagreements ?
16:41:19 <bauzas> so,
16:41:44 <bauzas> #agreed https://blueprints.launchpad.net/nova/+spec/enforce-multipath approved as specless
16:41:59 <bauzas> next item
16:42:02 <bauzas> (tkajinam): Python 3.8 support being removed
16:42:03 <tkajinam> sean-k-mooney, in case you have any remaining concerns then please let me know ^^^
16:42:20 <tkajinam> this is again my topic :-P
16:42:51 <tkajinam> so as I wrote, we are removing python 3.8 support from oslo. This is the timeline we agreed on earlier, and it would move all projects to drop python 3.8 support
16:43:32 <tkajinam> AFAIK the py38 job was removed from most of the nova/placement repos but there are a few remaining in placement that I'm trying to replace. if you can review the change and also double-check the other repos, that would be nice
16:43:40 <bauzas> py3.8 upstream support is planned end of Nov, right?
16:43:57 <bauzas> upstream support end*
16:44:20 <tkajinam> https://devguide.python.org/versions/ it's reaching its EOL this month
16:44:29 <bauzas> oh this month
16:44:43 <tkajinam> so py3.8 is already EOL when Epoxy is released
16:45:00 <bauzas> https://governance.openstack.org/tc/reference/runtimes/2025.1.html
16:45:15 <bauzas> that's not in our runtimes, so I'm cool
16:45:55 <bauzas> tkajinam: if that's review support you need for your patches, bug me
16:46:20 <tkajinam> One thing I want to get agreement about is the min version enforced by python_requires in setup.cfg
16:46:32 <bauzas> we can also add your patches in the status etherpad
16:46:37 <tkajinam> which actually defines the min python version during installation
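For reference, the bump itself is a one-line setup.cfg change; a minimal sketch assuming the pbr-style [metadata] layout these repos already use (the exact key spelling should follow whatever each repo currently has):

    [metadata]
    name = nova
    # (existing metadata entries unchanged)
    # pip will refuse to install the package on interpreters older than this
    python-requires = >=3.9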
16:47:06 <bauzas> do we have a choice here given the Epoxy runtimes ?
16:47:10 <tkajinam> My own opinion is that we should bump min to 3.9, because we no longer test 3.8 and we don't aim to fix compatibility with it
16:47:35 <gibi> yeah, if we don't test it then it is broken
16:47:41 <gibi> so bump to 3.9
16:48:16 <elodilles> we had issues in the past when we dropped a version too early and some things broke,
16:48:34 <elodilles> but i guess with py38 we are probably beyond that line :)
16:48:37 <bauzas> don't ask me to send you the pypi query that shows the number of "py2.7 supported" projects
16:48:53 <elodilles> maybe not merge it before tomorrow final 2024.2 Dalmatian release? :D
16:48:53 <bauzas> elodilles: this was due to some oslo libs
16:49:06 <tkajinam> py38 removal has been discussed for a very long time and it was even removed from testing in 2024.2.
16:49:17 <bauzas> and this was also happening very late in the development cycle
16:49:34 <elodilles> yepp
16:49:41 <bauzas> how many projects import nova.* ?
16:50:07 <tkajinam> I believe we gave all projects enough time to migrate their jobs and now it's time to force any lagging projects to do their migration.
16:50:17 <elodilles> +1
16:50:25 <tkajinam> bauzas, I'm aware of none, but I've not checked external projects really.
16:50:42 <bauzas> well, Blazar did that a decade ago :D
16:50:49 <bauzas> anyway
16:50:51 <tkajinam> As I said, we may have new releases from oslo in M1 which drop py38 support. that would allow us to catch problems early
16:51:11 <elodilles> +1
16:51:17 <bauzas> I'd say that oslo libraries should maybe be the last to change their min versions
16:51:42 <elodilles> or drop it early in the cycle
16:51:45 <bauzas> but we're way ahead of schedule, so we can cut the cord now and see which projects fall into the gap
16:52:07 <bauzas> elodilles: sure, and then it will become a TC discussion :D
16:52:21 <bauzas> like, project X CI fails, why ?
16:52:27 <bauzas> anyway
16:52:39 <elodilles> :)
16:53:09 <bauzas> tkajinam: from my sole angle of view which is the Nova project, I'd say you can safely bump our min_version in the descriptor, including os-vif and the clients
16:53:19 <tkajinam> ok
16:53:42 <tkajinam> I'll submit the bump and see how it goes.
16:53:42 <bauzas> anything else then ?
16:53:50 <tkajinam> no :-) thanks for your time.
16:53:55 <bauzas> tkajinam: and don't hesitate to bug me then
16:54:14 <gibi> tkajinam: thanks for doing the cleanup
16:54:51 <bauzas> yup, thanks indeed
16:55:04 <bauzas> anything else to discuss ?
16:55:05 <dansmith> I just want to point out this patch of mine, which fixes a long-running transgression with us lying to glance about volume snapshots: https://review.opendev.org/c/openstack/nova/+/930754
16:55:35 <dansmith> as far as I could tell while looking, the snap we create for volumes is not useful for actually booting from and was added for a completely unrelated reason ages ago
16:55:46 <dansmith> so i think we might want to consider not doing that anymore if there is no reason
16:56:14 <dansmith> if there is, then we're going to need to consider what we want to do there when we stop allowing nova to boot from unformatted (i.e. raw) images in the future
16:56:27 <dansmith> but I have a lot of history summarized there with links if anyone wants to have a look
16:56:51 <dansmith> that's blocking glance work because we fail in tempest tests to snapshot BFV instances since glance rejects nova's lies that we're uploading a qcow2 when we're clearly not
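For anyone reviewing that patch, the mismatch is easy to see locally; a quick illustration with standard tools, where the file and image identifiers are placeholders:

    # what the uploaded data actually is
    qemu-img info <downloaded-image-file>
    # versus the disk_format recorded in glance
    openstack image show <image-uuid> -c disk_format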
16:56:56 <bauzas> dansmith: fwiw, I trust you on that yet another image backend story
16:57:06 <dansmith> please trust but verify :)
16:57:17 <dansmith> anyway, no time to discuss in depth now, I need to run to something else but wanted to point it out
16:57:35 <gibi> dansmith: thanks for the heads up, I have to read your excellent commit message...
16:58:35 <bauzas> dansmith: I'll carefully verify your commit msg then
16:58:38 <dansmith> thanks
16:59:45 <gibi> bauzas: I feel we are done
16:59:54 <bauzas> yup, and we're on time
17:00:02 <gibi> perfect :)
17:00:11 <bauzas> I was doing some paperwork to add dansmith's patch into the etherpad
17:00:14 <bauzas> that's his fault
17:00:19 <bauzas> thanks all
17:00:22 <bauzas> #endmeeting