16:01:07 <Uggla> #startmeeting nova
16:01:07 <opendevmeet> Meeting started Tue Apr 22 16:01:07 2025 UTC and is due to finish in 60 minutes.  The chair is Uggla. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:07 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:07 <opendevmeet> The meeting name has been set to 'nova'
16:01:16 <Uggla> Hello everyone
16:01:23 <sean-k-mooney> kevko: because it would not be a live migration any more, but we can talk about that after the meeting
16:01:29 <Uggla> awaiting a moment for people to join.
16:02:00 <kevko> sean-k-mooney: okay, i will wait ...
16:02:09 <gibi> o/
16:02:45 <bauzas> o/
16:02:54 <elodilles> o/
16:04:04 <sean-k-mooney> o/
16:04:14 <Uggla> #topic Bugs (stuck/critical)
16:04:20 <Uggla> #info No Critical bug
16:04:39 <Uggla> #topic Gate status
16:04:46 <Uggla> #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:04:55 <Uggla> #link https://etherpad.opendev.org/p/nova-ci-failures-minimal
16:05:03 <Uggla> #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&branch=stable%2F*&branch=master&pipeline=periodic-weekly&skip=0 Nova&Placement periodic jobs status
16:05:16 <bauzas> nothing to say
16:05:19 <Uggla> #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:05:31 <Uggla> #info Please try to provide meaningful comment when you recheck
16:05:55 <Uggla> tbh, I have not looked at the jobs this time. Shame on me.
16:06:31 <Uggla> thanks bauzas for the info.
16:06:53 <Uggla> #topic tempest-with-latest-microversion job status
16:06:59 <Uggla> #link https://zuul.opendev.org/t/openstack/builds?job_name=tempest-with-latest-microversion&skip=0
16:07:51 <Uggla> Status is unchanged, as I have not checked with Gmann about it.
16:08:06 <Uggla> #topic PTG summary
16:08:15 <Uggla> #link https://etherpad.opendev.org/p/r.bf5f1185e201e31ed8c3adeb45e3cf6d
16:08:20 <Uggla> as a reminder
16:08:35 <Uggla> #topic Release Planning
16:08:43 <Uggla> #link https://releases.openstack.org/flamingo/schedule.html
16:08:50 <Uggla> #info Nova deadlines are set in the above schedule
16:09:09 <Uggla> #topic Review priorities
16:09:19 <Uggla> #link https://etherpad.opendev.org/p/nova-2025.2-status
16:09:35 <Uggla> Still draft.
16:10:11 <Uggla> #topic Stable Branches
16:10:23 <Uggla> elodilles, please go ahead.
16:10:31 <elodilles> thanks o/
16:10:34 <elodilles> #info stable gates seem to be in good health
16:10:43 <elodilles> #info stable/2023.2 (bobcat) will move to End of Life next week
16:10:43 <Uggla> \o/
16:10:59 <elodilles> #info final 2023.2 bobcat releases are out: nova 2023.2 (28.3.1), placement 2024.2, 2024.1, 2023.2 (12.0.1, 11.0.1, 10.0.1)
16:11:22 <elodilles> thanks melwitt and sean-k-mooney for the tons of stable reviews o/
16:11:50 <Uggla> melwitt++
16:11:54 <gibi> indeed, thanks for making these releases full of backports
16:11:59 <Uggla> sean-k-mooney++
16:12:04 <elodilles> i think we just have to wait until the deadline and hope we did not introduce regressions o:)
16:12:29 <gibi> nah, what regressions, those are upgrade opportunities to caracal :)
16:12:33 <elodilles> :]
16:12:42 <elodilles> yepp :)
16:12:51 <Uggla> gibi, I like your mindset ! :)
16:13:00 <elodilles> btw, i've prepared nova stable releases for dalmatian and caracal as well
16:13:16 <elodilles> to not leave everything till the end o:)
16:13:35 <elodilles> nova: Release 2024.1 Caracal 29.2.1  https://review.opendev.org/c/openstack/releases/+/947846
16:13:45 <gibi> early upgrade opportunities :)
16:13:49 <elodilles> nova: Release 2024.2 Dalmatian 30.0.1  https://review.opendev.org/c/openstack/releases/+/947847
16:14:05 <elodilles> #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:14:14 <elodilles> please add here if you find any gate issue ^^^
16:14:25 <elodilles> and that's all about stable from me for now
16:14:32 <elodilles> Uggla: back to you
16:14:40 <Uggla> thx elodilles !
16:14:44 <elodilles> np
16:15:16 <Uggla> #topic Gibi's news about eventlet removal
16:15:25 <Uggla> #link Series: https://gibizer.github.io/categories/eventlet/
16:15:31 <Uggla> #link reminder patches to review: https://review.opendev.org/q/topic:%22eventlet-removal%22+project:openstack/nova
16:15:49 <Uggla> gibi, anything new you want to share ?
16:16:06 <gibi> nope, you shared all what I had :)
16:16:12 <bauzas> as I said, I can help for reviewing the series
16:16:27 <gibi> any help is appreciated :)
16:16:36 <bauzas> cool
16:16:54 <Uggla> I know sean-k-mooney already reviewed some of them.
16:17:22 <Uggla> Having someone else would be great.
16:17:33 <sean-k-mooney> some, yes; melwitt looked at a couple too
16:17:40 <sean-k-mooney> but ya more the merrier
16:19:00 <Uggla> ok moving on to next topic
16:19:04 <Uggla> #topic Bug scrubbing
16:20:07 <Uggla> So I wanted to clean up the new bugs a bit, but I didn't have a chance to do it yet. Anyway, I have selected 5 new bugs that look properly defined.
16:20:33 <Uggla> So first one : https://bugs.launchpad.net/nova/+bug/2104255
16:22:08 <Uggla> So my expected outcome would be to know whether we think it is a valid bug or not, and if it is incomplete, what to ask.
16:22:56 <sean-k-mooney> it's valid. i looked at it briefly and chatted about it on irc
16:23:00 <bauzas> Seems a caching issue
16:23:08 <bauzas> so yeah valid
16:23:20 <sean-k-mooney> tl;dr if the nova-compute agent is started when the vf is attached to the vm
16:23:28 <sean-k-mooney> we can't get the network capabilities
16:23:42 <sean-k-mooney> and we cache the vf info on start up in both nova and libvirt
16:23:59 <sean-k-mooney> libvirt invalidates its cache based on udev events
16:24:12 <sean-k-mooney> i.e. when the vf is returned to the host
16:24:32 <sean-k-mooney> nova does not invalidate its cache, so that's part of the problem
16:24:35 <Uggla> ok cool do not worry about settings the bug fields I'll do that later after the call
16:24:40 <sean-k-mooney> lets mark it valid and medium
16:24:46 <Uggla> great
16:24:49 <sean-k-mooney> and we can decied how to fix it later
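The caching problem described above can be sketched roughly like this. `VFInfoCache`, `probe`, and the PCI address are hypothetical illustration, not nova's actual code; the idea is that a real fix would trigger `invalidate()` from udev events (VF returned to the host), the way libvirt already does:

```python
# Hypothetical sketch of per-VF cache invalidation. Not nova's code:
# nova currently probes VF capabilities once at startup and never
# invalidates, so a VF attached to a guest at startup keeps stale info.

class VFInfoCache:
    """Cache of SR-IOV VF network capabilities, keyed by PCI address."""

    def __init__(self, probe_fn):
        self._probe_fn = probe_fn  # would query sysfs/libvirt for real
        self._cache = {}

    def get(self, pci_addr):
        # Populate lazily, so a VF that was unavailable at startup is
        # probed the next time it is asked for after an invalidation.
        if pci_addr not in self._cache:
            self._cache[pci_addr] = self._probe_fn(pci_addr)
        return self._cache[pci_addr]

    def invalidate(self, pci_addr):
        # Would be called from a udev event handler when the VF is
        # returned to the host.
        self._cache.pop(pci_addr, None)


probes = []

def probe(pci_addr):
    probes.append(pci_addr)
    return {"address": pci_addr, "capabilities": ["switchdev"]}

cache = VFInfoCache(probe)
cache.get("0000:03:00.2")         # first probe
cache.get("0000:03:00.2")         # served from cache, no new probe
cache.invalidate("0000:03:00.2")  # e.g. udev event: VF back on the host
cache.get("0000:03:00.2")         # probed again with fresh info
print(len(probes))                # 2
```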
16:24:51 <Uggla> next one:
16:25:13 <Uggla> https://bugs.launchpad.net/nova/+bug/2105987
16:26:18 <sean-k-mooney> so melwitt has a patch that might fix this indirectly
16:26:54 <sean-k-mooney> gibi: do you know if we merged the one to auto discover the config options and use the sdk to get the limits?
16:27:07 <gibi> sean-k-mooney: not off the top of my head
16:27:12 <sean-k-mooney> ack
16:27:20 <gibi> but the bug above links to a separate oslo.limit fix
16:27:24 <sean-k-mooney> so if this config option is not being respected in nova today
16:27:33 <sean-k-mooney> then it's valid, but we need to confirm that
16:27:35 <gibi> so I'm wondering if just using the latest oslo.limits fixes it for nova
16:27:56 <sean-k-mooney> ya i suspect that might be the case
16:28:07 <sean-k-mooney> although i'm a bit confused
16:28:14 <sean-k-mooney> because they also have endpoint_interface
16:28:34 <sean-k-mooney> which i thought was meant to do the same thing as the new field they just added
16:28:37 <sean-k-mooney> https://review.opendev.org/c/openstack/oslo.limit/+/946128/4/oslo_limit/opts.py
16:29:01 <sean-k-mooney> maybe we should ask if they set endpoint_interface=internal
16:29:12 <sean-k-mooney> in the bug and come back to this
16:29:32 <sean-k-mooney> ill go ask that
16:30:22 <Uggla> ok, so we can set it to incomplete and then review it when it comes back to new.
16:31:35 <sean-k-mooney> tkajinam: since you wrote https://review.opendev.org/c/openstack/oslo.limit/+/946128 you might be able to add some context to https://bugs.launchpad.net/nova/+bug/2105987
16:31:58 <sean-k-mooney> Uggla: for what it's worth, we configure our downstream installer to use internal too
16:32:09 <sean-k-mooney> so i'm not entirely sure why the oslo change is required
16:32:14 <sean-k-mooney> but we can move on
16:33:10 <Uggla> ok next one: https://bugs.launchpad.net/nova/+bug/2106073   this one was opened by Mohammed Naser, confirmed on Ironic. And I'm pretty sure Mohammed proposed a fix for it.
16:34:54 <Uggla> btw I have just noticed sean-k-mooney commented on it.
16:35:39 <sean-k-mooney> ya so this is unfixable in nova without rewriting parts of the ironic driver
16:36:19 <sean-k-mooney> tl;dr ironic is modifying port bindings, vnic types and other attributes out of band of nova
16:36:51 <sean-k-mooney> that means that the info we provide in the configdrive string can never be correct, because ironic changes things in neutron that it's not allowed to after we call provision
16:37:00 <Uggla> ok but that sounds like a valid bug, even though we rely on Ironic to fix it?
16:37:20 <sean-k-mooney> so if it's going to do that it also needs to update the config drive info it has
16:37:29 <sean-k-mooney> ya it's somewhat valid to fix in ironic
16:37:47 <sean-k-mooney> again they are modifying neutron ports in a way that's only valid when using ironic without nova
16:37:58 <sean-k-mooney> that's the root of the issue.
16:38:12 <sean-k-mooney> ironic port groups only exist in the ironic api
16:38:18 <sean-k-mooney> they are not modelled in nova or neutron
16:38:33 <sean-k-mooney> so there is no portable way to express nic bonds etc. today
16:39:05 <Uggla> should we set it as "won't fix" on our side ?
16:39:10 <sean-k-mooney> i would probably mark the nova side as can't fix
16:39:13 <sean-k-mooney> won't fix is fine too
16:39:21 <sean-k-mooney> i think we have invalid?
16:39:37 <sean-k-mooney> not sure which is better
16:39:39 <Uggla> yes, invalid is an option too
16:40:35 <sean-k-mooney> anyway the tl;dr is: because ironic and the ironic virt driver are doing things virt drivers are not allowed to, it can only be "fixed" in ironic
16:40:59 <Uggla> I vote for invalid on our side, explaining ^. Any objection to that?
16:41:48 <Uggla> 3...
16:42:01 <Uggla> 2...
16:42:20 <Uggla> 1...
16:42:35 <Uggla> ok let's do that.
16:43:25 <Uggla> next one: https://bugs.launchpad.net/nova/+bug/2106380
16:44:14 <sean-k-mooney> that seems valid-ish
16:44:39 <sean-k-mooney> if we delete a vm we should remove it from instance_group_member in some form
16:44:49 <sean-k-mooney> i suspect that won't happen until we archive it however
16:45:06 <sean-k-mooney> although we don't have shadow tables in the api db
16:45:21 <sean-k-mooney> so perhaps we should just delete it when the vm is deleted
16:46:01 <gibi> yeah it seems like we should remove the vm from instance_group_member during delete, but it is hard as it is in the api db while the vm is in the cell db
16:46:08 <sean-k-mooney> there is a separate bug for the slowness due to the request spec
16:46:15 <sean-k-mooney> loading
16:46:24 <sean-k-mooney> so we can keep that out of scope
16:47:26 <sean-k-mooney> so dansmith and i were wondering about a related topic. if you do not have api access from the cell conductor and do a local delete because the connection to the cell is down
16:47:42 <sean-k-mooney> will anything clean up the instance mapping?
16:48:17 <sean-k-mooney> like, to me this feels similar. i feel like the instance group membership is something we should be cleaning up once the instance goes to deleted in the cell db
16:48:22 <gibi> I think it would have the same problem that the VM delete happens in the cell and we don't have an up call
16:48:24 <sean-k-mooney> but i don't know if anything will actually do it today
16:48:57 <sean-k-mooney> gibi: this kind of feels like something the super conductor could do as a periodic
16:49:03 <gibi> so the only reliable way would be a periodic
16:49:04 <gibi> yepp
16:49:13 <sean-k-mooney> but i believe that would be the first such periodic in the super conductor
16:49:21 <gibi> yeah, I think so
16:49:42 <gibi> not impossible to do, just not a low hanging bugfix
16:49:43 <gibi> x
16:49:50 <sean-k-mooney> Uggla: to summarise, this is valid, it probably needs a new periodic, and there are likely other cases like this that should be fixed
16:50:09 <Uggla> perfect.
16:50:49 <sean-k-mooney> i would also agree it's not a low hanging fruit, but perhaps low to medium importance
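The periodic idea discussed above can be sketched roughly like this. All names are illustrative (not Nova's actual code), and the hard part in reality is the cross-database query between the API DB and the cell DBs, which this toy version glosses over with plain sets:

```python
# Hypothetical sketch of a super-conductor periodic that drops
# instance_group_member rows (API DB) whose instance has already been
# deleted in a cell DB. Real code would query both databases; here the
# two views are passed in as plain data.

def cleanup_orphan_group_members(group_members, live_instances):
    """Return group membership with rows for deleted instances dropped.

    group_members: dict of group uuid -> set of member instance uuids,
        as seen in the API DB.
    live_instances: set of instance uuids still present (not deleted)
        across the cell DBs.
    """
    return {
        group: {uuid for uuid in members if uuid in live_instances}
        for group, members in group_members.items()
    }


members = {"group-a": {"vm-1", "vm-2"}, "group-b": {"vm-3"}}
alive = {"vm-2", "vm-3"}  # vm-1 was deleted in its cell
print(cleanup_orphan_group_members(members, alive))
# {'group-a': {'vm-2'}, 'group-b': {'vm-3'}}
```

In a real implementation this function would run on the super conductor's periodic task interval, since (as noted above) the delete happens in the cell and there is no up-call to the API DB.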
16:52:13 <Uggla> Latest one for this week: https://bugs.launchpad.net/nova/+bug/2106817
16:55:58 <gibi> no stack trace so it is hard to tell
16:56:03 <sean-k-mooney> so neutron supported addressless ports long before nova did
16:56:18 <sean-k-mooney> sylvain added support for addressless ports, but i don't know if that's in antelope?
16:56:54 <bauzas> bobcat iirc
16:57:05 <sean-k-mooney> https://specs.openstack.org/openstack/nova-specs/specs/yoga/implemented/boot-vm-with-unaddressed-port.html
16:57:06 <bauzas> or maybe antelope, can't remember this
16:57:09 <sean-k-mooney> it was actually yoga
16:57:14 <bauzas> ohoho
16:57:36 <sean-k-mooney> but i do know there are still some issues with metadata
16:58:21 <sean-k-mooney> so they are also using trunk ports
16:58:29 <sean-k-mooney> which i don't know if that was tested as part of this
16:59:06 <sean-k-mooney> they are also doing this via heat
16:59:12 <sean-k-mooney> which is fun...
16:59:32 <Uggla> maybe we can ask for a reproducer. At least the procedure to reproduce it
16:59:36 <Uggla> without heat
16:59:42 <sean-k-mooney> i know that heat uses the same microversion for all nova calls
16:59:48 <sean-k-mooney> and it defaults to a pretty old one
17:00:25 <sean-k-mooney> Uggla: yes, i think we should ask for a simple reproducer without heat
17:00:39 <sean-k-mooney> what i'm unsure about is whether this is a nova or neutron issue
17:00:53 <sean-k-mooney> bauzas: i don't think we added an api microversion for this, correct?
17:01:41 <sean-k-mooney> Uggla: supporting addressless ports in combination with trunk ports was not in scope for the spec
17:01:59 <sean-k-mooney> so if this is reproducible without heat
17:02:05 <sean-k-mooney> we have two choices
17:02:18 <sean-k-mooney> treat it as a new feature to actually support using both features together
17:02:26 <sean-k-mooney> or consider it a bug in the existing implementation
17:03:00 <Uggla> Ok, that works for me: incomplete plus a reproducer. That way, we'll be able to narrow down the possible issue.
17:03:04 <sean-k-mooney> i think to do it properly, nova would have to check the port binding of all the sub-ports instead of the trunk port
17:03:22 <sean-k-mooney> so i'm inclined to say this is a feature request, but sure, let's see what they come back with
17:04:08 <Uggla> yep, I was thinking the same. Maybe that's not a "bug", but it will end up as a feature request.
17:05:21 <Uggla> cool ok thanks all for this triage session. Latest topic
17:05:23 <Uggla> #topic Open discussion
17:05:35 <Uggla> one point from my side
17:05:37 <Uggla> Next week's meeting: I’ll be on vacation and unable to run it. Would someone be available to run it on my behalf?
17:06:25 <Uggla> I'll probably prepare a list of bugs to triage. (but that's not mandatory)
17:07:44 <bauzas> Sorry, I can only help for the first 30 mins
17:07:54 <bauzas> (double meeting at the same time with TC one started)
17:08:21 <Uggla> bauzas, ok better than nothing. IMHO.
17:09:26 <bauzas> okay, I'll try to help then
17:09:35 <bauzas> let's then do a short one
17:10:11 <Uggla> ok
17:10:19 <Uggla> Time to close.
17:10:24 <gibi> o/
17:10:54 <Uggla> sorry for the 10 min overtime.
17:11:05 <sean-k-mooney> o/
17:11:08 <Uggla> Thanks all
17:11:14 <Uggla> #endmeeting