16:01:04 #startmeeting nova
16:01:04 Meeting started Tue Apr 29 16:01:04 2025 UTC and is due to finish in 60 minutes. The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:04 The meeting name has been set to 'nova'
16:01:08 o/
16:01:12 guys, we have a meeting now
16:01:13 o/
16:01:13 o/
16:01:18 do you want to discuss?
16:01:30 o/
16:01:30 I can chair it for the next 20/30 mins
16:01:41 o/
16:01:44 I have one gate topic
16:01:47 and then I'll need to go offline
16:01:51 okay, let's start
16:02:11 #topic Bugs (stuck/critical)
16:02:21 #info No critical bugs
16:02:29 #topic Gate status
16:02:44 #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:02:44 Nova gate is blocked until we land https://review.opendev.org/c/openstack/nova/+/948392
16:02:57 yepp, that is my point ^^
16:03:11 just +W'd it
16:03:19 py3.9 was dropped from global requirements, causing a version conflict
16:03:24 #link https://etherpad.opendev.org/p/nova-ci-failures-minimal
16:03:29 cool
16:03:31 bauzas: thanks
16:03:34 I saw the failures indeed
16:03:46 gibi: no, thanks to you for the fix
16:04:00 anything else to say about it?
16:04:49 nope
16:05:02 all good
16:05:07 moving on quickly then
16:05:30 #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&branch=stable%2F*&branch=master&pipeline=periodic-weekly&skip=0 Nova&Placement periodic jobs status
16:05:39 #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:05:45 #info Please try to provide a meaningful comment when you recheck.
16:06:05 periodics are green, moving on
16:06:14 #topic tempest-with-latest-microversion job status
16:06:23 #link https://zuul.opendev.org/t/openstack/builds?job_name=tempest-with-latest-microversion&skip=0
16:07:08 I will make it periodic, as the logs are not visible in experimental runs
16:07:14 moving on while I'm awaiting the result
16:07:16 and will see how many tests need adjustment
16:07:26 gmaan: ack, thanks
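(For context on the test adjustments mentioned in the topic above: a minimal sketch of how a Tempest API test conventionally bounds the microversions it runs against, via the min_microversion/max_microversion class attributes. The test class name and the version values here are hypothetical examples, not the actual tests the job exercises.)

```python
# Minimal sketch, assuming Tempest's standard microversion handling.
# ServersSampleTest and the version bounds are hypothetical.
from tempest.api.compute import base


class ServersSampleTest(base.BaseV2ComputeTest):
    # The test is skipped unless the configured compute microversion
    # range overlaps [2.60, latest]; a job pinning "latest" globally,
    # like tempest-with-latest-microversion, exercises the upper bound.
    min_microversion = '2.60'
    max_microversion = 'latest'
```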
16:07:41 #topic Release Planning
16:07:52 Merged openstack/placement master: Add pyproject.toml to support pip 23.1 https://review.opendev.org/c/openstack/placement/+/932399
16:07:54 #link https://releases.openstack.org/flamingo/schedule.html
16:07:54 #info Nova deadlines are set in the above schedule
16:08:06 well, technically, those deadlines aren't set AFAICS
16:08:11 actually they are not: https://review.opendev.org/c/openstack/releases/+/948077
16:08:19 elodilles: yeah
16:08:26 o:)
16:08:55 moving on, that'll be Uggla's responsibility to check those deadlines every week :p
16:08:59 the above needs some review (at least from the nova release liaisons)
16:09:13 #topic Review priorities
16:09:13 #link https://etherpad.opendev.org/p/nova-2025.2-status
16:09:20 elodilles: +1'd
16:09:30 elodilles: +1'd
16:09:45 apparently Uggla did some work on creating a tracking etherpad, noice
16:10:06 #topic Stable Branches
16:10:07 thanks all o/
16:10:13 elodilles: shoot your topic
16:10:25 ah, ok
16:10:36 #info not aware of any stable gate blocking thing
16:10:58 I haven't seen any setuptools-related issues yet :)
16:11:17 but there hasn't been much activity in the past few days
16:11:21 #info stable/2023.2 (bobcat) EOL patch for nova: https://review.opendev.org/c/openstack/releases/+/948196
16:11:39 another patch that needs some nova release liaison review ^^^
16:11:49 #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:11:56 that's all from me
16:12:00 bauzas: back to you
16:12:28 cool
16:12:39 #topic vmwareapi 3rd-party CI efforts Highlights
16:12:43 fwiesel: around?
16:12:47 Yes, finally.
16:12:54 #info fwiesel reports back to duty
16:12:56 So to say.
16:13:10 #info fwiesel will involve more people from SAP for redundancy
16:13:44 When I was on sick leave, I just dropped a short message on Slack, hoping someone would take over, but that didn't work.
16:13:50 I'll try to make that more formal.
16:13:54 That's all from my side.
16:13:59 cool
16:14:08 #topic Gibi's news about eventlet removal
16:14:10 gibi: shoot
16:14:16 ack
16:14:31 we saw the first nova-scheduler run with native threading
16:14:43 actually, multiple green CI jobs
16:14:49 e.g. https://zuul.opendev.org/t/openstack/build/636bdbb6ad494ef0b49415034842a640/log/controller/logs/screen-n-sch.txt nova-multi-cell
16:15:23 nothing needs immediate review attention; the ball is in my court to clean up the open series
16:15:46 that is it
16:16:27 bauzas: back to you
16:16:30 noted
16:16:53 I'll skip the bug scrub as I don't have time to lead it
16:17:01 and given Uggla is on PTO
16:17:08 #topic Open discussion
16:17:12 anything, anyone?
16:17:42 Yes, me then.
16:18:23 I was curious what you think about enabling resize between hypervisors. A full discussion obviously means a blueprint.
16:18:40 I'll need to disappear shortly
16:18:53 do people have opinions about ^?
16:18:55 fwiesel: yeah, probably a spec
16:19:18 not sure what you mean by a "hypervisor resize"
16:19:20 The question there is whether it's worthwhile to open one.
16:19:38 resizing all the instances on that host?
16:19:52 Ah, the idea is we have a flavor for one hypervisor (e.g. vmware), and the user wants to migrate to another one, say libvirt.
16:20:04 or some hardware change for a hypervisor, like a new memory card?
16:20:08 The user can then call a resize from one flavor to another.
16:20:23 ah
16:20:44 we don't bubble that up to the users because $cloud
16:21:00 I can write a spec, but I wanted to know if anyone would even be willing to look at it :)
16:21:04 but operators can use multiple scheduling mechanisms for that
16:21:19 e.g. aggregates, traits, or anything else decorating the hosts
16:21:45 not sure I want to disclose which hypervisor a flavor runs on
16:23:02 Let's turn it the other way around: is it desirable that a resize should work between two different nova drivers?
16:23:19 I need to go offline now
16:23:27 again, anyone want to discuss this?
16:23:35 or we could discuss this next week
16:23:42 fwiesel: at least it's worth discussing; I'm not against it at first glance
16:24:18 I can bring it up next week again. I want to avoid writing a spec if there is strong opposition to it.
16:25:12 thanks, let's discuss this again next week
16:25:16 thanks all
16:25:21 #endmeeting
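(On the eventlet-removal report above: the first nova-scheduler runs with native threading mark the shift away from eventlet's monkey-patched green threads. The sketch below is an illustration of the two concurrency modes being swapped, not nova's actual service setup.)

```python
# Minimal sketch contrasting the two concurrency modes discussed in the
# eventlet-removal topic. Illustration only; not nova code.
import threading


def run_native(workers):
    # Native threading: real OS threads, no monkey-patching required.
    threads = [threading.Thread(target=w) for w in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


def run_eventlet(workers):
    # Eventlet: cooperative green threads. Blocking calls must be
    # monkey-patched to yield, and in a real service monkey_patch()
    # has to run at process start, before other imports -- one of the
    # pain points the removal effort avoids.
    import eventlet
    eventlet.monkey_patch()
    pool = eventlet.GreenPool()
    for w in workers:
        pool.spawn(w)
    pool.waitall()
```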
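(On the open-discussion point above, where bauzas suggests operators can steer flavors to specific hosts with aggregates or traits: a minimal sketch of the traits idea. Nova really does encode required traits in flavor extra specs as "trait:<NAME>": "required", but the trait name, host data, and matching function here are hypothetical; in a real cloud this filtering is done by Placement, not by operator code.)

```python
# Minimal sketch of traits-based host targeting: a flavor carrying
# required traits only lands on hosts that expose all of them.

def required_traits(extra_specs):
    # Collect trait names from extra specs of the form
    # "trait:<NAME>": "required" (Nova's actual syntax).
    return {
        key.split(':', 1)[1]
        for key, value in extra_specs.items()
        if key.startswith('trait:') and value == 'required'
    }


def hosts_for_flavor(flavor_extra_specs, hosts):
    # Keep only hosts whose trait set covers the flavor's requirements.
    needed = required_traits(flavor_extra_specs)
    return [name for name, traits in hosts.items() if needed <= traits]


# Hypothetical example: only cmp1 satisfies the flavor's required trait.
flavor = {'trait:CUSTOM_HYPERVISOR_VMWARE': 'required'}
hosts = {
    'cmp1': {'CUSTOM_HYPERVISOR_VMWARE', 'HW_CPU_X86_AVX2'},
    'cmp2': {'HW_CPU_X86_AVX2'},
}
print(hosts_for_flavor(flavor, hosts))  # ['cmp1']
```

This is also why bauzas hesitates to "disclose which hypervisor a flavor runs on": the trait or aggregate decoration is operator-side plumbing, not something the API exposes to end users.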