16:02:52 <Uggla> #startmeeting nova
16:02:52 <opendevmeet> Meeting started Tue Jun 17 16:02:52 2025 UTC and is due to finish in 60 minutes.  The chair is Uggla. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:52 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:52 <opendevmeet> The meeting name has been set to 'nova'
16:03:05 <Uggla> Hello everyone
16:03:26 <elodilles> o/
16:03:41 <Uggla> waiting for people to join.
16:04:05 <bauzas> o/
16:04:07 <fwiesel> o/
16:05:20 <gmaan> o/
16:05:44 <Uggla> Let's start slowly as I know some folks are part of another meeting.
16:06:14 <Uggla> #topic Bugs (stuck/critical)
16:06:22 <Uggla> #info No Critical bug
16:06:40 <Uggla> #topic Gate status
16:06:48 <Uggla> #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:06:56 <Uggla> #link https://etherpad.opendev.org/p/nova-ci-failures-minimal
16:07:04 <Uggla> #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&branch=stable%2F*&branch=master&pipeline=periodic-weekly&skip=0 Nova&Placement periodic jobs status
16:07:15 <Uggla> #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:07:23 <Uggla> #info Please try to provide a meaningful comment when you recheck
16:07:31 * gibi is on a different meeting in parallel
16:08:14 <Uggla> gibi, I know.
16:08:18 <Uggla> ;)
16:08:57 <gibi> I know you know ;)
16:09:06 <Uggla> #topic tempest-with-latest-microversion job status
16:09:16 <Uggla> #link https://zuul.opendev.org/t/openstack/builds?job_name=tempest-with-latest-microversion&skip=0
16:09:30 <Uggla> gmaan, something you want to tell about it ?
16:09:35 <gmaan> no new updates. did not get a chance to progress on this.
16:10:20 <gmaan> #link https://review.opendev.org/q/topic:%22latest-microversion-testing%22
16:10:28 <gmaan> this is latest changes/progress
16:10:38 <Uggla> no worries, that's a marathon not a sprint.
16:11:41 <Uggla> #topic Release Planning
16:11:48 <Uggla> #link https://releases.openstack.org/flamingo/schedule.html
16:11:55 <Uggla> #info Nova deadlines are set in the above schedule
16:12:07 <Uggla> #info Nova Spec review day is today.
16:13:00 <Uggla> tbh, I have not reviewed specs today. Due to a couple of escalations. So I will do that tomorrow.
16:14:54 <bauzas> I did a few specs reviews \o/
16:14:59 <bauzas> I'll continue tomorrow
16:15:39 <Uggla> bauzas, \o/
16:17:11 <bauzas> I mean, I'm happy to see Gerrit eventually this month :)
16:17:27 <bauzas> Gerrit, I missed you
16:17:33 <Uggla> there is still time for people in the us. (for reviews)
* gibi was also busy elsewhere but will try to review some specs, hopefully tomorrow
16:18:37 <Uggla> gibi ++
16:19:34 <Uggla> #topic Review priorities
16:19:36 <gmaan> i will check a few today,
16:19:47 <Uggla> #link https://etherpad.opendev.org/p/nova-2025.2-status
16:19:52 <Uggla> gmaan, thx
16:20:51 <Uggla> I'll try to update the doc by next week, to have a better view of what has been reviewed and what's left.
16:21:08 <Uggla> #topic OpenAPI
16:21:22 <Uggla> #link https://review.opendev.org/q/topic:%22openapi%22+(project:openstack/nova+OR+project:openstack/placement)+-status:merged+-status:abandoned
16:21:30 <Uggla> #info 20 open changes, an increase of 2 due to a FUP.
16:21:57 <Uggla> but we did progress on this one.
16:22:16 <Uggla> #topic Stable Branches
16:22:26 <Uggla> elodilles, I give you the mic
16:22:33 <elodilles> thanks Uggla ,
16:22:40 <elodilles> as far as i know:
16:22:45 <elodilles> #info stable branches (stable/2025.1 and stable/2024.*) seem to be in OK state
16:22:53 <elodilles> #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:23:03 <elodilles> please add issues here ^^^ if you find one
16:23:13 <elodilles> that's all from me about stable
16:23:17 <elodilles> Uggla: back to you
16:23:27 <Uggla> thx elodilles
16:23:31 <elodilles> np
16:23:44 <Uggla> #topic vmwareapi 3rd-party CI efforts Highlights
16:23:52 <fwiesel> Hi, no updates there.
16:24:05 <Uggla> fwiesel, ok thx.
16:24:21 <Uggla> #topic Gibi's news about eventlet removal.
16:24:22 <Uggla> #link Blog: https://gibizer.github.io/categories/eventlet/
16:24:28 <Uggla> #link nova-scheduler series is ready for core review, starting at https://review.opendev.org/c/openstack/nova/+/947966
16:24:44 <Uggla> gibi do you want to share something ?
16:27:29 <bauzas> I guess he provided a revision so the next step will be to review it
16:27:43 <bauzas> (which is something I'll do)
16:28:21 <Uggla> I know he added some stuff on top of the series to improve logging.
16:28:31 <gibi> sorry
16:28:38 <gibi> too many discussions in parallel
16:28:44 <gibi> so
16:29:02 <gibi> thanks for the reviews, I fixed most of the comments we gathered in last week's sync
16:29:37 <opendevreview> Abhishek Kekane proposed openstack/nova master: Revert^2 "Support glance's new location API"  https://review.opendev.org/c/openstack/nova/+/950623
16:29:37 <gibi> there is one comment from sean-k-mooney about the SpawnIsSynchronous fixture I have to work on
16:29:46 <gibi> it actually pointed to an independent nova bug
16:29:53 <gibi> I linked to the channel last evening
16:30:06 <gibi> other than that the series is reviewable
16:30:25 <gibi> and yes there are some logging improvement on top of the series based on the last week's sync
16:30:48 <gibi> and at some point I will add one proposal to the futurist lib as well for executor utilization
16:31:08 <gibi> other than this I dropped a "bomb" on the mailing list
16:31:17 <gibi> about the eventlet removal timeline
16:31:26 <Uggla> using spawn instead of fork ?
16:31:28 <gibi> as 2026.2 is pretty stretched for us based on our progress
16:31:46 <gibi> Uggla: nope just utilization logging
16:32:02 <Uggla> ok
16:32:05 <gibi> I have to go back to that ML thread as there are replies
16:32:51 <gibi> I expect TC will be involved
16:32:53 <gibi> that is all
16:33:13 <bauzas> We'll discuss it today at the TC meeting
16:33:14 <Uggla> maybe we could discuss news about that in the weekly meeting about eventlet.
16:33:15 <gmaan> I think we will be discussing it in today's meeting
16:33:22 <gmaan> yeah
16:33:27 <sean-k-mooney> gibi: are you referring to the spawn_on interaction or a different nova issue
16:33:30 <bauzas> In the open discussion
16:33:49 <bauzas> (I added the topic)
16:33:56 <sean-k-mooney> gibi: we don't need to go into detail, just wondering if it was something else
16:34:07 <gibi> sean-k-mooney: yepp the spawn_on pointed to the bug I linked yesterday about the decorator on the build_and_run_instance
16:34:17 <sean-k-mooney> oh that
16:34:24 <sean-k-mooney> ok thanks for the context
16:34:29 <gibi> np
16:34:44 <gibi> btw this is the eventlet removal ML thread https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/BIC7BTAN72X6AA4BE6VVNSP7FYFOC362/
16:34:48 <gibi> just for the record
16:34:58 <Uggla> +1
16:35:31 <gibi> that is all from me
16:35:42 <Uggla> thx gibi, moving on.
16:35:46 <Uggla> #topic Bridging the Gap: metrics and survey analyses
16:35:48 <fungi> i'll try to be quick, but there's a lot we dug into... for some background on openstack-wide metrics analysis see ildikov's most recent ml post yesterday:
16:35:52 <fungi> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/NTBNI7YIDCWBR6BTPEKVZIODWTVUIOXO/ BtG metrics analysis
16:35:59 <fungi> also could be worthwhile to revisit her previous post in that thread going over the contributor and survey results (and anyone who hasn't filled those out for epoxy, please see if you can find a few minutes to do that!)
16:36:09 <fungi> as a follow-up activity, we've started doing some team-specific analyses, focusing on teams that had multiple contributor and maintainer survey responses
16:36:19 <fungi> the contributor survey had 2 responses for nova and both respondents had contributed for at least 2 years and were contributors to at least two other open source projects
16:36:29 <fungi> most feedback was relatively neutral (averaging 3-3.5 out of 5) with the lowest score being a 2 on "Changes you propose are reviewed in a timely manner"
16:36:43 <fungi> "You receive actionable feedback from other reviewers" had the widest skew, with a 2 and a 5 from the respondents
16:36:56 <fungi> top challenges reported were trouble with review attention, unhelpful comments and reviewers asking to expand the scope of the change
16:37:04 <fungi> additional feedback noted that the team seemed to have more work backlog than people available to manage it, and that change prioritization seemed to be a struggle
16:37:13 <fungi> the maintainer survey also had 2 responses with overall higher scores than the contributor survey (averaging 3-4.5)
16:37:22 <fungi> contributing challenges reported were similar to those from the other survey, but specifically called out slow review times leading to changes with one +2 falling into merge conflict and repeatedly having to be rebased and re-reviewed, eating up more maintainer resources
16:37:31 <fungi> the top challenges with reviewing were changes without a spec/blueprint, low-priority changes, and contributors failing to address review comments or ci failures
16:37:46 <fungi> other comments included struggles in agreeing on automated tooling, and inconsistency of style/lint choices between different openstack teams
16:37:54 <fungi> looking at metrics we gathered from gerrit for the past 5 development cycles, we saw caracal was particularly slow for reviews (also observed for cinder), not sure what the cause was but then things got somewhat better again in the past year
16:38:08 <fungi> number of changes that don't get merged within a cycle were fairly steady over time except for epoxy where it spiked, and review times seemed consistent with contributor survey observations
16:38:23 <fungi> finding some way to improve the time to first review might help keep contributors, new and established, involved and engaged
16:38:33 <fungi> trying to balance out first review time between new/casual contributors and established team members/maintainers could also help
16:38:42 <fungi> probably the biggest and most actionable suggestion is better/clearer communication in reviews that are done, particularly increasing transparency with regard to change and spec prioritization
16:38:49 <fungi> sorry, i know that's a pretty big info dump (i tried to pare it down as much as possible)
16:38:58 <fungi> if ildikov's around she might also have additional comments
16:39:42 <fungi> also if anyone has questions, i'm happy to try and answer them either here in the meeting or in irc after
16:40:19 <gibi> what my pretty tired mind got from this is: so basically we should say no to new specs early instead of approving them and then not reviewing the implementation in time
16:40:20 <fungi> this is still just the beginning, we'll be continuing to run the surveys and refine some of the gerrit metrics we've been looking at
16:41:07 <fungi> gibi: yes, that's a great insight. having some feel for what your future review bandwidth might be and keeping conservative about over-committing to feature work especially
16:41:22 <gibi> then we will get the feedback that we are hostile to proposals
16:41:34 <gmaan> I am sure the feedback for this will be 'nova stopped accepting new requests/features'
16:41:38 <gmaan> gibi: ++
16:41:48 <gibi> this is a very hard thing to balance
16:41:53 <sean-k-mooney> i'm not sure if the SLURP effect impacted caracal. nova PoCed the SLURP process for yoga to antelope
16:41:59 <sean-k-mooney> but caracal was the first official one
16:42:32 <fungi> i don't think it's out of line to say that the team has limited review bandwidth and is prioritizing fixing bugs over adding features
16:42:37 <sean-k-mooney> nothing else specifically jumps out at me in that timeframe that made it special
16:42:56 <fungi> slurp is a great guess there, i agree
16:43:47 <ildikov> Apologies, I’m in transit, but trying to follow along and add comments if helpful
16:43:49 <gmaan> I think one question should be included in the survey: do the people who are not getting enough reviews also contribute to the community/nova? if not, could they, and improve the situation?
16:44:34 <fungi> but on the spec prioritization subtopic, it still seems like being conservative about approving new specs and telling people there won't be time to review the changes for them (and asking for help fixing bugs and working toward becoming new maintainers) is still going to be a better experience for them than having their specs approved and then not being able to find anyone to review the resulting changes
16:45:24 <gmaan> it is always a two-way thing and sometimes I feel that collecting/providing one-way feedback is not much help. We have been receiving this feedback for many years and I at least did not see any improvement due to it.
16:45:36 <fungi> writing a spec and having it postponed or rejected is some wasted work, but writing the entire implementation and having it get ignored will leave an even worse taste in their mouths
16:45:37 <gmaan> but maybe that is just me
16:46:23 <gibi> I can limit my spec +2 on things where I commit to review the impl and only use my +1 on the rest of the specs
16:46:43 <fungi> well, i see it as indicating where the project could use the most additional help, and also something maintainers can point to in order to explain why things people want aren't getting done
16:46:45 <ildikov> I think transparent communication is key, as y’all are saying. I also think gmaan is touching on an important point regarding capturing more contributors to do reviews.
16:46:49 <sean-k-mooney> we could try that but im not sure how much that will help
16:46:55 <gibi> but it won't create satisfaction from the contributors as their feature will not land this way either
16:47:00 <sean-k-mooney> that's partly picking your favourite rather than which is ready
16:48:04 <fungi> i also think satisfying ungrateful or entitled contributors isn't a goal, they're generally going to become a time sink, what would help is figuring out how to improve the experience for people who are likely to stick around and help
16:48:31 <gibi> how do I foresee who will stick around and help?
16:48:36 <fungi> and yeah, being able to tell the two apart isn't easy
16:48:40 <gibi> yeah
16:48:53 <gmaan> IMO, merging the spec is 'we welcome the ideas/feature requests', and if code reviews are slow/missing then yes, we face a challenge in review bandwidth, so help here
16:49:10 <gibi> should we require one cycle of active bugfixing from someone before we review their spec?
16:49:13 <sean-k-mooney> i guess folks that engage in the review positively and reflect on feedback
16:49:23 <fungi> but, for example, people who pester me for reviews on things that have only been open a day or two? i tend to ignore them and their changes, or at most explain to them that their expectations are unrealistic
16:49:24 <sean-k-mooney> is a good indication to some degree
16:49:59 <fungi> (unless it's clear that the change is urgent for the project and not just for the person proposing it)
16:50:09 <Uggla> maybe dumb question, the feedback are for nova or global for openstack project ?
16:50:45 <fungi> what i outlined above was the result of looking at the handful of resurvey responses we received for nova, and nova-specific gerrit metrics
16:50:58 <fungi> the posts on the mailing list are openstack-wide analysis
16:51:08 <ildikov> +1
16:51:10 <dansmith> this sounds like the same feedback we've had for ten years to me
16:51:20 <gmaan> yeah, maybe more than 10 :)
16:51:28 <sean-k-mooney> fungi: i read the mailing list post and i think there is a similar correlation between nova's feedback and the wider feedback
16:51:43 <Uggla> fungi, thanks. What about other projects ? Same feelings about reviews ?
16:53:22 <dansmith> I always chalk this up to people wanting to drive-by a fix with no tests, a spec with no promise of CI, or the hard work to refactor the things necessary to make it work for more than just their niche case
16:53:27 <fungi> to some extent, yes, nova had lower overall numbers than other projects we looked at, though for the metrics i think a lot of that is due to outliers and "long tail" values skewing the averages. median values suggest there is still a lot of work getting done and there are more changes getting merged than proposed each cycle, so the perceptions don't necessarily represent reality in all cases
16:53:52 <dansmith> and a decade of that sort of thing makes us fairly conservative in what we accept (code, not specs)
16:54:07 <sean-k-mooney> we did have a couple of large features like manila shares that took 4-5 releases
16:54:21 <dansmith> and we often get feedback that specs hold up implementation (even though they don't) so I think that's why some specs get approved even though we're not going to review the implementation
16:54:24 <sean-k-mooney> nova often has deps on other projects that can take a while to resolve
16:54:27 <fungi> however, from what we can tell it seems like a small number of poor experiences can disproportionately influence contributors' perceptions of the whole
16:54:32 <ildikov> Would seeing the metrics per release cycle help with evaluating what to do with the survey results?
16:54:37 <dansmith> or people think that an approved spec means we won't object to the actual implementation
16:54:47 <dansmith> sean-k-mooney: also that
16:54:48 <ildikov> Like the time needed for first reviewed to happen, patches to get merged
16:56:01 <sean-k-mooney> ildikov: maybe, but i'm not sure how actionable that data will be
16:56:19 <fungi> yeah, we're at least initially trying to avoid dumping the project-specific analyses to the ml since we don't want it to turn into side-by-side comparisons and this-team-is-better-than-that-team sorts of discussions, but we can find a way to share more concrete numbers
16:56:21 <dansmith> yeah, pretty meaningless to me
16:57:08 <gibi> yeah, I would be more interested in case studies on specific contribution attempts
16:57:24 <ildikov> sean-k-mooney: I understand. I was thinking to compare how long things take with what people’s impressions are. And see if there’s any that’s better or worse than what people had in mind. And then see which ones we can help y’all figure out how to improve.
16:57:43 <sean-k-mooney> knowing what the outliers were might help at least we could know if they should be excludied or were really a problem
16:57:51 <ildikov> This isn’t something to solve in one meeting. But we would like to help figure out how to make improvements.
16:58:13 <sean-k-mooney> i.e. if it took 200 days to merge but was the 30th patch in a series, that might mean the data should be excluded
16:58:13 <ildikov> And whether or not those actions are helping, or whether we should try something else
16:58:28 <fungi> i think it's also reasonable to ignore it all as "this is nothing we don't already know" but at least some feedback on what you'd like looked at (e.g. the idea of trying to gauge the contributor survey respondent's level of involvement in helping the project in other ways) is definitely appreciated
16:58:55 <ildikov> I’ll look into the case study approach
16:59:06 <sean-k-mooney> fungi: ildikov  well i think the general trends are useful, and if a project is vastly different from another then it would be interesting to know why
16:59:15 <dansmith> fungi: I'd also say "nothing we don't already know and have tried various things to address in the past, with minimal success"
16:59:26 <sean-k-mooney> but i think part of that comes down to the project in question too.
16:59:30 <ildikov> Might not be for a meeting to discuss that, since we can’t make that anonymous, etc
16:59:40 <dansmith> fungi: we've done all kinds of things like core sponsors, runways, etc..
16:59:52 <ildikov> sean-k-mooney: +1
16:59:52 <fungi> we've certainly done some case-by-case studies leading up to this (it started from specific member company representatives giving us lists of changes where they struggled and maybe gave up)
17:00:15 <dansmith> specs themselves were sort of a solution to this problem in the first place, and they improved some things but introduced a hurdle that sometimes get complained about
17:00:24 <fungi> part of the challenge is to avoid public shaming while finding ways to steer toward constructive feedback
17:00:53 <ildikov> +1
17:00:59 <Uggla> We are on top of the hour. So can we wrap up ?
17:01:04 <fungi> anyway, i didn't want to run over the end of the meeting
17:01:07 <fungi> thanks!
17:01:11 <gibi> ildikov fungi Thanks for the effort
17:01:16 <fungi> i'm always around for more discussion after
17:01:17 <dansmith> fungi: yep, appreciate that, because since this has been a complaint for over ten years, some of us are likely to feel like rage quitting :)
17:01:41 <Uggla> ildikov, fungi, thanks
17:01:44 <ildikov> We would love to avoid that! :)
17:01:59 <ildikov> Thanks y’all!
17:02:57 <Uggla> I wonder if we can postpone the open discussion to next week. There is a topic about Evacuate Action & InstanceInvalidState, but I don't know who entered it.
17:03:20 * gibi moves to the tc meeting...
17:03:44 <fungi> while it may be an unpopular approach, the way a lot of open source projects (particularly volunteer-run communities) approach that challenge is by just saying that complaining only serves to burn out the handful of people keeping things going, and to either help shoulder their burden or go away
17:04:24 <Uggla> fungi++ completely true.
17:04:48 <fwiesel> Uggla: That would be me. Then maybe two weeks?
17:05:00 <fwiesel> In two weeks, I meant.
17:05:36 <Uggla> fwiesel, yes I'll keep the item and we will discuss it not next week but the week after.
17:05:48 <fwiesel> Uggla: Thanks.
17:05:52 <gibi> fungi: yeah, then we need to figure out how they can effectively help shoulder the burden
17:06:50 <Uggla> fwiesel, thank you for understanding. I don't want to keep people here any longer as we are already overtime.
17:07:02 <Uggla> Thanks all
17:07:15 <Uggla> #endmeeting