14:01:40 <jbernard> #startmeeting cinder
14:01:40 <opendevmeet> Meeting started Wed Jul 16 14:01:40 2025 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:40 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:40 <opendevmeet> The meeting name has been set to 'cinder'
14:01:44 <Sai> o/
14:01:55 <hvlcchao1> o/
14:01:57 <Luzi> o/
14:02:20 <jbernard> o/
14:02:25 <simondodsley> o/
14:02:29 <Vivek__> o/
14:03:33 <hemna> mep
14:03:34 <gireesh> o/
14:04:13 <jungleboyj> o/
14:04:17 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-meetings
14:04:27 <rosmaita> o/
14:05:40 <jbernard> hello everyone, thanks for coming
14:05:48 <jbernard> this may be a very short meeting this week :)
14:06:54 <jbernard> #topic announcements
14:06:57 <jbernard> for schedule
14:07:02 <jbernard> #link https://releases.openstack.org/flamingo/schedule.html
14:07:30 <jbernard> we are about 5/6 weeks from freeze
14:07:49 <hemna> thanks for the link
14:09:11 <jbernard> reviews continue to be my primary focus
14:09:24 <jbernard> im hoping to look at tobias' patches next
14:09:33 <simondodsley> but is it the focus of the other cores as well??
14:10:51 <hemna> url?
14:11:13 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-reviews#L55
14:11:33 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-reviews#L24
14:12:06 <jbernard> they've been in my queue for quite some time (among many others)
14:12:18 <hemna> ok I'll take a look at those
14:12:24 <jbernard> hemna: thank you!
14:12:31 <hemna> can I add one to that list?  :P. https://review.opendev.org/c/openstack/cinder/+/952900
14:12:38 <jbernard> we need core review on Luzi's image encryption patch
14:12:42 <jbernard> hemna: sure
14:13:11 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/926298
14:13:46 <Anoop_Shukla> We have some reviews from NetApp that are pending too:
14:13:46 <Anoop_Shukla> https://review.opendev.org/c/openstack/cinder/+/954520
14:13:46 <Anoop_Shukla> https://review.opendev.org/c/openstack/cinder/+/952904
14:13:46 <jbernard> would also be nice to have eyes on the volume type metadata work
14:13:50 <jbernard> #link https://review.opendev.org/q/topic:%22user-visible-metadata-in-volume-types%22
14:14:17 <jbernard> I'm trying to update our review request etherpad as we go
14:14:27 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-reviews#L24
14:14:28 <hemna> Anoop_Shukla I just fixed the pep8 issues on the 952900 patch for netapp.
14:14:33 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-reviews
14:14:55 <hemna> Anoop_Shukla I'll check out those other Netapp reviews as well
14:14:58 <gireesh> thanks @hemna
14:15:01 <jbernard> there are many other reviews, please don't take my mention of the few above to mean that the others are less important
14:15:54 <Anoop_Shukla> Thanks @hemna
14:16:14 <jbernard> as always, asking for reviews can be helped by offering to do some yourself, trading is highly valued
14:16:32 <Vivek__> IBM Callhome patch is pending for the discussion. https://review.opendev.org/c/openstack/cinder/+/951829
14:16:51 <jbernard> Vivek__: soon
14:18:46 <jbernard> also, reminder to add signoff to your commits with the -s flag
14:18:56 <vdhakad> Hello, I had a patch under review https://review.opendev.org/c/openstack/cinder/+/925450. Brian posted a few minor comments that I handled. But IBM Storage CI setup is down and I don't have a CI run on the latest patch. Was wondering if it would be okay to merge without the CI, since there aren't any major code changes in the latest patch?
14:18:59 <jbernard> (this can be automated with a config alias)
14:19:43 <jbernard> this also allows others to do a rebase from the gerrit ui
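(A sketch of the config alias jbernard mentions, assuming git is installed; the alias name "cs" is illustrative, any name works:)

```shell
# Define a "git cs" alias so every commit carries the Signed-off-by
# trailer that `git commit -s` adds automatically.
git config --global alias.cs 'commit -s'

# From then on, committing with
#   git cs -m "volume: fix retype race"
# appends a trailer like
#   Signed-off-by: Your Name <you@example.com>
# taken from your configured user.name and user.email.
```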
14:19:48 <simondodsley> Are sean.mcginnis, e0ne, and geguileo still active in the core team?
14:20:14 <jbernard> simondodsley: i see sean jump in from time to time
14:20:30 <jbernard> i haven't heard from e0ne in a very long time, i hope he is well
14:20:45 <hemna> man it would be nice to get gorka back.  is he still at Red Hat?
14:21:08 <jbernard> he is, but on a different team, so his time (if any) is very strained
14:21:28 <jbernard> if there is something very pressing, we could try to reach out
14:21:32 <jungleboyj> e0ne is still ok.  I see him on FB but haven't seen him around here in a long time.
14:21:36 <jbernard> hemna: agreed
14:22:37 <simondodsley> My concern with Red Hat is that, as a company, they are pushing so hard on OpenShift and KubeVirt that it is taking all the steam out of OpenStack work
14:23:43 <hemna> yup
14:23:57 <simondodsley> and given, particularly in this project, Red Hat are major core contributors, it is slowing down development for everyone
14:24:49 <simondodsley> if I was a better developer I would volunteer for core, but I'm not that competent
14:24:57 <jbernard> i think it's a reasonable thing to feel concern over, I can only speak for myself and say:
14:25:17 <jbernard> i remain committed to open source (always have, always will)
14:25:34 <simondodsley> and we appreciate that
14:25:40 <jbernard> and there is no internal directive to shift focus
14:26:03 <jbernard> we all have many tasks and balancing all of it is challenging for all of us
14:26:21 <simondodsley> it seems to be happening by stealth - Gorka as a reference point for this
14:26:43 <jungleboyj> :-(
14:26:43 <rosmaita> well, Gorka is the kind of guy you'd miss
14:27:25 <jungleboyj> He isn't easy to sneak around.  :-)
14:27:33 <jbernard> what can we do to improve the situation given what we have? any ideas, please share
14:28:07 <simondodsley> Escalate to Collier
14:28:22 <jbernard> im happy to propose new core members if the numbers support it
14:28:43 <jbernard> do the work and I will help make sure it is recognized
14:29:18 <jbernard> we could be more organized about review gatherings, perhaps something on a tighter schedule
14:30:22 <simondodsley> not all reviews are XS or S, so they don't fall under the current scheduled review meetings
14:30:45 <jbernard> we all come and go on a long enough timeline, so we cannot hope that things won't change, we have to adapt ;)
14:31:12 <jbernard> simondodsley: fair point, can we alter things to accommodate better?
14:31:29 <jbernard> it's the non-S reviews that tend to wedge
14:31:35 <jbernard> they take more commitment
14:31:43 <simondodsley> agreed, but when the changes occur we need to react faster, rather than just languishing until the backlog is so large
14:32:24 <simondodsley> Who is our rep on the TC? Could this concern be raised there?
14:32:52 <jbernard> i think any one of us can raise an issue with the TC, they are there for us to make use of
14:33:10 <rosmaita> https://governance.openstack.org/tc/#current-members
14:34:22 <jbernard> another thought:
14:34:38 <jbernard> we technically have an open slot for a midcycle meetup in about 3 weeks
14:35:11 <jbernard> we could repurpose that time to just collaborate on the review queue and try to make a meaningful reduction
14:36:01 <simondodsley> I'm on PTO most of August, but even if I wasn't my +1s don't really count for anything, so the cores would need to buy in to this idea
14:36:35 <simondodsley> which is a good idea
14:37:12 <jbernard> hemna, jungleboyj, rosmaita: thoughts?
14:37:54 <simondodsley> don't forget eharney:
14:38:13 <simondodsley> and whoami-rajat
14:38:18 <jungleboyj> I think it is a good idea to use a mid-cycle to organize things and get reviews caught up.
14:38:21 <rosmaita> well, i think that if someone organized something, cores would participate ... problem is that organizing takes away from reviewing time, which is already scarce
14:38:34 <jungleboyj> rosmaita:  Chicken and Egg.
14:38:35 <jbernard> simondodsley: yeah, im not sure if they're here but certainly
14:39:08 <jbernard> i am willing to do the organizing
14:39:08 <simondodsley> all it needs is a meeting request for a whole day to be sent out - no special organizing required, unless OpenInfra require it
14:40:06 <rosmaita> simondodsley: i thought you were talking about an ongoing thing
14:40:33 <simondodsley> i would start with just a one-off to clear the backlog
14:40:52 <rosmaita> i think review day instead of midcycle is a good idea, i don't believe people have much to discuss other than that they need reviews!
14:40:52 <simondodsley> including the stable patches as well
14:41:03 <jbernard> rosmaita: agree
14:41:08 <jungleboyj> rosmaita: ++
14:41:15 <jbernard> ok, this is sounding like a consensus
14:41:28 <jbernard> #action schedule midcycle
14:41:42 <jbernard> #action repurpose midcycle into review sprint
14:41:52 <harsh> ++
14:42:18 <jungleboyj> Sounds like a place to start.
14:42:36 <rosmaita> but to be clear, we do want non-core participation as well ... it's helpful just to have someone read the commit message and release notes to make sure they make sense, and to look to see if the 3rd party CI has responded (where appropriate)
14:43:11 <harsh> yes.. will be there to help out as much as i can :)
14:43:28 <simondodsley> I'm happy to go through the patches in the etherpad and do a check of the logic, CI, etc, but the cores have to do a deeper review of the code
14:43:37 <vdhakad> rosmaita: Ack.
14:44:03 <jbernard> re CI, there may need to be a cutoff for non-functioning instances
14:44:30 <jbernard> the number of 'need review, but CI is not working, here are one-off results' is growing
14:45:43 <jbernard> it adds time to review, aside from it being against policy, and it's not scalable; additionally, making exceptions only exacerbates the issue
14:46:12 <simondodsley> that is because Zuul has been upgraded and some of the functionality we were using has been deprecated and there is no clear way to recover the functionality
14:46:47 <jbernard> simondodsley: we need to raise this with the infra team and get feedback, maybe others can make use of that information
14:46:53 <simondodsley> if there were better documentation on using zuul for 3rd party CI, like how to send logs to public github repos, that would help
14:47:08 <jbernard> simondodsley: but we have to do something, one-off ci results slow down an already overloaded system
14:47:09 <vdhakad> simondodsley: +1
14:47:42 <Anoop_Shukla> From NetApp side, we are trying to fix the Zuul issues too. We have a dedicated person to fix our CI and are making progress on that. But in the meanwhile we are trying to do all we can to make sure the reviewers have the data that they need to assess any regressions.
14:48:05 <simondodsley> jbernard: totally agree, but with the limitations we now have around security, opening firewalls to see log results is a big issue, so we need a better way to expose the logs to the team
14:48:51 <jbernard> simondodsley: i understand.  can you drive this effort? or help recruit someone that can?  we need help here
14:48:54 <simondodsley> we used to copy to a public AWS instance, but Zuul stopped us passing the instance URL in the response
14:49:19 <Vivek__> CI has multiple dependencies, which can lead to delays in patch verification.
14:49:25 <jbernard> simondodsley: reaching out to the zuul team seems like a good starting point
14:49:31 <simondodsley> i had a guy working on it, but he is moving within the company and can no longer work on it
14:49:31 <Vivek__> In certain cases where the code changes are minimal, a few exceptions may be acceptable.
14:50:17 <simondodsley> in the past the zuul team refer you back to the founder, who then tries to sell you his consultancy services... not cool
14:50:49 <jbernard> can we try again?
14:50:54 <simondodsley> i can try
14:51:11 <simondodsley> i will ask for specific github integration documentation and examples
14:51:14 <gireesh> if the change is only driver related then we have to give an exception, but if changes are related to core then they can impact other code also; in that case CI should pass
14:51:15 <jbernard> at least we'll have a current response, and something to build on
14:52:05 <harsh> gireesh: ++
14:52:32 <Vivek__> gireesh: ++
14:53:10 <jbernard> respectfully i disagree, CI results on driver changes are very important
14:54:39 <simondodsley> true
14:54:46 <gireesh> agree with your point, but in case CI is not up, and the driver owner is confident in their changes, then we can not block the change from merging
14:54:55 <simondodsley> any driver patch must pass its own CI
14:55:24 <simondodsley> gireesh: you have to fix the CI then
14:55:57 <harsh> simondodsley: Ack.
14:56:09 <gireesh> yes, we are working on this
14:56:50 <simondodsley> also remember that all 3rd party CIs should also be checking os-brick patches as well, to ensure the driver still works with any changes made there
14:56:57 <gireesh> other solution is, if CI is down, we can run the tempest test locally and share the test result
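(A hedged sketch of what gireesh suggests, for a working devstack/tempest environment; the test regex and output filenames are illustrative, and `subunit2html` comes from the python-subunit package:)

```shell
# Run only the volume API tempest tests locally, capture a subunit
# stream, and render it to HTML so reviewers can read the results.
# Skips cleanly if tempest is not installed on this machine.
if command -v tempest >/dev/null; then
    tempest run --regex '^tempest\.api\.volume' --subunit > volume.subunit
    subunit2html volume.subunit volume-results.html
    echo "attach volume-results.html (or a public link to it) to the gerrit review"
else
    echo "tempest not installed; skipping"
fi
```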
14:57:16 <jbernard> please work as quickly as you can, an exception for you becomes expected by others, and this degrades the quality of our releases over time
14:57:50 <gireesh> jbernard ++
14:59:10 <jbernard> it also slows the review process, what you want is this: a patch that passes both zuul and your 3rd party CI (with visible results) so that a reviewer has everything they need to either push back, or allow the patch to proceed.  you don't want to force a back-and-forth on each one, this is the slow path
15:00:19 <Vivek__> how about running tempest locally and posting the visible result on patch ?
15:00:29 <harsh> Question - As we discussed about the review day, how will the team be notified about the invite? Will it be via mailing list?
15:00:42 <jbernard> im going to ask for a few extra minutes, Vivek__ wants to mention this patch update
15:01:17 <Vivek__> Patch : https://review.opendev.org/c/openstack/cinder/+/951829
15:01:20 <jbernard> Vivek__: it needs to be automated, drivers are not exactly owned, anyone in theory could make a change, and they need to see if it causes a regression
15:02:09 <Vivek__> jbernard: Ack.
15:03:06 <simondodsley> jbernard: I have just asked on the Zuul matrix channel about docs for github integration and response to gerrit
15:03:17 <jbernard> Vivek__: do you have an update about the call home patch?
15:03:40 <jbernard> simondodsley: ok, curious what you find, please let us know
15:05:54 <jbernard> ok, we are out of time, but discussion can continue in our normal channel
15:06:03 <jbernard> thank you everyone for participating
15:06:07 <jbernard> #endmeeting