Wednesday, 2025-07-16

14:01 <jbernard> #startmeeting cinder
14:01 <opendevmeet> Meeting started Wed Jul 16 14:01:40 2025 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01 <opendevmeet> The meeting name has been set to 'cinder'
14:01 <Sai> o/
14:01 <hvlcchao1> o/
14:01 <Luzi> o/
14:02 <jbernard> o/
14:02 <simondodsley> o/
14:02 <Vivek__> o/
14:03 <hemna> mep
14:03 <gireesh> o/
14:04 <jungleboyj> o/
14:04 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-meetings
14:04 <rosmaita> o/
14:05 <jbernard> hello everyone, thanks for coming
14:05 <jbernard> this may be a very short meeting this week :)
14:06 <jbernard> #topic announcements
14:06 <jbernard> for schedule
14:07 <jbernard> #link https://releases.openstack.org/flamingo/schedule.html
14:07 <jbernard> we are about 5-6 weeks from freeze
14:07 <hemna> thanks for the link
14:09 <jbernard> reviews continue to be my primary focus
14:09 <jbernard> I'm hoping to look at tobias' patches next
14:09 <simondodsley> but is it the focus of the other cores as well??
14:10 <hemna> url?
14:11 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-reviews#L55
14:11 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-reviews#L24
14:12 <jbernard> they've been in my queue for quite some time (among many others)
14:12 <hemna> ok, I'll take a look at those
14:12 <jbernard> hemna: thank you!
14:12 <hemna> can I add one to that list?  :P https://review.opendev.org/c/openstack/cinder/+/952900
14:12 <jbernard> we need core review on Luzi's image encryption patch
14:12 <jbernard> hemna: sure
14:13 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/926298
14:13 <Anoop_Shukla> We have some reviews from NetApp that are pending too:
14:13 <Anoop_Shukla> https://review.opendev.org/c/openstack/cinder/+/954520
14:13 <Anoop_Shukla> https://review.opendev.org/c/openstack/cinder/+/952904
14:13 <jbernard> would also be nice to have eyes on the volume type metadata work
14:13 <jbernard> #link https://review.opendev.org/q/topic:%22user-visible-metadata-in-volume-types%22
14:14 <jbernard> I'm trying to update our review request etherpad as we go
14:14 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-reviews#L24
14:14 <hemna> Anoop_Shukla: I just fixed the pep8 issues on the 952900 patch for NetApp.
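(For context on the pep8 fix above: in Cinder, as in most OpenStack projects, the same style checks the gate runs can be run locally before pushing. A minimal sketch, assuming tox is installed and run from the repo root; the file path is illustrative:)

    # run the pep8/style checks the gate would run
    tox -e pep8
    # or, narrower and faster, check just the files you touched
    flake8 cinder/volume/drivers/netapp/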
14:14 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-reviews
14:14 <hemna> Anoop_Shukla: I'll check out those other NetApp reviews as well
14:14 <gireesh> thanks @hemna
14:15 <jbernard> there are many other reviews; please don't take my mention of the few above to mean that the others are less important
14:15 <Anoop_Shukla> Thanks @hemna
14:16 <jbernard> as always, asking for reviews can be helped by offering to do some yourself; trading is highly valued
14:16 <Vivek__> the IBM call home patch is pending discussion: https://review.opendev.org/c/openstack/cinder/+/951829
14:16 <jbernard> Vivek__: soon
14:18 <jbernard> also, a reminder to add a signoff to your commits with the -s flag
14:18 <vdhakad> Hello, I had a patch under review: https://review.opendev.org/c/openstack/cinder/+/925450. Brian posted a few minor comments that I handled. But the IBM Storage CI setup is down and I don't have a CI run on the latest patch. Was wondering if it would be okay to merge without the CI, since there aren't any major code changes in the latest patch?
14:18 <jbernard> (this can be automated with a config alias)
14:19 <jbernard> this also allows others to do a rebase from the Gerrit UI
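(A minimal sketch of the config alias jbernard mentions; the alias name "cs" and the commit message are arbitrary:)

    # create a global alias that always adds a Signed-off-by trailer
    git config --global alias.cs 'commit -s'
    # then use it in place of plain "git commit"
    git cs -m "your change"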
14:19 <simondodsley> Are sean.mcginnis, e0ne, and geguileo still active in the core team?
14:20 <jbernard> simondodsley: I see sean jump in from time to time
14:20 <jbernard> I haven't heard from e0ne in a very long time, I hope he is well
14:20 <hemna> man, it would be nice to get Gorka back.  is he still at Red Hat?
14:21 <jbernard> he is, but on a different team, so his time (if any) is very strained
14:21 <jbernard> if there is something very pressing, we could try to reach out
14:21 <jungleboyj> e0ne is still ok.  I see him on FB but haven't seen him around here in a long time.
14:21 <jbernard> hemna: agreed
14:22 <simondodsley> My concern with Red Hat is that, as a company, they are pushing so hard on OpenShift and KubeVirt that it is taking all the steam out of OpenStack work
14:23 <hemna> yup
14:23 <simondodsley> and given that Red Hat are major core contributors, particularly in this project, it is slowing down development for everyone
14:24 <simondodsley> if I was a better developer I would volunteer for core, but I'm not that competent
14:24 <jbernard> I think it's a reasonable thing to feel concern over; I can only speak for myself and say:
14:25 <jbernard> I remain committed to open source (always have, always will)
14:25 <simondodsley> and we appreciate that
14:25 <jbernard> and there is no internal directive to shift focus
14:26 <jbernard> we all have many tasks, and balancing all of it is challenging for all of us
14:26 <simondodsley> it seems to be happening by stealth - take Gorka as a reference point for this
14:26 <jungleboyj> :-(
14:26 <rosmaita> well, Gorka is the kind of guy you'd miss
14:27 <jungleboyj> He isn't easy to sneak around.  :-)
14:27 <jbernard> what can we do to improve the situation given what we have? any ideas, please share
14:28 <simondodsley> Escalate to Collier
14:28 <jbernard> I'm happy to propose new core members if the numbers support it
14:28 <jbernard> do the work and I will help make sure it is recognized
14:29 <jbernard> we could be more organized about review gatherings, perhaps something on a tighter schedule
14:30 <simondodsley> not all reviews are XS or S, so they don't fall under the current scheduled review meetings
14:30 <jbernard> we all come and go on a long enough timeline, so we cannot hope that things won't change, we have to adapt ;)
14:31 <jbernard> simondodsley: fair point, can we alter things to accommodate better?
14:31 <jbernard> it's the non-S reviews that tend to wedge
14:31 <jbernard> they take more commitment
14:31 <simondodsley> agreed, but when the changes occur we need to react faster, rather than just languishing until the backlog is so large
14:32 <simondodsley> Who is our rep on the TC? Could this concern be raised there?
14:32 <jbernard> I think any one of us can raise an issue with the TC; they are there for us to make use of
14:33 <rosmaita> https://governance.openstack.org/tc/#current-members
14:34 <jbernard> another thought:
14:34 <jbernard> we technically have an open slot for a midcycle meetup in about 3 weeks
14:35 <jbernard> we could repurpose that time to just collaborate on the review queue and try to make a meaningful reduction
14:36 <simondodsley> I'm on PTO most of August, but even if I wasn't, my +1s don't really count for anything, so the cores would need to buy in to this idea
14:36 <simondodsley> which is a good idea
14:37 <jbernard> hemna, jungleboyj, rosmaita: thoughts?
14:37 <simondodsley> don't forget eharney
14:38 <simondodsley> and whoami-rajat
14:38 <jungleboyj> I think it is a good idea to use a mid-cycle to organize things and get reviews caught up.
14:38 <rosmaita> well, I think that if someone organized something, cores would participate ... problem is that organizing takes away from reviewing time, which is already scarce
14:38 <jungleboyj> rosmaita: Chicken and egg.
14:38 <jbernard> simondodsley: yeah, I'm not sure if they're here, but certainly
14:39 <jbernard> I am willing to do the organizing
14:39 <simondodsley> all it needs is a meeting request for a whole day to be sent out - no special organizing required, unless OpenInfra requires it
14:40 <rosmaita> simondodsley: I thought you were talking about an ongoing thing
14:40 <simondodsley> I would start with just a one-off to clear the backlog
14:40 <rosmaita> I think a review day instead of the midcycle is a good idea; I don't believe people have much to discuss other than that they need reviews!
14:40 <simondodsley> including the stable patches as well
14:41 <jbernard> rosmaita: agree
14:41 <jungleboyj> rosmaita: ++
14:41 <jbernard> ok, this is sounding like a consensus
14:41 <jbernard> #action schedule midcycle
14:41 <jbernard> #action repurpose midcycle into review sprint
14:41 <harsh> ++
14:42 <jungleboyj> Sounds like a place to start.
14:42 <rosmaita> but to be clear, we do want non-core participation as well ... it's helpful just to have someone read the commit message and release notes to make sure they make sense, and to look to see if the 3rd party CI has responded (where appropriate)
14:43 <harsh> yes.. will be there to help out as much as I can :)
14:43 <simondodsley> I'm happy to go through the patches in the etherpad and do a check of the logic, CI, etc, but the cores have to do a deeper review of the code
14:43 <vdhakad> rosmaita: Ack.
14:44 <jbernard> re CI, there may need to be a cutoff for non-functioning instances
14:44 <jbernard> the number of 'need review, but CI is not working, here are one-off results' is growing
14:45 <jbernard> it adds time to review, aside from it being against policy, and it's not scalable; additionally, making exceptions only exacerbates the issue
14:46 <simondodsley> that is because Zuul has been upgraded and some of the functionality we were using has been deprecated, and there is no clear way to recover the functionality
14:46 <jbernard> simondodsley: we need to raise this with the infra team and get feedback; maybe others can make use of that information
14:46 <simondodsley> if there were better documentation on using Zuul for 3rd party CI, like how to send logs to public GitHub repos, that would help
14:47 <jbernard> simondodsley: but we have to do something, one-off CI results slow down an already overloaded system
14:47 <vdhakad> simondodsley: +1
14:47 <Anoop_Shukla> From the NetApp side, we are trying to fix the Zuul issues too.. we have a dedicated person fixing our CI and are making progress on that. But in the meantime we are trying to do all we can to make sure the reviewers have the data that they need to assess any regressions.
14:48 <simondodsley> jbernard: totally agree, but with the limitations we now have around security, opening firewalls to see log results is a big issue, so we need a better way to expose the logs to the team
14:48 <jbernard> simondodsley: I understand.  can you drive this effort? or help recruit someone who can?  we need help here
14:48 <simondodsley> we used to copy to a public AWS instance, but Zuul stopped us passing the instance URL in the response
14:49 <Vivek__> CI has multiple dependencies, which can lead to delays in patch verification.
14:49 <jbernard> simondodsley: reaching out to the Zuul team seems like a good starting point
14:49 <simondodsley> I had a guy working on it, but he is moving within the company and can no longer work on it
14:49 <Vivek__> In certain cases where the code changes are minimal, a few exceptions may be acceptable.
14:50 <simondodsley> in the past, the Zuul team referred you back to the founder, who then tried to sell you his consultancy services... not cool
14:50 <jbernard> can we try again?
14:50 <simondodsley> I can try
14:51 <simondodsley> I will ask for specific GitHub integration documentation and examples
14:51 <gireesh> if the change is only driver related then we have to give an exception, but if the change is related to core then it can impact other code as well; in that case CI should pass
14:51 <jbernard> at least we'll have a current response, and something to build on
14:52 <harsh> gireesh: ++
14:52 <Vivek__> gireesh: ++
14:53 <jbernard> respectfully, I disagree; CI results on driver changes are very important
14:54 <simondodsley> true
14:54 <gireesh> agree with your point, but if CI is not up and the driver owner is confident in his changes, then we cannot block his change from merging
14:54 <simondodsley> any driver patch must pass its own CI
14:55 <simondodsley> gireesh: you have to fix the CI then
14:55 <harsh> simondodsley: Ack.
14:56 <gireesh> yes, we are working on this
14:56 <simondodsley> also remember that all 3rd party CIs should be checking os-brick patches as well, to ensure the driver still works with any changes made there
14:56 <gireesh> another solution is: if CI is down, we can run the tempest tests locally and share the test results
14:57 <jbernard> please work as quickly as you can; an exception for you becomes expected by others, and this degrades the quality of our releases over time
14:57 <gireesh> jbernard: ++
14:59 <jbernard> it also slows the review process. what you want is this: a patch that passes both Zuul and your 3rd party CI (with visible results), so that a reviewer has everything they need to either push back or allow the patch to proceed.  you don't want to force a back-and-forth on each one, this is the slow path
15:00 <Vivek__> how about running tempest locally and posting the visible results on the patch?
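(One way to produce such results: a sketch, assuming a working deployment with tempest and the cinder tempest plugin configured; the regex and output filename are illustrative:)

    # run the volume tests locally and capture output to attach to the review
    tempest run --regex '(tempest\.api\.volume|cinder_tempest_plugin)' | tee tempest-volume-results.txt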
15:00 <harsh> Question - as we discussed the review day, how will the team be notified of the invite? Will it be via the mailing list?
15:00 <jbernard> I'm going to ask for a few extra minutes; Vivek__ wants to mention this patch update
15:01 <Vivek__> Patch: https://review.opendev.org/c/openstack/cinder/+/951829
15:01 <jbernard> Vivek__: it needs to be automated; drivers are not exactly owned, anyone in theory could make a change, and they need to see if it causes a regression
15:02 <Vivek__> jbernard: Ack.
15:03 <simondodsley> jbernard: I have just asked on the Zuul Matrix channel about docs for GitHub integration and responding to Gerrit
15:03 <jbernard> Vivek__: do you have an update about the call home patch?
15:03 <jbernard> simondodsley: ok, curious what you find, please let us know
15:05 <jbernard> ok, we are out of time, but discussion can continue in our normal channel
15:06 <jbernard> thank you everyone for participating
15:06 <jbernard> #endmeeting
15:06 <opendevmeet> Meeting ended Wed Jul 16 15:06:07 2025 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:06 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-07-16-14.01.html
15:06 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-07-16-14.01.txt
15:06 <opendevmeet> Log:            https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-07-16-14.01.log.html
15:06 <jungleboyj> Thank you!
