Wednesday, 2025-01-15

07:51 *** jhorstmann is now known as Guest5975
13:46 *** jhorstmann is now known as Guest5997
14:00 <jbernard> #startmeeting cinder
14:00 <opendevmeet> Meeting started Wed Jan 15 14:00:43 2025 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 <opendevmeet> The meeting name has been set to 'cinder'
14:00 <jbernard> #topic roll call
14:00 <jbernard> o/
14:00 <abishop> o/
14:01 <josephillips> o/
14:01 <Luzi> o/
14:01 <simondodsley> o/
14:01 <sp-bmilanov> hi!
14:01 <sfernand> hi folks
14:01 <jungleboyj> o/
14:01 <rosmaita> o/
14:01 <msaravan> Hi
14:01 <jhorstmann> o/
14:01 <akawai> o/
14:02 <whoami-rajat_> Hi
14:03 <jbernard> hello everyone, thanks for joining
14:04 <jbernard> #link https://etherpad.opendev.org/p/cinder-epoxy-meetings
14:04 <jbernard> before we jump into the topic, quick schedule update
14:04 <jbernard> #link https://releases.openstack.org/epoxy/schedule.html
14:05 <jbernard> we are currently at M2
14:06 <simondodsley> quick question: why are there no Cinder points on this schedule?
14:06 <jbernard> final release is late march, but for coding we're looking at late feb
14:06 <jbernard> simondodsley: ahh that's an easy one :) i didn't create them :)
14:06 <josephillips> question, i'm new to these meetings: can an extra review topic be added here to get checked?
14:07 <simondodsley> lol - ok - F-cycle then...
14:07 <jbernard> josephillips: sure, just add it to the etherpad
14:07 <jbernard> josephillips: i scrape that each week and add it to my review list
14:08 <jbernard> simondodsley: this brings me to my point
14:08 <jbernard> aside from freezes, the important date we need to agree on is the midcycle
14:08 <jbernard> because I didn't set the date already, we have some flexibility
14:08 <yuval> o/
14:09 <simondodsley> why not just use the same as Manila?
14:09 <jbernard> generally, late jan or early feb is ideal, to give enough runway to implement anything that comes out
14:09 <jbernard> simondodsley: that's fine with me
14:09 <jbernard> i wanted to raise it here
14:10 <jbernard> Manila midcycle is week R9 (jan 27)
14:10 <yuval> wait, so where is the feature code freeze?
14:10 <jbernard> yuval: feature freeze is R5 (feb 24)
14:11 <simondodsley> can we state the feature freeze here so we are all aware?
14:11 <yuval> got it
14:11 <simondodsley> oops
14:12 <jbernard> are there any conflicts or objections to repurposing our meeting into a midcycle the week of Jan 27?
14:12 <jbernard> jan 29 to be precise
14:12 <jbernard> (2 weeks from today)
14:13 <rosmaita> works for me
14:13 <jungleboyj> No concerns here.
14:13 <abishop> +1
14:14 <yuval> +1
14:14 <sfernand> +1
14:14 <josephillips> +1
14:14 <jbernard> ok, great
14:14 <jbernard> i will send a mail
14:15 <jbernard> thanks
14:15 <jbernard> #topic Enhanced Granularity and Live Application of Front-end QoS Policies
14:15 <jbernard> #link https://blueprints.launchpad.net/cinder/+spec/enhanced-granularity-and-live-application-of-frontend-qos-policies
14:16 <MengyangZhang[m]> Yes, I proposed this, just want to know if the community is interested in this feature
14:16 <MengyangZhang[m]> before we commit to implementing it
14:17 <simondodsley> i love the idea of live changes to the QoS settings for a mounted volume.
14:18 <simondodsley> The project-specific QoS is also a good idea. Pure has a patch up now to do exactly the same for our backend QoS
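
For context on what "live changes" would rely on: front-end QoS is enforced by the hypervisor, and libvirt exposes a block I/O tuning call that can change limits on a running guest without a detach/reattach. A minimal sketch with libvirt-python, assuming a hypothetical domain and disk name and illustrative values (this is not the nova implementation):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # hypothetical domain

    # setBlockIoTune() retunes an attached disk in place; with
    # VIR_DOMAIN_AFFECT_LIVE the new limits apply immediately.
    dom.setBlockIoTune(
        'vdb',  # hypothetical target device
        {'total_iops_sec': 1000, 'total_bytes_sec': 100 * 1024 * 1024},
        libvirt.VIR_DOMAIN_AFFECT_LIVE,
    )
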
14:18 <jbernard> the summary looks good, is there anything like a spec, or more details on the proposed implementation? that would likely be the next step
14:18 <abishop> but wouldn't that be something for nova to implement?
14:19 <MengyangZhang[m]> Yes, that's the next step, and indeed this is a cross-project effort that also needs code changes on the nova side.
14:19 <jbernard> abishop: i would assume nova would have to be involved, a spec should show that
14:19 <MengyangZhang[m]> I will try to create a similar bp on the nova side and bring it up in the next cinder-nova meeting
14:20 <abishop> conceptually it all sounds good, and clearly specs will cover a lot of details
14:20 <MengyangZhang[m]> simondodsley: great, is there a link to their patch?
14:20 <jbernard> MengyangZhang[m]: yeah, if nova has no objections then specs for cinder and nova would allow us to properly review
14:21 <simondodsley> https://review.opendev.org/c/openstack/cinder/+/933675
14:21 <MengyangZhang[m]> sounds good, i will bring it up in the next nova meeting
14:21 <MengyangZhang[m]> would love to learn how they did it
14:22 <rosmaita> MengyangZhang[m]: did your nova spec that we discussed last week get accepted?
14:23 <MengyangZhang[m]> their spec freeze has already passed, so i have to wait until next cycle
14:23 <MengyangZhang[m]> but overall they don't have any objections to it
14:24 <whoami-rajat_> I think you can raise a spec freeze exception
14:24 <rosmaita> ok, bummer about the delay, sorry that we were slow to act on the cinder side
14:24 <whoami-rajat_> I remember sean mooney suggesting the same
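
For reference, the mechanism the blueprint would make more granular: QoS is expressed as a qos-spec whose 'consumer' field says where it is enforced, associated with a volume type. A minimal sketch with python-cinderclient, assuming hypothetical names, credentials, and endpoint:

    from cinderclient import client
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Hypothetical credentials and endpoint, for illustration only.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    cinder = client.Client('3', session=session.Session(auth=auth))

    # 'consumer' selects enforcement: 'front-end' (hypervisor),
    # 'back-end' (storage driver), or 'both'.
    qos = cinder.qos_specs.create('gold-iops',
                                  {'consumer': 'front-end',
                                   'total_iops_sec': '1000'})

    # Volumes of this type carry the limits in their connection info.
    vtype = cinder.volume_types.find(name='gold')
    cinder.qos_specs.associate(qos, vtype.id)
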
14:26 <jbernard> ok, let's keep moving
14:26 <jbernard> #topic 936619: Dell PowerMax: multi detach req caused race conditions (rosmaita)
14:26 <rosmaita> hi
14:26 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/936619
14:26 <rosmaita> this is the latest in my weekly series of patches that have stalled
14:27 <rosmaita> the issue is that the dell driver maintainer wants this local fix, which they say they have tested and is working
14:28 <rosmaita> while some cinder cores are wondering whether the patch is a band-aid, and a more thorough fix is required
14:28 <rosmaita> i can see this both ways
14:29 <rosmaita> which is the problem, i guess
14:29 <whoami-rajat> the main worry here is that if we merge this and later some other deadlock occurs because we didn't fix this the proper way, we are back to square one again
14:29 <rosmaita> what i would like is for us to decide today whether to accept the patch with reservations
14:29 <rosmaita> or require more work
14:31 <abishop> the topic pertains to coordination locks; currently there are lots of small locks around portions of the code, and the proposed fix would add a new lock around a much larger operation
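
To illustrate the tradeoff abishop describes: cinder drivers serialize critical sections with the coordination module, where the lock name is a template filled in from the decorated function's arguments. A sketch of fine-grained vs. coarse-grained locking, with hypothetical method and lock names (not the actual PowerMax code):

    from cinder import coordination

    class HypotheticalDriver(object):
        # Fine-grained: serializes updates to a single masking view,
        # so unrelated attach/detach requests can still interleave.
        @coordination.synchronized('hypo-mv-{masking_view}')
        def _update_masking_view(self, masking_view, volume):
            pass

        # Coarse-grained (the shape of the proposed fix): one lock per
        # backend array around the whole attach/detach flow, so only
        # one such request runs at a time -- this removes the race at
        # the cost of concurrency.
        @coordination.synchronized('hypo-array-{self.array_id}')
        def initialize_connection(self, volume, connector):
            pass
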
14:32 <jbernard> this is all local to the powermax driver
14:33 <rosmaita> right
14:33 <jbernard> there is a significant backlog to read
14:34 <jbernard> for the folks already involved in the patch, is there a consensus?
14:35 <rosmaita> well, yian has a comment that this top-level lock that we don't like is actually correct for the backend
14:35 <rosmaita> https://review.opendev.org/c/openstack/cinder/+/936619/2#message-085723942bf6ec5a1e82390ec41a5a5c042fe12c
14:36 <rosmaita> that said, i am inclined to trust Rajat's intuition that this may not fix everything
14:36 <rosmaita> but, on the third hand, it's dell's driver and they do have an argument that this is an ok fix
14:37 <rosmaita> and that it's been tested and avoids the race condition
14:37 <abishop> that weighs in favor of Dell's patch, they own the solution but also any fallout/regression
14:38 <jbernard> ^ this is where i lean as well
14:38 <jbernard> but i'm not up to speed on this one
14:38 <rosmaita> i guess my feeling is that since this doesn't touch the main cinder code, and dell is convinced that it is correct, we probably shouldn't hold it up
14:38 <simondodsley> the comments mention OpenShift and also RHOSP - how are these related, unless they mean RHOSO?
14:39 <simondodsley> and even then the disconnects should be on the nova side, which is not in pods
14:39 <whoami-rajat> their approach is not totally wrong, they want to only allow either attach or detach of one volume at a time, it's just that the scope of the lock is too big
14:40 <whoami-rajat> I'm okay to remove my -1 if the team thinks it's best to let the dell team handle the issue their way
14:42 <simondodsley> in general the fix sounds reasonable, and it is a powermax-specific issue, so the core cinder code doesn't need to be affected
14:43 <jbernard> rosmaita, abishop? any strong feelings?
14:43 <abishop> I think reviewer concerns are noted, but am OK with Dell proceeding with their patch
14:44 <rosmaita> i guess that's where i come down too ... Rajat's reasoning is explained well in his comment, and they will have something to go back to if the problem persists
14:44 <rosmaita> (i mean, something to refer to)
14:44 <jbernard> these are all good points
14:45 <jbernard> ok, this one can unblock then
14:45 <rosmaita> i think Rajat can keep his -1 if he likes
14:45 <jbernard> that's fair
14:46 <rosmaita> in fact, he should
14:46 <rosmaita> and then he can say "i told you so" after i +2 the patch
14:46 <jbernard> lol
14:46 <rosmaita> :D
14:46 <whoami-rajat> :D
14:46 <rosmaita> seriously, though, i do think you should leave the -1 to make it clear that you have reservations
14:46 <whoami-rajat> i removed it since it might discourage others from reviewing it
14:46 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/936619/2#message-dd5cb190790e5ab109d6712cc8f994d35c3d1aa4
14:47 <whoami-rajat> ok, i will add it again then ...
14:47 <rosmaita> jbernard: i will review, will you be the 2nd reviewer?
14:48 <rosmaita> (i just don't want this patch to continue to sit now that we've decided how to proceed)
14:48 <jbernard> rosmaita: yes, i will have cycles this afternoon
14:48 <rosmaita> ok, great ... thanks everyone for the discussion, that's all from me
14:48 <whoami-rajat> done
14:48 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/936619/2#message-af9ae0adbfebdb8276edea8ef6b4fd38748af0cc
14:48 <jbernard> cool
14:48 <jbernard> #topic open discussion
14:49 <flelain> Hey! Hope you guys are doing well!
14:49 <jbernard> flelain: thanks, you too!
14:49 <flelain> Don't know if that's the best slot, but I have this patch, stalled for now: https://review.opendev.org/c/openstack/cinder/+/937526
14:50 <flelain> Yeah, doing well, thx!
14:50 <flelain> And sorry if it's not the best time and place :-/
14:51 <jbernard> flelain: rosmaita will likely follow up on that as he's reviewed it already
14:51 <sp-bmilanov> btw, last week there was talk of getting an interim os-brick release out, to help with getting Cinder changes in.. is that still on the table?
14:51 <rosmaita> i think that patch is good, i just wanted to look into whether the new exception class is really necessary
14:51 <rosmaita> sp-bmilanov: i think it was released
14:51 <jbernard> i +1'd the hash
14:51 <whoami-rajat> we wouldn't even have that issue if nova cleaned up the leftover attachments during live migration and other similar operations, but that's a long-running discussion
14:51 <rosmaita> (i'm sure flelain is right about the exception, i was just surprised and wanted to look more closely)
14:53 <sp-bmilanov> rosmaita: where can I confirm this? The latest tag I see here is two months old, but I might be looking at the wrong place: https://opendev.org/openstack/os-brick/tags
14:53 <rosmaita> sp-bmilanov: sorry, the release is ready but hasn't happened yet: https://review.opendev.org/c/openstack/releases/+/938578
14:53 <sp-bmilanov> ah, ok
14:53 <yuval> rosmaita: I need to update my 3rd party CI wiki, but I keep failing to sign in: https://wiki.openstack.org/w/index.php?title=Special:OpenIDLogin&returnto=Main_Page
14:53 <yuval> can someone check if they are able to sign in - maybe it's an ongoing issue?
14:54 <rosmaita> yuval: i will check
14:54 <jbernard> yuval: i just logged in
14:55 <jbernard> yuval: seems to be successful for me
14:55 <sp-bmilanov> yuval: same as jbernard, I can log in
14:55 <rosmaita> me too
14:56 <josephillips> i have some doubts about this: https://blueprints.launchpad.net/cinder/+spec/cinder-rbd-qos-total-iops-per-gb
14:56 <rosmaita> yuval: you were going to make the change we discussed yesterday?
14:57 <josephillips> specifically, whether i should use the same keys that are in this documentation: https://docs.openstack.org/cinder/latest/admin/capacity-based-qos.html
14:57 <rosmaita> yuval: i can edit the page now, and you can follow up with the infra team to get your login fixed
14:57 <whoami-rajat> josephillips: i think yes, you just need to set the consumer to back-end
14:57 <yuval> yes
14:58 <josephillips> yes, i understand that, but my question is whether i can use the same keys
14:58 <yuval> yes please
14:58 <whoami-rajat> josephillips: are you planning to implement it? don't we already have back-end qos for rbd?
14:59 <josephillips> whoami-rajat: yeah, i am going to implement it - the existing one doesn't do it in a capacity-based way
14:59 <rosmaita> yuval: got it
14:59 <josephillips> it's not available, whoami-rajat
14:59 <yuval> I see it
14:59 <yuval> 3rd party system: Lightbits CI
14:59 <yuval> Thanks
15:00 <whoami-rajat> josephillips: https://github.com/openstack/cinder/commit/f1bb51c25138a1aaab45b64e2934c0468b941677
15:00 <josephillips> yeah, but that is static, not depending on the volume size
15:00 <rosmaita> yuval: let's see if the ciwatch picks that up, otherwise we can reach out to smcginnis
15:01 <yuval> http://cinderstats.ivehearditbothways.com/cireport.txt - maybe this needs some time to be updated
15:01 <simondodsley> there is no per-GB qos in ceph
15:01 <yuval> I will check it
15:01 <josephillips> yeah, but that is static, not depending on the volume size, whoami-rajat
15:01 <simondodsley> is that because ceph actually supports this?
15:03 <josephillips> ceph supports QoS, simondodsley, but only in a static way; other manufacturers, for example netapp, provide a key that makes it dynamic depending on the volume size
15:03 <simondodsley> i'm not sure ceph supports per-GB qos, so you will be limited to using the front-end qos that already exists
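
For anyone following along: the capacity-based keys in the linked doc (e.g. total_iops_sec_per_gb) scale the limit by the volume's size when cinder builds the connection info, and the blueprint asks for the same arithmetic on the back-end side for rbd. A minimal sketch of the scaling rule, with illustrative values (not cinder's internal code):

    # Sketch of the per-GB scaling described in the capacity-based
    # QoS doc; the function and names here are illustrative.
    def effective_limit(per_gb_value, volume_size_gb, per_gb_min=None):
        limit = int(per_gb_value * volume_size_gb)
        if per_gb_min is not None:
            # The *_per_gb_min keys keep small volumes from ending up
            # with unusably low limits.
            limit = max(limit, per_gb_min)
        return limit

    print(effective_limit(30, 100))                # 100 GB volume -> 3000 IOPS
    print(effective_limit(30, 1, per_gb_min=500))  # floor applies -> 500 IOPS
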
15:04 <jbernard> (we are over time, can we continue this in #openstack-cinder?)
15:04 <simondodsley> sure
15:04 <jbernard> ok, thanks everyone!
15:04 <jbernard> #endmeeting
15:04 <opendevmeet> Meeting ended Wed Jan 15 15:04:35 2025 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:04 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-01-15-14.00.html
15:04 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-01-15-14.00.txt
15:04 <opendevmeet> Log:            https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-01-15-14.00.log.html
