Wednesday, 2025-02-12

02:50 *** mhen_ is now known as mhen
14:06 <sp-bmilanov> o/
14:06 <rosmaita> #startmeeting cinder
14:06 <opendevmeet> Meeting started Wed Feb 12 14:06:57 2025 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:06 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:06 <opendevmeet> The meeting name has been set to 'cinder'
14:07 <rosmaita> #topic roll call
14:07 <whoami-rajat> hi
14:07 <sp-bmilanov> hello!
14:07 <sfernand> hi
14:07 <rosmaita> o/
14:07 <akawai> o/
14:09 <rosmaita> ok, guess we can get started
14:10 <rosmaita> #topic announcements
14:10 <rosmaita> first, os-brick release for epoxy is next week
14:10 <rosmaita> so, obviously, os-brick reviews are a priority right now
14:11 <rosmaita> #link https://review.opendev.org/q/project:openstack/os-brick+status:open+branch:master
14:12 <rosmaita> if you want any hope of your patch getting in, make sure it is *not* in Merge Conflict and has +1 from Zuul
14:12 <whoami-rajat> i wanted to get this one in but it needs to be updated with a few details, commit message, release note, etc
14:12 <whoami-rajat> #link https://review.opendev.org/c/openstack/os-brick/+/939916
14:12 <msaravan> hi
14:13 <rosmaita> whoami-rajat: go ahead and add that stuff
14:13 <rosmaita> to your patch, i mean
14:13 <rosmaita> seems like we are seeing a lot more FC issues lately
14:13 <rosmaita> is that an industry trend?
14:14 <whoami-rajat> sure, the main thing holding me back was doing some more testing with this, but I'm kind of stuck there for known reasons ... though I've at least tested it once
14:15 <whoami-rajat> that's a good topic i wanted to bring up as well, are the vendor teams seeing sudden slowness in newer distros related to 1. LUN scanning 2. multipath device creation 3. multipath being ready for I/O
14:16 <rosmaita> while people are thinking about ^^ , just want to say that the next os-brick release (after the official epoxy one next week) would have to wait until after April 2
14:16 <rosmaita> (unless some kind of major regression or CVE is discovered)
14:18 <rosmaita> hmmm ... the silence is deafening
14:18 <rosmaita> whoami-rajat: you may want to ask that on the ML
14:19 <rosmaita> ok, week after next is the python-cinderclient release for Epoxy
14:19 <whoami-rajat> can do that, thanks
14:19 <rosmaita> #link https://review.opendev.org/q/project:openstack/python-cinderclient+status:open+branch:master
14:20 <rosmaita> not much in there, but the admin-editable metadata can't go in unless we get the cinder-side change merged first
14:20 <rosmaita> not sure what the status of that is
14:20 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/928794
14:21 <rosmaita> looks like sfernand caught an issue with the cinder patch
14:22 <rosmaita> ok, and finally, the Epoxy feature freeze is the week of Feb 24
14:22 <rosmaita> which is coming up fast
14:23 <whoami-rajat> I had a question about the unmerged specs for features, any plans for them?
14:24 <rosmaita> there seem to be a lot of them
14:24 <rosmaita> #link https://review.opendev.org/q/project:openstack/cinder-specs+status:open+branch:master
14:27 <rosmaita> guess that's really a question for Jon
14:28 <whoami-rajat> i see, i was just asking about my reproposal of an old spec :D
14:28 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder-specs/+/931581
14:29 <rosmaita> i think if you have the code ready, it would be ok to do a spec-freeze-exception, especially since the spec had been accepted earlier
14:30 <rosmaita> quick question for you ... in that path, the virtual_size of the image will not be set in glance?
14:30 <rosmaita> (follow up from takashi's min_disk patch discussion last week)
14:32 <rosmaita> i think we may want to continue with the min_disk patch, even if glance is currently setting virtual_size for "normal" image uploads
14:32 <whoami-rajat> +1
14:32 <rosmaita> but that's a side issue, if i'm right, please leave a comment on takashi's patch
14:32 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/804584
14:33 <rosmaita> ok, that's it for announcements
14:33 <rosmaita> no "old patch of the week" from me ... looks like we have a few of those sitting for os-brick
14:33 <rosmaita> so please take a look when you have time
14:34 <rosmaita> ok, the next item on the agenda is the Review Requests
14:34 <rosmaita> what i would like to do is give people who have revised a patch a minute to say something about how they have addressed the reviewers' concerns
14:35 <rosmaita> so for example nileshthathagar, i think you addressed my question about extending the timeouts in some dell patches?
14:36 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/804584
14:36 <rosmaita> sorry, bad paste there
14:37 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/939514
14:37 <rosmaita> i am curious what other people think about extending the retries
14:38 <rosmaita> if my calculations are correct, the retry loop could hold for a little over half an hour
14:38 <rosmaita> seems like maybe it would be better to just fail after a reasonable time?
14:39 <rosmaita> even 17 min seems more reasonable
14:39 <rosmaita> i am just worried that the issues caused by waiting that long and still no success would outweigh an earlier failure
14:39 <rosmaita> (i picked 17 min because iirc, that was the worst case in your testing)
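For context on the half-hour figure: the exact interval/backoff/retry values live in the patch under review, not in this log, but with the exponential-backoff style of retry commonly used in Cinder drivers the cumulative wait grows quickly. A minimal sketch with purely hypothetical numbers (30-second base interval, backoff rate of 2, 6 retries, matching the "6th retry" mentioned just below):

    # Hypothetical illustration only -- the real parameters are in the
    # patch being discussed, not in this meeting log.
    def total_retry_wait(interval: float, backoff_rate: float, retries: int) -> float:
        """Sum the sleeps of an exponential-backoff retry loop."""
        return sum(interval * backoff_rate ** attempt for attempt in range(retries))

    # 30 + 60 + 120 + 240 + 480 + 960 = 1890 seconds, i.e. ~31.5 minutes
    print(total_retry_wait(30, 2, 6) / 60)

With those assumed values a single delete request could hold a worker for roughly 31 minutes, which is the concern being raised here.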
14:40 <nileshthathagar> Hi Brian
14:40 <rosmaita> hello!
14:40 <nileshthathagar> yes, in the worst case it is going to retry a 6th time
14:40 <nileshthathagar> but not for every request
14:41 <nileshthathagar> it can be one in 10-15 requests or more
14:41 <rosmaita> right, i just worry that if something happens to the backend, these could pile up for a half hour
14:42 <nileshthathagar> yeah, that can happen, but it will be rare.
14:43 <rosmaita> maybe it's nothing to worry about, i just have a bad feeling
14:43 <rosmaita> ok, when you push a new patch set to update the release note, that will clear my -1
14:44 <nileshthathagar> ok thanks, i will do it
14:44 <whoami-rajat> do we know what's the reason for the wait here?
14:44 <nileshthathagar> also there are other patches which have the retry.
14:45 <whoami-rajat> from the code, i feel we are trying to delete the volume and if it has dependent snapshots, we retry?
14:45 <rosmaita> yeah, it sounds like these aren't normal cinder snapshots, some kind of backend thing
14:45 <nileshthathagar> yes, there are some active snapshots that do not get cleared from PowerMax
14:45 <rosmaita> and sometimes the backend takes a long time to clean them up
14:46 <nileshthathagar> it is taking some time, but again, it is only happening occasionally
14:47 <whoami-rajat> okay, it would be good to investigate under which circumstances those delays happen
14:48 <whoami-rajat> since the _cleanup_device_retry is only called during delete volume, the retry change is isolated to only one operation, which is good
14:48 <rosmaita> well, except, like nileshthathagar says, there are some retries added in other patches
14:48 <rosmaita> all with the same parameters, iirc
14:49 <nileshthathagar> yes
14:49 <whoami-rajat> hmm, that sounds concerning ...
14:50 <rosmaita> yes, i would feel better if there could be some kind of load test done over 30 hours with a large number of simulated users to see what happens to cinder
14:50 <rosmaita> but i would like to ride a unicorn, too
14:51 <whoami-rajat> :D
14:51 <whoami-rajat> at least analysing the logs from the storage array would give us some clue, like in this case, if the driver is sending the snapshot delete request, why does it still exist
14:51 <sfernand> wondering why not implement periodic tasks for dealing with such longer waits. The volume could be marked as deleted and a periodic task might make sure it gets cleared in the backend
14:51 <whoami-rajat> there should be some logging around that
14:52 <rosmaita> good points from both of you ... i mean, i would feel better if we merge the patch with better logging and ultimately go in the direction sfernand suggests
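sfernand's suggestion follows a pattern used elsewhere in OpenStack services: complete the user-facing delete right away and let a periodic task reconcile whatever the backend has not freed yet. A minimal sketch of that shape, assuming oslo.service's periodic_task machinery and hypothetical helper names (_pending_cleanups, _cleanup_device) -- not an actual Cinder or PowerMax change:

    from oslo_log import log as logging
    from oslo_service import periodic_task

    LOG = logging.getLogger(__name__)


    class CleanupTasks(periodic_task.PeriodicTasks):
        """Retry backend cleanup in the background instead of blocking delete."""

        def __init__(self, conf):
            super().__init__(conf)
            self._pending_cleanups = []  # devices the backend has not freed yet

        @periodic_task.periodic_task(spacing=300)
        def _purge_stale_devices(self, context):
            still_pending = []
            for device in self._pending_cleanups:
                try:
                    self._cleanup_device(device)  # hypothetical backend call
                except Exception:
                    LOG.warning("Backend cleanup of %s still failing, will retry",
                                device)
                    still_pending.append(device)
            self._pending_cleanups = still_pending

The trade-off raised in the discussion still applies: the API call returns quickly, but the stale backend objects need their own visibility (logging, metrics) so they do not pile up silently.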
14:54 <rosmaita> ok, moving on ... would anyone else on the review request list like to make a statement?
14:55 <nileshthathagar> As of now it is only happening sometimes. Will do a load test.
14:55 <nileshthathagar> But it will take time
14:55 <rosmaita> yeah, i understand
14:56 <rosmaita> i think the worry here is that this fix might cause problems later on
14:56 <rosmaita> but, it might make sense to do the fix now and proactively try to figure out how bad the later problems will be
14:57 <nileshthathagar> yes
14:57 <rosmaita> and then address them before users hit them
14:57 <kpdev> @rosmaita: we need approval on the storpool PR https://review.opendev.org/c/openstack/cinder/+/933078 , you have reviewed it, so someone else from core, please review
14:58 <rosmaita> it's a quick review, and i think a safe patch, so someone please take a look
14:59 <rosmaita> definitely important to get that one in soon
14:59 <nileshthathagar> will definitely do that. but in case some customer hits the issue, we will have some kind of fix for them
14:59 <inori> we need a review on the Fujitsu Eternus patch https://review.opendev.org/c/openstack/cinder/+/907126 , it has received comments in the past, and we've replied to them.
14:59 <rosmaita> kpdev: are all your os-brick changes merged?
14:59 <kpdev> yes
14:59 <rosmaita> great
14:59 <rosmaita> inori: ack
15:00 <inori> Thank you rosmaita
15:00 <rosmaita> looks like there are a bunch of child patches, so it would be good for us to get that one out of the way
15:00 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/907126
15:01 <rosmaita> just noticed that we are over time
15:01 <rosmaita> thanks everyone for attending!
15:01 <rosmaita> #endmeeting
15:01 <opendevmeet> Meeting ended Wed Feb 12 15:01:32 2025 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:01 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-02-12-14.06.html
15:01 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-02-12-14.06.txt
15:01 <opendevmeet> Log:            https://meetings.opendev.org/meetings/cinder/2025/cinder.2025-02-12-14.06.log.html
15:02 <whoami-rajat> thanks!
15:06 <nileshthathagar> thanks
