14:01:09 <jbernard> #startmeeting cinder
14:01:09 <opendevmeet> Meeting started Wed Dec  4 14:01:09 2024 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:09 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:09 <opendevmeet> The meeting name has been set to 'cinder'
14:01:13 <jbernard> #topic roll call
14:01:15 <jbernard> o/
14:01:18 <whoami-rajat> hey
14:01:29 <rosmaita> o/
14:02:07 <jbernard> this looks to be a quick one, potentially
14:02:08 <eharney> o7
14:02:10 <nileshthathagar> o/
14:02:29 <tosky_> o/
14:02:56 <msaravan> hi
14:03:49 <jbernard> hello everyone!
14:04:05 <jbernard> we're back :)
14:04:11 <harsh_> Hello :)
14:04:19 <jbernard> #link https://etherpad.opendev.org/p/cinder-epoxy-meetings
14:04:26 <jbernard> ^ etherpad for these meetings
14:04:35 <jbernard> the agenda is quite light
14:04:49 <jbernard> #topic announcements
14:04:56 <jbernard> not much really,
14:05:03 <jbernard> we're working our way through M1
14:05:19 <jbernard> #link https://releases.openstack.org/epoxy/schedule.html
14:05:26 <jbernard> M2 is early January
14:05:55 <jbernard> which will come much sooner than anticipated, based on the last few weeks
14:06:04 <akawai> o/
14:06:15 <jbernard> I was out last week, I have much to catch up on
14:06:45 <jbernard> if you've pinged me about reviews or questions, I'm working my way through my backlog and I will get to it
14:06:52 <whoami-rajat> it would be useful to review and merge the spec before year end so the author can work on implementation before M2
14:07:00 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder-specs/+/935347/6/
14:07:02 <jbernard> whoami-rajat: I agree
14:08:16 <jbernard> if anyone is willing to review the dm-clone spec ^ that would be very helpful
14:08:56 <whoami-rajat> I'm willing to be one of the reviewers
14:09:04 <eharney> i'd like to look at the dm-clone spec but it'll probably be a few days before i can get to it
14:09:05 <jbernard> whoami-rajat: +1 thank you
14:09:16 <jbernard> eharney: sounds good
14:09:16 <whoami-rajat> thanks eharney and jbernard
14:09:57 <jbernard> whoami-rajat, eharney: can we plan to have a consensus for next week's meeting?
14:10:13 <jbernard> that gives a week to process and review
14:10:16 <harsh_> This spec review is new for me. I work on the cinder driver for IBM Storwize. I will try to get more details on this spec review.
14:10:31 <jbernard> harsh_: great, reach out if you have questions
14:10:51 <harsh_> jbernard , thanks :)
14:11:16 <whoami-rajat> jbernard, sure, i can provide a first round of review before next meeting
14:12:23 <jbernard> whoami-rajat: thanks, im expecting december to go quickly; if the spec looks good I think Jan will need as much time as we can provide
14:13:01 <jbernard> that's all for announcements from me
14:13:08 <jbernard> #topic issues / feedback
14:13:18 <whoami-rajat> +1, I expect my comments to be minor since we've already seen the demo and design at PTG
14:13:35 <jbernard> generally, how are things? aside from review bandwidth, is anyone blocked on anything? anything I can help with?
14:14:15 <harsh_> I have a query with one of the review patches
14:14:22 <jbernard> harsh_: sure
14:14:28 <harsh_> https://review.opendev.org/c/openstack/cinder/+/929777
14:15:04 <harsh_> I sent out an email for this. Subject: OpenStack Cinder - Volume-ID in cinder DB
14:16:01 <harsh_> I am trying to use the provider_id field to map the cinder volume object to the backend volume ID. I am doing this for clone volumes, as the two are different in my case.
14:17:14 <harsh_> Because the IDs are different, I have to check the provider_id field for every volume operation that I can perform on a clone volume. For this I suggested a change in the file: cinder/cinder/objects/volume.py
14:18:00 <jbernard> harsh_: i need to read the thread more carefully, but at first glance I agree with Gorka's last response
14:18:17 <harsh_> I agree too.
14:18:30 <jbernard> harsh_: changing the behaviour of a base object is risky, and there are a few proposed alternatives in his mail
14:18:31 <harsh_> So I am looking for an alternative for this.
14:19:27 <harsh_> I tried to add an override function in the cinder/volume/drivers/ibm/storwize_svc/__init__.py file, but that didn't override the _volume_name() function.
14:20:21 <nileshthathagar> Hi All,
14:20:36 <nileshthathagar> I have one query
14:20:46 <harsh_> If anyone has any suggestions please do reply on the email thread.
14:22:06 <nileshthathagar> I’m encountering an issue with PowerMax related to multipathing. Specifically:
14:22:16 <nileshthathagar> 1. After an HBA rescan, devices under /dev/disk/by-id are appearing with a slight delay.
14:22:30 <nileshthathagar> 2. At times, the device-mapper (dm) ID does not get assigned immediately.
14:22:38 <nileshthathagar> To address this, I’ve made the following changes:
14:22:44 <nileshthathagar> 1. Updated the existing retry mechanism in linuxscsi.wait_for_rw from the default retry value to retry=5 and interval=2, to handle delays with /dev/disk/by-id.
14:22:49 <nileshthathagar> 2. Implemented a retry mechanism for the device-mapper IDs with the same settings: retry=5 and interval=2.
14:22:52 <jbernard> harsh_: if you've implemented one of gorka's alternatives and it's not working, you could post the link and ask for guidance.  At the moment, the last message from gorka has 3 alternatives
14:23:13 <nileshthathagar> These changes have been applied to the linuxscsi.py file.
14:23:20 <nileshthathagar> Please provide your feedback during code review on whether this approach is acceptable.
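[editor's note: an illustrative sketch of the retry approach described above — polling for a /dev/disk/by-id symlink instead of assuming it exists immediately after an HBA rescan. The retries=5 / interval=2 values match those proposed above; wait_for_path is a hypothetical helper, not existing os-brick code.]

```python
# Hypothetical helper: poll for a device symlink with a bounded retry,
# since /dev/disk/by-id entries can appear with a slight delay.
import os
import time


def wait_for_path(path, retries=5, interval=2):
    """Return True once `path` exists, polling up to `retries` times."""
    for attempt in range(retries):
        if os.path.exists(path):
            return True
        if attempt < retries - 1:
            time.sleep(interval)
    return False
```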
14:23:29 <whoami-rajat> harsh_, IIUC, the only places you will need to handle the mapping are volume create operations like create volume, create cloned volume, create volume from snapshot, etc. Do you need the mapping anywhere else apart from these?
14:24:50 <jbernard> nileshthathagar: sure, you'll want to file a bug and reference that in your patch
14:24:53 <harsh_> Yes, two of them are what I am using and they work, but that's a lengthy change. I was looking to make a change that could get the volume_id from the provider_id field just once during driver initialization. I am also working on the decorator alternative that Gorka suggested. I will reply to the thread if that doesn't work out.
14:26:11 <harsh_> whoami-rajat, I need the mapping for the extend operation, group operations, and attach-detach as well.
14:26:35 <nileshthathagar> jbernard thanks :) So is it ok to change the os-brick code?
14:27:11 <jbernard> nileshthathagar: it's certainly okay to propose a change and present your logic
14:27:31 <jbernard> nileshthathagar: it should be reviewed and possibly merged if there is agreement
14:27:50 <nileshthathagar> great thanks
14:28:43 <whoami-rajat> nileshthathagar, I'm not sure where you are facing the issue, firstly we scan for device symlinks in /dev/disk/by-path and not in by-id, secondly there is already a retry mechanism for it
14:29:01 <whoami-rajat> in fact we even wait for the multipath device to show up, not exactly sure but probably 10 seconds
14:31:31 <nileshthathagar> whoami-rajat: yes, the device is appearing under /dev/disk/by-path, but during multipath discovery it takes time to show up under /dev/disk/by-id
14:32:09 <nileshthathagar> I will message you the whole issue if you need more details
14:32:54 <whoami-rajat> nileshthathagar, can you try this patch? it's specific to the issue when multipath device takes time to become writable https://review.opendev.org/c/openstack/os-brick/+/920516
14:33:27 <nileshthathagar> sure will check it. thanks
14:33:40 <whoami-rajat> harsh_, ack, would be useful to have an upstream patch with your solution and ask on the thread for further assistance
14:34:17 <harsh_> whoami-rajat, sure, I will upload a patch with my solution.
14:34:24 <harsh_> Thanks :)
14:34:42 <whoami-rajat> thanks!
14:36:22 <jbernard> that's all for our agenda today,
14:36:32 <jbernard> #topic open discussion
14:36:35 <jbernard> anything else?
14:37:02 <nileshthathagar> I am good thanks :)
14:38:44 * jungleboyj sneaks in way too late.
14:39:32 <jbernard> jungleboyj: perfect timing :)
14:39:42 <jungleboyj> :-)
14:40:04 <jbernard> ok, thanks everyone!
14:40:07 <jbernard> #endmeeting