14:01:15 <jbernard> #startmeeting cinder
14:01:15 <opendevmeet> Meeting started Wed Nov 20 14:01:15 2024 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:15 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:15 <opendevmeet> The meeting name has been set to 'cinder'
14:01:34 <sp-bmilanov> o/
14:01:36 <tosky> o/
14:01:40 <akawai> o/
14:01:41 <whoami-rajat> Hi
14:01:48 <jhorstmann> o/
14:01:48 <rosmaita> o/
14:02:05 <jbernard> o/
14:04:40 <msaravan> Hi
14:05:06 <manudinesh_> Hi
14:06:41 <jbernard> hello everyone
14:06:48 <jbernard> #topic announcements
14:06:54 <jbernard> i don't really have any :)
14:07:11 <jbernard> it's the week after M1
14:07:24 <jbernard> i did update the wiki to include our PTG summary for epoxy
14:07:53 <jbernard> otherwise, reviews, communication, emails, etc
14:08:16 <jbernard> let's pick up where we left off, with sp-bmilanov
14:08:31 <jbernard> #topic multi-transport protocol volume driver support
14:09:22 <jbernard> here's the link for our working etherpad
14:09:25 <jbernard> #link https://etherpad.opendev.org/p/cinder-epoxy-meetings
14:09:41 <sp-bmilanov> hi, the main use case for this would be live instance migration between transport protocols.. currently, it cannot be done with the driver-per-protocol approach, I think
14:10:19 <jbernard> simondodsley: your last message from last week's meeting was a suggestion to split the driver up
14:10:34 <jbernard> whoami-rajat: curious about your stance
14:10:41 <jbernard> log from last week: https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.log.html
14:11:26 <jbernard> sp-bmilanov: have you had a chance to evaluate the nova flavors suggestion?
14:12:04 <sp-bmilanov> using nova flavors does not help with the use case, no
14:14:28 <jbernard> whoami-rajat: do you still object to this approach?
14:15:04 <whoami-rajat> i didn't get much time to look into this but what i was curious about is with multiple backends set up (each having its own cinder-volume service), how do we determine which service/driver to call for the attachment update call
14:16:00 <jbernard> i think you could call either one, as they both support all available protocols
14:17:41 <whoami-rajat> what i mean is, if we set up one backend for netapp iscsi and one for netapp FC, and we call self.driver.initialize_connection from the manager, how do we determine which service gets called
14:17:45 <whoami-rajat> that i wanted to figure out
14:18:03 <whoami-rajat> probably i can take it as an action item and update my comment on the patch
14:19:45 <jbernard> ok
14:20:46 <jbernard> #topic antelope has become unmaintained (2023.1) and the stable branch is no longer present
14:21:02 <jbernard> im not sure there's much to add, just a heads up
14:21:25 <sp-bmilanov> jbernard: can I have 2 more minutes please
14:21:52 <tosky> about that: after the dropping of stable/2023.1 (now unmaintained), we need to merge https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/934275 to unblock cinder-tempest-plugin
14:22:01 <tosky> jbernard: that's urgent ^^
14:22:31 <sp-bmilanov> right, I can ping whoami-rajat in #openstack-cinder later
14:23:03 <jbernard> there is also https://review.opendev.org/c/openstack/cinder/+/935378
14:23:03 <whoami-rajat> sp-bmilanov, yeah sorry for holding this, will try to resolve the query by this week
14:23:18 <jbernard> tosky: will take a look after the meeting
14:24:34 <jbernard> #topic rally ci job
14:24:42 <jbernard> https://zuul.opendev.org/t/openstack/builds?job_name=cinder-rally-task&project=openstack%2Fcinder&branch=master&result=SUCCESS&skip=0
14:24:59 <jbernard> jens brought up that the rally job has been failing for some time
14:25:10 <jbernard> do we want to fix this, or let it go to save resources?
14:27:07 <whoami-rajat> with the link above, i only see the job passing
14:27:29 <whoami-rajat> okay i removed the SUCCESS filter
14:27:32 <tosky> yep
14:27:59 <jbernard> 2024-11-18 15:57:21     frickler        last success for cinder-rally-task was half a year ago, if nobody wants to or is able to fix it, maybe stop running it and wasting CI
14:28:01 <jbernard> resources?
14:28:24 <whoami-rajat> checked one of the tests, i think it's related to the cinderv3 vs block-storage endpoint change
14:28:26 <whoami-rajat> Unexpected error: Validation error: There is no 'cinderv3' service in your environment
14:28:44 <whoami-rajat> #link https://68dc1c67d2d925924853-d78b0c94e26e635eac828273888f290f.ssl.cf2.rackcdn.com/935615/2/check/cinder-rally-task/e5228ed/results/report.html#/CinderVolumes.create_and_attach_volume
14:29:33 <whoami-rajat> maybe a change here is needed
14:29:35 <whoami-rajat> #link https://github.com/openstack/rally-openstack/blob/master/rally-jobs/cinder.yaml#L40
14:30:13 <jbernard> that might do it
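(Editor's note, for context on the endpoint change discussed above: a minimal keystoneauth1 sketch, with placeholder credentials, showing how the Block Storage endpoint is resolved via the official 'block-storage' service type. A catalog that no longer registers the legacy 'cinderv3' name fails this lookup with the same kind of error rally reported.)

```python
# Illustrative sketch only: resolve the Block Storage endpoint by its
# official service type ('block-storage') rather than the legacy
# 'cinderv3' name. All auth values below are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url='http://controller/identity/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

# Raises EndpointNotFound if the service type is absent from the
# catalog -- the condition rally surfaced as
# "There is no 'cinderv3' service in your environment".
endpoint = sess.get_endpoint(service_type='block-storage',
                             interface='public')
print(endpoint)
```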
14:30:18 <whoami-rajat> do we know what kind of coverage the rally job provides us?
14:30:26 <jbernard> so we /do/ want to keep rally, it sounds like
14:30:32 <whoami-rajat> just to see if it's not overlapping with our existing tests
14:30:39 <jbernard> or maybe that's undecided
14:30:54 <jbernard> im not sure, im wondering the same thing - how much of it overlaps with tempest
14:30:56 <tosky> it should be more performance testing, which tempest doesn't cover
14:31:26 <tosky> it can also run tempest tests (or it used to, I haven't touched it in ages), but I'm not sure it's used for that purpose inside that job
14:31:26 <jbernard> ahh, i see
14:31:41 <whoami-rajat> tosky, do you mean concurrency testing? like creating 5 volumes simultaneously
14:32:33 <tosky> whoami-rajat: within a specific deadline
14:32:54 <whoami-rajat> okay, i remember working on something similar here
14:32:57 <whoami-rajat> https://github.com/openstack/cinder-tempest-plugin/blob/master/cinder_tempest_plugin/api/volume/test_multiple_volume_from_resource.py
14:33:19 <whoami-rajat> but it would be worth checking the current coverage of rally to determine if we need the job
14:34:24 <tosky> whoami-rajat: but your test didn't consider the execution time, which is iirc what rally is meant for
14:34:35 <tosky> agreed that we should check what we are really testing
14:34:56 <whoami-rajat> i see your point
14:37:22 <jbernard> ok, i will post a patch to rally-openstack to s/cinderv3/block-storage
14:37:45 <jbernard> hopefully that will unblock things
14:38:24 <tosky> not sure rally-openstack is branched (probably it is), that's probably the only thing to check
14:38:41 <jbernard> tosky: good to note
14:38:47 <jbernard> tosky: thanks
14:39:15 <jbernard> #topic upgrade db schema issues
14:39:37 <jbernard> zigo noticed a db migration failure when upgrading to bobcat or caracal
14:41:17 <jbernard> not sure if anyone has anything to add, i just wanted to raise it - we'll need to push some kind of fix for that
14:41:47 <jbernard> there is a bug (with patch): https://bugs.launchpad.net/cinder/+bug/2070475
14:42:14 <jbernard> but the patch doesn't cover all cases, according to zigo's testing
14:42:40 <jbernard> where non-deleted volumes can still have use_quota=NULL
14:42:56 <jbernard> something to be aware of
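(Editor's note: a hedged sketch of the kind of data repair under discussion, not the patch attached to bug 2070475. It backfills use_quota on non-deleted rows left at NULL; the connection string is a placeholder, and treating every remaining NULL as a quota-consuming volume is an assumption, not the bug's actual logic.)

```python
# Hedged sketch: backfill use_quota where the original migration left
# NULLs on non-deleted volumes. DSN is a placeholder; defaulting the
# remaining NULLs to true is an assumption for illustration.
from sqlalchemy import create_engine, text

engine = create_engine('mysql+pymysql://cinder:secret@localhost/cinder')
with engine.begin() as conn:
    result = conn.execute(text(
        'UPDATE volumes SET use_quota = 1 '
        'WHERE use_quota IS NULL AND deleted = 0'
    ))
    print(f'backfilled {result.rowcount} rows')
```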
14:44:47 <jbernard> we can open things up for discussion now, there are review requests but I won't list them all here.  If you have review cycles, please consider those
14:44:53 <jbernard> #topic open discussion
14:45:36 <manudinesh> Hi All, can we take up this one: https://bugs.launchpad.net/cinder/+bug/2084117
14:47:03 <manudinesh> We see an error while trying to deploy a vSCSI-based VM [it tries to create an InitiatorGroup; we were told vSCSI-based VMs should use the underlying host's IG mapping]... Can someone clarify? Thanks
14:47:29 <jbernard> this looks like a netapp connection issue, is anyone from NetApp around today?
14:47:29 <manudinesh> This is in NetApp driver
14:48:26 <whoami-rajat> sp-bmilanov, hey, i was able to get my answer. i had a proposal in mind to create multiple backends (one for each protocol) and use them depending on what the compute node supports, but it looks like, based on the host/cluster name, we decide which cinder-volume service the rpc call is redirected to
14:48:31 <manudinesh> Can you please help me with a NetApp point-of-contact email ID... I will try to reach out directly... Thanks
14:48:52 <whoami-rajat> basically if the volume was created in an iSCSI backend, all volume operations will be redirected to the same backend
14:49:25 <whoami-rajat> so even though i don't like the idea of one backend class supporting all protocols, maybe we can make this exception for the storpool driver
14:49:43 <whoami-rajat> since we don't have any mechanism to dynamically redirect calls based on the storage protocol in the connector properties
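(Editor's note: a self-contained illustration of the routing whoami-rajat describes. The helper name and exact topic string are illustrative rather than cinder's actual API, though cinder volume hosts do follow the 'host@backend#pool' convention.)

```python
# Sketch of host-based RPC routing: the topic is derived from the
# volume's host/cluster name fixed at create time, so every later call
# (initialize_connection, attachment_update, ...) lands on the same
# backend regardless of transport protocol. Names here are illustrative.

def topic_for(volume_host: str) -> str:
    # volume_host looks like 'node@backend#pool'; the queue is keyed on
    # the 'node@backend' part, so one backend == one service queue.
    backend = volume_host.partition('#')[0]
    return f'cinder-volume.{backend}'

print(topic_for('node1@netapp-iscsi#pool0'))  # cinder-volume.node1@netapp-iscsi
print(topic_for('node1@netapp-fc#pool0'))     # cinder-volume.node1@netapp-fc
```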
14:49:46 <jbernard> whoami-rajat: do you know if we have a up-to-date contact list for drivers?
14:50:11 <rosmaita> can probably just see who's pushed patches recently
14:50:17 <whoami-rajat> jbernard, i don't think it has been updated in a while but msaravan is the go to person for netapp issues
14:50:55 <rosmaita> https://review.opendev.org/q/project:openstack/cinder+dir:cinder/volume/drivers/netapp
14:51:03 <msaravan> Sorry @manudinesh : I am looking at the bug..
14:51:35 <manudinesh> sure msaravan..Thanks
14:51:43 <sp-bmilanov> whoami-rajat: ack
14:52:01 <msaravan> Can we sync up sometime this week or early next week.. we understood the igroup issue, need some more information
14:52:18 <manudinesh> sure, we can ..Thanks
14:52:44 <whoami-rajat> sp-bmilanov, can you help me with your patch link, will update my comments
14:53:13 <jhorstmann> I have pushed a first version of the dm-clone driver spec: https://review.opendev.org/c/openstack/cinder-specs/+/935347  It is still missing the specifics about snapshots, but some feedback would be welcome before I make things more complicated by adding snapshots
14:53:58 <sp-bmilanov> whoami-rajat: yep, https://review.opendev.org/c/openstack/cinder/+/847536
14:54:35 <jbernard> jhorstmann: i added it to the etherpad, will take a look, thanks
14:55:09 <sp-bmilanov> whoami-rajat: I will also namespace the newly-added options, there's still a comment for that
14:55:35 <whoami-rajat> jbernard, the spec freeze is M2 right?
14:55:53 <jbernard> whoami-rajat: that's what i was thinking
14:56:06 <jbernard> whoami-rajat: but i haven't submitted it to be official yet
14:56:26 <whoami-rajat> so epoxy-2 is Jan 9
14:56:29 <jbernard> M2 is first week of Jan
14:56:30 <whoami-rajat> #link https://releases.openstack.org/epoxy/schedule.html#e-2
14:56:33 <jbernard> https://releases.openstack.org/epoxy/schedule.html
14:56:49 <whoami-rajat> we have time but a lot of folks will be out next month
14:57:14 <whoami-rajat> would be useful to have some feedback on jhorstmann's spec before the end-of-year break
14:57:19 <jbernard> jhorstmann: you may want to start on the snapshot details
14:57:28 <whoami-rajat> though, i will be around
14:57:41 <jhorstmann> M2 is also the new driver deadline, right?
14:57:56 <whoami-rajat> jhorstmann, yeah right
14:57:57 <jbernard> hmm, it's technically up to us
14:58:12 <whoami-rajat> :D
14:58:38 <whoami-rajat> Cinder team makes exceptions for dedicated contributors for sure
14:58:46 <jbernard> let's see how the spec shapes up
14:58:47 <jhorstmann> jbernard, I will start with the snapshot part
14:59:17 <jbernard> if things look okay, we can make some adjustments
14:59:27 <jbernard> k, we're nearly out of time
14:59:30 <jbernard> last call
15:00:01 <jbernard> thanks everyone!
15:00:05 <jbernard> #endmeeting