14:01:15 #startmeeting cinder
14:01:15 Meeting started Wed Nov 20 14:01:15 2024 UTC and is due to finish in 60 minutes. The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:15 The meeting name has been set to 'cinder'
14:01:34 o/
14:01:36 o/
14:01:40 o/
14:01:41 Hi
14:01:48 o/
14:01:48 o/
14:02:05 o/
14:04:40 Hi
14:05:06 Hi
14:06:41 hello everyone
14:06:48 #topic announcements
14:06:54 i don't really have any :)
14:07:11 it's the week after M1
14:07:24 i did update the wiki to include our PTG summary for epoxy
14:07:53 otherwise, reviews, communication, emails, etc
14:08:16 let's pick up where we left off, with sp-bmilanov
14:08:31 #topic multi-transport protocol volume driver support
14:09:22 here's the link for our working etherpad
14:09:25 #link https://etherpad.opendev.org/p/cinder-epoxy-meetings
14:09:41 hi, the main use case for this would be live instance migration between transport protocols.. currently, it cannot be done with the driver-per-protocol approach, I think
14:10:19 simondodsley: your last message from last week's meeting was a suggestion to split the driver up
14:10:34 whoami-rajat: curious about your stance
14:10:41 log from last week: https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.log.html
14:11:26 sp-bmilanov: have you had a chance to evaluate the nova flavors suggestion?
14:12:04 using nova flavors does not help with the use case, no
14:14:28 whoami-rajat: do you still object to this approach?
14:15:04 i didn't get much time to look into this, but what i was curious about is: with multiple backends set up (each having its own cinder-volume service), how do we determine which service/driver to call for the attachment update call
14:16:00 i think you could call either one, as they both support all available protocols
14:17:41 what i mean is, if we set up one backend for netapp iscsi and one for netapp FC, and we call self.driver.initialize_connection from the manager, how do we determine which service gets called
14:17:45 that's what i wanted to figure out
14:18:03 probably i can take it as an action item and update my comment on the patch
14:19:45 ok
14:20:46 #topic antelope has become unmaintained (2023.1) and the stable branch is no longer present
14:21:02 im not sure there's much to add, just a heads up
14:21:25 jbernard: can I have 2 more minutes please
14:21:52 about that: after the dropping of stable/2023.1 (now unmaintained), we need to merge https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/934275 to unblock cinder-tempest-plugin
14:22:01 jbernard: that's urgent ^^
14:22:31 right, I can ping whoami-rajat in #openstack-cinder later
14:23:03 there is also https://review.opendev.org/c/openstack/cinder/+/935378
14:23:03 sp-bmilanov, yeah sorry for holding this, will try to resolve the query by this week
14:23:18 tosky: will take a look after the meeting
14:24:34 #topic rally ci job
14:24:42 https://zuul.opendev.org/t/openstack/builds?job_name=cinder-rally-task&project=openstack%2Fcinder&branch=master&result=SUCCESS&skip=0
14:24:59 jens brought up that the rally job has been failing for some time
14:25:10 do we want to fix this, or let it go to save resources?
14:27:07 with the link above, i only see the job passing
14:27:29 okay, i removed the SUCCESS filter
14:27:32 yep
14:27:59 2024-11-18 15:57:21 frickler last success for cinder-rally-task was half a year ago, if nobody wants to or is able to fix it, maybe stop running it and wasting CI
14:28:01 resources?
14:28:24 checked one of the tests, i think it's related to the cinderv3 vs block-storage endpoint change
14:28:26 Unexpected error: Validation error: There is no 'cinderv3' service in your environment
14:28:44 #link https://68dc1c67d2d925924853-d78b0c94e26e635eac828273888f290f.ssl.cf2.rackcdn.com/935615/2/check/cinder-rally-task/e5228ed/results/report.html#/CinderVolumes.create_and_attach_volume
14:29:33 maybe a change here is needed
14:29:35 #link https://github.com/openstack/rally-openstack/blob/master/rally-jobs/cinder.yaml#L40
14:30:13 that might do it
14:30:18 do we know what kind of coverage the rally job provides us?
14:30:26 so we /do/ want to keep rally, it sounds like
14:30:32 just to see if it's not overlapping with our existing tests
14:30:39 or maybe that's undecided
14:30:54 im not sure, im wondering the same thing - how much of it overlaps with tempest
14:30:56 it should be more performance testing, which tempest doesn't cover
14:31:26 it can also run tempest tests (or it used to, I haven't touched it in ages), but I'm not sure it's used for that purpose inside that job
14:31:26 ahh, i see
14:31:41 tosky, do you mean concurrency testing? like creating 5 volumes simultaneously
14:32:33 whoami-rajat: within some specific deadline
14:32:54 okay, i remember working on something similar here
14:32:57 https://github.com/openstack/cinder-tempest-plugin/blob/master/cinder_tempest_plugin/api/volume/test_multiple_volume_from_resource.py
14:33:19 but it would be worth checking the current coverage of rally to determine if we need the job
14:34:24 whoami-rajat: but your test didn't consider the execution time, which is iirc what rally is meant for
14:34:35 agreed that we should check what we are really testing
14:34:56 i see your point
14:37:22 ok, i will post a patch to rally-openstack to s/cinderv3/block-storage
14:37:45 hopefully that will unblock things
14:38:24 not sure rally-openstack is branched (probably it is), that's probably the only thing to check
14:38:41 tosky: good to note
14:38:47 tosky: thanks
14:39:15 #topic upgrade db schema issues
14:39:37 zigo noticed a db migration failure when upgrading to bobcat or caracal
14:41:17 not sure if anyone has anything to add, i just wanted to raise it - we'll need to push some kind of fix for that
14:41:47 there is a bug (with patch): https://bugs.launchpad.net/cinder/+bug/2070475
14:42:14 but the patch doesn't cover all cases, according to zigo's testing
14:42:40 where non-deleted volumes can still have use_quota=NULL
14:42:56 something to be aware of
14:44:47 we can open things up for discussion now, there are review requests but I won't list them all here. If you have review cycles, please consider those
14:44:53 #topic open discussion
14:45:36 Hi All, can we take up this one: https://bugs.launchpad.net/cinder/+bug/2084117
14:47:03 We see an error while trying to deploy a vSCSI-based VM [it tries to create an InitiatorGroup; we were told vSCSI-based VMs should use the underlying host's IG mapping]... Can someone clarify... Thanks
14:47:29 this looks like a netapp connection issue, is anyone from NetApp around today?
14:47:29 This is in the NetApp driver
14:48:26 sp-bmilanov, hey, i was able to get my answer, so i had a proposal in mind to create multiple backends (one for each protocol) and use them depending on what the compute node supports, but it looks like we decide which cinder-volume service the rpc call is redirected to based on the host/cluster name
14:48:31 Can you please help me with a NetApp point-of-contact mail-id... I will try to reach out directly... Thanks
14:48:52 basically, if the volume was created in an iSCSI backend, all volume operations will redirect to the same backend
14:49:25 so even though i don't like the idea of one backend class supporting all protocols, maybe we can make this exception for the storpool driver
14:49:43 since we don't have any mechanism to dynamically redirect calls based on the storage protocol in the connector properties
14:49:46 whoami-rajat: do you know if we have an up-to-date contact list for drivers?
14:50:11 can probably just see who's pushed patches recently
14:50:17 jbernard, i don't think it has been updated in a while, but msaravan is the go-to person for netapp issues
14:50:55 https://review.opendev.org/q/project:openstack/cinder+dir:cinder/volume/drivers/netapp
14:51:03 Sorry @manudinesh : I am looking at the bug..
14:51:35 sure msaravan.. Thanks
14:51:43 whoami-rajat: ack
14:52:01 Can we sync up sometime this week or early next week.. we understood the igroup issue, need some more information
14:52:18 sure, we can.. Thanks
14:52:44 sp-bmilanov, can you help me with your patch link, will update my comments
14:53:13 I have pushed a first version of the dm-clone driver spec: https://review.opendev.org/c/openstack/cinder-specs/+/935347 It is still missing the specifics about snapshots, but some feedback would be welcome before I make things more complicated by adding snapshots
14:53:58 whoami-rajat: yep, https://review.opendev.org/c/openstack/cinder/+/847536
14:54:35 jhorstmann: i added it to the etherpad, will take a look, thanks
14:55:09 whoami-rajat: I will also namespace the newly-added options, there's still a comment for that
14:55:35 jbernard, the spec freeze is M2, right?
14:55:53 whoami-rajat: that's what i was thinking
14:56:06 whoami-rajat: but i haven't submitted it to be official yet
14:56:26 so epoxy-2 is Jan 9
14:56:29 M2 is the first week of Jan
14:56:30 #link https://releases.openstack.org/epoxy/schedule.html#e-2
14:56:33 https://releases.openstack.org/epoxy/schedule.html
14:56:49 we have time, but a lot of folks will be out next month
14:57:14 would be useful to have some feedback on jhorstmann's spec before the end-of-year break
14:57:19 jhorstmann: you may want to start on the snapshot details
14:57:28 though, i will be around
14:57:41 M2 is also the new driver deadline, right?
14:57:56 jhorstmann, yeah right
14:57:57 hmm, it's technically up to us
14:58:12 :D
14:58:38 Cinder team makes exceptions for dedicated contributors for sure
14:58:46 let's see how the spec shapes up
14:58:47 jbernard, I will start with the snapshot part
14:59:17 if things look okay, we can make some adjustments
14:59:27 k, we're nearly out of time
14:59:30 last call
15:00:01 thanks everyone!
15:00:05 #endmeeting
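
Following up on the upgrade db schema topic above: a minimal sketch, assuming direct database access, of how an operator might check whether non-deleted volumes still carry use_quota=NULL (the case zigo hit that the patch on bug 2070475 does not fully cover). This is only an illustration, not the fix under review; the connection URL is a placeholder, and the deleted = 0 check assumes Cinder's usual soft-delete convention.

```python
# Hedged sketch: count non-deleted volumes whose use_quota is still NULL,
# the condition reported to break the bobcat/caracal schema migration.
# The connection URL is a placeholder for a real cinder database.
import sqlalchemy as sa

engine = sa.create_engine("mysql+pymysql://cinder:secret@localhost/cinder")

with engine.connect() as conn:
    count = conn.execute(sa.text(
        "SELECT COUNT(*) FROM volumes "
        # deleted = 0 means not soft-deleted (assumed convention)
        "WHERE deleted = 0 AND use_quota IS NULL"
    )).scalar()

print(f"non-deleted volumes with use_quota = NULL: {count}")
```

A non-zero count would suggest the deployment still needs the remaining backfill before attempting the upgrade.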