Wednesday, 2024-11-20

02:27 *** mhen_ is now known as mhen
14:01 <jbernard> #startmeeting cinder
14:01 <opendevmeet> Meeting started Wed Nov 20 14:01:15 2024 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01 <opendevmeet> The meeting name has been set to 'cinder'
14:01 <sp-bmilanov> o/
14:01 <tosky> o/
14:01 <akawai> o/
14:01 <whoami-rajat> Hi
14:01 <jhorstmann> o/
14:01 <rosmaita> o/
14:02 <jbernard> o/
14:04 <msaravan> Hi
14:05 <manudinesh_> Hi
14:06 <jbernard> hello everyone
14:06 <jbernard> #topic announcements
14:06 <jbernard> i don't really have any :)
14:07 <jbernard> it's the week after M1
14:07 <jbernard> i did update the wiki to include our PTG summary for epoxy
14:07 <jbernard> otherwise, reviews, communication, emails, etc
14:08 <jbernard> let's pick up where we left off, with sp-bmilanov
14:08 <jbernard> #topic multi-transport protocol volume driver support
14:09 <jbernard> here's the link for our working etherpad
14:09 <jbernard> #link https://etherpad.opendev.org/p/cinder-epoxy-meetings
14:09 <sp-bmilanov> hi, the main use case for this would be live instance migration between transport protocols.. currently, it cannot be done with the driver-per-protocol approach, I think
14:10 <jbernard> simondodsley: your last message from last week's meeting was a suggestion to split the driver up
14:10 <jbernard> whoami-rajat: curious about your stance
14:10 <jbernard> log from last week: https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.log.html
14:11 <jbernard> sp-bmilanov: have you had a chance to evaluate the nova flavors suggestion?
14:12 <sp-bmilanov> using nova flavors does not help with the use case, no
14:14 <jbernard> whoami-rajat: do you still object to this approach?
14:15 <whoami-rajat> i didn't get much time to look into this, but what i was curious about is: with multiple backends set up (each having its own cinder-volume service), how do we determine which service/driver to call for the attachment update call
14:16 <jbernard> i think you could call either one, as they both support all available protocols
14:17 <whoami-rajat> what i mean is, if we set up one backend for netapp iscsi and one for netapp FC, and we call self.driver.initialize_connection from the manager, how do we determine which service gets called
14:17 <whoami-rajat> that i wanted to figure out
14:18 <whoami-rajat> probably i can take it as an action item and update my comment on the patch
14:19 <jbernard> ok
14:20 <jbernard> #topic antelope (2023.1) has become unmaintained and the stable branch is no longer present
14:21 <jbernard> i'm not sure there's much to add, just a heads up
14:21 <sp-bmilanov> jbernard: can I have 2 more minutes please
14:21 <tosky> about that: after the dropping of stable/2023.1 (now unmaintained), we need to merge https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/934275 to unblock cinder-tempest-plugin
14:22 <tosky> jbernard: that's urgent ^^
14:22 <sp-bmilanov> right, I can ping whoami-rajat in #openstack-cinder later
14:23 <jbernard> there is also https://review.opendev.org/c/openstack/cinder/+/935378
14:23 <whoami-rajat> sp-bmilanov, yeah, sorry for holding this up, will try to resolve the query by this week
14:23 <jbernard> tosky: will take a look after the meeting
14:24 <jbernard> #topic rally ci job
14:24 <jbernard> https://zuul.opendev.org/t/openstack/builds?job_name=cinder-rally-task&project=openstack%2Fcinder&branch=master&result=SUCCESS&skip=0
14:24 <jbernard> jens brought up that the rally job has been failing for some time
14:25 <jbernard> do we want to fix this, or let it go to save resources?
14:27 <whoami-rajat> with the link above, i only see the job passing
14:27 <whoami-rajat> okay, i removed the SUCCESS filter
14:27 <tosky> yep
14:27 <jbernard> 2024-11-18 15:57:21 <frickler> last success for cinder-rally-task was half a year ago, if nobody wants to or is able to fix it, maybe stop running it and wasting CI resources?
14:28 <whoami-rajat> checked one of the tests, i think it's related to the cinderv3 vs block-storage endpoint change
14:28 <whoami-rajat> Unexpected error: Validation error: There is no 'cinderv3' service in your environment
14:28 <whoami-rajat> #link https://68dc1c67d2d925924853-d78b0c94e26e635eac828273888f290f.ssl.cf2.rackcdn.com/935615/2/check/cinder-rally-task/e5228ed/results/report.html#/CinderVolumes.create_and_attach_volume
14:29 <whoami-rajat> maybe a change here is needed
14:29 <whoami-rajat> #link https://github.com/openstack/rally-openstack/blob/master/rally-jobs/cinder.yaml#L40
14:30 <jbernard> that might do it
14:30 <whoami-rajat> do we know what kind of coverage the rally job provides us?
14:30 <jbernard> so we /do/ want to keep rally, it sounds like
14:30 <whoami-rajat> just to see if it's not overlapping with our existing tests
14:30 <jbernard> or maybe that's undecided
14:30 <jbernard> i'm not sure, i'm wondering the same thing - how much of it overlaps with tempest
14:30 <tosky> it should be more performance testing, which tempest doesn't cover
14:31 <tosky> it can also run tempest tests (or it used to, I haven't touched it in ages), but I'm not sure it's used for that purpose inside that job
14:31 <jbernard> ahh, i see
14:31 <whoami-rajat> tosky, do you mean concurrency testing? like creating 5 volumes simultaneously
14:32 <tosky> whoami-rajat: within some specific deadline
14:32 <whoami-rajat> okay, i remember working on something similar here
14:32 <whoami-rajat> https://github.com/openstack/cinder-tempest-plugin/blob/master/cinder_tempest_plugin/api/volume/test_multiple_volume_from_resource.py
14:33 <whoami-rajat> but it would be worth checking the current coverage of rally to determine if we need the job
14:34 <tosky> whoami-rajat: but your test didn't consider the execution time, which is iirc what rally is meant for
14:34 <tosky> agreed that we should check what we are really testing
14:34 <whoami-rajat> i see your point
14:37 <jbernard> ok, i will post a patch to rally-openstack to s/cinderv3/block-storage/
14:37 <jbernard> hopefully that will unblock things
14:38 <tosky> not sure rally-openstack is branched (probably it is), that's probably the only thing to check
14:38 <jbernard> tosky: good to note
14:38 <jbernard> tosky: thanks
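[The patch jbernard plans would land in the rally-jobs/cinder.yaml linked above. A hypothetical sketch of the kind of edit, assuming rally's api_versions context accepts a service_type key; the scenario name comes from the failing report, but the exact keys and layout are not copied from the repo:]

```yaml
# Hypothetical fragment of rally-jobs/cinder.yaml -- structure assumed,
# not verified against rally-openstack master.
CinderVolumes.create_and_attach_volume:
  -
    args:
      size: 1
    context:
      api_versions:
        cinder:
          version: 3
          service_type: block-storage   # previously: service_name: cinderv3
```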
14:39 <jbernard> #topic upgrade db schema issues
14:39 <jbernard> zigo noticed a db migration failure when upgrading to bobcat or caracal
14:41 <jbernard> not sure if anyone has anything to add, i just wanted to raise it - we'll need to push some kind of fix for that
14:41 <jbernard> there is a bug (with patch): https://bugs.launchpad.net/cinder/+bug/2070475
14:42 <jbernard> but the patch doesn't cover all cases, according to zigo's testing
14:42 <jbernard> where non-deleted volumes can still have use_quota=NULL
14:42 <jbernard> something to be aware of
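[For illustration only, the gap described above can be sketched with an in-memory table: some non-deleted rows are left with use_quota=NULL, and a backfill pass of the kind the patch attempts would look roughly like this. The table layout is a toy simplification, not the real cinder schema:]

```python
import sqlite3

# Toy stand-in for the volumes table; the real cinder schema differs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT, deleted INTEGER, use_quota INTEGER)")
conn.executemany(
    "INSERT INTO volumes VALUES (?, ?, ?)",
    [("v1", 0, None),   # non-deleted volume missed by the migration
     ("v2", 0, 1),      # already migrated
     ("v3", 1, None)],  # deleted volume, can be left alone
)

# Backfill pass: give every live volume an explicit use_quota value so a
# later migration can safely make the column non-nullable.
conn.execute(
    "UPDATE volumes SET use_quota = 1 WHERE deleted = 0 AND use_quota IS NULL"
)

remaining = conn.execute(
    "SELECT COUNT(*) FROM volumes WHERE deleted = 0 AND use_quota IS NULL"
).fetchone()[0]
print(remaining)  # 0
```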
14:44 <jbernard> we can open things up for discussion now, there are review requests but I won't list them all here.  If you have review cycles, please consider those
14:44 <jbernard> #topic open discussion
14:45 <manudinesh> Hi All, can we take up this one: https://bugs.launchpad.net/cinder/+bug/2084117
14:47 <manudinesh> We see an error while trying to deploy a vSCSI-based VM [it tries to create an InitiatorGroup; we were told vSCSI-based VMs should use the underlying host's IG mapping]... Can someone clarify? Thanks
14:47 <jbernard> this looks like a netapp connection issue, is anyone from NetApp around today?
14:47 <manudinesh> This is in the NetApp driver
14:48 <whoami-rajat> sp-bmilanov, hey, i was able to get my answer, so i had a proposal in mind to create multiple backends (one for each protocol) and use them depending on what the compute node supports, but it looks like, based on the host/cluster name, we decide which cinder-volume service we will redirect the rpc call to
14:48 <manudinesh> Can you please help me with a NetApp point-of-contact mail id... I will try to reach out directly. Thanks
14:48 <whoami-rajat> basically, if the volume was created in an iSCSI backend, all volume operations will redirect to the same backend
14:49 <whoami-rajat> so even though i don't like the idea of one backend class supporting all protocols, maybe we can make this exception for the storpool driver
14:49 <whoami-rajat> since we don't have any mechanism to dynamically redirect calls based on the storage protocol in the connector properties
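[To make the routing whoami-rajat describes concrete: a volume is pinned to the backend named in its host field, so every later RPC for that volume goes back to the service that created it, regardless of the protocol in the connector. A minimal sketch, assuming the standard "host@backend#pool" naming; the function and return string are simplified illustrations, not cinder's actual oslo.messaging wiring:]

```python
def rpc_backend(volume_host: str) -> str:
    """Return the backend a volume operation is routed to.

    Cinder volumes carry a host of the form "host@backend#pool"; the
    pool suffix is dropped and the remainder selects the cinder-volume
    service, so the connector's transport protocol never changes the
    target backend.
    """
    return volume_host.split("#", 1)[0]

# Two volumes on protocol-specific backends (hypothetical names):
print(rpc_backend("node1@netapp-iscsi#pool0"))  # node1@netapp-iscsi
print(rpc_backend("node1@netapp-fc#pool0"))     # node1@netapp-fc
```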
14:49 <jbernard> whoami-rajat: do you know if we have an up-to-date contact list for drivers?
14:50 <rosmaita> can probably just see who's pushed patches recently
14:50 <whoami-rajat> jbernard, i don't think it has been updated in a while, but msaravan is the go-to person for netapp issues
14:50 <rosmaita> https://review.opendev.org/q/project:openstack/cinder+dir:cinder/volume/drivers/netapp
14:51 <msaravan> Sorry @manudinesh: I am looking at the bug..
14:51 <manudinesh> sure msaravan.. Thanks
14:51 <sp-bmilanov> whoami-rajat: ack
14:52 <msaravan> Can we sync up sometime this week or early next week.. we understood the igroup issue, need some more information
14:52 <manudinesh> sure, we can.. Thanks
14:52 <whoami-rajat> sp-bmilanov, can you help me with your patch link, will update my comments
14:53 <jhorstmann> I have pushed a first version of the dm-clone driver spec: https://review.opendev.org/c/openstack/cinder-specs/+/935347  It is still missing the specifics about snapshots, but some feedback would be welcome before I make things more complicated by adding snapshots
14:53 <sp-bmilanov> whoami-rajat: yep, https://review.opendev.org/c/openstack/cinder/+/847536
14:54 <jbernard> jhorstmann: i added it to the etherpad, will take a look, thanks
14:55 <sp-bmilanov> whoami-rajat: I will also namespace the newly-added options, there's still a comment for that
14:55 <whoami-rajat> jbernard, the spec freeze is M2, right?
14:55 <jbernard> whoami-rajat: that's what i was thinking
14:56 <jbernard> whoami-rajat: but i haven't submitted it to be official yet
14:56 <whoami-rajat> so epoxy-2 is Jan 9
14:56 <jbernard> M2 is the first week of Jan
14:56 <whoami-rajat> #link https://releases.openstack.org/epoxy/schedule.html#e-2
14:56 <jbernard> https://releases.openstack.org/epoxy/schedule.html
14:56 <whoami-rajat> we have time, but a lot of folks will be out next month
14:57 <whoami-rajat> would be useful to have some feedback on jhorstmann's spec before the end-of-year break
14:57 <jbernard> jhorstmann: you may want to start on the snapshot details
14:57 <whoami-rajat> though, i will be around
14:57 <jhorstmann> M2 is also the new driver deadline, right?
14:57 <whoami-rajat> jhorstmann, yeah, right
14:57 <jbernard> hmm, it's technically up to us
14:58 <whoami-rajat> :D
14:58 <whoami-rajat> Cinder team makes exceptions for dedicated contributors for sure
14:58 <jbernard> let's see how the spec shapes up
14:58 <jhorstmann> jbernard, I will start with the snapshot part
14:59 <jbernard> if things look okay, we can make some adjustments
14:59 <jbernard> k, we're nearly out of time
14:59 <jbernard> last call
15:00 <jbernard> thanks everyone!
15:00 <jbernard> #endmeeting
15:00 <opendevmeet> Meeting ended Wed Nov 20 15:00:05 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:00 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-20-14.01.html
15:00 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-20-14.01.txt
15:00 <opendevmeet> Log:            https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-20-14.01.log.html
15:00 <sp-bmilanov> thank you!
15:00 <whoami-rajat> thanks everyone!
15:15 <zigo> jbernard: My patch is ugly, but works: https://salsa.debian.org/openstack-team/services/cinder/-/blob/debian/dalmatian/debian/patches/fix-db-schema-migration.patch?ref_type=heads

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!