14:00:00 #startmeeting cinder
14:00:01 Meeting started Wed Feb 24 14:00:00 2021 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 The meeting name has been set to 'cinder'
14:00:06 #topic roll call
14:00:13 Hi
14:00:16 hi
14:00:23 hi
14:01:13 hi
14:01:13 hi
14:01:21 hi
14:01:35 hello everyone
14:01:49 hi
14:01:53 #link https://etherpad.openstack.org/p/cinder-wallaby-meetings
14:02:02 ok, let's get started
14:02:07 #topic announcements
14:02:21 assert:supports-api-interoperability tag has been approved for cinder
14:02:31 #link https://review.opendev.org/c/openstack/governance/+/773684
14:02:35 hooray!
14:02:46 upcoming deadlines
14:02:53 next week: wallaby os-brick release
14:03:08 week after that: wallaby cinderclient release plus wallaby feature freeze
14:03:23 so the end of the cycle is coming up fast
14:03:59 btw, with february being so short, i lost track that this is the last meeting of the month
14:04:28 which would normally mean a video meeting, but we'll do it next month
14:04:42 * whoami-rajat just realised this is the last meeting of feb
14:04:46 apologies to anyone really looking forward to a video meeting
14:05:21 that's all the announcements from me, anyone else have something to mention?
14:06:03 i should give a shout out to eharney for his suggestion about the festival of XS reviews we had last week
14:06:10 i think a good time was had by all
14:06:23 and we should do it again
14:06:30 probably soon-ish
14:07:07 in case i am being too subtle, i am looking for feedback about how often we should have such a festival
14:07:31 often, until we have a much smaller review backlog :)
14:07:44 exactly
14:08:06 agree
14:08:26 maybe right after feature freeze, we can get a jump on bug fix reviews
14:08:46 and speaking of bugs ...
14:08:57 #topic Wallaby R-1 Bug Review
14:09:06 enriquetaso: you have the floor
14:09:15 R-7*
14:09:21 Hello, we have 5 bugs reported since last week (2 Cinder, 1 Cinder client, and 2 os-brick)
14:09:29 #link https://etherpad.opendev.org/p/cinder-wallaby-r7-bug-review
14:09:41 bug_1: [os-brick] iSCSI+Multipath: Volume attachment hangs if session scanning fails
14:09:49 #link https://bugs.launchpad.net/os-brick/+bug/1915678
14:09:50 Launchpad bug 1915678 in os-brick "iSCSI+Multipath: Volume attachment hungs if sessiong scanning fails" [High,New]
14:09:58 Currently we execute login to iSCSI portals and device discovery in multiple threads concurrently. However, if some commands (like "multiple -m session") fail, the thread can abort immediately without properly updating counters like failed_logins or stopped_threads, because there are no try-except blocks to catch exceptions.
14:09:58 Meanwhile, the main thread keeps waiting until these counters are updated, and this results in a stuck volume attachment process.
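[Editor's note: a minimal sketch of the failure mode described above. This is not the actual os-brick code: the worker function is a hypothetical stand-in, and only the counter names (failed_logins, stopped_threads) come from the bug report. It shows why a worker thread that dies without updating its counters leaves the waiting main thread spinning forever, and the try/finally pattern that prevents it.]

```python
# Illustrative sketch only, not the real os-brick connector code.
# _do_login is a hypothetical stand-in for the iSCSI login/scan work.
import threading
import time

failed_logins = 0
stopped_threads = 0
lock = threading.Lock()

def _do_login(portal):
    # Simulate an unexpected failure during session scanning.
    raise RuntimeError("session scanning failed")

def login_worker(portal):
    global failed_logins, stopped_threads
    try:
        _do_login(portal)
    except Exception:
        with lock:
            failed_logins += 1
    finally:
        # The fix pattern: always record that this thread stopped, even
        # on unexpected errors, so the waiting main thread can progress.
        with lock:
            stopped_threads += 1

threads = [threading.Thread(target=login_worker, args=(p,))
           for p in ("10.0.0.1:3260", "10.0.0.2:3260")]
for t in threads:
    t.start()

# Main thread: without the finally block above, this loop would hang,
# which is the stuck attachment described in the bug.
while True:
    with lock:
        if stopped_threads == len(threads):
            break
    time.sleep(0.1)
```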
14:11:03 I think some people are already working on this but I couldn't find the gerrit links
14:11:22 yeah, reminder to people working on bugs:
14:11:32 the auto-update between gerrit and launchpad is broken
14:11:54 The next one is a small thing, but I think we'd like to fix it before the wallaby cinderclient release
14:11:55 so if you put up a patch to fix a bug, please open the bug in launchpad and add the review link yourself
14:12:07 enriquetaso: https://review.opendev.org/c/openstack/os-brick/+/775545 ?
14:12:41 tosky++
14:13:15 OK, I think we'd like to fix this before the wallaby cinderclient release
14:13:19 bug_2: [cinder client] Fetching server version fails to support passing client certificates
14:13:25 #link https://bugs.launchpad.net/python-cinderclient/+bug/1915996
14:13:26 Launchpad bug 1915996 in python-cinderclient "Fetching server version fails to support passing client certificates" [Medium,New] - Assigned to Sri Harsha mekala (harshayahoo)
14:13:32 >> from cinderclient import client as cinder_client
14:13:32 >> min_ver, max_ver = cinder_client.get_server_version(cin_url)
14:13:32 >> exception_from_error_queue
14:13:32 raise exception_type(errors)
14:13:32 OpenSSL.SSL.Error: [('SSL routines', 'ssl3_read_bytes', 'sslv3 alert handshake failure')]
14:14:22 cinder_client.get_server_version fails
14:15:03 this bug assumes that we support strict mTLS from cinderclient, which i don't know is a reasonable assumption
14:15:18 worth looking into if someone is already working on it
14:15:57 It's assigned to Sri Harsha mekala, but again no link
14:16:07 I'll try to find it after the meeting
14:16:27 or maybe tosky can do his gerrit search magic
14:16:33 ha
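[Editor's note: the handshake failure above is consistent with a server that demands a client certificate the version-discovery call never sends. Below is a minimal sketch, using plain requests rather than cinderclient, of the TLS kwargs such a call would need to forward; the endpoint and certificate paths are hypothetical, and whether cinderclient should grow this plumbing was left open in the meeting.]

```python
# Illustrative only: the requests-level kwargs that version discovery
# would need to forward for mutual TLS. Paths and URL are hypothetical.
import requests

resp = requests.get(
    "https://cinder.example.com:8776/",
    cert=("/etc/pki/tls/client.crt", "/etc/pki/tls/client.key"),  # client cert + key
    verify="/etc/pki/tls/ca.pem",  # CA bundle used to verify the server
)
resp.raise_for_status()

# The root resource lists API versions with their microversion range.
v3 = [v for v in resp.json()["versions"] if v["id"].startswith("v3")][0]
min_ver, max_ver = v3["min_version"], v3["version"]
```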
14:16:48 bug_3: [docs] Install and configure a storage node in cinder
14:16:55 #link https://bugs.launchpad.net/cinder/+bug/1916258
14:16:57 Launchpad bug 1916258 in Cinder "[docs] Install and configure a storage node in cinder" [Low,New]
14:17:03 Looks like the documentation is outdated.
14:17:04 The 'python-keystone' package should be replaced with 'python3-keystone'. However, I can't find python3-keystone [1] in RDO, but it is in Ubuntu 20.
14:17:04 [1] https://trunk.rdoproject.org/centos8-master/component/tempest/3e/05/3e05a15d9c4c889aba8c4aad9e24ba8a8a71b7f3_a65d35cc/rpmbuild.log
14:18:13 i guess we have an in-tree doc about RDO packages that is not up to date
14:18:57 yep
14:18:58 bug_4: [cinder] XtremIO does not support ports filtering
14:19:01 i'm kind of surprised people are using cinder docs to hand-build an RDO deployment at this point
14:19:28 #link https://bugs.launchpad.net/cinder/+bug/1915800
14:19:29 Launchpad bug 1915800 in Cinder "XtremIO does not support ports filtering" [Medium,In progress] - Assigned to Ivan Pchelintsev (pcheli)
14:19:33 Looks like this new implementation broke ports filtering: https://review.opendev.org/c/openstack/cinder/+/775798
14:19:33 The bug is already assigned.
14:19:36 That's all I have for now :)
14:19:52 ok, great ... thanks enriquetaso
14:20:40 i think we'd like to get https://review.opendev.org/c/openstack/os-brick/+/775545 into the release next week if possible
14:20:50 so please prioritize that review
14:20:54 i already -1'd it
14:21:13 that was fast
14:21:42 ok, so when eric's revision has happened, please look for that patch
14:21:55 ok, next topic is another brick fix we need to prioritize
14:22:10 #topic os-brick nvmeof connector regression in 4.2.0
14:22:22 let me get some links down for the record
14:22:34 #link https://bugs.launchpad.net/os-brick/+bug/1916264
14:22:35 Launchpad bug 1916264 in os-brick "NVMeOFConnector can't connect volume" [Medium,New] - Assigned to Zohar Mamedov (zoharm)
14:22:37 ^^ the bug
14:22:53 announcement to operators
14:22:56 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020627.html
14:23:03 announcement to us
14:23:11 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020628.html
14:23:20 and zoharm's current patch:
14:23:30 #link https://review.opendev.org/c/openstack/os-brick/+/777086
14:23:46 e0ne put up a revert patch, there's some discussion on it about how to fix this
14:24:03 #link https://review.opendev.org/c/openstack/os-brick/+/776441
14:24:32 the current patch introduces compatibility code into the nvmeof connector to make it backward compatible
14:25:25 it's not an ideal solution, but we're at the end of the cycle
14:25:45 splitting it into two connectors would require a nova change too
14:26:03 that's why we decided to make this fix for now
14:26:20 i think it makes sense given the situation
14:26:36 but cinder cores, please look at this patch!
14:26:36 I didn't review the patch yet
14:26:42 https://review.opendev.org/c/openstack/os-brick/+/777086
14:26:54 but I tested it manually and we restored CI
14:27:09 it is definitely less than ideal, i think ultimately in a future release we could have all drivers use the same driver
14:27:25 same connector* - ie, unify the API
14:27:45 so e0ne, is mellanox ci responding now?
14:27:55 or do you still have to trigger it manually?
14:28:27 rosmaita: I'm not sure auto build is enabled now, need to check it
14:28:45 ok
14:28:48 rosmaita: but we can trigger it by adding a 'retrigger-mlnx-spdk' comment
14:28:59 so we can trigger it from zoharm's patch?
14:29:34 i saw it has been triggered there and had a successful run
14:29:39 ok, cool
14:29:53 this gives us a check for regressions
14:29:53 https://review.opendev.org/c/openstack/os-brick/+/777086/4#message-10c5858a614dacdc829c8b008056515ca9bd1ef7
14:30:11 rosmaita: it passed for patchset #2, newer patchsets updated the commit message and unit tests only
14:30:27 also, anyone out there with an FC driver that uses the nvmeof brick connector ... please check your CI
14:30:43 it may only be SPDK, but it's hard to tell
14:31:51 zoharm: maybe you can add the retrigger comment whenever you upload a new patch set
14:32:33 sounds good
14:33:09 ok, so cinder cores ... please prioritize reviewing https://review.opendev.org/c/openstack/os-brick/+/777086/ this week
14:33:24 we need to get this fixed before next week's brick release
14:33:55 I'll take a look
14:34:08 hemna: ty
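[Editor's note: a generic sketch of the kind of backward-compatibility shim discussed above, not the actual patch; see https://review.opendev.org/c/openstack/os-brick/+/777086 for the real format handling. Every key name below is a hypothetical stand-in for the old- and new-style connection properties.]

```python
# Generic illustration only. The idea from the discussion: rather than
# splitting into two connectors (which would also require a nova change),
# normalize old- and new-style connection properties so a single
# connector handles both callers. All key names are hypothetical.
def _normalize_connection_properties(props):
    if "volume_replicas" in props:
        # Hypothetical new-style format: already a list of replica
        # descriptions; pass it through unchanged.
        return props
    # Hypothetical old-style flat format: wrap the single target so the
    # rest of the connector can use one code path for both formats.
    return {
        "volume_replicas": [props],
        "target_nqn": props.get("target_nqn"),
    }
```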
14:34:08 #topic new option or just do it?
14:34:22 #link https://review.opendev.org/c/openstack/cinder/+/766856
14:34:45 that patch proposes to support fast-diff for RBD backups
14:34:53 but it introduces a config option to turn it on
14:35:04 and it also checks to make sure the backend supports it
14:35:36 the question came up last week that maybe we should just check to see if fast-diff is supported by the backend, and if so, use it
14:35:48 without requiring configuration
14:36:09 i think that makes sense (which is why i am bringing it up!)
14:36:17 or try to use scheduler hints for that
14:36:19 but i want to get feedback from the wider community
14:36:19 i think we should do that
14:36:24 is there a specific version that is required ?
14:36:28 why scheduler hints..?
14:36:59 fast-diff is supported in all versions of ceph that we support, it was added years ago
14:37:18 eharney: in that case it should be safe
14:37:36 if that's the case, then I would vote for not needing a conf option for it, unless there is a specific reason why the driver shouldn't use it if it's on?
14:37:38 it was in infernalis if not before
14:37:41 eharney: how will incremental backups work if we enable fast-diff?
14:38:34 the only thing that fast-diff changes is how changes are tracked between images/snaps on the ceph backend, backups would presumably work the same
14:41:14 sounds good to me
14:41:35 ok, so it sounds like the consensus is to ask the proposer to revise the patch to remove the config option
14:41:52 this patch does make me think that we need to check whether we need something similar for rbd volumes, or if we are already doing the ideal thing there
14:42:35 also, looking at the docstring for the file, it could probably use a revision too
14:42:47 it suggests using at least Ceph Dumpling
14:43:07 heh
14:43:30 eharney: if you don't have time to look, maybe file a bug so we don't forget to check
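[Editor's note: a minimal sketch, assuming the python rados/rbd bindings, of the per-image fast-diff detection the consensus calls for, instead of gating it on a config option. The pool and image names are illustrative, and the real driver plumbing and error handling are omitted.]

```python
# Minimal sketch: detect fast-diff support on an RBD image at runtime.
# Pool and image names below are illustrative only.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("backups")
    try:
        with rbd.Image(ioctx, "volume-1234.backup.base") as image:
            # features() returns a bitmask; fast-diff has its own flag.
            fast_diff = bool(image.features() & rbd.RBD_FEATURE_FAST_DIFF)
            print("fast-diff enabled:", fast_diff)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```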
14:43:44 #topic Zadara cinder driver new features review
14:43:52 rratnaka: that's you
14:44:00 #link https://review.opendev.org/c/openstack/cinder/+/774463
14:44:01 Hi everyone.
14:44:16 Great pleasure meeting community folks
14:44:39 we are happy to have you here!
14:44:47 As rosmaita shared the review link, I am here to highlight my review requests
14:44:52 *request
14:45:24 I have addressed adding new features to the Zadara cinder driver, targeted for the Wallaby release
14:45:48 I addressed the first set of review comments
14:45:50 when you say "update code layout", do you mean moving some functions to common ?
14:45:58 yeah
14:46:43 Requesting to take a look for review and provide a set of comments
14:47:02 ok, so though it looks like a big change, it's mostly rearranging
14:47:28 i like adding 500 lines of tests, that's always good to see
14:47:29 Along with that, added a few features like multiattach and ipv6
14:47:40 and other APIs like get_manageable_volumes/snapshots
14:47:59 yeah. I have added the test cases too
14:48:52 ok, great. can you add a blueprint to track this: https://blueprints.launchpad.net/cinder/wallaby
14:48:54 Do we still have bandwidth to add the changes to Wallaby?
14:49:07 sure rosmaita. I will add a blueprint
14:49:52 ok, that will help us make sure everyone is aware
14:50:27 to answer the bandwidth question, let's see how the initial reviews go
14:50:49 anything else?
14:51:08 yeah. totally agree.
14:51:12 thanks
14:51:21 #topic Stable release update
14:51:28 that's it from my end. Awaiting the review comments. Thanks
14:51:34 #link https://etherpad.opendev.org/p/stable-releases-review-tracker-22-02-2021
14:51:39 whoami-rajat: that's you
14:51:42 thanks rosmaita
14:52:15 so this week there were not many reviews in stable branches
14:52:33 victoria had 1 patch which rosmaita approved
14:52:52 ussuri and train still have 3-4 patches unmerged
14:53:20 I'm not sure if i should go ahead with the victoria release after the last change merges, or wait for ussuri and train so we can propose the releases simultaneously
14:53:25 rosmaita: thoughts ? ^
14:53:58 i don't know that there's any advantage to a simultaneous release
14:54:24 anyone have any observations?
14:55:02 actually, forget i said that
14:55:11 or maybe not
14:55:21 * whoami-rajat confused
14:56:04 :P
14:56:20 the key thing is that we never want the situation where an earlier release (like train) contains a fix that isn't in the next release (ussuri)
14:56:31 so people don't break on upgrade
14:57:13 that shouldn't be happening because we propose backports in order
14:57:15 so if you release victoria now, and then u & t in a week or so, that should be ik
14:57:18 *ok
14:57:28 oh, release wise, yes
14:57:41 you got it
14:58:08 ok, will propose the victoria release this week and wait for ussuri and train to get cleaned up
14:58:45 ok, everyone needs to concentrate mostly on os-brick anyway, so other than u and t cleanup, we shouldn't have much backport action
14:58:57 thanks, rajat
14:59:03 #topic open discussion
14:59:08 for 45 seconds
14:59:26 also tosky had one question on the meeting pad, not sure if I answered it properly
15:00:10 i think you answered correctly, we don't know that v backports are relevant to earlier branches until someone proposes them
15:00:15 ok, make way for horizon!
15:00:18 thanks everyone
15:00:20 thanks!
15:00:22 #endmeeting