14:00:39 #startmeeting cinder
14:00:40 Meeting started Wed Jan 13 14:00:39 2021 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:43 The meeting name has been set to 'cinder'
14:00:45 #topic roll call
14:00:53 hi!
14:01:02 hi
14:01:22 Hi
14:01:55 o/
14:01:55 hi
14:01:58 hoping we get a few more people
14:02:50 hi!
14:03:03 ok, now we can start! :D
14:03:14 #link https://etherpad.opendev.org/p/cinder-wallaby-meetings
14:03:19 #topic announcements
14:03:36 o/
14:03:44 ok, as we discussed last week, the new driver merge deadline is milestone-2
14:03:47 which is next week
14:03:51 hi
14:03:58 so review priority is DRIVERS
14:04:09 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019720.html
14:04:50 also, that email missed a backup driver for S3
14:05:03 which was proposed a while ago but has been sitting
14:05:45 finally found the link
14:05:53 #link https://review.opendev.org/c/openstack/cinder/+/746561
14:06:06 that review requires a change to global-requirements
14:06:13 which has been approved by the requirements team
14:06:19 though currently Zuul disagrees
14:06:46 in any case, we can assume that the requirement is approved, so no reason to hold up reviews on that patch
14:07:01 great! will review and test it
14:07:06 excellent!
14:07:25 i don't need to remind people that all you need to do is look at the cinder logo to see how important backends are to cinder
14:07:36 so these are the most important reviews right now
14:07:56 :-)
14:08:16 and i will gently remind people who have other patches posted, that you can please review other people's patches to move things along
14:08:39 the email linked above has a link to the cinder docs, where we have a new driver review checklist that can guide you along
14:09:05 any questions or comments?
14:09:20 Remind me when the deadline is?
14:09:32 not tomorrow, but the thursday after that
14:09:36 yeah
14:09:39 it is very close
14:09:44 Ok.
14:10:37 only other announcement is the status of the cursive library
14:11:10 oslo has agreed to take over governance, and oslo and barbican will share core duties
14:11:21 (at least i think that's what they said)
14:11:30 (possibly shared governance)
14:11:54 in any case, the important point is that it will be brought back into the 'openstack' namespace
14:12:17 and we will have active, identifiable core reviewers to contact if we need to patch it
14:13:10 #topic Wallaby R-13 Bug Review
14:13:18 thanks rosmaita
14:13:20 #link https://etherpad.opendev.org/p/cinder-wallaby-r13-bug-review
14:13:34 quiet week again this week, only 2 bugs opened, one for cinder and one for python cinderlib
14:13:40 bug_1: volume manager under reports allocated space #link https://bugs.launchpad.net/cinder/+bug/1910767
14:13:41 Launchpad bug 1910767 in Cinder "volume manager under reports allocated space" [Medium,Triaged]
14:14:04 I have set this to medium importance but it may warrant high, underreporting of allocated capacity could be very problematic
14:14:43 does this impact anything after the service has been up and running for a few minutes? (i.e. when volume stats have been gathered again)
14:15:34 I had assumed that the periodic status checks that go to the drivers would update the capacities also
14:15:42 i assume not, so it probably doesn't need to be a high prio bug
14:16:06 i agree with eharney here
14:16:38 no problem, it was opened by waltboring so we might hear from him in the future about it
14:17:07 i wonder if there was a reason why it was implemented like this
14:17:09 i mean
14:17:17 only look at those 2 statuses at init
14:17:26 and then get better stats later
14:17:56 i kind of doubt it
14:18:04 hemna: ^^^
14:18:12 i think those are the 2 valid states right?
other states may end up in error
14:19:43 ok, we can discuss on the bug report
14:20:02 moving on to the next and final bug
14:20:04 bug_2: Can't get volume usage #link https://bugs.launchpad.net/python-cinderclient/+bug/1911390
14:20:05 Launchpad bug 1911390 in python-cinderclient "Can't get volume usage" [Wishlist,Triaged]
14:20:31 this isn't a bug, more of a misinterpretation of what is returned to users, but I set it to wishlist in case it's something that may be looked at in the future
14:20:54 if we want to keep it as wishlist it needs more detail about what it means
14:21:01 because i'm not sure what the complaint/request actually is
14:21:20 and that's not something that's easy to report
14:21:25 I read it like the user wanted/expected to get used capacity back
14:21:34 if you have thin volumes and deduplication/compression on the backend...
14:21:45 i don't think we want to support that
14:21:52 eharney: +1
14:21:53 geguileo I agree, from a driver maintainer perspective we would need to have a call that goes to our management server to get the current capacity
14:22:22 and it's not always black & white
14:23:09 yeah, and it probably wouldn't be the same between different backends
14:23:09 i'd vote for "Won't Fix"
14:23:53 ok, any objections to moving to won't fix?
14:24:05 eharney: +1
14:24:21 i'll change that now, thanks everyone
14:24:23 eharney +1
14:24:25 you can say in the comment that they can provide more clarity about what exactly they are asking
14:24:34 if they object
14:24:34 no prob, i'll do that
14:24:42 +1 to won't fix, but with a proper explanation of why we won't be fixing it
14:25:18 does someone want to make that an action item for themselves with a proper explanation?
14:25:58 maybe ask for clarity first so we know what we're explaining
14:26:31 will do, i'll ask that
14:26:43 that's it from me this week, thanks everyone
14:26:46 thanks!
14:27:03 #topic Stable releases
14:27:06 whoami-rajat__: that's you
14:27:10 thanks rosmaita
14:27:30 this is just a reminder that tomorrow is the last date for cycle-trailing releases
14:27:39 for us, it's cinderlib
14:28:13 a patch is blocking it and we need to finalize whether we want to include it in the cinderlib victoria release or not
14:28:25 #link https://review.opendev.org/c/openstack/cinderlib/+/768533
14:28:37 geguileo: ^^ need your opinion on the same
14:29:07 my question is that i think stable/vic cinderlib should be using stable/vic os-brick
14:29:15 is that a correct assumption?
14:29:36 +1
14:29:49 otherwise, there wouldn't seem to be any point in having stable cinderlib branches
14:30:06 at least, it sounds like the right way to do it
14:30:22 talking about stable releases, and even knowing that drivers have the highest review priority, there is a good amount of (I believe) clean backports that could get some attention
14:31:26 i think geguileo may be in another meeting
14:31:46 seems like it
14:32:09 i can ask him after the meeting
14:32:43 ok, otherwise, i think we merge https://review.opendev.org/c/openstack/cinderlib/+/768533 and update the hash for release
14:33:05 according to herve, there's a good chance the release publish job will fail without that change
14:33:13 or something like it, anyway
14:33:34 any comments or questions?
14:33:49 ack, should i go forward and merge it now?
14:33:55 maybe
14:33:57 it already has 2 +2s
14:34:12 i think the note about os-brick in the commit message doesn't match what's in the reqs?
14:35:01 i think you are correct
14:35:16 it says os-brick 4.0.1 but still only requires 2.7.0
14:36:10 ok, i need to fix that
14:36:12 good catch
14:37:11 i'll check offline to make sure the l-c still work (pretty sure they will, but you never know)
14:37:28 look for a new patch set right after the meeting
14:38:04 rosmaita: yes, sorry, I'm in another meeting. I'll review your updated patch later
14:38:12 thanks!
14:38:25 whoami-rajat__: anything else?
14:38:47 that's all from me
14:38:50 thanks!
14:38:54 great, thank you
14:38:59 #topic open discussion
14:39:36 if nobody has anything, this is 20 minutes of review time!!!
14:40:00 just kindly asking for review of https://review.opendev.org/c/openstack/cinder/+/767581
14:40:10 it has a +2 from hemna
14:40:26 https://review.opendev.org/c/openstack/cinder/+/769552 had the lio-barbican job failing until someone said "I've never seen this many consecutive successful runs"
14:40:28 :)
14:40:39 had the lio-barbican job passing*
14:41:52 although apparently i spoke too soon, last run timed out!
14:42:11 but i still think it's worth merging, it would seem to reduce the number of rechecks
14:43:54 i was going through bug https://bugs.launchpad.net/cinder/+bug/1873234 , and realised the default rpc timeout for cinder was changed from 60 to 120 secs here https://review.opendev.org/c/openstack/devstack/+/768323
14:43:55 Launchpad bug 1873234 in devstack "lvchange -a causing RPC timeouts between c-api and c-vol on inap hosts in CI" [Undecided,New]
14:44:52 not really sure how lv activation could take 81 seconds unless there's something really wrong with that node, that seems odd
14:45:23 probably should read syslog when that occurs to see what it's doing
14:46:00 also the patch increases the rpc timeout for all backends, the change isn't specific to lvm, which seems wrong
14:46:14 s/all backends/irrespective of the backend
14:47:09 also, that command taking that long to return shouldn't cause an rpc timeout anyway, unless there's some other bug in cinder
14:48:31 mep
14:50:43 anything else?
14:51:09 ok, use these final 10 minutes wisely
14:51:15 #endmeeting