14:00:39 <rosmaita> #startmeeting cinder
14:00:40 <openstack> Meeting started Wed Jan 13 14:00:39 2021 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:43 <openstack> The meeting name has been set to 'cinder'
14:00:45 <rosmaita> #topic roll call
14:00:53 <michael-mcaleer> hi!
14:01:02 <eharney> hi
14:01:22 <whoami-rajat__> Hi
14:01:55 <tosky> o/
14:01:55 <e0ne> hi
14:01:58 <rosmaita> hoping we get a few more people
14:02:50 <geguileo> hi!
14:03:03 <rosmaita> ok, now we can start!  :D
14:03:14 <rosmaita> #link https://etherpad.opendev.org/p/cinder-wallaby-meetings
14:03:19 <rosmaita> #topic announcements
14:03:36 <jungleboyj> o/
14:03:44 <rosmaita> ok, as we discussed last week, the new driver merge deadline is milestone-2
14:03:47 <rosmaita> which is next week
14:03:51 <enriquetaso> hi
14:03:58 <rosmaita> so review priority is DRIVERS
14:04:09 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019720.html
14:04:50 <rosmaita> also, that email missed a backup driver for S3
14:05:03 <rosmaita> which was proposed a while ago but has been sitting
14:05:45 <rosmaita> finally found the link
14:05:53 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/746561
14:06:06 <rosmaita> that review requires a change to global-requirements
14:06:13 <rosmaita> which has been approved by the requirements team
14:06:19 <rosmaita> though currently Zuul disagrees
14:06:46 <rosmaita> in any case, we can assume that the requirement is approved, so no reason to hold up reviews on that patch
14:07:01 <e0ne> great! will review and test it
14:07:06 <rosmaita> excellent!
14:07:25 <rosmaita> i don't need to remind people that all you need to do is look at the cinder logo to see how important backends are to cinder
14:07:36 <rosmaita> so these are the most important reviews right now
14:07:56 <jungleboyj> :-)
14:08:16 <rosmaita> and i will gently remind people who have other patches posted that they can please review other people's patches to move things along
14:08:39 <rosmaita> the email linked above has a link to the cinder docs, where we have a new driver review checklist that can guide you along
14:09:05 <rosmaita> any questions or comments?
14:09:20 <jungleboyj> Remind me when the deadline is?
14:09:32 <rosmaita> not tomorrow, but the thursday after that
14:09:36 <rosmaita> yeah
14:09:39 <rosmaita> it is very close
14:09:44 <jungleboyj> Ok.
14:10:37 <rosmaita> only other announcement is the status of the cursive library
14:11:10 <rosmaita> oslo has agreed to take over governance, and oslo and barbican will share core duties
14:11:21 <rosmaita> (at least i think that's what they said)
14:11:30 <rosmaita> (possibly shared governance)
14:11:54 <rosmaita> in any case, the important point is that it will be brought back into the 'openstack' namespace
14:12:17 <rosmaita> and we will have active, identifiable core reviewers to contact if we need to patch it
14:13:10 <rosmaita> #topic Wallaby R-13 Bug Review
14:13:18 <michael-mcaleer> thanks rosmaita
14:13:20 <rosmaita> #link https://etherpad.opendev.org/p/cinder-wallaby-r13-bug-review
14:13:34 <michael-mcaleer> quiet week again this week, only 2 bugs opened, one for cinder and one for python-cinderclient
14:13:40 <michael-mcaleer> bug_1: volume manager under reports allocated space #link https://bugs.launchpad.net/cinder/+bug/1910767
14:13:41 <openstack> Launchpad bug 1910767 in Cinder "volume manager under reports allocated space" [Medium,Triaged]
14:14:04 <michael-mcaleer> I have set this to medium importance but it may warrant high; underreporting of allocated capacity could be very problematic
14:14:43 <eharney> does this impact anything after the service has been up and running for a few minutes?  (i.e. when volume stats have been gathered again)
14:15:34 <michael-mcaleer> I had assumed that the periodic status check that goes to the drivers would update the capacities also
14:15:42 <eharney> i assume not, so it probably doesn't need to be a high prio bug
14:16:06 <rosmaita> i agree with eharney here
14:16:38 <michael-mcaleer> no problem, it was opened by waltboring so we might hear from him in the future about it
14:17:07 <rosmaita> i wonder if there was a reason why it was implemented like this
14:17:09 <rosmaita> i mean
14:17:17 <rosmaita> only look at those 2 statuses at init
14:17:26 <rosmaita> and then get better stats later
14:17:56 <eharney> i kind of doubt it
14:18:04 <jungleboyj> hemna:  ^^^
14:18:12 <whoami-rajat__> i think those are the 2 valid states right? other states may end up in error
14:19:43 <rosmaita> ok, we can discuss on the bug report
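A rough sketch of the init-time accounting pattern under discussion, assuming the volume manager only sums sizes for volumes in a couple of statuses at startup and relies on the periodic driver stats refresh to correct the totals later; the names below are illustrative, not the actual cinder/volume/manager.py code:

    # Illustrative sketch only -- not the actual cinder code.
    COUNTED_STATUSES = ('available', 'in-use')  # assumed set of counted statuses

    def count_allocated_capacity_at_init(volumes):
        """Sum volume sizes (GiB) for the statuses counted at service init."""
        allocated_gb = 0
        for volume in volumes:
            if volume['status'] in COUNTED_STATUSES:
                allocated_gb += volume['size']
            # Volumes in other states are skipped, so space they may still
            # occupy on the backend is under-reported until the periodic
            # get_volume_stats() refresh updates the totals.
        return allocated_gb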
14:20:02 <michael-mcaleer> moving on to the next and final bug
14:20:04 <michael-mcaleer> bug_2: Can't get volume usage #link https://bugs.launchpad.net/python-cinderclient/+bug/1911390
14:20:05 <openstack> Launchpad bug 1911390 in python-cinderclient "Can't get volume usage" [Wishlist,Triaged]
14:20:31 <michael-mcaleer> this isn't a bug, more of a misinterpretation of what is returned to users, but I set it to wishlist in case it's something that may be looked at in the future
14:20:54 <eharney> if we want to keep it as wishlist it needs more detail about what it means
14:21:01 <eharney> because i'm not sure what the complaint/request actually is
14:21:20 <geguileo> and that's not something that's easy to report
14:21:25 <michael-mcaleer> I read it like the user wanted/expected to get used capacity back
14:21:34 <geguileo> if you have thin volumes and deduplication/compression on the backend...
14:21:45 <eharney> i don't think we want to support that
14:21:52 <geguileo> eharney: +1
14:21:53 <michael-mcaleer> geguileo I agree, from a driver maintainer perspective we would need to have a call that goes to our management server to get the current capacity
14:22:22 <michael-mcaleer> and its not always black & white
14:23:09 <geguileo> yeah, and it probably wouldn't be the same between different backends
14:23:09 <eharney> i'd vote for "Won't Fix"
14:23:53 <michael-mcaleer> ok, any objections to moving to won't fix?
14:24:05 <e0ne> eharney: +1
14:24:21 <michael-mcaleer> I'll change that now, thanks everyone
14:24:23 <whoami-rajat__> eharney +1
14:24:25 <rosmaita> you can say in the comment that they can provide more clarity about what exactly they are asking
14:24:34 <rosmaita> if they object
14:24:34 <michael-mcaleer> no prob, I'll do that
14:24:42 <geguileo> +1 to won't fix, but with proper explanation on why we won't be fixing it
14:25:18 <michael-mcaleer> does someone want to make that an action item for themselves with a proper explanation?
14:25:58 <rosmaita> maybe ask for clarity first so we know what we're explaining
14:26:31 <michael-mcaleer> will do, I'll ask that
14:26:43 <michael-mcaleer> that's it from me this week, thanks everyone
14:26:46 <rosmaita> thanks!
14:27:03 <rosmaita> #topic Stable releases
14:27:06 <rosmaita> whoami-rajat__: that's you
14:27:10 <whoami-rajat__> thanks rosmaita
14:27:30 <whoami-rajat__> this is just a reminder that tomorrow is the last date for cycle-trailing releases
14:27:39 <whoami-rajat__> for us, it's cinderlib
14:28:13 <whoami-rajat__> a patch is blocking it and we need to finalize if we want to include it in the cinderlib victoria release or not
14:28:25 <whoami-rajat__> #link https://review.opendev.org/c/openstack/cinderlib/+/768533
14:28:37 <whoami-rajat__> geguileo:  ^^ need your opinion on the same
14:29:07 <rosmaita> my question is this: i think stable/vic cinderlib should be using stable/vic os-brick
14:29:15 <rosmaita> is that a correct assumption?
14:29:36 <e0ne> +1
14:29:49 <rosmaita> otherwise, there wouldn't seem to be any point in having stable cinderlib branches
14:30:06 <e0ne> at least, it sounds like the right way to do it
14:30:22 <tosky> talking about stable releases, and even knowing that drivers have the highest review priority, there are a good number of (I believe) clean backports that could get some attention
14:31:26 <rosmaita> i think geguileo may be in another meeting
14:31:46 <whoami-rajat__> seems like that
14:32:09 <whoami-rajat__> i can ask him after the meeting
14:32:43 <rosmaita> ok, otherwise, i think we merge https://review.opendev.org/c/openstack/cinderlib/+/768533 and update the hash for release
14:33:05 <rosmaita> according to herve, there's a good chance the release publish job will fail without that change
14:33:13 <rosmaita> or something like it, anyway
14:33:34 <rosmaita> any comments or questions?
14:33:49 <whoami-rajat__> ack, should i go forward and merge it now?
14:33:55 <eharney> maybe
14:33:57 <whoami-rajat__> it already has 2 +2s
14:34:12 <eharney> i think the note about os-brick in the commit message doesn't match what's in the reqs?
14:35:01 <rosmaita> i think you are correct
14:35:16 <eharney> it says os-brick 4.0.1 but still only requires 2.7.0
14:36:10 <rosmaita> ok, i need to fix that
14:36:12 <rosmaita> good catch
14:37:11 <rosmaita> i'll check offline to make sure the l-c still work (pretty sure they will, but you never know)
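For illustration, the kind of alignment being discussed would look roughly like the excerpt below; only the os-brick versions come from the discussion above, and the file layout and comments are assumptions about the cinderlib patch:

    # requirements.txt (illustrative excerpt)
    os-brick>=4.0.1  # raised from >=2.7.0 so the reqs match the commit message

    # lower-constraints.txt (illustrative excerpt)
    os-brick==4.0.1  # lower-constraints must be bumped in step, hence the l-c check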
14:37:28 <rosmaita> look for a new patch set right after the meeting
14:38:04 <geguileo> rosmaita: yes, sorry, I'm in another meeting. I'll review your updated patch later
14:38:12 <rosmaita> thanks!
14:38:25 <rosmaita> whoami-rajat__: anything else?
14:38:47 <whoami-rajat__> that's all from me
14:38:50 <whoami-rajat__> thanks!
14:38:54 <rosmaita> great, thank you
14:38:59 <rosmaita> #topic open discussion
14:39:36 <rosmaita> if nobody has anything, this is 20 minutes of review time!!!
14:40:00 <e0ne> just kindly asking for a review of https://review.opendev.org/c/openstack/cinder/+/767581
14:40:10 <e0ne> it has +2 from hemna
14:40:26 <eharney> https://review.opendev.org/c/openstack/cinder/+/769552 had the lio-barbican job passing until someone said "I've never seen this many consecutive successful runs"
14:40:28 <eharney> :)
14:41:52 <rosmaita> although apparently i spoke too soon, last run timed out!
14:42:11 <rosmaita> but i still think it's worth merging, would seem to reduce the number of rechecks
14:43:54 <whoami-rajat__> i was going through bug https://bugs.launchpad.net/cinder/+bug/1873234 , and realised the default rpc timeout for cinder was changed from 60 to 120 secs here https://review.opendev.org/c/openstack/devstack/+/768323
14:43:55 <openstack> Launchpad bug 1873234 in devstack "lvchange -a causing RPC timeouts between c-api and c-vol on inap hosts in CI" [Undecided,New]
14:44:52 <eharney> not really sure how lv activation could take 81 seconds unless there's something really wrong with that node, that seems odd
14:45:23 <eharney> probably should read syslog when that occurs to see what it's doing
14:46:00 <whoami-rajat__> also the patch increases the rpc timeout irrespective of the backend; the change isn't specific to lvm, which seems wrong
14:47:09 <eharney> also, that command taking that long to return shouldn't cause an rpc timeout anyway, unless there's some other bug in cinder
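For reference, the timeout being discussed is the oslo.messaging rpc_response_timeout option; a cinder.conf equivalent of the linked devstack change would look roughly like this (120 is the value mentioned above, and as noted it applies to every cinder service regardless of backend):

    [DEFAULT]
    # oslo.messaging RPC reply timeout in seconds; the linked devstack patch
    # raises this from the library default of 60 to 120 for cinder as a whole,
    # not just for the LVM backend.
    rpc_response_timeout = 120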
14:48:31 <hemna> mep
14:50:43 <rosmaita> anything else?
14:51:09 <rosmaita> ok, use these final 10 minutes wisely
14:51:15 <rosmaita> #endmeeting