14:00:06 <whoami-rajat__> #startmeeting cinder
14:00:06 <opendevmeet> Meeting started Wed Nov 23 14:00:06 2022 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat__. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:07 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:07 <opendevmeet> The meeting name has been set to 'cinder'
14:00:11 <whoami-rajat__> #topic roll call
14:00:42 <tosky> hi
14:01:11 <rosmaita> o/
14:01:47 <crohmann> hey all.
14:01:55 <whoami-rajat__> is there a holiday today in the US/Europe region?
14:02:24 <whoami-rajat__> #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:02:45 <senrique> hi
14:02:52 <rosmaita> tomorrow is a USA holiday, so some people may be taking an extra day
14:03:36 <whoami-rajat__> ah yes, i know about tomorrow but people might be extending it, right
14:03:45 <whoami-rajat__> so we've a few people around but a lot on the agenda
14:03:47 <whoami-rajat__> so let's get started
14:03:55 <whoami-rajat__> #topic announcements
14:04:01 <whoami-rajat__> first, Midcycle-1 next week (Please add topics!) (30th Nov)
14:04:15 <whoami-rajat__> we've midcycle-1 next week, so I'd request everyone to add topics and attend it
14:04:28 <whoami-rajat__> it will be from 1400-1600 UTC (1 hour overlapping our current meeting time)
14:04:39 <whoami-rajat__> #link https://etherpad.opendev.org/p/cinder-antelope-midcycles
14:04:57 <simondodsley> o/
14:05:08 <whoami-rajat__> I will send a mail this week for reminder and details
14:05:20 <whoami-rajat__> next, Runtimes for 2023.1
14:05:27 <whoami-rajat__> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031229.html
14:05:35 <whoami-rajat__> so there are 2 points in the mail regarding runtimes
14:05:47 <whoami-rajat__> 1) we need to propose at least one tempest job which runs on old ubuntu i.e. focal
14:05:50 <whoami-rajat__> I've proposed it
14:06:01 <whoami-rajat__> #link  https://review.opendev.org/c/openstack/cinder/+/865427
14:06:07 <whoami-rajat__> it is the tempest integrated storage
14:06:28 <whoami-rajat__> the purpose is to verify a smooth upgrade from past releases
14:06:51 <whoami-rajat__> like ubuntu focal Zed to ubuntu focal 2023.1
14:07:09 <whoami-rajat__> 2) the python runtimes are 3.8 and 3.10
14:07:17 <whoami-rajat__> unit tests are automatically modified by the templates
14:07:23 <whoami-rajat__> #link  https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/864464
14:07:30 <whoami-rajat__> for functional tests, I've proposed a patch
14:07:39 <whoami-rajat__> #link https://review.opendev.org/c/openstack/cinder/+/865429
14:07:49 <whoami-rajat__> it's just changing py39 to py310
14:08:04 <whoami-rajat__> next, OpenInfra Board + OpenStack Syncup Call
14:08:10 <whoami-rajat__> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031242.html
14:08:19 <whoami-rajat__> if you go to the syncup call section in the above tc summary
14:08:27 <whoami-rajat__> there are some good points raised that i wanted to highlight
14:08:41 <whoami-rajat__> we want to make contribution easy for new contributors
14:09:03 <whoami-rajat__> we want to encourage platinum openstack members to contribute to openstack
14:09:20 <whoami-rajat__> you can read it for more details but i liked the initiative
14:09:26 <whoami-rajat__> it will allow us to have more diversity
14:09:47 <whoami-rajat__> that's all the announcements I had for today
14:09:50 <whoami-rajat__> does anyone have anything else?
14:11:00 <whoami-rajat__> or any doubts/clarifications about the above announcements?
14:11:15 <crohmann> If I may comment on the Runtimes issues - I actually thought there was only one release sharing two Ubuntu versions ... Yoga and then one had to upgrade to 22.04 prior to going to Zed.
14:11:33 <crohmann> (https://wiki.ubuntu.com/OpenStack/CloudArchive)
14:12:16 <crohmann> sorry I meant to show the graphic here: 
14:12:18 <crohmann> https://ubuntu.com/about/release-cycle#openstack-release-cycle
14:13:23 <whoami-rajat__> they might be supplying their distro with Zed for deployment but our current gate jobs are running with ubuntu focal
14:13:34 <whoami-rajat__> the above effort is for migrating those jobs to ubuntu 22.04
14:14:03 <whoami-rajat__> we don't test Zed with 22.04 but will be testing 2023.1 with 22.04
14:14:09 <whoami-rajat__> in upstream jobs ^
14:14:25 <crohmann> alright - thanks for the clarification-
14:14:55 <whoami-rajat__> yep it is more of an upstream CI thing
14:15:29 <whoami-rajat__> but thanks for pointing to that
14:15:58 <whoami-rajat__> looks like we can move to topics then
14:16:09 <whoami-rajat__> #topic Have backups happen independently from volume status field to allow e.g. live migrations to happen during potentially long running backups (crohmann)
14:16:14 <whoami-rajat__> crohmann, that's you
14:17:14 <crohmann> Yes. I'd like to bring up this topic once again of making cinder-backup move out of the state machine of volumes and run independently, e.g. to enable live-migrations and other things to happen independently.
14:19:53 <senrique> crohmann, do you have a fully detailed plan? or a WIP patch to show?
14:20:19 <crohmann> Did you see the details I placed below the topic on the Etherpad?
14:20:29 <whoami-rajat__> i think we've a spec and also discussed this during PTG
14:20:34 <senrique> oh sorry, my bad i haven't open the etherpad
14:20:39 <whoami-rajat__> need to refresh my memory on what we concluded
14:20:42 * senrique facepalms
14:20:47 <whoami-rajat__> #link https://etherpad.opendev.org/p/cinder-antelope-meetings
14:21:00 <senrique> thanks!
14:21:55 <whoami-rajat__> #action: ask Christian about other cases where this feature would be useful, because it seems like a large feature just for 1 use case.
14:22:04 <whoami-rajat__> this is one of the action items crohmann ^
14:22:47 <whoami-rajat__> #link https://etherpad.opendev.org/p/antelope-ptg-cinder#L302
14:22:48 <crohmann> in short: we tried with a spec to introduce a new task_status (https://review.opendev.org/c/openstack/cinder-specs/+/818551) but then concluded that this is way too heavy when maintaining backwards compatibility and likely only backups benefit from it for the foreseeable future.
14:22:54 <crohmann> please see: https://review.opendev.org/c/openstack/cinder-specs/+/818551/comments/6ade3ca0_d95e489d
14:23:34 <whoami-rajat__> I will follow up on the discussion there
14:23:36 <crohmann> My question is: Does it make sense to "just" externalize the backup status from the volume status, as this is the actual issue / use-case?
14:24:26 <crohmann> Thanks. We'd gladly start on a new spec, but that only makes sense if you agree that this is the conclusion of the discussion of the previous one.
14:24:34 <whoami-rajat__> it does make sense since we create another temp volume/snapshot from the original volume to back it up
14:24:59 <whoami-rajat__> so we are not exactly doing anything on the main volume other than changing its state to backing-up
14:25:20 <whoami-rajat__> geguileo, also had some ideas to use attachment API for internal attachments
14:25:21 <crohmann> yes, my argument exactly. The backup status does NOT matter to the volume status (attaching, in-use, ...)
14:25:37 <whoami-rajat__> but can't remember exactly how that would benefit this effort
14:26:07 <crohmann> And the recent bug / discussion I referenced in the Etherpad, about a race condition in the restoration of the actual volume state after a backup has happened, only makes this more valid if you ask me
14:26:18 <crohmann> That field is simply over-(ab)used.
14:27:18 <whoami-rajat__> yeah, other operations already affect the volume state
14:27:47 <whoami-rajat__> let's discuss this during midcycle next week where the whole team will be around
14:27:56 <whoami-rajat__> and it's video so easier to have discussions
14:28:11 <rosmaita> ++
14:28:16 <whoami-rajat__> crohmann, can you add a topic here? https://etherpad.opendev.org/p/cinder-antelope-midcycles
14:28:24 <crohmann> on it
14:28:45 <whoami-rajat__> thanks
14:29:24 <whoami-rajat__> this is another benefit of midcycles to followup on ptg discussions!
14:30:30 <whoami-rajat__> ok, guess we can move to next topic then? crohmann
14:30:37 <crohmann> certainly. Thanks.
14:31:16 <whoami-rajat__> great
14:31:19 <whoami-rajat__> #topic Encrypted backups
14:31:22 <whoami-rajat__> crohmann, that's again you
14:32:06 <crohmann> (sorry) - I just wanted to check with you how this spec could move forward.
14:33:08 <whoami-rajat__> I was in the middle of reviewing it when I got hit by other tasks
14:33:15 <whoami-rajat__> I will complete the review this week
14:33:17 <crohmann> After the last operator hour at the PTG Gorka wrote this up. We would love to have encrypted off-site backups (using e.g. S3-drivers).
14:33:38 <whoami-rajat__> we've spec freeze on 16th december but we will try to get that in earlier
14:33:52 <crohmann> Awesome whoami-rajat__! See my comment about using fernet keys to allow key rollovers ...
14:34:03 <rosmaita> crohmann: this looks like another good topic for the PTG
14:34:11 <whoami-rajat__> ack
14:34:19 <whoami-rajat__> yeah, good to followup on this as well
14:34:22 <whoami-rajat__> rosmaita++
14:34:39 <crohmann> So that one goes to the Midcycle list as well?
14:35:12 <rosmaita> yeah, the main thing is for you to explain how you see the key alignment with keystone
14:35:22 <rosmaita> i mean, how that would work exactly
14:35:32 <crohmann> not at all.
14:35:42 <crohmann> There is no relation to Keystone.
14:36:12 <crohmann> I just proposed to do it "like" keystone via Fernet-keys
14:36:44 <rosmaita> sure, but you can explain why that's better than what gorka proposed
14:37:23 <whoami-rajat__> we can review the spec in the meantime, so we don't have to wait for midcycle
14:39:20 <crohmann> I would not store keys inside config files but as dedicated files: https://docs.openstack.org/keystone/zed/admin/fernet-token-faq.html#where-do-i-put-my-key-repository
14:40:46 <crohmann> And then allow for a switch of keys / rollover: https://docs.openstack.org/keystone/zed/admin/fernet-token-faq.html#what-are-the-different-types-of-keys
14:41:56 <crohmann> In short: Allow the operator to introduce a new key for all new data, but allow existing backups to still be restored / decrypted.
14:42:38 <crohmann> And since most operators might have code to deal with keystone fernet keys I thought it would be a nice touch to just reuse the mechanisms and terminology there.
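[Editor's note: the keystone-style rollover crohmann describes above — a repository of numbered keys where the newest seals new data and older keys are kept so existing backups still decrypt — could be sketched roughly as below. This is a stdlib-only toy: the `KeyRepository` class and its HMAC "sealing" are illustrative stand-ins invented here, not the spec's proposed implementation; real code would use actual encryption such as Fernet tokens.]

```python
# Toy sketch of a keystone-style key repository with rollover.
# HMAC "sealing" only demonstrates which key is used when; a real
# implementation would encrypt the data (e.g. with Fernet).
import hashlib
import hmac
import os


class KeyRepository:
    """Numbered keys; the highest index is the primary, used for new data."""

    def __init__(self):
        self.keys = {}  # index -> 32-byte key

    def add_key(self):
        # Rollover: a new key becomes the primary, but old keys stay
        # available so existing backups can still be opened.
        index = max(self.keys, default=-1) + 1
        self.keys[index] = os.urandom(32)
        return index

    @property
    def primary(self):
        return self.keys[max(self.keys)]

    def seal(self, data: bytes) -> bytes:
        # New data always uses the primary key.
        tag = hmac.new(self.primary, data, hashlib.sha256).digest()
        return tag + data

    def open(self, blob: bytes) -> bytes:
        # Existing data: try every key in the repository, newest first.
        tag, data = blob[:32], blob[32:]
        for index in sorted(self.keys, reverse=True):
            candidate = hmac.new(self.keys[index], data, hashlib.sha256).digest()
            if hmac.compare_digest(candidate, tag):
                return data
        raise ValueError("no key in the repository matches this blob")
```

With this shape, a backup sealed before a rollover still opens afterwards — the "new key for new data, old backups still restorable" behaviour described above.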
14:43:10 <rosmaita> i think it's a good idea, just needs to be thought through on our side
14:44:40 <whoami-rajat__> we can follow up with the discussion on the spec and mid cycle
14:44:51 <crohmann> cool. Thanks once again.
14:45:15 <whoami-rajat__> thank you for following up on the topics crohmann
14:45:21 <whoami-rajat__> moving on
14:45:27 <whoami-rajat__> #topic OSC size vs name discussion
14:45:56 <whoami-rajat__> so we had a discussion in last week's meeting regarding making size positional and name optional in openstackclient
14:46:09 <whoami-rajat__> Stephen disagrees with that and has some concerns
14:46:19 <whoami-rajat__> #link https://review.opendev.org/c/openstack/python-openstackclient/+/865377
14:46:31 <whoami-rajat__> 1) it will break existing scripts -- which every major change does
14:46:39 <whoami-rajat__> 2) it is inconsistent with other OSC commands
14:46:55 <whoami-rajat__> during the meeting, he also sent out a mail to ML
14:47:08 <whoami-rajat__> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031284.html
14:47:32 <whoami-rajat__> just wanted to bring it to the attention and we can follow up on this on the patch or ML or both
14:48:24 <whoami-rajat__> that's all from me, moving on to next topic
14:48:26 <whoami-rajat__> #topic Update on review and acceptance of HPE Cinder Driver
14:48:30 <whoami-rajat__> abdi, that's you
14:48:33 <abdi> Yes.
14:48:56 <abdi> Just wanted to get a quick update if/when we get this approved and merged.
14:49:40 <abdi> Curious if rosmaita had a chance to review.
14:49:45 <rosmaita> nope
14:50:03 <whoami-rajat__> we've the driver merge deadline on 20th January
14:50:17 <whoami-rajat__> it was extended from the earlier 6th Jan to give people time to review
14:50:24 <whoami-rajat__> since that's year end holiday time
14:50:39 <whoami-rajat__> do we have CI running and passing on the driver? abdi
14:50:58 <abdi> Ok.  I just want to avoid something coming up last min and missing Antelope as we missed Zed.
14:51:28 <abdi> Yes CI is running and passing.  2 iSCSI errors on CI are consistent and root caused to a race condition in nova/os-brick.
14:51:50 <abdi> That's why it is important to get your review to agree/disagree with the root cause
14:51:57 <rosmaita> abdi: i will take a look at the CI and update my CI check comment
14:52:06 <abdi> Thank you.
14:52:21 <whoami-rajat__> are the errors specific to HPE CI job or does it show for other CIs as well?
14:53:14 <whoami-rajat__> in any case, i will take a look at the patch
14:53:15 <abdi> Not sure if anyone has reported similar issues.  But the bug I filed about the race condition is linked in the CI comment.  Nova/os-brick folks reviewed and agreed
14:53:42 <whoami-rajat__> ack, will check it
14:53:53 <abdi> it could be that my environment exposes the issues.  Thank you for the review.
14:54:06 <abdi> Just trying to get ahead of this in case I need to take action and not miss the merge.
14:54:27 <whoami-rajat__> if CI is working at this time, don't worry about missing the deadline :)
14:54:36 <abdi> ack.
14:54:50 <whoami-rajat__> anything else on this?
14:54:59 <abdi> no.  that's all.
14:55:12 <whoami-rajat__> thanks
14:55:14 <whoami-rajat__> next topic
14:55:16 <whoami-rajat__> #topic Requesting review for backport to Zed
14:55:22 <whoami-rajat__> tobias-urdin, that's you
14:55:35 <whoami-rajat__> #link https://review.opendev.org/c/openstack/cinder/+/864701
14:55:39 <tobias-urdin> yes o/
14:55:58 <tobias-urdin> I would like some review on that backport, I would like to have it even further back if it's accepted
14:56:08 <tobias-urdin> hopefully that will work :)
14:56:24 <whoami-rajat__> will take a look at the backport
14:57:03 <tobias-urdin> thanks
14:57:35 <whoami-rajat__> np
14:57:47 <whoami-rajat__> that's all the topics we had for today
14:57:53 <whoami-rajat__> let's move to open discussion
14:57:56 <whoami-rajat__> #topic open discussion
14:59:54 <whoami-rajat__> looks like we've nothing else to discuss so we can end here
14:59:56 <crohmann> If I may bring up something else .... quota inconsistencies. We see quite a lot of those, especially for backups. We run cinder-backup on 3 nodes. Is there anything we could look at?
15:00:19 <whoami-rajat__> crohmann, geguileo was working on a quota effort
15:00:54 <whoami-rajat__> crohmann, https://review.opendev.org/c/openstack/cinder-specs/+/819693
15:01:01 <crohmann> Neutron had quite a few inconsistencies until Xena, but with the NoLock driver this seems to be gone.
15:01:20 <crohmann> uh, did not see that one ... "Cinder quotas have been a constant pain for operators and cloud users."
15:01:26 <crohmann> That's me - thanks.
15:01:37 <whoami-rajat__> yep, it's been an issue for very long
15:01:46 <crohmann> thanks for the pointer.
15:01:53 <whoami-rajat__> hopefully geguileo will complete the work and we'll have consistent quotas
15:01:57 <whoami-rajat__> anyway, we're out of time
15:02:02 <whoami-rajat__> thanks everyone for joining
15:02:05 <whoami-rajat__> and happy holidays!
15:02:11 <abdi> Thank you.  Good day. happy holidays.
15:02:12 <whoami-rajat__> #endmeeting