14:00:04 <whoami-rajat> #startmeeting cinder
14:00:04 <opendevmeet> Meeting started Wed Dec 13 14:00:04 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 <opendevmeet> The meeting name has been set to 'cinder'
14:00:07 <whoami-rajat> #topic roll call
14:00:28 <gireesh> hi
14:00:31 <sp-bmilanov> hello!
14:00:53 <simondodsley> o/
14:01:18 <msaravan> Hi
14:01:41 <keerthivasansuresh> o/
14:01:41 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-caracal-meetings
14:01:43 <jayaanand> hi
14:02:08 <rosmaita> o/
14:02:14 <toheeb> o/
14:02:39 <eharney> o/
14:03:39 <whoami-rajat> we've a lot on the agenda
14:03:41 <whoami-rajat> let's get started
14:03:44 <whoami-rajat> #topic announcements
14:03:44 <raghavendrat> hi
14:04:17 <whoami-rajat> first, OpenInfra Live: PTG Recap
14:04:27 <whoami-rajat> #link https://www.youtube.com/watch?v=thidlQGX29M
14:04:41 <whoami-rajat> last week we discussed the PTG recap on OpenInfra Live
14:04:51 <whoami-rajat> you can watch the recording for an overall openstack update
14:05:01 <whoami-rajat> also there was an update from the StarlingX community
14:05:39 <whoami-rajat> next, Festival of XS reviews
14:05:45 <whoami-rajat> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/RPXZR5HKVJUYAHD3I6ESN6AMWFQEQMFJ/
14:05:57 <whoami-rajat> we will be conducting festival of XS reviews this Friday
14:06:03 <whoami-rajat> Details:
14:06:04 <whoami-rajat> Date: 15th December
14:06:04 <whoami-rajat> Time: 1400-1600 UTC
14:06:04 <whoami-rajat> Etherpad: https://etherpad.opendev.org/p/cinder-festival-of-reviews
14:06:04 <whoami-rajat> Meeting link: https://meet.google.com/jqg-eigw-rku
14:08:04 <whoami-rajat> next, Upcoming deadlines
14:08:07 <whoami-rajat> Cinder spec freeze - 22nd December
14:08:15 <whoami-rajat> we discussed specs 2 weeks ago
14:08:36 <whoami-rajat> out of the 4, there is only one that needs attention
14:08:55 <whoami-rajat> Encrypted backups
14:09:01 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder-specs/+/862601
14:09:19 <whoami-rajat> i think Jon updated the patch and will continue to work on it
14:09:29 <whoami-rajat> and i don't want to turn the announcement into a topic
14:09:48 <whoami-rajat> any other announcements?
14:10:57 <whoami-rajat> okay, let's go to topics
14:11:04 <whoami-rajat> #topic cinderlib deprecation
14:11:06 <whoami-rajat> rosmaita, that's you
14:11:15 <rosmaita> hello
14:11:26 <rosmaita> announcement was sent to the ML earlier this week:
14:11:34 <rosmaita> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/L6HJW55SEUL4NYQVOESJ22KFDW5SGZAE/
14:11:49 <rosmaita> the governance patch to actually do the deprecation has been posted:
14:11:57 <rosmaita> #link https://review.opendev.org/c/openstack/governance/+/903259
14:12:28 <rosmaita> it would be good if whoami-rajat as PTL, jbernard as release manager, and maybe geguileo as primary contributor could +1 it
14:12:54 <rosmaita> (just so it's clear that the entire project is behind this)
14:12:55 * jungleboyj sneaks in late
14:13:08 <rosmaita> that's all
14:13:40 <simondodsley> is Gorka going to reply to the email about Ember?
14:14:01 <rosmaita> i would hope so
14:14:21 <whoami-rajat> rosmaita, done, thanks for driving the effort
14:14:24 <rosmaita> is there an email about ember, or do you mean reply to my email mentioning that ember community is OK?
14:14:55 <whoami-rajat> i think Gorka already replied regarding Ember and oVirt that they are happy with older cinderlib releases
14:15:05 <rosmaita> just noticed what whoami-rajat said
14:15:09 <simondodsley> i didn't see it - i'll have a look
14:15:30 <rosmaita> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/TCLC7KN4XFULWZ5IYIXVF23TD6GDULLR/
14:15:35 <simondodsley> yep - so he did - no worries
14:16:13 <rosmaita> ok, that's all
14:16:18 <whoami-rajat> great
14:16:24 <whoami-rajat> we can move to the next topic then
14:16:28 <whoami-rajat> #topic train, ussuri EOL
14:16:32 <whoami-rajat> and that's you again rosmaita !
14:16:53 <rosmaita> yeah, the key thing is that we can't delete the branches if there are open patches
14:16:58 <rosmaita> this is what we have:
14:17:07 <rosmaita> #link https://review.opendev.org/dashboard/?title=Train,+Ussuri+Open+Patches&foreach=%28project%3Aopenstack%2Fcinder+OR%0Aproject%3Aopenstack%2Fpython%2Dcinderclient+OR%0Aproject%3Aopenstack%2Fos%2Dbrick+OR%0Aproject%3Aopenstack%2Fcinderlib+OR%0Aproject%3Aopenstack%2Fpython%2Dbrick%2Dcinderclient%2Dext%29%0Astatus%3Aopen&Ussuri=branch%3A%5Estable%2Fussuri&Train=branch%3A%5Estable%2Ftrain
14:17:16 <rosmaita> (hopefully that link works)
14:17:30 <whoami-rajat> it does
14:17:57 <rosmaita> i think back in June when we first discussed EOLing everything, we informally agreed "no more merges"?
14:18:27 <whoami-rajat> is there any benefit to merging those patches, since we are not going to release those branches?
14:18:43 <rosmaita> i am inclined to say "no benefit"
14:18:58 <rosmaita> i can abandon them all with a note
14:19:33 <whoami-rajat> sounds good to me, also most of them have negative votes with unaddressed comments
14:20:03 <whoami-rajat> +1 to abandon them (unless the team thinks otherwise)
14:20:21 <rosmaita> +1 to abandon from me
14:20:48 <rosmaita> jungleboyj: ?
14:20:54 <happystacker> +1 for me too
14:20:54 <rosmaita> (i think train may have been jungleboyj's last release)
14:21:13 <jungleboyj> I am ok with that decision.
14:21:20 <rosmaita> thanks!
14:21:31 <rosmaita> ok, that closes out this topic
14:21:51 <jungleboyj> Welcome.  :-)
14:22:48 <whoami-rajat> thanks rosmaita
14:22:59 <whoami-rajat> moving on to next topic
14:23:02 <whoami-rajat> #topic Glance Image Encryption
14:23:19 <whoami-rajat> this was originally proposed for midcycle but we ran out of time so added it here
14:23:40 <whoami-rajat> I don't think the author is here
14:23:53 <whoami-rajat> there were some questions added to the topic
14:23:56 <whoami-rajat> 1. openstackclient now depends on and pulls in os-brick, because it needs the same encryption/decryption functions that Cinder uses to offer encrypted image upload/download to/from Glance
14:24:27 <whoami-rajat> I need to check but OSC using os-brick seems wrong to me
14:24:56 <rosmaita> i guess that's kind of heavyweight, but the original idea behind the encryption being in os-brick is that all the services that need encryption also use os-brick already
14:25:43 <whoami-rajat> no usage of brick in OSC
14:25:52 <rosmaita> well, you win some and you lose some
14:26:08 <rosmaita> OSC pretty much imports everything else, so why not os-brick, too?
14:26:20 <whoami-rajat> :D
14:26:34 <whoami-rajat> i think you are right, the main idea is to have common code in os-brick to allow all services to consume from it
14:26:41 <rosmaita> i don't think there's a technical reason why the encryption *must* be in os-brick, though
14:27:15 <rosmaita> i'm pretty sure it was just so that there wouldn't have to be a new project with a new library
14:27:56 <rosmaita> i guess the key thing here would be to get feedback from the OSC team
14:29:23 <whoami-rajat> okay, i can check with stephen regarding that
14:29:46 <whoami-rajat> but i don't see any use case of OSC (a client) needing to interact with brick for any operation
14:29:47 <rosmaita> that would be good; if he is really negative, then we may have to reconsider
14:29:56 <whoami-rajat> it should ideally be the service owning the resource
14:30:35 <rosmaita> unless someone plans to add the os-brick-cinderclient-ext functionality to OSC?
14:31:37 <whoami-rajat> maybe, i think the idea of brickclient was testing and debugging but i might be completely wrong on this
14:31:50 <whoami-rajat> but i will check with them
14:32:38 <whoami-rajat> next question, do we need new microversion in Cinder and cinderclient for the new API params?
14:33:28 <whoami-rajat> #link https://review.opendev.org/c/openstack/python-cinderclient/+/902652
14:34:33 <whoami-rajat> i think there will be a microversion change on cinder side
14:34:57 <whoami-rajat> since we will be sending new parameters to the existing volume upload to image API
14:35:16 <whoami-rajat> so cinderclient will follow similar microversion bump
14:35:45 <rosmaita> that sounds correct
14:36:21 <whoami-rajat> thanks for confirming
14:37:06 <whoami-rajat> there is a notice
14:37:06 <whoami-rajat> notice: Josephine Seifert (Luzi) will take over / continue the topic in January
14:37:12 <whoami-rajat> and that's pretty much it for this topic
14:37:24 <whoami-rajat> added our discussion points to the topic so the author can check it later
14:37:34 <whoami-rajat> moving on to next topic
14:37:37 <whoami-rajat> #topic Email to ML regarding increasing size of volume image metadata values had only one response suggesting you should accept my patch
14:37:40 <whoami-rajat> drencrom, that's you
14:37:46 <whoami-rajat> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/B7UET4JKHQU5SHH44KLSKHFBMFN3ZZYV/
14:39:58 <whoami-rajat> maybe he's not around
14:40:18 <simondodsley> strange as he just added this to the etherpad
14:40:19 <whoami-rajat> but i guess he was pointing to Erno's response suggesting that nova and cinder should accept the large metadata fields
14:40:29 <drencrom> sorry
14:40:31 <whoami-rajat> and not restrict it
14:41:07 <drencrom> yes, it had only one response, commenting that we should accept the long metadata patch
14:41:30 <drencrom> and that nova should not decide the metadata length
14:41:36 <rosmaita> so other than Erno, there seem to be no objections
14:42:09 <drencrom> I commented on the fact that different metadata sizes per service could lead to strange behaviour for users
14:42:25 <rosmaita> i think you are correct about that
14:42:41 <simondodsley> seems to me that Nova needs to sort out its issues with this rather than dictating to other projects
14:43:29 <whoami-rajat> I would agree with how the glance team wants it to be, since images are their resource
14:44:20 <drencrom> But it seems that only nova is having performance issues with long values (at least I understand that is the reason for the reduced size on nova)
14:44:55 <rosmaita> i think it's because on list-server-details, they include the image metadata
14:44:56 <simondodsley> so maybe cinder and glance adopt this change and nova has to fix their side - maybe someone raises a bug against them once cinder and glance adopt.
14:45:08 <rosmaita> but in cinder, i think it  is a different call?
14:45:48 <rosmaita> no, i am wrong about that
14:47:05 <whoami-rajat> if we do volume show and the volume is created from an image, it will load the metadata values every time, right?
14:47:52 <simondodsley> what is the actual impact on show by removing the 255 limit?
14:48:44 <rosmaita> well, you may be stuffing a bunch of 65535 char fields into a bunch of volume responses
14:48:52 <whoami-rajat> i think the discussion from nova side was theoretical and there are no performance numbers for it
14:49:25 <drencrom> remember that the 255 limit is just for API changes right now; when the volume is created from an image we get the full 65535-byte values
14:49:41 <rosmaita> drencrom: good point
14:49:51 <simondodsley> so why not go with 65535 and see if it causes problems down the line. If it does we revert back to 255
14:50:43 <rosmaita> that may be the way to go, we can say that we are making the image-vol-metadata update command consistent with what happens when you create a volume from an image
14:51:02 <rosmaita> and if something bad happens, we will blame simondodsley
14:51:05 <rosmaita> :D
14:51:19 <simondodsley> not to get political, but is this a nova dictatorship or are we a democracy?
14:51:31 <drencrom> If you want to go that way I just need a WF+1 on my patch
14:53:04 <whoami-rajat> i think the nova team had good points for blocking the change, but from the cinder team's perspective it doesn't seem that bad for us
14:53:15 <simondodsley> eharney: you had already given a WF+1...
14:53:22 <whoami-rajat> and as simondodsley said, we can always revert it to 255 and backport
14:53:28 <whoami-rajat> if it causes an issue
14:54:08 <whoami-rajat> i can W+1, but i just want to confirm there are no objections from the cinder team regarding this
14:54:48 <drencrom> The WF+1 was removed later after Sean Mooney comments
14:54:51 <drencrom> https://review.opendev.org/c/openstack/cinder/+/868485
14:55:09 <rosmaita> well, it only has 1 +2 right now
14:56:01 <rosmaita> whoami-rajat: i think you should add a comment that this really doesn't change anything for cinder, because you can have the extra-long values already, and nova apparently deals with those and no one has complained
14:56:06 <whoami-rajat> i will review it after the meeting and also add our discussion logs to it
14:56:15 <rosmaita> we're just making our own API consistent
14:56:30 <whoami-rajat> rosmaita, sure, i can do that
14:56:36 <drencrom> Thanks whoami-rajat
14:56:54 <rosmaita> and like simon says, if it causes problems, we can reconsider
14:57:09 <whoami-rajat> great, anything else on this topic?
14:57:18 <drencrom> Not for me
14:57:39 <whoami-rajat> thanks drencrom
14:57:45 <whoami-rajat> we don't have enough time to discuss any other topic
14:57:52 <whoami-rajat> so i will move them to next week
14:58:04 <whoami-rajat> #topic open discussion
14:58:07 <whoami-rajat> 2 minutes for open discussion
14:59:21 <Andrei-1> what about failing CI?
14:59:48 <whoami-rajat> Andrei-1, rosmaita is on the swap space fix and I'm looking into the concurrency thing
14:59:59 <whoami-rajat> before that we need to figure out the s3 backup job failing
15:00:02 <Andrei-1> was there any progress? I had 3 sequential failures on my patch from various jobs all related to connectivity
15:00:05 <whoami-rajat> so that these changes can be merged
15:00:29 <whoami-rajat> Andrei-1, that usually happens when the OOM killer kills some services like mysql
15:00:45 <whoami-rajat> we will update once we have something
15:00:50 <whoami-rajat> okay, we're out of time
15:00:54 <whoami-rajat> thanks everyone for attending
15:00:56 <whoami-rajat> #endmeeting