14:01:30 <jbernard> #startmeeting cinder
14:01:30 <opendevmeet> Meeting started Wed May 22 14:01:30 2024 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:30 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:30 <opendevmeet> The meeting name has been set to 'cinder'
14:01:37 <eharney> hi
14:01:37 <jbernard> #topic roll call
14:01:39 <jbernard> o/
14:01:40 <simondodsley> o/
14:01:44 <Luzi> o/
14:01:46 <akawai> o/
14:01:59 <rosmaita> o/
14:01:59 <jbernard> #link https://etherpad.opendev.org/p/cinder-dalmatian-meetings
14:03:38 <msaravan> Hi
14:03:42 <ccokeke[m]> O/
14:03:48 <whoami-rajat> hey
14:05:03 <bryanneumann> Hi all
14:05:11 <zaitcev> o/
14:05:43 <jbernard> welcome everyone
14:05:47 <jungleboyj> o/
14:06:03 <jbernard> #topic announcements
14:06:06 <jbernard> quick announcements
14:06:15 <jbernard> #link https://releases.openstack.org/dalmatian/schedule.html
14:06:31 <jbernard> ^ this is the release schedule for all projects
14:06:53 <jbernard> ive posted our proposed cinder-specific dates here
14:07:00 <jbernard> #link https://review.opendev.org/c/openstack/releases/+/919908
14:07:43 <jbernard> ive set the midterm to be wednesday june 12
14:08:12 <jbernard> and for the remaining freezes/checkpoints I tried to follow existing precedent
14:08:27 <jbernard> take a look, let me know if there are any comments or concerns
14:08:36 <jbernard> if not, ill address jens' comments and push to get that merged
14:09:44 <jbernard> you can reach out after the meeting too, we can keep moving for now
14:10:02 <jbernard> #topic backports needing attention
14:10:06 <jbernard> rosmaita: that's you
14:10:12 <rosmaita> hello
14:10:33 <rosmaita> yes, this started out as a begging request for someone to please look at some patches of mine
14:10:35 <rosmaita> namely
14:10:59 <rosmaita> https://review.opendev.org/c/openstack/cinder/+/910815
14:10:59 <rosmaita> and
14:10:59 <rosmaita> https://review.opendev.org/c/openstack/cinder/+/892352
14:11:08 <rosmaita> but i notice that we have some antelope patches piling up
14:11:25 <rosmaita> so, stable cores, please use our handy dashboard to check them out:
14:11:32 <rosmaita> http://tiny.cc/cinder-maintained
14:11:36 <jbernard> nice
14:11:43 <rosmaita> that's all from me
14:11:45 <jbernard> #action backport reviews
14:12:10 <jbernard> #topic re-addressing simon's bug
14:12:12 <jungleboyj> On it.  :-)
14:12:18 <jbernard> #link https://bugs.launchpad.net/cinder/+bug/1906286
14:12:24 <simondodsley> hi
14:12:38 <zaitcev> This is not a backport that needs attention really, but I suspect Brian didn't realize it was a backport and replied as if it were on master: https://review.opendev.org/c/openstack/cinder/+/908689
14:13:00 <simondodsley> customer has deployed using kolla and has 5 controllers - when the service moves across controllers it messes up
14:13:26 <simondodsley> kolla are addressing the lack of `cluster_name` in their code, but that still doesn't alter the underlying issue
14:13:58 <simondodsley> the customer has proposed a small patch in the bug but this is deeper into core code than I care to delve
14:14:05 <simondodsley> can one of the cores have a look?
14:14:23 <simondodsley> maybe this needs something to change in glance as well if A/A is enabled and there is a cluster name
14:14:24 <jbernard> #link https://bugs.launchpad.net/cinder/+bug/1906286/comments/4
14:15:30 <jbernard> i think it would be good to at least post the change to have CI input while we take a closer look
14:15:47 <simondodsley> if that is the case i can raise a patch for the simple change they suggest
14:15:53 <simondodsley> and see what happens
14:16:05 <simondodsley> but the CIs don't address HA do they?
14:16:47 <simondodsley> nor do they address cinder as a glance backend
14:17:03 <jbernard> i don't think so, it would have to be tested manually
14:17:09 <whoami-rajat> we do have a job running cinder as glance backend but I'm not aware of any HA jobs
14:17:20 <simondodsley> I don't have a CI currently - ongoing infra issue
14:17:42 <simondodsley> anyone else have the ability to test it? One of the other vendors?
14:18:17 <whoami-rajat> this is a case of optimized volume creation from image where we are not considering clusters
14:18:36 <simondodsley> exactly
14:18:46 <whoami-rajat> and the suggested code change isn't correct since that isn't doing any filtering
14:19:00 <whoami-rajat> we need to do filtering based on host or cluster depending on the environment
14:19:01 <zaitcev> I'm trying to imagine what's the worst that can happen. Where previously Cinder reported an error with "No accessible image volume", it will try to clone from an off-host volume.
14:19:56 <simondodsley> that is what is happening, but as there are 5 controllers the backend can end up with 5 image volumes created per image, which is a waste of resources, especially when there are multiple base images
14:20:27 <whoami-rajat> zaitcev, the only issue here is performance, with the optimized path we do a clone volume which is very fast whereas in the normal workflow we download the image from glance and write into the volume
14:20:42 <simondodsley> it also slows down the deployment times each time a new image needs to be created
14:21:02 <simondodsley> in a prod environment these random slowdowns are not acceptable
14:21:51 <simondodsley> as i say, not just performance but also physical space resources on the backend
14:22:27 <whoami-rajat> i can take a look at that but don't have HA env to test it
14:23:10 <simondodsley> if you could come up with a better patch to address the cluster environment that would be good. I'm sure the customer will be willing to test it
14:23:29 <zaitcev> I suspect HA is a misnomer. The real issue is if it's okay to clone a volume that belongs to another host or not.
14:24:04 <zaitcev> Some kind of SAN, Ceph, or iSCSI thing has to be there.
14:24:08 <simondodsley> when its a single backend acting as the glance store  - yes
14:24:23 <simondodsley> this is why their little patch drops the host filtering
14:24:42 <simondodsley> but it is caused by A/A controllers for c-vol
14:26:31 <geguileo> I think the proposed fix is not good enough
14:27:22 <simondodsley> agreed, hence why the customer didn't raise the patch and came to me as the vendor of the storage they use (and this is NVMe-TCP as the dataplane, not that that makes any difference)
14:29:49 <simondodsley> if whoami-rajat is going to take a look, that is good for me  and I'm done with this topic
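[editor's note: the cluster-aware filtering rajat describes above could be sketched roughly as below; all names are illustrative and do not reflect Cinder's actual internals or the eventual fix]

```python
# Hypothetical sketch of cluster-aware filtering for the optimized
# image-volume clone path. The customer's patch dropped host filtering
# entirely; the point made in the meeting is that filtering should
# instead key on cluster_name when c-vol runs active/active, and fall
# back to host matching otherwise.

def accessible_image_volumes(volumes, service):
    """Return cached image volumes this c-vol service can clone from.

    volumes and service are plain dicts standing in for DB objects.
    In a clustered (A/A) deployment, any controller in the cluster can
    reuse an image volume owned by the cluster; in a non-clustered
    deployment only an exact host match is safe.
    """
    if service.get("cluster_name"):
        return [v for v in volumes
                if v.get("cluster_name") == service["cluster_name"]]
    return [v for v in volumes if v.get("host") == service["host"]]
```

With host-only filtering, each of the 5 controllers would see no accessible image volume and create its own copy; matching on cluster_name lets them share one.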
14:30:17 <jbernard> ok
14:30:32 <jbernard> #topic image encryption cinder spec
14:30:36 <jbernard> Luzi: that's you
14:30:56 <jbernard> #link https://review.opendev.org/c/openstack/cinder-specs/+/919499
14:31:12 <Luzi> yeah I wrote a Cinder spec for the Image encryption, so you can look at what will actually be affected
14:31:51 <Luzi> and I removed the container_type encrypted from both specs, as eharney mentioned that cinder also uses the compressed container type
14:32:05 <Luzi> and it would be less headache when upgrading
14:33:04 <jbernard> excellent
14:33:17 <Luzi> I really need a good look through the spec to check, that I have considered all possible interactions of Cinder with encrypted images
14:34:23 <jbernard> #action image encryption spec review
14:35:04 <Luzi> if there are no questions for me, then that was all from my side
14:36:05 <jbernard> ok, hopefully you'll get some feedback soon
14:36:41 <jbernard> #topic review requests
14:36:47 <jbernard> there are a few
14:37:05 <jbernard> hitachi
14:37:08 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/916724
14:37:16 <jbernard> bring your own keys
14:37:27 <jbernard> #link https://review.opendev.org/c/openstack/cinder-specs/+/914513
14:37:37 <jbernard> and assertion calls
14:37:45 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/903503
14:39:39 <jbernard> that's all for the agenda
14:39:43 <jbernard> #topic open discussion
14:41:56 <zaitcev> I think we need to scan old reviews from 2016 and force-abandon them.
14:42:05 <zaitcev> it's impossible to look that far back
14:42:23 <zaitcev> I have browser UI issues dealing with it.
14:43:05 <jbernard> that sounds fair, how far back do we cut off?
14:43:50 <jbernard> anything pre 2020?
14:44:33 <eharney> we were reviewing a patch the other day in the review meeting that was from before 2020
14:45:42 <jbernard> ok, so maybe on a per-case basis is better than a general approach
14:46:29 <NotTheEvilOne> Hi. Seems like I have a bad connection today. Just wanted to give feedback that I've integrated the feedback on the BYOK spec (https://review.opendev.org/c/openstack/cinder-specs/+/914513) and am open for new ones :)
14:49:11 <jbernard> zaitcev: if you want to create a list of patches you think are appropriate, that might help
14:49:18 <jbernard> NotTheEvilOne: no worries
14:49:41 <jbernard> NotTheEvilOne: i included your link in the review topic
14:50:04 <NotTheEvilOne> Thank you :)
14:51:56 <zaitcev> jbernard: you mean candidates for abandonment?
14:53:06 <jbernard> zaitcev: yeah, if they're old /and/ no longer relevant, those would be good candidates
14:53:36 <jbernard> i know the list is quite long, some pruning would probably be good, but i dont want us to throw out old but still-useful ones
14:55:10 <jbernard> ok, thanks everyone!
14:55:14 <jbernard> #endmeeting