14:01:30 #startmeeting cinder
14:01:30 Meeting started Wed May 22 14:01:30 2024 UTC and is due to finish in 60 minutes. The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:30 The meeting name has been set to 'cinder'
14:01:37 hi
14:01:37 #topic roll call
14:01:39 o/
14:01:40 o/
14:01:44 o/
14:01:46 o/
14:01:59 o/
14:01:59 #link https://etherpad.opendev.org/p/cinder-dalmatian-meetings
14:03:38 Hi
14:03:42 O/
14:03:48 hey
14:05:03 Hi all
14:05:11 o/
14:05:43 welcome everyone
14:05:47 o/
14:06:03 #topic announcements
14:06:06 quick announcements
14:06:15 #link https://releases.openstack.org/dalmatian/schedule.html
14:06:31 ^ this is the release schedule for all projects
14:06:53 I've posted our proposed cinder-specific dates here
14:07:00 #link https://review.opendev.org/c/openstack/releases/+/919908
14:07:43 I've set the midterm to be Wednesday, June 12
14:08:12 and for the remaining freezes/checkpoints I tried to follow existing precedent
14:08:27 take a look, let me know if there are any comments or concerns
14:08:36 if not, I'll address Jens' comments and push to get that merged
14:09:44 you can reach out after the meeting too, we can keep moving for now
14:10:02 #topic backports needing attention
14:10:06 rosmaita: that's you
14:10:12 hello
14:10:33 yes, this started out as a begging request for someone to please look at some patches of mine
14:10:35 namely
14:10:59 https://review.opendev.org/c/openstack/cinder/+/910815
14:10:59 and
14:10:59 https://review.opendev.org/c/openstack/cinder/+/892352
14:11:08 but I notice that we have some Antelope patches piling up
14:11:25 so, stable cores, please use our handy dashboard to check them out:
14:11:32 http://tiny.cc/cinder-maintained
14:11:36 nice
14:11:43 that's all from me
14:11:45 #action backport reviews
14:12:10 #topic re-addressing simon's bug
14:12:12 On it. :-)
14:12:18 #link https://bugs.launchpad.net/cinder/+bug/1906286
14:12:24 hi
14:12:38 This is not really a backport that needs attention, but I suspect Brian didn't realize it was a backport and replied as if it were on master: https://review.opendev.org/c/openstack/cinder/+/908689
14:13:00 the customer has deployed using Kolla and has 5 controllers - when the service moves across controllers it messes up
14:13:26 Kolla is addressing the lack of `cluster_name` in their code, but that still doesn't alter the underlying issue
14:13:58 the customer has proposed a small patch in the bug, but this is deeper into core code than I care to delve
14:14:05 can one of the cores have a look?
14:14:23 maybe this needs something to change in Glance as well if A/A is enabled and there is a cluster name
14:14:24 #link https://bugs.launchpad.net/cinder/+bug/1906286/comments/4
14:15:30 I think it would be good to at least post the change to have CI input while we take a closer look
14:15:47 if that is the case I can raise a patch for the simple change they suggest
14:15:53 and see what happens
14:16:05 but the CIs don't address HA, do they?
14:16:47 nor do they address cinder as a glance backend
14:17:03 I don't think so, it would have to be tested manually
14:17:09 we do have a job running cinder as a glance backend but I'm not aware of any HA jobs
14:17:20 I don't have a CI currently - ongoing infra issue
14:17:42 anyone else have the ability to test it? One of the other vendors?
14:18:17 this is a case of optimized volume creation from an image where we are not considering clusters
14:18:36 exactly
14:18:46 and the suggested code change isn't correct, since it isn't doing any filtering
14:19:00 we need to do filtering based on host or cluster depending on the environment
14:19:01 I'm trying to imagine what's the worst that can happen. Where previously Cinder reported an error with "No accessible image volume", it will try to clone from an off-host volume.
14:19:56 that is what is happening, but as there are 5 controllers the backend can end up with 5 image volumes created per image, which is a waste of resources, especially when there are multiple base images
14:20:27 zaitcev, the only issue here is performance, with the optimized path we do a volume clone, which is very fast, whereas in the normal workflow we download the image from Glance and write it into the volume
14:20:42 it also slows down deployment times each time a new image volume needs to be created
14:21:02 in a prod environment these random slowdowns are not acceptable
14:21:51 as I say, it's not just performance but also physical space on the backend
14:22:27 I can take a look at that but don't have an HA env to test it
14:23:10 if you could come up with a better patch to address the cluster environment, that would be good. I'm sure the customer will be willing to test it
14:23:29 I suspect HA is a misnomer. The real issue is whether it's okay to clone a volume that belongs to another host or not.
14:24:04 Some kind of SAN, Ceph, or iSCSI thing has to be there.
14:24:08 when it's a single backend acting as the glance store - yes
14:24:23 this is why their little patch drops the host filtering
14:24:42 but it is caused by A/A controllers for c-vol
14:26:31 I think the proposed fix is not good enough
14:27:22 agreed, hence why the customer didn't raise the patch and came to me as the vendor of the storage they use (and this is NVMe-TCP as the data plane, not that that makes any difference)
14:29:49 if whoami-rajat is going to take a look, that is good for me and I'm done with this topic
14:30:17 ok
14:30:32 #topic image encryption cinder spec
14:30:36 Luzi: that's you
14:30:56 #link https://review.opendev.org/c/openstack/cinder-specs/+/919499
14:31:12 yeah, I wrote a Cinder spec for the image encryption, so you can look at what will actually be affected
14:31:51 and I removed the 'encrypted' container_type from both specs, as eharney mentioned that cinder also uses the 'compressed' container type
14:32:05 and it would be less of a headache when upgrading
14:33:04 excellent
14:33:17 I really need a good look through the spec to check that I have considered all possible interactions of Cinder with encrypted images
14:34:23 #action image encryption spec review
14:35:04 if there are no questions from you to me, then that was all from my side
14:36:05 ok, hopefully you'll get some feedback soon
14:36:41 #topic review requests
14:36:47 there are a few
14:37:05 Hitachi
14:37:08 #link https://review.opendev.org/c/openstack/cinder/+/916724
14:37:16 bring your own keys
14:37:27 #link https://review.opendev.org/c/openstack/cinder-specs/+/914513
14:37:37 and assertion calls
14:37:45 #link https://review.opendev.org/c/openstack/cinder/+/903503
14:39:39 that's all for the agenda
14:39:43 #topic open discussion
14:41:56 I think we need to scan old reviews from 2016 and force-abandon them.
14:42:05 it's impossible to look that far back
14:42:23 I have browser UI issues dealing with it.
14:43:05 that sounds fair, how far back do we cut off?
14:43:50 anything pre-2020?
14:44:33 we were reviewing a patch the other day in the review meeting that was from before 2020
14:45:42 ok, so maybe a per-case basis is better than a general approach
14:46:29 Hi. Seems like I have a bad connection today. Just wanted to give feedback that I've integrated the feedback on the BYOK spec (https://review.opendev.org/c/openstack/cinder-specs/+/914513) and am open for new comments :)
14:49:11 zaitcev: if you want to create a list of patches you think are appropriate, that might help
14:49:18 NotTheEvilOne: no worries
14:49:41 NotTheEvilOne: I included your link in the review topic
14:50:04 Thank you :)
14:51:56 jbernard: you mean candidates for abandonment?
14:53:06 zaitcev: yeah, if they're old /and/ no longer relevant, those would be good candidates
14:53:36 I know the list is quite long and some pruning would probably be good, but I don't want us to throw out old but still-useful ones
14:55:10 ok, thanks everyone!
14:55:14 #endmeeting
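
For context on the host-vs-cluster filtering discussed under the "re-addressing simon's bug" topic above (bug 1906286, optimized volume creation from an image with active/active c-vol), here is a minimal, self-contained Python sketch of the idea that was debated. It is not Cinder's actual implementation; the names ImageVolume, accessible_image_volumes, and _backend_of are hypothetical and only illustrate matching on cluster_name when the service is clustered instead of dropping the filter entirely.

```python
# Illustrative sketch only -- NOT Cinder's real code. It models the idea
# discussed above: when c-vol runs active/active (clustered), candidate image
# volumes should be matched on cluster_name rather than on the individual
# host, so an image volume created by another controller is still considered
# accessible and each controller does not create its own copy.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ImageVolume:
    # Hypothetical stand-in for a volume that caches a Glance image.
    id: str
    host: str                    # e.g. 'controller1@backend#pool'
    cluster_name: Optional[str]  # e.g. 'cluster1@backend' when running A/A


def _backend_of(host_or_cluster: str) -> str:
    # Strip the '#pool' suffix so two references to the same backend compare equal.
    return host_or_cluster.partition('#')[0]


def accessible_image_volumes(candidates: List[ImageVolume],
                             service_host: str,
                             service_cluster: Optional[str]) -> List[ImageVolume]:
    # Clustered (active/active): match on cluster_name so any controller in
    # the cluster can reuse the existing image volume for the fast clone path.
    if service_cluster:
        key = _backend_of(service_cluster)
        return [v for v in candidates
                if v.cluster_name and _backend_of(v.cluster_name) == key]
    # Non-clustered: fall back to the traditional per-host filter.
    key = _backend_of(service_host)
    return [v for v in candidates if _backend_of(v.host) == key]


# Example: with five controllers in one cluster, controller3 still finds the
# image volume that controller1 created, instead of creating a sixth copy.
vols = [ImageVolume('v1', 'controller1@pure#pool', 'cluster1@pure')]
assert accessible_image_volumes(vols, 'controller3@pure#pool', 'cluster1@pure')
```

The point of the sketch is the middle ground raised in the meeting: dropping the host filter altogether (the customer's patch) removes filtering entirely, while keeping a strict per-host match causes one image volume per controller; filtering on the cluster when one is configured avoids both problems.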