14:00:50 <jbernard> #startmeeting cinder
14:00:50 <opendevmeet> Meeting started Wed Aug  7 14:00:50 2024 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:50 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:50 <opendevmeet> The meeting name has been set to 'cinder'
14:00:54 <jbernard> #topic roll call
14:00:56 <jbernard> o/
14:01:34 <simondodsley> o/
14:01:34 <yuval> o/
14:01:48 <akawai> o/
14:01:49 <whoami-rajat> Hi
14:01:56 <rosmaita> o/
14:02:13 <msaravan> hi
14:02:20 <jbernard> #link https://etherpad.opendev.org/p/cinder-dalmatian-meetings
14:04:20 <jungleboyj> o/
14:05:45 <jbernard> hello all, thanks for coming
14:05:50 <raghavendrat> hi
14:05:55 <jbernard> #topic announcements
14:06:09 <jbernard> https://releases.openstack.org/dalmatian/schedule.html
14:06:50 <jbernard> we are fast approaching the third milestone (D-3) in a couple of weeks
14:07:02 <jbernard> our midcycle ptg is next week!
14:07:24 <jbernard> #action jbernard to update the ptg etherpad
14:08:06 <whoami-rajat> just a note, I will not be around the PTG time next week
14:08:21 <whoami-rajat> have a conflict with a family event during the evening
14:08:34 <jbernard> whoami-rajat: ok, let us know if you need something raised or discussed
14:08:40 <whoami-rajat> sure thanks jbernard
14:09:53 <jbernard> aside from the ptg, the non-client library freeze will happen on Aug 22
14:10:29 <jbernard> and stable branches for those will be cut a bit before the rest of our repositories
14:10:37 <rosmaita> we need to prioritize review of os-brick patches
14:10:48 <jbernard> this means os-brick
14:10:54 <jbernard> rosmaita: exactly
14:11:23 <jbernard> in other words, if you have review cycles, os-brick needs some attention to make sure we get what we need
14:12:07 <jbernard> the freeze for everything else is the week following
14:12:11 <jbernard> so there is much to do :)
14:13:03 <jbernard> in other news, we've seen some movement in review requests from last week
14:13:25 <jbernard> i tried to get to as many as I could and several were crossed off the list
14:13:39 <yuval> Thank you!
14:13:40 <jbernard> thanks to everyone that reviewed
14:14:49 <whoami-rajat> do we have a list of os-brick patches that need attention? or maybe create an etherpad to track them?
14:14:58 <whoami-rajat> the open patches list seems broad
14:15:01 <whoami-rajat> #link https://review.opendev.org/q/project:openstack/os-brick+status:open
14:15:07 <jbernard> whoami-rajat: not that I know of, i think that's a great idea
14:15:08 <whoami-rajat> but a lot are in merge conflict
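The merge-conflict point above can be checked programmatically: the Gerrit REST API behind the review.opendev.org query returns a JSON change list that can include a mergeability flag. A minimal sketch (this assumes the server is configured to compute `mergeable`, e.g. via the `o=MERGEABLE` query option; the payload below is a hand-made stand-in, not real review data):

```python
import json

def open_mergeable(changes_json):
    """Filter a Gerrit change-list response down to changes that are
    not in merge conflict.

    Gerrit REST responses start with an XSSI guard line ()]}') that
    must be stripped before parsing; the 'mergeable' field is only
    present when the server computes it, so a missing field is
    treated here as "not known to conflict".
    """
    body = changes_json.removeprefix(")]}'\n")
    changes = json.loads(body)
    return [c["subject"] for c in changes if c.get("mergeable", True)]

# Sample payload shaped like the response to
# /changes/?q=project:openstack/os-brick+status:open&o=MERGEABLE
sample = ")]}'\n" + json.dumps([
    {"subject": "Fix multipath detach race", "mergeable": True},
    {"subject": "Stale refactor", "mergeable": False},
])
print(open_mergeable(sample))  # ['Fix multipath detach race']
```

A list built this way could seed the review-priority etherpad mentioned later in the meeting.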
14:15:33 <jbernard> yuval: i saw your brick patch, will review today
14:15:46 <yuval> wait for it - I still have some issues
14:15:50 <yuval> but thank you!
14:16:04 <jbernard> #action jbernard to create etherpad for os-brick patch priority
14:16:11 <jbernard> yuval: sure
14:16:27 <jbernard> I'll send a link on the list later today
14:17:41 <jbernard> that's all I have for announcements; things are winding down toward the release, so be mindful of the schedule as we move through august
14:18:16 <jbernard> as always, if you feel something is being overlooked, please reach out
14:18:38 <jbernard> #topic scheduler-related unit test hardening
14:18:41 <jbernard> rosmaita: ^
14:18:57 <rosmaita> thanks!
14:19:55 <rosmaita> i was running unit tests in a weird environment recently and hit a bunch (around 284 failures) consistently
14:20:26 <rosmaita> i traced it down to an issue with stevedore not locating the entry points in the cinder setup.cfg where we list the scheduler filters
14:21:19 <rosmaita> which is not nice, but should not affect unit tests
14:21:20 <rosmaita> since the unit tests are supposed to be isolated to test the code they are designed to test,
14:21:43 <rosmaita> anyway, i put up a series of small patches to fix this, so that the tests are isolated correctly
14:21:55 <rosmaita> they will be easy to review
14:22:01 <rosmaita> (did i say they are small)
14:22:40 <rosmaita> that's basically it
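The isolation rosmaita describes can be illustrated with a stdlib-only sketch: the loader below is a hypothetical stand-in for the stevedore-backed entry-point lookup (not the actual cinder code), and the test pins its result with `mock.patch` so it passes even in an environment where entry-point discovery is broken:

```python
import unittest
from unittest import mock

def load_scheduler_filters():
    """Hypothetical stand-in for a stevedore-backed loader that reads
    scheduler filter entry points from setup.cfg; in a broken
    environment this lookup fails even though the code under test
    never actually needs it."""
    raise RuntimeError("entry points not discoverable here")

class FilterLoadingTest(unittest.TestCase):
    @mock.patch(__name__ + ".load_scheduler_filters",
                return_value=["AvailabilityZoneFilter", "CapacityFilter"])
    def test_isolated_from_entry_points(self, _mock_loader):
        # The test pins the loader's result instead of exercising the
        # real entry-point machinery, so it is deterministic and
        # environment-independent -- the property the patch series
        # restores for the real scheduler tests.
        self.assertEqual(load_scheduler_filters(),
                         ["AvailabilityZoneFilter", "CapacityFilter"])
```

Without the patch decorator the call raises, which mirrors the ~284 environment-dependent failures described above.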
14:23:06 <jbernard> rosmaita: are they under a specific topic?
14:23:32 <jbernard> nevermind, i see the relation chain
14:23:43 <rosmaita> just checking, yes, they are
14:23:49 <rosmaita> https://review.opendev.org/q/topic:%22scheduler-tests%22
14:24:14 <whoami-rajat> we have the festival of XS reviews next week, but it's always good to review them even before that
14:24:32 <rosmaita> :D
14:26:11 <yuval> open discussion?
14:27:01 <jbernard> #topic open discussion!
14:27:36 <whoami-rajat> is anyone from storpool team around?
14:30:39 <whoami-rajat> looks like not
14:30:53 <whoami-rajat> I had a PTL question for jbernard and probably rosmaita
14:31:08 <whoami-rajat> I wanted to backport this os-brick patch
14:31:10 <whoami-rajat> #link https://review.opendev.org/c/openstack/os-brick/+/920516
14:31:18 <whoami-rajat> but it adds 2 new config options with suitable defaults
14:31:42 <whoami-rajat> We haven't allowed it in the past for Cinder but since os-brick was a library, I'm not sure
14:32:05 <whoami-rajat> s/was/is
14:32:44 <jungleboyj> Hmmm, that is an interesting one.
14:32:47 <jbernard> i'm not personally opposed, i wonder what the rules and precedent are.
14:33:50 <jungleboyj> I am trying to remember what the rules are if they are config changes.
14:34:18 * jbernard is digging through old notes
14:34:48 <rosmaita> i guess we can ask elod, but i think worrying about that is a relic from the old days before oslo.config
14:35:08 <rosmaita> i mean, they have sensible defaults, so it's not like an operator needs to do anything
14:35:17 <rosmaita> and having them there allows for tuning
14:35:24 <jungleboyj> Right.  And I think that adding a config to fix a bug with sensible defaults should be ok.
14:35:32 <rosmaita> otherwise, we would have to remove the options and hard-code the values
14:35:42 <rosmaita> which would be pretty dumb, in my opinion
14:35:48 <jbernard> i agree with all of that
14:36:08 <jungleboyj> Reading the rules it says 'no incompatible config file changes'.
14:36:27 <rosmaita> thanks, jungleboyj
14:36:29 <jbernard> #action whoami-rajat to run his patch by elod
14:36:39 <jbernard> sounds like it will be fine though
14:36:43 <rosmaita> sounds like we are OK here
14:36:45 <jungleboyj> Let's do that to be safe, but I think this should be ok.
14:37:24 <whoami-rajat> the initial patch did hardcode values, but that only covered a small number of multipath-related issues; making it tunable helps address broader issues like udev rules taking time to load
14:38:01 <jungleboyj> Yeah, makes sense to have that be configurable.
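What "sensible defaults" means for the backport question above: new options an operator never has to touch. A hypothetical cinder.conf fragment (option names invented purely for illustration; the real ones are defined in the os-brick patch linked earlier):

```ini
[DEFAULT]
# Hypothetical option names -- shown commented out because the shipped
# defaults already apply. An untouched config file behaves exactly as
# it did before the backport, which is why a new option with a working
# default is not an "incompatible config file change".
#wait_device_attempts = 3
#wait_device_interval = 1
```

This is the distinction the stable-policy discussion turns on: the backport adds tunability without requiring any operator action.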
14:38:15 <whoami-rajat> thanks jungleboyj rosmaita and jbernard for your opinions, i will propose a backport and we can run it by elod
14:38:23 <jungleboyj> ++
14:39:07 <jbernard> anyone else? yuval - you mentioned you have issues?
14:39:17 <jbernard> we have a few more minutes if needed
14:40:13 <yuval> I want to ask
14:40:28 <yuval> if I run my setup with ACTIVE/ACTIVE
14:40:46 <yuval> then the same setup - I want to disable the ACTIVE/ACTIVE
14:41:12 <yuval> just removing the conf from the cinder.conf doesn't seem to do the trick
14:42:31 <jbernard> can you elaborate, what is the setup, what behaviour are you expecting, and what are you actually seeing?
14:44:02 <jbernard> also, is this a general issue, or one with your driver?  this might be better in general irc or email form, i guess it depends on how much info is needed to be exchanged - if it's a bug though - we will want to capture it
14:44:32 <yuval> I don't think it's a bug, I just don't fully understand how it works
14:44:42 <yuval> I am setting the "cluster" config option for cinder
14:45:10 <yuval> then I see my cinder backend is running as a part of a cluster
14:45:14 <yuval> with a single backend
14:45:29 <jbernard> someone correct me if i am wrong, but to move from A/A to A/P, you would need to modify your deployment to include something like haproxy, no?
14:45:33 <yuval> I want to do some checks - afterwards run the same checks but not in a cluster
14:46:47 <yuval> I am following this doc: https://docs.openstack.org/cinder/latest/contributor/high_availability.html
14:47:02 <yuval> but have not seen how to reverse it
14:47:32 <yuval> other than destroy my setup and re-create without the "cluster" config
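For context on the setup being discussed: active/active mode is driven by the `cluster` option in cinder.conf, as described in the high-availability doc linked above. A minimal sketch:

```ini
[DEFAULT]
# A non-empty 'cluster' value makes cinder-volume services register as
# members of that cluster (active/active). Removing or commenting it
# out makes newly started services register standalone again -- but
# resources created while clustered keep the cluster value in their
# DB records, which is the reversal problem raised here.
cluster = mycluster
```

The cluster name shown is a placeholder.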
14:47:41 <jbernard> personally, i don't know - i would need to try it in devstack and see for myself
14:48:10 <simondodsley> i think the issue might be that volumes created under the cluster mode have the incorrect hostname in the DB after going back to non-cluster, so they will effectively be unmanageable
14:49:27 <yuval> simondodsley: thanks I will try to view the DB directly
14:50:40 <simondodsley> i guess if that is the issue, then there needs to be a process, after disabling cluster, to clean the DB with cinder-manage? Or we could just say that reversion is not supported
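If the stale-hostname theory holds, the existing cinder-manage host-rename helper might be a starting point for that cleanup. A hedged sketch (hostnames are placeholders; whether this alone is enough to revert a clustered deployment is exactly the open question in this discussion):

```console
# Rewrite the host column on volume records after a topology change;
# this command is commonly used after a backend rename.
$ cinder-manage volume update_host \
    --currenthost oldhost@backend --newhost newhost@backend
```

Any cluster-specific DB state beyond the host column would still need separate handling.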
14:51:22 <yuval> yes I wonder if we have something like that
14:53:10 <jbernard> yuval: probably not currently
14:53:16 <yuval> got it
14:54:16 <jbernard> ok, last call
14:55:06 <jbernard> thanks everyone!
14:55:09 <jbernard> #endmeeting