14:00:01 <whoami-rajat> #startmeeting cinder
14:00:01 <opendevmeet> Meeting started Wed Jul 13 14:00:01 2022 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:01 <opendevmeet> The meeting name has been set to 'cinder'
14:00:05 <whoami-rajat> #topic roll call
14:00:10 <eharney> hi
14:00:17 <amalashenko> hi
14:00:57 <simondodsley> hi
14:01:03 <jungleboyj> o/
14:01:14 <aneeeshp13> hi
14:01:26 <nahimsouza[m]> o/
14:01:34 <rosmaita> o/
14:02:02 <whoami-rajat> #link https://etherpad.openstack.org/p/cinder-zed-meetings
14:02:21 <caiquemello[m]> o/
14:02:59 <luizsantos[m]> o/
14:03:24 <whoami-rajat> hello
14:04:00 <geguileo> hi! o/
14:04:14 <whoami-rajat> we've the usual people around
14:04:18 <whoami-rajat> let's get started
14:04:25 <whoami-rajat> #topic announcements
14:04:37 <whoami-rajat> first, Deliverables check and library release
14:04:54 <whoami-rajat> so all deliverable files exist for cinder projects (I've checked) so we don't need to do anything here
14:05:10 <whoami-rajat> forgot to mention, it's from the release team mail
14:05:12 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029465.html
14:05:17 <senrique__> hi
14:05:27 <enriquetaso> hello
14:05:48 <whoami-rajat> the release team has proposed some patches for cinderclient, brick and brick-cinderclient-ext for Zed milestone 2 release
14:06:03 <whoami-rajat> I think we've some fixes for os-brick that should be released
14:06:11 <whoami-rajat> not sure about other 2 projects
14:06:14 <geguileo> whoami-rajat: I added a topic later because I'd like to have 4 patches in the release
14:06:26 <geguileo> (for os-brick)
14:06:37 <whoami-rajat> geguileo, ack, yeah i was referring to them
14:06:48 <whoami-rajat> not sure about other projects, if we've something that needs to be released
14:06:53 <geguileo> yeah, but I was slower writing it  ;-)
14:07:02 <whoami-rajat> I will check after the meeting and add a comment to the patches
14:07:08 <whoami-rajat> :D
14:07:10 <simondodsley> what about your NVMe patches for os-brick?
14:07:48 <whoami-rajat> i think we can target them for the non-client lib release of Zed
14:08:20 <whoami-rajat> due to our current priority on drivers, and the long list of NVMe patches
14:08:44 <geguileo> simondodsley: I would love to get those in, but those are huge, so I'm not sure we'll be able to get them
14:08:44 <simondodsley> however, in that long list of drivers there are at least 3 NVMe drivers
14:09:29 <whoami-rajat> yep, those drivers will be part of the Zed release and we are planning to get all the NVMe brick patches into the same release
14:09:30 <aneeeshp13> I just want to mention about https://review.opendev.org/c/openstack/os-brick/+/836062. This is a needed patch for the Fungible driver.
14:10:26 <rosmaita> well, releases are cheap ... we can get the M-2 one out as soon as the ceph detach fix is in, and then do another shortly after with nvme changes
14:10:47 <rosmaita> which it sounds like we will need to do with all the nvme drivers proposed
14:11:12 <whoami-rajat> aneeeshp13, ack, i think we've them covered in the driver review list (that we will discuss later in the drivers topic) but thanks for mentioning
14:11:26 <aneeeshp13> okay thank you
14:12:02 <jbernard> re releases, i have stable and z-2 patches pending, just let me know if i need to hold off, or update the hash to include something particular
14:12:39 <whoami-rajat> jbernard, i think we will need to hold the os-brick z-2 patch until we get geguileo's changes in
14:12:50 <jbernard> yup
14:13:15 <geguileo> yeah, since I broke the RBD connector and we backported the broken patch  :-(
14:13:56 <whoami-rajat> but it's not released so we're kind of good? ^
14:14:53 <whoami-rajat> so moving on
14:14:57 <whoami-rajat> next, M-2 week
14:15:03 <whoami-rajat> this week is milestone 2 release
14:15:14 <whoami-rajat> #link https://releases.openstack.org/zed/schedule.html
14:15:45 <whoami-rajat> as per the schedule, we've the new volume driver and new target driver deadlines this week
14:15:55 <whoami-rajat> but i will discuss that in a later topic
14:16:04 <whoami-rajat> just wanted to mention that as an announcement
14:16:32 <whoami-rajat> I don't think we've any new target driver proposed this cycle?
14:17:14 <whoami-rajat> I remember geguileo working on fixing the nvme target driver in cinder but that already exists so not new
14:17:59 <geguileo> yeah, I have a bunch of bug fixes and improvements so that anybody can test NVMe-oF without real hardware
14:18:39 <rosmaita> geguileo: ++
14:18:41 <whoami-rajat> sounds great
14:19:15 <whoami-rajat> so let's move to topics
14:19:18 <whoami-rajat> #topic Requesting 4 os-brick patches to be included in the release
14:19:21 <whoami-rajat> geguileo, that's you
14:19:28 <geguileo> thanks
14:19:42 <geguileo> I wanted to ask for reviews on 4 os-brick patches that I think are kind of important
14:19:48 <geguileo> and I'd like to see them in the release
14:20:03 <geguileo> the first one is the one that fixes the RBD connector
14:20:11 <geguileo> it's horribly broken for encrypted volumes
14:20:21 <geguileo> #link https://review.opendev.org/c/openstack/os-brick/+/849542
14:20:32 <geguileo> it's a small one
14:21:13 <geguileo> the next one is to make extending LUKSv2 volumes work
14:21:15 <eharney> not sure if you saw my question there, but i convinced myself it's probably fine
14:21:17 <geguileo> because they currently don't work
14:21:33 <geguileo> eharney: I didn't, was fighting a different battle, will look after the meeting
14:21:46 <enriquetaso> oh i thought the question was closed, my bad
14:21:58 <geguileo> https://review.opendev.org/c/openstack/os-brick/+/836059
14:22:06 <geguileo> #link https://review.opendev.org/c/openstack/os-brick/+/836059
14:22:20 <rosmaita> eharney: i think your question was more about hardening and not about the patch being incorrect?
14:22:22 <geguileo> there is a Nova and Tempest test that depend on that one
14:22:57 <eharney> rosmaita: it was about whether we had actually considered all the types of objects that could be passed into that method
14:23:09 <rosmaita> yeah, that's what i meant
14:23:14 <geguileo> eharney: oh, I see your question now...  I don't think we return anything that's not a str
14:23:52 <geguileo> eharney: I'll have a quick look at the different connectors and reply
14:24:54 <geguileo> the 3rd patch is a simple one, it's about removing a traceback from the os-brick logs every time the get_connector_properties method is called
14:25:04 <geguileo> it happens if the nvme command is not installed on the system
14:25:06 <geguileo> #link https://review.opendev.org/c/openstack/os-brick/+/836055
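    (Illustrative only -- roughly the kind of change being described, not the
    actual patch; the helper shown here is hypothetical.)

        from oslo_concurrency import processutils as putils
        from oslo_log import log as logging

        LOG = logging.getLogger(__name__)

        def _nvme_cli_version():
            """Probe for nvme-cli without spamming the logs."""
            try:
                out, _err = putils.execute('nvme', 'version')
                return out
            except (putils.ProcessExecutionError, OSError):
                # Previously this path logged a full traceback on every
                # get_connector_properties() call when nvme-cli was missing;
                # a quiet debug message is enough.
                LOG.debug("nvme command not found; NVMe-oF connector "
                          "properties will be skipped")
                return None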
14:25:11 <eharney> re: 836059 i'm a little unsure if we should be adding any functionality to the cryptsetup encryptor, but i'll leave a comment for later discussion
14:26:11 <geguileo> eharney: ok, we can continue the conversation there
14:26:27 <geguileo> I admit that I wrote that code 4 months ago, so I'll have to refresh my memory  ;-)
14:27:00 <geguileo> The last patch is related to the lock_path configuration option from oslo_concurrency
14:27:10 <geguileo> os-brick now uses file locks for some critical sections
14:27:31 <geguileo> and it uses the lock_path from the service that uses os-brick
14:27:48 <geguileo> but there are cases where multiple services need to use the same location for os-brick locks
14:27:53 <geguileo> for example HCI systems
14:28:09 <geguileo> Glance using Cinder as the backend and running on the same host
14:28:23 <geguileo> one solution is to configure all the services with the same lock_path
14:28:40 <geguileo> but then you have ALL services using the same directory for their locks, which is not "clean"
14:29:04 <geguileo> the patch adds support for os-brick to have its own lock_path independent of the one from the service (though defaulting to it)
14:29:29 <geguileo> #link https://review.opendev.org/c/openstack/os-brick/+/849324
14:29:48 <geguileo> this effort has patches on other projects
14:29:56 <rosmaita> quick question about that one ... you explained clearly in the commit message why the fix requires consuming services to call os_brick.setup ... what happens if they forget? I think we don't get the current behavior: lock_path will be None, and in that case the oslo.concurrency code will barf an exception?
14:29:57 <geguileo> nova, cinder, glance, glance-store, devstack
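    (A rough sketch of the wiring being described -- assuming the [os_brick]
    lock_path option and the os_brick.setup() call mentioned above; names are
    as proposed in the patch under review, not a final API.)

        # In a consuming service (e.g. cinder-volume), after its own
        # configuration has been parsed:
        from oslo_config import cfg
        import os_brick

        CONF = cfg.CONF

        def start_service():
            CONF([], project='cinder')   # parse the service config first
            # Let os-brick register and read its own [os_brick] lock_path,
            # which falls back to the service's [oslo_concurrency] lock_path
            # when it isn't set explicitly.
            os_brick.setup(CONF)

        # On an HCI node, each service's config could then point os-brick at
        # one shared directory while keeping its own oslo_concurrency locks:
        #   [os_brick]
        #   lock_path = /var/lib/openstack/os-brick-locks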
14:30:20 <geguileo> rosmaita: oh, good point!
14:30:22 <rosmaita> (or maybe we discuss later in cinder channel after bug squad meeting)
14:31:09 <geguileo> rosmaita: I'll see if the best way to do it is to fix that scenario in os-brick
14:31:10 <enriquetaso> sure
14:31:32 <geguileo> or if we need to merge the other patches first to ensure services use it...
14:31:47 <rosmaita> yeah, we have a chicken-egg problem
14:32:29 <geguileo> rosmaita: yea, but I think it should be ok to fail if services don't call the method
14:33:10 <geguileo> I'll have a look to see if there's a nicer way of doing it
14:33:27 <rosmaita> yeah, i was hoping maybe
14:33:35 <rosmaita> let's talk after bug squad
14:33:37 <geguileo> I may be able to do it automagically
14:33:44 <geguileo> ok
14:33:53 <geguileo> well, those are the 4 patches I'd like to see merged
14:34:03 <geguileo> and thanks for those questions now!!
14:34:37 <whoami-rajat> thanks geguileo for all the fixes and also giving a brief overview so it becomes easier to review
14:35:04 <rosmaita> yes, geguileo is the master commit-message-writer
14:35:11 <enriquetaso> ++
14:35:21 <geguileo> lol
14:35:41 <whoami-rajat> geguileo's commit messages should be the examples in the doc "How to write a commit message"
14:36:12 <rosmaita> the doc could just be a gerrit search for geguileo's patches
14:36:12 <whoami-rajat> so coming back to the topics, let's move on to next topic
14:36:30 <whoami-rajat> that should also work great
14:36:43 <whoami-rajat> #topic Extend driver merge deadline
14:37:04 <whoami-rajat> With the limited amount of time and the large number of drivers proposed this cycle, I would like to extend the deadline to R-10
14:37:19 <whoami-rajat> actually rosmaita and I have discussed this
14:37:37 <whoami-rajat> I will send out a mail to ML, but before that i wanted to bring this to the cinder team
14:37:55 <whoami-rajat> as per the current schedule, other projects also have driver merge deadlines after M-2
14:38:06 <whoami-rajat> for example, R-10 is also Manila's new driver merge deadline, so it shouldn't be too controversial to reschedule the cinder deadline
14:38:33 <whoami-rajat> and 2 weeks time should be sufficient to get most (if not all) of the drivers in a working condition and merged
14:39:09 <simondodsley> My concern is that the NVMe drivers can't merge without the os-brick patches they depend on
14:39:46 <whoami-rajat> simondodsley, oh, I might have misunderstood the concern before, so we need those os-brick changes released for NVMe drivers to work ...
14:40:20 <simondodsley> yes - the patches are related to multipathing and a complete refactor of the connector
14:40:22 <rosmaita> we may have to adjust our ordering of reviews, i guess
14:40:44 <whoami-rajat> I guess we can do what rosmaita suggested, which is to do another brick release shortly
14:41:00 <whoami-rajat> yeah, need to prioritize the brick patches
14:41:01 <rosmaita> also, powerflex nvme/tcp driver does not seem to have a patch up
14:41:07 <simondodsley> not sure if the other NVMe drivers are aware of the patches geguileo has created
14:41:09 <rosmaita> so we can eliminate that one
14:41:14 <rosmaita> (maybe)
14:41:27 <simondodsley> Fungible, Pure and the other DEMC driver
14:41:34 <rosmaita> and is anyone here from fungible?  i can't find a third-party CI for them
14:41:42 <simondodsley> they should all depend on the new patches
14:41:56 <whoami-rajat> yeah if the patches aren't proposed yet then no point in prioritizing them after the extended deadline
14:42:19 <whoami-rajat> for the powerflex nvme driver i mean ^
14:42:22 <rosmaita> what i mean is that i think a pre-requisite for getting an extension is that the ci should at least be partially working
14:42:45 <aneeeshp13> rosmaita I represent Fungible. I am actively working on the CI. It should be up by end of this week hopefully.
14:42:45 <whoami-rajat> aneeeshp13, i think you're from fungible?
14:42:52 <whoami-rajat> ok
14:43:01 <amalashenko> DEMC here, we updated our CI for NFS driver
14:43:48 <rosmaita> amalashenko: ok, thanks
14:44:25 <rosmaita> aneeeshp13: i think we should not prioritize fungible driver for reviews until after the ci is available
14:44:34 <rosmaita> because it cannot merge without the ci working
14:44:54 <rosmaita> sucks, i know, but we have too many drivers and too few reviewers
14:45:33 <whoami-rajat> with that note ^ let's move to the next topic which is the main topic for drivers and we can discuss more things in that
14:45:42 <whoami-rajat> #topic Driver review
14:45:52 <rosmaita> ok, sorry
14:46:01 <whoami-rajat> (and we also have less time)
14:46:14 <aneeeshp13> rosmaita okay. np. Will try to bring the CI ASAP.
14:46:19 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-zed-new-drivers
14:46:28 <whoami-rajat> Brian and I have worked on the etherpad to include all the drivers that are proposed for Zed
14:46:40 <whoami-rajat> Since there are 8 drivers, it is not feasible for only a few cores to review all the drivers
14:46:46 <whoami-rajat> Hence each core should adopt at least 1 driver for review
14:46:50 <whoami-rajat> There is a reviewers section below the driver patches, cores interested in the driver review can add their names
14:47:05 <rosmaita> before we do that, let's quickly look at the ordering
14:47:14 <whoami-rajat> (just pasting notes from the meeting etherpad to quickly summarize it)
14:47:16 <whoami-rajat> ok
14:47:29 <sfernand> could you paste the link here
14:47:39 <sfernand> I will help with reviews
14:47:47 <rosmaita> #link https://etherpad.opendev.org/p/cinder-zed-new-drivers
14:48:18 <rosmaita> anyone here from yadro?
14:48:59 <simondodsley> when did it become a requirement for 3rd party CIs to respond to os-brick patches?
14:49:12 <rosmaita> i thought it was always a requirement?
14:49:25 <rosmaita> maybe i just made it up, though
14:49:41 <simondodsley> never been flagged before that i know of - i can do it, but just wondering
14:49:56 <simondodsley> it does make sense
14:50:03 <rosmaita> cinder tests use a released os-brick, so if we don't test against brick directly, we don't know that something is broken until it's too late
14:50:21 <whoami-rajat> i don't think a lot of drivers do it right now but looks like a good thing to do
14:50:23 <simondodsley> yes - i'm seeing that now with my NVMe driver
14:50:31 <eharney> IIRC, it was a bit of an issue recently that no FC drivers were reporting on os-brick
14:51:06 <rosmaita> if it's a new (unstated) requirement, we can make it a nice-to-have for cinder driver approval in Zed, and change it for 2023.1
14:51:19 <simondodsley> without it, none of the nvme patches for os-brick are being tested, and so the current nvme drivers are testing against bad code
14:51:42 <whoami-rajat> rosmaita, sounds good, would be good to add that in the driver review checklist doc
14:52:02 <rosmaita> ok, i will post an update
14:52:11 <whoami-rajat> great thanks
14:52:17 <rosmaita> and also update the wiki
14:52:30 <sfernand> yep I believe it makes sense to test os-brick
14:52:30 <sfernand> but I wasn't aware it was a hard requirement, my bad
14:52:30 <sfernand> NetApp CI used to have an os-brick job on Zuul v2 but we haven't ported it yet
14:52:40 <whoami-rajat> yeah, some vendors take their reference from the wiki ...
14:53:14 <whoami-rajat> sfernand, i don't think it's a hard requirement as of now but it will be good if vendors do it
14:53:42 <sfernand> whoami-rajat: yes got it!
14:53:54 <whoami-rajat> great
14:54:37 <whoami-rajat> so quickly discussing the main point, there are a lot of drivers and few reviewers, so wanted everyone to take one driver for reviewing
14:54:42 <whoami-rajat> each driver requires 2 cores
14:55:07 <whoami-rajat> I've added a reviewers section for each driver in the etherpad
14:55:23 <whoami-rajat> please add your names if you're interested in looking at a particular one
14:55:26 <rosmaita> i will sign up for yadro, just because i feel bad about having to postpone them in yoga
14:55:46 <whoami-rajat> rosmaita, cool, I've already added my name for it so we've 2 for that
14:55:55 <whoami-rajat> DataCore is merged so no need to worry about that
14:56:28 <rosmaita> who said they are from DEMC?  do you know the status of the powerstore nvme/tcp CI ?
14:56:52 <whoami-rajat> eharney has signed up for nfs, great!
14:57:05 <amalashenko> I am from DEMC, and so is Oleg
14:57:11 <olegnest> rosmaita: working on it, will be done within this week
14:57:17 <whoami-rajat> so people can add names after the meeting as well, just wanted to mention it here
14:57:36 <rosmaita> olegnest: thanks
14:58:18 <jungleboyj> I put my name on a couple.
14:58:24 <whoami-rajat> thanks
14:58:35 <whoami-rajat> we've 2 minutes, i guess we can quickly discuss geguileo's topic
14:58:41 <rosmaita> since we are running out of time, whoami-rajat and i will re-order the patches after the meeting so that they are in priority order
14:58:42 <whoami-rajat> #topic Enable Ceph CI jobs
14:58:50 <whoami-rajat> rosmaita ++
14:58:51 <enriquetaso> i think the powerstore CI probably isn't working due to https://bugs.launchpad.net/cinder/+bug/1981068
14:59:08 <geguileo> well, just wanted to mention that I broke the RBD connector in os-brick
14:59:23 <geguileo> and we didn't notice because we all forgot to check the ceph job  :-(
14:59:45 <geguileo> so I wanted to bring up the topic we discussed a couple of weeks back
14:59:54 <geguileo> enabling ceph jobs on os-brick and cinder
15:00:04 <whoami-rajat> yep, it was discussed for the nfs and ceph jobs, to make them voting
15:00:25 <rosmaita> that os-brick job looks pretty unstable
15:00:35 <olegnest> sorry, I'm new here. Can I ask one question, or should I do it in another channel?
15:00:39 <rosmaita> maybe we could only run it on changes to the rbd connector or something?
15:00:56 <whoami-rajat> olegnest, you can ask in the #openstack-cinder channel anytime
15:01:00 <rosmaita> because i don't think we can make it voting and ever merge anything
15:01:05 <olegnest> thanks
15:01:56 <geguileo> rosmaita: can we make it always run and only vote if there are changes to its connector code?
15:02:13 <rosmaita> we can specify irrelevant-files for that job
15:02:23 <geguileo> rosmaita: but that's different...
15:02:24 <rosmaita> and i think there's a relevant_files also
15:02:37 <geguileo> that means it only runs when that file changes, which is different
15:03:01 <geguileo> because we may be breaking the job when changing something that is not in the Ceph job
15:03:08 <geguileo> sorry, not in the rbd connector code
15:03:35 <geguileo> so if the job doesn't even run then reviewers don't even have the possibility of seeing that it's failing
15:03:47 <rosmaita> well, we could have 2 jobs, one nonvoting on everything, and the other voting only on rbd changes
15:04:17 <geguileo> or we can make the non-voting one not run when there are changes to the RBD connector
15:04:23 <geguileo> that way only 1 of them runs, right?
15:04:26 <geguileo> is that possible?
15:04:31 <rosmaita> not sure
15:04:41 <geguileo> rosmaita: you don't like any of my ideas  :-P
15:04:42 <rosmaita> but maybe
15:04:50 <rosmaita> lol
15:04:54 <whoami-rajat> not sure if that is possible but rosmaita's idea looks clean
15:04:58 <geguileo> ok, I think we should at least have RBD job vote for its code changes
15:05:11 <rosmaita> i agree, it's too easy to miss otherwise
15:05:42 <whoami-rajat> okay, we're way over our scheduled time, let's continue the discussion in the cinder channel after the BS meeting
15:05:45 <rosmaita> ok, i will take an action item to try to get something together, if geguileo can give me what files should be relevant for voting
15:05:51 <whoami-rajat> thanks everyone!
15:05:54 <whoami-rajat> #endmeeting