14:00:14 <rosmaita> #startmeeting cinder
14:00:14 <opendevmeet> Meeting started Wed Jul 14 14:00:14 2021 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:14 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:14 <opendevmeet> The meeting name has been set to 'cinder'
14:00:20 <rosmaita> #topic roll call
14:00:52 <eharney> hi
14:00:55 <rosmaita> am i netsplit?
14:00:56 <walshh_> hi
14:00:59 <rosmaita> oh good
14:01:00 <whoami-rajat> Hi
14:01:04 <enriquetaso> Hi
14:01:05 <fabiooliveira> Hi
14:01:08 <rosmaita> was worried for a minute there
14:01:13 <simondodsley> Hi
14:01:14 <Mr_Freezeex> Hi
14:01:30 <TusharTgite> hi
14:01:38 <rosmaita> #link https://etherpad.opendev.org/p/cinder-xena-meetings
14:02:15 <rosmaita> hi everyone, i am going to get started because i have like 500 announcements and there's a bunch of topics on the agenda
14:02:17 <rosmaita> #topic announcements
14:02:22 <rosmaita> ok, first item
14:02:52 <rosmaita> TusharTgite put a request for help near the end of our video meeting 2 weeks ago, and i am afraid it got lost
14:02:59 <rosmaita> and i forgot to mention it last week
14:03:07 <rosmaita> #link https://wiki.openstack.org/wiki/Reset_State_Robustification
14:03:25 <rosmaita> he would like some feedback, so please take a look
14:03:36 <rosmaita> next item is a reminder
14:03:44 <rosmaita> Cinder Festival of XS Reviews this friday!
14:04:01 <rosmaita> location as usual is https://meetpad.opendev.org/cinder-festival-of-reviews
14:04:25 <rosmaita> next item, the next OpenStack release will be named "Yoga"
14:04:37 <rosmaita> and so the Yoga virtual PTG is 18-22 October 2021
14:04:46 <rosmaita> and now we have a planning etherpad
14:04:54 <rosmaita> #link https://etherpad.opendev.org/p/yoga-ptg-cinder-planning
14:05:14 <rosmaita> and, teams must sign up for time slots within the next week
14:05:25 <rosmaita> i sent a proposal to the ML yesterday which you may have seen
14:05:34 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023624.html
14:06:04 <rosmaita> basically, the proposal is for us to do what we have done for the past virtual PTGs
14:06:28 <rosmaita> the downside is that it is rough on APAC time zones
14:06:43 <rosmaita> but i really don't know what to do about that
14:06:48 <rosmaita> i am open to suggestions
14:07:16 <rosmaita> as i said in the email, maybe we have an APAC-friendly post-PTG session, if there is interest
14:07:33 <rosmaita> the problem is that interest is usually shown by attending cinder weekly meetings
14:07:48 <rosmaita> and if the weekly meeting time sucks for you, you won't be here to express an opinion
14:08:06 <whoami-rajat> i think it's fine for India, not sure about China, Japan and others...
14:08:36 <rosmaita> yeah, for shanghai time it's sub-optimal, but probably do-able
14:08:48 <rosmaita> but for Japan, kind of not very good
14:09:19 <rosmaita> the source of the problem, as i see it, is that we are not all physically in the same TZ for the week
14:09:42 <rosmaita> it is really hard to accommodate Seattle to Tokyo
14:10:12 <rosmaita> anyway, my point is that i don't want to be exclusionary about time zones, but i also don't have any ideas
14:10:32 <rosmaita> and we definitely need to accommodate the core team
14:10:38 <rosmaita> so, please make sure you mark your calendar for the PTG
14:10:43 <rosmaita> and also register
14:10:47 <rosmaita> next item
14:11:01 <rosmaita> dropping the lower-constraints job and file (discussed last week)
14:11:08 <rosmaita> #link https://review.opendev.org/q/topic:%22drop-l-c%22+(status:open%20OR%20status:merged)
14:11:17 <whoami-rajat> one way is to add APAC topics to the beginning and they can follow the rest of PTG in recordings if it gets too late?
14:11:39 <rosmaita> whoami-rajat: yeah, i am happy to do that, i'll add a note on the etherpad
14:11:55 <rosmaita> #action rosmaita ^^
14:12:28 <rosmaita> ok, please review ^^ so we can get lower-constraints out of our collective lives and easily make requirements changes
14:12:42 <rosmaita> next item
14:12:52 <rosmaita> this is week R-12 - xena milestone-2
14:13:07 <rosmaita> two major things here
14:13:15 <rosmaita> 1 - new driver merge deadline
14:13:34 <rosmaita> one new driver: Add Cinder NFS driver for Dell EMC PowerStore
14:13:41 <rosmaita> https://review.opendev.org/c/openstack/cinder/+/797608
14:14:05 <rosmaita> i need help reviewing this one, especially from driver maintainers who already have NFS-based drivers
14:14:24 <rosmaita> so please do PowerStore a solid and take a look and leave comments
14:14:45 <rosmaita> i will be honest, the driver as it stands looks a bit raw
14:15:02 <eharney> i've got this one on my list to follow up on
14:15:10 <rosmaita> in that i'm not sure that it takes into account all the implications of multiattach
14:15:24 <rosmaita> but i will also be honest and say i don't know jack about NFS, really
14:15:38 <rosmaita> so thanks eharney, definitely want you to look closely
14:15:54 <rosmaita> but also, other people ... anyone from netapp here today?
14:16:35 <rosmaita> guess not
14:17:12 <rosmaita> ok the other big thing is that i want to get v2 of the block storage api removed this week and cut releases of everything affected
14:17:30 <rosmaita> and since these are my patches, i cannot review
14:17:38 <rosmaita> well, i can, but it doesn't do any good
14:18:08 <rosmaita> i sent an appeal about cinder to the ML earlier this week
14:18:08 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023606.html
14:18:28 <rosmaita> and then this morning, i realized that we had discussed this earlier and people signed up to review patches
14:18:41 <rosmaita> #link https://wiki.openstack.org/wiki/CinderXenaMidCycleSummary#Block_Storage_API_v2_Removal_Update
14:18:49 <rosmaita> and here are the patches
14:18:57 <rosmaita> #link https://review.opendev.org/q/topic:%22drop-v2%22+(status:open%20OR%20status:merged)
14:19:34 <enriquetaso> i'm not on that list but i'll review them asap
14:20:10 <rosmaita> enriquetaso: please start with the cinderclient patches
14:20:22 <enriquetaso> sure
14:20:47 <rosmaita> i am having problems getting the second cinderclient patch passing CI after rebasing it on the other patch
14:20:54 <rosmaita> the "other patch" being shell removal
14:21:10 <rosmaita> shell removal: https://review.opendev.org/c/openstack/python-cinderclient/+/791834
14:21:21 <rosmaita> classes removal: https://review.opendev.org/c/openstack/python-cinderclient/+/792959
14:22:02 <rosmaita> i have no idea what is going on ... i am consistently getting tempest-full-py failures in a tempest scenario test that tries to log in to horizon
14:22:15 <rosmaita> i am pretty sure my patch is not affecting that
14:22:29 <rosmaita> but my patch seems to be cursed
14:23:00 <rosmaita> the other thing is that i am getting consistent fixture timeouts in the functional-py36 test ...
14:23:16 <whoami-rajat> i also got the python-cinderclient-functional-py36 failure today on my patch...
14:23:30 <rosmaita> yes, and then you passed on the next go-round
14:23:32 <eharney> rosmaita: er... i think horizon imports cinderclient.v2
14:23:42 <rosmaita> whereas mine failed again
14:23:58 <rosmaita> eharney: i thought not, but i better look more closely
14:24:08 <eharney> it does, i just looked
14:24:13 <rosmaita> shoot
14:24:33 <rosmaita> ok, i guess that's why we have CI tests
14:25:02 <rosmaita> eharney: shoot me a link and i will put up an emergency horizon patch or something
14:25:13 <eharney> https://opendev.org/openstack/horizon/src/branch/master/openstack_dashboard/api/cinder.py#L31
14:25:16 <eharney> i'll add a review comment
14:25:19 <rosmaita> ty
14:26:06 <rosmaita> ok, if anyone has an idea about the fixtures problem (only seen in py36) i will be happy to hear it
14:26:23 <rosmaita> final announcement:
14:26:33 <rosmaita> xena midcycle-2 coming up at R-9
14:26:41 <rosmaita> Wednesday 4 August 2021, 1400-1600 UTC
14:26:56 <rosmaita> so make sure your calendar is blocked off for fun & excitement
14:27:04 <rosmaita> and, add topics to the etherpad:
14:27:13 <rosmaita> #link https://etherpad.opendev.org/p/cinder-xena-mid-cycles
14:27:51 <rosmaita> that's all for announcements, other than that i will be away for a week starting next wednesday
14:28:02 <rosmaita> whoami-rajat has volunteered to run the meeting
14:28:29 <rosmaita> ok, on to real topics
14:28:43 <rosmaita> #topic rbd: snapshot mirroring for ceph backup driver
14:28:47 <rosmaita> Mr_Freezeex: that's you
14:28:56 <Mr_Freezeex> Hi, so this is a followup discussion on the addition of snapshot replication discussed in last weekly meetings
14:28:57 <rosmaita> #link https://review.opendev.org/c/openstack/cinder/+/784956
14:29:41 <Mr_Freezeex> The snapshot mirroring was denied and I am here to present why I think it's nice to have that in cinder-backup
14:29:59 <Mr_Freezeex> So first thing is that you can't enable snapshot replication on the whole pool like with journaling, you have to do that per image. So this either needs to be done directly in cinder-backup or with an external automation.
14:30:48 <Mr_Freezeex> Now if we compare the two modes: it's true that the writes are synchronously written to the journal, but they are not synchronously replayed to the other cluster.
14:31:12 <Mr_Freezeex> With journaling the replication is consistent per write, and with snapshots it's consistent per snapshot. But for backups, I don't see a real advantage to per-write consistency.
14:31:49 <Mr_Freezeex> So IMO both alternatives are equally safe for backups, and the snapshot option well exceeds the performance of journaling replication in terms of client performance (so backup speed in our case) and replication speed.
14:32:40 <Mr_Freezeex> If you have any questions/concerns, feel free!
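(For context on the point above: a minimal sketch of what per-image, snapshot-based mirroring looks like through the python-rbd bindings, as opposed to pool-level mirroring which assumes journaling. The pool name, image name, and the exact mirror_image_enable signature with a snapshot-mode constant are assumptions based on Octopus-era bindings, not taken from the patch under review.)

    import rados
    import rbd

    # Pool-level mirroring ("rbd mirror pool enable <pool> pool") only covers
    # images with the journaling feature; snapshot-based mirroring has to be
    # switched on image by image, which is why cinder-backup (or some external
    # automation) would need to do it.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('backups')          # hypothetical backup pool
    try:
        # Put the pool in per-image mirroring mode.
        rbd.RBD().mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_IMAGE)
        image = rbd.Image(ioctx, 'volume-1234.backup.5678')  # hypothetical name
        try:
            # Enable snapshot-based (not journal-based) mirroring on this image.
            image.mirror_image_enable(rbd.RBD_MIRROR_IMAGE_MODE_SNAPSHOT)
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()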
14:33:10 <geguileo> Mr_Freezeex: I haven't tried the feature, and I'm looking for the doc where it says that it cannot do whole pool configuration for snapshot replication...
14:33:50 <geguileo> do you have a link, location where the docs says it?
14:34:00 <geguileo> or is it missing from the docs?
14:34:25 <Mr_Freezeex> Well it's not in the docs, but the rbd command to enable replication on the whole pool assumes journaling
14:34:36 <Mr_Freezeex> there is no option to choose either snapshot or journal
14:34:46 <Mr_Freezeex> And the default in ceph is journaling so...
14:35:33 <geguileo> Mr_Freezeex: that mirroring mode may be the default, but I believe you can change it
14:35:41 <geguileo> right?
14:36:10 <Mr_Freezeex> Hmm I don't think so but I am maybe wrong
14:36:33 <Mr_Freezeex> at least it does not appear to be documented anywhere...
14:37:07 <geguileo> my bad
14:37:15 <geguileo> pool (default): When configured in pool mode, all images in the pool with the journaling feature enabled are mirrored.
14:37:37 <Mr_Freezeex> Ah indeed
14:38:00 <geguileo> oooooh, wait, but you can configure the mirror-snapshot to work on a per-pool level
14:38:26 <geguileo> which should solve the problem, I would think
14:38:55 <geguileo> jbernard: you around to chime in on this? ^
14:38:57 <Mr_Freezeex> Oh ok, where is that documented?
14:39:20 <geguileo> Mirror-snapshots can also be automatically created on a periodic basis if mirror-snapshot schedules are defined. The mirror-snapshot can be scheduled globally, per-pool, or per-image levels.
14:39:27 <geguileo> https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#create-image-mirror-snapshots
14:39:44 <geguileo> not 100% sure that is all that's needed
14:40:00 <Mr_Freezeex> Ah yes ok but this only creates snapshots on images that have the snapshot replication mode enabled
14:40:10 <geguileo> ignore it: When using snapshot-based mirroring, mirror-snapshots will need to be created whenever it is desired to mirror the changed contents of the RBD image
14:40:15 <geguileo> yup
14:40:28 <geguileo> ok, so in that light, it makes sense to support it in the backups
14:41:29 <geguileo> I find it odd that there is no way to do that by default
14:42:12 <Mr_Freezeex> Speaking of the mirror snapshots on the whole pool: I assumed this would be the go-to for backup in my patch, but I will probably add a quick addition to do a mirror snapshot after every data transfer so you would not even have to enable that!
14:42:48 <Mr_Freezeex> Yeah but I think they want to deprecate enabling replication on the whole pool, or at least it is not really recommended
14:43:56 <rosmaita> i'm trying to remember what the objection to using this for backups was last week, it was something to do with an RPO gap
14:43:57 <geguileo> Mr_Freezeex: what do you mean do a mirror snapshot after every data transfer?
14:44:26 <Mr_Freezeex> geguileo: When the backup is finished do a mirror snapshot
14:44:45 <geguileo> rosmaita: my main complaint was that this should be done like the journal-based one, at the Ceph end, and cinder shouldn't need to know
14:44:57 <Mr_Freezeex> Basically even an empty snapshot has a (small) cost on the rbd-mirror daemon so that's an easy optimisation
14:45:12 <geguileo> Mr_Freezeex: that is what's necessary, right? otherwise it won't be mirrored
14:47:18 <Mr_Freezeex> geguileo: Yes, you have to create mirror snapshots. One option is to do that with the mgr and let ceph do it, but doing it from cinder-backup would be nice, because otherwise even backups that haven't changed for years would have those "useless" snapshots being created.
14:48:17 <geguileo> Mr_Freezeex: so if I understand correctly, we could not do it in Cinder and let storage admin set the scheduler in the backend, right?
14:49:23 <geguileo> and if I understand correctly even the number of snapshots per image can be configured in the cluster, so it can be changed to 1
14:49:30 <Mr_Freezeex> geguileo: yes, but that's an easy thing to do in cinder (like ~10 lines). You just have to call the function to create a mirror snapshot and let ceph do its thing in the backend
14:49:44 <Mr_Freezeex> Yes but this is managed internally by ceph
14:49:56 <geguileo> yeah, but that was the complaint: if it can be done in the backend, we don't want to add that on our end
14:50:13 <rosmaita> ok, so the discussion last week was that it made sense to configure journaling vs. snapshot mirroring in cinder ... for backups, do we want to have no option, and just do snapshot mirroring if it's available?
14:50:15 <geguileo> if we create the snapshots, then we are responsible for the snapshots
14:50:48 <geguileo> rosmaita: my complaint about backups is that afaik an admin can do it all on the Ceph side
14:50:57 <geguileo> rosmaita: like they are currently doing for journaling
14:51:12 <geguileo> s/journaling/journal-mirroring
14:51:32 <rosmaita> ok, this has until M-3 to merge, so we should continue this discussion later
14:51:33 <geguileo> so if that can be done as well for snapshot-based mirroring
14:51:38 <geguileo> why add code to cinder just for that?
14:51:46 <geguileo> ok
14:51:53 <rosmaita> i am all for not adding unnecessary code to cinder!
14:52:04 <geguileo> Mr_Freezeex: we can continue the discussion on the cinder channel after the meeting
14:52:13 <Mr_Freezeex> yes sure!
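(A rough sketch of the ~10-line addition Mr_Freezeex describes: create a mirror snapshot once a backup's data transfer finishes, instead of relying on a pool-wide mgr schedule. The helper name and the mirror_image_create_snapshot call are assumptions based on Octopus-era python-rbd bindings; geguileo's alternative is a schedule defined on the Ceph side, e.g. "rbd mirror snapshot schedule add", with no cinder code at all.)

    import rbd

    def _mirror_snapshot_after_backup(ioctx, backup_image_name):
        """Hypothetical hook run after a backup's data transfer completes.

        Creating the mirror snapshot here means a backup image that never
        changes again does not keep accumulating "useless" scheduled mirror
        snapshots, which is the optimisation discussed above.
        """
        image = rbd.Image(ioctx, backup_image_name)
        try:
            # Snapshot-based mirroring must already be enabled on the image.
            image.mirror_image_create_snapshot()
        finally:
            image.close()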
14:52:27 <rosmaita> ok, geguileo you are up next
14:52:38 <rosmaita> #topic meaning of "available" status for a volume?
14:52:48 <geguileo> rosmaita: thanks
14:53:06 <geguileo> so I'm working on fixing a race condition on the new attachment API, specifically on the delete operation
14:53:18 <geguileo> and while looking at the code around it, I saw some inconsistencies
14:53:23 <geguileo> some with how we worked before
14:53:26 <geguileo> some on their own
14:53:50 <geguileo> for example, if a remove export fails when deleting the attachment, then we don't fail the operation
14:54:05 <geguileo> though we set the attach_status of the attachment to error_detaching
14:54:27 <geguileo> since we don't fail, the volume will go into available and detached (if there are no additional attachments on that volume)
14:54:46 <geguileo> so we end up with a detach operation that according to the old attachment API should have failed
14:54:49 <geguileo> but now succeeds
14:55:30 <geguileo> that's not a big deal, since on delete we now ensure we call the remove export, just something that we have changed, and sounds a bit weird
14:55:52 <geguileo> the bad part is that now we have a volume that has an attachment, yet is in available and detached state
14:56:13 <whoami-rajat> geguileo, I'm not clear with state "available and detached", i assume you mean status=available and attach_status=detached but you said attach_status is set as error_detaching
14:56:34 <geguileo> whoami-rajat: attach_status was for the attachment itself
14:56:53 <geguileo> whoami-rajat: whereas the available and detached is like you described, which is for the volume
14:57:07 <whoami-rajat> ok got it
14:57:25 <geguileo> which is kind of weird, as one would assume that an available volume has no attachment records
14:57:51 <geguileo> so there are 2 ways to look at the available status:
14:58:12 <geguileo> 1- available => has no attachments, on any state (this was the old way)
14:58:31 <geguileo> 2- available => there is no-one using the volume (this is the new way)
14:58:57 <geguileo> so that is my first concern (of many around this code)
14:59:12 <geguileo> if we want to keep current behavior we need to document it
14:59:25 <rosmaita> (btw ... horizon now meets in their own channel, so we can continue this discussion right here)
14:59:42 <geguileo> great!
14:59:52 <geguileo> though I feel like I put everyone to sleep
15:00:03 <rosmaita> we are trying to digest what you are saying
15:00:10 <geguileo> ok, thanks
15:00:40 <whoami-rajat> geguileo, since you mentioned we handle the delete export part during volume delete, doesn't it make sense to delete the attachment record rather than setting the status=error_detaching?
15:00:58 <geguileo> whoami-rajat: yes, it would make sense to me
15:00:59 <whoami-rajat> i know it doesn't sound good but keeping an attachment record which is of no use doesn't sound good either
15:01:12 <geguileo> oh, no, it sounds a lot better than what we have now!!
15:01:24 <whoami-rajat> great
15:01:29 <geguileo> I'm ok with doing that
15:01:58 <geguileo> what do others think?
15:02:53 <whoami-rajat> regarding your question, i prefer 1) since there might be complications with a stale attachment record during multiattach or even with normal volumes (that i can't think of right now)
15:03:54 <rosmaita> geguileo: so when you say "new way", you mean this is the implication of the current code, not that you are proposing that it is good
15:04:26 <geguileo> whoami-rajat: the only time we set error_detaching on the attachments is that one, so if we delete it, we don't have to worry about that state anymore on the attachments themselves
15:04:45 <geguileo> rosmaita: correct, this is the new attachment API current behavior
15:04:50 <geguileo> not something I want to do
15:06:04 <rosmaita> ok, got it
15:06:18 <whoami-rajat> geguileo, great, that solves all the problem then
15:06:20 <rosmaita> i agree with whoami-rajat that 1 is preferable
15:07:01 <geguileo> ok, then when we have an error_attaching case we have to change the status of the volume and the attachment_status
15:07:52 <geguileo> currently there are at least 2 cases where a volume's status will go into available detached even when having an attachment with error_attaching attachment_status
15:08:32 <geguileo> and the "fun" part is that error_attaching will only happen if we have mapped and exported the volume
15:09:41 <whoami-rajat> lot of cleanup needed!
15:09:43 <geguileo> if the export succeeds and initialize fails => available detached
15:10:11 <hemna> so in that case unexport needs to be called?
15:10:45 <geguileo> if the export and initialize succeed and attach_volume fails => available  detached and the attachment.attach_status=error_attaching
15:10:58 <whoami-rajat> IMO those attachment records only make sense if we could reuse them or reset state to get the volume attached somehow with the same attachment ID (or maybe for tracking purposes)
15:11:43 <geguileo> whoami-rajat: attachment records also help openstack keep the information where it should be
15:11:59 <geguileo> so we can terminate connections even when nova doesn't exist
15:12:17 <geguileo> hemna: in the new API we don't consider having a volume left as exported "a problem"
15:12:40 <geguileo> we call it on delete before the actual driver.delete call
15:13:00 <geguileo> in my opinion the error_attaching must be taken into consideration
15:13:09 <whoami-rajat> geguileo, i was referring to error attachment records
15:13:10 <hemna> well, except that the backend may have a volume exported when it shouldn't be at that point
15:13:25 <geguileo> because the volume will be exported and mapped
15:13:28 <geguileo> whoami-rajat: oh, those
15:13:47 <geguileo> whoami-rajat: with the change we agreed we would only have attachment_error record
15:14:01 <geguileo> I mean the only error kind
15:14:20 <geguileo> and that is necessary to state that whatever we have there should not be used
15:14:21 <geguileo> imo
15:14:59 <geguileo> and we shouldn't delete an attachment record on our own on an "attach" (which is an update) request
15:15:25 <whoami-rajat> ack, makes sense
15:15:53 <geguileo> so I would like to reach an agreement on:
15:16:21 <geguileo> - If a volume has been exported & mapped, but the call to attach_volume fails, what should the volume's status and attach_status be?
15:17:43 <whoami-rajat> geguileo, should we consider cases of multiattach and normal volumes separately?
15:17:51 <geguileo> - If a volume is exported but mapping fails (it could be actually mapped), should we change the attachment.attach_status to attaching_error?
15:18:23 <geguileo> whoami-rajat: yes, I'm talking only about single attachment
15:18:38 <geguileo> because for multi-attach there is a priority on the attachment status
15:18:55 <geguileo> I mean on the attachments' attachment_status
15:19:48 <hemna> fwiw if initialize_connection fails remove_export is already called
15:21:04 <geguileo> yup
15:21:25 <geguileo> thanks
15:22:08 <geguileo> whoami-rajat: for multi attach: attachments priority is: attached > attaching > detaching > reserved
15:22:44 <geguileo> so the volume's status and attachment_status are set according to the highest-priority status present among those attachments
15:23:02 <geguileo> if there is one reserved and one attached => volume goes into in-use attached
15:24:01 <rosmaita> ok, that makes sense
15:24:02 <geguileo> btw, if the initialize fails, the remove export could also fail because the mapping may have succeeded in the backend
15:24:15 <geguileo> rosmaita: that is the current behavior, but ignores the error_attaching case
15:24:30 <geguileo> and that is a volume that is exported & mapped!!
15:24:42 <whoami-rajat> geguileo, ok, and these states have higher priority than error attachment records, eg: reserved > error_attaching
15:24:52 <geguileo> so an exported & mapped volume stays as available detached
15:25:04 <geguileo> whoami-rajat: error_* are literally ignored
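(Not cinder's actual code, just an illustration of the priority rule geguileo describes for multi-attach: the highest-priority non-error attachment status drives the volume state, and error_* records are skipped entirely.)

    # Priority order from the discussion: attached > attaching > detaching > reserved.
    ATTACH_PRIORITY = ('attached', 'attaching', 'detaching', 'reserved')

    def winning_attach_status(attachment_statuses):
        """Return the attachment status that drives the volume's state, if any."""
        for status in ATTACH_PRIORITY:
            if status in attachment_statuses:
                return status
        return None  # no usable attachment: volume ends up available/detached

    # geguileo's example: one reserved + one attached => volume goes in-use/attached.
    assert winning_attach_status(['reserved', 'attached']) == 'attached'
    # Only an error_attaching record left => nothing wins => available/detached,
    # even though the volume may still be exported & mapped on the backend.
    assert winning_attach_status(['error_attaching']) is None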
15:25:22 <whoami-rajat> geguileo, i don't see any cleanup being done if driver's attach volume call fails
15:25:32 <geguileo> whoami-rajat: there is none
15:25:35 <whoami-rajat> geguileo, oh, that's good i guess
15:25:42 <whoami-rajat> hmm
15:25:43 <geguileo> but even if there were, it could fail
15:26:00 <whoami-rajat> :/
15:26:50 <geguileo> so the question is, how do we treat the attaching_error on the attachment?
15:27:01 <geguileo> towards the volume's status and attachment_status
15:28:05 <geguileo> well, I seem to have saturated everyone with so much stuff
15:28:23 <geguileo> so I'll take the win of agreement on the removal of the error_detaching status
15:28:41 <geguileo> and leave the error_attaching questions for another year
15:29:06 <geguileo> or the mid-cycle, or PTG, or whatever
15:29:09 <whoami-rajat> geguileo, maybe we can discuss it in upcoming virtual PTG, good to add a topic there
15:29:17 <whoami-rajat> virtual mid-cycle i mean
15:29:22 <geguileo> will add it
15:29:31 <rosmaita> geguileo: can you give a quick summary of the error_detaching agreement?
15:29:41 <geguileo> sure
15:29:44 <rosmaita> and yes, good topic for virtual midcycle
15:30:13 <geguileo> the error_detaching state only happens when the remove_export fails
15:30:40 <geguileo> and in that case the REST API response is that the attachment delete has successfully been completed
15:31:07 <geguileo> it's not a problem that the volume hasn't been unexported because we'll call it on the volume deletion
15:31:34 <geguileo> so we are going to be consistent with the REST API response, and ignore remove_export errors altogether
15:31:52 <geguileo> so we try, if we fail, we postpone until deletion or until another attachment delete on this volume
15:32:17 <geguileo> by ignoring the error we delete the attachment regardless of whether the remove_export succeeded or not
15:32:26 <geguileo> instead of leaving it in error_detaching state
15:32:34 <geguileo> which wasn't helping anyone and was weird
15:32:46 <geguileo> because cinder was saying that it was deleted, yet it was still there
15:33:04 <geguileo> rosmaita: that would be my crazy long summary   lol
15:33:13 <rosmaita> that is helpful, thanks
15:33:16 <geguileo> np
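(A minimal sketch of the agreed error_detaching change summarized above, using illustrative method names rather than cinder's real internals: attempt remove_export, ignore a failure since it will be retried later, and delete the attachment record instead of parking it in error_detaching.)

    import logging

    LOG = logging.getLogger(__name__)

    def attachment_delete(context, attachment, volume, driver):
        # Hypothetical shape of the agreed behaviour, not the real cinder code.
        try:
            driver.remove_export(context, volume)
        except Exception:
            # The REST API has already reported the delete as successful, and
            # remove_export runs again on the next attachment delete and before
            # the volume itself is deleted, so the failure is only logged.
            LOG.warning("remove_export failed for volume %s; will retry later",
                        volume.id)
        # Drop the attachment record instead of leaving it in error_detaching,
        # so an 'available' volume really has no attachment records.
        attachment.destroy()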
15:33:29 <rosmaita> ok, thanks to everyone who stuck around for this week's XL meeting
15:33:29 <rosmaita> don't forget the festival of XS reviews on friday
15:33:34 <enriquetaso> yes, really helpful
15:33:42 <rosmaita> also
15:33:42 <rosmaita> please look at stephenfin's comments in the agenda about sqlalchemy-migrate -> alembic migration
15:33:42 <rosmaita> if we don't get it done this cycle, the cinder project will be stuck maintaining sqlalchemy-migrate
15:33:42 <rosmaita> (and we don't want that!)
15:33:42 <rosmaita> so please start prioritizing reviews of the alembic migration patches as soon as v2 is removed from cinder
15:34:06 <rosmaita> and rajat, you can discuss stable releases next week
15:34:11 <enriquetaso> hey, can I ask for eyes on (reviews of) https://review.opendev.org/c/openstack/cinder/+/679138
15:34:13 <enriquetaso> :P
15:34:17 <geguileo> whoami-rajat: sorry, for taking most of the time
15:34:17 <rosmaita> thanks everyone!
15:34:23 <stephenfin> Yes please. Let me know if anything is unclear. I've tried to keep things as concise as possible
15:34:27 <geguileo> thanks everyone!
15:34:30 <enriquetaso> thanks!
15:34:32 <whoami-rajat> geguileo, np, it was a good and important discussion
15:34:35 <rosmaita> #endmeeting