*** dviroel|out is now known as dviroel | 11:24 | |
*** hemna0 is now known as hemna | 12:53 | |
rosmaita | #startmeeting cinder | 14:00 |
opendevmeet | Meeting started Wed Jul 14 14:00:14 2021 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:00 |
opendevmeet | The meeting name has been set to 'cinder' | 14:00 |
rosmaita | #topic roll call | 14:00 |
eharney | hi | 14:00 |
rosmaita | am i netsplit? | 14:00 |
walshh_ | hi | 14:00 |
rosmaita | oh good | 14:00 |
whoami-rajat | Hi | 14:01 |
enriquetaso | Hi | 14:01 |
fabiooliveira | Hi | 14:01 |
rosmaita | was worried for a minute there | 14:01 |
simondodsley | Hi | 14:01 |
Mr_Freezeex | Hi | 14:01 |
TusharTgite | hi | 14:01 |
rosmaita | #link https://etherpad.opendev.org/p/cinder-xena-meetings | 14:01 |
rosmaita | hi everyone, i am going to get started because i have like 500 announcements and there's a bunch of topics on the agenda | 14:02 |
rosmaita | #topic announcements | 14:02 |
rosmaita | ok, first item | 14:02 |
rosmaita | TusharTgite put a request for help near the end of our video meeting 2 weeks ago, and i am afraid it got lost | 14:02 |
rosmaita | and i forgot to mention it last week | 14:02 |
rosmaita | #link https://wiki.openstack.org/wiki/Reset_State_Robustification | 14:03 |
rosmaita | he would like some feedback, so please take a look | 14:03 |
rosmaita | next item is a reminder | 14:03 |
rosmaita | Cinder Festival of XS Reviews this friday! | 14:03 |
rosmaita | location as usual is https://meetpad.opendev.org/cinder-festival-of-reviews | 14:04 |
rosmaita | next item, the next OpenStack release will be named "Yoga" | 14:04 |
rosmaita | and the Yoga virtual PTG will be held 18-22 October 2021 | 14:04 |
rosmaita | and now we have a planning etherpad | 14:04 |
rosmaita | #link https://etherpad.opendev.org/p/yoga-ptg-cinder-planning | 14:04 |
rosmaita | and, teams must sign up for time slots within the next week | 14:05 |
rosmaita | i sent a proposal to the ML yesterday which you may have seen | 14:05 |
rosmaita | #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023624.html | 14:05 |
rosmaita | basically, the proposal is for us to do what we have done for the past virtual PTGs | 14:06 |
rosmaita | the downside is that it is rough on APAC time zones | 14:06 |
rosmaita | but i really don't know what to do about that | 14:06 |
rosmaita | i am open to suggestions | 14:06 |
rosmaita | as i said in the email, maybe we have an APAC-friendly post-PTG session, if there is interest | 14:07 |
rosmaita | the problem is that interest is usually shown by attending cinder weekly meetings | 14:07 |
rosmaita | and if the weekly meeting time sucks for you, you won't be here to express an opinion | 14:07 |
whoami-rajat | i think it's fine for India, not sure about China, Japan and others... | 14:08 |
rosmaita | yeah, for shanghai time it's sub-optimal, but probably do-able | 14:08 |
rosmaita | but for Japan, kind of not very good | 14:08 |
rosmaita | the source of the problem, as i see it, is that we are not all physically in the same TZ for the week | 14:09 |
rosmaita | it is really hard to accommodate Seattle to Tokyo | 14:09 |
rosmaita | anyway, my point is that i don't want to be exclusionary about time zones, but i also don't have any ideas | 14:10 |
rosmaita | and we definitely need to accommodate the core team | 14:10 |
rosmaita | so, please make sure you mark your calendar for the PTG | 14:10 |
rosmaita | and also register | 14:10 |
rosmaita | next item | 14:10 |
rosmaita | dropping the lower-constraints job and file (discussed last week) | 14:11 |
rosmaita | #link https://review.opendev.org/q/topic:%22drop-l-c%22+(status:open%20OR%20status:merged) | 14:11 |
whoami-rajat | one way is to add APAC topics to the beginning and they can follow the rest of PTG in recordings if it gets too late? | 14:11 |
rosmaita | whoami-rajat: yeah, i am happy to do that, i'll add a note on the etherpad | 14:11 |
rosmaita | #action rosmaita ^^ | 14:11 |
rosmaita | ok, please review ^^ so we can get lower-constraints out of our collective lives and easily make requirements changes | 14:12 |
rosmaita | next item | 14:12 |
rosmaita | this is week R-12 - xena milestone-2 | 14:12 |
rosmaita | two major things here | 14:13 |
rosmaita | 1 - new driver merge deadline | 14:13 |
rosmaita | one new driver: Add Cinder NFS driver for Dell EMC PowerStore | 14:13 |
rosmaita | https://review.opendev.org/c/openstack/cinder/+/797608 | 14:13 |
rosmaita | i need help reviewing this one, especially from driver maintainers who already have NFS-based drivers | 14:14 |
rosmaita | so please do PowerStore a solid and take a look and leave comments | 14:14 |
rosmaita | i will be honest, the driver as it stands looks a bit raw | 14:14 |
eharney | i've got this one on my list to follow up on | 14:15 |
rosmaita | in that i'm not sure that it takes into account all the implications of multiattach | 14:15 |
rosmaita | but i will also be honest and say i don't know jack about NFS, really | 14:15 |
rosmaita | so thanks eharney, definitely want you to look closely | 14:15 |
rosmaita | but also, other people ... anyone from netapp here today? | 14:15 |
rosmaita | guess not | 14:16 |
rosmaita | ok the other big thing is that i want to get v2 of the block storage api removed this week and cut releases of everything affected | 14:17 |
rosmaita | and since these are my patches, i cannot review | 14:17 |
rosmaita | well, i can, but it doesn't do any good | 14:17 |
rosmaita | i sent an appeal about cinder to the ML earlier this week | 14:18 |
rosmaita | #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023606.html | 14:18 |
rosmaita | and then this morning, i realized that we had discussed this earlier and people signed up to review patches | 14:18 |
rosmaita | #link https://wiki.openstack.org/wiki/CinderXenaMidCycleSummary#Block_Storage_API_v2_Removal_Update | 14:18 |
rosmaita | and here are the patches | 14:18 |
rosmaita | #link https://review.opendev.org/q/topic:%22drop-v2%22+(status:open%20OR%20status:merged) | 14:18 |
enriquetaso | i'm not on that list but i'll review them asap | 14:19 |
rosmaita | enriquetaso: please start with the cinderclient patches | 14:20 |
enriquetaso | sure | 14:20 |
rosmaita | i am having problems getting the second cinderclient patch passing CI after rebasing it on the other patch | 14:20 |
rosmaita | the "other patch" being shell removal | 14:20 |
rosmaita | shell removal: https://review.opendev.org/c/openstack/python-cinderclient/+/791834 | 14:21 |
rosmaita | classes removal: https://review.opendev.org/c/openstack/python-cinderclient/+/792959 | 14:21 |
rosmaita | i have no idea what is going on ... i am consistently getting tempest-full-py failures in a tempest scenario test that tries to log in to horizon | 14:22 |
rosmaita | i am pretty sure my patch is not affecting that | 14:22 |
rosmaita | but my patch seems to be cursed | 14:22 |
rosmaita | the other thing is that i am getting consistent fixture timeouts in the functional-py36 test ... | 14:23 |
whoami-rajat | i also got the python-cinderclient-functional-py36 failure today on my patch... | 14:23 |
rosmaita | yes, and then you passed on the next go-round | 14:23 |
eharney | rosmaita: er... i think horizon imports cinderclient.v2 | 14:23 |
rosmaita | whereas mine failed again | 14:23 |
rosmaita | eharney: i thought not, but i better look more closely | 14:23 |
eharney | it does, i just looked | 14:24 |
rosmaita | shoot | 14:24 |
rosmaita | ok, i guess that's why we have CI tests | 14:24 |
rosmaita | eharney: shoot me a link and i will put up an emergency horizon patch or something | 14:25 |
eharney | https://opendev.org/openstack/horizon/src/branch/master/openstack_dashboard/api/cinder.py#L31 | 14:25 |
eharney | i'll add a review comment | 14:25 |
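For context, the breakage eharney is pointing at is a direct v2 import in horizon's cinder API wrapper. The snippet below is illustrative of the pattern only (the exact line horizon uses is at the link above):

```python
# Illustrative only -- see the horizon link above for the real line.
# Any consumer with a direct v2 import breaks once the package is removed:
from cinderclient.v2 import client as cinder_client_v2  # gone after drop-v2
```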
rosmaita | ty | 14:25 |
rosmaita | ok, if anyone has an idea about the fixtures problem (only seen in py36) i will be happy to hear it | 14:26 |
rosmaita | final announcement: | 14:26 |
rosmaita | xena midcycle-2 coming up at R-9 | 14:26 |
rosmaita | Wednesday 4 August 2021, 1400-1600 UTC | 14:26 |
rosmaita | so make sure your calendar is blocked off for fun & excitement | 14:26 |
rosmaita | and, add topics to the etherpad: | 14:27 |
rosmaita | #link https://etherpad.opendev.org/p/cinder-xena-mid-cycles | 14:27 |
rosmaita | that's all for announcements, other than that i will be away for a week starting next wednesday | 14:27 |
rosmaita | whoami-rajat has volunteered to run the meeting | 14:28 |
rosmaita | ok, on to real topics | 14:28 |
rosmaita | #topic rbd: snapshot mirroring for ceph backup driver | 14:28 |
rosmaita | Mr_Freezeex: that's you | 14:28 |
Mr_Freezeex | Hi, so this is a follow-up discussion on the addition of snapshot replication discussed in the last weekly meetings | 14:28 |
rosmaita | #link https://review.opendev.org/c/openstack/cinder/+/784956 | 14:28 |
Mr_Freezeex | The snapshot mirroring was denied and I am here to present why I think it's nice to have that in cinder-backup | 14:29 |
Mr_Freezeex | So the first thing is that you can't enable snapshot replication on the whole pool like with journaling; you have to do that per image. So this either needs to be done directly in cinder-backup or with external automation. | 14:29 |
Mr_Freezeex | Now if we compare the two modes: it's true that the writes are synchronously written to the journal, but they are not synchronously replayed to the other cluster. | 14:30 |
Mr_Freezeex | With journaling the replication is consistent per write and with the snapshot it's per snapshot. But for backups, I don't see a real advantage for per write consistency. | 14:31 |
Mr_Freezeex | So IMO both alternatives are equally safe for backups, and the snapshot option well exceeds the performance of journaling replication in terms of client performance (so backup speed in our case) and replication speed. | 14:31 |
Mr_Freezeex | If you have any questions/concerns, feel free! | 14:32 |
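A minimal sketch of the per-image requirement Mr_Freezeex describes, assuming the rbd Python bindings from Ceph Octopus or later (pool and image names are illustrative, and the method and constant names should be checked against the bindings in use):

```python
import rados
import rbd

# Journal-based mirroring can be turned on for a whole pool, but snapshot-based
# mirroring has to be enabled image by image, which is why cinder-backup (or
# some external automation) would have to do it.
with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx('backups') as ioctx:                    # pool name is illustrative
        with rbd.Image(ioctx, 'volume-1234.backup.5678') as image:  # image name is illustrative
            image.mirror_image_enable(rbd.RBD_MIRROR_IMAGE_MODE_SNAPSHOT)
```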
geguileo | Mr_Freezeex: I haven't tried the feature, and I'm looking for the doc where it says that it cannot do whole pool configuration for snapshot replication... | 14:33 |
geguileo | do you have a link or location where the docs say it? | 14:33 |
geguileo | or is it missing from the docs? | 14:34 |
Mr_Freezeex | Well it's not in the docs, but the rbd command to enable replication on the whole pool assumes journaling | 14:34 |
Mr_Freezeex | there is no option to say either snapshot or journal | 14:34 |
Mr_Freezeex | And the default in ceph is journaling so... | 14:34 |
geguileo | Mr_Freezeex: that mirroring mode may be the default, but I believe you can change it | 14:35 |
geguileo | right? | 14:35 |
Mr_Freezeex | Hmm I don't think so but I am maybe wrong | 14:36 |
Mr_Freezeex | at least it does not appear to be documented anywhere... | 14:36 |
geguileo | my bad | 14:37 |
geguileo | pool (default): When configured in pool mode, all images in the pool with the journaling feature enabled are mirrored. | 14:37 |
Mr_Freezeex | Ah indeed | 14:37 |
geguileo | oooooh, wait, but you can configure the mirror-snapshot to work on a per-pool level | 14:38 |
geguileo | which should solve the problem, I would think | 14:38 |
geguileo | jbernard: you around to chime in on this? ^ | 14:38 |
Mr_Freezeex | Oh ok, where is that documented? | 14:38 |
geguileo | Mirror-snapshots can also be automatically created on a periodic basis if mirror-snapshot schedules are defined. The mirror-snapshot can be scheduled globally, per-pool, or per-image levels. | 14:39 |
geguileo | https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#create-image-mirror-snapshots | 14:39 |
geguileo | not 100% sure that is all that's needed | 14:39 |
Mr_Freezeex | Ah yes ok but this only creates snapshots on images that have the snapshot replication mode enabled | 14:40 |
geguileo | ignore it: When using snapshot-based mirroring, mirror-snapshots will need to be created whenever it is desired to mirror the changed contents of the RBD image | 14:40 |
geguileo | yup | 14:40 |
geguileo | ok, so in that light, it makes sense to support it in the backups | 14:40 |
geguileo | I find it odd that there is no way to do that by default | 14:41 |
Mr_Freezeex | Speaking of the mirror snapshots on the whole pool: I assumed this would be the go-to for backups in my patch, but I will probably add a quick addition to do a mirror snapshot after every data transfer so you would not even have to enable that! | 14:42 |
Mr_Freezeex | Yeah, but I think they want to deprecate enabling replication on the whole pool, or at least it is not really recommended | 14:42 |
rosmaita | i'm trying to remember what the objection to using this for backups was last week, it was something to do with an RPO gap | 14:43 |
geguileo | Mr_Freezeex: what do you mean do a mirror snapshot after every data transfer? | 14:43 |
Mr_Freezeex | geguileo: When the backup is finished do a mirror snapshot | 14:44 |
geguileo | rosmaita: my main complaint was that this should be done like the journal-based mode, at the Ceph end, and cinder shouldn't need to know | 14:44 |
Mr_Freezeex | Basically even an empty snapshot has a (small) cost on the rbd-mirror daemon, so that's an easy optimisation | 14:44 |
geguileo | Mr_Freezeex: that is what's necessary, right? otherwise it won't be mirrored | 14:45 |
Mr_Freezeex | geguileo: Yes, you have to create mirror snapshots. One option is to do that with the mgr and let ceph do it, but doing it from cinder-backup would be nice because otherwise even backups that haven't changed for years would have those "useless" snapshots being created. | 14:47 |
geguileo | Mr_Freezeex: so if I understand correctly, we could not do it in Cinder and let the storage admin set the scheduler in the backend, right? | 14:48 |
geguileo | and if I understand correctly even the number of snapshots per image can be configured in the cluster, so it can be changed to 1 | 14:49 |
Mr_Freezeex | geguileo: yes, but that's an easy thing to do in cinder (like ~10 lines). You just have to call the function to create a mirror snapshot and let ceph do its thing in the backend | 14:49 |
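For reference, the "~10 lines" would look roughly like the sketch below. This is hedged: it assumes the same Octopus+ rbd bindings and an already-open rbd.Image handle for the backup image, and the function and variable names are made up rather than taken from the backup driver.

```python
def _create_mirror_snapshot(backup_image):
    """Ask RBD to replicate the backup data that was just written.

    Meant to run after the backup data transfer finishes, so mirror snapshots
    are only taken for backups that actually changed, instead of relying on a
    pool-wide mgr schedule that would also snapshot idle images.
    """
    # mirror_image_create_snapshot() is the Octopus+ binding behind
    # 'rbd mirror image snapshot'; rbd-mirror then copies the delta to the
    # secondary cluster asynchronously, so the call itself is cheap.
    backup_image.mirror_image_create_snapshot()
```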
Mr_Freezeex | Yes but this is managed internally by ceph | 14:49 |
geguileo | yeah, but that was the complaint: if it can be done in the backend, we don't want to add that on our end | 14:49 |
*** dviroel is now known as dviroel|lunch | 14:50 | |
rosmaita | ok, so the discussion last week was that it made sense to configure journaling vs. snapshot mirroring in cinder ... for backups, do we want to have no option, and just do snapshot mirroring if it's available? | 14:50 |
geguileo | if we create the snapshots, then we are responsible for the snapshots | 14:50 |
geguileo | rosmaita: my complaint about backups is that afaik an admin can do it all on the Ceph side | 14:50 |
geguileo | rosmaita: like they are currently doing for journaling | 14:50 |
geguileo | s/journaling/journal-mirroring | 14:51 |
rosmaita | ok, this has until M-3 to merge, so we should continue this discussion later | 14:51 |
geguileo | so if that can be done as well for snapshot-based mirroring | 14:51 |
geguileo | why add code to cinder just for that? | 14:51 |
geguileo | ok | 14:51 |
rosmaita | i am all for not adding unnecessary code to cinder! | 14:51 |
geguileo | Mr_Freezeex: we can continue the discussion on the cinder channel after the meeting | 14:52 |
Mr_Freezeex | yes sure! | 14:52 |
rosmaita | ok, geguileo you are up next | 14:52 |
rosmaita | #topic meaning of "available" status for a volume? | 14:52 |
geguileo | rosmaita: thanks | 14:52 |
geguileo | so I'm working on fixing a race condition on the new attachment API, specifically on the delete operation | 14:53 |
geguileo | and while looking at the code around it, I saw some inconsistencies | 14:53 |
geguileo | some with how we worked before | 14:53 |
geguileo | some on their own | 14:53 |
geguileo | for example, if a remove export fails when deleting the attachment, then we don't fail the operation | 14:53 |
geguileo | though we set the attach_status of the attachment to error_detaching | 14:54 |
geguileo | since we don't fail, the volume will go into available and detached (if there are no additional attachments on that volume) | 14:54 |
geguileo | so we end up with a detach operation that according to the old attachment API should have failed | 14:54 |
geguileo | but now succeeds | 14:54 |
geguileo | that's not a big deal, since on delete we now ensure we call the remove export, just something that we have changed, and sounds a bit weird | 14:55 |
geguileo | the bad part is that now we have a volume that has an attachment, yet is in available and detached state | 14:55 |
whoami-rajat | geguileo, I'm not clear with state "available and detached", i assume you mean status=available and attach_status=detached but you said attach_status is set as error_detaching | 14:56 |
geguileo | whoami-rajat: attach_status was for the attachment itself | 14:56 |
geguileo | whoami-rajat: whereas the available and detached is like you described, which is for the volume | 14:56 |
whoami-rajat | ok got it | 14:57 |
geguileo | which is kind of weird, as one would assume that an available volume has no attachment records | 14:57 |
geguileo | so there are 2 ways to look at the available status: | 14:57 |
geguileo | 1- available => has no attachments, in any state (this was the old way) | 14:58 |
geguileo | 2- available => there is no-one using the volume (this is the new way) | 14:58 |
geguileo | so that is my first concern (of many around this code) | 14:58 |
geguileo | if we want to keep current behavior we need to document it | 14:59 |
rosmaita | (btw ... horizon now meets in their own channel, so we can continue this discussion right here) | 14:59 |
geguileo | great! | 14:59 |
geguileo | though I feel like I put everyone to sleep | 14:59 |
rosmaita | we are trying to digest what you are saying | 15:00 |
geguileo | ok, thanks | 15:00 |
whoami-rajat | geguileo, since you mentioned we handle the delete export part during volume delete, doesn't it make sense to delete the attachment record rather than setting the status=error_detaching? | 15:00 |
geguileo | whoami-rajat: yes, it would make sense to me | 15:00 |
whoami-rajat | i know it doesn't sound good but keeping an attachment record which is of no use doesn't sound good either | 15:00 |
geguileo | oh, no, it sounds a lot better than what we have now!! | 15:01 |
whoami-rajat | great | 15:01 |
geguileo | I'm ok with doing that | 15:01 |
geguileo | what do others think? | 15:01 |
whoami-rajat | regarding your question, i prefer 1) since there might be complications with a stale attachment record during multiattach or even with normal volumes (that i can't think of right now) | 15:02 |
rosmaita | geguileo: so when you say "new way", you mean this is the implication of the current code, not that you are proposing that it is good | 15:03 |
geguileo | whoami-rajat: the only time we set error_detaching on the attachments is that one, so if we delete it, we don't have to worry about that state anymore on the attachments themselves | 15:04 |
geguileo | rosmaita: correct, this is the new attachment API current behavior | 15:04 |
geguileo | not something I want to do | 15:04 |
rosmaita | ok, got it | 15:06 |
whoami-rajat | geguileo, great, that solves all the problem then | 15:06 |
rosmaita | i agree with whoami-rajat that 1 is preferable | 15:06 |
geguileo | ok, then when we have an error_attaching case we have to change the status of the volume and the attachment_status | 15:07 |
geguileo | currently there are at least 2 cases where a volume's status will go into available detached even when having an attachment with error_attaching attachment_status | 15:07 |
geguileo | and the "fun" part is that error_attaching will only happen if we have mapped and exported the volume | 15:08 |
whoami-rajat | lot of cleanup needed! | 15:09 |
geguileo | if the export succeeds and initialize fails => available detached | 15:09 |
hemna | so in that case unexport needs to be called? | 15:10 |
geguileo | if the export and initialize succeed and attach_volume fails => available detached and the attachment.attach_status=error_attaching | 15:10 |
whoami-rajat | IMO those attachment records only make sense if we could reuse them or reset state to get the volume attached somehow with the same attachment ID (or maybe for tracking purposes) | 15:10 |
geguileo | whoami-rajat: attachment records also help openstack keep the information where it should be | 15:11 |
geguileo | so we can terminate connections even when nova doesn't exist | 15:11 |
geguileo | hemna: in the new API we don't consider having a volume left as exported "a problem" | 15:12 |
geguileo | we call it on delete before the actual driver.delete call | 15:12 |
geguileo | in my opinion the error_attaching must be taken into consideration | 15:13 |
whoami-rajat | geguileo, i was referring to error attachment records | 15:13 |
hemna | well, except that the backend may have a volume exported when it shouldn't be at that point | 15:13 |
geguileo | because the volume will be exported and mapped | 15:13 |
geguileo | whoami-rajat: oh, those | 15:13 |
geguileo | whoami-rajat: with the change we agreed we would only have attachment_error record | 15:13 |
geguileo | I mean the only error kind | 15:14 |
geguileo | and that is necessary to state that whatever we have there should not be used | 15:14 |
geguileo | imo | 15:14 |
geguileo | and we shouldn't delete an attachment record on our own on an "attach" (which is an update) request | 15:14 |
whoami-rajat | ack, makes sense | 15:15 |
geguileo | so I would like to reach an agreement on: | 15:15 |
geguileo | - If a volume has been exported & mapped, but the call to attach_volume fails, what should the volume's status and attach_status be? | 15:16 |
whoami-rajat | geguileo, should we consider cases of multiattach and normal volumes separately? | 15:17 |
geguileo | - If a volume is exported but mapping fails (it could be actually mapped), should we change the attachment.attach_status to error_attaching? | 15:17 |
geguileo | whoami-rajat: yes, I'm talking only about single attachment | 15:18 |
geguileo | because for multi-attach there is a priority on the attachment status | 15:18 |
geguileo | I mean on the attachments' attachment_status | 15:18 |
hemna | fwiw if initialize_connection fails remove_export is already called | 15:19 |
geguileo | yup | 15:21 |
geguileo | thanks | 15:21 |
geguileo | whoami-rajat: for multi attach: attachments priority is: attached > attaching > detaching > reserved | 15:22 |
geguileo | so the volume's status and attachment_status are set according to the highest-priority state existing among those | 15:22 |
geguileo | if there is one reserved and one attached => volume goes into in-use attached | 15:23 |
rosmaita | ok, that makes sense | 15:24 |
geguileo | btw, if the initialize fails, the remove export could also fail because the mapping may have succeeded in the backend | 15:24 |
geguileo | rosmaita: that is the current behavior, but ignores the error_attaching case | 15:24 |
geguileo | and that is a volume that is exported & mapped!! | 15:24 |
whoami-rajat | geguileo, ok, and these states have higher priority than error attachment records, eg: reserved > error_attaching | 15:24 |
geguileo | so an exported & mapped volume stays as available detached | 15:24 |
geguileo | whoami-rajat: error_* are literally ignored | 15:25 |
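Put together, the rule geguileo is describing works out to something like the sketch below (illustrative only, not the actual cinder code):

```python
# Illustrative sketch of the status derivation geguileo describes; not the
# real cinder implementation.
ATTACH_PRIORITY = ['attached', 'attaching', 'detaching', 'reserved']

def derive_attach_status(attachments):
    # error_* records are "literally ignored", which is how a volume that is
    # actually exported & mapped can still end up available/detached.
    usable = {a.attach_status for a in attachments
              if not a.attach_status.startswith('error_')}
    for state in ATTACH_PRIORITY:
        if state in usable:
            # e.g. one 'reserved' + one 'attached' record -> 'attached',
            # and the volume's status becomes 'in-use'.
            return state
    return 'detached'  # nothing usable left -> the volume shows as 'available'
```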
whoami-rajat | geguileo, i don't see any cleanup being done if driver's attach volume call fails | 15:25 |
geguileo | whoami-rajat: there is none | 15:25 |
whoami-rajat | geguileo, oh, that's good i guess | 15:25 |
whoami-rajat | hmm | 15:25 |
geguileo | but even if there were, it could fail | 15:25 |
whoami-rajat | :/ | 15:26 |
geguileo | so the question is, how do we treat the error_attaching state on the attachment? | 15:26 |
geguileo | towards the volume's status and attachment_status | 15:27 |
geguileo | well, I seem to have saturated everyone with so much stuff | 15:28 |
geguileo | so I'll take the win of agreement on the removal of the error_detaching status | 15:28 |
geguileo | and leave the error_attaching questions for another year | 15:28 |
geguileo | or the mid-cycle, or PTG, or whatever | 15:29 |
whoami-rajat | geguileo, maybe we can discuss it in upcoming virtual PTG, good to add a topic there | 15:29 |
whoami-rajat | virtual mid-cycle i mean | 15:29 |
geguileo | will add it | 15:29 |
rosmaita | geguileo: can you give a quick summary of the error_detaching agreement? | 15:29 |
geguileo | sure | 15:29 |
rosmaita | and yes, good topic for virtual midcycle | 15:29 |
geguileo | the error_detaching state only happens when the remove_export fails | 15:30 |
geguileo | and in that case the REST API response is that the attachment delete has successfully been completed | 15:30 |
geguileo | it's not a problem that the volume hasn't been unexported because we'll call it on the volume deletion | 15:31 |
geguileo | so we are going to be consistent with the REST API response, and ignore remove_export errors altogether | 15:31 |
geguileo | so we try, if we fail, we postpone until deletion or until another attachment delete on this volume | 15:31 |
geguileo | by ignoring the error we delete the attachment regardless of whether the remove_export succeeded or not | 15:32 |
geguileo | instead of leaving it in error_detaching state | 15:32 |
geguileo | which wasn't helping anyone and was weird | 15:32 |
geguileo | because cinder was saying that it was deleted, yet it was still there | 15:32 |
geguileo | rosmaita: that would be my crazy long summary lol | 15:33 |
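In code terms the agreement would look roughly like this (hypothetical names, not the real cinder-volume manager): log and swallow the remove_export failure, then drop the attachment record instead of flagging it error_detaching.

```python
import logging

LOG = logging.getLogger(__name__)

def attachment_delete(ctxt, attachment, vol, driver, db):
    """Hypothetical sketch of the agreed behavior, not the real manager code."""
    try:
        # Best effort: a failure here is not fatal because remove_export is
        # called again before the backend volume delete, or on a later
        # attachment delete for the same volume.
        driver.remove_export(ctxt, vol)
    except Exception:
        LOG.exception('remove_export failed for volume %s; export cleanup is '
                      'deferred to volume deletion or a later detach.', vol.id)
    # Drop the record instead of leaving it in 'error_detaching', so the DB
    # stays consistent with the REST API response that already reported the
    # detach as successful.
    db.attachment_destroy(ctxt, attachment.id)
```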
rosmaita | that is helpful, thanks | 15:33 |
geguileo | np | 15:33 |
rosmaita | ok, thanks to everyone who stuck around for this week's XL meeting | 15:33 |
rosmaita | don't forget the festival of XS reviews on friday | 15:33 |
enriquetaso | yes, really helpful | 15:33 |
rosmaita | also | 15:33 |
rosmaita | please look at stephenfin's comments in the agenda about sqlalchemy-migrate -> alembic migration | 15:33 |
rosmaita | if we don't get it done this cycle, the cinder project will be stuck maintaining sqlalchemy-migrate | 15:33 |
rosmaita | (and we don't want that!) | 15:33 |
rosmaita | so please start prioritizing reviews of the alembic migration patches as soon as v2 is removed from cinder | 15:33 |
rosmaita | and rajat, you can discuss stable releases next week | 15:34 |
enriquetaso | hey, can I ask for eyes (reviews) on https://review.opendev.org/c/openstack/cinder/+/679138 | 15:34 |
enriquetaso | :P | 15:34 |
geguileo | whoami-rajat: sorry, for taking most of the time | 15:34 |
rosmaita | thanks everyone! | 15:34 |
stephenfin | Yes please. Let me know if anything is unclear. I've tried to keep things as concise as possible | 15:34 |
geguileo | thanks everyone! | 15:34 |
enriquetaso | thanks! | 15:34 |
whoami-rajat | geguileo, np, it was a good and important discussion | 15:34 |
rosmaita | #endmeeting | 15:34 |
opendevmeet | Meeting ended Wed Jul 14 15:34:35 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:34 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-07-14-14.00.html | 15:34 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-07-14-14.00.txt | 15:34 |
opendevmeet | Log: https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-07-14-14.00.log.html | 15:34 |
geguileo | stephenfin: thanks! | 15:34 |
whoami-rajat | thanks everyone! | 15:34 |
*** dviroel|lunch is now known as dviroel | 15:39 | |
*** dviroel is now known as dviroel|out | 22:01 |