14:04:14 <jbernard> #startmeeting cinder
14:04:14 <opendevmeet> Meeting started Wed Aug 13 14:04:14 2025 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:14 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:14 <opendevmeet> The meeting name has been set to 'cinder'
14:04:27 <jbernard> jungleboyj rosmaita smcginnis tosky whoami-rajat m5z e0ne geguileo eharney jbernard hemna fabiooliveira yuval tobias-urdin adiare happystacker dosaboy hillpd msaravan sp-bmilanov Luzi sfernand simondodsley: courtesy reminder
14:04:34 <jbernard> #link https://etherpad.opendev.org/p/cinder-flamingo-meetings
14:04:37 <Luzi> o/
14:04:37 <jbernard> #topic roll call
14:04:42 <mhen> o/
14:04:44 <gireesh> o/
14:04:45 <hillpd> o/
14:04:47 <pedrovlf> o/
14:04:51 <jungleboyj> o/
14:05:02 <tosky> o/
14:05:47 <vivek__> o/
14:05:59 <anthonygamboa> o/
14:06:00 <jayaanand> o/
14:06:05 <sp-bmilanov> o/
14:06:05 <Anoop_Shukla> o/
14:06:20 <yuval> o-/
14:06:24 <hvlcchao1> o/
14:07:35 <jbernard> hello everyone
14:08:10 <jbernard> i see a nova spec being added to the agenda, while that happens some quick reminders
14:08:37 <jbernard> Monday, Aug 18 is our midcycle/review session at 1400 UTC (this slot)
14:09:24 <jbernard> #link https://releases.openstack.org/flamingo/schedule.html
14:09:40 * sp-bmilanov is adding things at the last moment
14:10:20 <sp-bmilanov> (I can elaborate a bit more when I have the floor)
14:10:44 <jbernard> sp-bmilanov: sure, go ahead
14:11:48 <sp-bmilanov> hi, thanks, so we hit a case where, during a live migration of an instance, the OOM killer kills the nova agent at the source, but in just the right moment so that the instance is now running at the destination
14:11:57 <sp-bmilanov> but it is not reflected in OpenStack's state
14:12:11 <sp-bmilanov> and when you boot the instance again, it gets started on the source hypervisor
14:12:35 <sp-bmilanov> leading to data corruption for the volumes that are attached at both the source and destination
14:12:42 <jbernard> yikes
14:13:12 <sp-bmilanov> I've brought this up with Nova, hence the spec, but I was wondering if there is something more we can do to eliminate double-attachments as a whole
14:14:00 <sp-bmilanov> the StorPool storage system can force-detach from all but the instance that is being powered on (the linked change), so you don't at least have data corruption
14:14:25 <sp-bmilanov> (https://review.opendev.org/c/openstack/os-brick/+/940245)
14:14:47 <sp-bmilanov> what I wanted to discuss is does it make sense to generalize this a bit and ship it as something other drivers can implement?
14:15:23 <sp-bmilanov> (something like a hook that storage systems, where possible, can implement to check that some invariants hold)
14:16:50 <sp-bmilanov> ok, wrong link -- https://review.opendev.org/c/openstack/os-brick/+/957117
14:17:53 <sp-bmilanov> and the context in which it can be called -- https://review.opendev.org/c/openstack/nova/+/957119
14:18:43 <sp-bmilanov> currently, it makes a lot of assumptions, might not cover edge cases, etc, but the spirit of the change is "consult the storage system that it's ok to boot"
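[editor's note: a minimal sketch of the "consult the storage system that it's ok to boot" invariant discussed above; the class and method names are illustrative, not the actual os-brick/StorPool API from the linked changes]

```python
class FakeStorageBackend:
    """Stands in for a storage system that can report active attachments."""

    def __init__(self, attachments):
        # volume_id -> set of host names currently attached
        self._attachments = attachments

    def hosts_attached(self, volume_id):
        return set(self._attachments.get(volume_id, ()))


def ok_to_power_on(backend, volume_id, host):
    """Return True only if no *other* host holds an attachment.

    The invariant check: before (re)starting an instance on `host`,
    ask the backend whether the volume is attached elsewhere, which
    would indicate a half-finished migration and a double-attach risk.
    """
    others = backend.hosts_attached(volume_id) - {host}
    return not others
```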
14:19:14 <jbernard> this will certainly require nova's input,
14:19:31 <jbernard> the source of the bug is restarting on the source when an instance is in a transitional state, no?
14:20:27 <sp-bmilanov> in a transitional state from the PoV of OpenStack, yes, from what I've gathered, the instance was fully migrated on the destination, there was just some bookkeeping and cleanup left on the source
14:21:06 <jbernard> we support multiattach, so there are at least some cases where eliminating this would not be desired... (im reading as fast as i can, there's quite a bit to think about)
14:21:42 <sp-bmilanov> yep, no rush, I just wanted to get the idea out there
14:21:59 <sp-bmilanov> and yes, I am not sure how this will mesh with multi-attach
14:22:03 <jbernard> sp-bmilanov: if finalizing cleanup is necessary, wouldn't bringing up the instance again on the source be the wrong thing to do?
14:23:18 <sp-bmilanov> yep, but from what I gathered, a restart was issued, which made OpenStack recreate the instance at the source
14:24:07 <sp-bmilanov> (there are also discussions to look into maybe adding more obstacles if OpenStack detects that a migration has failed)
14:24:18 <sp-bmilanov> more obstacles to get a VM restarted
14:24:20 <jbernard> but i would hope a restart on a migrated instance would be able to correct for this case
14:24:57 <sp-bmilanov> it's a known defect, the source doesn't check if the instance is running on the destination
14:25:01 <jbernard> im resistant to adding additional logic to cinder if nova is capable of improving the restart logic (as a general idea)
14:25:44 <sp-bmilanov> (the spec will aim to address this and maybe describe that the source checking the destination for running instances is the way to go)
14:25:49 <jbernard> i think the source of the problem is the best place to address it, instead of adding safeguards in cinder to work around it (if possible)
14:26:41 <sp-bmilanov> makes sense
14:26:49 <jbernard> i want to hear nova's take on it
14:28:22 <sp-bmilanov> I will bring it up again on the Nova weekly next week
14:28:44 <jbernard> sp-bmilanov: what is the lp bug for this? im not seeing it
14:29:26 <sp-bmilanov> https://bugs.launchpad.net/nova/+bug/2092391
14:29:56 <jbernard> ahh, sean has some comments on the spec, need to look at that closer
14:29:58 <jbernard> sp-bmilanov: thanks
14:30:50 <sp-bmilanov> you're welcome, yes, and https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2025-07-29.log.html https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2025-07-22.log.html
14:31:26 <sp-bmilanov> I need to update the spec proposal with what we've discussed in the meets, but the focus in the spec is the cleanup enhancement
14:31:46 <jbernard> #action sp-bmilanov to raise restart/migration issue in nova meeting and report back
14:32:19 <jbernard> sp-bmilanov: it could be that nova cannot handle it any better, i just want to understand
14:32:59 <sp-bmilanov> it can, and in that specific case it will solve the issue, but I was wondering if a more general approach would be even better
14:33:22 <sp-bmilanov> in order to avoid a potential next double-attach-start situation
14:33:51 <sp-bmilanov> "in that specific case it will solve the issue" it = a potential enhancement to the cleanup
14:35:28 <jbernard> ok, im interested in addressing the immediate issue first
14:35:42 <sp-bmilanov> ack
14:36:13 <jbernard> im skeptical of solutions in need of problems, but it's certainly something we can discuss, perhaps you have a reproducer or some additional analysis
14:37:11 <jbernard> #topic open discussion
14:37:21 <yuval> hey
14:37:30 <yuval> when is feature freeze for 2025.2?
14:38:02 <jbernard> https://releases.openstack.org/flamingo/schedule.html#f-ff
14:38:02 <yuval> did I miss it already?
14:38:06 <vivek__> Hello,
14:38:06 <jbernard> R-5 i believe
14:38:07 <vivek__> https://review.opendev.org/c/openstack/cinder/+/951829
14:38:11 <Luzi> hi i just wanted to know: how is the review state for the image encryption patches? Is there anything we still need to do?
14:38:23 <jbernard> yuval: not yet
14:38:37 <yuval> 28.8?
14:38:40 <Luzi> Aug 25 - Aug 29
14:38:42 <jbernard> Luzi: i need another core reviewer
14:39:25 <anthonygamboa> Hey everyone, I have what I think is a basic question here but is there detailed documentation on how to modify an existing gerrit merge request submitted by someone else for cinder specifically?  I've checked here https://docs.openstack.org/project-team-guide/review-the-openstack-way.html#modifying-a-change and here https://docs.opendev.org/opendev/infra-manual/latest/developers.html#updating-a-change but con
14:38:37 <jayaanand> https://review.opendev.org/c/openstack/cinder/+/955054 raised this patch 10 days back. Please help with review
14:39:55 <jbernard> Luzi: i was hoping to get it merged early so that we could address any regressions should they arise, but everyone is quite busy this cycle.  im still hoping to get it merged, ill try to raise it in Monday's meeting
14:39:58 <pedrovlf> folks we will talk about the https://review.opendev.org/c/openstack/os-brick/+/955379 (agenda) ?
14:40:10 <Luzi> thank you jbernard
14:40:16 <vivek__> Need review for CINDER plugin registration https://review.opendev.org/c/openstack/cinder/+/951829. I have updated the patch and removed the OS version, as it had GDPR implications.
14:41:23 <jbernard> anthonygamboa: merge request, as in: a patch submitted for review?
14:42:18 <jbernard> anthonygamboa: anyone can update an existing patch, you can use git-review and the author and uploader will then show different values
14:43:04 <jbernard> anthonygamboa: but it will kindof overwrite someone elses work, so make sure you're being helpful and it's the right action to take
14:44:02 <jbernard> pedrovlf: sure, what's up?
14:45:22 <pedrovlf> Hi jbernard we would like a review on that patch we submitted cc: hillpd
14:45:44 <hillpd> We found an issue in os-brick where multiple iSCSI logins per path aren’t handled correctly. The current logic stops after finding the first session. In practice, we found that this can prevent the clean-up of devices configured under other sessions.
14:46:14 <jbernard> pedrovlf: yes, im aware, i haven't had time to look at it yet
14:46:25 <hillpd> We have a proposed fix and were looking for feedback on this solution. Re: https://review.opendev.org/c/openstack/os-brick/+/955379
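[editor's note: an illustrative sketch of the fix hillpd describes above, collecting every matching iSCSI session instead of stopping at the first; the session-record shape is hypothetical, not the real os-brick data structure from the linked patch]

```python
def find_sessions(sessions, portal, iqn):
    """Return ALL sessions matching (portal, iqn), not just the first.

    The reported bug pattern is the `return session` variant, which
    exits on the first match and leaves devices configured under any
    later sessions to the same target untouched during cleanup.
    """
    matches = []
    for session in sessions:
        if session["portal"] == portal and session["iqn"] == iqn:
            matches.append(session)  # buggy version would `return session` here
    return matches
```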
14:46:30 <hillpd> okay, thanks
14:47:00 <pedrovlf> also we described the full situation on the https://bugs.launchpad.net/os-brick/+bug/2116553
14:47:57 <jbernard> ok, the patch may well be in fine shape, it just needs someone with time to take a look
14:48:36 <Sandip> Hello Joe, Can you please review this CINDER plugin registration https://review.opendev.org/c/openstack/cinder/+/951829.
14:48:46 <jbernard> pedrovlf, hillpd: we can raise this in Monday's session
14:48:48 * jungleboyj is looking at that one.
14:49:14 <Sandip> We have dropped mail regarding previous review comments.
14:49:34 <pedrovlf> thank you jbernard let us know if you need any other detail
14:49:36 <jbernard> yep, i see it, it's in the review queue as well
14:49:41 <jbernard> Sandip: ^
14:49:52 <jbernard> pedrovlf: ok, thanks for reaching out
14:49:57 <vivek__> Thanks Joe.
14:54:01 <jbernard> ok, last call for problems, issues, dumpster fires, etc
14:54:19 <jbernard> good news is always welcome too :)
14:56:27 <jbernard> thanks everyone!
14:56:30 <jbernard> #endmeeting