16:00:19 <gibi> #startmeeting nova
16:00:19 <opendevmeet> Meeting started Tue Jul 13 16:00:19 2021 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:19 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:19 <opendevmeet> The meeting name has been set to 'nova'
16:01:26 <gibi> o/
16:01:55 <stephenfin> o/
16:02:14 <sean-k-mooney> o/
16:02:19 <dansmith> o/
16:02:41 <elodilles> o/
16:02:52 <gmann> o/
16:03:18 <gibi> #topic Bugs (stuck/critical)
16:03:26 <gibi> no critical bug
16:03:33 <gibi> 14 new untriaged bugs (-11 since the last meeting): #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
16:03:41 <lyarwood> \o
16:03:43 <gibi> thanks for the triage!
16:03:47 <gibi> (whoever did it)
16:04:05 <gibi> is there any specific bug that we need to discuss?
16:04:36 <gibi> #topic Gate status
16:04:44 <gibi> Nova gate bugs #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure
16:04:54 <stephenfin> nice! (bug triage)
16:05:06 * lyarwood did a little
16:05:37 <gibi> looking at the gate bug list I dont see new bugs there
16:05:42 <stephenfin> subjectively, I'm still seeing a decent amount of nova-live-migration failures
16:05:52 <stephenfin> though it does feel like it's slightly better
16:06:04 <stephenfin> I'm assuming this is still the volume detach issue
16:06:36 <lyarwood> I've not had a chance to look tbh
16:06:37 <gibi> I had no time to look at the gate since the last meeting so I cannot comment
16:06:56 <stephenfin> ah, sorry, that was more of a statement than a question
16:07:04 <lyarwood> https://zuul.opendev.org/t/openstack/builds?project=openstack%2Fnova&branch=master&pipeline=gate&result=failure at least we haven't blocked anything
16:08:01 <gibi> there is one gate fix from melwitt https://review.opendev.org/c/openstack/nova/+/800313 that needs a second core
16:08:14 <lyarwood> https://zuul.opendev.org/t/openstack/builds?job_name=nova-live-migration&project=openstack%2Fnova&branch=master&pipeline=check&result=failure but yeah there's a number of LM failures in check
16:08:20 <stephenfin> I'll grab that
16:08:25 <gibi> stephenfin: thanks
16:08:27 <tosky> related: the legacy jobs removal moved forward, the stable/train backport just requires a few small fixes: https://review.opendev.org/c/openstack/nova/+/795435
16:08:36 <stephenfin> thought I already had, tbh...
16:09:53 <gibi> lyarwood: thanks for the links
16:09:56 <elodilles> tosky: train is not blocked, do we need that backport there?
16:10:55 <tosky> elodilles: I'd say it's worth it: it should help people who keep an eye on that branch when a failure happens
16:12:23 <elodilles> i see.... it's just.... really not that nice of a patch :]
16:12:52 <elodilles> for backport :X
16:13:32 <gibi> elodilles: so far we backported that to newer stable branches to unblock gate there?
16:13:56 <elodilles> well, it became like that in ussuri
16:14:11 <elodilles> in newer branches it was a *bit* nicer
16:14:37 <elodilles> but yes, we did backport it so far
16:14:47 <elodilles> until ussuri
16:14:58 <elodilles> since gate was broken back to ussuri
16:15:50 <gibi> so the gate is OK in train, so this becomes a nice to have compared to ussuri where it was a must have
16:16:00 <gibi> anyhow I'll let the stable core team decide
16:17:40 <gibi> let me know if you get stuck deciding and need my final ruling :)
16:17:57 <gibi> moving on
16:18:04 <gibi> the placement gate status looks green
16:18:16 <gibi> any other topic about the gate?
16:19:25 <gibi> #topic Release Planning
16:19:29 <gibi> Milestone 2 is Thursday (15 of July) which is spec freeze
16:19:45 <gibi> we have couple of open specs
16:19:45 <gibi> https://review.opendev.org/q/project:openstack/nova-specs+status:open
16:19:55 <gibi> melwitt has two re-proposed specs that need a second core
16:20:33 <gibi> nova-audit https://review.opendev.org/c/openstack/nova-specs/+/800570
16:20:41 <gibi> and
16:20:43 <gibi> consumer types https://review.opendev.org/c/openstack/nova-specs/+/800569
16:21:14 <gibi> these are pretty easy to get in before the deadline I think
16:21:32 <gibi> there is one more spec with only positive feedback https://review.opendev.org/c/openstack/nova-specs/+/787458
16:21:40 <gibi> Integration With Off-path Network Backends ^^
16:22:15 <gibi> sean-k-mooney said in the review that we should wait for bauzas to review
16:22:31 <gibi> but I think bauzas is already on PTO
16:22:45 <stephenfin> he is
16:22:46 <sean-k-mooney> i think part of the concern there is the neutron approval and ovn support
16:23:08 <sean-k-mooney> we could approve it for this cycle but im not sure the dependencies will line up before feature freeze
16:23:34 <sean-k-mooney> so we could leave that to the implementation review and be optimistic or defer
16:23:43 <gibi> sean-k-mooney: could you please summarize this dependency complication in the spec review?
16:23:44 <sean-k-mooney> until everything is ready
16:23:55 <sean-k-mooney> gibi: its already captured in the review
16:24:05 <gibi> ohh, cool then I missed that
16:24:18 <sean-k-mooney> https://review.opendev.org/c/openstack/nova-specs/+/787458/9/specs/xena/approved/integration-with-off-path-network-backends.rst#654
16:24:43 <sean-k-mooney> we are not expecting the nova design to change
16:25:07 <sean-k-mooney> but we likely want to wait for the dependencies to be resolved before merging the nova code
16:25:49 <sean-k-mooney> gibi: i agree on nova-audit and consumer types
16:26:00 <sean-k-mooney> i think those could be progressed before spec freeze
16:26:00 <gibi> sean-k-mooney: thanks, I will read the last part of the discussion in the spec tomorrow and then try to decide if I upgrade my vote
16:26:58 <gibi> the rest of the open specs have negative feedback on them so they have little chance to land by Thursday but I will keep an eye on them
16:27:32 <gibi> anything else about the coming spec freeze?
16:28:50 <gibi> #topic Stable Branches
16:28:54 <gibi> stable gates are not blocked
16:28:59 <gibi> stable releases have been proposed:
16:29:08 <gibi> wallaby: https://review.opendev.org/800475
16:29:08 <gibi> victoria: https://review.opendev.org/800476
16:29:12 <gibi> ussuri: https://review.opendev.org/800477
16:29:16 <gibi> EOM (from elodilles )
16:29:31 * lyarwood has these open and will review before dropping for a few weeks later today
16:29:48 <gibi> I've already looked at the stable release patches and they look good to me. I will wait for lyarwood before approving
16:29:53 <elodilles> lyarwood: thanks in advance :)
16:30:02 <lyarwood> ack np, sorry I didn't get to them already
16:30:09 <gibi> lyarwood: thanks!
16:30:16 <gibi> anything else on stable land?
16:30:32 <elodilles> nothing from me
16:31:12 <gibi> I'm skipping the libvirt subteam topic as we don't have bauzas
16:31:17 <gibi> #topic Open discussion
16:31:25 <gibi> nothing on the agenda
16:31:30 <gibi> but I have one note
16:31:47 <gibi> I sent out a doodle about the coming PTG
16:32:00 <sean-k-mooney> ah i was going to ask about that, i saw the ooo one
16:32:14 <gibi> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023560.html
16:32:35 <gibi> in the ML thread I have a question about the number of slots we would need
16:32:54 <gibi> and also we have brand new PTG etherpad too https://etherpad.opendev.org/p/nova-yoga-ptg
16:33:21 <sean-k-mooney> i do think that 4 days with 4 hour slots is likely better yes
16:33:40 <sean-k-mooney> if it was in person we likely would do 3 8-hour slots minus breaks
16:35:07 <gibi> yeah my experience tells me we could use more time than 3x4 hours
16:35:18 <sean-k-mooney> when do you need to inform the organisers
16:35:36 <gibi> 21st of July
16:35:41 <gibi> so next week
16:35:42 <sean-k-mooney> ack
16:35:53 <gibi> but I think I can always add more slots and then remove them later
16:36:08 <sean-k-mooney> yep
16:36:20 <gibi> any other topic for today?
16:36:23 <alistarle> Hello guys, concerning this blueprint https://review.opendev.org/c/openstack/nova/+/794837, do you have any questions about it, and what is the next step?
16:37:21 <sean-k-mooney> looks like lee has a -1
16:37:32 * lyarwood clicks
16:37:32 * gibi looks
16:37:32 <sean-k-mooney> has this been discussed as a specless blueprint in the past
16:38:13 <alistarle> Yep, but I already answered in a previous comment I think, but I can sum it up for lee for sure
16:38:30 <lyarwood> alistarle: yeah sorry did I miss the justification for this?
16:38:44 <alistarle> What do you mean by a specless blueprint? Because yes, there is a blueprint for that: https://blueprints.launchpad.net/nova/+spec/configure-direct-snapshot
16:38:47 <lyarwood> alistarle: I couldn't work out how you would be able to do this without manually flattening disks
16:39:09 <gibi> sean-k-mooney: I think this is the time alistarle asks for the approval of the specless bp
16:39:43 <sean-k-mooney> alistarle: that is a blueprint, not a spec. a spec is a separate document that describes the use case and the design in detail
16:39:45 <sean-k-mooney> gibi: ack
16:40:01 <sean-k-mooney> alistarle: can you summarise what you want to enable
16:40:50 <alistarle> Sure, I want to make the direct snapshot feature configurable, because for now, nova will not honor the glance configuration when creating a snapshot, but will try to guess it from the actual nova backend
16:41:18 <dansmith> what does that mean? "not honor the glance configuration" ?
16:42:10 <sean-k-mooney> alistarle: is this a glance multistore environment? or a case where glance does not use ceph for storage?
16:42:22 <alistarle> Let's say you 1. Store your image on ceph, and use copy on write to create VM, so you are storing your VM disk as images child in RBD, but during life of your cloud you want to move out from ceph. So 2. You remove your rbd backend from glance and put a swift one instead
16:43:21 <alistarle> With this configuration, nova will never call glance to check which backend is enabled, and it will always store the snapshot in ceph, even if it is not a declared glance backend
16:43:21 <sean-k-mooney> alistarle: are we talking about cinder volumes for the vm or the rbd image backend?
16:43:29 <sean-k-mooney> in nova.conf
16:43:32 <lyarwood> sean-k-mooney: rbd image backend
16:44:01 <alistarle> It is for a case where you used RBD as a glance store, then you replaced it with a swift one, and you want to progressively move out from ceph
16:44:10 <sean-k-mooney> so the request is to effectively flatten the snapshots in ceph and then have new snapshots go to swift
16:44:18 <dansmith> alistarle: okay I thought I asked you this question on the spec
16:44:41 <sean-k-mooney> alistarle: well one thing we do not support is changing the nova image_types backend
16:45:01 <sean-k-mooney> including via a move operation like cold migrate
16:45:04 <lyarwood> sean-k-mooney: you wouldn't, that doesn't control where the image is
16:45:05 <dansmith> alistarle: maybe what you want is to be able to tell nova-compute which glance backend you want to snapshot to, to create a flattened snapshot in the desired store, which may not be the rbd one?
16:45:31 <sean-k-mooney> lyarwood: well im aware of that but the use case given was removing ceph
16:45:36 <alistarle> I think it is a good catch dansmith
16:45:38 <dansmith> telling it not to use rbd snapshotting "just because" is more confusing
16:45:42 <lyarwood> dansmith: does Glance support moving images between backends?
16:45:56 <dansmith> lyarwood: copying, yes, but that's not what I mean
16:46:00 <lyarwood> sean-k-mooney: dropping rbd just for Glance AFAICT
16:46:14 <sean-k-mooney> alistarle: is ^ the case
16:46:26 <dansmith> lyarwood: I meant tell nova "specifically create a flattened snapshot and upload it to a specific glance backend"
16:46:45 <sean-k-mooney> dansmith: that i think would be useful in other cases too
16:47:00 <dansmith> lyarwood: I don't think glance really exposes parent-child relationships anyway, so you couldn't expect to copy/move a hierarchy between backends
16:47:01 <sean-k-mooney> adding a glance store parameter to the snapshot api i think would be reasonable
16:47:03 <dansmith> either way,
16:47:14 <dansmith> I think it's quite clear that this can't be a specless bp :)
16:47:14 <alistarle> So you think it is better to change the snapshot API to allow users to choose which glance backend to use to snap the image ?
16:47:35 <alistarle> Yeah I see you point, for sure it will require a spec in that case
16:47:46 <dansmith> alistarle: that's not what I was suggesting, although that might also be an option.. the problem is, many ops will *not* want users to choose the pathologically terrible option, which you seem to want :)
16:48:10 <alistarle> But actually we will need the same code, so if a user explicitly specifies a glance store, we need to bypass the direct_snapshot process
16:48:11 <dansmith> I was suggesting a conf option for nova-compute to tell it "always snapshot to glance backend X" or something
16:48:34 <dansmith> instead of letting a user choose
16:48:54 <alistarle> Because as of now, the rbd image_backend does not care about glance config, it will magically decide to use ceph because the VM disk is stored in ceph
16:49:08 <sean-k-mooney> i kind of dislike doing this per host but i guess a host level config might be useful for an edge deployment
16:49:55 <sean-k-mooney> alistarle: correct although that is partly by design
16:50:01 <alistarle> Oh I see, so instead of putting a config option to "disable direct snapshot", we put an option to "choose a glance backend"
16:50:27 <alistarle> And if we specify this option, we skip the direct_snapshot to always call glance
16:50:37 <sean-k-mooney> alistarle: well not necessarily
16:50:48 <sean-k-mooney> if the specified backend was ceph
16:51:03 <dansmith> sean-k-mooney: letting users choose the backend on snapshot will mean they try all kinds of things that won't work or are terrible for performance
16:51:03 <sean-k-mooney> then direct would still make sense
16:51:06 <sean-k-mooney> if the vm was backed by that ceph cluster
16:51:14 <dansmith> sean-k-mooney: like a user who is currently on ceph always choosing the file backend, causing us to always flatten and upload when we shouldn't be
16:51:43 <sean-k-mooney> dansmith: yes although i was wondering if people would want to do that for data redundancy reasons
16:52:17 <sean-k-mooney> e.g. normally backup to the local edge site with a snapshot and occasionally do it to the central site
16:52:31 <dansmith> sean-k-mooney: I mean, it would be nice, but I think we'd need policy, defaults, and some sort of way to know which ones the compute can even talk to
16:52:48 <dansmith> sean-k-mooney: yeah, but not backup from one edge site to another.. that would be terribad
16:52:58 <dansmith> so we'd have to have some mapping of who can do what, etc
16:53:13 <sean-k-mooney> ya which is not really something a normal user would be aware of
16:53:20 <dansmith> for sure
16:53:34 <sean-k-mooney> so we have 2 potential host level config options
16:53:49 <sean-k-mooney> disabling direct snapshot, which to me feels more like a workaround option
16:53:58 <sean-k-mooney> and dansmith's glance backend option
16:54:13 <sean-k-mooney> *glance store
16:54:43 <dansmith> yup.. the former is a workaround for sure, and as we've noted here, hard to even grok what or why you'd want it,
16:54:52 <sean-k-mooney> im a little concerned the host level glance store option will have implications for shelve and cross cell migration
16:55:08 <dansmith> but the latter is at least useful for migrating in or out of some thing, or directing snapshots to an appropriate on- or off-site location depending
16:55:32 <dansmith> sean-k-mooney: well, whatever the default is today already does, AFAIK
16:55:51 <sean-k-mooney> i do think i prefer the store approach
16:56:06 <sean-k-mooney> ya fair we just assume that glance is accessible everywhere
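(For reference, the host-level option dansmith describes above might look something like the nova.conf sketch below. This is purely illustrative: the option name `snapshot_store` and the store name `swift-store` are invented here, and no such option exists in Nova at the time of this meeting.)

```ini
# Hypothetical sketch only -- "snapshot_store" is an invented option
# name, not a real Nova config option. It illustrates the idea of
# telling nova-compute to always flatten instance snapshots and upload
# them to a specific glance store, instead of letting the rbd image
# backend pick ceph via a direct (copy-on-write) snapshot.
[glance]
# Glance store to upload instance snapshots to.
snapshot_store = swift-store
```

(With something like this set, the rbd image backend would skip the direct snapshot path, except, as sean-k-mooney notes, when the configured store is the same ceph cluster backing the instance disks, where a direct snapshot would still make sense.)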
16:56:17 <gibi> we have 5 minutes left, is there any other topic today? if not then we can continue this of course
16:56:27 <dansmith> spec on this for sure tho
16:56:49 <gibi> the spec it is
16:56:53 <gibi> then
16:57:07 <gibi> alistarle: please note that we have spec freeze on Thursday for Xena
16:57:32 <lyarwood> Quick one from me, I'm out for the next ~2 or so weeks, stephenfin is babysitting some stuff while I'm gone.
16:57:48 <gibi> lyarwood: thanks for the headsup
16:58:27 <gibi> if nothing else for today, then I will close the meeting but you can continue discussing the snapshot issue
16:59:18 <gibi> then thanks for joining
16:59:21 <gibi> #endmeeting