16:00:19 #startmeeting nova
16:00:19 Meeting started Tue Jul 13 16:00:19 2021 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:19 The meeting name has been set to 'nova'
16:01:26 o/
16:01:55 o/
16:02:14 o/
16:02:19 o/
16:02:41 o/
16:02:52 o/
16:03:18 #topic Bugs (stuck/critical)
16:03:26 no critical bug
16:03:33 14 new untriaged bugs (-11 since the last meeting): #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
16:03:41 \o
16:03:43 thanks for the triage!
16:03:47 (whoever did it)
16:04:05 is there any specific bug that we need to discuss?
16:04:36 #topic Gate status
16:04:44 Nova gate bugs #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure
16:04:54 nice! (bug triage)
16:05:06 * lyarwood did a little
16:05:37 looking at the gate bug list I don't see new bugs there
16:05:42 subjectively, I'm still seeing a decent amount of nova-live-migration failures
16:05:52 though it does feel like it's slightly better
16:06:04 I'm assuming this is still the volume detach issue
16:06:36 I've not had a chance to look tbh
16:06:37 I had no time to look at the gate since the last meeting so I cannot comment
16:06:56 ah, sorry, that was more of a statement than a question
16:07:04 https://zuul.opendev.org/t/openstack/builds?project=openstack%2Fnova&branch=master&pipeline=gate&result=failure at least we haven't blocked anything
16:08:01 there is one gate fix from melwitt https://review.opendev.org/c/openstack/nova/+/800313 that needs a second core
16:08:14 https://zuul.opendev.org/t/openstack/builds?job_name=nova-live-migration&project=openstack%2Fnova&branch=master&pipeline=check&result=failure but yeah there's a number of LM failures in check
16:08:20 I'll grab that
16:08:25 stephenfin: thanks
16:08:27 related: the legacy jobs removal moved forward, the stable/train backport
just requires a few small fixes: https://review.opendev.org/c/openstack/nova/+/795435
16:08:36 thought I already had, tbh...
16:09:53 lyarwood: thanks for the links
16:09:56 tosky: train is not blocked, do we need that backport there?
16:10:55 elodilles: I'd say it's worth it: it should help people who keep an eye on that branch when a failure happens
16:12:23 i see.... it's just.... really not that nice of a patch :]
16:12:52 for backport :X
16:13:32 elodilles: so far we backported that to newer stable branches to unblock the gate there?
16:13:56 well, it became like that in ussuri
16:14:11 in newer branches it was a *bit* nicer
16:14:37 but yes, we did backport it so far
16:14:47 until ussuri
16:14:58 since the gate was broken back to ussuri
16:15:50 so the gate is OK in train, so this becomes a nice-to-have compared to ussuri where it was a must-have
16:16:00 anyhow, I'll let the stable core team decide
16:17:40 let me know if you get stuck deciding and need my final ruling :)
16:17:57 moving on
16:18:04 the placement gate status looks green
16:18:16 any other topic about the gate?
16:19:25 #topic Release Planning
16:19:29 Milestone 2 is Thursday (15th of July), which is spec freeze
16:19:45 we have a couple of open specs
16:19:45 https://review.opendev.org/q/project:openstack/nova-specs+status:open
16:19:55 melwitt has two re-proposed specs that need a second core
16:20:33 nova-audit https://review.opendev.org/c/openstack/nova-specs/+/800570
16:20:41 and
16:20:43 consumer types https://review.opendev.org/c/openstack/nova-specs/+/800569
16:21:14 these are pretty easy to get in before the deadline I think
16:21:32 there is one more spec with only positive feedback https://review.opendev.org/c/openstack/nova-specs/+/787458
16:21:40 Integration With Off-path Network Backends ^^
16:22:15 sean-k-mooney said in the review that we should wait for bauzas to review
16:22:31 but I think bauzas is already on PTO
16:22:45 he is
16:22:46 i think part of the concern there is the neutron approval and OVN support
16:23:08 we could approve it for this cycle but i'm not sure the dependencies will line up before feature freeze
16:23:34 so we could leave that to the implementation review and be optimistic, or defer
16:23:43 sean-k-mooney: could you please summarize this dependency complication in the spec review?
16:23:44 until everything is ready
16:23:55 gibi: it's already captured in the review
16:24:05 ohh, cool, then I missed that
16:24:18 https://review.opendev.org/c/openstack/nova-specs/+/787458/9/specs/xena/approved/integration-with-off-path-network-backends.rst#654
16:24:43 we are not expecting the nova design to change
16:25:07 but we likely want to wait for the dependencies to be resolved before merging the nova code
16:25:49 gibi: i agree on nova-audit and consumer types
16:26:00 i think those could be progressed before spec freeze
16:26:00 sean-k-mooney: thanks, I will read the last part of the discussion in the spec tomorrow and then try to decide if I upgrade my vote
16:26:58 the rest of the open specs have negative feedback on them so they have little chance to land by Thursday, but I will keep an eye on them
16:27:32 anything else about the coming spec freeze?
16:28:50 #topic Stable Branches
16:28:54 stable gates are not blocked
16:28:59 stable releases have been proposed:
16:29:08 wallaby: https://review.opendev.org/800475
16:29:08 victoria: https://review.opendev.org/800476
16:29:12 ussuri: https://review.opendev.org/800477
16:29:16 EOM (from elodilles)
16:29:31 * lyarwood has these open and will review before dropping off for a few weeks later today
16:29:48 I've already looked at the stable release patches and they look good to me. I will wait for lyarwood before approving
16:29:53 lyarwood: thanks in advance :)
16:30:02 ack np, sorry I didn't get to them already
16:30:09 lyarwood: thanks!
16:30:16 anything else on stable land?
16:30:32 nothing from me
16:31:12 I'm skipping the libvirt subteam topic as we don't have bauzas
16:31:17 #topic Open discussion
16:31:25 nothing on the agenda
16:31:30 but I have one note
16:31:47 I sent out a doodle about the coming PTG
16:32:00 ah, i was going to ask about that, i saw the ooo one
16:32:14 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023560.html
16:32:35 in the ML thread I have a question about the amount of slots we would need
16:32:54 and also we have a brand new PTG etherpad too https://etherpad.opendev.org/p/nova-yoga-ptg
16:33:21 i do think that 4 days with a 4 hour slot is likely better, yes
16:33:40 if it was in person we likely would do 3 8-hour slots - breaks
16:35:07 yeah, my experience tells me we could use more time than 3x4 hours
16:35:18 when do you need to inform the organisers?
16:35:36 21st of July
16:35:41 so next week
16:35:42 ack
16:35:53 but I think I can always add more slots then remove them later
16:36:08 yep
16:36:20 any other topic for today?
16:36:23 Hello guys, concerning this blueprint https://review.opendev.org/c/openstack/nova/+/794837, do you have any question about it, and what is the next step?
16:37:21 looks like lee has a -1
16:37:32 * lyarwood clicks
16:37:32 * gibi looks
16:37:32 has this been discussed as a specless blueprint in the past?
16:38:13 Yep, but I already answered in a previous comment I think, but I can sum up for lee for sure
16:38:30 alistarle: yeah sorry, did I miss the justification for this?
16:38:44 What do you mean by a specless blueprint? Because yes, there is a blueprint for that: https://blueprints.launchpad.net/nova/+spec/configure-direct-snapshot
16:38:47 alistarle: I couldn't work out how you would be able to do this without manually flattening disks
16:39:09 sean-k-mooney: I think this is the time alistarle asks for the approval of the specless bp
16:39:43 alistarle: that is a blueprint, not a spec.
a spec is a separate document that describes the use case in detail and the design
16:39:45 gibi: ack
16:40:01 alistarle: can you summarise what you want to enable?
16:40:50 Sure, I want to make the direct snapshot feature configurable, because for now nova will not honor the glance configuration when creating a snapshot, but will try to guess it from the actual nova backend
16:41:18 what does that mean? "not honor the glance configuration"?
16:42:10 alistarle: is this a glance multistore environment? or a case where glance does not use ceph for storage?
16:42:22 Let's say you 1. store your images on ceph, and use copy-on-write to create VMs, so you are storing your VM disks as image children in RBD, but during the life of your cloud you want to move out of ceph. So 2. you remove your rbd backend from glance and put a swift one instead
16:43:21 With this configuration, nova will never call glance to check which backend is enabled, and it will always store the snapshot in ceph, even if it is not a declared glance backend
16:43:21 alistarle: are we talking about cinder volumes for the vm or the rbd image backend?
16:43:29 in nova.conf
16:43:32 sean-k-mooney: rbd image backend
16:44:01 It is for a case where you used RBD as a glance store, then you replace it with a swift one, and you want to progressively move out of ceph
16:44:10 so the request is to effectively flatten the snapshots in ceph and then have new snapshots go to swift
16:44:18 alistarle: okay, I thought I asked you this question on the spec
16:44:41 alistarle: well, one thing we do not support is changing the nova image_types backend
16:45:01 including via a move operation like cold migrate
16:45:04 sean-k-mooney: you wouldn't, that doesn't control where the image is
16:45:05 alistarle: maybe what you want is to be able to tell nova-compute which glance backend you want to snapshot to, to create a flattened snapshot in the desired store, which may not be the rbd one?
16:45:31 lyarwood: well, im aware of that but the use case given was removing ceph
16:45:36 I think it is a good catch dansmith
16:45:38 telling it not to use rbd snapshotting "just because" is more confusing
16:45:42 dansmith: does Glance support moving images between backends?
16:45:56 lyarwood: copying, yes, but that's not what I mean
16:46:00 sean-k-mooney: dropping rbd just for Glance AFAICT
16:46:14 alistarle: is ^ the case?
16:46:26 lyarwood: I meant tell nova "specifically create a flattened snapshot and upload it to a specific glance backend"
16:46:45 dansmith: that i think would be useful in other cases too
16:47:00 lyarwood: I don't think glance really exposes parent-child relationships anyway, so you couldn't expect to copy/move a hierarchy between backends
16:47:01 adding a glance store parameter to the snapshot api i think would be reasonable
16:47:03 either way,
16:47:14 I think it's quite clear that this can't be a specless bp :)
16:47:14 So you think it is better to change the snapshot API to allow users to choose which glance backend to use to snap the image?
16:47:35 Yeah, I see your point, for sure it will require a spec in that case
16:47:46 alistarle: that's not what I was suggesting, although that might also be an option..
the problem is, many ops will *not* want users to choose the pathologically terrible option, which you seem to want :)
16:48:10 But actually we will need to do the same code, so if a user explicitly specifies a glance store, we need to bypass the direct_snapshot process
16:48:11 I was suggesting a conf option for nova-compute to tell it "always snapshot to glance backend X" or something
16:48:34 instead of letting a user choose
16:48:54 Because as of now, the rbd image_backend does not care about the glance config, it will magically decide to use ceph because the VM disk is stored in ceph
16:49:08 i kind of dislike doing this per host but i guess a host-level config might be useful for an edge deployment
16:49:55 alistarle: correct, although that is partly by design
16:50:01 Oh I see, so instead of putting in a config option to "disable direct snapshot", we put in an option to "choose a glance backend"
16:50:27 And if we specify this option, we skip the direct_snapshot and always call glance
16:50:37 alistarle: well, not necessarily
16:50:48 if the specified backend was ceph
16:51:03 sean-k-mooney: letting users choose the backend on snapshot will mean they try all kinds of things that won't work or are terrible for performance
16:51:03 then direct would still make sense
16:51:06 if the vm was backed by that ceph cluster
16:51:14 sean-k-mooney: like a user who is currently on ceph always choosing the file backend, causing us to always flatten and upload when we shouldn't be
16:51:43 dansmith: yes, although i was wondering if people would want to do that for data redundancy reasons
16:52:17 e.g. normally backup to the local edge site with a snapshot and occasionally do it to the central site
16:52:31 sean-k-mooney: I mean, it would be nice, but I think we'd need policy, defaults, and some sort of way to know which ones the compute can even talk to
16:52:48 sean-k-mooney: yeah, but not backup from one edge site to another..
that would be terribad
16:52:58 so we'd have to have some mapping of who can do what, etc
16:53:13 ya, which is not really something a normal user would be aware of
16:53:20 for sure
16:53:34 so we have 2 potential host-level config options
16:53:49 disabling direct snapshot, which to me feels more like a workaround option
16:53:58 and dansmith's glance backend option
16:54:13 *glance store
16:54:43 yup.. the former is a workaround for sure, and as we've noted here, hard to even grok what or why you'd want it,
16:54:52 im a little concerned the host-level glance store option will have implications for shelve and cross-cell migration
16:55:08 but the latter is at least useful for migrating in or out of something, or directing snapshots to an appropriate on- or off-site location depending
16:55:32 sean-k-mooney: well, whatever the default is today already does, AFAIK
16:55:51 i do think i prefer the store approach
16:56:06 ya, fair, we just assume that glance is accessible everywhere
16:56:17 we have 5 minutes left, is there any other topic today? if not then we can continue this of course
16:56:27 spec on this for sure tho
16:56:49 the spec it is
16:56:53 then
16:57:07 alistarle: please note that we have spec freeze on Thursday for Xena
16:57:32 Quick one from me, I'm out for the next ~2 or so weeks, stephenfin is babysitting some stuff while I'm gone.
16:57:48 lyarwood: thanks for the headsup
16:58:27 if nothing else for today, then I will close the meeting but you can continue discussing the snapshot issue
16:59:18 then thanks for joining
16:59:21 #endmeeting
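
[Editor's note] The "glance store" option discussed above can be sketched in pseudocode-style Python. This is purely an illustration of the decision logic the meeting converged on, not actual nova code: the function name, the `configured_store` option, and the store mapping are all invented for clarity, and the real libvirt rbd imagebackend paths differ.

```python
def choose_snapshot_path(vm_backend, configured_store, glance_stores):
    """Decide between a direct (ceph COW) snapshot and a flatten+upload.

    vm_backend:       backend holding the VM disk, e.g. 'rbd' or 'qcow2'
    configured_store: hypothetical operator-set target glance store, or None
    glance_stores:    mapping of glance store name -> backend type
    Returns 'direct' or 'upload'.
    """
    if configured_store is None:
        # Today's behaviour as described in the meeting: nova guesses from
        # its own backend and never consults glance, so an rbd-backed VM
        # always snapshots directly into ceph, even if glance no longer
        # declares an rbd store.
        return 'direct' if vm_backend == 'rbd' else 'upload'

    target = glance_stores.get(configured_store)
    if vm_backend == 'rbd' and target == 'rbd':
        # Target store is still ceph: a direct COW clone still makes sense
        # (sean-k-mooney's point above), so don't flatten unnecessarily.
        return 'direct'

    # Otherwise flatten the disk and upload it to the chosen store
    # (e.g. swift) through the regular glance API.
    return 'upload'
```

For the migration scenario alistarle described (VM disks on ceph, glance moved to swift), `choose_snapshot_path('rbd', 'swift_store', {'swift_store': 'swift'})` would return `'upload'`, while with no store configured the current guess-from-backend behaviour is kept.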