16:00:10 <gibi> #startmeeting nova
16:00:11 <openstack> Meeting started Thu Nov 26 16:00:10 2020 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:14 <openstack> The meeting name has been set to 'nova'
16:00:19 <gibi> o/
16:00:37 <lpetrut> o/
16:01:23 <stephenfin> o/
16:01:36 <bauzas> \o
16:02:05 <gibi> #topic Bugs (stuck/critical)
16:02:10 <gibi> no critical bug
16:02:15 <gibi> #link 14 new untriaged bugs (+1 since the last meeting): #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
16:02:36 <gibi> it is a steady increase +1/week
16:02:49 <gibi> so we would only need a bit more effort to keep it stable
16:03:01 <gibi> #link 75 bugs are in INPROGRESS state without any tag (+0 since the last meeting): #link https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=INPROGRESS
16:03:18 <gibi> these are potentially un-triaged bugs. Check if they are still valid
16:03:36 <lyarwood> \o
16:03:48 <gibi> I'm tracking these to try not to forget they exist
16:04:20 <gibi> any specific bug we need to talk about?
16:04:43 <lyarwood> not a Nova bug
16:04:59 <lyarwood> but https://bugs.launchpad.net/tempest/+bug/1905725 is blocking the ceph job at the moment on master
16:05:01 <openstack> Launchpad bug 1905725 in tempest "test_attach_scsi_disk_with_config_drive fails during cleanup when using RBD for Glance and Nova" [Undecided,New] - Assigned to Lee Yarwood (lyarwood)
16:05:16 <lyarwood> https://review.opendev.org/c/openstack/tempest/+/764337 posted and I've asked for reviews in #openstack-qa
16:05:56 <gibi> I hope gmann can look at it
16:06:36 <gibi> for me the fix looks sensible
16:07:34 <gibi> any other bugs?
16:08:10 <gibi> #topic Gate status
16:08:17 <gibi> Classification rate 24% (-4 since the last meeting) #link http://status.openstack.org/elastic-recheck/data/integrated_gate.html
16:08:39 <lyarwood> I'll add an e-r signature for that last issue after the meeting
16:08:49 <gibi> lyarwood: thanks
16:09:03 <gibi> this is pretty low, and got even lower this week, meaning we have a lot of failures that have no e-r signature
16:09:13 <gibi> Please look at the gate failures, file a bug, and add an elastic-recheck signature in the opendev/elastic-recheck repo (example: #link https://review.opendev.org/#/c/759967)
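(For anyone who has not added one before: an elastic-recheck signature is a small YAML file under queries/ in the opendev/elastic-recheck repo, named after the Launchpad bug number, whose query is a Logstash search matching the failure. The sketch below is purely illustrative; the bug number, message text and job name are made up, see the linked review for a real example.)

    # queries/1900000.yaml -- hypothetical example; the file name is the
    # Launchpad bug number and the query is a Logstash search that should
    # match the failure's logs and nothing else
    query: >-
      message:"some distinctive error text from the failing test" AND
      build_name:"nova-live-migration" AND
      build_status:"FAILURE"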
16:09:53 <gibi> any gate bug we need to talk about?
16:10:35 <gibi> personally I got hit by https://bugs.launchpad.net/nova/+bug/1823251 this week so I spent days running func tests locally in random order but I still cannot reproduce it
16:10:36 <openstack> Launchpad bug 1823251 in OpenStack Compute (nova) "Spike in TestNovaMigrationsMySQL.test_walk_versions/test_innodb_tables failures since April 1 2019 on limestone-regionone" [High,Confirmed]
16:11:16 <gibi> what I see is that the test cases in question take a long time to run on the gate and the runtime also varies a lot
16:11:18 <stephenfin> I've seen a couple of the unit test failures due to eventlet, but I don't think there's much to be done about that that isn't serious work
16:11:24 <gibi> like 50sec - 800sec
16:11:35 <lyarwood> gibi: yeah I've also hit that one
16:11:54 <lyarwood> and the negative evacuation bug elod has listed in the stable section of the meeting
16:12:59 <gibi> lyarwood: for the evac one there is a comment in the bug with a suggestion
16:13:22 <gibi> or at least with an observation
16:13:55 <lyarwood> gibi: ack yeah I'm looking at it now, it's weird how this is being seen more on the older bionic nodes
16:14:06 <lyarwood> gibi: I'll push an ER and update the change shortly
16:14:19 <gibi> cool
16:14:20 <gibi> thanks
16:14:40 <gibi> any other gate failure we need to discuss?
16:15:15 <gmann> o/
16:15:25 <gibi> #topic Release Planning
16:15:29 <gmann> +2 on 764337, asked another core to +A it
16:15:36 <gibi> gmann: thanks!
16:15:39 <gibi> Wallaby Milestone 1 is on 3rd of December, which is 1 week from now
16:15:46 <gibi> A second spec review day has been proposed for 1st of Dec: #link http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018932.html
16:16:00 <gibi> which is next Tuesday
16:16:38 <gibi> is there any other release specific info?
16:17:10 <gibi> #topic Stable Branches
16:17:24 <gibi> elod (I assume) left a comment
16:17:25 <gibi> stable/victoria seems to be blocked by bug in live-migration job: https://bugs.launchpad.net/nova/+bug/1903979
16:17:26 <openstack> Launchpad bug 1903979 in OpenStack Compute (nova) "nova-live-migration job fails during evacuate negative test" [High,In progress] - Assigned to Lee Yarwood (lyarwood)
16:17:37 <gibi> we discussed this bug above
16:17:59 <gibi> anything else about stable?
16:18:55 <gibi> #topic Sub/related team Highlights
16:19:03 <gibi> Libvirt (bauzas)
16:19:17 <bauzas> nothing to report, sir.
16:19:30 <gibi> then moving on
16:19:30 <gibi> #topic Open discussion
16:19:37 <gibi> we have two items
16:19:41 <gibi> (lpetrut): Hyper-V RBD support
16:19:45 <gibi> The upcoming Ceph release will include RBD Windows support, so we'd like to add Hyper-V driver support
16:19:51 <gibi> specless bp: https://blueprints.launchpad.net/nova/+spec/hyperv-rbd
16:20:07 <gibi> lpetrut: as far as I understand it only impacts the hyperv driver
16:20:13 <lpetrut> yep
16:20:20 <lpetrut> the nova changes are minimal
16:20:34 <lpetrut> basically just letting the driver know that RBD is supported while os-brick handles the plumbing
16:20:56 <sean-k-mooney> my main concern was testing
16:21:09 <sean-k-mooney> i think it makes sense to proceed as a specless blueprint
16:21:19 <sean-k-mooney> but it would be nice to see it tested in third party ci
16:21:27 <sean-k-mooney> you explained you plan to test it in os-brick
16:21:42 <sean-k-mooney> so my main question is does the nova team feel that is enough
16:21:55 <sean-k-mooney> or do we want the same job to run on changes to the hyperv driver
16:22:07 <lpetrut> we already have a Hyper-V CI testing all the OpenStack projects that support Windows (e.g. nova, neutron, cinder, os-brick)
16:22:21 <sean-k-mooney> lpetrut: yep but you are only doing lvm testing
16:22:35 <lyarwood> sean-k-mooney: nope, they use os-brick on Windows now
16:22:49 <lyarwood> sean-k-mooney: so I assume this is using their full blown third party setup
16:23:01 <lpetrut> what Sean's saying is that the Nova job only covers iSCSI while we have os-brick test jobs for other backends
16:23:16 <sean-k-mooney> yep
16:23:26 <sean-k-mooney> there are 3 os-brick jobs for hyperv
16:23:55 <sean-k-mooney> only the iSCSI one triggers on nova
16:23:55 <sean-k-mooney> if we are fine with that then cool
16:24:11 <lpetrut> we're trying to limit the number of jobs because of the available infrastructure. any nova change that breaks other storage backends will be caught by os-brick
16:24:35 <sean-k-mooney> only after the change merges
16:24:49 <lpetrut> that's ok, 3rd party CIs aren't voting anyway
16:24:59 <sean-k-mooney> right but we do look at them
16:25:19 <sean-k-mooney> as I said I'm +1 on this proposal
16:25:24 <sean-k-mooney> just wanted to bring up that point
16:25:37 <lyarwood> ah right sorry misunderstood the issue, I'd assume we would want a simple third party job still to catch any virt driver breakage early
16:25:44 <lyarwood> within Nova
16:25:47 <lpetrut> sean-k-mooney: definitely, thanks for raising those concerns
16:26:08 <sean-k-mooney> lyarwood: that was basically what I was asking
16:26:34 <lyarwood> right, I'd vote to have it assuming rbd will be as popular with hyperv as it is with libvirt
16:26:45 <lyarwood> but it shouldn't block it either way
16:26:51 <lyarwood> as long as it's covered somewhere
16:27:17 <sean-k-mooney> yep if capacity is a concern then I think it's fine to rely on os-brick but even a nightly or weekly nova run of all 3 jobs might be nice
16:27:19 <gibi> I'm OK to go forward with the bp. lpetrut if it is possible try to provide CI coverage in nova for the RBD track too
16:28:00 <gibi> a periodic run like sean-k-mooney suggests should put only minimal extra load on your infra
16:28:30 <lpetrut> gibi: you mean testing RBD for one in every X patches ?
16:28:32 <lpetrut> that would work
16:28:57 <sean-k-mooney> lpetrut: yep so trigger a run once a week or once a night
16:29:04 <gibi> lpetrut: running the RBD job once a day
16:29:10 <gibi> or so
16:29:34 <lpetrut> ok, we can do that
16:29:35 <sean-k-mooney> lpetrut: it's mainly to narrow down the set of possible problem patches
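(To illustrate the kind of periodic trigger being discussed: if the third-party CI happened to be Zuul-based, which is an assumption here, a nightly run of the extra RBD job could be wired up roughly as below; a Jenkins-based CI would use a cron trigger instead. The pipeline and job names are hypothetical.)

    # Hypothetical Zuul sketch for a third-party CI: run the full RBD job
    # once a night against nova instead of on every patch, to limit load
    - pipeline:
        name: periodic-nightly
        manager: independent
        trigger:
          timer:
            - time: '0 2 * * *'   # 02:00 UTC every day

    - project:
        name: openstack/nova
        periodic-nightly:
          jobs:
            - hyperv-rbd-tempest   # hypothetical job name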
16:30:11 <gibi> then I don't see any objections. So I will approve the bp without a spec after the meeting
16:30:17 <gibi> lpetrut: thanks for pushing this
16:30:42 <lpetrut> thanks!
16:30:50 <sean-k-mooney> https://review.opendev.org/c/openstack/nova/+/763550 is the change for those that are interested in reviewing
16:31:16 <gibi> sean-k-mooney: you have a topic too
16:31:44 <lpetrut> one difference compared to libvirt is that we're mounting the disk; hyper-v can't use librbd the way qemu does, so we're exposing a virtual SCSI disk
16:31:53 <sean-k-mooney> yes, basically for the libvirt driver we currently use the metadata element in the xml to store some useful info for debugging
16:32:21 <lpetrut> gibi: I've just set the topic to "hyperv-rbd"
16:32:40 <gibi> lpetrut: ack thanks, the implementation details can be discussed in the review
16:32:53 <gibi> sean-k-mooney: yes
16:32:58 <lpetrut> definitely, just thought that it might be worth mentioning
16:33:06 <sean-k-mooney> was waiting for previous topic to finish
16:33:38 <gibi> lpetrut: thanks
16:33:47 <gibi> sean-k-mooney: I think we can move to your topic
16:33:50 <sean-k-mooney> lpetrut: we do something similar in libvirt in that the guest sees a virtio or scsi device depending on the bus selected; anyway I don't think that is a problem
16:33:52 <sean-k-mooney> cool
16:34:13 <sean-k-mooney> so yeah, today we store info like the flavor name and how many cpus we have in the metadata in the libvirt xml
16:34:39 <sean-k-mooney> but we don't store the flavor extra specs or image properties
16:35:01 <sean-k-mooney> when we review reports from customers we often have the logs
16:35:09 <sean-k-mooney> with the xmls but no access to the api to get the flavors
16:35:31 <sean-k-mooney> so I want to start storing the flavor extra specs and image properties in the xml
16:35:41 <sean-k-mooney> and normalise some of the other elements
16:35:54 <sean-k-mooney> for example we have the flavor name but not the uuid, and the image uuid but not the name
16:35:59 <sean-k-mooney> so I want to put both
16:36:06 <gibi> makes sense to me
16:36:24 <sean-k-mooney> I would like to do this as a specless blueprint and discuss it in the code patch but I can do a small spec if required
16:36:35 <lyarwood> I'm okay with that if it doesn't bloat the XML
16:36:45 <lyarwood> otherwise we can just log it at an INFO level within the driver
16:36:51 <lyarwood> during the build
16:36:57 <sean-k-mooney> it should not extend it much
16:36:58 <lyarwood> but I guess that doesn't persist as well
16:37:17 <lyarwood> as logs rotate etc
16:37:23 <sean-k-mooney> we don't typically have more than ~10 extra specs
16:37:23 <lyarwood> kk
16:37:37 <sean-k-mooney> and the xml is only logged at debug level
16:37:50 <lyarwood> right but if we ever need it again we can just dump it from the domain
16:37:53 <gibi> sean-k-mooney: I'm OK to have this as a specless bp as this only changes a freeform part of the domain xml
16:38:22 <sean-k-mooney> yep, it's technically not part of any public api we provide, it's just debug metadata
16:38:44 <sean-k-mooney> ok I'll file the blueprint after the meeting and submit a patch
16:39:21 <sean-k-mooney> we can discuss further in gerrit and if there are concerns we can create a spec if needed
16:39:24 <gibi> also, as we discussed earlier, when the flavor and image data change the domain is regenerated too
16:39:32 <gibi> so it is easy to keep it up to date
16:39:42 <sean-k-mooney> yep, in rebuild and resize we already regenerate the xml and metadata
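(For context on what is already there: the nova metadata element in the libvirt domain XML currently looks roughly like the snippet below. The trailing flavor-extra-specs/image block is only a hypothetical illustration of the kind of additions being proposed, i.e. flavor extra specs, image properties, and both name and uuid for flavor and image; the actual element names will be settled in the code review.)

    <metadata>
      <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
        <nova:package version="..."/>
        <nova:name>instance-00000042</nova:name>
        <nova:flavor name="m1.small">
          <nova:memory>2048</nova:memory>
          <nova:disk>20</nova:disk>
          <nova:vcpus>2</nova:vcpus>
        </nova:flavor>
        <nova:owner>
          <nova:user uuid="...">demo</nova:user>
          <nova:project uuid="...">demo</nova:project>
        </nova:owner>
        <nova:root type="image" uuid="..."/>
        <!-- hypothetical additions: flavor uuid + extra specs, image name +
             properties, so the info is recoverable from the XML alone -->
        <nova:flavor_extra_specs>
          <nova:entry name="hw:cpu_policy">dedicated</nova:entry>
        </nova:flavor_extra_specs>
        <nova:image name="cirros-0.5.1" uuid="...">
          <nova:entry name="hw_machine_type">q35</nova:entry>
        </nova:image>
      </nova:instance>
    </metadata>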
16:40:26 <gibi> Ok
16:40:32 <gibi> anything else for today?
16:41:06 <sean-k-mooney> that looks like a no
16:41:27 <sean-k-mooney> nothing else from me in any case
16:41:28 <gibi> then thanks for joining today
16:41:41 <sean-k-mooney> o/
16:41:43 <gibi> have a nice rest of the day!
16:41:48 <gibi> #endmeeting