16:00:03 <gibi> #startmeeting nova
16:00:03 <openstack> Meeting started Thu Jun 25 16:00:03 2020 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:06 <openstack> The meeting name has been set to 'nova'
16:00:39 <gibi> o/
16:00:41 <dansmith> o/
16:00:56 <gmann> o/
16:01:23 <gibi> #topic Bugs (stuck/critical)
16:01:30 <stephenfin> o/
16:01:32 <gibi> no critical bug
16:01:40 <gibi> #link 27 new untriaged bugs (+3 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
16:01:52 <gibi> I need help with triaging
16:01:58 <bauzas> \o
16:02:05 <gmann> we have one bug related to the Focal distro
16:02:22 * bauzas is tempted to say "I can" but I'll be on and off next week due to internal stuff
16:02:25 <gmann> #link https://bugs.launchpad.net/nova/+bug/1882521
16:02:25 <openstack> Launchpad bug 1882521 in OpenStack Compute (nova) "Failing device detachments on Focal" [Undecided,New]
16:02:49 <gibi> gmann: do you need some help with that bug?
16:02:56 <gmann> this is a dependency of migrating the CI/CD to Focal
16:03:12 <gmann> I checked the cinder logs and the whole flow but could not spot the issue
16:03:26 <gmann> some more eyes would be helpful
16:03:49 <gmann> the issue happens while the test cleanup tries to detach the volume, and the detach fails
16:05:03 <gibi> did you try to involve cinder folks in that investigation?
16:05:18 <gibi> I can try to look at it more closely tomorrow / next week
16:05:37 <gmann> not yet, cinder seems to finish the detachment on its side, but I can ask for their help
16:05:44 <gmann> thanks
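For anyone picking up bug 1882521: what fails is the test cleanup step that asks Nova to detach the volume and then waits for the volume to be released; the logs suggest Cinder completes its side, so the suspicion is on the Nova/compute side of the detach. Below is only a minimal, self-contained sketch of that kind of wait loop, to show where the failure surfaces during cleanup — the get_volume_status callable and the timeout value are illustrative assumptions, not the actual tempest code.

    import time

    class VolumeDetachTimeout(Exception):
        """Raised when a detached volume never becomes 'available'."""

    def wait_for_detach(get_volume_status, volume_id, timeout=300, interval=1):
        # get_volume_status: any callable returning the current volume status
        # string (e.g. a thin wrapper around the block-storage API).
        deadline = time.time() + timeout
        status = get_volume_status(volume_id)
        while time.time() < deadline:
            if status == 'available':
                return
            if status == 'error':
                raise VolumeDetachTimeout(
                    'volume %s went to error while detaching' % volume_id)
            time.sleep(interval)
            status = get_volume_status(volume_id)
        raise VolumeDetachTimeout(
            'volume %s is still %s after %s seconds'
            % (volume_id, status, timeout))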
16:06:27 <gibi> any other bug that needs attention?
16:07:15 <gibi> #topic Runways
16:07:20 <gibi> Etherpad is up for Victoria #link https://etherpad.opendev.org/p/nova-runways-victoria
16:07:25 <gibi> Both bp/rbd-glance-multistore and bp/use-pcpu-and-vcpu-in-one-instance were flushed from the slots as the patches have been approved (and the slots have expired)
16:07:29 <akhil-g> yeah just have a look at this bug https://bugs.launchpad.net/nova/+bug/1884068
16:07:29 <openstack> Launchpad bug 1884068 in OpenStack Compute (nova) "Instance stuck in build state when some/one compute node is unreachable" [Undecided,New]
16:08:22 <gibi> akhil-g: looking at the logs you posted I don't see any logs from your build request
16:09:43 <akhil-g> yeah, but those are the only logs I can find on my setup
16:10:04 <gibi> akhil-g: I will circle back to that bug tomorrow but I suspect some connectivity issue between the nova services.
16:11:01 <gibi> going back to the runways
16:11:10 <gibi> so I flushed the slots
16:11:20 <gibi> I think we made good progress on the patches in the slots
16:11:28 <gibi> There is one item in the queue #link https://review.opendev.org/#/q/topic:bp/add-emulated-virtual-tpm
16:11:46 <gibi> stephenfin: do you have time in the next two weeks to iterate on feedback about it
16:11:48 <gibi> ?
16:11:53 <stephenfin> I do
16:11:55 <gibi> cool
16:12:17 <gibi> is there at least one core (besides me) who is willing to look at that series in the next two weeks?
16:12:59 <stephenfin> I was hoping bauzas would be able to chip in, given the area of the code it's touching
16:13:12 <stephenfin> and lyarwood, who should be back next week
16:13:26 <gibi> lyarwood is back next week? awesome
16:13:36 <stephenfin> iirc, yup!
16:14:06 <gibi> OK I will rely on bauzas and lyarwood then
16:14:26 <bauzas> okay, I'll volunteer
16:14:30 <gibi> #action gibi to put bp/add-emulated-virtual-tpm to a runway slot
16:15:03 <gibi> If anybody has any ready patch series / feature then do not hesitate to put it into the queue, we still have two free slots
16:16:07 <gibi> #topic Release Planning
16:16:11 <sean-k-mooney> gibi: maybe the patches from aarents
16:16:26 <gibi> aarents: when you read this, please add them to the queue ^^
16:16:38 <sean-k-mooney> but that can be done async of the meeting
16:17:30 <gibi> OK, going back to the release planning
16:17:32 <gibi> 5 weeks until Victoria M2
16:17:38 <gibi> The Victoria community-wide goals are finalized: #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015459.html
16:17:45 <gibi> We need volunteers for:
16:17:49 <gibi> Switch legacy Zuul jobs to native
16:17:54 <gibi> Migrate CI/CD jobs to new Ubuntu LTS Focal
16:18:42 <gmann> I can help on the Focal migration, which will be more about testing things
16:18:53 <gibi> gmann: thanks
16:19:03 <gmann> and after that devstack and tempest jobs will be moved to Focal, which automatically switches nova's zuulv3 jobs to Focal
16:19:30 <gmann> but as per goal agreement, legacy jobs will need to migrate to zuulv3. legacy jobs will not be moved to focal
16:19:39 <sean-k-mooney> gmann: what legacy zuul jobs do we still have at this point
16:19:49 <gmann> I can try the zuulv3 migration but can't commit fully.
16:20:03 <gmann> sean-k-mooney: nova-live-migration and the nova-grenade job, I think
16:20:24 <sean-k-mooney> so grenade should be trivial now that there is a native grenade job
16:20:32 <gmann> grenade will be easy now as the grenade zuulv3 base job is backported to train
16:20:38 <sean-k-mooney> I think lyarwood had patches for the live migration job
16:20:39 <gmann> yeah
16:21:19 <gmann> I will check that patch state
16:21:57 <sean-k-mooney> so grenade should be v3 https://review.opendev.org/#/c/704364/
16:22:19 <gibi> lyarwood's patch #link https://review.opendev.org/#/c/711604/
16:22:24 <gmann> that is different, I mean the nova-grenade-multinode job
16:22:33 <gmann> gibi: yeah this one
16:22:57 <gmann> I think all these can go in one shot as all the legacy multinode jobs derive from the nova-legacy-base job
16:23:21 <sean-k-mooney> yes, probably
16:23:30 <gibi> OK. I see some movement around these topics, which is good. Let me know if you need my help
16:23:44 <gibi> any other release related topic?
16:24:24 <sean-k-mooney> do we need to do releases of the libs for M1?
16:24:35 <gibi> sean-k-mooney: those were proposed automatically
16:24:43 <sean-k-mooney> gibi: ok
16:24:56 <gibi> 16:14:25 <gibi> #link https://review.opendev.org/#/c/735716/
16:24:57 <gibi> 16:14:31 <gibi> #link https://review.opendev.org/#/c/735690/
16:25:19 <gibi> both merged
16:25:53 <gibi> OK moving forward
16:25:53 <sean-k-mooney> yep, I didn't see the os-vif one, but there was also no pending critical patch to hold it for, so it's fine
16:26:10 <gibi> sean-k-mooney: I checked and seemed good
16:26:21 <gibi> I will ping you next time
16:26:32 <sean-k-mooney> it's fine, I trust you :)
16:26:39 <gibi> :)
16:26:42 <gibi> #topic Stable Branches
16:26:59 <gibi> any stable news?
16:27:05 <gmann> I have some updates on stable gate status
16:27:11 <gibi> sure, go
16:27:17 <gmann> stable/train gate is up. The 'virtualenv not found' and neutron-grenade-multinode issues are fixed in the legacy base jobs and with the grenade backport. feel free to recheck.
16:27:42 <gmann> stable/stein gate need this to merge to be green - https://review.opendev.org/#/c/737332/
16:27:54 <elod> i've +W'd the stein patch
16:28:00 <gmann> and stable/rocky and older are all blocked until the devstack fix - #link https://review.opendev.org/#/c/735615/
16:28:04 <gmann> elod: thanks,
16:28:04 <elod> so it will be ok, too
16:28:48 <gibi> thank you all for working on unblocking the stable gates
16:28:49 <gmann> one strange thing happening on the pep8 cherry-pick checks - https://review.opendev.org/#/c/728057/
16:29:16 <elod> gmann: that's not strange :)
16:29:28 <elod> I guess that's the new check
16:29:31 <gmann> though all the previous backports are merged, the script still fails. melwitt and I were discussing it yesterday; it runs fine locally but on the gate it does not
16:29:42 <elod> and the patch was not merged when it ran
16:29:52 <gmann> elod: yeah, the new check is not working fine in the gate; the patch I mentioned should pass the check but it is failing
16:30:17 <elod> ok, thanks for the notice
16:30:33 <gibi> I hope dansmith can look at it as he was the author of that check
16:30:36 <elod> I'll try to figure out what the problem is there
16:30:41 <gmann> I suspect a rebase is needed: the check patch merged after the stable/train patch was proposed, so zuul merging those at pep8 job runtime is causing some strange behaviour
16:30:55 <sean-k-mooney> gmann: I need to update the commit message
16:30:59 <gmann> sean-k-mooney: can you rebase this and we will see if that runs fine
16:31:03 <sean-k-mooney> and fix the hashes i think
16:31:12 <sean-k-mooney> yep i can
16:31:14 <gmann> sean-k-mooney: no, the hashes are all fine :)
16:31:31 <gmann> I will rebase; that can give us more clarity
16:31:48 <sean-k-mooney> just did it in the ui
16:31:56 <gmann> cool, thanks.
16:32:06 <gibi> anything else for stable?
16:32:06 <gmann> that's all from my side.
16:32:58 * bauzas has to leave \o
16:33:01 <dansmith> that failure isn't finding the commit hashes in the commit message
16:33:03 <gibi> bauzas: thanks
16:33:17 <dansmith> it's not complaining that they're wrong, it's not seeing them for some reason
16:33:36 <gmann> dansmith: yeah, and if I run it locally it finds them correctly.
16:33:54 <gmann> let's see after rebase
16:33:58 <dansmith> okay
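For context on what is being debugged: the pep8 job runs a cherry-pick check that reads the commit message and verifies the referenced upstream hashes, and dansmith's observation above is that the failure is the "no hashes found" case rather than a wrong hash. Below is a rough Python sketch of that kind of validation, assuming the check greps for "(cherry picked from commit ...)" lines and resolves each hash with git — the actual script in the nova tree may differ in detail.

    import re
    import subprocess

    CHERRY_PICK_RE = re.compile(r'cherry picked from commit ([0-9a-f]{7,40})')

    def commit_exists(sha):
        # True if the local git repository can resolve the sha to a commit.
        return subprocess.call(
            ['git', 'cat-file', '-e', '%s^{commit}' % sha],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

    def check_cherry_pick_hashes(commit_message):
        hashes = CHERRY_PICK_RE.findall(commit_message)
        if not hashes:
            # This is the failure mode described above: the check does not
            # see any cherry-pick hashes in the message at all.
            raise SystemExit('no cherry-pick lines found in the commit message')
        missing = [sha for sha in hashes if not commit_exists(sha)]
        if missing:
            raise SystemExit('unknown cherry-pick hashes: %s' % ', '.join(missing))

    if __name__ == '__main__':
        message = subprocess.check_output(
            ['git', 'log', '-1', '--format=%B', 'HEAD']).decode()
        check_cherry_pick_hashes(message)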
16:34:03 <gibi> moving forward
16:34:05 <gibi> #topic Sub/related team Highlights
16:34:11 <gibi> API (gmann)
16:34:32 <gmann> one thing to share as progress is unified limit
16:34:34 <gmann> #link https://review.opendev.org/#/q/topic:bp/unified-limits-nova+(status:open+OR+status:merged)
16:35:06 <gmann> review is going good as gibi stephenfin melwitt and i also reviewed yesterday
16:35:06 <gibi> gmann: is it ready for a runway slot?
16:35:27 <gmann> I am waiting for johnthetubaguy to see if he can update the series
16:35:31 <stephenfin> I think we need johnthetubaguy to be around
16:35:35 <gmann> yeah
16:35:47 <gmann> once he updates the series, we can add it to a runway
16:35:59 <gibi> OK, cool
16:36:27 <gibi> Libvirt (bauzas)
16:36:44 <gibi> he did not mention any news to me on #openstack-nova
16:37:25 <gibi> #topic Stuck Reviews
16:37:32 <gibi> nothing on the agenda
16:37:37 <gibi> anything to bring up here?
16:38:01 <gibi> #topic Open discussion
16:38:12 <gibi> vinay_m: your turn
16:38:55 <vinay_m> https://bugs.launchpad.net/nova/+bug/1759839
16:38:55 <openstack> Launchpad bug 1759839 in OpenStack Compute (nova) "Documentation Error for Placement API Port Number" [Low,Triaged] - Assigned to vinay harsha mitta (vinay7)
16:39:51 <gibi> vinay_m: do you need help with that bug?
16:40:16 <stephenfin> oh, I looked at this
16:40:21 <stephenfin> context is (I suspect) https://review.opendev.org/731141
16:40:35 <vinay_m> https://bugs.launchpad.net/nova/+bug/1741329 .. this bug says quite the opposite of the above-mentioned bug
16:40:35 <openstack> Launchpad bug 1741329 in OpenStack Compute (nova) "Install and configure controller node for openSUSE and SUSE Linux Enterprise in nova" [Medium,Fix released] - Assigned to Andreas Jaeger (jaegerandi)
16:41:18 <stephenfin> so it seems the SUSE packages are using port 8780 for placement, while everything else uses 8778
16:41:34 <vinay_m> yes
16:41:34 <sean-k-mooney> what is the default in the code
16:41:57 <stephenfin> No idea. Not sure if there is one
16:42:42 <stephenfin> TripleO deploys to 8778 as does DevStack
16:43:00 <gibi> if our SUSE doc is aligned with the SUSE package, then I guess this is simply an invalid bug. Or am I missing something?
16:43:34 <stephenfin> Yes, https://bugs.launchpad.net/nova/+bug/1759839 looks invalid
16:43:34 <openstack> Launchpad bug 1759839 in OpenStack Compute (nova) "Documentation Error for Placement API Port Number" [Low,Triaged] - Assigned to vinay harsha mitta (vinay7)
16:43:48 <stephenfin> and https://bugs.launchpad.net/nova/+bug/1741329 *is* valid
16:43:48 <openstack> Launchpad bug 1741329 in OpenStack Compute (nova) "Install and configure controller node for openSUSE and SUSE Linux Enterprise in nova" [Medium,Fix released] - Assigned to Andreas Jaeger (jaegerandi)
16:43:52 <stephenfin> IMO
16:44:20 <gibi> stephenfin: I agree
16:44:24 <sean-k-mooney> so placement was only ever deployed via wsgi
16:44:44 <sean-k-mooney> so unlike the nova services it has never had a python wrapper that bound to a default port
16:44:55 <sean-k-mooney> so yeah, I tend to agree with stephenfin
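To illustrate sean-k-mooney's point: because placement has only ever been a WSGI application, the port is chosen by the deployment (the web server and the Keystone endpoint it registers), not by the service code, which is why SUSE's 8780 and everyone else's 8778 can both be valid. Below is a minimal sketch, assuming keystoneauth1 and the usual OS_* environment variables, showing that a client finds placement through the service catalog rather than through a hard-coded port.

    import os

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Build a session from the usual environment variables (assumed to be set).
    auth = v3.Password(
        auth_url=os.environ['OS_AUTH_URL'],
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        project_name=os.environ['OS_PROJECT_NAME'],
        user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
        project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
    sess = session.Session(auth=auth)

    # The endpoint (and therefore the port) comes from whatever the deployment
    # registered in the catalog for the 'placement' service type.
    resp = sess.get('/resource_providers',
                    endpoint_filter={'service_type': 'placement'})
    print(resp.json())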
16:45:20 <gibi> vinay_m: is it OK for you if I mark https://bugs.launchpad.net/nova/+bug/1759839 invalid?
16:45:20 <openstack> Launchpad bug 1759839 in OpenStack Compute (nova) "Documentation Error for Placement API Port Number" [Low,Triaged] - Assigned to vinay harsha mitta (vinay7)
16:46:47 <vinay_m> ok gibi, that was my doubt, whether that bug was invalid or not
16:47:01 <gibi> OK cool
16:47:20 <gibi> I will mark it after the meeting
16:47:31 <gibi> I have one more topic
16:47:33 <gibi> (gibi): VMware 3pp CI status: Based on #link https://review.opendev.org/#/c/734114/ the 3pp CI is green now. I will continue monitoring the situation through the V cycle.
16:48:02 <gibi> as far as I see there is a real effort now to make the CI green
16:48:53 <gibi> so I think if the greenness remains during the cycle, then I will propose an undeprecation patch
16:48:57 <sean-k-mooney> fyi the placement docs say it can be on a different port or no port https://github.com/openstack/placement/blame/master/doc/source/install/shared/endpoints.rst#L56
16:49:25 <sean-k-mooney> gibi: ya that is nice to see
16:50:05 <sean-k-mooney> by the way, http://ci-watch.tintri.com/ seems to be dead and I think the other one is too.
16:50:20 <sean-k-mooney> if I were to host a copy of it, would that be useful to people?
16:50:27 <gibi> this opens for me  http://ciwatch.mmedvede.net/project?project=nova
16:50:46 <sean-k-mooney> oh, that was dead the last time I hit it, but if it's working again, cool
16:51:34 <akhil-g> https://bugs.launchpad.net/nova/+bug/1801714      For this bug, I think that nova is not checking the availability of the compute node, so when it comes to scheduling and deployment there, it gets stuck
16:51:34 <openstack> Launchpad bug 1801714 in OpenStack Compute (nova) "Rebuild instance is stuck in rebuilding state when hosting Compute is powered off" [Medium,Confirmed] - Assigned to Akhil Gudise (akhil-g)
16:52:29 <akhil-g> because the host is not available, it's stuck in the rebuilding state
16:52:54 <gibi> akhil-g: as I said above we would need more logs. Something is clearly missing
16:53:29 <akhil-g> yeah but this is another bug gibi
16:54:10 <gibi> akhil-g: ahh
16:54:19 <gibi> akhil-g: is this the same deployment?
16:55:00 <gibi> ahh I see I was even able to reproduce it
16:55:09 <akhil-g> yeah, I've used the same setups
16:56:38 <gibi> so your suggestion is to add a check in the compute-api and reject the rebuild if the compute service is down?
16:57:59 <sean-k-mooney> is rebuild a cast?
16:58:01 <akhil-g> not exactly, but during scheduling can we eliminate unavailable hosts?
16:58:10 <sean-k-mooney> if it were an RPC call it would time out and fail, right?
16:58:27 <sean-k-mooney> akhil-g: which host is down
16:58:32 <sean-k-mooney> the one the vm is on?
16:58:39 <sean-k-mooney> akhil-g: rebuild is not a move action
16:58:53 <sean-k-mooney> so we only go to the scheduler to validate that the image is compatible
16:58:59 <sean-k-mooney> we don't ever move the VM to another host
16:59:11 <gibi> we have one minute left
16:59:12 <sean-k-mooney> that's the main difference between evacuate and rebuild
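Purely as a sketch of the check gibi asked about (rejecting the rebuild in the API layer when the host's compute service is down, instead of letting the request go into a cast and leaving the instance stuck in 'rebuilding'): the helper names below are invented for illustration and are not nova's real internal API.

    class ComputeServiceDown(Exception):
        """The instance's host cannot service the rebuild right now."""

    def rebuild_instance(instance, image_ref, is_compute_service_up, cast_rebuild):
        # is_compute_service_up: callable(host) -> bool, e.g. backed by the
        # compute service heartbeat records (hypothetical helper).
        # cast_rebuild: callable performing the asynchronous cast to the compute.
        if not is_compute_service_up(instance['host']):
            # Fail fast with a clear error instead of casting to a host that is
            # down; per the discussion above, rebuild stays on the same host, so
            # the user would need to evacuate to move the instance elsewhere.
            raise ComputeServiceDown(
                'compute service on %s is down; rebuild rejected'
                % instance['host'])
        cast_rebuild(instance, image_ref)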
16:59:45 <gibi> please continue discussing it on #openstack-nova
16:59:50 <akhil-g> ok, sorry, I mentioned it for the one which I reported earlier
16:59:57 <sean-k-mooney> sure :)
17:00:04 <gibi> time is up
17:00:07 <gibi> #endmeeting