16:00:03 #startmeeting nova
16:00:03 Meeting started Thu Jun 25 16:00:03 2020 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:06 The meeting name has been set to 'nova'
16:00:39 o/
16:00:41 o/
16:00:56 o/
16:01:23 #topic Bugs (stuck/critical)
16:01:30 o/
16:01:32 no critical bug
16:01:40 #link 27 new untriaged bugs (+3 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
16:01:52 I need help with triaging
16:01:58 \o
16:02:05 we have one bug for the focal distro
16:02:22 * bauzas is tempted to say "I can" but I'll be on and off next week due to internal stuff
16:02:25 #link https://bugs.launchpad.net/nova/+bug/1882521
16:02:25 Launchpad bug 1882521 in OpenStack Compute (nova) "Failing device detachments on Focal" [Undecided,New]
16:02:49 gmann: do you need some help with that bug?
16:02:56 this is a dependency of migrating the CI/CD to focal
16:03:12 i checked the cinder logs and the whole flow but could not spot the issue
16:03:26 some more eyes would be helpful
16:03:49 the issue happens while test cleanup tries to detach the volume and the detach fails
16:05:03 did you try to involve cinder folks in that investigation?
16:05:18 I can try to look at it more closely tomorrow / next week
16:05:37 not yet, cinder seems to finish the detachment but i can ask for their help
16:05:44 thanks
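The detach failure described above bites during test cleanup because detaching a volume is asynchronous: the API accepts the request and the volume only later leaves the 'detaching' state, so a cleanup step that checks too early (or not at all) still sees the old attachment. Below is a minimal polling sketch of the kind of wait such a cleanup needs; get_volume_status is a hypothetical helper standing in for a real volume API call, and the status names are illustrative assumptions, not details taken from the bug.

    import time

    def get_volume_status(volume_id):
        # Hypothetical helper: a real test would call the volume API
        # (e.g. GET /volumes/{id}) and return the current status string.
        raise NotImplementedError("replace with a real API call")

    def wait_for_detach(volume_id, timeout=300, interval=5):
        """Poll until the volume is detached, instead of assuming the
        detach request completed synchronously."""
        deadline = time.time() + timeout
        status = get_volume_status(volume_id)
        while status not in ('available', 'error'):
            if time.time() > deadline:
                raise TimeoutError(
                    f"volume {volume_id} still {status!r} after {timeout}s")
            time.sleep(interval)
            status = get_volume_status(volume_id)
        if status == 'error':
            raise RuntimeError(f"volume {volume_id} went to error during detach")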
16:06:27 any other bug that needs attention?
16:07:15 #topic Runways
16:07:20 Etherpad is up for Victoria #link https://etherpad.opendev.org/p/nova-runways-victoria
16:07:25 Both bp/rbd-glance-multistore and bp/use-pcpu-and-vcpu-in-one-instance were flushed from the slots as the patches have been approved (and the slots expired)
16:07:29 yeah just have a look at this bug https://bugs.launchpad.net/nova/+bug/1884068
16:07:29 Launchpad bug 1884068 in OpenStack Compute (nova) "Instance stuck in build state when some/one compute node is unreachable" [Undecided,New]
16:08:22 akhil-g: looking at the logs you posted I don't see any logs from your build request
16:09:43 yeah but I find only those logs on my setup
16:10:04 akhil-g: I will circle back to that bug tomorrow but I suspect some connectivity issue between the nova services.
16:11:01 going back to the runways
16:11:10 so I flushed the slots
16:11:20 I think we made good progress on the patches in the slots
16:11:28 There is one item in the queue #link https://review.opendev.org/#/q/topic:bp/add-emulated-virtual-tpm
16:11:46 stephenfin: do you have time in the next two weeks to iterate on feedback about it?
16:11:53 I do
16:11:55 cool
16:12:17 is there at least one core (besides me) who is willing to look at that series in the next two weeks?
16:12:59 I was hoping bauzas would be able to chip in, given the area of the code it's touching
16:13:12 and lyarwood, who should be back next week
16:13:26 lyarwood is back next week? awesome
16:13:36 iirc, yup!
16:14:06 OK I will rely on bauzas and lyarwood then
16:14:26 okay, I'll volunteer
16:14:30 #action gibi to put bp/add-emulated-virtual-tpm to a runway slot
16:15:03 If anybody has a ready patch series / feature then do not hesitate to put it into the queue, we still have two free slots
16:16:07 #topic Release Planning
16:16:11 gibi: maybe the patches from aarents
16:16:26 aarents: when you read this, please add them to the queue ^^
16:16:38 but that can be done async from the meeting
16:17:30 OK, going back to the release planning
16:17:32 5 weeks until Victoria M2
16:17:38 The Victoria community-wide goals are finalized: #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015459.html
16:17:45 We need volunteers for:
16:17:49 Switch legacy Zuul jobs to native
16:17:54 Migrate CI/CD jobs to new Ubuntu LTS Focal
16:18:42 I can help on the focal migration, which will be more about testing things
16:18:53 gmann: thanks
16:19:03 and after that devstack and tempest jobs will be moved to focal, which automatically switches the nova zuulv3 jobs to focal
16:19:30 but as per the goal agreement, legacy jobs will need to migrate to zuulv3. legacy jobs will not be moved to focal
16:19:39 gmann: what legacy zuul jobs do we still have at this point
16:19:49 i can try the zuulv3 migration but can't commit fully.
16:20:03 sean-k-mooney: nova-live-migration and the nova-grenade job i think
16:20:24 so grenade should be trivial now that there is a native grenade job
16:20:32 grenade will be easy now as the grenade zuulv3 base job is backported down to train
16:20:38 i think lyarwood had patches for the live migration job
16:20:39 yeah
16:21:19 I will check that patch's state
16:21:57 so grenade should be v3 https://review.opendev.org/#/c/704364/
16:22:19 lyarwood's patch #link https://review.opendev.org/#/c/711604/
16:22:24 that is different, i mean the nova-grenade-multinode job
16:22:33 gibi: yeah this one
16:22:57 i think all of these can go in one shot as all legacy multinode jobs are derived from the nova-legacy-base job
16:23:21 yes probably
16:23:30 OK. I see some movement around these topics, that is good. Let me know if you need my help
16:23:44 any other release related topic?
16:24:24 do we need to do releases for the libs for m1
16:24:35 sean-k-mooney: those were proposed automatically
16:24:43 gibi: ok
16:24:56 16:14:25 #link https://review.opendev.org/#/c/735716/
16:24:57 16:14:31 #link https://review.opendev.org/#/c/735690/
16:25:19 both merged
16:25:53 OK moving forward
16:25:53 yep i didn't see the os-vif one but there was also no pending critical patch to hold it for, so it's fine
16:26:10 sean-k-mooney: I checked and it seemed good
16:26:21 I will ping you next time
16:26:32 it's fine, i trust you :)
16:26:39 :)
16:26:42 #topic Stable Branches
16:26:59 any stable news?
16:27:05 I have some updates on stable gate status
16:27:11 sure, go
16:27:17 stable/train gate is up. the 'virtualenv not found' and neutron-grenade-multinode issues are fixed in the legacy base jobs and with the grenade backport. feel free to recheck.
16:27:42 stable/stein gate needs this to merge to be green - https://review.opendev.org/#/c/737332/
16:27:54 i've +W'd the stein patch
16:28:00 and stable/rocky and older are all blocked until the devstack fix - #link https://review.opendev.org/#/c/735615/
16:28:04 elod: thanks,
16:28:04 so it will be ok, too
16:28:48 thank you all for working on unblocking the gate on stable
16:28:49 one strange thing happening on the pep8 cherry-pick checks - https://review.opendev.org/#/c/728057/
16:29:16 gmann: that's not strange :)
16:29:28 I guess that's the new check
16:29:31 though all previous backports are merged, the script still fails. melwitt and I were discussing it yesterday and it ran fine locally but on the gate it does not
16:29:42 and the patch was not merged when it ran
16:29:52 elod: yeah the new check is not working fine in the gate, the patch i mentioned should pass the check but is failing
16:30:17 ok, thanks for the notice
16:30:33 I hope dansmith can look at it as he was the author of that check
16:30:36 I'll try to figure out what the problem is there
16:30:41 i suspect a rebase is needed, as the check patch merged after the stable/train backport was proposed; zuul merging those at pep8 job runtime is causing some strange behaviour
16:30:55 gmann: i need to update the commit message
16:30:59 sean-k-mooney: can you rebase this and we will see if that runs fine
16:31:03 and fix the hashes i think
16:31:12 yep i can
16:31:14 sean-k-mooney: no, the hashes are all fine :)
16:31:31 i will rebase, that can give us more clarity
16:31:48 just did it in the ui
16:31:56 cool, thanks.
16:32:06 anything else for stable?
16:32:06 that's all from my side.
16:32:58 * bauzas has to leave \o
16:33:01 that failure isn't finding the commit hashes in the commit message
16:33:03 bauzas: thanks
16:33:17 it's not complaining that they're wrong, it's not seeing them for some reason
16:33:36 dansmith: yeah, and if i run it locally it finds them correctly.
16:33:54 let's see after the rebase
16:33:58 okay
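For readers following the cherry-pick check discussion above: a stable-branch check of this kind essentially extracts the "(cherry picked from commit <sha>)" trailers from the backport's commit message and confirms each referenced commit exists in the repository. The sketch below only illustrates that idea and is not the actual nova tool; the symptom dansmith describes ("it's not seeing them") would correspond to the trailer regex matching nothing, for example because the message inspected at gate time is not the one reviewers see.

    import re
    import subprocess

    CHERRY_PICK_RE = re.compile(r'cherry picked from commit ([0-9a-f]{7,40})')

    def cherry_pick_hashes(commit='HEAD'):
        """Return the hashes named in cherry-pick trailers of a commit message."""
        msg = subprocess.check_output(
            ['git', 'log', '-1', '--format=%B', commit], text=True)
        return CHERRY_PICK_RE.findall(msg)

    def check_backport(commit='HEAD'):
        hashes = cherry_pick_hashes(commit)
        if not hashes:
            # The symptom discussed above: the check never even sees the
            # trailers, e.g. because it inspects a different commit (a
            # merge commit, a stale merge, ...) than the one under review.
            raise SystemExit('no "cherry picked from commit" trailer found')
        for sha in hashes:
            # 'git cat-file -e <sha>^{commit}' exits non-zero when the
            # object is unknown to the local repository.
            result = subprocess.run(['git', 'cat-file', '-e', sha + '^{commit}'])
            if result.returncode != 0:
                raise SystemExit(f'referenced commit {sha} not found locally')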
16:34:03 moving forward
16:34:05 #topic Sub/related team Highlights
16:34:11 API (gmann)
16:34:32 one thing to share as progress is unified limits
16:34:34 #link https://review.opendev.org/#/q/topic:bp/unified-limits-nova+(status:open+OR+status:merged)
16:35:06 review is going well; gibi stephenfin melwitt and i also reviewed yesterday
16:35:06 gmann: is it ready for a runway slot?
16:35:14 ready
16:35:27 i am waiting for johnthetubaguy to see if he can update the series
16:35:31 I think we need johnthetubaguy to be around
16:35:35 yeah
16:35:47 once he updates it then we can add it to a runway
16:35:59 OK, cool
16:36:27 Libvirt (bauzas)
16:36:44 he did not mention any news to me on #openstack-nova
16:37:25 #topic Stuck Reviews
16:37:32 nothing on the agenda
16:37:37 anything to bring up here?
16:38:01 #topic Open discussion
16:38:12 vinay_m: your turn
16:38:55 https://bugs.launchpad.net/nova/+bug/1759839
16:38:55 Launchpad bug 1759839 in OpenStack Compute (nova) "Documentation Error for Placement API Port Number" [Low,Triaged] - Assigned to vinay harsha mitta (vinay7)
16:39:51 vinay_m: do you need help with that bug?
16:40:16 oh, I looked at this
16:40:21 context is (I suspect) https://review.opendev.org/731141
16:40:35 https://bugs.launchpad.net/nova/+bug/1741329 .. this bug says quite the opposite of the above-mentioned bug
16:40:35 Launchpad bug 1741329 in OpenStack Compute (nova) "Install and configure controller node for openSUSE and SUSE Linux Enterprise in nova" [Medium,Fix released] - Assigned to Andreas Jaeger (jaegerandi)
16:41:18 so it seems the SUSE packages are using port 8780 for placement, while everything else uses 8778
16:41:34 yes
16:41:34 what is the default in the code
16:41:57 No idea. Not sure if there is one
16:42:42 TripleO deploys to 8778, as does DevStack
16:43:00 if our suse doc is aligned with the suse package then I guess this is simply an invalid bug. Or am I missing something?
16:43:34 Yes, https://bugs.launchpad.net/nova/+bug/1759839 looks invalid
16:43:34 Launchpad bug 1759839 in OpenStack Compute (nova) "Documentation Error for Placement API Port Number" [Low,Triaged] - Assigned to vinay harsha mitta (vinay7)
16:43:48 and https://bugs.launchpad.net/nova/+bug/1741329 *is* valid
16:43:48 Launchpad bug 1741329 in OpenStack Compute (nova) "Install and configure controller node for openSUSE and SUSE Linux Enterprise in nova" [Medium,Fix released] - Assigned to Andreas Jaeger (jaegerandi)
16:43:52 IMO
16:44:20 stephenfin: I agree
16:44:24 so placement was only ever deployed via wsgi
16:44:44 so unlike the nova service it has never had a python wrapper that bound to a default port
16:44:55 so ya i tend to agree with stephenfin
16:45:20 vinay_m: is it OK for you if I mark https://bugs.launchpad.net/nova/+bug/1759839 invalid?
16:45:20 Launchpad bug 1759839 in OpenStack Compute (nova) "Documentation Error for Placement API Port Number" [Low,Triaged] - Assigned to vinay harsha mitta (vinay7)
16:46:47 ok gibi, that was exactly my doubt, whether that bug was invalid or not
16:47:01 OK cool
16:47:20 I will mark it after the meeting
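As stephenfin and sean-k-mooney point out just above, placement has never had a built-in default port because it is only deployed behind a web server, so clients are expected to take the endpoint from the Keystone service catalog rather than assume 8778 or 8780. A rough sketch of that catalog lookup with keystoneauth1 follows; the auth_url, credentials and domain values are placeholders for whatever the deployment under discussion actually uses.

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # Placeholder credentials; adjust for the deployment being debugged.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    # The service catalog, not a hard-coded port, decides where placement
    # lives; this works whether the operator exposed it on 8778, 8780 or
    # just a /placement path on port 80.
    resp = sess.get('/', endpoint_filter={'service_type': 'placement'})
    print(resp.status_code, resp.json())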
16:47:31 I have one more topic
16:47:33 (gibi): VMware 3pp CI status: Based on #link https://review.opendev.org/#/c/734114/ the 3pp CI is green now. I will continue monitoring the situation through the V cycle.
16:48:02 as far as I see there is a real effort now to make the CI green
16:48:53 so I think if the greenness remains during the cycle, then I will propose an undeprecation patch
16:48:57 fyi the placement docs say it can be on a different port or no port https://github.com/openstack/placement/blame/master/doc/source/install/shared/endpoints.rst#L56
16:49:25 gibi: ya that is nice to see
16:50:05 by the way http://ci-watch.tintri.com/ seems to be dead and i think the other one is too.
16:50:20 if i were to host a copy of it, would that be useful to people
16:50:27 this opens for me http://ciwatch.mmedvede.net/project?project=nova
16:50:46 oh that was dead the last time i hit it but if it's working again cool
16:51:34 https://bugs.launchpad.net/nova/+bug/1801714 In this bug I think that openstack-nova is not checking the availability of the compute node, so when it comes to scheduling and deployment there it gets stuck
16:51:34 Launchpad bug 1801714 in OpenStack Compute (nova) "Rebuild instance is stuck in rebuilding state when hosting Compute is powered off" [Medium,Confirmed] - Assigned to Akhil Gudise (akhil-g)
16:52:29 because the host is not available it's stuck in the rebuild state
16:52:54 akhil-g: as I said above we would need more logs. Something is clearly missing
16:53:29 yeah but this is another bug gibi
16:54:10 akhil-g: ahh
16:54:19 akhil-g: is this the same deployment?
16:55:00 ahh I see, I was even able to reproduce it
16:55:09 yeah I've used the same setups
16:56:38 so your suggestion is to add a check in the compute-api and reject the rebuild if the compute service is down?
16:57:59 is rebuild a cast?
16:58:01 not exactly, but during scheduling we can eliminate unavailable hosts?
16:58:10 if it was an rpc call it would timeout and fail, right?
16:58:27 akhil-g: which host is down
16:58:32 the one the vm is on?
16:58:39 akhil-g: rebuild is not a move action
16:58:53 so we only go to the scheduler to validate that the image is compatible
16:58:59 we don't ever move the vm to another host
16:59:11 we have one minute left
16:59:12 that's the main difference between evacuate and rebuild
16:59:45 please continue discussing it on #openstack-nova
16:59:50 ok sorry, I mentioned it for the one which I reported earlier
16:59:57 sure :)
17:00:04 time is up
17:00:07 #endmeeting
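A closing illustration of the fix floated in the last few minutes: reject the rebuild up front in the compute API when the compute service on the instance's host is down. Rebuild is dispatched asynchronously to that host, so there is no RPC call timeout to surface the problem, and since rebuild is not a move operation the scheduler step cannot route around a dead host. The sketch below is only that idea in outline, not nova's actual code; is_service_up, ComputeHostDown and rebuild_instance are hypothetical stand-ins for nova's real service-group, exception and API machinery.

    class ComputeHostDown(Exception):
        """Raised when the instance's compute service is not alive."""

    def is_service_up(host):
        # Hypothetical stand-in for a service-group liveness check,
        # e.g. comparing the service's last heartbeat against a timeout.
        raise NotImplementedError

    def rebuild_instance(instance, image_ref):
        # Proposed guard: fail fast instead of leaving the instance stuck
        # in a rebuilding state when the target host never picks up the
        # asynchronously cast RPC message.
        if not is_service_up(instance.host):
            raise ComputeHostDown(
                f"compute service on {instance.host} is down; rebuild "
                "cannot complete (evacuate is the operation for a dead host)")
        # ... the existing flow would continue here: optionally ask the
        # scheduler to validate that image_ref is compatible with the
        # current host, then send the rebuild to that same host ...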