14:00:06 <gibi> #startmeeting nova
14:00:06 <openstack> Meeting started Thu Feb  7 14:00:06 2019 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:11 <openstack> The meeting name has been set to 'nova'
14:00:17 <cdent> o/
14:00:17 <takashin> o/
14:00:20 <gibi> hello
14:00:24 <edleafe> \o
14:00:26 <mriedem> o/
14:01:07 <sean-k-mooney> o/
14:01:09 <gibi> let's get started
14:01:15 <gibi> #topic Release News
14:01:20 <gibi> #link Stein release schedule: https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule
14:01:24 <gibi> #info s-3 feature freeze is March 7
14:01:28 <efried> o/
14:01:30 <gibi> #link Stein runway etherpad: https://etherpad.openstack.org/p/nova-runways-stein
14:01:36 <gibi> #link runway #1: https://blueprints.launchpad.net/nova/+spec/flavor-extra-spec-image-property-validation (jackding) [END 2019-02-07] https://review.openstack.org/#/c/620706/
14:01:37 <patchbot> patch 620706 - nova - Flavor extra spec and image properties validation - 22 patch sets
14:01:40 <gibi> #link runway #2: https://blueprints.launchpad.net/nova/+spec/libvirt-neutron-sriov-livemigration (adrianc, moshele) [END 2019-02-19] https://review.openstack.org/#/q/status:open+topic:bp/libvirt-neutron-sriov-livemigration
14:01:45 <gibi> #link runway #3: https://blueprints.launchpad.net/nova/+spec/handling-down-cell (tssurya) [END 2019-02-19] https://review.openstack.org/#/q/status:open+branch:master+topic:bp/handling-down-cell
14:02:05 <tssurya> o/
14:02:12 <gibi> any comment about the release or the runway items?
14:02:39 * bauzas waves
14:03:03 <gibi> #topic Bugs
14:03:10 <gibi> no critical bugs
14:03:15 <gibi> #link 58 new untriaged bugs (1 up since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:03:20 <gibi> #link 3 untagged untriaged bugs (up 1 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:03:25 <gibi> #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
14:03:29 <gibi> #help need help with bug triage
14:03:38 <gibi> anything else about the bug status?
14:03:53 <jaypipes> o/
14:04:10 <gibi> Gate status
14:04:14 <gibi> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:04:20 <gibi> #link Fix for the legacy-grenade-dsvm-neutron-multinode-live-migration (non-voting) job: https://review.openstack.org/#/c/634962/
14:04:21 <patchbot> patch 634962 - nova - Fix legacy-grenade-dsvm-neutron-multinode-live-mig... - 5 patch sets
14:04:24 <gibi> 3rd party CI
14:04:29 <gibi> #link 3rd party CI status http://ciwatch.mmedvede.net/project?project=nova&time=7+days
14:04:35 <gibi> where are we with the gate?
14:04:39 <mriedem> i'll have another patch up for https://bugs.launchpad.net/nova/+bug/1813147 soon
14:04:40 <openstack> Launchpad bug 1813147 in OpenStack Compute (nova) "p35 jobs are failing with subunit.parser ... FAILED" [High,Confirmed]
14:04:50 <mriedem> there is a deprecation warning still killing the functional-py35 job
14:04:51 <gibi> mriedem: I will review it soon after you propose it :)
14:05:37 <mriedem> logstash / e-r graphs have been off but it looks like they are flowing again
14:05:41 <bauzas> mriedem: ping me the bugfix once done
14:05:43 <mriedem> so i guess we'll have a better idea soonish
14:05:44 <mriedem> also,
14:05:56 <mriedem> https://review.openstack.org/#/c/634970/
14:05:57 <patchbot> patch 634970 - openstack-dev/devstack - Set ETCD_USE_RAMDISK=True by default - 1 patch set
14:06:05 <mriedem> hoping ^ helps with the cinder tooz connection error failures
14:06:24 <mriedem> http://status.openstack.org/elastic-recheck/#1810526
14:06:40 <mriedem> that's it - more on CI job stuff in open discussion
14:06:57 <gibi> mriedem: thanks
14:07:06 <gibi> #topic Reminders
14:07:11 <gibi> #link Stein Subteam Patches n Bugs: https://etherpad.openstack.org/p/stein-nova-subteam-tracking
14:07:16 <gibi> any other reminders?
14:07:47 <gibi> #topic Stable branch status
14:07:50 <gibi> #link stable/rocky: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/rocky,n,z
14:07:54 <gibi> #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
14:07:57 <gibi> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:08:33 <gibi> rocky seems pretty short, but queens looks long (what an expert am I?!)
14:08:42 <mriedem> yes queens is long
14:08:57 <bauzas> roger, will take a look
14:08:58 <mriedem> bauzas: maybe you want to walk through stable/queens changes
14:09:01 <mriedem> jinx
14:09:06 <gibi> bauzas: thanks a lot
14:09:20 <gibi> any other comment about the stable branches?
14:09:38 <bauzas> I stopped making promises a long time ago, but this will be done ^ :)
14:10:19 <gibi> #topic Subteam Highlights
14:10:28 <gibi> Scheduler (efried)
14:10:30 <efried> #link n-sch meeting minutes http://eavesdrop.openstack.org/meetings/nova_scheduler/2019/nova_scheduler.2019-02-04-14.01.html
14:10:30 <efried> It was a pretty short one.
14:10:34 <efried> #link libvirt reshaper https://review.openstack.org/#/c/599208/
14:10:34 <efried> ^ is due another PS from bauzas
14:10:35 <patchbot> patch 599208 - nova - libvirt: implement reshaper for vgpu - 12 patch sets
14:10:42 <efried> Report client traffic reduction & refactor series has all merged at this point \o/
14:10:46 <efried> Extraction update happened yesterday, see
14:10:46 <efried> #link cdent's extraction update summary on the ML http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002451.html
14:10:54 <efried> END
14:11:06 <gibi> efried: thanks
14:11:13 <gibi> API (gmann)
14:11:16 <gibi> No office hour this week.
14:11:23 <gibi> #topic Stuck Reviews
14:11:28 <gibi> nothing on the agenda
14:11:29 <gmann> gibi: yeah no update this week
14:11:40 <gibi> gmann: thanks
14:12:04 <gibi> #topic Open discussion
14:12:10 <sean-k-mooney> i have 2 bugs that i want to highlight before the end of the meeting
14:12:32 <sean-k-mooney> https://bugs.launchpad.net/os-vif/+bug/1801919
14:12:33 <openstack> Launchpad bug 1801919 in devstack "brctl is obsolete use ip" [Undecided,In progress] - Assigned to Nate Johnston (nate-johnston)
14:13:11 <sean-k-mooney> so im going to work on removing the brctl dependency this/next week as rhel-8 apparently won't package it
14:13:49 <sean-k-mooney> i may ping some people to review it so we can do another release before the non-client lib freeze
14:13:58 <gibi> sean-k-mooney: thanks
14:14:00 <mriedem> you have until feb 28 to get that changed and released
14:14:09 <mriedem> Feb 28: non-client library (oslo, os-vif, os-brick, os-win, os-traits, etc) release freeze
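(Context for the brctl item above: brctl's bridge operations map onto plain netlink / `ip link` operations. The sketch below is purely illustrative of that kind of replacement, using pyroute2 with made-up device names; the actual os-vif change may instead drive the `ip` command via privsep.)

    # Illustrative only: brctl-style bridge management done via netlink
    # (pyroute2). Requires root; 'qbr-demo' and 'tap-demo' are placeholders,
    # not names from the os-vif patch.
    from pyroute2 import IPRoute

    ipr = IPRoute()
    try:
        # roughly `brctl addbr qbr-demo` (ip link add qbr-demo type bridge)
        ipr.link('add', ifname='qbr-demo', kind='bridge')
        br_idx = ipr.link_lookup(ifname='qbr-demo')[0]
        # roughly `brctl addif qbr-demo tap-demo` (ip link set tap-demo master qbr-demo)
        tap_idx = ipr.link_lookup(ifname='tap-demo')[0]
        ipr.link('set', index=tap_idx, master=br_idx)
        # bring the bridge up (ip link set qbr-demo up)
        ipr.link('set', index=br_idx, state='up')
    finally:
        ipr.close()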
14:14:09 <sean-k-mooney> the other one is https://bugs.launchpad.net/nova/+bug/1741575
14:14:10 <openstack> Launchpad bug 1741575 in OpenStack Compute (nova) "creating a VM without IP (ip_allocation='none')" [Undecided,Opinion]
14:14:38 <sean-k-mooney> we still don't support this but i'm a bit uncomfortable calling this a bug
14:14:55 <sean-k-mooney> should this be converted to a blueprint/spec for train
14:15:01 <sean-k-mooney> if we choose to support it?
14:15:17 <mriedem> lack of a feature is not a bug
14:15:42 <mriedem> there was a spec https://review.openstack.org/#/c/239276/
14:15:43 <patchbot> patch 239276 - nova-specs - VM boot with unaddressed port (ABANDONED) - 10 patch sets
14:16:01 <sean-k-mooney> mriedem: most of the support was added when we added routed networks
14:16:07 <mriedem> which i thought was dealt with by https://review.openstack.org/#/c/299591/ but that only added 'deferred' ip support
14:16:08 <patchbot> patch 299591 - nova - Enable deferred IP on Neutron ports (MERGED) - 12 patch sets
14:16:11 <sean-k-mooney> but we specifically chose not to support it
14:16:48 <sean-k-mooney> so i can convert it to a blueprint/spec for train and we can discuss then
14:16:54 <mriedem> then it's a new feature and we're past feature freeze
14:17:07 <mriedem> sure bring it up for train
14:17:13 <gibi> sean-k-mooney, mriedem: sounds like a plan
14:17:36 <gibi> so there is mriedem's topic on the agenda
14:17:37 <gibi> (mriedem): Proposed CI changes: http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002388.html
14:17:44 <gibi> #link Drop the nova-multiattach job: https://review.openstack.org/#/c/606981/
14:17:45 <patchbot> patch 606981 - nova - Drop nova-multiattach job - 5 patch sets
14:17:53 <gibi> #link Make nova-next run with python3: https://review.openstack.org/#/c/634739/
14:17:54 <patchbot> patch 634739 - nova - Change nova-next job to run with python3 - 1 patch set
14:17:58 <gibi> #link Drop the integrated-gate (py2) template jobs: https://review.openstack.org/#/c/634949/ - this one is more controversial, see gmann's reply in the ML
14:17:59 <patchbot> patch 634949 - nova - Drop the integrated-gate (py27) template - 1 patch set
14:18:15 <mriedem> the only other thing from the ML thread i haven't posted yet,
14:18:21 <mriedem> is slimming down what nova-next runs for tests
14:18:28 <mriedem> to just be tempest.compute.api and scenario tests
14:18:57 <mriedem> and i think i'll probably enable that for slow tests as well so it will alleviate gmann's concern about one test in https://review.openstack.org/#/c/606978/
14:18:58 <patchbot> patch 606978 - tempest - Enable volume multiattach tests in tempest-full/sl... - 8 patch sets
14:19:19 <mriedem> the bigger issue is nova not running the integrated-gate py2 template jobs for tempest and grenade,
14:19:28 <mriedem> i personally really don't want to have to run those duplicate jobs through the U release
14:19:34 <gmann> mriedem: +1 on that. if we are more worried about the slow tests, we can try it with n-v first
14:19:40 <mriedem> given we have py3 and we have other tempest jobs that use py2
14:20:28 <mriedem> gmann: regarding the integrated-gate py2 jobs,
14:20:34 <gmann> +1 was for the nova-next one. :) i'd like to have py2 also running on the project side, to maintain py2 till stable/train
14:20:40 <mriedem> nova runs the ceph job which is py2 (tempest) and neutron-grenade-multinode which is also py2
14:21:08 <mriedem> we still maintain py2
14:21:14 <mriedem> i just don't want to run redundant jobs
14:21:19 <mriedem> we have other jobs that test py2
14:21:23 <gmann> mriedem: integrated-gate covers most of the code; for the rest i am ok to run only py3
14:21:36 <mriedem> the ceph job runs the same tests as tempest-full but with ceph as the storage
14:21:38 <gmann> but those jobs run only a small set of tests, right?
14:21:41 <mriedem> no
14:21:47 <gmann> which one ?
14:21:54 <mriedem> devstack-plugin-ceph-tempest
14:22:07 <mriedem> and neutron-grenade-multinode is neutron-grenade but on 2 nodes
14:22:10 <mriedem> still py2
14:22:23 <gmann> can we move them to py3 ?
14:22:45 <mriedem> there is already a ceph py3 job,
14:22:48 <gmann> i am not sure whether they run on the stable branches also or not
14:22:57 <mriedem> my concern is if we move everything to py3, then we still have to run the redundant integrated-gate py2 jobs
14:23:03 <mriedem> for nova they do
14:23:09 <gmann> ok then we can remove the ceph py2 and keep integrated-gate py2
14:23:25 <mriedem> umm
14:23:28 <mriedem> i'm not really agreeing to that
14:23:37 <mriedem> i understand your concern
14:23:47 <gmann> but if we remove that, do we add it on stable/stein and stable/train? we need py2 coverage there in case any backport breaks py2
14:23:47 <mriedem> but i don't like that we have to run so many redundant jobs
14:23:56 <sean-k-mooney> gmann: but why not remove the template and just run the jobs manually so we can better manage what tests are run
14:24:10 <mriedem> gmann: these jobs already run on nova's stable branches
14:24:17 <mriedem> because they are defined in nova's zuul file
14:24:46 <gmann> sean-k-mooney: actually the template's main goal was that we can control the integration testing in a single place instead of it being decided by each project
14:24:55 <mriedem> and we have that with py3
14:24:59 <sean-k-mooney> gmann: https://github.com/openstack/nova/blob/master/.zuul.yaml#L204-L275
14:25:04 <mriedem> i'm saying py2 can be a second class citizen now
14:25:08 <mriedem> we still have coverage,
14:25:09 <cdent> yes please
14:25:11 <mriedem> but it's lower priority
14:25:16 <gmann> mriedem: i mean if you remove them from stein master then stable/stein and stable/train would not have them
14:25:36 <mriedem> gmann: but we'll still have py2 jobs that run tempest and grenade on stable/stein and stable/train
14:26:08 <sean-k-mooney> gmann: the issue mriedem is raising is that today we run the same tests 5 or 6 times across py2 and py3
14:26:32 <mriedem> we just have a lot of jobs which mostly overlap tests with very little difference in configuration, and when we have these persistent bugs killing the gate that people aren't helping to investigate and fix,
14:26:33 <gmann> only tempest-full py2 and grenade py2 right ?
14:26:40 <mriedem> i tire of having to recheck everything
14:27:08 <cdent> I personally think we should unit test py2 and _everything_ else py3
14:27:11 <mriedem> gmann: for py2 i'm talking about the ceph job for tempest and neutron-grenade-multinode for py2 + grenade
14:27:20 <cdent> but I'm apparently crazy
14:27:23 <mriedem> cdent: that's what i'm pushing for also,
14:27:31 <mriedem> but we still have some jobs that run integration testing with py2
14:27:40 <mriedem> and thus i want to drop any redundancies i can
14:27:53 * cdent applauds
14:28:15 <mriedem> i assume that 99% of py2-specific issues in nova would be caught in unit/functional jobs that we run
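(A toy example of what "caught in unit/functional jobs" means here: most py2-specific breakage is bytes/text handling, which a plain unit test exercises on both interpreters without any devstack job. Everything below, including the six usage and the function, is illustrative and not taken from nova.)

    # Illustrative only: a py2/py3 difference a unit test catches cheaply.
    # six.binary_type is `str` on py2 and `bytes` on py3.
    import six

    def normalize_hostname(name):
        # Accept bytes or text so behaviour matches on py2 and py3.
        if isinstance(name, six.binary_type):
            name = name.decode('utf-8')
        return name.strip().lower()

    assert normalize_hostname(b'  Compute-01 ') == 'compute-01'
    assert normalize_hostname(u'  Compute-01 ') == 'compute-01'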
14:28:51 <bauzas> I tend to agree on it for one reason
14:29:04 <bauzas> beyond end of this year, py2 will be unsupported anyway
14:29:29 <bauzas> so, if we stop testing py2 at runtime now, that's just a matter of accepting that py3 is the default now
14:29:29 <sean-k-mooney> bauzas: well we will start dropping support upstream in U in september
14:29:42 <bauzas> and py2 becomes unsupported
14:29:46 <bauzas> which I'm good with
14:29:47 <mriedem> i'm not saying nova doesn't support py2
14:29:58 <mriedem> we still test py2 in unit/functional/tempest/grenade variants
14:30:08 <mriedem> my goal is to avoid redundancy
14:30:28 <mriedem> which is also why i'm dropping nova-multiattach and slimming down nova-next
14:30:46 <mriedem> b/c i get tired of nova changes failing on cinder tooz connection failures
14:30:49 <mriedem> etc
14:31:27 <sean-k-mooney> it does waste a lot of ci resources and we run the integrated jobs in the gate pipeline anyway
14:31:29 <mriedem> so i guess at this point ill reply to the ML thread with a summary of what i've said here
14:31:38 <gibi> gmann: would it be something we can try and see if py2 faults slip through?
14:31:45 <sean-k-mooney> so even if it passes check, the tempest-full job will catch it in the gate
14:31:45 <bauzas> mriedem: sure, I got you
14:31:58 <mriedem> sean-k-mooney: not if nova isn't running with that template
14:32:00 <bauzas> mriedem: I'm just saying that what's untested can't be supported
14:32:07 <bauzas> or only on a best-effort basis
14:32:16 <mriedem> bauzas: again, py2 isn't untested
14:32:29 <bauzas> but I don't disagree with you on non duplicating integration jobs
14:32:31 <sean-k-mooney> no i mean we explicitly run tempest-full on py2 and py3 https://github.com/openstack/nova/blob/master/.zuul.yaml#L268-L275
14:32:35 * gmann is sorry for the delayed responses, he is in the TC meeting also
14:32:46 <mriedem> sean-k-mooney: correct b/c of the integrated-gate template
14:32:56 <mriedem> see https://review.openstack.org/#/c/634949/
14:32:57 <patchbot> patch 634949 - nova - Drop the integrated-gate (py27) template - 1 patch set
14:33:00 <gibi> I agree with slimming down py2 testing. If we then manage to break py2, we can re-evaluate our decision
14:33:16 <mriedem> gibi: yeah that's what i'd like to see
14:33:22 <gmann> mriedem: i agree with your concern about the number of jobs, and if we think unit/functional jobs can cover all the py2 things then we can decide, on a per-project basis, to run only py3
14:33:22 <mriedem> i'll reply on the ML thread
14:34:10 <mriedem> that's it from me
14:34:18 <gibi> any other open discussion topic?
14:34:59 <gibi> then thanks for joining today
14:35:00 <gibi> #endmeeting