14:00:28 <mriedem> #startmeeting nova
14:00:30 <openstack> Meeting started Thu Jul 28 14:00:28 2016 UTC and is due to finish in 60 minutes.  The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:31 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:34 <openstack> The meeting name has been set to 'nova'
14:00:44 <rlrossit> o/
14:00:44 <markus_z> o/
14:00:46 <gcb> o/
14:00:47 <dansmith> o/
14:00:48 <lyarwood> o/
14:00:49 <andrearosa> hi
14:00:52 <raj_singh> o/
14:00:53 <takashin> o/
14:01:08 <auggy> o/
14:01:30 <mriedem> let's do this
14:01:38 <mriedem> #topic release news
14:01:50 <mriedem> #link https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
14:01:58 <mriedem> #info Today is the freeze for python 3 and mox removal work. If there is anything close, let's get it merged; everything else should be stopped for Newton.
14:02:06 <bauzas> \o
14:02:13 <mriedem> #link python 3 conversion changes: https://review.openstack.org/#/q/topic:bp/nova-python3-newton+status:open
14:02:16 <macsz> \o
14:02:21 <cdent> o/
14:02:23 <mriedem> #link mox removal changes https://review.openstack.org/#/q/topic:bp/remove-mox-newton+status:open
14:02:30 <dansmith> so lyarwood has a py3 fix but it's needed for a real bug fix
14:02:39 <dansmith> and it's also a disgusting hack, which is why it failed on py3
14:02:46 <mriedem> real fixes are fine i think
14:02:58 <dansmith> yeah, just FYI
14:03:04 <dansmith> don't go -2 it :D
14:03:11 <dansmith> https://review.openstack.org/#/c/342111
14:03:23 <mriedem> there are 5 pages of mox removal stuff, so we need to land whatever is already +2'ed or ready today
14:03:36 <mriedem> i'm not going to -2 all of the mox removal changes, there are too many
14:03:59 <mriedem> i will mark the bps complete by the end of the day though
14:04:07 <mriedem> and probably also put out a reminder in the mailing list
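For anyone picking up the remove-mox-newton work again in ocata, the conversions are largely mechanical: a mox stub-and-replay test becomes a unittest.mock patch with an explicit assertion. A minimal sketch, assuming purely hypothetical class and test names rather than any specific Nova patch:

    import unittest
    from unittest import mock  # Newton-era Nova uses the external 'mock' package


    class FakeComputeAPI(object):
        def reboot(self, instance_uuid):
            raise NotImplementedError("real implementation, stubbed in tests")


    class RebootTestCase(unittest.TestCase):

        # mox version (roughly):
        #   self.mox.StubOutWithMock(self.api, 'reboot')
        #   self.api.reboot('fake-uuid').AndReturn(None)
        #   self.mox.ReplayAll()

        # mock version:
        @mock.patch.object(FakeComputeAPI, 'reboot', return_value=None)
        def test_reboot_called(self, mock_reboot):
            FakeComputeAPI().reboot('fake-uuid')
            mock_reboot.assert_called_once_with('fake-uuid')


    if __name__ == '__main__':
        unittest.main()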
14:04:54 <mriedem> #info Aug 4: priority spec approval freeze
14:05:01 <mriedem> that's 1 week from today
14:05:19 <mriedem> the only priority specs i think we have are probably resource provider stuff and libvirt storage pools
14:06:01 <mriedem> #info move non-priority newton specs for review to the ocata directory if you still plan on working those in ocata
14:06:22 <mriedem> anything up for review that's not moved to ocata by 8/4 will get abandoned
14:06:42 <mriedem> any release questions?
14:07:28 <mriedem> #topic bugs
14:07:40 <mriedem> gate status has been ok i think
14:07:41 <mriedem> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:07:56 <mriedem> i'm not aware of any major new fallout
14:08:10 <bauzas> so, resize is in quite a bad shape
14:08:13 <diana_clarke> o/
14:08:24 <bauzas> because of the man just writing this line
14:08:40 <mriedem> bauzas: is there a gate bug?
14:08:41 <bauzas> we basically just send the original flavor to the scheduler
14:08:57 <bauzas> mriedem: nope, just that we verify the old flavor instead of the new one
14:09:08 <mriedem> so jobs are failing or...?
14:09:12 <bauzas> nope
14:09:22 <bauzas> is this a "bugs" topic :)
14:09:26 <bauzas> ? :p
14:09:45 <mriedem> yeah, sorry, was talking about gate status though so i thought you were talking about jobs failing on something new
14:10:00 <bauzas> my bad then :)
14:10:20 <bauzas> I saw a couple of grenade rechecks tho
14:10:28 <mriedem> does the test fail?
14:10:37 <mriedem> b/c it tries to do a migrate instead of a resize on a single node?
14:11:22 <mriedem> how about we just talk about this in -nova after the meeting...
14:11:28 <bauzas> sorry, seems I made a lot of confusion : no, the resize bug is not impacting the bug AFAICS
14:11:33 <bauzas> graah
14:11:38 <bauzas> not impacting the gate*
14:11:51 <bauzas> and okay, let's move that offline
14:11:56 * gibi is late
14:11:59 <mriedem> ok, well, if we have false positives in resize tests that's also bad
14:12:02 * edleafe wanders in
14:12:07 <mriedem> but let's talk about it in nova
14:12:13 <bauzas> agreed
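A heavily simplified sketch of the resize bug bauzas describes above: the request handed to the scheduler should carry the new flavor, not the instance's original one. The function and field names here are hypothetical and do not correspond to actual Nova code paths:

    def build_resize_request(instance, new_flavor):
        # Reusing instance['flavor'] here would reproduce the bug: the
        # scheduler would verify the old flavor instead of the target one.
        return {
            'instance_uuid': instance['uuid'],
            'flavor': new_flavor,
        }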
14:12:17 <mriedem> third party ci status - has seemed....ok?
14:12:31 <mriedem> honestly third party ci status is a crapshoot from week to week
14:12:42 <mriedem> quobyte ci should be back on track though, we reverted something that was breaking their ci
14:13:06 <mriedem> i think i noticed that xen is working on a neutron-backed job
14:13:08 <mriedem> which is good to see
14:13:26 <mriedem> are there any critical bugs anyone wants to bring up?
14:13:29 <mriedem> markus_z: auggy: ?
14:13:42 <markus_z> nothing noteworthy from my pov
14:13:45 <auggy> i'm not aware of anything
14:13:48 <mriedem> ok
14:13:56 <mriedem> moving on then
14:13:57 <mriedem> #reminders
14:13:59 <mriedem> oops
14:14:02 <mriedem> #topic reminders
14:14:09 <mriedem> #link Newton review focus list: https://etherpad.openstack.org/p/newton-nova-priorities-tracking
14:14:21 <mriedem> #help https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty Volunteers for 1 week of bug skimming duty?
14:14:49 <mriedem> looks like we have 36 untriaged bugs
14:14:53 <macsz> i can do it
14:15:07 <mriedem> macsz: cool, just update that wiki, thanks
14:15:21 <macsz> sure, will do
14:15:44 <mriedem> auggy: before i forget, if we have wiki pages on the py3 and/or mox conversion efforts for new people, we should put a big fat "this is frozen now for newton and will resume in ocata" at the top of the pages
14:15:55 <auggy> mriedem: kk noted
14:16:10 <mriedem> #action auggy to note that py3 and mox efforts are frozen for newton in their respective wiki pages
14:16:13 <mriedem> thanks
14:16:14 <auggy> i'll update the new contributor page
14:17:01 <mriedem> #topic Stable branch status: https://etherpad.openstack.org/p/stable-tracker
14:17:14 <mriedem> the periodic jobs for nova on stable have been fine
14:17:24 <mriedem> #link stable/mitaka: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/mitaka,n,z
14:17:41 <mriedem> #link  stable/liberty: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/liberty,n,z
14:17:55 <mriedem> we have a bunch of mitaka backports ready to go i think, just need some review focus
14:18:15 <mriedem> i need to hunker down on those so we can get a stable/mitaka point release out early next week
14:18:35 <mriedem> #help review stable/mitaka backports so we can do a release the week of 8/1
14:18:55 <mriedem> #topic subteam highlights
14:19:03 <mriedem> there wasn't a cells meeting this week
14:19:11 <mriedem> there are reviews ongoing though
14:19:41 <mriedem> dansmith: jaypipes: how is doffm's aggregates to api db series coming?
14:19:58 <dansmith> I asked him something the other day and never got a response
14:20:13 <dansmith> he squashed the patch to block creations and do creates in the api db
14:20:25 <jaypipes> mriedem: slowly...
14:20:33 <dansmith> mriedem: meaning, I asked jaypipes something.. rather, pointed him at the later patch as the answer to his question
14:20:43 <dansmith> about the autoincrement thing
14:20:58 <jaypipes> mriedem: lemme look again. last I checked yesterday I had posted a -1 because there were tempest tests around aggs that were failing with a valid problem.
14:21:00 <mriedem> dansmith: jaypipes: if there isn't much left, and it's just a matter of pushing the changes, i say someone takes it over
14:21:11 <dansmith> if that is resolved, then I think they're probably ready I just didn't +2 anything
14:21:15 <mriedem> ok
14:21:16 <dansmith> mriedem: I'm happy to do that
14:21:20 <dansmith> if need be
14:21:21 <jaypipes> mriedem: there are valid failures. is doffster on PTO?
14:21:39 <mriedem> the doffster is getting pulled into lots of internal stuff
14:21:43 <jaypipes> boo.
14:22:05 <dansmith> jaypipes: I didn't see those failures, but I'll pick it up and figure those out when we're done here and will ping you if I can't find them
14:22:15 <mriedem> there is also a series starting at https://review.openstack.org/#/c/325985/ which the bottom patch is holding up some stuff - unfortunately i didn't get the review on it until right before laski left on vacation
14:22:30 <jaypipes> dansmith: ok, well, my schedule today is pretty open. have a meeting I must attend from 8am-9am your time but otherwise pretty open.
14:22:39 <mriedem> if we think someone else can tackle what's left, which i don't think is much on that one, we can move forward with the series
14:22:43 <mriedem> bauzas: melwitt: ^ fyi
14:22:43 <jaypipes> dansmith: happy to quick-review-iterate with you on that series.
14:22:49 <dansmith> alright
14:22:57 <edleafe> mriedem: I may have some cycles, too
14:23:06 <mriedem> ok
14:23:14 <bauzas> mriedem: I'll be on PTO for two weeks starting tomorrow EOB, passing the ball, sorry
14:23:21 <mriedem> mandatory PTO i hope
14:23:25 <dansmith> jeez
14:23:26 <bauzas> ahah
14:23:33 <mriedem> then let's all let it rest on melwitt's shoulders
14:23:42 <bauzas> I really *tried* to not take more, hard call :p
14:23:46 <mriedem> we're all counting on you, good luck
14:24:00 <mriedem> moving on to scheduler subteam meeting highlights
14:24:09 <mriedem> edleafe: was there a scheduler meeting this week? i didn't think so
14:24:13 <edleafe> I was out on Monday
14:24:20 <edleafe> So unless someone else ran it...
14:24:23 <mriedem> ok
14:24:25 <bauzas> and no one took the ball AFAIK
14:24:34 <mriedem> jaypipes and cdent have a thread in the ML
14:24:37 <mriedem> which i haven't read yet
14:24:57 <cdent> nbd
14:25:00 <mriedem> jaypipes: anything you want to say?
14:25:22 <jaypipes> mriedem: on the laski patch (that has a -WIP on it) do you need me to do anything there?
14:25:35 <mriedem> jaypipes: no i don't think so
14:25:53 <mriedem> jaypipes: ideally you and i would stay as reviewers on that one
14:25:57 <jaypipes> mriedem: on scheduler stuff, I agree with bauzas that I'd handle the object model work for dynamic resource classes and he could crank out the REST API for resource classes on the placement service.
14:26:09 <jaypipes> mriedem: yes on laski thing
14:26:31 <mriedem> even though he'll be gone for 2 weeks
14:26:51 <bauzas> mriedem: because jaypipes has good cheese
14:26:53 <jaypipes> mriedem: I'm gonna push yet another revision on the resource-providers-allocations and dynamic-resource-classes specs today after feedback from cdent and bauzas.
14:27:21 <mriedem> dynamic-resource-classes was agreed to be a stretch goal for newton yes?
14:27:21 <jaypipes> bauzas: well, yeah, but there's lots that needs to be done before his CRUD API patches would be landable anyway.
14:27:28 <jaypipes> mriedem: correct.
14:27:33 <mriedem> ok, just making sure
14:27:56 <bauzas> mriedem: exactly, hence me thinking I can async work on it
14:28:04 <mriedem> ok
14:28:12 <mriedem> let's move on
14:28:14 <jaypipes> bauzas: yup. just don't make the API calls async ;P
14:28:19 <mriedem> PaulMurray isn't here
14:28:32 <mriedem> there was a live migration meeting this week, we mostly talked about CI
14:28:52 <mriedem> tdurakov is working on tempest patches for the latest live migration related microversions
14:29:01 <mriedem> and was going to see how re-enabling NFV in the live migration job works out
14:29:49 <mriedem> we also compared the nova-net vs neutron multinode jobs that both run live migration, the neutron job has somewhat better pass rates over 7 days, but it's only check queue and the types of failures from the jobs probably wouldn't matter (for live migration) between networking backends
14:30:01 <mriedem> anyway, putting thought into moving away from nova-net jobs when we have duplicates
14:30:36 <mriedem> and paul-carlton's libvirt storage pools spec is still up for review i think, and mnestratov was +1 on it for vz
14:30:57 <mriedem> sdague: want to cover anything from the api meeting this week?
14:31:26 <mriedem> must be too early for sean :)
14:31:32 <mriedem> the proxy api deprecation series is done
14:31:38 <mriedem> dansmith is working the novaclient changes for that
14:31:48 <mriedem> there are some questions in the ML on how to handle the tempest changes for that
14:32:18 <mriedem> and gmann has a patch up to stop allowing a url for image_href
14:32:24 <dansmith> mriedem: I think the current patch is good, needs reviews
14:32:32 <mriedem> that's about all i remember from the api meeting
14:32:38 <mriedem> dansmith: ok i'll try to get a pass on that again today
14:32:42 <dansmith> thanks
14:32:53 <mriedem> https://review.openstack.org/#/c/347514/
14:32:56 <mriedem> for those at home
14:33:30 <mriedem> i thought we were going to deprecate the python APIs too?
14:33:37 <mriedem> rather than let them just start failing
14:34:35 <mriedem> moshele: if you're around, was there a pci/sriov meeting this week?
14:34:41 <moshele> yes
14:35:26 <mriedem> updates?
14:35:27 <moshele> so regarding CI we have the following tempest patches https://review.openstack.org/#/c/343294/ https://review.openstack.org/#/c/335447/ but we didn't get review from tempest cores :(
14:35:40 <dansmith> mriedem: they fail if you don't explicitly request the right microversion.. I thought we had to leave those in place for a cycle, but maybe you just want some deprecation decoration?
14:36:08 <moshele> basically tempest is broken for some tests when using vnic_type
14:36:17 <mriedem> dansmith: yeah deprecation warnings and docstring updates like we have for baremetal and image proxy apis
14:36:23 <dansmith> okay
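A sketch of the "deprecation warnings and docstring updates" approach mriedem mentions for the python proxy APIs, using a hypothetical manager rather than an actual novaclient patch:

    import warnings


    class HypotheticalImageManager(object):
        def list(self):
            """DEPRECATED: List images.

            The image proxy APIs are deprecated; use python-glanceclient instead.
            """
            warnings.warn('The image proxy API is deprecated and will be '
                          'removed; use python-glanceclient instead.',
                          DeprecationWarning)
            return []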
14:36:31 <moshele> this is WIP patch for testing migration and migration revert in tempest https://review.openstack.org/#/c/347374/
14:37:12 <moshele> also we have several patches around migration/migration-revert that need reviews: https://review.openstack.org/#/c/347444/ https://review.openstack.org/#/c/347558/ https://review.openstack.org/#/c/328983/
14:37:13 <jaypipes> moshele: I'm reviewing that...
14:37:29 <mriedem> moshele: doesn't that have to depend on those other changes?
14:37:37 <mriedem> the tempest WIP i mean
14:37:38 <moshele> no
14:38:02 <moshele> sorry, it depends on one of them
14:38:19 <moshele> on this one https://review.openstack.org/#/c/335447/
14:38:54 <mriedem> moshele: ok should probably stack those in a series or use Depends-On
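For reference, the Depends-On mechanism mriedem suggests is a footer line in the commit message of the dependent change; the Change-Ids below are placeholders, not the reviews listed above:

    Test SR-IOV migration and revert in tempest

    <commit message body>

    Depends-On: I1111111111111111111111111111111111111111
    Change-Id: I2222222222222222222222222222222222222222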
14:39:17 <moshele> mriedem: we pinged the tempest cores in infra and we didn't get much review
14:39:51 <moshele> mriedem: so we have a multinode CI with all the tempest patches for testing the nova patches
14:40:11 <moshele> mriedem: I don't want to block stuff because of tempest
14:40:38 <mriedem> ok
14:40:53 <mriedem> let's move on
14:41:01 <mriedem> #topic stuck reviews
14:41:08 <mriedem> there was nothing in the agenda
14:41:14 <mriedem> anyone have something to bring up?
14:41:32 <gcb> mriedem, please have a look at the oslo-related reviews we talked about before, https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1517839
14:41:56 <mriedem> gcb: is that all that's left of that series?
14:42:06 <gcb> yes
14:42:07 <mriedem> gcb: b/c we're winding down on stuff like this in nova for newton
14:42:08 <mriedem> ok
14:42:26 <mriedem> #topic open discussion
14:42:36 <mriedem> one thing i have to bring up
14:42:42 <mriedem> novaclient currently caps at 2.32
14:42:46 * gibi is wondering if he missed the notification subteam part of the meeting
14:43:04 <mriedem> gibi: sorry i didn't ask about notifications since those were halted for newton and the meetings are biweekly
14:43:06 <mriedem> did you have something?
14:43:34 <gibi> mriedem: yeah we have biweekly meetings and a couple of follow up patches still for newton
14:43:44 <gibi> mostly refactoring, test improvement, doc improvement
14:44:03 <gibi> the subteam part of the newton priority etherpad has the links
14:44:19 <gibi> these follow-ups need some core attention if bandwidth allows
14:44:41 <mriedem> sure, same as last meeting
14:44:50 <gibi> yeah, something like that
14:44:52 <gibi> that is all
14:44:55 <mriedem> ok, back to novaclient
14:44:59 <mriedem> so we have 2.32 today
14:45:03 <mriedem> dan is working on 2.36
14:45:06 <mriedem> we need to fill the gap
14:45:15 <mriedem> https://review.openstack.org/#/q/project:openstack/python-novaclient+status:open
14:45:38 <mriedem> i see WIP reviews for 2.34 and 2.35
14:45:44 <mriedem> don't see anything for 2.33
14:45:52 <mriedem> anyway, heads up that we need work there
14:46:07 <mriedem> #help need to get 2.33-2.35 microversions implemented in novaclient
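Filling one of these gaps generally means adding a versioned method gated by the api_versions.wraps decorator that existing novaclient managers use; a rough sketch with an illustrative manager, not the real 2.33-2.35 changes:

    from novaclient import api_versions
    from novaclient import base


    class HypotheticalHypervisorManager(base.Manager):

        @api_versions.wraps("2.0", "2.32")
        def list(self):
            return self._list('/os-hypervisors', 'hypervisors')

        @api_versions.wraps("2.33")
        def list(self, marker=None, limit=None):
            # Newer microversions may change the signature; here the paging
            # parameters are appended to the same endpoint.
            url = '/os-hypervisors'
            if marker:
                url += '?marker=%s' % marker
            if limit:
                url += ('&' if marker else '?') + 'limit=%s' % limit
            return self._list(url, 'hypervisors')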
14:46:40 <mriedem> anyone have anything else for open discussion?
14:46:48 <wznoinsk> hi, I have one
14:46:57 <mriedem> intel nfv ci :)
14:47:07 <wznoinsk> you said that ;-)
14:47:09 <wznoinsk> I would like to ask for your opinions (especially the bad ones) on our Intel NFV CI, where we could improve, and whether it's possible to get voting rights now or later -> http://lists.openstack.org/pipermail/openstack-dev/2016-July/099735.html
14:48:23 <mriedem> i feel like i don't know how that's been going since we don't have metrics to compare it against the other jobs
14:48:49 <mriedem> looking at http://ci-watch.tintri.com/project?project=nova&time=7+days it looks quite green
14:48:54 <mriedem> except tempest-dsvm-ovsdpdk-nfv-networking
14:49:33 <mriedem> but those could be actual changes that just fail the job b/c they break it
14:49:51 <mriedem> i'm fine with making it voting to see how it goes,
14:49:58 <mriedem> we can always turn that off if it goes south
14:50:03 <wznoinsk> yeah, it was broken for less than a day because of an upstream change... let me dig out the id
14:50:24 <mriedem> i feel like the intel team running that ci has been pretty responsive
14:51:30 <mriedem> unless anyone has major objections i'll ack that request in the ML thread
14:51:52 <mriedem> ok, anything else for open discussion?
14:51:52 <dansmith> I haven't been paying attention to it,
14:52:10 <dansmith> but making it voting will help that, so if you think it's reasonable, I'm good with turning it on for visibility
14:52:22 <mriedem> same thought
14:52:37 <mriedem> plus this was a big ask at the summit for NFV so i'm glad to see the people who run these things stepping up
14:52:41 <mriedem> like wznoinsk and moshele
14:52:45 <dansmith> yeah
14:53:11 <mriedem> btw, i was planning on writing up a recap of the highlights/todos/decisions from the ML for the meetup, it's just a lot of content to cover
14:53:19 <mriedem> so if you didn't make it and are wondering about that, ^
14:53:21 <wznoinsk> btw, the devstack change that broke the above job - https://github.com/openstack-dev/devstack/commit/c714c7e96 - was worked around on the spot
14:53:46 <mriedem> wznoinsk: does the intel nfv ci run against devstack changes?
14:54:16 <wznoinsk> mriedem: only for ODL+OVSDPDK testing, kind of a legacy decision
14:54:41 <mriedem> and what do you mean by worked around on the spot? is devstack going to be changed or are you carrying something in your ci setup to work around that?
14:55:29 <mriedem> anyway, we can take this to the dev list, i'll ask there
14:55:38 <mriedem> let's wrap up with 5 big minutes to spare
14:55:45 <mriedem> thanks everyone
14:55:47 <mriedem> #endmeeting