14:00:28 #startmeeting nova
14:00:30 Meeting started Thu Jul 28 14:00:28 2016 UTC and is due to finish in 60 minutes. The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:34 The meeting name has been set to 'nova'
14:00:44 o/
14:00:44 o/
14:00:46 o/
14:00:47 o/
14:00:48 o/
14:00:49 hi
14:00:52 o/
14:00:53 o/
14:01:08 o/
14:01:30 let's do this
14:01:38 #topic release news
14:01:50 #link https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
14:01:58 #info Today is the freeze for python 3 and mox removal work. If there is anything close let's get it merged, everything else should be stopped for Newton.
14:02:06 \o
14:02:13 #link python 3 conversion changes: https://review.openstack.org/#/q/topic:bp/nova-python3-newton+status:open
14:02:16 \o
14:02:21 o/
14:02:23 #link mox removal changes https://review.openstack.org/#/q/topic:bp/remove-mox-newton+status:open
14:02:30 so lyarwood has a py3 fix but it's needed for a real bug fix
14:02:39 and it's also a disgusting hack, which is why it failed on py3
14:02:46 real fixes are fine i think
14:02:58 yeah, just FYI
14:03:04 don't go -2 it :D
14:03:11 https://review.openstack.org/#/c/342111
14:03:23 there are 5 pages of mox removal stuff, so we need to land whatever is already +2'ed or ready today
14:03:36 i'm not going to -2 all of the mox removal changes, there are too many
14:03:59 i will mark the bps complete by the end of the day though
14:04:07 and probably also put out a reminder in the mailing list
14:04:54 #info Aug 4: priority spec approval freeze
14:05:01 that's 1 week from today
14:05:19 the only priority specs i think we have are probably resource provider stuff and libvirt storage pools
14:06:01 #info move non-priority newton specs for review to the ocata directory if you still plan on working those in ocata
14:06:22 anything up for review that's not moved to ocata by 8/4 will get abandoned
14:06:42 any release questions?
14:07:28 #topic bugs
14:07:40 gate status has been ok i think
14:07:41 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:07:56 i'm not aware of any major new fallout
14:08:10 so, the resize is in quite a hard shape
14:08:13 o/
14:08:24 because of the man just writing this line
14:08:40 bauzas: is there a gate bug?
14:08:41 we basically just send the original flavor to the scheduler
14:08:57 mriedem: nope, just that we verify the old flavor instead of the new one
14:09:08 so jobs are failing or...?
14:09:12 nope
14:09:22 is this a "bugs" topic :)
14:09:26 ? :p
14:09:45 yeah, sorry, was talking about gate status though so i thought you were talking about jobs failing on something new
14:10:00 my bad then :)
14:10:20 I saw a couple of grenade rechecks tho
14:10:28 does the test fail?
14:10:37 b/c it tries to do a migrate instead of a resize on a single node?
14:11:22 how about we just talk about this in -nova after the meeting...
14:11:28 sorry, seems I caused a lot of confusion: no, the resize bug is not impacting the bug AFAICS
14:11:33 graah
14:11:38 not impacting the gate*
14:11:51 and okay, let's move that offline
14:11:56 * gibi is late
14:11:59 ok, well, if we have false positives in resize tests that's also bad
14:12:02 * edleafe wanders in
14:12:07 but let's talk about it in nova
14:12:13 agreed
14:12:17 third party ci status - has seemed....ok?
14:12:31 honestly third party ci status is a crapshoot from week to week
14:12:42 quobyte ci should be back on track though, we reverted something that was breaking their ci
14:13:06 i think i noticed that xen is working on a neutron-backed job
14:13:08 which is good to see
14:13:26 are there any critical bugs anyone wants to bring up?
14:13:29 markus_z: auggy: ?
14:13:42 nothing noteworthy from my pov
14:13:45 i'm not aware of anything
14:13:48 ok
14:13:56 moving on then
14:13:57 #reminders
14:13:59 oops
14:14:02 #topic reminders
14:14:09 #link Newton review focus list: https://etherpad.openstack.org/p/newton-nova-priorities-tracking
14:14:21 #help https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty Volunteers for 1 week of bug skimming duty?
14:14:49 looks like we have 36 untriaged bugs
14:14:53 i can do it
14:15:07 macsz: cool, just update that wiki, thanks
14:15:21 sure, will do
14:15:44 auggy: before i forget, if we have wiki pages on the py3 and/or mox conversion efforts for new people, we should put a big fat "this is frozen now for newton and will resume in ocata" at the top of the pages
14:15:55 mriedem: kk noted
14:16:10 #action auggy to note that py3 and mox efforts are frozen for newton in their respective wiki pages
14:16:13 thanks
14:16:14 i'll update the new contributor page
14:17:01 #topic Stable branch status: https://etherpad.openstack.org/p/stable-tracker
14:17:14 the periodic jobs for nova on stable have been fine
14:17:24 #link stable/mitaka: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/mitaka,n,z
14:17:41 #link stable/liberty: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/liberty,n,z
14:17:55 we have a bunch of mitaka backports ready to go i think, just need some review focus
14:18:15 i need to hunker down on those so we can get a stable/mitaka point release out early next week
14:18:35 #help review stable/mitaka backports so we can do a release the week of 8/1
14:18:55 #topic subteam highlights
14:19:03 there wasn't a cells meeting this week
14:19:11 there are reviews ongoing though
14:19:41 dansmith: jaypipes: how is doffm's aggregates to api db series coming?
14:19:58 I asked him something the other day and never got a response
14:20:13 he squashed the patch to block creations and do creates in the api db
14:20:25 mriedem: slowly...
14:20:33 mriedem: meaning, I asked jaypipes something.. rather, pointed him at the later patch as the answer to his question
14:20:43 about the autoincrement thing
14:20:58 mriedem: lemme look again. last I checked yesterday I posted a review of -1 because there were tempest tests around aggs that were failing with a valid problem.
14:21:00 dansmith: jaypipes: if there isn't much left, and it's just a matter of pushing the changes, i say someone takes it over
14:21:11 if that is resolved, then I think they're probably ready I just didn't +2 anything
14:21:15 ok
14:21:16 mriedem: I'm happy to do that
14:21:20 if need be
14:21:21 mriedem: there are valid failures. is doffster on PTO?
14:21:39 the doffster is getting pulled into lots of internal stuff
14:21:43 boo.
14:22:05 jaypipes: I didn't see those failures, but I'll pick it up and figure those out when we're done here and will ping you if I can't find them
14:22:15 there is also a series starting at https://review.openstack.org/#/c/325985/ where the bottom patch is holding up some stuff - unfortunately i didn't get the review on it until right before laski left on vacation
14:22:30 dansmith: ok, well, my schedule today is pretty open. have a meeting I must attend from 8am-9am your time but otherwise pretty open.
14:22:39 if we think someone else can tackle what's left, which i don't think is much on that one, we can move forward with the series
14:22:43 bauzas: melwitt: ^ fyi
14:22:43 dansmith: happy to quick-review-iterate with you on that series.
14:22:49 alright
14:22:57 mriedem: I may have some cycles, too
14:23:06 ok
14:23:14 mriedem: I'll be on PTO for two weeks starting tomorrow EOB, passing the ball, sorry
14:23:21 mandatory PTO i hope
14:23:25 jeez
14:23:26 ahah
14:23:33 then let's all let it rest on melwitt's shoulders
14:23:42 I really *tried* to not take more, hard call :p
14:23:46 we're all counting on you, good luck
14:24:00 moving on to scheduler subteam meeting highlights
14:24:09 edleafe: was there a scheduler meeting this week? i didn't think so
14:24:13 I was out on Monday
14:24:20 So unless someone else ran it...
14:24:23 ok
14:24:25 and no one took the ball AFAIK
14:24:34 jaypipes and cdent have a thread in the ML
14:24:37 which i haven't read yet
14:24:57 nbd
14:25:00 jaypipes: anything you want to say?
14:25:22 mriedem: on the laski patch (that has a -WIP on it) do you need me to do anything there?
14:25:35 jaypipes: no i don't think so
14:25:53 jaypipes: ideally you and i would stay as reviewers on that one
14:25:57 mriedem: on scheduler stuff, I agree with bauzas that I'd handle the object model work for dynamic resource classes and he could crank out the REST API for resource classes on the placement service.
14:26:09 mriedem: yes on laski thing
14:26:31 even though he'll be gone for 2 weeks
14:26:51 mriedem: because jaypipes has good cheese
14:26:53 mriedem: I'm gonna push yet another revision on the resource-providers-allocations and dynamic-resource-classes specs today after feedback from cdent and bauzas.
14:27:21 dynamic-resource-classes was agreed to be a stretch goal for newton yes?
14:27:21 bauzas: well, yeah, but there's lots that needs to be done before his CRUD API patches would be landable anyway.
14:27:28 mriedem: correct.
14:27:33 ok, just making sure
14:27:56 mriedem: exactly, hence me thinking I can work on it async
14:28:04 ok
14:28:12 let's move on
14:28:14 bauzas: yup. just don't make the API calls async ;P
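
(A sketch for context: the "REST API for resource classes on the placement service" discussed above was still at spec stage at this meeting, so the endpoints and payloads below are assumptions about the kind of CRUD being proposed, not the reviewed spec:)

    # create a dynamic (operator-defined) resource class; custom classes
    # were expected to be namespaced, e.g. with a CUSTOM_ prefix
    POST /resource_classes
    {"name": "CUSTOM_GOLD_SSD"}

    # list standard and custom resource classes
    GET /resource_classes

    # remove a custom resource class that is no longer in use
    DELETE /resource_classes/CUSTOM_GOLD_SSD
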
14:28:19 PaulMurray isn't here
14:28:32 there was a live migration meeting this week, we mostly talked about CI
14:28:52 tdurakov is working on tempest patches for the latest live migration related microversions
14:29:01 and was going to see how re-enabling NFV in the live migration job works out
14:29:49 we also compared the nova-net vs neutron multinode jobs that both run live migration, the neutron job has somewhat better pass rates over 7 days, but it's only check queue and the types of failures from the jobs probably wouldn't matter (for live migration) between networking backends
14:30:01 anyway, putting thought into moving away from nova-net jobs when we have duplicates
14:30:36 and paul-carlton's libvirt storage pools spec is still up for review i think, and mnestratov was +1 on it for vz
14:30:57 sdague: want to cover anything from the api meeting this week?
14:31:26 must be too early for sean :)
14:31:32 the proxy api deprecation series is done
14:31:38 dansmith is working on the novaclient changes for that
14:31:48 there are some questions in the ML on how to handle the tempest changes for that
14:32:18 and gmann has a patch up to stop allowing a url for image_href
14:32:24 mriedem: I think the current patch is good, needs reviews
14:32:32 that's about all i remember from the api meeting
14:32:38 dansmith: ok i'll try to get a pass on that again today
14:32:42 thanks
14:32:53 https://review.openstack.org/#/c/347514/
14:32:56 for those at home
14:33:30 i thought we were going to deprecate the python APIs too?
14:33:37 rather than let them just start failing
14:34:35 moshele: if you're around, was there a pci/sriov meeting this week?
14:34:41 yes
14:35:26 updates?
14:35:27 so regarding CI we have the following tempest patches https://review.openstack.org/#/c/343294/ https://review.openstack.org/#/c/335447/ we didn't get reviews from tempest cores :(
14:35:40 mriedem: they fail if you don't explicitly request the right microversion.. I thought we have to leave those in place for a cycle, but maybe you just want some deprecation decoration?
14:36:08 basically tempest is broken for some tests when using vnic_type
14:36:17 dansmith: yeah deprecation warnings and docstring updates like we have for baremetal and image proxy apis
14:36:23 okay
14:36:31 this is a WIP patch for testing migration and migration revert in tempest https://review.openstack.org/#/c/347374/
14:37:12 also we have several patches around migration/migration-revert that need reviews https://review.openstack.org/#/c/347444/ https://review.openstack.org/#/c/347558/ https://review.openstack.org/#/c/328983/
14:37:13 moshele: I'm reviewing that...
14:37:29 moshele: doesn't that have to depend on those other changes?
14:37:37 the tempest WIP i mean
14:37:38 no
14:38:02 sorry, it depends on one of them
14:38:19 on this one https://review.openstack.org/#/c/335447/
14:38:54 moshele: ok should probably stack those in a series or use Depends-On
14:39:17 mriedem: we pinged the tempest cores in infra and we didn't get much review
14:39:51 mriedem: so we have a multinode CI with all the tempest patches for testing the nova patches
14:40:11 mriedem: I don't want to block stuff because of tempest
14:40:38 ok
14:40:53 let's move on
14:41:01 #topic stuck reviews
14:41:08 there was nothing in the agenda
14:41:14 anyone have something to bring up?
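
(A note on the Depends-On suggestion above: a cross-change test dependency in OpenStack's CI is declared with a Depends-On footer in the commit message, so the gate tests the changes together without requiring them to be stacked in one series. A minimal sketch with a hypothetical subject and hypothetical Change-Ids; at the time, Zuul matched Depends-On against Gerrit Change-Ids:)

    Test SR-IOV ports across migration and migration revert

    Depends-On: I4e1f0a9c2b7d4e8f9a0b1c2d3e4f5a6b7c8d9e0f
    Change-Id: Ia1b2c3d4e5f60718293a4b5c6d7e8f9012345678
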
14:41:32 mriedem, please have a look at oslo related reviews which we talked about before, https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1517839
14:41:56 gcb: is that all that's left of that series?
14:42:06 yes
14:42:07 gcb: b/c we're winding down on stuff like this in nova for newton
14:42:08 ok
14:42:26 #topic open discussion
14:42:36 one thing i have to bring up
14:42:42 novaclient currently caps at 2.32
14:42:46 * gibi is wondering if he missed the notification subteam part of the meeting
14:43:04 gibi: sorry i didn't ask about notifications since those were halted for newton and are biweekly meetings
14:43:06 did you have something?
14:43:34 mriedem: yeah we have biweekly meetings and a couple of follow up patches still for newton
14:43:44 mostly refactoring, test improvement, doc improvement
14:44:03 the subteam part of the newton priority etherpad has the links
14:44:19 these follow ups need some core attention if bandwidth allows
14:44:41 sure, same as last meeting
14:44:50 yeah something like that
14:44:52 that is all
14:44:55 ok, back to novaclient
14:44:59 so we have 2.32 today
14:45:03 dan is working on 2.36
14:45:06 we need to fill the gap
14:45:15 https://review.openstack.org/#/q/project:openstack/python-novaclient+status:open
14:45:38 i see WIP reviews for 2.34 and 2.35
14:45:44 don't see anything for 2.33
14:45:52 anyway, heads up that we need work there
14:46:07 #help need to get 2.33-2.35 microversions implemented in novaclient
14:46:40 anyone have anything else for open discussion?
14:46:48 hi, I have one
14:46:57 intel nfv ci :)
14:47:07 you said that ;-)
14:47:09 I would like to ask for your opinions (especially the bad ones) on our Intel NFV CI, where we could improve, and whether it's possible to get voting rights now/then -> http://lists.openstack.org/pipermail/openstack-dev/2016-July/099735.html
14:48:23 i feel like i don't know how that's been going since we don't have metrics to compare it against the other jobs
14:48:49 looking at http://ci-watch.tintri.com/project?project=nova&time=7+days it looks quite green
14:48:54 except tempest-dsvm-ovsdpdk-nfv-networking
14:49:33 but those could be actual changes that just fail the job b/c they break it
14:49:51 i'm fine with making it voting to see how it goes,
14:49:58 we can always turn that off if it goes south
14:50:03 yeah, it was broken for less than a day because of an upstream change... let me dig out the id
14:50:24 i feel like the intel team running that ci has been pretty responsive
14:51:30 unless anyone has major objections i'll ack that request in the ML thread
14:51:52 ok, anything else for open discussion?
14:51:52 I haven't been paying attention to it,
14:52:10 but making it voting will help that, so if you think it's reasonable, I'm good with turning it on for visibility
14:52:22 same thought
14:52:37 plus this was a big ask at the summit for NFV so i'm glad to see people stepping up to run these things
14:52:41 like wznoinsk and moshele
14:52:45 yeah
14:53:11 btw, i was planning on writing up a recap of the highlights/todos/decisions from the ML for the meetup, it's just a lot of content to cover
14:53:19 so if you didn't make it and are wondering about that, ^
14:53:21 btw. devstack change that broke the above job - https://github.com/openstack-dev/devstack/commit/c714c7e96, worked around on the spot
14:53:46 wznoinsk: does the intel nfv ci run against devstack changes?
14:54:16 mriedem: only for ODL+OVSDPDK testing, kind of a legacy decision
14:54:41 and what do you mean by worked around on the spot? is devstack going to be changed or are you carrying something in your ci setup to work around that?
14:55:29 anyway, we can take this to the dev list, i'll ask there
14:55:38 let's wrap up with 5 big minutes to spare
14:55:45 thanks everyone
14:55:47 #endmeeting
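
(On the novaclient microversion gap flagged in open discussion - 2.33 through 2.35 - supporting a new microversion in the client means bumping its advertised maximum and gating new commands or arguments on the version that introduced them. A minimal sketch from memory of the 2016-era python-novaclient; do_example is hypothetical:)

    from novaclient import api_versions

    # novaclient/__init__.py advertises the newest microversion the client
    # understands; filling the 2.33-2.35 gap means raising this as support
    # for each microversion lands
    API_MAX_VERSION = api_versions.APIVersion("2.35")

    # shell commands (or new arguments) tied to a microversion are gated
    # with the api_versions.wraps decorator, so they only apply when the
    # user requests a high enough --os-compute-api-version
    @api_versions.wraps("2.33")
    def do_example(cs, args):
        print(cs.servers.list())
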