14:00:14 #startmeeting nova
14:00:15 Meeting started Thu May 5 14:00:14 2016 UTC and is due to finish in 60 minutes. The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:19 The meeting name has been set to 'nova'
14:00:22 adrian_otto akuriata alevine alexpilotti aloga andreykurilin anteaya artom auggy
14:00:22 bauzas belliott belmoreira bobball cburgess claudiub danpb dguitarbite _diana_
14:00:22 diana_clarke dims duncant edleafe efried flip214 funzo garyk gcb gjayavelu
14:00:22 irina_pov jaypipes jcookekhugen jgrimm jichen jlvillal jroll kashyap klindgren
14:00:26 o/
14:00:27 o/
14:00:27 o/
14:00:27 o/
14:00:29 o/
14:00:30 hi
14:00:30 \o
14:00:30 krtaylor lbeliveau lxsli macsz markus_z mdorman med_ mikal mjturek mnestratov
14:00:30 moshele mrda nagyz ndipanov neiljerram nic Nisha PaulMurray raildo rgeragnov
14:00:30 sc68cal scottda sdague sileht sorrison swamireddy thomasem thorst tjones tonyb
14:00:30 tpatil tpatzig xyang rdopiera sarafraj woodster sahid rbradfor junjie gsilvis
14:00:31 hai
14:00:34 o/
14:00:37 o/
14:00:38 o/
14:00:40 * kashyap waves
14:00:42 o/
14:00:44 o/
14:00:44 o/
14:00:44 hi
14:00:51 \o
14:00:52 hi
14:00:52 o/
14:00:55 yo
14:00:59 cripes
14:00:59 hey
14:01:05 o/
14:01:10 o/
14:01:10 o/
14:01:11 o/
14:01:16 an army
14:01:16 o/
14:01:23 #link agenda https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
14:01:25 post-summit spike?
14:01:29 of course
14:01:29 reporting for duty sir! :)
14:01:39 #topic release news
14:01:45 #link Rough outline of Newton release milestones is up: https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
14:01:57 ^ is based on the priorities session we had last week
14:02:22 i have a patch to get it into the releases repo also https://review.openstack.org/#/c/312245/
14:02:31 just need dims to approve that
14:02:43 yep, done
14:03:05 was just reviewing that :)
14:03:07 the next milestone is n-1, which is also the non-priority spec approval freeze, on june 2nd
14:03:20 so we have ~4 weeks to review non-priority specs
14:03:47 what are the priorities? you ask
14:03:50 well, bam! https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html
14:04:04 also from the session last week
14:04:33 we have oodles of specs coming in from after the summit, several are just TODOs from discussions
14:04:37 a lot around cells v2 and API
14:04:38 * johnthetubaguy nods with approval in mriedem's general direction
14:05:03 any questions on the release schedule?
14:05:16 #topic bugs
14:05:17 o/
14:05:33 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:05:55 the only news i have here is devstack switched to using fernet tokens by default on 4/29
14:06:05 and now we have race failures in the identity tests
14:06:19 and some neutron tests in our unit tests right?
14:06:21 a fix merged yesterday but i'm still seeing failures in logstash
14:06:31 dansmith: which looks like mox?
14:06:34 i've seen something like that
14:06:41 some port create failure or something
14:06:45 yeah
14:06:55 i don't have a bug or query for that one
14:07:04 Need to port away from mox?
14:07:06 usually those were in py34 jobs, but that one wasn't
14:07:15 efried: well,
14:07:18 I saw these in py27
14:07:20 have you seen that class?
14:07:26 mega mox setup in the base class
14:07:35 so porting that one is going to be a massive undertaking
14:07:43 in fact,
14:07:51 mriedem, I can't remember why we hate mox.
14:07:56 it's going to nearly require a parallel effort to make sure we get coverage before we remove the mox ones,
14:07:59 efried: it doesn't support py3 for one
14:08:06 because those tests are soooo tightly coupled to the code
14:08:06 dansmith: yeah
14:08:23 efried: there is a doc about it somewhere with more details
14:08:39 we could at least start by converting the test that's failing to mock, and then drop the old one
14:08:48 i'm sure it's some variant of allocate_for_instance
14:08:50 mriedem, +1
14:09:29 efried: it's in http://docs.openstack.org/infra/manual/developers.html#peer-review
14:09:45 i'm not seeing our faithful bugs subteam people
14:09:57 the bugs meeting this week was pretty short, people are still recovering from the summit
14:10:13 our new bugs are rising though https://bugs.launchpad.net/nova/+bugs?field.searchtext=&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_no_branches.used=&field
14:10:37 so if you get a spare 10 min, triage a bug, won't you?
14:10:53 #topic reminders
14:11:01 #info Newton mid-cycle RSVP closes on Tuesday 5/10: http://lists.openstack.org/pipermail/openstack-dev/2016-May/093815.html
14:11:24 if you are planning, or thinking of asking, to go to the midcycle, please rsvp ^ by next tuesday
14:11:45 #info api-ref docs cleanup review sprints on Monday 5/9 and Wednesday 5/11: http://lists.openstack.org/pipermail/openstack-dev/2016-May/093844.html
14:11:52 The Watcher midcycle is also at Hillsboro on the same dates, FYI
14:12:00 ok
14:12:29 we're going to do a review sprint next monday and wednesday for the api-ref docs cleanup
14:12:43 there are already a bunch of changes up for that
14:13:07 well, there were yesterday; looks like there were some busy beavers overnight
14:13:23 #link Newton review focus list: https://etherpad.openstack.org/p/newton-nova-priorities-tracking
14:13:40 just remember to refresh ^
14:13:58 sdague also has dashboards for the virt driver subteams now i think?
14:14:15 mriedem: I had a proposed set, was looking for feedback on it
14:14:23 it's posted to the list
14:14:40 #link driver review dashboards ML thread http://lists.openstack.org/pipermail/openstack-dev/2016-May/093753.html
14:14:45 yeah, i haven't read it all yet
14:14:50 thanks for posting that though
14:14:59 Just checking that you saw my feedback sdague :)
14:15:11 BobBall: yep
14:15:28 Coolio
14:15:29 claudiub and garyk aren't here
14:15:33 so i guess...
14:15:51 We have 50 approved blueprints: https://blueprints.launchpad.net/nova/newton - 5 are completed, 5 have not started, 3 are blocked
14:16:05 ^ as of yesterday
14:16:26 #help https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty Volunteers for 1 week of bug skimming duty?
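[Editor's aside on the mox-to-mock conversion discussed under the bugs topic: a minimal sketch of the same stub written in both styles. NetworkAPI and the simplified allocate_for_instance signature are illustrative stand-ins for the tightly coupled nova test code being discussed, not the actual classes.]

```python
# Hedged sketch: the same stub in mox (record/replay) style, shown in
# comments, and in mock (patch/assert) style. Names are hypothetical.
import unittest
from unittest import mock


class NetworkAPI(object):
    """Stand-in for the real API class; would call out to neutron."""
    def allocate_for_instance(self, instance, **kwargs):
        raise NotImplementedError()


# Old mox style, roughly:
#   self.mox.StubOutWithMock(api, 'allocate_for_instance')
#   api.allocate_for_instance(instance).AndReturn(nw_info)
#   self.mox.ReplayAll()
#   ... exercise the code ...
#   self.mox.VerifyAll()

# Equivalent mock style: patch first, exercise, then assert on the call.
class TestAllocate(unittest.TestCase):
    @mock.patch.object(NetworkAPI, 'allocate_for_instance')
    def test_allocate(self, mock_alloc):
        mock_alloc.return_value = 'fake-nw-info'
        api = NetworkAPI()
        result = api.allocate_for_instance('fake-instance')
        self.assertEqual('fake-nw-info', result)
        mock_alloc.assert_called_once_with('fake-instance')


if __name__ == '__main__':
    unittest.main()
```

[The practical difference: mox records expected calls up front and verifies them on replay, while mock patches first and asserts afterwards, which is part of why mox-based tests end up so tightly coupled to the code under test.]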
14:16:37 sdague: hmm, that looks really quite good, with just looking for a +1, nice
14:16:52 looks like the table in https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty needs to be updated
14:16:56 #action mriedem to update the table in https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty
14:17:03 sdague: what's the best way to add ironic there?
14:17:27 is it in the dashboard repo or?
14:17:37 jroll: he's proposing it for that i think
14:17:45 jroll: yes, it's in the dashboard repo already in that form
14:17:47 see what's in the ML thread and copy/change for ironic
14:17:57 sdague: okay, I'll propose a change there
14:18:02 jroll: great
14:18:08 thanks
14:18:11 anything else on bugs?
14:18:28 #topic stable branches
14:18:34 #link Stable branch status: https://etherpad.openstack.org/p/stable-tracker
14:18:49 there is nothing really new for issues affecting nova as far as i know
14:18:56 stable team is on summit hangover also
14:19:23 we have quite a few mitaka and liberty backports open for review
14:19:38 and kilo is in eol freeze
14:19:59 Daviey has been going through all stable/kilo changes and -2ing them to prep for the final release before EOLing the branch
14:20:25 so if you have something in https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/kilo,n,z that you think needs to get in, you need to propose a freeze exception to the dev list
14:20:46 #topic Subteam Highlights
14:20:56 i hope people are here :)
14:21:01 alaski: cells v2 meeting recap?
14:21:14 lots of jetlag
14:21:16 it was mostly 'do reviews'
14:21:22 bunch of specs up for review
14:21:29 dansmith is working on migrating keypairs to the api db
14:21:34 yes, a lot of db migrations under discussion
14:21:45 melwitt is working on mq switching
14:21:59 ccarmack on testing still
14:22:11 and everyone else on db migrations
14:22:33 ok, and the add-buildrequest-obj series
14:22:38 yep
14:22:44 which looks like it partially merged last week or so
14:22:48 I'm sidetracked on policy briefly and then I'll be back to that
14:22:52 yup
14:22:56 ok, thanks
14:23:12 we don't have jaypipes, anyone else for the scheduler meeting highlights?
14:23:14 cdent: ?
14:23:24 hola
14:23:27 * cdent thinks
14:23:48 the spec for generic-resource-pools is top priority this week i think
14:24:01 https://review.openstack.org/#/c/300176/
14:24:07 not from the meeting, but we had a long talk with jay about scrapping the currently proposed ironic multiple compute host thing in favor of leveraging generic-resource-pools
14:24:07 Yeah, nothing huge to report from the meeting, but yes, the specs rolling in on the resource-provider stack are active
14:24:11 and then resource-providers-allocations
14:24:23 yeah, dynamic resource classes was posted by jay recently
14:24:25 jroll: cool
14:24:44 jroll: when i went back and read the re-approved compute hosts spec for ironic i was thinking it sounded a lot like generic-resource-pools
14:24:49 mriedem: might end up with a spec coming your way on that
14:24:50 there's some heated debate going on related to how to do logging of hosts not matching during scheduling
14:24:51 yeah
14:25:01 there are some ops-ability things to work out but I think it's solvable
14:25:06 cdent: cfriesen's spec?
14:25:15 and some quibbling over whether or not to use the db to resolve scheduling
14:25:18 mriedem: yes
14:25:25 logging: https://review.openstack.org/#/c/306647/
14:25:43 dynamic resource classes: https://review.openstack.org/#/c/312696/
14:25:54 ok, i do see some ops people +1ing https://review.openstack.org/#/c/306647/
14:25:57 seems like we had that filter action logging discussion years ago
14:25:58 i brought that up in the ops meeting yesterday
14:26:00 glad they chimed in
14:26:03 scheduling in db: https://review.openstack.org/#/c/300178/
14:26:22 yeah, quite a lot of input on the logging for "new" people, which is very useful
14:26:30 cdent: +1
14:26:59 ok, and there is the thread about placement CLI/API in the dev list, but let's not get into that here
14:27:08 I'm, personally, still vaguely confused on the need for the logs (see my comment on https://review.openstack.org/#/c/300178/ about doing resource management via filters being bad)
14:27:34 #link dev list thread on placement API/CLI http://lists.openstack.org/pipermail/openstack-dev/2016-May/093955.html
14:27:57 moving on
14:28:03 PaulMurray isn't around
14:28:11 can anyone speak for the live migration meeting?
14:28:12 mdbooth: ?
14:28:30 I hadn't prepared anything
14:28:30 mriedem: give him one sec
14:28:35 yelling at him
14:28:46 PaulMurray would be best :)
14:28:51 hi -
14:29:00 just joined - whatsup
14:29:06 PaulMurray: hi! any highlights from the live migration meeting this week?
14:29:39 we did a recap of the summit
14:30:05 ok
14:30:15 pretty much the same as the email I put on the ML
14:30:17 moving on?
14:30:37 at some point i plan on adding a patch for an lvm gate job
14:30:42 for mdbooth's refactor series
14:30:44 mriedem: +1
14:30:58 sdague: want to highlight the api subteam meeting?
14:31:04 sure
14:31:12 mriedem: Not sure this is the right time, but I'm also not convinced we have CI coverage of both Qcow2 and 'Raw' backends
14:31:21 Suspect we only cover 1, but may be wrong
14:31:39 a lot of it was on the deletion of the legacy v2 code in tree, which we agreed on at the summit
14:32:05 the paste.ini entries are fully removed now, as well as the functional tests for the legacy code
14:32:25 so there will be a series of test tear downs, then the final code removal over the next couple of weeks
14:33:02 we're doing the api-ref sprint next week, I'm trying to work out something for a progress/burn-down chart
14:33:33 and lastly people are focussed on the policy in code specs to get those reviewed to completion
14:33:46 that's the big focus for this week
14:34:02 ok, thanks, already making good progress on the summit items
14:34:15 i don't see moshele around
14:34:29 yeh, there are a few specs on delete/deprecation which are being posted as well, based on summit items
14:34:36 but he did post some notes from the sriov/pci meeting this week to the dev list
14:34:59 lbeliveau and sfinucane are working on filling gaps in the NFV docs
14:35:36 and it sounds like mellanox ci is working on moving to containers for their ci, like the intel nfv ci does
14:35:38 mriedem: a couple of them are already merged
14:35:46 lbeliveau: ah, cool
14:35:51 thanks for working on that
14:36:04 gibi: did you want to share anything about the notifications meeting?
14:36:10 sure
14:36:34 the etherpad https://etherpad.openstack.org/p/nova-versioned-notifications is up to date after the summit
14:36:46 we are working on getting the transformation spec merged
14:37:01 https://review.openstack.org/#/c/286675/
14:37:10 johnthetubaguy gave feedback so a couple of things need to be updated
14:37:12 i see john has the -1 of death on there
14:37:27 it's not too serious, it's mostly little things
14:37:34 of death
14:37:37 :)
14:37:52 we have an odd dependency on stuff that modifies notifications
14:37:53 it sounds like the searchlight team is also interested in helping with this, which is great
14:38:07 but an idea came up where we just add TODOs for new notifications, while gibi gets that sorted
14:38:08 besides that, quite a few new notification specs are up for review from different parties
14:38:40 because of that "no more old style notifications" code review rule we decided on
14:39:00 yes, I will try to put up PoC code soon for instance.delete
14:39:08 johnthetubaguy: is someone going to put that in the nova devref review guide page?
14:39:10 that will help others
14:39:23 mriedem: I think it is already there...
14:39:39 yeah, it's in there already
14:39:41 down at the bottom
14:39:44 http://docs.openstack.org/developer/nova/code-review.html#notifications
14:39:50 hells bells
14:39:53 bottom of the page
14:39:54 that's the one
14:40:05 alright, sounds good, thanks
14:40:16 moving on
14:40:19 #topic stuck reviews
14:40:27 there is nothing on the agenda
14:40:40 #topic open discussion
14:40:51 abhishek: are you around?
14:41:00 hi
14:41:00 review request for: Set migration status to 'error' on live-migration failure - https://review.openstack.org/#/c/215483/
14:41:10 I have already asked alaski for his review
14:41:47 I've mostly lost the context on it since it's from a while ago
14:42:04 but there's a short term workaround that can be applied if we modify task states a bit on failure
14:42:19 abhishek: have you talked to tdurakov about this?
14:42:21 and then an idea to proactively clean stuff up and not rely on periodic tasks doing error cleanups
14:42:22 this is error vs failed in the migration object right?
14:42:25 seems like something related to a thing he was working on
14:42:36 johnthetubaguy: yes
14:42:40 johnthetubaguy: right
14:42:53 mriedem: no, I will catch him
14:42:59 alaski, there were two queries on this
14:43:14 one was failed vs error as an api visible change
14:43:28 the other was about continuing with the hacky cleanup - which is
14:43:33 what you're referring to I think
14:43:45 do we care about the failed vs error question?
14:44:03 I am more worried about the cleanup working, honestly
14:44:41 personally I would rather this be done correctly, i.e. without relying on a periodic
14:44:53 IMO right now there is no way we can clean this up other than with the periodic task
14:44:54 might be worth asking on the ops channel
14:44:56 but I don't understand the complexities of changing the failure state
14:45:16 alaski, it's just that it's seen in the api
14:45:19 or ops list
14:45:32 yeah, so it would be a behavior change in the API
14:45:58 not sure about the meaning of error vs failed in this case either
14:46:10 I think one is meant to be recoverable and the other not?
14:46:13 yeah, I guess I don't understand why those are two different states
14:46:18 "The 'error' means something unexpected happened, and nova can't cleanup it automatically. The 'failed' means something wrong happened, and nova can handle it clearly."
14:46:23 from alex_xu in the patch
14:47:13 there might be more info about this in ndipanov's migration state machine spec?
14:47:14 this gets into a whole area of api versioning that I don't think is well covered yet
14:47:19 so it looks like we are using error to trigger an auto cleanup?
14:47:32 if we change the state machine in Nova do we have to advertise that?
14:48:22 we don't publish the actual state machine (because there isn't really one) so I don't think it matters
14:48:28 I mean, shouldn't there be an expectation that either error or failure could happen
14:48:30 plus it differs between hypervisors for things like boot
14:48:31 if you're writing client-side code that's checking for error or failed on the migration object, i doubt you're doing anything differently for either state
14:48:40 alaski: we have been hiding the state changes with alias and unknown status, AFAIK
14:48:40 to a client they are both failures
14:49:05 mriedem: yeah
14:49:11 dansmith: hmm, true, the state changes and progress increments in slightly different ways I guess
14:49:16 yeah
14:50:08 so to me it seems that changing from error to failure, or the other way, should be fine
14:50:17 if what's being used now is wrong
14:50:33 the change is using 'error' instead of 'failed' for live migration
14:50:37 to work like cold migrate/resize
14:50:42 so that the periodic cleanup task finds it
14:50:54 right
14:50:56 because that checks for 'error' migrations
14:51:40 anyway, sounds like we're meh on the state change
14:51:52 there were comments in the patch about not wanting to rely on the periodic task
14:52:00 I am. seems fine to me, with the acknowledgement that this is all still hacky
14:52:09 i don't know what the solutions are for that,
14:52:17 but we don't need to get into those details in this meeting either probably
14:52:25 I think this should be a short-term workaround for now
14:52:29 i added tdurakov to the review to see if he has input
14:52:39 PaulMurray: i think that's reasonable
14:52:41 we should redo the migration process anyway
14:52:47 since there is talk of refactoring a lot of this
14:52:54 we were going that way in the friday session
14:53:05 yeah, that's what tdurakov is working on
14:53:15 ok, thank you mriedem and all
14:53:17 abhishek: ok, so i guess for now just rebase the patch
14:53:23 and talk to tdurakov
14:53:36 I will also catch tdurakov
14:53:42 thank you for your time
14:53:46 it also wouldn't hurt to give a heads up on the ops list
14:54:04 ok, anyone else have anything for open discussion?
14:54:06 Request to merge storage pools 'pre-patches': http://lists.openstack.org/pipermail/openstack-dev/2016-May/094059.html
14:54:24 Just to highlight that, no need to discuss now
14:54:45 ok, i guess we need to get https://review.openstack.org/#/c/302117/ in first
14:55:00 mdbooth: can you check on that qcow2 vs raw in the gate question?
14:55:01 These are independent of that
14:55:06 mriedem: Yes, will do
14:55:08 mriedem: I was +2 on that minus a couple of nits
14:55:15 mdbooth: they are linked to the bp
14:55:17 which is linked to the spec
14:55:18 mriedem: which I think have been fixed now
14:55:33 mriedem: Yeah, so the patch series depends on them, but they're not really part of the patch series
14:55:42 If you see what I mean
14:55:43 mriedem: they're bugish things
14:55:46 sure,
14:55:58 then remove the bp link in those semi-unrelated cleanup patches :)
14:55:59 we should just merge the spec anyway
14:56:05 if you don't want my -2 hammer of procedure on them
14:56:13 or we just merge the spec yeah :)
14:56:23 * mdbooth is cool either way
14:56:32 #action dansmith to do that thing he does
14:56:34 I will look at the spec right after this and poke mriedem if it's good
14:56:39 cool
14:56:40 can you look at the storage pools specs too
14:56:46 dansmith: I think we still had some upgrade details to thrash out.
14:56:48 (as well as mdbooth's one)
14:56:52 danpb is specs core too you know...
14:56:59 and this is all libvirty things
14:57:11 will bug him too
14:57:17 excellent
14:57:22 alright, 3 minutes to spare
14:57:25 can we be done now?
14:57:31 ....
14:57:32 yes
14:57:35 #endmeeting
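[Editor's aside on the open-discussion item about migration status: a minimal sketch of why the status value matters, assuming a periodic task that only considers migrations in 'error' status, which is why the patch under review switches live migration from 'failed' to 'error' to match cold migrate/resize. All names here (Migration, fetch_migrations, cleanup_incomplete_migrations) are hypothetical stand-ins, not the actual nova compute manager code.]

```python
# Hedged sketch of the behavior under discussion: a periodic cleanup
# task only ever looks at migrations whose status is 'error', so a
# live migration left in 'failed' never gets cleaned up. These types
# and helpers are illustrative, not the real nova objects.
from dataclasses import dataclass


@dataclass
class Migration:
    uuid: str
    migration_type: str  # e.g. 'live-migration', 'resize'
    status: str          # e.g. 'error', 'failed', 'completed'


def fetch_migrations():
    # Stand-in for a DB query over incomplete migrations.
    return [
        Migration('aaa', 'resize', 'error'),           # cleaned up today
        Migration('bbb', 'live-migration', 'failed'),  # silently skipped
        Migration('ccc', 'live-migration', 'error'),   # cleaned up after the change
    ]


def cleanup_migration(migration):
    # Stand-in for removing leftover disks, network config, etc.
    print('cleaning up leftovers for %s' % migration.uuid)


def cleanup_incomplete_migrations():
    """Periodic task: only 'error' migrations are considered."""
    for migration in fetch_migrations():
        if migration.status != 'error':
            continue  # 'failed' live migrations fall through the cracks
        cleanup_migration(migration)


if __name__ == '__main__':
    cleanup_incomplete_migrations()
```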