14:00:07 #startmeeting nova
14:00:10 Meeting started Thu Apr 6 14:00:07 2017 UTC and is due to finish in 60 minutes. The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:14 The meeting name has been set to 'nova'
14:00:23 will give it a few minutes
14:00:31 o/
14:00:32 o/
14:00:36 o/
14:00:42 O/
14:00:58 \o
14:01:03 o/
14:01:43 o/
14:02:04 ok let's do it
14:02:06 #link agenda https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
14:02:19 #topic release news
14:02:24 #link Pike release schedule: https://wiki.openstack.org/wiki/Nova/Pike_Release_Schedule
14:02:38 #info p1 milestone is in 1 week (April 13)
14:02:45 #info Blueprints: 64 targeted, 49 approved, 1 completed
14:02:53 \o
14:02:55 #info Open spec reviews: 61 (down 48 from last week)
14:03:17 the spec review sprint cleaned up a bunch of old cruft from the open review backlog it looks like
14:03:18 which is nice
14:03:31 any release questions?
14:03:40 #topic bugs
14:03:47 #help Our backlog of new untriaged bugs is high. Need help with triage of new bugs.
14:03:57 we've got >80 untriaged bugs
14:04:03 wow
14:04:09 and it's all bauzas' fault
14:04:15 blame me
14:04:22 nothing listed as critical
14:04:29 gate status is ok
14:04:33 3rd party ci status is ok
14:04:43 anything for bugs?
14:04:54 #topic reminders
14:05:01 #info Spec freeze is April 13 (1 week)
14:05:12 #link Pike Review Priorities etherpad: https://etherpad.openstack.org/p/pike-nova-priorities-tracking
14:05:27 please keep ^ up to date
14:05:36 #link Forum planning: https://wiki.openstack.org/wiki/Forum/Boston2017
14:05:42 #link https://etherpad.openstack.org/p/BOS-Nova-brainstorming Forum discussion planning for nova (add your name if you are going)
14:05:45 * edleafe needs to update for scheduler
14:05:47 ^ is a bit stale at this point
14:05:52 #info Currently proposed forum topics: http://forumtopics.openstack.org/
14:06:04 i see that the forum sessions are already being reviewed and some are being rejected
14:06:15 i expect a lot of that should shake out next week
14:06:24 #info More information on planning: http://lists.openstack.org/pipermail/openstack-dev/2017-March/114703.html
14:06:36 any questions about the forum?
14:06:56 #topic stable branch status
14:07:05 i'm working on getting some ocata regressions backported
14:07:20 we've got one more that needs to be merged on master and then backported, and then release
14:07:43 that is https://review.openstack.org/#/c/453859/ and down
14:07:49 ^ is an ocata regression
14:07:54 once merged, i'll backport and release ocata
14:08:06 bottom 2 have +2s
14:08:10 top patch is trivial
14:08:16 * bauzas on it
14:08:20 no news for stable/newton really
14:08:26 #info Planned EOL for Mitaka is April 10
14:08:39 i haven't heard any communication from tonyb yet about mitaka eol
14:08:42 but it's right around the corner
14:09:01 questions about stable?
14:09:14 #topic subteam highlights
14:09:19 dansmith: cells v2
14:09:34 we had a meeting this week,
14:09:38 discussed the regressions mriedem was mentioning above
14:10:03 I gave a little status on the cellsv2 test patch, and guilted mriedem into picking up the pace on reviewing my series, which worked like a charm
14:10:13 melwitt says she's close to resubmitting the quotas patch
14:10:28 and mriedem/dtp are getting rolling on the api-facing service id->uuid stuff
14:10:37 so lots of good progress and momentum all around
14:10:38 le fin
14:10:44 yay
14:10:46 edleafe: scheduler
14:10:53 Reviewed os-traits status. Agreed to clean up the code by adding the autoimport code.
14:10:56 Had a good long discussion about claims being made in placement, and just where they should be made, and how the workflow should proceed. Agreed not to use any special claim database or token or anything, and just have failed builds delete any allocations for the instance UUID
14:11:01 One concern was that if the scheduler claims resources, what happens when the resource tracker does its periodic update of its view of resources. Will it wipe out the claimed resources? Made a note to investigate this.
14:11:05 Had a strong back-and-forth about returning scheduler hints in the API. No agreement on this, except to continue the discussion on the spec. https://review.openstack.org/#/c/440580/
14:11:09 Discussed the use-local-scheduler spec (https://review.openstack.org/#/c/438936/). No agreement yet on that one
14:11:12 jaypipes got Madonna songs going in everyone's head.
14:11:14 EOM
14:11:53 ok i also left comments in https://review.openstack.org/#/c/440580/
14:12:00 it sounds like sdague would hate it less if it was a subresource
14:12:10 rather than in the full server repr
14:12:29 yes, and I thought we'd agreed on that previously
14:12:48 i don't remember an agreement on it, but i don't disagree with it
14:12:52 in fact, i think i agree with it :)
14:12:56 http://45.55.105.55:8082/bugs-dashboard.html
14:13:12 markus_z made this dashboard, might be helpful in bug triage
14:13:12 macsz: premature linking?
14:13:27 macsz: let's get to that in open discussion
14:14:10 so in general i'm good with a subresource on the hints, and restricting the hints, and also making it more like how we're going to do flavors (comments on that in the spec),
14:14:25 2 weeks ago we talked about a backlog spec for a high-level idea on moving entire groups of servers
14:14:40 rather than exposing hints so an external system has to sort all of this garbage out, which is terrible u
14:14:41 *ux
14:14:55 but i know if we block the small incremental thing for the big thing, neither will get done
14:14:59 and everyone will be sad
14:15:16 anywho, moving on
14:15:25 tdurakov: are you around for live migration?
14:15:34 mriedem: I think I basically agree with your point about subresource and restricting hints
14:15:35 no LM meeting this week
14:15:44 (FWIW)
14:15:50 there were two LM specs to review
14:15:57 one was the per-instance timeout, that spec is approved
14:15:59 the other is:
14:16:00 Live migration completion timeout (config part): https://review.openstack.org/#/c/438467/
14:16:13 that's got a +2, just needs another reviewer
14:16:19 or 5
14:16:32 alex_xu: are you around for the api meeting?
14:16:36 *api subteam
14:17:00 guess not, we mostly talked about policy specs
14:17:10 specifically johnthetubaguy's remove scope spec
14:17:22 https://review.openstack.org/#/c/433037/
14:17:46 yeah
14:17:47 we talked some about my spec to remove bdm.device_name from server create and volume attach POST requests, which i've since abandoned
14:18:09 and finally we talked some about a path forward to deprecate os-virtual-interface and replace with os-interface, and eventually get bdm/vif tags out of the rest api
14:18:20 which i need to write a spec for, but i'm all spec'ed out
14:18:32 mriedem: my bouncer died for 5 minutes, couldn't see anything so i may be out of sync
14:18:45 ^ me too :/
14:18:46 macsz: we'll talk about the bugs dashboard in open discussion
14:19:02 gibi: notification highlights?
14:19:22 i think we're missing the gibster
14:19:42 we talked a bit about bdms in payload notifications for searchlight
14:19:57 and agreed to make that config driven for now, since it will eventually be config driven for searchlight anyway
14:20:11 disabled by default so we don't make unnecessary db calls from computes
14:20:31 powervm - efried?
14:20:39 PowerVM driver blueprint impl progress has stagnated. Broader team, please review https://review.openstack.org/438119
14:20:51 In other news, *something* changed underneath us a couple of weeks ago which made the PowerVM code hang while pulling a glance image to an instance's boot disk. Something to do with eventlets? A thread blocks open()ing a FIFO - and hangs the entire compute process. We've worked around it, but if anyone knows more about the root cause of this, please hit me and thorst up after the meeting.
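The hang described above can be reproduced in miniature: under eventlet, all greenthreads share one OS thread, so a plain blocking open() of a FIFO with no writer connected stalls the whole nova-compute process. The sketch below shows the general O_NONBLOCK technique for avoiding a blocking FIFO open; it is illustrative only and is not the actual PowerVM workaround, whose details are not in this log.

```python
import os
import tempfile

def open_fifo_nonblocking(path):
    """Open the read end of a FIFO without blocking.

    A plain open(path) blocks until a writer connects to the FIFO;
    under eventlet that blocks the single OS thread and therefore
    every greenthread in the process. Opening with O_NONBLOCK
    returns immediately even when no writer is present.
    """
    return os.open(path, os.O_RDONLY | os.O_NONBLOCK)

if __name__ == "__main__":
    tmpdir = tempfile.mkdtemp()
    fifo = os.path.join(tmpdir, "boot-disk.fifo")  # hypothetical name
    os.mkfifo(fifo)
    # Returns at once; a blocking open() here would hang forever
    # since no writer ever attaches.
    fd = open_fifo_nonblocking(fifo)
    os.close(fd)
    os.remove(fifo)
```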
14:21:25 ok
14:21:29 In conclusion: Broader team, please review https://review.openstack.org/438119
14:21:43 cinder/nova updates
14:22:01 i wasn't in last week's meeting, but lyarwood's vol detach refactor patch is merged
14:22:05 Review focus is on the new detach flow: https://review.openstack.org/#/c/438750/
14:22:18 ^ is all noop right now, by design
14:22:35 #topic stuck reviews
14:22:43 there was nothing in the agenda
14:22:56 #topic open discussion
14:23:05 macsz: did you want to mention something about bugs?
14:23:31 I have something - https://review.openstack.org/#/c/450211/
14:23:33 wanted to point out that there is the dashboard that markus_z created: http://45.55.105.55:8082/bugs-dashboard.html
14:23:45 helps with bug triage
14:23:47 macsz: we also talked about doing a bug spring clean
14:23:49 and the queue of those
14:23:50 after p1
14:23:56 is getting looong
14:23:57 and yes
14:24:04 by accident i attended the bug team meeting this week
14:24:06 the team of 1
14:24:16 you made a crowd, sir
14:24:22 and was reminded that in newton, around this time last year, we did a spring clean of launchpad
14:24:30 not fixing bugs,
14:24:35 but just going through cruft in launchpad
14:24:47 i think i'll propose that again, to be done in a couple of weeks
14:24:57 more info in the ML, when it happens
14:25:12 sfinucan: ok what's up
14:25:28 So I was hoping Jens would be here
14:25:46 * sfinucan doesn't know his nick
14:25:51 fickler?
14:25:59 frickler:
14:26:00 ^
14:26:27 but long story short, seems I exposed some flaws in how the metadata service exposes network info
14:26:33 this looks much like what dtp has a bp for
14:26:33 ...with a recent change
14:26:41 it reminded me about the lack of docs we have around the metadata and config drive "API"
14:26:51 ah, so yeah - I was hoping for some context
14:26:54 johnthetubaguy: yes, very much
14:27:09 i had the same concern in https://review.openstack.org/#/c/400883/
14:27:26 he's talking about "making a new metadata version" in that review. Curious to know (a) what that means and (b) if it's necessary
14:27:48 ...and therefore if it should block that fix
14:27:52 the URLs get versioned a bit, one per release
14:28:06 but it's not really very clear
14:28:11 we don't bump the version in the meta service if there are no changes
14:28:14 Presumably https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L50-L59
14:28:19 For the versions
14:28:21 mriedem: +1 true
14:28:29 artom: no those are the aws versions
14:28:31 ec2
14:28:36 artom: https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L76
14:28:49 mriedem, doh, right
14:29:16 sfinucan: so i'm sure we do a crap job of verifying or testing this,
14:29:38 but if for example we add new things to the metadata payload, or change the response in one of the existing keys, then yeah it would be a version bump
14:29:48 I was wondering how hard it would be to do something like we have with the samples tests for the API
14:30:01 johnthetubaguy: we really probably should have something like that
14:30:07 with a fixture or something
14:30:13 mriedem: Right, and do we need to provide the ability to get the older version a la microversions?
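For context on the versioning being discussed: the OpenStack metadata service exposes one dated version per release that changed the payload, selected as a path component (or "latest"). The sketch below illustrates that URL scheme; the version list is copied from OPENSTACK_VERSIONS in nova/api/metadata/base.py as of roughly this period and should be treated as illustrative, not authoritative.

```python
# Dated metadata versions, one per release that changed the payload.
# Illustrative snapshot of OPENSTACK_VERSIONS in
# nova/api/metadata/base.py; check the source for the current list.
OPENSTACK_VERSIONS = [
    "2012-08-10",  # Folsom
    "2013-04-04",  # Grizzly
    "2013-10-17",  # Havana
    "2015-10-15",  # Liberty
    "2016-06-30",  # Newton, first bump
    "2016-10-06",  # Newton, second bump
]

def metadata_url(version="latest"):
    """Build the versioned metadata URL.

    'version' is either a dated version string or 'latest'; the
    version appears in the URL as a directory component.
    """
    return "http://169.254.169.254/openstack/%s/meta_data.json" % version

# A guest pinning to a specific version instead of 'latest':
pinned = metadata_url("2016-10-06")
```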
14:30:14 I just feel I am flying blind in there, which is a worry
14:30:19 because if you change the format of network_data.json it would be hard to tell
14:30:26 sfinucan: i think we do
14:30:34 sfinucan: we hand out all versions today, it's in the URL or directory I think
14:30:41 johnthetubaguy, it's been bouncing around in the back of my head to overhaul metadata api, with proper versioning like the API microversions
14:30:52 johnthetubaguy, yeah, I think that's correct
14:30:58 "version" is in the URL as a directory
14:31:00 artom: it's not that simple, because of config drive, but better testing would help a ton
14:31:01 johnthetubaguy: we hand out all versions for config drive i think
14:31:08 mriedem: yeah, exactly that
14:31:10 but you can request a specific version via the metadata api
14:31:29 yeah, in the URL or directory path, I believe
14:32:19 ok, anyway, sfinucan does that answer your question?
14:32:37 Sort of. We can revisit on openstack-nova later
14:33:08 interesting project for someone there, getting the testing sorted
14:33:30 i believe we now have a fixture for running actual requests against the metadata api
14:33:37 johnthetubaguy, I probably have the bandwidth for that this cycle
14:33:38 sdague and mikal wrote that for the vendordata v2 stuff
14:33:49 Unless you have someone else in mind
14:34:00 mriedem: Link?
14:34:03 so in functional tests we could write some tests to hit each version and assert what we get back
14:34:07 sfinucan: i can dig it up later
14:34:13 artom: I can help with that too
14:34:16 mriedem: cool cool
14:34:21 we'd have to stub out the backend data/content of course
14:34:29 but it could test the actual route path logic
14:34:45 ok anything else?
14:35:02 sounds like no, ok, thanks everyone
14:35:03 #endmeeting