14:01:32 #startmeeting nova
14:01:32 Meeting started Thu Jun 11 14:01:32 2015 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:34 yo
14:01:34 hi all
14:01:36 The meeting name has been set to 'nova'
14:01:37 o/
14:01:38 o/
14:01:39 o/
14:01:39 o/
14:01:40 o/
14:01:43 o/
14:01:44 o/
14:01:44 o/
14:01:45 o/
14:01:46 o/
14:01:46 o/
14:01:46 \o
14:01:52 o...../
14:01:56 welcome
14:02:04 #topic release status
14:02:20 #info June 12: spec review day
14:02:23 #info June 23-25: liberty-1 (spec freeze)
14:02:38 spec freeze or proposal freeze?
14:02:41 so any questions about our release deadlines?
14:02:43 freeze
14:02:46 :(
14:02:48 jk
14:02:51 cool
14:03:03 #link https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule
14:03:12 so I tried to describe all the dates in the above wiki
14:03:21 it even talks about the exception process
14:03:33 now its probably wrong or confusing, as its a first draft
14:03:45 but we can evolve that, so we don't keep making it up each release
14:03:56 o/
14:04:20 so tomorrow is spec review day
14:04:26 basically we burn down on this
14:04:33 #link http://russellbryant.net/openstack-stats/nova-specs-openreviews.html
14:04:36 at least roughly
14:04:55 so, we have quite a few blueprints to discuss
14:05:08 I am not sure this meeting is the most efficient way, so ideas on a postcard
14:05:14 but lets do it this way for now...
14:05:20 #link https://blueprints.launchpad.net/nova/+spec/virtuozzo-instance-resize-support
14:05:28 #link https://blueprints.launchpad.net/nova/+spec/virtuozzo-container-boot-from-volume
14:05:36 #link https://blueprints.launchpad.net/nova/+spec/rename-pcs-to-virtuozzo
14:05:43 nb virtuozzo is the new name for parallels
14:05:51 now they are all things with a single driver impact
14:05:55 so this is about the libvirt parallels support
14:05:56 danpb: ah, good point, thanks
14:06:11 most of them are feature parity
14:06:20 given these are "me too" I say we talk about them in the code review
14:06:22 any objections to that?
14:06:28 nope
14:07:10 seems fine
14:07:14 +1s for those happy with it would be good too, btw
14:07:19 #link https://blueprints.launchpad.net/nova/+spec/ploop-snapshot-support
14:07:24 this is parallels apparently
14:07:33 honestly ploop sounded made up, turns out its real
14:07:34 yup
14:07:36 sounds nasty.
14:07:39 yep, ploop is their disk equivalent to qcow2
14:07:52 I like the honesty of that name
14:08:04 so anything nova does with qcow2, they'll need to do some porting work for ploop
14:08:08 anyways, it seems small, so best reviewed in code?
14:08:20 danpb: oh, thats going to be messy...
14:08:25 the devil's in the details, but since its all feature parity stuff its fine in general imho
14:08:38 danpb: would moving to libvirt storage pools help us here?
14:08:47 if the code reviews look like they're getting too messy we can push back to have a spec later i think
14:09:02 i don't think we have storage pools support for ploop in libvirt currently
14:09:13 danpb: do we want to push for that, to keep nova clean?
14:09:53 its certainly one option, but i'm loath to put a dependency on the storage pools code as its author is no longer working on it
14:10:05 so not sure when the storage pools conversion will get done
14:10:35 danpb: true, I just worry about the long term with that one
14:10:47 one more...
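The point danpb makes above — anything nova does with qcow2 needs porting work for ploop — comes from Nova dispatching disk operations per image format. The sketch below is a purely illustrative toy, not Nova's actual image backend code: the class names, method, and command strings are hypothetical stand-ins showing why each qcow2 codepath needs a ploop counterpart.

```python
# Hypothetical sketch of per-format image backend dispatch.
# Every operation declared on the base class is something each
# disk format (qcow2, ploop, ...) must implement separately,
# which is why ploop support touches so many codepaths.

class ImageBackend:
    def create_snapshot(self, name):
        raise NotImplementedError

class Qcow2Backend(ImageBackend):
    def create_snapshot(self, name):
        # qcow2 snapshots go through qemu-img tooling
        return "qemu-img snapshot -c %s" % name

class PloopBackend(ImageBackend):
    def create_snapshot(self, name):
        # ploop images need their own tooling instead
        return "ploop snapshot %s" % name

BACKENDS = {"qcow2": Qcow2Backend, "ploop": PloopBackend}

def backend_for(disk_format):
    return BACKENDS[disk_format]()
```

Each new format added to the dispatch table multiplies the porting surface, which is the review burden being discussed.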
14:10:50 #link https://blueprints.launchpad.net/nova/+spec/hyper-v-imagecache-cleanup
14:11:01 that one looks a little me too, and isolated
14:11:16 they have a spec, but approving the BP will be quicker
14:11:18 johnthetubaguy: yep, that's why i said we should reserve the right to ask for a spec later if the code review turns out to look too messy
14:11:28 danpb: let us know if anything requires attention
14:11:40 danpb: i mean storage pools
14:11:49 mnestratov: sure
14:11:58 danpb: I think we should assume thats always true, we could approve it assuming that, lets come back to that
14:12:03 johnthetubaguy there’s also a similar one for Hyper-V Fibre Channel support
14:12:12 alexpilotti: got the link?
14:12:17 claudiub: ^
14:12:25 johnthetubaguy: sure
14:12:45 any more specless blueprints people want to talk about?
14:12:53 the hyperv imagecache cleanup looks like a no brainer - presumably its scope is entirely within hyperv anyway
14:12:53 here's the Hyper-V FC spec https://review.openstack.org/#/c/190107/
14:13:46 #link https://blueprints.launchpad.net/nova/+spec/hyperv-fibre-channel
14:13:58 I will see how the spec looks after
14:13:59 so...
14:14:15 I will look at these after the meeting, and approve them using my judgement
14:14:28 if you think i screwed up, let me know, and we can ask for a spec
14:14:47 if the code review goes south, and its clear we need a spec, lob a -2 on it, and lets talk about getting a spec merged
14:14:55 does that seem a cool approach?
14:15:12 any obvious objections around these ones?
14:15:15 if not, we can move on
14:15:21 johnthetubaguy: looks good for us, almost all those BPs are specific to the Hyper-V driver
14:15:46 OK... moving on
14:15:56 falling in the “trivial” category (no API, database, etc impact)
14:16:10 #topic tracking liberty progress
14:16:19 #link https://etherpad.openstack.org/p/liberty-nova-priorities-tracking
14:16:35 #link http://specs.openstack.org/openstack/nova-specs/priorities/liberty-priorities.html
14:16:38 so we merged the above
14:16:41 but...
14:17:03 #action need to work out what specs are needed for each priority, owners please update things in: https://etherpad.openstack.org/p/liberty-nova-priorities-tracking
14:17:10 right
14:17:27 any major updates on priorities before I move on?
14:17:37 so, some changes are on the fly but until we really consider them mergeable, we don't put them in
14:17:57 bauzas: do create a second list, for stuff the subteam is still looking at
14:18:01 bauzas: if thats helpful
14:18:06 johnthetubaguy: fair point
14:18:19 bauzas: particularly for specs that are being debated
14:18:24 I created a second list under cells, for visibility
14:18:38 alaski: yes, thank you for that, was really useful
14:18:48 yup, level down the exigence
14:18:58 we have a wiki for scheduler specs/bps, I can add a link to the priority spec
14:19:05 n0ano: no please
14:19:11 bauzas: why not?
14:19:17 what johnthetubaguy said
14:19:32 I mean I would prefer it written in the etherpad, but linking from there is the next best thing
14:19:33 johnthetubaguy: n0ano: because I would like to only have the changes directly, and not go to the wiki
14:19:52 bauzas: yeah, I will let you argue that in the scheduler meeting if thats OK?
14:19:58 and wiki is not the best for providing quick updates IMO
14:20:00 johnthetubaguy: sure
14:20:03 its up to the sub team to find what works for them
14:20:25 I have a preference for the etherpad, but thats just me
14:20:28 (also the wikipage is named Gantt/Liberty ;) )
14:20:28 I think a spec is even harder but we'll talk about it at the next meeting
14:20:38 n0ano: let's discuss that off-topic
14:20:43 +1
14:20:51 thats cool, its a good debate about process
14:21:02 I am keen to adopt what works for the sub team
14:21:07 just be sure to tell us all whats happening
14:21:15 np
14:21:19 but I am assuming it will be made obvious in the etherpad
14:21:24 +1
14:21:31 fer sur
14:21:33 moving on ?
14:21:34 cools
14:21:43 bauzas: I renamed that page to Scheduler/Liberty
14:21:55 #topic stuck spec reviews
14:22:07 now we might end up talking more about this during the spec review day
14:22:11 edleafe: ack, but I would prefer to keep it in the Nova space :)
14:22:35 delete-retired-services - https://blueprints.launchpad.net/nova/+spec/delete-retired-services
14:22:46 forgot to add the link to the wiki
14:22:53 #info some specs are blocked by us not having made a decision on something, when that happens list it at the bottom of here: https://etherpad.openstack.org/p/liberty-nova-priorities-tracking
14:23:10 lchen: that's not a stuck review IMHO
14:23:12 lchen: lets hold that debate till the end if thats OK?
14:23:24 so the stuck specs on that list don't all have -1s from me
14:23:34 bauzas, johnthetubaguy yep, thanks!
14:23:38 they probably should, but I felt bad about it
14:23:52 basically the list is where we need a general policy debate
14:24:03 make an agreement, allowing us to move forward
14:24:08 right now I am just making a list
14:24:21 an example
14:24:32 https://review.openstack.org/#/c/169638/9/specs/liberty/approved/selecting-subnet-when-creating-vm.rst,cm
14:24:36 https://review.openstack.org/#/c/182242/6/specs/liberty/approved/user-controlled-sriov-ports-allocation.rst,cm
14:24:41 https://review.openstack.org/#/c/187812/3/specs/liberty/approved/add-volume-type-to-create-server-api.rst,cm
14:24:50 now we said no more proxy apis
14:24:59 no more pass through apis to cinder and neutron
14:25:07 ....but we haven't deleted those APIs yet either
14:25:23 we need to sort that out, and decide what we want in the future
14:25:41 jaypipes: whats your take on these, from the API group point of view?
14:26:31 OK... so jaypipes has run away
14:26:32 anyways
14:26:33 johnthetubaguy: have not had a chance to review. will do so today.
14:26:35 honestly, I'd suggest we punt on removing the proxies until next cycle
14:26:48 * jaypipes needs to go into windows to get on webex :(
14:26:51 sdague: so that could well be the correct call
14:26:55 jaypipes: sad times :(
14:27:10 so...
14:27:10 we could really use some breathing room to do a polish on the whole API story, including getting our documentation flowing well
14:27:34 #action johnthetubaguy to make an ML thread and policy proposal for each of the "stuck" topics early next week
14:27:37 there is so much in flight with the API right now, I don't want to add any more big changes in liberty (especially as it's a short cycle)
14:28:02 sdague: yeah, so thats a good point I think
14:28:06 johnthetubaguy: This may not be the place, but I'd like to understand better what you mean by pass-through APIs.
14:28:36 neiljerram: thats another good point, I try to define that here: http://docs.openstack.org/developer/nova/devref/project_scope.html#no-more-api-proxies
14:28:38 could we consider feature branches as one possible answer to that kind of problem?
14:28:54 bauzas: thats just creating merge pain
14:28:55 saying 'for the moment, we don't have them, but we're considering them'
14:29:11 bauzas: but we could totally make more use of branches for some things, and we should look into it
14:29:23 johnthetubaguy: OK, thanks, I understand now - I'm familiar with the security groups example.
14:29:24 bauzas: I think its better to say, "please go away for this release, we are full"
14:29:27 johnthetubaguy: agreed, my point is, since we don't have this flexibility, it directly collides with all our efforts to reduce the debt
14:29:39 neiljerram: yeah, thats where we screwed it up the most
14:30:00 bauzas: not sure I get that, but we should probably keep moving at this point
14:30:12 so... basically I am trying to collect areas where we need to make a decision
14:30:20 then I will propose one for discussion on the ML
14:30:30 so we can tell these poor spec writers what they should do
14:30:44 johnthetubaguy: nvm, my take is just to say 'that's not possible for now'
14:30:55 the options are: merge the spec for this release, go onto the backlog for later, or go away its not in scope
14:31:16 bauzas: yeah, I want folks to not be left in limbo hoping they might just sneak something in though
14:31:31 bauzas: we used to do that a lot, and it was very painful
14:31:37 mostly for the submitter
14:31:48 partly for me telling them no in 5 months' time
14:31:53 but anyways, lets move on
14:31:56 agreed
14:32:12 this is a problem I see, and I have a plan to fix it, help making that list much appreciated
14:32:41 #topic Bugs
14:32:48 anyone got a bug thing to discuss
14:32:58 yup
14:33:01 one critical
14:33:24 https://bugs.launchpad.net/nova/+bug/1462424 which I consider non-critical since it's only impacting one driver
14:33:25 Launchpad bug 1462424 in OpenStack Compute (nova) "VMware: stable icehouse unable to spawn VM" [Critical,Confirmed]
14:33:39 (and a stable branch)
14:33:52 yeah, i just made that high
14:33:59 cool, was about to ask
14:34:14 also 33 bugs to triage yet
14:34:17 I tried to update the devref with a few thoughts and would appreciate feedback: https://review.openstack.org/#/c/187571
14:34:43 Should end in a "one pager" for bug handling in nova
14:34:50 https://bugs.launchpad.net/nova/+bug/1456228 is also waiting for nova-core feedback
14:34:51 Launchpad bug 1456228 in OpenStack Security Advisory "Trusted vm can be powered on untrusted host" [Undecided,Incomplete]
14:34:57 markus_z: love the diagram by the way, its nice
14:35:09 ...so
14:35:14 this is stuck bugs and critical bugs
14:35:20 lets not get distracted
14:35:21 johnthetubaguy: thanks
14:35:26 ok
14:35:53 that's it for me about bugs
14:36:01 any news on the gate
14:36:02 the client has no bugs
14:36:08 I mean no criticals :D
14:36:09 I assume people would shout if we broke it right now
14:36:21 #topic Open Discussion
14:36:25 About the gate...
14:36:33 gilliard: ah, fire away
14:36:46 There was a regression around live migration this week, caught by the multinode job but that's non-voting, because of...
14:36:57 #link https://bugs.launchpad.net/nova/+bug/1462305
14:36:58 Launchpad bug 1462305 in OpenStack Compute (nova) "multi-node test causes nova-compute to lockup" [Undecided,Incomplete] - Assigned to Joe Gordon (jogo)
14:37:07 and
14:37:07 oh thats a good one to point out
14:37:08 #link https://bugs.launchpad.net/nova/+bug/1445569
14:37:08 Launchpad bug 1445569 in OpenStack Compute (nova) "No dhcp lease after shelve unshelve" [High,Confirmed]
14:37:36 so if anyone can spare any time to see if they have any ideas, it'd be really appreciated. jogo and I looked at them recently but no progress yet :(
14:37:49 yeah, slammed right now, but very curious
14:37:55 gilliard: thanks for those, they are good ones
14:38:08 #help please look at above two bugs to get the multi host job voting
14:38:18 Thanks.
14:38:28 #link http://doodle.com/eyzvnawzv86ubtaw
14:38:40 #info the poll will close right after this meeting, be quick if you missed it
14:38:53 thats the poll for the preferred meeting time for this meeting
14:38:57 Just a note. The Ironic bugs are being slowly worked on. Appreciate the reviews. Have got 3-4 fixes merged so far. We continue to triage and work them.
14:39:21 jlvillal: are you adding the ready patches to that etherpad, and getting reviews from that?
14:39:35 johnthetubaguy, Yes we are.
14:39:55 jlvillal: cools
14:40:06 johnthetubaguy, And delete them once they have been merged.
14:40:08 lchen: so you had a blueprint to talk about?
14:40:18 johnthetubaguy, yep
14:40:22 jlvillal: awesome, stuff is moving thats the main thing
14:40:35 https://blueprints.launchpad.net/nova/+spec/delete-retired-services
14:40:48 johnthetubaguy, Can you please this little nova-manage util enhancement again?
14:41:02 Can you please consider this little nova-manage util enhancement again?
14:41:27 lchen: do we want folks to use the API rather than nova-manage really, I don't quite understand the request
14:41:39 I mean we want, not do we want
14:41:46 s/do we/we/
14:42:21 johnthetubaguy, this can be helpful in some cases like what I explained in the bp and the code review.
14:42:36 I think we've previously said,
14:42:43 that nova-manage should only be for things we can't do via the API
14:42:48 dansmith: +1
14:43:11 johnthetubaguy, we already have things like list/enable/disable in nova-manage actually
14:43:22 lchen: so you can setup environment variables for the same user as nova-manage to make it the same?
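The "environment variables" suggestion above refers to the standard OS_* credential variables that python-novaclient and the nova CLI read, so an admin gets the same access via the API as via nova-manage. A rough sketch of pulling them together — the `creds_from_env` helper itself is hypothetical, not novaclient API, though the variable names and the username/api_key/project_id/auth_url parameter names match the client conventions of this era:

```python
import os

# Hypothetical helper: collect the standard OpenStack client
# credential variables into the keyword arguments that the
# era's novaclient Client() constructor expects.

def creds_from_env(environ=None):
    env = os.environ if environ is None else environ
    return {
        "username": env["OS_USERNAME"],
        "api_key": env["OS_PASSWORD"],
        "project_id": env["OS_TENANT_NAME"],
        "auth_url": env["OS_AUTH_URL"],
    }
```

With those variables exported, something like `nova service-list` / `nova service-delete <id>` talks to the os-services API, which is the route the discussion below prefers over growing nova-manage.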
14:43:48 lchen: thats some stuff we keep meaning to remove really, there is a spec to fix that, as those are broken, where that might get discussed
14:43:57 lchen: those are from before we made it a goal to not add more things to nova-manage
14:44:24 johnthetubaguy, dansmith, ok.
14:44:51 lchen: I think we need to get people used to using python-novaclient (although thats about to die, but lets ignore that for this moment...)
14:45:00 that makes sense. I will think of other ways to do that
14:45:07 lchen: Ok, thank you
14:45:18 that's alright.
14:45:20 lchen: help sharing that best practice with folks would really help
14:45:43 johnthetubaguy, dansmith, sure. thanks
14:46:00 lchen: by which I mean, it would be great if you update the docs so its more obvious about the best way to do that, if you fancy that
14:46:08 lchen: was there another spec?
14:46:31 johnthetubaguy, no, this is just a specless bp
14:46:53 cool, np
14:46:59 any more for any more today?
14:47:16 request-based-filter-selection
14:47:20 is on the wiki
14:47:30 johnthetubaguy, yeah
14:47:34 https://blueprints.launchpad.net/nova/+spec/request-based-filter-selection
14:47:35 I added it in
14:48:06 johnthetubaguy, It's just a simple idea to save some computational cost of the scheduler
14:48:22 let me find the link
14:48:38 https://blueprints.launchpad.net/nova/+spec/request-based-filter-selection
14:48:40 https://blueprints.launchpad.net/nova/+spec/request-based-filter-selection
14:48:46 bauzas, thanks
14:48:53 lchen: I see no changes in the whiteboard
14:49:20 lchen: so it feels like that needs a spec, just to agree the direction that might affect all current filters and weights
14:49:23 bauzas, it's all in the description.
14:49:39 lchen: also, that's part of an action I have to write to explain why amending filt_props needs a spec
14:50:00 johnthetubaguy, yep, np
14:50:25 johnthetubaguy, we can discuss it after a spec is composed.
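The request-based-filter-selection idea above — saving scheduler cost by only running filters that a given request can actually be affected by — can be illustrated with a toy dispatch. This is a hypothetical sketch, not the blueprint's actual design: the dependency table and the relevance test are invented for illustration, though the filter names are real Nova scheduler filters of this era.

```python
# Toy illustration of request-based filter selection: only run a
# scheduler filter when the request spec actually carries the
# properties that filter inspects. The mapping below is a
# hypothetical stand-in for whatever the spec would define.

FILTER_DEPENDENCIES = {
    "AggregateImagePropertiesIsolation": ["image_props"],
    "PciPassthroughFilter": ["pci_requests"],
    "RamFilter": ["memory_mb"],
}

def relevant_filters(request_spec):
    """Return the names of filters whose inputs appear in the request."""
    return [name for name, keys in FILTER_DEPENDENCIES.items()
            if all(key in request_spec for key in keys)]
```

A request with no PCI devices would then skip PciPassthroughFilter entirely, which is the computational saving lchen describes — but as noted below, changing how filters see filter properties is exactly the kind of direction question a spec should settle.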
14:50:33 lchen: well, I have questions and concerns about your blueprint, so indeed I would love a spec
14:50:35 johnthetubaguy, bauzas thanks for the information
14:50:44 yeah, too many questions, lets do a spec
14:50:45 johnthetubaguy: open question about the tree restructure to clean up the v3 bits on disk which confuse folks. What's the best way to get consensus on the final tree structure we should have? I'd like to sort that late this week, early next, then get ploughing through that next week
14:50:52 johnthetubaguy, bauzas sure
14:51:08 mailing list thread, etherpad it, spec?
14:51:14 sdague: discuss on the spec?
14:51:22 edleafe: there isn't a spec
14:51:32 #link https://review.openstack.org/#/c/189218/ ?
14:51:46 oh, apparently there is one that I never saw :)
14:51:50 sdague: good question, maybe a spec is a good idea, with an ML post to point to the spec?
14:51:53 ah, cool
14:52:00 gilliard: thx - you beat me to it
14:52:15 lets add this to the API priority etherpad to track it
14:52:20 yep
14:52:22 ok
14:52:44 sdague: possibly all of the above then
14:52:57 sdague: its one of those things we probably want to warn people about just before we merge it
14:52:59 yeh, sure, I didn't realize edleafe and alex_xu were already running with it
14:53:05 like the unit test thing
14:53:09 all good
14:53:14 so one thing about that
14:53:28 we probably want to remove the plugin/extension idea at the same time?
14:53:35 well not remove the idea
14:53:46 johnthetubaguy: yes, it should be another spec?
14:53:49 but you know, move it to something else
14:53:55 johnthetubaguy: the capability to load the extensions?
14:54:00 yeh, it feels like if we are giving everyone merge conflict hell we should try to do it only once
14:54:00 alex_xu: its just the path, but yes, thats a different spec
14:54:09 sdague: exactly my thinking there
14:54:19 I'm not convinced it's a different spec
14:54:32 I think we want to talk about what we want our api on disk structure to look like eventually
14:54:35 sdague: johnthetubaguy: yeah, the merge conflict problem has been what I've been thinking about the most
14:54:48 sdague: so the remove all the silly extensions and the mechanism can be separate, but yeah, lets get the path correct first time
14:54:53 and it's steps to get there, but we should think about the whole end game
14:54:57 so its possible the move breaks on the config
14:54:59 sdague: true
14:55:01 5 minutes left
14:55:07 so we should probably break the config first
14:55:20 jlvillal: yeah, we are almost done I think
14:55:26 johnthetubaguy: right, removing the extension optionality is another spec, but moving things around should be all part of this one
14:55:33 sdague: +1
14:55:50 so turns out we are probably all thinking the same thing here
14:55:56 but we should agree that on the spec
14:56:03 ...so that means we are done I guess?
14:56:16 * johnthetubaguy waits for tumbleweed to go past
14:56:21 +1 to discussing on the spec
14:56:55 so thanks all, and happy spec review day for tomorrow (ish)
14:57:05 #endmeeting