14:01:45 #startmeeting nova
14:01:46 Meeting started Thu Feb 20 14:01:45 2014 UTC and is due to finish in 60 minutes. The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:47 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:49 The meeting name has been set to 'nova'
14:01:52 hello! who's around?
14:01:58 hi
14:02:03 Mostly.
14:02:14 o/
14:02:38 2.5 people!
14:02:39 :)
14:02:48 there were at least 3 others in -nova
14:03:01 ok, usually a load of lurkers too
14:03:03 onward
14:03:06 #topic general
14:11:07 ok
14:11:07 in general, i'm all ears
14:11:07 did we lose russellb?
14:11:23 #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
14:11:23 icehouse-3 closing in fast
14:11:23 we've passed the deadline for having blueprint code proposed
14:11:23 that was tuesday of this week
14:11:23 feature freeze is march 4
14:11:23 < 2 weeks away
14:11:23 so we should focus on merging blueprints
14:11:23 any questions on the schedule?
14:11:23 #topic sub-teams
14:11:24 the bot is ignoring me, or we're having some irc troubles ...
14:11:24 is this thing on?
14:11:24 * russellb taps on the mic
14:11:24 ow my ears
14:11:24 russellb: I'm seeing topics set and bot stuff
14:11:24 seems i'm back now
14:11:32 wow
14:11:33 what's the last thing you saw from me ...
14:15:30 ~
14:15:30 russellb: you tapped the mic
14:15:30 heh before that
14:15:30 russellb: the general topic tag
14:15:30 but everything just dumped in
14:15:30 reading scrollback
14:15:49 hi
14:15:52 * johnthetubaguy got IRC connected again, and crashes through the door a bit late
14:15:53 is this the nova meeting?
14:15:56 yes, already started with the topic #general
14:15:56 though waiting for you guys to fill the silence here
14:15:59 * johnthetubaguy wonders if IRC is acting funny for other people too
14:16:00 There was a netsplit a short while ago
14:16:00 so probably
14:16:00 i can hear silence
14:16:00 In fact only 10 minutes ago.
14:16:00 * llu-laptop sometimes sees IRC messages get jammed
14:16:02 BobBall: ah yes, I did wonder
14:16:04 * johnthetubaguy raises hand for XenAPI news, but BobBall has all the juicy details
14:16:15 Indeed - but not much point going through it without half the people and possibly without the log bot :)
14:16:15 agreed
14:16:16 ah, hello
14:16:16 14:11 < russellb> what's the last thing you saw from me ...
14:16:16 It just dumped into my screen, with taps on the mic
14:16:16 and the icehouse-3 closing in fast stuff
14:16:17 * johnthetubaguy raises hand for XenAPI
14:16:17 Looks like IRC isn't back working yet
14:16:17 yeah, I guess not
14:16:18 ah
14:16:18 topic setting does seem slowwww
14:16:18 yes this is it, we're starting a bit slow, irc troubles
14:16:18 ok well in general all i had was the release schedule
14:16:18 johnthetubaguy: yes
14:16:18 big net split i think
14:16:18 #topic https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
14:16:18 #undo
14:16:18 #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
14:16:18 so, just wanted to make sure everyone is aware that feature freeze is just under 2 weeks away
14:16:18 and on to sub-teams ...
14:16:18 johnthetubaguy and BobBall: go for it
14:16:18 russellb: you're onto sub-teams now
14:16:18 johnthetubaguy: let's wait for ack from russellb, i think he's gone again
14:16:18 Based on russellb's comments, and mriedem's comments arriving later than ours, this is a 3-way split! the most fun kind.
14:16:18 https://wiki.openstack.org/wiki/XenServer/XenServer_CI
14:16:18 wow
14:16:18 CI is there, processing changes, and was voting for a short time...
14:16:19 omg, now i just got a huge flood
14:16:20 Removing item from minutes:
14:16:35 I've disabled voting now because two things changed - one was a d-g change in how tempest concurrency was calculated
14:16:37 are we back?
14:16:38 but we've worked around that
14:16:46 the second was something we're still digging in to
14:17:00 the result of the second change is that now all tempest full runs are failing with different reasons
14:17:00 mriedem: not sure we are, but this bit is quite one way, so let's go for it
14:17:10 sure
14:17:33 So far all of the failures we've investigated have been traced back to existing bugs - but it's useless to report all of the failures yet
14:17:35 internet sneaker chat....
14:17:37 BobBall: what's the symptom of the second change?
14:17:46 sdague lurks...
14:17:57 anyone out there?
14:18:10 garyk: yes, xenapi CI status is going on
14:18:25 The symptom is just that full tempest used to pass; now it consistently fails with different bugs which all seem to be races. We think it's highly likely to be a configuration thing - but haven't tracked it down.
14:18:41 BobBall: gotcha, ouch
14:18:46 consistently racey
14:18:53 BobBall: nice to hear that the CI is up and running
14:18:54 in different races mriedem :D
14:18:56 yeah, it wasn't so bad yesterday mind
14:19:03 funky
14:19:15 minesweeper also has discovered all kinds of races - bugs actually..
14:19:19 We don't seem to be getting lucky and getting through them all :/
14:19:19 BobBall: johnthetubaguy: could the known buggy tests be excluded for a time to at least stabilize your CI?
14:19:22 BobBall: so based on the fails that you were seeing, I think if you disable volumes tests, all you'd need to do is sort out the issue with that one image test
14:19:38 what sdague said
14:19:48 yeah, seems like a good path
14:19:53 That's part of the fun...
we're using d-g with tox which doesn't have a way to exclude tests ATM
14:20:07 the docker guys were having the same kinds of problems at the meetup last week, high failure rates b/c they were just flooded with fails - they should pare it down and get to a working baseline, then expand
14:20:08 BobBall: you can disable volumes tests with a tempest config variable
14:20:15 oh brilliant sdague!
14:20:21 We'll do that then.
14:20:30 and see what falls out
14:20:37 yeah, start small
14:20:46 otherwise it's overwhelming
14:20:48 matel is also looking at adding a specific exclude list
14:21:04 basically discovering all tests then removing some from the list before getting testr to run them
14:21:13 but disabling volume tests is easy to do
14:21:29 BobBall: do you guys use nova-network?
14:21:32 So I'll do that and we'll see where we get to later today when it's run a few tests.
14:21:35 yes, nova-network
14:22:18 BobBall: you happy with that plan? any more updates?
14:22:33 No more updates
14:22:34 very happy to see the progress
14:22:35 BobBall: do you have the capacity you need to drain the queue now?
14:22:55 Probably john
14:23:22 other sub-team status?
14:23:23 BobBall: cool, more than you had yesterday at least, that's the main thing
14:23:30 i have a concern regarding all of the third party CIs - when there is the end of cycle rush - it is going to be interesting to see how it all works.
14:23:38 * johnthetubaguy raises scheduler hand if no one else
14:23:50 johnthetubaguy: sure, update on your fun scheduler?
14:23:56 scheduler optimization at least
14:24:04 oh right, I wasn't going to talk too much about that
14:24:08 oh :)
14:24:12 thought that's why you raised your hand
14:24:15 but anyways, a little bit of DB caching is going a long way
14:24:18 please review that
14:24:21 heh
14:24:28 yeah, seems doable for icehouse
14:24:29 no, I was at the meeting yesterday
14:24:33 talk about a split
14:24:34 cool
14:24:37 well, I suggested one at least
14:24:42 meeting split?
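As an aside on the two workarounds sdague and matel describe above: tempest of this era let you skip an entire service's tests with a config flag, and the exclude-list idea boils down to filtering the discovered test list before handing it to testr. The sketch below is illustrative, not the team's actual scripts; the filenames are hypothetical, and the testr commands assume testrepository's `list-tests` / `run --load-list` interface.

```shell
# 1) Easy path: skip volume tests entirely via tempest.conf:
#      [service_available]
#      cinder = false
#
# 2) Exclude-list path: list every test, strip known-racy ones, run the rest.
#    Simulated here with printf so the filtering step runs standalone.
printf '%s\n' tempest.api.compute.test_servers \
              tempest.api.volume.test_volumes > all-tests.txt
printf '%s\n' tempest.api.volume > exclude.txt
grep -v -f exclude.txt all-tests.txt > include.txt
cat include.txt
# With a real tempest checkout this would be roughly:
#   testr list-tests | grep -v -f exclude.txt > include.txt
#   testr run --load-list include.txt
```

The grep step is the portable part; anything that produces one test id per line can feed `--load-list`.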
14:24:46 or?
14:24:48 just finding the damn etherpad
14:24:54 oh, scheduler split, not meeting split
14:24:58 ah
14:25:20 #link https://etherpad.openstack.org/p/icehouse-external-scheduler
14:25:28 yeah, and consensus seems to be that even if we don't think gantt is close to ready, we should leave it in openstack/ and not move to stackforge/
14:25:33 because moving repos around is a PITA
14:25:48 ah that's cool
14:25:57 we would only have to move it again
14:26:01 so, the plan
14:26:08 it's at the bottom of that etherpad
14:26:13 basically, like no-db-compute
14:26:15 russellb: will there be time set aside at the summit to speak about gantt?
14:26:17 plan b?
14:26:25 it seems like a topic on its own
14:26:31 garyk: as a part of the nova schedule
14:26:42 let's separate out the obvious bits, then see where we are, basically
14:26:42 a project has to be incubated to get its own time
14:26:58 i think at least one nova session should be devoted to it and then it may split out onto its own track
14:27:14 yeah when we originally talked about this, i think i was assuming we'd have the conductor bits done
14:27:17 ok, it will be an incubated project.
14:27:46 russellb: yeah, sorry about that, it sucks
14:27:53 oh no worries
14:27:58 just a bit of dependency management fail
14:28:17 yeah, it should have been higher priority sooner
14:28:19 oops
14:28:37 but all good, on from here
14:28:49 but the plan does sound reasonable
14:28:52 if we get time, should we push that through?
14:28:58 the conductor bits, that is
14:29:02 sure, would love to if we can
14:29:12 yeah, thinking RPC cleanup
14:29:15 exactly
14:29:27 if we do it before release, we can remove all that code once juno opens
14:29:32 otherwise it's stuck until K
14:29:32 right, well it's bumped a bit higher in a massive pile of doom
14:30:04 * johnthetubaguy feels he is done typing for now, wonders if we will cover the v3 API
14:30:24 yes, we need to cover that
14:30:41 maybe in blueprints
14:30:46 want to talk bugs real quick, then to blueprints
14:30:48 #topic bugs
14:31:07 so, tracy jones has taken the bug czar role now
14:31:14 yeah, bug sub-groups seem to be forming now :)
14:31:18 she put out a message to openstack-dev trying to get an initial team together
14:31:29 sadly there are not too many volunteers
14:31:34 #link http://lists.openstack.org/pipermail/openstack-dev/2014-February/027190.html
14:31:46 more now
14:31:51 but the more the merrier :-)
14:31:54 nice to hear
14:32:05 really need a solid group that can devote an hour to this every week
14:32:09 at least - an hour is all i ask :-)
14:32:14 List is not too tiny right now https://etherpad.openstack.org/p/nova-bug-management
14:32:34 that seems like a good plan, an hour a week
14:32:38 so, if you're interested, please sign up!
14:32:51 i'm quite hopeful that this renewed effort will help improve our bug queue
14:33:06 * mriedem wishes the ML thread was tagged with [nova], never saw it
14:33:20 after feature freeze, we'll all switch focus to bugs, so more on bugs in a couple weeks ...
14:33:25 #topic blueprints
14:33:35 and speaking of czars, we have a blueprint czar!
14:33:40 #link https://etherpad.openstack.org/p/nova-icehouse-blueprint-cull
14:33:41 johnthetubaguy: <--- \o/
14:33:44 russellb: i did have a bugs topic in the agenda, we can do it in open discussion
14:33:50 mriedem: ah sorry
14:34:04 let's come back to it
14:34:12 later i mean :)
14:34:23 i am not sure how animal rights groups are going to deal with the cull
14:34:28 so lots of shifting blueprints into Juno, those without all their patches up
14:34:46 and i probably didn't do a great job of recording all the changes i made on the etherpad, sorry
14:34:53 the deadline for code in review was the 18th. is that correct?
14:35:02 garyk: correct
14:35:12 so, v3 blueprints ...
14:35:14 russellb: yeah, hey ho, me neither
14:35:46 #link http://lists.openstack.org/pipermail/openstack-dev/2014-February/027588.html
14:36:00 yeah, we have two
14:36:08 I mean, to
14:36:20 there are a bunch of v3 blueprints, what should we do with them?
14:36:25 we previously had them prioritized up to Medium
14:36:31 but given the post above, i dropped them to Low
14:36:31 well, low for sure
14:36:36 +1
14:36:46 I wonder about deferring to juno though
14:36:48 should we leave them and just let whatever happens to merge get in?
14:36:56 yeah, dunno, to be honest
14:37:05 or actively defer things, even like this, to help focus on a smaller set?
14:37:07 we could do with a bit more review bandwidth for the other stuff
14:37:14 yeah, that's what i'm worried about
14:37:20 well, I was wondering about the meta issue. Because it seems like we should figure out if we're still going ahead on the v3 front as planned
14:37:22 yeah, it's tempting, feels bad not asking Chris though
14:37:37 yeah, should definitely talk to him
14:37:42 bummer he's not here
14:37:44 sdague: I think it's a question of how to maintain v2
14:37:50 yeah, stupid timezones
14:37:52 and if not, I really wonder about any review time on it.
That also includes the api validation bits
14:38:03 i haven't had a chance to read your post sdague, sorry
14:38:07 because there are a bunch of other things that are only happening on v3
14:38:28 johnthetubaguy: 'how to maintain v2' is the same topic as my bug item...
14:38:32 oh IRC just broke again :(
14:38:38 mriedem: it's a big fella
14:38:40 yeah, silly irc
14:38:55 really messing up our meeting today
14:39:07 ok, so let's see if there's anything else for today, and if not, we can use the rest of the time on the API topic
14:39:16 johnthetubaguy: any other thoughts on blueprints?
14:39:23 yeah, not much really...
14:39:29 please keep them up to date, I guess
14:39:41 but I will try to keep people honest on that, once we get the list down
14:39:43 i had a general discussion item
14:39:46 and we should try to put more review focus on the targeted blueprints that have all code up
14:39:55 I might start un-approving things too, that are stale
14:39:56 etc
14:40:04 johnthetubaguy: +1
14:40:06 johnthetubaguy: that's a good idea
14:40:15 johnthetubaguy: i found some that had code, but have been abandoned for 1 or 2 months ... just deferred
14:40:20 yeah, I will try to get a list together for the patches too, just one tiny error
14:40:28 russellb: likewise, I will try to clean that up
14:40:41 johnthetubaguy: really appreciate the help, it's no small pile of work
14:40:50 cheers for our blueprint czar! :-)
14:40:53 I am seeing how much time it takes, no worries
14:40:57 furry hats for everyone!
14:40:58 anyways, that's all for now
14:41:01 hehe
14:41:14 #topic open discussion
14:41:24 mriedem: what's up
14:41:27 mriedem: http://www.youtube.com/watch?v=fWucPckXbIw&feature=kp
14:41:36 i was just going to bring up jog0's post last night about an oslo sync tema
14:41:38 *team
14:41:42 he's not here i don't think though
14:41:50 was wondering what the feelings are on that
14:41:56 since oslo sync seems to be a contentious issue
14:41:58 #link http://lists.openstack.org/pipermail/openstack-dev/2014-February/027631.html
14:42:22 I don't like the risk right now, we should do it when Juno opens for sure
14:42:32 but maybe we should take the hit anyway, it could fix things
14:42:39 it's a nice fence I am on here
14:42:46 johnthetubaguy: yeah, i think jog0 would say we want to sync before icehouse closes to get any bug fixes
14:42:48 * russellb joins johnthetubaguy on the fence
14:42:49 but yeah, i see both ways
14:42:56 make room
14:43:11 honestly, if we think we have enough time to catch fallout, I'd say damn the torpedoes on this one
14:43:29 at a high level, it seems to be "well if we exclude this and this and this module, auto-sync should be fine"
14:43:29 because we've definitely run into bugs in the past that were entirely because oslo was old
14:43:40 sync, this is the best time to do it I'd say
14:43:45 so basically the assertion is that ad-hoc syncs aren't good enough, and someone should just take ownership of it?
14:43:51 i at least agree that sure, we should sync
14:43:53 let's just sync the lot, and if the gate breaks, revert it?
14:44:03 +1 :)
14:44:09 :)
14:44:10 russellb: by Juno-1 we should be in sync
14:44:18 yeah, reverts are easy, no one touches that code except syncs
14:44:26 +1
14:44:31 well, and jog0 is doing a full sync on cinder as a POC
14:44:39 so he'll probably learn what to trim out there first
14:44:56 oslo folks also seem to be working hard on splitting that code out into libs now
14:45:08 yeah, that's nice, still some bumps to iron out there
14:45:10 they have a big dependency tree between the stuff in there and are working out from the leaves
14:45:14 yup
14:45:16 log_handler
14:45:24 hmm, there is a thought...
14:45:46 we should sync before icehouse ends, else it's gonna make upgrade hard
14:45:52 but there's still the icehouse question
14:45:56 i'm all for syncing for icehouse
14:46:08 and regarding his idea, having someone take ownership of making sure we stay up to date, sure
14:46:09 +1
14:46:09 seems fine
14:46:36 seriously, let's just see if it breaks the gate or not, then see where we are
14:46:54 i'm sure jog0 will be happy to hear this
14:46:57 heh, k :)
14:47:06 so we can catch up more later in -nova when he's up
14:47:12 so, on to the API stuff?
14:47:15 sure
14:47:17 yeah
14:47:24 "now what"
14:47:31 so my topic...
14:47:40 we have two bugs for v2 regarding neutron,
14:47:50 one for the quotas extension and one for the limits extension,
14:48:07 both have patches; in one we proxy to neutron (limits), in the other (proposed last night) we don't
14:48:13 but both are needed to make the tests work in tempest
14:48:28 so if we're going to live with v2 for a while, and support neutron, do we proxy or not?
14:48:40 wellllll
14:48:44 v3 doesn't proxy, which is fine, but this hodge-podge in v2 kind of sucks
14:48:50 my general feeling is proxying sucks
14:48:50 at least regarding anything to do with security groups and floating IPs
14:48:55 depends if info_cache gets updated I guess?
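For context on the oslo sync mechanics discussed a few lines up: in this era each consuming project declared the incubator modules it had copied into its tree in an `openstack-common.conf` file at the repo root, which the incubator's update script read to know what to sync and how to rewrite the package paths. A representative fragment (the specific module names here are illustrative, not nova's actual list):

```ini
[DEFAULT]
# oslo-incubator modules copied into nova/openstack/common/
module=log
module=timeutils
module=excutils
# target package the synced code is rewritten for
base=nova
```

"Taking ownership" of syncs, as jog0's post proposes, amounts to someone regularly re-running the update against this list instead of the ad-hoc per-fix copies being debated.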
14:49:06 for end user facing APIs, i think we have to, for compatibility
14:49:13 for admin APIs, i'm less concerned
14:49:15 right, but v2 is all about the proxy. We have proxy APIs for images, volumes, and network
14:49:15 russellb: that was my feeling
14:49:23 sdague: right
14:49:29 also note that the limits api change is ready
14:49:30 so, i wonder if we should draw the line between cloud user and admin
14:49:32 i've already tested it with tempest
14:49:34 quotas? meh,
14:49:57 but users list their own quotas, or does it not do that?
14:50:06 oh, you're right
14:50:16 in this case i see limits and quotas as being tied together
14:50:29 so if you are going to proxy for one i'd assume (as a user) that you will for the other also
14:50:35 limits and quotas sure sound the same, don't they.
14:50:58 well, and you find out here how they are tied together: https://review.openstack.org/#/c/74839/
14:51:09 and the ugly hack if they aren't
14:51:10 so, what's the question again?
14:51:18 should we proxy for neutron?
14:51:24 the question is for these two bugs for the APIs, do we proxy to neutron or not, yeah
14:51:39 it's consistent at least
14:51:40 also note that in havana, the v2 limits API bug was marked high
14:51:41 i suppose the default answer is always "proxy, or whatever we have to do, to keep APIs working, when reasonable to do so"
14:52:03 so proxy, unless we have a good excuse not to support it
14:52:08 russellb: +1 but nothing in v3, whatever that is
14:52:13 ok, agreed
14:52:20 yeah, all of that for v2 specifically
14:52:49 i wanted to bring this up specifically for the quotas patch since i -1'ed it for inconsistency
14:52:59 thanks :)
14:53:04 that's it for me
14:53:22 so sdague, i guess you were bringing up http://lists.openstack.org/pipermail/openstack-dev/2014-February/027688.html
14:53:26 cool, so general v3 stuff?
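The "proxy to neutron" pattern just agreed on for the v2 quotas/limits bugs can be sketched roughly as below. This is not nova's actual extension code: `NeutronClient` is a hypothetical stub standing in for python-neutronclient, and the quota keys are examples, but the shape is the point - when neutron owns network resources, the v2 handler merges neutron's values into its own response so existing v2 clients keep seeing one combined result.

```python
class NeutronClient:
    """Stand-in stub for the real neutron client; returns canned quota data."""

    def show_quota(self, tenant_id):
        return {"quota": {"floatingip": 10, "security_group": 5}}


def get_quota_set(tenant_id, neutron=None):
    """Build a v2-style quota set, proxying network quotas when neutron is in use."""
    # quotas nova still owns itself (example values)
    quotas = {"instances": 20, "cores": 40}
    if neutron is not None:
        # proxy path: fetch network quotas from neutron and translate
        # neutron's key names into the ones v2 clients expect
        net = neutron.show_quota(tenant_id)["quota"]
        quotas["floating_ips"] = net["floatingip"]
        quotas["security_groups"] = net["security_group"]
    return quotas


if __name__ == "__main__":
    print(get_quota_set("some-tenant", neutron=NeutronClient()))
```

With `neutron=None` (nova-network deployments) the handler just returns nova's own values, which is why the non-proxy patch and the proxy patch can coexist behind one extension.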
14:53:30 russellb: yeah
14:54:11 because with the idea that v2 is around for a while, I think we need to reassess the game plan for the nova API
14:54:22 agree.
14:54:34 with a goal of 1 API with 1 data format in L (or worst case M)
14:54:40 and how do we get there
14:54:43 so you've jumped right to some really hard questions i was a bit afraid to ask (yet at least), heh
14:54:57 well just call me an instigator :)
14:55:13 I kinda think we don't like the current format
14:55:25 so I like the idea to move to a slightly updated one
14:55:27 yeah, it hasn't gone how anyone really expected
14:55:35 if only to add in the tasks support
14:55:49 so tasks ... could we do tasks in v2?
14:56:01 yeah, through headers and stuff, but it sucks
14:56:10 does it suck more than 2 APIs?
14:56:20 fair point
14:56:24 probably not
14:56:35 the inconsistencies in v3 would be nice to remove
14:56:37 not trying to imply an answer, just asking, it's what we have to figure out
14:56:45 if only by some filter that accepts both
14:56:53 we also said we don't like proxy APIs, but if we are talking about v2 largely forever, we need a mechanism to deprecate and remove those
14:57:02 that would really reduce the testing though?
14:57:14 we could unit test that stuff, and leave tempest on the new one
14:57:25 the return values are harder, right
14:57:28 the more I think about what major API revs mean, the more I think it should be a fundamental do-over almost
14:57:29 because the internals on v2 are way more gross and fragile than v3
14:57:33 and deprecating XML is harder
14:57:56 on internals, can we backport internals improvements to v2?
14:58:11 russellb: it's not so much a backport as a gut
14:58:25 re-do the internals improvements on v2 :-/
14:58:35 possibly, but versioning extensions, we could add that at some point I guess
14:58:56 not sure why we couldn't
14:59:03 it's all just time and motivation
14:59:09 it's only the XML clients, and well...
14:59:28 and I think a big piece is first agreeing where we are headed 14:59:34 I guess we are out of time 14:59:42 let's continue on the ML 14:59:44 sdague: +1 14:59:46 because 1 API with 1 data format I think is an important thing to see if we agree on 15:00:00 agreed, we can take to the ML 15:00:06 thanks everyone 15:00:13 #endmeeting