21:01:07 #startmeeting
21:01:08 Meeting started Thu Aug 16 21:01:07 2012 UTC. The chair is vishy. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:36 #topic Role Call
21:01:40 who is here?
21:01:41 o/
21:01:43 me, me, me
21:01:44 moi
21:01:52 Moi
21:01:59 hi
21:02:05 me
21:02:19 o/
21:02:20 welcome everyone
21:02:21 I
21:02:24 lets get started!
21:02:42 #topic F3 milestone-critical bugs review
21:02:43 hi
21:02:53 #link https://launchpad.net/nova/+milestone/folsom-3
21:03:16 so are those all blocking F3 ?
21:03:23 looks like we have three bugs for backport
21:03:47 ttx: i think so
21:04:12 I see one that merged in master and needs to be backported...
21:04:14 dprince: it looks like your fix merged?
21:04:26 (bug 968696)
21:04:27 did it merge after milestone split?
21:04:27 Launchpad bug 968696 in keystone ""admin"-ness not properly scoped" [Low,Confirmed] https://launchpad.net/bugs/968696
21:04:31 the others still need to land
21:04:54 i think you mean bug 925731 ?
21:04:55 Launchpad bug 925731 in nova "GET on key pairs gives 500" [Low,In progress] https://launchpad.net/bugs/925731
21:04:56 the keypairs one halfway landed, there were 2 patches, right dprince?
21:05:07 vishy/russellb: Yes.
21:05:24 ah i see the other one
21:05:24 The presence of one without the other won't break anything though.
21:05:27 I +2d the second one, vishy you +2d last rev
21:05:30 That was a patch series.
21:05:41 cool then
21:05:45 so can probably quickly approve
21:05:53 done
21:06:10 ttx: i will get those into milestone-proposed once they land today
21:06:16 ttx: shouldn't be too hard
21:06:28 vishy: oh, you set bug 968696 to FixCo.. it's probably in the branch already. Will mark FixRel
21:06:28 Launchpad bug 968696 in keystone ""admin"-ness not properly scoped" [Low,Confirmed] https://launchpad.net/bugs/968696
21:06:53 ttx: although i'm not absolutely sure there isn't more we need to do in nova, it is sorta vague
21:07:00 I created a new bug for a specific issue in nova
21:07:15 ttx: https://bugs.launchpad.net/nova/+bug/1037786
21:07:15 Launchpad bug 1037786 in nova "nova admin based on hard-coded 'admin' role" [Critical,Triaged]
21:07:26 ok so that's three bugs as far as F3 is concerned
21:07:29 fixed-released i think is good for the existing one
21:07:44 done
21:07:50 bug 925731
21:07:51 Launchpad bug 925731 in nova "GET on key pairs gives 500" [Low,In progress] https://launchpad.net/bugs/925731
21:07:52 ttx: we have reviews in for all of them, so I think it will be no problem to get those done in the next couple of hours
21:08:13 the second fix is going into master right now, will backport
21:08:17 shall we assign reviewers to each thing to make sure it gets done?
21:08:27 vishy: ok, will push them to milestone-proposed in the morning if you don't complete it by then
21:08:34 (things that still need a review)
21:08:36 ttx: excellent
21:08:46 russellb: I reviewed Mark's patches, they all seem sane
21:08:54 I also looked at eglynn's. That one is the big one
21:08:58 So those are all blocking and in good progress to be fixed today
21:09:09 so if someone wants to go in and give Mark's a quick check
21:09:09 ok, well i'll go through mark's patch series that you looked at
21:09:21 * markmc will look at per-user quota revert
21:09:24 we should probably spend extra time with eglynn's
21:09:27 https://review.openstack.org/11477 needs a +1 to get it over the line (from vek or comstud?)
21:09:51 comstud: can you also keep an eye on that one: ^^
21:09:51 probably worth a few of us looking at it
21:09:57 all: if there is anything that needs to be backported, make sure it's targeted against F3
21:09:57 markmc: agreed
21:09:59 cool
21:10:11 ok ready for FFE discussion?
21:10:14 so that I know I need to hold
21:10:15 yes
21:10:38 #topic Feature Freeze Decisions
21:10:47 #link https://blueprints.launchpad.net/nova/+spec/hyper-v-revival
21:11:00 so that one snuck in just after FF
21:11:13 we need to officially grant an exception or propose a revert
21:11:16 Did we see anything from them before the freeze?
21:11:17 already in, and I think it's not disruptive
21:11:19 It seemed very sudden
21:11:31 mikal: they have been working on it for 6 months
21:11:38 mikal: same with bare metal provisioning
21:11:47 it does seem self-contained, and it's restoring something we took out
21:11:50 vishy: sure, but they haven't been sending patches for six months...
21:11:50 makes sense to me
21:11:53 just in feature branches
21:12:07 FFE is about how much disruption you introduce and how late you do it
21:12:09 vishy, were the feature branches publicised, though?
21:12:17 vishy: ahhh, ok, so it's cause I wasn't paying attention?
21:12:30 mikal: note the wiki: http://wiki.openstack.org/Hyper-V
21:12:54 here it just adds surface and is completed early enough, so +1 from me
21:13:01 they had weekly meetings, etc. Perhaps they could have done a bit more communication on the ML specifically asking for people to look at it.
21:13:16 vishy: yeah, fair points.
21:13:22 vishy: I withdraw my objection
21:13:26 think this makes sense, but a general approach of "develop huge patch on feature branch, propose shortly before feature freeze" isn't workable
21:13:27 touches no core code
21:13:37 yes, this was the day before
21:13:39 that's insane
21:13:42 and it's a giant patch
21:13:52 and it's a revival!
21:13:52 markmc: agreed we need to give people a cutoff for large feature branches that is much earlier
21:14:06 markmc: I'm considering reintroducing a FeatureProposalFreeze
21:14:15 we used to have that
21:14:22 one week before the cut
21:14:28 maybe even 2 weeks ...
21:14:29 ttx, as in, first rev of the patch?
21:14:33 yes
21:14:37 for bigger stuff, need some time to digest it
21:14:41 +1 for two weeks
21:14:57 food for Grizzly
21:15:01 yep
21:15:01 even 2 weeks is pushing it, depending on what the review brings up
21:15:09 I do feel like a 2,000 line patch the day before puts a lot of pressure on reviewers to just say yes
21:15:09 like ... "where should we put the database" types of things.
21:15:17 yep, also need time for regressions to pop up
21:15:24 ttx: I like that... food
21:15:37 Is there any way we could mark Hyper-V as experimental ?
21:15:46 as in... brand new ?
21:15:49 i don't like experimental much :(
21:16:04 ttx: I don't know exactly what that would mean. As in we may not backport fixes?
21:16:09 if it's experimental, it should be baking in another branch probably
21:16:14 ttx: or just warning people that there may be bugs?
21:16:19 no, as in we have no idea if it actually works.
21:16:25 experimental is a nice warning for folks
21:16:39 ttx: how would we mark it, in code?
21:16:40 kernel is a good analogy, stuff gets marked as experimental when it first goes in
21:16:44 ttx: in release notes?
21:16:53 We could do it similar to deprecation warnings...
21:16:55 vishy: we did it before in release notes
21:17:02 vishy, release notes would be good, cfg support in grizzly would be better
21:17:03 in that case we should do the same for tilera and vmware imo
21:17:04 Log a message once per process if enabled.
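[Editor's sketch] The "log a message once per process" idea mentioned above could look roughly like the following. The flag, function, and driver names here are hypothetical illustrations, not anything decided in the meeting or taken from Nova's code:

```python
import logging

LOG = logging.getLogger(__name__)

# Hypothetical stand-in for a cfg option marking a driver experimental;
# a real implementation would presumably read this from configuration.
DRIVER_IS_EXPERIMENTAL = True

_warned = False


def warn_if_experimental(driver_name):
    """Emit the experimental-driver warning at most once per process."""
    global _warned
    if DRIVER_IS_EXPERIMENTAL and not _warned:
        LOG.warning("The %s driver is experimental: it is new in this "
                    "release and has seen limited production use.",
                    driver_name)
        _warned = True
```

Calling this from a driver's constructor would give one warning per service process, much like how deprecation warnings behave.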
21:17:13 can be as simple as "Folsom sees the first release of the new Hyper-V support"
21:17:29 agree on tilera and vmware
21:17:31 +1 for experimental on vmware, hyper-v, bare-metal
21:17:45 we know that libvirt and xenapi are the two that are actively maintained
21:17:53 that is useful information for users
21:17:58 anyway, that's a bit orthogonal to the FFE decision
21:18:03 markmc: I think that makes sense
21:18:07 is hyper-v not going to be actively maintained?
21:18:10 and bare-metal?
21:18:17 yes lets keep going through the FFE stuff
21:18:19 and come back to this
21:18:24 it's kind of making them 2nd class citizens ...
21:18:27 so +1 on hyper-V ?
21:18:28 sorry, on a call
21:18:30 but here now
21:18:45 vishy: ya, will keep an eye on user quota thing
21:18:52 #vote Grant a Feature Freeze Exception to Hyper-v Driver? yes, no, abstain
21:19:01 #vote yes
21:19:04 #vote yes
21:19:04 #vote yes
21:19:05 #startvote Grant a Feature Freeze Exception to Hyper-v Driver? yes, no, abstain
21:19:05 Begin voting on: Grant a Feature Freeze Exception to Hyper-v Driver? Valid vote options are yes, no, abstain.
21:19:06 #vote yes
21:19:07 Vote using '#vote OPTION'. Only your last vote counts.
21:19:08 #vote yes
21:19:11 #vote y
21:19:11 #vote yes
21:19:11 eglynn: y is not a valid option. Valid options are yes, no, abstain.
21:19:11 ffff
21:19:13 #vote yes
21:19:15 #vote yes
21:19:15 #vote yes
21:19:16 #vote yes
21:19:20 forgot the start :)
21:19:20 #vote yes
21:19:26 #vote yes
21:19:26 #vote yes
21:19:37 we don't need to vote :)
21:19:38 #endvote
21:19:39 Voted on "Grant a Feature Freeze Exception to Hyper-v Driver?" Results are
21:19:40 yes (9): ttx, vishy, maoy, eglynn, russellb, lzyeval, mikal, comstud, dprince
21:19:41 no-one was disagreeing
21:19:44 markmc: +1
21:19:53 markmc: i suppose. Just wanted a record
21:19:56 ok next one
21:19:57 but voting is so much fun
21:19:57 vishy, sure :)
21:20:02 And if I -1 it you better convince me rather than outnumber me
21:20:11 #link https://blueprints.launchpad.net/nova/+spec/os-api-network-create
21:20:13 russellb: it makes me feel I've achieved something
21:20:15 russellb: it really is.
21:20:26 network-create can go in with a small tweak to the API
21:20:37 owner seems resistant, tho
21:20:44 maybe I should just jump in and do it
21:20:49 I think the tweak mark wants is simple and code should go in with the change
21:20:59 How useful is this ?
21:21:04 i believe they are resistant because they would have to rewrite their tool
21:21:16 ttx: people can stop using nova-manage to create networks so imo very useful
21:21:30 alright
21:21:37 so it's going to take an exception and a volunteer to finish the code ... :-/
21:21:48 markmc: do you want to make the change?
21:21:56 vishy, yeah, it'll be in the morning tho
21:21:58 I'm fine with a one-week exception.
21:22:02 cool
21:22:04 #info FFE granted to Hyper-V driver
21:22:05 markmc: no hurry
21:22:09 ok lets go with it
21:22:10 * markmc will do it in a week's time then
21:22:17 yay
21:22:23 #info FFE granted to os-api-network-create
21:22:24 markmc: such a team player!
21:22:32 markmc: it won't be in F3, just needs to be in soon [tm]
21:22:40 #link https://blueprints.launchpad.net/nova/+spec/scheduler-resource-race
21:22:45 ttx, you gave me a week, can't take it back! :-P
21:23:01 this one's scary
21:23:02 ok this one was mostly my bad, it has been under review for a while and I just totally missed adding it to the list for people to review.
21:23:32 isn't that a bug ?
21:23:37 just for information, RAX has a patch in to serialize all create requests on the compute node to avoid this race
21:23:56 vishy: doesn't run for SmokeStack...
21:23:56 ttx: it is, but I thought it might need an FFE because it is kind of a large change
21:24:12 * ttx looks
21:24:16 dprince: that is good to know. so it sounds like it is not quite there
21:24:34 ttx, it's a bug fix, but involves a fairly significant arch change
21:24:40 ouch
21:24:42 I really dislike the idea of shipping folsom with a nasty race.
21:24:53 vishy, did essex have it too?
21:25:01 markmc: I believe so
21:25:24 markmc: nodes get overprovisioned in certain situations
21:25:28 vishy, yeah
21:25:35 markmc: I would guess you notice it a lot more if you use fill-first
21:25:40 on the one hand - would be really nice to have it
21:25:44 which most deployers are not using, but rax is.
21:25:54 As a workaround one could always under-allocate the compute hosts right?
21:26:02 on the other hand - seems sane to only take these kinds of risks for regressions vs the last release
21:26:03 guess they should stop that then :-)
21:26:14 russellb: :)
21:26:16 Could we just go with serializing requests for now?
21:26:42 comstud: how nasty is the patch for serializing requests?
21:26:50 vishy: how far is it ?
21:27:13 ttx: from being ready? I thought it was good, but dprince says it breaks smokestack so apparently not quite.
21:27:44 lets give comstud a minute and come back to this one
21:27:57 vishy: I commented on the review about it. It's this, right: https://review.openstack.org/#/c/9402/
21:28:00 #info returning to this one in a bit
21:28:17 dprince: yes
21:28:40 #link https://blueprints.launchpad.net/nova/+spec/general-bare-metal-provisioning-framework
21:28:47 * dprince will dig into it later
21:28:55 so this is definitely not quite ready
21:29:02 I -1ed this one
21:29:15 it impacts existing supposedly-working code
21:29:25 They've put a lot of work into this so it is sad, but I think we have to delay this one
21:29:31 and looks like it could benefit from further discussion
21:29:41 any other opinions?
21:29:44 seems like a case of showing up fairly late, and unfortunately hitting too many issues in review to get resolved in time
21:29:54 grizzly isn't far away, and there's really useful discussion on how it might be re-worked
21:30:00 yeah, so -1 on ffe
21:30:31 even simple stuff like all the binaries it adds to bin/bm_* gives me pause
21:30:41 would be nice to discuss stuff like that, rather than rush it in
21:30:43 #info FFE denied for general-bare-metal-provisioning-framework
21:30:47 yay, one down
21:31:03 #info should be reworked for Grizzly
21:31:17 #link https://blueprints.launchpad.net/nova/+spec/rebuild-for-ha
21:31:29 this is a very useful feature, but it just isn't quite there
21:31:44 yeah, looks like you found a bunch of issues with it
21:32:07 -1 from me. A bit invasive, and looks like it needs a bit too much time
21:32:07 markmc: I was trying to get the functionality without having to modify the driver interface
21:32:18 i'm -0 on this one
21:32:32 i could see -1 or giving it a week to try and get it cleaned up
21:32:36 opinions?
21:32:38 I somewhat like that we use the word 'rebuild' for this BP. 'rebuild' in the OSAPI is destructive.
21:32:39 I don't really like introducing new API calls after F3
21:32:53 since it breaks QA people
21:32:59 dprince: * dislike?
21:33:27 Sorry. Maybe I misphrased that....
21:33:40 -1 ... issues found during review, came in late, only a "nice to have", invasive, ...
21:33:42 ttx: well it is adding a new admin action, but if the patch is small enough then deployers can grab it if they need it.
21:33:53 is anyone +1 ?
21:33:58 -1
21:34:12 The less we have, the more we can focus on the ones we grant
21:34:17 #info FFE denied for rebuild-for-ha
21:34:21 not me. I'm agnostic. -0 on it though.
21:34:47 #info Great feature but not quite ready. Should be shrunk down so it is easy for deployers to cherry-pick it if needed.
21:34:49 dprince, let's keep religious beliefs out of this
21:35:00 #link https://blueprints.launchpad.net/nova/+spec/project-specific-flavors
21:35:11 also seems to be a very useful feature
21:35:19 * dprince is totally misunderstood
21:35:20 almost made it in
21:35:36 markmc: you had some concerns but they didn't seem to be strong enough for you to -1
21:35:37 yeah, +1 - it's got a bunch of good feedback, people seem to want it
21:35:55 vishy, I don't like the API modelling here, but it's consistent with other APIs
21:35:59 Sounds like the typical corner feature, but it touches plenty of core code...
21:36:05 vishy, so, no objection on that front
21:36:12 I kinda like it
21:36:12 * markmc just had a nitpick after a shallow review
21:36:15 I'll let nova-core assess how disruptive it actually is
21:36:27 vishy: how much time does it need ?
21:36:32 eglynn, the feature or the API modelling? :)
21:36:42 I'm +1 on this one. It is an extension but it seems useful
21:36:46 markmc: the feature
21:36:47 vishy: which serializing requests?
21:37:05 vishy: ah, for scheduler. it's easy
21:37:05 comstud: you told me you were serializing build requests to avoid scheduler races
21:37:11 eglynn, thought so :) take a look at add_tenant_access action, could be a subcollection
21:37:15 but the current code is extremely inefficient for large # of instances
21:37:19 because we have to instance_get_all
21:37:22 and add up usage
21:37:24 etc
21:37:33 This new patch eliminates all of that
21:37:46 comstud: so it is both an efficiency problem and a race
21:37:50 correct
21:37:54 comstud: hold a sec we'll get back to that one
21:38:13 ttx: I don't think it will take long
21:38:20 got it (sorry, trying to multitask... on a phone call too)
21:38:23 ttx: just markmc's minor nit and i think it is good to go.
21:38:27 * ttx reverts -1 to +0
21:38:36 ok do we need to vote?
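[Editor's sketch] To illustrate the inefficiency comstud describes, where the scheduler recomputes a host's usage from every instance record on each request versus the patch's approach of keeping a running total. Field and method names here are illustrative only, not Nova's actual code:

```python
# Toy contrast of the two accounting strategies discussed above.

def used_ram_recompute(instances, host):
    """O(number of instances) on every scheduling decision --
    the instance_get_all-and-sum pattern being replaced."""
    return sum(i["ram_mb"] for i in instances if i["host"] == host)


class HostUsage:
    """O(1) per decision: usage is updated as instances come and go."""

    def __init__(self):
        self.used_ram_mb = 0

    def instance_added(self, ram_mb):
        self.used_ram_mb += ram_mb

    def instance_removed(self, ram_mb):
        self.used_ram_mb -= ram_mb
```

Both yield the same answer, but the incremental version avoids fetching and summing every instance record per build request, which is what makes the old pattern slow at a large number of instances.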
21:38:38 * russellb prefers -0
21:38:44 why not get really precise and just use some fractions here?
21:38:51 or are we agreed to give it a week?
21:38:51 russellb: that's you seeing the glass half empty.
21:38:57 one week max
21:39:00 typical me
21:39:06 I want API locked down asap
21:39:13 even extensions
21:39:16 * dprince consulted wife (english major), now understands the context in which agnostic can be used
21:39:33 we're still on project-flavors?
21:39:37 I'm going to mark it granted unless anyone complains
21:39:38 or scheduler-race?
21:39:43 markmc: still on flavors
21:39:44 project-flavors
21:39:47 project flavors
21:39:49 ah, grand
21:39:53 vishy: go for it
21:39:55 +1. not agnostic
21:39:59 heh
21:40:00 #info FFE granted to project-specific-flavors
21:40:11 #info needs to merge ASAP. One week max.
21:40:22 one more before the scheduler race
21:40:30 #info https://blueprints.launchpad.net/nova/+spec/per-user-quotas
21:40:37 so we are reverting that one
21:40:51 yep
21:40:53 do we give it an FFE to get fixed and back in?
21:41:02 or do we ship without it?
21:41:15 how broken is it?
21:41:17 is there a fixed version in progress?
21:41:17 do we have an assignee or an ETA ?
21:41:23 bit strange this, it was in gerrit for 2 months
21:41:23 we have neither
21:41:24 I'm off on vacation for the next two weeks, but would be happy to fix it up after that
21:41:29 too late?
21:41:31 vishy: then -2
21:41:34 eglynn: too late
21:41:43 ooh -2
21:41:55 ttx is using his powers to block this one.
21:42:01 k
21:42:10 per user quotas will have to come back in grizzly
21:42:18 vishy: not time now to look for assignees
21:42:23 unless someone takes it now...
21:42:40 ttx: I was going to toss it back at the original author
21:42:52 hmmm
21:42:58 ttx: but you're right someone in core probably has to own it
21:43:13 anyone feel like owning it?
21:43:20 I'm happy to revert to -1 if someone takes it :)
21:43:26 how about pwning it
21:43:26 is the original author on irc?
21:43:37 anyone had interaction with the author other than gerrit?
21:43:47 (goes to how likely he/she is to turn it around)
21:44:02 unknown
21:44:04 ok
21:44:09 -1 from me too, then
21:44:18 k going with denied
21:44:18 but frankly we have enough exceptions granted not to add uncertain ones.
21:44:31 agreed
21:44:35 #info FFE denied for per-user-quotas
21:44:45 #info needs to be fixed and reproposed for grizzly
21:44:52 every exception we grant is less effort spent on bugfixing
21:44:56 ok back to the other one
21:45:01 comstud: still here?
21:45:05 yes
21:45:14 #link https://blueprints.launchpad.net/nova/+spec/scheduler-resource-race
21:45:36 ttx, that's a point worth repeating
21:45:55 it's not just being sadistic. Though that's an added benefit
21:46:33 so comstud has stated that they want to use this in production, so they will be happy to solve any issues that come up
21:46:58 vishy: So, ya... right now in larger deployments, there's a problem with race conditions in the scheduler... even in single-process nova-scheduler. It's worse if you try to run 2 for HA purposes.
21:46:58 it does seem like a nasty bug
21:47:01 yes, i was about to say, that's the one thing that gives me some confidence that this will get worked out ...
21:47:06 vishy: I'm +0 on this. This is a bug, so it's nova-core's decision to assess if the proposed change is too disruptive at this point in the cycle
21:47:27 I'm +1 for it going in... once it works though...
21:47:38 I'm +1 on this.
21:47:38 my vote would be to put it in, and if we uncover major issues we can still revert
21:47:39 vishy: It's even more of a problem because we need to serialize build requests... and the scheduling of builds is slow due to having to instance_get_all()
21:47:43 So this fixes a couple of problems
21:47:47 basically it's not an FFE thing... It's just a review thing
21:47:53 ok
21:48:05 ttx: so decision is it doesn't need an ffe?
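[Editor's sketch] To make the race concrete: overprovisioning happens because "does this instance fit?" and "record that it was placed" are separate steps, so two concurrent schedulers can both pass the check before either updates the host's state. A minimal toy model, with illustrative names and a plain lock standing in for whatever the actual patch or the RAX serialization workaround does:

```python
import threading


class HostState:
    """Toy model of a host's free RAM as seen by the scheduler."""

    def __init__(self, free_ram_mb):
        self.free_ram_mb = free_ram_mb
        self._lock = threading.Lock()

    def claim_racy(self, ram_mb):
        # Two concurrent schedulers can both pass this check before
        # either decrements, overcommitting the host (fill-first
        # placement makes collisions on the same host more likely).
        if self.free_ram_mb >= ram_mb:
            self.free_ram_mb -= ram_mb
            return True
        return False

    def claim_serialized(self, ram_mb):
        # Serializing the check-and-decrement closes the window
        # (within a single process, at least).
        with self._lock:
            return self.claim_racy(ram_mb)
```

With the serialized claim, a host that has 1024 MB free accepts one 1024 MB instance and rejects the next request; with the racy path under concurrency, both could be accepted.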
21:48:07 vishy: We'll be using this in prod within a week of it landing in trunk...
21:48:21 vishy: and we won't be tolerating bugs with it, so they'll certainly get fixed.
21:48:34 so if we can get it in asap, should have time to have it solid by release ...
21:48:40 correct
21:48:47 every exception we grant is less effort spent on bugfixing
21:48:47 that sounds great
21:48:52 vishy: that's my view... but that doesn't prevent you from discussing if it's wanted at this point
21:48:54 This is one that will get great exercise by us (rackspace)
21:48:58 i.e. the time fixing regressions in this could be spent fixing other things
21:49:27 markmc: I think this one is pretty important so I'm ok with that
21:49:33 markmc: are you -1?
21:49:37 markmc: IMO there are always things to fix. this one is important enough..
21:49:49 if the worst case scenario is that we get Essex behavior... I think it might not be worth it
21:49:56 vishy, just based on instinct, haven't reviewed too carefully, -0
21:50:09 Either way, we'll be running this patch soon
21:50:12 ttx, worst case scenario is regressions
21:50:12 whether it's in trunk or not
21:50:26 I'm thinking worth risking also
21:50:35 comstud: see my comment on the review. I can't boot an instance on XenServer w/ SmokeStack using that patch.
21:50:43 dprince: yep, we need to look @ it
21:50:49 comstud: once it actually works I'm cool w/ it though.
21:50:52 markmc: right, that's worst case scenario if you accept it. Worst case scenario if you don't is.. get Essex behavior
21:51:03 ttx, ah, right - yes
21:51:56 I think the regression risk can be discussed in the review
21:52:06 and be used as a reason to delay it
21:52:12 Note that this has been up for review for months with everyone overlooking it
21:52:15 :-/
21:52:22 comstud, yes, that's sad
21:52:26 not everyone, but
21:52:40 it points out a problem nonetheless
21:52:46 that's a good reason to take the risk actually
21:52:53 our (nova-core's) bad for not reviewing it
21:53:19 vishy: should be tracked as bug, not blueprint ?
21:53:30 i haven't reviewed it, but with comstud saying they'll be using it asap and committing to fixing issues, +1, as it does seem important
21:53:44 (once smokestack issues are resolved of course)
21:53:48 i.e. reject stuff that came in late to teach submitters to submit earlier, accept stuff that came in early and wasn't reviewed to teach reviewers to review earlier :)
21:54:04 heh, bad reviewers.
21:54:17 Yeah, it sounds like RAX is happy to stand behind this one, which makes it feel a lot safer
21:54:18 ttx: there is a bug as well
21:54:23 need more incentive not to review low hanging fruit :)
21:54:23 ttx: i targeted it
21:54:39 oh the shame :(
21:54:48 RAX is committed to making sure it is solid for release
21:54:55 #info scheduler-resource-race is a bug not a feature so it doesn't need a specific FFE. Will be tracked in a bug
21:54:57 whether it's in trunk or not :)
21:55:07 lets move on :)
21:55:12 vishy: I'd advocate removing the blueprint from folsom/f3, and treat it as a normal bug with a normal review.. only with potential regressions in mind :)
21:55:13 party on.
21:55:19 trunkify it!
21:55:32 #topic http://wiki.openstack.org/Meetings/Nova
21:55:48 #topic Exception Needed?
21:55:55 bad copy paste
21:56:29 ttx, isn't that a bit processy?
it's a big change, the blueprint has useful info, seems worth keeping the bp
21:56:50 this merged already, but it involved a minor rpcapi change: https://review.openstack.org/#/c/11379/
21:57:18 markmc: don't really want to set a precedent that large bugfixes ALSO need a blueprint
21:57:18 are we worried about rpcapi changes post-F3?
21:57:23 I guess the question is, do bugfixes that modify rpcapi or add minor features to drivers need an FFE?
21:57:40 ttx, bps for arch changes are worthwhile IMHO
21:57:47 i think rpcapi changes that are backwards compat are safe any time
21:57:52 vishy, not imo
21:58:05 ok good, so lets move on
21:58:12 #topic XML support in Nova
21:58:18 just wanted to update status
21:58:28 dansmith has been doing work to get tempest to have xml tests
21:58:38 w00t!
21:58:40 it is going well but he's having trouble getting people to review
21:58:43 nice
21:58:50 vishy: as long as they are motivated by a bugfix, I'm fine with them
21:59:00 well, there's a bit more
21:59:07 (about previous topic)
21:59:13 Daryl was surprised to see the patches in gerrit today
21:59:14 ttx cool
21:59:19 and I thought I was toast
21:59:35 OTOH removing XML support didn't appear to be at all controversial - maybe we should just do it
21:59:36 * markmc runs
21:59:39 #info small backwards-compatible changes to rpcapi or drivers do not need FFEs
21:59:48 * vishy slaps markmc
21:59:51 heh
21:59:55 but he seems to think that we've got more than he did, or more working, or something.. anyway, he's going to drop his stuff and review ours
22:00:19 so far, we haven't found anything that doesn't work as advertised with the xml interface, FWIW
22:00:19 I also have some work on xml verification going on here: https://review.openstack.org/#/c/11263/
22:01:00 dansmith: good to know. Thanks for doing this.
22:01:05 my stuff is specifically trying to get real tested working samples for api.openstack.org but it has the side effect of actually testing xml end-to-end.
If people like my approach I could use some help extending it to all of the apis and extensions.
22:01:21 vishy: on that subject,
22:01:22 vishy, nice!
22:01:24 dansmith: i think we will start seeing a lot more errors when we get into the extensions
22:01:35 I think addSecurityGroup is missing from both xml and json
22:01:43 vishy: yeah, we're just getting to those
22:01:50 * ttx goes to bed
22:01:55 night ttx
22:01:56 night ttx
22:02:07 night ttx
22:02:08 er, missing from the API examples on the website I mean
22:02:08 'night ttx
22:02:17 #info if anyone wants to help with xml support talk to dansmith or vishy
22:02:18 nighty night ttx
22:02:50 we are basically out of time, so lets jump to the last topic real quick
22:03:06 #topic Bug Stragegy
22:03:20 #topic Bug Strategy
22:03:24 * vishy can't type
22:03:30 personally, I'd like to do more bug triaging and never seem to get to it
22:03:37 we should fix some of them
22:03:42 i've been trying to triage some this past week
22:03:54 it sounds like we have a small set of FFE stuff so that should give us lots of time for bugs
22:03:56 it's exhausting sometimes ... lots of junk issues in there :(
22:04:02 weekly nova bug triage day?
22:04:06 moral support for each other?
22:04:07 identifying dups important too to avoid wasted effort
22:04:20 markmc: yeah, that would have helped. i kept wanting to rant :)
22:04:31 i think the most important thing is finding the important bugs and targeting them
22:04:39 we need to know what is critical for release
22:04:40 Maybe we should be talking more about them in IRC?
22:04:44 Yeah. I'm seeing dups for sure. There were actually 2 bugs filed for the Keypairs API GET thing that just went in today...
22:04:58 vishy, that's what triaging is all about :)
22:04:59 need to get through as many as possible, or searching through gets harder and harder
22:05:31 I'd actually like to see us *jump* on them rather than let them pile up and have a bug fest.
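[Editor's sketch] Circling back to the earlier rpcapi point ("rpcapi changes that are backwards compat are safe any time"): a backwards-compatible change is one where messages from older peers still dispatch cleanly. A hypothetical sketch of the idea, not Nova's actual rpcapi code:

```python
class ComputeManager:
    """Server side of a made-up RPC interface."""

    def reboot_instance(self, instance_id, reboot_type="SOFT"):
        # reboot_type is a later addition with a default, so messages
        # from clients that predate it still work -- the kind of
        # backwards-compatible rpcapi change deemed safe any time.
        return "%s: %s reboot" % (instance_id, reboot_type)


def dispatch(manager, method, **kwargs):
    """Stand-in for the RPC layer: route a message to a manager method."""
    return getattr(manager, method)(**kwargs)
```

An old-style message omitting the new argument and a new-style message carrying it both dispatch to the same method, which is what makes rolling the change out at any point low-risk.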
22:06:05 gotta dig out of the hole sometime though
22:06:13 dprince: that is what the next three weeks are for!
22:06:22 3 weeks, is that all?
22:06:27 cripes
22:06:31 #info nova-core: stop working on features and focus on bugs!
22:06:32 * markmc looks at the schedule again
22:06:45 markmc: might be 5 I can't remember :)
22:07:05 first RC is sept 6
22:07:06 http://wiki.openstack.org/FolsomReleaseSchedule
22:07:10 markmc: although usually grizzly opens after rc1 which distracts people
22:07:10 week of sept 6
22:07:19 geez, it is soon
22:07:33 so wait longer to open grizzly?
22:07:34 cripes
22:07:39 * markmc repeats himself
22:07:46 markmc: I'm ok with that
22:08:41 * markmc directs vishy's "I'm ok with that" to russellb
22:08:52 ooh a proxy
22:09:00 so, closer to final RC?
22:09:13 ok lets revisit that next week once we see how things are shaping up.
22:09:17 more than enough bugs to stay busy until then
22:09:20 k
22:09:31 #info Discuss when to open grizzly at next week's meeting
22:09:34 as long as we have a support group
22:09:37 anything else?
22:09:52 FOCUS YOU BABOONS
22:09:53 it's always a good time to open a grizzly
22:09:56 another productive meeting, thanks!
22:10:04 grizzly is hungry!
22:10:18 #endmeeting