21:01:07 <vishy> #startmeeting
21:01:08 <openstack> Meeting started Thu Aug 16 21:01:07 2012 UTC.  The chair is vishy. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:36 <vishy> #topic Roll Call
21:01:40 <vishy> who is here?
21:01:41 <lzyeval> o/
21:01:43 <markmc> me, me, me
21:01:44 <eglynn> moi
21:01:52 <mikal> Moi
21:01:59 <russellb> hi
21:02:05 <maoy> me
21:02:19 <ttx> o/
21:02:20 <vishy> welcome everyone
21:02:21 <dansmith> I
21:02:24 <vishy> lets get started!
21:02:42 <vishy> #topic F3 milestone-critical bugs review
21:02:43 <dprince> hi
21:02:53 <vishy> #link https://launchpad.net/nova/+milestone/folsom-3
21:03:16 <ttx> so are those all blocking F3 ?
21:03:23 <vishy> looks like we have three bugs for backport
21:03:47 <vishy> ttx: i think so
21:04:12 <ttx> I see one that merged in master and needs to be backported...
21:04:14 <vishy> dprince: it looks like your fix merged?
21:04:26 <ttx> (bug 968696)
21:04:27 <vishy> did it merge after milestone split?
21:04:27 <uvirtbot> Launchpad bug 968696 in keystone ""admin"-ness not properly scoped" [Low,Confirmed] https://launchpad.net/bugs/968696
21:04:31 <ttx> the others still need to land
21:04:54 <vishy> i think you mean bug 925731 ?
21:04:55 <uvirtbot> Launchpad bug 925731 in nova "GET on key pairs gives 500" [Low,In progress] https://launchpad.net/bugs/925731
21:04:56 <russellb> the keypairs one halfway landed, there were 2 patches, right dprince?
21:05:07 <dprince> vishy/russellb: Yes.
21:05:24 <vishy> ah i see the other one
21:05:24 <dprince> The presence of one without the other won't break anything though.
21:05:27 <russellb> I +2d the second one, vishy you +2d last rev
21:05:30 <dprince> That was a patch series.
21:05:41 <dprince> cool then
21:05:45 <russellb> so can probably quickly approve
21:05:53 <vishy> done
21:06:10 <vishy> ttx: i will get those into milestone proposed once they land today
21:06:16 <vishy> ttx: shouldn't be too hard
21:06:28 <ttx> vishy: oh, you set to FixCo bug 968696.. it's probably in the branch already. Will mark FixRel
21:06:28 <uvirtbot> Launchpad bug 968696 in keystone ""admin"-ness not properly scoped" [Low,Confirmed] https://launchpad.net/bugs/968696
21:06:53 <vishy> ttx: although i'm not absolutely sure there isn't more we need to do in nova; it is sorta vague
21:07:00 <vishy> I created a new bug for a specific issue in nova
21:07:15 <vishy> ttx: https://bugs.launchpad.net/nova/+bug/1037786
21:07:15 <uvirtbot> Launchpad bug 1037786 in nova "nova admin based on hard-coded 'admin' role" [Critical,Triaged]
21:07:26 <ttx> ok so that's three bugs as far as F3 is concerned
21:07:29 <vishy> fixed-released i think is good for the existing one
21:07:44 <ttx> done
21:07:50 <ttx> bug 925731
21:07:51 <uvirtbot> Launchpad bug 925731 in nova "GET on key pairs gives 500" [Low,In progress] https://launchpad.net/bugs/925731
21:07:52 <vishy> ttx: we have reviews in for all of them, so I think it will be no problem to get those done in the next couple of hours
21:08:13 <vishy> the second fix is going into master right now, will backport
21:08:17 <russellb> shall we assign reviewers to each thing to make sure it gets done?
21:08:27 <ttx> vishy: ok, will push them to milestone-proposed in the morning if you don't complete it by then
21:08:34 <russellb> (things that still need a review)
21:08:36 <vishy> ttx: excellent
21:08:46 <vishy> russellb: I reviewed Mark's patches, they all seem sane
21:08:54 <vishy> I also looked at eglynn's. That one is the big one
21:08:58 <ttx> So those are all blocking and in good progress to be fixed today
21:09:09 <vishy> so if someone wants to go in and give Mark's a quick check
21:09:09 <russellb> ok, well i'll go through mark's patch series that you looked at
21:09:21 * markmc will look at per-user quota revert
21:09:24 <vishy> we should probably spend extra time with eglynn's
21:09:27 <eglynn> https://review.openstack.org/11477 needs a +1 to get it over the line (from vek or comstud?)
21:09:51 <vishy> comstud: can you also keep an eye on that one: ^^
21:09:51 <markmc> probably worth a few of us looking at it
21:09:57 <ttx> all: if there is anything that needs to be backported, make sure it's targeted against F3
21:09:57 <vishy> markmc: agreed
21:09:59 <eglynn> cool
21:10:11 <vishy> ok ready for FFE discussion?
21:10:14 <ttx> so that I know I need to hold
21:10:15 <ttx> yes
21:10:38 <vishy> #topic Feature Freeze Decisions
21:10:47 <vishy> #link https://blueprints.launchpad.net/nova/+spec/hyper-v-revival
21:11:00 <vishy> so that one snuck in just after FF
21:11:13 <vishy> we need to officially grant an exception or propose a revert
21:11:16 <mikal> Did we see anything from them before the freeze?
21:11:17 <ttx> already in, and I think it's not disruptive
21:11:19 <mikal> It seemed very sudden
21:11:31 <vishy> mikal: they have been working on it for 6 months
21:11:38 <vishy> mikal: same with bare metal provisioning
21:11:47 <markmc> it does seem self-contained, and it's restoring something we took out
21:11:50 <mikal> vishy: sure, but they haven't been sending patches for six months...
21:11:50 <markmc> makes sense to me
21:11:53 <vishy> just in feature branches
21:12:07 <ttx> FFE is about how much disruption you introduce and how late you do it
21:12:09 <markmc> vishy, were the feature branches publicised, though?
21:12:17 <mikal> vishy: ahhh, ok, so its cause I wasn't paying attention?
21:12:30 <vishy> mikal: note the wiki: http://wiki.openstack.org/Hyper-V
21:12:54 <ttx> here it just adds surface and is completed early enough, so +1 from me
21:13:01 <vishy> they had weekly meetings, etc. Perhaps they could have done a bit more communication on the ML specifically asking for people to look at it.
21:13:16 <mikal> vishy: yeah, fair points.
21:13:22 <mikal> vishy: I withdraw my objection
21:13:26 <markmc> think this makes sense, but a general approach of "develop huge patch on feature branch, propose shortly before feature freeze" isn't workable
21:13:27 <russellb> touches no core code
21:13:37 <russellb> yes, this was the day before
21:13:39 <russellb> that's insane
21:13:42 <russellb> and it's a giant patch
21:13:52 <dprince> and its a revival!
21:13:52 <vishy> markmc: agreed we need to give people a cutoff for large feature branches that is much earlier
21:14:06 <ttx> markmc: I'm considering reintroducing a FeatureProposalFreeze
21:14:15 <ttx> we used to have that
21:14:22 <ttx> one week before the cut
21:14:28 <russellb> maybe even 2 weeks ...
21:14:29 <markmc> ttx, as in, first rev of the patch?
21:14:33 <ttx> yes
21:14:37 <russellb> for bigger stuff, need some time to digest it
21:14:41 <vishy> +1 for two weeks
21:14:57 <ttx> food for Grizzly
21:15:01 <markmc> yep
21:15:01 <russellb> even 2 weeks is pushing it, depending on what the review brings up
21:15:09 <mikal> I do feel like a 2,000 line patch the day before puts a lot of pressure on reviewers to just say yes
21:15:09 <russellb> like ... "where should we put the database" types of things.
21:15:17 <eglynn> yep, also need time for regressions to pop up
21:15:24 <dprince> ttx: I like that... food
21:15:37 <ttx> Is there any way we could mark Hyper-V as experimental ?
21:15:46 <ttx> as in... brand new ?
21:15:49 <russellb> i don't like experimental much :(
21:16:04 <vishy> ttx: I don't know exactly what that would mean. As in we may not backport fixes?
21:16:09 <russellb> if it's experimental, it should be baking in another branch probably
21:16:14 <vishy> ttx: or just warning people that there may be bugs?
21:16:19 <ttx> no, as in we have no idea if it actually works.
21:16:25 <markmc> experimental is a nice warning for folks
21:16:39 <vishy> ttx: how would we mark it, in code?
21:16:40 <markmc> kernel is a good analogy, stuff gets marked as experimental when it first goes in
21:16:44 <vishy> ttx: in release notes?
21:16:53 <dprince> We could do it similar to deprecation warnings...
21:16:55 <ttx> vishy: we did it before in release notes
21:17:02 <markmc> vishy, release notes would be good, cfg support in grizzly would be better
21:17:03 <vishy> in that case we should do the same for tilera and vmware imo
21:17:04 <dprince> Log a message once per process if enabled.
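A minimal sketch of what dprince's "log a message once per process" suggestion could look like, in the spirit of deprecation warnings. The names here are illustrative only, not actual nova code:

```python
import logging

LOG = logging.getLogger(__name__)

# Tracks which drivers have already been warned about in this process.
_warned = set()

def warn_experimental(driver_name):
    """Log an experimental-driver warning once per process per driver.

    Returns True if the warning was emitted, False if it was suppressed
    because this driver was already warned about (avoids log spam).
    """
    if driver_name not in _warned:
        _warned.add(driver_name)
        LOG.warning("The %s driver is experimental and may be unstable.",
                    driver_name)
        return True
    return False

assert warn_experimental("hyperv") is True
assert warn_experimental("hyperv") is False  # suppressed on repeat
```

Operators would then see a single warning per service process when an experimental driver (hyper-v, vmware, tilera) is enabled, complementing a note in the release notes.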
21:17:13 <ttx> can be as simple as "Folsom sees the first release of the new Hyper-V support"
21:17:29 <markmc> agree on tilera and vmware
21:17:31 <maoy> +1 for experimental on vmware, hyper-v, bare-metal
21:17:45 <markmc> we know that libvirt and xenapi are the two that are actively maintained
21:17:53 <markmc> that is useful information for users
21:17:58 <ttx> anyway, that's a bit orthogonal to the FFE decision
21:18:03 <vishy> markmc: I think that makes sense
21:18:07 <russellb> is hyper-v not going to be actively maintained?
21:18:10 <russellb> and bare-metal?
21:18:17 <vishy> yes lets keep going through the FFE stuff
21:18:19 <vishy> and come back to this
21:18:24 <russellb> it's kind of making them 2nd class citizens ...
21:18:27 <ttx> so +1 on hyper-V ?
21:18:28 <comstud> sorry, on a call
21:18:30 <comstud> but here now
21:18:45 <comstud> vishy: ya, will keep an eye on user quota thing
21:18:52 <vishy> #vote Grant a Feature Freeze Exception to Hyper-v Driver? yes, no, abstain
21:19:01 <markmc> #vote yes
21:19:04 <russellb> #vote yes
21:19:04 <ttx> #vote yes
21:19:05 <vishy> #startvote Grant a Feature Freeze Exception to Hyper-v Driver? yes, no, abstain
21:19:05 <openstack> Begin voting on: Grant a Feature Freeze Exception to Hyper-v Driver? Valid vote options are yes, no, abstain.
21:19:06 <comstud> #vote yes
21:19:07 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
21:19:08 <maoy> #vote yes
21:19:11 <eglynn> #vote y
21:19:11 <dprince> #vote yes
21:19:11 <openstack> eglynn: y is not a valid option. Valid options are yes, no, abstain.
21:19:11 <russellb> ffff
21:19:13 <eglynn> #vote yes
21:19:15 <lzyeval> #vote yes
21:19:15 <russellb> #vote yes
21:19:16 <mikal> #vote yes
21:19:20 <vishy> forgot the start :)
21:19:20 <ttx> #vote yes
21:19:26 <vishy> #vote yes
21:19:26 <dprince> #vote yes
21:19:37 <markmc> we don't need to vote :)
21:19:38 <vishy> #endvote
21:19:39 <openstack> Voted on "Grant a Feature Freeze Exception to Hyper-v Driver?" Results are
21:19:40 <openstack> yes (9): ttx, vishy, maoy, eglynn, russellb, lzyeval, mikal, comstud, dprince
21:19:41 <markmc> no-one was disagreeing
21:19:44 <ttx> markmc: +1
21:19:53 <vishy> markmc: i suppose. Just wanted a record
21:19:56 <vishy> ok next one
21:19:57 <russellb> but voting is so much fun
21:19:57 <markmc> vishy, sure :)
21:20:02 <ttx> And if I -1 it you better convince me rather than outnumber me
21:20:11 <vishy> #link https://blueprints.launchpad.net/nova/+spec/os-api-network-create
21:20:13 <mikal> russellb: it makes me feel I've achieved something
21:20:15 <dprince> russellb: it really is.
21:20:26 <markmc> network-create can go in with a small tweak to the API
21:20:37 <markmc> owner seems resistant, tho
21:20:44 <markmc> maybe I should just jump in and do it
21:20:49 <vishy> I think the tweak mark wants is simple and code should go in with the change
21:20:59 <ttx> How useful is this ?
21:21:04 <vishy> i believe they are resistant because they would have to rewrite their tool
21:21:16 <vishy> ttx: people can stop using nova-manage to create networks so imo very useful
21:21:30 <ttx> alright
21:21:37 <russellb> so it's going to take an exception and a volunteer to finish the code ... :-/
21:21:48 <vishy> markmc: do you want to make the change?
21:21:56 <markmc> vishy, yeah, it'll be in the morning tho
21:21:58 <ttx> I'm fine with a one-week exception.
21:22:02 <markmc> cool
21:22:04 <vishy> #info FFE granted to Hyper-V driver
21:22:05 <ttx> markmc: no hurry
21:22:09 <vishy> ok lets go with it
21:22:10 * markmc will do it in a week's time then
21:22:17 <russellb> yay
21:22:23 <vishy> #info FFE granted to os-api-network-create
21:22:24 <russellb> markmc: such a team player!
21:22:32 <ttx> markmc: it won't be in F3, just needs to be in soon [tm]
21:22:40 <vishy> #link https://blueprints.launchpad.net/nova/+spec/scheduler-resource-race
21:22:45 <markmc> ttx, you gave me a week, can't take it back! :-P
21:23:01 <markmc> this one's scary
21:23:02 <vishy> ok this one was mostly my bad, it has been under review for a while and I just totally missed adding it to the list for people to review.
21:23:32 <ttx> isn't that a bug ?
21:23:37 <vishy> just for information, RAX has a patch in to serialize all create requests on the compute node to avoid this race
21:23:56 <dprince> vishy: doesn't run for SmokeStack...
21:23:56 <vishy> ttx: it is, but I thought it might need an FFE because it is kind of a large change
21:24:12 * ttx looks
21:24:16 <vishy> dprince: that is good to know. so it sounds like it is not quite there
21:24:34 <markmc> ttx, it's a bug fix, but involves a fairly significant arch change
21:24:40 <ttx> ouch
21:24:42 <vishy> I really dislike the idea of shipping folsom with a nasty race.
21:24:53 <markmc> vishy, did essex have it too?
21:25:01 <vishy> markmc: I believe so
21:25:24 <vishy> markmc: nodes get overprovisioned in certain situations
21:25:28 <markmc> vishy, yeah
21:25:35 <vishy> markmc: I would guess you notice it a lot more if you use fill-first
21:25:40 <markmc> on the one hand - would be really nice to have it
21:25:44 <vishy> which most deployers are not using, but rax is.
21:25:54 <dprince> As a work around one could always under allocate the compute hosts right?
21:26:02 <markmc> on the other hand - seems sane to only take these kind of risks for regressions vs the last release
21:26:03 <russellb> guess they should stop that then :-)
21:26:14 <vishy> russellb: :)
21:26:16 <mikal> Could we just go with serializing requests for now?
21:26:42 <vishy> comstud: how nasty is the patch for serializing requests?
21:26:50 <ttx> vishy: how far is it ?
21:27:13 <vishy> ttx: from being ready? I thought it was good, but dprince says it breaks smokestack so apparently not quite.
21:27:44 <vishy> lets give comstud a minute and come back to this one
21:27:57 <dprince> vishy: I commented on the review about it. Its this right: https://review.openstack.org/#/c/9402/
21:28:00 <vishy> #info returning to this one in a bit
21:28:17 <vishy> dprince: yes
21:28:40 <vishy> #link https://blueprints.launchpad.net/nova/+spec/general-bare-metal-provisioning-framework
21:28:47 * dprince will dig into it later
21:28:55 <vishy> so this is definitely not quite ready
21:29:02 <ttx> I -1ed this one
21:29:15 <ttx> it impacts existing supposedly-working code
21:29:25 <vishy> They've put a lot of work into this so it is sad, but I think we have to delay this one
21:29:31 <ttx> and looks like it could benefit from further discussion
21:29:41 <vishy> any other opinions?
21:29:44 <russellb> seems like a case of showing up fairly late, and unfortunately hitting too many issues in review to get resolved in time
21:29:54 <markmc> grizzly isn't far away, and there's really useful discussion on how it might be re-worked
21:30:00 <russellb> yeah, so -1 on ffe
21:30:31 <markmc> even simple stuff like all the binaries it adds to bin/bm_* gives me pause
21:30:41 <markmc> would be nice to discuss stuff like that, rather than rush it in
21:30:43 <vishy> #info FFE denied for general-bare-metal-provisioning-framework
21:30:47 <ttx> yay, one down
21:31:03 <vishy> #info should be reworked for Grizzly
21:31:17 <vishy> #link https://blueprints.launchpad.net/nova/+spec/rebuild-for-ha
21:31:29 <vishy> this is a very useful feature, but it just isn't quite there
21:31:44 <markmc> yeah, looks like you found a bunch of issues with it
21:32:07 <ttx> -1 from me. A bit invasive, and looks like it needs a bit too much time
21:32:07 <vishy> markmc: I was trying to get the functionality without having to modify the driver interface
21:32:18 <vishy> i'm -0 on this one
21:32:32 <vishy> i could see -1 or giving it a week to try and get it cleaned up
21:32:36 <vishy> opinions?
21:32:38 <dprince> I somewhat like that we use the word 'rebuild' for this BP. 'rebuild' in the OSAPI is destructive.
21:32:39 <ttx> I don't really like introducing new API calls after F3
21:32:53 <ttx> since it breaks QA people
21:32:59 <vishy> dprince: * dislike?
21:33:27 <dprince> Sorry. Maybe I misphrased that....
21:33:40 <markmc> -1 ... issues found during review, came in late, only a "nice to have", invasive, ...
21:33:42 <vishy> ttx: well it is adding a new admin action, but if the patch is small enough then deployers can grab it if they need it.
21:33:53 <vishy> is anyone +1 ?
21:33:58 <russellb> -1
21:34:12 <ttx> The less we have, the more we can focus on the ones we grant
21:34:17 <vishy> #info FFE denied for rebuild-for-ha
21:34:21 <dprince> not me. I'm agnostic. -0 on it though.
21:34:47 <vishy> #info Great feature but not quite ready. Should be shrunk down so it is easy for deployers to cherry pick it if needed.
21:34:49 <markmc> dprince, let's keep religious beliefs out of this
21:35:00 <vishy> #link https://blueprints.launchpad.net/nova/+spec/project-specific-flavors
21:35:11 <vishy> also seems to be a very useful feature
21:35:19 * dprince is totally misunderstood
21:35:20 <vishy> almost made it in
21:35:36 <vishy> markmc: you had some concerns but they didn't seem to be strong enough for you to -1
21:35:37 <markmc> yeah, +1 - it's got a bunch of good feedback, people seem to want it
21:35:55 <markmc> vishy, I don't like the API modelling here, but it's consistent with other APIs
21:35:59 <ttx> Sounds like the typical corner feature, but it touches plenty of core code...
21:36:05 <markmc> vishy, so, no objection on that front
21:36:12 <eglynn> I kinda like it
21:36:12 * markmc just had a nitpick after a shallow review
21:36:15 <ttx> I'll let nova-core assess how disruptive it actually is
21:36:27 <ttx> vishy: how much time does it need ?
21:36:32 <markmc> eglynn, the feature or the API modelling? :)
21:36:42 <vishy> I'm +1 on this one. It is an extension but it seems useful
21:36:46 <eglynn> markmc: the feature
21:36:47 <comstud> vishy: which serializing requests?
21:37:05 <comstud> vishy: ah, for scheduler.  it's easy
21:37:05 <vishy> comstud: you told me you were serializing build requests to avoid scheduler races
21:37:11 <markmc> eglynn, thought so :) take a look at add_tenant_access action, could be a subcollection
21:37:15 <comstud> but the current code is extremely inefficient for large # of instances
21:37:19 <comstud> because we have to instance_get_all
21:37:22 <comstud> and add up usage
21:37:24 <comstud> etc
21:37:33 <comstud> This new patch eliminates all of that
21:37:46 <vishy> comstud: so it is both an efficiency problem and a race
21:37:50 <comstud> correct
21:37:54 <vishy> comstud: hold a sec we'll get back to that one
21:38:13 <vishy> ttx: I don't think it will take long
21:38:20 <comstud> got it (sorry, trying to multitask... on a phone call too)
21:38:23 <vishy> ttx: just markmc's minor nit and I think it is good to go.
21:38:27 * ttx reverts -1 to +0
21:38:36 <vishy> ok do we need to vote?
21:38:38 * russellb prefers -0
21:38:44 <dansmith> why not get really precise and just use some fractions here?
21:38:51 <vishy> or are we agreed to give it a week?
21:38:51 <ttx> russellb: that's you seeing the glass half empty.
21:38:57 <ttx> one week max
21:39:00 <russellb> typical me
21:39:06 <ttx> I want API locked down asap
21:39:13 <ttx> even extensions
21:39:16 * dprince consulted wife (English major), now understands the context in which agnostic can be used
21:39:33 <markmc> we're still on project-flavors?
21:39:37 <vishy> I'm going to mark it granted unless anyone complains
21:39:38 <markmc> or scheduler-race?
21:39:43 <vishy> markmc: still on flavors
21:39:44 <ttx> project-flavors
21:39:47 <russellb> project flavors
21:39:49 <markmc> ah, grand
21:39:53 <ttx> vishy: go for it
21:39:55 <dprince> +1. not agnostic
21:39:59 <markmc> heh
21:40:00 <vishy> #info FFE granted to project-specific-flavors
21:40:11 <vishy> #info needs to merge ASAP. One week max.
21:40:22 <vishy> one more before the scheduler race
21:40:30 <vishy> #info https://blueprints.launchpad.net/nova/+spec/per-user-quotas
21:40:37 <vishy> so we are reverting that one
21:40:51 <eglynn> yep
21:40:53 <vishy> do we give it a FFE to get fixed and back in?
21:41:02 <vishy> or do we ship without it?
21:41:15 <markmc> how broken is it?
21:41:17 <russellb> is there a fixed version in progress?
21:41:17 <ttx> do we have an assignee or an ETA ?
21:41:23 <markmc> bit strange this, it was in gerrit for 2 months
21:41:23 <vishy> we have neither
21:41:24 <eglynn> I'm off on vacation for the next two weeks, but would be happy to fix it up after that
21:41:29 <eglynn> too late?
21:41:31 <ttx> vishy: then -2
21:41:34 <vishy> eglynn: too late
21:41:43 <russellb> ooh -2
21:41:55 <vishy> ttx is using his powers to block this one.
21:42:01 <eglynn> k
21:42:10 <vishy> per user quotas will have to come back in grizzly
21:42:18 <ttx> vishy: not time now to look for assignees
21:42:23 <ttx> unless someone takes it now...
21:42:40 <vishy> ttx: I was going to toss it back at the original author
21:42:52 <ttx> hmmm
21:42:58 <vishy> ttx: but you're right someone in core probably has to own it
21:43:13 <vishy> anyone feel like owning it?
21:43:20 <ttx> I'm happy to revert to -1 if someone takes it :)
21:43:26 <russellb> how about pwning it
21:43:26 <markmc> is the original author on irc?
21:43:37 <markmc> anyone had interaction with the author other than gerrit?
21:43:47 <markmc> (goes to how likely he/she is to turn it around)
21:44:02 <vishy> unknown
21:44:04 <markmc> ok
21:44:09 <markmc> -1 from me too, then
21:44:18 <vishy> k going with denied
21:44:18 <ttx> but frankly we have enough exceptions granted not to add uncertain ones.
21:44:31 <eglynn> agreed
21:44:35 <vishy> #info FFE denied for per-user-quotas
21:44:45 <vishy> #info needs to be fixed and reproposed for grizzly
21:44:52 <ttx> every exception we grant is less effort spent on bugfixing
21:44:56 <vishy> ok back to the other one
21:45:01 <vishy> comstud: still here?
21:45:05 <comstud> yes
21:45:14 <vishy> #link https://blueprints.launchpad.net/nova/+spec/scheduler-resource-race
21:45:36 <markmc> ttx, that's a point worth repeating
21:45:55 <ttx> it's not just being sadistic. Though that's an added benefit
21:46:33 <vishy> so comstud has stated that they want to use this in production, so they will be happy to solve any issues that come up
21:46:58 <comstud> vishy: So, ya... right now in larger deployments, there's a problem with race conditions in the scheduler... even in single process nova-scheduler.  It's worse if you try to run 2 for HA purposes.
21:46:58 <vishy> it does seem like a nasty bug
21:47:01 <russellb> yes, i was about to say, that's the one thing that gives me some confidence that this will get worked out ...
21:47:06 <ttx> vishy: I'm +0 on this. This is a bug, so it's nova-core decisoin to assess if the proposed change is too disruptive at this point in the cycle
21:47:27 <dprince> I'm +1 for it going in... once it works though...
21:47:38 <maoy> I'm +1 on this.
21:47:38 <vishy> my vote would be to put it in, and if we uncover major issues we can still revert
21:47:39 <comstud> vishy: It's even more of a problem because we need to run serializing build requests... and the scheduling of builds is slow due to having to instance_get_all()
21:47:43 <comstud> So this fixes a couple of problems
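A rough sketch of the race being discussed: when host selection reads a shared free-resource view but the claim lands later, two concurrent schedulers can over-provision the same host. An atomic check-and-claim on the compute side avoids it. This is a simplified illustration with invented `Host`/scheduler names, not the actual patch:

```python
import threading

class Host:
    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb
        self._lock = threading.Lock()

    def try_claim(self, ram_mb):
        # Atomic check-and-claim: the check and the decrement happen
        # together, so a concurrent claim can't see stale free RAM.
        with self._lock:
            if self.free_ram_mb >= ram_mb:
                self.free_ram_mb -= ram_mb
                return True
            return False

def racy_schedule(hosts, ram_mb):
    # Non-atomic: pick a host from the view, decrement later. Two
    # concurrent callers can both see room on the same host -- worse
    # with fill-first, which deliberately targets the fullest host.
    best = max(hosts, key=lambda h: h.free_ram_mb)
    if best.free_ram_mb >= ram_mb:
        best.free_ram_mb -= ram_mb  # not synchronized with the check
        return best.name
    return None

def safe_schedule(hosts, ram_mb):
    # Fill-first order: try the host with the least free RAM first,
    # but let the host itself accept or refuse the claim atomically.
    for host in sorted(hosts, key=lambda h: h.free_ram_mb):
        if host.try_claim(ram_mb):
            return host.name
    return None

hosts = [Host("node1", 2048), Host("node2", 512)]
assert safe_schedule(hosts, 2048) == "node1"
assert safe_schedule(hosts, 2048) is None  # no room left: claim refused
```

The serialize-everything workaround mentioned above is the degenerate case of this: one global lock around `racy_schedule`, which closes the race but makes build throughput scale badly.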
21:47:47 <ttx> basically it's not a FFE thing... It's just a review thing
21:47:53 <comstud> ok
21:48:05 <vishy> ttx: so decision is it doesn't need an ffe?
21:48:07 <comstud> vishy: We'll be using this in prod within a week of it landing in trunk...
21:48:21 <comstud> vishy: and we won't be tolerating bugs with it, so they'll certainly get fixed.
21:48:34 <russellb> so if we can get it in asap, should have time to have it solid by release ...
21:48:40 <comstud> correct
21:48:47 <markmc> <ttx> every exception we grant is less effort spent on bugfixing
21:48:47 <maoy> that sounds great
21:48:52 <ttx> vishy: that's my view... but that doesn't prevent you from discussing if it's wanted at this point
21:48:54 <comstud> This is one that will get great exercise by us (rackspace)
21:48:58 <markmc> i.e. the time fixing regressions in this could be spent fixing other things
21:49:27 <vishy> markmc: I think this one is pretty important so I'm ok with that
21:49:33 <vishy> markmc: are you -1?
21:49:37 <maoy> markmc: IMO there is always things to fix. this one is important enough..
21:49:49 <ttx> if the worst case scenario is that we get Essex behavior... I think it might not be worth it
21:49:56 <markmc> vishy, just based on instinct, haven't review too carefully, -0
21:50:09 <comstud> Either way, we'll be running this patch soon
21:50:12 <markmc> ttx, worst case scenario is regressions
21:50:12 <comstud> whether it's in trunk or not
21:50:26 <eglynn> I'm thinking worth risking also
21:50:35 <dprince> comstud: see my comment on the review. I can't boot an instance on XenServer w/ SmokeStack using that patch.
21:50:43 <comstud> dprince: yep, we need to look @ it
21:50:49 <dprince> comstud: once it actually works I'm cool w/ it though.
21:50:52 <ttx> markmc: right, that's worst case scenario if you accept it. Worst case scenario if you don't is.. get Essex behavior
21:51:03 <markmc> ttx, ah, right - yes
21:51:56 <ttx> I think the regression risk can be discussed in the review
21:52:06 <ttx> and be used as a reason to delay it
21:52:12 <comstud> Note that this has been up for review for months with everyone overlooking it
21:52:15 <comstud> :-/
21:52:22 <markmc> comstud, yes, that's sad
21:52:26 <comstud> not everyone, but
21:52:40 <comstud> it points out a problem nonetheless
21:52:46 <markmc> that's a good reason to take the risk actually
21:52:53 <markmc> our (nova-core's) bad for not reviewing it
21:53:19 <ttx> vishy: should be tracked as bug, not blueprint ?
21:53:30 <russellb> i haven't reviewed it, but with comstud saying they'll be using it asap and committing to fixing issues, +1, as it does seem important
21:53:44 <russellb> (once smokestack issues are resolved of course)
21:53:48 <markmc> i.e. reject stuff that came in late to teach submitters to submit earlier, accept stuff that came in early and wasn't reviewed to teach reviewers to review earlier :)
21:54:04 <russellb> heh, bad reviewers.
21:54:17 <mikal> Yeah, it sounds like RAX is happy to stand behind this one, which makes it feel a lot safer
21:54:18 <vishy> ttx: there is a bug as well
21:54:23 <russellb> need more incentive not to review low hanging fruit :)
21:54:23 <vishy> ttx: i targeted it
21:54:39 <dprince> oh the shame :(
21:54:48 <comstud> RAX is committed to making sure it is solid for release
21:54:55 <vishy> #info scheduler-resource-race is a bug not a feature so it doesn't need a specific FFE. Will be tracked in a bug
21:54:57 <comstud> whether it's in trunk or not :)
21:55:07 <vishy> lets move on :)
21:55:12 <ttx> vishy: I'd advocate removing the blueprint from folsom/f3, and treat it as a normal bug with a normal review.. only with potential regressions in mind :)
21:55:13 <comstud> party on.
21:55:19 <russellb> trunkify it!
21:55:32 <vishy> #topic http://wiki.openstack.org/Meetings/Nova
21:55:48 <vishy> #topic Exception Needed?
21:55:55 <vishy> bad copy paste
21:56:29 <markmc> ttx, isn't that a bit processy? it's a big change, the blueprint has useful info, seems worth keeping the bp
21:56:50 <vishy> this merged already, but it involved a minor rpcapi change: https://review.openstack.org/#/c/11379/
21:57:18 <ttx> markmc: don't really want to set a precedent that large bugfixes ALSO need a blueprint
21:57:18 <markmc> are we worried about rpcapi changes posted f-3?
21:57:23 <vishy> I guess the question is, do bugfixes that modify rpcapi or add minor features to drivers need an FFE?
21:57:40 <markmc> ttx, bps for arch changes are worthwhile IMHO
21:57:47 <russellb> i think rpcapi changes that are backwards compat are safe any time
21:57:52 <markmc> vishy, not imo
21:58:05 <vishy> ok good, so lets move on
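The reason backwards-compatible rpcapi changes are considered safe is the client-side versioning convention: a minor version bump adds an optional argument, and callers capped at the old version simply omit it. A hedged sketch of the pattern with invented class and method names (not the real nova rpcapi):

```python
class FakeRpcClient:
    """Stands in for the message bus; records what would be sent."""
    def __init__(self):
        self.sent = []

    def call(self, method, version, **kwargs):
        self.sent.append((method, version, kwargs))
        return kwargs

class ComputeAPI:
    # Bumping 1.0 -> 1.1 adds an *optional* reboot_type argument.
    # A client capped at 1.0 (e.g. during a rolling upgrade) sends the
    # old message shape, so old servers keep working.
    BASE_RPC_API_VERSION = "1.0"

    def __init__(self, client, version_cap="1.1"):
        self.client = client
        self.version_cap = version_cap

    def reboot_instance(self, instance_uuid, reboot_type=None):
        kwargs = {"instance_uuid": instance_uuid}
        # String comparison works here only because versions are
        # single-digit; real code would parse and compare numerically.
        if self.version_cap >= "1.1" and reboot_type is not None:
            kwargs["reboot_type"] = reboot_type  # new in 1.1
            return self.client.call("reboot_instance", "1.1", **kwargs)
        return self.client.call("reboot_instance", "1.0", **kwargs)

client = FakeRpcClient()
ComputeAPI(client).reboot_instance("uuid-1", reboot_type="HARD")
ComputeAPI(client, version_cap="1.0").reboot_instance("uuid-1",
                                                      reboot_type="HARD")
assert client.sent[0][1] == "1.1"
assert client.sent[1] == ("reboot_instance", "1.0",
                          {"instance_uuid": "uuid-1"})
```

Because the old message shape is always still producible, landing such a change after feature freeze does not break deployed services, which is why the meeting agreed no FFE is needed.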
21:58:12 <vishy> #topic XML support in Nova
21:58:18 <vishy> just wanted to update status
21:58:28 <vishy> dansmith has been doing work to get tempest to have xml tests
21:58:38 <markmc> w00t!
21:58:40 <vishy> it is going well but he's having trouble with people giving reviews
21:58:43 <russellb> nice
21:58:50 <ttx> vishy: as long as they are motivated by a bugfix, I'm fine with them
21:59:00 <dansmith> well, there's a bit more
21:59:07 <ttx> (about previous topic)
21:59:13 <dansmith> Daryl was surprised to see the patches in gerrit today
21:59:14 <vishy> ttx cool
21:59:19 <dansmith> and I thought I was toast
21:59:35 <markmc> OTOH removing XML support didn't appear to be at all controversial - maybe we should just do it
21:59:36 * markmc runs
21:59:39 <vishy> #info small backwards compatible changes to rpcapi or drivers do not need FFEs
21:59:48 * vishy slaps markmc
21:59:51 <markmc> heh
21:59:55 <dansmith> but he seems to think that we've got more than he did, or more working, or something.. anyway, he's going to drop his stuff and review ours
22:00:19 <dansmith> so far, we haven't found anything that doesn't work as advertised with the xml interface, FWIW
22:00:19 <vishy> I also have some work on xml verification going on here: https://review.openstack.org/#/c/11263/
22:01:00 <dprince> dansmith: good to know. Thanks for doing this.
22:01:05 <vishy> my stuff is specifically trying to get real tested working samples for api.openstack.org but it has the side effect of actually testing xml end-to-end. If people like my approach I could use some help extending it to all of the apis and extensions.
22:01:21 <dansmith> vishy: on that subject,
22:01:22 <markmc> vishy, nice!
22:01:24 <vishy> dansmith: i think we will start seeing a lot more errors when we get into the extensions
22:01:35 <dansmith> I think addSecurityGroup is missing from both xml and json
22:01:43 <dansmith> vishy: yeah, we're just getting to those
22:01:50 * ttx goes to bed
22:01:55 <vishy> night ttx
22:01:56 <markmc> night ttx
22:02:07 <dprince> night ttx
22:02:08 <dansmith> er, missing from the API examples on the website I mean
22:02:08 <comstud> 'night ttx
22:02:17 <vishy> #info if anyone wants to help with xml support talk to dansmith or vishy
22:02:18 <russellb> nighty night ttx
22:02:50 <vishy> we are basically out of time, so lets jump to the last topic real quick
22:03:06 <vishy> #topic Bug Strategy
22:03:30 <markmc> personally, I'd like to do more bug triaging and never seem to get to it
22:03:37 <russellb> we should fix some of them
22:03:42 <russellb> i've been trying to triage some this past week
22:03:54 <vishy> it sounds like we have a small set of FFE stuff so that should give us lots of time for bugs
22:03:56 <russellb> it's exhausting sometimes ... lots of junk issues in there :(
22:04:02 <markmc> weekly nova bug triage day?
22:04:06 <markmc> moral support for each other?
22:04:07 <eglynn> identifying dups important too to avoid wasted effort
22:04:20 <russellb> markmc: yeah, that would have helped.  i kept wanting to rant :)
22:04:31 <vishy> i think the most important thing is finding the important bugs and targeting them
22:04:39 <vishy> we need to know what is critical for release
22:04:40 <mikal> Maybe we should be talking more about them in IRC?
22:04:44 <dprince> Yeah. I'm seeing dups for sure. There were actually 2 bugs filed for the Keypairs API GET thing that just went in today...
22:04:58 <markmc> vishy, that's what triaging is all about :)
22:04:59 <russellb> need to get through as many as possible, or searching through gets harder and harder
22:05:31 <dprince> I'd actually like to see us *jump* on them rather than let them pile up and have a bug fest.
22:06:05 <dprince> gotta dig out of the hole sometime though
22:06:13 <vishy> dprince: that is what the next three weeks is for!
22:06:22 <markmc> 3 weeks, is that all?
22:06:27 <markmc> cripes
22:06:31 <vishy> #info nova-core: stop working on features and focus on bugs!
22:06:32 * markmc looks at the schedule again
22:06:45 <vishy> markmc: might be 5 I can't remember :)
22:07:05 <markmc> first RC is sept 6
22:07:06 <markmc> http://wiki.openstack.org/FolsomReleaseSchedule
22:07:10 <vishy> markmc: although usually grizzly opens after rc1 which distracts people
22:07:10 <markmc> week of sept 6
22:07:19 <russellb> geez, it is soon
22:07:33 <russellb> so wait longer to open grizzly?
22:07:34 <markmc> cripes
22:07:39 * markmc repeats himself
22:07:46 <vishy> markmc: I'm ok with that
22:08:41 * markmc directs vishy's "I'm ok with that" to russellb
22:08:52 <russellb> ooh a proxy
22:09:00 <russellb> so, closer to final RC?
22:09:13 <vishy> ok lets revisit that next week once we see how things are shaping up.
22:09:17 <russellb> more than enough bugs to stay busy until then
22:09:20 <russellb> k
22:09:31 <vishy> #info Discuss when to open grizzly at next weeks meeting
22:09:34 <russellb> as long as we have a support group
22:09:37 <vishy> anything else?
22:09:52 <markmc> FOCUS YOU BABOONS
22:09:53 <comstud> it's always a good time to open a grizzly
22:09:56 <russellb> another productive meeting, thanks!
22:10:04 <dprince> grizzly is hungry!
22:10:18 <vishy> #endmeeting