21:00:49 <ttx> #startmeeting
21:00:50 <openstack> Meeting started Tue Mar 29 21:00:49 2011 UTC.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
21:01:02 <ttx> Welcome everyone to our weekly meeting...
21:01:10 <ttx> Today's agenda:
21:01:14 <ttx> #link http://wiki.openstack.org/Meetings
21:01:30 <ttx> No actions from last week...
21:01:38 <ttx> Moving directly to:
21:01:41 <ttx> #topic Current release stage: QA
21:01:57 <ttx> We are 9 days away from GammaFreeze, which will happen late on April 7.
21:02:13 <ttx> Like for the Bexar release, I just pushed back the Gamma milestone by two days so as to maximize the open bugfixing time...
21:02:29 <ttx> before we switch to release-targeted bugfixing for the last 3 days before release.
21:02:29 <jaypipes> cool with me... we need it.
21:02:41 <ttx> Worked well last time
21:02:57 <ttx> #info Until GammaFreeze, all bugfix branches are accepted, so the idea is to go and fix as many of them as you can.
21:03:17 <ttx> #info We no longer accept feature branches: core teams should reject proposed branches that add a new feature or change default (non-buggy) behavior, unless they have a standing exception.
21:03:48 <ttx> #topic Cactus Release status
21:04:02 <ttx> All the targeted features that had feature freeze exceptions got recently merged.
21:04:20 <ttx> We have two feature branches left with FeatureFreeze exceptions that expire tonight, I'd like us to review them to see if that should be extended.
21:04:34 <ttx> Please consider that the cost of a standing feature freeze exception is double:
21:04:45 <ttx> it diverts review resources from bugfixing and bugfix reviews...
21:04:57 <ttx> and it introduces instability in the code base, hindering testing.
21:05:10 <ttx> Both result in a less stable release. The longer the exception stands, the higher the cost.
21:05:24 <ttx> First BMP to consider is: https://code.launchpad.net/~justin-fathomdb/nova/volumes-api/+merge/54464
21:05:41 <ttx> This is still under heavy discussion, so it doesn't seem close to merging...
21:05:56 <ttx> My opinion is that it's too late for this release and should now be deferred. Opinions ?
21:06:07 <anotherjesse> ttx: why doesn't it seem close to merging
21:06:17 <anotherjesse> ttx: I thought the needs fixings were for style issues
21:06:21 <ttx> anotherjesse: it has a disapprove, no approve.
21:06:23 <justinsb> I don't think we can look customers in the eye and tell them to use the OpenStack API if it doesn't support volumes
21:06:51 <justinsb> I believe the only remaining issue is the Disapprove, which is about whether it should go in to Cactus or not
21:06:55 <anotherjesse> justinsb: I agree - sucks to have to tell customers they have to use ec2 api
21:07:14 <ttx> justinsb: when do you think it would be ready for merging ?
21:07:21 <vishy> this may be a general problem with time based releases, but I think we aren't offering much to users in cactus without a more complete openstack api
21:07:29 <justinsb> ttx: It is ready
21:07:45 <justinsb> ttx: IMHO :-)
21:07:48 <ttx> justinsb: so you think it can make it tonight ?
21:07:55 <pvo> ttx: agree we need to get the api finished
21:08:01 <ttx> justinsb: in which case it doesn't need an extension :)
21:08:07 <alekibango> +1
21:08:30 <justinsb> I personally think it should not be an extension
21:08:37 <justinsb> But I had given up hope of that
21:08:40 <jaypipes> vishy, all: the goal of cactus was *parity with the OS API 1.0*. and testing. this isn't in the goals of cactus.
21:08:52 <soren> vishy: Well, this release was supposed to be about stability and deployability. I do think we made great strides in those areas.
21:08:56 <ttx> justinsb: I mean, a feature freeze exception extension.
21:09:00 <justinsb> Oops... sorry, overload of 'extension' word - sorry!
21:09:22 <anotherjesse> jaypipes: we had assumed that the api when released would include volumes
21:09:28 <anotherjesse> but it came so late and didn't
21:09:35 <jaypipes> anotherjesse: who assumed what? I didn't assume that.
21:10:37 <jaypipes> I'd like to make it clear that I don't think the volumes code is bad. just that it isn't for cactus, IMO. Nothing that was ever said about the goals of this release would point to including volumes. Period.
21:10:50 <vishy> soren, jaypipes: Customers are actually trying to deploy cactus.  If we push out cactus now, we're basically telling them they have to use ec2_api if they want volumes.  That makes me sad, but I don't have a good suggestion aside from delaying the release.
21:11:11 <justinsb> vishy: I don't think we need to delay the release
21:11:19 <justinsb> I think I can resolve any style issues in a few hours
21:11:26 <justinsb> It's just about whether volumes goes in or not
21:11:28 <ttx> I certainly won't delay the release on an unplanned feature.
21:11:33 <justinsb> That I can't resolve easily
21:11:36 <jaypipes> vishy: believe me, it makes me sad, too. But if volumes had a blueprint that was Essential, and people made it a priority, it would have already been in there. but it wasn't and isn't.
21:11:55 <justinsb> So the question is: Is the OpenStack API the one we expect customers to use, or is the EC2 API?
21:11:57 <ttx> I might consider extending the merging deadline for this one if there is consensus that it should be part of cactus.
21:12:20 <sandywalsh> gotta have OS API
21:12:28 <jk0> OS API
21:12:32 <jaypipes> sandywalsh: volumes is not in the OS API. Period.
21:12:53 <justinsb> No volumes => customers that want to use volumes should use the EC2 API though
21:13:06 <justinsb> So if we want customers to use the OS API => we need volumes in there
21:13:13 <anotherjesse> justinsb: ++
21:13:19 <ttx> justinsb: that's a failure in the API definition I guess...
21:13:38 <tr3buchet> so in a few hours volumes can be in OS api?
21:13:39 <justinsb> I thought packaging it as an extension was a nice compromise
21:14:02 <justinsb> tr3buchet: Yes, particularly if I sacrifice my Java principles :-)
21:14:14 <tr3buchet> sounds like a good compromise!
21:14:20 <ttx> justinsb: how much more time do you expect to need to get it merged ?
21:14:36 <justinsb> I didn't see Dan's review until just now
21:14:42 <jaypipes> gah..
21:14:45 <justinsb> But I can probably resolve the issues today
21:14:49 <justinsb> Other than "should it go in"
21:14:56 <tr3buchet> if a few hours is the difference between recommending the OS api instead of the ec2 api, i think it's an easy decision
21:15:42 <ttx> justinsb: the FFe still stands until the end of today.
21:15:46 <_cerberus_> I'm with jaypipes. I think allowing it would set a precedent we don't want.
21:15:47 <ttx> I won't take it back :)
21:16:10 <justinsb> Can we try to resolve the "should it go in" question now pls?
21:16:22 <justinsb> That's the real barrier, not the style stuff
21:16:35 <vishy> I vote for should go in
21:16:46 <justinsb> I vote in, obviously :-)
21:16:59 <dendrobates> _cerberus_: what is that precedent?
21:17:05 <tr3buchet> yeah i was just about to ask that
21:17:07 <jaypipes> this is exactly why the process of documenting blueprints, prioritizing those blueprints and discussing them in the public square is so critical... so we don't get into just this kind of situation where "oh, a customer wants this, so we *have* to put this feature in even though it doesn't relate to the stated goals of the release".
21:17:23 <_cerberus_> dendrobates: we'd be opening the gates for anyone to argue their feature belongs in the release?
21:17:35 <jaypipes> _cerberus_: exactly what I've been trying to say.
21:17:35 <soren> _cerberus_: ++
21:17:54 <jaypipes> we go from a stated process to "release by marketing".
21:17:59 <_cerberus_> Yep
21:18:04 <soren> This discussion belongs months ago.
21:18:05 <anotherjesse> jaypipes: volumes api isn't a marketing feature
21:18:06 <justinsb> Isn't this just an oversight though?  Does anyone really not think volumes belongs in the OS API?  (If we were in a world without blueprints etc)
21:18:21 <vishy> jaypipes, _cerberus_: who is this project for?
21:18:23 <tr3buchet> this branch was already granted FFe?
21:18:30 <jaypipes> anotherjesse: that's precisely what it is. you just said "how do we convince people to use OS API without it". sorry, that's marketing.
21:18:44 <anotherjesse> jaypipes: it is core functionality
21:18:55 <justinsb> I don't think it's marketing... marketing will tell them to use the OS API no matter what :-)
21:18:56 <jaypipes> anotherjesse: if it is, it should have been voted as Essential.
21:18:58 <dendrobates> I think it should come down to risk and impact
21:19:02 <justinsb> It's our job to make sure it works :-)
21:19:07 <anotherjesse> we've been asking since the first design summit how to move to openstack api
21:19:17 <ttx> dendrobates: it's low-risk, which is why I granted an exception, expiring tonight
21:19:18 <_cerberus_> vishy: customers, certainly. What I'm saying is we should take the time to learn from our poor planning, not shovel things in
21:19:23 <anotherjesse> and were told to wait for the openstack 1.1 api
21:19:29 <anotherjesse> which came too late
21:19:30 <pvo> isn't this irrelevant if justinsb gets it in?
21:19:39 <vishy> _cerberus_: agreed, planning on this feature was botched
21:19:47 <pvo> vishy: don't disagree there either
21:19:49 <justinsb> pvo: The barrier to getting it in is whether we can agree that it belongs in
21:19:49 <sandywalsh> and, it's an extension so it's OS 1.1 anyway
21:19:54 <ttx> ok everyone ---
21:19:59 <tr3buchet> i think if he was granted the FFe, he's got until tonight.
21:20:07 <dendrobates> I don't think it has to set a precedent, if we don't let it
21:20:21 <jaypipes> vishy: if you're trying to imply that I don't think OpenStack is for its users, you're off target. I'm trying to get us to think about releases in a professional, consistent manner, so that our users can say "OK, I understand what this release is about, and I understand what is coming in the next one".
21:20:43 <jbryce> i think justinsb is right when he said it was oversight. many (including some devs in this meeting) thought the new api would be able to expose the underlying features of the system
21:21:10 <anotherjesse> jbryce: ++
21:21:14 <vishy> _cerberus_, jaypipes: we should definitely improve the process, but blocking a feature that makes the project usable, because it wasn't in the planning and priorities is silly.
21:21:48 <jaypipes> vishy: silly? hmm. no, I think that's what consistency is, and is the whole point of consistent, train-based, timed releases.
21:22:00 <ttx> So, from a release management perspective, this feature can land until EOD today. If it passes the nova-core reviews and gets in, it's fine by me
21:22:09 <dendrobates> If we resolve not to allow it again and discuss process at the summit, I don;t think letting this in would do any harm
21:22:12 <jaypipes> vishy: delaying for some feature is what defines marketing-driven releases or releases of software for a single user or few users.
21:22:13 <pvo> ttx: EOD what timezone?
21:22:19 <tr3buchet> this conflict ensures allowing it doesn't set a precendent. It should have been discussed months ago, but wasn't. here's our chance to rectify that.
21:22:20 <ttx> EOD Hawaii time
21:22:24 <eday> besides volumes, what about the other holes in OS API? shared IPs haven't made it in yet either, so we still can't support all core functionality via OS. Any other features?
21:23:02 <eday> it may be pointless to try to get volumes if other parts are not yet supported in OS, and we need to push ec2 anyways
21:23:03 <vishy> eday: floating_ips are not in, shared_ip groups are a bit moot
21:23:15 <soren> eday: Shared IP's as in EC2 style floating ip's or as in cloud servers shared ip groups?
21:23:15 <vishy> eday: I think they are only needed for rackspace
21:23:17 <ttx> justinsb: but given that it was very unplanned I won't grant an extension, so if it doesn't make it today... it's out
21:23:22 <ttx> justinsb: is that fair ?
21:23:44 <justinsb> ttx: I think the real issue isn't the FFE
21:23:52 <justinsb> ttx: It'll either make it today or not at all
21:23:56 <jaypipes> I'm willing to lift my Disapprove on volumes in favour of a community consensus to push it through (because I actually do believe in a democratic process). I've said my piece on this and will continue to argue it in the future and at the summit.
21:23:57 <eday> soren: floating IPs, what the core supports (we don't have shared ip groups, RS, in any way yet)
21:23:59 <justinsb> ttx: So I think the FFE point is moot
21:24:13 <ttx> justinsb: ack.
21:24:14 <justinsb> jaypipes: That would be great, thank you
21:24:19 <soren> eday: I know. That's why I wondered :)
21:24:24 <anotherjesse> the real issue was that we still don't have ownership of the api
21:24:35 <anotherjesse> we didn't get it until a couple of weeks ago - and didn't know it was a gap
21:24:40 <ttx> anotherjesse: soren pushed a summit discussion in that area
21:24:43 <tr3buchet> jaypipes: i think this means we definitely need to improve the process
21:24:52 <ttx> anotherjesse: I agree the API definition should be more dev-centric
21:24:53 <eday> also, key pair management isn't in OS API yet either, correct?
21:24:58 <vishy> soren: good, we need to get api sorted out
21:25:13 <vishy> eday: i think there is already a feature for uploading a key in os api?
21:25:22 <jaypipes> eday: no, though justinsb has some stuff in the works on that I believe for Diablo.
21:25:24 <soren> Yeah, the API definition process is fundamentally broken, IMO.
21:25:30 <justinsb> eday: I think no keypairs, but you can upload the authorized_keys file instead
21:25:34 <vishy> soren: +1
21:25:40 <jaypipes> soren: ya.
21:25:43 <alekibango> i would love to have some smarter limits in api... allowing to contract resource partitioning....
21:25:52 <ttx> ok, can we switch to the next one before I fall asleep ?
21:26:04 <eday> so, even with volumes, what's the point of pushing for the volumes branch now? OS API still has many holes that won't help replace ec2 for cactus
21:26:06 <tr3buchet> i hear that alekibango
21:26:09 <tr3buchet> :)
21:26:16 <ttx> The rest of the discussion can happen on the branch review
21:26:20 <eday> unless we file all those as bugs and get busy :)
21:26:25 <anotherjesse> ttx: any chance that lp:nova/diablo will be opened sooner?
21:26:40 <_cerberus_> eday: +1
21:26:50 <tr3buchet> yep i agree with eday's stance here as well
21:26:55 <soren> anotherjesse: I'd be against that.
21:26:55 <ttx> anotherjesse: no. I don't want everyone to switch to diablo development and leave all those bugs in Cactus
21:27:07 <soren> Yeah, what ttx said.
21:27:09 <ttx> anotherjesse: unless you volunteer to fix all of them.
21:27:12 <jbryce> ttx: +1
21:27:28 <ttx> anotherjesse: that doesn't prevent you from having your own staging branch
21:27:38 <anotherjesse> ttx: yep
21:27:39 <soren> Fewer excuses to spend time on not fixing bugs is a win.
21:27:53 <ttx> ok next branch !
21:27:58 <ttx> https://code.launchpad.net/~sleepsonthefloor/nova/vnc_console/+merge/54805
21:28:11 <ttx> That one looks slightly less conflictual, but it also looks like it might take a few days before it can land. Opinions ?
21:28:22 <ttx> will it make it by EOD ?
21:28:24 <alekibango> maybe we should do releases weekly and monthly... :)
21:28:28 <vishy> ttx: seems pretty close to me
21:28:30 <alekibango> and daily... :)
21:28:33 <dabo> 'conflictual'?
21:28:34 <termie> i've been reviewing it with anthony, i think it will land today
21:28:48 <termie> very self contained only minor cleanups left
21:28:54 <ttx> dabo: hmm, "generating conflicts" ?
21:29:00 <vishy> if it doesn't make it today, i don't really see the need for extending
21:29:06 <ttx> dabo: "creating arguments" ?
21:29:07 <jlmjlm> contentious
21:29:14 <ttx> jlmjlm: oh, that one.
21:29:14 <dabo> ttx - I like it - gonna steal it!
21:29:15 <jaypipes> I haven't seen any answer to dragondm's questions on that.
21:29:21 <dragondm> true.
21:29:21 <ttx> dabo:  must be french
21:29:40 <jaypipes> dragondm: perhaps they are in the github comments? ...
21:29:43 <ttx> vishy: I'm fine with that.
21:29:54 <dragondm> ?
21:30:05 <jaypipes> dragondm: https://github.com/termie/nova/pull/1
21:30:08 <ttx> vishy: in today or out. I'm cool with that.
21:30:31 <ttx> We also have one that didn't file an exception, but is proposed for merging nevertheless:
21:30:37 <ttx> https://code.launchpad.net/~xtoddx/nova/disable_creds/+merge/54453
21:30:45 <ttx> I'm split on this one... on one hand I should just reject it now, since it's not even filed.
21:30:56 <ttx> On the other hand it's quite simple, and could be seen as a security fix rather than a feature. Opinions ?
21:31:33 <jaypipes> xtoddx: ?
21:32:01 <vishy> i don't think there is any vital reason for that to go into cactus
21:32:08 * jaypipes would almost be inclined to file a bug like "No way to revoke creds" for that one...
21:32:09 <anotherjesse> either way ...
21:32:14 <jaypipes> vishy: ya..
21:32:55 <ttx> ok, works for me. Should be a bug.
21:33:34 <ttx> In other news, the Nova stabilization effort:
21:33:38 <eday> jaypipes: uh, github pull request comments? since when are we doing reviews there? :)
21:33:43 <ttx> Last week we had 31 bugs opened and 29 fixes committed
21:33:48 <ttx> eday: don't start him :)
21:33:48 <jaypipes> eday: ask termie.
21:33:58 <vishy> :)
21:33:59 <termie> eday: anthony and i were testing it out
21:33:59 <ttx> This week we had 65 bugs opened (!) and 27 fixes committed
21:34:20 <ttx> So now the focus is on fixing those bugs. For nova, going to http://tinyurl.com/nova-bugs is a good start.
21:34:21 <termie> eday: so i could make informed statements about it
21:34:39 <ttx> I'll send a few "let's focus on this list" emails this week so that we get the most obnoxious ones fixed for Cactus.
21:34:53 <ttx> Any last comments before we switch to the next topic ?
21:35:32 <anotherjesse> ttx: I'm really close to actually having the staging environment for smoketesting
21:35:59 <alekibango> anotherjesse: how are u making it? is it described somewhere?
21:36:01 <anotherjesse> ttx: took a really long time to get the boxes up - almost done with ipmi/preseed/... to format between each build
21:36:20 <anotherjesse> alekibango: writing up as I go - it is being abstracted from the testing environment we set up for NASA that runs after each commit
21:36:22 <alekibango> anotherjesse: would you like me to help you  to automate this using FAI ?
21:36:22 <ttx> anotherjesse: sounds cool, you would trigger it after each merge ?
21:36:52 <ttx> maybe a topic for open discussion once we are done with the agenda topics
21:36:54 <anotherjesse> ttx: the goal is each combination of: commit+reference architecture (hypervisor, api, ...)
21:36:57 <anotherjesse> k
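[editor's note: a toy sketch of the "each combination of commit + reference architecture" matrix anotherjesse describes above. The hypervisor and network-model lists, function names, and job-label format are all hypothetical, purely to illustrate the cross-product idea.]

```python
# Toy sketch of the smoke-test matrix: each commit is tested against every
# reference architecture, i.e. the cross product of hypervisor and network
# model. All names below are illustrative, not real nova/Jenkins config.
from itertools import product

HYPERVISORS = ["kvm", "xenserver", "hyper-v"]    # hypothetical coverage list
NETWORK_MODELS = ["flat", "flatdhcp", "vlan"]    # hypothetical coverage list

def reference_architectures(hypervisors, network_models):
    """All (hypervisor, network_model) combinations to smoke-test."""
    return list(product(hypervisors, network_models))

def jobs_for_commit(commit, architectures):
    """One smoke-test job label per commit + architecture combination."""
    return [f"{commit}:{hv}:{net}" for hv, net in architectures]

archs = reference_architectures(HYPERVISORS, NETWORK_MODELS)
jobs = jobs_for_commit("r1234", archs)
```

The point of enumerating the matrix explicitly is that its size (hypervisors times network models, per commit) is exactly the cost anotherjesse mentions below of accepting a new hypervisor or network model into core.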
21:37:03 <ttx> #topic Preparation for Diablo design summit
21:37:05 <alekibango> anotherjesse: ic... +1000
21:37:07 <anotherjesse> I think it is a vital part of stabilization
21:37:10 <soren> ttx: We did that last week :)
21:37:27 <ttx> soren: what ?  what ?
21:37:34 <soren> 21:36 < ttx> maybe a topic for open discussion once we are done with the agenda topics
21:37:37 <soren> That.
21:37:38 <ttx> haha
21:37:45 <ttx> So at the end of next month we'll have the Design Summit in Santa Clara, 3 days from Wednesday April 27th to Friday April 29th.
21:37:58 <ttx> On the Tuesday there is the OpenStack conference running, for those interested
21:38:14 <ttx> Like last time, the agenda of the design summit is made of the sessions discussing the features the developers propose.
21:38:27 <ttx> You can already start thinking about and proposing sessions, the current process is outlined at:
21:38:30 <ttx> http://wiki.openstack.org/Summit
21:38:42 <ttx> Once the PTLs are elected we'll review the process together
21:38:44 <dendrobates> when is the call for blueprints?
21:38:47 <ttx> and confirm it
21:39:12 <ttx> dendrobates: you can start filing them. I wanted to discuss the whole thing with PTLs before actually sending a call
21:39:32 <ttx> dendrobates: think that will be too late ?
21:39:46 <jaypipes> anotherjesse: ++ on that. (stabilization and your smoketest env.)
21:39:49 <ttx> (i.e. next week)
21:40:34 <dendrobates> ttx:  it should be fine
21:40:38 <ttx> ok
21:40:46 <ttx> #topic Open discussion
21:40:55 <tr3buchet> TOPIC -> how to handle changing something in nova that would break certain hypervisors.
21:41:22 <vishy> tr3buchet: explain?
21:41:29 <ttx> anotherjesse: btw the branching/release model will be discussed at the summit -- justinsb already filed a session on that subject.
21:41:30 <dabo> tr3buchet: you mean outside of confining the change to nova/virt?
21:41:37 <tr3buchet> yes
21:41:44 <alekibango> vishy: for example sheepdog support ?
21:41:50 <tr3buchet> vishy, let's say changing db structure
21:41:53 <adiantum> tr3buchet: May be we need register several blueprints and leave it unassigned?
21:41:56 <ttx> anotherjesse: how many reference architectures did you identify ?
21:42:10 <vishy> tr3buchet: how does that break other hypervisors?
21:42:32 <tr3buchet> vishy, say certain places in certain hypervisors refer to a column that gets moved to a different table
21:42:39 <anotherjesse> ttx: one per hypervisor (potentially times the number of network models) -- then a handful of things like multiple hypervisor deploys and multiple zone deploys
21:42:43 <vishy> tr3buchet: you mean features that aren't supported?
21:42:53 <tr3buchet> vishy: specifically, moving the mac address column from instances, into it's own table.
21:42:56 <anotherjesse> ttx: I'm going to try to get 3 simple reference deployments and then start increasing the complexity
21:43:09 <vishy> tr3buchet: ah ok i'm with you now
21:43:16 <anotherjesse> ttx: the goal is that before we accept a new hypervisor / network model / ... we have a testing environment for it
21:43:22 <tr3buchet> there are many places that pull that mac address data straight from the table
21:43:34 <vishy> tr3buchet: an architecture change like that needs to be fixed in all hypervisors
21:43:38 <vishy> IMO
21:43:39 <ttx> anotherjesse: so you expect to cover all of them ? In San Antonio we discussed the possibility of asking Citrix/Microsoft etc. to give out test resources
21:44:01 <vishy> tr3buchet: changes that are specific to one hypervisor should not be modifying shared tables imo
21:44:05 <tr3buchet> vishy: agree, how to arrange the fixing.
21:44:11 <alekibango> anotherjesse: i installed nova using http://fai-project.org/ --> on debian stable/testing/unstable and ubuntu (selected few versions). servers up and running in a few (4-10) minutes... this can be used for smoketesting
21:44:12 <anotherjesse> ttx: we would run as many as possible - but you can chain jenkins so others can chain to us for their custom ones
21:44:25 <ttx> anotherjesse: sounds cool.
21:44:33 <tr3buchet> vishy: none of this is specific to a certain hypervisor
21:44:40 <dabo> tr3buchet: shouldn't they be getting it from the orm?
21:44:53 <vishy> tr3buchet: i think we just have to do our best to fix all of the hypervisors
21:44:57 <anotherjesse> alekibango: neat - does fai support xenserver :)
21:45:19 <vishy> and notify the teams that are responsible for them that we will need help
21:45:21 <alekibango> anotherjesse: can do... i just used kvm for now... but i can help you to set it up
21:45:25 <anotherjesse> vishy / tr3buchet - having a testing environment for each hypervisor would help with this
21:45:40 <anotherjesse> vishy / tr3buchet the goal is that we can request a test run on any branch - not just trunk
21:45:44 <tr3buchet> vishy: so the actual work gets done by a team responsible for the hypervisor?
21:45:47 <pvo> anotherjesse: I think tr3buchet is wanting to know how to coordinate all the work?
21:45:56 <alekibango> anotherjesse: we can run tests on different environments and systems, on different HW even.... to test it well
21:45:56 <tr3buchet> yes thanks pvo
21:45:58 <soren> tr3buchet: Have you seen this happen or is this entirely hypothetical?
21:46:04 <vishy> tr3buchet, dabo: they use the orm, but this is a large arch change, that will break everything
21:46:08 <pvo> soren: this is for multi-nic blueprint
21:46:14 <ttx> tr3buchet: ideally, with a good design summit session where you have people representing all the hypervisors, you should be able to make sure work will be done
21:46:15 <tr3buchet> soryen this is very soon to happen
21:46:19 <tr3buchet> oops
21:46:21 <tr3buchet> soren ^
21:46:34 <soren> pvo: Ok.
21:46:38 <dabo> vishy: just wondering if the change can be handled via orm access
21:46:53 <ttx> tr3buchet: and obviously the code breaking stuff would have to land very early to give time to fix it
21:46:53 <alekibango> anotherjesse: i would love to help with this...
21:46:56 <anotherjesse> in my opinion openstack shouldn't accept a hypervisor / network model into core that it can't test with each change - so developers know they can see if they are breaking systems
21:46:58 <vishy> dabo: i think we could hack around it at the orm layer
21:47:20 <pvo> anotherjesse: I don't disagree, but the problem exists when you don't have the expertise or dev systems to test all the changes
21:47:20 <tr3buchet> vishy/dabo i'd prefer not hacking around it in the orm layer if possible
21:47:31 <vishy> dabo: but all hypervisors should support multiple nics
21:47:40 <tr3buchet> vishy dabo: if the network db ever gets moved restructured etc, it will cause problems
21:47:42 <vishy> tr3buchet: agreed
21:47:50 <dabo> vishy: e.g., 'mac_address' returns the default (first?), and 'mac_addresses' return all of them
21:47:52 <anotherjesse> pvo: that is why the testing environment is so important and why I've been working on getting it set up since joining rackspace
21:47:53 <ttx> anotherjesse: I agree with that -- but in the past the pressure has been important on getting that extra support in
21:48:02 <_cerberus_> I don't think the intent is to leave it "hacked" in the ORM, but simply to buy time for the other implementers to catch up, no?
21:48:06 <ttx> anotherjesse: see Hyper-V, vSphere
21:48:20 <vishy> tr3buchet, dabo: right we could provide that for compatibility, but then work with other hypervisors to fix them for multi-nic
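[editor's note: a hedged sketch of the ORM-layer compatibility shim dabo and vishy describe above: the mac address moves out of the instances table into its own table, while a read-only `mac_address` property keeps single-nic callers working until every hypervisor supports multi-nic. Plain-Python stand-ins for the real SQLAlchemy models; all names here are hypothetical.]

```python
# Compatibility-mode sketch for the multi-nic schema change discussed above.
# Not the real nova models; illustrative names only.

class MacAddress:
    """One row in the new, separate mac_addresses table."""
    def __init__(self, address):
        self.address = address

class Instance:
    def __init__(self, addresses):
        # After the schema change an instance owns a list of MacAddress
        # rows instead of a single mac_address column.
        self.mac_addresses = [MacAddress(a) for a in addresses]

    @property
    def mac_address(self):
        """Compatibility: the default (first) address, or None if absent."""
        return self.mac_addresses[0].address if self.mac_addresses else None

inst = Instance(["02:16:3e:00:00:01", "02:16:3e:00:00:02"])
```

With a shim like this, single-nic drivers keep reading `instance.mac_address` unchanged while updated drivers iterate `instance.mac_addresses`, which is what lets the hypervisor fixes land incrementally rather than all at once.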
21:48:21 <anotherjesse> ttx: the problem is that we've not prioritized setting up this environment
21:48:29 <adiantum> actually in two virt drivers changes already done not at the orm
21:48:31 <dabo> vishy: re: multiple nics - agreed
21:48:34 <anotherjesse> ttx: we don't even do it for "simple" environments yet
21:48:36 <pvo> anotherjesse: but tr3buchet's branch would break those.
21:48:40 <ttx> anotherjesse: so we shouldn't accept HP SAN support code if we don't have a test rig with HP SANs ?
21:49:00 <dabo> _cerberus_: exactly
21:49:16 <vishy> ttx: hopefully the group proposing will offer to have a test cluster that we can test against
21:49:22 <dabo> the concern was breaking other hypervisors
21:49:32 <dabo> not improving them all
21:49:39 <pvo> anotherjesse: I agree +1000 with test env. But tr3buchet  is trying to figure out how to coordinate a disruptive change. I think the PTL has to solve this with hypervisor lieutenants
21:49:55 <ewanmellor> If you let us know in advance, I'd be happy for us to smoke test a branch on XenServer or vSphere for you, whatever the change is.   I think it's the responsibility of the feature developer to try and get the code write in all hypervisor backends though.  We have unit tests with hypervisor simulators included.
21:49:57 <anotherjesse> ttx: in the case of HP san I would hope that either: 1) a hp san is donated to test cluster 2) someone runs jenkins against trunk with their hp san 3) someone implements a mock where we can test it
21:50:02 <ttx> vishy: (justinsb landed in HP Lefthand/SAN support in cactus)
21:50:16 <justinsb> ttx: We can probably get an HP SAN VM image
21:50:34 <justinsb> Although I do agree that a real HP SAN would be cooler
21:50:40 <vishy> dabo, tr3buchet, pvo: I propose a) try to provide a compatibility mode for breaking changes. b) the PTL gets a clear picture of which teams are "responsible" for each hypervisor c) the breaking changers work with the PTL to help get all of the hypervisors aligned with the new world view.
21:50:40 <ewanmellor> s/code write/code right/  (how embarrassing)
21:50:50 <anotherjesse> ewanmellor: I want to talk about how to automatically install / test xenserver - we've got code for the ubuntu/kvm environment
21:51:01 <dragondm> yup. we need someone to answer the questions like "I think this is going to affect the HyperV virt layer which I know naught abt. Who do I ask abt HyperV?!?!"
21:51:07 <tr3buchet> pvo: in teh case of hypervisor lieutenants, each lieutenant is in charge of making sure changes to nova don't break in their hypervisor, correct?
21:51:11 <dabo> vishy: agree 100%
21:51:12 <ttx> justinsb: right, my point is that for some areas (storage comes to mind) this will seriously slow down code acceptance
21:51:14 <pvo> vishy: agree with that.
21:51:27 <anotherjesse> ewanmellor: once I get kvm/ubuntu going here I'll ping the mailing list
21:51:42 <pvo> tr3buchet: something like that.
21:52:11 <vishy> tr3buchet: yes, although the person proposing breaking changes should note it in the Spec/MP, and try to work with the PTL to get the changes done in tandem.
21:52:18 <ewanmellor> anotherjesse: Would love to talk.  I don't have a lot of hardware myself yet, so I can't donate a cluster to the public right now, but we are obviously going to bring up a cluster internally.
21:52:35 <anotherjesse> ewanmellor: rax just stood up a small cluster
21:52:44 <justinsb> ttx: I think requiring VM images that simulate the device is reasonable.
21:52:46 <tr3buchet> vishy: but the feature developer wouldn't be responsible for updating the hypervisors?
21:53:37 <anotherjesse> justinsb: or someone can provide a chained jenkins that runs the tests against the real device if we can't add one to our testing environment
21:53:51 <vishy> tr3buchet: i don't think we can make a clear rule on that, the responsibility would probably be shared
21:54:03 <ewanmellor> I think the feature developer _should_ be responsible for updating the hypervisors.  We can't expect all cross-cutting changes to be made by one or two "lieutenants".
21:54:42 <dragondm> but no-one will have the knowledge to update all hypervisors, in many cases.
21:54:42 <ewanmellor> If our unit tests are good enough, then you should be able to check your work that way.
21:54:49 <ttx> anotherjesse: I've had several contacts that told me they would consider donating some chained jenkins to ensure their use case is properly tested against new revisions of the code
21:55:04 <vishy> tr3buchet, ewanmellor: I'm seeing the reviews being done on a feature branch and hypervisor fixes being proposed and merged into the feature branch.
21:55:23 <tr3buchet> vishy this is true
21:55:41 <vishy> tr3buchet, ewanmellor: As long as there is a compatibility mode provided, it isn't necessary that "all" hypervisor fixes go in at once
21:55:49 <tr3buchet> it seems like in general people use a particular hypervisor.. seems to happen
21:55:49 <anotherjesse> vishy / tr3buchet / ewanmellor - that is why we want to run the smoketests against arbitrary branches
21:55:52 <dabo> tr3buchet: ewanmellor: the developer should be responsible for coordinating the lieutenants, if they want their change to be adopted.
21:55:58 <vishy> (for example, perhaps hyper-v will only have single-nic for a while)
21:56:38 <tr3buchet> dabo: why should the feature developer be responsible for keeping track of however many hypervisors we support now?
21:56:58 <vishy> we should make every effort to allow for backward compatibility, though
21:57:01 <tr3buchet> i agree there needs to be clear communication
21:57:13 <jk0> you'd have to (at the very least) stick in some pass or NotImplemented methods
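[editor's note: a minimal sketch of jk0's suggestion above: a driver that has not caught up with a cross-cutting change ships stub methods that raise NotImplementedError, so callers fail loudly instead of silently misbehaving. Class and method names are illustrative, not the real nova virt driver interface.]

```python
# Stub-method pattern for hypervisor drivers lagging a breaking change.
# Illustrative names only; not the actual nova virt layer.

class BaseDriver:
    def attach_extra_nic(self, instance, network):
        raise NotImplementedError("multi-nic not supported by this driver")

class SingleNicDriver(BaseDriver):
    # Inherits the stub: multi-nic calls fail loudly until this driver
    # is updated for the new world view.
    pass

class MultiNicDriver(BaseDriver):
    def attach_extra_nic(self, instance, network):
        return f"attached {instance} to {network}"

ok = MultiNicDriver().attach_extra_nic("i-1", "net0")
try:
    SingleNicDriver().attach_extra_nic("i-1", "net0")
    unsupported = False
except NotImplementedError:
    unsupported = True
```

The stub makes the gap in driver support explicit and greppable, which is what lets the per-hypervisor fixes be merged into the feature branch one at a time, as vishy suggests.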
21:57:20 <vishy> tr3buchet: this is one of the roles of the PTL
21:57:31 <tr3buchet> vishy: yep
21:57:37 <dabo> tr3buchet: that's where the PTL should be involved - getting all the concerned people working together. Once that group is defined, the feature developer should coordinate among them
21:57:46 <ewanmellor> vishy: Yes, I'm not saying that all hypervisors need to have all features.  Just that people shouldn't be allowed to make a change in one place that completely breaks something else.
21:57:55 <anotherjesse> ewanmellor: ++
21:57:55 <vishy> tr3buchet: but i don't think you can expect to just toss a feature over the wall and hope that somehow all of the hypervisors will need to adopt it.
21:58:22 <pvo> I think we will have a much clearer picture once PTL is announced.
21:58:24 <vishy> ewanmellor: Agreed, breaking changes need to have equivalent fixes in the hypervisors before merging
21:58:36 <tr3buchet> vishy: yes absolutely correct.
21:59:13 <tr3buchet> vishy: that's my goal. getting the hypervisors in good shape before merging
21:59:47 <vishy> btw: good work on the multi-nic stuff
21:59:50 <vishy> :)
21:59:57 <tr3buchet> my plan was to have fixtures in place in each hypervisor which allowed them to work with and without multi-nic. then once multi-nic is merged to trunk, the fixtures can be removed
22:00:04 <ttx> oook, I'll close the meeting now, since I really need to get some sleep...
22:00:05 <ewanmellor> This doesn't just apply to hypervisors, by the way.  Just wait until the networking topologies get more interesting -- we'll have this same problem in that layer.  Very few people will run loads of different network topologies.
22:00:08 <tr3buchet> thanks vishy
22:00:22 <ttx> but you can continue the discussion here, or even better, on #openstack.
22:00:34 <pvo> longest meeting ever
22:00:41 <ttx> #endmeeting