17:00:24 <jaypipes> #startmeeting
17:00:25 <openstack> Meeting started Thu Aug  2 17:00:24 2012 UTC.  The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:01 <rohitk> hello
17:01:27 <jaypipes> rohitk: so...
17:02:14 <jaypipes> rohitk: I branched your whitebox tests locally, made a few changes, and have only a single failure that I'm still working through (in the test_server_basic_ops test, not the whitebox tests)
17:02:32 <jaypipes> rohitk: hoping to push those changes shortly for you to take a look at
17:02:38 <rohitk> jaypipes: ah, thnx for your time
17:02:47 <jaypipes> rohitk: for the volumes patch, there is something really wrong...
17:02:54 <jaypipes> rohitk: it's not a failure in jenkins.
17:03:04 <jaypipes> rohitk: but I haven't had time to track it down yet
17:03:08 <rohitk> jaypipes: I'm a bit puzzled too
17:03:14 <jaypipes> rohitk: going to try to get to that today
17:03:29 <jaypipes> rohitk: I suspect it MAY have something to do with the size of the volume group backing file
17:03:34 <rohitk> jaypipes: trying to figure out where the create_volume is failing,
17:03:38 <jaypipes> rohitk: but I need to test locally first to verify that
17:03:40 <rohitk> jaypipes: me too
17:03:49 <jaypipes> so please be patient on that one..:(
17:03:53 <rohitk> jaypipes: yes, that would help if you can test it locally :)
17:04:45 <jaypipes> rohitk: yeah, sorry, was focusing on the whitebox tests because sdague also needed those to go through
17:04:47 <rohitk> jaypipes: there aren't any issues with the cleanups by the existing volume tests in Tempest though, i guess
17:05:14 <rohitk> jaypipes: I'd also like your feedback on the design comments
17:05:29 <jaypipes> rohitk: the one thing I did still disagree with though, is combining the nova-volumes (volume as extension of compute API) client and the Cinder client..
17:05:31 <rohitk> I had responded to on the volumes patch,
17:06:14 <rohitk> actually I think nova volumes is not volume as an extension of compute API
17:06:26 <rohitk> n-vol and compute extensions are different, no?
17:06:47 <jaypipes> rohitk: n-vol is the service that manages the volumes, not the API endpoint...
17:07:06 <jaypipes> rohitk: the Compute API is still the API endpoint for volume operations when running with nova-volumes and not cinder
17:07:13 <rohitk> hmm, yes the compute extensions call the n-vol service API
17:08:14 <jaypipes> rohitk: which is why I think it would be better to just leave the existing /services/nova/json/volumes_client code and instead just create a /services/volume/json/volumes_client
17:08:37 <jaypipes> rohitk: and put all Cinder tests into /tempest/tests/volume/ and leave the existing Compute volume tests in /tempest/tests/compute/
17:09:06 <rohitk> jaypipes: ok, will do a re-check on that
17:09:29 <jaypipes> rohitk:  that way, when we eventually deprecate nova-volume all we need to do is delete those files from the compute tests and service.
17:09:39 <jaypipes> rohitk: and not need to modify any files.
17:10:08 <rohitk> jaypipes: yeah, makes sense
17:10:33 <jaypipes> ok, cool.
17:10:44 <jaypipes> rohitk: thx for understanding and being patient!
17:10:52 <rohitk> jaypipes: :)
17:11:28 <jaypipes> rohitk: back to the whitebox tests, I actually didn't change much at all -- I just reorganized things a bit. let me elaborate on the changes I made:
17:11:36 <davidkranz> jaypipes, rohitk : sorry I am late
17:11:58 <jaypipes> 1) put the /tempest/tests/whitebox/compute/* tests just in /tempest/tests/compute/
17:12:23 <jaypipes> 2) Changed the config so that there wasn't a [whitebox] section in the config file -- instead just added whitebox options to the [compute] section
17:13:19 <rohitk> jaypipes: don't we want to segregate whitebox stuff from the blackbox stuff, even the configs?
17:13:42 <jaypipes> 3) Made tempest.whitebox.WhiteboxTestCase NOT inherit from BaseComputeTest. Instead, have the actual test cases (i.e. tempest.tests.compute.test_servers_whitebox.TestServersWhitebox) inherit from tempest.whitebox.WhiteboxTest AND tempest.tests.compute.base.BaseComputeTest
17:14:27 <jaypipes> rohitk: actually no ... so the thought is that whitebox testing is just a type of testing -- but it's still a test of the Compute stuff.
17:14:44 <rohitk> jaypipes: ok
17:15:33 <rohitk> jaypipes: I'll take a look to understand the re-org
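The inheritance layout jaypipes describes in (3) might look roughly like this. This is a hypothetical sketch with invented class bodies; only the class names come from the chat.

```python
# Hypothetical sketch: the whitebox mixin no longer inherits from the
# compute base class; the concrete test case inherits from both instead.
# Method bodies are invented stand-ins for the real Tempest classes.

class WhiteboxTest:
    """Mixin providing whitebox helpers (e.g. direct DB access)."""
    def get_db_uri(self):
        # In the real suite this would read the [compute] db_uri option.
        return "mysql://nova:secret@localhost/nova"

class BaseComputeTest:
    """Base class providing blackbox compute API clients."""
    def get_servers_client(self):
        return "servers_client"

class TestServersWhitebox(WhiteboxTest, BaseComputeTest):
    """Concrete test case combining whitebox and blackbox capabilities."""
    def check(self):
        return (self.get_db_uri(), self.get_servers_client())

t = TestServersWhitebox()
print(t.check())
```

The point of the split is that `WhiteboxTest` stays a pure mixin, so non-compute whitebox suites can reuse it without dragging in compute fixtures.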
17:15:39 <jaypipes> rohitk: so instead of having a compute_db_uri, glance_db_uri, identity_db_uri, etc in a [whitebox] section of config, I just have a db_uri option in [compute], [images], [identity] so any whitebox test has the same config option depending on which things it's testing
17:16:29 <rohitk> jaypipes: sure
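The per-section layout jaypipes describes could be sketched like this, parsed with Python's stdlib `configparser`. The section and option names follow the chat; the URIs and timeout value are invented placeholders.

```python
import configparser

# Instead of a [whitebox] section with compute_db_uri, glance_db_uri,
# etc., each service section carries its own db_uri option.
SAMPLE = """
[compute]
db_uri = mysql://nova:secret@localhost/nova
build_timeout = 400

[images]
db_uri = mysql://glance:secret@localhost/glance

[identity]
db_uri = mysql://keystone:secret@localhost/keystone
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# Any whitebox test reads the same option name from the section
# matching whatever it is testing.
for section in ("compute", "images", "identity"):
    print(section, cfg.get(section, "db_uri"))
```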
17:16:52 <jaypipes> rohitk: I'm about 30 minutes away I think from pushing up the reorg. I hope it will make sense when you see it.
17:17:22 <rohitk> jaypipes: since we're discussing configs, the build_interval, build_timeout need to be understood
17:17:29 <rohitk> and not used randomly too
17:17:48 <rohitk> so we should have these timing params for each service (where needed)
17:18:03 <jaypipes> rohitk: I agree 100%
17:18:14 <rohitk> a volume timeout need not be as high as a compute build timeout
17:18:17 <jaypipes> rohitk: want to add a bug for that?
17:18:25 <rohitk> jaypipes: yep, I'll do that
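A per-service polling helper along these lines is one way to picture how build_timeout and build_interval would be used; everything here is an illustrative sketch with invented numbers, not Tempest code.

```python
import time

def wait_for_status(get_status, target, build_timeout, build_interval):
    """Poll get_status() until it returns target or build_timeout expires."""
    deadline = time.time() + build_timeout
    while True:
        if get_status() == target:
            return True
        if time.time() >= deadline:
            return False
        time.sleep(build_interval)

# Per-service tuning: a volume can use a much shorter timeout than a
# server build (these numbers are placeholders, not real defaults).
VOLUME_BUILD_TIMEOUT, VOLUME_BUILD_INTERVAL = 60, 1
COMPUTE_BUILD_TIMEOUT, COMPUTE_BUILD_INTERVAL = 400, 3

# Simulate a volume that becomes available on the third poll.
statuses = iter(["creating", "creating", "available"])
print(wait_for_status(lambda: next(statuses), "available",
                      VOLUME_BUILD_TIMEOUT, 0.01))
```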
17:18:28 <jaypipes> rock on.
17:18:30 <jaypipes> ty!
17:20:04 <davidkranz> jaypipes: There are some Quanta people working on testing now. Do we have any neglected projects it would make sense for them to look at?
17:24:05 <jaypipes> davidkranz: hmm, good question.
17:24:31 <jaypipes> davidkranz: perhaps it's worth you, me, and rohitk going through the bug list later today or tomorrow to clean up and find stuff for them?
17:24:50 <davidkranz> jaypipes: Sure.
17:25:13 <rohitk> jaypipes: np
17:25:17 <jaypipes> davidkranz: how about tomorrow morning? rohitk you cool with that?
17:25:25 <jaypipes> a Google+ Hangout?
17:25:41 <davidkranz> jaypipes: BTW, the discussions about API compatibility made me wonder if anyone has run code coverage of nova, etc. from a Tempest run?
17:26:14 <davidkranz> jaypipes: I have a meeting at 10EDT tomorrow but am otherwise free.
17:26:30 <rohitk> jaypipes: that should be fine
17:26:45 <davidkranz> I could do 9
17:27:05 <jaypipes> davidkranz: 9am it is. want to send an invite?
17:27:24 <davidkranz> OK. I will have to find rohit on G+
17:27:45 <jaypipes> davidkranz: tough to do API coverage... easier to do unit test coverage :) because coverage plugins and testing generally do not cover out-of-process calls :(
17:29:02 <davidkranz> jaypipes: I know but was thinking you could run nova-api, nova-compute, etc. with coverage and then run Tempest. Is there a reason that would not work? Never done this in python...
17:29:15 <rohitk> jaypipes: not sure but module level coverage results from unittest reports could help?
17:29:22 <jaypipes> davidkranz: I don't know of any way to do that :(
17:29:58 <davidkranz> jaypipes: OK, I'll see if I can think of something.
17:30:00 <jaypipes> rohitk: unittest coverage works fine (and that happens for all the core projects). But seeing whether a tempest test run covers code paths is not something I know how to do with existing tools.
17:31:17 <rohitk> jaypipes: I meant if there was a way to run nose coverage tools from Tempest runners
17:31:28 <davidkranz> I think tempest is at the point where you need some kind of coverage to take it the final distance as a regression suite.
17:31:49 <rohitk> davidkranz: we've been trying to figure this out for some time too
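The approach davidkranz floats (start each service under a coverage tool, exercise it from the outside, then report afterwards) can be sketched with the stdlib `trace` module standing in for coverage.py. The "service" below is an invented stand-in script, not a real nova binary.

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A throwaway script standing in for a service binary such as nova-api.
service_src = textwrap.dedent("""\
    def handle(req):
        return req.upper()

    print(handle("ping"))
""")

with tempfile.TemporaryDirectory() as d:
    svc = os.path.join(d, "svc.py")
    with open(svc, "w") as f:
        f.write(service_src)

    # Launch the "service" under the tracer instead of running it
    # directly; coverage.py's `coverage run` wraps a process the same way.
    out = subprocess.run(
        [sys.executable, "-m", "trace", "--count", "--coverdir", d, svc],
        capture_output=True, text=True, check=True)

    # On exit the tracer writes annotated *.cover listings showing
    # which lines the externally driven requests actually exercised.
    wrote_cover = any(f.endswith(".cover") for f in os.listdir(d))

print(out.stdout.strip())
print(wrote_cover)
```

In practice the equivalent would be something like `coverage run --parallel-mode` wrapping each nova service, a full Tempest run against the instrumented services, then `coverage combine` and `coverage report` after shutting them down.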
17:33:03 <davidkranz> jaypipes, rohitk : Anything else that can't wait until tomorrow?
17:34:23 <rohitk> davidkranz: nothing from me now.
17:35:13 <davidkranz> rohitk: Are you 'Rohit Karajgi' on Google+
17:35:28 <rohitk> davidkranz: right
17:37:44 <davidkranz> OK, talk to you tomorrow.
17:39:12 <rohitk> davidkranz: did you send an invite? I am unable to pinpoint you in search
17:41:24 <davidkranz> rohitk: Not sure how to do that in advance. My name there is 'David Kranz'
17:42:33 <rohitk> davidkranz: gotcha
17:42:36 <med_> a trick for setting up a hangout in advance is to associate it with a google plus event you create--which can be in the far distant future.
17:42:45 <med_> The link it creates for the hangout is valid immediately though.
17:43:05 <davidkranz> med_: Thanks, I'll give that a whirl.
17:43:05 <med_> and remains static/available until the actual event ends... which you may have placed in 2014.
19:18:27 <jkff> Hello. This is Eugene Kirpichev from Mirantis. We're about to hold the first LBaaS meeting but we went to the wrong channel first (#openstack-meetings).
19:18:50 <ogelbukh> hi
19:19:03 <jkff> There's me, Kirill Ishanov, Oleg, and another person will join shortly
19:19:11 <YorikSar> o/
19:19:18 <jkff> Hi Yuri
19:20:36 <jkff> Let's start with a questions/answers phase. Folks, is there anything you would like to ask right away?
19:21:11 <jkff> We'll wait for a couple of minutes and if we do not get any questions we'll discuss the current state of the project and roadmap.
19:21:40 <jkff> Also, is there anyone over here who doesn't know what LBaaS is, what we're talking about?
19:22:14 <jkff> Hi kindaopsdevy, are you here for the LBaaS meeting?
19:23:17 <jkff> Hi kindaopsdevy_, are you here for the LBaaS meeting?
19:23:35 <kindaopsdevy_> hi there -- no i'm not
19:23:43 <kindaopsdevy_> was just connected to the channel
19:23:52 <jkff> All right.
19:24:18 <jkff> Ok guys, since we're not getting any questions, let's tell where we are
19:24:28 <jkff> Yuri, can you describe the project's current state briefly?
19:25:05 <jkff> Like - what's the level of support for haproxy, Cisco ACE and F5 right now and what do we have on the immediate roadmap?
19:25:29 <YorikSar> We've implemented most of the core service functionality with good test coverage.
19:25:34 <jkff> Also - is the project ready for someone to download it and play with it?
19:26:18 <jkff> Also - yesterday you said that you're working on improvements to the scheduler which allow it to select a balancer based on its "capabilities". Can you tell us more about that?
19:27:47 <YorikSar> The service supports pluggable drivers, HAproxy and ACE drivers are implemented to some level. Our first fully supported driver will be an F5 driver.
19:28:21 <kishanov> ETA for F5?
19:28:32 <YorikSar> The project is published in our GitHub page at https://github.com/Mirantis/openstack-lbaas.
19:29:10 <jkff> Is anything special needed for someone to create a test environment for it?
19:29:32 <jkff> Assuming they have an ACE or F5 device :) or, if they don't, then haproxy.
19:29:44 <YorikSar> We're planning to implement F5 driver in 6 weeks from now. By that moment we're going to stabilise driver API.
19:30:26 <jkff> Got it. How mature is the "capabilities" functionality; is it in the API yet?
19:31:02 <kishanov> on F5: is this done through iControl lib?
19:31:47 <YorikSar> Device capabilities support is on its way to master. Drivers will be able to report the underlying device's capabilities, and all new balancers will be provisioned based on this info.
19:32:44 <YorikSar> We're going to implement a scheduler like FilterScheduler in Nova so that we'll gain a lot of flexibility there.
19:33:00 <jkff> I see. How do you select which device to allocate to someone who hasn't asked for any particular capabilities - do you, like, select the least capable device, or the first available one? Sounds like an interesting problem
19:34:27 <jkff> Also, do you plan to support something like transparent migration of a user's balancing configuration from one device to another compatible one? (e.g. to free up a more capable device for someone who needs it)
19:34:28 <ogelbukh> API calls for unsupported capabilities must yield error responses, I suppose
19:35:15 <jkff> Oleg, I'm asking about something different. Suppose I have 5 haproxys and 5 ACE's. Obviously if someone asks just for simple load balancing, I better give him a haproxy instance rather than an ACE
19:35:17 <ogelbukh> or there should be capabilities request in API so client could determine what to ask for
19:35:20 <YorikSar> Just like FilterScheduler, after filtering out all devices that don't fit the user's requirements, the scheduler sorts them according to a number of weighted cost functions. By default it will pick the least busy device in the pool.
19:35:45 <jkff> Hm, least busy, ok
19:36:23 <jkff> I think over time we'll see more intricate strategies, perhaps customizable
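The FilterScheduler-style flow YorikSar describes (filter by capabilities, then rank the survivors by weighted cost functions, least busy by default) could be sketched as follows; the device fields, names, and cost function are invented for illustration.

```python
def schedule(devices, required_caps, cost_fns):
    """Pick a device: filter by capabilities, then minimize weighted cost."""
    candidates = [d for d in devices
                  if required_caps <= d["capabilities"]]
    if not candidates:
        raise LookupError("no device satisfies the requested capabilities")
    # Lower total weighted cost is better.
    return min(candidates,
               key=lambda d: sum(w * fn(d) for w, fn in cost_fns))

devices = [
    {"name": "haproxy-1", "capabilities": {"http"}, "load": 0.2},
    {"name": "ace-1", "capabilities": {"http", "ssl"}, "load": 0.1},
]

# Single cost function weighting current load: the least busy of the
# capability-matching devices wins.
best = schedule(devices, {"http"}, [(1.0, lambda d: d["load"])])
print(best["name"])
```

jkff's point about preferring a cheap haproxy for simple requests would be another cost function in the list, e.g. one penalizing capability-rich devices for capability-poor requests.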
19:36:43 <jkff> All right. So, the immediate roadmap is to implement F5 - anything else?
19:38:01 <YorikSar> We're going to provide a client library with CLI and Horizon extensions providing UI to manage balancers.
19:38:39 <jkff> I see. So - you mean, now you have only the tenant's front-end, and you're going to implement the provider's frontend. Right?
19:39:14 <danwent> hi guys, just joining… looks like it's just us.   I think it's great that you're starting to prototype this stuff, but based on comments on the list and from the PPB, I'm pretty confident that all of this will be in Quantum moving forward, so I'm not sure a separate CLI, etc. makes sense
19:39:38 <danwent> there are a set of vendors interested in LBaaS, and we'll be talking about it at the Grizzly summit
19:39:51 <jkff> Hi Danwent, thanks for joining :) That's an interesting comment, let's discuss
19:40:02 <danwent> you all, of course, are free to do whatever you like in the run up to that though, and starting to push on a lbaas prototype is great in my opinion
19:40:39 <jkff> So, before we start quantum integration, we need to have *some* way of configuring LBaaS in a user-friendly way
19:40:58 <jkff> Also, I can imagine someone who wants to use LBaaS but doesn't want to use Quantum
19:40:59 <danwent> i think the API will be the same regardless
19:41:14 <YorikSar> We may integrate our service with Quantum as well. We can support both standalone service and Quantum extension.
19:41:20 <jkff> So I think that there should be some way of using it without Quantum, but of course quantum integration will happen too.
19:41:26 <danwent> I'd think of LBaaS as a component of Quantum, that might be run standalone if that is desired
19:41:32 <jkff> Exactly
19:41:51 <danwent> but my point is, there are many members of the quantum community who plan on implementing LBaaS in Quantum
19:42:14 <jkff> Do you mean they're interested in LBaaS in general, or they're interested in integrating with ours specifically?
19:42:28 <danwent> so going off and implementing lots of the parts of it as a separate service, with separate CLI, etc. risks wasting a lot of time.
19:42:50 <danwent> LBaaS in general.
19:43:07 <danwent> we'll need to bring this to openstack users as part of a core openstack project
19:43:30 <Boris___> dan, are you suggesting that it might make sense to merge whatever is in LBaaS and drivers into quantum already now so as to not drive parallel efforts?
19:43:41 <danwent> Boris___: yes
19:43:52 <jkff> I see your point and agree to an extent that *focusing* on the standalone service's front-end is probably not the best thing to do. But neither is omitting it.
19:43:54 <ogelbukh> danwent: what would be a starting point for implementing the quantum plugin of lbaas?
19:44:09 <danwent> we actually talked about this last summit
19:44:24 <danwent> the goal is to allow quantum to be able to load multiple "engines", not just the "L2 engine"
19:44:31 <jkff> Oh, Dan - do you mean that there are people in Quantum who are developing, like, generic front-ends for LBaaS services?
19:44:36 <danwent> one engine would be a LBaaS engine
19:44:42 <ogelbukh> danwent: ok
19:44:47 <YorikSar> Well, a Quantum plugin will need a CLI and WebUI just as the standalone service does, so no effort is wasted here.
19:44:48 <danwent> jkff: sorry, don't follow
19:45:23 <ogelbukh> danwent: my impression was that quantum can load one 'plugin' at a time
19:45:28 <danwent> YorikSar:  but i'd encourage you to implement the CLI in a way that is similar to the Quantum CLI framework, so there's not a lot to reimplement (or even just extend the existing quantum cli)
19:45:42 <ogelbukh> better call it back-end, actually, I think
19:45:42 <jkff> You said that developing a front-end could be a waste of time because it would duplicate the effort of Quantum developers
19:45:46 <danwent> ogelbukh: that is what I was referring to with "engines".  For a given set of APIs, there is one plugin
19:45:48 <ogelbukh> this was in ML recently
19:45:53 <ogelbukh> yes
19:45:59 <jkff> That implies that Quantum developers are already doing something that qualifies as a replacement for our LBaaS frontend
19:46:47 <danwent> jkff: i wouldn't say that, as there is not really work done on it, i'm just making sure that the work that you all are starting will fit into how we will eventually deliver this to users.
19:47:33 <danwent> quantum is biting off chunks of the stack: L2 in Essex, L3 in Folsom, and in Grizzly we'll be moving into L4-L7
19:47:43 <jkff> I understand that one thing you have in mind is the CLI; I don't know much about the Quantum CLI framework, but I can believe that it might make sense to integrate with it instead of doing our own CLI
19:47:51 <ogelbukh> danwent: oh, great )
19:48:29 <ogelbukh> so, first step is to integrate lbaas api as a quantum engine
19:48:30 <danwent> jkff:  like i said, you're free to do whatever you want, but I want to make sure you understand how I expect things to eventually get into core, as I don't want you wasting time on details like building your own CLI framework.
19:48:50 <danwent> ogelbukh: there's work that needs to be done there, but it would be a great place to start.
19:48:52 <ogelbukh> with some fake driver for testing purposes
19:49:01 <jkff> danwent: I understand; I'm just trying to get your opinion on the best integration points
19:49:12 <danwent> I actually want to port the L3 stuff we're doing in Folsom over to be a separate "engine" as well
19:49:26 <jkff> danwent: Do you think that Quantum's integration into horizon will allow to build a good front-end for LBaaS?
19:50:16 <jkff> danwent: Oh, I see you're the owner of the "quantum horizon integration" blueprint!
19:50:17 <ogelbukh> danwent: and how does it work now, if not engine? just an api extension?
19:50:17 <danwent> I think we'll be able to leverage much of what we've done for "quantum routers", though lbalancers may have more bells and whistles to expose
19:50:35 <danwent> jkff: that's actually a bit misleading, as that is a "proxy" blueprint.
19:50:52 <YorikSar> Note that we're basing most of our work on existing OpenStack libraries and code. We're trying our best not to reinvent all these wheels.
19:50:53 <danwent> the real people doing the work and who own the horizon blueprint linked from that blueprint are from NEC and IBM
19:51:25 <danwent> Cisco also did some work on the quantum + horizon integration, but that stalled a bit and these other folks have picked it up.
19:52:26 <danwent> either way, we should have a good basic ability to add "network services" in horizon
19:52:36 <danwent> and the LBaaS stuff should fit into that model.
19:53:57 <danwent> anyway, i've got a 1pm meeting coming up soon, so need to run in a few minutes
19:54:54 <danwent> I think circulating the proposed API spec, as well as a basic internal spec (driver interfaces, schedules) and if possible a proof-of-concept, would be the best things to focus on pre-Grizzly summit.
19:55:20 <danwent> that will let us hit the ground running after the summit in terms of development.
19:55:25 <jkff> I agree. Thanks for the feedback Dan, it was very valuable.
19:55:38 <danwent> k, well, you all know where to find me on openstack-dev :)
19:55:54 <ogelbukh> :)
19:56:41 <ogelbukh> thanks for interesting points, danwent
20:02:45 <n0ano> Anyone here for the orchestration meeting?
20:02:55 <maoy> n0ano: I'm available
20:03:28 <n0ano> maoy, good afternoon (and congratulations, you're now a core team member)
20:03:46 <maoy> n0ano: thanks!
20:05:00 <n0ano> hmm, so far it's just the two of us, did you have anything for today?
20:06:08 <maoy> i've cleared my schedule to revive the orchestration branch.. but nothing more to say yet.
20:06:57 <n0ano> Well, I was hoping for more attendees but it looks like that isn't going to work out
20:07:24 <maoy> ok.
20:22:30 <n0ano> Looks like nothing for this week, I wonder if we need to ping the dev mailing list to see if people are interested in the subject
20:43:26 <mikal> .
21:01:51 <nati_ueno> Nova meeting?
21:02:04 <anniec> i am waiting for the same
21:02:06 <vishy> hi everyone
21:02:35 <openstack> vishy: Error: Can't start another meeting, one is in progress.
21:02:43 <maoy> ...
21:02:46 <vishy> oh last meeting wasn't ended
21:02:55 <clarkb> jaypipes: ^
21:04:14 <vishy> nijaba: ping looks like you were chair of last meeting?
21:04:24 <vishy> nijaba: can you issue an #endmeeting?
21:04:56 <vishy> #endmeeting
21:05:05 <vishy> #chair
21:06:32 <markmc> #reset
21:06:34 <markmc> #eject
21:06:37 <markmc> #kill
21:07:01 <russellb> /kick openstack
21:07:20 <vishy> jaypipes: actually you may have the meeting open
21:07:21 <nati_ueno> op en sta ck
21:07:59 <vishy> sigh
21:08:08 <vishy> ok well i guess we get no logging until someone comes back
21:08:21 <vishy> lets go through the agenda
21:08:22 <markmc> well, it's all logged anyway
21:08:31 <nati_ueno> vishy: I may be logged in another meeting, so we can copy it later
21:08:35 <maoy> right, in the previous meeting
21:08:38 <russellb> just don't get the nice meeting summary
21:08:45 <vishy> #info agenda is here: http://wiki.openstack.org/Meetings/Nova
21:09:19 <markmc> that's a packed agenda
21:09:28 <vishy> #topic api consistency
21:09:37 <vishy> hopefully it won't be too long :)
21:09:44 <vishy> so I sent an email about api consistency
21:10:02 <vishy> the main concern is that we have extra parameters in post that are not documented
21:10:51 <Ravikumar_hp> vishy: what is the best way to handle this
21:11:03 <markmc> vishy, the plan looks good to me
21:11:17 <markmc> vishy, except the renaming part - are we talking about breaking backwards compat?
21:11:18 <vishy> I proposed a solution in the blueprint here: https://blueprints.launchpad.net/nova/+spec/disable-server-extensions
21:11:25 <markmc> Ravikumar_hp, http://wiki.openstack.org/DisableServerExtensions
21:11:26 <vishy> markmc: i don't believe so
21:11:39 <vishy> markmc: and the renaming part only really changes the name of the extension in the list
21:11:56 <markmc> vishy, ah, ok - renaming the extension, not the parameters
21:12:06 <vishy> markmc: so unless there is some unknown client out there looking for a particular name it shouldn't matter
21:12:07 <nati_ueno> FYI: quantum added attribute extension. https://github.com/openstack/quantum/blob/master/quantum/api/v2/attributes.py It is working very well.
21:12:24 <nati_ueno> At least, it is very clear which parameter is core by code.
21:12:26 <bcwaldon> vishy: just want to highlight for the group that breaking backwards compatibility can NOT happen right now
21:12:35 <vishy> markmc: I will leave them as is if necessary, but it annoys me that there are a couple that are different
21:13:00 <dprince> What if we just said... anything that doesn't break novaclient is fair game?
21:13:00 <vishy> so I don't love that the server create code will have to know about the extensions, it is tightly coupled
21:13:21 <vishy> but I don't see any way to fix that in the short term without serious risk of breakage
21:13:33 <vishy> and it is certainly better than what we have now
21:13:45 <vishy> so it sounds like we are all ok with that as a strategy
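The coupling vishy mentions, where the server-create code has to know which extensions are loaded so it can accept only documented parameters, might be pictured with this hypothetical sketch; the parameter and extension names are invented stand-ins, not the actual Nova API surface.

```python
# Core create-server parameters (illustrative subset).
CORE_PARAMS = {"name", "imageRef", "flavorRef"}

def validate_create(body, loaded_extensions):
    """Accept only core params plus those registered by loaded extensions."""
    allowed = CORE_PARAMS | {p for ext in loaded_extensions
                             for p in ext["params"]}
    unknown = set(body) - allowed
    if unknown:
        raise ValueError("unknown parameters: %s" % sorted(unknown))
    return True

# A hypothetical extension registering the extra parameter it adds
# to the create request.
keypairs_ext = {"name": "os-keypairs", "params": {"key_name"}}

print(validate_create({"name": "vm1", "imageRef": "img", "flavorRef": "1",
                       "key_name": "mykey"}, [keypairs_ext]))
```

Disabling an extension then automatically makes its parameters rejected, which is the behavior the blueprint is after; the tight coupling is that the validator must consult the extension list at all.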
21:13:46 <bcwaldon> dprince: I'm not a fan of that
21:13:59 <vishy> so next point on the same topic: I need help
21:14:06 <bcwaldon> vishy: don't we all
21:14:30 <vishy> there are going to be 6 or 7 patches with test changes in addition to the basic one. Does anyone want to help me do some of those?
21:14:44 <_0x44> jk0 does :P
21:14:57 <jk0> certainly
21:15:16 <vishy> this is one of those things that really looks bad from a release perspective so I want to get it buttoned down so our api is coherent
21:15:25 <vishy> ok final point on the first topic
21:15:41 <vishy> jk0: I will put the initial patch in and then I will farm out some of the other patches to you. Sound good?
21:15:42 <sdague> vishy: if this is easy to split up among a few folks, I'm happy to help
21:15:52 <vishy> sdague: awesome I will include you as well
21:15:57 <jk0> sounds good
21:16:05 <vishy> ok last point: our xml support is weak
21:16:06 <annegentle> vishy: ideally we'll have 6-7 doc patches for each patch, truth be told
21:16:07 <nati_ueno> vishy: I wanna also help
21:16:08 <mikal> vishy: I'm happy to help too if you talk slow and explain what I need to do
21:16:13 <vishy> especially in the extensions area
21:16:42 <vishy> ok four volunteers awesome, I will do the first and send an email to all of you with an etherpad for collaborating on the others
21:16:49 <sdague> sounds great
21:16:52 <annegentle> also what is the naming decision? does os- stay?
21:16:53 <mikal> cool
21:16:58 <vishy> so the question is: do we care enough about xml to fix it?
21:17:00 <nati_ueno> gotcha
21:17:04 <_0x44> vishy: The XML contingent has already asked us to kill it.
21:17:12 <_0x44> vishy: Where XML contingent = justinsb
21:17:13 <mikal> vishy: do we know if anyone uses XML?
21:17:18 <mikal> Who do we upset if we kill it?
21:17:23 <vishy> annegentle: solely for the name of the extension os-xxxx will stay
21:17:35 <annegentle> vishy: thanks
21:17:36 <vishy> annegentle: no actual usable endpoints/params will be renamed right now
21:17:40 <_0x44> mikal: justinsb does, but he has to use both xml and json because our XML support is so abysmal.
21:17:49 <annegentle> #link http://wiki.openstack.org/DisableServerExtensions
21:17:56 <markmc> vishy, Vek made a good case for killing it - perhaps deprecate in folsom?
21:17:58 <annegentle> #link https://blueprints.launchpad.net/nova/+spec/disable-server-extensions
21:17:59 <Ravikumar_hp> Vishy: so xml is not yet supported for Nova
21:18:11 <ewindisch> ;['}
21:18:12 <ewindisch> \
21:18:19 <vishy> Ravikumar_hp: i believe it works for all core api, but support in extensions is very weak
21:18:20 <ewindisch> … sorry, child on keyboard.
21:18:28 <nati_ueno> +1 for remove xml
21:18:40 <_0x44> ewindisch: Having your child attend doesn't count as attendance. :)
21:18:41 <vishy> cleaning up extensions and adding tests is a dev effort.
21:18:53 <vishy> personally I would like it to work for what we have now
21:18:54 <russellb> and by remove, deprecate in folsom to give people a chance to migrate off of it if they are using it?
21:19:05 <vishy> but it means some volunteers to fix it
21:19:24 <vishy> I don't see any need to deprecate xml support for the v2 api, we can specifically state that many extensions don't support it
21:19:24 <_0x44> I don't think we can remove XML support if it's defined in the V2API spec
21:19:29 <_0x44> Since apparently that can't ever be changed.
21:19:32 <vishy> and potentially remove it for v3
21:19:33 <markmc> vishy, yeah, deprecating it would be a way of saying "wake up and help fix this or it's dead man walking"
21:19:40 <russellb> ah yeah ... so just remove it for the next spec version then
21:19:41 <_0x44> From when it was handed down from on high by monks on golden platters.
21:20:04 <nati_ueno> markmc: lol for dead man walking
21:20:05 <vishy> I don't know of anything broken in the xml core spec, except for the fact that wadls etc. don't work
21:20:19 <sdague> +1 on deprecate, and plan for removal on v3
21:20:30 <vishy> annegentle: we should definitely document that xml + extensions doesn't work
21:20:37 <annegentle> vishy: noted
21:20:38 <_0x44> vishy: If the WADLs don't work, why don't we ask the people who originally created them to update them?
21:20:46 <Ravikumar_hp> vishy: just to get clarified - xml is desupported only for extensions or for core api too
21:20:47 <markmc> if we had a blueprint for v3, we could list removing xml as an item to warn folks
21:20:48 <sdague> anyone know what the tempest coverage is for the API? that might be a good way to figure out what's working and what's not?
21:20:52 <annegentle> #info document that xml + extensions do not work
21:21:23 <annegentle> #info See Vish if you want to work on server extensions cleanup and docs
21:21:27 <vishy> If we have an xml expert, I would love someone to go through trying to use the core api and see if that works, although maybe i'm beating a dead horse, because nova with no extensions is essentially useless
21:21:29 <Ravikumar_hp> tempest tests are Json response format only
21:21:41 <vishy> is there an xml advocate here?
21:22:01 <annegentle> I'm not it (an xml advocate) but I'll represent what I get asked if it's helpful?
21:22:21 <jog0> vishy: xml at least needs to stay around partially for EC2
21:22:22 <vishy> ok I will ask on the mailing list for help on verifying whether our xml support is even remotely usable
21:22:29 <annegentle> 1) Where are the XSDs? 2) Is WSDL supported? 3) What's the namespace?
21:22:35 <vishy> jog0: xml in ec2 is totally reasonable
21:22:43 <_0x44> jog0: XML support in this case is for the OpenStack API
21:22:43 <markmc> it's definitely a nice thing to have if we someone can step up to fix it
21:22:56 <vishy> ok we need to move on, I will ask for volunteers for help with xml
21:23:02 <jog0> _0x44: understood, I just wanted to put  it out there
21:23:12 <vishy> next items
21:23:12 <lzyeval> vishy: i'm in
21:23:15 <vishy> blueprint updates
21:23:20 <vishy> lzyeval: for xml support?
21:23:26 <lzyeval> yup
21:23:34 <vishy> lzyeval: awesome chat with me offline about what we need
21:23:38 <lzyeval> k
21:23:49 <vishy> #topic blueprint updates
21:24:03 <vishy> so there are a few blueprints that i think are important that may need help
21:24:06 <vishy> first is configdrive
21:24:37 <vishy> smoser is not here so lets see if he pops in after
21:24:40 <vishy> jog0 is here
21:24:46 <vishy> so next is host-aggregates
21:24:56 <vishy> really would like to get that one in, need help on reviews
21:24:59 <jog0> vishy: all parts are ready for code review.
21:25:12 <vishy> jog0: can you link them here please?
21:25:22 <vishy> jog0: so you don't need implementation help? just review help?
21:25:35 <jog0> https://review.openstack.org/#/q/status:open+topic:bp/general-host-aggregates,n,z
21:25:41 <vishy> jog0: thanks
21:26:07 <vishy> jog0: is there anything after part2?
21:26:09 <jog0> vishy: I did not cover moving availability zones to aggregates,  but everything else is ready.
21:26:18 <jog0> vishy: just documentation and tests
21:26:29 <jog0> vishy: which are also ready for review
21:26:54 <vishy> jog0: ok az's to aggregates would be awesome but the migration is hard. Do you think we could add support for az's to use either the existing tag or an aggregate?
21:27:40 <vishy> jog0: as in adding support for a certain metadata tag ('availability_zone': 'xxx') in addition to the one set in the service table?
21:27:47 <jog0> vishy:  well one question I had was: only compute nodes are part of an aggregate, but all services get an availability zone for now.
21:28:18 <vishy> jog0: only compute now is fine, the code will need to be in cinder also for volumes etc.
21:28:35 <jog0> vishy: supporting both is straightforward. I will post a patch later this afternoon
21:28:38 <vishy> jog0: we can do the migration and migrate compute_capabilities into az metadata in Grizzly
21:28:59 <vishy> s/az metadata/host aggregate metadata/
21:29:15 <vishy> ok next blueprint: instance-uuids
21:29:17 <jog0> vishy:  sounds good, so support both in Folsom and migrate after
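The fallback vishy and jog0 agree on (read a host's availability zone from host-aggregate metadata when present, otherwise from the service table) could be sketched like this; the data shapes are invented for illustration, not Nova's actual schema.

```python
def availability_zone(host, aggregates, services):
    """Prefer an aggregate's availability_zone metadata; fall back to the
    zone recorded in the service table entry for the host."""
    for agg in aggregates:
        if host in agg["hosts"] and "availability_zone" in agg["metadata"]:
            return agg["metadata"]["availability_zone"]
    return services[host]["availability_zone"]

aggregates = [{"hosts": ["compute1"],
               "metadata": {"availability_zone": "az-east"}}]
services = {"compute1": {"availability_zone": "nova"},
            "compute2": {"availability_zone": "nova"}}

print(availability_zone("compute1", aggregates, services))  # aggregate wins
print(availability_zone("compute2", aggregates, services))  # service-table fallback
```

Supporting both lookups at once is what lets Folsom ship without the hard migration, with the service-table path removed once everything has moved to aggregate metadata.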
21:29:25 <vishy> done, thanks mikal!
21:29:27 <mikal> :)
21:29:30 <vishy> jog0: right
21:29:35 <mikal> With some minor collateral damage
21:29:46 <mikal> My only remaining concern is test coverage is low for some of this code
21:29:55 <mikal> So we'll only know nothing broke as people test it more
21:29:58 <vishy> russellb: no-db-messaging? how goes?
21:30:11 <russellb> no-db-messaging is going well.  no-db-compute as a whole doesn't have a chance for folsom.
21:30:21 <russellb> no-db-messaging is almost done, depending on where we draw the line
21:30:27 <vishy> russellb: understood, already moved it
21:30:34 <vishy> russellb: need any help besides reviews?
21:30:37 <russellb> for instances in the compute rpc api, i've got 2 methods left, 1 of which i'm finishing up right now locally
21:30:46 <ewindisch> russellb: let me know if you need help there.
21:30:52 <russellb> ewindisch: k
21:31:08 <vishy> ewindisch: I delayed the signed messages bp as well, but anything you can get done is gravy :)
21:31:28 <vishy> ok so last one is the config-drive / metadata-v2 blueprint
21:31:29 <russellb> i think just reviews, i'll probably draw the line soon and declare success with "much-less-db-messaging" and leave the rest with no-db-compute next cycle
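[editor's note] The no-db-messaging work russellb describes amounts to passing the instance data inside the RPC message so the compute node no longer rehydrates it from the database. A minimal sketch of that shift, with a fake RPC client made up for the example (the class and method names here are illustrative, not Nova's real RPC layer):

```python
# Illustrative contrast between the old and no-db styles of a compute
# RPC cast. FakeRPC is a stand-in recorder, not a real transport.

class FakeRPC:
    """Minimal stand-in for an rpc client; records every cast."""
    def __init__(self):
        self.casts = []

    def cast(self, context, method, args):
        self.casts.append((method, args))

def old_style_cast(rpc, context, instance_uuid):
    # Old style: send only the uuid; the receiving compute node must
    # query the database to rehydrate the instance.
    rpc.cast(context, 'reboot_instance', {'instance_uuid': instance_uuid})

def no_db_cast(rpc, context, instance):
    # no-db-messaging style: the message carries the full instance dict,
    # so the receiver needs no database access.
    rpc.cast(context, 'reboot_instance', {'instance': dict(instance)})
```

The trade-off is larger messages in exchange for compute nodes that never touch the database, which is the stepping stone toward the full no-db-compute goal deferred to the next cycle.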
21:31:35 <vishy> smoser has stated he needs help
21:31:45 <ewindisch> vishy: awesome. So everyone else knows: I put forward a draft change today with the signed messages.
21:31:46 <vishy> can anyone grab that and make sure it gets in?
21:32:09 <russellb> ewindisch: i think we definitely need to look hard at how we can do that in a transport agnostic way
21:32:14 <vishy> russellb: you should look at that that draft btw, to see what you think about getting it in qpid and rabbit
21:32:20 <vishy> russellb: oh you already did :)
21:32:24 <ewindisch> russellb: right, which is why you're a reviewer on the draft :)
21:32:25 <russellb> yeah i will, read the comments at least
21:32:37 <russellb> cool
21:32:47 <vishy> so any volunteers for config-drive / metadata-v2?
21:32:51 <russellb> haven't read the code yet, but will soon (probably tomorrow)
21:33:01 <vishy> it is pretty close, so I think it could be done pretty quickly
21:33:08 <russellb> great
21:33:08 <mikal> vishy: whats the link for the bp?
21:33:35 <russellb> vishy: i'll help drive that one
21:33:50 <vishy> mikal, russellb: https://blueprints.launchpad.net/nova/+spec/config-drive-v2
21:34:02 <vishy> awesome
21:34:03 <russellb> i meant the trusted messaging one :-)
21:34:09 <vishy> russellb: oh good
21:34:20 <vishy> mikal: take a look at that and see if you can do it?
21:34:32 <vishy> I can use the other three volunteers for the api changes
21:34:46 <vishy> ok are there any other bps that people know about that seem important?
21:34:48 <mikal> vishy: sure, I'll poke smoser with a stick and see where he's up to
21:34:55 <vishy> we had two big code drops recently
21:35:12 * ewindisch looks at the metadata blueprint
21:35:15 <vishy> cells and bare metal provisioning
21:35:16 <pixelbeat> vishy, mikal : I'll help out with config drive
21:35:30 <russellb> pixelbeat: nice, thanks
21:35:30 <vishy> pixelbeat: awesome, thanks
21:35:32 <ewindisch> vishy: link to the metadata blueprint?
21:35:33 <mikal> pixelbeat: cool
21:35:44 <vishy> ewindisch: it is the same bp as config up there
21:35:45 <russellb> ewindisch: https://blueprints.launchpad.net/nova/+spec/config-drive-v2
21:36:11 <vishy> any opinions on whether we should get cells and baremetal in?
21:36:12 <ewindisch> oh, I see. You meant the config-drive metadata.
21:36:26 <markmc> vishy, does cells have a blueprint?
21:36:37 <russellb> or docs?  :-)
21:36:38 <vishy> ewindisch: the bp is to unify both metadata api and config drive
21:36:44 <vishy> markmc: it did at one point
21:37:12 <vishy> markmc: looks like it was lost at some point in history
21:37:23 <russellb> cells patch: https://review.openstack.org/#/c/10707/
21:37:29 <markmc> vishy, at a glance - it seems large and invasive for this point in the cycle, but I'm hoping to look closer
21:37:46 <markmc> vishy, the baremetal one at least looks isolated to the baremetal driver
21:37:49 <Daviey> vishy: smoser did some cloud-init work recently for config-drive, do you think that needs refreshing ?
21:37:55 <vishy> markmc: the design of both is to not really affect core code at all, but they may not succeed completely
21:38:00 <russellb> and baremetal is well documented
21:38:03 <markmc> vishy, ok
21:38:17 <maoy> vishy: the baremetal one looks good on paper
21:38:28 <vishy> Daviey: possibly, hopefully when the bp is finished, cloud-init will be able to setup networking from config-drive properly
21:38:38 <steveb_> is config-drive info immutable after launch? This could be a problem if floating IP changes at runtime
21:38:41 <Daviey> splendid
21:38:41 <russellb> they're pretty big to get in by folsom-3, unless people dedicate a significant amount of time to the review very quickly...
21:38:49 <vishy> russellb: I agree
21:39:02 <vishy> the advantage of cells is that it helps one of the largest contributors unfork
21:39:15 * markmc will try and grab some of the big reviews over the next week
21:39:18 <vishy> which is really nice but it may not matter to them if it happens post-release
21:39:28 <vishy> since they are doing CD
21:39:49 <russellb> baremetal would be a cool item on the release notes
21:39:49 <vishy> the other advantage of cells is some users are anxiously waiting on it
21:39:54 <vishy> like mercado-libre
21:39:55 <russellb> so would cells
21:40:09 <vishy> if either merges i think they should be considered experimental
21:40:16 <jaypipes> #endmeeting