20:00:34 <johnsom> #startmeeting Octavia
20:00:34 <minwang2> o/
20:00:34 <openstack> Meeting started Wed Mar  9 20:00:34 2016 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:37 <openstack> The meeting name has been set to 'octavia'
20:00:38 <sbalukoff> Howdy folks!
20:00:50 <johnsom> Hi everyone
20:00:58 <bharathm> Hi o/
20:01:00 <rm_work> o/
20:01:02 * mhayden stumbles in
20:01:05 <markvan> yo
20:01:22 <bank> hi
20:01:39 <neelashah> o/
20:01:43 <ajmiller> o/
20:01:44 <johnsom> #topic Announcements
20:02:14 <johnsom> We got at least six LBaaS related talks accepted for Austin!  Good job folks!
20:02:25 <sbalukoff> Yay!
20:02:29 <johnsom> I have included the links here:
20:02:31 <johnsom> #link https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda
20:03:06 <johnsom> Really pretty cool from our little corner of OpenStack
20:03:41 <johnsom> Any other announcements?
20:03:59 <johnsom> #topic Mitaka blueprints/rfes/m-3 bugs for neutron-lbaas and octavia
20:04:22 <johnsom> Ok, so Mitaka is winding to a close.  RC1 this week.
20:04:31 <sbalukoff> There are more bugfixes in the Octavia queue that need attention
20:04:36 <sbalukoff> #link https://etherpad.openstack.org/p/Mitaka_LBaaS_priority_reviews
20:04:41 <johnsom> We still have ~23 open bugs targeted for Mitaka
20:04:49 <johnsom> #link https://bugs.launchpad.net/octavia/+bugs?field.tag=target-mitaka
20:05:03 <dougwig> related note, this review modifies the governance of lbaas-dashboard to match octavia: https://review.openstack.org/#/c/290735/
20:05:22 <sbalukoff> Handy!
20:05:29 <johnsom> dougwig Yeah, let's talk about that, given the dashboard is for LBaaSv2.....
20:05:47 <rm_work> ok sbalukoff i will look today if i can
20:05:55 <rm_work> maybe laaaaater or maybe nowish
20:06:04 <johnsom> dougwig Can we talk about that during open discussion?
20:06:09 <rm_work> in HAProxy training with bedis :P
20:06:19 <dougwig> yep
20:06:30 <sbalukoff> rm_work: Sounds good, thanks!
20:06:54 <johnsom> There are a number of bugs still open for TrevorV
20:07:04 <johnsom> Do we know if those are going to get addressed?
20:07:28 <johnsom> All of which are deferred bugs from the merge
20:07:49 <TrevorV> o/
20:07:51 <TrevorV> Sorry I'm late
20:07:55 <blogan> what a coincidence!
20:07:55 <johnsom> Speaking of TrevorV
20:08:00 <sbalukoff> Speak of the devil!
20:08:01 <blogan> it's like someone told him...
20:08:04 <TrevorV> *poof* here I am
20:08:09 <sbalukoff> Haha
20:08:24 <johnsom> TrevorV I was just asking about the deferred bugs that are still open for Mitaka
20:08:42 <johnsom> Can we get those done today/tomorrow?
20:09:01 <fnaval> o/
20:09:09 <TrevorV> That should be possible, if we can do a little talk about how shared pools should work
20:09:22 <sbalukoff> TrevorV: Didn't we already have that talk?
20:09:33 <TrevorV> Did we decide?  (scattered in the brain ATM)
20:09:37 <sbalukoff> Also, I don't *think* those bugs are dependent on shared-pools support in one-call-create.
20:09:50 <TrevorV> I thought one of them was shared-pools specifically
20:09:55 <sbalukoff> Well, one of them is, yes.
20:09:56 <johnsom> I think they are all missing unit test issues
20:10:04 <johnsom> https://bugs.launchpad.net/octavia/+bugs?field.tag=target-mitaka
20:10:40 <sbalukoff> If you have time to fix the lack of shared-pools support in one-call-create, that would be great. But at this point I doubt you've got the time.
20:11:02 <johnsom> Other than that, there are a bunch of patches up that have +2, cores please review those so we can get them in
20:11:04 <blogan> sbalukoff: to make it in for Mitaka?
20:11:16 <sbalukoff> blogan: Yes.
20:11:16 <TrevorV> sbalukoff To recap, it's utilizing the name as a unique identifier, right?
20:11:29 <TrevorV> As well as adding id-verification
20:11:35 <sbalukoff> TrevorV: No... let's talk after this.
20:11:38 <TrevorV> Alright
20:11:49 <johnsom> The last open bug(s) is around admin_state_up not working.  bharathm and I are working on that today.
20:12:47 <johnsom> Also, FYI, the tracking page for FFE and postmortem is here:
20:12:50 <johnsom> #link https://review.openstack.org/#/c/286413/5/specs/mitaka/postmortem/postmortem.rst
20:12:58 <johnsom> I think we have updated all of our sections.
20:13:07 <johnsom> dougwig is there more we need to do there?
20:13:18 <dougwig> i don't think so.
20:13:36 <johnsom> Excellent
20:13:49 <johnsom> #topic Decide on Mitaka octavia release number
20:14:03 <blogan> 42
20:14:09 <johnsom> With our impending Mitaka release we need to decide on our release number.
20:14:12 <neelashah1> dougwig if the dashboard will release with octavia, does it need to be added to the FFE list?
20:14:21 * johnsom slaps blogan's wrist.  (Again...)
20:14:25 <dougwig> neelashah1: no.
20:14:32 <TrevorV> johnsom actually I don't see the shared-pools bug we talked about.
20:14:44 <rm_work> 9001?
20:14:52 <sbalukoff> Haha
20:14:53 <johnsom> TrevorV So it's not targeted for Mitaka and there is time on that one
20:14:57 <TrevorV> Oh okay
20:14:57 <blogan> what are the release number options?
20:14:58 <TrevorV> got it
20:15:03 <johnsom> We are currently at 0.5
20:15:12 <johnsom> Do we think we are 1.0 worthy?
20:15:20 <sbalukoff> I'd put it at 0.9.
20:15:21 <blogan> hmmm
20:15:22 <johnsom> Or still 0.75-ish?
20:15:23 <fnaval> nope
20:15:23 <TrevorV> Do we have scaling?
20:15:28 <TrevorV> I thought scaling was 1.0
20:15:34 <rm_work> 0.999a-1
20:15:36 <dougwig> our earlier 1.0 mvp included some kind of fault tolerance.
20:15:41 <sbalukoff> Yeah... maybe closer to 0.75
20:15:43 <TrevorV> We have failover?
20:15:53 <sbalukoff> TrevorV: We have failover.
20:15:53 <johnsom> Well, on my roadmap active/active was after 1.0
20:15:53 <blogan> api is missing features, i'd not be comfortable with it being 1.0 bc of that
20:15:58 <dougwig> if active/standby and ha controllers are good to go...
20:16:03 <rm_work> yeah < 1
20:16:10 <johnsom> 1.0 was active/standby
20:16:12 <rm_work> maybe .9753
20:16:14 <johnsom> ish
20:16:21 <rm_work> or are we feeling more 0.8-ish
20:16:28 * rm_work stops
20:16:34 <blogan> number shedding?
20:16:34 * rm_work is too caffeinated
20:16:40 <dougwig> good heavens, this is the most hippyish release numbering i've ever seen.
20:16:46 <sbalukoff> HA controllers are not... particularly ready to go. You can do it as long as only one runs at a time. But operators are left to "figure that out" on their own right now, without documentation.
20:16:47 <TrevorV> bike numbering?
20:17:08 <Apsu> Just don't HA your HA and you're set.
20:17:11 <rm_work> I like 0.75
20:17:14 <Apsu> Yo dawg
20:17:17 <johnsom> HA controllers as a post 1.0 I think as well.  Not sure
20:17:18 <sbalukoff> Haha
20:17:22 <rm_work> next release can be 0.875
20:17:32 <TrevorV> rm_work you didn't stop****
20:17:37 <sbalukoff> Man... have to dig out those old roadmaps we put together a loooong time ago.
20:17:41 <sbalukoff> Should probably get those updated.
20:17:42 <dougwig> y'all are gonna vote, i can feel it.
20:17:45 <blogan> .6
20:17:47 <TrevorV> sbalukoff they're still in tree I think :P
20:17:50 <blogan> just go one up and be sane
20:17:53 <rm_work> then then 0.9375
20:17:58 <TrevorV> I think 0.5.9
20:18:00 <fnaval> just use constants, e, c
20:18:01 <sbalukoff> blogan: +1
20:18:08 <rm_work> we can just approach 1
20:18:09 <johnsom> Ok, so I'm hearing 1.0 is out, so I vote for 0.8
20:18:20 <rm_work> yeah i think we're more 0.8 than 0.6
20:18:22 * blogan chokes dougwig for being prophetic
20:18:25 <sbalukoff> Haha
20:18:37 <sbalukoff> I'm fine with 0.8
20:18:39 <TrevorV> I'm okay with 0.8, all jokes aside
20:18:44 <blogan> sure .8
20:18:49 <sbalukoff> Done!
20:18:55 <johnsom> Done
20:18:58 <sbalukoff> No vote necessary
20:19:03 <rm_work> ... that was a vote
20:19:03 <blogan> it fits the requirement of .5 < x < 1.0
20:19:04 * sbalukoff sticks out his tongue at dougwig
20:19:21 <sbalukoff> That was a *discussion*
20:19:25 <sbalukoff> We didn't use the voting system.
20:19:29 <johnsom> #topic Open Discussion
20:19:36 <johnsom> Cores please review patches!
20:19:41 <sbalukoff> Yes, please!
20:19:43 * johnsom nags some more
20:19:44 <fnaval> please review this later today: https://review.openstack.org/#/c/172199
20:19:52 <sbalukoff> I'll jump into that for the patches that aren't mine later today, too. :D
20:19:53 * dougwig head is spinning.
20:19:59 <fnaval> i'm going to make 1 change to make it work with the shared pools stuff
20:20:03 <fnaval> then it should be ready
20:20:10 <blogan> I'd like for Newton to start filling in the gaps of the Octavia API to get parity with neutron-lbaas
20:20:23 <blogan> are we still as a group on board with that?
20:20:24 <sbalukoff> fnaval: Ping me when it's ready.
20:20:29 <fnaval> sbalukoff: sure thanks!
20:20:31 <madhu_ak> dougwig, can you +1 on project-config patch: https://review.openstack.org/#/c/284875/
20:20:35 <rm_work> what is newton
20:20:40 <sbalukoff> blogan: Yep.
20:20:41 <blogan> N release
20:20:41 <sbalukoff> So!
20:20:42 <rm_work> did i miss a project starting? :P
20:20:48 <dougwig> madhu_ak: i can take a look at it, i can't promise a particular result.  :)
20:20:48 <rm_work> oh we named that already, right
20:20:56 <rm_work> forgot they did two at once this time
20:21:03 <johnsom> dougwig The topic of dashboard under octavia project.
20:21:06 <sbalukoff> Next week, or whenever the RC is cut and newton is open: We need to have a design discussion around features targeted for Newton.
20:21:20 <sbalukoff> We could really use feedback on this:
20:21:21 <sbalukoff> https://review.openstack.org/#/c/234639/ Add spec for active-active
20:21:23 <TrevorV> I have a question to gauge the group with
20:21:25 <johnsom> That seems odd.  Just temp?
20:21:35 <TrevorV> So I'm supposed to write up docs for the single create
20:21:35 <sbalukoff> Because I know that IBM will be working hard to make that happen in Newton.
20:21:45 <dougwig> johnsom: really, that's a matter of me considering the "lbaas project" to be octavia, neutron-lbaas, and neutron-lbaas-dashboard.  and if we're kicking -dashboard to release independent, let's keep it close to octavia.  IMO.
20:22:05 <sbalukoff> Especially given I'm apparently signed up for a talk to showcase a proof-of-concept of the same at the Austin summit. XD
20:22:09 <johnsom> sbalukoff Let's focus on getting the Mitaka patches reviewed as we already put active/active out of scope for Mitaka
20:22:15 <dougwig> repeat of review link on that governance change:
20:22:15 <dougwig> https://review.openstack.org/#/c/290735/
20:22:23 <TrevorV> Is anyone against me detailing the request in a paragraph and then providing a full example of request/response?
20:22:35 <johnsom> dougwig Ok
20:22:36 <sbalukoff> dougwig: +1
20:22:37 <minwang2> I am still working on the launchpad ticket: File creation susceptible to TOCTOU type attack (https://bugs.launchpad.net/octavia/+bug/1548552)
20:22:37 <openstack> Launchpad bug 1548552 in octavia "File creation susceptible to TOCTOU type attack" [High,In progress] - Assigned to min wang (swiftwangster)
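A note on the TOCTOU class of bug being discussed: the usual remedy is to create the file atomically rather than checking for existence and then opening it. A minimal sketch of that pattern in Python (the general technique only, not necessarily the fix minwang2 has in progress):

    import os

    def create_private_file(path, contents):
        # O_CREAT | O_EXCL makes creation atomic: the call fails if the file
        # already exists, so there is no check-then-open race window, and the
        # 0o600 mode avoids a window where the file is readable by others
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
        with os.fdopen(fd, 'w') as f:
            f.write(contents)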
20:22:38 <TrevorV> Rather than some massively verbose table of all the elements and such?
20:22:55 <nmagnezi> hello guys :)
20:22:58 <sbalukoff> Oh!
20:23:08 <sbalukoff> For all things docs:
20:23:13 <sbalukoff> #link https://etherpad.openstack.org/p/lbaas-octavia-docs-needed
20:23:33 <sbalukoff> I have a call with IBM tech writers tomorrow afternoon. I'm hoping to get help fleshing out our docs.
20:23:45 <sbalukoff> That's the summary of what I would like to see eventually.
20:23:47 <sbalukoff> Please update that!
20:23:54 <johnsom> Yes, that etherpad for docs is great
20:23:56 <sbalukoff> TrevorV: Please add a note about single-create to that.
20:24:23 <nmagnezi> a question about amphora-agent: i noticed that the agent configures interfaces inside the service vm when it starts. my question is: why? wouldn't it be preferable for the diskimage-create script to pre-configure the interfaces (which are dhcp anyway) when it cooks the image?
20:24:38 <madhu_ak> I created a launchpad ticket for an lb whose status is stuck in PENDING_UPDATE for about 50-55 mins after creating a listener with TLS and no SNI. https://bugs.launchpad.net/octavia/+bug/1555316
20:24:38 <openstack> Launchpad bug 1555316 in octavia "loadbalancer status is stuck in PENDING_UPDATE for about 50-55 mins after creating listener with TLS no SNI" [Undecided,New]
20:24:48 <johnsom> nmagnezi No, we hot-plug interfaces
20:25:05 <nmagnezi> johnsom, care to elaborate?
20:25:07 <fnaval> madhu_ak: i'll see if I can reproduce
20:25:19 <TrevorV> So no one cares about how I'll write that up?  If I follow the current structure of requests, the table will be insane.
20:25:34 <blogan> nmagnezi: we have the option to have a standby pool of prebuilt amps, which are active but not configured; we won't know what networks they need to be hot-plugged into upfront
20:25:42 <johnsom> nmagnezi We hot-plug interfaces into the amphora VM, so those interfaces are not necessarily present at boot time.
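Context for the hot-plug answer: ports are attached to a running amphora via Nova, which is why the NICs cannot be baked into the image at build time. A rough sketch of the idea using novaclient (illustrative only, not Octavia's actual plug path; the session and IDs are assumed to exist already):

    from novaclient import client

    # 'keystone_session' is assumed to be an authenticated keystoneauth1 session
    nova = client.Client('2', session=keystone_session)

    # attach a new interface on the member network to the running amphora VM;
    # the guest sees a NIC appear that did not exist at boot time
    nova.servers.interface_attach(amphora_server_id, port_id=None,
                                  net_id=member_network_id, fixed_ips=None)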
20:25:47 <sbalukoff> TrevorV: You will need to show a fully-fleshed-out example in your docs somewhere. :/
20:26:05 <TrevorV> sbalukoff that's what I'm talking about, I WANT the example, I DON'T want the table
20:26:12 <blogan> nmagnezi: would work great for containers though! since they can't hot plug
20:26:19 <blogan> whenever we get support for that in
20:26:26 <sbalukoff> TrevorV: Then maybe I don't understand what you mean by 'table'
20:26:44 <johnsom> TrevorV yeah, I am having a hard time visualizing as well
20:27:17 <TrevorV> https://raw.githubusercontent.com/openstack/octavia/master/doc/source/api/octaviaapi.rst
20:27:20 <nmagnezi> blogan, johnsom, I understand the reasoning now. thanks! the reason i'm asking is that it configures the NICs ubuntu-style, meaning it sets an invalid configuration in CentOS/Fedora/RHEL based images
20:27:33 <TrevorV> If you look in there, the sections that are "tables"
20:27:46 <TrevorV> All the objects of the LB are defined in tables, and then an example is provided
20:28:02 <TrevorV> I'm talking about just saying "single create uses all the things above in one call, like so:" more or less
20:28:06 <TrevorV> You know, more officially
20:28:14 <TrevorV> But only including an example, and a reference to the other sections
20:28:23 <nmagnezi> blogan, johnsom, the bug https://bugs.launchpad.net/octavia/+bug/1548070 . I hope to submit a fix soon, but in general the agent should be aware of the OS it is running on.
20:28:23 <openstack> Launchpad bug 1548070 in octavia "Amphora agent fails to start in a Centos based amphora" [Medium,New]
20:28:24 <johnsom> nmagnezi Yes.  That could be fixed.  If you set the interfaces file name option, it does work with Red Hat-style systems, as it uses the interfaces file and not the .d directory
20:28:28 <sbalukoff> TrevorV: Oh! Ok, so... something does need to be added to the Octavia API reference about this. But I don't think you need a massive table under 'create a load balancer' which repeats all the stuff in later sections.
20:28:51 <blogan> nmagnezi: indeed it should, we just got it working for ubuntu first
20:28:54 <TrevorV> Right, if everyone can agree on that, then I'll have a review up tonight/tomorrow
20:29:08 <blogan> i never tested it out with any redhat distros
20:29:13 <johnsom> TrevorV I think you can just reference the other sections.
20:29:17 <TrevorV> Perfect
20:29:35 <neelashah1> johnsom: are amphora images hosted on openstack infra? if so, can you point me to where?
20:29:35 <nmagnezi> johnsom, the filename is one thing, but it's configurable so it's ok. what's missing is the jinja2 template for a valid configuration, and also the logic of when to use which template
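On nmagnezi's point, the missing piece is per-distro templates plus selection logic in the agent. A minimal sketch of what that could look like (template names and the 'interface' variable are hypothetical, not the actual fix):

    import jinja2

    def distro_family():
        # /etc/os-release is present on recent Ubuntu, CentOS, Fedora and RHEL
        with open('/etc/os-release') as f:
            fields = dict(line.rstrip().split('=', 1)
                          for line in f if '=' in line)
        ids = fields.get('ID', '') + ' ' + fields.get('ID_LIKE', '')
        if any(x in ids for x in ('rhel', 'fedora', 'centos')):
            return 'rhel'
        return 'debian'

    env = jinja2.Environment(loader=jinja2.FileSystemLoader('templates'))
    template = {'debian': 'interfaces_ubuntu.j2',   # hypothetical names
                'rhel': 'ifcfg_rhel.j2'}[distro_family()]
    rendered = env.get_template(template).render(interface='eth1')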
20:29:39 <TrevorV> Then I'll work something into it all, and when its up for review you guys can provide some feedback, as per usual
20:29:42 <TrevorV> as is tradition
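For reference while that write-up is pending, the single-call-create request TrevorV describes nests the child objects under the load balancer, roughly like this (field names are illustrative, not the exact schema; the real attributes live in the API reference sections he mentions):

    single_create_request = {
        "load_balancer": {
            "name": "lb1",
            "vip": {"subnet_id": "<subnet-uuid>"},
            "listeners": [{
                "protocol": "HTTP",
                "protocol_port": 80,
                "default_pool": {
                    "protocol": "HTTP",
                    "lb_algorithm": "ROUND_ROBIN",
                    "members": [
                        {"ip_address": "10.0.0.5", "protocol_port": 80},
                    ],
                },
            }],
        },
    }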
20:29:46 <sbalukoff> Ok.
20:30:24 <johnsom> neelashah1 No, they are built by the devstack plugin script, or by the user with the included diskimage-create.sh script
20:30:28 <sbalukoff> johnsom: Just to be clear on everyone's focus for the next week:  We should be working on bug fixes or bug fix reviews, yes?
20:30:40 <neelashah1> markvan ^
20:30:44 <sbalukoff> johnsom: Newton stuff and stuff not necessary for the RC1 deadline should happen next week, yes?
20:31:06 <nmagnezi> I have a question about what sbalukoff pointed out above. re: controller HA. is that not expected to work at all for Mitaka? for all the services that make up the controller: worker, health-manager, housekeeping
20:31:32 <johnsom> sbalukoff Yes, the focus this week is bug fixes and bug reviews.  We really need to get the fixes in for Mitaka
20:31:49 <TrevorV> I have a top priority internally for neutron lbaas single-create
20:31:54 <sbalukoff> nmagnezi: It works so long as you have one instance of the controller-worker and health manager running at a time. We have not tested it with multiple instances, and it's unlikely to work, given what I know about the code.
20:32:08 <sbalukoff> So you can achieve HA if you code a way to detect a failure of one of these processes and deal with it yourself.
20:32:14 <TrevorV> So I can't say I share that dedication for the bug fixes, but the two bugs I have for mitaka I'll get in tonight/tomorrow for sure
20:32:17 <sbalukoff> There are no pre-packaged scripts for this right now, though.
20:32:21 <johnsom> nmagnezi You can have multiple instances of the controller processes
20:32:33 <johnsom> sbalukoff You are wrong on single instance
20:32:40 <minwang2> I will try to wrap up the bug that i am working on ASAP
20:32:45 <sbalukoff> johnsom: I am? You guys have tested this?
20:32:53 <bharathm> sbalukoff: we can have multiple workers
20:32:56 <rm_work> TrevorV: i thought single-create for neutron-lbaas got -2'd for Mitaka, or is that old news and now it is RFE'd in or something?
20:33:01 <nmagnezi> johnsom, same node? multiple nodes? both cases?
20:33:09 <bharathm> We tested this on a multi-node env and it works
20:33:12 <TrevorV> rm_work doesn't matter, reach needs it at Rax
20:33:12 <rm_work> or was that something else entirely
20:33:14 <johnsom> nmagnezi It has been very lightly tested and will not fail over requests that are in process yet
20:33:21 <sbalukoff> Huh! Ok.
20:33:23 <rm_work> ah :/
20:33:26 <TrevorV> Yeah
20:33:28 <TrevorV> my same face
20:34:14 <nmagnezi> johnsom, got it. and by controller, which octavia processes do you mean?
20:34:40 <johnsom> nmagnezi All four and both cases
20:34:57 <johnsom> nmagnezi but same node might get strange with port issues
20:34:57 <nmagnezi> johnsom, got it, thank you for the answers :)
20:35:22 <nmagnezi> johnsom, makes sense
20:35:44 <johnsom> nmagnezi I highly recommend doing extra testing if you plan to deploy this way, as we have only lightly tested it.
20:35:55 <blogan> i haven't tested it out either, but seems like it would work
20:35:57 <sbalukoff> johnsom: When did y'all test it?
20:36:03 <johnsom> It's not a feature I am highlighting as "we are controller HA"
20:36:17 <sbalukoff> I'm willing to bet there are race conditions... but I'm not sure I could reliably reproduce them.
20:36:26 <sbalukoff> After all, events that change things are relatively rare.
20:36:33 <blogan> you're a race condition!
20:36:37 <nmagnezi> johnsom, is this expected to get test coverage (tempest?) at some point? is it in the works?
20:36:43 <johnsom> As those services were being written.  Yes, there likely are bugs
20:37:14 <johnsom> Yes, at some point.  We also plan to have recovery for in progress requests in a future release
20:37:31 <sbalukoff> (ie. job board)
20:37:47 <johnsom> Ok, any other Mitaka concerns?  I think cascade delete is going to be on the out list.
20:37:54 <johnsom> dougwig Any comments on that?
20:37:56 <blogan> sounds like it
20:38:11 <fnaval> nmagnezi: no tempest tests for it as far as I know; please create a blueprint/bug for it though
20:38:13 <johnsom> Really unfortunate given the series of events
20:38:21 <dougwig> johnsom: sorry, wasn't paying attention.  scrolling
20:38:22 <blogan> and if that's the case... should we just drop the current proposed way and put it on the lbtree resource, since that is being done for single create?
20:38:42 <nmagnezi> fnaval, will do
20:38:44 * sbalukoff gets out his paint again.
20:38:50 <fnaval> nmagnezi: thank you so much
20:38:55 <johnsom> blogan If that is the case, we can continue painting after Mitaka release
20:39:08 * blogan paints paint
20:39:52 <johnsom> dougwig Cascade delete mess.  Not landing I take it?  I hope it's not just because of the client release timing.
20:40:15 <dougwig> johnsom: oh, i hadn't seen the defer. gimme a few, not sure yet.
20:40:59 <blogan> dougwig: https://review.openstack.org/#/c/287593/
20:41:09 <blogan> last comment
20:41:24 <dougwig> are both the client and server ready to go?
20:41:37 <johnsom> Yes
20:42:01 <johnsom> I think the issue was around neutron client being cut early
20:42:16 <johnsom> We argued about the API path too long
20:42:19 <sbalukoff> Why was it cut early?
20:42:42 <sbalukoff> So that somebody could create the CLI command reference using their non-documented procedure? :P
20:43:38 <dougwig> hmm.... OR, we could move the client commands into an in-repo extension and do an end-run around the cat's skin.
20:43:39 <johnsom> Don't get me started on the docs issues.  No answers that actually work from those folks.  Just deleted docs.
20:43:50 <sbalukoff> (Not actually looking for an answer to that, I guess, just pointing out what a mess that all is.)
20:44:32 <johnsom> doug-fish for the dashboard cascade delete are you using the neutron client or just going to the API?
20:45:04 <doug-fish> neutron client
20:45:11 <sbalukoff> Damn.
20:45:16 <dougwig> posted an alternative in the review.
20:45:25 <dougwig> we'd need to shuffle some files around.
20:45:49 <johnsom> Ok, so, seems unlikely
20:46:46 <doug-fish> Going out the door without cascading delete from the UI isn't my favorite, but it's not the end of the world.
20:47:31 <doug-fish> this could be updated during the stable release cycle later on right?
20:47:40 <blogan> doug-fish: so without cascading delete users will still be able to delete things?
20:47:55 <blogan> doug-fish: you mean as a backport?
20:48:01 <doug-fish> blogan: yes
20:48:21 <blogan> doug-fish: likely not, features arent backported
20:48:35 <doug-fish> blogan: yes, as a backport - also yes to users can delete things part by part
20:48:35 <johnsom> Right, only bug fixes get backported
20:48:40 <blogan> doug-fish: sorry i asked back to back yes/no questions lol
20:49:22 <doug-fish> I have to run but will be back later
20:49:25 <blogan> ah well, if users can delete things part by part, i'm less inclined to jump through hoops just to get the cascade delete in; it'll go in almost immediately when N opens up
20:49:41 <blogan> not assuming bike shedding
20:49:44 <johnsom> Yeah, I agree, pushing rope at this point
20:49:58 <Apsu> phrasing
20:49:59 <rm_work> lol
20:50:23 <dougwig> if the client piece got done today, i'd be willing to push hard for M. if that's going to take awhile, then it might indeed be N.
20:50:42 <rm_work> well, i think it IS done?
20:50:49 <rm_work> but not with the alternate method?
20:50:53 <dougwig> as an extension, not in python-neutronclient
20:51:00 <dougwig> no way will i be able to budge the client proper.
20:51:03 <johnsom> dougwig Client is done as-is, your proposal I don't know how that works
20:51:39 <blogan> dougwig: would that extension have to live on as an extension?
20:51:46 <dougwig> johnsom: you put the client code in neutron-lbaas, and add a stevedore extension that python-neutronclient finds automagically.  http://docs.openstack.org/developer/python-neutronclient/devref/client_command_extensions.html
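Sketched out, the mechanism from that devref looks roughly like this (entry-point, module, and class names below are hypothetical, modeled on the linked example):

    # setup.cfg in the repo that ships the command:
    #
    #     [entry_points]
    #     neutronclient.extension =
    #         lbaas_graph = neutron_lbaas_client.graph
    #
    # neutron_lbaas_client/graph.py:
    from neutronclient.common import extension

    class LoadBalancerGraph(extension.NeutronClientExtension):
        resource = 'graph'
        resource_plural = 'graphs'
        object_path = '/lbaas/%s' % resource_plural
        resource_path = '/lbaas/%s/%%s' % resource_plural
        versions = ['2.0']

    class LoadBalancerGraphDelete(extension.ClientExtensionDelete,
                                  LoadBalancerGraph):
        """Cascade-delete a load balancer and everything under it."""
        shell_command = 'lbaas-loadbalancer-graph-delete'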
20:51:49 <rm_work> ah k
20:52:02 <dougwig> blogan: if you ask me, all the lbaas client commands should be in our repo as an extension.
20:52:06 <rm_work> huh, that doesn't seem too bad
20:52:11 <rm_work> ^^ actually I agree with that
20:52:17 <johnsom> So you have to install neutron-lbaas on the hosts that run the client?  Not sure that would fly
20:52:27 <rm_work> ah THAT is interesting
20:52:39 <dougwig> johnsom: you could always put it in yet another separate repo, i guess. :)
20:52:50 <rm_work> T_T nevermind i take it back
20:52:55 <johnsom> Yep, that would sadly be the way to go
20:53:10 <blogan> can we create an lbaas stadium?
20:53:18 <rm_work> :P
20:53:23 <johnsom> blogan I like the way you think!  Grin
20:53:25 <dougwig> johnsom: right, because installing n-l pulls in n, which pulls in ovs and crap.  ugh.
20:53:38 <johnsom> Right
20:53:45 <blogan> we have to accept our fate
20:53:55 <dougwig> does -dashboard just need the api?
20:54:03 <blogan> dougwig: it needs the client
20:54:06 <johnsom> I'm sure openstackclient will solve all of our problems....
20:54:13 <sbalukoff> Haha
20:54:18 <blogan> johnsom: yes...of course
20:54:20 <dougwig> really gross, but the client extension could go into -dashboard.  oh god, i can't believe i typed that.
20:54:32 <sbalukoff> Eew.
20:54:33 <blogan> okay with that meeting is almost done
20:54:41 <dougwig> hitler!
20:54:42 <blogan> dougwig has lost it
20:54:45 * johnsom face palms
20:54:47 <dougwig> godwin has been invoked
20:54:49 <blogan> and godwin
20:54:49 <sbalukoff> Haha
20:55:03 <johnsom> Ok, any other topics in the next six minutes?
20:55:39 <ajmiller> Quick sanity check regarding dashboard.
20:55:51 <Apsu> Dashboard sanity checks always fail
20:56:06 <johnsom> ajmiller yes?
20:56:16 <ajmiller> LOL.  I just want to have a common understanding that it is OK to merge patches.  Or not?
20:56:55 <johnsom> ajmiller Yes.  They are proposing an independent release cadence, so please continue reviews.
20:57:02 <johnsom> dougwig Do you agree?
20:57:42 <neelashah1> johnsom dougwig - it's unclear to me who will be doing this independent release for the dashboard… just so we can stay in sync and get ready...
20:58:16 <neelashah1> maybe we can continue in the lbaas channel
20:58:17 <johnsom> neelashah1 I can be your release point person.
20:58:29 <neelashah1> ok, great…thanks, johnsom!
20:58:59 <johnsom> Ok, thanks folks!  Please fix bugs and do reviews!!!!
20:59:04 <johnsom> #endmeeting