19:59:53 <xgerman> #startmeeting Octavia
19:59:54 <openstack> Meeting started Wed Aug  3 19:59:53 2016 UTC and is due to finish in 60 minutes.  The chair is xgerman. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:59:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:59:57 <openstack> The meeting name has been set to 'octavia'
20:00:07 <xgerman> Hi all
20:00:20 <xgerman> johnsom asked me to run the meeting today :-)
20:00:43 <sbalukoff> Howdy, howdy!
20:00:43 <diltram> hey
20:00:50 <chenli> hi
20:00:50 <xgerman> #chair sbalukoff
20:00:51 <openstack> Current chairs: sbalukoff xgerman
20:00:52 <evgenyf> 0/
20:00:53 <Frito> hola
20:01:00 <sbalukoff> Uh-oh!
20:01:05 <sbalukoff> I can feeeeel the powah!
20:01:10 <xgerman> lol
20:01:24 <xgerman> #topic Announcements
20:01:36 <xgerman> Octavia / Neutron-LBaaS mid-cycle - https://etherpad.openstack.org/p/lbaas-octavia-newton-midcycle - https://wiki.openstack.org/wiki/Sprints/LbaasNewtonSprint
20:01:51 <xgerman> Midcycle - yada, yada
20:01:59 <sbalukoff> Please RSVP for it if you haven't already.
20:02:13 <sbalukoff> Or yes, "Midcycle - yada, yada"
20:02:24 <TrevorV> o/  I'm late
20:02:28 <xgerman> yep, be sure not to be vegetarian :-)
20:02:35 <bana_k> hi
20:03:02 <sbalukoff> TrevorV: You know, if you don't point it out explicitly, people probably wouldn't notice. ;)
20:03:08 <xgerman> +1
20:03:12 <TrevorV> sbalukoff, I have a natural -v
20:03:17 <sbalukoff> Haha
20:03:24 <xgerman> also I am pretty sure easiness are fast approaching...
20:03:44 <sbalukoff> easiness?
20:03:51 * TrevorV xgerman lost me
20:04:17 <sbalukoff> Me too.
20:04:23 <xgerman> sorry my keyboard hates a few characters
20:04:25 <xgerman> deadlines
20:04:28 <sbalukoff> Oh!
20:04:43 <sbalukoff> Right.
20:04:49 <sbalukoff> Yes, get your code done.
20:04:56 <xgerman> N-3 8/29
20:05:07 <xgerman> #link http://releases.openstack.org/newton/schedule.html
20:05:12 <sbalukoff> Right after the mid-cycle.
20:05:15 <xgerman> that seems to be feature freeze as well
20:05:45 <sbalukoff> Yep. So if it doesn't get in by the end of the mid-cycle, it probably doesn't get in. Unless you're really good at bribing people to work through the weekend.
20:06:03 <xgerman> #topic Brief progress reports / bugs needing review
20:06:35 <evgenyf> #link https://review.openstack.org/#/c/326018
20:06:40 <sbalukoff> I apologize that I had almost no time for upstream work this last week. The good news is that the internal stuff I was working on is basically done, so now I have a lot of time for upstream reviews and whatnot.
20:06:44 * TrevorV regrets to inform that he has still been working almost entirely downstream
20:06:59 <sbalukoff> So, I'll be spending a number of days working on that and hopefully getting people un-stuck.
20:07:18 <diltram> https://review.openstack.org/345003, https://review.openstack.org/350274 - my two patches
20:07:22 * xgerman has been re-orged again and is busy with containers
20:07:25 <evgenyf> #link https://review.openstack.org/#/c/323188
20:07:50 <diltram> #link https://review.openstack.org/345003
20:07:51 <sbalukoff> diltram: Can you link those on separate lines (the way evgenyf just did)?
20:07:56 <sbalukoff> Haha
20:07:57 <sbalukoff> Ok.
20:08:06 <diltram> #link https://review.openstack.org/350274
20:08:08 <diltram> done :)
20:08:09 <xgerman> yeah, I am more like Review-as-a-Service, so holler if you need me to look at sth
20:08:21 <sbalukoff> xgerman: Ok, cool.
20:08:33 <sbalukoff> I'm glad you're able to still do n-lbaas / octavia reviews at all. :)
20:08:51 <xgerman> in my copious freetime ;-)
20:09:02 <chenli> do not have updates this week, still the same as last week...
20:09:04 <sbalukoff> Parts of my team are also stuck in container hell right now. Luckily, my "evade" rolls have been excellent the last couple weeks.
20:09:06 <chenli> https://review.openstack.org/#/c/342695/
20:09:44 <sbalukoff> chenli: Ok, and I apologize I didn't get reviews of your code done last week. Unless I get hit by a bus, it should happen this week.
20:09:50 <chenli> And have question about this one: https://review.openstack.org/#/c/347715/
20:10:28 <perelman> hi
20:10:40 <sbalukoff> chenli: Can you ask your question during open discussion?
20:10:48 <xgerman> +1
20:10:59 <chenli> sbalukoff: sure
20:12:20 <sbalukoff> Any other progress reports?
20:12:56 <xgerman> #topic Patches requiring discussion/consensus
20:13:10 <xgerman> https://review.openstack.org/#/c/325624/ (Update member status to return real statuses) [rm_work]
20:13:26 <xgerman> #link https://review.openstack.org/#/c/325624/
20:13:39 <xgerman> rm_work?
20:14:37 <xgerman> anybody?
20:14:43 <sbalukoff> Hmmm....
20:14:50 <xgerman> otherwise I will defer until johnsom is back ;-)
20:15:06 <xgerman> though it didn’t look objectionable to me
20:15:19 <sbalukoff> So, really, rm_work needs to offer input on this.
20:15:21 <sbalukoff> :/
20:15:28 <TrevorV> Not sure where rm_work is, but maybe defer until Michael is around.
20:15:32 <sbalukoff> Yep.
20:15:34 <sbalukoff> +1
20:15:44 <xgerman> ok, will defer the second rm_work topic as well
20:16:14 <xgerman> #link https://review.openstack.org/#/c/255963/
20:16:22 <xgerman> unless anybody has info...
20:16:46 <sbalukoff> Oh!
20:16:50 <xgerman> I see sbalukoff among the reviewers...
20:16:53 <sbalukoff> I think that one was my objection.
20:17:12 <sbalukoff> So, the way this patch works, there's no kind of restrictions on what the tenant can specify here.
20:17:47 <diltram> and you're feeling that tenant will destroy something?
20:18:02 <sbalukoff> And there's no "standard way" of doing things defined.
20:18:20 <sbalukoff> So, for example, if you want X-Forwarded-For inserted...
20:18:21 <xgerman> I am always a bit worried about free-for-alls as well
20:18:32 <sbalukoff> One vendor could tell you to set the parameter to X-Forwarded-For: True
20:18:40 <sbalukoff> And other to:  X-Forwarded-For: client_ip
20:18:43 <sbalukoff> Or some such nonsense.
20:19:05 <sbalukoff> This also opens up this feature to being abused by vendors to activate vendor-specific functionality through the neutron-lbaas API.
20:19:17 <sbalukoff> Which breaks cloud inter-operability.
20:19:31 <TrevorV> I recall us having this conversation a long time ago, and I thought we were going to add some defaults in with basic validation around their structure/args, and then add others if they became "normal"
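(A rough sketch of the "defaults with basic validation" idea TrevorV describes: only a fixed, vendor-neutral set of insertable headers is accepted, each with a defined value type, and anything else is rejected. The header names and validator below are illustrative assumptions, not the actual neutron-lbaas code.)

    # Hypothetical whitelist of headers a tenant may ask to have inserted,
    # with the value type each one accepts.
    SUPPORTED_INSERT_HEADERS = {
        'X-Forwarded-For': bool,    # insert client IP: yes/no
        'X-Forwarded-Port': bool,   # insert frontend port: yes/no
    }


    def validate_insert_headers(headers):
        """Reject anything outside the supported, vendor-neutral set."""
        for name, value in headers.items():
            expected = SUPPORTED_INSERT_HEADERS.get(name)
            if expected is None:
                raise ValueError("Unsupported insert header: %s" % name)
            if not isinstance(value, expected):
                raise ValueError("Header %s expects a %s value"
                                 % (name, expected.__name__))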
20:19:59 <sbalukoff> I think we did discuss this at the January mid-cycle. But I forget what our conclusions were.
20:20:08 <diltram> hahaha
20:20:18 <TrevorV> Well I'm against arbitrary headers...
20:20:21 <sbalukoff> In any case, I feel bad poo-pooing the idea in this meeting if there's not anyone here to defend the way it's implemented in this patch set.
20:20:46 <TrevorV> Yeah, push it off for when rm_work  and johnsom are here
20:20:51 <sbalukoff> Ok.
20:20:51 <xgerman> k
20:21:16 <xgerman> https://review.openstack.org/#/c/333800/ (How to set loadbalancer_pending_timeout) [chenli]
20:21:31 <xgerman> that’s the last one and I believe chenli is here...
20:21:53 <sbalukoff> Again, the major objections here were from johnsom.
20:21:59 <chenli> This change is about try to fix the case when a loadbalancer will stay in pending status forever
20:22:07 <sbalukoff> We discussed it a little last week. I think people were supposed to comment on the patch.
20:22:10 <chenli> :)
20:22:15 <sbalukoff> (Again, apologies I didn't end up having time for that.)
20:22:21 <xgerman> same here
20:22:54 <sbalukoff> So, I think johnsom's objection wasn't that this is a bad idea, but I think he thought this would happen eventually anyway (like after an hour).
20:23:07 <sbalukoff> If that's not the case, then we definitely have a bug on our hands.
20:23:29 <diltram> unfortunately it will not change
20:23:31 <xgerman> I think stuff stays in pending forever
20:23:37 <xgerman> diltram +1
20:23:43 <sbalukoff> Also, I think he didn't like it happening as part of an API get request-- that it ought to be something that happens in the housekeeper or something.
20:23:44 <xgerman> and worse you can’t delete
20:24:00 <sbalukoff> Yeah, that's a problem.
20:24:00 <diltram> because there is nothing checking whether it gets updated or not, and nothing working on this lb
20:24:38 <diltram> original idea was to do this on api get_loadbalancers_list
20:24:53 <sbalukoff> Yep. And I see two bugs referenced by this... so yes, this is definitely a bug
20:25:16 <diltram> and we proposed to do this in hk but johnsom doesn't want to add this code there
20:25:17 <sbalukoff> It seems "strange" to change the status of something on a get request.
20:25:24 <sbalukoff> Get requests should never cause changes to the system
20:25:29 <xgerman> yep
20:25:39 <xgerman> I think hk is the right place as well
20:25:56 <diltram> exactly, because of that we're still looking for some place which should be responsible for that
20:26:07 <johnsom> I have strong feelings on this
20:26:10 <xgerman> health might be another area
20:26:15 <xgerman> johnsom!!
20:26:20 <sbalukoff> HK does sound like the right place for this. I don't think this should be part of a CW thread, as CW is likely to get restarted and then we've lost the ability to recover from an indefinite PENDING status.
20:26:26 <sbalukoff> Oh yay!
20:26:26 <johnsom> Ignoring a finance presentation
20:26:29 <sbalukoff> HAHA
20:26:33 <johnsom> On mobile.
20:26:35 <sbalukoff> Give us your feels!
20:26:41 <diltram> I was talking about health and looks like good place
20:27:03 <TrevorV> I know this is kinda odd to propose, but what if o-hk just checked for a running controller worker?
20:27:14 <TrevorV> Then updated it to ERROR state or something
20:27:31 <sbalukoff> TrevorV: Still doesn't fix the problem if the CW gets restarted.
20:27:40 <johnsom> So, my issue is this should be handled in the flow/thread that was taking the action.  I really don't want outside threads guessing at what is going on in other proccesses
20:27:44 <sbalukoff> TrevorV: Also sounds unnecessarily complicated.
20:27:47 <TrevorV> if it gets restarted it should re-establish connection, right?
20:27:53 <diltram> but how will o-hk know that this particular lb was created while o-cw was down?
20:27:59 <TrevorV> Oh, you mean the flow dies
20:28:00 <TrevorV> Got it.
20:28:41 <sbalukoff> johnsom: I think the problem is that the flow won't always be reliable.
20:28:46 <johnsom> Controller dies, this is why we need to enable job board
20:28:46 <diltram> we need to implement flows checkpointing and state keepers
20:28:55 <sbalukoff> Right job board.
20:29:01 <xgerman> +1
20:29:02 <sbalukoff> Right. Job board.
20:29:18 <sbalukoff> But... that's probably a lot of work. Should this bug fix wait until then?
20:29:20 <xgerman> ok, so the flow should wait and if it doesn’t go active move it to error
20:29:41 <johnsom> I thought this was trying to solve our broken revert situation, not complete controller failure
20:29:43 <sbalukoff> Or rather, should we make it part of the flow, and just live with the fact that restarting CW could break things until we get job board implemented?
20:29:57 <diltram> sbalukoff: in my opinion, yes
20:30:01 <xgerman> +1
20:30:16 <diltram> sbalukoff: +1
20:30:40 <sbalukoff> Right. Because we expect o-cw to be pretty reliable. Certainly more so than the amphora boot process (and its bazillion dependencies)
20:30:56 <sbalukoff> I'm OK with that. What do you think, johnsom?
20:31:02 <TrevorV> wait, now I'm confused, is the problem not "what do we do when a create_lb flow is interrupted?"
20:31:16 <sbalukoff> TrevorV: No, that's not the problem.
20:31:22 <johnsom> I see two issues.  1: flows that don't terminate in an end state, i.e. active or error.  This is a fix-the-reverts issue
20:31:26 <sbalukoff> Or rather, not the problem we're trying to solve right now.
20:32:04 <johnsom> 2: the controller process dies and the flow doesn't progress.  This is where I think job board is the right answer
20:32:18 <sbalukoff> +1 on job board being the solution to problem 2.
20:32:21 <TrevorV> johnsom, so in either case, this review is "moot"?
20:33:03 <chenli> can I ask, what is 'job board' ?
20:33:08 <johnsom> I did not like the approach in this CR and wanted to discuss it with the wider team
20:33:13 <sbalukoff> So, we should fix the flow that leaves things in PENDING status indefinitely. Possibly introducing a new timeout in the process. But it won't be something that gets set on a GET, nor will it be part of HK?
20:33:49 <johnsom> Yeah, that would be fixes in the flows
20:33:52 <johnsom> Imho
20:33:58 <sbalukoff> Thanks for your feels, johnsom. I think you are right.
20:33:58 <TrevorV> Well, apparently it just means we need to make sure all our reverts are functional, but idk that it involves any kind of timeout
20:34:02 <diltram> chenli: http://docs.openstack.org/developer/taskflow/persistence.html
20:34:11 <chenli> diltram: thanks!
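(For context on the two failure modes being discussed: a task-level revert() lets a failed flow drive the load balancer to a terminal state instead of leaving it PENDING, while the job board / persistence layer linked above lets another conductor claim and resume a job if the controller process itself dies mid-flow. A minimal sketch of the revert side, using hypothetical repository names rather than the actual Octavia task classes:)

    from taskflow import task


    class ProvisionLoadBalancer(task.Task):
        """Hypothetical task: the revert path guarantees a terminal state."""

        def execute(self, loadbalancer, lb_repo):
            # ... real work here: boot amphora, plug VIP, push config ...
            lb_repo.update(loadbalancer.id, provisioning_status='ACTIVE')

        def revert(self, loadbalancer, lb_repo, *args, **kwargs):
            # Any failure later in the flow unwinds through here, so the LB
            # ends up in ERROR instead of sitting in PENDING_CREATE forever.
            lb_repo.update(loadbalancer.id, provisioning_status='ERROR')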
20:34:34 <sbalukoff> TrevorV: Yep.
20:34:38 <TrevorV> johnsom, does that sound right to you as well?
20:34:47 <sbalukoff> It would be much better if the error comes as soon as possible anyway.
20:34:52 <diltram> sbalukoff: there is create_lb_flow rewritten so it should not leave in pending create any more
20:34:54 <sbalukoff> Instead of relying on a timeout.
20:34:55 <diltram> any lb
20:34:57 <johnsom> Thanks diltram
20:35:12 <sbalukoff> Oh, ok!
20:35:18 <sbalukoff> Cool beans.
20:35:23 <diltram> chenli, johnsom: np
20:35:28 <TrevorV> diltram, that's a separate review though right?  Not the one we're talking about here?  So maybe this review is just... "dead"  (No offense)
20:35:43 <sbalukoff> TrevorV: Possibly, yes.
20:35:54 <sbalukoff> I haven't seen the other review yet.
20:35:56 <diltram> TrevorV: looks like this review is dead
20:35:59 <TrevorV> Sorry, I just want to be clear about what's being discussed and such.
20:36:09 <sbalukoff> But yeah, setting this status on a GET is not the right way to do it in any case.
20:36:10 <chenli> ok, I will abandon this one, for now.
20:36:18 <TrevorV> Sorry chenli thanks for your work.
20:36:18 <sbalukoff> Ok!
20:36:28 <diltram> there is new create_lb_flow + johnsom is working on failover flow + implement job boards
20:36:35 <sbalukoff> Oh!
20:36:53 <chenli> TrevorV:  thanks
20:36:56 <TrevorV> Yeah, diltram one of those is the review you linked earlier right?  I'll try and take a look later.  Was concerned you merged two flows together
20:36:58 <sbalukoff> Ok. I'll look for that when it's ready, then. Is there a review number for that already?
20:37:05 <diltram> TrevorV: yes
20:37:14 <sbalukoff> Ok.
20:37:17 <sbalukoff> Linked previously.
20:37:27 <diltram> I merged them to never ever leave lb in pending_create
20:37:30 <johnsom> Well, I have not started work on job boards, yet.  I think the merge is higher priority right now
20:37:50 <sbalukoff> Probably. o-cw doesn't die that often.
20:37:55 <diltram> because right now if something breaks it should revert everything
20:38:06 <diltram> johnsom: +1
20:38:17 <johnsom> Yeah, diltram fixed our tech debt there.  Looking forward to reviewing it!
20:38:19 <TrevorV> diltram, agreed, but typically we kept flows separate due to their repetitive use, that's what I was concerned about
20:38:37 <TrevorV> Just going to read through it, could totally be a non-issue, ya know?
20:38:53 <diltram> TrevorV: I'm reusing existing flows so no worry :)
20:39:00 <TrevorV> Awesome awesome
20:39:03 <diltram> it's only merge and small rebuild
20:39:14 <xgerman> +1
20:39:40 <johnsom> TrevorV this was a single flow that was split due to missing functionality in taskflow.  It was fixed, so diltram fixed the todo
20:40:11 <xgerman> nice
20:40:50 <TrevorV> Ooooh sick thanks johnsom
20:41:13 <xgerman> thx johnsom
20:41:40 <xgerman> #topic Open Discussion
20:41:50 <xgerman> looks like we might use the whole hour :-)
20:42:05 <diltram> sometimes we need :P
20:42:13 <chenli> one question about the update pool:  https://bugs.launchpad.net/python-neutronclient/+bug/1606822
20:42:13 <openstack> Launchpad bug 1606822 in python-neutronclient "can not update lbaas pool name" [Undecided,New] - Assigned to li,chen (chen-li)
20:42:50 <xgerman> neutron-client is a bit tricky since I lost track of what’s going on with openstack client (Bosc)
20:42:52 <xgerman> osc
20:42:56 <diltram> is anyone actively testing a multinode octavia installation?
20:43:16 <diltram> xgerman: x-client is deprecated
20:43:18 <sbalukoff> diltram: You mean, putting the controller on different hosts?
20:43:24 <xgerman> we (=HPE) install that at our customers
20:43:24 <diltram> sbalukoff: yes
20:43:36 <chenli> when I update the pool name, it requires --lb-algorithm, but Reedip said in the comments this was requested by the lb team:  https://review.openstack.org/#/c/347715/
20:43:46 <sbalukoff> diltram: We've been doing a little testing of it at IBM. I expect we'll be doing a lot more testing of it in coming weeks.
20:44:10 <chenli> yes, we're doing a full functional testing recently.
20:44:12 <diltram> ok, great, so something is working :P
20:44:18 <johnsom> It is not yet deprecated last I saw, but there is guidance.  I think bug fixes are still ok
20:44:19 <chenli> using CLI commands
20:44:32 <xgerman> chenli nice!!
20:44:38 <sbalukoff> chenli: Aah-- ok, so if the lb_algorithm is set, it shouldn't be asking for it again. Especially not on an update. So yes, that's a bug.
20:44:50 <xgerman> I heard a rumor that neutron-client will be a plugin to osc
20:45:06 <diltram> xgerman: really?
20:45:07 <sbalukoff> xgerman: Sure. But we have to work with what we have right now, yes?
20:45:18 <chenli> xgerman: o.. then we'd better test with osc...
20:45:39 <johnsom> There is a parallel path for a while.  We are moving to osc plugins.
20:45:53 <sbalukoff> Does osc do any neutron-lbaas stuff right now? (I haven't looked in a while.)
20:46:05 <johnsom> If you really want I can summon the neutron PTL and he will lecture you
20:46:12 <sbalukoff> :P
20:46:21 <xgerman> the who-should-not-be-named
20:46:22 <sbalukoff> I saw him last week. He gave me the evil eye.
20:46:25 <sbalukoff> I was tickled pink.
20:46:25 <johnsom> Sbalukoff last I looked, no
20:46:45 <johnsom> Hahahhaa
20:46:57 <xgerman> ok, so we should be good with that pool fix ;-)
20:47:07 <diltram> nothing reported when grepping for lbaas
20:47:09 <sbalukoff> Right.
20:47:18 <johnsom> There is a spec that covers this.  I don't have the link on my mobile
20:47:30 <xgerman> k, sounds good
20:47:44 <diltram> or load
20:47:54 <chenli> ok! I'll add a test case for it and wait for reviews. :)
20:47:56 <johnsom> If it is a bug, I can lobby to get it in. The deadline will be soon though
20:48:03 <sbalukoff> Ok!
20:48:09 <xgerman> yep, ONE week before code-freeze
20:48:16 <xgerman> so about two weeks
20:48:20 <sbalukoff> So, two weeks-ish...
20:48:21 <johnsom> As client freezes early each cycle
20:48:23 <sbalukoff> jinx!
20:48:30 <chenli> Another question is about https://bugs.launchpad.net/octavia/+bug/1607818
20:48:30 <openstack> Launchpad bug 1607818 in octavia "Can not get loadbalancer statistics by neutron command" [Undecided,New]
20:48:31 <xgerman> johnsom +1
20:48:57 <chenli> no statistics in neutron db currently ?
20:49:13 <sbalukoff> Yeah, our statistics interface is shit.
20:49:24 <chenli> We need statistics. It is very important.
20:49:25 <sbalukoff> I think we need to overhaul it.
20:49:41 <sbalukoff> Yep, it's essential for billing in any case.
20:49:42 <johnsom> So, I think there is a listener vs load balancer disconnect if I remember
20:50:11 <sbalukoff> yeah, this is something we were too hand-wavey about when we designed lbaas v2.
20:50:12 <johnsom> Octavia we wanted listener level stats
20:50:16 <sbalukoff> And now it's biting us in the ass.
20:50:43 <johnsom> For ssl vs non-ssl listeners if I remember right
20:50:57 <sbalukoff> Yes. Because billing will likely be vastly different for them.
20:51:29 <johnsom> Can someone confirm whether they differ between octavia and v2?
20:51:30 <xgerman> Al worked on making it for listeners in LBaaS v2 a long time ago
20:51:48 <sbalukoff> xgerman: How far did he get?
20:52:02 <xgerman> not far… probably abandoned
20:52:06 <chenli> Any links for previous work ?
20:52:08 <johnsom> Again, being on mobile is rough.
20:52:22 <sbalukoff> Octavia definitely does statistics at the listener level.
20:52:29 <johnsom> Yes
20:52:41 <sbalukoff> Looking in n-lbaas real quick...
20:52:45 <johnsom> I can't remember v2
20:52:49 <xgerman> lol
20:52:55 <diltram> https://review.openstack.org/#/c/158823/
20:53:13 <diltram> abandoned
20:53:36 <sbalukoff> Yes, n-lbaas v2 has it at the load balancer level.
20:53:46 <xgerman> diltram yep
20:53:48 <johnsom> If we can pass through the query, that would be cool.  Otherwise the event streamer would need to be updated to sync stats as well as status
20:53:52 <xgerman> it was close though ;-)
20:54:16 <johnsom> This might also be deferred to the merged api work...
20:54:18 <xgerman> I am for passing through the query...
20:54:25 <sbalukoff> We should probably look at adding statistics at the listener level to n-lbaas as well.
20:54:29 <chenli> so we need to add REST API in octavia ?
20:54:34 <xgerman> +1000
20:54:38 <sbalukoff> Since it really is going to be important for billing.
20:54:42 <xgerman> chenli +1
20:54:56 <xgerman> there was also a plan to stream them straight to ceilometer, etc.
20:55:04 <xgerman> but API is needed as well
20:55:20 <xgerman> (hence the handwaving sbalukoff alluded to)
20:55:22 <sbalukoff> It sounds like we need to define a spec about this.
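(A rough illustration of the "pass through the query" option: neutron-lbaas would forward a stats GET to the Octavia API and return the listener-level counters Octavia already collects via o-hm, rather than syncing them through the event streamer. The endpoint, URL path, and field names below are assumptions for the sake of the example, not the approved API spec.)

    import requests

    OCTAVIA_ENDPOINT = 'http://octavia-api:9876'  # hypothetical endpoint


    def get_listener_stats(listener_id, token):
        """Fetch per-listener stats straight from the Octavia API."""
        resp = requests.get(
            '%s/v1/listeners/%s/stats' % (OCTAVIA_ENDPOINT, listener_id),
            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        # Illustrative payload shape:
        # {"bytes_in": ..., "bytes_out": ...,
        #  "active_connections": ..., "total_connections": ...}
        return resp.json()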
20:55:29 <diltram> can anyone kick that patchset - https://review.openstack.org/#/c/324197/ - it's stuck
20:55:39 <johnsom> Streaming to ceilometer is a whole different conversation
20:55:45 <sbalukoff> Because I want to make sure people have a chance to give feedback on the nature of this feature. :/
20:56:02 <diltram> maybe at first we will grab that data
20:56:06 <johnsom> +1
20:56:16 <xgerman> there even was a spec for streaming straight from amps on ceilometer side (also abandoned)
20:56:20 <diltram> after that we can think about in which way to share them
20:56:22 <xgerman> diltram +1
20:56:34 <xgerman> I like your pragmatism
20:56:41 <diltram> thx ;)
20:56:43 <johnsom> We do collect it in octavia
20:57:10 <chenli> y, I see octavia database updated by o-hm
20:57:25 <xgerman> well, there was talk that this would not scale
20:57:43 <xgerman> but let’s run with that ;-)
20:57:50 <sbalukoff> Heh!
20:58:13 <xgerman> I doubt mysql can go beyond 10,000 LBs
20:58:26 <johnsom> That is why stats have their own db setup.  But this is yet another different issue
20:58:33 <sbalukoff> Indeed.
20:58:40 <xgerman> ok, enough bike shedding
20:58:45 <xgerman> T-2
20:58:50 <sbalukoff> What, we have a whole minute left!
20:59:04 <chenli> ok. so first we need to add a REST API to feed neutron-server the statistics
20:59:07 <xgerman> in case we have a “serious” topic
20:59:16 <chenli> I will try to working on that :)
20:59:19 <xgerman> chenli +1
20:59:21 <sbalukoff> Thanks, chenli!
20:59:26 <chenli> because this is really important to us.
20:59:27 <xgerman> and very first we need a spec ;-)
20:59:35 <sbalukoff> Yes, please spec this out!
20:59:40 <chenli> sure!
20:59:49 <johnsom> So, the original question was about exposing the stats.  Look at the octavia api spec and implement what was approved
20:59:52 <xgerman> chenli understood, but I wouldn’t be surprised if it misses N
20:59:56 <sbalukoff> Yes!
21:00:09 <sbalukoff> It's OK so long as it makes it into master.
21:00:19 <diltram> #link https://review.openstack.org/#/c/324197/
21:00:22 <sbalukoff> Anyway, thanks folks!
21:00:34 <diltram> again, please, can some core kick that patchset in the ass
21:00:44 <sbalukoff> diltram: Ok!
21:00:48 <diltram> thx and cu
21:00:51 <xgerman> #endmeeting