20:00:17 #startmeeting Octavia
20:00:18 Meeting started Wed May 11 20:00:17 2016 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:22 The meeting name has been set to 'octavia'
20:00:24 o/
20:00:26 Helps if I can type the room name....
20:00:27 NOW you do the "I'm here" line :D
20:00:27 hi
20:00:28 o/
20:00:29 hi
20:00:37 Then you're in there for the roll call :P
20:00:46 hi everyone
20:00:54 meh ;-)
20:00:57 hi
20:01:10 Hello everyone. I have a short agenda for today, so this could be a quick one.
20:01:17 #topic Announcements
20:01:38 o/
20:01:44 I will be traveling next week and cannot host this meeting. Does someone want to chair or should we cancel?
20:01:54 i can cover.
20:01:59 Hey folks!
20:02:08 Ok, thank you dougwig!
20:02:09 I'm also traveling next week.
20:02:14 Thanks dougwig!
20:02:35 Any other announcements?
20:02:55 I'm awesome. That is all.
20:03:14 he did ask for any other lies
20:03:14 * johnsom glares at TrevorV
20:03:16 didn't
20:03:19 i guess announcements can technically be fiction.
20:03:29 FYI, newton-1 is May 30th
20:03:32 why so mean....
20:03:36 TrevorV meant he's awesome in the sense of that song by Spose
20:03:50 You hush, FNG
20:03:54 o_0
20:03:55 I always forget how soon those milestones are...
20:04:03 Heh!
20:04:14 #topic Brief progress reports
20:04:35 o/
20:04:36 i still owe spec re-spins on the spinout. no progress, but good comments.
20:04:50 90% of my time has been spent in meetings since I got back, so I haven't gotten much done with trying to get a smaller amp image
20:04:58 I have been poking at the failover bug with the namespace driver. There is still something strange going on, so still WIP sadly. Internal stuff has grabbed a chunk of my time.
20:05:06 I'm still working on some new neutron-lbaas v2 tests. these have +2's already, it'd be great to get more +2's on them
20:05:12 I've been working on getting the amphora / listener data loss thing done. The first part is out for review; the second part I'll probably bring up to discuss some later.
20:05:20 #link https://review.openstack.org/305525
20:05:28 #link https://review.openstack.org/306182
20:05:31 #link https://review.openstack.org/#/c/310667
20:05:38 fnaval Thank you again for the work on the session persistence stuff!
20:05:40 Hah, wrong review...
20:05:49 #link https://review.openstack.org/#/c/257201/
20:05:50 That one
20:06:02 yeah thanks for the quick review turnaround too
20:06:18 We still need to adjust some timeouts as I'm seeing amp boot times of ~11:37, so our scenario tests are timing out and failing.
20:06:22 I haven't anything to show the group. Been in meetings, working on internal stuff, or riding a motorcycle across the country since the conference. :/
20:06:31 Yeah, lots of reviewing of the spec for spin-out going on, but more needed
20:06:41 Ok.
20:06:42 I'd link it but TrevorV already did :P
20:07:06 johnsom: yeah if i can actually get further on minimal image, those issues might just go away :/
20:07:17 o/
20:07:18 I think that's the correct direction to be working
20:07:21 sbalukoff I got one doc to review from Leslie, but I'm not sure it was in gerrit (if I remember right). It would be easier for me to review if it was
20:07:22 11 minutes!!!!!
20:07:33 I think I saw her propose something
20:07:43 johnsom: I think she got a revision of something in gerrit on Monday evening.
20:07:49 In the neutron-lbaas project.
20:08:06 Agreed, if we can find a way around the qemu/tcg issue and get the boot times down life would be better
20:08:26 sbalukoff Ok great!
20:08:55 #link http://logs.openstack.org/74/278874/13/check/gate-neutron-lbaasv2-dsvm-scenario/0374f53/logs/screen-o-cw.txt.gz
20:09:10 i think the scenario tests would be much more reliable
20:09:12 That was the log from a test run where end-to-end the boot took 11:37
20:09:14 and maybe one day, be voting
20:09:21 yeah i'm targeting <= 2m for boots even without vtx
20:09:30 but we'll see if that's feasible
20:09:35 and with fnaval adding more scenario tests, we're going to hit the 2 hour timeout if it stays as it is
20:09:36 That would be great!
20:09:42 That would be excellent. Yeah, it's about 37 seconds with virt
20:10:00 yeah and that scales, so maybe would be down to like 10 with virt :P
20:10:26 Ok, any other progress reports etc?
20:10:27 nice
20:10:48 #topic Open Discussion
20:10:57 Other topics?
20:11:11 I have some +1's on #link https://review.openstack.org/#/c/312595 and would love some more reviews
20:11:17 there was something about topology on lb create
20:11:40 I have a follow up on that regarding aggregating the amphora / listener stats and bubbling them up to Neutron LBaaS.
20:11:45 I feel a bit guilty as the weeks since the summit have been low cycles for me.
20:12:06 So blogan rm_work ptoohill and I just got out of a meeting talking about work efforts for the Octavia parity with Neutron LBaaS
20:12:11 johnsom: +1
20:12:18 And I'm not quite out of the woods yet. :/
20:12:23 One of the requirements in the list right now is "Fields by query parameter"
20:12:24 do we want a mid-cycle for newton?
20:12:27 Is this an optional field?
20:12:28 So, sorry, folks!
20:12:41 dougwig: Yes.
20:12:44 same johnsom / sbalukoff T_T
20:13:15 TrevorV I think that is the query filter right?
20:13:15 ^^ what TrevorV said
20:13:32 Or just the list of fields to return?
20:13:37 ^^ second one
20:13:39 johnsom there is a "list query filter" and then there is "give me these fields for this resource I want"
20:13:43 Ok
20:13:49 I'm talking about the second
20:13:50 Yeah
20:13:56 Can't we steal that code from LBaaS/neutron?
20:13:58 I'm thinking this is optional, but is it?
20:14:09 No idea.
20:14:19 It's optional from a query perspective, but for parity it would be required
20:14:26 it's not really "breaking" to give MORE info than required is it? >_>
20:14:39 I think there is an "OpenStack API guideline" somewhere
20:14:40 i mean... i guess if someone did some REALLY bad coding
20:15:00 rm_work: What? People code badly around OpenStack APIs? Never!
20:15:05 T_T
20:15:24 i would put it at a low priority, filters are a higher priority
20:15:53 Speaking of, we need to talk about single create/delete. We need to get that stuff merged soon and I want to get the client change done before we get told only in osc
20:15:56 heh
20:16:04 blogan +1
20:16:19 same
20:16:22 to both
20:16:42 client work should be FAIRLY straightforward there, I assume
20:16:47 Yeah, please review my single-create that's Neutron LBaaS
20:17:01 since all the stuff is there already individually, just need to kinda cobble it together into a new single call
20:17:02 The client is basically done, it's just pending the API side being done
20:17:02 It's up there, been there a month now :P
20:17:09 ah lol
20:17:13 shows how much I've seen
20:17:16 TrevorV: is it working?
20:17:20 I guess I only look at Octavia reviews these days
20:17:27 #link https://review.openstack.org/#/c/288187/
20:17:39 blogan I have 1 known bug for l7rule, but I'm working that out. Everything else gets through octavia and in the haproxy config.
20:18:03 Is that just create or is delete in its final(ish) api form?
20:18:47 just create
20:18:53 who was working on delete? that's a separate effort right?
20:18:54 another review needs to add the delete
20:19:02 I know xgerman WAS working on it, but did someone take over?
20:19:09 blogan did
20:19:13 lol
20:19:22 i rushed a quick spin on it before the Mitaka deadline but it got axed
20:19:32 but review is still up
20:19:35 Right.
20:19:43 do you have it handy to link?
20:19:56 #link https://review.openstack.org/#/c/287593/
20:20:02 i may have abandoned it bc it's not the right way to do it now
20:20:09 cool
20:20:21 oh, lol, so it'll need to be scrapped and redone?
20:20:33 it should be put into the same endpoint as the single create
20:20:44 cascade delete will need to be updated to be off the "/graph" endpoint.
20:20:47 well a lot of the code can probably be reused
20:20:53 The actual cascade changes may not actually need updating.
20:20:54 Yeah
20:20:59 like below the plugin layer should remain the same
20:21:04 * xgerman shudders about graph
20:21:09 I'm just saying Armando is on a hunt for client things to push into OSC (see his comment on my patch), so if we want cascade delete in the client, we likely need to move soonish
20:21:11 Heh!
20:21:35 I will sign up to update the client code, I just need to know the new endpoint plan
20:22:31 DELETE /graphs/
20:22:43 +1
20:22:46 Ok
20:22:53 Yep.
20:23:05 * johnsom waits for dougwig to throw paint on it
20:23:17 Ha ha.
20:23:46 * dougwig 's paint bucket is empty
20:23:48 i thought dougwig is happy with that call
20:23:52 and i did just put words into his mouth
20:23:56 he is my puppet
20:24:22 Ok, cool. I will take an action to make the required client changes for that. Who can take the API side?
20:24:32 * blogan looks around
20:24:43 * johnsom steps back
20:24:44 Sorry, I can't just yet. :/
20:24:49 johnsom are those separate efforts for neutron lbaas?
20:24:59 TrevorV: the client and api?
20:25:09 Oh oh oh the CLI versus the API you mean?
20:25:10 will never work on an endpoint called graph for deleting a loadbalancer
20:25:22 I think create, delete, and client are all separate patches.
20:25:43 Uh... I'm now lost.
20:25:46 yes they are all
20:25:55 xgerman: such a downer
20:26:12 I thought people knew my position on this
20:26:12 * blogan watches xgerman take his ball and go home
20:26:16 Haha
20:26:27 I think he's just looking for an excuse to do just that. ;)
20:26:44 Like being moved to a team that will eat up all his time with containers. ;)
20:27:11 TrevorV: can you work on the cascade delete?
20:27:11 I would comment as well, but I have to get on stage and present with him next week
20:27:18 Hahaha!
20:27:40 yeah, we are coming to San Antonio :-O)
20:27:46 blogan I think I can do that...
20:27:56 TrevorV You are the man!
20:27:58 TrevorV: okay cool thanks
20:28:10 Sure thing.
20:28:15 +1 TrevorGraph
20:28:20 Ouch xgerman
20:28:21 * johnsom Hands xgerman's ball to TrevorV
20:28:28 * TrevorV pops the ball
20:28:28 whoa
20:28:45 (our meeting minutes have to be weird to other readers....)
20:28:56 Hahaha, yes, yes they do...
20:29:05 i got another discussion topic
20:29:11 if we've wrapped up that one
20:29:11 Uh-oh.
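(For readers following along: the group above agreed to hang cascade delete off a "/graphs/" resource, the same endpoint shape planned for single create. A minimal sketch of what such a call might look like from a client follows; the host, port, version prefix, auth handling, and function name are illustrative assumptions, not the merged API.)

```python
import requests

# Assumed values for illustration only; the real endpoint, version prefix,
# and auth mechanism depend on the cascade-delete patch that eventually merges.
OCTAVIA_ENDPOINT = "http://controller:9876"
AUTH_TOKEN = "example-keystone-token"


def cascade_delete_load_balancer(load_balancer_id):
    """Delete a load balancer and all of its child objects (listeners,
    pools, members, health monitors) in a single call."""
    url = "{}/v2.0/graphs/{}".format(OCTAVIA_ENDPOINT, load_balancer_id)
    resp = requests.delete(url, headers={"X-Auth-Token": AUTH_TOKEN})
    # A 204 No Content would be the conventional success response for DELETE.
    resp.raise_for_status()


if __name__ == "__main__":
    cascade_delete_load_balancer("d7f9f6a2-0000-4c6e-9d6d-000000000000")
```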
20:29:12 me too
20:29:21 Frito: dibs
20:29:26 go for it
20:29:46 Ok cool. I will work on that in some off hours and get the client update. TrevorV let me know if you are going to adopt the current patch or start a new one. I will need a depends-on link
20:30:12 Sigh, blogan.... grin
20:30:13 johnsom sure thing.
20:30:24 so with a certain company scaling back their involvement in lbaas, that leaves us with 2 cores who will not be as active
20:30:38 Ok.
20:30:39 Three actually
20:31:01 we previously kind of all had a gentlemen's agreement that features would not be merged with 2 cores from the same company only reviewing it
20:31:12 yep
20:31:13 well only 2 +2's from the same company
20:31:13 But one will still be semi-active. How is that for vague
20:31:35 Heh!
20:31:49 there are still five companies with cores, four of which are semi-active
20:31:51 so with THREE cores now being that way, i think we should do away with that, because it's really just ibm and rackspace at that point and half an HP'er
20:32:05 or we could just merge octavia-core and neutron-lbaas-core
20:32:06 A10?
20:32:08 And I hope to be more active than I have been for the last 4 weeks or so. Trying to get more IBM people to show up to the IRC channel and meetings, but obviously haven't had much success with that yet.
20:32:14 dougwig doesn't count
20:32:19 Haha!
20:32:21 of course not
20:32:25 now you sound like my wife.
20:32:37 lol
20:32:43 dougwig: if we merged them wouldn't that just net another racker and another HP'er?
20:32:47 * johnsom gives dougwig permission to -2 the world to prove a point
20:32:56 plus an ibm
20:32:56 So maybe not necessarily a "do away with", but what about a time limit? What about waiting for a period of time before the second +2/+A for it to merge?
20:33:00 Like, say, 48 hours
20:33:04 so same situation almost
20:33:08 I'm not a neutron-lbaas-core right now
20:33:25 But yes, it's close to the same situation.
20:33:28 ibm, a10, rax, at a minimum. that's more spread than we started with in n-l.
20:34:08 very well, although i never agreed with that rule anyway, at least not until someone abused it
20:34:26 We could also go with the gentleman's agreement that if a patch sits there for, say, more than 2 weeks with a +2 on it, and no -1 from another core, then a core from the same company should feel free to +A.
20:34:30 As a stop-gap anyway.
20:34:32 Ok, so I am ok with a time limit. One condition would be if any core -1's it doesn't +A. I.e. give someone a chance to hold a merge for review.
20:34:48 i'm fine with the "we'll deal with it if it's a problem" approach.
20:34:53 sbalukoff: fine with that but maybe a shorter time frame
20:34:59 Alright, but sbalukoff 2 weeks could be a really long time when we start iterating on bug fixes and the like
20:35:04 johnsom: i think everyone mostly does that anyway
20:35:12 sbalukoff: +1
20:35:19 I'd be happy with a much shorter time frame for bug fixes.
20:35:26 yeah
20:35:29 but new features... probably ought to go 2 weeks.
20:35:30 maybe 1 week for bug fixes
20:35:37 thanks for talking about this - it's important! ;-)
20:35:38 That's still a long time...
20:35:40 but since this is all gentlemen's agreement anyway... just be good about it
20:35:42 I'm fine with that
20:35:50 Yep.
20:35:50 yeah but if it's really urgent, ping people
20:35:58 rm_work: +1000
20:36:04 I've pinged a million times for single-create reviews... haven't seen anyone but phil
20:36:05 rm_work: +2
20:36:06 o_0
20:36:14 I can sometimes be reached even when I'm underwater working on internal stuff. :)
20:36:17 I am in the channel - so would see it
20:36:17 yeah that isn't ... urgent
20:36:23 My biggest drag is I want to install and try these things, which I don't always have the resources available to do.
20:36:25 You hush your mouth rm_work
20:36:37 Haha
20:36:40 johnsom: +1
20:36:42 i just haven't seen any of the "push this in because our company needs it and it breaks everyone else, but so what" attitudes so i'm not worried about it either way
20:36:47 Alright, so about a week for any changes, or 2 weeks for changes and 1 week for bugs?
20:37:23 2 weeks for feature changes / additions... 1 week for bugs.
20:37:23 (I like being explicit, that way I can't complain if it happens too fast, or I CAN complain if it takes too long)
20:37:28 Yeah, same here, if it's really blocking you, please feel free to nag me
20:37:30 Though, of course, just use good judgement.
20:37:55 If it's a bug fix that's like 1 line and it's holding you up and is very unlikely to break anyone else... just get the damned thing merged!
20:37:58 sbalukoff +1
20:38:09 so let's use common sense!
20:38:13 Haha!
20:38:13 Alright, well, apparently I'm the only one that wants a set value....
20:38:15 Fine.
20:38:16 Whatever.
20:38:17 common sense isn't too common
20:38:19 Hate you all. Just a little.
20:38:20 Please two sets of eyes though, rage merges are still un-cool
20:38:28 TrevorV: We have "wet concrete guidelines"
20:38:31 decision: we are good stewards of the project and use common sense
20:38:39 johnsom: +1
20:38:56 blogan +1
20:39:11 oh, did anyone say what they actually thought of merging the core groups?
20:39:21 It'll happen anyway when we finish the spinoff
20:39:31 sure, sounds good
20:39:31 Yeah, to me that is just a timing thing
20:39:33 sooo... i don't care -- though i won't be +2ing anything n-lbaas anyway prolly
20:39:34 I'm OK with that.
20:39:43 but go ahead IMO
20:39:54 it does make us a very large core group
20:39:57 I could -2 graph…
20:40:01 ... do we need to do any pruning?
20:40:09 xgerman: We know where you live.
20:40:12 or are we just going to worry about that later
20:40:26 we should have an official vote / proposal for this on the ML right?
20:40:27 sbalukoff I have a combat chicken
20:40:39 since I don't think we have a majority here actually :/
20:40:50 xgerman: I see your combat chicken and raise you a combat duck.
20:41:01 We're going amphibious, baby!
20:41:08 nah, we can skip the bureaucracy.
20:41:26 i wonder what those animals taste like
20:41:37 fnaval: Delicious.
20:41:45 lol yeah my thoughts too
20:41:48 ok
20:42:06 There was so little information in the last 10 lines....
20:42:09 ha ha ha
20:42:26 i have no idea what just happened
20:42:33 Ok, so are we good on cores/+A?
20:42:45 TrevorV: You have to read between the lines. The message there reads, "Nobody is against this, so let's do it."
20:42:52 I see...
20:42:53 mid-cycle, i heard one yes. if we had one, how many would attend?
20:43:09 * dougwig raises hand.
20:43:18 I probably would, though it depends on where it is
20:43:19 Depending on location, Rackspace will probably attend
20:43:22 dougwig: I'm almost certain I could attend. Other IBMers attending would depend on location.
20:43:33 Also, when?
20:43:37 nope
20:43:38 and when, yeah
20:43:44 * johnsom gets "the future is cloudy"
20:43:47 unless it’s in San Diego
20:43:48 as always depends on where, though i'm not sure if i'd be allowed to attend now
20:43:49 Yes, when is important.
20:43:51 is 'depends on location' code for "if it's at rax" ? or "if it's in seattle?" :)
20:43:52 if at rackspace, i'd be there
20:43:54 so, we should try? but we need to propose a date and location to really get a firm idea
20:44:00 which may then tell us whether to do it or not
20:44:09 for me it's location
20:44:24 I may *be* in WA already depending on when it is, planning to spend some time up there this year prolly
20:44:29 i'd like to attend but with my new role i dont know how likely that is, rm_work, TrevorV, and/or ptoohill might be better to go over me
20:44:31 No, IBMers showing up would probably be best with San Jose. However, I don't see any of those IBMers here voicing their opinion, so... San Antonio it is!
20:44:32 esp. around when the midcycle might be, lol
20:44:34 i'd lean towards june sometime, but it sounds like location is the harder part.
20:44:55 boise!
20:44:56 so for me it might be easier if it's WA, heh
20:45:04 I would absolutely do Boise.
20:45:08 frankly, i'm fine with san antonio.
20:45:09 location is a huge part. I'm not even sure if Rackspace *will* host... We'd have to get that information
20:45:14 i'd also be fine with boise. ;-)
20:45:14 Boise has some fun festivals and stuff in June, IIRC.
20:45:19 yeah I'm ok with Boise prolly
20:45:34 but that's me in a void without any knowledge of RAX's budget and where i'll be
20:45:40 We could do it at my parents' house.
20:45:48 lol
20:45:54 And I'm actually not even joking about that. They have space to host like 60 people.
20:45:57 Only if we get pancakes
20:46:01 LAN Party at Stephen's parents place!!!
20:46:03 if it's in boise, the worst case is me sitting in an office, sad and alone. aka, every day.
20:46:05 HAHA!
20:46:10 I feel like I'm in highschool
20:46:23 rm_work: So what are your plans for the senior prom?
20:46:24 I would totally crash sbalukoff 's family's house...
20:46:32 He's going with me
20:46:33 o_0
20:46:34 lol
20:46:38 i'm not sure its safe to be around the people who spawned sbalukoff
20:46:43 HAha!
20:46:45 ha ha ha ha
20:46:45 haha
20:46:47 That's true
20:46:49 hahaha
20:47:05 these meetings
20:47:15 oh no you didn't
20:47:18 i expect there are a lot of bernie sanders signs. or trump signs. i doubt it's anywhere in-between.
20:47:23 Ok, well, I doubt we're going to get a decision this week on location or time.
20:47:23 Every other team envies us, you know it/
20:47:34 ok, thoughts on time?
20:47:42 dougwig: To my surprise, it was Sanders signs.
20:47:43 even tentatively, just ... when would a "mid"cycle be
20:47:53 Corvallis is also nice any time of year ;-)
20:47:53 dougwig: Mostly because they absolutely cannot stand Hillary.
20:48:10 June sometime would be my vote.
20:48:13 no Gary Johnson fans? >_>
20:48:20 Like mid-June-ish.
20:48:49 GJ!
20:49:01 ok, mid June is when I might already be visiting WA, depending on how my plans develop
20:49:07 not August for me
20:49:14 so I could drive or fly from BLI to whatever Boise's airport is
20:49:16 Yeah, I can host in Corvallis.
20:49:16 Personally, I'm going with Vermin Supreme. I'm looking forward to a pony-based economy and going back in time to kill Hitler.
20:49:25 lol
20:49:35 What's the minimal pony-unit? one full pony?
20:49:37 Ooh! Corvallis is also nice.
20:49:38 that could be problematic
20:49:46 I'd be fine with boise or corvallis, they're different
20:49:47 unless there are pony-derivatives markets
20:50:04 we could also do Windcrest
20:50:11 rofl how is that different
20:50:15 shhh
20:50:17 i don't think people want to get shot
20:50:21 Er... maybe we should have people look into serious options and come back next week (or in two, since johnsom and I will be out next week)?
20:50:26 yeah
20:50:38 corvallis and boise sound serious as long as they can host
20:50:42 I want to make sure that Frito gets time for his topic.
20:50:44 haha
20:50:46 though I'm tentatively ok with Boise or Corvallis June-ish
20:50:50 oh god i forgot about Frito
20:50:55 he should have been more pushy
20:50:55 yeah any other topics? lol
20:50:56 whoots sbalukoff. I was wondering if the group forgot
20:51:04 * blogan did
20:51:07 Frito: you have to say something or we'll just BS forever and end it
20:51:08 Regarding pushing listener statistics from Octavia to Neutron LBaaS. If we leave the propagation in the heartbeat handler as it is today we run the risk of an increased likelihood of race conditions and such.
20:51:09 The other thing I can see doing is having a background process in HouseKeeping that rolls the data up before sending the message back up to NLBaaS.
20:51:11 The risk here comes in from things like someone deleting a listener between rollup cycles and losing a little data. So this route might involve a soft delete until the rollup happens, then a hard delete.
20:51:12 So vote on rollup in heartbeat handler vs background process for rollup vs something else, assuming this is the forum for it?
20:51:13 oh god actual work
20:51:14 Also, sorry if this isn't the right forum for this question / discussion.
20:51:14 whoa!
20:51:21 quick typer
20:51:26 he was SOOOO ready
20:51:29 I had it typed up in notepad already assuming I'd be time crunched.
20:51:46 Heh!
20:52:04 He's too ready... we don't have time for his readiness.
20:52:06 I am still not super worried about the heartbeat handler doing this stuff since it's a quick processing and bump to a queue
20:52:06 but
20:52:08 the long pause while everyone reads it
20:52:12 So, having had 30 seconds to think about this, I think data consistency is key here, and not worry too much about losing data from a very short-lived listener.
20:52:18 nlbaas only needs the stats for customers to see right?
20:52:19 I'm still thinking through it and all that. It's dependent on that other review I have out there but I wanted to bring it up.
20:52:21 Frito We already have a soft delete at the load balancer level
20:52:24 I think we'd need to really get an idea of scale and performance and when it'd really be a problem
20:52:37 johnsom: this would be at the listener_statistics level
20:52:38 rm_work: +1
20:52:38 I'm wondering if we even need to push stats up to nlbaas
20:52:51 I don't think we need to push stats to nlbaas
20:52:53 But that's me
20:52:54 yeah probably not
20:52:56 blogan: that's another thought I had.
20:52:59 blogan: Oh, are you thinking people should query Octavia
20:53:02 because honestly either the driver can pull it
20:53:03 API directly?
20:53:10 or ... we're about to merge the twp
20:53:12 *two
20:53:14 "I think data consistency is key here" -- clearly a mission for mongo
20:53:17 lol
20:53:24 currently, whenever a stats call is requested by a customer, nlbaas will ask the driver for it and update the table in the db and return that to the user through the API
20:53:27 cassandra?
20:53:27 I think our octavia driver can PULL the stats from octavia with n-lbaas
20:53:30 lol @ nosql
20:53:32 don't think it needs to be in both places
20:53:34 that's how V1 did it
20:53:38 dougwig: Oh, haha! I'd forgotten that some openstack systems use that pile of crap. Like the one that gathers stats. XD
20:53:44 and since nlbaas is going away, should we really do more than that?
20:53:46 for nlbaas
20:53:48 right
20:53:49 Can't we just do a stats pass-through for nlbaas or the octavia api?
20:53:57 i don't want to devote too much effort working on n-lbaas stuff
20:54:00 johnsom: that's essentially what it's doing
20:54:00 johnsom: yes
20:54:04 that is what i am saying too
20:54:08 Well, the stats model in n-lbaas is really, really primitive.
20:54:17 it still needs to handle listener stats
20:54:22 We didn't spend much time on that at all when we built the lbaas v2 api / model.
20:54:24 sbalukoff: it is but why improve it when we're going to throw it away?
20:54:30 yeah
20:54:31 ^^ this
20:54:34 Right.
20:54:37 Agreed.
20:54:50 so many words to keep up with. I'll have to re-read after.
20:54:51 i'd rather spend the time towards finishing what we need to deprecate n-lbaas, rather than do yet more work in it
20:55:08 it only ever served as a function for the user to get the current stats for their load balancer, that can still easily happen without octavia sending stats up to nlbaas
20:55:09 blogan +1
20:55:10 Frito: I don't think anyone here really knows what the right answer is right now.
20:55:15 Yea, this would just be moving the call that's in the bottom of the current heartbeat right now.
20:55:17 Right
20:55:25 i know the right answer!
20:55:28 i vote leave it for now
20:55:35 makes sense. I wanted to toss it out there. We can follow up in #openstack-lbaas if need be
20:55:40 blogan: You don't count.
20:55:42 yeah i'll need to look
20:55:45 add listener stats support to nlbaas, but leave it as a passthrough to the driver
20:55:52 Ok.
20:55:52 sbalukoff: :(
20:56:05 yeah i mean, stats HAVE TO come in through the heartbeat sooo
20:56:09 just ... put them in our DB
20:56:11 and leave it at that
20:56:17 rm_work: you mean octavia's db
20:56:20 yes
20:56:20 Sure.
20:56:22 our
20:56:24 rm_work: they are already in octavia's db
20:56:33 exactly
20:56:33 so
20:56:34 I think we should handle stats as they come into the health manager in octavia, and just query pass-through from nlbaas to the octavia DB for requests.
20:56:38 don't change anything :p
20:56:44 current implementation has some holes in it where Neutron's stuff gets overwritten and all that w/ multiple amphorae
20:56:45 yeah this ^^ johnsom ++
20:57:07 i don't think there's an actual problem
20:57:08 johnsom: cool. Okay. I didn't think of that.
20:57:09 yeah, octavia will still need some mechanisms to send stats off to something like ceilometer
20:57:16 yeah but that'll be separate
20:57:16 that shouldn't be in nlbaas
20:57:24 blogan: +1
20:57:26 Frito +1 yes, the current implementation is not complete.
20:57:54 ok 2 mins, sounds like we've got consensus on this
20:58:07 Frito: got it?
20:58:11 Agreed, the ceilo or whatever part is separate from this.
20:58:24 Man, for a short meeting, we sure managed to fill the hour. Mostly with useless banter, but, eh...
20:58:25 Consensus enough. Leave as is for the message back up, and at some point in the future the API will get re-routed (for lack of a better term)
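(For readers following along: the consensus above is that Octavia keeps writing stats into its own DB from the health manager heartbeat, and neutron-lbaas simply pulls them through its Octavia driver when a user asks, rather than Octavia pushing stats up on every heartbeat. The sketch below is only an assumption of how such a driver-side pass-through helper might look; the class, method name, URL shape, and the choice to go through the Octavia REST API rather than straight to its DB are illustrative, not the actual driver interface.)

```python
import requests

# Assumed Octavia API endpoint; in a real deployment this would come from
# the service catalog or the driver's configuration.
OCTAVIA_ENDPOINT = "http://controller:9876"


class OctaviaStatsPassThrough(object):
    """Hypothetical driver-side helper: fetch load balancer stats from
    Octavia on demand instead of keeping a second copy of the counters
    in the neutron-lbaas database."""

    def __init__(self, session=None):
        # A session that already carries auth headers could be injected;
        # a plain requests.Session() is used here to keep the sketch
        # self-contained.
        self.session = session or requests.Session()

    def get_loadbalancer_stats(self, loadbalancer_id):
        # Assumed URL shape; the real stats resource may differ.
        url = "{}/v2.0/lbaas/loadbalancers/{}/stats".format(
            OCTAVIA_ENDPOINT, loadbalancer_id)
        resp = self.session.get(url)
        resp.raise_for_status()
        stats = resp.json().get("stats", {})
        # Map to the counters the lbaas v2 stats call already exposes.
        return {
            "bytes_in": stats.get("bytes_in", 0),
            "bytes_out": stats.get("bytes_out", 0),
            "active_connections": stats.get("active_connections", 0),
            "total_connections": stats.get("total_connections", 0),
        }
```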
20:58:31 yep
20:58:37 yea, driver is a different discussion for me
20:58:40 we don't need to be sending stats to n-lbaas IMO
20:58:45 sbalukoff I knew I was doomed as soon as I typed it.
20:58:51 essentially we should just NOT, and let n-lbaas query octavia
20:59:07 Right on. Thx all
20:59:08 aaaanywho, meeting over then?
20:59:08 rm_work +1
20:59:10 30 seconds left :-D
20:59:11 still need to add listeners to the nlbaas stats though
20:59:19 Heh!
20:59:34 Yeah, that API probably needs to be enhanced.
20:59:35 So it'll be a pass-through, as I (reluctantly) agree with blogan, eh.
20:59:37 ;)
20:59:51 Ok, thanks folks!
20:59:57 Thanks y'all!
20:59:58 yes!
20:59:59 cool thanks! o/
21:00:03 dance puppet
21:00:05 #endmeeting