*** sbfox has joined #openstack-lbaas | 00:01 | |
*** fnaval has quit IRC | 00:06 | |
*** sbfox has quit IRC | 00:07 | |
*** xgerman_ has quit IRC | 00:09 | |
dougwig | oh, i'm in california. that means bad things for the irc meeting time. | 00:37 |
dougwig | bad things for me. | 00:37 |
dougwig | blogan: is that push the last of them, or was that just a rebase fix? | 00:46 |
*** banix has joined #openstack-lbaas | 00:50 | |
*** sbfox has joined #openstack-lbaas | 01:14 | |
*** sbfox has quit IRC | 01:28 | |
*** sbfox has joined #openstack-lbaas | 01:34 | |
*** crc32 has quit IRC | 01:59 | |
*** fnaval has joined #openstack-lbaas | 02:00 | |
*** sbfox has quit IRC | 02:10 | |
*** sbalukoff has quit IRC | 02:15 | |
*** sbfox has joined #openstack-lbaas | 02:32 | |
*** sbfox has quit IRC | 02:39 | |
*** crc32 has joined #openstack-lbaas | 02:45 | |
*** blogan_ has joined #openstack-lbaas | 02:54 | |
*** woodster has joined #openstack-lbaas | 02:57 | |
*** crc32 has quit IRC | 03:22 | |
*** fnaval has quit IRC | 04:06 | |
*** HenryG is now known as HenryG_afk | 04:06 | |
*** fnaval has joined #openstack-lbaas | 04:06 | |
*** banix has quit IRC | 04:29 | |
*** sbfox has joined #openstack-lbaas | 04:34 | |
*** sbalukoff has joined #openstack-lbaas | 04:50 | |
*** fnaval has quit IRC | 06:26 | |
*** fnaval has joined #openstack-lbaas | 06:26 | |
*** fnaval has quit IRC | 06:31 | |
*** sbfox has quit IRC | 07:02 | |
*** blogan_ has quit IRC | 07:15 | |
*** woodster has quit IRC | 07:15 | |
*** evgenyf has joined #openstack-lbaas | 07:58 | |
*** banix has joined #openstack-lbaas | 10:44 | |
*** banix has quit IRC | 11:42 | |
*** woodster has joined #openstack-lbaas | 12:49 | |
*** HenryG_afk is now known as HenryG | 12:49 | |
*** HenryG has quit IRC | 13:02 | |
sballe_ | morning | 13:13 |
*** fnaval has joined #openstack-lbaas | 13:24 | |
*** banix has joined #openstack-lbaas | 13:30 | |
*** banix has quit IRC | 13:31 | |
*** banix has joined #openstack-lbaas | 13:32 | |
*** banix has quit IRC | 13:34 | |
*** crc32 has joined #openstack-lbaas | 13:36 | |
*** banix has joined #openstack-lbaas | 13:38 | |
*** fnaval has quit IRC | 13:51 | |
*** xgerman_ has joined #openstack-lbaas | 13:51 | |
*** fnaval has joined #openstack-lbaas | 13:51 | |
sballe_ | mestery, morning | 13:52 |
mestery | sballe_: howdy! | 13:53 |
sballe_ | mestery, What is the deadline for design session submissions? | 13:53 |
mestery | Those haven't opened up yet :) | 13:53 |
sballe_ | when do they open? | 13:53 |
sballe_ | and close :) | 13:53 |
*** fnaval has quit IRC | 13:56 | |
mestery | Not until late September. | 13:57 |
*** jorgem has joined #openstack-lbaas | 13:58 | |
rm_work | morning | 13:58 |
*** xgerman_ has quit IRC | 13:58 | |
*** xgerman has joined #openstack-lbaas | 13:58 | |
sballe_ | morning | 13:59 |
sbalukoff | Morning! | 13:59 |
*** rolledback has joined #openstack-lbaas | 14:01 | |
*** HenryG has joined #openstack-lbaas | 14:43 | |
*** Youcef has joined #openstack-lbaas | 14:50 | |
blogan | hello | 15:00 |
sbalukoff | I guess we do have eavesdropping bots recording everything that happens here, so the discussion won't be lost to the ether. :) | 15:01 |
blogan | ill be back in 15 | 15:01 |
sbalukoff | vjay isn't here. | 15:01 |
*** vjay has joined #openstack-lbaas | 15:01 | |
sbalukoff | There he is! | 15:01 |
sbalukoff | vjay: blogan says he'll be back in 15. | 15:01 |
vjay | ok | 15:02 |
*** jorgem1 has joined #openstack-lbaas | 15:03 | |
*** fnaval has joined #openstack-lbaas | 15:04 | |
*** jorgem has quit IRC | 15:05 | |
*** fnaval has quit IRC | 15:07 | |
*** fnaval has joined #openstack-lbaas | 15:08 | |
*** jorgem1 has quit IRC | 15:12 | |
*** banix has quit IRC | 15:12 | |
*** fnaval has quit IRC | 15:13 | |
blogan | im back | 15:14 |
vjay | hi blogan | 15:15 |
vjay | I was checking https://review.openstack.org/#/c/105609/34/neutron/services/loadbalancer/plugin.py. The latest update has changed the behaviour; the LBaaS extension updates the DB assuming the driver operation is synchronous and the operation is complete when the driver call returns. This might not be the case for drivers like NetScaler, Radware etc. This is very useful for simple No-Op drivers. | 15:15 |
*** jorgem has joined #openstack-lbaas | 15:17 | |
vjay | blogan: are you there? | 15:19 |
*** mlavalle has joined #openstack-lbaas | 15:23 | |
blogan | vjay: sorry got pulled away, give me a few mins | 15:30 |
vjay | ok | 15:30 |
*** rolledba_ has joined #openstack-lbaas | 15:31 | |
*** rolledback has quit IRC | 15:31 | |
blogan | ok vjay back | 15:39 |
vjay | I was checking https://review.openstack.org/#/c/105609/34/neutron/services/loadbalancer/plugin.py. The latest update has changed the behaviour; the LBaaS extension updates the DB assuming the driver operation is synchronous and the operation is complete when the driver call returns. This might not be the case for drivers like NetScaler, Radware etc. | 15:40 |
blogan | vjay: changing it back to where the drivers update the status would solve this, right? | 15:40 |
vjay | Yes | 15:40 |
blogan | okay, but sometime I'd like to solve it so the drivers don't have to have that responsibility, whether they're async or sync | 15:41 |
blogan | doesn't have to be now | 15:41 |
blogan | or juno | 15:41 |
vjay | correct, agreed! | 15:41 |
blogan | and it's my fault for making an assumption that all the drivers that weren't using the agent were sync | 15:41 |
blogan | but it's easy to revert | 15:41 |
*** banix has joined #openstack-lbaas | 15:42 | |
vjay | Cool! | 15:42 |
vjay | the scenario tests in the V1 API understood this. They used to wait for the STATUS to be marked ACTIVE and then ran the tests | 15:42 |
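The wait-for-ACTIVE pattern vjay describes can be sketched in a few lines of Python. This is an illustrative stand-in for what the v1 scenario tests did, not the actual test helper; the `get_status` callable, timeout, and polling interval are all assumptions.

```python
import time


def wait_for_active(get_status, timeout=300, interval=5):
    """Poll until the entity's status is ACTIVE, as the v1 scenario
    tests did before running traffic tests.

    get_status is a hypothetical callable that fetches the entity's
    current provisioning status from the API.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == 'ACTIVE':
            return status
        if status == 'ERROR':
            raise RuntimeError('entity went to ERROR while waiting')
        time.sleep(interval)
    raise TimeoutError('entity never reached ACTIVE')
```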
blogan | this means that right now the drivers have a lot of responsibility in changing entities to ACTIVE (which is the easy part) and changing them back to DEFERRED when they are unlinked from a loadbalancer entity (which is the hardest part) | 15:43 |
vjay | hmm... the No-Op driver will act as a model driver for these cases? | 15:45 |
ptoohill | vjay, the nonagent_namespace_driver will be the reference that displays this behaviour | 15:47 |
vjay | ptoohill: thanks! | 15:48 |
blogan | vjay: the noop driver should do this as well | 15:52 |
blogan | and I'm thinking there should just be a plugin method that all drivers can call to automatically set everything to ACTIVE or set the entities that need to be changed to DEFERRED | 15:53 |
*** crc32 has quit IRC | 15:54 | |
blogan | but drivers dont have to use it | 15:54 |
vjay | blogan, makes sense | 15:54 |
blogan | okay ill get on that, thanks vjay | 15:55 |
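The optional plugin helper blogan sketches above might look roughly like this, assuming a simple load-balancer/listener/pool object tree. The names here (`activate_linked_entities`, `walk`, `provisioning_status`) are hypothetical illustrations, not the actual Neutron LBaaS plugin API.

```python
# Sketch of a plugin-side helper that drivers could optionally call
# after finishing a backend operation: everything still linked under
# the load balancer goes ACTIVE, anything newly unlinked goes DEFERRED.

ACTIVE = 'ACTIVE'
DEFERRED = 'DEFERRED'


def activate_linked_entities(loadbalancer, unlinked=()):
    """Mark the whole tree under a load balancer ACTIVE and any
    newly unlinked entities DEFERRED."""
    for entity in walk(loadbalancer):
        entity.provisioning_status = ACTIVE
    for entity in unlinked:
        entity.provisioning_status = DEFERRED


def walk(lb):
    """Yield the load balancer and every entity hanging off it."""
    yield lb
    for listener in lb.listeners:
        yield listener
        pool = listener.default_pool
        if pool is not None:
            yield pool
            if pool.healthmonitor is not None:
                yield pool.healthmonitor
            for member in pool.members:
                yield member
```

A driver that manages its own statuses would simply never call the helper, matching blogan's "drivers don't have to use it."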
*** jorgem has quit IRC | 15:59 | |
*** jorgem has joined #openstack-lbaas | 16:00 | |
*** sbfox has joined #openstack-lbaas | 16:03 | |
*** evgenyf has quit IRC | 16:16 | |
*** crc32 has joined #openstack-lbaas | 16:18 | |
*** sbalukoff has quit IRC | 16:34 | |
*** rolledba_ has quit IRC | 16:38 | |
dougwig | morning, sorry i missed the meeting. | 16:45 |
dougwig | blogan answered this fine, but the noop driver should be a reference, and the minimum that any vendor needs (sort of a "just add your backend calls to this skeleton".) it must result in ACTIVE objects and working CLI/API, or I'll consider it a bug. the haproxy driver is a reference of something that actually makes those backend calls and can pass traffic. | 16:48 |
dougwig | blogan: where are we at in terms of auto-ACTIVE/FAILED/DEFERRED, and sync vs async? | 16:48 |
*** fnaval has joined #openstack-lbaas | 16:53 | |
dougwig | i see going back to async, concur given objections. and let's talk about whether automatically changing deferred makes sense. i can see a few cases where it does not. | 16:53 |
*** fnaval has quit IRC | 16:56 | |
*** banix has quit IRC | 16:56 | |
*** fnaval has joined #openstack-lbaas | 16:57 | |
*** banix has joined #openstack-lbaas | 17:00 | |
*** fnaval has quit IRC | 17:01 | |
*** vjay has left #openstack-lbaas | 17:07 | |
blogan | dougwig: im just going to go back and let the drivers do the status management, since supporting both async and sync drivers will require work and time investments we don't have enough of in juno | 17:29 |
blogan | dougwig: and then make that something we can address later, if we can do it in kilo great | 17:30 |
*** mestery has quit IRC | 17:31 | |
*** mestery has joined #openstack-lbaas | 17:31 | |
*** HenryG_ has joined #openstack-lbaas | 17:31 | |
*** HenryG has quit IRC | 17:34 | |
*** banix has quit IRC | 17:37 | |
dougwig | ok | 17:37 |
*** HenryG_ is now known as HenryG | 17:39 | |
blogan | any objections to that? | 17:40 |
*** banix has joined #openstack-lbaas | 17:40 | |
*** banix has quit IRC | 17:43 | |
*** banix has joined #openstack-lbaas | 17:43 | |
dougwig | ok is dougwig code for 'no objections'. | 17:45 |
dougwig | i'll put the noop fix back on my list. | 17:46 |
dougwig | given that it was functionality removal, and some depend on it, it was clearly a bit premature. | 17:46 |
dougwig | does this mean you retract yesterday's bowing down to my awesomeness? | 17:47 |
blogan | no i bow down to my own failures | 17:51 |
*** sbfox has quit IRC | 17:56 | |
dougwig | well, i was pushing for it as well. | 17:56 |
dougwig | i'll share. | 17:56 |
blogan | i may add the code in the noop to set to active and deferred in this PS because I've already got tests that will test it | 17:59 |
*** sbalukoff has joined #openstack-lbaas | 18:00 | |
*** sbalukoff has quit IRC | 18:00 | |
*** sbfox has joined #openstack-lbaas | 18:02 | |
*** Youcef has quit IRC | 18:05 | |
*** rolledback has joined #openstack-lbaas | 18:19 | |
*** jorgem1 has joined #openstack-lbaas | 18:28 | |
*** sbalukoff has joined #openstack-lbaas | 18:29 | |
*** jorgem has quit IRC | 18:30 | |
sbalukoff | Proof that anyone can submit a talk idea to the OpenStack paris summit: https://www.openstack.org/vote-paris/Presentation/astrologer | 19:03 |
sbalukoff | I think that one might even beat the UDP load balancing session | 19:03 |
*** banix has quit IRC | 19:05 | |
*** fnaval has joined #openstack-lbaas | 19:11 | |
*** sballe_ has quit IRC | 19:13 | |
*** banix has joined #openstack-lbaas | 19:27 | |
*** sbfox has quit IRC | 19:36 | |
*** fnaval has quit IRC | 19:47 | |
*** fnaval has joined #openstack-lbaas | 19:48 | |
*** sbfox has joined #openstack-lbaas | 19:50 | |
*** rolledback has quit IRC | 20:01 | |
*** sbfox1 has joined #openstack-lbaas | 20:02 | |
*** sbfox has quit IRC | 20:03 | |
jorgem1 | sbalukoff: wow! really | 20:04 |
jorgem1 | lol | 20:04 |
*** jorgem1 is now known as jorgem | 20:04 | |
dougwig | blogan: ok, sweet. | 20:16 |
crc32 | sbalukoff: This may be a late question, but the SNI stuff is only for determining which certificate to return to the user's browser, right? Was there any intention of using it to select which backend pool was going to be used? | 20:26 |
crc32 | sbalukoff: for example, SNI host www.toysrus.org should go to the toysrus pool while SNI host www.pornopuffs.com should go to some other pool. Is there anything that was supposed to be in place to dictate which host goes to which pool? Because otherwise it sounds just like a cert selector going to the same application. | 20:28 |
*** sbfox has joined #openstack-lbaas | 20:28 | |
*** sbfox1 has quit IRC | 20:29 | |
ptoohill | In addition to what carlos has said, in the proposed tls spec and code review there's only a sni_container_ids object that is a list of barbican container ids. Unless I'm misunderstanding how to configure haproxy with the additional certs, I don't see this being useful at this point | 20:29 |
crc32 | Of course my question is thinking beyond the scope of Juno and more in terms of re-encrypting. | 20:32 |
ptoohill | my question is more for the immediate work for the tls ref impl and its configurations | 20:35 |
*** fnaval has quit IRC | 20:36 | |
*** fnaval has joined #openstack-lbaas | 20:36 | |
*** fnaval has quit IRC | 20:41 | |
*** fnaval has joined #openstack-lbaas | 20:53 | |
johnsom | It seems like that could be covered by a layer 7 inspection of the HTTP request after the SSL is removed | 20:54 |
*** fnaval has quit IRC | 20:56 | |
*** fnaval has joined #openstack-lbaas | 20:57 | |
*** openstackgerrit has quit IRC | 21:01 | |
*** fnaval has quit IRC | 21:01 | |
*** openstackgerrit has joined #openstack-lbaas | 21:02 | |
ptoohill | johnsom, what I'm trying to get at is: with the current spec/review all we have is sni_container_ids; we don't have any rules at this point. I'm trying to understand what we are supposed to do with those certs without rules. | 21:07 |
ptoohill | I think carlos has somewhat cleared up my confusion | 21:07 |
*** openstack has joined #openstack-lbaas | 21:08 | |
*** fnaval has joined #openstack-lbaas | 21:13 | |
*** banix has quit IRC | 21:32 | |
*** mestery has quit IRC | 21:53 | |
*** mestery has joined #openstack-lbaas | 21:54 | |
sbalukoff | crc32 and ptoohill: Yes, and this will generally be accomplished using Layer 7 rules (this is how our proprietary load balancer product works today). So yes, without L7, SNI is less useful (though not totally useless-- a single pool could service multiple secure domains, so using SNI means you can do this while consuming fewer IPs). | 22:03 |
sbalukoff | I haven't looked at any code that's written for this yet, but the spec should have also associated those sni_container_ids with a mandatory order, which will be used for hostname conflict resolution. | 22:04 |
sbalukoff | ptoohill: The existence (or non-existence) of L7 rules should not affect SNI functionality at all... | 22:05 |
sbalukoff | So I don't understand why you're saying "Im trying to understand what we are supposed to do with those certs without rules." | 22:05 |
sbalukoff | What you do with them is put them in an SNI configuration so haproxy can choose the proper cert to use based on what the client asks for, according to the SNI protocol. | 22:06 |
sbalukoff | (That exchange actually happens right after the TCP session gets started, early in the handshaking process. This is waaaay before we've started speaking HTTP or any other protocol over the connection between client and server.) | 22:07 |
crc32 | sbalukoff: Yes, but no mention of mappings to pools or anything. I think the question was more along the lines of: should an SNI entry be determining which backend pool was being used? | 22:08 |
sbalukoff | crc32: Aah! So the answer, from my perspective is: No, the L7 rules will do that. | 22:09 |
crc32 | IE SNI host www.toysrus.com should go to the toysrus members pool/backend. | 22:09 |
ptoohill | gotcha, thanks for clarifying! | 22:09 |
sbalukoff | Right, and the L7 rule would make that determination by examining the Host: http header. | 22:09 |
crc32 | ok so then SNI is just for selecting the correct certificate. The L7 rules come later on down the road (not slated for juno, if I understood correctly). | 22:09 |
crc32 | Ok I was just verifying this was your expectation as well. | 22:10 |
sbalukoff | I haven't looked closely at what parameters or variables are available to us in haproxy 1.5 yet, but I imagine selecting the backend pool would still fall solidly under the whole L7 functionality even there. | 22:10 |
sbalukoff | Yep! | 22:10 |
crc32 | yea I just barely got introduced to the docs myself today. | 22:11 |
sbalukoff | crc32: I would love to see L7 get into Juno. But I'm having trouble wrangling engineering time to get it done in time. | 22:11 |
sbalukoff | (Mostly, it's become apparent that we need to get Octavia bootstrapped, which is where my time has been lately.) | 22:11 |
ptoohill | That makes sense, think my questions were more unclear than my confusion ><, so thanks for the response! | 22:12 |
sbalukoff | I'm pretty familiar with haproxy 1.4 functionality, as I was heavily involved in defining the templates for using it in our internal load balancer product. But this was before SSL functionality was cooked into haproxy. (We're doing SNI using stunnel there.) | 22:12 |
sbalukoff | No worries, eh! | 22:12 |
sbalukoff | Also, I'm a pretty terrible python coder, which is something I'm also working on correcting, though that won't happen overnight. :P | 22:13 |
ptoohill | Yea, it seems you do acls based off provided certs, then can determine which backend or node to serve based on a match. I just wanted to verify that the spec wasn't assuming that we were doing what is essentially L7 at this point. | 22:13 |
ptoohill | :) | 22:13 |
sbalukoff | (our existing product is written in a combination of perl, shell, and ruby.) | 22:14 |
sbalukoff | ptoohill: Right. Yeah, leave that for L7. | 22:14 |
ptoohill | ;) | 22:14 |
*** mestery has quit IRC | 22:22 | |
*** mestery has joined #openstack-lbaas | 22:23 | |
*** sbalukoff has quit IRC | 22:40 | |
*** HenryG has quit IRC | 22:45 | |
*** Pattabi has joined #openstack-lbaas | 22:52 | |
*** jorgem has quit IRC | 22:53 | |
Pattabi | hi <blogan> | 22:53 |
blogan | Pattabi: hey whats up, was hoping to see you | 22:54 |
Pattabi | quick observation on the update_listener API for the case when default_pool_id is set to None | 22:54 |
Pattabi | in the driver code, i observed that the old listener db object is not fully loaded | 22:55 |
blogan | Pattabi: we're not going the route of the plugin doing all the status management now, because it would cause major problems with other drivers | 22:55 |
Pattabi | yes .. i read that in the IRC meeting minutes .. sorry i missed today's IRC call | 22:56 |
blogan | Pattabi: let me check that out about the update in the driver | 22:56 |
blogan | I have not run into that issue | 22:57 |
Pattabi | for example, old_listener_object.default_pool.listener seems to be None | 22:57 |
blogan | ah that could be because of some lazy loading issues | 22:58 |
blogan | is there a reason you need to go back to listener? | 22:58 |
Pattabi | yes .. seems like lazy loading issue | 22:58 |
Pattabi | when the default_pool_id is set to None as part of update_listener, I need to clean up/delete pool, healthmonitor and members on the device | 22:59 |
blogan | i get that, but default_pool is there right? | 23:00 |
blogan | is healthmonitor and members not there? | 23:01 |
blogan | meaning old_listener.default_pool.healthmonitor | 23:01 |
Pattabi | from update_listener implementation, i invoke the methods delete_pool and delete_pool would in turn call delete_healthmonitor and delete_member ... these are the same apis that would get invoked when user delete pool or delete healthmonitor | 23:02 |
Pattabi | yes .. thats my observation | 23:02 |
Pattabi | actually i take it back .... | 23:03 |
Pattabi | old_listener.default_pool.healthmonitor is avaialble | 23:03 |
blogan | what about members? | 23:03 |
Pattabi | but then old_listener.default_pool.healthmonitor.pool seems null | 23:03 |
Pattabi | old_listener.default_pool.healthmonitor.pool.listener seems null | 23:04 |
Pattabi | members seem okay | 23:04 |
blogan | i think thats right because these are getting eager loaded, so when you try to go back up normally sqlalchemy would just do a select statement on the fly (lazy loading), but since its eager loaded it can't do that | 23:06 |
Pattabi | yes .. thats the exception i see on the neutron log also ... | 23:07 |
Pattabi | because of this i cannot have a delete_pool taking a pool db object and do pool_db.listener.loadbalancer | 23:08 |
blogan | but i think its fine because you shouldn't need to go back up, because you've already got those references | 23:08 |
blogan | ahh i see | 23:09 |
blogan | ill try to do some tests on this tonight or tomorrow and see if i can reproduce what you're seeing and if there's any way around it | 23:10 |
Pattabi | i have for now implemented a private internal method _delete_pool, for example, taking loadbalancer as an additional parameter that i derive from the new_obj | 23:10 |
Pattabi | and use this method to handle setting of default_pool_id to Null in the update_listener API | 23:10 |
Pattabi | i will also upload my code to my git and share the link for your reference | 23:11 |
blogan | okay but i think it might just require you to have to make a db call to grab the loadbalancer | 23:11 |
Pattabi | yes .. that's another option | 23:12 |
Pattabi | only issue is that i cannot grab a loadbalancer from the healthmonitor object | 23:13 |
Pattabi | reason being, healthmonitor.pool will be None for the old object due to lazy loading | 23:14 |
blogan | just did a quick test, didn't run into that issue; i could traverse down the tree and back up | 23:14 |
blogan | ill have to do some in depth testing tonight | 23:14 |
Pattabi | ok.. thanks blogan | 23:15 |
blogan | np sorry for all the churn on this | 23:15 |
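The workaround Pattabi describes (passing the loadbalancer down explicitly instead of traversing back up a possibly-unpopulated backref chain like pool.listener.loadbalancer) might look roughly like this. The class and method names are illustrative stand-ins, not the real driver interface, and the device calls are simulated.

```python
class ExampleDriver:
    """Hypothetical driver sketch: derive the loadbalancer from an
    object known to be populated, then pass it down explicitly."""

    def __init__(self):
        self.removed = []  # record of simulated device-side deletions

    def update_listener(self, context, old_listener, listener):
        if listener.default_pool_id is None and old_listener.default_pool:
            # old_listener.default_pool.listener (and anything above it)
            # may be None because of the eager-loading issue, so take
            # the loadbalancer from the new listener object instead.
            lb = listener.loadbalancer
            self._delete_pool(context, old_listener.default_pool, lb)

    def _delete_pool(self, context, pool, loadbalancer):
        # Clean up members and healthmonitor on the device using the
        # explicitly supplied loadbalancer rather than pool.listener.
        for member in pool.members:
            self._remove_from_device(loadbalancer, member)
        if pool.healthmonitor:
            self._remove_from_device(loadbalancer, pool.healthmonitor)

    def _remove_from_device(self, loadbalancer, obj):
        # Placeholder for the real device call.
        self.removed.append((loadbalancer.name, obj.name))
```

The alternative blogan mentions, an explicit DB query to grab the loadbalancer, would work the same way once the object is in hand.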
*** sbfox has quit IRC | 23:24 | |
mlavalle | blogan: you around? | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!