*** yamamoto has joined #openstack-lbaas | 00:01 | |
*** yamamoto has quit IRC | 00:06 | |
*** ianychoi has quit IRC | 00:07 | |
*** ianychoi has joined #openstack-lbaas | 00:08 | |
johnsom | Yeah, I can load old barbican containers fine, even up to master branch. So, not sure what is happening for Jorge | 00:11 |
*** armax has joined #openstack-lbaas | 00:16 | |
*** yamamoto has joined #openstack-lbaas | 00:58 | |
*** dmellado has quit IRC | 01:22 | |
*** ianychoi has quit IRC | 01:30 | |
*** ianychoi has joined #openstack-lbaas | 01:32 | |
*** gthiemon1e has joined #openstack-lbaas | 01:43 | |
*** gthiemonge has quit IRC | 01:43 | |
*** dmellado has joined #openstack-lbaas | 01:46 | |
*** happyhemant has quit IRC | 02:09 | |
*** ianychoi has quit IRC | 02:10 | |
*** gthiemonge has joined #openstack-lbaas | 02:15 | |
*** gthiemon1e has quit IRC | 02:16 | |
*** ianychoi has joined #openstack-lbaas | 02:18 | |
*** hongbin has joined #openstack-lbaas | 02:20 | |
*** yamamoto has quit IRC | 02:35 | |
*** hongbin has quit IRC | 02:39 | |
*** yamamoto has joined #openstack-lbaas | 02:43 | |
dawzon | I'm wondering how existing listeners should be migrated to the new configuration where every listener will have its own cipher string. My first thought is to just set them to the built-in default (OWASP recommendations), would this cause any issues with existing deployments? I suppose that existing deployments never chose their ciphers in the first place, so switching the ciphers may not be a big deal. | 03:23 |
*** psachin has joined #openstack-lbaas | 03:26 | |
*** ianychoi has quit IRC | 03:35 | |
*** ianychoi has joined #openstack-lbaas | 03:37 | |
*** TrevorV has joined #openstack-lbaas | 03:57 | |
*** armax has quit IRC | 04:13 | |
*** gcheresh_ has joined #openstack-lbaas | 04:24 | |
*** gcheresh_ has quit IRC | 04:45 | |
*** ianychoi has quit IRC | 05:00 | |
*** ianychoi has joined #openstack-lbaas | 05:03 | |
*** TrevorV has quit IRC | 05:06 | |
*** yamamoto has quit IRC | 05:34 | |
*** yamamoto has joined #openstack-lbaas | 05:39 | |
*** yamamoto has quit IRC | 05:43 | |
*** rcernin has quit IRC | 05:58 | |
*** ianychoi has quit IRC | 06:01 | |
*** ianychoi has joined #openstack-lbaas | 06:03 | |
*** gthiemonge has quit IRC | 06:05 | |
*** gthiemonge has joined #openstack-lbaas | 06:05 | |
*** ianychoi has quit IRC | 06:11 | |
*** ianychoi has joined #openstack-lbaas | 06:13 | |
rm_work | dawzon: if the default is the same as whatever HAProxy does by default, then there will be no change -- not sure if that's exactly the same as OWASP, but that should be the setting for OLD stuff, I think, and NEW stuff can default to the OWASP list | 06:19 |
rm_work | would be awesome if they line up but not sure they will exactly | 06:19 |
*** yamamoto has joined #openstack-lbaas | 06:25 | |
*** ianychoi has quit IRC | 06:29 | |
*** ianychoi has joined #openstack-lbaas | 06:31 | |
*** ianychoi has quit IRC | 06:39 | |
*** ianychoi has joined #openstack-lbaas | 06:41 | |
*** ianychoi has quit IRC | 06:58 | |
*** ianychoi has joined #openstack-lbaas | 07:05 | |
*** emccormick has joined #openstack-lbaas | 07:25 | |
emccormick | is there any way to get a loadbalancer in error state to reset and retry? | 07:31 |
emccormick | It would be really painful to have to delete and recreate them as several are part of a heat stack | 07:31 |
emccormick | the amphorae went awol during an upgrade because of a config error. I can make new LBs but the old ones won't spawn again | 07:32 |
*** gcheresh_ has joined #openstack-lbaas | 07:44 | |
*** maciejjozefczyk has joined #openstack-lbaas | 07:49 | |
*** yamamoto has quit IRC | 07:59 | |
*** yamamoto has joined #openstack-lbaas | 08:00 | |
*** tesseract has joined #openstack-lbaas | 08:02 | |
*** yamamoto has quit IRC | 08:04 | |
*** yamamoto has joined #openstack-lbaas | 08:05 | |
*** ccamposr has joined #openstack-lbaas | 08:06 | |
emccormick | ok, failover was the magic trick | 08:10 |
emccormick | (for anyone who reads this history and wants an answer to my stupidity) | 08:11 |
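For anyone who finds this log later: the "failover was the magic trick" recovery above can be sketched as follows. This is a minimal sketch, not an authoritative procedure; `$LB_ID` is a placeholder for the UUID or name of the load balancer stuck in ERROR, and it assumes python-octaviaclient is installed and configured against your cloud.

```shell
# Rebuild the amphorae for a load balancer whose provisioning_status
# is ERROR, without deleting and recreating it (so heat stacks that
# reference it stay intact):
openstack loadbalancer failover $LB_ID

# Watch it cycle back to ACTIVE:
openstack loadbalancer show $LB_ID -c provisioning_status -c operating_status
```

Note that, as in emccormick's case, failover only helps once the underlying config error that broke the amphorae has been fixed.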
*** tkajinam has quit IRC | 08:14 | |
*** psachin has quit IRC | 08:15 | |
*** yamamoto_ has joined #openstack-lbaas | 08:18 | |
*** yamamoto has quit IRC | 08:22 | |
*** yamamoto_ has quit IRC | 08:35 | |
*** sapd1_x has quit IRC | 08:36 | |
*** yamamoto has joined #openstack-lbaas | 08:37 | |
*** dulek has quit IRC | 08:49 | |
*** vishalmanchanda has joined #openstack-lbaas | 08:56 | |
*** rpittau|afk is now known as rpittau | 08:58 | |
*** ccamposr has quit IRC | 09:01 | |
*** ccamposr has joined #openstack-lbaas | 09:02 | |
*** dulek has joined #openstack-lbaas | 09:03 | |
*** yamamoto has quit IRC | 09:56 | |
*** yamamoto has joined #openstack-lbaas | 09:57 | |
*** yamamoto has quit IRC | 09:59 | |
*** yamamoto has joined #openstack-lbaas | 10:20 | |
-openstackstatus- NOTICE: The mail server for lists.openstack.org is currently not handling emails. The infra team will investigate and fix during US morning. | 10:26 | |
*** lucadelmonte90 has joined #openstack-lbaas | 10:27 | |
lucadelmonte90 | hello, i have a question regarding octavia amphora lb, probably a dumb one. i deployed a test openstack infrastructure and managed to deploy a load balancer, but when i try to add some servers to the pool or a listener to the lb, the operating status is Offline. i connected to the amphora instance and the backend servers are not reachable. also the | 10:27 |
lucadelmonte90 | amphora instance has only one network card, connected to the octavia management network (amp_boot_network_list), with no ips on the backend network which i assigned when creating the load balancer. is that normal? are backend servers reached by the amphora instance in some other way (router?)? | 10:27 |
*** yamamoto has quit IRC | 10:27 | |
openstackgerrit | Ann Taraday proposed openstack/octavia-tempest-plugin master: TEST https://review.opendev.org/704783 | 10:39 |
*** yamamoto has joined #openstack-lbaas | 11:03 | |
*** yamamoto has quit IRC | 11:09 | |
openstackgerrit | Merged openstack/python-octaviaclient stable/queens: Fix long CLI error messages https://review.opendev.org/706353 | 11:17 |
*** maciejjozefczyk_ has joined #openstack-lbaas | 11:17 | |
*** maciejjozefczyk has quit IRC | 11:18 | |
openstackgerrit | Merged openstack/python-octaviaclient stable/stein: Fix long CLI error messages https://review.opendev.org/706348 | 11:19 |
openstackgerrit | Merged openstack/python-octaviaclient stable/train: Fix long CLI error messages https://review.opendev.org/706347 | 11:21 |
*** ianychoi has quit IRC | 11:24 | |
cgoncalves | lucadelmonte90, hi. you should have a network namespace named amphora-haproxy in the amphora. check with "ip netns exec amphora-haproxy ip a" | 11:26 |
lucadelmonte90 | cgoncalves thanks! you are right, and using that namespace to execute a curl i can reach the backend servers | 11:29 |
lucadelmonte90 | cgoncalves why the operating status is reporting Offline? | 11:30 |
cgoncalves | lucadelmonte90, could you please clarify of which object is the operating status reporting offline? | 11:31 |
cgoncalves | is it the load balancer? pool? health monitor? members? | 11:31 |
lucadelmonte90 | cgoncalves the load balancer, listener and pool | 11:32 |
lucadelmonte90 | the health monitor is instead reporting Online | 11:32 |
lucadelmonte90 | root@amphora-63d97a66-1909-45c0-ad1e-beadc51432bb:~# ip netns exec amphora-haproxy netstat -nlput | 11:33 |
lucadelmonte90 | Active Internet connections (only servers) | 11:33 |
lucadelmonte90 | Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program name | 11:33 |
lucadelmonte90 | tcp   0      0      192.168.1.238:80  0.0.0.0:*     LISTEN  1231/haproxy | 11:33 |
lucadelmonte90 | but this command times out root@amphora-63d97a66-1909-45c0-ad1e-beadc51432bb:~# ip netns exec amphora-haproxy curl 192.168.1.238 | 11:33 |
cgoncalves | lucadelmonte90, OFFLINE operating status means the resource is administratively disabled. could you please share the output of "openstack loadbalancer status show $LB_ID"? | 11:37 |
lucadelmonte90 | [root@ostackkolla ~]# openstack loadbalancer status show lb2 | 11:37 |
lucadelmonte90 | { "loadbalancer": { "name": "lb2", "listeners": [ { | 11:37 |
lucadelmonte90 |   "pools": [ { "name": "Pool1", "provisioning_status": "ACTIVE", | 11:37 |
lucadelmonte90 |   "health_monitor": { "name": "monitorHTTP", "type": "HTTP", "id": "f6333447-2249-4de5-9b32-3f14f3608fba", "operating_status": "ONLINE", "provisioning_status": "ACTIVE" }, | 11:37 |
lucadelmonte90 |   "members": [ { "name": "Test", "provisioning_status": "ACTIVE", "address": "192.168.1.159", "protocol_port": 80, "id": "02fc6366-182f-4d54-89dd-9517b70aae2a", "operating_status": "NO_MONITOR" }, | 11:37 |
lucadelmonte90 |   { "name": "test2", "provisioning_status": "ACTIVE", "address": "192.168.1.225", "protocol_port": 80, "id": "c7d629bb-17f4-4c28-83bd-1a00f37c37df", "operating_status": "NO_MONITOR" } ], | 11:37 |
lucadelmonte90 |   "id": "8c583f04-3684-42b0-ab49-86950b26e2f5", "operating_status": "OFFLINE" } ], | 11:37 |
lucadelmonte90 |   "name": "HTTP", "id": "7967857b-f281-4934-88cc-86ed98d816d3", "operating_status": "OFFLINE", "provisioning_status": "ACTIVE" } ], | 11:37 |
lucadelmonte90 |   "id": "8127ce81-7b94-42f1-a6d6-b401386acfa5", "operating_status": "OFFLINE", "provisioning_status": "ACTIVE" } } | 11:37 |
*** xakaitetoia has joined #openstack-lbaas | 11:38 | |
*** nicolasbock has joined #openstack-lbaas | 11:38 | |
lucadelmonte90 | https://pastebin.com/UbDuLSRU | 11:38 |
cgoncalves | thanks for the pastebin. so the load balancer, listener and pool are in operating status OFFLINE. IIRC, this may indicate our network is not properly configured to allow the periodic health messages from the amphora to be received by the controller | 11:42 |
cgoncalves | lucadelmonte90, please check that the network allows the amphora to reach the node where the octavia health manager service is running on port UDP/5555 and no firewall is dropping those packets | 11:43 |
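cgoncalves's UDP/5555 check can be sketched with standard tools. A hedged example, where `$HM_IP` stands for one address from the controller's `controller_ip_port_list` (it assumes `nc` and `ss` are available on the amphora and the controller respectively):

```shell
# On the amphora (default namespace -- heartbeats do NOT go through
# the amphora-haproxy netns), probe the health-manager endpoint.
# UDP has no handshake, so nc can only report success when no ICMP
# port-unreachable comes back; treat success as "not provably blocked":
nc -u -z -w 3 $HM_IP 5555 && echo "UDP/5555 not rejected"

# On the controller node, confirm the health manager is listening:
ss -lun | grep ':5555'
```

A firewall that silently drops the packets will still pass the nc check, so confirming heartbeats with tcpdump on the controller side is the more reliable test.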
lucadelmonte90 | cgoncalves i see, thanks for the help, i' ll investigate | 11:49 |
cgoncalves | oops, s/indicate our network/indicate the network/. sorry for the dyslexia :) | 11:50 |
lucadelmonte90 | no problem, :D | 11:50 |
lucadelmonte90 | my mistake is probably that the mgmt network that i used for testing the load balancer is an external network | 11:51 |
lucadelmonte90 | i deployed with kolla, and the control node where the octavia-health-manager container runs is not reachable in that subnet | 11:52 |
lucadelmonte90 | i see that the health_manager is bound to the mgmt network, which is not reachable from the network specified in amp_boot_network_list | 11:54 |
cgoncalves | I see. I'm not familiar with kolla-ansible deployments. maybe someone else here has experience and could help. best would be to contact #openstack-kolla | 11:55 |
lucadelmonte90 | the amphora sends the health checks to the controller via the default namespace, right (so over the network specified in amp_boot_network_list), and not from the amphora netns | 11:57 |
lucadelmonte90 | ? | 11:57 |
cgoncalves | correct | 11:58 |
*** rpittau is now known as rpittau|bbl | 11:58 | |
lucadelmonte90 | perfect, thank you very much for the clarifications, you have been very helpful : ) | 11:58 |
cgoncalves | it sends to all controller nodes listed under controller_ip_port_list config in a round-robin way | 11:58 |
cgoncalves | glad we could help. feel free to reach out again anytime for octavia questions :) | 11:59 |
openstackgerrit | Ann Taraday proposed openstack/octavia master: Check https://review.opendev.org/712091 | 12:03 |
openstackgerrit | Ann Taraday proposed openstack/octavia-tempest-plugin master: TEST https://review.opendev.org/704783 | 12:16 |
*** lucadelmonte90 has quit IRC | 12:35 | |
*** irclogbot_3 has quit IRC | 13:22 | |
*** irclogbot_3 has joined #openstack-lbaas | 13:23 | |
*** gcheresh_ has quit IRC | 13:24 | |
*** gcheresh has joined #openstack-lbaas | 13:24 | |
*** ataraday_ has joined #openstack-lbaas | 13:26 | |
*** ataraday_ has quit IRC | 13:31 | |
*** yamamoto has joined #openstack-lbaas | 13:36 | |
*** maciejjozefczyk_ is now known as maciejjozefczyk | 13:41 | |
*** yamamoto has quit IRC | 13:49 | |
*** zigo has quit IRC | 13:49 | |
*** servagem has joined #openstack-lbaas | 13:55 | |
*** vishalmanchanda has quit IRC | 14:00 | |
*** sapd1_x has joined #openstack-lbaas | 14:25 | |
*** yamamoto has joined #openstack-lbaas | 14:25 | |
johnsom | Sigh, yet another person struggling because they used kolla. I wish someone would go fix kolla to not be broken out of the box. | 14:27 |
johnsom | cgoncalves: Thanks for helping them out. | 14:27 |
*** yamamoto has quit IRC | 14:31 | |
*** armax has joined #openstack-lbaas | 14:38 | |
*** vishalmanchanda has joined #openstack-lbaas | 14:41 | |
rm_work | I don't understand how it'd be possible to fix that? it TOTALLY depends on how your network is set up? | 15:00 |
rm_work | there's no automated way to make that happen AFAIU :/ | 15:00 |
johnsom | rm_work Kolla is the only deployment tool for OpenStack that doesn't set up the network for the user | 15:01 |
rm_work | i have no idea how the other ones could possibly do it | 15:01 |
rm_work | i assume they make assumptions and only work like 20% of the time or something | 15:02 |
* rm_work shrugs | 15:02 | |
rm_work | like, conceptually it seems impossible to do, every deployment is completely different IME | 15:03 |
*** TrevorV has joined #openstack-lbaas | 15:09 | |
*** yamamoto has joined #openstack-lbaas | 15:17 | |
*** yamamoto has quit IRC | 15:25 | |
*** gcheresh has quit IRC | 15:33 | |
*** sapd1_x has quit IRC | 15:51 | |
*** zigo has joined #openstack-lbaas | 15:52 | |
*** ataraday_ has joined #openstack-lbaas | 15:57 | |
*** yamamoto has joined #openstack-lbaas | 15:57 | |
rm_work | #startmeeting Octavia | 16:01 |
openstack | Meeting started Wed Mar 11 16:01:05 2020 UTC and is due to finish in 60 minutes. The chair is rm_work. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:01 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:01 |
*** openstack changes topic to " (Meeting topic: Octavia)" | 16:01 | |
openstack | The meeting name has been set to 'octavia' | 16:01 |
rm_work | #chair johnsom | 16:01 |
openstack | Current chairs: johnsom rm_work | 16:01 |
rm_work | #chair cgoncalves | 16:01 |
openstack | Current chairs: cgoncalves johnsom rm_work | 16:01 |
cgoncalves | hi | 16:01 |
gthiemonge | Hi | 16:01 |
rm_work | hey, carlos can you run this? both johnsom and I are in meetings internally that are running over | 16:01 |
johnsom | Hi | 16:02 |
cgoncalves | I am on a call too :/ | 16:02 |
johnsom | Daylight time is fun! | 16:02 |
cgoncalves | #topic Announcements | 16:02 |
*** openstack changes topic to "Announcements (Meeting topic: Octavia)" | 16:02 | |
ataraday_ | hi | 16:03 |
cgoncalves | I have not prepared anything, sorry. is there anything relevant to share? | 16:03 |
*** yamamoto has quit IRC | 16:03 | |
johnsom | The last release of octavia-lib date is coming up quickly | 16:04 |
johnsom | Week of March 30 | 16:04 |
cgoncalves | thanks | 16:05 |
cgoncalves | Any other announcements this week? | 16:05 |
cgoncalves | #topic Brief progress reports / bugs needing review | 16:06 |
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)" | 16:06 | |
johnsom | #link https://releases.openstack.org/ussuri/schedule.html | 16:06 |
*** maciejjozefczyk has quit IRC | 16:07 | |
ataraday_ | Highlight for #link https://review.opendev.org/#/c/709696/ | 16:07 |
johnsom | Oye, Ummmm, I'm down to just the network driver testing work. I have had some internal distractions that have slowed this down. | 16:07 |
johnsom | I also needed to address a few other small bugs, you probably saw the patches for. | 16:07 |
johnsom | Plan is to wrap failover as soon as possible and start the v2 driver work on it. | 16:08 |
ataraday_ | And wanted to raise attention for this bug #link https://storyboard.openstack.org/#!/story/2007340 - please share your thoughts. | 16:08 |
johnsom | I saw it, but have not had time to read/understand it. | 16:09 |
cgoncalves | I have been busy internally and with tripleo. I did some work on the octavia tempest plugin side to add coverage for allowed CIDRs. sadly the jobs fail occasionally; it is a shortcoming in Neutron where the SG rule doesn't get configured in the agent before Octavia completes its flow and returns, and tempest then checks whether the allowed CIDR is applied on the listener. | 16:10 |
johnsom | ataraday_ Are you keeping the priority review list up to date for the jobboard chain? I think in the next week or so we really need to prioritize getting the last base patches reviewed and merged. IMO, this is a *must* feature for U. | 16:12 |
ataraday_ | johnsom, yes, things are on track there. Just two changes left - one small I pointed earlier | 16:13 |
ataraday_ | and the main change | 16:13 |
johnsom | Ok, thank you | 16:14 |
johnsom | Any other updates this week? | 16:15 |
johnsom | #topic Open Discussion | 16:16 |
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)" | 16:16 | |
johnsom | One item I wanted to mention today, the upstream periodic image building jobs are not working | 16:16 |
johnsom | They stopped early last month. | 16:16 |
johnsom | It appears the python "yaml" package issue is going on there. | 16:17 |
johnsom | If anyone has some time, it would be great if you could track that down. | 16:17 |
johnsom | Maybe work with those DIB cores.... grin | 16:17 |
cgoncalves | http://zuul.openstack.org/builds?project=openstack%2Foctavia&pipeline=periodic-stable&pipeline=periodic | 16:17 |
* cgoncalves wears the Octavia hat at all times | 16:18 | |
johnsom | I think I had some patches up for new image building jobs, but it's been such chaos recently I'm not 100% sure if those are related or not | 16:18 |
cgoncalves | I will check what is going on there | 16:18 |
rm_work | weird ok, we have not had issues with this internally and we build centos images from master :/ | 16:19 |
rm_work | have not seen any build failures | 16:19 |
johnsom | Ah, mine is just a "test" not a publish, so different issues | 16:19 |
johnsom | I *think* this may just be that DIB has not released a fixed package for this issue yet, but not sure | 16:20 |
johnsom | #link https://review.opendev.org/#/c/706393/ | 16:20 |
johnsom | That is the "test" job patch | 16:20 |
cgoncalves | ianw released DIB 2.43.0 today (?) | 16:20 |
johnsom | Ah, yes, maybe that will fix it???? | 16:21 |
cgoncalves | I can check | 16:21 |
johnsom | Thank you | 16:22 |
johnsom | Any other topics today? | 16:22 |
cgoncalves | thanks for monitoring the periodic jobs! | 16:22 |
johnsom | Ok, if there are not any more topics we can close it out. | 16:24 |
johnsom | Maybe our PTL rm_work can scrub the priority list and bring a proposed list of features that are at risk of not making Ussuri next week. | 16:24 |
johnsom | We are pretty much up to the feature in/out time. | 16:25 |
rm_work | that would be awesome if i could do that | 16:25 |
rm_work | yes, so I have time in my sprint this week/next for actually working on the multi-vip stuff | 16:25 |
rm_work | but I am worried that may not make it into U even though the lib change did | 16:25 |
rm_work | which is fine | 16:25 |
rm_work | I think our real bottleneck is legitimately review cycles | 16:25 |
rm_work | and even if the work can get done, there's no cycles to get people to review it | 16:26 |
rm_work | for instance, STILL waiting on reviews for outstanding AZ code bugs that are making AZ support unusable | 16:26 |
rm_work | (thanks carlos for the +2 on the first one) | 16:26 |
rm_work | so I will try to take some time to scrub that and make some decisions about what we will most likely have to delay till next cycle, and what we need to focus on for this cycle | 16:27 |
johnsom | Ok, any thing else today or onward? | 16:27 |
johnsom | Ok, thanks everyone! | 16:29 |
rm_work | o/ | 16:29 |
johnsom | #endmeeting | 16:29 |
*** openstack changes topic to "Discussions for OpenStack Octavia | Priority bug review list: https://etherpad.openstack.org/p/octavia-priority-reviews" | 16:29 | |
openstack | Meeting ended Wed Mar 11 16:29:06 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:29 |
cgoncalves | o/ | 16:29 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/octavia/2020/octavia.2020-03-11-16.01.html | 16:29 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/octavia/2020/octavia.2020-03-11-16.01.txt | 16:29 |
openstack | Log: http://eavesdrop.openstack.org/meetings/octavia/2020/octavia.2020-03-11-16.01.log.html | 16:29 |
rm_work | whelp, got out of my meeting in time to be here for 5 minutes of this one :D | 16:29 |
xgerman | lol | 16:36 |
johnsom | rm_work BTW, I tested the barbican client thing yesterday (with a master devstack) and it behaved as expected, so, not sure what is up with that story. Glad we stopped the revert. | 16:38 |
rm_work | yes | 16:38 |
johnsom | I think we need more information about what version they are running, etc. | 16:39 |
johnsom | It *seems* like they are passing a secret reference instead of the container into the listener create, but the story docs don't show that. If that is the case, the exception is valid. | 16:40 |
*** rpittau|bbl is now known as rpittau | 16:43 | |
*** nicolasbock has quit IRC | 17:14 | |
*** nicolasbock has joined #openstack-lbaas | 17:20 | |
*** gthiemonge has quit IRC | 17:30 | |
dawzon | So I did a little bit of digging and it seems like HAProxy doesn't have any of its own built-in defaults for ciphers (https://github.com/haproxy/haproxy/blob/751e5e21a9b9228dc035bd4c65fe65a043b31f77/Makefile#L237). Either a list of defaults is specified at compile time, or it uses the default cipher string provided by OpenSSL (i.e. there's not a master default we can rely on). So there's potential for | 17:58 |
dawzon | it to change between distros and different versions of OpenSSL, although I can't say how much it actually varies in practice. Seems like in an ideal scenario there would be a way to query the cipher strings of each instance of haproxy, then insert that into the database, but is there even a way to perform a migration like that? | 17:58 |
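dawzon's point that the fallback is "whatever this OpenSSL build says" is easy to demonstrate; a quick sketch, assuming the `openssl` CLI is installed (its output is exactly the thing that varies between distros and OpenSSL versions):

```shell
# Expand this build's DEFAULT cipher string -- the list HAProxy
# inherits when no ciphers are baked in at compile time:
openssl ciphers 'DEFAULT' | tr ':' '\n' | head -n 5

# Count how many ciphers DEFAULT expands to on this build:
openssl ciphers 'DEFAULT' | tr ':' '\n' | wc -l
```

Running this on two different hosts and diffing the full lists is a cheap way to gauge how much the default actually varies in practice.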
*** vishalmanchanda has quit IRC | 18:00 | |
*** gthiemonge has joined #openstack-lbaas | 18:04 | |
johnsom | dawzon Well, I don't see ciphers, only protocols even being exposed in haproxy and I don't really like the idea of trying to poll haproxy for that information. | 18:07 |
johnsom | I think we either need to have some type of "undefined", such as "None" implies today, or set a default in the DB at migration time and add a release note for operators to failover to pick up the new default. | 18:09 |
johnsom | rm_work What do you think? | 18:09 |
*** gcheresh has joined #openstack-lbaas | 18:16 | |
rm_work | the least possibly impactful would be a None that just doesn't set this | 18:16 |
rm_work | for legacy objects | 18:17 |
rm_work | (and we set the default on all new objects moving forward) | 18:17 |
rm_work | BUT that introduces ongoing overhead for maintainability | 18:17 |
rm_work | picking defaults from OpenSSL and backfilling everything with that seems sane, except that will have the side effect of deploying a technically new and unrelated config option on the next touch of the LB, which could lead to things like "i added a member and suddenly the LB is rejecting my SSL requests" | 18:19 |
rm_work | as a maintainer I prefer option 2, but the safer option is 1 | 18:19 |
*** tesseract has quit IRC | 18:43 | |
*** gcheresh has quit IRC | 19:31 | |
*** rpittau is now known as rpittau|afk | 19:40 | |
*** yamamoto has joined #openstack-lbaas | 20:33 | |
*** yamamoto has quit IRC | 20:38 | |
*** servagem has quit IRC | 20:38 | |
*** gcheresh has joined #openstack-lbaas | 21:02 | |
*** rcernin has joined #openstack-lbaas | 21:28 | |
*** gcheresh has quit IRC | 21:50 | |
*** spatel has joined #openstack-lbaas | 22:12 | |
*** spatel has quit IRC | 22:17 | |
*** jrosser has quit IRC | 22:31 | |
*** rm_work has quit IRC | 22:31 | |
*** andrein has quit IRC | 22:31 | |
*** xakaitetoia has quit IRC | 22:33 | |
*** rm_work has joined #openstack-lbaas | 22:33 | |
*** jrosser has joined #openstack-lbaas | 22:36 | |
*** andrein has joined #openstack-lbaas | 22:36 | |
*** guilhermesp has quit IRC | 22:36 | |
*** generalfuzz has quit IRC | 22:36 | |
*** mnaser has quit IRC | 22:36 | |
*** gmann has quit IRC | 22:36 | |
*** fyx has quit IRC | 22:36 | |
*** dawzon has quit IRC | 22:36 | |
*** beisner has quit IRC | 22:36 | |
*** dougwig has quit IRC | 22:36 | |
*** luketollefson has quit IRC | 22:36 | |
*** fyx has joined #openstack-lbaas | 22:38 | |
*** dougwig has joined #openstack-lbaas | 22:38 | |
*** mnaser has joined #openstack-lbaas | 22:39 | |
*** beisner has joined #openstack-lbaas | 22:40 | |
*** gmann has joined #openstack-lbaas | 22:40 | |
*** guilhermesp has joined #openstack-lbaas | 22:41 | |
*** luketollefson has joined #openstack-lbaas | 22:42 | |
*** dawzon has joined #openstack-lbaas | 22:42 | |
*** TrevorV has quit IRC | 22:44 | |
*** generalfuzz has joined #openstack-lbaas | 22:44 | |
*** irclogbot_3 has quit IRC | 22:47 | |
*** irclogbot_0 has joined #openstack-lbaas | 22:48 | |
*** openstackstatus has quit IRC | 22:48 | |
*** stevenglasford has quit IRC | 22:50 | |
*** xgerman has quit IRC | 22:50 | |
*** nmickus has quit IRC | 22:50 | |
*** coreycb has quit IRC | 22:50 | |
*** ccamposr has quit IRC | 22:50 | |
*** xgerman has joined #openstack-lbaas | 22:52 | |
*** coreycb has joined #openstack-lbaas | 22:54 | |
*** nmickus has joined #openstack-lbaas | 22:55 | |
*** emccormick has quit IRC | 22:56 | |
*** lxkong has quit IRC | 22:56 | |
*** stevenglasford has joined #openstack-lbaas | 22:57 | |
*** emccormick has joined #openstack-lbaas | 23:01 | |
*** tkajinam has joined #openstack-lbaas | 23:01 | |
*** lxkong has joined #openstack-lbaas | 23:02 | |
*** ianychoi has joined #openstack-lbaas | 23:06 | |
*** yamamoto has joined #openstack-lbaas | 23:47 | |
*** KeithMnemonic has quit IRC | 23:51 | |
*** KeithMnemonic has joined #openstack-lbaas | 23:51 | |
*** yamamoto has quit IRC | 23:54 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!