*** yamamoto has quit IRC | 00:00 | |
*** ktibi_ has quit IRC | 00:02 | |
*** SumitNaiksatam has quit IRC | 00:03 | |
*** blake has quit IRC | 00:22 | |
*** longkb has joined #openstack-lbaas | 00:23 | |
*** Swami has quit IRC | 00:25 | |
*** Swami_ has quit IRC | 00:25 | |
*** annp has joined #openstack-lbaas | 00:48 | |
*** blake has joined #openstack-lbaas | 00:49 | |
*** blake has quit IRC | 00:52 | |
*** yamamoto has joined #openstack-lbaas | 00:56 | |
*** yamamoto has quit IRC | 01:02 | |
*** fnaval has joined #openstack-lbaas | 01:16 | |
*** yamamoto has joined #openstack-lbaas | 01:58 | |
*** hongbin has joined #openstack-lbaas | 02:01 | |
*** yamamoto has quit IRC | 02:04 | |
*** yamamoto has joined #openstack-lbaas | 03:00 | |
*** yamamoto has quit IRC | 03:05 | |
*** SumitNaiksatam has joined #openstack-lbaas | 03:13 | |
*** yamamoto has joined #openstack-lbaas | 03:38 | |
*** longkb has quit IRC | 03:55 | |
*** longkb has joined #openstack-lbaas | 03:55 | |
*** annp has quit IRC | 04:00 | |
*** annp has joined #openstack-lbaas | 04:00 | |
*** hongbin has quit IRC | 04:21 | |
*** threestrands has quit IRC | 04:47 | |
*** links has joined #openstack-lbaas | 05:00 | |
*** pcaruana has quit IRC | 05:09 | |
*** nmanos has joined #openstack-lbaas | 05:25 | |
*** fnaval has quit IRC | 05:50 | |
*** nmanos has quit IRC | 06:06 | |
*** pcaruana has joined #openstack-lbaas | 06:17 | |
*** kobis has joined #openstack-lbaas | 06:18 | |
*** kobis has quit IRC | 06:19 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 06:20 | |
*** kobis has joined #openstack-lbaas | 06:20 | |
*** threestrands has joined #openstack-lbaas | 06:27 | |
*** AlexeyAbashkin has quit IRC | 06:33 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 06:59 | |
*** tesseract has joined #openstack-lbaas | 07:16 | |
*** nmanos has joined #openstack-lbaas | 07:19 | |
*** kobis has quit IRC | 07:22 | |
*** kobis has joined #openstack-lbaas | 07:27 | |
*** rcernin has quit IRC | 07:27 | |
*** sapd has quit IRC | 07:27 | |
*** sapd has joined #openstack-lbaas | 07:27 | |
cgoncalves | rm_work, I don't know (ansible RH person). the impression I have is that the ansible community is pretty quick with reviews so if they are not, ping me later :) | 08:02 |
*** ramishra has joined #openstack-lbaas | 08:19 | |
*** openstackgerrit has joined #openstack-lbaas | 08:26 | |
openstackgerrit | Merged openstack/octavia master: Improve the error logging for zombie amphora https://review.openstack.org/561369 | 08:26 |
ramishra | Hi All, Any idea why we see these errors intermittently with octavia? http://logs.openstack.org/61/502961/13/check/heat-functional-orig-mysql-lbaasv2/84df284/logs/screen-o-hm.txt.gz#_Jun_14_07_25_34_346443 | 08:28 |
*** ianychoi has quit IRC | 08:39 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia master: Add grenade support https://review.openstack.org/549654 | 08:43 |
cgoncalves | ramishra, hi. https://review.openstack.org/#/c/561369/ may fix the error you're seeing. it was merged a few minutes ago. worth rechecking your patch | 08:48 |
ramishra | cgoncalves: thanks! Will recheck | 08:51 |
*** AlexeyAbashkin has quit IRC | 08:53 | |
ramishra | cgoncalves: hopefully it's related. But we see these kinds of errors from time to time and I don't see any way to dig deeper for further details... maybe some db transaction issues? | 08:55
cgoncalves | ramishra, let's hope it fixes :) | 08:57 |
cgoncalves | ramishra, dunno, I'd need to check it a bit more | 08:57 |
ramishra | cgoncalves: OK, thanks! | 09:02 |
*** AlexeyAbashkin has joined #openstack-lbaas | 09:04 | |
*** salmankhan has joined #openstack-lbaas | 09:07 | |
*** kobis has quit IRC | 09:28 | |
*** links has quit IRC | 09:33 | |
*** links has joined #openstack-lbaas | 09:33 | |
*** threestrands has quit IRC | 09:54 | |
*** AlexeyAbashkin has quit IRC | 10:03 | |
*** kobis has joined #openstack-lbaas | 10:07 | |
*** kobis has quit IRC | 10:07 | |
*** salmankhan has quit IRC | 10:07 | |
*** kobis has joined #openstack-lbaas | 10:08 | |
*** salmankhan has joined #openstack-lbaas | 10:10 | |
*** salmankhan has quit IRC | 10:13 | |
*** salmankhan has joined #openstack-lbaas | 10:13 | |
*** annp has quit IRC | 11:02 | |
*** yamamoto has quit IRC | 11:02 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 11:03 | |
*** yamamoto has joined #openstack-lbaas | 11:44 | |
*** yamamoto has quit IRC | 11:48 | |
*** atoth has joined #openstack-lbaas | 12:04 | |
*** yamamoto has joined #openstack-lbaas | 12:06 | |
*** yamamoto has quit IRC | 12:10 | |
*** amuller has joined #openstack-lbaas | 12:24 | |
*** longkb has quit IRC | 12:26 | |
*** fnaval has joined #openstack-lbaas | 12:32 | |
*** yamamoto has joined #openstack-lbaas | 12:40 | |
*** yamamoto has quit IRC | 12:45 | |
*** ianychoi has joined #openstack-lbaas | 12:46 | |
*** AlexeyAbashkin has quit IRC | 12:56 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 13:04 | |
*** yamamoto has joined #openstack-lbaas | 13:22 | |
*** yamamoto has quit IRC | 13:27 | |
*** kobis has quit IRC | 13:28 | |
*** salmankhan1 has joined #openstack-lbaas | 13:29 | |
*** salmankhan has quit IRC | 13:30 | |
*** salmankhan1 is now known as salmankhan | 13:30 | |
*** yamamoto has joined #openstack-lbaas | 13:38 | |
*** yamamoto has quit IRC | 13:42 | |
*** AlexeyAbashkin has quit IRC | 13:43 | |
*** kobis has joined #openstack-lbaas | 13:44 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 13:46 | |
*** nmanos has quit IRC | 13:56 | |
*** yamamoto has joined #openstack-lbaas | 14:08 | |
*** yamamoto has quit IRC | 14:12 | |
*** links has quit IRC | 14:15 | |
*** kobis has quit IRC | 14:16 | |
*** yamamoto has joined #openstack-lbaas | 14:20 | |
*** yamamoto has quit IRC | 14:20 | |
*** kobis has joined #openstack-lbaas | 14:30 | |
*** AlexeyAbashkin has quit IRC | 14:32 | |
*** ramishra has quit IRC | 14:37 | |
*** Swami has joined #openstack-lbaas | 14:54 | |
*** kobis has quit IRC | 15:12 | |
*** salmankhan has quit IRC | 15:18 | |
*** yamamoto has joined #openstack-lbaas | 15:20 | |
*** pcaruana has quit IRC | 15:24 | |
*** yamamoto has quit IRC | 15:25 | |
cgoncalves | johnsom, hi. https://review.openstack.org/#/c/554004/ requires a story, no? | 15:56 |
johnsom | cgoncalves Yes, it should | 15:57 |
johnsom | That stuff is for German's proxy plugin | 15:57 |
xgerman_ | yeah, why is that not merged ;-) | 15:58 |
cgoncalves | I know. I'm about to -1 it because of missing story | 15:58 |
cgoncalves | it was required for https://review.openstack.org/#/c/568361/ so we should be consistent | 15:58 |
johnsom | Go for it | 15:58 |
xgerman_ | or write the story and add it to the patch — be part of the solution | 15:59 |
cgoncalves | xgerman_, you may have a good looking traceback worth attaching to the story :P | 15:59 |
johnsom | Yeah, I have to say that patch is *super* low on my personal priority list right now | 16:00 |
cgoncalves | johnsom, the grenade patch passed CI with your "Improve the error logging for zombie amphora" patch in and without Adam's | 16:14 |
johnsom | Yep, good. Now that zombie merged we should be able to merge grenade! | 16:15 |
*** yamamoto has joined #openstack-lbaas | 16:22 | |
*** yamamoto has quit IRC | 16:27 | |
*** Swami has quit IRC | 16:35 | |
*** SumitNaiksatam has quit IRC | 16:36 | |
*** kobis has joined #openstack-lbaas | 16:43 | |
*** kobis has quit IRC | 16:45 | |
*** SumitNaiksatam has joined #openstack-lbaas | 17:02 | |
rm_work | johnsom: well, it looks like they fixed whatever it was upstream in centos | 17:18 |
rm_work | johnsom: as well as the ridiculous cloud-init bug | 17:18 |
rm_work | so | 17:18 |
* rm_work shrugs | 17:18 | |
johnsom | Nice | 17:18 |
rm_work | two fewer patches for me | 17:19
*** yamamoto has joined #openstack-lbaas | 17:23 | |
*** yamamoto has quit IRC | 17:29 | |
*** tesseract has quit IRC | 17:30 | |
*** pcaruana has joined #openstack-lbaas | 17:43 | |
johnsom | rm_work Can you abandon that patch then? | 17:51 |
*** salmankhan has joined #openstack-lbaas | 17:51 | |
rm_work | I guess so | 17:53 |
rm_work | it irks you a lot, doesn't it :P | 17:53 |
johnsom | Well, plus I would like to reduce the ghost patches lying around. | 17:55
*** salmankhan has quit IRC | 17:56 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Experimental multi-az support https://review.openstack.org/558962 | 18:00 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: WIP: AZ Evacuation resource https://review.openstack.org/559873 | 18:00 |
*** pcaruana has quit IRC | 18:01 | |
openstackgerrit | Merged openstack/octavia master: Add grenade support https://review.openstack.org/549654 | 18:06 |
johnsom | Wahoo! | 18:06 |
*** pcaruana has joined #openstack-lbaas | 18:06 | |
johnsom | rm_work Is this one dead now too? https://review.openstack.org/#/c/572702/ | 18:07 |
* johnsom is building the priority review list so looking through stuff | 18:07 | |
*** kobis has joined #openstack-lbaas | 18:08 | |
rm_work | ehhhhh | 18:08 |
rm_work | let me see | 18:08 |
johnsom | My zombie version merged | 18:08 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s) https://review.openstack.org/435612 | 18:09 |
rm_work | yeah | 18:10 |
rm_work | i'm trying to make sure | 18:11 |
*** kobis has quit IRC | 18:13 | |
rm_work | so the only thing is that mine tries a little harder than yours | 18:15 |
rm_work | but probably it shouldn't (?) be possible for it to matter | 18:15 |
rm_work | so, sure | 18:16 |
cgoncalves | johnsom, I take that we'll deprecate the 'octavia' provider alias at some point, so a follow-up patch for grenade would be to create a LB with provider=octavia, update DB and verify | 18:19 |
johnsom | Yes, it will go away, but I don't see a need to rush. | 18:20 |
cgoncalves | sure | 18:21 |
johnsom | I think the next step is to make grenade voting, then maybe work on testing that the data plane (amps) don't go down. I think that is the next step on the upgrade tag ladder | 18:22 |
cgoncalves | yes for making it voting | 18:23 |
cgoncalves | aren't we testing that data plane is up during upgrade? I think we are | 18:23 |
cgoncalves | that's the verify_noapi function | 18:23 |
johnsom | That would be great if we are | 18:23 |
cgoncalves | we curl the vip | 18:23 |
johnsom | Ok. I haven't looked at the patch in a while | 18:24 |
johnsom | So cool | 18:24 |
johnsom | So I guess the next step is getting the multi-node stuff going again and work on rolling upgrades | 18:24 |
cgoncalves | spreading out o-* services across multiple nodes? | 18:26 |
johnsom | Yep | 18:26 |
cgoncalves | if so, we'd need to change a few things in devstack plugin right? to make sure o-worker can reach amps without relying on o-hm0 | 18:27 |
cgoncalves | it should have its own interface or whatever | 18:27
johnsom | Wouldn't we have an o-hm0 on each control plane instance? | 18:27 |
openstackgerrit | Adam Harwell proposed openstack/neutron-lbaas master: Add gate for Direct L7-Proxy to Octavia https://review.openstack.org/561049 | 18:30 |
rm_work | I always link o-hm to the dataplane <_< | 18:30 |
johnsom | before charging the flux capacitor | 18:31 |
rm_work | ^_^ | 18:31 |
cgoncalves | johnsom, the naming 'o-hm0' gives the understanding that it's more for the o-hm service | 18:31 |
rm_work | ah you're talking about the interface | 18:31 |
rm_work | not the service | 18:31 |
cgoncalves | so in case of composable deployments where o-hm is not running on a given (set of) controller nodes..... | 18:31 |
johnsom | Yeah, true, but, it's just a devstack hack, so, does it matter? | 18:32 |
cgoncalves | depends on how multi-node is set up I guess | 18:32 |
*** salmankhan has joined #openstack-lbaas | 18:32 | |
johnsom | Not sure if we can do fake VLANs in multi-node to support adding a provider network | 18:33 |
*** yamamoto has joined #openstack-lbaas | 18:34 | |
cgoncalves | I'm not sure either | 18:34 |
*** yamamoto has quit IRC | 18:39 | |
*** yamamoto has joined #openstack-lbaas | 18:49 | |
*** kobis has joined #openstack-lbaas | 18:53 | |
*** yamamoto has quit IRC | 18:53 | |
johnsom | Ok folks, I have updated the priority patch review list for Rocky: https://etherpad.openstack.org/p/octavia-priority-reviews | 18:53 |
johnsom | The link is also in the topic for this channel | 18:54 |
johnsom | If I missed something you think is a priority feel free to add it to the Awaiting section and ping me | 18:54 |
johnsom | These are things we think are a priority to get into Rocky | 18:55 |
rm_work | xgerman_: https://review.openstack.org/#/c/573470/ | 18:56 |
rm_work | or nmagnezi | 18:56 |
*** yamamoto has joined #openstack-lbaas | 18:58 | |
rm_work | thx | 18:58 |
*** kobis has quit IRC | 19:06 | |
*** amuller has quit IRC | 19:07 | |
*** yamamoto has quit IRC | 19:07 | |
*** yamamoto has joined #openstack-lbaas | 19:16 | |
*** yamamoto has quit IRC | 19:16 | |
*** aojea_ has joined #openstack-lbaas | 19:41 | |
*** kobis has joined #openstack-lbaas | 19:51 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Gather fail/pass after executor is done https://review.openstack.org/477720 | 19:54 |
*** yamamoto has joined #openstack-lbaas | 20:16 | |
*** yamamoto has quit IRC | 20:22 | |
openstackgerrit | Merged openstack/octavia-tempest-plugin master: Spare amps have no role https://review.openstack.org/573470 | 20:35 |
*** kobis has quit IRC | 20:35 | |
eandersson | Anyone happen to know how the policy worked for neutron-lbaas? | 20:36 |
eandersson | If I want to allow a role to list all load balancers | 20:36 |
eandersson | from all projects | 20:36 |
johnsom | Ummm, by default it's "owner or admin", but you can override that | 20:36 |
eandersson | I just can't find what that key is called in the policy | 20:36 |
johnsom | Sure, just a minute, I will find it | 20:37 |
eandersson | oh | 20:37 |
eandersson | https://github.com/openstack/neutron-lbaas/blob/master/etc/neutron/policy.json.sample | 20:37 |
eandersson | I am silly | 20:37 |
eandersson | although not sure how one of these would control that | 20:38 |
johnsom | You know if you were running Octavia: https://docs.openstack.org/octavia/latest/configuration/policy.html GRIN | 20:38 |
eandersson | haha | 20:38 |
eandersson | We are moving towards it | 20:39 |
eandersson | but even if we deployed it today, it would take time to migrate over :p | 20:39 |
johnsom | Did you need help defining the rule? | 20:39 |
eandersson | Yea - if you have a moment | 20:39 |
johnsom | Sure, what is it you want to do? | 20:39 |
johnsom | A global list all lbs? | 20:40 |
eandersson | So basically we have a readonly user that we want to be able to list all load balancers (across all projects) | 20:40 |
eandersson | Yea | 20:40 |
johnsom | so the rule would be "rule:admin_or_owner or role:<audit role name>" where <audit role name> is the role you setup for this and assigned to the user | 20:41 |
johnsom | Let me find the root policies for neutron, that file looks old and incomplete | 20:41 |
eandersson | https://github.com/openstack/neutron/blob/master/etc/policy.json | 20:42 |
johnsom | Hmmm, yeah, it must be "get_loadbalancer" though I am rusty on the neutron-lbaas side | 20:46 |
johnsom | Yeah, their granularity is only at the HTTP method, so GET loadbalancer and GET loadbalancers is the same RBAC rule | 20:55 |
eandersson | Yea - but looks like it's "" by default, which means no restrictions | 21:01 |
johnsom | No, it falls back to the "default" rule | 21:04 |
eandersson | Ah | 21:04 |
johnsom | Which is owner_or_admin | 21:04 |
johnsom | https://github.com/openstack/neutron/blob/master/etc/policy.json#L15 | 21:04 |
eandersson | Is that a neutron special? because according to olso.policy : "" means always | 21:08 |
eandersson | https://docs.openstack.org/oslo.policy/latest/admin/policy-json-file.html | 21:08 |
eandersson | Ah I see - default is what is used if the rule is not defined, but "" means always. | 21:10 |
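[Editor's note: putting the thread together, a policy.json override for this audit use case might look like the fragment below. The rule name "get_loadbalancer" is the one guessed at later in the conversation, and the role name "lb_auditor" is made up for illustration; check your deployment's actual policy keys before using it.]

```json
{
    "default": "rule:admin_or_owner",
    "get_loadbalancer": "rule:admin_or_owner or role:lb_auditor"
}
```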
johnsom | There you go, ok | 21:11 |
*** SumitNaiksatam has quit IRC | 21:12 | |
eandersson | tl;dr octavia>lbaas :p | 21:13 |
johnsom | We think so | 21:14 |
*** yamamoto has joined #openstack-lbaas | 21:19 | |
*** yamamoto has quit IRC | 21:24 | |
rm_work | why is octavia-grenade just going to ERROR on all of the currently running jobs | 21:37 |
rm_work | ? | 21:37 |
rm_work | look at zuul status | 21:37 |
johnsom | Hmm, that is odd. Well, we have to wait for the run to finish to see the zuul error | 21:38 |
johnsom | Looks like the experimental AZ support patch will be the first to finish | 21:40 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Ignore a port not found when deleting an LB https://review.openstack.org/564848 | 21:46 |
eandersson | johnsom, figured it out | 22:00 |
eandersson | > is_advsvc | 22:00 |
eandersson | This is the flag that determines if you can get all lbs or not | 22:00 |
johnsom | hmmm | 22:01 |
eandersson | actually is_admin or is_advsvc | 22:04 |
johnsom | Yeah, just found it in the neutron code. very odd.... | 22:05 |
johnsom | https://github.com/openstack/neutron/blob/master/neutron/db/_utils.py#L75 | 22:07 |
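[Editor's note: the scoping check linked above boils down to roughly the logic below. This is a hedged paraphrase, not the verbatim neutron code; the helper name and the standalone `Ctx` class are invented here for illustration, while `is_admin`/`is_advsvc` are the real context attributes being discussed.]

```python
# Hedged paraphrase of the project-scoping check in neutron/db/_utils.py:
# a DB query is filtered to the caller's own project unless the request
# context is admin or an "advanced service" (is_advsvc) context.
def query_scoped_to_project(context, model_has_project_id):
    """Return True if the DB query should be filtered by project."""
    if context.is_admin or context.is_advsvc:
        # admins and advsvc callers see resources from all projects
        return False
    # ordinary callers only see their own project's resources
    return model_has_project_id


class Ctx:
    """Minimal stand-in for a neutron request context (illustration only)."""
    def __init__(self, is_admin=False, is_advsvc=False):
        self.is_admin = is_admin
        self.is_advsvc = is_advsvc


print(query_scoped_to_project(Ctx(), True))                # regular user -> True
print(query_scoped_to_project(Ctx(is_advsvc=True), True))  # advsvc -> False
```

This is why eandersson's read-only user could only list all load balancers once the context carried `is_admin` or `is_advsvc`.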
eandersson | Yep | 22:08 |
eandersson | Was just looking at the same code | 22:08 |
johnsom | I think neutron is in need of an RBAC overhaul | 22:09 |
*** aojea_ has quit IRC | 22:11 | |
eandersson | Yep | 22:16 |
eandersson | Thanks for the help johnsom | 22:17 |
eandersson | Another selling point for Octavia :p | 22:17 |
johnsom | +1 NP | 22:18 |
*** rcernin has joined #openstack-lbaas | 22:20 | |
*** yamamoto has joined #openstack-lbaas | 22:20 | |
*** yamamoto has quit IRC | 22:25 | |
mnaser | so | 22:29 |
mnaser | reasons why http requests will randomly give 503 service unavailable? | 22:30 |
mnaser | even though backends are responding with no problems? | 22:30 |
mnaser | and it would flap between ok and not ok | 22:30 |
johnsom | Ummm | 22:30 |
johnsom | 503 usually means the pool doesn't have any healthy members. | 22:31 |
*** fnaval has quit IRC | 22:33 | |
mnaser | johnsom: ever seen a case where maybe two processes are fighting for the port or a bad reload maybe | 22:33 |
mnaser | http://paste.openstack.org/show/723513/ | 22:33 |
mnaser | see how it's flapping | 22:33 |
mnaser | now if i show http traffic back and forth from the backends | 22:34 |
johnsom | What do you get if you do repeated member list? | 22:34 |
mnaser | johnsom: let me check but for whats its worth you see traffic is ok here - http://paste.openstack.org/show/723514/ | 22:35 |
mnaser | two backends are a-ok | 22:35 |
johnsom | Ok, then the next step is to grab the haproxy log from the amp | 22:37 |
johnsom | It will tell us exactly the issue. | 22:37 |
mnaser | i think i'll have to finally accept my fate and add keys | 22:37 |
johnsom | And yes, we know we need to add the log forwarding stuff | 22:37 |
mnaser | is there a way to inject keys into a running amphora by any chance | 22:38 |
mnaser | using the api | 22:38 |
johnsom | Not our API, but there is nova recovery/rescue mode | 22:38 |
mnaser | wont shutting down the vm into rescue failover? | 22:39 |
rm_work | mnaser: so you don't build your amps with the ssh-disabled flag? :P | 22:39 |
johnsom | No, it should reboot fast enough that it won't trigger a full failover | 22:39 |
mnaser | i actually build them with ssh disabled in the idea that i should never touch them but i guess life ain't easy | 22:39 |
rm_work | that's the hardcore way to go ^_^ | 22:39 |
johnsom | If they are using TLS offload it will though | 22:39 |
johnsom | Yeah, we have more work to do for admin stuff before running with no keys is ok | 22:40 |
rm_work | mnaser: you can go to the health table and flip the busy bit | 22:44 |
rm_work | mnaser: for just that amp, it will protect it while you debug | 22:44 |
rm_work | when you're done, flip it back | 22:44 |
mnaser | oh that's an idea | 22:44 |
rm_work | that's what I do | 22:44 |
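[Editor's note: rm_work's "busy bit" trick translates to something like the SQL below. Table and column names are taken from the Octavia `amphora_health` DB model as the editor remembers it; verify against your schema before poking the database directly, and remember to flip the bit back.]

```sql
-- Mark the amphora busy so the health manager leaves it alone while debugging.
UPDATE amphora_health SET busy = 1 WHERE amphora_id = '<amphora-uuid>';

-- ...debug the amphora...

-- When done, hand it back to the health manager:
UPDATE amphora_health SET busy = 0 WHERE amphora_id = '<amphora-uuid>';
```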
johnsom | rm_work FYI, infra says it was a zuul race condition with merging the grenade gate and those jobs. They say just recheck. | 22:44 |
rm_work | ok | 22:44 |
johnsom | Yeah, if you are using the default settings, as long as your cloud can reboot instances inside 60 seconds, the amp will come back up. However, if they are using TLS offload, it will failover as the cert(s) and key will be gone. | 22:46 |
mnaser | yeah but the amp will go back up in rescue mode which won't have any keys or anything | 22:47 |
mnaser | i might just rebuild em | 22:47 |
xgerman_ | mnaser: I always run with no keys | 22:47 |
rm_work | it won't go into normal run mode | 22:47 |
rm_work | correct | 22:47 |
mnaser | xgerman_: how do you troubleshoot issues like this | 22:47 |
xgerman_ | I dont have issues like that — but plenty others | 22:48 |
xgerman_ | never had to log onto a box… all my problems are control plane | 22:48 |
mnaser | looping over member list | 22:49 |
mnaser | shows them constantly online | 22:49 |
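[Editor's note: the repeated member list johnsom suggested can be done with a simple watch loop like the one below. The pool ID is a placeholder, and the column names follow the python-octaviaclient output; adjust to taste.]

```shell
# Watch member operating status while the 503s flap (pool ID is a placeholder).
while true; do
    openstack loadbalancer member list <pool-id> -c name -c operating_status
    sleep 2
done
```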
mnaser | i think i might add keys | 22:50 |
mnaser | seems to be a common issue | 22:50 |
johnsom | Yeah, let's look in the logs and see what is up. | 22:51 |
mnaser | i think i'll add keys | 22:52 |
mnaser | and fail it over | 22:52 |
mnaser | easier and less messy | 22:52 |
johnsom | yep | 22:52 |
mnaser | can you effectively run multiple load balancers in a single amphora | 22:55 |
mnaser | something like.. multiple listeners with each their own pool? | 22:55 |
rm_work | no | 22:55 |
johnsom | Wait... | 22:55 |
mnaser | i see | 22:55 |
rm_work | oh, that yes | 22:55 |
rm_work | multiple LBs no | 22:55 |
rm_work | multiple listeners totally do go in one amp | 22:55 |
rm_work | but that's != multiple LBs | 22:55 |
rm_work | might be a terminology issue | 22:55 |
mnaser | multiple listeners across multiple ips though is a no go right? | 22:55 |
johnsom | No to multiple LBs per amp. So only one VIP per amp. | 22:55 |
rm_work | https://docs.openstack.org/octavia/pike/reference/glossary.html | 22:56 |
mnaser | yeah one vip at the end of the day | 22:56
mnaser | so only one port 443 | 22:56 |
rm_work | yes | 22:56 |
johnsom | You can have as many listeners (ports) etc. as you want | 22:56 |
*** threestrands has joined #openstack-lbaas | 22:56 | |
*** threestrands has quit IRC | 22:56 | |
*** threestrands has joined #openstack-lbaas | 22:56 | |
rm_work | "Object describing a logical grouping of listeners on one or more VIPs" that's an interesting definition | 22:56 |
*** threestrands has quit IRC | 22:57 | |
johnsom | An Amp will have one VIP, but will have zero or more listeners (ports) | 22:57 |
*** threestrands has joined #openstack-lbaas | 22:57 | |
*** threestrands has quit IRC | 22:57 | |
*** threestrands has joined #openstack-lbaas | 22:57 | |
johnsom | listeners can have multiple pools for L7 policies | 22:57
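[Editor's note: as a concrete sketch of the "one VIP, many listeners" model being described, the CLI sequence below builds one load balancer carrying two listeners on different ports, each with its own pool. Names and the subnet ID are placeholders; TLS offload (TERMINATED_HTTPS) would additionally need a certificate reference, which is omitted here.]

```shell
# One load balancer == one VIP / one amphora (or active-standby pair)...
openstack loadbalancer create --name lb1 --vip-subnet-id <subnet-id>

# ...but it can carry any number of listeners on different ports,
# each with its own default pool:
openstack loadbalancer listener create --name http --protocol HTTP \
    --protocol-port 80 lb1
openstack loadbalancer pool create --name http-pool --listener http \
    --protocol HTTP --lb-algorithm ROUND_ROBIN

openstack loadbalancer listener create --name https --protocol HTTPS \
    --protocol-port 443 lb1
openstack loadbalancer pool create --name https-pool --listener https \
    --protocol HTTPS --lb-algorithm ROUND_ROBIN
```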
mnaser | ok cool, time to get some amphoras rebuilt i guess | 23:02 |
mnaser | do i just delete the idle amphoras? | 23:02 |
mnaser | and then failover | 23:02 |
mnaser | once the new ones get started | 23:02 |
johnsom | You should have a failover API/command | 23:03 |
mnaser | but before failing over i want it to not fail over to the amphoras that are already booted with no keys | 23:03 |
mnaser | (i have a few on stand by) | 23:03 |
johnsom | I mean deleting the amp will trigger one (please only delete one out of an active/standby pair at a time!) | 23:03 |
rm_work | mnaser: what release are you on | 23:04 |
mnaser | queens | 23:04 |
rm_work | ok so | 23:04 |
rm_work | i think queens had amp failover | 23:04 |
rm_work | use the amphora failover api to fail the spares | 23:04 |
rm_work | i THINK we backported my fix for that? | 23:04 |
* rm_work checks | 23:04 | |
johnsom | https://developer.openstack.org/api-ref/load-balancer/v2/index.html#failover-amphora | 23:04 |
rm_work | crap maybe not | 23:04 |
mnaser | setting admin port to down has worked in the past | 23:05 |
mnaser | just takes a little while | 23:05 |
johnsom | Yes | 23:05 |
rm_work | https://review.openstack.org/#/c/564082/ | 23:05 |
rm_work | without that, he can't use it on spares | 23:05 |
rm_work | mnaser: you can just go to the health table and change "updated time" to 0 | 23:05 |
rm_work | ;) | 23:05 |
mnaser | naughty naughty ideas | 23:05 |
rm_work | that's how i make stuff failover fast, lol | 23:05 |
rm_work | I am a dirty little db-monkey | 23:06 |
rm_work | i have a constant DB connection up for all my DCs <_< | 23:06 |
johnsom | lol | 23:06 |
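[Editor's note: rm_work's "updated time to 0" trick reads roughly as the SQL below. Same caveats as any direct DB poke: the column name follows the Octavia `amphora_health` model as the editor remembers it, so verify before running.]

```sql
-- Zero out last_update so the health manager sees the amphora as stale
-- and fails it over immediately instead of waiting for the heartbeat timeout.
UPDATE amphora_health SET last_update = '1970-01-01 00:00:00'
    WHERE amphora_id = '<amphora-uuid>';
```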
rm_work | This is GREAT on OSX btw: https://www.sequelpro.com/ | 23:07 |
rm_work | one tab per DB :) | 23:07 |
rm_work | nicely color coded | 23:07 |
rm_work | supports connections through an ssh bastion | 23:07 |
rm_work | reconnects well | 23:07 |
johnsom | Nice | 23:07 |
mnaser | ok failed over all 3 standby (by standby i mean preprovisioned? not sure what is the best term) | 23:09 |
johnsom | I used to use dbvisualizer at HP, but I don't have a license (or a need really) anymore | 23:09 |
johnsom | Spares | 23:09 |
johnsom | It's your spare pool | 23:09 |
mnaser | oh fyi | 23:10 |
mnaser | https://wiki.openstack.org/wiki/OpenStack_health_tracker | 23:10 |
mnaser | so let me know if you need anything :) | 23:10 |
johnsom | Grin, you two signed up for us because you know we are low maintenance didn't you.... | 23:11 |
johnsom | Little did you know the trouble we cause.... | 23:12 |
mnaser | nah, i picked the teams i work with often so it's pretty easy to keep in touch with whats going on :p | 23:12 |
johnsom | So let's talk about the diversity tag..... | 23:12 |
mnaser | johnsom: little did you know that i have some nested virt running very very well and hoping to put that in your hands soon | 23:12 |
johnsom | Nice, yeah, both limestone and OVH have it, but something is breaking it. We've been talking with both to see if we can debug, but not moving very quickly | 23:13 |
johnsom | Maybe when the nodepool images go 18.04 the kernel will have a patch | 23:13 |
mnaser | 16.04, fedora and centos have been stable in this case of this user | 23:14 |
mnaser | 17.10 has been a mess | 23:14 |
johnsom | Yeah, they crash booting up cirros even, it's not our amp image | 23:14 |
mnaser | ok | 23:17 |
mnaser | i got new ones with keys | 23:17 |
mnaser | time to fail over the amphoras that have stuff running on them | 23:17 |
johnsom | Which health monitor type is the LB using? | 23:18 |
mnaser | http | 23:18 |
rm_work | LOL mugsie is also our liason :P | 23:18 |
johnsom | Right that is why I laughed too | 23:19 |
*** yamamoto has joined #openstack-lbaas | 23:21 | |
mnaser | so | 23:22 |
mnaser | as i totally expected | 23:22 |
mnaser | after failover | 23:22 |
mnaser | it works fine. | 23:22 |
rm_work | :P | 23:22 |
mnaser | the customer seems to have been able to replicate it | 23:22 |
rm_work | of course it does | 23:22 |
johnsom | Ummm????? | 23:22 |
mnaser | so at least i have ssh access the next time it happens | 23:22 |
johnsom | I am puzzled on that one. Well, grab the /var/log/haproxy.log for us if you see another one. | 23:24 |
johnsom | I have seen that flapping before, but it's always been the backend app or server had issues | 23:25 |
*** yamamoto has quit IRC | 23:26 | |
rm_work | anyone know if RHEL8/CENT8 is going to support live kernel patching? | 23:29 |
rm_work | cgoncalves / nmagnezi? :P | 23:29 |
johnsom | Voodoo I tell you | 23:41 |
rm_work | lol | 23:43 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!