| *** yamamoto has joined #openstack-lbaas | 00:05 | |
| *** yamamoto has quit IRC | 00:10 | |
| openstackgerrit | guang-yee proposed openstack/octavia stable/rocky: HTTPS HMs need the same validation path as HTTP https://review.opendev.org/710161 | 00:31 |
|---|---|---|
| rm_work | johnsom: lol yes though in reviewing the patch for adding quotes on some other objects ... i honestly don't understand why we BOTHER for anything besides LBs <_< everything else is just a logical object on the same amphorae >_> | 01:08 |
| rm_work | *for adding quotas | 01:09 |
| rm_work | like https://review.opendev.org/#/c/590620/ | 01:09 |
| rm_work | why? who cares? lol | 01:09 |
| johnsom | Yeah, that one was ... odd | 01:10 |
| johnsom | I mean there probably is some limit, but .... it would be very large | 01:10 |
| rm_work | but like... what does the limit affect? | 01:14 |
| rm_work | the user's LB? | 01:14 |
| rm_work | if there's a hard limit (like, in haproxy itself), we should just put code in for it | 01:14 |
| rm_work | not make a quota, lol | 01:14 |
| rm_work | i remember there being one like that... | 01:15 |
| *** ramishra has joined #openstack-lbaas | 01:18 | |
| openstackgerrit | Michael Johnson proposed openstack/octavia master: WIP - Refactor the failover flows https://review.opendev.org/705317 | 01:22 |
| *** ramishra has quit IRC | 01:26 | |
| *** ramishra has joined #openstack-lbaas | 01:48 | |
| rm_work | what the shit | 01:49 |
| rm_work | $ curl $OCTAVIA_URL/v2/lbaas/loadbalancers/ -H "Content-Type: application/json" -X POST -d "" | 01:49 |
| rm_work | {"faultcode": "Client", "faultstring": "Missing argument: \"load_balancer\"", "debuginfo": null} | 01:49 |
| rm_work | ^^ "load_balancer"?!?! where is that coming from | 01:49 |
| rm_work | in our API we take it as "loadbalancer" | 01:49 |
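For context, a well-formed create request wraps the attributes in a singular `loadbalancer` key; the `load_balancer` in the faultstring most likely leaks from the internal argument name of the controller method. A minimal sketch of such a request (the endpoint and token are assumed environment variables, the subnet UUID is a placeholder):

```python
# Hedged sketch of a well-formed load balancer create call against the
# Octavia v2 API. OCTAVIA_URL and TOKEN are assumed environment variables;
# the subnet UUID is a placeholder.
import os
import requests

body = {
    "loadbalancer": {              # singular "loadbalancer" key in the request body
        "name": "test-lb",
        "vip_subnet_id": "<subnet-uuid>",
    }
}
resp = requests.post(
    os.environ["OCTAVIA_URL"] + "/v2/lbaas/loadbalancers",
    headers={"X-Auth-Token": os.environ["TOKEN"],
             "Content-Type": "application/json"},
    json=body,
)
print(resp.status_code, resp.json())
```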
| *** yamamoto has joined #openstack-lbaas | 01:56 | |
| rm_work | also, listener list doesn't show provisioning_status? O_o | 01:59 |
| rm_work | but pool list does | 01:59 |
| *** luketollefson has joined #openstack-lbaas | 02:16 | |
| rm_work | ugh, i really want to revisit https://review.opendev.org/#/c/549297/ | 02:27 |
| rm_work | `Updated 1 year, 12 months ago` | 02:28 |
| rm_work | am I mistaken about how time works? wouldn't that be "2 years"? :D | 02:28 |
| *** archiephan has joined #openstack-lbaas | 02:47 | |
| *** yamamoto has quit IRC | 02:49 | |
| *** nicolasbock has quit IRC | 03:09 | |
| *** yamamoto has joined #openstack-lbaas | 03:55 | |
| *** yamamoto has quit IRC | 04:00 | |
| *** yamamoto has joined #openstack-lbaas | 04:13 | |
| *** rcernin has quit IRC | 05:33 | |
| *** rcernin has joined #openstack-lbaas | 05:33 | |
| *** gcheresh has joined #openstack-lbaas | 06:15 | |
| *** rcernin has quit IRC | 06:24 | |
| *** ccamposr__ has joined #openstack-lbaas | 07:49 | |
| *** ccamposr has quit IRC | 07:52 | |
| *** tesseract has joined #openstack-lbaas | 07:53 | |
| *** tkajinam has quit IRC | 08:02 | |
| openstackgerrit | Ann Taraday proposed openstack/octavia master: [Amphorav2] Fix noop driver case https://review.opendev.org/709696 | 08:02 |
| openstackgerrit | Ann Taraday proposed openstack/octavia master: Testing https://review.opendev.org/697213 | 08:03 |
| *** rpittau|afk is now known as rpittau | 09:09 | |
| *** yamamoto has quit IRC | 09:35 | |
| openstackgerrit | Merged openstack/octavia-lib master: Remove all usage of six library https://review.opendev.org/703500 | 09:56 |
| *** dmellado has quit IRC | 09:59 | |
| *** ccamposr__ has quit IRC | 10:48 | |
| *** ccamposr__ has joined #openstack-lbaas | 10:48 | |
| *** rpittau is now known as rpittau|bbl | 11:12 | |
| *** yamamoto has joined #openstack-lbaas | 11:42 | |
| *** yamamoto has quit IRC | 11:46 | |
| *** ccamposr__ has quit IRC | 11:48 | |
| *** ccamposr__ has joined #openstack-lbaas | 11:49 | |
| *** ivve has joined #openstack-lbaas | 11:53 | |
| ivve | hi, i have a small question (this might be a bug). i have a standalone lb that functioned all well. did a failover on it and it came up all nice, but it doesn't seem to have bound its ha_ip in the amphora namespace; haproxy is listening on the vrrp_ip and is therefore not functioning | 11:56 |
| ivve | https://hastebin.com/inoluhejoz.rb | 11:59 |
| ivve | just keep getting resets from the ha_ip | 12:02 |
| *** nicolasbock has joined #openstack-lbaas | 12:02 | |
| ivve | so the question is | 12:11 |
| ivve | should octavia via the api use some rootwrap or similar mechanism to "ip netns exec amphora-haproxy ip addr add <ha_ip>/32 dev eth1" | 12:12 |
| ivve | and why did it not do that when i issued a failover (on the new amphora) | 12:12 |
| rm_work | no, in STANDALONE it's supposed to be brought up via the agent I believe | 12:13 |
| ivve | checking agent log | 12:13 |
| rm_work | unclear why the agent didn't do that (or how haproxy got configured on the vrrp address?) | 12:13 |
| ivve | i even tried a configure api command | 12:13 |
| rm_work | haproxy config comes from the controller-worker, which knows the ha_ip even if it isn't bound right -- so it should have sent an haproxy.conf that had that IP | 12:13 |
| ivve | but it didn't add the ha_ip | 12:13 |
| ivve | it only has vrrp_ip configured on eth1 (in the amphora-haproxy namespace) | 12:14 |
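A quick way to confirm the symptom ivve is describing, from inside the amphora (sketch only; the namespace and device names are the usual amphora defaults, and the VIP address is a placeholder):

```python
# Diagnostic sketch, run on the amphora: is the VIP (ha_ip) plumbed on eth1
# inside the amphora-haproxy namespace? HA_IP is a placeholder; the namespace
# and interface names are the usual amphora defaults.
import subprocess

HA_IP = "192.0.2.10"  # placeholder for the load balancer's VIP

out = subprocess.run(
    ["ip", "netns", "exec", "amphora-haproxy",
     "ip", "addr", "show", "dev", "eth1"],
    capture_output=True, text=True, check=True,
).stdout

if HA_IP in out:
    print("VIP is bound on eth1 in the namespace")
else:
    print("VIP missing -- only the vrrp_ip is configured (the symptom above)")
```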
| ivve | i tried to bump listeners too to see if that changed anything, no bueno | 12:15 |
| ivve | just to test things out | 12:15 |
| ivve | https://hastebin.com/ukogohuzuh.rb | 12:17 |
| ivve | i can't really see anything in the agent log | 12:18 |
| ivve | i mean nothing that fails or errors out, not even warnings. just couple of debug get/puts | 12:19 |
| ivve | the last thing it issues is a reload | 12:19 |
| ivve | could be associated with my bumps on the listener | 12:20 |
| ivve | this is version 4.1.1, amphora agent is 4.1.2.dev7 | 12:22 |
| ivve | could that be an issue? | 12:22 |
| ivve | its built on branch, not tag | 12:23 |
| ivve | and octavia-api on controller nodes are also stein, which ends up in 4.1.1 | 12:23 |
| *** yamamoto has joined #openstack-lbaas | 12:25 | |
| ivve | rm_work: found the problem | 12:37 |
| ivve | if you have a topology "mismatch" between the octavia config and an already-spawned standalone LB, the amphora agent config it generates contains active_standby instead of single | 12:37 |
| ivve | so im guessing here that the amphora is simply waiting for keepalived to set the ha_ip on the interface | 12:38 |
| ivve | which never happens, because its single and no keepalived | 12:38 |
| ivve | quite easily reproduced | 12:42 |
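In other words, the agent defers VIP plumbing to keepalived whenever it believes the topology is active/standby, which is what goes wrong here. A rough illustration of that decision (not the actual Octavia agent code; the `[controller_worker] loadbalancer_topology` option is real, the helper below is a simplified stand-in):

```python
# Illustrative sketch of the decision being described above -- NOT the actual
# Octavia amphora-agent code. The [controller_worker] loadbalancer_topology
# option is real; the rest is a simplified stand-in.
import subprocess

def bring_up_vip(topology: str, ha_ip: str) -> str:
    if topology == "ACTIVE_STANDBY":
        # keepalived owns the VIP and only adds it on the MASTER amphora, so
        # if a standalone amphora is handed ACTIVE_STANDBY by mistake, nobody
        # ever binds the ha_ip (the bug described above).
        return "deferred to keepalived"
    # SINGLE topology: the agent itself plumbs the VIP in the namespace.
    subprocess.run(
        ["ip", "netns", "exec", "amphora-haproxy",
         "ip", "addr", "add", f"{ha_ip}/32", "dev", "eth1"],
        check=True,
    )
    return "vip plumbed by agent"
```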
| ivve | shall i submit is as a bug? | 12:42 |
| ivve | it* | 12:42 |
| rm_work | uhh | 12:54 |
| rm_work | how do you have a mismatch there | 12:55 |
| rm_work | we totally allow for multiple topologies in the system -- we look at the topology of the LB when deciding how to make the config | 12:55 |
| rm_work | the LB and amp *cannot* have different topos, it's not possible (at least without some admin intervention) | 12:56 |
| *** servagem has joined #openstack-lbaas | 12:59 | |
| *** nicolasbock has quit IRC | 13:03 | |
| *** nicolasbock has joined #openstack-lbaas | 13:04 | |
| *** dmellado has joined #openstack-lbaas | 13:05 | |
| ivve | rm_work: basically you can set topology = standalone in octavia.conf and create a loadbalancer via k8s; it will use that default topology since you can't really set it in heat | 13:12 |
| rm_work | uhh, ok? | 13:12 |
| ivve | reconfig the octavia.conf to active_standby and then failover that amphora | 13:12 |
| rm_work | that shouldn't happen | 13:12 |
| ivve | i can reproduce that | 13:12 |
| rm_work | if the LB was created as standalone, it should stay standalone forever | 13:12 |
| rm_work | if you can reproduce that, then yeah, bug | 13:13 |
| ivve | so switching it back again to standalone | 13:13 |
| rm_work | but i've got a ton of both in my env | 13:13 |
| ivve | and then failover again, then it works | 13:13 |
| rm_work | and no issues... | 13:13 |
| rm_work | what version are you on again? | 13:13 |
| rm_work | this may have been resolved in like ... rocky | 13:13 |
| ivve | api 4.1.1 | 13:16 |
| ivve | agent in amphora 4.1.2.dev7 | 13:16 |
| ivve | https://hastebin.com/apogohodec.makefile | 13:17 |
| ivve | so it gets this config when octavia-api/worker etc is reconfigured | 13:18 |
| *** nicolasbock has quit IRC | 13:21 | |
| ivve | just to be super clear, i am performing the octavia.conf change on the api on the controller nodes | 13:23 |
| *** rpittau|bbl is now known as rpittau | 13:24 | |
| ivve | well, its 4 different ones, because all the different components are in different containers, but i change and restart all of them | 13:24 |
| *** nicolasbock has joined #openstack-lbaas | 13:25 | |
| *** yamamoto has quit IRC | 13:35 | |
| *** yamamoto has joined #openstack-lbaas | 13:54 | |
| johnsom | So your api is configured with one topology setting and your controller a different one? Can you do an lb show for us? | 13:59 |
| *** yamamoto has quit IRC | 14:18 | |
| *** gcheresh has quit IRC | 14:43 | |
| *** ccamposr__ has quit IRC | 14:48 | |
| *** ccamposr__ has joined #openstack-lbaas | 14:49 | |
| rm_work | yeah went back and looked, it REALLY shouldn't matter what's set in config, during a failover it will ONLY care about what the LB's topology is from the DB | 14:51 |
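Put differently, the intended behaviour is that an existing load balancer keeps the topology recorded in its DB row, and octavia.conf only matters for newly created ones. A minimal illustration (names here are made up for the sketch, not Octavia's actual internals):

```python
# Illustrative only -- not Octavia's failover code. The point being made: the
# topology used when regenerating an amphora's config should come from the
# load balancer's DB record, not from whatever octavia.conf currently says.
from dataclasses import dataclass

@dataclass
class LoadBalancerRecord:
    id: str
    topology: str  # "SINGLE" or "ACTIVE_STANDBY", persisted at create time

def topology_for_failover(lb: LoadBalancerRecord, conf_topology: str) -> str:
    # conf_topology (e.g. the loadbalancer_topology setting) only governs new
    # load balancers; an existing LB keeps what it was created with.
    return lb.topology

# Even if octavia.conf was later flipped to ACTIVE_STANDBY, a standalone LB
# should still fail over as SINGLE.
print(topology_for_failover(LoadBalancerRecord("lb-1", "SINGLE"), "ACTIVE_STANDBY"))
```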
| johnsom | That is what I thought as well | 14:55 |
| mloza | Hello, I have a kolla-ansible deployment and I wanted to test https://github.com/a10networks/a10-octavia but I don't know which octavia container the a10 driver should be installed in. Should it be in the worker or the api? | 15:56 |
| rm_work | mloza: i believe it should be in all of them | 16:04 |
| mloza | rm_work: including housekeeping and health-manager ? | 16:06 |
| rm_work | probably, yes | 16:06 |
| rm_work | if it isn't needed, it just won't be used | 16:06 |
| rm_work | but it looks like it'd be necessary in all pieces | 16:06 |
| rm_work | this code is confusing me a bit tho, they are replacing a lot of stuff they shouldn't need to replace I think... | 16:07 |
| *** tesseract has quit IRC | 16:27 | |
| johnsom | mloza I don't know much about the A10 driver. I would contact A10 for install instructions, etc. | 16:38 |
| *** dosaboy has quit IRC | 16:45 | |
| *** ccamposr__ has quit IRC | 16:49 | |
| *** ccamposr__ has joined #openstack-lbaas | 16:49 | |
| *** dosaboy has joined #openstack-lbaas | 17:01 | |
| *** rpittau is now known as rpittau|afk | 17:23 | |
| *** ccamposr__ has quit IRC | 17:49 | |
| *** ccamposr__ has joined #openstack-lbaas | 17:50 | |
| *** ccamposr has joined #openstack-lbaas | 18:05 | |
| *** ccamposr__ has quit IRC | 18:09 | |
| *** gcheresh has joined #openstack-lbaas | 18:37 | |
| -openstackstatus- NOTICE: Memory pressure on zuul.opendev.org is causing connection timeouts resulting in POST_FAILURE and RETRY_LIMIT results for some jobs since around 06:00 UTC today; we will be restarting the scheduler shortly to relieve the problem, and will follow up with another notice once running changes are reenqueued. | 19:11 | |
| *** gcheresh has quit IRC | 19:22 | |
| -openstackstatus- NOTICE: The scheduler for zuul.opendev.org has been restarted; any changes which were in queues at the time of the restart have been reenqueued automatically, but any changes whose jobs failed with a RETRY_LIMIT, POST_FAILURE or NODE_FAILURE build result in the past 14 hours should be manually rechecked for fresh results | 19:45 | |
| *** nicolasbock has quit IRC | 20:07 | |
| *** gcheresh has joined #openstack-lbaas | 20:31 | |
| *** gcheresh has quit IRC | 20:55 | |
| *** gcheresh has joined #openstack-lbaas | 21:00 | |
| *** gcheresh has quit IRC | 21:09 | |
| *** servagem has quit IRC | 21:23 | |
| *** rcernin has joined #openstack-lbaas | 21:44 | |
| *** xakaitetoia1 has quit IRC | 21:47 | |
| *** tkajinam has joined #openstack-lbaas | 22:51 | |
| *** tkajinam has quit IRC | 22:51 | |
| *** tkajinam has joined #openstack-lbaas | 22:51 | |
| *** ivve has quit IRC | 22:57 | |
| *** jamesdenton has quit IRC | 23:49 | |
| *** jamesdenton has joined #openstack-lbaas | 23:50 | |