*** spatel has joined #openstack-lbaas | 00:48 | |
*** dayou_ has quit IRC | 00:56 | |
*** spatel has quit IRC | 01:26 | |
rm_work | huh, why is today the first time i heard about pycodestyle | 01:29 |
*** mithilarun has quit IRC | 01:32 | |
*** dayou_ has joined #openstack-lbaas | 01:36 | |
openstackgerrit | Merged openstack/octavia master: Add bindep.txt for Octavia https://review.opendev.org/667208 | 01:39 |
*** bzhao__ has joined #openstack-lbaas | 01:45 | |
*** rcernin has quit IRC | 01:48 | |
*** rcernin has joined #openstack-lbaas | 01:48 | |
*** hongbin has joined #openstack-lbaas | 02:04 | |
*** rcernin has quit IRC | 02:42 | |
*** rcernin has joined #openstack-lbaas | 02:42 | |
*** yamamoto has joined #openstack-lbaas | 02:50 | |
*** yamamoto_ has joined #openstack-lbaas | 02:55 | |
*** yamamoto_ has quit IRC | 02:56 | |
*** yamamoto has quit IRC | 02:56 | |
*** yamamoto has joined #openstack-lbaas | 02:59 | |
*** yamamoto has quit IRC | 02:59 | |
*** rcernin has quit IRC | 03:04 | |
*** rcernin has joined #openstack-lbaas | 03:04 | |
*** sapd1_x has joined #openstack-lbaas | 03:07 | |
*** fyx has quit IRC | 03:18 | |
*** mnaser has quit IRC | 03:19 | |
*** fyx has joined #openstack-lbaas | 03:19 | |
*** mnaser has joined #openstack-lbaas | 03:20 | |
*** rcernin has quit IRC | 03:30 | |
*** rcernin has joined #openstack-lbaas | 03:31 | |
*** psachin has joined #openstack-lbaas | 03:39 | |
*** AlexStaf has joined #openstack-lbaas | 04:24 | |
*** hongbin has quit IRC | 04:25 | |
*** yamamoto has joined #openstack-lbaas | 04:34 | |
*** yamamoto has quit IRC | 04:39 | |
*** hongbin has joined #openstack-lbaas | 04:42 | |
*** hongbin has quit IRC | 04:49 | |
*** ivve has joined #openstack-lbaas | 05:04 | |
*** sapd1_x has quit IRC | 05:34 | |
*** pcaruana has joined #openstack-lbaas | 05:56 | |
*** pcaruana has quit IRC | 05:57 | |
*** pcaruana has joined #openstack-lbaas | 05:57 | |
*** AlexStaf has quit IRC | 05:58 | |
*** gcheresh has joined #openstack-lbaas | 06:16 | |
*** ccamposr has joined #openstack-lbaas | 06:22 | |
*** ccamposr__ has quit IRC | 06:25 | |
*** luksky has joined #openstack-lbaas | 06:41 | |
*** rcernin has quit IRC | 06:41 | |
*** altlogbot_3 has quit IRC | 06:46 | |
*** altlogbot_3 has joined #openstack-lbaas | 06:49 | |
*** altlogbot_3 has quit IRC | 06:50 | |
*** altlogbot_3 has joined #openstack-lbaas | 06:55 | |
*** tesseract has joined #openstack-lbaas | 07:11 | |
*** ccamposr__ has joined #openstack-lbaas | 07:12 | |
*** ccamposr has quit IRC | 07:14 | |
*** yboaron_ has joined #openstack-lbaas | 07:26 | |
*** AlexStaf has joined #openstack-lbaas | 07:30 | |
*** Emine has joined #openstack-lbaas | 07:33 | |
*** yboaron_ has quit IRC | 07:36 | |
*** yboaron_ has joined #openstack-lbaas | 07:36 | |
*** ricolin has joined #openstack-lbaas | 08:05 | |
*** yboaron_ has quit IRC | 08:07 | |
*** ricolin has quit IRC | 08:13 | |
*** yboaron_ has joined #openstack-lbaas | 08:21 | |
*** ricolin has joined #openstack-lbaas | 08:22 | |
*** yboaron_ has quit IRC | 08:28 | |
*** Dinesh_Bhor has quit IRC | 08:29 | |
*** yboaron_ has joined #openstack-lbaas | 08:29 | |
*** lemko has joined #openstack-lbaas | 08:36 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 09:01 | |
*** kobis1 has joined #openstack-lbaas | 09:05 | |
*** ccamposr has joined #openstack-lbaas | 09:21 | |
*** ccamposr__ has quit IRC | 09:23 | |
*** gcheresh has quit IRC | 09:34 | |
openstackgerrit | Noboru Iwamatsu proposed openstack/octavia master: Add failover logging to show the amphora details. https://review.opendev.org/667316 | 09:52 |
*** fnaval has quit IRC | 09:55 | |
*** fnaval has joined #openstack-lbaas | 10:00 | |
*** fnaval has quit IRC | 10:04 | |
*** yamamoto has joined #openstack-lbaas | 10:15 | |
*** Dinesh_Bhor has quit IRC | 10:16 | |
*** yamamoto has quit IRC | 10:17 | |
*** kobis1 has quit IRC | 10:28 | |
*** kobis1 has joined #openstack-lbaas | 10:42 | |
*** gcheresh has joined #openstack-lbaas | 10:52 | |
*** gcheresh has quit IRC | 10:57 | |
*** gcheresh has joined #openstack-lbaas | 10:58 | |
*** AlexStaf has quit IRC | 11:08 | |
*** ajay33 has joined #openstack-lbaas | 11:11 | |
*** AlexStaf has joined #openstack-lbaas | 11:23 | |
*** AlexStaf has quit IRC | 11:45 | |
*** psachin has quit IRC | 11:54 | |
*** spatel has joined #openstack-lbaas | 11:55 | |
*** luksky has quit IRC | 12:00 | |
*** goldyfruit has joined #openstack-lbaas | 12:00 | |
*** spatel has quit IRC | 12:00 | |
*** goldyfruit has quit IRC | 12:05 | |
*** boden has joined #openstack-lbaas | 12:15 | |
*** AlexStaf has joined #openstack-lbaas | 12:16 | |
*** AlexStaf has quit IRC | 12:27 | |
*** AlexStaf has joined #openstack-lbaas | 12:28 | |
*** goldyfruit has joined #openstack-lbaas | 12:33 | |
*** lemko has quit IRC | 12:55 | |
*** luksky has joined #openstack-lbaas | 13:03 | |
*** Vorrtex has joined #openstack-lbaas | 13:30 | |
*** openstackgerrit has quit IRC | 13:48 | |
*** spatel has joined #openstack-lbaas | 13:52 | |
*** spatel has quit IRC | 13:52 | |
*** yamamoto has joined #openstack-lbaas | 13:55 | |
*** fnaval has joined #openstack-lbaas | 14:16 | |
*** lemko has joined #openstack-lbaas | 14:25 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 14:25 | |
*** Dinesh_Bhor has quit IRC | 14:40 | |
*** yamamoto has quit IRC | 14:45 | |
*** yamamoto has joined #openstack-lbaas | 14:45 | |
*** yamamoto has quit IRC | 14:47 | |
mnaser | is there a way to 'rebuild' a load balancer if it's in `provisioning_status` status? | 14:58 |
cgoncalves | mnaser, as in a transient state? PENDING_* | 15:01 |
mnaser | nope in ERROR | 15:01 |
cgoncalves | you can failover | 15:02 |
cgoncalves | https://review.opendev.org/#/q/Ic4b4516cd6b2a254ea32939668c906486066da42 | 15:02 |
mnaser | oh let me try | 15:03 |
mnaser | "Invalid state ERROR of loadbalancer resource 707d743f-1b40-4416-8889-6e72783d8d92 (HTTP 409) (Request-ID: req-db5d99e7-00f2-4e1b-bd14-7bd722f48da5)" | 15:04 |
*** yboaron_ has quit IRC | 15:04 | |
cgoncalves | mnaser, check if you are running with that patch | 15:06 |
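[Editor's note] The 409 above comes from the API's provisioning-status gate; the review cgoncalves linked relaxes that gate so load balancers in ERROR can be failed over. A minimal illustrative sketch of the idea (not actual Octavia source code):

```python
# Sketch only: the provisioning-status check that decides whether a
# failover request is accepted. Before the linked patch, ERROR objects
# were rejected with HTTP 409; after it, ERROR is failover-able while
# transient PENDING_* states are still rejected.

FAILOVERABLE_STATUSES = {"ACTIVE", "ERROR"}  # post-patch behavior

def can_failover(provisioning_status: str) -> bool:
    """Return True if a failover may be started for this status."""
    return provisioning_status in FAILOVERABLE_STATUSES
```

So a load balancer stuck in ERROR can be failed over, but one mid-transition (PENDING_UPDATE, PENDING_DELETE, ...) still gets a 409 like the one mnaser hit.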
*** sapd1_x has joined #openstack-lbaas | 15:16 | |
*** yamamoto has joined #openstack-lbaas | 15:23 | |
mnaser | cgoncalves: sigh, looks like i have to recreate the vrrp ports | 15:24 |
mnaser | and now neutron is giving 404s when creating a port with secgroups that arent part of my tenant | 15:24 |
mnaser | is there any reason why octavia just doesnt recreate the vrrp_port_id if it cant find it? | 15:24 |
*** kobis1 has quit IRC | 15:25 | |
johnsom | It should on failover create the base vrrp_port. It will not re-create the VIP however, as that has the reserved/assigned VIP IP address. | 15:26 |
mnaser | the vip port exists | 15:26 |
mnaser | the missing port is the vrrp one here | 15:26 |
*** gcheresh has quit IRC | 15:26 | |
johnsom | That said, working on this failover flow is one of the next things I want to look at and fix some of this stuff. | 15:27 |
mnaser | any infra issue seems to cause a failing failover and then leave all of octavia in a broken state unfortunately | 15:27 |
johnsom | So, yeah, that base vrrp port is expendable, so should be recreated | 15:27 |
johnsom | automatically by the failover flow/process | 15:28 |
mnaser | yeah.. and now im struggling to recreate it | 15:28 |
*** yamamoto has quit IRC | 15:28 | |
johnsom | Yes, with the defaults, it attempts automatic repairs, if those also fail, we stop and mark it error for the operator. There has been debate about if we should endlessly loop, etc. I pushed for us to error on the side of "don't do more harm to the cloud", but interested in your feedback | 15:29 |
mnaser | any infra issue has almost always left me with irrecoverably broken octavia load balancers | 15:31 |
johnsom | We do have a few moles to whack when nova and neutron fail us, agreed. It's one of the next two things I'm going to look at. I don't like how the failover flow was designed. | 15:32 |
johnsom | mnaser Which version are you running BTW? | 15:33 |
mnaser | afaik this might be rocky.. or stein | 15:33 |
johnsom | 4.0.1? | 15:33 |
johnsom | If you can't launch a failover on ERROR objects, you are likely behind in the patch releases. | 15:34 |
mnaser | johnsom: not sure, but uh i'm looking at this https://github.com/openstack/octavia/blob/09020b6bfc06c615f6d01d550188b1e6d7ed9d21/octavia/network/drivers/neutron/allowed_address_pairs.py#L592-L611 | 15:36 |
mnaser | im thinking maybe if i set the amphora state to deleted it might skip that ? | 15:36 |
mnaser | i dont know if it needs that down the line | 15:36 |
mnaser | https://github.com/openstack/octavia/blob/ff4680eb71cf03e4eae1a58e7a66321ddadcbead/octavia/controller/worker/v2/flows/amphora_flows.py#L291-L324 | 15:38 |
mnaser | cause it only plugs the vip | 15:38 |
*** AlexStaf has quit IRC | 15:39 | |
*** Emine has quit IRC | 15:39 | |
*** sapd1_x has quit IRC | 15:40 | |
johnsom | I don't even know why the failover flow would call that | 15:41 |
johnsom | I see that it is, but why? | 15:41 |
colin- | what you're describing mnaser (setting states to deleted/error/etc in db) is what i've had to do to recover from the scenario you've described each time nova or neutron stops giving 200s | 15:44 |
mnaser | i have no idea how to fix these at this point, i moved the amphoras to deleted state, and then move the provisioning status to active, and force a failover | 15:45 |
mnaser | and uh, nothing is happening afaik | 15:45 |
colin- | that flow is successful, right? | 15:45 |
mnaser | if anything, when something goes wrong with the control plane, it almost always propagates to the data plane | 15:45 |
colin- | what you just described | 15:45 |
mnaser | colin-: well i dont have debugging on but it isnt stacktracing anymore | 15:45 |
colin- | gotcha | 15:45 |
colin- | depending if you're in active/standby or not, i have sometimes had to edit the master/backup roles too | 15:46 |
mnaser | nope this is just single | 15:46 |
mnaser | im not seeing anything being fixed tho | 15:46 |
johnsom | mnaser No, it almost NEVER propagates to the data plane. Was your operating status OFFLINE? | 15:46 |
mnaser | i can easily make octavia destroy every single load balancer unfortunately | 15:47 |
mnaser | when things dont go right, it tries to failover, and then breaks things even more | 15:47 |
johnsom | It is designed to fail safe, it may be in provisioning_status ERROR, but the LBs are still handling traffic. | 15:47 |
colin- | yeah, that has impacted my data plane in the past | 15:47 |
colin- | sorry to be contrary | 15:47 |
mnaser | this is the 3rd case i have where data plane dies because of a control plane issue | 15:48 |
mnaser | and not one or two lbs, every single one | 15:48 |
johnsom | If that is the case, we need a detailed bug as we have not seen that for over a year when the "both down" bug was fixed when nova nukes both amps in an LB. | 15:48 |
colin- | https://storyboard.openstack.org/#!/story/2005512 is the closest i've come to describing it | 15:48 |
colin- | probably does not cover what mnaser is talking about (at least not 100%) | 15:48 |
*** kobis1 has joined #openstack-lbaas | 15:49 | |
johnsom | colin- Yeah, that doesn't match up. Your story shows you had amps. I know the vxlan cut traffic, but you had working amps right? | 15:49 |
mnaser | vxlan cuts traffic | 15:50 |
mnaser | health manager freaks out | 15:50 |
mnaser | starts blowing up amphoras to fix them | 15:50 |
mnaser | control plane is borked already | 15:50 |
colin- | mnaser: do you see any control plane logs from octavia that suggest heartbeats weren't able to be written? they're easy to spot because the log info comes out in all caps with something like "THIS IS NOT GOOD!" | 15:50 |
mnaser | yes, ive had that once when a rabbitmq upgrade pooped out | 15:51 |
*** sapd1_x has joined #openstack-lbaas | 15:51 | |
mnaser | like it'd be nice if i just had a "rebuild" button | 15:51 |
*** kobis1 has quit IRC | 15:51 | |
colin- | ok | 15:51 |
mnaser | "you have everything you need about this lb, just rebuild it." | 15:51 |
*** kosa777777 has joined #openstack-lbaas | 15:53 | |
johnsom | Yeah, that was exactly what failover is supposed to be. It just didn't get implemented well. | 15:55 |
johnsom | THIS IS NOT GOOD doesn't have anything to do with rabbit, BTW. The health manager doesn't use rabbit for anything | 15:56 |
mnaser | http://paste.openstack.org/show/753372/ | 15:56 |
mnaser | i ended up writing this and giving it to a customer | 15:56 |
mnaser | (that should not exist) | 15:56 |
johnsom | please, if you see a scenario that actually takes down the data plane, (not provisioning status ERROR), please report it and provide logs, we would like to see that. | 15:58 |
johnsom | mnaser Also, you have your VXLAN setup to disable ports if it sees duplicate mac addresses? Neutron is doing that and it impacted colin-, it would be good to collect another deployment that has VXLAN configured that way so we can approach the neutron team with the problem. | 15:59 |
*** luksky has quit IRC | 16:00 | |
*** sapd1_x has quit IRC | 16:02 | |
*** kosa777777 has quit IRC | 16:04 | |
*** ricolin has quit IRC | 16:33 | |
*** tesseract has quit IRC | 16:35 | |
colin- | is it expected when creating a pool with ~75 members that the provisioning status should bounce between ACTIVE and PENDING_UPDATE before all members exist in the pool definition? | 16:36 |
*** tesseract has joined #openstack-lbaas | 16:36 | |
colin- | can we add multiple members in one call in code? or is it always one at a time? | 16:37 |
colin- | just saw the batch update in the member section of the api ref, pursuing that for now | 16:38 |
colin- | oh, it's not additive it just replaces? | 16:39 |
cgoncalves | yeah. we don't have batch member create | 16:43 |
*** ramishra has quit IRC | 16:46 | |
cgoncalves | I think it could be implemented rather easy...? | 16:47 |
*** mithilarun has joined #openstack-lbaas | 16:49 | |
*** mithilarun has quit IRC | 16:50 | |
*** mithilarun has joined #openstack-lbaas | 16:52 | |
colin- | this is all via gophercloud, might try to leverage this https://github.com/gophercloud/gophercloud/blob/master/openstack/loadbalancer/v2/pools/requests.go#L356 | 16:52 |
colin- | but it's populating the pool one member at a time heh | 16:52 |
johnsom | Wait, it is totally additive and support batch member create. | 16:54 |
johnsom | cgoncalves colin- It is a "desired state" API, you can add, delete, and update as many members as you want all at once. | 16:55 |
colin- | weird, i have a watch going on `pool show -c members | wc -l` and it's growing one at a time | 16:56 |
colin- | from 14 on its way to 75 atm | 16:56 |
cgoncalves | "This may include creating new members, deleting old members, and updating existing members." | 16:56 |
cgoncalves | https://developer.openstack.org/api-ref/load-balancer/v2/?expanded=update-a-member-detail,batch-update-members-detail#batch-update-members | 16:56 |
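[Editor's note] As the API reference linked above describes, the batch member endpoint is declarative: the PUT body carries the full desired member set and Octavia computes the creates, updates, and deletes itself. A hedged sketch of such a request body (addresses, names, and ports are made up for illustration):

```python
import json

# Full desired state for PUT /v2/lbaas/pools/{pool_id}/members.
# Members absent from this list are deleted, new entries are created,
# and existing ones are updated in place. Values are illustrative.
desired_state = {
    "members": [
        {"name": "web1", "address": "192.0.2.16", "protocol_port": 80},
        {"name": "web2", "address": "192.0.2.17", "protocol_port": 80},
    ]
}
body = json.dumps(desired_state)
```

This desired-state semantic is why colin-'s "it just replaces?" reading at 16:39 is half right: it replaces the whole set, which also covers bulk creation if you include the existing members in the list.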
johnsom | colin- I don't know what gophercloud has, but our API supports batch create. | 16:57 |
colin- | that's why i linked it :) | 16:57 |
colin- | understood, appreciate the notes | 16:58 |
johnsom | Ok, I just saw cgoncalves saying we didn't have batch create and was .... Yes, we do. | 16:58 |
johnsom | lol. Just getting off a meeting so context switching. | 16:58 |
*** tesseract has quit IRC | 16:59 | |
cgoncalves | yeah, my bad | 16:59 |
cgoncalves | I had to expand and read the first two lines. shame | 17:00 |
johnsom | Different "we" here. lol My "we" is always Octavia. grin | 17:01 |
*** yamamoto has joined #openstack-lbaas | 17:06 | |
xgerman | gophercloud has issues… | 17:09 |
colin- | without committing to anything do you guys have any strong negative reaction to supporting syntax like `openstack loadbalancer member create <member1> <member2> <member3> ....`? | 17:09 |
cgoncalves | I bet rm_work does not ;) | 17:10 |
colin- | in the future | 17:10 |
cgoncalves | 1 sec | 17:10 |
cgoncalves | https://review.opendev.org/#/c/634302/ | 17:10 |
johnsom | That is a very different patch | 17:11 |
johnsom | So, for the CLI we have discussed this with the OpenStack client team. It's a tricky one to implement given each of those <member1> would have to contain the whole member definition. | 17:12 |
xgerman | and might be confusing for users | 17:13 |
johnsom | The last plan I remember, to address the batch member functionality via the CLI was to pass the CLI a JSON document that reflected the API: https://developer.openstack.org/api-ref/load-balancer/v2/index.html?expanded=batch-update-members-detail#id95 | 17:13 |
johnsom | If it were delete calls with a list of IDs, yes, I would love to see that. But the create, it's not clear how to pass all of those params, for each member, and have them all line up | 17:16 |
colin- | yeah i see what you mean about doing it on the cli that way | 17:17 |
colin- | any middle ground where the api could still accept that format without the client having to be involved, so that other services contacting octavia could express it that way without having to present the entire finished manifest each time (adding vs. overwriting member config on the amps) | 17:18 |
colin- | ?* | 17:19 |
johnsom | That is an interesting use case. Where you just want to bulk add, but not specify all of the existing stuff. | 17:20 |
johnsom | I would probably lobby for extending the member create API to accept a list of members. | 17:21 |
colin- | ok cool, yeah that would likely help in this case. thanks for the info everyone | 17:23 |
*** lemko has quit IRC | 17:34 | |
*** gcheresh has joined #openstack-lbaas | 17:52 | |
*** gcheresh has quit IRC | 18:32 | |
*** spatel has joined #openstack-lbaas | 18:33 | |
spatel | johnsom: yt? | 18:33 |
johnsom | spatel Hi | 18:33 |
spatel | https://docs.openstack.org/openstack-ansible-os_octavia/latest/configure-octavia.html | 18:34 |
spatel | is this octavia_management_net_subnet_cidr lb-mgmt right? | 18:35 |
johnsom | Yes, I think so. | 18:35 |
spatel | in openstack-ansible there is a variable octavia_management_net_subnet_allocation_pools: | 18:36 |
spatel | trying to understand that variable.. | 18:36 |
spatel | does amphora vm need lb-mgmt ip address? | 18:37 |
spatel | or just compute node? | 18:37 |
johnsom | I am not 100% up to speed on the openstack ansible role, but I can try to answer. | 18:37 |
spatel | i have subnet 172.27.8.0/21 for lb-mgmt and trying to understand who will need IP from that subnet | 18:38 |
spatel | controller/compute and who else? | 18:38 |
johnsom | So the lb-mgmt-net is used by the controllers to send command/control messages to the amphora, and for the amphora to send heartbeat message back to the controllers. Each amphora gets an IP on that network (it's outside the tenant network namespace however). | 18:39 |
johnsom | Each controller (worker, health manager, and housekeeping) will need an IP on that network as well, to talk with the amphora. | 18:39 |
*** gcheresh has joined #openstack-lbaas | 18:39 | |
johnsom | API does not need to be on the lb-mgmt-net at this time | 18:39 |
*** ianychoi_ has quit IRC | 18:49 | |
spatel | so this option octavia_management_net_subnet_allocation_pools: is for amphora ip pool? | 18:50 |
*** ianychoi_ has joined #openstack-lbaas | 18:50 | |
johnsom | Yes, you could DHCP the controllers too if you want, but it's best to use fixed addresses for those | 18:51 |
spatel | I have fix IP for controller and compute.. | 18:52 |
johnsom | The compute hosts should not need an IP on that network, only the octavia parts. | 18:52 |
spatel | so i am thinking out of 172.27.8.1-172.27.10.255 i keep for static and give 172.27.11.1-172.27.15.255 to DHCP for amphora | 18:53 |
*** luksky has joined #openstack-lbaas | 18:53 | |
johnsom | Sure, that is a big block for static though. I wouldn't expect you need that many controllers. lol. | 18:54 |
spatel | controller + compute ( static ) | 18:54 |
spatel | don't we need br-lbaas for controller? | 18:55 |
*** mithilarun has quit IRC | 18:58 | |
spatel | johnsom: how neutron wire up lb-mgmt to amphora ? | 19:04 |
johnsom | I had to dig through old notes to see what br-lbaas was. That should only be on the controller side where the containers are. lb-mgmt-net is just a neutron network, and neutron/nova should handle the computes for you | 19:04 |
spatel | do we need to create subnet in neutron ? | 19:04 |
johnsom | Yes. Doesn't the OSA role do that for you? | 19:05 |
spatel | trying to understand.. and looking at playbook | 19:05 |
johnsom | Yeah, OSA does that for you: https://github.com/openstack/openstack-ansible-os_octavia/blob/master/tasks/octavia_mgmt_network.yml#L27 | 19:06 |
spatel | hmmm | 19:08 |
spatel | you are saying neutron will take care of lb-mgmt-net | 19:08 |
spatel | and lb-mgmt-net will be different than br-lbaas right? | 19:09 |
johnsom | lb-mgmt-net is just a neutron network/subnet that gets attached to the amphora. Nothing special. The harder part is connecting the controllers to it, which is what the OSA stuff uses those bridges for. OSA seems overly complicated to me, but not my call | 19:09 |
spatel | oh.. i think i get it what you saying.. | 19:10 |
spatel | br-lbaas is different subnet pool than lb-mgmt-net | 19:11 |
spatel | they can't be same.. | 19:11 |
johnsom | Isn't br-lbaas just a bridge? It shouldn't have a subnet pool | 19:11 |
johnsom | It should all be the same | 19:12 |
spatel | yes br-lbaas is just a bridge | 19:14 |
spatel | johnsom: this is my openstack_user_config.yml file http://paste.openstack.org/show/753393/ | 19:15 |
spatel | if you see the first block cidr you will see i have defined lbaas: 172.27.8.0/21 | 19:16 |
*** mithilarun has joined #openstack-lbaas | 19:16 | |
spatel | later you can see container_bridge: br-lbaas block | 19:16 |
spatel | br-lbaas is wire up with three service octavia-worker / octavia-housekeeping / octavia-health-manager using veth | 19:17 |
spatel | johnsom: https://imgur.com/a/wn1MRNT | 19:19 |
spatel | based on this diagram lb-mgmt should talk to worker/housekeeping/manager + amphora | 19:19 |
spatel | johnsom: going with this config and will see if any issue | 19:24 |
-spatel- octavia_ssh_enabled: True | 19:24 | |
-spatel- octavia_management_net_subnet_allocation_pools: 172.27.12.1-172.27.15.255 | 19:24 | |
-spatel- octavia_management_net_subnet_cidr: 172.27.8.0/21 | 19:24 | |
-spatel- octavia_loadbalancer_topology: ACTIVE_STANDBY | 19:24 | |
-spatel- octavia_enable_anti_affinity: True | 19:24 | |
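[Editor's note] A quick way to sanity-check the values spatel posted is Python's stdlib `ipaddress` module: the DHCP allocation pool handed to amphorae must sit entirely inside `octavia_management_net_subnet_cidr`, with the addresses below the pool left free for static controller assignments, per johnsom's earlier advice:

```python
import ipaddress

# Values taken from the config pasted above.
cidr = ipaddress.ip_network("172.27.8.0/21")
pool_start = ipaddress.ip_address("172.27.12.1")
pool_end = ipaddress.ip_address("172.27.15.255")

# The amphora DHCP pool must fall entirely inside the lb-mgmt CIDR.
assert pool_start in cidr and pool_end in cidr

# Everything below the pool stays free for static controller IPs.
static_addresses = int(pool_start) - int(cidr.network_address)
```

With this /21, the range 172.27.8.0 up to the pool start leaves 1025 addresses for static use, far more than the handful of controller processes need (as johnsom joked at 18:54).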
*** Vorrtex has quit IRC | 19:38 | |
*** pcaruana has quit IRC | 19:38 | |
*** mithilarun has quit IRC | 19:48 | |
*** mithilarun has joined #openstack-lbaas | 20:04 | |
*** mithilarun has quit IRC | 20:20 | |
*** mithilarun has joined #openstack-lbaas | 20:21 | |
*** ivve has quit IRC | 20:24 | |
*** gcheresh has quit IRC | 20:42 | |
*** ianychoi_ has quit IRC | 20:59 | |
*** ianychoi_ has joined #openstack-lbaas | 21:00 | |
*** ianychoi_ has quit IRC | 21:05 | |
*** ianychoi_ has joined #openstack-lbaas | 21:09 | |
*** ianychoi_ is now known as ianychoi | 21:18 | |
*** boden has quit IRC | 21:35 | |
*** openstackgerrit has joined #openstack-lbaas | 21:40 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia master: WIP: Add VIP access control list https://review.opendev.org/659626 | 21:40 |
openstackgerrit | Carlos Goncalves proposed openstack/python-octaviaclient master: Add support to VIP access control list https://review.opendev.org/659627 | 21:44 |
*** Emine has joined #openstack-lbaas | 21:45 | |
*** tobberydberg has quit IRC | 21:49 | |
*** tobberydberg has joined #openstack-lbaas | 21:51 | |
*** Emine has quit IRC | 22:07 | |
rm_work | mnaser: just read scrollback, I'm super interested in why your failovers are predictably dying | 22:09 |
johnsom | rm_work this local cert manager is.... annoying | 22:10 |
rm_work | Yes | 22:10 |
rm_work | It's not designed to be used for anything except debug | 22:10 |
rm_work | It should work though, just drop files in place | 22:10 |
johnsom | I'm trying to use it for a functional test | 22:10 |
rm_work | Or mock open | 22:11 |
johnsom | I'm just getting the .pass file at the moment | 22:11 |
rm_work | What's wrong with it? | 22:11 |
johnsom | IDK, just venting | 22:11 |
johnsom | This simple functional test is becoming a nightmare | 22:12 |
johnsom | I guess it's good as it's going to test the world, but | 22:12 |
johnsom | Ah, figured it out. cleanup was firing early | 22:16 |
*** mithilarun has quit IRC | 22:17 | |
rm_work | You could probably even mock higher up if you wanted and not even have to deal with mocking open and such | 22:20 |
rm_work | Just mock the whole get functions? lol | 22:20 |
rm_work | Could do that even with the Barbican driver :D | 22:21 |
*** mithilarun has joined #openstack-lbaas | 22:21 | |
johnsom | I got local working, my mistake which caused the cleanup to fire early | 22:24 |
rm_work | Yeah | 22:29 |
rm_work | Just saying that might be less complex to maintain | 22:29 |
*** fnaval has quit IRC | 22:30 | |
johnsom | Coming from someone recently commenting on the level of mocking going on... lol | 22:33 |
johnsom | I have all of them working now except the SNI certs, which is likely a bug in the utils. | 22:33 |
*** luksky has quit IRC | 22:34 | |
*** spatel has quit IRC | 22:38 | |
*** fnaval has joined #openstack-lbaas | 22:40 | |
*** sapd1_x has joined #openstack-lbaas | 22:41 | |
rm_work | well not like anyone is running the local cert manager in production :D | 22:47 |
johnsom | We definitely have some gaps in SNI testing. | 22:50 |
johnsom | Not sure how to create a listener with SNI using the repos. Creating SNI records requires a listener, creating a listener with SNI requires SNI objects.... | 22:52 |
johnsom | Ok, listener first, no reference to the SNI, then create SNI records. sqlalchemy magic | 22:54 |
johnsom | rm_work BTW, colin was inquiring about a member bulk add option this morning. I.e. not re-pushing the existing members. | 22:57 |
rm_work | i saw | 22:57 |
rm_work | you cleared that up | 22:57 |
rm_work | i almost choked when i read "i see update but no batch create" lol | 22:58 |
rm_work | glad you were around to respond :D | 22:58 |
johnsom | Does that work for you? Adding a way to pass an array of members to the POST? | 22:58 |
rm_work | yeah maybe | 22:58 |
johnsom | Yeah, I was like, what? | 22:58 |
johnsom | The part I'm not 100% sure about is if we can support both or need a new path for that. | 22:59 |
rm_work | well you mean, for additive only? | 22:59 |
rm_work | yeah uhh >_> | 22:59 |
rm_work | we could add a query flag | 22:59 |
johnsom | Seems like the wsme types might get unhappy and we might violate the lbaasv2 compat if we just add it. | 22:59 |
rm_work | "create_only=True" or something | 22:59 |
rm_work | nah i mean it's the same structure | 22:59 |
rm_work | just if create_only is set, then skip the delta calculation | 23:00 |
johnsom | Oh, like on the bulk update endpoint | 23:00 |
rm_work | easy peasy to code | 23:00 |
johnsom | I was thinking make POST accept an array. | 23:00 |
johnsom | Other services have done that. | 23:00 |
johnsom | Might need to be v3 though | 23:00 |
rm_work | ahh | 23:00 |
rm_work | ehhh | 23:00 |
rm_work | yeah | 23:00 |
rm_work | v3 | 23:00 |
rm_work | but can do what he wants in v2 on the update | 23:01 |
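[Editor's note] The `create_only` query flag rm_work sketches above would make the existing v2 batch-member endpoint additive: skip the delete side of the delta so the request body only needs the new members. A hypothetical (unmerged) illustration of that delta logic:

```python
# Sketch of the proposed create_only behavior on the v2 batch-member
# endpoint. Hypothetical helper, not merged Octavia code: member sets
# are represented here by name for simplicity.

def plan_batch(existing, requested, create_only=False):
    """Return (to_create, to_delete) for a batch member request."""
    to_create = requested - existing
    # With create_only set, skip the delta's delete side entirely:
    # existing members absent from the request are left untouched.
    to_delete = set() if create_only else existing - requested
    return to_create, to_delete
```

The default path keeps today's desired-state semantics; passing `create_only=True` gives colin-'s bulk-add use case without re-sending the whole member manifest.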
johnsom | colin- Are you still around? | 23:01 |
colin- | indeed | 23:01 |
colin- | read the buffer, understood | 23:01 |
johnsom | I probably led you down the wrong path this morning | 23:01 |
colin- | just out of curiosity why the distinction of v3 for the array on POST? | 23:02 |
johnsom | Yeah, Adam's idea with the query string would work and be fairly straight forward. | 23:02 |
colin- | (ignorant of wsme/compat subject) | 23:02 |
johnsom | Well, the LBaaSv2 specification does not show it as a list, it's just an object. So, we have to maintain compatibility with that on the /v2 Octavia API. We can't just switch that as anything coded to the existing API would break if we start requiring a list. | 23:04 |
colin- | oh ok. i was imagining it earlier as an optional alternative to how it works now that you'd only use electively | 23:04 |
colin- | that makes sense | 23:05 |
johnsom | It is likely not possible to accept both given the API validation layer, wsme | 23:05 |
*** rcernin has joined #openstack-lbaas | 23:05 | |
*** fnaval has quit IRC | 23:05 | |
johnsom | It's pretty picky about the content formatting, as it should be really. | 23:05 |
rm_work | yeah | 23:08 |
johnsom | Actually, it might be possible... Still not sure it's a good idea | 23:10 |
rm_work | maaaaaybe | 23:13 |
rm_work | it's not the best idea | 23:13 |
rm_work | we can do additive on the batch call and it will not interfere | 23:13 |
rm_work | want me to write it up really quick? | 23:13 |
rm_work | colin-: i'll do the core work really quick if you're willing to babysit the patch through merging | 23:13 |
colin- | sure that sounds good to me, thanks for offering | 23:15 |
rm_work | meanwhile will people review my multivip stuff? lol | 23:20 |
rm_work | https://review.opendev.org/#/c/660239/ | 23:20 |
rm_work | pretty plz | 23:21 |
rm_work | get your vips here, hot off the vip-line, as many as you want! | 23:21 |
johnsom | * subject to network limitations and restrictions. | 23:22 |
rm_work | shhh fine print | 23:23 |
johnsom | * offer only valid on the Train model year and later. | 23:27 |
johnsom | * subject to multiple subnet availability | 23:27 |
johnsom | * only take VIPs as directed | 23:28 |
*** yamamoto has quit IRC | 23:36 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!