*** Swami has quit IRC | 00:02 | |
*** ipsecguy_ has quit IRC | 00:21 | |
*** ipsecguy has joined #openstack-lbaas | 00:22 | |
*** fnaval has quit IRC | 00:36 | |
*** ipsecguy has quit IRC | 00:42 | |
*** fnaval has joined #openstack-lbaas | 00:43 | |
*** yamamoto has joined #openstack-lbaas | 00:52 | |
*** yamamoto has quit IRC | 00:57 | |
rm_work | ok, just had the first complaint from a user about the 50s timeout | 01:08 |
rm_work | i'm gonna do the timeout expose patch tomorrow | 01:08 |
*** ipsecguy has joined #openstack-lbaas | 01:09 | |
rm_work | what did we say? listener level? | 01:09 |
rm_work | right now we put it in "defaults" section, which is for backend and listen | 01:09 |
rm_work | oh | 01:10 |
rm_work | but, derp, each listener is one haproxy config so yeah it's fine in defaults :P | 01:10 |
*** harlowja has quit IRC | 01:31 | |
johnsom | rm_work: please put it in the listener, both on the API and in the config. I eventually want to fix the process per listener and there are some strange interactions with the defaults if I remember | 01:40 |
rm_work | right | 01:40 |
johnsom | Otherwise, yes please! Grin | 01:41 |
rm_work | err, the "listener"? | 01:41 |
rm_work | IE, the frontend? | 01:41 |
johnsom | Yep | 01:41 |
rm_work | which, is only for client timeout | 01:41 |
rm_work | server timeout is on backend | 01:41 |
rm_work | usually it's easier to just throw them both in the defaults section | 01:41 |
rm_work | but | 01:41 |
rm_work | i can poke at it | 01:41 |
johnsom | Oh really, hmm | 01:41 |
johnsom | Ok, i can always review comment too when I have the docs open | 01:41 |
johnsom | Mobile at the moment | 01:42 |
rm_work | k | 01:42 |
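The point of the exchange above is that Octavia renders one haproxy process (and one config) per listener, so a client timeout in `defaults` is effectively the same as one on the `frontend` today, but johnsom wants the timeouts scoped explicitly for the eventual process-per-listener rework. A minimal sketch of the placement being discussed, with the client timeout on the frontend and the server timeout on the backend; this is illustrative Python, not the actual Octavia jinja template, and the function name and millisecond defaults are assumptions (the 50000 ms default matches the "50s timeout" complaint at the top of the log):

```python
# Illustrative only: Octavia really renders haproxy.cfg from jinja templates.
# The sketch just shows client/server timeouts scoped to frontend/backend
# instead of living together in the "defaults" section.

def render_listener_config(listener_id, timeout_client_data=50000,
                           timeout_member_data=50000,
                           timeout_member_connect=5000):
    """Build a minimal per-listener haproxy config (values in milliseconds)."""
    return "\n".join([
        "defaults",
        "    log global",
        "",
        "frontend %s" % listener_id,
        "    timeout client %d" % timeout_client_data,
        "",
        "backend %s-pool" % listener_id,
        "    timeout connect %d" % timeout_member_connect,
        "    timeout server %d" % timeout_member_data,
    ])


if __name__ == "__main__":
    print(render_listener_config("listener-1"))
```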
*** yamamoto has joined #openstack-lbaas | 01:53 | |
*** yamamoto has quit IRC | 01:59 | |
*** jaff_cheng has joined #openstack-lbaas | 02:14 | |
*** dayou has quit IRC | 02:18 | |
*** yamamoto has joined #openstack-lbaas | 02:37 | |
*** jaff_cheng has quit IRC | 02:42 | |
*** yamamoto has quit IRC | 02:46 | |
*** yamamoto has joined #openstack-lbaas | 02:56 | |
*** annp has joined #openstack-lbaas | 03:14 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 03:17 | |
*** AlexeyAbashkin has quit IRC | 03:21 | |
*** harlowja has joined #openstack-lbaas | 03:32 | |
*** harlowja has quit IRC | 03:54 | |
*** isssp has joined #openstack-lbaas | 05:03 | |
*** burned has quit IRC | 05:06 | |
*** slashnick has joined #openstack-lbaas | 05:07 | |
*** slashnick has left #openstack-lbaas | 05:09 | |
*** isssp has quit IRC | 05:09 | |
*** isssp has joined #openstack-lbaas | 05:12 | |
*** imacdonn has quit IRC | 05:15 | |
*** imacdonn has joined #openstack-lbaas | 05:15 | |
*** rcernin has quit IRC | 05:34 | |
*** rcernin has joined #openstack-lbaas | 05:46 | |
*** links has joined #openstack-lbaas | 05:53 | |
*** rcernin has quit IRC | 06:06 | |
*** rcernin has joined #openstack-lbaas | 06:08 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 06:16 | |
*** AlexeyAbashkin has quit IRC | 06:21 | |
*** threestrands has quit IRC | 06:25 | |
*** kobis has joined #openstack-lbaas | 06:34 | |
*** kobis has quit IRC | 06:40 | |
*** velizarx_ has joined #openstack-lbaas | 07:00 | |
*** links has quit IRC | 07:15 | |
*** links has joined #openstack-lbaas | 07:23 | |
*** velizarx_ has quit IRC | 07:23 | |
*** kobis has joined #openstack-lbaas | 07:24 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/octavia-dashboard master: Imported Translations from Zanata https://review.openstack.org/555177 | 07:24 |
*** kobis has quit IRC | 07:25 | |
*** isssp has quit IRC | 07:25 | |
*** bbzhao has quit IRC | 07:25 | |
*** ptoohill has quit IRC | 07:25 | |
*** kobis has joined #openstack-lbaas | 07:26 | |
*** bbzhao has joined #openstack-lbaas | 07:26 | |
*** isssp has joined #openstack-lbaas | 07:26 | |
*** ptoohill has joined #openstack-lbaas | 07:28 | |
*** rcernin has quit IRC | 07:31 | |
*** pcaruana has joined #openstack-lbaas | 07:42 | |
*** pcaruana has quit IRC | 07:44 | |
*** velizarx has joined #openstack-lbaas | 07:44 | |
*** pcaruana has joined #openstack-lbaas | 07:44 | |
*** pcaruana has quit IRC | 07:45 | |
*** pcaruana has joined #openstack-lbaas | 07:45 | |
*** pcaruana has quit IRC | 07:47 | |
*** pcaruana has joined #openstack-lbaas | 07:47 | |
*** pcaruana has quit IRC | 07:48 | |
*** pcaruana has joined #openstack-lbaas | 07:48 | |
*** pcaruana has quit IRC | 07:50 | |
*** pcaruana has joined #openstack-lbaas | 07:50 | |
*** pcaruana has quit IRC | 07:51 | |
*** pcaruana has joined #openstack-lbaas | 07:51 | |
*** pcaruana has quit IRC | 07:53 | |
*** pcaruana has joined #openstack-lbaas | 07:53 | |
*** pcaruana has quit IRC | 07:54 | |
*** pcaruana has joined #openstack-lbaas | 07:55 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 07:56 | |
*** pcaruana has quit IRC | 07:56 | |
*** ispp has joined #openstack-lbaas | 07:58 | |
*** isssp has quit IRC | 08:00 | |
*** shananigans has quit IRC | 08:05 | |
*** pcaruana has joined #openstack-lbaas | 08:06 | |
*** pcaruana has quit IRC | 08:07 | |
*** pcaruana has joined #openstack-lbaas | 08:08 | |
*** pcaruana has quit IRC | 08:09 | |
*** shananigans has joined #openstack-lbaas | 08:09 | |
*** pcaruana has joined #openstack-lbaas | 08:10 | |
*** pcaruana has quit IRC | 08:10 | |
*** pcaruana has joined #openstack-lbaas | 08:11 | |
*** tesseract has joined #openstack-lbaas | 08:11 | |
*** pcaruana has quit IRC | 08:12 | |
*** pcaruana has joined #openstack-lbaas | 08:13 | |
*** pcaruana has quit IRC | 08:15 | |
*** pcaruana has joined #openstack-lbaas | 08:15 | |
*** pcaruana has quit IRC | 08:16 | |
*** AlexeyAbashkin has quit IRC | 08:17 | |
*** pcaruana has joined #openstack-lbaas | 08:18 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 08:18 | |
*** pcaruana has quit IRC | 08:20 | |
*** pcaruana has joined #openstack-lbaas | 08:20 | |
*** pcaruana has quit IRC | 08:21 | |
*** pcaruana has joined #openstack-lbaas | 08:21 | |
*** pcaruana has quit IRC | 08:22 | |
velizarx | Hi, all. I have a question about support for the active-active topology. Could you tell me what priority this blueprint (https://blueprints.launchpad.net/octavia/+spec/l3-active-active) has on your roadmap? Which tasks currently have a higher priority? I ask because I am currently researching LBaaS technologies for our virtual private cloud, and I see that one of the main killer features is the active-active topology. | 08:24 |
velizarx | I want to understand when you are planning to add this functionality. | 08:26 |
*** pcaruana has joined #openstack-lbaas | 08:29 | |
*** pcaruana has quit IRC | 08:30 | |
*** pcaruana has joined #openstack-lbaas | 08:36 | |
*** pcaruana has quit IRC | 08:37 | |
*** pcaruana has joined #openstack-lbaas | 08:38 | |
*** pcaruana has quit IRC | 08:39 | |
*** pcaruana has joined #openstack-lbaas | 08:40 | |
*** pcaruana has quit IRC | 08:40 | |
*** pcaruana has joined #openstack-lbaas | 08:41 | |
*** pcaruana has quit IRC | 08:41 | |
*** sapd has quit IRC | 08:44 | |
*** sapd has joined #openstack-lbaas | 08:45 | |
*** salmankhan has joined #openstack-lbaas | 08:52 | |
*** pcaruana has joined #openstack-lbaas | 09:02 | |
*** pcaruana has quit IRC | 09:03 | |
*** pcaruana has joined #openstack-lbaas | 09:03 | |
*** pcaruana has quit IRC | 09:05 | |
*** dayou has joined #openstack-lbaas | 09:38 | |
*** dayou has quit IRC | 09:44 | |
*** nmagnezi_ has quit IRC | 09:46 | |
*** dayou has joined #openstack-lbaas | 09:53 | |
*** velizarx_ has joined #openstack-lbaas | 09:56 | |
*** velizarx_ has quit IRC | 10:00 | |
*** nmagnezi has joined #openstack-lbaas | 10:12 | |
toker_ | Hi again everyone, I'm still struggling to get the octavia-dashboard to work with horizon on OSP 12 (pike). I get the menu, but when clicking it it just "reloads" the page in a constant loop. | 10:22 |
*** nmagnezi has quit IRC | 10:30 | |
*** nmagnezi has joined #openstack-lbaas | 10:30 | |
*** salmankhan has quit IRC | 10:52 | |
*** salmankhan has joined #openstack-lbaas | 10:57 | |
*** AlexeyAbashkin has quit IRC | 11:00 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 11:00 | |
*** logan- has quit IRC | 11:03 | |
*** logan- has joined #openstack-lbaas | 11:03 | |
dayou | toker_, check this, http://logs.openstack.org/91/544591/5/check/build-openstack-sphinx-docs/cad913d/html/readme.html#enabling-octavia-dashboard-and-neutron-lbaas-dashboard | 11:25 |
dayou | I don't know whether it works with pike or not, but should work with queens | 11:25 |
toker_ | hm, ok, so now I get to this point, clicking loadbalancer in the gui, gives me an exception in horizon, SDKException: public endpoint for load_balancer service in regionOne region not found | 11:45 |
toker_ | which is really weird since "e4e001c5da1847eea1821c63ccde35a1 | regionOne | octavia | load-balancer | True | public | https://xxx:13876 |" gets listed when I do "openstack endpoint list | grep load" | 11:46 |
dayou | That might be related to https://review.openstack.org/#/c/524011/ | 11:48 |
dayou | You can revert that commit and give it a try | 11:48 |
dayou | on Pike | 11:48 |
dayou | Somehow octavia-dashboard relies on a newer version of openstacksdk to work | 11:49 |
dayou | which might not be available on Pike | 11:50 |
openstackgerrit | Jacky Hu proposed openstack/octavia-dashboard master: Add package-lock.json https://review.openstack.org/555270 | 11:51 |
toker_ | dayou: Hm, I figured it had to do with the type of the endpoint, it seems like octavia-dashboard was requesting "load_balancer" but mine was "load-balancer" (note the - vs _). So I renamed it, but then I got another exception :) | 12:01 |
toker_ | service_type) File "/usr/lib/python2.7/site-packages/openstack/session.py", line 277, in _get_version_match for link in version["links"]: KeyError: 'links' | 12:01 |
toker_ | I'll try with your suggestion | 12:05 |
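For reference, the step that is failing here is the service-catalog lookup: the dashboard (through openstacksdk) asks Keystone for the public load-balancer endpoint in the user's region, and the error text names the service `load_balancer` while the catalog entry used `load-balancer`, which is exactly the mismatch toker_ noticed. A rough way to reproduce that lookup directly with keystoneauth1 (octavia-dashboard itself goes through openstacksdk, so this is only an approximation, and all credentials and URLs below are placeholders):

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Placeholder credentials/URL -- substitute values from your cloud.
auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",
                   username="demo", password="secret",
                   project_name="demo",
                   user_domain_name="Default",
                   project_domain_name="Default")
sess = session.Session(auth=auth)

# This is the lookup that ends in "public endpoint for load_balancer service
# in regionOne region not found" when the catalog entry does not match.
for service_type in ("load-balancer", "load_balancer"):
    try:
        url = sess.get_endpoint(service_type=service_type,
                                interface="public",
                                region_name="regionOne")
        print(service_type, "->", url)
    except Exception as exc:  # keystoneauth raises EndpointNotFound here
        print(service_type, "->", exc)
```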
*** aojea has joined #openstack-lbaas | 12:18 | |
*** velizarx has quit IRC | 12:19 | |
toker_ | hm, no luck. | 12:20 |
toker_ | still stuck with the 'SDKException: public endpoint for load_balancer service in regionOne region not found' | 12:20 |
toker_ | argh, really frustrating :( | 12:30 |
openstackgerrit | Jacky Hu proposed openstack/octavia-dashboard master: Add package-lock.json https://review.openstack.org/555270 | 12:31 |
toker_ | how does octavia-dashboard try to list the endpoints? with which user? | 12:36 |
*** velizarx has joined #openstack-lbaas | 12:40 | |
dayou | https://github.com/openstack/octavia-dashboard/blob/master/octavia_dashboard/api/rest/lbaasv2.py#L38-L64 | 12:41 |
dayou | It tries to use the current horizon user token to access the loadbalancer api with openstacksdk | 12:41 |
dayou | openstack/load_balancer/load_balancer_service.py: service_type='load-balancer', | 12:42 |
dayou | So if you have v2-api enabled in your octavia.conf | 12:43 |
toker_ | hm, well that makes sense | 12:46 |
toker_ | it doesn't make sense that it doesn't find the endpoint though | 12:47 |
toker_ | but as a regular user, I can't do "openstack endpoint list" | 12:47 |
toker_ | I can only do "openstack catalog list" | 12:47 |
toker_ | but that's the default | 12:47 |
dayou | so you tried to patch your installed openstacksdk with the forward compatibility disabled? | 12:48 |
dayou | and openstack loadbalancer list works | 12:48 |
toker_ | https://review.openstack.org/#/c/524011/ <- I reversed this in lbaasv2.py | 12:49 |
toker_ | only one file | 12:49 |
toker_ | yes, openstack loadbalancer list | 12:49 |
toker_ | works | 12:49 |
dayou | then restart apache? | 12:49 |
toker_ | yes | 12:49 |
toker_ | even tried to add broken code just to see that it "actually reloaded and using the modified version", which it does. | 12:53 |
*** yamamoto has quit IRC | 12:54 | |
toker_ | [Thu Mar 22 12:56:57.272783 2018] [:error] [pid 16] InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html [Thu Mar 22 12:56:57.272827 2018] [:error] [pid 16] WARNING:py.warnings:InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: http | 12:57 |
toker_ | ^ is shown in the apache error log on horizon when I visit the loadbalancer page | 12:57 |
toker_ | is it supposed to be that way? | 12:57 |
*** velizarx has quit IRC | 13:01 | |
dayou | It seems the internal services are accessed using https, but with a self generated certificate | 13:03 |
dayou | pip list | grep openstack | 13:08 |
dayou | What is your output of this command? | 13:08 |
toker_ | I have a OSP 12 installation so the horizon is packed in a docker container | 13:12 |
toker_ | (which is the one I'm testing against) | 13:12 |
*** velizarx has joined #openstack-lbaas | 13:17 | |
toker_ | https://github.com/openstack/python-openstacksdk/blob/master/openstack/load_balancer/load_balancer_service.py <- where do i define version ? | 13:19 |
toker_ | should version somehow be defined for the endpoint ? | 13:19 |
toker_ | I can't find that anywhere | 13:20 |
toker_ | doesn't make sense though since the openstack cli works | 13:20 |
toker_ | I dont think there is anything wrong with the endpoint itself, rather how octavia-dashboard tries to look it up | 13:21 |
*** yamamoto has joined #openstack-lbaas | 13:21 | |
*** fnaval has quit IRC | 13:29 | |
dayou | https://github.com/openstack/python-openstacksdk/blob/master/openstack/load_balancer/load_balancer_service.py#L19 | 13:35 |
toker_ | yes but what i mean is should that be defined on the endpoint or anything ? | 13:35 |
*** aojea has quit IRC | 13:36 | |
dayou | I am not aware of that, it seems it only constrains what it has received from upstream | 13:37 |
toker_ | Ok, well I'm not sure how I can debug this further. I would like to see what kind of request octavia-dashboard makes to the SDK and why it can't find my endpoint. | 13:38 |
toker_ | Is there a debuglog for the sdk somewhere? | 13:39 |
dayou | Maybe debug=True for get_one_cloud method will work | 13:41 |
dayou | connection.Connection( | 13:42 |
dayou | Or if you reverted, it's the method above | 13:42 |
dayou | People at #openstack-sdks should be aware of more about the openstacksdk itself | 13:44 |
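A simple way to get the debug trace dayou is talking about, without patching get_one_cloud, is to turn up the standard Python loggers that openstacksdk and keystoneauth write to; the logger names below are the usual ones for those libraries, so adjust them if your versions differ:

```python
import logging
import sys

# Send DEBUG output from the SDK and its HTTP layer to stderr, so the
# catalog lookup and every request/response shows up in the apache log.
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

for name in ("openstack", "keystoneauth"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)
```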
*** fnaval has joined #openstack-lbaas | 13:44 | |
dayou | https://github.com/openstack/requirements/blob/stable/pike/upper-constraints.txt#L405 | 13:45 |
dayou | So in pike it is using 0.9.17 of openstacksdk, and in queens is 0.11.3 | 13:46 |
*** aojea has joined #openstack-lbaas | 13:46 | |
dayou | I'll see if I downgrade my openstacksdk to 0.9.17 and what happens | 13:47 |
*** aojea has quit IRC | 13:50 | |
*** yamamoto has quit IRC | 13:53 | |
toker_ | request.user.domain_id <- this value is None | 13:57 |
toker_ | in the connection to the SDK | 13:57 |
toker_ | that doesn't seem right? | 13:57 |
*** yamamoto has joined #openstack-lbaas | 14:08 | |
*** links has quit IRC | 14:10 | |
*** yamamoto has quit IRC | 14:13 | |
*** atoth has joined #openstack-lbaas | 14:14 | |
*** huseyin has joined #openstack-lbaas | 14:17 | |
huseyin | Hi guys. Can anybody help me with the configuration of Octavia? I followed all the instructions in the docs, and when I try to create a load-balancer I get the “Driver error: The request you have made requires authentication. (HTTP 401)” error. | 14:18 |
*** huseyin has left #openstack-lbaas | 14:22 | |
*** huseyin_ has joined #openstack-lbaas | 14:23 | |
*** yamamoto has joined #openstack-lbaas | 14:24 | |
huseyin_ | Hi guys. Can anybody help me with the configuration of Octavia? I followed all the instructions in the docs, and when I try to create a load-balancer I get the “Driver error: The request you have made requires authentication. (HTTP 401)” error. | 14:24 |
*** huseyin_ has quit IRC | 14:25 | |
*** huseyin_ has joined #openstack-lbaas | 14:27 | |
*** yamamoto has quit IRC | 14:28 | |
*** huseyin_ has quit IRC | 14:28 | |
toker_ | #openstack-sdks | 14:31 |
*** kobis1 has joined #openstack-lbaas | 14:33 | |
*** kobis1 has quit IRC | 14:34 | |
*** kobis has quit IRC | 14:35 | |
johnsom | toker_ I'm not in the office yet, but your endpoint list should look like: | 14:38 |
johnsom | https://www.irccloud.com/pastebin/cjj1PdiX/ | 14:38 |
*** yamamoto has joined #openstack-lbaas | 14:39 | |
johnsom | domain is not used in most OpenStack deployments and is fine being None | 14:39 |
toker_ | what, should it be /load-balancer ? | 14:39 |
toker_ | I dont have that | 14:39 |
johnsom | Hmm, in older OpenStack you might have a port. | 14:39 |
toker_ | Yes, I only have a port. | 14:40 |
toker_ | And only a public addr | 14:40 |
johnsom | You can check this with "openstack --debug loadbalancer list" and see what URL it is using | 14:40 |
toker_ | Yes, I mean with the openstack loadbalancer list everything works, that's why I'm saying the endpoint seems to be right... Just octavia-dashboard and the SDK not getting it :/ | 14:41 |
toker_ | I'm talking with the sdk-people now helping me | 14:41 |
johnsom | huseyin_ I'm not sure if you are using neutron-lbaas or not (hope not), but that error indicates the [service_auth] part of the configuration is missing or incorrect: https://github.com/openstack/octavia/blob/master/etc/octavia.conf#L308 and https://docs.openstack.org/octavia/latest/configuration/configref.html#service-auth | 14:42 |
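For the HTTP 401 huseyin_ hit: the Octavia controller authenticates to the other services with the credentials from the [service_auth] section of octavia.conf, and a "Driver error ... requires authentication" usually means those values are missing or wrong, as johnsom says. A quick way to sanity-check the same credentials outside Octavia is to request a token with them via keystoneauth1; every value below is a placeholder to be replaced with what your octavia.conf actually contains:

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Fill these in with exactly what octavia.conf [service_auth] contains.
auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",
                   username="octavia",
                   password="service-password",
                   project_name="service",
                   user_domain_name="Default",
                   project_domain_name="Default")
sess = session.Session(auth=auth)

# If this raises Unauthorized (HTTP 401), Octavia will hit the same error.
print("token issued:", bool(sess.get_token()))
```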
*** yamamoto has quit IRC | 14:43 | |
*** yamamoto has joined #openstack-lbaas | 14:46 | |
*** yamamoto has quit IRC | 14:46 | |
johnsom | toker_ You should be able to curl the endpoint and get a version document: | 14:48 |
johnsom | https://www.irccloud.com/pastebin/PyGDHHWG/ | 14:49 |
toker_ | yes, I can do that | 14:49 |
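The check johnsom describes is just an unauthenticated GET against the endpoint root, which should answer with the API version document. A quick equivalent in Python; the URL is a placeholder taken from the earlier `openstack endpoint list` output, and verify=False is only there because this lab uses a self-signed certificate, as the InsecureRequestWarning earlier in the log suggests:

```python
import json

import requests

# Placeholder endpoint -- use the URL shown by "openstack endpoint list".
resp = requests.get("https://xxx:13876/", verify=False)
resp.raise_for_status()

# A healthy Octavia endpoint answers with a version document listing the
# available API versions and their links.
print(json.dumps(resp.json(), indent=2))
```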
johnsom | Ok, so if your URL is good, then the only thing I see wrong is you don't have the other endpoint types. It might be looking for the "internal" endpoint | 14:50 |
toker_ | johnsom: well I had all the urls first, but I removed them. The error message from the sdk is pretty verbose about trying to find the public endpoint, "SDKException: public endpoint for load_balancer service in regionOne region not found" | 14:51 |
*** salmankhan has quit IRC | 14:55 | |
*** Swami has joined #openstack-lbaas | 14:57 | |
dayou | toker_ | 15:01 |
dayou | I got it working | 15:01 |
dayou | git revert -n baae43b65 | 15:01 |
dayou | then sudo pip install openstacksdk==0.9.17 | 15:02 |
dayou | ==0.9.19 | 15:02 |
dayou | should be | 15:02 |
dayou | so the official requirements in pike is 0.9.17, but it does not work, you need to use 0.9.19 | 15:02 |
dayou | https://github.com/openstack/python-openstacksdk/commit/d1b242ee850f286f1fae56021f2edf13cde1eb9c | 15:04 |
dayou | Also you need to cherry pick this | 15:04 |
dayou | to get healthmonitor tabs working | 15:04 |
toker_ | Oh, wait. You say I need a newer SDK to get it working in pike ? | 15:05 |
dayou | If you want it working | 15:06 |
dayou | It took me like only 10 minutes to figure it out after my devstack is up | 15:06 |
toker_ | hm ok | 15:06 |
dayou | Go to sleep now, good luck, have fun | 15:07 |
toker_ | Oh, thank you very much ! I'll try it out | 15:07 |
*** mordred has joined #openstack-lbaas | 15:08 | |
dayou | Might not work for you :-D | 15:08 |
*** aojea_ has joined #openstack-lbaas | 15:09 | |
*** aojea_ has quit IRC | 15:14 | |
johnsom | Yeah, the queens forward SDK had a lot of changes to how it interacts with the cloud. A number of things were broken late in that version, so this could be one of them. Give me a few and I will dig through some code and see if I can find out what went wrong. | 15:19 |
johnsom | Hmm, that error is coming from keystoneauth | 15:27 |
rm_work | johnsom: err, how exactly do these timeouts we defined in the PTG etherpad line up with the timeout names in haproxy? | 15:28 |
rm_work | i think i get a couple of them | 15:28 |
rm_work | but not all maybe | 15:28 |
johnsom | You wanted them re-ordered to be consistent | 15:28 |
toker_ | Hm ok, talking to mordred in #openstack-sdks, he says he's having the same issue with the 0.9.19 SDK. He's looking into it though. | 15:29 |
*** Swami has quit IRC | 15:29 | |
johnsom | Ok, cool | 15:29 |
johnsom | rm_work if you have ones you don't know, give me that list | 15:30 |
rm_work | yeah working on it | 15:30 |
velizarx | Hello everybody. Could you comment on my question (http://eavesdrop.openstack.org/irclogs/#openstack-lbaas/#openstack-lbaas.2018-03-22.log.html#t2018-03-22T08:24:18), please? | 15:30 |
toker_ | I don't think just updating my SDK to a newer version would help my problem, and that would explain why mordred is having the same problem. | 15:30 |
toker_ | although, still not sure why the sdk can't find the loadbalancer service... but yeah, he's looking into it. | 15:31 |
rm_work | johnsom: http://paste.openstack.org/show/708974/ | 15:32 |
rm_work | or just look at the etherpad | 15:32 |
rm_work | it's from there | 15:32 |
rm_work | xgerman_: i responded on https://review.openstack.org/#/c/552632/ | 15:37 |
xgerman_ | k | 15:37 |
*** velizarx has quit IRC | 15:40 | |
*** pcaruana has joined #openstack-lbaas | 15:42 | |
johnsom | velizarx Yes, L3 active/active is currently being worked on by a team. Their goal is a Rocky release. | 15:43 |
*** pcaruana has quit IRC | 15:44 | |
*** yamamoto has joined #openstack-lbaas | 15:47 | |
rm_work | johnsom: also, not sure if `timeout_tcp_inspect` is the best either | 15:47 |
rm_work | is it actually a timeout? | 15:47 |
rm_work | i guess technically? | 15:47 |
johnsom | Yes, it is | 15:48 |
rm_work | more like `delay_tcp_inspect` | 15:48 |
johnsom | It times out waiting for additional packets | 15:48 |
*** pcaruana has joined #openstack-lbaas | 15:48 | |
rm_work | but anyway, are those others correct? | 15:48 |
*** salmankhan has joined #openstack-lbaas | 15:48 | |
johnsom | timeout is more consistent with the other drivers | 15:48 |
rm_work | k | 15:48 |
rm_work | whatever, it's 5 lines | 15:49 |
rm_work | timeout_client_connect - timeout connect | 15:49 |
rm_work | timeout_client_data - timeout client | 15:49 |
rm_work | timeout_member_connect - ?? | 15:49 |
rm_work | timeout_member_data - timeout server | 15:49 |
rm_work | timeout_tcp_inspect - tcp-request inspect-delay? | 15:49 |
johnsom | Yeah, just a sec, looking. I have like five people pinging me this morning | 15:50 |
rm_work | lol k | 15:50 |
rm_work | yeah i'm working anyway, will adjust according to decisions | 15:50 |
rm_work | no biggie | 15:50 |
rm_work | also, timeouts, milliseconds? | 15:52 |
rm_work | or seconds | 15:52 |
johnsom | I am updating the etherpad version. | 15:52 |
johnsom | I think in haproxy they are all milliseconds, so go with that. | 15:52 |
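The mapping being worked out above, collected in one place. These are the four names from the etherpad list, with the haproxy directive each most plausibly renders to; the timeout_member_connect entry is filled in here for the "??" in rm_work's list, timeout_client_connect is the one johnsom cannot find a counterpart for and gets dropped later in this log, and per johnsom the values are all milliseconds. The defaults below are assumptions, chosen to match the 50 s complaint at the top of the log:

```python
# Octavia listener API field  ->  haproxy directive it would render to.
# A rough sketch of the mapping under discussion, not an authoritative list.
TIMEOUT_FIELD_TO_HAPROXY = {
    "timeout_client_data": "timeout client",             # frontend
    "timeout_member_connect": "timeout connect",         # backend
    "timeout_member_data": "timeout server",             # backend
    "timeout_tcp_inspect": "tcp-request inspect-delay",  # frontend
}

# Assumed defaults, in milliseconds.
TIMEOUT_DEFAULTS_MS = {
    "timeout_client_data": 50000,
    "timeout_member_connect": 5000,
    "timeout_member_data": 50000,
    "timeout_tcp_inspect": 0,  # 0 disables the inspect delay
}
```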
*** yamamoto has quit IRC | 15:53 | |
toker_ | hm, ok so I got octavia-dashboard to work on the OSP12 setup.. But when trying to create a loadbalancer from the gui, I can push the "create a loadbalancer" button.. But then after filling in all the details, the button is "non-clickable"... Anyone got any thoughts? Does it have to do with policy? | 15:54 |
johnsom | toker_ Do not install the policy file on horizon for Octavia dashboard | 15:54 |
johnsom | toker_ It is broken in older versions. | 15:54 |
toker_ | ok ok | 15:55 |
johnsom | toker_ Also, try logging out, restarting apache, and logging back in. I have seen it do that on initial startup. | 15:55 |
toker_ | ok ok | 15:55 |
johnsom | It just has to do with the caching in horizon and all of those changes underneath it. | 15:55 |
johnsom | One time thing when you are setting up | 15:56 |
*** harlowja has joined #openstack-lbaas | 16:04 | |
*** harlowja has quit IRC | 16:09 | |
*** velizarx has joined #openstack-lbaas | 16:11 | |
*** velizarx has quit IRC | 16:11 | |
*** velizarx has joined #openstack-lbaas | 16:12 | |
*** yamamoto has joined #openstack-lbaas | 16:17 | |
*** yamamoto has quit IRC | 16:22 | |
*** yamamoto has joined #openstack-lbaas | 16:32 | |
*** AlexeyAbashkin has quit IRC | 16:34 | |
*** yamamoto has quit IRC | 16:36 | |
ftersin | Hi everybody. Is there any chance for you guys to fix https://storyboard.openstack.org/#!/story/2001481 soon? | 16:37 |
johnsom | ftersin That turned out to be a very complicated issue. One patch was proposed, but it had defects. Are you seeing this situation occur or just concerned about it? Commenting your interest on the story can help us prioritize it. | 16:41 |
johnsom | rm_work ^^^ act/stdby both down story | 16:41 |
johnsom | We have some ideas of how to solve it, but it will be a bit of work | 16:42 |
ftersin | We got it while we were testing Octavia | 16:42 |
xgerman_ | I have seen it in conjunction with delayed health manager processing | 16:43 |
johnsom | ftersin Thanks for letting us know, it does help us prioritize effort. This was one we weren't sure how much it was happening in the real world. | 16:45 |
ftersin | Can you use external resources to fix it? If our customer wants it to be fixed we'll work on it in any case... | 16:45 |
xgerman_ | we love help ;-) | 16:46 |
*** yamamoto has joined #openstack-lbaas | 16:47 | |
johnsom | Yeah, help would be great. I laid out some things I think need to be updated to address it in the story. Adam (rm_work) can also probably provide some commentary | 16:47 |
rm_work | yeah, it's just ... a very complex issue | 16:48 |
xgerman_ | my kneejerk fix would be to mark all amphora on the LB busy ;-) | 16:49 |
rm_work | erm | 16:49 |
ftersin | I see. Does https://storyboard.openstack.org/#!/story/1703547 duplicate it? Or do they differ from each other? | 16:49 |
rm_work | i think that would be a possible deadlock case as it can't be atomic | 16:50 |
rm_work | so what, you open a transaction and try to grab both, what if it happens at the same time T_T | 16:50 |
rm_work | i have been considering a different fix | 16:50 |
xgerman_ | I think DBs have a way to time out locks but yes, I know my idea wasn’t great | 16:51 |
rm_work | yeah so i want to disentangle it a bit | 16:52 |
rm_work | it's on my shortlist | 16:52 |
*** yamamoto has quit IRC | 16:52 | |
xgerman_ | ftersin: potentially, but it also references a discrepancy between neutron and octavia, so hard to say what's going on | 16:53 |
johnsom | Yeah, we already do mark them all busy, that is not the issue really. It's more about the way some of our code is designed where it assumes both are available all of the time. Really, that needs to be refactored so that we can work on one amp in isolation, restore it, then come back and fix the other. | 16:55 |
rm_work | yes | 16:56 |
rm_work | johnsom: so is timeout_client_connect - not a thing? | 16:56 |
johnsom | Ugh, I got pulled off of looking at that, sorry. It should be a thing, it might be connect just assigned to the frontend. I did this mapping once before, but lost track of it. Hmm, maybe it was on the story. | 16:57 |
*** aojea_ has joined #openstack-lbaas | 16:58 | |
johnsom | Yeah, likely that other story is a duplicate | 16:58 |
*** velizarx has quit IRC | 16:58 | |
xgerman_ | they talk about n-lbaas so not 100% sure | 17:01 |
*** aojea_ has quit IRC | 17:02 | |
*** yamamoto has joined #openstack-lbaas | 17:02 | |
johnsom | Yeah, two layers of issues there, but I think the end case was the same | 17:03 |
*** yamamoto has quit IRC | 17:06 | |
rm_work | so, 0 is infinite, and what is our max? literal maxint? | 17:08 |
*** salmankhan has quit IRC | 17:09 | |
*** yamamoto has joined #openstack-lbaas | 17:17 | |
*** yamamoto has quit IRC | 17:22 | |
toker_ | yey! got octavia-dashboard working with openstack-sdk 0.9.19. :D | 17:24 |
toker_ | ... except that I can't press that god damn blue create loadbalancer button :) | 17:24 |
ftersin | johnsom: xgerman_: Ok, thank you. I'll let you know if we have to do something about that. | 17:28 |
*** salmankhan has joined #openstack-lbaas | 17:29 | |
*** yamamoto has joined #openstack-lbaas | 17:32 | |
toker_ | I've restarted the container and tried logging in from different browsers... Still having the issue with not being able to create stuff via the gui | 17:33 |
toker_ | What determines if the blue button should be clickable or not? | 17:34 |
*** yamamoto has quit IRC | 17:37 | |
johnsom | toker_ You are talking about the final create button in the wizard right? | 17:41 |
toker_ | johnsom: yes thats right | 17:41 |
johnsom | One part is that all of the required fields are filled in, those with a * next to them, on all of the screens of the wizard | 17:41 |
toker_ | hm, well yes I've filled in information in every field... still no go :/ | 17:43 |
johnsom | Yeah, looking through the code. What is the label on that button? | 17:44 |
toker_ | "Create Load Balancer" | 17:46 |
*** yamamoto has joined #openstack-lbaas | 17:47 | |
johnsom | So, the RBAC is handled here: https://github.com/openstack/octavia-dashboard/blob/master/octavia_dashboard/static/dashboard/project/lbaasv2/loadbalancers/actions/create/create.service.js#L59 | 17:50 |
johnsom | But we have confirmed you do not have the octavia_policy.json installed in the horizon policy file directory, so that should be disabled. | 17:50 |
johnsom | The other checks are all under here: https://github.com/openstack/octavia-dashboard/tree/master/octavia_dashboard/static/dashboard/project/lbaasv2/loadbalancers | 17:51 |
johnsom | I am looking through that now | 17:51 |
*** yamamoto has quit IRC | 17:52 | |
toker_ | yep, no octavia_policy.json anywhere | 17:54 |
johnsom | Just to double check, no neutron_lbaas_policy.json either right? | 17:55 |
johnsom | It would be in the horizon directory somewhere. (booting my dashboard VM now) | 17:56 |
toker_ | mm nope, | 17:57 |
openstackgerrit | Merged openstack/octavia-dashboard master: Imported Translations from Zanata https://review.openstack.org/555177 | 17:58 |
toker_ | no such file. | 17:58 |
toker_ | this was also working before switching from neutron-lbaas to octavia directly.. | 17:58 |
johnsom | FYI horizon/openstack_dashboard/conf is the directory with these policy files. | 17:59 |
johnsom | Hmmm, yeah, still thinking about it. dayou is our expert, but he is probably offline now. | 18:00 |
toker_ | ok, no worries, I'll figure it out sooner or later =) | 18:00 |
johnsom | Yeah, on mine as soon as I select the "Monitor type" drop down, the create button goes live | 18:01 |
*** yamamoto has joined #openstack-lbaas | 18:02 | |
*** beisner has joined #openstack-lbaas | 18:04 | |
beisner | hi all - curious if anyone has a hardware hsm in place (+ barbican), testing or using with octavia? if not, what barbican back-end is under dev/test with octavia? | 18:04 |
*** yamamoto has quit IRC | 18:06 | |
*** yamamoto has joined #openstack-lbaas | 18:06 | |
*** yamamoto has quit IRC | 18:06 | |
johnsom | We support barbican and castellan. Most of us are developers, so we use the devstack default local store. | 18:07 |
johnsom | I think at one point someone used an Atalla in testing, but not prod. | 18:08 |
johnsom | I know there has been a lot of interest in Vault, which you can use with Octavia queens via Castellan. But I personally have not tried it. | 18:09 |
johnsom | Here is an old HPE helion white paper talking about using the Atalla ESKM with barbican | 18:13 |
johnsom | http://files.asset.microfocus.com/4aa6-5241/en/4aa6-5241.pdf | 18:13 |
*** harlowja has joined #openstack-lbaas | 18:15 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 18:15 | |
*** harlowja_ has joined #openstack-lbaas | 18:17 | |
*** harlowja has quit IRC | 18:19 | |
*** AlexeyAbashkin has quit IRC | 18:20 | |
rm_work | johnsom: i am not sure timeout_client_connect exists | 18:24 |
xgerman_ | rm_work, johnsom: https://review.openstack.org/554960 | 18:30 |
johnsom | xgerman_ Can we have someone from infra check that? | 18:31 |
xgerman_ | you know somebody? | 18:31 |
rm_work | i just +2'd it because it can't be worse | 18:31 |
*** tesseract has quit IRC | 18:32 | |
beisner | johnsom: thx for the input | 18:34 |
johnsom | Sure, NP | 18:34 |
beisner | rm_work: absent any context, that is comical :) | 18:35 |
rm_work | beisner: :P | 18:35 |
rm_work | I think it's pretty comical *in-context*, actually >_< | 18:36 |
rm_work | but yes, even more so out | 18:36 |
johnsom | rm_work back to looking, context switching is killing me this morning. | 18:37 |
rm_work | lol yes same | 18:37 |
rm_work | sorry for my part :( | 18:37 |
rm_work | but this is almost done I think lol | 18:37 |
johnsom | rm_work Yeah, I am not finding it. Let's drop it for now. | 18:43 |
rm_work | k | 18:43 |
johnsom | Wondering about timeout client-fin and timeout tunnel though. But these feel very HAProxy ish | 18:43 |
rm_work | yes | 18:45 |
rm_work | tunnel is interesting but QUITE haproxy | 18:45 |
rm_work | it seems | 18:45 |
*** aojea has joined #openstack-lbaas | 18:46 | |
johnsom | rm_work FYI, there is a story for this: https://storyboard.openstack.org/#!/story/1457556 | 18:48 |
rm_work | k | 18:49 |
rm_work | man | 18:49 |
rm_work | timeout http-keep-alive, timeout http-request, timeout tunnel | 18:49 |
rm_work | so many | 18:49 |
johnsom | Yeah, there are the keepalive options too that I am sure someone will need in the future | 18:50 |
rm_work | sooo many, lol | 18:50 |
*** aojea has quit IRC | 18:50 | |
rm_work | ALMOST feels like worth a little object, but i think no | 18:50 |
rm_work | did reedip actually do anything? | 18:50 |
rm_work | k looks like not on octavia | 18:53 |
*** voelzmo has joined #openstack-lbaas | 19:01 | |
*** aojea has joined #openstack-lbaas | 19:01 | |
*** yamamoto has joined #openstack-lbaas | 19:07 | |
*** voelzmo has quit IRC | 19:10 | |
*** Swami has joined #openstack-lbaas | 19:10 | |
*** voelzmo has joined #openstack-lbaas | 19:11 | |
*** yamamoto has quit IRC | 19:13 | |
*** voelzmo has quit IRC | 19:22 | |
*** aojea has quit IRC | 19:23 | |
rm_work | johnsom: i think this is going to take some revisions | 19:24 |
rm_work | lol | 19:24 |
johnsom | Yeah, the patch for n-lbaas added a "timeout" which I rejected as too vague, etc. | 19:30 |
*** _toker_ has joined #openstack-lbaas | 19:32 | |
_toker_ | Hm, after switching to octavia-api from neutron-lbaas I can no longer see the port the loadbalancer uses as a VIP. Before the switch I saw the IP address and then the "attached device" would be "neutron:LOADBALANCERV2".. But now, I only see the ports if I log in as the octavia user (and the attached device is "Octavia")... Can anyone explain that change of behaviour? | 19:34 |
_toker_ | This is what breaks terraform now I believe, since terraform tries to get that port information... | 19:35 |
johnsom | You might be missing this patch: https://review.openstack.org/#/c/524254/ | 19:36 |
_toker_ | johnsom: *checking* | 19:37 |
johnsom | I am of course assuming you are on the pike series | 19:37 |
_toker_ | yes | 19:37 |
_toker_ | correct, that patch is not applied. | 19:39 |
*** aojea has joined #openstack-lbaas | 19:43 | |
*** aojea has quit IRC | 19:45 | |
*** aojea has joined #openstack-lbaas | 19:45 | |
*** aojea has quit IRC | 19:46 | |
*** aojea has joined #openstack-lbaas | 19:49 | |
xgerman_ | rm_work: you ever saw a memory leak in the health manager | 19:50 |
xgerman_ | ? | 19:50 |
rm_work | nope | 19:51 |
rm_work | but i didn't look SUPER closely? | 19:51 |
xgerman_ | well mine were eating up 30 gigs of memory | 19:51 |
rm_work | ummmmmm | 19:51 |
rm_work | that's not... ideal | 19:51 |
rm_work | let me look | 19:51 |
xgerman_ | you would have noticed ;-) | 19:51 |
rm_work | maybe? | 19:51 |
rm_work | maybe | 19:52 |
xgerman_ | I had people noticing it for me… | 19:52 |
rm_work | yeah nope, my memory footprint on my HMs hasn't budged in a month | 19:52 |
xgerman_ | sweet, something must have been fixed since Pike | 19:53 |
rm_work | umm did we backport the HM fixes? | 19:53 |
rm_work | i don't think so? | 19:53 |
xgerman_ | no | 19:53 |
rm_work | the thing where they were actually really shitty? lol | 19:53 |
xgerman_ | here is the latest attempt to improve it: https://review.openstack.org/#/c/554063/ | 19:55 |
_toker_ | johnsom: Sweet! That did it :D | 20:01 |
johnsom | Yeah, I leave devstacks running for a while too and have not seen any memory leaks | 20:01 |
*** yamamoto has joined #openstack-lbaas | 20:09 | |
*** yamamoto has quit IRC | 20:14 | |
*** aojea has quit IRC | 20:17 | |
*** aojea has joined #openstack-lbaas | 20:18 | |
_toker_ | Still can't get my head around "Create buttons being greyed out". So, it turns out I can edit already added loadbalancers, pools, members etc. It's JUST when I try to create each one of them, the button is greyed out. | 20:18 |
_toker_ | spyOn(policy, 'ifAllowed').and.returnValue(true); | 20:24 |
_toker_ | expect(service.allowed()).toBe(true); | 20:24 |
_toker_ | expect(policy.ifAllowed).toHaveBeenCalledWith({rules: [['neutron', 'create_loadbalancer']]}); | 20:24 |
_toker_ | What does those lines do ? | 20:24 |
*** aojea has quit IRC | 20:33 | |
*** aojea has joined #openstack-lbaas | 20:34 | |
_toker_ | bah, I give up for today. Listing and editing through gui and creating through terraform on OSP 12 is fine for me... Maybe I'll be back tomorrow asking questions again =) | 20:39 |
_toker_ | Anyway, thanks a bunch for all the help! | 20:39 |
*** _toker_ has quit IRC | 20:44 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 20:45 | |
rm_work | ok, so are we tracking what we might need to backport? :P | 20:45 |
rm_work | ah i guess we DID backport that actually, but OSP12 didn't catch it :( | 20:46 |
*** kobis has joined #openstack-lbaas | 20:50 | |
*** AlexeyAbashkin has quit IRC | 20:52 | |
*** kobis has quit IRC | 20:52 | |
rm_work | johnsom: going to need descriptions for these timeouts | 20:55 |
rm_work | and then also I guess we can debate which additional ones we will need | 20:56 |
rm_work | I just put 4 to start | 20:56 |
rm_work | (the ones we listed in the etherpad) | 20:56 |
*** aojea has quit IRC | 20:57 | |
xgerman_ | rm_work so maybe we need to be smarter with our hm and keep track of the futures we schedule in a weak ref map indexed by amphora-id so we can cancel updates when a new one comes in | 20:57 |
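A tiny sketch of the idea xgerman_ floats here: keep the in-flight update future per amphora id (a plain dict below, where xgerman_ suggests a weak-ref map) and cancel the previous one when a newer heartbeat for the same amphora arrives, so a backlogged executor does not spend time on stale state. Purely illustrative; this is not the current health manager code:

```python
from concurrent import futures


class HeartbeatDispatcher(object):
    """Keep at most one pending update per amphora; the newest heartbeat wins."""

    def __init__(self, max_workers=8):
        self._executor = futures.ThreadPoolExecutor(max_workers=max_workers)
        self._pending = {}  # amphora_id -> Future for the last scheduled update

    def submit(self, amphora_id, heartbeat, update_fn):
        previous = self._pending.get(amphora_id)
        if previous is not None and not previous.done():
            # cancel() only succeeds if the stale update has not started yet.
            previous.cancel()
        future = self._executor.submit(update_fn, amphora_id, heartbeat)
        self._pending[amphora_id] = future
        return future
```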
rm_work | hmmm | 20:57 |
xgerman_ | so far we assumed that we can keep up but if we can’t… | 20:57 |
rm_work | well if you can't keep up | 20:57 |
rm_work | you have *problems* | 20:57 |
xgerman_ | don't tell me about them | 20:57 |
xgerman_ | I am burning ~120% of CPU | 20:58 |
*** salmankhan has quit IRC | 20:58 | |
xgerman_ | and got up to 26 GB of memory | 20:58 |
*** ssmith has joined #openstack-lbaas | 20:59 | |
johnsom | Didn't you have keystone problems in that environment too? | 20:59 |
xgerman_ | well, if I eat up all the mem keystone starts swapping | 21:00 |
xgerman_ | we have 64 GB and if hm needs 26 GB there is not much left over for the rest of the control plane | 21:00 |
johnsom | I'm just skeptical given the issues there | 21:00 |
rm_work | ok well here goes | 21:01 |
rm_work | xgerman_: the CPU might be related to the HM issues also | 21:01 |
xgerman_ | ok | 21:01 |
rm_work | xgerman_: you REALLY need the updated multiproc HM code | 21:01 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Allow members to be set as "backup" https://review.openstack.org/552632 | 21:01 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: [WIP] Expose timeout options https://review.openstack.org/555454 | 21:01 |
rm_work | you probably have the same issues I did | 21:02 |
rm_work | johnsom: ^^ | 21:02 |
xgerman_ | I am running Pike. So maybe we just need to backport that wholesale | 21:02 |
rm_work | that's... fairly complete. just need some messages, and some discussion about what else to expose | 21:02 |
xgerman_ | I don’t have that many amps | 21:02 |
rm_work | xgerman_: i don't know if we can | 21:02 |
rm_work | xgerman_: right, i didn't either | 21:02 |
rm_work | it starts to break down at like 20, lol | 21:02 |
rm_work | it is so bad | 21:02 |
xgerman_ | ouch | 21:02 |
rm_work | honestly we NEED to backport it | 21:03 |
rm_work | or Pike will be essentially useless | 21:03 |
xgerman_ | +1000 | 21:03 |
rm_work | let me look | 21:03 |
johnsom | Yeah, I'm just saying I want to see something like that outside that environment before I say it's us | 21:03 |
johnsom | I mean you can't even tell neutron to delete a port in there | 21:03 |
rm_work | johnsom: i can't imagine a scenario where the old HM code wouldn't fall over | 21:03 |
xgerman_ | https://usercontent.irccloud-cdn.com/file/vA5OL9ym/Screen%20Shot%202018-03-22%20at%2012.48.04%20PM.png | 21:04 |
johnsom | Well, we fixed one issue | 21:04 |
rm_work | it falls into the bucket of "did anyone ever try to actually run this" | 21:04 |
rm_work | hmmm | 21:04 |
rm_work | uwsgi shouldn't be that bad tho >_> | 21:04 |
xgerman_ | yep | 21:04 |
johnsom | Yeah, pretty sure this is the effect of something else | 21:04 |
xgerman_ | https://usercontent.irccloud-cdn.com/file/RiGiDVAH/Screen%20Shot%202018-03-22%20at%2012.39.56%20PM.png | 21:04 |
rm_work | yeah i mean | 21:05 |
rm_work | octavia-health ALSO shouldn't be even close | 21:05 |
rm_work | he is definitely suffering from that one | 21:05 |
xgerman_ | yeah, normally I would agree but the only thing hm needs is the DB | 21:05 |
xgerman_ | and mysql runs at 50% CPU | 21:06 |
*** salmankhan has joined #openstack-lbaas | 21:06 | |
johnsom | Given the multiple issues, including those outside octavia, I bet they lost storage for a bit or something similar | 21:07 |
*** aojea has joined #openstack-lbaas | 21:07 | |
xgerman_ | well, it’s trending up after the restart | 21:07 |
xgerman_ | (hm memory) | 21:07 |
xgerman_ | and they even now allegedly tuned mysql | 21:09 |
rm_work | wait how the heck do I *checkout* stable/pike | 21:10 |
*** salmankhan has quit IRC | 21:10 | |
xgerman_ | git checkout origin/stable/pike | 21:10 |
rm_work | ah | 21:10 |
*** yamamoto has joined #openstack-lbaas | 21:10 | |
xgerman_ | yeah, I think having the hm being unbound is not ok — unless we want people to run it in some straight jacket cgroup | 21:11 |
*** yamamoto has quit IRC | 21:16 | |
xgerman_ | https://bugs.python.org/issue14119 | 21:17 |
*** AlexeyAbashkin has joined #openstack-lbaas | 21:17 | |
*** kobis has joined #openstack-lbaas | 21:18 | |
*** AlexeyAbashkin has quit IRC | 21:21 | |
rm_work | umm | 21:22 |
xgerman_ | yeah, rumor on the Internet has it that ProcessPool blocks | 21:22 |
rm_work | xgerman_: there's a number of changes that i think will fix this | 21:22 |
rm_work | and you won't need to worry about that | 21:23 |
rm_work | i'm working on the cherry-picks | 21:23 |
rm_work | the merge conflicts are somewhat insane tho | 21:23 |
xgerman_ | sweet | 21:23 |
*** kobis has quit IRC | 21:26 | |
*** kobis has joined #openstack-lbaas | 21:28 | |
*** kobis has quit IRC | 21:28 | |
*** ssmith has quit IRC | 21:36 | |
*** pcaruana has quit IRC | 21:53 | |
rm_work | WHEEEELP this will be funtimes | 21:54 |
rm_work | you ready xgerman_? | 21:54 |
*** yamamoto has joined #openstack-lbaas | 21:55 | |
rm_work | oh one sec | 21:55 |
rm_work | alright here goes nothing... | 21:57 |
rm_work | errr | 21:58 |
rm_work | it rejected my PR | 21:58 |
rm_work | s/PR/review/ | 21:58 |
rm_work | OH. right. | 21:59 |
rm_work | ok so.... i remember why we got stuck on this | 22:00 |
rm_work | anyway, here's the end of the chain: https://review.openstack.org/#/c/555475/ | 22:01 |
rm_work | but the first one was blocked due to adding a requirement <_< | 22:01 |
rm_work | which we just can't avoid | 22:01 |
rm_work | several of those things solve bits of the problems you have xgerman_ | 22:02 |
rm_work | like, "Minimize the effect overloaded Health Manager processes" | 22:02 |
rm_work | solves the "we got way behind" problem | 22:03 |
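One way to blunt the "we got way behind" failure mode that patch title refers to is to notice when a heartbeat has been sitting in the receive queue longer than the reporting interval and skip it rather than doing a full, now stale, database update. A hedged sketch of that kind of check; the interval value is an assumption, and this is not necessarily how the actual patch does it:

```python
import time

HEARTBEAT_INTERVAL = 10  # seconds; assumed, a typical reporting interval


def handle_heartbeat(message, received_at, update_fn):
    """Skip work on heartbeats that went stale while waiting in the queue."""
    queued_for = time.time() - received_at
    if queued_for > HEARTBEAT_INTERVAL:
        # A fresher heartbeat for this amphora is already queued behind this
        # one, so processing this message would only add to the backlog.
        return False
    update_fn(message)
    return True
```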
*** rcernin has joined #openstack-lbaas | 22:34 | |
*** dayou has quit IRC | 22:42 | |
*** dayou has joined #openstack-lbaas | 22:43 | |
*** fnaval has quit IRC | 22:46 | |
*** aojea has quit IRC | 22:49 | |
*** dayou has quit IRC | 22:52 | |
*** dayou has joined #openstack-lbaas | 22:55 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 23:16 | |
*** AlexeyAbashkin has quit IRC | 23:20 | |
*** Swami has quit IRC | 23:26 | |
*** fnaval has joined #openstack-lbaas | 23:41 |