*** slaweq has quit IRC | 00:00 | |
*** slaweq has joined #openstack-lbaas | 00:05 | |
*** rtjure has joined #openstack-lbaas | 00:07 | |
*** sshank has quit IRC | 00:10 | |
*** leitan has quit IRC | 00:12 | |
*** rtjure has quit IRC | 00:12 | |
*** rcernin has joined #openstack-lbaas | 00:19 | |
*** rtjure has joined #openstack-lbaas | 00:20 | |
*** rcernin has quit IRC | 00:23 | |
*** rcernin has joined #openstack-lbaas | 00:24 | |
*** rtjure has quit IRC | 00:24 | |
*** rtjure has joined #openstack-lbaas | 00:32 | |
*** jdavis has joined #openstack-lbaas | 00:34 | |
*** rtjure has quit IRC | 00:37 | |
*** slaweq has quit IRC | 00:37 | |
*** jdavis has quit IRC | 00:39 | |
*** rtjure has joined #openstack-lbaas | 00:42 | |
*** rtjure has quit IRC | 00:47 | |
*** slaweq has joined #openstack-lbaas | 00:49 | |
*** rtjure has joined #openstack-lbaas | 00:54 | |
*** jniesz has quit IRC | 00:57 | |
*** ipsecguy has quit IRC | 00:58 | |
*** rtjure has quit IRC | 00:59 | |
*** ipsecguy has joined #openstack-lbaas | 00:59 | |
*** ltomasbo has quit IRC | 01:04 | |
*** rtjure has joined #openstack-lbaas | 01:05 | |
*** ltomasbo has joined #openstack-lbaas | 01:06 | |
*** rtjure has quit IRC | 01:10 | |
*** ltomasbo has quit IRC | 01:12 | |
*** rtjure has joined #openstack-lbaas | 01:19 | |
*** ltomasbo has joined #openstack-lbaas | 01:20 | |
*** slaweq has quit IRC | 01:20 | |
*** rtjure has quit IRC | 01:23 | |
*** dayou has quit IRC | 01:26 | |
*** rtjure has joined #openstack-lbaas | 01:28 | |
*** slaweq has joined #openstack-lbaas | 01:29 | |
*** leitan has joined #openstack-lbaas | 01:29 | |
*** rcernin has quit IRC | 01:32 | |
*** rcernin has joined #openstack-lbaas | 01:32 | |
*** rcernin has quit IRC | 01:33 | |
*** rtjure has quit IRC | 01:33 | |
*** leitan has quit IRC | 01:33 | |
*** rtjure has joined #openstack-lbaas | 01:40 | |
*** dayou has joined #openstack-lbaas | 01:44 | |
*** rtjure has quit IRC | 01:45 | |
*** sapd has quit IRC | 01:46 | |
*** sapd has joined #openstack-lbaas | 01:48 | |
*** rtjure has joined #openstack-lbaas | 01:53 | |
*** rcernin has joined #openstack-lbaas | 01:55 | |
*** annp has joined #openstack-lbaas | 01:56 | |
*** rtjure has quit IRC | 01:58 | |
*** slaweq has quit IRC | 02:02 | |
*** slaweq has joined #openstack-lbaas | 02:04 | |
*** rtjure has joined #openstack-lbaas | 02:07 | |
*** rtjure has quit IRC | 02:12 | |
*** ltomasbo has quit IRC | 02:12 | |
*** yamamoto has joined #openstack-lbaas | 02:14 | |
kong | hmm.. the octavia client only supports being initialized with a session? | 02:15 |
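For context on kong's question: the python-octaviaclient at this point is indeed built around a keystoneauth session plus an endpoint. A minimal sketch, assuming the octaviaclient.api.v2.octavia module and an OctaviaAPI(session=..., endpoint=...) constructor; credentials and URLs below are placeholders:

    from keystoneauth1 import loading
    from keystoneauth1 import session
    from octaviaclient.api.v2 import octavia

    # Build a keystoneauth session from password credentials.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Hand the session to the client along with the load balancer API URL.
    client = octavia.OctaviaAPI(session=sess,
                                endpoint='http://controller:9876')
    print(client.load_balancer_list())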
*** rtjure has joined #openstack-lbaas | 02:18 | |
*** rtjure has quit IRC | 02:23 | |
*** fnaval has joined #openstack-lbaas | 02:27 | |
*** rtjure has joined #openstack-lbaas | 02:27 | |
*** rtjure has quit IRC | 02:32 | |
*** slaweq has quit IRC | 02:37 | |
*** rtjure has joined #openstack-lbaas | 02:37 | |
*** rtjure has quit IRC | 02:42 | |
*** slaweq has joined #openstack-lbaas | 02:42 | |
*** ramishra has joined #openstack-lbaas | 02:44 | |
*** rtjure has joined #openstack-lbaas | 02:48 | |
*** cody-somerville has joined #openstack-lbaas | 02:48 | |
*** rtjure has quit IRC | 02:53 | |
*** rtjure has joined #openstack-lbaas | 02:57 | |
*** fnaval has quit IRC | 03:01 | |
*** fnaval has joined #openstack-lbaas | 03:02 | |
*** rtjure has quit IRC | 03:02 | |
*** rtjure has joined #openstack-lbaas | 03:10 | |
*** rtjure has quit IRC | 03:15 | |
*** slaweq has quit IRC | 03:15 | |
*** slaweq has joined #openstack-lbaas | 03:21 | |
*** rtjure has joined #openstack-lbaas | 03:22 | |
*** rtjure has quit IRC | 03:26 | |
*** rtjure has joined #openstack-lbaas | 03:32 | |
*** rtjure has quit IRC | 03:37 | |
*** rtjure has joined #openstack-lbaas | 03:45 | |
*** rtjure has quit IRC | 03:50 | |
*** slaweq has quit IRC | 03:55 | |
*** rtjure has joined #openstack-lbaas | 03:56 | |
*** rtjure has quit IRC | 04:01 | |
*** slaweq has joined #openstack-lbaas | 04:06 | |
*** rtjure has joined #openstack-lbaas | 04:08 | |
*** rtjure has quit IRC | 04:12 | |
*** rtjure has joined #openstack-lbaas | 04:18 | |
*** rcernin has quit IRC | 04:21 | |
*** rtjure has quit IRC | 04:22 | |
*** rtjure has joined #openstack-lbaas | 04:27 | |
*** rtjure has quit IRC | 04:32 | |
*** slaweq has quit IRC | 04:39 | |
*** rtjure has joined #openstack-lbaas | 04:40 | |
*** rcernin has joined #openstack-lbaas | 04:45 | |
*** rtjure has quit IRC | 04:45 | |
*** slaweq has joined #openstack-lbaas | 04:47 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 04:47 | |
*** krypto has joined #openstack-lbaas | 04:48 | |
*** rtjure has joined #openstack-lbaas | 04:49 | |
*** rtjure has quit IRC | 04:54 | |
*** rtjure has joined #openstack-lbaas | 04:59 | |
*** rtjure has quit IRC | 05:04 | |
*** rtjure has joined #openstack-lbaas | 05:10 | |
*** rtjure has quit IRC | 05:15 | |
*** slaweq has quit IRC | 05:19 | |
*** rtjure has joined #openstack-lbaas | 05:20 | |
*** rtjure has quit IRC | 05:24 | |
*** slaweq has joined #openstack-lbaas | 05:26 | |
*** ltomasbo has joined #openstack-lbaas | 05:26 | |
*** m3m0r3x has joined #openstack-lbaas | 05:26 | |
*** m3m0r3x has quit IRC | 05:30 | |
*** rtjure has joined #openstack-lbaas | 05:30 | |
*** rtjure has quit IRC | 05:35 | |
*** rtjure has joined #openstack-lbaas | 05:40 | |
*** kbyrne has quit IRC | 05:42 | |
*** kbyrne has joined #openstack-lbaas | 05:45 | |
*** slaweq has quit IRC | 05:59 | |
*** slaweq has joined #openstack-lbaas | 06:07 | |
*** Alex_Staf has joined #openstack-lbaas | 06:34 | |
*** pcaruana has joined #openstack-lbaas | 06:39 | |
*** csomerville has joined #openstack-lbaas | 06:40 | |
*** slaweq has quit IRC | 06:40 | |
*** cody-somerville has quit IRC | 06:43 | |
*** slaweq has joined #openstack-lbaas | 06:49 | |
*** tesseract has joined #openstack-lbaas | 06:54 | |
*** armax has quit IRC | 07:09 | |
*** spectr has joined #openstack-lbaas | 07:13 | |
*** slaweq has quit IRC | 07:21 | |
*** slaweq has joined #openstack-lbaas | 07:23 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 07:46 | |
*** slaweq_ has joined #openstack-lbaas | 07:48 | |
*** slaweq has quit IRC | 07:56 | |
*** slaweq has joined #openstack-lbaas | 08:02 | |
*** slaweq has quit IRC | 08:06 | |
*** bcafarel has quit IRC | 08:20 | |
*** bcafarel has joined #openstack-lbaas | 08:21 | |
*** slaweq has joined #openstack-lbaas | 08:36 | |
*** jdavis has joined #openstack-lbaas | 08:37 | |
*** jdavis has quit IRC | 08:42 | |
*** rcernin has quit IRC | 08:59 | |
*** krypto has quit IRC | 09:00 | |
*** krypto has joined #openstack-lbaas | 09:01 | |
*** yamamoto has quit IRC | 09:05 | |
*** slaweq has quit IRC | 09:09 | |
*** salmankhan has joined #openstack-lbaas | 09:11 | |
*** yamamoto has joined #openstack-lbaas | 09:12 | |
*** krypto has quit IRC | 09:21 | |
*** slaweq has joined #openstack-lbaas | 09:21 | |
*** krypto has joined #openstack-lbaas | 09:27 | |
isantosp | is there an equivalent way to change base_url=http://openstack.xxx.xx:9876 to base_url=http://octavia-box.xxx.xx:9876/something ? what should I put in /something? | 09:39 |
*** yamamoto has quit IRC | 09:49 | |
*** slaweq has quit IRC | 09:55 | |
krypto | hello all, I am able to start most Octavia services except the worker, which gives this error: "InvalidTarget: A server's target must have topic and server names specified:<Target server=amphora1>" where amphora1 is the node where octavia-api and the other services are running | 09:55 |
krypto | I don't see anywhere in the configuration where I have specified my hostname | 09:57 |
*** slaweq has joined #openstack-lbaas | 10:02 | |
*** eN_Guruprasad_Rn has quit IRC | 10:08 | |
*** krypto has quit IRC | 10:22 | |
*** krypto has joined #openstack-lbaas | 10:23 | |
*** annp has quit IRC | 10:24 | |
*** slaweq has quit IRC | 10:36 | |
*** pcaruana has quit IRC | 10:37 | |
*** slaweq has joined #openstack-lbaas | 10:39 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 10:40 | |
*** yamamoto has joined #openstack-lbaas | 10:49 | |
*** salmankhan has quit IRC | 10:50 | |
*** salmankhan has joined #openstack-lbaas | 10:51 | |
*** eN_Guruprasad_Rn has quit IRC | 10:56 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 10:57 | |
*** ramishra has quit IRC | 11:00 | |
*** yamamoto has quit IRC | 11:01 | |
*** ramishra has joined #openstack-lbaas | 11:02 | |
*** slaweq has quit IRC | 11:11 | |
*** yamamoto has joined #openstack-lbaas | 11:14 | |
*** atoth has joined #openstack-lbaas | 11:19 | |
*** slaweq has joined #openstack-lbaas | 11:23 | |
openstackgerrit | Santhosh Fernandes proposed openstack/octavia master: [WIP] ACTIVE-ACTIVE with exabgp-speaker - Octavia agent https://review.openstack.org/491016 | 11:32 |
*** pcaruana has joined #openstack-lbaas | 11:43 | |
*** leitan has joined #openstack-lbaas | 11:55 | |
*** leitan has quit IRC | 11:55 | |
*** leitan has joined #openstack-lbaas | 11:56 | |
*** yamamoto has quit IRC | 12:02 | |
*** pcaruana has quit IRC | 12:03 | |
*** pcaruana has joined #openstack-lbaas | 12:04 | |
*** jdavis has joined #openstack-lbaas | 12:22 | |
*** slaweq has quit IRC | 12:28 | |
*** knsahm has joined #openstack-lbaas | 12:32 | |
knsahm | does anybody know how I can use the octavia tempest tests with rally? | 12:32 |
knsahm | https://github.com/openstack/octavia/tree/master/octavia/tests/tempest | 12:32 |
*** slaweq has joined #openstack-lbaas | 12:33 | |
*** jdavis has quit IRC | 12:43 | |
*** knsahm has quit IRC | 12:48 | |
*** knsahm has joined #openstack-lbaas | 12:59 | |
*** eN_Guruprasad_Rn has quit IRC | 12:59 | |
*** yamamoto has joined #openstack-lbaas | 13:02 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 13:05 | |
*** fnaval has quit IRC | 13:06 | |
*** slaweq has quit IRC | 13:06 | |
*** yamamoto has quit IRC | 13:12 | |
*** slaweq has joined #openstack-lbaas | 13:12 | |
*** eN_Guruprasad_Rn has quit IRC | 13:12 | |
*** sanfern has joined #openstack-lbaas | 13:12 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 13:12 | |
*** eN_Guruprasad_Rn has quit IRC | 13:16 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 13:16 | |
*** eN_Guruprasad_Rn has quit IRC | 13:23 | |
*** fnaval has joined #openstack-lbaas | 13:24 | |
*** slaweq has quit IRC | 13:44 | |
*** slaweq has joined #openstack-lbaas | 13:48 | |
*** armax has joined #openstack-lbaas | 13:53 | |
johnsom | isantosp You can modify the base_url to be something different/deeper if that is where your Octavia API v1 endpoint is located, but the default is just hostname:port and the neutron-lbaas octavia driver fills in the rest. | 13:57 |
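A rough sketch of the neutron-side settings being discussed, assuming the [octavia] group and service_provider entry used by the neutron-lbaas octavia driver; hosts are placeholders:

    # neutron.conf (or neutron_lbaas.conf)
    [service_providers]
    service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default

    [octavia]
    # Only scheme://host:port is needed; the driver appends the v1 paths itself.
    base_url = http://octavia-box.xxx.xx:9876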
johnsom | krypto When settings are not specified in the config file, many have a "default". In this case the default behavior is to pull in the hostname. This is the "host" setting in the [DEFAULT] block. | 13:59 |
johnsom | It is used for the message queuing over RabbitMQ | 13:59 |
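The InvalidTarget error above comes from oslo.messaging: the worker builds an RPC target whose server name defaults to the hostname (hence amphora1 showing up), and the quoted error suggests the companion topic setting was missing or empty. A rough sketch of the pattern, not Octavia's exact code; the topic name and option registration here are illustrative:

    import socket

    from oslo_config import cfg
    import oslo_messaging as messaging

    CONF = cfg.CONF
    # Octavia-style "host" option in [DEFAULT]; defaults to the hostname.
    CONF.register_opts([cfg.StrOpt('host', default=socket.gethostname())])

    # Assumes a valid transport_url / rabbit config is present in CONF.
    transport = messaging.get_transport(CONF)
    # Both topic and server are required, otherwise InvalidTarget is raised.
    target = messaging.Target(topic='octavia_prov', server=CONF.host)
    server = messaging.get_rpc_server(transport, target, endpoints=[],
                                      executor='threading')
    server.start()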
*** maestropandy has joined #openstack-lbaas | 14:00 | |
johnsom | knsahm I don't know off the top of my head. We had talked about doing so, but no one has taken the time to set Rally up for octavia as far as I know. | 14:00 |
maestropandy | https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1727335 | 14:01 |
openstack | Launchpad bug 1727335 in Neutron LBaaS Dashboard "Devstack - Could not satisfy constraints for 'horizon': installation from path or url cannot be constrained to a version" [Undecided,New] | 14:01 |
johnsom | maestropandy We don't use launchpad anymore, we use storyboard | 14:04 |
johnsom | https://storyboard.openstack.org/#!/project/907 | 14:05 |
johnsom | Not sure how you were able to create that bug | 14:05 |
*** slaweq_ has quit IRC | 14:09 | |
maestropandy | johnsom: I have filed in launchpad, can i able to see same in storyboard ? | 14:09 |
johnsom | maestropandy It was a one-time migration so the bug will need to be opened in Storyboard. All of the OpenStack projects are getting moved over.... | 14:10 |
maestropandy | oh, I have to open/create a fresh one here, don't I? | 14:11 |
johnsom | Sadly yes. Bugs are turned off for the project in launchpad. It only allowed it because you migrated the bug from devstack | 14:11 |
maestropandy | got it. What about old bugs, will they be archived? | 14:12 |
johnsom | They all got moved over and opened as stories. | 14:12 |
johnsom | They still exist in launchpad just so old links from patches still work | 14:12 |
*** dayou has quit IRC | 14:13 | |
maestropandy | https://storyboard.openstack.org/#!/story/2001262 | 14:13 |
johnsom | Thank you! | 14:14 |
johnsom | maestropandy You are trying to use neutron-lbaas-dashboard as a devstack plugin? | 14:18 |
maestropandy | johnsom: yes | 14:20 |
maestropandy | http://paste.openstack.org/show/624625/ | 14:21 |
*** slaweq has quit IRC | 14:21 | |
maestropandy | updated my story with same | 14:22 |
johnsom | Ok, and you have horizon in your local.rc/local.conf? | 14:23 |
maestropandy | by default there is no need to mention it in local.conf.. devstack will install it. When I removed the lbaas-dashboard plugin from the local.conf file, ./stack.sh runs properly | 14:24 |
johnsom | maestropandy If you have this environment up, can you try removing the horizon tar line from the test-requirements? I think that is the issue, it should no longer be needed. | 14:24 |
maestropandy | you mean to remove "tarballs.openstack.org/horizon/horizon-master.tar.gz" from that file ? | 14:26 |
*** dayou has joined #openstack-lbaas | 14:27 | |
johnsom | Yeah remove the line "http://tarballs.openstack.org/horizon/horizon-master.tar.gz#egg=horizon" | 14:27 |
maestropandy | ok, doing and re-installing devstack | 14:27 |
johnsom | If you get a good install I will work on a patch | 14:28 |
*** slaweq has joined #openstack-lbaas | 14:28 | |
maestropandy | sure | 14:28 |
johnsom | Thanks for testing for me. It would take me longer to set up that environment to do initial testing. | 14:28 |
maestropandy | yup, devstack ./stack.sh takes time :( | 14:30 |
johnsom | I already have four devstack VMs running, so that is the bigger issue for me... Grin Only so much RAM to go around | 14:30 |
maestropandy | hmm | 14:31 |
xgerman_ | cores: https://review.openstack.org/#/c/514402/ | 14:40 |
* johnsom notes he can't vote as he wrote it | 14:41 | |
xgerman_ | yep, want to put it back on the front burner | 14:42 |
*** slaweq_ has joined #openstack-lbaas | 14:45 | |
*** ramishra has quit IRC | 14:56 | |
xgerman_ | sanfern has the following issue. Ideas? | 14:56 |
xgerman_ | http://paste.openstack.org/show/GsbdNpvfCtVAQpn4GSeF/ | 14:56 |
johnsom | I would need more of the log to understand if it is just an order issue or a missing requirement | 14:57 |
*** AlexeyAbashkin has quit IRC | 14:58 | |
xgerman_ | yeah, I am distracted by meetings so punting it here ;-) | 14:58 |
johnsom | Also, I think capitalization matters there | 14:58 |
johnsom | WebOb>=1.7.1 # MIT from the requirements file | 14:59 |
sanfern | johnsom, http://logs.openstack.org/16/491016/2/check/openstack-tox-pep8/bf7993b/job-output.txt.gz | 14:59 |
johnsom | sanfern Yeah, ok, so it's an import order and spacing issue. | 15:00 |
johnsom | Import statements are in the wrong order. import logging should be before import flask | 15:00 |
johnsom | Missing newline before sections or imports. | 15:00 |
johnsom | Import statements are in the wrong order. import stat should be before import pyroute2 | 15:00 |
johnsom | So just a couple of simple import order/spacing cleanups in exabgp.py | 15:01 |
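For reference, the style the pep8 job enforces is: standard-library imports in one group, third-party imports in a second group, each group alphabetized and separated by a blank line. Based on the warnings quoted above, the top of exabgp.py would look roughly like:

    # Standard library imports, alphabetized.
    import logging
    import stat

    # Third-party imports in their own group, also alphabetized.
    import flask
    import pyroute2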
sanfern | my local run did not warn me :( | 15:01 |
*** slaweq has quit IRC | 15:01 | |
sanfern | thanks johnsom | 15:01 |
*** slaweq has joined #openstack-lbaas | 15:01 | |
*** fnaval has quit IRC | 15:01 | |
johnsom | Hmm, tox and/or tox -e pep8 should warn on those. I would try a tox -r to see if new versions of packages are needed for tox | 15:02 |
johnsom | NP | 15:02 |
*** fnaval has joined #openstack-lbaas | 15:02 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 15:03 | |
*** sri_ has quit IRC | 15:07 | |
isantosp | Is there any way to delete loadbalancers stack in pending_delete? | 15:11 |
xgerman_ | they should eventually go into ERROR | 15:19 |
*** rohara has joined #openstack-lbaas | 15:24 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: Make the event streamer transport URL configurable https://review.openstack.org/514452 | 15:24 |
xgerman_ | ^^ good to go | 15:24 |
*** knsahm has quit IRC | 15:25 | |
*** slaweq has quit IRC | 15:29 | |
*** slaweq has joined #openstack-lbaas | 15:39 | |
*** slaweq has quit IRC | 15:44 | |
*** krypto has quit IRC | 15:49 | |
*** pcaruana has quit IRC | 15:49 | |
*** krypto has joined #openstack-lbaas | 15:49 | |
*** krypto has quit IRC | 15:49 | |
*** krypto has joined #openstack-lbaas | 15:49 | |
*** Alex_Staf has quit IRC | 15:51 | |
*** bcafarel has quit IRC | 15:54 | |
*** bcafarel has joined #openstack-lbaas | 15:54 | |
*** maestropandy has quit IRC | 15:55 | |
*** maestropandy has joined #openstack-lbaas | 15:57 | |
johnsom | xgerman_ reviewed with comments | 16:07 |
*** maestropandy has quit IRC | 16:09 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: Make the event streamer transport URL configurable https://review.openstack.org/514452 | 16:13 |
xgerman_ | johnsom I am not sure how we can stop other projects from (ab)using the event streamer — also changing the name has implications on another commit… | 16:14 |
johnsom | Yeah, I wasn't trying to "stop them" but make it clear that it's not a general purpose thing they should start using. I.e. as soon as we deprecate neutron-lbaas this goes deprecated as well. Trying to be super clear with the wording. | 16:14 |
xgerman_ | agreed with deprecating | 16:16 |
xgerman_ | let me know how strongly you feel about the name and I will change it… | 16:17 |
*** maestropandy has joined #openstack-lbaas | 16:18 | |
johnsom | I think I can give it a pass now that the help text is more clear | 16:20 |
johnsom | We will see what the other cores say | 16:20 |
*** maestropandy has quit IRC | 16:20 | |
xgerman_ | k | 16:22 |
*** tesseract has quit IRC | 16:25 | |
*** sanfern has quit IRC | 16:35 | |
*** sanfern has joined #openstack-lbaas | 16:37 | |
*** AlexeyAbashkin has quit IRC | 16:38 | |
*** Swami has joined #openstack-lbaas | 16:48 | |
*** AJaeger has joined #openstack-lbaas | 17:00 | |
AJaeger | lbaas cores, please approve https://review.openstack.org/514143 to fix the zuul.yaml config. | 17:00 |
AJaeger | johnsom: could you help to get a +2A, please? | 17:01 |
AJaeger | you now have a periodic-newton job but no newton branch anymore | 17:01 |
AJaeger | the change above removes that | 17:01 |
*** salmankhan has quit IRC | 17:06 | |
johnsom | AJaeger Yep, still a bunch of work to do. | 17:08 |
johnsom | xgerman_ Can you take a look at ^^^^ | 17:08 |
xgerman_ | done | 17:09 |
AJaeger | thanks | 17:09 |
*** salmankhan has joined #openstack-lbaas | 17:09 | |
xgerman_ | also my job on OSA was experimental so didn’t get migrated (?!) | 17:10 |
johnsom | I plan to go through and fix the stable stuff today. The py3 jobs aren't running on Pike at the moment, etc. | 17:10 |
*** sshank has joined #openstack-lbaas | 17:21 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 17:27 | |
*** sshank has quit IRC | 17:31 | |
rm_work | xgerman_: had a quick question on https://review.openstack.org/#/c/514452/4 | 17:33 |
xgerman_ | rm_work the url has both username and password in it | 17:36 |
johnsom | Yep | 17:37 |
rm_work | xgerman_: actually decided to -1 for updating octavia.conf example config to have this listed | 17:38 |
johnsom | Ah, yeah good catch | 17:38 |
rm_work | though yeah, i forgot that it's all one big URL | 17:38 |
rm_work | but i would have been reminded if i saw it in the example config :P | 17:38 |
xgerman_ | ok, sure | 17:39 |
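For reference, an oslo.messaging transport URL carries the credentials inline, which is why it should not be echoed verbatim into the sample config; the general shape (all values are placeholders) is:

    transport_url = rabbit://<user>:<password>@<host>:5672/<vhost>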
rm_work | also: releasenote typo fix while you're doing that | 17:39 |
*** sshank has joined #openstack-lbaas | 17:39 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: Make the event streamer transport URL configurable https://review.openstack.org/514452 | 17:45 |
xgerman_ | ok, hopefully that helps | 17:45 |
rm_work | xgerman_: so close :P | 17:50 |
xgerman_ | I am afraid of when I have a more substantial patch… | 17:50 |
openstackgerrit | German Eichberger proposed openstack/octavia master: Make the event streamer transport URL configurable https://review.openstack.org/514452 | 17:51 |
rm_work | lol | 17:51 |
rm_work | but look how quick that was | 17:52 |
johnsom | I only nit'd them because the help text called it the transport url, so it at least had some relation | 17:52 |
xgerman_ | yep, I try to be quick when I get reviews ;-) | 17:54 |
rm_work | i'm trying to understand why all the tests are failing on this change: https://review.openstack.org/#/c/510225/ | 17:56 |
rm_work | it seems like the servers aren't building | 17:56 |
rm_work | but not seeing errors in the logs | 17:56 |
rm_work | oh wait, one more log to check | 17:56 |
xgerman_ | I thought last week we decided to go with querying nova and not caching? Guess Zuul incorporates chat logs in its -1s | 17:57 |
rm_work | yep nm i found it | 17:57 |
rm_work | xgerman_: talked with johnsom at length, reproposing it as "cached_az" to be clear about it, and still using nova-querying for other bits | 17:58 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Add cached_az to the amphora record https://review.openstack.org/510225 | 17:59 |
xgerman_ | I need to think about that and probably look at the code… | 17:59 |
rm_work | or rather, we will when we write the other bits | 17:59 |
rm_work | really this patch is mostly identical to the old version, but it names it clearly as "cached_az" so it's obvious it might not be current | 18:00 |
rm_work | so some things that need a "best-guess but FAST" az value can use this | 18:00 |
rm_work | then we can query when we need a 100% | 18:00 |
xgerman_ | yep, and I need to do more thinking about whether Octavia needs to be AZ aware, because this is only one way to do HA and it might need to be more generalized… | 18:00 |
rm_work | well, technically this doesn't add anything to make it truly AZ-Aware | 18:02 |
rm_work | just puts the cached AZ in the db for easy lookup for filtering | 18:02 |
rm_work | we can get the exact AZ via the nova-query method in any functions that NEED that | 18:03 |
xgerman_ | mmh, so it’s basically metadata? Why don’t we make a general metadata solution? | 18:08 |
xgerman_ | I was trying to read up on stuff and I ran across https://docs.openstack.org/senlin/latest/index.html | 18:09 |
*** AlexeyAbashkin has joined #openstack-lbaas | 18:10 | |
rm_work | well, it needs to be queryable in a DB join | 18:10 |
xgerman_ | so why wouldn’t metadata be that way? | 18:12 |
rm_work | a lot of metadata solutions don't store in easily joinable fields | 18:12 |
*** strigazi_ has joined #openstack-lbaas | 18:13 | |
*** AlexeyAbashkin has quit IRC | 18:14 | |
xgerman_ | well, we need more metadata than AZ in the future, e.g. if somebody uses kosmos or DNS load balancing they might want to put data in to tie them together | 18:14 |
xgerman_ | or people might want to schedule LBs close to the web servers… | 18:14 |
xgerman_ | so I feel we are painting ourselves into a corner | 18:15 |
*** jniesz has joined #openstack-lbaas | 18:15 | |
rm_work | you're killing me German | 18:15 |
rm_work | *killing me* | 18:15 |
xgerman_ | I think AZ probably plays a role when we schedule LBs for HA but I haven't thought deeply about how that will work - especially with ACTIVE-ACTIVE… | 18:16 |
*** strigazi has quit IRC | 18:16 | |
*** strigazi_ is now known as strigazi | 18:17 | |
xgerman_ | we need to make a call on whether AZ is random metadata or something actionable for us… and I don't think we have our HA story figured out (one idea was to put AZ into the flavor) | 18:18 |
*** slaweq has joined #openstack-lbaas | 18:19 | |
xgerman_ | I would also like to evaluate what happens if we shove the LB or amp ID into the nova metadata — would that meet your needs? | 18:20 |
rm_work | hmmmmmm | 18:21 |
rm_work | not quite | 18:22 |
*** slaweq has quit IRC | 18:24 | |
openstackgerrit | Santhosh Fernandes proposed openstack/octavia master: [WIP] ACTIVE-ACTIVE with exabgp-speaker - Octavia agent https://review.openstack.org/491016 | 18:24 |
xgerman_ | ok, let me mull that over. a bit… | 18:27 |
rm_work | I will do any code you want as long as there is some field in the DB that can store the cached AZ data | 18:29 |
*** slaweq has joined #openstack-lbaas | 18:31 | |
openstackgerrit | Michael Johnson proposed openstack/neutron-lbaas-dashboard master: Set package.json version to 4.0.0.0b1 Queens MS1 https://review.openstack.org/515164 | 18:32 |
openstackgerrit | Michael Johnson proposed openstack/octavia-dashboard master: Set package.json version to 1.0.0.0b1 Queens MS1 https://review.openstack.org/515166 | 18:33 |
*** sanfern has quit IRC | 18:33 | |
johnsom | rm_work xgerman_ Can you take a look at those two dashboard patches and approve so we can un-block our MS1 release? Thanks | 18:35 |
*** slaweq has quit IRC | 18:36 | |
*** csomerville has quit IRC | 18:39 | |
*** csomerville has joined #openstack-lbaas | 18:39 | |
*** eN_Guruprasad_Rn has quit IRC | 18:43 | |
*** atoth has quit IRC | 18:46 | |
*** sshank has quit IRC | 18:54 | |
johnsom | Oye: Here's how npm's semver implementation deviates from what's on semver.org: | 18:57 |
johnsom | Guess I need another spin.... | 18:57 |
johnsom | Would love to find the person that decided that... "Let's deviate from a standard" | 18:58 |
openstackgerrit | Michael Johnson proposed openstack/octavia-dashboard master: Set package.json version to 1.0.0.0b1 Queens MS1 https://review.openstack.org/515166 | 19:01 |
openstackgerrit | Michael Johnson proposed openstack/neutron-lbaas-dashboard master: Set package.json version to 4.0.0.0b1 Queens MS1 https://review.openstack.org/515164 | 19:01 |
*** sshank has joined #openstack-lbaas | 19:02 | |
krypto | hi johnsom, thanks for the suggestion. Is the certificate configuration mandatory? I skipped that. Could you have a quick glance at my configuration http://paste.ubuntu.com/25818737/ ? Not sure what I am missing | 19:05 |
*** slaweq has joined #openstack-lbaas | 19:05 | |
rm_work | johnsom: hmm found an interesting bug | 19:07 |
*** sshank has quit IRC | 19:07 | |
rm_work | i think | 19:07 |
*** armax has quit IRC | 19:08 | |
rm_work | https://github.com/openstack/octavia/blob/master/octavia/controller/worker/controller_worker.py#L446 | 19:08 |
rm_work | johnsom: if the pool we're creating doesn't have any listeners... should this at least exist!? I saw: | 19:08 |
openstackgerrit | Swaminathan Vasudevan proposed openstack/octavia master: (WIP):Support SUSE distro based Amphora Image for Octavia https://review.openstack.org/498909 | 19:10 |
rm_work | http://paste.openstack.org/show/624640/ | 19:11 |
rm_work | i thought the model would at least HAVE the attribute | 19:11 |
johnsom | rm_work That trace implies the call to the database to pull the pool record failed to find it. That is the issue there. How did the API not create the DB record for the pool but still pass it to the handler? | 19:12 |
*** krypto has quit IRC | 19:12 | |
rm_work | yeah i assumed that would have been impossible too | 19:12 |
rm_work | OH I bet I know | 19:13 |
johnsom | That has nothing to do with the listener | 19:13 |
rm_work | i bet it's the DB replication being slow | 19:13 |
rm_work | probably ended up on a read-slave that didn't catch the replication yet | 19:13 |
johnsom | Ouch, that sounds like fun.... | 19:13 |
rm_work | that's ... eugh | 19:14 |
rm_work | do we need to *support* db clusters? | 19:14 |
rm_work | even just master-slave? | 19:14 |
rm_work | i wonder if this is "our problem" or "my problem" | 19:14 |
johnsom | I could comment, but won't. | 19:15 |
johnsom | I have seen patches starting to float around neutron about NDB mode support | 19:15 |
johnsom | I mean in this case, we *could* put some retry logic in | 19:15 |
rm_work | well | 19:15 |
rm_work | that'd be ... messy | 19:15 |
rm_work | is there a more standard way to support cluster mode? | 19:16 |
rm_work | or is that pretty much it | 19:16 |
rm_work | i guess i should just take us back down to simply using a single master node, with the slave just a backup we can swing the DB FLIP over to | 19:16 |
johnsom | Ummm well, there are many options here. | 19:16 |
johnsom | Turn your cluster to sync replication | 19:16 |
rm_work | because it seems like this could happen to anyone using read-slaves | 19:17 |
rm_work | ah, yeah | 19:17 |
rm_work | then the first call doesn't return until the replication is complete? | 19:17 |
johnsom | Add retries to the controller worker methods as it's the entry point. | 19:17 |
rm_work | i think that might be the solution | 19:17 |
johnsom | There is this patch open in nlbaas: https://review.openstack.org/#/c/513081/ | 19:17 |
rm_work | well, retries in the controller-worker would slow down everything in cases where it REALLY doesn't exist? | 19:17 |
johnsom | I haven't had time to look yet though | 19:17 |
rm_work | or, is there no such case | 19:17 |
johnsom | Well, for the base objects, it should never happen, but the secondary objects, yes could be valid not found I think. | 19:18 |
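One way to implement the retry idea without hiding real not-found errors is a short, bounded retry on the initial fetch, so a lagging read replica gets a moment to catch up. A sketch using tenacity; the repository call and exception here are illustrative, not Octavia's actual code:

    import tenacity

    class PoolNotFound(Exception):
        pass  # raised when the pool row is not visible (yet, or at all)

    @tenacity.retry(
        retry=tenacity.retry_if_exception_type(PoolNotFound),
        wait=tenacity.wait_fixed(1),
        stop=tenacity.stop_after_attempt(5),
        reraise=True)
    def get_pool_or_retry(pool_repo, session, pool_id):
        # With async replication the row may not be visible on a read
        # slave immediately after the API wrote it on the master.
        pool = pool_repo.get(session, id=pool_id)
        if pool is None:
            raise PoolNotFound(pool_id)
        return pool

After five one-second attempts a genuinely missing pool still surfaces as an error, so valid not-found cases are only delayed, not swallowed.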
rm_work | so yeah I am going to revert back to a single sql IP I think | 19:19 |
rm_work | or... somehow i thought i was already on that actually | 19:19 |
rm_work | :/ | 19:19 |
openstackgerrit | Merged openstack/neutron-lbaas master: Remove common jobs from zuul.d https://review.openstack.org/514143 | 19:20 |
johnsom | krypto Haven't forgot about you, just trying to eat a bit of lunch before the meeting. Will take a look in a few | 19:21 |
*** krypto has joined #openstack-lbaas | 19:30 | |
johnsom | krypto You can also compare with the configuration our gate jobs are using: http://logs.openstack.org/02/514402/1/check/octavia-v1-dsvm-scenario/501753b/logs/etc/octavia/octavia.conf.txt.gz | 19:31 |
johnsom | krypto Remind me, which version of Octavia are you deploying? | 19:37 |
johnsom | rm_work xgerman_ Sorry, same deal with the dashboard patches... https://review.openstack.org/515164 and https://review.openstack.org/515166 | 19:39 |
krypto | johnsom 1.0.1 | 19:39 |
johnsom | Thanks | 19:40 |
rm_work | johnsom: what happened to them? I didn't see a difference | 19:40 |
rm_work | oh | 19:40 |
rm_work | too many .0 | 19:40 |
rm_work | lol | 19:40 |
johnsom | That nodejs stuff uses a custom semver | 19:40 |
*** atoth has joined #openstack-lbaas | 19:41 | |
*** armax has joined #openstack-lbaas | 19:42 | |
johnsom | krypto controller_ip_port_list = 192.1681.17:5555 | 19:43 |
johnsom | typo there, so the health manager probably isn't getting its messages | 19:43 |
johnsom | krypto You will need the certificate section filled out and at least some demo certs. There is a script in /bin for the demo certs | 19:45 |
johnsom | ca_private_key_passphrase = passphrase | 19:45 |
johnsom | ca_private_key = /etc/octavia/certs/private/cakey.pem | 19:45 |
johnsom | ca_certificate = /etc/octavia/certs/ca_01.pem | 19:45 |
krypto | wow, that's awesome sir.. you took time to go through my configs. Really appreciate it :) | 19:45 |
krypto | let me fix it .. thanks a lot | 19:45 |
*** AlexeyAbashkin has joined #openstack-lbaas | 19:51 | |
*** krypto has quit IRC | 19:53 | |
johnsom | We try to help folks when we can | 19:53 |
johnsom | Probably need this too: [haproxy_amphora] | 19:55 |
johnsom | server_ca = /etc/octavia/certs/ca_01.pem | 19:55 |
johnsom | client_cert = /etc/octavia/certs/client.pem | 19:55 |
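Pulling johnsom's pointers together, the relevant octavia.conf pieces would look roughly like this; the corrected IP and the cert paths are assumptions based on the snippets quoted above:

    [health_manager]
    # Fix the typo'd address (192.1681.17) so amphorae can reach the HM.
    controller_ip_port_list = 192.168.1.17:5555

    [certificates]
    ca_private_key_passphrase = passphrase
    ca_private_key = /etc/octavia/certs/private/cakey.pem
    ca_certificate = /etc/octavia/certs/ca_01.pem

    [haproxy_amphora]
    server_ca = /etc/octavia/certs/ca_01.pem
    client_cert = /etc/octavia/certs/client.pem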
*** AlexeyAbashkin has quit IRC | 19:55 | |
openstackgerrit | Merged openstack/neutron-lbaas-dashboard master: Set package.json version to 4.0.0.0b1 Queens MS1 https://review.openstack.org/515164 | 19:55 |
*** longstaff has joined #openstack-lbaas | 19:55 | |
johnsom | Ah, cool | 19:56 |
openstackgerrit | Merged openstack/octavia-dashboard master: Set package.json version to 1.0.0.0b1 Queens MS1 https://review.openstack.org/515166 | 19:56 |
johnsom | #startmeeting Octavia | 20:00 |
openstack | Meeting started Wed Oct 25 20:00:06 2017 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot. | 20:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 20:00 |
*** openstack changes topic to " (Meeting topic: Octavia)" | 20:00 | |
openstack | The meeting name has been set to 'octavia' | 20:00 |
xgerman_ | o/ | 20:00 |
johnsom | Hi folks | 20:00 |
nmagnezi | םץ. | 20:00 |
nmagnezi | o/ | 20:00 |
nmagnezi | (sorry :D ) | 20:01 |
longstaff | hi | 20:01 |
jniesz | hi | 20:01 |
johnsom | Ah, there are some people. | 20:01 |
johnsom | #topic Announcements | 20:01 |
*** openstack changes topic to "Announcements (Meeting topic: Octavia)" | 20:01 | |
rm_work | o/ | 20:01 |
johnsom | The Queens MS1 release has not yet gone out. We are working through gate/release system issues from the zuul v3 changes. I expect it will happen in the next day or two. | 20:02 |
johnsom | The newton release has officially gone EOL and the git branches removed. This happened last night. | 20:02 |
johnsom | Just a reminder if you need to reference that code, there is a tag newton-eol you can pull, but we can't put anymore patches on newton. | 20:03 |
johnsom | And finally, I am still working through all of the zuul v3 changes. | 20:03 |
johnsom | Octavia and neutron-lbaas should no longer have duplicate jobs, but I'm still working on the dashboard repos. | 20:04 |
johnsom | I still have more cleanup to finish and I need to fix the stable branches. | 20:04 |
nmagnezi | for zuul v3 ? | 20:04 |
johnsom | However, I think we have zuul v3 functional at this point. | 20:04 |
johnsom | yes | 20:04 |
johnsom | stable branch gates aren't running at the moment (just failing). | 20:05 |
johnsom | Oh, we do have some TC changes: | 20:06 |
johnsom | #link https://governance.openstack.org/election/results/queens/tc.html | 20:06 |
johnsom | Any other announcements today? | 20:06 |
*** sshank has joined #openstack-lbaas | 20:07 | |
johnsom | #topic Brief progress reports / bugs needing review | 20:07 |
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)" | 20:07 | |
johnsom | Please continue reviewing and commenting on the provider driver spec: | 20:07 |
johnsom | #link https://review.openstack.org/509957 | 20:07 |
rm_work | back to asking folks to consider THIS option for amphora az cacheing: | 20:08 |
rm_work | #link https://review.openstack.org/#/c/511045/ | 20:08 |
johnsom | longstaff Are you planning patchset version 3 soon? | 20:08 |
xgerman_ | I added a topic to the the agenda | 20:08 |
xgerman_ | for that | 20:08 |
rm_work | oh k | 20:08 |
longstaff | Yes. I plan to commit an update to the provider driver spec tomorrow. | 20:09 |
johnsom | Excellent, thank you! | 20:09 |
johnsom | I have been focused on zuul v3 stuffs and some bug fixes to issues that came up over the last week. | 20:10 |
johnsom | I still have more patches for bug fixes coming. | 20:10 |
johnsom | And much more zuulv3 patches... sigh | 20:10 |
jniesz | #link https://storyboard.openstack.org/#!/story/2001258 | 20:11 |
johnsom | Any other progress updates to share or should we jump into the main event: AZs | 20:11 |
xgerman_ | johnsom and I put up some patches to improve LBaaS v2 <-> Octavia | 20:11 |
jniesz | johnsom: saw your notes about HM failover after the listener fails. Is there a way to not trigger failover unless the original listener was in a known good state? | 20:12 |
jniesz | since this is new provision | 20:12 |
johnsom | jniesz Yeah, one of those has a patch up for review. The secondary outcome is around the network driver being dumb. I haven't finished that patch yet | 20:12 |
*** sshank has quit IRC | 20:12 | |
*** sshank has joined #openstack-lbaas | 20:12 | |
xgerman_ | I have taken my eyes off the ball in OpenStackAnsible and so there is some cruft we are tackling: https://review.openstack.org/#/c/514767/ — | 20:13 |
*** salmankhan has quit IRC | 20:13 | |
xgerman_ | good news I will remain core for the Q cycle over there… | 20:14 |
xgerman_ | in Octavia OSA | 20:14 |
johnsom | jniesz Well, the failover we saw was a valid failover. The amp should have a working listener on it, but it didn't, so IMO Octavia did "the right thing" and ended in the right state, with the load balancer in error as well because we could not resolve the problem with the amp. (note for those who haven't read my novel: a bad local jinja template change caused listeners to fail to deploy) | 20:14 |
johnsom | Yeah, Octavia OSA is getting some more attention | 20:15 |
jniesz | so wouldn't we want to not fail over unless the original provisioned ok? | 20:15 |
rm_work | that seems right -- if the listeners fail to deploy, even on an initial create, why wouldn't it be an error? | 20:15 |
rm_work | oh, failover -- ehh... maybe? I mean, it could have been a one-time issue | 20:15 |
rm_work | go to error -- definitely | 20:16 |
rm_work | but was it failover-looping? | 20:16 |
johnsom | jniesz No, it could have failed for reasons related to that host, like the base host ran out of disk | 20:16 |
rm_work | we might want to have something that tracks failover-count (I wanted this anyway) and at some point if it fails too many times quickly detect something is wrong | 20:16 |
johnsom | No, it saw the listener was failed, attempted a failover, which also failed (same template), so it gave up and marked the LB in error too | 20:16 |
rm_work | ah | 20:17 |
rm_work | yeah that seems right | 20:17 |
jniesz | is failover the same as re-creating the listener? | 20:17 |
jniesz | is that the same code path | 20:17 |
johnsom | LB in ERROR will stop the failover attempts | 20:17 |
johnsom | No, failover is rebuild the whole amphora, which, because there was a listener deployed did attempt to deploy a listener again. | 20:18 |
johnsom | Fundamentally the "task" that deploys a listener is shared between create and failover | 20:18 |
jniesz | and we trigger the failover of the amp because at that point it is in an unknown state I assume | 20:20 |
jniesz | because of error | 20:20 |
johnsom | It triggered failover because the timeout expired for the health checking on the amp. The amp was showing no listeners up but the controller knew there should be a listener deployed. It gives it some time and then starts the failover process. | 20:21 |
jniesz | because the listener create responded back with invalid request | 20:22 |
johnsom | No, this health checking was independent of that error coming back and the listener going into "ERROR". | 20:23 |
johnsom | Two paths, both figuring out there was a problem, with one escalating to a full amp failover | 20:24 |
jniesz | so if listener create goes from PENDING -> ERROR HM will still trigger? | 20:24 |
jniesz | or if it goes from PENDING -> RUNNING -> failure | 20:24 |
johnsom | If the LB goes ACTIVE and then some part of the load balancing engine is not working it will trigger | 20:26 |
jniesz | ok, because from logs octavia.controller.worker.tasks.lifecycle_tasks.ListenersToErrorOnRevertTask went to RUNNING from PENDING | 20:27 |
jniesz | and then it failed on octavia.controller.worker.tasks.amphora_driver_tasks.ListenersUpdate | 20:28 |
rm_work | that task always runs I think? | 20:28 |
jniesz | which is when hit the template issue | 20:28 |
rm_work | it's so when the chain reverts, it hits the revert method in there | 20:28 |
johnsom | Yeah, in o-cw logs you will see ListenersUpdate went to running, then failed on the jinja so went to REVERTED, then reverted ListenersToErrorOnRevertTask which put the listener into ERROR. | 20:29 |
rm_work | yep | 20:29 |
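The behavior being described is TaskFlow's execute/revert contract: when a later task in a flow raises, the engine walks back through the tasks that already ran and calls their revert() methods. A self-contained toy version; the class names mimic the real tasks but the bodies are illustrative:

    from taskflow import engines
    from taskflow.patterns import linear_flow
    from taskflow import task

    class ListenersToErrorOnRevert(task.Task):
        def execute(self, listeners):
            pass  # nothing to do on the happy path

        def revert(self, listeners, *args, **kwargs):
            # Runs only when a later task fails and the flow unwinds.
            for listener in listeners:
                listener['provisioning_status'] = 'ERROR'

    class ListenersUpdate(task.Task):
        def execute(self, listeners):
            # Stand-in for the failed jinja template rendering.
            raise RuntimeError('template rendering failed')

    flow = linear_flow.Flow('demo').add(ListenersToErrorOnRevert(),
                                        ListenersUpdate())
    listeners = [{'id': 'l1', 'provisioning_status': 'PENDING_CREATE'}]
    try:
        engines.run(flow, store={'listeners': listeners})
    except RuntimeError:
        pass
    print(listeners)  # provisioning_status is now ERROR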
johnsom | independently, the o-hm (health manager) was watching the amp and noticed there should be a working listener but there was not. | 20:30 |
johnsom | that is when o-hm would trigger a failover in an attempt to recover the amp | 20:30 |
jniesz | ok, but should ListenersUpdate have ever went to running? | 20:31 |
johnsom | Yes | 20:31 |
johnsom | ListenersUpdate task should have gone: PENDING, RUNNING, REVERTING, REVERTED | 20:31 |
johnsom | I think there is a scheduling in the beginning somewhere too | 20:32 |
johnsom | ListenersUpdate definitely started, but failed due to the bad jinja | 20:32 |
jniesz | so the jinja check is not done until the ListenersUpdate task runs | 20:33 |
johnsom | Ok, we are at the half way point in the meeting time, I want to give the next topic some time. We can continue to discuss the bug after the meeting if you would like. | 20:33 |
jniesz | ok that is fine | 20:33 |
johnsom | jinja check is part of the ListenerUpdate task | 20:33 |
rm_work | yeah basically, i think it went exactly as it should | 20:33 |
rm_work | from what i've heard | 20:33 |
johnsom | Yeah, me too aside from the bug and patch I have already posted. | 20:34 |
johnsom | #topic AZ in Octavia DB (xgerman_) | 20:34 |
*** openstack changes topic to "AZ in Octavia DB (xgerman_) (Meeting topic: Octavia)" | 20:34 | |
rm_work | So, I wonder if there are highlights from our PM discussion on this that would be good to summarize | 20:35 |
johnsom | Sure, | 20:35 |
rm_work | Basically, there are cases where we need to be able to quickly filter by AZ ... and doing a full query to nova is not feasible (at scale) | 20:35 |
johnsom | My summary is rm_work wants to cache the AZ info from an amphora being built by nova | 20:35 |
xgerman_ | and my argument is if somebody wants to query by data center, rack — I don’t want to add those columns | 20:36 |
xgerman_ | so this feels like meta data and needs a more generalized solution | 20:36 |
rm_work | so we can compromise by storing it clearly labeled as a "cache" that can be used by processes that need quick-mostly-accurate, and we can query nova in addition for cases that need "slow-but-exact" | 20:36 |
rm_work | well, that's not something that's in nova is it? | 20:36 |
rm_work | rack? | 20:36 |
rm_work | and DC would be... well outside the scope of Octavia, since we run in one DC | 20:37 |
rm_work | as does ... the cloud | 20:37 |
rm_work | so it's kinda obvious which DC you're in :P | 20:37 |
johnsom | The use case is getting a list of amps that live in an AZ and to be able to pull an AZ specific amp from the spares pool. | 20:37 |
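Mechanically, the AZ is already on the server record nova returns, so the compute driver could read it once when the amphora is built and store it. A sketch with novaclient; the helper name and the returned keys are illustrative:

    from novaclient import client as nova_client

    def collect_cache_fields(nova, server_id):
        # nova is a novaclient v2 Client, e.g.
        # nova_client.Client('2', session=keystone_session).
        server = nova.servers.get(server_id)
        return {
            # Standard nova API extension attribute for the AZ.
            'cached_az': getattr(server,
                                 'OS-EXT-AZ:availability_zone', None),
            'image_id': server.image.get('id') if server.image else None,
            'compute_flavor': server.flavor.get('id'),
        }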
rm_work | i mean, the scope limit here is "data nova provides us" | 20:37 |
xgerman_ | so we are doing scheduling by AZ | 20:37 |
rm_work | I'd like to enable operators to do that if they want to | 20:38 |
johnsom | Well, other deployers have different ideas of what an AZ is.... For many, an AZ is a different DC | 20:38 |
rm_work | right which does make sense | 20:38 |
rm_work | so, that's still fine IMO | 20:38 |
xgerman_ | yeah, so if it is for scheduling it should be some scheduling hint | 20:38 |
rm_work | yeah, we can do scheduling hints in our driver | 20:38 |
jniesz | yea, I would like the scheduling control as well | 20:39 |
rm_work | but they rely on knowing what AZ stuff is in already | 20:39 |
xgerman_ | my worry is to keep it generalized enough that we can add other hints in the future | 20:39 |
rm_work | and querying from nova for that isn't feasible at scale and in some cases | 20:39 |
*** slaweq has quit IRC | 20:39 | |
rm_work | i'm just not sure what other hints there ARE in nova | 20:39 |
rm_work | maybe HV | 20:39 |
rm_work | err, "host" | 20:39 |
xgerman_ | I don’t want to limit it to nova hints | 20:40 |
rm_work | which ... honestly i'm OK with cached_host as well, if anyone ever submits it | 20:40 |
johnsom | Really the issue is OpenStack has a poor definition of AZ and nova doesn't give us the tools we want without modifications or sorting through every amphora the service account has. | 20:40 |
rm_work | well, that's what we get back on the create | 20:40 |
*** slaweq has joined #openstack-lbaas | 20:40 | |
rm_work | we're just talking about adding stuff we already get from a compute service | 20:40 |
rm_work | and it's hard to talk about theoretical compute services | 20:40 |
nmagnezi | johnsom, it does not allow getting a filtered list by AZ ? | 20:40 |
nmagnezi | i mean, just the instances in a specific AZ? | 20:41 |
rm_work | but in general, we'd assume some concept that fits generally into the category of "az" | 20:41 |
xgerman_ | I am saying we should take a step back and look at scheduling — is adding an az column the best/cleanest way to achieve scheduling hints | 20:41 |
johnsom | nmagnezi Yes, but we would have to take that list, deal with paging it, and then match to our spares list. For example. | 20:41 |
rm_work | nmagnezi: basically, if we want to see what AZs we have in the spares pool, the options are: call nova X times, where X is the number of amps in the spares pool; or: call nova-list and match stuff up, which will cause significant paging issues at scale | 20:42 |
jniesz | or you can pull out of nova db : ) | 20:42 |
johnsom | Don't get me wrong, I'm not a huge fan of this, but thinking through it with rm_work led me to feel this might be the simplest solution | 20:42 |
rm_work | jniesz: lol no we cannot :P | 20:42 |
nmagnezi | yeah sounds problematic.. | 20:42 |
rm_work | xgerman_: well, our driver actually *does* do scheduling on AZ already | 20:43 |
* johnsom slaps jniesz's had for even considering getting into the nova DB.... | 20:43 | |
rm_work | it's just not reliant on existing AZs | 20:43 |
xgerman_ | so if I run different nova flavors and want to pull one out based on flavor and AZ — how would I do that? | 20:43 |
johnsom | had->hand | 20:43 |
xgerman_ | would I add another column? | 20:43 |
rm_work | xgerman_: do we not track the flavor along with the amp? | 20:43 |
rm_work | I guess not? actually we probably should | 20:43 |
rm_work | I even kinda want to track the image_ref we used too... | 20:44 |
*** slaweq has quit IRC | 20:44 | |
rm_work | we can add image_ref and compute_flavor IMO | 20:44 |
rm_work | alongside cached_az | 20:44 |
rm_work | i would find that data useful | 20:44 |
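If those columns do land in the amphora table, the change would be a small alembic migration plus matching model fields. A sketch of what that might look like; the revision ids are placeholders and the column names simply follow the ones floated above, this is not the proposed patch:

    from alembic import op
    import sqlalchemy as sa

    revision = 'abcdef123456'       # placeholder
    down_revision = '123456abcdef'  # placeholder

    def upgrade():
        # Best-effort values copied from the compute response at boot;
        # nullable so existing amphora rows stay valid.
        op.add_column('amphora', sa.Column('cached_az', sa.String(255),
                                           nullable=True))
        op.add_column('amphora', sa.Column('image_id', sa.String(36),
                                           nullable=True))
        op.add_column('amphora', sa.Column('compute_flavor', sa.String(255),
                                           nullable=True))

    def downgrade():
        op.drop_column('amphora', 'compute_flavor')
        op.drop_column('amphora', 'image_id')
        op.drop_column('amphora', 'cached_az')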
*** PagliaccisCloud has quit IRC | 20:45 | |
johnsom | I see xgerman_'s point, this is getting a bit tightly coupled to the specific nova compute driver | 20:45 |
jniesz | should we store this in flavor metadata? | 20:45 |
rm_work | (I was also looking at a call to allow failover of amps by image_id so we could force-failover stuff in case of security concerns on a specific image) | 20:45 |
jniesz | that would only be initial though and wouldn't account for changes | 20:45 |
nmagnezi | johnsom, i was just thinking the same. we need to remember we do want to allow future support for other compute drivers | 20:45 |
rm_work | jniesz: i was talking about in the amphora table | 20:45 |
xgerman_ | yep, that’s what I was getting at + I don’t know if I want to write python code for different scheduling hints | 20:45 |
rm_work | i think the DB columns should absolutely be agnostic | 20:46 |
rm_work | but we can use them in the compute drivers | 20:46 |
rm_work | I think the concept of "compute_flavor" and "compute_image" and "compute_az" are pretty agnostic though -- won't most things need those? | 20:46 |
rm_work | even amazon and GCE have AZs AFAIU | 20:47 |
rm_work | and flavors and images... | 20:47 |
rm_work | kubernetes does too | 20:47 |
jniesz | what about vmware | 20:47 |
rm_work | so I would be in favor of adding *all three* of those columns to the amphora table | 20:47 |
nmagnezi | aside from flavors (which I just don't know if they exist as a concept in other alternatives) I agree | 20:47 |
nmagnezi | i'm not saying i disagree about flavors, i'm just not really sure | 20:48 |
nmagnezi | :) | 20:48 |
rm_work | i don't know much about vmware -- i would assume they'd NEED something like that, no? | 20:48 |
rm_work | we're all about being generic enough here to support almost any use-case -- and i'm totally on board with that, and in fact that's the reason for this | 20:49 |
xgerman_ | we can either use some metadata concept where we go completely configurable or pick sinner and throw them in the DB | 20:49 |
xgerman_ | winners | 20:49 |
nmagnezi | rm_work, google says they do have it. | 20:49 |
xgerman_ | not sinners | 20:49 |
rm_work | it should be possible to write a custom compute driver that can take advantage of these fields | 20:49 |
rm_work | i think it's pretty clear that those fields are something shared across most compute engines | 20:50 |
johnsom | I mean at a base level, I wish nova server groups (anti-affinity) just "did the right thing" | 20:50 |
xgerman_ | +1 | 20:50 |
rm_work | yeah, that'd be grand, lol | 20:50 |
jniesz | vmware I thought only had flavors with OpenStack, not native | 20:50 |
jniesz | you clone from template or create vm | 20:50 |
*** AlexeyAbashkin has joined #openstack-lbaas | 20:51 | |
rm_work | jniesz: in that case, flavor would be ~= "template"? | 20:51 |
*** PagliaccisCloud has joined #openstack-lbaas | 20:51 | |
jniesz | well that included image | 20:51 |
jniesz | and I think you still specify vcpu, mem | 20:51 |
johnsom | So, some thoughts: | 20:51 |
johnsom | Cache this now | 20:51 |
johnsom | Work on nova to make it do what we need | 20:51 |
johnsom | Setup each compute driver to have it's own scheduling/metadata table | 20:51 |
johnsom | .... | 20:51 |
jniesz | has been awhile for me with VMware | 20:51 |
jniesz | different drivers might care about different metadata, so I like that idea | 20:52 |
johnsom | jniesz the vmware integration has a "flavor" concept | 20:52 |
xgerman_ | johnsom +1 | 20:52 |
xgerman_ | I think keeping it flexible and not putting it straight into the amp table gives us space for future compute drivers/schedulers | 20:53 |
johnsom | Those were options/ideas, not a ordered list BTW | 20:53 |
rm_work | i think it *is* flexible enough in the amphora table to allow for future compute drivers | 20:53 |
johnsom | I just worry if the schema varies between compute drivers, how is the API going to deal with it... | 20:54 |
nmagnezi | i think rm_work has a point here. and for example even if a compute driver does not have the concept of AZs we can just see it as a single AZ for all instances | 20:54 |
*** krypto has joined #openstack-lbaas | 20:54 | |
*** krypto has quit IRC | 20:54 | |
*** krypto has joined #openstack-lbaas | 20:54 | |
johnsom | Yes | 20:55 |
rm_work | yep | 20:55 |
rm_work | and yes, exactly, worried about schema differences | 20:55 |
*** AlexeyAbashkin has quit IRC | 20:55 | |
johnsom | We have five minutes left in the meeting BTW | 20:55 |
xgerman_ | I am not too worried about those — they are mostly for scheduling and the API doesn't know about AZs right now anyway | 20:55 |
johnsom | We have the amphora API now, which I expect rm_work wants the AZ exposed in | 20:56 |
rm_work | it's been three years and what we have for compute is "nova", haven't even managed to get the containers stuff working (which also would have these fields), and every compute solution we can think of to mention here would also fit into these ... | 20:56 |
rm_work | so blocking something useful like this because of "possible future whatever" seems lame to me | 20:57 |
rm_work | just saying | 20:57 |
xgerman_ | so what’s bad about another table? | 20:57 |
johnsom | Well, let's all comment on the patch: | 20:57 |
johnsom | #link https://review.openstack.org/#/c/510225/ | 20:57 |
rm_work | just, why | 20:57 |
rm_work | we could do another table -- but we'd want a concrete schema there too | 20:57 |
rm_work | and the models would just auto-join it anyway | 20:57 |
johnsom | I will mention, AZ is an extension to nova from what I remember and is optional | 20:57 |
xgerman_ | yes, but it would be for nova and if we do vmware they would do their table | 20:57 |
rm_work | the models need to do the join | 20:58 |
rm_work | we can't have a variable schema, can we? | 20:58 |
johnsom | +1 | 20:58 |
rm_work | +1 to which, lol | 20:58 |
johnsom | I really don't want a variable schema. It has ended poorly for other projects | 20:58 |
jniesz | isn't it hard to have a single schema for multiple drivers that would have completely different concepts? | 20:59 |
johnsom | Final minute folks | 20:59 |
rm_work | jniesz: yes, but i don't think they HAVE different concepts | 20:59 |
rm_work | please name a compute system that we couldn't somehow fill az/flavor/image_ref in a meaningful way | 21:00 |
rm_work | i can't think of one | 21:00 |
johnsom | Review, comment early, comment often... grin | 21:00 |
johnsom | #endmeeting | 21:00 |
*** openstack changes topic to "Welcome to LBaaS / Octavia - Queens development is now open." | 21:00 | |
openstack | Meeting ended Wed Oct 25 21:00:22 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 21:00 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/octavia/2017/octavia.2017-10-25-20.00.html | 21:00 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/octavia/2017/octavia.2017-10-25-20.00.txt | 21:00 |
openstack | Log: http://eavesdrop.openstack.org/meetings/octavia/2017/octavia.2017-10-25-20.00.log.html | 21:00 |
rm_work | AFAICT those are pretty basic building blocks of any compute service | 21:00 |
*** longstaff has quit IRC | 21:00 | |
jniesz | yes those three are basic I agree | 21:00 |
xgerman_ | so we put some into the amp table and some others somewhere else | 21:00 |
xgerman_ | ? | 21:01 |
jniesz | but if it gets expanded beyond those, maybe labels | 21:01 |
rm_work | we put common denominators in the amp table, yes | 21:01 |
rm_work | others ... i have no idea what others we'd need | 21:01 |
xgerman_ | we already established that the ones you want to put in there are not common | 21:01 |
rm_work | err | 21:01 |
rm_work | we established exactly that they *are* common | 21:02 |
rm_work | were we in different meetings? | 21:02 |
xgerman_ | anyhow I don't know why we can't use the ORM trick and just have a NovaAMP inherit from amp with some additional fields | 21:02 |
xgerman_ | this has been solved | 21:02 |
xgerman_ | there were people who didn’t need AZs — VMWare had templates instead of flavor, etc. | 21:02 |
rm_work | the name isn't that important | 21:03 |
rm_work | and the mappings are fairly obvious | 21:03 |
xgerman_ | so why don't we want to do two tables? performance? | 21:03 |
xgerman_ | or one table and throw in a classifier? | 21:03 |
rm_work | variable schema is not fun to work with | 21:03 |
xgerman_ | this is not a variable schema | 21:03 |
nmagnezi | xgerman_, meaning a "metadata" table linked to the amps? | 21:03 |
xgerman_ | no | 21:03 |
xgerman_ | something like this: https://docs.jboss.org/hibernate/orm/3.3/reference/en-US/html/inheritance.html | 21:04 |
rm_work | you want to have different tables for different drivers, which would have different schemas | 21:04 |
rm_work | yeah i see how that'd work | 21:05 |
xgerman_ | yep | 21:05 |
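The ORM trick xgerman_ is pointing at maps to SQLAlchemy's joined-table inheritance: a shared amphora table plus a per-driver child table that only that driver's rows join to. A minimal sketch; the table and class names are made up for illustration:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Amphora(Base):
        __tablename__ = 'amphora'
        id = sa.Column(sa.String(36), primary_key=True)
        compute_driver = sa.Column(sa.String(36))
        __mapper_args__ = {'polymorphic_identity': 'amphora',
                           'polymorphic_on': compute_driver}

    class NovaAmphora(Amphora):
        __tablename__ = 'nova_amphora'
        id = sa.Column(sa.String(36), sa.ForeignKey('amphora.id'),
                       primary_key=True)
        cached_az = sa.Column(sa.String(255), nullable=True)
        __mapper_args__ = {'polymorphic_identity': 'nova'}

Queries against Amphora keep working for every driver, while NovaAmphora rows transparently join in the nova-specific columns.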
rm_work | but i don't think we need to do that | 21:05 |
rm_work | at least for the core stuff | 21:05 |
rm_work | xgerman_: so, if we ever need anything that we wouldn't define as "core to any compute system", then lets do that | 21:05 |
rm_work | but i'd define the things i listed as "core" | 21:05 |
rm_work | very common denominators | 21:05 |
rm_work | so let's put THOSE in the base table | 21:06 |
johnsom | I feel like we shouldn't get too far into the scheduling business.... | 21:06 |
xgerman_ | that, too | 21:06 |
rm_work | technically no code i'm proposing upstream has any impact on scheduling | 21:06 |
johnsom | The cached_AZ column is a compromise there where we aren't doing the scheduling, nova is still the point of truth, but we save a bit more metadata off the compute system response. | 21:07 |
rm_work | yes | 21:07 |
rm_work | storing flavor and image_ref could also be useful though | 21:07 |
rm_work | image_ref in particular | 21:07 |
rm_work | or whatever more generic name anyone might want | 21:07 |
johnsom | Convince me that we can't get enough of that from the nova API | 21:08 |
rm_work | A) is image_ref possible to change? answer, no | 21:08 |
xgerman_ | well, if we do the extra table route rm_work can do a go_daddy_compute driver and put in all the values he sees fit ;-) | 21:09 |
johnsom | True, but your use case is to find amps with alternate image IDs. To me the nova API can do that for us, no need to cache it | 21:09 |
rm_work | so just add that to the compute_driver API | 21:09 |
xgerman_ | when it comes to scheduling it doesn’t really matter what we call the columns — they can be colors or whatever | 21:09 |
johnsom | xgerman_ and have a go_daddy API path instead of amphora? Technically we don't limit that today. | 21:09 |
xgerman_ | ;-) | 21:10 |
johnsom | rm_work yes, if we need it | 21:10 |
rm_work | i guess so, it does technically work, it's just slow | 21:10 |
xgerman_ | well, there are the info calls — where we don’t care about speed | 21:10 |
johnsom | We should, as a team, decide on what level of devops we want to enable via our API vs. what the operator should use devops tools (ansible, puppet, etc.) to manage | 21:11 |
xgerman_ | +1 | 21:11 |
xgerman_ | I understand if we need to do “scheduling” we need values in our DB | 21:11 |
xgerman_ | and pulling the correct AMP from the spares pool would be useful | 21:12 |
rm_work | Octavia should be able to internally do certain maintenance tasks, IMO -- such as recycling amps that have specific image_ids (or that DON'T have a specific image_id), for one | 21:12 |
rm_work | that's why i bring up image_id, since it's going to be relevant to me soon | 21:12 |
xgerman_ | well, in most orgs they have a configuration management system which has records of that | 21:13 |
johnsom | I kind of feel that is a devops tool thing. It can go pull the list from nova and call our failover api... | 21:13 |
xgerman_ | +1 | 21:13 |
johnsom | Yes, it would be nice. Heck, just have housekeeping do it, but... Is now the time for that level of gold plating? | 21:13 |
rm_work | well, i'm going to be doing it | 21:13 |
rm_work | so would you rather i do it internally, or upstream? | 21:13 |
johnsom | Hahaha | 21:14 |
rm_work | I would 100% rather do it upstream | 21:14 |
rm_work | but these are the kinds of things that are on my jira board right now | 21:14 |
xgerman_ | I would rather not have that in Octavia | 21:14 |
nmagnezi | rm_work, making an offer we can't refuse.. :P | 21:14 |
* johnsom sees visions of the Octavia ansible driver interface | 21:14 | |
xgerman_ | the whole error handling is a nightmare | 21:14 |
rm_work | why do we make a bunch of operators write a bunch of tooling for stuff we can just provide easily in the API? | 21:14 |
xgerman_ | and everybody handles errors differently (retries, error, etc.) | 21:14 |
xgerman_ | we can give them “scripts” they can customize to their needs | 21:15 |
xgerman_ | not so easy with the API | 21:15 |
*** rcernin has joined #openstack-lbaas | 21:16 | |
rm_work | so for years we've promised an "admin API" with rich features for operators... | 21:16 |
rm_work | i guess it turns out we were lying? | 21:16 |
rm_work | our "admin api" is one endpoint to do failover, and "go figure it out" | 21:16 |
xgerman_ | we have more plans (related to cert rotations) | 21:17 |
xgerman_ | now, my problem is that deciding which amps to fail over, and what to do if there is an error, is highly operator-specific | 21:17 |
rm_work | so my goal is to provide some nice tooling that will work with 90% of deployments out-of-the-box | 21:18 |
jniesz | for us we would have to pull additional metadata out of our application lifecycle manager to make decisions by organization, or whether it is prod, dev, qa | 21:18 |
xgerman_ | yep, exactly | 21:18 |
rm_work | but if you'd rather force everyone to write custom tooling instead to do the same thing, i guess that's fine | 21:18 |
jniesz | but there is a LCD that applies to a lot of people | 21:18 |
rm_work | ^^ this | 21:19 |
xgerman_ | failing over a list of LBs is something like that | 21:19 |
rm_work | i'm talking about pretty common operations, the kind of thing we'd want to call out as "you'll need to do this periodically" in our user guides | 21:20 |
rm_work | like, recycling amps based on old/bad images | 21:20 |
jniesz | a question I have is whether the admin api should stay basic with those tools built on top of it, even if those tools are provided by us, the community | 21:21 |
jniesz | or does it belong in admin api | 21:22 |
xgerman_ | my view was the API is basic and we provide “scripts” for it — so people can customize… | 21:22 |
rm_work | adding additional features that work for the 90% doesn't break the ability for the 10% that want/need to write their own scripts | 21:23 |
xgerman_ | I personally like scripts over API calls since then I can follow the output, etc. — and otherwise we need a whole jobs api… | 21:25 |
jniesz | Yes, and it would be easy to integrate into custom systems | 21:25 |
jniesz | having common scripts that are shared by everyone would be good, so everybody doesn't have to write their own | 21:27 |
xgerman_ | +1 | 21:27 |
jniesz | for the LCD tasks | 21:27 |
rm_work | k | 21:27 |
rm_work | so, image_ref and flavor are something we'll just force the operator to look up on their own from nova, and match up with amphorae with some scripting, and then generate a list that has all the info? | 21:28 |
rm_work | rather than just "having it" because it can't change post-create? | 21:29 |
rm_work | and we were already passed the info but threw it away | 21:29 |
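For contrast, the script-based route being argued for would look roughly like the sketch below: ask nova which amphora VMs are not on the desired image, and hand that list to the failover tooling. The "amphora-" name prefix, the search filter, and the placeholder image id are assumptions for illustration, not a documented interface.

```python
# Hypothetical operator script for the "pull it from nova" approach.
# Assumes amphora VMs follow the conventional "amphora-<uuid>" naming and
# that the caller already has an authenticated keystone session.
from novaclient import client as nova_client

GOOD_IMAGE_ID = '<current amphora image uuid>'  # placeholder


def find_stale_amphora_servers(keystone_session):
    nova = nova_client.Client('2', session=keystone_session)
    stale = []
    # Nova's name filter is a regex, so 'amphora-' matches the prefix.
    for server in nova.servers.list(search_opts={'name': 'amphora-'}):
        image_id = (server.image or {}).get('id')
        if image_id and image_id != GOOD_IMAGE_ID:
            stale.append((server.id, image_id))
    # The operator would then map these back to load balancers and call
    # the failover API (or a shared script) for each one.
    return stale
```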
jniesz | what about hot add : ) | 21:30 |
jniesz | AZ possibly could change through live migrate | 21:30 |
krypto | Thanks johnsom after correcting the typo services are stable ,creating lb returns "An error happened in the driver | 21:31 |
krypto | Neutron server returns request_ids: ['req-d097dd4c-5498-4e25-a4db-9273a9ef1ac5']" i am using Mitaka neutron | 21:31 |
rm_work | jniesz: i'm talking about image_ref | 21:31 |
jniesz | yes, that one I wouldn't expect to change | 21:31 |
rm_work | we already acknowledged AZ could change, that's why I was listing it as "cached_az" | 21:31 |
rm_work | and I don't think `flavor` can change either | 21:32 |
johnsom | krypto Probably a configuration issue on the neutron-lbaas side. base_url maybe? Check the q-svc logs for the details | 21:32 |
jniesz | the question is the frequency. Which maintenance / admin tasks do we think will be running often enough to need a cache layer | 21:33 |
rm_work | so ... if we acknowledge that "image_ref" won't change post-provision, and we can have it stored internally, and the failover code that the failover-api runs is internal (obviously), why can't we shortcut to "provide an API that will failover everything that is/isn't image_id <X>"? | 21:34 |
xgerman_ | “scheduling hints” should be in our DB; other stuff is case by case | 21:34 |
jniesz | for non-maintenance / admin tasks like failover I can see the benefit | 21:34 |
rm_work | sorry i keep trying to talk about image_ref right now, can get back to cached_az if you want tho | 21:34 |
rm_work | because that's also important | 21:35 |
xgerman_ | rm_work, so somebody kicks off that task via our API | 21:35 |
xgerman_ | how can he follow progress? how can he stop it? What do we do if the server the task is running on crashes? | 21:35 |
jniesz | some functions the controller is continuously doing are the ones I think make more sense | 21:35 |
rm_work | xgerman_: is failover passed to the worker or done directly in the API? I didn't actually get a chance to review that API endpoint | 21:35 |
rm_work | I assumed you passed it to the worker | 21:36 |
xgerman_ | yep, I did | 21:36 |
xgerman_ | but it only fails over one LB | 21:36 |
rm_work | then ... it's the same as that | 21:36 |
rm_work | we just create a bunch of failover messages onto the queue | 21:36 |
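What that proposal amounts to, very roughly, is a loop of RPC casts to the existing worker topic, one per load balancer, with the API returning immediately. The topic, version, and method name in this sketch are placeholders, not Octavia's actual RPC interface.

```python
# Rough sketch only: bulk failover as fire-and-forget casts onto the queue.
# Target topic, version, and the 'failover_load_balancer' method name are
# assumptions, not a documented Octavia interface.
import oslo_messaging as messaging
from oslo_config import cfg


def cast_bulk_failover(context, load_balancer_ids):
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='octavia-worker', version='1.0')
    client = messaging.RPCClient(transport, target)
    for lb_id in load_balancer_ids:
        # cast() does not wait for a result; each failover is picked up and
        # run independently by a controller worker.
        client.cast(context, 'failover_load_balancer', load_balancer_id=lb_id)
```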
xgerman_ | no, it’s not — | 21:36 |
*** Alex_Staf has joined #openstack-lbaas | 21:36 | |
xgerman_ | one failover nobody needs to stop — 1,000 might need to be stopped/paused/whatever | 21:37 |
rm_work | it should take almost no time to do that, and the actual result isn't something that comes back in the response anyway | 21:37 |
xgerman_ | also we need to report progress | 21:37 |
rm_work | hmm, that's the one i could see | 21:37 |
rm_work | "oops that's a lot of servers, can we cancel plz" | 21:38 |
rm_work | lol | 21:38 |
rm_work | when the API tells them "OK! Failing over 1052 servers." | 21:38 |
xgerman_ | yep + if each failover crashes the LB we need a way to pause/stop | 21:38 |
rm_work | alright, i'll rethink that plan | 21:38 |
xgerman_ | yeah, not saying it can’t be done — only saying script might be easier | 21:39 |
rm_work | then i guess back to cached_az | 21:39 |
rm_work | your main issue with that is that it's either FOR scheduling, in which case we shouldn't be getting into scheduling hints, or it ISN'T for scheduling, in which case it doesn't need to be in the DB? | 21:39 |
Alex_Staf | rm_work, hi, just wanted to update u that dynamic and non dynamic creds test did not run well for me =\ | 21:39 |
rm_work | Alex_Staf: what happened in the dynamic creds test? | 21:40 |
xgerman_ | rm_work my biggest issue is that I don't want to put stuff specific to nova into the DB | 21:40 |
rm_work | AFAIK they were working in kong's testing | 21:40 |
rm_work | xgerman_: i don't think AZ is specific to nova | 21:40 |
rm_work | http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html | 21:40 |
rm_work | https://cloud.google.com/compute/docs/regions-zones/ | 21:40 |
rm_work | https://kubernetes.io/docs/admin/multiple-zones/ | 21:41 |
xgerman_ | and even then I am happy to put specific stuff in some extra nova_amp table | 21:41 |
rm_work | http://pubs.vmware.com/vmware-validated-design-40/index.jsp?topic=%2Fcom.vmware.vvd.sddc-design.doc%2FGUID-139507E8-E494-4AFF-9C9B-3E8538184E85.html | 21:41 |
rm_work | think of another compute system and i'll show you where it uses zones | 21:41 |
*** krypto has quit IRC | 21:42 | |
Alex_Staf | rm_work, I had different types of errors, and tried to run with testr and python -m testtools.run; the behavior is different for those. With testr + dynamic I saw the same "Details: No "load-balancer_member" role found" | 21:43 |
rm_work | would you like "cached_zone" better than "cached_az"? | 21:43 |
rm_work | xgerman_: ^^ | 21:43 |
rm_work | Alex_Staf: err, that's not how you run tempest :P | 21:43 |
rm_work | one sec | 21:43 |
xgerman_ | sure let’s do cached_zone | 21:43 |
rm_work | xgerman_: done, give me 5 minutes | 21:44 |
xgerman_ | rm_work: can all clouds do live migrations? | 21:44 |
Alex_Staf | rm_work, ok, we have been running it from a cloud docker that way for years now =\ | 21:44 |
xgerman_ | in other clouds the Zone might never change | 21:44 |
Alex_Staf | rm_work, https://github.com/itzikb/docker-tempest | 21:44 |
rm_work | hmm | 21:44 |
rm_work | Alex_Staf: ok, then let me rephrase "that's not how we are aware that tempest is supposed to be run" | 21:45 |
rm_work | are you sure tempest is loading the correct config? | 21:45 |
jniesz | to me the logical delineation is: if it is a maintenance task the operator does on occasion, like rotating images, it belongs in a script and not in the db. If the metadata is needed for something like scheduling / failover / etc., it makes sense to cache it in the db | 21:46 |
kong | Alex_Staf: the role should be created automatically by tempest. If you take a look at the commit message of that patch, that's how I ran that test case | 21:46 |
xgerman_ | jniesz +1 | 21:46 |
rm_work | ok | 21:46 |
rm_work | so then... are we ok with cached_zone in the amp table? | 21:46 |
rm_work | because it could feasibly be used by drivers to do hinting | 21:46 |
rm_work | (and by feasibly, I mean, my local driver does this) | 21:46 |
jniesz | yes, I think that is reasonable | 21:47 |
Alex_Staf | rm_work, pretty sure, I am running all neutron scenario, tempest scenario , neutron-lbaas scenario and it is always ok | 21:47 |
xgerman_ | yep, grudgingly agree. Would love to have them in a scheduling hint table or nova_amp, but let's cross that bridge later | 21:47 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Add cached_az to the amphora record https://review.openstack.org/510225 | 21:48 |
xgerman_ | would be silly to start a new table for one column | 21:48 |
rm_work | ^^ just rebasing first | 21:48 |
xgerman_ | johnsom cool? Or should we have some more pure database design… | 21:49 |
johnsom | Sorry folks will have to catch up on the scroll back in a bit. Working with release team on some late landing issues with horizon plugins | 21:50 |
xgerman_ | k | 21:50 |
xgerman_ | I will sleep over it before I +2 :-) | 21:51 |
*** AlexeyAbashkin has joined #openstack-lbaas | 21:51 | |
*** AlexeyAbashkin has quit IRC | 21:55 | |
johnsom | We need this on a t-shirt: "OK! Failing over 1052 servers." | 21:56 |
rm_work | lol | 21:57 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Add cached_zone to the amphora record https://review.openstack.org/510225 | 21:57 |
rm_work | ^^ there's the real updated version | 21:57 |
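For anyone following along, the schema side of that review is essentially a one-column alembic migration like the sketch below. The revision identifiers are placeholders; the linked review is the authoritative change.

```python
# Minimal sketch of the cached_zone migration; revision identifiers are
# placeholders — see https://review.openstack.org/510225 for the real patch.
"""add cached_zone column to amphora"""

from alembic import op
import sqlalchemy as sa

revision = '000000000000'       # placeholder
down_revision = '111111111111'  # placeholder for the current head


def upgrade():
    op.add_column(
        'amphora',
        sa.Column('cached_zone', sa.String(255), nullable=True))
```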
rm_work | Alex_Staf: can you pastebin me your tempest config? | 21:59 |
rm_work | censor passwords/usernames | 21:59 |
Alex_Staf | rm_work, https://pastebin.com/zw1W0asB | 22:35 |
*** salmankhan has joined #openstack-lbaas | 22:50 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 22:52 | |
*** AlexeyAbashkin has quit IRC | 22:56 | |
rm_work | I would love to call out companies that are doing Octavia deployments using recent code -- who all do we know that's doing that and is willing to be listed? | 23:16 |
rm_work | ^^ In the Sydney summit Project Update for Octavia | 23:16 |
rm_work | jniesz: i assume you guys don't want your name anywhere near a slide | 23:16 |
*** fnaval has quit IRC | 23:18 | |
*** salmankhan has quit IRC | 23:25 | |
Alex_Staf | rm_work, was the tempest conf helpful ? | 23:25 |
rm_work | still need to get a chance to compare it with mine | 23:25 |
rm_work | actually can do that now, give me a sec | 23:26 |
rm_work | let's see... \ | 23:27 |
rm_work | on line 9, i don't know what setting that does -- we don't have that in our config | 23:27 |
rm_work | in validation, i think you need 'run_validation = true' | 23:29 |
rm_work | Alex_Staf: are you running with tempest from git? | 23:30 |
Alex_Staf | rm_work, yes | 23:30 |
rm_work | we are using stuff that is newer than the 17.0.0 release | 23:30 |
rm_work | k | 23:30 |
Alex_Staf | let me think | 23:30 |
Alex_Staf | running with validation | 23:31 |
rm_work | brb 15m | 23:31 |
Alex_Staf | rm_work, did not work. I am out, it is 2:30 AM here; I will proceed tomorrow | 23:33 |
Alex_Staf | rm_work, tnx for the help | 23:33 |
*** sshank has quit IRC | 23:53 | |
*** sshank has joined #openstack-lbaas | 23:55 | |
*** sshank has quit IRC | 23:55 |