Wednesday, 2017-10-25

*** slaweq has quit IRC00:00
*** slaweq has joined #openstack-lbaas00:05
*** rtjure has joined #openstack-lbaas00:07
*** sshank has quit IRC00:10
*** leitan has quit IRC00:12
*** rtjure has quit IRC00:12
*** rcernin has joined #openstack-lbaas00:19
*** rtjure has joined #openstack-lbaas00:20
*** rcernin has quit IRC00:23
*** rcernin has joined #openstack-lbaas00:24
*** rtjure has quit IRC00:24
*** rtjure has joined #openstack-lbaas00:32
*** jdavis has joined #openstack-lbaas00:34
*** rtjure has quit IRC00:37
*** slaweq has quit IRC00:37
*** jdavis has quit IRC00:39
*** rtjure has joined #openstack-lbaas00:42
*** rtjure has quit IRC00:47
*** slaweq has joined #openstack-lbaas00:49
*** rtjure has joined #openstack-lbaas00:54
*** jniesz has quit IRC00:57
*** ipsecguy has quit IRC00:58
*** rtjure has quit IRC00:59
*** ipsecguy has joined #openstack-lbaas00:59
*** ltomasbo has quit IRC01:04
*** rtjure has joined #openstack-lbaas01:05
*** ltomasbo has joined #openstack-lbaas01:06
*** rtjure has quit IRC01:10
*** ltomasbo has quit IRC01:12
*** rtjure has joined #openstack-lbaas01:19
*** ltomasbo has joined #openstack-lbaas01:20
*** slaweq has quit IRC01:20
*** rtjure has quit IRC01:23
*** dayou has quit IRC01:26
*** rtjure has joined #openstack-lbaas01:28
*** slaweq has joined #openstack-lbaas01:29
*** leitan has joined #openstack-lbaas01:29
*** rcernin has quit IRC01:32
*** rcernin has joined #openstack-lbaas01:32
*** rcernin has quit IRC01:33
*** rtjure has quit IRC01:33
*** leitan has quit IRC01:33
*** rtjure has joined #openstack-lbaas01:40
*** dayou has joined #openstack-lbaas01:44
*** rtjure has quit IRC01:45
*** sapd has quit IRC01:46
*** sapd has joined #openstack-lbaas01:48
*** rtjure has joined #openstack-lbaas01:53
*** rcernin has joined #openstack-lbaas01:55
*** annp has joined #openstack-lbaas01:56
*** rtjure has quit IRC01:58
*** slaweq has quit IRC02:02
*** slaweq has joined #openstack-lbaas02:04
*** rtjure has joined #openstack-lbaas02:07
*** rtjure has quit IRC02:12
*** ltomasbo has quit IRC02:12
*** yamamoto has joined #openstack-lbaas02:14
<kong> hmm.. the octavia client only supports being initialized with a session?  02:15
*** rtjure has joined #openstack-lbaas02:18
*** rtjure has quit IRC02:23
*** fnaval has joined #openstack-lbaas02:27
*** rtjure has joined #openstack-lbaas02:27
*** rtjure has quit IRC02:32
*** slaweq has quit IRC02:37
*** rtjure has joined #openstack-lbaas02:37
*** rtjure has quit IRC02:42
*** slaweq has joined #openstack-lbaas02:42
*** ramishra has joined #openstack-lbaas02:44
*** rtjure has joined #openstack-lbaas02:48
*** cody-somerville has joined #openstack-lbaas02:48
*** cody-somerville has joined #openstack-lbaas02:48
*** rtjure has quit IRC02:53
*** rtjure has joined #openstack-lbaas02:57
*** fnaval has quit IRC03:01
*** fnaval has joined #openstack-lbaas03:02
*** rtjure has quit IRC03:02
*** rtjure has joined #openstack-lbaas03:10
*** rtjure has quit IRC03:15
*** slaweq has quit IRC03:15
*** slaweq has joined #openstack-lbaas03:21
*** rtjure has joined #openstack-lbaas03:22
*** rtjure has quit IRC03:26
*** rtjure has joined #openstack-lbaas03:32
*** rtjure has quit IRC03:37
*** rtjure has joined #openstack-lbaas03:45
*** rtjure has quit IRC03:50
*** slaweq has quit IRC03:55
*** rtjure has joined #openstack-lbaas03:56
*** rtjure has quit IRC04:01
*** slaweq has joined #openstack-lbaas04:06
*** rtjure has joined #openstack-lbaas04:08
*** rtjure has quit IRC04:12
*** rtjure has joined #openstack-lbaas04:18
*** rcernin has quit IRC04:21
*** rtjure has quit IRC04:22
*** rtjure has joined #openstack-lbaas04:27
*** rtjure has quit IRC04:32
*** slaweq has quit IRC04:39
*** rtjure has joined #openstack-lbaas04:40
*** rcernin has joined #openstack-lbaas04:45
*** rtjure has quit IRC04:45
*** slaweq has joined #openstack-lbaas04:47
*** eN_Guruprasad_Rn has joined #openstack-lbaas04:47
*** krypto has joined #openstack-lbaas04:48
*** rtjure has joined #openstack-lbaas04:49
*** rtjure has quit IRC04:54
*** rtjure has joined #openstack-lbaas04:59
*** rtjure has quit IRC05:04
*** rtjure has joined #openstack-lbaas05:10
*** rtjure has quit IRC05:15
*** slaweq has quit IRC05:19
*** rtjure has joined #openstack-lbaas05:20
*** rtjure has quit IRC05:24
*** slaweq has joined #openstack-lbaas05:26
*** ltomasbo has joined #openstack-lbaas05:26
*** m3m0r3x has joined #openstack-lbaas05:26
*** m3m0r3x has quit IRC05:30
*** rtjure has joined #openstack-lbaas05:30
*** rtjure has quit IRC05:35
*** rtjure has joined #openstack-lbaas05:40
*** kbyrne has quit IRC05:42
*** kbyrne has joined #openstack-lbaas05:45
*** slaweq has quit IRC05:59
*** slaweq has joined #openstack-lbaas06:07
*** Alex_Staf has joined #openstack-lbaas06:34
*** pcaruana has joined #openstack-lbaas06:39
*** csomerville has joined #openstack-lbaas06:40
*** slaweq has quit IRC06:40
*** cody-somerville has quit IRC06:43
*** slaweq has joined #openstack-lbaas06:49
*** tesseract has joined #openstack-lbaas06:54
*** armax has quit IRC07:09
*** spectr has joined #openstack-lbaas07:13
*** slaweq has quit IRC07:21
*** slaweq has joined #openstack-lbaas07:23
*** AlexeyAbashkin has joined #openstack-lbaas07:46
*** slaweq_ has joined #openstack-lbaas07:48
*** slaweq has quit IRC07:56
*** slaweq has joined #openstack-lbaas08:02
*** slaweq has quit IRC08:06
*** bcafarel has quit IRC08:20
*** bcafarel has joined #openstack-lbaas08:21
*** slaweq has joined #openstack-lbaas08:36
*** jdavis has joined #openstack-lbaas08:37
*** jdavis has quit IRC08:42
*** rcernin has quit IRC08:59
*** krypto has quit IRC09:00
*** krypto has joined #openstack-lbaas09:01
*** yamamoto has quit IRC09:05
*** slaweq has quit IRC09:09
*** salmankhan has joined #openstack-lbaas09:11
*** yamamoto has joined #openstack-lbaas09:12
*** krypto has quit IRC09:21
*** slaweq has joined #openstack-lbaas09:21
*** krypto has joined #openstack-lbaas09:27
<isantosp> is there any equivalent way to switch base_url= to base_url= ? what should I write in /something?  09:39
*** yamamoto has quit IRC09:49
*** slaweq has quit IRC09:55
<krypto> hello all, i am able to start most octavia services except the worker, which gives this error: "InvalidTarget: A server's target must have topic and server names specified:<Target server=amphora1>" where amphora1 is my node where octavia-api and the other services are running  09:55
<krypto> i don't see anywhere in the configuration that i have specified my hostname  09:57
*** slaweq has joined #openstack-lbaas10:02
*** eN_Guruprasad_Rn has quit IRC10:08
*** krypto has quit IRC10:22
*** krypto has joined #openstack-lbaas10:23
*** annp has quit IRC10:24
*** slaweq has quit IRC10:36
*** pcaruana has quit IRC10:37
*** slaweq has joined #openstack-lbaas10:39
*** eN_Guruprasad_Rn has joined #openstack-lbaas10:40
*** yamamoto has joined #openstack-lbaas10:49
*** salmankhan has quit IRC10:50
*** salmankhan has joined #openstack-lbaas10:51
*** eN_Guruprasad_Rn has quit IRC10:56
*** eN_Guruprasad_Rn has joined #openstack-lbaas10:57
*** ramishra has quit IRC11:00
*** yamamoto has quit IRC11:01
*** ramishra has joined #openstack-lbaas11:02
*** slaweq has quit IRC11:11
*** yamamoto has joined #openstack-lbaas11:14
*** atoth has joined #openstack-lbaas11:19
*** slaweq has joined #openstack-lbaas11:23
<openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: [WIP] ACTIVE-ACTIVE with exabgp-speaker - Octavia agent
*** pcaruana has joined #openstack-lbaas11:43
*** leitan has joined #openstack-lbaas11:55
*** leitan has quit IRC11:55
*** leitan has joined #openstack-lbaas11:56
*** yamamoto has quit IRC12:02
*** pcaruana has quit IRC12:03
*** pcaruana has joined #openstack-lbaas12:04
*** jdavis has joined #openstack-lbaas12:22
*** slaweq has quit IRC12:28
*** knsahm has joined #openstack-lbaas12:32
<knsahm> does anybody know how i can use the octavia tempest tests with rally?  12:32
*** slaweq has joined #openstack-lbaas12:33
*** jdavis has quit IRC12:43
*** knsahm has quit IRC12:48
*** knsahm has joined #openstack-lbaas12:59
*** eN_Guruprasad_Rn has quit IRC12:59
*** yamamoto has joined #openstack-lbaas13:02
*** eN_Guruprasad_Rn has joined #openstack-lbaas13:05
*** fnaval has quit IRC13:06
*** slaweq has quit IRC13:06
*** yamamoto has quit IRC13:12
*** slaweq has joined #openstack-lbaas13:12
*** eN_Guruprasad_Rn has quit IRC13:12
*** sanfern has joined #openstack-lbaas13:12
*** eN_Guruprasad_Rn has joined #openstack-lbaas13:12
*** eN_Guruprasad_Rn has quit IRC13:16
*** eN_Guruprasad_Rn has joined #openstack-lbaas13:16
*** eN_Guruprasad_Rn has quit IRC13:23
*** fnaval has joined #openstack-lbaas13:24
*** slaweq has quit IRC13:44
*** slaweq has joined #openstack-lbaas13:48
*** armax has joined #openstack-lbaas13:53
<johnsom> isantosp You can modify the base_url to be something different/deeper if that is where your Octavia API v1 endpoint is located, but the default is just hostname:port and the neutron-lbaas octavia driver fills in the rest.  13:57
<johnsom> krypto When settings are not specified in the config file, many have a "default".  In this case the default behavior is to pull in the hostname.  This is the "host" setting in the [DEFAULT] block.  13:59
<johnsom> It is used for the message queuing over rabbitmq  13:59
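As johnsom explains, the worker derives its RPC target's server name from the `host` option, falling back to the machine's hostname when it is unset. A minimal sketch of the relevant snippet (the hostname value here is illustrative, not taken from krypto's config):

```ini
# octavia.conf -- illustrative sketch only
[DEFAULT]
# Optional override; when unset, the machine's hostname is used and
# becomes the server name in the oslo.messaging RPC target.
host = amphora1
```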
*** maestropandy has joined #openstack-lbaas14:00
<johnsom> knsahm I don't know off the top of my head.  We had talked about doing so, but no one has taken the time to set Rally up for octavia as far as I know.  14:00
<openstack> Launchpad bug 1727335 in Neutron LBaaS Dashboard "Devstack - Could not satisfy constraints for 'horizon': installation from path or url cannot be constrained to a version" [Undecided,New]  14:01
<johnsom> maestropandy We don't use launchpad anymore, we use storyboard  14:04
<johnsom> Not sure how you were able to create that bug  14:05
*** slaweq_ has quit IRC14:09
<maestropandy> johnsom: I have filed it in launchpad, will I be able to see the same bug in storyboard?  14:09
<johnsom> maestropandy It was a one-time migration so the bug will need to be opened in Storyboard.  All of the OpenStack projects are getting moved over....  14:10
<maestropandy> oh, I have to open/create a fresh one here, isn't it?  14:11
<johnsom> Sadly yes.  Bugs are turned off for the project in launchpad.  It only allowed it because you migrated the bug from devstack  14:11
<maestropandy> got it, and how about old bugs, will they be archived?  14:12
<johnsom> They all got moved over and opened as stories.  14:12
<johnsom> They still exist in launchpad just so old links from patches still work  14:12
*** dayou has quit IRC14:13
<johnsom> Thank you!  14:14
<johnsom> maestropandy You are trying to use neutron-lbaas-dashboard as a devstack plugin?  14:18
<maestropandy> johnsom: yes  14:20
*** slaweq has quit IRC14:21
<maestropandy> updated my story with the same  14:22
<johnsom> Ok, and you have horizon in your local.rc/local.conf?  14:23
<maestropandy> by default there is no need to mention it in local.conf.. devstack will install it; when i removed the lbaas-dashboard plugin from the local.conf file, it runs ./ properly  14:24
<johnsom> maestropandy If you have this environment up, can you try removing the horizon tar line from the test-requirements?  I think that is the issue, it should no longer be needed.  14:24
<maestropandy> you mean to remove ",aster.tar.gz from that file ?  14:26
*** dayou has joined #openstack-lbaas14:27
<johnsom> Yeah remove the line ""  14:27
<maestropandy> ok, doing that and re-installing devstack  14:27
<johnsom> If you get a good install I will work on a patch  14:28
*** slaweq has joined #openstack-lbaas14:28
<johnsom> Thanks for testing for me.  It would take me longer to set up that environment to do initial testing.  14:28
<maestropandy> yup, devstack ./ takes time :(  14:30
<johnsom> I already have four devstack VMs running, so that is the bigger issue for me...  Grin  Only so much RAM to go around  14:30
* johnsom notes he can't vote as he wrote it  14:41
<xgerman_> yep, want to put it back on the front burner  14:42
*** slaweq_ has joined #openstack-lbaas14:45
*** ramishra has quit IRC14:56
<xgerman_> sanfern has the following issue. Ideas?  14:56
<johnsom> I would need more of the log to understand if it is just an order issue or a missing requirement  14:57
*** AlexeyAbashkin has quit IRC14:58
<xgerman_> yeah, I am distracted by meetings so punting it here ;-)  14:58
<johnsom> Also, I think capitalization matters there  14:58
<johnsom> WebOb>=1.7.1 # MIT from the requirements file  14:59
<johnsom> sanfern Yeah, ok, so it's an import order and spacing issue.  15:00
<johnsom> Import statements are in the wrong order. import logging should be before import flask  15:00
<johnsom> Missing newline before sections or imports.  15:00
<johnsom> Import statements are in the wrong order. import stat should be before import pyroute2  15:00
<johnsom> So just a couple of simple import order/spacing cleanups in  15:01
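johnsom's findings are the standard OpenStack style rules: standard-library imports before third-party ones, alphabetical order within each group, and a blank line between groups. A rough illustration (the `is_sorted` helper is invented for this sketch; the real enforcement comes from flake8/hacking via `tox -e pep8`):

```python
# Illustrative only: the corrected grouping/order for the imports
# johnsom flagged, plus a tiny checker for the "alphabetical within
# a group" rule.
stdlib_imports = [
    "import logging",  # stdlib group comes first...
    "import stat",     # ...sorted alphabetically
]
third_party_imports = [
    "import flask",    # third-party group follows, also sorted
    "import pyroute2",
]


def is_sorted(lines):
    """Return True if the imported module names are in alphabetical order."""
    names = [line.split()[1] for line in lines]
    return names == sorted(names)


print(is_sorted(stdlib_imports), is_sorted(third_party_imports))
# prints: True True
```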
<sanfern> my local run did not warn me :(  15:01
*** slaweq has quit IRC15:01
<sanfern> thanks johnsom  15:01
*** slaweq has joined #openstack-lbaas15:01
*** fnaval has quit IRC15:01
<johnsom> Hmm, tox and/or tox -e pep8 should warn on those.  I would try a tox -r to see if new versions of packages are needed for tox  15:02
*** fnaval has joined #openstack-lbaas15:02
*** AlexeyAbashkin has joined #openstack-lbaas15:03
*** sri_ has quit IRC15:07
<isantosp> Is there any way to delete loadbalancers stuck in pending_delete?  15:11
<xgerman_> they should eventually get into error  15:19
*** rohara has joined #openstack-lbaas15:24
<openstackgerrit> German Eichberger proposed openstack/octavia master: Make the event streamer transport URL configurable
<xgerman_> ^^ good to go  15:24
*** knsahm has quit IRC15:25
*** slaweq has quit IRC15:29
*** slaweq has joined #openstack-lbaas15:39
*** slaweq has quit IRC15:44
*** krypto has quit IRC15:49
*** pcaruana has quit IRC15:49
*** krypto has joined #openstack-lbaas15:49
*** krypto has quit IRC15:49
*** krypto has joined #openstack-lbaas15:49
*** Alex_Staf has quit IRC15:51
*** bcafarel has quit IRC15:54
*** bcafarel has joined #openstack-lbaas15:54
*** maestropandy has quit IRC15:55
*** maestropandy has joined #openstack-lbaas15:57
<johnsom> xgerman_ reviewed with comments  16:07
*** maestropandy has quit IRC16:09
<openstackgerrit> German Eichberger proposed openstack/octavia master: Make the event streamer transport URL configurable
<xgerman_> johnsom I am not sure how we can stop other projects from (ab)using the event streamer — also changing the name has implications on another commit…  16:14
<johnsom> Yeah, I wasn't trying to "stop them" but make it clear that it's not a general purpose thing they should start using.  I.e. as soon as we deprecate neutron-lbaas this goes deprecated as well.  Trying to be super clear with the wording.  16:14
<xgerman_> agreed with deprecating  16:16
<xgerman_> let me know how strongly you feel about the name and I will change it…  16:17
*** maestropandy has joined #openstack-lbaas16:18
<johnsom> I think I can give it a pass now that the help text is more clear  16:20
<johnsom> We will see what the other cores say  16:20
*** maestropandy has quit IRC16:20
*** tesseract has quit IRC16:25
*** sanfern has quit IRC16:35
*** sanfern has joined #openstack-lbaas16:37
*** AlexeyAbashkin has quit IRC16:38
*** Swami has joined #openstack-lbaas16:48
*** AJaeger has joined #openstack-lbaas17:00
<AJaeger> lbaas cores, please approve to fix the zuul.yaml config.  17:00
<AJaeger> johnsom: could you help to get a +2A, please?  17:01
<AJaeger> you now have a periodic-newton job but no newton branch anymore  17:01
<AJaeger> the change above removes that  17:01
*** salmankhan has quit IRC17:06
<johnsom> AJaeger Yep, still a bunch of work to do.  17:08
<johnsom> xgerman_ Can you take a look at ^^^^  17:08
*** salmankhan has joined #openstack-lbaas17:09
<xgerman_> also my job on OSA was experimental so it didn't get migrated (?!)  17:10
<johnsom> I plan to go through and fix the stable stuff today.  The py3 jobs aren't running on Pike at the moment, etc.  17:10
*** sshank has joined #openstack-lbaas17:21
*** eN_Guruprasad_Rn has joined #openstack-lbaas17:27
*** sshank has quit IRC17:31
<rm_work> xgerman_: had a quick question on
<xgerman_> rm_work the url has both username and password in it  17:36
<rm_work> xgerman_: actually decided to -1 for updating the octavia.conf example config to have this listed  17:38
<johnsom> Ah, yeah good catch  17:38
<rm_work> though yeah, i forgot that it's all one big URL  17:38
<rm_work> but i would have been reminded if i saw it in the example config :P  17:38
<xgerman_> ok, sure  17:39
<rm_work> also: releasenote typo fix while you're doing that  17:39
*** sshank has joined #openstack-lbaas17:39
<openstackgerrit> German Eichberger proposed openstack/octavia master: Make the event streamer transport URL configurable
<xgerman_> ok, hopefully that helps  17:45
<rm_work> xgerman_: so close :P  17:50
<xgerman_> I am afraid of when I have a more substantial patch…  17:50
<openstackgerrit> German Eichberger proposed openstack/octavia master: Make the event streamer transport URL configurable
<rm_work> but look how quick that was  17:52
<johnsom> I only nit'd them because the help text called it the transport url, so at least it had some relation  17:52
<xgerman_> yep, I try to be quick when I get reviews ;-)  17:54
<rm_work> i'm trying to understand why all the tests are failing on this change:
<rm_work> it seems like the servers aren't building  17:56
<rm_work> but not seeing errors in the logs  17:56
<rm_work> oh wait, one more log to check  17:56
<xgerman_> I thought last week we decided to go with querying nova and not caching? Guess Zuul incorporates chat logs in its -1s  17:57
<rm_work> yep nm i found it  17:57
<rm_work> xgerman_: talked with johnsom at length, reproposing it as "cached_az" to be clear about it, and still using nova-querying for other bits  17:58
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Add cached_az to the amphora record
<xgerman_> I need to think about that and probably look at the code…  17:59
<rm_work> or rather, we will when we write the other bits  17:59
<rm_work> really this patch is mostly identical to the old version, but it names it clearly as "cached_az" so it's obvious it might not be current  18:00
<rm_work> so some things that need a "best-guess but FAST" az value can use this  18:00
<rm_work> then we can query when we need a 100%  18:00
<xgerman_> yep, and I need to do more thinking on whether Octavia needs to be AZ aware, because this is only one way to do HA and it might need to be more generalized…  18:00
<rm_work> well, technically this doesn't add anything to make it truly AZ-aware  18:02
<rm_work> just puts the cached AZ in the db for easy lookup for filtering  18:02
<rm_work> we can get the exact AZ via the nova-query method in any functions that NEED that  18:03
<xgerman_> mmh, so it's basically metadata? Why don't we make a general metadata solution?  18:08
<xgerman_> I was trying to read up on stuff and I ran across
*** AlexeyAbashkin has joined #openstack-lbaas18:10
<rm_work> well, it needs to be queryable in a DB join  18:10
<xgerman_> so why wouldn't metadata be that way?  18:12
<rm_work> a lot of metadata solutions don't store in easily joinable fields  18:12
*** strigazi_ has joined #openstack-lbaas18:13
*** AlexeyAbashkin has quit IRC18:14
<xgerman_> well, we need more metadata than AZ in the future, e.g. if somebody uses kosmos or DNS load balancing they might want to put data in to tie them together  18:14
<xgerman_> or people might want to schedule LBs close to the web servers…  18:14
<xgerman_> so I feel we are painting ourselves into a corner  18:15
*** jniesz has joined #openstack-lbaas18:15
<rm_work> you're killing me German  18:15
<rm_work> *killing me*  18:15
<xgerman_> I think AZ probably plays a role when we schedule LBs for HA but I haven't thought deeply about how that will work - especially with ACTIVE-ACTIVE…  18:16
*** strigazi has quit IRC18:16
*** strigazi_ is now known as strigazi18:17
<xgerman_> we need to make a call on whether AZ is random metadata or something actionable for us… and I don't think we have our HA story figured out (one idea was to put AZ into the flavor)  18:18
*** slaweq has joined #openstack-lbaas18:19
<xgerman_> I'd also like to evaluate what happens if we shove the LB or AMP id into the nova metadata — would that meet your needs?  18:20
<rm_work> not quite  18:22
*** slaweq has quit IRC18:24
<openstackgerrit> Santhosh Fernandes proposed openstack/octavia master: [WIP] ACTIVE-ACTIVE with exabgp-speaker - Octavia agent
<xgerman_> ok, let me mull that over a bit…  18:27
<rm_work> I will do any code you want as long as there is some field in the DB that can store the cached AZ data  18:29
*** slaweq has joined #openstack-lbaas18:31
<openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas-dashboard master: Set package.json version to Queens MS1
<openstackgerrit> Michael Johnson proposed openstack/octavia-dashboard master: Set package.json version to Queens MS1
*** sanfern has quit IRC18:33
<johnsom> rm_work xgerman_ Can you take a look at those two dashboard patches and approve so we can un-block our MS1 release?  Thanks  18:35
*** slaweq has quit IRC18:36
*** csomerville has quit IRC18:39
*** csomerville has joined #openstack-lbaas18:39
*** eN_Guruprasad_Rn has quit IRC18:43
*** atoth has quit IRC18:46
*** sshank has quit IRC18:54
<johnsom> Oye: Here's how npm's semver implementation deviates from what's on
<johnsom> Guess I need another spin....  18:57
<johnsom> Would love to find the person that decided that...  "Let's deviate from a standard"  18:58
<openstackgerrit> Michael Johnson proposed openstack/octavia-dashboard master: Set package.json version to Queens MS1
<openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas-dashboard master: Set package.json version to Queens MS1
*** sshank has joined #openstack-lbaas19:02
<krypto> hi johnsom, thanks for the suggestion. Is the certificate section mandatory in the configuration? I skipped that. Could you have a quick glance at my configuration? not sure what i am missing  19:05
*** slaweq has joined #openstack-lbaas19:05
<rm_work> johnsom: hmm found an interesting bug  19:07
*** sshank has quit IRC19:07
<rm_work> i think  19:07
*** armax has quit IRC19:08
<rm_work> johnsom: if the pool we're creating doesn't have any listeners... should this at least exist!? I saw:  19:08
<openstackgerrit> Swaminathan Vasudevan proposed openstack/octavia master: (WIP):Support SUSE distro based Amphora Image for Octavia
<rm_work> i thought the model would at least HAVE the attribute  19:11
<johnsom> rm_work That trace implies the call to the database to pull the pool record failed to find the pool record.  That is the issue there.  How did the API not create the DB record for the pool but pass it to the handler?  19:12
*** krypto has quit IRC19:12
<rm_work> yeah i assumed that would have been impossible too  19:12
<rm_work> OH I bet I know  19:13
<johnsom> That has nothing to do with the listener  19:13
<rm_work> i bet it's the DB replication being slow  19:13
<rm_work> probably ended up on a read-slave that didn't catch the replication yet  19:13
<johnsom> Ouch, that sounds like fun....  19:13
<rm_work> that's ... eugh  19:14
<rm_work> do we need to *support* db clusters?  19:14
<rm_work> even just master-slave?  19:14
<rm_work> i wonder if this is "our problem" or "my problem"  19:14
<johnsom> I could comment, but won't.  19:15
<johnsom> I have seen patches starting to float around neutron about ndb mode support  19:15
<johnsom> I mean in this case, we *could* put some retry logic in  19:15
<rm_work> that'd be ... messy  19:15
<rm_work> is there a more standard way to support cluster mode?  19:16
<rm_work> or is that pretty much it  19:16
<rm_work> i guess i should just take us back down to using simply a single master node, and the slave is just a backup we can swing the DB FLIP over to  19:16
<johnsom> Ummm well, there are many options here.  19:16
<johnsom> Turn your cluster to sync replication  19:16
<rm_work> because it seems like this could happen to anyone using read-slaves  19:17
<rm_work> ah, yeah  19:17
<rm_work> then the first call doesn't return until the replication is complete?  19:17
<johnsom> Add retries to the controller worker methods as it's the entry point.  19:17
<rm_work> i think that might be the solution  19:17
<johnsom> There is this patch open in nlbaas:
<rm_work> well, wouldn't retries in the controller-worker slow down everything in cases where it REALLY doesn't exist?  19:17
<johnsom> I haven't had time to look yet though  19:17
<rm_work> or, is there no such case  19:17
<johnsom> Well, for the base objects it should never happen, but for the secondary objects, yes, a not-found could be valid I think.  19:18
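The retry idea johnsom floats can be sketched as a small wrapper around the record lookup. Everything here (names, attempt counts, delays) is hypothetical, not Octavia's actual code:

```python
import time


class NotFoundAfterRetries(Exception):
    """Raised when the record never becomes visible on the read replica."""


def get_with_retry(fetch, object_id, attempts=3, delay=0.5):
    # A read issued right after a write may land on a replica that has
    # not yet replayed the primary's transaction, so a miss is retried
    # a few times before being treated as a real not-found.
    for attempt in range(attempts):
        record = fetch(object_id)
        if record is not None:
            return record
        if attempt < attempts - 1:
            time.sleep(delay)
    raise NotFoundAfterRetries(object_id)
```

This also makes rm_work's concern concrete: a genuinely missing object now costs up to `attempts * delay` before failing, which is why johnsom distinguishes base objects (should always exist) from secondary ones (a not-found can be legitimate).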
<rm_work> so yeah I am going to revert back to a single sql IP I think  19:19
<rm_work> or... somehow i thought i was already on that actually  19:19
<openstackgerrit> Merged openstack/neutron-lbaas master: Remove common jobs from zuul.d
<johnsom> krypto Haven't forgotten about you, just trying to eat a bit of lunch before the meeting.  Will take a look in a few  19:21
*** krypto has joined #openstack-lbaas19:30
<johnsom> krypto You can also compare with the configuration our gate jobs are using:
<johnsom> krypto Remind me, which version of Octavia are you deploying?  19:37
<johnsom> rm_work xgerman_ Sorry, same deal with the dashboard patches... and  19:38
<krypto> johnsom 1.0.1  19:39
<rm_work> johnsom: what happened to them? I didn't see a difference  19:40
<rm_work> too many .0  19:40
<johnsom> That nodejs stuff uses a custom semver  19:40
*** atoth has joined #openstack-lbaas19:41
*** armax has joined #openstack-lbaas19:42
<johnsom> krypto controller_ip_port_list = 192.1681.17:5555  19:43
<johnsom> typo there, so the health manager probably isn't getting its messages  19:43
<johnsom> krypto You will need the certificate section filled out and at least some demo certs.  There is a script in /bin for the demo certs  19:45
<johnsom> ca_private_key_passphrase = passphrase  19:45
<johnsom> ca_private_key = /etc/octavia/certs/private/cakey.pem  19:45
<johnsom> ca_certificate = /etc/octavia/certs/ca_01.pem  19:45
<krypto> wow thats awesome sir.. you got time to go through my configs. really appreciate it :)  19:45
<krypto> let me fix it .. thanks a lot  19:45
*** AlexeyAbashkin has joined #openstack-lbaas19:51
*** krypto has quit IRC19:53
<johnsom> We try to help folks when we can  19:53
<johnsom> Probably need this too: [haproxy_amphora]  19:55
<johnsom> server_ca = /etc/octavia/certs/ca_01.pem  19:55
<johnsom> client_cert = /etc/octavia/certs/client.pem  19:55
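Pulling johnsom's scattered settings together, the certificate-related sections look roughly like this. The paths and passphrase are the demo values from the lines above (generated by the script in the repo's bin directory); real deployments should use their own CA, and the [certificates] section name is an assumption based on johnsom's "certificate section" remark:

```ini
# octavia.conf -- sketch assembled from the settings johnsom lists above
[certificates]
ca_private_key_passphrase = passphrase
ca_private_key = /etc/octavia/certs/private/cakey.pem
ca_certificate = /etc/octavia/certs/ca_01.pem

[haproxy_amphora]
server_ca = /etc/octavia/certs/ca_01.pem
client_cert = /etc/octavia/certs/client.pem
```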
*** AlexeyAbashkin has quit IRC19:55
<openstackgerrit> Merged openstack/neutron-lbaas-dashboard master: Set package.json version to Queens MS1
*** longstaff has joined #openstack-lbaas19:55
<johnsom> Ah, cool  19:56
<openstackgerrit> Merged openstack/octavia-dashboard master: Set package.json version to Queens MS1
<johnsom> #startmeeting Octavia  20:00
<openstack> Meeting started Wed Oct 25 20:00:06 2017 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
<openstack> The meeting name has been set to 'octavia'  20:00
<johnsom> Hi folks  20:00
<nmagnezi> (sorry :D )  20:01
<johnsom> Ah, there are some people.  20:01
<johnsom> #topic Announcements  20:01
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:01
<johnsom> The Queens MS1 release has not yet gone out.  We are working through gate/release system issues from the zuul v3 changes.  I expect it will happen in the next day or two.  20:02
<johnsom> The newton release has officially gone EOL and the git branches were removed.  This happened last night.  20:02
<johnsom> Just a reminder, if you need to reference that code there is a tag newton-eol you can pull, but we can't put any more patches on newton.  20:03
<johnsom> And finally, I am still working through all of the zuul v3 changes.  20:03
<johnsom> Octavia and neutron-lbaas should no longer have duplicate jobs, but I'm still working on the dashboard repos.  20:04
<johnsom> I still have more cleanup to finish and I need to fix the stable branches.  20:04
<nmagnezi> for zuul v3 ?  20:04
<johnsom> However, I think we have zuul v3 functional at this point.  20:04
<johnsom> stable branch gates aren't running at the moment (just failing).  20:05
<johnsom> Oh, we do have some TC changes:  20:06
<johnsom> Any other announcements today?  20:06
*** sshank has joined #openstack-lbaas20:07
<johnsom> #topic Brief progress reports / bugs needing review  20:07
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:07
<johnsom> Please continue reviewing and commenting on the provider driver spec:  20:07
<rm_work> back to asking folks to consider THIS option for amphora az caching:  20:08
<johnsom> longstaff Are you planning patchset version 3 soon?  20:08
<xgerman_> I added a topic to the agenda  20:08
<xgerman_> for that  20:08
<rm_work> oh k  20:08
<longstaff> Yes. I plan to commit an update to the provider driver spec tomorrow.  20:09
<johnsom> Excellent, thank you!  20:09
<johnsom> I have been focused on zuul v3 stuffs and some bug fixes to issues that came up over the last week.  20:10
<johnsom> I still have more patches for bug fixes coming.  20:10
<johnsom> And many more zuulv3 patches...  sigh  20:10
<johnsom> Any other progress updates to share, or should we jump into the main event: AZs  20:11
<xgerman_> johnsom and I put up some patches to improve LBaaS v2 <-> Octavia  20:11
<jniesz> johnsom: saw your notes about HM failover after the listener fails.  Is there a way to not trigger failover unless the original listener was in a known good state?  20:12
<jniesz> since this is a new provision  20:12
<johnsom> jniesz Yeah, one of those has a patch up for review.  The secondary outcome is around the network driver being dumb.  I haven't finished that patch yet  20:12
*** sshank has quit IRC20:12
*** sshank has joined #openstack-lbaas20:12
<xgerman_> I have taken my eyes off the ball in OpenStackAnsible and so there is some cruft we are tackling:
*** salmankhan has quit IRC20:13
<xgerman_> good news: I will remain core for the Q cycle over there…  20:14
<xgerman_> in Octavia OSA  20:14
<johnsom> jniesz Well, the failover we saw was a valid failover.  The amp should have had a working listener on it, but it didn't, so IMO Octavia did "the right thing" and ended in the right state, with the load balancer in error as well because we could not resolve the problem with the amp.  (notes for those that haven't read my novel: a bad local jinja template change caused listeners to fail to deploy)  20:14
<johnsom> Yeah, Octavia OSA is getting some more attention  20:15
<jniesz> so wouldn't we want to not fail over unless the original provisioned ok?  20:15
<rm_work> that seems right -- if the listeners fail to deploy, even on an initial create, why wouldn't it be an error?  20:15
<rm_work> oh, failover -- ehh... maybe? I mean, it could have been a one-time issue  20:15
<rm_work> go to error -- definitely  20:16
<rm_work> but was it failover-looping?  20:16
<johnsom> jniesz No, it could have failed for reasons related to that host, like the base host ran out of disk  20:16
<rm_work> we might want to have something that tracks failover-count (I wanted this anyway) and at some point, if it fails too many times quickly, detect that something is wrong  20:16
<johnsom> No, it saw the listener was failed, attempted a failover, which also failed (same template), so it gave up and marked the LB in error too  20:16
<rm_work> yeah that seems right  20:17
<jniesz> is failover the same as re-create listener?  20:17
<jniesz> is that the same code path  20:17
<johnsom> LB in ERROR will stop the failover attempts  20:17
<johnsom> No, failover rebuilds the whole amphora, which, because there was a listener deployed, did attempt to deploy a listener again.  20:18
<johnsom> Fundamentally the "task" that deploys a listener is shared between create and failover  20:18
jnieszand we trigger the failover of the amp because at that point it is in an unknown state I assume20:20
jnieszbecause of error20:20
johnsomIt triggered failover because the timeout expired for the health checking on the amp.  The amp was showing no listeners up but the controller knew there should be a listener deployed.  It gives it some time and then starts the failover process.20:21
jnieszbecause the listener create responded back with invalid request20:22
johnsomNo, this health checking was independent of that error coming back and the listener going into "ERROR".20:23
johnsomTwo paths, both figuring out there was a problem, with one escalating to a full amp failover20:24
jnieszso if listener create goes from PENDING -> ERROR HM will still trigger?20:24
jnieszor if it goes from PENDING -> RUNNING -> failure20:24
johnsomIf the LB goes ACTIVE and then some part of the load balancing engine is not working it will trigger20:26
jnieszok, because from logs octavia.controller.worker.tasks.lifecycle_tasks.ListenersToErrorOnRevertTask  went to RUNNING from PENDING20:27
jnieszand then it failed on octavia.controller.worker.tasks.amphora_driver_tasks.ListenersUpdate20:28
rm_workthat task always runs I think?20:28
jnieszwhich is when it hit the template issue20:28
rm_workit's so when the chain reverts, it hits the revert method in there20:28
johnsomYeah, in o-cw logs you will see ListenersUpdate went to running, then failed on the jinja so went to REVERTED, then reverted ListenersToErrorOnRevertTask which put the listener into ERROR.20:29
johnsomindependently, the o-hm (health manager) was watching the amp and noticed there should be a working listener but there is not.20:30
johnsomthat is when o-hm would trigger a failover in an attempt to recover the amp20:30
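The health-manager behaviour johnsom describes (the controller knows a listener should be up; if heartbeats keep showing no listener past a grace period, failover is triggered) reduces to a check like this sketch. The function name and grace default are illustrative, not Octavia's actual o-hm code.

```python
def should_failover(expected_listeners, reported_listeners,
                    last_ok_time, now, grace_seconds=60):
    """Illustrative sketch of the o-hm decision described in the log:
    the amp reports fewer listeners than the controller expects, and the
    grace period since the last healthy report has expired."""
    if reported_listeners >= expected_listeners:
        return False  # amp looks healthy, nothing to do
    # Unhealthy, but only act after giving it some time to recover
    return (now - last_ok_time) > grace_seconds
```

Note this path is independent of the listener's provisioning status going to ERROR; both paths detected the same underlying failure here.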
jnieszok, but should ListenersUpdate have ever gone to running?20:31
johnsomListenersUpdate task should have gone: PENDING, RUNNING, REVERTING, REVERTED20:31
johnsomI think there is a scheduling in the beginning somewhere too20:32
johnsomListenersUpdate  definitely started, but failed due to the bad jinja20:32
jnieszso the jinja check is not done until post ListenerUpdate20:33
johnsomOk, we are at the halfway point in the meeting time, I want to give the next topic some time.  We can continue to discuss the bug after the meeting if you would like.20:33
jnieszok that is fine20:33
johnsomjinja check is part of the ListenerUpdate task20:33
rm_workyeah basically, i think it went exactly as it should20:33
rm_workfrom what i've heard20:33
johnsomYeah, me too aside from the bug and patch I have already posted.20:34
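The revert chain walked through above (ListenersUpdate fails on the bad jinja template, the flow reverts in reverse order, and ListenersToErrorOnRevertTask's revert marks the listener ERROR) can be sketched without the taskflow library. The class names mirror the log, but the flow engine here is a simplified stand-in, not Octavia's real taskflow wiring.

```python
class FakeListener:
    def __init__(self):
        self.provisioning_status = "PENDING_CREATE"


class ListenersToErrorOnRevertTask:
    """Lifecycle task: does nothing forward, flips the listener to ERROR
    when the flow reverts through it."""
    def __init__(self, listener):
        self.listener = listener

    def execute(self):
        pass

    def revert(self):
        self.listener.provisioning_status = "ERROR"


class ListenersUpdate:
    """Stand-in for the task that renders and pushes the haproxy config;
    render() raising models the bad jinja template."""
    def __init__(self, render):
        self.render = render

    def execute(self):
        self.render()

    def revert(self):
        pass


def run_flow(tasks):
    # Minimal taskflow-style engine: on failure, revert completed tasks
    # (and the failed one) in reverse order, then re-raise.
    done = []
    for task in tasks:
        try:
            task.execute()
            done.append(task)
        except Exception:
            for t in reversed(done + [task]):
                t.revert()
            raise
```

Running the flow with a failing render reproduces the sequence from the o-cw logs: RUNNING, REVERTED, listener in ERROR.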
johnsom#topic AZ in Octavia DB (xgerman_)20:34
*** openstack changes topic to "AZ in Octavia DB (xgerman_) (Meeting topic: Octavia)"20:34
rm_workSo, I wonder if there are highlights from our PM discussion on this that would be good to summarize20:35
rm_workBasically, there are cases where we need to be able to quickly filter by AZ ... and doing a full query to nova is not feasible (at scale)20:35
johnsomMy summary is rm_work wants to cache the AZ info from an amphora being built by nova20:35
xgerman_and my argument is if somebody wants to query by data center, rack — I don’t want to add those columns20:36
xgerman_so this feels like meta data and needs a more generalized solution20:36
rm_workso we can compromise by storing it clearly labeled as a "cache" that can be used by processes that need quick-mostly-accurate, and we can query nova in addition for cases that need "slow-but-exact"20:36
rm_workwell, that's not something that's in nova is it?20:36
rm_workand DC would be... well outside the scope of Octavia, since we run in one DC20:37
rm_workas does ... the cloud20:37
rm_workso it's kinda obvious which DC you're in :P20:37
johnsomThe use case is getting a list of amps that live in an AZ and to be able to pull an AZ specific amp from the spares pool.20:37
rm_worki mean, the scope limit here is "data nova provides us"20:37
xgerman_so we are doing scheduling by AZ20:37
rm_workI'd like to enable operators to do that if they want to20:38
johnsomWell, other deployers have different ideas of what an AZ is....  For many, an AZ is a different DC20:38
rm_workright which does make sense20:38
rm_workso, that's still fine IMO20:38
xgerman_yeah, so if it is for scheduling it should be some scheduling hint20:38
rm_workyeah, we can do scheduling hints in our driver20:38
jnieszyea, I would like the scheduling control as well20:39
rm_workbut they rely on knowing what AZ stuff is in already20:39
xgerman_my worry is to keep it generalized enough that we can add other hints in the future20:39
rm_workand querying from nova for that isn't feasible at scale and in some cases20:39
*** slaweq has quit IRC20:39
rm_worki'm just not sure what other hints there ARE in nova20:39
rm_workmaybe HV20:39
rm_workerr, "host"20:39
xgerman_I don’t want to limit it to nova hints20:40
rm_workwhich ... honestly i'm OK with cached_host as well, if anyone ever submits it20:40
johnsomReally the issue is OpenStack has a poor definition of AZ and nova doesn't give us the tools we want without modifications or sorting through every amphora the service account has.20:40
rm_workwell, that's what we get back on the create20:40
*** slaweq has joined #openstack-lbaas20:40
rm_workwe're just talking about adding stuff we already get from a compute service20:40
rm_workand it's hard to talk about theoretical compute services20:40
nmagnezijohnsom, it does not allow getting a filtered list by AZ?20:40
nmagnezii mean, just the instances in a specific AZ?20:41
rm_workbut in general, we'd assume some concept that fits generally into the category of "az"20:41
xgerman_I am saying we should take a step back and look at scheduling — is adding an az column the best/cleanest way to achieve scheduling hints20:41
johnsomnmagnezi Yes, but we would have to take that list, deal with paging it, and then match to our spares list.  For example.20:41
rm_worknmagnezi: basically, if we want to see what AZs we have in the spares pool, the options are: call nova X times, where X is the number of amps in the spares pool; or: call nova-list and match stuff up, which will cause significant paging issues at scale20:42
jnieszor you can pull out of nova db : )20:42
johnsomDon't get me wrong, I'm not a huge fan of this, but thinking through it with rm_work led me to feel this might be the simplest solution20:42
rm_workjniesz: lol no we cannot :P20:42
nmagneziyeah sounds problematic..20:42
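The scaling point rm_work makes can be shown with a toy comparison: one compute API round trip per spare amphora versus a single pass over locally cached rows. `get_server` below is a stand-in for a nova show-server call, not a real novaclient method; the data shapes are invented for the example.

```python
def azs_via_compute(spares, get_server):
    """One compute API call per amphora -- O(len(spares)) round trips.
    spares maps amphora id -> compute (nova server) id."""
    return {amp_id: get_server(compute_id)["az"]
            for amp_id, compute_id in spares.items()}


def azs_via_cache(amp_rows):
    """One pass over locally stored rows carrying a cached AZ column --
    no compute API traffic at all."""
    return {row["id"]: row["cached_zone"] for row in amp_rows}
```

The nova-list alternative avoids per-amp calls but trades them for paging through every server the service account owns and matching results back to the spares list, which is the "significant paging issues at scale" mentioned above.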
rm_workxgerman_: well, our driver actually *does* do scheduling on AZ already20:43
* johnsom slaps jniesz's hand for even considering getting into the nova DB....20:43
rm_workit's just not reliant on existing AZs20:43
xgerman_so if I run different nova flavors and want to pull one out based on flavor and AZ — how would I do that?20:43
xgerman_would I add another column?20:43
rm_workxgerman_: do we not track the flavor along with the amp?20:43
rm_workI guess not? actually we probably should20:43
rm_workI even kinda want to track the image_ref we used too...20:44
*** slaweq has quit IRC20:44
rm_workwe can add image_ref and compute_flavor IMO20:44
rm_workalongside cached_az20:44
rm_worki would find that data useful20:44
*** PagliaccisCloud has quit IRC20:45
johnsomI see xgerman_'s point, this is getting a bit tightly coupled to the specific nova compute driver20:45
jnieszshould we store this in flavor metadata?20:45
rm_work(I was also looking at a call to allow failover of amps by image_id so we could force-failover stuff in case of security concerns on a specific image)20:45
jnieszthat would only be initial though and wouldn't account for changes20:45
nmagnezijohnsom, i was just thinking the same. we need to remember we do want to allow future support for other compute drivers20:45
rm_workjniesz: i was talking about in the amphora table20:45
xgerman_yep, that’s what I was getting at + I don’t know if I want to write python code for different scheduling hints20:45
rm_worki think the DB columns should absolutely be agnostic20:46
rm_workbut we can use them in the compute drivers20:46
rm_workI think the concept of "compute_flavor" and "compute_image" and "compute_az" are pretty agnostic though -- won't most things need those?20:46
rm_workeven amazon and GCE have AZs AFAIU20:47
rm_workand flavors and images...20:47
rm_workkubernetes does too20:47
jnieszwhat about vmware20:47
rm_workso I would be in favor of adding *all three* of those columns to the amphora table20:47
nmagneziaside from flavors (which I just don't know if it exists as a concept in other alternatives) I agree20:47
nmagnezii'm not saying i disagree about flavors, just not really sure20:48
rm_worki don't know much about vmware -- i would assume they'd NEED something like that, no?20:48
rm_workwe're all about being generic enough here to support almost any use-case -- and i'm totally on board with that, and in fact that's the reason for this20:49
xgerman_we can either use some metadata concept where we go completely configurable or pick sinner and throw them in the DB20:49
nmagnezirm_work, google says they do have it.20:49
xgerman_not sinners20:49
rm_workit should be possible to write a custom compute driver that can take advantage of these fields20:49
rm_worki think it's pretty clear that those fields are something shared across most compute engines20:50
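The three-column proposal rm_work is arguing for, sketched against an illustrative schema. sqlite is used here purely for demonstration; the real Octavia amphora table and its migration are different, and the column names (cached_zone, compute_flavor, image_ref) follow the discussion rather than any merged schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE amphora (
        id TEXT PRIMARY KEY,
        compute_id TEXT,
        cached_zone TEXT,      -- quick-but-approximate; nova stays authoritative
        compute_flavor TEXT,   -- immutable post-create
        image_ref TEXT         -- immutable post-create
    )
""")
conn.execute(
    "INSERT INTO amphora VALUES ('a1', 'c1', 'az1', 'm1.small', 'img-old')")
conn.execute(
    "INSERT INTO amphora VALUES ('a2', 'c2', 'az2', 'm1.small', 'img-new')")

# The quick filtering operators asked for, e.g. AZ-aware spares selection:
spares_in_az1 = conn.execute(
    "SELECT id FROM amphora WHERE cached_zone = 'az1'").fetchall()
```

The "cached_" prefix on the zone column encodes the compromise from the log: processes needing quick-mostly-accurate data read the cache, while slow-but-exact cases still query nova.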
johnsomI mean at a base level, I wish nova server groups (anti-affinity) just "did the right thing"20:50
rm_workyeah, that'd be grand, lol20:50
jnieszvmware I thought only had flavors with OpenStack, not native20:50
jnieszyou clone from template or create vm20:50
*** AlexeyAbashkin has joined #openstack-lbaas20:51
rm_workjniesz: in that case, flavor would be ~= "template"?20:51
*** PagliaccisCloud has joined #openstack-lbaas20:51
jnieszwell that included image20:51
jnieszand I think you still specify vcpu, mem20:51
johnsomSo, some thoughts:20:51
johnsomCache this now20:51
johnsomWork on nova to make it do what we need20:51
johnsomSetup each compute driver to have its own scheduling/metadata table20:51
jnieszhas been awhile for me with VMware20:51
jnieszdifferent drivers might care about different metadata, so like that idea20:52
johnsomjniesz the vmware integration has a "flavor" concept20:52
xgerman_johnsom +120:52
xgerman_I think keeping it flexible and not putting it straight into the amp table gives us space for future compute drivers/schedulers20:53
johnsomThose were options/ideas, not an ordered list BTW20:53
rm_worki think it *is* flexible enough in the amphora table to allow for future compute drivers20:53
johnsomI just worry if the schema varies between compute drivers, how is the API going to deal with it...20:54
nmagnezii think rm_work has a point here. and for example even if a compute driver does not have the concept of AZs we can just see it as a single AZ for all instances20:54
*** krypto has joined #openstack-lbaas20:54
*** krypto has quit IRC20:54
*** krypto has joined #openstack-lbaas20:54
rm_workand yes, exactly, worried about schema differences20:55
*** AlexeyAbashkin has quit IRC20:55
johnsomWe have five minutes left in the meeting BTW20:55
xgerman_I am not too worried about those — they are mostly for scheduling and the API doesn't know about AZs right now anyway20:55
johnsomWe have the amphora API now, which I expect rm_work wants the AZ exposed in20:56
rm_workit's been three years and what we have for compute is "nova", haven't even managed to get the containers stuff working (which also would have these fields), and every compute solution we can think of to mention here would also fit into these ...20:56
rm_workso blocking something useful like this because of "possible future whatever" seems lame to me20:57
rm_workjust saying20:57
xgerman_so what’s bad about another table?20:57
johnsomWell, let's all comment on the patch:20:57
rm_workjust, why20:57
rm_workwe could do another table -- but we'd want a concrete schema there too20:57
rm_workand the models would just auto-join it anyway20:57
johnsomI will mention, AZ is an extension to nova from what I remember and is optional20:57
xgerman_yes, but it would be for nova and if we do vmware they would do their table20:57
rm_workthe models need to do the join20:58
rm_workwe can't have a variable schema, can we?20:58
rm_work+1 to which, lol20:58
johnsomI really don't want a variable schema.  It has ended poorly for other projects20:58
jnieszisn't it hard to have single schema for multiple drivers that would have completely different concepts?20:59
johnsomFinal minute folks20:59
rm_workjniesz: yes, but i don't think they HAVE different concepts20:59
rm_workplease name a compute system that we couldn't somehow fill az/flavor/image_ref in a meaningful way21:00
rm_worki can't think of one21:00
johnsomReview, comment early, comment often...  grin21:00
*** openstack changes topic to "Welcome to LBaaS / Octavia - Queens development is now open."21:00
openstackMeeting ended Wed Oct 25 21:00:22 2017 UTC.  Information about MeetBot at . (v 0.1.4)21:00
openstackMinutes (text):
rm_workAFAICT those are pretty basic building blocks of any compute service21:00
*** longstaff has quit IRC21:00
jnieszyes those three are basic I agree21:00
xgerman_so we put some into the amp table and some others somewhere else21:00
jnieszbut if it gets expanded beyond those, maybe labels21:01
rm_workwe put common denominators in the amp table, yes21:01
rm_workothers ... i have no idea what others we'd need21:01
xgerman_we already established that the ones you want to put in there are not common21:01
rm_workwe established exactly that they *are* common21:02
rm_workwere we in different meetings?21:02
xgerman_anyhow I don't know why we can't use the ORM trick and just have a NovaAMP inherit from amp with some additional fields21:02
xgerman_this has been solved21:02
xgerman_there were people who didn’t need AZs — VMWare had templates instead of flavor, etc.21:02
rm_workthe name isn't that important21:03
rm_workand the mappings are fairly obvious21:03
xgerman_so why don't we want to do two tables? performance?21:03
xgerman_or one table and throw in a classifier?21:03
rm_workvariable schema is not fun to work with21:03
xgerman_this is not a variable schema21:03
nmagnezixgerman_, meaning a "metadata" table linked to the amps?21:03
xgerman_something like this:
rm_workyou want to have different tables for different drivers, which would have different schemas21:04
rm_workyeah i see how that'd work21:05
rm_workbut i don't think we need to do that21:05
rm_workat least for the core stuff21:05
rm_workxgerman_: so, if we ever need anything that we wouldn't define as "core to any compute system", then lets do that21:05
rm_workbut i'd define the things i listed as "core"21:05
rm_workvery common denominators21:05
rm_workso let's put THOSE in the base table21:06
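For contrast, the per-driver extra table xgerman_ suggests (a common amphora table joined to e.g. a nova_amphora table, with other drivers adding their own) would look roughly like this. Table and column names are illustrative; sqlite stands in for the real database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Common, driver-agnostic base table
    CREATE TABLE amphora (id TEXT PRIMARY KEY, status TEXT);

    -- Nova-specific companion table; a vmware driver would get its own
    CREATE TABLE nova_amphora (
        amphora_id TEXT PRIMARY KEY REFERENCES amphora(id),
        cached_zone TEXT,
        compute_flavor TEXT
    );

    INSERT INTO amphora VALUES ('a1', 'READY');
    INSERT INTO nova_amphora VALUES ('a1', 'az1', 'm1.small');
""")

# The ORM would effectively perform this join for a NovaAmp subclass:
row = conn.execute("""
    SELECT a.id, a.status, n.cached_zone
    FROM amphora a JOIN nova_amphora n ON n.amphora_id = a.id
""").fetchone()
```

This is the joined-table-inheritance pattern; the trade-off debated in the log is that the API layer then has to cope with per-driver schemas, versus rm_work's position that zone/flavor/image are common enough to live in the base table.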
johnsomI feel like we shouldn't get too far into the scheduling business....21:06
xgerman_that, too21:06
rm_worktechnically no code i'm proposing upstream has any impact on scheduling21:06
johnsomThe cached_AZ column is a compromise there where we aren't doing the scheduling, nova is still the point of truth, but we save a bit more metadata off the compute system response.21:07
rm_workstoring flavor and image_ref could also be useful though21:07
rm_workimage_ref in particular21:07
rm_workor whatever more generic name anyone might want21:07
johnsomConvince me that we can't get enough of that from the nova API21:08
rm_workA) is image_ref possible to change? answer, no21:08
xgerman_well, if we do the extra table route rm_work can do a go_daddy_compute driver and put in all the values he sees fit ;-)21:09
johnsomTrue, but your use case is to find amps with alternate image IDs.  To me the nova API can do that for us, no need to cache it21:09
rm_workso just add that to the compute_driver API21:09
xgerman_when it comes to scheduling it doesn’t really matter what we call the columns — they can be colors or whatever21:09
johnsomxgerman_ and have a go_daddy API path instead of amphora?  Technically we don't limit that today.21:09
johnsomrm_work yes, if we need it21:10
rm_worki guess so, it does technically work, it's just slow21:10
xgerman_well, there are the info calls — where we don’t care about speed21:10
johnsomWe should, as a team, decide on what level of devops we want to enable via our API vs. what the operator should use devops tools (ansible, puppet, etc.) to manage21:11
xgerman_I understand if we need to do “scheduling” we need values in our DB21:11
xgerman_and pulling the correct AMP from the spares pool would be useful21:12
rm_workOctavia should be able to internally do certain maintenance tasks, IMO -- such as recycling amps that have specific image_ids (or that DON'T have a specific image_id), forone21:12
rm_workthat's why i bring up image_id, since it's going to be relevant to me soon21:12
xgerman_well, in most orgs they have a configuration management system which has records of that21:13
johnsomI kind of feel that is a devops tool thing.  It can go pull the list from nova and call our failover api...21:13
johnsomYes, it would be nice.  Heck just have housekeeping do it, but... Is now the time for that level of gold plating?21:13
rm_workwell, i'm going to be doing it21:13
rm_workso would you rather i do it internally, or upstream?21:13
rm_workI would 100% rather do it upstream21:14
rm_workbut these are the kinds of things that are on my jira board right now21:14
xgerman_I would rather not have that in Octavia21:14
nmagnezirm_work, making an offer we can't refuse.. :P21:14
* johnsom sees visions of the Octavia ansible driver interface21:14
xgerman_the whole error handling is a nightmare21:14
rm_workwhy do we make a bunch of operators write a bunch of tooling for stuff we can just provide easily in the API?21:14
xgerman_and everybody handles errors differently (retries, error, etc.)21:14
xgerman_we can give them “scripts” they can customize to their need21:15
xgerman_not so easy with the API21:15
*** rcernin has joined #openstack-lbaas21:16
rm_workso for years we've promised an "admin API" with rich features for operators...21:16
rm_worki guess it turns out we were lying?21:16
rm_workour "admin api" is one endpoint to do failover, and "go figure it out"21:16
xgerman_we have more plans (related to cert rotations)21:17
xgerman_now, my problem is that to decide which amps to failover and what to do if there is an error is highly operator specific21:17
rm_workso my goal is to provide some nice tooling that will work with 90% of deployments out-of-the-box21:18
jnieszfor us we would have to pull additional metadata out of our application lifecycle manager to make decisions by organization, or whether it is prod, dev, qa21:18
xgerman_yep, exactly21:18
rm_workbut if you'd rather force everyone to write custom tooling instead to do the same thing, i guess that's fine21:18
jnieszbut there is a LCD that applies to a lot of people21:18
rm_work^^ this21:19
xgerman_failover a list of LB is something like that21:19
rm_worki'm talking about pretty common operations, the kind of thing we'd want to call out as "you'll need to do this periodically" in our user guides21:20
rm_worklike, recycling old/bad images21:20
rm_work*amps based on21:20
jniesza question I have is should admin api be basic and those tools built on top of it, even if those tools are providing by us community21:21
jnieszor does it belong in admin api21:22
xgerman_my view was the API is basic and we provide “scripts” for it — so people can customize…21:22
rm_workadding additional features that work for the 90% doesn't break the ability for the 10% that want/need to write their own scripts21:23
xgerman_I personally like scripts over API calls since then I can follow the output, etc. — and otherwise we need a whole jobs api…21:25
jnieszYes, and it would be easy to integrate into custom systems21:25
jnieszhaving common scripts that are shared by everyone would be good, so everybody doesn't have to write their own21:27
jnieszfor the LCD tasks21:27
rm_workso, image_ref and flavor are something we'll just force the operator to look up on their own from nova, and match up with amphorae with some scripting, and then generate a list that has all the info?21:28
rm_workrather than just "having it" because it can't change post-create?21:29
rm_workand we were already passed the info but threw it away21:29
jnieszwhat about hot add : )21:30
jnieszAZ possibly could change through live migrate21:30
kryptoThanks johnsom, after correcting the typo services are stable, creating lb returns "An error happened in the driver21:31
kryptoNeutron server returns request_ids: ['req-d097dd4c-5498-4e25-a4db-9273a9ef1ac5']" i am using Mitaka neutron21:31
rm_workjniesz: i'm talking about image_ref21:31
jnieszyes, that one I wouldn't expect to change21:31
rm_workwe already acknowledged AZ could change, that's why I was listing it as "cached_az"21:31
rm_workand I don't think `flavor` can change either21:32
johnsomkrypto Probably a configuration issue on the neutron-lbaas side.  base_url maybe?  Check the q-svc logs for the details21:32
jnieszthe question is the frequency.  Which maintenance / admin tasks do we think will be running a lot in order to need a cache layer21:33
rm_workso ... if we acknowledge that "image_ref" won't change post-provision, and we can have it stored internally, and the failover code that the failover-api runs is internal (obviously), why can't we shortcut to "provide an API that will failover everything that is/isn't image_id <X>"?21:34
xgerman_“scheduling hints” should be in our DB; other stuff it’s on a case by case21:34
jnieszfor non-maintenance / admin tasks like failover I can see the benefit21:34
rm_worksorry i keep trying to talk about image_ref right now, can get back to cached_az if you want tho21:34
rm_workbecause that's also important21:35
xgerman_rm_work, so somebody kicks off that task on our API21:35
xgerman_how  can he follow progress? how can he stop it? What do we do if the server crashes the task is running on?21:35
jnieszsome function the controller is continuously doing are the ones I think make more sense21:35
rm_workxgerman_: is failover passed to the worker or done directly in the API? I didn't actually get a chance to review that API endpoint21:35
rm_workI assumed you passed it to the worker21:36
xgerman_yep, I did21:36
xgerman_but it only fails over one LB21:36
rm_workthen ... it's the same as that21:36
rm_workwe just create a bunch of failover messages onto the queue21:36
xgerman_no, it’s not —21:36
*** Alex_Staf has joined #openstack-lbaas21:36
xgerman_one failover nobody needs to stop — 1,000 might need to be stopped/paused/whatever21:37
rm_workit should take almost no time to do that, and the actual response isn't something that returns in the response anyway21:37
xgerman_also we need to report progress21:37
rm_workhmm, that's the one i could see21:37
rm_work"oops that's a lot of servers, can we cancel plz"21:38
rm_workwhen the API tells them "OK! Failing over 1052 servers."21:38
xgerman_yep + if each failover crashes the LB we need a way to pause/stop21:38
rm_workalright, i'll rethink that plan21:38
xgerman_yeah, not saying it can't be done — only saying a script might be easier21:39
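The failover-by-image endpoint rm_work was considering reduces to something like this naive sketch: select amphorae by stored image_ref and cast one failover message each onto the queue. The queue and data shapes are stand-ins, and the sketch deliberately has none of the pause/cancel/progress control xgerman_ is asking about, which is the substance of the objection.

```python
from queue import Queue


def failover_by_image(amp_rows, image_ref, queue):
    """Hypothetical bulk-failover sketch: enqueue a failover message for
    every amphora whose stored image_ref matches. Returns the count, which
    an API could report back ("OK! Failing over 1052 servers.")."""
    count = 0
    for row in amp_rows:
        if row["image_ref"] == image_ref:
            queue.put({"action": "failover", "amphora_id": row["id"]})
            count += 1
    return count
```

Inverting the predicate (amps that *don't* match the current image) gives the security-recycling case: fail over everything still running an old image.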
rm_workthen i guess back to cached_az21:39
rm_workyour main issue with that is that it's either FOR scheduling, in which case we shouldn't be getting into scheduling hints, or it ISN'T for scheduling, in which case it doesn't need to be in the DB?21:39
Alex_Stafrm_work, hi, just wanted to update you that the dynamic and non-dynamic creds tests did not run well for me =\21:39
rm_workAlex_Staf: what happened in the dynamic creds test?21:40
xgerman_rm_work my biggest issue is that I don;t want to put stuff specific to nova into the DB21:40
rm_workAFAIK they were working in kong's testing21:40
rm_workxgerman_: i don't think AZ is specific to nova21:40
xgerman_and even then  I am happy to put sepcific stuff in some extra nova_amp table21:41
rm_workthink of another compute system and i'll show you where it uses zones21:41
*** krypto has quit IRC21:42
Alex_Stafrm_work, I had different types of errors, and tried to run with testr and python -m; the behavior is different for those. With testr + dynamic I saw the same "Details: No "load-balancer_member" role found"21:43
rm_workwould you like "cached_zone" better than "cached_az"?21:43
rm_workxgerman_: ^^21:43
rm_workAlex_Staf: err, that's not how you run tempest :P21:43
rm_workone sec21:43
xgerman_sure let’s do cached_zone21:43
rm_workxgerman_: done, give me 5 minutes21:44
xgerman_rm_work: can all clouds do live migrations?21:44
Alex_Stafrm_work, ok, we have been running it from a cloud docker in that way for years now =\21:44
xgerman_in other clouds the Zone might never change21:44
rm_workAlex_Staf: ok, then let me rephrase "that's not how we are aware that tempest is supposed to be run"21:45
rm_workare you sure tempest is loading the correct config?21:45
jnieszto me I think the logical delineation is if it is a maintenance task that the operator does on occasion like rotate images it belongs in a script and not in db.  If the metadata is needed for something like scheduling / fail-over / etc.. it makes sense to cache in db21:46
kongAlex_Staf: the role should be created automatically by tempest. If you take a look at the commit message of that patch, that's how i run thet test case21:46
xgerman_jniesz +121:46
rm_workso then... are we ok with cached_zone in the amp table?21:46
rm_workbecause it could feasibly be used by drivers to do hinting21:46
rm_work(and by feasibly, I mean, my local driver does this)21:46
jnieszyes, I think that is reasonable21:47
Alex_Stafrm_work, pretty sure, I am running all neutron scenario, tempest scenario , neutron-lbaas scenario and it is always ok21:47
xgerman_yep, grudgingly agree. Would love to have them in a scheduling hint table or nova_amp but let's cross that bridge later21:47
openstackgerritAdam Harwell proposed openstack/octavia master: Add cached_az to the amphora record
xgerman_would be silly to start a new table for one column21:48
rm_work^^ just rebasing first21:48
xgerman_johnsom cool? Or should we have some more pure database design…21:49
johnsomSorry folks will have to catch up on the scroll back in a bit.  Working with release team on some late landing issues with horizon plugins21:50
xgerman_I will sleep over it before I +2 :-)21:51
*** AlexeyAbashkin has joined #openstack-lbaas21:51
*** AlexeyAbashkin has quit IRC21:55
johnsomWe need this on a t-shirt: "OK! Failing over 1052 servers."21:56
openstackgerritAdam Harwell proposed openstack/octavia master: Add cached_zone to the amphora record
rm_work^^ there's the real updated version21:57
rm_workAlex_Staf: can you pastebin me your tempest config?21:59
rm_workcensor passwords/usernames21:59
*** salmankhan has joined #openstack-lbaas22:50
*** AlexeyAbashkin has joined #openstack-lbaas22:52
*** AlexeyAbashkin has quit IRC22:56
rm_workI would love to call out companies that are doing Octavia deployments using recent code -- who all do we know that's doing that and is willing to be listed?23:16
rm_work^^ In the Sydney summit Project Update for Octavia23:16
rm_workjniesz: i assume you guys don't want your name anywhere near a slide23:16
*** fnaval has quit IRC23:18
*** salmankhan has quit IRC23:25
Alex_Stafrm_work, was the tempest conf helpful ?23:25
rm_workstill need to get a chance to compare it with mine23:25
rm_workactually can do that now, give me a sec23:26
rm_worklet's see... \23:27
rm_workon line 9, i don't know what setting that does -- we don't have that in our config23:27
rm_workin validation, i think you need 'run_validation = true'23:29
rm_workAlex_Staf: are you running with tempest from git?23:30
Alex_Stafrm_work, yes23:30
rm_workwe are using stuff that is newer than the 17.0.0 release23:30
Alex_Staflet me think23:30
Alex_Stafrunning with validation23:31
rm_workbrb 15m23:31
Alex_Stafrm_work, did not work, I am out, it is 2:30 AM here, I'll proceed tomorrow23:33
Alex_Stafrm_work, tnx for the help23:33
*** sshank has quit IRC23:53
*** sshank has joined #openstack-lbaas23:55
*** sshank has quit IRC23:55

Generated by 2.15.3 by Marius Gedminas - find it at!