Thursday, 2018-01-25

*** rstarmer has joined #openstack-lbaas00:02
*** AlexeyAbashkin has joined #openstack-lbaas00:11
*** AlexeyAbashkin has quit IRC00:15
*** jniesz has quit IRC00:53
*** rstarmer has quit IRC01:03
openstackgerrit: Michael Johnson proposed openstack/octavia-dashboard master: Setup octavia-dashboard for Queens release
*** sanfern has joined #openstack-lbaas02:39
*** ianychoi has quit IRC02:49
*** ianychoi has joined #openstack-lbaas02:49
*** dougwig has quit IRC03:13
*** harlowja has quit IRC03:21
*** armax has joined #openstack-lbaas03:24
*** openstackgerrit has quit IRC03:33
*** yamamoto has joined #openstack-lbaas03:37
*** armax_ has joined #openstack-lbaas03:43
*** armax has quit IRC03:43
*** armax_ is now known as armax03:43
*** annp has joined #openstack-lbaas03:49
*** openstackgerrit has joined #openstack-lbaas04:09
openstackgerrit: Michael Johnson proposed openstack/octavia-dashboard master: Update the release notes for Queens
*** links has joined #openstack-lbaas04:09
*** harlowja has joined #openstack-lbaas04:28
*** kobis has joined #openstack-lbaas05:15
*** harlowja has quit IRC05:17
*** kobis has quit IRC05:41
*** sanfern has quit IRC05:54
*** sanfern has joined #openstack-lbaas05:55
*** ianychoi has quit IRC06:08
*** ianychoi has joined #openstack-lbaas06:09
*** gcheresh has joined #openstack-lbaas06:09
*** yboaron has joined #openstack-lbaas06:12
*** rcernin has quit IRC06:14
*** kobis has joined #openstack-lbaas06:20
*** AlexeyAbashkin has joined #openstack-lbaas06:22
*** AlexeyAbashkin has quit IRC06:30
*** kobis has quit IRC06:40
*** armax has quit IRC06:45
*** armax has joined #openstack-lbaas06:46
*** armax has quit IRC06:46
*** armax has joined #openstack-lbaas06:46
*** armax has quit IRC06:47
*** armax has joined #openstack-lbaas06:47
*** armax has quit IRC06:47
*** armax has joined #openstack-lbaas06:48
*** armax has quit IRC06:48
*** armax has joined #openstack-lbaas06:49
*** armax has quit IRC06:49
*** armax has joined #openstack-lbaas06:49
*** armax has quit IRC06:50
openstackgerrit: OpenStack Proposal Bot proposed openstack/octavia-dashboard master: Imported Translations from Zanata
*** threestrands_ has quit IRC07:02
*** kobis has joined #openstack-lbaas07:05
*** kobis has quit IRC07:26
*** sanfern has quit IRC07:29
*** pcaruana has joined #openstack-lbaas07:55
*** kobis has joined #openstack-lbaas07:59
*** yboaron has quit IRC08:00
*** bbzhao has quit IRC08:02
*** bbzhao has joined #openstack-lbaas08:03
*** AlexeyAbashkin has joined #openstack-lbaas08:04
*** tesseract has joined #openstack-lbaas08:20
*** Alex_Staf has joined #openstack-lbaas09:08
*** yboaron has joined #openstack-lbaas09:09
*** yboaron_ has joined #openstack-lbaas09:16
*** yboaron has quit IRC09:16
*** yamamoto has quit IRC10:04
*** salmankhan has joined #openstack-lbaas10:15
*** cgoncalves has quit IRC10:18
*** cgoncalves has joined #openstack-lbaas10:24
*** bbzhao has quit IRC10:26
*** salmankhan has quit IRC10:35
*** salmankhan has joined #openstack-lbaas10:39
*** annp has quit IRC10:46
openstackgerrit: Ganpat Agarwal proposed openstack/octavia master: Active-Active: ExaBGP distributor driver
*** yamamoto has joined #openstack-lbaas11:05
*** yamamoto has quit IRC11:18
*** yamamoto has joined #openstack-lbaas11:31
*** salmankhan has quit IRC11:32
*** salmankhan has joined #openstack-lbaas11:34
*** sapd_ has quit IRC11:45
*** eN_Guruprasad_Rn has joined #openstack-lbaas11:57
*** AlexeyAbashkin has quit IRC11:58
*** AlexeyAbashkin has joined #openstack-lbaas11:58
*** sapd_ has joined #openstack-lbaas11:58
*** tesseract has quit IRC12:24
*** yamamoto has quit IRC12:31
*** yamamoto has joined #openstack-lbaas12:31
*** yamamoto has quit IRC12:52
*** yamamoto has joined #openstack-lbaas13:07
*** tesseract has joined #openstack-lbaas13:27
*** eN_Guruprasad_Rn has quit IRC13:31
*** salmankhan has quit IRC13:40
*** yamamoto has quit IRC13:42
*** tesseract has quit IRC13:42
*** tesseract has joined #openstack-lbaas13:42
*** salmankhan has joined #openstack-lbaas13:59
*** dougwig has joined #openstack-lbaas14:00
*** yamamoto has joined #openstack-lbaas14:02
*** Supun has joined #openstack-lbaas14:14
*** tesseract has quit IRC14:17
*** tesseract has joined #openstack-lbaas14:18
*** salmankhan has quit IRC14:25
-openstackstatus- NOTICE: We're currently experiencing issues with the server which will result in POST_FAILURE for jobs, please stand by and don't needlessly recheck jobs while we troubleshoot the problem.14:26
*** fnaval has joined #openstack-lbaas14:26
*** dmellado has quit IRC14:32
*** dmellado has joined #openstack-lbaas14:39
*** dmellado has quit IRC14:39
*** dmellado has joined #openstack-lbaas14:41
*** dmellado has quit IRC14:42
*** dmellado has joined #openstack-lbaas14:43
*** dmellado has quit IRC14:44
*** salmankhan has joined #openstack-lbaas14:46
*** dmellado has joined #openstack-lbaas14:53
*** dmellado has quit IRC14:53
*** salmankhan has quit IRC14:58
*** gcheresh has quit IRC14:59
*** dmellado has joined #openstack-lbaas15:01
*** dmellado has quit IRC15:01
*** dmellado has joined #openstack-lbaas15:03
*** dmellado has quit IRC15:03
*** salmankhan has joined #openstack-lbaas15:05
*** dmellado has joined #openstack-lbaas15:19
*** pcaruana has quit IRC15:20
*** armax has joined #openstack-lbaas15:22
*** dmellado has quit IRC15:25
*** dmellado has joined #openstack-lbaas15:26
*** kbyrne has quit IRC15:27
*** dmellado has quit IRC15:27
*** dmellado has joined #openstack-lbaas15:29
*** dmellado has quit IRC15:29
*** numans has quit IRC15:34
*** numans has joined #openstack-lbaas15:34
*** armax_ has joined #openstack-lbaas16:01
-openstackstatus- NOTICE: is stabilized and there should no longer be *new* POST_FAILURE errors. Logs for jobs that ran in the past weeks until earlier today are currently unavailable pending FSCK completion. We're going to temporarily disable *successful* jobs from uploading their logs to reduce strain on our current limited capacity. Thanks for your patience !16:01
*** armax has quit IRC16:03
*** armax_ is now known as armax16:03
johnsom: We got lucky getting our release in before this....16:06
*** kobis has quit IRC16:08
yboaron_: Hi Octavia experts16:25
yboaron_: I'm trying to debug packet loss in a loadbalancer with an L7 policy/L7 rule in a devstack environment16:25
yboaron_: I'm still using octavia via Neutron LbaasV2.16:25
yboaron_: I can see that packets reach the amphora (tcpdump in the ha-proxy namespace)16:26
yboaron_: But the amphora sends a TCP RESET for the relevant connection request16:26
yboaron_: Are there any counters that I can check to analyze why packets are being dropped/rejected?16:26
johnsom: Hi yboaron_16:26
yboaron_: hi johnsom16:26
johnsom: Are you seeing any haproxy log entries in the amphora?16:26
yboaron_: johnsom, checking it ..16:27
johnsom: Should be /var/log/haproxy.log16:27
johnsom: A RST being sent means to me that for some reason the listener is not configured on that load balancer16:28
yboaron_: johnsom, my devstack VM suddenly decided to 'die'16:28
yboaron_: johnsom, I16:28
johnsom: Ouch, ok.  Maybe we work through debugging when you get it all back up16:28
yboaron_: johnsom, I'll try to recover it , and ping U16:29
yboaron_: johnsom, thanks a lot16:29
johnsom: Sure, NP16:29
*** yboaron_ has quit IRC16:34
*** openstackstatus has quit IRC16:41
*** openstackstatus has joined #openstack-lbaas16:43
*** ChanServ sets mode: +v openstackstatus16:43
*** links has quit IRC16:57
*** tesseract has quit IRC16:58
*** slaweq has joined #openstack-lbaas17:00
*** slaweq has quit IRC17:02
*** kobis has joined #openstack-lbaas17:10
*** kobis has quit IRC17:13
*** kobis has joined #openstack-lbaas17:18
*** dmellado has joined #openstack-lbaas17:23
*** Alex_Staf has quit IRC17:26
*** kobis has quit IRC17:26
*** kobis has joined #openstack-lbaas17:26
*** kobis has quit IRC17:27
*** kobis has joined #openstack-lbaas17:34
*** kobis has quit IRC17:40
*** dmellado has quit IRC17:40
*** yamamoto has quit IRC17:40
*** slaweq has joined #openstack-lbaas17:41
*** yamamoto has joined #openstack-lbaas17:43
*** yamamoto has quit IRC17:44
*** yamamoto has joined #openstack-lbaas17:44
*** slaweq has quit IRC17:46
*** slaweq has joined #openstack-lbaas17:54
*** AlexeyAbashkin has quit IRC17:59
*** slaweq has quit IRC18:02
*** SumitNaiksatam has joined #openstack-lbaas18:04
*** gcheresh has joined #openstack-lbaas18:29
*** yamamoto has quit IRC18:42
*** salmankhan has quit IRC18:57
openstackgerrit: Michael Johnson proposed openstack/python-octaviaclient master: Updates the command mapping to include LB stats
*** kobis has joined #openstack-lbaas19:11
*** SumitNaiksatam has quit IRC19:18
*** slaweq has joined #openstack-lbaas19:19
*** Bar__ has joined #openstack-lbaas19:19
Bar__: johnsom, hey19:22
johnsom: Bar_ Hi19:22
johnsom: Oops, Bar__19:22
Bar__: johnsom, I've logged in to see if I can convince you to approve my octavia patches...19:23
xgerman_: today isn’t the best day…19:23
cgoncalves: Bar__: find out first whether he's a beer or wine person ;)19:23
xgerman_: still in feature freeze19:23
Bar__: cgoncalves, haha19:24
*** AlexeyAbashkin has joined #openstack-lbaas19:24
Bar__: xgerman_, does this apply for feature freeze?19:24
Bar__: cgoncalves, you could vote too, by the way (your vote has been erased due to recent patch set updates)19:25
johnsom: Bar__ I think this does fall under the feature freeze as I *think* our multi-node gates use the multi-node local.conf file. I will have to look.  If not, we can probably consider this for merge.19:26
Bar__: johnsom, bourbon? hmm19:26
*** gcheresh has quit IRC19:26
johnsom: It's kind of a "documentation" type patch19:27
cgoncalves: johnsom, xgerman_, rm_work: what's your upgrade strategy with respect to running amphorae? calling the failover api is just half-way, if that much19:27
johnsom: cgoncalves Not sure I follow the "half-way" comment19:28
cgoncalves: Bar__: yeah, I wanted to re-vote but was not sure you were okay with it. Hengqing had uploaded new patchsets...19:28
johnsom: Short story: we have not declared any upgrade tags yet.19:28
*** AlexeyAbashkin has quit IRC19:28
johnsom: +1 same, not sure on the current state of that19:28
xgerman_: yep, failover is said strategy19:29
johnsom: Long story: we have manual processes that provide a successful upgrade process, we just don't have gates yet.19:29
johnsom: In general it is this process:19:29
johnsom: 1. Update the DB using octavia-db-manage19:29
cgoncalves: johnsom: one needs to keep track of which amps have already been upgraded, which version they are running (pre-/post-upgrade), etc19:29
johnsom: 2. Update the controllers, ideally taking one out of service at a time.19:30
Bar__: cgoncalves, he has rolled back his changes, so it reflects my changes to a tee19:30
johnsom: 3. Build a new amphora image with the updates and upload it to glance, moving the "amphora" tag to the new image19:30
johnsom: 4. Use the LB failover API to upgrade the amphora19:31
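[The four-step flow johnsom describes could be sketched roughly as below. This is a hypothetical illustration, not an official upgrade script: the image names, `NEW_IMAGE_FILE`, and the function name are placeholders, and step 2 is deployment-tool specific so it is only noted in a comment.]

```shell
# Hedged sketch of the four-step Octavia upgrade described above.
# Assumes the octavia OSC plugin is installed; names are illustrative.
upgrade_octavia() {
    # 1. Migrate the Octavia database schema
    octavia-db-manage upgrade head

    # 2. Update/restart the controllers one at a time
    #    (deployment-tool specific, elided here)

    # 3. Upload the new amphora image and move the "amphora" tag to it
    openstack image create amphora-x64-haproxy-new \
        --disk-format qcow2 --container-format bare \
        --file "$NEW_IMAGE_FILE" --tag amphora
    openstack image unset --tag amphora amphora-x64-haproxy-old

    # 4. Rotate each running amphora onto the new image via failover
    for lb in $(openstack loadbalancer list -f value -c id); do
        openstack loadbalancer failover "$lb"
    done
}
```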
cgoncalves: Bar__: ok. I'll revisit it19:31
cgoncalves: johnsom: so you upgrade them all at once? never skipping image versions?19:32
johnsom: Well, you can do some CLI tricks to merge the output of the amphora list API and the nova API to get the image IDs19:32
johnsom: Sure, you can skip versions.  Existing amps won't break19:32
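[One hypothetical shape of the CLI trick johnsom mentions: join the amphora list with nova's view of each compute instance. Sketch only, not an official tool; the function name and the column choices assume the octavia and nova OSC plugins are available.]

```shell
# List each amphora with the Glance image its Nova instance was booted
# from, so amps running outdated images can be spotted. Illustrative only.
list_amp_images() {
    openstack loadbalancer amphora list -f value -c id -c compute_id |
    while read -r amp_id compute_id; do
        image=$(openstack server show "$compute_id" -f value -c image)
        printf '%s %s\n' "$amp_id" "$image"
    done
}
```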
Bar__: johnsom, can you think of any reason why there are no core reviews in the osc-lib repo for more than a week?19:33
johnsom: Bar__ Ummm, no, have you asked in openstack-sdks?  I have seen Dean commenting in there about OSC type things.19:34
*** KeithMnemonic has quit IRC19:34
cgoncalves: johnsom: ok, so operators are redeploying existing load balancers with new images19:35
Bar__: johnsom, ok, I'll try. Although I have his vote already.19:35
johnsom: Bar__ Here is the list of cores:,members19:35
johnsom: You can also try adding those folks to the "reviews" list on the patch19:35
johnsom: That usually sends an e-mail19:35
cgoncalves: johnsom: I am wondering if doing in-place upgrades would be something worth considering to lower e.g. upgrade time, possible downtime, and bandwidth19:36
johnsom: cgoncalves Umm, I wouldn't say redeploy, they don't have to delete and re-create the LBs, it's just a failover process.19:36
cgoncalves: johnsom: right. although that would require additional resources from the platform, even if temporary19:37
johnsom: cgoncalves We need to be careful with terminology a bit so we all don't get confused. The above process is an "in place upgrade" and depending on the topology of the LB, is zero downtime.19:38
johnsom: cgoncalves Yes, it means there may be one additional amphora during the failover process.19:38
johnsom: If the cloud can't handle one more amp, it's in HA trouble anyway....19:39
johnsom: cgoncalves Are you advocating for package upgrades inside the amp?19:40
cgoncalves: johnsom: not advocating, no. exploring all possible options and weighing them :)19:40
*** slaweq has quit IRC19:41
cgoncalves: as part of our product, together with the octavia service we will also support the amphora image, with regular images being pushed to glance and the tag moved19:43
*** yamamoto has joined #openstack-lbaas19:43
cgoncalves: johnsom: as such, it would be important to have a way to offer upgrades of running amphorae19:44
*** harlowja has joined #openstack-lbaas19:44
johnsom: cgoncalves Ok, sure.  We don't do anything explicit to block an operator from enabling that, but there are some considerations.19:45
johnsom: 1. Amps don't typically have a route to anywhere that would have a repo. An operator would have to enable that.19:45
johnsom: 2. Most important updates still require a reboot to replace kernels and/or libraries that are in use. This is currently not functional if you are using TLS offload as the encrypted RAMFS gets wiped. Though I hope we can fix that.19:45
johnsom: 3. Even if the patches do not require a reboot, they will restart services, which means an outage to data plane traffic. Most folks would prefer to control/schedule those.19:45
johnsom: 4. It can mean that you are running an un-tested combination of code depending on how it is done. The image method allows a testing process.19:45
cgoncalves: one where the operator could see which amps are running on outdated images19:45
johnsom: cgoncalves Yes, this is why we added the failover API and commands. Currently you can see the outdated images, it just takes some CLI magic.  I commented on your RFE in support of making that "easier" in one way or another.19:46
johnsom: Oh, wait, it might have been Alex's RFE.  Someone put one in storyboard....  grin19:47
cgoncalves: johnsom: ok (and Alex's RFE :))19:47
cgoncalves: I also asked some Sahara folks how they deal with their cluster VM upgrades -- they don't19:48
johnsom: If you have interest, I would super support you working on an "upgrade"/"gernade" gate to start testing upgrades. This is the next step in declaring we are upgrade-able with a governance tag.19:48
cgoncalves: operators have to build new images, upload to glance and respawn new clusters19:49
johnsom: Yeah, trove has a similar "hands on" approach19:49
johnsom: I think we have the best tools so far19:49
cgoncalves: johnsom: I'm a total stranger to grenade atm :)19:50
johnsom: Yeah, me too19:50
johnsom: Heck, I even typed it wrong.... grin19:50
cgoncalves: let me go back to Alex and in-flight build and delivery of amp images19:51
johnsom: and
cgoncalves: *and check next steps of ...19:52
johnsom: The upgrade tags we aspire to are all listed here:
*** reedip has quit IRC19:52
johnsom: This kind of sums up the state of upgrades in OpenStack: "Note No projects are using assert:supports-zero-downtime-upgrade, yet."19:53
*** yboaron_ has joined #openstack-lbaas19:53
cgoncalves: heh :)19:53
cgoncalves: johnsom: thanks for your feedback!19:54
johnsom: Yep, super support working on upgrades.19:54
*** yamamoto has quit IRC19:55
*** Supun has quit IRC19:58
*** kobis has quit IRC19:58
cgoncalves: Bar__: you don't want to bring that back?19:58
Bar__: cgoncalves, it's in a separate patch now.19:59
Bar__: look for related changes19:59
cgoncalves: got it20:00
Bar__: cgoncalves, thanks!20:01
cgoncalves: Bar__: you're welcome! thank YOU for the patches20:02
cgoncalves: off for today. 9pm and late for dinner20:02
Bar__: cgoncalves, cya20:02
Bar__: johnsom, I cannot call a devstack function before I clone it, and I cannot clone a repo prior to installing git20:04
*** reedip has joined #openstack-lbaas20:05
johnsom: wget and tar? Grin20:05
johnsom: Yeah, hmm. This same script does expect some packages to be installed20:06
johnsom: Frankly it probably should not install packages but just run with error handling20:08
Bar__: johnsom, so you assume git is already installed?20:10
Bar__: (not sure what you meant by 'error handling' since we're talking about shell scripts)20:11
*** slaweq has joined #openstack-lbaas20:14
johnsom: Yeah, you can temporarily disable "halt on error" and check $? to see if the command ran, then print a useful error message20:16
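[A minimal POSIX-shell sketch of the pattern johnsom describes; the function name `require_cmd` is purely illustrative. A devstack-style script could call `require_cmd git` before attempting to clone.]

```shell
# Temporarily disable "halt on error" (set -e), probe for a command,
# then restore strict mode and print a useful message instead of
# dying silently mid-script.
require_cmd() {
    set +e                               # allow the probe to fail
    command -v "$1" >/dev/null 2>&1
    status=$?
    set -e                               # restore halt-on-error
    if [ "$status" -ne 0 ]; then
        echo "ERROR: '$1' is required but not installed" >&2
        return 1
    fi
}
```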
Bar__: johnsom, one more thing. In the configuration patch. Do you think I should have updated too?20:18
johnsom: Bar__ No, those plugins only need to run on the controller node20:19
johnsom: We aren't testing if neutron can run two controllers for example20:20
johnsom: Just if Octavia can20:20
Bar__: johnsom, but this addition is more than a mere plug-in, it reconfigures ml2 e.g. via neutron/>>configure qos20:21
johnsom: Yeah, I was just thinking about that.  I'm not sure if the enable service for q-qos is needed or not20:21
johnsom: I think it probably is20:22
johnsom: Just not sure20:22
johnsom: Do you know where in the networking guide you need to look it up?20:22
johnsom: Should be somewhere in there20:23
johnsom: Yep, there is a "On the network and compute nodes" section (no perm link though)20:24
johnsom: Assuming the multi-node setup uses the second node as a compute host and not just our controllers, which I think is the case20:26
*** slaweq has quit IRC20:28
*** salmankhan has joined #openstack-lbaas20:29
*** kobis has joined #openstack-lbaas20:31
Bar__: johnsom, the current behavior of neutron/ is as if it were a controller node20:33
Bar__: not to say it is a problem, just an observation.20:33
*** salmankhan has quit IRC20:34
*** slaweq has joined #openstack-lbaas20:37
yboaron_: ping johnsom20:43
johnsom: yboaron_ Hi20:43
yboaron_: johnsom, I've managed to recover my devstack env ,  are u available for 10 min ?20:44
johnsom: Sure, NP.  Just finished lunch20:45
yboaron_: johnsom, enjoy your lunch ! ping me when u r available20:46
johnsom: yboaron_ I just finished, so I am available20:46
yboaron_: johnsom, great , So I'm having packet loss when trying an http get through a loadbalancer with an l7 policy (host name equal =XXXX)20:48
johnsom: Just to bring me up to speed, does every packet get a TCP RST? Or just some?20:49
yboaron_: johnsom, I have a loadbalancer with an l7policy attached to a listener on port 80 , the L7 policy directs the traffic (with hostname=XXX) to a pool with two members20:49
yboaron_: johnsom, just some20:49
johnsom: What is the LB topology, standalone or active/standby?20:49
johnsom: Ah, yes, the story:!/story/200142520:50
yboaron_: johnsom, I'll copy the relevant part from tcpdump and send u the link20:50
johnsom: It's on the priority etherpad "Bugs that should be investigated for Queens" list20:50
yboaron_: johnsom, yes that's it  , I thought you might point to something ...20:51
johnsom: Are you using a floating IP for this test? Have you tried without the floating IP?20:51
yboaron_: Yes , I'm using FIP , but the packet reached the VIP  IP20:52
yboaron_: so I don't think it's related to FIP20:52
johnsom: Any chance you can capture a pcap file for me?
johnsom: Also, we were going to look in the haproxy.log file for the request that failed20:54
yboaron_: johnsom, I'll try to do it , meanwhile  I just pasted the tcpdump of a successful and a failed HTTP GET #link
rm_work: cgoncalves / johnsom: I was actually just about to roll a patch that adds the glance image name to the amp table when they're created20:55
rm_work: because i am dealing with this internally as well20:55
rm_work: and it seems logical to do20:55
*** kobis has quit IRC20:55
rm_work: or rather20:55
rm_work: amp image ID20:55
johnsom: ID I hope20:55
rm_work: yeah sorry, meant ID not name20:55
rm_work: since THAT will not change for the amp20:56
rm_work: and it's trivial20:56
rm_work: any objections?20:56
rm_work: should help cgoncalves as well20:56
johnsom: No, I think that is fine20:56
rm_work: i would have done it like two weeks ago but shit has been whack, to use the technical term20:56
xgerman_: yeah, while you are there allow filtering by image id20:56
johnsom: Can't land until Rocky though20:56
rm_work: "can't land"20:56
* rm_work runs a lot of patches ahead of merging20:57
johnsom: Like a wild man... grin20:57
rm_work: I'm like a bleeding-edge hipster21:00
rm_work: "Oh, you run git-master? I ran those patches BEFORE they were merged."21:00
johnsom: yboaron_ Yeah, that tcpdump is easier to read.  I think the haproxy.log is going to be important.21:00
* johnsom Thanks rm_work for his alpha testing21:00
rm_work: yes, this WOULD explain why I always run into these issues before anyone else notices <_<21:01
*** kobis has joined #openstack-lbaas21:03
*** gcheresh has joined #openstack-lbaas21:09
*** kobis has quit IRC21:12
yboaron_: johnsom, you can find ~ 15 lines of haproxy.log  at the following link : #link , please let me know if u still need the pcap file and the entire haproxy.log file21:20
johnsom: Ok, looking21:21
johnsom: S = the TCP session was unexpectedly aborted by the server, or the21:27
            server explicitly refused it.21:27
johnsom: C = the proxy was waiting for the CONNECTION to establish on the21:27
            server. The server might at most have noticed a connection attempt.21:27
johnsom: yboaron_ It appears from that log that only two requests were serviced and the rest had a problem with the backend server21:27
yboaron_: johnsom, what do you mean by 'problem with the backend server' ?21:28
johnsom: yboaron_ For incoming requests, haproxy has a "frontend" that terminates the incoming request and a "backend" that establishes a connection to the backend server(s) (typically web servers).  This backend connection to the web server is failing on most of those requests.21:29
johnsom: yboaron_ Can you go into /var/lib/octavia/<uuid>/haproxy.cfg and get the "backend 1d52c2c1-2d26-420b-825e-966af84cdcb6" section?21:30
yboaron_: johnsom, sure21:30
yboaron_: johnsom, once again I'm having problems connecting to my VM , sorry21:33
johnsom: yboaron_ Could this be related?21:34
yboaron_: johnsom, I'll get it and ping ASAP21:34
yboaron_: johnsom, don't think so , when I'm running a 'standard' loadbalancer with no L7policy - all works fine21:34
johnsom: I just about have a test VM set up to try to reproduce21:35
yboaron_: johnsom, my guess , it's related somehow to the l7policy/rule21:35
yboaron_: johnsom, that will be great21:35
johnsom: yboaron_ When you get back in, can you do a "openstack loadbalancer member list"?21:43
yboaron_: johnsom, sure21:43
*** gcheresh has quit IRC21:44
yboaron_: johnsom, [stack@ghghghgh devstack]$ openstack loadbalancer member list 63bba235-d918-4163-a5a2-1e8aeb00595d21:47
yboaron_: | id                                   | name | project_id                       | provisioning_status | address   | protocol_port | operating_status | weight |21:47
yboaron_: | deed8a2a-2133-415e-a93f-ae316a5e486f |      | de934d9614dd4e9fa3c669e99df8eb85 | ACTIVE              | |          8080 | NO_MONITOR       |      1 |21:47
yboaron_: | 71ff3979-d6ff-45d6-b196-38bc05454b0a |      | de934d9614dd4e9fa3c669e99df8eb85 | ACTIVE              | |          8080 | NO_MONITOR       |      1 |21:47
yboaron_: [stack@ghghghgh devstack]$21:47
rm_work: xgerman_: were you trying to talk to me? :P21:47
xgerman_: no, wrong window21:47
rm_work: so close21:48
johnsom: yboaron_ Ok, so with "NO_MONITOR" that means there is no health monitor, which means the load balancer will send requests to backend servers that are down.21:48
xgerman_: yeah, I know… I keep looking in my chat window while doing some shell stuff21:48
johnsom: yboaron_ I would add a health monitor to the pool and I bet you will see that one of the backend servers is down or not responding to requests.21:48
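[For reference, a health monitor like the one johnsom suggests could be created along these lines. A sketch only: the delay/timeout values, URL path, and function name are illustrative, and the pool UUID is a placeholder.]

```shell
# Attach an HTTP health monitor so the pool stops sending traffic to
# dead members; "$1" is the pool's UUID (placeholder).
add_http_healthmonitor() {
    openstack loadbalancer healthmonitor create \
        --delay 5 --timeout 3 --max-retries 3 \
        --type HTTP --url-path / "$1"
}
```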
yboaron_: johnsom, OK , will do it , BTW i couldn't find any '"backend 1d52c2c1-2d26-420b-825e-966af84cdcb6' in all haproxy.cfg files21:49
*** slaweq has quit IRC22:04
*** slaweq has joined #openstack-lbaas22:05
*** yamamoto has joined #openstack-lbaas22:05
johnsom: Jan 25 22:04:09 amphora-6ee2be0a-ff84-4389-bf22-3394317828c3 kernel: [ 1237.753106] A link change request failed with some changes committed already. Interface ens6 may have been left with an inconsistent configuration, please check.22:06
yboaron_: johnsom, added the health monitor - members seem to be online although  traffic failed22:11
johnsom: member list now?22:11
yboaron_: [stack@ghghghgh devstack]$ openstack loadbalancer member list 63bba235-d918-4163-a5a2-1e8aeb00595d22:12
yboaron_: | id                                   | name | project_id                       | provisioning_status | address   | protocol_port | operating_status | weight |22:12
yboaron_: | deed8a2a-2133-415e-a93f-ae316a5e486f |      | de934d9614dd4e9fa3c669e99df8eb85 | ACTIVE              | |          8080 | ONLINE           |      1 |22:12
yboaron_: | 71ff3979-d6ff-45d6-b196-38bc05454b0a |      | de934d9614dd4e9fa3c669e99df8eb85 | ACTIVE              | |          8080 | ONLINE           |      1 |22:12
yboaron_: [stack@ghghghgh devstack]$22:12
johnsom: yboaron_ Here is my result:22:16
*** slaweq has quit IRC22:19
yboaron_: johnsom, 100% success , cool , did u define the l7 rule with HOSTNAME equal to 'xxxxx' ?22:19
*** slaweq has joined #openstack-lbaas22:19
johnsom: yboaron_ The second pastebin is the commands I ran to create the LB22:19
johnsom: Tried to recreate it per your story22:20
yboaron_: johnsom, what did u use for connecting the private subnet and 172.21.xx (the members ) , a neutron router ?22:23
yboaron_: johnsom, Did u use  neutron LBAASV2 with the octavia provider or the octavia client ?22:25
johnsom: I do not use neutron-lbaas22:25
johnsom: I only run native octavia22:25
yboaron_: johnsom, ooopssss22:25
johnsom: That should not matter though, in this case22:26
johnsom: Here is my haproxy.cfg22:27
johnsom: I didn't set a health monitor because I knew my backends were healthy and you didn't in your story.22:27
yboaron_: johnsom, I'll take a look , well thanks a lot , I'll continue digging22:30
johnsom: Ok, poke around.  It looks to me like a backend server is failing on you.22:30
johnsom: Let me know if you do not find the issue22:30
yboaron_: johnsom, I'm almost sure that the backend is fine , cause when HTTPing without the L7policy all is fine22:31
johnsom: Check that your haproxy config matches mine (basically)22:32
yboaron_: johnsom, will do it , once again thank u very much for the time and effort .22:33
johnsom: Sure, NP22:33
yboaron_: johnsom, I'm back again - this is my haproxy.cfg #link
yboaron_: johnsom, seems that something is missing compared to your ha-proxy file22:39
johnsom: yboaron_ That is the wrong haproxy.cfg22:40
johnsom: yboaron_ /var/lib/octavia/0a1fd67f-13d9-415d-ab4d-7ae93e12f225/haproxy.cfg Where the UUID is your UUID not mine22:41
johnsom: That is a sample file for an haproxy in front of the API controllers used in a gate test.  Not used in the amp22:43
openstackgerrit: Bar Elharar proposed openstack/python-octaviaclient master: Improve unit tests results with subTest
yboaron_: johnsom, this is  the right one #link
yboaron_: johnsom, I can see that at yours file mode is HTTP , mine mode=TCP ....22:48
yboaron_: johnsom,  yours/your22:48
johnsom: yboaron_ Yes, that looks like the right config.  However, I do notice something strange. You have it configured for TCP mode but you are using HTTP L7 rules.  This *should* work, but is a very odd configuration.22:48
yboaron_: johnsom, I'll change it to HTTP and recheck22:50
openstackgerrit: Bar Elharar proposed openstack/python-octaviaclient master: Improve unit tests results with subTest
johnsom: yboaron_ I will try TCP too out of curiosity22:50
johnsom: yboaron_ Ha, yes, that reproduces the problem22:52
johnsom: Looks like HAProxy has a bug there, though I have to admit, it's kind of an invalid configuration combination.22:53
yboaron_: johnsom, yes that's correct , for an L7policy - we should configure the LB for HTTP22:54
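[The fix discussed here, creating the listener and pool with the HTTP protocol so L7 policies can match, could look roughly like the following. Hedged sketch: the resource names, hostname value, and function name are illustrative, not johnsom's exact pastebin commands.]

```shell
# Create an HTTP listener, pool, L7 policy, and HOST_NAME rule on an
# existing load balancer ("$1", a placeholder UUID or name). With the
# listener in TCP mode instead, HAProxy may RST requests as seen above.
create_l7_setup() {
    lb="$1"
    openstack loadbalancer listener create --name listener1 \
        --protocol HTTP --protocol-port 80 "$lb"
    openstack loadbalancer pool create --name pool1 \
        --protocol HTTP --lb-algorithm ROUND_ROBIN --loadbalancer "$lb"
    openstack loadbalancer l7policy create --name policy1 \
        --action REDIRECT_TO_POOL --redirect-pool pool1 listener1
    openstack loadbalancer l7rule create --type HOST_NAME \
        --compare-type EQUAL_TO --value www.example.com policy1
}
```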
johnsom: So with your configuration the pool goes offline22:55
johnsom: It is reported that there is a problem22:55
johnsom: Yep, everything works fine, but haproxy takes the pool offline22:57
yboaron_: johnsom, my pool is online #link
*** armax has quit IRC23:00
johnsom: Could be the health monitor overriding it since all of the members are healthy.  I will check that code23:00
yboaron_: johnsom, Cool , I'll change my configuration to HTTP , and recheck23:03
yboaron_: johnsom, I'll keep u posted23:03
yboaron_: johnsom, Bye23:05
*** slaweq has quit IRC23:06
*** yboaron_ has quit IRC23:09
*** rstarmer has joined #openstack-lbaas23:12
*** rstarmer has quit IRC23:24
openstackgerrit: Bar Elharar proposed openstack/octavia master: Update configuration samples
openstackgerrit: Bar Elharar proposed openstack/octavia master: Make OS agnostic
*** rstarmer has joined #openstack-lbaas23:25
*** ianychoi has quit IRC23:26
*** ianychoi has joined #openstack-lbaas23:27
johnsom: Yeah, full run-down there: since it's in TCP mode it still tries to parse for the http host header, but fragmentation/timing may mean the TCP matcher doesn't see the whole header, so it ends up not matching the L7 rule. Since the listener is in TCP mode it can't return the 503 (no servers since there is no default pool) so it just sends an RST.23:28
*** armax has joined #openstack-lbaas23:31
*** Alex_Staf has joined #openstack-lbaas23:32
*** fnaval has quit IRC23:39
openstackgerrit: Bar Elharar proposed openstack/octavia master: Update configuration samples (QoS)
openstackgerrit: Bar Elharar proposed openstack/octavia master: Make OS agnostic
*** Bar__ has quit IRC23:40
*** kbyrne has joined #openstack-lbaas23:50

Generated by 2.15.3 by Marius Gedminas - find it at!