*** rstarmer has joined #openstack-lbaas | 00:02 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 00:11 | |
*** AlexeyAbashkin has quit IRC | 00:15 | |
*** jniesz has quit IRC | 00:53 | |
*** rstarmer has quit IRC | 01:03 | |
openstackgerrit | Michael Johnson proposed openstack/octavia-dashboard master: Setup octavia-dashboard for Queens release https://review.openstack.org/537705 | 01:39 |
*** sanfern has joined #openstack-lbaas | 02:39 | |
*** ianychoi has quit IRC | 02:49 | |
*** ianychoi has joined #openstack-lbaas | 02:49 | |
*** dougwig has quit IRC | 03:13 | |
*** harlowja has quit IRC | 03:21 | |
*** armax has joined #openstack-lbaas | 03:24 | |
*** openstackgerrit has quit IRC | 03:33 | |
*** yamamoto has joined #openstack-lbaas | 03:37 | |
*** armax_ has joined #openstack-lbaas | 03:43 | |
*** armax has quit IRC | 03:43 | |
*** armax_ is now known as armax | 03:43 | |
*** annp has joined #openstack-lbaas | 03:49 | |
*** openstackgerrit has joined #openstack-lbaas | 04:09 | |
openstackgerrit | Michael Johnson proposed openstack/octavia-dashboard master: Update the release notes for Queens https://review.openstack.org/537702 | 04:09 |
*** links has joined #openstack-lbaas | 04:09 | |
*** harlowja has joined #openstack-lbaas | 04:28 | |
*** kobis has joined #openstack-lbaas | 05:15 | |
*** harlowja has quit IRC | 05:17 | |
*** kobis has quit IRC | 05:41 | |
*** sanfern has quit IRC | 05:54 | |
*** sanfern has joined #openstack-lbaas | 05:55 | |
*** ianychoi has quit IRC | 06:08 | |
*** ianychoi has joined #openstack-lbaas | 06:09 | |
*** gcheresh has joined #openstack-lbaas | 06:09 | |
*** yboaron has joined #openstack-lbaas | 06:12 | |
*** rcernin has quit IRC | 06:14 | |
*** kobis has joined #openstack-lbaas | 06:20 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 06:22 | |
*** AlexeyAbashkin has quit IRC | 06:30 | |
*** kobis has quit IRC | 06:40 | |
*** armax has quit IRC | 06:45 | |
*** armax has joined #openstack-lbaas | 06:46 | |
*** armax has quit IRC | 06:46 | |
*** armax has joined #openstack-lbaas | 06:46 | |
*** armax has quit IRC | 06:47 | |
*** armax has joined #openstack-lbaas | 06:47 | |
*** armax has quit IRC | 06:47 | |
*** armax has joined #openstack-lbaas | 06:48 | |
*** armax has quit IRC | 06:48 | |
*** armax has joined #openstack-lbaas | 06:49 | |
*** armax has quit IRC | 06:49 | |
*** armax has joined #openstack-lbaas | 06:49 | |
*** armax has quit IRC | 06:50 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/octavia-dashboard master: Imported Translations from Zanata https://review.openstack.org/537765 | 06:52 |
*** threestrands_ has quit IRC | 07:02 | |
*** kobis has joined #openstack-lbaas | 07:05 | |
*** kobis has quit IRC | 07:26 | |
*** sanfern has quit IRC | 07:29 | |
*** pcaruana has joined #openstack-lbaas | 07:55 | |
*** kobis has joined #openstack-lbaas | 07:59 | |
*** yboaron has quit IRC | 08:00 | |
*** bbzhao has quit IRC | 08:02 | |
*** bbzhao has joined #openstack-lbaas | 08:03 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 08:04 | |
*** tesseract has joined #openstack-lbaas | 08:20 | |
*** Alex_Staf has joined #openstack-lbaas | 09:08 | |
*** yboaron has joined #openstack-lbaas | 09:09 | |
*** yboaron_ has joined #openstack-lbaas | 09:16 | |
*** yboaron has quit IRC | 09:16 | |
*** yamamoto has quit IRC | 10:04 | |
*** salmankhan has joined #openstack-lbaas | 10:15 | |
*** cgoncalves has quit IRC | 10:18 | |
*** cgoncalves has joined #openstack-lbaas | 10:24 | |
*** bbzhao has quit IRC | 10:26 | |
*** salmankhan has quit IRC | 10:35 | |
*** salmankhan has joined #openstack-lbaas | 10:39 | |
*** annp has quit IRC | 10:46 | |
openstackgerrit | Ganpat Agarwal proposed openstack/octavia master: Active-Active: ExaBGP distributor driver https://review.openstack.org/537842 | 10:51 |
*** yamamoto has joined #openstack-lbaas | 11:05 | |
*** yamamoto has quit IRC | 11:18 | |
*** yamamoto has joined #openstack-lbaas | 11:31 | |
*** salmankhan has quit IRC | 11:32 | |
*** salmankhan has joined #openstack-lbaas | 11:34 | |
*** sapd_ has quit IRC | 11:45 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 11:57 | |
*** AlexeyAbashkin has quit IRC | 11:58 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 11:58 | |
*** sapd_ has joined #openstack-lbaas | 11:58 | |
*** tesseract has quit IRC | 12:24 | |
*** yamamoto has quit IRC | 12:31 | |
*** yamamoto has joined #openstack-lbaas | 12:31 | |
*** yamamoto has quit IRC | 12:52 | |
*** yamamoto has joined #openstack-lbaas | 13:07 | |
*** tesseract has joined #openstack-lbaas | 13:27 | |
*** eN_Guruprasad_Rn has quit IRC | 13:31 | |
*** salmankhan has quit IRC | 13:40 | |
*** yamamoto has quit IRC | 13:42 | |
*** tesseract has quit IRC | 13:42 | |
*** tesseract has joined #openstack-lbaas | 13:42 | |
*** salmankhan has joined #openstack-lbaas | 13:59 | |
*** dougwig has joined #openstack-lbaas | 14:00 | |
*** yamamoto has joined #openstack-lbaas | 14:02 | |
*** Supun has joined #openstack-lbaas | 14:14 | |
*** tesseract has quit IRC | 14:17 | |
*** tesseract has joined #openstack-lbaas | 14:18 | |
*** salmankhan has quit IRC | 14:25 | |
-openstackstatus- NOTICE: We're currently experiencing issues with the logs.openstack.org server which will result in POST_FAILURE for jobs, please stand by and don't needlessly recheck jobs while we troubleshoot the problem. | 14:26 | |
*** fnaval has joined #openstack-lbaas | 14:26 | |
*** dmellado has quit IRC | 14:32 | |
*** dmellado has joined #openstack-lbaas | 14:39 | |
*** dmellado has quit IRC | 14:39 | |
*** dmellado has joined #openstack-lbaas | 14:41 | |
*** dmellado has quit IRC | 14:42 | |
*** dmellado has joined #openstack-lbaas | 14:43 | |
*** dmellado has quit IRC | 14:44 | |
*** salmankhan has joined #openstack-lbaas | 14:46 | |
*** dmellado has joined #openstack-lbaas | 14:53 | |
*** dmellado has quit IRC | 14:53 | |
*** salmankhan has quit IRC | 14:58 | |
*** gcheresh has quit IRC | 14:59 | |
*** dmellado has joined #openstack-lbaas | 15:01 | |
*** dmellado has quit IRC | 15:01 | |
*** dmellado has joined #openstack-lbaas | 15:03 | |
*** dmellado has quit IRC | 15:03 | |
*** salmankhan has joined #openstack-lbaas | 15:05 | |
*** dmellado has joined #openstack-lbaas | 15:19 | |
*** pcaruana has quit IRC | 15:20 | |
*** armax has joined #openstack-lbaas | 15:22 | |
*** dmellado has quit IRC | 15:25 | |
*** dmellado has joined #openstack-lbaas | 15:26 | |
*** kbyrne has quit IRC | 15:27 | |
*** dmellado has quit IRC | 15:27 | |
*** dmellado has joined #openstack-lbaas | 15:29 | |
*** dmellado has quit IRC | 15:29 | |
*** numans has quit IRC | 15:34 | |
*** numans has joined #openstack-lbaas | 15:34 | |
*** armax_ has joined #openstack-lbaas | 16:01 | |
-openstackstatus- NOTICE: logs.openstack.org is stabilized and there should no longer be *new* POST_FAILURE errors. Logs for jobs that ran in the past weeks until earlier today are currently unavailable pending FSCK completion. We're going to temporarily disable *successful* jobs from uploading their logs to reduce strain on our current limited capacity. Thanks for your patience ! | 16:01 | |
*** armax has quit IRC | 16:03 | |
*** armax_ is now known as armax | 16:03 | |
johnsom | We got lucky getting our release in before this.... | 16:06 |
*** kobis has quit IRC | 16:08 | |
yboaron_ | Hi Octavia experts | 16:25 |
yboaron_ | I’m trying to debug packet loss on a load balancer with an L7 policy/L7 rule in a devstack environment | 16:25 |
yboaron_ | I’m still using octavia via neutron LBaaSv2. | 16:25 |
yboaron_ | I can see that packets reach the amphora (tcpdump in the haproxy namespace) | 16:26 |
yboaron_ | But the amphora sends a TCP RST for the relevant connection request | 16:26 |
yboaron_ | Are there any counters that I can check to analyze why packets are being dropped/rejected? | 16:26 |
johnsom | Hi yboaron_ | 16:26 |
yboaron_ | hi johnsom | 16:26 |
johnsom | Are you seeing any haproxy log entries in the amphora? | 16:26 |
yboaron_ | johnsom, checking it .. | 16:27 |
johnsom | Should be /var/log/haproxy.log | 16:27 |
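For reference, a rough sketch of how the haproxy log and per-listener counters could be inspected from inside the amphora; the stats socket path is an assumption (check the "stats socket" line in the rendered haproxy.cfg) and socat may not be present by default:

    # Tail the haproxy log for the failing requests (default location per the discussion above)
    sudo tail -f /var/log/haproxy.log

    # Dump per-frontend/backend counters (sessions, denied, request/response errors) over the
    # haproxy admin socket; the socket path below is a placeholder taken from the "stats socket"
    # directive in /var/lib/octavia/<lb-uuid>/haproxy.cfg
    echo "show stat" | sudo socat stdio /var/lib/octavia/<lb-uuid>.sock
    echo "show info" | sudo socat stdio /var/lib/octavia/<lb-uuid>.sock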
johnsom | An RST being sent means to me that for some reason the listener is not configured on that load balancer | 16:28 |
yboaron_ | johnsom, my devstack VM suddenly decided to 'die' | 16:28 |
yboaron_ | johnsom, I | 16:28 |
johnsom | Ouch, ok. Maybe we can work through debugging when you get it all back up | 16:28 |
yboaron_ | johnsom, I'll try to recover it , and ping U | 16:29 |
yboaron_ | johnsom, thanks a lot | 16:29 |
johnsom | Sure, NP | 16:29 |
*** yboaron_ has quit IRC | 16:34 | |
*** openstackstatus has quit IRC | 16:41 | |
*** openstackstatus has joined #openstack-lbaas | 16:43 | |
*** ChanServ sets mode: +v openstackstatus | 16:43 | |
*** links has quit IRC | 16:57 | |
*** tesseract has quit IRC | 16:58 | |
*** slaweq has joined #openstack-lbaas | 17:00 | |
*** slaweq has quit IRC | 17:02 | |
*** kobis has joined #openstack-lbaas | 17:10 | |
*** kobis has quit IRC | 17:13 | |
*** kobis has joined #openstack-lbaas | 17:18 | |
*** dmellado has joined #openstack-lbaas | 17:23 | |
*** Alex_Staf has quit IRC | 17:26 | |
*** kobis has quit IRC | 17:26 | |
*** kobis has joined #openstack-lbaas | 17:26 | |
*** kobis has quit IRC | 17:27 | |
*** kobis has joined #openstack-lbaas | 17:34 | |
*** kobis has quit IRC | 17:40 | |
*** dmellado has quit IRC | 17:40 | |
*** yamamoto has quit IRC | 17:40 | |
*** slaweq has joined #openstack-lbaas | 17:41 | |
*** yamamoto has joined #openstack-lbaas | 17:43 | |
*** yamamoto has quit IRC | 17:44 | |
*** yamamoto has joined #openstack-lbaas | 17:44 | |
*** slaweq has quit IRC | 17:46 | |
*** slaweq has joined #openstack-lbaas | 17:54 | |
*** AlexeyAbashkin has quit IRC | 17:59 | |
*** slaweq has quit IRC | 18:02 | |
*** SumitNaiksatam has joined #openstack-lbaas | 18:04 | |
*** gcheresh has joined #openstack-lbaas | 18:29 | |
*** yamamoto has quit IRC | 18:42 | |
*** salmankhan has quit IRC | 18:57 | |
openstackgerrit | Michael Johnson proposed openstack/python-octaviaclient master: Updates the command mapping to include LB stats https://review.openstack.org/538005 | 19:08 |
*** kobis has joined #openstack-lbaas | 19:11 | |
*** SumitNaiksatam has quit IRC | 19:18 | |
*** slaweq has joined #openstack-lbaas | 19:19 | |
*** Bar__ has joined #openstack-lbaas | 19:19 | |
Bar__ | johnsom, hey | 19:22 |
johnsom | Bar_ Hi | 19:22 |
johnsom | Opps, Bar__ | 19:22 |
johnsom | grin | 19:22 |
Bar__ | johnsom, I've logged in to see if I can convince you to approve my octavia patches... | 19:23 |
Bar__ | :-) | 19:23 |
xgerman_ | today isn’t the best day… | 19:23 |
cgoncalves | Bar__: find out first whether he's a beer or wine person ;) | 19:23 |
xgerman_ | still in feature freeze | 19:23 |
Bar__ | cgoncalves, haha | 19:24 |
*** AlexeyAbashkin has joined #openstack-lbaas | 19:24 | |
Bar__ | xgerman_, does https://review.openstack.org/#/c/529861/ fall under the feature freeze? | 19:24 |
johnsom | Bourbon? | 19:24 |
Bar__ | cgoncalves, you could vote too, by the way (your vote has been erased due to recent patch set updates) | 19:25 |
johnsom | Bar__ I think this does fall under the feature freeze as I *think* our multi-node gates use the multi-node local.conf file. I will have to look. If not, we can probably consider this for merge. | 19:26 |
Bar__ | johnsom, bourbon? hmm | 19:26 |
*** gcheresh has quit IRC | 19:26 | |
johnsom | It's kind of a "documentation" type patch | 19:27 |
Bar__ | right | 19:27 |
cgoncalves | johnsom, xgerman_, rm_work: what's your upgrade strategy with respect to running amphorae? calling failover api is just half-way, if that much | 19:27 |
johnsom | cgoncalves Not sure I follow the "half-way" comment | 19:28 |
cgoncalves | Bar__: yeah, I wanted to re-vote but was not sure you were okay with it. Hengqing had uploaded new patchsets... | 19:28 |
johnsom | Short story: we have not declared any upgrade tags yet. | 19:28 |
*** AlexeyAbashkin has quit IRC | 19:28 | |
johnsom | +1 same, not sure on the current state of that | 19:28 |
xgerman_ | yep, failover is said strategy | 19:29 |
johnsom | Long story: we have manual processes that provide a successful upgrade process, we just don't have gates yet. | 19:29 |
johnsom | In general it is this process: | 19:29 |
johnsom | 1. Update the DB using octavia-db-manage | 19:29 |
cgoncalves | johnsom: one needs to keep track of which amps have already been upgraded, which version they are running (pre-/post-upgrade), etc. | 19:29 |
johnsom | 2. Update the controllers, ideally taking one out of service at a time. | 19:30 |
Bar__ | cgoncalves, he has rolled back his changes, so it reflects my changes to a tee | 19:30 |
johnsom | 3. Build a new amphora image with the updates and upload it to glance, moving the "amphora" tag to the new image | 19:30 |
johnsom | 4. Use the LB failover API to upgrade the amphora | 19:31 |
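A rough shell sketch of the four steps above, assuming the "amphora" image tag is what the controllers use to pick the boot image; image names and IDs are placeholders:

    # 1. Upgrade the database schema
    octavia-db-manage upgrade head
    # 2. Update the controllers one at a time (packaging specific), e.g. restart
    #    octavia-api, octavia-worker, octavia-health-manager and octavia-housekeeping
    # 3. Upload the newly built amphora image and move the "amphora" tag to it
    openstack image create --disk-format qcow2 --container-format bare --private \
        --file amphora-x64-haproxy.qcow2 amphora-x64-haproxy-new
    openstack image unset --tag amphora <old-image-id>
    openstack image set --tag amphora <new-image-id>
    # 4. Roll each load balancer onto the new image via the failover API
    openstack loadbalancer failover <lb-id>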
cgoncalves | Bar__: ok. I'll revisit it | 19:31 |
cgoncalves | johnsom: so you upgrade them all at once? never skipping image versions? | 19:32 |
johnsom | Well, you can do some CLI tricks to merge the output of the amphora list API and the nova API to get the image IDs | 19:32 |
johnsom | Sure, you can skip versions. Existing amps won't break | 19:32 |
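One possible version of the CLI trick johnsom mentions, assuming the amphora admin CLI is available (otherwise the amphora API can be queried directly); output formatting is illustrative:

    # For every amphora, look up its nova server and print the image it was booted from
    for amp_id in $(openstack loadbalancer amphora list -f value -c id); do
        compute_id=$(openstack loadbalancer amphora show "$amp_id" -f value -c compute_id)
        image=$(openstack server show "$compute_id" -f value -c image)
        echo "$amp_id $image"
    done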
Bar__ | johnsom, can you think of any reason why there have been no core reviews in the osc-lib repo for more than a week? | 19:33 |
johnsom | Bar__ Ummm, no, have you asked in openstack-sdks? I have seen Dean commenting in there about OSC type things. | 19:34 |
*** KeithMnemonic has quit IRC | 19:34 | |
cgoncalves | johnsom: ok, so operators are redeploying existing load balancers with new images | 19:35 |
Bar__ | johnsom, ok, I'll try. Although I have his vote already. | 19:35 |
johnsom | Bar__ Here is the list of cores: https://review.openstack.org/#/admin/groups/87,members | 19:35 |
johnsom | You can also try adding those folks to the "reviews" list on the patch | 19:35 |
johnsom | That usually sends an e-mail | 19:35 |
cgoncalves | johnsom: I am wondering if doing in-place upgrades would be something worth considering to lower e.g. upgrade time and possible downtime, and bandwidth | 19:36 |
johnsom | cgoncalves Umm, I wouldn't say redeploy, they don't have to delete and re-create the LBs, it's just a failover process. | 19:36 |
cgoncalves | johnsom: right. although that would require additional resources from the platform, even if temporary | 19:37 |
johnsom | cgoncalves We need to be careful with terminology a bit so we all don't get confused. The above process is an "in place upgrade" and depending on the topology of the LB, is zero downtime. | 19:38 |
johnsom | cgoncalves Yes, it means there may be one additional amphora during the failover process. | 19:38 |
johnsom | If the cloud can't handle one more amp, it's in HA trouble anyway.... | 19:39 |
johnsom | cgoncalves Are you advocating for package upgrades inside the amp? | 19:40 |
cgoncalves | johnsom: not advocating, no. exploring all possible options and weighing them :) | 19:40 |
*** slaweq has quit IRC | 19:41 | |
cgoncalves | as part of our product, together with the octavia service, we will also support the amphora image, with regular images being pushed to glance and the "amphora" tag being moved | 19:43 |
*** yamamoto has joined #openstack-lbaas | 19:43 | |
cgoncalves | johnsom: as such, it would be important to have a way to offer upgrades of running amphorae | 19:44 |
*** harlowja has joined #openstack-lbaas | 19:44 | |
johnsom | cgoncalves Ok, sure. We don't do anything explicit to block an operator from enabling that, but there are some considerations. | 19:45 |
johnsom | 1. Amps don't typically have a route to anywhere that would have a repo. An operator would have to enable that. | 19:45 |
johnsom | 2. Most important updates still require a reboot to replace kernels and/or libraries that are in use. This is currently not functional if you are using TLS offload as the encrypted RAMFS gets wiped. Though I hope we can fix that. | 19:45 |
johnsom | 3. Even if the patches do not require a reboot, they will restart services, which means an outage to data plane traffic. Most folks would prefer to control/schedule those. | 19:45 |
johnsom | 4. It can mean that you are running an un-tested combination of code depending on how it is done. The image method allows a testing process. | 19:45 |
cgoncalves | one where the operator could see which amps are running on outdated images | 19:45 |
johnsom | cgoncalves Yes, this is why we added the failover API and commands. Currently you can see the outdated images, it just takes some CLI magic. I commented on your RFE in support of making that "easier" in one way or another. | 19:46 |
johnsom | Oh, wait, it might have been Alex's RFE. Someone put one in storyboard.... grin | 19:47 |
cgoncalves | johnsom: ok (and Alex's RFE :)) | 19:47 |
cgoncalves | yep | 19:47 |
cgoncalves | I also asked some Sahara folks how they deal with their cluster VM upgrades -- they don't | 19:48 |
johnsom | If you have interest, I would super support you on working on an "upgrade"/"gernade" gate to start testing upgrades. This is the next step in declaring we are upgrade-able with a governance tag. | 19:48 |
cgoncalves | operators have to build new images, upload to glance and respawn new clusters | 19:49 |
johnsom | Yeah, trove has a similar "hands on" approach | 19:49 |
johnsom | I think we have the best tools so far | 19:49 |
cgoncalves | johnsom: I'm a total stranger to grenade atm :) | 19:50 |
johnsom | Yeah, me too | 19:50 |
johnsom | Heck, I even typed it wrong.... grin | 19:50 |
cgoncalves | let me go back to Alex and in-flight build and delivery of amp images | 19:51 |
johnsom | https://docs.openstack.org/grenade/latest/readme.html and https://wiki.openstack.org/wiki/Grenade | 19:51 |
cgoncalves | *and check next steps of ... | 19:52 |
johnsom | The upgrade tags we aspire for are all listed here: https://governance.openstack.org/tc/reference/tags/index.html#project-assertion-tags | 19:52 |
*** reedip has quit IRC | 19:52 | |
johnsom | This kind of sums up the state of upgrades in OpenStack: "Note No projects are using assert:supports-zero-downtime-upgrade, yet." | 19:53 |
*** yboaron_ has joined #openstack-lbaas | 19:53 | |
cgoncalves | heh :) | 19:53 |
cgoncalves | johnsom: thanks for your feedback! | 19:54 |
johnsom | Yep, super support working on upgrades. | 19:54 |
*** yamamoto has quit IRC | 19:55 | |
*** Supun has quit IRC | 19:58 | |
cgoncalves | Bar__: https://review.openstack.org/#/c/529861/4/devstack/contrib/new-octavia-devstack.sh@23 | 19:58 |
*** kobis has quit IRC | 19:58 | |
cgoncalves | Bar__: you don't want to bring that back? | 19:58 |
Bar__ | cgoncalves, it's in a separate patch now. | 19:59 |
Bar__ | look for related changes | 19:59 |
cgoncalves | got it | 20:00 |
Bar__ | cgoncalves, thanks! | 20:01 |
cgoncalves | Bar__: you're welcome! thank YOU for the patches | 20:02 |
cgoncalves | off for today. 9pm and late for dinner | 20:02 |
Bar__ | cgoncalves, cya | 20:02 |
Bar__ | johnsom, I cannot call a devstack function before I clone it, and I cannot clone a repo prior to installing git | 20:04 |
*** reedip has joined #openstack-lbaas | 20:05 | |
johnsom | wget and tar? Grin | 20:05 |
johnsom | Yeah, hmm. This same script does expect some packages to be installed | 20:06 |
johnsom | Frankly it probably should not install packages but just run with error handling | 20:08 |
Bar__ | johnsom, so you assume git is already installed? | 20:10 |
Bar__ | (not sure what you meant by 'error handling' since we're talking about shell scripts) | 20:11 |
*** slaweq has joined #openstack-lbaas | 20:14 | |
johnsom | Yeah, you can temporarily disable "halt on error" and check $? to see if the command ran, then print a useful error message | 20:16 |
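A minimal sketch of that shell pattern, assuming the script otherwise runs with set -e:

    set +e                              # temporarily disable halt-on-error
    git --version > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        echo "ERROR: git is required by this script, please install it first." >&2
        exit 1
    fi
    set -e                              # restore halt-on-error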
Bar__ | johnsom, one more thing. In the configuration patch. Do you think I should have updated https://github.com/openstack/octavia/blob/master/devstack/samples/multinode/local-2.conf too? | 20:18 |
johnsom | Bar__ No, those plugins only need to run on the controller node | 20:19 |
johnsom | We aren't testing if neutron can run two controllers for example | 20:20 |
johnsom | Just if Octavia can | 20:20 |
Bar__ | johnsom, but this addition is more than a mere plug-in; it reconfigures ml2, e.g. via neutron/plugin.sh -> configure qos | 20:21 |
johnsom | Yeah, I was just thinking about that. I'm not sure if the enable service for q-qos is needed or not | 20:21 |
johnsom | I think it probably is | 20:22 |
johnsom | Just not sure | 20:22 |
johnsom | Do you know where in the networking guide you need to look it up? | 20:22 |
johnsom | https://docs.openstack.org/neutron/latest/admin/config-qos.html | 20:23 |
johnsom | Should be somewhere in there | 20:23 |
johnsom | Yep, there is an "On the network and compute nodes" section (no permalink though) | 20:24 |
johnsom | Assuming the multi-node setup uses the second node as a compute host and not just our controllers, which I think is the case | 20:26 |
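For reference, what that section of the guide boils down to for an ML2/OVS deployment; this is a hedged sketch using crudini (not necessarily installed), and devstack's q-qos service normally wires up the equivalent on its own:

    # Controller: enable the qos ml2 extension driver (and append "qos" to
    # service_plugins in neutron.conf -- append, do not overwrite)
    crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security,qos
    # Network and compute nodes: enable the qos extension on the L2 agent
    crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent extensions qos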
*** slaweq has quit IRC | 20:28 | |
*** salmankhan has joined #openstack-lbaas | 20:29 | |
*** kobis has joined #openstack-lbaas | 20:31 | |
Bar__ | johnsom, the current behavior of neutron/plugin.sh is as if it were a controller node | 20:33 |
Bar__ | not to say it is a problem, just an observation. | 20:33 |
*** salmankhan has quit IRC | 20:34 | |
*** slaweq has joined #openstack-lbaas | 20:37 | |
yboaron_ | ping johnsom | 20:43 |
johnsom | yboaron_ Hi | 20:43 |
yboaron_ | johnsom, I've managed to recover my devstack env, are u available for 10 min? | 20:44 |
johnsom | Sure, NP. Just finished lunch | 20:45 |
yboaron_ | johnsom, enjoy your lunch ! ping me when u r available | 20:46 |
johnsom | yboaron_ I just finished, so I am available | 20:46 |
yboaron_ | johnsom, great. So I'm having packet loss when trying an HTTP GET through a load balancer with an L7 policy (hostname equals XXXX) | 20:48 |
johnsom | Just to bring me up to speed, every packet gets a TCP RST? Or just some? | 20:49 |
yboaron_ | johnsom, I have a load balancer with an L7 policy attached to a listener on port 80; the L7 policy directs the traffic (with hostname=XXX) to a pool with two members | 20:49 |
yboaron_ | johnsom, just some | 20:49 |
johnsom | What is the LB topology, standalone or active/standby? | 20:49 |
yboaron_ | standalone | 20:49 |
johnsom | Ah, yes, the story: https://storyboard.openstack.org/#!/story/2001425 | 20:50 |
yboaron_ | johnsom, I'll copy the relevant part from tcpdump and send u the link | 20:50 |
johnsom | It's on the priority etherpad "Bugs that should be investigated for Queens" list | 20:50 |
yboaron_ | johnsom, yes, that's it. I thought you might point me at something... | 20:51 |
johnsom | Are you using a floating IP for this test? Have you tried without the floating IP? | 20:51 |
yboaron_ | Yes, I'm using a FIP, but the packets do reach the VIP IP | 20:52 |
yboaron_ | so I don't think it's related to FIP | 20:52 |
johnsom | Any chance you can capture a pcap file for me? https://www.wireshark.org/docs/wsug_html_chunked/AppToolstcpdump.html | 20:53 |
johnsom | Also, we were going to look in the haproxy.log file for the request that failed | 20:54 |
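A sketch of capturing the requested pcap from inside the amphora; the interface name and port filter are assumptions:

    # Capture VIP traffic inside the amphora-haproxy network namespace
    sudo ip netns exec amphora-haproxy tcpdump -i eth1 -s 0 -w /tmp/lb-vip.pcap 'tcp port 80'
    # then copy /tmp/lb-vip.pcap off the amphora (e.g. scp over the lb-mgmt network)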
yboaron_ | johnsom, I'll try to do it; meanwhile I just pasted the tcpdump of a successful and a failed HTTP GET #link https://pastebin.com/X9VKvkn7 | 20:55 |
rm_work | cgoncalves / johnsom: I was actually just about to roll a patch that adds the glance image name to the amp table when they're created | 20:55 |
rm_work | because i am dealing with this internally as well | 20:55 |
rm_work | and it seems logical to do | 20:55 |
*** kobis has quit IRC | 20:55 | |
rm_work | or rather | 20:55 |
rm_work | amp image ID | 20:55 |
johnsom | ID I hope | 20:55 |
xgerman_ | +1 | 20:55 |
rm_work | yeah sorry, meant ID not name | 20:55 |
rm_work | since THAT will not change for the amp | 20:56 |
rm_work | lol | 20:56 |
rm_work | and it's trivial | 20:56 |
rm_work | any objections? | 20:56 |
rm_work | should help cgoncalves as well | 20:56 |
johnsom | No, I think that is fine | 20:56 |
rm_work | i would have done it like two weeks ago but shit has been whack, to use the technical term | 20:56 |
xgerman_ | yeah, while you are there allow filtering by image id | 20:56 |
johnsom | Can't land until Rocky though | 20:56 |
rm_work | yeah | 20:56 |
rm_work | "can't land" | 20:56 |
* rm_work runs a lot of patches ahead of merging | 20:57 | |
johnsom | Like a wild man... grin | 20:57 |
rm_work | https://i.imgflip.com/23cxyh.jpg | 20:59 |
johnsom | lol | 20:59 |
rm_work | I'm like a bleeding-edge hipster | 21:00 |
rm_work | "Oh, you run git-master? I ran those patches BEFORE they were merged." | 21:00 |
johnsom | yboaron_ Yeah, that tcpdump is easier to read. I think the haproxy.log is going to be important. | 21:00 |
* johnsom Thanks rm_work for his alpha testing | 21:00 | |
rm_work | yes, this WOULD explain why I always run into these issues before anyone else notices <_< | 21:01 |
*** kobis has joined #openstack-lbaas | 21:03 | |
*** gcheresh has joined #openstack-lbaas | 21:09 | |
*** kobis has quit IRC | 21:12 | |
yboaron_ | johnsom, you can find ~ 15 lines of haproxy.log at the following link : #link https://pastebin.com/XV9gaduS , please let me know if u still need pcap file and the entire haproxy.log file | 21:20 |
johnsom | Ok, looking | 21:21 |
johnsom | S = the TCP session was unexpectedly aborted by the server, or the | 21:27 |
johnsom | server explicitly refused it. | 21:27 |
johnsom | C = the proxy was waiting for the CONNECTION to establish on the | 21:27 |
johnsom | server. The server might at most have noticed a connection attempt. | 21:27 |
johnsom | yboaron_ It appears from that log that only two requests were serviced and the rest had a problem with the backend server | 21:27 |
yboaron_ | johnsom, what do you mean by 'problem with backend server' ? | 21:28 |
johnsom | yboaron_ For incoming requests, haproxy has a "frontend" that terminates the incoming request and a "backend" that establishes a connection to the backend server(s) (typically web servers). This backend connection to the web server is failing on most of those requests. | 21:29 |
johnsom | yboaron_ Can you go into /var/lib/octavia/<uuid>/haproxy.cfg and get the "backend 1d52c2c1-2d26-420b-825e-966af84cdcb6" section? | 21:30 |
yboaron_ | johnsom, sure | 21:30 |
yboaron_ | johnsom, once again I'm having problems connecting to my VM , sorry | 21:33 |
johnsom | yboaron_ Could this be related? | 21:34 |
yboaron_ | johnsom, I'll get it and ping ASAP | 21:34 |
yboaron_ | johnsom, don't think so; when I'm running a 'standard' load balancer with no L7 policy, all works fine | 21:34 |
johnsom | I just about have a test VM setup to try to reproduce | 21:35 |
yboaron_ | johnsom, my guess is it's related somehow to the L7 policy/rule | 21:35 |
yboaron_ | johnsom, that would be great | 21:35 |
johnsom | yboaron_ When you get back in, can you do a "openstack loadbalancer member list"? | 21:43 |
yboaron_ | johnsom, sure | 21:43 |
xgerman_ | rm_wfg | 21:43 |
*** gcheresh has quit IRC | 21:44 | |
yboaron_ | johnsom, [stack@ghghghgh devstack]$ openstack loadbalancer member list 63bba235-d918-4163-a5a2-1e8aeb00595d | 21:47 |
yboaron_ | +--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+ | 21:47 |
yboaron_ | | id | name | project_id | provisioning_status | address | protocol_port | operating_status | weight | | 21:47 |
yboaron_ | +--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+ | 21:47 |
yboaron_ | | deed8a2a-2133-415e-a93f-ae316a5e486f | | de934d9614dd4e9fa3c669e99df8eb85 | ACTIVE | 10.0.0.66 | 8080 | NO_MONITOR | 1 | | 21:47 |
yboaron_ | | 71ff3979-d6ff-45d6-b196-38bc05454b0a | | de934d9614dd4e9fa3c669e99df8eb85 | ACTIVE | 10.0.0.76 | 8080 | NO_MONITOR | 1 | | 21:47 |
yboaron_ | +--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+ | 21:47 |
yboaron_ | [stack@ghghghgh devstack]$ | 21:47 |
xgerman_ | fg | 21:47 |
rm_work | xgerman_: were you trying to talk to me? :P | 21:47 |
xgerman_ | no, wrong window | 21:47 |
rm_work | heh | 21:47 |
rm_work | "<xgerman_>rm_wfg" | 21:47 |
rm_work | so close | 21:48 |
johnsom | yboaron_ Ok, so with "NO_MONITOR" that means there is no health monitor which means the load balancer will send requests to backend servers that are down. | 21:48 |
xgerman_ | yeah, I know… I keep looking in my chat window while doing some shell stuff | 21:48 |
johnsom | yboaron_ I would add a health monitor to the pool and I bet you will see that one of the backend servers is down or not responding to requests. | 21:48 |
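A minimal sketch of adding that health monitor with the native octavia CLI (the pool ID is a placeholder; with neutron-lbaas the equivalent lbaas-healthmonitor commands apply):

    openstack loadbalancer healthmonitor create \
        --name hm1 --type HTTP --url-path / \
        --delay 5 --timeout 3 --max-retries 3 \
        <pool-id>
    # members that fail the check will report operating_status ERROR instead of NO_MONITOR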
yboaron_ | johnsom, OK, will do it. BTW I couldn't find any "backend 1d52c2c1-2d26-420b-825e-966af84cdcb6" in any of the haproxy.cfg files | 21:49 |
*** slaweq has quit IRC | 22:04 | |
*** slaweq has joined #openstack-lbaas | 22:05 | |
*** yamamoto has joined #openstack-lbaas | 22:05 | |
johnsom | Jan 25 22:04:09 amphora-6ee2be0a-ff84-4389-bf22-3394317828c3 kernel: [ 1237.753106] A link change request failed with some changes committed already. Interface ens6 may have been left with an inconsistent configuration, please check. | 22:06 |
johnsom | hmmm | 22:06 |
yboaron_ | johnsom, added the health monitor - members seem to be online although traffic still fails | 22:11 |
johnsom | member list now? | 22:11 |
yboaron_ | [stack@ghghghgh devstack]$ openstack loadbalancer member list 63bba235-d918-4163-a5a2-1e8aeb00595d | 22:12 |
yboaron_ | +--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+ | 22:12 |
yboaron_ | | id | name | project_id | provisioning_status | address | protocol_port | operating_status | weight | | 22:12 |
yboaron_ | +--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+ | 22:12 |
yboaron_ | | deed8a2a-2133-415e-a93f-ae316a5e486f | | de934d9614dd4e9fa3c669e99df8eb85 | ACTIVE | 10.0.0.66 | 8080 | ONLINE | 1 | | 22:12 |
yboaron_ | | 71ff3979-d6ff-45d6-b196-38bc05454b0a | | de934d9614dd4e9fa3c669e99df8eb85 | ACTIVE | 10.0.0.76 | 8080 | ONLINE | 1 | | 22:12 |
yboaron_ | +--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+ | 22:12 |
yboaron_ | [stack@ghghghgh devstack]$ | 22:12 |
johnsom | yboaron_ Here is my result: | 22:16 |
johnsom | https://www.irccloud.com/pastebin/ZyCXLBJB/ | 22:16 |
johnsom | https://www.irccloud.com/pastebin/YlMN9r16/ | 22:18 |
*** slaweq has quit IRC | 22:19 | |
yboaron_ | johnsom, 100% success, cool. Did u define an L7 rule with HOSTNAME equal to 'xxxxx'? | 22:19 |
*** slaweq has joined #openstack-lbaas | 22:19 | |
johnsom | yboaron_ Second pastebin is the commands I ran to create the LB | 22:19 |
johnsom | Tried to recreate it per your story | 22:20 |
yboaron_ | johnsom, what did u use to connect the private subnet and 172.21.xx (the members), a neutron router? | 22:23 |
johnsom | yes | 22:24 |
yboaron_ | johnsom, Did u use neutron LBaaSv2 with the octavia provider, or the octavia client? | 22:25 |
johnsom | I do not use neutron-lbaas | 22:25 |
johnsom | I only run native octavia | 22:25 |
yboaron_ | johnsom, ooopssss | 22:25 |
johnsom | That should not matter though, in this case | 22:26 |
johnsom | https://www.irccloud.com/pastebin/M9XE9rBn/ | 22:27 |
johnsom | Here is my haproxy.cfg | 22:27 |
johnsom | I didn't set a health monitor because I knew my backends were healthy and you didn't set one in your story. | 22:27 |
yboaron_ | johnsom, I"ll take a look , well thanks a lot , I'll continue digging | 22:30 |
johnsom | Ok, poke around. It looks to me like a backend server is failing on you. | 22:30 |
johnsom | Let me know if you do not find the issue | 22:30 |
yboaron_ | johnsom, I'm almost sure that the backend is fine, because when doing HTTP without the L7 policy all is fine | 22:31 |
johnsom | Check that your haproxy config matches mine (basically) | 22:32 |
yboaron_ | johnsom, will do it , once again thank u very much for the time and effort . | 22:33 |
johnsom | Sure, NP | 22:33 |
yboaron_ | johnsom, I'm back again - this is my haproxy.cfg #link https://pastebin.com/yAbgegX7 | 22:39 |
yboaron_ | johnsom, seems that something is missing compared to your haproxy file | 22:39 |
johnsom | yboaron_ That is the wrong haproxy.cfg | 22:40 |
johnsom | yboaron_ /var/lib/octavia/0a1fd67f-13d9-415d-ab4d-7ae93e12f225/haproxy.cfg Where the UUID is your UUID not mine | 22:41 |
johnsom | That is a sample file for an haproxy in front of the API controllers used in a gate test. Not used in the amp | 22:43 |
openstackgerrit | Bar Elharar proposed openstack/python-octaviaclient master: Improve unit tests results with subTest https://review.openstack.org/531257 | 22:45 |
yboaron_ | johnsom, this is the right one #link https://pastebin.com/udJc0d28 | 22:47 |
yboaron_ | johnsom, I can see that in your file the mode is HTTP, mine is mode=TCP .... | 22:48 |
johnsom | yboaron_ Yes, that looks like the right config. However, I do notice something strange. You have it configured for TCP mode but you are using HTTP L7 rules. This *should* work, but is a very odd configuration. | 22:48 |
yboaron_ | johnsom, I"ll change it to HTTP and recheck | 22:50 |
openstackgerrit | Bar Elharar proposed openstack/python-octaviaclient master: Improve unit tests results with subTest https://review.openstack.org/531257 | 22:50 |
johnsom | yboaron_ I will try TCP too out of curiosity | 22:50 |
johnsom | yboaron_ Ha, yes, that reproduces the problem | 22:52 |
johnsom | Looks like HAProxy has a bug there, though I have to admit, it's kind of an invalid configuration combination. | 22:53 |
yboaron_ | johnsom, yes, that's correct; for an L7 policy we should configure the LB for HTTP | 22:54 |
johnsom | https://www.irccloud.com/pastebin/1vwkqZq0/ | 22:55 |
johnsom | So with your configuration the pool goes offline | 22:55 |
johnsom | It is reported that there is a problem | 22:55 |
johnsom | Yep, everything works fine, but haproxy takes the pool offline | 22:57 |
yboaron_ | johnsom, my pool is online #link https://pastebin.com/sa5ZiKzU | 22:59 |
*** armax has quit IRC | 23:00 | |
johnsom | Could be the health monitor overriding it since all of the members are healthy. I will check that code | 23:00 |
yboaron_ | johnsom, Cool , I"ll change my configuration to HTTP , and recheck | 23:03 |
yboaron_ | johnsom, I'll keep u posted | 23:03 |
yboaron_ | johnsom, Bye | 23:05 |
johnsom | o/ | 23:05 |
*** slaweq has quit IRC | 23:06 | |
*** yboaron_ has quit IRC | 23:09 | |
*** rstarmer has joined #openstack-lbaas | 23:12 | |
*** rstarmer has quit IRC | 23:24 | |
openstackgerrit | Bar Elharar proposed openstack/octavia master: Update configuration samples https://review.openstack.org/529861 | 23:25 |
openstackgerrit | Bar Elharar proposed openstack/octavia master: Make new-octavia-devstack.sh OS agnostic https://review.openstack.org/535231 | 23:25 |
*** rstarmer has joined #openstack-lbaas | 23:25 | |
*** ianychoi has quit IRC | 23:26 | |
*** ianychoi has joined #openstack-lbaas | 23:27 | |
johnsom | Yeah, full rundown there: since it's in TCP mode it still tries to parse the HTTP host header, but fragmentation/timing may mean the TCP matcher doesn't see the whole header, so it ends up not matching the L7 rule. Since the listener is in TCP mode it can't return a 503 (no servers, since there is no default pool), so it just sends an RST. | 23:28 |
*** armax has joined #openstack-lbaas | 23:31 | |
*** Alex_Staf has joined #openstack-lbaas | 23:32 | |
*** fnaval has quit IRC | 23:39 | |
openstackgerrit | Bar Elharar proposed openstack/octavia master: Update configuration samples (QoS) https://review.openstack.org/529861 | 23:39 |
openstackgerrit | Bar Elharar proposed openstack/octavia master: Make new-octavia-devstack.sh OS agnostic https://review.openstack.org/535231 | 23:40 |
*** Bar__ has quit IRC | 23:40 | |
*** kbyrne has joined #openstack-lbaas | 23:50 |