*** tongl has quit IRC | 00:16 | |
*** harlowja has quit IRC | 00:25 | |
*** armax has quit IRC | 00:37 | |
*** xingzhang has joined #openstack-lbaas | 00:41 | |
openstackgerrit | Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for load balancers https://review.openstack.org/486775 | 00:46 |
*** xingzhang has quit IRC | 00:47 | |
*** armax has joined #openstack-lbaas | 00:48 | |
rm_work | yeah crap | 00:52 |
rm_work | johnsom: right now this causes 2x FLIPs to be created per LB, and one is just orphaned forever | 00:53 |
*** sanfern has joined #openstack-lbaas | 00:53 | |
rm_work | :( | 00:54 |
rm_work | trying to figure out why it doesn't actually GET the data for the first one created | 00:54 |
*** JudeC has quit IRC | 00:55 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s) https://review.openstack.org/435612 | 00:56 |
*** jick has quit IRC | 01:01 | |
*** catintheroof has joined #openstack-lbaas | 01:02 | |
*** ipsecguy_ has joined #openstack-lbaas | 01:04 | |
*** armax has quit IRC | 01:07 | |
*** ipsecguy has quit IRC | 01:07 | |
openstackgerrit | Ghanshyam Mann proposed openstack/octavia master: Remove usage of credentials_factory.AdminManager https://review.openstack.org/491640 | 01:10 |
*** catintheroof has quit IRC | 01:11 | |
*** dayou2 has quit IRC | 01:18 | |
*** dayou has joined #openstack-lbaas | 01:19 | |
rm_work | johnsom: did you verify that patch actually WORKED in devstack? | 01:23 |
rm_work | https://review.openstack.org/#/c/463289/8/octavia/api/v2/controllers/load_balancer.py | 01:23 |
rm_work | so | 01:23 |
rm_work | on 219-223 | 01:23 |
rm_work | it seems like it is not ACTUALLY saving that back to the DB | 01:23 |
rm_work | do our models work that way? | 01:23 |
rm_work | will changing the values there and doing a commit below it "work" like expected? | 01:24 |
rm_work | because ... in my deployment, it's getting the right data for the VIP (and might "just work"), except it's not actually being stored in the DB in the VIP object ever | 01:24 |
rm_work | it doesn't return it to the user either | 01:25 |
rm_work | so yeah it's not storing that back to the DB.... | 01:32 |
rm_work | did you guys actually test that patch? johnsom / xgerman_ | 01:32 |
rm_work | IMO it is not working yet | 01:32 |
rm_work | and right now will PROBABLY be creating an extra port | 01:32 |
rm_work | and throwing it away, and not ACTUALLY using the IP specified | 01:32 |
xgerman_ | Mmh, it looked sane when I read it... | 01:34 |
rm_work | yes it LOOKS sane | 01:36 |
rm_work | but it isn't actually writing the vip object back to the DB | 01:36 |
rm_work | could you verify in a devstack? | 01:36 |
*** dougwig has quit IRC | 01:36 | |
rm_work | what i'm seeing is the vip object returns correctly, the data is transferred into "db_lb.vip" ... and then thrown away | 01:36
rm_work | SQLAlchemy says the session is clear even after touching the values | 01:55 |
rm_work | in db_lb.vip | 01:55 |
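[editor's note] A minimal, stdlib-only sketch of the failure mode rm_work is describing: the API layer hands back a plain data-model copy of the ORM row, so mutating that copy never dirties the session, and only an explicit repository-style write-back persists the VIP fields. All class and method names below are illustrative stand-ins, not Octavia's real ones.

```python
import copy
from dataclasses import dataclass

# Illustrative stand-ins -- NOT Octavia's real classes.
@dataclass
class VipRow:                 # plays the SQLAlchemy-mapped row
    ip_address: str = None
    port_id: str = None

class FakeRepo:
    """Stands in for the session + vip repository."""
    def __init__(self):
        self.rows = {"vip": VipRow()}

    def to_data_model(self, key):
        # The API layer converts ORM rows into plain data models;
        # the copy is detached, so edits never dirty the session.
        return copy.deepcopy(self.rows[key])

    def update(self, key, **attrs):
        # The fix is morally this: write the changed fields back
        # explicitly instead of relying on the detached copy.
        for name, value in attrs.items():
            setattr(self.rows[key], name, value)

repo = FakeRepo()
db_vip = repo.to_data_model("vip")
db_vip.ip_address = "10.0.0.5"                      # mutate the copy...
assert repo.rows["vip"].ip_address is None          # ...nothing persisted

repo.update("vip", ip_address="10.0.0.5")           # explicit write-back
assert repo.rows["vip"].ip_address == "10.0.0.5"
```

This is why "SQLAlchemy says the session is clear" — the touched object was never part of the session to begin with.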
openstackgerrit | Adam Harwell proposed openstack/octavia master: Properly store VIP data on LB Create https://review.openstack.org/491643 | 02:03 |
rm_work | johnsom / xgerman_ ^^ this fixes it | 02:03 |
rm_work | plz to be testing to verify my findings, and merging | 02:03 |
xgerman_ | Ok. Testing it is - | 02:05 |
*** xingzhang has joined #openstack-lbaas | 02:10 | |
*** mestery has quit IRC | 02:17 | |
*** afranc has quit IRC | 02:17 | |
*** sanfern has quit IRC | 02:17 | |
*** mestery has joined #openstack-lbaas | 02:22 | |
*** afranc has joined #openstack-lbaas | 02:22 | |
*** dayou1 has joined #openstack-lbaas | 02:37 | |
*** dayou has quit IRC | 02:40 | |
*** armax has joined #openstack-lbaas | 02:41 | |
*** armax has quit IRC | 02:44 | |
*** yamamoto_ has joined #openstack-lbaas | 02:46 | |
*** yamamoto has quit IRC | 02:46 | |
openstackgerrit | lidong proposed openstack/octavia master: Update links in README https://review.openstack.org/491655 | 03:09 |
*** sanfern has joined #openstack-lbaas | 03:48 | |
*** links has joined #openstack-lbaas | 03:53 | |
*** reedip_afk is now known as reedip | 04:17 | |
*** fnaval has quit IRC | 04:21 | |
*** xingzhang has quit IRC | 04:33 | |
*** xingzhang has joined #openstack-lbaas | 04:34 | |
*** harlowja has joined #openstack-lbaas | 04:35 | |
*** Alex_Staf has joined #openstack-lbaas | 04:54 | |
*** harlowja has quit IRC | 05:14 | |
*** xingzhang has quit IRC | 05:38 | |
*** xingzhang has joined #openstack-lbaas | 05:38 | |
*** JudeC has joined #openstack-lbaas | 05:44 | |
*** rcernin has joined #openstack-lbaas | 06:21 | |
*** JudeC has quit IRC | 06:26 | |
*** gcheresh has joined #openstack-lbaas | 06:32 | |
*** pcaruana has joined #openstack-lbaas | 07:00 | |
*** Alex_Staf has quit IRC | 07:04 | |
*** gtrxcb has quit IRC | 07:11 | |
*** tesseract has joined #openstack-lbaas | 07:21 | |
*** cpuga has joined #openstack-lbaas | 07:43 | |
*** yamamoto_ has quit IRC | 07:44 | |
*** yamamoto has joined #openstack-lbaas | 07:50 | |
*** rtjure has quit IRC | 07:58 | |
*** belharar has joined #openstack-lbaas | 08:04 | |
*** leyal has joined #openstack-lbaas | 08:18 | |
*** Guest14 has joined #openstack-lbaas | 08:20 | |
*** leyal- has joined #openstack-lbaas | 08:26 | |
leyal- | Hi - i am trying to run a single-node devstack instance with this local.conf - https://github.com/openstack/octavia/blob/master/devstack/samples/singlenode/local.conf | 08:27
leyal- | After ./stack.sh succeeded - i tried to create a load-balancer and got a provisioning error.. | 08:28
*** leyal has quit IRC | 08:29 | |
leyal- | i tried to look at the log of o-cw - and i saw the following exception - ComputeWaitTimeoutException - while trying to create the lbaas-instance .. | 08:30 |
*** aojea has joined #openstack-lbaas | 08:32 | |
leyal- | I will be happy for any debugging hint :) | 08:33 |
*** Alex_Staf has joined #openstack-lbaas | 08:40 | |
*** cpuga has quit IRC | 08:44 | |
*** aojea has quit IRC | 08:46 | |
*** rtjure has joined #openstack-lbaas | 08:46 | |
*** aojea has joined #openstack-lbaas | 08:47 | |
*** yamamoto has quit IRC | 08:49 | |
nmagnezi | rm_work, o/ | 08:51 |
nmagnezi | leyal-, hi there | 08:51 |
nmagnezi | leyal-, it is possible that your machine is a bit slow and the timeout was just too short for it. | 08:52 |
nmagnezi | leyal-, couple of suggestions | 08:52 |
nmagnezi | leyal-, try to create another loadbalancer, it is possible that the boot time will be quicker this time since the image should now be cached. and also when the loadbalancer gets created monitor the instance creation (nova list would do) | 08:53 |
nmagnezi | leyal-, a second option would be to increase the timeout | 08:53 |
*** dayou1 has quit IRC | 08:55 | |
leyal- | nmagnezi, Hi . thanks ! i don't think it's a timeout issue, as it was in PENDING_CREATE state for around 1 minute.. | 08:55
*** dayou has joined #openstack-lbaas | 08:55 | |
nmagnezi | leyal-, could you please paste the worker log somewhere and send a link? | 08:55 |
leyal- | But i will try both .. | 08:56 |
leyal- | where can i find the worker log ? | 08:57 |
nmagnezi | leyal-, hmm.. basically devstack now uses systemd. I still use screens so just try something like systemctl status devstack@octavia-cw and check where the log is.. i'm sorry i don't know the exact cmd | 08:59 |
nmagnezi | leyal-, if you want to wait i can spawn a devstack node with systemd and look at it | 09:00 |
leyal- | Is the worker one of the following - devstack@o-api devstack@o-cw devstack@o-hk devstack@o-hm | 09:01
nmagnezi | leyal-, yes. it is devstack@o-cw | 09:02 |
leyal- | nmagnezi, thanks ! - i see there the following msg repeated - "WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance." | 09:03 |
nmagnezi | leyal-, btw if you wish to increase the wait timeout (your debug hint ;) ), the param you should try to fiddle with is amp_active_retries | 09:04 |
nmagnezi | leyal-, https://github.com/openstack/octavia/blob/32819ecc8de293511a7fbb1020abb97b3087665d/octavia/controller/worker/tasks/compute_tasks.py#L185 | 09:04 |
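[editor's note] The timeout knob nmagnezi mentions lives in octavia.conf. A hedged sketch — the section name and default values here are assumptions, check the config reference for your release:

```ini
[controller_worker]
# How many times to poll nova for the amphora to go ACTIVE before
# raising ComputeWaitTimeoutException (values illustrative).
amp_active_retries = 30
# Seconds to sleep between polls.
amp_active_wait_sec = 10
```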
nmagnezi | leyal-, yeah, but one line is not enough for me to understand the root cause | 09:04 |
nmagnezi | leyal-, you may paste with http://paste.openstack.org/ | 09:05 |
nmagnezi | leyal-, and link it here | 09:05 |
leyal- | nmagnezi, cool - pasted that here - http://paste.openstack.org/show/617750/ | 09:06 |
nmagnezi | leyal-, Eyal, when you create an additional loadbalancer, do you see an nova instance being created? | 09:08 |
leyal- | Hi - the second one got created fine :) | 09:09
nmagnezi | leyal-, magniv ("awesome") :P | 09:09
nmagnezi | leyal-, caching helped you here | 09:10 |
leyal- | nmagnezi - just noticed that when i checked the server list, only one was created - Thanks a lot!! "toda raba" (thank you very much)!! | 09:11
*** yamamoto has joined #openstack-lbaas | 09:15 | |
*** yamamoto has quit IRC | 09:22 | |
openstackgerrit | Ji Chengke proposed openstack/neutron-lbaas master: make lbaasv2 support "https" keystone endpoint https://review.openstack.org/491734 | 09:23 |
openstackgerrit | Ji Chengke proposed openstack/neutron-lbaas master: make lbaasv2 support "https" keystone endpoint https://review.openstack.org/491734 | 09:54 |
*** yamamoto has joined #openstack-lbaas | 09:54 | |
*** yamamoto has quit IRC | 10:07 | |
*** yamamoto has joined #openstack-lbaas | 10:12 | |
*** mdavidson has quit IRC | 10:16 | |
*** mdavidson has joined #openstack-lbaas | 10:17 | |
*** yamamoto has quit IRC | 10:17 | |
*** yamamoto has joined #openstack-lbaas | 10:18 | |
*** yamamoto has quit IRC | 10:20 | |
*** yamamoto has joined #openstack-lbaas | 10:20 | |
*** xingzhang has quit IRC | 10:48 | |
*** xingzhang has joined #openstack-lbaas | 10:48 | |
*** Guest14 is now known as ajo | 10:54 | |
*** sanfern has quit IRC | 10:56 | |
*** yamamoto has quit IRC | 10:58 | |
*** slaweq has joined #openstack-lbaas | 10:59 | |
*** yamamoto has joined #openstack-lbaas | 10:59 | |
*** yamamoto has quit IRC | 11:05 | |
openstackgerrit | garyk proposed openstack/neutron-lbaas master: Use flake8-import-order plugin https://review.openstack.org/480543 | 11:06 |
*** yamamoto has joined #openstack-lbaas | 11:07 | |
*** yamamoto has quit IRC | 11:08 | |
*** yamamoto has joined #openstack-lbaas | 11:10 | |
*** yamamoto has quit IRC | 11:13 | |
*** xingzhang has quit IRC | 11:13 | |
*** yamamoto has joined #openstack-lbaas | 11:15 | |
*** yamamoto has quit IRC | 11:16 | |
*** yamamoto has joined #openstack-lbaas | 11:16 | |
*** atoth has joined #openstack-lbaas | 11:48 | |
*** aojea has quit IRC | 12:05 | |
*** aojea has joined #openstack-lbaas | 12:06 | |
*** aojea has quit IRC | 12:11 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/neutron-lbaas master: Updated from global requirements https://review.openstack.org/490856 | 12:16 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/octavia master: Updated from global requirements https://review.openstack.org/491293 | 12:18 |
*** yamamoto has quit IRC | 12:18 | |
*** yamamoto has joined #openstack-lbaas | 12:20 | |
*** yamamoto has quit IRC | 12:20 | |
*** yamamoto has joined #openstack-lbaas | 12:40 | |
*** leitan has joined #openstack-lbaas | 12:45 | |
*** aojea has joined #openstack-lbaas | 12:53 | |
*** ssmith has joined #openstack-lbaas | 12:58 | |
*** cpusmith has joined #openstack-lbaas | 12:59 | |
*** catintheroof has joined #openstack-lbaas | 13:00 | |
*** ssmith has quit IRC | 13:03 | |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Fix sg_rule didn't set protocol field https://review.openstack.org/491792 | 13:09 |
*** links has quit IRC | 13:11 | |
*** sanfern has joined #openstack-lbaas | 13:16 | |
*** gcheresh has quit IRC | 13:50 | |
*** sanfern has quit IRC | 14:05 | |
*** sanfern has joined #openstack-lbaas | 14:06 | |
*** slaweq has quit IRC | 14:10 | |
*** xingzhang has joined #openstack-lbaas | 14:12 | |
*** xingzhang has quit IRC | 14:17 | |
*** Guest13936 is now known as med_ | 14:21 | |
*** med_ has quit IRC | 14:21 | |
*** med_ has joined #openstack-lbaas | 14:21 | |
*** med_ is now known as medberry | 14:21 | |
*** cpuga has joined #openstack-lbaas | 14:33 | |
*** cpuga_ has joined #openstack-lbaas | 14:35 | |
*** armax has joined #openstack-lbaas | 14:37 | |
*** aojea has quit IRC | 14:38 | |
*** cpuga has quit IRC | 14:39 | |
*** medberry is now known as med_ | 14:43 | |
*** links has joined #openstack-lbaas | 14:48 | |
*** links has quit IRC | 14:49 | |
*** belharar has quit IRC | 15:01 | |
*** belharar has joined #openstack-lbaas | 15:04 | |
*** belharar has quit IRC | 15:35 | |
*** belharar has joined #openstack-lbaas | 15:37 | |
*** armax has quit IRC | 15:42 | |
*** armax has joined #openstack-lbaas | 15:43 | |
*** dougwig has joined #openstack-lbaas | 15:44 | |
*** belharar has quit IRC | 15:45 | |
*** yamamoto has quit IRC | 15:54 | |
*** ajo has quit IRC | 15:57 | |
*** yamamoto has joined #openstack-lbaas | 15:58 | |
*** yamamoto has quit IRC | 16:03 | |
*** leyal- has quit IRC | 16:08 | |
*** leyal has joined #openstack-lbaas | 16:09 | |
*** armax has quit IRC | 16:10 | |
*** armax has joined #openstack-lbaas | 16:11 | |
*** pcaruana has quit IRC | 16:27 | |
*** rcernin has quit IRC | 16:31 | |
*** tesseract has quit IRC | 16:31 | |
*** sshank has joined #openstack-lbaas | 16:42 | |
*** KeithMnemonic has joined #openstack-lbaas | 16:45 | |
*** KeithMnemonic has quit IRC | 16:47 | |
*** KeithMnemonic has joined #openstack-lbaas | 16:49 | |
*** KeithMnemonic has quit IRC | 16:52 | |
*** cpuga_ has quit IRC | 17:04 | |
*** fnaval has joined #openstack-lbaas | 17:12 | |
*** sanfern has quit IRC | 17:16 | |
*** cpuga has joined #openstack-lbaas | 17:19 | |
openstackgerrit | Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for load balancers https://review.openstack.org/486775 | 17:28 |
openstackgerrit | Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for load balancers https://review.openstack.org/486775 | 17:30 |
*** Alex_Staf has quit IRC | 18:10 | |
rm_work | i'm fixing the functional test failure presently in my fix | 18:20 |
openstackgerrit | Merged openstack/neutron-lbaas master: Updated from global requirements https://review.openstack.org/490856 | 18:22 |
rm_work | ah man, these functional tests are proving this is broken | 18:22 |
rm_work | the tests were set up to make sure it didn't allocate a VIP before the worker stage... | 18:23 |
*** sshank has quit IRC | 18:49 | |
rm_work | johnsom: will you have time to test this today? or still focused on 404s | 19:03 |
johnsom | yeah, I will take a look. I'm having workstation issues today. Ugh | 19:03 |
rm_work | :( | 19:03 |
johnsom | Pretty close to a restore from backup.... | 19:04 |
rm_work | just looking for you to verify the *issue* | 19:04 |
rm_work | not test the fix yet | 19:04 |
rm_work | well, that's.... sad | 19:04 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Properly store VIP data on LB Create https://review.openstack.org/491643 | 19:25 |
rm_work | ok testing .... "fixed" >_> | 19:25 |
*** aojea has joined #openstack-lbaas | 19:30 | |
nmagnezi | o/ | 19:31 |
rm_work | nmagnezi: https://review.openstack.org/#/c/489015/ | 19:32 |
nmagnezi | rm_work, hello to you too :) | 19:32 |
rm_work | :P | 19:32 |
* nmagnezi reads | 19:32 | |
nmagnezi | rm_work, I was under the impression that his patch is more generic (in a very good way) and should also update members who did not report to hm to OFFLINE | 19:35 |
rm_work | whose patch? | 19:36 |
nmagnezi | rm_work, yours | 19:36 |
rm_work | oh you mean "this" patch prolly, derp | 19:36 |
nmagnezi | rm_work, quote " | 19:36 |
nmagnezi | By actively updating the status of members that don't report, we can set | 19:36 |
nmagnezi | disabled members to OFFLINE exactly when this becomes true." | 19:36 |
rm_work | uhh | 19:37 |
rm_work | right | 19:37 |
rm_work | so i'm not sure what you mean by an updated "reported_members" list | 19:37 |
nmagnezi | not sure why I wrote quote and also "" :P | 19:37 |
rm_work | "Such case could be handled with a simple check that a member is still not in (an updated) reported_members list before you set its status to OFFLINE." | 19:37 |
nmagnezi | rm_work, let me try to explain myself with this comment | 19:37 |
rm_work | we have literally only this one "reported_members" list to work with, there's no getting an updated one | 19:38 |
*** xingzhang has joined #openstack-lbaas | 19:38 | |
nmagnezi | rm_work, can't we re-read a single member from the db each time? | 19:41 |
rm_work | it's not from the DB | 19:43 |
rm_work | that list is from the health message | 19:43 |
*** xingzhang has quit IRC | 19:43 | |
nmagnezi | rm_work, sec, cherry-picking this again | 19:44 |
nmagnezi | rm_work, yeah so fetching a single member by id would need an addition to octavia/controller/healthmanager/update_db.py | 19:49 |
rm_work | no i mean we literally can't | 19:50 |
nmagnezi | oh, not that path | 19:50 |
nmagnezi | sec | 19:50 |
rm_work | "reported_members" isn't from the DB | 19:50 |
nmagnezi | https://github.com/openstack/octavia/blob/98799b54ddc778722c0434af58a0b11d9b00958b/octavia/db/repositories.py#L735-L735 | 19:50 |
nmagnezi | why? you fetch the pool few lines beforehand, no? | 19:50 |
rm_work | *reported_members* is from the health message | 19:51 |
rm_work | real_members is from the db | 19:51 |
rm_work | we're comparing DB to HM | 19:51 |
*** aojea has quit IRC | 19:51 | |
* nmagnezi looks at reported_members | 19:52 | |
nmagnezi | rm_work, ah, it comes from the 'health' param provided to that function, right? | 19:55 |
rm_work | yeah it comes from the health message we're processing | 19:55 |
nmagnezi | got it | 19:56 |
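[editor's note] The comparison rm_work describes — DB members vs. the members an amphora's health message actually reported — boils down to a set difference. A hedged sketch; the message shape and all IDs are illustrative, not Octavia's exact wire format:

```python
# One (simplified, hypothetical) UDP health message from an amphora.
health = {
    "listeners": {
        "listener-1": {
            "pools": {
                "pool-1": {"members": {"member-1": "UP"}},
            },
        },
    },
}

# Member IDs fetched from the database for the same load balancer.
db_member_ids = {"member-1", "member-2", "member-3"}

# reported_members comes from the health message, NOT from the DB.
reported_members = set()
for listener in health["listeners"].values():
    for pool in listener["pools"].values():
        reported_members.update(pool["members"])

# Members in the DB that never reported in get actively set OFFLINE.
unreported = db_member_ids - reported_members
statuses = {member_id: "OFFLINE" for member_id in unreported}
print(sorted(statuses))   # -> ['member-2', 'member-3']
```

This is also why a fresher per-member health check isn't available at that point in the code: the health message is the only source of "reported" state.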
rm_work | i do wonder if we could combine some of those messages | 19:59 |
rm_work | instead of throwing N messages on the queue | 19:59 |
nmagnezi | rm_work, changed my vote | 19:59 |
nmagnezi | rm_work, I wish there was a way to check a health state of a single member at that point in the code. but I understand why this is not doable here. | 20:00 |
nmagnezi | rm_work, I had good intention in mind :P | 20:00 |
rm_work | :) | 20:02 |
nmagnezi | rm_work, i have a question, not related to this patch | 20:03 |
nmagnezi | rm_work, now that we run the api with the wsgi (or uwsgi, can't remember) magic | 20:03 |
nmagnezi | rm_work, how can I query the API service directly? | 20:03 |
rm_work | ? | 20:04 |
nmagnezi | rm_work, I don't see it listening to any port, which is strange | 20:04 |
rm_work | ah | 20:04 |
rm_work | it's because it is routed via apache | 20:04 |
rm_work | johnsom could probably answer this one better | 20:04 |
nmagnezi | aye. the api-ref examples (which really helped me so far) might need to get some update to fit that,. .different port maybe.. | 20:05 |
johnsom | Yeah, new openstack world order, no ports but paths | 20:06
nmagnezi | rm_work, this actually stopped me from reviewing you batch members patch https://review.openstack.org/#/c/477034/ | 20:06 |
nmagnezi | johnsom, hi o/ | 20:06 |
rm_work | you can still hit it directly | 20:07 |
rm_work | it's not "indirect"? | 20:07 |
johnsom | It is like /load-balancer/v2.... now | 20:07 |
rm_work | i still do curl commands as I did before | 20:07 |
nmagnezi | rm_work, on which port number? | 20:07 |
rm_work | just have to use the path per: | 20:07 |
rm_work | export OCTAVIA_URL=$(openstack catalog show load-balancer -f json | jq -r .endpoints | awk '/publicURL/ {print $2}') | 20:07 |
rm_work | no port # | 20:07 |
rm_work | just curl that path | 20:07 |
nmagnezi | ah | 20:07 |
nmagnezi | alright | 20:07 |
nmagnezi | k.. I'm going AFK | 20:08 |
rm_work | kk | 20:08 |
nmagnezi | I'm on PTO tomorrow, so might miss the weekly meeting as well | 20:08 |
rm_work | have fun :) | 20:09 |
nmagnezi | thanks :) | 20:09 |
*** aojea has joined #openstack-lbaas | 20:09 | |
*** sshank has joined #openstack-lbaas | 20:16 | |
johnsom | Fyi, I ran 620 create/delete cycles last night. Not a single 404 | 20:16 |
xgerman_ | did that break your box? | 20:17 |
johnsom | Ha, maybe | 20:17 |
johnsom | It has been acting odd for a few weeks | 20:18 |
*** aojea has quit IRC | 20:24 | |
*** aojea has joined #openstack-lbaas | 20:28 | |
johnsom | rm_work looks good, can't test at the moment, but reading it looked ok. Waiting for gates | 20:37 |
*** cpusmith has quit IRC | 20:38 | |
*** aojea has quit IRC | 20:51 | |
*** yamamoto has joined #openstack-lbaas | 21:09 | |
*** aojea has joined #openstack-lbaas | 21:10 | |
*** sshank has quit IRC | 21:18 | |
*** JudeC has joined #openstack-lbaas | 21:19 | |
*** cpuga has quit IRC | 21:20 | |
*** cpuga_ has joined #openstack-lbaas | 21:21 | |
*** yamamoto has quit IRC | 21:21 | |
*** sshank has joined #openstack-lbaas | 21:22 | |
*** JudeC has quit IRC | 21:24 | |
*** ssmith has joined #openstack-lbaas | 21:25 | |
*** cpuga_ has quit IRC | 21:28 | |
*** cpuga has joined #openstack-lbaas | 21:28 | |
*** cpuga has quit IRC | 21:33 | |
rm_work | johnsom: i don't know how to specify in the API Ref that we take a list of some object | 21:47 |
rm_work | >_> | 21:47 |
johnsom | In the param docs? | 21:48 |
johnsom | They call it array | 21:48 |
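[editor's note] In os-api-ref parameter definitions, list-typed body fields are declared with `type: array`. A hedged sketch of a parameters.yaml entry — the field names follow the os-api-ref convention, but this particular parameter is illustrative:

```yaml
# api-ref/source/parameters.yaml (illustrative entry)
members:
  description: |
    A list of member objects to apply to the pool in one batch update.
  in: body
  required: true
  type: array
```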
xgerman_ | well, I can fire up devstack on my Mac… are we still trying to test rm_work’s patch from last night? | 21:55
*** yamamoto has joined #openstack-lbaas | 21:59 | |
rm_work | yesplz | 21:59 |
rm_work | or rather | 21:59 |
rm_work | please just verify my findings on master | 21:59 |
rm_work | which is, you'll still not get a vip_address back on create | 22:00 |
rm_work | and it'll still create the port on the worker side | 22:00 |
rm_work | and you'll have an extra orphaned port | 22:00 |
xgerman_ | ok | 22:00 |
rm_work | also: | 22:01 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Allow PUT to /pools/<id>/members to batch update members https://review.openstack.org/477034 | 22:01 |
rm_work | fixed | 22:01 |
rm_work | but less important by far | 22:01 |
xgerman_ | ok, will test master | 22:01 |
*** sshank has quit IRC | 22:06 | |
*** sshank has joined #openstack-lbaas | 22:21 | |
*** fnaval has quit IRC | 22:24 | |
*** leitan has quit IRC | 22:40 | |
xgerman_ | and because it’s close to RC devstack doesn’t work | 22:54
rm_work | <_< | 22:57 |
*** aojea has quit IRC | 23:12 | |
rm_work | johnsom: in the amp driver, if we try to delete a listener and it gets a 404 from the amp that the listener is already gone ....... is that ... "OK"? Could we ignore that? | 23:20 |
johnsom | Yeah, I thought german just had a patch for that | 23:21 |
johnsom | https://review.openstack.org/#/c/487232/ | 23:22 |
xgerman_ | Yep, pls merge | 23:22 |
rm_work | ah, i see | 23:23 |
rm_work | yeah k | 23:23 |
rm_work | xgerman_: i wonder what makes this happen -- i just ran into it in our env | 23:25 |
xgerman_ | Concurrency - in my case those gopher things were deleting concurrently all over the place | 23:27
johnsom | Wasn't it that the amp ran out of disk during create, so the listener didn't finish create, then the user tried to delete it to fix it? | 23:28 |
xgerman_ | Nope. That was something else | 23:28 |
*** catintheroof has quit IRC | 23:29 | |
rm_work | in my case, it looks like the request was made but died on a read error for some reason | 23:31 |
rm_work | so the driver retried | 23:31 |
rm_work | but it had actually made it to the amp already and the agent had done the delete | 23:31 |
rm_work | so then it died on the 404 | 23:31 |
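[editor's note] The behavior the patch under discussion adds — treating a 404 on listener delete as success, since a retried DELETE can hit a listener the agent already removed — can be sketched like this. Class and method names are illustrative, not the amphora driver's real API:

```python
class NotFound(Exception):
    """Stands in for the amp agent's 404 response."""
    status_code = 404

def delete_listener(amp_client, listener_id):
    """Delete a listener on the amphora; idempotent on 404."""
    try:
        amp_client.delete(f"/listeners/{listener_id}")
    except NotFound:
        # A retry after a dropped read can reach an amp that already
        # performed the delete; the operation still achieved its goal,
        # so swallow the 404 instead of failing the flow.
        pass

class FakeClient:
    """Simulates the amp agent after the first request already landed."""
    def __init__(self):
        self.calls = 0
    def delete(self, path):
        self.calls += 1
        raise NotFound()   # listener already gone on the amp

client = FakeClient()
delete_listener(client, "listener-1")   # no exception propagates
assert client.calls == 1
```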
rm_work | http://paste.openstack.org/show/617856/ | 23:34 |
rm_work | johnsom / xgerman_ | 23:34 |
rm_work | in the meantime, reviewing that fix, will prolly merge it | 23:34 |
xgerman_ | Thx | 23:35 |
rm_work | yep looks good, merging | 23:36 |
rm_work | you did basically what i was looking at doing | 23:36 |
johnsom | Hmm, wonder why the listener delete took more than 30 seconds???? | 23:36 |
rm_work | so yeah :P | 23:36 |
rm_work | yeah no clue what happened... | 23:37 |
rm_work | but it had worked | 23:37 |
rm_work | because the next try was a 404 | 23:37 |
johnsom | I wonder if haproxy took a while to stop. | 23:38 |
johnsom | Maybe we should bump that timeout up.... | 23:38 |
rm_work | hmmm | 23:39 |
rm_work | dunno, why would it take that much time? | 23:39 |
johnsom | Long running flows maybe? | 23:39 |
rm_work | err | 23:40 |
rm_work | flows? this is on the amp http request right? | 23:40 |
johnsom | No, this was delete listener which stops HAProxy (that's about the only thing that would take any time). Maybe HAProxy waits to stop until the existing flows finish | 23:41 |
rm_work | right but | 23:41 |
rm_work | taskflow isn't running on the amp | 23:41 |
rm_work | this is a request to the amp agent | 23:41 |
johnsom | TCP flow | 23:41 |
rm_work | oh | 23:41 |
rm_work | so haproxy waits for ... connections to finish? | 23:42 |
rm_work | before stopping? | 23:42 |
johnsom | https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/listener.py#L273 | 23:42 |
johnsom | Maybe, it was a guess/thinking out loud | 23:42 |
rm_work | yeah i mean it's got to be that | 23:42 |
johnsom | The whole process lifecycle for HAproxy has changed a bunch since I really messed with it. | 23:42 |
rm_work | so yes, maybe we need to either make it not wait, or increase that timeout | 23:42 |
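[editor's note] The timeout trade-off being discussed — a fixed wait budget that a slow haproxy stop can blow through, after which the retry hits the 404 above — can be sketched with a simple poll-with-deadline helper. Timings are scaled down and all names are illustrative:

```python
import time

def wait_for_stop(is_stopped, timeout=30.0, interval=0.01):
    """Poll until the process reports stopped or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_stopped():
            return True
        time.sleep(interval)
    return False

# Simulate haproxy draining connections for longer than the wait budget:
# the caller times out even though the stop eventually succeeds -- which
# is exactly how a retried DELETE can later find the listener gone.
t0 = time.monotonic()
slow_stop = lambda: time.monotonic() - t0 > 0.05   # scaled-down slow stop

assert wait_for_stop(slow_stop, timeout=0.03) is False  # budget too short
assert wait_for_stop(slow_stop, timeout=0.2) is True    # raised budget
```

Raising the budget (or not waiting for connection drain at all) are the two options rm_work names; which is right depends on whether in-flight connections should be allowed to finish.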
rm_work | johnsom: you get a chance to look at the IP issue? | 23:59 |
rm_work | this is seriously *bad* | 23:59 |
rm_work | we are leaving master broken for way too long | 23:59 |
rm_work | IM | 23:59 |
rm_work | *IMO | 23:59 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!