*** ducttape_ has joined #openstack-lbaas | 00:10 | |
*** outofmemory is now known as reedip | 00:11 | |
*** johnsom_ has quit IRC | 00:20 | |
*** ducttape_ has quit IRC | 00:23 | |
*** PK has quit IRC | 00:34 | |
*** amotoki has joined #openstack-lbaas | 00:54 | |
*** ducttape_ has joined #openstack-lbaas | 01:04 | |
*** bochi-michael has joined #openstack-lbaas | 01:33 | |
*** fnaval_ has joined #openstack-lbaas | 01:46 | |
*** paco20151113 has joined #openstack-lbaas | 01:46 | |
*** fnaval has quit IRC | 01:49 | |
*** yamamoto has joined #openstack-lbaas | 01:52 | |
*** ducttape_ has quit IRC | 01:56 | |
*** ducttape_ has joined #openstack-lbaas | 01:57 | |
*** ducttape_ has quit IRC | 02:05 | |
*** yamamoto has quit IRC | 02:14 | |
*** minwang2 has joined #openstack-lbaas | 02:22 | |
*** bochi-michael has quit IRC | 02:37 | |
*** bochi-michael has joined #openstack-lbaas | 02:38 | |
*** minwang2 has quit IRC | 02:43 | |
*** ducttape_ has joined #openstack-lbaas | 02:45 | |
*** amotoki has quit IRC | 02:52 | |
*** minwang2 has joined #openstack-lbaas | 02:53 | |
*** yamamoto has joined #openstack-lbaas | 02:58 | |
*** ducttape_ has quit IRC | 02:59 | |
*** bochi-michael has quit IRC | 02:59 | |
*** amotoki has joined #openstack-lbaas | 03:04 | |
*** ducttape_ has joined #openstack-lbaas | 03:14 | |
*** amotoki has quit IRC | 03:15 | |
*** minwang2 has quit IRC | 03:18 | |
*** yuanying has quit IRC | 03:20 | |
blogan | johnsom: regarding the egress security group stuff, pretty sure we didn't have vrrp implemented when I put that in | 03:20 |
*** amotoki has joined #openstack-lbaas | 03:20 | |
*** minwang2 has joined #openstack-lbaas | 03:25 | |
*** amotoki_ has joined #openstack-lbaas | 03:25 | |
*** amotoki has quit IRC | 03:29 | |
*** amotoki_ has quit IRC | 03:35 | |
*** kevo has joined #openstack-lbaas | 03:37 | |
*** amotoki has joined #openstack-lbaas | 03:41 | |
openstackgerrit | Stephen Balukoff proposed openstack/neutron-lbaas: L7 capability extension implementation for lbaas v2 https://review.openstack.org/148232 | 03:42 |
*** amotoki has quit IRC | 03:44 | |
*** ducttape_ has quit IRC | 03:45 | |
*** Aish has joined #openstack-lbaas | 03:52 | |
*** Aish has quit IRC | 03:52 | |
*** minwang2 has quit IRC | 03:53 | |
*** links has joined #openstack-lbaas | 03:57 | |
*** amotoki has joined #openstack-lbaas | 04:00 | |
*** allan_h has joined #openstack-lbaas | 04:03 | |
*** yuanying has joined #openstack-lbaas | 04:10 | |
*** PK has joined #openstack-lbaas | 04:13 | |
sbalukoff | By the way, with the above patch, and Evgeny's python-neutronclient patch, in theory we should be able to do end-to-end CLI testing of L7 at this point. (Evgeny's CLI patch still has a couple minor problems, but these shouldn't get in the way of testing.) | 04:19 |
*** alhu_ has joined #openstack-lbaas | 04:26 | |
*** allan_h has quit IRC | 04:28 | |
*** alhu__ has joined #openstack-lbaas | 04:39 | |
*** alhu_ has quit IRC | 04:39 | |
*** PK has quit IRC | 04:44 | |
*** PK has joined #openstack-lbaas | 05:10 | |
*** minwang2 has joined #openstack-lbaas | 05:24 | |
rm_work | awesome | 05:33 |
rm_work | sbalukoff: so i just need .... | 05:34 |
rm_work | https://review.openstack.org/148232 for neutron-lbaas and your full chain for octavia and ... which python-neutronclient patch? | 05:34 |
rm_work | ok building against those | 05:36 |
rm_work | need to figure out what python-neutronclient patch to install, but have to do that at the end manually anyway | 05:37 |
sbalukoff | rm_work: Let me get that patch for you... | 05:50 |
sbalukoff | https://review.openstack.org/#/c/217276/ | 05:50 |
sbalukoff | I'm working on updating that one as well, though-- it's not presently dependent on my shared pools patch. | 05:51 |
sbalukoff | And it's been a few days since Evgeny touched it. I'll probably ask him whether he wants me to update it before I upload a new patch set. | 05:51 |
sbalukoff | (I'm also rebuilding a stack here with all that stuff.) | 05:52 |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 documentation https://review.openstack.org/278830 | 06:06 |
sbalukoff | The above fixes a minor nit that johnsom pointed out. | 06:07 |
sbalukoff | (There are no code changes associated with it, so please carry on with your testing, eh. ;) ) | 06:07 |
*** minwang2 has quit IRC | 06:08 | |
*** minwang2 has joined #openstack-lbaas | 06:10 | |
openstackgerrit | Stephen Balukoff proposed openstack/octavia: Add L7 documentation https://review.openstack.org/278830 | 06:10 |
*** minwang2 has quit IRC | 06:16 | |
*** minwang2 has joined #openstack-lbaas | 06:16 | |
*** alhu__ has quit IRC | 06:19 | |
*** bana_k has joined #openstack-lbaas | 06:19 | |
*** Guest61736 is now known as mariusv | 06:27 | |
*** mariusv has joined #openstack-lbaas | 06:27 | |
*** PK has quit IRC | 06:46 | |
*** minwang2 has quit IRC | 07:15 | |
*** bana_k has quit IRC | 07:16 | |
*** kobis has joined #openstack-lbaas | 07:21 | |
*** chlong_ has quit IRC | 07:30 | |
*** evgenyf has joined #openstack-lbaas | 07:47 | |
*** nmagnezi has joined #openstack-lbaas | 07:50 | |
*** numans has joined #openstack-lbaas | 07:50 | |
*** kobis has quit IRC | 07:50 | |
*** kobis has joined #openstack-lbaas | 07:53 | |
paco20151113 | Hi, I installed octavia with devstack all-in-one; all services are up and running, and I can check the amphora REST API with curl | 08:03 |
paco20151113 | but the o-cw log shows WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying. | 08:03 |
paco20151113 | and lbaas-loadbalancer-list shows the load balancer stuck in "PENDING_CREATE" | 08:04 |
paco20151113 | curl -k --cert /etc/octavia/certs/client.pem https://192.168.0.8:9443/0.5/info | 08:04 |
paco20151113 | { | 08:04 |
paco20151113 | "api_version": "0.5", | 08:04 |
paco20151113 | "haproxy_version": "1.5.14-1ubuntu0.15.10.1~ubuntu14.04.1", | 08:04 |
paco20151113 | "hostname": "amphora-dab7be5b-5c82-429d-b6eb-f0e556404fd9" | 08:04 |
paco20151113 | does anyone know how to solve this issue? | 08:04 |
*** pcaruana has joined #openstack-lbaas | 08:05 | |
paco20151113 | I successfully installed octavia with stable/liberty, which sets a static route via gw br-ex to reach the lb-mgmt network. | 08:06 |
paco20151113 | I checked the commit log; it seems the devstack plugin changed with this patch https://github.com/openstack/octavia/commit/8e242323719d8ed0016ff8296fe46a7feab0745c | 08:07 |
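For reference, the stable/liberty behaviour paco20151113 describes amounts to giving the host a route into the lb-mgmt network; a rough sketch of the idea, assuming the 192.168.0.0/24 range visible in the nova list further down and br-ex as the external bridge (the exact route the old plugin set up may have differed):

    sudo ip route add 192.168.0.0/24 dev br-ex

The linked commit changes how the master-branch devstack plugin plumbs the host into that network, which is what paco20151113 suspects is behind the worker's connection failures.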
sbalukoff | paco20151113: It takes a while for the amphora to boot. | 08:08 |
sbalukoff | paco20151113: Can you check the status of the amphora in 'nova list'? | 08:08 |
paco20151113 | I don't know why the amphorae.drivers rest_api_driver cannot access the instance, even though I can access it with curl from the client. | 08:08 |
paco20151113 | stack@48-28:~/neutron-lbaas$ nova list | 08:09 |
paco20151113 | +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+ | 08:09 |
paco20151113 | | ID | Name | Status | Task State | Power State | Networks | | 08:09 |
paco20151113 | +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+ | 08:09 |
paco20151113 | | dce363f4-8d4f-44b3-bd38-d2223321f304 | amphora-2a945bf3-0481-4bfa-96d4-0fb993c340ac | ACTIVE | - | Running | lb-mgmt-net=192.168.0.7; private=fdf5:fc3f:1d33:0:f816:3eff:feca:cfd6, 10.0.0.7 | | 08:09 |
paco20151113 | | 8ffa5f4e-1ce2-4b3f-aec3-9b0e57c50fb0 | amphora-3219faf4-1af7-4e68-9169-4408a9093621 | ACTIVE | - | Running | lb-mgmt-net=192.168.0.9; private=fdf5:fc3f:1d33:0:f816:3eff:febb:8035, 10.0.0.10 | | 08:09 |
paco20151113 | | b421a2d8-4e14-4262-ae6c-44ed79b9c282 | amphora-44b7450d-f0bd-41b0-9790-fbdc9ebe2a2a | ACTIVE | - | Running | lb-mgmt-net=192.168.0.5; private=fdf5:fc3f:1d33:0:f816:3eff:feea:7846, 10.0.0.4 | | 08:09 |
paco20151113 | | 36e338c2-cbaa-47ca-a18f-3161b29fddc8 | amphora-dab7be5b-5c82-429d-b6eb-f0e556404fd9 | ACTIVE | - | Running | lb-mgmt-net=192.168.0.8; private=fdf5:fc3f:1d33:0:f816:3eff:fe26:16f5, 10.0.0.11 | | 08:09 |
paco20151113 | | a392af2f-7986-475e-b60a-9fe1ebfa8b67 | amphora-f360fce4-8b6d-4454-95f7-bf2d3c8da6e8 | ACTIVE | - | Running | lb-mgmt-net=192.168.0.6; private=fdf5:fc3f:1d33:0:f816:3eff:fecc:9737, 10.0.0.8 | | 08:09 |
paco20151113 | | e39e512d-3a02-4b1e-8aeb-994430bfd8fd | amphora-f7601f19-62ce-4568-9c95-f60ad90aaf94 | ACTIVE | - | Running | lb-mgmt-net=192.168.0.4; private=fdf5:fc3f:1d33:0:f816:3eff:feb7:fa0, 10.0.0.5 | | 08:09 |
paco20151113 | +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+ | 08:09 |
sbalukoff | Wow, you've got quite a few of them | 08:09 |
paco20151113 | sorry, it's a little bit long, but I am sure the instances are all up and running well. | 08:09 |
paco20151113 | since I can access the REST API with curl. | 08:09 |
sbalukoff | Yeah. | 08:09 |
sbalukoff | This is on stable/liberty? | 08:10 |
paco20151113 | no , it's master | 08:10 |
sbalukoff | Oh, ok! | 08:10 |
sbalukoff | So, there is a bug (just filed by me this evening) where the load balancer can sit in the pending_create state for 5 minutes or more. | 08:11 |
sbalukoff | Even after the amphora is up | 08:11 |
sbalukoff | how long did you let the controller worker try to bring the amphora all the way up? | 08:11 |
sbalukoff | This is the bug I'm talking about: https://bugs.launchpad.net/octavia/+bug/1548212 | 08:12 |
openstack | Launchpad bug 1548212 in octavia "Load balancer sits in PENDING_CREATE state much longer than necessary" [Undecided,New] | 08:12 |
sbalukoff | If that sounds similar to what you're seeing, you might subscribe to that bug. I was going to try to tackle it this week if nobody else does. | 08:12 |
paco20151113 | amp_active_wait_sec = 20 | 08:13 |
paco20151113 | amp_active_retries = 100 | 08:13 |
sbalukoff | paco20151113: Yep, it should notice the amphora is up much quicker than it is. For some reason those values are not being respected here... | 08:13 |
sbalukoff | (That's the nature of this bug.) | 08:13 |
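For context, the two values paco20151113 quoted above are octavia.conf settings that bound how long the controller worker polls nova while waiting for the amphora to go ACTIVE. A minimal sketch of the relevant snippet, assuming they sit in the [controller_worker] section as in the sample config:

    [controller_worker]
    # seconds to sleep between checks of the amphora's status
    amp_active_wait_sec = 20
    # number of checks before giving up on the boot
    amp_active_retries = 100

With values like these the worker should notice an ACTIVE amphora within roughly one wait interval, which is why a multi-minute PENDING_CREATE points at the bug above rather than at the configuration.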
paco20151113 | is there any workaround for this? | 08:14 |
sbalukoff | For now, wait for 5 minutes for the controller worker to finish bringing the amphora up. | 08:14 |
sbalukoff | That is, if it's the same bug you are experiencing. | 08:15 |
sbalukoff | If the load balancer never leaves the PENDING_CREATE state (even after 10 or 15 minutes), then you might have encountered a different bug. | 08:16 |
sbalukoff | Also-- once the amphora is up, the rest of the workflow (create listener, create pool, create member, etc.) should all go really quickly-- a second or two at most to execute. | 08:18 |
sbalukoff | Er... once the load balancer is ACTIVE. | 08:18 |
paco20151113 | maybe I encountered another bug, because >20 minutes later it moved to "ERROR". | 08:18 |
sbalukoff | Ok. | 08:18 |
sbalukoff | If you can gather up your logs and submit a bug report, we can look into it. | 08:19 |
paco20151113 | ok, sure, will do. thanks for the help | 08:19 |
sbalukoff | No problem, eh. | 08:19 |
sbalukoff | Ok, I'mma go to bed. Have a good night, y'all! | 08:20 |
paco20151113 | bye. | 08:20 |
paco20151113 | have a good night. | 08:21 |
*** jschwarz has joined #openstack-lbaas | 08:53 | |
openstackgerrit | Chaozhe Chen(ccz) proposed openstack/octavia: Trivial: cleanup unused conf and log variables https://review.openstack.org/283007 | 09:03 |
*** amotoki has quit IRC | 09:06 | |
*** amotoki has joined #openstack-lbaas | 09:14 | |
*** amotoki has quit IRC | 09:35 | |
*** chlong_ has joined #openstack-lbaas | 10:06 | |
*** paco20151113 has quit IRC | 10:10 | |
*** yamamoto has quit IRC | 10:28 | |
*** yamamoto has joined #openstack-lbaas | 11:10 | |
*** yamamoto has quit IRC | 11:14 | |
*** zigo has quit IRC | 11:40 | |
*** zigo has joined #openstack-lbaas | 11:41 | |
*** links has quit IRC | 11:53 | |
*** links has joined #openstack-lbaas | 12:07 | |
*** chlong_ has quit IRC | 12:17 | |
*** yamamoto has joined #openstack-lbaas | 12:41 | |
*** nmagnezi has quit IRC | 12:42 | |
*** rtheis has joined #openstack-lbaas | 12:44 | |
*** nmagnezi has joined #openstack-lbaas | 12:57 | |
*** links has quit IRC | 13:08 | |
*** amotoki has joined #openstack-lbaas | 13:08 | |
*** yamamoto has quit IRC | 13:10 | |
*** ducttape_ has joined #openstack-lbaas | 13:12 | |
*** yamamoto has joined #openstack-lbaas | 13:16 | |
*** amotoki has quit IRC | 13:18 | |
*** links has joined #openstack-lbaas | 13:22 | |
*** numans has quit IRC | 13:23 | |
*** ducttape_ has quit IRC | 13:24 | |
*** yamamoto has quit IRC | 13:33 | |
*** links has quit IRC | 13:50 | |
*** numans has joined #openstack-lbaas | 13:55 | |
*** neelashah has joined #openstack-lbaas | 13:56 | |
*** amotoki has joined #openstack-lbaas | 13:58 | |
*** chlong_ has joined #openstack-lbaas | 14:06 | |
*** localloop127 has joined #openstack-lbaas | 14:07 | |
*** openstackgerrit has quit IRC | 14:17 | |
*** openstackgerrit has joined #openstack-lbaas | 14:18 | |
*** piet has joined #openstack-lbaas | 14:27 | |
*** doug-fish has joined #openstack-lbaas | 14:39 | |
*** nmagnezi has quit IRC | 14:53 | |
*** ducttape_ has joined #openstack-lbaas | 15:00 | |
*** ducttape_ has quit IRC | 15:00 | |
*** ducttape_ has joined #openstack-lbaas | 15:00 | |
*** nmagnezi has joined #openstack-lbaas | 15:03 | |
*** woodster_ has joined #openstack-lbaas | 15:34 | |
*** localloop127 has quit IRC | 15:41 | |
*** PK has joined #openstack-lbaas | 15:52 | |
*** numans has quit IRC | 15:58 | |
*** ajmiller has joined #openstack-lbaas | 15:59 | |
*** PK has quit IRC | 16:03 | |
*** TrevorV|Home has joined #openstack-lbaas | 16:05 | |
*** Oku_OS has joined #openstack-lbaas | 16:08 | |
*** armax has joined #openstack-lbaas | 16:10 | |
johnsom | sbalukoff When you are on again, I would like to talk about your bug: https://bugs.launchpad.net/octavia/+bug/1548212 | 16:18 |
openstack | Launchpad bug 1548212 in octavia "Load balancer sits in PENDING_CREATE state much longer than necessary" [High,New] | 16:18 |
*** sbalukoff has quit IRC | 16:21 | |
xgerman | isn’t that configurable? | 16:23 |
*** pcaruana has quit IRC | 16:27 | |
*** nmagnezi has quit IRC | 16:29 | |
*** fnaval_ has quit IRC | 16:30 | |
*** PK has joined #openstack-lbaas | 16:34 | |
johnsom | Yes, and the defaults are pretty short, so I'm looking for more information. | 16:39 |
johnsom | FYI, the neutron client "IndexError" issue is fixed, pending release, and tracked here: https://bugs.launchpad.net/python-cliff/+bug/1539770 | 16:40 |
openstack | Launchpad bug 1539770 in cliff "Empty set causing out of range error" [Undecided,Fix released] - Assigned to Doug Hellmann (doug-hellmann) | 16:40 |
* johnsom wears his oslo liaison hat for a minute | 16:40 | |
*** localloop127 has joined #openstack-lbaas | 16:44 | |
*** PK has quit IRC | 16:45 | |
*** amotoki has quit IRC | 16:45 | |
*** Purandar has joined #openstack-lbaas | 16:46 | |
*** fnaval has joined #openstack-lbaas | 16:49 | |
*** jwarendt has joined #openstack-lbaas | 16:54 | |
*** TrevorV|Home has quit IRC | 16:54 | |
*** jschwarz has quit IRC | 16:56 | |
*** jwarendt has quit IRC | 16:58 | |
*** kobis has quit IRC | 16:58 | |
*** jwarendt has joined #openstack-lbaas | 17:01 | |
*** jwarendt has quit IRC | 17:05 | |
*** jwarendt has joined #openstack-lbaas | 17:06 | |
*** minwang2 has joined #openstack-lbaas | 17:13 | |
*** piet has quit IRC | 17:13 | |
openstackgerrit | Merged openstack/neutron-lbaas: Updated from global requirements https://review.openstack.org/282752 | 17:23 |
*** alhu__ has joined #openstack-lbaas | 17:33 | |
*** alhu__ has quit IRC | 17:33 | |
*** kevo has quit IRC | 17:36 | |
*** Purandar has quit IRC | 17:36 | |
*** allan_h has joined #openstack-lbaas | 17:40 | |
*** Aish has joined #openstack-lbaas | 17:50 | |
rm_work | sbalukoff why are you not here T_T | 17:57 |
*** ducttape_ has quit IRC | 17:57 | |
rm_work | need the python-neutronclient patch rebased on top of whatever fix happened for the index out of range bug | 17:57 |
rm_work | or ... | 17:58 |
rm_work | maybe i can cherry-pick them | 17:58 |
*** piet has joined #openstack-lbaas | 18:05 | |
*** ducttape_ has joined #openstack-lbaas | 18:08 | |
rm_work | oh it's cliff <_< | 18:14 |
johnsom | Yeah | 18:14 |
johnsom | cliff 2.0 fixes it | 18:15 |
rm_work | eh whatever i guess it doesn't impact my testing | 18:15 |
johnsom | Or, go back in time to 1.15.0 | 18:15 |
rm_work | ah yeah just did -U cliff | 18:16 |
rm_work | and it seems good | 18:16 |
rm_work | does requirements cap cliff before 2.0? | 18:16 |
rm_work | otherwise i thought i should have gotten this already | 18:16 |
johnsom | Yeah, it's still in the approval process to release | 18:16 |
rm_work | ah i stacked last night, if they released this morning | 18:16 |
rm_work | ok so per sbalukoff the l7 stuff in neutron-lbaas isn't quite ready so i can't just use the client yet for full testing? | 18:17 |
johnsom | I thought he said last night it was good enough for an end-to-end | 18:17 |
rm_work | backwards-compatible | 18:17 |
rm_work | but not L7 i think | 18:17 |
rm_work | something about the octavia driver | 18:18 |
johnsom | To quote: <sbalukoff> By the way, with the above patch, and Evgeny's python-neutronclient patch, in theory we should be able to do end-to-end CLI testing of L7 at this point. (Evgeny's CLI patch still has a couple minor problems, but these shouldn't get in the way of testing.) | 18:18 |
ajmiller | rm_work johnsom I just posted a -1 review for the shared pools patch. It breaks the dashboard panels. | 18:18 |
johnsom | https://review.openstack.org/148232 | 18:18 |
johnsom | ajmiller Ok, good to know! | 18:19 |
ajmiller | The default pool ID for the listener is coming back empty. That is how the UI learns about pool/members/HMs | 18:19 |
rm_work | ah ok johnsom cool | 18:20 |
johnsom | I will load it up this morning after the meetings die down | 18:20 |
rm_work | i've got it now | 18:21 |
rm_work | going through my baseline | 18:21 |
*** kevo has joined #openstack-lbaas | 18:24 | |
*** Purandar has joined #openstack-lbaas | 18:33 | |
rm_work | BTW remember we still need this: https://review.openstack.org/#/c/276802/ | 18:36 |
*** piet has quit IRC | 18:36 | |
rm_work | because we aren't getting requirements updates/checks | 18:36 |
rm_work | verifying if we need to do anything else on our side to prepare for that | 18:36 |
rm_work | hmm, not sure how to create multiple pools for one LB with neutron-client | 18:40 |
rm_work | no option that I see for setting it on a LB not a listener, and doing a second pool on a listener comes back with an error about default_pool already set | 18:40 |
*** Aish has quit IRC | 18:41 | |
rm_work | oh wait was sbalukoff going to be off today? | 18:45 |
*** neelashah has quit IRC | 18:46 | |
xgerman | rm_work it seems like every now and then he shows up at 4am… maybe he moved to Australia? | 18:53 |
*** Aish has joined #openstack-lbaas | 18:53 | |
ajmiller | hmmm johnsom, rm_work, I redid my UI test with the shared pools patch, now I'm seeing the default pool.. | 18:57 |
rm_work | so ... it's working then? :P | 18:57 |
rm_work | that'd be good news :) | 18:58 |
ajmiller | It seems to be... But I need to double-check what I've done. | 18:58 |
*** neelashah has joined #openstack-lbaas | 19:07 | |
*** evgenyf has quit IRC | 19:10 | |
neelashah | johnsom rm_work - has this bug been reported before? https://bugs.launchpad.net/neutron/+bug/1548386 we are running into this as part of the dashboard work for Mitaka | 19:14 |
openstack | Launchpad bug 1548386 in neutron "LBaaS v2: Unable to edit load balancer that has a listener" [Undecided,New] | 19:14 |
*** sbalukoff has joined #openstack-lbaas | 19:15 | |
sbalukoff | Hey ajmiller: Are you around? | 19:15 |
ajmiller | sbalukoff: I'm around but in a call right now... | 19:15 |
ajmiller | I'll be slow to respond till that's over. | 19:16 |
johnsom | neelashah I have not seen that before. However, I don't use the namespace driver much. | 19:16 |
sbalukoff | Ok, just looking to get more information on the bug you found, specifically which devstack services to enable in order to reproduce the GUI bug you found. | 19:16 |
rm_work | ah sbalukoff is back | 19:16 |
rm_work | [12:40:04] <rm_work>hmm, not sure how to create multiple pools for one LB with neutron-client | 19:16 |
rm_work | [12:40:30] <rm_work>no option that I see for setting it on a LB not a listener, and doing a second pool on a listener comes back with an error about default_pool already set | 19:16 |
rm_work | sbalukoff: ^^ | 19:16 |
sbalukoff | (Since, from what I can tell, the API is still returning the data it should.) | 19:16 |
ajmiller | sbalukoff Well, I just set it up in what I thought was exactly the same way, and it seems to be working | 19:16 |
sbalukoff | rm_work: Right. You need L7 in order to do that | 19:17 |
rm_work | err | 19:17 |
* ajmiller is trying to figure out just what he did. | 19:17 | |
rm_work | but i thought you created pool2 first | 19:17 |
rm_work | then pointed l7 at it | 19:17 |
sbalukoff | rm_work: Oh! | 19:17 |
sbalukoff | rm_work: Are you using the CLI that has shared-pool updates? | 19:17 |
rm_work | yes | 19:17 |
rm_work | the patch you linked earlier | 19:17 |
rm_work | right? | 19:17 |
rm_work | i see l7 stuff in the tab-complete for commands under lbaas- | 19:17 |
sbalukoff | rm_work: I... think so? I dunno. Evgeny has a patch that adds L7 support but his CLI patch is not dependent on my CLI patch, so it's possible you don't have the shared pools CLI patch.. | 19:18 |
sbalukoff | Let me see... | 19:18 |
sbalukoff | rm_work: This is the CLI shared pools patch: https://review.openstack.org/#/c/218563/ | 19:19 |
sbalukoff | (I've asked Evgeny if I can make his patch dependent on mine, but I have not done that yet as I didn't want to step on his toes.) | 19:19 |
sbalukoff | rm_work: In any case, with that patch (or rather the two relevant files in it) you should be able to create a pool that's not associated with a listener by specifying the --loadbalancer on the pool create command. | 19:20 |
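A minimal sketch of that command, reusing the lb1/pool2 names from this conversation; --loadbalancer is the part the shared-pools CLI patch adds, the other flags are the usual required lbaas-pool-create options:

    neutron lbaas-pool-create --name pool2 --lb-algorithm ROUND_ROBIN --protocol HTTP --loadbalancer lb1

That attaches the pool directly to the load balancer rather than making it a listener's default pool, so an L7 REDIRECT_TO_POOL policy can point at it later.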
madhu_ak | fnaval: there is a response from Andrea about tempest-plugin ML | 19:20 |
rm_work | aaah that is what you meant | 19:20 |
rm_work | ok I think that's the issue, i need both | 19:21 |
sbalukoff | ajmiller: Thanks for looking into this more closely, eh! | 19:21 |
rm_work | awesome, grabbed that and now it works | 19:22 |
rm_work | thanks sbalukoff | 19:22 |
sbalukoff | Yay! | 19:22 |
rm_work | in fact i just hit up-arrow and re-ran my *guess* at the right command, and it worked :) | 19:22 |
sbalukoff | So it's intuitive to boot, eh? ;) | 19:22 |
rm_work | yep | 19:23 |
rm_work | hmmmm | 19:25 |
*** neelashah1 has joined #openstack-lbaas | 19:25 | |
rm_work | neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool pool2 --listener listener1 | 19:25 |
rm_work | "Redirect pool id is missing for L7 Policy with pool redirect action." | 19:25 |
*** neelashah has quit IRC | 19:25 | |
rm_work | tried with the ID directly too | 19:25 |
rm_work | either the help text is wrong (it says to use "--redirect-pool") or else there's a bug | 19:25 |
rm_work | sbalukoff: ^^ | 19:25 |
sbalukoff | rm_work: I suspect there's a bug. I have not delved too deeply into Evgeny's CLI patch, but I know there are a few problems with it. | 19:26 |
sbalukoff | Bah... Ok, I'm going to work on patching it-- I hope he doesn't mind me stepping on his toes in the interests of getting it working in time for the deadline. :P | 19:27 |
rm_work | back to CLI testing | 19:28 |
rm_work | johnsom: how far have you gotten with L7 testing? | 19:28 |
rm_work | I've been hammering at it for 3 days or so now, but i still am not sure how much i've actually managed to test, manual testing is insane for this product T_T | 19:28 |
johnsom | rm_work I have finished octavia regression testing without neutron-lbaas/neutron client. | 19:28 |
rm_work | I'm really close to rage+2 | 19:28 |
sbalukoff | rm_work: Do legacy things work with the CLI stuff you've tested thus far? (ie. stuff you could do prior to shared-pools and l7?) | 19:29 |
rm_work | sbalukoff: so far as i've tested, though with the CLI i haven't really done much besides basic ops | 19:29 |
johnsom | rm_work That looks good. I'm going to do end-to-end today. If that goes well, I would be in favor of merge tomorrow/late tonight | 19:29 |
sbalukoff | rm_work: I've been trying hard not to rebase on all the L7 stuff! | 19:29 |
*** ducttape_ has quit IRC | 19:29 | |
rm_work | heh | 19:29 |
rm_work | johnsom: yeah sounds good to me | 19:29 |
sbalukoff | Yay! | 19:29 |
rm_work | I might do a final walkthrough of the code and then +2 as I go | 19:30 |
sbalukoff | I am so looking forward to entering bugfix mode instead of "keep the house of cards standing" mode. | 19:30 |
sbalukoff | :) | 19:30 |
rm_work | yes | 19:30 |
rm_work | I think that's better for everyone's sanity | 19:30 |
rm_work | even if something breaks... we'll find out and fix | 19:30 |
rm_work | and it'll be less of a nightmare than retesting everything constantly | 19:30 |
johnsom | Ok. Would you all mind asking the other cores to hold +A until I get through tests today? | 19:30 |
sbalukoff | Well, at this point, I know I conflict with what minwang is doing in her big patch under consideration, and I'd like to not hold up anyone else's work. | 19:31 |
rm_work | johnsom: I doubt anyone has the balls to +A this without asking in channel first :P | 19:31 |
sbalukoff | That and Trevor | 19:31 |
johnsom | Ok, cool. | 19:31 |
rm_work | well trevor rebased on top of you | 19:31 |
rm_work | so he should be fine | 19:31 |
johnsom | I just want one good end-to-end test day, then I think we are good for merge and fix as necessary | 19:31 |
sbalukoff | johnsom: I'll get going immediately on fixing Evgeny's python-neutronclient L7 CLI patch. | 19:32 |
johnsom | We do have a bunch of patches in +2 state. I'm flexible on order | 19:32 |
rm_work | yeah i'm going to throw a couple more test cases at this and go into full syntax-review mode | 19:32 |
rm_work | i think we focus on L7 and merge anything else that still works afterwards :P | 19:33 |
johnsom | Yeah, I think that works. Heck, half of them are sbalukoff's anyway | 19:33 |
rm_work | someone's been a busy bee | 19:34 |
johnsom | +1 | 19:34 |
johnsom | It's great | 19:34 |
sbalukoff | Haha! | 19:34 |
rm_work | actually we can start +A-ing from the top down | 19:34 |
johnsom | sbalukoff I wanted a bit more info on this: https://bugs.launchpad.net/octavia/+bug/1548212 | 19:34 |
openstack | Launchpad bug 1548212 in octavia "Load balancer sits in PENDING_CREATE state much longer than necessary" [High,New] | 19:34 |
rm_work | like docs | 19:34 |
johnsom | rm_work Already ahead of you on that one | 19:34 |
sbalukoff | My intent is to push IBM as hard as I fscking can to start using Octavia big-time with Mitaka. So, I *need* to make sure that Octavia doesn't suck when I start the internal pressure. | 19:35 |
johnsom | sbalukoff Do you have time to chat about it or later? | 19:35 |
sbalukoff | johnsom: Yes | 19:35 |
johnsom | Great. Was it looping on "Could not connect to instance" or just hung after one? | 19:35 |
sbalukoff | johnsom: I have a lot of ideas there, and I suspect that the unreasonable REST timeouts have a lot to do with it: I know you're working on that fix and figured if you were stuck you'd let people know. | 19:36 |
sbalukoff | johnsom: It did that a couple times then hung. | 19:36 |
rm_work | sbalukoff: i am a bit sad the sequence for l7 rules starts with 1 <_< | 19:36 |
sbalukoff | johnsom: which is why I suspect the REST timeout thing. | 19:36 |
sbalukoff | rm_work: Yeah, I fought with Evgeny a while over it. | 19:37 |
rm_work | was he for 1 or 0 | 19:37 |
johnsom | sbalukoff Ok. Yeah, if it didn't keep looping, that is bad. If you had the debug log for that it would be handy on the bug. | 19:37 |
sbalukoff | rm_work: I start counting with zero as well-- but at that point it was simpler just for Octavia to mirror what Evgeny had written Neutron-LBaaS to do. | 19:37 |
sbalukoff | I was arguing for 0 | 19:37 |
rm_work | i guarantee somewhere someone will have to write code that offsets for 1 | 19:37 |
rm_work | probably a lot of people >_> | 19:37 |
rm_work | ah well | 19:38 |
sbalukoff | johnsom: I don't have a debug log, but could probably reproduce this problem within a few days. I've seen it often enough. | 19:38 |
johnsom | sbalukoff Sadly, I assigned that to myself, but have been focusing on reviews/testing and haven't started. If you have cycles, feel free to take it | 19:38 |
ajmiller | sbalukoff: I isolated the broken LB creation workflow, and I don't think it is related to your code. I think it is problem in the dashboard panels when creating HTTPS LBs without valid certs. | 19:38 |
ajmiller | One more test and I'll know for sure. | 19:38 |
sbalukoff | ajmiller: Oh, ok! | 19:39 |
sbalukoff | johnsom: Ok-- I think I might be juggling some of this CLI update stuff today, but if you get to it before me, I won't be pissed about duplicating effort to get that one fixed. | 19:40 |
johnsom | Ok | 19:40 |
sbalukoff | Especially if it does lead to clearing up the intermittent scenario test failure bug. | 19:40 |
*** ducttape_ has joined #openstack-lbaas | 19:44 | |
*** yamamoto has joined #openstack-lbaas | 19:48 | |
sbalukoff | Ok, gonna grab lunch before I get too buried in this CLI code. | 19:51 |
*** localloop127 has quit IRC | 19:54 | |
*** localloop127 has joined #openstack-lbaas | 20:00 | |
johnsom | Yeah, just grabbed a sandwich myself. | 20:07 |
fnaval | madhu_ak: yep, I saw the response; mtrenish had replied to me via IRC on that | 20:12 |
fnaval | basically, use tempest_lib and tempest together; it's totally better than copying the code in tree | 20:13 |
fnaval | even though tempest might not be stable, it's still better | 20:13 |
madhu_ak | fnaval, sure. In that case, do we need to propose any files in tempest-lib, so it can be stable? | 20:13 |
fnaval | madhu_ak: so there is a QA midcycle meet up happening this week. I've spoken to Rackspace folks who are going there about that. Hopefully, they'll have a plan of attack. If not, then we'll have to start attending their meetings and find out what they need. | 20:14 |
fnaval | Either way, I might just start attending their meetings and see where they're at and see what help they need. | 20:15 |
fnaval | I think that's far better than just throwing code out there. | 20:15 |
madhu_ak | sounds good fnaval | 20:16 |
fnaval | madhu_ak: if you're able to attend, here's some info: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting | 20:16 |
johnsom | sbalukoff I have the REST API timeout bug. Give me ~30 | 20:17 |
fnaval | madhu_ak: according to this etherpad, they have some migration work that's prioritized; I suppose we can help with reviews https://etherpad.openstack.org/p/mitaka-qa-priorities | 20:17 |
madhu_ak | sure fnaval. I shall attend it. if you have outlook invite in handy, can you forward them? | 20:17 |
fnaval | yep | 20:17 |
sbalukoff | johnsom: Sound good! | 20:18 |
fnaval | well, I'm not sure of the timings yet - they'll be figuring that out this week. so the week after next might be the restart of it | 20:18 |
madhu_ak | sure, fnaval. | 20:19 |
johnsom | I'm thinking a 60 second timeout on the REST API calls. Comments? | 20:19 |
fnaval | madhu_ak: added you to the cal | 20:19 |
johnsom | (by default of course) | 20:19 |
madhu_ak | awesome. Thanks | 20:20 |
*** itsuugo has quit IRC | 20:20 | |
*** bharathm has quit IRC | 20:20 | |
*** mugsie has quit IRC | 20:20 | |
*** reedip has quit IRC | 20:20 | |
*** mestery has quit IRC | 20:20 | |
*** _laco has quit IRC | 20:20 | |
*** dnovosel has quit IRC | 20:20 | |
*** mestery has joined #openstack-lbaas | 20:20 | |
*** mugsie has joined #openstack-lbaas | 20:20 | |
*** dnovosel has joined #openstack-lbaas | 20:22 | |
fnaval | madhu_ak: they have a weird alternating schedule - one meeting at 1700 UTC and one at 0900 UTC. I plan to just attend the 1700 UTC one. | 20:22 |
*** itsuugo has joined #openstack-lbaas | 20:22 | |
madhu_ak | exactly. it's 3am | 20:22 |
*** bharathm has joined #openstack-lbaas | 20:22 | |
*** reedip has joined #openstack-lbaas | 20:23 | |
*** piet has joined #openstack-lbaas | 20:24 | |
*** _laco has joined #openstack-lbaas | 20:34 | |
*** prabampm has quit IRC | 20:37 | |
*** kevo has quit IRC | 20:37 | |
*** jwarendt has quit IRC | 20:37 | |
*** prabampm has joined #openstack-lbaas | 20:38 | |
*** jwarendt has joined #openstack-lbaas | 20:38 | |
*** crc32 has joined #openstack-lbaas | 20:44 | |
*** Purandar has quit IRC | 20:44 | |
*** doug-fish has quit IRC | 20:45 | |
johnsom | Hmmm, given we retry these connections in the driver, I think I am going to drop the default timeout down to 10 seconds. That is more than enough for a connect/read timeout IMHO. | 20:47 |
*** openstackgerrit has quit IRC | 20:47 | |
*** openstackgerrit has joined #openstack-lbaas | 20:48 | |
*** doug-fish has joined #openstack-lbaas | 20:48 | |
sbalukoff | johnsom: I was thinking 3 seconds is enough-- if the REST server stops sending data for that long something is wrong. I mean: Are we doing any synchronous operations with it? | 20:49 |
sbalukoff | (stops sending data without closing the connection, I mean.) | 20:49 |
sbalukoff | Though I think we should retry if that happens, eh. | 20:50 |
sbalukoff | (I had a peek at the code: We retry on connection timeout, but not on read timeout-- we should probably retry at least a few times on read timeout as well.) | 20:51 |
johnsom | We do retry. Here is the docs for the setting: http://www.python-requests.org/en/latest/user/advanced/#timeouts | 20:51 |
johnsom | Yeah, I'm covering both connect and read (using the float option) | 20:52 |
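For reference, the python-requests behaviour being discussed: a single float is used for both the connect and the read timeout, while a tuple splits them. A minimal sketch against the amphora API URL from earlier in the log, with the client-cert and verify details omitted:

    import requests

    # one float: applied to both the connect timeout and the read timeout
    requests.get('https://192.168.0.8:9443/0.5/info', timeout=5.0)

    # tuple form: (connect timeout, read timeout) can be tuned separately
    requests.get('https://192.168.0.8:9443/0.5/info', timeout=(10.0, 60.0))

The connect timeout bounds the TCP handshake; the read timeout bounds how long to wait for data once connected, which matters here because the driver already retries failed attempts.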
sbalukoff | Oh, ok! | 20:52 |
*** doug-fish has quit IRC | 20:53 | |
johnsom | How about 5 to split the difference? | 20:53 |
johnsom | It's in the config, so you can tune as fits in your deployment | 20:54 |
*** kevo has joined #openstack-lbaas | 20:54 | |
xgerman | k, let’s keep it brief | 20:54 |
rm_work | johnsom: seems reasonable | 20:54 |
sbalukoff | johnsom: Sounds good! | 20:56 |
openstackgerrit | Michael Johnson proposed openstack/octavia: Add a request timeout to the REST API driver https://review.openstack.org/283255 | 20:59 |
sbalukoff | Nice! | 20:59 |
johnsom | There we go | 21:00 |
*** Kiall has quit IRC | 21:00 | |
*** Kiall has joined #openstack-lbaas | 21:00 | |
*** Kiall has quit IRC | 21:00 | |
*** Kiall has joined #openstack-lbaas | 21:01 | |
rm_work | i'm working my way down the L7 reviews now | 21:02 |
sbalukoff | Great! | 21:03 |
rm_work | sbalukoff: you had a lot of TODOs in the last one i looked at... flows, i think | 21:03 |
rm_work | i assume that's stuff that won't be addressed immediately | 21:03 |
rm_work | around status tree updates | 21:03 |
sbalukoff | rm_work: Yes, they are similar to the other todos that johnsom has for reverts in other flows. | 21:03 |
rm_work | yeah, seemed that way | 21:04 |
sbalukoff | We do need to revisit the revert strategy on a lot of flows. | 21:04 |
johnsom | If that is the one I already +2'd it is consistent with the existing need to update the revert paths | 21:04 |
rm_work | yeah | 21:05 |
rm_work | man, we define _test_lb_and_listener_statuses like 80 times it seems | 21:05 |
rm_work | I guess they are all VERY slightly different? | 21:06 |
sbalukoff | Yeah. | 21:06 |
rm_work | or else there's just no good place to put that? | 21:06 |
rm_work | it seems almost like the difference is the string in the LOG message | 21:06 |
rm_work | >_> | 21:06 |
sbalukoff | I'm thinking that someone should eventually move that to a file in octavia/common or something, and abstract it out enough that it's usable from all the API controllers. | 21:06 |
rm_work | ah well | 21:06 |
rm_work | yeah | 21:07 |
rm_work | or the base controller | 21:07 |
sbalukoff | Oh yes! There! | 21:07 |
rm_work | can still override it if necessary | 21:07 |
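A rough sketch of the refactor being floated here, using purely hypothetical names rather than Octavia's actual classes: hoist the shared check into a base controller and leave only the wording to subclasses.

    class BaseController(object):
        """Hypothetical sketch; names do not match Octavia's real code."""

        _object_name = 'load balancer'   # subclasses override just the wording

        def _test_lb_and_listener_statuses(self, lb, listeners=()):
            # Shared mutability check: refuse to act on anything mid-operation.
            for obj in (lb,) + tuple(listeners):
                if obj.provisioning_status.startswith('PENDING'):
                    raise RuntimeError('%s %s is immutable while an operation '
                                       'is in progress' % (self._object_name, obj.id))

    class ListenerController(BaseController):
        _object_name = 'listener'        # the only per-controller difference here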
rm_work | i'll look at doing that at some point when I'm up for 20 hours and my OCD kicks in and I can't focus on the things i'm ACTUALLY supposed to be doing | 21:07 |
sbalukoff | Heh! Ok. | 21:07 |
openstackgerrit | Merged openstack/neutron-lbaas: Shared pools support https://review.openstack.org/218560 | 21:08 |
*** Kiall has quit IRC | 21:09 | |
sbalukoff | Woot! | 21:09 |
sbalukoff | There it is! | 21:09 |
johnsom | Well, I guess Al has the balls to +A | 21:09 |
*** Kiall has joined #openstack-lbaas | 21:10 | |
sbalukoff | Indeed. | 21:10 |
rm_work | rofl | 21:14 |
rm_work | on the n-lbaas side | 21:14 |
rm_work | that's fine :P | 21:14 |
sbalukoff | Heh! The Octavia shared-pools patch has been merged for a couple weeks now. :) | 21:14 |
sbalukoff | Oh! I need to unmark the shared-pools CLI patch from being WIP. | 21:15 |
rm_work | sbalukoff: yeah we should merge that | 21:16 |
rm_work | seems to work | 21:16 |
sbalukoff | If you want to +1 it so it gets attention, here it is: https://review.openstack.org/#/c/218563/ | 21:17 |
*** doug-fish has joined #openstack-lbaas | 21:22 | |
*** crc32 has quit IRC | 21:22 | |
sbalukoff | rm_work: It looks like Evgeny beat me to the punch on getting his L7 CLI updated. The tests you were doing with the CLI that didn't work: Were those using the patchset that Evgeny uploaded this morning? | 21:24 |
*** crc32 has joined #openstack-lbaas | 21:28 | |
*** Purandar has joined #openstack-lbaas | 21:29 | |
rm_work | err | 21:30 |
rm_work | define "this morning" | 21:30 |
rm_work | probably newest as of like 4 hours ago | 21:30 |
johnsom | Ok, what stupid mistake am I making. I did a checkout on https://review.openstack.org/#/c/148232/, then python setup.py install, ran the db migration. | 21:30 |
rm_work | but, I hadn't cherry-picked your shared-pools patch yet, once i did that everything seemed fine I think | 21:30 |
johnsom | But starting up q-svc gives me: ImportError: Plugin 'neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2' not found. | 21:30 |
rm_work | johnsom: *sudo* python setup.py install? :P | 21:31 |
*** doug-fish has quit IRC | 21:31 | |
johnsom | yep | 21:31 |
rm_work | just guessing, i am usually running as stack user :P | 21:31 |
johnsom | yes | 21:31 |
rm_work | heh yeah dunno... you can try with bit.do/devstack | 21:32 |
rm_work | just put both relevant patches in there | 21:32 |
rm_work | i need to modify it to work with client patches too but not sure how yet | 21:32 |
rm_work | i mean, i could do it manually, but i assume there's an easy way with devstack | 21:32 |
johnsom | Yeah, I was just hoping to avoid a restack. Ok, guess I don't get a choice as my usual tricks aren't working | 21:33 |
sbalukoff | rm_work: Let me know if you figure out an easy way to get the CLI stuff into devstack other than going to /opt/stack/python-neutronclient and checking out the patch you're testing. :P | 21:34 |
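For reference, the manual approach sbalukoff describes boils down to checking the review out inside the devstack tree and reinstalling it; a minimal sketch, assuming git-review is installed and using the L7 CLI review linked earlier:

    cd /opt/stack/python-neutronclient
    git review -d 217276        # fetch and check out the patch under test
    sudo pip install -e .       # reinstall so the new lbaas-l7* commands are picked up

A devstack-native way to do this would just be automating these same steps.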
sbalukoff | Anyway, good to know that getting both patches in seemed to fix things. | 21:34 |
sbalukoff | (I'll be trying to break things there myself in any case as I test Evgeny's CLI patch.) | 21:35 |
rm_work | sbalukoff: i might ask around in the qa channels, i think they work on devstack | 21:36 |
sbalukoff | Aah, ok. | 21:36 |
sbalukoff | bharathm: Could I please get you to re-visit this? https://review.openstack.org/#/c/281603/ | 21:51 |
bharathm | sbalukoff: sorry.. I should have done it much sooner.. :-) | 21:54 |
sbalukoff | bharathm: No worries, eh | 21:55 |
*** manishg has joined #openstack-lbaas | 21:57 | |
johnsom | Hmmm | 22:06 |
johnsom | 2016-02-22 14:04:44.409 TRACE neutron.api.v2.resource AttributeError: 'OctaviaDriver' object has no attribute 'l7policy' | 22:06 |
*** armax has quit IRC | 22:06 | |
*** rtheis has quit IRC | 22:07 | |
rm_work | yeah you need two patches for neutronclient right? | 22:08 |
rm_work | err | 22:08 |
johnsom | sbalukoff should commit f82d24463a510d726be485fb89f86c3c1149069b have included that? | 22:08 |
rm_work | and also, did sbalukoff say there was something borked with l7? | 22:08 |
rm_work | in the octavia driver | 22:08 |
johnsom | Yeah, he mentioned there might be some bugs. | 22:08 |
johnsom | I did a: neutron lbaas-l7policy-create --action REJECT --name policy1 --listener listener1 | 22:09 |
*** Purandar has quit IRC | 22:09 | |
rm_work | i did all my actual L7 testing via REST to octavia :/ | 22:09 |
rm_work | just used the neutron stuff to spin up the basics | 22:10 |
sbalukoff | johnsom: Looking... | 22:10 |
johnsom | Yeah, I'm trying to go end-to-end | 22:10 |
rm_work | since the client had a bug for me not allowing me to make policies correctly | 22:10 |
rm_work | johnsom: let me know if you run into: | 22:10 |
rm_work | [13:25:21] <rm_work>neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool pool2 --listener listener1 | 22:11 |
rm_work | [13:25:27] <rm_work>"Redirect pool id is missing for L7 Policy with pool redirect action." | 22:11 |
johnsom | https://gist.github.com/anonymous/598110999506c18cb711 has the full traceback | 22:11 |
rm_work | i got there and gave up L7 testing on clientside | 22:11 |
sbalukoff | thanks johnsom: I think this is a bug in the Neutron-LBaaS Octavia driver that I added this weekend. Working on a fix now. | 22:13 |
johnsom | Cool | 22:13 |
*** Purandar has joined #openstack-lbaas | 22:15 | |
johnsom | rm_work Confirmed: https://gist.github.com/johnsom/147f8dd2b599b69b4761 | 22:16 |
rm_work | yeah i tried with the ID too | 22:17 |
rm_work | i think it isn't reading the option right | 22:17 |
sbalukoff | Haha! Ok, I found the problem, I think. Time to fix it... | 22:18 |
rm_work | on the LAST patchset | 22:24 |
rm_work | and then i'll have +2'd the whole chain | 22:24 |
rm_work | and by last, i mean first | 22:24 |
sbalukoff | Heh! | 22:25 |
sbalukoff | Nice! | 22:25 |
*** neelashah1 has quit IRC | 22:26 | |
openstackgerrit | Stephen Balukoff proposed openstack/neutron-lbaas: L7 capability extension implementation for lbaas v2 https://review.openstack.org/148232 | 22:27 |
sbalukoff | Ok, I'm going to restack to test this ^^^ But I think it *should* fix the problem you saw, johnsom. | 22:27 |
sbalukoff | I'm not sure yet if it fixes the problem you saw, rm_work. | 22:28 |
*** piet has quit IRC | 22:29 | |
*** localloop127 has quit IRC | 22:30 | |
rm_work | i think mine is a client bug | 22:32 |
sbalukoff | rm_work: I suspect you're right. Gonna poke at it once I finish restacking here. | 22:33 |
*** piet has joined #openstack-lbaas | 22:34 | |
johnsom | I added it to the gerrit for the client | 22:36 |
*** ducttape_ has quit IRC | 22:40 | |
*** ducttape_ has joined #openstack-lbaas | 22:42 | |
*** allan_h has quit IRC | 22:50 | |
*** allan_h has joined #openstack-lbaas | 22:52 | |
*** allan_h has quit IRC | 23:14 | |
*** Purandar has quit IRC | 23:14 | |
johnsom | Hmmm, really strange results in the scenario gate with the requests timeout: http://logs.openstack.org/55/283255/1/check/gate-neutron-lbaasv2-dsvm-scenario/4cd4de5/logs/screen-o-cw.txt.gz | 23:15 |
*** allan_h has joined #openstack-lbaas | 23:16 | |
*** allan_h has quit IRC | 23:17 | |
johnsom | I think five seconds is too short | 23:17 |
johnsom | Or we should break out the connection timeout vs. the read timeout | 23:17 |
*** piet has quit IRC | 23:18 | |
*** ducttape_ has quit IRC | 23:20 | |
sbalukoff | johnsom: Dang! It works so much better on my devstack. ;) | 23:23 |
barclaac | sbalukoff: it always works on devstack | 23:32 |
sbalukoff | Haha! | 23:32 |
rm_work | alright I just +2'd the last L7 review | 23:34 |
rm_work | I'm good with it merging as-is | 23:34 |
*** armax has joined #openstack-lbaas | 23:34 | |
johnsom | I would like the commands to actually work | 23:35 |
rm_work | well | 23:36 |
rm_work | via API they seemed to do SOMETHING | 23:36 |
rm_work | (I just meant in Octavia) | 23:36 |
johnsom | Ah | 23:36 |
*** allan_h has joined #openstack-lbaas | 23:40 | |
sbalukoff | Yeah, the neutron-lbaas CLI has at least one minor bug that johnsom found (I've confirmed it and am testing the fix), and once I got the CLI creating policies... I discovered that although Evgeny asked that l7policy positions start numbering with 1, he didn't set up the DB that way... so I'mma have to fix that real quick. | 23:47 |
sbalukoff | But! the Octavia stuff should be golden! XD | 23:48 |
johnsom | Ok, ping me with patches I should be checking out after you have updated them. | 23:48 |
johnsom | It looks like the IRC bot isn't announcing them all | 23:49 |
openstackgerrit | Michael Johnson proposed openstack/octavia: Add a request timeout to the REST API driver https://review.openstack.org/283255 | 23:49 |
johnsom | Hmm, so Octavia patches announce, but neutron-lbaas patches don't. | 23:50 |
rm_work | johnsom: well, I am going to be off for a bit... but... I've +2'd everything in the relevant Octavia chain | 23:51 |
johnsom | Ok, that one splits connect timeout (default 10) from read timeout (default 60) | 23:51 |
rm_work | so as far as I'm concerned, I could come back later and have it merged | 23:51 |
*** manishg has quit IRC | 23:51 | |
johnsom | Ok, once things seem to be working I will hit them | 23:51 |
rm_work | did ... anyone else want to do a sanity check on these? :P | 23:51 |
rm_work | or just trust johnsom and I to review and sbalukoff to have written it right? :P | 23:52 |
johnsom | Good question | 23:52 |
rm_work | i'll be frank, there's so much testing to do that i've really only tested the happy-path for the most part | 23:52 |
rm_work | that said, our sad-path has never worked super well <_< | 23:52 |
rm_work | so I doubt this could make it much worse | 23:53 |
sbalukoff | rm_work: Yeah, we should fix that someday. :/ | 23:53 |
rm_work | yes. | 23:53 |
rm_work | one of these days. | 23:53 |
rm_work | once happy-path actually does the stuff we want correctly :P | 23:53 |
johnsom | Yeah. It's still better than some systems I have seen.. Not to name names | 23:53 |
rm_work | and once this all settles I'll actually probably end up in bugfix mode too | 23:55 |
johnsom | I think we meet our criteria of two companies reviewing the third with rm_work and me. I went through a bunch of the non-L7 octavia paths last week/weekend. So I feel pretty good we didn't break existing functionality | 23:55 |
rm_work | unless I get pulled internally again | 23:55 |
sbalukoff | rm_work: My plan is to go nuts with the bugfixes as well. | 23:55 |
sbalukoff | Once this is all settled. | 23:56 |
rm_work | yes | 23:56 |
rm_work | also will go on a merge-spree for the smaller stuff that's already up | 23:56 |
sbalukoff | I'm actually fairly proud of what we've built with Octavia; I want to see people using it! | 23:56 |
rm_work | indeed | 23:56 |
rm_work | it's very close to something that I think could be deployable ;) | 23:57 |
rm_work | just a couple outstanding things | 23:58 |
rm_work | like the single-create API which trevorv is almost done with hopefully | 23:59 |