johnsom | lxkong Are you patching this or should I? I can fix this pretty quick | 00:05 |
lxkong | johnsom: i can leave it to you, if it's easy | 00:05 |
johnsom | Sure, NP | 00:05 |
lxkong | johnsom: it's a cherry-pick? | 00:05 |
johnsom | No, it's going to need a patch | 00:06 |
lxkong | johnsom: ok, thanks for that. After the fix, i can consider backporting to our internal repo manually (if it's not allowed to cherry-pick to pike) | 00:06 |
johnsom | It will be backported to pike, not a problem there | 00:07 |
lxkong | johnsom: then that'd be great! | 00:08 |
rm_work | ewww gross | 00:17 |
rm_work | yeah that's badtimes | 00:18 |
johnsom | Can't repro with nc, so building that check_udp tool | 00:18 |
rm_work | orly? | 00:19 |
rm_work | hmm | 00:19 |
johnsom | Yeah, connected with nc pasted in that unicode it reported, got the hmac exception I would expect. | 00:20 |
rm_work | ahaha, maybe just also found a ... bug? | 00:20 |
rm_work | with field filter | 00:21 |
johnsom | Hmm, could be I'm running py27 too, maybe it was on py3 | 00:21 |
rm_work | not sure? | 00:21 |
rm_work | if you do ?fields=address on a member | 00:21 |
rm_work | it matches "address" and "monitor_address" and returns both | 00:21 |
johnsom | Yeah, the tool doesn't break it either. | 00:21 |
rm_work | is that intended behavior? | 00:21 |
johnsom | Oops, no, not intended | 00:21 |
rm_work | heh k, will try to fix, meanwhile will skip checking that field | 00:22 |
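[Editor's note: the substring matching rm_work describes is easy to reproduce. This is a hedged sketch, not Octavia's actual controller code (which lives in octavia/api/v2/controllers/base.py), contrasting a naive `in`-based field filter with an exact-match one:]

```python
# Hypothetical member representation; the real objects are WSME models.
member = {"address": "192.0.2.1", "monitor_address": None, "name": "m1"}

def filter_fields_buggy(obj, fields):
    # Naive substring test: "address" also matches "monitor_address".
    return {k: v for k, v in obj.items() if any(f in k for f in fields)}

def filter_fields_fixed(obj, fields):
    # Exact key membership returns only the requested columns.
    return {k: v for k, v in obj.items() if k in fields}

print(filter_fields_buggy(member, ["address"]))  # both address keys leak out
print(filter_fields_fixed(member, ["address"]))  # only "address"
```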
johnsom | Hmmm, nope, can't reproduce running under py3 either... | 00:29 |
rm_work | lolWUT | 00:30 |
rm_work | "monitor_address" also picks up address | 00:31 |
rm_work | wut | 00:31 |
rm_work | wtf | 00:31 |
rm_work | it looks like we just make use of the DB to do this filtering | 00:31 |
rm_work | O_o | 00:31 |
johnsom | Well, my guess is oslo_utils fixed that, so I'm just going to fix what I see | 00:32 |
johnsom | Oh, just putting in "like address"? | 00:32 |
rm_work | err no like | 00:32 |
rm_work | ?fields=monitor_address | 00:32 |
johnsom | Using %'s? | 00:32 |
rm_work | returns a member obj with both address and monitor_address | 00:33 |
rm_work | and we do | 00:33 |
rm_work | query = query.filter_by(**translated_filters) | 00:33 |
johnsom | That just restricts the rows, not the columns | 00:49 |
johnsom | Are the columns filtered API side? | 00:49 |
rm_work | hmmm | 00:49 |
rm_work | no | 00:49 |
rm_work | i mean | 00:50 |
rm_work | yes | 00:50 |
rm_work | but in the DB | 00:50 |
johnsom | Yeah, filter_by will limit the rows in the result set based on the filters | 00:50 |
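[Editor's note: johnsom's distinction, rows vs columns, in plain SQL terms. A stdlib sqlite3 sketch with a hypothetical member table, not Octavia code: `filter_by(**kw)` corresponds to a WHERE clause (row restriction), while `?fields=` should become a SELECT column list (column restriction).]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE member (id TEXT, address TEXT, monitor_address TEXT)")
conn.execute("INSERT INTO member VALUES ('m1', '192.0.2.1', NULL)")
conn.execute("INSERT INTO member VALUES ('m2', '192.0.2.2', NULL)")

# filter_by(**translated_filters) is a row restriction: WHERE address = ?
rows = conn.execute(
    "SELECT * FROM member WHERE address = ?", ("192.0.2.1",)).fetchall()

# ?fields=address is a column restriction: SELECT address
cols = conn.execute("SELECT address FROM member").fetchall()

print(rows)  # one full row
print(cols)  # every row, single column
```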
rm_work | right | 00:51 |
rm_work | but ummm it's doing it wrong | 00:51 |
rm_work | or weirdly | 00:51 |
rm_work | something is wonky with filtering on members | 00:52 |
rm_work | double field filter also isn't working | 00:52 |
johnsom | I think that one is == only and will not do != | 00:52 |
rm_work | err | 01:05 |
rm_work | well | 01:05 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Improve Health Manager error handling https://review.openstack.org/566198 | 01:05 |
johnsom | lxkong https://review.openstack.org/566198 | 01:05 |
johnsom | It might take a day or two to get it merged and backported to pike. Depends on the cores | 01:06 |
rm_work | johnsom: http://paste.openstack.org/show/720330/ | 01:07 |
johnsom | What??? Hmmmm | 01:09 |
rm_work | and the other one is.... | 01:12 |
johnsom | Oh, I don't have that code, it hasn't merged yet | 01:14 |
johnsom | Yeah, but list goes out to lunch too | 01:15 |
johnsom | {"members_links": [], "members": [{"created_at": "2018-05-04T01:11:14"}, {"created_at": "2018-05-04T01:11:20"}]} | 01:15 |
rm_work | yeah | 01:18 |
johnsom | So this is broken: https://github.com/openstack/octavia/blob/master/octavia/api/v2/controllers/base.py#L200 | 01:18 |
rm_work | http://127.0.0.1/load-balancer/v2.0/lbaas/pools/65ba0411-3dd4-40d5-b12d-ef96a58c467d/members?fields=monitor_address | 01:18 |
rm_work | {"members_links": [], "members": [{"monitor_address": null, "address": "192.0.2.1"}, {"monitor_address": null, "address": "192.0.2.1"}, {"monitor_address": null, "address": "192.0.2.1"}]} | 01:18 |
rm_work | lol | 01:18 |
johnsom | rm_work Yeah, I was right, it's all API side, not DB. | 01:19 |
rm_work | ah | 01:19 |
johnsom | DB would be an optimization however | 01:19 |
rm_work | ugh | 01:19 |
rm_work | i wonder why it was done there | 01:19 |
johnsom | Not sure. | 01:20 |
rm_work | oh | 01:20 |
rm_work | yeah i see | 01:20 |
rm_work | i was looking at the other field thing | 01:20 |
johnsom | The models probably | 01:20 |
rm_work | *filters | 01:20 |
rm_work | yeah | 01:20 |
rm_work | anyway ok | 01:20 |
rm_work | just need to fix | 01:20 |
rm_work | good test :) | 01:20 |
johnsom | Yeah, fields != filters | 01:20 |
rm_work | gonna post the test code | 01:20 |
johnsom | Ok, I need to sign off for the night. | 01:20 |
johnsom | Date night and all | 01:21 |
rm_work | kk | 01:21 |
rm_work | night :) | 01:21 |
openstackgerrit | Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members https://review.openstack.org/566199 | 01:25 |
padouch | hi all. I have a question about amphora image. | 06:25 |
padouch | is it possible to download one, or do i have to use diskimage-creator to build my own? | 06:26 |
rm_work | padouch: there is one | 07:14 |
rm_work | padouch: we build a nightly image for both centos and ubuntu based amphora | 07:14 |
rm_work | it should be fine to use for testing/PoC | 07:14 |
rm_work | but we recommend you build your own for a production environment | 07:15 |
rm_work | unfortunately I don't have the link handy for where it actually *is* | 07:15 |
padouch | ok and how can i get that test img | 07:16 |
padouch | because i tried using diskimage-creator for my own and the result was a failure | 07:17 |
rm_work | hmm, more interesting would be why you're getting build failures | 07:17 |
rm_work | but i understand that sometimes you just need to make some progress | 07:18 |
rm_work | so if you want to download one for now that's probably fine, get your stuff working, and then loop back | 07:18 |
rm_work | let me see | 07:18 |
rm_work | cgoncalves: do you know where that nightly image build goes? | 07:18 |
padouch | unfortunately no | 07:19 |
padouch | do you have some link pls? | 07:19 |
rm_work | I am looking | 07:19 |
rm_work | i actually don't remember, xgerman_ was the one working on it | 07:19 |
rm_work | johnsom: https://github.com/openstack/octavia/blob/master/octavia/tests/functional/api/v2/test_member.py#L303 hahahahahahaha | 07:20 |
rm_work | that's funny | 07:21 |
rm_work | description isn't a field on members <_< | 07:21 |
rm_work | so our only actual test to see if it filtered ... is not right | 07:21 |
rm_work | if i tweak the test just a tiny bit | 07:22 |
rm_work | it fails | 07:22 |
rm_work | rofl | 07:22 |
dmellado | rm_work: well, at least it tested it xD | 07:23 |
rm_work | >_< | 07:26 |
rm_work | AAAAAND | 07:26 |
rm_work | it tested with "id" and "project_id" | 07:26 |
rm_work | both of which contain "id" | 07:26 |
rm_work | which is the only reason it passed also | 07:27 |
rm_work | *other reason it passed | 07:27 |
rm_work | because any OTHER two fields would have also failed | 07:27 |
rm_work | >_< | 07:27 |
rm_work | oh man and i found the bug i think... so much bug | 07:28 |
rm_work | need to have a pylint test i think for this | 07:30 |
dmellado | pylint? what did you do rm_work? ;) | 07:30 |
rm_work | need to make sure the number of args we put in the @wsme_pecan.wsexpose() decorator actually matches the number of args in the function | 07:30 |
rm_work | the bug was that we had this decorator: | 07:30 |
rm_work | @wsme_pecan.wsexpose(member_types.MembersRootResponse, wtypes.text, [wtypes.text], ignore_extra_args=True) | 07:31 |
rm_work | def get_all(self, fields=None): | 07:31 |
rm_work | ^^ so fields which is supposed to be the array, actually was being parsed as a single string | 07:31 |
dmellado | hm, awesome | 07:31 |
rm_work | bad copy/paste from a different controller where we had another arg first | 07:31 |
dmellado | so it worked just by pure chance | 07:31 |
rm_work | found that bug duplicated in three places | 07:32 |
rm_work | amphora, l7rule, and member controllers | 07:32 |
rm_work | and i have found the same KIND of bug before too | 07:32 |
rm_work | where the number of args was wrong but we didn't notice | 07:32 |
rm_work | because it "worked kinda" | 07:32 |
rm_work | posting a fix once I get tests in place | 07:32 |
rm_work | AUGH and the amphora test too | 07:33 |
rm_work | 'fields': ['id', 'compute_id'] | 07:33 |
dmellado | btw, regarding the amphora, bcafarel | 07:33 |
rm_work | the second half of the bug is that we do an "in" when we shouldn't | 07:33 |
rm_work | so it only gets 'id' | 07:34 |
dmellado | did you get the chance to fix centos amphora on L2? | 07:34 |
rm_work | but it also finds 'compute_id' because 'id' is in it | 07:34 |
dmellado | lol | 07:34 |
rm_work | so the test sees both return and is like "cool, A++" | 07:34 |
rm_work | dmellado: i think centos is working again | 07:34 |
dmellado | rm_work: so it was a series of catastrophic coincidences | 07:34 |
rm_work | yep | 07:34 |
dmellado | rm_work: I'll give it a chance, had to revert to ubuntu one on our CI , thanks! | 07:34 |
rm_work | the bug there was a failure to copy one of the scripts out correctly | 07:35 |
rm_work | we merged it | 07:35 |
rm_work | https://review.openstack.org/#/c/564371/ | 07:35 |
rm_work | that was the culprit | 07:35 |
rm_work | assuming the issue you had was with the failovers not working | 07:35 |
rm_work | or active-standby pairs never really coming up right | 07:35 |
rm_work | yep l7rule was the exact same | 07:40 |
rm_work | positive-testing on id + project_id and negative-testing on a field that isn't on L7Rule | 07:40 |
rm_work | T_T | 07:40 |
rm_work | such copy/paste testing | 07:40 |
rm_work | to be fair, NOW I know better, then I probably didn't | 07:40 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Correct field filtering for member/l7rule/amphora https://review.openstack.org/566230 | 07:49 |
rm_work | cgoncalves: ^^ we're going to want that all the way back to Pike | 07:49 |
openstackgerrit | Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members https://review.openstack.org/566199 | 07:53 |
rm_work | oops | 07:58 |
openstackgerrit | Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for pools https://review.openstack.org/565640 | 07:58 |
openstackgerrit | Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members https://review.openstack.org/566199 | 07:58 |
rm_work | there | 07:58 |
rm_work | k | 07:58 |
rm_work | now need to add scenario for members | 07:58 |
rm_work | aaaaand there it is | 08:06 |
openstackgerrit | Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members https://review.openstack.org/566199 | 08:06 |
rm_work | that was ... easy | 08:07 |
rm_work | disturbingly easy | 08:07 |
rm_work | typed it up and ran it and it worked the first time O_o | 08:08 |
rm_work | anywho, that should be good for today | 08:08 |
rm_work | tomorrow I should be able to set up a traffic test | 08:08 |
rm_work | I hope | 08:08 |
cgoncalves | rm_work, http://tarballs.openstack.org/octavia/test-images/ | 08:44 |
cgoncalves | rm_work, ack for https://review.openstack.org/566230 | 08:44 |
*** dmellado_ has joined #openstack-lbaas | 08:50 | |
padouch | thx cgoncalves | 09:47 |
cgoncalves | rm_work, re: https://review.openstack.org/#/c/564371/ it had always been always broken? if so backport candidate all the way back | 10:32 |
cgoncalves | ETOOMANYALWAYS :) | 10:32 |
xgerman_ | rm_work: cgoncalves padouch amp download URL: https://github.com/openstack/openstack-ansible-os_octavia/blob/master/defaults/main.yml#L215 | 14:24 |
xgerman_ | s/ubuntu/centos for that one | 14:24 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Implement provider drivers - Listener https://review.openstack.org/566379 | 18:32 |
rm_work | cgoncalves: no, that was JUST broken by johnsom like a week before | 18:45 |
openstackgerrit | Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members https://review.openstack.org/566199 | 18:48 |
rm_work | xgerman_ / nmagnezi / dayou https://review.openstack.org/#/c/492311/ | 19:14 |
xgerman_ | after lunch | 19:15 |
rm_work | k | 19:16 |
rm_work | feel free to continue to pools and members | 19:16 |
rm_work | https://review.openstack.org/#/c/565640/ | 19:16 |
johnsom | rm_work What did I break? He was asking about the Ubuntu AppArmor change I think. | 19:31 |
rm_work | johnsom: no he linked the keepalive script venv fix | 19:32 |
johnsom | Oh, I see, different thread | 19:32 |
rm_work | I guess I didn't have to actively shame you for that one | 19:33 |
rm_work | since compared to the filtering which I broke from the start, that's not too bad :P | 19:33 |
johnsom | Ha, yeah, we all have our hall of shame patches.... | 19:34 |
johnsom | Cool, I linked my listener driver patch to your listener tempest patch and it passed, so, that is good | 19:35 |
rm_work | yes :) | 19:35 |
johnsom | I realized today that our back-out-of-failed updates strategy isn't going to work with the drivers... | 19:36 |
rm_work | err? | 19:37 |
johnsom | Well, today we don't update the DB settings on an update call until they complete on the backend. But now that we hand the update off to the driver, we have to update the DB immediately. | 19:39 |
rm_work | hmmm | 19:40 |
rm_work | ugh i don't want to deal with it now, but | 19:40 |
rm_work | i had been noodling getting away from the immutable thing | 19:40 |
rm_work | we originally did that just because it was "less complicated" IIRC | 19:41 |
johnsom | Well, it allows an HA control plane | 19:42 |
rm_work | you can do an HA controlplane with non-immutable too | 19:44 |
rm_work | you just have to effectively queue stuff | 19:44 |
rm_work | you can lock backend-updates without locking frontend-updates | 19:44 |
johnsom | Yeah, like guaranteed ordered queue and not popping it off the queue until it completes | 19:45 |
rm_work | yes | 19:45 |
rm_work | which we could do | 19:45 |
rm_work | it's honestly not too bad | 19:45 |
rm_work | IIRC Designate does it? mugsie am I just straight up lying here? | 19:45 |
johnsom | And a queue topic per LB | 19:45 |
rm_work | eh, not strictly necessary *i think* | 19:46 |
rm_work | but would have to continue noodling | 19:46 |
rm_work | theres a number of ways to do it | 19:46 |
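[Editor's note: one way the "guaranteed ordered queue" idea johnsom and rm_work are circling could look. A sketch only, with hypothetical names, not Octavia code: a single consumer per queue (a topic per LB would shard this) pops the next update only after the previous one has fully completed, which is what preserves ordering without an immutable lock.]

```python
import queue
import threading

updates = queue.Queue()
applied = []

def apply_update(lb_id, change):
    applied.append((lb_id, change))  # stand-in for the real backend work

def worker():
    while True:
        item = updates.get()
        if item is None:  # sentinel: shut the consumer down
            break
        apply_update(*item)
        updates.task_done()  # completion, not receipt, releases the next item

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    updates.put(("lb-1", f"update-{i}"))
updates.join()      # block until every queued update has completed
updates.put(None)
t.join()
print(applied)  # updates land strictly in submission order
```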
xgerman_ | johnsom: just half reading — update immediatley? Why? Because amphora-driver reads form the DB? | 19:46 |
rm_work | actually yeah, why do we need to do this? we can pass an update-object to the driver the same as we do with the handler, no? | 19:47 |
rm_work | and when the driver returns OK, we update the DB? | 19:47 |
xgerman_ | or, we can just call the driver in a future and do the update when it returns? | 19:48 |
johnsom | xgerman_ No, because we don't want third party drivers updating all of the DB fields | 19:48 |
rm_work | errr | 19:48 |
rm_work | they don't, we do? | 19:48 |
rm_work | like... we send the update object off to them -> when the respond back that it was done successfully, we update? | 19:48 |
xgerman_ | yes, but we can update the fields after the call succeeds? | 19:48 |
rm_work | i didn't think the drivers even had DB access | 19:48 |
johnsom | rm_work drivers are async, ours included, so driver is going to say ok right away, DB gets updated with new stuffs, backend will not have old stuffs | 19:49 |
xgerman_ | well, it goes PENDING and when they are done we go ACTIVE | 19:50 |
johnsom | It's not an issue for any driver other than ours | 19:50 |
xgerman_ | ok, so you are saying we should update the DB with changes -> driver -> driver does its thing and sets ERROR or ACTIVE? | 19:52 |
johnsom | https://www.irccloud.com/pastebin/shOyfzDi/ | 19:56 |
xgerman_ | but that will be true now for all drivers unless we have a rollback somewhere in the API | 19:57 |
johnsom | Well, the other drivers may have their old state in their own environment | 19:58 |
johnsom | My whole point here is this breaks our current fancy rollback | 19:58 |
xgerman_ | :-( | 19:58 |
rm_work | yeah umm... is it too late to change the model to have us send to the drivers as a future (still async) but have them actually return when they're in a good state? | 20:02 |
rm_work | and then we can update? | 20:02 |
johnsom | Not sure I follow. Make drivers sync? | 20:03 |
xgerman_ | rm_work: they put it on a queue and forget | 20:04 |
xgerman_ | so no way to have a future travel back unless you bring in job tracking | 20:04 |
rm_work | ah we're talking about a different kind of sync/async i guess | 20:04 |
rm_work | but err | 20:05 |
rm_work | yes essentially | 20:05 |
rm_work | i guess that's a no-go | 20:05 |
xgerman_ | oh, they have to send a status update on the LB | 20:05 |
xgerman_ | can’t we key off that? | 20:06 |
xgerman_ | e.g. ERROR keep old version; ACTIVE->Persist new one? | 20:06 |
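[Editor's note: xgerman_'s "ERROR keep old version; ACTIVE->persist new one" suggestion sketched out. All names here are hypothetical, this is not Octavia's DB layer: the API stages the changes, and only the driver's status callback decides whether they are committed or dropped.]

```python
# In-memory stand-ins for the LB table and the staged-changes store.
db = {"lb-1": {"name": "old", "provisioning_status": "ACTIVE"}}
pending = {}

def start_update(lb_id, changes):
    # Stage the update and hand off to the driver; nothing persisted yet.
    pending[lb_id] = changes
    db[lb_id]["provisioning_status"] = "PENDING_UPDATE"

def driver_status_callback(lb_id, status):
    if status == "ACTIVE":
        db[lb_id].update(pending.pop(lb_id))  # commit the staged changes
    else:  # ERROR: drop the staged changes, keep the old record
        pending.pop(lb_id, None)
    db[lb_id]["provisioning_status"] = status

start_update("lb-1", {"name": "new"})
driver_status_callback("lb-1", "ERROR")
print(db["lb-1"]["name"])  # still "old"

start_update("lb-1", {"name": "new"})
driver_status_callback("lb-1", "ACTIVE")
print(db["lb-1"]["name"])  # now "new"
```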
rm_work | yeah, doesn't that essentially make them sync? | 20:06 |
johnsom | Or we are piling up a bunch of updates in a queue somewhere hoping we ever get an update. | 20:07 |
johnsom | Not a fan of that idea | 20:07 |
rm_work | I mean, I am thinking like | 20:07 |
rm_work | LBaaSv3 API here | 20:08 |
johnsom | I mean, I don't think we have a problem with third party drivers. I think that is all fine, it's purely a problem with our current controller | 20:08 |
rm_work | I would like to architect it to be properly non-locking | 20:08 |
xgerman_ | third party can fail, to, and leave things in the old state | 20:08 |
johnsom | rm_work IMO so many other important things right now.... | 20:09 |
rm_work | right | 20:09 |
rm_work | that's what i mean | 20:09 |
rm_work | for non-locking, it'd be in v3 which is | 20:10 |
rm_work | very far off | 20:10 |
johnsom | Well, I am going to test out my listener stuff with actually having barbican running (the cert/sni stuff gave me fits), then update the LB and Listener patch to update the DB at driver acceptance, then move on to pools, etc. | 20:12 |
johnsom | Also, I changed the way that thing works that you were so wound up about yesterday in this listener patch, so take a look and start wearing me down again | 20:13 |
johnsom | Probably good to get some reviews on that HM crash thing too: https://review.openstack.org/566198 | 20:14 |
johnsom | Though I could not get it to crash, even using the tool they were | 20:14 |
rm_work | err which thing? | 20:23 |
xgerman_ | https://review.openstack.org/#/c/566198/1 | 20:30 |
rm_work | i meant on the listener | 20:32 |
rm_work | that i was apparently wound up about :P | 20:32 |
rm_work | i don't remember T_T | 20:32 |
rm_work | that was like, YESTERDAY | 20:32 |
rm_work | looks good +2 | 20:33 |
rm_work | could use similar on https://review.openstack.org/#/c/566230/ | 20:33 |
rm_work | err, I also +A'd | 20:33 |
rm_work | xgerman_: responded | 20:43 |
rm_work | Also note: I have a plan to DRY this whole thing, but I would rather go back in and do it after I have a better vision of the whole thing | 20:43 |
johnsom | Ugh, ok, caught up on e-mail and new stories. | 21:41 |
rm_work | lol | 21:41 |
rm_work | i found a fun one... which is, it doesn't seem to ACTUALLY run the resource_setup from the base | 21:41 |
rm_work | so there's no webservers, no security groups, etc | 21:42 |
johnsom | Missing super somewhere? | 21:42 |
rm_work | not that i can find | 21:42 |
johnsom | noop or overrides in the config? | 21:42 |
rm_work | i see nothing that could prevent this from running | 21:43 |
rm_work | oh nm i do | 21:43 |
rm_work | there it is | 21:43 |
rm_work | it was hiding from me | 21:43 |
johnsom | rm_work Have you set a SNI with the OSC? | 22:20 |
johnsom | https://www.irccloud.com/pastebin/bStmGK5j/ | 22:20 |
rm_work | no | 22:21 |
rm_work | umm | 22:21 |
johnsom | Not sure it works. At least our example in the cookbook doesn't | 22:21 |
rm_work | try a -- | 22:21 |
rm_work | openstack loadbalancer listener set --default-tls-container-ref http://172.21.21.140/key-manager/v1/secrets/9e3dea88-2953-42e5-848e-a10c9402ce9c --sni-container-refs http://172.21.21.140/key-manager/v1/secrets/3ba017cd-26c8-469f-9b2f-e4c9cf4bf1ca -- listener1 | 22:21 |
rm_work | looks like it's eating the listener1 | 22:21 |
rm_work | as a second SNI | 22:22 |
rm_work | lol | 22:22 |
johnsom | Yeah, the -- worked | 22:22 |
rm_work | yeah | 22:22 |
rm_work | k | 22:22 |
rm_work | so we need to fix the example | 22:22 |
rm_work | or change the client | 22:22 |
johnsom | So, should we add that to the cookbook or do we need to fix the client???? | 22:22 |
rm_work | lol | 22:22 |
rm_work | jin | 22:22 |
rm_work | *jinx | 22:22 |
rm_work | we could change the client to just take the arg multiple times | 22:23 |
rm_work | instead of expecting a list | 22:23 |
rm_work | (I think?) | 22:23 |
johnsom | Yeah, I think so too. | 22:23 |
johnsom | Not sure how, but should work | 22:23 |
rm_work | the question is whether or not cliff lets you do that | 22:23 |
johnsom | Have time to look at that? | 22:24 |
openstackgerrit | Merged openstack/octavia master: Improve Health Manager error handling https://review.openstack.org/566198 | 22:24 |
johnsom | https://docs.openstack.org/python-openstackclient/pike/contributor/command-options.html#options-with-multiple-values | 22:27 |
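[Editor's note: the two CLI behaviors under discussion, reproduced with a hypothetical argparse parser (not the real python-octaviaclient/cliff code). With a greedy `nargs='+'` list option the trailing positional gets swallowed unless `--` ends option parsing; with `action='append'` (the flag repeated once per value, as the linked OSC guideline recommends) the positional is never eaten.]

```python
import argparse

def build(greedy):
    p = argparse.ArgumentParser()
    if greedy:
        # Greedy list: consumes every following token as an SNI ref.
        p.add_argument("--sni-container-refs", nargs="+")
    else:
        # One value per flag occurrence; repeat the flag for more refs.
        p.add_argument("--sni-container-refs", action="append")
    p.add_argument("listener")
    return p

# Greedy list + trailing positional needs "--" to stop consumption:
ns1 = build(True).parse_args(
    ["--sni-container-refs", "ref1", "--", "listener1"])
print(ns1.listener)  # listener1

# Append-style flags never eat the positional:
ns2 = build(False).parse_args(
    ["--sni-container-refs", "ref1",
     "--sni-container-refs", "ref2", "listener1"])
print(ns2.sni_container_refs, ns2.listener)  # ['ref1', 'ref2'] listener1
```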
rm_work | not this week :( | 22:28 |
rm_work | not *soon* | 22:28 |
rm_work | it's pretty low priority honestly >_> | 22:29 |
johnsom | Ok, I will update the doc and add a story | 22:29 |
rm_work | k | 22:30 |
johnsom | https://storyboard.openstack.org/#!/story/2001971 | 22:34 |
johnsom | ^^^ In case someone has time for an easy fix | 22:35 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Implement provider drivers - Listener https://review.openstack.org/566379 | 23:40 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!