Friday, 2018-05-04

00:05 <johnsom> lxkong Are you patching this or should I? I can fix this pretty quick
00:05 <lxkong> johnsom: i can leave it to you, if it's easy
00:05 <johnsom> Sure, NP
00:05 <lxkong> johnsom: it's a cherry-pick?
00:06 <johnsom> No, it's going to need a patch
00:06 <lxkong> johnsom: ok, thanks for that. After the fix, i can consider whether to backport to our internal repo manually (if it's not allowed to cherry-pick to pike)
00:07 <johnsom> It will be backported to pike, not a problem there
00:08 <lxkong> johnsom: then that'd be great!
00:17 <rm_work> ewww gross
00:18 <rm_work> yeah that's badtimes
00:18 <johnsom> Can't repro with nc, so building that check_udp tool
00:20 <johnsom> Yeah, connected with nc, pasted in that unicode it reported, got the hmac exception I would expect.
00:20 <rm_work> ahaha, maybe just also found a ... bug?
00:21 <rm_work> with field filter
00:21 <johnsom> Hmm, could be I'm running py27 too, maybe it was on py3
00:21 <rm_work> not sure?
00:21 <rm_work> if you do ?fields=address on a member
00:21 <rm_work> it matches "address" and "monitor_address" and returns both
00:21 <johnsom> Yeah, the tool doesn't break it either.
00:21 <rm_work> is that intended behavior?
00:21 <johnsom> Oops, no, not intended
00:22 <rm_work> heh k, will try to fix, meanwhile will skip checking that field
00:29 <johnsom> Hmmm, nope, can't reproduce running under py3 either...
00:31 <rm_work> "monitor_address" also picks up address
00:31 *** dmellado has quit IRC
00:31 <rm_work> it looks like we just make use of the DB to do this filtering
00:32 <johnsom> Well, my guess is oslo_utils fixed that, so I'm just going to fix what I see
00:32 <johnsom> Oh, just putting in "like address"?
00:32 <rm_work> err no like
00:32 <johnsom> Using %'s?
00:33 <rm_work> returns a member obj with both address and monitor_address
00:33 <rm_work> and we do
00:33 <rm_work> query = query.filter_by(**translated_filters)
00:38 *** dmellado has joined #openstack-lbaas
00:49 <johnsom> That just restricts the rows, not the columns
00:49 <johnsom> Are the columns filtered API side?
00:50 <rm_work> i mean
00:50 <rm_work> but in the DB
00:50 <johnsom> Yeah, filter_by will limit the rows in the result set based on the filters
00:51 <rm_work> but ummm it's doing it wrong
00:51 <rm_work> or weirdly
00:52 <rm_work> something is wonky with filtering on members
00:52 <rm_work> double field filter also isn't working
00:52 <johnsom> I think that one is == only and will not do !=
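A quick stdlib-only sketch (hypothetical `member` table, not Octavia's schema) of johnsom's point above: a WHERE clause, which is what `filter_by()` builds, restricts which ROWS match, while the `?fields=` parameter is a SELECT-list (column) concern:

```python
# WHERE restricts rows; the SELECT list restricts columns.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE member (id TEXT, address TEXT, monitor_address TEXT)')
conn.execute("INSERT INTO member VALUES ('m1', '10.0.0.1', '10.0.0.2')")
conn.execute("INSERT INTO member VALUES ('m2', '10.0.0.9', NULL)")

# Row filter: fewer rows come back, but every column is still present.
rows = conn.execute(
    "SELECT * FROM member WHERE address = '10.0.0.1'").fetchall()
assert rows == [('m1', '10.0.0.1', '10.0.0.2')]

# Column ("fields") filtering is a separate SELECT-list step.
cols = conn.execute(
    "SELECT address FROM member WHERE address = '10.0.0.1'").fetchall()
assert cols == [('10.0.0.1',)]
```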
00:59 *** dmellado has quit IRC
<openstackgerrit> Michael Johnson proposed openstack/octavia master: Improve Health Manager error handling
01:06 <johnsom> It might take a day or two to get it merged and backported to pike.  Depends on the cores
01:09 <johnsom> What???  Hmmmm
01:12 <rm_work> and the other one is....
01:14 <johnsom> Oh, I don't have that code, it hasn't merged yet
01:15 <johnsom> Yeah, but list goes out to lunch too
01:15 <johnsom> {"members_links": [], "members": [{"created_at": "2018-05-04T01:11:14"}, {"created_at": "2018-05-04T01:11:20"}]}
01:16 *** annp has joined #openstack-lbaas
<johnsom> So this is broken:
01:18 <rm_work> {"members_links": [], "members": [{"monitor_address": null, "address": ""}, {"monitor_address": null, "address": ""}, {"monitor_address": null, "address": ""}]}
01:19 <johnsom> rm_work Yeah, I was right, it's all API side, not DB.
01:19 <johnsom> DB would be an optimization however
01:19 <rm_work> i wonder why it was done there
01:20 <johnsom> Not sure.
01:20 <rm_work> yeah i see
01:20 <rm_work> i was looking at the other field thing
01:20 <johnsom> The models probably
01:20 <rm_work> anyway ok
01:20 <rm_work> just need to fix
01:20 *** phuoc_ has joined #openstack-lbaas
01:20 <rm_work> good test :)
01:20 <johnsom> Yeah, fields != filters
01:20 <rm_work> gonna post the test code
01:20 <johnsom> Ok, I need to sign off for the night.
01:21 <johnsom> Date night and all
01:21 <rm_work> night :)
01:22 *** phuoc has quit IRC
<openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members
01:26 *** dmellado has joined #openstack-lbaas
01:32 *** dmellado has quit IRC
01:56 *** dmellado has joined #openstack-lbaas
02:02 *** dayou_ has joined #openstack-lbaas
02:09 *** dmellado has quit IRC
02:13 *** dmellado has joined #openstack-lbaas
02:19 *** rcernin has quit IRC
02:20 *** rcernin has joined #openstack-lbaas
02:38 *** dmellado has quit IRC
02:51 *** yamamoto has quit IRC
02:51 *** yamamoto has joined #openstack-lbaas
02:58 *** yamamoto has quit IRC
03:39 *** dmellado has joined #openstack-lbaas
03:59 *** yamamoto has joined #openstack-lbaas
03:59 *** rcernin has quit IRC
04:00 *** rcernin has joined #openstack-lbaas
04:04 *** dmellado has quit IRC
04:06 *** yamamoto has quit IRC
04:39 *** dmellado has joined #openstack-lbaas
04:44 *** yboaron_ has joined #openstack-lbaas
04:45 *** yboaron_ has quit IRC
04:46 *** yboaron_ has joined #openstack-lbaas
04:51 *** yboaron_ has quit IRC
05:15 *** ctracey has quit IRC
05:15 *** hogepodge has quit IRC
05:16 *** ctracey has joined #openstack-lbaas
05:16 *** hogepodge has joined #openstack-lbaas
05:47 *** threestrands has quit IRC
06:20 *** dayou_ has quit IRC
06:24 *** padouch has joined #openstack-lbaas
06:25 *** longkb has joined #openstack-lbaas
06:25 <padouch> hi all. I have a question about amphora image.
06:26 <padouch> is it possible to download some or i have to use diskimage-creator to build my own?
06:43 *** pcaruana has joined #openstack-lbaas
07:01 *** dmellado has quit IRC
07:04 *** dayou_ has joined #openstack-lbaas
07:05 *** rcernin has quit IRC
07:06 *** b_bezak has joined #openstack-lbaas
07:12 *** dmellado has joined #openstack-lbaas
07:14 <rm_work> padouch: there is one
07:14 <rm_work> padouch: we build a nightly image for both centos and ubuntu based amphora
07:14 <rm_work> it should be fine to use for testing/PoC
07:15 <rm_work> but we recommend you build your own for a production environment
07:15 <rm_work> unfortunately I don't have the link handy for where it actually *is*
07:16 <padouch> ok and how can i get that test img
07:17 <padouch> because i tried to use diskimage-creator to build my own and the result was a failure
07:17 <rm_work> hmm, more interesting would be why you're getting build failures
07:18 <rm_work> but i understand that sometimes you just need to make some progress
07:18 <rm_work> so if you want to download one for now that's probably fine, get your stuff working, and then loop back
07:18 <rm_work> let me see
07:18 *** b_bezak has quit IRC
07:18 <rm_work> cgoncalves: do you know where that nightly image build goes?
07:19 <padouch> unfortunately no
07:19 <padouch> do you have some link pls?
07:19 <rm_work> I am looking
07:19 <rm_work> i actually don't remember, xgerman_ was the one working on it
07:20 *** tesseract has joined #openstack-lbaas
07:20 <rm_work> johnsom: hahahahahahaha
07:21 <rm_work> that's funny
07:21 <rm_work> description isn't a field on members <_<
07:21 <rm_work> so our only actual test to see if it filtered ... is not right
07:22 <rm_work> if i tweak the test just a tiny bit
07:22 <rm_work> it fails
07:23 <dmellado> rm_work: well, at least it tested it xD
07:24 *** dmellado_ has joined #openstack-lbaas
07:24 *** b_bezak has joined #openstack-lbaas
07:26 <rm_work> it tested with "id" and "project_id"
07:26 <rm_work> both of which contain "id"
07:27 <rm_work> which is the only reason it passed also
07:27 <rm_work> *other reason it passed
07:27 <rm_work> because any OTHER two fields would have also failed
07:28 <rm_work> oh man and i found the bug i think... so much bug
07:30 <rm_work> need to have a pylint test i think for this
07:30 <dmellado> pylint? what did you do rm_work? ;)
07:30 <rm_work> need to make sure the number of args we put in the @wsme_pecan.wsexpose() decorator actually matches the number of args in the function
07:30 <rm_work> the bug was that we had this decorator:
07:31 <rm_work> @wsme_pecan.wsexpose(member_types.MembersRootResponse, wtypes.text, [wtypes.text], ignore_extra_args=True)
07:31 <rm_work> def get_all(self, fields=None):
07:31 <rm_work> ^^ so fields, which is supposed to be the array, actually was being parsed as a single string
07:31 <dmellado> hm, awesome
07:31 <rm_work> bad copy/paste from a different controller where we had another arg first
07:31 <dmellado> so it worked just by pure chance
07:32 <rm_work> found that bug duplicated in three places
07:32 <rm_work> amphora, l7rule, and member controllers
07:32 <rm_work> and i have found the same KIND of bug before too
07:32 <rm_work> where the number of args was wrong but we didn't notice
07:32 <rm_work> because it "worked kinda"
07:32 <rm_work> posting a fix once I get tests in place
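The decorator/signature mismatch rm_work describes can be mimicked with a toy decorator. This is NOT real wsme_pecan code; the positional coercion logic below is invented purely to illustrate how declaring one extra type shifts `fields` onto the wrong slot, so it arrives as a bare string instead of a list:

```python
# Toy wsexpose-style decorator: declared types are matched to the
# function's arguments positionally, like the real one.
def wsexpose(return_type, *arg_types, ignore_extra_args=False):
    def deco(fn):
        def wrapper(*raw_args):
            coerced = []
            for value, typ in zip(raw_args, arg_types):
                # a [type] declaration means "list of type"
                coerced.append([value] if isinstance(typ, list) else value)
            return fn(*coerced)
        return wrapper
    return deco

# Buggy: declares (text, [text]), copied from a controller that had an
# extra leading arg, but get_all only takes `fields`.
@wsexpose(dict, str, [str], ignore_extra_args=True)
def get_all_buggy(fields=None):
    return fields

# Fixed: declarations match the signature, so fields is list-ified.
@wsexpose(dict, [str], ignore_extra_args=True)
def get_all_fixed(fields=None):
    return fields

assert get_all_buggy('address') == 'address'    # parsed as a bare string
assert get_all_fixed('address') == ['address']  # parsed as a list
```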
07:33 <rm_work> AUGH and the amphora test too
07:33 <rm_work> 'fields': ['id', 'compute_id']
07:33 <dmellado> btw, regarding the amphora, bcafarel
07:33 <rm_work> the second half of the bug is that we do an "in" when we shouldn't
07:34 <rm_work> so it only gets 'id'
07:34 <dmellado> did you get the chance to fix centos amphora on L2?
07:34 *** dmellado_ has quit IRC
07:34 <rm_work> but it also finds 'compute_id' because 'id' is in it
07:34 <rm_work> so the test sees both return and is like "cool, A++"
07:34 <rm_work> dmellado: i think centos is working again
07:34 <dmellado> rm_work: so it was a series of catastrophic coincidences
07:34 <dmellado> rm_work: I'll give it a chance, had to revert to ubuntu one on our CI, thanks!
07:35 <rm_work> the bug there was a failure to copy one of the scripts out correctly
07:35 <rm_work> we merged it
07:35 <rm_work> that was the culprit
07:35 <rm_work> assuming the issue you had was with the failovers not working
07:35 <rm_work> or active-standby pairs never really coming up right
07:40 <rm_work> yep l7rule was the exact same
07:40 <rm_work> positive-testing on id + project_id and negative-testing on a field that isn't on L7Rule
07:40 <rm_work> such copy/paste testing
07:40 <rm_work> to be fair, NOW I know better, then I probably didn't
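The "second half of the bug" and the testing coincidence above can be shown in a few lines of pure Python. These filter helpers are hypothetical, not the actual Octavia code: substring ("in") matching leaks `monitor_address` for `?fields=address`, and it is also why a test using `id`/`project_id` passes by accident:

```python
# A member-like object with the field names from the discussion.
member = {'id': 'abc', 'project_id': 'xyz',
          'address': '10.0.0.1', 'monitor_address': None}

def filter_fields_buggy(obj, fields):
    # Substring matching: 'address' is inside 'monitor_address',
    # and 'id' is inside 'project_id', so extra keys leak through.
    return {k: v for k, v in obj.items() if any(f in k for f in fields)}

def filter_fields_fixed(obj, fields):
    # Exact matching: only the requested keys survive.
    return {k: v for k, v in obj.items() if k in fields}

# ?fields=address returns monitor_address too:
assert set(filter_fields_buggy(member, ['address'])) == {'address', 'monitor_address'}
# ...and a test on 'id' also sees 'project_id', so it "passes":
assert set(filter_fields_buggy(member, ['id'])) == {'id', 'project_id'}
# The exact-match version behaves as intended:
assert set(filter_fields_fixed(member, ['address'])) == {'address'}
```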
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Correct field filtering for member/l7rule/amphora
07:49 <rm_work> cgoncalves: ^^ we're going to want that all the way back to Pike
<openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members
<openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for pools
<openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members
07:58 <rm_work> now need to add scenario for members
08:01 *** dmellado_ has joined #openstack-lbaas
08:06 <rm_work> aaaaand there it is
<openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members
08:07 <rm_work> that was ... easy
08:07 <rm_work> disturbingly easy
08:08 <rm_work> typed it up and ran it and it worked the first time O_o
08:08 <rm_work> anywho, that should be good for today
08:08 <rm_work> tomorrow I should be able to set up a traffic test
08:08 <rm_work> I hope
08:09 *** salmankhan has joined #openstack-lbaas
08:11 *** dmellado_ has quit IRC
08:20 *** salmankhan has quit IRC
08:23 *** salmankhan has joined #openstack-lbaas
08:26 *** apuimedo has joined #openstack-lbaas
<cgoncalves> rm_work, ack for
08:50 *** dmellado_ has joined #openstack-lbaas
09:47 <padouch> thx cgoncalves
09:49 *** yamamoto has joined #openstack-lbaas
09:53 *** yamamoto has quit IRC
10:08 *** padouch has quit IRC
10:32 <cgoncalves> rm_work, re: it had always been always broken? if so backport candidate all the way back
10:32 *** longkb has quit IRC
10:32 <cgoncalves> ETOOMANYALWAYS :)
10:47 *** yboaron_ has joined #openstack-lbaas
10:51 *** yboaron_ has quit IRC
11:02 *** yboaron_ has joined #openstack-lbaas
11:07 *** yboaron_ has quit IRC
11:21 *** yboaron_ has joined #openstack-lbaas
11:26 *** yboaron_ has quit IRC
11:27 *** pcaruana has quit IRC
11:36 *** salmankhan has quit IRC
11:36 *** salmankhan1 has joined #openstack-lbaas
11:39 *** salmankhan1 is now known as salmankhan
11:39 *** pcaruana has joined #openstack-lbaas
11:57 *** annp has quit IRC
11:59 *** pcaruana has quit IRC
12:07 *** salmankhan has quit IRC
12:08 *** salmankhan has joined #openstack-lbaas
12:12 *** pcaruana has joined #openstack-lbaas
12:46 *** yamamoto has joined #openstack-lbaas
12:48 *** yamamoto has quit IRC
12:49 *** yamamoto has joined #openstack-lbaas
12:53 *** yamamoto has quit IRC
12:56 *** b_bezak has quit IRC
13:00 *** yamamoto has joined #openstack-lbaas
13:14 *** samccann has joined #openstack-lbaas
13:24 *** salmankhan has quit IRC
13:28 *** salmankhan has joined #openstack-lbaas
13:45 *** yamamoto has quit IRC
<xgerman_> rm_work: cgoncalves padouch amp download URL:
14:24 <xgerman_> s/ubuntu/centos for that one
14:31 *** fnaval has quit IRC
14:45 *** yamamoto has joined #openstack-lbaas
14:47 *** fnaval has joined #openstack-lbaas
14:47 *** salmankhan has quit IRC
14:47 *** salmankhan has joined #openstack-lbaas
14:52 *** yamamoto has quit IRC
14:57 *** dmellado has quit IRC
14:57 *** dmellado_ is now known as dmellado
15:02 *** salmankhan has quit IRC
15:12 *** salmankhan has joined #openstack-lbaas
15:12 *** fnaval has quit IRC
15:13 *** fnaval has joined #openstack-lbaas
15:13 *** pcaruana has quit IRC
15:29 *** dayou_ has quit IRC
15:31 *** salmankhan has quit IRC
15:31 *** salmankhan has joined #openstack-lbaas
15:50 *** bcafarel is now known as bcafarel|pto
16:43 *** AlexeyAbashkin has joined #openstack-lbaas
17:07 *** salmankhan has quit IRC
17:09 *** tesseract has quit IRC
17:50 *** yamamoto has joined #openstack-lbaas
17:54 *** pcaruana has joined #openstack-lbaas
17:54 *** yamamoto has quit IRC
18:30 *** AlexeyAbashkin has quit IRC
<openstackgerrit> Michael Johnson proposed openstack/octavia master: Implement provider drivers - Listener
18:45 <rm_work> cgoncalves: no, that was JUST broken by johnsom like a week before
<openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members
19:10 *** pcaruana has quit IRC
<rm_work> xgerman_ / nmagnezi / dayou
19:15 <xgerman_> after lunch
19:16 <rm_work> feel free to continue to pools and members
19:29 *** yboaron_ has joined #openstack-lbaas
19:31 <johnsom> rm_work What did I break?  He was asking about the Ubuntu AppArmor change I think.
19:32 <rm_work> johnsom: no he linked the keepalive script venv fix
19:32 <johnsom> Oh, I see, different thread
19:33 <rm_work> I guess I didn't have to actively shame you for that one
19:33 <rm_work> since compared to the filtering which I broke from the start, that's not too bad :P
19:33 *** yboaron_ has quit IRC
19:34 <johnsom> Ha, yeah, we all have our hall of shame patches....
19:35 <johnsom> Cool, I linked my listener driver patch to your listener tempest patch and it passed, so, that is good
19:35 <rm_work> yes :)
19:36 <johnsom> I realized today that our back-out-of-failed-updates strategy isn't going to work with the drivers...
19:39 <johnsom> Well, today we don't update the DB settings on an update call until they complete on the backend. But now that we hand the update off to the driver, we have to update the DB immediately.
19:40 <rm_work> ugh i don't want to deal with it now, but
19:40 <rm_work> i had been noodling getting away from the immutable thing
19:41 <rm_work> we originally did that just because it was "less complicated" IIRC
19:42 <johnsom> Well, it allows an HA control plane
19:44 <rm_work> you can do an HA control plane with non-immutable too
19:44 <rm_work> you just have to effectively queue stuff
19:44 <rm_work> you can lock backend-updates without locking frontend-updates
19:45 <johnsom> Yeah, like a guaranteed ordered queue and not popping it off the queue until it completes
19:45 <rm_work> which we could do
19:45 <rm_work> it's honestly not too bad
19:45 <rm_work> IIRC Designate does it? mugsie am I just straight up lying here?
19:45 <johnsom> And a queue topic per LB
19:46 <rm_work> eh, not strictly necessary *i think*
19:46 <rm_work> but would have to continue noodling
19:46 <rm_work> there's a number of ways to do it
19:46 <xgerman_> johnsom: just half reading — update immediately? Why? Because the amphora driver reads from the DB?
19:47 <rm_work> actually yeah, why do we need to do this? we can pass an update-object to the driver the same as we do with the handler, no?
19:47 <rm_work> and when the driver returns OK, we update the DB?
19:48 <xgerman_> or, we can just call the driver in a future and when it returns do the update?
19:48 <johnsom> xgerman_ No, because we don't want third party drivers updating all of the DB fields
19:48 <rm_work> they don't, we do?
19:48 <rm_work> like... we send the update object off to them -> when they respond back that it was done successfully, we update?
19:48 <xgerman_> yes, but we can update the fields after the call succeeds?
19:48 <rm_work> i didn't they the drivers even had DB access
19:48 <rm_work> *I didn't think the
19:49 <johnsom> rm_work drivers are async, ours included, so driver is going to say ok right away, DB gets updated with new stuffs, backend will not have old stuffs
19:50 <xgerman_> well, it goes PENDING and when they are done we go ACTIVE
19:50 <johnsom> It's not an issue for any driver other than ours
19:52 <xgerman_> ok, so you are saying we should update the DB with changes -> driver -> driver does its thing and sets ERROR or ACTIVE?
19:57 <xgerman_> but that will be true now for all drivers unless we have a rollback somewhere in the API
19:58 <johnsom> Well, the other drivers may have their old state in their own environment
19:58 <johnsom> My whole point here is this breaks our current fancy rollback
20:02 <rm_work> yeah umm... is it too late to change the model to have us send to the drivers as a future (still async) but have them actually return when they're in a good state?
20:02 <rm_work> and then we can update?
20:03 <johnsom> Not sure I follow.  Make drivers sync?
20:04 <xgerman_> rm_work: they put it on a queue and forget
20:04 <xgerman_> so no way to have a future travel back unless you bring in job tracking
20:04 <rm_work> ah we're talking about a different kind of sync/async i guess
20:05 <rm_work> but err
20:05 <rm_work> yes essentially
20:05 <rm_work> i guess that's a no-go
20:05 <xgerman_> oh, they have to send a status update on the LB
20:06 <xgerman_> can't we key off that?
20:06 <xgerman_> e.g. ERROR -> keep old version; ACTIVE -> persist new one?
20:06 <rm_work> yeah, doesn't that essentially make them sync?
20:07 <johnsom> Or we are piling up a bunch of updates in a queue somewhere hoping we ever get an update.
20:07 <johnsom> Not a fan of that idea
20:07 <rm_work> I mean, I am thinking like
20:08 <rm_work> LBaaSv3 API here
20:08 <johnsom> I mean, I don't think we have a problem with third party drivers. I think that is all fine, it's purely a problem with our current controller
20:08 <rm_work> I would like to architect it to be properly non-locking
20:08 <xgerman_> third party can fail, too, and leave things in the old state
20:09 <johnsom> rm_work IMO so many other important things right now....
20:09 <rm_work> that's what i mean
20:10 <rm_work> for non-locking, it'd be in v3 which is
20:10 <rm_work> very far off
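xgerman_'s "key off the status update" idea from the exchange above (ERROR keeps the old settings, ACTIVE persists the staged ones) could be sketched like this. It is purely illustrative, with an in-memory dict standing in for the DB, and is not Octavia's actual controller code:

```python
# Stage an update, hand it to the (possibly async) driver, and only
# commit the staged values once the driver reports ACTIVE.
store = {'lb1': {'name': 'old', 'provisioning_status': 'ACTIVE'}}
staged = {}  # updates accepted by the API but not yet confirmed

def begin_update(lb_id, changes):
    staged[lb_id] = changes
    store[lb_id]['provisioning_status'] = 'PENDING_UPDATE'

def on_driver_status(lb_id, status):
    if status == 'ACTIVE':
        store[lb_id].update(staged.pop(lb_id))  # commit staged changes
    else:
        staged.pop(lb_id, None)  # ERROR: keep the old settings
    store[lb_id]['provisioning_status'] = status

# Happy path: driver confirms, new value is persisted.
begin_update('lb1', {'name': 'new'})
on_driver_status('lb1', 'ACTIVE')
assert store['lb1']['name'] == 'new'

# Failure path: driver errors, the staged value is discarded.
begin_update('lb1', {'name': 'risky'})
on_driver_status('lb1', 'ERROR')
assert store['lb1']['name'] == 'new'
assert store['lb1']['provisioning_status'] == 'ERROR'
```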
20:12 <johnsom> Well, I am going to test out my listener stuff with actually having barbican running (the cert/sni stuff gave me fits), then update the LB and Listener patch to update the DB at driver acceptance, then move on to pools, etc.
20:13 <johnsom> Also, I changed the way that thing works that you were so wound up about yesterday in this listener patch, so take a look and start wearing me down again
20:14 <johnsom> Probably good to get some reviews on that HM crash thing too:
20:14 <johnsom> Though I could not get it to crash, even using the tool they were
20:23 <rm_work> err which thing?
20:32 <rm_work> i meant on the listener
20:32 <rm_work> that i was apparently wound up about :P
20:32 <rm_work> i don't remember T_T
20:32 <rm_work> that was like, YESTERDAY
20:33 <rm_work> looks good +2
20:33 <rm_work> could use similar on
20:33 <rm_work> err, I also +A'd
20:42 *** samccann has quit IRC
20:43 <rm_work> xgerman_: responded
20:43 <rm_work> Also note: I have a plan to DRY this whole thing, but I would rather go back in and do it after I have a better vision of the whole thing
21:41 <johnsom> Ugh, ok, caught up on e-mail and new stories.
21:41 <rm_work> i found a fun one... which is, it doesn't seem to ACTUALLY run the resource_setup from the base
21:42 <rm_work> so there's no webservers, no security groups, etc
21:42 <johnsom> Missing super somewhere?
21:42 <rm_work> not that i can find
21:42 <johnsom> noop or overrides in the config?
21:43 <rm_work> i see nothing that could prevent this from running
21:43 <rm_work> oh nm i do
21:43 <rm_work> there it is
21:43 <rm_work> it was hiding from me
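johnsom's first guess above (an override of resource_setup that forgets to call super()) is a classic way base-class fixtures silently never run, even though rm_work's actual culprit turned out to be something else. A minimal illustration, not the real tempest plugin code:

```python
# A base test class that creates shared resources in resource_setup,
# and a subclass that overrides it without chaining to super().
class Base:
    resources = []

    @classmethod
    def resource_setup(cls):
        cls.resources.append('webserver')  # stand-in for real fixtures

class BrokenTest(Base):
    @classmethod
    def resource_setup(cls):
        pass  # forgot super().resource_setup() -> base setup never runs

class FixedTest(Base):
    @classmethod
    def resource_setup(cls):
        super().resource_setup()  # chains up, so fixtures are created

BrokenTest.resource_setup()
assert Base.resources == []  # no webservers, no security groups, etc.

FixedTest.resource_setup()
assert Base.resources == ['webserver']
```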
22:20 <johnsom> rm_work Have you set a SNI with the OSC?
22:21 <johnsom> Not sure it works. At least our example in the cookbook doesn't
22:21 <rm_work> try a --
22:21 <rm_work> openstack loadbalancer listener set --default-tls-container-ref --sni-container-refs -- listener1
22:21 <rm_work> looks like it's eating the listener1
22:22 <rm_work> as a second SNI
22:22 <johnsom> Yeah, the -- worked
22:22 <rm_work> so we need to fix the example
22:22 <rm_work> or change the client
22:22 <johnsom> So, should we add that to the cookbook or do we need to fix the client????
22:23 <rm_work> we could change the client to just take the arg multiple times
22:23 <rm_work> instead of expecting a list
22:23 <rm_work> (I think?)
22:23 <johnsom> Yeah, I think so too.
22:23 <johnsom> Not sure how, but should work
22:23 <rm_work> the question is whether or not cliff lets you do that
22:24 <johnsom> Have time to look at that?
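Since cliff option parsing is built on stdlib argparse, the two styles discussed above can be contrasted in a small sketch (flag names are hypothetical): a greedy `nargs='+'` list that swallows the positional unless `--` separates them, versus repeating the flag with `action='append'`, which has no ambiguity:

```python
import argparse
import contextlib
import io

greedy = argparse.ArgumentParser()
greedy.add_argument('--sni-container-refs', nargs='+')
greedy.add_argument('listener')

# With the `--` separator, the positional survives:
ok_args = greedy.parse_args(['--sni-container-refs', 'ref1', '--', 'listener1'])
assert ok_args.listener == 'listener1'

# Without `--`, 'listener1' is eaten as a second SNI ref and parsing
# fails because the required positional is missing:
with contextlib.redirect_stderr(io.StringIO()):
    try:
        greedy.parse_args(['--sni-container-refs', 'ref1', 'listener1'])
        raised = False
    except SystemExit:
        raised = True
assert raised

# Taking the arg multiple times avoids the problem entirely:
repeat = argparse.ArgumentParser()
repeat.add_argument('--sni-container-ref', action='append', dest='refs')
repeat.add_argument('listener')

args = repeat.parse_args(['--sni-container-ref', 'ref1',
                          '--sni-container-ref', 'ref2', 'listener1'])
assert args.refs == ['ref1', 'ref2']
assert args.listener == 'listener1'
```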
<openstackgerrit> Merged openstack/octavia master: Improve Health Manager error handling
22:28 <rm_work> not this week :(
22:28 <rm_work> not *soon*
22:29 <rm_work> it's pretty low priority honestly >_>
22:29 <johnsom> Ok, I will update the doc and add a story
22:30 *** fnaval has quit IRC
22:35 <johnsom> ^^^ In case someone has time for an easy fix
23:21 *** yamamoto has joined #openstack-lbaas
23:26 *** yamamoto has quit IRC
<openstackgerrit> Michael Johnson proposed openstack/octavia master: Implement provider drivers - Listener

Generated by 2.15.3 by Marius Gedminas - find it at!