Wednesday, 2016-02-24

00:01 *** piet has joined #openstack-lbaas
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 jinja template updates
00:04 <sbalukoff> Great... and now gerrit is hanging...
00:04 <sbalukoff> Oh there it goes!
00:05 *** yamamoto_ has joined #openstack-lbaas
00:06 <johnsom> Yeah, I had gerrit give me a 500 too
00:07 <rm_work> ok, i'm going to take a break and come back in a couple of hours -- probably by then we'll have things through checks?
00:07 <rm_work> which patch was the "new fix" in? sbalukoff
00:07 <rm_work> i'll take a look at that now
00:08 <rm_work> and are you *fully* rebased yet? I guess missing a couple at the top
00:08 <rm_work> err, end
00:08 <rm_work> top/bottom so ambiguous
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 documentation
00:09 <sbalukoff> rm_work: The very first one in the chain.
00:09 *** manishg_wfh has quit IRC
00:09 <sbalukoff> I have one more to rebase.
00:11 <rm_work> my only comment is that I usually do "pool=None" above the if, and don't use an else
00:11 <rm_work> but it's just style
00:11 <rm_work> I think yours actually does one less operation, technically
00:11 <rm_work> so might be better
00:12 <rm_work> but this looks sane to me
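For context, the two styles being compared might look like the following (a hypothetical sketch; the actual patch under review isn't shown in this log, and `pools` stands in for whatever the real code inspects):

```python
# Hypothetical illustration of the two styles discussed above.
pools = [{"id": "a"}, {"id": "b"}]

# Style A (rm_work's habit): pre-assign None above the if, no else.
pool = None
if pools:
    pool = pools[0]

# Style B (as in the patch, per the discussion): if/else assigns in
# both branches, skipping the redundant pre-assignment -- hence
# "one less operation".
if pools:
    pool = pools[0]
else:
    pool = None

print(pool)
```

Both forms are equivalent in result; the difference is purely stylistic.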
00:12 *** ducttape_ has quit IRC
00:12 <rm_work> i'm tired and babbling
00:12 <sbalukoff> I think we are still waiting on a verdict from the gate on whether it fixes the issue we saw.
00:13 <rm_work> which is why i'm going to go do stuff and be back in a couple of hours
00:13 <sbalukoff> Ok, cool.
00:14 <rm_work> i'll join from my desktop in case you need to ping me
00:14 *** manishg_wfh has joined #openstack-lbaas
00:14 *** yamamoto_ has quit IRC
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add listener stats API
00:15 *** rm_you has joined #openstack-lbaas
00:15 <sbalukoff> Ok! The chain should now be rebased.
00:15 <rm_you> ah k
00:15 <rm_you> will go back and start a devstack build then
00:15 <sbalukoff> And most of the patches didn't lose their +2's because this was a rebase.
<openstackgerrit> Michael Johnson proposed openstack/octavia: Change HMAC compare to use constant_time_compare
00:16 <rm_work> hmm when i go to the related changes section it looks much smaller
00:16 <rm_work> sbalukoff: ^^
00:16 <rm_work> did some lose their topic or something?
00:16 <rm_work> though they should still be listed there if they're in the chain i thought
00:17 <johnsom> That is strange
00:17 <sbalukoff> That is really bizarre.
00:17 <rm_work> trying to trace the chain now
00:18 <johnsom> If I go the other direction, down from mine, it looks ok
00:18 <rm_work> i'm looking from 278830
<sbalukoff> The chain is visible here:
00:18 <sbalukoff> And those look to be the right patch sets.
00:19 <sbalukoff> It's only the one at the bottom that seems confused...
00:19 <sbalukoff> I wonder if that was related to the gerrit hang I saw earlier.
00:19 *** manishg_wfh has quit IRC
00:19 <rm_work> yeah looks ok
00:20 <johnsom> No love
00:20 <rm_work> did anything change today client-side or n-lbaas side?
<johnsom> Logs are here:
00:26 <sbalukoff> Ok, I'm going to restack to try to figure out what's wrong with that tempest test locally.
00:28 <rm_you> that isn't ...
00:28 <rm_you> that's the second patch, not the first
00:28 <rm_you> the first is 280478
00:28 <rm_you> which is not the one you linked me for the fix
00:29 <rm_you> this is a totally different issue
00:29 <rm_you> this makes sense
00:29 <sbalukoff> Ok, I'm confused.
<rm_you> THAT error is from
00:30 <rm_you> on line 278
00:30 <rm_you> which, obviously pool can be None
00:30 <rm_you> should have caught that
00:32 <sbalukoff> Heh! At least the failing test makes sense now.
00:32 <sbalukoff> Let me get on fixing *that*
00:32 <rm_you> i went from the end downward when reviewing
00:32 <sbalukoff> I really need to get more sleep at night. :P
00:32 <rm_you> so my guess is that my eyes had completely glazed over by the time i got here
00:32 <rm_you> same >_<
00:33 <rm_you> what is amazing though is: *tests caught a bug* :P
00:33 <rm_you> that's almost more surprising than the bug being there
00:35 <sbalukoff> Also part of the reason I want native tempest tests for Octavia.
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Fix model update flows
00:36 <sbalukoff> And here goes the rebase chain again.
00:36 <sbalukoff> I'm pretty sure I killed the bug this time.
00:36 <sbalukoff> But... again, we shall see.
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Assign peer_port on listener creation
00:41 <rm_you> well, it was a different bug
00:42 *** diogogmt has joined #openstack-lbaas
00:43 <rm_you> ok cool and it fixed the dumbness with the side-panel
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 database structures
00:44 <rm_you> the "related changes" include everything on the first patch now
00:46 <sbalukoff> Oh! Ok.
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Update repos for L7 policy / methods
00:46 *** crc32 has quit IRC
00:49 <johnsom> sbalukoff I think we need to have the chat about the data model comments tomorrow.  I really need to do some internal work I planned to do today.
00:49 <sbalukoff> johnsom: Ok, no time to chat now?
00:49 <sbalukoff> I can pause this rebasing stuff.
00:49 <rm_you> what was the issue you saw johnsom?
00:49 <johnsom> I figured you were back on the rebase train
00:50 <sbalukoff> I can hold off. I'd like to make sure that fix at the start really fixes things.
00:50 <sbalukoff> Otherwise, I'm back on the train again.
<johnsom> Ok.  rm_work it's the comments German and I put on
00:51 <rm_you> ah that
00:51 <sbalukoff> johnsom: Ok, I'm assuming you read through my responses. What are your thoughts?
00:51 <johnsom> So, starting with line 219 where we put the policy in reject when the pool is removed.
00:52 <johnsom> The user experience is a bit odd here, we are changing the behavior in a maybe hidden way.  Would it be better to reject the pool delete if it is in use by a policy?
00:53 <sbalukoff> We don't presently reject pool deletes if they're in use by a listener.
00:53 *** Aish has quit IRC
00:53 <sbalukoff> I see this as being similar to that.
00:54 <johnsom> But we do reject pool deletes if there is a member in the pool
00:54 <rm_you> wait, we do? i guess i didn't test that :P
00:54 <rm_you> ah right because nothing cascades yet
00:54 <johnsom> I think this is more like deleting the listener when there is a pool attached
00:54 <rm_you> I would expect delete pool -> delete members
00:55 <sbalukoff> rm_you: Same here. :/
00:55 <sbalukoff> Aah-- we do:
00:56 <sbalukoff>     pool = orm.relationship("Pool", backref=orm.backref("members",
00:56 <sbalukoff>                                                         uselist=True,
00:56 <sbalukoff>                                                         cascade="delete"))
00:56 <sbalukoff> That's from models.py
00:56 <sbalukoff> In the Member class.
00:56 <johnsom> Right, but the api blocks it
00:56 <rm_you> yeah so funnily there can never be members when that happens
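The models.py snippet quoted above relies on SQLAlchemy's ORM-level cascade: deleting a Pool also deletes its members. A minimal standalone sketch of the same pattern (table and class shapes here are illustrative, not Octavia's actual models; assumes SQLAlchemy 1.4+):

```python
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Pool(Base):
    __tablename__ = "pool"
    id = sa.Column(sa.Integer, primary_key=True)

class Member(Base):
    __tablename__ = "member"
    id = sa.Column(sa.Integer, primary_key=True)
    pool_id = sa.Column(sa.Integer, sa.ForeignKey("pool.id"))
    # Same shape as the quoted Octavia snippet: the cascade on the
    # backref means deleting a Pool cascades the delete to members.
    pool = orm.relationship("Pool",
                            backref=orm.backref("members",
                                                uselist=True,
                                                cascade="delete"))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.sessionmaker(bind=engine)()

pool = Pool(id=1)
session.add_all([pool, Member(id=1, pool=pool), Member(id=2, pool=pool)])
session.commit()

session.delete(pool)   # ORM cascade removes the two members as well
session.commit()
print(session.query(Member).count())
```

This is exactly why, as noted in the discussion, the cascade never fires in practice when the API layer already blocks deleting a pool that has members.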
00:57 <rm_you> i remember this from part of the discussion of what xgerman is working on, from the midcycle
00:57 <johnsom> What concerns me is deleting the pool can cause requests previously serviced by the policy to go from working to "REJECT" with no indication to the operator deleting the pool that they just caused that.
00:57 <rm_you> yeah i think MAYBE we should just block deleting a pool in use?
00:57 <rm_you> force them to either update the listener to use a different pool first, or delete it
00:57 <sbalukoff> johnsom: It could be an equally bad situation to route requests to the default pool. :/
00:58 <rm_you> sbalukoff: ^^
00:58 <rm_you> so literally just deny a delete on a pool in use by l7
00:58 <johnsom> Agreed.  That was my first idea, but yeah, I think sending the traffic back to the default is not good either.
00:59 <johnsom> Ok, so it sounds like we agree on that one.  Do you want me to open a bug for that?
00:59 *** ducttape_ has joined #openstack-lbaas
00:59 <sbalukoff> Ok, I think I agree that's probably the best user experience. It's not what Evgeny and I agreed on months ago though, so we'll want the neutron-lbaas stuff to behave the same way.
00:59 <sbalukoff> Can we file this as a bug for now?
00:59 <rm_you> I would say yes
00:59 <sbalukoff> Wow, were we just on the same wavelength there, johnsom?
00:59 <johnsom> Don't freak me out man
01:00 <johnsom> Ok, next up is line 281 where a listener update is deleting the pool from the model (not sqlalchemy or db)
01:01 <sbalukoff> This is a little more complicated to understand.
01:01 <johnsom> This seems like a side effect of doing the listener update.  If the model is used in later steps it would not see the pool in pools, right?
01:02 <sbalukoff> It's really important to understand that the data model graph that's being operated on is transient: It exists only as long as the operation doing the update / delete command is executing and gets discarded afterward in the flow regardless of whether whatever operation we were working on succeeded.
01:03 <johnsom> Tell me more about that.  You are copying the model somewhere upstream or doing a reload task right after?
01:03 <sbalukoff> So, if the model *graph* is used later on, the pool will still be there (linked under loadbalancer and probably other places within the graph), but it won't be listed under the listener.pools list (which is actually what we want-- we just simulated updating that relationship.)
01:03 *** paco20151113 has joined #openstack-lbaas
01:04 <sbalukoff> Ok, let me find exactly where the Listener (data model) update method gets called:
01:04 *** minwang2 has quit IRC
01:05 <sbalukoff> Ok, it's called in model_tasks.UpdateAttributes
01:06 <johnsom> Right you changed the way I was doing that.
01:06 *** minwang2 has joined #openstack-lbaas
01:06 <johnsom> It's used to update the model just before pushing a new haproxy.cfg out
01:06 <sbalukoff> Right, because the old way assumed the attributes in the model were static attributes.
01:06 <sbalukoff> Yes, and that's the *only* place in the code tree where data model update() methods get called.
01:07 <sbalukoff> By design, I think-- any other time we manipulate stuff we want the changes to be saved in the database.
01:08 <sbalukoff> The problem with the old update method in that task was by assuming the attributes were simple and static, we break things when they're actually not.
01:08 <johnsom> Well, let's not argue that mess again.  Let me ask some more questions and see if I can understand the point of this remove(old_pool)
01:09 <sbalukoff> Ok, that's to keep the data model graph accurately reflecting what would happen to the data model if we were doing this update on the repository.
01:09 <sbalukoff> So for example...
01:09 <johnsom> So, here is the question.  If I make a listener update call, admin down or such, and there is no L7 policy, but a default pool, wouldn't the rendered haproxy be missing its pool config?
01:10 <sbalukoff> Can you point me to the line numbers you're looking at?
01:11 <johnsom> 281-287, actually I have to change that slightly, the user is updating the listener to have a different default pool, as opposed to admin down
01:12 <sbalukoff> So, on lines 282-284, I figure out which pools are actually referenced by active l7 policies...
01:12 <sbalukoff> 285 I save the old_pool
01:13 <sbalukoff> 286 I see whether the old_pool is one that is referenced by an l7 policy, and if it *is*, i *don't* remove it from the list.
01:13 <sbalukoff> If the old_pool is not referenced by an l7 policy, then we know it was only ever referenced as the default_pool, so we will want to remove it from the listener.pools list.
01:14 <sbalukoff> Plus, right after, we remove the current listener (that is being changed) from the old_pool's listeners list.
01:14 <johnsom> I think I see it now, but if the new default is not on an l7 would it not get set?
01:14 *** yamamoto_ has joined #openstack-lbaas
01:15 <sbalukoff> Though, technically I don't think there's any code that dereferences that right now-- but I figured it's better to keep the data model graph completely accurate.
01:15 <johnsom> Ah, ok.  I get it now.
01:15 <johnsom> I think it is fine
01:15 <johnsom> Ok.  Last one, 427
01:16 <sbalukoff> That's a lot more convoluted.
01:16 <sbalukoff> But it's the same basic idea.
01:16 <johnsom> Great for the end of the day huh?
01:16 <sbalukoff> So here's what's up with that:
01:16 <sbalukoff> If we are deleting an l7rule, there is a chance that it's the last rule in an l7policy.
01:17 <johnsom> Right, following that
01:17 <sbalukoff> If that's the case, then the l7policy is going to become "inactive" on the listener, and any pool it references could disappear from the listener.pools list.
01:18 <sbalukoff> There's a *lot* of logic to handle that case in the L7Policy update and delete methods...
01:18 <johnsom> I guess this doesn't really matter as the haproxy would not have an acl to match, so dropping it from the render is the right thing
01:18 <sbalukoff> Technically, doing a l7policy.delete here means that the listener.l7policies list doesn't have it anymore...
01:18 <sbalukoff> But, that's OK because it wouldn't be rendered in the haproxy config anyway.
01:18 <johnsom> Ok, I think I have it
01:19 <sbalukoff> So... this is really just a shortcut not to have to copy-paste code for managing the l7policy relationships.
01:19 *** yamamoto_ has quit IRC
01:20 *** yamamoto_ has joined #openstack-lbaas
01:20 <johnsom> Ok, I think I am good to merge the beast.  Those were really the last concerns (other than the gates)
01:20 *** manishg_wfh has joined #openstack-lbaas
01:20 <sbalukoff> Ok. Thank you for letting me fix the minor problems we've uncovered in the last week in follow-up bugs. :)
01:21 <sbalukoff> (And hopefully the gate clears now.)
01:21 <johnsom> Sure, NP.  Sorry you had a heart attack over the -1's
01:21 <sbalukoff> I'm still baffled as to how it could have passed before.
01:21 <johnsom> I'm not quite the hard-a$$ you were....
01:21 *** armax has joined #openstack-lbaas
01:22 <sbalukoff> johnsom: Oh, you're definitely a hard-ass. But for the right reasons: You don't want to introduce stuff that's poorly written or likely to break later on in unusual ways.
01:22 <sbalukoff> I think that's a good thing.
01:22 <johnsom> It looks like it passed that time
01:22 <sbalukoff> Ok, thanks for taking time at the end of your day to talk through this stuff.
01:23 <sbalukoff> Any other questions before I go heads down on rebasing the rest of this chain after that fix at the start of the chain?
01:23 <johnsom> Sure.  I'll be here for another hour or two working on the internal stuff.  If you need +2's ping me
01:23 <johnsom> No, I am good
01:23 <sbalukoff> johnsom: Will do!
01:23 <johnsom> Tomorrow you can fix the bug in lbaas and I'll +2 that too
01:24 <rm_you> so wait, +2 time on the first patches again?
01:24 <sbalukoff> Yeah, I think I will probably be able to sleep better tonight. :) I'll tackle that when I'm fresh in the morning. :)
01:24 <johnsom> rm_you it passed the gate that failed before
01:24 <johnsom> scenario is still running
01:24 *** minwang2 has quit IRC
01:24 <sbalukoff> rm_you: I think so, if they're rebased off that first fix in the chain.
01:24 <rm_you> ah k
01:25 <sbalukoff> rm_you: You might want to wait until things pass tests the first time again.
01:26 <sbalukoff> Ok, I'll be around, rebasing stuff for a bit...
01:27 <madhu_ak> fnaval, commented on your scenario test patch
01:27 *** madhu_ak is now known as madhu_ak|home
01:29 <sbalukoff> johnsom: Once the Octavia patches land, and the neutron-lbaas stuff is taken care of so it can also land by next Monday, could you give me some direction on which of the outstanding bugs assigned to me you'd like to see fixed first?
01:31 <johnsom> I have assigned severities
01:35 <sbalukoff> Ok, cool.
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Update repos for L7 rules / validations
01:42 *** ducttape_ has quit IRC
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 api - policies
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 api - rules
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 controller worker flows and tasks
01:51 *** Purandar has quit IRC
01:53 *** piet has quit IRC
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 jinja template updates
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 documentation
02:00 *** yamamoto_ has quit IRC
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add listener stats API
02:06 <sbalukoff> Man, I am not going to miss having to rebase that chain.
02:06 <sbalukoff> I *think* we're all good there.
<openstackgerrit> Michael Johnson proposed openstack/octavia: Change HMAC compare to use constant_time_compare
02:07 <johnsom> You could review that one as it depends on your patches and already has a +2
02:07 *** pcaruana has quit IRC
02:08 <sbalukoff> johnsom: Sure!
02:08 <johnsom> Of course it has to go through the gate again
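The patch under review here switches HMAC digest comparison to a constant-time compare. In the Python standard library the equivalent primitive is `hmac.compare_digest`; a general sketch of the idea (not the Octavia patch itself):

```python
import hashlib
import hmac

key = b"shared-secret"
payload = b"heartbeat message"

expected = hmac.new(key, payload, hashlib.sha256).digest()
received = hmac.new(key, payload, hashlib.sha256).digest()

# A naive `expected == received` can short-circuit on the first
# mismatched byte, leaking timing information about how much of the
# digest matches; compare_digest takes time independent of contents.
print(hmac.compare_digest(expected, received))
```

The same timing concern applies to any secret comparison, which is why the patch title calls the helper out by name.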
<sbalukoff> rm_work / rm_you: If you want to +2 / +A this (now working) head of the chain... well, we can get the merge train rolling, eh!
02:09 <rm_you> just did
02:09 <rm_you> should have minimum of 3 merging
02:09 <rm_you> 7 of them in a row with +A
02:09 <rm_you> should all co-gate
02:14 *** H3y has quit IRC
02:15 <johnsom> I think our project just changed.  My last patch has a requirements gate
02:15 <rm_you> yep there they go
02:15 <rm_you> johnsom: uhhh
02:15 <rm_you> johnsom: i did +1 the requirements patch recently
02:15 <rm_you> i wonder if it merged
02:16 <johnsom> I suspect it will break as our requirements file is out-of-date
02:16 <rm_you> it should be ok
02:16 <rm_you> it just starts the requirements-update job i think
02:17 <rm_you> where do you see that johnsom
02:17 <johnsom> zuul 283330
02:18 <johnsom> there is a releasenotes gate there too
02:18 <johnsom> Which I asked about a long time ago, but got no traction, so ignored
02:19 *** pcaruana has joined #openstack-lbaas
02:21 <johnsom> Well, it passed, so that is good
02:22 <sbalukoff> I'm going to re-stack locally here and see if I can figure out how many rules in an l7policy breaks the haproxy config...
02:22 <sbalukoff> (Should make for a nice diversion so I don't have to think about the gate for a while.)
02:23 <rm_you> i was going to ask bedis
02:24 <rm_you> i figure he'd just ... know
02:24 <sbalukoff> I suspect it's going to be some multiple of 256 bytes on a line.
02:25 <sbalukoff> (Probably 1024)
02:30 <rm_you> i have a set of stuff i'm waiting to +A too lol
02:32 <rm_you> just sitting here watching this
02:34 <sbalukoff> It is pretty fascinating.
02:34 *** Purandar has joined #openstack-lbaas
02:35 <johnsom> Wow, really?
02:35 *** kevo has quit IRC
02:35 <johnsom> I just get depressed that the gates are so slow.  Granted, we are lucky at the moment
02:36 <sbalukoff> It's kind of like watching a rube-goldberg machine.
02:36 <johnsom> So true
02:36 <sbalukoff> Kind of a "man, we were so lucky to witness that actually working" vibe to it.
02:37 <johnsom> "I was there for L7" stickers
<openstackgerrit> Merged openstack/octavia: Fix model update flows
<openstackgerrit> Merged openstack/octavia: Assign peer_port on listener creation
02:44 <rm_you> IT BEGINS
02:47 *** Purandar has quit IRC
02:47 *** yamamoto_ has joined #openstack-lbaas
<openstackgerrit> Merged openstack/octavia: Add L7 database structures
<openstackgerrit> Merged openstack/octavia: Update repos for L7 policy / methods
<openstackgerrit> Merged openstack/octavia: Update repos for L7 rules / validations
<openstackgerrit> Merged openstack/octavia: Add L7 api - policies
02:51 <sbalukoff> It continues!
02:51 *** Aish has joined #openstack-lbaas
02:51 *** Aish has left #openstack-lbaas
03:00 *** manishg_wfh has quit IRC
03:00 *** manishg_wfh has joined #openstack-lbaas
03:01 *** pcaruana has quit IRC
03:02 <rm_you> johnsom: jinx
03:02 <rm_you> within like
03:02 <rm_you> 5 seconds we both +A'd
03:03 <johnsom> Yeah, ha
<openstackgerrit> Franklin Naval proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test
03:05 <rm_you> wish i could +A 281603 too but i haven't tested
03:05 <johnsom> Yeah, I have the console open watching the test
<openstackgerrit> Merged openstack/octavia: Add L7 api - rules
03:06 <johnsom> rm_work it hit session persistence
03:08 <sbalukoff> 281603 is a relatively minor test-- adding functionality that's been in the docs but broken since as long as Octavia has been around.
03:09 <sbalukoff> I put it at the end of the L7 chain because it touches the listener API (as does L7) and didn't want to futz with L7.
03:09 <sbalukoff> johnsom: If you want to rebase your patch for doing the time comparison to go before that one, it shouldn't break anything, eh.
03:10 <sbalukoff> (And then that can get merged tonight as well.)
03:10 <johnsom> Nah, I have faith in the stats stuff testing out ok
03:12 <sbalukoff> Once the last of the L7 stuff merges, I will probably rebase any of the smallish bugfixes I have in the queue that end up in merge conflict.
03:13 <rm_you> i might just +A this
03:13 <rm_you> it looks fine on review
03:16 *** pcaruana has joined #openstack-lbaas
03:17 <sbalukoff> It's pretty simple.
03:17 *** TrevorV has joined #openstack-lbaas
03:18 *** TrevorV2 has joined #openstack-lbaas
03:20 <rm_you> sure, why not
03:20 <rm_you> it's a gate party
<sbalukoff> Am I smoking crack in the comment I made here?
03:25 <rm_you> i think that makes sense
03:25 <rm_you> only one could have "null" if it was unique :P
03:25 <rm_you> (and nullable)
03:26 <sbalukoff> Yeah, that's one that I apparently missed in the neutron-lbaas shared-pools patch.
03:26 <rm_you> i did that before by accident once
03:27 *** TrevorV has quit IRC
03:27 *** TrevorV2 has quit IRC
03:28 *** TrevorV has joined #openstack-lbaas
03:30 <TrevorV> rm_work, you online on here homie?
03:30 <TrevorV> can you go active in TS for a minute or two
03:31 <rm_you> sec also in another ts
<johnsom> Ok, I will be away for a bit.  If you happen to think about it, can you click rebase on this after the stack is merged?
<openstackgerrit> Merged openstack/octavia: Add L7 controller worker flows and tasks
03:36 <sbalukoff> johnsom: Sure!
03:46 *** links has joined #openstack-lbaas
<openstackgerrit> Merged openstack/octavia: Add L7 jinja template updates
<openstackgerrit> Merged openstack/octavia: Add L7 documentation
03:58 *** ducttape_ has joined #openstack-lbaas
03:59 <sbalukoff> Almost done with the merges for the evening... then it's rebase time, eh!
03:59 <sbalukoff> (for the small bugfixes already in the queue.)
03:59 <sbalukoff> Also: Finished running my test: I was able to add 53 rules to an l7policy before haproxy refused to parse the config anymore.
04:00 <TrevorV> sbalukoff, your stuff all merged just now
04:00 <sbalukoff> I would be extremely surprised if anyone actually does that.
04:00 <TrevorV> Now *I* get the focus :D
04:00 <sbalukoff> TrevorV: Woo-hoo!
04:00 <sbalukoff> TrevorV: Yep!
04:00 <sbalukoff> TrevorV: I'm hoping to see the single-create merge this week as well, eh.
04:00 <TrevorV> scary being under the spotlight, but I think I got this single-create thing licked for Octavia at least
04:01 *** pcaruana has quit IRC
04:01 <sbalukoff> Heh! I know the feeling. I'm glad I won't be holding others up anymore with the L7 stuff. Now I'll just be holding them up with bugfixes. :)
04:01 <TrevorV> Good plan actually!
04:01 <sbalukoff> Though hopefully most of those will be minor. I feel like L7 got a good workout.
04:02 <rm_you> afk for 30m
04:02 <TrevorV> If I get the single-create stuff done this week I'll try to put some time in on rounding out those SQLAlchemy issues we've seen
04:02 <sbalukoff> TrevorV: Sounds good.
04:03 *** manishg_wfh has quit IRC
04:04 *** neelashah has joined #openstack-lbaas
<openstackgerrit> Merged openstack/octavia: Add listener stats API
<openstackgerrit> Franklin Naval proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test
<openstackgerrit> Merged openstack/octavia: Change HMAC compare to use constant_time_compare
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add a request timeout to the REST API driver
04:07 <sbalukoff> (rebased that per johnsom's request)
04:08 *** allan_h has quit IRC
04:10 <blogan> L7 merge in octavia!
04:11 <blogan> sorry I wasn't able to provide much testing/feedback on that stuff, too much going on internally for me, but thanks for all the hard work you guys
04:11 <blogan> and all the scrollback to read :)
<openstackgerrit> Trevor Vardeman proposed openstack/octavia: WIP - Get me a Load Balancer API
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Fix health monitor URL in API documentation
04:15 *** pcaruana has joined #openstack-lbaas
04:15 <sbalukoff> blogan: Haha! you're welcome. XD
04:15 *** ducttape_ has quit IRC
04:16 <blogan> sbalukoff: i feel so useless now :(
04:16 *** fnaval has quit IRC
04:16 <sbalukoff> blogan: Naw-- there are lots of bugs to fix, most of which are small-ish. :)
04:17 <johnsom> blogan If it feels better, we still pick on you
04:17 <johnsom> sbalukoff Thanks for the rebase
04:17 <sbalukoff> No problem, eh!
04:17 <blogan> sbalukoff: oh i don't doubt there are bugs to fix, but at least those can be done after
04:17 <blogan> and backported too
04:18 <blogan> johnsom: at least that will never change, that is my legacy
04:18 <blogan> that and data models
04:18 <sbalukoff> Once Monday's deadline is passed, I plan on going on a smashing spree with great alacrity.
04:18 <sbalukoff> Octavia must not suck in Mitaka!
04:18 <johnsom> Cool, I think a bunch of them will go quickly
04:20 <blogan> well i hope some of the internal stuff gets settled and dies down so i can help out on that, but with the way things have been going i doubt that'll happen soon
04:20 <sbalukoff> johnsom: Don't know if you saw, but I found the limit for l7rules in a single l7policy:  53. haproxy stops parsing a config line after 2048 characters.
04:21 <sbalukoff> johnsom: That is a ridiculously high number of rules in one policy, so I think maybe we just set the limit at 50 and call it good.
04:21 <blogan> sbalukoff: meant to ask you a question, will a pool that is shared between multiple listeners and/or multiple l7policies have different statuses?
04:21 <sbalukoff> If you want I can add a validation for that.
04:21 <sbalukoff> blogan: It shouldn't.
04:21 <sbalukoff> It's the same DB record.
04:22 <blogan> actually i dont think this is a big problem with pools, but will be for listeners, but they'd probably need different statuses
04:22 <johnsom> sbalukoff Yes, a limit at 50 sounds like a good and reasonable idea
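A validation like the one proposed here, capping rules per l7policy at 50 so the rendered haproxy ACL line stays safely under the 2048-character parse limit mentioned above, could be sketched as follows (the constant name, exception type, and function are hypothetical, not Octavia's actual API):

```python
# Hypothetical sketch of the proposed per-policy rule cap; names
# are illustrative, not taken from the Octavia codebase.
MAX_L7RULES_PER_POLICY = 50

class TooManyL7Rules(Exception):
    pass

def validate_l7rule_count(existing_rules, adding=1):
    """Reject an l7rule create that would exceed the per-policy cap."""
    if len(existing_rules) + adding > MAX_L7RULES_PER_POLICY:
        raise TooManyL7Rules(
            "An l7policy may have at most %d rules"
            % MAX_L7RULES_PER_POLICY)

validate_l7rule_count(range(49))   # the 50th rule is still allowed
try:
    validate_l7rule_count(range(50))   # the 51st is rejected
    result = "accepted"
except TooManyL7Rules:
    result = "rejected"
print(result)
```

Rejecting at create time gives the operator an explicit error instead of a silently unparseable haproxy config.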
04:23 <sbalukoff> blogan: Ok, we actually haven't done a whole lot of design around how status updates and whatnot should work from what I can tell. Maybe we should dig into that deeply and figure out what makes sense?
04:23 <sbalukoff> I'm thinking this is something we want to spec out and have discussion about.
<openstackgerrit> Paco Peng proposed openstack/octavia: Fixed invalid IP value get by awk in sample file Closes-bug: #1549091
04:24 <openstack> bug 1549091 in octavia "samples/ get IP1 IP2 value not correct with $5" [Undecided,New] - Assigned to Paco Peng (pzx-hero-19841002)
04:24 <blogan> sbalukoff: yeah probably at some point, it was just something that we had issues in the beginning of the v2 circus
04:24 <sbalukoff> Because I think it's always been lightly passed over in other design discussions because the energy is used up on other things when we talk about it.
04:24 <sbalukoff> blogan: yeah, that's what I'm talking about.
04:25 <sbalukoff> i remember those discussions, and it was hard enough to get the structure we have...  nobody had any big ideas around status... or statistics for that matter.
04:25 <blogan> if we had one haproxy process for all listeners it'd be easy to wave away the possible problem, its not a big problem really though
04:26 <sbalukoff> I'm happy to revisit that discussion: I'm less vehemently opposed to the idea of one haproxy per LB.
04:26 <blogan> lol yeah it was, i remember almost going with a DETACHED status
04:26 <sbalukoff> we're also not out of the woods if we go there.
04:26 <sbalukoff> We already have active-standby (ie. two processes on different hosts)
04:26 <sbalukoff> And we should be thinking hard about active-active.
04:26 <blogan> good point
04:27 <sbalukoff> Should probably revisit the log shipping discussion at some point, though thankfully we don't have clients begging for that yet.
04:27 <blogan> ha yeah
04:27 <sbalukoff> (That's another one to ignore until someone needs it enough to write a spec.)
04:27 <blogan> we'll have to deal with that towards the last quarter of the year i bet
04:28 <blogan> we as in rackspace
04:28 <blogan> but we as in octavia works too
04:28 <blogan> bc that'll be Newton
04:29 <blogan> so after L7 and single create call, there's active active right for octavia right? what else? because polishing and stabilizing would be great
04:29 <sbalukoff> johnsom: On rule limit: Want me to add a bug for that and assign it to myself?
04:29 <openstack> Launchpad bug 1549100 in octavia "L7 - HAproxy fails with more than 53 L7 rules" [High,New] - Assigned to Stephen Balukoff (sbalukoff)
04:29 <sbalukoff> johnsom: HAHA!
04:30 <sbalukoff> blogan: The nice part about polishing and stabilizing is that that mostly consists of finding bugs, fixing bugs, and writing tests.
04:30 <johnsom> blogan Polish, stabilizing, and we should consider HA control plane (i.e. job board)
04:30 *** fnaval has joined #openstack-lbaas
04:30 <sbalukoff> Again, I'm hoping tempest tests land soon, which puts us in a good position to start fleshing out stuff that will ensure things stay polished...
04:30 <blogan> sbalukoff: yeah but also includes stuff like refactoring the whole data model stuff, possibly revisiting the one haproxy process per listener
04:31 <sbalukoff> johnsom: +1
04:31 <sbalukoff> blogan: Yeah, that's a larger discussion we need to have.
04:31 <sbalukoff> Possibly a good one for the summit, or do you not want to wait that long to have it?
04:31 <blogan> summit will be good
04:31 <sbalukoff> (summit is ~2 months away.)
04:32 <blogan> i just happen to be on right now and yall are listening
04:32 <sbalukoff> And I'm on a dopamine high from not having much less to worry about with next Monday's deadline. XD
04:32 <sbalukoff> er.. from having much less...
04:32 <sbalukoff> I'm babbling.
04:33 <blogan> at the summit ill bring up my obligatory "should we move away from taskflow" topic :), just for johnsom
04:33 <johnsom> Ah, it wouldn't be right if you didn't...
04:33 <sbalukoff> johnsom: Gonna re-stack with your REST API timeout patch and let you know how it goes.
04:33 <johnsom> I'll try to bring in the big guns again too
04:33 <johnsom> Ok, thanks
04:34 <blogan> ill be sure to have a sign out front that says "no taskflow cores allowed"
04:34 <johnsom> FYI, after next week, it's March 14th for RC1
04:34 <sbalukoff> So, about 2 weeks to get a bunch of bugs fixed.
04:34 <TrevorV> Ugh... rebasing changes because I had to tip-toe for sbalukoff .... :P
04:35 <TrevorV> Its goin well actually, just taking longer than I wanted ha ha ha ha
04:35 <sbalukoff> TrevorV: Sorry!
04:35 <johnsom> blogan You know that once you spend time with it, you'll love it.
04:35 <johnsom> blogan I bet you will be evangelizing through the halls of the castle....
04:35 <TrevorV> johnsom, no he won't... he doesn't even do that about the things he does like.
04:36 <TrevorV> Currently I mean
04:36 <blogan> johnsom: i've spent enough time with it!
04:37 <blogan> i would be a terrible evangelizer
04:37 <sbalukoff> blogan: If you really want to defeat it, you need to come up with a tangible alternative. :)
04:37 <blogan> haha chuck e cheese brawl, what has this world come to
04:38 <blogan> sbalukoff: its called normal code structure :)
<openstackgerrit> Trevor Vardeman proposed openstack/octavia: WIP - Get Me A Load Balancer Controller
04:43 <TrevorV> Alright, now to rebuild devstack with new changes so I can manually test by tomorrow morning
04:43 <TrevorV> brb, playing CS:GO
04:43 *** TrevorV has quit IRC
04:44 <sbalukoff> I'll BBIAB. Just realized the only thing I've eaten today is a bowl of cereal.
<openstackgerrit> OpenStack Proposal Bot proposed openstack/octavia: Updated from global requirements
04:52 *** piet has joined #openstack-lbaas
04:55 *** neelashah has quit IRC
05:00 *** Purandar has joined #openstack-lbaas
05:05 *** amotoki has joined #openstack-lbaas
05:09 *** minwang2 has joined #openstack-lbaas
05:12 *** manishg_wfh has joined #openstack-lbaas
05:14 <rm_you> woo and requirements updates are back!
05:28 *** minwang2 has quit IRC
05:37 *** manishg_wfh has quit IRC
05:38 *** pcaruana has quit IRC
05:53 *** pcaruana has joined #openstack-lbaas
<openstackgerrit> Merged openstack/octavia: Updated from global requirements
06:00 *** piet has quit IRC
06:00 *** kobis has joined #openstack-lbaas
06:06 *** allan_h has joined #openstack-lbaas
06:07 *** allan_h has quit IRC
06:09 *** kevo has joined #openstack-lbaas
06:14 *** minwang2 has joined #openstack-lbaas
06:14 *** bana_k has joined #openstack-lbaas
06:19 *** numans has joined #openstack-lbaas
*** openstack has joined #openstack-lbaas13:23
*** localloop127 has joined #openstack-lbaas13:31
*** 32NAAD4T7 has quit IRC13:31
*** rtheis has quit IRC13:43
*** rtheis has joined #openstack-lbaas13:43
*** amotoki has joined #openstack-lbaas13:44
*** rtheis has quit IRC13:48
*** rtheis has joined #openstack-lbaas13:51
*** rtheis has quit IRC13:53
*** rtheis has joined #openstack-lbaas13:53
*** rtheis has quit IRC13:57
*** rtheis has joined #openstack-lbaas13:57
*** nmagnezi has quit IRC13:58
*** rtheis has quit IRC13:58
*** rtheis has joined #openstack-lbaas13:59
ihrachysdougwig: around?14:00
*** piet has joined #openstack-lbaas14:04
*** neelashah has joined #openstack-lbaas14:06
*** aryklein has joined #openstack-lbaas14:10
*** nmagnezi has joined #openstack-lbaas14:11
arykleinis there any guide about installing and configuring lbaas v2 (using octavia) from scratch? I don't use Devstack or any of these scripts to deploy openstack.14:12
*** piet has quit IRC14:15
*** piet has joined #openstack-lbaas14:16
*** woodster_ has joined #openstack-lbaas14:20
*** evgenyf has quit IRC14:26
*** Purandar has joined #openstack-lbaas14:30
*** rtheis has quit IRC14:40
*** rtheis has joined #openstack-lbaas14:41
*** diogogmt has quit IRC14:42
*** rtheis has quit IRC14:42
*** paco20151113 has quit IRC14:42
*** rtheis has joined #openstack-lbaas14:43
*** diogogmt has joined #openstack-lbaas14:44
*** armax has quit IRC14:47
*** localloop127 has quit IRC14:49
*** Purandar has quit IRC14:50
*** TrevorV has joined #openstack-lbaas14:56
*** rtheis has quit IRC15:00
*** ducttape_ has joined #openstack-lbaas15:01
*** rtheis has joined #openstack-lbaas15:01
*** armax has joined #openstack-lbaas15:01
*** diogogmt has quit IRC15:01
*** rtheis has quit IRC15:03
*** rtheis has joined #openstack-lbaas15:03
dougwigihrachys: ack15:05
ihrachysdougwig: hey! was writing an email on how octavia manages amphora image updates, but since you are here, would like to run it with you first.15:06
ihrachysdougwig: so the goal is: being able to update amphora image without octavia service reconfiguration/restart15:07
ihrachysdougwig: you know, currently we hardcode the image ID, so a new image requires all that15:07
ihrachysdougwig: my idea is to use glance image tags for that. we would have octavia know the tag used to mark the latest amphora image15:07
ihrachysdougwig: and then we'll talk to glance to extract its ID, then pass it into nova15:08
ihrachysdougwig: now, the complexity here is that we would then couple octavia with glance (thru glanceclient)15:08
ihrachysdougwig: alternative would be having nova do the extraction for us15:08
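The tag-based lookup ihrachys describes here can be sketched in miniature. This is a hypothetical illustration, not Octavia code: the image records are made up, and a real deployment would fetch them through python-glanceclient rather than a plain list.

```python
from datetime import datetime

def pick_latest_tagged_image(images, tag):
    """Return the ID of the newest active image carrying the given tag.

    `images` is a list of dicts shaped like Glance v2 image records
    (sample data below; a real service would get these from
    glanceclient's images.list()).
    """
    candidates = [
        img for img in images
        if tag in img.get('tags', []) and img.get('status') == 'active'
    ]
    if not candidates:
        raise LookupError('no active image tagged %r' % tag)
    # Newest upload wins -- this is what lets an operator rotate the
    # amphora image without touching octavia.conf.
    candidates.sort(
        key=lambda img: datetime.strptime(img['created_at'],
                                          '%Y-%m-%dT%H:%M:%SZ'))
    return candidates[-1]['id']

# Two tagged amphora images; the more recent one is selected.
images = [
    {'id': 'aaa', 'tags': ['amphora'], 'status': 'active',
     'created_at': '2016-01-10T00:00:00Z'},
    {'id': 'bbb', 'tags': ['amphora'], 'status': 'active',
     'created_at': '2016-02-20T00:00:00Z'},
    {'id': 'ccc', 'tags': [], 'status': 'active',
     'created_at': '2016-02-23T00:00:00Z'},
]
print(pick_latest_tagged_image(images, 'amphora'))
```

The open question in the thread is exactly the one the sort papers over: Glance does not guarantee the tag is unique, so "newest wins" is a policy the caller has to choose.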
dougwigi think having it use glance would be a good idea.15:09
ihrachysdougwig: but I talked it thru with sdague and he is not convinced it's a good idea, at least until glance provides some unique labeling to us15:09
dougwigi can only speak for myself, but that seems a logical use of glance for a nova-enabled service.15:09
dougwigwe can't do our own by doing octavia-{uuid} or somesuch?15:09
ihrachysdougwig: actually, similar needs are there in other nova-enabled services like trove15:10
ihrachysdougwig: and that's why it could make sense to offload it to nova.15:10
dougwigi agree that it's a logical fit.  it is an image management service.  and we have images.  :)15:10
ihrachysbut since glance API is currently not very resilient, it may take some time and effort working with glance folks15:10
dougwigisn't glance just image management, removed from nova in the first place?15:10
*** manishg_wfh has joined #openstack-lbaas15:10
ihrachysdougwig: right. but for nova that would be a logical step to make sense of tag names, as it currently does for image names15:11
dougwigcan't you already set meta-data on images? or is that purely for hypervisor matching?15:12
ihrachysdougwig: the concern that sdague expressed is not about nova extracting the ID, but about the fact that glance does not guarantee uniqueness of the tag, so then nova would need to handle that somehow15:12
ihrachysdougwig: I think there are two things in glance - one is metadata tags, and another just generic image tags15:12
ihrachysthe former is used by hypervisor15:12
ihrachysthe latter seems like a pure API thing to mark images with random strings15:13
ihrachysnow, if glance would provide unique labeling, nova would be able to boot a server using 'latest RHEL base image'15:13
ihrachysinstead of 'rhel-7.0' name or even worse, its ID15:13
nmagneziihrachys, another downside of using the image id by Octavia is that you need to restart services after you change that id in octavia.conf15:13
ihrachysnmagnezi: right.15:15
ihrachysthat's a more important one I think15:15
*** numans has quit IRC15:16
ihrachysif that would be just a matter of extracting ID for the service, a script could do it for the operator15:16
ihrachysdougwig: so back to my original question - is glance coupling something that does not make you scared? :) I guess you replied 'no' before, but still would enjoy clarification.15:17
nmagnezixgerman, ping re: Today's octavia meeting15:18
nmagnezixgerman, hey :) i might not make it for today's meeting15:20
xgermanok, I will channel my inner nmagnezi15:20
nmagnezixgerman, could you please add some bug ajo submitted?15:20
dougwigihrachys: no, neither glance nor nova coupling scares me, as its a nova based service.15:20
nmagnezixgerman, mmm, what? :)15:21
xgermanwas joking… will add that bug… you have the #15:21
nmagnezixgerman, :-)15:21
openstackLaunchpad bug 1549297 in octavia "octavia-health-manager requires a host-wise plugged interface to the lb-mgmt-net" [Undecided,New]15:21
ihrachysdougwig: ok one more procedural thing. I was looking at getting something baked for Mitaka. is it still realistic?15:21
ihrachysdougwig: assuming I get the code in a week.15:21
dougwigihrachys: code complete is the 29th, so if you have it in gerrit by then, it's possible, though tight. any chance you can introduce it to folks at the octavia meeting in 4 hours?  or have some kind of wip up that i can point folks at during that meeting?15:22
ihrachysdougwig: I won't have wip in 4h, I only made preliminary code investigation.15:23
xgermanihrachys I think it;s a good idea..15:23
xgermanbut as dougwig points out things are tight15:23
ihrachysdougwig: assuming that is well isolated (requiring a new option to be set to trigger the new code), I believe that could be fine as an exception? but I will try to get something by the 29th15:24
ihrachyswe need the option anyway, for the image tag15:25
*** rtheis has quit IRC15:25
*** rtheis has joined #openstack-lbaas15:26
*** nmagnezi_ has joined #openstack-lbaas15:26
*** rtheis has quit IRC15:26
*** rtheis has joined #openstack-lbaas15:27
dougwigihrachys: i'll bring it up at the meeting, but i'll support it.15:27
xgerman+1 for support15:28
TrevorVHey dougwig I just did a single create with everything except a health monitor.  (not sure if HM is bugged but it wasn't accepting rise/fall threshold, gonna look into that)15:28
TrevorVOn Octavia15:28
TrevorVSorry, clarification15:28
ihrachysdougwig: xgerman: cool15:28
ihrachysnmagnezi: are you going to the meeting?15:28
xgermancascading delete needs to be rebased in L7… would be good if some of the dependent stuff merges15:28
TrevorVxgerman l7 all merged last night15:29
xgermanyep, and that needs to be deleted as well15:29
*** nmagnezi has quit IRC15:29
*** nmagnezi_ is now known as nmagnezi15:29
dougwigTrevorV: it's all about testing the failure cases. what's our plan for that?15:30
TrevorVHonestly, not entirely sure.  Should we talk about some of the failure cases we're concerned about?15:31
nmagneziihrachys, i sometimes do, but today i'm not sure I'll be at home at that time (10pm my local time)15:33
ihrachysok. I will see what I can do, will try to join but no guarantees.15:33
*** rtheis has quit IRC15:34
*** rtheis has joined #openstack-lbaas15:35
TrevorVAlright, got me a healthmonitor in the mix.  Just had the body configured wrong for the request.15:38
*** manishg_wfh has quit IRC15:54
*** jschwarz has quit IRC16:00
*** Purandar has joined #openstack-lbaas16:03
TrevorVxgerman did sbalukoff also do changes in neutron lbaas for L7?16:11
TrevorVMore importantly, I guess, I can look myself, but was curious if they also merged16:12
johnsomTrevorV The L7 patch for lbaas is up for review, but has some bugs.16:13
xgermanI think he did — not sure if it merged16:13
TrevorVis that one it?16:13
sbalukoffIt's not merged yet.16:13
johnsomYeah, that is the one16:13
sbalukoffWhat on earth am I doing up so early?16:13
TrevorVsbalukoff the real question is, did you sleep?16:13
johnsomI was just wondering the same thing....16:13
sbalukoffTrevorV: I did, actually! Very well, in fact.16:13
johnsom(thinking of both of us having a long night)16:14
*** diogogmt has joined #openstack-lbaas16:14
sbalukoffI think it helped that my internet cut out last night in mid-conversation with rm_work.16:14
sbalukoffSo Comcast decided I was done for the night. :P16:14
*** Purandar has quit IRC16:14
TrevorVI'm just happy I've now done a manual test of the single create16:14
TrevorVWith a default pool on the listener, and a redirect pool also on the l7 policy.16:14
TrevorVWorked just fine16:14
sbalukoffTrevorV: Is your patch out of WIP now?16:14
TrevorVsbalukoff no, because unit tests16:15
TrevorVI don't have many16:15
TrevorVIn fact, I only added one.16:15
sbalukoffBut you're close.16:15
sbalukoffAnd that's great!16:15
TrevorVI honestly wasn't sure what else to actually test...16:15
sbalukoffStill a few days to get it in before the deadline.16:15
TrevorVOh, shit, I haven't tried SNI stuff....16:15
sbalukoffTrevorV: You can look at sonar coverage reports for code you've added.16:15
TrevorVIs that what tells me what I've missed?  Where do I look at that?16:16
sbalukoffBeyond that, if you can think of permutations of the single create that might hit edge cases based on the code you wrote, those are good  candidates for tests.16:16
sbalukoffSo, that's the 'HP Octavia Sonar CI Check' that gets generated shortly after you upload a new patchset...16:17
TrevorVOoooh the "16:17
sbalukoffClick the link that gets generated and you're at the web root of the report you want to look at.16:17
sbalukoffNavigate to 'coverage' and then the files you've modified, and look for lines showing they weren't tested.16:17
TrevorVRemember, "HP Octavia Sonar CI Check - hosted by Rackspace" :P16:17
sbalukoffYeah, that.16:18
sbalukoffName is off now, but nobody seems bothered enough to fix it yet.16:18
sbalukoffI've found that sonar is a very good tool for this.16:18
sbalukoffAlso, check out the 'issues' report for your patch.16:18
sbalukoffThe tuning on that isn't always great: We have naming conventions that sonar doesn't like.16:19
sbalukoffBut it can often point out some boneheadedness in your code.16:19
TrevorVHey now, I ain't bone-headed.16:19
sbalukoffYou don't have bones in your head?16:19
sbalukoffI'm so sorry..16:20
TrevorVNah, see, I have bones in my head, but my head is not bones.16:21
TrevorVThat's the point.16:21
TrevorVEnglish is such a shitty language16:21
TrevorVha ha ha16:21
sbalukoffYou have bones in the point of your head?16:22
sbalukoffSo you're saying you're pointy-headed?16:22
xgermanso a policy contains rules?16:23
sbalukoffxgerman: Yes.16:23
xgermancan rules be shared between policies?16:23
johnsomRules are AND'd, policies are OR'd16:23
xgermanso I delete all the rules, then the policy?16:23
xgermanfor cascading delete...16:24
xgermank, gotcha16:24
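johnsom's one-liner ("Rules are AND'd, policies are OR'd") is the whole evaluation model, and it fits in a few lines. A hedged sketch, not the actual HAProxy template logic: policy shapes, rule predicates, and action strings here are illustrative.

```python
def first_matching_policy(policies, request):
    """Evaluate L7 policies against a request.

    Policies are tried in position order; a policy matches only if ALL
    of its rules match (rules are AND'd), and the first matching policy
    wins (so across policies the effect is an OR).
    """
    for policy in sorted(policies, key=lambda p: p['position']):
        if all(rule(request) for rule in policy['rules']):
            return policy['action']
    return None  # no policy matched; the listener's default pool applies

# Illustrative policies: rules are plain predicates on a request dict.
policies = [
    {'position': 1,
     'rules': [lambda r: r['path'].startswith('/api'),
               lambda r: r['host'] == 'example.com'],
     'action': 'REDIRECT_TO_POOL:api-pool'},
    {'position': 2,
     'rules': [lambda r: r['path'].startswith('/')],
     'action': 'REDIRECT_TO_URL:https://example.com/'},
]

print(first_matching_policy(policies,
                            {'path': '/api/v1', 'host': 'example.com'}))
```

Under this model xgerman's cascading-delete question also answers itself: rules belong to exactly one policy, so deleting the policy takes its rules with it.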
TrevorVsbalukoff sooo sonar says I'm good... Each file I've touched has places that aren't tested, but they're not where my code touched or was added.16:26
TrevorVLike error trapping in delete/update on controllers and such16:26
sbalukoffOk, well... if you really can't think of more tests to add, you can get others here to review your code, and we'll probably be able to point some out to you.16:27
TrevorVOops, found one.16:28
TrevorVGotta update repository tests16:28
TrevorVsbalukoff tsk tsk looks like you missed some coverage as well my friend ;)16:29
*** kobis has quit IRC16:29
sbalukoffYep, I did.16:29
*** ajmiller_ has joined #openstack-lbaas16:29
TrevorV:P all good, thanks for drawing my attention to this job and how I should look at it.16:29
TrevorVI'll use this to tinker around.16:29
sbalukoffjohnsom opened bugs for me to back-fill that. I'll be doing that after the Feb. 29 deadline.16:29
*** ajmiller has quit IRC16:29
TrevorVI did think of a few cases that I should check for functional tests at least.16:30
TrevorVawesome.  We made some good progress these past couple weeks16:30
xgermansbalukoff, you have16:30
xgermanso if I blow the policy away it should delete the stuff in the DB?16:31
sbalukoffxgerman: Oh yes-- I modeled that after the way members get deleted when you blow away a pool. So yes, that should work.16:31
*** ihrachys has quit IRC16:33
*** nmagnezi has quit IRC16:34
johnsomTrevorV Is your patch ready for review?  I will put it on the "priority patch review" list on the agenda...16:37
*** fawadkhaliq has joined #openstack-lbaas16:38
*** ducttape_ has quit IRC16:42
sbalukoffAren't neutron-lbaas patch sets supposed to be spammed to this channel?16:42
johnsomYeah, but I think it has been broken recently16:43
*** ducttape_ has joined #openstack-lbaas16:43
sbalukoffAnyway, for anyone looking, I could use a +1 or two on this (trying to fix PostgreSQL gate):
fawadkhaliqhi octavia folks, question: does Octavia support Barbican in implementation or is it still in design phase?16:44
johnsomfawadkhaliq It is in Octavia.16:45
*** ducttape_ has quit IRC16:45
*** ducttape_ has joined #openstack-lbaas16:45
johnsomIt went in for Liberty16:45
fawadkhaliqjohnsom: thanks, is there a document I can look at to see the exact functionality use-case Octavia has with Barbican16:45
fawadkhaliqjohnsom: I understand there are Nova instances running HAproxy etc and after that I am lost :-)16:46
johnsomYeah, just a second16:46
fawadkhaliqjohnsom: thanks16:48
johnsomSome of that might be a bit dated as it was done for liberty and barbican has evolved over time16:49
fawadkhaliqjohnsom: xgerman another question, might be a dumb one.. how does Octavia ensure that the communication between Octavia modules running the management plane and modules running inside the Nova instances is secure?16:49
xgermanwe have a two way SSL connection16:49
johnsomWe use two-way-ssl with certs generated unique for each amphora (service vm in this case)16:49
xgermanamphora runs an HTTPS server and we have certs on both sides16:49
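The two-way SSL arrangement xgerman and johnsom describe (controller verifies each amphora's unique cert, amphora verifies the controller's client cert) can be sketched from the controller side with the stdlib `ssl` module. The file paths are hypothetical stand-ins; a real deployment takes them from octavia.conf, and this is an illustration of the handshake settings rather than Octavia's actual driver code.

```python
import ssl

def amphora_client_context(ca_file=None, cert_file=None, key_file=None):
    """Client-side TLS context for talking to an amphora's HTTPS agent.

    Both peers present certificates: we verify the amphora against the
    CA that signed its per-amphora cert, and we load our own cert/key
    so the amphora can verify us in turn.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.check_hostname = False            # amphorae are addressed by IP
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject unauthenticated amphorae
    if cert_file:
        # e.g. '/etc/octavia/certs/client.pem' -- hypothetical path
        ctx.load_cert_chain(cert_file, key_file)
    return ctx

ctx = amphora_client_context()  # no files here: just shows the settings
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

The per-amphora cert is what makes this workable at scale: compromising one VM's key does not let an attacker impersonate any other amphora.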
sbalukoffThe use case there is still essentially correct.16:49
fawadkhaliqxgerman: johnsom, ah I see. Thanks a lot. In case you are wondering, I am from the Kuryr team and we have a similar use-case and are trying to see how security can be introduced. I remembered Octavia does something similar. Thanks!16:51
*** piet has quit IRC16:51
johnsomSure, NP.16:52
johnsomSo are you going to enable neutron port hot plugging for containers?  grin16:52
*** piet has joined #openstack-lbaas16:52
sbalukoffIt's a bit freaky to consider that people are looking to potentially emulate some of what we do. ;)16:52
sbalukoffYeah, we would *love* to have that!16:53
*** localloop127 has joined #openstack-lbaas16:53
*** madhu_ak|home is now known as madhu_ak16:54
*** localloo1 has joined #openstack-lbaas16:54
dougwigcan i get a look at this gate_hook change, so we can enable a namespace driver job?
fawadkhaliqjohnsom: on its way, you will see it soon ;16:54
sbalukofffawadkhaliq: In time for Mitaka? :D16:55
johnsomAwesome.  We would really like to have that.  We had to make some big changes to our workflow to work around that issue16:55
fawadkhaliqsbalukoff: Hopefully the final design + a POC :D16:56
johnsomdougwig I already +2'd that one.16:56
johnsomfawadkhaliq Cool, do you have a link we can read the design?16:56
fawadkhaliqjohnsom: absolutely, here you go:
fawadkhaliqjohnsom: feel free to -1 ;-)16:57
fawadkhaliqyou're welcome and thanks.16:57
TrevorVHey johnsom I have some testing to write still for the most part, but sonar didn't complain too much when I looked.16:57
*** localloop127 has quit IRC16:58
*** amotoki has quit IRC17:00
xgermansbalukoff can you have a look at
xgermanthanks, that’s a real simple change for some low priority bug17:02
*** manishg_wfh has joined #openstack-lbaas17:04
*** fnaval has quit IRC17:08
*** fnaval has joined #openstack-lbaas17:09
sbalukoffxgerman: Looks good. will wait for jenkins to finish before I +2, but I think the code you have there is correct.17:12
sbalukoff(Please feel free to poke me again if it finishes and I don't notice for a bit.)17:13
*** fnaval_ has joined #openstack-lbaas17:15
*** fnaval has quit IRC17:15
rm_workha sbalukoff i wondered where you went :P17:15
rm_worki failed and stayed up till 7am again <_<17:16
*** Purandar has joined #openstack-lbaas17:17
*** kobis has joined #openstack-lbaas17:17
*** manishg_wfh is now known as manishg17:23
*** ihrachys has joined #openstack-lbaas17:26
*** ihrachys has quit IRC17:29
*** rtheis has quit IRC17:32
*** prabampm has quit IRC17:32
*** rtheis has joined #openstack-lbaas17:33
openstackgerritMerged openstack/neutron-lbaas: Updated from global requirements
madhu_akfnaval_ I tested your patch, its working using tempest plugin.17:35
madhu_akfnaval_, I will push up a patch which will just correct the plugin part in the same patch17:35
openstackgerritAdam Harwell proposed openstack/octavia: Remove old SSH specific config options from sample config
*** kobis has quit IRC17:38
johnsomsbalukoff This is marked WIP in commit message, but not gerrit:
rm_worki JUST commented17:39
rm_workto that effect17:39
*** rtheis has quit IRC17:39
rm_workI was avoiding it since it said WIP but you +2'd so i was curious17:39
sbalukoffjohnsom: I think I want to add some functional checks to that. Will mark it WIP in gerrit.17:39
*** localloo1 has quit IRC17:39
johnsomOk.  It looks good so far.  I think the scenario fail may be a bad test check.17:40
rm_workrelevant to you for comment ^^17:40
rm_workand i think is good now17:40
sbalukoffI also want to dig through the scenario test logs to make sure I see the delay after Member create that I'm expecting.17:40
*** fnaval_ has quit IRC17:40
sbalukoffGonna do that now. (Was waiting for it to complete last night when my home internet comcasted out.)17:41
sbalukoffOk! That was quick: I am seeing the expected delay!17:42
sbalukoffSo, good! I just need to add a functional test to that and I think I'm good.17:42
sbalukoffOr update the ones in place.17:42
sbalukoffLet me see how fast I can do that.17:42
johnsomI am thinking the test needs to be updated to check for interrupted returns?
sbalukoffThe tempest test?17:45
johnsomThat is what I am thinking.  I'm poking around that code now17:45
sbalukoffOk. I think you're right, though obviously I've not looked closely into it yet. From what I can tell, load balancer status updates are working as they should now with my patch. Just need to add a couple tests to make sure it stays that way after this is committed.17:47
sbalukoff(Ever notice that unit tests are pretty useless for troubleshooting code you're writing at the moment... but they can help prevent regressions?)17:47
*** yamamoto has quit IRC17:49
*** rtheis has joined #openstack-lbaas17:50
madhu_akJust wondering, Is there any reason that we need to use six.moves.urllib module than urllib package?17:51
*** fnaval has joined #openstack-lbaas17:54
*** rtheis has quit IRC17:56
*** rtheis has joined #openstack-lbaas17:56
sbalukoffNo idea. I suggest recursive grep. :)17:57
fnavalmadhu_ak: awesome thank you sir17:58
johnsommadhu_ak: The urllib, urllib2 and urlparse modules of Python 2 were reorganized17:59
johnsominto a new urllib namespace on Python 3. Replace urllib, urllib2 and17:59
johnsomurlparse imports with six.moves.urllib to make the modified code17:59
johnsomcompatible with Python 2 and Python 3.17:59
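The conditional import that `six.moves.urllib` makes unnecessary looks like this; without six, py2/py3-portable code has to spell out both layouts itself. (The try/except below is stdlib-only; with six installed, `from six.moves.urllib import parse, error` replaces the whole thing.)

```python
try:
    # Python 3 layout: everything lives under the urllib package.
    from urllib.parse import urljoin, urlencode
    from urllib.error import HTTPError
except ImportError:
    # Python 2 layout: the same names are scattered across three modules.
    from urlparse import urljoin
    from urllib import urlencode
    from urllib2 import HTTPError

# Hypothetical endpoint, just to exercise the portable names.
url = urljoin('http://203.0.113.5:9443/', '0.5/listeners')
print('%s?%s' % (url, urlencode({'limit': 10})))
```

This is also why madhu_ak's `except error.HTTPError` question matters: on six, `error.HTTPError` is the same class as py2's `urllib2.HTTPError`, so a miss there usually means the server raised something else entirely.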
*** rtheis has quit IRC18:01
madhu_akjohnsom, I see. However, in regards to the error mentioned in, I thought we can catch the exception and pass it in the event of the same error in future18:01
*** evgenyf has joined #openstack-lbaas18:03
madhu_akjohnsom, but still I thought it should catch the above exception by error.HTTPError. Not sure why that exception didnt catch it18:03
madhu_aksomething similar to like this:
johnsommadhu_ak That is the bug I am working on right now18:04
madhu_akjohnsom, oh okay. good to know18:04
*** kevo has joined #openstack-lbaas18:05
*** ajmiller_ is now known as ajmiller18:11
*** Purandar has quit IRC18:23
*** neelashah has quit IRC18:33
openstackgerritGerman Eichberger proposed openstack/neutron-lbaas: Adds Cascade Delete for LoadBalancers to Octavia Driver
openstackgerritGerman Eichberger proposed openstack/neutron-lbaas: Adds Cascade Delete for LoadBalancers to Octavia Driver
openstackgerritmin wang proposed openstack/octavia: Implements: blueprint anti-affinity server group
*** minwang2 has joined #openstack-lbaas18:42
minwang2sbalukoff can you please review anti-affinity patch—
*** yamamoto has joined #openstack-lbaas18:50
*** Aish has joined #openstack-lbaas18:51
openstackgerritMadhusudhan Kandadai proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test
sbalukoffminwang2: Sure! Is later this afternoon OK?18:52
*** rtheis has joined #openstack-lbaas18:53
sbalukoff(Finishing unit tests on this critical patch and I have some internal stuff to deal with in the early afternoon.)18:53
minwang2sbalukoff, sure, i just rememberd that you mentioned in the comment that it is better to get this merge after L7 is merged first, so i think now is a good time to review it18:53
*** yamamoto has quit IRC18:56
*** aryklein has quit IRC19:02
*** evgenyf has quit IRC19:03
sbalukoffminwang2: I agree!19:04
*** neelashah has joined #openstack-lbaas19:04
minwang2cool, thank you sbalukoff19:04
openstackgerritBharath M proposed openstack/neutron-lbaas: Adds Cascade Delete for LoadBalancers to Octavia Driver
TrevorVUgh.... I accidentally fixed some of my API bugs in the CW review...19:09
*** neelashah has quit IRC19:10
*** Purandar has joined #openstack-lbaas19:12
*** neelashah has joined #openstack-lbaas19:14
openstackgerritMadhusudhan Kandadai proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test
TrevorVjohnsom I'm writing some tests up right now, I think I'll knock off the WIP shortly though.19:21
TrevorVLike, possibly before the meeting, or during19:21
*** evgenyf has joined #openstack-lbaas19:25
openstackgerritTrevor Vardeman proposed openstack/octavia: WIP - Get me a Load Balancer API
TrevorVpremature... forgot to fill in the tests... ha ha19:26
*** bana_k has joined #openstack-lbaas19:29
*** ducttape_ has quit IRC19:38
*** fawadkhaliq has quit IRC19:40
*** manishg has quit IRC19:42
*** rtheis has quit IRC19:42
*** rtheis has joined #openstack-lbaas19:43
*** rtheis has quit IRC19:47
*** ducttape_ has joined #openstack-lbaas19:49
*** neelashah1 has joined #openstack-lbaas19:51
*** rtheis has joined #openstack-lbaas19:51
*** neelashah has quit IRC19:53
*** manishg has joined #openstack-lbaas19:55
*** rtheis has quit IRC19:56
*** rtheis has joined #openstack-lbaas19:56
*** neelashah has joined #openstack-lbaas19:57
*** neelashah1 has quit IRC19:58
johnsomOctavia meeting starting soon on #openstack-meeting-alt19:59
*** jschwarz has joined #openstack-lbaas19:59
*** longstaff has joined #openstack-lbaas20:00
*** neelashah1 has joined #openstack-lbaas20:00
*** rtheis has quit IRC20:01
*** neelashah has quit IRC20:01
*** longstaff has quit IRC20:03
*** bana_k has quit IRC20:09
*** bana_k has joined #openstack-lbaas20:10
*** bana_k has quit IRC20:10
*** manishg has quit IRC20:17
*** manishg has joined #openstack-lbaas20:25
*** Bjoern_ has joined #openstack-lbaas20:27
*** Purandar has quit IRC20:44
*** ihrachys has joined #openstack-lbaas20:47
*** localloo1 has joined #openstack-lbaas20:47
*** jschwarz has quit IRC20:49
*** localloop127 has joined #openstack-lbaas20:50
*** localloo1 has quit IRC20:53
ihrachyssbalukoff: johnsom: reading logs of the octavia meeting on adopting glance images for easier image update.21:00
ihrachysI guess the current stand of the team is we don't want that21:00
*** piet has quit IRC21:00
ihrachysand instead we want octavia to react to a signal to reload the conf option21:01
sbalukoffihrachys: No, just that "it's probably premature right now."21:01
johnsomAh, sorry we missed you at the meeting21:01
*** crc32 has joined #openstack-lbaas21:01
sbalukoffA signal reload would be useful to us in other ways as well.21:01
ihrachysjohnsom: np, that's my fault, I counted timezone incorrectly21:01
johnsomWe want the signal for other reasons as well, so that will probably move forward.21:01
markvanblogan: would like to know a bit more about that issue you refd in meeting, with the wsgi and pools.  Is this a bug or just how the namspaces collide somehow?21:01
sbalukoffihrachys: If you've got other reasons why we should do it, we're all ears, eh!21:02
johnsomI don't think we are opposed to the glance tagging, if it is stable21:02
ihrachyssbalukoff: I need to say that orchestration thru a signal is probably not as easy as via glance tags21:02
bloganmarkvan: for the agent or the pools?21:02
sbalukoffihrachys: That may be, eh. :/21:02
johnsomihrachys Agreed.  It will be some work.21:03
bloganihrachys, sbalukoff, johnsom: if we did glance tags, i'd just put that in the compute driver, and not make an "image" interface21:03
markvanblogan: the agent side of this21:03
sbalukoffblogan: That sounds reasonable.21:03
ihrachysif you go glance way, it's just a matter of some API admin calls; if it's signal, it would be 1) glance API call 2) puppet 3) signal propagation into all nodes.21:03
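The signal-based alternative ihrachys is weighing can be sketched quickly. This is a toy, POSIX-only illustration with a made-up option name and a stubbed config reader; a real service would re-read octavia.conf (e.g. via oslo.config) inside the handler.

```python
import os
import signal

CONF = {'amp_image_id': 'old-uuid'}

def read_conf():
    """Stand-in for re-parsing octavia.conf; pretend the file changed."""
    return 'new-uuid'

def _reload_conf(signum, frame):
    # On SIGHUP, pick up the new amphora image ID without a restart.
    CONF['amp_image_id'] = read_conf()

signal.signal(signal.SIGHUP, _reload_conf)
os.kill(os.getpid(), signal.SIGHUP)  # what the operator's tooling would send
print(CONF['amp_image_id'])
```

The handler itself is trivial; ihrachys's point stands that the operational cost is in steps 2 and 3 -- getting the new config file and the signal onto every node.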
openstackgerritFranklin Naval proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test
johnsomIdeally yes, it would be through nova like our image id is today21:03
ihrachysblogan: yeah, it would be just a matter of some glanceclient code inside compute driver. that was my plan.21:04
*** ducttape_ has quit IRC21:04
sbalukoffIs anyone using nova without glance?21:04
*** ducttape_ has joined #openstack-lbaas21:04
bloganmarkvan: it's been a long time since I've tested it out, so I don't remember specifics, and it's possible this may not be a problem now, but basically if I created a v2 load balancer, the v1 agent would attempt to pick that up and obviously fail21:05
*** evgenyf has quit IRC21:05
sbalukoffI'm not familiar enough with the image-storage game in OpenStack.21:05
johnsomihrachys Hmmm, I think I would like to see more details before I would be a go on that21:05
sbalukoffjohnsom: +121:05
johnsomWe don't import glance at all right now21:05
markvanblogan: ah, that helps.   yeah, looks like it's time to try some scenarios out. thx.21:05
ihrachyssbalukoff: johnsom: blogan: let me also clarify that I look at it ideally in Mitaka timeframe21:05
bloganmarkvan: np, i'm 99% sure the pools colliding in the wsgi framework is still a problem21:06
johnsomihrachys Why don't you put in an RFE and/or spec?21:06
ihrachysjohnsom: absolutely. I can give some description in devref and try to pull code for that in next days21:06
bloganmarkvan: the agent i'm less sure21:06
*** rtheis has joined #openstack-lbaas21:06
sbalukoffihrachys: Really? You've got less than a week on that...21:06
ihrachysjohnsom: I am new to octavia, I am fine to write a spec or whatnot, as long as that's how it works here21:06
markvanblogan: is that wsgi a code bug?  or just how it works?21:06
ihrachyssbalukoff: yes. though I don't expect the code to be invasive or huge21:06
sbalukoffihrachys: Well, get started then, eh! ;)21:07
neelashah1blogan: why would there be collision if v1 and v2 are two different nodes?21:07
ihrachyssbalukoff: I would target ~100 lines including messing with keystone catalog21:07
bloganihrachys, johnsom, sbalukoff: as long as we continue to allow the old way to work too, of storing the image_id in the config, this sounds fine to me21:07
sbalukoff(with the spec)21:07
ihrachyssbalukoff: that's my plan for the week :)21:07
johnsomblogan +121:07
bloganmarkvan: just how it works really, "fixing" it would almost break all attribute extensions21:08
sbalukoffWell... if we're going to do it, I want us to do it *right* so we don't have to support broken interfaces, eh.21:08
ihrachysblogan: that would be the default, yes, and the new code would be well isolated by a new option (to store the tag)21:08
sbalukoffSo please, get the spec right!21:08
bloganneelashah1: for the pools you mean?21:08
johnsomihrachys Ok, that sounds much more possible for mitaka21:08
ihrachyssbalukoff: ok, I will start with the spec tomorrow morning, and will proceed with the code afterwards.21:08
johnsomSounds good21:08
neelashah1blogan yes21:09
sbalukoffihrachys: Please ping us when the spec is ready, eh!21:09
bloganneelashah1: so you wouldn't be running v1 and v2 on the same api node?21:09
dougwigi'd suggest not waiting on spec approval for code, though.21:09
ihrachyssbalukoff: I will try to keep you informed.21:09
neelashah1blogan : correct, because v1 is on kilo and v2 will be M so they are different odes21:09
ihrachysdougwig: obviously, I will code right away21:09
neelashah1nodes blogan21:09
openstackgerritMadhusudhan Kandadai proposed openstack/octavia: Octavia: Basic LoadBalancer Scenario Test
sbalukoffJust realize that spec might need changes which would render some of the code needing changing too.21:09
ihrachysdougwig: and you then decide whether it's good21:09
fnavalthanks madhu_ak21:10
sbalukoffihrachys: Cool beans. :)21:10
bloganneelashah1: oh well if they're not being run by the same process then thats fine, i thought this was all about having v1 and v2 running inside the same neutron-server process21:10
ihrachyssbalukoff: absolutely. I am ready to be turned back, I understand I am really late in the game.21:10
*** pcaruana has quit IRC21:10
madhu_akno prob fnaval21:10
sbalukoffihrachys: It sounds like a useful feature, eh.21:10
bloganneelashah1: separate api nodes for v1 and v2, then i think the only other problem you might run into would be the possible agent issue mentioned before21:10
dougwigihrachys: octavia specs are the octavia cores.21:11
dougwigseveral have already agreed in theory to the gist of things here.21:11
ihrachysthanks folks for considering it at all. I will go have some sleep now. will keep you updated. cu.21:14
bloganihrachys: g'night21:14
bloganihrachys: or g'day21:14
*** ihrachys has quit IRC21:17
neelashah1blogan : ok, thanks for your insights and help21:22
openstackgerritStephen Balukoff proposed openstack/octavia: Fix LB/Listener status updates for HM/Member
sbalukoffrm_work and johnsom: ^^^ should be no-longer WIP.21:23
sbalukoffNote that I did some significant code changes in the health monitor and member API stuff: The previous code wrongly made the assumption that pools would have only one listener. Don't know how I missed that in the shared-pools patch. :P21:24
sbalukoff(I have yet to look at the sonar coverage report on those code changes obviously-- but if I'm not covering something in there, I want to do one more revision before it's merged.)21:25
sbalukoffOk! I need to go AFK for about 2 hours real quick here.  Will be back in a couple!21:25
*** localloop127 has quit IRC21:26
TrevorVjohnsom found a bug...21:29
TrevorVThat I can't seem to fix.21:29
TrevorVI added sni stuffs, because I had forgotten it.21:29
TrevorVSometime between storing the 2 tls_container_ids, and it using the "to_data_model" method to return, the list becomes 2 of the same SNI Container, and the other unique one sent in is lost... except still accessible in the DB.21:30
sbalukoffOk, I'm still here for a little bit. Coverage report on the above patch looks pretty good... I think I'm still missing a duplicate health monitor create coverage test, though...21:30
TrevorVsbalukoff it has to do with this graphy thingy21:30
sbalukoffTrevorV: Oh boy.21:30
sbalukoffTrevorV: I don't have time to troubleshoot this with you right now. I'm sorry.21:31
sbalukoff(At this moment, I mean.)21:31
TrevorVNah you're good :)21:31
TrevorVI was just saying I found a new one that I have to look into21:31
johnsomTrevorV That is neat!21:32
*** localloop127 has joined #openstack-lbaas21:34
openstackgerritStephen Balukoff proposed openstack/octavia: Fix LB/Listener status updates for HM/Member
sbalukoffOk, I think this one is ready for merge (if jenkins agrees) ^^^21:39
sbalukoffAnd now I've really got to run. BBIAB.21:39
*** _ducttape_ has joined #openstack-lbaas21:50
*** ducttape_ has quit IRC21:54
*** localloop127 has quit IRC21:58
TrevorVWellp, I've confirmed that "to_data_model" graph method is what "duplicates" my sni_containers.... the problem now is "why" or at least "how"22:03
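A hypothetical minimal reproduction of the duplication TrevorV describes (names and structure are illustrative, not Octavia's actual `to_data_model` code): if the seen-object cache used while walking the ORM graph keys entries by class name alone, every SNI row collapses into one cache slot, so the second container is silently replaced by the first.

```python
# Simplified stand-in for an ORM SNI row; the real model has more fields.
class SNI:
    def __init__(self, listener_id, tls_container_id):
        self.listener_id = listener_id
        self.tls_container_id = tls_container_id


def to_data_model(objs, key_fn):
    """Convert objects while deduplicating by key_fn (illustrative only)."""
    seen = {}
    result = []
    for obj in objs:
        key = key_fn(obj)
        if key not in seen:
            seen[key] = obj       # first object claims the cache slot
        result.append(seen[key])  # later objects with the same key are
                                  # replaced by the cached one
    return result


a = SNI("listener-1", "container-a")
b = SNI("listener-1", "container-b")

# Keying on class name only: both rows share one key, so the converted
# list holds the same container twice and "container-b" is lost.
broken = to_data_model([a, b], lambda o: o.__class__.__name__)
assert broken[0] is broken[1]

# Including the unique tls_container_id keeps the objects distinct.
fixed = to_data_model(
    [a, b], lambda o: o.__class__.__name__ + o.tls_container_id)
assert fixed[0] is not fixed[1]
```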
*** Purandar has joined #openstack-lbaas22:11
*** _ducttape_ has quit IRC22:14
*** ducttape_ has joined #openstack-lbaas22:15
*** neelashah1 has quit IRC22:25
*** _ducttape_ has joined #openstack-lbaas22:25
*** ducttape_ has quit IRC22:28
openstackgerritMichael Johnson proposed openstack/neutron-lbaas: fix session persistence gate issue
*** rtheis has quit IRC22:42
openstackgerritMichael Johnson proposed openstack/neutron-lbaas: fix session persistence gate issue
*** rtheis has joined #openstack-lbaas22:45
*** yamamoto_ has joined #openstack-lbaas23:08
*** _ducttape_ has quit IRC23:15
*** rtheis has quit IRC23:16
*** rtheis has joined #openstack-lbaas23:16
*** rtheis has quit IRC23:17
TrevorVjohnsom I have a bug fix about to go up in review, high priority23:17
*** rtheis has joined #openstack-lbaas23:17
*** rtheis has quit IRC23:17
TrevorVIts pretty simple, I have a test written for it too, mind taking a look when I drop the review?23:17
johnsomSure.  I have been staring at the scenario tests too long.23:20
TrevorVKk :D23:20
TrevorVJust a second on that one23:20
openstackgerritTrevor Vardeman proposed openstack/octavia: WIP - Get me a Load Balancer API
TrevorVNOT that one23:20
TrevorVsorry lulz23:20
*** armax has quit IRC23:27
*** armax has joined #openstack-lbaas23:28
openstackgerritMichael Johnson proposed openstack/neutron-lbaas: fix session persistence gate issue
openstackgerritTrevor Vardeman proposed openstack/octavia: Use unique SNI identifier when building data model
*** armax has quit IRC23:28
TrevorVjohnsom THAT review23:28
johnsomTrevorV Yeah, that was a bug23:30
TrevorVYes, yes it was :)23:30
TrevorVCredit to blogan for finding it, honestly23:30
*** mestery has quit IRC23:30
openstackgerritTrevor Vardeman proposed openstack/octavia: Get me a Load Balancer API
bloganblame to blogan for data models23:33
bloganand sqlalchemy23:33
johnsomI'm pretty sure that is code that was changed for L723:34
xgermanblogan make sure to put some blame on those reviewers giving +2s23:35
xgermanthey should have known better23:35
TrevorVHey now, that'd be rm_work and johnsom as far as I remember, especially if its L7 stuffs :P23:35
* johnsom bows his head in shame23:35
xgermanyep, I purposefully stayed out of that like blogan — we know sbalukoff23:35
*** mestery has joined #openstack-lbaas23:35
sbalukoffJust got back and my ears were burning....23:36
sbalukoffLooking now...23:36
TrevorVFrankly I was more surprised we didn't have a test that added 2 sni containers...23:37
johnsomTLS has been a huge gap in testing23:37
openstackgerritTrevor Vardeman proposed openstack/octavia: Get Me A Load Balancer Controller
TrevorVGood news, though, I'm ready for people to consume the single-create for Octavia at least :D23:38
*** yamamoto_ has quit IRC23:38
TrevorVI'll honestly probably have missed stuff, but please, let me know what's wrong with it.23:38
johnsomsbalukoff On another topic, I think between your patch and mine we might get the scenario gate going again23:40
sbalukoffWow! nice!23:40
TrevorVjohnsom I added the single-create links to the etherpad23:40
sbalukoffTrevorV:  There's something missing from your patch-- I've commented.23:40
sbalukoffBut good catch, in any case.23:40
johnsomTrevorV which eitherpad?  The L7 one?23:41
TrevorVSuppose it might not go here?23:41
TrevorVIs there a better place?23:41
johnsomNot really, I don't have another etherpad going for Mitaka23:42
TrevorVThen there 'tis :D23:42
xgermanok, will add my stuff there as well23:43
TrevorVsbalukoff you mean to add a multiple-sni-containers-test right?23:43
madhu_akfnaval, you around?23:43
TrevorVI didn't actually look where your comment was hosted, just read the comment on the review23:43
sbalukoffIt sounds like this is urgent, so if you just want to get it added to the get unique id method in there that would be great--  we should probably add more SNI-stuff to the model tests in any case at some point.23:44
sbalukoffBut I gather this is a show-stopper for you right now.23:44
TrevorVsbalukoff I'm adding right now :D23:45
sbalukoffTrevorV: around line 823 in octavia/tests/functional/db/test_models.py23:45
TrevorVYep, gots it23:45
*** Bjoern_ has quit IRC23:46
johnsomsbalukoff passed, is that ready to review?23:46
sbalukoffjohnsom: Yep!23:46
sbalukoffHuh. Scenario test passed.23:47
TrevorVsbalukoff I'm not sure exactly how to add a second sni object to get these tests updated....23:47
johnsomYeah.  I still think we need the scenario test changes in my lbaas patch, but we are in the right direction23:47
xgermanmadhu_ak was asking what is more urgent: SSL scenario or API...23:48
sbalukoffTrevorV: So, in that test file, in the _get_unique_key function, you should do the same thing there that you're doing in octavia/common/ in the _get_unique_key function.23:49
TrevorVoh yeah I already added that, but should I not also update the tests to account for that?23:50
sbalukoffTrevorV: I was saying earlier, if the tests pass as is, you're probably OK for now, but that probably indicates that we're not doing enough SNI testing in that file.23:51
johnsomIt seems like API is getting covered for the most part by the neutron-lbaas tests right?  I think TLS is a problem area when it comes to testing, so I guess I would vote for TLS23:51
TrevorVAh I see what you mean.  Alright, well they pass with that change, but it definitely needs to be updated to include multiple containers.23:51
sbalukoffWe should back-fill that at some point, but I wouldn't require it of you right now because this is urgent, right?23:51
TrevorVYeah, but I rebased on top of it23:51
TrevorVOh well.23:51
sbalukoffLet me have a closer look at the SNI class...23:52
TrevorVSure thing23:52
sbalukoffIs SNI.tls_container_id a unique identifier for the SNI object?23:52
sbalukoffAs in, can there be more than one 'SNI' object with the same tls_container_id?23:53
sbalukoffOh! Is it the combination of listener_id and tls_container_id that makes it unique?23:53
madhu_akjohnsom, Will that be TLS scenario or API tests?23:54
sbalukoff(I guess I could just check the DB schema...)23:54
johnsommadhu_ak TLS scenario23:54
sbalukoffOk, so yes, the primary key is both listener_id + tls_container_id.23:54
madhu_akokay, will work on it and I hope it is not coinciding with fnaval's work on TLS scenario23:55
sbalukoffTrevorV:  Make the unique identifier line look like this, then:  return obj.__class__.__name__ + obj.listener_id + obj.tls_container_id23:55
sbalukoffTrevorV: Make sense?23:55
openstackgerritTrevor Vardeman proposed openstack/octavia: Use unique SNI identifier when building data model
TrevorVOh dammit... why sbalukoff ?23:58
sbalukoffHaha! Sorry-- I finally realized what the real nature of the problem was.23:58
bloganomg scenario tests passed!23:58
rm_workyeah i mentioned that but he pointed out why it didn't matter23:58
TrevorVsbalukoff no no, the tls_container_id is unique23:58
rm_work(sni_container_id is unique)23:58
sbalukoffIs it?23:58
sbalukoffThen that's fine.23:58
TrevorVTHat should be a negative test for it.23:59
rm_workyeah, since we pull the host data for the SNI from the cert... there is no reason to have the same cert twice23:59
blogansbalukoff: could you have just used id(SqlAlchemyObjectInstance)23:59
sbalukoffblogan: I'm not sure we actually want to do that the way we're building the data model.23:59
