blogan | if we keep things as is, what is the alternative to solving the current problems? | 00:00 |
johnsom | blogan: Yes, I think those are the options as I see them | 00:00 |
xgerman | yeah, and make sure the status indicates that things on the amp might be different than seen here | 00:00 |
johnsom | Current problem I know of is the member update issue. I can fix that. | 00:00 |
sbalukoff | Again, you've not actually looked at my patch yet. | 00:01 |
xgerman | ok, I gotta run | 00:01 |
blogan | should we make an N plan to implement a task-based API, where we do story the intended state and the current state of all resources? | 00:01 |
johnsom | If we stay with how it is, there is some chunk of work to make sure we can do the model updates with the L7 stuff like we do with the other existing model objects. | 00:01 |
blogan | s/story/store | 00:01 |
blogan | sbalukoff: i wish i could :( | 00:02 |
blogan | okay so it has to be fixed for L7, so that rules out the task-based API anytime soon | 00:03 |
johnsom | Those are the two issues I know of with the current implementation. Others sbalukoff? | 00:03 |
sbalukoff | If we stay with how it is, we're also duplicating code between the data model and repository. | 00:03 |
blogan | sbalukoff: mind rehashing what those duplication points are? i have missed it | 00:03 |
johnsom | Yeah, I can't identify those either | 00:04 |
sbalukoff | Unless we do something differently in the to_data_model code, we're going to be dead in the water. The code I implemented there in the shared pools patch is O(n^2) or worse. | 00:04 |
blogan | did you do an n^n algorithm? | 00:04 |
johnsom | I thought you realized the reason you did that was really just a bad test | 00:04 |
blogan | bc i couldn't even if i tried i dont think | 00:05 |
sbalukoff | blogan: The biggest ones are around the L7 functionality. But there are others y'all are actually overlooking right now when it comes to relationships that sqlalchemy keeps track of for us. Right now we're not doing anything particularly complex with those things, but that's likely to change as we add new features. | 00:05 |
*** piet_ has joined #openstack-lbaas | 00:05 | |
sbalukoff | blogan: I'm pretty sure I did an n^n. | 00:06 |
sbalukoff | yes. | 00:06 |
sbalukoff | Sorry. | 00:06 |
blogan | i'd love to see it lol | 00:06 |
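[Editor's note: for context on the efficiency complaint above, the sketch below shows in generic terms why a naive recursive ORM-to-data-model conversion blows up on a graph with bidirectional relationships, and how threading a cache of already-converted objects through the recursion keeps it roughly linear. This is a hypothetical illustration, not Octavia's actual to_data_model implementation; DATA_MODEL_CLASS and attributes() are assumed helpers.]

```python
# Hypothetical sketch -- not Octavia's actual to_data_model code.
# Without the _seen cache, shared/cyclic relationships (listener <-> pool <->
# members) force the same subgraph to be re-converted over and over, which is
# how the conversion ends up O(n^2) or worse.
def to_data_model(orm_obj, _seen=None):
    if _seen is None:
        _seen = {}                       # id(orm object) -> converted data model
    key = id(orm_obj)
    if key in _seen:
        return _seen[key]                # reuse instead of re-walking the graph

    dm = orm_obj.DATA_MODEL_CLASS()      # assumed plain-Python counterpart class
    _seen[key] = dm                      # register before recursing to handle cycles

    for name, value in orm_obj.attributes().items():   # assumed helper
        if isinstance(value, list):
            setattr(dm, name, [to_data_model(v, _seen) for v in value])
        elif hasattr(value, 'DATA_MODEL_CLASS'):
            setattr(dm, name, to_data_model(value, _seen))
        else:
            setattr(dm, name, value)
    return dm
```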
sbalukoff | It's worth noting: | 00:06 |
sbalukoff | The patch I implemented this morning is almost exactly a roll-back to blogan's code prior to the shared pools patch. | 00:06 |
johnsom | Right, thus my confusion here | 00:07 |
sbalukoff | (It turns out blogan's code had an error in it that worked in our favor regarding data model population...) | 00:07 |
blogan | yes! dumb luck to the rescue! | 00:07 |
sbalukoff | Haha! | 00:07 |
johnsom | The problem with that patch is it breaks the create calls at the moment | 00:07 |
sbalukoff | Well, it's not just the create calls... | 00:08 |
sbalukoff | Wait... | 00:08 |
sbalukoff | Actually.... | 00:08 |
sbalukoff | Ok, the follow-up patch I posted (with the flow reorder) seems to work fine with create calls. | 00:08 |
blogan | grrr i really wish gerrit was up | 00:08 |
sbalukoff | At least, I've not been able to get it to fail. | 00:08 |
sbalukoff | but I don't think the follow up patch actually touches any of the create calls... | 00:09 |
johnsom | sbalukoff be sure to check the haproxy.cfg in the amp. The flows don't fail, the config is just toast | 00:09 |
sbalukoff | So... I have no idea how they were breaking for you, johnsom. | 00:09 |
sbalukoff | Oh, I was watching the haproxy.cfg | 00:09 |
johnsom | Ok | 00:09 |
blogan | okay i gotta go home, ill be on later sometime | 00:10 |
blogan | hopefully gerrit will be up | 00:10 |
sbalukoff | i've been trying to detect discrepancies between the octavia database and what is actually deployed for the "success" scenario. | 00:10 |
blogan | ill comment on the review | 00:10 |
sbalukoff | So yes, i've been looking very closely at the haproxy.cfg | 00:10 |
*** yamamoto_ has joined #openstack-lbaas | 00:12 | |
sbalukoff | So... it's entirely possible all the validation stuff I'm doing in the repo is actually the wrong place for it and that it should go in the data model instead. Or some other yet-to-be-invented validation layer that's external to all of this. | 00:16 |
sbalukoff | I will say that some of the l7policy code in particular makes use of SQLAlchemy's ordering_list stuff (since policies have an order that needs to be managed), which I don't like the idea of having to reproduce in some other layer. | 00:16 |
sbalukoff | Especially since a new policy being inserted in the list will effectively reorder all the other policies in the list. | 00:17 |
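[Editor's note: the sketch below illustrates the SQLAlchemy ordering_list pattern sbalukoff is referring to; the model and column names are illustrative, not Octavia's actual schema.]

```python
# Illustrative only -- not Octavia's actual models.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.orm import relationship

Base = declarative_base()


class Listener(Base):
    __tablename__ = 'listener'
    id = Column(String(36), primary_key=True)
    # ordering_list keeps 'position' contiguous: inserting a policy at index 0
    # renumbers every policy that follows it automatically.
    l7policies = relationship(
        'L7Policy',
        order_by='L7Policy.position',
        collection_class=ordering_list('position', count_from=1))


class L7Policy(Base):
    __tablename__ = 'l7policy'
    id = Column(String(36), primary_key=True)
    listener_id = Column(String(36), ForeignKey('listener.id'))
    position = Column(Integer)

# listener.l7policies.insert(0, new_policy) shifts all existing positions;
# reproducing that bookkeeping outside SQLAlchemy is the duplication being
# objected to above.
```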
sbalukoff | I'm disturbed that the return data we give for updates / deletes is not the state of affairs after the transaction has occurred (which is how things work with creates). Maybe we should have blocking calls when they do updates? | 00:18 |
sbalukoff | (Right now we just shove everything off to an asynchronous handler and hope for the best-- so what gets returned by the API is going to need to be taken with a grain of salt anyway.) | 00:19 |
johnsom | I'm assuming you mean via the API. Yes, since we are async we return a 202 and what they asked us to change. | 00:20 |
johnsom | Going synchronous is yet another topic to rehash | 00:20 |
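[Editor's note: a minimal illustration of the asynchronous pattern being discussed, assuming hypothetical repo/handler objects rather than Octavia's actual controller code: mark the object PENDING_UPDATE, hand the requested changes to an async handler, and return 202 with what the caller asked for rather than the eventual end state.]

```python
# Hypothetical sketch, not Octavia's actual API controller.
def update_member(repo, handler, session, member_id, update_dict):
    # Mark the object as in-flight so GETs can signal the pending change.
    repo.member.update(session, member_id,
                       provisioning_status='PENDING_UPDATE')
    # The handler applies update_dict to the amphorae out of band; nothing
    # here waits for it, which is why the 202 body reflects the request,
    # not the state the backend has actually reached.
    handler.handle_member_update(member_id, update_dict)
    return 202, update_dict
```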
johnsom | The difference here is that under the current code, GET queries return what is actually configured on the amps, whereas the reordered code would return what we want to be on the amps. So, if you have 10,000 amps in an LB, a GET under the reorder code would indicate they are updated when they have not yet been updated. | 00:21 |
johnsom | Extreme crazy case, yes | 00:22 |
sbalukoff | Well... I would argue that a get when you're in the middle of changing state is a kind of "anyone's guess" scenario. | 00:23 |
sbalukoff | We probably don't want to block the get... | 00:23 |
johnsom | fair | 00:23 |
johnsom | It really comes down to whether we want that history or not. If not, we can strip a heck of a lot of code out. | 00:24 |
sbalukoff | And... well, I guess it comes down to a matter of opinion on what is more worthwhile for the user to see when the state is 'PENDING_UPDATE': The state of things before the update, or the state they will be in once the update completes. | 00:25 |
sbalukoff | I could see arguments either way on that one. | 00:25 |
sbalukoff | So the database state should always be the state of things on the amphora (assuming we eventually write roll-back code that occasionally succeeds. ;) )... and the theory is it's easier to update to the previous state if we don't change the database until we know whether the transaction succeeded? (I'm not convinced of this-- you're going to have to store state one way or another, either where you were or where you want to go. I suspect it's about equally difficult either way... but I don't know that for sure.) | 00:27 |
*** piet_ has quit IRC | 00:29 | |
johnsom | We are already storing that state | 00:31 |
sbalukoff | I agree that it seems like a good idea to be cautious about updating the database pre-emptively-- we're closer to actual database transactions that way. | 00:31 |
johnsom | Where we were is in the DB, where we want to go is stored in the flow storage (UPDATE_DICT) | 00:31 |
sbalukoff | Yeah, well, you could just as easily do a ROLLBACK_DICT. ;) | 00:32 |
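[Editor's note: a hedged sketch of the division of labor johnsom describes, assuming illustrative task and field names rather than Octavia's actual flows: the desired changes ride in TaskFlow's flow storage (UPDATE_DICT), the database keeps the current state until the amphora change succeeds, and a task's revert() therefore still has the old state to fall back on.]

```python
# Illustrative TaskFlow sketch -- not Octavia's actual controller_worker flows.
from taskflow import engines, task
from taskflow.patterns import linear_flow


class ApplyUpdateToAmphora(task.Task):
    def execute(self, update_dict):
        # Render/push config built from DB state merged with update_dict.
        print('pushing %s to the amphora' % update_dict)

    def revert(self, update_dict, *args, **kwargs):
        # The DB was never changed, so reverting is just cleanup;
        # "where we were" is still in the database.
        print('amphora update failed; DB still holds the previous state')


class MarkUpdatedInDB(task.Task):
    def execute(self, update_dict):
        # Only after the amphora accepted the change is update_dict persisted.
        print('writing %s to the database' % update_dict)


flow = linear_flow.Flow('member-update-flow')
flow.add(ApplyUpdateToAmphora(), MarkUpdatedInDB())
engines.run(flow, store={'update_dict': {'weight': 5}})
```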
johnsom | Back to the GET argument | 00:32 |
johnsom | Yeah | 00:32 |
sbalukoff | Yeah. | 00:32 |
sbalukoff | I think we're cheating with create commands though: | 00:33 |
johnsom | In theory, UPDATE_DICT is easier on the queue | 00:33 |
sbalukoff | We're showing the user how things will be after the change. | 00:33 |
johnsom | I agree. It's not too unfair though | 00:34 |
sbalukoff | So.... I really have to run for now. (There are already 6 guests over for a party I'm throwing tonight and I'm being a terrible host...) One thing I will say about this is that not having things work for the success state is causing me severe problems trying to test and troubleshoot the l7 code. | 00:34 |
johnsom | Unlike a user that updates to disable session persistence, sees the change in the API, but still sees stuck traffic on his nodes. | 00:34 |
sbalukoff | Because at this point it's really hard for me to tell where things went wrong if updates don't end up in the haproxy.cfg. | 00:35 |
sbalukoff | That ambiguity goes away when i do the DB updates first. :P | 00:35 |
johnsom | Ok, well, everything *except* member update works for me pre-shared pools. member is just a model update | 00:35 |
sbalukoff | The l7policy and l7rule stuff suffers the same problem as the member update stuff. | 00:35 |
sbalukoff | And again, right now this goes away with the two patches I suggested this morning. | 00:36 |
johnsom | easy != right necessarily. Ok, go to your party. Gerrit will be back Tuesday | 00:37 |
sbalukoff | Perhaps a compromise? Maybe we could do the task-flow reorder for now to make sure things work for the "success" state, and make sure that roll-backs are on the docket in the near future? | 00:37 |
johnsom | Remove all that code to put it back in? Not buying | 00:37 |
sbalukoff | I know that easy is not necessarily right. But having to take great pains is usually a sign of bad design. :/ | 00:38 |
sbalukoff | No-- | 00:38 |
sbalukoff | Leave the model code in place. | 00:38 |
sbalukoff | Have any of my patches tried to remove it? | 00:38 |
johnsom | No, but I wouldn't leave that much dead code around either | 00:38 |
sbalukoff | I'm working off the idea that "things need to work for normal 'success' flows in production" including stuff like member updates. :/ | 00:39 |
sbalukoff | Right now... they don't. | 00:39 |
johnsom | Member updates is a pretty easy fix | 00:39 |
sbalukoff | And I suspect that was the case before shared pools. | 00:39 |
sbalukoff | Is it? | 00:39 |
johnsom | Yeah | 00:39 |
johnsom | I've been poking at it while we discussed | 00:40 |
sbalukoff | Hopefully gerrit will be back soon. | 00:41 |
sbalukoff | Ok, I'm off to this party. | 00:44 |
sbalukoff | Have a good weekend in case I don't see y'all before Monday, eh! | 00:45 |
*** piet has joined #openstack-lbaas | 00:48 | |
*** yamamoto_ has quit IRC | 00:52 | |
*** allan_h has joined #openstack-lbaas | 00:56 | |
*** yamamoto_ has joined #openstack-lbaas | 00:57 | |
*** ajmiller has quit IRC | 00:57 | |
johnsom | FYI, three lines of code (maybe there is a more optimized Python way) fix the member update issue | 00:59 |
*** _cjones_ has quit IRC | 01:05 | |
*** piet has quit IRC | 01:24 | |
*** yamamoto_ has quit IRC | 01:28 | |
johnsom | This fixes the pre-shared pools member update. Member delete should be fixed too, as it was reordered in a recent patch | 01:31 |
johnsom | https://gist.github.com/anonymous/2811a238a4a42f1b0adb | 01:31 |
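[Editor's note: the anonymous gist above is no longer retrievable, so its contents are not reproduced here. Purely as a hedged illustration, a "just a model update" fix of the shape johnsom describes typically amounts to copying the requested fields onto the in-memory data model so the driver renders the new values; the function below is hypothetical, not the actual patch.]

```python
# Hypothetical illustration only -- not the contents of the gist above.
def apply_update(member, update_dict):
    # Copy each requested field onto the in-memory member data model so the
    # amphora driver renders the updated values into haproxy.cfg.
    for key, value in update_dict.items():
        setattr(member, key, value)
    return member
```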
*** madhu_ak has quit IRC | 01:32 | |
johnsom | When I get a chance I will try shared pools with the reverted to_data_model code again and see if I can repro the corrupt haproxy.cfg issue and fix the member update there. Given create has always worked before, I think there is something wrong in shared pools | 01:33 |
johnsom | Right now I need to do my day job, which I put off for this discussion | 01:34 |
*** openstackgerrit has quit IRC | 01:47 | |
*** openstackgerrit has joined #openstack-lbaas | 01:48 | |
openstackgerrit | Jeffrey Longstaff proposed openstack/neutron-lbaas: Create plugin driver for F5 Networks appliances. https://review.openstack.org/279836 | 01:48 |
*** yamamoto_ has joined #openstack-lbaas | 02:06 | |
*** yamamoto_ has quit IRC | 02:21 | |
*** yamamoto_ has joined #openstack-lbaas | 02:22 | |
*** yamamoto_ has quit IRC | 02:22 | |
*** yamamoto_ has joined #openstack-lbaas | 02:22 | |
*** bana_k has quit IRC | 02:32 | |
*** kevo has quit IRC | 02:35 | |
*** bana_k has joined #openstack-lbaas | 02:46 | |
*** bana_k has quit IRC | 02:47 | |
*** bana_k has joined #openstack-lbaas | 02:47 | |
*** bana_k has quit IRC | 02:48 | |
*** chlong has joined #openstack-lbaas | 02:51 | |
*** yamamoto_ has quit IRC | 02:56 | |
*** woodster_ has quit IRC | 02:56 | |
*** piet_ has joined #openstack-lbaas | 03:10 | |
*** HenryG has quit IRC | 03:12 | |
*** HenryG has joined #openstack-lbaas | 03:13 | |
*** amotoki has quit IRC | 03:37 | |
*** chlong has quit IRC | 03:54 | |
*** yamamoto has joined #openstack-lbaas | 03:56 | |
*** piet_ has quit IRC | 04:02 | |
*** yamamoto has quit IRC | 04:04 | |
*** piet has joined #openstack-lbaas | 04:10 | |
*** piet has quit IRC | 04:35 | |
*** kobis has joined #openstack-lbaas | 05:06 | |
*** piet has joined #openstack-lbaas | 05:22 | |
*** armax has quit IRC | 05:22 | |
*** fnaval has quit IRC | 05:25 | |
*** fnaval has joined #openstack-lbaas | 05:34 | |
*** piet has quit IRC | 05:41 | |
*** piet has joined #openstack-lbaas | 05:52 | |
*** piet has quit IRC | 06:07 | |
*** fnaval has quit IRC | 06:54 | |
*** piet has joined #openstack-lbaas | 06:57 | |
*** piet has quit IRC | 07:12 | |
*** piet has joined #openstack-lbaas | 07:30 | |
*** piet has quit IRC | 07:45 | |
*** hockeynut_afk has joined #openstack-lbaas | 08:14 | |
*** hockeynut has quit IRC | 08:15 | |
*** HenryG has quit IRC | 08:15 | |
*** HenryG has joined #openstack-lbaas | 08:17 | |
*** yamamoto has joined #openstack-lbaas | 09:30 | |
*** yamamoto has quit IRC | 09:37 | |
*** yamamoto has joined #openstack-lbaas | 09:43 | |
*** yamamoto has quit IRC | 09:46 | |
*** yamamoto has joined #openstack-lbaas | 10:47 | |
*** yamamoto has quit IRC | 10:53 | |
*** piet has joined #openstack-lbaas | 11:50 | |
*** piet has quit IRC | 12:13 | |
*** piet has joined #openstack-lbaas | 12:25 | |
*** doug-fish has quit IRC | 12:55 | |
*** doug-fish has joined #openstack-lbaas | 12:57 | |
*** doug-fish has quit IRC | 12:57 | |
*** ducttape_ has joined #openstack-lbaas | 14:01 | |
*** ducttape_ has quit IRC | 14:18 | |
*** ducttape_ has joined #openstack-lbaas | 14:30 | |
*** ducttape_ has quit IRC | 15:10 | |
*** ducttape_ has joined #openstack-lbaas | 16:29 | |
*** ducttape_ has quit IRC | 16:36 | |
*** ducttape_ has joined #openstack-lbaas | 16:47 | |
*** ducttape_ has quit IRC | 16:48 | |
*** armax has joined #openstack-lbaas | 17:16 | |
*** armax has quit IRC | 17:50 | |
*** madhu_ak has joined #openstack-lbaas | 18:00 | |
*** madhu_ak has quit IRC | 18:21 | |
*** kobis has quit IRC | 19:58 | |
*** kobis has joined #openstack-lbaas | 19:59 | |
*** bana_k has joined #openstack-lbaas | 20:39 | |
*** bana_k has quit IRC | 21:28 | |
*** ducttape_ has joined #openstack-lbaas | 22:56 | |
*** ducttape_ has quit IRC | 23:05 |