14:02:32 #startmeeting neutron-lbaas
14:02:33 Meeting started Thu Sep 4 14:02:32 2014 UTC and is due to finish in 60 minutes. The chair is jorgem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:34 Lol
14:02:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:35 there we go
14:02:36 The meeting name has been set to 'neutron_lbaas'
14:02:38 Yep!
14:02:46 cool!
14:02:49 no hyphen x2
14:02:53 #chair dougwig
14:02:54 Current chairs: dougwig jorgem
14:03:19 What's the agenda? the agenda on the LBaaS wiki isn't updated
14:03:19 it's been a while since I've been on time!
14:03:26 I don't think much
14:03:28 i think our only listed topic is blogan's driver status thing.
14:03:40 I'm guessing incubator status
14:03:43 And, if Kyle or Mark are here, an update on incubator
14:03:47 perfect
14:03:48 So this might be a quick one?
14:03:51 jinx
14:04:00 rm_mobile: most likely
14:04:05 If only it were later in the day. XD
14:04:05 blogan, can you resend the link to your driver?
14:04:10 K
14:04:21 rm_mobile, it never is ;-)
14:04:26 hello all
14:04:28 Lpl
14:04:29 #topic entity status and drivers
14:04:36 *lol
14:04:49 blogan: take it away.
14:04:55 blogan just got here
14:05:12 sorry just got here, talking about my email?
14:05:17 yes
14:05:23 oh okay
14:06:00 so I just wanted to gauge interest in whether people think that drivers being responsible for setting the status of the entities is an issue
14:06:16 to me it is, because it leads to inconsistencies and really isn't something a driver should have to worry about
14:06:34 You said you had an idea on how to deal with asynchronous drivers
14:06:40 Did you want to share that?
14:07:01 well it was an idea, but i haven't done much with it
14:07:25 but basically it would make the neutron lbaas API always asynchronous, and there would be an async driver interface, and a sync driver interface
14:08:01 How would the asynchronous driver interface work differently than the one now?
14:08:05 I haven't worked it out totally because I'm not sure if anyone thinks it is worth investigating
14:08:12 (That seems like the crux of the problem, to me.)
14:08:18 I am also wondering what happened to our plan of using exceptions?
14:08:28 xgerman: +1
14:08:35 xgerman, +1
14:08:43 sbalukoff: actually it wouldn't be an interface, it would be an abstract class that would handle the polling of the async driver methods to get the status
14:08:46 o/
14:08:54 xgerman: what happened was having async drivers and sync drivers
14:09:07 xgerman: since we have async drivers they can't throw exceptions
14:09:17 xgerman: well they can, it would just be uncaught
14:09:26 Hmmm...
14:09:29 if we are planning to poll, the async driver will appear synchronous anyway
14:09:33 isn't that what the ERROR status is for?
14:09:45 unless you think we want to do eventing
14:10:05 blogan: I suspect the people who should weigh in on that would be the authors of the asynchronous driver interfaces -- whether they would prefer things to work that way, or as they do now.
14:10:06 xgerman: even with sync, there's a wrinkle in that plan w.r.t. fatal vs non-fatal exceptions. since you can get an LB object that triggers creating a bunch of child objects, you could end up in a halfway state.
14:10:33 sbalukoff: yes and their input is what I was hoping to get today too
14:10:53 I don't suppose any of them are present?
14:11:03 dougwig, throwing exceptions doesn't preclude a driver from cleaning up
14:11:04 (I'm only seeing Octavia crew here that are active.)
14:11:06 in really really short, if we want to support auto-magic, even with synchronous, we need three exit results from driver interfaces, not just "had an exception" or "did not have an exception".
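[editor's note: blogan's idea above -- an always-asynchronous plugin API with separate sync and async driver interfaces, and an abstract class that handles polling for status -- might look roughly like the following sketch. All class, method, and status names here are hypothetical illustrations, not actual Neutron LBaaS code.]

```python
import abc


class AsyncDriverBase(abc.ABC):
    """Hypothetical async interface: start the operation, report when polled."""

    @abc.abstractmethod
    def create_loadbalancer(self, lb):
        """Kick off creation and return immediately."""

    @abc.abstractmethod
    def get_status(self, lb_id):
        """Return 'PENDING_CREATE', 'ACTIVE', or 'ERROR' when polled."""


class SyncDriverBase(abc.ABC):
    """Hypothetical sync interface: the call blocks until done, or raises."""

    @abc.abstractmethod
    def create_loadbalancer(self, lb):
        ...


class SyncToAsyncAdapter(AsyncDriverBase):
    """Wraps a sync driver so the plugin can poll every driver uniformly;
    the plugin, not the driver, ends up owning the status values."""

    def __init__(self, driver):
        self._driver = driver
        self._statuses = {}

    def create_loadbalancer(self, lb):
        try:
            self._driver.create_loadbalancer(lb)
            self._statuses[lb["id"]] = "ACTIVE"
        except Exception:
            self._statuses[lb["id"]] = "ERROR"

    def get_status(self, lb_id):
        return self._statuses.get(lb_id, "PENDING_CREATE")
```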
14:11:25 well, we could have different exceptions
14:11:31 dougwig: we can have custom exceptions that will tell the plugin what to do, even complex exceptions
14:11:37 +1
14:11:40 that would require more investigation though
14:11:55 blogan, +1
14:11:59 I think a good error/exception model would be worth it
14:12:07 well, with our current models, you'd actually need to be able to communicate multiple errors at once, possibly.
14:12:08 +10000
14:12:11 are the radware guys here today?
14:12:22 dougwig, you can chain exceptions
14:12:23 samuel?
14:12:27 dougwig: a custom exception can have many fields to communicate that
14:12:44 I don't see Samuel. Avishay?
14:12:50 Based on my past experience, not putting enough time into getting a good error/exception model is an issue. It always comes back and bites.
14:12:54 blogan: I'm here
14:13:02 Oh yay!
14:13:03 evgenyf: do you have an opinion on this?
14:13:04 yep. i'm just saying that it's not as simple an interface as it sounds, which i suspect is why the statuses are in there.
14:13:22 Are we talking LBaaS v1 or v2?
14:13:27 I think the reason the statuses are there is because of async drivers not going through an agent
14:13:38 evgenyf: v2
14:14:16 evgenyf: nothing should be added to v1
14:14:28 I think it should remain as it is now, in v2 we have all these active/fail/defer functions in mixins
14:14:55 evgenyf: true but the alternative is not having those at all
14:15:43 blogan, can you elaborate on your idea with async plugin API please?
14:16:33 sbalukoff: Sam and Avishay are not here
14:16:40 evgenyf: basically the driver does not set the status of entities at all, it will just throw exceptions
14:17:06 Netsplit
14:17:08 evgenyf: then the plugin sets those based on the exception thrown
14:17:09 ah net split!
14:17:09 Awesome.
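[editor's note: the exception-based model being debated here -- the driver raises a structured exception, the plugin maps it to entity statuses -- could be sketched as below. The names are purely illustrative, and the aggregate class covers dougwig's multi-error case, where creating one LB tree can fail partway through several child objects.]

```python
class DriverError(Exception):
    """Hypothetical base driver exception: names the affected entities and
    says whether the failure is fatal, so the plugin can decide statuses."""

    def __init__(self, message, affected=(), fatal=True):
        super().__init__(message)
        self.affected = list(affected)  # e.g. ["loadbalancer:1", "pool:2"]
        self.fatal = fatal


class AggregateDriverError(DriverError):
    """Chains several errors from one tree-wide operation (an LB create
    that also creates listeners/pools can fail halfway through)."""

    def __init__(self, errors):
        super().__init__(
            "multiple driver errors",
            affected=[e for err in errors for e in err.affected],
            fatal=any(err.fatal for err in errors),
        )
        self.errors = list(errors)


def apply_statuses(driver_call, entities):
    """Plugin-side glue: one place decides statuses for every driver."""
    try:
        driver_call()
    except DriverError as exc:
        bad = set(exc.affected)
        return {e: ("ERROR" if (e in bad and exc.fatal) else "ACTIVE")
                for e in entities}
    return {e: "ACTIVE" for e in entities}
```

Under this sketch, driver code shrinks to "do the work, raise on failure," which is the bar dougwig sets later in the discussion.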
14:17:18 woah mass exodus
14:18:04 evgenyf: how this is done to support both async and sync drivers I have an idea about, but need more time to hash it out
14:18:10 we can prototype it, but given the multi-error issue, we're just going to replace one set of glue with another set that has to wrangle and chain the exceptions or set up the error fields before re-raising. the bar here should be whether it results in simpler driver code.
14:18:29 blogan: Radware's driver has a separate thread dealing with operations success/fail
14:19:29 dougwig: doesn't it make sense to have a separation of concerns though? the driver only has to tell the plugin what entities are affected and then the plugin decides for all drivers what to do?
14:19:33 that seems simpler to me
14:19:52 evgenyf: correct, and a solution to this problem would handle that
14:20:22 blogan: Are you talking about entities shared between drivers?
14:20:31 Does v2 in its current form allow that?
14:20:34 sbalukoff: no not at all
14:20:39 if the separation results in one line per method becoming 3, then we're just re-arranging deck chairs.
14:21:00 Heh.
14:21:10 we'd be putting deck chairs in the proper location
14:21:32 or even 1. if it's truly a useless thing for drivers to be doing, we should be able to get rid of it and reduce driver code. if that doesn't happen, we're not really insulating anything.
14:21:32 like on a deck
14:21:56 dougwig +1
14:22:02 i'm not sure that drivers updating status is "improper".
14:22:15 maybe it's a philosophical difference
14:22:22 which is why i wanted people to give their opinions
14:22:24 it's not a good separation of concerns
14:22:26 with a bit of a debate
14:22:42 xgerman: i'm confused, are you for or against it?
14:22:53 I am for exceptions
14:23:02 blogan: Is the concern that the current system leads to more inflexible spaghetti code?
14:23:35 sbalukoff: i wouldn't say it's spaghetti code, the driver interface dougwig created does improve the status management in the driver
14:23:44 over v1
14:23:47 well, if everybody updates statuses then it's hard to make changes/updates without everybody changing code
14:24:01 xgerman, +1
14:24:09 xgerman: That was what I was getting at.
14:24:30 the main problems I have with it are that it will lead to inconsistent statuses across drivers, and to me it doesn't seem like the drivers' responsibility
14:24:35 good, so we are on the same page :-)
14:24:47 and what xgerman said
14:25:04 xgerman: err, it's even harder if you pull them out, because now the driver can't make the decision of how to do transactions/locking, and you can have drivers running in parallel. you can't wrap those operations at the plugin level.
14:25:39 (the complex ones, not the status ones)
14:25:58 to me it seems odd the driver has to worry about transactions/locking when it should be the db layer that does
14:26:20 it's part and parcel of having two sources of truth.
14:26:21 unless you're talking about not the lbaas db
14:27:06 two sources of truth being the lbaas db and the driver's own db?
14:27:33 or whatever the vendor's storage mechanism is
14:27:39 correct. we don't have things set up for atomic replacements of entire trees.
14:28:20 which means, hello hard cs problem #2.
14:28:28 naming things?
14:28:30 lol
14:28:35 Haha
14:28:47 i knew you would reply with that. :)
14:29:13 can you give a specific example dougwig? i'm not totally following?
14:29:26 that last sentence should just be a statement
14:30:41 can we talk later in channel? it's way early, and this is going to take awhile.
14:30:46 lol sure
14:30:55 it's not meant to be solved today at all
14:31:11 dougwig, please pin me if I am available. Would love to be part of that chat
14:31:15 s/ping
14:31:19 sballe: ok
14:31:39 okay so that will be sidelined until later today
14:31:47 Sounds good.
14:32:03 anyone have anything else?
14:32:11 Is mestery on the IRC?
14:32:13 Are markmcclain or mestery here?
14:32:16 lol
14:32:24 :)
14:32:26 sballe sbalukoff: o/
14:32:31 Dougwig: could you please summarize your conversation results on the ML after?
14:32:32 #topic incubator update
14:32:37 evgenyf: yes
14:32:40 mestery: the obligatory ask for an update on the incubator
14:32:45 blogan: Absolutely sir!
14:32:45 :D
14:32:51 i can take this one.
14:32:54 it should be done this week.
14:32:57 right? :)
14:32:58 lol
14:33:02 So, the update is that now that we're past Juno FF, we will get infra to set the repository up by tomorrow.
14:33:10 We were holding off given their focus on holding the gate together this week
14:33:22 markmcclain has worked with the TC and infra on the plan and they are both on board.
14:33:31 Any questions?
14:33:36 :)
14:33:45 mestery: by tomorrow do you mean two weeks from now? or actually tomorrow?
14:33:50 Heh!
14:33:51 blogan: :P
14:33:51 Is there a document with info somewhere?
14:33:51 I would like to understand a little more about the governance of the incubator project
14:34:02 Tomorrow is the plan, markmcclain should have the review out today for repo creation
14:34:06 sballe: me too
14:34:12 mestery: Any changes in how incubator is going to be run, or is the wiki still the source of truth on this?
14:34:15 Based on the ML discussion we had there is a lot of confusion
14:34:21 I think the governance should be documented on the wiki, let me find the link.
14:34:32 sbalukoff: The wiki is still the source of truth at this point
14:34:33 mestery: can you make sure it gets put on the ML so we can all look at it?
14:35:03 blogan: Ack, will do
14:35:15 mestery: thanks a bunch
14:35:15 the wiki has pending revisions that I have not posted to clarify feature-branch vs incubator criteria
14:35:36 #link https://wiki.openstack.org/wiki/Network/Incubator
14:35:39 mestery, do we have any timeline for the first project to enter the Neutron incubator project?
14:35:40 ok, can you send an e-mail once we are supposed to look
14:36:15 sballe: Once it's up, you can post blogan's patch series there right away.
14:36:41 sballe, mestery: I'll ping blogan when it is ready and help get the patches in
14:36:50 markmcclain: Awesome sir!
14:36:52 awesome, thanks!
14:37:27 is devstack getting modified at the same time?
14:37:56 dougwig: not sure… I have work w/ the QA team
14:38:04 mestery, I am looking forward to seeing this whole Neutron Incubator project working and hopefully it will work well. We are kind of counting on this to be able to move forward
14:38:21 the gate is still seriously backed up
14:38:22 We == HP
14:38:36 the gate needs some fiber, for sure.
14:38:37 sballe: ++, I agree, I expect this to really be super helpful as well!
14:38:54 * mestery feeds the gate some whole grain
14:39:06 haha
14:39:07 sballe: i think everyone wants to see it succeed
14:39:32 Yep
14:39:37 mestery, you should give it some Red Bull instead
14:39:52 I think we just bring in Chuck Norris as a "gate opener"
14:40:09 sballe: hahhahahahaha
14:40:10 it's only 72 deep today.
14:40:15 that's light and breezy
14:40:46 any other incubator questions or updates?
14:41:03 I would like to have this topic on the agenda for next week again.
14:41:13 sballe: +1
14:41:13 ok
14:41:16 We need a weekly status on how this is working for us
14:41:25 sballe: +1
14:41:32 sballe +1
14:41:35 mestery, markmcclain - thanks for the update
14:41:37 sballe: Let's use the neutron meeting for that
14:41:45 ok when is that?
14:41:47 I'd like the broader team to hear the update on the incubator as well
14:41:58 #link https://wiki.openstack.org/wiki/Meetings#Neutron_team_meeting
14:42:02 dougwig: I have a topic when we wrap up the incubator topic and whatever else was on the official agenda
14:42:30 #topic rm_work's topic
14:42:35 heh
14:43:02 wonderful topic
14:43:06 mestery, when is the next Neutron meeting, Monday or Tuesday: Mondays at 2100 UTC and Tuesdays at 1400 UTC
14:43:08 So, for TLS, we are "registering" with Barbican to get the user's certificate data
14:43:20 Tuesday
14:43:22 thx
14:43:30 to do this, we need to auth with barbican using our own keystone user that is an "admin"
14:43:31 sballe: I sent email on this, and if you go here (https://wiki.openstack.org/wiki/Network/Meetings) you can see it
14:43:52 I don't know if there is precedent for Neutron having its own "service user". Is there? how would this be handled?
14:44:15 that should potentially be the same as us needing a nova user for Octavia
14:44:17 or maybe it'll be an 'operator' admin user?
14:44:45 xgerman: yeah, i think very similar
14:45:07 I know security looks down on passwords stored in config files
14:45:19 i'm thinking that there must be a standard "openstack" way of handling this. we can't be the first to need a backdoor account into another project.
14:45:26 xgerman, you can have an Octavia tenant with admin privs for the various services, or the advsvc role in the case of Neutron
14:45:31 Yeah... not sure how else to handle it -- we can't exactly store the password in Barbican <_<
14:45:58 we do this all the time with our platform services
14:46:24 right, so we just need a tenant with a couple specific keystone roles, and marked as "admin" (whatever that means exactly to Keystone, i'm still not 100% clear)
14:46:44 well, name, pwd, etc. should all be configurable
14:46:45 rm_work, yeah and we might not even need admin privs.
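[editor's note: the "credentials in config" approach being weighed here is the same pattern neutron.conf already uses for its nova credentials. A minimal sketch of a service reading such a section follows; the section and option names are made up for illustration, and stdlib configparser stands in for the oslo.config machinery a real OpenStack service would use.]

```python
import configparser

# Hypothetical [service_auth] section a deployer would put in the config,
# mirroring the endpoint/user/pass pattern mentioned for nova in neutron.conf.
SAMPLE_CONF = """
[service_auth]
auth_url = http://keystone.example.com:5000/v3
admin_user = neutron
admin_password = s3cret
admin_tenant_name = service
"""


def load_service_credentials(conf_text):
    """Parse the service-account credentials out of the config text."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    section = cfg["service_auth"]
    return {key: section[key]
            for key in ("auth_url", "admin_user",
                        "admin_password", "admin_tenant_name")}
```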
14:46:48 So, I think this needs to be owned by Neutron, not Neutron-Lbaas, since the code that uses it will be in /common/
14:46:54 it will depend on what we need to do
14:47:36 well, I am currently very concerned with security on this, and I am going to need to discuss with Barbican about how we'll do this
14:47:45 rm_work really just needs to know if neutron already has one set up for this or if one will have to be created
14:47:56 maybe an ML query on how other projects have done this?
14:48:00 blogan, can you elaborate?
14:48:03 markmcclain: does neutron use service accounts to communicate with other openstack projects?
14:48:05 rm_work: Could ask on the ML to see if anyone there knows the "OpenStack" way of doing this.
14:48:10 mestery?
14:48:15 dougwig: Jinx
14:48:51 blogan, I am not sure what you are asking about?
14:49:11 blogan: it does, for nova.
14:49:25 i think you put endpoint/user/pass into neutron.conf
14:49:30 ok
14:49:31 ok, then use that one?
14:49:34 sballe: if neutron already has a service account set up that can talk to barbican, really
14:49:34 so we could hook into that
14:49:41 Yea, we do store other 'secure' info in the configs
14:49:43 we need to check this
14:49:45 xgerman: maybe, but not sure if it also can talk to barbican
14:49:48 not sure why this would be much different
14:49:54 will require some testing for sure
14:49:54 https://www.irccloud.com/pastebin/bvLCzij1
14:49:56 blogan: yes, a service account is the recommended way to deploy
14:50:02 I mean, if it's a keystone tenant, then it's just a role issue
14:50:25 rm_work, That's what I am getting at... It all depends on what we want to do
14:50:25 markmcclain: so if we need to retrieve keys from barbican, is there already a service account we can use?
14:50:43 markmcclain: or is that something a deployer does themselves and puts in the config?
14:51:13 yes.. there should be credentials that we use for the Nova callback
14:51:20 though I am still thinking maybe we want to hijack the user's token to set up the original trust -- just because maybe allowing the service-account access to literally anyone's data in Barbican is a little scary
14:51:48 I may be getting into the weeds here, but I would like people to have some idea what's going to be happening in the background with regard to this
14:52:03 so if there are concerns with the security of the whole thing, they can be voiced
14:52:24 It sounds like storing credentials in a config file is no worse than current practice.
14:52:24 and people with more security experience than myself can chime in :)
14:52:24 rm_work, there is something in keystone called Trusts, maybe that would be useful. It allows a service to do something as a user
14:52:25 rm_work: I agree wanting to limit trust
14:52:35 so you could reuse the current context
14:52:48 sballe: right, the plan is that we set up a trust between our service account and the user
14:52:59 but allowing the service-account to actually set up a trust with any user is a bit risky
14:53:13 but we'll need to be careful that something else in the callstack has not elevated the privileges
14:53:17 rather, maybe it would be good, as markmcclain is saying, to re-use the user's original context to initiate the trust the first time
14:53:35 then rely on the trust from then on
14:53:41 doesn't this have implications for octavia too?
14:53:43 agreed
14:53:46 blogan: yes
14:53:50 blogan: Yes.
14:53:54 Same problem, really.
14:53:56 what about later, when you need to do maintenance on the LB, and need to re-fetch the cert. user context will be gone.
14:54:07 dougwig: but at that point the Trust will be set
14:54:15 so the service account has access to the user's Barbican data
14:54:22 ahh, i see. do trusts exist in keystone yet?
14:54:54 (or are they at the barbican level?)
14:54:59 Well, again, there is also the problem of the ticking time bomb if the user nukes his barbican data and Octavia / LBaaS needs to access it afterward.
14:55:02 dougwig: yes
14:55:07 dougwig: they are in keystone now
14:55:19 sbalukoff: at some point we can't protect them :/
14:55:20 https://wiki.openstack.org/wiki/Keystone/Trusts
14:55:20 sbalukoff, let's not open that can of worms
14:55:27 sbalukoff: if they go in and remove our trust... tough :/
14:55:34 #link https://wiki.openstack.org/wiki/Keystone/Trusts
14:55:49 We already discussed this, and I think the solution was to have Octavia store its own copy of the secrets.
14:55:50 :/
14:55:59 That's where I recall the discussion went last time.
14:56:03 not octavia, just haproxy.
14:56:14 but heck, i don't remember the final resolution.
14:56:14 sbalukoff: right, for Octavia that's the case
14:56:19 not for Neutron-Lbaas
14:56:23 dougwig: Right. Because vendors keep their own copy of the secrets anyway.
14:56:34 *Octavia* would copy the keys into its own Barbican store
14:56:35 Yep.
14:56:41 +1
14:56:47 but for Neutron-Lbaas, we're relying on the original
14:56:55 rm_work: True.
14:56:57 thus Consumer Registrartion
14:57:01 *Registration
14:57:23 anyway, I'll work up an email with a better overview and maybe some pictures / diagrams :P
14:57:28 since we are just about out of time
14:57:32 Fancy!
14:57:34 rm_work +1
14:57:47 +1
14:58:02 #action rm_work send writeup on barbican/trust stuff to ML
14:58:07 #topic Open discussion
14:58:29 open discussion for 2 minutes GO
14:58:39 Ok, going back to bed for a half hour. See you at 10:30, sballe!
14:58:45 hah sbalukoff
14:58:50 enjoy
14:58:51 sbalukoff, Perfect.
14:58:53 that's a great plan, too bad i have sprint planning <_<
14:58:53 rm_work: in your writeup, remember that we need to call out to barbican for every SSL negotiation. people loved that.
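[editor's note: the trust flow the group converged on -- bootstrap a trust once using the user's own token, persist only the trust ID, and re-authenticate through the trust for later maintenance such as dougwig's cert re-fetch case -- can be mocked up as below. Every name here is a toy stand-in for illustration; a real implementation would go through Keystone's v3 trusts API, not these classes.]

```python
class FakeKeystoneTrusts:
    """Toy stand-in for Keystone's v3 trusts endpoint."""

    def __init__(self):
        self._trusts = {}

    def create_trust(self, trustor_token, trustee_user, roles):
        """Called once, with the *user's* token, while their context exists."""
        trust_id = "trust-%d" % len(self._trusts)
        self._trusts[trust_id] = (trustor_token["user"], trustee_user, roles)
        return trust_id

    def authenticate_with_trust(self, trustee_user, trust_id):
        """Later (e.g. LB maintenance, after the user context is gone): the
        service re-auths via the trust and gets a token scoped to just the
        delegated roles -- no blanket access to every tenant's Barbican data."""
        trustor, trustee, roles = self._trusts[trust_id]
        if trustee != trustee_user:
            raise PermissionError("only the named trustee may use this trust")
        return {"user": trustor, "roles": roles}


# Usage sketch: set up once from the user's original request context,
# store only trust_id, reuse it afterwards.
keystone = FakeKeystoneTrusts()
user_token = {"user": "alice"}  # hypothetical original request context
trust_id = keystone.create_trust(user_token, "lbaas-service",
                                 ["key-manager:read"])
# ...much later, the user context is long gone, but the cert re-fetch
# still works through the persisted trust:
delegated = keystone.authenticate_with_trust("lbaas-service", trust_id)
```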
14:58:57 see you guys
14:59:07 bye
14:59:09 got a meeting to attend
14:59:13 bye
14:59:14 jk
14:59:15 later
14:59:20 dougwig: hahahaha no.
14:59:21 T_T
14:59:31 I remember that though >_<
14:59:38 \o
14:59:42 #endmeeting