20:00:06 #startmeeting Octavia
20:00:11 Meeting started Wed Mar 8 20:00:06 2017 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:15 The meeting name has been set to 'octavia'
20:00:25 Hi folks
20:00:35 o/
20:00:38 o/
20:00:55 #topic Announcements
20:00:57 o/
20:01:05 Octavia is now on the cycle-with-milestones release cycle
20:01:13 yeah
20:01:27 This means we will do Pike-1, 2, 3, and rc releases now.
20:02:03 Mostly this will help with end-of-cycle tasks like setting stable/pike for the agent, etc.
20:02:51 The other announcement I have is that tomorrow there is a meeting to talk about an alternative act/act method
20:03:07 17:00 UTC Thursday March 9th
20:03:21 #link http://bit.ly/2lTz7wK
20:03:35 It's a webex-style meeting
20:03:58 any related readings to do before that meeting?
20:04:17 No, I think it is a brainstorming kind of meeting where a spec will come out of it.
20:04:37 They are talking about using L3 anycast for act/act
20:04:53 That is about all I know.
20:05:06 may I ask, just to get some context on this, why are we discussing an alternative?
20:05:22 the current act/act work is going to get dropped?
20:05:24 It is just a proposal from the community
20:05:40 No, I expect it would be an alternate driver
20:05:53 ack. thanks.
20:06:24 yeah, we will keep the current act/act work
20:06:25 All good questions we can ask...
20:06:37 :-)
20:06:57 I don't see any reason to drop the work that has been done for the current specs.
20:07:22 +1
20:07:30 aye. just asked to get the context of things. I'm not involved in this effort so just wanted to know :-)
20:07:32 quite the opposite
20:07:40 Anyway, please join if you can. Should be an interesting conversation
20:08:42 Shortened urls are blocked on the wiki, so sharing it here...
20:08:55 Any other announcements?
20:09:18 Other than our gates being slightly broken at the moment....
20:10:19 which of the gates? anyone looking into this?
20:10:33 Yeah, rm_work and I are working on it now
20:10:45 T_T
20:10:56 let me know if I can help with something here
20:11:05 #topic Brief progress reports / bugs needing review
20:11:28 Lots of stuff needing reviews, so help out if you can.
20:12:09 I have been working on a bunch of cleanup stuff and write-ups about the PTG, as well as reviewing the load balancer API
20:12:22 I hope to get back to the api-ref today
20:13:08 No other updates?
20:13:32 #topic Team mascot
20:13:36 #link https://etherpad.openstack.org/p/octavia-mascot
20:14:37 It doesn't look like we have new proposals, so we should move on to voting.
20:15:12 Please add your name to the roll call list next to your color and +1 next to the mascot you like
20:15:44 This way we won't have ballot stuffing.... grin
20:16:57 #topic Status of Active/Active development
20:17:31 Are any of the developers on those patches here?
20:17:58 i think perelman is not here atm
20:18:35 well, we have made some suggestions and haven't seen them being picked up
20:18:46 I wanted to get an update on that work. We have put up review comments, but I haven't seen any updates to those patches beyond rebases
20:18:54 +1
20:19:13 so we are wondering what is going on and how we can help
20:19:31 Yes, and whether they will have resources to work on this for Pike.
20:20:01 I was hoping we could iterate on those pretty quickly and get that merged early in Pike.
20:20:13 +2
20:20:51 Ok, so hopefully they will read the minutes and send an update. I want to know if we need to find others to take over those patches.
20:21:13 nmagnezi any idea how we can get in touch with them or make them get in touch with us?
20:21:30 i really don't know, sorry :<
20:21:43 k, thanks
20:21:45 I can try e-mailing directly I guess.
20:21:58 Similar topic:
20:22:07 #topic Status of flavors spec for Octavia
20:22:34 #link https://review.openstack.org/392485
20:22:52 This spec was posted, there was some discussion, but I haven't seen updates.
20:23:05 I don't see Evgeny here today either
20:23:40 Is there still interest in working on this in Pike?
20:24:05 No Kobis either...
20:24:39 ML?
20:25:00 Bummer. This is one I wanted to make some traction on, as the provider work will have flavors implications.
20:25:05 Yeah, probably ML time
20:25:23 #topic Discussion about how to migrate loadbalancers from legacy haproxy in namespace to run under Octavia
20:25:27 :-(
20:25:43 that one would be me :)
20:25:51 nmagnezi Wanted to talk about migrating LBs
20:25:56 yup
20:26:01 I spoke with rm_work and johnsom about this subject already, and I would like to raise it here for more opinions.
20:26:10 so
20:26:12 Many operators currently use lbaasv2 with the legacy haproxy in namespace.
20:26:14 It sounds like migrating between drivers, i.e. from the netns driver to the octavia driver.
20:26:29 I understand we currently don't have any tool to migrate existing loadbalancers from the old driver to Octavia, but from an operator's standpoint, I think this is an important option to have.
20:26:35 ok, so not netns -> new netns
20:26:43 So, we might want to capture such a thing in a spec (?) but I would like to know what people think about this and if they have any specific idea as to how such a thing should be implemented.
20:26:50 xgerman, yup
20:26:55 I do plan to have a method to migrate nlbaas netns lbs to octavia netns lbs
20:26:59 Please keep in mind that such a migration would be done in a production env. Meaning: 1. It should have the option to roll back in case something goes wrong. 2. Minimal downtime, if possible.
20:27:36 johnsom, so maybe it makes more sense to have a two-step migration
20:27:52 the one you mentioned should come as the first step
20:27:54 Yeah, across drivers there pretty much has to be some downtime, as the data flows will change
20:28:16 nmagnezi Yes, likely.
20:28:42 yeah, vips change often as well
20:28:54 My first question would be to investigate if we can move the vip port to preserve the IP address
20:29:19 you can pass in the port after you cleared it, I guess…
20:29:46 mmm, yeah that is indeed problematic. i guess if operators use floating ips that won't be an issue for them, but we cannot assume everyone does that
20:29:52 We would probably need to create a flow that spins up the amp, plugs the backends and configures, but is able to move the VIP port at the last minute.
20:30:04 yup
20:30:43 also, how big a problem is it… do operators run 100 LBs they could recreate in a few hours, or more like 1000s?
20:30:46 can two drivers co-exist in one deployment at the same time? just for the migration process?
20:30:56 Maybe a first step would be just removing the VIP port and passing it into octavia for an lb create. Then optimize later
20:31:12 nmagnezi yes, they can
20:31:22 nmagnezi Yes, we will have multiple live drivers
20:31:36 You select with the "provider" parameter at LB create time
20:32:26 we just can't move one LB from one provider to another
20:32:36 great. so as you said we can spin amps and configure them
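
A minimal sketch of the "first step" floated above: detach the existing VIP port from the namespace load balancer, then pass the same port into an LB create with the "provider" parameter selecting the octavia driver, so the VIP IP address survives. The endpoints, token handling, and especially the assumption that the create call accepts an existing vip_port_id are illustrative only; whether the port can actually be unbound and reused this way is exactly the investigation discussed below.

    # Hypothetical sketch only; endpoints, token, and field support are
    # assumptions, not the confirmed migration procedure.
    import requests

    NEUTRON = "http://controller:9696/v2.0"      # assumed neutron endpoint
    LBAAS = "http://controller:9696/v2.0/lbaas"  # assumed LBaaS v2 endpoint
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}    # hypothetical admin token

    def migrate_vip_port(vip_port_id, vip_subnet_id):
        # Unbind the port from the qlbaas namespace device so it can be
        # reused.  Whether this is sufficient (or even allowed) is part
        # of the "gotchas" investigation discussed in the meeting.
        requests.put(
            "%s/ports/%s" % (NEUTRON, vip_port_id),
            headers=HEADERS,
            json={"port": {"device_id": "", "device_owner": ""}},
        ).raise_for_status()

        # Create the replacement LB under the octavia provider, handing
        # in the freed port so the VIP IP is preserved.  Assumes the
        # create call accepts vip_port_id.
        resp = requests.post(
            "%s/loadbalancers" % LBAAS,
            headers=HEADERS,
            json={"loadbalancer": {
                "name": "migrated-lb",
                "provider": "octavia",
                "vip_port_id": vip_port_id,
                "vip_subnet_id": vip_subnet_id,
            }},
        )
        resp.raise_for_status()
        return resp.json()
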
20:32:42 Right, it would be a process
20:33:20 so i guess the main question here is what would be the best practice to migrate the vip
20:33:55 can we clear a vip without actually deleting the existing loadbalancer?
20:34:12 Right, start simple, look at how to detach from the old netns and pass in that port_ID to octavia on lb create...
20:34:40 but then you still need to somehow gather listeners/members/pools
20:34:42 I think so, but that would be the investigation. Can you, how, what are the gotchas?
20:35:19 xgerman, those we can read directly from the database i guess, no?
20:35:26 Right, it would need to mine the DB (bad idea) or do detail gets via the API
20:35:50 yep
20:35:59 johnsom, why is it a bad idea to mine the db?
20:36:07 performance?
20:36:13 schemas change
20:36:27 It's not a "stable" interface like the API is.
20:36:32 it shouldn't change if i just read it, no?
20:36:53 Is this something you are thinking about adding to the Octavia API, or would it be a standalone tool?
20:38:00 i was thinking of a standalone tool, but if we can get this into the Octavia API it should be even better (and that removes the db mining)
20:38:11 nmagnezi for example, the column "admin_state_up" in the DB could be renamed to "enabled", whereas the API has to keep a stable name for it
20:38:13 I can see adding something which dumps all the LB and related info to later feed to a single create, as a "backup" to the API
20:39:11 It will need octavia service-account-level permissions to be able to access the VIP ports, etc.
20:40:08 well, we should only expect admins to do such actions, so is it a problem from that standpoint?
20:40:38 well, if operators would like their users to self-migrate when it is convenient for them…
20:40:39 No, just part of the thought process
20:41:07 I think it would be an interesting API really.
20:41:27 That way it could be something you allow your users to do.
20:42:25 that is an interesting option. also, it makes it far more complex to implement :D
20:42:47 but indeed it sounds like a better option
20:42:55 So, yeah, interesting idea. Are you able to investigate and then propose a spec?
20:43:09 also xgerman made a good point about the "dump all LB" idea
20:43:46 johnsom, yes. it might take me a couple of weeks to get to this, but it is something I can look into
20:44:14 Yeah, a backup/restore from the api perspective. Interesting idea. I would track that as a separate spec
20:44:49 might be all you need if they can live with downtime
20:44:49 where do we expect to keep the backup?
20:44:59 sorry if it is a dumb question.
20:45:04 the user would ask for it and move it to a place they like
20:45:12 Something that exports in "single call create" format
20:45:42 we have a single call create implemented already?
20:45:46 Yes
20:45:55 nice!
20:46:03 i was not aware of this. good to know.
20:46:11 One call that creates all of the parts of the LB (pools, members, etc.) in one API call
20:46:38 do we also have a single call delete (cascade delete, was it called?)
20:46:57 We do not, however, allow multiple LBs in one call. Maybe in the future
20:47:30 #link https://docs.openstack.org/developer/octavia/api/octaviaapi.html#create-fully-populated-load-balancer
20:47:48 The old docs. I will be adding that to the new API-REF soon
20:47:59 ack.
20:48:33 Ok, any other questions about that?
20:48:38 okay, unless anyone has anything more to add, I have no additional questions. I will capture this in a spec.
20:48:52 no. thank you for your time :)
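
For readers unfamiliar with the "single call create" linked above, here is a rough illustration of the shape of a fully populated create body: one request carrying the whole listener/pool/member tree. Field names are paraphrased loosely from the old v1 doc and may not match the API exactly; treat this as a shape sketch, not a schema reference.

    # Illustrative only; consult the linked doc for the real schema.
    fully_populated_lb = {
        "name": "example-lb",
        "vip": {"subnet_id": "SUBNET_UUID"},  # or a reused VIP port
        "listeners": [{
            "name": "http-listener",
            "protocol": "HTTP",
            "protocol_port": 80,
            "default_pool": {
                "protocol": "HTTP",
                "lb_algorithm": "ROUND_ROBIN",
                "health_monitor": {
                    "type": "HTTP", "delay": 5,
                    "timeout": 3, "max_retries": 3,
                },
                "members": [
                    {"ip_address": "10.0.0.11", "protocol_port": 80},
                    {"ip_address": "10.0.0.12", "protocol_port": 80},
                ],
            },
        }],
    }

A single POST of a body like this creates every object at once, which is why a migration or backup tool that can emit this format reduces the restore side to one API call.
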
20:48:55 we should probably gather interest on the ML
20:48:59 #idea Allow Octavia driver migration of load balancers
20:49:04 so we don't spend weeks on it when most people just run 10 LBs
20:49:42 #idea Add a way to export your load balancer configurations in "single call create" format
20:49:46 it's easy to gold-plate such things
20:50:02 I never use those tags, so figured I would try them out.... grin
20:50:06 :-)
20:50:12 :)
20:50:22 I also think if you use ns you should be ok with downtime ;-)
20:50:35 True, those are "icing on the cake" features. We have some baking left to do on the base....
20:50:47 xgerman, lol
20:51:08 A true statement....
20:51:17 i know :)
20:51:46 Ok, any other topics today?
20:52:12 oh, we have a spec for qos
20:52:34 Oh, I spoke with those folks at the PTG.
20:52:35 #link https://review.openstack.org/#/c/441912/
20:52:38 I need to read that.
20:52:45 I gave them some comments
20:52:51 but yes, please read…
20:53:40 Yeah, it has been so busy since the PTG. I am still catching up. (The summary e-mails to the mailing list were a bit late, but they went out...)
20:54:24 I commented too much on CCF (see https://review.openstack.org/#/c/333993/11/specs/pike/common-classification-framework.rst) so they pulled me in
20:54:38 yup. highly detailed! I still need to read part 2 :)
20:55:12 Glad to know someone read it... Grin
20:55:25 I not only read it but forwarded it ;-)
20:55:39 spreading the gospel
20:55:45 hah
20:55:47 same here
20:55:47 :)
20:55:51 Nice. Fire hose style....
20:56:13 Ok, if there is nothing else I will end the meeting.
20:56:36 Thanks for the discussion today. Good stuff. Don't forget to join the webex tomorrow about act/act if you can.
20:56:54 #endmeeting
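
A sketch of the second #idea recorded above: export an existing LB in "single call create" format by walking the API with detail GETs rather than mining the database, which the discussion rules out as an unstable interface. The endpoint paths, response shapes, and nesting here are all assumptions for illustration.

    # Hypothetical export tool; paths and payload shapes are assumed.
    import json
    import requests

    def export_lb(api_base, token, lb_id):
        headers = {"X-Auth-Token": token}

        def get(path):
            resp = requests.get(api_base + path, headers=headers)
            resp.raise_for_status()
            return resp.json()

        lb = get("/loadbalancers/%s" % lb_id)
        # Inflate each nested level from its detail endpoint; a real
        # tool would do the same for pools, members, and health
        # monitors to build the full single-call-create tree.
        lb["listeners"] = [
            get("/listeners/%s" % listener["id"])
            for listener in lb.get("listeners", [])
        ]
        # Dumped to a file, this doubles as the "backup" artifact the
        # user keeps wherever they like, per the discussion above.
        return json.dumps(lb, indent=2)
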