20:00:15 #startmeeting Octavia
20:00:15 Meeting started Wed Mar 21 20:00:15 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:18 The meeting name has been set to 'octavia'
20:00:25 Hi folks
20:00:40 #topic Announcements
20:01:05 "S" series naming poll closes today - check your e-mail for your voting link
20:01:22 TC elections are coming up in April
20:01:34 o/
20:01:45 That is about all I have for announcements this week.
20:01:50 Any others?
20:02:12 summit talks?
20:02:25 also travel support is closing today
20:03:16 Sure, I have two talks scheduled for Vancouver: an Octavia project update (40-minute extended version) and I signed up for an on-boarding session for Octavia.
20:03:53 o/
20:04:00 cool
20:04:02 I have a talk on Octavia and k8s
20:04:06 do either of those actually require voting
20:04:13 or are they just ... set
20:04:16 they are all confirmed
20:04:19 cool.
20:04:24 hey i'm doing octavia on k8s :)
20:04:51 Mine are set by the foundation. They are not yet on the schedule; they will add them later
20:05:44 Any other announcements?
20:06:00 #topic Brief progress reports / bugs needing review
20:06:45 Ugh, trying to remember what I worked on. A bunch of gate related issues.
20:07:01 did we all...
20:07:05 *didn't
20:07:14 Oh, helped xgerman_ out with the proxy plugin gate job.
20:07:46 Today i am starting work on the tempest plugin again to address the comments folks left and the discussion we had at the PTG.
20:07:58 i made a little review thing with mine and german's listed https://etherpad.openstack.org/p/octavia-priority-reviews
20:08:07 having something like that can be useful for us
20:08:09 Hope to wrap that up today/tomorrow so you all can shoot more holes in it...
grin
20:08:18 if you want something reviewed, add it and i can look :)
20:08:26 was hoping i could get johnsom to help prioritize and add stuff too
20:08:43 rm_work should I add that to the channel topic like we do at the end of the cycle?
20:08:45 we usually do one of these before each release and it seems to be quite helpful for velocity
20:08:49 I think that'd be good
20:09:13 Ok, I will work on that after the meeting
20:09:17 it's sometimes hard for me to tell what i should be reviewing, across so many projects
20:09:23 and with a lot of stuff WIP
20:09:36 EVERYTHING! hahahha
20:10:11 Yeah, it really helps me with my review dashboard if folks use the workflow -1 for WIP patches
20:11:13 Once the tempest patch is straightened out, I'm on to provider driver work
20:11:50 I did start a neutron-lbaas to Octavia migration script one evening. It is really just a start. I think rm_work is going to work on it some
20:12:25 Any other updates?
20:12:40 I think I saw Nir had some rally stuff in flight, so that is good too
20:13:00 yup
20:13:03 still in the works
20:13:08 but will finalize this very soon
20:13:18 basically two patches are already up
20:13:18 Nice
20:13:27 1. add octavia python client support
20:13:38 2. port the existing scenario to use Octavia
20:13:44 which is mostly done
20:14:05 the 3rd patch will contain additional stuff (mostly CRUD for our LB resources)
20:14:42 I also know I'm behind on dashboard patch reviews. I hope to load that up and do some reviews on that stuff this week. Lots of good work going on there
20:15:04 oh, and if it interests anyone here, Rally is about to split into two code bases, so I ported my patches to rally-openstack
20:15:16 johnsom, I can help with those
20:15:39 Ah, interesting, so there will be an OpenStack-specific Rally and then a general-use one?
20:16:25 from what I understood from the Rally core team, they will have a generic base framework and additional code bases for plugins
20:16:39 That makes sense
20:16:52 yup
20:16:59 openstack-octavia-rally project? :)
20:17:04 haha
20:17:09 maybe rally-k8s
20:17:12 who knows :)
20:17:16 Yeah, if we need a repo for the plugin let me know
20:17:30 but anyhow, the split is still WIP
20:17:37 Ok
20:17:41 johnsom, I don't think we will, but will keep you posted
20:17:46 #topic Other OpenStack activities of note
20:17:54 OpenStack Common Healthcheck WSGI Middleware spec
20:18:00 #link https://review.openstack.org/#/c/531456
20:18:33 Our friend mugsie is proposing a common method to report service health
20:18:44 Interesting read, worth commenting on.
20:18:57 * mugsie will reply on that spec soon, I promise
20:19:21 I know our friends at F5 were interested in how we can expose the controller health in a useful way. This might be the answer
20:19:23 neat
20:19:57 Proposed "Extended Maintenance" policy for stable branches
20:20:05 #link https://review.openstack.org/#/c/548916/
20:20:32 Also of interest, proposals about how to handle extended maintenance for stable branches.
20:21:15 For those of you not running master with a few days lag....
20:21:46 Ok, on to the big topic....
20:21:55 rm_work, i think he pointed at you :D
20:21:56 #topic Octavia deleted status vs. 404
20:22:08 #link https://review.openstack.org/#/c/545493/
20:22:27 In my work on the tempest plugin and the proxy plugin gate I noticed we have a problem
20:22:53 Most services in OpenStack return a 404 when a query comes in for a deleted item.
20:23:16 The current Octavia API does not; it returns a record with the provisioning_status marked DELETED.
20:23:32 yeah ... i also just noticed that nova somehow accepts a --deleted param to show old deleted stuff, otherwise 404s on gets
20:23:37 This morning I confirmed that neutron-lbaas also returns a 404
20:23:55 So, we have a backward compatibility issue.
20:23:56 yep, my proxy-gate chokes on the DELETED
20:24:23 backport material?
20:24:27 So, I wrote up this patch, which switches it over to 404 and gives a path to having a --deleted flag.
20:24:40 Well, here is the part I need your input on....
20:25:03 cgoncalves: we are in a funny spot. We released the API and documented the DELETED behavior
20:25:07 We have now released two versions with the API doing the "DELETED" bit, even though the api-ref does show the 404
20:25:09 backporting would fix and break the API at the same time
20:25:16 yes.
20:25:36 Related: n-lbaas returns 404 instead of 403 (FORBIDDEN)
20:25:50 Actually, I don't think we documented the "DELETED" case. I haven't looked through the whole api-ref, but I know the 404s are there
20:26:13 xgerman_ I'm going to ignore the 403 thing for now. That is a neutron oddity
20:26:15 mmh, we need to make sure we don't *change* the API after the fact
20:26:39 johnsom: well, it breaks backward compatibility - but I doubt anyone was using that
20:27:01 xgerman_ Yeah, but let's focus on one topic at a time
20:27:29 ok, we always told people to use the Octavia API - we could change our recommendation to use the proxy if 100% compatibility is needed, and the Octavia API if you are willing to fix the two or three variations
20:27:48 Yeah, ok, in the api-ref we list DELETED as a possible provisioning_status, but not in any of the sections. Each section does list 404, however.
20:28:53 So, how do we want to handle this issue in the Octavia API?
20:29:05 mmh, so I am for consistency between services…
20:29:14 1. Consider it an API bug, fix, backport, beg for forgiveness in the release notes.
20:29:31 it has to be fixed either now or later. I'd say fix it now and backport to queens and perhaps also pike (if taken as a critical issue). existing deployments could eventually start observing a different behavior, yes...
20:29:52 2. Bump the API version. Likely a major bump as it's not necessarily backward compat
20:30:10 yeah...
20:30:13 I would say fix it now
20:30:14 3. ???
20:30:25 the pain will be less
20:30:32 we're about to have people switching over en masse soon
20:30:48 Yeah, I think we need to do it now, I'm just struggling with how...
20:30:48 my guess is relatively few have actually seen or would be affected by this
20:30:58 let's not do a 3.0 - people already are freaked out about 2.0
20:31:09 lol
20:31:13 xgerman_, good point
20:31:47 Yeah, I think the most pain would be with the libraries. I can of course fix openstacksdk, but what about gophercloud and openstack4j
20:32:20 Yeah, I really don't want to do 3.0 now. (though there are other result codes that neutron-lbaas used that are wrong IMO)
20:32:22 johnsom, what usually justifies an API minor version bump? bug fixes or just new features?
20:32:38 i think we just ... fix it and take the backlash
20:32:40 if there is any
20:33:00 terraform etc. wait for DELETED
20:33:07 Right, A.B.C: A is a breaking change to the API, B is new features but compat, C is bug fixes
20:33:07 hmm
20:33:21 errr
20:33:26 can we... make it a deployer option?
20:33:38 rm_work, please no :<
20:33:38 temporarily?
20:33:42 i mean
20:33:50 But we don't really have our API versioning story straight yet. Our discovery is broken
20:34:14 start it deprecated, but allow people time to flip it over
20:34:16 yep, and gophercloud is rewriting anyway
20:34:18 "you can flip this now, soon it will be flipped for you"
20:34:21 LOL, I just had a thought. It's bad, but a thought.... 404 with the DELETED body....
20:34:26 so if we sneak it in now they should be fine
20:34:28 LOL
20:34:31 ummmmmmmmmmm
20:34:33 yes?
20:34:37 I mean... why not?
20:34:45 though probably would still break the tools
20:34:53 because they'd see the status first is my guess
20:35:03 Yeah, I think it doesn't really help that much
20:35:09 +1
20:35:20 do we have the deleted=True option?
20:35:30 so we show deleted ones on request?
20:35:31 So, frankly I think people will be happy that it becomes the same as the other services.
20:35:39 yes
20:35:44 Also, our client already handles it like a 404
20:35:48 i think we may just need to cause some temporary breakage
20:35:56 to get to consistency
20:36:23 we are playing with credibility here. People don't like us breaking things IMHO
20:36:55 ha, people keep bringing up v1 so.... haters are going to hate
20:36:58 this would be us NOT breaking things IMO
20:37:08 because i don't think many people have switched yet
20:37:08 I am more about doing the right thing than people whining
20:37:12 to octavia
20:37:18 well, I have an install which relies on that feature
20:37:25 does it?
20:37:38 yep, both terraform and the k8s provider wait for DELETED
20:37:42 hmmm
20:37:48 can we fix those to work with BOTH
20:37:51 *first*
20:37:54 get patches in for them
20:37:57 and then do the switch
20:37:57 but it's Pike — so as long as we leave that alone I am +2
20:38:19 maybe once a patch lands to make them work both ways we can backport?
20:38:42 Oye, I feel like it should be backported all the way to Pike...
20:38:45 rm_work, if that will add a config option that would be a problem
20:38:49 to backport..
20:38:51 johnsom: +1
20:39:09 nmagnezi_: nah, that was a different thought
20:39:14 yes
20:39:19 so what if we do it in master
20:39:23 rm_work, oh, alright
20:39:25 then make terraform work with either
20:39:32 then once that merges we backport to pike
20:39:34 xgerman_ do you have the places in the repos for terraform and k8s that we can go do these patches?
20:40:04 I can find them — but my problem is that we are doing deploys and our version management isn't that great
20:40:05 Yeah, parallel effort this
20:40:59 so I'd like this change to come with a change Pike->Queens
20:41:07 just my 2ct
20:41:28 So let's go around the room and get your thoughts on the situation and how you think we should move forward.
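[Editor's note: the "fix those to work with BOTH" idea discussed above amounts to a client-side check that accepts either API behavior. A minimal sketch of such a check, assuming a generic HTTP client; the function names, the `loadbalancer` body shape, and the timing parameters are illustrative, not taken from terraform, the k8s provider, or any real SDK:]

```python
import time


def is_deleted(status_code, body):
    """Return True if a GET on a load balancer means it no longer exists.

    Accepts both behaviors discussed in the meeting:
    - the fixed behavior: HTTP 404 on a deleted object
    - the pre-fix behavior: HTTP 200 with provisioning_status DELETED
    """
    if status_code == 404:
        return True
    if status_code == 200 and body is not None:
        # Tolerate both a wrapped {"loadbalancer": {...}} and a bare body.
        lb = body.get("loadbalancer", body)
        return lb.get("provisioning_status") == "DELETED"
    return False


def wait_for_delete(get_lb, timeout=300, interval=5):
    """Poll get_lb() -> (status_code, body) until the delete completes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        code, body = get_lb()
        if is_deleted(code, body):
            return True
        time.sleep(interval)
    raise TimeoutError("load balancer was not deleted in time")
```

A tool patched this way keeps working before and after the API fix lands, which is what makes the "patch the consumers first, then switch, then backport" plan safe.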
20:42:04 It's a bad situation no matter what; we missed it while doing testing for Pike
20:42:33 No one wants to go first?
20:43:03 Do we wait and talk about this again next week?
20:43:10 I mean
20:43:12 I said my bit
20:43:57 fix it to 404. fix terraform to work with either and wait for them to release the fix. backport all the way to pike.
20:44:17 so did I. it's easier to communicate "beginning X you need this new terraform thing"
20:45:18 Personally, to me the api-ref is the spec. It lists 404 as a result for the calls. So this is a bug in my book. I would fix it, backport it, and be proactive fixing things we think might break.
20:45:26 yep
20:45:28 agree
20:45:43 if terraform starts breaking, people can look and see the changelog item and get the new release
20:46:00 johnsom: +1
20:46:09 johnsom, +2
20:46:27 I will overflow soon with all my +1s
20:46:34 I know people, and it's easier to tell them "for Queens you need the new terraform" than "oh, you did some stable Pike release and now everything is broken"
20:46:44 Ok. So please review the patch on master. We seem to all agree we can land that.
20:47:24 xgerman_ please paste some pointers in the channel so we can be aggressive at adding a fix. I will go check openstacksdk and fix if needed.
20:48:15 We should double check the OSC plugin too. Someone want to volunteer for that?
20:48:25 ok, will do
20:49:04 I am assuming the will-do is for the links and not the OSC plugin?
20:49:18 I will do the terraform fixes
20:49:30 and investigate the k8s ones
20:49:35 Thanks.
20:49:59 Ok, if no one has cycles for our client plugin I will take a look there too.
20:50:29 I will put an agenda item on the meeting for status and when we should start landing backports.
20:50:50 #topic Open Discussion
20:50:57 Other items today?
20:51:01 I have a question
20:51:05 about our tempest plugin
20:51:09 ok
20:51:12 specifically about https://github.com/openstack/octavia-tempest-plugin/blob/master/octavia_tempest_plugin/config.py#L79-L81
20:51:26 just wondering why we adopted a specific name for the role
20:51:29 I knew it!
20:51:38 cgoncalves, :D
20:51:53 I was struggling with this today
20:51:56 So, first part, that code is going away
20:52:13 #link https://review.openstack.org/543030
20:52:23 replaced with: https://review.openstack.org/#/c/543034/
20:53:08 The reason for the role is the RBAC
20:53:10 #link https://docs.openstack.org/octavia/latest/configuration/policy.html
20:53:30 So OpenStack is moving towards a richer RBAC scheme.
20:53:43 Currently it's either "ADMIN" or "OWNER" by project
20:53:52 aha
20:54:05 German Eichberger proposed openstack/neutron-lbaas master: Fix proxy extension for neutron RBAC https://review.openstack.org/554004
20:54:06 nova and octavia both implemented this new RBAC scheme
20:54:22 alright, so I guess TripleO should configure all the roles mentioned in https://docs.openstack.org/octavia/latest/configuration/policy.html regardless of tempest
20:54:31 +1
20:54:36 Where you need to have a role "member" to access the load-balancer service (or nova)
20:55:05 This is what is in as "default", but we provide a policy.json that allows you to set it back to the old way
20:55:22 #link https://github.com/openstack/octavia/tree/master/etc/policy
20:55:48 That said, there is a proposal out to officially align these across the services.
20:56:04 johnsom, thanks a lot. will surely read this.
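[Editor's note: the "set it back to the old way" override mentioned above is a plain oslo.policy JSON rules file placed in the Octavia API node's configuration. A rough sketch of what such an override can look like; the rule names here are illustrative from memory, and the shipped files under etc/policy in the Octavia tree (linked above) are authoritative:]

```json
{
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "load-balancer:read": "rule:admin_or_owner",
    "load-balancer:write": "rule:admin_or_owner"
}
```

With an override like this in place, the API falls back to the classic admin-or-project-owner check instead of requiring the richer load-balancer RBAC roles described on the policy docs page.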
20:56:31 johnsom, we came across this when we started to test TripleO-based deployments via the tempest plugin
20:56:34 #link https://review.openstack.org/#/c/523973/
20:56:40 nmagnezi_: https://github.com/openstack/openstack-ansible-os_octavia/blob/25f3446fabd92a74322495bd536696074306d01f/tasks/octavia_policy.yml
20:57:02 and since I was not aware of those RBAC-related roles, I tried to understand where this was coming from
20:57:27 That docs page is the source of truth there...
20:57:33 xgerman_, thanks!
20:57:43 johnsom, it's not always ;)
20:58:04 True
20:58:11 but we are getting better
20:58:15 indeed
20:58:32 api-ref is also the source of truth but... DELETED... :P
20:58:42 The "default Octavia policies" section is built out of the code, so it will stay accurate
20:59:09 Yeah, api-ref is the truth, our code lies
20:59:16 lol
20:59:20 haha
20:59:38 One minute left.
20:59:43 Thanks folks.
20:59:58 o/
21:00:00 o/
21:00:02 If you have other questions I will be around
21:00:07 #endmeeting