20:00:03 #startmeeting Octavia
20:00:04 Meeting started Wed Dec 5 20:00:03 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:07 The meeting name has been set to 'octavia'
20:00:09 Hi folks!
20:00:20 o/
20:00:35 o/
20:00:52 #topic Announcements
20:01:09 The only real announcement I have today is that the Berlin summit videos are now posted.
20:01:12 o/
20:01:19 #link https://www.openstack.org/videos/summits/berlin-2018
20:01:38 yeah - so you all can see what cgoncalves and I were up to
20:02:08 You all did a pretty good job. Thanks for presenting for us!
20:02:40 Any other announcements today?
20:02:44 you taught us well
20:02:55 +1
20:03:08 There is some discussion of moving projects to a different release model, but I don't think that impacts us for the most part.
20:03:43 Ok, moving on then...
20:03:50 #topic Brief progress reports / bugs needing review
20:04:18 o/ sorry i'm late
20:04:26 I have been working on flavors. My latest patch implements the core of flavors for the amphora driver. The first flavor "option" is for the topology.
20:04:58 I have one more patch to add some API stuff, then it's down to the CLI, Tempest plugin, and finally adding additional flavor options.
20:05:02 So, good progress.
20:05:38 I have also figured out the last zuulv3 incantation I needed to get zuulv3 native multi-node gates running. I should have that wrapped up this week.
20:06:06 great progress, thank you!
20:06:41 am still looking into nova-lxd integration for containerized amphora but am waiting on a nova upgrade before i proceed. expect to have something to report on that after kubecon next week
20:06:50 Short summary, you can't assign IPs for the hosts via devstack any longer. There is an ansible variable that tells you which IP the hosts got....
20:07:24 colin- Excellent, excited that someone has time to look at that again.
20:07:55 still very early days but am hopeful for it. would be really cool
20:08:03 Yes!
20:08:47 Any other updates? I know xgerman updated our testing web server to support UDP. This will enable tempest tests for the UDP work added in Rocky. Thanks!
20:09:12 Yep.
20:09:50 #topic CentOS 7 gate
20:10:11 I put a few gate related topic items on the agenda today. Starting with CentOS 7
20:10:42 To be frank, this gate has been unstable for a while and with the recent release of CentOS 7.6 it is broken again.
20:11:04 #vote non voting
20:11:05 I will throw shame towards the person that pulled packages out in a dot release.....
20:11:41 lol, xgerman is ahead of me, but yes, I would like to propose we drop the CentOS 7 gate back down to non-voting.
20:11:44 Comments?
20:11:46 afaik golang is still present in rhel 7.6 repo
20:12:27 If that is the case, why are the gate runs failing because it can't find it?
20:12:34 +1 until it gets fixed
20:12:54 gate runs centos, not rhel
20:13:04 Yup, stability should be a priority here. +1 from me as well
20:13:56 http://paste.openstack.org/show/736720/
20:14:02 Ok, that is a majority of the folks that raised their hand as attending today, so I will post a patch to drop it to non-voting.
20:14:26 oh, rhelosp-ceph-3.0-tools provides it
20:14:27 sounds reasonable yeah
20:14:29 Really the issues have been around the package repos and mirrors. This may be another one of those issues.
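(As a rough sketch of the change agreed above: dropping a job to non-voting in a Zuul project definition is typically a one-attribute tweak in the project's pipeline config. The file path and job name below are illustrative only, not the exact patch johnsom posted.)

# zuul.d/projects.yaml -- illustrative sketch, not the actual Octavia change
- project:
    check:
      jobs:
        - octavia-v2-dsvm-scenario-centos-7:   # illustrative job name
            voting: false    # job still runs and reports in check, but cannot block merges
    gate:
      jobs:
        - octavia-v2-dsvm-scenario   # non-voting jobs are removed from the gate pipeline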
20:14:35 now that it's a safe majority i'll chime in ;)
20:14:44 Lol
20:15:11 #topic Grenade gate
20:15:31 So the fun continues... The grenade gate mysteriously started failing recently.
20:15:33 I spent some time today checking what's going on there, no luck
20:15:51 My quick look at this was apache is doing something wrong and the API calls aren't making it to us.
20:16:43 Ok, thanks for the update. This is one I don't really want to make non-voting. At least until we can see why it is failing.
20:16:44 I see it in keystone but then it returns 404
20:17:04 http://logs.openstack.org/19/622319/3/check/octavia-grenade/3ee64ee/logs/apache_config/octavia-wsgi.conf.txt.gz
20:17:26 Yeah, that is a pretty simple file...
20:18:08 I will try to spend some time this afternoon debugging this. Any assistance is very welcome.
20:18:44 It is an important gate to show we can do upgrades...
20:18:47 Let me know how I can help
20:19:00 find the root cause :)
20:19:20 xgerman Another set of eyes would be welcome
20:19:22 #link http://logs.openstack.org/38/617838/5/check/octavia-grenade/3479905/
20:19:46 This is a simple patch that is failing the grenade gate (all of them are now, so not likely this patch)
20:20:17 Thanks folks for taking a look at that.
20:20:28 #topic Multi-node gate(s) and scenario gates
20:20:50 Ok, next on the agenda, since I have multi-node gates running I wanted to run an idea by you all.
20:21:08 We have a lot of gates..... (nlbaas retirement is coming!)
20:21:51 Since these multi-node gates will run the scenario test suite, how do you feel about dropping our current two scenario gates and instead using the multi-node version as our primary scenario tests?
20:22:56 Or do folks see value in the single node scenario gates?
20:23:06 Just throwing it out as an idea
20:23:40 which two scenario gates? octavia-v2-dsvm-scenario and octavia-dsvm-scenario?
20:23:59 or move octavia-v2-dsvm-scenario{,-py3} to multi-node?
20:24:06 octavia-v2-dsvm-scenario and octavia-v2-dsvm-py35-scenario
20:24:19 Well a multi-node is more true to life. The question is whether or not we have something to gain by keeping a single node alongside it
20:25:02 Yeah, the only downside I see is multi-node has more moving parts, so could be more likely to have failures not related to the patch being tested....
20:25:41 right. can we make it non-voting for a while and then decide based on success/failure rate?
20:25:48 I assume we'll start with it as non-voting so we can get an indication of stability
20:25:56 Just trying to think about the longer term
20:26:05 Yep
20:26:42 I was just trying to reduce the number of duplicate runs of the same tests and save some precious nodepool instances.
20:26:53 * johnsom Glares at triple-o's nodepool usage
20:27:00 haha
20:28:12 tripleo fine folks doing fine work
20:28:13 Ok, I will simply replace the current (broken) zuul v2 octavia-v1 gates with non-voting zuul v3 octavia-v2 gates for now and we can reconsider later.
20:28:19 So I think everyone will agree that multi node tests bring a ton of value. Maybe we can just start and see how it plays out? (I know, we'll consume a lot more nodes in the meantime)
20:28:43 I wasn't criticizing triple-o folks, just the heavy nodepool usage....
20:29:00 yes but we could also run tests faster and in parallel
20:29:32 johnsom, I know ;)
20:30:06 Yeah, right now we have the tempest concurrency capped at 2. With the multi-node we might be able to raise that.
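(For context on that cap, a minimal sketch of where such a setting usually lives, assuming the job inherits from the devstack-tempest base job and its tempest_concurrency variable; the surrounding job attributes are illustrative, the real definition is the zuul.d/jobs.yaml linked just below.)

# zuul.d/jobs.yaml -- sketch, not the exact upstream definition
- job:
    name: octavia-v2-dsvm-scenario
    parent: devstack-tempest     # assumed parent providing the tempest_concurrency knob
    vars:
      tempest_concurrency: 2     # the current cap; a multi-node layout might allow raising it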
20:30:07 ok, so we're dropping octavia-v1 jobs \o/
20:30:26 of course, once n-lbaas gets removed
20:30:27 johnsom, 2? I think at 1
20:31:06 #link https://github.com/openstack/octavia-tempest-plugin/blob/master/zuul.d/jobs.yaml#L132
20:31:12 oh, 2 yes
20:31:38 We could also be greedy and spin up a third instance with nova on it...
20:32:21 I wouldn't call that greedy… OSA is no light foot either
20:32:23 * johnsom Hears maniacal laughter from somewhere....
20:32:27 Merged openstack/neutron-lbaas master: use neutron-lib for _model_query https://review.openstack.org/617782
20:32:46 Oh so very true...
20:32:53 should we also suffix py2-based jobs with -py2 and drop the -py3 suffix? make python3 officially first class citizen
20:33:02 Ok, I think I have the answer/path forward I was looking for.
20:33:09 k
20:33:33 cgoncalves Yes. I can do that.
20:33:52 #topic Open Discussion
20:34:05 We should also decide how/when we do bionic....
20:34:21 Do we want parallel xenial and bionic for a bit, or just make the jump?
20:34:32 I was typing that but decided not to put more burn on you
20:34:57 parallel with bionic non-voting
20:35:09 *burden
20:35:09 We are not yet "bionic" native for the amp, but the compatibility mode seems to work fine.
20:36:23 Well, that is why I get the lousy thankless title of PTL.... Grin. Don't even get the t-shirt. lol
20:36:35 noted
20:36:49 Or put another way. We can decide we want to do it, and it will get done at some point in the future*.
20:36:53 * Maybe
20:37:01 johnsom: could be worse… TC
20:37:03 grin
20:37:14 TC gets catering
20:37:24 lol
20:37:41 every time I hang with them they go to food trucks
20:37:48 Oh, I mean, great title with a bunch of perks. Please sign up
20:38:09 Sorry, any topics for open discussion today?
20:38:17 been there, done that :-)
20:38:25 Q: why can't I find octavia-v2-dsvm-scenario-ubuntu-bionic in openstack health page?
20:38:33 I wanted to check failure rate
20:39:27 Probably because we don't have one
20:39:38 oh i watched the berlin project update btw, nice job cgoncalves and xgerman (and slide composer johnsom )
20:39:50 looking forward to the stein stuff after seeing it bundled in that way
20:39:52 oh, maybe it only lists gate jobs
20:39:57 oh, we do
20:40:07 colin-, thanks!
20:40:20 #link http://zuul.openstack.org/builds?project=openstack%2Foctavia&job_name=octavia-v2-dsvm-scenario-ubuntu-bionic
20:40:45 I was looking at http://status.openstack.org/openstack-health/#/?searchProject=octavia-v2-dsvm-scenario&groupKey=build_name
20:41:06 You need to set the drop down to job instead of project
20:41:31 I have &groupKey=build_name which is job
20:42:31 anyway, bionic looks mostly green
20:42:38 Seems ok-ish. I need to look at that and set up bionic-on-bionic. I think that is amp image only right now
20:43:35 Yeah, I have another patch I was playing with to figure out what all was needed for zuul to do a bionic nodepool instance
20:43:41 I got it working at the PTG
20:43:49 Just need to clean it up
20:45:05 cgoncalves BTW, I saw your comment on the multi-node patch. I learned (the hard way) that if you override anything in a subsection, it *replaces* the whole section in zuul land, so I have to pull in a bunch of vars from parent jobs to make it work.
20:46:03 there's no zuul_extra_vars or alike option? :(
20:46:14 The only part that technically isn't required in our repo right now is the nodeset, but I think there is value to just having it there for future layouts and clarity.
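(For illustration, a minimal Zuul nodeset of the kind a two-node job would reference; the nodeset name, node labels, and the group are assumptions for the sketch, not the actual definition in the multi-node patch under review.)

# sketch of a two-node nodeset kept in-repo for clarity
- nodeset:
    name: octavia-two-node           # hypothetical name
    nodes:
      - name: controller
        label: ubuntu-xenial
      - name: controller2
        label: ubuntu-xenial
    groups:
      - name: octavia-controller     # hypothetical group, along the lines floated in the next line
        nodes:
          - controller
          - controller2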
I will probably create an octavia-controller group
20:46:33 ok
20:47:01 I thought it was supposed to just be additive and override if the same name matches, but that override is at the group level, not the variable level... sigh.
20:47:46 So "host-vars: controller: devstack_localrc:" ends up replacing all of the "devstack_localrc:" vars from the parent
20:48:00 A bummer.
20:48:28 However, that said, you are welcome to play around with it and find a better way. There might be one. I'm just happy it works now.
20:49:26 Other items today?
20:50:37 Oh, I will be taking a work day off on Friday, so will be around with limited bandwidth.
20:50:50 Ok, thanks folks. Have a great week!
20:50:59 o/
20:51:03 #endmeeting
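(To illustrate the override behaviour johnsom describes above, as he observed it and without claiming it holds for every Zuul version: once a child job sets anything under host-vars: controller: devstack_localrc:, the parent's devstack_localrc entries have to be repeated or they are lost. All job names and variable values below are illustrative.)

# sketch of a child job repeating parent devstack_localrc entries
- job:
    name: octavia-v2-dsvm-scenario-two-node   # hypothetical child job
    parent: octavia-dsvm-base                 # hypothetical parent job
    host-vars:
      controller:
        devstack_localrc:
          # entries copied down from the parent; omitting them here would drop them
          # from the effective config, since the whole mapping is replaced, not merged
          DATABASE_PASSWORD: secretdatabase
          # the one new setting this child job actually wanted to add
          OCTAVIA_NODE: main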