20:00:02 #startmeeting Octavia
20:00:03 Meeting started Wed Mar 14 20:00:02 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:07 The meeting name has been set to 'octavia'
20:00:18 Hi folks!
20:00:25 o/
20:00:26 hi
20:00:30 Let's see how many people show up after the DST change
20:00:32 hi
20:00:37 o/
20:00:51 hi
20:00:52 * rm_work uses the strategy of just paying attention to the channel and having no idea when the meeting will actually start
20:00:58 Not bad
20:01:16 That is a bonus of having the meeting in channel
20:01:19 #topic Announcements
20:01:22 yeah i love it
20:01:32 Queens Released!
20:01:43 In case you missed it, Queens is out the door.
20:01:55 Thank you all for your contributions to our Queens release
20:02:44 It seems we had a hiccup with some global requirements, but we will talk about that later in the agenda.
20:02:54 Otherwise it's a pretty good release.
20:03:24 In case you were not able to make the Rocky PTG, I attempted to take notes in the etherpad
20:03:31 #link https://etherpad.openstack.org/p/octavia-ptg-rocky
20:03:49 Once I dig out a bit more I will try to send out a summary e-mail.
20:04:21 Also, the naming for the "S" series is out for a vote. Check your OpenStack e-mail for your voting link.
20:04:37 The theme is around Berlin as that is where the summit will be.
20:05:12 I still have a stack of windows of e-mails I need to read, so that is about all of the announcements I have today. Anything I missed?
20:05:52 #topic Brief progress reports / bugs needing review
20:06:47 I am still getting caught up after travelling for two weeks. Mostly I have been working on gate fixes, catching up on e-mails, expense reports, etc.
20:07:11 yeah the gate currently is ... :/
20:07:21 Also catching up on reviews. There was a lot of work done while I was in Ireland.
20:07:30 i have a list of patches i'd like eyes on, i guess i can start posting
20:07:53 I have also started to clean out the neutron-lbaas patches. Some had not been touched in two years, so very clearly needed to be abandoned.
20:08:41 https://review.openstack.org/549263 and https://review.openstack.org/548989 and https://review.openstack.org/550303
20:09:05 cgoncalves I see you had some stuff in the agenda, feel free to share here.
20:09:05 and https://review.openstack.org/552641 just needs a +2/+A once the gate fixes merge (don't do it yet)
20:10:36 I wanted to share that we have recently faced some gate issues that led us to migrate from testr/ostestr to stestr, which is the new tool for running tests in OpenStack. testr is not maintained and ostestr was a wrapper around it; stestr is a fork of testr
20:11:14 i feel like we've done this about 3 different times for 4 different projects <_<
20:11:16 Yep, cool. Funny that we migrated to ostestr like a year ago....
20:11:16 octavia and neutron-lbaas got migrated already. we need to do the same now for the client, dashboard and tempest projects
20:12:08 My plan is to address some comments on the tempest plugin patch and update it for the PTG discussions.
20:12:08 we also faced some other issues, but those turned out to be because of running jobs in parallel
20:12:18 Then I want to focus on the driver code
20:13:01 the other item I put on the agenda was grenade: https://review.openstack.org/#/c/549654/
20:13:30 the patch has a depends-on btw
20:13:36 Yes, cool stuff! I haven't had a chance to look at it, but I'm excited that we are working towards declaring upgradability
20:14:00 the grenade job now verifies successful upgrading from Queens to master with no dataplane downtime
20:14:35 oh, we also started looking at performance/scale
20:14:44 Nir has started this with a patch submitted to Rally: https://review.openstack.org/#/c/551024/
20:14:48 Nice! I am interested to read it and learn about the grenade jobs.
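[Editorial sketch: the testr/ostestr-to-stestr migration mentioned above is, for OpenStack projects of that era, mostly a small tox.ini change. The snippet below is illustrative only, not the actual Octavia patch; real migrations also replace .testr.conf with a .stestr.conf file.]

```ini
# Before: ostestr, a wrapper around the unmaintained testr
# [testenv]
# commands = ostestr --regex '{posargs}'

# After: stestr invoked directly
[testenv]
deps = stestr
commands = stestr run {posargs}
```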
20:15:29 we have got baremetal machines internally and we plan to have data by the last week of March
20:15:49 Nice
20:16:36 #topic Other OpenStack activities of note
20:16:55 A few things are falling out of the PTG discussions. This is the first one I have had time to read.
20:17:02 #link http://lists.openstack.org/pipermail/openstack-dev/2018-March/128175.html
20:17:02 also... (sorry!) soon Octavia will work when the firewall driver is set to openvswitch, which until now would fail at load balancer creation
20:17:21 Ah, so there is a fix for OVSFW?
20:17:32 https://review.openstack.org/#/c/550421/
20:17:57 https://review.openstack.org/#/c/550431/ validates the patch
20:18:22 Along those lines, there is another issue in neutron DVR that can cause us problems:
20:18:24 #link https://bugs.launchpad.net/neutron/+bug/1753434
20:18:24 Launchpad bug 1753434 in neutron "Unbound ports floating ip not working with address scopes in DVR HA" [Undecided,Confirmed]
20:18:29 In case anyone is using DVR
20:19:21 My item of note above was that there is work on a new oslo library for quotas. I looked for this when I was doing the quota work for Octavia. So, this is good stuff.
20:19:41 I will try to highlight the others next week once I get through my stack of stuff to read.
20:20:37 #topic Open Discussion
20:21:18 I added "Gate with lower-requirements.txt and workflow to ensure dependencies bumped in requirements.txt (i.e. a prereq before merging patches?)"
20:21:22 cgoncalves put an item on the agenda about the new lower-constraints requirements work that is starting
20:21:56 we have recently discovered that our requirements.txt is a bit behind what our actual requirements are
20:22:01 This is what I came up with after looking into the issues we had with the two requirements in Queens
20:22:18 right
20:22:34 I would like to discuss ways we could improve this situation
20:22:38 This lower-constraints file seems to be super new and, talking with the requirements team, we might be the first to use it.
20:23:07 well, i still hold that this is a symptom of the way g-r works in openstack, and that we got stuck in kind of a hard spot
20:23:23 My thought is to set up a gate using this lower-constraints file and run through the py27 and functional tests with no-op.
20:23:27 in our dev envs and at the gate we may not face such issues because we have recent lib versions installed, so we don't detect that requirements.txt gets outdated when we modify code
20:23:31 yeah, and then there is my unique problem with privsep and the broken msgpack
20:23:42 and that in general, things need to be packaged/deployed using the same stuff we use in testing, which is to say "upper-constraints.txt"
20:23:45 Agreed, it is an oddity in the requirements setup
20:24:17 johnsom: functional and no-op are enough?
20:25:26 cgoncalves, I think so. Do you think we need a full dsvm?
20:25:31 I don't know, therefore my question :)
20:25:53 one other idea would be using our requirements.txt
20:25:56 It would have caught these two issues
20:26:18 Umm, we are using our requirements.txt....
20:26:23 yes
20:26:28 requirements.txt is all >=
20:26:43 the issue is that if you don't use u-c values for packages when packaging
20:26:45 and we use the global requirements indirectly in the gates
20:26:49 Right, it assumes to pull whatever the latest is
20:26:51 and just guess at "something that matches" ...
20:26:53 well, but that didn't prevent us from shipping a kind-of broken octavia
20:27:04 cgoncalves: IMO it isn't broken
20:27:17 when I deployed / built images for deploy, i followed the guidelines, which are to use u-c
20:27:21 and everything was fine
20:27:36 u-c details explicitly which packages are used in testing
20:27:52 rm_work: if you clone the octavia repo only and follow the docs you probably end up installing from requirements.txt
20:28:04 and are therefore the packages that should be used in deployments
20:28:05 you also end up installing neutron-lbaas
20:28:16 i wouldn't use our docs as a good example of what to do
20:28:16 Right, but it is valid that we should be making sure g-r gets updated to our bare-minimum requirements. Thus why I am proposing a gate that runs with lower-constraints
20:28:23 see the global openstack docs
20:28:35 ok
20:28:38 johnsom: +1
20:28:56 cgoncalves Yes, installing with requirements.txt will work correctly and not have an issue
20:29:17 johnsom: if we proceed with that, would we block patches from being merged before bumping in g-r?
20:29:55 johnsom: in queens you will have issues. their names are: jinja2 and pyOpenSSL :)
20:29:56 Following the docs will lead to a successful install.
20:30:34 cgoncalves no. it will go pull the latest for those two packages, which will succeed
20:31:03 meh, assuming that you pull the latest, yes
20:31:14 cgoncalves: it WILL pull latest, if you pass requirements.txt to pip
20:31:19 that's what requirements.txt *has*
20:31:20 As for the lower-constraints gate, yes, it would block patches from merging until g-r is updated.
20:31:21 >= 20:31:28 if the system already has minimum required and turns out to not be good enough, then no 20:32:00 ah, if you don't use -U and you already have random system python packages installed, then yes, ugh 20:32:01 johnsom: +1 for blocking patches 20:32:08 system python needs to DIAF 20:32:20 *system python environment 20:32:24 * cgoncalves looks up for DIAF 20:32:39 die in a fire 20:32:49 heh :) 20:32:56 I kind of agree that packaged python modules tend to lead to nothing but trouble 20:33:04 +1 20:33:21 But anyway, we have rat holed a bit here. 20:33:25 absolutely everything should be deployed in a virtualenv, no exceptions 20:33:39 Does anyone have any comments about the lower-constraint gate? Are we in favor? 20:33:48 i'm not sure i understand how it works 20:33:49 but sure 20:33:56 we have to play well with distros, after all majority of users install from distro packages ;) 20:34:07 in favor 20:34:18 rm_work it will install using the minimum versions of the packages in requirements.txt 20:34:24 yes, you can play well with distros by ignoring their system packages and using a virtualenv :P 20:34:38 this doesn't impact the distro in any way 20:34:53 and is in fact very friendly to it by ignoring it altogether and being extremely low impact 20:34:58 I know of distros that ship venvs too... 20:35:23 mobing on..... :P 20:35:29 *moving 20:35:32 grin 20:36:07 Ok, I will put together a gate, non-voting for now so we can try this out. I'm a bit nervous as it's "new" for requirements team. 20:37:01 Probably will need a py27 and py35, but maybe just start with py27 20:37:32 We don't really have any version specific requirements if I remember right. 20:38:20 Other topics for open discussion today? 20:38:34 yes 20:38:45 lol 20:38:57 I would like to bring up the topic of backports 20:39:41 could we have devs also proposing backporting stuff to stable/* ? 
20:40:10 so far I have got the impression that it is a best-effort, occasional thing
20:40:14 Typically that is handled by a stable maintenance sub-team.
20:40:47 cgoncalves: congrats on being the first member of the stable maintenance sub-team for Octavia!
20:40:49 hmm I don't recall seeing anyone from that sub-team proposing
20:40:49 We however are a small team, so that doesn't really exist except for cgoncalves volunteering
20:40:51 * rm_work claps for cgoncalves
20:40:59 rm_work: lol
20:41:13 * johnsom congratulates cgoncalves
20:41:24 I'm okay with that
20:41:37 yep, I would love the stable sub-team to backport some of the recent hm fixes to Pike
20:41:43 because down the road it will save me lots of time with customer tickets
20:41:48 Anyway, the trick here is the backports need to be proposed after the patch has merged on master. So a bit async
20:42:32 Yeah, anyone can propose a backport. I have kind of been going on the "if someone needs it" approach (feel free to fire the PTL).
20:42:57 if we do we will do it by tweet
20:42:58 I'd suggest, whenever possible, leaving a comment or even adding a tag to the commit message that the patch is backport material
20:43:37 The key part is making sure it meets the policy:
20:43:39 #link https://docs.openstack.org/project-team-guide/stable-branches.html
20:43:46 sure
20:45:42 So, yes, it would be nice to tag patches with backport potential. Please feel free to propose things. Please propose them after the master patch has merged.
20:46:15 cgoncalves Do you have a cadence you would like to see for dot releases of the stable branches?
20:46:18 will do
20:46:45 not really
20:46:56 Again, my approach has been roughly at release cycles and when folks request them.
20:47:15 ok
20:47:15 yeah, I usually run off a SHA
20:47:28 on our side we like to be proactive and backport whenever applicable
20:47:37 ok, sounds good
20:47:49 Yeah, not a problem.
20:48:24 Ok, other topics today?
20:48:44 * cgoncalves feels observed....
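[Editorial sketch of the backport flow discussed above: stable backports are proposed with `git cherry-pick -x`, which records the original master SHA in the commit message, after the master patch has merged. The repository, branch history, and commit messages below are a toy example, not real Octavia history.]

```shell
#!/bin/sh
set -e
# Build a throwaway repo so the example is self-contained.
work=$(mktemp -d)
cd "$work"
git init -q demo && cd demo
git config user.name demo
git config user.email demo@example.com

# Master history: an initial commit, then the fix we want to backport.
echo base > file.txt && git add file.txt && git commit -qm "Initial commit"
git branch stable/queens        # the stable branch forks here
echo fix >> file.txt && git add file.txt && git commit -qm "Fix HM failover bug"
fix_sha=$(git rev-parse HEAD)

# Propose the backport only after the master patch has merged:
git checkout -q stable/queens
git cherry-pick -x "$fix_sha"   # -x appends "(cherry picked from commit ...)"
git log -1 --format=%B
```

The `-x` annotation is what lets reviewers verify that the backport matches a merged master change, per the stable-branch policy linked above.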
20:49:08 Hahaha
20:49:32 no
20:51:20 I need to go dig into the periodic stable jobs too. Not sure if those ever got put back after the zuul v3 stuff
20:51:40 We had a nice health dashboard I used to track those, but that is now gone.
20:51:57 http://status.openstack.org/openstack-health/
20:52:24 johnsom: periodic-stable-jobs-neutron runs for neutron-lbaas
20:53:05 AJaeger: hi. do you have an ETA for the grenade zuul v3 native job?
20:53:14 check also http://zuul.openstack.org/builds.html?project=openstack%2Fneutron-lbaas&pipeline=periodic-stable
20:53:24 cgoncalves: no - best ask the QA team
20:53:25 Ah, it's working again. cool. Yeah, looks like the stable periodics are gone
20:53:35 AJaeger: ack
20:53:53 johnsom: those got renamed, we might have forgotten to update the dashboard ;(
20:54:10 Ah, that is why I couldn't find them. periodic-stable.
20:54:45 Hmm, wish it had the branch in there somewhere
20:55:05 johnsom: not in that display yet ;(
20:55:29 looks like https://review.openstack.org/#/c/552978/ is about to pass and needs +A
20:55:33 At least they are all passing.... grin
20:55:50 there it goes
20:55:52 so, something wrong with them? ;)
20:55:56 +A plz
20:56:04 Yeah, good question. hahaha
20:56:26 Ok, a few minutes left, anything else today?
20:57:06 yes, someone, right now, should +A https://review.openstack.org/#/c/552978/ :P
20:57:11 Ok, thanks folks
20:57:12 #endmeeting