15:00:56 #startmeeting oslo
15:00:56 Meeting started Mon May 13 15:00:56 2019 UTC and is due to finish in 60 minutes. The chair is bnemec. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:00 The meeting name has been set to 'oslo'
15:01:01 moguimar: Yes :-)
15:01:20 courtesy ping for amotoki, amrith, ansmith, bnemec, dims, dougwig, e0ne
15:01:20 courtesy ping for electrocucaracha, garyk, gcb, haypo, hberaud, jd__, johnsom
15:01:20 courtesy ping for jungleboyj, kgiusti, kragniz, lhx_, moguimar, njohnston, raildo
15:01:20 courtesy ping for redrobot, sileht, sreshetnyak, stephenfin, stevemar, therve, thinrichs
15:01:20 courtesy ping for toabctl, zhiyan, zxy, zzzeek
15:01:31 o/
15:01:35 o/
15:01:53 o/
15:01:54 o/
15:03:15 #link https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting
15:03:28 Forgot the agenda link. It's been too long since we had this meeting. :-)
15:03:35 #topic Red flags for/from liaisons
15:03:36 o/
15:03:58 I don't think we have anything from the Barbican side
15:03:58 lol
15:04:13 o/
15:04:22 I think the main thing is Bandit, but that's not really Oslo.
15:04:28 I have that as a main topic later.
15:04:33 No flags from Cinder that I know of. :-)
15:04:38 Nothing from Octavia
15:05:03 * johnsom Notes the bandit issue was an easy fix for Octavia team. It's just a path parsing change
15:05:57 Okay, sounds like we can move on then.
15:06:01 #topic Releases
15:06:26 Releases happened last week, so everything that merged up until then should be out there.
15:06:34 Including stable branches.
15:06:44 Business as usual for this week too.
15:07:13 I will note that I added hberaud as a new Oslo release liaison since Doug is no longer active in Oslo.
15:07:45 I need to let the release team know that as well.
15:07:58 #action bnemec to contact release team about new Oslo liaison
15:08:29 That's it for releases.
15:08:31 #topic Action items from last meeting
15:08:40 "ansmith_ to review https://review.opendev.org/#/c/638248"
15:08:56 Done, thanks ansmith.
15:09:02 "bnemec to send email about meeting cancellation."
15:09:06 Done
15:09:15 And if not, it's way too late. :-)
15:09:38 That was it for action items.
15:09:40 Hervé Beraud proposed openstack/oslo.messaging master: Fix switch connection destination when a rabbitmq cluster node disappear https://review.opendev.org/656902
15:09:49 #topic Courtesy pings
15:10:12 If you're in the courtesy ping list, please note that we'll be moving away from courtesy pings in the near future.
15:10:23 This came out of the discussion in the PTL tips and tricks session in Denver.
15:10:28 #link https://etherpad.openstack.org/p/DEN-ptl-tips-and-tricks
15:10:44 Basically courtesy pings are considered bad IRC netiquette.
15:11:05 The preferred method is for anyone interested in a meeting to add a custom notification on "#startmeeting oslo".
15:11:34 isn't that irc client dependent?
15:11:44 I'll keep doing courtesy pings for another week or two to give people a chance to see the change.
15:11:46 * jungleboyj missed this discussion
15:11:55 moguimar: Yes, but any decent IRC client should be able to do it.
15:12:10 Is there a pointer to instructions on how to do that?
15:12:24 I guess if that's a blocker for anyone, please raise it on the list.
15:12:36 but then if you change irc client you're back to step 0
15:12:38 =(
15:12:40 jungleboyj: Not that I know of. I actually need to figure out how to do it myself.
15:12:55 Ok.
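For anyone setting this up, a custom highlight on the meeting-start line is usually a one-line client setting. The lines below are a rough sketch for WeeChat and irssi only; the exact option and command syntax varies by client and version (IRCCloud and others have their own per-word highlight settings), so treat them as a starting point rather than verified instructions.

    WeeChat: /set weechat.look.highlight_regex "#startmeeting oslo"
    irssi:   /hilight #startmeeting oslo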
15:13:20 I also need to figure out how to test it since it presumably won't highlight on my own messages.
15:14:16 :-)
15:14:17 I'll send an email to the list about this. We can maybe have a deeper discussion there where the folks who objected to courtesy pings can be involved.
15:14:39 good
15:14:40 We might need some dev docs about adding custom highlights in various popular IRC clients.
15:14:41 bnemec: Sounds like a good plan. I will do some investigation as well.
15:14:57 bnemec: Yeah, was just looking for info on IRC Cloud
15:15:07 #action bnemec to send email to list about courtesy pings
15:15:57 Okay, look for that later today.
15:16:03 #topic Bandit breakage
15:16:25 So we have a bunch of different patches proposed to deal with this.
15:16:35 Some capping it, some tweaking the exclusion list.
15:16:52 I'd prefer to find a single solution so we don't have slightly different configs all over Oslo.
15:17:09 #link https://github.com/PyCQA/bandit/pull/489
15:17:19 Octavia just needed this:
15:17:21 #link https://review.opendev.org/#/c/658476/1/tox.ini
15:18:58 Apparently that works well if you have a fairly simple test tree.
15:19:07 I followed Doug's patch https://review.opendev.org/#/c/658674/
15:19:12 I know they had issues in Keystone because the wildcard didn't handle nested test directories or something.
15:19:30 I don't know how much of that we have in Oslo, but it could be an issue.
15:19:52 https://github.com/PyCQA/bandit/issues/488
15:20:02 It also worth noting that Bandit intends to fix this behavior, so we need to make sure whatever we do will work with 1.6.1.
15:20:06 I'd prefer just to blacklist 1.6.0 tho
15:20:08 *It's
15:20:28 kgiusti: I'm inclined to agree.
15:20:36 then we can pick Doug's fix for now
15:21:04 Although I wonder if we should do a != exclusion instead so we don't have to go through and uncap when the fix is released.
15:21:07 blacklist just 1.6.0 for now might pop the error back again in the future
15:21:59 moguimar: +1 but we can wait for 1.6.1
15:22:09 I'd put < 1.6.0 and keep an eye on bandit
15:22:41 On that note, it would be nice if we could do some testing with the proposed fix in bandit.
15:23:37 FYI we also have some issues related to sphinx requirements
15:23:38 Not only would it ensure that our stuff works with the eventual fix, it might help get the fix merged in the first place.
15:23:53 https://review.opendev.org/#/c/658812/
15:23:59 https://review.opendev.org/#/c/650505/
15:24:13 related to this bump ^^^^
15:25:13 hberaud: Okay, one dependency problem at a time. :-)
15:25:20 I've added sphinx as another topic after this one.
15:25:30 bnemec: ack
15:25:54 We need an action on the bandit stuff.
15:26:19 Maybe two patches per repo: one to cap, one to uncap with an exclusion of 1.6.0?
15:26:47 We merge the cap and then recheck the uncap once bandit releases the fix.
15:26:59 That way if the fix doesn't work we aren't blocked on anything but the uncap patch.
15:27:16 The downside is if we forget to merge an uncap patch, but that's not the end of the world.
15:27:24 Thoughts?
15:27:26 +1
15:27:35 +1
15:28:18 +1
15:28:33 Okay, we'll go with that.
15:28:35 bnemec: Any reason not to just change paths?
15:28:48 Assuming they don't break that with 1.6.1, of course
15:28:50 stephenfin: It's not clear to me what the behavior in 1.6.1 is going to be.
15:29:07 The current behavior was not intended and I don't want to rely on it.
15:29:18 Ack, +1 from me too so
15:29:31 Also, apparently it doesn't work in more complex repos.
15:29:37 Okay, thanks.
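To make the two-patch plan above concrete: the bandit entry lives in each project's test-requirements.txt, and the two patches differ only in the version specifier. The lower bound shown is the common Oslo value and is only illustrative; each repo keeps whatever bound it already has.

    # patch 1 (merge now): cap below the broken release
    bandit>=1.1.0,<1.6.0 # Apache-2.0

    # patch 2 (recheck once bandit 1.6.1 is out): lift the cap, blacklist only 1.6.0
    bandit>=1.1.0,!=1.6.0 # Apache-2.0

The second patch only lands after 1.6.1 is confirmed to restore the expected path-exclusion behavior, which is why the cap stays in place until then.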
15:30:13 #action push cap, uncap patches to projects blocked by bandit
15:30:45 I guess this might bite us if they can't restore the old behavior entirely.
15:30:56 But I'm not going to borrow trouble. :-)
15:31:12 I'll update the list about our plans since I know basically everyone is dealing with this.
15:31:25 Although most teams don't have quite so many projects to manage.
15:31:37 #action bnemec to send email about bandit plans
15:31:53 #topic Sphinx requirements
15:32:02 hberaud: You're up.
15:33:03 so I guess my patch is not the right solution to fix the issue https://review.opendev.org/658812
15:33:36 i know that the error occur on many project since few days
15:33:48 s/project/projects/
15:34:13 stephenfin: This may also be relevant to your interests.
15:34:25 I guess the CI requirements check fail due to this one => https://review.opendev.org/#/c/650505/
15:34:48 hberaud: I think maybe you want python_version>='3.4' instead of listing them.
15:35:08 "...does not match "python_version>='3.4'"
15:35:10 I suppose
15:35:17 hberaud: Which repo is failing?
15:35:34 I've seen some failures on projects that aren't using constraints
15:35:34 openstack/murano
15:35:53 and I other I guess too but not sure yet
15:35:56 Oh, wait. It's not complaining about lower-constraints, it's complaining about doc/requirements.txt.
15:37:26 oh are you sure
15:37:39 hberaud: http://logs.openstack.org/12/658812/1/check/requirements-check/04a5cd8/job-output.txt.gz#_2019-05-13_13_38_31_400156
15:37:49 That's where the errors are coming from.
15:38:22 http://logs.openstack.org/18/658818/1/check/requirements-check/c8b8958/job-output.txt.gz#_2019-05-13_13_45_49_007565
15:39:40 hberaud: Which patch is that? It's not https://review.opendev.org/658812
15:40:29 related to => http://logs.openstack.org/18/658818/1/check/requirements-check/c8b8958/ https://review.opendev.org/#/c/658818/
15:40:29 Oh, it's in the url.
15:40:38 np
15:41:03 That one is failing on test-requirements.txt.
15:41:32 yep
15:41:35 We most likely need to fix https://github.com/openstack/oslo.service/blob/master/test-requirements.txt#L14
15:42:02 yeah
15:43:57 Okay, "Similarly sphinx 2.0.0 drop py27 support do express that in global-requirements."
15:44:01 From https://opendev.org/openstack/requirements/commit/00b2bcf7d664b1526b4eefe157c33113206d6251
15:44:20 So we need tweak the requirements to cap sphinx on py27.
15:44:45 ok
15:45:08 Probably we need to split it into two lines, one for python_version=='2.7' and one for >='3.4'.
15:45:16 Let's take a look at that after the meeting.
15:45:25 ok
15:45:32 #action bnemec and hberaud to sort out sphinx requirements
15:45:41 ack
15:46:02 #topic PBR unit test flakiness
15:46:15 In other good news, it's almost impossible to merge anything in PBR because the unit tests are super flaky in the gate.
15:46:27 oh
15:46:30 It seems to be related to the WSGI wrapper unit tests.
15:47:03 I wanted to bring it up in the hopes that someone would be able to investigate.
15:47:14 Unfortunately, IIRC these failures don't reproduce for me locally.
15:47:45 And I never got any further than running the pbr unit tests in a loop last time I looked at this.
15:48:19 I'll take a look on my side too
15:48:36 hberaud: Thanks
15:48:46 #action hberaud to investigate pbr unit test flakiness
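For reference, "running the pbr unit tests in a loop" to chase the flaky WSGI wrapper tests can be done with a shell loop that stops on the first failing run. This sketch assumes pbr's usual stestr-based tox setup and that the relevant tests live under pbr.tests.test_wsgi; adjust the tox environment name and the test filter to whatever the repo actually uses.

    # re-run the wsgi wrapper tests until a run fails
    while tox -e py37 -- pbr.tests.test_wsgi ; do : ; done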
15:48:59 #topic Weekly Wayward Review
15:50:07 #link https://review.opendev.org/#/c/618569/
15:50:38 hberaud: Looks like you had been involved in this one too.
15:51:03 yep
15:51:41 I think your changes were just to the docstrings, so I'd be okay with you +2'ing it if it looks good now.
15:51:59 ok I'll double check
15:52:12 It's pbr so it may not actually merge, but at least we can get it approved.
15:52:26 ack
15:52:30 #action hberaud to review https://review.opendev.org/#/c/618569/
15:52:38 #topic Open discussion
15:52:46 Okay, that was it for topics.
15:52:53 We have a few minutes left if there's anything else.
15:53:11 if someone can take a look to => https://review.opendev.org/645208 and https://review.opendev.org/647492
15:53:45 (the second is pbr too)
15:54:01 Yeah, that would be next week's wayward review if it doesn't merge before then. :-)
15:54:17 ack
15:54:49 hberaud: Approved https://review.opendev.org/#/c/645208
15:55:09 bnemec: thx
15:55:21 Note that pike is EM now so it won't get released.
15:55:32 bnemec: ack
15:57:01 I also +2'd the ocata backport.
15:57:11 nice
15:57:15 Anything else? We have 3 minutes.
15:57:38 not on my end
15:58:11 I'll ping kmalloc about https://review.opendev.org/634457 (switch from python-memcached to pymemcache), since I guess everything is ok
15:58:24 Sounds good.
15:58:54 I just have an issue with the openstack/opendev => https://review.opendev.org/658347
15:59:01 Okay, let's call the meeting and get started on all the action items I assigned.
15:59:09 #endmeeting
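Following up on the Sphinx requirements topic above: the split into per-python_version lines that bnemec and hberaud planned to sort out after the meeting would look roughly like the following in the affected doc/requirements.txt or test-requirements.txt. The exact exclusions and bounds come from global-requirements at the time, so these two lines are only a sketch of the environment-marker split, not the final change.

    sphinx!=1.6.6,!=1.6.7,<2.0.0;python_version=='2.7' # BSD
    sphinx!=1.6.6,!=1.6.7;python_version>='3.4' # BSD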