15:00:29 #startmeeting oslo
15:00:31 Meeting started Mon Oct 8 15:00:29 2018 UTC and is due to finish in 60 minutes. The chair is bnemec. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:35 The meeting name has been set to 'oslo'
15:00:42 o/
15:00:50 courtesy ping for amotoki, amrith, ansmith, bnemec, dansmith, dhellmann, dims
15:00:50 courtesy ping for dougwig, e0ne, electrocucaracha, flaper87, garyk, gcb, haypo
15:00:50 courtesy ping for jd__, johnsom, jungleboyj, kgiusti, kragniz, lhx_, moguimar
15:00:50 courtesy ping for njohnston, raildo, redrobot, sileht, sreshetnyak, stephenfin, stevemar
15:00:51 courtesy ping for therve, thinrichs, toabctl, zhiyan, zxy, zzzeek
15:00:55 o/
15:01:03 o/
15:01:05 o/
15:01:05 o/
15:01:06 o/
15:01:11 o_
15:01:11 o/
15:01:13 o/
15:01:24 o/
15:01:33 o/
15:02:04 #link https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting
15:02:12 #topic Red flags for/from liaisons
15:02:34 Major release of oslo.messaging last week.
15:02:35 Nothing to report from the Octavia team
15:02:37 Nothing from me.
15:03:00 I don't think it should affect anyone, but it's something to keep in mind.
15:03:26 Oh, actually sileht found a significant bug in it.
15:03:29 o/
15:03:30 Let me find the link.
15:04:03 https://github.com/openstack/oslo.messaging/commit/172cfb33f3ee207531a9e82fbc8293d24009a256
15:04:33 It will only affect you if you aren't explicitly setting a transport, which I suspect is usually not the case.
15:04:46 But obviously it is sometimes.
15:05:54 Thanks for the heads up. We are calling out the transport, but it's always nice to have this information should things go sideways.
15:06:25 It's fixed and should get released as part of the usual bunch of releases this week too.
15:06:37 Otherwise I think that's it.
15:06:48 #topic Releases
15:07:00 Business as usual.
15:07:47 #topic Action items from last meeting
15:08:08 "bnemec to request project update slot"
15:08:12 Done, but...
15:08:37 Because of the uncertainty around my travel plans I waited too long.
15:09:07 So unless some projects drop out we aren't going to have a normal project update slot. :-(
15:09:26 ah, bummer
15:09:45 I was thinking we should look into whether we could do a lightning talk or something.
15:10:00 bnemec: Bummer.
15:10:00 It might not be as formal, but at least we could get the information out there.
15:10:03 we should try to get moguimar a lightning talk slot
15:10:11 at the very least, that's a big new feature for folks to know is coming
15:10:20 Yeah, agreed.
15:10:38 * dhellmann looks around the room for someone else he can volunteer to do something
15:11:05 o/
15:11:12 I can do a lightning talk on drivers
15:11:22 I've never done a lightning talk, so I'm not sure when that all gets scheduled.
15:11:29 It's usually closer to summit though, right?
15:11:37 * bnemec doesn't want to drop the ball again
15:11:52 you should get in touch with ttx and/or diablo_rojo about it now to make sure
15:12:38 Okay, will do.
15:12:47 I can verify that we didn't get an official update timeslot too.
15:13:02 #action bnemec to look into lightning talk for oslo.config drivers
15:13:03 ttx or diablo_rojo are the right people
15:13:30 they will at least know who you really need to talk to
15:14:41 So, lesson learned. Request the timeslot early and we can always back out if nobody can do it.
15:14:53 "bnemec to start etherpad for project update topics"
15:14:57 Obsoleted by the previous topic.
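
[Editor's aside, not part of the meeting log: the oslo.messaging regression discussed at 15:04 above only bites callers that rely on the library's default transport. As a minimal sketch of what "explicitly setting a transport" looks like, something along these lines passes the transport URL explicitly; the broker URL, project name, and topic below are placeholders, not values from the meeting.]

    # Editor's sketch, not from the meeting: explicitly passing a transport URL
    # to oslo.messaging instead of relying on the default transport. The broker
    # URL, project name, and topic are hypothetical placeholders.
    from oslo_config import cfg
    import oslo_messaging

    conf = cfg.CONF
    conf([], project='example')  # hypothetical project name, just to initialize config

    # Passing url= explicitly means the default-transport code path is never used.
    transport = oslo_messaging.get_rpc_transport(
        conf, url='rabbit://guest:guest@localhost:5672/')

    target = oslo_messaging.Target(topic='example-topic')
    client = oslo_messaging.RPCClient(transport, target)
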
15:15:09 But will do if we still get a session.
15:15:18 "bnemec to send dhellmann slides from previous project update"
15:15:20 Ditto.
15:15:41 Except that I can do the slides now since it turns out I am going to be in Berlin.
15:15:56 "investigate writing a script to automatically tag stories with migrated priority"
15:16:02 Still not done to my knowledge.
15:16:21 I started something on that
15:16:32 it's pretty hacky and I hit a bug in the SB API that I wasn't able to figure out
15:16:48 I forgot about putting it in a repo somewhere
15:16:53 I can put it in oslo.tools maybe?
15:16:54 Okay, cool (the started part, not the bug part).
15:17:04 that's not a good long term home, but it would let someone else take over the script
15:17:05 That seems like a reasonable place for it.
15:18:42 #action dhellmann to put story tagging code in a repo (oslo.tools?)
15:19:25 That was it for action items.
15:19:46 I think that covers the storyboard topic I still had on the agenda too.
15:19:53 #topic os-log-merger
15:20:06 #link https://review.openstack.org/#/c/607142
15:20:09 hi o/
15:20:20 exactly, and
15:20:21 There was a proposal to add this as a formal project.
15:20:23 #link https://wiki.openstack.org/wiki/OsLogMerger
15:20:42 A question that came up was whether it should be under the oslo umbrella.
15:21:25 bnemec: Exactly, although it's not a library itself, and I didn't know if it'd have a place under oslo
15:21:50 it's a tool to help openstack developers and operators
15:22:08 I started it around 3 years ago and I haven't publicized it much.
15:22:19 what does it do?
15:22:22 We certainly have common logging bits in Oslo, but I'm trying to think whether we have much in the way of standalone tools like this.
15:22:55 So, the purpose of the tool is to take the logs of several openstack (and non-openstack) services
15:23:12 and merge them together in a single output, ordered by timestamp
15:23:12 "os-log-merger is an OpenStack project which produces tools to help debugging openstack logs by aggregation."
15:23:17 quoth the wiki :-)
15:23:20 hehe
15:23:40 it auto-detects the type of log
15:24:06 Neutron jobs use it to aggregate functional log outputs, and send the aggregation to logstash/kibana
15:24:08 but,
15:24:51 When you have lots of services (sometimes replicated across nodes) which interact with each other, it becomes a pain to trace a request, or what's happening under the hood (post-mortem)
15:25:15 osprofiler for example, helps with that if you prepare your environment, set up the services for a profiling session, etc
15:25:26 but if you just have the logs, it's not possible
15:25:54 https://pypi.org/project/os-log-merger/ <-- search for "Level 3"
15:26:00 there is no anchor, sorry :)
15:27:29 My purpose with submitting it to governance (or oslo) is maximizing utility, making other projects more aware of its existence, and eventually getting more contributions to make it better.
15:27:39 any questions?
15:28:05 So the question remains, does it make sense to you folks?
15:28:36 "It should work as long as the logs are based on oslo logger output."
15:28:36 ^seems like an argument to have this in oslo.
15:29:01 bnemec: what do you mean "the logs are based on oslo logger output"?
15:29:17 ajo: I'm quoting from the pypi description. :-)
15:29:53 bnemec: oh, that comment is obsolete, now it supports other formats too (so we're able to merge more system logs together with oslo logger output) :)
15:30:03 oslo log :)
15:30:06 Ah, okay.
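
[Editor's aside, not part of the meeting log: a minimal sketch of the timestamp-ordered merge ajo describes above. This is not os-log-merger's actual implementation; it assumes every line starts with an oslo.log-style "YYYY-MM-DD HH:MM:SS.mmm" timestamp (the real tool auto-detects several formats), and the input file names are made up.]

    # Editor's sketch, not os-log-merger code: interleave several service logs
    # into one stream ordered by timestamp. Assumes each line begins with an
    # oslo.log-style timestamp and each file is already chronologically sorted.
    import heapq
    from datetime import datetime

    def parse_ts(line):
        # First two whitespace-separated fields are the date and the time.
        date, time, _rest = line.split(' ', 2)
        return datetime.strptime(date + ' ' + time, '%Y-%m-%d %H:%M:%S.%f')

    def timestamped(path):
        with open(path) as f:
            for line in f:
                yield parse_ts(line), path, line.rstrip('\n')

    def merge_logs(paths):
        # heapq.merge lazily interleaves the already-sorted per-file streams.
        for _ts, path, line in heapq.merge(*(timestamped(p) for p in paths)):
            print('[%s] %s' % (path, line))

    merge_logs(['nova-compute.log', 'neutron-server.log'])  # hypothetical inputs
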
15:30:24 * ajo sends a review to fix that :)
15:30:29 how many people are contributing to it?
15:31:13 dhellmann: not a lot, I believe we have been around 4-5 maximum, I can ask git for a more precise number
15:31:23 There are three listed on the wiki right now.
15:31:30 how many are *active*?
15:31:45 I don't know about adopting something that isn't maintained
15:31:51 we have a lot of those sorts of things already
15:32:05 3-4
15:32:20 dhellmann: I understand the concern, of course :)
15:33:24 it's been slow during 3 years, there's room for lots of improvement. And greater visibility should help with having more capacity to develop it
15:34:19 bnemec : that tagging script is in https://review.openstack.org/608707
15:34:33 dhellmann: Thanks
15:34:44 I'm also a bit concerned that this is stretching our mission statement, which has to do with common libraries
15:35:00 we have a checklist for adopting an oslo project or not, let me find the URL
15:35:01 I feel like folks suggested bringing it to Oslo because it didn't fit anywhere else :-/
15:35:36 I think it's probably a useful tool, I just feel like it might be better served in an ops tools SIG or project team
15:35:44 yeah, true, and because creating a whole governance project just for this seemed like overkill I guess
15:35:58 possibly, yeah
15:36:25 http://specs.openstack.org/openstack/oslo-specs/specs/policy/new-libraries.html
15:36:29 Are there other ops tools that we could put under the same governance umbrella as this?
15:36:45 dhellmann: what do you mean "ops tools SIG or project team", creating a new project with such a purpose?
15:36:50 ajo : yes
15:36:59 bnemec : who owns os-purge?
15:37:13 ok, that could make a lot of sense, I suspect there are other tools in existence which could be willing to join
15:37:21 I have no idea, but that would be a good one too.
15:37:36 gcb_ : I think the fact that that document only talks about libraries is pretty significant
15:37:50 And it would be nice to make that an official thing since I know a lot of operators want it.
15:37:51 yeah
15:38:04 bnemec : yeah, the public cloud group expressed some interest in os-purge-like features recently
15:39:16 I'm fine with going the ops tools team path.... that makes sense considering the purpose of the oslo project
15:39:27 if we go the ops-tools path, any advice dhellmann?
15:39:33 dhellmann, yeah, do you mean https://pypi.org/project/ospurge/ ?
15:39:33 maybe we can talk about that off-meeting
15:39:37 don't want to hold everybody
15:39:39 gcb_ : yes
15:39:54 Yeah, that feels like a better fit than Oslo for this.
15:40:20 ajo : maybe a good next step is to propose creating a group like that on the mailing list(s)?
15:40:29 the structure could be a lot like oslo, with separate review teams on each repo and 1 core team across them all if folks are more comfortable with that
15:40:45 and a sig feels like it's lower effort than a project team
15:40:57 dhellmann: how does a SIG work?
15:41:16 https://governance.openstack.org/sigs/
15:41:21 thanks
15:41:33 the process to create one is at the very bottom there
15:41:39 dhellmann: I may have some more questions, if you have time off meeting, no need to hold everybody
15:41:44 thanks a lot
15:41:59 and I can help you find a guide to set it up if you get some positive responses
15:42:03 ajo : sure
15:42:19 we could move to #openstack-tc after this meeting
15:42:25 ack
15:42:51 Okay, good discussion. Thanks everyone.
15:43:05 That was it for topics.
15:43:07 #topic Weekly Wayward Review
15:44:10 There's the config migrator one, but I think that one's already being actively worked.
15:44:17 So, let's do this one:
15:44:19 #link https://review.openstack.org/583524
15:45:11 I suspect that one lingered because of the discussion over how the message was formatted.
15:46:18 I guess Doug +2'd the first patch set, so maybe this already has consensus.
15:46:23 * bnemec looks at what changed from 1 to 2.
15:46:48 was it just rebased?
15:46:55 Ah, nothing.
15:47:00 Yeah, must have been.
15:48:08 Okay, sent it.
15:48:18 yeah, we can clean that up
15:49:38 There's also a fairly simple followup to that one: https://review.openstack.org/#/c/583525/3
15:49:51 The only negative comment so far was in regard to the commit message.
15:50:00 And that was addressed, I believe.
15:50:10 Doug Hellmann proposed openstack/oslo.config master: avoid trailing space in sphinxext log output https://review.openstack.org/608717
15:50:16 fixed ^^
15:50:52 +2
15:50:56 Thanks
15:51:45 I have a few
15:51:54 assuming wayward means "good but needs review"
15:52:14 lingering, yeah
15:52:18 It's mostly grabbing our oldest review without a -1 and figuring out how to proceed.
15:52:31 These two would be good https://review.openstack.org/583957 https://review.openstack.org/594222
15:52:34 We just knocked out two of yours. :-)
15:52:41 Oh, awesome
15:53:02 * stephenfin loves it when things happen without him having to do a thing :)
15:53:37 Of those two I shared, the latter one's the one I care about. The other one? Meh, nice-to-have at most
15:54:07 Ah, yes. I was looking at that one before the meeting.
15:54:57 lgtm
15:55:08 (y) Ta
15:56:47 Okay, three down. Very good. :-)
15:56:51 #topic Open discussion
15:57:03 I have an item for this: backports
15:57:40 I mentioned it here earlier today, but we've a couple of open patches against Ocata and Pike and I'm not sure what the policy around accepting them is
15:58:01 ...especially with extended-maintenance now a thing (for Ocata)
15:58:34 (If there are docs on this somewhere, please tell me where to go RTFM :))
15:59:00 those branches are still open for stable backports, aren't they?
15:59:15 stephenfin: I think any bug fixes that would have been appropriate during the maintained phase are still appropriate in EM.
15:59:30 We just don't produce releases anymore.
15:59:34 https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases
15:59:42 I guess ocata is EM but the others are stable: https://releases.openstack.org
15:59:54 OK, so that brings me to the second part of my question: what's considered appropriate in that case?
16:00:14 https://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes
16:00:29 https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance
16:00:30 nova's policy is the last release gets all bug backports, the second last release only gets security/data loss potential backports, and the third last release gets nothing
16:00:31 heh
16:00:40 oh, we don't get that picky
16:00:55 send it all back as far as people want to deal with fixing broken gate jobs
16:01:38 +1
16:01:45 OK, I wasn't sure about that point. I owe kgiusti/dmueller an apology, in that case
16:01:48 Thanks for the info
16:02:10 stephenfin: np
16:02:48 Okay, we're two minutes over time.
16:02:48 * kgiusti can't recall what exactly needs an apology, but likes stephenfin enough to forgive practically anything
16:03:04 Feel free to continue discussions in the regular channel.
16:03:08 Thanks for joining everyone!
16:03:12 o/
16:03:12 #endmeeting