14:00:12 <mnaser> #startmeeting tc
14:00:13 <openstack> Meeting started Thu Aug  8 14:00:12 2019 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 <openstack> The meeting name has been set to 'tc'
14:00:19 <mnaser> #topic roll call
14:00:25 <ricolin> o/
14:00:26 <ttx> ohai
14:00:57 <mnaser> o/
14:01:06 <mnaser> welcome tc-members :)
14:01:27 <gmann> o/
14:01:38 <fungi> welcome to you as well
14:01:59 <zaneb> ahoy
14:02:12 <jroll> \o
14:02:25 <mugsie> o/
14:02:38 <mnaser> ok so i count at least 7 of us if my math is right
14:02:48 <ttx> that is 7
14:02:53 <mnaser> and i think my math tells me that we're good
14:03:04 <ttx> evrardjp was here earlier
14:03:10 <zaneb> I count 8
14:03:12 <fungi> i have not had enough caffeine to math yet
14:03:30 <mnaser> heh, well let's get started.
14:03:34 <mnaser> #topic Follow up on past action items
14:03:38 <mnaser> #info fungi to add himself as TC liaison for Image Encryption popup team
14:03:44 <mnaser> i believe this was already done and addressed
14:03:49 <TheJulia> o/
14:04:13 <fungi> #link https://governance.openstack.org/tc/reference/popup-teams.html#image-encryption
14:04:24 <fungi> like ragu, it's in there
14:04:35 * dhellmann slides in the back late
14:04:42 <mnaser> #link https://review.opendev.org/#/c/670370/
14:04:50 <mnaser> cool
14:04:54 <mnaser> #info fungi to draft a resolution on proper retirement procedures
14:05:03 <mnaser> this merged not long ago
14:05:05 <mnaser> #link https://review.opendev.org/#/c/670741/
14:05:31 <mnaser> and on our website
14:05:32 <mnaser> #link https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html
14:05:50 <fungi> cool, i missed it getting approved
14:05:56 <mnaser> happened before your coffee :)
14:06:05 <mnaser> #topic Active initiatives
14:06:15 <mnaser> #info Python 3: mnaser to sync up with swift team on python3 migration
14:06:34 <mnaser> i believe that this is probably wrapped up, most of the patches are in and i think that swift is ok for py3?  gmann mentioned something about this too
14:06:47 <gmann> yeah py3 integration job is running fine on swift.
14:07:11 <gmann> timburke also removed swift from disable-py3-repo list on devstack side.
14:07:15 <mugsie> http://replygif.net/i/417.gif
14:07:19 <mnaser> looks like it's also moving well -- https://review.opendev.org/#/q/topic:py3-func-tests+(status:open+OR+status:merged)
14:07:22 <mnaser> so thats awesome
14:07:38 <mnaser> #info mugsie to sync with dhellmann or release-team to find the code for the proposal bot
14:07:43 <mnaser> sorry this one wasn't written out nicely
14:07:44 <mugsie> I found it
14:07:57 <mugsie> and am working on it at the moment
14:08:20 <mnaser> ok cool, for context -- this is making sure that when we cut a branch, proposal bot automatically pushes up a patch to add the 'jobs' for that series
14:08:22 <mugsie> there was some work done in the goal tools repo already, so trying to not re-write the world
14:08:26 <mnaser> ..right?
14:08:26 <mugsie> yes
14:09:00 <mugsie> speaking of, we should add the task of defining versions to our TODO list
14:09:07 <zaneb> mugsie: some of the stuff from https://review.opendev.org/#/c/666934/ might help
14:09:16 <gmann> 'jobs' ? for new python version right ?or stable ?
14:09:25 <mnaser> gmann: i think the 'series' specific job templates
14:09:32 <mnaser> like openstack-python-train-jobs or whatever it's called now
14:09:34 <mugsie> for the openstack-python3-train-job templates
14:09:42 <gmann> ok
14:10:12 <mnaser> ok, well that's progressing so we'll follow up on that.. we still have some time before the next release but it'd be nice to have it ready a little bit before
14:10:22 <gmann> one difficulty in that might be that a few projects need old py version testing, like charm-*
14:10:58 <mugsie> gmann: they can add custom ones as needed, these are just for a standard set
14:11:23 <gmann> mugsie: yeah. adding new ones should be ok as long as we do not remove their old supported ones
14:11:25 <mugsie> charm-* *should* be good, as we based the py3 version off the LTS py3 version for each distro
14:11:58 <mnaser> ok, we can discuss more of the impl. details in office hours :>
14:11:59 <gmann> for example, they need to test and support py35
14:12:08 <gmann> yeah. we can discuss later
14:12:12 <mugsie> +1
14:12:43 <mnaser> #info Forum follow-up: ttx to organise Milestone 2 forum meeting with tc-members (done)
14:12:58 <ttx> yeah so we raised it and etherpads were created
14:13:19 <ttx> let me dig links
14:13:44 <ttx> We only have one volunteer (jroll) for the programming committee
14:13:54 <ttx> anyone else from the short list who is not going for reelection interested?
14:14:15 <mnaser> the proposed list was: asettle mugsie jroll mnaser ricolin, ttx and zaneb. (those that qualify)
14:15:00 <mnaser> any volunteers? :>
14:15:06 <ttx> mnaser: is there a specific document for Forum topics ideas ?
14:15:12 <ttx> I can only find a PTG one
14:15:20 <mnaser> ttx:  https://etherpad.openstack.org/p/PVG-TC-brainstorming ?
14:15:27 <ttx> ok
14:15:30 <mugsie> I would like to do it, but not sure on time commitments - what were the requirements?
14:15:33 <ttx> #link https://etherpad.openstack.org/p/PVG-TC-brainstorming
14:15:54 <ttx> mugsie: Beyond encouraging people to submit proposals, the bulk of the selection committee work happens between the submission deadline (planned for Sept 16th) and the final Forum program selection (planned for Oct 7th).
14:16:32 <ttx> you help select, refine, merge. But there aren't that many proposals so it's less work imho than a conference track chair
14:16:41 <cdent> i did it last time, we accepted everything
14:16:44 <mugsie> OK, I don't have travel planned right now, so I should be OK for that
14:16:47 <cdent> because there wasn't enough
14:16:56 <ttx> yes, usually it's more about merging duplicates
14:16:58 <cdent> i assume the situation will be similar this time
14:17:06 <ttx> and deciding what is worth double sessions
14:17:25 <fungi> i heard rumors we may not be able to accept every forum proposal this time around, but i don't really know what the capacity for it is
14:17:33 <ttx> basically aligning the number of slots available with the proposals received
14:18:00 <fungi> also it probably depends a bunch on how many sessions get proposed in the first place
14:18:08 <mnaser> fair enough, ok, so mugsie is a "maybe" and we can discuss that a tad bit more in office hours or over the ml
14:18:09 <mnaser> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008188.html
14:18:19 <ttx> ++
14:18:30 <mnaser> #topic Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/)
14:18:36 <ttx> I expect jimmy and Kendall to reach out soon for names
14:18:36 <mnaser> #undo
14:18:36 <openstack> Removing item from minutes: #topic Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/)
14:18:40 <mnaser> #info Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/)
14:18:59 <ricolin> ttx count me in as a volunteer
14:19:00 <ttx> Yeah this is still missing reviews, no standing -1
14:19:28 <ttx> so please review so we can cross it out
14:19:54 <mnaser> it's been sitting around for a while so yeah
14:20:06 <gmann> i will do that tomorrow
14:20:55 <ricolin> I really think we need to get this done long before the summit so we actually have time to sort and grow the proposal lists
14:21:15 <mnaser> good idea, well please let's go through it when you can then (but after we're done :))
14:21:20 <mnaser> #topic Attendance for leadership meeting during Shanghai Summit on 3 November
14:21:39 <mnaser> alan reached out to me about this
14:21:54 <mnaser> wondering who from the tc might be able to make it then (and i assume this is somewhat related to https://etherpad.openstack.org/p/PVG-TC-PTG)
14:22:00 <ttx> I should be there unless my visa application goes wrong
14:22:06 <mnaser> #link https://etherpad.openstack.org/p/PVG-TC-PTG
14:22:16 <mugsie> I should be there
14:22:25 <zaneb> I expect to be there
14:22:25 * ricolin will definitely be there
14:22:27 <mnaser> is it safe to assume that anyone going to ptg will likely be at that leadership meeting?
14:22:33 <ttx> probably
14:22:44 <zaneb> of people already on the TC, I'd say yes
14:22:50 <mnaser> ok, we have 5 names down
14:23:02 <gmann> i will be there but haven't added my name pending the election...
14:23:14 <mnaser> oh yes, that's happening
14:23:28 <mnaser> anyone know off the top of their head
14:23:31 <mnaser> when the election starts/ends
14:23:35 <TheJulia> I was just going to mention that would be a thing...
14:23:44 <TheJulia> I think nominations open in the end-of-August timeframe
14:23:46 <dhellmann> nominations start on 27th
14:23:49 <dhellmann> https://governance.openstack.org/election/
14:23:56 <ttx> TC Nominations
14:24:00 <ttx> Aug 27, 2019 23:45 UTC
14:24:04 <ttx> Sep 03, 2019 23:45 UTC
14:24:20 <mnaser> ouch so only by Sep 17, 2019 23:45 UTC can we really have a final tc list
14:24:21 <ttx> Sep 17, 2019 23:45 UTC
14:24:25 <ttx> Election end ^
14:24:31 <dhellmann> yeah
14:24:44 <mnaser> that might be hard for those who are on the "i can go if i hold a role" thing
14:24:50 <ttx> probably too late for people to join the leadership thing if they did not plan to
14:25:20 <mnaser> we should probably address that timeframe issue for the future
14:25:42 <fungi> i expect to be in shanghai at the board/leadership meeting, but as my term is up i will refrain from listing myself as an attendee unless reelected
14:25:58 <mnaser> fair enough
14:26:07 <ttx> To be fair, the leadership thing does not require everyone imho
14:26:11 <dhellmann> mnaser : the usual approach has been to recommend that candidates be prepared to attend, but travel budgets aren't what they used to be
14:26:21 <ttx> I've been advocating for the people who are there to represent the others
14:26:25 <dhellmann> ttx makes a good point
14:26:33 <mnaser> yeah and also 1 month before the actual summit itself is hard for people in general
14:26:37 <ttx> PTG is a much more important moment imho
14:26:39 <mnaser> esp. if there's a process like a visa or something
14:26:41 <dhellmann> especially with the change in the nature of that meeting
14:26:53 <fungi> if reelected, i'll do my best to represent the positions of other tc members who cannot attend
14:27:06 <ttx> i.e everyone should participate in drafting the message/position, and whoever can make it can represent
14:27:26 <mnaser> so i think at the end of the day, our message to alan will be: yes, the tc will have a presence at the leadership meeting
14:27:35 <ttx> some presence
14:27:40 <fungi> sounds right
14:27:52 <mnaser> #action mnaser to contact alan to mention that tc will have some presence at shanghai leadership meeting
14:27:56 <fungi> we can have more precise numbers at the end of next month
14:28:16 <mnaser> cool, that sounds good to me
14:28:27 <mnaser> anyone have anything before moving on to the next topic?
14:29:32 <mnaser> ETIMEOUT
14:29:38 <mnaser> #topic Reviving Performance WG / Large deployment team into a Large scale SIG (ttx)
14:29:46 <ttx> Yeah, so a couple of weeks ago I was in Japan visiting some large OpenStack users
14:29:56 <ttx> Yahoo! Japan for example, which runs 160+ clusters totalling 80k hypervisors and 60PB of storage
14:30:07 <ttx> Or LINE, which *tripled* its OpenStack footprint over the last year alone, reaching 35k VMs (CERN's level)
14:30:17 <mnaser> wow, that's awesome
14:30:17 <ttx> In those discussions there was a common thread, which is the need to improve scalability
14:30:44 <ttx> What's even more awesome is that they run those with pretty small teams
14:30:50 <ttx> It's currently hard to go beyond a certain size (500-1000 hypervisors / cluster), and those users would love to
14:31:02 <ttx> They cited RabbitMQ starting to fall apart, API responses to things like listing VMs getting too slow
14:31:08 <zaneb> ttx: was that a typo? reasonably confident CERN has more than 35k VMs :)
14:31:12 <ttx> Obviously I tried to push them to invest upstream in that specific area
14:31:25 <cdent> ++
14:31:26 <ttx> zaneb: I'm pretty sure they are not. 36k VMs was last count
14:31:26 <gmann> +1
14:31:53 <mnaser> ttx: mriedem actually raised this on the mailing list a few days ago
14:31:53 <zaneb> oh, ok
14:31:54 <mnaser> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008204.html
14:32:02 <ttx> but they also run Magnum clusters which might not be included
14:32:16 <ttx> anyway -- I realized I had nowhere to really point them to
14:32:18 <ttx> We used to have a bunch of groups tackling that "large scale" angle
14:32:27 <ttx> We had the "Performance" team which was formed around Rally and osprofiler, but died in The Big Mirantis Shakedown
14:32:38 <ttx> We have the "Large Deployments" team on the UC side, but afaict it has been inactive since 2015
14:32:48 <ttx> It feels like we need a place to point people interested in openly collaborating to tackle that specific "large scale" angle
14:32:54 <ttx> Do you think a "Large scale" SIG could make sense ?
14:33:00 <ttx> (assuming we clean up the remnants of the former teams)
14:33:08 <mugsie> I think it does, as long as people actually show up for it
14:33:10 * mnaser looks at current list of sigs
14:33:13 <jroll> it seems like it makes sense, but are there people to join the sig and do the work?
14:33:46 <ttx> I feel like it's easier to point people to a thing that is forming (Large scale SIG) than to a thing that is dead (Large deployment team)
14:33:48 <TheJulia> I suspect it could because the larger operators I think of are the scientific operators, and commercial operators may not realize the scale the scientific folks tend to operate at
14:34:04 <gmann> yeah, that is an important point, getting the volunteers first
14:34:06 <ricolin> ttx any chance you mentioned this SIG idea to LINE or Yahoo JP? Just wondering what they think about this
14:34:14 <mugsie> yeah. sigs are cheap anyway, so if it fails to get traction we can spin it back down
14:34:17 <ttx> or even to ask them to set up a SIG as their first contribution
14:34:23 <mriedem> tbc, my email started from a conversation in -nova with eandersson (blizzard)
14:34:39 <mriedem> who is last i checked not a scientist
14:34:44 <ttx> ricolin: yes -- I just wanted to check the idea with y'all before pushing
14:34:52 <ttx> Yahoo and LINE are not scientists
14:34:58 <fungi> we're all scientists here ;)
14:35:06 <ttx> YahooJapan should I say, different from YahooInc
14:35:07 <mnaser> ok so it makes sense to have something separate for it
14:35:21 <ttx> I just wanted to gut-check that it was not a stupid idea
14:35:22 * mnaser would be +2 on a change that is proposed to create a sig from their part
14:35:36 <ricolin> ttx it's a great idea IMO:)
14:35:48 * jroll would also +2 that
14:35:50 <ttx> Like YahooJapan was talking of running new benchmarks on oslo.messaging backends
14:35:57 <jroll> worst case nobody joins and we're in the same spot
14:36:08 <ttx> I'd love if they did it as part of that new group
14:36:10 <gmann> +1 having something in an active state can be very useful for other orgs also.
14:36:27 <ttx> I'll try to compile a list of orgs that may be interested in participating
14:36:32 <mnaser> ttx: should i make that an action to you to reach out to them and contact them?
14:36:44 <ttx> Let's see if we can get some momentum around that. If not, that's not a big cost
14:37:01 <fungi> i guess one other example is the (presumably defunct) lcoo "large contributing openstack operator" working group
14:37:03 <ttx> mnaser: yes sure! Anyone else interested in helping?
14:37:12 <ttx> fungi: yeah I tried not to mention that one
14:37:18 <fungi> heh, fair
14:37:28 <fungi> seemed more like an excuse to create a bureaucracy
14:37:28 <gmann> ttx: is your plan to make it immediately? or to propose the idea at the shanghai forum and see the response and volunteers?
14:37:29 <ricolin> ttx I will go update some SIG guideline docs so this process might be easier for new SIG like this
14:37:41 <mnaser> you can tag me on, i can help by being there and sharing operator knowledge but i don't know if i have a ton of bandwidth to 'run' the sig itself
14:37:42 <ttx> gmann: in time to get people together in Shanghai, for sure
14:38:29 <ricolin> if we start this SIG early, like right this/next week, it can propose its own PTG schedule
14:38:36 <mnaser> #action ricolin update sig guidelines to simplify process for new sigs
14:38:50 <ttx> The whole story is also a useful reminder that we have lots of users out there, mostly invisible... and really need to bridge that gap and get them involved
14:39:03 <jroll> I'll try to recruit some folks from verizon media to work with the sig as well, we're getting to a point where we might have some people-time to contribute
14:39:06 <ttx> I see this SIG as a way to make it win-win
14:39:17 <mnaser> #action ttx contact interested parties in a new 'large operators' sig (help with mnaser, jroll reaching out to verizon media)
14:39:23 <mnaser> i think the hardest part is getting someone to take care of the logistics
14:39:45 <ttx> mnaser: I said "Large scale", not "Large operators" because I feel like it's a slightly different concern
14:39:48 <mnaser> people will show up and talk but the whole note keeping / scheduling / running things is where people might disappear and not follow through
14:39:50 <mnaser> #undo
14:39:51 <openstack> Removing item from minutes: #action ttx contact interested parties in a new 'large operators' sig (help with mnaser, jroll reaching out to verizon media)
14:39:56 <fungi> scalability sig ;)
14:40:00 <mnaser> #action ttx contact interested parties in a new 'large scale' sig (help with mnaser, jroll reaching out to verizon media)
14:40:03 <ttx> You can be happily operating smaller clusters
14:40:11 <ttx> this is about scaling cluster size
14:40:25 <ttx> and pushing things like cells in other projects
14:40:44 <ttx> pushing the limits basically
14:41:02 <ttx> I agree the overlap with large operators is probably very significant
14:41:07 * mnaser plays 'push it to the limit'
14:41:12 <mnaser> but yeah, i agree, i think that's very useful
14:41:29 <ttx> It's a bit more.. targeted than just sharing operations woes between large operators
14:42:04 <ttx> anyway, thanks for helping me gut-check if that was a good idea
14:42:15 <ricolin> it's also a good place to target the Ironic power sync issue at large scale, or to co-work on that issue with the baremetal SIG :)
14:42:40 <mnaser> cool, with that we've gone through most of our topics.
14:42:43 <mnaser> #topic other discussions
14:42:54 <mnaser> anyone (tc or not) have anything that wasn't on our agenda and isn't office-hours-y?
14:42:56 <ttx> ICYMI, Cinder is about to remove half of their drivers because they did not update to Py3
14:43:02 <ttx> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008275.html
14:43:08 <ttx> OSF is looking at pulling a few more strings to see if that triggers any last-minute save, but I'm not very optimistic
14:43:08 <mnaser> wow, that much?
14:43:23 <mnaser> i didn't click but didn't know there were that many
14:43:36 <ttx> mnaser: that is how I interpret that etherpad
14:43:47 <gmann> yeah, I am trying to contact the NEC driver team to migrate to py3
14:44:03 <mnaser> this isnt something the community can help with right?
14:44:08 <mnaser> because the CI inherently is just running py2
14:44:09 <ttx> mnaser: not really
14:44:13 <gmann> main  thing is CI access
14:44:35 <gmann> even as NEC contributor I do not have their CI access so cannot help on that
14:44:48 <ttx> there might be some extra review work on Cinder if driver teams suddenly wake up
14:45:33 <ttx> but otherwise it's mostly about knocking at every door we know of
14:45:39 <TheJulia> With ironic, I had to explicitly go to each 3rd party CI and ask for them to plan and account for switching approximately half their jobs to py3. It took some leg work, but most everyone was responsive....
14:46:16 <fungi> sounds like a bunch of the cinder driver maintainers/ci operators are just not responsive at all
14:46:16 <TheJulia> Essentially it was "knocking on every door"
14:46:26 <ttx> TheJulia: yeah, maybe that approach was not doable with Cinder
14:46:33 <mnaser> let's remember openstack-discuss has huge traffic, it might just not be visible
14:46:49 <mnaser> does the user survey capture cinder drivers used?
14:46:49 <jroll> I suspect people will show up and complain when the patch goes up to remove it
14:46:49 <fungi> well, jay reached out to them all individually, he said
14:47:00 <ttx> Jay did send emails to the contact emails he had
14:47:05 <fungi> and something like half never replied at all
14:47:11 <smcginnis> jungleboyj has tried emailing all the contacts listed in the third party CI wiki, but apparently the info there is very out of date or the addresses are black holes.
14:47:24 <ttx> which may point to outdated contact info, but in the end same result
14:47:35 <fungi> we do also have a third-party ci announcements ml we recommend they all subscribe to
14:47:38 <TheJulia> What about the last people to edit the files?
14:47:52 <ttx> That's why we are pulling contact strings we have for OSF member companies
14:48:09 <ttx> those are likely still active and may trigger a response
14:48:18 <mnaser> i think that's probably the best way to move forward
14:48:56 <mugsie> +1
14:48:58 <smcginnis> My take is that the removal of some of these might be a good thing. And for the others, maybe a good wake up call to get them to know that they can't just put out driver code and assume they are done if they want to stay up to date.
14:49:16 <fungi> it's a bit of a dead-man switch, yes
14:49:22 <ricolin> smcginnis, agree
14:49:39 <fungi> periodic overhauls have a tendency to shake out what's not actually being maintained
14:49:50 <mnaser> if it does get pulled and some vendor realizes this once the release is out
14:50:01 <mnaser> is it possible that cinder uses an 'out of tree' driver?
14:50:12 <mnaser> as a stop gap till it makes it again in the upcoming release?
14:50:18 <fungi> there are scads of oot drivers for cinder, if memory serves
14:50:31 <smcginnis> mnaser: Customers are always able to use out of tree drivers and we do have some vendors that prefer that route versus being upstream.
14:50:41 <zaneb> the part where we find out which drivers aren't maintained is good. the part where there are a lot of drivers not really being maintained is not good
14:50:58 <mnaser> i am just trying to think of the operators/users that can at least work on a temporary route till they add support again or whatever
14:51:30 <smcginnis> Yep, that would be a valid option for them.
14:52:03 <gmann> marking them unsupported and warning on multiple platforms (ML, newsletter, etc.) can be good before removing.
14:52:17 <mnaser> ok so as long as its workaround-able for our users, im happy with removing them
14:52:19 <gmann> ttx: should we add it to the newsletter if it's not too late?
14:52:27 <fungi> also the point at which vendors stop caring about particular hardware or platforms is the point at which those that are still popular may see new grassroots support teams form around them from their users
14:52:36 <mnaser> i dunno if we wanna use our newsletter to 'shame' those who aren't maintaining things :p
14:52:37 <ttx> gmann: it's really too targeted of a message for the newsletter imho
14:52:38 <smcginnis> fungi: ++
14:52:54 <mnaser> (or might just not know that they're out of date because someone forgot to update a contact email)
14:53:03 <mnaser> anyways
14:53:14 <fungi> numerous drivers in the linux kernel are not maintained by vendors or commercial integrators, but by users who want their hardware to keep working
14:53:15 <dhellmann> I think it's healthy for us to be encouraging out of tree drivers
14:53:18 <TheJulia> At some point, you just have to remove them though. You can warn and try to raise red flags again, but if people are not maintaining them, it is better to remove them... no matter how painful it feels for the project leaders.
14:53:28 <jroll> TheJulia: ++
14:53:37 <mnaser> wanted to leave a bit more time for any other topics if any other community or tc members had that weren't office hour-y ?
14:53:51 <ttx> agreed, just want to do a bit more due diligence with people we have contacts with
14:54:12 <ttx> try to catch the 5% who accidentally overlooked it
14:54:17 <fungi> in other topics, new openstack security advisory this week
14:54:22 <fungi> #link https://security.openstack.org/ossa/OSSA-2019-003.html
14:54:26 <dhellmann> ttx: ++
14:54:31 <ttx> not the 95% who have not paid attention since rocky
14:54:43 <fungi> ttx: not sure if you've checked the announce ml moderation queue, it may be hung up in there for the past couple days
14:55:10 <ttx> checking
14:55:21 <fungi> ossa-2019-003 is also an interesting test case for our extended maintenance model
14:55:32 <fungi> mriedem made patches all the way back to stable/ocata
14:55:33 <ttx> I don't get notified so
14:55:58 <ttx> fungi:  done
14:56:04 <fungi> thanks ttx!
14:56:13 <ttx> fungi: you might want to add yourself to that one and be able to clear OSSAs
14:56:26 <jroll> aren't we still supposed to merge things to extended maintenance branches?
14:56:37 <fungi> ttx: happy to, thanks for the invitation
14:56:47 * jroll notes that ocata isn't merged
14:56:55 <fungi> i'm personally curious to see how long it takes to get changes merged to some of the older stable branches, particularly how viable the ci jobs still are
14:56:58 <mnaser> i think the topics are slowly moving towards office hour-y things so ill close us up :)
14:57:06 <mnaser> we can carry this conversation onto office hours
14:57:08 <fungi> thanks mnaser!
14:57:08 <mnaser> thanks everyone!
14:57:11 <mnaser> #endmeeting