15:01:07 <mnaser> #startmeeting tc
15:01:07 <openstack> Meeting started Thu Mar 18 15:01:07 2021 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:10 <openstack> The meeting name has been set to 'tc'
15:01:12 <mnaser> #topic roll call
15:01:13 <mnaser> o/
15:01:15 <jungleboyj> o/
15:01:33 <diablo_rojo_phon> o/
15:01:41 <belmoreira> o/
15:02:03 <gmann> o/
15:02:07 <yoctozepto> \o\
15:02:14 <dansmith> but I'm not reo/
15:02:14 <diablo_rojo_phon> Lol
15:02:20 <dansmith> gah..
15:02:30 <jungleboyj> :-)
15:02:33 <yoctozepto> yeah, you're not reo/
15:02:52 <jungleboyj> REO Speedwagon?
15:02:52 <mnaser> lol
15:02:55 <mnaser> welcome yoctozepto :)
15:02:57 <yoctozepto> lol
15:02:59 <dansmith> I'm not so good at remembering my client has one input box for whatever channel is in focus :)
15:03:00 <yoctozepto> thanks mnaser
15:03:19 <ricolin> o/
15:03:36 <yoctozepto> dansmith: don't worry, I sometimes start programming right in my irc client
15:03:50 <dansmith> my client used to be my editor, so I know how that goes :)
15:04:04 <mnaser> lol
15:04:11 <mnaser> okay so getting started
15:04:13 <mnaser> #topic Audit SIG list and chairs (diablo_rojo)
15:04:44 <diablo_rojo_phon> I think this is largely done for now?
15:05:24 <mnaser> #link https://governance.openstack.org/sigs/
15:05:39 <gmann> there are few things to do as discussed in last meeting. on adding retirement doc/file for "forming to retire" SIG
15:05:59 <openstackgerrit> Merged openstack/governance master: Add Manila dashboard charm to OpenStack charms https://review.opendev.org/c/openstack/governance/+/780813
15:06:00 <mnaser> right, which covers the containers/k8s one
15:06:09 <diablo_rojo_phon> Oh. My bad.
15:06:16 <gmann> this one #link https://review.opendev.org/c/openstack/governance-sigs/+/778304
15:06:28 <diablo_rojo_phon> Got it.
15:06:54 <gmann> and adding 'reason' in retirement doc.
15:06:55 <ricolin> mnaser, I think I'm the one who should write that retire doc
15:07:06 <ricolin> I will do it before this weekend
15:07:10 <gmann> +1
15:07:14 <gmann> thanks
15:07:19 <mnaser> awesome
15:07:24 <diablo_rojo_phon> Thanks ricolin !
15:07:44 <mnaser> #action ricolin Add retired SIGs section for governance-sigs repo
15:08:20 <mnaser> i guess we can drop this topic and just keep following up on the action item above?
15:08:42 <gmann> section is there, may be to update retirement SIG section
15:09:05 <gmann> #link https://governance.openstack.org/sigs/reference/sig-guideline.html#retiring-a-sig
15:09:06 <ricolin> mnaser, +1
15:09:53 <gmann> we can add two things there 1. how to retire Forming SIG 2. add reason in retired SIG doc
15:10:10 <mnaser> yeah, that makes sense
15:10:13 <openstackgerrit> Merged openstack/governance master: Add Magnum charms to OpenStack charms https://review.opendev.org/c/openstack/governance/+/780212
15:10:17 <ricolin> gmann, make sense:)
15:10:58 <diablo_rojo_phon> Sounds good to me.
15:11:22 <mnaser> #action mnaser drop "Audit SIG list and chairs" from agenda
15:11:44 <mnaser> any other comments on this topic?
15:12:16 <diablo_rojo_phon> None from me.
15:12:24 <ricolin> None from me either
15:12:30 <yoctozepto> from me neither
15:12:37 <mnaser> cool!
15:12:43 <mnaser> next up
15:12:45 <mnaser> #topic Gate performance and heavy job configs (dansmith)
15:13:07 <dansmith> still suffering mostly from cinder fails I think.. I haven't seen a lot of other patterns
15:13:27 <dansmith> we're definitely chewing a ton of stuff these days, meaning we're doing lots of tests
15:13:33 * jungleboyj face palms
15:13:51 <jungleboyj> One of the failures this week has been fixed.
15:14:06 <jungleboyj> A dependency issue causing problems with the doc build.
15:14:11 <jungleboyj> That was fixed yesterday.
15:14:11 <dansmith> I think things are starting to head back to a more normal kind of load level, which means maybe next week or later we can start to look at whether things are really good or not
15:14:17 <dansmith> jungleboyj: that was a hard fail, right?
15:14:30 <jungleboyj> dansmith: Yes
15:14:33 <dansmith> I'm talking about spurious fails that affect some percentage of runs randomly
15:14:50 <mnaser> i see, so things are a little harder to tell between 'busy time' vs 'unreliable jobs'
15:14:53 <jungleboyj> dansmith: Ok, and still seeing those from Cinder?
15:14:55 <dansmith> mnaser: right
15:15:00 <dansmith> jungleboyj: yes
15:15:20 <jungleboyj> Ok. Will keep on the team about that then.
15:15:30 <fungi> well, having a lot more spurious build failures can lead to long gate queues and wait times similarly to having a higher change volume
15:15:42 <dansmith> jungleboyj: meaning no change in my general gut feeling of "when I have to recheck, it's a volume test that failed to delete a volume or something similar"
15:15:42 <fungi> that's why it's hard to tell which is which
15:15:45 <mnaser> fungi: right
15:15:54 <jungleboyj> Gotcha.
15:16:06 <mnaser> dansmith: i assume there is a lp for this often-rechecked thing
15:16:17 <dansmith> fungi: that's why I'm saying I've not been trying to draw too many conclusions during this time
15:16:29 <dansmith> fungi: except for the obvious cinder stuff
15:16:30 <mnaser> do you have it handy by any chance to add it to our meeting notes?
15:16:50 <dansmith> mnaser: I have been rechecking with "cinder dance" because it seems to be all over the place in terms of which tests fail
15:17:17 <mnaser> question: is this a good thing to keep an eye on? https://zuul.opendev.org/t/openstack/builds?pipeline=gate&result=FAILURE
15:17:24 <dansmith> if I dig in deep I usually see a small number of recognizable errors in the cinder logs, but I haven't done the work to try to distill that into a reliable e-r query if that's what you mean
15:17:37 <mnaser> first thing i notice is a bunch of OSA non-voting jobs :x
15:18:05 <dansmith> mnaser: non-voting in gate... yeah, that's not cool
15:18:13 <gmann> more than that check pipeline cause the load due to failures
15:18:39 <mnaser> right, but technically speaking, gate should always be passing
15:18:40 <mnaser> in an ideal world
15:18:45 <dansmith> gmann: yeah for sure
15:18:51 <fungi> mostly non-voting jobs in gate queues add noise when you're trying to find build failures which actually would have rejected the change
15:19:01 <dansmith> mnaser: well, and n-v jobs in gate just waste resources because they won't prevent a thing from landing
15:19:03 <fungi> though yes it's also a waste of (some) resources
15:19:12 <gmann> yeah, n-c should be removed from gate pipeline
15:19:13 <gmann> n-v
15:19:24 <mnaser> #action mnaser reach out to OSA team about dropping nv jobs from gate
15:19:54 <dansmith> jungleboyj: I think the cinder team is really busy right now with release stuff,
15:19:57 <mnaser> dansmith: if you wouldn't mind, could you maybe maintain an etherpad of the logs for the cinder failures ?
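For context on the non-voting-jobs point discussed above, here is a minimal sketch of a Zuul project stanza showing the shape of the change being proposed: the non-voting job stays in the check pipeline and is left out of gate. Job names below are hypothetical placeholders, not OSA's actual configuration.

```yaml
# Hypothetical project stanza; job names are placeholders.
- project:
    check:
      jobs:
        - openstack-tox-py38
        - example-deploy-aio-nv   # non-voting job: still gives feedback in check
    gate:
      jobs:
        - openstack-tox-py38      # gate lists only voting jobs; the n-v job is
                                  # dropped here so it stops consuming gate nodes
                                  # and adding noise to gate failure results
```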
15:19:58 <fungi> i once imagined a zuul pipeline option where you could tell it to filter out non-voting jobs for anything enqueued, but i really don't have time to write that
15:20:24 <dansmith> so I've been trying not to jump in and try to get them to work on these fails, but maybe in a week or so when things cool off we can try to help them at least get them identified
15:20:35 <gmann> +1
15:20:53 <yoctozepto> +1
15:20:56 <dansmith> mnaser: the log links expire so I haven't been trying to do that, but I do have local notes on some common types of failures, which I pastebin'd for them last week
15:21:01 <jungleboyj> +1
15:21:19 <yoctozepto> yes, remember to pastebin or you have a nice list of useless links
15:21:23 <yoctozepto> (happened to me)
15:21:47 <dansmith> https://termbin.com/oiml1
15:21:49 <mnaser> lol
15:21:56 <fungi> right, we upload logs and set a 30-day expiration for them in swift
15:21:58 <dansmith> these are what most of the fails I see look like ^
15:22:05 <dansmith> and two probably expired links to examples
15:22:34 <mnaser> ok got it, looks like a volume which failed to create and failed on the cleanup
15:22:38 <yoctozepto> yeah, I try to pastebin some general logs and related service logs for later enquiries
15:22:43 <dansmith> but of course, there are multiple variations in the symptoms, depending on whether a test or nova or something else actually is trying to do a thing
15:22:55 <dansmith> mnaser: it depends
15:23:05 <dansmith> mnaser: sometimes it's a volume snapshot with an instance on top, etc
15:23:07 <mnaser> https://zuul.opendev.org/t/openstack/build/9b5d4b9d44db403a94f5edb02b42f3a8
15:23:14 <mnaser> caught one here simply by looking at the same job name
15:23:28 <mnaser> anyways, so let's keep this open, ill try to follow up with osa team on dropping nv jobs
15:23:37 <yoctozepto> just nice races in there ;-)
15:23:50 <dansmith> yeah, but everyone runs those jobs :/
15:23:51 <dansmith> so it's not just cinder patches of course
15:23:54 <gmann> delete one happening ~40 times in last 7 days
15:23:55 <mnaser> and we can bring this up with cinder team and see if we can maybe get a few minds on this in a call or something
15:24:07 <gmann> oh even more
15:24:09 <mnaser> and iron it out
15:24:15 <dansmith> mnaser: that one you linked is actually different than the other two I have I think
15:24:17 <dansmith> so yeah...
15:24:38 <gmann> #link http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22failed%20to%20delete%20and%20is%20in%20error_deleting%20status%5C%22
15:24:46 <dansmith> a different stuck state I mean
15:25:00 <yoctozepto> mnaser: you mean... to cinder the bugs!
15:25:15 <dansmith> gmann: that state check will not catch them all.. there are several states I've seen besides error_deleting
15:25:37 <gmann> dansmith: yeah,
15:25:56 <dansmith> this is why I haven't really tried an e-r query because it varies a lot
15:26:09 <dansmith> anyway, mnaser we can move on, but +1 for continuing to check in on this
15:26:52 <mnaser> ok cool, maybe we can pick up a crew of folks to try and iron those out and help out the cinder team
15:26:55 <mnaser> im happy to participate in that
15:27:09 <mnaser> but yes, we can move on and keep this idea for next weeks when release stuff settle down
15:27:10 <jungleboyj> ++
15:27:15 <yoctozepto> (uh-oh, nobody picked up the pun)
15:27:26 <mnaser> :P
15:27:30 <mnaser> #topic Consensus on lower constraints testing (gmann)
15:27:48 <diablo_rojo_phon> I did yoctozepto :)
15:28:07 <yoctozepto> diablo_rojo_phon: :-)
15:28:12 <gmann> we discussed the current proposal sent on ML thread last week which seems no objection until now
15:28:18 <jungleboyj> yoctozepto: :-)
15:28:28 <yoctozepto> jungleboyj: :-)
15:28:30 <mnaser> i think last time it was about the discussion of 'make it policy' or 'make it advisory'
15:28:44 <gmann> but how to document those or add in PTI is something we can continue discussing
15:28:48 <yoctozepto> mnaser: yes, I remember it like this as well
15:29:11 <gmann> so that we can decide how we can test or drop the lower bound consistently across all projects
15:29:39 <mnaser> if i remember, lower constraints purely was for the benefit of distro packagers
15:29:50 <yoctozepto> yup
15:29:53 <mnaser> but it seemed like... no distro packagers were actually relying on it
15:30:01 <gmann> yes, and only Debian use those
15:30:06 <mnaser> rdo didnt, canonical didnt
15:30:10 <yoctozepto> gmann: to some extent
15:30:14 <gmann> rest all mentioned they use upper constraints
15:30:28 <yoctozepto> well, l-c never tested any functional aspects
15:30:30 <yoctozepto> only units
15:30:31 <mnaser> but i dont think debian uses them as an actual part of a ci pipeline or something, more of like a 'reference'
15:30:36 <gmann> yes, only unit
15:30:49 <yoctozepto> and in many units projects just mock the real functionalities of libs
15:30:54 <yoctozepto> as they don't call out to services
15:31:05 <yoctozepto> so it's very low in usefulness
15:31:07 <mnaser> so honestly this feels like there isn't much consumers of those jobs, neither are they really a clear signal that things work
15:31:39 <mnaser> which personally makes me lean on the 'optional' in the project guide
15:31:54 <yoctozepto> +1
15:32:12 <gmann> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019918.html
15:32:27 <gmann> this describe the usage in Debian
15:33:41 <yoctozepto> yes, but I guess zigo assumes the tests are really worth it, i.e., lower-constraints actually test the service is usable
15:33:49 <yoctozepto> they are quite far from that
15:34:20 <yoctozepto> if we want to make l-c recommended/obligatory, we should enforce functional testing to give them meaning
15:34:23 <gmann> yoctozepto: zigo impression is if we ship those we keep them up to date. how we keep them up to date is testing part
15:35:03 <mnaser> personally im inclined to ship requirements.txt + upper-constraints.txt only and forget about lower
15:35:24 <gmann> yoctozepto: do you mean integration testing too tempest jobs?
15:35:31 <yoctozepto> I would love to ship l-c as well but in their current shape I don't think they produce enough value
15:35:39 <yoctozepto> gmann: yes
15:36:02 <yoctozepto> but that's going to consume many more resources for sure
15:36:03 <gmann> mnaser: that was the proposal in ML thread but we had more response on not to do that and try with "direct deps" only
15:36:45 <yoctozepto> yes, there is the issue of indirect deps as well
15:36:49 <gmann> this is start of this ML thread after oslo started it for oslo projects dropping l-c testing #link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019672.html
15:36:51 <yoctozepto> they are entirely up to distro packagers
15:37:12 <gmann> yoctozepto: indirect deps we can surely drop, no meaning of maintaining those
15:37:36 <yoctozepto> gmann: yes, but then distro packagers still have no idea what version really works
15:38:00 <yoctozepto> it is completely possible not to update some indirect dep and have a considerable vulnerability or crashing services
15:38:42 <gmann> yoctozepto: yeah but they know at least direct deps of what openstack deliverables they install and figure out the others from their maintainer
15:38:58 <fungi> i also realized that we stopped shipping a global set of lower bounds in openstack requirements several years ago, so now it's just exclusions
15:39:33 <gmann> yes, that time lower bounds maintenance were moved to project side
15:39:39 <fungi> so even if all projects shipped a tested lower-constraints.txt, deriving the lowest version of a package which would work for all of openstack would be nontrivial
15:39:51 <yoctozepto> ++
15:40:00 <fungi> also this makes integration testing of lower bounds basically intractable
15:40:07 <yoctozepto> ++++
15:40:10 <mnaser> this seems like a lot of work for something that is not being consumed by users
15:40:19 <gmann> fungi: yeah that is good point.
15:40:36 <mnaser> we're already low on resources if it's human or compute time
15:40:39 <fungi> we have a global upper-constraints.txt specifically because we do integration testing and need to agree on common versions to test
15:40:59 <mnaser> most distros and source builds rely on upper constraints too
15:41:00 <fungi> with no global lower bounds tracked which we know work for all projects, we can't really integration-test lower bounds
15:41:00 <gmann> mnaser: yup, lot of work :) most of my time in community wide goal goes for those
15:41:01 <yoctozepto> mnaser: and debian can just run tempest after they package and I know zigo does run various tests anyway
15:41:40 <mnaser> right -- so i think maybe we should stop worrying too much about it, unless the people who _want_ lower constraints want to show up and do the work
15:41:53 <yoctozepto> fungi: well, we can always CoNtAiNeRiSe
15:42:02 <yoctozepto> but I don't want to start this discussion now at all
15:42:06 <jungleboyj> mnaser: ++
15:42:16 <yoctozepto> mnaser: +1
15:42:36 <fungi> yoctozepto: i really don't see how container fairy dust solves this for libraries
15:42:42 <gmann> I am fine with that.
15:43:00 <yoctozepto> fungi: container per projects - no conflicts to resolve for these lower constraints
15:43:02 <mnaser> so question, is lower-constraints testing part of our pti right now?
15:43:05 <yoctozepto> per project*
15:43:09 <fungi> yoctozepto: so an oslo.config container?
15:43:25 <fungi> or just allow nova and cinder to use different versions of oslo.config and expect the oslo team to support that
15:43:27 <yoctozepto> fungi: right, I considered only top-level projects
15:43:35 <fungi> see, and THAT's the problem
15:43:39 <yoctozepto> fungi: yes, the second one! :D
15:44:02 <yoctozepto> mnaser: I think not?
15:44:11 <yoctozepto> guess I checked this the last meeting
15:44:16 <fungi> mnaser: lower bounds testing is not mentioned in the pti at all, no, it's completely optional
15:44:21 <gmann> so how we should proceed next, 1. TC resolution first or PTI mentioning explicitly "we do not need lower bound testing a mandatory things " 2. update it in ML and project start dropping if they want.
15:44:51 <fungi> i don't understand why the pti should become a list of what things we don't have to test
15:44:57 <yoctozepto> fungi ++
15:45:02 <fungi> seems like that would be a never-ending list
15:45:15 <yoctozepto> well, it's basically a complement of what we expect to test
15:45:19 <mnaser> fungi: well, we need to write the answer _somewhere_ for "do i do lower bound testing?"
15:45:20 <yoctozepto> so it's practically infinite
15:45:29 <gmann> well, because it is all confusion in most of the projects on we are doing this and we do not know whether to do or not
15:45:45 <yoctozepto> TC resolution and ML?
15:45:56 <yoctozepto> write a clear message
15:45:59 <gmann> if we were not doing this then it could be ok not to mention
15:46:02 <mnaser> so putting it into governance seems a bit overkill, PTI seems like it would be slightly less overkill
15:46:15 <gmann> TC resolution + ML is better at least
15:46:16 <fungi> the pti is part of our governance
15:46:26 <gmann> yes, pti is in governance
15:46:28 <yoctozepto> yes
15:46:30 <fungi> pti is our testing policy all projects are expected to follow
15:46:46 <mnaser> so rather than a resolution, if its going to be governance, then we put it in the PTI so it can be around the same information
15:46:50 <fungi> giodance for projects is mostly in the project teams guide, fwiw
15:46:57 <fungi> er, guidance
15:47:06 <jungleboyj> ++
15:47:10 <mnaser> if we end up with a gigantic list of things to test or not to test, we can maybe look at reorganizing things
15:47:11 <fungi> there is a section in the project teams guide on lower bounds tests
15:47:13 * yoctozepto did a giodance
15:47:17 <gmann> we can document like "this is things we used to test but not clear policy, this is consensus now"
15:48:08 <mnaser> https://governance.openstack.org/tc/reference/pti/python.html#constraints -- could we not just add a sentence in there and update ML?
15:48:40 <yoctozepto> based on how PTI looks now, I don't think it's worth adding l-c testing there
15:48:48 <mnaser> because our project team guide says
15:48:49 <yoctozepto> we can reword the PTI to mention upper-constraints
15:48:50 <mnaser> "Each project team may also optionally maintain a list of “lower bounds” constraints for the dependencies used to test the project in a lower-constraints.txt file. If the file exists, the requirements check job will ensure that the values it contains match the minimum values specified in the local requirements files, so when the minimums are changed lower-constraints.txt will need to be updated at the same time.
15:48:50 <mnaser> Per-project test jobs can be configured to use the file for unit or functional tests."
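As a concrete reference for the project-team-guide text quoted above, the per-project job it mentions is typically wired up through a tox environment along these lines. This is a generic sketch, not any particular project's tox.ini; stestr is assumed as the test runner.

```ini
# Sketch of the optional lower-constraints tox environment many projects carry;
# it runs the unit tests with dependencies pinned to the declared minimums.
[testenv:lower-constraints]
deps =
  -c{toxinidir}/lower-constraints.txt
  -r{toxinidir}/test-requirements.txt
  -r{toxinidir}/requirements.txt
commands = stestr run {posargs}
```

Dropping the testing, as discussed in this topic, would amount to removing this environment, the corresponding Zuul job, and optionally the lower-constraints.txt file itself.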
15:48:55 <yoctozepto> instead of just "constraints"
15:49:12 <gmann> yoctozepto: we can add u-c and tell about we do not do l-c testing
15:49:49 <gmann> and remove/update the existing statements from project-guide
15:49:58 <jungleboyj> Would seem that updating that would be sufficient.
15:50:01 <yoctozepto> but you realise it's a bit silly to add information about what is not being done in a place where people look for information on what should be done?
15:50:09 <fungi> #link https://docs.openstack.org/project-team-guide/dependency-management.html
15:50:19 <fungi> that seems like a reasonable place
15:50:25 <yoctozepto> PTI needs update to mention u-c only - that for sure
15:50:32 <gmann> yoctozepto: i think it is "we were doing something and we do not do it any more because of xyz reason"
15:50:36 <mnaser> ok, how about a simpler approach
15:50:44 <yoctozepto> PTG to rewrite part on l-c
15:50:52 <yoctozepto> (Project Team Guide*)
15:50:56 <mnaser> "Each project team may also optionally maintain a list of “lower bounds” constraints for the dependencies used to test the project in a lower-constraints.txt file."
15:50:58 <fungi> the dependency management chapter already talks a bunch about lower bounds testing and tracking, which will need updating anyway
15:51:01 <mnaser> we already say that it's optional
15:51:10 <mnaser> so we can simply update the ML saying: it's optional, you can drop it if you want.
15:51:19 <mnaser> and we don't have to make any more changes wrt to this
15:51:30 <mnaser> (because we all seem to agree on the fact that it's optional)
15:51:36 <yoctozepto> ah, yes, mnaser is right
15:51:47 <yoctozepto> I would just update the PTI
15:51:49 <yoctozepto> to mention u-c
15:51:53 <yoctozepto> not just constraints
15:51:56 <yoctozepto> to avoid any confusion
15:52:03 <yoctozepto> that's that
15:52:03 <gmann> yeah, that we can clarify for sure
15:52:15 <jungleboyj> ++
15:52:21 <gmann> only u-c will convey no l-c
15:52:58 <yoctozepto> anyway
15:53:00 <gmann> so 1. updating project-guide 2. ML update 3. update constraints to u-c in PTI ?
15:53:03 <yoctozepto> pti goes "Projects may opt into using the constraints in one or more of their standard targets via their tox.ini configuration."
15:53:06 <yoctozepto> "MAY OPT"
15:53:15 <yoctozepto> so we don't even require u-c
15:53:16 <yoctozepto> fwiw
15:53:31 <yoctozepto> should we clarify this?
15:53:33 <yoctozepto> and require?
15:53:33 <mnaser> i like the steps gmann proposed
15:53:49 <yoctozepto> mnaser: me too
15:53:51 <mnaser> i think on the #3 item, we can discuss in the next meeting, id like to have time for the rest of the topics if thats ok
15:53:54 <jungleboyj> mnaser: ++
15:53:56 <mnaser> so if we can move with 1 and 2..
15:54:01 <yoctozepto> mnaser: ++
15:54:13 <mnaser> and we can loop back on the PTI changes next week, if that works?
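To illustrate the PTI "may opt" wording quoted above, opting into upper constraints usually comes down to one deps line in tox.ini. The constraints URL shown is the published master-branch file; the TOX_CONSTRAINTS_FILE variable is the common convention, though individual projects may differ.

```ini
# Typical shape of a project opting into upper constraints via tox.ini.
[testenv]
deps =
  -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt
```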
15:54:30 <gmann> +1 make sense first two we can do now
15:54:32 <jungleboyj> WFM
15:54:44 <mnaser> next-up:
15:54:46 <mnaser> #topic PTL assignment for Xena cycle leaderless projects (gmann)
15:54:49 <gmann> and 3rd one in next week or PTG discussion if needed
15:55:25 <mnaser> looks like we have most ptl appointment patches
15:55:39 <gmann> #link https://etherpad.opendev.org/p/xena-leaderless
15:55:49 <mnaser> i invite tc-members to vote on them please https://review.opendev.org/q/projects:openstack/governance+is:open
15:55:58 <mnaser> at least, for the appointments
15:56:00 <gmann> we left with two projects rest other have patch up for PTL assignment
15:56:07 <gmann> keystone and Mistral
15:56:13 * yoctozepto looks for the patcheeeeees
15:56:14 <gmann> are left
15:56:18 <diablo_rojo> Can do!
15:56:46 <mnaser> mistral already gave us a heads up right?
15:56:52 <gmann> as discussed last week, i sent email on openstack-discuss but no response from Mistral team
15:56:54 <mnaser> so we only have DPL option
15:57:10 <gmann> mnaser: yeah, they said they will try DPL but now we need them to step up for required liaisons
15:57:34 <gmann> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-March/021068.html
15:57:46 <yoctozepto> gmann: did you push further on that mistral being a dep of tacker?
15:57:52 <gmann> may be i need to reach out to them via personal email or meeting if they have
15:58:14 <gmann> yoctozepto: not yet, I can add it in Tacker meeting agenda
15:58:51 <yoctozepto> gmann: cool, that would clear things up if mistral goes worse next cycle
15:58:55 <gmann> yoctozepto: i informed one of the Tacker Core from my company but I think notifying them on meeting and they discuss on deps is something they can do
15:59:09 <mnaser> so tacker is work in progress
15:59:11 <yoctozepto> and also to notify tacker about mistral's situation
15:59:21 <gmann> at least for long term maintenance or if they can help in Mistral
15:59:28 <mnaser> so
15:59:33 <mnaser> i think the other more concerning one is
15:59:34 <mnaser> keystone
15:59:38 <yoctozepto> ++
15:59:39 <gmann> eah
15:59:41 <gmann> yeah
15:59:48 <jungleboyj> mnaser: ++
16:00:04 <yoctozepto> so
16:00:11 <diablo_rojo> Agreed
16:00:11 <yoctozepto> dpl progress is..?
16:02:07 * yoctozepto could not find any mentions on the ml
16:04:07 <gmann> we can ping knikolla in case something is being discussed in keystone team.
16:04:13 <mnaser> right
16:04:17 <mnaser> i think we should try and move on that item
16:04:21 <mnaser> (sorry, i got sucked into something else)
16:04:24 <gmann> yeah
16:04:29 <yoctozepto> gerrit looks quite calm for keystone
16:04:30 <mnaser> for next weeks meeting
16:04:36 <mnaser> i will start with this item first
16:04:38 <yoctozepto> indeed
16:04:51 <yoctozepto> (so that we don't go into PTI details beforehand)
16:05:04 <mnaser> right
16:05:13 <mnaser> we're a bit over time, but any really important items?
16:05:14 <gmann> I will ping on knikolla and keystone team meanwhile
16:05:23 <yoctozepto> gmann: great
16:06:14 <fungi> related, has there been any more thought on what to do about the bit of the charter which advises a special election to fill the current vacancy on the tc?
16:06:26 <yoctozepto> oh, that's important too
16:06:38 <yoctozepto> none that I know of
16:07:24 <fungi> if the new tc is officially seated, then that's a discussion for them to have. if the new tc is not yet seated, then maybe defer
16:07:34 <mnaser> i've updated acls and we've merged the changes
16:07:49 <mnaser> but going to have to be something we need to indeed discuss
16:07:56 <mnaser> we're short on time in these meetings
16:07:59 <mnaser> #endmeeting