*** lbragstad8 is now known as lbragstad | 06:56 | |
frickler | tc-members: I suggest you consider taking a more active approach in fixing up Zuul config errors. the weekly reminders don't seem to have had much effect over the last year or so, and now we've got hundreds of additional errors due to the queue config change | 11:57 |
noonedeadpunk | I'd say a big problem is also backporting to all prior branches | 12:00 |
noonedeadpunk | So we likely will need to force-merge things to clean out zuul errors with regards to that | 12:02 |
fungi | which raises the question: if it's hard to backport a simple change to old branches, is it because their tests no longer work? and if so, then why are we keeping those branches open instead of eol'ing them? | 12:04 |
fungi | but also it's worth pointing out that many of the lingering errors are a good sign those branches can be eol'd: if they've gone many months with broken configuration then they've definitely not been testable for at least that long | 12:07 |
fungi | also the ~120 stable branch jobs which fail every day are rather a waste to keep running even though we know they're not fixed | 12:08 |
noonedeadpunk | well, I see we also have issues related to retired projects | 12:09 |
noonedeadpunk | openstack/networking-midonet is quite good example | 12:10 |
fungi | yes, those are the sorts of things which have gone unfixed for a very long time | 12:10 |
fungi | branches still depending on networking-midonet are clearly not running any jobs and nobody seems to care, so why do we keep them around? | 12:10 |
noonedeadpunk | but I do agree that eol'ing branches can be fair thing to do indeed | 12:11 |
noonedeadpunk | but on the other hand, my expectation as a user is that what we say at https://releases.openstack.org/ at least somewhat matches reality. Which is already not the case | 12:20 |
noonedeadpunk | (which is quite confusing to have stated there) | 12:20 |
noonedeadpunk | well, at least we have a note there that a project can EOL... | 12:21 |
noonedeadpunk | But then for how long should EM be kept? | 12:21 |
fungi | yes, i agree that our list of em branches needs to match reality | 12:33 |
*** dasm|off is now known as dasm | 12:48 | |
JayF | That's a problem in both directions | 13:56 |
JayF | Ironic, for example, has very good support for releases going all the way back to ussuri and train | 13:57 |
fungi | yeah, but some projects still have branches open and periodic jobs running/failing for pike | 14:04 |
knikolla[m] | would it be less of a problem if those jobs were passing (as in, the problem is that we're wasting resources), or is the problem the amount of jobs (we're lacking resources in general)? | 14:08 |
fungi | the two are intertwined. certainly running jobs which always fail and nobody is bothering to fix is of no use to anyone | 14:09 |
fungi | running jobs on branches nobody's going to introduce new patches into, even if they succeed, is also probably wasteful though | 14:10 |
fungi | keep in mind that "lacking resources" is a misleading way of looking at things. we run jobs on resources donated by companies who could be using those resources in other ways, so whether or not those resources are available to us, we need to be considerate stewards of them and not use what we don't need to | 14:11 |
knikolla[m] | that's a really good point | 14:12 |
knikolla[m] | are the opendev.org resources pooled equally for all projects running therein? | 14:13 |
JayF | Honestly I would just take a project having tests that fail that regularly as a sign that, in general, the project isn't being maintained, and that either we need to take action to maintain it if that's the contract with users, or take action to retire it | 14:14 |
fungi | effectively yes, we don't implement per-project quotas, though there are relative prioritizations for different pipelines and "fair queuing" algorithms which prioritize specific changes in order to keep a small number of projects from monopolizing our available quota when we're running at max capacity | 14:14 |
JayF | And unless we have some kind of magical power to summon developers, I don't think maintaining things is really an option | 14:15 |
fungi | right now we claim we've got "extended maintenance" on branches all the way back to pike, and i know there's been recent discussion of dropping pike specifically, i doubt queens, rocky, stein, or ussuri are in much better shape across most projects at this point | 14:16 |
fungi | er, train/ussuri as JayF pointed out seems to be in good shape in ironic at least | 14:16 |
fungi | but keeping stein and older open is certainly questionable at this point | 14:17 |
JayF | One of the things I backported to get ironic in good shape for train was actually a change to nova, so they have functioning CI in train as well | 14:17 |
JayF | I will look at this specifically for ironic, and if ironic has old branches open - older than train - I can at least take action to fix things there. | 14:18 |
knikolla[m] | ++ on dropping pike. i think we should fix a set number of releases that we keep on EM. we can't scale the amount of jobs with each new release while not increasing testing capacity. | 14:18 |
frickler | pike-eol is almost done, just blocked by some not-too-well maintained projects https://review.opendev.org/q/topic:pike-eol | 14:18 |
fungi | right now we have 12 branches open for projects (if you count master/2023.1/antelope) | 14:19 |
fungi | 1 for development, 3 under stable maintenance (wallaby will transition to em shortly after zed releases), and 8 in "extended maintenance" right now | 14:20 |
fungi | or 7 if pike eol is finished around the zed release/wallaby transition to em | 14:21 |
knikolla[m] | frickler: do we need to wait indefinitely for a ptl to approve the eol transition for the remaining teams? | 14:21 |
fungi | from a procedural standpoint, the tc can decide this if they want | 14:21 |
frickler | knikolla[m]: discuss this with the release team maybe | 14:22 |
fungi | and the tc has delegated that decision to the release managers, so they could decide to go forward with it as well even if ptls are in disagreement | 14:22 |
fungi | but yeah, in this case it's less a matter of project leaders disagreeing, and more a matter of them being completely unresponsive | 14:23 |
knikolla[m] | i'll bring it up with the release team | 14:23 |
gmann | we discussed it at the last PTG, or the yoga PTG I think, where I raised concern over the ever-increasing number of EM branches and it becoming difficult from a QA perspective too | 14:23 |
gmann | the proposal was to limit those, but it seems everyone, including the release team, was ok with the current model | 14:24 |
gmann | it is 12 branches currently but it will keep growing by 1 every 6 months or so | 14:24 |
fungi | when the em model was first proposed, we said at the time that if jobs started failing on branches and nobody felt like fixing them, we'd drop testing or eol the branches | 14:25 |
knikolla[m] | i think the current model is unsustainable | 14:25 |
knikolla[m] | ++ that's my memory too | 14:25 |
JayF | We do need to be careful to make sure that we're looking at the whole range of openstack projects, and not just the ones that are noisy and broken right now. We should fix the problem without damaging autonomy of projects that are operating properly. | 14:25 |
JayF | Isn't this just a resourcing issue? I imagine folks aren't leaving old broken branches up intentionally, but cleaning up older releases has to be pretty low on some folks' priority lists | 14:26 |
fungi | to put this in perspective, i've been working on tests of production data imports for an upcoming migration from mailman v2 to v3. importing lists.openstack.org data takes roughly 2.5 hours. more than half of that time is importing the archive for the stable branch failures mailing list | 14:26 |
knikolla[m] | JayF: ++ totally. I'd be in favor of an opt-in model for keeping branches that are EM. | 14:26 |
fungi | the em process already supports that | 14:27 |
gmann | true and agree | 14:27 |
fungi | the problem is that the incentive was inverted. the idea was that projects would ask to eol their older branches ahead of the project-wide eol. instead we need to be eol'ing branches for projects whose jobs stop working and they don't ever fix them. if there's nobody around to work on fixing testing for old branches, there's nobody around to request we eol that branch either | 14:28 |
gmann | coming back to zuul config error | 14:29 |
gmann | fungi: is everything in this etherpad? this is what I add in my weekly summary report and ask projects to fix https://etherpad.opendev.org/p/zuul-config-error-openstack | 14:29 |
gmann | and it seems the old ones are fixed | 14:29 |
gmann | so the weekly reminder is working to some extent | 14:29 |
gmann | this seems new right? 'extra keys not allowed @ data['gate']['queue']' | 14:31 |
fungi | gmann: yes. unfortunately, deprecating pipeline-specific queue syntax in zuul for 19 months and spending an entire release cycle repeatedly notifying projects that their jobs are going to break did not get those addressed before the breaking change finally merged to zuul | 14:32 |
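For context, here is a minimal before/after sketch of the queue syntax change producing the error quoted above (editor's illustration; the queue and job names are placeholders):

```yaml
# Old, now-removed syntax: the queue declared inside a pipeline section.
# After the Zuul change this yields:
#   extra keys not allowed @ data['gate']['queue']
- project:
    gate:
      queue: integrated
      jobs:
        - openstack-tox-py310

# Current syntax: the queue is declared once at the project level.
- project:
    queue: integrated
    gate:
      jobs:
        - openstack-tox-py310
```

The fix itself is typically a one-line move in each affected project stanza.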
fungi | i've answered questions about probably 10 different projects where contributors were confused that no jobs were running on some branch, and in every case it's been that | 14:33 |
gmann | fungi: ok, maybe we can put these in the etherpad and ask projects to fix them with priority in the weekly report too. let's see | 14:34 |
gmann | once we have the project list in the etherpad, we can ask one or two tc-members to volunteer to drive this and ask/help projects to fix these soon. | 14:40 |
fungi | the queue fixes are very simple, but of course if your stable branch testing is already broken then you can't just merge that without fixing whatever other bitrot has set in | 14:41 |
gmann | yeah, if those are broken anyways no change can be merged so there is no extra breakage caused by zuul error | 14:42 |
gmann | I will ask in today's meeting if any tc-members volunteer to drive this in a more aggressive way, along with the ML. maybe proposing the fixes or so | 14:49 |
fungi | thanks! hopefully that'll help | 14:49 |
fungi | i considered volunteering to bypass zuul and directly merge fixes for a bunch of those more trivial errors, but honestly if testing on those branches is too broken to merge anything anyway then maybe it's a sign those branches can be eol'd (at least in specific projects who aren't fixing them) | 14:50 |
knikolla[m] | i can take this, i've rarely ventured into zuul territories. | 14:50 |
knikolla[m] | ++ on eoling | 14:51 |
fungi | knikolla[m]: if you have specific questions for basic classes of fixes for those errors, let me know and i'm happy to get you examples | 14:51 |
gmann | yeah that is the general criterion to move them to EOL, but this zuul config error fix can highlight it | 14:51 |
gmann | knikolla[m]: thanks | 14:52 |
fungi | they're usually very straightforward things like some required project in a job no longer exists because of a repository rename, or a job definition no longer exists because a branch was eol'd | 14:52 |
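As an editor's illustration of the first class of error fungi mentions, a job on an old branch might still list a retired repository (the job name here is made up; openstack/networking-midonet is the retired repo noonedeadpunk cited earlier):

```yaml
# Hypothetical job definition lingering on an old stable branch.
# Zuul reports a config error because the required project no longer exists.
- job:
    name: example-networking-dsvm-full
    parent: devstack
    required-projects:
      - openstack/networking-midonet   # retired; drop this entry or the job
```

The usual fix is to delete the stale entry (or the whole job/template reference) on every branch that still carries it, or to EOL the branch if nothing on it is worth keeping testable.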
noonedeadpunk | Tbh right now force-merging seems easier, as we have too many errors at the moment which can be solved quite easily. But with EOLing branches this process will take a while... | 14:52 |
noonedeadpunk | so it quite depends on how we want to clean up the zuul config | 14:53 |
noonedeadpunk | and eol-ing can be taken as a separate track imo | 14:53 |
fungi | i'm more concerned about moving fixes to branches people are actually trying to use. if those errors linger a bit longer while we increase the priority for eol decisions, i'm okay with that | 14:53 |
gmann | i do not think we need to fix the ones on EOL or EOLing branches either | 14:53 |
JayF | gmann: we should mention in meeting -> re: chair elections, only 5 out of 9 of us have voted | 14:53 |
JayF | gmann: with only 4 working days left | 14:53 |
gmann | JayF: yeah, I have added it in meeting agenda | 14:53 |
JayF | ty | 14:53 |
gmann | good reminder | 14:53 |
gmann | tc-members: weekly meeting in ~5 min from now | 14:55 |
fungi | one way to handle the mass zuul fixes would be to prioritize maintained stable branches, and then set a deadline for any in em state where we'll just eol those branches if the projects don't get fixes merged for them before then | 14:55 |
gmann | if their jobs are broken then it will be easy to move them to EOL. if there is no review then it is difficult; maybe elodilles will just merge it | 14:56 |
gmann | I am sure there will be many EM repos where the gate is already broken for other things | 14:57 |
fungi | absolutely | 14:58 |
slaweq | @fungi gmann some time ago I was trying to fix zuul config issues related to neutron, but most of those patches were to very old branches like pike, and ci for the projects on those stable branches was already broken | 14:59 |
slaweq | so it was really hard to get things merged, and I don't have the time and knowledge about all those projects to fix their gates there :/ | 14:59 |
gmann | slaweq: yeah, hopefully pike will be EOL soon | 14:59 |
slaweq | I still have it in my todo but there is always something more important todo | 14:59 |
gmann | +1 great | 14:59 |
slaweq | I will try to get back to it :) | 14:59 |
gmann | #startmeeting tc | 15:00 |
opendevmeet | Meeting started Thu Sep 29 15:00:03 2022 UTC and is due to finish in 60 minutes. The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
opendevmeet | The meeting name has been set to 'tc' | 15:00 |
gmann | #topic Roll call | 15:00 |
JayF | o/ | 15:00 |
gmann | o/ | 15:00 |
slaweq | gmann I have another meeting today at the same time as our tc meeting, so I will just be lurking here for the first 30 minutes | 15:00 |
gmann | slaweq: ack. thanks | 15:00 |
jungleboyj | o/ | 15:00 |
slaweq | o/ | 15:01 |
gmann | today agenda #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting | 15:01 |
jungleboyj | As a lurker I guess. :-) | 15:01 |
knikolla[m] | o/ | 15:01 |
gmann | jungleboyj: good to have you in meetings | 15:02 |
noonedeadpunk | o/ | 15:02 |
spotz_ | o/ | 15:02 |
gmann | let's start | 15:02 |
gmann | #topic Follow up on past action items | 15:02 |
rosmaita | o/ | 15:03 |
gmann | there is one action item | 15:03 |
gmann | JayF to send the ics file on TC PTG ML thread | 15:03 |
JayF | There is no TC PTG ML thread to send it to, afaict :) | 15:03 |
gmann | JayF: this one you can reply to #link https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030575.html | 15:03 |
JayF | ack; got it, I'll take care of that this morning | 15:04 |
gmann | perfect, thanks JayF | 15:04 |
gmann | #topic Gate health check | 15:04 |
gmann | any news on gate | 15:04 |
gmann | devstack and grenade setup for 2023.1 cycle is almost done #link https://review.opendev.org/q/topic:qa-zed-release | 15:05 |
slaweq | nothing from me | 15:05 |
gmann | from failure side, I have not noticed any more frequent one | 15:05 |
slaweq | starting 2023.1 cycle we should have "skip release" grenade jobs in projects, right? | 15:06 |
gmann | yes, we have setup that in grenade side | 15:06 |
slaweq | maybe we should remind teams that they should add such jobs in their queues? wdyt? | 15:06 |
gmann | #link https://review.opendev.org/c/openstack/grenade/+/859499/2/.zuul.yaml#378 | 15:07 |
gmann | not many projects, maybe 5-6, have those already, but yes we should remind them | 15:07 |
slaweq | I can send email about it | 15:07 |
gmann | slaweq: thanks. | 15:07 |
gmann | #action slaweq to send email to projects on the openstack-discuss ML about adding the grenade-skip-level job (or preparing a project-specific job using this base job) in their gate | 15:08 |
gmann | projects can use this as a base job, as done in the manila project | 15:09 |
gmann | #link https://github.com/openstack/manila/blob/486289d27ee6b3892c603bd75ab447f022d25d13/zuul.d/grenade-jobs.yaml#L95 | 15:09 |
noonedeadpunk | gmann: just to ensure I got it right - basically an upgrade job from Y to A | 15:09 |
gmann | this needs to be updated. I will do it after the meeting | 15:09 |
gmann | noonedeadpunk: yes, stable/yoga to current master | 15:09 |
gmann | I think other than the grenade in-tree supported projects, manila is the first one to set up the job so far | 15:11 |
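For reference, a hypothetical sketch of what such a project-specific job can look like, parented on the grenade-skip-level base job mentioned above (the service and plugin names are placeholders, not manila's actual definition):

```yaml
- job:
    name: example-grenade-skip-level
    parent: grenade-skip-level
    description: |
      Upgrade the example service from the previous SLURP release
      directly to the current branch, skipping one intermediate release.
    required-projects:
      - opendev.org/openstack/example-service
    vars:
      devstack_plugins:
        example-service: https://opendev.org/openstack/example-service
```

Per the discussion that follows, projects could add this to their check/gate pipelines non-voting at first, so the Y->2023.1 path gets exercised.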
rosmaita | but to be clear, we don't actually support skip-release-upgrades until 2023.1, because that is our first 'tick' release | 15:11 |
gmann | also we need to make sure it does not run on zed gate | 15:11 |
gmann | rosmaita: so we will say 2023.1 is the first SLURP, right? meaning it can be upgraded to from both stable/yoga and stable/zed? | 15:12 |
gmann | that is what i remember | 15:12 |
arne_wiebalck | o/ | 15:12 |
slaweq | so those skip-level jobs can be non-voting in the 2023.1 cycle, but I think projects should start adding them to see how things work (or don't) for them | 15:12 |
rosmaita | right, 2023.1 is supposed to be designed to be able to go directly to 2024.1 | 15:13 |
gmann | mentioned here too #link https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html#example-sequence | 15:13 |
rosmaita | right, so A (or 2023.1) is the first SLURP, which i thought meant you will be able to go from 2023.1 to 2024.1, not that you should be able to upgrade to 2023.1 from yoga | 15:14 |
gmann | "Our letter-based release naming scheme is about to wrap back around to A, so the proposal is that the “new A” release be the first one where we enforce this scheme. Y->A should be a “dress rehearsal” where we have the jobs enabled to help smoke out any issues, but where hard guarantees are not yet made." | 15:14 |
gmann | rosmaita: you are right | 15:14 |
rosmaita | ok, cool | 15:14 |
rosmaita | (just wanted to be sure) | 15:14 |
noonedeadpunk | +1 | 15:14 |
noonedeadpunk | makes sense to me - was just double-checking I remembered it right | 15:14 |
gmann | let's ask projects to add the jobs anyway | 15:15 |
gmann | thanks | 15:15 |
knikolla[m] | makes sense. | 15:15 |
jungleboyj | ++ | 15:15 |
rosmaita | yes, the "dress rehearsal" is a good metaphor for this | 15:15 |
JayF | It's not a dress rehearsal if we don't actually rehearse, so I don't think calling it a rehearsal changes any actions. | 15:15 |
gmann | slaweq: let's ask, and if any project is facing issues they can keep it non-voting, but let's tell them to make it voting | 15:15 |
slaweq | ++ | 15:15 |
knikolla[m] | i guess more of a canary | 15:15 |
gmann | stable/wallaby to stable/yoga went pretty well, so maybe Y->2023.1 will also | 15:16 |
JayF | I know many operators in practice have been skipping releases frequently. It'll be interesting to see if CI unearths anything that they haven't exposed. | 15:16 |
gmann | yeah that is our intention, but let's see how our testing is on upgrades | 15:17 |
gmann | and it will give us an opportunity to extend it | 15:17 |
jungleboyj | *fingers crossed* | 15:17 |
gmann | ok, moving next | 15:18 |
gmann | Bare 'recheck' state | 15:18 |
gmann | #link https://etherpad.opendev.org/p/recheck-weekly-summary | 15:18 |
gmann | slaweq: go ahead please | 15:18 |
slaweq | I don't have much to say today | 15:19 |
slaweq | I contacted few projects last week | 15:19 |
gmann | +1 | 15:20 |
slaweq | lets see if things for them will improve now | 15:20 |
gmann | yeah, it is getting noticed. tacker project contacted me to get more clarification on it and they understand and will take action | 15:20 |
gmann | thanks slaweq for driving this and helping to save the infra resources | 15:21 |
JayF | Can the tool used to generate these spit out specific patch URLs? I'm mainly curious (with my PTL hat on) who the 'stragglers' are in Ironic that are not yet putting reasons for rechecks | 15:21 |
JayF | and if that tool exists, we can potentially offer it to other project leaders to enable them to chat about specific instances | 15:21 |
gmann | one is this list https://etherpad.opendev.org/p/recheck-weekly-summary#L21 | 15:22 |
gmann | I hope that is bare recheck one only | 15:22 |
slaweq | JayF scripts are here https://github.com/slawqo/rechecks-stats | 15:22 |
JayF | ack slaweq, will look into it thanks :D | 15:22 |
slaweq | yw | 15:23 |
gmann | thanks | 15:23 |
gmann | anything else on bare recheck? | 15:23 |
knikolla[m] | i noticed that someone auto-translated the latter half of that etherpad | 15:23 |
fungi | if you need it rolled back to an earlier state, let me know | 15:23 |
fungi | there's an admin api i can use to undo revisions | 15:23 |
gmann | ok | 15:25 |
gmann | Zuul config error | 15:25 |
gmann | #link https://etherpad.opendev.org/p/zuul-config-error-openstack | 15:25 |
gmann | we talked about it before meeting | 15:25 |
gmann | and we need a more aggressive approach to get them fixed or fix them ourselves. | 15:25 |
gmann | knikolla[m] volunteered to drive this from the TC side by talking to project leads/liaisons or fixing them | 15:26 |
gmann | thanks knikolla[m] for that | 15:26 |
knikolla[m] | fungi: thanks. i looked at the revisions and doesn't seem necessary. the translation was appended, rather than replacing the content. weird. | 15:26 |
gmann | but I will suggest that we as tc-members can also each take some of the projects and fix them | 15:26 |
gmann | like slaweq will do for neutron | 15:26 |
gmann | I volunteer to do it for nova, qa, glance, and keystone | 15:26 |
gmann | please add your name in etherpad #link https://etherpad.opendev.org/p/zuul-config-error-openstack#L26 | 15:27 |
JayF | I am surprised to see Ironic hits on that list; I'll ensure those are corrected. | 15:27 |
fungi | to restate what i suggested before the meeting, i recommend prioritizing maintained stable branches. for em branches you could take teams not fixing those as an indication they're ready for early eol | 15:27 |
gmann | JayF: cool, thanks | 15:27 |
knikolla[m] | fungi: ++ | 15:27 |
clarkb | one common cause of these errors in the past has been project renames requested by openstack | 15:27 |
TheJulia | JayF: I think the ironic ones are branches I've long expected to be closed out | 15:27 |
JayF | TheJulia: I will likely propose at next Ironic meeting we EOL a bunch of stable branches | 15:28 |
TheJulia | jfyi | 15:28 |
clarkb | it would be a good idea for everyone requesting project renames to understand they have a little bit of legwork to complete the rename once gerrit is done | 15:28 |
gmann | yeah, it will also unhide the EM branches' gate status if those are broken and unnoticed | 15:28 |
TheJulia | JayF: just do it :) | 15:28 |
gmann | clarkb: sure, let me add it as an explicit step in the project-team-guide repo retirement process | 15:28 |
fungi | gmann: that's an excellent idea, thanks | 15:29 |
gmann | but let's each take 2-3 projects ourselves to fix these, and we will 1. fix many projects 2. trigger/motivate other projects/contributors to do it | 15:30 |
gmann | #action gmann to add a separate step in the repo rename process about fixing the renamed repo in zuul jobs across stable branches also | 15:31 |
gmann | anything else on zuul config errors or gate health? | 15:31 |
clarkb | yesterday I updated one of the base job roles | 15:31 |
clarkb | the motivation was to speed up jobs that have a large number of required projects like those for OSA. Doesn't seem to have hurt anything and some jobs should be a few minutes quicker now | 15:32 |
gmann | nice | 15:32 |
clarkb | In general, there are some ansible traps that can increase the runtime of a job unnecessarily. Basically loops with large numbers of inputs where each iteration shouldn't be expensive, so the ansible per-task overhead ends up being the only real overhead | 15:32 |
clarkb | be on the lookout for those and if you have them in your jobs rewriting to avoid loops in ansible can speed things up quite a bit | 15:33 |
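As a made-up illustration of the loop pattern clarkb describes (task names and variables are invented), the per-item task overhead dominates when each iteration does trivial work:

```yaml
# Slow pattern: one Ansible task invocation per file, so the per-task
# overhead is paid for every loop item.
- name: Compress job logs one at a time
  command: "gzip {{ item }}"
  loop: "{{ found_logs.files | map(attribute='path') | list }}"

# Faster pattern: the same work done in a single task.
- name: Compress job logs in one pass
  shell: "find {{ log_dir }} -type f -name '*.log' -exec gzip {} +"
```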
fungi | there was also a recent change to significantly speed up log archiving, right? | 15:33 |
clarkb | fungi: yes that was motivated by neutron jobs but really all devstack jobs were impacted | 15:33 |
fungi | (along the same lines) | 15:33 |
clarkb | it had the same underlying issue of needlessly expensive ansible loops. | 15:33 |
clarkb | Anyway this is more of a "if your jobs are slow this is something to look for" thing | 15:33 |
gmann | +1, thanks for improvement and updates clarkb | 15:34 |
gmann | #topic 2023.1 cycle PTG Planning | 15:34 |
gmann | we have etherpad ready to add the topics | 15:34 |
gmann | #link https://etherpad.opendev.org/p/tc-leaders-interaction-2023-1 | 15:35 |
gmann | #link https://etherpad.opendev.org/p/tc-2023-1-ptg | 15:35 |
gmann | with schedule information too | 15:35 |
*** rosmaita1 is now known as rosmaita | 15:35 | |
gmann | and JayF will send the icals over ML too | 15:35 |
gmann | on operator-hour sessions, we have 7 projects booked till now | 15:35 |
gmann | I will send another reminder to projects sometime next week | 15:36 |
fungi | if anyone needs operator hour tracks created in ptgbot, feel free to ping me in #openinfra-events | 15:36 |
gmann | but feel free to reach out to projects you are contributing to or in touch with to book it | 15:36 |
gmann | yeah | 15:37 |
fungi | (track creation/deletion is an admin operation, the rest should be pretty self-service though) | 15:37 |
gmann | yeah. | 15:38 |
gmann | anything else on PTG things for today? | 15:38 |
gmann | #topic 2023.1 cycle Technical Election & Leaderless projects | 15:39 |
gmann | Leaderless projects | 15:39 |
gmann | #link https://etherpad.opendev.org/p/2023.1-leaderless | 15:39 |
gmann | no special update on this, but all the patches for PTL appointments are up for review; please provide your feedback there | 15:39 |
gmann | TC Chair election | 15:40 |
gmann | as you know we have TC chair election going on | 15:40 |
gmann | you might have received email for CIVS poll, JayF is handling the election process | 15:41 |
gmann | 1. please let JayF know if you have not received the email | 15:41 |
fungi | (after checking your junk/spam folder) | 15:41 |
gmann | +1 | 15:41 |
spotz_ | Everyone should be opt-in at this point:) | 15:42 |
gmann | 2. vote if not yet done. we do not need to wait until the last day; we can close it early once all members have voted | 15:42 |
gmann | yeah opt-in is required | 15:42 |
jungleboyj | :-) | 15:42 |
gmann | and just to restate: once voting is closed, we will merge the winner's nomination and abandon the other one | 15:43 |
rosmaita | thanks to gmann and knikolla[m] for making an election necessary! | 15:43 |
rosmaita | (i am not being sarcastic, i think it is a good thing) | 15:43 |
knikolla[m] | :) | 15:43 |
gmann | and at the PTG we can discuss how to make the nomination/election process better/more formal. | 15:44 |
gmann | +1, agree | 15:44 |
gmann | anything else on this? | 15:45 |
jungleboyj | +1 And also gmann thank you for putting your name in again. All your leadership recently is GREATLY appreciated. | 15:45 |
gmann | thanks | 15:45 |
gmann | #topic Open Reviews | 15:45 |
gmann | one is this one #link https://review.opendev.org/c/openstack/governance/+/859464 | 15:46 |
gmann | Dedicate Zed release to Ilya Etingof | 15:46 |
gmann | this is a great idea, and thanks iurygregory for the patch | 15:46 |
gmann | please review and cast your vote | 15:46 |
JayF | I'd like to thank fungi for suggesting that, and iurygregory for owning putting up the patch. Ilya Etingof was a valued member of the Ironic community and this is a deserving tribute. | 15:46 |
rosmaita | ++ | 15:47 |
gmann | ++ true | 15:47 |
arne_wiebalck | JayF: ++ | 15:47 |
spotz_ | ++ | 15:47 |
fungi | the 7 day waiting period for motions as described in the charter puts the earliest possible approval date right before the release, so merging it in a timely fashion will be good | 15:47 |
gmann | yeah | 15:48 |
jungleboyj | ++ | 15:48 |
fungi | the feedback so far has been all in favor, so the foundation marketing crew are operating under the assumption it will merge on schedule | 15:48 |
fungi | as far as mentioning the dedication in press releases about the release, and such | 15:48 |
jungleboyj | One of the special things about this community. | 15:49 |
knikolla[m] | the majority voted, so setting a reminder for when the 7 days are up. | 15:49 |
knikolla[m] | jungleboyj: ++ | 15:49 |
gmann | sure, I think on the 7th we can merge it, but we will keep eyes on it | 15:49 |
gmann | 7th oct | 15:49 |
gmann | or maybe the 6th, i will check | 15:50 |
gmann | another patch to review is this #link https://review.opendev.org/c/openstack/governance/+/858690 | 15:50 |
knikolla[m] | 4th | 15:50 |
fungi | charter says 7 calendar days, which would be october 4 | 15:50 |
gmann | yeah, that is even better. I will run the script to get it merged asap | 15:50 |
gmann | migrating the CI/CD jobs to ubuntu 22.04 | 15:51 |
fungi | that's the day before the release, just to be clear | 15:51 |
gmann | it is in good shape now, thanks to noonedeadpunk for updating | 15:51 |
gmann | fungi: ack | 15:51 |
gmann | that is all from the agenda; we have 9 min left if anyone has anything else to discuss | 15:51 |
gmann | our next meeting will be on 6th Oct and a video call | 15:53 |
JayF | Those nine minutes would be a great opportunity for the 4 folks who haven't voted to do so :) | 15:53 |
gmann | +1 | 15:53 |
knikolla[m] | ++ | 15:53 |
spotz_ | ++:) | 15:53 |
gmann | let's vote if you have not done so yet | 15:54 |
gmann | thanks everyone for joining, let's close the meeting | 15:54 |
gmann | #endmeeting | 15:54 |
opendevmeet | Meeting ended Thu Sep 29 15:54:50 2022 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:54 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/tc/2022/tc.2022-09-29-15.00.html | 15:54 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/tc/2022/tc.2022-09-29-15.00.txt | 15:54 |
opendevmeet | Log: https://meetings.opendev.org/meetings/tc/2022/tc.2022-09-29-15.00.log.html | 15:54 |
slaweq | o/ | 15:54 |
spotz_ | Thanks all | 15:54 |
JayF | We're up to 8 outta 9. I don't know who the one is but we're close :D | 15:55 |
jungleboyj | I will be OOO next week. Just FYI. | 15:55 |
* knikolla[m] starts the drumroll | 15:55 | |
JayF | I sent out those ICS files. Let me know if you have any issues with it. | 16:50 |
*** chkumar|rover is now known as raukadah | 17:15 | |
gmann | JayF: it seems i received only the Monday session, 'PTG: TC and Community Leaders session' | 17:31 |
gmann | did you send two emails or just a single one? | 17:31 |
JayF | gmann: That one ICS file should import three meetings. | 17:31 |
JayF | gmann: I tested it working on google calendar. | 17:31 |
JayF | If it didn't work for you, I'd be interested to know what calendaring system you use. | 17:32 |
knikolla[m] | Worked for me on iphone | 17:34 |
gmann | I tried in zoho and in google calendar also | 17:37 |
clarkb | fastmail is able to show all three | 17:39 |
JayF | gmann: in google calendar, make sure you go settings -> Import & export -> import file that way | 17:40 |
JayF | That's how it worked on gmail in my testing. | 17:40 |
* JayF will reply to ML with instructions if that works for you and whatever-other-path did not | 17:40 | |
gmann | JayF: yes, it worked via Import way | 17:45 |
JayF | ack, I'll respond with that information | 17:46 |
gmann | I was able to import all 3 events into the calendar | 17:46 |
gmann | thanks | 17:46 |
JayF | o/ email is out | 17:51 |
opendevreview | Ghanshyam proposed openstack/project-team-guide master: Add openstack-map and zuul jobs updates step in repository handling https://review.opendev.org/c/openstack/project-team-guide/+/859886 | 18:44 |
gmann | fungi: clarkb: ^^ please check if there is anything else we need to mention | 18:59 |
clarkb | gmann: left a suggestion | 19:02 |
gmann | slaweq: the manila-grenade-skip-level job is also passing; you can give this ref in your email when asking projects to start the grenade skip-level job https://review.opendev.org/c/openstack/manila/+/859875 | 19:02 |
gmann | clarkb: checking | 19:02 |
gouthamr | gmann++ | 19:06 |
opendevreview | Ghanshyam proposed openstack/project-team-guide master: Add openstack-map and zuul jobs updates step in repository handling https://review.opendev.org/c/openstack/project-team-guide/+/859886 | 19:09 |
gmann | clarkb ^^ updated | 19:09 |
gmann | gouthamr: about to ping for review on manila job :) | 19:09 |
gouthamr | gmann: 'm on it :) thanks! | 19:12 |
*** tosky_ is now known as tosky | 21:24 | |
*** dasm is now known as dasm|off | 21:24 |