opendevreview | Merged openstack/project-team-guide master: Add SLURP release notes strategy https://review.opendev.org/c/openstack/project-team-guide/+/843457 | 02:48 |
-opendevstatus- NOTICE: Zuul job execution is temporarily paused while we rearrange local storage on the servers | 16:55 | |
knikolla | tc-members: reminder, weekly meeting in ~45 mins. | 17:15 |
-opendevstatus- NOTICE: Zuul job execution has resumed with additional disk space on the servers | 17:44 | |
knikolla | #startmeeting tc | 18:00 |
opendevmeet | Meeting started Tue Aug 15 18:00:09 2023 UTC and is due to finish in 60 minutes. The chair is knikolla. Information about MeetBot at http://wiki.debian.org/MeetBot. | 18:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 18:00 |
opendevmeet | The meeting name has been set to 'tc' | 18:00 |
knikolla | #topic Roll Call | 18:00 |
JayF | o/ | 18:00 |
knikolla | o/ | 18:00 |
knikolla | Hi all, welcome to the weekly meeting of the OpenStack Technical Committee | 18:00 |
spotz[m] | o/ | 18:00 |
knikolla | A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct | 18:00 |
dansmith | o/ | 18:00 |
knikolla | Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee | 18:00 |
gmann | o/ | 18:00 |
knikolla | We have one noted absence, slaweq | 18:00 |
noonedeadpunk | o/ | 18:01 |
rosmaita | o/ | 18:02 |
knikolla | #topic Follow up on past action items | 18:02 |
knikolla | We have one action item from the last meeting | 18:02 |
knikolla | rosmaita to review guidelines patch and poke at automating it | 18:02 |
gmann | I think guidelines change is merged now | 18:03 |
rosmaita | yes | 18:03 |
rosmaita | no action on automating, though | 18:03 |
knikolla | ack, thanks rosmaita | 18:03 |
knikolla | #topic Gate health check | 18:03 |
knikolla | Any updates on the state of the gate? | 18:04 |
fungi | source of the occasional enospc failures was tracked down to missing data disks on the executors (we overlooked giving them dedicated storage when we replaced them in early july) | 18:04 |
fungi | should be fixed as of about 20 minutes ago | 18:04 |
dansmith | steadily getting better but still not "good enough" IMHO. We're merging things, which is good, but we're still hitting plenty of fails | 18:04 |
dansmith | I just got something that is months old to merge after 28 rechecks | 18:05 |
rosmaita | i have occasionally seen jobs pass | 18:05 |
rosmaita | just not all at once | 18:06 |
* fungi sees lots of jobs passing, but yes there are also lots of jobs | 18:06 | |
gmann | I have seen improvement in tempest and nova gate at least but did not check cinder | 18:06 |
gmann | many improvement changes merged in the last month or so are helping for sure, but yes, the gate is not 100% stable | 18:07 |
fungi | there's a nova change in the gate right now running 24 jobs. if those jobs average a 1% failure rate then that's basically a 50% chance that the change can make it through check and gate in one go | 18:07 |
dansmith | fungi: yeah | 18:07 |
dansmith | one I'm watching right now has a legit timeout in tempest and a volume test failure in one run of one patch | 18:07 |
fungi | 2% average failure rate means nothing merges | 18:08 |
dansmith | (i.e. two failing jobs on one patch which doesn't actually change anything) | 18:08 |
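To make the arithmetic fungi sketches above concrete, here is a minimal model of it (a sketch assuming independent job failures; the 24-job count and the 1-2% rates are the figures mentioned in the discussion, not measurements):

```python
# A change has to pass every job once in the check pipeline and again in the
# gate pipeline, so small per-job failure rates compound quickly. Independent
# failures are assumed purely for illustration.
def one_shot_merge_probability(num_jobs: int, failure_rate: float) -> float:
    pass_one_pipeline = (1 - failure_rate) ** num_jobs
    return pass_one_pipeline ** 2  # once through check, once through gate

print(one_shot_merge_probability(24, 0.01))  # ~0.62 -- already close to a coin flip
print(one_shot_merge_probability(24, 0.02))  # ~0.38 -- most changes need rechecks
```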
gmann | the volume-backed server rebuild test was also failing many times; it's now refactored, so let's see if that helps or not | 18:08 |
knikolla | ugh :/ | 18:08 |
dansmith | so yeah, it's better but we're still headed for trouble come m3 I think | 18:08 |
gmann | yeah | 18:09 |
gmann | it's not going to be smooth for sure, but at least somewhat better | 18:09 |
dansmith | yes, improved for sure, just not enough | 18:09 |
fungi | looks like we've also got a bunch of leaked servers stuck "deleting" in rackspace too, which is a big dent in our available quota. need to find time to ask them to clean those up | 18:09 |
gmann | we have seen times during the last month where hardly anything merged and everything was stuck | 18:09 |
knikolla | understood. let me know if there's something i can do to help. I have some extra free cycles this week and the next. | 18:09 |
dansmith | three weeks ago we were at "no point in trying" | 18:10 |
gmann | yeah | 18:10 |
dansmith | knikolla: I think you were going to look at keystone db queries and caching right? | 18:10 |
fungi | also image uploads started failing for rackspace's iad region at the end of last month, so images there are quite stale which means more time jobs spend updating packages and git repos too | 18:10 |
dansmith | I know neutron is working on that (i.e. slaweq) both of which will help IO performance, which is a huge bottleneck | 18:10 |
dansmith | fungi: ah good to know | 18:11 |
knikolla | dansmith: I hadn't said i would yet, but i can prioritize that now. | 18:11 |
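For readers unfamiliar with what "keystone db queries and caching" work usually involves, here is a minimal sketch of the oslo.cache memoization pattern that this kind of change generally builds on (the function and option names below are illustrative assumptions, not the actual keystone code under discussion):

```python
# Sketch: wrap an expensive database lookup with an oslo.cache memoization
# decorator so repeated calls are served from the cache instead of hitting SQL.
from oslo_cache import core as cache
from oslo_config import cfg

CONF = cfg.CONF
cache.configure(CONF)                        # register the [cache] options
CACHE_REGION = cache.create_region()
cache.configure_cache_region(CONF, CACHE_REGION)

MEMOIZE = cache.get_memoization_decorator(CONF, CACHE_REGION, group='cache')

@MEMOIZE
def get_project(project_id):
    # placeholder for the real DB query (hypothetical helper, for illustration)
    return {'id': project_id}
```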
noonedeadpunk | well, talking about keystone, it blocks our upgrade jobs for stable branches, which are all red due to a series of regressions... So performance IMO is not the biggest issue right now... Or well, depending on who you ask, obviously | 18:11 |
noonedeadpunk | Though patches are proposed, so this should be solved soonish | 18:11 |
knikolla | noonedeadpunk: roger, i'll review the backports. i think i already reviewed the patch to master, or are there others? | 18:12 |
dansmith | knikolla: okay I thought you did, but yeah, would be helpful | 18:13 |
noonedeadpunk | knikolla: yup, just pushed a fix for another one https://review.opendev.org/c/openstack/keystone/+/891521 | 18:13 |
knikolla | noonedeadpunk: awesome, fresh hot off the press. will review it after the meeting. thanks for proposing it! | 18:14 |
noonedeadpunk | but I don't have a test scenario for that... Or well, I'm not skilled enough to make one up in the given time | 18:14 |
knikolla | I'll see if i can think of something to suggest re: testing | 18:15 |
gmann | one thing to mention about Python 3.11 testing. this is the tox job change #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891146/1 | 18:15 |
gmann | as this is proposed to run on debian bookworm, it needs fixes in bindep.txt for many/all projects | 18:15 |
gmann | I am adding it as a non-voting job this cycle so that projects get time (at least 2-3 months) to fix it before the next cycle, when we can make it mandatory #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891227/2 | 18:16 |
gmann | a few projects like cinder and neutron already run py3.11 testing, on ubuntu jammy | 18:16 |
noonedeadpunk | ++ sounds good | 18:17 |
gmann | adding it as non-voting is important, otherwise, because of our general job template model, it can break the gate next cycle for projects that haven't fixed it. | 18:17 |
knikolla | ++ | 18:18 |
fungi | also pep 693 says python 3.12.0 is due out two days before we release bobcat, so expect ubuntu to have packaged it by mid-cycle | 18:18 |
knikolla | do we know if 3.13 brings any possible problematic changes? i haven't looked at the changelog yet | 18:18 |
gmann | cool, we can consider adding that as non-voting once it is available and think of making it mandatory in the next cycle after its release | 18:19 |
knikolla | 3.21* | 18:19 |
fungi | if so, non-voting 3.12 jobs might be worth adding at that point | 18:19 |
knikolla | 12* | 18:19 |
knikolla | it seems i can't type today. sorry. | 18:19 |
fungi | 3.12 deprecated some more stuff out of the stdlib | 18:19 |
gmann | 3.12 or 3.11 ? | 18:19 |
fungi | the "dead batteries" pep | 18:19 |
fungi | er, deleted already deprecated stuff i mean | 18:19 |
fungi | but i think most of that is due to happen in 3.13 | 18:19 |
knikolla | 3.12, given that we'll have 3.11 testing in place so i'm less concerned about that, and i was curious about 3.12. | 18:20 |
fungi | no, i'm wrong, all the deletions for pep 594 happened in 3.11 | 18:20 |
fungi | scratch that, i was right | 18:20 |
gmann | we can go with the same way there. adding it non voting in next cycle and see how it behaves | 18:20 |
fungi | deprecated in 3.11, deleting in 3.13 mostly | 18:21 |
rosmaita | gmann: dyk how big a bindep change is required for bookworm py 3.11 ? | 18:21 |
fungi | asynchat and asyncore go away in 3.12 | 18:21 |
gmann | rosmaita: not big, this is the nova example and I think it's the same for other projects too #link https://review.opendev.org/c/openstack/nova/+/891256 | 18:21 |
fungi | #link https://peps.python.org/pep-0594/ Removing dead batteries from the standard library | 18:21 |
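As a small illustration of the PEP 594 breakage being discussed: asyncore and asynchat are gone as of Python 3.12, so code that imports them needs a guard or a port to asyncio. This is a generic sketch, not a change from any particular OpenStack project:

```python
import sys

try:
    import asyncore  # removed from the standard library in Python 3.12
except ModuleNotFoundError:
    asyncore = None

if asyncore is None:
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
          "asyncore is gone; port to asyncio")
else:
    print("asyncore is still importable on this interpreter")
```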
gmann | with that, the nova job is passing | 18:21 |
gmann | #link https://review.opendev.org/c/openstack/nova/+/891228/3 | 18:22 |
clarkb | rosmaita: it's usually updating libffi versions and things like that that have hardcoded versions in the package name | 18:22 |
gmann | yeah | 18:22 |
gmann | libmysqlclient-dev is not present in debian bookworm, and there might be a few more | 18:22 |
clarkb | no more python2 on bookworm | 18:22 |
clarkb | is another likely source of trouble | 18:23 |
gmann | anyways, that is the plan for py3.11, which needs changes in many/all repos maybe | 18:23 |
gmann | that is all from me on this | 18:23 |
knikolla | thanks gmann | 18:24 |
noonedeadpunk | there's no py2 on ubuntu jammy either, so not sure this will be an issue | 18:24 |
knikolla | #topic Testing runtime for 2024.1 release | 18:24 |
knikolla | on that topic, since we're already talking about it | 18:24 |
knikolla | #link https://review.opendev.org/c/openstack/governance/+/891225 | 18:24 |
gmann | yeah, i described the changes in commit msg #link https://review.opendev.org/c/openstack/governance/+/891225/2//COMMIT_MSG#9 | 18:24 |
gmann | main changes are on the debian side, adding debian 12 bookworm but also keeping debian 11 because it was supported in our previous SLURP release | 18:25 |
gmann | with debian 12, py3.11 testing comes in as mandatory, but no removal of any python versions that are currently supported. this is what was explicitly changed in our PTI this cycle | 18:26 |
gmann | so min version of python to test is 3.8 | 18:26 |
fungi | so it's mostly a question of how many versions in between those we also explicitly require testing for | 18:27 |
gmann | please review and let me know your feedback. also preparing the job template #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891238/1 | 18:27 |
gmann | fungi: yeah that is good question | 18:27 |
fungi | as currently written it says to run 3.8, 3.10 and 3.11, skipping 3.9 | 18:28 |
knikolla | it does sound a bit weird to me that we're saying we support 3.9 in the testing runtime, while not testing for it, despite the minor differences. have we done something like that before? | 18:28 |
fungi | also the rationale for skipping 3.9 could also be applied to 3.10 | 18:28 |
gmann | as proposed in the job template, I think testing py3.8, py3.10, and py3.11 is enough. skipping py3.9 because things working in py3.8 and py3.10 will work for py3.9 also | 18:28 |
clarkb | knikolla: I'm a proponent for testing only the bookends if you aren't testing platform differences (libvirt, etc) | 18:28 |
gmann | knikolla: we do test it, many projects explicitly have a job for that and we can add it as periodic | 18:29 |
clarkb | this has worked really well for zuul and reduces test runtimes, chances for random failures, and resource needs. Zuul hasn't had problems doing that | 18:29 |
gmann | My proposal is to add py3.9 as periodic and not to run it every time in the check/gate pipelines | 18:29 |
fungi | yes, the argument is that if what you're testing is "does this work for the given python minor versions?" then odds are anything that passes on both 3.8 and 3.10 will work for 3.9 | 18:29 |
gmann | that should be enough to cover the testing of py3.9 | 18:29 |
dansmith | gmann: sounds fine to me | 18:29 |
dansmith | gmann: honestly, we could be doing that for 3.8 and 3.9 right now | 18:29 |
fungi | also projects concerned about it can still choose to run 3.9 jobs, the pti just won't require them | 18:30 |
gmann | yes. nova has functional job running on py3.9 too | 18:30 |
knikolla | I don't have a strong opinion, if we can save resources the better. Just wanted to ask :) | 18:30 |
noonedeadpunk | but I think we've added an "appreciation" to the PTI to cover more versions than minimally required by the PTI | 18:31 |
gmann | dansmith: I would like to keep py3.8 as a check/gate job as it is the min version and it's good to keep an eye on whether anyone drops its support. | 18:32 |
gmann | noonedeadpunk: yes | 18:32 |
dansmith | gmann: it's just very unlikely and finding something later in periodic would not be hard to recover from | 18:32 |
knikolla | makes sense | 18:32 |
dansmith | obviously running everything on every patch is *ideal* but just not necessary | 18:33 |
gmann | but it would still make other projects break for that time | 18:33 |
gmann | we can move py3.10 along with py3.9 as periodic? testing min and max, py3.8 and py3.11, on every change | 18:33 |
noonedeadpunk | I can recall a solid reason to drop 3.8 support from Ansible... As they were able to drop some quite old things to speed up execution a lot... But can't really recall details... | 18:33 |
noonedeadpunk | Anyway I think keeping 3.8 is reasonable at min | 18:34 |
fungi | there's been lots. the "faster cpython" effort has been making steady performance improvements | 18:35 |
clarkb | particularly beginning with 3.10 | 18:35 |
gmann | also, many projects not seeing a py3.8 job in check/gate will think it's going away or already dropped. I think testing that on every change makes sense to me | 18:35 |
JayF | gmann++ | 18:36 |
* noonedeadpunk looking forward to noGIL | 18:37 | |
knikolla | anything else on the topic? | 18:38 |
gmann | anyways, it's there in the template, please review and add your feedback. that template change needs to wait for this cycle's release, so we have time, but the governance change we can review and merge when it is ready | 18:38 |
gmann | this is template change #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891238/1 | 18:38 |
gmann | that is all from me on this | 18:38 |
knikolla | #topic User survey question updates by Aug 18 | 18:39 |
knikolla | #link https://lists.openstack.org/pipermail/openstack-discuss/2023-July/034589.html | 18:39 |
knikolla | A reminder that the deadline for proposing changes to the survey that will be sent out is this Friday. | 18:39 |
knikolla | Are there any questions that we as the TC can pose that you'd like to propose? | 18:40 |
knikolla | I'll take the silence as a no, we can discuss outside the meeting if anything comes to mind. | 18:42 |
spotz[m] | Yeah I can't think of anything TC specific | 18:42 |
fungi | it's not just about proposing tc-specific questions, but also making sure the questions currently in there make sense | 18:42 |
knikolla | #topic Reviews and Open Discussion | 18:42 |
fungi | i went through, for example, and recommended some updates to questions which listed projects but were missing some more recent additions | 18:43 |
gmann | There are a couple of things/updates. | 18:44 |
gmann | elod pinged the TC about reaching out to the murano/solum PTL for stable/rocky EOL. I also reached out to them, and they responded to my changes and to the release changes also. | 18:44 |
gmann | so we are good on these projects and the PTL is responding to things. | 18:44 |
gmann | the other is about the election and any project changing their leadership model. tonyb asked about it, and as election nominations are going to open tomorrow, we will not have any change in project leadership model this cycle. if any application comes, we need to postpone it to the next cycle | 18:46 |
gmann | #link https://governance.openstack.org/election/ | 18:46 |
gmann | these are two updates I wanted to share | 18:46 |
knikolla | thanks gmann! | 18:46 |
noonedeadpunk | But does this still apply if there's no PTL for the project, but there are volunteers for distributed leadership? | 18:49 |
knikolla | It'll be up to us to decide, IIRC. | 18:49 |
gmann | noonedeadpunk: that we can handle as a leaderless project and then change to the DPL model during the PTL assignment task | 18:49 |
noonedeadpunk | As I thought it's possible to change the model in such a case? | 18:49 |
gmann | yes, but not during the election, which creates confusion | 18:50 |
JayF | basically it's locking them into having a PTL election, not into having a PTL (if nobody runs in the election, then we can change the model if needed) | 18:50 |
noonedeadpunk | ah, ok, got your point now | 18:50 |
gmann | we have a deadline of doing it before election nominations start, or after the election | 18:50 |
noonedeadpunk | ok, yes, makes total sense | 18:50 |
fungi | if projects want to propose to switch to dpl there is a deadline for that. if the tc wants to switch a project to dpl they can do it at any time they want (just ought to avoid disrupting an ongoing election process) | 18:50 |
noonedeadpunk | ++ | 18:51 |
gmann | JayF: yeah | 18:51 |
knikolla | alright. if there's nothing else. thanks all! | 18:53 |
knikolla | have a great rest of the week. | 18:53 |
knikolla | #endmeeting | 18:53 |
opendevmeet | Meeting ended Tue Aug 15 18:53:09 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 18:53 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/tc/2023/tc.2023-08-15-18.00.html | 18:53 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/tc/2023/tc.2023-08-15-18.00.txt | 18:53 |
opendevmeet | Log: https://meetings.opendev.org/meetings/tc/2023/tc.2023-08-15-18.00.log.html | 18:53 |
JayF | ty knikolla o/ | 18:53 |
spotz[m] | Thanks knikolla ! | 18:53 |