*** rlandy|ruck is now known as rlandy|out | 00:54 | |
opendevreview | Merged openstack/project-config master: Add openEuler 20.03 LTS SP2 node https://review.opendev.org/c/openstack/project-config/+/818723 | 04:56 |
*** ysandeep|out is now known as ysandeep | 05:27 | |
*** sshnaidm|afk is now known as sshnaidm | 06:57 | |
*** ysandeep is now known as ysandeep|lunch | 07:26 | |
*** amoralej|off is now known as amoralej | 08:15 | |
*** ysandeep|lunch is now known as ysandeep | 08:41 | |
*** ysandeep is now known as ysandeep|afk | 09:35 | |
*** ysandeep|afk is now known as ysandeep | 10:32 | |
*** dviroel|rover|afk is now known as dviroel|rover | 11:01 | |
*** rlandy is now known as rlandy|ruck | 11:13 | |
*** jpena|off is now known as jpena | 11:42 | |
*** jcapitao is now known as jcapitao_lunch | 11:58 | |
*** soniya29 is now known as soniya29|afk | 12:04 | |
*** ysandeep is now known as ysandeep|brb | 12:21 | |
*** outbrito_ is now known as outbrito | 12:22 | |
*** ysandeep|brb is now known as ysandeep | 12:48 | |
*** jcapitao_lunch is now known as jcapitao | 13:04 | |
*** amoralej is now known as amoralej|lunch | 13:16 | |
opendevreview | yatin proposed openstack/project-config master: Update Neutron's Grafana as per recent changes https://review.opendev.org/c/openstack/project-config/+/821706 | 13:30 |
opendevreview | Merged openstack/project-config master: Update Neutron's Grafana as per recent changes https://review.opendev.org/c/openstack/project-config/+/821706 | 13:56 |
*** soniya29|afk is now known as soniya29 | 13:58 | |
*** amoralej|lunch is now known as amoralej | 14:04 | |
*** ysandeep is now known as ysandeep|out | 14:13 | |
*** dviroel|rover is now known as dviroel|rover|lunch | 14:59 | |
elodilles | clarkb fungi : as usual, I'll run the EOL branch cleanup script now, as I see, there is nothing that would block that | 15:18 |
fungi | elodilles: go for it. how many branches are being deleted, approximately? | 15:21 |
fungi | i'm curious to see if we encounter the same zuul reconfigure event processing backlog we get from mass branch creations | 15:21 |
fungi | elodilles: if it's a lot, then it may be a good idea to try some smaller batches at first, just to make sure we're not seeing significant impact | 15:24 |
elodilles | fungi: it should be like a dozen or so, so not that much | 15:29 |
dpawlik | fungi, clarkb: hey, just FYI I started logscraper + log-gearman-* on logscraper01.openstack.org | 15:29 |
fungi | elodilles: a dozen or so deletions is probably fine to try in one go, enough that we'll probably be able to tell if it has the same impact as creation, but not enough to create a severe backlog | 15:30 |
fungi | dpawlik: thanks for the heads up | 15:30 |
elodilles | fungi: i was wrong, there are 37 ocata openstack-ansible branches to delete :S | 15:34 |
elodilles | fungi: but actually it's not doing it in one go, since the script asks for confirmation for every branch before it starts to delete it | 15:34 |
dpawlik | fungi: no problem :) Just FYI, if Reed says that the service should be stopped for now and I will be not available, please ping tristanC or nhicher - they will stop the services. | 15:35 |
fungi | dpawlik: noted, thanks | 15:35 |
fungi | elodilles: that sounds fine, maybe do 10 or so in fairly rapid succession and let's see how zuul is handling it before adding the rest? | 15:36 |
elodilles | (so altogether ~50 branches to delete, but not in one go) | 15:36 |
elodilles | fungi: actually i think zuul is not taking part in the branch deletion, as it is only a gerrit thingy. or am I wrong? | 15:37 |
elodilles | anyway, i'll be careful and look at what happens, and will leave time between branches | 15:39 |
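[Editor's note: the pacing elodilles describes — confirm each branch, delete it, and leave time between deletions — can be sketched as below. The Gerrit REST endpoint shape (`DELETE /a/projects/{project}/branches/{branch}`) is real, but the function names, pause length, and injection hooks are hypothetical; this is not the actual OpenDev cleanup script.]

```python
import time
from urllib.parse import quote

# Hypothetical sketch of a throttled branch-deletion loop: confirm each
# branch, delete it via Gerrit's REST API, then sleep so Zuul's management
# event queue can drain between deletions. Illustrative only.
GERRIT = "https://review.opendev.org"

def deletion_url(project: str, branch: str) -> str:
    """Gerrit REST endpoint for deleting a branch (called with DELETE)."""
    return f"{GERRIT}/a/projects/{quote(project, safe='')}/branches/{quote(branch, safe='')}"

def delete_branches(branches, pause_seconds=120, confirm=input, do_delete=None):
    """Delete (project, branch) pairs one by one, pausing between each."""
    for project, branch in branches:
        if confirm(f"delete {project} {branch}? [y/N] ").lower() != "y":
            continue
        if do_delete:  # injected so the loop is testable without network access
            do_delete(deletion_url(project, branch))
        time.sleep(pause_seconds)

print(deletion_url("openstack/openstack-ansible", "stable/ocata"))
```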
fungi | gerrit sends events on its event stream for creation and deletion of branches. since zuul's configuration is branch-aware, zuul needs to update (reconfigure) its view of the projects to know it should drop configuration which was present on those branches now that they're deleted | 15:40 |
fungi | so zuul reacts to the branch deletion events by adding a reconfigure for each one into its management event queue | 15:40 |
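[Editor's note: the events fungi describes look roughly like the sketch below. The `ref-updated` shape follows Gerrit's stream-events JSON, where a branch deletion is conventionally signalled by `newRev` being the all-zeros SHA; the sample values are made up.]

```python
import json

# Sketch: recognize a branch deletion on Gerrit's event stream.
# A deletion is a "ref-updated" event whose newRev is the all-zeros SHA.
ZERO_SHA = "0" * 40

def is_branch_deletion(event: dict) -> bool:
    """Return True if a stream event describes a branch being deleted."""
    if event.get("type") != "ref-updated":
        return False
    return event.get("refUpdate", {}).get("newRev") == ZERO_SHA

sample = json.loads("""{
  "type": "ref-updated",
  "refUpdate": {
    "project": "openstack/openstack-ansible",
    "refName": "stable/ocata",
    "oldRev": "c0ffee0000000000000000000000000000000000",
    "newRev": "0000000000000000000000000000000000000000"
  }
}""")

print(is_branch_deletion(sample))  # → True
```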
frickler | will be another interesting test for the new two-headed setup I guess | 15:41 |
fungi | yeah | 15:41 |
elodilles | oh, i see, sorry. then i'll definitely keep pauses between branch deletions | 15:41 |
fungi | reconfiguration often takes a few minutes, and if deletion events arrive faster than it can process a reconfiguration we've seen the management event queue count climb and the scheduler mostly stop doing other things until it gets through them | 15:42 |
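[Editor's note: the queue counts discussed here are visible in the tenant status JSON behind https://zuul.opendev.org/t/openstack/status/. A minimal sketch of reading them, assuming the `trigger_event_queue`/`management_event_queue` field names shown on that page:]

```python
import json

# Sketch: extract the event-queue lengths from a Zuul tenant status dict.
# Field names are assumptions based on what the status page displays.
def queue_lengths(status: dict) -> tuple[int, int]:
    """Return (trigger_events, management_events) from a tenant status dict."""
    return (
        status.get("trigger_event_queue", {}).get("length", 0),
        status.get("management_event_queue", {}).get("length", 0),
    )

sample = json.loads('{"trigger_event_queue": {"length": 3}, "management_event_queue": {"length": 8}}')
print(queue_lengths(sample))  # → (3, 8)
```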
elodilles | (2 branch have been deleted so far) | 15:47 |
fungi | elodilles: you can go a bit faster if you want, right now the management event queue length is 1 | 15:50 |
fungi | elodilles: you can see it listed in the top-left corner of https://zuul.opendev.org/t/openstack/status/ | 15:50 |
elodilles | well, as soon as I get to the osa branches I'll queue up more branches & let you know (but i'll be in a meeting meanwhile, so it might take a while o:)) | 15:53 |
fungi | no worries | 15:53 |
ade_lee | fungi, clarkb hey guys - could I get a node held for https://review.opendev.org/c/openstack/cinder/+/790535 ? | 15:54 |
ade_lee | we have a failure in either cinder-tempest-plugin-lvm-lio-barbican-fips or cinder-tempest-lvm-multibackend-fips that we're unable to reproduce locally | 15:55 |
ade_lee | same test fails - so node for either one would work | 15:56 |
ade_lee | timburke_, will try to look at your fips failure today | 15:56 |
*** dviroel|rover|lunch is now known as dviroel|rover | 15:59 | |
fungi | ade_lee: i've added an autohold for the cinder-tempest-plugin-lvm-lio-barbican-fips build of that change, recheck at will | 16:02 |
ade_lee | fungi, thanks -- I think it's already running now from a rebase I did about 10 mins ago .. should I recheck? | 16:03 |
fungi | nah, that will be good enough. once it fails it will get held | 16:03 |
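[Editor's note: the hold fungi set up would be created with something like the zuul-client invocation below. The flag names follow zuul-client's `autohold` subcommand but are assumptions here, as is the reason string; the command is only constructed and echoed, not executed.]

```shell
# Hypothetical operator-side autohold, roughly what fungi set up.
# Built as a string and echoed rather than run, since it needs operator access.
JOB=cinder-tempest-plugin-lvm-lio-barbican-fips
CMD="zuul-client autohold --tenant openstack \
  --project opendev.org/openstack/cinder \
  --job $JOB --change 790535 --count 1 \
  --reason 'ade_lee debugging FIPS test failure'"
echo "$CMD"
```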
ade_lee | fungi, cool -thanks! | 16:04 |
fungi | yw | 16:06 |
timburke_ | ade_lee, thanks! it's strange to me; seems very hit-or-miss whether the problem will manifest or not | 16:12 |
*** beekneemech is now known as bnemec | 17:08 | |
*** amoralej is now known as amoralej|off | 17:26 | |
*** jpena is now known as jpena|off | 17:37 | |
elodilles | fungi: fyi, 10 branches have been deleted now within a couple of minutes. I see now "8 management events." | 17:54 |
elodilles | i've stopped now to see how zuul handles them | 17:55 |
fungi | thanks for keeping an eye on it! | 17:56 |
fungi | curious to see how quickly it churns through those | 17:56 |
elodilles | and now it is "10 management events." | 17:57 |
elodilles | do you see any slowdown? | 17:59 |
fungi | well, i mean, zuul won't queue any new builds/changes while it's processing management events | 18:00 |
fungi | it's not necessarily a "slowdown" | 18:00 |
fungi | but it does delay other activities | 18:01 |
fungi | it also won't process build results until it's done with whatever management event it's on | 18:01 |
elodilles | :S | 18:01 |
elodilles | so now zuul is kind of blocked until the "management events" number goes down to 0? | 18:02 |
clarkb | elodilles: ya basically | 18:03 |
clarkb | jobs that are already running keep running so its not a full stop | 18:04 |
clarkb | but new work like merging changes or starting new jobs is paused aiui | 18:04 |
elodilles | i see that the events counter is increasing, but fortunately the management events counter is decreasing slowly | 18:04 |
fungi | yes, it'll take a few minutes to process each one | 18:04 |
frickler | would zuul be able to merge those events? | 18:05 |
elodilles | ok, so it's definitely better to wait between branch deletions for now, it seems. hmmm. | 18:05 |
frickler | like they mostly would supersede each other, wouldn't they? | 18:06 |
fungi | frickler: corvus was saying that he thought it was supposed to collapse them (sort of like how it does with its supercedent pipeline manager) and isn't sure why that's not actually happening | 18:07 |
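[Editor's note: the collapsing corvus expects can be illustrated as below — if several reconfigure events for the same project are queued, only the newest needs to run, much as Zuul's "supercedent" pipeline manager keeps only the latest item per project/ref. The event dicts and key choice are illustrative, not Zuul's actual code.]

```python
# Sketch: collapse queued reconfigure events so each (tenant, project)
# pair is reconfigured once, keeping only the newest event per key.
def collapse_reconfigures(events):
    """Keep the last event per (tenant, project), in first-seen key order."""
    latest = {}
    for event in events:
        key = (event["tenant"], event["project"])
        latest[key] = event  # later events supersede earlier ones
    return list(latest.values())

queue = [
    {"tenant": "openstack", "project": "openstack/openstack-ansible", "seq": 1},
    {"tenant": "openstack", "project": "openstack/openstack-ansible", "seq": 2},
    {"tenant": "openstack", "project": "openstack/cinder", "seq": 3},
]
print(collapse_reconfigures(queue))  # two events remain: seq 2 and seq 3
```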
frickler | would it make sense to activate tracing while this is happening? maybe too late now but before the next deletions? | 18:08 |
* frickler notices corvus isn't here, will repeat in #opendev | 18:10 | |
fungi | ade_lee: i lost track earlier, but circling back around now it looks like we have a held node for you... what ssh key(s) do you want authorized? | 18:57 |
ade_lee | fungi, https://github.com/vakwetu.keys | 19:27 |
fungi | ade_lee: ssh root@173.231.254.233 | 19:29 |
ade_lee | fungi, perfect - thanks! | 19:30 |
fungi | yw | 19:30 |
elodilles | so as I wrote on opendev channel, I've interrupted now the branch cleanup script and will run it again later (depending on the status of the zuul bug fix tomorrow). btw, today these branches were deleted: https://paste.opendev.org/show/811671/ | 19:38 |
clarkb | elodilles: a fix is up now and looks straightforward, so I expect this will be able to proceed tomorrow | 19:38 |
clarkb | But I still need to properly review the fix | 19:38 |
elodilles | clarkb: oh, nice, that sounds promising! thanks for the info! | 20:11 |
*** dviroel|rover is now known as dviroel|out | 20:31 | |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!