opendevreview | Ian Wienand proposed openstack/diskimage-builder master: python3.12: "fix" unittests https://review.opendev.org/c/openstack/diskimage-builder/+/903848 | 00:14 |
tonyb | ianw: Wow, that's some "magic" :/ | 00:17 |
ianw | tonyb: i was too scared to go through "git blame" in case i originally wrote it ... i don't think i did, but can't be sure! :) | 00:18 |
ianw | it's such a weird situation, pulling in a file from totally outside the python hierarchy that usually gets copied to a chroot environment /usr/local/bin for unit-testing | 00:19 |
tonyb | Yeah it is a little "special" | 00:21 |
tonyb | ianw: I did the git blame .... you're off the hook | 00:22
fungi | we should probably gracefully restart the zuul schedulers onto the latest images, since we're going to have an even more massive backlog in the openstack tenant from all the extra periodic builds otherwise | 00:51 |
fungi | once normal gerrit activity starts back up tomorrow it will get gnarly | 00:51 |
tonyb | I can do that. | 00:51 |
tonyb | gimme say 30(mins) | 00:51 |
fungi | thanks! no rush. | 00:52
fungi | i'll try to keep an eye on irc in case you get stuck | 00:52 |
fungi | 903818 merged to fix it and so the new images are basically what we've been running plus that fix | 00:52 |
tonyb | I'm unsure about the gitea update after the haproxy stuff yesterday | 00:53
fungi | yeah, i'd be inclined to wait on the gitea upgrade so we can regroup | 00:53 |
tonyb | Kk | 00:53 |
tonyb | Before restarting the zuul-schedulers I want to confirm I | 01:31
tonyb | 'm basically doing docker-compose pull; docker-compose down; docker-compose up -d in /etc/zuul-scheduler on zuul* ; ensuring that a) I do them in series and there's always one available ; and b) that the quay.io/zuul-ci/zuul-scheduler:latest does indeed contain 903818 | 01:33
tonyb | I have confirmed "b" | 01:35 |
tonyb | tony@thor:~$ podman inspect quay.io/zuul-ci/zuul-scheduler:latest | jq '.[0].Config.Labels' | 01:36 |
tonyb | { | 01:36 |
tonyb | "org.zuul-ci.change": "903818", | 01:36 |
tonyb | "org.zuul-ci.change_url": "https://review.opendev.org/903818" | 01:36 |
tonyb | } | 01:36 |
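A rough shell sketch of the rolling restart tonyb describes above, one scheduler at a time so one is always available. The hostnames and SSH access are assumptions for illustration; only the /etc/zuul-scheduler compose directory is taken from the log:

```sh
# Hypothetical sketch -- hostnames are illustrative, not the exact production names.
for host in zuul01 zuul02; do
    ssh "$host" 'cd /etc/zuul-scheduler &&
        docker-compose pull &&
        docker-compose down &&
        docker-compose up -d'
    # Wait for this scheduler to report "running" again on
    # https://zuul.opendev.org/components before touching the next one.
done
```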
ianw | tonyb: yes, if you just want to restart the schedulers that looks about right. if you want to restart everything there is a playbook | 02:41 |
tonyb | ianw: Thanks | 02:42 |
tonyb | infra-root: I have restarted the zuul-scheduler on zuul{01,02}. Things look okay but I'll continue to monitor | 03:30
Clark[m] | The zuul components listing is really helpful for ensuring one is always available | 03:41 |
tonyb | Clark[m]: where is that? | 03:43 |
tonyb | Clark[m]: Oh: https://zuul.opendev.org/components | 03:46 |
Clark[m] | You're faster than I am. And ya that should show the version and status of the components | 03:48 |
tonyb | Looks good to me. | 03:48 |
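The same component data is also exposed by the web API, which can be handy for scripting the "always one available" check during a rolling restart. The jq filter below is illustrative and the exact JSON shape may differ between Zuul versions:

```sh
# List scheduler components with their state and version (shape is an assumption).
curl -s https://zuul.opendev.org/api/components |
    jq '.scheduler[] | {hostname, state, version}'
```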
tonyb | I'm not seeing new jobs enqueued into periodic or periodic-stable as we tick over the hour | 04:03 |
Clark[m] | There is a small jitter. But also I think it is supposed to see things are already enqueued from before and not reenqueue? | 04:07 |
tonyb | and the number of items in those pipelines decreasing | 04:08 |
tonyb | https://grafana.opendev.org/d/21a6e53ea4/zuul-status?orgId=1&from=1702868100000&to=now&refresh=1m&viewPanel=17 looks better | 04:12 |
tonyb | Actually I'm seeing, or rather not seeing, any metrics in "Zuul Jobs Launched (per Hour)" or "Gerrit Events (per Hour)" since 0400 | 04:18 |
tonyb | https://grafana.opendev.org/d/21a6e53ea4/zuul-status?orgId=1&refresh=1m&viewPanel=16 | 04:18 |
tonyb | that seems suspicious | 04:18 |
Clark[m] | 903075 seems newer and has jobs running | 04:23 |
tonyb | Okay. That's good :) | 04:26 |
zigo | Hi there! Is this a known issue? https://6dca5728c40d535db466-4fcaafdedb24be0c657932ab646595c9.ssl.cf2.rackcdn.com/903860/1/check/openstack-tox-pep8/4125f32/job-output.txt (this looks unrelated to my patch...) | 09:39 |
frickler | zigo: the job is passing, are you referring to the bandit error messages? | 09:48 |
zigo | Yeah. | 09:48 |
zigo | frickler: BTW, thanks a lot for the SQLAlchemy hint you gave me, this was super helpful ! :) | 09:48 |
frickler | zigo: yw. moving the bandit discussion to #openstack-sdks since according to opensearch it only affects keystoneauth | 09:54 |
scoopex | I don't quite understand how to deal with Gerrit :-) I created a fork of "ansible-kolla" on Github and created a change on Gerrit. (see https://review.opendev.org/c/openstack/kolla-ansible/+/900528) After receiving feedback, I modified the commit on my branch using "--amend". | 12:38 |
scoopex | Now I have two patch sets :-( How can I fix this? Where can I read how to do this better? | 12:38 |
fungi | scoopex: gerrit has nothing to do with github, you don't need to create a fork of anything anywhere. clone the repository to your system, optionally make a topic branch off the branch you want to submit a change for, make and commit the edits you want, then push that commit to a review remote on gerrit (the git-review tool is recommended for this last step so you don't have to work out the | 13:03 |
fungi | remote syntax yourself). to update the change, make more edits and commit --amend so it's changing your previously submitted commit, make sure not to remove or alter the change-id footer in the commit message, and then push the commit again (ideally with the git-review tool) | 13:03 |
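A minimal sketch of the workflow fungi describes, assuming git-review is installed; the repository URL comes from the discussion, while the topic-branch name is illustrative:

```sh
# First submission
git clone https://opendev.org/openstack/kolla-ansible
cd kolla-ansible
git checkout -b my-topic origin/master   # topic branch name is illustrative
# ...edit files...
git commit -a                            # the commit-msg hook installed by git-review adds a Change-Id footer
git review                               # pushes the commit to Gerrit as a new change

# Updating the change after review feedback
# ...edit files...
git commit -a --amend                    # keep the existing Change-Id footer intact
git review                               # uploads a new patch set to the same change
```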
fungi | we have a chapter in our infra-manual about the various aspects of it here: https://docs.opendev.org/opendev/infra-manual/latest/gettingstarted.html | 13:04
fungi | since you've pushed changes to multiple branches, it's not clear to me what the intent was. 900528 is a change for the master branch, you've also pushed change 900522 for the stable/2023.1 branch, and then another change for stable/2023.1 (900057) which you abandoned | 13:08 |
fungi | 900522 lacks the typical cherry-pick identifier so i'm not sure if it was the result of trying to backport 900528 for stable/2023.1 or you somehow accidentally pushed a revision for the wrong branch | 13:11 |
fungi | aha, i see gerrit identified 900528 (for master) as a cherry-pick of 900522 (for stable/2023.1), so were you trying to forward-port your stable/2023.1 change to master? | 13:13 |
fungi | i think the only way gerrit would know it was a cherry-pick is if you used the "cherry pick" button in the gerrit webui. i've never tried that personally, i usually just git cherry-pick locally between branches so that the commit message includes the "cherry picked from commit ..." line that you get with the -x option | 13:16 |
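For reference, the local cherry-pick flow fungi mentions (master first, then backport to a stable branch) looks roughly like this; the commit SHA is a placeholder:

```sh
git checkout stable/2023.1
git cherry-pick -x <sha-of-master-commit>   # -x appends "(cherry picked from commit ...)" to the message
git review stable/2023.1                    # push the backport for review on that branch
```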
fungi | anyway, in most openstack projects, they want changes reviewed for master first, and then backported to stable branches, not reviewed on a stable branch and then forward-ported to master | 13:17 |
fungi | but it's probably best to talk to the kolla maintainers in the #openstack-kolla channel if you're not sure | 13:17 |
fungi | in addition to the infra-manual, the openstack project has its own contributor documentation here: https://docs.openstack.org/contributors/ (you probably want the code and documentation contributor guide) | 13:19 |
fungi | also in the repository you're trying to contribute to, you'll find a prominently named file with some quick links: https://opendev.org/openstack/kolla-ansible/src/branch/master/CONTRIBUTING.rst | 13:20 |
fungi | which includes a link to the kolla team's own contributor guide here: https://docs.openstack.org/kolla-ansible/latest/contributor/contributing.html | 13:21 |
fungi | hope that helps! | 13:21 |
Clark[m] | fungi: if it isn't too much trouble I'm wondering if you'd be willing to drive tomorrow's meeting and send out an agenda? I've spent the weekend managing a fever with drugs and it doesn't seem to be getting better any time soon. I'll still try to attend but best to get me out of the critical path | 14:11 |
fungi | oh i'll be happy to chair the meeting, no problem. hope your situation improves! | 14:12 |
fungi | will get the agenda out shortly, just have an errand i need to go run first | 14:12 |
Clark[m] | Thank you! | 14:13 |
scoopex | fungi: Many thanks for the elaborate answer! My orientation was https://docs.openstack.org/contributors/code-and-documentation/index.html and that didn't provide me answers for my problem area! My odyssey was: 1.) I started to make my changes based on stable/2023.1, then (2) I realized that I had created multiple changesets because I performed a "git commit --amend" and not "git review" (which seems to | 14:20
scoopex | be the right choice). 3. Therefore I started a new change :-) 4. After that I got the information that such changes are more suitable for the master branch and I executed a "rebase". Then I got reviews and accidentally created new changesets while fixing them... The diagram from https://docs.opendev.org/opendev/infra-manual/latest/gettingstarted.html seems to be the missing piece of information.... | 14:20
corvus | i'm restarting zuul-web | 16:05 |
corvus | #status log restarted zuul-web to match zuul-scheduler (zuul-scheduler was previously restarted on 2023-12-17) | 16:11 |
opendevstatus | corvus: finished logging | 16:11 |
fungi | meeting agenda for tomorrow is sent to the ml. heading out for lunch but should be back in an hour or so | 16:14 |
tonyb | corvus: for future reference, should I have done that ... restart zuul-{web,fingergw} when I did zuul-scheduler? | 22:15
corvus | tonyb: probably not important; but because zuul-web shares a lot of scheduler code, whenever we change scheduler-like things i like to update both. this change *shouldn't* affect web, but i didn't think about it enough to be sure. typically if i'm restarting the scheduler, i'd restart web as well because it's fast and easier than thinking about it. if i'm restarting web for some specific web thing, i wouldn't necessarily restart the | 22:18 |
corvus | scheduler. | 22:18 |
tonyb | corvus: Thanks. Noted. | 22:19 |
corvus | (consider my action today more like satisfying my ocd than any operational imperative :) | 22:22 |
tonyb | LOL, sure. | 22:22 |
fungi | yeah, i thought about suggesting it. in this case the new image only had one change beyond what everything was running, and it only altered how timer trigger timespecs were parsed, but you're right even then it's possible some zuul-web feature might have cared that they're parsed the same as on the schedulers | 22:23 |
tonyb | On https://zuul.opendev.org/components I see "9.3.1.dev8 da639bb57" as the version for z{e,m}* but if I 'git -C ~/src/opendev.org/zuul/zuul show da639bb57' I get an error. So where does that (SHA, I assume) come from? | 22:25
tonyb | for web and scheduler I see `9.3.1.dev9 5087f00ac` and that is a valid SHA | 22:26 |
fungi | tonyb: from zuul. since zuul builds the images in the gate pipeline, the change hasn't been merged by gerrit yet, so that's the speculative merge zuul created | 22:26 |
fungi | we've talked about ways to backmap those from the final branch commits | 22:26 |
tonyb | Ah okay. | 22:27 |
fungi | in cases where there is a merge commit, you'll see the disconnect you described | 22:27 |
fungi | in cases where a fast-forward merge is possible, it's built from the actual change commit, so will match what ends up on the branch | 22:27 |
fungi | 5087f00ac was a brown-bag regression fix that got fast-tracked yesterday, so there were no other changes merged between when i created it from the branch and when it got approved | 22:28 |
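A quick way to check whether a SHA shown on the components page exists in a local clone. The repo path matches the one tonyb used above; the SHA is just an example:

```sh
REPO=~/src/opendev.org/zuul/zuul
SHA=5087f00ac
if git -C "$REPO" cat-file -e "${SHA}^{commit}" 2>/dev/null; then
    git -C "$REPO" branch -r --contains "$SHA"    # branches that contain the commit
else
    echo "not found locally -- likely a speculative merge built in the gate"
fi
```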
tonyb | Got it, and it makes sense. I verified that the images I pulled had the "right" information about changes and SHAs in the metadata | 22:29
fungi | in the past we've also talked about zuul growing the logic to push merges into gerrit itself. if that were to happen, it could also eliminate the disconnect | 22:30 |
fungi | since merge commits created in the gate pipeline would be the actual merge commits that end up in the branch later | 22:30 |
tonyb | That'd be nice. Clearly not on the critical path | 22:31 |
JayF | fungi: re: the mail to the list (about ML == forum now); I wanted to tell you -- that's how itamarst was interacting with the Eventlet thread, and I've already personally known *3* people who signed up for the mailing list (either with digest on or emails off) when they realized they could interact and reply without having to have their inbox crunchinated. | 23:50 |
JayF | fungi: did you all do any comms on openinfra live about this? I don't wanna thunder-steal, but if nobody infra side has done it or is willing to, we should try and get this evangelized more; it's a huge boon for accessibility+modernization of our tools :D | 23:51