*** rlandy is now known as rlandy|out | 00:52 | |
opendevreview | Michael Johnson proposed openstack/diskimage-builder master: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/diskimage-builder/+/837776 | 00:53 |
opendevreview | Matthew Thode proposed openstack/project-config master: update generate constraints to py38,39,310 https://review.opendev.org/c/openstack/project-config/+/837815 | 03:09 |
*** pojadhav|afk is now known as pojadhav | 03:56 | |
opendevreview | Ian Wienand proposed openstack/diskimage-builder master: source-repositories : use explicit sudo/-C args when in REPO_DEST https://review.opendev.org/c/openstack/diskimage-builder/+/837824 | 06:12 |
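The change title names the technique: instead of pushd-ing into the repository and letting sudo git inherit the working directory, git is invoked with an explicit -C argument. A minimal sketch of the pattern, assuming a hypothetical $REPO_BRANCH alongside the element's real $REPO_DEST:

```shell
# Before (roughly): enter the repo and rely on the inherited cwd
#   pushd "$REPO_DEST"; sudo git fetch origin; popd

# After: name the working tree explicitly, so sudo git operates
# on $REPO_DEST no matter where the caller happens to be
sudo git -C "$REPO_DEST" fetch origin
sudo git -C "$REPO_DEST" checkout -B "$REPO_BRANCH" "origin/$REPO_BRANCH"
```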
opendevreview | chandan kumar proposed zuul/zuul-jobs master: Setup RDO gpg keys and repo for CentOS distros only https://review.opendev.org/c/zuul/zuul-jobs/+/837850 | 09:46 |
*** rlandy|out is now known as rlandy | 10:31 | |
*** pojadhav is now known as pojadhav|afk | 11:00 | |
*** dviroel|afk is now known as dviroel | 11:24 | |
hrw | can zuul log pages get lighter? | 11:58 |
*** pojadhav|afk is now known as pojadhav | 11:58 | |
hrw | whenever I go to pages like https://zuul.opendev.org/t/openstack/build/2da973dd56a34681829b48b7e9854d80 I hear the fans of my ryzen box start to spin up, and firefox blocks everything while trying to render the page | 11:59 |
hrw | and switching from 'task summary' to 'logs' takes 10+ seconds before the page is clickable again | 12:00 |
hrw | 29MB job-output.json | 12:03 |
fungi | hrw: yes, i think it's mainly a problem with very large job logs like that | 12:15 |
fungi | it's worth discussing possible solutions in the zuul matrix channel | 12:15 |
hrw | fungi: and debugging why it does not fetch job-output.json.gz | 12:16 |
fungi | well, job-output.json in our deployment is stored in swift, so your browser should transparently negotiate compression with the server when requesting it | 12:17 |
hrw | ok | 12:17 |
fungi | pre-compressing would break that | 12:17 |
fungi | also i seriously doubt it's taking 10 seconds to fetch 29mb of data, it's the browser-side processing of that data which takes time | 12:18 |
hrw | and then browser says | 12:18 |
hrw | oops | 12:18 |
hrw | yep | 12:18 |
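One way to sanity-check the transfer-versus-parse split fungi describes: request the compressed representation directly and inspect the response headers. A sketch with curl, where $LOG_URL is a placeholder for the swift base URL of a build's logs:

```shell
# HEAD request advertising gzip support; a "Content-Encoding: gzip"
# header in the reply means the server compresses on the wire, and
# Content-Length shows how much actually crosses the network
curl -sI -H 'Accept-Encoding: gzip' "$LOG_URL/job-output.json" \
    | grep -iE 'content-(encoding|length)'
```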
*** dviroel is now known as dviroel|brb | 12:19 | |
fungi | it's possible smarter indexing/summaries could be prepended to the json allowing more rapid rendering of the high-level details and then only spending time parsing the larger data set when specific operations request it (like expanding a playbook in the console view or something) | 12:20 |
fungi | for that matter, something more drastic like serialized sqlite might allow faster indexing into specific slices of the data without having to parse the entire set every time | 12:21 |
fungi | i don't personally know what kinds of efficiency magic might be available since i'm not that well-versed in database theory | 12:21 |
hrw | neither do I | 12:22 |
hrw | will dig out my matrix account | 12:22 |
fungi | it's really fairly quick when you don't have a large json blob: https://zuul.opendev.org/t/openstack/build/ea61a0b082184a01af0e5a3786748f53 | 12:24 |
hrw | yeah | 12:24 |
hrw | too bad that I do 99% of my work in kolla(-ansible), with its insane logs | 12:24 |
hrw | I may check again whether my logging ideas are possible to implement. | 12:25 |
fungi | the job-output.json in that example is "only" 612kb (9633 lines) | 12:26 |
hrw | so console output is shown and all the detail goes into files (instead of console+files like it is now) | 12:26 |
fungi | clarkb: i've approved 827774 and will keep an eye on it | 12:47 |
fungi | also, you determined that jeepyb/manage-projects probably came through unscathed, right? | 12:48 |
*** dviroel|brb is now known as dviroel | 12:51 | |
*** artom_ is now known as artom | 13:29 | |
hrw | ok. https://review.opendev.org/c/openstack/kolla/+/837872 may make kolla logs shorter | 13:37 |
jrosser | fwiw for gigantic log files i get on fine with the 'raw' link in my browser but the pretty rendition that zuul makes bogs down quite badly | 13:41 |
opendevreview | Merged opendev/system-config master: Force case sensitive Gerrit users in config https://review.opendev.org/c/opendev/system-config/+/827774 | 13:41 |
fungi | seems to have deployed successfully | 14:12 |
Clark[m] | fungi 827774 probably deserves a Gerrit restart before the renaming just to be sure it acts as expected? I think 3.4 should ignore the setting and it only takes effect in 3.5. basically double-check that it noops. | 14:13 |
Clark[m] | fungi: for jeepyb I pushed up a noop change to have it run its tests and those passed as well | 14:13 |
fungi | yeah, i figured 827774 wouldn't take effect yet, just wanted to be sure it would deploy the config file | 14:15 |
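For reference, gerrit.config uses git-config syntax, so the deployed value can be read with git itself. Assuming the option in question is auth.userNameCaseInsensitive (the 3.5-era knob this discussion sounds like; both the option name and the path below are assumptions):

```shell
# gerrit.config is git-config format; forcing case-sensitive
# usernames means this flag should read back as false (or unset)
git config --file /home/gerrit2/review_site/etc/gerrit.config \
    auth.userNameCaseInsensitive
```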
fungi | i guess we can approve pending project creation changes. i'll do one that's ready and make sure it works as intended before we do any more | 14:15 |
dtantsur | folks, do you have plans to make {{ ansible_user_dir }}/src a safe directory for git, or should we work around it in our jobs? | 14:15 |
fungi | dtantsur: it's a safe directory for the zuul which jobs run as | 14:16 |
fungi | are you performing git operations there as a different user, maybe via sudo? | 14:16 |
Clark[m] | hrw jrosser for really large log files I have to use vim. There are definitely tiers of problem, and the size of the data makes each tier progressively worse. Logging effectively is something that a lot of jobs struggle with imo. For example, printing every passing test case to stdout is extremely noisy. A lot of openstack projects do even worse and print the warnings their tests generate to stdout too. | 14:16 |
dtantsur | hmm, so it's not the same failure? https://zuul.opendev.org/t/openstack/build/fb8ca42225f34c7ba1978a576f21542c | 14:16 |
fungi | i meant to say "for the zuul user which jobs run as" | 14:17 |
dtantsur | fungi: yeah, it has a become. hmm. maybe we can use `pip install --local`... | 14:17 |
hrw | Clark[m]: for very large logs I just run build locally ;D | 14:17 |
dtantsur | the affected task: https://opendev.org/openstack/metalsmith/src/branch/master/playbooks/integration/initial-setup.yaml#L21 | 14:17 |
hrw | https://zuul.openstack.org/stream/ddd18979b0144b95b49be11a711c99d9?logfile=console.log is a much quieter kolla run | 14:18 |
fungi | dtantsur: if you're going to `sudo pip install .` for a git repo with a pbr-using project, then yes you'll need to do something to address that. you can set the dir as safe for the root account to use like the workaround which merged to devstack yesterday, or you can separate wheel building and installing if you don't need editable mode | 14:18 |
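The safe-directory workaround is a one-line git config entry; the refusal it addresses comes from the CVE-2022-24765 fix in the recent git security releases, which makes git error out on repositories owned by a different user. A sketch, with an illustrative repo path:

```shell
# Tell git, system-wide so it applies to root, that this tree
# owned by the zuul user is safe to operate on as other users
sudo git config --system --add safe.directory \
    /home/zuul/src/opendev.org/openstack/metalsmith
```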
Clark[m] | Or use a virtualenv | 14:19 |
dtantsur | we need an urgent fix for our CI, I don't feel like changing the way devstack works there right away | 14:19 |
fungi | pbr doesn't need to run git as root, so you can build wheels as the zuul user and install them with any account | 14:19 |
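A sketch of that split, with illustrative paths: pbr consults git only while building, so the wheel is produced as the unprivileged zuul user and the privileged install step never invokes git at all:

```shell
# Build the wheel from the source tree as the zuul user, where
# git raises no ownership complaints
pip wheel --no-deps -w /tmp/wheelhouse .

# Install the built wheel as root; no git involvement anymore
sudo pip install /tmp/wheelhouse/*.whl
```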
* dtantsur wonders if ansible has something for pip wheel | 14:20 | |
fungi | dtantsur: devstack has a workaround merged already | 14:20 |
Clark[m] | Changing the way devstack works is what devstack has already done | 14:20 |
dtantsur | Clark[m]: you mean, it defaults to virtualenvs? | 14:20 |
hrw | dtantsur: you mean? | 14:20 |
dtantsur | hrw: I'm looking for the smallest change I can make to https://opendev.org/openstack/metalsmith/src/branch/master/playbooks/integration/initial-setup.yaml#L21 to unbreak our CI | 14:20 |
dtantsur | so far I'm inclined to go down the git config path and figure out later | 14:21 |
fungi | dtantsur: devstack sets project source trees it installs as safe for the root user to run git in | 14:21 |
Clark[m] | No devstack sets the safe directory setting for each repo it clones | 14:21 |
hrw | dtantsur: behave and use virtualenv? | 14:21 |
dtantsur | hrw: *SMALLEST* | 14:22 |
hrw | dtantsur: chown root:root the whole git repo | 14:22 |
*** pojadhav is now known as pojadhav|afk | 14:22 | |
fungi | dtantsur: https://review.opendev.org/c/openstack/devstack/+/837659 is what merged to devstack as a workaround | 14:22 |
hrw | dtantsur: so ansible runs pip as root, pbr sees the git code as its own and does the job | 14:22 |
dtantsur | yep, I'm looking at it. I guess I'll do the same for the cloned dir. | 14:22 |
fungi | it's a one-liner | 14:23 |
fungi | so i'm not sure how much smaller you want ;) | 14:23 |
fungi | dtantsur: if you want to go the build/install split route instead, i have a working poc at https://review.opendev.org/837731 but it's not as minimal as what we merged | 14:24 |
dtantsur | thx! | 14:24 |
*** chkumar|ruck is now known as raukadah | 14:31 | |
hrw | now https://8d8bee28fe796502b955-a2ed94e453c651da1fcc90e729dacd9c.ssl.cf5.rackcdn.com/837872/2/check/kolla-build-debian/15f8b2b/job-output.txt is mostly base-jobs output rather than kolla output ;D | 14:36 |
hrw | and job-output.json is 5MB instead of 29M | 14:37 |
Clark[m] | hrw: I don't think it is bad to have output; my concern is more that we overdo it. For me at least, the top level logs should identify high level actions, and errors with their details when they occur | 14:38 |
hrw | Clark[m]: we have all that output in separate log files | 14:38 |
fungi | noting that a job consisting of lots of small ansible tasks also bloats the json blob | 14:38 |
hrw | Clark[m]: and when firefox gets 29MB of json to parse, my ryzen system starts to levitate on the air flowing from its case fans | 14:39 |
hrw | Clark[m]: and those separate log files are *separate*, so they are readable, contrary to the basic output where everything is mixed together | 14:40 |
hrw | anyway waiting for other kolla cores to review | 14:40 |
Clark[m] | Sure. I just don't want people to feel they have to delete logging. What I want to impress is that it is about logging what is useful | 14:41 |
Clark[m] | Typically a side effect of this is logs that are manageable. | 14:42 |
hrw | Clark[m]: sure. logs are useful. We just generated tons of them and never used them. | 14:43 |
opendevreview | Merged openstack/project-config master: Add Ganesha based Ceph NFS Charm https://review.opendev.org/c/openstack/project-config/+/835430 | 15:06 |
*** dviroel is now known as dviroel|lunch | 15:12 | |
prometheanfire | do we have an image I can use to run generate-constraints that has py38,39,310? | 15:49 |
clarkb | prometheanfire: rumor is focal might get a 3.10 package and then it could do that, but I am not aware of one that has all three currently | 15:51 |
fungi | that sounds suspiciously like a question i raised on a review recently ;) | 15:51 |
* fungi looks up what's available | 15:51 | |
clarkb | each of the centos releases has a single python3. Focal has 3.8 and 3.9. Debian bullseye has 3.9 and 3.10 | 15:52 |
clarkb | oh wait no bullseye is just 3.9 | 15:52 |
*** marios is now known as marios|out | 15:52 | |
fungi | https://packages.ubuntu.com/python3.10 says not for focal yet | 15:52 |
clarkb | I guess 3.10 comes out of unstable or testing or something | 15:52 |
prometheanfire | ugh, am I right that zed will support those three python versions officially? that's all I'm looking to change to | 15:53 |
fungi | yeah, 3.10 is in debian testing and unstable: https://packages.debian.org/python3.10 | 15:53 |
clarkb | prometheanfire: I think that is the plan. But also I wouldn't be surprised if the python3.6 panic occurs again | 15:53 |
prometheanfire | I really want to remove 36 :P, if it's just 38/39 I'd be happy too | 15:54 |
fungi | https://governance.openstack.org/tc/reference/runtimes/zed.html#python-runtimes-for-zed | 15:54 |
fungi | 3.8 and 3.9 | 15:54 |
prometheanfire | oh, that is easier then | 15:54 |
clarkb | sort of | 15:54 |
clarkb | the intent is to have 3.10 testing | 15:54 |
prometheanfire | ah | 15:55 |
clarkb | so I guess it isn't official support but requirements likely needs to handle it anyway | 15:55 |
fungi | the "3.6 panic" is that there is still no rhel 9 (as far as i'm aware anyway), so latest rhel is still py36 | 15:55 |
prometheanfire | if I had more time to get a gentoo image working right we would still have to remove 36 lol | 15:55 |
fungi | which means we may release a zed which can't run on rhel servers | 15:55 |
prometheanfire | we'd have 27 available though | 15:55 |
clarkb | fungi: ya I feel like the current definitions for support are a bit flawed and openstack should target rocky to address that instead of centos | 15:55 |
clarkb | basically the idea originally was always to support rhel via a stand-in | 15:56 |
clarkb | we've broken that intent now because raisins I guess | 15:56 |
fungi | but really, these are topics for #openstack-tc not #opendev, since the people who need to make decisions about that sort of thing pay more attention there | 15:56 |
clarkb | ++ | 15:56 |
prometheanfire | requirements was originally just for gate usage, but it's morphed over time | 15:57 |
prometheanfire | sure, joining | 15:57 |
clarkb | but to be clear I am not aware of any distro image we have that can do all three out of the box | 15:57 |
clarkb | you can do 3.8 + 3.9 or 3.9 + 3.10 | 15:57 |
prometheanfire | well, we can start with what we officially support | 15:57 |
opendevreview | Matthew Thode proposed openstack/project-config master: update generate constraints to py38,39 https://review.opendev.org/c/openstack/project-config/+/837815 | 15:58 |
fungi | in hindsight, the py3k transition creating a combined constraints file was probably a poor choice. this would be a lot easier if we had pyver-specific constraints files and just applied the appropriate one for the interpreter version being used | 16:03 |
fungi | then we could generate them independently in separate jobs | 16:03 |
fungi | we can still do that and then have a subsequent job which mashes them together, i guess | 16:03 |
clarkb | I think sort -u can mash them together properly | 16:04 |
clarkb | you just need to remove duplicates | 16:04 |
fungi | well, i imagine you'd set pyver-specific envmarks on every line indicating the version for which that constraint was generated, and then just trivially concatenate | 16:06 |
fungi | no need to get complicated with it | 16:06 |
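A rough sketch of that flow, with hypothetical file names: each per-interpreter job freezes its environment and stamps every line with an environment marker naming the interpreter that produced it, and a follow-up job merely concatenates the results:

```shell
# In the per-interpreter job (here python3.8): tag each frozen
# requirement with a marker for the version that generated it
python3.8 -m pip freeze | sed "s/$/;python_version=='3.8'/" \
    > upper-constraints-py38.txt

# In the follow-up job: plain concatenation suffices because every
# line carries its own marker; sort -u just gives a stable order
# and drops any exact duplicates
sort -u upper-constraints-py3*.txt > upper-constraints.txt
```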
prometheanfire | super ugly, but'd work lol | 16:06 |
clarkb | oh I thought the generation tooling already added the pyver envmarks when necessary | 16:07 |
clarkb | so you get a subset rather than needing to expand it out. But ya expanding verbosely would work fine too | 16:07 |
prometheanfire | when necessary, but not always | 16:07 |
fungi | yeah, but if you do each one in a separate job you no longer know whether it's "necessary" | 16:07 |
fungi | though you could postprocess the concatenated aggregate and collapse envmarks together | 16:07 |
fungi | functionally identical, just shortens the file | 16:08 |
clarkb | isn't that independent of where the job runs? it looks at the pypi data I would've thought? Or is it diffing the pip list output? | 16:08 |
clarkb | but ya many ways to address this and not all of them complicated | 16:08 |
fungi | it's not as simple as whether the package specifies a requires_python, you may need different versions on different interpreters because of some transitive dependency relationship | 16:09 |
*** dviroel|lunch is now known as dviroel | 16:20 | |
*** rlandy is now known as rlandy|brb | 16:26 | |
hrw | prometheanfire: use manylinux2014 container? | 16:28 |
johnsom | Can someone review and hopefully +W this DIB patch that fixes the git issue for Octavia image building? https://review.opendev.org/c/openstack/diskimage-builder/+/837824 | 16:37 |
johnsom | Successful test patch here: https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/837778/ | 16:38 |
clarkb | does ^ mean opendev has been unable to build images? I guess we'd need bullseye to update then land a nodepool change to pick up the git update? | 16:38 |
johnsom | The issue we hit was using the source-repositories element to get some git based packages. If the nodepool images aren't installing packages that way they were probably fine. | 16:40 |
clarkb | johnsom: we use source-repositories to cache the repos. But we do not install them from there | 16:40 |
clarkb | johnsom: ok I think I found a bug | 16:47 |
johnsom | clarkb I think Ian is out for a few days, so we are probably on our own on this patch. | 16:50 |
clarkb | I too am only going to be about half here. But if people agree with my -1, pushing up a new patchset is probably a good idea | 16:50 |
johnsom | Yeah, I see it too. I will push an update and run the test patch again. | 16:51 |
opendevreview | Michael Johnson proposed openstack/diskimage-builder master: source-repositories : use explicit sudo/-C args when in REPO_DEST https://review.opendev.org/c/openstack/diskimage-builder/+/837824 | 16:53 |
*** rlandy|brb is now known as rlandy | 16:53 | |
clarkb | fungi: fyi https://storyboard.openstack.org/#!/story/2009993 | 17:05 |
fungi | thanks! | 17:18 |
johnsom | clarkb The test patch successfully built the image with that pushd removed. | 17:23 |
clarkb | johnsom: and you verified the repo contents in the resulting image? | 17:23 |
johnsom | ha, well, give the test job a few more minutes. | 17:24 |
clarkb | ok I +2'd it but I think it would be good to check the image contents before approving (the dib CI will take a bit of time anyway) | 17:26 |
johnsom | There are a lot of different paths in that element. I'm not sure we even use that other path. | 17:26 |
clarkb | I need to step away for a bit but will check back in later. Also fungi can +A it if he feels comfortable | 17:26 |
clarkb | johnsom: ya that's possible, but just confirming as much as we can without a ton of extra effort is good | 17:26 |
johnsom | +1 | 17:26 |
fungi | i'm happy to approve it now. i need to run a quick errand, but it won't merge right away | 17:29 |
johnsom | Yeah, the image is good. The tempest tests are passing. | 17:30 |
fungi | stepping away for just a few, back shortly | 17:50 |
fungi | and back, looks like it hasn't merged yet | 18:36 |
clarkb | we'll also likely need to do a release | 18:53 |
clarkb | but depending on how octavia is consuming dib that may not be as urgent | 18:53 |
clarkb | https://metadata.ftp-master.debian.org/changelogs//main/g/git/git_2.30.2-1_changelog I'm not sure that includes the git fix that would affect our builders | 18:54 |
clarkb | https://github.com/git/git/blob/master/Documentation/RelNotes/2.30.2.txt I don't think so. That means we have some time on our end | 18:55 |
johnsom | It's in related projects, but yes, I suspect we will need a release | 18:55 |
opendevreview | Merged openstack/diskimage-builder master: source-repositories : use explicit sudo/-C args when in REPO_DEST https://review.opendev.org/c/openstack/diskimage-builder/+/837824 | 19:45 |
fungi | there we go | 19:53 |
opendevreview | David Wilde proposed opendev/git-review master: Bypass Java >= 17 strong encapsulation for tests https://review.opendev.org/c/opendev/git-review/+/838040 | 20:00 |
opendevreview | David Wilde proposed opendev/git-review master: Bypass Java >= 17 strong encapsulation for tests https://review.opendev.org/c/opendev/git-review/+/838040 | 20:01 |
clarkb | fungi: I'm going to start being here less as I have to help with the kids, but re project renames I think we need to review the openstack/project-config change(s) for the rename, push an opendev/project-config yaml file for the rename (this is the input to the rename playbook), then write down a plan to make sure we aren't missing anything? | 20:21 |
fungi | that's pretty much it. i was planning to hack on it after dinner | 20:28 |
Clark[m] | Cool I'll try to carve out time later to pull up things to review | 20:37 |
fungi | yeah, i'll probably be free in ~ an hour | 20:38 |
clarkb | quickly checking the dib release situation I think we could do a 3.20.3 release with only that last update in it. The previous commit was 3.20.2 | 20:59 |
opendevreview | David Wilde proposed opendev/git-review master: Bypass Java >= 17 strong encapsulation for tests https://review.opendev.org/c/opendev/git-review/+/838040 | 21:08 |
*** dviroel is now known as dviroel|out | 21:24 | |
fungi | clarkb: agreed, that seems prudent | 22:33 |
fungi | the fix is pretty minimal | 22:34 |
fungi | still just the one rename for tomorrow: https://review.opendev.org/833019 | 22:35 |
fungi | single repo changing namespace, no group renames included | 22:36 |
fungi | i'm mildly dubious about switching requireContributorAgreement to true, but will remind the tc that they need to double-check the status of all the prior contributors to that repo | 22:36 |
fungi | i don't think it should block the rename anyway | 22:37 |
fungi | i'll +2 and push up the metadata change | 22:37 |
opendevreview | Jeremy Stanley proposed opendev/project-config master: Record project renames happening April 15, 2022 https://review.opendev.org/c/opendev/project-config/+/838048 | 22:40 |
fungi | clarkb: ^ | 22:40 |
clarkb | fungi: cool, that change and the actual rename change both lgtm | 22:44 |
clarkb | and since there is only a single rename we don't have to worry about order of operations between different renames or getting them in in a short period of time | 22:44 |
fungi | yeah, i plan to just do a status notice as a heads up at 14z and then another at 15z when i start the outage, but basically just follow https://docs.opendev.org/opendev/system-config/latest/gerrit.html#renaming-a-project | 22:47 |
fungi | we don't have any outstanding caveats we need to update the docs there for as far as you're aware, right? | 22:47 |
Clark[m] | Did we manage to get all the updates into that doc after the last rename? | 22:47 |
Clark[m] | Let's see if we can find the etherpad for it to compare | 22:48 |
Clark[m] | https://etherpad.opendev.org/p/project-renames-2022-01-24 | 22:50 |
Clark[m] | fungi: ^ if you need to compare. That also had a Gerrit server upgrade at the end which made it a little different but I think you can largely ignore that | 22:51 |
fungi | yeah, the only slightly out of the ordinary thing we have going on this time is that the updated gerrit config will take effect at restart but does nothing actually new, because it just adds an option our gerrit version should ignore | 22:53 |
Clark[m] | Right | 22:54 |
johnsom | It looks like diskimage-builder isn't release managed by the release team. What is the process to get that git fix released? | 23:16 |
Clark[m] | johnsom: someone in the release group has to push a signed tag | 23:19 |
Clark[m] | This is intentional to avoid chicken and egg problems between the release system and disk image builder releases | 23:20 |
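For the curious, the mechanics amount to a release-group member pushing a GPG-signed tag at the vetted commit. The sha and tag name below are the ones confirmed later in this log; the remote name gerrit matches the refs shown there:

```shell
# Sign the tag against the reviewed commit and push it; release
# jobs trigger once the tag arrives in gerrit
git tag -s 3.20.3 -m 'Release 3.20.3' 4cb3346fec6715d3f2ba7953face5296f497956a
git push gerrit 3.20.3
```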
johnsom | So pretty much infra. Would someone have a moment to hit the button on that? The Octavia grenade jobs use the released version. | 23:26 |
fungi | johnsom: if i wait 30 minutes can i call it a friday release? ;) | 23:30 |
johnsom | lol | 23:30 |
johnsom | It's a bit longer until Friday here. | 23:31 |
fungi | well, i mean utc friday. real friday ;) | 23:31 |
fungi | Clark[m]: it looks like ianw inadvertently tagged not-head for 3.20.2 | 23:32 |
fungi | there's a merge commit which landed after it for a commit which is before it | 23:32 |
fungi | so i think 3.20.3 is actually also going to include https://review.opendev.org/830113 Update fedora element testing to F35 | 23:34 |
fungi | i guess it's not a functional change for the software though, so doesn't really alter plans | 23:34 |
fungi | commit 4cb3346fec6715d3f2ba7953face5296f497956a (HEAD, tag: 3.20.3, origin/master, origin/HEAD, gerrit/master, refs/changes/24/837824/2) | 23:37 |
fungi | that look right? | 23:37 |
fungi | if so, i'll heave | 23:37 |
johnsom | It's the right sha | 23:38 |
*** rlandy is now known as rlandy|out | 23:48 | |
Clark[m] | fungi: oh interesting but I agree a test only change isn't a big deal | 23:50 |
Clark[m] | And ya that Sha looks right | 23:51 |
fungi | cool, pushing thanks! | 23:52 |
fungi | and it's up | 23:52 |
fungi | release-openstack-python node request is taking longer than expected | 23:57 |
johnsom | Thank you! | 23:57 |