clarkb | I've been warned that dinner is just about ready. I'm not likely to be able to make it back to this tonight. | 00:00 |
clarkb | Now that we know what the issue is at least we can work with it. Good luck! :) | 00:01 |
ianw | clarkb: thanks for digging into it! | 00:02 |
johnsom | email sent | 00:05 |
ianw | "Not uninstalling six at /usr/lib/python3/dist-packages, outside environment /usr" | 00:13 |
ianw | that seems like a unique error | 00:13 |
ianw | s/unique/new/ | 00:14 |
ianw | oh actually that's not the error | 00:15 |
ianw | 2022-04-12 23:58:05.740 | + lib/infra:install_infra:32 : python3 -m venv /opt/stack/requirements/.venv | 00:15 |
ianw | 2022-04-12 23:58:05.778 | Error: [Errno 13] Permission denied: '/opt/stack/requirements/.venv' | 00:15 |
ianw | ^ that is the error | 00:15 |
ianw | see fungi i told you nothing would try to modify the source tree, nobody would do that, that's just crazy talk :) | 00:16 |
fungi | bwahaha | 00:18 |
opendevreview | Ian Wienand proposed openstack/devstack master: chown checked-out repos (to be used for install) to root ownership https://review.opendev.org/c/openstack/devstack/+/837636 | 00:23 |
opendevreview | Ian Wienand proposed openstack/devstack master: chown checked-out repos (to be used for install) to root ownership https://review.opendev.org/c/openstack/devstack/+/837636 | 00:50 |
opendevreview | Ian Wienand proposed openstack/devstack master: chown checked-out repos (to be used for install) to root ownership https://review.opendev.org/c/openstack/devstack/+/837636 | 01:12 |
ianw | hrm, neutron wants to also edit the source tree as it creates its example configs in there | 01:53 |
ianw | i wonder if "root:stack" ownership will work | 01:54 |
ianw | that will allow git traversal of the directories, but hopefully also give the stack user the ability to write in there transparently | 01:54 |
ianw | i wonder what our umask on all this is | 01:54 |
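(Roughly what ianw is describing here, assuming /opt/stack/requirements is one of the checkouts; this is an illustrative sketch, not what the eventual change does:)

```shell
# Repos owned by root so the hardened git trusts them when pip runs under sudo,
# but group-writable by the stack group so the stack user can still write there.
sudo chown -R root:stack /opt/stack/requirements
sudo chmod -R g+w /opt/stack/requirements
```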
opendevreview | Ian Wienand proposed openstack/devstack master: chown checked-out repos (to be used for install) to root ownership https://review.opendev.org/c/openstack/devstack/+/837636 | 02:04 |
ianw | it looks like the ^ approach, owned by root but group stack, is getting a lot further | 03:20 |
ianw | devstack passed, but tempest-full-py3 failed | 03:37 |
ianw | 2022-04-13 03:28:24.538 | ++ lib/tempest:set_tempest_venv_constraints:119 : git show origin/master:upper-constraints.txt | 03:41 |
ianw | 2022-04-13 03:28:24.538 | fatal: unsafe repository ('/opt/stack/requirements' is owned by someone else) | 03:41 |
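(For context: the security-patched git refuses to operate on a repository owned by a different user unless the path is explicitly allow-listed. The workaround devstack ends up adopting later in this log amounts to roughly the following, with the path just an example:)

```shell
# Allow-list a checkout owned by another user so git 2.35.2+ stops rejecting it
# with "unsafe repository ... is owned by someone else".
sudo git config --global --add safe.directory /opt/stack/requirements
```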
opendevreview | Ian Wienand proposed openstack/devstack master: chown checked-out repos (to be used for install) to root ownership https://review.opendev.org/c/openstack/devstack/+/837636 | 03:57 |
ianw | i think the tempest source needs to be owned by stack, as opposed to all the other sources. so now that has a wrapper | 03:59 |
ianw | sigh that's not right ... pip_install -e /opt/stack/tempest | 04:21 |
opendevreview | Benny Kopilov proposed openstack/tempest master: Allows to skip wait for volume create https://review.opendev.org/c/openstack/tempest/+/837603 | 04:26 |
ianw | i think the problem is that we install tempest, but also run tox in the tempest source dir, which also tries to install tempest as "stack" into the tox venv | 04:28 |
ianw | i don't know ... i'm starting to think this is a losing battle | 05:01 |
opendevreview | Ian Wienand proposed openstack/devstack master: [wip] mark clones git safe https://review.opendev.org/c/openstack/devstack/+/837659 | 05:05 |
opendevreview | yatin proposed openstack/devstack master: [DNM] check ovn fedora failure https://review.opendev.org/c/openstack/devstack/+/837660 | 05:27 |
ianw | i think ^^ may be the way, tempest is at least running | 05:50 |
ianw | grenade is not, but i guess we need backports | 05:51 |
opendevreview | yatin proposed openstack/devstack master: [DNM] check ovn fedora failure https://review.opendev.org/c/openstack/devstack/+/837660 | 06:04 |
opendevreview | Ian Wienand proposed openstack/devstack master: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837659 | 06:57 |
*** pojadhav is now known as pojadhav|lunch | 08:00 | |
*** pojadhav|lunch is now known as pojadhav | 08:40 | |
*** whoami-rajat__ is now known as whoami-rajat | 08:53 | |
opendevreview | Victoria Martinez de la Cruz proposed openstack/devstack-plugin-ceph master: Deploy with cephadm https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484 | 09:37 |
opendevreview | Victoria Martinez de la Cruz proposed openstack/devstack-plugin-ceph master: Deploy with cephadm https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484 | 10:04 |
opendevreview | Francesco Pantano proposed openstack/devstack-plugin-ceph master: Deploy with cephadm https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484 | 10:22 |
opendevreview | Federico Ressi proposed openstack/devstack master: [WIP] Create workaround for new issue related with https://bugs.launchpad.net/pbr/+bug/1675459 https://review.opendev.org/c/openstack/devstack/+/837720 | 11:47 |
opendevreview | Luigi Toscano proposed openstack/devstack-plugin-ceph master: [DNM][CI] Add CEPHADM_DEPLOY flag to py3 tests https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/834223 | 12:01 |
fungi | ianw: i'm mostly caught up now, but i think i may have a solution, posted to the ml thread... rather than relying on pip to call setuptools/pbr, just separate the wheel build step out and do that as stack, then install the resulting wheel as root | 12:08 |
opendevreview | Federico Ressi proposed openstack/devstack master: Create workaround for issue when installing Keystone package as root from /opt/stack/keystone https://review.opendev.org/c/openstack/devstack/+/837720 | 12:09 |
fungi | after i get a shower, i can try to push up a patch which does that, if nobody else beats me to the punch | 12:13 |
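(A rough sketch of the split fungi describes, assuming a keystone checkout owned by the stack user; the flags and paths are illustrative rather than exactly what 837731 ends up doing:)

```shell
# Build the wheel as the unprivileged stack user, which is when pbr shells out to git...
sudo -u stack -H pip wheel --no-deps -w /opt/stack/keystone/dist /opt/stack/keystone
# ...then install the already-built wheel as root, which no longer needs to run git/pbr.
sudo -H pip install /opt/stack/keystone/dist/*.whl
```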
opendevreview | Francesco Pantano proposed openstack/devstack-plugin-ceph master: Deploy with cephadm https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484 | 12:51 |
opendevreview | Dan Smith proposed openstack/devstack master: WIP: Gather performance data after tempest https://review.opendev.org/c/openstack/devstack/+/837139 | 13:37 |
*** soniya29 is now known as soniya29|out | 13:51 | |
*** artom__ is now known as artom | 14:00 | |
*** pojadhav is now known as pojadhav|afk | 14:06 | |
dansmith | is the git problem biting us because we do an apt upgrade before/during our run or something? | 14:10 |
dansmith | and if so, could we maybe just pin git for the time being? | 14:11 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 14:13 |
fungi | dansmith: *we* can pin git to an insecure version, but we can't very well tell all devstack users to just stick with an old git which has known security vulnerabilities | 14:14 |
dansmith | clearly not long-term | 14:14 |
dansmith | I mostly meant to get past the need to disable grenade to merge the config fix (which is, AFAIU, equivalent to pinning to the insecure version anyway) | 14:14 |
fungi | which config fix? sorry, been trying to test a working fix for devstack | 14:15 |
dansmith | https://review.opendev.org/c/openstack/devstack/+/837659/2/functions-common | 14:15 |
dansmith | ianw proposed globally disabling the new check I think | 14:15 |
fungi | but yes, we'll need to patch stable branches and forward-port, just like any devstack bug which impacts upgrade testing | 14:15 |
fungi | the usual solution is to patch the oldest stable branch and then cherry-pick that patch to later branches until you reach master | 14:16 |
dansmith | well, what causes us to get the new git? not devstack, that I know of, so I figured it was base zuul stuff? | 14:17 |
fungi | figured what was zuul stuff? | 14:17 |
dansmith | causing us to get the just-released-yesterday git package | 14:17 |
fungi | nope, it's ubuntu | 14:17 |
fungi | ubuntu patched a security hole in the versions of git they distribute | 14:18 |
dansmith | right, I understand :) | 14:18 |
dansmith | we must be running apt-upgrade on our workers at some point to be getting that, right? | 14:18 |
fungi | it will appear in all supported ubuntu versions soon if it hasn't already, and presumably most other distros too | 14:18 |
dansmith | yeah I know, just trying to understand and think about ways to get out of jail sooner.. I'll drop it and leave it to the experts, sorry for the noise | 14:19 |
fungi | we build fresh images every day, so whatever the current versions of ubuntu's packages are, that's what we run tests on (because we assume our users aren't avoiding installing security fixes in their deployments) | 14:19 |
fungi | thankfully, we have a ci system which has caught that devstack no longer works with security-patched git versions, so it's a sign we need to fix devstack | 14:21 |
fungi | my idea is that we can fix it by separating wheel building (which needs to run pbr/git) from wheel installing (which needs to run as the root user), but i'm not all that well-versed in devstack internals so fumbling my way through. i expect others are trying alternative approaches | 14:24 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate sdist building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 14:35 |
clarkb | fungi: your idea seems good except I'm not sure if you can do an editable install against an sdist or wheel? | 14:37 |
fungi | clarkb: we probably need to give up on editable installs anyway. their days seem to be numbered based on the pep-660 discussions in the python community and among the setuptools maintainers | 14:37 |
clarkb | ah ok, and ya that feature seems less important than working at all :) devs can learn to reinstall | 14:38 |
clarkb | ok school run, back in a bit | 14:38 |
fungi | in particular, the setuptools maintainers have been suggesting that continuing to support editable installs is at odds with future development priorities they have | 14:38 |
fungi | there are definitely people in the python community who want editable installs to stick around, but i'm not sure they'll win out | 14:39 |
fungi | i would say, if people do want editable installs in devstack, that's a reason to re-raise the venv solution. having the root user install packages which are backed by editable files owned by a non-root user seems really fragile | 14:43 |
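(A minimal sketch of what the "venv solution" fungi mentions would look like, with paths purely illustrative: services installed into a venv owned by the stack user instead of `sudo pip install` into the system Python, so editable installs never need root-owned git calls:)

```shell
# Create a stack-owned virtualenv and do the editable install there as the stack user.
python3 -m venv /opt/stack/venv
/opt/stack/venv/bin/pip install -e /opt/stack/keystone
```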
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate sdist building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 14:56 |
johnsom | I can't remember, why have we not gone down the venv path? Other than the required effort of course. | 15:05 |
opendevreview | Francesco Pantano proposed openstack/devstack-plugin-ceph master: Deploy with cephadm https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484 | 15:05 |
fungi | clarkb: if you're interested, what progress there has been on editable mode rework in setuptools is most recently summarized at https://github.com/pypa/setuptools/issues/2816 and https://discuss.python.org/t/14855 | 15:07 |
clarkb | johnsom: I tried valiantly when pip broke us in devstack due to not being able to uninstall distro installed distutils packages. There wasn't enough momentum to make the switch and getting it to work with grenade and ironic was tricky (doable though) | 15:15 |
clarkb | johnsom: basically we lacked critical mass of people interested in doing reviews and dealing with the likely fallout | 15:15 |
clarkb | the changes are still up (might be abandoned) if anyone wants to resurrect the effort. I'm not sure I've got it in me to try agin myself | 15:16 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate sdist building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 15:21 |
johnsom | clarkb Ok, thanks. Yeah, we switched to venv for the amphora agent in our images in Ussuri to get away from the conflict issues with the distro packages. It has served us well. | 15:22 |
clarkb | johnsom: from memory I got grenade working via two separate virtualenvs. The last known problems were due to ironic and how it runs random commands in weird ways to do their more all in one testing iirc | 15:23 |
johnsom | I can imagine that was a large effort to work through. | 15:26 |
clarkb | fungi: left a couple of thoughts on 837731 | 15:30 |
fungi | thanks! | 15:31 |
fungi | clarkb: responded, and also the preliminary results on 837731,4 bear out your supposition about editable mode... "ERROR: /opt/stack/keystone/dist/*.tgz is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with svn+, git+, hg+, or bzr+)." | 15:57 |
fungi | i'll see if dropping -e gets things to go farther | 15:57 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 16:10 |
clarkb | I dunno if we need to tell openstack to stop approving changes but it seems like the gate has been busy failing | 16:16 |
gmann | I think let's get this in to unblock the gate and then 837731 in parallel? https://review.opendev.org/c/openstack/devstack/+/837659 | 16:19 |
dansmith | gmann: +1 .. as I noted in there, that avoids breaking everyone's assumption about editable trees in the short-term, and appears ready to go | 16:20 |
gmann | let me make grenade non-voting for now. merging the fixes oldest-branch-first across the stable branches is a little complex and takes time, especially since the EM stable branches also have grenade jobs | 16:21 |
clarkb | ya I think my only concern with it is that it makes a system-wide decision on the host for all usage of git. That said devstack has made lots of system-wide impacting decisions. | 16:21 |
dansmith | ack, I hate to do it, but the time-to-fix is more important I think | 16:22 |
clarkb | gmann: can you explain why merging this in the proper order is not possible? | 16:22 |
fungi | yeah, i'm focused on the mid-term solution at this point, with an eye to getting rid of sudo pip install in favor of venvs as the longer-term solution | 16:22 |
fungi | short-term fix seems straightforward if not great | 16:22 |
clarkb | it isn't complex. You just do depends on and zuul handles the rest | 16:22 |
gmann | clarkb: it is possible but it takes time to unblock the master gate | 16:22 |
clarkb | gmann: it's the same amount of time as landing a single change assuming testing is stable (a big assumption I know but a good reason to remind people why this matters) | 16:22 |
clarkb | every time this sort of issue comes up we end up making the situation worse by not following the path that works out of the box | 16:23 |
gmann | clarkb: which is not the case for much of the stable testing, especially the EM branches | 16:23 |
clarkb | with grenade ending up disabled in a bunch of places people don't realize, and then broken upgrades landing in the interim | 16:23 |
clarkb | EM branches aren't supposed to run grenade though | 16:23 |
gmann | making it non-voting and then voting again also does no harm | 16:23 |
clarkb | gmann: it absolutely does | 16:23 |
clarkb | the last time we ran into this it took weeks to untangle the resulting mess | 16:24 |
gmann | we do it in a stack | 16:24 |
clarkb | if we had just done it properly it would've been done and we could move on. But moving to non voting or removing the tests (what happened last time) made a huge mess | 16:24 |
fungi | the period of time where grenade is non-voting is an opportunity for upgrade-breaking changes to slip through | 16:24 |
gmann | let me propose both the non-voting change and then the revert of it so that we merge them in a single shot | 16:24 |
dansmith | how far back do we have to go? I assumed to any branch we're testing because the CVE likely made it to xenial right? | 16:25 |
clarkb | fungi: right from memory what happened last time was that the necessary backports didn't end up happening so all the non voting/removed grenade jobs stopped providing signal for long enough that a problem or two landed. Then when we tried to undo the grenade removal it ended up being as painful as just fixing the thing in the first place | 16:26 |
clarkb | dansmith: I think just to bionic | 16:26 |
gmann | dansmith: till victoria | 16:26 |
gmann | victoria | 16:26 |
dansmith | yeah, which release is what I meant | 16:26 |
dansmith | so we have to land v,w,x,y, and then master before we're fixed right? | 16:26 |
clarkb | openstack shouldn't be doing any more xenial testing at this point. We're working on removing xenial from our CI options and openstack wasn't on the list iirc | 16:26 |
clarkb | dansmith: yes, but you can set depends on and they can enter the gate together | 16:27 |
clarkb | (and we can direct enqueue etc) | 16:27 |
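(The mechanism clarkb is referring to is Zuul's Depends-On footer: each backport's commit message points at the change for the next-older branch, so the whole chain can be enqueued and tested together. A hypothetical example, using the victoria change clarkb pushes later in this log as the dependency for the wallaby backport:)

```shell
# Amend the stable/wallaby backport so its commit message carries a Depends-On
# footer pointing at the stable/victoria change, then push it for review.
git commit --amend -m "Mark our source trees as safe for git to use as other users

Depends-On: https://review.opendev.org/c/openstack/devstack/+/837745"
git review stable/wallaby
```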
dansmith | presumably we could direct enqueue the pair of non-voting,revert too right? | 16:27 |
fungi | dansmith: if you consider stable branches as supported by the openstack developers, then we have to merge v,w,x,y, and master before we're fixed regardless of which order they merge in | 16:27 |
gmann | clarkb: we have that in EM branches, we might need to EOL them first. but that's another topic. | 16:27 |
dansmith | fungi: right but unblocking master is by far the most important | 16:27 |
fungi | is it? | 16:28 |
johnsom | Wouldn't pinning the git package be a safer interim fix? | 16:28 |
gmann | any stable branch older than victoria has its grenade jobs non-voting, so yes, back to victoria | 16:28 |
fungi | johnsom: pinning it how? | 16:28 |
dansmith | johnsom: already suggested that :/ | 16:28 |
clarkb | gmann: dansmith remember you have to land the fix in yoga at least too | 16:28 |
clarkb | to unbreak master | 16:28 |
clarkb | you cannot pin ubuntu packages | 16:28 |
johnsom | Pull the package from kernel.org (or our mirrors) then mark it in apt as pinned so it can't upgrade | 16:28 |
clarkb | they get pruned | 16:28 |
gmann | yeah | 16:28 |
clarkb | and they are gone | 16:28 |
fungi | johnsom: building git from source? | 16:29 |
johnsom | wget http://mirrors.edge.kernel.org/ubuntu/pool/main/g/git/git_2.25.1-1ubuntu3.2_amd64.deb is what I did for the test yesterday | 16:29 |
fungi | and yeah, the debian/ubuntu word for pinned is "held" | 16:29 |
gmann | ah we have neutron grenade job also in devstack gate | 16:29 |
dansmith | clarkb: that's what we have to do to get people to merge things that are running grenade, yes, but at the point where we've merged the devstack fix then we can at least be getting runs.. right now nobody can even get a test result | 16:30 |
clarkb | I think we should avoid installing insecure software intentionally when we have other valid workarounds. Even if the valid workaround is "restore older insecure behavior in a specific case" because then other uses of the software can be secure | 16:30 |
dansmith | johnsom: right that was my first suggestion this morning and still seems totally reasonable to me | 16:31 |
fungi | so not pinning git, just downgrading git and marking the package as held. i still think that's not a great signal to send to users: here you should avoid this high-profile security fix because openstack has spent years avoiding solving a packaging anti-pattern that the python packaging community and distros agree is a terrible idea | 16:31 |
johnsom | Don't use my patch as it was quick/dirty and pulls the package multiple times. | 16:31 |
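(For completeness, the downgrade-and-hold approach johnsom tested would look roughly like the following; the .deb version comes from his paste above, and the thread goes on to reject this in favour of fixing devstack:)

```shell
# Downgrade to the pre-CVE git package and tell apt not to upgrade ("hold") it.
wget http://mirrors.edge.kernel.org/ubuntu/pool/main/g/git/git_2.25.1-1ubuntu3.2_amd64.deb
sudo dpkg -i git_2.25.1-1ubuntu3.2_amd64.deb
sudo apt-mark hold git
```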
clarkb | all that to say +2 on ianw's change. I would prefer we land the sequence. If you make grenade non voting instead you still need to land the sequence so why don't we start on that now. I can work on pushing those changes | 16:31 |
johnsom | Yeah, but I'm not a fan of secretly disabling the feature globally on people either | 16:31 |
clarkb | to be clear I'll push the "stack" of backports with appropriate depends-on. I won't do the non-voting stuff. But they can happen concurrently, since even with grenade non-voting we should still want the stack to land asap | 16:32 |
dansmith | johnsom: it's blessing individual repos not disabling entirely, so it's pretty targeted | 16:32 |
clarkb | dansmith: exactly. It only does it for the repos that devstack configures/clones | 16:32 |
johnsom | Ok, I missed that fine tuning | 16:33 |
gmann | clarkb: sure, please propose the backports, and let's see how much time it takes to unblock the master gate. | 16:33 |
dansmith | right, you're already giving devstack the keys to your system to make all sorts of changes, so I don't think trusting the repos you're installing and running with privs is a big deal | 16:33 |
fungi | right, the config is path-specific | 16:33 |
gmann | more than stable, I still think unblocking the master gate is the most important thing we should do ASAP | 16:34 |
dansmith | most definitely | 16:34 |
clarkb | I guess if I put the depends on in the changes then switching to non voting won't help | 16:34 |
clarkb | since they have to land in order | 16:34 |
fungi | merging additional changes to master is more urgent than fixing the problem? | 16:34 |
clarkb | I'll push them up with depends-on only on the stable branches, then yall can disable grenade on master if you want and land it as is | 16:35 |
dansmith | I don't want to disable grenade unless we have to, but the risk vs. nobody being able to even see test results for two more days is acceptable, IMHO | 16:36 |
dansmith | if we can really sail through a big stack in six hours, then doing it the right way makes sense to me | 16:36 |
clarkb | It depends on how stable stable testing is and I'm not clued into that right now | 16:36 |
dansmith | right, that's my concern | 16:37 |
gmann | that is the main thing, and my concern is that it will take time | 16:37 |
fungi | i guess i have a fundamental disconnect with the idea that the most important thing is that people can merge changes without being impeded by annoying bugs in their software (and to be clear, devstack is the openstack community's software) | 16:37 |
gmann | but you can propose them and if we all see merging them taking time then we can go with grenade non-voting | 16:37 |
fungi | but i'll set that aside for the sake of pragmatism | 16:37 |
opendevreview | Clark Boylan proposed openstack/devstack stable/victoria: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837745 | 16:38 |
gmann | fungi: it's a bug in how we are creating the test env, not in openstack as software, so staying blocked for longer does not make sense to me | 16:38 |
dansmith | agree | 16:38 |
fungi | makes sense, openstack should stop using devstack, i agree | 16:39 |
gmann | "a things in ubuntu security patch and openstack is all blocked for development" | 16:39 |
opendevreview | Clark Boylan proposed openstack/devstack stable/wallaby: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837746 | 16:39 |
fungi | an important security fix in ubuntu finally undercut the house of cards that is the sudo pip install approach we should never have used to begin with because it was as terrible an idea 12 years ago as it is now | 16:40 |
clarkb | well to be fair I made a large effort a few years ago to fix these classes of problems in devstack and I couldn't get any buy-in | 16:40 |
clarkb | no one wanted to invest in preventing these errors so here we are | 16:40 |
opendevreview | Clark Boylan proposed openstack/devstack stable/xena: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837747 | 16:41 |
dansmith | fungi: I don't want to get into an argument, but I really don't think the security concern of this change really impacts us here.. we're already running anything anyone submits to the system as root, so the concern about leakage within a single system is not really relevant, IMHO | 16:41 |
gmann | definitely we need a better testing env and infra, and more importantly people to help there. it is causing more issues in OpenStack development day by day now, and I am not sure how to fix it | 16:42 |
dansmith | devstack is not a deployment tool, it's a development tool and it does all kinds of things that are not ideal for deployment to make development easier | 16:42 |
fungi | dansmith: i'm not disagreeing with the pragmatic workaround, just pointing out that we're sleeping in the bed we've collectively made | 16:42 |
opendevreview | Clark Boylan proposed openstack/devstack stable/yoga: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837748 | 16:42 |
dansmith | clarkb: should we be +Wing those now? | 16:42 |
clarkb | ok all of those stable changes have the appropriate depends-on and if reviewed and approved should end up gating in a way that makes grenade happy, assuming victoria was really how far back we need to go | 16:42 |
clarkb | dansmith: if we think that ianw's workaround fix is the one we want (at least for the short term) then ya I think it is safe to +W them | 16:43 |
clarkb | I'm checking now if the victoria change is running grenade (it shouldn't) | 16:43 |
dansmith | clarkb: ack, thanks for doing that | 16:43 |
clarkb | the victoria change is running grenade | 16:44 |
clarkb | which means I didn't go back far enough | 16:44 |
clarkb | gmann: ^ is this a bug in the victoria job defs or do we need to go back to ussuri? | 16:44 |
gmann | yeah, let's try and see how fast they merge | 16:44 |
dansmith | maybe we should drop grenade underneath on V? | 16:44 |
gmann | clarkb: ussuri also, to unblock the victoria grenade | 16:44 |
gmann | dansmith: yeah but once that is EM | 16:45 |
clarkb | ok one sec ussuri change incoming | 16:45 |
gmann | dansmith: oh ussuri is EM then yes you are right we can drop on V | 16:45 |
dansmith | clarkb: ^ | 16:45 |
opendevreview | Clark Boylan proposed openstack/devstack stable/ussuri: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837749 | 16:46 |
clarkb | oh ok I can abandon ^ if someone wants to push a change to remove grenade from victoria | 16:46 |
gmann | clarkb: you can make them non-voting like the other stable EM branches have, and if anyone wants to fix it then they backport this ^^ to ussuri or actually fix it | 16:46 |
gmann | clarkb: I mean make grenade non voting on victoria backport only | 16:47 |
clarkb | gmann: the problem isn't in the EM branch, the problem is in victoria | 16:47 |
clarkb | and victoria shouldn't switch to non-voting, it should delete the job | 16:47 |
dansmith | can we do that without changing templates? | 16:47 |
gmann | clarkb: agree but EM maintainers keep them as non voting | 16:47 |
gmann | every other EM branch has the grenade job but non-voting | 16:48 |
clarkb | ugh you shouldn't have gating jobs as non voting | 16:48 |
clarkb | they should be removed | 16:48 |
gmann | it's in the check pipeline only I think | 16:48 |
dansmith | I can push this, if it's right: https://termbin.com/9t6e1 | 16:49 |
clarkb | gmann: I think it is best if you modify the jobs as I'm not aware of how you have been organizing them (and my initial impression is that I need to go and change it anyway :P) | 16:49 |
dansmith | gmann: it's in gate too | 16:49 |
gmann | clarkb: yeah I am not worried about EM things, I tried to remove it completely many times but they are added back as non-voting | 16:49 |
clarkb | right but we are not talking about EM things | 16:50 |
clarkb | we are talking about victoria | 16:50 |
clarkb | my understanding of how we handle the oldest branch was that victoria, as the oldest supported release, should not run grenade any longer | 16:50 |
clarkb | but if that isn't the case please feel free to push what you would like to see to make victoria testing pass | 16:50 |
clarkb | assuming it fails here. I guess bionic may not have gotten the git fix /me checks | 16:50 |
clarkb | https://ubuntu.com/security/notices/USN-5376-1 says bionic did get it so I expect victoria to fail too | 16:51 |
gmann | dansmith: I mean any branch in EM has them as non voting but in check pipeline only https://github.com/openstack/devstack/blob/stable/ussuri/.zuul.yaml#L631 | 16:51 |
clarkb | gmann: ok and you are suggesting we make victoria match that? | 16:52 |
dansmith | gmann: I'm saying in victoria, it's in the gate queue as voting | 16:52 |
gmann | clarkb: I am dropping them completely in Victoria and if any EM maintainer comes they can add it back as non-voting or so | 16:52 |
dansmith | I don't know if that works for victoria because it uses the templates | 16:52 |
gmann | yeah, as ussuri is EM and grenade in victoria tests ussuri->victoria, we can just not worry about it in victoria too | 16:53 |
dansmith | gmann: so you're pushing a patch? | 16:53 |
clarkb | it's interesting that it uses the template but then also redefines things. I think if you set non-voting at the project level under the grenade entry there it will do the right thing | 16:53 |
gmann | dansmith: we do not use the template in the devstack gate | 16:53 |
gmann | clarkb: because of it ^^ | 16:53 |
clarkb | gmann: https://github.com/openstack/devstack/blob/stable/victoria/.zuul.yaml#L638 that one | 16:54 |
clarkb | it defines grenade | 16:54 |
dansmith | exactly | 16:54 |
gmann | clarkb: dansmith ah sorry again I was checking ussuri so that also goes away https://github.com/openstack/devstack/blob/stable/ussuri/.zuul.yaml#L564 | 16:54 |
gmann | for victoria: stop using the template and make grenade non-voting ^^ like we did in ussuri | 16:54 |
clarkb | ++ if you do that I'll happily Approve it | 16:55 |
gmann | clarkb: will you do it in that backport itself? that would be faster to merge the things | 16:55 |
clarkb | oh good point. I guess I can do that. Give me a moment | 16:55 |
gmann | +1 | 16:55 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 16:55 |
opendevreview | Clark Boylan proposed openstack/devstack stable/victoria: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837745 | 16:58 |
clarkb | gmann: ^ I think that will do it? the other stable branch changes will need to be rechecked now | 16:58 |
* clarkb does that | 16:58 | |
clarkb | gmann: dansmith: thinking out loud here, if we leave the depends-on off of master, that helps if you wish to switch grenade on master to non-voting because stable isn't stable. But if stable does manage to enqueue to the gate then we could manually enqueue the master change behind the stable changes | 17:02 |
clarkb | basically doing the depends on manually | 17:02 |
gmann | let's see how it goes and if we are stuck in any of stable then we can go for non voting | 17:03 |
dansmith | so we should stack +Ws on the other backports so that they'll merge ASAP yeah? | 17:06 |
clarkb | yup | 17:06 |
gmann | yeah | 17:09 |
dansmith | gmann: same for the master one right? | 17:20 |
clarkb | there are failures on victoria jobs but so far only on non voting jobs | 17:20 |
clarkb | gmann: dansmith ya if you +W the master one it won't enqueue automatically but I can volunteer to enqueue it if the others enqueue ahead of it | 17:20 |
clarkb | and it needs +W for me to enqueue it that way | 17:21 |
gmann | clarkb: but it does not have depends-on | 17:21 |
clarkb | gmann: yes that is why I will have to manually enqueue it | 17:21 |
gmann | ok | 17:21 |
clarkb | which I can do to keep our options open | 17:21 |
gmann | done | 17:21 |
clarkb | (trying to keep the best of both options available right now) | 17:21 |
gmann | +W | 17:21 |
clarkb | thanks! | 17:21 |
dansmith | clarkb: no depends-on on that one though | 17:23 |
clarkb | yup I'll have to manually enqueue it when the others enter the gate | 17:24 |
gmann | dansmith: yeah, he will enqueue that after the stable/yoga one so it would run before stable/yoga is merged | 17:24 |
gmann | clarkb: even the check pipeline too as it will fail otherwise | 17:24 |
dansmith | ack, I got it now, just wanted to make sure we weren't missing (the important) one :) | 17:24 |
clarkb | if I add the depends-on then it makes a pivot to non-voting grenade on master less useful since the yoga change would still have to merge first. This allows us to keep that option open with a little manual intervention which I'm happy to do | 17:25 |
clarkb | feel free to ping me when the yoga change enters the gate though I will try to watch out for it too | 17:25 |
gmann | sure | 17:25 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 17:26 |
gmann | fungi: clarkb on xenial support: the stable/pike through stable/rocky EM branches use Xenial nodes for testing. | 17:30 |
fungi | yeah, we'd like to get rid of xenial nodes soon | 17:31 |
fungi | stable/rocky is pretty ancient anyway | 17:31 |
clarkb | gmann: hrm we're definitely trying to delete those images (we still use them to test some stuff on the infra side so not immediate). Would be good if we can work towards cleaning some of that up. I was just talking to pabelanger about it since windmill uses xenial | 17:31 |
gmann | in that case we need to wait for them to be EOL | 17:31 |
gmann | and I'm totally ok to do that, but it's the EM team's decision | 17:32 |
clarkb | well no | 17:32 |
fungi | well, i'd say it's the other way around. they can be eol as soon as you want or just remove any jobs for them which need xenial | 17:32 |
clarkb | the distro is well past its sell by date | 17:32 |
clarkb | and we're saying we cannot support it anymore | 17:32 |
clarkb | whether or not they eol the branches is up to them but not the removal of xenial | 17:32 |
gmann | I think every job uses xenial so if we remove those jobs it will be almost zero testing | 17:32 |
clarkb | that may be and it would be up to the EM team if they want to try and make bionic or some other option work | 17:33 |
fungi | we always said em branches could exist as long as we can run jobs for them. if we're removing the nodes they require, then it's time to eol them | 17:33 |
dansmith | xenial is just now end of support, AFAIK | 17:33 |
clarkb | dansmith: was last year in april | 17:33 |
fungi | just now a year or so ago | 17:33 |
dansmith | oh it is 2022 already, huh? | 17:33 |
clarkb | ya 2020 never ended so it can be confusing | 17:33 |
gmann | yeah, that is a good timeframe. I am focusing much less on those EM branches | 17:34 |
gmann | keeping them as EM but untested is not good, i prefer we do EOL | 17:34 |
dansmith | ack, so yeah, makes sense to drop at that point I think | 17:34 |
clarkb | the infra team/opendev does occasionally keep things past the sell by date simply because we don't have time to cleanup everything on time, but we don't intend on keeping things alive beyond that point | 17:35 |
fungi | we've already had those em branches for 2-3x as long as we used to keep branches open | 17:35 |
clarkb | but the more releases we support the more image builds that we have to keep working and we need more disk for the set. Also more disk for the mirrors and so on | 17:35 |
fungi | so i'd say they've had a good run | 17:35 |
clarkb | fungi: ++ | 17:36 |
gmann | when is the plan for xenial image removal? | 17:36 |
clarkb | gmann: right now we're asking people nicely to stop using them. we haven't gotten to the point where we're forcefully removing it though I suspect that will eventually happen if asking nicely stops making progress | 17:37 |
fungi | the bulk of opendev's own need to test on xenial will be gone once we turn off the logstash/elasticsearch systems | 17:38 |
fungi | we still have a few straggler services which are deployed on xenial servers right now (with gratis security support from canonical for those handful of servers), but they'll be reworked or turned off soon | 17:39 |
gmann | sure, that is a nice way. but let me know if you plan any deadline so that we can discuss it in the openstack TC and EM SIG and take a decision on those. and stable/victoria will be EM soon, so even if we remove up to stable/rocky we have 4 EM branches + 3 maintained branches. | 17:39 |
clarkb | gmann: well I think the deadline was april 2021. The fact that we don't have the bw to keep up with that is an implementation detail :) | 17:40 |
fungi | heh | 17:40 |
fungi | [sad laugh] | 17:40 |
dansmith | clarkb: but it's still the 42nd month of 2020 | 17:41 |
dansmith | so, unrelated to all this mess, | 17:42 |
dansmith | gmann: are you generally cool with something like this? https://review.opendev.org/c/openstack/devstack/+/837139 | 17:42 |
dansmith | gmann: it has already set off clarkb's memory footprint alarms, which I see as a good thing, | 17:42 |
dansmith | and if we can get some averages via logstash then we can also highlight specific patches that make big changes I think | 17:43 |
dansmith | I still need to make it work on non-ubuntu systems (for the apache logs) and re-enable the other jobs it disabled for debug | 17:44 |
gmann | dansmith: sure, it is good idea. so you plan to enable it in base devstack job by default? | 17:45 |
dansmith | I would like to unless there's some reason not to, for a couple reasons: | 17:45 |
dansmith | 1. I'd like to be able to check almost any run for overages because that's how we'll catch things | 17:46 |
dansmith | 2. I think the averages (per job) are important and we only get that if it's run widely | 17:46 |
clarkb | I think its nice to have the summary too | 17:46 |
clarkb | as it exposes "hey maybe privsep is a problem" etc | 17:46 |
gmann | yeah, I was thinking the same, and then it will also gather the data per patch on the project side too | 17:46 |
dansmith | the only reason to not do it everywhere is if it impacts performance, but so far I think it doesn't | 17:46 |
gmann | yeah, it should not | 17:47 |
dansmith | the mysql performance_schema is the only thing that might, but it seems okay | 17:48 |
dansmith | it would be nice if we could summarize potential issues in the gerrit findings tab, but I haven't looked into what I need to do to make that happen | 17:48 |
clarkb | dansmith: there is a way to have a zuul job comment back (it does this for linter failures). You could theoretically use that tooling | 17:52 |
dansmith | oh, that'd be cool | 17:52 |
clarkb | I'm not sure what lines you would comment on for this though, but that seems solvable one way or another | 17:52 |
dansmith | does it have to be per-line and per-file? | 17:52 |
clarkb | yes I think so. It was really built to target the linter use case. But I think it is the only way to get a generic message back to gerrit from zuul | 17:54 |
clarkb | otherwise its all pass/fail and link to the zuul page for the job | 17:54 |
dansmith | okay, I figured the gerrit findings tab would maybe parse a test result file from somewhere, so I thought that'd be an avenue as well | 17:54 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 17:54 |
fungi | clarkb: ^ so one downside to the `pip wheel` idea is that by default it wants to create wheels for all your dependencies if they aren't already available. i think --no-deps will address that, but it's not completely clear from the docs | 18:02 |
clarkb | wow | 18:03 |
fungi | another way to put it is that unlike build or setup.py bdist_wheel, it uses the same output directory for all the wheels it builds in the process of making the wheel for the project you asked it to | 18:03 |
clarkb | though I guess it will do that anyway when you install on the next step | 18:03 |
clarkb | so maybe six of one, half a dozen of the other | 18:03 |
fungi | we only want to install the project from file though, since otherwise we get a constraints conflict | 18:04 |
fungi | ERROR: Could not satisfy constraints for 'alembic': installation from path or url cannot be constrained to a version | 18:04 |
clarkb | ah | 18:04 |
clarkb | hrm maybe we do need to install build then? Or I guess --no-deps might address it | 18:05 |
fungi | we asked to make a wheel for keystone, but we got a directory full of all its deps too, so installing *.whl is the real issue | 18:06 |
fungi | if i could be confident in the mapping of repository directory name to package name then i could get more specific with the pip install command | 18:06 |
clarkb | but they don't have to be 1:1 because python | 18:06 |
fungi | right | 18:07 |
fungi | which is why i'm punting with *.whl and hoping there's just the one | 18:07 |
fungi | https://zuul.opendev.org/t/openstack/build/ae5288292fae4957a7e11094ba5bf3ac for an example | 18:07 |
clarkb | https://zuul.opendev.org/t/openstack/build/f75eafbc3dee4f75a4e588ff443a5396/log/job-output.txt#6541 the failure there is interesting. This is the devstack bionic platform job running against the victoria change. It seems to have failed due to this same issue? | 18:09 |
clarkb | I expect grenade to fail due to using the ussuri devstack and it did | 18:09 |
clarkb | but it isn't clear to me why regular devstack on bionic would fail. https://zuul.opendev.org/t/openstack/build/c1c88e45a4714f87bc6ff36271ca5ab4 the non platform devstack job against the victoria change passed. Oh that job ran against focal | 18:10 |
clarkb | interesting | 18:10 |
clarkb | I wonder if the bionic backport of the git fix isn't respecting the config change | 18:11 |
clarkb | considering victoria and forward are apparently focal I think we can punt on that for now, but look into it as the focal dust settles | 18:11 |
fungi | okay, the new challenge is swift's [keystone] extra. seems `pip install *.whl[keystone]` looks for a literal file named "*.whl[keystone]" | 18:12 |
fungi | file globbing and extras may be incompatible | 18:13 |
clarkb | fungi: looking at the stackoverflow link I shared that is related, some people suggest using foo[bar] @ file:///path/to/*.whl ? | 18:14 |
clarkb | its definitely getting into the interesting specifier behaviors of pip there though | 18:14 |
fungi | also possible the directory i specify is just empty and i'm misreading https://zuul.opendev.org/t/openstack/build/0ef85bbbd3c445f58ef6e63df6c684c5 | 18:15 |
fungi | Created wheel for swift: filename=swift-2.30.0.dev15-py2.py3-none-any.whl | 18:15 |
fungi | Stored in directory: /tmp/pip-ephem-wheel-cache-xfd4f_o4/wheels/e0/26/0f/cc230019e3234482179c817141568d604c0861c9f7adf85dd8 | 18:15 |
fungi | pip_install '/tmp/tmp.JOHjaWAneK/*.whl[keystone]' | 18:15 |
fungi | there's no output indicating it was moved from the wheel cache to the tempdir | 18:16 |
fungi | but it could just not log that | 18:16 |
fungi | could also be the quoting around the pip_install parameter breaking globbing | 18:16 |
fungi | thing is, in the script it looks like this: | 18:17 |
fungi | pip_install $flags "$dist_dir/*.whl$extras" | 18:17 |
fungi | so i'm starting to suspect $dist_dir/ is just empty | 18:18 |
fungi | guess i'll add an ls in there to find out | 18:19 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 18:20 |
clarkb | only one voting job needs to pass at this point to get the whole stable branch set into the gate (there are a number of non voting jobs too) | 18:28 |
gmann | we can backport the openEuler moving to experimental in stable/yoga but after we merge this stack. | 18:37 |
gmann | that is taking time | 18:38 |
clarkb | I've got the master change enqueue to the gate command ready to go once that yoga change enqueues | 18:46 |
clarkb | then I'm going to make lunch while they gate | 18:46 |
gmann | thanks | 18:46 |
clarkb | if anyone has time I think the next thing is looking at why this seems to still break on bionic (visible on the victoria change) | 18:47 |
clarkb | ok the whole stack is in the gate now | 18:52 |
clarkb | in the right order for the master change to actually eb able to merge | 18:52 |
dansmith | clarkb: git in bionic honors the safe thing, so I dunno why it's not honored when pip does it | 19:13 |
fungi | okay, so https://zuul.opendev.org/t/openstack/build/f440b7373c0a42b3beef0902dd94bc7a indicates that the wheel is there (and now the only one), so it's got to be the parameter quoting which is the problem, i just can't see where it's happening | 19:13 |
dansmith | the only thing I noted is that we're writing ~me/.gitconfig and not ~root/.gitconfig, so I'm actually surprised that it's working for us under sudo, and I wonder if that is because $HOME tells us to keep using ~me | 19:14 |
dansmith | so like maybe pip or sudo on bionic is handling that differently? | 19:14 |
sean-k-mooney | are we passing -EH to sudo | 19:15 |
dansmith | yeah just looking at that | 19:16 |
sean-k-mooney | to preserve the environment and home dir | 19:16 |
dansmith | not preserve but anti-preserve right? | 19:16 |
dansmith | -H makes $HOME=~root yeah? | 19:16 |
sean-k-mooney | i thought it was the other way around but i have not checked in a while | 19:17 |
dansmith | nope | 19:17 |
dansmith | root@ubuntu:/opt/stack/keystone# sudo -Hs | 19:17 |
dansmith | root@ubuntu:/opt/stack/keystone# echo $HOME | 19:17 |
dansmith | shows /home/dan without -H | 19:17 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 19:17 |
sean-k-mooney | ok so -H sets the home dir to that of the requested user | 19:17 |
sean-k-mooney | but -E preserves the env | 19:17 |
sean-k-mooney | so -H will use HOME=/root ya | 19:18 |
fungi | and here i always thought sudo -EH was for canadian mode | 19:18 |
dansmith | we're not using -E on victoria devstack at least | 19:18 |
dansmith | also not -E on master, AFAICT | 19:19 |
fungi | keep in mind that ~ expansion may be happening in the calling shell before sudo operates on the result | 19:20 |
dansmith | there's no ~ going on here | 19:20 |
dansmith | I was just using it symbolically | 19:20 |
dansmith | so with ~/.gitconfig marking the current repo as "safe", | 19:21 |
fungi | oh, got it. roughly where in devstack is the call? | 19:21 |
dansmith | I can "sudo git show" but I can not "sudo -H git show" | 19:21 |
dansmith | and we're doing sudo -H pip install, which ends up calling git much later | 19:21 |
fungi | and yes, sudo -H will definitely impact where it looks for the "global" (per user) .gitconfig | 19:22 |
dansmith | so I dunno why it's different in either later devstacks or focal, because it seems like it's doing the right thing in bionic | 19:22 |
dansmith | right, so we probably need to be writing root's git config and not the stack user's | 19:22 |
sean-k-mooney | or both root and stack | 19:22 |
dansmith | I just don't know why it's working in the later ones | 19:22 |
dansmith | sean-k-mooney: yeah, actually both you're right | 19:22 |
sean-k-mooney | depending on if it's devstack or pip that is invoking it | 19:22 |
dansmith | well, | 19:23 |
sean-k-mooney | things like reclone=true will work as stack | 19:23 |
dansmith | not just that but just because we could have other permissions issues | 19:23 |
fungi | how the version of git on bionic looks for its .gitconfig may differ from how the version on focal does it | 19:23 |
dansmith | fungi: it's possible, but seems unlikely to find stackuser's | 19:23 |
sean-k-mooney | apparently git will look at /etc/gitconfig | 19:24 |
dansmith | (since we're not calling with -E) | 19:24 |
sean-k-mooney | could we just put it there | 19:24 |
dansmith | oh, maybe *that* is the difference | 19:24 |
sean-k-mooney | https://git-scm.com/book/en/v2/Customizing-Git-Git-Configuration | 19:24 |
dansmith | maybe --global on focal is setting it there? | 19:24 |
dansmith | so, | 19:25 |
dansmith | on focal "sudo git config --global" writes ~root/.gitconfig | 19:26 |
dansmith | but on my bionic tester, it clearly wrote ~dan/.gitconfig | 19:26 |
dansmith | so maybe we need -H on the actual git config run | 19:26 |
sean-k-mooney | so there are 3 levels, system, global and local. | 19:26 |
dansmith | yeah | 19:26 |
sean-k-mooney | if we use --system instead of global it should use /etc/gitconfig | 19:26 |
sean-k-mooney | and work for everyone unless you override it at "global" or local scope | 19:27 |
dansmith | yeah, sudo -H git config --global makes it behave on bionic the way it does for focal | 19:28 |
dansmith | so fungi is probably right about them finding gitconfig differently, but not in the place I was assuming.. meaning not during pip, but during the git-config call | 19:28 |
dansmith | yep, that fixes it for bionic for me | 19:29 |
dansmith | so clarkb I say let the current stack go in as it is, and then we can either add -H and backport like normal, or move to --system (if that also works) | 19:29 |
clarkb | dansmith: catching up but ya I think the current stack is working so we can finish landing it then cleanup in followups | 19:29 |
clarkb | I guess git learned to check its user rather than $HOME in newer git or something | 19:30 |
dansmith | probably | 19:30 |
sean-k-mooney | that or maybe there is an alias of sudo or something on focal/bionic that's adding -H | 19:30 |
* dansmith tests --system | 19:30 | |
dansmith | okay --system does write it to /etc/gitconfig as expected on bionic.. waiting to see if it's honored by the pip call | 19:31 |
gmann | clarkb: dansmith seems the master one is hitting a centos9 stream failure https://zuul.opendev.org/t/openstack/build/4081df7c08634ee1acbb60edc3f3b7ae | 19:32 |
dansmith | gmann: well, I for one, am shocked | 19:33 |
clarkb | gmann: I think that is the pypi issue | 19:33 |
clarkb | I can reenqueue it | 19:33 |
dansmith | yeah, --system is working on bionic under pip too | 19:33 |
dansmith | so I think that's the way to go, for reasons I just commented on in the master patch | 19:33 |
sean-k-mooney | + | 19:34 |
sean-k-mooney | the only downside to that is if you are sharing a host with others but that is not our use case | 19:34 |
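(A minimal sketch of the --system variant being settled on here; the repo path is illustrative:)

```shell
# Writing the allow-list at system scope (/etc/gitconfig) means it is honoured
# regardless of which user, and which $HOME, git ends up running under.
sudo git config --system --add safe.directory /opt/stack/keystone
git config --system --get-all safe.directory
```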
clarkb | the master change has been re-enqueued | 19:35 |
clarkb | does anyone else want to do the --system change or is that something I can be helpful with by pushing and stacking up again? | 19:35 |
sean-k-mooney | clarkb: if you or dan want to do the system change go for it | 19:37 |
opendevreview | Dan Smith proposed openstack/devstack master: Write safe.directory items to system git config https://review.opendev.org/c/openstack/devstack/+/837759 | 19:37 |
sean-k-mooney | i was just logging off | 19:37 |
dansmith | doneski ^ | 19:37 |
sean-k-mooney | ah | 19:37 |
dansmith | clarkb: also don't believe what you hear.. sean-k-mooney never logs off | 19:38 |
sean-k-mooney | actually i don't, i just lock my screen, but one last question | 19:38 |
sean-k-mooney | is git smart enough not to keep adding the same option | 19:38 |
clarkb | sean-k-mooney: yes it is | 19:38 |
sean-k-mooney | cool | 19:38 |
clarkb | or at least I'm pretty sure of that | 19:38 |
clarkb | gerrit uses the same system and it dedups | 19:39 |
sean-k-mooney | i was just wondering if unstack needed to do something here | 19:39 |
dansmith | the options are duplicated actually | 19:39 |
dansmith | because they're all directory= | 19:39 |
clarkb | oh this one might be special then | 19:39 |
dansmith | I have not yet seen it re-create the same thing though | 19:39 |
dansmith | unstack should clean this anyway I think | 19:39 |
dansmith | I can add that to my system patch | 19:39 |
sean-k-mooney | that might be a little harder to do since you won't have a nice hook point like clone | 19:40 |
sean-k-mooney | i guess you could hook into the service stop maybe, but if you just want it to work for everything out of the box including plugins | 19:41 |
sean-k-mooney | i'm not sure where to put it so that it always gets called | 19:41 |
dansmith | I think sed is more than adequate for unstack ;) | 19:41 |
sean-k-mooney | that's slightly nicer than the rm -f /etc/gitconfig i was thinking of | 19:42 |
dansmith | directory=$DEST.* should be plenty careful | 19:42 |
sean-k-mooney | ya | 19:42 |
sean-k-mooney | ok cool | 19:42 |
sean-k-mooney | in that case im going to get dinner o/ | 19:42 |
clarkb | git config also has a --unset flag | 19:42 |
clarkb | and --unset-all | 19:43 |
fungi | okay, i think it's appending the extras in the command with the file glob which is making the shell not glob, i'll have to expand it with a subshell | 19:43 |
clarkb | oh! | 19:44 |
clarkb | that makes sense | 19:44 |
dansmith | clarkb: ack, but unless you feel strongly, I like to clean up in a way irrespective of the current local.conf because I often change my local.conf, unstack and restack, and it's annoying when we don't clean up things once we're no longer configured | 19:44 |
dansmith | I also don't know how those will work if we wanted to only unset *our* directory things, since they're all the same key I dunno if --unset would kill user's items too | 19:45 |
dansmith | yeah, --unset safe.directory doesn't take a value, and will nuke *all* the safe directories | 19:46 |
clarkb | oh thats a good point | 19:46 |
opendevreview | Dan Smith proposed openstack/devstack master: Write safe.directory items to system git config https://review.opendev.org/c/openstack/devstack/+/837759 | 19:46 |
clarkb | ya I agree we should be careful to only unset the devstack repo values | 19:46 |
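(A rough sketch of the targeted cleanup dansmith is describing for unstack, assuming the entries live in /etc/gitconfig and $DEST is the devstack destination dir:)

```shell
# Remove only the safe.directory entries that point under $DEST, leaving any
# entries the user added themselves untouched.
sudo sed -i "\|^[[:space:]]*directory = ${DEST}|d" /etc/gitconfig
```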
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 19:50 |
fungi | used bash magic involving arrays in order to avoid the subshell | 19:51 |
* fungi puts away his magic wand | 19:51 | |
opendevreview | Merged openstack/devstack stable/victoria: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837745 | 20:01 |
fungi | one down! | 20:03 |
dansmith | the master one is failing a lot | 20:07 |
dansmith | rax problems? | 20:07 |
dansmith | /usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/connectionpool.py:999: InsecureRequestWarning: Unverified HTTPS request is being made to host 'mirror-int.ord.rax.opendev.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings | 20:07 |
dansmith | ah, I guess that's okay, but: | 20:08 |
dansmith | ERROR: No matching distribution found for oslo.limit===1.5.0 (from -c /opt/stack/old/requirements/upper-constraints.txt (line 332)) | 20:08 |
clarkb | dansmith: ya thats pypi problems | 20:09 |
clarkb | unfortunately :/ | 20:09 |
dansmith | ah | 20:09 |
clarkb | basically when pypi's CDN fails to talk to its primary backend for content it falls back to a backup backend that tends to be quite stale | 20:09 |
dansmith | ack | 20:10 |
clarkb | so anytime pypi has problems we proxy cache the stale data. I can try reenqueuing again | 20:10 |
clarkb | the last time this happened they actually caught it on https://status.python.org/ but it looks happy this time (they usually don't notice because hitting the fallback is expected some small percentage of the time I guess) | 20:12 |
clarkb | one frustrating thing about this is we notice due to our use of constraints. Many pypi users likely install old insecure software when it happens without noticing | 20:14 |
dansmith | yeah that definitely sucks | 20:15 |
clarkb | I would prefer that pypi just return an error if they can't serve up to date content | 20:15 |
dansmith | hopefully we're about to get everything but master landed | 20:17 |
opendevreview | Merged openstack/devstack stable/wallaby: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837746 | 20:20 |
opendevreview | Merged openstack/devstack stable/xena: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837747 | 20:20 |
opendevreview | Jeremy Stanley proposed openstack/devstack master: Separate wheel building from installation https://review.opendev.org/c/openstack/devstack/+/837731 | 20:20 |
fungi | two more! | 20:20 |
dansmith | yoga in a sec | 20:21 |
opendevreview | Merged openstack/devstack stable/yoga: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837748 | 20:21 |
fungi | and there it goes | 20:22 |
dansmith | clarkb: the bionic git behavior thing is what caused you to need to make grenade non-voting there right? | 20:26 |
dansmith | I think gmann wanted it to stay that way anyway, so I shan't re-voting it when I backport the system fix, but at least I should see that it fixes it if that's what it was | 20:26 |
clarkb | dansmith: sort of. Even if bionic's behavior was addressed initially we would need to make it non voting because ussuri devstack wouldn't get the fix | 20:26 |
dansmith | oh right okay | 20:27 |
clarkb | the way grenade works is it checks out the old devstack and runs that first, then it checks out devstack for the current branch and reruns it when doing the upgrade | 20:27 |
clarkb | both things would've prevented grenade from working but the ussuri dependency is the main reason for non voting and ya we want to keep it that way | 20:27 |
dansmith | yeah | 20:27 |
*** arxcruz is now known as arxcruz|out | 20:28 | |
fungi | 837731 is passing the devstack job on master, and most other voting jobs (ignoring grenade for the moment because of the state of stable/yoga when those builds began) | 21:13 |
clarkb | fungi: nice | 21:14 |
clarkb | I guess I should review it again now | 21:14 |
fungi | so it looks like a path we can take if there's consensus | 21:14 |
fungi | the array trick didn't force glob expansion to happen as early as i wanted, so i eventually resorted to echo in a subshell until someone with a better idea can make suggestions | 21:15 |
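For readers following along, a minimal sketch of the two approaches fungi contrasts here; the wheelhouse path is made up for illustration and is not taken from the actual change:

```bash
#!/usr/bin/env bash
# Illustrative only, not the devstack change itself.

# Array assignment expands the glob at the moment the assignment runs in the
# current shell, so it avoids spawning a subshell...
wheels=( /opt/stack/wheelhouse/*.whl )
printf '%s\n' "${wheels[@]}"

# ...but if that assignment only happens later (for example inside a function
# that ends up running under sudo), the expansion is deferred along with it.

# Forcing the expansion right now and capturing the result via echo in a
# command substitution (a subshell) is the fallback described above:
wheel_list=$(echo /opt/stack/wheelhouse/*.whl)
echo "$wheel_list"
```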
fungi | aside from grenade, i still have failures from tempest and centos jobs, but haven't dug deeper on those yet to see whether they're related | 21:16 |
clarkb | fungi: +2'd with a couple of thoughts. | 21:17 |
fungi | tempest is running into a rather fun local write permission error, i suspect as a result of dropping editable installs: https://zuul.opendev.org/t/openstack/build/2815fbe9faa34111a15ea5f40aa8ccef | 21:18 |
ianw | there's a bit much to parse in scrollback, but i'm assuming everything is still terrible :) | 21:18 |
fungi | ianw: less terrible. your workaround has merged to all stable branches but not yet to master | 21:19 |
ianw | i notice upstream are pushing a patch to enable '*' as a config option to disable the whole thing | 21:19 |
clarkb | ianw: slightly less terrible than before. We've gone with your proposed fix which has landed on victoria through yoga and is gating on master now | 21:19 |
ianw | excellent, thank you, i'll catch up :) | 21:19 |
clarkb | ianw: we did find that bionic git doesn't work with your fix so dansmith has https://review.opendev.org/c/openstack/devstack/+/837759 to address that which we should land once your fix lands and then backport per usual | 21:19 |
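Roughly, the workarounds under discussion boil down to git configuration along these lines; the repository paths are examples only, the real list is generated by devstack:

```bash
# Per-repository entries written to the system-wide config (/etc/gitconfig),
# which applies no matter which user later runs git in those trees:
sudo git config --system --add safe.directory /opt/stack/devstack
sudo git config --system --add safe.directory /opt/stack/requirements

# The '*' wildcard ianw mentions above switches the ownership check off
# entirely (only understood by git releases that ship that feature):
sudo git config --system --add safe.directory '*'
```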
fungi | i've been exploring splitting package building from installation, since the former needs to call pbr/git but the latter needs root perms | 21:20 |
fungi | however it means no longer supporting editable mode/develop | 21:20 |
fungi | apparently for develop installs, there is no separate package build step really | 21:21 |
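A sketch of the general build/install split being described here, not the exact code in 837731; the repo and wheelhouse paths are illustrative:

```bash
# General idea only, not the exact devstack change.

# Step 1, run as the unprivileged stack user: build a wheel from the git
# checkout. This is the part that needs pbr to call git.
python3 -m pip wheel --no-deps -w /tmp/wheelhouse /opt/stack/nova

# Step 2, run as root: install the already-built artifact, no git needed.
sudo python3 -m pip install /tmp/wheelhouse/*.whl

# An editable/develop install, by contrast, is one combined step with no
# separate artifact to hand between users, which is why the split drops it:
sudo python3 -m pip install -e /opt/stack/nova
```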
ianw | the other thing i thought, not sure if practical, is to use something like pygit2 in pbr? or does it bring in too much? | 21:21 |
clarkb | ianw: pbr intentionally avoids deps | 21:21 |
clarkb | I think that may be a lot if we can't vendor a small version | 21:21 |
clarkb | but also I expect those libs to all behave this way too soon enough | 21:22 |
clarkb | (since it is a security issue) | 21:22 |
fungi | but also, the only reason to use something other than git would be so that we don't have to worry about the security fix in git, which basically means opting for a library which still has that security flaw | 21:22 |
fungi | yeah, what clarkb said | 21:22 |
ianw | yeah, but this is a big hammer that i think ignores many subtleties about what is actually unsafe, and things like containers, fuse mounting (so it all looks like you own it anyway), symlinks, etc. | 21:24 |
clarkb | I agree but it is also what the main behavior expectation setter has decided to do | 21:24 |
fungi | unless git backs off from this new behavior, all other implementations will probably follow suit in an attempt to continue mimicking git's behaviors as closely as possible | 21:25 |
fungi | so this is a fun discovery... apparently lib/horizon has (i'm guessing unintentionally) grown a dependency on editable mode installs because it wants to create secret keys at /usr/local/lib/python3.8/dist-packages/openstack_dashboard/local/_usr_local_lib_python3.8_dist-packages_openstack_dashboard_local_.secret_key_store | 21:31 |
fungi | that seems really... unclean | 21:31 |
fungi | that's why tempest is failing for 837731 | 21:33 |
fungi | er, well, the tempest jobs | 21:33 |
clarkb | wut | 21:33 |
clarkb | how would that even work in a proper distro install? | 21:33 |
fungi | https://zuul.opendev.org/t/openstack/build/2815fbe9faa34111a15ea5f40aa8ccef | 21:33 |
fungi | oh it wouldn't work in a distro install | 21:33 |
clarkb | right | 21:34 |
clarkb | so either distros patch it out or this is recent? | 21:34 |
fungi | i think it's lib/horizon's doing, not horizon proper | 21:34 |
clarkb | ah | 21:34 |
fungi | interestingly, the tempest-ipv6-only job passes, it's just tempest-full-py3 running into that | 21:35 |
fungi | and the devstack-platform-centos-9-stream failure seems to be due to not having wheel preinstalled. i guess it comes installed as a dependency of something in the ubuntu jobs | 21:37 |
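If that diagnosis holds, the fix is presumably something as small as installing wheel up front on that platform (illustrative only, not a specific patch):

```bash
# Illustrative: on CentOS 9 Stream, wheel apparently isn't pulled in by
# anything else, so install it before any wheel builds happen.
sudo python3 -m pip install wheel
```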
clarkb | we just need centos 9 to work now | 21:48 |
opendevreview | Merged openstack/devstack master: Mark our source trees as safe for git to use as other users https://review.opendev.org/c/openstack/devstack/+/837659 | 21:55 |
clarkb | woo | 21:56 |
gmann | \o/ | 21:56 |
clarkb | I've rechecked https://review.opendev.org/c/openstack/devstack/+/837759 as it should pass now | 21:56 |
clarkb | and if we land that we should probably backport it too, but we don't need to do the whole depends on dance since the stable branches currently pass testing | 21:57 |
gmann | yes | 21:57 |
gmann | clarkb: can you please update it on ML also that gate is unblocked now with this wrkaround and ok to recheck | 21:59 |
clarkb | yes I can send an email | 21:59 |
gmann | thanks | 21:59 |
ianw | thanks for getting that one fixed! | 22:00 |
clarkb | ok email sent | 22:07 |
fungi | i also updated the ml thread recapping the build/install split experiment and suggesting a revisit of the venv solution | 22:10 |
clarkb | ah yup I touched on that too as I missed your email | 22:10 |
fungi | it probably hit the ml only moments before yours anyway | 22:11 |
johnsom | It looks like we may need to change diskimage-builder too: https://zuul.opendev.org/t/openstack/build/b86683e35eed47b6946e02d1b156482e/log/controller/logs/dib-build/amphora-x64-haproxy.qcow2_log.txt#1928 | 22:52 |
johnsom | Hmm, maybe that is actually an issue in the Octavia element, it's pulling IDs with rev-parse | 22:54 |
johnsom | Nope, it's DIB source-repositories element | 22:56 |
johnsom | ianw This seems to be the odd git call that doesn't use sudo here. Should we just sudo this call as well? https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/source-repositories/extra-data.d/98-source-repositories#L166 | 23:01 |
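The shape of the change being proposed is presumably something like the following; CACHE_PATH is a stand-in here, not the variable name used by the element:

```bash
# Hypothetical illustration of sudo'ing the remaining git call(s): when the
# cached repo is owned by a different user than the one running the element,
# run git the same way the neighbouring, already-sudo'd calls do.
sudo git --git-dir="${CACHE_PATH}/.git" rev-parse HEAD
```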
johnsom | Oh, there is another one too. | 23:02 |
ianw | this change is really starting to annoy me :/ | 23:05 |
johnsom | lol, yeah, I'm not a fan either | 23:05 |
johnsom | Looking around more, it might be best to just do the same thing here. | 23:05 |
ianw | this must be running as root, right? so the cache directory is owned by someone else? | 23:07 |
opendevreview | Ghanshyam proposed openstack/tempest master: Add releasenote to tag the end of support for Ussuri https://review.opendev.org/c/openstack/tempest/+/837775 | 23:08 |
johnsom | Well, I see a mix of sudo git and just git calls in there | 23:08 |
johnsom | I can push up an attempt, but I have the odd feeling this isn't going to be the only place | 23:10 |
ianw | it looks like the sudo calls have always been there | 23:13 |
johnsom | Yeah, off the top of my head I have no idea why this is mixed like this | 23:13 |
ianw | i think it starts with https://review.opendev.org/c/openstack/diskimage-builder/+/30275 and just keeps growing | 23:16 |
johnsom | Well, I posted a naive patch | 23:19 |
opendevreview | Ghanshyam proposed openstack/tempest master: Use ussuri stable constraint in tox to release 31.0.0 https://review.opendev.org/c/openstack/tempest/+/837777 | 23:22 |
ianw | /opt/stack/octavia-lib seems odd ... that's devstack's checkout? | 23:23 |
johnsom | Ok, this should test that patch: https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/837778 | 23:24 |
johnsom | Umm, I really have no idea where that is coming from, I just copied the line from the failed job log. I can dig through and see why it's using that path. | 23:25 |
opendevreview | Ghanshyam proposed openstack/tempest master: Switch back the tox constraint to master https://review.opendev.org/c/openstack/tempest/+/837779 | 23:25 |
johnsom | The end result in the image is all the python stuff goes in a venv | 23:26 |
johnsom | Oh, where did you see /opt/stack/octavia-lib? I only see /opt/octavia-lib | 23:29 |
johnsom | The /opt/octavia-lib makes sense to me, we use that to make sure the right version of octavia-lib ends up in the image venv. | 23:31 |
ianw | Caching octavia-lib from /opt/stack/octavia-lib in /opt/stack/.cache/image-create/source-repositories/octavia_lib_baadb0b1df3ae669481b30f8549af98e7d5fab0b | 23:34 |
ianw | i guess it comes from https://opendev.org/openstack/octavia/src/branch/master/elements/octavia-lib/source-repository-octavia-lib#L2 | 23:35 |
ianw | this is ~ equal to the case of containers | 23:36 |
ianw | hrm, maybe not there | 23:37 |
ianw | https://opendev.org/openstack/octavia/src/branch/master/devstack/plugin.sh#L41 would be what sets it | 23:38 |
ianw | i don't know what to do here. so that is running outside the chroot, and copying things into TMP_MOUNT_PATH -- the mounted dib image | 23:44 |
opendevreview | Ghanshyam proposed openstack/devstack stable/yoga: Move openEuler job to experimental pipeline https://review.opendev.org/c/openstack/devstack/+/837791 | 23:53 |