opendevreview | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/864456 | 02:18 |
---|---|---|
opendevreview | Merged openstack/project-config master: Normalize projects.yaml https://review.opendev.org/c/openstack/project-config/+/864456 | 03:07 |
*** ysandeep|out is now known as ysandeep | 05:57 | |
*** soniya29|rover is now known as soniya29|rover|afk | 05:59 | |
frickler | ssbarnea: stephenfin: you may want to have a look at https://review.opendev.org/c/opendev/git-review/+/864098 and my CI fix below it | 07:27 |
*** soniya29|rover|afk is now known as soniya29|rover | 07:46 | |
*** yadnesh|away is now known as yadnesh | 07:48 | |
*** slaweq_ is now known as slaweq | 07:57 | |
*** jpena|off is now known as jpena | 08:21 | |
*** soniya29|rover is now known as soniya29|rover|lunch | 08:25 | |
*** ysandeep is now known as ysandeep|lunch | 08:45 | |
*** soniya29|rover|lunch is now known as soniya29|rover | 09:22 | |
*** akahat|ruck is now known as akahat|ruck|afk | 09:53 | |
*** ysandeep|lunch is now known as ysandeep | 10:20 | |
*** pojadhav is now known as pojadhav|dr_appt | 11:09 | |
*** akahat|ruck|afk is now known as akahat|ruck | 11:32 | |
noonedeadpunk | hey there! Any idea where this redirect may come from? https://paste.openstack.org/show/bCqeAofiSsTUOtrN5l4r/ | 11:48 |
noonedeadpunk | Like in gerrit these are 2 different projects: https://review.opendev.org/admin/repos/q/filter:zookeeper | 11:49 |
noonedeadpunk | But gitea does redirect for $reason.... | 11:50 |
noonedeadpunk | ah, here we go.... https://opendev.org/opendev/project-config/src/branch/master/renames/20190419.yaml#L1317 | 12:09 |
fungi | yes, the old project name is now in use by a new project. we likely have to unwind that somehow | 12:28 |
fungi | in this particular case, the original project wasn't used much so it's probably fine to do so. more generally, i wonder how and whether we need to catch that sooner | 12:30 |
fungi | i think gitea stores its redirects in database records, so cleaning it up across the entire farm of gitea servers is probably going to be a bit of work | 12:31 |
fungi | since they don't share a database | 12:31 |
noonedeadpunk | well, to be fair I asked if I should rename repo somehow :) | 12:58 |
noonedeadpunk | fungi: I wonder if I should push patch to drop that rename? I guess not? | 12:59 |
fungi | noonedeadpunk: we'll need to talk through the options when there are more folks around (maybe add it to the meeting agenda for today if we can't work out a good solution sooner) | 13:01 |
noonedeadpunk | I'm not sure if dropping/renaming repo will be much easier? | 13:04 |
fungi | no, i expect cleaning up the old redirects will be the simplest path forward, but want to get some consensus on that from other sysadmins and to have a better understanding of the steps involved in it | 13:21 |
*** pojadhav|dr_appt is now known as pojadhav | 13:25 | |
*** dasm|off is now known as dasm | 13:58 | |
opendevreview | Merged opendev/git-review master: Fix nodesets for tox jobs https://review.opendev.org/c/opendev/git-review/+/864106 | 14:02 |
noonedeadpunk | yeah, that's totally fair but something I can't actually help with. But I'm really open to solutions if there's an easy way | 14:06 |
*** ysandeep is now known as ysandeep|dinner | 14:41 | |
*** pojadhav is now known as pojadhav|dinner | 14:42 | |
*** soniya29|rover is now known as soniya29|rover|out | 14:47 | |
*** yadnesh is now known as yadnesh|away | 14:58 | |
*** ysandeep|dinner is now known as ysandeep | 15:00 | |
*** pojadhav|dinner is now known as pojadhav | 15:31 | |
*** akahat|ruck is now known as akahat|ruck|dinner | 15:31 | |
opendevreview | Lajos Katona proposed openstack/project-config master: Add toggleWipState to networking projects https://review.opendev.org/c/openstack/project-config/+/864563 | 15:33 |
clarkb | fungi: noonedeadpunk: the tldr is that openstack/ansible-role-zookeeper redirects to windmill/ansible-role-zookeeper in gitea due to a previous rename of openstack/ansible-role-zookeeper to windmill/ansible-role-zookeeper? I'm good with undoing the redirect since windmill is largely unmaintained at this point and windmill users can directly point to the main name if necessary | 15:45 |
fungi | clarkb: precisely. is that going to involve cleaning up our rename metadata in opendev/project-config and then deleting redirect records from the mariadb on each gitea server? | 15:46 |
fungi | or is there more to it? | 15:46 |
fungi | or is there an api to use for manipulating those redirects instead of poking directly in the db? | 15:46 |
clarkb | those redirects are inserted into the gitea redirect database as a side effect of renaming the projects in gitea. I am unaware of any direct management for that so database updates are likely necessary | 15:47 |
clarkb | I don't think we need to clean up our rename metadata, that is just record keeping. We might want to add a note to it to prevent readding that redirect in the future though | 15:47 |
clarkb | basically keep the existing metadata that says we did a rename but amend it to indicate url redirects are undesirable (with a yaml comment probably) | 15:47 |
fungi | mainly concerned if we use it in the future to rebuild the redirects | 15:47 |
fungi | yeah, that should work | 15:48 |
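A minimal sketch of what the amended rename record discussed above could look like. The key names ("old"/"new") are assumptions rather than copied from the real renames/20190419.yaml, so check the actual file before relying on this shape:

```yaml
# Hypothetical sketch only: the "old"/"new" key names are assumed, not
# taken from the real renames file in opendev/project-config.
- old: openstack/ansible-role-zookeeper
  new: windmill/ansible-role-zookeeper
  # NOTE: do not recreate the gitea URL redirect for this rename; the
  # old name has since been reused by a new, unrelated project.
```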
fungi | for the db, at least it's easy to test on one server, then put back if it causes a problem | 15:48 |
fungi | could even temporarily remove it from the haproxy pools if we're really worried | 15:48 |
clarkb | if we didn't want to edit the db directly we might be able to rename windmill/ansible-role-zookeeper back to openstack/ansible-role-zookeeper then recreate windmill/ansible-role-zookeeper and tell gerrit to replicate both projects to resync the git content | 15:48 |
fungi | though that's likely overkill | 15:48 |
clarkb | fungi: my suggestion would be to hold a gitea system-config-run test node and test the redirect removal there | 15:49 |
clarkb | fungi: we do project renames in those jobs so everything but the redirect removal will be done for you | 15:49 |
fungi | oh, yep that's even better | 15:50 |
clarkb | can even test the rename back to original name to see if it collapses the db records (I suspect not, I suspect it just adds another redirect entry to the db then the software collapses things) | 15:51 |
clarkb | ensuring we don't break any schema expectations with unenforced foreign keys etc (seems like that was popular for performance reasons once upon a time, but I have no idea if that is an issue here) is probably the main thing if editing the db directly | 15:51 |
fungi | longer term, we may want to check the renames data when adding new repos... that could probably be done pretty easily with a zuul job | 15:52 |
clarkb | fungi: also, I think logging in as the admin user and double checking there isn't a ui toggle for this would be good (a lot of things in the ui don't have api mechanisms so lack of api doesn't mean no button) | 15:52 |
clarkb | fungi: ++++ | 15:52 |
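A rough illustration of the check fungi suggests above (failing a job when a new project name shadows the "old" side of a past rename). The renames file schema and the way the new project names are obtained are assumptions, not the real project-config layout:

```python
#!/usr/bin/env python3
# Hedged sketch: the renames file schema (a list of {"old": ..., "new": ...}
# mappings) and the argv-based input are assumptions for illustration only.
import glob
import sys

import yaml


def load_old_names(renames_glob="renames/*.yaml"):
    """Collect every project name that has been renamed away from."""
    old = set()
    for path in glob.glob(renames_glob):
        with open(path) as f:
            for entry in yaml.safe_load(f) or []:
                old.add(entry["old"])
    return old


def main():
    old_names = load_old_names()
    # In a real zuul job the new project names would come from the change
    # under test (e.g. a diff of gerrit/projects.yaml); argv is used here
    # purely for illustration.
    new_projects = sys.argv[1:]
    collisions = [p for p in new_projects if p in old_names]
    if collisions:
        print("New project names shadow previously renamed projects:", collisions)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```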
clarkb | I'm double checking the api swagger doc now | 15:53 |
fungi | clarkb: in unrelated news, upstream mailman3 containers got updated for the 0.4 tag we were tracking, which broke the setup automation. fixing that then broke the forked dockerfile builds. i've updated those as well, but seems like there may be a behavior change breaking our tests now, which i still need to look into | 15:55 |
clarkb | interesting. They had indicated you should be able to upgrade between 0.4 releases. Might be a good thing we hit this before deployment so we can learn about their update processes :) | 15:56 |
fungi | having been through the exercise, i guess my main concern with the fork is all the == pinning for pypi reqs, which leads to dependency hell any time we want to update the images to new mailman releases, and also means we won't be automatically updating mailman when there's a security issue | 15:56 |
fungi | yeah, they decided that 0.4.4 should switch from python 3.8 to 3.9 | 15:57 |
fungi | probably driven by needing to update dependencies or new python features used in newer mailman | 15:57 |
fungi | anyway, ironing the rest of that out is my main priority today if i get user requests under control | 15:59 |
clarkb | re the == dependencies, those come from upstream because aiui they only work with very specific deps | 16:00 |
fungi | right | 16:00 |
fungi | just noting that we have to either copy theirs exactly or figure it out ourselves if there's an urgent mm3 update | 16:01 |
fungi | for example, there's a newer mailman-core available on pypi than what's in the containers, but upgrading that needs a newer django too | 16:02 |
*** ysandeep is now known as ysandeep|out | 16:02 | |
clarkb | right. I linked to a mm3 mailing list thread a little while back that basically said this is a big issue for them and they need to do better | 16:03 |
fungi | oh, right, i remember that now | 16:04 |
clarkb | it definitely doesn't help with confidence levels, but I'm not sure there is much we can do as end users unless we volunteer to fix that for them. Maybe they need a lockfile/constraints file | 16:05 |
fungi | sure, it's the process of generating that list that's challenging of course | 16:06 |
*** marios is now known as marios|out | 16:28 | |
clarkb | fungi: https://review.opendev.org/c/opendev/system-config/+/864060 and https://review.opendev.org/c/opendev/system-config/+/864234 would be good to land. First fixes etherpad log rotation (or the lack of it) and second updates our gerrit image building so that it works again | 16:30 |
clarkb | there is a followup to the second change that will update our gerrit images to the latest versions from last week too | 16:31 |
clarkb | nothing super urgent in those updates but it does improve performance for the copy approvals step we need to take before upgrading to 3.6 which is great | 16:32 |
fungi | awesome, thanks. will take a look | 16:32 |
fungi | okay, so the system-config-run-lists3 failure with our updated mm3 container seems to be related to database configuration/population, but i'm not finding a lot to go on in the collected logs: https://zuul.opendev.org/t/openstack/build/08603d23d29a4c5f90d9c0c995e9ec55 | 16:47 |
clarkb | fungi: but it does work with their images? | 16:49 |
clarkb | might need to exec into the two images and compare pip freeze output or do something similar to that | 16:50 |
fungi | yep | 16:50 |
fungi | also possible that was a random failure, i'm checking the child change to see if it had the same issue | 16:50 |
clarkb | oh wait | 16:50 |
clarkb | that's one of my tasks to ensure we don't operate on the deployment before the database has content we expect. Is it possible the content has changed? | 16:50 |
fungi | entirely possible | 16:51 |
clarkb | would be odd for the parent to work in that case, but I think first step is to do what that task does by hand on a held node and then take it from there | 16:51 |
fungi | i think i have a held node from the dnm change, and yeah same error on it too | 16:51 |
clarkb | maybe django changed the table name or something | 16:52 |
fungi | though doesn't explain why the test worked with the upstream image | 16:52 |
clarkb | https://zuul.opendev.org/t/openstack/build/08603d23d29a4c5f90d9c0c995e9ec55/log/lists99.opendev.org/docker/mailman-compose_database_1.txt indicates it may be an auth issue? | 16:52 |
clarkb | but the db itself is up | 16:53 |
clarkb | ya I think once we understand the failure then we can dig into the difference between the images | 16:53 |
fungi | oh, that's where the db log is. i even looked in that subtree but my eyes just went right past it | 16:53 |
clarkb | I've got to find breakfast and prep for the infra meeting and so on, but let me know if I can help more with that. Definitely start by manually trying to run that task to see why it is failing (auth failure?) and once we know how to fix it we can compare images with that knowledge | 16:54 |
fungi | Access denied for user 'root'@'127.0.0.1' (using password: NO) | 16:54 |
fungi | yeah, maybe we're missing an update for that | 16:55 |
clarkb | I know getting the quoting right for those tasks was a chore | 16:55 |
clarkb | maybe something changed to break the quoting? | 16:55 |
fungi | i'm going to sxs compare the db log from the working build | 16:56 |
fungi | but also 173.231.255.73 is the held node, i can test things on there | 16:57 |
fungi | on the working build, the log looks very similar except after a couple of access denied entries for 'root'@'127.0.0.1' it appears to switch to using 'mailman'@'127.0.0.1' | 17:02 |
clarkb | the command does use -u mailman | 17:02 |
fungi | oh, actually the failing log has indications of those connections too | 17:03 |
fungi | so this may be a red herring | 17:03 |
clarkb | so maybe the root access denieds are a red herring and instead "Aborted connection 629 to db: 'mailmandb' user: 'mailman' host: '127.0.0.1' (Got an error reading communication packets)" is what we should look at | 17:03 |
fungi | we have those in the successful one too though | 17:03 |
fungi | https://zuul.opendev.org/t/openstack/build/2f354b06d85849b08c0417a5d363b721/log/lists99.opendev.org/docker/mailman-compose_database_1.txt | 17:03 |
clarkb | ya so it could just be a problem with parsing the output | 17:05 |
clarkb | until: django_db_exists.stdout_lines | length > 1 and django_db_exists.stdout_lines[1] == "auth_user" | 17:06 |
fungi | there is no auth_user table on the held node | 17:09 |
clarkb | ya so django must've changed. I don't understand why that would work on the parent change though | 17:11 |
clarkb | the idea behind all of that is there isn't a good way to check if mailman3 is up and running. So we do a number of checks for things that get us there. A db check, a rest api check, and I think another one | 17:12 |
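For reference, a hedged sketch of the kind of readiness check being discussed. The quoted `until:` condition is from the log above; the task name, command, container name, database name, and credential variable are assumptions, not the real system-config task:

```yaml
# Sketch only: everything except the until: condition is assumed.
- name: Wait for the django database to be populated
  command: >-
    docker exec mailman-compose_database_1
    mysql -h 127.0.0.1 -u mailman -p{{ mailman_db_password }} mailmandb
    -e "SHOW TABLES LIKE 'auth_user';"
  register: django_db_exists
  # stdout_lines[0] is the column header, stdout_lines[1] the matching table
  until: django_db_exists.stdout_lines | length > 1 and django_db_exists.stdout_lines[1] == "auth_user"
  retries: 30
  delay: 10
```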
fungi | unless me copying from master of the dockerfiles isn't reflecting the 0.4.4 tag | 17:12 |
fungi | i'll double-check the diff | 17:12 |
clarkb | also maybe django isn't starting properly | 17:12 |
clarkb | we may need extraconfig | 17:12 |
clarkb | fungi: the failing job is on python3.10 not python3.9 | 17:14 |
fungi | oh, huh | 17:15 |
clarkb | oh but they both are | 17:15 |
fungi | the only commit after 0.4.4 was "Add g++ (gcc-cpp) as build dependency" which only updated core/Dockerfile.dev and some postgresql configuration | 17:16 |
clarkb | fungi: https://zuul.opendev.org/t/openstack/build/08603d23d29a4c5f90d9c0c995e9ec55/log/lists99.opendev.org/docker/mailman-web.txt#1624-1668 that doesn't show up in the successful job | 17:16 |
fungi | but maybe i should more thoroughly diff the upstream dockerfile against ours | 17:16 |
clarkb | so ya I'm back to thinking something about the django install has broken the db | 17:17 |
clarkb | and that would be in the image | 17:17 |
fungi | right | 17:18 |
fungi | oh, and yeah the mailman-web container did get updated to python 3.10, mailman-core only to 3.9 | 17:18 |
clarkb | that's weird that they would split them up like that | 17:18 |
fungi | apparently they like to use different python versions in different containers, because that's a sane thing to do | 17:18 |
fungi | yeah, mailman-core uses alpine:3.15 while mailman-web has alpine:3.16.2 | 17:19 |
fungi | and mailman-web did get a django cap increase from <3.2 to <4.1 | 17:27 |
fungi | along with updating django-mailman3 from 1.3.7 to 1.3.8 | 17:27 |
fungi | i'll look through changelogs in a bit, but need to take a moment to go pick up some lunch | 17:28 |
clarkb | it might be better to diff the image content directly | 17:29 |
clarkb | maybe something without a pin got updated | 17:29 |
*** jpena is now known as jpena|off | 17:42 | |
clarkb | fungi: https://github.com/maxking/docker-mailman/commit/208017f9a00fdaf1b2d0671e97fb879e804fd911 I think you need to incorporate these updates | 18:15 |
clarkb | here https://review.opendev.org/c/opendev/system-config/+/860157/14/docker/mailman/web/mailman-web/urls.py but should cross check all of those files | 18:16 |
fungi | oh, thanks! | 18:16 |
fungi | but yeah, back now and lunch has been consumed, so i'll start plugging at it | 18:17 |
clarkb | does it make sense to also convert to using a requirements file like upstream just to keep the delta down? Our install should be equivalent though | 18:17 |
clarkb | fungi: https://review.opendev.org/c/opendev/system-config/+/860157/14/docker/mailman/web/mailman-web/uwsgi.ini also needs updating | 18:22 |
fungi | oh fun | 18:26 |
fungi | so path changes to web/mailman-web/urls.py and refresh the uwsgi.ini file we're using | 18:29 |
clarkb | ya I think so | 18:29 |
clarkb | separately I was unable to exec into our forked image using --entrypoint bash but was able to do so with the 0.4 upstream image tag | 18:30 |
clarkb | we should test that after we fix those issues | 18:30 |
fungi | is our docker/mailman/web/mailman-web/urls.py just a direct copy from their repo, or does it have local edits? | 18:30 |
clarkb | oh wait that might be my own derp | 18:31 |
clarkb | fungi: everything in the image fork I pushed up was a direct copy except for the addition of lynx to the package install list | 18:31 |
*** akahat|ruck|dinner is now known as akahat|ruck | 18:31 | |
clarkb | fungi: all of the overrides we are doing are happening via ansible config management | 18:31 |
fungi | well, and our dockerfiles don't use requirements.txt files, the packages are merged instead | 18:32 |
clarkb | fungi: that is because the split to requirements.txt is newer than my fork | 18:32 |
fungi | or did that change recently upstream? | 18:32 |
fungi | aha, so maybe i should do that too | 18:32 |
clarkb | yes the exec thing was my own derp. Don't try to provide a command as well as an entrypoint | 18:34 |
clarkb | fungi: also there is an even newer mailman release | 18:36 |
clarkb | https://docs.mailman3.org/projects/mailman/en/latest/src/mailman/docs/NEWS.html#news-3-3-7 not sure if we want to try and update to those or just stick with being a very lightweight fork | 18:36 |
clarkb | looks like it should be pretty non eventful for us and adds a bugfix though | 18:36 |
fungi | looks like the only uwsgi.ini edit is adding buffer-size = 8190 | 18:37 |
fungi | clarkb: yeah, i tried it, but 3.3.7 needs newer django | 18:37 |
fungi | could try uncapping django but then not sure what else that'll shake loose | 18:38 |
clarkb | ah | 18:39 |
clarkb | interesting that the notes don't say anything about that :/ | 18:39 |
fungi | yeah, the dockerfile limits Django<4.1 but mailman 3.3.7 wants Django>=4.1 (or something like that) | 18:40 |
fungi | so presumably lockstep django updating | 18:41 |
clarkb | fungi: I think this is separate | 18:42 |
clarkb | it is mailman-core that got updated not mailman-web | 18:42 |
clarkb | https://review.opendev.org/c/opendev/system-config/+/860157/14/docker/mailman/core/Dockerfile#20 is the line to update | 18:43 |
clarkb | (entirely possible there are other deps issues though, its just that django shouldn't be one of them) | 18:43 |
fungi | well, now it's in the requirements.txt with the update i'm about to push | 18:44 |
clarkb | right but its part of the non django portion of mm3 | 18:44 |
fungi | anyway, we can try updating that as a different revision if we want | 18:44 |
fungi | oh! wasn't django | 18:45 |
fungi | the dockerfile for mailman-core has sqlalchemy<1.4.0 and mailman 3.3.7 needs sqlalchemy>=1.4.0 | 18:45 |
fungi | that's what it was | 18:45 |
fungi | bumping sqla might not be too onerous | 18:45 |
fungi | or it might need newer alpine | 18:45 |
clarkb | ya that should be safe to change based on my read of the changelog | 18:45 |
clarkb | they had that pin in place because of python3.6 support aiui | 18:46 |
fungi | oh, well i'll give it a shot here in that case and we can see what happens | 18:46 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Fork the maxking/docker-mailman images https://review.opendev.org/c/opendev/system-config/+/860157 | 18:49 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: DNM force mm3 failure to hold the node https://review.opendev.org/c/opendev/system-config/+/855292 | 18:49 |
fungi | i've blown away my last held node and set a new autohold | 18:49 |
fungi | now to see about holding a gitea test node for fiddling with redirect removal | 18:51 |
clarkb | unrelated: our jitsi meet install should upgrade overnight tonight due to a new release and our periodic jobs | 18:52 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: DNM: Fail gitea testing to hold a node https://review.opendev.org/c/opendev/system-config/+/864581 | 18:53 |
clarkb | fungi: re ^ before you do the db mangling I would definitely log in as the admin user and see if this is toggleable via the web ui | 18:54 |
fungi | sure | 18:54 |
fungi | do we have global admin set up in prod? | 18:54 |
opendevreview | Vishal Manchanda proposed zuul/zuul-jobs master: [DNM] Enable and started dbus.service https://review.opendev.org/c/zuul/zuul-jobs/+/864582 | 18:54 |
clarkb | fungi: we have an admin user yup. But you have to modify each node separately | 18:55 |
fungi | that much i figured. anyway, i won't have time to look at it until after the meeting | 18:56 |
clarkb | you can also login as admin on the test node. Everything should align behaviorwise since we deploy a specific version | 18:56 |
fungi | yeah, that's what i was going to look at first | 18:57 |
fungi | i guess we have a test credential for admin in the job setup | 18:57 |
clarkb | yup | 18:57 |
opendevreview | Merged opendev/system-config master: etherpad: redirect container logs https://review.opendev.org/c/opendev/system-config/+/864060 | 19:35 |
fungi | the latest mm3 container fork change seems to test clean | 19:37 |
clarkb | woot | 19:37 |
fungi | so i should be able to proceed with migration testing on it this evening | 19:37 |
fungi | for the fips token situation, i can understand the concerns on both sides, and am willing to accept that encoding the ua token in a zuul secret really amounts to obfuscation which may not be a great choice for transparency purposes | 20:04 |
frickler | maybe it would also help to see how the token would be used. is there some documentation that you could link to? or show how a job using it would look like? | 20:08 |
clarkb | I expect the jobs will use the `ua` command to enroll the node and enable the features / do package updates then reboot into the updated kernel settings | 20:08 |
frickler | so that's essentially the same we did with the old xenial nodes? (or was it even older ones?) I was assuming fips to be a different feature | 20:10 |
opendevreview | Merged opendev/system-config master: gerrit-build: jammy updates: update to nodejs 18.x, allow submodule clones https://review.opendev.org/c/opendev/system-config/+/864234 | 20:13 |
clarkb | frickler: yes, the ua tool is used to manage all of those separate things. They are distinct items, just sharing the same management tool | 20:14 |
fungi | frickler: the extent of the documentation i got for it is included in the commit message | 20:17 |
fungi | pro attach {{ token }} | 20:17 |
fungi | but my understanding is that it will then allow adding sources.list entries for the fips enablement package repository so that packages can be installed from there | 20:18 |
fungi | i'm mainly hoping the people with a vested interest in doing fips testing will be the ones to work out the details | 20:19 |
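In rough terms, the job-side usage being described looks like the following. The `pro attach` invocation comes from the commit message quoted above; the specific `pro enable` service name and the token variable name are assumptions:

```bash
# Hedged sketch; the "fips-updates" service name and the UA_TOKEN
# variable are assumptions, not taken from the actual job definition.
pro attach "$UA_TOKEN"
pro enable fips-updates
apt-get update && apt-get -y dist-upgrade
# reboot so the node comes back up on the FIPS-enabled kernel
reboot
```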
frickler | for me the main question still is where we as opendev team (and this is where IMO this leaves the pure openstack realm) draw the line on running non-free software on test nodes | 20:25 |
frickler | we have this, we might be asked to run RHEL or possibly even windows instances by some project. and this is for me a fundamentally different thing from accessing external APIs only that may require a secret to access them | 20:26 |
fungi | it's not entirely clear that the software in question is closed source/proprietary, but also there's nothing preventing a user from installing software in a test from somewhere on the internet regardless of what license it's distributed under | 20:27 |
frickler | well the download of the fips certified software obviously isn't free. otherwise we could run these jobs without needing that token | 20:28 |
fungi | we've been asked to run rhel in the past. we didn't say no because it requires licenses, red hat said no because they didn't feel like working out the legal agreements it would entail | 20:28 |
fungi | "free" (libre) software has nothing to do with whether the distributor charges money for it | 20:28 |
fungi | if we're not allowed to redistribute the software we've received then that's another matter of course | 20:29 |
frickler | there are different flavors of free of course | 20:30 |
fungi | of course | 20:30 |
frickler | well we are not allowed to redistribute the token we received, or are we? | 20:30 |
fungi | even the gpl doesn't require someone to distribute software for no charge, only that they also supply source code to anyone to whom they distribute binary builds | 20:30 |
fungi | is a token "software"? | 20:30 |
fungi | this is where a lot of these existential arguments about software freedom start to break down | 20:31 |
fungi | is a software license also software? many open source licenses disallow alteration of the license files. can you say it's acceptable to waive software freedoms for license texts, but expect credentials to be open source? | 20:32 |
fungi | is it unacceptable to use a private credential in order to obtain privileged access to someone's build of an open source project, but okay to use a private credential to authenticate a test to someone's proprietary service on the internet? | 20:34 |
fungi | is the twitter login our statusbot uses free software? | 20:34 |
frickler | lots of good questions indeed. I'll sleep about it. fwiw feedback I saw from the TC so far was similarly mixed, so not helpful in making up my mind, either | 20:35 |
JayF | fungi: I'd frame the question this way: Can we redistribute OS images that contain Ubuntu-FIPS code? | 20:35 |
fungi | JayF: we weren't planning to do so, but it's a valid question to ask canonical | 20:35 |
JayF | like, is the software itself under some kind of license agreement or will opendev/infra be free to distribute it as long as the key is not disclosed | 20:35 |
JayF | I feel like if you get a key that gives you access to a repo full of FOSS | 20:35 |
JayF | there's no way they give you the key without you agreeing to some sort of restriction, right? | 20:36 |
fungi | we weren't planning to distribute the software anyway, but i do agree it's a good data point | 20:36 |
JayF | It's just helpful to have a concrete perspective to look from when talking about "software freedom" | 20:36 |
JayF | and I think about it in context of this: there's a failing FIPs run in Ironic. I ask infra to hold a failing node. | 20:37 |
JayF | Do I have to worry about copying secrets off of that public CI node I've been given access to? | 20:37 |
JayF | Do I agree transitively to not disclose or distribute information about the FIPs configuration that I now have access to? | 20:37 |
JayF | license key itself is an authentication credential; I get that part being hidden; but if the work product is licensed as such that we can't redistribute it, then you're going to handcuff any developer who has to go deep troubleshooting on this | 20:38 |
fungi | canonical representatives said not to worry about that, because the main value in that token and those builds of packages is whether a user can engage them for support. if someone wants free access to fips-enabled builds of software they can build the same themselves from source or switch to a different distribution which provides them already | 20:38 |
JayF | Is that going to be in writing in some public place? | 20:39 |
fungi | it's summarized in a code comment | 20:39 |
JayF | part of being open is those promises being made in the open so that everyone can rely on it; not just the folks having these conversations (or the second order conversations like this one) | 20:39 |
JayF | This all sounds fine then? | 20:39 |
fungi | the code comment could certainly be expanded on | 20:40 |
Clark[m] | Fwiw openstack already uses secrets to access proprietary software from our jobs (GitHub replication) | 20:40 |
JayF | As we were talking about in #openstack-tc yesterday; it's not any different than having features that can't be tested without explicit hardware | 20:40 |
Clark[m] | I don't think that aspect is a concern unless we want to revert a lot of precedent | 20:40 |
JayF | and sometimes even hardware that needs a software-unlock-key-license (e.g. some iLo features that Ironic supports) | 20:40 |
JayF | I've been trying to drive a wedge somewhere to see if that ^ case is different than the fips case; and it just isn't. | 20:40 |
fungi | tinwood may also be able to clarify some of the technical points about whether there's any non-osi-approved licenses involved in the packages distributed for ubuntu fips enablement (he's the one who arranged the token for us) | 20:43 |
fungi | my assumption is that it's just specifically configured builds of the usual open source projects which have been subjected to additional testing on the distro side, and are normally being distributed for a cost in order to be able to fund that work, but i could be mistaken | 20:46 |
JayF | yeah tbh given what I know about it now; it makes me wish that they just gated the actual-support and not the code | 20:54 |
opendevreview | Ian Wienand proposed opendev/statusbot master: Add mastodon support https://review.opendev.org/c/opendev/statusbot/+/864586 | 21:10 |
ianw | ^ basically you just send a bit of json with a bearer token | 21:11 |
fungi | oh nice | 21:15 |
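For the curious, posting a status to a Mastodon instance really does boil down to one authenticated JSON request along these lines; the instance URL and token here are placeholders, and the statusbot change itself is the authoritative implementation:

```bash
# Placeholder instance and token; the real call lives in opendev/statusbot.
curl -s -X POST \
  -H "Authorization: Bearer $MASTODON_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"status": "OpenDev services are back to normal"}' \
  https://mastodon.example.org/api/v1/statuses
```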
opendevreview | Ian Wienand proposed opendev/statusbot master: Add mastodon support https://review.opendev.org/c/opendev/statusbot/+/864586 | 21:16 |
clarkb | ianw: where did we end up with paste and werkzeug. Do we need a system-config-run job? | 21:17 |
clarkb | oh we have that job I just need a depends on to the lodgeit side I guess /me does this | 21:17 |
ianw | yeah that should do a complete bringup and screenshot | 21:19 |
opendevreview | Clark Boylan proposed opendev/system-config master: DNM testing lodgeit image updates https://review.opendev.org/c/opendev/system-config/+/864587 | 21:19 |
clarkb | re etherpad it looks like we are getting the log files now. However, it seems like we might still be logging to the docker log which is what was filling up? | 21:26 |
clarkb | so this might be half of the fix we need. The other half would be to somehow reduce retention of the docker log? I don't know the answer yet just calling out what I see | 21:26 |
fungi | or configure etherpad/nodejs to not also log to stdout? | 21:27 |
opendevreview | Ian Wienand proposed opendev/statusbot master: Add mastodon support https://review.opendev.org/c/opendev/statusbot/+/864586 | 21:28 |
opendevreview | Ian Wienand proposed opendev/statusbot master: Update ancient hacking version https://review.opendev.org/c/opendev/statusbot/+/864588 | 21:28 |
clarkb | well I think that is how the docker -> syslogging works | 21:28 |
clarkb | it takes what docker sees, tags it with a syslog tag, then syslog puts it into files | 21:28 |
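That is, roughly this shape of per-container logging configuration; the service name and tag value are illustrative, not copied from the real etherpad compose file:

```yaml
services:
  etherpad:
    logging:
      driver: syslog
      options:
        # the tag rsyslog uses to route these lines into their own file;
        # the value actually used in production may differ
        tag: docker-etherpad
```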
ianw | sorry i have to afk for a bit, i can look when i get back in ~1 hour | 21:29 |
clarkb | oh unless `docker log` knows to look in syslog for that info? | 21:29 |
clarkb | ya that may be it | 21:30 |
fungi | er, right i reviewed that no more than an hour ago and already forgot the config was telling docker to log to syslog, not telling nodejs to do that | 21:30 |
clarkb | container-cached.log is the file we end up with in the container dir | 21:34 |
clarkb | maybe that won't grow forever and instead acts like a cache for the info going to syslog? | 21:34 |
clarkb | I think we can monitor and observe this. | 21:34 |
clarkb | https://docs.docker.com/config/containers/logging/dual-logging/ I suspect that it may be doing something like this? | 21:40 |
clarkb | there are also docker daemon settings that will rotate logs. It might not be a bad idea for us to set those too? | 21:41 |
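The daemon-level rotation knobs being referred to are presumably along these lines (values are illustrative; note these apply to the json-file driver, and with the syslog driver plus dual logging the cache has its own similarly named cache-* options to check in the docker docs):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
```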
clarkb | ianw: https://1b684d4e3822b9d14821-fd1d23de4951783ce4905cf96230b70e.ssl.cf5.rackcdn.com/864587/1/check/system-config-run-paste/e56772e/bridge99.opendev.org/screenshots/paste-main-page.png | 21:54 |
*** rcastillo|rover is now known as rcastillo | 21:55 | |
clarkb | https://zuul.opendev.org/t/openstack/build/e56772e28f75450c97e5ac18073f28cc/log/paste99.opendev.org/docker/lodgeit-compose_lodgeit_1.txt shows it using python3.10 as expected | 21:55 |
*** dasm is now known as dasm|off | 23:02 | |
ianw | clarkb: excellent, thanks! LGTM | 23:03 |
clarkb | fungi: I've poked around on your held gitea node to try and find a way to edit redirects via the ui. I've also looked at the API swagger doc with no luck. I suspect that the db is our only choice | 23:08 |
clarkb | Looking at the db the repo_redirect table appears to have the bits we need. That table has a redirect id, then owner_id + repository pair that indicate the source of a redirect and redirect_repo_id that points to the target | 23:09 |
clarkb | on our test node we have openstack/diskimage-builder and opendev/diskimage-builder redirecting to opendev/disk-image-builder | 23:09 |
clarkb | I think we get two redirects because we changed both the name and org in this case | 23:10 |
clarkb | the owner_id maps onto user.id and redirect_repo_id onto repository.id | 23:11 |
clarkb | you should double check all of that but hopefully this helps with your investigating | 23:11 |
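A hedged sketch of the cleanup this implies; the table and column layout follows the description above and must be verified against the live gitea schema (ideally on the held test node) before anything is run in production:

```sql
-- Sketch only: verify table/column names against the running gitea
-- schema first; the id below is a placeholder, not a real row.
SELECT * FROM repo_redirect;
-- owner_id resolves via user.id and redirect_repo_id via repository.id,
-- which identifies the source and target of each redirect row.
DELETE FROM repo_redirect WHERE id = 1234;
```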
ianw | clarkb: i agree journalctl -f is still showing things, i had expected that would be out of the loop too | 23:19 |
fungi | thanks for the extra detail clarkb! | 23:26 |
fungi | i got sidetracked with other stuff and hadn't a chance to dig into it yet, but at this rate probably will save it for tomorrow morning (barring any new emergencies overnight) | 23:27 |
clarkb | ianw: ya I think we can probably monitor it and see how it does | 23:34 |
clarkb | ianw: if that cache file doesn't grow indefinitely then its probably fine | 23:34 |
ianw | clarkb: i think this is related, and i have deja-vu about this -> https://github.com/docker/for-linux/issues/1151#issuecomment-734351866 | 23:34 |
ianw | https://review.opendev.org/c/opendev/system-config/+/845316 is why i have deja-vu i think. same but different ... | 23:36 |
clarkb | does that imply we need to down then up it? | 23:36 |
ianw | i think it implies that we should have a separate rsyslogd socket that docker logs to directly, bypassing /dev/log | 23:37 |
ianw | if we don't want the logs in journald too | 23:37 |
clarkb | oh, I'm ok with them being in journald | 23:38 |
clarkb | it's the docker direct collection that grows indefinitely and created problems (I think journald has some sort of pruning?) | 23:38 |
clarkb | fungi: are you interested in the stack at https://review.opendev.org/c/opendev/lodgeit/+/864222 which updates lodgeit testing and python versions? | 23:40 |
fungi | looks straightforward to me, and still seems to run the same tests | 23:51 |
clarkb | the child changes swap the docker image over to python3.10 and remove 3.6 testing | 23:51 |
clarkb | and are tested by https://review.opendev.org/c/opendev/system-config/+/864587 as well | 23:52 |