Clark[m] | tonyb: I would leave mirror update alone for now since that is the backend update server and is a bit decoupled. The rest of it sounds right | 00:15 |
tonyb | Clark[m]: Thanks. | 00:19 |
tonyb | I'll get that going first thing tomorrow. | 00:19 |
tkajinam | looks like some jobs have been stuck in queued status for a very long time. Could this indicate an infra problem? | 04:24 |
tkajinam | hmm I see a bunch of periodic jobs were triggered, and I guess they consume a lot of VMs | 04:25 |
opendevreview | Merged openstack/project-config master: Update Grafana dashboard for Neutron master https://review.opendev.org/c/openstack/project-config/+/898836 | 09:32 |
opendevreview | Merged openstack/project-config master: sdk/osc: Rollback to LaunchPad for issuetracking https://review.opendev.org/c/openstack/project-config/+/894285 | 09:33 |
frickler | clarkb: fungi: ^^ I merged this because gtema had already changed the osc docs, but please double check that this works as expected. also I think one of you will want to disable the affected projects in sb? | 10:23 |
mnasiadka | frickler: do you remember if there was any conclusion with removing kayobe feature/zookeeper branch? | 10:39 |
frickler | mnasiadka: I need to check logs since I restarted my client yesterday | 10:45 |
frickler | mnasiadka: so clarkb wanted to check back with the release team, but there was no response to that. there was also general agreement that it would be ok-ish for infra-root to do this as a one-off, so I'm going to doublecheck which permissions I need and go ahead with it | 10:49 |
mnasiadka | frickler: thanks :) | 10:49 |
frickler | duckduckgo gives me https://docs.openstack.org/fuel-docs/newton/plugindocs/fuel-plugin-sdk-guide/create-environment/repository-branching/repository-branching-delete.html which isn't helpful when being on the receiving end of that request ;) | 10:52 |
frickler | #status log deleted the "feature/zookeeper" branch from openstack/kayobe, which was an unwanted leftover from importing the project | 11:00 |
opendevstatus | frickler: finished logging | 11:00 |
frickler | mnasiadka: ^^ I'm not sure how things work for the github mirror, it may take another change to get merged for the replication to trigger | 11:01 |
mnasiadka | frickler: thanks, we'll check after any merged change | 11:02 |
*** dhill is now known as Guest5259 | 12:25 | |
fungi | frickler: thanks, i had somehow missed the cli/sdk change, i'll get things cleaned up in the sb database momentarily | 12:49 |
fungi | frickler: for branch deletion, simplest solution is to temporarily add your normal account to the administrators (or openstack release managers) group, then browse to configuration for the project in question, and on the branches tab you should see a delete button (there is also a rest api call for branch deletion, which is what the scripts used by the openstack release managers rely on) | 12:50 |
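For reference, the REST endpoint fungi mentions can be sketched like this (a hypothetical helper, not the release team's actual script; the account used needs the delete permission fungi describes):

```python
import urllib.parse


def branch_delete_url(base_url, project, branch):
    """Build the Gerrit REST endpoint for branch deletion."""
    # Gerrit wants project and branch names URL-encoded, '/' included.
    project = urllib.parse.quote(project, safe="")
    branch = urllib.parse.quote(branch, safe="")
    return "%s/a/projects/%s/branches/%s" % (base_url, project, branch)


url = branch_delete_url("https://review.opendev.org",
                        "openstack/kayobe", "feature/zookeeper")
print(url)
# → https://review.opendev.org/a/projects/openstack%2Fkayobe/branches/feature%2Fzookeeper
# The real call is an authenticated HTTP DELETE against that URL, e.g.
#   requests.delete(url, auth=(username, http_password))
# using a Gerrit HTTP password; Gerrit responds 204 No Content on success.
```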
fungi | frickler: gtema: just to be clear, that change makes it so openstack/ansible-collections-openstack bugs are now tracked in https://launchpad.net/openstacksdk | 12:56 |
fungi | that's the intended url now, right? | 12:56 |
fungi | just making sure you didn't mean to have a separate lp project for that | 12:57 |
gtema | hmm, no, that was not the intention | 12:57 |
fungi | there's no https://launchpad.net/ansible-collections-openstack so i expect that's correct | 12:57 |
fungi | if there's a different lp project for it, you'll want to update the "groups" entry in gerrit/projects.yaml to whatever lp project name you want to use | 12:58 |
fungi | and similarly, openstack/cliff is using https://launchpad.net/python-openstackclient | 12:59 |
fungi | granted, https://launchpad.net/cliff seems to be something else entirely anyway | 13:00 |
gtema | right, creating change now | 13:02 |
gtema | thanks for hint fungi | 13:02 |
fungi | yw | 13:02 |
frickler | gtema: looking at the change I'd assumed you'd want to keep a-c-o in the openstacksdk bug group like it was for storyboard, do you want to create a dedicated project for that instead? from my consumer perspective both would be fine. all the other repos should certainly stick to using only sdk/osc as bug projects, right? | 13:19 |
fungi | once that's decided, i'll go back and adjust the urls in the descriptions i set for the old projects in sb | 13:20 |
fungi | for whichever ones are getting a different lp project than what they're currently set to | 13:21 |
opendevreview | Artem Goncharov proposed openstack/project-config master: Set proper LP project for ansible-collections-openstack https://review.opendev.org/c/openstack/project-config/+/899678 | 13:21 |
fungi | gtema: when you create https://launchpad.net/ansible-collections-openstack please make sure to set it as "part of" openstack, and also if you're creating any new teams for it then whatever team owns the project should itself be owned by ~openstack-admins | 13:22 |
gtema | uhm, too late. Just created | 13:23 |
gtema | can it be moved or changed otherwise? | 13:23 |
fungi | you should be able to set the "part of" in the project settings still | 13:23 |
fungi | it's just a piece of metadata really | 13:24 |
fungi | https://launchpad.net/ansible-collections-openstack isn't showing up for me yet, but maybe there's a delay | 13:24 |
gtema | yes, changed, thanks. Noticing this is not trivial | 13:24 |
frickler | seems to be set already | 13:24 |
frickler | fungi: different name, which is a bit confusing, not sure what the idea is behind that, gtema? https://launchpad.net/openstack-collections-openstack | 13:25 |
fungi | oh, never mind, i see it's openstack-collections-openstack | 13:25 |
fungi | yep | 13:25 |
fungi | assuming https://launchpad.net/openstack-collections-openstack is really intentional, it looks correct | 13:26 |
gtema | dammnnn | 13:27 |
gtema | and there is no way to rename project (only its title) as far as I see | 13:28 |
gtema | ok, created also https://launchpad.net/ansible-collections-openstack and added it to be part of openstack | 13:31 |
fungi | gtema: i guess you'll want to revise https://review.opendev.org/899678 in that case | 13:34 |
opendevreview | Artem Goncharov proposed openstack/project-config master: Set proper LP project for ansible-collections-openstack https://review.opendev.org/c/openstack/project-config/+/899678 | 13:35 |
gtema | done. I think I should stop today with LP - it is driving me crazy | 13:35 |
opendevreview | Artem Goncharov proposed openstack/project-config master: Set proper LP project for ansible-collections-openstack https://review.opendev.org/c/openstack/project-config/+/899678 | 13:36 |
opendevreview | Artem Goncharov proposed openstack/project-config master: Set proper LP project for ansible-collections-openstack https://review.opendev.org/c/openstack/project-config/+/899678 | 13:47 |
TheJulia | Any chance I can get a hold for job name "ironic-tempest-standalone-advanced" on openstack/ironic ? Change ID 898010 in particular. | 14:13 |
fungi | TheJulia: can you provide a few words about what you're going to investigate on that held node, for context? i'll include it in the autohold comment | 14:16 |
TheJulia | fungi: trying to figure out virtual media networking issues | 14:19 |
TheJulia | but can't reproduce them locally... which seems my luck | 14:19 |
fungi | TheJulia: autohold has been created | 14:21 |
TheJulia | Thanks | 14:21 |
fungi | yw | 14:21 |
fungi | is this related to your glean questions from last week? | 14:22 |
TheJulia | yes! | 14:22 |
fungi | awesome | 14:22 |
TheJulia | Also, looks like cloud-init just tries dhcp and stomps config too | 14:22 |
TheJulia | so... all sorts of *fun* | 14:22 |
TheJulia | "fun" | 14:22 |
fungi | so sounds like you've progressed to thinking it's not mounting the configdrive for some reason? | 14:22 |
TheJulia | Well, config drive gets mounted, config gets created, networkmanager sort of just spins and tries to dhcp anyway | 14:24 |
TheJulia | cloud-init fires up shortly afterwards and goes "everything is dhcp!" and then, things just don't *seem* to work | 14:24 |
fungi | ouch | 14:24 |
TheJulia | yeah | 14:24 |
fungi | so being mounted but not "found" (or not parsed successfully) perhaps | 14:25 |
TheJulia | it sort of even looks like it is parsing correctly too, but can't know for sure from just console logs | 14:26 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add a jammy test node for regional mirrors https://review.opendev.org/c/opendev/system-config/+/899710 | 14:48 |
clarkb | tkajinam: I think those builds and change are stuck due to requesting very particular types of nodes (in helm's case the 32GB flavors) and we must not be able to fulfill them for some reason and/or are stuck in the process of determining no cloud can fulfill them so NODE_FAILURE is not reported | 14:49 |
clarkb | I've got meetings for like the next 5 hours though so not sure I'll be able to debug | 14:50 |
tkajinam | clarkb, ok. those jobs eventually started and completed, and I agree with what you said. | 14:59 |
frickler | https://grafana.opendev.org/dashboards is giving me an error popup "permission needed: folders:read". listing dashboards on the home page is working fine, though | 15:02 |
clarkb | I can confirm the behavior. Not sure why it happens. Maybe look at what the front page does permissions wise and we might need to update for the dashboards/ path? | 15:03 |
opendevreview | Merged openstack/project-config master: Set proper LP project for ansible-collections-openstack https://review.opendev.org/c/openstack/project-config/+/899678 | 15:12 |
* frickler really likes the Archived-At header that mm3 generates. earlier I always needed to look for the mail in the archive in order to give other ppl a link to it, now I can copy it straight from the mutt :-) | 15:23 | |
tonyb | That is nice. | 15:24 |
fungi | yes, i make extensive use of it as well | 15:36 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add a jammy test node for regional mirrors https://review.opendev.org/c/opendev/system-config/+/899710 | 15:47 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add a jammy test node for regional mirrors https://review.opendev.org/c/opendev/system-config/+/899710 | 16:07 |
clarkb | following up on the gerrit plugin manager exception. I think it is a different issue to the one they fixed recently after comparing tracebacks. Makes me more confident we aren't accidentally building something wrong somehow and ending up with old versions of the plugins | 16:20 |
clarkb | and to followup on the commentlinks thing my plan is to restart gerrit this afternoon around school run scheduling | 16:27 |
clarkb | I have too many meetings early in the day to give that proper attention | 16:27 |
fungi | sounds good. i'll go ahead and approve the two mailman cleanup changes | 16:30 |
fungi | and send a heads up to service-announce about landing the upgrade change on thursday | 16:30 |
frickler | infra-root: seems we have some issue with rax since end of last week https://grafana.opendev.org/d/a8667d6647/nodepool3a-rackspace?orgId=1&from=now-30d&to=now | 16:35 |
frickler | seems related to nodepool/sdk https://paste.opendev.org/show/bwFMydSlE3lyrBf2WKTd/ | 16:37 |
frickler | launcher container on nl01 is 4 days old, so that matches grafana | 16:38 |
clarkb | d474eb84c605c429bb9cccb166cabbdd1654d73c is the likely issue given timing I think cc stephenfin | 16:38 |
stephenfin | my guess is you're using cinder's v2 API (on RAX)? | 16:40 |
fungi | yes, they don't support anything newer | 16:40 |
clarkb | yup just noticed v2 doesn't have get_limits | 16:40 |
clarkb | I think sdk is broken here for all v2 cinder api usage | 16:41 |
stephenfin | we've found a few of those lately. Should be a copy-paste job | 16:41 |
clarkb | I'll propose a change to nodepool to exclude this openstacksdk version | 16:42 |
clarkb | looks like 1.5.0 should be fine but 2.0.0 is not | 16:43 |
clarkb | remote: https://review.opendev.org/c/zuul/nodepool/+/899717 Exclude openstacksdk 2.0.0 | 16:46 |
frickler | clarkb: do we want to verify that within the container on nl01 first or did you test that locally? | 16:47 |
clarkb | frickler: I verified via git log which is pretty light verification. I guess we can downgrade in nl01 and restart the container | 16:48 |
clarkb | maybe. I'm not sure we have permissions to do that within the container? | 16:48 |
frickler | entering the container as root should work, let me do a quick test | 16:49 |
clarkb | looks like the container was just restarted. Fwiw I did a user installation of the library and was going to test if pythonpath would cause that to get picked up | 16:51 |
clarkb | it seems to be working now. Not sure if you did anything else. But maybe the user install and pythonpath made it all happy | 16:52 |
frickler | I just did "docker exec -u root bash" and then "pip install openstacksdk\<2" | 16:52 |
frickler | and then docker restart | 16:52 |
frickler | but yes, 23 building already | 16:52 |
clarkb | frickler: ah ok, so unsure if user install would have worked | 16:52 |
clarkb | but either way 1.5.0 seems to be valid | 16:52 |
stephenfin | clarkb: https://review.opendev.org/c/openstack/openstacksdk/+/899718 should be able to get that into 2.0.1/2.1.0 | 16:53 |
clarkb | stephenfin: thanks! | 16:53 |
clarkb | stephenfin: my nodepool side change did !=2.0.0 so we should automatiically pick that up once available | 16:53 |
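The nodepool-side pin clarkb describes amounts to a one-line requirements entry (a sketch; the exact surrounding context of nodepool's requirements file is assumed):

```
openstacksdk!=2.0.0
```

Because only 2.0.0 is excluded, pip will pick up 2.0.1 on its own once the fix is released.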
stephenfin | cool | 16:54 |
stephenfin | clarkb: seeing as you have credentials to a v2-having cloud, it would be helpful if you could validate that also. We don't have v2 in CI | 16:55 |
clarkb | stephenfin: I thought gtema did get rax credentials? | 16:56 |
clarkb | but we can probably manage to test that | 16:56 |
frickler | I was just about to say it would be interesting to see if that extra test would catch the issue | 16:57 |
gtema | clarkb - I remember I got the funny pair of secrets and I had trouble establishing proper tests with it (lack of privs or so). I will try to have another look at that later this week | 16:57 |
clarkb | gtema: cool let us know and if that continues to be problematic we can probably cross check with how things are setup on our side and/or run a one off test | 16:59 |
gtema | deal | 16:59 |
clarkb | frickler: fwiw I did `pip install openstacksdk==1.5.0` and that put it in the users local lib path. pip freeze did report it as the right version afterwards so I think it may have worked without root anyway | 17:01 |
frickler | clarkb: ah, o.k., I will try to test that next time | 17:03 |
opendevreview | Merged opendev/system-config master: Merge production and test node mailman configs https://review.opendev.org/c/opendev/system-config/+/899304 | 17:03 |
frickler | clarkb: regarding testing the sdk fix we should be able to do it with a local venv on bridge, I'll see if I can get that done between dinner and TC meeting | 17:04 |
clarkb | ++ | 17:05 |
clarkb | frickler: a small script that makes a cloud object then fetches volume limits should be sufficient rather than running all of nodepool | 17:06 |
clarkb | also much safer since we don't want nodepool fighting with another instance | 17:06 |
clarkb | (leak cleanups etc) | 17:06 |
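A minimal reproducer along the lines clarkb suggests might look like this (a sketch only: the cloud name "rax", the use of a local clouds.yaml entry, and the `summarize()` helper are all assumptions):

```python
# Reproduce the cinder v2 get_limits breakage with a clouds.yaml entry,
# without running a full nodepool instance.


def summarize(limits):
    # Pretty-print any mapping of limit names to values.
    return ", ".join("%s=%s" % (k, v) for k, v in sorted(limits.items()))


def main():
    import openstack  # openstacksdk; imported lazily so summarize() stands alone

    conn = openstack.connect(cloud="rax")  # cloud name is an assumption
    # With openstacksdk 2.0.0 against a cinder v2 cloud this raises,
    # because the v2 block_storage proxy lacks get_limits(); 1.5.0 works.
    limits = conn.block_storage.get_limits()
    print(summarize(limits.to_dict()))


if __name__ == "__main__":
    main()
```

Recent python-openstackclient releases go through the same SDK layer, so frickler's osc cross-check should be roughly `openstack --os-cloud rax limits show --absolute`.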
frickler | yes, I wasn't planning to run nodepool there. my first test would be to see if getting limits via osc is also affected | 17:09 |
clarkb | oh ya that would be a nice easy reproducer | 17:10 |
frickler | infra-root: seems there is a new zuul config error for github projects like ansible: "Will not fetch project branches as read-only is set" | 17:31 |
clarkb | frickler: yes I brought it up in the zuul matrix room last week | 17:33 |
clarkb | frickler: basically the way zuul startup should work is it checks for configs while things are starting up and notices that the configs aren't ready and marks it in error. Then later when it does load the config it should flip the state out of error with the new config | 17:34 |
clarkb | frickler: it appears that something is preventing it from flipping over in this case and we need to do further debugging, but I've been busy with other stuff recently | 17:34 |
frickler | ah, o.k., I skipped reading the zuul channel during the ptg, sorry for the duplicate then | 17:36 |
clarkb | if anyone has time to dig into the zuul logs to see if they can determine why it happened that would be great | 17:38 |
*** dhill is now known as Guest5281 | 18:15 | |
fungi | mailman upgrade announcement sent | 18:16 |
clarkb | fungi: does https://review.opendev.org/c/opendev/system-config/+/899305 need to be rebased due to the other config merge change? | 18:17 |
clarkb | I see the announcement too fwiw | 18:17 |
fungi | ah, whoops, thanks--i just saw the -2 from that | 18:18 |
fungi | they didn't originally conflict, but i think i ended up touching other files later that did | 18:18 |
clarkb | fixing the rax provider allowed zuul and nodepool to clear out those helm builds | 18:18 |
clarkb | that's one item that no longer needs debugging | 18:19 |
opendevreview | Jeremy Stanley proposed opendev/system-config master: Clean up old Mailman v2 roles and vars https://review.opendev.org/c/opendev/system-config/+/899305 | 18:21 |
clarkb | fungi: I went ahead and reapproved ^ after checking the diff | 18:22 |
fungi | thanks | 18:23 |
frickler | the neutron job that was stuck in periodic for days is gone as well | 18:44 |
opendevreview | Merged opendev/system-config master: Clean up old Mailman v2 roles and vars https://review.opendev.org/c/opendev/system-config/+/899305 | 18:59 |
TheJulia | o/ Looks like my autohold is ready for me! Who wants my ssh key? | 19:09 |
fungi | TheJulia: gimme | 19:10 |
fungi | TheJulia: ssh root@173.231.255.73 | 19:11 |
TheJulia | works, awesome! | 19:11 |
TheJulia | thanks! | 19:11 |
fungi | my pleasure | 19:11 |
fungi | i ran this on the old listserv just now: | 20:02 |
fungi | sudo su - root | 20:02 |
fungi | cd ~/kernel-stuff | 20:03 |
fungi | cp /boot/vmlinuz-5.4.0-165-generic ./ | 20:03 |
fungi | bash extract-vmlinux vmlinuz-5.4.0-165-generic > vmlinuz-5.4.0-165-generic.extracted | 20:03 |
fungi | cp vmlinuz-5.4.0-165-generic.extracted /boot/vmlinuz-5.4.0-165-generic | 20:03 |
fungi | i think that should be sufficient to get it booting successfully on the latest kernel in /boot | 20:03 |
fungi | clarkb: if that looks right to you, i'll perform a reboot test on it next | 20:04 |
fungi | and assuming it comes back up, i'll shut it down tomorrow and make an image before deleting | 20:05 |
fungi | tomorrow will be the 11.5 year anniversary of that vm's creation | 20:06 |
fungi | it didn't quite live to reach its teens | 20:06 |
fungi | originally booted 2012-05-01 | 20:07 |
clarkb | looking | 20:21 |
clarkb | fungi: the size is a bit bigger but the same order of magnitude as the one we're booted off of | 20:22 |
clarkb | so ya I think its good | 20:22 |
fungi | cool, rebooting it now | 20:23 |
fungi | current uptime is 494 days | 20:23 |
clarkb | will services start that we don't want to start? | 20:23 |
clarkb | specifically mailman and exim | 20:23 |
fungi | exim and mailman services are all disabled as part of the maintenance | 20:24 |
fungi | K links in /etc/rc2.d | 20:24 |
fungi | (created by `systemctl disable ...`) | 20:24 |
fungi | so no, they should not start again on boot | 20:25 |
fungi | but i'll double-check once i can log into it after it comes up | 20:25 |
fungi | okay, actually rebooting it now-now | 20:26 |
fungi | i probably should have touched /fastboot since i guarantee it's running a fsck of the rootfs | 20:28 |
clarkb | either that or its found some new way to fail to boot :/ | 20:28 |
*** elodilles is now known as elodilles_pto | 20:29 | |
fungi | if so, i'm tempted to just image it as-is and then we can attach it to a recovery boot or something down the road if we actually need any files from it | 20:29 |
clarkb | ya | 20:29 |
clarkb | it pings now | 20:30 |
fungi | just started responding to ping | 20:30 |
fungi | yup | 20:30 |
clarkb | I can login via ssh | 20:31 |
fungi | and i can ssh in | 20:31 |
fungi | no mailman/exim services started on boot either | 20:31 |
fungi | huh, of course, unattended-upgrades has staged linux-image-5.4.0-166-generic to be installed at the next shutdown | 20:38 |
fungi | i can probably just kill the unattended-upgrades daemon, or upgrade manually and do the extraction dance again followed by another reboot test | 20:39 |
clarkb | the computers are conspiring against us | 20:40 |
fungi | okay, did the same thing again with the newer kernel | 20:57 |
fungi | hopefully it'll come back up faster this time | 20:57 |
fungi | and i'm already ssh'd back in again | 20:57 |
fungi | Linux lists 5.4.0-166-generic #183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | 20:58 |
fungi | tomorrow i'll power it off for good and archive an image, then delete | 20:58 |
fungi | and also take it out of the disable list on bridge | 20:58 |
clarkb | fungi: snapshot is in progress now? | 21:00 |
fungi | no, tomorrow | 21:04 |
clarkb | how does this look for 20 minutes from now #status notice Gerrit on review.opendev.org will be restarted to pick up a configuration change required as part of Gerrit 3.8 upgrade preparations. | 21:41 |
tonyb | +1 | 21:44 |
clarkb | rough plan is `docker-compose down ; mv /home/gerrit2/review_site/data/replication/ref-updates/waiting /home/gerrit2/tmp/clarkb/waiting_20231031 ; docker compose up -d` we don't need any pulls oranythin like that so should be straightforward | 21:45 |
fungi | lgtm | 21:49 |
fungi | you're not moving out of /home/gerrit2 so should be ~instantaneous | 21:51 |
clarkb | ok time to start. I'll send that notice then proceed with the plan outlined above | 22:00 |
* fungi is on hand | 22:00 | |
clarkb | #status notice Gerrit on review.opendev.org will be restarted to pick up a configuration change required as part of Gerrit 3.8 upgrade preparations. | 22:00 |
opendevstatus | clarkb: sending notice | 22:00 |
-opendevstatus- NOTICE: Gerrit on review.opendev.org will be restarted to pick up a configuration change required as part of Gerrit 3.8 upgrade preparations. | 22:00 | |
opendevstatus | clarkb: finished sending notice | 22:03 |
clarkb | ok proceeding now that the notice is done | 22:03 |
fungi | webui is loading for me now | 22:04 |
fungi | file diffs aren't showing for me... very odd | 22:06 |
clarkb | as a sanity check it doesn't look like gerrit.config was updated. its timestamp is from when the change merged to update the config | 22:06 |
clarkb | fungi: I think we've seen that before. Its slow while it rebuilds caches | 22:06 |
clarkb | they should eventually load for you | 22:06 |
fungi | aha, okay | 22:06 |
clarkb | spotchecking some changes I already had open, their diffs load for me | 22:07 |
fungi | now it's working, yep | 22:07 |
clarkb | https://review.opendev.org/c/opendev/system-config/+/898756/ and https://review.opendev.org/c/openstack/nova/+/899753 for example | 22:07 |
clarkb | cool | 22:07 |
clarkb | change id comment links work for me. Not sure I have any examples of other commentlinks handy | 22:07 |
fungi | yeah, i was looking | 22:08 |
fungi | did test that already at least | 22:08 |
clarkb | the sha commentlink in https://review.opendev.org/c/opendev/system-config/+/899283 is working | 22:08 |
clarkb | and shas were the main one that got updated so I think I'm happy with this | 22:08 |
fungi | https://review.opendev.org/c/starlingx/utilities/+/897335 has a closes-bug | 22:08 |
fungi | still functioning | 22:08 |
fungi | https://review.opendev.org/c/starlingx/tools/+/899742 has story and task footers | 22:10 |
fungi | also working | 22:11 |
fungi | everything lgtm | 22:11 |
clarkb | I agree that your examples look good. So ya commentlink config updated to 3.8 compatible specification and we're still functional | 22:11 |
clarkb | should've scheduled the Gerrit upgrade for this Friday :) | 22:12 |
fungi | https://review.opendev.org/c/opendev/sandbox/+/760230 | 22:14 |
fungi | my last comment there confirms the commit id links you redid still work | 22:14 |
clarkb | ya if we want to change that behavior I'm sure we can adjust the regex but I didn't want to confuse matters when shifting formats of config | 22:14 |
fungi | agreed, it's fine | 22:15 |
TheJulia | so going back to glean, it looks like glean works as we would expect on a first pass; the challenge is if cloud-init also triggers and smashes what is there. At least that is my running theory at the moment. Glean likely needs to move away from just using the network-script config format as well, but while deprecated, it still works at the moment | 22:15 |
clarkb | I don't know that glean + cloud-init was ever considered a use case. It was always glean or cloud-init | 22:15 |
TheJulia | well, we have no elements to remove cloud-init | 22:16 |
TheJulia | we likely ought to | 22:16 |
clarkb | as for configuring networks I think we've tried to take the path of least resistance with network manager on centos and friends, which meant keep emitting the network script stuff and turn on the compat flag for network manager. It was my understanding that enabling the compat layer by default is deprecated but that the compat layer wasn't going away | 22:17 |
clarkb | its just that we may have to explicitly enable it at some point. If this is wrong then yes we should investigate alternative network manager configuration methods | 22:17 |
TheJulia | well, the error that now gets spit out is we need to move to keyfiles | 22:18 |
TheJulia | and there is a command to do it for us | 22:18 |
TheJulia | like nmcli connection migrate | 22:18 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add a jammy test node for regional mirrors https://review.opendev.org/c/opendev/system-config/+/899710 | 22:22 |
opendevreview | Tony Breeds proposed opendev/system-config master: [testinfra] Add port into curl's --resolve arg. https://review.opendev.org/c/opendev/system-config/+/899762 | 22:22 |
opendevreview | Tony Breeds proposed opendev/system-config master: [DNM] Testinfra debugging https://review.opendev.org/c/opendev/system-config/+/899763 | 22:22 |
tonyb | Not important but I noticed that the sign-in link/button doesn't work on zuul.openstack.org as the redirect_uri is invalid, it does happily work on zuul.opendev.org. Are the valid redirect_uris managed somewhere in system-config or is that manually done? | 22:38 |
clarkb | tonyb: looks like it is the redirect from zuul.openstack.org to keycloak.opendev.org that has failed, so it must be the keycloak config, which I think is still manual | 22:55 |
clarkb | corvus: would know more | 22:55 |
clarkb | tonyb: in general I think we try to encourage people to just use zuul.opendev.org. Maybe we should have the openstack.org vhost redirect to opendev.org | 22:56 |
tonyb | Yeah, I think we could do that and we could just rewrite the url to include the '/t/openstack' | 22:57 |
clarkb | I don't know why we did the proper vhost rather than a redirect. Maybe it was to prove you could do it with zuul multitenancy | 22:58 |
tonyb | Hard to say. | 23:01 |
tonyb | I'll try to remember how apache and mod_rewrite work | 23:02 |
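The redirect tonyb describes could be a short mod_alias stanza in the zuul vhost (a sketch only; the actual vhost template lives in system-config, and whether to drop the whitelabel was still under discussion):

```apache
<VirtualHost *:443>
    ServerName zuul.openstack.org
    # Preserve the request path, mapping the whitelabeled root onto the
    # openstack tenant on the canonical host.
    RedirectMatch permanent "^/(.*)$" "https://zuul.opendev.org/t/openstack/$1"
</VirtualHost>
```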
corvus | i don't really recall a desire to remove the openstack whitelabel | 23:10 |
corvus | i'm certain part of the lack of desire comes from the fact that it is semi-helpful to zuul developers to have a whitelabel on hand for testing | 23:12 |
corvus | but i don't think that needs to be a blocker for removing the openstack whitelabel if we want to. we could always add a zuul whitelabel and not publicize it. | 23:13 |
corvus | and yeah, the keycloak urls are currently configured manually; the next step in that project would be to export that data (as json iirc) and then add automatic import/configuration to the deployment. there's a model for that in the zuul tutorial system which uses keycloak (the bootstrap stage of that imports a serialized keycloak config) | 23:15 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add a jammy test node for regional mirrors https://review.opendev.org/c/opendev/system-config/+/899710 | 23:28 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!