Wednesday, 2024-09-11

07:30 <jrosser> o/ morning
07:41 <noonedeadpunk> o/
07:54 <jrosser> that is a new error https://zuul.opendev.org/t/openstack/build/ff74a98242b34312a5ed091bf7eef90d/log/job-output.txt#11317
08:05 <frickler> jrosser: new tempest release was made on monday, likely related. will need some more digging though
08:06 <jrosser> it's interesting as it seems to be self inflicted
08:06 <jrosser> it's just running `tempest run -l` i think
08:38 <frickler> jrosser: there must be more to it, a quick test with tempest from master or 29.1.0 doesn't fail. might be either related to the set of plugins installed or some other environment condition. is this failing 100%, so that holding a node for debugging could help?
08:38 <jrosser> i've just rechecked it to see if it is repeatable
08:39 <jrosser> of note it's on a noble job which we only just added, so there could easily be something unexpected happening there
08:39 <frickler> ah, ok, let me add a hold to this then, just in case
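A minimal reproduction sketch for the `tempest run -l` failure discussed above, assuming the new tempest release (rather than the noble image itself) is the trigger; the venv and workspace paths are placeholders, and 29.1.0 is the prior release mentioned by frickler:

python3 -m venv /tmp/tempest-venv
. /tmp/tempest-venv/bin/activate
pip install 'tempest==29.1.0'     # prior release, reported to still work
# install the same tempest plugins the failing job uses here, if any
tempest init /tmp/tempest-ws && cd /tmp/tempest-ws
tempest run -l                    # only lists tests, no cloud needed
pip install -U tempest            # the release made on monday
tempest run -l                    # fails here if the new release is the cause

If both invocations pass locally, that would point back at the plugin set or the noble job environment rather than tempest itself.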
12:49 <opendevreview> Merged openstack/openstack-ansible-os_neutron master: Disable uWSGI usage by default  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/927881
13:08 <NeilHanlon> o/ morning folks. i started discussing rocky mirrors for opendev with the infra folks yesterday (again). I will be going through CI failures across zuul for rocky to see if I can come up with some numbers regarding how many mirror-related CI failures we see, on average
13:09 <noonedeadpunk> oh, we've seen quite a few in the last two weeks I'd say
13:27 <NeilHanlon> yeah, there's not really opposition to it, and there's space--just want to put some numbers behind it to justify the space request
13:29 <jrosser> NeilHanlon: i think that most of the errors that i have seen were "package blah-<version> cannot be installed because $reasons"
13:29 <jrosser> rather than it really being a connectivity issue
13:30 <NeilHanlon> yeah, i still think those are sort-of mirror issues, as that indicates a mirror being used is inconsistent or out of date
13:30 <jrosser> so two things need to be considered differently, having a local mirror to avoid reaching out over the internet vs. having a mirror to help with consistency
13:30 <jrosser> i think the purpose of the mirror is the first of those?
13:30 <NeilHanlon> both
13:31 <jrosser> i was not sure that the mirroring process guarantees that you will get a usable mirror at the end
13:31 <NeilHanlon> the short version is that package deps can be spread across multiple repos, and dnf doesn't know to keep AppStream and BaseOS from the same mirror.. so you can end up in a situation where the AppStream mirror and BaseOS mirror are on different mirror systems
13:32 <NeilHanlon> which may or may not have the same sync state
13:32 <jrosser> ah right - so having a local mirror forces that all to be the same place?
13:33 <NeilHanlon> yeah, a local mirror that our CI jobs can point to would mean all those are coming from the same place
13:33 <NeilHanlon> https://drop1.neilhanlon.me/irc/uploads/549451f32823cf6a/image.png
13:33 <NeilHanlon> this is a propagation chart for 9.4 BaseOS -- it's pretty consistent but there are delays sometimes
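A sketch of what pinning a Rocky CI node to a single mirror could look like, assuming plain dnf repo overrides are acceptable; the mirror hostname is a placeholder and the "-ci" repo ids are invented for illustration, not part of any real opendev mirror setup:

# stop dnf from mixing BaseOS/AppStream content fetched from mirrors in
# different sync states by pinning both repos to one host
dnf -y install dnf-plugins-core
dnf config-manager --set-disabled baseos appstream   # stop using the mirrorlist
cat > /etc/yum.repos.d/rocky-ci-mirror.repo <<'EOF'
[baseos-ci]
name=Rocky Linux $releasever - BaseOS (CI mirror)
baseurl=https://mirror.example.org/rocky/$releasever/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
[appstream-ci]
name=Rocky Linux $releasever - AppStream (CI mirror)
baseurl=https://mirror.example.org/rocky/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=1
EOF
dnf clean all && dnf makecache
# gpgcheck relies on the Rocky GPG key already being imported on the image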
13:37 <jrosser> btw did you see that i put a link to opensearch https://opensearch.logs.openstack.org/_dashboards which might help you dig out failures
13:48 <NeilHanlon> jrosser: i did, thank you!
13:51 <NeilHanlon> jrosser: any tips on login for that?
13:51 <jrosser> oh, openstack/openstack
13:52 <NeilHanlon> ta
13:52 <jrosser> i was unable to find this documented at all and had to dig through the mailing list
14:07 <jrosser> noonedeadpunk: those job timeouts look worrying - as they occur exactly on the tasks added by the patch
14:08 <jrosser> like here https://zuul.opendev.org/t/openstack/build/3286cc494ad5482cb112ad6de7c5b1c8/log/job-output.txt#10619
14:09 <jrosser> it could be that the next task (update apt cache) is not completing
14:09 <noonedeadpunk> oh... I just spotted that they're a bit "random"
14:09 <noonedeadpunk> meaning that some passed just nicely
14:10 <jrosser> that's true
14:10 <jrosser> everything was ok for check jobs
14:10 <noonedeadpunk> and for rabbitmq we don't have mirrors, so I assumed it could be that
14:11 <noonedeadpunk> why it's timing out though....
14:11 <noonedeadpunk> but probably worth rechecking anyway to see
14:18 <jrosser> it could easily be that the repo was broken for a moment
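A rough sketch of how the apt cache refresh could be bounded so a flaky upstream repo (such as the unmirrored rabbitmq one) fails fast instead of eating the whole job budget; the timeout values are illustrative only, not what the role currently does:

# fail fast if an upstream apt repo stops responding, rather than hanging
# until the zuul job timeout is reached
timeout 300 apt-get update \
    -o Acquire::http::Timeout=30 \
    -o Acquire::Retries=3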
15:37 <opendevreview> Merged openstack/openstack-ansible-os_octavia master: Return `amphora` provider back  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/928815
15:39 <spatel> Hello!!
15:46 <noonedeadpunk> o/
15:46 <noonedeadpunk> long time no see :)
16:05 <spatel> I was super busy :(
16:05 <spatel> now I am back
16:05 <noonedeadpunk> yeah, workload these days is just /o\
16:11 <spatel> lol
16:11 <spatel> I was working on a DC migration and it just finished
16:11 <spatel> Next project is to deploy a large cloud using OVN (a little nervous)
16:21 <noonedeadpunk> well, we've already found a couple of nits in ovn in our POC
16:21 <noonedeadpunk> as some things that used to work in OVS do not work for us in OVN
16:22 <noonedeadpunk> but it still looks quite promising :)
16:26 <jrosser> doh, another timeout https://zuul.opendev.org/t/openstack/build/f2c7abed4efe482fb8bbb403dce6cdb8/log/job-output.txt#6190
17:55 <spatel> noonedeadpunk what do you mean by "things that used to work in OVS do not work for us in OVN"?
17:57 <spatel> what are those nits?
18:02 <opendevreview> Merged openstack/openstack-ansible-openstack_hosts stable/2023.2: Ensure python3-packaging is installed for distros  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/928119
20:24 <bjoernt> any plans to move the eom tags periodically, or are the unmaintained branches to be used at that point?
