14:00:34 #startmeeting kolla
14:00:34 Meeting started Wed Nov 13 14:00:34 2024 UTC and is due to finish in 60 minutes. The chair is bbezak. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:34 The meeting name has been set to 'kolla'
14:00:47 #topic rollcall
14:01:23 o/
14:01:27 \o
14:02:14 I can see that the core team is around, good :)
14:03:32 #topic agenda
14:03:33 * Roll-call... (full message at )
14:03:41 (i hope rendering is not very bad)
14:03:45 it is
14:03:53 rendering is very bad (just a link) :D
14:04:06 you need to paste each line on its own
14:04:15 but for now I guess we can live with it
14:04:17 yeah. I'll try to remember that
14:04:26 #topic CI status
14:04:55 whiteboard says green
14:05:16 I didn't see massive issues recently
14:05:25 there were some infra issues yesterday, but should be fine now again
14:05:36 ok
14:06:05 #topic Release tasks
14:06:15 how is kayobe doing
14:06:26 jovial
14:06:39 I'll check with him
14:07:47 #topic Current cycle planning
14:08:26 I guess we're just after the PTG and at the beginning of the cycle
14:08:49 rabbitmq is a good topic, but I saw discussion in the change, SvenKieske
14:09:43 yes, so on one hand CI is "green", but looking at the actual logs it's not that green, and I wonder why that many errors are reported as green; I didn't have time to dive deep into the issues and whether they are even related.
14:10:14 any particular example?
14:10:33 and if I'm not missing something, it's really weird that CI is green, as afaik upstream removed some of the queue types we use, so I wonder how that should work.
14:10:53 #link https://zuul.opendev.org/t/openstack/build/0037175825154a0bb76e0c6c65eab144
14:11:33 specifically https://zuul.opendev.org/t/openstack/build/0037175825154a0bb76e0c6c65eab144/log/secondary1/logs/kolla/all-ERROR.txt but it's a very, very long error log, so as I said I didn't deep dive into it
14:12:22 there are errors all over the place: no sql connection, keystone domain not found... various stuff, might be related to rmq, might be something else
14:12:54 might be related to bootstrap. I'd compare with other changes' flows
14:13:07 not sure why that's reported as "success" though. afaik in k-a that should be a hard error. maybe zuul CI in kolla is different from kolla-ansible? not sure.
14:13:28 lots of those errors are during the startup phase, so they are ignored
14:13:58 yes, we're ignoring a lot of errors. recently we added some as well
14:14:03 and keystone domain/user/whatever not found is normal ops
14:14:15 in check-logs.sh
14:14:24 tests/check-logs.sh that is
14:14:40 sorry for autoparsing link
14:14:51 it's at least hard to distinguish multinode jobs from single-node jobs in kolla, these seem to be labeled differently from the kolla-ansible jobs somehow
14:15:13 because I don't think we would find any issues in single-node jobs.
14:15:55 yes, multinode jobs are very helpful in that regard
14:16:20 or this: 2024-11-12 14:28:53.801 31 ERROR oslo_service.periodic_task oslo_messaging.exceptions.MessageDeliveryFailure: Unable to connect to AMQP server on 192.0.2.1:5672 after inf tries: Basic.publish: (404) NOT_FOUND - no exchange 'cinder-scheduler_fanout' in vhost '/'
14:16:30 basically the last line in: https://zuul.opendev.org/t/openstack/build/19efb602a7734f06b495a0b2346bf345/log/secondary2/logs/kolla/all-ERROR.txt
14:16:43 so that doesn't look like a healthy rmq to me
14:17:34 still reported as success.
I'm not sure what's filtering out "harmless" errors in our CI, but I think it's a little bit greedy
14:18:50 and in general I would be very careful about filtering out errors at all, because the required patterns often spin out of control and tend to shadow real issues.
14:19:25 indeed. however, some errors in bootstrap will occur anyway
14:19:28 but whatever, someone would have to dedicate some time to look into it; maybe I can make some on Sunday, no promises though
14:19:48 ok. let's continue discussion in that change
14:19:54 it makes sense
14:19:55 sure, I understand the very good reasons to hide some errors, we can't fix all of them. we just need to be very careful :)
14:19:58 to make it clearer
14:20:12 #topic Additional agenda (from whiteboard)
14:20:16 I'll update the link to the latter one, that seems like a more real issue, thx
14:20:36 I can see some changes to review from SvenKieske
14:21:12 and backports
14:21:20 that is on my list
14:21:27 I'll take a look at that
14:22:00 nice, yeah, most of those are backports I think :)
14:22:24 no reason to go over them individually imho. would just be happy to receive some reviews, thx.
14:23:18 yeap, will take a look
14:23:22 #topic Open discussion
14:24:04 I guess we did some open discussion already :)
14:25:09 I think we won't have a meeting next week, as there are at least a couple of conferences
14:25:21 okay
14:25:22 I'll be in Prague at OVS/OVN con
14:25:34 and some folks will be at Supercomputing in the US
14:25:53 let's skip it then, I'd think
14:25:53 I'm wondering why today's attendance is so low ;)
14:26:00 can you motivate someone at ovn con to revive the ovn-exporter project, please? ;)
14:26:09 :)
14:26:16 I'll try
14:26:37 upstream is dead and I don't feel particularly inclined to maintain my one-off-patch fork, which isn't even finished :D
14:27:45 that's the problem with a one-person project. I think ovs/ovn should have and maintain that kind of stuff
14:27:46 btw, your IRC name somehow changed to "Guest9306", just if you're wondering bbezak :D
14:27:53 lovely
14:28:03 again :)
14:28:12 right, would be nice if the ovn/ovs project could step up. no idea how many resources they've got though.
14:29:17 that looks better, but I guess we are finished? :)
14:29:36 I don't have anything else
14:29:57 let's wrap up then, thank you all!
14:29:57 #endmeeting
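
The filtering discussed above lives in kolla-ansible's tests/check-logs.sh. As a rough sketch of the allowlist idea only (this is not the actual script, and the two ignore patterns below are made-up examples), a check of this kind scans a service's ERROR log and fails the job only if lines remain after the known-harmless patterns are dropped:

import re
import sys

# Hypothetical ignore patterns; the real allowlist is maintained in
# kolla-ansible's tests/check-logs.sh and differs in detail.
IGNORE_PATTERNS = [
    re.compile(r"Domain not found"),            # normal during bootstrap
    re.compile(r"Database connection failed"),  # services may start before the DB
]

def unfiltered_errors(path):
    """Yield ERROR lines that match none of the ignore patterns."""
    with open(path) as f:
        for line in f:
            if " ERROR " not in line:
                continue
            if any(p.search(line) for p in IGNORE_PATTERNS):
                continue
            yield line.rstrip()

if __name__ == "__main__":
    remaining = list(unfiltered_errors(sys.argv[1]))
    for line in remaining:
        print(line)
    # Any remaining error fails the check instead of letting the job report green.
    sys.exit(1 if remaining else 0)

The trade-off raised in the meeting applies directly: every pattern added to such an allowlist widens what is silently accepted, so an overly greedy list can hide real failures like the AMQP errors quoted above.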