16:02:54 #startmeeting openstack_ansible_meeting
16:02:55 Meeting started Tue Feb 9 16:02:54 2021 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:56 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:58 The meeting name has been set to 'openstack_ansible_meeting'
16:05:31 trying to check if we have some bugs to discuss...
16:06:49 I think jrosser covered most of them today :)
16:07:04 #topic office hours
16:09:06 o/
16:09:08 hello
16:09:15 \o/
16:10:01 I don't really have much to say from my side, since I've had very little time on my hands :(
16:11:17 feels like we need to get all this new-pip stuff merged
16:11:46 I'd say we almost did?
16:11:56 https://review.opendev.org/q/topic:%22osa-new-pip%22+(status:open)
16:11:59 it's super close
16:12:05 we haven't yet landed the patch to the integrated repo which turns it on
16:12:52 this is the related change for the tests repo https://review.opendev.org/c/openstack/openstack-ansible-tests/+/774651
16:13:04 we're pretty much stuck on neutron
16:13:13 and the tests repo does not make this easy for us
16:13:22 yeah, lots of things there, the tests repo patch will help
16:13:36 then we need the bionic->focal patch for os_neutron
16:13:56 which just doesn't work actually...
16:14:55 indeed, the functional tests are all generally unhappy
16:15:27 https://review.opendev.org/773979 is failing horribly in CI just now
16:16:36 oh right
16:17:06 we can't land the change to the tests repo + bionic->focal without also the constraints->requirements changes for os_neutron
16:17:14 some of these patches are going to need to be squashed
16:18:27 why do the constraints->requirements changes relate to bionic vs focal? I guess they will get the same versions during the play?
16:18:46 but I see no issue in merging them together if that's required
16:19:34 also I'm wondering what to do with octavia on centos
16:19:43 should we just mark it non-voting now?
16:20:19 i wonder if johnsom is around?
16:20:23 Hi
16:20:28 woah
16:20:30 :)
16:20:37 You rang?
16:20:40 What is up?
16:21:03 did you see this http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020218.html
16:21:13 we are a bit stuck on our centos-8 CI jobs
16:22:16 Hmm, reading through. The initial report looks like a nova bug. Let me read all the way down
16:23:45 the issue here is that the nova and neutron tempest tests are passing for us..
16:23:53 maybe we're testing the wrong things...
16:24:04 we should check they actually boot something :)
16:24:23 Well, Octavia tends to actually test more than other projects. We have true end-to-end tests, where some projects gloss over things
16:24:40 Is there a patch with logs I can dig into?
16:25:02 sure https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/769952
16:25:17 Thanks, give me a few minutes.
16:26:03 `tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops` should boot something, I guess
16:26:19 https://zuul.opendev.org/t/openstack/build/0a123e189be8445da96927be09220d7a/log/logs/openstack/aio1-utility/tempest_run.log.txt#135 (it's from the nova role CI)
16:26:58 Hmm, those logs have expired. Another patch maybe?
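(For reference, a rough sketch of how to confirm that scenario test really exercises an instance boot, assuming a configured tempest workspace such as the one OSA sets up on the utility container, and shell access to the compute host; the grep pattern is only an approximate filter for the usual nova-compute spawn message:

    # on the utility container: run only the basic server scenario test
    tempest run --regex tempest.scenario.test_server_basic_ops
    # on the compute host: confirm nova-compute actually spawned an instance
    journalctl -u nova-compute | grep -i 'seconds to spawn the instance'
)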
16:26:58 Ah, nevermind, I had the wrong link
16:27:33 jrosser: yeah, they do spawn an instance https://zuul.opendev.org/t/openstack/build/0a123e189be8445da96927be09220d7a/log/logs/host/nova-compute.service.journal-20-52-15.log.txt#5241
16:28:01 cool
16:29:10 johnsom: you can also check this one if the previous has expired https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a
16:30:18 btw for the last patch, debian also failed in pretty much the same way, I'd say...
16:36:46 noonedeadpunk: i did not know what to do about the 0.0.0 version here https://bugs.launchpad.net/openstack-ansible/+bug/1915128
16:36:48 Launchpad bug 1915128 in openstack-ansible "OpenStack Swift-proxy-server do not start" [Undecided,New]
16:36:59 other than say we're not really supporting rocky.....
16:38:57 I'm wondering if it's because they checked out the rocky-em tag
16:39:09 I could imagine that pbr might go crazy about that
16:39:13 oh interesting, could be
16:39:34 perhaps an assumption that a tag is a number
16:40:10 whilst we are in meeting time i guess we should also talk about CI resource use?
16:40:35 yeah
16:41:08 I think the best we can do, besides reducing job time, is to move the bionic tests to experimental
16:41:16 i think that the conclusion on the ML is a good one, reducing job failures is the biggest win
16:41:18 not sure we should actively carry on with bionic
16:41:34 because that may be 100% overhead right now, or even more
16:41:37 and the main source of failures, I guess, is galera
16:42:21 yeah, there were other ones too, like the auditd bug...
16:42:27 i'm going to try and be a bit more disciplined with rechecks and note on the etherpad (https://etherpad.opendev.org/p/osa-ci-failures) when there is some systematic error
16:42:32 and I guess looking into gnocchi is also useful
16:42:43 oh yes, there is a whole lot of mess there
16:43:10 something very strange with the db access, unless i'm reading the log badly
16:43:11 +1 to having that etherpad
16:43:55 I think I need to deploy it to see what's going on
16:45:12 what to do with mariadb? is this an IRC sort of thing?
16:47:24 I actually have no idea except asking, yeah.
16:47:31 * noonedeadpunk goes to #maria for this
16:48:28 hi all .. i am getting an issue in setup-infra that i cannot understand .. this is the error: https://gist.githubusercontent.com/a1git/bf7c55a1befd59e3682be485bc4b1e88/raw/785c1d0a32fc05ae23e5fa5dbd859d3934f6930a/gistfile1.txt -- does it mean i need to downgrade my pip?
16:48:57 i tried 22.0.0 .. but it fails on galera setup .. so going back to 21.2.2
16:52:30 admin0: have you ever used venv_rebuild=true on that deployment?
16:53:06 uh....
16:53:12 i have not .. this is a new greenfield
16:53:59 we need to merge https://review.opendev.org/q/I6bbe66b699ce5ab245bb9779b61b5c4625eba927
16:54:16 on one line inside the python_venv_build log, I find 2021-02-09T22:13:01,803 error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
16:54:30 noonedeadpunk ++++++1 for that patch
16:54:38 aren't those installed by ansible inside the container?
16:55:28 I guess it should be installed only on the repo container, where we usually delegate
16:55:43 i will lxc-containers-destroy .. and retry once more
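(As an aside, a rough way to check whether pre-built wheels are actually being served by the repo server rather than compiled locally, which is what the gcc failure in the gist suggests; the port and path below are the usual OSA defaults and the address is only a placeholder, so treat this as a sketch rather than an exact recipe:

    # placeholder - substitute your internal LB VIP or repo container address
    REPO_HOST=172.29.236.101
    # the OSA repo server normally listens on 8181 and publishes built wheels under /os-releases/
    curl -s http://${REPO_HOST}:8181/os-releases/
)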
16:56:12 admin0: it cooks everything on the repo container and then just deploys to the other containers, to reduce duplicated work
16:56:12 Merged openstack/openstack-ansible-tests master: Unpin virtualenv version https://review.opendev.org/c/openstack/openstack-ansible-tests/+/774651
16:56:58 jrosser noonedeadpunk I think we need to bring in a nova expert on this. I don't see why nova is going out to lunch, but there are a bunch of errors in the nova logs. This seems to be related: https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a/log/logs/host/nova-api-os-compute.service.journal-12-56-44.log.txt#6893
16:57:03 venv_rebuild can be evil without that patch :) I learnt that the hard way
16:57:08 it should never be trying to build that wheel on the utility container like spatel says
16:57:27 it means that for some reason it is not being taken from the repo server
16:57:51 This is the other key message: https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a/log/logs/host/nova-compute.service.journal-12-56-44.log.txt#5970
16:58:28 But that may be a side effect of the cleanup/error handling related to the above error
17:00:13 * jrosser sees eventlet......
17:03:13 Yeah
17:03:13 hm, that seems like a libvirt issue indeed
17:03:38 wondering why we don't see it anywhere else...
17:03:42 Well, I really think it's related to the messaging queue problem. The libvirt error very well may be a side effect
17:04:01 I'm just not sure what it is trying to message there.
17:04:39 the rabbitmq log is totally unhelpful :(
17:06:24 I eventually saw these messages in my deployment with ceilometer
17:06:39 when its agent tries to poll libvirt
17:07:14 and the metric it's polling is not supported by libvirt
17:07:37 but here we don't have any pollsters, I guess (except nova)
17:08:49 well anyway, thanks for taking the time to look into this, johnsom!
17:08:56 #endmeeting
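(For anyone chasing the same libvirt/messaging symptom, a minimal set of checks, assuming shell access to the compute host and to the rabbitmq container; these are standard virsh/rabbitmqctl calls and the grep pattern is only a rough filter for nova's RPC queues:

    # on the compute host: confirm libvirtd is up and answering before blaming nova
    systemctl status libvirtd
    virsh -c qemu:///system list --all
    # in the rabbitmq container: see whether the compute RPC queues are piling up messages
    rabbitmqctl list_queues name messages | grep -i compute
)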