16:09:05 #startmeeting openstack_ansible_meeting
16:09:06 Meeting started Tue Dec 1 16:09:05 2020 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:09:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:09:09 The meeting name has been set to 'openstack_ansible_meeting'
16:09:16 #topic bug triage
16:09:32 sorry for the delay with the meeting :(
16:09:53 o/ hello
16:10:08 Merged openstack/openstack-ansible-os_senlin master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/764650
16:10:09 Jonathan Rosser proposed openstack/openstack-ansible-tests stable/train: Apply OSA global-requirements-pins during functional tests https://review.opendev.org/c/openstack/openstack-ansible-tests/+/764976
16:10:37 I haven't done anything since last week regarding fixing bugs, unfortunately - had really a lot of internal things
16:10:46 Merged openstack/openstack-ansible-os_senlin master: Define condition for the first play host one time https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/764659
16:10:57 I think the only new bug was https://bugs.launchpad.net/openstack-ansible/+bug/1906108 and there's already a fix for it
16:10:59 Launchpad bug 1906108 in openstack-ansible "os-keystone-install with keystone-config and keystone-install tags failure" [Undecided,New] - Assigned to Siavash Sardari (siavash.sardari)
16:11:11 Merged openstack/openstack-ansible-os_adjutant master: Trigger uwsgi restart https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/764655
16:11:41 which is https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/764552
16:12:38 it's really interesting why launchpad does not get updated after the gerrit upgrade, but I think it's not the only issue here...
16:12:57 i think the new gerrit broke the integration with LP and storyboard
16:13:29 * jrosser waves to gshippey
16:14:04 👋
16:14:41 well, I think we don't have much discussion regarding bugs this week
16:14:49 #topic office hours
16:15:24 so regarding overall plans. thanks everyone for doing lots of reviews today
16:16:03 seems like a few gate failures, need some rechecks
16:16:15 and i think that the functional test jobs should be better now
16:16:15 jrosser: did you see my msg.. http://paste.openstack.org/show/800590/
16:16:35 Yeah functional looks good atm
16:16:51 I'm going to do branching this week, hopefully tomorrow once rechecks pass
16:17:09 and was thinking about doing a stable release 2 weeks after that
16:17:22 I don't think we have much to merge, except fixing our octavia
16:17:28 (for debian)
16:17:44 is that something understood or needs more investigating?
16:18:25 well I haven't spawned an aio yet
16:18:59 I hope I will have time on my hands during this week for investigation. I'm worried that this might be an upstream thing, as before the latest bump things were good
16:19:10 and debian uses py3.7 which is weird overall
16:19:46 good in the sense that it was failing for upgrade jobs but passing for debian ones
16:20:43 btw have you seen https://lists.ceph.io/hyperkitty/list/ceph-announce@ceph.io/thread/Y267KT2TQJ3VT7UQCC2ES4ZZV2OTL46P/ ?
16:21:08 omg
16:21:27 I'm wondering if that's what Siavash was talking about half an hour ago...
16:24:51 so branch will mean victoria rc this week?
16:24:58 well, I think I don't have much more to discuss, but would love to hear if anything needs attention from your perspective?
16:25:08 yep, exactly
16:25:12 ok good
16:25:28 well it is a bit unfortunate to see how much trouble folk have with magnum
16:25:45 oh, yes...
16:25:51 though our experience was the same and i'm not totally sure what can be done
16:26:04 * noonedeadpunk has the same
16:26:15 the AIO is too small to start a cluster really
16:26:37 well, we can probably do a job with 2 vms....
16:26:52 second one as a compute node...
16:28:13 but again - we don't run magnum jobs with bumps...
16:28:20 so we won't know that we just broke it
16:29:45 and not sure we can really add a task for each service during bumps...
16:30:13 no, that would be very difficult
16:30:44 is multinode a lot of work?
16:32:48 not sure.. I think not much.
16:32:59 but it seems we do test cluster creation....
16:33:04 https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_483/410681/3/gate/openstack-ansible-deploy-aio_metal-ubuntu-focal/4836457/logs/openstack/aio1-utility/stestr_results.html
16:33:35 https://github.com/openstack/magnum-tempest-plugin/blob/master/magnum_tempest_plugin/tests/api/v1/test_cluster.py
16:33:54 maybe the issue is somewhere in networking
16:34:07 since in aio we really do everything through the same mgmt?
16:36:01 ah, well
16:36:25 these are all negative tests
16:37:20 and we've blacklisted the only positive one, which is test_create_list_sign_delete_clusters
16:37:32 hmm
16:38:08 we really need a contributor for magnum stuff
16:38:22 it is a project of its own somehow
16:38:52 well we're using it here but tend not to touch it at the moment
16:39:13 I hope I will get hands on it in some future, but now in the nearest :(
16:39:41 *but not in the nearest
16:41:21 ok - i don't think i have anything much more
16:42:22 Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [DNM] Remove magnum tempest blacklists https://review.opendev.org/c/openstack/openstack-ansible/+/764986
16:42:28 However we're about to implement trove support, so it will get better docs at least (I hope)
16:42:41 there are a couple of small patches toward zun from andrewbonney which could do with review to keep that moving
16:43:27 we are close to some basic tempest tests passing for that and the kuryr people are helpful with fixing the networking
16:43:57 yeah, I realized I even submitted some bug to them one day and andrewbonney fixed it recently :)
16:45:35 Merged openstack/openstack-ansible stable/train: Remove git repo haproxy backend https://review.opendev.org/c/openstack/openstack-ansible/+/762209
16:45:52 Merged openstack/openstack-ansible stable/stein: Switch to stable/stein for EM https://review.opendev.org/c/openstack/openstack-ansible/+/761937
16:46:31 ah yes, merging the magnum docs is good
16:46:42 i think we should just continue to improve those as people get their stuff working
16:49:00 Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: [DNM] Test CI https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764987
16:49:17 Merged openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627
16:49:24 Yeah, it seems by far the best approach atm...
16:49:55 well, maybe tests will pass now for magnum without the blacklists
16:50:10 which will kind of test clusters at least somehow...
16:50:37 dmsimard: have you seen this before? http://paste.openstack.org/show/800592/
16:50:37 as otherwise even having several vms for testing, we would need to write some tests as well...
16:50:51 * dmsimard looks
16:51:26 jrosser: rarely, but yes
16:51:55 when I've seen it happen it was due to a prior playbook that crashed or was interrupted
16:52:09 running another playbook after that seemed to be fine
16:52:20 haven't seen it reproduce with mysql
16:52:22 i just grabbed that off here https://zuul.openstack.org/stream/8e41cd8233c249a798ccafcf8e69ad46?logfile=console.log
16:52:51 oh, and execution is stuck
16:52:52 as the console does seem to have jammed up and i'm not sure if it's related
16:53:33 i took a look at that because i've seen more timeouts than usual
16:55:12 uh, and need to find out where all these deprecations come from...
16:56:40 here's another from earlier, the same thing: https://zuul.opendev.org/t/openstack/build/51374ae7438040d3bc00078b77c03174/log/job-output.txt#8428
16:59:38 the last one is even weirder
17:00:01 it timed out after 1.5 hours?
17:00:46 the job started at 12:30 and timed out at 14:00
17:00:56 while we should have a 3h timeout....
17:01:10 #endmeeting