16:09:05 <noonedeadpunk> #startmeeting openstack_ansible_meeting
16:09:06 <openstack> Meeting started Tue Dec 1 16:09:05 2020 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:09:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:09:09 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:09:16 <noonedeadpunk> #topic bug triage
16:09:32 <noonedeadpunk> sorry for the delay with the meeting :(
16:09:53 <jrosser> o/ hello
16:10:08 <openstackgerrit> Merged openstack/openstack-ansible-os_senlin master: Reduce number of processes on small systems https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/764650
16:10:09 <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-tests stable/train: Apply OSA global-requirements-pins during functional tests https://review.opendev.org/c/openstack/openstack-ansible-tests/+/764976
16:10:37 <noonedeadpunk> I haven't done anything since last week regarding fixing bugs, unfortunately - had a lot of internal things
16:10:46 <openstackgerrit> Merged openstack/openstack-ansible-os_senlin master: Define condition for the first play host one time https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/764659
16:10:57 <noonedeadpunk> I think the only new bug was https://bugs.launchpad.net/openstack-ansible/+bug/1906108 and there's already a fix for it
16:10:59 <openstack> Launchpad bug 1906108 in openstack-ansible "os-keystone-install with keystone-config and keystone-install tags failure" [Undecided,New] - Assigned to Siavash Sardari (siavash.sardari)
16:11:11 <openstackgerrit> Merged openstack/openstack-ansible-os_adjutant master: Trigger uwsgi restart https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/764655
16:11:41 <noonedeadpunk> which is https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/764552
16:12:38 <noonedeadpunk> it's really interesting why launchpad does not get updated after the gerrit upgrade, but I think that's not the only issue here...
16:12:57 <jrosser> i think the new gerrit broke the integration with LP and storyboard
16:13:29 * jrosser waves to gshippey
16:14:04 <gshippey> 👋
16:14:41 <noonedeadpunk> well, I think we don't have much discussion regarding bugs this week
16:14:49 <noonedeadpunk> #topic office hours
16:15:24 <noonedeadpunk> so regarding overall plans. thanks everyone for doing lots of reviews today
16:16:03 <jrosser> seems like a few gate failures, need some rechecks
16:16:15 <jrosser> and i think that the functional test jobs should be better now
16:16:15 <MickyMan77> jrosser: did you see my msg.. http://paste.openstack.org/show/800590/
16:16:35 <noonedeadpunk> Yeah, functional looks good atm
16:16:51 <noonedeadpunk> I'm going to do branching this week, hopefully tomorrow once rechecks pass
16:17:09 <noonedeadpunk> and was thinking about doing a stable release 2 weeks after that
16:17:22 <noonedeadpunk> I don't think we have much to merge, except fixing octavia
16:17:28 <noonedeadpunk> (for debian)
16:17:44 <jrosser> is that something understood or does it need more investigating?
16:18:25 <noonedeadpunk> well I haven't spawned an aio yet
16:18:59 <noonedeadpunk> I hope I will have time on my hands during this week for investigation. I'm worried that this might be an upstream thing, as before the latest bump things were good
16:19:10 <noonedeadpunk> and debian uses py3.7 which is weird overall
16:19:46 <noonedeadpunk> good in the sense that it was failing for upgrade jobs but passing for debian ones
16:20:43 <noonedeadpunk> btw have you seen https://lists.ceph.io/hyperkitty/list/ceph-announce@ceph.io/thread/Y267KT2TQJ3VT7UQCC2ES4ZZV2OTL46P/ ?
16:21:08 <jrosser> omg
16:21:27 <noonedeadpunk> I'm wondering if that's what Siavash was talking about half an hour ago...
16:24:51 <jrosser> so branching will mean a victoria rc this week?
16:24:58 <noonedeadpunk> well, I think I don't have many more things to discuss, but I'd love to hear if anything needs attention from your perspective?
16:25:08 <noonedeadpunk> yep, exactly
16:25:12 <jrosser> ok good
16:25:28 <jrosser> well it is a bit unfortunate to see how much trouble folk have with magnum
16:25:45 <noonedeadpunk> oh, yes...
16:25:51 <jrosser> though our experience was the same and i'm not totally sure what can be done
16:26:04 * noonedeadpunk has the same
16:26:15 <jrosser> the AIO is too small to really start a cluster
16:26:37 <noonedeadpunk> well, we can probably do a job with 2 vms....
16:26:52 <noonedeadpunk> the second one as a compute node...
16:28:13 <noonedeadpunk> but again - we don't run magnum jobs with bumps...
16:28:20 <noonedeadpunk> so we won't know that we've just broken it
16:29:45 <noonedeadpunk> and not sure we can really add a task for each service during bumps...
16:30:13 <jrosser> no, that would be very difficult
16:30:44 <jrosser> is multinode a lot of work?
16:32:48 <noonedeadpunk> not sure.. I think not much.
16:32:59 <noonedeadpunk> but it seems we do test cluster creation....
16:33:04 <noonedeadpunk> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_483/410681/3/gate/openstack-ansible-deploy-aio_metal-ubuntu-focal/4836457/logs/openstack/aio1-utility/stestr_results.html
16:33:35 <noonedeadpunk> https://github.com/openstack/magnum-tempest-plugin/blob/master/magnum_tempest_plugin/tests/api/v1/test_cluster.py
16:33:54 <noonedeadpunk> maybe the issue is somewhere in networking
16:34:07 <noonedeadpunk> since in aio we really do everything through the same mgmt network?
16:36:01 <noonedeadpunk> ah, well
16:36:25 <noonedeadpunk> these are all negative tests
16:37:20 <noonedeadpunk> and we've blacklisted the only positive one, which is test_create_list_sign_delete_clusters
16:37:32 <jrosser> hmm
16:38:08 <jrosser> we really need a contributor for magnum stuff
16:38:22 <jrosser> it is a project of its own somehow
16:38:52 <noonedeadpunk> well we're using it here but tend not to touch it at the moment
16:39:13 <noonedeadpunk> I hope I will get my hands on it at some point, but not in the nearest future :(
16:41:21 <jrosser> ok - i don't think i have anything much more
16:42:22 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [DNM] Remove magnum tempest blacklists https://review.opendev.org/c/openstack/openstack-ansible/+/764986
16:42:28 <noonedeadpunk> However we're about to implement trove support, so it will get better docs at least (I hope)
16:42:41 <jrosser> there are a couple of small patches toward zun from andrewbonney which could do with review to keep that moving
16:43:27 <jrosser> we are close to some basic tempest tests passing for that and the kuryr people are helpful with fixing the networking
16:43:57 <noonedeadpunk> yeah, I realized I even submitted a bug to them one day and andrewbonney fixed it lately :)
16:45:35 <openstackgerrit> Merged openstack/openstack-ansible stable/train: Remove git repo haproxy backend https://review.opendev.org/c/openstack/openstack-ansible/+/762209
16:45:52 <openstackgerrit> Merged openstack/openstack-ansible stable/stein: Switch to stable/stein for EM https://review.opendev.org/c/openstack/openstack-ansible/+/761937
16:46:31 <jrosser> ah yes, merging the magnum docs is good
16:46:42 <jrosser> i think we should just continue to improve those as people get their stuff working
16:49:00 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: [DNM] Test CI https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764987
16:49:17 <openstackgerrit> Merged openstack/openstack-ansible-os_magnum master: Add docs for suggested cluster template and debugging hints https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/764627
16:49:24 <noonedeadpunk> Yeah, it seems by far the best approach atm...
16:49:55 <noonedeadpunk> well, maybe tests will pass now for magnum without the blacklists
16:50:10 <noonedeadpunk> which will kind of test clusters at least somehow...
16:50:37 <jrosser> dmsimard: have you seen this before? http://paste.openstack.org/show/800592/
16:50:37 <noonedeadpunk> as otherwise, even having several vms for testing, we would need to write some tests as well...
16:50:51 * dmsimard looks
16:51:26 <dmsimard> jrosser: rarely, but yes
16:51:55 <dmsimard> when I've seen it happen, it was because a prior playbook crashed or was interrupted
16:52:09 <dmsimard> running another playbook after that seemed to be fine
16:52:20 <dmsimard> haven't seen it reproduce with mysql
16:52:22 <jrosser> i just grabbed that off here https://zuul.openstack.org/stream/8e41cd8233c249a798ccafcf8e69ad46?logfile=console.log
16:52:51 <noonedeadpunk> oh, and execution got stuck
16:52:52 <jrosser> as the console does seem to have jammed up and i'm not sure if it's related
16:53:33 <jrosser> i took a look at that because i've seen more timeouts than usual
16:55:12 <noonedeadpunk> uh, and we need to find out where all these deprecations come from...
16:56:40 <jrosser> here's another of the same from earlier https://zuul.opendev.org/t/openstack/build/51374ae7438040d3bc00078b77c03174/log/job-output.txt#8428
16:59:38 <noonedeadpunk> the last one is even weirder
17:00:01 <noonedeadpunk> it timed out after 1.5 hours?
17:00:46 <noonedeadpunk> the job started at 12:30 and timed out at 14:00
17:00:56 <noonedeadpunk> while we should have a 3h timeout....
17:01:10 <noonedeadpunk> #endmeeting