16:00:16 #startmeeting openstack_ansible_meeting
16:00:20 #topic rollcall
16:00:22 Meeting started Tue Mar 12 16:00:16 2019 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:25 The meeting name has been set to 'openstack_ansible_meeting'
16:00:48 o/
16:00:55 o/
16:01:03 cloudnull, DimGR, d34dh0r53, hughsaunders, b3rnard0, palendae, odyssey4me, serverascode, rromans, erikmwilson, mancdaz, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, scarlisle, luckyinva, ntt, javeriak, spotz, vdo, jmccrory, alextricity25, jasondotstar, admin0, michaelgugino, ametts, bgmccollum, darrenc, JRobinson__, colinmcnamara, thorst, adreznec, eil397,
16:01:03 qwang, nishpatwa_, cathrichardson, drifterza, hwoarang, cshen, ullbeking, mnaser, nicolasbock, jrosser, cjloader, antonym, dcdamien, jamesdenton, chandankumar
16:01:15 * mnaser always hates doing that mass ping
16:01:16 :P
16:01:22 o/
16:01:26 o/
16:01:34 It's getting smaller :(
16:01:39 o/
16:02:06 o/
16:02:24 s'ok
16:02:25 * cyberpear waves
16:02:29 there's still people out there.
16:03:09 #topic agenda bashing
16:03:33 because osa works nicely now, people use it and move forward with their life/platform and don't stick around.
16:03:45 admin0: are you saying... we need to break it?
16:03:46 :)
16:03:47 i wanted to talk about the upcoming ptg, final train cleanups, and rocky maintenance for suse
16:03:51 does anyone have any topics they want to bring up?
16:04:18 is this a good place to chat about core ansible openstack modules?
16:04:32 or strictly limited to the openstack-ansible playbooks/roles?
16:04:36 cyberpear: ah, some of us here are maintainers of those modules, we consume them heavily but we're not "the team that maintains it"
16:04:55 though i think i'm the only one around right now who's an actual maintainer inside the repo
16:05:07 i do, i've been working on the calico upgrades to enable queens (and later) + calico deployment. I'd like to get https://review.openstack.org/#/c/641483/ merged and then backport it (along with etcdv3 role tags) to rocky and queens
16:05:12 dunno if there is room right now to discuss this https://review.openstack.org/#/c/642614/
16:05:16 * cyberpear nods
16:05:46 ok great, we got a few things
16:05:49 let's get the quick ones out first
16:06:02 #topic calico upgrades for queens and above deployment
16:06:08 #link https://review.openstack.org/#/c/641483/
16:07:22 looks like the calico jobs are failing on that logan-
16:07:35 are the jobs broken or?
16:07:43 the current calico 2.6 + etcd2 deployment has not worked after pike in my testing. I've upgraded the etcd role to support v3, and now I've upgraded the calico deploy in os_neutron to support calico v3 (which requires etcd3). so in order to get that whole stack working on queens and later, I need to merge this in master, and then merge that change + a tag bump for the etcd role in the stable branches
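[Editor's note: the "tag bump for the etcd role" logan- describes above would be a pin in OSA's ansible-role-requirements.yml on each stable branch. A minimal sketch of such an entry, assuming the role is sourced from logan2211's ansible-etcd repository; the version shown is a placeholder, not an actual release tag:]

    - name: etcd
      scm: git
      src: https://github.com/logan2211/ansible-etcd
      # placeholder -- bump to whichever release tag carries the etcd v3 support
      version: "1.0.0"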
16:07:45 the calico jobs never really existed in os_neutron, I think someone took a stab and never completed them
16:08:21 I use the integrated repo to gate calico downstream, because the os_neutron tests don't run basicserverops afaik, so it doesn't build a vm, bind a floating ip, ping it, ssh to it, test metadata, etc
16:09:09 i've compiled the v3 changes from both my gating and also kmadac2's feedback on getting it working in Rocky on his environment
16:09:23 afaik we are the only folks using calico
16:10:28 honestly, i'm okay with merging this
16:10:42 no harm done if it doesn't work at all, and it doesn't seem to affect the tooling itself
16:11:01 jrosser, spotz: how does this sound to the other cores here?
16:11:24 i think there is enough stuff in the integrated repo to make a calico test
16:11:26 mnaser: Sounds good from logan-'s comments
16:11:35 the reworked scenario stuff should be just the ticket
16:11:45 jrosser: for sure. i don't mind doing that
16:12:07 and yes, it's great to get that all up to date, because having a proper etcd will be useful for other things too
16:13:37 yeah, and a calico integrated test would actually exercise the etcd cluster deploy too
16:17:04 cr+2 from my side
16:17:55 jrosser: wanna +2?
16:18:26 it is done
16:18:32 wonderful
16:18:36 thanks logan-!
16:18:38 thanks!
16:18:57 #topic mistral in stable/rocky
16:19:00 #link https://review.openstack.org/#/c/642614/
16:19:22 so .. this is an interesting thing because we're technically adding a feature. however, it's very isolated
16:19:51 so it shouldn't affect anything else within OSA
16:20:26 also, afaik, openstack-ansible doesn't report as stable within governance, which means we're not stopped from backporting features
16:20:37 (i.e. we were gonna backport nspawn support to rocky at some point too)
16:22:30 would it be the case to port the mistral role to rocky too?
16:22:36 it won't make a difference anyway
16:25:13 * mnaser looks around
16:27:58 i left a comment that mistral didn't have a rocky branch
16:28:02 can we handle that?
16:28:22 os_mistral i mean
16:29:22 i don't mind it, i don't think we gate mistral or intend to, so it won't affect gates, and it won't affect anyone who doesn't deploy it. i think we would just call it experimental status in rocky
16:29:33 we could make a branch off master
16:30:24 the only goal of creating a rocky branch would be to keep openstack_services and ansible-role-requirements consistent?
16:30:35 guilhermesp: i believe so
16:31:05 yeah, unless the release tooling can deal with it being a different branch - we do something like that for gnocchi?
16:31:17 yeah, 'cos the code will remain the same, so from my point of view it's just a consistency matter
16:31:24 * jrosser looks for evrardjp
16:32:15 if we get that sorted out then i'm ok with experimental support in rocky
16:33:17 we don't have to release for stable/rocky. afaik, our roles are all 'independent' status now
16:33:22 therefore, they are not tied to the openstack release cycle
16:33:23 let me verify
16:33:24 jrosser: ?
16:33:38 sorry I am still travelling
16:33:48 evrardjp: release tooling for adding mistral to rocky, in the absence of the role having a rocky branch
16:33:54 is that do-able?
16:34:07 why would you want to do that?
16:34:07 not having a rocky branch because it is a new role
16:34:08 :p
16:34:14 mnaser: ^ :)
16:34:30 all is possible :)
16:34:38 I just need to set up the trackbranch
16:34:45 which I just started to get done
16:34:57 I have a few patches up that would allow that with no issues
16:35:06 (I suppose you want to track master in stable/rocky for that role)
16:35:20 alternatively we can create the branches manually on that role if necessary
16:35:33 i think just to make life less confusing, we can branch stable/rocky manually
16:35:37 I just don't know what you are asking yet, I haven't followed the conversation
16:36:07 evrardjp: https://review.openstack.org/#/c/642614/
16:36:07 let's just branch stable/rocky for mistral and guilhermesp will make the appropriate changes to stable/rocky
16:36:13 technically we never added new roles after feature freeze, but it is up to ppl
16:37:14 I am about to enter my next flight, is there something I need to urgently do?
16:37:33 i don't think so
16:37:39 thanks evrardjp -- safe flights
16:37:39 I agree with jrosser's review though :)
16:37:42 mnaser: thanks
16:37:43 safe flights evrardjp
16:38:59 ok we'll follow up on this
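[Editor's note: the ansible-role-requirements consistency guilhermesp raises above would amount to adding an entry like the following to the stable/rocky integrated repo once os_mistral gains its branch. A sketch only; the src URL assumes the git.openstack.org layout in use at the time:]

    - name: os_mistral
      scm: git
      src: https://git.openstack.org/openstack/openstack-ansible-os_mistral
      # track the newly created stable branch of the role
      version: stable/rocky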
16:39:14 #topic upcoming ptg
16:39:17 who's going to be there?
16:39:39 https://etherpad.openstack.org/p/osa-train-ptg
16:39:58 yeah it's just a kick-off ^
16:39:59 yes - just need to actually book it
16:40:12 yay cool
16:40:19 wanna add yourself to the list jrosser?
16:41:13 if anyone has topics to put on there, please put them in
16:42:06 ok, just sent to the ML too
16:43:58 we should also talk about the heat stuff if there is time
16:46:17 jrosser: sounds like a good topic to put down as well.
16:46:25 oh i meant now :)
16:46:30 oh.
16:46:31 yeah
16:46:31 sure :D
16:46:43 #topic heat keystone URLs
16:46:55 so, have you had much progress since the conversation a few days ago?
16:47:08 so thanks to gshippey we have this https://review.openstack.org/642812
16:47:44 which is basically a starting point that makes things work, i'm sure that the conf var name and stuff will need changing but it's good enough to fix things
16:48:42 and there is a bunch of other fall-out from this too
16:49:12 i would encourage gshippey to ask the heat team how to scope these changes so they're backportable
16:49:12 we don't have a means of distributing CA certs into hosts/containers, so https://review.openstack.org/#/c/641445/
16:49:33 yes, it is nicely minimal and we didn't change any behaviour, so hopefully that is do-able
16:49:46 sometimes there are just policy things
16:49:55 like maybe needing to split the change in a different way to make it happen
16:50:15 yes, will follow up - there are enough bug reports about this that I hope it can be backported
16:51:05 so there is progress, but we do have a bunch of folks who are broken
16:51:33 i'd like to try to get this all sorted as a priority, because it predominantly breaks POC setups with self-signed certs
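[Editor's note: the fix under discussion is about pointing heat's client-facing keystone URL at an endpoint that instances can actually reach. A minimal sketch of the kind of deployer override involved, using OSA's generic config-override mechanism; the dedicated variable introduced by https://review.openstack.org/642812 may be named differently:]

    # user_variables.yml -- illustrative only
    heat_heat_conf_overrides:
      clients_keystone:
        auth_uri: "https://{{ external_lb_vip_address }}:5000"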
16:56:46 aehm, can I quickly ask a question before closing?
16:58:40 jrosser: i'm with you
16:58:45 tosky: sure.
17:00:15 I failed to realize that the os_sahara jobs have been failing for a while; I tested it locally and didn't find the reason, but it seemed like fixing the haproxy config would have fixed the issue
17:00:28 hence https://review.openstack.org/642787 , but the testing didn't work well: https://review.openstack.org/642041
17:01:10 so do you have any idea whether anything happened in the haproxy settings? We had a few changes in sahara, but I'm not sure how it could have worked before
17:01:48 I also have some trouble analyzing the issue on the gates, as I'm missing various config and log files (including the haproxy ones, unless they are there and I'm really unable to find them)
17:02:42 hmm
17:02:48 and that's it; I will continue to investigate, but if any of you has any suggestions or hints off the top of their heads, that would be much appreciated!
17:02:50 HTTPConnectionPool(host='127.0.0.1', port=8386) . <- that doesn't look right
17:03:35 the connection reaches the haproxy, but then it fails after that
17:03:44 where'd you find that jrosser
17:03:50 tosky: we listen on the vip ip, not 127.0.0.1
17:03:56 http://logs.openstack.org/41/642041/2/check/openstack-ansible-functional-ubuntu-bionic/a29d2dc/logs/openstack/infra1/stestr_results.html
17:04:11 ok so i'm gonna bet
17:04:17 it's trying to create it with 127.0.0.1 in the keystone endpoint
17:04:22 instead of using the vip in role tests
17:05:08 https://opendev.org/openstack/openstack-ansible-os_sahara/src/branch/master/tests/os_sahara-overrides.yml vs https://opendev.org/openstack/openstack-ansible-os_mistral/src/branch/master/tests/os_mistral-overrides.yml tosky
17:05:16 specifically https://opendev.org/openstack/openstack-ansible-os_mistral/src/branch/master/tests/os_mistral-overrides.yml#L21-L23
17:05:22 i think that will get you where you want to get :)
17:05:22 I deployed locally and even an `openstack dataprocessing plugin list` was failing
17:05:30 until I fixed the haproxy settings
17:05:39 could it be a combination of two different issues?
17:05:55 possibly, but we know that's one at least
17:06:53 https://opendev.org/openstack/openstack-ansible-os_sahara/src/branch/master/defaults/main.yml#L128
17:07:15 shouldn't those values be good enough too?
17:07:27 when running role tests, these aren't the same as running integrated tests
17:07:33 yes, it's super confusing and we're killing that :)
17:07:37 oh
17:08:13 what is the difference, and when is one used vs the other? Even an RTFM link would be enough :)
17:08:26 the role tests are us just manually running them (i.e. running ./run_tests.sh inside the repo)
17:08:32 the integrated is like a full openstack deployment
17:08:39 we're going to move our testing so that everything uses integrated
17:08:47 integrated uses the role to deploy, role tests use the playbooks inside tests/
17:09:16 so is tests/os_sahara-overrides.yml used only when testing manually, but not by the gate jobs?
17:09:21 or the other way round?
17:09:47 they are used only in the role jobs (i.e. when running ./run_tests.sh -- aka the gates too)
17:11:27 oh, so would it be easier for me to fix tests/os_sahara-overrides.yml, or to make sure that tests/os_sahara-overrides.yml is not needed anymore because everything uses integrated tests?
17:11:34 (if I got it correctly)
17:13:12 tosky: you'll have to fix os_sahara-overrides .. and fix integrated :(
17:14:22 so... which zuul job is running which kind of tests?
17:14:47 do the -functional jobs use the role tests / run_tests.sh?
17:16:01 the zuul jobs use ./run_tests.sh
17:16:10 #endmeeting
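[Editor's note: the fix mnaser points tosky at above would mirror the os_mistral test overrides, i.e. binding the sahara endpoints to the test VIP instead of 127.0.0.1. A sketch of what tests/os_sahara-overrides.yml might gain; the variable names are assumptions modelled on other os_* roles and have not been checked against os_sahara's defaults:]

    # tests/os_sahara-overrides.yml -- hypothetical additions
    sahara_service_publicuri: "http://{{ external_lb_vip_address }}:8386"
    sahara_service_adminuri: "http://{{ internal_lb_vip_address }}:8386"
    sahara_service_internaluri: "http://{{ internal_lb_vip_address }}:8386"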