16:00:16 <mnaser> #startmeeting openstack_ansible_meeting
16:00:20 <mnaser> #topic rollcall
16:00:22 <openstack> Meeting started Tue Mar 12 16:00:16 2019 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:25 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:00:48 <mnaser> o/
16:00:55 <guilhermesp> o/
16:01:03 <mnaser> cloudnull, DimGR, d34dh0r53, hughsaunders, b3rnard0, palendae, odyssey4me, serverascode, rromans, erikmwilson, mancdaz, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, scarlisle, luckyinva, ntt, javeriak, spotz, vdo, jmccrory, alextricity25, jasondotstar, admin0, michaelgugino, ametts, bgmccollum, darrenc, JRobinson__, colinmcnamara, thorst, adreznec, eil397,
16:01:03 <mnaser> qwang,nishpatwa_, cathrichardson, drifterza, hwoarang, cshen, ullbeking, mnaser, nicolasbock, jrosser, cjloader, antonym, dcdamien, jamesdenton, chandankumar
16:01:15 * mnaser always hates doing that mass ping
16:01:16 <mnaser> :P
16:01:22 <spotz> o/
16:01:26 <logan-> o/
16:01:34 <spotz> It's getting smaller:(
16:01:39 <admin0> o/
16:02:06 <jrosser> o/
16:02:24 <mnaser> s'ok
16:02:25 * cyberpear waves
16:02:29 <mnaser> there's still people out there.
16:03:09 <mnaser> #topic agenda bashing
16:03:33 <admin0> because osa works nicely now, people use it and move forward with their life/platform and don't stick around.
16:03:45 <mnaser> admin0: are you saying... we need to break it?
16:03:46 <mnaser> :)
16:03:47 <mnaser> i wanted to talk about upcoming ptg, final train cleanups, rocky maintenance for suse
16:03:51 <mnaser> anyone has any topics they want to bring up?
16:04:18 <cyberpear> is this a good place to chat about core ansible openstack modules?
16:04:32 <cyberpear> or strictly limited to the openstack-ansible playbooks/roles?
16:04:36 <mnaser> cyberpear: ah, some of us here are maintainers of those modules, we consume them heavily but we're not "the team that maintains it"
16:04:55 <mnaser> though i think i'm the only one around right now who's an actual maintainer inside the repo
16:05:07 <logan-> i do, i've been working on the calico upgrades to enable queens (and later) + calico deployment. I'd like to get https://review.openstack.org/#/c/641483/ merged and then backport it (along with etcdv3 role tags) to rocky and queens
16:05:12 <guilhermesp> dunno if there is room right now to discuss this https://review.openstack.org/#/c/642614/
16:05:16 * cyberpear nods
16:05:46 <mnaser> ok great, we got a few things
16:05:49 <mnaser> lets get the quick ones out first
16:06:02 <mnaser> #topic calico upgrades for queens and above deployment
16:06:08 <mnaser> #link https://review.openstack.org/#/c/641483/
16:07:22 <mnaser> looks like calico jobs are failing on that logan-
16:07:35 <mnaser> are the jobs broken or?
16:07:43 <logan-> the current calico 2.6 + etcd2 deployment has not worked after pike in my testing. I've upgraded the etcd role to support v3, and now I've upgraded the calico deploy in os_neutron to support calico v3 (requires etcd3). so in order to get that whole stack working on queens and later, I need to merge this in master, and then merge that change + a tag bump for the etcd role in the stable branches
16:07:45 <logan-> the calico jobs never really existed in os_neutron, I think someone took a stab and never completed them
16:08:21 <logan-> I use the integrated repo to gate calico downstream, because the os_neutron tests don't run basicserverops afaik so it doesn't build a vm, bind a floating ip, ping it, ssh to it, test metadata, etc
16:09:09 <logan-> i've compiled the v3 changes from both my gating and also kmadac2's feedback on getting it working in Rocky on his environment
16:09:23 <logan-> afaik we are the only folks using calico
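For context, a backport like this typically lands the role change on the stable branch and then bumps the role pin in the integrated repo's ansible-role-requirements.yml. A minimal sketch, assuming the usual OSA role-pin layout (the src URL and version tag below are placeholders, not the values from the actual backport):

    # ansible-role-requirements.yml (stable branch) - illustrative pin bump only
    - name: etcd
      scm: git
      src: https://example.org/ansible-etcd   # placeholder URL, not the real role source
      version: "x.y.z"                        # placeholder tag that would include the etcdv3 support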
16:10:28 <mnaser> honestly, i'm okay with merging with
16:10:32 <mnaser> merging this
16:10:42 <mnaser> no harm done even if it doesn't work at all, since it doesn't seem to affect the tooling itself
16:11:01 <mnaser> jrosser, spotz: how does this sound as other cores here?
16:11:24 <jrosser> i think there is enough stuff in the integrated repo to make a calico test
16:11:26 <spotz> mnaser: Sounds good from logan-'s comments
16:11:35 <jrosser> the reworked scenario stuff should be just the ticket
16:11:45 <logan-> jrosser: for sure. i dont mind doing that
16:12:07 <jrosser> and yes, it's great to get that all up to date because having a proper etcd will be useful for other things too
16:13:37 <logan-> yeah and a calico integrated test would actually exercise the etcd cluster deploy too
16:17:04 <mnaser> cr+2 from my side
16:17:55 <mnaser> jrosser: wanna +2 ?
16:18:26 <jrosser> it is done
16:18:32 <mnaser> wonderful
16:18:36 <mnaser> thanks logan-!
16:18:38 <logan-> thanks!
16:18:57 <mnaser> #topic mistral in stable/rocky
16:19:00 <mnaser> #link https://review.openstack.org/#/c/642614/
16:19:22 <mnaser> so .. this is an interesting thing because we're technically adding a feature.  however, it's very isolated
16:19:51 <mnaser> so it shouldn't affect anything else within OSA
16:20:26 <mnaser> also, afaik, openstackansible doesn't report as stable within governance which means we're not stopped from backporting features
16:20:37 <mnaser> (i.e. we were gonna backport nspawn support at some point too to rocky)
16:22:30 <guilhermesp> would it be the case to port the mistral role to rocky too?
16:22:36 <guilhermesp> it won't make a difference anyways
16:25:13 * mnaser looks around
16:27:58 <jrosser> i left a comment that mistral didnt have a rocky branch
16:28:02 <jrosser> can we handle that?
16:28:22 <jrosser> os_mistral i mean
16:29:22 <logan-> i don't mind it, i don't think we gate mistral or intend to, so it won't affect gates, and it won't affect anyone who doesn't deploy it. i think we would just call it experimental status in rocky
16:29:33 <mnaser> we could make a branch off master
16:30:24 <guilhermesp> the only goal of creating a rocky branch would be to keep openstack_services and ansible-role-requirements consistent?
16:30:35 <mnaser> guilhermesp: i believe so
16:31:05 <jrosser> yeah, unless the release tooling can deal with it being a different branch - we do something like that for gnocchi?
16:31:17 <guilhermesp> yeah, because the code will remain the same, so from my point of view it's just a consistency matter
16:31:24 * jrosser looks for evrardjp
16:32:15 <jrosser> if we get that sorted out then i'm ok with experimental support in rocky
16:33:17 <mnaser> we don't have to release for stable/rocky.  afaik, our roles are all 'independent' status now
16:33:22 <mnaser> therefore, they are not tied to the openstack release cycle
16:33:23 <mnaser> let me verify
16:33:24 <evrardjp> jrosser: ?
16:33:38 <evrardjp> sorry I am still travelling
16:33:48 <jrosser> evrardjp: release tooling for adding mistral to rocky, in the absence of the role having a rocky branch
16:33:54 <jrosser> is that do-able?
16:34:07 <evrardjp> why would you want to do that?
16:34:07 <jrosser> not having a rocky branch because it is a new role
16:34:08 <evrardjp> :p
16:34:14 <jrosser> mnaser: ^ :)
16:34:30 <evrardjp> all is possible :)
16:34:38 <evrardjp> I just need to setup the trackbranch
16:34:45 <evrardjp> which I just started to get done
16:34:57 <evrardjp> I have a few patches up that would allow that with no issues
16:35:06 <evrardjp> (I suppose you want to track master in stable/rocky for that role)
16:35:20 <evrardjp> alternatively we can create the branches manually on that role if necessary
16:35:33 <mnaser> i think just to make life less confusing, we can branch stable/rocky manually
16:35:37 <evrardjp> I just don't know what you are asking yet, I haven't followed the conversation
16:36:07 <guilhermesp> evrardjp: https://review.openstack.org/#/c/642614/
16:36:07 <mnaser> let's just branch stable/rocky for mistral and guilhermesp will make appropriate changes to stable/rocky
16:36:13 <evrardjp> technically we never added new roles after feature freeze but it is up to ppl
16:37:14 <evrardjp> I am about to enter my next flight, is there something I need to urgently do?
16:37:33 <mnaser> i don't think so
16:37:39 <mnaser> thanks evrardjp --  safe flights
16:37:39 <evrardjp> I agree with jrosser's review though :)
16:37:42 <evrardjp> mnaser: thanks
16:37:43 <guilhermesp> safe flights evrardjp
16:38:59 <mnaser> ok we'll follow up on this
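A rough sketch of what the stable/rocky follow-up could look like once the branch exists, assuming the role gets pinned the same way other roles are in ansible-role-requirements.yml (the exact entry here is illustrative, not the merged change):

    # stable/rocky ansible-role-requirements.yml - illustrative addition
    - name: os_mistral
      scm: git
      src: https://git.openstack.org/openstack/openstack-ansible-os_mistral
      version: stable/rocky   # assumes the branch is cut manually as agreed above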
16:39:14 <mnaser> #topic upcoming ptg
16:39:17 <mnaser> who's going to be there?
16:39:39 <mnaser> https://etherpad.openstack.org/p/osa-train-ptg
16:39:58 <guilhermesp> yeah, I just did a kick-off ^
16:39:59 <jrosser> yes - just need to actually book it
16:40:12 <mnaser> yay cool
16:40:19 <mnaser> wanna add yourself to the list jrosser ?
16:41:13 <mnaser> if anyone has topics to put on there, please put it in
16:42:06 <mnaser> ok just sent to ML too
16:43:58 <jrosser> we should also talk about the heat stuff if there is time
16:46:17 <mnaser> jrosser: sounds like a good topic to put down as well.
16:46:25 <jrosser> oh i meant now :)
16:46:30 <mnaser> oh.
16:46:31 <mnaser> yeah
16:46:31 <mnaser> sure :D
16:46:43 <mnaser> #topic heat keystone URLs
16:46:55 <mnaser> so, have you had much progress since the conversation a few days ago?
16:47:08 <jrosser> so thanks to gshippey we have this https://review.openstack.org/642812
16:47:44 <jrosser> which is basically a starting point that makes things work, i'm sure that the conf var name and stuff will need changing but it's good enough to fix things
16:48:42 <jrosser> and there is a bunch of other fall-out from this too
16:49:12 <mnaser> i would encourage gshippey to ask the heat team to make sure how to scope these changes to be backportable
16:49:12 <jrosser> we don't have a means of distributing CA certs into hosts/containers, so https://review.openstack.org/#/c/641445/
16:49:33 <jrosser> yes, it is nicely minimal and we didn't change any behaviour, so hopefully that is do-able
16:49:46 <mnaser> sometimes there are just policy things
16:49:55 <mnaser> like maybe needing to split the change in a different way to make it happen
16:50:15 <jrosser> yes, will follow up - there are enough bug reports about this that I hope it can be backported
16:51:05 <jrosser> so there is progress, but we do have a bunch of folks who are broken
16:51:33 <jrosser> i'd like to try to get this all sorted as a priority because it predominantly breaks POC setups with self-signed certs
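Until the heat-side fix lands, a hedged workaround sketch for affected deployers, assuming the os_heat role exposes the usual heat_heat_conf_overrides hook and that heat honours a CA bundle in its [clients] section (the variable name, option, and path are assumptions, not the change proposed in review 642812):

    # user_variables.yml - workaround sketch only, not the proposed fix
    heat_heat_conf_overrides:
      clients:
        ca_file: /etc/ssl/certs/deployment-ca.pem   # hypothetical CA bundle path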
16:56:46 <tosky> aehm, can I quickly ask a question before closing?
16:58:40 <mnaser> jrosser: im with you
16:58:45 <mnaser> tosky: sure.
17:00:15 <tosky> I failed to realize that os_sahara jobs have been failing for a while; I tested it locally, I didn't find the reason, but it seemed like fixing the haproxy config would have fixed the issue
17:00:28 <tosky> hence https://review.openstack.org/642787 , but the testing didn't work well: https://review.openstack.org/642041
17:01:10 <tosky> so do you have any idea whether anything happened in the haproxy settings? We had a few changes in sahara but I'm not sure how it could have worked before
17:01:48 <tosky> I also have some trouble analyzing the issue on the gates as I'm missing various config and log files (including the haproxy ones, unless they are there and I'm really unable to find them)
17:02:42 <mnaser> hmm
17:02:48 <tosky> and that's it; I will continue to investigate, but if any of you has any suggestions or hints off the top of their heads, that would be much appreciated!
17:02:50 <jrosser> HTTPConnectionPool(host='127.0.0.1', port=8386) <- that doesn't look right
17:03:35 <tosky> the connection reaches the haproxy, but then it fails after that
17:03:44 <mnaser> where'd you find that, jrosser?
17:03:50 <mnaser> tosky: we listen on the VIP, not 127.0.0.1
17:03:56 <jrosser> http://logs.openstack.org/41/642041/2/check/openstack-ansible-functional-ubuntu-bionic/a29d2dc/logs/openstack/infra1/stestr_results.html
17:04:11 <mnaser> ok so im gonna bet
17:04:17 <mnaser> its trying to create it with 127.0.0.1 in the keystone endpoint
17:04:22 <mnaser> instead of using the vip in role tests
17:05:08 <mnaser> https://opendev.org/openstack/openstack-ansible-os_sahara/src/branch/master/tests/os_sahara-overrides.yml vs https://opendev.org/openstack/openstack-ansible-os_mistral/src/branch/master/tests/os_mistral-overrides.yml tosky
17:05:16 <mnaser> specifically https://opendev.org/openstack/openstack-ansible-os_mistral/src/branch/master/tests/os_mistral-overrides.yml#L21-L23
17:05:22 <mnaser> i think that will get you where you want to get :)
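Roughly what such an override tends to look like in a role's tests/ overrides file, pointing the service endpoints at the haproxy VIP instead of 127.0.0.1 (the variable names and VIP expression are illustrative, not the actual contents of the mistral or sahara files):

    # tests/os_sahara-overrides.yml - illustrative sketch only
    sahara_service_publicuri: "http://{{ external_lb_vip_address }}:8386"
    sahara_service_internaluri: "http://{{ external_lb_vip_address }}:8386"
    sahara_service_adminuri: "http://{{ external_lb_vip_address }}:8386"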
17:05:22 <tosky> I deployed locally and even an `openstack dataprocessing plugin list` was failing
17:05:30 <tosky> until I fixed the haproxy settings
17:05:39 <tosky> could it be a combination of two different issues?
17:05:55 <mnaser> possibly, but we know that's one at least
17:06:53 <tosky> https://opendev.org/openstack/openstack-ansible-os_sahara/src/branch/master/defaults/main.yml#L128
17:07:15 <tosky> shouldn't those values be good enough too?
17:07:27 <mnaser> when running role tests, these aren't the same as running integrated tests
17:07:33 <mnaser> yes, it's super confusing and we're killing that :)
17:07:37 <tosky> oh
17:08:13 <tosky> what is the difference and when one is used vs the other? Even an RTFM link would be enough :)
17:08:26 <mnaser> the role tests are just us manually running them (i.e. running ./run_tests.sh inside the repo)
17:08:32 <mnaser> the integrated is like a full openstack deployment
17:08:39 <mnaser> we're going to move our testing so that everything uses integrated
17:08:47 <mnaser> integrated uses the role to deploy, role tests use the playbooks inside test/
17:09:16 <tosky> so is tests/os_sahara-overrides.yml used only when testing manually, but not by the gate jobs?
17:09:21 <tosky> or the other way round?
17:09:47 <mnaser> they are used only in the role jobs (i.e. when running ./run_tests.sh -- aka gates too)
17:11:27 <tosky> oh, so would it be easier for me to fix tests/os_sahara-overrides.yml, or make sure that tests/os_sahara-overrides.yml is not needed anymore because everything uses integrated tests?
17:11:34 <tosky> (if I got it correctly)
17:13:12 <mnaser> tosky: you'll have to fix os_sahara-overrides .. and fix integrated :(
17:14:22 <tosky> so... which zuul job is running which kind of tests?
17:14:47 <tosky> do the -functional jobs use the role tests/run_tests.sh?
17:16:01 <mnaser> zuul job uses ./run_tests.sh
17:16:10 <mnaser> #endmeeting