16:00:01 <mnaser> #startmeeting openstack_ansible_meeting
16:00:02 <openstack> Meeting started Tue Jan  8 16:00:01 2019 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:00:05 <mnaser> #topic rollcall
16:00:06 <mnaser> o/
16:00:28 <jamesdenton> o/
16:00:32 <odyssey4me> o/
16:00:34 <evrardjp> o/
16:00:36 <spotz> o/
16:00:52 <mattt> o/
16:01:16 <chandankumar> \o/
16:01:33 <mnaser> does anyone have any highlights from the last weeks to bring up? i know we've missed 2 weeks because of holidays :)
16:01:35 <cloudnull> +/- /o
16:01:45 <odyssey4me> holidays were a highlight :)
16:01:54 <guilhermesp> o/
16:03:05 <mnaser> that's the best highlight
16:03:12 <mnaser> #topic Bug triage
16:03:13 <mnaser> catch up time
16:03:22 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1810584
16:03:22 <openstack> Launchpad bug 1810584 in openstack-ansible "openstack-ansible setup-hosts.yml fails in task: lxc_hosts Ensure image has been pre-staged" [Undecided,New]
16:04:10 <mnaser> i can curl that file
16:04:33 <evrardjp> 3 ppl saying it's a problem, I suppose it's real
16:04:39 <mnaser> "Running the same command a second time, the command succeeds."
16:04:52 <evrardjp> weirdly it works in gates in one go
16:04:53 <mnaser> in my experience, cdimage.ubuntu.com isn't the most reliable
16:05:15 <mnaser> yeah i've never seen gate failures around that
16:05:24 <evrardjp> maybe we are using an infra mirror?
16:05:29 <mnaser> 2 people at the same time too
16:05:35 <mnaser> i don't think we are for this
16:05:43 <jrosser> these are multinode though, aren't they?
16:05:54 <evrardjp> jrosser: they seem to be
16:06:22 <evrardjp> but IIRC odyssey4me told me RAX has multinode daily periodics now, which would have surfaced the issue
16:06:49 <mnaser> _lxc_hosts_container_image_url: "http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-{{ lxc_cache_map.arch }}.tar.gz"
16:06:55 <odyssey4me> yep, we don't see failures there for MNAIO tests as far as I know
16:07:06 <evrardjp> we maybe need to document this as a known issue, in case people aren't overriding the value
16:07:15 <mnaser> lxc_hosts_container_image_url: "{{ _lxc_hosts_container_image_url }}"
16:07:16 <evrardjp> odyssey4me: I suppose you're using RAX mirrors for that
16:07:18 <odyssey4me> or perhaps we should increase the timeout
16:07:27 <mnaser> odyssey4me: the thing is if you look at the logs
16:07:29 <mnaser> 0B downloaded
16:07:29 <odyssey4me> nope, we're using the upstream sources every time
16:07:33 <mnaser> so.. it's just not downloading ever
16:07:41 <odyssey4me> well, that's nice :/
16:07:46 <evrardjp> oh god
16:07:52 <mnaser> 64MiB/s later on the same download on the rerun
16:07:54 <evrardjp> look at the nice can of worms?
16:08:13 <mnaser> the 2nd person reported on the same day
16:08:31 <odyssey4me> well, we can remove the async and add retries I guess - or figure out another way of having async + retries
16:08:46 <odyssey4me> or we can ditch containers :p
16:08:50 <mnaser> :D
16:09:13 <mnaser> for the sake of this bug, i hate to say it but i guess it's confirmed because we don't have a retry mechanism
16:09:41 <mnaser> even though it's not really our fault, but we don't have recovery from this simple failure
16:10:14 <mnaser> confirmed/low ?
16:10:26 <mnaser> because it's really about handling a third party failure
16:10:57 <mnaser> i guess ill go for that.
16:10:58 <evrardjp> yeah I'm fine with that triage
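A minimal sketch of the retry approach odyssey4me floats above, assuming a plain synchronous get_url task rather than the role's actual async download (the destination path and retry counts are illustrative, not the role's real values):

```yaml
# Hedged sketch: retry the cdimage.ubuntu.com download instead of failing
# on the first stalled attempt. Values below are illustrative.
- name: Ensure image has been pre-staged
  get_url:
    url: "{{ lxc_hosts_container_image_url }}"
    dest: "/var/cache/ubuntu-base.tar.gz"
  register: image_download
  until: image_download is succeeded
  retries: 5
  delay: 10
```

Note that `until`/`retries` don't combine cleanly with `async` on the same task, which is the trade-off odyssey4me mentions.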
16:11:07 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1810538
16:11:07 <openstack> Launchpad bug 1810538 in openstack-ansible "keepalived.service is not enabled" [Undecided,New]
16:11:08 <evrardjp> but we need to explain it's not really our fault in the bug :p
16:11:29 <mnaser> uh
16:11:37 <mnaser> i'll add a note
16:12:06 <evrardjp> that's weird. It should not pass my functional testing if it's not started
16:12:16 <prometheanfire> o/
16:12:22 <evrardjp> oh wait
16:12:22 <mnaser> evrardjp: enabled not started
16:12:24 <evrardjp> after reboot
16:12:29 <evrardjp> yeah I misread
16:12:32 <evrardjp> I don't test that
16:12:56 <evrardjp> but it's not really my fault if the module is not behaving appropriately
16:12:57 <evrardjp> :p
16:13:01 <mnaser> https://github.com/evrardjp/ansible-keepalived/blob/master/tasks/main.yml#L197-L200
16:14:04 <evrardjp> curious
16:14:07 <mnaser> i somehow doubt
16:14:14 <mnaser> that `enabled: yes` is broken
16:14:23 <mnaser> and i have a feeling something else is breaking it
16:14:46 <mnaser> anyways, i can't reproduce because i've always seen keepalived come back up
16:14:57 <evrardjp> I can't remember if keepalived can be socket activated
16:15:11 <mnaser> and i can see it up here `systemctl is-enabled keepalived.service` on a c7 deploy
16:15:14 <evrardjp> in that case a wrong net config would never activate the socket
16:15:33 <evrardjp> but i doubt it's that
16:15:46 <mnaser> it would still be enabled but stopped
16:15:48 <evrardjp> we need a systemd unit log
16:16:07 <evrardjp> I think there is something else going on
16:16:32 <mnaser> asked for more info
16:16:33 <evrardjp> if enabling the service failed, it should show up in the journal
16:16:41 <evrardjp> thx
16:16:48 <mnaser> incomplete
16:16:54 <mnaser> medium
16:16:54 <mnaser> for now..
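For reference, the role task linked above boils down to enabling and starting the service via the systemd module; a minimal sketch, assuming a systemd-based host:

```yaml
# Hedged sketch: what the linked keepalived task effectively does.
- name: Ensure keepalived is enabled and started
  systemd:
    name: keepalived
    enabled: yes
    state: started
    daemon_reload: yes
```

If this ran successfully, `systemctl is-enabled keepalived.service` should report `enabled`, which is exactly the check mnaser ran above.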
16:17:05 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1810537
16:17:05 <openstack> Launchpad bug 1810537 in openstack-ansible "volume creation fails after successful installation" [Undecided,New]
16:17:27 <mnaser> hey isn't that what you were working on cloudnull ?
16:17:31 <mnaser> the fix is included
16:17:44 <mnaser> cc odyssey4me ^
16:17:53 <mnaser> https://review.openstack.org/#/c/628197/
16:18:26 <cloudnull> mnaser that may be the issue I was seeing.
16:18:39 <mnaser> cloudnull: can i assign to you? maybe https://review.openstack.org/#/c/628197/ is the fix
16:18:42 <mnaser> it certainly has to do with it
16:19:02 <openstackgerrit> Mohammed Naser proposed openstack/openstack-ansible-os_cinder master: Adds resource_filters.json distribution  https://review.openstack.org/628197
16:19:04 <mnaser> added closes-bug
16:19:08 <cloudnull> sure
16:19:13 <mnaser> evrardjp: i just did it, let's +2?
16:19:16 <evrardjp> haha
16:19:29 <cloudnull> this was the error from the cinder-api log I was seeing
16:19:30 <cloudnull> https://pasted.tech/pastes/61c4496978d40841ddaf22d0e3ca49936f269a3a.raw
16:19:35 <mnaser> considering noonedeadpunk had it fixed for a while now and we haven't done much :)
16:19:55 <cloudnull> when running tempest.api.volume.test_volumes_list.VolumesListTestJSON
16:19:56 <mnaser> cloudnull: while it's not the same traceback, it complains about `common.get_enabled_resource_filters`
16:20:00 <cloudnull> ++
16:20:06 <mnaser> which seems pretty darn close so..
16:20:13 <cloudnull> so it totally could be the same issue, will tinker in a bit
16:20:21 <mnaser> keep https://review.openstack.org/#/c/628197/2 in mind and ill go onto the next
16:20:28 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1810533
16:20:29 <openstack> Launchpad bug 1810533 in openstack-ansible "openstsack-ansible behind a proxy fails when calling apt-key" [Undecided,New]
16:20:29 <noonedeadpunk> Oh, never knew that there's a bug for this :)
16:20:35 <mnaser> noonedeadpunk: no problem =)
16:20:49 <mnaser> oh look a proxy issue
16:20:54 * mnaser looks at jrosser
16:21:04 <evrardjp> :)
16:21:13 <jrosser> didn't we change these to be in the repo now?
16:21:14 <mnaser> i'm going to guess this might be happening in the cache prep stage
16:21:35 <mnaser> unfortunately it doesn't mention which role
16:21:51 <odyssey4me> MOAR WORKFLOW
16:21:55 <evrardjp> jrosser: that's true we are carrying things
16:22:01 <evrardjp> odyssey4me: haha I laughed too
16:22:13 <mnaser> fine ill join in too
16:22:13 <evrardjp> BUFFEROVERWORKFLOW
16:22:38 <mnaser> i don't see any 'apt-key' references using codesearch.openstack.org
16:22:44 <mnaser> i guess i can ask where that change was done?
16:22:44 <odyssey4me> oh yeah, I'll take that - I've done all the patches except the rocky patch for rabbitmq_server
16:22:52 <odyssey4me> master is all done
16:23:02 <evrardjp> odyssey4me: thanks
16:23:03 <spotz> hehe
16:23:10 <mnaser> okay cool, ill assign then odyssey4me :)
16:23:29 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1810319
16:23:30 <openstack> Launchpad bug 1810319 in openstack-ansible "Can't set gateway for provider network" [Undecided,New]
16:23:38 <odyssey4me> https://review.openstack.org/#/q/(topic:vendor-gpg-keys+OR+topic:vendor-gpg-keys-stable/rocky)+(status:open+OR+status:merged)
16:24:11 <mnaser> looks like that's dcdamien
16:24:17 <jrosser> i wasn't sure about this one - "specific default gateway" sounds a bit bogus?
16:24:26 <jamesdenton> i seem to recall looking at the playbooks and could not find any reference to 'gateway' other than in the docs
16:24:32 <mnaser> sounds like the user wants a static route
16:24:37 <jrosser> ^ that, yes
16:25:35 <mnaser> was it lxc_container_create that made network configs?
16:26:34 <noonedeadpunk> I don't think it ever created default routes. But several times I've hit the issue of missing nat rules on controller nodes after setup-hosts.yml
16:26:47 <noonedeadpunk> but it's probably not related
16:27:34 <mnaser> i dunno
16:27:36 <evrardjp> long ago we could inject routes as part of that process. I remember, I wrote it.
16:27:37 <mnaser> i mean we have it listed there
16:27:49 <jrosser> theres an example in the tests https://github.com/openstack/openstack-ansible-lxc_container_create/blob/master/tests/group_vars/all_containers.yml#L17-L26
16:28:15 <mnaser> for static routes yeah
16:28:17 <mnaser> not default
16:28:26 <jamesdenton> static could easily be 0.0.0.0, no?
16:28:30 <odyssey4me> missing rules on the host is fixed by lxc-system-manage
16:28:34 <mnaser> well it would conflict with the lxc one
16:28:38 <jamesdenton> ahh
16:28:41 <mnaser> the 10.8.0.0 or whatever
16:28:44 <evrardjp> jamesdenton: having two 0.0.0.0/0 seems to be problematic
16:28:50 <evrardjp> for some ppl :p
16:29:13 <evrardjp> not sure what we are talking about anymore
16:29:23 <odyssey4me> unicorns!
16:29:24 <mnaser> i added a comment
16:29:27 <mnaser> and i'll set to invalid i guess
16:29:36 <mnaser> linking to jrosser example
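The tests example jrosser linked attaches static routes to a container network definition roughly like this (keys follow the lxc_container_create conventions as best recalled, so treat the exact structure as illustrative):

```yaml
# Hedged sketch of a container network with a static route, modelled on
# the linked tests/group_vars/all_containers.yml example.
container_networks:
  management_address:
    address: "{{ ansible_host }}"
    bridge: "br-mgmt"
    interface: "eth1"
    netmask: "255.255.255.0"
    type: "veth"
    static_routes:
      - cidr: "10.100.100.0/24"
        gateway: "10.100.100.1"
```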
16:29:47 <evrardjp> we are talking about two different things
16:29:49 <evrardjp> IMO
16:30:07 <mnaser> user wants to define a gateway => not possible
16:30:18 <mnaser> we're pretty sure user just wants a static route pointing ?? somewhere ??
16:30:34 <mnaser> the default route is always 10.8.0.1 or whatever the lxc host ends up with
16:30:44 <mnaser> we use that for natting and all the other fun stuff we need to do to make sure things work
16:30:55 <mnaser> also
16:31:00 <mnaser> don't we run haproxy inside metal?
16:31:27 <mnaser> i'm pretty sure we do.. right? keepalived and haproxy
16:31:35 <noonedeadpunk> Yep
16:31:43 <evrardjp> https://github.com/openstack/openstack-ansible-lxc_hosts/blob/a8b96e2e37ffea4b7c3e055b1310b10bb95a7b2a/defaults/main.yml#L106
16:31:54 <evrardjp> let me backtrack this into the inventory
16:32:18 <odyssey4me> haproxy/keepalived are installed on the host for an AIO, yes
16:32:37 <mnaser> so i guess this seems to be a specific scenario that the user came up with
16:32:42 <evrardjp> it seems it's not in the inventory anymore.
16:33:02 <mnaser> running haproxy in containers but then wanting the container to not be wired to the internet lxc network but wired to the public network in this case
16:33:54 <jrosser> i think this is running containerised haproxy, needing eth0 to be natted to install (guess) and the external interface on this new eth14
16:33:59 <evrardjp> one could run haproxy and keepalived in containers, and use whatever network interface they want
16:34:40 <evrardjp> I think it's a valid issue
16:35:10 <mnaser> not really, because our architecture has a default route via the host
16:35:15 <mnaser> which means that traffic goes in eth14
16:35:20 <evrardjp> we probably removed that feature at some point, kept the feature in lxc_hosts, and probably forgot to edit the default template of the inventory.
16:35:21 <mnaser> but on the way out, it hits the default route
16:35:25 <jrosser> imho this is addressed with a static route as i linked
16:36:01 <mnaser> i'll leave this and let's wait for the user to comment next week
16:36:05 <evrardjp> mnaser: not sure to understand
16:36:22 <mnaser> evrardjp: just because traffic enters from one interface, does not mean it will exit from the same one
16:36:28 <evrardjp> IMO you could have your own NIC that's not natted in the container, and that would require a default route for reason x
16:36:50 <mnaser> if your default route is the physical host that runs the container
16:37:11 <evrardjp> well no I meant, if you don't run nat at all
16:37:23 <evrardjp> you could have just bridges on the host, and ignore lxc nat.
16:37:37 <mnaser> you could, but that's a whole other use case
16:37:43 <mnaser> anyways, i think we've taken a bit of time on this
16:37:55 <mnaser> i don't wanna burn everyone out with all this stuff for now
16:38:00 <jrosser> evrardjp: yes this is exactly what i do in the new http proxy test, no default route https://review.openstack.org/#/c/625523/
16:38:02 <evrardjp> haha true :)
16:38:14 <mnaser> i feel like bug triage drains everyone and we lose people :(
16:38:24 <mnaser> so enough of that for today
16:38:38 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/openstack-ansible-os_cinder master: Cleanup files and templates using smart sources  https://review.openstack.org/588953
16:38:39 <mnaser> rhel 8 was an interesting subject cloudnull brought up, and odyssey4me and evrardjp were discussing earlier
16:38:48 <mnaser> a lot of stuff has been removed which makes containers even harder
16:39:04 * mnaser is using more and more non-containerized centos deploys
16:39:13 <evrardjp> mnaser: I checked the link you gave above
16:39:17 <guilhermesp> with this, cinder as a backend for glance is going to be supported by us, but only when glance is METAL
16:39:20 <evrardjp> (might worth giving it here)
16:39:28 <mnaser> oh yeah cloudnull posted that
16:39:39 <mnaser> cloudnull> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8-beta/html/8.0_beta_release_notes/removed_functionality -- for your reading enjoyment :)
16:39:40 <evrardjp> mnaser: I am scared that the LVM Python bindings have been removed
16:40:04 <mnaser> yeah guilhermesp also caught a really annoying issue about iscsiadm and containers
16:40:13 <mnaser> things like running glance in containers with cinder backend doesn't work for example
16:40:20 <odyssey4me> yep
16:40:28 <odyssey4me> nothing using iscsi will work in a container
16:40:35 <mnaser> learned that the hard way :)
16:40:41 <evrardjp> :)
16:40:41 <guilhermesp> yep hahaha
16:40:45 <prometheanfire> that's been known for a while iirc (we learned the hard way too)
16:40:47 <guilhermesp> I suffered a bit but that's ok
16:41:03 <mnaser> so given all that stuff, i'd like to propose moving centos off containers into metal
16:41:09 <mnaser> as a path to eventually maybe make containers an opt-in
16:41:19 <mnaser> but making centos more of a canary to see what happens in there
16:41:32 <prometheanfire> a second cent gate for it?
16:41:42 <evrardjp> mnaser: that sounds fair. I like the approach.
16:41:58 <mnaser> prometheanfire: we already gate metal and lxc
16:42:07 <mnaser> and they're both somewhat reliable, i guess.
16:42:17 <evrardjp> mnaser: have we fine-tuned the default listen interfaces of all the services?
16:42:26 <prometheanfire> just curious what it'd do to coverage is all
16:42:39 <mnaser> evrardjp: i'll have to pick up that work and maybe JUST maybe multinode metal jobs
16:42:40 <evrardjp> if there is no 0.0.0.0 anymore, we can pretty much simplify more things
16:43:12 <jrosser> evrardjp: you need this https://github.com/openstack/openstack-ansible/blob/master/playbooks/listening-port-report.yml
16:43:12 <mnaser> so i was thinking we can combine some of the efforts that odyssey4me has been doing in order to run integrated for roles
16:43:31 <mnaser> so that we don't have to rewrite most functional tests to throw them away later
16:43:33 <evrardjp> mnaser: for now I'd say the only tricky part is haproxy, because it listens on the same ports as some services, so if you go multinode, just put haproxy on its own node, then all the rest on an infra node, then a compute node
16:43:46 <evrardjp> a 3 node perfection :)
16:43:49 <mnaser> evrardjp: yea but ideally i'd like haproxy to be colocatable
16:43:57 <mnaser> most deployments don't have a dedicated haproxy node
16:44:04 <mnaser> in my experience at least
16:44:08 <mnaser> people want to reuse their controllers
16:44:12 <evrardjp> that's sad, because it helps :p
16:44:49 <mnaser> running everything on its own machine is nice too because that helps, but then we can do vms to make it easier cause they don't need resources .. but machine containers work easier too cause they're lighter
16:44:49 <mnaser> oh.
16:44:50 <mnaser> wait.
16:44:52 <mnaser> :)
16:45:05 <evrardjp> :)
16:45:28 <odyssey4me> maybe we should just use kata containers instead :p
16:45:37 <mnaser> hey
16:45:39 <evrardjp> k8s?
16:45:39 <mnaser> i'm not gonna lie
16:45:50 <mnaser> i've thought about using docker containers to run some stuff
16:46:00 <evrardjp> mnaser: it makes sense
16:46:10 <mnaser> like imagine how nice it'd be just to pull down memcache, the same way, across all systems
16:46:52 <evrardjp> but then ppl will say it's memcache with centos, oh no it's memcache on suse I want, oh no it's memcache on ubuntu I want, OH NO it's not Alpine!
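What mnaser sketches would look something like the task below using Ansible's docker_container module; a hypothetical example, assuming docker is already installed on the target and that the stock memcached image is acceptable:

```yaml
# Hedged sketch: running memcached identically across distros via docker.
# Image tag and port mapping are illustrative choices, not project defaults.
- name: Run memcached the same way on every distro
  docker_container:
    name: memcached
    image: "memcached:1.5"
    state: started
    restart_policy: always
    published_ports:
      - "11211:11211"
```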
16:47:11 <jrosser> we should learn carefully from the tripleo experience there - seems to have turned messy
16:47:37 <evrardjp> I think run everything on metal is simple
16:47:46 <evrardjp> if ppl want their own things, they can
16:48:08 <prometheanfire> not going to make all people happy all the time
16:48:10 <evrardjp> for example, they set their ubuntu nodes with lxd networks, and then install on lxd nodes
16:48:19 <evrardjp> prometheanfire: that's true
16:48:31 <evrardjp> prometheanfire: but we don't have to
16:48:50 <mnaser> i've used every single deployment tool so far by now lol
16:48:55 <prometheanfire> evrardjp: just making sure we were not trying to, that path leads to destruction :P
16:48:57 <evrardjp> if we deliver the minimum amount of code it's easier to maintain in the long run, and people will be happy, even if we lack features
16:48:50 <noonedeadpunk> but I'd prefer not to completely reject containers in favor of metal
16:49:09 <mnaser> noonedeadpunk: nope, we don't want to do that at all
16:49:12 <evrardjp> mnaser: fuel too?
16:49:29 <mnaser> evrardjp: fuel, mos, tripleo, kolla-{ansible,k8s}, puppet openstack
16:49:40 <evrardjp> noonedeadpunk: it's not about rejecting it, because they make sense... It's about giving them as opt-in
16:49:40 <mnaser> even the tripleo before redhat revived it :)
16:49:50 <evrardjp> mnaser: wow, the HP thing?
16:49:54 <mnaser> yep
16:50:05 <mnaser> noonedeadpunk: when you say not rejecting containers, you mean machine containers or app containers
16:50:23 <evrardjp> like odyssey4me would have said, at that time dinosaurs were roaming on earth
16:50:38 <noonedeadpunk> mnaser I mean machine containers
16:50:49 <cloudnull> jrosser ++ totally agree.
16:50:54 <mnaser> yeah, we don't aim on dropping it
16:51:06 <mnaser> we have users that are happy with it
16:51:08 <noonedeadpunk> Yeah, then bare metal is really nice. But kata also looks pretty interesting. But I don't like docker, to be honest..
16:51:08 <jrosser> i'm also here primarily because of the attractive architecture / approach
16:51:18 <mnaser> kata is nothing but a docker runner
16:51:29 <mnaser> it just runs docker containers in vms, thats' all
16:51:31 <odyssey4me> it makes sense to me to have lxc only running on ubuntu, which formally supports it - and nspawn/lxc opt-in for those who want it
16:51:45 <mnaser> yeah, the lxc stuff is pretty hacked up too
16:51:49 <mnaser> because we depend on some other external repo
16:52:05 <mnaser> and the more we stray away from upstream tooling the harder it gets
16:52:23 <evrardjp> the urgency is to remove it for centos
16:52:28 <evrardjp> when is 8 official?
16:52:50 <mnaser> having discussed all of this, is anyone opposed to removing containers from centos, to be able to support rhel 8 and make it a bit more maintainable, i guess
16:52:57 <spotz> Can we blame mhayden for something here?:)
16:53:09 <mhayden> always
16:53:19 <cloudnull> I'd really love folks to give the nspawn bits a spin. they need folks to use it and report where it's all broken for them and their environments.
16:53:19 <spotz> heheh, we miss you:)
16:53:42 <prometheanfire> mhayden: we should do tacos this week
16:53:59 <mnaser> cloudnull: i really tried to get nspawn working at gate, 17 patch sets later and i was still struggling
16:54:00 <FrankZhang> wild Major found
16:54:03 <prometheanfire> cloudnull: I need to bug you about the state of that
16:54:07 <mnaser> and unfortunately we haven't been getting traction on it
16:54:07 <evrardjp> FrankZhang:  :)
16:54:10 <mnaser> my knowledge in it is limited
16:54:19 <odyssey4me> to my mind, given that we have a very limited centos/suse support base and few people supporting/developing them, perhaps we should consider scaling back their support to metal only, without containers
16:54:36 <evrardjp> odyssey4me: I would be fine with that.
16:54:37 <jrosser> mnaser: there's an important distinction between "we removed this stuff because rhel8 broke the world" vs. "we removed this stuff because it's the general direction of travel of OSA"
16:54:53 <odyssey4me> jrosser yep, fair point
16:55:03 <jrosser> i think we have enough confusion already between containers/metal, with very few actually using metal, and we carry the overhead
16:55:18 <mnaser> jrosser: i agree. i don't want to take away from major users whose benefit is containers and the architecture we provide
16:55:18 <evrardjp> odyssey4me: to be honest, lxc is supposed to be phased out in favor of new lxd/lxc bindings, right?
16:55:23 <cloudnull> mnaser maybe we could spend some time on a mnaio or some env to get nspawn up and running and answer questions.
16:55:39 * cloudnull has access  to hardware to do that
16:55:47 <odyssey4me> if we move to containers being opt-in, then we should change the current dynamic inventory to a simpler inventory plugin which can be easily enabled/disabled
16:55:51 <mnaser> well, i'd be just happy with passing jobs to start with :(
16:56:00 <spotz> I know we don't want to wait that long to make a decision but this could be a great Forum or PTG discussion
16:56:00 <evrardjp> odyssey4me: indeed
16:56:17 <prometheanfire> odyssey4me: would we still need the current one to support people if they do opt in to containers?
16:56:18 <evrardjp> spotz: I tried that though, it didn't bring many supporters
16:56:24 <mnaser> also i think making our default deploy tooling simpler but leaving the complex one available is better
16:56:31 <evrardjp> that's why I never worked on the removal :p
16:56:44 <mnaser> most users don't want to throw a /24 at an openstack control plane deployment, they want 3 ips for each of their controllers
16:56:45 <evrardjp> mnaser: couldn't it be on the side, in ops repo?
16:56:47 <spotz> evrardjp: Weird, cause the input would help shape a design direction :(
16:57:00 <mnaser> *if* someone wants to do that, they can do it (and they probably have the knowledge to do it)
16:57:17 <evrardjp> spotz: oh no I meant the lack of willingness to change was an input in itself
16:57:20 <prometheanfire> mnaser: swich the control plane to v6, never run out of addresses :P
16:57:27 <cloudnull> ^ :D
16:57:31 <mnaser> baha
16:57:32 <prometheanfire> then just use haproxy for 624
16:57:35 <mnaser> like or maybe as an idea
16:57:41 <prometheanfire> :P
16:57:47 <mnaser> we can decouple the dynamic inventory out
16:57:54 <odyssey4me> we've run around this circle many times, but until it matters enough to someone it's not going to happen
16:58:04 <prometheanfire> ^
16:58:16 <mnaser> odyssey4me: the ipv6 circle or the metal circle
16:58:18 <odyssey4me> for now it's easy to make centos metal only, and perhaps also work on some plays to transition any container deployments to metal when doing the upgrade
16:58:33 <spotz> evrardjp: hehe, then they need to provide help!:)
16:58:41 <odyssey4me> perhaps for suse we do the same given that we have a low support base and user base
16:58:55 <mnaser> if folks are ok with that, i will do the work
16:59:01 <odyssey4me> we leave ubuntu as-is until it matters enough to anyone to change up how that's all done
16:59:02 <cloudnull> +1
16:59:18 <evrardjp> lgtm
16:59:23 <spotz> +1
16:59:24 <mnaser> awesome
16:59:24 <evrardjp> and thanks mnaser for the work
16:59:33 <evrardjp> it shouldn't be that hard if we keep lxc around
16:59:35 <mnaser> thank you for your patience with my shenanigans dealing with dinosaurs
16:59:36 <mnaser> :)
16:59:37 <evrardjp> for ubuntu
16:59:51 <cloudnull> would be sad to see SUSE support / test matrix reduced, but i understand
16:59:55 <mnaser> anyone else have anything in mind?
17:00:04 <odyssey4me> mnaser I think we can pretty simply just remove centos/suse from the openstack-ansible-tests templates, then add cross-repo integrated build tests to all roles with a metal build
17:00:07 <evrardjp> as we won't change the inventory or methods of deployment, but still simplify code for centos
17:00:22 <mnaser> odyssey4me: that was my goal pretty much :)
17:00:31 <mnaser> and then figure out upgrades
17:00:35 <evrardjp> well
17:00:35 <mnaser> anyways, we're kinda at time
17:00:37 <prometheanfire> mnaser: I have an item to bring up for notice (after this) :P
17:00:44 <evrardjp> it's not enough for clustering roles
17:00:49 <mnaser> whats up prometheanfire ?
17:00:57 <prometheanfire> the barbican role seems broken
17:00:58 <mnaser> evrardjp: i had an idea for that too :)
17:01:00 <mnaser> orly?
17:01:03 <evrardjp> mnaser: multinode?
17:01:06 <mnaser> evrardjp: yep
17:01:16 <evrardjp> mnaser: good
17:01:21 <mnaser> oh thats a lot of red
17:01:33 <prometheanfire> at least in master
17:01:36 <mnaser> fatal: [infra1]: FAILED! => {"changed": false, "cmd": "set -e\n if [ -d /opt/tempest-testing/bin ];\n then\n . /opt/tempest-testing/bin/activate\n fi\n tempest run  --whitelist-file /root/workspace/etc/tempest_whitelist.txt", "delta": "0:00:02.464257", "end": "2019-01-03 18:26:46.166077", "msg": "non-zero return code", "rc": 1, "start": "2019-01-03 18:26:43.701820", "stderr": "", "stderr_lines": [], "stdout": "The specified regex doesn't match with anything", "stdout_lines": ["The specified regex doesn't match with anything"]}
17:01:39 <jrosser> that is due to changes in tempest
17:01:47 <mnaser> mayb arxcruz or chandankumar can help us with that
17:01:49 <jrosser> and there was a patch this morning merged as the first part of addressing that
17:01:58 <odyssey4me> yep, I've been doing some work on that front too
17:02:16 <mnaser> okay cool, so maybe that is a good canary patch to see if it works or not :)
17:02:17 <prometheanfire> mnaser: k, thanks
17:02:21 <odyssey4me> https://review.openstack.org/628979 has been a bit of a work in progress
17:02:21 <chandankumar> mnaser: can you point me the log url
17:02:30 <mnaser> chandankumar: i looked at https://review.openstack.org/#/c/625634/
17:02:47 <evrardjp> I also have a few things to add for the record of the meeting:
17:02:52 <jrosser> this https://github.com/openstack/openstack-ansible-os_tempest/commit/25b5533c30e328c80d29348dff0cfc0f2ac5e88f
17:03:04 <jrosser> even if that doesn't fix barbican/designate it's the first step
17:03:07 <evrardjp> 1) If there is a problem with SUSE packaging, please query in #openstack-rpm-packaging
17:03:09 <odyssey4me> the previous issue was that distro installs didn't install the plugins to do the tests - now that's happening, but nothing's setting the var that enables the plugin to be installed
17:03:53 <chandankumar> jrosser: so the barbican/designate failure did not get fixed?
17:04:44 <openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_tempest master: Use the inventory to enable/disable services by default  https://review.openstack.org/628979
17:04:45 <mnaser> evrardjp: anything else for the record of the meeting? or we can wrap up and keep discussing :)
17:04:54 <evrardjp> 2) I am bringing the idea of gating OBS Staging->OBS repo using OSA for more stability
17:05:08 <mnaser> yay
17:05:13 <odyssey4me> that should solve it
17:05:15 <evrardjp> thanks mnaser
17:05:24 <evrardjp> and everyone!
17:05:27 <odyssey4me> awesome, thanks evrardjp
17:05:40 <odyssey4me> even just trying to install openstack using anything would be a good start
17:06:04 <mnaser> ++
17:06:09 <mnaser> okay, we're over but i think we covered most things
17:06:12 <mnaser> i shall end :)
17:06:15 <mnaser> thank you everyone!!
17:06:17 <mnaser> #endmeeting