16:02:04 <cloudnull> #startmeeting OpenStack Ansible Meeting
16:02:05 <openstack> Meeting started Thu Feb 11 16:02:04 2016 UTC and is due to finish in 60 minutes.  The chair is cloudnull. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:09 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:02:27 <cloudnull> #topic rollcall
16:02:30 <cloudnull> o/
16:02:34 <odyssey4me> o/ here, but will be leaving shortly - cloudnull is chair
16:02:53 <cloudnull> how's it going, all?
16:03:34 <cloudnull> so let's get started
16:03:40 <michaelgugino> here
16:03:51 <cloudnull> looks like most of the action items from last week were from odyssey4me
16:03:54 <cloudnull> #link http://eavesdrop.openstack.org/meetings/openstack_ansible/2016/openstack_ansible.2016-02-04-16.04.html
16:04:22 <cloudnull> and I believe they've all been accomplished
16:04:22 <palendae> o/
16:04:24 <cloudnull> odyssey4me ?
16:04:35 <v1k0d3n> o/
16:04:44 <odyssey4me> cloudnull yep :)
16:04:54 <cloudnull> centos gating is online and I've seen the ML discussion RE: the midcycle
16:04:58 <cloudnull> so I think we're good there.
16:05:27 <cloudnull> #topic Topics for Discussion
16:06:02 <michaelgugino> I'm in another meeting, so ping me for anything please.
16:06:14 <cloudnull> odyssey4me -- RE: Mid-Cycle Meetup -- how's that going?
16:06:19 <cloudnull> I assume all is well
16:06:22 <cloudnull> thanks michaelgugino will do
16:06:36 <odyssey4me> can we get votes in on sessions for https://etherpad.openstack.org/p/openstack-ansible-mitaka-midcycle so that we can prioritise in case there isn't enough time
16:06:37 <v1k0d3n> same here, have another scrum at 11, can you ping me during open discussion please?
16:06:50 <odyssey4me> if anyone has other proposed talks they'd like to add, please add them
16:06:50 <cloudnull> will do v1k0d3n
16:07:03 <cloudnull> #link https://etherpad.openstack.org/p/openstack-ansible-mitaka-midcycle
16:08:26 <admin0> hello all :)
16:08:29 <cloudnull> o/
16:08:36 <cloudnull> #topic Support for multiple availability zones
16:08:47 <cloudnull> admin0 This was your item
16:08:55 <cloudnull> what would you like to discuss ?
16:09:09 <admin0> well, regarding AZs, I am wondering if there is/will/can be an easier way to bind resources together rather than using host-specific overrides, which is a bit tedious and time consuming
16:09:20 <admin0> especially if you've got like 60 nodes in 3 AZs each
16:09:42 <cloudnull> so is this specifically a nova / cinder AZ  ?
16:09:54 <admin0> yep
16:10:17 <cloudnull> how would you like to see that done in a perfect world ?
16:10:44 <admin0> was what I recommended a bad way or a tedious way?
16:10:56 * cloudnull is looking for the LP issue
16:11:04 <admin0> I would like to group resources in an AZ .. and be able to define AZ-based IPs and resources
16:11:24 <admin0> so that just 1 deployment can take care of multiple AZs, since they share the same db/api
16:11:24 <odyssey4me> is there a reason why this couldn't just be a post install configuration done by hand?
16:11:45 <odyssey4me> as far as I recall you can set hosts into different AZs through the CLI
16:12:07 <admin0> well, the reason to use Ansible and automation is to try to automate as much as possible, right? so that was the thought
16:12:40 <admin0> with 5 or more people on the team, having this controlled through Ansible prevents mistakes
16:12:56 <evrardjp> odyssey4me: and web interface
16:13:11 <admin0> because the Ansible config governs what goes where, and that can be reviewed before deployment, rather than depending on who is doing what post-install on the CLI
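For reference, the per-host override approach being described looks roughly like the following in openstack_user_config.yml. This is a minimal sketch, assuming nova_availability_zone is the variable being overridden; hostnames and addresses are illustrative:

    compute_hosts:
      compute01:
        ip: 172.29.236.11
        container_vars:
          nova_availability_zone: az1
      compute02:
        ip: 172.29.236.12
        container_vars:
          nova_availability_zone: az1
      # ...repeated for every host, which is the tedium at 60+ nodes across 3 AZs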
16:13:15 <spotz> o/ late
16:13:20 <cloudnull> o/ spotz
16:14:04 <cloudnull> I can see the merits of determining the AZ for a nova/cinder node.
16:14:27 <odyssey4me> As long as whatever is implemented is reasonably maintainable, I'm ok with it. I'm wary of adding a bunch of conditional code - we already have too much of it and it makes the logic hard to follow.
16:14:35 <cloudnull> however I can also see that automation living outside of the main deployment
16:14:44 <cloudnull> so idk
16:14:54 <odyssey4me> We need to aim for more simplicity in the deployment.
16:15:12 <cloudnull> admin0: I recently had to deal with AZs for a 500-node deployment via the CLI and it's a bit of a pain.
16:15:24 <cloudnull> so I'm on the fence
16:15:34 <odyssey4me> So yeah, I'm not against it - I'd just like to see a proposed review and perhaps discussion is better suited in such a review.
16:15:54 <cloudnull> admin0: in your LP issue you noted assigning an AZ based on CIDR group, is that right?
16:16:00 <evrardjp> isn't openstack-ansible already setting an AZ for cinder nodes?
16:16:00 * cloudnull is failing at his LP searching
16:16:10 <odyssey4me> #link https://bugs.launchpad.net/openstack-ansible/+bug/1539803
16:16:11 <openstack> Launchpad bug 1539803 in openstack-ansible "enable support for availability zones for compute and services" [Undecided,Invalid]
16:16:18 <cloudnull> ^ that one
16:16:22 <cloudnull> thanks odyssey4me
16:16:53 <odyssey4me> the bug description has a proposed solution, but that doesn't really give me a sense of the impact to the code
16:17:13 * odyssey4me is a little slow to comprehend these things today :)
16:17:13 <admin0> well, that is what we use now and find it easy .. because the internal CIDRs are different in both zones; only the management CIDR is accessible from the deploy server
16:17:42 <logan-> couldn't you do the AZ stuff using group vars since all of the settings are there
16:17:49 <odyssey4me> that seems like a sensible way to do it
16:18:08 <admin0> i have just started to play with it .. is there an example already of this ?
16:18:13 <evrardjp> why wouldn't you use it the same way as it's done on cinder?
16:18:15 * admin0 feels n00bish
16:18:25 <evrardjp> https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_cinder/templates/cinder.conf.j2#L77
16:18:56 <evrardjp> nova_availability_zone should be defined in the openstack_user_config if it's a group override
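The cinder mechanism evrardjp links to is just a templated variable in cinder.conf.j2; the excerpt below is paraphrased, with the variable name assumed from the os_cinder role's defaults:

    # playbooks/roles/os_cinder/templates/cinder.conf.j2 (excerpt, paraphrased)
    storage_availability_zone = {{ cinder_storage_availability_zone }}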
16:18:57 <cloudnull> odyssey4me: I think the point is that you'd have to add that to every host, which is time consuming, especially as the deployment grows
16:19:26 <cloudnull> 500 nodes == 500 entries of the same config
16:19:34 <odyssey4me> cloudnull I think you meant to inform evrardjp of that :)
16:19:38 <logan-> yeah the openstack_user_config doesn't have a great way to set group vars afaik, you have to set tons of container vars
16:19:56 <admin0> we grow by 6 compute nodes and 2 cinder nodes per AZ, and this I see as outside of the deployment, but something it would be nice if the deployment could handle
16:20:01 <cloudnull> odyssey4me:  yes sorry evrardjp **
16:20:23 <odyssey4me> perhaps a better long term solution here is to enable better usage of group_vars in the dynamic inventory?
16:20:24 <admin0> each AZ has its own API, storage, and nova clusters; only swift spans all zones
16:20:27 <logan-> for something like this you would almost be better off segmenting compute_hosts and storage_hosts into some subgroups and putting some files in inventory/group_vars
16:20:58 <logan-> yeah group vars in dynamic inventory would be so helpful
16:20:58 <admin0> my proposal was based on the fact that I could not find a lot of ways to do it .. if there are examples, I can try again and come back if really stuck
16:20:59 <odyssey4me> maybe something like dynamic groups based on arbitrary criteria (e.g. CIDR)
16:21:13 <palendae> Or a group membership stanza
16:21:28 <evrardjp> there we go, same discussion again :)
16:21:58 <palendae> Sorry, am I being cyclical? trying to look at multiple things
16:22:04 <admin0> if this discussion has come up again and again, there must be a reason right :D
16:22:05 <cloudnull> I think logan- is on to something there. if it were subgroups, or if we documented how to do that better, then you could define group_vars in a simpler way
16:22:12 <odyssey4me> maybe we need to have a proper discussion about the next evolution of the dynamic inventory at the mid cycle?
16:22:26 <palendae> ^ yes
16:22:31 <evrardjp> +1
16:22:41 <palendae> I know neillc was looking at implementing it as a library
16:22:41 <logan-> I do it out of necessity for some things (modifying inventory directly and adding some subgroups of hosts in static files to build on top of dynamic inventory)
16:22:43 <logan-> but I hate doing it
16:22:50 <logan-> if I could do it via dynamic inventory that would be great
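The workaround logan- describes relies on Ansible merging multiple inventory sources: a static file can declare subgroups of hosts that the dynamic inventory already emits, and ordinary group_vars files then apply to those subgroups. A minimal sketch, with invented group and host names (file locations are illustrative):

    # static inventory file placed alongside the dynamic inventory script
    [az1_compute]
    compute01
    compute02

    # group_vars/az1_compute.yml -- applied by Ansible to members of az1_compute
    nova_availability_zone: az1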
16:22:50 <odyssey4me> or yes - another way to handle this would be user-developed env.d changes for altered group specs
16:22:51 <evrardjp> tagging hosts to groups and having a way to define variables for these tags is IMO important
16:22:53 <palendae> With a first pass of just making it a library, with current functionality
16:23:08 <palendae> odyssey4me: Yeah, I think we talked about an evolution of env.d
16:23:15 <palendae> These things aren't necessarily exclusive, either
16:23:38 <odyssey4me> ok, personally I don't think we're going to solve this in this meeting and we should dedicate a slot to it in the mid cycle
16:23:43 <palendae> Nope
16:23:45 <palendae> Yep
16:23:46 <odyssey4me> can someone propose and facilitate it?
16:24:01 <admin0> what needs to be done for that ?
16:24:02 <odyssey4me> be sure to add an etherpad link so that discussion can already begin ahead of time
16:24:05 <jmccrory> not sure how far along this is https://blueprints.launchpad.net/openstack-ansible/+spec/dynamic-inventory-lib, but maybe we could include this discussion as part of that
16:24:30 <palendae> jmccrory: neillc is looking at it, but I know he was out for a few weeks
16:24:40 <odyssey4me> we need to examine new requirements and work out a good way to cater for those
16:24:42 <palendae> His implementation is starting as just moving the current code into a library
16:25:02 <odyssey4me> yeah, I think the lib effort should be separate from that
16:25:11 <odyssey4me> this should just be to figure out how to cater better for scale
16:25:35 <odyssey4me> these requirements can then be added to Neil's work - and we can work out a work breakdown after that
16:25:47 <palendae> Makes sense to me
16:26:22 <admin0> +1
16:26:37 <admin0> so when will this be discussed again ?
16:26:51 <palendae> Sounds like midcycle
16:27:15 <odyssey4me> but before then, in an etherpad
16:27:20 <admin0> ok
16:27:27 <admin0> what do I need to do :D ?
16:27:31 <odyssey4me> who's going to facilitate the session? I'm happy to assist with that
16:27:55 <odyssey4me> admin0 see the links in https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
16:28:08 <admin0> got it
16:28:10 <odyssey4me> get yourself a remote ticket, if you're not able to physically attend
16:28:23 <odyssey4me> and add a proposed work session to the work session list
16:28:53 <odyssey4me> if you're in the US, we'll try and time the work session for the UK afternoon
16:29:14 <admin0> i am in the Netherlands
16:29:18 <admin0> so :(
16:29:37 <cloudnull> we'll be in the UK for the midcycle
16:29:40 <odyssey4me> alright, then we can do it almost any time in the day - but I think many of the US folk would like to contribute to the discussion
16:29:40 <cloudnull> :)
16:29:46 <cloudnull> just a hop skip and a jump
16:29:58 <palendae> I'm not sure I can facilitate, but I would like to participate in that discussion
16:30:01 <odyssey4me> so admin0 you can easily attend remotely, and facilitate remotely
16:30:07 <admin0> i will try :)
16:30:35 <evrardjp> I'll also take part in this discussion :p
16:30:51 <cloudnull> nice
16:31:00 <cloudnull> so let's move on to the next topic
16:31:13 <cloudnull> #topic Blueprint work
16:31:31 <cloudnull> michaelgugino -- how's Multi-OS Support going?
16:31:58 <michaelgugino> we talked a little about that yesterday
16:32:14 <michaelgugino> I wanted to talk about a few blockers
16:32:19 <cloudnull> sure thing
16:32:50 <michaelgugino> 1) containers. The bulk of the work needs to be done to services that run in containers. We need to update our container strategy
16:33:04 <michaelgugino> for starters, we should probably make it so host distro = container distro
16:33:18 <cloudnull> ++
16:33:23 <palendae> michaelgugino: Agreed.
16:33:30 <odyssey4me> michaelgugino yep, we agreed that in the principles of the work
16:33:40 <prometheanfire> ya, that was talked about when this first came up
16:33:47 <odyssey4me> while it may be possible for a deployer to configure it differently, we will not test other options
16:33:56 <michaelgugino> so, there is a lot of refactoring that needs to go into that.  I think that is where we need to start, and where the bulk of the effort actually lies.
16:34:26 <odyssey4me> see 'basic principles' in https://etherpad.openstack.org/p/openstack-ansible-multi-os-support :)
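A sketch of what "host distro = container distro" could look like as a default, driven by Ansible's distribution facts; the variable names here are hypothetical, not existing role vars:

    # hypothetical defaults: build containers from the same image as the host
    lxc_container_distro: "{{ ansible_distribution | lower }}"          # e.g. ubuntu, centos
    lxc_container_release: "{{ ansible_distribution_release | lower }}" # e.g. trusty
    lxc_container_arch: "{{ ansible_architecture }}"                    # e.g. x86_64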
16:34:38 <michaelgugino> next up, we need to figure out if we're going to support a new version of galera/percona
16:35:03 <odyssey4me> yes we can, for master - but not liberty
16:35:15 <mattt> why new version?
16:35:32 <michaelgugino> because newer distros don't support those older packages
16:36:04 <palendae> michaelgugino: new as in 10.x?
16:36:11 <palendae> 10.1 iirc
16:36:13 <cloudnull> michaelgugino: is this related to the xtrabackup work ?
16:36:35 <michaelgugino> right, specifically the xtrabackup package is hard-coded to a specific version
16:36:59 <michaelgugino> that version number is not available for Wily in upstream packaging.
16:37:23 <odyssey4me> before I run off - at the top of https://etherpad.openstack.org/p/openstack-ansible-multi-os-support is a list of the independent repositories for the apt install / var split out... can we get some volunteers to provide patches for each repo based on the pattern set out in https://review.openstack.org/274290 ?
16:37:25 <michaelgugino> installing the current package via dpkg -i as we currently do fails.
16:38:15 <mattt> odyssey4me: i'd like to grab one for sure
16:38:42 <odyssey4me> please add your name, and then add your review link next to the repo
16:38:50 <odyssey4me> first your name so that we don't duplicate work
16:39:25 <cloudnull> michaelgugino: I wonder if we can make that a tunable that can otherwise be turned off?
16:39:29 <cloudnull> for the xtrabackup package
16:40:50 <michaelgugino> I think we need to at the very least make the package version portable
16:42:08 <cloudnull> agreed.
16:42:31 <cloudnull> maybe that could be done with conditional includes, similar to what odyssey4me was proposing with the package install refactor
16:42:49 <michaelgugino> I think that is a good approach, and what I was going to do.
16:43:33 <cloudnull> then we can default to something like a portable package and become more restrictive based on the include
16:44:12 <michaelgugino> right, but for galera specifically, we should have a game plan going forward
16:44:47 <cloudnull> If that was how you were envisioning the approach, I'm +1
16:44:49 <michaelgugino> "we support the stable release for the respective distro from upstream"  Something like that.
16:45:11 <prometheanfire> what happens with rolling distros?
16:45:37 <cloudnull> or we can always say that we only support the LTS for now?
16:45:50 <evrardjp> prometheanfire will be sad
16:45:55 <odyssey4me> I think that the distro-specific vars can cater for a uniform approach, and where needed there can be distro-specific tasks too.
16:46:36 <odyssey4me> ie if one distro requires the addition of a repo, but the other doesn't - then that is applied in the package-mgr specific task set
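The kind of pattern being referenced would put a first-found vars include at the top of each role, so that version pins like the xtrabackup one move into per-distro vars files. A rough sketch under that assumption; the exact file names would vary per role:

    - name: Gather variables for each operating system
      include_vars: "{{ item }}"
      with_first_found:
        - "{{ ansible_distribution | lower }}-{{ ansible_distribution_release | lower }}.yml"
        - "{{ ansible_distribution | lower }}.yml"
        - "{{ ansible_os_family | lower }}.yml"
      tags:
        - always

A vars/ubuntu-wily.yml could then pin whatever xtrabackup version actually exists for that release, instead of one hard-coded version for all distros.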
16:46:41 <prometheanfire> evrardjp: we can support packages for extended periods, and it's honestly not many
16:46:48 <evrardjp> what about only supporting distros that are tested in the gates?
16:47:02 <prometheanfire> evrardjp: I do agree with that
16:47:07 <evrardjp> sorry if it's too specific, but it's the simplest
16:47:11 <odyssey4me> but anyway - the primary targets right now are Ubuntu & CentOS - we can iterate the approach once those are done
16:47:24 <prometheanfire> yarp
16:47:44 <odyssey4me> I'd rather that we didn't over-engineer things
16:47:51 <odyssey4me> simpler is better
16:48:41 <michaelgugino> Once we get Ubuntu and CentOS supported, that will cover most users.
16:49:16 <cloudnull> ++
16:49:21 <michaelgugino> Debian should be easy to add after we get systemd working.
16:49:32 <michaelgugino> But, the key question is, when do we stop supporting 14.04?
16:49:43 <michaelgugino> I figured we'd have to go with the guidance from openstack.org
16:49:45 <evrardjp> 16.04 will be systemd too, right?
16:49:50 <michaelgugino> yes
16:50:26 <prometheanfire> systemd will help make cross distro functionality easier, hopefully most distros will support the networkd config as well
16:50:33 <michaelgugino> Once 16.04 launches, Debian, CentOS, and Ubuntu LTS will all be on systemd.
16:50:47 <prometheanfire> gentoo will too
16:50:53 <palendae> michaelgugino: I personally envisioned the first version to support 16.04 would support both 14/16, then the one after, 16.04 only
16:50:57 <prometheanfire> well, already are
16:50:59 <cloudnull> I think the systemd changes are the bulk of what will be needed to be multi-distro compatible
16:51:01 <odyssey4me> yep, but production environments will have a mix of 14.04 and 16.04, so for a cycle or two we'll need to carry support for both
16:51:08 <palendae> What odyssey4me said
16:51:18 <evrardjp> +1
16:51:27 <palendae> Upgrades that cycle are going to be...interesting
16:51:30 <prometheanfire> ya, in the mean time extra fun for all
16:51:31 <mattt> haha yep
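On the init-system point: Ansible's generic service module already abstracts upstart vs systemd at the task level, so much of the multi-distro cost is in shipping the right unit/init files rather than in the plays themselves. A minimal illustration (the service name is just an example):

    - name: Restart a service regardless of init system
      service:
        name: nova-compute
        state: restarted
        enabled: yes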
16:51:37 <michaelgugino> I think we should support 14.04 until 17.04 is released.
16:51:47 <prometheanfire> 17.04 an lts?
16:51:56 <prometheanfire> thought next next one would be 18
16:51:58 <mattt> I still think we need strong criteria for distros other than ubuntu/centos
16:52:02 <palendae> prometheanfire: It is
16:52:12 <michaelgugino> no, but it's a solid year from launch of 16.04 which will give people 6 months to move, 6 months to bake
16:52:24 <evrardjp> until official deprecation of ubuntu? so maintain until something before 19.04? ;)
16:52:25 <prometheanfire> mattt: I think being able to gate is good
16:52:32 <mattt> prometheanfire: wrong.
16:52:39 <prometheanfire> oh, this is nice
16:52:46 <mattt> :)
16:52:50 <palendae> michaelgugino: I think a lot of places stay on LTS
16:52:58 <odyssey4me> I've added a note in the etherpad regarding the success criteria for the package install and var split.
16:53:16 <cloudnull> ok so we're almost out of time and I'd like to open it up for other general discussions.
16:53:28 <cloudnull> #topic Open discussion
16:53:32 <odyssey4me> michaelgugino I think that is a good suggestion for input to one of the mid cycle sessions. :)
16:53:38 <cloudnull> which I know v1k0d3n wanted to be pinged about
16:53:40 <odyssey4me> we can discuss in more detail there.
16:53:54 <v1k0d3n> hello o/
16:53:54 <cloudnull> so feel free to keep discussing, but I wanted to let more people in
16:54:09 <michaelgugino> I won't be able to make the midcycle, and I'll be traveling next week. But I'll be at the summit.
16:54:10 <cloudnull> o/ v1k0d3n
16:54:49 <v1k0d3n> is it ok to bring up the tower deployment and see if anyone is interested?
16:55:13 <cloudnull> michaelgugino:  no worries, we can carry the convo on within the channel as you find time. I'd imagine the summit will be a good opportunity to cover a lot.
16:55:19 <cloudnull> sure
16:55:21 <odyssey4me> michaelgugino we'll not necessarily finalise everything at the mid cycle - the intent is to start the conversation, which will then move into a proposed review
16:55:22 <evrardjp> quick question about this: does someone know if tower will be open sourced at some point ?
16:55:26 <cloudnull> v1k0d3n: sure thing
16:55:31 <odyssey4me> we can then revisit everything at the summit
16:55:40 <evrardjp> this would help with the dynamic inventory thingy
16:55:43 <cloudnull> evrardjp idk
16:56:00 <cloudnull> i know the folks at ansible have teased that in the past
16:56:05 <cloudnull> but ive heard nothing recently
16:56:10 <v1k0d3n> ok, so tyler cross from ansible and I have been discussing some ways to simplify the deployment and management of the openstack-ansible deployment (or at least give another option for management)...
16:56:31 <v1k0d3n> i'm new to this (different background), but we came up with this: https://github.com/v1k0d3n/tower-openstack
16:56:50 <cloudnull> v1k0d3n: tower would be sweet! if it can do what we need it to
16:56:56 <v1k0d3n> I want to get suggestions for workflow, but there are two main workflows that I can think of that would be useful...
16:57:06 <v1k0d3n> management of the first openstack deployment into bareOS
16:57:24 <v1k0d3n> the second would be OSaaS within OS for development purposes...
16:58:02 <v1k0d3n> and the OSaaS or bareOS deployment would use a survey (provided to the end user) to collect the variables used for each deployment.
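A Tower survey in this context would just collect deployment variables and pass them to the playbook run as extra vars. A hypothetical survey_spec fragment, with invented question and variable names:

    {
      "name": "OSA deployment variables",
      "spec": [
        {
          "variable": "infra_node_count",
          "question_name": "How many infra nodes?",
          "type": "integer",
          "required": true
        }
      ]
    }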
16:58:07 <v1k0d3n> does anyone like this idea?
16:58:39 <cloudnull> I'd be very interested in tower working for bareOS deployments
16:58:39 <v1k0d3n> for each deployment...bareOS or OSaaS, multi-node and AIO would be options.
16:58:58 <evrardjp> yeah it's cool
16:59:22 <v1k0d3n> I want to prioritize the work effort, and see if folks would be interested in contributing by pull requests, etc.
16:59:35 <cloudnull> ++
16:59:47 <cloudnull> ok we're out of time.
16:59:55 <cloudnull> we can carry on in the channel
16:59:58 <evrardjp> +1 v1k0d3n
16:59:59 <cloudnull> thanks everyone
17:00:02 <evrardjp> thanks
17:00:05 <cloudnull> #endmeeting