16:00:01 <mhayden> #startmeeting OpenStack-Ansible
16:00:01 <openstack> Meeting started Thu Aug 25 16:00:01 2016 UTC and is due to finish in 60 minutes.  The chair is mhayden. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 <openstack> The meeting name has been set to 'openstack_ansible'
16:00:25 <evrardjp> hello
16:00:50 <michaelgugino> ahoy
16:00:52 * mhayden waves
16:00:54 <andymccr> o/
16:01:00 <DrifterZA> o/
16:02:16 <odyssey4me> o/
16:02:19 <eil397> 0/
16:02:36 <shasha_tavil> o/
16:02:37 <automagically> o/
16:02:42 <matias> o/
16:02:46 <palendae> o/
16:03:28 * mhayden waits til 5 min after
16:03:47 <cloudnull> o/
16:03:47 <mhayden> some folks here are stuck in another mtg -- i'll give 'em a couple of minutes
16:03:55 <asettle> o/
16:03:59 <rromans> \o
16:04:47 <mhayden> okay, let's roll
16:04:48 <qwang> o/
16:04:58 <mhayden> #topic Action items from the last meeting
16:05:15 <mhayden> the one action item was from asettle for everyone to do some technical reviews of the new draft docs
16:05:28 <mhayden> there's still a bunch of work going on there!
16:05:31 <asettle> Yep! There is
16:05:41 <asettle> Going through and doing reviews of the full doc. It's certainly taking some time!
16:05:58 <jmccrory> o/
16:06:07 <asettle> But thanks to everyone who has drawn my attention to certain issues. Trying my best to go through it.
16:06:16 <asettle> And thanks to everyone who volunteered to help out with the technical review.
16:06:23 <asettle> For anyone who missed last week
16:06:27 <michaelgugino> what's the link?
16:06:34 <asettle> If you are interested in testing the new install guide for the deploy, please sign up here: https://etherpad.openstack.org/p/install-guide-reviewteam
16:06:50 <asettle> michaelgugino: for?
16:06:56 <michaelgugino> docs
16:07:07 <automagically> #link https://etherpad.openstack.org/p/install-guide-reviewteam
16:07:20 <asettle> As above michaelgugino
16:07:35 <michaelgugino> thanks
16:07:48 <asettle> Please also pass onto your friends ;)
16:08:00 <mhayden> sweet
16:08:03 <asettle> That's all from me right now! :)
16:08:11 <asettle> Oh, if people wanna review this: https://review.openstack.org/#/c/360548/
16:08:13 <asettle> That'd be rad too :p
16:08:17 <mhayden> thanks, asettle
16:08:21 * asettle sneaks that one in
16:08:32 * mhayden starts rolling through the agenda
16:08:56 <mhayden> #topic Newton install guide overhaul updates (asettle)
16:09:02 <asettle> Oh gosh, me again
16:09:11 <mhayden> asettle: looks like we have this covered already, right? ;)
16:09:14 <asettle> Sort of haha
16:09:21 <asettle> Basically, one minor update!
16:09:33 <asettle> All the config information that was meant to be moved into the role docs are now all done. Thanks to those who put in that effort.
16:09:45 <evrardjp> thanks indeed
16:09:49 <asettle> If people have the time, please keep an eye out for some storage architecture diagrams coming around, and keep your eyes peeled for docs patches coming through
16:09:51 <odyssey4me> I had a chat with darrenc earlier today - he should have updated storage arch pics up tomorrow, or soon (tm).
16:10:01 <asettle> We're on the last leg of getting this doc up and ready for newton and we need all hands on review deck.
16:10:06 <asettle> (see my plug before)
16:10:17 <asettle> If people have the time to just read (not technically review) we would appreciate that too.
16:10:35 <asettle> Andddd last but not least, the doc will soon replace the installation guide that you see from the front page.
16:10:37 <automagically> asettle: Have been and will continue to do so
16:10:40 <odyssey4me> are we still missing example configs for test & prod?
16:10:43 <asettle> We will send around an update to the list
16:10:48 <asettle> odyssey4me: yes, you have them in your inbox ;)
16:10:50 <automagically> odyssey4me: Yep, but its in progress
16:11:46 <automagically> Believe nishpatwa is working it
16:12:01 <asettle> I think that's all major doc related updates. We have stuff to come, but would appreciate eyes and ears to the review ground mostly :)
16:12:02 <asettle> Thanks mhayden
16:12:05 <mhayden> woot
16:12:14 <mhayden> #topic Binary lookup (cloudnull)
16:12:42 <evrardjp> that is a catchy topic name
16:13:00 <cloudnull> odyssey4me: and I have talked about adding a package lookup to py_pkgs
16:13:12 <cloudnull> this would index all of the apt/yum packages we use
16:13:27 <cloudnull> but to do it effectively we need to rename vars. :(
16:13:42 <palendae> Where would that index live?
16:13:45 <evrardjp> makes sense to standardize anyway
16:13:47 <odyssey4me> Some background on this - I'm wanting to try and get base container caches created for each role. I've asked cloudnull to put together a lookup for the apt/yum packages so that we can prep a base cache for each role and use it when creating containers to cut time, but also to enable the ability to archive those off and re-use them (which will be instrumented next cycle).
16:13:48 <automagically> And why are we doing it?
16:13:49 * palendae hasn't looked at py_pkgs for a while
16:13:50 <cloudnull> right now, we have <things>_packages: ${some-list}
16:13:59 <palendae> Gotcha
16:14:20 <cloudnull> we'd need <things>_bin_packages: ${some-list}
16:14:26 <cloudnull> which would be similar to what we do for pip
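[Editor's note: a minimal sketch of the kind of package lookup being discussed — this is NOT the real py_pkgs plugin, and the variable names and suffixes (`_bin_packages`/`_distro_packages`) reflect the naming still under debate at this point in the meeting.]

```python
import re

# Hypothetical illustration: index every "<thing>_bin_packages" /
# "<thing>_distro_packages" variable found in a loaded vars mapping,
# so a container-cache build step could pre-install the union of them.
PKG_VAR = re.compile(r"^[a-z0-9_]+_(?:bin|distro)_packages$")

def index_distro_packages(all_vars):
    """Return a sorted, de-duplicated list of distro packages.

    all_vars: mapping of variable name -> list of package names,
    roughly what a role's defaults/vars look like after loading.
    """
    found = set()
    for name, value in all_vars.items():
        if PKG_VAR.match(name) and isinstance(value, list):
            found.update(value)
    return sorted(found)
```

This is why a consistent suffix matters: without one, the lookup has no reliable anchor to match on (cloudnull's point about the current mixed `_packages` names).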
16:14:44 <automagically> Blueprint/spec in progress?
16:14:54 <cloudnull> odyssey4me: has a spec
16:14:55 <odyssey4me> So the plan is just to lay the foundation this cycle.
16:14:57 <automagically> Generally seems like a good idea
16:15:15 <palendae> Kind of related - when is Newton-3?
16:15:20 <palendae> Would we want this done by then?
16:15:39 <odyssey4me> It's all groundwork for https://review.openstack.org/346038 - which is targeted for next cycle... but yeah, I'm hoping to get it in for experimentation for NEwton by N3.
16:15:41 <automagically> #link http://releases.openstack.org/newton/schedule.html
16:15:47 <automagically> 8/29-9/2
16:15:52 <evrardjp> cloudnull: using bin_packages to make sure the matching of _packages is unique, that's what you mean?
16:15:57 <palendae> So 4-7 days
16:16:04 <odyssey4me> All the CoW container creation work has been a part of this concept.
16:16:43 <odyssey4me> All I'm missing is the lookup and I should be able to get a patch up pretty quickly.
16:16:55 <odyssey4me> cloudnull wanted to just agree on a standard name for the var
16:17:00 <palendae> Ah
16:17:12 <palendae> Yeah, standardizing the name's pretty much a necessity IMO
16:17:16 <odyssey4me> I know that michaelgugino expressed that '_packages' wasn't a great name anyway.
16:17:22 <cloudnull> evrardjp: yes
16:17:44 <michaelgugino> yes, lots of types of packages.  Don't like that name at all.
16:17:44 <odyssey4me> So I think we could quite easily get '_bin_packages' done this cycle and the lookup at least.
16:18:17 <evrardjp> distro_packages or bin_packages
16:18:26 <evrardjp> do we plan to install src packages from distro?
16:18:28 <odyssey4me> yeah, those were my floated suggestions
16:18:31 <automagically> There is a whole bunch of not that well understood magic going on in py_pkgs. Let’s hope we have less of it in bin_pkgs, good testing and some doc
16:18:37 <michaelgugino> I prefer distro_packages
16:18:39 <evrardjp> it's not that it really matters though
16:18:47 <michaelgugino> of course, there are third party packages we use (galera)
16:19:11 <evrardjp> still installed as part of the distro (with another repo/ppa maybe)
16:19:25 <evrardjp> I don't really care of the name to be honest
16:19:28 <cloudnull> http://cdn.pasteraw.com/ao3dn1jpt8b0g0gywqrsbryhqh60y8i <- that's the current list of things w/ _packages that doesn't match pip
16:19:28 <evrardjp> I agree with the idea
16:19:35 <michaelgugino> also, I don't like calling things role_packages, as the actual roles are installed from source.  I would prefer <role>_distro_package_deps
16:19:43 <cloudnull> so having an anchor somewhere will be helpful
16:19:54 <cloudnull> I'm +1 for distro, bin ?
16:20:08 <odyssey4me> I'd like to try and at least have the os_roles optionally use this mechanism for Newton in the AIO. Nova builds several containers all with the same bin/pip package content... so it'd save quite a bit of time.
16:20:42 <evrardjp> what would be great, is to drop apt_
16:20:49 <evrardjp> the rest I don't really care :p
16:20:51 <odyssey4me> evrardjp we already did that :p
16:21:03 <evrardjp> not on the list of cloudnull odyssey4me:p
16:21:16 <odyssey4me> I don't feel too strongly about bin_packages or distro_packages.
16:21:32 <evrardjp> the choice comes down to the implementer then!
16:21:39 <cloudnull> as we go multi-distro apt/yum in the variable name has to go away
16:21:47 <evrardjp> don't forget release notes for those who override the packages
16:21:56 <evrardjp> cloudnull: agreed!
16:22:01 <evrardjp> that's why I am telling this :p
16:22:30 <odyssey4me> #help vote
16:22:33 <jmccrory> there should be existing release notes from when apt was dropped; they'll just need to be updated with the final choice of bin/distro
16:22:39 <odyssey4me> jmccrory ++
16:22:54 <evrardjp> ++
16:22:59 <mhayden> #chair mhayden odyssey4me
16:22:59 <openstack> Current chairs: mhayden odyssey4me
16:23:21 <automagically> Next topic?
16:23:34 <evrardjp> os_packages
16:23:37 <mhayden> can we get this onto the ML?
16:23:37 <odyssey4me> #startvote What's the name you prefer? bin distro
16:23:38 <openstack> Begin voting on: What's the name you prefer? Valid vote options are bin, distro.
16:23:40 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
16:23:50 <evrardjp> #vote os
16:23:51 <openstack> evrardjp: os is not a valid option. Valid options are bin, distro.
16:24:06 <matias> #vote distro
16:24:07 <jmccrory> #vote distro
16:24:12 <michaelgugino> #vote distro
16:24:19 <asettle> Vote 1 Distro!
16:24:27 <DrifterZA> #vote distro
16:24:32 <odyssey4me> #vote distro
16:24:33 <qwang> #vote distro
16:24:37 <eil397> #vote distro
16:24:38 <asettle> Distro for President!
16:24:46 <mhayden> #vote Pedro
16:24:47 <openstack> mhayden: Pedro is not a valid option. Valid options are bin, distro.
16:24:52 <mhayden> #vote distro
16:24:54 <odyssey4me> lol
16:24:54 <automagically> abstain
16:24:55 <evrardjp> mhayden: D
16:25:29 <odyssey4me> #endvote
16:25:29 <openstack> Voted on "What's the name you prefer?" Results are
16:25:30 <openstack> distro (8): matias, odyssey4me, eil397, DrifterZA, jmccrory, qwang, michaelgugino, mhayden
16:25:38 <odyssey4me> cloudnull there you have it :)
16:25:43 <evrardjp> :D
16:25:46 <evrardjp> next topic!
16:25:51 <asettle> Democracy at its finest.
16:26:12 <cloudnull> distro it is
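[Editor's note: with "distro" voted in, a role default would be renamed along these lines — the role name and package list below are illustrative only, not taken from an actual patch.]

```yaml
# Before (distro-specific prefix, already being phased out):
# nova_apt_packages:
#   - libvirt-bin
#
# After the agreed convention (hypothetical example values):
nova_distro_packages:
  - libvirt-bin
  - qemu-utils
```

As jmccrory notes above, deployers overriding the old variable names would need the release notes updated to point at the new `_distro_packages` suffix.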
16:26:27 <evrardjp> With Pedro and os as independent candidates, not sure whether the republicans or the democrats won this one!
16:26:39 <evrardjp> anyway...
16:26:42 <michaelgugino> I didn't realize Pedro was an option!
16:27:02 <mhayden> next topic?
16:27:02 <evrardjp> os_packages was great IMO
16:27:11 <evrardjp> yup
16:27:29 <mhayden> #topic Improvements to reduce container restarts (odyssey4me)
16:28:30 <odyssey4me> yeah, so I wanted to raise awareness and figure out whether this effort should continue or not - and if to continue then decide how to approach it
16:29:04 <odyssey4me> in https://review.openstack.org/347400 I've tried two different methods to reduce container restarts on creation or major upgrades (where we typically change config/bind mounts)
16:29:30 <odyssey4me> the general issue is that container restarts raise the possibility of service outages, and also deploy failures
16:29:35 <odyssey4me> the more we have, the worse it gets
16:30:03 <odyssey4me> right now with another patch coming in we should be down to 2 restarts max in a deploy - which isn't bad
16:30:16 <mhayden> 2 is fantastic
16:30:20 <evrardjp> it was really slow in the gates to have these restarts on heavily loaded hosts like ovh... now that we have osic it's kinda lighter... but I think it's better to have as few restarts as possible
16:30:21 <michaelgugino> My thinking is that we've made some good progress.  I don't think we need to over-optimize here.  I realize container restarts cause a lot of waiting, but whenever I do a build, I run the container create steps by themselves.  I really feel like this is an important step to watch and make sure things go smoothly
16:30:35 <odyssey4me> I'm hoping for no more than one, and in one place (ie in the lxc-container-create play, or in the service install play).
16:31:08 <odyssey4me> cloudnull has raised concerns about variable namespace collisions and unwanted side effects
16:31:35 <odyssey4me> to that end, I did some testing and was surprised to discover that in Ansible 2.1.1 the leaf group's var wins - which is what I had hoped.
16:31:49 <odyssey4me> #link https://gist.github.com/odyssey4me/4bd529fcf0e53385ebb7d02fa7bfd0e6
16:32:11 <odyssey4me> so it seems to me that implementing https://review.openstack.org/347400 is perfectly safe
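[Editor's note: a toy model of the behaviour odyssey4me's gist describes — Ansible 2.1.1 applying group vars parent-first so the deepest (leaf) group wins a collision. This is NOT Ansible's actual merge code, just an illustration; it also shows cloudnull's concern, since two groups at the *same* depth collide in an order-dependent way.]

```python
# Toy model of group_vars merging where deeper (leaf) groups win.
def merge_group_vars(groups_by_depth):
    """groups_by_depth: list of (depth, vars_dict) tuples.

    Shallower groups are applied first, so the deepest group's
    values overwrite its ancestors' on any collision. For equal
    depths the result depends on list order -- the unsafe case
    raised in the discussion.
    """
    merged = {}
    for _, group_vars in sorted(groups_by_depth, key=lambda g: g[0]):
        merged.update(group_vars)
    return merged
```

For example, a bind-mount list set on a top-level group is overridden by the value set on a leaf group containing the same host, matching the gist's result.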
16:32:34 <cloudnull> i disagree.
16:32:45 <odyssey4me> the only outstanding concern would be that if a deployer wants to override the bind mounts for a specific group... if that's something that we want to instrument, I have a way to do it
16:32:54 <cloudnull> add the host to multiple groups watch the collision.
16:32:57 <jmccrory> how does it decide when a host is a member of multiple groups?
16:33:11 <odyssey4me> cloudnull that host was a member of multiple groups
16:33:15 <odyssey4me> read the gist
16:33:23 <cloudnull> i did
16:33:38 <cloudnull> i also read the ansible code and added debug statements to it.
16:33:46 <cloudnull> its still squashed.
16:33:57 <odyssey4me> the only way there'd be a problem is when the host is a member of two groups at the same level, in which case the group design is bad
16:33:58 <cloudnull> namespace or bust.
16:34:28 <cloudnull> the fact that there would be a problem, is a problem
16:34:29 <michaelgugino> I prefer that things be as declarative as possible.  It's getting really tricky to trace through some of this stuff
16:34:41 <odyssey4me> the only other approach here would be to move the container creation and config into the service plays themselves.
16:34:55 <odyssey4me> that would achieve the goals, but will be tricky to get right
16:35:16 <cloudnull> this is how ansible smashes group vars http://cdn.pasteraw.com/62hcs9hp7u5i4a5rvuzvwxs519g6hh1
16:35:22 <automagically> I’m in favor of the create containers in the service plays approach
16:35:33 <evrardjp> the ansible way of doing things says we should have a variable name that's relevant to the role you're applying, and the group should properly list variables -- I don't see a problem of overlap if we manage stuff properly
16:35:33 <cloudnull> note the list keeps growing
16:35:36 <automagically> I think it will go a long way to make the project more approachable to new deployers
16:36:06 <evrardjp> automagically: you mean closer to ansible standards?
16:36:33 <automagically> evrardjp: That will also help, but I meant “create containers in the service plays approach"
16:36:33 <odyssey4me> automagically yeah, it does seem more intuitive that way - the container gets created when you build the service - and all changes to that service happen when you execute the play for the service
16:37:06 <michaelgugino> I disagree.  I think the containers are more of the infrastructure that needs to be in place to deploy.  It's perfectly intuitive as-is
16:37:07 <cloudnull> unless we can get https://github.com/ansible/ansible/pull/6666 or something similar
16:37:13 <cloudnull> im against overloading group_vars
16:37:14 <michaelgugino> the role should be able to deploy to any target
16:37:15 <evrardjp> automagically: agreed on that too.
16:37:18 <cloudnull> its not safe.
16:37:44 <odyssey4me> cloudnull the approach of doing it in the service plays would be to apply a namespaced var to the role execution
16:37:53 <odyssey4me> which is exactly what you were suggesting should be done
16:38:08 <evrardjp> I don't understand the problem of group_vars
16:38:15 <cloudnull> odyssey4me: yes.
16:38:19 <odyssey4me> it would also mean that the conditionals could be handled at the same time.
16:38:20 <cloudnull> i'd not be against that
16:38:30 <evrardjp> yes they can overlap, but it's on us to be smart and not do stupid overlapping things
16:38:50 <odyssey4me> I've not put a sample patch together yet because I want your /var/log bind mount to get in before I start reworking it all.
16:38:59 <cloudnull> if the container was created in the service plays and that solves the issue, great!
16:39:04 <evrardjp> I don't think we have a need of multi-layer variable scoping else than what ansible does
16:39:33 <odyssey4me> michaelgugino we can still have the entrypoint of using the lxc-container-create playbook - it just won't be a default path any more
16:39:37 <michaelgugino> putting the container creation in the service plays locks us into using inventory as-is
16:39:41 <odyssey4me> so if you want the two step approach, you can get it
16:39:44 <cloudnull> but shoving everything into group_vars and hoping folks don't munge the groups, no.
16:40:04 <odyssey4me> michaelgugino erm, no it doesn't - it'll be no different to today - just executed in a different order
16:40:06 <evrardjp> let's stay close to ansible standards, it makes it simpler for ppl to discover the product
16:40:14 <automagically> evrardjp: ++
16:40:35 <cloudnull> ^ +1
16:40:50 <automagically> So, looking forward to seeing the patch with the creation in the service play
16:40:55 <evrardjp> even if, yes, they suck sometimes.
16:40:56 <automagically> Think we need that in order to make a decision
16:41:08 <automagically> Seems rational to wait for the log bind mount to merge first
16:41:11 <odyssey4me> ok, I'll have one up by next week
16:41:14 <odyssey4me> yep
16:41:36 <michaelgugino> I think the plays and roles should be separated from the infrastructure as much as possible.
16:42:02 <palendae> At some point they have to intersect
16:42:34 <cloudnull> odyssey4me: container create could all be moved into a common-task when is_metal=false
16:43:04 <odyssey4me> cloudnull yep, I pretty much have the concept in my mind and think it should work just fine
16:43:07 <cloudnull> but i'll leave that to next week .
16:43:17 <DrifterZA> odyssey4me: we can test it next week
16:43:35 <odyssey4me> I do like that it also makes an easy solution for the conditional configs/bind mounts
16:43:54 <odyssey4me> but I'd like to unwind some of that scary task logic that's going on in some of the plays
16:44:18 <odyssey4me> it's close to unreadable in some of them
16:44:57 <odyssey4me> thanks mhayden - done here for now
16:45:00 <mhayden> woot
16:45:07 <mhayden> #topic Release planning & decisions
16:45:19 * mhayden hands the mic back to odyssey4me ;)
16:45:33 <odyssey4me> right, I need to submit release requests - anything outstanding that needs a push before I do that?
16:46:04 <mhayden> odyssey4me: i'd like to get that galera bind mount fix into 13.3.2 (or whatever is next)
16:46:06 <michaelgugino> nova-lxd and nova are not tracking ATM.
16:46:20 <michaelgugino> so, for master we need to get nova bumped
16:46:30 <odyssey4me> michaelgugino we're talking stable branches for the moment
16:46:36 <michaelgugino> ok
16:46:37 <odyssey4me> but yes, I will do a newton bump too
16:46:58 <odyssey4me> yeah, so mhayden's patch for the bad galera mount needs a review: https://review.openstack.org/358852
16:47:09 <odyssey4me> it's a bit ugly, but it will do and is safe
16:47:33 <odyssey4me> but it looks like that'll need to wait for the next release
16:47:40 <palendae> I'm still looking for a solution to https://bugs.launchpad.net/openstack-ansible/+bug/1614211, which I think might need a rewrite of how services are registered with keystone
16:47:40 <openstack> Launchpad bug 1614211 in openstack-ansible trunk "Playbook Runs Fail in Multi-Domain Environments" [Medium,Confirmed] - Assigned to Nolan Brubaker (nolan-brubaker)
16:47:43 <palendae> But I still don't know for sure
16:48:05 <automagically> palendae: Need help testing cases?
16:48:06 <odyssey4me> palendae so the registration into the default domain should be fine already?
16:48:31 <palendae> odyssey4me, It's evidently not. The playbooks get a 403 with the current setup using the multidomain config
16:48:33 <odyssey4me> but registration into other domains won't be without adding a domain parameter to all service reg tasks
16:48:43 <palendae> They're not registering into other domains
16:48:52 <palendae> The problem is it's not using a domain-scoped token
16:48:59 <odyssey4me> oh, interesting - and the module passes the domain as a parameter?
16:49:14 <palendae> Yes, but if you pass domain and project/tenant, you get project/tenant
16:49:18 <cloudnull> we also might want to consider moving to the upstream shade modules.
16:49:33 <cloudnull> and dropping our keystone module.
16:49:36 <palendae> Our keystone module currently only supports project/tenant
16:49:36 <evrardjp> cloudnull: good idea
16:49:39 <odyssey4me> ah yes - so we need to pass either domain or project
16:49:42 <palendae> Right
16:49:47 <palendae> Or making a token
16:49:54 <palendae> Which is correctly scoped
16:50:02 <palendae> Same idea, but happens in a different spot
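[Editor's note: the fix under discussion is to scope the auth to either a domain or a project, never both — project scope silently wins in the current module. A hedged sketch follows: the request-body layout is the documented Keystone Identity v3 token API, but the helper function itself is hypothetical, not OSA's keystone module.]

```python
def build_v3_auth(username, password, user_domain,
                  project_name=None, domain_name=None):
    """Build a Keystone v3 token request body scoped to either a
    project or a domain. Passing both is the bug described above:
    project scope would silently win, so reject it outright."""
    if project_name and domain_name:
        raise ValueError("scope to a project OR a domain, not both")
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": user_domain},
                        "password": password,
                    }
                },
            }
        }
    }
    if project_name:
        body["auth"]["scope"] = {
            "project": {"name": project_name,
                        "domain": {"name": user_domain}}
        }
    elif domain_name:
        body["auth"]["scope"] = {"domain": {"name": domain_name}}
    return body
```

A domain-scoped token obtained this way is what the service-registration tasks would need in the multi-domain case palendae describes.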
16:50:11 <palendae> cloudnull, You're talking about shade?
16:50:23 <odyssey4me> well, if we just leave out the project name from the parameters (for the login user) then it should work?
16:50:31 <palendae> I'm not at all opposed, but it's a big ask
16:50:35 <evrardjp> cloudnull: not sure if everything is implemented there though
16:50:42 <odyssey4me> and cloudnull yes for shade, but that won't help liberty/mitaka
16:50:53 <evrardjp> it's worth discussing with bluebox too to share modules?
16:50:54 <palendae> odyssey4me, From the module call? the module will error out if you don't provide the project
16:50:54 <cloudnull> no it won't,
16:50:56 <cloudnull> http://docs.ansible.com/ansible/list_of_cloud_modules.html#openstack
16:50:58 <cloudnull> ^ palendae
16:51:21 <odyssey4me> palendae sure, it'll need adjustment to the module - assuming that the API accepts it then that should be fine
16:51:23 <cloudnull> evrardjp: at last check the modules should now support multiple domains
16:51:25 <palendae> Our current module has logic that if it didn't receive a token or a project_name, it bails
16:51:32 <palendae> But yes, I'm testing these cases
16:51:33 <cloudnull> but we can ping mordred about that if they dont.
16:51:40 <evrardjp> that's good
16:51:41 <palendae> I haven't arrived at one that I've gotten working yet
16:51:49 <odyssey4me> ok cool
16:51:57 <palendae> In any case, this will require editing more than just keystone
16:52:10 <palendae> It'll be changes to any service that registers with the catalog
16:52:10 <odyssey4me> we may have to just leave liberty as-is... but I'm guessing that mitaka will need this fix
16:52:16 <palendae> Yeah, that was the request
16:52:23 <palendae> I guess it's blocking an upgrade
16:52:44 <odyssey4me> ok, but it's not a release blocker now (for this tag) as this is new functionality
16:52:53 <palendae> Yeah, I wouldn't say it's a release blocker
16:52:57 <odyssey4me> you're just asking for help to figure out a good solution
16:52:58 <palendae> Mostly just stating I'll likely miss
16:52:59 <odyssey4me> ?
16:53:11 <palendae> Partially, yes
16:53:47 <palendae> And that implementing it fully will mean changing a bunch of roles
16:54:29 <odyssey4me> yeah - we may want to do it in master, run it for two weeks with testing, then backport just after the next tag and do the same
16:54:40 <odyssey4me> that way we get max test coverage
16:54:46 <palendae> Yep
16:54:50 <odyssey4me> even if it is human test coverage
16:54:53 <palendae> Planning on doing it in master
16:55:03 <palendae> Since that behaves the same as mitaka
16:55:25 <odyssey4me> ok, so release discussion done :) thanks mhayden
16:55:31 <mhayden> 5 minute warning ;)
16:55:41 <mhayden> #topic Open floor
16:55:46 <mhayden> just a few minutes remaining here
16:55:51 <mhayden> anything to bring up?
16:56:26 <cloudnull> 5 min warning https://www.youtube.com/watch?v=NP2ipX0aiJo
16:56:40 <evrardjp> nothing, we had so many successes this week, I'm speechless
16:56:42 <automagically> Just an FYI - I’ll be on vacation all next week, so you won’t be seeing me in IRC
16:56:51 <cloudnull> enjoy !
16:56:55 <mhayden> enjoy, automagically!
16:57:09 <evrardjp> enjoy :D
16:57:13 <automagically> Could really use some help closing out the Ansible 2.1.1 role stuff. mhayden pitched in a bit yesterday, would definitely appreciate more eyes
16:57:23 <mhayden> i'll try to commit some time there
16:57:25 <automagically> Bit of a nasty item to untangle.
16:57:30 <cloudnull> ++
16:57:42 <evrardjp> I'd like to pitch in -- I have to allocate time to it.
16:57:54 <evrardjp> I'll probably help when my upgrade bit is ready
16:58:03 <odyssey4me> I think that naughty CentOS lxc start bug is more painful at this stage.
16:58:07 <odyssey4me> once that flows, so will others.
16:58:27 <automagically> sounds like cloudnull has an idea on that
16:58:40 <mhayden> okay, i'll close this thing up unless there's anything else
16:58:40 <cloudnull> add retry
16:58:46 <evrardjp> :D
16:58:53 <eil397> : )
16:58:54 <mhayden> thanks everyone! :)
16:58:55 <cloudnull> ^ shitty i know. but seems to work
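[Editor's note: cloudnull's "add retry" for the CentOS lxc start bug can be expressed with Ansible's standard `until`/`retries`/`delay` keywords. The task below is illustrative only — the module options shown are a sketch, not the actual patch; `until: _result | success` is the 2.1-era filter syntax.]

```yaml
# Illustrative sketch: retry a flaky container start a few times
# before failing, rather than dying on the first transient error.
- name: Start LXC container
  lxc_container:
    name: "{{ inventory_hostname }}"
    state: started
  register: _container_start
  until: _container_start | success
  retries: 3
  delay: 5
```

Inelegant, as cloudnull says, but a common pattern for papering over transient start failures until the underlying bug is fixed.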
16:58:57 <mhayden> #endmeeting