16:02:04 #startmeeting OpenStack Ansible Meeting
16:02:05 Meeting started Thu Feb 11 16:02:04 2016 UTC and is due to finish in 60 minutes. The chair is cloudnull. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:09 The meeting name has been set to 'openstack_ansible_meeting'
16:02:27 #topic rollcall
16:02:30 o/
16:02:34 o/ here, but will be leaving shortly - cloudnull is chair
16:02:53 how's it going, all?
16:03:34 so let's get started
16:03:40 here
16:03:51 looks like most of the action items from last week were from odyssey4me
16:03:54 #link http://eavesdrop.openstack.org/meetings/openstack_ansible/2016/openstack_ansible.2016-02-04-16.04.html
16:04:22 and I believe they've all been accomplished
16:04:22 o/
16:04:24 odyssey4me ?
16:04:35 o/
16:04:44 cloudnull yep :)
16:04:54 centos gating is online and I've seen the ML discussion RE: the midcycle
16:04:58 so i think we're good there.
16:05:27 #topic Topics for Discussion
16:06:02 I'm in another meeting, so ping me for anything please.
16:06:14 odyssey4me -- RE: Mid-Cycle Meetup -- how's that going?
16:06:19 I assume all is well
16:06:22 thanks michaelgugino will do
16:06:36 can we get votes in on sessions for https://etherpad.openstack.org/p/openstack-ansible-mitaka-midcycle so that we can prioritise in case there isn't enough time
16:06:37 same here, have another scrum at 11, can you ping during open discussion please?
16:06:50 if anyone has other proposed talks they'd like to add, please add them
16:06:50 will do v1k0d3n
16:07:03 #link https://etherpad.openstack.org/p/openstack-ansible-mitaka-midcycle
16:08:26 hello all :)
16:08:29 o/
16:08:36 #topic Support for multiple availability zones
16:08:47 admin0 This was your item
16:08:55 what would you like to discuss ?
16:09:09 well, regarding AZs, i am wondering if there is/will/can be an easier way to bind resources together rather than using host-specific overrides, which is a bit tedious and time consuming
16:09:20 especially if you've got like 60 nodes in 3 AZs each
16:09:42 so is this specifically a nova / cinder AZ ?
16:09:54 yep
16:10:17 how would you like to see that done in a perfect world ?
16:10:44 was what I recommended a bad way or a tedious way ?
16:10:56 * cloudnull is looking for the LP issue
16:11:04 i would like to group resources in an AZ .. and be able to define AZ-based ips and resources
16:11:24 so that just 1 deployment can take care of multiple AZs, since they share the same db/api
16:11:24 is there a reason why this couldn't just be a post-install configuration done by hand?
16:11:45 as far as I recall you can set hosts into different AZs through the CLI
16:12:07 well the reason to use ansible and automation is to try to automate as much as possible right . so that was a thought
16:12:40 with 5 people or many people in the team, having this controlled using ansible prevents mistakes
16:12:56 odyssey4me: and web interface
16:13:11 because the ansible governs what goes where and that can be reviewed before deployment, rather than depending on who is doing what post-install on cli
16:13:15 o/ late
16:13:20 o/ spotz
16:14:04 I can see the merits of determining the az for a nova/cinder node.
16:14:27 As long as whatever is implemented is reasonably maintainable, I'm ok with it. I'm wary of adding a bunch of conditional code - we already have too much of it and it makes the logic hard to follow.
16:14:35 however I can also see that automation living outside of the main deployment
16:14:44 so idk
16:14:54 We need to aim for more simplicity in the deployment.
16:15:12 admin0: I recently had to deal with AZs for a 500 node deployment via the CLI and it's a bit of a pain.
16:15:24 so i'm on the fence
16:15:34 So yeah, I'm not against it - I'd just like to see a proposed review, and perhaps discussion is better suited in such a review.
16:15:54 admin0: in your LP issue you noted assigning an AZ based on a CIDR group, is that right?
16:16:00 isn't openstack-ansible already setting an AZ for cinder nodes?
16:16:00 * cloudnull is failing at his LP searching
16:16:10 #link https://bugs.launchpad.net/openstack-ansible/+bug/1539803
16:16:11 Launchpad bug 1539803 in openstack-ansible "enable support for availability zones for compute and services" [Undecided,Invalid]
16:16:18 ^ that one
16:16:22 thanks odyssey4me
16:16:53 the bug description has a proposed solution, but that doesn't really give me a sense of the impact to the code
16:17:13 * odyssey4me is a little slow to comprehend these things today :)
16:17:13 well, that is what we use now and find it easy .. because the internal CIDRs are different on both zones . only the management CIDR being accessible from the deploy server
16:17:42 couldn't you do the AZ stuff using group vars since all of the settings are there
16:17:49 that seems like a sensible way to do it
16:18:08 i have just started to play with it .. is there an example of this already?
16:18:13 why wouldn't you use it the same way as it's done on cinder?
16:18:15 * admin0 feels n00bish
16:18:25 https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_cinder/templates/cinder.conf.j2#L77
16:18:56 nova_availability_zone should be defined in the openstack_user_config if it's a group override
16:18:57 odyssey4me: i think the point is that you'd have to add that to every host, which is time consuming, especially as the deployment grows
16:19:26 500 nodes == 500 entries of the same config
16:19:34 cloudnull I think you meant to inform evrardjp of that :)
16:19:38 yeah the openstack_user_config doesn't have a great way to set group vars afaik, you have to set tons of container vars
16:19:56 we grow by 6 compute nodes per az and 2 cinder nodes per az, and this i see as outside of the deployment, and something the deployment could handle
16:20:01 odyssey4me: yes sorry evrardjp **
16:20:23 perhaps a better long term solution here is to enable better usage of group_vars in the dynamic inventory?
16:20:24 each az has its own api, storage, nova clusters . only swift is on all zones
16:20:27 for something like this you would almost be better off segmenting compute_hosts and storage_hosts into some subgroups and putting some files in inventory/group_vars
16:20:58 yeah group vars in dynamic inventory would be so helpful
16:20:58 my proposal was based on the fact that i could not find a lot of ways to do it .. if there are examples, i can try again and come back if really stuck
16:20:59 maybe something like dynamic groups based on arbitrary criteria (eg cidr)
16:21:13 Or a group membership stanza
16:21:28 there we go, same discussion again :)
16:21:58 Sorry, am I being cyclical? trying to look at multiple things
16:22:04 if this discussion has come up again and again, there must be a reason right :D
16:22:05 i think logan- is on to something there.
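[Editor's note] The per-host override tedium being discussed can be sketched as follows. This is a minimal illustration, not taken from the meeting or the project's documentation: the host names and IPs are invented, and the variable name `cinder_storage_availability_zone` is assumed from the linked cinder.conf.j2 template.

```yaml
# Hypothetical excerpt from /etc/openstack_deploy/openstack_user_config.yml.
# Each host needs its own container_vars stanza to land in an AZ -- the
# "500 nodes == 500 entries of the same config" problem.
storage_hosts:
  cinder-az1-01:
    ip: 172.29.236.11
    container_vars:
      cinder_storage_availability_zone: az1   # assumed variable name
  cinder-az2-01:
    ip: 172.29.237.11
    container_vars:
      cinder_storage_availability_zone: az2
  # ...repeated per host, per AZ
```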
if it were subgroups, or if we documented how to do that better, then you could define group_vars in a simpler way
16:22:12 maybe we need to have a proper discussion about the next evolution of the dynamic inventory at the mid cycle?
16:22:26 ^ yes
16:22:31 +1
16:22:41 I know neillc was looking at implementing it as a library
16:22:41 i do it out of necessity for some things (modifying inventory directly and adding some subgroups of hosts in static files to build on top of dynamic inventory)
16:22:43 but i hate doing it
16:22:50 if i could do it via dynamic inventory that would be great
16:22:50 or yes - another way to handle this would be user-developed env.d changes for altered group specs
16:22:51 tagging hosts to groups and having a way to define variables for these tags is IMO important
16:22:53 With a first pass of just making it a library, with current functionality
16:23:08 odyssey4me: Yeah, I think we talked about an evolution of env.d
16:23:15 These things aren't necessarily exclusive, either
16:23:38 ok, personally I don't think we're going to solve this in this meeting and we should dedicate a slot to it in the mid cycle
16:23:43 Nope
16:23:45 Yep
16:23:46 can someone propose and facilitate it?
16:24:01 what needs to be done for that ?
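[Editor's note] The "dynamic groups based on arbitrary criteria (eg cidr)" idea floated above can be sketched as a standalone dynamic-inventory script. This is not openstack-ansible's actual inventory code; the hosts, CIDRs, and group names are invented for illustration.

```python
# Sketch of CIDR-based dynamic grouping for an Ansible inventory.
# Hosts whose management IP falls inside an AZ's CIDR are placed in that
# AZ's group, and the group carries a shared var -- so no per-host override
# is needed. All names and addresses here are hypothetical.
import ipaddress
import json

# AZ name -> management CIDR (hypothetical values)
AZ_CIDRS = {
    "az1": "172.29.236.0/24",
    "az2": "172.29.237.0/24",
}

HOSTS = {
    "compute1": "172.29.236.11",
    "compute2": "172.29.236.12",
    "compute3": "172.29.237.11",
}


def build_inventory(hosts, az_cidrs):
    """Return an Ansible dynamic-inventory dict with one group per AZ."""
    inventory = {
        "_meta": {
            "hostvars": {h: {"ansible_host": ip} for h, ip in hosts.items()}
        }
    }
    for az, cidr in az_cidrs.items():
        net = ipaddress.ip_network(cidr)
        members = [h for h, ip in hosts.items()
                   if ipaddress.ip_address(ip) in net]
        inventory[az] = {
            "hosts": members,
            # A single group var replaces N identical per-host entries.
            "vars": {"availability_zone": az},
        }
    return inventory


if __name__ == "__main__":
    # Ansible calls dynamic inventory scripts with --list and reads JSON.
    print(json.dumps(build_inventory(HOSTS, AZ_CIDRS), indent=2))
```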
16:24:02 be sure to add an etherpad link so that discussion can already begin ahead of time
16:24:05 not sure how far along this is https://blueprints.launchpad.net/openstack-ansible/+spec/dynamic-inventory-lib, but maybe we could include this discussion as part of that
16:24:30 jmccrory: neillc is looking at it, but I know he was out for a few weeks
16:24:40 we need to examine new requirements and work out a good way to cater for those
16:24:42 His implementation is starting as just moving the current code into a library
16:25:02 yeah, I think the lib effort should be separate from that
16:25:11 this should just be to figure out how to cater better for scale
16:25:35 these requirements can then be added to Neil's work - and we can work out a work breakdown after that
16:25:47 Makes sense to me
16:26:22 +1
16:26:37 so when will this be discussed again ?
16:26:51 Sounds like midcycle
16:27:15 but before then, in an etherpad
16:27:20 ok
16:27:27 what do I need to do :D ?
16:27:31 who's going to facilitate the session?
I'm happy to assist with that
16:27:55 admin0 see the links in https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
16:28:08 got it
16:28:10 get yourself a remote ticket, if you're not able to physically attend
16:28:23 and add a proposed work session to the work session list
16:28:53 if you're in the US, we'll try and time the work session for the UK afternoon
16:29:14 i am in the Netherlands
16:29:18 so :(
16:29:37 we'll be in the UK for the midcycle
16:29:40 alright, then we can do it almost any time in the day - but I think many of the US folk would like to contribute to the discussion
16:29:40 :)
16:29:46 just a hop, skip and a jump
16:29:58 I'm not sure I can facilitate, but I would like to participate in that discussion
16:30:01 so admin0 you can easily attend remotely, and facilitate remotely
16:30:07 i will try :)
16:30:35 I'll also take part in this discussion :p
16:30:51 nice
16:31:00 so let's move on to the next topic
16:31:13 #topic Blueprint work
16:31:31 michaelgugino -- how's Multi-OS Support going ?
16:31:58 we talked a little about that yesterday
16:32:14 I wanted to talk about a few blockers
16:32:19 sure thing
16:32:50 1) containers. the bulk of the work needs to be done to services that run in containers. We need to update our container strategy
16:33:04 for starters, we should probably make it so host distro = container distro
16:33:18 ++
16:33:23 michaelgugino: Agreed.
16:33:30 michaelgugino yep, we agreed on that in the principles of the work
16:33:40 ya, that was talked about when this first came up
16:33:47 while it may be possible for a deployer to configure it differently, we will not test other options
16:33:56 so, there is a lot of refactoring that needs to go into that. I think that is where we need to start, and where the bulk of the effort actually lies.
16:34:26 see 'basic principles' in https://etherpad.openstack.org/p/openstack-ansible-multi-os-support :)
16:34:38 next up, we need to figure out if we're going to support a new version of galera/percona
16:35:03 yes we can, for master - but not liberty
16:35:15 why a new version?
16:35:32 because newer distros don't support those older packages
16:36:04 michaelgugino: new as in 10.x?
16:36:11 10.1 iirc
16:36:13 michaelgugino: is this related to the xtrabackup work ?
16:36:35 right, specifically the xtrabackup package is hard-coded to a specific version
16:36:59 that version number is not available for Wily in upstream packaging.
16:37:23 before I run off - at the top of https://etherpad.openstack.org/p/openstack-ansible-multi-os-support is a list of the independent repositories for the apt install / var split out... can we get some volunteers to provide patches for each repo based on the pattern set out in https://review.openstack.org/274290 ?
16:37:25 installing the current package via dpkg -i as we currently do fails.
16:38:15 odyssey4me: i'd like to grab one for sure
16:38:42 please add your name, and then add your review link next to the repo
16:38:50 first your name so that we don't duplicate work
16:39:25 michaelgugino: I wonder if we can make that a tunable that can otherwise be turned off ?
16:39:29 for the xtrabackup package
16:40:50 I think we need, at the very least, to make the package version portable
16:42:08 agreed.
16:42:31 maybe that can be conditional includes similar to what odyssey4me was proposing with the package install refactor
16:42:49 I think that is a good approach, and what I was going to do.
16:43:33 then we can default to something like a portable package and become more restrictive based on the include
16:44:12 right, but for galera specifically, we should have a game plan going forward
16:44:47 If that was how you were envisioning the approach, I'm +1
16:44:49 "we support the stable release for the respective distro from upstream" Something like that.
16:45:11 what happens with rolling distros?
16:45:37 or we can always say that we only support the LTS for now?
16:45:50 prometheanfire will be sad
16:45:55 I think that the distro-specific vars can cater for a uniform approach, and where needed there can be distro-specific tasks too.
16:46:36 ie if one distro requires the addition of a repo, but the other doesn't - then that is applied in the package-mgr-specific task set
16:46:41 evrardjp: we can support packages for extended periods, and it's honestly not many
16:46:48 what about only supporting distros that are tested in the gates?
16:47:02 evrardjp: I do agree with that
16:47:07 sorry if it's too specific, but it's the simplest
16:47:11 but anyway - the primary targets right now are Ubuntu & CentOS - we can iterate the approach once those are done
16:47:24 yarp
16:47:44 I'd rather that we didn't over-engineer things
16:47:51 simpler is better
16:48:41 Once we get Ubuntu and CentOS supported, that will cover most users.
16:49:16 ++
16:49:21 Debian should be easy to add after we get systemd working.
16:49:32 But, the key question is, when do we stop supporting 14.04?
16:49:43 I figured we'd have to go with the guidance from Openstack.org
16:49:45 16.04 will be systemd too, right?
16:49:50 yes
16:50:26 systemd will help make cross-distro functionality easier, hopefully most distros will support the networkd config as well
16:50:33 Once 16.04 launches, Debian, CentOS, and Ubuntu LTS will all be on systemd.
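[Editor's note] The "distro-specific vars, uniform tasks" pattern discussed here is a common Ansible idiom and can be sketched as below. The file and variable names are illustrative, not taken from the actual review under discussion.

```yaml
# Generic Ansible sketch: load the most specific vars file that exists,
# falling back to broader ones. Task logic stays uniform; only vars differ.
- name: Gather variables for each operating system
  include_vars: "{{ item }}"
  with_first_found:
    - "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
    - "{{ ansible_distribution | lower }}.yml"
    - "{{ ansible_os_family | lower }}.yml"
  tags:
    - always

# A vars/ubuntu-14.04.yml could then pin a known-good package version, while
# a newer distro's vars file leaves it unpinned ("portable"), along the lines
# of the xtrabackup discussion above (hypothetical variable name):
#   galera_xtrabackup_package: "percona-xtrabackup=<pinned version>"
#   galera_xtrabackup_package: "percona-xtrabackup"
```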
16:50:47 gentoo will too
16:50:53 michaelgugino: I personally envisioned that the first version to support 16.04 would support both 14/16, then the one after, 16.04 only
16:50:57 well, already are
16:50:59 I think the systemd changes are the bulk of what will be needed to be multi-distro compatible
16:51:01 yep, but production environments will have a mix of 14.04 and 16.04, so for a cycle or two we'll need to carry support for both
16:51:08 What odyssey4me said
16:51:18 +1
16:51:27 Upgrades that cycle are going to be... interesting
16:51:30 ya, in the meantime extra fun for all
16:51:31 haha yep
16:51:37 I think we should support 14.04 until 17.04 is released.
16:51:47 17.04 an lts?
16:51:56 thought the next one would be 18
16:51:58 i still think we need strong criteria for distros other than ubuntu/centos
16:52:02 prometheanfire: It is
16:52:12 no, but it's a solid year from launch of 16.04 which will give people 6 months to move, 6 months to bake
16:52:24 until official deprecation of ubuntu? so maintain until something before 19.04? ;)
16:52:25 mattt: I think being able to gate is good
16:52:32 prometheanfire: wrong.
16:52:39 oh, this is nice
16:52:46 :)
16:52:50 michaelgugino: I think a lot of places stay on LTS
16:52:58 I've added a note in the etherpad regarding the success criteria for the package install and var split.
16:53:16 ok so we're almost out of time and I'd like to open it up for other general discussions.
16:53:28 #topic Open discussion
16:53:32 michaelgugino I think that is a good suggestion for input to one of the mid cycle sessions. :)
16:53:38 which i know v1k0d3n wanted to be pinged about
16:53:40 we can discuss in more detail there.
16:53:54 hello o/
16:54:03 so feel free to keep discussing but i wanted to let in more
16:54:09 I won't be able to make the midcycle, and I'll be traveling next week. But, I'll be at the summit.
16:54:10 o/ v1k0d3n
16:54:49 is it ok to bring up the tower deployment and see if anyone is interested?
16:55:13 michaelgugino: no worries, we can carry the convo on within the channel as you find time. I'd imagine the summit will be a good opportunity to cover a lot.
16:55:19 sure
16:55:21 michaelgugino we'll not necessarily finalise everything at the mid cycle - the intent is to start the conversation, which will then move into a proposed review
16:55:22 quick question about this: does someone know if tower will be open sourced at some point ?
16:55:26 v1k0d3n: sure thing
16:55:31 we can then revisit everything at the summit
16:55:40 this would help with the dynamic inventory thingy
16:55:43 evrardjp idk
16:56:00 i know the folks at ansible have teased that in the past
16:56:05 but i've heard nothing recently
16:56:10 ok, so tyler cross from ansible and i have been discussing some ways to simplify the deployment and management of the openstack-ansible deployment (or at least give another option for management)...
16:56:31 i'm new to this (different background), but we came up with this: https://github.com/v1k0d3n/tower-openstack
16:56:50 v1k0d3n: tower would be sweet! if it can do what we need it to
16:56:56 i want to get suggestions for workflow, but there are two main workflows that i can think of that would be useful....
16:57:06 management of the first openstack deployment onto bareOS
16:57:24 the second would be OSaaS within OS for development purposes...
16:58:02 and the OSaaS or bareOS deployment would use variables (in a survey provided to the end user) that could collect those variables used for the deployment (of each).
16:58:07 does anyone like this idea?
16:58:39 I'd be very interested in tower working for bareOS deployments
16:58:39 for each deployment... bareOS or OSaaS, multi-node and AIO would be options.
16:58:58 yeah it's cool
16:59:22 i want to prioritize the work effort, and see if folks would be interested in contributing by pull requests, etc.
16:59:35 ++
16:59:47 ok we're out of time.
16:59:55 we can carry on in the channel
16:59:58 +1 v1k0d3n
16:59:59 thanks everyone
17:00:02 thanks
17:00:05 #endmeeting