15:57:18 <cloudnull> #startmeeting openstack-ansible
15:57:20 <openstack> Meeting started Thu Jan 15 15:57:18 2015 UTC and is due to finish in 60 minutes.  The chair is cloudnull. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:57:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:57:24 <openstack> The meeting name has been set to 'openstack_ansible'
15:57:42 <alextricity> #help
15:57:45 <alextricity> fail
16:00:19 <cloudnull> so, let's get started.
16:00:20 <sigmavirus24> o/ cloudnull
16:00:36 <cloudnull> first on the agenda is our milestones
16:00:52 <cloudnull> we have 10.1.2 and 9.0.6 that need to be prioritized.
16:01:25 <cloudnull> I think that we can commit to dev readiness for these "releases" in 2 weeks.
16:01:34 <cloudnull> thoughts ?
16:02:00 <palendae> cloudnull: Do we have a list of what's going to be on those milestones?
16:02:07 <mattt> i'm not aware of much happening in 9.0.x, quite a few things are in the pipeline for juno branch right now tho, so that will need a fair bit of testing
16:02:38 <cloudnull> palendae: i'm looking at https://launchpad.net/openstack-ansible/+milestone/9.0.6
16:02:40 <andymccr> cloudnull: i think logging is a big "must go" for that release
16:02:54 <cloudnull> and https://launchpad.net/openstack-ansible/+milestone/10.1.2
16:03:17 <palendae> Ok
16:03:17 <b3rnard0> cloudnull should we do a roll call first :-)
16:03:23 <cloudnull> andymccr Agreed. i think logging needs love and is a must go.
16:03:41 <palendae> I think this could be pulled off of the milestones, since RBAC is evidently broken - https://bugs.launchpad.net/openstack-ansible/+bug/1408363
16:04:12 <cloudnull> b3rnard0: we're already moving along :)
16:04:50 <cloudnull> palendae: i tend to agree; those changes were recently reverted.
16:05:03 <cloudnull> sigmavirus24: was working on that a bit, sigmavirus24 thoughts?
16:05:19 <sigmavirus24> Glance is terribly broken and can't support what we want.
16:05:22 <cloudnull> andymccr: how far are we on making logging happy ?
16:05:28 <palendae> Unless we want to keep them open and fix it when Glance is fixed
16:05:32 <sigmavirus24> The revert patchset made it through this morning for master. The other two patchsets were abandoned
16:05:46 <andymccr> cloudnull: we're pretty close, odyssey4me basically has it going, we're fixing some bugs on a fresh install i did today
16:06:01 <sigmavirus24> palendae: we can do that too but the glance fix was discussed in the glance meeting this morning and will take some time to work through the process (spec, adding more to the change, etc.)
16:06:11 <andymccr> so i'd say 2 weeks is no problem.
16:06:11 <palendae> sigmavirus24: yeah, I figured it'd be a while
16:06:12 <sigmavirus24> So it won't be feasible for either of these next milestones
16:06:15 <cloudnull> andymccr: for juno or icehouse ?
16:06:16 <palendae> Right
16:06:41 <andymccr> cloudnull: it's on master but the backport shouldn't be too bad, it's mostly changing logstash filters and adjusting some ownership on log dirs
16:06:42 <palendae> sigmavirus24: My guess is Kilo at the soonest, but likely L release
16:06:47 <andymccr> so the filters would be the same for both
16:07:01 <sigmavirus24> No I think we'll make this change in Kilo but it will take a couple of weeks
16:07:08 <sigmavirus24> I need to prioritize writing the spec today or tomorrow etc
16:07:09 <palendae> ok
16:07:11 <sigmavirus24> That's a bit off topic though
16:07:14 <palendae> Sure
16:08:05 <cloudnull> ok, sigmavirus24 palendae we'll pull "https://bugs.launchpad.net/openstack-ansible/+bug/1408363" for the time being and put it to "next", noting that it's broken upstream.
16:09:27 <sigmavirus24> cloudnull: ++
16:09:32 <cloudnull> with the other open items on 10.1.2 do we think we can get ready for release in 2 weeks ?
16:10:01 <cloudnull> mattt, mancdaz, odyssey4me?
16:10:45 <mattt> cloudnull: i'm not aware of any blockers
16:10:52 <palendae> Based on the currently in progress tickets, I think so
16:11:18 <cloudnull> ok. well then that's what we'll shoot for.
16:12:11 <cloudnull> andymccr do you have a specific launchpad issue for the logging bits you guys have been working on ?
16:12:49 <andymccr> cloudnull: i think it comes off the back of the logstash: parse swift logs issue
16:14:21 <cloudnull> ok, i'll add that to 9.0.6 and 10.1.2. can we change the heading of that issue for more general logstash work?
16:14:40 <andymccr> i'm sure we can or create a new one to handle the "pre swift" stuff
16:14:45 <cloudnull> ok.
16:15:25 <cloudnull> on to our new bugs:
16:15:38 <cloudnull> https://bugs.launchpad.net/openstack-ansible/+bugs?search=Search&field.status=New
16:15:58 <cloudnull> we need to prioritize and triage all 27 of these
16:16:34 <cloudnull> idk that we can cover each bug in our meeting here, but we need to do it and stay on top of it
16:17:04 <sigmavirus24> cloudnull: +1
16:17:28 <sigmavirus24> if a few people want to volunteer to look at them once a day that'd probably make it manageable
16:17:49 <b3rnard0> cloudnull: i would start at the top and go down the list unless someone has a bug we should look at first
16:18:05 <cloudnull> i'll be doing it for sure moving forward now that i'm not on vacation, but it would be great to get some people to commit to review/triaging
16:18:41 <b3rnard0> cloudnull: sounds like sigmavirus24 is stepping up to the plate again
16:18:42 <mattt> wonder if we need to leverage the affects me thing a bit more
16:18:44 <d34dh0r53> I'll work on review/triaging
16:18:47 <mancdaz> yes I'm happy to do that
16:18:47 <mattt> to gauge who wants what etc.
16:18:59 <sigmavirus24> b3rnard0: don't make me hurt you
16:19:04 <cloudnull> haha
16:19:14 <sigmavirus24> mattt: the affects me can help "confirm" a bug
16:19:30 <sigmavirus24> but the bug should be "triaged" once a bug owner verifies it and determines importance
16:19:38 <mattt> sigmavirus24: right but we need a way to determine how many people are interested in a feature/functionality
16:19:48 <sigmavirus24> mattt: for new features, yes
16:19:55 <sigmavirus24> and if it's clearly a feature, just mark it wishlist and move on
16:19:56 <sigmavirus24> =P
16:20:04 <mattt> :)
16:20:21 <cloudnull> the only three bugs that i want to talk about here to get triaged are the "undecided" bugs.
16:20:23 <d34dh0r53> are we targeting bugs to milestones?
16:20:30 <cloudnull> specifically gating is not enabled on non-master branches
16:20:49 <cloudnull> hughsaunders do we know how/when we can get that going ?
16:21:07 <mancdaz> cloudnull odyssey4me should be able to sort that out
16:21:18 <hughsaunders> cloudnull: we can enable aio or multi node checks
16:21:20 <mancdaz> he put the original review to disable them in the gate
16:21:25 <cloudnull> where is odyssey4me?
16:21:37 <mancdaz> and I've put a review in to get the gate scripts synced into those branches as a precursor to turning it back on
16:22:05 <cloudnull> ok.
16:23:09 <cloudnull> ok so if we can get gating going on the branches sooner rather than later it would be great.
16:23:35 <mattt> odyssey4me: do you want hugh/i to pick that up if you're tied up w/ other stuff ?
16:23:42 <mancdaz> cloudnull yep. hughsaunders and I were also talking this morning about enabling voting as soon as the gates are stable
16:24:06 <mancdaz> so that it's actually a gate, as opposed to something people mostly ignore
16:24:08 <odyssey4me> I'm happy to do that - the aio appears to be stable now
16:24:26 <hughsaunders> odyssey4me: do you plan to add tempest to the aio?
16:24:29 <odyssey4me> I take it that we want to get it voting on the non-master branches too?
16:24:55 <odyssey4me> hughsaunders: yeah, that would be ideal - but afaik we don't have a properly working set of tempest tests do we?
16:24:56 <cloudnull> odyssey4me: i'd say yes.
16:25:14 <mancdaz> +1
16:25:40 <cloudnull> other than that, i'm done with bugs, unless there are a few that people want to talk about specifically?
16:26:06 <hughsaunders> odyssey4me: sadly not since the glance issues kicked off
16:27:16 <cloudnull> anything?
16:27:40 <cloudnull> ok, moving on.
16:27:56 <cloudnull> next on the agenda "Making everything more generic".
16:28:15 <cloudnull> this was the biggest ask from people replying to the original mailing list post
16:28:42 <cloudnull> presently we have rpc, rackspace, and rax all over the place.
16:29:00 <cloudnull> most people have said they want that gone before they begin using it.
16:29:25 <cloudnull> additionally it was asked if we can make the roles less lxc dependent and more galaxy like .
16:30:01 <cloudnull> which is all part of the separating the rax product release we have into a more community project.
16:30:15 <d34dh0r53> so those two things should be thought of as one task?
16:30:15 <odyssey4me> should we kill two birds with one stone and do it at once, or should we dig into doing it with the current method first?
16:30:41 <cloudnull> i've taken the route of killing the multiple birds
16:30:57 <cloudnull> but that could be done as a step by step process.
16:31:14 <andymccr> cloudnull: agree but that's not a simple thing so we need some kind of plan - i also want to avoid a situation where it's one person's task and nobody else has any idea. Or a situation where there is a month period where any improvements are wasted because it's a wholesale change.
16:31:39 <andymccr> additionally we have to consider how current installs would upgrade - although if we do it right that shouldn't be a problem.
16:31:56 <cloudnull> andymccr: agreed.
16:32:00 <cloudnull> if we do it right.
16:32:19 <cloudnull> there are multiple blueprints on this, which I would like people to participate in
16:32:48 <sigmavirus24> You can #link them here so they're included in the notes ;)
16:32:59 <cloudnull> sadly the blueprints were created "2014-12-10" and nobody but myself has even looked at them
16:33:05 <cloudnull> https://blueprints.launchpad.net/openstack-ansible/+spec/galaxy-roles
16:33:15 <cloudnull> https://blueprints.launchpad.net/openstack-ansible/+spec/rackspace-namesake
16:33:24 <cloudnull> https://blueprints.launchpad.net/openstack-ansible/+spec/inventory-cleanup
16:34:28 <cloudnull> now, i've started the rax namesake and galaxy roles which can be seen here: https://github.com/os-cloud and now that i'm formally asking i'd like people to look at and help out with getting that done.
16:34:36 <palendae> I looked at the inventory cleanup, but it's been a while
16:34:52 <cloudnull> like andymccr said we need a plan.
16:34:56 <cloudnull> and not just my plan
16:34:59 <mattt> yeah how do we split this up?
16:35:21 <b3rnard0> let's add an action item to come up with a plan
16:35:38 <palendae> We've got a start on broken-out roles
16:35:58 <palendae> But we need more people than cloudnull looking at them. I've done some very brief work, but need to dive further
16:36:38 <cloudnull> mattt like palendae said, we've started on the broken out roles. i've not done or looked at logging, neutron, horizon, or swift.
16:36:42 <odyssey4me> I would guess that we need to review existing infrastructure galaxy roles and see if they suit our needs and are responsive to our PRs. It's either that or we develop our own infrastructure service galaxy roles and maintain them within the community.
16:37:02 <mattt> cloudnull: cool but i've not seen any reviews for this stuff?
16:37:06 <mattt> or did i miss them?
16:37:15 <palendae> mattt: They're at https://github.com/os-cloud
16:37:19 <mattt> what is that?
16:37:22 <odyssey4me> I took a brief look through some related to galera earlier today.
16:37:36 <palendae> I think part of cloudnull's logic was he wasn't sure how many repos it would be, and didn't want to bug stackforge infra until he did
16:37:58 <cloudnull> mattt there have been no reviews for them, because we need to decide if we want to have the roles in sub repos in stackforge or in a master repo like we have now.
16:38:10 <cloudnull> also yes. we've bothered infra enough for now
16:38:25 <mattt> ok i just worry about going off to the side to develop stuff
16:38:29 <mattt> seems a bit counter-productive
16:39:08 <cloudnull> if we follow the puppet modules and the cookbooks, they have sub repos so it shouldn't be a big ask to have them make the additional repos
16:39:10 <hughsaunders> could we gradually convert our existing roles into galaxy roles in the current stackforge repo?
16:39:22 <hughsaunders> then split out into separate repos at a later point?
16:39:39 <palendae> That sounds reasonable to me
16:39:40 <cloudnull> hughsaunders, we could; however, pulling the roles in with a galaxy requirements file would be a bit cumbersome
16:39:50 <cloudnull> with static and remote roles
16:41:47 <hughsaunders> initially yes, but better than a big-bang switchover?
16:41:49 <odyssey4me> could we not do a set of basic starting repos in stackforge - empty as placeholders
16:41:53 <cloudnull> mattt it's counter-productive to have a side repo; i'm not saying we use the test things i've created, but they exist to see how it works as a collective whole.
16:42:32 <cloudnull> odyssey4me: i say we should be able to maintain a few place holders that we gradually migrate to .
16:42:41 <mattt> cloudnull: i know what you're getting at
16:43:12 <odyssey4me> initially we just create repos per openstack project we work with - let them be blank to start with, then we transition code into them and start adjusting them
16:43:50 <odyssey4me> and also start adjusting the 'parent' repo to work with them
16:44:38 <sigmavirus24> quick, voluntell miguelgrinberg to do something since he's late ;)
16:44:39 <cloudnull> hughsaunders, agreed. a big-bang switchover would be hard to swallow. but we need to do something so other people, not rackspace, begin contributing.
16:44:56 <cloudnull> miguelgrinberg: you're on code review for 2 weeks. :)
16:44:58 <d34dh0r53> how is the upgrade process for moving from our traditional roles to the galaxy roles (from a user perspective)?
16:45:17 <miguelgrinberg> cloudnull: sorry, last minute ETO
16:45:26 <cloudnull> hahaha
16:45:31 <cloudnull> j/k miguelgrinberg
16:45:50 <cloudnull> d34dh0r53 they need to get the roles using the ansible-galaxy command
16:45:59 <cloudnull> which are listed in a yaml file.
16:46:05 <hughsaunders> Converting our current roles to be galaxy compliant, with the repo, and removing rax branding at the same time could work
16:46:07 <d34dh0r53> but can those roles be run on an existing cluster?
16:46:11 <hughsaunders> *within
16:46:12 <cloudnull> d34dh0r53: ie https://github.com/os-cloud/os-ansible-deployment/blob/master/ansible-role-requirements.yml
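For context, entries in a role-requirements file of the kind linked above generally take this shape; the role names and pinned versions below are illustrative, not the project's actual list:

```yaml
# ansible-role-requirements.yml -- illustrative entries only
- name: os_nova                  # local name the role is installed under
  src: https://github.com/os-cloud/opc_role-os_nova
  scm: git
  version: master                # a branch, tag, or SHA can be pinned here
- name: galera_server
  src: https://github.com/os-cloud/opc_role-galera_server
  scm: git
  version: master
```

A file in this format is consumed with `ansible-galaxy install -r ansible-role-requirements.yml -p <roles-path>`, which fetches each remote role into the local roles directory; the same roles path can also hold locally maintained roles, which is the mix of static and remote roles discussed here.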
16:46:29 <mattt> cloudnull: so could we create roles under os-cloud and link to them from our existing os-ansible-deployment repo?
16:46:37 <cloudnull> d34dh0r53: yes, if we like andymccr said do it right.
16:46:41 <odyssey4me> It would seem that I need to do some playing to understand how the galaxy-based roles work first.
16:46:47 <mattt> odyssey4me: heh same
16:46:52 <cloudnull> mattt yes.
16:47:24 <d34dh0r53> that was a major concern that the directorator had in discussions with logging, we need to ensure that we can in-place upgrade existing clusters without losing anything
16:47:39 <cloudnull> d34dh0r53 odyssey4me https://github.com/os-cloud/os-ansible-deployment/blob/master/bootstrap.sh#L74-L77 that's essentially the change to get the roles.
16:48:02 <mattt> cloudnull: i guess that makes sense, and we can bring roles from os-cloud into stackforge as we see fit
16:48:16 <d34dh0r53> +1 I like that idea
16:48:18 <mattt> because presumably the infrastructure-related roles wouldn't need to live under stackforge (galera/rabbit/etc)
16:48:32 <odyssey4me> cloudnull so does that essentially cache the role locally, then operate it similarly to how we work now?
16:49:27 <cloudnull> odyssey4me yes.
16:49:35 <odyssey4me> so an approach could be that we actually convert to using galaxy roles for the infrastructure first, then look at the openstack stuff later (or in parallel)
16:49:43 <cloudnull> it also makes it so that we can pull different versions of the role as needed.
16:50:14 <cloudnull> i'd do it one at a time. and would like to have the roles in stackforge for the whole review process.
16:50:26 <hughsaunders> including github modules in stackforge leads to different review/contribution processes for parts of the same project.
16:50:27 <cloudnull> if we take a staggered approach
16:50:48 <odyssey4me> so we create new blank repos for any openstack project roles in stackforge as placeholders
16:50:52 <cloudnull> i want to avoid what hughsaunders said .
16:51:26 <odyssey4me> then we divide up the tasks of transitioning the existing roles (both infra and openstack) to make use of galaxy roles (in stackforge and outside for infra)
16:51:29 <andymccr> if we do it staggered can we create a set of guidelines on the first one, that are followed on the following conversions - we can adjust the guidelines/backport changes as required to the already changed ones.
16:51:43 <cloudnull> odyssey4me yes, but to do so we need to ask infra, which i'd like to limit, so we should figure out how many repos we'll need, leave them blank, and gradually migrate to them as we see fit.
16:52:14 <palendae> There's also the question of removing rax/rpc mentions in the repo; I think this can be done without moving any repos
16:52:24 <palendae> And should probably be done prior to separating roles
16:52:49 <cloudnull> palendae well that's the hardest part. we have rpc rax rackspace everywhere.
16:52:51 <odyssey4me> surely we need just one for each openstack project, ie nova, glance, heat, keystone, swift, openstack-client
16:53:09 <palendae> cloudnull: Yeah
16:53:09 <cloudnull> odyssey4me, yes nova galnce ...
16:53:19 <cloudnull> *glance
16:53:36 <cloudnull> which collapses our logic into a role.
16:53:46 <cloudnull> instead of the play which is what we have.
16:53:49 <odyssey4me> we just call them 'ansible-openstack-nova', etc
16:54:11 <hughsaunders> if we're not sure how many repos we need, why not create these roles as dirs in the existing repo, then migrate to separate repos once the roles have stabilised?
16:54:12 <cloudnull> odyssey4me id say yes.
16:54:25 <cloudnull> you can see an example of collapsed logic here https://github.com/os-cloud/opc_role-os_nova/blob/master/tasks/main.yml
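The "collapsed logic" being discussed means a role whose tasks/main.yml simply pulls in the per-step task files that used to live across several plays. A rough sketch of the pattern, with file and group names that are illustrative rather than copied from that repo:

```yaml
# tasks/main.yml of a collapsed nova role -- file names illustrative
- include: nova_pre_install.yml    # users, directories, log targets
- include: nova_install.yml        # apt/pip package installation
- include: nova_post_install.yml   # config templates, rootwrap, policy
- include: nova_db_setup.yml       # database setup, run from one host only
  when: inventory_hostname == groups['nova_all'][0]
```

The playbook then shrinks to little more than applying the role to a host group, instead of carrying all of that task logic itself.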
16:54:48 <cloudnull> hughsaunders we could do that.
16:54:57 <mattt> i think that is a good idea
16:55:06 <mattt> gives us some room to experiment
16:55:11 <hughsaunders> we may need multiple roles for some projects
16:55:16 <mattt> while still keeping everything in stackforge
16:55:16 <odyssey4me> yeah, I must say that for simplicity's sake it would be easier to evolve as far as possible in one repo before splitting it out
16:55:41 <andymccr> if they are truly independent then it will be no problem to move the directory into its own repository if we decide that is better
16:55:46 <andymccr> at that point we know exactly what we need though
16:56:04 <andymccr> if thats possible i think we should do that
16:56:11 <cloudnull> ok, we'll make the roles as galaxy roles and keep them in the main repo until such a point where we migrate them to separate repos.
16:56:21 <odyssey4me> it may not technically be galaxy compatible to start with, but to evolve the roles to a place where the logic in them is compatible is quite a big first step
16:56:22 <hughsaunders> if we move to primarily using role dependencies rather than playbooks to tie everything together, then we may end up with quite a lot of roles
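Tying things together with role dependencies, as described here, would mean each service role declares its prerequisites in its meta/main.yml and Ansible runs those first. A minimal sketch, where the role names are assumptions rather than roles the project currently ships:

```yaml
# meta/main.yml -- dependencies are resolved before the role's own tasks
dependencies:
  - role: pip_install        # ensure pip and wheels are available
  - role: galera_client      # database client libraries
  - role: openstack_openrc   # admin credentials file for service setup
```

This is what drives the role-count concern: every shared bit of setup becomes its own role so that it can be depended on.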
16:57:15 <cloudnull> i estimate that we'll have 16+ odd roles.
16:57:52 <cloudnull> and that's with removal of all of the *_common roles and such
16:57:56 <odyssey4me> if that is the case, having them all as separate repos makes packaging/testing/branching/tagging a nightmare
16:58:20 <odyssey4me> we really need to limit the number of sub-repos as far as is practical
16:58:30 <d34dh0r53> just throwing this out there but in the interest of time it seems like the rpc* cleanup is a rather quick easy win which we could do decoupled from the galaxifying of everything; that may quell some of the requests in the short term while we work on the other
16:59:02 <mattt> d34dh0r53: kind of agree w/ you there
16:59:13 <cloudnull> we're out of time people. we'll have to continue this in the channel and into our next meeting
16:59:14 <hughsaunders> sed
16:59:14 <odyssey4me> d34dh0r53 agreed - I do think it'll be easier to de-rax in the current structure/method as we already know it
16:59:45 <mattt> alright, thanks cloudnull
16:59:52 <d34dh0r53> ty cloudnull
16:59:57 <hughsaunders> yep, well chaired cloudnull :)
16:59:57 <odyssey4me> ty cloudnull
17:00:01 <cloudnull> ok, thanks guys.
17:00:03 <mancdaz> \o/
17:00:06 <b3rnard0> thank you dearest ptl
17:00:13 <sigmavirus24> don't forget to #endmeeting ;)
17:00:32 <cloudnull> #endmeeting