15:57:18 #startmeeting openstack-ansible
15:57:20 Meeting started Thu Jan 15 15:57:18 2015 UTC and is due to finish in 60 minutes. The chair is cloudnull. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:57:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:57:24 The meeting name has been set to 'openstack_ansible'
15:57:42 #help
15:57:45 fail
16:00:19 so, let's get started.
16:00:20 o/ cloudnull
16:00:36 first on the agenda is our milestones
16:00:52 we have 10.1.2 and 9.0.6 that need to be prioritized.
16:01:25 I think that we can commit to dev readiness for these "releases" in 2 weeks.
16:01:34 thoughts?
16:02:00 cloudnull: Do we have a list of what's going to be on those milestones?
16:02:07 i'm not aware of much happening in 9.0.x; quite a few things are in the pipeline for the juno branch right now though, so that will need a fair bit of testing
16:02:38 palendae: i'm looking at https://launchpad.net/openstack-ansible/+milestone/9.0.6
16:02:40 cloudnull: i think logging is a big "must go" for that release
16:02:54 and https://launchpad.net/openstack-ansible/+milestone/10.1.2
16:03:17 Ok
16:03:17 cloudnull should we do a roll call first :-)
16:03:23 andymccr Agreed. i think logging needs love and is a must go.
16:03:41 I think this could be pulled off of the milestones, since RBAC is evidently broken - https://bugs.launchpad.net/openstack-ansible/+bug/1408363
16:04:12 b3rnard0: we're already moving along :)
16:04:50 palendae: i tend to agree; those changes were recently reverted.
16:05:03 sigmavirus24 was working on that a bit. sigmavirus24, thoughts?
16:05:19 Glance is terribly broken and can't support what we want.
16:05:22 andymccr: how far are we on making logging happy?
16:05:28 Unless we want to keep them open and fix it when Glance is fixed
16:05:32 The revert patchset made it through this morning for master. The other two patchsets were abandoned
16:05:46 cloudnull: we're pretty close; odyssey4me basically has it going, we're fixing some bugs on a fresh install i did today
16:06:01 palendae: we can do that too, but the glance fix was discussed in the glance meeting this morning and will take some time to work through the process (spec, adding more to the change, etc.)
16:06:11 so i'd say 2 weeks is no problem.
16:06:11 sigmavirus24: yeah, I figured it'd be a while
16:06:12 So it won't be feasible for either of these next milestones
16:06:15 andymccr: for juno or icehouse?
16:06:16 Right
16:06:41 cloudnull: it's on master but the backport shouldn't be too bad; it's mostly changing logstash filters and adjusting some ownership on log dirs
16:06:42 sigmavirus24: My guess is Kilo at the soonest, but likely the L release
16:06:47 so the filters would be the same for both
16:07:01 No, I think we'll make this change in Kilo, but it will take a couple of weeks
16:07:08 I need to prioritize writing the spec today or tomorrow etc
16:07:09 ok
16:07:11 That's a bit off topic though
16:07:14 Sure
16:08:05 ok, sigmavirus24 palendae, we'll pull "https://bugs.launchpad.net/openstack-ansible/+bug/1408363" for the time being and put it to "next", noting that it's broken upstream.
16:09:27 cloudnull: ++
16:09:32 with the other open items on 10.1.2, do we think we can get ready for release in 2 weeks?
16:10:01 mattt, mancdaz, odyssey4me?
16:10:45 cloudnull: i'm not aware of any blockers
16:10:52 Based on the currently in-progress tickets, I think so
16:11:18 ok. well then that's what we'll shoot for.
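A minimal sketch of the log-directory ownership half of the backport andymccr describes above, written as an Ansible task. The path, owner, and group below are hypothetical stand-ins, not the project's actual values:

```yaml
# Hedged illustration only: the real paths and owners live in the
# project's logging plays, not here.
- name: Ensure the log shipper can read service log directories
  file:
    path: /var/log/swift   # hypothetical log directory
    owner: syslog          # hypothetical owner
    group: adm             # hypothetical group
    state: directory
    recurse: yes
```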
16:12:11 andymccr do you have a specific launchpad issue for the logging bits you guys have been working on?
16:12:49 cloudnull: i think it comes off the back of the "logstash: parse swift logs" issue
16:14:21 ok, i'll add that to 9.0.6 and 10.1.2. can we change the heading of that issue for more general logstash work?
16:14:40 i'm sure we can, or create a new one to handle the "pre swift" stuff
16:14:45 ok.
16:15:25 on to our new bugs:
16:15:38 https://bugs.launchpad.net/openstack-ansible/+bugs?search=Search&field.status=New
16:15:58 we need to prioritize and triage all 27 of these
16:16:34 idk that we can cover each bug in our meeting here, but we need to do it and stay on top of it
16:17:04 cloudnull: +1
16:17:28 if a few people want to volunteer to look at them once a day that'd probably make it manageable
16:17:49 cloudnull: i would start at the top and go down the list unless someone has a bug we should look at first
16:18:05 i'll be doing it for sure moving forward, now that i'm not on vacation, but it would be great to get some people to commit to reviewing/triaging
16:18:41 cloudnull: sounds like sigmavirus24 is stepping up to the plate again
16:18:42 wonder if we need to leverage the "affects me" thing a bit more
16:18:44 I'll work on review/triaging
16:18:47 yes I'm happy to do that
16:18:47 to gauge who wants what etc.
16:18:59 b3rnard0: don't make me hurt you
16:19:04 haha
16:19:14 mattt: the "affects me" can help "confirm" a bug
16:19:30 but the bug should be "triaged" once a bug owner verifies it and determines importance
16:19:38 sigmavirus24: right, but we need a way to determine how many people are interested in a feature/functionality
16:19:48 mattt: for new features, yes
16:19:55 and if it's clearly a feature, just mark it wishlist and move on
16:19:56 =P
16:20:04 :)
16:20:21 the only three bugs that i want to talk about here to get triaged are the "undecided" bugs.
16:20:23 are we targeting bugs to milestones?
16:20:30 specifically, gating is not enabled on non-master branches
16:20:49 hughsaunders do we know how/when we can get that going?
16:21:07 cloudnull odyssey4me should be able to sort that out
16:21:18 cloudnull: we can enable aio or multi-node checks
16:21:20 he put in the original review to disable them in the gate
16:21:25 where is odyssey4me?
16:21:37 and I've put a review in to get the gate scripts synced into those branches as a precursor to turning it back on
16:22:05 ok.
16:23:09 ok, so if we can get gating going on the branches sooner rather than later it would be great.
16:23:35 odyssey4me: do you want hugh/i to pick that up if you're tied up w/ other stuff?
16:23:42 cloudnull yep. hughsaunders and I were also talking this morning about enabling voting as soon as the gates are stable
16:24:06 so that it's actually a gate, as opposed to something people mostly ignore
16:24:08 I'm happy to do that - the aio appears to be stable now
16:24:26 odyssey4me: do you plan to add tempest to the aio?
16:24:29 I take it that we want to get it voting on the non-master branches too?
16:24:55 hughsaunders: yeah, that would be ideal - but afaik we don't have a properly working set of tempest tests, do we?
16:24:56 odyssey4me: i'd say yes.
16:25:14 +1
16:25:40 other than that, i'm done with bugs, unless there are a few that people want to talk about specifically?
16:26:06 odyssey4me: sadly not since the glance issues kicked off
16:27:16 anything?
16:27:40 ok, moving on.
16:27:56 next on the agenda: "Making everything more generic".
16:28:15 this was the biggest ask from people replying to the original mailing list post
16:28:42 presently we have rpc, rackspace, and rax all over the place.
16:29:00 most people have said they want that gone before they begin using it.
16:29:25 additionally, it was asked if we can make the roles less lxc-dependent and more galaxy-like.
16:30:01 which is all part of separating the rax product release we have into a more community-oriented project.
16:30:15 so those two things should be thought of as one task?
16:30:15 should we kill two birds with one stone and do it at once, or should we dig into doing it with the current method first?
16:30:41 i've taken the route of killing the multiple birds
16:30:57 but that could be done as a step by step process.
16:31:14 cloudnull: agree, but that's not a simple thing so we need some kind of plan - i also want to avoid a situation where it's one person's task and nobody else has any idea. Or a situation where there is a month period where any improvements are wasted because it's a wholesale change.
16:31:39 additionally we have to consider how current installs would upgrade - although if we do it right that shouldn't be a problem.
16:31:56 andymccr: agreed.
16:32:00 if we do it right.
16:32:19 there are multiple blueprints on this, which I would like people to participate in
16:32:48 You can #link them here so they're included in the notes ;)
16:32:59 sadly the blueprints were created "2014-12-10" and nobody but myself has even looked at them
16:33:05 https://blueprints.launchpad.net/openstack-ansible/+spec/galaxy-roles
16:33:15 https://blueprints.launchpad.net/openstack-ansible/+spec/rackspace-namesake
16:33:24 https://blueprints.launchpad.net/openstack-ansible/+spec/inventory-cleanup
16:34:28 now, i've started the rax namesake and galaxy roles, which can be seen here: https://github.com/os-cloud - and now that i'm formally asking, i'd like people to look at them and help out with getting that done.
16:34:36 I looked at the inventory cleanup, but it's been a while
16:34:52 like andymccr said, we need a plan.
16:34:56 and not just my plan
16:34:59 yeah, how do we split this up?
16:35:21 let's add an action item to come up with a plan
16:35:38 We've got a start on broken-out roles
16:35:58 But we need more people than cloudnull looking at them. I've done some very brief work, but need to dive further
16:36:38 mattt like palendae said, we've started on the broken-out roles. i've not done or looked at logging, neutron, horizon, or swift.
16:36:42 I would guess that we need to review existing infrastructure galaxy roles and see if they suit our needs and are responsive to our PRs. It's either that or we develop our own infrastructure service galaxy roles and maintain them within the community.
16:37:02 cloudnull: cool, but i've not seen any reviews for this stuff?
16:37:06 or did i miss them?
16:37:15 mattt: They're at https://github.com/os-cloud
16:37:19 what is that?
16:37:22 I took a brief look through some related to galera earlier today.
16:37:36 I think part of cloudnull's logic was he wasn't sure how many repos it would be, and didn't want to bug stackforge infra until he did
16:37:58 mattt there have been no reviews for them, because we need to decide if we want to have the roles in sub-repos in stackforge or in a master repo like we have now.
16:38:10 also yes. we've bothered infra enough for now
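For context on the galaxy-roles blueprint linked above: a galaxy-compatible role carries its metadata in meta/main.yml. A minimal sketch of that shape; every value below is illustrative rather than copied from any os-cloud repo:

```yaml
# Hypothetical meta/main.yml for a broken-out role; all values are
# illustrative assumptions, not the project's actual metadata.
galaxy_info:
  author: openstack-ansible contributors
  description: Installs and configures one OpenStack service
  license: Apache-2.0
  min_ansible_version: 1.8
  platforms:
    - name: Ubuntu
      versions:
        - trusty
dependencies: []   # infra roles (galera, rabbit, ...) could be listed here
```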
16:38:25 ok, i just worry about going off to the side to develop stuff
16:38:29 seems a bit counter-productive
16:39:08 if we follow the puppet modules and the cookbooks, they have sub-repos, so it shouldn't be a big ask to have them make the additional repos
16:39:10 could we gradually convert our existing roles into galaxy roles in the current stackforge repo?
16:39:22 then split out into separate repos at a later point?
16:39:39 That sounds reasonable to me
16:39:40 hughsaunders, we could; however, pulling the roles in with a galaxy requirements file would be a bit cumbersome
16:39:50 with static and remote roles
16:41:47 initially yes, but better than a big-bang switchover?
16:41:49 could we not do a set of basic starting repos in stackforge - empty as placeholders
16:41:53 mattt it's counter-productive to have a side repo; i'm not saying we use the test things i've created, but they exist to see how it works as a collective whole.
16:42:32 odyssey4me: i say we should be able to maintain a few placeholders that we gradually migrate to.
16:42:41 cloudnull: i know what you're getting at
16:43:12 initially we just create repos per openstack project we work with - let them be blank to start with, then we transition code into them and start adjusting them
16:43:50 and also start adjusting the 'parent' repo to work with them
16:44:38 quick, voluntell miguelgrinberg to do something since he's late ;)
16:44:39 hughsaunders, agreed. a big-bang switchover would be hard to swallow. but we need to do something so other people, not rackspace, begin contributing.
16:44:56 miguelgrinberg: you're on code review for 2 weeks. :)
16:44:58 how is the upgrade process for moving from our traditional roles to the galaxy roles (from a user perspective)?
16:45:17 cloudnull: sorry, last minute ETO
16:45:26 hahaha
16:45:31 j/k miguelgrinberg
16:45:50 d34dh0r53 they need to get the roles using the ansible-galaxy command
16:45:59 which are listed in a yaml file.
16:46:05 Converting our current roles to be galaxy compliant, within the repo, and removing rax branding at the same time could work
16:46:07 but can those roles be run on an existing cluster?
16:46:12 d34dh0r53: ie https://github.com/os-cloud/os-ansible-deployment/blob/master/ansible-role-requirements.yml
16:46:29 cloudnull: so could we create roles under os-cloud and link to them from our existing os-ansible-deployment repo?
16:46:37 d34dh0r53: yes, if - like andymccr said - we do it right.
16:46:41 It would seem that I need to do some playing to understand how the galaxy-based roles work first.
16:46:47 odyssey4me: heh, same
16:46:52 mattt yes.
16:47:24 that was a major concern that the directorator had in discussions with logging: we need to ensure that we can in-place upgrade existing clusters without losing anything
16:47:39 d34dh0r53 odyssey4me https://github.com/os-cloud/os-ansible-deployment/blob/master/bootstrap.sh#L74-L77 that's essentially the change to get the roles.
16:48:02 cloudnull: i guess that makes sense, and we can bring roles from os-cloud into stackforge as we see fit
16:48:16 +1 I like that idea
16:48:18 because presumably the infrastructure-related roles wouldn't need to live under stackforge (galera/rabbit/etc)
16:48:32 cloudnull so does that essentially cache the role locally, then operate similarly to how we work now?
16:49:27 odyssey4me yes.
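A sketch of the kind of entry the linked ansible-role-requirements.yml could hold, written in ansible-galaxy's requirements format. The entry below is illustrative, and whether the project consumes the file with ansible-galaxy directly or via the linked bootstrap.sh snippet is an assumption here:

```yaml
# Illustrative entry only - see the linked ansible-role-requirements.yml
# for the project's real role list. In ansible-galaxy's requirements
# format, a file like this would be fetched with:
#   ansible-galaxy install --role-file=ansible-role-requirements.yml
- name: os_nova
  src: https://github.com/os-cloud/opc_role-os_nova
  version: master
```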
16:49:35 so an approach could be that we actually convert to using galaxy roles for the infrastructure first, then look at the openstack stuff later (or in parallel)
16:49:43 it also makes it so that we can pull different versions of the role as needed.
16:50:14 i'd do it one at a time, and would like to have the roles in stackforge for the whole review process.
16:50:26 including github modules in stackforge leads to different review/contribution processes for parts of the same project.
16:50:27 if we take a staggered approach
16:50:48 so we create new blank repos for any openstack project roles in stackforge as placeholders
16:50:52 i want to avoid what hughsaunders said.
16:51:26 then we divide up the tasks of transitioning the existing roles (both infra and openstack) to make use of galaxy roles (in stackforge, and outside for infra)
16:51:29 if we do it staggered, can we create a set of guidelines on the first one that are followed on the following conversions - we can adjust the guidelines/backport changes as required to the already-changed ones.
16:51:43 odyssey4me yes, but to do so we need to ask infra, which i'd like to limit, so we should figure out how many repos we'll need, leave them blank, and gradually migrate to them as we see fit.
16:52:14 There's also the question of removing rax/rpc mentions in the repo; I think this can be done without moving any repos
16:52:24 And should probably be done prior to separating roles
16:52:49 palendae well, that's the hardest part. we have rpc, rax, rackspace everywhere.
16:52:51 surely we need just one for each openstack project, ie nova, glance, heat, keystone, swift, openstack-client
16:53:09 cloudnull: Yeah
16:53:09 odyssey4me, yes: nova, glance ...
16:53:36 which collapses our logic into a role.
16:53:46 instead of the play, which is what we have.
16:53:49 we just call them 'ansible-openstack-nova', etc
16:54:11 if we're not sure how many repos we need, why not create these roles as dirs in the existing repo, then migrate to separate repos once the roles have stabilised?
16:54:12 odyssey4me i'd say yes.
16:54:25 you can see an example of collapsed logic here: https://github.com/os-cloud/opc_role-os_nova/blob/master/tasks/main.yml
16:54:48 hughsaunders we could do that.
16:54:57 i think that is a good idea
16:55:06 gives us some room to experiment
16:55:11 we may need multiple roles for some projects
16:55:16 while still keeping everything in stackforge
16:55:16 yeah, I must say that for simplicity's sake it would be easier to evolve as far as possible in one repo before splitting it out
16:55:41 if they are truly independent then it will be no problem to move the directory into its own repository if we decide that is better
16:55:46 at that point we know exactly what we need though
16:56:04 if that's possible i think we should do that
16:56:11 ok, we'll make the roles galaxy roles and keep them in the main repo until such a point where we migrate them to separate repos.
16:56:21 it may not technically be galaxy compatible to start with, but to evolve the roles to a place where the logic in them is compatible is quite a big first step
16:56:22 if we move to primarily using role dependencies rather than playbooks to tie everything together, then we may end up with quite a lot of roles
16:57:15 i estimate that we'll have 16+ odd roles.
16:57:52 and that's with removal of all of the *_common roles and such
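The "collapsed logic" cloudnull links above amounts to the role's tasks/main.yml carrying the whole install/configure flow that a playbook carries today. A minimal sketch of that shape; the include file names are hypothetical, not the ones in opc_role-os_nova:

```yaml
# Hypothetical collapsed tasks/main.yml; the included file names are
# illustrative stand-ins for the role's actual task files.
- include: install.yml     # packages, virtualenv, service user
- include: configure.yml   # template out config files
- include: service.yml     # init scripts and service state
```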
16:57:56 if that is the case, having them all as separate repos makes packaging/testing/branching/tagging a nightmare
16:58:20 we really need to limit the number of sub-repos as far as is practical
16:58:30 just throwing this out there, but in the interest of time it seems like the rpc* cleanup is a rather quick, easy win which we could do decoupled from the galaxy-ifying of everything; that may quell some of the requests in the short term while we work on the other
16:59:02 d34dh0r53: kind of agree w/ you there
16:59:13 we're out of time, people. we'll have to continue this in the channel and into our next meeting
16:59:14 sed
16:59:14 d34dh0r53 agreed - I do think it'll be easier to de-rax in the current structure/method as we already know it
16:59:45 alright, thanks cloudnull
16:59:52 ty cloudnull
16:59:57 yep, well chaired cloudnull :)
16:59:57 ty cloudnull
17:00:01 ok, thanks guys.
17:00:03 \o/
17:00:06 thank you dearest ptl
17:00:13 don't forget to #endmeeting ;)
17:00:32 #endmeeting