16:00:17 #startmeeting OpenStack Ansible Meeting
16:00:22 Meeting started Thu Sep 10 16:00:17 2015 UTC and is due to finish in 60 minutes. The chair is cloudnull. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:26 The meeting name has been set to 'openstack_ansible_meeting'
16:00:46 moo
16:00:54 #topic rollcall
16:01:00 o/
16:01:01 yo
16:01:11 cloudnull: will be getting food at the start
16:01:31 \o
16:01:53 o/
16:02:15 o/
16:03:11 o/
16:03:49 couple more min to let some folks trickle in
16:03:58 #chair openstack
16:03:59 Current chairs: cloudnull openstack
16:04:04 #chair odyssey4me
16:04:05 Current chairs: cloudnull odyssey4me openstack
16:04:22 o/
16:04:53 I'm sorry, I'm not familiar with English - did you consider odyssey4me as furniture? ;)
16:05:05 he is a fixture :)
16:05:19 #topic Review action items from last week
16:05:41 item: make a BP to track each OS service and remove deprecated variables
16:05:42 haha
16:06:03 ah, my bad - that should have been me and I haven't done it yet
16:06:05 cloudnull: does this include new things for liberty too?
16:06:10 add that as one allocated to me please
16:06:11 there's a lot of those changes
16:06:12 #link https://review.openstack.org/#/c/221189/
16:06:37 oh yes, I did do a liberty spec :)
16:06:47 now we need people to review it
16:06:50 i have some neutron changes that someday i'll get time to do
16:07:57 Sam-I-Am: if you have the changes outlined, can you write them down somewhere so we can get to them if you can't
16:08:30 etherpad, spec review, etc. are all perfectly acceptable.
16:08:34 sure
16:08:41 better yet, comment on the review that you want to be on the neutron work - there's a spot in the work task list
16:08:58 although i'm waiting on a better way to add configurability
16:08:59 ++
16:09:03 i think you had a patch for that
16:09:17 base your changes off of that change if needed.
16:09:20 my suggestion is that we try and pair up the work, ideally with pairs that are in different time zones to get better coverage and hand-over
16:10:26 as a reference, this is the review for the better config capabilities
16:10:28 #link https://review.openstack.org/#/c/217030/
16:10:57 #link https://review.openstack.org/#/c/220212
16:11:18 code to make it go and an implementation example ^
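
The "better config capabilities" in the two reviews above refer to an overrides mechanism for service configuration files. As a rough illustration of the idea (not the merged implementation), a deployer-side override in user_variables.yml might look like the sketch below; the variable name and the <service>_<file>_overrides pattern are assumptions here, and the nova options shown are only examples.

    # user_variables.yml -- illustrative sketch, assuming the override
    # mechanism proposed in review 217030; names and values are examples only.
    nova_nova_conf_overrides:
      DEFAULT:
        scheduler_default_filters: "AggregateCoreFilter,RamFilter,ComputeFilter"
      libvirt:
        cpu_mode: host-model

The intent is that anything a deployer sets this way gets merged into the rendered config file, so the roles no longer have to template every possible option explicitly.
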
16:11:32 Hey
16:11:41 o/ BjoernT
16:11:45 next: prometheanfire to get involved with the ops meetup lightning and moderator talks
16:12:14 cloudnull: needs more docs :)
16:12:22 I've added myself as a possible moderator and will do at least one lightning talk
16:12:45 moderating for infra containers
16:12:49 kk
16:13:10 o/ sorry I'm late, phone call
16:13:15 #topic Liberty Release Blueprints
16:13:22 #link https://blueprints.launchpad.net/openstack-ansible/+spec/liberty-release
16:13:56 from a bp perspective i think we're good.
16:14:10 we just need reviewers on the implementations of the approved bps
16:15:14 I found myself thinking this morning that perhaps the liberty release should focus primarily on a greenfield deployment, then we work on the upgrade mechanism for a .1 release later
16:15:18 liberty release - do the things
16:15:34 I'm good with that.
16:15:38 I'm concerned about the code churn in the release prep interfering with the ability to effectively do upgrade testing
16:15:40 odyssey4me: I think that's going to be the practical thing
16:15:52 odyssey4me: Right now it's been hard to get people to do even greenfield stuff
16:15:59 I'm also concerned with resourcing
16:16:12 I'm not fond of it, because I'll need to upgrade ;)
16:16:25 but I understand
16:16:47 the .1 release with confirmed upgradability could be released very soon after .0 - but I'd prefer that we have a more focused sprint or two on it
16:16:59 ok
16:17:26 turns out... most of the kilo stuff works in liberty, plus a few deprecation messages
16:17:27 evrardjp this suggestion is for your protection :) also, we'd like your help with the upgrade testing
16:17:36 ^
16:17:42 however, liberty wouldn't really be liberty without fixing that stuff
16:17:50 Help on testing upgrades would be very, very appreciated
16:18:38 one of these days... upgrade gating
16:18:51 one other thing I think we should discuss is when we want to release liberty
16:18:59 when it's ready
16:19:22 when all liberty-related config changes are done
16:19:33 do we want to target the same day as the upstream release, the Friday that follows, or when?
16:19:55 Sam-I-Am vague target dates are never good - things never get done
16:20:01 I personally think that we rev all of our SHAs and if the gate passes we release.
16:20:08 we should set a target and endeavour to meet it
16:20:16 just because the gate passes doesn't mean it's liberty
16:20:24 it's the 19th of this month, right?
16:20:24 i found that out with keystone this morning
16:20:40 next month?
16:20:43 i disagree. if the gate passes and it's running the liberty code, it's liberty
16:20:58 RC1 is this month on the 24th iirc
16:21:05 this is why gating fails to be realistic
16:21:15 neutron technically passes the gate
16:21:34 Sam-I-Am you're talking about a tangential issue - liberty will not release until it has passed the integrated gate tests
16:21:51 what i'm concerned about is releasing, then having to make considerable changes to config files for a point release
16:21:56 it's entirely different to the testing that's done day to day, and is a distraction in this conversation
16:22:00 because it wasn't really using liberty config items
16:22:18 Sam-I-Am so that's the point of the items to review all the config files
16:22:36 yes, and the reason why i was saying we shouldn't release until those are done
16:22:39 so we need to set a date when we expect that to be done, and we need volunteers to do it
16:22:43 which hopefully is on or around the upstream release
16:22:56 they should be frozen now at least
16:23:09 and if you have thoughts on that please contribute them to the spec so we can implement the changes.
16:23:19 we can do the checks and adjustments with the RCs
16:23:25 ++
16:23:43 if anyone has the inclination, that inspection can already be started now
16:24:00 odyssey4me: sort of working on that as a side-effect of upstream install guide updates
16:24:03 as
16:24:03 patches for policy/config file changes are welcome right now
16:24:04 ^ that would be awesome for somebody to start
16:24:16 working my way through the services
16:24:40 keystone is going to be fun
16:24:40 Sam-I-Am: if you have a diff as you work through the services in the install guide refresh we can work off that
16:24:56 cloudnull: it's more like notes rather than a diff, but there's something
16:25:06 or if you're keeping track on an etherpad we can lurk there too
16:25:07 also need to determine what is a packaging bug
16:25:13 hence why i shall test these in osad too
16:25:16 ahh yeah that's a good idea
16:25:27 a matts-liberty-ramblings etherpad
16:25:33 would you mind starting an etherpad?
16:25:37 sure
16:26:00 #action Sam-I-Am to create an etherpad for the config changes he finds for future OSAD implementation
16:26:06 tyvm!
16:26:13 also
16:26:21 cloudnull: https://etherpad.openstack.org/p/liberty-config-changes
16:26:32 #action odyssey4me to switch the blueprint for juno->liberty upgrades to 12.1.0
16:26:50 Whoa what?
16:26:52 Juno to liberty?
16:26:55 whoops
16:26:58 :)
16:27:13 Kilo to liberty?
16:27:17 correction, Kilo->Liberty, y'all know what I'm sayin'
16:27:36 I've heard some crazy things in the past few months, so... :)
16:27:57 :)
16:27:58 palendae: i heard we're upgrading icehouse to liberty
16:28:02 palendae: tag, you're it
16:28:15 seamlessly
16:28:18 notit
16:28:25 for various definitions of seam
16:28:28 and lessly
16:28:29 Sam-I-Am: You can tag me all you want
16:28:55 so next up
16:28:57 #topic Mitaka Summit Discussion Agenda
16:29:11 it would seem to me that testing swift-only deployments and support for systemd also need to move to a later milestone?
16:29:21 won't be there
16:29:24 glhf
16:29:24 just real quick, please add items to the etherpad for consideration at the summit
16:29:26 odyssey4me: probably
16:29:26 #link https://etherpad.openstack.org/p/openstack-ansible-mitaka-summit
16:29:45 out of expendable organs
16:30:14 you got spare lung, kidney...
16:30:24 maybe we could remove haproxy-v2, it's not that important I guess
16:30:24 true
16:31:07 are these things required for liberty to 'release'?
16:31:22 prometheanfire uh, what are you referring to?
16:31:38 we're talking about mitaka summit discussions?
16:31:46 liberty specs
16:32:01 welcome to 10 minutes ago?
16:32:02 ah, don't think the previous topic was closed
16:32:02 the systemd support I'm sure I can get to, I just need reviewers to make the other irons in the fire go. But if someone else wants to implement the change they're welcome to it
16:32:35 for the summit discussion, shouldn't we talk about the docs?
16:32:41 cloudnull my concern is that if it doesn't make it into 12.0.0 then it may have to wait until 13.0.0, unless it's not a breaking change?
16:33:13 odyssey4me: nice to have or need to have? and manpower
16:33:16 I'd rather get that into a .0 release so that it's there when we get into thorough upgrade testing
16:33:26 yes, 12 or 13 IMO; we could do it as a major release but that feels dirty
16:33:47 with the new versioning we can version when we want
16:34:00 not that we already couldn't
16:34:10 prometheanfire yes, but we've opted to stay in line for major versions
16:34:58 this is what we have so far for liberty https://github.com/stackforge/os-ansible-deployment-specs/tree/master/specs/liberty (spec-wise)
16:35:20 did we want the ipv6 spec for it?
16:35:49 I think you said it'd be nice to have for liberty
16:35:51 named veths would be a good one to complete; there's a second part to that process which is here: https://review.openstack.org/#/c/220342/
16:36:23 veth betterification in general
16:36:27 prometheanfire yeah, although considering the work we already have on our plate I'm inclined to say we should rather get that going in the Mitaka cycle
16:36:31 it's been an annoying problem
16:36:49 cloudnull on the veth issue
16:36:52 odyssey4me: should I remove haproxy v2?
16:36:52 odyssey4me: ya, doesn't matter to me when tbh
16:37:03 #topic reviews
16:37:26 I have the impression from somewhere that naming the veths will cause the container restarts to not work at all... although I'm not clear on whether that is all the time or only sometimes
16:37:40 odyssey4me: yes that is true
16:37:44 odyssey4me: Correct, if they dangle
16:37:45 it only fails if the veth pair still exists?
16:37:45 evrardjp it's up to you whether you want to discuss it or not
16:37:45 and intended.
16:37:53 hence why there's the veth-deleter thingy
16:37:53 mhayden: ping
16:37:58 The 2nd half of the solution is that because we can now know the names, we can delete them
16:38:02 the second part of that fix resolves it though
16:38:10 what palendae said
16:38:12 :)
16:38:14 The names without the delete are bad, though
16:38:17 palendae cloudnull ok, then the missing piece for me to make that all go is to ensure that there is an included script for cleaning up the dangling dirties
16:38:28 palendae: at least things fail a little better
16:38:40 But generally agreed to be better than veth pairs floating in the ether. vether pairs.
16:38:52 odyssey4me: https://review.openstack.org/#/c/220342/
16:38:57 prometheanfire: echo reply
16:39:00 ah, I see the cleaner is in https://review.openstack.org/220342
16:39:06 ok cool - I'm down with that :)
16:39:22 it would be great to backport that to kilo too
16:39:40 mhayden: we are talking about the veth stuff here
16:39:47 cloudnull is that not perhaps a breaking change though?
16:39:49 i'm still trying to figure out how to fix this in LXC, but i'm not well versed in C
16:40:02 mhayden: That may be a long tail thing
16:40:11 Sounds like LXC's tools could use some love overall
16:40:17 long story short, LXC should be cleaning up the veths -- if the kernel netlink backlog is overloaded, LXC should try to re-do it
16:40:23 palendae considering your work on this issue, I'm surprised not to see your vote on https://review.openstack.org/220342 ?
16:40:26 extraordinarily long tail :)
16:41:00 odyssey4me: Last week was meeting hell, this week's been cinder investigation hell. I'll re-visit that this afternoon though
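
As a concrete illustration of the named-veth-plus-cleaner idea discussed above, a cleanup step of roughly this shape could run before a container is (re)started. The interface naming scheme and variable names below are assumptions made for the sketch, not the contents of review 220342:

    # Illustrative Ansible task: delete a leftover host-side veth so that a
    # container restart does not fail on a name collision.
    # "{{ container_name }}_eth0" is an assumed naming convention.
    - name: Clean up dangling veth pairs for the container
      command: "ip link delete {{ item }}"
      with_items:
        - "{{ container_name }}_eth0"
        - "{{ container_name }}_eth1"
      # The veth is usually already gone; a missing link is not a failure.
      failed_when: false
      changed_when: false

Deleting either end of a veth pair removes both ends, so targeting the host-side name is enough.
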
16:41:10 next review I want to highlight evrardjp
16:41:13 #link https://review.openstack.org/#/c/218818/
16:41:25 ok
16:41:33 First, the status:
16:41:35 * cloudnull hands mic to evrardjp
16:41:43 There were 2 patches for delivering keepalived for haproxy. We've got an agreement, and therefore we are closer to releasing keepalived for haproxy.
16:41:47 However, there is still work to do.
16:41:54 one of them is to decide the inventory form.
16:42:01 we'll have to introduce an env.d/ file if there are ppl who'd like to test keepalived in containers.
16:42:11 I'd not personally do it, so I don't really care BUT I think keepalived/haproxy deserve an env.d/ file, whether it's to define variables for it or generate containers.
16:42:20 I was planning to create "keepalived" containers, but it seems that the name isn't obvious.
16:42:27 We can decide whether we use "haproxy", "keepalived" or do nothing.
16:42:33 Keepalived: you can use the same name/container for other services than haproxy. I doubt that ppl will do that though.
16:42:39 Haproxy: the name of the container better reflects what's running inside in this case: haproxy
16:42:49 odyssey4me: i'd +1 https://review.openstack.org/220342 except i sort of worked with cloudnull to write it, so not sure here.
16:42:57 nothing: if I'm not mistaken (cloudnull, could you confirm?) NOT setting an env.d would make the keepalived playbook run on the target you mention in conf.d. So it will run automatically on bare metal. The setup-hosts will not generate any containers.
16:43:14 So what's your opinion on this first topic? ;)
16:44:07 hmm, so I find myself wondering why anyone would separate their keepalived router from the service
16:44:14 do they not need to colocate?
16:44:21 I think if we're making haproxy and keepalived a first class citizen it should have an entry in env.d
16:44:40 ie keepalived does start and stop scripts for haproxy, so surely they need to be on the same system
16:44:41 I propose as haproxy
16:44:47 I agree with the env.d entry
16:44:53 ok
16:45:02 we seem to agree :)
16:45:14 so for me it seems sensible to have a container for haproxy, and keepalived can facilitate the vip for it in the same container
16:45:30 that part will be tricky though
16:45:42 keepalived in containers with multicast... not so sure...
16:45:49 it should be tested
16:45:55 odyssey4me: brings up a good point. but in this case keepalived could, in the future, be used with multiple services. so from a role perspective i think it makes sense to have two roles for haproxy and keepalived
16:45:56 oh dear, yes... there is that
16:46:13 cloudnull ++
16:46:18 yeah, that seems obvious
16:46:21 we could then set keepalived as a dep of the haproxy role, passing vars into it as needed
16:46:25 ^
16:46:36 it shouldn't even be a dep
16:46:39 it's not mandatory
16:46:42 evrardjp to keep things simple for now I'd suggest leaving out trying to make it work in a container and rather set that as a future improvement task
16:46:42 you could
16:46:44 and it will work
16:46:49 ok
16:46:52 there is another point:
16:46:56 maybe a dep with a conditional ?
16:47:05 we could
16:47:15 so about the other point:
16:47:17 conditional executes when there is > 1 host ?
16:47:17 at the moment I'm importing my changes from galaxy to this repo, so it's kinda annoying.
16:47:25 cloudnull I would prefer it not to be a dep in the haproxy role as it is too prescriptive of the architecture
16:47:52 it would be better to include the roles in the playbooks, that way the playbook describes the architecture but the roles are independent
16:47:56 what's inside the current commit is almost good I'd say
16:48:16 we can just include the keepalived playbook conditionally in haproxy's one
16:48:48 evrardjp yeah, I'm down with that
16:48:49 so about the galaxy part:
16:48:51 odyssey4me: if someone wanted haproxy on >1 host then they would need something like keepalived, so having it as a dep makes sense to me.
16:49:11 cloudnull but maybe they want to use another way of doing the virtual ip
16:49:27 like corosync etc.
16:49:31 yup
16:49:52 less prescriptive in the role, more prescriptive in the playbooks
16:49:56 so this is a good topic, I think it would be best to continue it on the commit maybe?
16:50:11 this way I can have another part of my message :)
16:50:16 ok carry on
16:50:25 so
16:50:38 if I could use the ansible-role-requirements.yml(.example), that would be of some help to avoid double work for me
16:50:44 I know there is no consensus yet, but do we have a timeline for it?
16:51:01 ah, so you mean the approval to allow independent role repositories?
16:51:14 yup
16:51:24 ie https://review.openstack.org/213779
16:51:40 ++
16:51:55 I have a revision to make on the spec after feedback from hughsaunders and cloudnull
16:52:09 so definitely not until mitaka summit
16:52:24 ok
16:52:27 it still seems that everyone is confused about the spec and thinks that it's the button to break out the current roles, which it is not
16:52:28 evrardjp: I view that more like a meta spec.
16:52:42 once we approve it we can start the process.
16:52:49 I thought this could be a good use case, that's all
16:52:57 and if moving the keepalived to the requirements file makes sense then i think we do that
16:53:00 the keepalived role
16:53:00 evrardjp it would :)
16:53:02 as to not duplicate work
16:53:42 I have another last topic, but I've taken too much time, sorry
16:53:45 ok, I'll revise that spec in the morning and hopefully we'll get some reviews through the day
16:53:52 IMHO if that spec goes in we should have the ability to add requirement roles to the stack from liberty >
16:54:06 k
16:54:59 so that's all for me
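
For reference, the env.d/ entry agreed on above could look roughly like the following. This is a sketch that follows the existing env.d skeleton layout; the group names are illustrative, and is_metal: true reflects the decision to keep haproxy/keepalived on the hosts rather than in containers for now:

    # env.d/haproxy.yml -- illustrative sketch, not the merged change
    component_skel:
      haproxy:
        belongs_to:
          - haproxy_all

    container_skel:
      haproxy_container:
        belongs_to:
          - haproxy_containers
        contains:
          - haproxy
        properties:
          # keep haproxy (and keepalived) on bare metal for now
          is_metal: true

    physical_skel:
      haproxy_containers:
        belongs_to:
          - all_containers
      haproxy_hosts:
        belongs_to:
          - hosts
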
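And one way the "include keepalived conditionally in haproxy's playbook" idea could be expressed, keeping the two roles independent and only pulling keepalived in when there is more than one haproxy host to fail over between. The play layout, group name, and role names are assumptions for illustration:

    # haproxy-install.yml -- illustrative sketch only
    - name: Install haproxy, with keepalived when a VIP is needed
      hosts: haproxy_hosts
      user: root
      roles:
        # keepalived only makes sense with more than one haproxy host,
        # so gate it on group size instead of hard-wiring a role dependency.
        - role: "keepalived"
          when: groups['haproxy_hosts'] | length > 1
        - role: "haproxy_server"

Keeping the condition in the playbook rather than in the role's meta/main.yml matches the "less prescriptive in the role, more prescriptive in the playbooks" point above, and leaves room to swap keepalived for corosync or another VIP mechanism later.
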
16:55:08 we have only 5 min left
16:55:27 #topic Open discussion
16:55:29 anything ?
16:56:00 and input on the security hardening thread is good ;)
16:56:06 * mhayden is considering drafting a spec
16:56:13 .doit()
16:56:15 s/and/any/
16:56:26 mhayden: Yes, we should :)
16:56:26 * mhayden will do the things
16:56:57 there should be a lot of discussion about SSLing things then :p
16:57:09 ++
16:57:17 and omg... apparmor ><
16:57:28 TOMOYO? ;)
16:57:34 since we can stack LSMs in Linux 4.2 now
16:57:39 mhayden: and replied to the reply
16:57:41 * mhayden wanders off-topic and waits for cloudnull to throw a keyboard
16:57:48 hahahaha
16:57:53 mhayden: Is 4.2 in 16.04? :)
16:57:53 lol
16:57:55 4.2 or bust !
16:57:56 the LSM stacking is not that useful imo :P
16:58:00 palendae: 17.10
16:58:18 i still can't convince people that one LSM at a time is a good thing :P
16:58:31 mhayden: selinux or bust
16:58:34 i'm sure RHEL will backport it to 3.10 ...
16:58:39 cloudnull: why don't we have selinux support?
16:58:47 ubuntu?
16:58:48 i heard selinux sucks
16:58:50 we do
16:58:52 * mhayden giggles
16:58:53 they all suck
16:58:54 setenforce = 0
16:59:00 :D
16:59:05 cloudnull: sigh
16:59:12 http://stopdisablingselinux.com/
16:59:22 I thought about wearing that t-shirt today :P
16:59:26 http://startdisablingselinux.com/
16:59:31 haha
16:59:40 great, now someone will buy that :P
16:59:40 time to end the meeting
16:59:43 ok we're done here
16:59:44 http://dontreadthedocs.org
16:59:45 https://twitter.com/grsecurity/status/638529614743273472
16:59:49 * mhayden woots
16:59:49 thanks everyone !
16:59:57 lates d00ds
16:59:57 #endmeeting