16:00:12 #startmeeting OpenStack-Ansible
16:00:13 Meeting started Thu May 19 16:00:12 2016 UTC and is due to finish in 60 minutes. The chair is odyssey4me. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:16 The meeting name has been set to 'openstack_ansible'
16:00:20 #topic Rollcall & Agenda
16:00:45 o/
16:00:47 \o/ - But I'm also in the diversity meeting so please tag if really needed :)
16:00:53 o/
16:00:53 o/
16:01:00 #link https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
16:01:05 * mhayden woots
16:01:11 o?
16:01:17 o/
16:01:25 o/
16:01:25 mhayden damn you're quick
16:01:43 here
16:01:55 \o
16:02:02 \o
16:02:11 spotz: i move fast!
16:02:46 o/
16:03:21 o/
16:04:05 #topic Tests Repo
16:04:10 andymccr how's that going?
16:04:15 so
16:04:23 working through it, hopefully have something tangible up next week
16:04:40 ++ nice!
16:04:40 there are some things around repo cloning/role inclusion and what we want to keep in the separate roles
16:04:52 .
16:04:54 which will be up for discussion after the PoC
16:05:15 but got a pretty good idea of how it will look (in a rough sense) and we can clean that up
16:06:15 thanks for the work already :)
16:07:29 so tl;dr it's progressing, will have something next week - right now nothing specific!
16:08:00 ok cool - are there any challenges you'd like some help with, or are you happy to keep going on your own for now?
16:08:53 andymccr ^ ?
16:09:26 happy for now. i'll be able to get a working thing going - we may need to refine a lot though :)
16:09:44 cool, thanks for working on it
16:09:50 #topic Action items from last week
16:09:54 #link http://eavesdrop.openstack.org/meetings/openstack_ansible/2016/openstack_ansible.2016-05-12-16.01.html
16:10:12 I am due most of them, and am behind in doing them. Some of these take time, but I'll keep going.
16:10:24 #topic Mid Cycle Planning
16:10:40 We didn't get to discuss this last time around and we need to make a call.
16:10:58 where is it going to be this time?
16:11:00 San Antonio! :)
16:11:17 Last time we tried a co-located mid cycle, along with remote access. It didn't really work - so in my view it's either we're all using video conferencing, or we're all co-locating.
16:11:35 SAT would certainly make my life easier :)
16:11:37 If it's in San Antonio, I'm likely to be able to attend in person
16:11:44 All using video conferencing means that we'll have to figure out how to facilitate a mid cycle across time zones.
16:11:51 ya, sat is easier for me
16:11:53 All co-locating excludes many of us from participating.
16:12:12 but in person is 100% more productive also
16:13:10 I was thinking that we could certainly try out a video conf mid cycle across time zones such that we work out the work items ahead of time, then do sprint groups in each major time zone and hand over to each other at the end of the work day.
16:13:32 hangouts?
16:13:33 But yes, co-locating does allow us to have very different discussions - and allows us to discuss, think about it, then discuss again.
16:13:56 odyssey4me: I will say with use of etherpad the global bugsquash worked pretty well coordinating the groups in different timezones/locations
16:14:16 prometheanfire The tool for video conferencing isn't the issue - the issue is that working across time zones all together doesn't work, and nor does a mix of in-person and remote.
16:15:02 I like your proposal for structuring the video only odyssey4me
16:15:15 +1
16:15:20 jmccrory_ cloudnull d34dh0r53 stevelle mattt hughsaunders andymccr mhayden your input especially please
16:15:20 I'd prefer to be co-located, but understand that may not be possible
16:16:08 I like a co-located midcycle
16:16:17 in person and remote mixed is how we work all the time
16:16:27 we could probably semi-colocate
16:16:27 ^
16:16:39 we don't really need a midcycle to continue doing what we already do
16:16:56 I'd agree with cloudnull on that
16:17:04 heh, true
16:17:09 i'd prefer something in-person
16:17:24 it's nice if it can work
16:17:27 +1 semi-colocate if possible
16:17:42 yeah, the focal point for the mid cycle is to continue design discussions we may not have completed at the summit, and to ensure that we can put the right people together to unblock any work that is critical to deliver this cycle
16:17:42 perhaps we do two in-person rooms that are linked via VC?
16:17:46 (as an alternative)
16:18:10 mhayden: that's what I was thinking
16:18:20 perhaps a north american and a european one
16:18:22 then link them
16:18:24 so the trouble with semi-colocation, or any sort of mix where everyone isn't in the same room, is that we inevitably end up with side discussions that not everyone can hear
16:18:26 +1 mhayden
16:18:50 odyssey4me: I think that's always a risk
16:18:56 that's why I thought of the sprint groups - allowing localised co-location, but handovers between time zones
16:19:00 i'd love to be optimistic, but i agree with odyssey4me - the logistics of it are tough and i've genuinely never seen it work.
16:19:13 odyssey4me: there would just need to be ground rules for that
16:19:39 for folks who can't make the mid-cycle we could do a recorded hangout so folks can watch and participate via IRC. but i agree the logistics are hard.
16:19:42 otherwise we need to have day 1 of midcycle be the "fix all the ridiculous issues with VC meetings" day
16:19:47 summary + vote?
16:19:53 lol
16:20:28 evrardjp and i talked, and we should have the midcycle in belgium
16:20:32 * mhayden winks
16:20:33 basically I was thinking that we forgo trying to work together across time zones and instead work separately, but hand over the work in a brief discussion
16:20:35 cloudnull, discussions aren't the same as a talk; not sure recording would be super valuable
16:20:36 :D
16:20:39 (with the applicable code review)
16:22:24 palendae: +1
16:22:55 Appreciate the sentiment, just not sure enough people would view/listen to make it worth the hassle
16:22:57 so IMHO side discussions happen just as much when we're all together as when co-located so that is sort of a non-issue for me, we have to be more disciplined about sharing the results of those discussions (I myself am very guilty of not sharing)
16:22:59 it makes sense to have this next one in the US.
16:23:01 yeah, recordings are not the point
16:23:06 we had the last one in the UK
16:23:25 maybe philly :) - cc automagically
16:23:32 d34dh0r53, true, but it gets more garbled with video conferencing
16:23:33 cloudnull: but the UK is the center of the world. so that just makes sense.
16:23:42 automagically and I discussed and we pick Aachen
16:23:45 time starts in the UK
16:23:46 We could host, I'm sure
16:23:51 ha!
16:23:54 andymccr: it was once. now no longer.
16:23:55 andymccr: technically brussels is
16:23:56 d34dh0r53: exactly.
16:24:01 cloudnull: GMT - it's where time begins.
16:24:07 and ends
16:24:09 :p
16:24:19 Sufficiently bike-shedded yet?
16:24:19 cool, let's have the mid-cycle in Brussels
16:24:27 +1 automagically
16:24:46 so vote or more ideas?
or how are we gonna decide on this
16:24:49 ok, ok - we don't absolutely have to decide right now - I agree that it's fair to have it in the US if we all co-locate, but there are some other factors to discuss before we finalise that discussion.
16:24:54 the yaks are bare
16:25:00 The next thing to think about is how long it should be.
16:25:09 to echo automagically i'm sure RAX at the castle would have no issues hosting
16:25:15 Then finally where it should be, assuming co-location.
16:25:18 mhayden: that's the good thing about yaks, their pelts grow back very quickly
16:25:34 Was it 3 days last time?
16:25:40 automagically: only 1, it was not nearly enough
16:25:41 Previously we tried one day, and IMO it wasn't enough time.
16:25:49 figure out who can go, if most people are coming from SAT then naturally SAT
16:25:51 midcycles are typically 3 days in the rest of openstack
16:26:04 Yeah, 1 day is too little, especially when having people travel to co-locate
16:26:05 3 days sounds good to me
16:26:05 2-3 days seems reasonable
16:26:15 I have had 3 days of time recommended - generally the first day for discussion, the next day for follow-on and starting actual sprint work, then the last day to finalise work.
16:26:17 y'all should try to get a room with video conferencing built in to allow remotes though
16:26:32 sigmavirus24 we tried that last time and it was terrible
16:26:47 what was terrible about it?
16:26:50 sigmavirus24: yeah i mean we will do that i think - we tried last time, but yeah it's quite hard. and the nature of mid-cycles doesn't really go well with VCs etc imo.
16:26:52 it worked for the openstack security midcycle
16:26:53 in fact it was quite disruptive
16:26:55 i thought it was on someone's laptop last year
16:27:04 or last time, i mean
16:27:18 the setup was poor, which is why it didn't work out
16:27:24 mhayden nope, it was a VC in the room with the whole big camera, mic and all
16:27:35 but the room was crazy loud which didn't help
16:27:40 weird
16:28:05 try booking the room by the nda lab at the bottom of the stairs to the toy store
16:28:52 ok i think we should move on. we have an idea of the logistics/issues/ideas etc.
16:29:06 +1
16:29:06 there are some outstanding questions we need answers to before we can decide anyway.
16:29:14 +1
16:29:33 +1
16:29:49 +1
16:29:57 yeah, agreed andymccr - let's think on it and discuss it again next week - I'd like very specific feedback about who would be able to come if we colocate in the US, where we could colocate, and what dates would be suitable for that colocation (for those offering venues)
16:30:16 ^ Makes sense
16:30:34 #action Finalise discussion about mid-cycle next week
16:30:38 what time frame are we talking about? July?
16:32:20 michaelgugino we can choose when, and we'll have to look into other mid cycle dates and try not to clash with any other mid cycles our contributors are attending
16:32:27 #topic Release Planning and Decisions
16:33:00 We're scheduled to do releases for Liberty & Mitaka today - is there anything specific that's a blocker on any of those?
16:33:51 For Kilo note that we've done a SHA bump to the EOL tags for Kilo and will release our EOL late next week. If you have anything you need in Kilo, now's the time to get the patches done and justify why they should be included.
16:34:19 Anyone got any blockers?
16:34:58 I believe there is 1 change that could go into liberty but needs me to submit a known issue doc
16:35:14 would be nice to get it in but isn't critical
16:35:18 I believe all the stuff I was concerned about has been merged into stable/mitaka
16:36:25 stevelle: link or scope?
16:36:31 stevelle Can it wait for the next tag?
16:37:09 odyssey4me: it can wait, it made it this far with only one person asking in irc for the fix
16:37:36 ok, thanks
16:37:48 #action odyssey4me to request releases for Liberty, Mitaka today
16:38:07 #topic Ubuntu 16.04 LTS Support
16:38:18 #link https://etherpad.openstack.org/p/openstack-ansible-newton-ubuntu16-04
16:38:26 I think we're doing quite well so far.
16:38:33 Thank you all for pitching in!
16:38:54 You may have noticed that all non-openstack roles now have a non-voting xenial gate check - that merged this morning.
16:39:09 I'll do the same for CentOS.
16:39:22 +1 woot!
16:39:30 Some really good progress there
16:39:44 nice
16:40:04 For those that are passing both CentOS and Xenial checks, I'll promote them to voting checks next week or ASAP afterwards.
16:40:12 i've been a bit tied up and haven't spent much time on the memcached role, if anyone is itching to take it over feel free
16:40:32 jmccrory left some good feedback which needs addressing
16:40:37 mine's not :(
16:40:46 One note I've been warned about is that there aren't yet wheel mirrors for CentOS/Xenial, so the gate builds will be slower.
16:41:17 There is a local pypi mirror, so it'll still be reasonably fast.
16:42:03 running into lots of fun quirks with xenial builds during gates
16:42:13 Any questions/comments/thoughts?
16:42:25 I don't feel we're getting consistent container builds
16:42:34 How so?
16:42:52 michaelgugino after the patch merge this morning, that should be better.
16:43:01 this was passing: https://review.openstack.org/#/c/312602/
16:43:38 https://review.openstack.org/315114 / https://review.openstack.org/318571 should have fixed up much of the container build weirdness that had been bugging us for around a week
16:43:40 well, it was technically passing because the check was running two passes; one for the actual install, and one for the upgrade steps. The initial install was working, and it was failing on the upgrade steps
16:44:02 I removed the upgrade steps, now we can't install pip due to what looks like an ssl error. I tweaked the test script to install ca-certificates, hopefully that helps.
16:44:29 ssl errors like that seem to be cropping up randomly recently
16:44:43 the ssl error may be completely unrelated and may instead have to do with comms issues
16:44:49 that's what is making me think inconsistent containers
16:44:59 evrardjp was digging into some of that, as was mattt
16:45:14 we may have a container IP range clashing with the cloud provider ranges
16:45:29 we had that before, and it caused the same kind of random failures
16:45:31 I'm seeing u'Connection failure: unknown error (_ssl.c:2829)'}
16:45:44 I would suggest checking the cloud providers to see if there's some consistency
16:45:58 michaelgugino: could you check if you have connectivity?
16:46:10 that error usually means invalid certs. Invalid certs means an outdated ca-certificates package.
16:46:14 because what I started with nova is definitely not done all over
16:46:16 evrardjp: conn to what?
16:46:18 we need to know if it's all providers, or some - if it's specific to the platform (trusty/xenial/centos) or not
16:46:31 +1 odyssey4me
16:46:34 looks like it's just xenial atm
16:47:03 michaelgugino: I meant network connectivity working fine in the container, just to make sure
16:47:20 I can only track what I can see :p
16:47:34 michaelgugino FYI https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/vars/ubuntu-16.04.yml#L61 is run on the container cache/image and therefore the containers have fully up to date package levels
16:47:43 how do I check network connectivity in the container?
16:47:54 that's why we had issues when we switched from the ubuntu archive to the rackspace mirror
16:48:23 you'll have to implement something in the tasks to output to the console - we have no access into the instances that run the thing
16:48:56 troubleshooting this stuff in openstack-ci is a pain... that's why the integrated gate has so much diagnostic information
16:49:34 we should run apt-get update before we run apt-get upgrade
16:50:06 michaelgugino that's done in line 60
16:50:44 I see that now
16:51:24 odyssey4me: that's for the lxc_hosts. The failures seem to be in the containers themselves
16:51:44 anyway - the point is that all you have for logs is the console, so to diagnose issues you have to put a WIP patch in which gives you the info you want on the console, then wait for the job to run
16:51:49 should we continue this outside the meeting?
16:52:02 sure
16:52:11 michaelgugino yep, I was just trying to tell you that the containers are up to date when they're built
16:52:30 ok, we can continue in the channel
16:52:41 #topic Open Discussion
16:52:43 We have 5 mins available for general discussion
16:52:54 Does anyone want to raise something specific?
16:52:58 I have a question that fits here
16:53:03 I made a blueprint for cloudkitty: https://blueprints.launchpad.net/openstack-ansible/+spec/role-cloudkitty
16:53:12 and I'm wondering how to proceed
16:53:37 I had to make changes to os_horizon and added a playbook to osa so I wasn't sure what to do with all this..
16:54:14 do you have a role we can already check?
16:54:18 yes
16:54:23 good start :D
16:54:40 Posted in the channel yesterday but easy to miss in the scrollback.. here's updated task/gate profiling: https://gist.github.com/Logan2211/6e3160c7d28d3886ba9d212c780e1918
16:54:46 https://github.com/michaelrice/openstack-ansible-os_cloudkitty
16:55:12 errr if you're ready to import then let me know and I'll request the import, then we can work on it from there
16:55:36 odyssey4me: I'm ready to import
16:55:42 errr with regards to edits on other roles, I'd suggest that you temporarily note them or include shortcut things for them in an 'extras' folder
16:56:14 odyssey4me: I'm not sure I understand what you mean by that
16:56:14 you can see an example here: https://github.com/flaviodsr/os_sahara/
16:56:18 ah ok
16:56:49 from there we can set up the tests, then work towards patching things into the other roles and the integrated repo
16:56:50 about it odyssey4me any news on the sahara import?
16:57:01 I'll put in the request for the import
16:57:14 flaviodsr https://review.openstack.org/317931
16:57:28 thanks logan- :D
16:57:39 good thanks odyssey4me!
16:57:40 I guess I'll just update and say that working to add a distro to nodepool is not fun and takes forever
16:57:43 thanks logan- I haven't had a chance to look at it yet
16:58:28 yep np
16:58:34 flaviodsr I'll add a request to also have a core team for the sahara role and include you there, because you know how it works.
16:58:38 repo as usual :D
16:58:43 errr I'll do the same for the cloudkitty role.
16:59:11 ok thanks
16:59:12 ok odyssey4me, I am about to finish the install-guide entry
16:59:19 We're out of time, so any further discussion can happen in #openstack-ansible.
16:59:39 Thanks all for your time and continued contributions!
16:59:39 #endmeeting
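Editor's note (not part of the log): during the Ubuntu 16.04 discussion, one suspected cause of the random gate SSL/connectivity failures was the LXC container IP range clashing with the cloud provider's ranges. A minimal sketch of how that theory could be checked offline is below; the CIDR values are illustrative assumptions, not the actual gate or provider networks.

```python
# Sketch: report which provider networks overlap the container network.
# The CIDRs used in the example call are hypothetical, chosen only to
# illustrate the kind of clash discussed in the meeting.
import ipaddress

def overlapping_ranges(container_net, provider_nets):
    """Return the provider networks that overlap the container network."""
    cnet = ipaddress.ip_network(container_net)
    return [p for p in provider_nets
            if cnet.overlaps(ipaddress.ip_network(p))]

# A default lxcbr0-style subnet checked against two assumed provider ranges.
provider = ["10.0.0.0/16", "172.16.0.0/12"]
print(overlapping_ranges("10.0.3.0/24", provider))  # -> ['10.0.0.0/16']
```

Any non-empty result would point at renumbering the container network as the fix, rather than chasing certificate errors.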