16:01:46 #startmeeting OpenStack Ansible Meeting
16:01:47 Meeting started Thu Jul 23 16:01:46 2015 UTC and is due to finish in 60 minutes. The chair is cloudnull. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:51 The meeting name has been set to 'openstack_ansible_meeting'
16:01:59 o/
16:02:02 #topic rollcall
16:02:07 o/
16:02:09 o/
16:02:22 */
16:02:25 o/
16:02:51 o/
16:03:23 hello
16:03:53 o/
16:05:23 * sigmavirus24 clears throat
16:05:44 so i guess that's all of us
16:05:46 ...
16:05:52 #topic Review action items from last week
16:05:54 Good meeting
16:05:55 Better than some glance meetings' attendance =P
16:06:21 Regarding EPOC and pip - add more hours to the day so sigmavirus24 can put together a POC <- sigmavirus24
16:06:31 *epoch
16:06:35 Yeah, I haven't gotten to that yet
16:06:40 Still need those hours
16:07:08 #action add more hours to the day so sigmavirus24 can put together a POC for pip and epochs
16:07:10 carried
16:07:13 Would that solely be part of the upgrade script?
16:07:19 Or changes to yaprt?
16:07:31 i'd assume either can be done.
16:07:43 or both.
16:07:47 Ok
16:07:48 sigmavirus24: what did you have in mind
16:08:05 I was thinking keep it in the upgrade script for now
16:08:12 Or go for that to see if that alone will work
16:08:21 May not be a bad idea to make the master upgrade script be kilo -> liberty at this point
16:08:24 I don't think yaprt needs this, unless we want to force yaprt to build with epochs
16:08:38 which will then be something that has to happen for every upgrade after that
16:08:38 And any juno -> kilo upgrade stuff just goes straight to kilo
16:09:22 ok next item
16:09:24 reenable the successerator for a single retry within the gate. To be removed as soon as Ansible v2 drops upstream - someone
16:09:32 this was done.
16:10:09 #link https://review.openstack.org/#/c/203257/
16:10:18 :thumbsup:
16:10:23 :)
16:10:29 next
16:10:31 get a thread going on the mailing list surrounding issues with upgrading Kilo > Liberty - someone
16:10:56 and we're carrying that until there are more hours in a day
16:10:58 #action get a thread going on the mailing list surrounding issues with upgrading Kilo > Liberty - someone
16:11:08 o/ jwitko :)
16:11:27 Hello!
16:11:29 #topic Blueprints
16:11:40 #link https://review.openstack.org/#/q/status:open+project:stackforge/os-ansible-deployment-specs,n,z
16:12:03 we have a few open ones, please review them when you get a chance
16:12:26 important ones IMHO are https://review.openstack.org/#/c/203708/
16:12:37 https://review.openstack.org/#/c/203706/
16:12:42 and https://review.openstack.org/#/c/203754/
16:13:53 also https://review.openstack.org/#/c/202373/1
16:13:59 which is a cleanup of the specs dir
16:14:21 I will commit to getting at least one of those reviewed today
16:15:05 I've looked through most of them already and have posted some questions. I'll get through a few more tomorrow.
16:15:24 cloudnull: Does the my.conf one depend on the v10 one at all?
16:15:40 palendae: no
16:15:43 k
16:17:35 also we have https://review.openstack.org/#/c/205062/ which is a continuation of the work done by svg
16:17:52 #link https://review.openstack.org/#/c/181957/
16:18:49 #topic Reviews
16:19:16 anything that we want to highlight regarding open reviews?
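[Editor's note: the "pip and epochs" action item above refers to PEP 440 version epochs, which let a package force pip to treat a numerically smaller version number as newer. A rough sketch of the idea with a hypothetical package name; this is not the actual POC, which had not been written at the time of this meeting:

    # epoch 0 is the default, so 2015.1.0 is really 0!2015.1.0;
    # an explicit epoch 1 sorts above ANY epoch-0 version, so
    # 1!11.0.0 > 2015.1.0 even though 11.0.0 < 2015.1.0 numerically
    echo 'example-service==1!11.0.0' >> requirements.txt
    pip install --upgrade -r requirements.txt
]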
16:19:45 cloudnull I think that https://review.openstack.org/#/c/205062/ is a spec by our team to cover the work done by svg, but with our revisions in it
16:20:13 yes
16:20:18 it'll become the blueprint for that work - the team will revise svg's patch and bring it up to date and do some heavy testing on it
16:20:27 ++
16:22:15 in our open reviews we have a few in flight that would be good to get nailed down soon-ish https://review.openstack.org/#/q/starredby:cloudnull+status:open,n,z
16:23:01 does anyone have any issues with backporting https://review.openstack.org/#/c/205008/ https://review.openstack.org/#/c/205056/ and https://review.openstack.org/#/c/204996/ ?
16:23:26 i'd like to have them in Kilo, to rid ourselves of key distribution issues.
16:24:36 cloudnull I'm happy with that - I'm surprised that they built first time!
16:24:57 #startvote Should we backport the new key distribution process? Yes, No, Maybe
16:24:58 Begin voting on: Should we backport the new key distribution process? Valid vote options are Yes, No, Maybe.
16:24:59 Vote using '#vote OPTION'. Only your last vote counts.
16:25:11 #vote yes
16:25:15 #vote yes
16:25:39 #vote yes
16:25:46 #vote yes
16:25:54 #vote yes
16:25:57 #vote yes
16:26:18 #showvote
16:26:18 Yes (6): d34dh0r53, palendae, odyssey4me, andymccr, Apsu, cloudnull
16:26:43 ok i think it's safe to say it's a yes.
16:26:46 #endvote
16:26:46 Voted on "Should we backport the new key distribution process?" Results are
16:26:47 Yes (6): d34dh0r53, palendae, odyssey4me, andymccr, Apsu, cloudnull
16:26:49 done :D
16:27:25 #action backport https://review.openstack.org/#/c/205008/ https://review.openstack.org/#/c/205056/ and https://review.openstack.org/#/c/204996/ to kilo - someone
16:27:38 already done
16:27:49 sweet
16:28:01 #topic Open discussion
16:28:21 what do we want to talk about? it's time for festivus.
16:28:43 we could do with some eyes on https://review.openstack.org/194474 - the Keystone SSL cert/key distribution
16:29:08 that review brings that into line with the same mechanism as is used for horizon and fixes a lot of the brokenness
16:29:36 it also appropriately fixes the haproxy configuration depending on whether you have Keystone set up with SSL or not
16:30:50 can we modify the commit to use the process outlined by andymccr which is being done in swift? or do you think that can be done in a dependent patch?
16:31:10 for ssl cert sync
16:31:19 cloudnull this one's thoroughly tested and consistent with all other ssl cert distributions
16:31:52 I'd like to try out the method discussed with andymccr, but that'll be when there's time for that in a subsequent patch.
16:32:02 this one works as-is and fixes a lot of broken
16:32:08 yeh i think if it's working now let's mark it as "to do" work and get it done at some point
16:32:27 ok
16:33:00 i'll get a review in on all that today
16:33:39 thanks
16:33:59 everyone's favorite topic https://review.openstack.org/#/c/204315/
16:34:22 cloudnull: oh?
16:34:29 I don't recall that being my favorite topic =P
16:34:31 trying to make gate runs happier in hpb4
16:34:39 you know you love it :)
16:34:58 Oh hpb4 is my favorite topic.
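[Editor's note: a minimal sketch of the stable-branch backport flow commonly used for OpenStack projects at the time; the topic branch name and change SHA below are placeholders, not values from the meeting:

    git fetch origin
    git checkout -b backport/key-distribution origin/kilo
    git cherry-pick -x <master-change-sha>   # -x records the original commit id in the message
    git review kilo                          # submit the backport against the kilo branch
]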
Carry on =P
16:35:04 this forces an update to happen in all containers and logs all the things in a new gate directory "http://logs.openstack.org/15/204315/8/check/os-ansible-deployment-dsvm-check-commit/a495ded/logs/ansible_cmd_logs/"
16:35:32 cloudnull not the neatest of logs, but something's better than nothing
16:35:48 yea the ansible return data is kinda shit to read
16:36:15 cloudnull: do we want to recheck that until we get it to hpb4?
16:36:19 last run was against raxdfw
16:36:34 I don't think so - the best way we're going to get it running on hpcloud-b4 is to merge it in
16:37:17 odyssey4me: I saw one of the last rechecks was "recheck -hp1 success"
16:37:22 so I figured we could keep rolling the nodepool dice
16:37:58 odyssey4me: i think i can make the return output from ansible better if i add in "--one-line" to the ansible commands
16:38:16 it will be a lot to read on one line, however it won't be something like http://logs.openstack.org/15/204315/8/check/os-ansible-deployment-dsvm-check-commit/a495ded/logs/ansible_cmd_logs/force_apt_update/aio1_memcached_container-541276d9
16:38:45 your call. if you think it best to rev it
16:38:59 cloudnull: Why not pipe to python -m json.tool to format it a little?
16:39:45 each log file is generated per node.
16:39:55 i guess we could do a post process
16:40:05 cloudnull I think it's fine as-is and can be improved in a later patch
16:40:09 ok
16:40:14 for now we need some sort of diagnostics for hpcloud-b4
16:40:21 ++
16:40:28 i hate hpb4 ...
16:40:31 and we won't be getting it unless we recheck ad nauseam, or merge it
16:40:48 okiedokie typie typie make it go :)
16:41:11 what else we got?
16:41:20 sigmavirus24 made it go :)
16:41:28 sigmavirus24: for PTL
16:41:32 :)
16:41:35 of?
16:41:37 yes
16:41:40 -_-
16:42:35 ah liberty keystone is unhappy here https://review.openstack.org/#/c/199126/
16:42:49 if someone is bored it would be a great help to get that narrowed down
16:43:03 whether it's an upstream issue or an us issue, idk
16:44:03 cloudnull on another issue - upstream kilo merged the urllib cap, do you plan to rev the SHAs next week? if so, we can remove our cap in the same review
16:44:21 ah good point,
16:44:29 i plan on being on vacation next week
16:44:32 :)
16:45:06 maybe we rev it today
16:45:13 cloudnull ah ok, we can chat a bit after this then - we have a few things to sort out for your absence
16:45:15 and remove our cap
16:45:25 FYI odyssey4me will be handling business while i'm gone.
16:47:43 Meet the new boss, same as the old boss
16:48:02 what can possible go wrong?
16:48:07 I for one welcome our UK overlords
16:48:14 ^
16:48:36 b3rnard0: you could forget a y?
16:49:34 lol
16:49:46 I suppose that we're done here?
16:50:12 yup
16:50:32 thanks everyone
16:50:35 #endmeeting
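[Editor's note: a rough sketch of the post-processing idea floated above. An ansible ad-hoc result line has the form "host | status >> {json}", so the JSON payload can be pretty-printed after stripping the prefix, assuming the module returns a JSON payload; the log filename is taken from the URL above and the flow is illustrative, not what the gate actually did:

    # strip the "host | status >> " prefix (everything before the first '{'),
    # then pretty-print the remaining JSON payload
    sed 's/^[^{]*//' aio1_memcached_container-541276d9 | python -m json.tool
]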