15:59:57 #startmeeting openstack-chef
15:59:57 Meeting started Mon Feb 8 15:59:57 2016 UTC and is due to finish in 60 minutes. The chair is jklare. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:01 The meeting name has been set to 'openstack_chef'
16:00:05 hey everybody
16:00:08 hey hey!
16:00:15 hi there
16:00:35 any topics from you guys for today?
16:01:05 nope, though we’ll probably have some Doc Reviews coming in from a few people
16:01:23 they want ATC and asked me what the easiest way to get it from openstack-chef is.
16:01:31 i said docs, god we need better docs
16:02:11 our docs or our stuff in the docs repo?
16:02:19 both?
16:02:31 o/
16:02:44 yo
16:02:46 cool
16:02:50 hey sc` markvan
16:03:55 will be good to get some more docs, especially from the outside
16:03:55 #topic refactoring (The same procedure as every year, James!)
16:04:26 yeah, we need to update a lot of stuff after we merged the refactored cookbooks
16:04:41 all the READMEs in the cookbooks and of course all the docs in the docs repo
16:04:49 not sure how those link to each other
16:05:27 I see the repo is getting thru the gate, but failing at the end with that odd tempest issue. Good to see this coming together
16:06:09 markvan yeah, i actually have no idea why it's failing
16:06:17 i ran an allinone locally on my mac with the patches last week and it completed from a chef point of view. didn't try tempest
16:06:58 this tempest error we get looks like the command is run in the wrong virtualenv or without one
16:07:04 missing something in its path
16:07:15 but i have not looked into that in detail
16:07:42 i pushed patches for all of your comments markvan
16:07:45 yup, it does not appear to be related to this refactoring, we could blacklist it if we want more green lights.
16:07:53 :D
16:08:06 i think we should fix it
16:08:17 but we can still start merging since it's non-voting
16:08:20 yeah, just some final cleanups, I'll go thru them again shortly and we can start pushing after that.
16:08:26 great
16:08:52 sc` j^2 can any of you spare some hours this week for review?
16:08:53 after building allinone i was tempted to +wf the patches with a +2
16:09:08 jklare: yeah i should be able to
16:09:12 just for peace of mind, we could push a temp patch to the repo without any deps to see if the tempest fails with "old" master
16:09:26 +1
16:09:36 i think we still need to clean up some stuff, but that can be done in additional smaller patches
16:09:55 i do not want to keep working on these until they are perfect
16:10:06 makes code review quite hard
16:10:11 agreed, more cleanups at this point are just churn
16:10:16 same here. if they converge, let's get them in master and clean it up later
16:10:26 i want external folks to start banging away on some of this soon
16:10:44 got somebody in mind sc` ?
16:10:57 so maybe we should spend a bit of time on that tempest issue (from both sides, why it's failing and whether it fails without the refactor)
16:10:59 because we still need more docs on how to use it and what we changed
16:11:14 and at some point we can do final reviews
16:11:46 maybe someone from my team. have to talk it over internally.
we're looking at either liberty or mitaka, mitaka if it's released by the time we get to 'upgrading'
16:11:54 actually i would love to get that tempest issue fixed, but i have to focus a bit more on utilising the refactored cookbooks for our own deployment
16:11:59 got some deadlines there
16:12:38 my idea is to use the cookbooks for a real multi node deployment and fix/clean up things on the way
16:12:47 makes sense
16:13:23 i don't want to spend a whole lot of extra time finessing the patches. if they work (and they appear to), let's start pushing some code
16:13:52 tempest check patch: https://review.openstack.org/277480
16:13:57 sure, but we still need to fix that tempest issue, since our integration jobs should work before we can call something stable
16:14:21 markvan cool, let's hope it fails :)
16:14:31 yup. just adding my $0.02 :)
16:14:49 ?
16:15:11 wrong window
16:15:15 :D
16:15:38 i was hoping you wanted to start betting on the gate job :)
16:15:47 ha!
16:16:28 anything else directly related to the refactoring?
16:16:44 i have another small topic
16:17:35 #topic core-reviewers
16:18:20 so, as you might have noticed, we have 2 new groups in gerrit -> openstack-chef-core and openstack-chef-release
16:19:26 i moved all of you ( sc`, markvan and j^2 ) into that new group as well as zhiwei
16:19:32 sounds good
16:19:39 right on
16:19:45 +1
16:20:03 what should happen with the other members of the old openstack-manager-core group?
16:20:25 it’s safe to say they aren’t around anymore, time to say goodbye
16:20:37 alan meados, christopher m luciano, ma wen cheng and matt ray
16:20:44 yep
16:21:00 i agree with j^2 here
16:21:03 i'd say if they're not around, they're either not interested in the project (right now) or don't have the time to spare
16:21:38 but i wanted to mention it so we don't just remove people without consensus
16:22:14 the last time i saw alan was in Vancouver :(
16:22:16 of course. we have to be transparent in stuff like this
16:22:28 Chris, i think, is in a different part of IBM now
16:22:58 I think ma wen cheng is still around, but don't know his status, but with zhiwei on there, I think we're good
16:23:03 and Matt is still my boss, but he has little to do with OpenStack as a whole now
16:23:10 ok great
16:23:11 he’ll want ATC again for Austin, but i think it’s safe to say he’s not around anymore either
16:24:23 any comments on the other group (openstack-chef-release)? only members of this group are allowed to push signed tags (which is what i did), and currently i am the only member because infra does not want too many people pushing tags
16:24:45 works for me
16:24:58 i think we should only put our current PTL in this group, and he or she should be the only one that pushes tags and does the release-related stuff
16:25:18 makes sense
16:25:53 sounds good
16:25:57 totally, it’s part of the responsibility of the PTL to do that
16:26:27 ok cool
16:26:54 while we are at it, do any of you want to take over as PTL for the next cycle?
16:27:05 i was considering it
16:27:09 great
16:27:22 +1 for sc`
16:28:03 still have to deliver mitaka this cycle :D
16:28:10 sure
16:28:26 but i think we're making good progress here
16:28:39 but next cycle i will have to focus a bit more on internal stuff
16:28:41 definitely.
i'll start looking into what needs to be done for centos
16:29:05 yeah, centos is quite an interesting topic in that regard
16:29:13 most of the refactoring was just aimed at ubuntu
16:29:28 packages have been getting more stable, way faster after the CI overhaul
16:29:34 so maybe we should focus on more centos support and especially centos testing next cycle
16:29:36 so now we're behind instead of waiting
16:30:12 yeah, would be good to see a centos gate test, which I think is totally possible in infra now
16:30:35 i think the nodes are there and we should be able to build both
16:31:08 yeah. would be nice to have a gate for each platform we say we support
16:31:20 rhel might be hard
16:31:36 rhel might have to be unit tests only
16:31:48 suse too, if support ever comes back
16:31:57 i would not bet on that
16:32:17 yeah the suse guys hard forked us a long time ago
16:32:38 I’ve had some conversations with aspires about it, it ain’t coming back to us…probably ever
16:32:50 yeah, but maybe they like the new and shiny versions of our cookbooks
16:32:59 good to know
16:33:23 unlikely, they have spent _tons_ of money doing what they need to get done to build their “enterprise friendly ha OS cloud”
16:33:31 and in the end it's better to support just 2 platforms with good testing than support 5 without tests or maintenance for most of them
16:33:31 they have no desire to come back to the community
16:33:56 like i said earlier, my team is going to be switching gears to a newer release of openstack. we want to target mitaka, but will do liberty if it's not done
16:34:34 so there may be some patches from them. again, timing
16:35:04 ok, i guess we are already in open discussion
16:35:30 do any of you have more topics for the meeting or should we move the discussion to our channel?
16:35:45 i'm good. we can move things to our regular channel
16:35:47 I think I might see the issue with tempest
16:36:10 ?
16:36:36 it looks like a recent test addition, but requires tempest to actually be "pip" installed, not just cloned. I don't see anywhere in the integration cookbook where we install it
16:37:02 i think we clone the repo and install everything into a venv
16:37:07 tempest is packaged in rdo now
16:37:24 not sure about ubuntu
16:37:49 humm, but I don't think the venv contains tempest itself, just the other dependencies
16:38:23 yeah. all i see is the 'git clone' for tempest
16:38:38 I guess a couple of debug lines in the gate would show what's installed in the venv and verify this guess
16:39:20 none of the other test cases actually run "tempest" within the testing suite, until this one showed up in late Nov 15
16:40:00 sounds like a good start markvan
16:40:44 but I guess I don't really care about testing the tempest cmd line within our integration test, so I vote for blacklisting it
16:40:57 :D
16:41:05 unless the plugin list is really meaningful to someone?
16:41:07 yeah
16:41:09 at least we should not spend too much time on this
16:41:47 if it doesn't look like it makes sense, take it out. if someone cares enough, they'll add it back
16:41:47 i think we can just blacklist it for now
16:41:55 There are plenty of other tempest tests we could add that would actually hit openstack itself that we could spend time on
16:42:22 ok, I'll take a look at what it takes to blacklist it...have not done that in a while now
16:42:40 great
16:43:04 ok, i guess that's all for the meeting today
16:43:09 * markvan grounded in central iowa blizzard...but it's great, I'm with my granddaughter!
16:43:09 thanks for attending
16:43:31 and see you around in the channel
16:43:38 see you there!
16:43:46 ;)
16:43:53 #endmeeting
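
A minimal sketch of the two ideas raised in the tempest discussion above (markvan's suggested debug lines in the gate, and pip-installing the cloned tempest into its virtualenv), written as Chef execute resources. The paths /opt/tempest and /opt/tempest-venv and the resource names are illustrative assumptions, not the actual layout used by the integration cookbook:

  # Hypothetical sketch only; adjust the paths to wherever the integration
  # cookbook actually clones tempest and builds its virtualenv.

  # Debug: show what ended up inside the virtualenv, to verify the guess
  # that tempest itself was never installed there.
  execute 'list packages in the tempest virtualenv' do
    command '/opt/tempest-venv/bin/pip freeze'
  end

  # Install the cloned tempest checkout into the virtualenv so the `tempest`
  # CLI is actually on the PATH when the gate invokes it.
  execute 'pip install cloned tempest into its virtualenv' do
    command '/opt/tempest-venv/bin/pip install /opt/tempest'
    creates '/opt/tempest-venv/bin/tempest'
  end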