19:04:37 #startmeeting User Committee
19:04:38 Meeting started Mon Dec 14 19:04:37 2015 UTC and is due to finish in 60 minutes. The chair is ShillaSaebi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:04:39 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:04:41 The meeting name has been set to 'user_committee'
19:04:43 Thank you, I am! Looks like we have Shilla, Jonathan, Shamail and Lauren here - who else for the UC meeting?
19:04:44 Hi!
19:04:48 Morning all.
19:04:49 o/
19:04:57 I'm rather frantically trying to pull together my notes & IRC meeting cheat sheet
19:05:17 Subu sends his regrets
19:05:23 I'd love a copy of that cheat sheet sometime @jproulx
19:05:29 jproulx: I'll be doing that for the Product WG and I'll share it with the UC
19:05:40 well it's online somewhere & I'm googling for it :)
19:06:12 alright
19:06:17 we can start with the first topic on the agenda
19:06:20 https://wiki.openstack.org/wiki/Meetings/ChairaMeeting
19:06:23 https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee
19:06:42 Thanks!
19:06:48 #topic meeting schedule proposed to biweekly
19:06:50 hey :D
19:06:50 jproulx: Error: Can't start another meeting, one is in progress. Use #endmeeting first.
19:07:05 the command is #chair to add another chair
19:07:20 #chair jproulx
19:07:21 Current chairs: ShillaSaebi jproulx
19:07:29 ok we should be good to go
19:07:46 #topic schedule
19:08:01 proposal to switch to biweekly (from monthly) & have alternating times, with one APAC-friendly
19:08:20 I think we're generally on board with the idea, but how should we pick time 2?
19:08:46 The proposal makes sense to me, easier to remember. I would like to suggest someone taking an action item to send out a doodle to poll timing options.
19:09:04 rolandchan seems to be our regular APAC participant so maybe let him make suggestions
19:09:43 I can send a doodle. Do we have enough APAC participants? Split meetings have not worked for diversity..
19:09:43 if rolandchan is willing that sounds like a good idea; is he here?
19:10:15 I think the User Committee has enough business & tasks that it warrants biweekly meetings. I'm flexible on time.
19:10:16 we'll need to make sure that the intersection of topics is recorded and distributed per meeting. I suggest notes sent out after each meeting via the ML
19:10:29 +1
19:10:48 rolandchan: +1
19:11:04 +1
19:11:06 we need a scribe then, who will take responsibility for writing the notes and emailing them after each meeting
19:11:09 rolandchan: it's a good point about participation. we could either scratch the idea or try it and review after the first couple
19:11:36 I think it's worth trying too
19:12:25 Ok. Let's see how we go. I'll send a doodle request to the list
19:12:49 #agreed rolandchan to send doodle request for APAC-friendly meeting time
19:13:31 So any other scheduling before we move topics?
19:13:32 Thanks rolandchan
19:14:26 #topic Next steps for Survey analysis
19:15:22 i have to run in a few minutes but i did want to chat about this one
19:15:39 lsell sent out an update about the NPS analysis to date
19:15:50 i was just pulling the link actually: http://lists.openstack.org/pipermail/user-committee/2015-December/000554.html
19:16:50 i know there's also been discussion about how useful NPS is for us to track. i actually feel like it's very valuable to have a quantitative metric for trending
19:17:02 Yep.
19:17:25 NPS is definitely not just a brand loyalty tool as stef was sort of pigeon-holing it
19:17:42 it is also industry recognized
19:17:58 I'd support continued NPS surveying. I have certification from Satmetrix to administer NPS if we have questions on methodology.
19:18:19 That looks like a good plan to me for the NPS part & I think since we have it we should look; if it turns out not to correlate in any useful way then we can discuss dropping it
19:18:34 jbryce: at the same time, NPS is also extremely tenuous at best to grok. As someone who has seen huge mistakes made because of unfavorable NPS scores, they really are hard to use without long-term trending
19:18:34 but i agree the number on its own is not the whole story. we've actually been digging into and reaching out to the detractors to follow up on their specific circumstances
19:18:50 Oh? That's great!
19:18:52 jbryce: that's great to hear
19:19:02 so you have been doing interviews based on that; that was my next suggestion
19:19:03 I like it (especially having a quantitative value to track) but I do agree that we need to define what we are capturing (segments, deployment phase, etc.) and what it will influence for us.
19:19:27 A big point made in certification is following up with the groups (promoters, neutrals, detractors). It can be more significant than score-tracking, especially for a small survey population.
19:19:44 HeidiJoy: +1
19:19:50 j^2: for sure, i agree. and you have to take into account the makeup of your sample
19:20:02 jbryce: :D
19:20:36 jbryce: has the foundation also been following up on negative responses in comments aside from NPS?
19:21:18 I think we'd get the most out of additional analysis if we narrowed down the 54 questions asked on the user survey to a handful of questions that we could do multivariate quantitative analysis on. In the same way we already did analysis on user roles and company size and found no variations there.
19:21:41 jproulx: it depends on where the comments show up. some we follow up on and some we pass on to PTLs or others
19:22:52 I'd like to share that data with the Product WG as well (if it's in a sanitized form)...
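[Editor's note: the NPS methodology debated above follows a standard formula. Responses on a 0-10 scale are bucketed into detractors (0-6), passives/neutrals (7-8), and promoters (9-10), and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch for reference; the function name and sample data are illustrative, not drawn from the actual survey:]

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    NPS = %promoters - %detractors, ranging from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
sample = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(nps(sample))  # -> 30.0
```

[This also illustrates why small samples are noisy, as HeidiJoy notes: with ~55 detractors, a handful of responses moving between buckets swings the score by several points.]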
19:24:34 HeidiJoy: I think projects used might be relevant but there are so many possible permutations, I don't know if it would just devolve into noise
19:25:04 I think it would be interesting to see if version used correlates or shows any trending between versions
19:25:18 sparkyco_ +1
19:25:43 sparkyco_: +1
19:26:51 That would cut ~350 responses by ~25 variables and probably be too small to give us trend data @jproulx. But we could pull ~55 detractors and simply look at which projects they identified, which would not show a trend, but would be useful with their comments as context.
19:28:53 HeidiJoy, sparkyco_'s suggestion to cut by version seems like it may yield a workable sample size
19:29:45 I agree we can cut by version for a good size. Are there other variables we should look into?
19:30:29 I don't recall the questions, but whether the respondent had any external assistance would be interesting.
19:30:48 Note that cutting by version has a base of ~350 = deployments only. We have a total answer population of ~1400 for this survey. Would we want to cut by an answer that all respondents gave?
19:31:08 I think we should focus on deployments
19:31:15 So I'm fine cutting by version
19:32:51 HeidiJoy do you recall if there's a question that relates to external support? I can't think of one, but you've been living much closer to this...
19:33:06 I have to drop off, will catch up on the meeting via log. Take care.
19:34:02 Roland, yes, we have many questions asking about which vendor is used for specific things. Which do you have in mind? Pointing to a page number in the report is especially helpful in the survey: https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf Pages 26-33 reference vendors.
19:36:00 "What packages does this deployment use…?" seems relevant
19:36:26 sort of, but even the wording of that is a bit open-ended
19:36:28 jproulx: examples are PaaS tools, hypervisors, drivers, etc.
19:37:57 39% of deployments are using vendor distributions, 55% unmodified. So we could look at "which packages does this deployment use?"
19:38:43 The packages question on pg 25 looks like a reasonable start to me
19:39:31 Yes, that's the one I was referencing
19:39:46 So I can go back to the data scientist and cut by that question (page 25) and the question on OpenStack version (page 20) ... anything else?
19:40:19 similarly for "packages you've modified" - which packages
19:40:20 I can also probably do a historical comparison to the prior survey's data.
19:40:30 Just a gut feeling on this but possibly network driver (pg 29)
19:40:44 I don't appear to be having much luck with the web client today :(
19:41:24 For this question on page 25, would we want to look at all deployment responses, or only those in production, or those in production + test phase?
19:41:43 All I think.
19:41:49 Options are production, dev/QA, and POC
19:43:32 I have to drop off but I will be happy to engage with the data scientist on these additional questions and any more on this meeting string.
19:43:41 +1 on @jproulx's gut feeling
19:43:53 thanks HeidiJoy
19:45:12 So I think Survey stuff is covered, with further discussion going to the mailing list, unless anyone has some final comments?
19:45:29 I'm done
19:46:03 #topic UC direction
19:46:22 I believe this was j^2's topic:
19:46:28 :D
19:46:29 discuss where we want to take the UC and what we can change
19:46:40 that's the short of it yeah
19:47:27 the UC has an opportunity to help the community as a whole
19:47:38 we need to empower ourselves to make this happen
19:47:42 One of the things I'm seeing is on https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee we list a number of working groups and teams.
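[Editor's note: the "cutting" agreed on above is just cross-tabulating responses by one or more segmenting answers, here OpenStack version (pg 20) and the packages question (pg 25). A hedged sketch of that grouping step; the field names and data are invented for illustration, since the raw survey data is not public:]

```python
from collections import defaultdict

# Illustrative stand-in for the ~350 deployment responses in the real
# survey; field names and values here are invented for this sketch.
responses = [
    {"version": "Kilo",    "packages": "vendor distro", "score": 9},
    {"version": "Liberty", "packages": "unmodified",    "score": 7},
    {"version": "Kilo",    "packages": "unmodified",    "score": 3},
    {"version": "Juno",    "packages": "vendor distro", "score": 10},
    {"version": "Liberty", "packages": "unmodified",    "score": 6},
    {"version": "Kilo",    "packages": "vendor distro", "score": 8},
]

# Cut by the version question (pg 20) and packages question (pg 25):
# group responses by the pair of answers, then summarize each segment
# as (response count, mean score).
segments = defaultdict(list)
for r in responses:
    segments[(r["version"], r["packages"])].append(r["score"])

summary = {seg: (len(scores), sum(scores) / len(scores))
           for seg, scores in segments.items()}
```

[The per-segment count is what matters for HeidiJoy's sample-size concern: any cell that ends up with only a handful of responses cannot support trend claims.]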
19:48:33 I wonder how many of the organisers of those know they're listed there :)
19:48:54 as the UC, do we have representatives at each of these WGs?
19:49:23 I don't think so.
19:49:23 if not, we should, and we should ask those representatives to come back to the UC to report what that WG is doing/deciding
19:50:23 unless i'm missing something, but as openstack community members we need to help where we can; obviously people won't come to the UC, so the UC needs to come to them
19:50:24 perhaps we should add a column to that table on the wiki for 'UC liaison'
19:50:34 jproulx: +1 love it
19:51:12 then we can try and fill in that blank...
19:51:42 +1
19:51:54 we should ask for volunteers from this group to fill in those boxes which are most realistically close to their day to day. We are the user committee, so something should fall within your skill set
19:52:25 #action jproulx will add a column to the 'Working Groups and Teams' table on the wiki for 'UC liaison'
19:52:30 and by our next meeting we should have liaisons that we can ask to start...liaisoning?
19:53:39 j^2 that may be optimistic, but we should at least know which need a liaison and which already have active involvement
19:53:47 LOL. Liaising :)
19:54:19 jproulx: hey sometimes a stretch goal is something we need to get involvement ;)
19:55:04 j^2 I'm all for optimism; who wants to take that challenge?
19:55:14 it's a challenge to the group :D
19:55:21 sorry I am late to the party… :)
19:55:53 whoever liaises most gets a gold star?
19:56:00 exactly :D
19:56:15 works for me :)
19:56:25 so time check: we're at 4 min to go
19:56:33 that should definitely be put in the meeting notes, and followed up on the ML
19:56:38 anything we've not covered?
19:57:22 we've not covered who will send meeting notes to the ML...any volunteers?
19:57:39 basically clean up the irc log
19:58:32 I'll give it a shot
19:59:01 thanks rolandchan__
19:59:22 hearing no other topics I think we're at the end
20:00:00 #endmeeting