15:01:25 #startmeeting stackalytics
15:01:26 Meeting started Mon Oct 21 15:01:25 2013 UTC and is due to finish in 60 minutes. The chair is ilyashakhat. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:29 The meeting name has been set to 'stackalytics'
15:01:36 hello everyone!
15:01:45 hi
15:02:01 this is the first meeting of the stackalytics team
15:02:05 I wonder if there is anyone else eager to discuss the Stackalytics roadmap
15:02:37 let's go through the agenda
15:02:44 the agenda: review blueprints and proposals for the next version
15:03:36 excellent... do we have any open blueprints?
15:03:55 https://blueprints.launchpad.net/stackalytics
15:04:03 we have 6 blueprints
15:04:14 the first is https://blueprints.launchpad.net/stackalytics/+spec/review-disagreements
15:05:15 it's a proposal to add the same stats on disagreements as in Russell's report
15:05:24 so basically this one reproduces statistics from Russell
15:05:30 yes
15:05:53 the only 'hard' thing is that we need a list of core devs
15:06:15 well - we can use heuristics
15:06:42 anyone who gave a +2 review on a project would be a core/PTL
15:07:10 good point)
15:07:21 probably that will work only for PTLs
15:07:28 need to double-check
15:07:45 no, cores do +2 as well
15:07:48 it might be a former PTL/core?
15:07:48 let's take a look at Neutron
15:07:58 or you don't look backwards
15:08:22 we will calc stats only for particular release cycles
15:08:59 the most stable way is to check the 'drivers' group in LP - like https://launchpad.net/~neutron-drivers/+members#active
15:09:15 https://review.openstack.org/#/admin/groups/
15:09:18 but this may require additional configuration
15:09:26 guys, isn't this something you're looking for?
15:10:17 does it have a REST API?
There were docs on a dev version of the API
15:10:41 ogelbukh: yes, this list is the best source
15:10:58 Stackalytics never grabs data from web pages
15:11:05 :)
15:11:26 need to check the Gerrit API
15:11:41 http://gerrit-documentation.googlecode.com/svn/Documentation/2.6/rest-api-groups.html
15:12:18 ok, let's move on. Let's assume that this report will be in the 0.4 release
15:12:57 https://blueprints.launchpad.net/stackalytics/+spec/graceful-loc-statstics
15:13:17 it's about calculating sloc, not loc
15:13:27 this one requires running sloccount on each commit
15:13:30 but it will require analyzing every single diff
15:14:07 probably we can run a daily batch
15:14:27 will top management like slocs more than loc?
15:14:52 that will give us not only slocs
15:15:14 lang stats?
15:15:25 sloccount gives a ballpark estimate of project development cost
15:15:48 i.e. we will make a graph of $ evaluation
15:16:01 top managers will like it
15:17:09 let's settle that we don't need slocs from every commit, but a $ graph is an option to consider
15:17:44 ok, but let's give low priority to this BP
15:17:45 i.e. this BP is not on the 0.4 roadmap
15:17:52 ok
15:17:58 what's next?
15:18:23 https://blueprints.launchpad.net/stackalytics/+spec/module-review-backlog-stats
15:18:53 some metrics that analyze the review backlog
15:19:09 do we have anything similar in the activity dashboard?
15:19:10 no
15:19:45 how should this report look in the UI?
15:20:11 some summary at the bottom of the page when a particular project is filtered?
15:20:26 it may be a list of reviews ordered by time desc
15:20:35 Russell has something similar
15:20:48 let's take a look
15:21:55 http://russellbryant.net/openstack-stats/nova-openreviews.html
15:22:51 well - just a bunch of unsorted data...
15:23:25 that needs to be made shiny and readable
15:23:25 do you think that it has some meaningful information?
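The Gerrit REST API linked above wraps every JSON response in an XSSI-protection prefix (`)]}'`), so a client has to strip it before parsing. A minimal sketch of fetching the "List Group Members" response shape; the sample payload and group contents here are illustrative, not real Neutron data:

```python
import json

# Gerrit prepends this magic prefix to every REST API response to
# defend against cross-site script inclusion; the real JSON follows it.
GERRIT_MAGIC_PREFIX = ")]}'"

def parse_gerrit_response(text):
    """Strip Gerrit's XSSI prefix and parse the JSON body."""
    if text.startswith(GERRIT_MAGIC_PREFIX):
        text = text[len(GERRIT_MAGIC_PREFIX):]
    return json.loads(text)

def member_names(members_payload):
    """Extract account names from a GET /groups/<id>/members/ response."""
    return [m.get("name") for m in members_payload]

# Example payload shaped like the 2.6 "List Group Members" response
# (fields per the rest-api-groups docs; the accounts are made up).
sample = ")]}'\n" + json.dumps([
    {"_account_id": 1000096, "name": "Jane Roe", "email": "jane@example.com"},
    {"_account_id": 1000097, "name": "John Doe", "email": "john@example.com"},
])
```

With a real deployment the text would come from an HTTP GET against the Gerrit server instead of the `sample` string.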
15:24:11 comparing backlog length between projects may make sense
15:24:25 at least it is discussed on the ML from time to time
15:24:27 let's put this down as a low-priority report
15:24:34 agree
15:24:52 next one
15:25:06 https://blueprints.launchpad.net/stackalytics/+spec/review-punchcard
15:25:52 https://blueprints.launchpad.net/stackalytics/+spec/no-down-time-updates
15:25:54 it's about tracking the usual time when an engineer is awake and ready to review patches
15:26:44 the data can be shown on the user profile screen
15:26:53 this one has some value
15:27:10 let's schedule this one for the 0.4 roadmap
15:27:17 fine
15:27:21 the open reviews info is *very* useful from a project management perspective
15:27:31 i look at it a lot, fwiw
15:27:33 https://blueprints.launchpad.net/stackalytics/+spec/no-down-time-updates - this one should be reclassified as Completed
15:28:22 Russell - thanks for the input
15:28:40 russellb: hi! :) we just talked about your reports
15:28:50 yep, saw them mentioned
15:29:02 hnarkaytis: will mark as complete
15:29:04 let's complete the review of open BPs and then get back to this particular report
15:30:09 https://blueprints.launchpad.net/stackalytics/+spec/web-filters-caching - the last one
15:30:41 it's about caching search queries
15:30:52 pretty straightforward - let's put it at medium priority
15:31:30 ok - let's get back to Russell's report
15:32:12 I want to go through all the numbers and figure out if we have the required data in place
15:32:44 Total Open Reviews: 272 - this is simple
15:33:25 Waiting on Submitter - should be just a counter of reviews in a particular state
15:33:46 Waiting on Reviewer - the same
15:33:52 we can calc the state during an update and store it as an attribute
15:34:24 and we have everything for time calculation
15:35:09 Stats since the latest revision - I wonder how to calculate wait time
15:35:55 let's look into the sources)
15:36:08 we retrieve only the current state from Gerrit and do not store history in Stackalytics
15:36:36 it's not a count of
state, exactly
15:36:44 well, depends on how you define state
15:36:44 yeah - need to take a look at Russell's code
15:37:05 waiting on reviewer is reviews without any -1 or -2, and not a work in progress either
15:37:11 something like that :)
15:38:42 well - all this stuff is not evident... need to reverse-engineer Russell's report
15:39:01 it's pretty simple python code
15:39:09 might be nice to turn it into a library
15:39:17 separate the logic from the output
15:39:22 so you don't just reimplement it
15:39:34 good point
15:39:47 then we need to agree on an interface
15:40:10 anyway - we need to take this offline
15:40:40 another point regarding this report is when/where to show it
15:41:00 when the Reviews metric is selected?
15:41:17 the current UI paradigm is not suitable for this kind of report
15:41:37 make it a popup like blueprint details
15:41:47 probably a popup window from the project name
15:42:10 show a summary at the bottom and a popup with more details
15:42:42 ok - there was one more thing that I wanted to discuss
15:43:17 do we need to focus on drill-down reports or continue with other metrics?
15:43:33 there was an idea to add stats on IRC chats
15:43:39 and open tickets
15:44:06 Russell, what do you think about that?
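The "waiting on reviewer" rule spelled out above (no -1/-2 votes, not work-in-progress) is easy to express as a pure classification function. A minimal sketch; the review dict shape (keys `votes` and `wip`) is an assumption for illustration, not Stackalytics' or Russell's actual data model:

```python
def review_state(review):
    """Classify an open review per the rule discussed above.

    'waiting-on-reviewer' = no negative votes and not a work in progress;
    anything else is back in the submitter's court.
    """
    if review.get("wip"):
        return "waiting-on-submitter"
    if any(vote < 0 for vote in review.get("votes", [])):
        return "waiting-on-submitter"
    return "waiting-on-reviewer"

def backlog_counters(reviews):
    """Count open reviews by state, like the headline numbers in the report."""
    counters = {"waiting-on-reviewer": 0, "waiting-on-submitter": 0}
    for review in reviews:
        counters[review_state(review)] += 1
    return counters
```

Computing the state once during the update run and storing it as an attribute, as suggested above, would let the web tier serve these counters without touching Gerrit.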
15:44:38 hnarkaytis: depends on who you want to serve
15:44:59 hnarkaytis: my answer is probably different from a user's, or someone in management at a participating company, or whatever
15:45:24 decide on some user personas, and then prioritize them
15:45:33 because each will want more information on different things, i think
15:45:48 I believe that we have already covered the basic statistics required for marketing and now need to address the needs of PTLs and core contributors
15:46:06 in my very biased opinion, I prefer information that helps with running the projects :-)
15:46:16 OK, cool
15:46:36 so, things that are important to me are all around "how are we doing as a project"
15:46:39 "are we keeping up"
15:46:46 that's why looking at stats on open reviews is important
15:46:51 stats on open bugs are also very useful
15:47:06 do you have a report on open bugs?
15:47:11 not yet
15:47:24 so that might be a good one to work on, because that's a need not really being served that well anywhere that I know of
15:47:39 and not just open vs closed
15:47:47 but also stats on bugs getting triaged, that's really important too
15:48:32 ok - then let's put it this way: we will start with a report on bugs and implement stats similar to your stats on reviews
15:48:48 sounds like a good set of next steps
15:48:59 thanks for asking :)
15:49:21 Ilya, do we have anything else on our agenda?
15:49:24 ok, i will file a BP on this
15:49:47 no, i think that's all for the agenda
15:49:57 we have a good list of things to do in 0.4
15:51:39 yeah - 3 major blueprints for the next release
15:51:54 let's wrap up for today
15:52:06 ok
15:52:11 thanks everyone!
15:52:29 #endmeeting
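The bug report agreed on as the next step above ("not just open vs closed, but also stats on bugs getting triaged") could start from simple status counters. A sketch under assumed field names: Launchpad-style statuses, where an untriaged bug sits in "New" and a triaged one has moved on to "Triaged", "In Progress", etc.; the input shape is illustrative, not an actual Launchpad API payload:

```python
from collections import Counter

# Launchpad bug statuses that count as "open" for backlog purposes
# (assumed subset; the full Launchpad status list is longer).
OPEN_STATUSES = {"New", "Confirmed", "Triaged", "In Progress"}

def bug_stats(bugs):
    """Summarize a list of {'status': ...} bug records.

    Returns open count, untriaged count (status 'New'), and the
    fraction of open bugs that have been triaged.
    """
    by_status = Counter(bug["status"] for bug in bugs)
    open_count = sum(n for status, n in by_status.items()
                     if status in OPEN_STATUSES)
    untriaged = by_status.get("New", 0)
    return {
        "open": open_count,
        "untriaged": untriaged,
        "triaged_ratio": 1 - untriaged / open_count if open_count else 1.0,
    }
```

Tracking `triaged_ratio` over time, per module, would answer Russell's "are we keeping up" question for bugs the way the open-reviews report does for reviews.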