17:00:38 <jaypipes> #startmeeting
17:00:39 <openstack> Meeting started Thu May 24 17:00:38 2012 UTC.  The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:46 <dwalleck> hey folks!
17:00:54 <rohitk> hellooo
17:01:01 <jaypipes> #link Agenda: http://wiki.openstack.org/Meetings/QATeamMeeting
17:01:01 <Ravikumar_hp> hi
17:01:12 <jaypipes> #topic Awesomeness
17:01:15 <dwalleck> We've got a fresh one from the Rack. Sam___ just joined us
17:01:24 <jaypipes> w00t. welcome Sam___
17:01:49 <dwalleck> He does stuff with files and volumes and such. Smart fellow :)
17:02:09 <Sam___> Hello Jay. Happy to be here.
17:02:16 <jaypipes> Before we get to the real agenda, I wanted to give a big shout-out to rohitk, Ravikumar_hp and all the folks who have enabled the tempest test suite to reach nearly 260 tests!
17:02:19 <jaypipes> https://jenkins.openstack.org/job/gate-tempest-devstack-vm/test/?width=1200&height=600
17:02:46 <dwalleck> woot!
17:02:50 <rohitk> yay! Keep the bar green!!
17:02:52 <jaypipes> I think back to just one month ago, and we've made a heck of a lot of progress in just 4 weeks.
17:02:57 <Ravikumar_hp> Thanks. It's a collective effort; still more to come
17:03:04 <jaypipes> thx to fattarsi as well, for adding Identity API tests!
17:03:41 <jaypipes> we've been having some off and on failures in that gate job, but not due to tempest...
17:03:55 <fattarsi> thanks, great to see the progress on tempest as a whole
17:03:56 <jaypipes> the failures are due to some instability in the CI node providers.
17:04:05 <jaypipes> fattarsi: indeed :)
17:04:51 <rohitk> jaypipes: Special mention to the NTT folks in Japan who have contributed many of these tests!
17:05:12 <jaypipes> that instability I refer to is well represented in the following graph:
17:05:14 <jaypipes> https://jenkins.openstack.org/job/gate-tempest-devstack-vm/834/testReport/history/
17:06:04 <jaypipes> anyway, I'm going to work on diagnosing those instabilities with jeblair and the CI team, but in the meantime, let's keep up the code reviews and new test patches..
17:06:10 <jaypipes> ok, on to the main agenda
17:06:19 <jaypipes> #topic Status of the smoke test merge proposal
17:06:40 <jaypipes> Unfortunately, I have not completed the changes from dwalleck's review. I should be able to get those done today, however.
17:06:55 <jaypipes> So the smoke refactor status will have to be pushed out to next week...
17:06:58 <JoseSwiftQA_> blargh
17:07:05 <jaypipes> #topic Status of the Swift test code reviews
17:07:12 <jaypipes> JoseSwiftQA_: perfect timing :)
17:07:15 <JoseSwiftQA_> :D
17:07:29 <dwalleck> sounds good
17:07:55 <JoseSwiftQA_> I've got some of the changes made already, and am getting the rest done as time permits.  Should be able to finish soon.
17:08:03 <jaypipes> good.
17:08:19 <jaypipes> JoseSwiftQA_: any of the code review comments that you have questions about?
17:08:55 <JoseSwiftQA_> Not really.  They're all fairly straightforward and good suggestions.
17:09:27 <jaypipes> coolio.
17:09:50 <jaypipes> #topic https://bugs.launchpad.net/tempest/+bug/1003741 is currently the only failing test on the devstack Tempest gate job and needs to be addressed
17:09:52 <uvirtbot> Launchpad bug 1003741 in nova "Delete a flavor test fails" [High,Fix committed]
17:10:12 <jaypipes> that was the only blocker I saw. Looks like dprince has addressed the bug (in Nova).
17:10:20 <Ravikumar_hp> jaypipes: joseSwiftQA: we will pick up some additional test cases once the first batch of tests is checked in
17:10:28 <jaypipes> Ravikumar_hp: for swift?
17:10:33 <Ravikumar_hp> yes
17:10:43 <jaypipes> kk
17:10:56 <JoseSwiftQA_> Sure.  I'll start adding the rest of the client functionality as well.
17:10:59 <rohitk> jaypipes: did we have a tempest job run after the fix was committed?
17:11:03 <jaypipes> so, that blocker is actually no longer blocking :) so, scratch that off the list ;)
17:11:11 <jaypipes> rohitk: yup, and all tests passed.
17:11:12 <rohitk> jaypipes: cool
17:11:23 <jaypipes> #topic Outstanding code reviews
17:11:35 <jaypipes> #link https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z
17:12:06 <jaypipes> We've got quite a few outstanding reviews to get to. donaldngo_hp, I will get to those stable/essex ones today.
17:12:14 <jaypipes> sorry, stable/diablo.
17:12:22 <donaldngo_hp> jaypipes++
17:12:26 <jaypipes> I just need to spin up a diablo env...
17:13:11 <Ravikumar_hp> jaypipes: Fixes bug 903875 - New tests for Volume Attachments - we will abandon for now
17:13:13 <uvirtbot> Launchpad bug 903875 in tempest "Write Testcases for Volume attachments" [Medium,In progress] https://launchpad.net/bugs/903875
17:13:21 <jaypipes> If everyone could go through all the merge proposals and provide a review, that would be appreciated!
17:13:46 <jaypipes> Ravikumar_hp: yes, makes sense since the extension doesn't seem to be complete in Nova anyway :)
17:13:47 <rohitk> jaypipes,dwalleck,dkranz: Thanks for reviewing our branches so meticulously, but how are we handling dependent branches?
17:14:10 <rohitk> if we have already submitted some code for review and another review depends on the first one, how do we set the dependency?
17:14:23 <jaypipes> rohitk: depends :) if a branch is TRULY dependent on another, then when you do a git review, the branch will show up under DEPENDENT BRANCHES in the Gerrit review area
17:14:26 <dwalleck> rohitk: Do we have dependent branches? I've been avoiding doing that to not have issues
17:14:52 <rohitk> dwalleck: well, 'avoiding' is one way to handle it
17:14:59 <jaypipes> rohitk: if the branch is not really a dependent branch (i.e. you are just waiting for someone else's stuff to go in before yours), then you need to write a comment saying so in the review.... ok?
17:15:42 <rohitk> jaypipes: ok, for me it's important that the reviewers are in the know about dependent branches
17:15:50 <jaypipes> rohitk: another strategy for branches like that is to use the DRAFT branch Gerrit feature, which pushes the branch up to Gerrit for you to review, but keeps it in a DRAFT status. Use git review -D for that.
17:15:56 <rohitk> unless comments get overlooked
17:16:05 <dwalleck> rohitk: I haven't found a reason to do it. I never know how much will change in a parent branch, so to avoid extra work, I get parent branches in first
17:16:08 <jaypipes> rohitk: again, depends on what your definition of "dependent branch" is ;)
17:17:05 <rohitk> dwalleck: I haven't looked into the true dependent branch model in Gerrit; if that works when submitting a branch, that should do
17:18:00 <jaypipes> alright, any more questions on that topic? Everybody please do code reviews as much as you can :)
17:18:18 <jaypipes> #topic Assigning QA team members to adopt Folsom blueprints for Nova, Glance and Keystone to ensure a functional test plan is part of the blueprint and that the developer of any features for Folsom are collaborating with QA team members
17:18:31 <jaypipes> OK, so Nayna, Ravikumar_hp and I had a chat yesterday about this...
17:19:04 <jaypipes> basically, we are going to propose that a QA team member be assigned to new Folsom blueprints in Nova to ensure a functional/integration test plan is put together in addition to unit tests
17:19:18 <jaypipes> Ravikumar_hp was going to come up with a blueprint we could use as an example.
17:19:24 <jaypipes> Ravikumar_hp: any progress on that?
17:19:47 <Ravikumar_hp> jaypipes: will start today. Need to see if the blueprint has enough detail
17:19:57 <jaypipes> ah, ok.
17:20:11 <jaypipes> #action Ravikumar_hp to find good example blueprint for QA functional test plan
17:20:38 <jaypipes> #action jaypipes to write draft email to ML about QA team working with developers on functional test plans for all new feature blueprints.
17:20:54 <jaypipes> #topic Assigning QA team members to address the increasing number of skipped tests; bugs uncovered in Nova should be tracked appropriately and the @skip decorators removed when the bug is fixed. Jay recommends having an agenda item on the weekly meetings where the skip count (and associated bug reports) are covered.
17:21:19 <jaypipes> OK, so this is a topic I added to the agenda because I've been getting concerned about the increasing number of (valid) skips in Tempest
17:21:23 <dwalleck> For blueprints being dev'd within an org, would it make sense to assign those blueprints to some tester from that org?
17:21:46 <jaypipes> We need to ensure that skips only last as long as a bug is not fixed, and that the skip is removed when the fix goes into the upstream project
17:21:55 <jaypipes> dwalleck: yes, it would indeed.
17:22:08 <rohitk> jaypipes: ++
17:22:32 <fattarsi> jaypipes: agreed, but it is difficult to track, unless there is a frequent audit
17:22:33 <jaypipes> What do y'all think about my proposal that we have an ongoing agenda item for tracking skips?
17:22:34 <rohitk> it will be really hard to track bugs and remove those decorators
17:22:55 <jaypipes> fattarsi: right, which is why I'm suggesting adding a piece to our weekly meeting to track progress..
17:23:02 <Ravikumar_hp> jaypipes: makes sense since there is no automated way
17:23:44 <dwalleck> We could do something automated to check if a bug was fixed, but there's no guarantee the fix would be deployed to your test environment
17:23:45 <jaypipes> the only other thing I can think of is working with jeblair and mtaylor to put a Gerrit hook into the CI system to notify the QA team when a bug that is in our skip list is Fix Committed...
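For context, the skips under discussion are decorators tied to the Launchpad bug they work around. A minimal sketch, assuming plain unittest-style skips and an illustrative test name (tempest's actual helper and test layout may differ):

    import unittest

    class FlavorsTest(unittest.TestCase):
        """Illustrative test class; the real tempest tests are organized differently."""

        @unittest.skip("Skipped until Launchpad bug 1003741 (delete flavor fails) is fixed")
        def test_delete_flavor(self):
            # Once the fix lands upstream, this decorator should be removed so the
            # test runs in the gate job again.
            pass

The hard part is exactly what the discussion turns to next: noticing when the referenced bug reaches Fix Committed so the decorator can be removed.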
17:23:59 <Sam___> I think an ongoing agenda item for tracking skips is a good fix short term
17:24:10 <Sam___> longer term would be to automate a way to process the skips
17:24:17 <jaypipes> dwalleck: well, for the gating job at least, it would be (since the devstack builder builds from the HEAD of all project trunks)
17:25:10 <jaypipes> So, I've found that unless there is a person (or persons) responsible for tracking stuff like this, it rarely gets done...
17:25:36 <jaypipes> And I think it would be useful to have a "Skip Captain" in the same way we have a QA Captain rotation for doing the agenda/summary of the meetings...
17:25:36 <rohitk> jaypipes: Once the notifications are out, would it be helpful for the committer to remove his own skip decorators?
17:25:45 <mtaylor> jaypipes: you know where our puppet repo is ... ;)
17:25:46 <rohitk> jaypipes: Skip Captain ++
17:25:50 <jaypipes> mtaylor: :P
17:26:13 <JoseSwiftQA_> heh... Skipper.
17:26:47 <dwalleck> jaypipes: What we do internally is have a skipped test build
17:26:55 <jaypipes> So, basically the skip captain would be responsible for looking at the test result reports for skips and seeing if the associated bugs are in the project trunks
17:27:06 <dwalleck> It runs our skipped tests daily and is inverted. If any test passes, the build fails
17:27:07 <jaypipes> dwalleck: could you explain further?
17:27:29 <jaypipes> dwalleck: ah, I see... so basically running nosetests --noskip?
17:27:35 <dwalleck> Which triggers an investigation and removal of the skip
17:27:37 <dwalleck> yup
17:27:41 <jaypipes> gotcha...
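The inverted build dwalleck describes could look roughly like the sketch below. It assumes the job runs the normally-skipped tests anyway (how the skips are disabled is out of scope), publishes nose's xunit output, and that the skip-decorated test names are known up front; the file name nosetests.xml and the test name are illustrative:

    import sys
    import xml.etree.ElementTree as ET

    # Test names currently carrying @skip decorators (assumed, for illustration).
    SKIP_LISTED = {"test_delete_flavor"}

    def skips_that_now_pass(xunit_path="nosetests.xml"):
        """Return skip-listed tests that ran and passed, i.e. whose bugs may be fixed."""
        fixed = []
        for case in ET.parse(xunit_path).getroot().iter("testcase"):
            name = case.get("name")
            passed = (case.find("failure") is None and case.find("error") is None
                      and case.find("skipped") is None)
            if name in SKIP_LISTED and passed:
                fixed.append(name)
        return fixed

    if __name__ == "__main__":
        fixed = skips_that_now_pass()
        if fixed:
            print("Skips that can probably be removed: %s" % ", ".join(fixed))
            sys.exit(1)  # fail the build so someone investigates and removes the skip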
17:28:13 <jaypipes> dwalleck: perhaps the skip captain could be responsible for checking the status of such a Jenkins job if I set one up, and following up in the code to remove fixed-bug skips?
17:28:57 <rohitk> and have a periodic SkipSquash
17:28:57 <jaypipes> dwalleck: I like automating as much as possible, but I think we really need a person/persons responsible for tracking of the skip stuff...
17:29:15 <jaypipes> rohitk: every week... should have a "removed X number of bug fix skips" status report
17:29:23 <rohitk> agreed
17:29:47 <jaypipes> rohitk: my point is I really think somebody needs to have it as a weekly duty... nothing gets done if everyone assumes someone else is doing it ;)
17:30:20 <rohitk> jaypipes: Rotation sounds good to me
17:30:26 <jaypipes> so... a vote. Does anyone mind me creating a Skip Captain rotation in the same way as the http://wiki.openstack.org/QACaptainRotation
17:30:39 <dwalleck> sure
17:30:43 <Ravikumar_hp> Jaypipes: sure
17:30:52 <Sam___> sure
17:31:05 <JoseSwiftQA_> sounds like a plan
17:31:18 <jaypipes> k, I will include more people and ensure the QA Captain and Skip Captain duties don't overlap
17:31:32 <jaypipes> #action jaypipes to create Skip Captain duties wiki page and rotation
17:31:58 <Ravikumar_hp> hope skip captain does not skip the duty
17:32:02 <jaypipes> lol :)
17:32:22 <jaypipes> #topic Open discussion
17:33:08 <Sam___> I have a couple things I've been working on for lunr that might be viable for a blueprint
17:33:16 <jaypipes> Sam___: cool.
17:33:33 <Sam___> the main thing I'm thinking right now is that I have a wrapper class that functions as a client for the nova-client CLI process
17:33:45 <Sam___> I needed it to test integration between nova and lunr
17:34:22 <Sam___> It wouldn't take me long to put a blueprint together and tempest-ize the code.
17:34:23 <jaypipes> Sam___: does it use the CLI or the novaclient *library*?
17:34:40 <jaypipes> IOW, out of process or in-process calls?
17:35:05 <Sam___> It uses the actual CLI client process
17:35:34 <jaypipes> so it calls out to a subprocess and calls things like "nova server-list", etc?
17:35:44 <Sam___> there is a generic command line connector that has a child wrapping biz logic for things like nova server-list
17:35:47 <jaypipes> or instance-list, can never remember...
17:36:02 <Sam___> then that is consumed by an abstracted client
17:36:10 <jaypipes> Sam___: hmm, ok...
17:36:14 <Sam___> to deal with texts
17:36:41 <Sam___> it screen scrapes the prettytable and turns it into a domain object that is a child of prettytable and allows for searching the data
17:37:03 <Sam___> I'm doing it on purpose because of a request to specifically test the CLI client
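The wrapper Sam describes sounds roughly like the following: shell out to the nova CLI and turn its prettytable output into something searchable. The command name, the OS_* environment-variable credentials, and the table layout here are all assumptions for illustration, not Sam's actual code:

    import subprocess

    def run_nova(*args):
        """Invoke the nova CLI out of process (credentials via the usual OS_* env vars)."""
        return subprocess.check_output(("nova",) + args, universal_newlines=True)

    def parse_prettytable(output):
        """Turn prettytable output into a list of dicts keyed by column header."""
        rows = [line for line in output.splitlines() if line.startswith("|")]
        if not rows:
            return []
        headers = [h.strip() for h in rows[0].strip("|").split("|")]
        return [dict(zip(headers, (v.strip() for v in row.strip("|").split("|"))))
                for row in rows[1:]]

    servers = parse_prettytable(run_nova("list"))

Sam's version wraps the parsed output in a searchable domain object rather than plain dicts, but the out-of-process call and the screen-scrape are the parts that matter for the discussion that follows.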
17:37:16 <jaypipes> Sam___: you might want to check this out as an alternative: https://review.openstack.org/#/c/7069/3/tempest/manager.py
17:37:35 <jaypipes> Sam___: uses the client programming interface instead of the CLI... but you get the picture.
17:39:13 <jaypipes> Sam___: we've had numerous debates about whether to test with novaclient or with a custom rest client that tempest uses. We've pretty much come to the conclusion that both are important to test with for different reasons, but I don't think we've had any agreement that testing using out-of-process CLI calls is any more useful than using the novaclient.Client classes in-process
17:39:41 <jaypipes> dwalleck has some strong views on that I believe ;)
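For contrast, the in-process route jaypipes mentions looks roughly like this sketch; the import path and constructor arguments reflect the novaclient of that era and should be treated as assumptions, as should the example credentials and endpoint:

    from novaclient.v1_1 import client

    # In-process calls through the novaclient library instead of a subprocess.
    nova = client.Client("username", "password", "tenant-name",
                         "http://keystone.example.com:5000/v2.0/")
    for server in nova.servers.list():
        print("%s %s" % (server.name, server.status))

Both routes exercise the same REST API but different client code, which is the code-path distinction Sam comes back to a bit further down.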
17:41:01 <Sam___> sorry, was just looking at your manager.py class. It looks like an interesting start. I will dig into it more later.
17:41:16 <jaypipes> Sam___: yup, no worries! looking forward to your contributions!
17:41:30 <jaypipes> Anybody else have stuff to bring up? Else I'll end the meeting...
17:41:39 <Ravikumar_hp> none
17:41:43 <dwalleck> nope
17:41:46 <rohitk> jaypipes: Can we time our nosetests?
17:42:11 <jaypipes> rohitk: I believe they already do?
17:42:16 <rohitk> or I haven't seen the Jenkins console jobs yet
17:42:19 <dwalleck> rohitk: Do you mean set limits per test? I believe there's a decorator for that
17:42:21 <rohitk> oops, sorry :)
17:42:23 <Sam___> I agree that that is a larger discussion in general. My personal (over-simplified) two cents is that we should have clients/adapters/etc... for anything that is delivered code. The real difference between the imported client library and the actual command-line CLI really comes down to the code path exercised. :-)
17:42:40 <rohitk> jaypipes: gotcha
17:42:41 <jaypipes> Sam___: agreed.
17:42:43 <dwalleck> The xunit results should also show how long each test took if you just want tracking
17:43:16 <jaypipes> dwalleck: right
17:43:19 <rohitk> ok
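On rohitk's timing question: per-test durations are already in the xunit results dwalleck mentions (nose's --with-xunit writes nosetests.xml with a time attribute on each <testcase>), so a quick report is a few lines of parsing. A small sketch; the file name is an assumption:

    import xml.etree.ElementTree as ET

    def slowest_tests(xunit_path="nosetests.xml", top=10):
        """Return the slowest tests from a nose xunit report as (seconds, test) pairs."""
        cases = ET.parse(xunit_path).getroot().iter("testcase")
        timed = [(float(c.get("time", 0)), "%s.%s" % (c.get("classname"), c.get("name")))
                 for c in cases]
        return sorted(timed, reverse=True)[:top]

    for seconds, test in slowest_tests():
        print("%8.2fs  %s" % (seconds, test))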
17:43:42 <jaypipes> OK, let's wrap this baby up. I'll send a summary report by end of day.
17:43:46 <jaypipes> thx all!
17:43:49 <jaypipes> #endmeeting