17:05:19 <jaypipes> #startmeeting
17:05:21 <openstack> Meeting started Wed Jan 11 17:05:19 2012 UTC.  The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:05:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
17:06:06 <jaypipes> OK, so there are two separate topics I'd like to discuss first:
17:06:13 <Ravikumar_hp> nayna is in another meeting
17:06:17 <jaypipes> 1) Test case style
17:06:36 <jaypipes> 2) Using code from novaclient for basic REST client
17:06:49 <jaypipes> both topics based on email from AntoniHP today...
17:07:50 <jaypipes> #topic Test case style
17:08:11 <jaypipes> so, has everyone seen AntoniHP's email from this morning?
17:08:54 <AntoniHP> here is link https://lists.launchpad.net/openstack-qa-team/msg00023.html
17:09:11 <jaypipes> there are a number of points AntoniHP raises, and I think we should discuss them here now in order. AntoniHP, ok with you?
17:09:16 <Ravikumar_hp> I have not seen it. Maybe it was not sent to the group email.
17:09:36 <dwalleck> It went to the group email list, which seems a bit flaky for some reason
17:09:44 <nati> [Openstack-qa-team] Implementing tests ?
17:10:05 <jaypipes> Ravikumar_hp: yeah, I did not get notified of it either... see link above.
17:10:10 <jaypipes> #link https://lists.launchpad.net/openstack-qa-team/msg00023.html
17:10:14 <Ravikumar_hp> ok
17:10:49 <AntoniHP> I think yes, we can discuss it here, or if more time is needed, discuss it at the end of the meeting?
17:10:52 <jaypipes> AntoniHP: so, let's start with 1) Dependability of test case on each other
17:12:04 <heckj> AntoniHP has a good point - there is a NOSE test driver that alleviates some of that concern however: http://pypi.python.org/pypi/proboscis/1.0
17:12:08 <heckj> #link http://pypi.python.org/pypi/proboscis/1.0
17:12:23 <jaypipes> heckj: there is no need for a test driver...
17:12:38 <jaypipes> heckj: nose already supports skipping based on conditions just fine
17:12:46 <jaypipes> heckj, AntoniHP: as an example, see https://github.com/openstack/tempest/blob/master/tempest/tests/test_list_servers.py
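(For illustration, a minimal sketch of the conditional-skip approach jaypipes refers to: raising SkipTest from setUp when a precondition is not met. The create_volume helper and the volume structure are invented for the sketch, not taken from tempest.)

    import unittest

    from nose.plugins.skip import SkipTest


    def create_volume():
        # hypothetical helper; a real suite would call the volumes API here
        return {'status': 'available'}


    class VolumeAttachmentTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            cls.volume = create_volume()

        def setUp(self):
            # skip dependent tests when the precondition was not met,
            # rather than letting them fail for an unrelated reason
            if not self.volume or self.volume.get('status') != 'available':
                raise SkipTest('volume was not created successfully')

        def test_attach_volume(self):
            self.assertEqual('available', self.volume['status'])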
17:12:53 <heckj> it is an addition to Nose - an extension, not a test driver, that allows specifications of dependencies between tests
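(For reference, a rough sketch of how proboscis declares dependencies between tests. The test bodies are placeholders, and the decorator keywords should be checked against the proboscis documentation linked above; this is not code from the meeting.)

    from proboscis import test, TestProgram


    @test(groups=['servers'])
    def create_server():
        # placeholder; a real test would call the compute API here
        assert True


    @test(depends_on=[create_server])
    def verify_server_active():
        # proboscis runs this only after create_server, skipping it on failure
        assert True


    if __name__ == '__main__':
        TestProgram().run_and_exit()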
17:13:14 <nati> I suppose we already discussed dependability: the class way and the method way.
17:13:22 <nati> Is this a different topic?
17:13:24 <dwalleck> I think the core issue is not whether we can have dependencies between tests, but should we
17:13:31 <heckj> jaypipes: please take a look at it before you just dismiss it. I've been using it with some success to resolve some  dependency issues between tests
17:13:43 <jaypipes> heckj: I have looked at it. :)
17:14:17 <nati> heckj: This module looks cool.
17:14:44 <heckj> dwalleck - is that the concern? If so, apologies for the random link. It wasn't clear from AntoniHP's email
17:14:49 <dwalleck> I think the better question is "what is the problem we are trying to solve by having test dependencies"
17:15:02 <jaypipes> so, basically, I'd like to know from AntoniHP what about nosetests does not allow for 1) to be taken care of..
17:15:29 <dwalleck> heckj: No, but the "should we" question is the one we need to answer here as a group
17:15:38 <jaypipes> right
17:15:46 <AntoniHP> it does; within nosetests, tests can easily be made dependent on each other
17:16:41 <dwalleck> I think we have some philosophical differences on test design, so the goal is to find a solution that will either directly address everyone's concerns, or allow people to use Tempest in different ways to ease those concerns
17:16:50 <nati> Dependencies occur naturally when we want to reuse test code or existing resources.
17:17:16 <AntoniHP> it is possible to be implemented in different ways, within nose, without nose, with extra driver etc
17:17:17 <nati> So I think this is a matter of code style. "Class vs Method"
17:17:28 <dwalleck> So if we want to reuse existing resources, wouldn't it be easier to have an external library/process handling that?
17:18:02 <AntoniHP> but as nati says this is about style: a nose test case should NOT have to equal one logical test case
17:18:13 <dwalleck> It seems like that would be a more robust solution, and addresses the concern of execution time and reuse of resources
17:19:07 <nati> Hmm. I suppose we are having the same discussion again. Both the class style and the method style have merits and demerits.
17:19:11 <jaypipes> AntoniHP: I guess I'm failing to see how the 001_, 002_ test method examples in your email would be of benefit over something like this: http://pastebin.com/2pdV34Ph
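(For readers without the email, a rough sketch of what the numbered-method style generally looks like; the names and bodies are invented and this is not the content of the email or the pastebin link. State is passed between methods via the class, which is the coupling being debated.)

    import unittest


    class TestServerLifecycle(unittest.TestCase):
        # one scenario per class; numeric prefixes force alphabetical execution order

        def test_001_create_server(self):
            # would call the servers client here and stash the result on the class
            self.__class__.server = {'id': 'fake-id', 'status': 'BUILD'}
            self.assertIn('id', self.server)

        def test_002_server_goes_active(self):
            # depends on state left behind by test_001_create_server
            self.__class__.server['status'] = 'ACTIVE'
            self.assertEqual('ACTIVE', self.server['status'])

        def test_003_delete_server(self):
            self.assertEqual('ACTIVE', self.server['status'])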
17:19:48 <nati> Then I think our next action is to discuss with an actual code example.
17:20:03 <nati> Then vote on it?
17:20:07 <dwalleck> But if reuse of resources isn't the core problem, help me understand where the desire for test dependencies comes from
17:20:07 <AntoniHP> if there is some code between asserts and the first assert fails, then that code is not executed
17:20:26 <dwalleck> nati: Actually, I think the easier idea would be for someone to submit a patch to Tempest
17:20:49 <dwalleck> That way it can follow the traditional code acceptance path, and make it easily visible
17:20:56 <jaypipes> AntoniHP: but why would you have separate test methods for things that are dependent on each other to be done in an order?
17:20:58 <nati> dwalleck: Yes. I suppose reuse is the core problem, and the right way to solve it is to use libraries.
17:21:34 <jaypipes> AntoniHP: you can't parallelize them, and so you only add fragility to the test case because now dependencies between test methods must be tracked.
17:22:02 <jaypipes> AntoniHP: the only advantage that your approach gives is more verbose output on successful test methods, AFAICT
17:22:07 <dwalleck> nati: I'm working on that solution as part of my next sprint. There's varying levels of complexity to how it could be implemented, but it will be done in some form or fashion
17:22:26 <AntoniHP> jaypipes: they can be parallelized as classes
17:22:28 <jaypipes> AntoniHP: and I'm confused why anyone would care about successful methods -- I only care about stuff that errors or fails?
17:22:46 <dwalleck> jaypipes: ++
17:23:00 <Ravikumar_hp> jaypipes +1
17:23:16 <jaypipes> AntoniHP: but in the case where you put a series of dependent actions in a single test method, the methods of a class can be run in parallel even with a shared resource...
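(A minimal sketch of the style jaypipes describes: the class holds the shared resource and each method contains its own complete, dependent flow. ServersClient and its methods are assumed stand-ins, not actual tempest code.)

    import unittest


    class ServerActionsTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            # shared resource lives on the class...
            cls.client = ServersClient()          # assumed REST client wrapper
            cls.server = cls.client.create_server('shared-vm')
            cls.client.wait_for_server_status(cls.server['id'], 'ACTIVE')

        def test_reboot_server(self):
            # ...while each test method holds a full dependent sequence,
            # so the methods stay independent of one another
            resp, _ = self.client.reboot(self.server['id'], 'SOFT')
            self.assertEqual(202, resp.status)
            self.client.wait_for_server_status(self.server['id'], 'ACTIVE')

        def test_change_password(self):
            resp, _ = self.client.change_password(self.server['id'], 'newPass8')
            self.assertEqual(202, resp.status)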
17:23:18 <Ravikumar_hp> I care about success or failure.
17:23:19 <AntoniHP> jaypipes: it provides context to results
17:23:37 <jaypipes> AntoniHP: how so?
17:23:46 <jaypipes> AntoniHP: could you provide example?
17:23:51 <Ravikumar_hp> I want dependent tests to fail instead of being skipped
17:23:56 <vandana> AntoniHP: most of the asserts are dependent on the previous assert to have passed so would it be useful to run the second assertion if the first one failed?
17:24:14 <nati> I think the merit of the class way is that the test log is easier to read without adding extra logging code.
17:24:40 <AntoniHP> jaypipes: sometimes yes, and sometimes no - this also provides an easy entry point for automated handling of errors
17:24:57 <dwalleck> AntoniHP: So what if, regardless of test design practice, you could see the results of all assertions in the results? Is that the goal you're trying to reach?
17:25:08 <AntoniHP> vandana: yes, in my example a failed response to the API call could still result in a new object being created
17:25:20 <Ravikumar_hp> for reporting purpose - Success , Failed ... 1) I will fail volume-attachment tests if create volume is failed ,
17:25:42 <AntoniHP> dwalleck: yes, that is why I'm not at all insistent on using this approach - I proposed several different solutions
17:26:03 <jaypipes> AntoniHP: sorry, perhaps this is just lost in translation :) could you provide some example output that shows the benefit for automated handling of errors?
17:26:49 <AntoniHP> create object call -> verify response from call -> verify that object exists
17:27:26 <jaypipes> Ravikumar_hp: our point is that if you need to "skip" a dependent set of actions based on an early bailout or failure, the dependent set of actions should be in the same test case method...
17:28:01 <jaypipes> AntoniHP: but if those calls were in the same test method, the assert message would indicate which step failed...
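(A small sketch of that point, using the create/verify sequence AntoniHP lists above: per-step assertion messages identify the failing step inside a single test method. self.client is an assumed client wrapper, not real tempest code.)

    def test_create_server_basic(self):
        # step 1: issue the create call
        resp, server = self.client.create_server('vm-basic')
        self.assertEqual(202, resp.status, 'create call returned a bad response')

        # step 2: verify the response body
        self.assertIn('id', server, 'create response is missing the server id')

        # step 3: verify the server actually exists and goes ACTIVE
        self.client.wait_for_server_status(server['id'], 'ACTIVE')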
17:28:19 <vandana> but won't there be a lot of overhead in figuring out these dependent assertions
17:29:00 <AntoniHP> so result .F. would point to problems with API, FSS network connectivity, ..F to nova scheduler not working
17:29:33 <AntoniHP> and then .FF would be different to .F.
17:29:35 <jaypipes> AntoniHP: but so would a single F, with the error output message indicating the step that failed...
17:29:49 <dwalleck> jaypipes: Right, like you did with images based on what's in the system. It makes sense for the test suite to be aware of its surroundings and resources
17:30:27 <dwalleck> So right now I get failures like this....
17:30:45 <dwalleck> ======================================================================
17:30:45 <dwalleck> ERROR: The server should be power cycled
17:30:45 <dwalleck> ----------------------------------------------------------------------
17:30:47 <dwalleck> Traceback (most recent call last):
17:30:49 <dwalleck> File "/var/lib/jenkins/jobs/zodiac chicago smoke/workspace/zodiac/zodiac/tests/servers/test_server_actions.py", line 33, in setUp
17:30:51 <dwalleck> self.server = ServerGenerator.create_active_server()
17:30:56 <dwalleck> File "/var/lib/jenkins/jobs/zodiac chicago smoke/workspace/zodiac/zodiac/tests/__init__.py", line 27, in create_active_server
17:30:56 <dwalleck> client.wait_for_server_status(created_server.id, 'ACTIVE')
17:30:57 <dwalleck> File "/var/lib/jenkins/jobs/zodiac chicago smoke/workspace/zodiac/zodiac/services/nova/json/servers_client.py", line 193, in wait_for_server_status
17:30:59 <dwalleck> raise exceptions.BuildErrorException('Build failed. Server with uuid %s entered ERROR status.' % server_id)
17:31:01 <dwalleck> BuildErrorException: u"u'Build failed. Server with uuid e0845137-61d7-48b8-9db8-128db00cd7b5 entered ERROR status.'
17:31:01 <AntoniHP> we aim to automate, so if such logic is not in the test, we would need to parse output messages instead
17:31:03 <dwalleck> Ack
17:31:05 <dwalleck> https://gist.github.com/3da4cc395268f5ca36cb
17:31:10 <dwalleck> Try that instead, bit easier to read :)
17:31:46 <jaypipes> AntoniHP: automate what exactly? the reading of test results to put on some report? Then we can just use xunit output, no?
17:31:59 <AntoniHP> jaypipes: exactly !
17:32:20 <AntoniHP> jaypipes: by having separate entries in the xunit output, we do not need to be very smart about parsing error messages
17:32:30 <jaypipes> AntoniHP: but all that would mean is a simple --with-xunit
17:32:44 <AntoniHP> jaypipes: no, because you need to parse the message in the result
17:32:58 <AntoniHP> to see which case has happened
17:33:19 <dwalleck> I can't really post a link to my Jenkins reports, but that's pretty much what I have now with the --with-xunit
17:33:29 <AntoniHP> otherwise you have a code that pinpoints failure, and captured output could be used for technical data
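(For context on the automation being discussed: nose's --with-xunit writes standard JUnit-style XML, so a report generator can read per-test entries directly instead of parsing console output. A rough sketch; the file name is nose's default and the rest is an assumption about how a report script might consume it.)

    import xml.etree.ElementTree as ET

    suite = ET.parse('nosetests.xml').getroot()   # default output file of --with-xunit
    for case in suite.iter('testcase'):
        if case.find('failure') is not None:
            status = 'FAIL'
        elif case.find('error') is not None:
            status = 'ERROR'
        elif case.find('skipped') is not None:
            status = 'SKIP'
        else:
            status = 'PASS'
        print('%s.%s: %s' % (case.get('classname'), case.get('name'), status))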
17:33:31 <jaypipes> AntoniHP: If your automation depends on the sequence of E, F, ., and S in the test output, then something is more fundamentally wrong than the order of the test method execution IMHO
17:34:36 <nati_> jaypipes: Create server sometimes fails and sometimes is OK. And some tests fail, but not because of this.
17:34:38 <jaypipes> AntoniHP: for instance, what happens when you insert a new 00X method and F.. becomes F.F.? How does your automated reporting handle that?
17:35:04 <jaypipes> AntoniHP: you would have to make a corresponding change to your automation report, no?
17:35:19 <AntoniHP> that is question to donaldngo_hp
17:35:26 <jaypipes> nati_: that's a totally separate issue :)
17:36:18 <AntoniHP> but still, this allows for continuing execution of the following steps
17:36:20 <jaypipes> AntoniHP: I guess my hesitation is to change to a test class/method style in order to just support a certain type of output to the test run.
17:36:36 <donaldngo_hp> can't we achieve both what Antoni wants (each test class is a testing scenario with dependent steps) and what Tempest provides, which is code reusability through service classes?
17:36:50 <AntoniHP> jaypipes: I agree with that statement
17:37:10 <dwalleck> AntoniHP: My case is that if an assertion fails, I probably don't want to make any more assertions, as the rest will likely fail and dirty my results
17:37:18 <jaypipes> donaldngo_hp: but what we are saying is that test *methods* should contain all dependent series of actions, not the class. That way, there is no need to have dependency tracking.
17:37:25 <AntoniHP> I think fundamentally this problem comes from using a unit testing framework for other types of tests
17:37:46 <jaypipes> AntoniHP: virtually every functional/integration testing framework derives from unit test frameworks.
17:37:53 <vandana> dwalleck: +1
17:38:29 <Ravikumar_hp> jaypipes: yes, current test methods already contain dependent actions.
17:38:33 <AntoniHP> yes, but I have a feeling that we are still quite bound by thinking of those tests as single unit tests
17:38:59 <jaypipes> AntoniHP: they aren't. :) test *methods* are single functional tests.
17:39:04 <donaldngo_hp> jaypipes: there will still be dependencies one way or another. In Antoni's approach I think it's a logical grouping of steps to run. Using methods you still have to keep track that you need to do a before b before c, etc.
17:39:17 <jaypipes> AntoniHP: with the test class housing shared resources the test methods use.
17:39:52 <AntoniHP> jaypipes: my proposal is that test *classes* are single functional tests with a few test *methods* or generators (implementation detail)
17:39:54 <dwalleck> I don't think switching frameworks would solve that problem. Whether unit, integration, or functional, each test has a goal. It makes assertions towards that result and then ends
17:40:05 <jaypipes> donaldngo_hp: no, that's wrong. if the test method is a series of dependent actions, a failed assertion in a or b will mean c is not executed...
17:40:39 <jaypipes> donaldngo_hp: sorry, shouldn't say that's "wrong"... :) just my opinion...
17:40:59 <dwalleck> I think what we're talking about only applies to some tests as well. I'm not sure I could see that style in use for negative tests, say one verifying that if I use an invalid image, a server won't build
17:41:26 <dwalleck> Would it be fair to say that we care most about these results for the core/smoke/positive tests that we have?
17:41:40 <jaypipes> AntoniHP: and in your approach, if test method 002_xxx failed, then test method 003_xxx should be skipped, right?
17:41:51 <AntoniHP> no
17:42:21 <AntoniHP> if 1) fails then 2) and 3) skip; if 1) succeeds then 2) and 3) both execute
17:42:34 <AntoniHP> because we can get a malformed API response, yet still be able to actually boot the VM
17:42:47 <dwalleck> but then if 3 depends on 2 and is not skipped, 3 will fail, which would be a false positive
17:43:28 <jaypipes> AntoniHP: right, and we are arguing that those three steps are assertions that should be in a single test method called test_basic_vm_launch(); otherwise you need to add machinery to the test case framework to handle dependencies between test methods
17:43:29 <AntoniHP> dwalleck: yes, there are different scenarios possible; sometimes the test would be just assertions and sometimes not
17:43:32 <dwalleck> I think we need code here...
17:44:01 <AntoniHP> jaypipes: that is a false statement; generators allow executing logical flows without any nosetest additions
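(For illustration, a sketch of the generator style AntoniHP means: nose reports each yielded callable as its own test entry, so individual steps show up separately in the results. Generators only work in plain classes or module-level functions, not unittest.TestCase subclasses; the step bodies are placeholders.)

    class TestServerScenario(object):
        # plain class (not unittest.TestCase) so nose will collect the generator

        def test_launch_flow(self):
            # each yielded callable becomes its own entry in the test output
            yield self.check_create_response
            yield self.check_server_goes_active
            yield self.check_connectivity

        def check_create_response(self):
            assert True  # placeholder for the real verification

        def check_server_goes_active(self):
            assert True

        def check_connectivity(self):
            assert True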
17:44:14 <dwalleck> I can assert that I can create tests that give the same results, but are not dependent
17:44:31 <jaypipes> AntoniHP: and putting the assertions in a single test method allows to execute logical flows without generators ;)
17:44:52 <dwalleck> Which then break the dependency chain, allow for class level parallel execution, and for isolated test execution
17:45:01 <AntoniHP> jaypipes: no, because the first raised assertion stops the test method
17:45:11 <jaypipes> AntoniHP: yes, that's what you want!
17:45:46 <AntoniHP> jaypipes: not in the case of an integration test; as I mentioned before, a malformed response from a REST call does not indicate the final result of the initial call
17:45:59 <dwalleck> AntoniHP: So how about this...why not re-write some of the core servers tests in the style you propose
17:46:11 <jaypipes> AntoniHP: that's a totally different test than "launch this VM in a normal operation" though :(
17:46:20 <dwalleck> That way we're talking about concrete things instead of concepts
17:46:25 <jaypipes> dwalleck: ++
17:46:45 <AntoniHP> ok, I will do this
17:46:47 <dwalleck> I think it would be easier to be able to put this all on the table and compare things with real world examples
17:47:26 <jaypipes> agreed
17:47:58 <dwalleck> And that way we can see the results, compare the output, and see what is different and/or lacking
17:48:09 <jaypipes> alrighty, let's let AntoniHP put some example code up to a pastebin/gist...
17:48:19 <dwalleck> Until then, I don't think further discussion will help much
17:48:38 <AntoniHP> ok
17:52:11 <dwalleck> Good, I think that will help quite a bit
17:52:39 <donaldngo_hp> how about we set up some time where we can see the code on someone's desktop? I think we would all reach the end goal a lot faster than our current approach of code pasting
17:53:25 <nati_> donaldngo_hp++
17:53:30 <dwalleck> donaldngo_hp: I think that's a good idea. I'd still like to have a chance to see and run it before as well
17:53:30 <donaldngo_hp> we can discuss real time
17:54:07 <jaypipes> donaldngo_hp: I think I'd actually prefer pastes and the public mailing list for discussion...
17:54:50 <dwalleck> And it may help if I also share what the tempest results I'm using now look like. I think there's quite a bit that comes out of the --with-xunit results that is fairly helpful
17:56:06 <donaldngo_hp> dwalleck: would love to see what your report looks like
17:56:30 <dwalleck> awesome, I'll find a way to get that viewable
17:56:42 <jaypipes> donaldngo_hp: are you using xUnit output for the feed into your reports?
17:56:46 <dwalleck> And then we can see better what we have, and what is missing
17:57:09 <donaldngo_hp> jaypipes: yeah, we are using xunit to produce XML and then aggregate it into a JUnit-style report
17:57:27 <jaypipes> donaldngo_hp: k
17:58:13 <dwalleck> donaldngo_hp: Ahh, then you're probably seeing pretty much what I am
17:58:21 <donaldngo_hp> i can send our report out to the group as well
17:58:29 <jaypipes> donaldngo_hp: yes, please do! :)
17:58:55 <jaypipes> let's use the main mailing list, though, with a [QA] subject prefix... the team mailing list is proving unreliable at best :(
17:59:20 <dwalleck> Though I'd like to add more... for example, my devs love that I can tell them a server failed to build, but without more info (the server id, IP, etc), it's not much help. I'm trying a few things to make that better
17:59:42 * dwalleck ideally would like to pull error logs directly from the core system, but not today
17:59:55 <jaypipes> dwalleck: that's what the exceptions proposed branch starts to address :)
18:00:19 <dwalleck> jaypipes: yup!
18:00:29 <jaypipes> dwalleck: and I've been creating a script that does a relevant log file read when running tempest against devstack...
18:01:54 <dwalleck> jaypipes: I was thinking along those lines. It's a good start, but I'm afraid of how verbose it could be
18:02:32 <jaypipes> dwalleck: well, the script I have grabs relevant logs, cuts them to the time period of the test run, and then tars them up :)
18:02:58 <jaypipes> dwalleck: figured it would be useful for attaching the tarball to bug reports, etc
18:03:03 <dwalleck> nice!
18:03:12 <dwalleck> That sounds very useful
18:03:31 <jaypipes> dwalleck: yeah, just can't decide whether the script belongs in devstack/ or tempest!
18:03:45 <dwalleck> good question
18:04:05 <AntoniHP> when you decide, can you post a link to it on the list?
18:04:19 <dwalleck> sounds like an alternate plugin for tempest for those using devstack
18:04:50 <jaypipes> dwalleck: speaking of that... one other thing we all should decide on once we come to consensus on the style stuff is when to have tempest start gating trunk :) currently, only some exercises in devstack are gating trunk IIRC...
18:04:54 <jaypipes> AntoniHP: absolutely!
18:05:38 <dwalleck> jaypipes: I was thinking of the same thing. When I saw the gating trunk email, I was excited until I realized it wasn't on Tempest :)
18:05:40 <jaypipes> dwalleck: right.. I added the tempest/tools/conf_from_devstack script recently to allow someone to generate a tempest config file from a devstack installation... very useful after running stack.sh ;)
18:06:12 <jaypipes> dwalleck: since stack.sh wipes everything and installs new base images, which are needed in the tempest conf :)
18:06:20 * jaypipes needs to blog about that...
18:06:52 <dwalleck> Hmm...I would say once we can confidently say we have a solid set of smoke tests that we consider to be reliable. That seems like a reasonable goal
18:07:56 <jaypipes> dwalleck: ++
18:08:01 <jaypipes> dwalleck: we're getting there...
18:08:41 <dwalleck> I think we're close. The one thing I'm wrestling with is that it's a bit hard to visualize coverage based on the bug list in Launchpad
18:09:28 <jaypipes> dwalleck: agreed. though the tags help a bit..
18:10:14 <dwalleck> jaypipes: They do. I'm still going to keep bouncing that idea around in my head
18:10:58 <dwalleck> Well good folks I need to bow out, off to the next meeting
18:11:59 <dwalleck> Or not :) Jay is stepping away for a sec
18:12:49 <dwalleck> nati_: Are you still here?
18:13:05 <jaypipes> AntoniHP: how about we just comment on the code on the mailing list? ok with you?
18:13:35 <AntoniHP> I think using the list would be more productive, as it is less interactive and the code needs time to be read
18:13:42 <jaypipes> AntoniHP: not a problem.
18:13:52 <jaypipes> ok good discussion so far, we will continue on the ML.
18:13:56 <jaypipes> #endmeeting