17:02:34 #startmeeting qa
17:02:35 Meeting started Thu Nov 29 17:02:34 2012 UTC. The chair is davidkranz. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:39 The meeting name has been set to 'qa'
17:02:39 Donald here
17:03:14 <- here
17:03:33 jaypipes, dwalleck: You guys here?
17:03:47 just got here
17:04:23 Looks like the tempest swift tests will be part of the gate as soon as the devstack fix is approved.
17:04:57 nice
17:05:00 davidkranz: that would be great
17:05:06 davidkranz: which devstack fix is that?
17:05:10 I can take a look
17:05:14 we are working on some new tests for Swift
17:05:16 sdague: Any progress on the "instances go to ERROR" issue?
17:05:52 davidkranz: mtreinish is looking into it today, the problem is it's hard to reproduce
17:06:04 I expect our systems are too fast
17:06:13 sdague: https://review.openstack.org/#/c/17119/
17:06:17 so he was going to try to run it in a kvm guest to slow it down
17:06:26 which is the way ci is run anyway
17:07:03 sdague: Yeah, this is probably acting as a lame stress test in the ci setup.
17:07:22 sdague: I would really like to get some stress tests in the nightly build.
17:07:26 ok, devstack patch approved
17:07:39 I'm going to kick a merge prop in today that should add the reason for a server going into error status into the logging as well, should help with debugging
17:07:47 sdague: Thanks. We'll see if anything blows up :)
17:07:48 dwalleck: great
17:08:07 davidkranz: well that devstack patch was just adding another variable to config tempest, so I hope not :)
17:08:45 sdague: Yeah. I just meant that those tests will now start running in all project gates.
17:09:25 davidkranz: cyeoh on my team is starting to look at the blueprint for converting tempest to testr/testtools. So expect to start to see some activity on that one in the next week or so.
17:10:26 chunwang put in a blueprint for a customized-test-launcher script
17:11:18 I think that blueprint needs some more information about how it integrates with parallel execution and what the main purpose is.
17:11:46 yes, the blueprint is a workaround and improvement based on the issues we have found during tempest execution...
17:12:03 do we know yet if a testtools solution is viable? I thought someone was going to do a prototype
17:12:35 Actually I think it's not parallel execution, but a batch run of the tempest cases, with a customized test case list and an environment cleanup module
17:13:02 dwalleck: that will be cyeoh
17:13:22 going to assume that it's workable, and if not, he'll flag that as an issue :)
17:13:41 sdague: cyeoh will analyze testtools for parallel execution also, right?
17:13:49 yes
17:14:14 chunwang: url for your blueprint?
17:15:05 https://blueprints.launchpad.net/tempest/+spec/customized-test-launcher-script
17:15:50 I'm just a bit wary of diving into a solution without a full plan of attack. I'm curious though if any of this will tie us to testtools. It'd be nice if in the stripping away of nose any standard unittest test runner just worked
17:16:44 dwalleck: fair, though I think in reality you can't really know unless you try
17:17:18 why is parallel execution so important? May I know how long the whole tempest test suite takes in your environment?
17:17:42 if it's not any good we won't force it in just because it's on the list. :-) But this will at least make it possible to evaluate
17:18:06 davidkranz: you have the timings for the gate?
17:18:08 sdague: True. I guess what I was thinking was to perhaps start with converting say one project's tests first before doing the whole lot
17:18:53 dwalleck: from the experience I've seen in the folks working on converting nova here, you more or less have to just do it in one batch
17:19:08 chunwang: Because in a Devstack environment the tests take about 45 minutes.
In a fully deployed environment, especially using Windows images, it might take almost a day
17:19:13 sdague: The last hourly run took just under 1000 seconds for the full part.
17:19:14 chunwang: I think in the longer term parallelism would prove more beneficial
17:19:41 dwalleck: yes, but convert one test before converting all of a project's tests
17:19:44 anyway, cyeoh is in australia, so I'll proxy his updates here, because it's some ungodly hour of the night
17:19:49 The goal is just to get more done quickly so folks are more inclined to run them
17:20:07 sdague: and 200s for the smoke tests
17:20:08 All fair points. I'm just a cautious one :-)
17:20:31 dwalleck: right, which is why we've got a good review process :-)
17:20:42 yup, good point
17:21:22 Similar with your time... in our environment, it will take about 25 hours to finish the whole test suite...
17:22:04 chunwang: 25 hours? what environment is that?
17:22:09 But in parallel, I'm running my Tempest tests (with real Linux/Windows images) in < 45 min, so getting there is the goal
17:22:48 an Essex version of OpenStack, with 16 cn, 1 cc...
17:24:03 dwalleck: When we get parallelism for devstack gate we will likely run into https://bugs.launchpad.net/nova/+bug/1016633
17:24:04 Launchpad bug 1016633 in nova "Bad performance problem with nova.virt.firewall" [Medium,Incomplete]
17:24:08 but it may not be similar to a normal environment, because some of the OS images will take about 20 min to boot up at first boot
17:24:34 chunwang: gotcha
17:24:36 Firing up a bunch of instances on a single compute node is slow.
17:25:12 I guess we will deal with that when it happens.
17:25:15 agreed
17:25:27 Hmm, interesting.
17:25:35 for the gate, were you going to also make it error if there were stack traces in the daemons?
17:26:18 sdague: I really want to do that but there are still errors in the nova logs as far as I know.
17:26:30 davidkranz: yes, there are.
17:26:46 it would be great to get each one of those distinct stack traces filed as a nova bug
17:26:49 sdague: As soon as they are clean I would go for it.
17:27:19 separately they are solvable, I did fix one a couple weeks ago, which was serious enough to be a folsom backport as well
17:28:57 sdague: Yeah. It would be best if the nova team as a whole made this a higher priority.
17:29:37 davidkranz: yeah sure, just saying that if they get broken up as discrete issues, I can probably get folks looking at them
17:30:20 sdague: I agree. I can look at the latest logs and do that.
17:30:26 that would be great
17:31:23 I need to duck out folks. I have some updates, but I'll send those in an email this afternoon. Adios!
17:32:00 I am going to take a look at the multi-node-testing blueprint.
17:32:13 Anyone know if anything is happening with fuzz testing?
17:32:44 May I know when the quantum scripts will be ready for use?
17:33:29 I will be jumping onto some quantum tests next week too
17:33:45 chunwang: Not sure. I think Nachi Ueno is the lead on that.
17:34:13 chunwang: https://blueprints.launchpad.net/tempest/+spec/quantum-tempest
17:34:53 rohitk: Great. Make sure to touch base with Nachi to avoid duplication.
17:35:06 davidkranz: yup :), mnewby was working on a patch I believe
17:35:34 Any other topics for today?
17:35:41 ravkumar_hp: What features are you targeting w/ your latest swift tests?
17:35:47 Just curious
17:35:47 davidkranz: I can't see the fuzz testing/randgen coming in anytime soon
17:36:01 what's the strategy to accept more negative tests?
17:36:11 rax-Jose: tempurl + https://blueprints.launchpad.net/tempest/+spec/add-some-functional-swift-tests
17:36:12 https://blueprints.launchpad.net/tempest/+spec/add-swift-security-tests
17:36:20 coolbeans, thanks.
17:37:16 davidkranz: nothing more from me
17:37:20 rohitk: Is someone working on fuzz testing at all? There is no assignee for the blueprint.
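[Editor's note: the gate check discussed above (failing a run when daemon logs contain stack traces) could be sketched as a small post-run scan. This is a hypothetical illustration, not the real devstack gate script; the log directory layout and function name are assumptions.]

```python
import re
from pathlib import Path

# Match the start of a Python stack trace as it appears in daemon logs.
TRACE_RE = re.compile(r"Traceback \(most recent call last\):")

def find_stack_traces(log_dir):
    """Map each *.log file under log_dir to the number of stack traces it holds."""
    hits = {}
    for log_file in Path(log_dir).glob("*.log"):
        count = len(TRACE_RE.findall(log_file.read_text(errors="replace")))
        if count:
            hits[log_file.name] = count
    return hits

# A gate job would call find_stack_traces() after the tempest run and exit
# nonzero if the returned mapping is non-empty, marking the run as failed.
```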
17:38:09 davidkranz: I don't think so, but the old style negative API validation tests were not being accepted as they slowed down the runtime
17:38:31 if there are any scripts that need help, I will try to help on that...
17:38:48 rohitk: That is still a problem. It is also a problem that they take much longer to write than if using a negative test generator.
17:39:06 rohitk: But there is the up-front, non-distributed cost of doing that.
17:39:27 davidkranz: Agreed, I don't think we should afford that
17:39:28 chunwang: Is proposing and creating a negative test runner something you could do?
17:39:54 chunwang: There are some tools out there that do this kind of thing.
17:40:07 rohitk: I think the important thing is that the negative tests do more than just fail, they need to fail in the way they expect
17:40:43 sdague: A negative test runner would take care of that if you specify the space of parameters and the expected result.
17:40:50 sdague: Yes, more to do around bad input
17:40:56 ok, I'm not currently proposing more negative tests myself, but if there is any existing requirement there, I will try to look into it...
17:42:27 davidkranz: oh, that's the last thing. The coverage reporting for tempest is actually coming along by mtreinish. He's got a nova extension in final stages of review that will let us get nova coverage from an external test runner
17:42:43 sdague: Great.
17:42:45 so I'm hoping that's available in a couple of weeks as part of normal tempest runs
17:43:15 sdague: Sounds good!
17:43:24 Last call for new issues...
17:43:25 it doesn't really add much to the test run time-wise, so it should be something we can enable in the default runs
17:43:45 sdague: Good news, we also need this kind of data
17:46:49 OK, see you all next week.
17:46:54 see you then
17:47:00 #endmeeting
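[Editor's note: the negative-test-runner idea discussed in the meeting — declare a space of bad parameters plus the exact failure expected, and verify each call fails in the expected way rather than merely failing — could be sketched as below. The `ApiError` class, `create_server` endpoint, and case table are stand-ins invented for illustration, not tempest or nova code.]

```python
import itertools

class ApiError(Exception):
    """Toy API error carrying an HTTP-style status code."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def create_server(name, flavor):
    # Toy endpoint standing in for a real API: rejects bad input
    # with distinct error codes.
    if not name:
        raise ApiError(400)
    if flavor not in ("m1.tiny", "m1.small"):
        raise ApiError(404)
    return {"name": name, "flavor": flavor}

# Each case declares a parameter space to expand and the expected error code.
NEGATIVE_CASES = [
    {"params": {"name": [""], "flavor": ["m1.tiny", "m1.small"]}, "expect": 400},
    {"params": {"name": ["vm1"], "flavor": ["bogus"]}, "expect": 404},
]

def run_negative_cases(cases):
    """Expand each parameter space and record, per combination, whether the
    call failed with exactly the expected error code ('ok'), failed with a
    different code, or did not fail at all."""
    results = []
    for case in cases:
        keys = sorted(case["params"])
        for values in itertools.product(*(case["params"][k] for k in keys)):
            kwargs = dict(zip(keys, values))
            try:
                create_server(**kwargs)
                outcome = "no-error"
            except ApiError as e:
                outcome = "ok" if e.code == case["expect"] else "wrong-code:%d" % e.code
            results.append((kwargs, outcome))
    return results
```

The point of the table-driven shape is that adding a new bad-input scenario is one dictionary entry rather than a hand-written test, while still checking the precise failure mode.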