17:01:02 #startmeeting
17:01:03 Meeting started Wed Nov 23 17:01:02 2011 UTC. The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:04 Useful Commands: #action #agreed #help #info #idea #link #topic.
17:01:26 #topic nati2 to give quick status update on forward-porting unit test bug fixes to trunk
17:02:08 nati2: ?
17:02:29 Yes, Jay, Donald and Ravi are helping with the forward-porting. And Jay also made a document on how to forward-port
17:03:08 nati2: how many are left?
17:03:15 nati2: It is only one transaction that Donald and I did
17:03:33 Some problems occurred. First, the Essex code has changed, so many logical conflicts occur, especially for test cases
17:03:49 ok hold on
17:04:12 o/
17:04:15 39 branches left
17:04:20 k
17:04:31 nati2: however some of those branches are crazy big ;)
17:04:44 nati2: not sure some of them will be able to be done in a single patch...
17:04:44 Ah, for the test case branches
17:05:14 The patch branches are small, but the test case branches are large.
17:05:19 right
17:05:25 So let's start with the bug patch branches
17:05:48 nati2: well, let's deal with them one at a time, eh?
17:06:12 at a time?
17:06:18 nati2: the process I outlined in that email should work for most things... it's just that during reviews (example: https://review.openstack.org/#change,1834), we need to be sure to respond to folks
17:06:59 nati2: I was just saying that most of the branches are small... we should just press forward with the smaller ones and try to get some momentum
17:07:41 Yes, I agree
17:08:15 nati2: OK, well, please do respond to Mark McLoughlin on that review above... I'd like to get that smallish patch forward-ported so we can look to a good example going forward.
17:08:39 jaypipes: I got it! Thanks
17:08:46 nati2: even for 4 of us working on this, it's going to be a stretch to get this forward-porting done by start of December...
17:09:16 ok, let's move on to functional/integration testing...
17:09:24 jaypipes: hmm, I agree
17:09:35 #topic westmaas and dwalleck to give status on openstack-integration-tests progress
17:10:08 we have a name, and jeblair is doing prepwork on the git/gerrit migration
17:10:29 Still working on bringing new branches of tests in. I'm holding off on the whole domain name concept until we have more content
17:10:44 we do need to find a time to do the migration - anything proposed during that time will be confused, apparently
17:10:58 dwalleck: domain name concept?
17:11:08 dwalleck: for those who want to work on writing tests - what is the recommended process/examples?
17:11:17 dwalleck: do we have a public list of what needs to be migrated yet, and/or are you already working on things?
17:11:23 dwalleck: would there be a lot of rework with the existing tests after bringing in domain?
17:11:27 er, working with others on things*
17:11:32 jaypipes: The object-oriented reference we talked about last time
17:11:36 ah
17:11:50 rohitk: Not much, but some
17:12:04 dwalleck: so, I set up https://github.com/openstack-dev/openstack-qa and started some initial documentation for integration testing...
17:12:05 dwalleck: thanks
17:12:25 anotherjesse: I was supposed to get with jaypipes in the last week to find somewhere to put the official docs up that I have my team working from
17:12:30 So I can add to that
17:12:36 dwalleck: I believe jeblair has openstack-qa set up in Gerrit and we can make a job that populates qa.openstack.org the same way as ci.openstack.org
17:12:56 dwalleck: https://github.com/openstack-dev/openstack-qa is the official docs :)
17:13:08 westmaas: I only have additional test cases set up in Launchpad. I should also mark test suites in progress to bring over so there's no extra work done by folks
17:13:43 https://github.com/openstack-dev/openstack-qa will map to the qa site?
17:13:44 I can bring everything over in one swoop, or I can keep going test suite by test suite. I just thought going one at a time might be easier for reviews
17:13:45 dwalleck: yes, please do that. I know that Ravi's team is eager to contribute tests and the last thing we want is duplication of effort of course..
17:13:56 Ravikumar_hp: yes
17:14:07 Ravikumar_hp: or at least the /doc/ folder in there will be...
17:14:19 jaypipes: the openstack-qa is going to talk about the strategy / format for adding tests to openstack-integration-tests?
17:14:28 anotherjesse: yes
17:14:37 https://github.com/openstack-dev/openstack-qa/blob/master/doc/integration.rst <- there?
17:14:38 anotherjesse: https://github.com/openstack-dev/openstack-qa/blob/master/doc/integration.rst
17:14:44 anotherjesse: heh, yes.
17:15:02 look forward to "Adding a New Integration Test"
17:15:03 ;)
17:15:13 anotherjesse: I just rushed up a quick starter doc...
17:15:27 dwalleck: is integration test the right name there?
17:15:47 should that file be moved to something else? functional.rst or something else?
17:16:03 will we be discussing general framework issues like running tests in parallel and how reports will be generated?
17:16:22 westmaas: Integration is fine for right now
17:16:59 donaldngo_hp: ++
17:17:09 donaldngo_hp: Parallelization can be achieved through the nose parallel execution plugin. It's a bit strange, so the option to develop a better plugin is on the table with my team
17:17:19 on our project we are producing a JUnit-style report from nosetests with the xmlunit plugin
17:17:31 Reports can easily be generated with the xunit nose plugin
17:17:31 donaldngo_hp: we can discuss whatever you'd like, however I ask that we approach the issues from a "how do we improve the existing storm test suite" instead of "let's rewrite the test framework to support X"
17:17:45 That's what our team has been using internally
17:17:59 But of course, writing a new plugin is also always an option :)
17:18:08 dwalleck: parallelization cannot, unfortunately, be achieved using the nose parallel execution plugin.
17:18:42 jaypipes: I've run some parallel tests last night and it's working fine. Can you clarify?
17:18:43 dwalleck: I run into this virtually every time I attempt it: https://gist.github.com/1144287
17:19:35 jaypipes: Since we're using class setup/teardowns, I believe nose asks you to set some fields in each test to make it aware so it doesn't stumble over itself
17:19:57 I created 4 sample tests that ran in 4 minutes. Set processes=4, reran, and they ran in 1 minute
17:20:04 donaldngo_hp: can you clarify what "on our project" means? :) I want to make sure we aren't duplicating more work...
17:21:39 donaldngo_hp: ?
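[Editor's note: a minimal sketch, not from tempest's code, of the class-level hints dwalleck refers to at 17:19:35. nose's multiprocess plugin documents the _multiprocess_can_split_ and _multiprocess_shared_ attributes so that class setup/teardown does not collide across worker processes; the test class and fixture below are hypothetical.]

    import unittest

    # Run with something like:  nosetests -v tempest/ --processes=4 --with-xunit
    # (--processes enables nose's multiprocess plugin; --with-xunit writes a
    # JUnit-style XML report.)

    class ServersSmokeTest(unittest.TestCase):
        # Hint to nose's multiprocess plugin: the class fixtures may safely be
        # re-run in each worker, so the tests can be dispatched independently.
        _multiprocess_can_split_ = True
        # Alternatively, _multiprocess_shared_ = True shares one fixture run
        # across workers; which hint is right depends on the fixture's side effects.

        @classmethod
        def setUpClass(cls):
            # Hypothetical fixture: build whatever client/server state the tests need.
            cls.client = object()

        def test_list_servers(self):
            self.assertIsNotNone(self.client)

        def test_get_server_detail(self):
            self.assertIsNotNone(self.client)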
17:21:40 jaypipes: I envision our team being able to drop in "tempest" and run something like nosetests -v tempest/ --with-nosexunit --processes=5
17:21:54 donaldngo_hp: what is tempest?
17:22:09 jaypipes: Isn't that the new name?
17:22:13 tempest is a subset of "our project", meaning we will run this suite along with our web GUI tests, some jcloud tests, Ruby wrapper tests
17:22:16 The new project name
17:22:25 oh
17:22:28 sorry...
17:22:37 no more storm
17:22:48 storm was a good name :(
17:22:55 gotcha... sorry, wasn't aware of that
17:23:05 donaldngo_hp: yeah, it was already taken
17:24:11 donaldngo_hp: So basically, you could run tempest with nosetests -v tempest/ --with-nosexunit --processes=5 ?
17:24:18 And the result was correct?
17:24:20 donaldngo_hp: OK, so is your team writing additional tests, or is your team compiling other test suites and running them?
17:24:36 nati2_: no, that's why I brought it up :)
17:24:50 what I did run was a proof of concept of creating 4 tests and running them in parallel
17:25:31 donaldngo_hp: Ah, but many actual tests have dependencies.
17:25:35 jaypipes: yes, but these tests are not valuable to the community. For example, our web GUI tests run on Selenium and hit our website
17:25:43 donaldngo_hp: I've found as long as the tests don't import eventlet, everything is fine with the parallel testing... once you import eventlet, it dies... but let's move on. If you have the parallel stuff running the integration tests, then that's good...
17:25:44 ------------------------------------------ TEST AUTOMATION FRAMEWORK --------------
17:25:44 Web UI <----------------------------->
17:25:44 JCloud Wrappers <-------------->
17:25:44 Ruby Wrappers <----------------> Jenkins <------> Gradle ---> JUnit Report
17:25:44 Tempest <------>
17:25:44 CLI <---------------------------------->
17:25:46 TBD <--------------------------------->
17:25:48 -----------------------------------------------------------------------------------
17:25:52 this is what our framework looks like
17:26:25 He's just injecting tempest into the middle of a larger project
17:26:34 understood...
17:26:49 donaldngo_hp: and you are saying that running nosetests /tempest does not work, right?
17:27:17 jaypipes: right now, out of the box, no
17:27:32 donaldngo_hp: OK, then let's get a bug reported and have that fixed :)
17:27:40 i have been able to run some of the tests in kong
17:27:43 donaldngo_hp: sounds like a pretty important thing to fix to me...
17:28:00 You have to run nosetests tempest/tests
17:28:08 donaldngo_hp: the kong tests, IIRC, were eventually going to go away, as they were replaced with identical tempest ones.
17:28:12 Since the /tests directory is where everything lives
17:28:16 i had reported a similar bug last week, and it was fixed after dwalleck's missing files were brought in
17:28:37 I think it might be more useful to run larger groups of tests in parallel, rather than executing single tests in that manner - this will bring the best of both worlds: deterministic execution of series of actions that might depend on each other, and fast execution times - equivalent of running a separate nosetests process on each file containing tests
17:28:53 does the tempest/ directory have an __init__.py? if not, just add it and nosetests tempest/ should just work...
17:28:58 donaldngo_hp: Are you setting your storm.conf with values for your environment?
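[Editor's note: a small illustrative helper, not part of tempest, for the discovery point raised at 17:28:53 - nose only collects tests from importable packages, so "nosetests tempest/" fails if any directory under it is missing an __init__.py. The helper name and default path are assumptions.]

    import os

    def missing_init_files(root="tempest"):
        """Return directories under root that lack an __init__.py."""
        missing = []
        for dirpath, dirnames, filenames in os.walk(root):
            # Skip hidden directories such as .git; they are not packages.
            dirnames[:] = [d for d in dirnames if not d.startswith(".")]
            if "__init__.py" not in filenames:
                missing.append(dirpath)
        return missing

    if __name__ == "__main__":
        for path in missing_init_files():
            print("missing __init__.py in %s" % path)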
17:29:21 however even then it does require maintaining plenty of configuration details, which is a headache
17:29:41 dwalleck: I haven't touched the new tempest folder yet, I have old code that still uses kong
17:29:58 AntoniHP: could you elaborate on what you mean with "rather than executing single tests in that manner"?
17:30:09 that's why I wanted to have a discussion, so that we are all aligned on how the tests will run and report
17:30:21 donaldngo_hp: Then why did you say you couldn't get them to run? I'm confused
17:30:30 donaldngo_hp: tempest is the suite we are using going forward. kong is a dead-end.
17:30:36 nosetests would search for tests, and for each test start a process that will execute setup, test, teardown
17:30:57 well, for the kong tests out of the box I had to modify the endpoints to use SSL since we go through an Apigee load balancer
17:31:23 AntoniHP: but with --processes=4, you get parallelism in execution (not collection of tests, but execution is parallelized...)
17:31:44 AntoniHP: I think that was always the plan. The short term goal though was on test coverage, so that's what I've been focused on
17:31:59 dwalleck: ++
17:32:34 donaldngo_hp: OK, so what's your plan? Are you going to update to the recent tempest stuff?
17:32:41 it will run every test in parallel, so every test has to be completely self contained - usually a _sequence_ of tests is self contained
17:32:54 jaypipes: Would the correct action be blueprints for parallel execution and perhaps better logging output beyond the xunit plugin?
17:33:13 and can be run together with other _sequences_
17:33:14 jaypipes: yeah, I plan to do that. Right now we are using kong, which is working well for acceptance tests of the APIs
17:33:39 AntoniHP: By sequences of tests, do you mean dependent tests?
17:33:40 AntoniHP: hold off on that thought for a sec... let's finish this conversation about kong/tempest first.
17:33:43 dwalleck: ++ on logging, there is practically no logging in the current suite
17:33:52 dwalleck: hold off on that... want to get a decision on one thing at a time...
17:34:09 rohitk: I have logging implemented locally. I can push a patch for that this week
17:34:19 donaldngo_hp: OK, good to hear. When do you plan to move off of Kong?
17:34:24 dwalleck: ok
17:34:31 It's just a matter of wrapping logging around the rest client
17:34:43 donaldngo_hp: in other words, what needs to be fixed in tempest to make that transition happen? :)
17:35:02 jaypipes: hopefully as soon as I get it working on our env. Maybe the end of next week?
17:35:22 jaypipes: I will let the group know next meeting :)
17:35:26 donaldngo_hp: And if you need any help or have any questions, please let me know
17:35:31 donaldngo_hp: Can we aim for next Wednesday instead? Perhaps a follow-up with me, dwalleck and you to hammer out remaining issues?
17:35:58 jaypipes: I will let the group know next meeting :)
17:36:05 hehe, ok :)
17:36:22 alright, AntoniHP, let's discuss dependent tests now...
17:36:55 I do know that the original idea was to have test cases be self-contained with no side-effects, making them parallelizable.
17:37:30 AntoniHP: are you suggesting that requirement be lifted or loosened?
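[Editor's note: a hedged sketch of what "wrapping logging around the rest client" (17:34:31) could look like; the LoggingClient wrapper and the request(method, url, body) signature are hypothetical placeholders, not the actual tempest/storm client.]

    import logging

    LOG = logging.getLogger("tempest.rest")

    class LoggingClient(object):
        """Wraps a client object exposing request(method, url, body=None)."""

        def __init__(self, wrapped):
            self._wrapped = wrapped

        def request(self, method, url, body=None):
            # Log what goes out and what comes back, so failing runs leave a trail.
            LOG.debug("request: %s %s body=%r", method, url, body)
            status, response = self._wrapped.request(method, url, body=body)
            LOG.debug("response: status=%s body=%r", status, response)
            return status, response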
17:37:36 there are two requirements IMHO for tests to be self-contained: they must not depend on other tests
17:37:44 and they must not depend on the same resources
17:38:30 this is difficult to achieve, because sometimes a test would need a separate project
17:38:46 the other cost is if a test requires a running VM, for example, it takes time to spin it up
17:39:40 so far in testing I tried to group similar tests to reuse resources; if I have a test to PUT into /server/id/metadata I can safely use the same /server/id/metadata to test reading metadata
17:39:40 AntoniHP: How about the backfire way? It uses @depend decorators
17:39:59 Test dependencies will hurt us in the end
17:40:04 nati2_: backfire uses DTest, not nose, just FYI. That would really be a complete rewrite of tempest.
17:40:16 Then that hurts the ability for us to run tests in isolation
17:40:23 dwalleck: agreed.
17:40:27 jaypipes: I got it.
17:40:57 A solution our team has thought of and been working on is a test resource pool module, which holds a "cache" of test data, such as servers, etc
17:41:02 nati2_: proboscis had support for dependent test ordering
17:41:39 * jaypipes not entirely convinced that functional integration tests can be effectively run in parallel against the exact same deployed platform... seems to me that any assertions on platform state would need to be very carefully planned...
17:41:39 And then you simply ask the cache for the data you want. If a clean object exists, you get it. If not, it creates it
17:41:43 dwalleck: I was thinking about a similar solution
17:41:46 rohitk: proboscis is the library?
17:41:57 dwalleck: Cache looks good
17:41:58 nati2_: https://github.com/rackspace/python-proboscis
17:42:10 I strongly believe we can get test parallelization working in a sane and clean manner
17:42:40 dwalleck: but should that be our *focus* right now? I'm really not sure...
17:42:44 rohitk: proboscis is interesting, but a) would it work with the multiprocess plugin in nose? b) would it support resource sharing?
17:43:00 Well, I thought our original focus was test coverage, so that's where I've been going
17:43:22 AntoniHP: not sure, need to check
17:43:26 I think this group could definitely come up with a good parallelization solution, but frankly, our charter right now is to increase the quantity and quality of the functional integration test cases and get those gating trunk. Speeding up the test suite is a secondary priority
17:43:41 jaypipes++
17:43:44 jaypipes++
17:43:47 The basis of zodiac also takes into account the idea of parallel execution, so it's capable, just not ready
17:43:52 jaypipes++
17:44:18 jaypipes++
17:44:32 jaypipes++
17:44:40 increase coverage - top priority
17:44:53 OK, so I would ask that we backburner efforts to parallelize for right now (maybe we make parallelization a blueprint in the openstack-qa LP project and we have a hackathon to try and do it). But we focus in the next couple months on increasing the quality and quantity of tests in the *tempest* suite.
17:45:13 To define the workflow to add new integration tests
17:45:35 nati2_: yes, dwalleck and I will get that documentation up on qa.openstack.org ASAP.
17:45:38 jaypipes: sounds good
17:45:44 jaypipes: Thanks!
17:45:48 dwalleck, westmaas: the second priority is that list of missing tests...
17:45:52 Right. I'll talk with jaypipes to understand how best to get docs in. I'll try to get something in today
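[Editor's note: one way to read dwalleck's test resource pool idea above (17:40:57 / 17:41:39): a cache that hands back a clean object when one exists and otherwise creates it. This is only an illustrative sketch under that assumption; the class, method names, and the compute_client usage are hypothetical, not tempest code.]

    class ResourcePool(object):
        """Cache of reusable test data (servers, images, ...), keyed by kind."""

        def __init__(self):
            self._clean = {}  # resource kind -> list of reusable objects

        def get(self, kind, factory):
            """Return a cached clean resource of `kind`, or build one with factory()."""
            pool = self._clean.setdefault(kind, [])
            if pool:
                return pool.pop()
            return factory()

        def give_back(self, kind, resource):
            """Return a still-clean resource so later tests can reuse it."""
            self._clean.setdefault(kind, []).append(resource)

    # Hypothetical usage inside a test:
    #   server = pool.get("server", lambda: compute_client.create_server("smoke"))
    #   ... exercise the server without dirtying it ...
    #   pool.give_back("server", server)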
17:45:58 agree
17:46:06 looking forward to docs on how we should write tests if we want to help coverage
17:46:11 jaypipes: by missing you mean not ported in yet?
17:46:13 the third priority is working with donaldngo_hp and AntoniHP to ensure tempest can replace kong in their project...
17:46:16 dwalleck: yep
17:46:27 anotherjesse: yes, we know...
17:46:51 jaypipes: I'll submit all remaining tests today, in addition to making the name change to tempest. Sound fair?
17:46:51 alright... let me make some action items here...
17:47:00 jaypipes++
17:47:16 And then going forward, I'll make sure my team opens bugs for each new test they are working on
17:47:17 #action jaypipes and dwalleck to get "How to Contribute an Integration Test to Tempest" done by EOD Thursday
17:47:57 #action dwalleck and westmaas to provide list of areas that are not covered by integration tests *after dwalleck pushes all remaining tests his team has into tempest*
17:48:20 #action jaypipes and dwalleck to schedule meeting with donaldngo_hp to discuss kong migration to tempest
17:50:04 OK, let's open up the discussion.
17:50:08 #topic open discussion
17:50:22 Nati's team at NTT wants to contribute an idea for negative tests, https://blueprints.launchpad.net/openstack-qa/+spec/stackmonkey
17:50:43 so we have come up with a supporting spec and high-level design
17:51:10 rohitk: hmm, very interesting.. :)
17:51:14 Nice! That looks like a very interesting concept
17:51:40 Yes, we would like to implement a very bad monkey for OpenStack
17:51:50 a gorilla would do too
17:51:51 lol :)
17:52:13 nati2: Is it unit tests? Chaos Monkey?
17:52:42 Ravikumar_hp: It is a library which could be used from tempest
17:53:02 Ravikumar_hp: this would be larger tests
17:53:04 Ravikumar_hp: basically, it's a library that will kill off important processes and wreak havoc on a running system..
17:53:12 things that would mess up the infrastructure
17:53:19 Ravikumar_hp: it would be a very bad monkey.
17:53:39 not to run on production!!
17:53:42 Using the monkey, we can check error messages and logs
17:53:45 hehe
17:53:48 Ravikumar_hp: :)
17:53:50 Ravikumar_hp: right :)
17:54:23 nati2_, rohitk: well, I think you've gotten the official thumbs up on the project :)
17:54:24 Netflix uses it for production systems
17:54:39 jaypipes: Thanks :)
17:54:39 at HP, we have a lot of chaos monkey tests that we can automate
17:54:54 Ravikumar_hp: Ah, really!?
17:55:01 Ravikumar_hp: please do collaborate with rohitk and nati2_ on this project, then!
17:55:34 Ravikumar_hp: OK TTYL
17:56:30 OK folks, anything else before I close the meeting?
17:57:54 alrighty!
17:57:56 #endmeeting
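[Editor's note: the stackmonkey blueprint raised in the open discussion above is described only at a high level (a library that kills important processes so error handling and logs can be checked). The sketch below is a guess at the smallest shape such a fault injector could take; every name in it is hypothetical and none of it comes from the actual blueprint.]

    import random
    import subprocess

    class ProcessKiller(object):
        """Kill one of a set of named services to see how the cloud reacts.
        Illustrative only - never point this at a production system."""

        def __init__(self, services=("nova-compute", "nova-scheduler")):
            self.services = list(services)

        def unleash(self):
            victim = random.choice(self.services)
            # Assumption: the tests run somewhere that can signal the service directly.
            subprocess.call(["pkill", "-9", "-f", victim])
            return victim

    # A negative test could kill a service, issue an API call, assert that the
    # error message and logs look sane, then restart the service.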