19:02:42 #startmeeting
19:02:43 Meeting started Tue Jun 14 19:02:42 2011 UTC. The chair is mtaylor. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:44 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:02:47 HOYOO
19:03:17 :-)
19:03:24 #info Added an agenda overview to the wiki
19:03:29 #link http://wiki.openstack.org/Meetings/CITeamMeeting
19:04:14 So - I guess let's get this puppy rolling
19:04:47 #topic Discuss/Present overall CIPlan as it currently stands
19:04:48 o/
19:04:54 so, starting with ciplan
19:05:14 yup. seemed like a good idea to start at the beginning
19:05:18 which smoke testing are you referring to?
19:05:34 #link http://wiki.openstack.org/CIPlan
19:05:46 well, right now to the jenkins job which is useless
19:05:51 the stuff in /smoketests only?
19:06:25 I was _actually_ just talking about the infrastructure to run it - but yes, to run the stuff in nova's /smoketests
19:07:32 so, for those the general plan is: start a vm, provision with chef, point smoketests at them
19:07:54 at the moment ex-anso has been using vagrant because it automates a good portion of the things we need to do to set up a multinode cluster
19:08:04 yup. although I'd say the general plan is "start a vm, install/provision with something, test them"
19:08:05 but you seem to be set on using external cloud servers
19:08:15 i am asking for a more concrete plan
19:08:24 indeed.
19:08:35 "something" == chef scripts from openstack-recipes
19:08:37 in my opinion
19:08:38 what I was going to say is that for now "install/provision with something" seems to be chef as a best practice
19:08:41 yes
19:08:57 if there is no disagreement let's start saying that
19:08:59 instead of something
19:09:14 we were starting at the beginning here today, so I thought we'd be clear
19:09:22 we are clarifying
19:09:23 Will chef scripts be the standard openstack deployment tool?
19:09:52 nati: i think they will be one of them, the puppet guys will also have a set of tools
19:09:56 nati: that's certainly a possibility
19:10:17 yeah - and getting into chef v. puppet is certainly not something I care to do anytime in my personal near future
19:10:26 I think OpenStack should have an official deployment tool, and the smoketesting tool should use the official one.
19:10:40 nati: agreed, but we can't make the political statement right now
19:10:46 ++
19:10:46 Because we should test the deployment tool also.
19:10:47 nati: for now we are going to use the chef recipes
19:10:56 right. because they are what's there
19:10:56 and our docs will describe the usage of those
19:11:02 and our tools will depend on them
19:11:03 agreed
19:11:11 the puppet guys will do work to make those tools able to use puppet as well
19:11:17 but that onus is on them
19:11:28 we like them, we just have more inertia on chef right now
19:11:47 so, for the vm stuff, shall we say we use rackspace cloud servers?
19:11:59 or vagrant?
19:12:05 yes. I think that gets us the best ability to do many of these in parallel
19:12:09 RCS.
19:12:21 we also can get them more cheaply than amazon i assume
19:12:28 yes. yes we can :)
19:12:43 Chef cookbooks written properly should work on bare metal, cloud, or a personal VM on your laptop (vagrant).
19:12:51 which then gets us to getting the damn thing working
19:12:52 dprince: yup, that's the goal
19:12:56 dprince: yes.
agree
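
(A minimal sketch of the "start a vm, provision with chef, point smoketests at it" flow above, using Apache libcloud against Rackspace Cloud Servers. The node name, the Ubuntu image filter, and the ssh/chef-client steps are illustrative assumptions, not the actual openstack-recipes tooling.)

    import subprocess
    import time

    from libcloud.compute.providers import get_driver
    from libcloud.compute.types import Provider

    # Spin up a Rackspace Cloud Server to act as a smoketest node.
    Driver = get_driver(Provider.RACKSPACE)
    conn = Driver('USERNAME', 'API_KEY')  # placeholder credentials

    image = [i for i in conn.list_images() if 'Ubuntu' in i.name][0]
    size = conn.list_sizes()[0]  # smallest flavor; real runs would want bigger
    node = conn.create_node(name='smoketest-node', image=image, size=size)

    # create_node returns before the server is usable; poll until it has an IP.
    while not node.public_ips:
        time.sleep(10)
        node = [n for n in conn.list_nodes() if n.uuid == node.uuid][0]

    # Provision with the chef recipes, then point nova's /smoketests at it
    # (assumes root ssh keys are already injected and chef is preinstalled).
    ip = node.public_ips[0]
    subprocess.check_call(['ssh', 'root@%s' % ip, 'chef-client'])
    subprocess.check_call(['ssh', 'root@%s' % ip,
                           'cd /opt/nova/smoketests && nosetests'])
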
19:13:14 the two approaches on the table in my head for making this happen are:
19:13:17 So the way I see us collaborating here is that we collaborate on good config management practices.
19:13:45 openstack_vpc fired from jenkins - or just firing up a node and doing the chef bits on it by hand
19:13:56 what is openstack_vpc?
19:14:00 I am, at this point, fine with either
19:14:10 openstack_vpc is the thing that powers smokestack
19:14:10 termie: https://github.com/dprince/openstack_vpc
19:14:23 cool, will that kick off the chef pieces?
19:14:28 yes
19:14:30 Yep.
19:14:35 perfect, let's use that
19:14:38 the main question I have for folks there is
19:14:49 we can make dprince make it do what we want :))
19:14:49 termie: https://github.com/rackspace/chef_vpc_toolkit
19:14:55 Are there any limitations of VPC?
19:14:56 Termie, I forked the Anso cookbooks and have made a couple minor changes to get things working in the cloud.
19:15:16 nati: The limitations of VPC are the same limitations you'd hit on Cloud Servers.
19:15:17 it requires redis, which also is used in the nova unit tests - is there any reason to think that running openstack_vpc on the jenkins box will bork the unit tests
19:15:18 dprince: bug xtoddx to get them integrated?
19:15:36 where is the redis requirement?
19:15:53 nova doesn't use redis anywhere anymore i thought
19:16:11 termie: smokestack does...
19:16:14 great! I just know that there were already redis processes on the jenkins box, and I didn't want to step on anything
19:16:14 termie: Smokestack uses --> Cloud Servers VPC. Both of these are Rails apps for which I'm using a Redis-backed job queue (Resque).
19:16:28 mtaylor: I don't think openstack_vpc needs redis
19:16:38 mtaylor: just the stuff built on top of it
19:16:48 problem solved then - redis on the jenkins box is historical and non-conflicting
19:16:48 mtaylor: I'm happy to let you have access to the Cloud Servers VPC instance we are using on Titan.
19:16:53 dprince: so vpc is not a command-line tool
19:16:54 ?
19:17:23 termie: openstack_vpc is a set of config files and rake tasks to spin up testing groups.
19:17:26 dprince: well - the main thing is that I need jenkins to be able to fire the commands - probably easier to just install it on the jenkins box, yeah?
19:17:34 termie: it is command line driven.
19:17:39 Smokestack scripts it.
19:17:45 dprince: but we have to have a server running all the time that hits that server?
19:17:52 erm
19:17:57 and a cli that hits that server
19:18:32 termie: openstack_vpc uses Cloud Servers VPC to spin up cloud servers.
19:18:49 termie: Cloud Servers VPC is an always-running rails app - so yes
19:19:13 and what does its always running get us over having it just do a one-shot thing?
19:19:13 dprince: I couldn't figure out how to get openstack_vpc to talk to a cloud servers vpc on another box ... was I just missing something?
19:19:34 the queuing and such can be handled by jenkins, is what i am getting at
19:19:40 termie: yes
19:19:56 so - eventually I want to just have jenkins spin up the servers and do the chef bits
19:20:12 agreed, but i think we should be using a common tool for that
19:20:17 but having jenkins call openstack_vpc should get us further today
19:20:19 and it sounds like vpc has the pieces for it
19:20:22 yes
19:20:33 i just don't want another long-running process to maintain
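
(Roughly what "jenkins calls openstack_vpc" could look like as a build step. The rake task names below are hypothetical placeholders - the real openstack_vpc tasks may differ.)

    import subprocess

    # Hypothetical Jenkins build step: use openstack_vpc's rake tasks to
    # spin up a server group, chef it, run the smoketests, and tear down.
    def rake(task):
        subprocess.check_call(['rake', task], cwd='/opt/openstack_vpc')

    try:
        rake('create')     # hypothetical: boot the cloud servers group
        rake('chef:run')   # hypothetical: apply the openstack cookbooks
        rake('smoketest')  # hypothetical: run nova's smoketests
    finally:
        rake('delete')     # always tear the group down, pass or fail
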
19:20:38 So I'm already running each trunk commit in SmokeStack (via a Jenkins). I'm happy to move that bit over to the public Jenkins now if you guys would like.
19:21:07 that would be grand
19:21:11 ++
19:21:24 have mtaylor make you admin'y?
19:21:29 there are things I would like to do with that setup long term - but that gets us a thing we need today
19:21:37 and we can talk about the other long term things later
19:21:41 Yes.
19:21:42 dprince: are you dprince on launchpad?
19:21:42 so
19:21:52 I'm dan-prince.
19:21:59 we set up an openstackvpc instance (probably just steal titan's for now)
19:22:16 dprince: ok. you are now an admin on the openstack jenkins
19:22:31 Also, I'm happy to make Cloud Servers VPC available for anyone else to use as well.
19:22:37 we deploy a single- and multi-node setup as per anso's old vagrant testing, and run the nova smoketests against that
19:22:41 Can we test VLANManager on VPC?
19:22:43 smokestack is just doing openstack api, right?
19:23:03 termie: I'm now running the nova smoketests too.
19:23:07 ooo
19:23:10 great
19:23:13 so
19:23:14 termie: minus volume tests.
19:23:31 ah, do we think there is a way to integrate those?
19:23:53 The issue is I can't get iscsi-target to run on Cloud Servers.
19:23:54 i'd say in that case let's just ditch the first section of CIPlan and replace it with smokestack
19:23:58 dprince: is it possible to run them with the xunit xml output and get those copied back to the jenkins box?
19:24:00 ah
19:24:21 mtaylor: Sure. We can do that.
19:24:34 it sounds like titan has a strong lead on most of this
19:24:37 termie: well - the whole thing here is that we want to get this integrated with jenkins so that we can eventually add it to the tarmac stuff
19:24:42 yup
19:24:55 So. On that front I actually have mixed feelings.
19:25:02 mtaylor: right, just saying integrate smokestack instead of writing your own bits
19:25:06 totally
19:25:16 dprince: oh? do tell :)
19:25:21 Is the idea that we would prevent (block the trunk commit) if the functional tests fail?
19:25:25 if he's already got that work done on a different jenkins
19:25:34 maybe - the idea is that I want us to be able to make that choice
19:25:39 physically
19:25:48 so that the choice is policy and not lack of ability
19:25:52 The model we've been following on Titan is we run branches in merge prop. Heavily. It's kind of a review tool.
19:25:57 dprince: i think that is a tarmac config issue rather than a jenkins config issue
19:26:07 dprince: we want that as well
19:26:18 so on _that_ one _I_ have mixed feelings
19:26:22 dprince: we == anso people at least
19:26:29 which is a security thing
19:26:39 running heavily on branches from known folks is one thing
19:26:46 right
19:26:52 running code from _any_ unreviewed merge prop is, well, fail
19:26:52 Right. But what I'm getting at is any time you have functional tests there is a chance they could fail due to an external dependency - something like the network being down, launchpad, GitHub, etc.
19:26:59 totally
19:27:22 I'd hate to hold up a trunk commit if there is a long-lived dependency failure.
19:27:32 I have no issues holding up a trunk commit
19:27:43 the whole idea is to keep trunk in good shape
19:27:45 dprince: well, once we're on github some of those things will be a little bit easier to work around
19:27:47 but that's a whole other thing
19:28:04 dprince: it is relatively easy to integrate things into your dev tree and take them back out later
19:28:11 I don't think any part of this has anything to do with github v. launchpad - but that's _also_ a different conversation
19:28:13 Sure. I just wanted to make sure everyone was on the same page.
19:28:29 yup. for now...
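
(On the xunit question above: a minimal sketch assuming nose's built-in xunit plugin, with a hypothetical test-node address. The XML ends up in the Jenkins workspace where the JUnit report publisher can pick it up.)

    import subprocess

    SMOKE_HOST = 'root@smoketest-node'  # hypothetical node running the tests

    # Run the smoketests remotely with JUnit-style XML output; don't bail
    # on test failure yet, or we'd never copy the report back.
    rc = subprocess.call([
        'ssh', SMOKE_HOST,
        'cd /opt/nova/smoketests && '
        'nosetests --with-xunit --xunit-file=/tmp/smoketests.xml'])

    # Copy the XML back into the Jenkins workspace for the JUnit publisher.
    subprocess.check_call(['scp', '%s:/tmp/smoketests.xml' % SMOKE_HOST, '.'])

    # Now let the job pass/fail on the test result.
    raise SystemExit(rc)
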
19:28:42 dprince: can I put you down with an action of getting the smokestack jenkins job done?
19:28:43 Also, the tests as-is take 15 minutes. Is that acceptable to everyone?
19:29:01 mtaylor: Sure. I can take that.
19:29:12 #action dprince make smokestack jenkins job
19:29:24 for now we don't automatically run against merge props, and we'll try gating merges on them with the option that we might need to turn that off
19:29:35 yes.
19:29:36 titan can continue doing it for their own stuff, obvs
19:30:00 although gating merges will have to wait for now on some jenkins/tarmac work - but that should be done soon enough - for now, notification will be a huge win
19:30:33 step 1, public smokestack, step 2, tarmac integration
19:30:38 yup
19:30:50 want to move to baremetal bits?
19:30:50 Sure. So just to be clear: you want me to make a simple Jenkins job that runs each time we have a trunk commit. This gives you the indicator light on Jenkins you want?
19:31:01 yes
19:31:02 and
19:31:06 dprince: we'll want to get nosexunit out
19:31:14 dprince: also - take a peek at the nova-smoketest job, at the nosexunit output stuff
19:31:19 because we want that too
19:31:28 Sure.
19:31:33 baller
19:31:48 * mtaylor will purchase for dan prince a beer
19:31:58 On the bare metal thing: I've got 4 nodes that I'm working on getting integrated with XenServer testing.
19:32:07 are we moving to baremetal now?
19:32:12 meetingbot
19:32:18 #topic Jenkins job to integrate/fire bare metal testing
19:32:36 "< dprince> On the bare metal thing: I've got 4 nodes that I'm working on getting integrated with XenServer testing."
19:33:00 i am wondering what rpath is offering
19:33:18 rpath is offering something for this similar to what dprince just offered for functional
19:33:23 we had a nicely working pxe + ubuntu bootstrap setup already
19:33:44 that only needed the chef provisioning bits on the client side
19:33:45 For bare metal testing I'm actually very interested in using Dell's Crowbar project.
19:33:49 they currently have a jenkins running jobs that inject images into cobbler and stuff
19:33:55 dprince: everybody is but it doesn't exist yet
19:34:07 I'm less interested in crowbar unless it's only one of the solutions
19:34:13 dprince: and many of us are a bit tired of waiting on them
19:34:15 because just testing dell is mega uninteresting to me
19:34:16 termie: I'm asking for the bits as we speak...
19:34:33 (via emails, etc).
19:34:37 the chef scripts would theoretically be the same
19:34:41 Sure.
19:34:57 so all we really need is a simple script to kick off chef clientside
19:35:03 I'm working on a XenServer cookbook to help set up the dom0 and domU.
19:35:11 cool
19:35:26 our side doesn't know how to do that, we're still kvm-ville
19:36:01 for the moment though i'd love to get what we already had nearly working finished
19:36:40 cool
19:36:46 yup. and I can turn my attention to that now that we're considering functional done via smokestack
19:37:00 termie: did you have a reason for doing pxe stuff directly and not using cobbler or something similar?
19:37:24 mtaylor: it was dead simple and didn't require learning much as we were already doing it for nasa
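
(The log doesn't say how the pxe + ubuntu bootstrap gets triggered from jenkins; one common approach - purely an assumption here, not termie's actual setup - is flipping the boot device over IPMI and power-cycling, letting the existing pxe/preseed setup reinstall the box.)

    import subprocess

    # Assumption: the test boxes have IPMI BMCs reachable from the jenkins
    # host; the host/user/password values are placeholders.
    def ipmi(host, *args):
        subprocess.check_call(['ipmitool', '-I', 'lanplus', '-H', host,
                               '-U', 'admin', '-P', 'PASSWORD'] + list(args))

    for host in ['bmc-node1', 'bmc-node2']:  # hypothetical BMC addresses
        ipmi(host, 'chassis', 'bootdev', 'pxe')  # netboot on next start
        ipmi(host, 'chassis', 'power', 'cycle')  # reboot into the installer
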
19:37:27 How many machines do you have access to?
19:37:34 dprince: i think 20
19:37:37 10
19:37:39 we have 10 machines
19:38:03 you have 10 or we gave you 10?
19:38:13 Cool. Well we have 4 on titan. Not as many, but they are big boys: 48 gigs of memory, lots of disk.
19:38:14 someone gave me access to 10
19:38:40 mtaylor: was it us who gave you access?
i am trying to ask whether you are using the ones we allotted or some other mystery set
19:38:53 termie: the ones from jesse - so probably
19:39:03 termie: "us" is very nebulous to me most of the time
19:39:41 mtaylor: ex-anso is my usual us when i am acting as spokesperson
19:39:58 we don't really have a good line of distinction
19:40:05 fair - I guess I should be more clear - that doesn't mean anything to me either ... but it's really not important right now
19:40:07 but anso is the easiest
19:40:44 in any case - I have access to 10 machines in the equinix facility in the bay area - I believe these are from "you"
19:40:51 okies
19:41:25 so i will assume we can't give you any more
19:41:34 I'm also assuming that
19:42:16 the thing we talked about at ODS was eventually making a setup that had 5 machines allocated to swift, 4 to nova and 1 to glance, which would also be the driver of the thing
19:42:36 that, obviously, will not be what we do in the short term - but that's the aim
19:42:50 Why so many to swift?
19:43:06 i'd say 4 nova, 4 swift, 1 glance and one orchestrator
19:43:21 I'd go for more nova myself.
19:43:22 4 swift just to test replication and have a separate proxy
19:43:34 Okay. I see.
19:43:40 there was a reason they said 5 instead of 4 - jaypipes do you remember what it was?
19:43:50 because I'd love to have the orchestrator be separate
19:44:14 same here
19:44:27 because the other machines should probably be in a different network config
19:44:45 ok. let's say for sake of argument that we do that until such a time as someone comes in and says "dear god! we have to have 5 for swift"
19:45:36 mtaylor: no, I can't remember why.
19:45:50 we also have, while we're on the subject, an offer of machines from the Novell/Microsoft Interoperability Lab to ensure that we have a deployment testing the hyperv stuff
19:46:46 should probably be easier to make use of that once we have the first set of these up and going
19:48:02 so action items for monty?
19:48:08 so - actions coming out of this week on this topic are that I'm going to get a jenkins job that fires off termie's pxe boot work. we're waiting on proper multi-machine chef recipes, yeah?
19:48:21 nope, we have those
19:48:28 but they will probably have to be updated for baremetal
19:48:46 the stuff i gave you kicks off a chefserver in lxc
19:48:56 #action mtaylor jenkins job that fires off pxe
19:48:57 yup
19:49:00 saw that
19:49:22 we probably have additional knowledge on some of that now
19:49:26 the key there then is getting the right chef stuff running on the machines once they are booted
19:49:27 great!
19:49:29 so you can ask us questions
19:49:40 yeah, but it sounds like dprince might know some of that
19:49:44 I'd love to - as chef makes me want to rip my arms off
19:49:46 that was where i handed things to you
19:49:58 so i don't know how to actually kick off a chef client
19:50:02 it makes me want to rip my arms off also
19:50:11 although they claim it is getting better
19:50:12 dprince: I'll be coming at you for questions on that
19:50:28 mtaylor: Sure. Happy to help.
19:50:33 termie: I keep expecting something like a command line "chef nova-node"
19:50:42 yeah me too
19:50:45 termie: but instead apparently I have to write json files :)
19:50:48 but it is like, get this key from somewhere
19:50:56 ok.
as long as it's not just me
19:51:14 vagrant does it, too, if you want to read some ruby
19:51:15 for now, I'll make sure I get the first bits up and kicking, and then I'll bug dan on the next step
19:51:42 it does - and there's too much magic in the vagrant files to help me figure out what to do on the command line
19:51:47 darned "helpful" things
19:52:07 whatever - I'm sure I'll be a chef expert by the time this is all said and done
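
(For the json-files complaint above, this is roughly the shape of it: chef-client takes extra node attributes, including the run_list, from a JSON file passed via -j. The role and attribute names below are hypothetical, not necessarily what openstack-recipes ships.)

    import json
    import subprocess

    # chef-client reads node attributes (including the run_list) from a
    # JSON file; write one describing what this machine should become.
    node = {'run_list': ['role[nova-node]'],   # hypothetical role name
            'nova': {'network_type': 'vlan'}}  # hypothetical attribute
    with open('/etc/chef/node.json', 'w') as f:
        json.dump(node, f)

    # The "chef nova-node" one-liner termie wished for is then roughly:
    subprocess.check_call(['chef-client', '-j', '/etc/chef/node.json'])
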
19:53:16 the next agenda section probably shouldn't be its own section - but I just wanted to make sure to mention that at some point figuring out how we verify the chef and puppet stuff is probably a good idea - but I don't feel like solving that today
19:53:33 i think we effectively punted that earlier
19:53:34 we're running short on time - anybody got anything else on bare metal for now?
19:53:49 only thing i would add is due diligence to see what loren has
19:54:05 loren?
19:54:25 USC guy who has done a bunch of baremetal stuff
19:54:29 actually probably not useful for now
19:54:35 it is for provisioning baremetal from nova
19:54:41 let's skip
19:54:44 ah. cool.
19:54:46 i'll keep up with him
19:55:01 I'm going to combine a couple of topics real quick:
19:55:07 #topic Jenkins Plugins
19:55:14 anybody want to do any Java hacking?
19:55:18 we don't need any right now, right?
19:55:43 you want to replace tarmac, i guess, but do we need to?
19:55:51 well - we need some at some point - and I always want to ask if there's anyone lurking who wants to help
19:56:00 we're getting close to the point where we need to
19:56:28 why do we need to?
19:56:35 as we add additional tests for things in Jenkins, as long as we're using tarmac or roundabout, we have to double-implement those outside of jenkins if we want them as part of the gatekeeping
19:56:52 what is being double-implemented?
19:57:15 well, nothing right now - we're just not adding those things to the gatekeeper as of yet
19:57:29 but our friends at ntt wanted to add code coverage metric counts to the gatekeeping
19:57:53 and since we're already tracking that in jenkins - if dependent jobs worked for us, that would be a piece of cake
19:58:23 but tarmac/roundabout is ignorant of jenkins
19:59:09 how is it ignorant of jenkins?
19:59:11 in any case - obviously not this week - but I wanted to check with folks. however, as is usually the case, there are not 100s of java devs clamoring to help out - so we'll get to it when we get to it
19:59:12 it runs the tests
19:59:25 so, not that i want to
19:59:30 but i can be a java developer
19:59:30 it runs tests that are defined in a config file on the file system - the only interaction is that jenkins runs tarmac
19:59:35 but i don't think we need it
19:59:45 one minute left :)
19:59:58 tarmac can't pass/fail things based on the outcome of jenkins jobs
20:00:10 really? i thought that was exactly what it did
20:00:13 nope
20:00:22 that is what roundabout does
20:00:25 nope
20:00:35 or - is it really?
20:00:37 ... yeah
20:00:51 it runs the tests in jenkins and then if it fails it dumps info into th elof
20:00:54 log
20:00:56 s/log/issue
20:01:01 ok - I'll re-look at it - I thought it did the same thing that tarmac did
20:01:03 pretty sure
20:01:07 plz double check though
20:01:09 time to wrap up?
20:01:10 I will
20:01:11 yeah
20:01:22 #action mtaylor will look at roundabout jenkins triggering
20:01:27 ok guys - till next week
20:01:28 okay wrap up
20:01:29 #endmeeting
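
(A post-meeting footnote on the roundabout question: whichever tool does the gating, checking a Jenkins job's outcome from outside is a single JSON API call. A minimal sketch - the Jenkins URL and job name are placeholders.)

    import json
    from urllib.request import urlopen

    JENKINS = 'http://jenkins.example.org'  # placeholder Jenkins URL
    JOB = 'nova-smoketest'                  # placeholder job name

    # Jenkins exposes build metadata at .../lastBuild/api/json; the
    # 'result' field is SUCCESS, FAILURE, etc. once the build finishes.
    url = '%s/job/%s/lastBuild/api/json' % (JENKINS, JOB)
    build = json.load(urlopen(url))

    if build['result'] != 'SUCCESS':
        raise SystemExit('gate failed: %s build %s was %s'
                         % (JOB, build['number'], build['result']))
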