19:01:43 #startmeeting
19:01:44 Meeting started Tue Oct 25 19:01:43 2011 UTC. The chair is mtaylor. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:45 Useful Commands: #action #agreed #help #info #idea #link #topic.
19:03:08 sigh. the minutes linked to on the wiki are not the droids I'm looking for
19:03:21 so - last time we talked for about two hours on the topic of packaging
19:03:30 anybody want to start that freight train moving again?
19:03:32 it was exciting
19:03:42 it was so exciting
19:03:55 #agreed everyone loved talking about packaging last week
19:03:58 more exciting than the keystone meeting?
19:04:12 anotherjesse: I was eating a sandwich during the keystone meeting - did I miss fun?
19:04:27 always
19:04:38 blast. that'll teach me to eat
19:05:22 mtaylor: do you have time this week where we can meet and set up the netstack jenkins slave?
19:05:25 let's talk about it again :)
19:05:38 so - in general, since last week I've gotten nova trunk gating moved to building from a pip-based venv instead of slaves with depends installed from packages
19:06:20 it was fun - I discovered, while working on caching the venv build, that venvs are not terribly relocatable, even after running virtualenv --relocatable
19:06:40 carlp: YES
19:06:50 mtaylor: Huh?
19:06:55 mtaylor: Oh, nm.
19:06:56 mtaylor: Let's schedule a time offline for that then
19:06:58 mtaylor: definitely missed the fun of the keystone mtg
19:06:59 carlp: how does sometime during the day tomorrow work?
19:07:03 carlp: yes to offline
19:07:04 mtaylor: What about all the stuff that isn't in pip?
19:07:16 soren: those are still installed via apt
19:07:17 mtaylor: Where does that come from? Which versions of everything to expect?
19:07:25 soren: tools/pip-requires
19:07:50 soren: virtualenvs are rebuilt any time tools/pip-requires changes, and additionally are rebuilt every night for good measure
19:07:50 mtaylor: Eh?
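[editor's note] The rebuild rule described above - rebuild the venv whenever tools/pip-requires changes, plus a nightly rebuild for good measure - can be sketched roughly as below. The stamp-file mechanism is an assumption for illustration, not the actual Jenkins job configuration:

```shell
# Sketch (assumed logic, not the real Jenkins job) of the
# rebuild-on-change check: stamp the venv with a hash of
# tools/pip-requires and rebuild whenever the hash differs.
needs_rebuild() {
    reqs=$1 stamp=$2
    new=$(sha1sum "$reqs" | cut -d' ' -f1)
    old=$(cat "$stamp" 2>/dev/null || true)
    # rebuild needed when there is no stamp or the hash changed
    [ "$new" != "$old" ]
}

# after a successful venv build, record the hash of the requirements
mark_built() {
    sha1sum "$1" | cut -d' ' -f1 > "$2"
}
```

A caller would wrap the actual `virtualenv` + `pip install -r tools/pip-requires` steps inside `if needs_rebuild ...`, and a nightly cron job could simply delete the stamp to force a rebuild.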
19:07:54 soren: the tools/pip-requires are pretty religiously updated too, since almost all the devs are using them
19:08:12 oh wait, I think I misread your question
19:08:19 I'm not thrilled about the current state of tracking the things that get installed not via pip
19:08:25 mtaylor: Right, but the stuff that isn't in pip. Where does that come from? Which versions of those things can one expect?
19:08:39 mtaylor: I think it's mostly in the README and the like, is that correct?
19:08:51 mtaylor: I'm with you. Mixing packaging systems is craptastic.
19:08:54 currently, anotherjesse has a list in devstack, I have a list in the puppet modules for our slaves, and jeblair has one in the preseed launch stuff for bare metal
19:09:26 mtaylor: we are currently focusing on diablo
19:09:28 I would really love to figure out a sane way to maintain that list that makes it easy for devs to deal with and not wonky for the (currently 3) automated systems that need the list
19:09:33 once we move to essex then we will want to figure out a way to share lists
19:09:43 (perhaps coming from the projects themselves?)
19:09:57 anotherjesse: cool. yeah - it's a terrible problem right now - but it's going to get old over time
19:10:04 sorry
19:10:08 mtaylor: Which kernel version, which version of libvirt, etc. etc.? Whatever's in Maverick? Natty? Oneiric?
19:10:10 it's NOT a terrible problem right now
19:10:11 make some files in tools/ for each project - packagedeps-apt, packagedeps-rpm
19:10:25 soren: so, the idea is that we'll still have a ppa with depends
19:10:26 would that work?
19:10:38 mtaylor: Also, how do we handle it when we need changes to upstream code?
19:11:12 soren: at the moment it's nova-core's ppa driving the slaves - but I would like a non-project-specific ppa - I made one in ~openstack-ci but I'm not 100% convinced that's the right place for it
19:11:55 mtaylor: We've on more than one occasion had sporadic test suite failures due to Eventlet bugs.
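[editor's note] The tools/packagedeps-apt idea floated above - one shared per-project list of non-pip dependencies that devstack, the puppet modules, and the preseed scripts could all consume - might look like the sketch below. The file name comes from the discussion; the format (one package per line, comments allowed) and the helper function are assumptions:

```shell
# Hypothetical reader for a tools/packagedeps-apt file: one package
# name per line, "#" comments and blank lines ignored. Emits a
# single space-separated list suitable for one apt-get invocation.
read_apt_deps() {
    sed -e 's/#.*//' -e '/^[[:space:]]*$/d' "$1" | xargs
}

# An installer (devstack, puppet, preseed) would then run e.g.:
#   sudo apt-get install -y $(read_apt_deps tools/packagedeps-apt)
```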
19:12:01 heckj: the main issue there is that from an integration perspective, we kind of need people to agree on dep versions ... so I'd like to have a master list of "this is what needs to be installed for openstack"
19:12:23 soren: yes - that would be the sorts of things we'd want in the ppa of depends - and also of course forwarded upstream and to the distros
19:12:31 mtaylor: ah - so a need for a combined list, rather than one for each project
19:12:40 heckj: that's what _I'd_ like
19:12:53 but I'm open to being shot down there
19:12:53 mtaylor: So how would that work? Would we temporarily remove it from pip-requires and move it somewhere else?
19:13:04 given the growing cross-dependencies in code, that makes sense to me
19:13:13 soren: oh - you mean pip-requires-based things
19:13:14 hrm
19:13:17 mtaylor: Yes.
19:13:19 sorry - brain dead for a sec
19:13:45 good question ... it's still on the cards to make our own pypi server that trunk versions of our software get uploaded to
19:14:15 I would think we could temporarily upload other things there - but I'd like to keep that to emergencies - otherwise I think things start getting weird
19:14:36 or?
19:14:49 We didn't choose not to use pip to begin with at random. It was a rather conscious decision.
19:14:54 Just sayin'.
19:14:58 I know
19:15:57 In other words: I don't know. I don't have a good answer for how to do it with pip.
19:16:38 that's fair. I'm going to propose then that when we run up against that, we can look at uploading something to our openstack pypi and see how that works for us
19:17:52 anybody have any sense of whether the projects would be opposed to changing the location of the virtualenv that run_tests.sh creates from .${project}-venv to .virtualenv?
19:18:30 mtaylor: wouldn't that make it harder if different projects need different versions?
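[editor's note] The .${project}-venv to .virtualenv rename proposed above can be illustrated with the sketch below. The two names come from the discussion; the function itself is illustrative, not real run_tests.sh code - the point is that a fixed name lets the Jenkins template job stop deriving the venv path from the project name:

```shell
# Illustrative only: contrast the current per-project venv name with
# the proposed fixed name. FIXED_VENV_NAME=1 models the proposal.
venv_dir() {
    project=$1
    if [ "${FIXED_VENV_NAME:-1}" = 1 ]; then
        echo ".virtualenv"        # proposed: identical for every project
    else
        echo ".$project-venv"     # current: .nova-venv, .glance-venv, ...
    fi
}
```

heckj's point in the log still holds either way: the venvs live in separate project checkouts, so a shared name cannot collide.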
19:18:34 I'm not a venv expert
19:18:38 mtaylor: I don't think it makes that big of a difference - they're never in the same directory
19:18:39 because right now I've got a template job in jenkins that handles venv stuff - and I'd love it if it didn't have to attempt to figure out the name of the venv ...
19:18:46 anotherjesse: what heckj said
19:18:56 anotherjesse: but we're also trying to encourage projects to need the same versions of deps :)
19:19:08 mtaylor: crazy talk
19:19:15 I know. I'm completely mental
19:19:44 heckj: cool. I'll try to propose some patches there - I think it'll make the jenkins code a little more reusable
19:20:05 so moving forward:
19:20:27 #action mtaylor get run_tests.sh patched to create .virtualenv instead of .${project}-venv
19:20:41 #action get other projects migrated to the pip builders
19:20:52 #action mtaylor get that darned pypi server set up
19:21:09 that's the stuff I'm going to try to get going on this front
19:22:41 anybody got anything else on pip builders?
19:23:02 nope
19:23:04 #topic bare metal
19:23:21 in other news, jeblair has made great progress in getting the bare metal stuff ready to start running tests
19:23:23 hi
19:23:28 jeblair: wanna catch folks up?
19:23:33 sweet!
19:24:04 https://jenkins.openstack.org/job/dev-openstack-deploy-rax/
19:24:19 here's a summary:
19:24:51 i have a procedure for setting up a machine to drive tests, based on ubuntu orchestra
19:25:02 documentation for that will be up real soon now
19:25:37 orchestra = juju?
19:25:40 i have a jenkins slave running on that machine, and it's running the above job
19:25:44 ensemble == juju
19:25:48 orchestra == cobbler
19:25:51 ah
19:26:13 basically the idea is to distill it down to instructions that look like "apt-get install ubuntu-orchestra-server" and install these config files
19:26:32 attempting to get the barrier to entry for running bare metal tests very low
19:26:33 * anotherjesse would be interested in adding it to the devstack tools directory once done
19:26:47 ++
19:27:47 part of the orchestra configuration is to set the test machines up with an lvm snapshot so we can use kexec to quickly reset them to a known state. that seems really solid now
19:28:08 so the above job runs kexec to reset the machines, then runs devstack on them to set up openstack
19:28:09 jeblair: interesting - any good docs on reading how that works?
19:28:37 anotherjesse: mostly written, should be up soon
19:29:00 jeblair: I was referring to kexec in general for this use case - but can't wait to see your docs as well
19:29:12 (eg when researching this did you find any good resources)
19:29:14 then the job runs exercise.sh, which is where we are now. it looks like we'll need to change the configuration a bit
19:30:31 anotherjesse: i think this is a moderately novel use of kexec. mostly it's used by kernel developers to test new versions of the kernel. i didn't run into many folks using it for qa, but i could have just missed it.
19:30:47 jeblair: trail blazing!
19:31:00 neat, looking forward to seeing how you did it!
19:31:14 :) another improvement over our last iteration of bare metal testing is:
19:31:41 if you visit the link above, you'll see that the syslogs for each host are separately archived along with the build
19:32:06 so it should be easier to diagnose problems by looking at the syslog for just the head node, or just a compute node, etc.
19:32:38 nice
19:32:59 that's about the state. mostly we need to tune the devstack config so exercise.sh works and make sure all the openstack components are syslogging so we get all the info
19:33:06 jeblair: you're doing this with oneiric images, yeah?
19:33:09 then we should be able to start running post-commit tests on that
19:33:15 natty
19:33:19 jeblair: making devstack enable syslog is probably a good thing in general
19:33:34 yep
19:33:56 if post-commit is solid, we can start gating
19:33:57 anotherjesse: I was thinking perhaps a flag to devstack which toggles between starting things in screen or starting things normally spitting to syslog? or is that too much?
19:34:17 not too much - let's create an issue (once we move to essex we will move to lp+gerrit for devstack)
19:34:27 mtaylor: i don't care if things start in screen as long as they also syslog
19:34:33 anotherjesse: ++
19:34:40 ++
19:34:52 anotherjesse: actually, I think we're going to have to gerrit devstack before we can use devstack-based things for trunk gating
19:35:13 because we'll want to be able to make sure that devstack changes don't break known-good trunk, and then that trunk works with known-good devstack, yeah?
19:35:16 we were hoping to move to essex at end of day today - but given the uncertainty about keystone I'm not sure
19:35:25 mtaylor: if needed we can move earlier
19:35:28 oh well - we should be fine then :)
19:35:47 yeah, i thought you meant something other than that by 'move to essex'. ;)
19:35:56 same here
19:36:14 anotherjesse: you're still in the last meeting in your head, aren't you… I am.
19:36:22 mtaylor: stackrc points to diablo and we haven't worked on essex (eg master) since we need to get diablo working for users first
19:36:30 it *might* work on master - not sure
19:36:38 makes total sense ...
19:36:43 i'm happy to move devstack into lp/gerrit as soon as you're ready. is openstack/ or openstack-ci/ the right place for it?
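[editor's note] The LVM-snapshot + kexec reset jeblair describes in the bare-metal discussion above can be sketched as below. His actual orchestra configuration had not been published at meeting time, so the device names, kernel paths, and flags here are purely illustrative assumptions about how such a reset could work; DRY_RUN=1 prints the commands instead of executing them:

```shell
# Assumed reconstruction of the "reset a test machine to a known
# state" step: roll the root LV back via its snapshot, then kexec
# straight into the known-good kernel, skipping firmware/POST.
run() {
    # with DRY_RUN=1, print the command instead of executing it
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi
}

reset_node() {
    vg=$1 lv=$2
    # merge the pristine snapshot back over the root LV
    run lvconvert --merge "/dev/$vg/$lv-snap"
    # load the known-good kernel/initrd and jump into it immediately
    run kexec -l /boot/vmlinuz --initrd=/boot/initrd.img --append="root=/dev/$vg/$lv ro"
    run kexec -e
}
```

The win over a full reboot is speed: kexec replaces the running kernel directly, so a node returns to a clean devstack-ready state in seconds rather than minutes.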
19:36:58 I think openstack/ but it doesn't feel like a full project
19:37:03 it is just a shell script
19:37:05 we can email the list
19:37:17 I'm good with either.
19:37:17 jeblair: that might be the reason why exercise.sh isn't working for you? or are you running it on diablo too
19:37:37 no, it's running on cloudbuilders diablo
19:37:41 anotherjesse: I may not get the swift stuff complete before you want to move it - in that case I'll just re-do it into gerrit from your branch
19:37:43 break one thing at a time. :)
19:37:48 jeblair: ok. good to know :)
19:38:46 i'd like to see openstack/ be for full openstack projects in the long term. i know the ci team has some stuff in there. we're going to move it out i think, but it's a little tricky due to the number of machines with operational scripts expecting those repos right now
19:39:00 should be easier once monty finishes the pip builders, actually.
19:39:33 maybe devstack belongs there; if not, openstack-ci makes some sense to me. it's possible this is a bit of a bikeshed now. :)
19:39:51 any other questions about bare metal
19:39:52 ?
19:40:18 jeblair: just a note that we (rax cloudbuilders) are working on a build_domu.sh that works like build_kvm or build_lxc
19:40:21 for xenserver
19:40:31 you might want to add a xenserver version eventually for gating
19:40:55 jeblair: sleepsonthefloor is working on that and getting help from citrix / rax public cloud
19:41:06 only one: the one I hinted at earlier - which also affects non-baremetal stuff - currently we're doing all testing on natty still, partially because there are no oneiric images in cloud servers yet
19:41:06 anotherjesse: cool
19:41:17 I'd like to move that to oneiric when we have images for it - anybody have a problem with that?
19:41:27 mtaylor: could you use freecloud with oneiric uec images?
19:41:47 I think using oneiric is probably good as it (should) decrease the size of the ppa needed?
19:42:10 anotherjesse: possibly. we've also gotten access to an account on the hp cloud which we're working on integrating into slave pools
19:42:37 whatever provider we use does need to be very stable
19:43:17 yes. which so far is mainly rax cloud servers ... but as soon as we have the jclouds plugin finished, I'd love to have on-demand slaves across a few different clouds
19:43:40 especially if we can use the same uec image on each provider
19:43:54 mtaylor: is anyone working on the jclouds plugin? I'm not a java person but would love to see that working
19:43:54 but that's starting to be a whole other topic - and a bit longer-term than this week :)
19:44:24 anotherjesse: still sourcing around for someone to do that work - I have a few leads - but yeah, it'll be great to get that plugin done
19:44:38 what meeting is this again?
19:44:47 zykes-: CI
19:45:00 I think that's about it for bare metal...
19:45:05 ok
19:45:18 #topic open discussion
19:45:22 anybody got anything else?
19:45:31 want to throw rotten fruit?
19:45:43 we should maybe cancel next week's meeting?
19:45:46 mtaylor: I'd love to hear the state of the actual tests - is that what soren has been doing?
19:45:46 I think I got that out of my system with the poking earlier
19:46:16 oh yeah - jeblair, soren, vishy, ttx and I are all going to be at UDS next week
19:46:22 anotherjesse: are you coming? or are you staying home?
19:46:33 mtaylor: I'll be there - as soon as I get my tix
19:46:39 anotherjesse: sweet
19:46:42 me too
19:46:52 anotherjesse: gabe and daryl are working on that as well as soren
19:46:55 I'll be at home :(
19:47:00 so I think it's fair to say we will not have an IRC meeting for CI next week
19:47:00 anyone at the citrix summit now or ?
19:47:23 Our plan is to drown mtaylor in the pool before he starts relying on devstack for CI
19:47:32 count me in
19:47:34 ttx: too late
19:47:35 soren: will need your help
19:47:46 mtaylor: you mean you started to learn swimming ?
19:48:19 ttx: started? dude, I've been swimming since I was a wee small boy - I grew up in the south, there is only one thing to do in the summer down there, and it's get in a pool of water
19:48:29 anotherjesse: it is set up in gerrit - and I've seen patches come through
19:48:53 anotherjesse: I think it'll be really helpful once we're using at least part of it to gate something, so that there can be good feedback in terms of inputs/outputs needed
19:49:00 Still think that with soren and zul we can succeed.
19:50:01 oh, regarding that, if we're ready to gate before os-integration-tests are ready, i'm planning on just using exercise.sh as a placeholder and to have something to gate on
19:50:19 that seems like a great step one to me
19:50:28 jeblair: exercise.sh is a placeholder for us too ;)
19:50:50 jeblair: we plan on helping with the ci tests once things have stabilized
19:51:36 groovy :)
19:54:21 Sorry, was away for a little bit.
19:54:28 ttx: Need my help for what?
19:55:14 this would be a great time to end the meeting!
19:57:08 #endmeeting