19:01:43 <mtaylor> #startmeeting
19:01:44 <openstack> Meeting started Tue Oct 25 19:01:43 2011 UTC.  The chair is mtaylor. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:03:08 <mtaylor> sigh. the minutes linked to on the wiki are not the droids I'm looking for
19:03:21 <mtaylor> so - last time we talked for about two hours on the topic of packaging
19:03:30 <mtaylor> anybody want to start that freight train moving again?
19:03:32 <carlp> it was exciting
19:03:42 <mtaylor> it was so exciting
19:03:55 <mtaylor> #agreed everyone loved talking about packaging last week
19:03:58 <anotherjesse> more exciting than keystone meeting?
19:04:12 <mtaylor> anotherjesse: I was eating a sandwich during the keystone meeting - did I miss fun?
19:04:27 <anotherjesse> always
19:04:38 <mtaylor> blast. that'll teach me to eat
19:05:22 <carlp> mtaylor: do you have time this week where we can meet and setup netstack jenkins slave?
19:05:25 <zul> let's talk about it again :)
19:05:38 <mtaylor> so - in general, since last week I've gotten nova trunk gating moved to building from pip-based venv instead of slaves with depends installed from packages
19:06:20 <mtaylor> it was fun - while working on caching the venv build, I discovered that venvs are not terribly relocatable, even after running virtualenv --relocatable
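A minimal sketch of the venv build being described here, assuming nova's tools/pip-requires layout; the --relocatable caveat is why a cached venv generally has to be rebuilt in place rather than copied to a new path:

```bash
# Build a virtualenv from the project's pip requirements
# (paths are illustrative; nova keeps its list in tools/pip-requires).
virtualenv .venv
.venv/bin/pip install -r tools/pip-requires

# --relocatable rewrites shebang lines of installed scripts, but it does
# not fix up the activate script or editable installs, so moving the venv
# to a different path afterwards is still unreliable in practice.
virtualenv --relocatable .venv
```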
19:06:40 <mtaylor> carlp: YES
19:06:50 <soren> mtaylor: Huh?
19:06:55 <soren> mtaylor: Oh, nm.
19:06:56 <carlp> mtaylor: Let's schedule a time offline for that then
19:06:58 <heckj> mtaylor: definitely missed the fun of keystone mtg
19:06:59 <mtaylor> carlp: how does sometime during the day tomorrow work?
19:07:03 <mtaylor> carlp: yes to offline
19:07:04 <soren> mtaylor: What about all the stuff that isn't in pip?
19:07:16 <mtaylor> soren: those are still installed via apt
19:07:17 <soren> mtaylor: Where does that come from? Which versions of everything to expect?
19:07:25 <mtaylor> soren: tools/pip-requires
19:07:50 <mtaylor> soren: virtualenvs are rebuilt any time tools/pip-requires changes, and additionally are rebuilt every night for good measure
19:07:50 <soren> mtaylor: Eh?
19:07:54 <heckj> soren: the tools/pip-requires are pretty religiously updated too, since almost all the devs are using them
19:08:12 <heckj> oh wait, I think I misread your question
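One way the "rebuild when tools/pip-requires changes" check could work - a sketch, not necessarily what the Jenkins job actually does - is to key the cached venv on a hash of the requirements file:

```bash
# Rebuild the cached venv only when tools/pip-requires has changed
# (cache location is hypothetical; a nightly job would just delete it).
cache=$HOME/venv-cache/nova
new_hash=$(md5sum tools/pip-requires | cut -d' ' -f1)
old_hash=$(cat "$cache/pip-requires.md5" 2>/dev/null || true)

if [ "$new_hash" != "$old_hash" ]; then
    rm -rf "$cache"
    virtualenv "$cache"
    "$cache/bin/pip" install -r tools/pip-requires
    echo "$new_hash" > "$cache/pip-requires.md5"
fi
```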
19:08:19 <mtaylor> I'm not thrilled about the current state of tracking the things that get installed not via pip
19:08:25 <soren> mtaylor: Right, but the stuff that isn't in pip. Where does that come from? Which versions of those things can one expect?
19:08:39 <heckj> mtaylor: I think it's mostly in README and the like, is that correct?
19:08:51 <soren> mtaylor: I'm with you. Mixing packaging systems is craptastic.
19:08:54 <mtaylor> currently, anotherjesse has a list in devstack, I have a list in the puppet modules for our slaves, and jeblair has one in the preseed launch stuff for bare metal
19:09:26 <anotherjesse> mtaylor: we are currently focusing on diablo
19:09:28 <mtaylor> I would really love to figure out a sane way to maintain that list that makes it easy for devs to deal with and not wonky for the 3 automated systems that currently need it
19:09:33 <anotherjesse> once we move to essex then we will want to figure out a way to share lists
19:09:43 <anotherjesse> (perhaps coming from the projects themselves?)
19:09:57 <mtaylor> anotherjesse: cool. yeah - it's a terrible problem right now - but it's going to get old over time
19:10:04 <mtaylor> sorry
19:10:08 <soren> mtaylor: Which kernel version, which version of libvirt, etc. etc.? Whatever's in Maverick? Natty? Oneiric?
19:10:10 <mtaylor> it's NOT a terrible problem right now
19:10:11 <heckj> make some files in tools/ for each project - packagedeps-apt packagedeps-rpm
19:10:25 <mtaylor> soren: so, the idea is that we'll still have a ppa with depends
19:10:26 <heckj> would that work?
19:10:38 <soren> mtaylor: Also, how do we handle it when we need changes to upstream code?
19:11:12 <mtaylor> soren: at the moment it's nova-core's ppa driving the slaves - but I would like a non-project specific ppa - I made one in ~openstack-ci but I'm not 100% convinced that's the right place for it
19:11:55 <soren> mtaylor: We've on more than one occasion had sporadic test suite failures due to Eventlet bugs.
19:12:01 <mtaylor> heckj: the main issue there is that from an integration perspective, we kind of need people to agree on dep versions ... so I'd like to have a master list of "this is what needs to be installed for openstack"
19:12:23 <mtaylor> soren: yes - that would be the sorts of things we'd want in the ppa of depends - and also of course forwarded upstream and to the distros
19:12:31 <heckj> mtaylor: ah - so a need for a combined list, rather than one for each project
19:12:40 <mtaylor> heckj: that's what _I'd_ like
19:12:53 <mtaylor> but I'm open to being shot down there
19:12:53 <soren> mtaylor: So how would that work? Would we temporarily remove it from pip-requires and move it somewhere else?
19:13:04 <heckj> given the growing cross-dependencies in code, that makes sense to me
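heckj's suggestion might look something like the sketch below - a single shared list of non-pip dependencies that devstack, the puppet modules, and the preseed scripts all consume; the file name and package names are hypothetical:

```bash
# Hypothetical shared list of non-pip dependencies, one package per line.
cat > apt-requires <<'EOF'
libvirt-bin
python-libvirt
kvm
EOF

# Each consumer installs from the same list instead of carrying its own copy.
xargs -a apt-requires sudo apt-get install -y
```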
19:13:13 <mtaylor> soren: oh - you mean pip-requires-based things
19:13:14 <mtaylor> hrm
19:13:17 <soren> mtaylor: Yes.
19:13:19 <mtaylor> sorry - brain dead for a sec
19:13:45 <mtaylor> good question ... it's still on the cards to make our own pypi server that trunk versions of our software gets uploaded to
19:14:15 <mtaylor> I would think we could temporarily upload other things there - but I'd like to keep that to emergencies - otherwise I think things start getting weird
19:14:36 <mtaylor> or?
19:14:49 <soren> We didn't choose not to use pip to begin with at random. It was a rather conscious decision.
19:14:54 <soren> Just sayin'.
19:14:58 <mtaylor> I know
19:15:57 <soren> In other words: I don't know. I don't have a good answer for how to do it with pip.
19:16:38 <mtaylor> that's fair. I'm going to propose then that when we run against that, we can look at uploading something to our openstack pypi and see how that works for us
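If a patched dependency ever did need to come from an OpenStack-run PyPI, the consuming side is just an index flag on pip - the URL below is made up:

```bash
# Point pip at a project-run index in addition to the public one
# (hypothetical URL; only used for emergency, patched releases).
pip install --extra-index-url http://pypi.openstack.example.org/simple \
    -r tools/pip-requires
```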
19:17:52 <mtaylor> anybody have any sense if the projects would be opposed to changing the location of the virtual env that run_tests.sh creates from .${project}-venv to .virtualenv ?
19:18:30 <anotherjesse> mtaylor: wouldn't that make it harder if different projects need different versions?
19:18:34 <anotherjesse> I'm not a venv expert
19:18:38 <heckj> mtaylor: I don't think it makes that big of a difference - they're never in the same directory
19:18:39 <mtaylor> because right now I've got a template job in jenkins that handles venv stuff - and I'd love if it could not have to attempt to figure out the name of the venv ...
19:18:46 <mtaylor> anotherjesse: what heckj said
19:18:56 <mtaylor> anotherjesse: but we're also trying to incite projects to need the same versions of deps :)
19:19:08 <anotherjesse> mtaylor: crazy talk
19:19:15 <mtaylor> I know. I'm completely mental
19:19:44 <mtaylor> heckj: cool. I'll try to propose some patches there - I think it'll make the jenkins code a little more reusable
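The proposed change is small - a sketch of what it might look like in a project's run_tests.sh (names are illustrative, based on nova's layout):

```bash
# Before: each project picks its own venv name, e.g.
#   venv=.nova-venv
# After: one common name so tooling doesn't have to guess it per project.
venv=.virtualenv
with_venv=tools/with_venv.sh
```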
19:20:05 <mtaylor> so moving forward:
19:20:27 <mtaylor> #action mtaylor get run_tests.sh patched to create .virtualenv instead of .${project}-venv
19:20:41 <mtaylor> #action get other projects migrated to the pip builders
19:20:52 <mtaylor> #action mtaylor get that darned pypi server set up
19:21:09 <mtaylor> that's the stuff I'm going to try to get going on this front
19:22:41 <mtaylor> anybody got anything else on pip builders?
19:23:02 <heckj> nope
19:23:04 <mtaylor> #topic bare metal
19:23:21 <mtaylor> in other news, jeblair has made great progress in getting the bare metal stuff ready to start running tests
19:23:23 <jeblair> hi
19:23:28 <mtaylor> jeblair: wanna catch folks up?
19:23:33 <heckj> sweet!
19:24:04 <jeblair> https://jenkins.openstack.org/job/dev-openstack-deploy-rax/
19:24:19 <jeblair> here's a summary:
19:24:51 <jeblair> i have a procedure for setting up a machine to drive tests, based on ubuntu orchestra
19:25:02 <jeblair> documentation for that will be up real soon now
19:25:37 <anotherjesse> orchestra = juju?
19:25:40 <jeblair> i have a jenkins slave running on that machine, and it's running the above job
19:25:44 <jeblair> ensemble == juju
19:25:48 <jeblair> orchestra == cobbler
19:25:51 <anotherjesse> ah
19:26:13 <jeblair> basically the idea is to distill down to instructions that look like "apt-get install ubuntu-orchestra-server" and install these config files
19:26:32 <jeblair> attempting to get the barrier to entry for running bare metal tests very low
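The "low barrier to entry" goal boils down to something like the sketch below; the package name comes straight from the discussion, the config step is a placeholder:

```bash
# Install the Orchestra (cobbler-based) provisioning server on Ubuntu...
sudo apt-get install -y ubuntu-orchestra-server

# ...then drop in the site-specific config describing the bare-metal test
# machines to be provisioned (path and layout are hypothetical).
sudo cp -r orchestra-config/* /etc/orchestra/
```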
19:26:33 * anotherjesse would be interested in adding it to devstack tools directory once done
19:26:47 <heckj> ++
19:27:47 <jeblair> part of the orchestra configuration is to set the test machines up with an lvm snapshot so we can use kexec to quickly reset them to a known state.  that seems really solid now
19:28:08 <jeblair> so the above job runs kexec to reset the machines, then runs devstack on them to set up openstack
19:28:09 <anotherjesse> jeblair: interesting - any good docs on reading how that works?
19:28:37 <jeblair> anotherjesse: mostly written, should be up soon
19:29:00 <anotherjesse> jeblair: I was referring to kexec in general for this use case - but can't wait to see your docs as well
19:29:12 <anotherjesse> (eg when researching this did you find any good resources)
19:29:14 <jeblair> then the job runs exercise.sh, which is where we are now.  it looks like we'll need to change the configuration a bit
19:30:31 <jeblair> anotherjesse: i think this is a moderately novel use of kexec.  mostly it's used by kernel developers to test new versions of the kernel.  i didn't run into many folks using it for qa, but i could have just missed it.
19:30:47 <anotherjesse> jeblair: trail blazing!
19:31:00 <heckj> neat, looking forward to seeing how you did it!
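A rough sketch of how an LVM-snapshot-plus-kexec reset can work - this is a guess at the mechanism, not jeblair's actual procedure, and the volume names are made up:

```bash
# Once, at setup time: snapshot the root LV in its known-good state.
lvcreate --size 10G --snapshot --name root-clean /dev/vg0/root

# To reset: schedule a merge of the clean snapshot back over the dirtied
# origin (the merge completes when the volume is next activated)...
lvconvert --merge /dev/vg0/root-clean

# ...then kexec straight into the installed kernel, skipping BIOS/POST,
# so the machine comes back in a pristine state in seconds.
kexec -l /boot/vmlinuz --initrd=/boot/initrd.img --reuse-cmdline
kexec -e
```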
19:31:14 <jeblair> :)  another improvement over our last iteration of bare metal testing is:
19:31:41 <jeblair> if you visit the link above, you'll see that the syslogs for each host are separately archived along with the build
19:32:06 <jeblair> so it should be easier to diagnose problems by looking at the syslog for just the head node, or just a compute node, etc.
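Collecting those per-host logs into the job workspace so Jenkins archives them can be as simple as the sketch below (host names are made up):

```bash
# Pull each node's syslog into the workspace; Jenkins' artifact archiving
# then keeps them alongside the build.
mkdir -p logs
for host in baremetal-head baremetal-compute1 baremetal-compute2; do
    scp "jenkins@$host:/var/log/syslog" "logs/$host-syslog"
done
```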
19:32:38 <heckj> nice
19:32:59 <jeblair> that's about the state.  mostly we need to tune the devstack config so exercise.sh works and make sure all the openstack components are syslogging so we get all the info
19:33:06 <mtaylor> jeblair: you're doing this with oneiric images, yeah?
19:33:09 <jeblair> then we should be able to start running post-commit tests on that
19:33:15 <jeblair> natty
19:33:19 <anotherjesse> jeblair: making devstack enable syslog is probably a good thing in general
19:33:34 <jeblair> yep
19:33:56 <jeblair> if post commit is solid, we can start gating
19:33:57 <mtaylor> anotherjesse: I was thinking perhaps a flag to devstack which toggles between starting things in screen or starting things normally, spitting to syslog? or is that too much?
19:34:17 <anotherjesse> not too much - let's create an issue (once we move to essex we will move to lp+gerrit for devstack)
19:34:27 <jeblair> mtaylor: i don't care if things start in screen as long as they also syslog
19:34:33 <mtaylor> anotherjesse: ++
19:34:40 <heckj> ++
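The flag mtaylor floats could look something like this inside a devstack-style launcher - purely a sketch with made-up names, showing a service either living in screen or going to syslog via logger:

```bash
# Hypothetical toggle: run services under screen (interactive) or pipe
# their output to syslog so the CI job can archive it per host.
USE_SCREEN=${USE_SCREEN:-True}

start_service() {
    local name=$1; shift
    if [ "$USE_SCREEN" = "True" ]; then
        # Open a new window in an existing screen session named "stack".
        screen -S stack -X screen -t "$name" "$@"
    else
        # Run normally and tag the output in syslog.
        "$@" 2>&1 | logger -t "$name" &
    fi
}

start_service n-api nova-api --flagfile=/etc/nova/nova.conf
```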
19:34:52 <mtaylor> anotherjesse: actually, I think we're going to have to gerrit devstack before we can use devstack based things for trunk gating
19:35:13 <mtaylor> because we'll want to be able to make sure that devstack changes don't break known good trunk, and then that trunk works with known good devstack, yeah?
19:35:16 <anotherjesse> we were hoping to move to essex at end of day today - but given the uncertainty about keystone I'm not sure
19:35:25 <anotherjesse> mtaylor: if needed we can move earlier
19:35:28 <mtaylor> oh well - we should be fine then :)
19:35:47 <jeblair> yeah, i thought you meant something other than that by 'move to essex'. ;)
19:35:56 <mtaylor> same here
19:36:14 <heckj> anotherjesse: you're still in the last meeting in your head, aren't you… I am.
19:36:22 <anotherjesse> mtaylor: stackrc points to diablo and we haven't worked on essex (eg master) since we need to get diablo working for users first
19:36:30 <anotherjesse> it *might* work on master - not sure
19:36:38 <mtaylor> makes total sense ...
19:36:43 <jeblair> i'm happy to move devstack into lp/gerrit as soon as you're ready.  is openstack/ or openstack-ci/ the right place for it?
19:36:58 <anotherjesse> I think openstack/ but it doesn't feel like a full project
19:37:03 <anotherjesse> it is just a shell script
19:37:05 <anotherjesse> we can email the list
19:37:17 <heckj> I'm good with either.
19:37:17 <mtaylor> jeblair: that might be the reason why exercise.sh isn't working for you? or are you running it on diablo too
19:37:37 <jeblair> no, it's running on cloudbuilders diablo
19:37:41 <heckj> anotherjesse: I may not get the swift stuff complete before you want to move it - in that case I'll just re-do into gerrit from your branch
19:37:43 <jeblair> break one thing at a time. :)
19:37:48 <mtaylor> jeblair: ok. good to know :)
19:38:46 <jeblair> i'd like to see openstack/ be for full openstack projects in the long term.  i know the ci team has some stuff in there. we're going to move it out i think, but it's a little tricky due to the number of machines with operational scripts expecting those repos right now
19:39:00 <jeblair> should be easier once monty finishes the pip builders, actually.
19:39:33 <jeblair> maybe devstack belongs there, if not, openstack-ci makes some sense to me.  it's possible this is a bit of a bikeshed now. :)
19:39:51 <jeblair> any other questions about bare metal
19:39:52 <jeblair> ?
19:40:18 <anotherjesse> jeblair: just a note that we (rax cloudbuilders) are working on build_domu.sh that works like build_kvm or build_lxc
19:40:21 <anotherjesse> for xenserver
19:40:31 <anotherjesse> you might want to add a xenserver version eventually for gating
19:40:55 <anotherjesse> jeblair:  sleepsonthefloor is working on that and getting help from citrix / rax public cloud
19:41:06 <mtaylor> only one from me, which I hinted at earlier - and it also affects non-bare-metal stuff - currently we're doing all testing on natty still, partially because there are no oneiric images in cloud servers yet
19:41:06 <jeblair> anotherjesse: cool
19:41:17 <mtaylor> I'd like to move that to oneiric when we have images for it - anybody have a problem with that?
19:41:27 <anotherjesse> mtaylor: could you use freecloud with oneiric uec images?
19:41:47 <anotherjesse> I think using oneiric is probably good as it (should) decrease the size of the ppa needed?
19:42:10 <mtaylor> anotherjesse: possibly. we've also gotten access to an account on the hp cloud which we're working on integrating in to slave pools
19:42:37 <jeblair> whatever provider we use does need to be very stable
19:43:17 <mtaylor> yes. which so far is mainly rax cloud servers ... but as soon as we have jclouds plugin finished, I'd love to have on-demand slaves across a few different clouds once we can
19:43:40 <mtaylor> especially if we can use the same uec image on each provider
19:43:54 <anotherjesse> mtaylor: is anyone working on the jclouds plugin? I'm not a java person but would love to see that working
19:43:54 <mtaylor> but that's starting to be a whole other topic - and a bit longer-term than this week :)
19:44:24 <mtaylor> anotherjesse: still sourcing around for someone to do that work - I have a few leads - but yeah, it'll be great to get that plugin done
19:44:38 <zykes-> what meeting is this again?
19:44:47 <carlp> zykes-: CI
19:45:00 <mtaylor> I think that's about it for bare metal...
19:45:05 <zykes-> ok
19:45:18 <mtaylor> #topic open discussion
19:45:22 <mtaylor> anybody got anything else?
19:45:31 <mtaylor> want to throw rotten fruit?
19:45:43 <jeblair> we should maybe cancel next week's meeting?
19:45:46 <anotherjesse> mtaylor: I'd love to hear the state of the actual tests - is that what soren has been doing?
19:45:46 <carlp> I think I got that out of my system with the poking earlier
19:46:16 <mtaylor> oh yeah - jeblair, soren, vishy, ttx and I are all going to be at UDS next week
19:46:22 <mtaylor> anotherjesse: are you coming? or are you staying home?
19:46:33 <anotherjesse> mtaylor: I'll be there - as soon as I get my tix
19:46:39 <mtaylor> anotherjesse: sweet
19:46:42 <heckj> me too
19:46:52 <jeblair> anotherjesse: gabe and daryl are working on that as well as soren
19:46:55 <carlp> I'll be at home :(
19:47:00 <mtaylor> so I think it's fair to say we will not have an IRC meeting for CI next week
19:47:00 <zykes-> anyone at the citrix summit now or ?
19:47:23 <ttx> Our plan is to drown mtaylor in the pool before he starts relying on devstack for CI
19:47:32 <zul> count me in
19:47:34 <mtaylor> ttx: too late
19:47:35 <ttx> soren: will need your help
19:47:46 <ttx> mtaylor: you mean you started to learn swimming ?
19:48:19 <mtaylor> ttx: started? dude, I've been swimming since I was a wee small boy - I grew up in the south, there is only one thing to do in the summer down there, and it's get in a pool of water
19:48:29 <mtaylor> anotherjesse: it is set up in gerrit - and I've seen patches come through
19:48:53 <mtaylor> anotherjesse: I think it'll be really helpful once we're using at least part of it to gate something, so that there can be good feedback in terms of inputs/outputs needed
19:49:00 <ttx> Still think that with soren and zul we can succeed.
19:50:01 <jeblair> oh, regarding that, if we're ready to gate before os-integration-tests are ready, i'm planning on just using exercise.sh as a placeholder and to have something to gate on
19:50:19 <mtaylor> that seems like a great step one to me
19:50:28 <anotherjesse> jeblair: exercise.sh is a place holder for us too ;)
19:50:50 <anotherjesse> jeblair: we plan on helping with the ci tests once things have stabilized
19:51:36 <jeblair> groovy :)
19:54:21 <soren> Sorry, was away for a little bit.
19:54:28 <soren> ttx: Need my help for what?
19:55:14 <jeblair> this would be a great time to end the meeting!
19:57:08 <mtaylor> #endmeeting