19:01:07 <mtaylor> #startmeeting
19:01:08 <openstack> Meeting started Tue Jan 31 19:01:07 2012 UTC.  The chair is mtaylor. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
19:01:31 <mtaylor> #topic meetbot
19:01:57 <mtaylor> LinuxJedi_cell: we have access to the meetbot/meetinglogs server now, so we can work our puppet magic on it
19:02:14 <LinuxJedi_cell> mtaylor: fantastic
19:02:17 <mtaylor> LinuxJedi_cell: which _also_ means that I can get you to add a couple of features ...
19:02:35 <LinuxJedi_cell> Sure thing
19:02:53 <mtaylor> a) I'd like to see #startmeeting take an optional parameter which is a launchpad team containing the people who should have voting rights
19:03:08 <LinuxJedi_cell> ++
19:03:22 <mtaylor> b) I'd like to port in the voting feature from the ubuntu meetbot, so that someone can say "#startvote Do we like chicken"
19:03:35 <mtaylor> and then the bot will tally and record the results of that vote
19:04:10 <mtaylor> LinuxJedi_cell: more python for you!
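(A rough, framework-agnostic sketch of the tally logic such a #startvote feature implies, just to make the idea concrete; the class and method names here are illustrative and are not the actual MeetBot plugin API.)

    class VoteTracker(object):
        """Sketch of #startvote / #vote / #endvote bookkeeping."""

        def __init__(self, eligible_nicks=None):
            # eligible_nicks would come from the optional launchpad team
            # passed to #startmeeting; None means anyone may vote.
            self.eligible = eligible_nicks
            self.question = None
            self.votes = {}

        def start_vote(self, question):
            self.question = question
            self.votes = {}

        def cast_vote(self, nick, value):
            if self.question is None:
                return "no vote in progress"
            if self.eligible is not None and nick not in self.eligible:
                return "%s is not eligible to vote" % nick
            self.votes[nick] = value  # a nick's latest vote wins
            return "vote recorded"

        def end_vote(self):
            yes = sum(1 for v in self.votes.values() if v == '+1')
            no = sum(1 for v in self.votes.values() if v == '-1')
            summary = "Voted on: %s -- %d for, %d against" % (
                self.question, yes, no)
            self.question = None
            return summary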
19:04:28 <mtaylor> speaking of LinuxJedi ...
19:04:36 <LinuxJedi_cell> Looking forward to it:)
19:04:57 <mtaylor> branch expiration has gone live, and we've also got the pastebin sucked into config management now
19:05:42 <mtaylor> LinuxJedi_cell: when you get bored, docs on the paste setup, yeah? (I think we should eventually have at least a top-level doc for each system/service we run/provide)
19:06:10 <mtaylor> #topic multi-python support
19:06:12 <LinuxJedi_cell> I'll do it before I get bored - I'll do it along with the backups tomorrow
19:06:16 <mtaylor> awesome
19:06:36 <mtaylor> we've got multi-python testing rolled live for python-quantumclient (hi guinea pig)
19:06:45 <mtaylor> and the build slaves for it created and added
19:06:59 <mtaylor> so we should be able to start adding other projects to multi-python testing via tox this week
19:07:29 * LinuxJedi_cell jumping in car
19:07:35 <mtaylor> LinuxJedi_cell: have fun
19:07:48 <mtaylor> #topic Open Discussion
19:07:55 <mtaylor> anybody else got anything?
19:11:59 <jeblair> i've pushed the ksl branch
19:12:10 <jeblair> along with a merge commit for it
19:12:12 <heckj> yeah!!! (thank you!)
19:12:13 <mtaylor> hey! it's jeblair. w00t
19:12:23 <jeblair> https://review.openstack.org/#change,3572
19:12:26 * heckj is heads down in that KSL branch
19:12:35 <jeblair> there's the commit.
19:12:39 <jeblair> and the branch is called redux
19:12:44 * mtaylor has a few commits to submit to make it work in jenkins
19:12:56 <jeblair> so you can 'git checkout redux', and 'git review redux'
19:13:15 <heckj> we'll be doing work on the redux branch before merging it in -
19:14:08 <jeblair> i figured so.  the merge commit is not straightforward to make, so i think the way to go is to just keep working on the branch and let me know when it's ready to go in and i'll update the merge commit change
19:14:33 <soren> Oh, I've got a question:
19:14:34 <jeblair> i mostly wanted to put in a strawman change so that people can see what the commit will do when we're ready
19:14:38 * soren just stumbled in
19:15:08 <mtaylor> hey soren
19:15:10 <heckj> ola!
19:15:23 <heckj> jeblair: sounds good
19:15:32 <soren> What's the status on implementing the "try every revision of openstack on a bunch of different people's infrastructure" thing?
19:15:33 <jeblair> (and i'm volunteering to make the commit since we normally don't allow merge commits.  in this case, one wrong step would give us 296 open changes for review)
19:15:43 <LinuxJedi> phew, back
19:16:50 <mtaylor> soren: so far, we keep getting to the point of getting a jenkins slave hooked in for someone, and then it just sits there
19:17:12 <mtaylor> we had an interesting chat with Daviey and jamespage yesterday about the Canonical OpenStack Jenkins
19:17:14 <heckj> jeblair: I will totally take you up on that when we're ready - because I would clearly make that misstep… two or three times :-) And the channel would be flooded with heck hate.
19:17:39 <mtaylor> where I think an interesting first step (or possibly even long term approach) is for them to also run the gerrit trigger plugin on their jenkins
19:17:53 <mtaylor> to trigger builds and vote on changes similar to how smokestack is working
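(A minimal sketch of the kind of non-gating feedback being described: an external test rig reporting a result back to gerrit over SSH, roughly the way smokestack does. The account name, host, port, and the exact `gerrit review` flags are assumptions and depend on the gerrit version and account setup.)

    import subprocess

    def report_result(change_number, patchset, success, log_url):
        # Leave a comment and a non-gating verified vote on the change.
        vote = '+1' if success else '-1'
        message = 'Third-party test %s: %s' % (
            'succeeded' if success else 'failed', log_url)
        subprocess.check_call([
            'ssh', '-p', '29418', 'testbot@review.openstack.org',
            'gerrit', 'review',
            '--verified', vote,
            '--message', "'%s'" % message,   # quoted for the remote shell
            '%s,%s' % (change_number, patchset),
        ])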
19:17:53 <soren> mtaylor: I heard a rumour they started doing that today.
19:17:58 <soren> Oh.
19:17:59 <soren> Not that.
19:18:04 <mtaylor> and then if that goes well long term
19:18:17 <soren> Or.. Meh, it's just something I heard in passing. Ignore it.
19:18:20 <soren> Ok.
19:18:22 <mtaylor> perhaps we can add a review category for them so that their jenkins has to pass
19:18:33 <soren> Hm.. It would really be ideal if we could somehow connect two Jenkins instances.
19:18:37 <mtaylor> jeblair brought up a good point about consolidated information presentation though
19:18:41 <mtaylor> yes, it would
19:18:58 <mtaylor> so far the plugin for jenkins that allows that isn't great
19:19:12 <soren> So, at Cisco, I'll probably be setting up an automated test rig of some stuff, but I'd like to use the same resources for these "upstream" sorts of tests.
19:19:12 <mtaylor> soren: you're at cisco now, right?
19:19:36 <soren> ...but it's hard to share the resources between our own Jenkins and someone else's who is using our servers as slaves.
19:19:41 <soren> mtaylor: indeed
19:19:53 <mtaylor> soren: agree
19:20:09 <mtaylor> soren: well - perhaps the canonical approach of installing the gerrit trigger plugin would be a good start
19:20:21 <soren> mtaylor: Yeah.
19:20:44 <soren> mtaylor: It's not like we use Jenkins to monitor the status of things much anyway.
19:20:48 <mtaylor> it does meet the criteria that we're interested in which is "vendors control and run the testing infrastructure they're donating"
19:20:50 <soren> We tend to focus on Gerrit.
19:20:55 <mtaylor> that is very true
19:20:59 <soren> It does seem like a natural place to aggregate the results.
19:21:15 <jeblair> the question is how they are aggregated in gerrit
19:21:23 <mtaylor> we'll need to figure out the 'right' way to do information display ... if we get 10 vendors doing their own gerrit voting, it's going to be very ugly to look at
19:21:27 <jeblair> my preference would be to see one report from jenkins with all the tests
19:21:47 <soren> How does it work now?
19:21:56 <soren> Does Smokestack have a special gerrit user?
19:22:11 <mtaylor> soren: it just has a gerrit user
19:22:14 <soren> Ok.
19:22:15 <jeblair> a hodge-podge of random comments from various test rigs all arriving unpredictably at different times isn't really information aggregation or display, it's information spam
19:22:31 <mtaylor> soren: and that gerrit user is unprivileged and just votes like anyone else
19:22:39 <soren> jeblair: If we could silence the successful ones, would that be helpful?
19:22:43 <jeblair> it's better than no information, but it's not nearly as nice as a simple list of jenkins tests
19:22:47 <jeblair> how would you know they ran?
19:22:54 <soren> jeblair: You wouldn't.
19:22:58 <soren> jeblair: Well..
19:23:03 <soren> I mean, sorry, yes of course you would.
19:23:05 <mtaylor> I wonder if we could combine the multiple jenkins talking to gerrit approach ...
19:23:07 <jeblair> people are asking all the time now whether smokestack has run on their tests, or how they can get it to do that
19:23:10 <soren> ...but you wouldn't get e-mailed about them.
19:23:15 <mtaylor> with the jenkins plugin that will report the status of jobs on another jenkins
19:23:23 <mtaylor> so that we could have the gerrit interaction be federated
19:23:29 <mtaylor> but still have a single information display page
19:23:47 * soren ponders
19:24:27 <jeblair> relying only on the gerrit trigger plugin, that extra info would not be aggregated, but if we could solve the problem of collecting output from a single gating job, that would work.
19:24:31 <soren> I mean, I can envision how it could work, but it would mean changing Gerrit's data model somewhat.
19:24:45 <jeblair> gerrit's data model?  how?
19:25:18 <soren> Well, if we let these test runners report their results to gerrit, but not have it count as votes in the same fashion as human votes, we could present it differently.
19:26:15 <mtaylor> soren: yeah - that's what I meant earlier by adding additional review categories ... that part is actually reasonably easy
19:26:29 <mtaylor> the tricky part is that the default display of that isn't really designed to scale :)
19:26:31 <soren> We could have another information box on the change page on Gerrit that shows a lot of green buttons and red dots that would be links back to the respective Jenkins instance's console output for the change.
19:26:38 <mtaylor> so we'll have a grid with a bazillion checkmarks
19:26:57 <soren> It could be collapsed by default.
19:27:15 <jeblair> whenever the topic goes to "reimplement jenkins in another system" i tend to think we should think about using jenkins.
19:27:24 <soren> Just an overall "25 successes, 0 failures" that could expand to the full list.
19:27:37 <soren> ...or it could even show the failures by default and let you expand to see the full list.
19:27:49 <soren> It all just gets so much easier if you have a simple programmatic way to tell them apart.
19:28:22 <soren> jeblair: I hear that.
19:28:51 <mtaylor> there may be a multi-step thing lurking in here
19:28:59 <soren> Another benefit of doing it all in Gerrit:
19:29:05 <mtaylor> perhaps doing the gerrit approach at first because it's pretty easy to get it going
19:29:17 <mtaylor> and then set someone on the task of making the jenkins aggregation plugin properly
19:29:25 <mtaylor> so that one jenkins can properly interact with another jenkins master
19:29:38 <jeblair> what's missing there now?
19:29:43 <mtaylor> because the use case of people contributing resources wanting to share those slaves with their own jenkins master is not going to go away
19:29:43 <soren> We've always talked about having multiple tiers of platforms: Supported, not-quite-supported, if-it-works-it's-a-frickin-miracle, it's-not-supposed-to-work
19:29:46 <soren> Or whatnot.
19:30:02 <soren> The "supported" ones might be privileged to give -2 votes.
19:30:13 <soren> The others, only -1.
19:30:30 <soren> I dunno.
19:30:39 <mtaylor> yeah, that's interesting
19:30:51 <soren> ...but that could be done through Jenkins, too, I suppose.
19:31:03 <mtaylor> jeblair: the main thing is proper federation ... as in, a slave jenkins that is itself a master
19:31:07 <soren> But you'd lose quite a bit of detail.
19:31:20 <soren> ...and every time Jenkins got another test result from somewhere, it might have to go back and re-vote.
19:31:31 <soren> ..which will have won us nothing in terms of information spam.
19:31:37 <jeblair> i'm wondering what's the hangup with the 'donated slave' model?
19:31:46 <jeblair> why can't resources be shared?
19:31:47 <mtaylor> the thing I said above...
19:31:51 <soren> I explained that further up.
19:31:56 <mtaylor> because you can't have a slave that's attached to two jenkins masters
19:32:12 <soren> 19:19 < soren> So, at Cisco, I'll probably be setting up an automated test rig of some stuff, but I'd like to use the same resources for these "upstream" sorts of tests.
19:32:26 <soren> mtaylor: Well, you can, but they won't know about each other. :)
19:32:26 <jeblair> you can run two slaves on a host.
19:32:37 <soren> Which makes it very hard to do reliable tests for something like Nova.
19:32:42 <jeblair> and you can externally mutex any shared resources they have
19:32:56 <soren> ...which rather expects exclusive access to certain resources.
19:33:03 <soren> jeblair: How so?
19:33:11 <mtaylor> but the cisco jenkins already knows how to control how many things are supposed to be running on a given slave
19:35:06 <jeblair> lockfile?  just thinking out loud. :)
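(The lockfile idea in a few lines of Python: two slaves on one host can externally mutex a shared resource, such as exclusive access to the hypervisor a Nova test needs. The lock path is arbitrary; this is a sketch, not a recommendation of a specific layout.)

    import fcntl

    def with_exclusive_resource(lock_path, run_job):
        # Blocks until no other slave on this host holds the lock.
        with open(lock_path, 'w') as lockfile:
            fcntl.flock(lockfile, fcntl.LOCK_EX)
            try:
                return run_job()
            finally:
                fcntl.flock(lockfile, fcntl.LOCK_UN)

    # usage: with_exclusive_resource('/var/lock/nova-ci.lock', run_tests)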
19:36:08 <mtaylor> sure - so, I think there's a possible design nirvana and then some intermediary steps we can take between now and then
19:36:41 <jeblair> well, there's one more thing i'd like to say:
19:37:00 <jeblair> consider contributing dedicated resources to the project
19:37:03 <mtaylor> in a perfect world I don't think we want a quasi-jenkins dashboard in gerrit or a bunch of lockfile scripts running around to share slaves ... but both of those might be things we can do more quickly than implementing full-on jenkins federation
19:37:24 <jeblair> be a serious supporter of the openstack project by saying "we need to dedicate hardware to upstream testing"
19:37:45 <jeblair> at least two companies have donated project-wide testing resources, i'd like to see more.
19:37:58 <mtaylor> sure - but soren isn't the first to bring this up as a concern when looking at donating resources
19:38:20 <soren> jeblair: I'm not going to say that that is an unreasonable request, but I believe a model where you can still use the resources for your own stuff if OpenStack doesn't need them is a sensible thing to offer.
19:38:48 <jeblair> ok.  i think we're all on the same page.  :)
19:38:58 <mtaylor> cool.
19:39:00 <soren> What's best? Having 5 machines for "upstream" and 5 machines for everything else, or having 10 machines that you share?
19:39:11 <jeblair> that's a good point
19:39:18 <soren> Multiply/divide as needed.
19:39:20 <soren> :)
19:39:24 <mtaylor> and I think that there isn't huge disparity on step 1 here ... which is that canonical are trying their own gerrit plugin for non-binding voting
19:39:42 <mtaylor> and I think we can sort of see how that goes for a little bit before we have to solve whether we add them as a category or not
19:39:46 <mtaylor> yeah?
19:39:48 <soren> Yeah. I don't think the information level will be too overwhelming for at least a couple of months.
19:39:55 <soren> And even if so, I think it's a good trade-off.
19:40:12 <mtaylor> yeah - sort of like getting smokestack to vote - it's not perfect, but it's more than we had before :)
19:40:21 <soren> Precisely.
19:40:55 <soren> Ok, great. This is enlightening.
19:40:57 <jeblair> we will shortly be up to 4 systems leaving feedback in gerrit, so i do think we need to go ahead and start solving the aggregation problem
19:41:20 <soren> That would be very welcome for sure.
19:41:47 <soren> I'd just hate to be without test results just due to potential information overload.
19:41:58 <jeblair> yep
19:42:07 <soren> "First world problem" doesn't exactly apply here, but close.
19:42:22 <mtaylor> maybe let's schedule time to discuss in person with a whiteboard at ODS?
19:42:25 <jeblair> organized testing > testing > no testing
19:42:44 <soren> Right :)
19:43:11 <soren> mtaylor: Let's do that. I hope we have some practical experience to base the chat on by then, though :)
19:44:22 <mtaylor> soren, heckj: speaking of ...
19:44:49 <mtaylor> fwiw, jeblair, LinuxJedi and I will be meeting in Boston the week of Feb 14
19:45:04 <mtaylor> you, or anyone else lurking, is more than welcome to come
19:46:01 <heckj> sounds nice - but I'll be on the VERY other side of the USA (hawaii) on vacation :-)
19:46:11 <mtaylor> heckj: BAH to vacation
19:46:13 <soren> I'll be in San Jose.
19:46:21 <mtaylor> Daviey, jamespage ^^^ you too
19:46:25 <soren> I'll think of you when I fly over Boston.
19:46:37 <mtaylor> yeah, it's cool - just wanted to make sure we'd invited you
19:47:09 <jeblair> mtaylor: i told you you should hold it someplace awesome
19:47:27 <jeblair> nobody wants to go to _boston_ in _february_  :)
19:47:30 <soren> Boston's not awesome?
19:47:35 <soren> Come on, Boston has..
19:47:36 <soren> er...
19:47:39 <soren> No, you're right.
19:48:05 <mtaylor> blame HP
19:48:12 <jeblair> ok
19:48:12 <soren> I always do.
19:48:32 <soren> It's company policy.
19:48:39 <Daviey> hola
19:48:40 <soren> jk :)
19:48:48 * Daviey reads context
19:49:23 <Daviey> mtaylor: Yeah, i don't think i can justify flying out to Boston just for a beer...
19:49:28 <Daviey> would love to though :)
19:49:48 <mtaylor> cool. also, we were talking a bit about your jenkins
19:49:50 <mtaylor> mostly nice things
19:50:14 <Daviey> mtaylor: right..
19:50:29 <Daviey> mtaylor: We are maintaining it in silent mode for the time being.
19:50:39 <mtaylor> Daviey: seems like a good starting place
19:50:43 <Daviey> Testing trunk post commit, and stable/diablo pre-commit
19:50:50 <Daviey> (as a comment only, not gate)
19:51:40 <soren> Does the Jenkins gerrit plugin report back?
19:51:47 <soren> Or is that a custom built thing?
19:52:00 <jeblair> it's built in
19:52:04 <soren> Wicked.
19:52:07 <jeblair> you can customize what it does on success/failure
19:52:17 <jeblair> so what messages it leaves, whether/how it votes, etc
19:52:29 <jeblair> (per job or jenkins-wide)
19:52:47 <Daviey> yep, seems to be working well!
19:53:00 <mtaylor> Daviey: you guys using our fork of it for now?
19:53:25 <Daviey> mtaylor: yep
19:53:36 <mtaylor> great. we'll keep you in the loop if we update anything
19:54:15 <soren> Oh, there's a fork?
19:54:18 <soren> That's good to know.
19:54:23 <Daviey> thanks!
19:54:31 * soren hasn't a clue how to get custom plugins into Jenkins, though.
19:54:37 <jeblair> i have clearance to upstream our changes, will be starting that soon
19:54:39 <mtaylor> soren: it's easy
19:54:42 <mtaylor> https://github.com/jeblair/gerrit-trigger-plugin/tree/trigger-on-comment-added
19:54:58 <soren> Trigger on comment added? Seriously?
19:54:59 <Daviey> jeblair: Is that your employer's issue?
19:55:00 <mtaylor> soren: you build it, it makes a file, you drop it into the jenkins plugin dir
19:55:05 <soren> Why would you want to retest on every comment?
19:55:16 <mtaylor> soren: approval is a type of comment
19:55:25 <soren> Ah.
19:55:27 <mtaylor> soren: so you have to be able to respond to comment types
19:55:33 <jeblair> soren: it has a filter, we test on "APRV +1"
19:55:43 <soren> So the first approval sends it to testing?
19:55:59 <jeblair> Daviey: yes, it apparently took them a bit to okay that.
19:55:59 <mtaylor> the +1 vote in the approval column
19:56:13 <mtaylor> soren: a normal +1 vote from a person in code review is "CRVW +1"
19:56:19 <mtaylor> and a +2 vote is "CRVW +2"
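(A sketch of the filtering the trigger plugin does on comment-added events: only react when the comment carries an approval in the category you care about, e.g. "APRV +1". The event layout follows gerrit's stream-events JSON, but the field names here should be treated as assumptions.)

    def should_trigger(event, category='APRV', value='1'):
        # True only for comment-added events carrying the wanted approval.
        if event.get('type') != 'comment-added':
            return False
        for approval in event.get('approvals', []):
            if approval.get('type') == category and approval.get('value') == value:
                return True
        return False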
19:56:32 <Daviey> well.. it was my plan to do testing when anyone proposes anything to diablo/stable.
19:56:32 <Daviey> The idea being that it smoke-tests it before a human looks at it
19:56:46 <Daviey> then the human can respond based on code review AND jenkins return
19:56:48 <soren> mtaylor: Oh. So the tests don't run until someone has approved the patch?
19:57:01 <mtaylor> soren: for the openstack jenkins, yes
19:57:03 <Daviey> soren: that is an option.
19:57:13 <mtaylor> soren: although we're going to add pre-approval pep8 testing real soon now
19:57:21 <soren> What Daviey says sounds smarter.
19:57:27 <mtaylor> security risk
19:57:29 <Daviey> yep
19:57:31 <soren> I know.
19:57:33 <Daviey> discussing that atm.
19:57:36 <mtaylor> also
19:57:45 <mtaylor> it's not testing entirely the right thing
19:57:54 <soren> I'm the one who's been saying that from day 1 when everyone else was saying "Just run the tests!" :)
19:58:00 <mtaylor> because it's testing the patch as submitted, rather than the patch as it will look once it's merged
19:58:24 <mtaylor> however- if we get past the security issue, I think that what we might do is smoke test pre-approval
19:58:24 <Daviey> soren: I need to protect against someone including "rm -rf /" in setup.py :)
19:58:30 <mtaylor> and then re-test right before merge
19:58:31 <jeblair> ideally, we'll do both, as we address the security problem
19:58:37 <mtaylor> what jeblair said
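(A sketch of "test the patch as it will be applied": merge the proposed change onto current master before running the tests, rather than testing the submitted revision in isolation. The refs/changes/... layout is gerrit's usual one; the exact ref and the test entry point are assumptions.)

    import subprocess

    def test_as_merged(change_ref):
        # Bring master up to date, merge the proposed change, then test.
        subprocess.check_call(['git', 'checkout', 'master'])
        subprocess.check_call(['git', 'pull', '--ff-only'])
        subprocess.check_call(['git', 'fetch', 'origin', change_ref])
        subprocess.check_call(['git', 'merge', 'FETCH_HEAD'])
        subprocess.check_call(['tox'])  # or whatever the project's test entry point is

    # usage: test_as_merged('refs/changes/NN/<change>/<patchset>')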
19:59:08 <soren> Daviey: I understand entirely
19:59:14 <Daviey> i'm pondering a few things, one might be - creating the tarball in kvm.
19:59:23 <soren> Daviey: ...but that's the case even if it's been approved.
19:59:32 <mtaylor> destroying testing slaves is less of a problem if we are blowing them away and re-creating them every time
19:59:39 <mtaylor> which we're moving towards
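(The throwaway-slave idea, sketched with python-novaclient: boot a fresh instance per run and delete it afterwards, so a hostile or broken job can only hurt a machine that is about to be destroyed anyway. The import path, constructor arguments, image, and flavor shown here are all assumptions and vary between client versions and clouds.)

    from novaclient.v1_1 import client

    def run_on_throwaway_slave(run_tests):
        # Credentials and endpoint are placeholders.
        nova = client.Client('ci-user', 'secret', 'ci-tenant',
                             'https://cloud.example.com/v2.0/')
        server = nova.servers.create('throwaway-slave',
                                     image='<image uuid>',
                                     flavor='2')
        try:
            run_tests(server)   # ssh in, run the job, collect the logs
        finally:
            server.delete()     # blow the slave away regardless of outcome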
19:59:39 <Daviey> I don't trust chroot or lxc yet.
19:59:46 <Daviey> soren: right
19:59:51 <mtaylor> so then it's just a problem of catching spam bots
19:59:59 <Daviey> mtaylor: right.. i don't care about the nodes
20:00:28 <Daviey> Well
20:00:35 <Daviey> spambots worry me less.
20:00:39 <Daviey> It's a pretty sterile environment
20:00:49 <mtaylor> python is pretty powerful :)
20:00:52 <Daviey> I had to get permission just to open up the firewall to pypi
20:00:58 <soren> Oh, we're imposing on the PPB meeting slot.
20:01:05 <mtaylor> ppb isn't meeting
20:01:11 <mtaylor> our slaves are all public cloud images :)
20:01:16 <Daviey> They didn't meet last week either :/
20:02:25 <mtaylor> however - I think we're probably good here.
20:02:45 <mtaylor> soren: obviously let us know if you want help on getting the modified gerrit-trigger-plugin going
20:03:04 <mtaylor> jeblair: we should probably get around to fixing the manual trigger page
20:03:14 <Daviey> soren: What are you testing, and on what platform?
20:03:23 <jeblair> mtaylor: yep
20:03:33 <Daviey> I power-read scrollback, so might have missed that
20:04:00 <Daviey> soren: fwiw, we have 12 machines.
20:04:21 <soren> Daviey: Nothing yet. I already told you I'm waiting for access to hardware :)
20:04:48 <Daviey> soren: right, but i'm guessing there is a plan?
20:05:08 <soren> Yes. Yes, there is.
20:05:15 <Daviey> i'm sure you are not just waiting on hardware to start thinking about what you are testing for, and on what platform. :)
20:06:06 <soren> I have a draft plan.
20:06:34 <soren> It would be unlike me if it didn't involve Ubuntu.
20:06:47 <mtaylor> soren: I'm SHOCKED!
20:06:49 <Daviey> cryptic++
20:07:06 <Daviey> soren: you are crazy.
20:07:28 <soren> Daviey: That's never been proved.
20:07:58 <zns> zns here - sorry for being late.
20:08:44 <zns> * realizes the ppb is not meeting *
20:08:47 <soren> zns: You're excused. :)
20:09:01 <zns> soren: thanks :-)
20:09:35 <Daviey> Does anyone have other ideas for addressing the security issue?
20:10:31 <soren> Yeah.
20:10:49 <soren> My idea from the beginning was to only run tests from people where we know who they are.
20:10:59 <soren> ..and where they live.
20:11:06 <soren> So we can hunt them down if they screw with us.
20:11:21 <jeblair> yeah, we've been thinking about that as well
20:11:30 <Daviey> Well yes, but can i trust you not to accidentally break our ci lab with a rm -rf /? :)
20:11:37 <zul> hell no
20:11:39 <soren> ..and the same group of people could send other people's branches off for testing once they're confident it's not a cracking attempt.
20:11:51 <soren> Daviey: No. No, you can't.
20:12:05 <Daviey> soren: it needs to be a stronger model than that
20:12:24 <soren> Daviey: It works for Ubuntu?
20:12:48 <Daviey> soren: Not really.
20:13:12 <Daviey> soren: Can you r00t a buildd ?
20:13:23 <soren> Daviey: Yes.
20:13:55 <Daviey> soren: and you could break the host system?
20:14:24 <jeblair> i think our approach is generally going to be to a) be protected if that does happen (throwaway build slaves), combined with b) a small amount of deterrence against it happening (a signup process (the CLA is filling this role for now but i'd still like to get rid of it) or using the reputation of the submitter to decide whether to run a check on upload)
20:14:24 <soren> Daviey: On the Ubuntu buildd's
20:14:25 <soren> ?
20:14:33 <soren> Daviey: Or the PPA ones?
20:14:36 <soren> (not the same thing at all)
20:15:23 <Daviey> right
20:15:29 <Daviey> Regardless, I want a stronger model.
20:15:31 <Daviey> :)
20:15:49 <Daviey> I also don't want to encourage levels of contributor
20:16:04 <Daviey> contributor and core is enough of a split IMO.
20:16:37 <Daviey> having degrees of contributor isn't ideal IMO.
20:17:21 <mtaylor> agree. I was originally advocating that the only split be whether you had landed at least one patch before we would run your stuff pre-approval
20:17:53 <mtaylor> between having signed the CLA and having convinced someone to land something at least once, it's likely we at least know you're a real person and where to find you
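(A sketch of that reputation check: ask gerrit whether the submitter already has at least one merged change before running their code pre-approval. Host, port, account, and the exact query syntax are assumptions.)

    import json
    import subprocess

    def has_landed_a_patch(owner_email):
        out = subprocess.check_output([
            'ssh', '-p', '29418', 'testbot@review.openstack.org',
            'gerrit', 'query', '--format=JSON',
            'status:merged', 'owner:%s' % owner_email, 'limit:1'])
        # The last line of output is a stats record; any other line is a change.
        records = [json.loads(line) for line in out.splitlines() if line]
        return any('id' in r for r in records)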
20:19:11 <soren> Daviey: I can root the Ubuntu buildd's. Builds run as root.
20:19:25 <Daviey> nah, that is still crappy.
20:19:53 <Daviey> soren: heh, yes - but can you BREAK the infra?
20:21:02 <soren> Daviey: To the best of my knowledge, yes.
20:21:31 <soren> I could be wrong, of course. And I'm not going to test the hypothesis. elmo knows where I live. :)
20:21:43 <Daviey> heh
20:22:42 <soren> It depends on what you mean by "the infra", of course. I'm pretty sure I can take a buildd out.
20:24:08 <Daviey> But that is the 'node' that i don't care about.. because it can be recovered through out of band power management
20:24:33 <Daviey> and pxe booting a re-install
20:24:33 <Daviey> The issue is the ftp-master, breaking that.
20:24:33 <Daviey> right?
20:25:09 <soren> Well, if taking out a single node isn't a problem, then running arbitrary code from random crackheads shouldn't be a problem either.
20:25:11 <Daviey> I really don't worry too much about the node having woes, it's the incoming machine
20:25:26 <soren> However.
20:25:31 <Daviey> soren: Yes, but the jenkins server isn't a throwaway machine
20:25:37 <soren> The problem isn't someone doing an "rm -rf /".
20:25:45 <soren> You will discover that immediately.
20:26:03 <Daviey> We could do the incoming process on a throwaway node, I suppose, but that adds an extra level of complexity and slows things down
20:26:19 <soren> The problem is someone sneaking something in that will sit around and only much later add a backdoor to Nova in an entirely unrelated commit or something.
20:26:35 <Daviey> soren: /I/ won't discover it... because it's hands-off
20:26:35 <Daviey> a forkbomb is just as inconvenient.
20:26:44 <Daviey> yeah
21:00:16 <ttx> o/
21:00:22 <notmyname> hi
21:00:40 <bcwaldon> ello
21:00:54 <ttx> zns, jaypipes, vishy, devcamcar: around ?
21:00:54 * bcwaldon is standing in for jaypipes
21:01:02 <ttx> ok
21:01:07 <ayoung> o/
21:01:50 <zns> zns here
21:01:55 <Daviey> .
21:02:04 <ttx> Still missing bloody Californians.
21:02:19 <danwent> ttx: hey...
21:02:22 <bcwaldon> that's what happens when they go to $Texas
21:02:25 <ttx> danwent: woops.
21:02:50 <danwent> it's just the bloody Californians that you need for this meeting that are missing :)
21:02:58 <mtaylor> o/
21:02:59 <ttx> ok let's start. I'll pretend we have the meetbot, just in case we can refeed it the log
21:03:00 <termie> ...
21:03:07 <mtaylor> we have it
21:03:10 <mtaylor> #endmeeting