19:03:45 <fungi> #startmeeting infra
19:03:47 <openstack> Meeting started Tue Oct 29 19:03:45 2013 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:50 <openstack> The meeting name has been set to 'infra'
19:04:12 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:04:49 <fungi> #topic Actions from last meeting
19:04:56 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-10-22-19.01.html
19:05:22 <fungi> #action clarkb decommission old etherpad server eventually (after the summit)
19:05:34 <fungi> #action jeblair move tarballs.o.o and include 50gb space for heat/trove images
19:05:39 <fungi> carrying forward ;)
19:06:04 <fungi> i suspect some of the topics on the agenda are stale, but you can tell me
19:06:09 <fungi> #topic Trove testing (mordred, hub_cap)
19:06:27 <clarkb> SlickNick has a change up to enable trove testing
19:06:36 <pleia2> cool
19:06:37 <fungi> ooh, link?
19:06:39 <clarkb> well a prereq
19:06:44 <clarkb> #link https://review.openstack.org/#/c/53972/
19:07:02 <hub_cap> hey fungi, im blocked on any work currently wrt this
19:07:03 <clarkb> Trove needs dib images and to do that cleanly the dib things need to go in their own repo
19:07:35 <fungi> hub_cap: no worries. i think it's on the agenda perpetually so we remember to give you a hard time or something
19:07:49 <clarkb> the bits are slowly falling into place
19:07:54 <hub_cap> yes feel free to scream at me at the summit :)
19:08:08 <fungi> hub_cap: i'll scream with you instead
19:08:24 <clarkb> we need soundproof rooms where we can just yell :)
19:08:35 <fungi> sounds like good stress relief
19:08:47 <fungi> so anything else on this topic since last week?
19:09:29 <clarkb> doesn't look like it
19:09:33 <fungi> #topic Tripleo testing (lifeless, pleia2)
19:09:43 <pleia2> I'm continuing to work through setting up iteration 2 here:
19:09:45 <pleia2> #link https://etherpad.openstack.org/p/tripleo-test-cluster
19:10:07 <hub_cap> fungi: sounds good :)
19:10:20 <pleia2> got a little of the networking stuff set up for the test instance, now just working through leveraging existing tripleo scripts to set it up, from there I might need some help with specific portions (particularly the gearman bit)
19:10:48 <pleia2> that's about it, making progress
19:11:32 <fungi> sounds awesome
19:12:13 <fungi> questions? comments? applause?
19:12:32 <clarkb> excited to have more testing
19:12:53 <fungi> testing good. regressions bad
19:13:01 * fungi applauds and moves on to...
19:13:08 <fungi> #topic Wsme / pecan testing (sdague, dhellman)
19:13:32 <dhellmann> o/
19:13:44 * dhellmann hopes sdague has something in mind...
19:13:49 <fungi> stale topic on the agenda? or are there updates? i saw a review just pop up earlier today for something related
19:14:08 <clarkb> I believe most of the bits are in place, but we reverted the change that added the jobs to zuul
19:14:20 <clarkb> because we were gating nova on wsme and vice versa which wasn't desired
19:14:39 <dhellmann> #link https://review.openstack.org/#/c/54333/
19:14:49 <dhellmann> that runs tox tests for pecan against wsme, ceilometer, and ironic
19:15:11 <fungi> yeah, that's the one i was looking for
19:15:44 <dhellmann> I assume sdague and clarkb have the (a)symmetric gating thing in hand
19:16:05 <clarkb> dhellmann: I believe sdague planned to propose a change that used different tests to avoid symmetric gating
19:16:09 <fungi> i always like to assume that. helps me sleep easier at night
19:16:18 <dhellmann> clarkb: makes sense
19:16:50 <dhellmann> clarkb: the jobs in ^^ run the unit tests, but integration tests would be good, too
19:16:52 <fungi> quick aside, unrelated, but i think the d-g changes just broke gating
19:17:05 <clarkb> fungi: :(
19:17:06 * fungi sees very much red all over the zuul status screen
19:17:07 <dhellmann> d-g?
19:17:13 <fungi> devstack-gate
19:17:28 <dhellmann> ok
19:17:51 <clarkb> fungi: I think the lack of an early enough reexec bit us
19:19:49 <fungi> yeah
19:19:57 <fungi> anything else on pecan/wsme?
19:20:10 <fungi> i'll move this along while we troubleshoot
19:20:12 <dhellmann> nothing from me
19:20:28 <fungi> #topic New etherpad.o.o server and migration (clarkb)
19:20:40 <clarkb> nothing new here really.
19:20:47 <fungi> i'm guessing this one is probably no longer needing to stay on the agenda. we still have the action item
19:20:51 <pleia2> still seems to be running well
19:20:51 <clarkb> the new server continues to be fine according to cacti
19:21:03 <clarkb> and no yelling from users despite it getting used before the summit
19:21:03 <fungi> great
19:22:36 <fungi> #topic Retiring https://github.com/openstack-ci
19:22:42 <fungi> whose was this?
19:22:52 <fungi> #link https://github.com/openstack-ci
19:23:00 <pleia2> oh yes
19:23:09 <pleia2> so someone found that the other day and was confused
19:23:16 <fungi> dead stuff and a placeholder saying we went -> that way
19:23:19 <pleia2> looking at it, I think we should get together next week and delete it
19:23:39 <pleia2> or delete everything except for the -> that way
19:24:12 <fungi> perhaps. there may still be old articles and things floating around the 'net, so the we-have-moved sign might be warranted. i'm on the fence there
19:25:23 <pleia2> maybe we just chat about this next week and see if it's still on the agenda the following week
19:25:37 <pleia2> this is just a cleanup thing anyway
19:26:19 <clarkb> ++
19:26:32 <clarkb> I asked that it be put on here because it is the sort of thing I think we need consensus on
19:26:40 <clarkb> next week consensus should be easy
19:28:39 <fungi> sounds good
19:28:46 <fungi> #topic Savanna testing
19:28:55 <fungi> SergeyLukjanov: was this yours?
19:29:00 <SergeyLukjanov> fungi, yep
19:29:01 <SergeyLukjanov> hi
19:29:12 <fungi> welcome! you have the floor
19:29:36 <SergeyLukjanov> I'd just like to share some of our problems/options for testing
19:29:55 <SergeyLukjanov> btw we have a design track session in the savanna topic to discuss our CI approach
19:30:29 <SergeyLukjanov> the main problem is that we need to test a MapReduce job running on a Hadoop cluster deployed on instances provisioned in OpenStack
19:30:42 <SergeyLukjanov> it could be one instance for the simplest tests
19:30:55 <SergeyLukjanov> but it won't work for more complex tests
19:31:27 <fungi> can multiple instances run on the same virtual machine for testing multi-instance? or is it one per vm?
19:31:31 <clarkb> SergeyLukjanov: today we can only do single node tests, I know that we would like to have the option to do multinode (and have for a while) but the work to make that happen hasn't been done yet (there are tricky bits around networking)
19:31:37 <SergeyLukjanov> one per vm
19:31:53 <SergeyLukjanov> but it could be many vms per host
19:32:01 <clarkb> SergeyLukjanov: you can hook into the simple single node stuff today by creating a job that runs on the devstack-gate nodes
19:32:27 <SergeyLukjanov> oh, I'm afraid I wrote that incorrectly
19:32:43 <fungi> well, tripleo bare metal aside, we don't control any real hosts (though our devstack vms act as hosts for some basic tests, but the cirros vms on top of them are nearly unusable for real workloads)
19:32:44 <SergeyLukjanov> I mean that we will need several OpenStack instances, not compute nodes
19:33:17 <SergeyLukjanov> fungi, and that's really a problem
19:33:51 <SergeyLukjanov> we need at least one instance with 1 vCPU, 1 GB RAM
19:34:02 <clarkb> and probably actual virt
19:34:06 <clarkb> not qemu
19:34:17 <SergeyLukjanov> and some prev. tests show that we need actual acceleration
19:34:21 <SergeyLukjanov> yeah
19:34:29 <clarkb> though, if you can get by with qemu or containers the d-g nodes should suffice
19:34:35 <clarkb> they are 8GB 4VCPU nodes
19:34:45 <SergeyLukjanov> Hadoop takes several hours to start on qemu, and jobs don't work on it after it starts )
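As context for the acceleration concern above, a minimal sketch (an assumption about a generic Linux gate node, not anything shown in the meeting) of how one might check whether hardware virtualization is exposed via /dev/kvm rather than falling back to plain qemu emulation:

```python
import os

def kvm_available(dev="/dev/kvm"):
    """Return True if the KVM device node exists and is usable by this user."""
    return os.path.exists(dev) and os.access(dev, os.R_OK | os.W_OK)

if __name__ == "__main__":
    if kvm_available():
        print("hardware acceleration (kvm) available")
    else:
        print("no /dev/kvm: guests would run under plain qemu emulation")
```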
19:34:58 <fungi> do you need more than novaclient et cetera access to a tenant on an openstack cloud, or do you need actual administrative control over the openstack parts under those vm instances?
19:35:23 <fungi> like control of the computer nodes themselves
19:35:32 <fungi> gah, compute nodes
19:35:46 <SergeyLukjanov> we only need access to keystone's service account to check tokens, and all other ops are done using the user's token
19:36:22 <SergeyLukjanov> we're using trusts to support some long-term ops
19:36:50 <fungi> just wondering if you're able to, say, run this sort of work through a public cloud you don't own. in which case it sounds like you're looking for the multi-node testing we've been talking about
19:36:51 <SergeyLukjanov> as for the services, we're using nova, glance and optionally cinder and neutron
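To illustrate the access level being discussed (ordinary tenant credentials driving nova, glance, etc., rather than admin control over the compute hosts), here is a hedged sketch using python-novaclient roughly as it existed at the time; the credentials, image name, and flavor name are made up for the example and are not from the meeting.

```python
# Hypothetical tenant credentials and resource names; savanna's real setup differs.
from novaclient.v1_1 import client

nova = client.Client("demo-user", "demo-password", "demo-tenant",
                     "http://keystone.example.org:5000/v2.0/")

image = nova.images.find(name="ubuntu-12.04")   # assumed image name
flavor = nova.flavors.find(name="m1.small")     # assumed flavor name
server = nova.servers.create("hadoop-node-0", image, flavor)
print(server.id)

# Longer-running operations are handled on the savanna side with keystone
# trusts, as mentioned above; that piece is not shown here.
```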
19:38:28 <SergeyLukjanov> we're only looking for simple CI right now (one or several instances on a one-node devstack is enough)
19:40:04 <SergeyLukjanov> am I right that qemu is used for gating?
19:40:12 <fungi> okay, so if you can run what you want initially with devstack in an 8gb flavor vm on ubuntu 12.04 lts (precise) 64-bit in hpcloud or rackspace, then it might be pretty easy to implement (for some definitions of "easy")
19:40:12 <clarkb> SergeyLukjanov: yes
19:40:34 <SergeyLukjanov> fungi, it should be enough for the most tests
19:40:47 <clarkb> SergeyLukjanov: however you could in theory use containers if that buys you the ability to run hadoop in a semi performant way
19:40:49 <SergeyLukjanov> the only problem is acceleration
19:41:05 <fungi> if your initial needs deviate from that, then there's some more substantial engineering required
19:42:18 <SergeyLukjanov> I think we should start by testing Hadoop on the gating node to understand how it'll work there and which tests could pass
19:42:51 <SergeyLukjanov> at least it could be used to check integration with other projects
19:42:58 <clarkb> ++, I think you should get something in place and put it in the experimental pipeline
19:43:01 <fungi> that sounds like a great first step
19:43:26 <clarkb> SergeyLukjanov: I wouldn't start by adding it to all of the projects (just because we have found that usually a lot of iteration is needed on the project being tested first)
19:43:31 <SergeyLukjanov> is it the right approach to add a savanna testing job and add it to the experimental pipeline?
19:43:41 <clarkb> SergeyLukjanov: that is what I would suggest
19:43:51 <SergeyLukjanov> oops, reading too slow, jet lag strikes back after two weeks :(
19:43:51 <fungi> to test the waters first, yeah
19:44:14 <clarkb> once you sort out the major issues that you will inevitably run into we can move you to the silent queue or check/gate
19:45:31 <SergeyLukjanov> agreed, we need to make such tests stable before adding them to other projects
19:46:23 <fungi> if they're "stable enough" that they pass most of the time and you want to collect more data, we could move it to a non-voting job before making it enforced in the gate, as an additional stepping stone
19:46:30 <SergeyLukjanov> so, to summarize, I'll start by adding a job for savanna to the experimental pipeline to test how it'll work
19:46:48 <clarkb> SergeyLukjanov: yup and you can look at the pbr integration test as an example
19:46:50 <SergeyLukjanov> fungi, yeah, it sounds good
19:46:59 <SergeyLukjanov> one more question
19:47:24 <SergeyLukjanov> is it possible to access some gating worker without a job, to check how hadoop will run there?
19:47:35 <fungi> the pbr integration test might actually be a little severe. it doesn't really use devstack at all (not sure if you plan to use any of devstack, like where it gets the services set up for you at least)
19:47:39 <SergeyLukjanov> or some specific cloud/flavor/dc coordinates
19:47:53 <clarkb> SergeyLukjanov: rackspace and hpcloud 8GB 4VCPU nodes
19:48:01 <clarkb> SergeyLukjanov: running ubuntu precise with latest updates
19:48:01 <SergeyLukjanov> clarkb, got it, thx
19:48:24 <SergeyLukjanov> fungi, I'll take a look at the pbr tests
19:48:57 <SergeyLukjanov> btw savanna can be installed by devstack now, so it looks like the job will not be very different
19:49:24 <SergeyLukjanov> there are no more questions from my side, thank you guys!
19:49:30 <fungi> SergeyLukjanov: some of this walkthrough might also help with emulating our setup manually... https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/README.rst#n100
19:49:40 <fungi> #link https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/README.rst#n100
19:50:00 <fungi> anyone else have comments on this topic?
19:50:31 <SergeyLukjanov> we'll come back to this at the summit and/or after it :)
19:50:46 <fungi> #topic Design summit Infra track etherpads (fungi)
19:50:59 <fungi> (last-minute agenda addition, chair's prerogative)
19:51:14 <fungi> #link https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Infrastructure
19:51:25 <clarkb> abusing the system ;)
19:51:28 <fungi> basic announcement
19:51:42 <fungi> clarkb said he thought jeblair was done finalizing our schedule, so i threw those together
19:51:51 <fungi> take a look if you get time, make sure they're sane
19:52:45 <zaro> why bunch them all on thurs?
19:52:59 <clarkb> zaro: the slots are more spread out this time
19:53:09 <zaro> oops, forgot monday is dead
19:53:12 <clarkb> but tend to clump a bit so that we can avoid conflicts with other sessions we want to be in
19:53:14 <fungi> previous summits we were in the process track and mostly all on one day
19:53:42 <clarkb> I have a lot more single session slots this time around, overall very happy about that
19:54:18 <fungi> but i don't specifically know the reasons for particular slots there, just that there was a lot of jockeying to make sure related topics between tracks didn't overlap so people would be more likely to be able to attend where needed
19:55:14 <fungi> #topic Open discussion
19:55:29 <fungi> with our last five minutes... what else is going on?
19:55:36 <pleia2> for those of us going to LCA, call for sysadmin miniconf talks ends this friday http://lists.lca2014.linux.org.au/pipermail/chat_lists.lca2014.linux.org.au/2013-October/000012.html
19:55:36 <fungi> anybody break anything fun lately?
19:56:03 <pleia2> I submitted one about how we manage our whole puppet configs in the open + hiera (instead of just releasing generic modules)
19:56:17 <fungi> we do that? excellent!
19:56:32 <pleia2> haha
19:57:01 <clarkb> nothing from me. I need to run
19:57:03 <pleia2> I'm also writing a "Code Review for Systems Administrators" article for USENIX logout (bi-monthly, digital-only companion to the more serious/academic ;login: magazine they have for members: https://www.usenix.org/publications/login), mostly based on my OSCON talk but less about infrastructure and more about the sysadmin benefit side
19:57:13 <clarkb> cool
19:57:14 <fungi> clarkb: have a good run
19:57:41 <fungi> very neat
19:58:12 <zaro> i just returned from jenkins conf with a jenkins bobblehead.
19:58:38 <pleia2> zaro: hooray!
19:58:41 <zaro> clarkb really enjoyed it.
19:58:49 <pleia2> hehe
19:59:46 <fungi> representin'
20:00:09 <fungi> okay, that's all for this time. join us next week... IN HONG KONG!!!! (echo, echo echo)
20:00:21 <fungi> #endmeeting