19:02:39 <jeblair> #startmeeting infra
19:02:40 <openstack> Meeting started Tue Aug 11 19:02:39 2015 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:43 <openstack> The meeting name has been set to 'infra'
19:02:46 <jeblair> #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:46 <jeblair> #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-08-04-19.03.html
19:02:53 <jeblair> #topic Actions from last meeting
19:02:59 <jeblair> so, i forgot this topic last meeting
19:03:02 <jeblair> and we had a bunch
19:03:09 <hogepodge> o/
19:03:31 <jeblair> pleia2 provide swift log upload traceback
19:03:31 <jeblair> jhesketh update os-loganalyze to use pass-through rules
19:03:31 <jeblair> jhesketh/clarkb have os-loganalyze generate indexes for directories which lack them
19:03:44 <jeblair> those 3 all seem related; anyone know if they happened?
19:03:56 <fungi> i think the pass-through merged a few hours ago?
19:04:00 * fungi finds it
19:04:21 <clarkb> pass through is in
19:04:30 <clarkb> needs checking; we can use coverage for that
19:04:50 <mrmartin> o/
19:04:52 <jeblair> cool
19:04:55 <clarkb> jhesketh was talking to notmyname about doing listings for index gen, not sure if the patch is up yet
19:05:14 <fungi> #link https://review.openstack.org/208767
19:05:42 <jeblair> i don't see anything likely for indexes
19:06:04 <jeblair> ianw update yum dib element to support disabling cache cleanup
19:06:21 <jeblair> anyone know the status on that one?
19:06:31 <greghaynes> There are some dib patches up
19:06:47 <fungi> Clint was working on something related too, though f22 specific i think
19:07:11 <greghaynes> https://review.openstack.org/#/c/211434/1
19:07:14 <fungi> no, my bad. that was pabelanger
19:07:28 * fungi has no idea why he mixed them up
19:07:35 * Clint shrugs.
19:07:45 <jeblair> cool
19:07:47 <jeblair> mordred investigate problem uploading images to rax
19:07:50 <pabelanger> ya. I have some dib stuff up for fedora 22, around dnf and such. But the changes are in system-config right now, not in diskimage-builder
19:08:02 <clarkb> jeblair: that seems to just be working now
19:08:07 <clarkb> jeblair: not sure if mordred did anything to it
19:08:11 <pabelanger> I can review ianw work too
19:08:18 <jeblair> clarkb: okiedokie :)
19:08:20 <fungi> yeah, we're getting fairly consistent uploads to rax now, i think. double-checking
19:09:14 <fungi> well, maybe. we have uploads from today and a week ago
19:09:50 <fungi> so mayhaps not
19:10:16 <jeblair> :/
19:10:53 <greghaynes> The other issue in that vein (nodepool dib working) is rootfs resizing currently isn't working for those nodes
19:11:26 <greghaynes> I haven't had time to get that fully fixed in dib :(
19:11:58 <jeblair> nibalizer add beaker jobs to modules
19:12:06 <nibalizer> in review
19:12:17 <nibalizer> #link https://review.openstack.org/#/c/208799/
19:12:23 <jeblair> cool
19:12:23 <jeblair> nibalizer make openstackci beaker voting if it's working (we think it is)
19:12:31 <nibalizer> anteaya: asked that we start with one before exploding, so that's why that looks a little weird
19:12:40 <nibalizer> also in review
19:12:43 <nibalizer> #link https://review.openstack.org/#/c/208631/
19:13:09 <jeblair> and
19:13:10 <jeblair> nibalizer create first in-tree hiera patchset
19:13:16 <nibalizer> #link https://review.openstack.org/#/c/206779/
19:13:20 <nibalizer> also in review :)
19:13:29 <jeblair> w00t
19:13:37 <jeblair> jeblair write message to openstack-dev with overall context for stackforge retirement/move
19:13:37 <anteaya> nibalizer: I do like to see a new job building first, thanks
19:14:05 <jeblair> i wrote this: https://etherpad.openstack.org/p/3GYKL57APR
19:14:19 <jeblair> will send it later today
19:14:30 <jeblair> jeblair start discussion thread about logistics of repo moves
19:14:41 <jeblair> i'll start that after sending the first message
19:15:04 <jeblair> #topic Specs approval
19:15:15 <jeblair> we don't have any on the agenda today
19:15:28 <jeblair> i'll just note that i merged this:
19:15:34 <jeblair> #info greghaynes primary assignee on nodepool workers spec
19:15:35 <jeblair> #link nodepool workers spec https://review.openstack.org/208442
19:15:49 <jeblair> since greghaynes volunteered to take on an unassigned spec
19:15:52 <jeblair> yay! :)
19:15:56 <anteaya> thanks greghaynes
19:16:04 <greghaynes> :) note the gerrit topic
19:16:06 <greghaynes> reviews welcome
19:16:36 <jeblair> i also pushed up this change to specify zuulv3 would happen in branches on nodepool and zuul:
19:16:39 <jeblair> #link specify branch-based development for zuulv3 https://review.openstack.org/211687
19:16:51 <jeblair> which we discussed briefly last week
19:17:17 <jeblair> and finally, i've proposed we make maniphest a priority effort:
19:17:18 <jeblair> #link add maniphest to priority efforts https://review.openstack.org/211690
19:17:53 <jeblair> we should probably get formal votes on that
19:18:46 <jeblair> #link voting on maniphest priority effort open until 2015-08-13 1900 UTC
19:18:47 <jeblair> er
19:18:48 <fungi> completely agree
19:18:55 <jeblair> #undo
19:18:55 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0xa3b63d0>
19:19:00 <jeblair> #info voting on maniphest priority effort open until 2015-08-13 1900 UTC
19:19:15 <jeblair> #link add maniphest to priority efforts https://review.openstack.org/211690
19:19:17 <jeblair> just for good measure
19:20:01 <jeblair> #topic  Restore from backup test (jeblair)
19:20:01 <anteaya> I'm out, weather is bad
19:20:30 <jeblair> anyone want to take this on?
19:21:17 <jasondotstar> take on the issue, or take on talking about it here in the mtg?
19:21:24 <clarkb> I hear greghaynes typing so maybe he does
19:21:26 <greghaynes> I wish :( too many things ATM
19:21:30 <jeblair> take on the issue
19:21:31 <greghaynes> clarkb: gotcha
19:21:47 <jasondotstar> I'm new. Looking for a challenge.... sounds fair :-)
19:21:54 <jasondotstar> what's this entail?
19:21:57 <pabelanger> Any info on how it would work?
19:21:57 <clarkb> I would, except babies, and I am one of the two people that I think has attempted it in the past
19:22:03 <jeblair> we need someone to write up a plan (could be anyone), and a root member to execute it
19:22:05 <clarkb> and jeblair wanted new eyes on it iirc
19:22:50 <jeblair> pabelanger: we talked about it a little at the last meeting
19:23:23 <jeblair> pabelanger: but i think mostly what needs to happen first is to decide what it is we want to verify and how to go about it
19:23:26 <pabelanger> okay. I mean, I can look around and see what is needed, if nobody else has the time
19:23:38 <jeblair> #link backup documentation http://docs.openstack.org/infra/system-config/sysadmin.html#backups
19:24:02 <greghaynes> Yep, I think the consensus last meeting was to do a one-time backup restore test to gather information on what all we would need
19:24:08 <fungi> pabelanger: jasondotstar: i'm happy to work with either or both of you on it
19:24:27 <pabelanger> okay. Add my name to the list. I don't mind looking into it
19:24:39 <jasondotstar> same here.
19:24:50 <fungi> we seem to be down a few root admins for today's meeting, which is probably not helping the volunteering
19:25:11 <jeblair> #action pabelanger,jasondotstar look into restore-from-backup testing
19:25:21 <jasondotstar> pabelanger: if you want to take point that's cool. this will help me get my feet wet
19:25:33 <pabelanger> sure
19:25:44 <jeblair> fungi: yeah, less than half of us are here today?
19:25:51 <fungi> seems that way
19:25:55 <jasondotstar> pabelanger: cool
19:25:58 <jeblair> ah, august
19:26:28 <fungi> i wouldn't have picked this time of year to buy and move into a house, but at least i'm coming out the other end of that timesink now
19:26:31 <jeblair> anyway, this is something that will benefit from new eyes, so cool.
19:26:51 <jeblair> #topic Fedora 22 snapshots and / or DIBs feedback (pabelanger)
19:26:58 <pabelanger> ohai
19:27:24 <pabelanger> so, thanks to the help of people here, we can actually provision a jenkins node using fedora 22 (which is puppet4).
19:27:51 <jeblair> our first use of puppet4? :)
19:28:08 <pabelanger> So, my questions are more for root admins about how to get fedora22 actually running.  The questions are whether we can use snapshots or should continue work on dibs
19:28:32 <pabelanger> I have a few reviews up for cache_devstack and could use some eyes
19:28:46 <pabelanger> the downside with snapshots, hpcloud doesn't have fedora22 images
19:28:51 <pabelanger> not sure what is required to add them
19:29:02 <jeblair> i'd like to push on dibs if possible
19:29:02 <pabelanger> as for DIBs, well I think people know the state of it
19:29:33 <fungi> it would probably entail in-place upgrade of an f21 image to f22, if that's something fedora can do (i may be showing my debian derivative bias here)
19:29:38 <nibalizer> I'm dubious that we should be running Puppet4 on one random machine
19:29:44 <fungi> the snapshots would, i mean
19:29:59 <jeblair> nibalizer: is there a way to run puppet3?
19:30:12 <nibalizer> jeblair: I haven't verified, but yes there should be
19:30:13 <pabelanger> so, my question about dibs is: what would be a reasonable timeframe to get dibs all working?
19:30:42 <fungi> but yes, i think adding new snapshot images goes against the "when you find yourself in a hole, the first thing to do is stop digging" adage
19:30:53 <crinkle> puppetlabs doesn't package puppet for fedora 22 yet, which implies they're not testing on fedora 22 yet, so i would be wary of trying to run it
19:31:10 <crinkle> moreover fedora is packaging it completely differently than puppetlabs will be
19:31:14 <nibalizer> the puppet4 support comes from fedora/epel repos I guess
19:31:22 <jeblair> pabelanger: frankly, i don't think we can really set that.  so much of the nodepool dib work seems to be blocked on mordred who is not around
19:31:37 <nibalizer> ya so the two organizations that would package puppet4 are going about it different ways
19:31:39 <pabelanger> right, the main reason for this, is some downstream teams that would require fedora22. Even puppet bits
19:32:06 <pabelanger> plus, some efforts for projects pulling in newer libs
19:32:06 <nibalizer> which means it's not really stable yet
19:32:33 <pabelanger> I agree puppet 4 is new, however if puppet 3 is required, we could do what the puppet openstack team does and uninstall puppet at job launch, and set up a gem
19:32:34 <jeblair> what do you mean by this: 19:32 < pabelanger> plus, some efforts for projects pulling in newer libs
19:33:17 <pabelanger> jeblair: my understanding. A few teams at RedHat are wanting bleeding edge packages for experimental support
19:33:38 <pabelanger> not 100%, just heard rumblings
19:33:49 <jeblair> k
19:34:38 <jeblair> regarding puppet3/4 -- we don't run much puppet on these nodes, and we want to run even less in the future; i'd prefer to use the same puppet from the puppetlabs repo, but if that's not possible, i'm not too worried about using puppet4 here
19:35:27 <jeblair> if we can finish the dib work, we can start on that effort in earnest
19:35:52 <fungi> yeah, it's local puppet apply, no remote/puppetmaster involvement at all
19:36:15 <nibalizer> ya really for the provisioning step, puppet3 vs puppet4 we won't feel much of a difference
19:36:34 <nibalizer> I would recommend we write the 10 lines of shell to get the puppetlabs repo in place and just use puppet3
19:36:47 <nibalizer> especially since I'm the one who will be getting pinged because puppet4 did something stupid
19:36:51 <jeblair> nibalizer: er, i thought it wasn't an option?
19:36:55 * jeblair is confused
19:36:59 <crinkle> nibalizer: puppetlabs doesn't package for fedora 22 yet
19:37:07 <crinkle> http://yum.puppetlabs.com/
19:37:13 <crinkle> either 3 or 4
19:37:18 <nibalizer> crinkle: jeblair so gem install puppet --version=3 or something
19:37:25 <nibalizer> or add the f21 packages :)
19:37:48 <nibalizer> that might be my debian showing
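[editor's note: the "10 lines of shell" nibalizer describes would mostly amount to dropping a repo file like the sketch below and then running something like `dnf install 'puppet < 4'` (or, per his other suggestion, `gem install puppet -v '< 4'`). The baseurl is a guess: puppetlabs published no f22 packages at the time, so this reuses the f21 path, and the whole stanza should be treated as an untested assumption.]

```ini
# /etc/yum.repos.d/puppetlabs.repo -- hypothetical sketch; reuses the
# f21 path since yum.puppetlabs.com had no f22 packages at the time
[puppetlabs-products]
name=Puppet Labs Products
baseurl=http://yum.puppetlabs.com/fedora/f21/products/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs
```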
19:38:23 <jeblair> nibalizer: pabelanger seems to be saying that this is working now; is it worth doing more work for that?
19:38:51 <pabelanger> Ya, fedora 22 is working with -infra puppet modules
19:38:53 <nibalizer> probably not
19:39:29 <nibalizer> so yea we can just go with puppet4
19:39:31 <jeblair> so maybe we try this out, and if we keep shooting ourselves in the foot, then invest in puppet3ifying it?
19:39:42 <pabelanger> so much so, I want to get a puppet apply test node going to make sure we are gating on changes for puppet 4
19:40:03 <nibalizer> ya thats a decent idea
19:40:06 <pabelanger> even if non-voting
19:40:26 <nibalizer> trusty/precise though, since we don't run any services on f22
19:41:09 <nibalizer> but f22 nodes, once available wont hurt
19:41:10 <jeblair> nibalizer: we do run the slave template through the apply test, so that at least can run on f22
19:41:47 <jeblair> pabelanger: got your questions more or less answered?
19:42:16 <pabelanger> jeblair: so, Fedora 22 dibs?  Drop snapshot?
19:42:51 <jeblair> i think so; it's work either way, and at least this way we're all focused on the same probs
19:43:26 <fungi> worst case, while f22 images are experimental we can get by with only having them in hpcloud
19:43:42 <pabelanger> okay
19:43:48 <fungi> and continue working on getting red hat based systems working with glean
19:44:09 <fungi> whatever the blocker is there at the moment
19:44:11 <pabelanger> Right, I haven't even tried fedora22 with uploading yet
19:44:16 <pabelanger> so, not sure what will happen
19:44:37 <pabelanger> either way, will continue hacking away on it
19:45:04 <jeblair> thanks!
19:45:08 <jeblair> #topic  puppet-pip vs puppet-python (rcarrillocruz)
19:45:26 <nibalizer> right so pip is installed from install_puppet.sh then unmanaged by puppet modules
19:45:50 <yolanda> we found this problem downstream, where puppet and pip are not installed in that way for some cases
19:46:03 <yolanda> we were relying on puppet-pip that doesn't do what it promises
19:46:17 <nibalizer> in what case does install_puppet.sh not get run?
19:46:30 <yolanda> infra-ansible for example
19:46:49 <yolanda> it's not a part of the automation itself, so right now it's just a manual step that is not related to the automation
19:46:52 <fungi> nibalizer: downstream consumers of our puppet modules not provisioning systems the way we do
19:47:08 <clarkb> whats infra-ansible?
19:47:21 <yolanda> clarkb, it's a project we have downstream, to automate a whole infra
19:47:31 <clarkb> ok, couldn't you have it run install_puppet.sh?
19:47:36 <jeblair> why is this downstream?
19:47:36 <fungi> basically "getting pip installed on your server" is an exercise left to the reader for puppet modules where we use the pip package provider
19:47:49 <nibalizer> fungi: in my mind i completely agree with you
19:47:58 <jeblair> yolanda: isn't that one of our highest priority efforts we're supposed to be working on together upstream?
19:48:02 <nibalizer> if you want to use infra modules you need a couple prereqs such as puppet and pip
19:48:05 <yolanda> jeblair, we are
19:48:16 <nibalizer> we could do better about listing whats needed 'before you begin'
19:48:18 <yolanda> it's on github at the moment, in development process
19:48:33 <jeblair> yolanda: no, that's not how we work
19:48:48 <yolanda> jeblair why do you say that?
19:49:01 * fungi notes that "on github" is not "working upstream"
19:49:08 <jeblair> we don't work by doing things off in the corner on github
19:49:26 <pabelanger> I am in favor of using an upstream puppet module for this
19:49:50 <jeblair> pabelanger: i think the premise is flawed
19:49:58 <yolanda> jeblair but we cannot work all the time by proposing upstream specs and waiting for approval
19:50:02 <yolanda> because we simply don't have the time
19:50:16 <yolanda> so in some cases we need to cook some things downstream then try to reuse
19:50:35 <jeblair> yolanda: i'm sorry you feel that way.  i could not disagree more about that, and the way you are dealing with it.
19:50:39 <jeblair> let's move on to another topic.
19:50:43 <jeblair> #topic  Nodepool REST API spec (rcarrillocruz)
19:51:06 <yolanda> that has been pending review for long time
19:51:16 <yolanda> so Ricky needed some reviews to move it forward
19:52:18 <jeblair> it looks like it could use some more elaboration and agreement
19:52:27 <fungi> link?
19:52:29 <jeblair> particularly the keystone thing seems vague
19:52:39 <jeblair> #link https://review.openstack.org/141016
19:52:42 <fungi> thanks!
19:53:27 <jeblair> so, someone who actually knows something about keystone, and its suitability here should probably weigh in on that
19:53:57 <pabelanger> I can. I wrote something using both pecan and keystone last year.  And can see how it would work
19:54:15 <jeblair> pabelanger: cool, thanks
19:54:30 <fungi> i do worry a little about tying nodepool to keystone, but i'll save my comments for the spec
19:55:10 <pabelanger> fungi: ya, it would be optional and easy to disable.
19:55:19 <pabelanger> or easy to enable ;)
19:55:32 <jeblair> that would seem to imply that in order to use this, you need to have a cloud account on one or more or all providers
19:55:40 <jeblair> and i don't really understand how that fits with the nodepool usage model
19:55:56 <jeblair> or why we would assume that would be the case
19:56:15 <jeblair> or, it looks like mordred suggested we could run a keystone specifically for this
19:56:23 <jeblair> which sounds heavyweight, but what do i know
19:56:31 <nibalizer> that sounds ... like a lot of work
19:56:41 <yolanda> i'd prefer to keep it simple really
19:56:59 <fungi> i must have mis-skimmed it because i thought it was more like treating nodepoold as a rest server in an existing cloud environment (similar to nova, glance, et cetera)
19:57:47 <pabelanger> That's what I assumed
19:57:55 <fungi> where the only benefit was not having to use some not-from-openstack authentication mechanism
19:57:58 <jeblair> there is also very little indication of why it's needed.  nodepool is designed to have no user interface
19:58:23 <yolanda> jeblair, it's one of the most important features needed for downstream consumption
19:58:33 <yolanda> on our daily basis, users are requesting to hold nodes all the time to debug issues
19:58:36 <jeblair> yolanda: the spec should probably mention that
19:58:46 <fungi> a proof of concept using http basic auth would probably get us most of the way and then you could fairly easily add other auth mechanisms supported by apache
19:59:00 <jeblair> yolanda: right, so there's a use case that should be described, and then we can elucidate requirements from that
19:59:11 <yolanda> jeblair, can you note it on the review? it's not mine
19:59:20 * fungi will add notes too
19:59:22 <jeblair> for instance, it sounds like a simple http access credential might suffice
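[editor's note: a sketch of what fungi and jeblair describe — plain HTTP basic auth terminated in Apache in front of a hypothetical nodepoold REST listener. Every path, port, and realm name here is illustrative, not from the spec; swapping in other auth mechanisms later would just mean changing the Auth* directives.]

```apache
# Hypothetical vhost fragment: Apache proxies to a local nodepoold
# REST listener and enforces basic auth (needs mod_proxy_http,
# mod_auth_basic, mod_authn_file loaded).
<Location "/nodepool">
    ProxyPass "http://127.0.0.1:8005/"
    ProxyPassReverse "http://127.0.0.1:8005/"
    AuthType Basic
    AuthName "nodepool admin"
    # credentials managed out of band, e.g.:
    #   htpasswd -c /etc/nodepool/htpasswd admin
    AuthUserFile "/etc/nodepool/htpasswd"
    Require valid-user
</Location>
```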
19:59:34 <jeblair> yolanda: of course... it was proposed as a meeting topic though.  ;)
20:00:08 <pabelanger> IIRC, pecan already supports hooks into keystoneclient. Which makes the integration into keystone that much easier
20:00:24 <yolanda> yes, and great to be talking about it. So the most important need is autohold of nodes, but having some way to interact with nodepool features sounds like a good idea to me
20:00:31 <jeblair> time is up; thanks everyone
20:00:34 <jeblair> #endmeeting