20:00:01 <sdake> #startmeeting kolla
20:00:02 <openstack> Meeting started Mon Oct 13 20:00:01 2014 UTC and is due to finish in 60 minutes.  The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:06 <openstack> The meeting name has been set to 'kolla'
20:00:13 <sdake> #topic rollcall
20:00:24 <sdake> \o/
20:00:38 <rhallisey> hi
20:00:39 <dvossel> hi!
20:00:46 <daneyon_> hola
20:00:54 <jpeeler> yo
20:01:34 <GheRivero> o/
20:01:37 <radez> hey y'all
20:01:40 <larsks> howdy.
20:01:41 <sdake> #topic agenda
20:01:59 <sdake> #link https://wiki.openstack.org/wiki/Meetings/Kolla
20:02:05 <sdake> anyone have anything to add?
20:02:34 <larsks> sdake: maybe something about summit?
20:02:42 <sdake> sure we can add that at the end
20:02:53 <sdake> should be a fast meeting anyway :)
20:02:57 <sdake> #topic becoming core
20:02:57 <jlabocki> here!
20:03:13 <portante> portante: here
20:03:15 <sdake> I see folks not on the core team building code and executing reviews
20:03:33 <sdake> it would make sense to expand our core team when we think it is appropriate to include these new core contributors
20:03:46 <sdake> what I have seen work well in the past is 1/2 the core team must +1 candidate
20:03:48 <sdake> with no -1 votes
20:03:51 <sdake> (-1 = veto)
20:04:03 <sdake> core candidate must be cracking on reviews
20:04:07 <sdake> and doing some code development
20:04:19 <sdake> any objections to this process?
20:04:23 <daneyon_> no
20:04:32 <sdake> could I get a +1/-1 from the core review team plz
20:04:39 <larsks> sounds fine by me.  voting to happen...on the mailing list?
20:04:45 <sdake> yes on ml
20:04:47 <larsks> +1
20:04:51 <sdake> lets vote now make sure everyone happy :)
20:04:52 <sdake> +1
20:04:59 <rhallisey> +1
20:05:10 <radez> +1
20:05:19 <sdake> jlabocki jpeeler dvossel thoughts?
20:05:24 <jlabocki> +1
20:05:25 <dvossel> +1
20:05:29 <jlabocki> sorry, stops multitasking
20:05:29 <jpeeler> +1
20:05:34 <sdake> :)
20:05:37 <sdake> ok cool so we will do that
20:05:46 <sdake> and we can reference this wonderful irc log when someone complains :)
20:06:03 <sdake> #topic staying on top of reviews
20:06:17 <sdake> #link https://review.openstack.org/#/q/status:open+project:stackforge/kolla,n,z
20:06:25 <sdake> so review queue been getting a bit long
20:06:38 <sdake> if folks could spend 5-10 minutes a day just looking at the queue and approving stuff, that would rock :)
20:06:49 <sdake> as is, sometimes patches sit for 3-4 days
20:06:55 <sdake> when they are basic 1 liners
* larsks apologizes for making it *really* long today.
20:06:58 <jrist> o/
20:07:07 <rhallisey> go larsks!
20:07:07 <sdake> any objections ? :)
20:07:14 <portante> sdake: this happens on other projects as well
20:07:19 <sdake> yes I know
20:07:29 * jrist thinks sdake is all too familiar
20:07:49 <sdake> #topic Bashate gate fixing day
20:07:54 <sdake> gates rock
20:08:00 <sdake> busted nonvoting gates, not so much
20:08:15 <sdake> I think what should work here is I'll add bugs for all the bashate failing software
20:08:26 <sdake> and we can crank out a bashate fixing day on the 16th or 17th
20:08:29 <sdake> and then turn the gate on
20:08:35 <sdake> 350 bashate failures atm
20:09:00 <larsks> sdake: but it looks like those are erroneous right now.
20:09:08 <larsks> sdake: the gate *itself* seems to be broken.
20:09:12 <sdake> I think what this will look like is each of our core devs get 3-4 bugs to fix
20:09:20 <larsks> tox -v -ebashate --> ERROR: unknown environment 'bashate'
20:09:34 <larsks> so we shouldn't try fixing anything until the gate script is working correctly.
20:09:36 <sdake> larsks our bashate gate isn't integrated with tox
20:09:42 <sdake> although we could do that
20:09:45 <larsks> sdake: then it needs fixing.
20:09:49 <larsks> because that is the output from jenkins.
20:09:50 <sdake> the upstream gating is working though
20:09:55 <larsks> E.g., http://logs.openstack.org/78/128078/1/check/gate-kolla-bashate/3ca35e4/console.html
20:10:09 <larsks> That is the output from the upstream gating.
20:10:11 <sdake> ok, well we can fix that easily enough
20:10:17 <sdake> that is local to our tox env
20:10:34 <larsks> Yup.  Just saying, we don't actually know how many bashate related failures we have right now.
20:10:42 <sdake> I ran locally - 350
20:10:52 <larsks> Okay.
20:10:54 <sdake> bashate ./ ;)
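(Editor's note: larsks's error above is because no `bashate` environment exists in tox.ini. A minimal sketch of wiring it in, so a local `tox -e bashate` would match the upstream gate job — the section name follows the gate, but the `find` expression and options are illustrative assumptions, not what the gate actually runs:)

```ini
# Hypothetical [testenv:bashate] section for tox.ini so that a local
# `tox -e bashate` mirrors the gate-kolla-bashate job.
# The find expression is illustrative; adjust to whatever the gate runs.
[testenv:bashate]
deps = bashate
whitelist_externals = bash
commands = bash -c "find {toxinidir} -name '*.sh' -print0 | xargs -0 bashate"
```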
20:11:04 <sdake> #topic milestone #1 announcement
20:11:06 <larsks> So, you will fix the gate?
20:11:45 <sdake> #link https://etherpad.openstack.org/p/kolla-1-ann
20:11:49 <sdake> larsks if I don't someone else will :)
20:11:56 <sdake> so I was thinking we should have an announcement email
20:12:17 <sdake> announce likely on Oct 21 (tuesday) @ 10AM est
20:12:19 <larsks> sdake: I am looking to have fixing the gate be an action item for a specific person before we move on.
20:12:28 <sdake> ok I'll take it larsks
20:12:30 <larsks> Yay!
20:12:50 <sdake> if folks want to open that etherpad
20:13:02 <sdake> feel free to edit away
20:13:12 <sdake> just looking over the announce, is there anything that seems problematic?
20:13:22 <sdake> I'll give folks a couple minutes to read it
20:13:44 <larsks> do we have functioning nova controller/compute containers?
20:13:51 <sdake> we will by friday
20:14:19 <larsks> okay. there is no link to the Heat templates in the etherpad; we should probably add that.
20:14:22 <sdake> we have semi-working ones now
20:14:28 <larsks> oh wait, there it is.
20:14:28 <jpeeler> is friday the last day of dev for milestone 1?
20:14:30 <larsks> never mind.
20:14:34 <sdake> jpeeler yup
20:14:35 <radez> is the network expected to run not in a container yet?
20:14:41 <jlabocki> sdake: what about developers who don't have an openstack environment … i.e. - on a disconnected laptop? Is there a way to make development environment lighter weight?
20:14:45 <sdake> radez you mean neutron?
20:15:00 <radez> right, there's not a neutron container listed
20:15:08 <larsks> jlabocki: if someone wants to put together a vagrant-based guide or something that might help out.  In theory the upstream k8s one might work.
20:15:09 <sdake> jlabocki I really don't know - could install k8s locally but there would be some pain involved
20:15:09 <radez> so will that just not run in a container?
20:15:17 <sdake> radez ya we don't have neutron in a container, thats m2
20:15:22 <radez> ack
20:15:28 <jlabocki> larsks: sdake: ack
20:15:49 <daneyon_> I am working on the neutron bp. I am ready to submit a review for a functioning neutron-server, still need the agents and plugin.
20:15:56 <sdake> #topic Milestone #1 blueprint review
20:16:05 <daneyon_> What about adding nova-network to the nova-controller container work?
20:16:05 <sdake> #link https://blueprints.launchpad.net/kolla/milestone-1
20:16:17 <sdake> daneyon_ I think that should probably be done
20:16:34 <larsks> daneyon_: sdake: I think that might be too much to get done if our deadline is this friday.
20:16:35 <daneyon_> I know of many deployments that are on nova-net and plan on staying put for a while.
20:16:46 <larsks> But a good first step for m2, maybe?
20:16:54 <sdake> which, nova-compute?
20:16:54 <daneyon_> agreed.
20:17:00 <sdake> nova is pretty much set
20:17:07 <jlabocki> what part of nova?
20:17:07 <sdake> it just needs debugging
20:17:14 <sdake> and configing
20:17:17 <sdake> nova-controller and nova-compute
20:17:48 <sdake> so glance container dradez is that one blue->green?
20:18:05 <jlabocki> nova-conductor?
20:18:12 <sdake> kube-glance-container
20:18:14 <larsks> sdake: glance container works with those patches I just submitted.
20:18:23 <sdake> ok well lets mark it done
20:18:29 <sdake> dradez if you would please :)
20:18:34 <sdake> libvirt - so status on that
20:18:38 <radez> sure thing
20:18:42 <sdake> working pretty well i think, but hard to test without a real workload
20:18:51 <daneyon_> with larsks linkmanager, i think i can pull off the neutron agents/ml2 plugin using a single phy interface and create an add'l bridge vxlan network for floating ip's.
20:18:55 <sdake> been blocked on busted keystone, but now that we have a working dependency chain, we should be set
20:18:59 <rhallisey> jlabocki, conductor, scheduler, novncproxy are in the controller
20:19:07 <rhallisey> and maybe network now?
20:19:28 <sdake> rhallisey mind giving us an update on nova-container blueprint
20:19:50 <larsks> daneyon_: that would be pretty spiffy.
20:20:12 <sdake> jpeeler, any update on kube-heat-container?
20:20:14 <rhallisey> sdake, I have a bunch more config work then hopefully things shouldn't be too bad
20:20:30 <sdake> rhallisey try to push up as soon as you have something functional
20:20:35 <sdake> because I need it for compute :)
20:20:42 <jpeeler> i need to sort out how the proper env variables are passed around and test
20:20:46 <sdake> we can beautify later
20:21:03 <sdake> which env variables you looking for jpeeler?
20:21:07 <larsks> jpeeler: in fact, at some point we will have to face the general problem of how to configure and orchestrate the entire set of containers.
20:21:17 <daneyon_> larsks: i may need to ping u offline if i run into any issues trying to implement. I'm diggin connectivity manager... you're the man!
20:21:30 <jlabocki> larsks: we will call that point value
20:21:32 <sdake> larsks I have that targeted for m#2
20:21:33 <jlabocki> :)
20:22:08 <jpeeler> sdake: things like KEYSTONE_ADMIN_PORT_35357_TCP_ADDR. i'll have to look more in a few
20:22:13 <sdake> check out kolla-cli and k8s-rest-library
20:22:26 <daneyon_> +1 to ansible for container orchestration.
20:22:29 <sdake> jpeeler just feel free to ask
20:22:55 <larsks> jpeeler: those are set automatically by kubernetes based on service descriptions.
20:23:06 <sdake> daneyon_ I'd like to get a non-beautified implementation working
20:23:12 <sdake> we can gold plate later ;-)
20:23:16 <larsks> I am trying to simplify those to ..._SERVICE_HOST in all cases.
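(Editor's note: the variables jpeeler mentions are generated automatically by Kubernetes from service definitions, as larsks says. A small sketch of the naming scheme — this helper is purely illustrative and not part of kolla; it shows both the docker-links-style form and the simpler `_SERVICE_HOST` form larsks is moving toward:)

```python
def service_env_vars(name, port):
    """Return the env var names a container can expect for a k8s service.

    Illustrative helper (not part of kolla): Kubernetes upper-cases the
    service name, replaces dashes with underscores, and exports both the
    docker-links-style PORT_<n>_TCP_ADDR variable and the simpler
    _SERVICE_HOST form.
    """
    prefix = name.upper().replace("-", "_")
    return {
        "host": "%s_SERVICE_HOST" % prefix,
        "addr": "%s_PORT_%d_TCP_ADDR" % (prefix, port),
    }

# e.g. a "keystone-admin" service on port 35357 yields
# KEYSTONE_ADMIN_PORT_35357_TCP_ADDR and KEYSTONE_ADMIN_SERVICE_HOST
```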
20:23:40 <sdake> #topic Milestone #1 bug review
20:23:51 <sdake> #link https://bugs.launchpad.net/kolla/milestone-1
20:24:38 <sdake> #link https://bugs.launchpad.net/kolla
20:24:41 <sdake> actual bugs ^
20:25:01 <sdake> 1379057 needs attention, I think larsks said he had fixed it in his latest patch spam
20:25:14 <sdake> 1377034 - isn't that fixed larsks?
20:25:35 <larsks> sdake: dunno, pulling up bug right now so I can see what it is...
20:25:45 <larsks> https://bugs.launchpad.net/kolla/+bug/1379057 is fixed.
20:25:46 <uvirtbot> Launchpad bug 1379057 in kolla "if keystone starts first, spam of errors" [Critical,Triaged]
20:25:50 <sdake> lag ftw
20:26:06 <larsks> https://bugs.launchpad.net/kolla/+bug/1377034 is also fixed
20:26:07 <sdake> ok I marked it as such
20:26:09 <uvirtbot> Launchpad bug 1377034 in kolla/milestone-1 "keystone should idempotently create users, endpoints, and services" [Undecided,New]
20:26:58 <sdake> #topic 1 hr project slot for ODS (OpenStack Developer Summit)
20:26:59 <jpeeler> larsks: i didn't see a way to make crux only create a role, can it not do that?
20:27:11 <sdake> Thierry was akind enough to grant us a 1 hour developer slot
20:27:24 <sdake> I wont be at summit, but larsks and radez will, who will lead the session
20:27:35 <larsks> jpeeler: it can't right now, but you can always create the role as part of creating a user.  I can make it do that if there's a good case for it.
20:27:37 <sdake> its a 1 hour slot, so I don't expect you will solve world peace ;)
20:27:52 <larsks> jpeeler: ping me after the meeting.
20:27:53 <radez> larsks: could we plan for you to lead and I'll fly backup?
20:28:03 <larsks> radez: sure.
20:28:35 <sdake> anything you wanted to add larsks?
20:29:01 <larsks> I'm good.  If people have some free time this afternoon/evening/whatever time it is locally to look at that chunk of patches I just dropped, that would be awesome.
20:29:09 <daneyon_> larsks: Re ODS, I have a Heat talk with @pythondj. Are you OK if I highlight your heat-kube template and possibly demo it?
20:29:12 <sdake> yes please take a look
20:29:20 <radez> larsks: I'm working through the patches now, a couple didn't merge clean
20:29:31 <larsks> radez: Huh, they should if they are approved in order.
20:29:49 <larsks> Note the depends on/depended by information.
20:29:51 <jlabocki> larsks: radez: please link me to the slot you get, I will put at the end of the summit presentation we have on it to drive more people to it
20:29:57 <radez> is that top to bottom or bottom to top?
20:30:02 <larsks> daneyon_: sure, you are welcome to use those templates.
20:30:08 <radez> jlabocki: sure will
20:30:34 <larsks> radez: I'm not sure. "depends on" means "is required before this patch"
20:30:44 <larsks> radez: I can't parse bottom/top correctly :)
20:30:49 <radez> ah, gotcha... I may have gone backwards....
20:30:54 <sdake> #topic open discussion
20:30:56 <daneyon_> larsks: I'll shoot you the info on the talk. You're welcome to attend if it works with your schedule.
20:31:03 <sdake> ok folks any open discussion?
20:31:05 <larsks> I have something for open discussion...
20:31:16 <sdake> shoot larsks
20:31:45 <larsks> I was looking at persistent storage for k8s this weekend.  I have a modified version of those heat templates that sets up a gluster cluster, at which point I was able to migrate mysql and glance containers between minions and have things keep working.
20:31:47 <larsks> It was nifty.
20:32:23 <larsks> I think it makes a good solution for the lack of persistent storage in k8s right now...
20:32:35 <larsks> ...and maybe it's a longer term one, too.
20:32:35 <sdake> gluster runs on bare metal then?
20:32:36 <daneyon_> +1
20:32:40 <rhallisey> cool
20:32:45 <larsks> Gluster runs on the same hosts as k8s.
20:33:16 <sdake> cool - just need some k8s linkage
20:33:23 <larsks> With a magical autofs mountpoint so that gluster volumes are immediately available at /gluster/<volname> around the cluster.
20:33:23 <sdake> right?
20:33:37 <larsks> Well, no.  k8s already has the necessary support via volumes with a hostDir source.
20:33:41 <larsks> So mostly, it Just Works.
20:33:55 <larsks> I can through together some examples.
20:34:00 <larsks> errr, "throw"
20:34:02 <larsks> wow.
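(Editor's note: larsks's approach — gluster on the hosts, exposed via an autofs mountpoint at /gluster/<volname>, consumed through k8s hostDir volumes — can be sketched as a pod fragment. This is a hypothetical example in the v1beta1-era syntax of late 2014; the image name, volume names, and paths are illustrative assumptions, not the actual templates in larsks's branch:)

```yaml
# Hypothetical pod fragment: mount the gluster volume that autofs exposes
# at /gluster/<volname> on every minion into a mysql container, so the
# pod can migrate between minions and keep its data.
desiredState:
  manifest:
    version: v1beta1
    containers:
      - name: mysql
        image: kollaglue/fedora-rdo-mysql    # illustrative image name
        volumeMounts:
          - name: mysql-data
            mountPath: /var/lib/mysql
    volumes:
      - name: mysql-data
        source:
          hostDir:
            path: /gluster/mysql             # autofs gluster mountpoint
```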
20:34:38 <daneyon_> larsks: will cinder/gluster use the vxlan or eth0 for replication?
20:34:52 <larsks> daneyon_: eth0 (it's a host service)
20:35:30 <larsks> patches are in this branch: https://github.com/larsks/heat-kubernetes/tree/feature/gluster
20:35:43 <larsks> No guarantees about functionality/stability at this point.
20:35:55 <larsks> That's all I've got.
20:36:05 <sdake> cool nice work larsks
20:36:12 <sdake> I'll have to check it out when I'm unswamped :)
20:36:19 <sdake> the first heat-kub* code rocked !
20:36:27 <sdake> hey shardy_z
20:36:36 <sdake> any other open discussion?
20:37:07 <sdake> ok then, reminder to stay on top of the review queue and look for bugs for bashate to be assigned by wed ;)
20:37:13 <sdake> #endmeeting