20:00:01 #startmeeting kolla
20:00:02 Meeting started Mon Oct 13 20:00:01 2014 UTC and is due to finish in 60 minutes. The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:06 The meeting name has been set to 'kolla'
20:00:13 #topic rollcall
20:00:24 \o/
20:00:38 hi
20:00:39 hi!
20:00:46 hola
20:00:54 yo
20:01:34 o/
20:01:37 hey y'all
20:01:40 howdy.
20:01:41 #topic agenda
20:01:59 #link https://wiki.openstack.org/wiki/Meetings/Kolla
20:02:05 anyone have anything to add?
20:02:34 sdake: maybe something about summit?
20:02:42 sure we can add that at the end
20:02:53 should be a fast meeting anyway :)
20:02:57 #topic becoming core
20:02:57 here!
20:03:13 portante: here
20:03:15 I see folks not on the core team building code and executing reviews
20:03:33 it would make sense to expand our core team when we think it is appropriate to include these new core contributors
20:03:46 what I have seen work well in the past is 1/2 the core team must +1 the candidate
20:03:48 with no -1 votes
20:03:51 (-1 = veto)
20:04:03 core candidate must be cracking on reviews
20:04:07 and doing some code development
20:04:19 any objections to this process?
20:04:23 no
20:04:32 could I get a +1/-1 from the core review team plz
20:04:39 sounds fine by me. voting to happen...on the mailing list?
20:04:45 yes on ml
20:04:47 +1
20:04:51 let's vote now to make sure everyone is happy :)
20:04:52 +1
20:04:59 +1
20:05:10 +1
20:05:19 jlabocki jpeeler dvossel thoughts?
20:05:24 +1
20:05:25 +1
20:05:29 sorry, stops multitasking
20:05:29 +1
20:05:34 :)
20:05:37 ok cool so we will do that
20:05:46 and we can reference this wonderful irc log when someone complains :)
20:06:03 #topic staying on top of reviews
20:06:17 #link https://review.openstack.org/#/q/status:open+project:stackforge/kolla,n,z
20:06:25 so the review queue has been getting a bit long
20:06:38 if folks could spend 5-10 minutes a day just looking at the queue and approving stuff, that would rock :)
20:06:49 as is, sometimes patches sit for 3-4 days
20:06:55 when they are basic one-liners
20:06:56 * larsks apologizes for making it *really* long today.
20:06:58 o/
20:07:07 go larsks!
20:07:07 any objections? :)
20:07:14 sdake: this happens on other projects as well
20:07:19 yes I know
20:07:29 * jrist thinks sdake is all too familiar
20:07:49 #topic Bashate gate fixing day
20:07:54 gates rock
20:08:00 busted nonvoting gates, not so much
20:08:15 I think what should work here is I'll add bugs for all the bashate-failing software
20:08:26 and we can crank out a bashate fixing day on the 16th or 17th
20:08:29 and then turn the gate on
20:08:35 350 bashate failures atm
20:09:00 sdake: but it looks like those are erroneous right now.
20:09:08 sdake: the gate *itself* seems to be broken.
20:09:12 I think what this will look like is each of our core devs gets 3-4 bugs to fix
20:09:20 tox -v -ebashate --> ERROR: unknown environment 'bashate'
20:09:34 so we shouldn't try fixing anything until the gate script is working correctly.
20:09:36 larsks our bashate gate isn't integrated with tox
20:09:42 although we could do that
20:09:45 sdake: then it needs fixing.
20:09:49 because that is the output from jenkins.
20:09:50 the upstream gating is working though
20:09:55 E.g., http://logs.openstack.org/78/128078/1/check/gate-kolla-bashate/3ca35e4/console.html
20:10:09 That is the output from the upstream gating.
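A rough sketch of reproducing the "350 failures" count locally, assuming bashate is installed from PyPI; the log does not show the exact invocation used by the gate-kolla-bashate job, so the find/xargs form below is only illustrative:

```sh
# Install bashate and run it over every shell script in the tree.
# Illustrative only; the real gate job may pass different flags or file lists.
pip install bashate
find . -name '*.sh' -print0 | xargs -0 bashate -v
```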
20:10:11 ok, well we can fix that easily enough
20:10:17 that is local to our tox env
20:10:34 Yup. Just saying, we don't actually know how many bashate-related failures we have right now.
20:10:42 I ran locally - 350
20:10:52 Okay.
20:10:54 bashate ./ ;)
20:11:04 #topic milestone #1 announcement
20:11:06 So, you will fix the gate?
20:11:45 #link https://etherpad.openstack.org/p/kolla-1-ann
20:11:49 larsks if I don't someone else will :)
20:11:56 so I was thinking we should have an announcement email
20:12:17 announce likely on Oct 21 (Tuesday) @ 10AM EST
20:12:19 sdake: I am looking to have fixing the gate be an action item for a specific person before we move on.
20:12:28 ok I'll take it larsks
20:12:30 Yay!
20:12:50 if folks want to open that etherpad
20:13:02 feel free to edit away
20:13:12 just looking over the announcement, is there anything that seems problematic?
20:13:22 I'll give folks a couple minutes to read it
20:13:44 do we have functional nova controller/compute containers?
20:13:51 we will by Friday
20:14:19 okay. there is no link to the Heat templates in the etherpad; we should probably add that.
20:14:22 we have semi-working ones now
20:14:28 oh wait, there it is.
20:14:28 is Friday the last day of dev for milestone 1?
20:14:30 never mind.
20:14:34 jpeeler yup
20:14:35 is the network expected to not run in a container yet?
20:14:41 sdake: what about developers who don't have an openstack environment … i.e. - on a disconnected laptop? Is there a way to make the development environment lighter weight?
20:14:45 radez you mean neutron?
20:15:00 right, there's not a neutron container listed
20:15:08 jlabocki: if someone wants to put together a vagrant-based guide or something, that might help out. In theory the upstream k8s one might work.
20:15:09 jlabocki I really don't know - could install k8s locally but there would be some pain involved
20:15:09 so will that just not run in a container?
20:15:17 radez ya we don't have neutron in a container, that's m2
20:15:22 ack
20:15:28 larsks: sdake: ack
20:15:49 I am working on the neutron bp. I am ready to submit a review for a functioning neutron-server, still need the agents and plugin.
20:15:56 #topic Milestone #1 blueprint review
20:16:05 What about adding nova-network to the nova-controller container work?
20:16:05 #link https://blueprints.launchpad.net/kolla/milestone-1
20:16:17 daneyon_ I think that should probably be done
20:16:34 daneyon_: sdake: I think that might be too much to get done if our deadline is this Friday.
20:16:35 I know of many deployments that are on nova-net and plan on staying put for a while.
20:16:46 But a good first step for m2, maybe?
20:16:54 which, nova-compute?
20:16:54 agreed.
20:17:00 nova is pretty much set
20:17:07 what part of nova?
20:17:07 it just needs debugging
20:17:14 and configuring
20:17:17 nova-controller and nova-compute
20:17:48 so glance container, dradez, is that one blue->green?
20:18:05 nova-conductor?
20:18:12 kube-glance-container
20:18:14 sdake: glance container works with those patches I just submitted.
20:18:23 ok well let's mark it done
20:18:29 dradez if you would please :)
20:18:34 libvirt - so, status on that?
20:18:38 sure thing
20:18:42 working pretty well i think, but hard to test without a real workload
20:18:51 with larsks' linkmanager, i think i can pull off the neutron agents/ml2 plugin using a single phy interface and create an add'l bridged vxlan network for floating IPs.
20:18:55 been blocked on busted keystone, but now that we have a working dependency chain, we should be set
20:18:59 jlabocki, conductor, scheduler, novncproxy are in the controller
20:19:07 and maybe network now?
20:19:28 rhallisey mind giving us an update on the nova-container blueprint?
20:19:50 daneyon_: that would be pretty spiffy.
20:20:12 jpeeler, any update on kube-heat-container?
20:20:14 sdake, I have a bunch more config work, then hopefully things shouldn't be too bad
20:20:30 rhallisey try to push up as soon as you have something functional
20:20:35 because I need it for compute :)
20:20:42 i need to sort out how the proper env variables are passed around and test
20:20:46 we can beautify later
20:21:03 which env variables are you looking for, jpeeler?
20:21:07 jpeeler: in fact, at some point we will have to face the general problem of how to configure and orchestrate the entire set of containers.
20:21:17 larsks: i may need to ping you offline if i run into any issues trying to implement. I'm diggin connectivity manager... you're the man!
20:21:30 larsks: we will call that point value
20:21:32 larsks I have that targeted for m#2
20:21:33 :)
20:22:08 sdake: things like KEYSTONE_ADMIN_PORT_35357_TCP_ADDR. i'll have to look more in a few
20:22:13 check out kolla-cli and k8s-rest-library
20:22:26 +1 to ansible for container orchestration.
20:22:29 jpeeler just feel free to ask
20:22:55 jpeeler: those are set automatically by kubernetes based on service descriptions.
20:23:06 daneyon_ I'd like to get a non-beautified implementation working
20:23:12 we can gold-plate later ;-)
20:23:16 I am trying to simplify those to ..._SERVICE_HOST in all cases.
20:23:40 #topic Milestone #1 bug review
20:23:51 #link https://bugs.launchpad.net/kolla/milestone-1
20:24:38 #link https://bugs.launchpad.net/kolla
20:24:41 actual bugs ^
20:25:01 1379057 needs attention, I think larsks said he had fixed it in his latest patch spam
20:25:14 1377034 - isn't that fixed, larsks?
20:25:35 sdake: dunno, pulling up bug right now so I can see what it is...
20:25:45 https://bugs.launchpad.net/kolla/+bug/1379057 is fixed.
20:25:46 Launchpad bug 1379057 in kolla "if keystone starts first, spam of errors" [Critical,Triaged]
20:25:50 lag ftw
20:26:06 https://bugs.launchpad.net/kolla/+bug/1377034 is also fixed
20:26:07 ok I marked it as such
20:26:09 Launchpad bug 1377034 in kolla/milestone-1 "keystone should idempotently create users, endpoints, and services" [Undecided,New]
20:26:58 #topic 1 hr project slot for ODS (OpenStack Developer Summit)
20:26:59 larsks: i didn't see a way to make crux only create a role, can it not do that?
20:27:11 Thierry was kind enough to grant us a 1 hour developer slot
20:27:24 I won't be at summit, but larsks and radez will - who will lead the session?
20:27:35 jpeeler: it can't right now, but you can always create the role as part of creating a user. I can make it do that if there's a good case for it.
20:27:37 it's a 1 hour slot, so I don't expect you will solve world peace ;)
20:27:52 jpeeler: ping me after the meeting.
20:27:53 larsks: could we plan for you to lead and I'll fly backup?
20:28:03 radez: sure.
20:28:35 anything you wanted to add, larsks?
20:29:01 I'm good. If people have some free time this afternoon/evening/whatever time it is locally to look at that chunk of patches I just dropped, that would be awesome.
20:29:09 larsks: Re ODS, I have a Heat talk with @pythondj. Are you OK if I highlight your heat-kube template and possibly demo it?
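For the environment-variable question above (KEYSTONE_ADMIN_PORT_35357_TCP_ADDR and friends), a minimal sketch of how Kubernetes derives those names from a service description. The service id, selector, filename, and address values below are assumptions for illustration, not the actual Kolla definitions; only the naming pattern comes from the discussion.

```sh
# Hypothetical v1beta1-era service description (field names recalled from the
# Kubernetes API of the period; treat this as a sketch, not the Kolla file).
cat > keystone-admin-service.json <<'EOF'
{
  "id": "keystone-admin",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 35357,
  "selector": {"name": "keystone"}
}
EOF

# Containers scheduled after the service exists are given variables like these
# (values are examples); the *_SERVICE_HOST form is the simplification larsks
# mentions targeting:
#   KEYSTONE_ADMIN_SERVICE_HOST=10.254.118.12
#   KEYSTONE_ADMIN_SERVICE_PORT=35357
#   KEYSTONE_ADMIN_PORT_35357_TCP_ADDR=10.254.118.12
#   KEYSTONE_ADMIN_PORT_35357_TCP_PORT=35357
#   KEYSTONE_ADMIN_PORT_35357_TCP_PROTO=tcp
```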
20:29:12 yes please take a look
20:29:20 larsks: I'm working through the patches now, a couple didn't merge cleanly
20:29:31 radez: Huh, they should if they are approved in order.
20:29:49 Note the depends on/depended by information.
20:29:51 larsks: radez: please link me to the slot you get; I will put it at the end of the summit presentation we have on it to drive more people to it
20:29:57 is that top to bottom or bottom to top?
20:30:02 daneyon_: sure, you are welcome to use those templates.
20:30:08 jlabocki: sure will
20:30:34 radez: I'm not sure. "depends on" means "is required before this patch"
20:30:44 radez: I can't parse bottom/top correctly :)
20:30:49 ah, gotcha... I may have gone backwards....
20:30:54 #topic open discussion
20:30:56 larsks: I'll shoot you the info on the talk. You're welcome to attend if it works with your schedule.
20:31:03 ok folks any open discussion?
20:31:05 I have something for open discussion...
20:31:16 shoot larsks
20:31:45 I was looking at persistent storage for k8s this weekend. I have a modified version of those heat templates that sets up a gluster cluster, at which point I was able to migrate mysql and glance containers between minions and have things keep working.
20:31:47 It was nifty.
20:32:23 I think it makes a good solution for the lack of persistent storage in k8s right now...
20:32:35 ...and maybe it's a longer term one, too.
20:32:35 gluster runs on bare metal then?
20:32:36 +1
20:32:40 cool
20:32:45 Gluster runs on the same hosts as k8s.
20:33:16 cool - just need some k8s linkage
20:33:23 With a magical autofs mountpoint so that gluster volumes are immediately available at /gluster/ around the cluster.
20:33:23 right?
20:33:37 Well, no. k8s already has the necessary support via volumes with a hostDir source.
20:33:41 So mostly, it Just Works.
20:33:55 I can through together some examples.
20:34:00 errr, "throw"
20:34:02 wow.
20:34:38 larsks: will cinder/gluster use the vxlan or eth0 for replication?
20:34:52 daneyon_: eth0 (it's a host service)
20:35:30 patches are in this branch: https://github.com/larsks/heat-kubernetes/tree/feature/gluster
20:35:43 No guarantees about functionality/stability at this point.
20:35:55 That's all I've got.
20:36:05 cool nice work larsks
20:36:12 I'll have to check it out when I'm unswamped :)
20:36:19 the first heat-kub* code rocked!
20:36:27 hey shardy_z
20:36:36 any other open discussion?
20:37:07 ok then, reminder to stay on top of the review queue and look for bugs for bashate to be assigned by Wed ;)
20:37:13 #endmeeting
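To make the hostDir point from the open-discussion item concrete: a sketch of a pod description that mounts a GlusterFS volume, exposed on every host under the /gluster/ autofs mountpoint larsks describes, into a container so its data survives a move between minions. The pod id, image, filename, and paths are made up for illustration, and the v1beta1 field names are a best-effort recollection of the Kubernetes API of the period rather than anything taken from the Kolla repo.

```sh
# Hypothetical pod description: the hostDir volume points at the shared
# /gluster/<volume> path, so the database files follow the pod to any minion.
cat > mysql-pod.json <<'EOF'
{
  "id": "mysql",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "mysql",
      "volumes": [
        {"name": "mysql-data", "source": {"hostDir": {"path": "/gluster/mysql"}}}
      ],
      "containers": [
        {
          "name": "mysql",
          "image": "mysql",
          "volumeMounts": [{"name": "mysql-data", "mountPath": "/var/lib/mysql"}]
        }
      ]
    }
  },
  "labels": {"name": "mysql"}
}
EOF
```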