20:00:33 #startmeeting kolla
20:00:34 Meeting started Mon Sep 29 20:00:33 2014 UTC and is due to finish in 60 minutes. The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:38 The meeting name has been set to 'kolla'
20:00:41 #topic rollcall
20:00:46 hi
20:00:48 hi
20:00:49 hallo
20:00:50 o/
20:00:50 howdie \o/
20:00:51 hi
20:00:52 hello
20:00:52 o/
20:00:52 hi all
20:00:57 hi
20:01:00 hi
20:01:02 hey
20:01:04 o/
20:01:08 howdy
20:01:16 hi
20:01:38 #topic agenda
20:01:46 https://wiki.openstack.org/wiki/Meetings/Kolla#Agenda_for_next_meeting
20:01:55 anyone have anything to add or change?
20:01:57 Chris Alfonso
20:02:03 howdie funzie
20:02:17 Seems reasonable to me.
20:02:26 I would not mind a review of the big picture
20:02:35 I found the discussion confusing
20:02:46 mspreitz cool, maybe we should do that first
20:02:54 #topic big picture - what are we trying to accomplish
20:02:54 also curious what exists today, if anything
20:03:11 sdake: I don't know if it's appropriate but I wanted to ask about this vs the openstack container effort..
20:03:12 Do you want some leading questions?
20:03:24 slower I don't think it has much to do with the container effort
20:03:29 shoot mspreitz :)
20:03:34 ok I'll talk to you about it later
20:03:48 sdake: scollier, apologies for the tardiness
20:03:48 so big picture - put kubernetes in the undercloud
20:03:57 What I understood is that the basic idea is using container images of OpenStack controllers and/or compute nodes...
20:04:01 Q1: Which of those?
20:04:10 Q2: what deploys the minions
20:04:17 Q3: in VMs or on bare metal?
20:04:28 q1 both, q2 minions deployed by magic elves :), q3 bare metal
20:04:29 I'll stop there for now
20:04:37 q2 may be a challenge for us
20:04:41 I think everyone is struggling with that point
20:04:47 trying to do kubernetes dev
20:05:04 So both compute nodes and controller nodes will be containers, right?
20:05:05 for 3, are we limited to bare metal for any reason?
20:05:09 sdake: some chance the answer to q2 may end up being "puppet", or something like that, right?
20:05:15 right larsks
20:05:30 dvossel no, just better performance characteristics vs a vm
20:05:59 Re Q1: my vague understanding is that there is a lot of cross-configuration to be done while installing OpenStack...
20:06:09 is that already done in the images?
20:06:13 it is not done yet
20:06:18 so jdob asked what is available
20:06:19 And if so, doesn't it have to be undone/redone?
20:06:32 mspreitz: totally. trying to figure out how best to handle that is one of the big to-do items, I think.
20:06:32 atm, we have a container that launches mariadb, and a container that launches keystone
20:06:34 RE q2: "everyone is struggling with that point" - can you elaborate? is it a k8s-specific issue or an enhancement needed?
20:07:02 derekwaynecarr well, deployment of the initial node is a huge pita which everyone struggles with
20:07:03 OK, I think I understand where things are
20:07:09 people make tools like pxeboot etc
20:07:15 but it is still a struggle ;)
20:07:22 but it seems to me that Kubernetes is not bringing a lot to the table for the big problem, which is all that configuration
20:07:35 Maybe I'm missing the point here but (oversimplifying...) is this simply a TripleO -> OpenStack-on-Kubernetes effort then?
20:07:39 and the magic elves' job
20:07:40 good point, the config is difficult
20:07:44 mspreitz: kubernetes brings scheduling and service auto-discovery, the latter of which will help out with but not solve the cross-config issue.
20:07:47 florianotel right
20:08:03 I don't know if anyone has a config solution
20:08:24 derekwaynecarr: k8s today makes some assumptions on how things are configured from a networking perspective to support per-pod-ip concepts... all we have today is some salt scripts to configure, but we could do more
20:08:56 #topic Discuss development environment
20:08:58 sdake, Ok, follow-up Q then: Point being...? i.e. What's the tech / ops advantage of doing so?
20:09:28 florianotel treating openstack as a hermetically sealed container for that particular piece of software
20:09:36 eg, all the magic is hidden in a container
20:09:52 TripleO proved to be enough of a challenge as-is (at least AFAIU...), not quite clear what k8s will bring to that picture?
20:09:54 I'd have to convince you containers are a good idea in general
20:09:59 for them to be a good idea for openstack
20:10:27 sdake, No need. That was table stakes for me to attend this meeting already :)
20:10:45 cool
20:10:50 sdake: wait...
20:11:00 so dev environment, larsks can you put a #link to your repo for launching kube on heat
20:11:00 it is one thing to say containers are good for virtualization
20:11:10 it is another to say they are a good way to set up software on machines
20:11:23 where each machine is being treated as a whole
20:11:35 container is like rpm with a built-in installer/deployment model
20:11:36 imo :)
20:11:40 link is https://github.com/larsks/heat-kubernetes for the heat templates, but I am not meetingbot-aware enough to know if that is sufficient :)
20:11:41 mspreitz, the latter IMO. That's the whole point AFAICT
20:11:42 so it is for software setup
20:11:43 we will put one container on each machine, right? no virtualization
20:11:45 sdake, +1
20:12:10 mspreitz the scheduler will sort that out
20:12:16 the scheduler being kubernetes
20:12:21 but it could put multiple things on one machine
20:12:32 I guess, I don't know how the kubernetes scheduler works ;-)
20:12:36 if we are not fixing one container per machine then this is significantly different from what one expects of a bare metal install of OpenStack
20:12:44 mspreitz: I was actually assuming multiple containers/machine, in most cases. E.g., right now you might have a controller running multiple services. In the k8s world, maybe that will be one service/container, but multiple containers/host.
20:12:54 sdake: I think that's right, multiple containers can end up on one machine
20:13:02 mspreitz: I think that is actually pretty similar to a bare-metal install of openstack.
20:13:17 radez: certainly, the scheduler will put multiple pods on a single host.
20:13:22 sdake: k8s scheduler is in early stages, but multiple containers can end up on the same machines; the major scheduling constraint today is free host port
20:13:28 larsks: maybe you are thinking of more containers than I was assuming
20:13:49 mspreitz: the breakdown is a container per service, pretty much
20:13:51 if we use a container for each of what is now a process, then..
20:14:01 right
20:14:12 mspreitz: possibly, or a "pod" for each of what is now a process (a "pod" being a group of tightly linked containers).
20:14:13 but the process carries config and init with it
20:14:18 well, I am not sure about service vs. process
20:14:20 Err, s/process/service/
20:14:24 with container, all that gets lumped together which is hermetically sealed = winning
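To make the container-per-service breakdown discussed above concrete, here is a minimal sketch of how the two containers that exist today (mariadb and keystone) might be started and wired together with plain docker linking. The image names, published ports, and environment variables are illustrative assumptions, not the project's actual ones.

```sh
# Minimal sketch, assuming hypothetical kolla/mariadb and kolla/keystone images.
# Start the database as its own single-service container.
docker run -d --name mariadb \
    -e MARIADB_ROOT_PASSWORD=kolla \
    kolla/mariadb

# Start keystone as a second container; --link injects MARIADB_PORT_3306_TCP_*
# environment variables that the keystone start script can use to find the db.
docker run -d --name keystone \
    --link mariadb:mariadb \
    -p 5000:5000 -p 35357:35357 \
    kolla/keystone
```

Kubernetes would eventually do the scheduling and wiring instead of a developer typing docker run, but the one-service-per-container shape stays the same.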
20:14:58 OK, next issue..
20:15:13 back on larsks' link, I'd recommend folks get the dev environment rolling if you plan to write code for the project
20:15:24 so far, jpeeler, larsks, radez, sdake have got the env running that I am aware of
20:15:26 Is there any concern with allowing all the freedom that the k8s scheduler currently has? What if we want to keep some heavy services off of compute nodes?
20:15:27 so bug them for qs
20:15:49 I think we want to keep everything off the compute nodes
20:15:50 note that larsks considers his heat templates a bit of a hack, in particular the way it's handling the overlay network to meet kube's networking model.
20:15:55 and I think mesos can solve that
20:16:10 but like I said, I don't know enough about the scheduler to know for sure ;)
20:16:31 those heat templates ... are they published anywhere?
20:16:38 I know of ones that use Rackspace resources
20:16:41 yar, on larsks' github
20:16:42 mspreitz: yeah, that link I posted a few lines back...
20:16:52 heat-kubernetes
20:17:01 Actually, y'all are talking too much, it's a lot of lines back now :)
20:17:08 mspreitz: https://github.com/larsks/heat-kubernetes
20:17:22 thanks, something glitched and I did not see the earlier ref
20:17:23 any questions about dev environment?
20:17:42 I think the minimum you want to get set up is make sure you can do a keystone endpoint-list from outside the instance
20:17:42 no question, just encouragement to submit PRs for those templates if you think something can be done better...
20:18:15 the dev environment is focused on heat because it's easy to set up openstack
20:18:24 larsks can serve as a point of contact if you get stuck there ;)
20:18:29 * sdake volunteers larsks!!
20:18:31 * larsks hides.
20:18:57 #topic Brainstorm 10 blueprints to kick off with
20:19:14 we need some features to implement in the launchpad tracker
20:19:24 I think radez was thinking of entering one
20:19:34 I put one in... lemme get the link
20:19:40 https://blueprints.launchpad.net/kolla/+spec/kube-glance-container
20:19:50 another underlying question is how much the containers may be customized using puppet... thinking along the lines of the staypuft installer or the RHCI common installer, which will be staypuft based
20:19:57 I think we should probably have a similar blueprint for each service we are attempting to containerize.
20:20:07 bthurber no idea ;)
20:20:12 this is basically to start working through the glance containers... I don't know what's involved so I'll have to fill in stuff as I go a bit
20:20:19 bthurber: I think that is one of the things we need to figure out.
20:20:24 +1
20:20:33 anyone volunteer to make a launchpad tracker for all the containers?
20:20:39 separately of course ;)
20:20:47 radez: bthurber: I almost think that "figuring out how to handle configuration" is going to be the #1 blueprint, because it's going to inform the work on everything else...
20:21:09 larsks: right, and ..
20:21:10 you bet... work backwards a bit to determine the overall strategy
20:21:15 larsks: agreed, though that shouldn't prevent us from doing work to get things working
20:21:21 wouldn't the obvious thing be to leverage the service binding of k8s?
20:21:35 mspreitz: totally! That's what I was mentioning earlier.
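For anyone picking up the development-environment recommendation above, the rough sequence is: clone larsks' heat-kubernetes templates, launch the stack with heat, then confirm a keystone endpoint-list works from outside the instance. The sketch below assumes a working OpenStack cloud with heat; the template filename, parameter names, address, and credentials are illustrative, so check the repository's README for the real ones.

```sh
# Sketch only -- the template file and parameter names here are assumptions;
# see the heat-kubernetes README for the actual ones.
git clone https://github.com/larsks/heat-kubernetes
cd heat-kubernetes
heat stack-create kubernetes \
    -f kubecluster.yaml \
    -P "ssh_key_name=default;external_network=ext-net"

# Once a keystone container is running on a minion, the suggested smoke test
# is to query it from outside the instance (address and credentials are
# placeholders):
export OS_AUTH_URL=http://192.0.2.10:5000/v2.0
export OS_USERNAME=admin OS_PASSWORD=password OS_TENANT_NAME=admin
keystone endpoint-list
```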
20:21:48 OK. But let's avoid the botch introduced by k8s
20:21:54 sdake, I can do it
20:21:58 mspreitz: which botch?
20:22:02 thanks rhallisey
20:22:06 SERVICE_HOST
20:22:15 it requires that every proxy be universal
20:22:27 using container linking envars instead avoids that assumption
20:22:29 #action rhallisey to enter separate blueprints for each openstack service with containerization language
20:22:30 mspreitz: Ah, okay. So far we've been using the --link-like environment vars.
20:22:47 mspreitz: there is a proposal for k8s to eliminate that BOTCH
20:22:54 great
20:23:11 is it in the k8s repo of issues?
20:23:26 see https://github.com/GoogleCloudPlatform/kubernetes/issues/1107
20:23:46 thanks
20:24:38 as far as services go, there is nova, swift, cinder, neutron, horizon, keystone, glance, ceilometer, heat, trove, zaqar, sahara
20:24:49 that is 13 separate blueprints
20:25:00 neutron might not be monolithic
20:25:18 rhallisey once you have the blueprints entered, can you send a mail to openstack-dev so people can take ownership of the individual ones?
20:25:22 mspreitz: neutron might not be pretty!
20:25:34 neutron and cinder are going to be a real challenge
20:25:40 sdake, should we split up by containers or by services?
20:25:45 sdake, ok
20:26:03 I suggest organizing people by service
20:26:07 rhallisey: I would say "by service" for now, and possibly the implementation will be multi-container. Or not.
20:26:09 let the people decide about containers
20:26:11 I vote by component/service
20:26:14 Ah, great minds.
20:26:41 sdake: possibly more... if you want to break out the components of each service
20:26:59 ya, atm we break out each component of each service into a separate container
20:27:06 +1
20:27:09 I think we will have to experiment to see what works best there
20:27:23 there may be some shared components as well
20:27:34 so topic * Define list of initial Docker Images (10 min)
20:27:35 is probably covered
20:27:57 #topic Map core developer to docker image for initial implementation
20:28:01 bthurber: although by sticking to one-process-per-container, we avoid the whole "how do we handle process supervision" bugaboo. And I think that the "pod" abstraction makes the one-process-per-container model a little more tenable.
20:28:21 I guess my thinking on this is we can just pick up blueprints when rhallisey sends out the note
20:28:25 does that work for everyone?
20:28:50 Sure. rhallisey, don't forget to include "supporting" services like mysql, rabbitmq...
20:29:06 larsks: prob good to start there and as we mature see where there is overlap. May find opportunity for some efficiency.
20:29:07 larsks, sounds good
20:29:39 WRT Neutron, is there a Blueprint built for this? I am curious how we can containerize some of the services. larsks maybe you might know?
20:29:48 rook: no blueprints yet!
20:29:51 rhallisey: did you see the link to the glance one I created? if it doesn't meet your standard just ditch it or we can change it
20:29:52 larsks: roger.
20:30:04 larsks: how about your thoughts? ;)
20:30:13 we can offline it...
20:30:30 radez, I'll take a look
20:30:46 rook: if you look at the current code some things are already broken down across different services
20:31:01 #topic gating
20:31:04 you can get an idea of how some are being done already there to get your gears turning
20:31:04 radez which code?
20:31:08 rook: chat after meeting, maybe?
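Coming back to the SERVICE_HOST versus container-linking exchange above, the pattern used so far is for each container's start script to read the --link style environment variables (or the equivalent kubernetes service variables) and render its own configuration. A hypothetical keystone start script along those lines might look like this; the variable names, the use of crudini, and keystone-all are all illustrative, not what the repository actually ships.

```sh
#!/bin/sh
# Hypothetical keystone start script; every name below is illustrative.

# Fail early if the database link did not provide an address; default the port.
: "${MARIADB_PORT_3306_TCP_ADDR:?no database address in environment}"
: "${MARIADB_PORT_3306_TCP_PORT:=3306}"

# Point keystone at the linked database rather than a hard-coded SERVICE_HOST.
# KEYSTONE_DB_PASSWORD is assumed to be passed in by whatever launches the pod.
crudini --set /etc/keystone/keystone.conf database connection \
    "mysql://keystone:${KEYSTONE_DB_PASSWORD}@${MARIADB_PORT_3306_TCP_ADDR}:${MARIADB_PORT_3306_TCP_PORT}/keystone"

exec /usr/bin/keystone-all
```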
20:31:10 atm, we have no gating in the codebase
20:31:20 I'll file blueprints for every service to introduce gating
20:31:21 radez: roger - my concern is namespaces wrt Neutron
20:31:28 I think what would work best is at least tempest gating on the containers
20:31:35 I'll tackle the implementation
20:31:44 if someone wants to join me, that wfm ;)
20:31:51 rook: https://github.com/jlabocki/superhappyfunshow/
20:32:11 rook: but note, moving to github.com/openstack Real Soon Now.
20:32:14 sdake: where is that right now?
20:32:40 stackforge -> https://review.openstack.org/#/c/124453/
20:33:10 larsks radez thx
20:33:46 any other thoughts on gating?
20:33:55 Nah, tempest seems like a reasonable starting point.
20:34:06 #topic open discussion
20:34:20 likely we will just end in 10 mins, so I'll set a 10 minute timer :)
20:34:36 anyone have any open items they wish to discuss?
20:34:55 dumb question, what room does the project talk in?
20:35:01 is there a kolla room or using #tripleo?
20:35:02 #tripleo
20:35:04 kk
20:35:14 as soon as I started asking that I remembered the initial email
20:35:15 sdake: do we want to create a project-specific channel?
20:35:31 larsks the tripleo folks thought it would be better if we used the same channel
20:35:38 Fair enough.
20:35:42 because separate channels never die, and we are really just an offshoot of the tripleo project
20:36:31 any other discussion?
20:36:38 30 secs and I'll end meeting ;)
20:36:54 =)
20:37:03 thanks folks
20:37:05 #endmeeting
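As a postscript on the gating discussion, a first pass at tempest gating for a single container might amount to something like the following. The repository layout, the deployment step, and the test selection are all placeholders rather than an agreed design.

```sh
#!/bin/bash
# Placeholder gate sketch -- layout, cluster wiring, and test filter are all
# assumptions, not an agreed design.
set -e

# Build the image under review (assumes a docker/<service>/ layout).
docker build -t kolla/keystone docker/keystone/

# Deploy it into the kubernetes dev environment; the mechanics of this step
# are exactly what the meeting left open, so it is elided here.
# ...

# Run a tempest subset against the resulting endpoint (path and filter are
# placeholders; tempest must already be configured for the deployment).
cd /opt/stack/tempest
testr run tempest.api.identity
```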