16:00:58 #startmeeting containers
16:00:58 Meeting started Tue Jan 6 16:00:58 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:02 The meeting name has been set to 'containers'
16:01:05 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-01-06_1600_UTC Our Agenda
16:01:11 o/
16:01:13 #topic Roll Call
16:01:16 Adrian Otto
16:01:18 Thomas Maddox
16:01:25 hi thomasem!
16:01:26 Andrew Melton
16:01:27 containers containers bobainers \p\
16:01:27 Digambar Patil
16:01:30 jay-lau-513
16:01:38 Hey adrian_otto!!
16:01:47 Hongbin Lu
16:01:55 what is with the 513 jay lau
16:01:55 good day everyone
16:02:15 my guess is he is a racer, and 513 is his race number
16:02:15 why not "jay-lau" :)
16:02:29 now that would make sense
16:02:50 the fastest Lau in the west
16:03:00 sdake my lucky number ;-)
16:03:00 except he lives in the East
16:03:03 or east as it may be
16:03:36 he had 512 and then he incremented it.
16:03:49 anyway, glad to have you all here. let's proceed to Announcements
16:03:50 513 is a prime number I think
16:04:02 #topic Announcements
16:04:11 1) Welcome Jay Lau to magnum-core!
16:04:26 he is our most recent addition
16:04:26 thanks, my honor to join this team
16:04:44 team growing fast
16:04:47 lots of interest in containers
16:04:48 we are honored to have you as well
16:05:05 we also added another core reviewer:
16:05:10 2) Welcome Motohiro/Yuanying Otsuka to magnum-core!
16:05:43 Welcome!
16:05:55 he may be sleeping
16:06:01 I think it's the middle of the night for him now
16:06:15 both of these new reviewers have been very active. Traditionally designating cores takes a few months, but in these early stages, I'm willing to propose additions sooner.
16:06:41 anyone else interested in serving as a core reviewer may see me for guidance
16:06:51 Any other announcements from team members?
16:07:43 ok, in December we set a January target date for our first tagged release. Who remembers the date?
16:08:04 13th Jan
16:08:11 ding!
16:08:32 So there is one work week remaining before that date
16:08:49 during a later section I'll re-raise this for further discussion.
16:09:07 but one quick topic first
16:09:08 #topic Blueprint/Task Review
16:09:14 #link https://review.openstack.org/144203 Enable tests for db objects
16:09:32 do we have https://review.openstack.org/#/q/owner:abhishek%2540cloudscaling.com+status:open,n,z present today?
16:10:00 I think he's absent, so maybe I'll follow up on this later
16:10:14 So, on the topic of blueprints, tasks, and bugs
16:10:26 first of all, I want to compliment the team on remarkable progress
16:10:42 yay - we almost got launching pods/services in a micro OS in a bay :)
16:10:48 I feel like the commit throughput is way up, and there are lots of solid commits hitting the repo
16:11:45 so take a moment to look around you and recognize terrific progress from those around you, and know that we appreciate your efforts.
16:12:17 next, let's identify any must-have tasks that should be completed by Jan 13
16:12:31 and I will help to make sure any of them that need a bird dog have one
16:12:51 thoughts on must-have work that remains pending or in-progress?
16:13:01 we need to be able to specify minion servers to kubectl commands
16:13:05 I think there is a review up for that
16:13:21 thanks sdake. Let's take a moment to look for that.
16:13:24 we need to be able to pass the pod/service data to kubectl commands
16:13:30 I think there is a review up for that too
16:13:52 #link https://blueprints.launchpad.net/magnum Our Blueprint List
16:13:52 I've just been operating under the principle that we won't actually get container scheduling sorted out for milestone #1
16:14:10 sdake: yes, that's acceptable and appropriate
16:14:16 but I think if we have kube working that should be good enough
16:14:52 Ideally we need one of two things: 1) a heat template that works for Ironic based upon larsks' repo, or 2) updated documentation that shows how to deploy in virtual environments as opposed to Ironic
16:15:06 I think #1 is likely not to occur in the next week
16:15:41 so I think we should set another limit then, which is we only intend milestone #1 to launch on virtual machines, not integrated with Ironic
16:15:51 for #1, is it possible that we merge larsks' code to etc/magnum?
16:15:55 diga_: you are working on a containerized environment for Magnum that would suit #2 above, correct?
16:16:08 yes
16:16:23 larsks' code is for virt only, not for Ironic
16:16:30 we need two templates, one for each environment type
16:16:31 sdake or just write some readme telling the end user where to get the template?
16:16:34 the network is different
16:16:49 jay-lau-513 I'd prefer to merge it into our repo, and the license is compatible
16:16:54 then we can just keep it up to date from larsks' repo
16:17:22 can we identify two Stackers on the magnum team to co-own that responsibility?
16:17:34 although atm it works great, I doubt there will be many changes, except possibly to handle Ironic
16:17:39 ok, I see,
16:17:55 as that will require watching the code in the original project
16:18:12 I'll take on the docs part, and I'll take on merging new changes from larsks' repo
16:18:20 but need someone else to do the original copy :)
16:18:27 ok, any volunteers to aid sdake?
16:18:47 I can help him
16:18:52 I can help sdake
16:19:05 two volunteers ;-)
16:19:15 jay a copy with the correct install bits should do the trick
16:19:24 yes
16:19:32 ok, perfect, we should be in good shape
16:19:55 I know sdake is planning some time away, so feel free to select diga_ or jay-lau-513 as a delegate for this accordingly.
16:20:13 ironic you mean not through nova but direct?
16:20:25 I mean through nova, but the Ironic network model is different
16:20:32 the heat template larsks produced is based upon neutron
16:20:45 Ironic supports flat networking as one model and OVS-enabled switches as another model
16:21:05 in the case of #1, where most of our users are going to be, we need a template that does flat networking
16:21:12 I see - thanks for that clarification
16:21:55 #link https://github.com/larsks/heat-kubernetes heat-kubernetes
16:21:59 I'm glad someone understands it, I sure don't :(
16:22:30 ^^ for jay-lau-513 and diga_ to reference
16:22:41 ok, any other must-haves for Jan 13?
16:22:43 so ya, launching pods, launching services, inside a bay, that is a good set of features for milestone #1
16:22:51 ok
16:22:57 ideally we need a virtual interface to represent the cluster
16:23:04 I'm not sure if we have time for that or not
16:23:08 I think that we can also launch replication controllers ;)
16:23:19 the way that works is we put a LB in front of every minion
16:23:44 that probably requires a new heat template
16:23:49 jay-lau-513: indeed we can.
16:24:21 sdake, why have an LB in front of a minion?
16:24:34 I have two responsibilities: 1) set up magnum repo in a container, 2) heat-kubernetes
16:24:35 that way you don't have to figure out which minion to talk to
16:24:37 sdake why does one cluster need a LB
16:24:47 diga_: yes
16:25:26 yep
16:25:32 sdake, why do we care which minion is used?
16:25:33 I think it would be handy to have 1 IP address represent the entire user experience for the kubernetes cluster
16:25:52 eg: minion 1.1.1.1, minion 2.2.2.2
16:25:53 this sounds to me like scheduling logic
16:25:56 your app connects to 2.2.2.2
16:25:58 2.2.2.2 dies
16:26:03 now your app is busted
16:26:06 because it doesn't know about 1.1.1.1
16:26:21 that is what the LB fixes
16:26:26 1 IP address for all the minions
16:26:45 ok, so you are not talking about a control plane, you are talking about the data plane for availability of apps
16:26:56 ya, although to set it up is control plane :)
16:27:10 I see now, thanks.
16:27:26 probably can wait until milestone #2
16:27:32 * dims__ says o/ a bit late :)
16:27:39 bays control pods, so you mean lb->minion->bays to data plane is the flow?
16:27:42 hi dims__
16:28:02 rprakash a pod, service, and replication controller is launched in a bay
16:28:10 dp pods
16:28:16 a bay is a collection of nodes running a micro OS such as CoreOS or Atomic
16:28:39 one of the bay's nodes is a master (running etcd); the others are minions (running kube-proxy etc)
16:28:54 to control the cluster, you use magnum, which contacts the bay's master node
16:29:39 got it, it's orchestration from cp to dp pods - thanks
16:30:52 #link https://blueprints.launchpad.net/magnum/milestone-1 BPs for milestone-1
16:31:08 let's take a look at the Delivery column on the above link
16:31:21 lots of green :)
16:31:28 Indeed!
16:31:36 for the ones that are blue, can any be updated?
16:31:49 should any be re-scoped?
16:32:20 I updated https://blueprints.launchpad.net/magnum/+spec/implement-magnum-bays
16:32:38 ideally, I'd like the whiteboards in the open BPs to indicate what work is remaining so I can plan accordingly
16:33:05 magnum-backend-docker-* can probably go to milestone #2 I suspect
16:33:10 thanks sdake
16:33:11 unless they are working now
16:33:37 diga_: is there remaining implementation on that BP for milestone-1?
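[Editor's note] The failure mode sdake describes above (an app pinned to one minion IP breaks when that minion dies; a load balancer's single IP hides the failure) can be sketched minimally. This is an illustrative model only, not Magnum code; the `Minion` class, `pick_reachable` helper, and the example IPs are all hypothetical.

```python
# Sketch of the discussion above: behind its single IP, a load balancer
# simply forwards traffic to any minion that is still alive, so the app
# never needs to know individual minion addresses.

class Minion:
    """A hypothetical bay minion with an address and a health flag."""
    def __init__(self, ip, alive=True):
        self.ip = ip
        self.alive = alive

def pick_reachable(minions):
    """Return the IP of the first live minion, as an LB would behind
    its one virtual IP; raise if the whole bay is down."""
    for m in minions:
        if m.alive:
            return m.ip
    raise RuntimeError("no reachable minions in the bay")

minions = [Minion("1.1.1.1"), Minion("2.2.2.2")]
minions[0].alive = False           # minion 1.1.1.1 dies
print(pick_reachable(minions))     # traffic fails over to 2.2.2.2
```

Without the LB, the app holding `2.2.2.2` directly would be "busted" exactly as described; with it, failover is invisible to the app.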
16:34:01 I marked https://blueprints.launchpad.net/magnum/+spec/backend-bay-heat-kube as implemented
16:34:15 both of the blueprint work items are completed I guess
16:34:40 can you control docker containers via magnum client?
16:34:41 adrian_otto: all the magnum-container-* are implemented, but work only against the docker daemon specified in the magnum.conf
16:34:50 sdake: yep
16:34:55 sweet
16:35:08 well I guess that doesn't need to be rescoped then
16:35:16 ok, so do we need a task filed to expand that to work on more daemons?
16:35:21 right adrian_otto
16:35:49 ok, I'll file a BP for milestone-2
16:35:57 https://blueprints.launchpad.net/magnum/+spec/magnum-agent-for-nova
16:36:00 adrian_otto can we add to the agenda the creation of the release announcement in etherpad, please - in case I forget
16:36:08 this is scoped for ml2
16:36:29 yes, let's make an etherpad for drafts now… one moment and I will do that and link it here
16:36:31 are we finally going with zaqar or not?
16:36:42 I don't think we need zaqar
16:36:52 we already have the list of minion IPs
16:36:58 ok
16:37:02 we can round-robin select for scheduling if necessary
16:37:09 I think we need to think through the scheduling of the containers though
16:37:13 it's no easy task
16:37:21 ok
16:37:25 especially when you throw multi-node networking into the mix
16:37:34 ok
16:37:47 #link https://etherpad.openstack.org/p/magnum-release Where we will draft our release announcement.
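[Editor's note] The "round-robin select for scheduling" idea mentioned above can be sketched in a few lines: given the bay's known minion IPs, hand out placement targets in rotation. This is a hypothetical illustration, not Magnum's actual scheduler, and the IPs are made up.

```python
# Round-robin selection over the bay's minion IP list, as suggested in
# the discussion: each new container goes to the next minion in turn.
import itertools

minion_ips = ["1.1.1.1", "2.2.2.2", "3.3.3.3"]  # example bay minions

_picker = itertools.cycle(minion_ips)

def next_minion():
    """Return the next minion IP in round-robin order."""
    return next(_picker)

# Placing four containers wraps back around to the first minion.
targets = [next_minion() for _ in range(4)]
print(targets)  # ['1.1.1.1', '2.2.2.2', '3.3.3.3', '1.1.1.1']
```

As noted in the meeting, real container scheduling is harder than this (health, capacity, multi-node networking), which is why it was deferred past milestone #1.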
16:39:48 feel free to help write it folks :)
16:40:01 let's spend 5-10 mins - team effort :)
16:40:24 #topic Draft Magnum Release Announcement
16:42:44 #link https://blueprints.launchpad.net/magnum/+spec/magnum-docker-backend-selection Selection of multiple docker backends
16:42:50 ^^ dims__
16:43:00 adrian_otto: thanks
16:43:39 adrian_otto: I should be able to help out with that piece
16:44:58 apmelton: awesome
16:45:10 along with the magnum bay stuff as well
16:45:25 I'm still getting up to speed with everything that's been going on since I last attended
16:48:27 Same here
16:51:26 Going to have to look at Zaqar.
16:51:35 sdake: any pointer to uOS we can add?
16:53:56 #topic Open Discussion
16:54:13 adrian_otto: will there be a magnum mid-cycle?
16:54:28 mid-cycle sprint/meetup*
16:56:27 ok looking good to me
16:56:42 uOS is Fedora Atomic or CoreOS
16:56:48 I just call it uOS
16:56:54 not sure if it has an official name - let's make one :)
16:58:44 sdake what does the u mean in uOS?
16:58:45 ok, time is almost up
16:59:14 editing the release announcement will continue after we adjourn, and will be discussed in #openstack-containers
16:59:38 well if you don't like the u, delete it ;)
16:59:42 our next team meeting will be 2015-01-13 at 2200 UTC
16:59:52 thanks folks :)
16:59:56 thanks everyone for attending!
16:59:58 cheers!
17:00:04 #endmeeting