22:08:25 #startmeeting containers
22:08:26 Meeting started Tue Nov 18 22:08:25 2014 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:08:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:08:29 The meeting name has been set to 'containers'
22:08:43 (fetching link to agenda)
22:09:02 #link https://wiki.openstack.org/wiki/Meetings/Containers Our Agenda
22:09:08 #topic Roll Call
22:09:11 Adrian Otto
22:09:19 hey folks \o/
22:09:22 Jesse Butler
22:09:31 daneyon meeting got kicked off now
22:09:40 Dmitry Guryanov
22:10:04 Yuanying Otsuka
22:10:40 o/
22:11:08 #topic Announcements
22:11:20 none prepared. Announcements from team members?
22:11:26 hmm
22:11:42 #link https://review.openstack.org/#/c/135436/
22:11:50 magnum client import
22:11:58 oh, sweet
22:12:04 does nothing, but need to start somewhere :)
22:12:28 +1
22:12:44 any other announcements?
22:13:18 #topic Review Action Items
22:13:24 * adrian_otto adrian_otto to attempt to identify Linux kernel patches that could enable use of nested containers in non-privileged mode (http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-11-11-16.00.log.html#l-40, 16:10:57)
22:13:37 incomplete, carrying forward
22:13:44 #action adrian_otto to attempt to identify Linux kernel patches that could enable use of nested containers in non-privileged mode (http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-11-11-16.00.log.html#l-40, 16:10:57)
22:14:14 #topic Discuss initial core reviewers for https://github.com/stackforge/magnum
22:14:33 ok, so now that we have a repo, it's time to form a group of reviewers to care for and feed it
22:15:05 currently I'm the only core reviewer in the group, and I'd like to arrange a consensus among team members about who should be in the initial group
22:15:17 +1 sdake
22:15:18 since we are starting from a blank slate, we'll need to be willing to adapt this
22:15:56 but initially we are looking for contributors who are willing to allocate at least a few hours a day for a while (perhaps 2 months?) to review, and submit patches
22:16:06 is that a fair bar to set?
22:16:16 yup I think that is a fair requirement
22:16:18 sounds good
22:16:22 fair
22:16:45 we can continually review the group membership and adjust it
22:16:48 so we should probably see who is actually interested :)
22:16:53 * sdake raises hand
22:17:00 * adrian_otto raises hand
22:17:07 * achanda_ raises hand
22:17:25 * dims raises hand
22:17:25 achanda_: what's your LP username?
22:17:46 adrian_otto: abhishek-i
22:17:58 for posterity's sake, i would like to say that i'm absolutely willing to help... but i'm a total noob with all of this. so i'm hoping to learn from you all before i volunteer for critical path jobs.
22:18:01 cool so that looks like a solid core team to me :)
22:18:33 jlb13: so in that case, I ask that you set up the review queue to email you
22:18:43 review the code, and vote on patches with +1 and -1
22:18:53 ok, cool
22:18:54 I'm in the same boat as jlb13
22:18:58 once you feel confident, then we can take it from there
22:19:01 me too
22:19:07 Could you, please, set up the review queue for me too
22:19:20 dguryanov|3: you have to do that for yourself
22:19:25 https://review.openstack.org/#/q/status:open+magnum,n,z
22:19:26 it's part of the gerrit interface
22:19:28 #link https://review.openstack.org/#/q/status:open+magnum,n,z
22:19:29 "watched projects"
22:19:37 OK, thanks!
22:19:39 right, so set up magnum as a watched project
22:19:59 and bookmark the link above for easy access
22:20:27 ok, so let's seek an #agreed on the initial list
22:20:59 i'm watching magnum
22:21:07 Proposed: abhishek-i, sdake, adrian_otto
22:21:14 did I miss any?
22:21:19 dims
22:21:31 Proposed: abhishek-i, sdake, adrian_otto, dims
22:21:43 thanks sdake :)
22:21:52 * sdake watching out for you dims :)
22:22:19 and we have jlb13 and daneyon and dguryanov doing reviews with +1/0/-1 participation
22:22:29 PRs accepted from everyone
22:23:02 some of our contributors are not here today, so I'm willing to revisit this list as often as we want
22:23:31 to vote a reviewer in or out of the core group, we will simply email the ML with the subject tag [magnum] and ask the current cores to vote
22:23:32 wfm, more core = faster dev
22:23:41 simple majority prevails
22:23:59 I will do my best to make sure we get throughput with enough of us with +2 power
22:24:10 #agreed initial core group will be abhishek-i, sdake, adrian_otto, dims
22:24:12 just to be clear adrian_otto, many projects have a -1 = veto on a core review vote
22:24:25 is that in effect or not in effect (I don't care)
22:24:31 just don't want people surprised later :)
22:24:45 good question.. here is how I want to deal with it…
22:25:26 if there is a core who votes -1, and others who vote +2, then I'd like to signal a conversation in IRC or the ML to discuss the point of dispute
22:25:39 so if you -1, be ready to compromise
22:26:04 i have seen it happen before and it was unpleasant :)
22:26:13 anyway, off to more pleasant topics :)
22:26:17 discussions, or lack of?
22:26:25 discussion after a -1 in public
22:26:27 got ugly
22:26:57 this early in the project I think we should have a relatively low bar
22:27:05 as long as there are unit tests, and func tests
22:27:11 let's not get too pedantic about stuff
22:27:12 ya, who cares imo :)
22:27:43 once we have something that works end-to-end we can get more opinionated
22:28:04 doh, sorry, thought it was an hour later my time
22:28:12 Slower: np
22:28:23 * Slower is also interested in group membership
22:28:36 ok, I will revise the #agreed if there are no objections
22:28:43 agree
22:29:08 pausing a moment for anyone else
22:29:19 ok, you're in.
22:29:21 #agreed initial core group will be abhishek-i, sdake, adrian_otto, dims
22:29:24 +1
22:29:27 ack
22:29:41 #agreed initial core group will be abhishek-i, sdake, adrian_otto, dims, Slower
22:29:44 that's better
22:29:49 Maybe include me too, as a former OpenVZ developer :)
22:29:50 agreed
22:29:59 agreed
22:30:03 ok, so the primary goal is to get a POC-quality prototype working
22:30:06 and iterate on it
22:30:12 more the merrier really
22:30:32 +dguryanov
22:31:31 if you want to use -2 power, be committed to addressing the point of dispute with the committer of the patch
22:31:38 don't just block and walk
22:31:53 OK, of course, I understand
22:31:57 ya, -2 is a permanent veto, so be nice about those :)
22:32:12 I like a fast development process very much too :)
22:32:16 dguryanov|3: what is your LP username?
22:32:43 especially for a PoC
22:32:48 dguryanov
22:33:06 one other thing, remember it takes two +2's to have a +A
22:33:11 for those folks new to the review process
22:33:56 #agreed dguryanov, abhishek-i, sdake, aotto, dims, Slower
22:34:13 #agreed
22:34:15 ok, so let's advance to the next.
22:36:00 #topic Open Discussion
22:36:05 about the rest api
22:36:12 I extracted this from Eric's work
22:36:15 https://github.com/sdake/python-magnumclient/blob/master/magnumclient/magnum.py#L17
22:36:46 is this what we are trying to achieve with the magnum service
22:38:00 sdake: yes, that looks about right
22:38:04 being a bit new to the project :) I'm not sure if there is something else planned
22:38:21 so next q, natively, or as a wrapper over docker + k8s?
22:38:27 "attach" is missing
22:38:28 or both?
22:38:37 ya, that was in the review as impossible to implement iirc :)
22:38:38 clearly there is more to define, eg networking etc
22:38:56 slower: a pod defines the network
22:39:08 maybe I need to do more homework, but would container create be similar to a docker build?
22:39:08 it's not impossible, just not part of an async API
22:39:12 sdake: our plan was to build it on top of docker
22:39:25 so native then
22:39:29 eg, reimplement pods?
22:39:38 well I think that is a good question
22:39:50 native is my preference
22:39:50 we should entertain the idea of doing it on top of k8s
22:40:02 if we do native that wfm, but we should consider abstraction to run with k8s as well imo
22:40:11 agreed
22:40:29 k8s+swarmd -> magnum -> nova
22:40:31 daneyon: do you have any idea how to implement the networking natively that pods do?
22:40:35 that should work
22:40:46 do pods offer everything we need for networking? Or do we want to define eg open ports to containers etc.. maybe a question for later
22:40:57 slower: we are missing a "services" object
22:41:04 sdake: i can dig in and let you know.
22:41:06 if we want a true abstraction over k8s and docker
22:41:21 well that seems reasonable then
22:41:27 I need to look at k8s more
22:41:39 everyone should have a look if we are implementing pods in the project
22:41:59 I wish erw was here
22:42:15 pong
22:42:16 the source code for fig was offered
22:42:18 aha!!
22:42:24 erw: can you elaborate?
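The exchange above debates an async API surface, with "attach" called out as the one verb that does not fit an asynchronous model. A minimal sketch of that idea, where every verb is a non-blocking submission that the caller polls later; all names here are illustrative assumptions, not the real python-magnumclient API:

```python
# Hypothetical sketch: an async-style API front end.  Each verb is
# accepted immediately and recorded for later processing; there is no
# blocking call, which is why an interactive "attach" stream is excluded.
# Class and verb names are illustrative only.

class AsyncAPISketch:
    VERBS = ("container-create", "container-stop", "container-delete",
             "pod-create", "bay-create")

    def __init__(self):
        self.requests = []  # stand-in for a real work queue

    def submit(self, verb, **params):
        """Accept a request without blocking; caller polls status later."""
        if verb not in self.VERBS:
            raise ValueError("unsupported verb: %s" % verb)
        request = {"verb": verb, "params": params, "status": "accepted"}
        self.requests.append(request)
        return request
```

Under this model, "attach" would need a separate streaming channel outside the request/poll cycle, which matches the comment that it is "not part of an async API".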
22:42:36 adrian_otto: for Orchardup, specifically
22:42:51 right
22:43:00 * erw will read through the meeting notes later btw
22:43:23 adrian_otto: to be honest, the more I see Magnum progressing, I wonder if the orchard stuff is worthwhile.
22:43:33 link to orchard?
22:43:33 yes, it exists and it does much of the same things...
22:43:41 I would say k8s pods + the kolla networking bits should be enough to get us going from a networking standpoint.
22:43:41 https://www.orchardup.com/
22:43:47 a python project
22:43:58 might help, not sure
22:44:00 daneyon: I think the idea is to make a native implementation though
22:44:22 eg, no dependency on k8s
22:44:29 eg/ie rather ;)
22:44:43 sdake: i see. I need to dig into the k8s networking code and I'll circle back around.
22:45:06 erw: in this api, bay create, how is that intended to operate?
22:45:19 sdake: bay create creates an instance in nova
22:45:20 does that create physical hardware?
22:45:30 what about running on bare metal?
22:45:31 sdake: possibly creating a pod (which may possibly create a container)
22:45:55 sdake: instances in nova may be baremetal (Ironic) or near-metal (nova-docker / lxd / etc)
22:46:41 so bay create is an abstraction over all those various types of end platforms
22:47:20 yeah
22:47:20 sdake: bay create is specifically for creating the undercloud.
22:48:18 so you can identify which pods end up on which hardware?
22:48:28 yes
22:48:43 sdake: the question comes down to: does bay-create allow creating pods and containers… or does container-create possibly create a pod and bay? Possibly the latter.
22:48:45 and if you don't care which hardware, what is the api call? just pod create?
22:48:46 the scheduling of all this for scalability will be interesting..
22:48:46 first implementation can just take a bay id
22:49:26 I suspect ultimately we will just ask for a pod, and the scheduling will deal with bays
22:49:29 container-create uses a bay if one is specified, otherwise tries to fit in existing ones, or auto-create
22:49:58 adrian_otto: rather it would use a pod if specified, otherwise it creates a pod. A pod, if a bay is not specified, creates a bay
22:50:00 right?
22:50:12 erw: yes
22:50:12 erw: that makes sense to me..
22:50:24 interesting approach
22:50:26 i like it
22:50:34 does something nobody else does, and will make kolla work \!o!/
22:50:35 my only twist on that...
22:51:03 is that if you don't specify a bay, and a bay already exists with capacity, allow that slack capacity to be used rather than creating new
22:51:10 sdake: you’re looking at nova+ironic+magnum as an undercloud for kolla?
22:51:25 adrian_otto: +1, I think that's the scheduler role..
22:51:26 erw: magnum i think
22:51:27 adrian_otto: I think that’s something that needs to be tunable, but yes.
22:51:30 not sure what else ;)
22:51:46 erw, Slower: +1
22:52:11 re deployment and bays
22:52:22 is there going to be an agent for bare metal?
22:52:25 or just ironic?
22:52:36 is container-stop == "docker kill" ?
22:52:38 adrian_otto: actually, the reason for pods and bays being split was so more than one pod could exist on a bay
22:52:44 I think nova-docker will be the go-to for bare metal basically
22:53:02 nova-docker is single node, slower?
22:53:15 container in container is still bare metal, I don't think there are any performance issues there?
22:53:18 slower: pods are painfully difficult to implement in the networking model I suspect
22:53:29 sdake: nova-docker talks to one docker daemon, yes
22:53:39 adrian_otto: probably ‘docker stop’ which tries to stop kindly (versus ‘kill’ which is SIGKILL)
22:53:46 is nova-docker nova + docker backend?
22:53:50 i.e. SIGTERM vs SIGKILL
22:54:15 sdake: I guess that depends..
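The create cascade agreed above (container-create uses a pod if specified, otherwise creates a pod; a pod without a bay reuses slack capacity or creates a bay) can be sketched in a few lines. This is a minimal illustration of the scheduling rule, not Magnum code; the class and field names are invented for the example:

```python
# Hypothetical sketch of the container -> pod -> bay auto-create cascade:
# reuse an existing bay with slack capacity before creating a new one,
# exactly as proposed in the discussion.  All names are illustrative.

class CascadeScheduler:
    def __init__(self):
        self.bays = []      # each bay: {"id": n, "capacity": N, "pods": [...]}
        self._next_id = 0

    def _new_bay(self, capacity=4):
        self._next_id += 1
        bay = {"id": self._next_id, "capacity": capacity, "pods": []}
        self.bays.append(bay)
        return bay

    def create_pod(self, bay=None):
        # a pod, if a bay is not specified, uses slack capacity or creates a bay
        if bay is None:
            bay = next((b for b in self.bays
                        if len(b["pods"]) < b["capacity"]), None)
            if bay is None:
                bay = self._new_bay()
        pod = {"bay": bay["id"], "containers": []}
        bay["pods"].append(pod)
        return pod

    def create_container(self, pod=None, bay=None):
        # container-create uses a pod if specified, otherwise creates one
        if pod is None:
            pod = self.create_pod(bay=bay)
        container = {"pod_bay": pod["bay"]}
        pod["containers"].append(container)
        return container
```

Note how the "tunable" slack-capacity behavior lives entirely in `create_pod`: swapping the `next(...)` lookup for an always-create policy would give the stricter isolation erw mentions.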
mmm, I think subcontainers would have their own network namespace with their own virtual interfaces.. are they bridged to the host directly? I don't know that..
22:54:20 ok, so here is what I think we should set as GOAL-1: container-create, container-stop, container-delete
22:54:21 sdake: nova-docker is a Nova virt driver that uses docker-py to talk to the docker backend
22:54:32 get that working end-to-end leveraging docker for now
22:54:45 then iterate from that point to become more ambitious
22:55:16 thoughts?
22:55:22 maybe need start and restart as well?
22:55:28 it is a good point, if subcontainers have to network through the parent it could be painful
22:55:31 create+execute
22:55:34 so milestone #1 then = rest client + rest server + db + container create/delete/list/show?
22:55:49 do we need a db?
22:55:55 yes a db is mandatory
22:55:58 sdake: what if we just skip the client to start with?
22:56:00 when is a different question
22:56:02 I actually question the need for execute, in general
22:56:08 adrian_otto: how are you going to make the calls? :)
22:56:09 just use docker as the client?
22:56:19 hacked to add an auth header
22:56:19 we can't do that with pods I don't think
22:56:37 tbh that sounds more complicated than just making a client
22:56:41 i.e. can ‘docker execute’ just be a special case of ‘container create’?
22:56:49 sdake: good point
22:56:56 (I have this same question for the docker api itself actually)
22:56:58 very good point.. doh!
22:57:06 +1 to milestone #1 as above
22:57:35 sdake: so in the proposed milestone we just do container stuff, no bay/pod apis?
22:57:57 slower: adrian proposed, wfm - that should take 2-3 weeks if we are all working on it
22:58:04 rephrased: milestone #1 = rest client + rest server + db + container create/delete/list/show
22:58:41 I thought the idea was to have an image in the 'bay' that held the rest server?
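Milestone #1 as rephrased above is "rest client + rest server + db + container create/delete/list/show". A minimal sketch of just the store layer behind those four verbs, using an in-memory dict as a stand-in for the mandatory database; names and fields are illustrative, not the real Magnum schema:

```python
# Hypothetical sketch of the milestone #1 container store:
# create / delete / list / show backed by a "db" (an in-memory dict
# here, standing in for the real database the team agreed is mandatory).
import uuid


class ContainerStore:
    def __init__(self):
        self._db = {}  # container_id -> record

    def create(self, name, image):
        """Record a new container and return its id."""
        container_id = str(uuid.uuid4())
        self._db[container_id] = {
            "id": container_id,
            "name": name,
            "image": image,
            "status": "created",
        }
        return container_id

    def show(self, container_id):
        """Return one container record."""
        return self._db[container_id]

    def list(self):
        """Return all container records."""
        return list(self._db.values())

    def delete(self, container_id):
        """Remove a container record."""
        del self._db[container_id]
```

The REST server and client would be thin wrappers over these four calls, with the actual start/stop/kill work delegated to docker (the transcript notes nova-docker does this via docker-py).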
22:58:43 +1 let's set a deadline
22:58:58 * Slower isn't ready to +1 that yet :)
22:59:05 * Slower is slow
22:59:10 so, we are closing in on the end of the hour
22:59:12 need more minerals!!
22:59:17 hehe
22:59:22 let's get an ML thread open on milestone 1
22:59:28 sounds good to me
22:59:36 +1
22:59:37 and let's organize around that
22:59:38 or we could switch to #openstack-containers
22:59:49 email thread is good
22:59:53 maybe we will pick up more contrib
23:00:00 thanks everyone for attending today
23:00:04 +1 on making progress, in general
23:00:11 thanks
23:00:15 thanks guys
23:00:16 #endmeeting