03:00:09 #startmeeting zun
03:00:10 Meeting started Tue Jul 5 03:00:09 2016 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:14 The meeting name has been set to 'zun'
03:00:15 #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-07-05_0300_UTC Today's agenda
03:00:20 #topic Roll Call
03:00:25 Madhuri Kumari
03:00:26 Wenzhi Yu
03:00:30 Shubham sharma
03:00:46 Vivek Jain
03:01:06 o/
03:01:33 Thanks for joining the meeting mkrai Wenzhi shubhams Vivek_ sudipto
03:02:07 #topic Announcements
03:02:13 o/
03:02:15 The python-zunclient repo was created!
03:02:20 hi
03:02:23 #link https://review.openstack.org/#/c/317699/
03:02:28 #link https://github.com/openstack/python-zunclient
03:02:51 Thanks mkrai for creating the repo
03:03:07 hongbin, :)
03:03:13 flwang1: yanyanhu hey
03:03:17 :)
03:03:34 Any question about the python-zunclient repo?
03:03:40 The service-list command is now supported
03:04:02 #link https://review.openstack.org/#/c/337360/
03:04:16 Great!
03:04:17 great! will try zunclient later
03:04:23 I will check and confirm
03:04:32 Thanks!
03:04:39 #topic Review Action Items
03:04:45 hongbin draft a spec for design option 1.1 (Work In Progress)
03:04:50 #link https://etherpad.openstack.org/p/zun-containers-service-design-spec
03:05:11 I will continue to work on the etherpad this week
03:05:21 You are welcome to collaborate :)
03:05:37 looks great
03:05:45 thanks
03:06:01 I will leave the etherpad offline
03:06:08 #topic Architecture design
03:06:15 #link https://etherpad.openstack.org/p/zun-architecture-decisions
03:06:53 At the last meeting, we decided to choose option #1.1
03:07:04 I want to confirm it again
03:07:14 Is everyone on the same page?
03:07:32 yes, I think so :)
03:07:41 Yes
03:07:49 +1
03:07:54 +1
03:07:57 Anyone want a clarification? Now is the time
03:08:25 OK. Looks like everyone agrees
03:08:36 Then, we will work according to the decision
03:08:41 #topic API design
03:08:46 #link https://blueprints.launchpad.net/zun/+spec/api-design The BP
03:08:51 #link https://etherpad.openstack.org/p/zun-containers-service-api Etherpad
03:09:07 mkrai: I give the stage to you :)
03:09:14 hongbin, Thanks
03:09:33 I have started creating a spec for the Zun API according to our design
03:09:43 #link https://etherpad.openstack.org/p/zun-containers-service-api-spec
03:09:56 I want everyone to please have a look
03:10:21 And feel free to add to it
03:10:48 According to our discussion, in our first implementation we will support docker and later COEs
03:10:49 mkrai: just had a quick glance
03:10:56 i think at the init stage
03:11:09 we don't have to care about the details of each api endpoint
03:11:21 we need to figure out the resources asap
03:11:28 Yes, agree
03:11:28 and define the details later
03:11:30 Yes it will generally evolve flwang1
03:11:44 Ok that will be helpful for me too :)
03:12:01 I will add all the resources this week
03:12:38 since it's easy to define the actions if we can figure out the objects/resources
03:12:49 That's all from my side
03:12:55 and a relationship between each resource would be lovely
03:13:04 Sure I will add that
03:13:04 yeah and given that we have two overlapping runtimes, i wanted to understand if everyone feels - we should look for an overlap - or should we be more like - building our stuff around one runtime - support it well - and then move to the other?
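Since the discussion above is about nailing down the resources and their relationships before the endpoint details, here is a purely illustrative sketch of what one such "container" resource might look like. Every field name here is an assumption for discussion, not taken from the actual etherpad spec.

# Purely illustrative: one possible shape of a "container" resource for the
# API spec etherpad. Field names are assumptions, not the agreed schema.
container = {
    "uuid": "generated-by-the-service",
    "name": "web-1",                      # user-supplied name
    "image": "cirros:latest",             # image to run
    "command": ["/bin/sh"],               # optional command override
    "status": "Running",                  # lifecycle state
    "links": [{"rel": "self", "href": "/containers/<uuid>"}],  # relationships to other resources
}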
03:13:32 sudipto: good question
03:14:09 We should abstract container runtimes also
03:14:25 sudipto: define the two runtimes? you mean container and COE?
03:14:37 docker and rocket
03:14:42 flwang1, sorry should have been clearer. I meant docker and rkt
03:15:25 my point being, is there a need to build both of these API supports simultaneously? (I'd be clear i have no bias towards either)
03:15:46 IMO it is not good to have different sets of APIs for container runtimes even.
03:15:59 it is not about two different sets of APIs
03:16:18 think about libvirt being the driver nova was built on and later VMware was added... if that serves as a good analogy
03:16:32 one set of APIs, two different drivers?
03:17:06 Wenzhi, yeah it will be one set of APIs only. However, you will probably have NotImplementedError() for the one that doesn't support it.
03:17:09 sudipto: you mean two sets of APIs for docker and rkt? or two sets of APIs for runtimes and COEs?
03:17:21 hongbin, two sets of APIs for runtimes and COEs...
03:17:28 sudipto: I see
03:17:47 sudipto: It looks like you are challenging design option 1.1 :)
03:17:58 sudipto: but let's discuss it
03:18:07 hongbin, no, i don't think i am challenging that :)
03:18:18 hongbin, i am just sticking to the API design discussion...
03:18:39 anyway, i think we can take it offline...
03:18:43 sudipto: ok. could you clarify?
03:18:44 on the etherpad that is...
03:19:05 basically - should we be building around OCI/CNCF?
03:19:51 If collaboration with OCI/CNCF is possible, I am open to that
03:20:33 and my point in a more crude way was - if docker supports x y z APIs - should we be building the same x y z for rocket and throw a NotImplementedError()
03:20:52 That means we need to figure out something and propose it to OCI/CNCF? or just implement whatever OCI/CNCF defines?
03:21:06 maybe mkrai is right - we should probably abstract it at a level - where there is an overlap
03:21:22 my only concern is - we shouldn't end up doing things not so well for either runtime... that's all
03:21:33 Yes, that is one option
03:21:45 * eliqiao joins
03:21:50 eliqiao: hey
03:22:06 hongbin, i will take this on the etherpad and with you and mkrai later, don't want to derail the meeting over it :)
03:22:20 * eliqiao just arrived at hangzhou (for bug smash day)
03:22:24 focus on docker, my 0.02
03:22:28 sudipto: well, I don't have many items to discuss later :)
03:22:55 flwang1, yeah i think i meant something like that :) as in the focus bit :)
03:23:14 don't end up being a place where nothing works well...
03:23:32 Also, it is important to state which version of the remote APIs we will support.
03:23:46 Do we agree to concentrate on docker now?
03:24:01 sudipto, the latest one?
03:24:19 mkrai, yeah mostly, but in the document per se it would be good to cite the latest version by number
03:24:29 since the API updates are pretty frequent
03:24:52 Ok
03:25:08 well, I compared the APIs of docker and rkt
03:25:23 obviously, docker has double the API operations of rkt
03:25:35 However, the basics look the same
03:25:40 Agree hongbin
03:25:49 e.g. start, stop, logs, ....
03:25:54 hongbin, mkrai so we would look at mostly the basic operations?
03:26:03 i was more looking at a thing like docker attach for instance...
03:26:12 do you consider that basic or necessary?
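To make the "one set of APIs, two different drivers" idea above concrete, here is a minimal sketch under assumed names (the class and method names are illustrative, not actual Zun code): a common driver interface that docker and rkt backends implement, with an optional operation such as attach raising NotImplementedError for a runtime that lacks it.

import abc


class ContainerDriver(abc.ABC):
    """Hypothetical common interface for container runtimes (illustrative only)."""

    @abc.abstractmethod
    def start(self, container_id):
        """Start a created container."""

    @abc.abstractmethod
    def stop(self, container_id):
        """Stop a running container."""

    def attach(self, container_id):
        """Attach to a container; optional, not every runtime supports it."""
        raise NotImplementedError()


class DockerDriver(ContainerDriver):
    """Driver backed by the docker remote API (real calls omitted)."""

    def start(self, container_id):
        pass  # call the docker remote API here

    def stop(self, container_id):
        pass

    def attach(self, container_id):
        pass  # docker supports attach


class RktDriver(ContainerDriver):
    """Driver backed by rkt; attach is inherited and keeps raising NotImplementedError, mirroring the discussion above."""

    def start(self, container_id):
        pass

    def stop(self, container_id):
        pass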
03:26:56 This is a challenging question
03:26:57 The basic ones
03:27:07 I think that we should focus on the necessary ones
03:27:16 docker is the target, but we can implement it in several stages
03:27:41 that said
03:27:57 Yes the basic ones first and later the necessary ones
03:28:03 at the 1st stage, implement the basic (common) API between docker and rocket
03:28:05 agreed.
03:28:12 and then add more
03:28:20 agree
03:28:23 that's not a problem
03:28:39 sounds good!
03:28:45 wfm
03:28:48 +1
03:29:03 hongbin, please mark it as agreed
03:29:17 Everyone agree?
03:29:36 agreed
03:29:50 * sudipto is worried for the future, but agrees in the present
03:29:51 #agreed The general approach is to define the basic (common) API at the beginning, and then add more later
03:29:57 so we agreed on one set of container runtime APIs and two different drivers (for docker and rkt)?
03:30:07 sudipto: We can discuss it further later
03:30:15 hongbin, yeah sounds good.
03:30:55 Wenzhi: One thing: one set of APIs for both docker and rkt
03:31:22 For the runtimes and COEs, we can discuss whether it is one set of APIs or two
03:31:44 sudipto: We have plenty of time, we can touch this topic a bit here?
03:32:09 I think it will be very difficult to abstract COEs and container runtimes
03:32:10 hongbin, you mean the COE API design?
03:32:25 yes
03:32:52 sudipto: you seem to propose one set of APIs for COEs and runtimes?
03:33:06 hongbin, no no, i don't think it is to overlap them
03:34:04 flwang1, the other day there was a mention of the vmware APIs in nova - i thought you meant something like managing a cluster with the same set of apis - that a compute driver like libvirt would support?
03:34:31 maybe i missed something, but i think we're not going to create one set of APIs to fit both containers and COEs
03:35:06 IMHO, we do need a group/set of APIs for containers and one for COEs, and the hard part is not the container part
03:35:12 Yes flwang1
03:35:12 the hard part is the COE part
03:35:37 Yes, possibly we need another etherpad for COEs
03:35:38 because we're going to have a native COE, and the other COE support may come from magnum
03:36:08 so how to work out a group/set of APIs to make everybody happy is the hard part, does that make sense?
03:36:23 or i'm making a mess ;)
03:36:26 yeah it is not just hard, it's probably impossible.
03:36:27 Each COE has different resources, so a different set of APIs for each COE
03:36:46 sudipto, I agree with you
03:37:44 hongbin, I think it's better to create an etherpad for all COE APIs
03:37:54 And then try to pull out some common parts if possible
03:37:54 hah, i think that's the interesting part of software design :)
03:38:00 since we have to compromise
03:38:06 #action hongbin create an etherpad for the COE API design
03:38:09 a different set of APIs for each COE is not beautiful I think
03:38:10 or i would say balance
03:38:15 you will be biased in such a case, i think that's fine though ;)
03:38:34 figure out a balance point between the reality and the design
03:39:20 I think the hardest part is that almost everyone is using the native COE APIs; how to convince them to use our API is the question
03:39:28 hongbin, ++
03:39:33 that's my primary concern
03:39:36 Frankly, I don't have an answer for that
03:40:03 Unless we make it much simpler or more high-level or something
03:40:29 hongbin: it's only possible if we provide a wrapper over the existing COEs, and that wrapper should be the same for each COE for the end user/customer
03:40:43 unless we can unify them, or I don't think people will use a cmd like zun k8s XXX
03:41:01 Wenzhi: yeah right
03:41:09 hongbin: i don't think that's really a problem IMHO
03:41:24 flwang1: why?
03:41:30 the main target user of Zun is the application developer, not the operator
03:41:48 do you remember our discussion about the use cases of Zun?
03:41:57 yes
03:42:12 IMHO, it's like: As a user, i want to create a container
03:42:25 then i can manage its lifecycle with zun's api/client
03:42:43 if
03:42:59 As a user, I want to replicate my container to N
03:42:59 the user also wants to create containers based on a COE cluster
03:43:06 Another use case?
03:43:11 and the cloud provider didn't provide it
03:43:23 then he may have to play with the COE API
03:43:40 then that's the user's choice
03:43:50 if we're doing a bad job
03:44:06 then don't expect users to use zun's api to talk with the COE
03:44:24 i think that's fair enough
03:44:39 Yes, this area needs more thinking
03:44:51 I am still struggling to figure it out
03:44:59 imagine supermarket A has good prices and service, why would you choose B?
03:45:09 I think users would want to use Zun if they want to run COEs on openstack infrastructure
03:45:10 unless B has better prices and service
03:46:46 mkrai: yep, the good gene of zun is that it's the native container api of openstack
03:47:02 it should know more about the infra than the COE itself
03:47:07 Yes, that is a point
03:47:08 mkrai, that's a weak case at the moment... since the direction is more towards making openstack an app on kubernetes :) If i read through the SIG discussions right.
03:47:25 in other words, we can provide better integration with other services
03:47:28 however, that doesn't mean the other way around is not possible,
03:47:48 and it's probably good for people who have already invested heavily in openstack to try out containers as well.
03:48:42 How do we make COEs integrate well with OpenStack?
03:49:10 I think it is not our job to teach operators how to enable Kuryr in a COE?
03:49:40 If Kuryr/neutron is not configured in the COE, Zun cannot use it
03:50:07 Same for Cinder, Keystone, ...
03:51:02 OK. Looks like we need to take the discussion to an etherpad
03:51:16 #topic Open Discussion
03:51:21 I have a concern about managing the state of containers
03:51:47 Are we planning to manage it in the Zun db?
03:52:09 mkrai: I think yes
03:52:22 mkrai: Although we need to reconsider the "db"
03:52:27 How do we sync it with the container runtime?
03:52:34 mkrai: I think etcd will be better
03:52:47 etcd +1
03:53:08 mkrai: I think it is similar to how Nova syncs with VMs
03:53:32 I certainly have a strong feeling that we shouldn't treat containers and VMs similarly.
03:53:33 Ok sounds good
03:53:42 we can sync them with a periodic task maybe
03:53:54 the state sync for VMs can be asynchronous - you might end up killing a container before that happens :)
03:54:05 sudipto: OK. I think it is like how the kubelet syncs with containers :)
03:54:09 as in containers could be very short-lived as well.
03:54:22 hongbin, hah ok.
03:54:53 mkrai, were you inclined towards something like a distributed db? like yuanying pointed out?
03:55:04 but agree, containers are started and stopped very frequently, so a db is not the right choice
03:55:42 sounds reasonable ^^
03:56:08 Yes
03:56:52 OK. Sounds like we agree?
03:57:25 silent....
03:57:46 agreed.
03:57:47 agree
03:57:53 :)
03:58:03 +1
03:58:09 +1 we need a db operation implementation for the etcd backend
03:58:13 And also what about reconsidering "rabbitMQ"?
03:58:24 +1
03:58:42 #agreed store container state in etcd instead of the db
03:58:59 yuanying: Yes, I am thinking about the message queue as well
03:59:03 yuanying: what do you think?
03:59:05 taskflow also uses a kvs for message passing
03:59:37 https://wiki.openstack.org/wiki/TaskFlow
03:59:46 We can implement the conductor on a KVS
04:00:18 OK. Time is up. We can re-discuss rabbitmq on the ML or at the next meeting
04:00:27 All, thanks for joining the meeting
04:00:30 Thanks everyone
04:00:32 #endmeeting
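A minimal sketch of the agreed direction (store container state in etcd, sync it with the runtime via a periodic task). It assumes the etcd3 Python client and the docker SDK for Python purely for illustration; neither library, the key layout, nor the field names are implied to be the actual Zun implementation.

import json
import time

import docker  # docker SDK for Python (assumption, not necessarily what Zun uses)
import etcd3   # etcd3 client (assumption, not necessarily what Zun uses)


def sync_container_state(etcd_host='127.0.0.1', interval=10):
    """Periodically mirror local container state into etcd (illustrative only)."""
    etcd = etcd3.client(host=etcd_host)
    docker_client = docker.from_env()

    while True:
        for container in docker_client.containers.list(all=True):
            state = {
                'name': container.name,
                'status': container.status,  # e.g. 'running', 'exited'
            }
            # One key per container; the /zun/containers/<id> layout is an assumption.
            etcd.put('/zun/containers/%s' % container.id, json.dumps(state))
        time.sleep(interval)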