03:00:09 #startmeeting zun
03:00:10 Meeting started Tue Aug 9 03:00:09 2016 UTC and is due to finish in 60 minutes. The chair is hongbin_. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:13 The meeting name has been set to 'zun'
03:00:16 #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2016-08-09_0300_UTC Today's agenda
03:00:20 #topic Roll Call
03:00:28 Namrata
03:00:31 shubham
03:00:32 Madhuri Kumari
03:00:39 Wenzhi
03:00:43 hi, yanyan is here
03:01:00 Dilip
03:01:08 vikas
03:01:11 o/
03:01:20 Thanks for joining the meeting Namrata shubhams mkrai Wenzhi yanyanhu itzdilip vikasc flwang
03:01:27 #topic Announcements
03:01:39 I have no announcements. Does anyone have one?
03:01:54 #topic Review Action Items
03:01:57 none
03:02:03 #topic Runtimes API design (mkrai)
03:02:08 mkrai: ^^
03:02:40 I have tested the API patch and it is working for all APIs except the container-show command
03:03:11 I have uploaded the patch in zunclient for all container-related commands
03:03:24 #link https://review.openstack.org/#/c/352357/
03:03:39 Nice
03:04:01 Everyone please review the patch
03:04:08 What is wrong with the show command though?
03:04:23 The container controller patch needs a revision, which I will do after the meeting
03:04:31 sure
03:04:47 hongbin_, the issue is with the response being an object
03:05:04 It should not be a difficult issue
03:05:10 I will fix it today
03:05:18 sounds good
03:05:26 The rest of the commands are working fine :)
03:05:46 Please feel free to test the APIs and post your comments
03:06:10 hongbin_, I have also updated your compute patch
03:06:27 mkrai: Yes, I saw that. It looks good
03:06:41 Yes, it is working now
03:07:05 That's all from my side
03:07:24 Thanks mkrai.
03:07:33 Any questions about the runtime API?
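[Editor's note: the container-show issue above — "the response being an object" — is a classic JSON serialization failure. A minimal sketch of that class of fix is below; the `Container` class and `to_json` helper are hypothetical illustrations, not actual Zun code.]

```python
# Sketch: json.dumps raises TypeError on arbitrary objects, so a REST
# handler returning a domain object must convert it to a dict first.
# Container and to_json are invented names for illustration only.
import json


class Container:
    def __init__(self, uuid, name, status):
        self.uuid = uuid
        self.name = name
        self.status = status


def to_json(obj):
    """Serialize a response, converting plain objects to dicts first."""
    if not isinstance(obj, (dict, list, str, int, float, bool, type(None))):
        obj = vars(obj)  # fall back to the object's attribute dict
    return json.dumps(obj)


container = Container("c1a2", "web", "Running")
print(to_json(container))
```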
03:08:12 #topic Nova integration (Namrata)
03:08:17 #link https://blueprints.launchpad.net/zun/+spec/nova-integration The BP
03:08:21 #link https://etherpad.openstack.org/p/zun-containers-nova-integration The etherpad
03:08:29 Namrata: ^^
03:08:30 I am working on the spec for Nova integration
03:08:36 and will submit it by this week
03:09:24 Great
03:09:47 Namrata, let us know if you need any help on that
03:09:53 yeah sure
03:09:57 Thanks mkrai
03:10:05 Namrata: Could you tell us your general thoughts about the design?
03:11:03 I will include both of the implementation approaches which we discussed in the last meeting
03:11:27 And as of now we don't have a scheduler, so we will consider Nova's scheduler only
03:11:46 And the driver will work the same as the Ironic driver
03:12:16 OK
03:12:33 I remember we discussed two approaches at the last meeting
03:12:48 1. Ironic approach (use the Nova scheduler)
03:12:56 Yeah, I will write both implementations in the spec
03:13:01 2. VMware approach (disable the Nova scheduler)
03:13:13 Hmm, ok
03:13:33 hongbin_, Is there a way to disable the scheduler?
03:13:52 mkrai: Just use Nova as a proxy
03:14:31 Nova -> Zun API -> Zun scheduler -> Zun compute
03:14:44 Ok, got it
03:14:50 Like this: http://docs.openstack.org/kilo/config-reference/content/vmware.html
03:15:01 hongbin_, actually the Nova scheduler still takes effect. The scheduling just happens across multiple vCenters
03:15:26 yanyanhu: Then it sounds like two-level scheduling
03:15:28 hongbin_, yes, basically the 'real' scheduling is done inside vCenter/Zun
03:15:41 Interesting
03:15:44 if we only have one Zun/vCenter instance there
03:16:06 yes, the scheduler is tightly coupled, so it can't be disabled I guess
03:16:09 so what you stated is right
03:16:43 ok
03:16:51 mkrai, using the second way, there will be only one available host for the Nova scheduler to choose from, actually :)
03:17:08 s/host/nova-compute node
03:17:25 the compute node is actually the Zun service
03:17:25 yanyanhu, How?
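[Editor's note: the "Nova -> Zun API -> Zun scheduler -> Zun compute" proxy flow discussed above can be sketched as two-level scheduling: Nova sees one "compute node" (the whole Zun deployment), and real host selection happens in a Zun-side scheduler. All class and host names below are hypothetical illustrations, not Zun code.]

```python
# Sketch of the VMware-style proxy approach, under invented names:
# Nova delegates placement to a second-level scheduler inside Zun.
import random


class ZunScheduler:
    """Second-level scheduler: picks a concrete container host."""

    def __init__(self, hosts):
        self.hosts = hosts

    def select_host(self):
        # A real scheduler would filter and weigh hosts; pick randomly here.
        return random.choice(self.hosts)


class ZunProxyDriver:
    """What Nova would see: a single 'compute node' forwarding to Zun."""

    def __init__(self, scheduler):
        self.scheduler = scheduler

    def spawn(self, container_name):
        host = self.scheduler.select_host()
        return {"container": container_name, "host": host}


driver = ZunProxyDriver(ZunScheduler(["zun-node-1", "zun-node-2"]))
print(driver.spawn("demo"))
```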
03:17:53 in that case, Zun will be a nova-compute node for Nova
03:18:05 so
03:18:32 there is no real scheduling happening on the Nova side, I feel
03:18:44 yes, just like Ironic
03:19:43 if we choose the other way, Nova will be responsible for scheduling instances
03:20:19 I think the first approach is more scalable: Nova chooses a cluster, Zun picks a node in the cluster
03:20:29 just my understanding, maybe not accurate
03:20:57 hongbin_, what do you mean here by cluster?
03:21:21 Like VMware: Nova picks a cluster, vCenter picks a host
03:21:32 I see
03:21:36 Oh, I might have misunderstood something
03:21:40 so two-level scheduling
03:21:44 Yes
03:22:43 OK, let's discuss the details in the spec Namrata will write
03:22:50 sure
03:22:57 Thanks Namrata for working on this
03:22:58 +1
03:23:11 #topic Integrate with Mesos scheduler (sudipto)
03:23:20 sudipto_: here?
03:24:01 It looks like sudipto is not here. Does anyone else want to discuss this topic?
03:24:35 OK. Then, let's start the open discussion
03:24:42 #topic Open Discussion
03:25:08 I have one topic to discuss, if nobody else has anything
03:25:20 hongbin_, after we have the basic Docker API support, what shall we work on?
03:25:24 sure, go ahead plz
03:25:38 mkrai: There are several priorities
03:25:40 We should decide on some features
03:25:47 mkrai: image, network, storage
03:26:14 * sudipto_ is eavesdropping
03:26:15 mkrai: Do you have any feature in mind?
03:26:23 sudipto_: hey
03:26:28 So I think we should add a few of these topics to the next meeting
03:26:37 sorry, I am out for a customer visit... so I couldn't join in time.
03:27:04 I also feel image, storage and networking are priorities
03:27:11 mkrai: sure, we will start exploring additional topics next time
03:27:23 sudipto_: NP at all
03:27:47 Let's enhance the Docker support first
03:28:09 hey
03:28:16 Thanks hongbin
03:28:17 Not sure why my name changed
03:28:26 ....
03:28:44 haha
03:28:45 OK.
Let's continue
03:28:56 yeah, my view is the same; the Mesos updates can wait till we finalise the Docker support...
03:29:07 sudipto_: ack
03:29:44 For the next priority, I remember we have an etherpad listing them
03:29:58 * hongbin is finding the etherpad
03:30:27 #link https://etherpad.openstack.org/p/container-management-service
03:31:07 Several things in the list
03:31:30 1. Implement a simple scheduler
03:31:57 2. Neutron integration
03:32:03 yes, let's list it somewhere so that contributors can take it
03:32:10 3. Cinder integration
03:32:45 mkrai: sure. First, we need to decide which one is the top priority
03:32:55 4. Glance integration
03:32:59 Yes
03:33:15 mkrai: Do we have Keystone integration ready?
03:33:20 Neutron integration should be the easiest one
03:33:35 Yes
03:33:36 since Kuryr already supports libnetwork
03:34:12 vikasc: hopefully it is easy
03:34:38 and Kuryr currently supports bare metal only
03:34:40 mkrai: How about multi-tenancy?
03:35:02 hongbin, No, it is not supported yet
03:35:18 mkrai: ack. I think multi-tenancy is very important
03:35:23 We need to look into it
03:35:25 Yes, agreed
03:35:43 #action hongbin to create a BP for multi-tenancy support
03:36:08 Anything else?
03:36:31 5. Host management
03:36:39 6. Magnum integration
03:37:17 also composition, maybe
03:37:26 but not urgent
03:37:32 yanyanhu: ack
03:37:37 Shall we pick some priority topics to include in the next meeting?
03:37:50 mkrai: if you want, yes
03:38:08 I think multi-tenancy and Glance integration are the priorities, I feel
03:38:16 mkrai, totally agreed. I think we have to narrow our focus to get something working first.
03:38:50 sudipto_: mkrai got the basics of the runtime API working already
03:39:04 hongbin, oh great. I wasn't aware.
03:39:11 +1 as well for Glance integration and multi-tenancy support
03:39:20 Maybe I could do some code reviews - if you guys add me?
03:39:29 sure sudipto_, I will add you
03:39:48 sudipto_: as a core reviewer?
03:40:01 hongbin, well, I didn't ask for that :) Being a basic reviewer would do too.
03:40:29 sudipto_: a basic reviewer doesn't need to be added :)
03:40:52 However, I can propose you as a core
03:41:02 +1 for it hongbin :)
03:41:07 hongbin, alright, sounds good!
03:41:15 +1 from me
03:41:18 sudipto_: will propose you later
03:41:26 hongbin, ok thanks!
03:41:33 Anyone else who wants to become a core, please contact me as well
03:42:07 OK. Back to the Glance integration.
03:42:37 We have about 17 minutes left; want to discuss Glance integration now?
03:42:52 I see Glance integration is taken by Fei Long
03:43:03 flwang: ^^
03:43:04 flwang, are you there?
03:43:53 I am not sure if you guys know. Glare has been split out from Glance
03:43:58 I am not really sure where Glance is going.
03:44:04 with Glare and all that.
03:44:06 yes, a long thread on the mailing list :)
03:44:20 some discussion (argument as well :P )
03:44:42 Yes, it is a long debate
03:45:32 Glare's API is suitable for implementing layers of Docker images
03:46:01 Basically, Glare is like a superset of Glance
03:46:10 oh, I thought it was for template storing before
03:46:41 yanyanhu: It seems it is not. It is a model to represent the image and its metadata
03:46:53 I see
03:47:01 so basically it adds versioning support
03:47:12 What we need is the ability to express dependencies between image layers
03:47:43 which Glare can do
03:48:07 so you want to detach yourself from Docker Hub and create a private registry of sorts?
03:48:28 sudipto_: Yes, that is an alternative
03:48:51 sorry
03:48:54 back
03:49:00 flwang: hey
03:49:15 flwang: I guess you saw the news that Glare was split out?
03:49:23 yep, I know
03:49:43 flwang: Then what will the Glare API look like, a superset of the Glance API?
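[Editor's note: the "dependencies between image layers" mentioned above can be illustrated with a parent-pointer model: each layer names the layer it builds on, and materializing an image means walking that chain from base to top. The layer data and function below are invented for illustration, not Glare or Zun code.]

```python
# Sketch: resolve a Docker-style image into its ordered layer chain
# by following parent links, base layer first.
def resolve_layers(layers, top):
    """Return layer ids from base to `top`, following parent links."""
    chain = []
    current = top
    while current is not None:
        chain.append(current)
        current = layers[current]  # parent layer id, or None for the base
    return list(reversed(chain))


# layer id -> parent layer id (None marks the base layer)
layers = {"base": None, "python": "base", "app": "python"}
print(resolve_layers(layers, "app"))  # -> ['base', 'python', 'app']
```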
03:49:55 and per the discussion with nikhil, implementing the layers in Glance is most likely 'impossible'
03:50:16 hongbin: it's an artifacts repo
03:50:27 not a superset of the Glance API
03:50:38 flwang: I see
03:51:18 in other words, or IMHO, it's most like a definition for an OS/image
03:51:45 it can define very detailed attributes of an image
03:51:53 flwang: but the image blob data must be stored somewhere?
03:52:07 till Glance and Glare sort things out, should we just go with a private registry?
03:52:17 I would say the image data is still in Glance
03:52:50 sudipto_: that's a reasonable option
03:53:01 since I can see it will still take a very long time to settle down
03:53:13 flwang, same feeling.
03:53:23 The weakness of the Docker registry is multi-tenancy support
03:53:27 and Keystone
03:54:17 hongbin: right
03:54:19 It looks like the Docker registry is single-tenant, which means we would need to set up one registry per tenant
03:54:39 or modify the Docker registry to make it compatible with a multi-tenancy model
03:54:40 hongbin: and it's hard to maintain, I think
03:55:28 flwang: I might have a crazy idea: have Glance backed by the Docker registry
03:56:01 so that we use the Glance API at the front and the Docker registry as a backend
03:56:20 hongbin: that's the initial plan I discussed with Sam
03:56:24 years ago
03:56:34 but I can't remember the details now :(
03:56:54 sounds like the idea was rejected
03:58:12 because even if you can work out the Docker registry as a Glance backend, you still have to make Glance respect the layers
03:58:14 right?
03:58:26 flwang: maybe not
03:58:31 how?
03:58:37 how about an offline mailing list discussion on this? maybe not with the whole community but with a few of us in it?
03:58:47 ok, sure
03:58:48 coz we are on the brink of the hour.
03:58:58 sure...
03:58:59 let's go back to the Zun channel
03:59:13 All, thanks for joining the meeting
03:59:22 #endmeeting
03:59:29 ...
03:59:36 thanks..
03:59:59 Meetbot isn't working?
04:00:00 #endmeeting
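[Editor's note: the registry multi-tenancy gap discussed above ("set up one registry per tenant" or "make it compatible with a multi-tenancy model") can be sketched with a simple namespacing scheme: since the Docker registry itself has no tenant concept, one workaround is mapping each Keystone project onto its own repository prefix. The function below is purely an illustrative assumption, not an actual Zun or registry API.]

```python
# Sketch: emulate multi-tenancy on a single shared Docker registry by
# prefixing repository names with the tenant's project id, so images
# from different tenants cannot collide or be listed across tenants.
def registry_repo_for(project_id, image_name):
    """Return a tenant-scoped repository name for a shared registry."""
    return "{}/{}".format(project_id, image_name)


print(registry_repo_for("proj-42", "nginx"))  # -> proj-42/nginx
```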