03:00:02 <hongbin> #startmeeting zun
03:00:03 <openstack> Meeting started Tue Jun  6 03:00:02 2017 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:06 <openstack> The meeting name has been set to 'zun'
03:00:09 <hongbin> #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2017-06-06_0300_UTC Today's agenda
03:00:13 <hongbin> #topic Roll Call
03:00:24 <lakerzhou2> lakerzhou
03:00:33 <Shunli> shunli
03:00:33 <Namrata> Namrata
03:00:40 <mkrai> Madhuri
03:00:48 <shubhams> Shubham
03:00:51 <kevinz> kevinz
03:01:15 <hongbin> thanks for joining the meeting lakerzhou2 Shunli Namrata mkrai shubhams kevinz
03:01:21 <hongbin> let's get started
03:01:27 <hongbin> #topic Announcements
03:01:37 <hongbin> does anyone have an announcement?
03:01:58 <hongbin> seems no
03:02:02 <hongbin> #topic Cinder integration
03:02:08 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/direct-cinder-integration Direct Cinder integration
03:02:13 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/cinder-zun-integration Cinder integration via Fuxi
03:02:37 <hongbin> for this topic, i proposed to have two drivers for handling volumes
03:02:48 <hongbin> 1. cinder, 2. fuxi
03:03:07 <hongbin> there is a review up for that: https://review.openstack.org/#/c/468658/
03:03:18 <diga> o/
03:03:28 <hongbin> hi diga
03:03:35 <diga> hongbin: Hello
03:03:40 <hongbin> comments on this topic?
03:04:16 <diga> hongbin: I will push the latest patch today by incorporating given comments
03:04:25 <hongbin> diga: ack
03:05:01 <diga> hongbin: If i need any help in gate passing, will ping you
03:05:11 <hongbin> diga: ok
03:05:45 <diga> hongbin: that's it from my side
03:05:48 <hongbin> ok, everyone, you could spend your time to review the spec after the meeting
03:05:58 <hongbin> and provide your feedback on the review
03:06:01 <kevinz> OK I will review this
03:06:03 <hongbin> #link https://review.openstack.org/#/c/468658/
03:06:17 <hongbin> kevinz: thank you
03:06:20 <hongbin> ok, next topic
03:06:29 <hongbin> #topic Introduce container composition (kevinz)
03:06:41 <hongbin> kevinz: want to drive this one?
03:06:48 <kevinz> hongbin: sure
03:07:16 <kevinz> I have one topic to discuss. That's from the comment of lakerzhou in the spec
03:07:17 <kevinz> https://review.openstack.org/#/c/437759/17/specs/container-composition.rst@73
03:07:43 <kevinz> We have : CPU and memory limits: Given that host resource allocation, cpu and memory limitation
03:07:43 <kevinz> support will be implemented.
03:08:18 <kevinz> Lakerzhou has a comment: With the limits, the CPU/RAM usage of the capsule will never go beyond the number, but do we have any guarantee of how much resources are allocated to a capsule? Should we use resources instead of limits here?
03:09:10 <hongbin> lakerzhou2: could you clarify " Should we use resources instead of limits here?"
03:10:11 <hongbin> for the first question, i think filter scheduler will ensure the amount of resources allocated to a container/capsule
03:10:19 <lakerzhou2> I think limits means the containers will take less resource than the limit
03:10:44 <hongbin> that is true
03:11:01 <lakerzhou2> but in fact, resources is what the containers require from the host
03:11:22 <hongbin> yes
03:11:39 <mkrai> Limit is on the resource. Right?
03:11:55 <hongbin> lakerzhou2: but the scheduler will schedule containers based on resources on the host?
03:11:57 <mkrai> I don't get the correlation here
03:12:36 <lakerzhou2> yes, why use limits, does that mean container might take less resources?
03:13:41 <lakerzhou2> for example, if a container requires 2 vcpus, scheduler should assign 2 vcpus
03:13:51 <hongbin> i see
03:14:06 <kevinz> lakerzhou2: Per my understanding, you mean we should add a field to control the "at least" resources for container?
03:14:32 <lakerzhou2> no, I don't see a reason to use "limits"
03:15:01 <lakerzhou2> resources: vcpus: 2, ram: 3G
03:15:49 <hongbin> perhaps just remove the "limits" from the naming, call it cpu/memory
03:16:14 <lakerzhou2> that is what I meant
03:16:33 <hongbin> yes, i am fine with that
03:16:37 <kevinz> hongbin: I think it's fine, we can remove it
03:16:53 <hongbin> kevinz: ok
03:17:17 <hongbin> any opposing point of view on this?
03:18:16 <hongbin> lakerzhou2: i think this is a good suggestion. thanks lakerzhou2
03:18:47 <lakerzhou2> np,
03:18:49 <hongbin> kevinz: anything else about this topic?
03:19:07 <kevinz> hongbin: No, that's all from me
03:19:45 <hongbin> kevinz: ok, hope you get enough feedback to get started
03:20:01 <hongbin> kevinz: thanks for driving this effort :)
03:20:15 <kevinz> hongbin: thx hongbin, my pleasure:-)
03:20:27 <hongbin> next topic
03:20:29 <hongbin> #topic Add Zun Resources to Heat
03:20:34 <hongbin> #link https://blueprints.launchpad.net/heat/+spec/heat-plugin-zun
03:20:49 <hongbin> Namrata: you want to chair this topic?
03:20:55 <Namrata> Yeah sure
03:21:03 <Namrata> https://review.openstack.org/#/c/437810/
03:21:11 <Namrata> I have updated the patch
03:21:22 <Namrata> there are some more comments which I will incorporate
03:21:48 <hongbin> cool
03:22:29 <hongbin> i think the patch looks pretty close to merge
03:22:30 <Namrata> and as discussed in earlier meeting I am also working on the heat doc for zun
03:22:42 <Namrata> Hongbin : Yes
03:22:51 <hongbin> great
03:23:18 <Namrata> That's it
03:23:34 <hongbin> Namrata: thanks Namrata
03:23:43 <Namrata> thanks hongbin
03:23:44 <hongbin> all, any comment on this topic?
03:24:20 <hongbin> seems no
03:24:25 <hongbin> #topic Others
03:24:31 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/make-sandbox-optional Make infra container optional
03:24:56 <hongbin> i wanted to bring up this one to see if you think this is a good idea/bad idea
03:25:16 <hongbin> a brief introduction
03:26:01 <hongbin> the concept of sandbox, which is an infra container, is used to represent a container that doesn't do anything but just provides the infra
03:26:08 <hongbin> i.e. kubernetes/pause
03:26:39 <hongbin> there is feedback suggesting we make it optional when we don't need it
03:26:53 <mkrai> This  is a good idea, I got feedback from Intel's clear container team as well
03:27:05 <hongbin> mkrai: ack
03:27:05 <lakerzhou2> I vote for making it optional
03:27:13 <hongbin> lakerzhou2: ack
03:27:28 <hongbin> any opposing point of view?
03:28:08 <hongbin> ok, if this is optional, we can remove it on the native docker driver
03:28:28 <hongbin> this is needed for capsule and nova driver only i guess
03:29:00 <kevinz> hongbin: yeah, making it optional is OK
03:29:11 <hongbin> kevinz: ack
03:29:24 <hongbin> ok, seems everyone agree on this proposal
03:30:05 <hongbin> #agreed make sandbox optional for drivers that don't need it
03:30:33 <hongbin> btw, pls feel free to take that bp if you are interested in doing it
03:30:56 <hongbin> i will be the default owner if nobody wants to take it
03:31:09 <hongbin> ok, next bp
03:31:18 <hongbin> #link https://blueprints.launchpad.net/zun/+spec/infra-container-in-db Persist infra container in DB
03:31:56 <hongbin> this one is proposed by me as well
03:32:09 <hongbin> and there is a patch available for that
03:32:28 <hongbin> #link https://review.openstack.org/#/c/467535/
03:32:44 <hongbin> there is some debate on the review
03:32:59 <hongbin> therefore, i summarized the opinions on the whiteboard
03:33:04 <mkrai> hongbin: What is the need for this?
03:33:27 <hongbin> mkrai: we want to have a way to keep track of the infra container
03:33:46 <hongbin> mkrai: or later, if we use vm as sandbox, we need to keep track of all the vms as well
03:34:02 <mkrai> Ok
03:34:22 <hongbin> there are three implementation options
03:34:48 <hongbin> OPTION 1:
03:35:06 <hongbin> Add a field (i.e. infra_container_id) into the container table. If a container is infra container, this field is None, otherwise, it points to the uuid of its infra container.
03:35:25 <hongbin> OPTION 2:
03:35:34 <hongbin> Create a separated table for infra container
03:35:41 <hongbin> OPTION 3:
03:35:51 <hongbin> Same as option 2, but leverage the concept of 'capsule' instead
03:36:28 <hongbin> i think option 2/3 are almost the same, the difference is the naming of the table
03:36:37 <hongbin> name it sandbox or capsule
03:36:49 <hongbin> thoughts on this?
03:37:07 <mkrai> hongbin: In #2, we need to have relation b/w container and infra container
03:37:16 <mkrai> But how is that done?
03:37:46 <hongbin> mkrai: i assume there is a foreign key in the container table to point to the sandbox id
03:38:24 <mkrai> hongbin: So what info of the infra container do we want to save?
03:39:01 <mkrai> Because it only makes sense to have a new table when we want to store all info related to an infra_container
03:39:08 <mkrai> Otherwise #1 is better
03:39:28 <hongbin> mkrai: this is a good question
03:40:11 <hongbin> i haven't given it careful thought before
03:41:21 <mkrai> I can check the patch today
03:41:26 <hongbin> anyone has comments on this?
03:41:26 <mkrai> And leave my comment there
03:41:36 <hongbin> mkrai: ok, thx
03:42:19 <hongbin> i think if not everyone agrees on the idea, we could table this bp for now
03:42:36 <hongbin> and bring it back when someone is interested in it
03:42:51 <mkrai> I am ok with it
03:43:17 <hongbin> kevinz: from the capsule point of view, which option will be better?
03:43:38 <hongbin> kevinz: or you think we don't need to persist this info to db
03:44:46 <hongbin> ok, never mind, we need to move on to the next one
03:45:07 <hongbin> kevinz: you can comment on the review later
03:45:09 <hongbin> #link https://etherpad.openstack.org/p/zun-nfv-use-cases NFV use cases
03:45:28 <hongbin> lakerzhou2: you want to drive this one?
03:45:49 <lakerzhou2> sure
03:46:36 <lakerzhou2> I did not have much input lately, but the core idea is to support VNF workload over containers
03:47:06 <hongbin> #link https://review.openstack.org/#/c/465661/
03:47:09 <lakerzhou2> VNF usually requires CPU pinning, huge page and NUMA support
03:47:43 <lakerzhou2> which will need some support from scheduler
03:48:44 <lakerzhou2> sounds great, I will review the kuryr spec
03:48:46 <hongbin> i think Shunli might be interested in helping out with the scheduler part? since he worked on the filter scheduler
03:48:57 <lakerzhou2> to see how we can leverage the work with zun
03:49:13 <Shunli> sure.
03:49:22 <hongbin> lakerzhou2: oh, it is about k8s
03:49:46 <hongbin> lakerzhou2: if we want to leverage it, we might need to figure out if libnetwork should support the same
03:50:11 <lakerzhou2> ok, I will explore the possibility
03:50:52 <hongbin> lakerzhou2: i think the requirements for this are very clear since they are all listed in the etherpad
03:51:20 <hongbin> lakerzhou2: the next step is to turn the requirements into bps, so that i could find contributors to work on them
03:51:43 <lakerzhou2> ok, I will work on it
03:52:05 <hongbin> thanks, i will help out this part as well
03:52:27 <hongbin> i will work on the Action items section in the etherpad
03:52:36 <lakerzhou2> Hongbin, thanks, I will ping you on IRC
03:52:43 <hongbin> lakerzhou2: ack
03:53:01 <hongbin> everyone, any comment on this topic?
03:53:28 <mkrai> No
03:53:36 <mkrai> Will see the etherpad
03:53:43 <Shunli> no
03:53:47 <mkrai> Thanks lakerzhou2
03:53:49 <hongbin> mkrai: thx
03:54:00 <hongbin> btw, this spec has been there for a while: https://review.openstack.org/#/c/427007/
03:54:03 <hongbin> #link https://review.openstack.org/#/c/427007/
03:54:16 <hongbin> i think it is time to move it forward
03:54:35 <hongbin> this could be the first step for the nfv support i think
03:54:58 <hongbin> #topic Open Discussion
03:55:26 <hongbin> anyone want to bring up a discussion?
03:55:33 <mkrai> To support clear container in Zun, I have posted a patch in docker-py to support --runtime option
03:55:43 <mkrai> #link https://github.com/docker/docker-py/pull/1631
03:56:07 <mkrai> And would want the infra container to be removed
03:56:19 <hongbin> mkrai: ok
03:56:52 <hongbin> mkrai: i think we need to bump the priority for bp to make infra container optional
03:57:01 <mkrai> hongbin: Right
03:57:02 <hongbin> since everyone wanted it gone
03:57:05 <hongbin> :)
03:57:17 <hongbin> done
03:57:27 <mkrai> I will see if I can do it :)
03:58:37 <hongbin> mkrai: sure, take it if you have time
03:58:56 <mkrai> hongbin: Sure. Thanks
03:59:13 <hongbin> ok, everyone, thanks for joining the meeting
03:59:27 <hongbin> see you next time
03:59:30 <hongbin> #endmeeting