03:00:02 #startmeeting zun
03:00:03 Meeting started Tue Jan 17 03:00:02 2017 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
03:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
03:00:06 The meeting name has been set to 'zun'
03:00:08 #link https://wiki.openstack.org/wiki/Zun#Agenda_for_2017-01-17_0300_UTC Today's agenda
03:00:13 #topic Roll Call
03:00:17 Pradeep
03:00:20 o/
03:00:24 Madhuri Kumari
03:00:27 Namrata
03:00:37 lakerzhou
03:00:48 o/
03:00:59 kevinz
03:01:04 thanks for joining the meeting pksingh diga mkrai Namrata lakerzhou sudipto_ kevinz
03:01:12 #topic Announcements
03:01:18 i have no announcements
03:01:22 anyone else have one?
03:01:40 #topic Review Action Items
03:01:42 none
03:01:49 #topic Cinder integration (diga)
03:01:54 #link https://blueprints.launchpad.net/zun/+spec/cinder-zun-integration The BP
03:01:59 #link https://review.openstack.org/#/c/417747/ The design spec
03:02:02 diga: ^^
03:02:13 hongbin: Yes
03:02:22 hongbin: I saw your comments
03:02:47 I think the current approach looks okay to me
03:02:56 ok
03:03:34 I agreed that we should create a separate volume table for this implementation
03:03:54 diga: ack
03:04:23 hongbin: I have studied this, and that is how I came up with this approach
03:04:33 diga: sure
03:04:38 diga: most of my comments are asking for clarification
03:04:46 hongbin: if we do it this way, then later on we can extend this to multiple drivers
03:04:50 hongbin: yes
03:04:59 diga: ok
03:05:33 diga: then i look forward to your revision addressing them
03:05:35 hongbin: I will revisit the spec and reply to your comments
03:05:39 hongbin: yes
03:05:54 hongbin: I will update it in the next hour
03:05:58 diga: thanks
03:06:20 hongbin: welcome!
03:06:20 for the others, any comments about the cinder integration spec?
03:06:51 i agree that there should be no hard dependency on any project
03:07:09 we should design it that way
03:07:32 The driver-based implementation is preferable
03:07:40 yes, that's the approach I am taking in this spec
03:07:54 pksingh: mkrai +1
03:08:46 ok, next topic
03:08:52 #topic Support interactive mode (kevinz)
03:08:57 #link https://blueprints.launchpad.net/zun/+spec/support-interactive-mode The BP
03:09:02 #link https://review.openstack.org/#/c/396841/ The design spec
03:09:12 kevinz: ^^
03:09:18 Hi
03:10:09 I plan to use CLIs -> API -> COMPUTE -> Docker daemon to implement the container tty resize function
03:10:18 on the server side
03:10:31 great
03:11:14 Also, the websocket link needs the docker version
03:11:26 i see
03:11:43 on the compute node. Do we already have it? If not, I can add a function to get it
03:12:06 Yes, it is already there
03:12:24 we do have a conf for it
03:12:46 i think the problem is how to expose the version via the REST API
03:13:45 The API just gets the docker version from the compute node and then generates the websocket link for the CLIs
03:14:03 i see
03:14:06 hongbin: Yeah
03:15:20 kevinz: if i understand correctly, zun needs to have an admin api that returns the link for the cli to do interactive operations?
03:16:11 kevinz: however, is the websocket link generic, or is it runtime-specific?
03:16:12 Yes, exactly
03:17:17 Yes, the websocket API needs the docker version, docker daemon IP, and port
03:17:43 kevinz: ok
03:18:03 kevinz: feel free to go ahead and submit a patch for review
03:18:16 OK, I see
03:18:20 Thanks hongbin
03:18:44 any other questions for kevinz?
03:18:58 No more :-)
03:19:23 thanks kevinz
03:19:27 next one
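For reference, here is a minimal sketch of what the websocket link described above might look like, assuming the compute service assembles it from the docker version, daemon IP, and port. The helper names and example values are hypothetical, not Zun code; the attach-over-websocket and tty-resize endpoints are part of the Docker Engine remote API.

```python
# Sketch only: illustrates the link format discussed in the meeting.
# build_attach_url/resize_tty_url are made-up names for illustration.

def build_attach_url(docker_host, docker_port, api_version, container_id):
    """Return a ws:// URL for Docker's attach-over-websocket endpoint."""
    query = "logs=0&stream=1&stdin=1&stdout=1&stderr=1"
    return ("ws://%s:%s/v%s/containers/%s/attach/ws?%s"
            % (docker_host, docker_port, api_version, container_id, query))

def resize_tty_url(docker_host, docker_port, api_version, container_id, h, w):
    """Return the URL for the tty-resize call (a POST with no body)."""
    return ("http://%s:%s/v%s/containers/%s/resize?h=%d&w=%d"
            % (docker_host, docker_port, api_version, container_id, h, w))

# Example output the API service might hand back to the CLI:
# ws://10.0.0.5:2375/v1.24/containers/abc123/attach/ws?logs=0&stream=1&...
```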
03:19:29 #topic Make Zunclient an OpenStackClient plugin (Namrata)
03:19:34 #link https://blueprints.launchpad.net/zun/+spec/zun-osc-plugin The BP
03:19:38 Namrata: ^^
03:19:42 hi
03:20:21 the blueprint is completed
03:20:30 Namrata: awesome
03:20:33 as for now, as discussed earlier
03:20:43 thanks hongbin
03:20:52 Namrata: thanks for the great work
03:21:18 Namrata: great work :)
03:21:26 thanks pksingh
03:21:36 Namrata: mind marking this bp as implemented? https://blueprints.launchpad.net/zun/+spec/zun-osc-plugin
03:21:50 yeah sure
03:21:55 Namrata: thanks
03:22:10 any other comments on the osc bp?
03:22:39 ok, next one
03:22:42 #topic How to expose CPU configurations for containers
03:22:50 #link https://review.openstack.org/#/c/418675/ A proposal to add cpushare to container
03:22:55 #link https://review.openstack.org/#/c/418175/ A proposal to change description of cpu parameter
03:23:26 i will try to summarize the discussion, sudipto_ feel free to chime in if you have any comments
03:23:39 sure
03:23:53 we have been discussing how to expose the cpu constraints of the container via the zun api
03:24:00 I have been struggling with time management this past week, but hopefully this week will be better.
03:24:24 currently, we are exposing cpu constraints as the number of cores
03:24:34 that is vcpu (same as nova)
03:24:54 however, there are several alternatives proposed
03:25:20 for example, exposing the cpushare parameter instead
03:25:40 hongbin, i have my doubts over what we are calling a vcpu right now.
03:25:59 sudipto_: i guess it is the number of virtual cores
03:26:00 hongbin: number of cores or relative number of cpu cycles, i am not sure whether they are the same or different
03:26:20 pksingh: i see, i am not sure either
03:26:31 hongbin, the number of virtual cores has no significance unless you can map them to cores on the system... which we aren't doing.
03:26:51 sudipto_: i see
03:27:06 then, let's discuss. what is the best way to do this?
03:27:27 hongbin, the last time we discussed, the proposal was to bring out cpu policies
03:27:35 we are using cpu-quota, and docker describes it as 'cpu-quota - Microseconds of CPU time that the container can get in a CPU period'
03:28:07 pksingh, yup, that's my point.
03:28:36 so if you define a CPU period of 10 ms, then a cpu-quota will define how many ms your container can execute in that period.
03:28:51 sudipto_: +1
03:29:07 so it kinda boils down to a shares concept
03:29:51 i have looked at k8s for cpu before
03:30:27 if i remember correctly, they used cpu quota for the max cpu allocation and cpu share for the required cpu allocation
03:31:36 however, k8s is using a different cpu unit (not vcpu)
03:32:18 sudipto_: pksingh what are your opinions on the ideal solution?
03:32:25 hongbin, so let's talk in terms of physical cores on the system. if there are 5 cores in the system... what's a cpu period/quota for this system?
03:32:44 hongbin, i plan to get some clarity on this today
03:32:56 i was thinking exposing these things can make the scheduling job complex
03:33:19 hongbin, we don't expose these, we expose policies.
03:33:25 pksingh, ^
03:33:38 policies being - shared/dedicated/strict
03:33:52 does that mean it would be configurable?
03:33:57 where shared means the default case, where we are operating right now.
03:34:04 yeah, configurable as a part of the zun run command
03:34:33 sudipto_: i am fine with the policy things
03:34:41 dedicated means the zun backend code will give you dedicated cpu cores to run on, while strict means there's a one-to-one mapping of cores to containers.
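A minimal sketch, assuming the 100000-microsecond default CPU period, of how the shared/dedicated policies described above could translate into Docker's cgroup settings. The policy names and the helper are illustrative only, not the spec's implementation; the HostConfig fields (CpuPeriod, CpuQuota, CpusetCpus) are real Docker options.

```python
# Sketch only: maps a (policy, cpus) request onto Docker cgroup knobs.
# apply_cpu_policy is a made-up name for illustration.

def apply_cpu_policy(policy, cpus, pinned_cores=None):
    """Translate a policy plus a fractional cpu count into HostConfig fields.

    With a period of 100000 microseconds (Docker's default), a quota of
    cpus * 100000 lets the container consume the equivalent of `cpus`
    full cores per period, e.g. cpus=0.5 -> quota=50000.
    """
    host_config = {"CpuPeriod": 100000,
                   "CpuQuota": int(cpus * 100000)}
    if policy == "dedicated" and pinned_cores:
        # Pin to host cores chosen by the backend (e.g. "2,3"); the user
        # only asked for the policy, never the physical core numbers.
        host_config["CpusetCpus"] = ",".join(str(c) for c in pinned_cores)
    return host_config

# Example: apply_cpu_policy("dedicated", 2, pinned_cores=[2, 3])
# -> {'CpuPeriod': 100000, 'CpuQuota': 200000, 'CpusetCpus': '2,3'}
```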
03:34:52 sudipto_: however, that is about cpu pinning, but less about cpu allocation?
03:35:06 hongbin, agreed.
03:35:31 hongbin, do you know how k8s does cpu allocation? (shares is one way)
03:35:36 sudipto_: ok, it seems you propose to expose "number of physical cores" + policy?
03:36:16 hongbin, after the discussion with you and pksingh, i feel it's a good idea to not expose the cores, but just the policy to the end user.
03:36:25 i was thinking about a public cloud; would it be better to expose this?
03:37:13 pksingh, public cloud with openstack? very few :) but that's beyond the point.
03:37:39 i agree with you that we should not be exposing cores to the end users, hence the policies.
03:37:52 i think zun would be mainly targeted at private clouds (since containers on a public cloud have an isolation problem)
03:38:10 Now why policies? Because there's a need for NFV-based workloads to have dedicated resources.
03:38:37 hongbin: +1
03:38:45 sudipto_: if we expose policies, do we need to expose the number of cores as well?
03:38:57 sudipto_: can we run them on a different set of compute nodes?
03:39:10 hongbin, not to the user necessarily, right?
03:39:11 sudipto_: for example, if we have a policy "dedicated", then how many cores are dedicated?
03:39:21 hongbin, o yeah, for that yes.
03:39:30 i thought you meant the actual numbers on the system
03:39:37 pksingh, meaning?
03:39:50 policy is related to cpu pinning support only
03:39:57 lakerzhou, yeah
03:40:13 sudipto_: we have some nodes in the system which are dedicated to this dedicated policy
03:40:29 sudipto_: we will always allocate that container to that set of nodes
03:41:02 NFV applications usually require a certain # of cores (dedicated)
03:41:10 pksingh, that does sound like the availability zones concept, but yes, you need to do that.
03:41:37 pksingh, someone in nova had proposed a way to overcome this by creating host capabilities. I will share that spec with you once i find it.
03:42:32 lakerzhou, +1
03:42:38 sudipto_: ok
03:42:41 ok, if we want to expose the # of cores, how do we do that?
03:43:00 $(nproc)
03:43:41 bkero: yes, it seems that is the command to get the number of processors
03:43:52 Also, in nova the # of cores are vcores, not physical cores
03:43:55 hongbin, that boils down to whether we can expose something in the form of a vcpu for a container
03:44:16 sudipto_: what do you think about that?
03:44:27 lakerzhou, the virtual cores give you an idea of how many physical cores you would need for a dedicated use case.
03:44:49 hongbin, i will get back on this by today, if that's ok.
03:44:59 sudipto_: ok, sure
03:45:11 perhaps we could table this discussion to next week
03:45:26 then all of us can study more about this area
03:45:38 yeah
03:45:48 any last-minute comments before advancing the topic?
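On the $(nproc) suggestion above: a small sketch of how a compute node could report its core count from Python, using only standard-library calls. How Zun would actually expose this number through its API is still open.

```python
# Sketch only: the Python equivalent of $(nproc) on a compute node.

import os

total_cores = os.cpu_count()                  # all online logical cores
usable_cores = len(os.sched_getaffinity(0))   # cores this process may use
                                              # (Linux-only call)

print("logical cores: %d, usable: %d" % (total_cores, usable_cores))
```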
03:46:06 #topic Discuss BPs that are pending approval
03:46:13 #link https://blueprints.launchpad.net/zun/+spec/support-port-bindings Support container port mapping
03:46:49 kevinz: i saw you proposed this bp, want to drive this one?
03:46:59 hongbin: YEAH
03:47:17 I think we can add a port binding to a container at create time
03:47:24 hongbin, do we also keep track of allocated ports? I am guessing we should?
03:47:45 sudipto_: i am not sure
03:48:08 hongbin, otherwise, two containers can potentially overlap on the same port?
03:48:15 the same host port, for example.
03:48:20 sudipto_: yes, that is a problem
03:48:42 sudipto_: however, you could use the -P option and let docker pick a port for you
03:49:19 hongbin, kevin has put a -p in the zun command line... which i think is legit, because docker might not be the only driver in the future...
03:49:57 sudipto_: +1
03:50:59 sudipto_: +1
03:51:03 hongbin, this too points to some kind of host inventory we need to build in zun
03:51:29 sudipto_: yes, that is true
03:51:30 the cpu one would need that too, and so will many other host capabilities.
03:51:55 sudipto_: if we follow the openstack deployment, the host will be under a management network, which is different from the tenant network
03:52:15 port mapping in this case means exposing a container to a management network....
03:52:43 hongbin, good point...
03:52:45 i am not sure if this makes sense; however, if containers are running on a vm, this makes perfect sense
03:52:58 hongbin, yup, that's a very valid point.
03:53:11 another thing to brainstorm about :)
03:53:13 hongbin: +1
03:53:47 then, how should we deal with this bp: table it, drop it, or keep it?
03:54:30 hongbin, come back after doing some research next week?
03:54:32 kevinz: what do you think?
03:54:49 sudipto_: ok, sure
03:54:55 yes, that would be better
03:54:57 table this one
03:55:04 #link https://blueprints.launchpad.net/zun/+spec/support-zun-copy Support zun copy
03:55:19 hongbin: +1 for sudipto_
03:55:22 how about this one? a good/bad idea?
03:55:48 i think this is good
03:56:01 pksingh: ack
03:56:09 is this docker cp?
03:56:16 i think k8s also supports this
03:56:21 sudipto_: i guess it is
03:56:29 yes sudipto_
03:56:34 yeah, it doesn't seem to do any harm at all.
03:56:48 ok, i will approve it if there are no further objections
03:57:03 next one
03:57:08 #link https://blueprints.launchpad.net/zun/+spec/kuryr-integration Kuryr integration
03:57:41 i am proposing to use kuryr for our native docker driver
03:57:49 hongbin, +1
03:58:00 perhaps just an investigation for now, to see if this is possible
03:58:30 yes, that would be good
03:58:34 hongbin, +1
03:58:40 currently, our nova driver has neutron integration (via nova capability); the native docker driver doesn't have any neutron integration yet
03:58:48 but it does not support multitenancy, right?
03:59:11 pksingh: i hope it does
03:59:15 pksingh: will figure it out
03:59:26 hongbin: ok sure
03:59:29 sorry, we've run out of time
03:59:35 #topic Open Discussion
04:00:02 it looks like most of us agreed on the kuryr integration bp, so i will approve it
04:00:09 all, thanks for joining the meeting
04:00:12 sure
04:00:12 #endmeeting
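As a postscript to the port-binding discussion above: a toy sketch of the host-port bookkeeping sudipto_ raised. Without an inventory of bound host ports, two containers can request the same one (the explicit -p HOST_PORT:CONTAINER_PORT case), while -P-style auto-assignment needs a free port picked from a range. The class below is purely illustrative, not Zun code, and the port range is an assumption.

```python
# Sketch only: a naive in-memory host-port inventory.

class HostPortInventory:
    def __init__(self):
        self._allocated = {}   # host -> set of bound host ports

    def claim(self, host, port):
        """Reserve a specific host port, as an explicit -p binding would."""
        ports = self._allocated.setdefault(host, set())
        if port in ports:
            raise ValueError("port %d already bound on %s" % (port, host))
        ports.add(port)

    def claim_any(self, host, low=32768, high=61000):
        """Pick a free port in a range, like docker's -P auto-assignment."""
        ports = self._allocated.setdefault(host, set())
        for port in range(low, high):
            if port not in ports:
                ports.add(port)
                return port
        raise RuntimeError("no free host ports on %s" % host)

# Example: inventory.claim("node1", 8080) twice raises ValueError,
# which is exactly the overlap the meeting flagged.
```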