Tuesday, 2017-03-07

05:01 <openstackgerrit> Akihito Takai proposed openstack/storlets master: WIP: create the configuration file for the agent side  https://review.openstack.org/436263
06:44 <openstackgerrit> Akihito Takai proposed openstack/storlets master: WIP: create the configuration file for the agent side  https://review.openstack.org/436263
08:11 *** kota_ has quit IRC
08:11 *** kota_ has joined #openstack-storlets
08:32 <openstackgerrit> Kota Tsuyuzaki proposed openstack/storlets master: Set concurrency 2 for less gate failures with 35 mins Timeout  https://review.openstack.org/436912
08:32 <kota_> rebased onto the unique container patch set 4
08:47 <openstackgerrit> Kota Tsuyuzaki proposed openstack/storlets master: Cleanup deploy_storlet and the tests  https://review.openstack.org/422470
08:54 <openstackgerrit> Kota Tsuyuzaki proposed openstack/storlets master: Set concurrency 2 for less gate failures with 35 mins Timeout  https://review.openstack.org/436912
08:56 <kota_> that resolved my mis-rebasing
08:59 *** sagara has joined #openstack-storlets
08:59 *** akihito_ has joined #openstack-storlets
08:59 <sagara> I'm also thinking about the multi-tenant use case: when many accounts are on the same node, they might consume almost all of the host's resources
09:00 <kota_> sagara: yeah, that's the worst case
09:00 <sagara> am I thinking too much?
09:00 <eranrom> sagara: no! :-)
09:01 <kota_> sagara: I feel resource limitation involves several tangled metrics, so I'd like to solve the problems step by step
09:01 <eranrom> I think the point here is to give the admin enough control to make sure that even in a multi-tenant env, Swift can still work
09:01 <kota_> eranrom: yes
09:02 <kota_> that is absolutely the first step
09:02 <kota_> no, it's the goal
09:02 <kota_> and even if it's not enough, the first step is letting the operator limit some resources somehow (ulimit? Docker limits?)
09:02 <sagara> sure, I agree, we need to do it step by step
09:03 <kota_> and my question for development is where we should start:
09:03 <kota_> account level? app level?
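The ulimit / Docker knobs kota_ mentions could look roughly like the sketch below. All concrete values and the image name are illustrative assumptions, not taken from the discussion; the commands are only printed, not executed:

```shell
# Illustrative only: two ways an operator might cap a storlet
# container's resources. Limits and image name are made up.

# (a) ulimit on the container-side process: cap virtual memory
# to ~1 GiB (in KiB) and open files to 256.
ulimit_opts='ulimit -v 1048576 -n 256'

# (b) Docker-level limits applied at container start time.
docker_cmd="docker run --memory=1g --cpu-shares=512 storlet_runtime_image"

# Print the commands an operator would run, rather than running them.
echo "$ulimit_opts"
echo "$docker_cmd"
```

Both approaches are per-container caps; the account-vs-app question above is about which level of the hierarchy the caps should attach to.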
09:04 <eranrom> I would even say this:
09:04 <kota_> in the worst case sagara described, current storlets can eat resources that we should reserve for backend Swift work
09:05 <kota_> that's what I'd like to address first; any approach is welcome if it's possible
09:05 <kota_> my opinion
09:05 <kota_> eranrom: sorry for interrupting you :/
09:06 <eranrom> kota_: np. I am not sure what I was about to say...
09:06 <eranrom> ok, here is the thought:
09:06 <eranrom> Ideally we could cap the maximum resources used by all Docker containers in a way that more resources can be used only if available (not sure this is possible)
09:07 <eranrom> once all containers are capped to a limited set of resources, we can split those between them either at an app or account level
09:07 <kota_> eranrom: nnnnnnnnice
09:08 <eranrom> here is an example for CPU:
09:08 <eranrom> the admin can define the set of CPUs on which Docker can run
09:08 <sagara> thanks, I think it's nice too.
09:09 <eranrom> say we are on a 24-core system and the admin has a way to say Docker can use cores 0-17 (just an example)
09:10 <eranrom> then at a more fine-grained level each tenant gets a subset of those, say tenant 1 gets 0-4
09:10 <eranrom> the problem here is that this is too rigid
09:10 <eranrom> and does not allow over-provisioning. Perhaps this is where we can switch to nice / ionice
09:10 <eranrom> I mean:
09:11 <eranrom> all Docker containers will use cores 0-17.
09:11 <eranrom> between the containers we will use nice, according to some service level agreement
09:12 <eranrom> or even a combination:
09:12 <eranrom> a customer that pays a lot will get dedicated resources (e.g. cores 0-3)
09:12 <eranrom> others get better nice values
09:12 <kota_> the priorities are addressed within cores 0-17, right?
09:12 <kota_> and cores 18-23 are still available for Swift
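The nice-based prioritization described above might look like this minimal sketch; the echo commands stand in for real storlet processes, and the nice values are illustrative:

```shell
# Illustrative sketch: prioritizing tenants with nice (CPU priority).
# The echo commands are placeholders for real storlet processes.

# Premium tenant: default priority (nice 0).
nice -n 0 echo "premium storlet running"

# Best-effort tenant: low CPU priority (nice 15).
msg=$(nice -n 15 echo "best-effort storlet running")
echo "$msg"

# For block I/O one would similarly use ionice, e.g.:
#   ionice -c 2 -n 7 <command>   # best-effort class, lowest priority
```

Note that nice only matters under CPU contention: when the cores are idle, even a nice-15 process runs at full speed, which is exactly the over-provisioning behavior a fixed core split cannot give.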
09:13 <eranrom> technically, this means that all containers are executed with a limiting affinity to cores 0-17 (should be supported by Docker)
09:13 <sagara> if we can use Docker's '--cpu-shares=0' or '--blkio-weight=' to cap all Docker containers, I think that will resolve the capping problem
09:13 <eranrom> given that we have the management hooks in place...
09:15 <eranrom> sagara: right! I was not aware of this option. It's been some time since I looked at this :-)
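The flags discussed above (core affinity plus relative CPU and block-I/O weights) might combine as follows. The image names and weight values are illustrative assumptions; the script only builds and prints the commands rather than invoking Docker:

```shell
# Build (but do not run) docker invocations that pin all storlet
# containers to cores 0-17 and weight CPU and block I/O between
# tenants. Image names and values are made up for illustration.

CPUSET="0-17"          # cores the admin reserves for Docker
TENANT1_SHARES=1024    # relative CPU weight (Docker's default is 1024)
TENANT2_SHARES=256     # ~1/4 of tenant1's CPU under contention

tenant1_cmd="docker run --cpuset-cpus=$CPUSET --cpu-shares=$TENANT1_SHARES --blkio-weight=500 tenant1_storlet_image"
tenant2_cmd="docker run --cpuset-cpus=$CPUSET --cpu-shares=$TENANT2_SHARES --blkio-weight=100 tenant2_storlet_image"

echo "$tenant1_cmd"
echo "$tenant2_cmd"
```

Because --cpu-shares is a relative weight rather than a hard cap, an idle system still lets either container use all of cores 0-17, while cores 18-23 stay reserved for Swift.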
09:15 <eranrom> sagara: do you know if this is a cgroup feature?
09:15 <sagara> maybe it is. I will confirm
09:16 <eranrom> sagara: thanks!
09:16 <eranrom> so maybe a first step would be to define the APIs or policies that an admin can set - just a thought
09:18 <eranrom> also, back to kota_'s point, it seems we can have priorities both between containers and between apps:
09:18 <eranrom> different containers get different --cpu-shares
09:18 <eranrom> different apps get different nice values
09:19 <eranrom> where --cpu-shares is defined by the admin based on the SLA, and the app priorities by the tenant himself
09:19 <eranrom> again just a thought :-)
09:21 <kota_> sounds reasonable to me
09:22 <eranrom> sorry if I got carried away and "spoke" too much :-|
09:22 <kota_> sagara could find the first step, maybe? does that make sense to you, sagara?
09:23 <kota_> eranrom: np
09:24 <sagara> yes, of course, thank you
09:33 <sagara> I think app priorities work the same way.
09:34 <sagara> on a given (proxy or object) host, we can control those resources with a multi-level cgroup hierarchy.
09:34 <sagara> e.g. cgroup (host / storlet) - cgroup (tenant weight) - cgroup (app weight) - app.
09:35 <sagara> so we can expose the cgroup (app weight) API to tenant users. tenant users can adjust their app weights as they like
09:36 <sagara> but app weight may not be so effective; it only applies among apps on the same host
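sagara's two-level hierarchy could be sketched with cgroup v1 cpu.shares as below. Setting it up for real requires root, so the sketch only prints the commands; all paths and weights are illustrative assumptions:

```shell
# Illustrative sketch of a host/tenant/app cgroup hierarchy using
# cgroup v1 cpu.shares. Requires root to apply, so the commands are
# printed instead of executed. Paths and values are made up.

CG=/sys/fs/cgroup/cpu/storlet
for cmd in \
  "mkdir -p $CG/tenant1/app1 $CG/tenant1/app2" \
  "echo 1024 > $CG/tenant1/cpu.shares" \
  "echo 768 > $CG/tenant1/app1/cpu.shares" \
  "echo 256 > $CG/tenant1/app2/cpu.shares"
do
  echo "$cmd"
done

# Under full contention, app1's share of tenant1's CPU would be
# 768 / (768 + 256) = 75%; the tenant could adjust these weights.
app1_pct=$(( 768 * 100 / (768 + 256) ))
echo "app1 gets ${app1_pct}% of tenant1's CPU"
```

As sagara notes, these weights are only meaningful among apps competing on the same host; they say nothing about priority across hosts.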
09:45 <eranrom> sagara: I think this makes sense. This assumes that cgroups support weights - right?
09:47 <sagara> eranrom: yes. I saw in the Docker reference that Docker has a weight feature, so cgroups probably have it too
09:47 <sagara> #link: https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources
09:48 <sagara> I need to confirm it
09:48 <eranrom> sagara: right! I have just checked out some cgroups docs. It has surely changed a lot since I looked at it 3 years ago...
09:52 <eranrom> Apparently, for CPU it was introduced in the 3.2 kernel...
10:33 *** openstackgerrit has quit IRC
10:38 <sagara> eranrom, kota_: many thanks today! I will continue to consider this
10:39 <eranrom> sagara: Thank you for sharing your thoughts!
13:12 *** sagara has quit IRC
13:13 *** akihito_ has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!