16:01:19 #startmeeting hierarchical_multitenancy
16:01:20 Meeting started Fri Oct 9 16:01:19 2015 UTC and is due to finish in 60 minutes. The chair is schwicke. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:23 The meeting name has been set to 'hierarchical_multitenancy'
16:01:27 hi, all
16:01:31 Hi :)
16:01:40 Hi all :-)
16:01:48 Hello
16:02:00 We have one main topic today
16:02:07 o/
16:02:10 o/
16:02:34 #topic hierarchical quota and overcommitting
16:02:57 not 100% sure how to structure that
16:03:35 I was away because I was sick, but I've read the discussion about it.
16:03:42 The current implementation allows this only at the root level, is that correct?
16:03:43 schwicke: the first question is whether to allow it or not
16:03:46 I totally agree with Tim Bell.
16:03:53 I mean in Nova
16:04:19 the last time that I talked with johnthetubaguy, he was thinking of creating two API calls
16:04:40 schwicke: yes, on nova and on cinder too
16:05:09 ok
16:05:39 I also agree that for this first round we should be consistent with what has been implemented for Cinder
16:05:48 Else this will be confusing to users.
16:06:30 in Cinder it is currently not allowed, correct?
16:06:34 I think John was interested in overcommit... Am I right?
16:06:39 schwicke: kind of... being consistent doesn't mean that we don't need to support overcommit
16:06:40 schwicke, not for subprojects
16:06:57 what about root projects?
16:07:07 schwicke: this was the default behavior for projects before HMT
16:07:21 today, this works on neutron, cinder, magnum and nova
16:07:42 so we can do that for every root project
16:07:53 Currently, in nova I have coded it like that only
16:08:02 raildo: hi, can I help answer questions on my thoughts here?
16:08:25 johnthetubaguy: sure
16:08:27 Hi John... we were talking about overcommit
16:08:36 my basic worry is about us changing the general pattern for quotas
16:09:16 so many/most users overcommit their quota, it's just there as a safety net
16:09:40 as a user, I would expect hierarchical quota to be the same idea as the regular quota
16:09:52 johnthetubaguy: ++
16:10:01 ++
16:10:09 now I see a use case for not having overcommit, and that should just work, you give people less quota, that's cool
16:10:44 I could see a case where we add extra things to disallow overcommit, in different ways, but that's a different feature really
16:11:07 For the root projects it makes sense.
16:11:09 my take is, by default, I would like us to keep allowing overcommit
16:11:14 I have some doubts for sub-projects though.
16:11:21 schwicke: ++
16:11:42 johnthetubaguy, I see... so, I think we have to also support overcommit in Cinder, right? In order to keep consistency.
16:11:50 johnthetubaguy: for subprojects too?
16:12:01 Let's say the resource provider sets some quota on the root project.
16:12:02 ericksonsantos: that would be ideal, that's true
16:12:10 johnthetubaguy, nice
16:12:12 johnthetubaguy: +1
16:12:29 why not? It's really weird to have different behaviours for projects at different levels...
16:12:35 then he creates some sub-projects and delegates the administration of those
16:12:35 sajeesh: so I am talking about subprojects and projects really, I expect them to work the same way
16:12:36 this is bad design
16:12:49 johnthetubaguy: ok
16:13:04 raildo: so to keep consistency, an openstack spec would be a good thing here
16:13:29 +1
16:13:40 johnthetubaguy: you mean a cross-project spec for every service that has quota control?
16:13:48 yeah
16:13:58 johnthetubaguy: makes sense...
16:14:05 but let's not get ahead of ourselves here
16:14:13 johnthetubaguy: makes sense +1
16:14:16 I am just saying that's a good approach to get some consistency
16:14:28 we should probably do that, but anyway, let's get back to overcommit
16:14:35 ok
16:15:01 so, there is a bit I don't quite get here, sorry for these dumb questions
16:15:21 so we have tenant A as a parent of tenants B and C
16:15:24 my worry is that if overcommit is allowed at all levels then the upper levels lose control over the quota, making them meaningless
16:15:47 ok
16:15:52 so what's the starting position here
16:15:59 ok
16:16:03 let's say A got some quota from the resource provider
16:16:05 they all have the default quota?
16:16:21 so before that
16:16:30 A will be having the default quota
16:16:37 A, B, C all have the default quota, right?
16:16:37 johnthetubaguy: the default quota for subprojects will be zero
16:16:46 B and C will be having zero default quota in the beginning
16:16:57 so how is a project and a sub-project different?
16:17:12 they are all just tenants for nova
16:17:19 so I guess the starting point is
16:17:20 johnthetubaguy: only root projects are different
16:17:33 johnthetubaguy, we get this information in keystone
16:17:38 right, so we have to reconfigure nova at this point
16:17:51 so nova has a default tenant of zero?
16:17:58 oops, default quota of zero?
16:18:10 johnthetubaguy, yes, just for subprojects
16:18:11 johnthetubaguy: except for root projects
16:18:21 johnthetubaguy: hum... got it, we are just setting default quota to zero, because of the allocated quota...
16:18:22 johnthetubaguy: yes
16:18:30 for root projects everything is as it is now
16:18:40 +1
16:18:53 The difference between sub-projects and projects is who manages them.
16:19:10 johnthetubaguy: root projects will behave like the projects in DbQuotaDriver
16:19:44 well, that's a change from today I guess, right now everyone just gets the default quota regardless, right?
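The default-quota rule the group settles on above (root projects keep the configured default, subprojects start at zero until a parent grants them quota) can be sketched roughly as follows. This is illustrative only; the function and field names here are hypothetical, not the nova quota driver API, and the default values are made up.

```python
# Sketch of the rule discussed above: root projects get the normal default
# quota, subprojects default to zero. Names (get_default_quota, parent_id)
# and the DEFAULT_QUOTA values are assumptions for illustration.

DEFAULT_QUOTA = {"instances": 10, "cores": 20, "ram": 51200}

def get_default_quota(project, resource):
    """Return the default limit for a project/resource pair."""
    if project.get("parent_id") is None:
        # Root project: behaves exactly as projects do today.
        return DEFAULT_QUOTA[resource]
    # Subproject: zero until its parent explicitly allocates quota to it.
    return 0

root = {"id": "A", "parent_id": None}
child = {"id": "B", "parent_id": "A"}

print(get_default_quota(root, "cores"))   # 20
print(get_default_quota(child, "cores"))  # 0
```

The parent/child relationship itself would come from keystone, as noted in the discussion; nova only sees a token scoped to a single project.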
16:19:55 johnthetubaguy, yes
16:19:58 nova just sees user tokens for a specific tenant
16:20:02 yes
16:20:19 OK, so the first change is Nova defaults sub-tenant quota to zero
16:20:28 yes
16:20:31 johnthetubaguy: this still happens with hierarchical multitenancy, the token is scoped only to a project
16:20:50 raildo: +1
16:21:14 +1
16:21:35 agreed, the token is still for a single tenant
16:21:40 I guess that's my issue here in some ways
16:21:50 so let me describe what I was expecting I guess...
16:21:55 ok
16:21:57 everyone gets a default quota
16:22:06 sub-tenant or otherwise
16:22:26 actually, that's dumb I guess
16:22:43 ... so who creates the sub-tenant usually?
16:22:49 the root tenant?
16:23:09 we start from the root tenant
16:23:27 the owner of a project can create sub-tenants and determine their quota
16:23:46 schwicke: right, makes sense, just getting that straight in my head
16:24:13 so we could default to having no restriction on sub-tenants, they just consume quota of the parent
16:24:29 exactly!
16:24:59 in that world, the sub-tenant is simply used for scoping resources, etc, and organising, which is cool
16:25:25 sometimes it is easier to look at a real live example.
16:25:51 so in my head, that means each sub-tenant basically has the same quota as the parent, except it consumes quota from that tenant, because it's not a root project
16:26:28 johnthetubaguy: but in the other world, I as a cloud provider want to improve my profit by making better use of the cloud resources, doing overcommit below the hierarchy...
16:26:50 johnthetubaguy: but we do check that the sum of children's quota will not exceed the parent quota
16:26:54 so in this world, I assume the root has an overcommitted quota anyway
16:27:11 sajeesh: agreed the current code doesn't, I guess I am saying it should
16:27:12 johnthetubaguy: and a cloud provider (or project admin) can be present in a parent project (and not always a root project)
16:27:15 johnthetubaguy: in that case yes it is
16:27:32 johnthetubaguy: ok, got your point
16:27:51 so let's back up
16:28:17 I guess I am expecting many root tenants, A and A' just acting like today, each getting their own default quota
16:28:30 ok
16:28:34 hi all
16:28:48 they can then create sub-tenants B, C and B', C' that can consume their quota with them
16:28:52 that's cool, life is happy
16:29:09 in this case, B and C just consume quota from A, no checks of their own
16:29:12 joining late, sorry about that
16:29:25 same for B' and C', they just consume quota of A', no extra checks
16:29:26 johnthetubaguy, right
16:29:35 the trick here is that the person who creates A and A' is not the same who creates B and B'
16:29:42 schwicke: +1
16:29:49 schwicke, +1
16:29:51 OK, so we have the setup
16:29:54 +1
16:29:58 that seems the good default here
16:30:03 and we are not considering the quota for users =/
16:30:09 johnthetubaguy, that's how we are doing it in Cinder
16:30:57 so at some point A wants to limit B and C, because B just used up all the quota, leaving A and C with no quota
16:31:30 seems fine, so A says B can't use more than 1/4 of my quota, same with C
16:31:36 johnthetubaguy: yes, https://review.openstack.org/#/c/205369/ is the implementation of the cinder nested quota driver
16:31:49 correct
16:32:06 OK, so the nova spec seems very different to this world
16:32:13 yes
16:32:18 well, that's wrong
16:32:43 johnthetubaguy, so, how do you think it should work in nova?
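The model walked through above can be sketched in a few lines: sub-tenants consume quota from their parent by default, and a parent may optionally cap a child (e.g. "B can't use more than 1/4 of my quota"). This is a minimal in-memory sketch, not the nova or cinder driver code; the `Tenant` class and its methods are hypothetical.

```python
# Minimal sketch of the hierarchy discussed above. A cap of None means the
# sub-tenant has no limit of its own and simply draws on its ancestors'
# quota; a numeric cap is the optional per-child limit a parent can set.

class Tenant:
    def __init__(self, name, limit=None, parent=None):
        self.name = name
        self.limit = limit    # None: no cap of its own
        self.parent = parent
        self.used = 0

    def consume(self, amount):
        # First pass: every ancestor with a limit must have headroom.
        node = self
        while node is not None:
            if node.limit is not None and node.used + amount > node.limit:
                raise ValueError(f"quota exceeded at {node.name}")
            node = node.parent
        # Second pass: all checks passed, record the usage at every level.
        node = self
        while node is not None:
            node.used += amount
            node = node.parent

a = Tenant("A", limit=100)            # root tenant with its own quota
b = Tenant("B", parent=a)             # no cap: just consumes A's quota
c = Tenant("C", limit=25, parent=a)   # capped at 1/4 of A's quota

b.consume(80)   # ok: A is now at 80/100
c.consume(20)   # ok: C at 20/25, A at 100/100
```

Note this sketch checks actual usage against the parent at consume time; as sajeesh points out above, the current nova code instead checks at quota-set time that the sum of children's limits does not exceed the parent's limit, which is a stricter, non-overcommitting variant of the same idea.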
16:32:53 seems like I want what cinder has, for the Nova one, if I am understanding this all correctly
16:33:18 oh, nice
16:33:20 the current way of working in nova is included as a sub-case, which is when you have only root projects.
16:34:02 so we have per-user quota I guess, is this the bit that's causing worries?
16:34:32 yes, we are not sure about how to handle per-user quotas
16:34:39 johnthetubaguy: you expect this behaviour in the current DbQuotaDriver, right? In other words, a user doesn't need to change any configuration for this to work.
16:35:00 raildo: not sure if I understand that question yet
16:35:17 ericksonsantos: so just thinking about per-user quotas, after reading this: https://blueprints.launchpad.net/nova/+spec/per-user-quotas
16:35:27 honestly, a user just consumes the project quota
16:35:49 it sounds like just another level of hierarchy, for users within the tenant
16:36:20 johnthetubaguy: we have the DbQuotaDriver today, right? We can change this driver to include the hierarchy information, as we did on Cinder, or we can create a new driver, and the user would have to change the driver in nova.conf for this to work.
16:36:21 similar to just having nested tenants, except when you list your servers all users see each other's servers in the same tenant
16:36:23 johnthetubaguy, hm... so, we won't need to change it, right?
16:36:45 ericksonsantos: unsure, I suspect the code is not as clean as all that
16:37:02 ericksonsantos: but in theory, the behaviour can be the same, logically speaking :)
16:37:17 johnthetubaguy: for nested projects in nova, users can be treated like sub-projects, right?
16:37:25 johnthetubaguy, I see...
thanks for clarifying that
16:37:32 sajeesh: roughly yes, it's the same deal
16:37:45 johnthetubaguy: ok
16:37:48 so massive thank you for walking through all that
16:38:05 my head is much clearer on this now
16:38:14 I think we need the spec updated so it does that quite simple walkthrough
16:38:19 awesome :D
16:38:30 +1
16:38:31 so there is one other issue here...
16:38:31 ok
16:38:36 johnthetubaguy: +1
16:38:42 actually I updated the spec, to be closer to the cinder implementation
16:38:48 but we can fix some points
16:39:00 the slight issue is that quotas are fairly broken at the moment
16:39:26 they race like crazy, and the fix-up logic is messy, and no one really understands the code enough to fix it
16:39:26 :(
16:39:48 johnthetubaguy: we can put this as a next step for us
16:39:51 now honestly, that's mostly why it's proving hard to get people to review and think about this
16:40:04 raildo: +1
16:40:09 we do have a spec up with ideas to totally simplify the system
16:40:28 if I was given a choice, I would say, simplify it, then add the new features into the simpler system
16:40:47 following that idea to create a cross-project spec for nested quota in every service :)
16:41:10 to be honest, that's my preference here, fix quotas, then add the nesting
16:41:25 there is a horrible idea that just popped into my head....
16:41:32 johnthetubaguy: nested quota just adds a hierarchy structure for quota management…
16:41:57 johnthetubaguy: do you have some open bugs related to this?
16:42:02 so generally, it's best to fix something before you extend it, that's all really
16:42:04 johnthetubaguy, can you send us a link for this spec for us to take a look at?
16:42:35 +1
16:42:36 raildo: honestly, there are a few, but it's mostly that there are lots of edge cases, and the quotas in production seem to drift out of sync with reality in odd ways
16:42:37 johnthetubaguy: I think that we can go on fixing some points when we are adding nested quota...
16:42:52 so the fix here is a rewrite of the concept, that's the issue
16:42:57 let me find the spec for that...
16:43:04 ok
16:43:29 https://review.openstack.org/#/c/182445/2/specs/backlog/quotas-reimagined.rst,cm
16:43:39 if the idea is a complete re-design, would it not be better to do that from the beginning with support for nesting?
16:43:44 we had a session at the last summit discussing all these quota issues, and that kind of came out of that
16:44:01 schwicke: that's an option yes, I would go for that
16:44:10 schwicke: +1
16:44:21 what are the timescales for this?
16:44:29 we could have a whole new quota driver that takes the new approach
16:44:46 johnthetubaguy: +1
16:44:49 the current problem is I have been totally unable to get anyone to work on quotas to fix them up
16:45:18 so the timescale is currently towards the "unknown point in the future" rather than next cycle, or the one after
16:45:32 we have ideas, but not enough willing folks to work on the fixes right now
16:45:44 I think that the addition of nested quota in nova in mitaka will be a great feature
16:45:48 so I like the idea of doing nested from the beginning
16:46:01 and I think that we can start working on this re-design without removing this feature for now
16:46:02 as a separate driver, written from scratch
16:46:15 yes
16:46:50 johnthetubaguy: qq, so is fixing the quota infrastructure a dependency for making nested quota in nova? Or can nested quota for nova go in mitaka and the redesign in a future release? Or can both efforts go in parallel?
16:47:20 so the problem is quotas don't work properly, I really don't like extending the broken thing
16:47:50 now we could add a new thing that supports nesting from the beginning
16:48:04 ok
16:48:05 as suggested above, but that might be a big task
16:48:11 so the idea goes like this...
16:48:12 yes
16:48:19 I'm a bit worried about the timelines to implement this.
16:48:31 johnthetubaguy: that's true... it's a huge task
16:48:36 have a db table with an entry for every quota "usage"
16:48:43 and if we think about compatibility with cinder, this will not happen, since the cinder code is very similar to the nova code
16:48:44 use a compare-and-swap style update
16:49:00 so add in the usage, then check if you have gone over, if you have gone over remove the usage
16:49:05 so we might have to re-design quota for other services... and this is a huge risk
16:49:26 simple optimistic concurrency control stuff, basically
16:49:39 johnthetubaguy: lol
16:50:06 yeah, "simple" and concurrency in the same sentence, that's just wrong :)
16:50:24 the other idea is one about a clothes/washing line
16:50:26 haha
16:50:43 basically, in the API layer, you have to pass a quota check, just like a policy check
16:50:49 there is a line in the sand you must pass
16:51:06 the checks ensure you don't have too much quota when you pass that line and consume the quota
16:51:23 when you delete the server, you decrement the quota (even if the delete fails)
16:51:36 i.e. we remove all the rollback complexity
16:51:56 (there is a dance around resize you need, but it's the same idea there)
16:52:11 anyways, with all that in place, we have a really simple system
16:52:35 and I think the nested stuff should be quite easy to add into that system, it's just a few more checks in the check phase
16:52:41 anyways, that spec tried to describe all that
16:52:57 so... I should shut up now
16:53:06 haha
16:53:08 I guess the question is, how do we move forward...
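The "add the usage, check, remove it if you went over" idea johnthetubaguy sketches above is simple optimistic concurrency. A toy version, with an in-memory dict standing in for the quota "usages" db table, might look like the following; the real design would rely on atomic SQL updates rather than a Python lock, and all names here are hypothetical.

```python
# Toy sketch of the optimistic quota update described above: record the
# usage first, verify the limit, and undo the increment if it overshot.
# On delete, just decrement -- no rollback bookkeeping, matching the
# "line in the sand" idea from the discussion.

import threading

usages = {"instances": 0}
limits = {"instances": 10}
_lock = threading.Lock()   # stands in for the db's atomicity guarantees

def consume(resource, amount):
    with _lock:
        usages[resource] += amount          # optimistically add the usage
        if usages[resource] > limits[resource]:
            usages[resource] -= amount      # went over: remove it again
            return False
    return True

def release(resource, amount):
    with _lock:
        usages[resource] = max(0, usages[resource] - amount)

print(consume("instances", 8))   # True
print(consume("instances", 5))   # False, would exceed the limit of 10
release("instances", 8)
```

Nesting would then be, as noted above, just a few more checks in the check phase: repeat the same over-limit test against each ancestor's limit before committing the usage.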
16:53:17 johnthetubaguy: ++
16:53:18 I think we update that spec so we match cinder
16:53:33 johnthetubaguy: thanks a lot for joining the meeting
16:53:39 I think we probably cut and paste that into the openstack specs repo, so we get cross-project alignment
16:54:09 we then spend some time at the summit (in the unconference I think) to look at what the community thinks is the best approach
16:54:37 from our discussion here, sounds like Cinder has a nice simple system for nested, that is basically what we have with users already in nova
16:54:42 sounds the same anyway
16:54:49 johnthetubaguy: do you want us to change this spec that you sent to match cinder, or the spec that we are moving to mitaka?
16:55:16 the sticking point is, fix quotas then add features, or add features then fix, I prefer the first one, but I don't have anyone to help with that yet
16:55:35 raildo: so I think we rework that moved spec so it matches cinder
16:55:43 johnthetubaguy: ok
16:55:50 johnthetubaguy: +1
16:55:56 raildo: that might just be a cut and paste of the cinder spec, with s/cinder/nova/ s/volumes/servers/
16:56:03 my feeling is that there is some sympathy for the approach here
16:56:07 johnthetubaguy: ++ haha
16:56:44 schwicke: so I love the feature, I want the feature in Nova, the problem is how we make that happen in a sustainable way
16:57:47 so that was super helpful for me, thanks for letting me wreck your meeting here
16:57:50 sure
16:58:02 thanks a lot for joining!
16:58:05 johnthetubaguy, thank you :)
16:58:05 does everyone feel like we have a path to move forward?
16:58:09 johnthetubaguy: thanks a lot :-)
16:58:10 I also think it was very useful
16:58:10 thanks johnthetubaguy :)
16:58:17 thanks =)
16:58:23 time is out
16:58:24 johnthetubaguy: sure, many things are clear now
16:58:29 hey, happy to help, will read the logs before I go through that spec :)
16:58:38 thanks! :)
16:58:41 we have to finish the meeting :(
16:58:53 yep
16:58:59 #endmeeting