16:00:00 <jungleboyj> #startmeeting cinder
16:00:01 <openstack> Meeting started Wed Mar 7 16:00:00 2018 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:06 <openstack> The meeting name has been set to 'cinder'
16:00:11 <jungleboyj> courtesy ping: jungleboyj DuncanT diablo_rojo, diablo_rojo_phon, rajinir tbarron xyang xyang1 e0ne gouthamr thingee erlontpsilva patrickeast tommylikehu eharney geguileo smcginnis lhx_ lhx__ aspiers jgriffith moshele hwalsh felipemonteiro lpetrut
16:00:18 <e0ne> hi
16:00:20 <tommylikehu> hi
16:00:29 <geguileo> hi! o/
16:00:35 <arnewiebalck_> hi
16:00:43 <amito> Hey
16:01:28 <jungleboyj> Give people another minute to collect up.
16:01:49 <jungleboyj> Hope everyone eventually made it home safely from the PTG.
16:02:01 <DuncanT> Hey
16:02:19 <DuncanT> Still a bit of snow around :-)
16:02:22 <jungleboyj> DuncanT:
16:02:31 <jungleboyj> Has your country returned to normal?
16:02:56 <jungleboyj> We had a storm here on Monday that made the one there look like a flurry. Hardly paused us.
16:03:05 <DuncanT> Yeah. Got rid of nearly all of the angry foreign types who were complaining about their extended stay now, too
16:03:20 <jungleboyj> Though it took 4 attempts to get Rachel's car out of my neighborhood including a beautiful 180 I engineered.
16:03:29 <jungleboyj> DuncanT: :-)
16:03:29 <DuncanT> Video?
16:03:42 <jungleboyj> DuncanT: Nope sorry, was too busy driving the car.
16:03:46 <DuncanT> #action jundleboy to get a dashcam
16:03:48 <xyang> hi
16:03:54 <jungleboyj> :-)
16:04:11 <jungleboyj> Ok, I know smcginnis is in Tokyo so he is probably not going to be here.
16:04:18 <jungleboyj> So, lets get started.
16:04:25 <jungleboyj> #topic announcements
16:04:47 <jungleboyj> First, my apologies for everyone who got caught up in the Cinder Dinner debacle.
16:05:01 <jungleboyj> Can't believe an Irish pub would run out of food and beer. :-(
16:05:06 <Swanson> Hi
16:05:17 <jungleboyj> Thanks to those of you who came and who tried to come.
16:05:31 <jungleboyj> I apparently was too optimistic that night.
16:05:47 <jungleboyj> I blame being a Minnesotan.
16:06:32 <jungleboyj> Also, I have started working on getting the Cinder Rocky priorities and spec review list put together. I have linked it at the top of the meeting agendas etherpad.
16:06:42 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking
16:06:49 <jungleboyj> I will be talking about that a bit more later in the meeting.
16:07:00 <tommylikehu> jungleboyj: thanks
16:07:01 <jungleboyj> Also a note that we have moved to the Rocky etherpad for meeting agendas.
16:07:11 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-rocky-meeting-agendas
16:07:48 <jungleboyj> Last, I have gotten a recap of all the decisions and action items put together from the PTG:
16:07:51 <jungleboyj> #link https://wiki.openstack.org/wiki/CinderRockyPTGSummary
16:08:12 <jungleboyj> If anyone has concerns or questions with the content there, please let me know.
16:08:43 <jungleboyj> I think that is all I had for announcements.
16:08:52 <jungleboyj> #topic PTG Wrap Up
16:09:10 <jungleboyj> Thank you to all of you that were able to make it there. I feel like it was a productive week.
16:09:27 <jungleboyj> Thank you all for being flexible and working around the issues that the Beast From the East caused.
16:09:47 <jungleboyj> Kind-of feel like we all shared living through a disaster movie. ;-)
16:10:13 <jungleboyj> As you can see from the links above I have worked to get all the info we discussed organized for this release already.
16:10:39 <jungleboyj> Anyone have anything else they want to share with regards to the PTG?
16:11:36 <jungleboyj> For those who couldn't be there, I will share that the Foundation is having trouble getting funding for the PTGs from sponsors and is, therefore, losing a lot of money on them.
16:11:59 <jungleboyj> Sounds like they will be having another PTG in the fall, location TBD, but not sure about after that.
16:12:07 <e0ne> :(
16:12:18 <tommylikehu> :(
16:12:20 <jungleboyj> I think if they stop doing PTGs this team would need to go back to doing mid-cycles.
16:12:35 <jungleboyj> Do people agree with that statement?
16:12:52 <e0ne> jungleboyj: +1 for mid-cycles in this case
16:13:24 <geguileo> jungleboyj: I agree
16:13:42 <jungleboyj> Anyone disagree?
16:14:46 <eharney> +1 midcycles
16:15:02 <jungleboyj> Ok, so I am going to start putting a bug in my managements ear about possibly doing something in RTP February or so of next year.
16:15:28 <jungleboyj> That way we have at least started considering an option if the PTGs come to an end.
16:16:12 <DuncanT> What are the odds of snow stopping play there?
16:16:27 <jungleboyj> #action jungleboyj to start back-up planning for a mid-cycle in RTP for spring 2019.
16:16:35 <jungleboyj> DuncanT: :-) Pretty low.
16:16:51 <jungleboyj> Could happen in January but now they are at the point where the temps are in the 50's.
16:17:30 <jungleboyj> #topic New processes for Rocky release.
16:17:44 <jungleboyj> So, I have taken some of the discussion from the PTG and decided to try something new.
16:17:58 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking
16:18:15 <jungleboyj> If you look at the spec review tracking page you will see that I have extended the content somewhat.
16:18:42 <jungleboyj> Instead of just including specs and associated reviews I have also put sections in for each of the changes that we agreed upon at the PTG.
16:18:55 <jungleboyj> I often use that page as a starting point for my review priorities.
16:19:05 <jungleboyj> I am hoping the rest of the team can do the same.
16:19:43 <jungleboyj> If you have work you are doing that is related to one of the priority items, please add it to the review list there and then this will help the team to focus on getting the work we have agreed upon reviewed and merged.
16:20:03 <jungleboyj> I will be taking some time in the weekly meeting to quickly review it each week and to follow up on progress.
16:20:18 <jungleboyj> I have the list roughly in priority order.
16:20:33 <jungleboyj> I am leaving those that still need specs merged first in the 'Spec Review' list.
16:21:25 <jungleboyj> I hope this is helpful to the team. Think it is worth a try and will help avoid getting to the end of the release, me going through the list of stuff that we talked about at the PTG and then realizing we missed stuff like happened with Queens.
16:21:55 <jungleboyj> Any questions on that?
16:22:40 <hemna_> ooh mid cycles...
16:22:42 <eharney> i'll take a look at it
16:22:49 * hemna_ catches up
16:23:39 <jungleboyj> eharney: Thanks.
16:23:52 <jungleboyj> Welcome hemna_ Glad you are still alive.
16:24:31 <jungleboyj> So, obviously this takes the community to help make successful. I will add reviews in there as I see them. Just feel like we need to add a little more guidance behind our work.
16:24:41 <jungleboyj> Don't want to kill everyone with process either though.
16:25:29 <jungleboyj> That was all I had there.
16:25:38 <jungleboyj> #topic cinder coresec list
16:25:49 <jungleboyj> Thank you to eharney for noting the list is out of date.
16:26:03 <jungleboyj> I have gotten admin control from jgriffith for our security reviewer list.
16:26:37 <jungleboyj> So, I think that leaving hemna_ eharney DuncanT on the list makes sense.
16:26:46 <jungleboyj> I added smcginnis as he was, hilariously, missing.
16:26:50 <hemna_> hehe
16:26:57 <hemna_> well he is kinda shifty....
16:27:01 <jungleboyj> hemna_: Yeah.
16:27:31 <jungleboyj> So, I am also thinking that thingee can maybe be removed as well as winston-d
16:27:50 <hemna_> heh winston-d
16:27:52 <hemna_> miss that guy
16:27:59 <jungleboyj> Add in e0ne, geguileo and tommylikehu ?
16:28:05 <hemna_> +1
16:28:05 <jungleboyj> hemna_: Yeah, me too.
16:28:43 <jungleboyj> Any objections to that plan?
16:28:55 <jungleboyj> eharney: Any concerns that that is too large a group to have?
16:29:06 <eharney> no
16:29:18 <jungleboyj> Ok. So I am going to make those changes.
16:30:02 <jungleboyj> Thank you for the input there.
16:30:10 <Swanson> hemna sighting.
16:30:25 <hemna_> barely survived Dublin
16:30:33 <jungleboyj> hemna_: Will be here more often now.
16:30:36 <jungleboyj> Righ?
16:30:40 <jungleboyj> Right?
16:30:41 <hemna_> that's the plan :)
16:30:44 * jungleboyj glares at hemna_
16:30:58 <DuncanT> You'll be in Dublin more often? Awesome
16:31:01 <jungleboyj> #topic deferred volume removal
16:31:16 <jungleboyj> arnewiebalck_: The floor is yours.
16:31:20 <arnewiebalck_> :)
16:31:50 <arnewiebalck_> I’d like to propose to support rbd’s deferred volume deletion in Cinder.
16:31:56 <arnewiebalck_> As an option.
16:32:14 <e0ne> it would be good to have feedback from eharney and jbernard on this
16:32:27 <eharney> the proposal is that this is leveraged by the existing volume delete API?
16:32:35 <jungleboyj> e0ne: ++
16:32:39 <jungleboyj> Why I brought it here.
16:32:59 <DuncanT> Can somebody explain the way it works, please?
16:33:12 <hemna_> the name alone gives me pause
16:33:23 <e0ne> from the docs: "Initial support for deferred image deletion via new rbd trash CLI commands. Images, even ones actively in-use by clones, can be moved to the trash and deleted at a later time."
16:33:53 <hemna_> when does that later time happen?
16:33:54 <eharney> this sounds like something we should be investigating for the RBD driver, yes
16:33:59 <hemna_> after all outstanding operations are complete?
16:34:03 <arnewiebalck_> hemna: not by itself
16:34:19 <DuncanT> arnewiebalck_: So what's the point?
16:34:23 <arnewiebalck_> the real deletion needs to be triggered
16:34:33 <hemna_> so delete.......no..now really delete...
16:34:47 <eharney> this has the potential to help with some issues we've had in the driver when deleting volumes with dependencies, i think, but i'm not too familiar with it
16:34:48 <arnewiebalck_> DuncanT: The point is that “deletion” would be instantaneous from Cinder’s PoV.
16:34:57 <arnewiebalck_> move to trash is very fast
16:35:05 <arnewiebalck_> deletion happens in the background or later
16:35:06 <jungleboyj> So, Cinder thinks it is gone but later RBD finishes the process.
16:35:14 <hemna_> wiat
16:35:16 <hemna_> err
16:35:17 <e0ne> eharney: as I understood, it can help us to avoid 'rbd_flatten_volume_from_snapshot' param usage
16:35:18 <jbernard> right, it's lazy
16:35:37 <arnewiebalck_> deletion time becomes independent from volume size
16:35:39 <hemna_> wait....the real deletions happen automatically? or have to be triggered later (by cinder somehow?)
16:35:47 <DuncanT> arnewiebalck_: What's the advantage? delete is async, and until it is really gone you can't reuse the storage, so all you're doing is lying to the user....
16:36:34 <arnewiebalck_> DuncanT: we’ve seen cases where heavy deletion of several large volumes can block other operations
16:36:38 <hemna_> if Cinder has to trigger the real delete later....how can it do that because the volume is marked deleted in the cinder DB.
16:36:38 <eharney> DuncanT: that kind of happens already when data is shared between volumes and images, might be useful, we'll have to go look at design for this
16:36:49 <arnewiebalck_> the user can delete and recreate immediately
16:36:52 <e0ne> DuncanT: in case of Ceph, large volumes removal could be very long operation. in this case, cinder will mark such volume as deleted
16:36:55 <eharney> at any rate this would just be a change in the driver, i believe
16:36:57 <arnewiebalck_> I’m not lying to the user :)
16:37:14 <arnewiebalck_> eharney: yes, this is driver specific
16:37:24 <hemna_> how/when is the trashed volume deleted?
16:37:33 <DuncanT> arnewiebalck_: Until the volume is gone, they can't recreate if there is not enough space....
16:37:37 <jungleboyj> hemna_: At that point RBD handles the deletion from a separate command.
16:37:43 <arnewiebalck_> hemna: sth needs to trigger a purge
16:37:45 <jbernard> rbd trash purge
16:37:55 <hemna_> in other words, cinder has to issue that
16:38:03 <e0ne> :(
16:38:06 <arnewiebalck_> purge comes with Mimic, in Luminous it would be remove
16:38:08 * jungleboyj laughs ... it is like Window's recycle bin.
16:38:09 <hemna_> that's bad mmmkay
16:38:15 <eharney> guys
16:38:29 <jungleboyj> eharney: Bring reason.
16:38:30 <eharney> we don't need to try to hash out the whole implementation here right now
16:38:44 <jungleboyj> ++
16:38:59 <jungleboyj> eharney: jbernard is there customer demand for this?
16:39:00 <hemna_> I'm just trying to understand the basics of it
16:39:32 <eharney> sounds like there would be if it gives a perf benefit
16:39:38 <DuncanT> We don't need to hash out the whole implementation, but unless we can discuss the theory of operation, there's no point it being on the agenda....
16:40:10 <jungleboyj> :-) So, this feature is limited to RBD.
16:40:14 <eharney> all i can say is it sounds like something interesting to look into
16:40:23 <e0ne> eharney: +1
16:40:30 <geguileo> eharney: but while on trash, is it still affecting the used space? because then it's affecting the scheduler over provisioining?
16:40:34 <jungleboyj> eharney: +1
16:40:42 <geguileo> eharney: +1
16:40:51 <hemna_> so the only perf benefit is that cinder marks it as deleted in it's db and then ignores it forever, instead of waiting for a 'real' rbd delete image to finish?
16:41:08 <jungleboyj> Are the RBD deletes slow?
16:41:28 <arnewiebalck_> jungleboyj: for a 50TB volume they can be
16:41:29 <jbernard> we can easily synchronize this in the driver, would it be helpful if i send a mail to the list with more details?
16:41:30 <hemna_> RBD *** slow....
16:42:28 <hemna_> arnewiebalck_, is it more efficient to have 10 50TB volumes in the trash being purged? or 10 50TB volumes being deleted the original way?
16:42:48 <jungleboyj> arnewiebalck_: Does it also allow the user to control the load on the filesystem caused by deletes?
16:42:50 <geguileo> arnewiebalck_: so you tell Cinder to delete 50TB, cinder tells you is deleted, and you still don't have that space available, that would be odd for any Cinder user
16:42:55 <arnewiebalck_> hemna_: it’s more efficient from Cinder’s and the user’s point of view I’d say
16:42:55 <jungleboyj> I.E. delete at off peak times?
16:43:02 <jbernard> there is an expiration limit, which purge adheres to
16:43:09 <DuncanT> I see no advantage to speeding up the transition from 'deleting' to 'deleted' if that is the only difference... If the transition to 'deleted' has finished, the space should be immediately available for reuse IMO
16:43:17 <arnewiebalck_> hemna_: I don’t think it’s faster on the backend
16:43:55 <hemna_> arnewiebalck_, ok, I guess that's what I'm trying to understand. the purge just happens async at some other time, and it's still as inefficient as a normal delete.
16:44:22 <hemna_> kicking the can down the road is all this does
16:44:24 <arnewiebalck_> DuncanT: if a user deletes a 50TB volume, he has to wait the hour it takes to delete it before recreating a new volume
16:44:51 <arnewiebalck_> DuncanT: Also, Cinder may run out of worker threads as they are all busy with deletions. WE’ve seen this here.
16:45:04 <arnewiebalck_> hemna_: yes
16:45:14 <geguileo> arnewiebalck_: Are you telling me you are running out of 1000 threads in Cinder volume?
16:45:17 <DuncanT> arnewiebalck_: But until the delete is finished, they're still using the storage, so they should still hold the quota
16:45:34 <geguileo> DuncanT: +1
16:45:44 <arnewiebalck_> DuncanT: why?
16:46:01 <DuncanT> arnewiebalck_: The point of the quota is to limit how much storage a user can tie up... Seems like there's a trivial DoS attach with deferred delete....
16:46:09 <arnewiebalck_> DuncanT: if the space is needed, the trash can be purged any time
16:46:26 <jungleboyj> Ok ... team ... I think we are missing the point here.
16:46:40 <geguileo> jungleboyj: +1
16:46:49 <geguileo> this requires a spec
16:46:54 <jungleboyj> We have another topic to cover yet and I don't want to go spinning out of control.
16:46:56 <geguileo> and we can discuss it there
16:47:00 <jungleboyj> geguileo: ++
16:47:20 <jungleboyj> For those that know RBD this sounds like something that is somewhat sensible.
16:47:29 <DuncanT> Specs are painful to have discussions on about something like this :-(
16:47:37 <DuncanT> Way too slow round-trip time
16:48:02 <jungleboyj> Lets get a spec on this that explains better the semantics and then continue discussion here when we all understand the proposal.
16:48:22 <jungleboyj> DuncanT: Understood, but it is a vehicle for jbernand, eharney and arnewiebalck_ to educate us.
16:48:53 <jungleboyj> arnewiebalck_: You ok with getting that started and we can talk more in a week or two?
16:49:00 <arnewiebalck_> jungleboyj: sure
16:49:00 <_alastor_> Does this sound useful for any backend other than Ceph?
16:49:08 <jungleboyj> _alastor_: Good question.
16:49:13 <eharney> it's a ceph driver/backend feature...
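[Editor's note: for readers not familiar with the rbd trash feature discussed above, the snippet below is a rough, illustrative sketch of how a backend could defer deletion from Python. It assumes the python-rbd bindings from Luminous (12.x) or newer, which expose trash_move()/trash_list()/trash_remove(); the pool name 'volumes' and image name 'volume-1234' are made-up examples, and this is not the actual Cinder RBD driver code.]

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')
        try:
            # "Delete" from the caller's point of view: moving the image to
            # the trash returns almost immediately, regardless of volume size.
            rbd.RBD().trash_move(ioctx, 'volume-1234', delay=0)

            # The space is only reclaimed when something later empties the
            # trash (Mimic adds `rbd trash purge`; before that, each trash
            # entry has to be removed explicitly, e.g. from a periodic task).
            for entry in rbd.RBD().trash_list(ioctx):
                rbd.RBD().trash_remove(ioctx, entry['id'])
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()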
16:49:24 <DuncanT> We've discussed deferred deletes before
16:49:29 <DuncanT> As a general feature
16:49:44 <DuncanT> I think the last person who proposed it wanted an un-delete featuere
16:49:52 <DuncanT> (Might even have been HP that proposed it)
16:50:00 <DuncanT> Quota ended up being the issue
16:50:07 <eharney> proposing a new feature/API sounds like a whole different discussion to me
16:50:21 <jungleboyj> eharney: ++
16:50:27 <arnewiebalck_> eharney: +1
16:50:28 <jungleboyj> Ok, so lets table this for today.
16:50:41 <jungleboyj> #action arnewiebalck_ to put together spec with better description.
16:50:54 <jungleboyj> #action jungleboyj to add to spec review list.
16:51:15 <jungleboyj> #action team to review the spec and we will discuss further in the meeting a week or two from now.
16:51:29 <jungleboyj> arnewiebalck_: Sound like a plan for you?
16:51:41 <arnewiebalck_> jungleboyj: perfect
16:51:51 <arnewiebalck_> thx
16:52:09 <jungleboyj> arnewiebalck_: Welcome. Thanks for bringing it up.
16:52:20 <jungleboyj> Ok, last topic:
16:52:35 <jungleboyj> #topic Reexamine policy rule of OWNER .
16:52:40 <jungleboyj> tommylikehu: ....
16:52:43 <tommylikehu> https://bugs.launchpad.net/cinder/+bug/1714858
16:52:43 <openstack> Launchpad bug 1714858 in Cinder "Some APIs doesn't check the owner policy" [Medium,In progress] - Assigned to TommyLike (hu-husheng)
16:52:47 <tommylikehu> jungleboyj: thanks
16:52:57 <tommylikehu> hope this would be quick and easy one
16:53:03 <jungleboyj> #link https://bugs.launchpad.net/cinder/+bug/1714858
16:53:09 <tommylikehu> this is not a big issue, but will affect a lot of APIs.
16:53:27 <tommylikehu> the point is what the response code we want the end user to get when non-administrator try to query/update/delete resource that is out of his project scope.
16:53:50 <tommylikehu> and whether we want to make the policy of OWNER really works in cinder.
16:54:33 <eharney> i'm not too clear on the status of this bug and its fix as of now
16:55:06 <eharney> aren't there still concerns buried in there somewhere about force delete operations being allowed when they shouldn't be, etc? (if not other operations)
16:55:41 <tommylikehu> eharney: this bug is saying the policy of OWNER will always succeed
16:55:57 <eharney> tommylikehu: but it doesn't spell out what that actually means
16:56:03 <jungleboyj> tommylikehu: So, the problem is that if it is admin or owner sometimes the owner isn't able to use the command with that policy?
16:56:11 <eharney> which is one of the things that we need to provide so we can understand the impact
16:56:23 <jungleboyj> tommylikehu: ?
16:56:36 <eharney> it isn't clear when reading that bug if there are a bunch of different vulnerable APIs or what
16:57:04 <geguileo> jungleboyj: the policy check will always think you are the owner
16:57:15 <eharney> that sounds fairly severe to me
16:57:19 <tommylikehu> geguileo: +1
16:57:20 <jungleboyj> geguileo: Oy!
16:57:22 <DuncanT> eharney: Yup
16:57:33 <jungleboyj> Yeah, that is a problem.
16:57:39 <eharney> so how is this not a security bug?
16:58:00 <jungleboyj> eharney: Now I am agreeing with you.
16:58:06 <geguileo> but most operations won't succeed because we limit our retrieval by the context's project_id
16:58:16 <geguileo> so the policy passes
16:58:17 <tommylikehu> eharney: because we still can not handle the resource that out of my project scope cause it will raise 404
16:58:27 <eharney> tommylikehu: for what operation?
16:58:27 <geguileo> and the get of the resource would raise a NotFound
16:58:39 <eharney> this change was made in numerous places, we don't even have a clear list of which are broken
16:58:51 <geguileo> ^ that is the problem
16:59:16 <geguileo> I checked some and we were protected by the DB retrieval mechanism, but I didn't check them all
16:59:18 <eharney> so it still sounds like it should be tracked as a security bug to me until it's proven that there is no vulnerability etc
16:59:26 <tommylikehu> the problem is policy OWNER doesn't work at all
16:59:43 <eharney> that's an implementation problem, the question is, what vulnerabilities are deployers exposed to?
17:00:14 <jungleboyj> So, we are out of time.
17:00:21 <jungleboyj> Can we take this to the channel?
17:00:32 <tommylikehu> sure
17:00:39 <jungleboyj> Ok. Cool.
17:00:48 <jungleboyj> Thanks everyone for a good meeting.
17:00:52 <jungleboyj> Talk to you next week.
17:00:56 <jungleboyj> #endmeeting
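[Editor's note: the sketch below illustrates the OWNER policy failure mode described in the last topic. It is a simplified, hypothetical example using oslo.policy directly; the rule name 'volume:delete', the rule string, and the target dicts are stand-ins, not the actual Cinder code paths covered by bug 1714858.]

    # If the policy target is built from the caller's own context instead of
    # the resource being acted on, an "admin_or_owner"-style rule can never
    # fail, and the only remaining protection is the DB lookup filtering by
    # the context's project_id (which surfaces as a 404).
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'volume:delete', 'is_admin:True or project_id:%(project_id)s'))

    creds = {'project_id': 'tenant-a', 'roles': ['member'], 'is_admin': False}

    # Broken pattern: the target echoes the caller's own project_id, so the
    # "owner" half of the rule is a no-op.
    target = {'project_id': creds['project_id']}
    assert enforcer.enforce('volume:delete', target, creds)      # always True

    # Intended pattern: the target carries the resource's project_id, so a
    # caller from tenant-a is rejected for a volume owned by tenant-b.
    target = {'project_id': 'tenant-b'}
    assert not enforcer.enforce('volume:delete', target, creds)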