16:00:00 #startmeeting cinder
16:00:01 Meeting started Wed Mar 11 16:00:00 2015 UTC and is due to finish in 60 minutes. The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 The meeting name has been set to 'cinder'
16:00:12 hi everyone!
16:00:14 hi
16:00:14 hi
16:00:15 hi all
16:00:16 hi
16:00:16 hi
16:00:17 hi
16:00:22 o/
16:00:30 hi
16:00:31 hi
16:00:38 hi
16:00:46 hi
16:00:47 howdy
16:00:55 so I've been MIA getting feedback at the ops meetup and doing patches here and there. Just wanted to say you all did an awesome job in my absence
16:01:05 hello
16:01:06 seriously look at this https://launchpad.net/cinder/+milestone/kilo-3
16:01:14 so much green!
16:01:23 hi
16:01:23 hello
16:01:31 hi
16:01:34 hello
16:01:41 o/
16:01:43 hi
16:01:44 so thanks everyone for making k-3 successful with Cinder
16:01:51 hey
16:01:58 o/
16:02:01 closing remarks on multi-attach for me
16:02:31 I'm not upset that this is merging. I gave my comments, but ultimately it's up to the community on the direction, so it's merging for k-3
16:02:43 just want to recognize hemna for his hard work on this.
16:02:53 and everyone testing/reviewing things
16:02:57 o/
16:03:07 thanks to everyone for the help and understanding.
16:03:29 ok, enough me cheerleading... let's get started!
16:03:32 the work isn't done, and I'll follow up and finish it off.
16:03:47 agenda for today: https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:03:48 Aye, well persevered, Walt
16:03:48 hemna: Thank you!
16:03:54 #link https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:04:14 #topic Cinder supports several target objects now for the lvm driver
16:04:20 hemna: hi
16:04:24 so yah
16:04:37 #link https://review.openstack.org/#/c/158829/
16:04:40 I thought of this issue when I was reviewing the IET target driver
16:04:53 and realized that I don't think we are CI'ing all of the different target drivers that land
16:05:12 and I wanted to raise the issue and see how we can make sure that the targets get CI'd/gated
16:05:23 hemna: +1
16:05:30 since they fundamentally change how some drivers that use them work, namely lvm
16:05:37 hemna: nope, we're not and we never have
16:05:42 I agree, and I think it's fair after me requesting this from zone manager drivers
16:05:52 o/
16:05:54 should note that IET is not exactly new, though
16:06:03 eharney: +1
16:06:05 eharney, yah I know, but what is new is the target mechanism
16:06:13 jgriffith even asked for this with LIO
16:06:15 and ensuring that they all get tested/gated
16:06:15 as I've explained multiple times, IET has been in place since nova-volume
16:06:15 just important for context
16:06:23 eharney, +1
16:06:52 I'm fine if somebody wants to propose either adding it to the gate/CI or deprecating it in the next release
16:06:55 thingee, yah, especially if we add LIO with all of its possible transports
16:07:33 adding/deprecating what exactly?
16:07:46 eharney: that question for me?
16:07:52 jgriffith: yes
16:07:56 hemna: what's the status of the various ways? in the DRBD-for-nova patch I had to add support to brick, target driver, and the real implementation... which of these should survive?
16:08:02 eharney: so IET in particular
16:08:17 eharney: we need to do something with LIO as well IMO (although I know RHEL does work here)
16:08:30 jgriffith: so does datera
16:08:38 jgriffith: we wrote the damn thing :)
16:08:41 :P
16:08:50 thingee: well... my point was RHEL deploys with it
16:09:04 thingee: they use it with LVM in packstack deploys
16:09:06 flip214, not sure I understand the question, sorry.
16:09:08 jgriffith: so do we, although those are POCs :D
16:09:22 thingee: ok, good for you :) That's not my point though
16:09:35 thingee: my point is that we should have some visible testing
16:09:39 jgriffith: anyways, I think it's fine for these to be within the CIs of drivers?
16:09:40 that's all
16:09:45 jgriffith: +1
16:10:05 thingee: eharney I'd also like to ask if anybody thinks we should limit target options?
16:10:23 jgriffith, +1
16:10:29 hemna: https://review.openstack.org/#/c/156212/ had to touch brick, target driver, and then provide the implementation just to get the codepaths to work.
16:10:30 thingee: eharney I think I floated the idea of making LIO default and dropping some others in the past, but I don't think we were ready
16:10:32 jgriffith: I would disagree with that, mainly because I'm not sure why LIO should be limited to just iSCSI
16:10:35 jgriffith: In what sense?
16:10:46 thingee: no no... that's not what I mean
16:10:46 will any of these get cleaned up before L, so that I can drop some bits of that patch?
16:10:52 jgriffith: oh ok
16:10:57 what's the thinking there?
16:11:01 forgive my ignorance, but how many different targets are we talking about?
16:11:11 thingee: what I mean is we now have IET, TgtADM, LIO and then all the crazy variants that went in this release
16:11:13 thingee: I think testing them as part of a driver CI is fine, though some way of tracing what is tested where would be useful
16:11:21 flip214, hadn't seen that one yet, so I can't really comment just yet. can chat about it in #cinder if you like after the meeting
16:11:22 thingee: do we want to keep dragging all of these?
16:11:28 DuncanT: +1
16:11:31 jungleboyj: i think we're at about 5 total now?
16:11:40 thingee: or do we want to pick, say, LIO and deprecate/drop IET and TGT
16:11:42 jgriffith: might change after a certain session in vancouver
16:11:45 eharney: Thanks.
16:11:48 DuncanT, +1. That's exactly what I was hoping for
16:11:50 thingee: ?
16:11:53 * thingee looks around secretly
16:12:26 DuncanT: +2
16:12:53 well.. not sure anybody else has a concern on this; so I'll step back
16:13:13 I am concerned about the matrix of targets and drivers, but guess it's ok
16:13:19 jgriffith, I think that might be a valid but separate discussion?
16:13:25 jgriffith: +1
16:13:30 jgriffith: do we have a matrix problem with target drivers?
16:13:43 jgriffith, well I am concerned as well, which is why I was bringing up the topic of CI for the various targets.
16:13:49 thingee: well... IMO yeah, kinda
16:13:58 * thingee feels dumb for not knowing this
16:14:10 thingee: LVM can use one of 4 tgt drivers now
16:14:11 jgriffith: I thought abc is enforced with the target class?
16:14:18 * thingee might be confused
16:14:22 thingee: but that's not the point
16:14:38 thingee: enforcing abc is fine, but it doesn't mean it's getting tested or works well
16:14:52 thingee: and it doesn't mean we get to focus on improving/maintaining "one" thing
16:15:01 but as hemna said, maybe this is a different topic?
16:15:06 jgriffith: true, look at the bandaid fix I did with tgt
16:15:09 with acls missing
16:15:25 that would've been annoying to keep in the gate coming close to March 19th
16:15:29 let's make migration to the LIO target a topic in Vancouver?
16:15:43 hemna: +1
16:15:57 hemna: I would like that, but I think I need to spend more time making things more accessible for everyone, as jgriffith pointed out
16:16:02 This seems like something that we need to be in a room to discuss.
16:16:11 I agree with hemna's point that if we have all these options, we should make sure they are covered by CI.
16:16:13 and by accessible I mean easier for people using debian based systems.
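[Editor's note: for context on the "LVM can use one of 4 tgt drivers" point above, the target mechanism is chosen per backend in cinder.conf. A hedged sketch, assuming the Kilo-era `iscsi_helper` option and LVM iSCSI driver path; option and class names may differ in other releases:]

```ini
# Illustrative cinder.conf fragment (not from the meeting): the LVM
# driver delegates target handling to whichever helper is configured.
[lvm-backend]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
# One of the target helpers under discussion:
#   tgtadm (TGT), lioadm (LIO), ietadm (IET)
iscsi_helper = lioadm
```

This per-backend switch is why hemna argues each helper changes how the LVM driver fundamentally works, and so each combination needs its own CI coverage.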
16:16:22 anyways, that's a whole other topic
16:16:30 thingee, yah that sounds good, but another topic :P
16:16:38 so I think we agree with hemna that we need CI here.
16:16:39 sorrison, LIO improvements :)
16:16:49 hemna: I can put together a proposal?
16:16:50 gah, xchat
16:16:54 jgriffith: ++
16:16:55 thingee, +1
16:16:56 for L timeline
16:17:05 that sounds great.
16:17:12 jgriffith: I would like to understand more on the feature matrix of the target drivers too
16:17:21 jgriffith: because I was not aware of this, sounds interesting
16:17:45 I see 5 or 6 targets in there now.
16:17:52 #action thingee to make CI proposal for target drivers in cinder
16:17:59 fc should be another, but that will hopefully get covered by LIO
16:18:06 #action thingee to discuss with jgriffith about feature matrix in target driver
16:18:14 hemna: +1
16:18:19 hemna: plus RDMA variants etc
16:18:19 hemna: anything else?
16:18:21 or anyone else?
16:18:23 I'm good.
16:18:28 eharney, +1
16:18:39 nvme is about to land in LIO too ;D
16:18:49 * thingee raises the roof
16:19:09 eharney: +1
16:19:13 #topic Make public Volume Snapshots
16:19:14 w00t
16:19:25 no name, you're up!
16:19:51 oh
16:19:54 rushiagr: hi
16:19:59 rakesh_mishra_: hi
16:20:04 o/
16:20:15 blueprint:
16:20:17 hi
16:20:17 #link https://review.openstack.org/#/c/125045/
16:20:27 patch:
16:20:29 #link https://review.openstack.org/#/c/159372/
16:20:29 looks too late for Kilo: blueprint is not targeted, code should be rebased
16:20:34 e0ne: +1
16:20:46 getting that out of the way, what did we want to discuss with this, rushiagr?
16:20:47 * DuncanT is still against the concept
16:21:15 thingee: about the idea
16:21:36 thingee: if people still see this as something not required for cinder
16:21:44 DuncanT: maybe spec will give us more details?
16:22:23 #link https://review.openstack.org/#/c/125045/
16:22:26 e0ne: ^
16:22:38 rushiagr: I'm still kind of opposed, and I still ask: "why"
16:22:45 rushiagr: that's what we did volume_transfer for
16:22:56 the last time we discussed this, we wanted better use cases.
16:23:03 * thingee checks if the spec has been updated with that info
16:23:15 rushiagr: there are a number of complexities that this is going to add IMO and I don't see much gain
16:23:18 oops. i missed the link for the spec in the bp
16:23:22 other than "it's nifty"
16:23:35 rushiagr: the spec still doesn't include the added use cases
16:23:46 rushiagr: which is what we asked for the last time this was brought up in the meeting
16:24:09 * thingee has a spec to require this in specs. it'll be great
16:24:14 spec update rather
16:24:26 :-) Specs for specs.
16:24:28 thingee: I think that was for sharing snapshots with a particular tenant..
16:24:40 jungleboyj: hehe
16:24:41 jungleboyj, and a template for those spec specs, with hacking checks.
16:24:52 it's hacking checks all the way down
16:25:09 :-)
16:25:19 * jungleboyj gets my hacking check writing hat out.
16:25:21 rushiagr: yeah I still don't get why someone wants to do that. I think we've discussed just sharing images before?
16:25:30 jgriffith: I think 'nifty' is a good thing to have..
16:25:40 rushiagr: --
16:25:44 rushiagr: sure, depending on the cost
16:25:56 jgriffith: +1
16:26:03 i am a fan of this feature, saves a lot of time creating volumes from a base snapshot instead of an image, for arrays like ours where it's basically a no-op to create a volume from a snapshot
16:26:12 without having to put in an image cache kind of thing
16:26:13 rushiagr: so IMO if we're going to promote snapshots like this we need to revisit our stance on what "snapshots are" and how they're used
16:26:21 rushiagr: so here's the thing, I'm not hearing a strong reason to have this from folks. do you have users in mind who are really wanting this? and if so, can they speak why?
16:26:43 patrickeast, rushiagr: what are the _real_ use cases from users and/or operators?
16:26:50 patrickeast: good point... but why does it matter if it's shared or private to a tenant?
16:26:53 jgriffith: I don't think this breaks our definition of a snapshot. A user will still need to create a vol out of this (publicly shared, owned by a different guy) snapshot
16:27:02 patrickeast: I'd rather you put time into image caching and better glance integration, and get fast BfV as a side effect of this usecase
16:27:06 patrickeast: you mean so tenants don't have duplicate templates?
16:27:30 DuncanT: +1
16:27:31 patrickeast: I'm not sure in the real world there's the "common" template model everybody seems to envision here
16:27:54 jgriffith: yea, i'm not sure how much demand there really is for sharing them between tenants
16:28:05 patrickeast: I actually have a customer who is interested in this feature for the same reason (create boot volume is too slow)
16:28:06 jgriffith: this is something i've heard from our sales guys in the field asking for though
16:28:11 patrickeast: that's the sticking point for me really
16:28:24 patrickeast: hmm... interesting
16:28:35 patrickeast: be wary of the sales request :)
16:28:44 hehe yea
16:28:45 xyang: See my previous comments about just creating an image cache on the array
16:28:55 patrickeast: I get asked for all sorts of crazy stuff from sales, most of the time they don't even know what a tenant is :)
16:29:01 DuncanT: ya, that will solve the problem too
16:29:32 rushiagr: ok, so again I'm going to sound like a broken record on this. Give me use cases, and users actually explaining why they want this.
16:29:32 thingee: a customer of ours uses the same API with amazon, and wants to use the same thing with openstack as they're planning to move.. I'm not saying amazon has it so do it, though..
16:29:47 patrickeast: xyang DuncanT thingee so rather than public snapshots... maybe we need to talk in Vancouver about hierarchical access?
16:30:02 jgriffith: ???
16:30:06 jgriffith: hierarchical access?
16:30:10 the amazon case in openstack is kind of not a priority it seems, if you haven't noticed from other projects
16:30:18 IIRC vishy proposed something like this a while ago... but I don't remember the details
16:30:31 jgriffith: I thought we had talked about that before.
16:30:45 jgriffith: yea, either that or maybe a more standardized image cache... if this is something others are going to run into, it might make sense to put something in that can be shared
16:30:49 thingee: agree.. But I'm not seeing the downsides of this feature as well...
16:31:00 DuncanT: so something like "tenant-A includes tenant-B, tenant-C...."
16:31:05 rushiagr: I'm not hearing a huge want for this feature either
16:31:07 other than 'it's more code'
16:31:12 rushiagr, lots and lots of complexity in the cinder codebase that we have to fix bugs in.
16:31:15 DuncanT: things can be created for only tenant-A
16:31:22 jgriffith: I see.
16:31:23 or the parent tenant
16:31:43 I don't know where it ended up, but I liked it WAY better than the crazy ACL stuff that was going around
16:31:46 jgriffith: How much support does keystone have for that? They were looking at it
16:31:49 anyways, summit session I guess. get users' feedback there, since it seems like this is not being answered in the spec.
16:31:59 I get that you can share to different tenants.
16:32:02 DuncanT: sorry, I have zero insight on where it ended up
16:32:04 that's not a use case
16:32:04 jgriffith: If keystone has sorted it, then sure, we should investigate
16:32:13 thingee: +1. we need to discuss it more
16:32:15 there was a working group that started but I don't know where it went
16:32:26 I'm happy to look into it if people are interested
16:32:32 bring back any info next week
16:32:41 jgriffith: +1
16:32:41 It might be dead for all I know :)
16:32:45 haha
16:32:47 jgriffith: that will be nice
16:33:03 #action jgriffith to get feedback for next meeting, if some working group is not dead.
16:33:18 #agreed potential summit session on this
16:33:23 http://goo.gl/kcBHDt
16:33:38 you talking about hierarchical multitenancy?
16:33:43 vishy: yeah :)
16:33:49 a vishy appears!
16:33:50 the code is in keystone
16:33:51 vishy: just posted link to wiki
16:33:58 rushiagr: https://etherpad.openstack.org/p/cinder-liberty-proposed-sessions
16:34:41 vishy: awesome
16:34:54 thingee: rushiagr DuncanT xyang so I'd propose we look at that path instead
16:35:00 jgriffith: +1
16:35:00 patrickeast: you too :)
16:35:09 sounds good
16:35:12 and nested-quota-support is proposed for nova
16:35:14 jgriffith: sure, thanks
16:35:15 jgriffith: Image cache magic is the way forward IMO
16:35:16 not sure if it made it in yet
16:35:27 I'll have a look..
16:35:36 #agreed potential hierarchy code in Keystone might solve this
16:35:46 rushiagr: anything else?
16:35:51 or anyone else on this?
16:35:52 vishy: looks like part of it: https://review.openstack.org/#/c/129420/
16:36:02 #link https://review.openstack.org/#/c/129420/
16:36:18 thingee: nope
16:36:23 rushiagr: thanks
16:36:25 nope, didn't make it: https://review.openstack.org/#/c/151677/
16:36:39 #link https://review.openstack.org/#/c/151677/
16:36:51 L it is :)
16:36:52 implementing it is essentially updating quotas to support nested operations
16:37:01 #topic Code Freeze Exception for Kilo
16:37:04 then updating the list commands to handle them
16:37:10 or I guess features only
16:37:18 since we'll still accept bug fixes at this point
16:37:22 vishy: well, we have a reference to work off of, and it looks like it will start moving in L
16:37:23 tbarron: hi
16:37:27 get operations should work because roles are inherited
16:37:27 vishy: so we can have some consistency
16:37:33 i want to thank xyang, jungleboyj, e0ne for awesome help getting the swift backup refactor through
16:37:34 we'll start with you, because I'm getting this left and right
16:37:43 #link http://lists.openstack.org/pipermail/openstack-dev/2015-February/056508.html
16:37:53 xyang especially did lots of last-minute testing
16:38:23 but all that was for one review of two
16:38:25 #undo
16:38:26 Removing item from minutes:
16:38:51 the second was dependent, didn't merge
16:38:54 tbarron: My pleasure.
16:39:00 #link http://lists.openstack.org/pipermail/openstack-dev/2015-March/058814.html
16:39:03 but the code has been ready
16:39:07 tbarron: it still requires rebase
16:39:22 tbarron: I would disagree, I think we were waiting around prior to march 10th for new code to be posted
16:39:26 and it's in conflict right now
16:39:37 e0ne: yeah, i couldn't rebase till the change merged last night
16:39:45 I think reviewers have been trying to work with you, but reups have been slow
16:39:50 that's my opinion anyways.
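[Editor's note: vishy's nested-quota idea above ("updating quotas to support nested operations", with usage rolling up through the project hierarchy) can be sketched with a toy model. All names and the data structure here are illustrative, not the actual Keystone/Nova implementation:]

```python
# Toy nested-quota check: an allocation must fit within the quota of
# the requesting project AND every ancestor, where usage counted
# against a project includes its children's usage.
# Two-level hierarchy matching the "tenant-A includes tenant-B,
# tenant-C" example from the discussion (values in GB, made up).
quotas = {"tenant-A": 100, "tenant-B": 40, "tenant-C": 30}
parents = {"tenant-B": "tenant-A", "tenant-C": "tenant-A"}
usage = {"tenant-A": 20, "tenant-B": 35, "tenant-C": 10}

def can_allocate(project, amount):
    """Walk up the hierarchy; every level must have headroom."""
    node = project
    while node is not None:
        # Usage against this node: itself plus direct children
        # (sufficient for this flat two-level example).
        subtree = [p for p in usage if p == node or parents.get(p) == node]
        used = sum(usage[p] for p in subtree)
        if used + amount > quotas[node]:
            return False
        node = parents.get(node)
    return True
```

Under this model, tenant-B can grow by 5 GB (fits both its own 40 GB quota and tenant-A's 100 GB), but not by 10 GB, which would exceed tenant-B's own limit.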
16:39:54 and the rebase just got triggered last Friday
16:40:02 with serious changes to Swift
16:40:16 it wasn't so much a rebase as a rewrite
16:40:17 it was because my incremental backup patch got merged late
16:40:25 and tbarron had to do a big rebase
16:40:28 xyang: ah ok
16:40:29 xyang could speak to the complexity
16:40:42 * tbarron notices she just did
16:40:51 so the hard part is done
16:41:05 swift.py was refactored completely. incremental backup changed swift.py. so the rebase was complicated
16:41:05 and the small dependent change got stuck
16:41:08 tbarron: it's still failing and conflicted
16:41:09 #link https://review.openstack.org/#/c/149726
16:41:12 this is not ready
16:41:15 I tested the rebased code and it worked
16:41:17 and we're past deadline
16:41:34 thingee: Yeah, tbarron had to wait for xyang's change. Didn't realize that we needed to get xyang's in first and it kind of screwed up tbarron
16:41:52 I'm not sure why someone would ask for an exception for something that isn't ready right now to be merged.
16:42:03 instead of spending time on writing an FFE to the mailing list
16:42:05 thingee: i respect your decision, but the code on which it has to be rebased just merged right before midnight last night Eastern time
16:42:42 thingee: I thought I was respecting the process, and that asking without doing that was incorrect
16:42:46 so who is ready to +2 this now?
16:43:05 thingee: Once he rebases I am happy to look at it again.
16:43:12 i can have a look too
16:43:14 tbarron: you're free to fix your patches as much as you want. I'm just saying it's not ready right now and you're asking for an exception.
16:43:16 It was in good shape before it needed rebase.
16:43:24 jungleboyj: thanks
16:43:26 I can look at it too
16:43:47 thingee: it has +2 from me and jungleboyj before the merge conflict. i'll review changes after rebase
16:43:52 * DuncanT will certainly review as soon as something is up
16:44:11 tbarron: I don't know, you aren't getting much support here. ;-)
16:44:15 I appreciate all the review offers
16:44:23 * tbarron blushes
16:44:33 Awww.
16:44:56 ok, if people can get this through while I'm flying back and no more back and forth on major issues, fine with me.
16:45:05 given that it had 2x +2 before the rebase issues
16:45:08 I think that's fair
16:45:15 thingee: ++
16:45:20 thingee: how many hours do we have? :)
16:45:24 haha
16:45:32 :)
16:45:34 :-)
16:45:40 thingee: thanks, I'll get some code up and make sure it passes all unit tests after rebase before I post it
16:45:42 thingee: +2
16:45:56 But this part is thin, so it shouldn't be too bad
16:46:00 Thanks everyone!
16:46:03 xyang: 7 hours
16:46:08 that's more than enough time
16:46:08 :)
16:46:08 tbarron hoping thingee's flight is delayed
16:46:09 just fyi, queue in gates is more than ~100
16:46:13 kmartin: :(
16:46:17 kmartin: +1
16:46:33 thingee: where are you now?
16:46:34 thingee: thanks, i really wouldn't wish that on anyone.
16:46:38 hemna: We are in control again! ;-)
16:46:50 #agreed NFS backup patch has FFE
16:46:55 ok next
16:46:55 :P
16:46:57 e0ne: hi
16:47:00 thingee: are you connecting through anywhere? just curious ..
16:47:02 hi
16:47:10 tbarron: yeah
16:47:11 jungleboyj, hush, dad hasn't left yet... don't blow our cover
16:47:21 e0ne: let's talk shadow tables
16:47:32 thingee: i just want to get -2 from hemna.
16:47:40 +2 will be better :)
16:47:45 lol
16:47:58 #link https://review.openstack.org/#/c/131182/
16:48:06 hemna: just a reminder that you put +1 for the spec
16:48:07 #agreed e0ne just wants a -2
16:48:14 oops..
16:48:17 e0ne, yah, understood
16:48:19 * jungleboyj looks sheepish.
16:48:50 but I changed my mind after actually looking at what a dev has to do to maintain it over time.
16:48:51 i'll take a look at oslo.db integration for shadow migrations for L
16:48:51 :(
16:49:18 hemna: agree that sqlite downgrades look ugly :(
16:49:18 alright so what's going on here?
16:49:28 basically it puts 2x the work on cinder devs for maintaining what turns into archived data for operators.
16:49:44 I think that cost is too high, IMO
16:50:03 hemna: it's a very reasonable point.
16:50:11 hemna, this is regarding a SQL schema migration?
16:50:25 I had proposed another approach that's a bit different, but gives operators the ability to offload 'deleted' rows in much the same manner, w/o putting the burden on cinder devs to maintain 2 schemas
16:50:50 here is an example: https://github.com/e0ne/cinder/commit/8ae973852fdf11b7c01f9e268361fa1d319846d4
16:50:57 instead of shadow tables, the concept is more like snapshot tables.
16:51:05 ok, so are we sitting on this for L-1? Seems like there is still discussion going on
16:51:26 if an operator wants to offload those deleted records, they run a new tool that basically snapshots the existing schema as it is when the tool is run, then migrates out the deleted rows.
16:51:40 there isn't anything for a cinder dev to do on a patch-by-patch basis
16:52:01 I'd like to at least have the chance to talk about this approach before we go with the current shadow tables approach.
16:52:12 unfortunately it's late in the K timeframe. :(
16:52:20 hemna, fwiw, it looks like the no-downward-migrations x-project spec is going to be approved next week. operators are saying they wouldn't perform a schema downgrade. they'd snapshot and roll back
16:52:41 based on feedback from the ops-meetup thing
16:52:47 morganfainberg, do we have a url for that spec? I'm unfamiliar with it.
16:52:55 hemna,
16:52:57 https://review.openstack.org/#/c/152337/5/specs/no-downward-sql-migration.rst
16:52:58 #link https://review.openstack.org/#/c/152337/
16:53:10 I presume it's another approach to dealing with rows that are soft deleted?
16:53:19 hemna: The problem with that approach is what you do with the existing data in shadow tables once the schema changes
16:53:26 or just removing the idea of downward migrations?
16:53:36 hemna, removing the concept of downward migrations
16:53:44 morganfainberg: +1
16:53:50 DuncanT, when the schema changes between runs of the tool, you get new schema tables instead.
16:54:01 morganfainberg: +1. db backups work better
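[Editor's note: hemna's "snapshot table" tool — snapshot the schema as it is when the tool runs, then migrate out the soft-deleted rows — can be sketched roughly as below. This is an illustrative standalone sketch using sqlite, not the code from e0ne's commit or the actual Cinder proposal; all names are hypothetical:]

```python
import sqlite3

def archive_deleted_rows(conn, table, tag):
    """Create an archive table from the *current* live schema and move
    soft-deleted rows into it.

    Because the archive is created at run time from whatever the live
    schema happens to be, devs never maintain a parallel shadow schema
    on a patch-by-patch basis -- the point hemna makes above.
    """
    cur = conn.cursor()
    archive = f"{table}_archive_{tag}"
    # Clone the live table's column layout (empty result set -> no rows).
    cur.execute(f"CREATE TABLE {archive} AS SELECT * FROM {table} WHERE 0")
    # Move soft-deleted rows out of the live table.
    cur.execute(f"INSERT INTO {archive} SELECT * FROM {table} WHERE deleted = 1")
    cur.execute(f"DELETE FROM {table} WHERE deleted = 1")
    conn.commit()
    return archive

# Demo with an in-memory database and a toy volumes table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT, status TEXT, deleted INTEGER)")
conn.executemany("INSERT INTO volumes VALUES (?, ?, ?)",
                 [("v1", "available", 0), ("v2", "deleted", 1)])
archive = archive_deleted_rows(conn, "volumes", "20150311")
live = conn.execute("SELECT id FROM volumes").fetchall()
moved = conn.execute(f"SELECT id FROM {archive}").fetchall()
```

Each run against a newer schema simply produces a new archive table with that schema, which is the answer given above to DuncanT's question about what happens to shadow-table data when the schema changes.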