16:00:02 #startmeeting Cinder
16:00:02 Meeting started Wed Apr 12 16:00:02 2017 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 The meeting name has been set to 'cinder'
16:00:13 ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex karthikp_ patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao tommylikehu mdovgal ildikov
16:00:18 hi
16:00:19 wxy viks ketonne abishop sivn breitz
16:00:21 o/
16:00:24 Hi
16:00:25 Hi!
16:00:25 hi
16:00:26 hi
16:00:27 o/
16:00:30 hello
16:00:42 hi
16:00:45 hi
16:00:45 Hi
16:00:46 Hey everyone.
16:01:12 #topic Announcements
16:01:38 #info Pike-1 milestone cut
16:01:45 hey sean! now I'm leaving again
16:01:53 Rockyg: :)
16:02:14 Since Pike-1 doesn't have any specific Cinder items, I've already done the release for that milestone.
16:02:37 Hello :)
16:02:54 I think some good progress so far, but quite a few decent patches sitting out there.
16:03:04 Please, please, do reviews if you have the cycles.
16:03:40 o/
16:04:07 And cores, please review cinder-specs too. There are a few of those that we want to get into Pike that could use some attention.
16:04:30 +1
16:04:51 #topic How should we clean up the expired user messages?
16:05:01 tommylikehu: All yours.
16:05:09 yes, I think this part is left by sheer. As we already have this feature merged, we should consider how to clean the expired messages out.
16:05:19 o/
16:05:30 i think the original plan was to use a periodic task, which still seems like a good idea
16:05:31 IMO, a cinder-manage command would be good for this
16:05:41 eharney: That's what I thought.
16:05:42 e0ne: i disagree
16:05:45 wow
16:05:47 o/
16:05:59 we should have a decision
16:06:00 no real reason to push this off to a manual admin action when we can just do it from within the service
16:06:02 I would like it to be automatic and not something an operator would have to remember to do.
16:06:08 eharney: +1
16:06:18 I like the automatic way
16:06:22 i think we can make both variants
16:06:23 eharney: good point
16:06:33 mdovgal: why both?
16:06:42 smcginnis: ++
16:06:46 mdovgal: We could, but is there a benefit?
16:07:18 I won't argue if the community decides to have a periodic job for it
16:07:42 tommylikehu, why not? maybe i need to clean it up immediately, not wait until the periodic task is triggered
16:07:51 Anyone have any argument for not making it a periodic task that's done automatically?
16:08:08 smcginnis: we can delete when inserting
16:08:17 once a day maybe
16:08:40 to which service will this task be connected?
16:08:45 but I prefer the periodic task
16:08:54 cinder-volume i'd assume
16:09:04 mdovgal: I think you are right, we could do both. But I would rather make it automatic first, then wait to see if there is operator feedback that they need a manual method before we implement that.
16:09:09 mdovgal: Does that make sense?
16:09:26 smcginnis, i think it's ok
16:09:32 smcginnis: +1
16:09:32 mdovgal: Cool
16:09:42 How are we going to trigger the automatic removal?
16:09:59 when expired
16:10:13 No, I mean how is the task going to be fired?
16:10:19 check the records after a period
16:11:09 tommylikehu, just to check the count of records? or their datetime?
16:11:30 I think the datetime would be ok
16:11:43 or the expired_at column
16:11:53 tommylikehu: How is the method that does the cleanup going to be started/run?
16:12:07 geguileo: periodic_task?
16:12:24 geguileo: Yeah, a periodic task in cinder-volume?
16:12:27 cFouts: where? and how do you make sure it only runs once and not in multiple services?
16:12:36 ah
16:12:41 Oh ...
16:12:42 why not the scheduler?
16:12:45 jungleboyj: What happens when you have multiple c-vol services?
16:12:49 tommylikehu: Same issue
16:12:54 ok
16:12:57 geguileo: Yeah, that just hit me.
16:12:59 We don't have a leader, so they could all trigger it
16:13:05 geguileo: Good point. But would that really cause a problem if multiple services ran it?
16:13:25 smcginnis: Depends on the queries they do and how many services you have running them
16:13:30 as they could come in waves
16:14:00 even though you could add randomization to the start of the task's period
16:14:19 geguileo: Don't know, but wouldn't it just be a DELETE WHERE expired_at < now, so one service would delete all the records and the rest would just pass?
16:14:39 smcginnis: If that's all it really takes then it's ok
16:14:47 Yeah, not sure.
16:15:00 smcginnis: +1
16:15:02 just wanted to bring it up in case it turned out to be more complex
16:15:06 tommylikehu: But I think we have an answer. We can debate implementation details in a review if you want to implement something.
16:15:16 FYI, this is what glance does for expired tasks: when users list tasks, glance deletes the expired ones at the same time.
16:15:32 wxy|: Only on the list request?
16:15:39 smcginnis: yeah
16:16:00 The problem I see with that is if no one asks for them, they just accumulate.
16:16:08 Not sure if that would actually be the case though.
16:16:09 it could
16:16:19 But I kind of like the periodic task idea better at this point.
16:16:31 smcginnis: I can pick up that implementation
16:16:45 tommylikehu: Great, thanks! OK if we move on then.
16:16:51 thanks
16:16:53 https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L1502
16:17:04 wxy|: Thanks
16:17:07 cinder-manage with a cron job could be an option in the future
16:17:09 #topic How should we support like filters for name/description etc. columns?
16:17:32 e0ne: Could work too, but requires outside configuration. Certainly a possibility though.
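[Editor's note: the idempotent DELETE-based cleanup discussed in the previous topic can be sketched as below. This is not Cinder's actual code: it uses the stdlib sqlite3 module in place of Cinder's DB layer, the table and column names ("messages", "expired_at") are taken from the discussion, and the periodic-task wiring is omitted.]

```python
# Sketch of the expired-message cleanup discussed above (hypothetical
# schema, stdlib sqlite3 standing in for Cinder's real DB API).
import sqlite3
from datetime import datetime, timedelta

def cleanup_expired_messages(conn, now_iso):
    """Delete messages whose expiry time has passed.

    A single DELETE ... WHERE expired_at < now is idempotent: if several
    c-vol services fire the periodic task at once, the first deletes the
    rows and the rest simply match nothing, as noted in the discussion.
    """
    cur = conn.execute("DELETE FROM messages WHERE expired_at < ?", (now_iso,))
    conn.commit()
    return cur.rowcount  # number of rows actually removed

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, expired_at TEXT)")
now = datetime(2017, 4, 12, 16, 0)
# ISO-8601 strings compare lexicographically, so TEXT timestamps work here.
conn.executemany(
    "INSERT INTO messages (expired_at) VALUES (?)",
    [((now - timedelta(days=1)).isoformat(),),   # already expired
     ((now + timedelta(days=1)).isoformat(),)],  # still valid
)
first = cleanup_expired_messages(conn, now.isoformat())   # removes the expired row
second = cleanup_expired_messages(conn, now.isoformat())  # no-op on a second run
print(first, second)
```

The second call returning zero rows is what makes the "multiple c-vol services" concern raised above a non-issue for this simple query.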
16:17:43 let's assume we will have like filters in the future :)
16:17:53 :)
16:18:19 will we support both exact-match and inexact-match for one column?
16:18:36 jgriffith: You around? I know you had some opinions on filtering.
16:18:46 I am now :)
16:18:47 something like 'cinder list --filters name=vol' and 'cinder list --filters name%=vol'
16:18:50 Literally just logged on
16:19:07 jgriffith: thanks
16:19:17 jgriffith: Timing is everything. ;)
16:19:48 so I'm not a fan of the "filter on all things", but that being said, I talked to tommylikehu last night and if people would like this functionality that's cool; we could work it into the generic filter design that I proposed
16:20:22 but honestly a lot of this is overkill IMO, if people need this that badly just write your own client/shell script to do it
16:20:32 that's all I have :P
16:21:25 +1 for integrating it into the proposed generic filter design
16:21:32 tbarron: Agree
16:21:54 jgriffith, tbarron: I just updated my spec to take advantage of jgriffith's one
16:22:00 https://review.openstack.org/#/c/442982/
16:22:52 it seems that no one cares about this topic of whether we should support them both :(
16:23:01 I do think if we're going to provide filtering on the service side, it would be a better user experience to be able to do both.
16:23:35 yes :)
16:23:38 smcginnis: +1
16:24:02 I think folks care but are still thinking through it. ;)
16:24:27 tommylikehu: OK, unless anyone brings up a strong argument against both, I think we have an answer.
16:24:44 thanks, that's clear
16:24:54 #topic remotefs based 3rd party CIs broken
16:25:08 This was added by kaisers1 but he wasn't able to attend
16:25:27 remotefs drivers are broken right now apparently.
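[Editor's note: the exact vs. inexact matching from the previous topic can be sketched as below. The '%=' marker and the `build_filter` helper are hypothetical, taken only from the chat example; the syntax that eventually lands in the spec may differ.]

```python
# Sketch of exact ('name=vol') vs. inexact ('name%=vol') filtering,
# using stdlib sqlite3. Illustration only, not the Cinder API.
import sqlite3

def build_filter(expr):
    """Turn 'col=val' / 'col%=val' into a (sql, params) WHERE fragment."""
    # NOTE: real code must validate the column name against a whitelist;
    # interpolating user input into SQL is otherwise an injection risk.
    if "%=" in expr:                     # check inexact marker first
        col, val = expr.split("%=", 1)
        return f"{col} LIKE ?", (f"%{val}%",)   # inexact: substring match
    col, val = expr.split("=", 1)
    return f"{col} = ?", (val,)                 # exact match

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (name TEXT)")
conn.executemany("INSERT INTO volumes VALUES (?)",
                 [("vol",), ("vol-backup",), ("data",)])

sql, params = build_filter("name=vol")
exact = [r[0] for r in conn.execute(
    f"SELECT name FROM volumes WHERE {sql} ORDER BY name", params)]

sql, params = build_filter("name%=vol")
inexact = [r[0] for r in conn.execute(
    f"SELECT name FROM volumes WHERE {sql} ORDER BY name", params)]

print(exact)    # ['vol']
print(inexact)  # ['vol', 'vol-backup']
```

Supporting both forms, as smcginnis suggests above, only costs one extra marker in the filter syntax, since both reduce to a single parameterized WHERE clause.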
16:25:35 https://bugs.launchpad.net/cinder/+bug/1681406
16:25:36 Launchpad bug 1681406 in Cinder "RemoteFS: volume cascade delete fails" [Undecided,In progress] - Assigned to Lucian Petrut (petrutlucian94)
16:25:53 #link https://review.openstack.org/#/c/455256/ Proposed fix.
16:26:14 So I think he wanted this in the meeting agenda just to raise awareness.
16:26:23 Consider it raised.
16:26:34 I feel aware.
16:26:53 * smcginnis imagines jungleboyj glowing with enlightenment
16:27:00 :)
16:27:07 #topic Open Discussion
16:27:10 * jungleboyj laughs ... If only.
16:27:13 Anything else to discuss today?
16:27:18 Summit is coming up.
16:27:23 Hope to see a lot of folks there.
16:27:35 Or at least at "the forum"
16:27:42 Looked like hotels were filling up so ...
16:27:42 I have one little question
16:28:45 ?
16:28:55 there is one blueprint about cascade backup delete https://blueprints.launchpad.net/cinder/+spec/cascade-delete-backup
16:29:01 #link https://blueprints.launchpad.net/cinder/+spec/cascade-delete-backup
16:29:29 Does anybody have any problems with its implementation in the future?
16:29:30 it should be a useful feature for incremental backups
16:29:58 might be nice to have this
16:30:12 mdovgal: Just approved it.
16:30:19 Another question: We're working on CI for the Veritas os-brick connector, https://review.openstack.org/#/c/442754. Is CI needed before merge?
16:30:24 The Nova team said they want the connector merged before starting to review the Veritas Nova volume driver that depends on it (https://review.openstack.org/#/c/443951)
16:30:29 smcginnis, nice. thank you
16:30:36 mdovgal: No problem.
16:30:45 cuhler, it's not required, no, but it's strongly advised
16:30:59 cuhler: Our policy has been that if you have a specific connector in os-brick then it is strongly advised to have CI coverage.
16:31:19 cuhler: I mainly wanted some record on there as to the plans for that, and I see it was added.
16:31:30 cuhler: I don't think we need to block the review waiting on that.
16:31:41 Okay. Can the CI be driven by the unmerged Nova volume driver, or is Nova somehow simulated?
16:32:05 cuhler: Yeah, I think it would have to be a cherry-pick of your driver into Nova or something.
16:32:16 cuhler: Until that merges in Nova at least.
16:32:21 ok, that clarifies things. Thx
16:32:33 done
16:33:59 Oh, the next PTG was announced, in case anyone missed that.
16:34:12 Denver the week of Sept 11.
16:34:28 MOAR torchy's tacos!
16:34:34 :)
16:34:36 Denver? Spit.
16:34:42 :-) I had that thought too.
16:34:45 Bugsmash event taking place the week after the summit.
16:34:53 I like Denver!
16:35:02 You would.
16:35:08 smcginnis: ++
16:35:08 I've heard their brownies are fantastic.
16:35:09 cool! no international travel
16:35:11 :D
16:35:18 Not NBA time on the 11th of Sept :(
16:35:25 It was cheaper than both our other option and ATL, so hopefully that helps people get there.
16:35:33 Sounds like the PTG after that will be international though.
16:35:33 +1
16:35:55 jungleboyj: is it?
16:35:57 Actually glad to see the PTG alternate since we were never able to get that going with the midcycle.
16:36:09 smcginnis: True.
16:36:30 xyang2: Yeah, I think they were talking Europe.
16:36:37 diablo_rojo: Probably knows more. :-)
16:36:49 jungleboyj: well, too early to worry about that now :)
16:37:07 :)
16:37:15 Anything else today?
16:37:24 jungleboyj, I know of places we are looking at but we are FAR from a decision so I don't know all that much more :)
16:37:37 diablo_rojo: Cool.
16:37:54 diablo_rojo: The next two summits are Sydney and Vancouver, right?
16:38:07 smcginnis, yep :)
16:38:44 OK, sounds like we are pretty much done here. Thanks everyone.
16:38:50 Please do reviews! :)
16:38:50 United tickets should be cheap and plentiful. Buy your airfare now.
16:38:55 LOL
16:38:59 :)
16:39:02 :D
16:39:10 Bwah ha ha
16:39:10 Swanson: Never United! :)
16:39:13 Swanson: At this rate you can just buy the whole plane.
16:39:21 they are still not that cheap
16:39:27 smcginnis: That would be awesome!
16:39:39 #endmeeting