14:59:59 #startmeeting manila
14:59:59 Meeting started Thu Apr 20 14:59:59 2017 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:02 The meeting name has been set to 'manila'
15:00:07 oops 1 second too early
15:00:10 hello all
15:00:16 Hello
15:00:19 hi!
15:00:19 hi
15:00:22 hi
15:00:23 hello
15:00:27 Hi
15:00:28 hi
15:00:34 o/
15:00:35 \o
15:00:42 hi
15:00:46 hey
15:00:47 hi
15:00:50 hi
15:00:53 #topic announcements
15:01:24 so we decided to grant extensions to 4 specs last week, with a new deadline of today
15:01:45 the only agenda items for this meeting are those 4 specs
15:02:00 none of them merged yet but some of them did make good progress
15:02:16 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:02:24 #topic Ensure shares
15:02:30 #link https://review.openstack.org/#/c/446494/
15:02:46 so first up, ensure shares
15:02:47 o/
15:03:02 this one is the most important to me, and the one I've been focusing on
15:03:23 we held a teleconference tuesday morning to discuss changes with interested parties
15:03:42 and we agreed that I would become a co-author on the spec and work on it with zhongjun
15:04:18 the current proposal is somewhat different than what was proposed last week, and still somewhat different than what we agreed to in the tuesday meeting
15:04:25 thanks
15:04:53 the tuesday meeting mostly accomplished the team agreeing on what problem we were trying to solve, and agreeing in principle on how to solve it
15:05:30 after writing up what we agreed to, I realized we could refine it further and give drivers more control, plus avoid gratuitous ensure_share calls
15:06:04 we also agreed that, despite our original assumptions, there are no cases where we want to ensure some subset of the shares on a backend
15:06:09 it's always all or nothing
15:06:33 bswartz: all per backend you mean?
15:06:42 yes, all or nothing per backend
15:07:05 because the work is done by the drivers, there's no way to go cross-backend with ensure share
15:07:27 and there's no reason to either
15:07:48 so anyway, I think this spec is basically ready
15:07:56 we could refine it and add more details if needed
15:08:11 but I feel like we're past all the major design decisions
15:08:35 we can discuss it here, or decide to merge it, or give people some more time for more reviews before it merges
15:08:51 I dunno if the rest of today is enough time, but I hope it is
15:09:02 bswartz: at least the rest of the day would help
15:09:14 bswartz: the last PS was just uploaded
15:09:14 well we gave a 1-week extension
15:09:23 so that takes us to 23:59 today
15:09:31 how many hours from now?
15:09:44 about 8 (?) more hours
15:09:53 ok, ty
15:10:10 does anyone want to wait longer on this spec?
15:10:44 okay, sounds like no
15:11:04 let's try to get final reviews done today and I'll be on top of responding to comments and updating if needed
15:11:09 let's move on
15:11:16 +1
15:11:16 #topic Update shares
15:11:21 #link https://review.openstack.org/#/c/453553/
15:11:27 +1
15:11:53 so this one got a bit of attention late last week, but not much this week
15:12:36 given the amount of focus on other specs, it's not surprising this didn't get as much attention
15:12:42 this one has only small changes from the ensure share spec.
15:13:09 I'm wondering if we should simply deprioritize this one and focus on the remaining ones
15:13:18 there is a flurry of periodic calls already, and adding one more might not be good design
15:13:29 I'd like to invest in features that matter to people, and this one is definitely lowest on my list (of the remaining specs)
15:13:31 can we see how ensure share goes and evaluate?
15:13:46 it is based on the ensure share spec, so it has not gotten much attention
15:13:46 gouthamr: have you looked at the "autosized/infinite" share spec?
15:14:19 vponomaryov: that's later in the agenda
15:14:19 vponomaryov: not recently, but i know usage stats are expected to be updated periodically...
15:14:27 let's not cover that yet
15:15:11 I'm not in favor of merging update share today; we could decide to give it more time or push it out
15:16:03 zhongjun: if you feel strongly about keeping this spec in pike, please explain why it's necessary and ensure_shares isn't good enough
15:16:12 bswartz: this one is based on the ensure share spec; since the ensure share spec has been merged, this one is not far from merging
15:17:02 zhongjun: I don't agree -- the use cases may be similar but the technical challenges are very different, and the design needed to satisfy online updates is much more complex
15:17:16 if ensure_shares gets implemented early we could decide to reconsider it then. I think there are tricky design issues that haven't been worked out yet, even though lots of good ideas have been started.
15:17:44 we covered that topic briefly in the tuesday meeting
15:18:11 periodic tasks are something I really dislike
15:18:17 if we approve the spec as is, then we will hash out those issues in code review and there's a good chance of -2s.
15:18:30 if there isn't agreement on fundamental design.
15:19:50 I suppose I can imagine ways that this could be implemented efficiently
15:19:53 bswartz: yes, I also don't like periodic tasks; at first I tried to add a new API to implement it
15:19:58 but my guy tells me it won't be easy
15:20:08 s/guy/gut/
15:20:41 did using a signal to kick off dynamic updates get rejected somewhere?
15:20:58 markstur: link?
15:21:29 just thought we'd have dynamic config re-read using a kill signal, like nova has discussed for a long time
15:21:39 I actually think a periodic task is the right user experience, but we have known issues with periodic tasks that should be sorted out before we add more uses of them
15:22:02 markstur: supposedly that is working for manila.
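The signal-driven config re-read markstur mentions can be sketched roughly as follows. This is a hypothetical minimal example, not manila's actual code: the class name, the "key = value" file format, and the handler wiring are all illustrative (a real OpenStack service would rely on oslo.config's reload support rather than hand-rolled parsing).

```python
import signal
import threading

# Hypothetical sketch: a service that re-reads its configuration file
# when it receives SIGHUP, as discussed above for dynamic config re-read.
class ReloadableService:
    def __init__(self, conf_path):
        self.conf_path = conf_path
        self.conf = {}
        self._lock = threading.Lock()
        self.reload_conf()

    def reload_conf(self, signum=None, frame=None):
        # Parse a trivial "key = value" file; signum/frame defaults let
        # this double as a signal handler and a plain method call.
        new_conf = {}
        with open(self.conf_path) as f:
            for line in f:
                if "=" in line:
                    key, _, value = line.partition("=")
                    new_conf[key.strip()] = value.strip()
        with self._lock:
            self.conf = new_conf

    def install_handler(self):
        # Re-read the config whenever the process receives SIGHUP
        # (POSIX only; e.g. triggered by `kill -HUP <pid>`).
        signal.signal(signal.SIGHUP, self.reload_conf)
```

As the discussion notes, this mechanism only re-reads conf files; it would not by itself re-run ensure_share across a backend's shares, which is the larger design question here.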
15:22:07 markstur: that was intended for reloading of conf files, not for updates of all of the shares on the backend
15:22:20 I haven't gotten it to work in test, but I think that was because I was in devstack, which is a 'special flower'.
15:22:30 bswartz: ++
15:22:53 I guess I'm not opposed to this feature and I'm sure we could sort out the issues eventually
15:23:01 ok. I was just thinking a conf change could kick off ensure-shares and some of this is overkill, but...
15:23:10 my main concern is focusing the team during the limited time we have
15:23:27 given a clean slate, a conf listener would be better than a kill signal
15:23:42 markstur: you mean a HUP signal
15:23:51 we're getting offtopic though
15:24:16 so this spec isn't ready. The question should be whether another extension makes sense
15:24:40 "focusing the team" implies a push to Q, doesn't it?
15:24:46 I would be okay with giving it more time, but in my mind this is less important than the other 2
15:25:04 I like gouthamr's point that we could wait to see how ensure_shares will play out
15:25:11 if we could stack-rank the specs I'd put this one at the bottom
15:25:34 we could solve one more use case if we give this spec some time
15:25:35 ganso: that's a good iterative approach
15:25:37 honestly I don't care whether it goes into pike or queens as long as it doesn't distract from other pike stuff we care more about
15:26:14 the issue is that our process forces us to make a call at milestone-1 about specs being in or out
15:26:26 we can't postpone that decision under our current rules
15:26:52 (other than granting specific extensions, which is what we did here)
15:27:36 I'm going to resort to the vote feature, so we can make a call and move on
15:28:17 #startvote How should we handle update shares spec? more_time, push_to_queens
15:28:18 Begin voting on: How should we handle update shares spec? Valid vote options are more_time, push_to_queens.
15:28:19 Vote using '#vote OPTION'.
Only your last vote counts.
15:28:32 (sorry for long choices)
15:28:36 #vote push_to_queens
15:28:42 #vote push_to_queens
15:28:50 #vote push_to_queens
15:28:54 #vote #push_to_queens
15:28:55 kaisers_: #push_to_queens is not a valid option. Valid options are more_time, push_to_queens.
15:29:01 #vote #push_to_queens
15:29:01 ganso: #push_to_queens is not a valid option. Valid options are more_time, push_to_queens.
15:29:04 #vote push_to_queens
15:29:07 #vote push_to_queens
15:29:09 #vote push_to_queens
15:29:20 #vote push_to_queens
15:29:24 #vote push_to_queens
15:29:33 #vote push_to_queens
15:29:35 Ok, we could do it in the next version
15:29:38 #endvote
15:29:38 Voted on "How should we handle update shares spec?" Results are
15:29:39 push_to_queens (9): bswartz, toabctl, markstur, ganso, gouthamr, kaisers_, vponomaryov, tbarron, dustins
15:29:44 wow, we're unanimous
15:29:50 that was easier than I thought
15:29:51 I regret that this discourages folks from getting it ready during pike. I hope someone has some time to get it ready
15:30:08 but it seems our spec process/etc does help us all focus
15:30:11 markstur: you make a good point, but I have a response
15:30:34 I would strongly prefer it if new features were worked on ahead of a release rather than during a release
15:30:56 how great would it be to start the queens release with the code and spec for update shares already done
15:31:06 +!
15:31:07 bswartz: +1000
15:31:09 +1
15:31:21 yes we're prioritizing other things, but that shouldn't discourage contributors from working on stuff for queens during like
15:31:26 s/like/pike/
15:31:30 * markstur typo led to +!, but that works!
15:31:31 * bswartz smh
15:31:47 okay, we have 2 more important ones, let's move on
15:31:58 #topic Infinite shares
15:32:03 #link https://review.openstack.org/#/c/452097/
15:32:15 I don't think we have agreement on what an infinite or auto-sized share would be yet.
15:32:15 We do have agreement that it would require collecting usage per share.
15:32:15 And I think we have agreement that the need for collecting such usage:
15:32:15 (1) exists independently of infinite/auto-sized shares (see arne's remarks in the manila channel earlier today)
15:32:15 (2) would be a natural extension of vkmc's ceilometer integration work
15:32:17 (3) but we don't have a good scalable design for collecting usage yet
15:32:50 * tbarron shuts up now (for a minute at least)
15:33:02 (3) is the most critical
15:33:07 tbarron: we could handle this one similarly to what we did with ensure_share and schedule another meeting to hash out the details with interested parties
15:33:33 vponomaryov: +1000
15:33:39 assuming there's enough interest in pushing for this to get done in pike and we believe that the problems are tractable
15:33:52 and yes, it should be split up into two - usage reporting is one spec and auto/infinite is the second
15:34:41 so I apologize for not being familiar with this spec, but I have vague memories of the earlier discussions in austin about infinite (unlimited) shares
15:35:22 for my benefit, what's the purpose of tracking the size at all?
15:35:33 bswartz: to bill for usage, not quota
15:35:38 isn't unlimited supposed to mean unlimited?
15:35:58 okay, but we don't currently support usage or billing directly
15:35:58 bswartz: yes, the user will pay for usage
15:36:23 isn't infinite supposed to mean infinite?
15:36:34 so we're assuming there's a mechanism to monitor the consumed space and report that to a billing tool?
15:36:44 would we ever want similar functionality for our limited shares?
15:36:45 there are as many ideas of what infinite share means as there are reviewers
15:36:54 bswartz: no, we'd store it and update it in our DB
15:37:04 markstur: nothing is really infinite
15:37:08 bswartz: we do want it, yes, see arne's remarks in channel today
15:37:09 we're also expecting that administrators must be able to control the size of "infinite" shares...
15:37:31 gouthamr: +1
15:37:43 gouthamr: why?
15:37:53 bswartz: we could send it to ceilometer.. or some other tool
15:37:57 gouthamr, plus automatically
15:37:57 resource starvation...
15:38:05 bswartz: in theory infinite is infinite and nothing is something
15:38:12 if the administrator is getting money from the user for the used space in the share, doesn't the administrator have an incentive to let the usage grow without bounds?
15:38:13 but sorry for digressing
15:38:28 markstur: :) are you loitering around the himalayas these days
15:38:33 bswartz: not if it uses up the pool
15:38:59 gouthamr: :)
15:39:12 markstur: is he here to derail for fun?
;-)
15:39:14 tbarron: a fully consumed pool is maximum profit for the provider, assuming you're billing for usage
15:39:23 apart from billing, I think it would be useful for users to know how loaded the shares they are administrating are; that can help them take action
15:39:34 bswartz: not if other users discontinue service b/c they got starved
15:40:05 like, I don't know, resize the share because there is a lot of wasted resources
15:40:06 tbarron: in theory that would only happen if the admin wasn't fast enough to grow the pool as it filled up
15:40:13 bswartz: also, not all resource control is by chargeback, see the CERN example in channel
15:40:15 bswartz: not all deployments are for profit (and will hence simply grow if there is higher user demand)
15:40:24 bswartz: ack
15:40:32 ^^^ real world
15:40:36 thanks to disks being obscenely slow pieces of hardware, you usually have a warning when they approach full capacity
15:40:55 ^^^ real world, 3am
15:41:06 okay, so arnewiebalck's use case is the more challenging case
15:41:46 there's no natural pressure to use storage efficiently because the consumers aren't charged, but the admin still wants to size the shares flexibly
15:42:30 tbarron: I used the weasel work "usually"
15:42:43 weasel word
15:43:00 bswartz: I'm always looking for weasel work.
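The open design problem in this topic, tbarron's item (3) above, is periodically collecting per-share usage so shares can be billed or auto-sized. A minimal sketch of what such a collector might look like follows; everything here is hypothetical (the class, the `get_share_usage` driver call, and the timer-based scheduling stand in for whatever the spec eventually settles on), and it makes the scalability concern visible: each pass is a call per share on the backend.

```python
import threading

# Hypothetical sketch of per-share usage collection as a periodic task.
# `get_share_usage` stands in for a driver call reporting consumed bytes
# per share; neither name is manila's real API.
class UsageCollector:
    def __init__(self, driver, shares, interval=300.0):
        self.driver = driver
        self.shares = shares
        self.interval = interval
        self.usage = {}  # share id -> last observed consumed bytes
        self._timer = None

    def collect_once(self):
        # One pass over all shares on the backend. This is the part
        # reviewers worry about at scale: it is O(number of shares)
        # driver calls every interval.
        for share in self.shares:
            self.usage[share["id"]] = self.driver.get_share_usage(share)

    def start(self):
        # Re-arm a plain timer after each pass, to keep the sketch
        # dependency-free; a real service would use a periodic-task
        # framework instead.
        self.collect_once()
        self._timer = threading.Timer(self.interval, self.start)
        self._timer.daemon = True
        self._timer.start()
```

Splitting usage reporting into its own spec, as suggested below, would let this collection piece serve both billing and auto-sized shares.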
15:43:11 :D
15:43:19 okay, so I don't feel like we're going to solve the technical problem during this meeting
15:43:21 * tbarron is hungry
15:43:32 and I also sense that there's interest in continuing to work on this
15:43:54 bswartz: yes, the idea is interesting and useful, the design is just not nailed down yet
15:44:06 so I'm going to propose giving this spec some more time, and trying to schedule a specific meeting to get together and work through the issues here
15:44:07 seems like an interesting spec to discuss, but not close to ready for code
15:44:10 vponomaryov: +1, I think users will like this
15:44:26 bswartz: +1
15:44:39 please rasie your hand if you want to be included in meetings about this spec
15:44:45 .o/
15:44:47 +1
15:44:50 o/
15:44:53 +1
15:44:56 o/
15:45:06 rasie-ing my hand o/
15:45:06 o/ there, more graphical
15:45:14 friday morning is probably the best time, if people are available
15:45:20 +1
15:45:21 o/
15:45:43 +1
15:45:58 \o/
15:46:02 how about 1300 on friday?
15:46:09 I am ok
15:46:10 timezone?
15:46:12 does that not work for anyone?
15:46:15 UTC
15:46:19 markstur: UTC, always
15:46:29 UTC is the official timezone of openstack
15:46:30 works for me
15:46:37 works for me
15:46:49 okay, sadly got to bail out.. will be on a plane at 1300 UTC tomorrow..
15:46:51 i have a conflict 1300-1400
15:47:03 does 1400 work?
15:47:08 can do earlier or later
15:47:40 okay, we don't have to use up this whole meeting with scheduling
15:47:51 arnewiebalck: how about you?
15:47:53 bswartz: i'll try catching up with the after-meeting brief
15:47:57 I'll contact interested parties after the meeting
15:48:08 I'd like to do 1300 or 1400 tomorrow just to get it done
15:48:17 otherwise we're looking at monday
15:48:25 I'm fine with all discussed options.
15:48:46 But prefer Friday over Monday :-D
15:48:50 bswartz: if it is 1400 tomorrow, I can join too
15:49:03 #agreed infinite share spec gets another week and we'll swarm on it to get the spec ironed out
15:49:15 #topic API filtering
15:49:22 #link https://review.openstack.org/#/c/447775/
15:49:46 okay, tommylikehu and i discussed this extensively last evening
15:49:51 i've a couple of things to add
15:50:14 so last week we agreed that we would defer to cinder on the API interface details and only use the manila spec to cover the manila-specific implementation details
15:50:16 1) I misunderstood the intent of the "generic" filtering spec on cinder's side until code showed up..
15:50:30 a couple? I thought we only have one
15:50:43 but there are now concerns about the intent of the cinder spec
15:50:47 cinder has an option to allow administrators to control filters
15:51:09 now, they want to create an "allowable_filters" json file that can be tinkered with and discovered over an API "get me available filters"
15:51:44 we neither have that option, nor do i think we signed up for that specific proposal
15:51:45 gouthamr: as I mentioned yesterday, I think that's bad API design
15:52:06 the cinder guys made a choice in the past which I don't agree with
15:52:27 bswartz: only one?
15:52:37 ganso: ^_^
15:52:42 ganso: good point, lol
15:52:52 lol
15:52:55 :)
15:53:04 i agree... the point being we don't need to reference that specific spec anymore unless we want to misunderstand the work being proposed here
15:53:04 haha
15:53:29 we lack filtering in a "generic" way across our GET APIs... some of them have it: shares, snapshots, networks..
15:53:39 maybe we can audit the rest and add them in one micro-version
15:53:40 bswartz: provide a list of things that you do agree with Cinder on :)
15:53:43 and document that
15:53:59 anyway, the decision to implement filtering, but only if a server option is turned on, seems TOO configurable to me
15:54:11 now 2) zhongjun and tommylikehu's main point on the spec was to introduce "contains" or "like" matching
15:54:17 xyang1: :)
15:54:25 :)
15:54:27 sometimes API functionality needs to be modifiable for good reasons, we have examples of that
15:54:47 policy
15:54:52 but what kinds of filtering are allowed? that's beyond the pale for me
15:55:15 filtering should be one of those things that's consistent across the board
15:55:22 it should just work
15:55:51 hmmm, so zhongjun: i apologize for the mix-up on my part.. i have some comments for you on the spec...
15:56:04 so gouthamr suggests we should go back to two weeks ago and get back to the initial proposal.
15:56:20 gouthamr: it doesn't matter, I will see it
15:56:47 we'll get this in shape today..
15:56:54 gouthamr: does that mean not trying to harmonize our design with cinder's?
15:57:07 or am I missing something
15:57:20 so, different people make different software :D
15:57:26 bswartz: why not make a good example for Cinder? )
15:57:34 however, can we have an extension on this pl0x?
15:57:53 gouthamr: you just said we'll get it in shape today?
15:58:12 bswartz: sure, but i can't expect reviewers to take a look at it as we shape it
15:58:19 okay
15:58:27 +1 for another extension
15:58:28 what's the argument for doing anything at all?
15:58:28 and i don't think zhongjun will stay up much longer
15:58:38 why not punt to queens
15:58:48 I can :D
15:59:11 I guess if we punt to queens we can't set a good example for cinder
15:59:14 this is a reasonable improvement, i think pike is enough time to get it right..
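The "contains"/"like" matching proposed for the spec could look something like this on the server side. This is a simplified sketch under assumed conventions, not the actual manila or cinder implementation: here a trailing "~" on a filter key marks a substring match, while plain keys mean exact match.

```python
# Hypothetical sketch of inexact ("like"/"contains") filtering for a
# list API. Key and function names are illustrative only.
def matches_filters(resource, filters):
    """Return True if a resource dict satisfies every filter."""
    for key, wanted in filters.items():
        if key.endswith("~"):
            # Assumed convention: "name~" means substring match on "name".
            value = resource.get(key[:-1], "")
            if wanted not in value:
                return False
        elif resource.get(key) != wanted:
            # Plain keys keep today's exact-match behavior.
            return False
    return True

def list_shares(shares, filters):
    # Apply all filters conjunctively, as existing GET APIs do.
    return [share for share in shares if matches_filters(share, filters)]
```

For example, `GET /shares?name~=alpha` would map to `list_shares(shares, {"name~": "alpha"})` and return every share whose name contains "alpha". Making this behavior unconditional, rather than gated on a server option, is what "it should just work" argues for above.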
15:59:26 because cinder will probably do whatever they're planning to during pike
15:59:55 okay, we're out of time, let's make this last decision in the #manila channel
16:00:06 please go to the manila channel for 2 minutes
16:00:09 #endmeeting