15:00:29 <bswartz> #startmeeting manila
15:00:31 <openstack> Meeting started Thu Apr  7 15:00:29 2016 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:35 <openstack> The meeting name has been set to 'manila'
15:00:38 <bswartz> hello all
15:00:39 <cknight> Hi
15:00:44 <vponomaryov> hello
15:00:45 <aovchinnikov> hi
15:00:45 <tpsilva> hello
15:00:46 <zhongjun_> hi everyone
15:00:47 <Yogi1> Hello
15:00:54 <gouthamr> hello o/
15:00:57 <ganso> hello
15:01:05 <xyang1> hi
15:01:18 <jseiler> hi
15:01:22 <bswartz> anyone seen nidhimittalhada?
15:01:52 <vponomaryov> bswartz: I guess it is too late for her
15:02:08 <gouthamr> bswartz: she works IST hours..
15:02:22 <bswartz> she PM'd me 8 hours ago and it sounded like she might be here
15:02:26 <markstur_> hi
15:02:28 <gouthamr> ah
15:02:44 <bswartz> I know this meeting timeslot sucks for IST
15:02:50 <bswartz> :-/
15:03:20 <bswartz> oh well let's get started
15:03:27 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:03:48 <bswartz> #topic access rules
15:04:00 <bswartz> We have a bug affecting access rules
15:04:03 * bswartz looks for the number
15:04:23 <bswartz> https://bugs.launchpad.net/manila/+bug/1566815
15:04:24 <openstack> Launchpad bug 1566815 in Manila "share manager fails to remove access rules on replicated share" [Undecided,New] - Assigned to Goutham Pacha Ravi (gouthamr)
15:04:53 <bswartz> I wanted to figure out who is still working on access rules because I think a few more changes are needed in newton
15:05:10 <bswartz> in particular, I think it was a mistake to remove the per-rule status column from the DB in mitaka (my mistake)
15:05:15 <toabctl> hi
15:05:34 <dustins> \o
15:05:34 <bswartz> I think if we add that column back in, it will allow us to do smarter things
15:05:37 <gouthamr> bswartz: so, that bug is a sporadic failure.. i suspect a DB race.. i wanted to sanitize that a bit..
15:06:15 <bswartz> yes gouthamr I think it's possible to fix the bug without any huge changes, but nevertheless I still want to look at additional changes to access rules implementation in newton
15:06:25 <ganso> bswartz: column is still there, just would need to work the way it was
15:06:37 <bswartz> ganso: you mean in the model, but not in the schema
15:06:41 <gouthamr> ganso: column is removed :) it is just fudged
15:06:55 <tpsilva> it's now just a property that maps to the instance access_rules_status
15:06:57 <ganso> gouthamr: oh yes, it is now a property, sorry that confused me
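The arrangement tpsilva and ganso describe, where the removed column survives only as a Python property delegating to the share instance, can be pictured with this minimal sketch (class and status names are illustrative, not Manila's actual SQLAlchemy models):

```python
# Sketch of a DB column removed from the schema but kept in the model as a
# property that derives its value from the share's instances (illustrative
# names, not Manila's real model classes).

class ShareInstance:
    def __init__(self, access_rules_status):
        self.access_rules_status = access_rules_status


class Share:
    def __init__(self, instances):
        self.instances = instances

    @property
    def access_rules_status(self):
        # No column of its own: derive status from the instances.
        # Surface 'error' if any instance is in error; otherwise report
        # the first instance's status.
        statuses = [i.access_rules_status for i in self.instances]
        if 'error' in statuses:
            return 'error'
        return statuses[0] if statuses else None
```

This keeps the API contract intact while the schema only stores per-instance state, which is exactly why keeping every instance row correct is harder than one share-level row.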
15:07:13 <bswartz> also I wanted to raise the subject of access rule mapping table
15:07:23 <bswartz> currently we're mapping rules to share _instances_
15:07:32 <bswartz> it seems to me that the rules would more properly be mapped to shares
15:07:35 <gouthamr> bswartz: +1 .. I agree.. I really hope we can go back to linking access rules and shares instead of share instances..
15:08:01 <bswartz> there are cases where the actual rules on 2 instances should be somewhat different, such as during migration
15:08:07 <gouthamr> bswartz: for replication, it does not make any difference..
15:08:16 <bswartz> however I think we can do that without having different rules for different instances in the DB
15:08:52 <bswartz> would anyone be opposed to returning the access rule mapping back to the share object at the DB layer?
15:09:15 <vponomaryov> bswartz: what do you expect to be solved by such change?
15:09:35 <bswartz> vponomaryov: currently we have multiple rows in mapping table for shares that have multiple instances
15:09:42 <ganso> gouthamr, bswartz: we questioned whether equal access rules across replicas made sense, since replicas are in different locations and may not be accessible by some hosts
15:09:51 <bswartz> keeping all the rows correct is harder than 1 row
15:10:04 <bswartz> and I'm concerned about soft deletes being impossible in a mapping table
15:10:36 <ganso> bswartz: also, migration can work, but there are several workarounds to be done
15:10:39 <bswartz> I also want to look at eliminating the mapping table altogether and having just one access rules table
15:10:43 <gouthamr> ganso: but the API does not allow you to control access to a replica
15:10:53 <ganso> bswartz: I think we should also consider the access groups proposal by nidhi... it is prone to change things
15:11:08 <ganso> bswartz: I think the design is good idea
15:11:11 <bswartz> gouthamr: yes I do NOT propose changing the API, just the implementation
15:11:12 <gouthamr> ganso: so there's documentation that says access rules to secondary copies are going to be controlled by the driver as necessary..
15:11:23 <bswartz> and ganso that was my next topic
15:11:58 <gouthamr> ganso: i mean, for certain types of replication, some access rules don't make sense at all, or different sense..
15:12:22 <bswartz> I'm just trying to figure out if I'm missing anything with my current understanding of access rules
15:12:32 <gouthamr> vponomaryov implemented readable type and he applies any r/w access level as r/o to any secondary replica
15:13:04 <bswartz> gouthamr: that's an easy case to solve -- just ignore type
15:13:27 <bswartz> ignore type *on passive replicas
15:13:30 <gouthamr> bswartz: yes, with the 'dr' type, no access rule makes sense,
15:13:58 <gouthamr> the passive copies are meant to be "inaccessible"
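The per-replica behavior described above can be summarized in a small sketch (a hypothetical helper, not driver code; the replication type names 'writable', 'readable', and 'dr' match the discussion):

```python
# Sketch: what access level a share-level rule effectively gets on a given
# replica, per the behavior gouthamr describes (r/w applied as r/o on
# 'readable' secondaries, no access at all on 'dr' secondaries).

def effective_access_level(rule_level, replica_is_primary, replication_type):
    """Map a share-level access rule onto one replica."""
    if replica_is_primary or replication_type == 'writable':
        return rule_level
    if replication_type == 'readable':
        # Secondary copies of a readable replica are never writable.
        return 'ro'
    # 'dr' secondaries are meant to be inaccessible.
    return None
```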
15:14:19 <bswartz> okay so since nidhimittalhada isn't here I'll propose her topic
15:14:24 <bswartz> #topic access groups
15:14:49 <bswartz> so getting access rules implementation right is important because we finally have a volunteer to write access groups!
15:14:54 <bswartz> #link https://wiki.openstack.org/wiki/Manila/design/access_groups
15:15:39 <bswartz> we need some feedback on this spec
15:16:08 <bswartz> I know it's hard to offer feedback in a wiki, personally I use the "discussion" feature of wiki
15:16:32 <bswartz> people are also welcome to use the ML to discuss this
15:17:00 <bswartz> the basic idea is that if I have 10 shares with access to the same clients, and I want to grant access to another client, I shouldn't have to call 10 access-allow APIs to make that happen
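As a rough illustration of the pain point: granting one client access to N shares today means N separate access-allow calls. Assuming a client object shaped like python-manilaclient's v2 client (the `shares.allow` method signature here is an assumption), the current workaround is a loop:

```python
# Sketch: applying the same IP access rule to many shares, one API call per
# share -- the repetition that access groups are meant to eliminate.
# `client` is anything exposing shares.allow(share, access_type, access,
# access_level), e.g. a python-manilaclient v2 client (assumed shape).

def allow_ip_on_shares(client, share_ids, ip, access_level='rw'):
    """Grant the given IP the same access level on every listed share."""
    for share_id in share_ids:
        client.shares.allow(share_id, 'ip', ip, access_level)
```

With access groups, the loop collapses to a single membership change on the group.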
15:17:58 <bswartz> additionally, if tenants would rather define their groups of clients outside of Manila, such as using neutron security groups, we should allow that
15:18:24 <gouthamr> bswartz: would this have any overlap with generic groups?
15:18:28 <bswartz> or similarly, if tenants wish to grant access to shares by instance ID rather than IP address, we should allow that
15:18:51 <bswartz> gouthamr: it overlaps, but probably not in a good way
15:19:01 <vponomaryov> bswartz: we can add possibility to add access by share list -> "manila access-allow share1,share2,shareN ip 1.1.1.1"
15:19:12 <bswartz> if we had share groups, it would in theory be possible to access allow to the whole group of shares easily
15:19:36 <bswartz> but share groups might not be the same granularity as the desired access groups
15:20:01 <bswartz> and I don't think we're considering hierarchical groups (groups of groups) for shares
15:20:20 <gouthamr> bswartz: i'll add that to the wiki's discussion
15:20:45 <bswartz> vponomaryov: that solves half of the problem
15:20:54 <gouthamr> #action: gouthamr will add share groups to access groups wiki discussion
15:20:55 <cknight> bswartz: hierarchical groups wouldn't be that hard
15:21:03 <bswartz> vponomaryov: but what if I want to create a new share and give it the same access as all of my other shares
15:21:04 <cknight> bswartz: but it could be a later enhancement
15:21:20 <bswartz> cknight: I think it adds even more complexity
15:21:35 <vponomaryov> bswartz: your latest case is covered by second approach on wiki page
15:21:42 <vponomaryov> bswartz: with "inherit" command
15:21:43 <bswartz> cknight: it's worth considering, but I suspect we'll decide not to do it
15:21:46 <cknight> bswartz: yes, but the prototype Alex & I built 18 months ago had hierarchical groups
15:22:04 <bswartz> oh you mean hierarchical groups for access
15:22:10 <cknight> bswartz: yes!
15:22:18 <bswartz> I meant hierarchical groups of shares for replication, consistency, etc
15:22:33 <cknight> bswartz: that's definitely harder
15:22:45 <cknight> bswartz: not sure it's worth the effort
15:22:50 <bswartz> me neither
15:23:10 <bswartz> not only is it harder but it makes the UI that much more ugly
15:23:30 <bswartz> because you need capabilities which inherit and capabilities which don't inherit
15:24:32 <bswartz> in any case, now is the time to refine the access groups spec proposal from nidhi
15:25:08 <bswartz> certainly by the time we come back from austin we should have made decisions on all of the open items
15:25:24 <bswartz> moving on....
15:25:33 <bswartz> #topic design summit planning
15:25:41 <bswartz> #link https://etherpad.openstack.org/p/manila-newton-summit-topics
15:26:11 <bswartz> Thanks to all those who proposed session ideas, and thanks for voting on these
15:26:36 <bswartz> we've got 2 fishbowl slots
15:27:02 <bswartz> the high vote-getters are:
15:27:09 <bswartz> Concurrency issues in Manila
15:27:24 <bswartz> Add "Revert share from snapshot"?
15:27:38 <bswartz> Generic groups
15:28:09 <bswartz> so concurrency issues doesn't make sense to cover in a fishbowl -- that's more of a working session topic
15:28:28 <bswartz> and we covered snapshot revert in a fishbowl in tokyo
15:28:32 <bswartz> do we need another one?
15:28:58 <vponomaryov> bswartz: do we have spec for it?
15:29:07 <vponomaryov> bswartz: design notes?
15:29:17 <bswartz> vponomaryov: we have the tokyo etherpad somewhere
15:29:38 <bswartz> it seems like we have new information and questions that didn't get resolved in the last 6 months
15:29:52 <cknight> #link  https://etherpad.openstack.org/p/manila-mitaka-summit-topics
15:29:57 <bswartz> so maybe another fishbowl is called for
15:30:06 <bswartz> cknight not that one
15:30:29 <bswartz> #link https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Manila
15:30:46 <bswartz> #link https://etherpad.openstack.org/p/mitaka-manila-snapshot-semantics
15:31:33 <bswartz> vponomaryov: probably not the level of information you were looking for
15:31:36 <vponomaryov> bswartz: there is no point there about reverting to a non-latest snapshot - is it prohibited?
15:31:56 <bswartz> vponomaryov: it's something we haven't discussed
15:32:04 <bswartz> we can add that topic to the end of this meeting
15:32:21 <cknight> vponomaryov: I suspect there will be disagreement there, but it's a good question.
15:32:30 <bswartz> We have 1 more week to finalize our design summit sessions
15:32:54 <bswartz> if you have a great topic you forgot to add, it's not too late, but do it now, because I'm going to start scheduling things today
15:33:30 <bswartz> oh crud I remembered another topic
15:33:47 <bswartz> let me modify agenda while gouthamr covers his topic
15:33:53 <bswartz> #topic  Release notes, continued
15:33:57 <tbarron> late hello
15:34:00 <bswartz> gouthamr: you're up
15:34:13 <gouthamr> #link https://review.openstack.org/#/c/300656/
15:34:18 <gouthamr> hi tbarron..
15:34:28 <gouthamr> thanks bswartz..
15:34:36 <gouthamr> alright, so the reno guideline's been up for review
15:34:48 <gouthamr> i was hoping we can have consensus and a discussion
15:35:16 <gouthamr> the examples may amuse you, but it was intentional :)
15:35:49 * bswartz marks tbarron tardy
15:35:55 <vponomaryov> gouthamr: not funny enough, improve it! ))
15:36:04 <cknight> vponomaryov: +1
15:36:10 * gouthamr it's hard to amuse vponomaryov
15:36:51 <bswartz> thanks gouthamr
15:37:14 <gouthamr> the idea was hoping we'd do renos whenever necessary, as noted..
15:37:17 <bswartz> the reno infrastructure has been in manila since early mitaka, but we haven't used it as much as we should have IMO
15:37:38 <gouthamr> #link http://docs.openstack.org/releasenotes/manila/mitaka.html
15:37:46 <bswartz> so I think we should ask core reviewers to add renos to their checklist of things to look at before merging changes
15:38:00 <bswartz> if you see a change that needs a reno and doesn't have one, -1 it
15:38:05 <cknight> bswartz: last week we agreed to use it more, provided we could make it objective when a reno was needed.  That's why gouthamr wrote this.
15:38:13 <bswartz> ah
15:38:20 <cknight> gouthamr: thanks for writing this up
15:38:28 <bswartz> okay sounds great
15:38:42 <bswartz> #topic Midcycle meetup
15:39:09 <bswartz> so unfortunately I think we don't have enough critical mass to do the meetup in germany
15:40:25 <bswartz> I would love to have one in germany, out of fairness to our european core team members, but vponomaryov can't travel this summer, and I didn't hear that any of the americas-based cores could make it either
15:41:09 <bswartz> so I think we should try again in ocata, but I don't think it makes sense to do the midcycle in europe for newton
15:41:31 <vponomaryov> mkoderer__: ^
15:41:33 <bswartz> sorry mkoderer and thanks for offering
15:41:33 <cknight> bswartz: correct, the biergartens are closed in the winter
15:41:43 <bswartz> what?!?
15:41:47 <bswartz> no beer in winter time?
15:42:00 <bswartz> ocata is a winter release....
15:42:11 <bswartz> or at least the midcycle for ocata release will fall in winter
15:42:22 <bswartz> unless we hold it south of equator
15:42:40 <cknight> bswartz: ganso could host one
15:42:48 <ganso> O_O
15:42:49 <bswartz> maybe ganso will invite us all to Sao Paulo
15:42:56 <bswartz> lol
15:42:58 <vponomaryov> cknight: does ganso know about it? ))
15:43:06 <cknight> vponomaryov:  he does now
15:43:28 <dustins> I for one hear that South America is really nice in the summer :)
15:43:44 <bswartz> dustins: it's nice in the winter too
15:43:46 <markstur_> our summer or their summer?
15:43:55 <tpsilva> we practically have summer all year long
15:43:58 <dustins> markstur_: Either?
15:44:05 <markstur_> both
15:44:12 <dustins> lol
15:44:21 <bswartz> okay moving on
15:44:27 <bswartz> #topic revert to snapshot
15:44:46 <bswartz> there's no sense in waiting until summit to start gathering opinions about this feature
15:45:05 <bswartz> it's an obviously useful feature which some backends can implement
15:45:13 <bswartz> the main question is this
15:45:43 <bswartz> if you have 2 or more snapshots, and you want to revert to one that's not the latest, is it okay to delete the later ones in the process of reverting?
15:46:01 <bswartz> so I have snapshots A, B, and C (in that order)
15:46:06 <bswartz> and I want to revert to B
15:46:11 <vponomaryov> and second: "how to determine "latestness"?"
15:46:17 <bswartz> after revert, I might only have snapshots A and B
15:46:50 <bswartz> specifically, we need to think about whether this makes sense from an end user perspective
15:46:58 <gouthamr> vponomaryov: maybe Manila API shouldn't worry about this..
15:47:09 <bswartz> and also we need to know from vendors who might implement this feature if anyone can revert to snapshot B *without* deleting snapshot C
15:47:25 <vponomaryov> ZFSonLinux cannot
15:47:29 <vponomaryov> only latest
15:47:57 <bswartz> if the answer is that we must delete all "later" snapshots in order to revert to a specific snapshot, then obviously we need to introduce a concept of snapshot ordering into manila
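Under those semantics, the set of snapshots sacrificed by a revert falls directly out of the ordering. A minimal sketch, assuming each snapshot record carries a trustworthy ordering key (which, per the timestamp discussion later in the meeting, `created_at` may not be):

```python
# Sketch: given backends that can only revert to their latest snapshot,
# reverting to snapshot B implies deleting everything taken after B.
# Assumes `key` reflects the true backend ordering of the snapshots.

def snapshots_lost_on_revert(snapshots, target_id, key='created_at'):
    """Return the snapshots that a revert to target_id would delete."""
    ordered = sorted(snapshots, key=lambda s: s[key])
    ids = [s['id'] for s in ordered]
    target_index = ids.index(target_id)
    # Everything strictly newer than the target is lost.
    return ordered[target_index + 1:]
```

For the A, B, C example below, reverting to B would leave A and B and delete C.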
15:47:58 <zhongjun_> In my backend can support this.
15:48:22 <markstur_> I don't think I'd need to invalidate older snapshots.
15:48:29 <bswartz> the current timestamps for the snapshots in the Manila DB aren't actually guaranteed to reflect the true ordering of the snapshots
15:48:32 <ganso> I think we would need to enforce that a snapshot is a recoverable image
15:48:37 <gouthamr> bswartz: wouldn't the `created_at` attribute already do the ordering for you?
15:48:39 <vponomaryov> markstur_: newer, not older
15:48:53 <ganso> so Snapshot B can be reverted to even if reverted to a prior
15:48:56 <bswartz> markstur_: not the older ones, the ones taken after the snapshot being reverted to
15:49:11 <markstur_> yeah.  what vponomaryov said
15:49:30 <bswartz> zhongjun_: you can revert to B without deleting C? is that an efficient operation or does it involve a data copy?
15:49:42 <vponomaryov> also, replication needs will break it in case of ZFSonLinux
15:49:54 <markstur_> it would involve data copy though
15:50:00 <cknight> gouthamr: Not if created_at is set in the API layer and the snapshot is taken asynchronously.
15:50:04 <vponomaryov> there the "service" snapshot will almost always be the latest
15:50:14 <bswartz> vponomaryov: it wouldn't need to break replication
15:50:30 <bswartz> vponomaryov: zfs driver could simply restart replication at the point of the snapshot being reverted to
15:50:37 <vponomaryov> bswartz: I mean the ZFSonLinux driver cannot support revert on replicated shares
15:51:13 <bswartz> vponomaryov: so part of the revert operation on a replicated snapshot would be to reset the "base" snapshot for replication
15:51:25 <bswartz> replicated share*
15:52:47 <bswartz> gouthamr: created_at is the time the DB record was created, which is typically different from the time the snapshot was created by possibly dozens or hundreds of milliseconds
15:53:14 <vponomaryov> bswartz: we can update that field from share-manager
15:53:38 <gouthamr> bswartz cknight: true.. we need a available_at sorta field
15:53:45 <bswartz> vponomaryov: that's what I meant by "introduce a concept of snapshot ordering into manila"
15:54:10 <bswartz> maybe the way we do it is we force drivers to update timestamp to the true timestamp on the backend
15:54:18 <cknight> gouthamr: yes, and with multiple threads in multiple share services, you really ought to get that timestamp from the backend to be sure.
15:54:34 <gouthamr> cknight: +1
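A minimal sketch of the idea cknight and vponomaryov converge on here: when ordering snapshots, prefer a creation time reported by the backend over the API-layer `created_at` (the `backend_created_at` field name is hypothetical, used only for illustration):

```python
# Sketch: order snapshots by a backend-reported timestamp when one exists,
# falling back to the DB created_at set at the API layer. Field names are
# illustrative, not Manila's actual schema.

def ordering_timestamp(snapshot):
    """Prefer the backend's own clock; fall back to the DB created_at."""
    return snapshot.get('backend_created_at') or snapshot['created_at']


def snapshots_in_backend_order(snapshots):
    """Sort snapshots by their true (backend) creation order."""
    return sorted(snapshots, key=ordering_timestamp)
```

This is one way to "introduce a concept of snapshot ordering into Manila" without trusting timestamps recorded before the backend actually took the snapshot.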
15:55:19 <markstur_> Doing a revert that destroys current data and recent snapshots in favor of some older snapshot (that hopefully is the one you want) is a very dangerous thing.
15:55:27 <bswartz> okay so we have ideas about how to solve this if we decide to delete snapshots on revert
15:55:44 <bswartz> however we're no closer to the answer about whether this is a good idea or not
15:56:02 <bswartz> we know of at least 2 backends that cannot revert while preserving newer snapshots
15:56:33 <vponomaryov> generic? windows?
15:56:33 <bswartz> my instinct is the same as markstur_'s
15:56:34 <markstur_> those 2 are probably the most optimized
15:56:39 <tbarron> bswartz: are you thinking the driver would choose whether to delete newer snaps, or that this would be a universal decision for all drivers even if they don't have to?
15:56:41 <markstur_> s/optimized/dangerous/
15:57:15 <zhongjun_> bswartz: if I remember right, it is an efficient operation in the array.
15:57:33 <bswartz> tbarron: we have to define what the semantics of the revert API are
15:57:39 <markstur_> I still think it would be nice to have. Even if it has "warning, warning, warning" on it.  But that is reason to pause and consider.
15:57:51 <vponomaryov> bswartz: hm, I think ZFS can support revert to a non-latest snapshot, using a tricky thing called "clone"
15:57:53 <bswartz> tbarron: if backends can't match the semantics we define, then they don't implement the feature
15:58:41 <bswartz> my fear is that we define revert in such a way that either very few backends implement it, or that the implementations are awful and nobody uses them
15:59:05 <tbarron> yeah, drivers that implement snaps as read only clones shouldn't have an issue keepin C when reverted to B
15:59:09 <bswartz> so we should take the time to get this one right
15:59:47 <ganso> time check
15:59:47 <bswartz> oh well I introduced this issue
15:59:55 <bswartz> maybe more time is needed in Austin after all....
16:00:01 <bswartz> we are indeed at the end of our time
16:00:09 <bswartz> thanks all
16:00:11 <markstur_> right on time
16:00:14 <tbarron> bye
16:00:22 <bswartz> #endmeeting