15:00:29 #startmeeting manila
15:00:31 Meeting started Thu Apr 7 15:00:29 2016 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:35 The meeting name has been set to 'manila'
15:00:38 hello all
15:00:39 Hi
15:00:44 hello
15:00:45 hi
15:00:45 hello
15:00:46 hi everyone
15:00:47 Hello
15:00:54 hello o/
15:00:57 hello
15:01:05 hi
15:01:18 hi
15:01:22 anyone seen nidhimittalhada?
15:01:52 bswartz: I guess it is too late for her
15:02:08 bswartz: she works IST hours..
15:02:22 she PM'd me 8 hours ago and it sounded like she might be here
15:02:26 hi
15:02:28 ah
15:02:44 I know this meeting timeslot sucks for IST
15:02:50 :-/
15:03:20 oh well let's get started
15:03:27 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:03:48 #topic access rules
15:04:00 We have a bug affecting access rules
15:04:03 * bswartz looks for the number
15:04:23 https://bugs.launchpad.net/manila/+bug/1566815
15:04:24 Launchpad bug 1566815 in Manila "share manager fails to remove access rules on replicated share" [Undecided,New] - Assigned to Goutham Pacha Ravi (gouthamr)
15:04:53 I wanted to figure out who is still working on access rules because I think a few more changes are needed in newton
15:05:10 in particular, I think it was a mistake to remove the per-rule status column from the DB in mitaka (my mistake)
15:05:15 hi
15:05:34 \o
15:05:34 I think if we add that column back in, it will allow us to do smarter things
15:05:37 bswartz: so, that bug is a sporadic failure.. i suspect a DB race.. i wanted to sanitize that a bit..
15:06:15 yes gouthamr I think it's possible to fix the bug without any huge changes, but nevertheless I still want to look at additional changes to access rules implementation in newton
15:06:25 bswartz: column is still there, just would need to work the way it was
15:06:37 ganso: you mean in the model, but not in the schema
15:06:41 ganso: column is removed :) it is just fudged
15:06:55 it's now just a property that maps to the instance access_rules_status
15:06:57 gouthamr: oh yes, it is now a property, sorry that confused me
15:07:13 also I wanted to raise the subject of access rule mapping table
15:07:23 currently we're mapping rules to share _instances_
15:07:32 it seems to me that the rules would more properly be mapped to shares
15:07:35 bswartz: +1 .. I agree.. I really hope we can go back to linking access rules and shares instead of share instances..
15:08:01 there are cases where the actual rules on 2 instances should be somewhat different, such as during migration
15:08:07 bswartz: for replication, it does not make any difference..
15:08:16 however I think we can do that without having different rules for different instances in the DB
15:08:52 would anyone be opposed to returning the access rule mapping back to the share object at the DB layer?
15:09:15 bswartz: what do you expect to be solved by such change?
15:09:35 vponomaryov: currently we have multiple rows in mapping table for shares that have multiple instances
15:09:42 gouthamr, bswartz: we questioned if access rules equal across shares made sense across replicas that are in different locations and may not be accessible by some hosts
15:09:51 keeping all the rows correct is harder than 1 row
15:10:04 and I'm concerned about soft deletes being impossible in a mapping table
15:10:36 bswartz: also, migration can work, but there are several workarounds to be done
15:10:39 I also want to look at eliminating the mapping table altogether and having just one access rules table
15:10:43 ganso: but the API does not allow you to control access to a replica
15:10:53 bswartz: I think we should also consider the access groups proposal by nidhi... it is prone to change things
15:11:08 bswartz: I think the design is a good idea
15:11:11 gouthamr: yes I do NOT propose changing the API, just the implementation
15:11:12 ganso: so there's documentation that says access rules to secondary copies are going to be controlled by the driver as necessary..
15:11:23 and ganso that was my next topic
15:11:58 ganso: i mean, for certain types of replication, some access rules don't make sense at all, or different sense..
15:12:22 I'm just trying to figure out if I'm missing anything with my current understanding of access rules
15:12:32 vponomaryov implemented readable type and he applies any r/w access level as r/o to any secondary replica
15:13:04 gouthamr: that's an easy case to solve -- just ignore type
15:13:27 ignore type *on passive replicas
15:13:30 bswartz: yes, with the 'dr' type, no access rule makes sense,
15:13:58 the passive copies are meant to be "inaccessible"
15:14:19 okay so since nidhimittalhada isn't here I'll propose her topic
15:14:24 #topic access groups
15:14:49 so getting access rules implementation right is important because we finally have a volunteer to write access groups!
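[Editor's note: a minimal sketch of the two access-rule mapping schemes debated above. The names are illustrative only and do not match Manila's actual DB models; the point is that per-instance mapping rows can drift out of sync, while a single per-share row lets per-instance behavior (such as forcing r/o on a passive replica) be derived at apply time.]

```python
# Hypothetical illustration -- not Manila's real schema.

# Scheme 1: one mapping row per (rule, share instance); N rows per rule
# must be kept consistent, and they can drift:
instance_mappings = [
    {"rule_id": "r1", "instance_id": "primary", "state": "active"},
    {"rule_id": "r1", "instance_id": "replica", "state": "error"},
]

# Scheme 2: one row per rule, mapped to the share itself; per-instance
# behavior is derived when the rule is applied to an instance.
share_rules = [{"rule_id": "r1", "share_id": "s1", "level": "rw"}]

def effective_level(rule, instance_is_replica):
    """Derive the level actually applied on a given share instance."""
    if instance_is_replica:
        return "ro"  # e.g. r/w rules applied as r/o on readable replicas
    return rule["level"]

print(effective_level(share_rules[0], instance_is_replica=True))   # ro
print(effective_level(share_rules[0], instance_is_replica=False))  # rw
```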
15:14:54 #link https://wiki.openstack.org/wiki/Manila/design/access_groups
15:15:39 we need some feedback on this spec
15:16:08 I know it's hard to offer feedback in a wiki, personally I use the "discussion" feature of wiki
15:16:32 people are also welcome to use the ML to discuss this
15:17:00 the basic idea is that if I have 10 shares with access to the same clients, and I want to grant access to another client, I shouldn't have to call 10 access-allow APIs to make that happen
15:17:58 additionally, if tenants would rather define their groups of clients outside of Manila, such as using neutron security groups, we should allow that
15:18:24 bswartz: would this have any overlap with generic groups?
15:18:28 or similarly, if tenants wish to grant access to shares by instance ID rather than IP address, we should allow that
15:18:51 gouthamr: it overlaps, but probably not in a good way
15:19:01 bswartz: we can add possibility to add access by share list -> "manila access-allow share1,share2,shareN ip 1.1.1.1"
15:19:12 if we had share groups, it would in theory be possible to access allow to the whole group of shares easily
15:19:36 but share groups might not be the same granularity as the desired access groups
15:20:01 and I don't think we're considering hierarchical groups (groups of groups) for shares
15:20:20 bswartz: i'll add that to the wiki's discussion
15:20:45 vponomaryov: that solves half of the problem
15:20:54 #action: gouthamr will add share groups to access groups wiki discussion
15:20:55 bswartz: hierarchical groups wouldn't be that hard
15:21:03 vponomaryov: but what if I want to create a new share and give it the same access as all of my other shares
15:21:04 bswartz: but it could be a later enhancement
15:21:20 cknight: I think it adds even more complexity
15:21:35 bswartz: your latest case is covered by second approach on wiki page
15:21:42 bswartz: with "inherit" command
15:21:43 cknight: it's worth considering, but I suspect we'll decide not to do it
15:21:46 bswartz: yes, but the prototype Alex & I built 18 months ago had hierarchical groups
15:22:04 oh you mean hierarchical groups for access
15:22:10 bswartz: yes!
15:22:18 I meant hierarchical groups of shares for replication, consistency, etc
15:22:33 bswartz: that's definitely harder
15:22:45 bswartz: not sure it's worth the effort
15:22:50 me neither
15:23:10 not only is it harder but it makes the UI that much more ugly
15:23:30 because you need capabilities which inherit and capabilities which don't inherit
15:24:32 in any case, now is the time to refine the access groups spec proposal from nidhi
15:25:08 certainly by the time we come back from austin we should have made decisions on all of the open items
15:25:24 moving on....
15:25:33 #topic design summit planning
15:25:41 #link https://etherpad.openstack.org/p/manila-newton-summit-topics
15:26:11 Thanks to all those who proposed session ideas, and thanks for voting on these
15:26:36 we've got 2 fishbowl slots
15:27:02 the high vote-getters are:
15:27:09 Concurrency issues in Manila
15:27:24 Add "Revert share from snapshot"?
15:27:38 Generic groups
15:28:09 so concurrency issues doesn't make sense to cover in a fishbowl -- that's more of a working session topic
15:28:28 and we covered snapshot revert in a fishbowl in tokyo
15:28:32 do we need another one?
15:28:58 bswartz: do we have spec for it?
15:29:07 bswartz: design notes?
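[Editor's note: the core of the access-groups proposal discussed above — grant one client access to many shares with a single operation instead of N access-allow calls. A purely illustrative sketch; the group and share names are invented and this is not Manila's API.]

```python
# Illustrative only: fan one access grant out across a group of shares.

access_groups = {"web-servers": {"members": ["10.0.0.5", "10.0.0.6"]}}
group_shares = {"web-servers": ["share1", "share2", "share3"]}
applied = []  # (share, client) pairs that would each become an allow rule

def allow_group(group):
    """Apply every member of a group to every share attached to it."""
    for share in group_shares[group]:
        for client in access_groups[group]["members"]:
            applied.append((share, client))

allow_group("web-servers")
print(len(applied))  # 3 shares x 2 members = 6 rules from one call
```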
15:29:17 vponomaryov: we have the tokyo etherpad somewhere
15:29:38 it seems like we have new information and questions that didn't get resolved in the last 6 months
15:29:52 #link https://etherpad.openstack.org/p/manila-mitaka-summit-topics
15:29:57 so maybe another fishbowl is called for
15:30:06 cknight: not that one
15:30:29 #link https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Manila
15:30:46 #link https://etherpad.openstack.org/p/mitaka-manila-snapshot-semantics
15:31:33 vponomaryov: probably not the level of information you were looking for
15:31:36 bswartz: there is no point about revert to not latest snapshot - prohibited?
15:31:56 vponomaryov: it's something we haven't discussed
15:32:04 we can add that topic to the end of this meeting
15:32:21 vponomaryov: I suspect there will be disagreement there, but it's a good question.
15:32:30 We have 1 more week to finalize our design summit sessions
15:32:54 if you have a great topic you forgot to add, it's not too late, but do it now, because I'm going to start scheduling things today
15:33:30 oh crud I remembered another topic
15:33:47 let me modify agenda while gouthamr covers his topic
15:33:53 #topic Release notes, continued
15:33:57 late hello
15:34:00 gouthamr: you're up
15:34:13 #link https://review.openstack.org/#/c/300656/
15:34:18 hi tbarron..
15:34:28 thanks bswartz..
15:34:36 alright, so the reno guideline's been up for review
15:34:48 i was hoping we can have a discussion and reach consensus
15:35:16 the examples may amuse you, but it was intentional :)
15:35:49 * bswartz marks tbarron tardy
15:35:55 gouthamr: not funny enough, improve it! ))
15:36:04 vponomaryov: +1
15:36:10 * gouthamr it's hard to amuse vponomaryov
15:36:51 thanks gouthamr
15:37:14 the idea was hoping we'd do renos whenever necessary, as noted..
15:37:17 the reno infrastructure has been in manila since early mitaka, but we haven't used it as much as we should have IMO
15:37:38 #link http://docs.openstack.org/releasenotes/manila/mitaka.html
15:37:46 so I think we should ask core reviewers to add renos to their checklist of things to look at before merging changes
15:38:00 if you see a change that needs a reno and doesn't have one, -1 it
15:38:05 bswartz: last week we agreed to use it more, provided we could make it objective when a reno was needed. That's why gouthamr wrote this.
15:38:13 ah
15:38:20 gouthamr: thanks for writing this up
15:38:28 okay sounds great
15:38:42 #topic Midcycle meetup
15:39:09 so unfortunately I think we don't have enough critical mass to do the meetup in germany
15:40:25 I would love to have one in germany, out of fairness to our european core team members, but vponomaryov can't travel this summer, and I didn't hear that any of the americas-based cores could make it either
15:41:09 so I think we should try again in ocata, but I don't think it makes sense to do the midcycle in europe for newton
15:41:31 mkoderer__: ^
15:41:33 sorry mkoderer and thanks for offering
15:41:33 bswartz: correct, the biergartens are closed in the winter
15:41:43 what?!?
15:41:47 no beer in winter time?
15:42:00 ocata is a winter release....
15:42:11 or at least the midcycle for ocata release will fall in winter
15:42:22 unless we hold it south of equator
15:42:40 bswartz: ganso could host one
15:42:48 O_O
15:42:49 maybe ganso will invite us all to Sao Paulo
15:42:56 lol
15:42:58 cknight: does ganso know about it? ))
15:43:06 vponomaryov: he does now
15:43:28 I for one hear that South America is really nice in the summer :)
15:43:44 dustins: it's nice in the winter too
15:43:46 our summer or their summer?
15:43:55 we practically have summer all year long
15:43:58 markstur_: Either?
15:44:05 both
15:44:12 lol
15:44:21 okay moving on
15:44:27 #topic revert to snapshot
15:44:46 there's no sense in waiting until summit to start gathering opinions about this feature
15:45:05 it's an obviously useful feature which some backends can implement
15:45:13 the main question is this
15:45:43 if you have 2 or more snapshots, and you want to revert to one that's not the latest, is it okay to delete the later ones in the process of reverting?
15:46:01 so I have snapshots A, B, and C (in that order)
15:46:06 and I want to revert to B
15:46:11 and second: "how to determine "latestness"?"
15:46:17 after revert, I might only have snapshots A and B
15:46:50 specifically, we need to think about whether this makes sense from an end user perspective
15:46:58 vponomaryov: maybe Manila API shouldn't worry about this..
15:47:09 and also we need to know from vendors who might implement this feature if anyone can revert to snapshot B *without* deleting snapshot C
15:47:25 ZFsonLinux cannot
15:47:29 only latest
15:47:57 if the answer is that we must delete all "later" snapshots in order to revert to a specific snapshot, then obviously we need to introduce a concept of snapshot ordering into manila
15:47:58 My backend can support this.
15:48:22 I don't think I'd need to invalidate older snapshots.
15:48:29 the current timestamps for the snapshots in the Manila DB aren't actually guaranteed to reflect the true ordering of the snapshots
15:48:32 I think we would need to enforce that a snapshot is a recoverable image
15:48:37 bswartz: wouldn't the `created_at` attribute already do the ordering for you?
15:48:39 markstur_: newer, not older
15:48:53 so Snapshot B can be reverted to even if reverted to a prior
15:48:56 markstur_: not the older ones, the ones taken after the snapshot being reverted to
15:49:11 yeah. what vponomaryov said
15:49:30 zhongjun_: you can revert to B without deleting C? is that an efficient operation or does it involve a data copy?
15:49:42 also, replication needs will break it in case of ZFSonLinux
15:49:54 it would involve data copy though
15:50:00 gouthamr: Not if created_at is set in the API layer and the snapshot is taken asynchronously.
15:50:04 there "service" snapshot will be latest almost always
15:50:14 vponomaryov: it wouldn't need to break replication
15:50:30 vponomaryov: zfs driver could simply restart replication at the point of the snapshot being reverted to
15:50:37 bswartz: I mean ZFsonLinux driver cannot support revert on replicated shares
15:51:13 vponomaryov: so part of the revert operation on a replicated share would be to reset the "base" snapshot for replication
15:52:47 gouthamr: created_at is the time the DB record was created, which is typically different from the time the snapshot was created by possibly dozens or hundreds of milliseconds
15:53:14 bswartz: we can update that field from share-manager
15:53:38 bswartz cknight: true.. we need an available_at sorta field
15:53:45 vponomaryov: that's what I meant by "introduce a concept of snapshot ordering into manila"
15:54:10 maybe the way we do it is we force drivers to update timestamp to the true timestamp on the backend
15:54:18 gouthamr: yes, and with multiple threads in multiple share services, you really ought to get that timestamp from the backend to be sure.
15:54:34 cknight: +1
15:55:19 Doing a revert that destroys current data and recent snapshots in favor of some older snapshot (that hopefully is the one you want) is a very dangerous thing.
15:55:27 okay so we have ideas about how to solve this if we decide to delete snapshots on revert
15:55:44 however we're no closer to the answer about whether this is a good idea or not
15:56:02 we know of at least 2 backends that cannot revert while preserving newer snapshots
15:56:33 generic? windows?
15:56:33 my instinct is the same as markstur_'s
15:56:34 those 2 are probably the most dangerous
15:56:39 bswartz: are you thinking the driver would choose to revert older snaps, or that this would be a universal decision for all drivers even if they don't have to?
15:57:15 bswartz: If I remember right, it is an efficient operation in the array.
15:57:33 tbarron: we have to define what the semantics of the revert API are
15:57:39 I still think it would be nice to have. Even if it has "warning, warning, warning" on it. But that is reason to pause and consider.
15:57:51 bswartz: hm, I think ZFS can support revert to not latest, using a tricky thing called "clone"
15:57:53 tbarron: if backends can't match the semantics we define, then they don't implement the feature
15:58:41 my fear is that we define revert in such a way that either very few backends implement it, or that the implementations are awful and nobody uses it
15:59:05 yeah, drivers that implement snaps as read only clones shouldn't have an issue keeping C when reverted to B
15:59:09 so we should take the time to get this one right
15:59:47 time check
15:59:47 oh well I introduced this issue
15:59:55 maybe more time is needed in Austin after all....
16:00:01 we are indeed at the end of our time
16:00:09 thanks all
16:00:11 right on time
16:00:14 bye
16:00:22 #endmeeting
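[Editor's note: the revert-to-snapshot semantics debated in the last topic, sketched in code. This assumes a hypothetical driver-reported backend timestamp field (`backend_ts`) for ordering, as suggested in the discussion, rather than the DB's `created_at`; it is illustrative only, not Manila's implementation.]

```python
# Illustrative only: "revert to snapshot B deletes later snapshots",
# with ordering taken from a hypothetical backend-reported timestamp.

snapshots = [
    {"name": "A", "backend_ts": 100},
    {"name": "C", "backend_ts": 300},  # DB row order need not match
    {"name": "B", "backend_ts": 200},
]

def revert_to(snaps, target):
    """Return the names of snapshots surviving a revert to `target`,
    assuming the backend must discard snapshots newer than the target."""
    ordered = sorted(snaps, key=lambda s: s["backend_ts"])
    cutoff = next(s["backend_ts"] for s in ordered if s["name"] == target)
    return [s["name"] for s in ordered if s["backend_ts"] <= cutoff]

print(revert_to(snapshots, "B"))  # ['A', 'B'] -- C is dropped
```

Backends built on read-only clones could instead keep C intact; the open design question in the meeting is whether the API should permit both behaviors or mandate one.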