15:00:16 #startmeeting manila
15:00:17 Meeting started Thu Jan 8 15:00:16 2015 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 The meeting name has been set to 'manila'
15:00:23 hello all
15:00:28 Hello
15:00:32 hello
15:00:34 hi
15:00:42 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:00:42 hny
15:00:46 hi
15:00:57 hope you all took some time off over the holidays
15:01:16 and happy new year!
15:01:44 #topic dev status
15:01:52 vponomaryov: I know you've been busy
15:01:59 dev status:
15:02:10 1) Tempest CI jobs for Manila have been improved and should now be more stable.
15:02:10 merging a lot of tempest-stability patches
15:02:19 2) Manage/unmanage shares/share-servers
15:02:24 BP: #link https://blueprints.launchpad.net/manila/+spec/manage-shares
15:02:24 status: work in progress
15:02:33 3) Single SVM mode for Generic driver
15:02:38 BP: #link https://blueprints.launchpad.net/manila/+spec/single-svm-mode-for-generic-driver
15:02:41 gerrit: #link https://review.openstack.org/#/c/142403/
15:02:55 those are the main items, the others are bells and whistles
15:03:16 is there a WIP for (2)?
15:03:35 (2) contains lots of subtasks
15:03:48 hi
15:03:56 oh I see them
15:03:58 so, the BP overall is in WIP
15:04:16 3 changes in gerrit
15:04:31 will be more
15:04:39 yeah I'm sure
15:04:48 ty vponomaryov
15:04:54 anyone have questions about the above?
15:05:14 I have 1 question
15:05:32 why do our tempest-dsvm jobs sometimes still fail?
15:05:35 I saw one failure this morning
15:05:47 this time devstack did not start
15:05:49 at all
15:05:53 happens
15:05:56 anything we can do about that?
15:06:15 I do not think so
15:06:33 why don't other projects have this issue?
15:06:43 who said this?
15:07:00 When we pushed a fix to Cinder
15:07:08 I'm asking because my next question is, can we make the tempest-dsvm jobs voting now?
15:07:13 it succeeded only on the third attempt
15:07:26 bswartz: I think yes
15:07:36 ok
15:07:38 that time is near
15:07:48 thanks, great news
15:08:06 next topic
15:08:10 #topic rename driver mode
15:08:18 #link http://lists.openstack.org/pipermail/openstack-dev/2015-January/053960.html
15:08:24 chen, you're up
15:08:51 I want to change the current driver mode names because they're confusing
15:09:36 I'd like to suggest changing single_svm_mode to static mod_mode and multi_svm_mode to dynamic_mode
15:09:53 chen: mod_mode?
15:10:09 static_mode
15:10:11 thanks for putting much of the discussion on the ML
15:10:12 sorry
15:10:13 ok
15:10:30 I read through the thread and responded with my comments
15:10:42 I see
15:10:44 those of you who haven't followed should read the ML
15:10:50 chen I agree with you
15:11:01 the names are probably a bit confusing and could be better
15:12:08 so first of all, does anyone disagree and want to keep the current names?
15:12:30 the current driver modes are "single_svm" and "multi_svm"
15:12:55 I do mind 'static' and 'dynamic'
15:13:04 single_svm mode implies no share servers will be created, and no networking config is needed within manila
15:13:06 the current names are okay with me as those were proposed from the start
15:13:11 we can rename, but we need good new names
15:13:18 I don't like static
15:13:23 multi_svm mode implies that share servers will be created and they will consume network resources
15:14:09 either created or reused. relation 1:many
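(Editor's note: a minimal Python sketch of the distinction just described. The class, method names, and addresses are illustrative only, not the real Manila driver interface; the point is simply that in multi_svm mode manila creates a share server and hands it to the driver, while in single_svm mode the driver serves shares from a preexisting, statically configured backend.)

    # Editor's sketch (hypothetical names, not the actual manila driver API).
    class IllustrativeDriver(object):
        def __init__(self, multi_svm, static_export_ip=None):
            # multi_svm=True: manila creates/destroys share servers and
            # passes one to each call; networking config in manila is needed.
            # multi_svm=False (single_svm): nothing is created by manila;
            # the driver uses a backend the admin configured in advance.
            self.multi_svm = multi_svm
            self.static_export_ip = static_export_ip

        def create_share(self, share_name, share_server=None):
            if self.multi_svm:
                ip = share_server['ip']        # server was created by manila
            else:
                ip = self.static_export_ip     # preexisting backend
            return '%s:/shares/%s' % (ip, share_name)

    # single-SVM style: IllustrativeDriver(False, '192.0.2.10').create_share('demo')
    # multi-SVM style:  IllustrativeDriver(True).create_share('demo', {'ip': '192.0.2.20'})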
15:14:29 o/ (late)
15:14:41 from a practical aspect I think the multi-svm pattern is closer to east-west traffic
15:14:53 and single is more north-south
15:15:00 I am not a big fan of the new names
15:15:03 I think one valid complaint is that "svm" is an acronym not used elsewhere and not understood
15:15:21 but I suspect it will change a lot over time
15:15:31 ganso1: bswartz, i agree
15:15:42 ganso also proposed the variants 'basic' and 'advanced' in the manila chat
15:15:47 no_share_servers and multi_share_servers might be more accurate
15:16:00 bswartz: definitely
15:16:12 I think the term "share_server" must be included
15:16:23 it is the term we are using throughout Manila
15:16:24 basic and advanced are not good. it implies drivers supporting basic are not as good
15:16:32 bswartz: from my point of view, it seems that no_share_servers means there are no shares.. is that what is meant?
15:16:32 xyang1: +1
15:16:38 xyang1: +1
15:16:41 xyang1: I agree
15:16:45 lpabon: well no
15:16:55 xyang1: +1
15:17:08 no_share_servers would mean the driver doesn't create share servers because it's using something preexisting
15:17:35 okay so we may need to brainstorm on this topic
15:18:00 bswartz, I considered "no_share_servers", but in single_svm_mode for the generic driver an instance still needs to be configured, so when the admin works in this mode there is no share server but one instance is still needed
15:18:05 can I suggest that we resolve this by continuing the ML thread and people can suggest better alternatives? then next week we can pick one?
15:18:07 mind if i make it more complicated?
15:18:23 jasonsb_: go ahead
15:18:29 xyang1: what's the problem with static and dynamic?
15:18:31 i'm confronting a situation where i would like to load balance over several share servers
15:18:33 bswartz: +1
15:18:47 so i might be single_svm but there are many of them
15:18:57 jasonsb_, +1
15:19:02 bswartz: +1
15:19:04 i suspect i'm not alone
15:19:06 bswartz: +1
15:19:08 jasonsb_: okay so that's part of the confusion here
15:19:11 "static" sounds like the capability is not flexible enough
15:19:33 xyang1: +1
15:19:36 Let's also not keep changing names
15:19:38 we don't want to prevent backends from doing what they need to do -- which is why the definition of a share server is intentionally vague
15:19:40 static and dynamic are not good because the real criterion is whether we create additional resources or not
15:19:53 so i think it's hard to pigeonhole this at this time
15:20:07 svm seems fine to me
15:20:13 in the case of netapp, a "share server" actually has multiple IP addresses and lives on multiple physical nodes
15:20:28 we used single tenant and multi tenant before
15:20:29 and our driver can create them and destroy them as needed
15:20:31 are there multiple IP's that can host a given share?
15:21:01 jasonsb_: Manila is able to provide only one export location right now
15:21:15 but a server can have more than 1 net interface
15:21:18 vponomaryov: yes i discovered that )
15:21:19 the only important aspect of a share_server is that it's something created by manila, so manila expects to own its lifecycle
15:21:24 jasonsb_: my driver actually may fall into that category
15:21:28 common case - a service net interface and a tenant net interface for export
15:21:39 if your driver uses something preexisting, then it's not a share server (from manila's perspective)
15:21:51 that doesn't mean that it can't serve shares
15:22:21 this split is what we were trying to capture with the single/multi svm thing
15:22:35 bswartz: then something like 'share_server_needed' and 'share_server_included' could be possible names?
15:22:36 it's perfectly fine to have a "single_svm" driver which is backed by a large cluster of servers
15:22:39 bswartz: that makes sense
15:22:52 toabctl: -1
15:23:00 the difference that manila cares about is that manila is not responsible for creating/destroying the servers themselves
15:24:05 we can replace the mode string with a boolean named "driver_handles_share_server = True/False"
15:24:08 I think changing from "single_svm" to "single_share_server", "multi_svm" to "multi_share_server" is the simplest change we can make
15:24:10 manage share or manage share+network assets
15:24:13 one thing that's clear to me is that regardless of what we do with the name, we need much better documentation on what these modes and share servers are all about
15:24:23 vponomaryov: +1
15:24:34 vponomaryov: I think that's better
15:24:57 vponomaryov: which modes do true and false map to?
15:25:07 true - multi_svm
15:25:10 true = single_svm, false = multi_svm?
15:25:11 oh
15:25:19 or perhaps just enumerate the assets and who manages them?
15:25:36 (vponomaryov idea)
15:25:36 so the option means "driver supports share server creation"
15:25:44 vponomaryov: yes. it's not really a mode. it's just a flag which indicates that there is some more stuff to do during creation/deletion of a share
15:26:13 for now it looks like a great solution
15:26:15 toabctl: right - for a driver developer it means implementing additional interfaces
15:26:28 toabctl: it's still sort of a mode, because when you set it to true, there are additional expectations from the config
15:26:55 and the manager will interact with the driver differently if the flag is set to true
15:27:33 how about we just keep the current names but with better explanation in the code and docs
15:28:21 I am open to changes, but I do not insist on it.
15:28:32 xyang1, -1
15:28:51 vponomaryov: +1
15:28:55 xyang1: that's one option, but I want to give some time to make better proposals
15:29:03 I still don't understand what "single" means in single_svm_mode.
15:29:06 if we change it, then to a boolean, because we will not have a third value
15:29:20 I'll put an agenda item next week to decide whether to rename the option and if so, what the new names should be
15:29:22 perhaps the thing to do is to write some stub drivers as documentation
15:29:26 let's keep this discussion going on the ML
15:29:30 and see how many patterns develop
15:29:42 then revisit
15:29:45 so far I like valeriy's proposal best
15:30:19 bswartz: thanks, that's a good idea (the agenda)
15:30:24 I think vponomaryov's proposal is more straightforward
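(Editor's note: a minimal sketch of the boolean option vponomaryov proposed above, registered the way a manila driver option typically would be via oslo.config. Only the option name comes from the meeting; the module layout, default, and help text are editor assumptions, and the spelling that eventually merges may differ.)

    # Editor's sketch; option name as proposed in the meeting, everything
    # else here is illustrative.
    from oslo.config import cfg  # later cycles rename this namespace to oslo_config

    share_opts = [
        cfg.BoolOpt('driver_handles_share_server',
                    default=False,
                    help='True: manila creates share servers and hands them '
                         'to the driver (the old multi_svm behaviour). '
                         'False: the driver uses a preexisting, statically '
                         'configured backend (the old single_svm behaviour).'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(share_opts)

In a backend section of manila.conf this would then read simply driver_handles_share_server = True (or False) instead of a mode string.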
15:30:33 everyone okay with pushing the decision to next week and giving everyone time to consider?
15:30:36 I just don't like to keep changing names
15:30:42 bswartz: and then how do you answer your own argument that it's a mode b/c it implies a different scheme on the part of the manager?
15:30:49 we just got rid of single tenant and multi tenant
15:30:58 xyang1: +1
15:31:00 xyang1: I agree, but this change went in during kilo so we haven't actually released the new option
15:31:14 bswartz: +1
15:31:27 bswartz: +1
15:31:33 I want to get this right during kilo because it will be much harder to change it during L
15:31:38 bswartz: aye!
15:31:49 bswartz: +1
15:32:02 bswartz: if we can settle down in Kilo, that will be great
15:32:09 okay
15:32:15 let's do it next meeting
15:32:25 having a poll
15:32:29 #topic level-of-access-for-shares BP
15:32:40 #link https://blueprints.launchpad.net/manila/+spec/level-of-access-for-shares
15:32:53 The idea for this^ sprang out of the following use case:
15:32:54 vponomaryov: you're up
15:33:04 use case: a public share with different access levels for different users of different projects.
15:33:10 Like a publisher with 'rw' access and readers with only 'ro' access.
15:33:16 This is useful with the implementation of another idea described in the BP: #link https://blueprints.launchpad.net/manila/+spec/level-of-visibility-for-shares where we can make a share visible to all.
15:33:27 So, a question for maintainers of drivers. Will it be possible to implement it with your drivers?
15:33:36 if such an interface appears
15:34:28 three possible levels are planned - ro, rw and su
15:34:31 so the share is still owned by 1 tenant, but they can do access-allow with rw/ro instead of just rw?
15:34:39 right
15:34:44 okay ro/rw/su
15:34:54 those 3 levels only make sense for NFS btw
15:35:03 for CIFS the allowed "levels" might be different
15:35:18 let's leave it at the abstraction level
15:35:26 the idea of more than one level
15:35:30 would manila access-list have an additional field?
15:35:31 The difference between su and rw is not clear to me
15:35:47 well if we support it in the manila API, then the implementation must be standard across all backends
15:35:48 ganso1: su has execution rights
15:35:49 bswartz: +1
15:36:04 vponomaryov: humm ok
15:36:14 we can't have some backends that support some levels and other backends that support different levels
15:36:17 vponomaryov: is this mode supported by both CIFS and NFS?
15:36:23 rwx or rw- or r--
15:36:39 the difference between rw and su is that su means "root_squash" is turned off
15:37:03 bswartz: thanks, but root_squash is only for NFS, correct me if I am wrong please
15:37:12 ganso1: I did not look deeply into CIFS regarding that
15:37:12 correct
15:37:26 vponomaryov: -1
15:37:30 su has nothing to do with the x bit
15:37:33 vponomaryov: what about r-x?
15:37:35 also, I believe changing permissions manually via a script is out of scope, correct?
15:37:42 su only has to do with root_squash
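(Editor's note: a minimal Python sketch of how the three proposed levels could map onto standard NFS export options for a plain NFS backend, following bswartz's explanation that 'su' simply means root_squash is turned off. The mapping, paths, and addresses are illustrative assumptions; CIFS would need its own mapping, and 'su' semantics were still under debate.)

    # Editor's sketch: proposed manila access levels -> NFS export options.
    NFS_EXPORT_OPTIONS = {
        'ro': 'ro,root_squash',        # read-only, client root squashed
        'rw': 'rw,root_squash',        # read-write, client root squashed
        'su': 'rw,no_root_squash',     # read-write, client root stays root
    }

    def exports_line(path, client_ip, level):
        """Build an /etc/exports style entry for one access rule."""
        return '%s %s(%s)' % (path, client_ip, NFS_EXPORT_OPTIONS[level])

    # e.g. exports_line('/shares/share-42', '192.0.2.5', 'ro')
    #   -> '/shares/share-42 192.0.2.5(ro,root_squash)'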
15:38:41 ganso1: permissions for whom? all at once or some?
15:39:13 vponomaryov: I meant that those modes will apply to the share as a whole, such as the options configurable in an NFS export, not the files themselves
15:39:21 and an NFS client can directly chmod files inside the NFS share, and access is controlled inside the NFS protocol
15:39:53 some NFS servers can squash root, meaning that clients cannot obtain root access under any circumstances
15:39:56 it is not about files, it is about access for the whole share
15:40:12 some NFS servers can also force read-only access, regardless of the underlying mode bits on the filesystem
15:40:44 the NFS server has no control over whether the client can execute stuff or not
15:41:41 so in order to make progress on this
15:41:55 we need to find out if all of the existing drivers can even support a feature like this
15:42:09 I'm pretty sure the generic driver can (for NFS)
15:42:15 and the NetApp driver also could
15:42:35 we'd need to define some levels for CIFS and find out if everyone can support those levels
15:42:51 but there is the separate question of whether there is even demand for this
15:43:12 the mentioned use case
15:43:31 that belongs to a public deployment
15:43:38 so there is a theoretical use case, but are any real users asking for this?
15:44:01 I know about 1 case in a driver development project
15:44:07 it was implemented using metadata
15:44:13 like a workaround
15:44:20 which driver?
15:44:25 WFA
15:44:32 ah
15:44:55 was the use case for RO or for something else?
15:45:11 I think read-only is a must-have
15:45:11 when we need to share info, but keep it safe
15:45:34 since for a big company, the IT admin may put several files there and it should prevent users from deleting them
15:45:41 if we only implemented RO and RW, would that be enough?
15:45:41 so it should have this option, RO
15:45:49 ganso: +1
15:46:09 RO and RW both have fairly obvious semantics and I'm sure we can support them for both NFS and CIFS
15:46:09 bswartz: ganso1: +1
15:46:23 bswartz: that's my question too. why not allow setting r, w, x, any combination?
15:46:27 #info I have been participating starting last summit and keeping tabs on the same
15:46:29 other "levels" like SU are less obvious and might not be supported universally
15:46:58 xyang1: that's not how any NFS server I'm aware of works
15:47:03 bswartz: we have no interfaces that are supported by all
15:47:06 there may be fewer use cases for SU
15:47:12 xyang1: these would be export-wide settings
15:47:32 starting with RW and RO sounds good to me.
15:47:32 bswartz: yes, I think our driver (hdi-driver) does not support su
15:47:36 I think it is safe to assume that we can start partially, with RO and RW... and add SU if needed
15:47:36 bswartz: so it should not be a problem for all to support it
15:47:49 xyang1: the mode bits for individual files would remain as-is
15:47:51 bswartz: ok, I'll check our backend too
15:48:06 ganso1: i think you are correct
15:48:46 i like the idea of rw and ro but let's try to make it general enough to do su later
15:49:01 so, the main question is answered: level of access is required
15:49:02 #agreed implementing read-only and read-write access levels seems like something everyone can do and there are obvious use cases
15:49:13 #topic Boot Get started on wiki
15:49:37 also read-only and read-write make sense for both NFS and CIFS and (hopefully) other protocols
15:49:51 #info established in December frank says
15:50:01 rprakash: can we help you?
15:50:29 #topic open discussion
15:50:49 do we have logs for IRC chat? I didn't find manila at http://eavesdrop.openstack.org/irclogs/
15:50:56 chen: +1
15:51:13 chen: yes
15:51:17 https://wiki.openstack.org/wiki/Manila/Meetings
15:51:20 http://eavesdrop.openstack.org/meetings/manila/
15:51:29 not the meeting
15:51:31 oh!
15:51:34 bswartz, this is only for meetings
15:51:35 the manila room
15:51:38 IRC logs for the channel
15:51:47 bswartz, yep
15:51:50 no, I don't believe that infra logs our channel
15:51:53 oops. that's the link I wanted to post. thanks bswartz. it's mentioned on the wiki page
15:52:02 I log the channel, but my logs are not public
15:52:16 bswartz: interested in discussing export_location in db?
15:52:54 jasonsb_: is it a quick topic?
15:53:01 we've got 7 minutes
15:53:09 not sure
15:53:34 go ahead and ask the question
15:53:50 are there existing patterns for changing the endpoint address depending on some circumstance?
15:53:55 # info is the boxes in Oregon for VPN access at Linuxfoundations or they at Ericsson DCs?
15:53:55 (load balancing perhaps)
15:54:41 jasonsb_: this sounds like "share migration"
15:54:44 in my case I have many IP addresses I can use but I see that the IP address is coded into the database
15:55:10 jasonsb_: you can write any address but only one
15:55:17 yeah...
15:55:18 but the idea is good
15:55:19 jasonsb_: the endpoint address of what? the share-server? the manila api service?
15:55:23 this seems like a limitation
15:55:38 ##action can we get access to BGS hardware for contributions?
15:55:49 clustered NFS server implementations often have a list of IPs through which the share can be accessed
15:55:54 rprakash: 0.o ???
15:56:02 rprakash: please stop spamming!
15:56:06 I was wondering what other drivers might do where there are many IP's to choose from
15:56:45 jasonsb_: we only return 1 IP address, and then rely on in-band negotiation between the NFS client and NFS server to discover other IP addresses
15:56:54 jasonsb_: maybe a workaround for this limitation is setting up a proxy. But getting rid of this limitation is a good proposal
15:57:01 that's what pNFS is all about
15:57:46 I have the same question as jasonsb_: is there a way to change the glusterFS driver to add more than one "glusterfs_target", where all glusterfs_targets are replicas of each other? Then when manila creates a share, it would choose one target to use. This would distribute data traffic across the cluster: higher bandwidth, higher performance
15:58:17 a proxy is not the answer
15:58:41 I think manila may need to allow multiple mount points to be stored in the DB
15:58:43 we need to implement a list of exports instead of one string as the export
15:58:54 the question is whether those would change over time
15:58:55 chen: it's a good idea to implement, but I don't think it's possible today
15:59:13 i was thinking that the driver itself could be involved in the scheduling context
15:59:13 because we currently store that export one time and never change it
15:59:16 hello
15:59:16 to determine this
15:59:41 so it's an interesting variable in the single/multi_svm discussion
15:59:47 jasonsb_: it's a good idea, but we're out of time
15:59:51 I'm sure we can revisit this topic
16:00:02 sounds good
16:00:08 it's not related to the single/multi_svm discussion though
16:00:14 let's discuss this again next meeting or start a ML thread :)
16:00:15 bswartz: jasonsb_ +1
16:00:17 if you think it is then you don't understand the driver modes
16:00:30 I'll try to explain why in the ML thread
16:00:42 thanks everyone!
16:00:45 thanks
16:00:53 thanks!
16:00:55 thanks!
16:00:55 #endmeeting
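(Editor's note, appended after the log: a minimal Python sketch of the "list of exports instead of one string" idea raised in the open-discussion topic. The field names, IDs, and addresses are hypothetical, not the actual manila schema; the sketch only shows how several equivalent export locations could be stored and chosen from.)

    # Editor's sketch of a share record carrying multiple export locations.
    share = {
        'id': 'fake-share-id',
        'export_locations': [               # a list instead of one string
            '192.0.2.10:/shares/share-42',
            '192.0.2.11:/shares/share-42',  # same share, different server IP
        ],
    }

    def pick_export(share, index=0):
        """Pick one of several equivalent exports, e.g. round-robin
        for crude load balancing across server IPs."""
        locations = share['export_locations']
        return locations[index % len(locations)]

    # e.g. pick_export(share, 1) -> '192.0.2.11:/shares/share-42'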