15:02:27 <bswartz> #startmeeting manila
15:02:28 <openstack> Meeting started Thu Jan 15 15:02:27 2015 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:32 <openstack> The meeting name has been set to 'manila'
15:02:41 <bswartz> oh well
15:02:47 <lpabon> hi
15:02:48 <bswartz> #topic dev status
15:02:52 <geguileor> Nice!
15:02:59 <vponomaryov> ok, here is the dev status:
15:03:04 <vponomaryov> 1) Share access levels 'ro' and 'rw'
15:03:08 <bswartz> looks like we have recording but not channel topic management
15:03:09 <vponomaryov> BP: #link https://blueprints.launchpad.net/manila/+spec/level-of-access-for-shares
15:03:19 <vponomaryov> gerrit:
15:03:19 <vponomaryov> - client #link https://review.openstack.org/144298
15:03:19 <vponomaryov> - server #link https://review.openstack.org/144617
15:03:22 <vponomaryov> status: finished, ready for review
15:03:29 <vponomaryov> 2) Manage/unmanage shares
15:03:33 <vponomaryov> BP: #link https://blueprints.launchpad.net/manila/+spec/manage-shares
15:03:37 <vponomaryov> gerrit: #link https://review.openstack.org/147495
15:03:40 <vponomaryov> status: work in progress
15:03:45 <vponomaryov> 3) Functional tests for manilaclient
15:03:48 <vponomaryov> BP: #link https://blueprints.launchpad.net/python-manilaclient/+spec/functional-tests
15:03:53 <vponomaryov> status: CLI tests from the Manila project and its tempest plugin were ported to manilaclient and new ones were added. Right now these are read-only tests.
15:04:00 <vponomaryov> gerrit:
15:04:01 <vponomaryov> - docs update #link https://review.openstack.org/147161
15:04:01 <vponomaryov> - tests themselves #link https://review.openstack.org/137393
15:04:07 <vponomaryov> 4) Single_svm mode for Generic driver
15:04:11 <vponomaryov> BP: #link https://blueprints.launchpad.net/manila/+spec/single-svm-mode-for-generic-driver
15:04:15 <vponomaryov> gerrit: #link https://review.openstack.org/142403
15:04:18 <vponomaryov> status: ready for review
15:04:25 <vponomaryov> That's the main
15:04:43 <bswartz> okay
15:04:49 <bswartz> I'll review 1 and 4 soon
15:04:53 <bswartz> 2 is on the agenda for today
15:05:10 <bswartz> any questions before we move on?
15:05:24 <bswartz> #topic Rename driver_mode decision
15:05:42 <bswartz> okay, last week blew by fast and I didn't answer the questions raised on the ML
15:05:44 <bswartz> so I'll do so now
15:06:19 <bswartz> is chen here?
15:06:33 <chen> bswartz, yes
15:06:36 <bswartz> ok
15:07:01 <bswartz> so there can be a relationship between share servers and virtual machines
15:07:08 <bswartz> however that's up to the driver
15:07:22 <bswartz> many drivers do in fact create a VM per share server
15:07:43 <bswartz> the point of the share_server construct was to enable that model, but not to force it
15:08:12 <bswartz> regarding why the scheduler needs to know about share servers
15:08:27 <bswartz> typically a share_server is a relatively expensive object to create (inside the driver)
15:09:10 <bswartz> it's usually preferable for the scheduler to send requests to create a 2nd, 3rd, etc share to the same backend where the 1st share was created for any given tenant
15:09:30 <bswartz> at least, we want to allow the administrator to make that decision if he/she chooses
15:09:41 <bswartz> so the scheduler needs to know what's already been created and where
15:10:04 <chen> bswartz, what do you mean by "enable that model"? and "expensive object"
15:10:51 <bswartz> we wanted to create a persistence model in manila so that drivers that did create VMs would have a place to track them
15:11:57 <bswartz> and VMs are typically an order of magnitude more resource intensive than individual shares
15:12:02 <bswartz> make sense?
15:12:08 <chen> bswartz, yes
15:12:49 <bswartz> okay
15:13:08 <chen> bswartz, we want VMs to be tracked => in share server. But share server is not limited to VMs. It can contain other information too ==> up to driver ?
15:13:22 <bswartz> yes
15:13:46 <bswartz> and if someone comes up with a case where the info we store for the share server isn't enough, we can consider changing the model to add more info
15:13:58 <bswartz> the share_server model works well for existing drivers
15:14:08 <bswartz> if a new driver doesn't fit the model, we can talk about how to change it
15:14:20 <bswartz> the goal is to give driver authors flexibility
15:14:28 <lpabon> cool
15:14:31 <bswartz> but I get that too much flexibility creates confusion
15:14:56 <bswartz> so I saw the proposal for the concept of a "dummy share server"
15:15:32 <bswartz> I'm not sure that would offer any advantage over letting manila know explicitly that a driver will never create share servers
15:15:56 <bswartz> and if manila knows that fact, then instantiating a dummy object doesn't serve any purpose that I can think of
15:16:11 <markstur_> so share server mode indicates there is startup cost?
15:16:33 <bswartz> that depends on the backend
15:16:37 <markstur_> isn't it also about using the network setting passed in?
15:16:57 <bswartz> I mean, we know that the generic driver has to create a nova VM for each tenant/share network and creating nova VMs is not instantaneous
15:17:36 <bswartz> so after the first share is created, it's strongly preferable to create additional shares on the same VM if possible
15:17:49 <markstur_> scheduler will assume startup-cost and do share-server-affinity for additional shares -- but that could be tuned later
15:18:21 <bswartz> yeah basically
15:18:36 <chen> bswartz, "preferable to create additional shares on the same VM if possible" ==> is there a chance the admin would like to create more share servers for higher throughput ?
15:18:43 <bswartz> we will make the information available, and the administrators should be able to tune the scheduler how they choose
15:19:24 <bswartz> chen: the driver is ALWAYS able to create new storage_servers if it's in the multi_svm mode
15:19:25 <vponomaryov> chen: right now if you want a separate share server, just use another share-network
15:19:33 <bswartz> that's a driver decision
15:19:42 <bswartz> s/storage_server/share_server/
15:20:19 <ganso_> if the admin creates several share networks related to the same private network (I do not know why he would do that), then a new share server would be created while it is not necessary
15:20:21 <bswartz> the point of making the scheduler aware is to steer requests to the same backend
15:20:53 <bswartz> a multi_svm backend typically manages a large pool of resources
15:21:04 <bswartz> ganso_: correct
15:21:15 <bswartz> although share_networks are created by tenants, not admins
15:21:46 <ganso_> yes, sorry, replace that word by tenant in that case
15:22:00 <ganso_> still, isn't this a problem in the scenario we are discussing?
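
(For illustration of the share-server affinity bswartz outlines above -- steering additional shares to the backend that already holds the tenant's share server, because creating a new one, e.g. a Nova VM in the generic driver, is the expensive step -- here is a minimal hypothetical sketch. It is not actual Manila scheduler code; the function name and record shapes are made up for the example.)

    # Hypothetical weigher: prefer backends that already hold a share
    # server for the request's share network.
    def weigh_backend(backend, share_network_id, existing_share_servers):
        has_server = any(
            s['host'] == backend and s['share_network_id'] == share_network_id
            for s in existing_share_servers)
        return 100 if has_server else 0

    # Example: the second share for share network 'net-a' is steered to
    # backend-1, which already created a share server for it.
    servers = [{'host': 'backend-1', 'share_network_id': 'net-a'}]
    ranked = sorted(['backend-1', 'backend-2'],
                    key=lambda b: weigh_backend(b, 'net-a', servers),
                    reverse=True)
    # ranked[0] == 'backend-1'
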
15:22:17 <ganso_> it is resources being wasted
15:22:39 <bswartz> ganso_: that's why we have a quota on share_networks
15:23:12 <bswartz> the admin may choose to bill based on share_networks or limit them to 1 or 2 per tenant
15:23:50 <ganso_> bswartz: but then that quota may conflict with the scenario where he may have several private networks and would not be creating several share_networks per private network. Shouldn't the option to do this be blocked or something?
15:24:17 <bswartz> ganso_ that would be between the tenant and the admin to work out
15:24:26 <bswartz> we're just providing the tools
15:25:23 <bswartz> If you can think of a way to exploit the current design with a DOS attack, please file a bug and we'll fix it
15:25:25 <ganso_> back when I was coding that part in my driver, I was torn between just throwing an error because it would simplify things, or supporting that scenario, which has no use cases that I know of
15:25:53 <bswartz> if you can think of a more user friendly way to prevent bad users from doing things, let's talk about it
15:26:10 <ganso_> yea, I need to think about it
15:26:12 <bswartz> ganso_: ah I get it
15:26:26 <ganso_> I do not know the solution to this "problem"
15:27:04 <bswartz> yeah the intention is that if there are multiple share networks, new share_servers will get created if it's a multi_svm backend
15:27:11 <bswartz> which brings me back to the current topic
15:27:18 <bswartz> the driver mode name
15:27:35 <bswartz> #link https://etherpad.openstack.org/p/manila-driver-modes-discussion
15:28:54 <vponomaryov> or not the name, but choosing an approach to distinguish the two driver modes.
15:28:54 <bswartz> I see support for the boolean option
15:29:12 <bswartz> and after writing that etherpad I myself was starting to lean towards the boolean option
15:30:06 <bswartz> has everyone seen the proposal from markstur?
15:30:44 <chen> bswartz, still have a question for markstur_ : If I create a share without "share-network", then I get a share, and the export location might be $glusterfs-ip:/manila-volume0/share-xxxx.
15:30:44 <chen> Is there a chance my instances can't reach $glusterfs-ip at all ?
15:30:44 <chen> Eg, $glusterfs-ip = 192.168.6.98, but the network mask can be 255.255.0.0 or 255.255.255.0.
15:30:45 <chen> I'm an OpenStack user. I'm not the admin. I don't know what $glusterfs-ip stands for.
15:31:23 <vponomaryov> chen: network infrastructure is below Manila
15:31:34 <bswartz> I believe we talked about the possibility for a driver to support "both" modes
15:31:35 <bswartz> and we concluded that it's better to simply run 2 backends if that's the behavior you want
15:32:24 <bswartz> the reason for that is that the "driver mode" is reported back to the scheduler and can be associated with share_types, etc
15:32:28 <chen> bswartz, nope. I just wonder when a user would use the single_svm mode => only when the user knows the network => user == admin ?
15:32:37 <vponomaryov> chen: so, it is up to the network infrastructure of the cloud whether your instances are able to connect to the VMs or not
15:32:43 <bswartz> it's important to keep the 2 driver types separated
15:33:07 <bswartz> chen: single_svm mode is all around simpler, but less secure
15:33:25 <bswartz> it's up to the administrator to decide what he wants to or is able to implement
15:34:06 <bswartz> and not to disparage single_svm mode -- it can still be made to be perfectly secure, it's just harder
15:34:49 <chen> bswartz, but how does the user know what the admin did? especially when manila has multi-backend
15:34:50 <vponomaryov> bswartz: need to wrap up this topic and start the other
15:34:55 <bswartz> so I'd like to call for a vote on these options if all the questions
15:35:05 <ganso_> I'm with manage_share_servers=true, should we just vote and end it?
15:35:40 <markstur_> Is one option to vote to not change yet?
15:36:02 <bswartz> you mean to vote to discuss for another week?
15:36:06 <bswartz> I suppose that's an option
15:36:24 <markstur_> ugh, yes. Don't like the current names, but the new ones seem to need some thinking. ugh again
15:36:34 <bswartz> okay we'll see how this goes
15:36:50 <bswartz> anyone in favor of keeping the current names?
15:37:21 <bswartz> wait I have a better idea than doing this in IRC
15:37:31 <bswartz> please put your votes in the etherpad
15:37:32 <ganso_> online poll?
15:37:34 <ganso_> oh
15:37:39 <bswartz> by adding a +1 (name) to the option you like
15:38:00 <bswartz> https://etherpad.openstack.org/p/manila-driver-modes-discussion
15:38:12 <nileshb> ok
15:38:36 <ganso_> bswartz: are you picking the boolean name?
15:38:46 <xyang1_> bswartz: the boolean option has lots of names
15:38:49 <ganso_> bswartz: oh, I see, it is below
15:39:29 <bswartz> who put the -1 on the boolean option?
15:39:58 <bswartz> vponomaryov: is that your -1?
15:40:06 <xyang1_> I think that's valeriy's -1 on that name, but he still supports boolean
15:40:11 <vponomaryov> bswartz: it is for one of the names
15:40:18 <vponomaryov> there is another +1 for another name
15:40:19 <bswartz> okay
15:40:32 <bswartz> so there is unanimous support for a boolean option
15:40:37 <bswartz> the question is which one
15:40:41 <toabctl> so who flips the coin? :)
15:40:54 <bswartz> did those that put +1 under "driver_handles_share_servers" prefer that option name?
15:41:15 <bswartz> what about multi_share_servers, use_share_servers, or manage_share_servers ?
15:41:26 <ganso_> wait
15:41:27 <ganso_> 5x3
15:41:40 <ganso_> 6x3
15:41:54 <bswartz> I didn't have a preference
15:42:21 <bswartz> but it sounds like y'all do
15:42:26 <markstur_> i'd go either way, but vp's -1 convinced me
15:42:34 <bswartz> #agreed driver_mode=single/multi_svm will be replaced by driver_handles_share_servers=True/False
15:43:06 <bswartz> okay moving on
15:43:21 <bswartz> #topic Use bashate for bash script checks
15:43:23 <u_glide> manage_share_servers will conflict with importing share servers if we implement it
15:43:32 <bswartz> toabctl: you're up
15:43:45 <toabctl> solved now
15:44:04 <markstur_> u_glide, good point
15:44:08 <bswartz> nothing to discuss?
15:44:12 <toabctl> ah. it's basically about adding a check for shell scripts.
15:44:37 <bswartz> I read the commit message and reviews
15:44:43 <toabctl> vponomaryov asked me to add this topic to the agenda.
15:44:52 <bswartz> is this really important?
15:45:00 <bswartz> how many bash scripts do we even have?
15:45:05 <toabctl> from my side there's nothing to discuss. I would just accept the commit. vponomaryov ?
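
(For reference, the rename agreed above would show up in a backend section of manila.conf roughly as follows. This is a sketch only; the section name is made up and the driver path is just an example.)

    [generic_backend_1]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    # old-style option being replaced:
    #   driver_mode = multi_svm
    # agreed replacement:
    driver_handles_share_servers = True
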
15:45:15 <ganso_> I have a question
15:45:17 <ganso_> about my bugs
15:45:38 <vponomaryov> toabctl: it's not clear what possibilities it has
15:45:43 <ganso_> https://bugs.launchpad.net/manila/+bug/1410221 and https://bugs.launchpad.net/manila/+bug/1410246
15:45:46 <bswartz> ganso_: please save them for the end if we have time
15:45:52 <vponomaryov> toabctl: what exactly it can do
15:45:53 <bswartz> ganso_: otherwise #openstack-manila
15:45:53 <ganso_> ok, sorry, I thought we were open
15:46:13 <toabctl> vponomaryov: see the README here: https://github.com/openstack-dev/bashate
15:46:26 <bswartz> toabctl: which bash scripts were you thinking needed these checks?
15:46:29 <lpabon> ganso_: i think we are still on the topic of bashate
15:46:42 <toabctl> bswartz: all scripts we have in our codebase
15:46:43 <ganso_> lpabon: yes, looks like so, sorry
15:46:55 <lpabon> ganso_: no problem :-D
15:47:07 <bswartz> maybe I'm dense, but I didn't think we had that many
15:47:23 <bswartz> I'm looking now
15:47:48 <bswartz> I guess we can save this discussion for the code review
15:48:08 <bswartz> my gut feeling is that it's not a bad thing, but it also doesn't add much since bash is not a big part of manila
15:48:11 <toabctl> bswartz: bashate is also used by devstack. so if we want to get our contrib/devstack stuff merged at some point we need to pass the tests anyway.
15:48:22 <bswartz> so it might be a waste of resources to run it
15:48:41 <bswartz> anyways everyone can weigh in on the code review
15:48:47 <vponomaryov> toabctl: it can be fixed in the commit with devstack
15:48:53 <bswartz> let's move on
15:48:53 <vponomaryov> if something appears
15:49:03 <bswartz> #topic Share manage/unmanage
15:49:08 <toabctl> sure. as I don't have a strong opinion about it. reject or accept it.
15:49:18 <toabctl> +1 for move on
15:49:49 <bswartz> so a few of us have been spending time thinking about how to add a manage/unmanage feature to manila comparable to what cinder has
15:50:22 <bswartz> for the driver_handles_share_servers=False drivers, I think it's pretty straightforward
15:51:03 <bswartz> those drivers will have to add methods to either allow or veto a share manage operation
15:51:23 <bswartz> drivers will be free to veto a share manage operation for any reason
15:52:05 <bswartz> share manage is an admin-only action and the admin does it with knowledge of where the existing data actually is, so if a manage operation fails, the admin can typically make a change and retry until it succeeds
15:52:54 <bswartz> unmanage is the reverse -- taking a share created by manila and removing it from the manila DB but leaving the data intact wherever it exists
15:53:15 <bswartz> the problem is that this gets far more complicated with driver_handles_share_servers=True drivers
15:54:08 <bswartz> because there is a question of how manila can take over ownership of SVMs created outside manila, and what happens to any existing data on those SVMs after they're taken over
15:54:44 <bswartz> right now we're leaning toward a share-server-manage/unmanage feature as well
15:55:45 <markstur_> could share-server-manage be split into a separate later thing? Is it needed for share-manage?
15:56:10 <csaba> bswartz: can you give a pointer to a description of the Cinder feature?
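
(As a rough illustration of the "allow or veto" driver hook bswartz describes above for driver_handles_share_servers=False backends -- a hypothetical sketch only; the method names, signatures, and return format are invented for the example and are not taken from the in-progress review.)

    class ExampleDriver(object):
        # Hypothetical manage hook: the admin points Manila at an existing
        # export, and the driver either accepts it (reporting the data
        # Manila needs to record, e.g. size) or vetoes the request.
        def manage_existing(self, export_location, driver_options):
            if not export_location.startswith('filer1:/exports/'):
                # veto: this backend does not control that export path
                raise ValueError('cannot manage %s' % export_location)
            return {'size': 1}  # size in GB; placeholder value

        # Unmanage is the reverse: Manila forgets the share but the data
        # stays in place, so the driver typically has nothing to clean up.
        def unmanage(self, share):
            pass
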
15:56:17 <bswartz> one thing we're struggling with is whether manila completely owns the share-server after it's managed, or not
15:56:39 <u_glide> markstur_ Yes, we could
15:56:50 <markstur_> I suppose some shares/drivers would have to bring in a share-server with the share(?)
15:56:59 <bswartz> markstur_: it's needed to manage existing shares if you have a driver_handles_share_servers=True backend
15:57:14 <bswartz> we could decide to not support that case at all and only allow managing of shares if driver_handles_share_servers=False
15:57:37 <u_glide> +1
15:57:37 <u_glide> because it will simplify the first implementation of this feature. And we can return to this question later, when more users give feedback about their needs.
15:57:58 <xyang1_> bswartz: manila probably doesn't have control over whether the share-server is also used outside of manila or not
15:58:10 <bswartz> personally I would like to support both cases, but I agree that it's possible to start with something simple and add to it
15:58:30 <vponomaryov> xyang_: having a share-server entity, we assume it is owned by manila
15:58:33 <bswartz> xyang1_: it can't prevent that, but we can decide if we want to support it or not
15:58:51 <bswartz> if we don't support it, then all bets are off if you do something you're not supposed to
15:58:51 <xyang1_> ok
15:59:06 <bswartz> if we do support it, then we have to think about what problems that could cause down the road
15:59:21 <markstur_> when share-manage provides [share-network] then a share server can be created or found
15:59:39 <bswartz> unfortunately we're out of time and we'll have to continue this topic later
15:59:45 <vponomaryov> markstur_: manage is "registering" something already existing
16:00:18 <bswartz> #endmeeting