15:01:09 #startmeeting manila
15:01:10 Meeting started Thu May 12 15:01:09 2016 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:13 The meeting name has been set to 'manila'
15:01:19 hello all
15:01:21 hello
15:01:23 hi
15:01:24 hello
15:01:29 hi
15:01:34 hi
15:01:37 hi
15:01:55 \o
15:02:01 hi
15:02:04 hi
15:02:14 courtesy ping: cknight
15:02:30 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:02:59 #topic Driver private storage
15:02:59 :-) Hi. Thanks, Ben.
15:03:07 hi
15:03:20 xyang1 and xyang3?
15:03:21 https://review.openstack.org/#/c/315346/
15:03:24 which one is real?
15:03:30 both real:)
15:03:38 it is easier to type here on the computer
15:04:07 so I added an admin api to retrieve driver private storage
15:04:27 * bswartz reads the review
15:04:30 Valeriy suggests using cinder-manage instead
15:04:38 manila-manage
15:04:46 sorry:)
15:05:10 I'm asking a question on the cinder channel about why we have both cinder-manage delete volume and cinder force-delete
15:05:14 okay so we're okay with the read-only admin-only API, but don't want it in the client?
15:05:50 or vponomaryov are you against even having a REST API?
15:05:51 seems the one in cinder-manage was left over from nova days and no one works on it anymore
15:06:09 we have other admin-only commands in python-manilaclient
15:06:13 bswartz: I don't see a reason to have an API with all its bells and whistles
15:06:15 I mention this only because I think it is relevant to take that into consideration when making decisions
15:06:27 bswartz: manila-manage should be enough for an admin's debugging
15:06:37 What is the use case for reading driver private storage via the API?
15:07:03 IIRC when we discussed this before it was a debugging use case
15:07:14 could be used for writing better functional tests
15:07:36 but only for driver-specific functional tests
15:07:37 It last came up during the snapshot-manage work to enable Tempest coverage. We found a better approach then.
15:07:56 cknight: so an admin wants to check that info in a resource before doing something drastic like force-delete or unmanage
15:07:59 right -- snapshot manage is not the use case
15:08:11 yes not for manage
15:08:24 (for that, "provider-location" is used for the "snapshot-manage" API)
15:08:29 the only things in the driver private data should be driver-specific -- anything expected to be common for all drivers should go elsewhere
15:08:30 but an admin wants to find out more about a resource before getting rid of it
15:09:00 for example I know NetApp has netapp-specific tempest tests which are not upstream
15:09:13 those tests might benefit from an API like this
15:09:47 but I think the use case we should primarily consider is an admin debug interface
15:10:17 in that case vponomaryov has a good point that it could be done via manila-manage
15:11:27 xyang3: what is your feeling about using manila-manage instead?
15:11:44 personally I'm okay with the proposal for an API
15:12:04 bswartz: so my initial reaction was that manila-manage usually deals with whole-backend things, not resource-specific ones
15:12:17 but valeriy pointed out cinder-manage delete volume
15:12:26 that is why I started asking why cinder has both:)
15:12:38 turns out it is redundant to have both
15:12:47 well does cinder-manage or manila-manage send any RPCs or just hack the DB?
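
(For context on the question above: the manila-manage route being discussed would only read the DB, with no RPCs to any backend. A minimal sketch of such a read-only subcommand follows; the db helper name and its signature are assumptions for illustration, not the actual manila code.)

    from manila import context
    from manila import db


    class ShareCommands(object):
        """Subset of a manila-manage command class (illustrative only)."""

        def driver_private_data(self, share_id):
            # Read-only: dump the driver-private key/value pairs stored for
            # one share straight from the DB, with no RPC to the backend.
            ctxt = context.get_admin_context()
            data = db.driver_private_data_get(ctxt, share_id) or {}
            for key in sorted(data):
                print('%s = %s' % (key, data[key]))
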
15:13:09 just db, I think
15:13:13 because the moment you actually want to DO something on a backend, the share manager should be involved, and you need RPCs
15:13:21 bswartz: it depends, could be both
15:13:27 if we're just reading what's in the DB then it seems pretty simple to use manila-manage
15:14:35 bswartz: cinder-manage volume delete deletes from the DB then sends an RPC cast
15:14:47 yes but it looks like cinder will get rid of that feature
15:14:58 because they prefer cinderclient force delete
15:15:08 API requires identity involvement
15:15:11 it is redundant
15:15:26 if we ever want a GUI version of the feature, API involvement is required
15:15:42 jgriffith said some distros do not include cinder-manage
15:15:54 for now it seems we're okay with CLI-only
15:16:21 xyang3: how do those distros do schema upgrades?
15:16:34 bswartz: excellent question ))
15:16:34 bswartz: I don't know
15:16:55 * bswartz is suspicious of that statement
15:17:10 I'd like to know which distro doesn't include it and what they do instead
15:17:14 does anyone here know how distros package things?
15:17:21 anyways, it seems like we're leaning toward manila-manage here
15:17:42 any objections from anyone?
15:17:45 can we move on?
15:17:45 no problem as long as we are on the same page
15:18:12 okay, back to the backlog of topics from the design summit
15:18:19 #topic Py3 functional tests
15:18:27 vponomaryov: this is one of yours
15:18:35 #link https://etherpad.openstack.org/p/newton-manila-contributor-meetup
15:18:47 #link http://specs.openstack.org/openstack/openstack-specs/specs/enable-python-3-int-func-tests.html
15:18:57 #link https://review.openstack.org/#/c/181165/
15:19:13 for the moment we run tempest tests using py2.7
15:19:31 and it is the right time to consider py3
15:19:38 for running our functional tests
15:19:51 any chance it will "just work" if we turn it on?
15:19:57 or are there known issues at this time?
15:20:05 some fixes to core tempest could be required
15:20:25 does anyone else do this yet?
15:20:33 bswartz: don't have such info yet
15:20:43 the question is mainly: are we interested in it?
15:20:49 being first could be painful, but also an opportunity to lead in the community
15:20:52 in general?
15:20:59 I'm interested in it
15:21:03 Python 3 is mature
15:21:29 It's questionable how much longer Py2.X will be supported
15:21:39 just for information, Ubuntu 16.04 will have py3 as default
15:22:05 I think the pain will be worth it in the end
15:22:21 yes and I'm not aware of any OSes which can't support py3 anymore
15:22:33 especially with Python 3.X becoming default in Fedora and Ubuntu at least
15:22:35 I think Py2 was required for some _very_ old versions of RHEL
15:23:21 bswartz: py27 will EOL in 2020
15:23:36 toabctl: 2020 can't come soon enough :)
15:23:44 toabctl: what's the situation with py3 for SUSE?
15:23:55 so vponomaryov I think we should proceed with py3 functional tests and if we discover issues then we can talk about them
15:24:09 bswartz: agree
15:24:33 #topic UI Customization
15:24:53 tbarron: this has your name on it
15:24:57 vponomaryov: py2 is still the default
15:25:18 hi, sorry, am doing 2 mtgs concurrently
15:25:25 doh
15:25:31 tbarron: should we postpone this topic?
15:25:46 yes pls
15:25:49 #topic Replication with share networks
15:25:50 vponomaryov: but py3 is there. not sure when to switch the default.
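
(An aside on the "will it just work" question above: the breakage that typically surfaces when functional tests first run under py3 is implicit bytes/str mixing rather than anything manila-specific. A tiny, purely illustrative example of the kind of fix that may be needed in test code:)

    import six

    def parse_export_location(raw):
        # Under py2 an HTTP body is usually str; under py3 it arrives as
        # bytes, so decode explicitly before doing string work on it.
        if isinstance(raw, six.binary_type):
            raw = raw.decode('utf-8')
        return raw.strip()

    # Behaves the same on both interpreters:
    assert (parse_export_location(b'10.0.0.2:/shares/share-1\n')
            == '10.0.0.2:/shares/share-1')
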
15:26:00 this one is mine
15:26:00 toabctl: then it is ok, thanks
15:26:28 when we designed share replication -- we intentionally limited the implementation to no-share-server drivers
15:27:18 the reason is that our replication design allows shares to have replicas in different AZs, and because each share has exactly 1 share-network and each share-network has exactly 1 subnet
15:27:37 to have replicas in different AZs you really need more than 1 subnet
15:28:04 my proposal is to modify the share-network APIs to allow specifying a network per AZ
15:28:44 this would allow multi-AZ configurations to work with share servers and replicated shares
15:29:09 AFAIK that's the only thing blocking us from extending replication support to share-server drivers
15:29:59 so we'll need a spec for this, but I wanted to propose the idea in case anyone sees a problem with it
15:30:29 bswartz: the 2 AZs would be in the same share network, thus a share-network will allow multiple subnets, is that correct?
15:30:40 ganso: that's my proposal
15:31:06 It would be gross to start allowing multiple share networks per share, because the share network also contains the security service which should NOT be different per-AZ
15:31:09 don't forget about the standalone network plugin
15:31:11 bswartz: this change may interfere with how existing drivers have implemented DHSS=true
15:31:54 vponomaryov: we will need to test that but I don't foresee any problems
15:32:09 bswartz: a share server will still be on just one subnet, right?
15:32:18 all forms of flat-network plugins can be handled in the conf files
15:32:31 cknight: yes that's my thinking
15:32:46 subnets should not span AZs and neither should share servers
15:33:07 bswartz: OK, so no driver changes. This mirrors how neutron has multiple subnets per network. Seems fine to me.
15:33:46 okay so look forward to a spec about this
15:34:10 #topic Service-check
15:34:21 ganso: this was your topic
15:34:23 bswartz: is the spec repo ready?
15:34:31 bswartz: yes
15:34:43 xyang3: this needs to merge first https://review.openstack.org/#/c/312631/
15:34:50 ok
15:35:08 so the current status is that the API "service-disable" only sets the status of the service to "disabled" in the DB
15:35:15 the scheduler still works if disabled
15:35:24 if shares are already created, calls to backends still work if disabled
15:35:31 bswartz: do you think the driver private storage thing needs a spec or are you fine without it?
15:35:55 I have 3 ideas on addressing this
15:36:25 xyang3: we haven't discussed requiring specs -- the repo exists but I'm not in favor of forcing people to use it if they don't want to
15:36:36 ok
15:36:37 #1: communication between services is through RPCAPI. If the rpcapi.py files check whether the service is running before making a call and return an error, this partially solves the problem
15:36:50 except that the services are still running and using CPU resources (updating stats)
15:37:17 I am not sure if it is standard in openstack to prevent a service from running if it is considered to be disabled
15:37:27 ganso: what is the purpose of disabling a service?
15:37:52 bswartz: for any services besides backend services, I don't know
15:37:53 bswartz: the same as in cinder
15:38:05 bswartz: I have only seen people disabling specific backends
15:38:12 vponomaryov: which is?
15:38:21 bswartz: disable the service from scheduling
15:38:34 oh!
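
(Circling back to the replication proposal above before it goes to a spec: a very rough sketch of the shape being proposed, with hypothetical names, is below. The security service would stay on the share network itself; only the subnet becomes per-AZ.)

    class ShareNetworkSubnet(object):
        """One subnet of a share network, tied to a single AZ."""

        def __init__(self, availability_zone, neutron_net_id,
                     neutron_subnet_id):
            self.availability_zone = availability_zone
            self.neutron_net_id = neutron_net_id
            self.neutron_subnet_id = neutron_subnet_id


    def subnet_for_az(share_network_subnets, az):
        """Pick the subnet for the AZ a share server is being created in."""
        for subnet in share_network_subnets:
            if subnet.availability_zone == az:
                return subnet
        raise LookupError('share network has no subnet in AZ %s' % az)
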
15:38:51 bswartz: yes, but if shares are already created, they are already scheduled
15:38:54 so you can effectively blacklist a whole backend in the scheduler by marking the service disabled?
15:39:06 yes
15:39:17 and if a user wants to allow_access to a share even though the service is disabled, it works
15:39:19 but existing shares on that backend function as normal right?
15:39:26 ganso: manila is the control plane; already-created shares do not require the service to run
15:39:42 yes but I'm asking about snapshots, delete calls, access change calls
15:39:56 if we say that the sole purpose of disabling a backend is to prevent scheduling, then there is no bug, and disabling the scheduler and data service should do nothing at all
15:40:01 those all ignore this "disabled" flag right?
15:40:09 bswartz: yes, correct
15:40:22 ganso: it seems the bug is that the terminology is unclear
15:40:40 bswartz: yes, but depending on the correct definition, there may be a bug
15:40:59 then we just need to decide what we expect from this feature
15:41:03 snapshots and delete don't go thru the scheduler, so preventing scheduling won't include them?
15:41:24 xyang3: yes, disabling the backend will not block any snapshot or delete calls
15:41:32 ganso: ok, thanks
15:41:41 rather than "disabled" it should be called "prevent new shares" or something
15:41:41 bswartz: +1
15:42:25 the api service, as well as the scheduler, should be able to be disabled, not load-balanced as well
15:42:35 ok so how do we properly define the correct terminology? should it be service-disable and completely disable a service, or should it just prevent scheduling to that service?
15:42:38 ganso: does this answer your question?
15:43:03 vponomaryov: in an HA environment it makes sense
15:43:14 vponomaryov: but I believe that currently this is not being applied at all
15:43:19 ganso: I think "service-disable" is confusing because that's not what the feature does, nor do I think we want it to do that
15:43:53 bswartz: how about an HA environment? should just the service be killed by an admin?
15:44:06 in a scenario with a load-balancer, you would need a way to control which api services are enabled, but that config would have to exist in the load-balancer, not in the manila DB
15:44:09 ganso: disable is not terminate
15:44:26 bswartz: I see
15:44:48 cinder uses the same terminology
15:45:04 ganso: because we inherited that and didn't change it ))
15:45:13 vponomaryov: yes
15:45:14 I wonder if we inherited this code from the fork, or if it was added later
15:45:19 but it could be confusing to change
15:45:35 ganso: I agree, but it's also confusing to not change it
15:45:45 we should do the normal deprecation thing and fix it
15:45:46 bswartz: yes
15:46:19 and probably also propose a similar fix for cinder
15:46:50 okay is this topic wrapped up?
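
(To make the agreed semantics concrete: "disabled" only means "do not schedule new shares here"; existing shares, snapshots, deletes and access calls keep working. A minimal illustration of what that boils down to, not the real scheduler code:)

    def schedulable_hosts(hosts, services):
        """Drop hosts whose manila-share service row is marked disabled."""
        enabled = set(
            svc['host'] for svc in services
            if svc['topic'] == 'manila-share' and not svc['disabled'])
        return [host for host in hosts if host in enabled]

    # Example: backend2 was disabled via the (to-be-renamed) service API,
    # so new shares only land on backend1; backend2's existing shares are
    # untouched because delete/snapshot/access calls bypass the scheduler.
    services = [
        {'host': 'backend1', 'topic': 'manila-share', 'disabled': False},
        {'host': 'backend2', 'topic': 'manila-share', 'disabled': True},
    ]
    assert schedulable_hosts(['backend1', 'backend2'], services) == ['backend1']
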
15:46:56 bswartz: yes
15:47:02 #topic Container driver
15:47:19 #link https://review.openstack.org/#/c/308930/
15:47:29 I just wanted people to know this is coming along
15:47:49 LXD was backed out before the release of Mitaka, for reasons described on the ML
15:48:24 this new driver replaces that one and our hope is to use it as the reference implementation of a share-server driver for development and testing
15:48:29 master
15:49:09 it needs code reviews
15:49:18 not many people have looked at it or played with it yet
15:49:30 right now it has problems at the networking layer
15:49:51 and also aovchinnikov when I try to run it my docker container just hangs
15:50:41 back
15:50:44 anyways I want to raise awareness about this so we can avoid the late-in-the-release problems we had with LXD and distros
15:50:44 hm, that's strange
15:51:00 it would be helpful to have a devref for this, no?
15:51:08 +1
15:51:14 also lvm
15:51:44 yeah aovchinnikov it would be good to add a page to the devref explaining the theory of operation for this driver
15:51:44 busy people who want to play with it without reverse-engineering the jobs in the gate
15:51:59 tbarron: on the plus side the driver is not that huge
15:52:02 reducing the startup cost would help
15:52:18 bswartz: +1
15:52:26 tbarron: true devs always do reengineering =)
15:52:26 the whole driver including CI hook changes, devstack changes, code and unit tests weighs in at only 1500 lines
15:52:27 ok, sounds reasonable. I will provide one for the docker driver
15:53:15 of course after all the bugs are fixed it might get bigger
15:53:37 I should mention that the current version is CIFS-only and NFS will be added later
15:53:39 vponomaryov: i agree that reverse-engineering may be the best way to learn, but there are many things to do and only so much time
15:53:55 there were some issues with ganesha-nfs, docker, and ubuntu
15:54:05 bswartz: any particular reason to implement CIFS before NFS?
15:54:08 bswartz: it still has the problem with connectivity
15:54:24 ganso: samba is readily available on Ubuntu
15:54:43 aovchinnikov: I've started investigating the network issue but I'm blocked by the docker container hanging
15:54:46 ganso: user-space nfs is required for it
15:55:02 ganso: and ganesha-nfs is not easy to make work in containers
15:55:23 nothing worth doing is easy
15:55:28 we'll figure it out
15:55:33 i'd say setting up a container with ganesha-nfs is more time-consuming
15:55:44 okay I'd like to switch topics since tbarron is back
15:55:52 #topic UI Customization
15:56:01 tbarron: you're up again
15:56:14 only 4 minutes though
15:56:20 we may need to revisit next week when i can have sgotliv with me, it's a holiday in IL
15:56:26 gah!
15:56:32 #undo
15:56:32 Removing item from minutes:
15:56:32 we have a customer who doesn't run cifs
15:56:41 #topic Open Discussion
15:56:41 they run a public cloud
15:57:00 would like to hide protocols not used from their customers
15:57:06 that's #1
15:57:24 it is a question of implementing an additional API on the server side
15:57:26 sorry tbarron I thought you were saying you wanted to wait until next week
15:57:34 keep going
15:57:53 well, we'll need to talk about it then too
15:58:00 so if someone has something else for now
15:58:11 tbarron: do you know if cinder-manage and manila-manage are packaged in the Red Hat distro?
15:58:17 does it make sense to turn protocols on/off per-share-type?
15:58:35 or only turn protocols on/off across the whole instance of manila?
15:58:42 xyang3: i'll check and get back to you
15:58:53 tbarron: ok, thanks
15:58:53 xyang3: but i think yes
15:58:54 bswartz: we already have an appropriate extra spec
15:59:08 vponomaryov: is it tenant-visible?
15:59:20 no
15:59:40 "storage_protocol" - reported by each driver
15:59:40 I think we need something tenant-visible so horizon can remove invalid options from the protocol dropdown
15:59:57 okay we'll indeed need to revisit this topic
16:00:05 tbarron: see if you can get sgotliv to join us next week
16:00:06 yeah, thanks
16:00:11 i will
16:00:14 thanks all
16:00:17 thanks
16:00:21 thanks
16:00:23 #endmeeting
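
(A post-meeting note on the extra spec vponomaryov mentions: storage_protocol is a capability each driver reports, and a share type can already pin it, though only admins see it today. The gist, in a purely illustrative filter that is not the real manila CapabilitiesFilter:)

    def backend_matches(share_type_extra_specs, backend_capabilities):
        """Match a share type's storage_protocol extra spec against what a
        backend reports (illustrative only)."""
        wanted = share_type_extra_specs.get('storage_protocol')
        if not wanted:
            return True  # no protocol constraint on this share type
        reported = backend_capabilities.get('storage_protocol', '')
        # A driver may report several protocols, e.g. "NFS_CIFS".
        return wanted.upper() in reported.upper()

    # An "nfs-only" share type keeps tenants off a CIFS-only backend, but
    # tenants still see both protocols in the UI -- hence the request for
    # something tenant-visible that horizon can use to trim the dropdown.
    assert not backend_matches({'storage_protocol': 'NFS'},
                               {'storage_protocol': 'CIFS'})
    assert backend_matches({'storage_protocol': 'NFS'},
                           {'storage_protocol': 'NFS_CIFS'})
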