15:01:09 <bswartz> #startmeeting manila
15:01:10 <openstack> Meeting started Thu May 12 15:01:09 2016 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:13 <openstack> The meeting name has been set to 'manila'
15:01:19 <bswartz> hello all
15:01:21 <ganso> hello
15:01:23 <aovchinnikov> hi
15:01:24 <vponomaryov> hello
15:01:29 <tbarron> hi
15:01:34 <dgonzalez> hi
15:01:37 <markstur_> hi
15:01:55 <dustins> \o
15:02:01 <xyang1> hi
15:02:04 <jseiler__> hi
15:02:14 <bswartz> courtesy ping: cknight
15:02:30 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:02:59 <bswartz> #topic Driver private storage
15:02:59 <cknight> :-) Hi.  Thanks, Ben.
15:03:07 <xyang3> hi
15:03:20 <bswartz> xyang1 and xyang3?
15:03:21 <xyang3> https://review.openstack.org/#/c/315346/
15:03:24 <bswartz> which one is real?
15:03:30 <xyang3> both real:)
15:03:38 <xyang3> it is easier to type here on the computer
15:04:07 <xyang3> so I added an admin api to retrieve driver private storage
15:04:27 * bswartz reads the review
15:04:30 <xyang3> Valeriy suggests using cinder-manage instead
15:04:38 <vponomaryov> manila-manage
15:04:46 <xyang3> sorry:)
15:05:10 <xyang3> I'm asking on the cinder channel why we have both cinder-manage delete volume and cinder force-delete
15:05:14 <bswartz> okay so we're okay with the read-only admin-only API, but don't want it in the client?
15:05:50 <bswartz> or vponomaryov are you against even having a REST API?
15:05:51 <xyang3> seems the one in cinder-manage was left over from nova days and no one works on it any more
15:06:09 <bswartz> we have other admin-only commands in python-manilaclient
15:06:13 <vponomaryov> bswartz: I don't see a reason to have an API with all its bells and whistles
15:06:15 <xyang3> I mention this only because I think it is relevant to take that into consideration when making decisions
15:06:27 <vponomaryov> bswartz: manila-manage should be enough for an admin's debugging
15:06:37 <cknight> What is the use case for reading driver private storage via the API?
15:07:03 <bswartz> IIRC when we discussed this before it was a debugging use case
15:07:14 <bswartz> could be used for writing of better functional tests
15:07:36 <bswartz> but only for driver-specific functional tests
15:07:37 <cknight> It last came up during the snapshot-manage work to enable Tempest coverage.  We found a better approach then.
15:07:56 <xyang3> cknight: so an admin wants to check that info in a resource before doing something drastic like force-delete or unmanage
15:07:59 <bswartz> right -- snapshot manage is not the use case
15:08:11 <xyang3> yes not for manage
15:08:24 <vponomaryov> (for that, "provider-location" is what the "snapshot-manage" API uses)
15:08:29 <bswartz> the only things in the driver private data should be driver specific -- anything expected to be common for all drivers should go elsewhere
15:08:30 <xyang3> but an admin wants to find out more about a resource before getting rid of it
15:09:00 <bswartz> for example I know NetApp has netapp-specific tempest tests which are not upstream
15:09:13 <bswartz> those tests might benefit from an API like this
15:09:47 <bswartz> but I think the use case we should primarily consider is an admin debug interface
15:10:17 <bswartz> in that case vponomaryov has a good point that it could be done via manila-manage
15:11:27 <bswartz> xyang3: what is your feeling about using manila-manage instead?
15:11:44 <bswartz> personally I'm okay with the proposal for an API
15:12:04 <xyang3> bswartz: so my initial reaction was that manila-manage usually deals with whole-backend things, not resource-specific ones
15:12:17 <xyang3> but valeriy pointed out cinder-manage delete volume
15:12:26 <xyang3> that is why I started asking why cinder has both:)
15:12:38 <xyang3> turns out it is redundant to have both
15:12:47 <bswartz> well does cinder-manage or manila-manage send any RPCs or just hack the DB?
15:13:09 <xyang3> just db, I think
15:13:13 <bswartz> because the moment you actually want to DO something on a backend, the share manager should be involved, and you need RPCs
15:13:21 <vponomaryov> bswartz: it depends, could be both
15:13:27 <bswartz> if we're just reading what's in the DB then it seems pretty simple to use manila-manage
15:14:35 <vponomaryov> bswartz: cinder-manage volume delete deletes from DB then sends RPC cast
15:14:47 <bswartz> yes but it looks like cinder will get rid of that feature
15:14:58 <bswartz> because they prefer cinderclient force delete
15:15:08 <vponomaryov> API requires identity involvement
15:15:11 <vponomaryov> it is redundant
15:15:26 <bswartz> if we ever want a GUI version of the feature, API involvement is required
15:15:42 <xyang3> jgriffith said some distro does not include cinder-manage
15:15:54 <bswartz> for now it seems we're okay with CLI-only
15:16:21 <bswartz> xyang3: how do those distros do schema upgrades?
15:16:34 <vponomaryov> bswartz: excellent question ))
15:16:34 <xyang3> bswartz: I don't know
15:16:55 * bswartz is suspicious of that statement
15:17:10 <bswartz> I'd like to know which distro doesn't include it and what they do instead
15:17:14 <xyang3> does anyone here know how distros package things?
15:17:21 <bswartz> anyways, it seems like we're leaning toward manila-manage here
15:17:42 <vponomaryov> any objections from anyone?
15:17:45 <bswartz> can we move on?
15:17:45 <xyang3> no problem as long as we are on the same page
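(A minimal sketch of what a read-only manila-manage subcommand for this could look like, following the command-class layout used by manila/cmd/manage.py. The db helper name and its signature are assumptions, not the agreed implementation:)

    # Hypothetical sketch only -- assumes a db helper along the lines of
    # manila.db.api.driver_private_data_get; signature not verified.
    from manila import context
    from manila import db


    class DriverPrivateDataCommands(object):
        """Read-only admin access to driver private data, for debugging."""

        def get(self, entity_id):
            """Print the driver private data stored for the given resource."""
            ctxt = context.get_admin_context()
            data = db.driver_private_data_get(ctxt, entity_id)  # assumed helper
            for key, value in sorted(data.items()):
                print("%s = %s" % (key, value))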
15:18:12 <bswartz> okay back to backlog of topics from design summit
15:18:19 <bswartz> #topic Py3 functional tests
15:18:27 <bswartz> vponomaryov: this is one of yours
15:18:35 <bswartz> #link https://etherpad.openstack.org/p/newton-manila-contributor-meetup
15:18:47 <bswartz> #link http://specs.openstack.org/openstack/openstack-specs/specs/enable-python-3-int-func-tests.html
15:18:57 <bswartz> #link https://review.openstack.org/#/c/181165/
15:19:13 <vponomaryov> for the moment we run tempest tests using py2.7
15:19:31 <vponomaryov> and it is the right time to consider py3
15:19:38 <vponomaryov> for running our functional tests
15:19:51 <bswartz> any chance it will "just work" if we turn it on?
15:19:57 <bswartz> or are there known issues at this time?
15:20:05 <vponomaryov> some fixes to core tempest could be required
15:20:25 <bswartz> does anyone else do this yet?
15:20:33 <vponomaryov> bswartz: don't have such info yet
15:20:43 <vponomaryov> question is mainly about are we interested in it?
15:20:49 <bswartz> being first could be painful, but also an opportunity to lead in the community
15:20:52 <vponomaryov> in general?
15:20:59 <bswartz> I'm interested in it
15:21:03 <bswartz> Python 3 is mature
15:21:29 <bswartz> It's questionable how much longer Py2.X will be supported
15:21:39 <vponomaryov> just for information, Ubuntu 16.04 will have py3 as default
15:22:05 <dustins> I think the pain will be worth it in the end
15:22:21 <bswartz> yes and I'm not aware of any OSes which can't support py3 anymore
15:22:33 <dustins> especially with Python 3.X becoming default in Fedora and Ubuntu at least
15:22:35 <bswartz> I think Py2 was required for some _very_ old versions of RHEL
15:23:21 <toabctl> bswartz: py27 will EOL in 2020
15:23:36 <dustins> toabctl: 2020 can't come soon enough :)
15:23:44 <vponomaryov> toabctl: what is the situation with py3 for SUSE?
15:23:55 <bswartz> so vponomaryov I think we should proceed with py3 functional tests and if we discover issues then we can talk about them
15:24:09 <vponomaryov> bswartz: agree
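(For flavor, an illustrative -- not manila-specific -- example of the kind of issue the py3 switch tends to surface in functional tests: command output arrives as bytes under Python 3 and has to be decoded before string operations. The function here is made up for the example:)

    # Illustrative py2/py3 compatibility pattern, not taken from the manila tests.
    import subprocess


    def mount_targets():
        """Return the mount target paths reported by mount(8)."""
        output = subprocess.check_output(['mount'])
        # Python 3 returns bytes from check_output(); Python 2 returns str.
        if isinstance(output, bytes):
            output = output.decode('utf-8')
        return [line.split()[2] for line in output.splitlines() if line.strip()]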
15:24:33 <bswartz> #topic UI Customization
15:24:53 <bswartz> tbarron: this has your name on it
15:24:57 <toabctl> vponomaryov: py2 still the default
15:25:18 <tbarron> hi, sorry, am doing 2 mtgs concurrently
15:25:25 <bswartz> doh
15:25:31 <bswartz> tbarron: should we postpone this topic?
15:25:46 <tbarron> yes pls
15:25:49 <bswartz> #topic Replication with share networks
15:25:50 <toabctl> vponomaryov: but py3 is there. not sure when to switch the default.
15:26:00 <bswartz> this one is mine
15:26:00 <vponomaryov> toabctl: then it is ok, thanks
15:26:28 <bswartz> when we designed share replication -- we intentionally limited the implementation to no-share-server drivers
15:27:18 <bswartz> the reason is that our replication design allows shares to have replicas in different AZs, but each share has exactly 1 share-network and each share-network has exactly 1 subnet
15:27:37 <bswartz> to have replicas in different AZs you really need more than 1 subnet
15:28:04 <bswartz> my proposal is to modify the share-network APIs to allow specifying a network per AZ
15:28:44 <bswartz> this would allow multi-AZ configurations to work with share servers and replicated shares
15:29:09 <bswartz> AFAIK that's the only thing blocking us from extending replication support to share-server drivers
15:29:59 <bswartz> so we'll need a spec for this, but I wanted to propose the idea in case anyone sees a problem with it
15:30:29 <ganso> bswartz: the 2 AZs would be in the same share network, thus a share-network will allow multiple subnets, is that correct?
15:30:40 <bswartz> ganso: that's my proposal
15:31:06 <bswartz> It would be gross to start allowing multiple share networks per share, because the share network also contains the security service which should NOT be different per-AZ
15:31:09 <vponomaryov> don't forget about standalone network plugin
15:31:11 <ganso> bswartz: this change may interfere with how existing drivers have implemented DHSS=true
15:31:54 <bswartz> vponomaryov: we will need to test that but I don't foresee any problems
15:32:09 <cknight> bswartz: a share server will still be on just one subnet, right?
15:32:18 <bswartz> all forms of flat-network plugins can be handled in the conf files
15:32:31 <bswartz> cknight: yes that's my thinking
15:32:46 <bswartz> subnets should not span AZs and neither should share servers
15:33:07 <cknight> bswartz: OK, so no driver changes.  This mirrors how neutron has multiple subnets per network.  Seems fine to me.
15:33:46 <bswartz> okay so look forward to a spec about this
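(To make the proposal concrete ahead of the spec, a purely hypothetical sketch of a share network carrying one subnet per AZ and the lookup a share-server code path would need. All field and function names here are illustrative only:)

    # Hypothetical data shape: one share network, one security service,
    # multiple subnets, each tagged with an availability zone.
    share_network = {
        'id': 'sn-1',
        'security_service_id': 'ss-1',   # still exactly one per share network
        'subnets': [
            {'availability_zone': 'az1', 'neutron_net_id': 'net-a',
             'neutron_subnet_id': 'subnet-a'},
            {'availability_zone': 'az2', 'neutron_net_id': 'net-b',
             'neutron_subnet_id': 'subnet-b'},
        ],
    }


    def subnet_for_az(share_network, az):
        """Pick the subnet matching the AZ a replica is scheduled to."""
        for subnet in share_network['subnets']:
            if subnet['availability_zone'] == az:
                return subnet
        raise LookupError('share network has no subnet for AZ %s' % az)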
15:34:10 <bswartz> #topic Service-check
15:34:21 <bswartz> ganso: this was your topic
15:34:23 <xyang3> bswartz: is the spec repo ready?
15:34:31 <ganso> bswartz: yes
15:34:43 <bswartz> xyang3: this needs to merge first https://review.openstack.org/#/c/312631/
15:34:50 <xyang3> ok
15:35:08 <ganso> so the current status is that the API "service-disable" only sets the service's status to "disabled" in the DB
15:35:15 <ganso> the scheduler still works if disabled
15:35:24 <ganso> if shares are already created, calls to backends still work if disabled
15:35:31 <xyang3> bswartz: do you think the driver private storage thing needs a spec or you are fine without it?
15:35:55 <ganso> I have 3 ideas on addressing this
15:36:25 <bswartz> xyang3: we haven't discussed requiring specs -- the repo exists but I'm not in favor of forcing people to use it if they don't want to
15:36:36 <xyang3> ok
15:36:37 <ganso> #1: communication between services is through RPCAPI. If the rpcapi.py files check whether the target service is disabled before making a call and return an error when it is, this partially solves the problem
15:36:50 <ganso> except that the services are still running and using CPU resources (updating stats)
15:37:17 <ganso> I am not sure if it is standard in openstack to prevent a service from running if it is considered to be disabled
15:37:27 <bswartz> ganso: what is the purpose of disabling a service?
15:37:52 <ganso> bswartz: for any services besides backend services, I don't know
15:37:53 <vponomaryov> bswartz: the same as in cinder
15:38:05 <ganso> bswartz: I have only seen people disabling specific backends
15:38:12 <bswartz> vponomaryov: which is?
15:38:21 <vponomaryov> bswartz: disable service from scheduling
15:38:34 <bswartz> oh!
15:38:51 <ganso> bswartz: yes, but if shares are already created, they are already scheduled
15:38:54 <bswartz> so you can effectively blacklist a whole backend in the scheduler by marking the service disabled?
15:39:06 <vponomaryov> yes
15:39:17 <ganso> and if a user wants to allow_access to a share even though the service is disabled, it works
15:39:19 <bswartz> but existing shares on that backend function as normal right?
15:39:26 <vponomaryov> ganso: manila is the control plane; already-created shares do not require the service to run
15:39:42 <bswartz> yes but I'm asking about snapshots, delete calls, access change calls
15:39:56 <ganso> if we say that the sole purpose of disabling a backend is to prevent scheduling, then there is no bug, and disabling scheduler and data service should do nothing at all
15:40:01 <bswartz> those all ignore this "disabled" flag right?
15:40:09 <ganso> bswartz: yes, correct
15:40:22 <bswartz> ganso: it seems the bug is that the terminology is unclear
15:40:40 <ganso> bswartz: yes, but depending on the correct definition, there may be a bug
15:40:59 <vponomaryov> then, we just should decide what do we expect from this feature
15:41:03 <xyang3> snapshots and delete don't go through the scheduler, so preventing scheduling won't affect them?
15:41:24 <ganso> xyang3: yes, disabling the backend will not block any snapshot or delete calls
15:41:32 <xyang3> ganso: ok, thanks
15:41:33 <bswartz> rather than "disabled" it should be called "prevent new shares" or something
15:41:41 <xyang3> bswartz: +1
15:42:25 <vponomaryov> the api service, as well as the scheduler, should be able to be disabled too, not just taken out of a load balancer
15:42:35 <ganso> ok so how do we properly define the correct terminology? should service-disable completely disable a service, or should it just prevent scheduling to that service?
15:42:38 <bswartz> ganso: does this answer your question?
15:43:03 <ganso> vponomaryov: in a HA environment it makes sense
15:43:14 <ganso> vponomaryov: but I believe that currently this is not being applied at all
15:43:19 <bswartz> ganso: I think "service-disable" is confusing because that's not what the feature does, nor do I think we want it to do that
15:43:53 <ganso> bswartz: how about an HA environment? should just the service be killed by an admin?
15:44:06 <bswartz> in a scenario with a load-balancer, you would need a way to control which api services are enabled, but that config would have to exist in the load-balancer, not in the manila DB
15:44:09 <vponomaryov> ganso: disable is not terminate
15:44:26 <ganso> bswartz: I see
15:44:48 <ganso> cinder uses the same terminology
15:45:04 <vponomaryov> ganso: because we didn't change what we inherited ))
15:45:13 <ganso> vponomaryov: yes
15:45:14 <bswartz> I wonder if we inherited this code from the fork, or if it was added later
15:45:19 <ganso> but it could be confusing to change
15:45:35 <bswartz> ganso: I agree, but it's also confusing to not change it
15:45:45 <bswartz> we should do the normal deprecation thing and fix it
15:45:46 <ganso> bswartz: yes
15:46:19 <bswartz> and probably also propose a similar fix for cinder
15:46:50 <bswartz> okay is this topic wrapped up?
15:46:56 <ganso> bswartz: yes
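(A rough sketch of ganso's option #1 -- a disabled check in the rpcapi layer before casting. The db helper and exception names are assumptions borrowed from the cinder-inherited code, not verified against manila:)

    # Hypothetical sketch; helper and exception names are assumed.
    from manila import db
    from manila import exception


    def _ensure_service_enabled(context, host, binary='manila-share'):
        """Refuse to make RPC calls to a service marked disabled in the DB."""
        service = db.service_get_by_args(context, host, binary)  # assumed helper
        if service['disabled']:
            raise exception.ServiceIsDown(service=host)  # assumed exception

    # Usage inside an rpcapi method, before the existing cast()/call():
    #     _ensure_service_enabled(context, share['host'])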
15:47:02 <bswartz> #topic Container driver
15:47:19 <bswartz> #link https://review.openstack.org/#/c/308930/
15:47:29 <bswartz> I just wanted people to know this is coming along
15:47:49 <bswartz> LXD was backed out before the release of Mitaka, for reasons described on the ML
15:48:24 <bswartz> this new driver replaces that one and our hope is to use it as the reference implementation of a share-server driver for development and testing
15:48:29 <aovchinnikov> master
15:49:09 <bswartz> it needs code reviews
15:49:18 <bswartz> not many people have looked at it or played with it yet
15:49:30 <bswartz> right now it has problems at the networking layer
15:49:51 <bswartz> and also aovchinnikov when I try to run it my docker container just hangs
15:50:41 <tbarron> back
15:50:44 <bswartz> anyways I want to raise awareness about this so we can avoid the late-in-the-release problems we had with LXD and distros
15:50:44 <aovchinnikov> hm, that's strange
15:51:00 <tbarron> it would be helpful to have devref for this, no?
15:51:08 <bswartz> +1
15:51:14 <tbarron> also lvm
15:51:44 <bswartz> yeah aovchinnikov it would be good to add a page to the devref explaining the theory of operation for this driver
15:51:44 <tbarron> busy people who want to play without reverse-engineering the jobs in gate
15:51:59 <bswartz> tbarron: on the plus side the driver is not that huge
15:52:02 <tbarron> reduce startup cost to help
15:52:18 <tbarron> bswartz: +1
15:52:26 <vponomaryov> tbarron: true devs always do reengineering =)
15:52:26 <bswartz> the whole driver including CI hook changes, devstack changes, code and unit tests weighs in at only 1500 lines
15:52:27 <aovchinnikov> ok, sounds reasonable. I will provide one for docker driver
15:53:15 <bswartz> of course after all the bugs are fixed it might get bigger
15:53:37 <bswartz> I should mention that the current version is CIFS only and NFS will be added later
15:53:39 <tbarron> vponomaryov: i agree that reverse-engineering may be the best way to learn, but there are many things to do and only so much time
15:53:55 <bswartz> there were some issues with ganesha-nfs, docker, and ubuntu
15:54:05 <ganso> bswartz: any particular reason to implement CIFS before NFS?
15:54:08 <aovchinnikov> bswartz: it still has the problem with connectivity
15:54:24 <aovchinnikov> ganso: samba is readily available on Ubuntu
15:54:43 <bswartz> aovchinnikov: I've started investigating the network issue but I'm blocked by the docker container hanging
15:54:46 <vponomaryov> ganso: user-space nfs is required for it
15:55:02 <vponomaryov> ganso: and it is not easy to make ganesha-nfs work in containers
15:55:23 <bswartz> nothing worth doing is easy
15:55:28 <bswartz> we'll figure it out
15:55:33 <aovchinnikov> i'd say setting up a container with ganesha-nfs is more time consuming
15:55:44 <bswartz> okay I'd like to switch topics since tbarron is back
15:55:52 <bswartz> #topic UI Customization
15:56:01 <bswartz> tbarron: you're up again
15:56:14 <bswartz> only 4 minutes though
15:56:20 <tbarron> we may need to revisit next week when i can have sgotliv with me, it's a holiday in IL
15:56:26 <bswartz> gah!
15:56:32 <bswartz> #undo
15:56:32 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0xa5b7050>
15:56:32 <tbarron> we have a customer who doesn't run cifs
15:56:41 <bswartz> #topic Open Discussion
15:56:41 <tbarron> they run a public cloud
15:57:00 <tbarron> would like to hide unused protocols from their customers
15:57:06 <tbarron> that's #1
15:57:24 <vponomaryov> it is a question of implementing an additional API on the server side
15:57:26 <bswartz> sorry tbarron I thought you were saying you wanted to wait to next week
15:57:34 <bswartz> keep going
15:57:53 <tbarron> well, we'll need to talk about it then too
15:58:00 <tbarron> so if someone has something else for now
15:58:11 <xyang3> tbarron: do you know if cinder-manage and manila-manage are packaged in the Red Hat distro?
15:58:17 <bswartz> does it make sense to turn protocols on/off per-share-type?
15:58:35 <bswartz> or only turn protocols on/off across the whole instance of manila?
15:58:42 <tbarron> xyang3: i'll check and get back to you
15:58:53 <xyang3> tbarron: ok, thanks
15:58:53 <tbarron> xyang3: but i think yes
15:58:54 <vponomaryov> bswartz: we already have an appropriate extra spec
15:59:08 <bswartz> vponomaryov: is it tenant-visible?
15:59:20 <vponomaryov> no
15:59:40 <vponomaryov> "storage_protocol" - reported by each driver
15:59:40 <bswartz> I think we need something tenant-visible so horizon can remove invalid options from the protocol dropdown
15:59:57 <bswartz> okay we'll indeed need to revisit this topic
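(For context, a simplified sketch of how a storage_protocol extra spec on a share type is matched against the capability each driver reports during scheduling; the real CapabilitiesFilter is more general, this only illustrates the idea:)

    # Simplified illustration of capability matching on storage_protocol;
    # not the actual scheduler filter code.
    def backend_matches(extra_specs, backend_capabilities):
        wanted = extra_specs.get('storage_protocol')
        if wanted is None:
            return True  # the share type does not constrain the protocol
        reported = backend_capabilities.get('storage_protocol', '')
        # Drivers may report a combined value such as "NFS_CIFS"; a plain
        # substring check is enough for this illustration.
        return wanted in reported

    # backend_matches({'storage_protocol': 'NFS'},
    #                 {'storage_protocol': 'NFS_CIFS'})  -> True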
16:00:05 <bswartz> tbarron: see if you can get sgotliv to join us next week
16:00:06 <tbarron> yeah, thanks
16:00:11 <tbarron> i will
16:00:14 <bswartz> thanks all
16:00:17 <vponomaryov> thanks
16:00:21 <tbarron> thanks
16:00:23 <bswartz> #endmeeting