15:00:12 <bswartz> #startmeeting manila
15:00:13 <openstack> Meeting started Thu Nov  5 15:00:12 2015 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:16 <openstack> The meeting name has been set to 'manila'
15:00:39 <bswartz> hello all
15:00:41 <ganso> hello
15:00:41 <xyang1> hi
15:00:43 <dustins> \o
15:00:44 <rraja> hi!
15:00:44 <jasonsb> hi
15:00:45 <markstur_> hello
15:00:48 <dustins> welcome back!
15:00:58 <vponomaryov> hi
15:01:07 <cknight> Hi
15:01:41 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:55 <bswartz> welcome back from tokyo
15:02:05 <bswartz> (for those of you who went)
15:02:15 <bswartz> It was a great summit
15:02:31 <bswartz> A lot of what we discussed was captured on the etherpads for the sessions
15:03:00 <bswartz> I will probably send out my own email summary
15:03:15 <dustins> bswartz: Could you link the etherpads? Or just send the link in the email?
15:03:23 <bswartz> but we decided that a number of things needed followups
15:03:28 <gouthamr> https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Manila
15:03:33 <bswartz> dustins: https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Manila
15:03:39 <dustins> gouthamr, bswartz: Thanks!
15:03:43 <bswartz> oh thanks gouthamr
15:04:01 <gouthamr> you're welcome bswartz dustins
15:04:15 <bswartz> okay so one topic from before the summit
15:04:23 <bswartz> #topic Decide on read-only access rules as a required feature
15:04:50 <bswartz> we were discussing this one before the summit, and it also came up during the session about export location metadata
15:05:13 <bswartz> as far as I know, there is only one driver that has problems with read-only access, and that is the generic driver
15:05:16 <mkoderer> hello all
15:05:43 <bswartz> the problem is limited to CIFS and there is a proposed workaround involving an additional export location
15:06:13 <bswartz> so I propose that we make read-only support mandatory for all drivers and document that
15:06:23 <ganso> bswartz: +1
15:06:30 <markstur_> I had a problem with it on 3PAR NFS but verified a work-around exists.
15:06:40 * bswartz tries to remember how to do an IRC vote
15:06:44 <markstur_> Also on CIFS I would need to stop using IP in addition to User
15:07:02 <bswartz> markstur_: IP-based access control is not required for CIFS
15:07:16 <markstur_> so I'd remove an existing feature
15:07:41 <ganso> bswartz: but is the workaround related to export location metadata?
15:08:10 <bswartz> ganso: yes
15:08:33 <bswartz> well export location metadata would help you figure out which export location to attach to depending on whether you wanted read or write access
15:08:46 <vponomaryov> ganso: the problem with the generic driver is that it cannot have both RW and RO rules for one export
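    [A minimal sketch of what mandatory read-only support looks like from the user side; the share name "demo-share" and the networks are hypothetical:
        manila access-allow demo-share ip 10.0.1.0/24 --access-level rw
        manila access-allow demo-share ip 10.0.2.0/24 --access-level ro
    The generic-driver workaround discussed above would instead expose an additional, read-only export location for the CIFS case.]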
15:09:10 <bswartz> okay let's register official stances
15:09:13 <markstur_> but most drivers would still use one export location for both RO and RW
15:09:16 <bswartz> #startvote Should read-only access be a mandatory feature for all drivers? Yes, No
15:09:17 <openstack> Begin voting on: Should read-only access be a mandatory feature for all drivers? Valid vote options are Yes, No.
15:09:18 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
15:09:24 <bswartz> #vote yes
15:09:32 <ganso> #vote yes
15:09:34 <gouthamr> #vote yes
15:09:37 <xyang1> #vote yes
15:09:41 <mkoderer> #vote yes
15:09:45 <JayXu> #vote yes
15:09:45 <markstur_> #vote yes
15:09:49 <cknight> #vote yes
15:09:51 <dustins> #vote yes
15:09:57 <bswartz> 30 more seconds
15:10:02 <vponomaryov> #vote yes
15:10:25 <bswartz> #endvote
15:10:26 <openstack> Voted on "Should read-only access be a mandatory feature for all drivers?" Results are
15:10:28 <openstack> Yes (10): bswartz, ganso, gouthamr, JayXu, cknight, vponomaryov, markstur_, mkoderer, xyang1, dustins
15:10:36 <bswartz> okay that was easy
15:10:44 <mkoderer> boring
15:10:47 <bswartz> lol
15:10:59 <bswartz> okay ganso you can update the minimum required features doc
15:11:26 <jasonsb> #action ganso to update the minimum required features doc
15:11:29 <ganso> sure, I was waiting for this definition
15:12:24 <bswartz> so I think we will go forward with the proposed workaround in the generic driver
15:13:04 <bswartz> and after we have export location metadata we will discuss exactly how we want to communicate to users about read-only export locations
15:13:18 <bswartz> #topic Manila DR - Design/Code feedback
15:13:26 <bswartz> gouthamr: you're up
15:13:38 <gouthamr> okay
15:13:49 <gouthamr> bringing up the DR design..
15:14:08 <gouthamr> I'm hoping some of you have had a chance to look at the design document
15:14:20 <bswartz> #link https://wiki.openstack.org/wiki/Manila/design/manila-mitaka-data-replication
15:14:24 <bswartz> #link https://review.openstack.org/#/c/238572/
15:14:28 <bswartz> #link https://review.openstack.org/#/c/235448/
15:14:42 <gouthamr> I was hoping to get some feedback on the code and some situations where we have no clear answers as of now..
15:14:56 <gouthamr> thanks bswartz..
15:15:25 <bswartz> gouthamr: is there a list of open questions you want to get answers to?
15:15:25 <gouthamr> The questions are here: https://wiki.openstack.org/wiki/Manila/design/manila-mitaka-data-replication#FAQs_and_Unanswered_Questions
15:16:14 <gouthamr> some of them have assumptions we made while proposing the implementation : https://review.openstack.org/#/c/238572/
15:16:49 <bswartz> gouthamr: it's confusing to mix the answered and unanswered questions
15:17:11 <dustins> gouthamr: Looking at the design doc, I'd change the name of the "replication" extra-spec
15:17:14 <gouthamr> bswartz: ah true. Will separate them out..
15:17:15 <bswartz> I'll update the doc to split them
15:17:23 <dustins> Perhaps to something like "replication-type"
15:17:24 <gouthamr> thank you bswartz
15:17:34 * dustins makes a note to reflect that in Gerrit
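    [A rough sketch of how the suggested extra-spec could be set on a share type; the spec name and values were still being decided at this point, so "replication_type" and "dr" are assumptions, and the type name "replicated" is hypothetical:
        manila type-create replicated false
        manila type-key replicated set replication_type=dr
    The scheduler could then match shares of this type only to backends reporting the corresponding replication capability.]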
15:17:34 <mkoderer> gouthamr: you are only targeting on AZs replication? What about regions?
15:17:54 <gouthamr> mkoderer: yes, currently AZs..
15:17:57 <bswartz> mkoderer: inter-region replication is not in scope for this feature
15:18:15 <mkoderer> bswartz: fine, just for my understanding
15:18:17 <bswartz> mkoderer: it's clearly a feature we want, but it would look pretty different from intra-region replication
15:18:25 <gouthamr> bswartz : you will notice that the code currently does not stop replication from being initiated within the same AZ..
15:18:54 <bswartz> gouthamr: we need to add that as a question -- Should we support intra-AZ replication?
15:18:56 <gouthamr> dustins: Sure.. I agree.
15:18:57 <bswartz> I say no
15:19:03 <mkoderer> bswartz: exactly... geo region replication would be an awesome feature for us :) but anyway first step AZs are ok
15:19:09 <bswartz> but I've heard others make cases for why we should
15:19:39 <dustins> bswartz: Perhaps it's something that we can plan on for later, but not for this particular instance
15:20:03 <dustins> And we should make it clear that this spec is only for inter-AZ replication?
15:20:14 <gouthamr> bswartz: sure adding that question.
15:20:18 <bswartz> dustins: well according to gouthamr intra-AZ is currently allowed
15:20:27 <bswartz> I would like to disallow it
15:20:34 <dustins> bswartz: Ah, I gotcha now
15:20:36 <bswartz> unless we can come up with a clear use case
15:21:03 <bswartz> we're talking about replicating to another place in the same AZ
15:21:18 <bswartz> for testing it's clearly simpler to have 1 AZ instead of 2
15:21:29 <bswartz> but for real use cases, I'm not sure what the point would be
15:21:41 <gouthamr> bswartz: true, the original objective of allowing it was testing.
15:21:43 <xyang1> bswartz: do we have a clear definition of AZ? do we check if it is the same as the nova AZ, etc?
15:21:50 <cknight> bswartz: Well, if your cloud only has 1 AZ…
15:22:09 <JayXu> so 1 AZ is only for testing purposes, and the real case is to support different AZs?
15:22:12 <bswartz> xyang1: they're supposed to be the same as nova AZs, but that's not actually enforced
15:22:34 <bswartz> xyang1: the only thing we enforce is that each m-shr instance is in exactly 1 AZ
15:22:55 <xyang1> ok
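    [A sketch of the per-service AZ placement being described, assuming the storage_availability_zone option in manila.conf; the AZ name is hypothetical:
        [DEFAULT]
        # each manila-share (m-shr) service instance belongs to exactly one AZ
        storage_availability_zone = az1
    ]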
15:23:05 <bswartz> and the cinder folks have even discussed the possibility of a c-vol service spanning AZs, so we may need to consider something similar
15:23:31 <mkoderer> our plan is to enforce the AZ across all the projects (nova, cinder,..) using a Congress policy btw
15:23:31 <gouthamr> cknight: valid point. bswartz, What if there is only one AZ?
15:23:49 <xyang1> bswartz: right, in cinder AZ is just a string currently
15:23:58 <bswartz> gouthamr: if there's only 1 AZ then you can't use replication
15:24:25 <cknight> bswartz: that seems pretty limiting for smaller deployers.
15:24:26 <bswartz> xyang1: it's a string, but it's stored in 3 places and used for decision making in 2 places
15:24:51 <xyang1> bswartz: yes, there is plan to make it really meaningful
15:25:14 <markstur_> cknight, +1 for small use cases.
15:25:21 <bswartz> cknight: smaller deployers won't really have a DR solution then -- which isn't surprising because it's hard to do proper DR at small scale
15:25:30 <xyang1> bswartz: i think having replication support within 1 AZ is good for testing
15:25:54 <bswartz> of course when we add other forms of replication, such as inter-region or inter-cloud, then we may address those smaller use cases better
15:26:23 <xyang1> bswartz: or poc type of deployment
15:26:37 <mkoderer> AZs are used for many different things in many deployments.. I would suggest allowing replication within the same AZ too
15:26:45 <bswartz> okay so let me try to explain my objection to single-AZ replication
15:26:57 <bswartz> the AZ is part of the UI for replication
15:27:26 <bswartz> you create a share, and you specify the AZ at creation time, and also that you want the share to be replicated (by selecting a share type with replication)
15:27:43 <bswartz> then you add a replica of that share by specifying the AZ for the replica
15:28:18 <bswartz> now you have a share with 2 replicas, in 2 AZs, with different export locations in each AZ (because there are actually 2 instances under the covers)
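    [A sketch of that flow with the proposed CLI, assuming the replica and export-location commands land under these names; share, type, and AZ names are hypothetical:
        manila create NFS 1 --name mydata --share-type replicated --availability-zone az1
        manila share-replica-create mydata --availability-zone az2
        manila share-export-location-list mydata    # each replica surfaces its own export locations
    ]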
15:29:32 <bswartz> if the second AZ is the same as the first AZ, then you just have a bunch of export locations in the same AZ
15:29:57 <bswartz> and it's tough to test a real failure scenario in that case
15:30:11 <bswartz> how would you even simulate a failure for testing purposes?
15:31:19 <cknight> bswartz: You could take down a single backend to simulate a failure.
15:31:29 <JayXu> testing with generic driver or vendor storage?
15:31:40 <gouthamr> cknight: yes, and not the entire AZ, bswartz
15:31:45 <qeas> Hi, everyone. I am from Nexenta and we are considering making a Manila driver for our backend. I have a question: what is the mandatory requirement for access to share? Is it enough to grant access by user/group or access by IP is mandatory too?
15:31:45 <bswartz> jayxu: both cases matter
15:32:08 <bswartz> qeas: please wait until we finish this topic
15:32:14 <mkoderer> for me it all depends on the definition of what an AZ really means.. because it is sometimes just used to group compute hosts together
15:32:27 <qeas> bswartz: sure, sorry
15:32:59 <xyang1> you could have two backends in the same AZ
15:33:18 <bswartz> mkoderer: yeah I get that different people use AZs differently
15:33:45 <JayXu> xyang1, you mean manila backends?
15:33:50 <bswartz> mkoderer: however I believe that AZs are *supposed* to represent a failure domain, which is why we limit volume attachment across AZs
15:33:53 <xyang1> JayXu: yes
15:34:21 <xyang1> I meant you can failover from one backend to another in the same AZ
15:35:09 <bswartz> so the use case we're thinking about is a single system failure (of the storage system)
15:35:18 <bswartz> where the compute and network resources are all fine
15:35:33 <mkoderer> bswartz: +1, but anyway the question is whether Manila wants to enforce this definition for this feature :) I am uncertain
15:35:39 <gouthamr> bswartz : yes, failure can happen at the specific storage backend
15:35:50 <bswartz> in that case replication between 2 systems in the same AZ makes sense
15:36:24 <JayXu> bswartz, vendor storage may allow triggering a failover on the array side.
15:36:40 <bswartz> but I think you have to have a pretty low opinion of your storage system to expect it to fail by itself (in the absence of a real disaster)
15:36:43 <JayXu> failover a share to the replica
15:37:11 <JayXu> we have the command to do the planned failover
15:37:46 <bswartz> planned failovers may be a more compelling use case
15:37:53 <JayXu> bswartz, as you mentioned, it would be unplanned failover
15:38:02 <bswartz> because then we're not talking about disaster recovery, but perhaps maintenance
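    [Under the proposed design, a planned failover for maintenance would roughly amount to promoting the replica on the surviving backend; the command names follow the replication spec under review and may change:
        manila share-replica-list --share-id mydata
        manila share-replica-promote <replica-id>
    ]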
15:39:26 <bswartz> well I guess there is broad support for the idea of intra-AZ replication
15:40:02 <gouthamr> bswartz : +1
15:40:04 <bswartz> let's put it to a vote
15:40:08 <jasonsb> in my experience storage fails a lot (HPC background). this sounds good
15:40:41 <bswartz> #startvote Should Manila share replication allow 2 replicas in the same AZ? Yes, No, Undecided
15:40:42 <openstack> Begin voting on: Should Manila share replication allow 2 replicas in the same AZ? Valid vote options are Yes, No, Undecided.
15:40:43 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
15:40:49 <cknight> #vote yes
15:40:50 <xyang1> #vote yes
15:40:51 <gouthamr> #vote yes
15:40:53 <bswartz> #vote undecided
15:40:54 <jasonsb> #vote yes
15:40:56 <mkoderer> #vote yes
15:41:03 <markstur_> #vote yes
15:41:05 <u_glide> #vote undecided
15:41:08 <vponomaryov> #vote yes
15:41:12 <JayXu> #vote yes
15:41:18 <ganso> #vote yes
15:41:23 <dustins> #vote yes
15:41:24 <markstur_> mkoderer, make it less boring?
15:41:30 <mkoderer> :)
15:41:35 <bswartz> jasonsb: white boxes fail a lot, and individual nodes in a clustered system fail a lot, but a correctly designed system should be tolerant to such failures
15:41:52 <dustins> should being the operative word :P
15:41:53 <bswartz> 30 seconds
15:42:22 <bswartz> #endvote
15:42:23 <openstack> Voted on "Should Manila share replication allow 2 replicas in the same AZ?" Results are
15:42:24 <openstack> Yes (10): ganso, gouthamr, JayXu, cknight, vponomaryov, jasonsb, markstur_, mkoderer, xyang1, dustins
15:42:25 <openstack> Undecided (2): bswartz, u_glide
15:42:38 <bswartz> okay I think the people have spoken
15:42:51 <gouthamr> #action gouthamr will update documentation
15:43:00 <bswartz> gouthamr keep going in the direction you're going with supporting replicas in the same AZ
15:43:08 <bswartz> also make the design doc clear on that point
15:43:30 <bswartz> gouthamr: anything else for this week?
15:43:31 <gouthamr> bswartz: good stuff..
15:43:57 <qeas> I have a question: what is the mandatory requirement for access to share? Is it enough to grant access by user/group or access by IP is mandatory too?
15:43:58 <gouthamr> at this point, nothing else.. i hope i can get some reviews to continue to enhance this
15:44:09 <bswartz> okay great
15:44:14 <bswartz> #topic open discussion
15:44:37 <bswartz> okay qeas now we'll answer your questions
15:44:48 <cknight> qeas: IP for NFS, user/group for CIFS is fine.
15:45:11 <bswartz> yes, we don't require IP-based access control for CIFS shares -- only user/group based
15:45:20 <qeas> cknight: okay, thanks
15:45:28 <qeas> and for NFS - IP is required, right?
15:45:33 <xyang1> qeas: you should review ganso's doc and make sure it is clear
15:45:38 <bswartz> the docs might not be clear on that point because we initially did require IP-based access for CIFS shares and then later we decided that was a bad idea
15:45:52 <cknight> qeas: yes
15:45:52 <xyang1> ganso: can you provide link
15:45:53 <bswartz> qeas: yes for NFS shares IP-based access control is mandatory
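    [A minimal illustration of the minimum access-rule requirements just stated; share names and values are hypothetical:
        # NFS: IP-based access control is mandatory
        manila access-allow my-nfs-share ip 192.168.10.0/24 --access-level rw
        # CIFS: user/group-based access control is sufficient
        manila access-allow my-cifs-share user alice --access-level rw
    ]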
15:45:54 <qeas> xyang1: could you give me a link?
15:46:17 <ganso> xyang1: sure just a sec
15:46:17 <bswartz> xyang1: is his doc merged yet?
15:46:27 <xyang1> not yet
15:46:38 <ganso> http://docs-draft.openstack.org/14/235914/9/check/gate-manila-docs/7c45126//doc/build/html/devref/driver_requirements.html
15:47:06 <xyang1> ganso: I thought it is still in review
15:47:16 <ganso> xyang1: it is
15:47:42 <mkoderer> docs-draft.. never heard about it.. cool
15:47:56 <dustins> mkoderer: New one for me too
15:47:57 <dustins> Neat!
15:48:07 <xyang1> ganso: nice
15:48:15 <bswartz> mkoderer: that's just the output of a jenkins test job
15:48:37 <qeas> another question I had is: does nova have a CLI command to attach a share to an instance, like it has for cinder volumes?
15:48:39 <mkoderer> bswartz: yeah.. but anyway helpful :)
15:48:47 <bswartz> qeas: not yet, but it may be added in mitaka
15:49:04 <qeas> bswartz: ok thanks
15:49:20 <bswartz> qeas: the current design is that the guest has direct network access to the share and doesn't need the hypervisor to be involved at all
15:49:31 <xyang1> bswartz: do you have anyone working on it?
15:49:45 <bswartz> this allows us to support VMs, containers, and bare metal all with a single solution
15:49:50 <xyang1> bswartz: I saw that coming up in Sage's talk, good idea
15:50:11 <bswartz> xyang1: sage said he'd find someone at redhat to work on it
15:50:18 <xyang1> bswartz: cool
15:50:41 <bswartz> I hope he's able to do that quickly -- I know the nova guys and if there isn't a spec up in the next few weeks it's not happening in mitaka
15:50:43 <qeas> bswartz: makes sense, I was just thinking of end-user comfort
15:51:05 <bswartz> qeas: yes, automating the mount process is very important from an end-user perspective
15:51:15 <xyang1> bswartz: ya, getting stuff in nova is not trivial
15:51:23 <bswartz> but we have multiple approaches for that, not limited to automation through nova
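    [In the current model the guest mounts the export directly over its network; a typical manual mount, with a hypothetical export location taken from the share's export locations, would be:
        sudo mount -t nfs 10.254.0.5:/shares/share-42ad38f1 /mnt/mydata
    Mount automation would script this step on behalf of the user, whether via nova or another mechanism.]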
15:52:18 <bswartz> alright anything else for today?
15:52:26 <jasonsb> i have topic
15:52:48 <jasonsb> attended magnum weekly on tuesday
15:52:54 <jasonsb> can i go ahead?
15:53:03 <bswartz> jasonsb: yeah we have about 7 minutes
15:53:11 <jasonsb> this is short
15:53:35 <jasonsb> i mentioned that manila had an interest in seeking requirements from magnum for mount automation
15:53:51 <jasonsb> adrian responded first with this: one thing I'm a bit worried about is filesystems as an attack vector
15:54:06 <jasonsb> adrian: so I'd rather not allow filesystems to be user supplied
15:54:17 <jasonsb> adrian: I'd like a way for a service provider to "sign" a filesystem
15:54:26 <jasonsb> adrian: but I'm not even sure if what I want is possible/practical yet
15:55:04 <jasonsb> so i think kernel vulnerabilities are something on their minds
15:55:04 <jasonsb> then we ran out of time
15:55:13 <jasonsb> i'll keep attending and gather more requirements
15:55:18 <bswartz> so obviously they're thinking about access to non-empty file systems
15:55:48 <bswartz> the core manila use case is we give you an empty file system and you store data in it
15:55:54 <jasonsb> non-empty, and perhaps filesystems that have been specially crafted to exploit the kernel
15:56:11 <bswartz> sharing filesystems between tenants does create opportunities for malice
15:56:26 <jasonsb> of course the specially crafted ones only come into play if you have ganesha etc. on the host
15:56:41 <jasonsb> so it is pertinent to the automation strategy
15:57:15 <jasonsb> bswartz: yes, if you have a host with a single tenant then this problem goes away i think
15:57:29 <bswartz> I think our core use case is people mounting their own filesystems
15:57:39 <bswartz> that needs to be automated and there is no danger in that case
15:58:20 <bswartz> the risk of exploits comes from sharing across tenants, not from automating the mount process
15:58:24 <jasonsb> perhaps.  i think his comment speaks to the concern about containers in a multi-tenant environment
15:58:45 <bswartz> I think that's more of a corner case
15:58:47 <jasonsb> maybe manila doesn't have a problem, but the scrutiny is there
15:58:58 <bswartz> but I realize that containers are a bit "special" when it comes to storage
15:58:59 <jasonsb> i can see it
15:59:13 <bswartz> I need to spend more time playing with container solutions and filesystem mounts
15:59:18 <jasonsb> you have a host and it's running ganesha on the host and serving manila shares into a container
15:59:41 <jasonsb> somebody could craft things to go after the ganesha and compromise the kernel
15:59:50 <bswartz> IMO ganesha wouldn't be running on the same box as a guest container
16:00:14 <jasonsb> one of sage's examples did show it this way iirc
16:00:24 <bswartz> all of our ganesha-related use cases involve a separate bridge machine or running it on the hypervisor with full virtualization
16:00:31 <bswartz> it would be pointless on a container host
16:00:44 <jasonsb> oh
16:00:49 <jasonsb> maybe it was just knfsd
16:00:53 <bswartz> yeah...
16:00:53 <jasonsb> my bad
16:00:58 <bswartz> anyways we're past time
16:01:01 <bswartz> thanks everyone
16:01:08 <bswartz> #endmeeting