15:01:01 <bswartz> #startmeeting manila
15:01:02 <openstack> Meeting started Thu Jul 31 15:01:01 2014 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:06 <openstack> The meeting name has been set to 'manila'
15:01:12 <bswartz> hello all
15:01:13 <tbarron1> hi
15:01:18 <nileshb> Hi
15:01:18 <deepakcs> hi
15:01:23 <rushil1> Hi
15:01:42 <scottda> hi
15:01:44 <rraja> hi
15:01:50 <bswartz> so I see some agenda items with no names
15:01:57 <bswartz> who added these?
15:02:20 * bswartz looks for ameade and dasha
15:02:43 <dustins> bswartz: Just finishing up over here :D
15:03:09 <bswartz> okay no worries
15:03:27 <ameade> here
15:03:33 <bswartz> I'll start with the incubation stuff
15:03:37 <bswartz> #topic incubation
15:04:01 <bswartz> so those of you that read the ML should have seen the incubation thread
15:04:13 <bswartz> I'm going to forward it to the TC this afternoon
15:04:49 <bswartz> there hasn't been any discussion really, which is unsurprising because we discussed this all before sending the email
15:05:08 <bswartz> I'm just following the process as written though
15:05:29 <bswartz> any questions/comments about incubation?
15:06:32 <bswartz> ameade: are both these topics yours, or just the second one?
15:07:06 <ameade> bswartz: just the second
15:07:11 <bswartz> I don't see dasha or vponomaryov here to cover the first
15:07:23 <bswartz> #link https://wiki.openstack.org/wiki/Manila/Meetings
15:07:27 <bswartz> sorry I forgot to link the agenda
15:07:35 <bswartz> I will take this one
15:07:42 <bswartz> #topic Add support to use default network by multitenant drivers
15:07:51 <bswartz> #link https://blueprints.launchpad.net/manila/+spec/default-network-for-multitenant-drivers
15:08:29 <bswartz> so the issue here is that some people have flat networks and they don't use secure multitenancy
15:08:46 <bswartz> but the multitenant drivers expect a share network
15:09:20 <bswartz> so there needs to be a notion of a "default" share network that represents the flat network, in cases where flat networking is used
15:09:34 <bswartz> The main question in my mind is how to represent it
15:09:47 <bswartz> should we allow an empty share-network and assume that means default?
15:10:17 <bswartz> or should manila create a special share network to represent that default flat network so users can refer to it explicitly?
15:10:51 <bswartz> .....
15:10:55 <ameade> bswartz: what are the cons of the first option again?
15:11:03 <nileshb> what are pros and cons of these two options?
15:11:33 <nileshb> empty share network sounds good for the drivers which are developed for flat networks
15:11:42 <bswartz> The con of the first one is that making the share network optional and implicitly defaulting to a flat network is confusing
15:12:26 <bswartz> we don't want to end up with the single tenant flavor of manila having a significantly different UI than the multitenant flavor
15:12:58 <bswartz> I think I favor something explicit
15:13:08 <nileshb> ok ... agree
15:13:23 <bswartz> even if it's just a special sentinel value
15:13:37 <bswartz> but something that manila adds to the DB automatically would be even better IMO
15:13:53 <nileshb> so manila by default creates a share-network ... which is available to the single tenant drivers?
15:14:11 <scottda> when does that happen?
15:14:15 <bswartz> nileshb: well the network would be available to all drivers
15:14:33 <bswartz> and for single tenant drivers, it would be the ONLY valid network (until we have gateway multitenancy)
15:14:33 <nileshb> true .. but would be the default one?
15:15:10 <deepakcs> bswartz, when you say flat network, it means the same as tenant network or the same as host network or something else ?
15:15:12 <bswartz> I would rather not have an optional argument at the API level
15:15:40 <bswartz> flat network implies that there is no separation -- the backend network and the tenant networks are the same
15:15:58 <bswartz> ofc there may still be firewalls and routers and network funkiness
15:16:20 <bswartz> but logically it's one network manila has to be aware of
15:16:28 <scottda> So does that mean this is a Neutron provider network?
15:17:07 <bswartz> scottda: I'm not sure -- perhaps
15:17:17 <bswartz> I suppose neutron is still needed in a flat network just to provide IPs
15:18:03 <nileshb> but then we are bringing in multi-tenancy aspect to the single tenant drivers as well  .. which then need to create a private n/w first in neutron and use it for creating a shared-nw
15:18:04 <deepakcs> bswartz, typically storage (backend) network is outside of the openstack ecosystem, so instances actually have to use br-ex to access them directly (without gateway)
15:18:19 <bswartz> does anyone feel strongly that we should make the default share network implicit rather than explicit?
15:18:20 <scottda> It seems that the UI currently requires an explicit call to create a share network. I'm not sure why it's so bad to add an API option to indicate that this is the backend network, and then have manila update the DB accordingly.
15:18:21 <deepakcs> bswartz, so just wondering how/when tenant and backend networks can be the same ? Is this possible in the real world ?
15:19:57 <bswartz> deepakcs: in private cloud scenarios, people don't always bother to setup VLANs or other network segmentation
15:20:28 <rraja> bswartz: so you suggest that creating a share network be mandatory for all drivers, and creating a manila share without a share-network-id errors out even for single tenant driver[unlike the present state of manila code]?
15:20:43 <deepakcs> bswartz, ok, but i thought in private cloud the storage network could still be a separate network, no ?
15:21:04 <bswartz> rraja: no -- I'm suggesting that manila will create the 'default' network for you and you just have to explicitly refer to it when you create your share if that's what you want
15:21:41 <bswartz> deepakcs: the admin can do whatever he wants
15:21:47 <bswartz> it's a question of whether manila needs to know about it or not
15:22:23 <deepakcs> bswartz, Ok. so we create a service network today, so how is creating a default network different ?
15:22:26 <bswartz> consider the case of cinder
15:22:56 <bswartz> cinder doesn't need to know the network topology -- it's assumed someone has provided connectivity between the storage controllers and the things that will connect to them
15:23:00 <bswartz> manila can have a similar mode
15:23:24 <bswartz> and we can call that the "default" share-network
15:23:37 <nileshb> so then networking aspect is implicit to cinder
15:23:58 <nileshb> why cannot we keep it the same way for flat network drivers in manila?
15:24:27 <bswartz> nileshb: we can keep it the same from a functionality perspective -- I think everyone agrees on that
15:24:34 <bswartz> it's a question of the UI
15:24:55 <bswartz> nileshb: I want to come back to your question above
15:25:31 <bswartz> because we talked about the certificate-based auth stuff for glusterfs, but we haven't discussed changes to networking to support multitenancy with glusterfs
15:26:01 <bswartz> I want to understand what your plans are there and if they conflict with our existing plans for multitenancy on top of gluster
15:26:21 <bswartz> but that's a different topic
15:26:41 <deepakcs> bswartz, that would be me, not nileshb  :)
15:26:42 <bswartz> I'm going to update this BP to require explicitly specifying the "default" network
15:26:56 <bswartz> and we can try that and see if anyone hates it
15:27:04 <bswartz> deepakcs: I think I need to talk to both of you
15:27:18 <bswartz> maybe we'll have time later on in this meeting
15:27:24 <bswartz> I want to get to the other agenda item though
15:27:39 <deepakcs> bswartz, ok
15:27:40 <bswartz> #topic New blueprint for api validation
15:27:51 <bswartz> #link https://blueprints.launchpad.net/manila/+spec/json-schema-api-validation
15:28:08 <ameade> i'll summarize
15:28:09 <bswartz> ameade: I like this
15:28:14 <ameade> The blueprint is just an idea of having manila utilize jsonschema to validate all incoming requests.
15:28:23 <ameade> I think there are other projects that do this, one being Glance. The advantage here is that we can have better HTTP 400 responses when the user makes a mistake.
15:28:35 <ameade> The functionality could also be expanded later to have the API describe itself (glance does this as well). For example, the manila client could query the api about what the current user can do and receive a personalized usage message.
15:28:39 <bswartz> +1
15:28:42 <ameade> or just have schemas that can be referenced as API documentation.
15:29:04 <bswartz> if anyone is looking for a place to contribute to manila, this looks like an easy bit of work
15:29:28 <bswartz> ameade: you mentioned glance has this -- is there any chance we can do a cut+paste job from glance to manila?
15:30:17 <ameade> bswartz: not sure, I think it is pretty intermingled in glance, i would rather have this logic more modularized
15:30:20 <bswartz> or is the hard work here actually writing up the schema?
15:30:28 <bswartz> ok
15:30:35 <ameade> bswartz: my gut tells me that is the hard part
15:30:42 <bswartz> do you have a good idea of how the schemas would be specified?
15:30:53 <bswartz> one giant schema? small bits per API? even more granular?
15:31:24 <ameade> bswartz: so i think you would have a schema per resource
15:31:27 * bswartz imagines the merge conflicts that will result if there is one centralized schema
15:32:09 <ameade> an example would be...on a create request, take the entire request body, parse it, and do jsonschema.validate(body, schema)
15:32:13 <ameade> something like that
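A minimal sketch of the per-resource validation flow ameade describes, using the jsonschema library. The schema fields and the `validate_body` helper below are illustrative assumptions, not Manila's actual API schema or code:

```python
import jsonschema

# Hypothetical schema for a share-create request body; the field names
# here are illustrative, not Manila's real API definition.
share_create_schema = {
    "type": "object",
    "properties": {
        "share": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "maxLength": 255},
                "size": {"type": "integer", "minimum": 1},
                "share_proto": {"type": "string", "enum": ["NFS", "CIFS"]},
            },
            "required": ["size", "share_proto"],
        },
    },
    "required": ["share"],
}

def validate_body(body, schema):
    """Validate a parsed request body; raise with a clear message for an HTTP 400."""
    try:
        jsonschema.validate(body, schema)
    except jsonschema.ValidationError as e:
        # The exception message pinpoints the offending field, which is
        # what makes the 400 responses more useful to API users.
        raise ValueError("Invalid request body: %s" % e.message)

validate_body({"share": {"size": 1, "share_proto": "NFS"}}, share_create_schema)
```

With a schema per resource like this, adding a new argument to an existing API is just one more entry in the `properties` dict, which matches the low-burden goal discussed below.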
15:32:34 <bswartz> what would the process be for a developer to add a new API?
15:32:58 <bswartz> or to add a new argument to an existing API
15:33:30 <ameade> bswartz: they should only have to modify the schema in addition to any changes they would do now
15:33:41 <ameade> when adding a new resource they would have to write a schema for it
15:34:00 <bswartz> seems reasonable to me
15:34:24 <bswartz> any other opinions?
15:34:50 <deepakcs> sounds like a nice idea to me
15:34:51 * bswartz realizes he's the only core team member here
15:34:54 <bswartz> oh well
15:35:13 <ameade> i just think we need to make sure we don't make this a burden
15:35:20 <ameade> so design here will be important
15:35:25 <bswartz> okay well thanks to ameade for suggesting the idea -- I think anyone who wants can take on this project
15:35:42 <bswartz> ameade: you might want to add some details to the BP that cover what we talked about
15:35:52 <bswartz> or a link to the eavesdrop for this meeting
15:35:56 <bswartz> or both!
15:35:57 <ameade> bswartz: sure will do
15:36:17 <bswartz> okay we still have time to talk about gluster
15:36:24 <bswartz> #topic glusterfs and multitenancy
15:36:40 <bswartz> nileshb: you back?
15:36:57 <bswartz> vbellur?
15:37:08 <deepakcs> bswartz, vbellur isn't well and was on sick leave today
15:37:15 <bswartz> deepakcs: okay
15:37:25 <bswartz> deepakcs: can you summarize what's going on over there at redhat?
15:37:36 <bswartz> are you and nileshb working together?
15:37:46 <deepakcs> bswartz, nope.. nileshb is from ibm, i am from redhat
15:37:53 <bswartz> what?
15:37:54 <bswartz> DOH!
15:37:57 <deepakcs> bswartz, :)
15:38:07 <deepakcs> bswartz, what u wanted me to summarize ?
15:38:21 <csaba> bswartz, deepakcs: we are just about to set up a call next week with nileshb :)
15:38:32 <bswartz> okay I'm a bit confused
15:38:44 <deepakcs> me too confused
15:38:49 <deepakcs> i am missing some context here
15:38:52 <csaba> so eventually, we are coordinating
15:38:55 <bswartz> (11:18:03 AM) nileshb: but then we are bringing in multi-tenancy aspect to the single tenant drivers as well  .. which then need to create a private n/w first in neutron and use it for creating a shared-nw
15:39:10 <csaba> yeah so that's a heads up :)
15:39:10 <bswartz> that's the comment nileshb made that confused me
15:39:23 <deepakcs> csaba, ok!
15:39:42 <bswartz> I thought that was a reference to the glusterfs stuff
15:40:26 <bswartz> deepakcs: okay please explain your thinking regarding multitenancy and glusterfs
15:40:41 <bswartz> sorry I assumed nileshb was working on the same
15:41:10 <deepakcs> bswartz, this in the context of cert based access type (glusterfs native protocol) ?
15:41:48 <bswartz> well my thinking for the last several months has been that we'd get the gateway code working
15:42:09 <bswartz> so manila could automatically create a bridge from a glusterfs server on a flat network to a NFS client on a private VLAN
15:42:09 <deepakcs> csaba, u wanna talk here on the gateway mediated work u r doing ?
15:42:44 <bswartz> the reason we've been assuming the gateway would create an NFS bridge is to decouple the storage controller's software version from the client's software version
15:43:07 <bswartz> in a public cloud, it's unreasonable to expect everyone to run the same version of gluster all the time
15:43:43 <csaba> deepakcs: that's maybe off topic if single vs. multitenancy is considered, our current effort is agnostic to that
15:45:20 <bswartz> csaba: so you're doing work on the gateway stuff?
15:45:40 <bswartz> and deepakcs is working on some method that doesn't involve gateways? is that right?
15:46:03 <deepakcs> bswartz, yes.. hence i asked, which method above. Mine is for native glusterfs protocol which uses cert based access type
15:46:16 <deepakcs> bswartz, for NFS, csaba and rraja  are working on ganesha based approach
15:46:32 <bswartz> deepakcs: okay that's helpful
15:46:35 <csaba> bswartz: yes, ramana and I work on the ganesha driver which eventually will piggyback on the generic driver, replacing cinder with ganesha
15:46:59 <deepakcs> bswartz, np, i was a bit confused before :)
15:47:01 <bswartz> deepakcs: I'm more familiar with the gateway work
15:47:03 <csaba> deepakcs is working on the cert based glusterfs driver
15:47:23 <bswartz> okay so are you guys planning to deliver 2 completely separate drivers?
15:47:36 <bswartz> or will the code be merged into 1 driver with 2 ways of operating?
15:47:59 <csaba> bswartz: these are separate efforts, with complementing feature sets
15:48:16 <deepakcs> bswartz, I am not very sure atm. I think one driver but depending on the gluster URI passed can be used for NFS, or native access
15:48:19 <bswartz> yeah I can see how the features complement
15:48:34 <deepakcs> bswartz, and if NFS, it will use serviceVM / ganesha approach
15:49:03 <deepakcs> otherwise the networking has to be set up so that the instance subnet is able to access the glusterfs server directly over the glusterfs protocol
15:49:06 <bswartz> what concerns me is that anything that uses glusterfs natively will be only useful in restricted environments
15:49:07 <deepakcs> bswartz, thats the thinking atm
15:49:28 <csaba> so focus for ganesha driver is to bring nfs4 and multi-backend options to the generic driver scenario
15:49:32 <bswartz> because of the versioning issue I mentioned
15:49:57 <bswartz> although I'd love to be proved wrong about those concerns
15:50:02 <csaba> focus for the cert based driver is to implement a new means of separation via strong cryptography
15:50:31 <bswartz> okay so I have a different concern there
15:50:41 <deepakcs> bswartz, ok, i am not myself very sure abt the versioning issue.. need to think
15:51:05 <bswartz> the goal of the multitenancy in manila isn't just about protecting data with cryptography -- it's about protecting tenants from each other
15:51:34 <bswartz> it's not okay for tenants to even be able to know that other tenants exist
15:51:40 <bswartz> we're aiming for zero leakage of information
15:51:51 <csaba> bswartz: but that can be achieved via proper networking setup?
15:51:56 <bswartz> can glusterfs with certificate based auth achieve that?
15:52:19 <deepakcs> bswartz, i think so with proper n/w setup between the tenants and the glusterfs server
15:52:21 <csaba> that part, as you pointed out last week, is independent of the backend access mechanism
15:53:28 <bswartz> I will want to look more closely at the design when it's ready
15:53:36 <bswartz> here's my specific concern
15:53:43 <bswartz> suppose I run a public cloud
15:53:50 <bswartz> I have 2 tenants: coke and pepsi
15:54:03 <bswartz> they both consume glusterfs-based shared filesystems
15:54:49 <bswartz> now suppose a spy at pepsi is able to obtain the secret key for coke's data using some external method
15:55:06 <bswartz> will pepsi be able to read all of coke's data in the cloud using that key?
15:55:19 <bswartz> or will the secure multitenancy protect them?
15:55:50 <bswartz> I would hope the answer is no
15:56:07 <bswartz> when I think about secure multitenancy I think about belt+suspenders type security
15:56:10 <rushil> bswartz: Depends on the networking setup
15:56:49 <deepakcs> bswartz, i think bcos of networking the pepsi spy won't be able to see the coke's share
15:56:56 <bswartz> okay
15:57:05 <bswartz> if that's true then maybe we don't have any problem
15:57:11 <rushil> bswartz: deepakcs is right in saying that
15:57:43 <bswartz> I just want people to understand that we need to take multitenancy security issues very seriously and anything that might leak data between 2 tenants is a major problem
15:57:57 <deepakcs> bswartz, and in setting up that kind of networking in a non-service-VM based approach.. we need to have br-ex (external bridge) configured so tenants can access the outside network
15:58:13 <rushil> Proper firewall rules will need to be implemented to make sure that the tenants can't breach the boundaries and circumvent the isolation
15:58:15 <deepakcs> bswartz, rushil which i think is part of deployer's responsibility in openstack ecosystem
15:58:32 <rushil> deepakcs: +1
15:58:42 <bswartz> okay it sounds like we're all on the same page
15:58:48 <bswartz> since we have only 1 minute left
15:58:52 <bswartz> #topic open discussion
15:58:56 <bswartz> any last minute stuff?
15:59:06 <bswartz> (literally last minute)
15:59:13 <deepakcs> none
15:59:44 <bswartz> okay thanks all
15:59:57 <deepakcs> bswartz, thanks
16:00:07 <rraja> thanks
16:00:09 <bswartz> #endmeeting