15:01:01 #startmeeting manila
15:01:02 Meeting started Thu Jul 31 15:01:01 2014 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:06 The meeting name has been set to 'manila'
15:01:12 hello all
15:01:13 hi
15:01:18 Hi
15:01:18 hi
15:01:23 Hi
15:01:42 hi
15:01:44 hi
15:01:50 so I see some agenda items with no names
15:01:57 who added these?
15:02:20 * bswartz looks for ameade and dasha
15:02:43 bswartz: Just finishing up over here :D
15:03:09 okay no worries
15:03:27 here
15:03:33 I'll start with the incubation stuff
15:03:37 #topic incubation
15:04:01 so those of you that read the ML should have seen the incubation thread
15:04:13 I'm going to forward it to the TC this afternoon
15:04:49 there hasn't been any discussion really, which is unsurprising because we discussed this all before sending the email
15:05:08 I'm just following the process as written though
15:05:29 any questions/comments about incubation?
15:06:32 ameade: are both these topics yours, or just the second one?
15:07:06 bswartz: just the second
15:07:11 I don't see dasha or vponomaryov here to cover the first
15:07:23 #link https://wiki.openstack.org/wiki/Manila/Meetings
15:07:27 sorry I forgot to link the agenda
15:07:35 I will take this one
15:07:42 #topic Add support to use default network by multitenant drivers
15:07:51 #link https://blueprints.launchpad.net/manila/+spec/default-network-for-multitenant-drivers
15:08:29 so the issue here is that some people have flat network and they don't use secure multitenancy
15:08:33 flat networks*
15:08:46 but the multitenant drivers expect a share network
15:09:20 so there needs to be a notion of a "default" share network that represents the flat network, in cases where flat networking is used
15:09:34 The main question in my mind is how to represent it
15:09:47 should we allow an empty share-network and assume that means default?
15:10:17 or should manila create a special share network to represent that default flat network so users can refer to it explicitly?
15:10:51 .....
15:10:55 bswartz: what are the cons of the first option again?
15:11:03 what are pros and cons of these two options?
15:11:33 empty share network sounds good for the drivers which are developed for flat networks
15:11:42 The con of the first one seems to be that making the share network optional and implicitly defaulting to a flat network is confusing
15:12:26 we don't want to end up with the single tenant flavor of manila having a significantly different UI than the multitenant flavor
15:12:58 I think I favor something explicit
15:13:08 ok ... agree
15:13:23 even if it's just a special sentinel value
15:13:37 but something that manila adds to the DB automatically would be even better IMO
15:13:53 so manila by default creates a share-network ... which is available to the single tenant drivers?
15:14:11 when does that happen?
15:14:15 nileshb: well the network would be available to all drivers
15:14:33 and for single tenant drivers, it would be the ONLY valid network (until we have gateway multitenancy)
15:14:33 true .. but would it be the default one?
15:15:10 bswartz, when you say flat network, it means the same as tenant network or the same as host network or something else ?
15:15:12 I would rather not have an optional argument at the API level
15:15:40 flat network implies that there is no separation -- the backend network and the tenant networks are the same
15:15:58 ofc there may still be firewalls and routers and network funkiness
15:16:20 but logically it's one network manila has to be aware of
15:16:28 So does that mean this is a Neutron provider network?
15:17:07 scottda: I'm not sure -- perhaps
15:17:17 I suppose neutron is still needed in a flat network just to provide IPs
15:18:03 but then we are bringing in multi-tenancy aspect to the single tenant drivers as well .. which then need to create a private n/w first in neutron and use it for creating a shared-nw
15:18:04 bswartz, typically storage (backend) network is outside of the openstack ecosystem, so instances actually have to use br-ex to access them directly (without gateway)
15:18:19 does anyone feel strongly that we should make the default share network implicit rather than explicit?
15:18:20 It seems that the UI currently requires an explicit call to create a share network. I'm not sure why it's so bad to add an API option to indicate that this is the backend network, and then have manila update the DB accordingly.
15:18:21 bswartz, so just wondering how/when tenant and backend networks be the same ? Is this possible in real world ?
15:19:57 deepakcs: in private cloud scenarios, people don't always bother to set up VLANs or other network segmentation
15:20:28 bswartz: so you suggest that creating a share network be mandatory for all drivers, and creating a manila share without a share-network-id errors out even for a single tenant driver [unlike the present state of manila code]?
15:20:43 bswartz, ok, but i thought in private cloud the storage network could still be a separate network, no ?
15:21:04 rraja: no -- I'm suggesting that manila will create the 'default' network for you and you just have to explicitly refer to it when you create your share if that's what you want
15:21:41 deepakcs: the admin can do whatever he wants
15:21:47 it's a question of whether manila needs to know about it or not
15:22:23 bswartz, Ok. so we create a service network today, so how is creating a default network different ?
15:22:26 consider the case of cinder
15:22:56 cinder doesn't need to know the network topology -- it's assumed someone has provided connectivity between the storage controllers and the things that will connect to them
15:23:00 manila can have a similar mode
15:23:24 and we can call that the "default" share-network
15:23:37 so then the networking aspect is implicit to cinder
15:23:58 why cannot we keep it the same way for flat network drivers in manila?
15:24:27 nileshb: we can keep it the same from a functionality perspective -- I think everyone agrees on that
15:24:34 it's a question of the UI
15:24:55 nileshb: I want to come back to your question above
15:25:31 because we talked about the certificate-based auth stuff for glusterfs, but we haven't discussed changes to networking to support multitenancy with glusterfs
15:26:01 I want to understand what your plans are there and if they conflict with our existing plans for multitenancy on top of gluster
15:26:21 but that's a different topic
15:26:41 bswartz, that would be me, not nileshb :)
15:26:42 I'm going to update this BP to require explicitly specifying the "default" network
15:26:56 and we can try that and see if anyone hates it
15:27:04 deepakcs: I think I need to talk to both of you
15:27:18 maybe we'll have time later on in this meeting
15:27:24 I want to get to the other agenda item though
15:27:39 bswartz, ok
15:27:40 #topic New blueprint for api validation
15:27:51 #link https://blueprints.launchpad.net/manila/+spec/json-schema-api-validation
15:28:08 i'll summarize
15:28:09 ameade: I like this
15:28:14 The blueprint is just an idea of having manila utilize jsonschema to validate all incoming requests.
15:28:23 I think there are other projects that do this, one being Glance. The advantage here is that we can have better HTTP 400 responses when the user makes a mistake.
15:28:35 The functionality could also be expanded later to have the API describe itself (glance does this as well). For example, the manila client could query the api about what the current user can do and receive a personalized usage message.
15:28:39 +1
15:28:42 or just have schemas that can be referenced as API documentation.
15:29:04 if anyone is looking for a place to contribute to manila, this looks like an easy bit of work
15:29:28 ameade: you mentioned glance has this -- is there any chance we can do a cut+paste job from glance to manila?
15:30:17 bswartz: not sure, I think it is pretty intermingled in glance, i would rather have this logic more modularized
15:30:20 or is the hard work here actually writing up the schema?
15:30:28 ok
15:30:35 bswartz: my gut tells me that is the hard part
15:30:42 do you have a good idea of how the schemas would be specified?
15:30:53 one giant schema? small bits per API? even more granular?
15:31:24 bswartz: so i think you would have a schema per resource
15:31:27 * bswartz imagines the merge conflicts that will result if there is one centralized schema
15:32:09 an example would be...on a create request, take the entire request body, parse it, and do jsonschema.validate(body, schema)
15:32:13 something like that
15:32:34 what would the process be for a developer to add a new API?
15:32:58 or to add a new argument to an existing API
15:33:30 bswartz: they should only have to modify the schema in addition to any changes they would do now
15:33:41 when adding a new resource they would have to write a schema for it
15:34:00 seems reasonable to me
15:34:24 any other opinions?
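A minimal sketch of the per-resource request validation ameade describes above, using the jsonschema library he names; the schema contents, field names, and the error-to-400 mapping are illustrative assumptions, not manila's actual code:

    import jsonschema

    # Illustrative schema for one resource (share create); under the
    # proposed approach each API resource would carry its own schema.
    SHARE_CREATE_SCHEMA = {
        'type': 'object',
        'properties': {
            'share': {
                'type': 'object',
                'properties': {
                    'name': {'type': 'string'},
                    'size': {'type': 'integer', 'minimum': 1},
                    'share_proto': {'type': 'string'},
                },
                'required': ['size', 'share_proto'],
            },
        },
        'required': ['share'],
    }

    def validate_body(body, schema=SHARE_CREATE_SCHEMA):
        """Validate a parsed request body; schema violations surface as a
        client-facing error instead of an unhandled 500."""
        try:
            jsonschema.validate(body, schema)
        except jsonschema.ValidationError as exc:
            # In the real API layer this would map to an HTTP 400 response
            raise ValueError('Invalid request body: %s' % exc.message)

Adding a new argument to an existing API would then only mean extending the relevant schema, as discussed above.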
15:34:50 sounds like a nice idea to me
15:34:51 * bswartz realizes he's the only core team member here
15:34:54 oh well
15:35:13 i just think we need to make sure we don't make this a burden
15:35:20 so design here will be important
15:35:25 okay well thanks to ameade for suggesting the idea -- I think anyone can take on this project who wants to
15:35:26 wants*
15:35:42 ameade: you might want to add some details to the BP that cover what we talked about
15:35:52 or a link to the eavesdrop for this meeting
15:35:56 or both!
15:35:57 bswartz: sure will do
15:36:17 okay we still have time to talk about gluster
15:36:24 #topic glusterfs and multitenancy
15:36:40 nileshb: you back?
15:36:57 vbellur?
15:37:08 bswartz, vbellur isn't well and was on sick leave today
15:37:15 deepakcs: okay
15:37:25 deepakcs: can you summarize what's going on over there at redhat?
15:37:36 are you and nileshb working together?
15:37:46 bswartz, nope.. nileshb is from ibm, i am from redhat
15:37:53 what?
15:37:54 DOH!
15:37:57 bswartz, :)
15:38:07 bswartz, what u wanted me to summarize ?
15:38:21 bswartz, deepakcs: we are just to set up a call next week with nileshb :)
15:38:32 okay I'm a bit confused
15:38:44 me too confused
15:38:49 i am missing some context here
15:38:52 so eventually, we are coordinating
15:38:55 (11:18:03 AM) nileshb: but then we are bringing in multi-tenancy aspect to the single tenant drivers as well .. which then need to create a private n/w first in neutron and use it for creating a shared-nw
15:39:10 yeah so that's a heads up :)
15:39:10 that's the comment nileshb made that confused me
15:39:23 csaba, ok!
15:39:42 I thought that was a reference to the glusterfs stuff
15:40:26 deepakcs: okay please explain your thinking regarding multitenancy and glusterfs
15:40:41 sorry I assume nileshb was working on the same
15:40:45 assumed*
15:41:10 bswartz, this in the context of cert based access type (glusterfs native protocol) ?
15:41:48 well my thinking for the last several months has been that we'd get the gateway code working
15:42:09 so manila could automatically create a bridge from a glusterfs server on a flat network to an NFS client on a private VLAN
15:42:09 csaba, u wanna talk here on the gateway mediated work u r doing ?
15:42:44 the reason we've been assuming the gateway would create an NFS bridge is to decouple the storage controller's software version from the client's software version
15:43:07 in a public cloud, it's unreasonable to expect everyone to run the same version of gluster all the time
15:43:43 deepakcs: that's maybe off topic if single vs. multitenancy is considered, our current effort is agnostic to that
15:45:20 csaba: so you're doing work on the gateway stuff?
15:45:40 and deepakcs is working on some method that doesn't involve gateways? is that right?
15:46:03 bswartz, yes.. hence i asked, which method above. Mine is for native glusterfs protocol which uses cert based access type
15:46:16 bswartz, for NFS, csaba and rraja are working on ganesha based approach
15:46:32 deepakcs: okay that's helpful
15:46:35 bswartz: yes, me and ramana work on ganesha driver which eventually will piggyback on generic driver, replacing cinder with ganesha
15:46:59 bswartz, np, i was a bit confused before :)
15:47:01 deepakcs: I'm more familiar with the gateway work
15:47:03 deepakcs is working on the cert based glusterfs driver
15:47:23 okay so are you guys planning to deliver 2 completely separate drivers?
15:47:36 or will the code be merged into 1 driver with 2 ways of operating?
15:47:59 bswartz: these are separate efforts, with complementing feature sets
15:48:16 bswartz, I am not very sure atm. I think one driver but depending on the gluster URI passed can be used for NFS, or native access
15:48:19 yeah I can see how the features complement
15:48:34 bswartz, and if NFS, it will use serviceVM / ganesha approach
15:49:03 otherwise the networking has to be set up so that the instance subnet is able to access the glusterfs server directly over the glusterfs protocol
15:49:06 what concerns me is that anything that uses glusterfs natively will be only useful in restricted environments
15:49:07 bswartz, that's the thinking atm
15:49:28 so focus for ganesha driver is to bring nfs4 and multi-backend options to the generic driver scenario
15:49:32 because of the versioning issue I mentioned
15:49:57 although I'd love to be proved wrong about those concerns
15:50:02 focus for the cert based driver is to implement a new means of separation with / via strong cryptography
15:50:31 okay so I have a different concern there
15:50:41 bswartz, ok, i am not myself very sure abt the versioning issue.. need to think
15:51:05 the goal of the multitenancy in cinder isn't just about protecting data with cryptography -- it's about protecting tenants from each other
15:51:34 it's not okay for tenants to even be able to know that other tenants exist
15:51:40 we're aiming for zero leakage of information
15:51:51 bswartz: but that can be achieved via proper networking setup?
15:51:56 can glusterfs with certificate based auth achieve that?
15:52:19 bswartz, i think so with proper n/wing setup between the tenant's and glusterfs server
15:52:21 that part, as you pointed out last week, is independent of the backend access mechanism
15:53:28 I will want to look more closely at the design when it's ready
15:53:36 here's my specific concern
15:53:43 suppose I run a public cloud
15:53:50 I have 2 tenants: coke and pepsi
15:54:03 they both consume glusterfs-based shared filesystems
15:54:49 now suppose a spy at pepsi is able to obtain the secret key for coke's data using some external method
15:55:06 will pepsi be able to read all of coke's data in the cloud using that key?
15:55:19 or will the secure multitenancy protect them?
15:55:50 I would hope the answer is no
15:56:07 when I think about secure multitenancy I think about belt+suspenders type security
15:56:10 bswartz: Depends on the networking setup
15:56:49 bswartz, i think bcos of networking the pepsi spy won't be able to see coke's share
15:56:56 okay
15:57:05 if that's true then maybe we don't have any problem
15:57:11 bswartz: deepakcs is right in saying that
15:57:43 I just want people to understand that we need to take multitenancy security issues very seriously and anything that might leak data between 2 tenants is a major problem
15:57:57 bswartz, and in setting up that kind of networking in a non-service VM based approach.. we need to have br-ex (external bridge) configured to have tenants access outside network
15:58:13 Proper firewall rules will need to be implemented to make sure that the tenants can't breach the lines and circumvent
15:58:15 bswartz, rushil which i think is part of the deployer's responsibility in the openstack ecosystem
15:58:32 deepakcs: +1
15:58:42 okay it sounds like we're all on the same page
15:58:48 since we have only 1 minute left
15:58:52 #topic open discussion
15:58:56 any last minute stuff?
15:59:06 (literally last minute)
15:59:13 none
15:59:44 okay thanks all
15:59:57 bswartz, thanks
16:00:07 thanks
16:00:09 #endmeeting