15:01:09 #startmeeting manila
15:01:10 Meeting started Thu Feb 20 15:01:09 2014 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:13 The meeting name has been set to 'manila'
15:01:20 hello folks
15:01:29 hi
15:01:30 hi
15:01:31 Hi
15:01:34 hi
15:01:35 hi
15:01:50 #link https://wiki.openstack.org/wiki/Manila/Meetings
15:01:58 hi
15:02:11 I don't suppose we have csaba? He added some items to our agenda
15:02:38 ah there he is
15:02:47 okay, a few quick things
15:02:53 hi
15:03:13 as many of you know -- the #openstack-manila channel has been unregistered for a long time because I failed to register it when I should have
15:03:20 we're finally getting that fixed
15:03:41 hopefully within the next week it will be registered and we can get gerritbot in there
15:03:59 hello
15:04:05 hi
15:04:34 um, and I can't think of the other thing
15:04:42 I'm sure it will come to me if we just get started
15:04:47 #topic dev status
15:04:53 let's get an update from csaba first
15:05:04 hi
15:05:26 so there are some issues with the image build process
15:05:57 as pointed out by vponomaryov
15:05:58 https://github.com/csabahenk/cirros/issues
15:06:08 I have those fixed
15:06:23 (pushing them in a moment)
15:06:45 so the image should build now with proper nfs support
15:07:09 what does not work: proper initialization of services
15:07:26 what init system does cirros use?
15:07:31 i.e. injecting nfsd + smbd into cirros' init system
15:07:34 is it SysV-based?
15:07:55 I haven't fully figured that out yet
15:08:00 it has cloud-init
15:08:38 cloud-init is just a tool
15:08:48 I'm talking about /sbin/init
15:08:58 aka PID 1
15:09:21 yep
15:09:37 /sbin/init -> /bin/busybox
15:09:52 busybox implements a SysV variant
15:10:11 do you need help getting the services added?
15:10:16 yes, I think that would be good for us
15:10:27 I think I can manage
15:10:44 it's just that so far I've been busy with the build process
15:11:12 I'm also going to publish pre-made images
15:11:15 ok
15:11:19 cool
15:11:26 how big are your images compared to cirros?
15:11:39 they will be available under
15:11:56 #link http://people.redhat.com/chenk/cirros
15:12:11 around 75M qcow2
15:12:17 180M rootfs
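(For reference, a minimal sketch of how a pre-built image like these could be registered for use with the generic driver, assuming the glance v1-era CLI; the image filename is hypothetical -- check the link above for the actual published names.)

    # Hedged sketch: fetch a pre-built service image and register it in glance.
    # "cirros-nfs.qcow2" is an illustrative filename, not a confirmed artifact.
    wget http://people.redhat.com/chenk/cirros/cirros-nfs.qcow2
    glance image-create --name manila-service-image \
        --disk-format qcow2 --container-format bare \
        --file cirros-nfs.qcow2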
15:13:27 Also, there is one other problem with cirros
15:13:45 we will finish generic driver deployment, and at that point we'll be able to provide images that we can confirm to work
15:13:48 if we use ssh and an inline command
15:14:18 for cirros we get "command not found"; it is fine without ssh. Inline execution is the way the generic driver uses it
15:14:34 vponomaryov: I don't get it
15:14:39 vponomaryov: do you have an example?
15:14:56 the PATH variable in a new ssh connection is not set up
15:15:19 ssh cirros@hostname sudo ls -- inline execution
15:15:52 so can we do ssh cirros@hostname sudo /usr/bin/ls ?
15:16:45 bswartz: I don't remember exactly
15:16:55 okay, it doesn't seem like a huge issue
15:16:59 we can sort out the PATH stuff I'm sure
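(A minimal sketch of the failure and the workarounds discussed, assuming standard non-interactive ssh behavior -- no login shell, hence a minimal PATH; the address is illustrative.)

    # Inline execution starts no login shell, so PATH may be minimal:
    ssh cirros@10.254.0.3 'sudo ls'      # can fail: ls: command not found
    # Workaround 1: call the binary by absolute path, as suggested above:
    ssh cirros@10.254.0.3 'sudo /bin/ls'
    # Workaround 2: set PATH explicitly before running the command
    # (works if sudo preserves the caller's PATH):
    ssh cirros@10.254.0.3 'PATH=/usr/sbin:/usr/bin:/sbin:/bin; sudo ls'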
15:17:35 vponomaryov: what do you have for dev status?
15:17:44 Dev status:
15:17:47 works4me
15:17:52 sorry
15:18:03 1) Generic driver's modularity: https://review.openstack.org/#/c/74154/
15:18:20 2) Volume types: https://review.openstack.org/#/c/74772/ (client)
15:18:35 3) Horizon's manila extension: work in progress, no code in repos yet.
15:18:48 4) Activation/deactivation API: https://blueprints.launchpad.net/manila/+spec/share-network-activation-api
15:18:48 (server) https://review.openstack.org/#/c/71936/
15:18:48 (client) https://review.openstack.org/#/c/71497/
15:19:00 TODO:
15:19:05 1) Implement volume types server side (a sketch of the expected client workflow follows this topic)
15:19:18 2) Make drivers use the activation/deactivation API
15:19:27 3) Finish the Horizon extension for Manila
15:19:34 4) Review the generic driver's modularity
15:19:41 OPEN ITEMS:
15:19:48 1) Which repo should be used for storing the extended Horizon?
15:20:04 2) Lightweight image for the generic driver
15:20:13 3) Update the devstack plugin to use the generic driver
15:20:13 dependencies: the lightweight image, and reliable storage for it; drivers with the activation API for share-networks
15:20:26 oh that's a good point
15:20:27 That's all from me
15:21:33 well, we can always use personal github repos for the initial development
15:21:37 I'm not able to see updates on IRC
15:21:43 are you guys facing the same issue?
15:21:48 that's what we did for manila before stackforge let us in, after all
15:21:56 navneet: no
15:22:08 ok
15:22:22 actually I have a better idea
15:22:37 I can create a NetApp fork of horizon and grant access
15:22:44 I think I can, at any rate
15:22:54 #action bswartz create a NetApp fork of Horizon
15:23:09 vponomaryov: 2) lightweight image for generic driver
15:23:15 bswartz: OK, I am using your repo for tempest now; IMO it will be fine for Horizon too
15:23:15 ^ is this addressed?
15:23:50 bswartz: the lightweight image is not finished, so it is an open item
15:24:17 vponomaryov: okay, but is there anything to discuss today?
15:24:44 hm
15:24:51 nothing special
15:25:03 I would like to discuss this review for the generic driver's modularity
15:25:07 okay, I feel like we covered the details well enough above
15:25:07 I'd like to ask you guys to talk about modularity
15:25:16 aostapenko: thanks! the modularization design makes sense. so any new driver which wants to implement network-plumbing-based multi-tenancy can make use of the service_instance module.
15:25:20 yeah it's on the agent
15:25:25 agenda*
15:25:50 vponomaryov: 3) update devstack plugin for using generic driver
15:25:53 bswartz: I have one proposal: make the lvm driver a plugin for the generic driver, like cinder, etc.
15:25:54 this should be pretty easy, right?
15:26:10 bswartz: right, we only need the dependencies for it
15:26:25 vponomaryov: what do you mean by your proposal?
15:26:39 right now we have a single-tenant lvm driver
15:26:54 would the LVM driver replace the role of cinder in that case?
15:26:56 it can be changed to use the generic driver's service VM module
15:27:29 bswartz: not replace -- it would be one of several
15:27:51 right, I mean that if it was used, then cinder would not be used
15:27:53 I mean that with the generic driver's modularity, it can become multitenant
15:29:04 it's an interesting idea
15:29:09 it doesn't feel high priority to me
15:29:23 I'd like to know what the benefits are compared to the approach we're taking now
15:29:27 agreed, not high priority
15:30:04 alright
15:30:08 anything else for dev status?
15:30:26 No
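(Since the server side of volume types is still on the TODO list above, here is a minimal sketch of the client workflow it would enable, assuming the manila commands mirror cinder's type commands, which the client review above is modeled on; the type name and extra-spec key are illustrative, not confirmed.)

    # Hedged sketch: volume types, assuming a cinder-style client interface.
    manila type-create gold                                # define a type
    manila type-key gold set share_backend_name=generic   # illustrative extra spec
    manila type-list                                       # confirm it exists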
15:30:35 esker reminded me of another important topic
15:30:43 #topic Atlanta Summit
15:31:06 thanks
15:31:15 Two Manila conference sessions are planned:
15:31:17 https://www.openstack.org/vote-atlanta/Presentation/welcome-to-manila-the-openstack-file-share-service
15:31:25 https://www.openstack.org/vote-atlanta/Presentation/manila-the-openstack-file-share-service-technology-deep-dive
15:31:42 please vote for these sessions!
15:31:48 and get your colleagues to vote too
15:32:03 this will help with manila's publicity a lot, I think
15:32:17 Note that in the latter we want to show demos. NetApp will show ours... EMC contacted us about showing theirs... and we'd like to get Red Hat and IBM involved as well.
15:32:34 If anyone else has something WIP that I'm unaware of... please chime in.
15:32:40 Not trying to exclude anyone.
15:32:52 yes, as I mentioned before, we want to demo working code -- including as many drivers as we can
15:33:06 They will be short demos... pre-recorded to ensure the outcome ;-)
15:33:30 The Horizon work is important in this respect as well.
15:33:46 esker: what, you don't like a BSOD while you're on stage?
15:33:56 We would like to get folks from the various companies mentioned onstage to show their stuff.
15:34:04 Not a NetApp session alone... a Manila session.
15:34:30 Again, if I've overlooked someone w/ a Manila backend underway... please let me know. We'd like to include you too.
15:35:17 esker: good idea ;)
15:35:28 I used to work for a certain very mercurial, very cantankerous fellow who was in the business of regularly demoing to very large audiences... I've had enough BSOD-like experiences for one lifetime.
15:35:56 heh
15:36:19 okay, next topic
15:36:31 #topic Discussion on service VM role for generic driver and other (hypothetical) multitenant drivers
15:36:38 So to summarize... please vote... get your peers to vote. And let's coordinate on the sessions
15:36:41 nice long topic, csaba
15:36:46 yes :)
15:37:19 BTW, Mirantis... we'd like to get you involved on stage as well...
15:37:29 esker: looking forward to it
15:37:42 so we are still just coming out of the mental process of understanding how the generic driver's network plumbing looks
15:38:00 as a side product we created a diagram
15:38:04 csaba: I have bad news for you there
15:38:11 oh?
15:38:12 csaba: we're looking at changing that again
15:38:20 no problem
15:38:28 bswartz: is this attributed to the IPspaces thing?
15:38:46 it may be possible to get the router out of the picture and go to a model where the service VM is directly on the tenant subnet
15:38:55 esker: no, this is regarding the generic driver
15:38:59 just want to make sure our understanding matches the state of the art
15:39:04 thx
15:39:13 #link https://docs.google.com/drawings/d/1R-ycE52ukV8XpK2MmKhkBO--QJRhygCU3rJZs0Zl3Kg/edit
15:39:38 esker: the IPspaces thing is just a limitation of the netapp driver
15:40:10 csaba: nice picture!
15:40:11 our question is: if this is correct, why exactly do we need the service VMs?
15:40:40 I mean, tenant isolation is given by the network plumbing
15:40:46 csaba: maybe I misunderstand the question, but where else would NFSD run?
15:40:56 on the hypervisor
15:41:09 csaba: ah
15:41:12 what's wrong with that?
15:41:22 like this
15:41:23 so the problem with that is: what if the tenant uses Kerberos?
15:41:26 #link https://docs.google.com/drawings/d/17R9I-PepdvoiXptBc8diHpqUHprSi8GDrVpzKwec-PI/edit
15:41:51 certainly strong security is needed for a central nfs server
15:41:55 csaba: Almost -- the routers do not belong to the service tenant. But this architecture will be changed
15:41:57 each tenant could be on a separate kerberos domain
15:42:12 I'm not aware of an NFSD that can participate in multiple kerberos domains at the same time
15:42:13 bswartz: yes
15:42:24 ah
15:42:45 so that's why you need a per-tenant nfsd?
15:42:56 yes, that's been the thinking so far
15:43:15 each service VM has its own DNS, its own LDAP, its own Kerberos -- all specified by the tenant
15:44:04 you need some form of containers/virtualization to run multiple NFSDs with all of those things different for each one
15:44:21 Linux doesn't support per-process DNS servers either, AFAIK
15:44:26 that was the thing I missed, then... I thought those were configured by the framework
15:45:18 I see
15:45:23 yeah, I think virtualization is essential for some of this security stuff
15:45:27 in the first diagram, there is one service VM per tenant, regardless of how many hv/compute nodes are in the configuration, right?
15:45:43 sure would be nice if we weren't bottlenecking each tenant's fileshares to a single ethernet link though, wouldn't it *sigh*
15:45:48 bill_az__: that's my understanding
15:45:54 bill_az__: it's of course a simplified figure
15:46:11 bill_az__: one service VM for each share-network
15:46:23 or a simplistic case
15:46:24 it will be scheduled by nova
15:46:34 ok - that's what I was thinking - I wanted to be sure we're all on the same page
15:46:57 gregsfortytwo1: yes, the VM does become a bottleneck for share traffic
15:47:08 gregsfortytwo1: however, it doesn't have to be 1 ethernet link
15:47:32 the admin could arrange for share VMs to go on a server with quad-bonded 10GbE
15:47:47 using nova flavors
15:48:30 bswartz: does this design allow for a potential future enhancement wherein a share might need to be established across multiple tenants?
15:48:40 esker: NO!
15:48:49 shares across tenants are a big no-no in the current design
15:48:52 So how might that be accommodated?
15:49:01 concerned about a design that prevents that in the future
15:49:06 bswartz: it can be shared, if the network is shared
15:49:12 esker: the tenants would have to work that out between themselves
15:49:25 if they arrange for a network route between them, then the share data could flow along that route
15:49:30 but the share would always be owned by 1 tenant
15:49:33 I would think that the tenants would want Manilla to work that out for them
15:49:43 sorry... Manila
15:50:01 Okay, that's fair
15:50:04 esker: it's too hard of a problem to solve in the general case, which is why we've punted
15:50:28 one tenant will have to own a given share... but could that tenant elect to make it available to others? Does this design prevent that?
15:50:44 it's just a matter of network connectivity
15:50:47 Punting is okay for this... but I'm concerned about a design that doesn't allow for it in the future
15:50:53 I guess there is one thing to consider
15:51:02 which is manila's access control scheme at the IP level
15:51:30 we just need to make sure that manila doesn't attempt to limit access requests from IPs that don't belong to the owning tenant
15:51:44 then again, manila doesn't really know what those are anyway, so it's not an issue
15:52:18 we have to trust that if the owning tenant wishes to grant access to an IP, they know what they're doing
15:52:34 esker: Only one tenant sets the exports; if network connectivity is present, it can be shared
15:53:03 and doing anything else would require some kind of shared admin authority between the tenants, and nobody wants to do that
15:53:14 gregsfortytwo1: +1million
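(For context, a minimal sketch of the IP-level access control just discussed, using the manila client's access-rule commands; the share name and CIDR are illustrative -- the CIDR could just as well be a peer tenant's routed subnet.)

    # The owning tenant grants and revokes access by IP; manila does not
    # check whether the address actually belongs to that tenant.
    manila access-allow myshare ip 10.5.0.0/24   # e.g. another tenant's routed subnet
    manila access-list myshare                   # inspect current rules
    manila access-deny myshare <access-id>       # revoke a rule by its id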
15:53:56 okay, I think that's settled
15:54:00 just one more thing though
15:54:04 Right... fair enough. Of course a single given tenant will have to have sole administrative authority... the main concern is provision within Manila's design to allow for sharing to other tenant networks.
15:54:25 csaba: there would be special cases where the nfsd could be run on the hypervisor, as long as the security rules were lax enough that that wouldn't cause problems
15:54:45 csaba: I would consider that an optimization though, not a core use case
15:55:15 #topic open discussion
15:55:21 we've got 5 minutes left
15:55:23 anything else for today?
15:56:01 I think csaba is worried about what happens when you get a tenant with 100 VMs trying to share data via a Manila share... think something like Hadoop
15:56:44 I know it's not the most performance-optimized design
15:57:04 I'm not fully versed in hadoop, but I thought it was a really bad idea to use shared filesystems with it
15:57:18 hadoop is all about bringing the compute close to where the data is
15:57:33 yeah, but there's a lot of data-intensive stuff like that
15:57:37 virtualization schemes like openstack make that impossible, or at least really difficult
15:57:50 and hadoop-on-ec2 is pretty popular, as I understand it
15:58:02 though that might be mostly using ephemeral storage
15:58:26 that's interesting
15:58:57 I still think hadoop on bare metal would crush the performance of that -- as long as you're okay spending capex funds for hadoop rather than opex funds
15:59:04 but that's a big digression
15:59:08 we won't go there
15:59:25 I'll just post the links one more time
15:59:30 https://www.openstack.org/vote-atlanta/Presentation/welcome-to-manila-the-openstack-file-share-service
15:59:31 https://www.openstack.org/vote-atlanta/Presentation/manila-the-openstack-file-share-service-technology-deep-dive
15:59:40 don't forget to vote
15:59:43 thanks all
15:59:47 thanks
15:59:52 thanks
15:59:52 bye
16:00:11 #endmeeting