15:01:09 <bswartz> #startmeeting manila
15:01:10 <openstack> Meeting started Thu Feb 20 15:01:09 2014 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:13 <openstack> The meeting name has been set to 'manila'
15:01:20 <bswartz> hello folks
15:01:29 <vponomaryov> hi
15:01:30 <xyang1> hi
15:01:31 <scottda> Hi
15:01:34 <rraja> hi
15:01:35 <yportnova> hi
15:01:50 <bswartz> #link https://wiki.openstack.org/wiki/Manila/Meetings
15:01:58 <gregsfortytwo1> hi
15:02:11 <bswartz> I don't suppose we have csaba? He added some items to our agenda
15:02:38 <bswartz> ah there he is
15:02:47 <bswartz> okay a few quick things
15:02:53 <csaba> hi
15:03:13 <bswartz> as many of you know -- the #openstack-manila channel has been unregistered for a long time because I failed to register it when I should have
15:03:20 <bswartz> we're finally getting that fixed
15:03:41 <bswartz> hopefully within the next week it will be registered and we can get gerritbot in there
15:03:59 <achirko> hello
15:04:05 <aostapenko> hi
15:04:34 <bswartz> um, and I can't think of the other thing
15:04:42 <bswartz> I'm sure it will come to me if we just get started
15:04:47 <bswartz> #topic dev status
15:04:53 <bswartz> let's get an update from csaba first
15:05:04 <csaba> hi
15:05:26 <csaba> so there are some issues with the image build process
15:05:57 <csaba> as pointed out by vponomaryov
15:05:58 <csaba> https://github.com/csabahenk/cirros/issues
15:06:08 <csaba> I have those fixed
15:06:23 <csaba> (pushing them in a moment)
15:06:45 <csaba> so the image should build now with proper nfs support
15:07:09 <csaba> what does not work: proper initialization of  services
15:07:26 <bswartz> what init system does cirros use?
15:07:31 <csaba> ie. injecting nfsd + smbd to cirros' init system
15:07:34 <bswartz> is it SysV-based?
15:07:55 <csaba> I haven't fully figured that out yet
15:08:00 <csaba> it has cloud-init
15:08:38 <bswartz> cloudinit is just a tool
15:08:48 <bswartz> I'm talking about /sbin/init
15:08:58 <bswartz> aka PID 1
15:09:21 <csaba> yep
15:09:37 <csaba> /sbin/init -> /bin/busybox
15:09:52 <bswartz> busybox implements a SysV variant
15:10:11 <bswartz> do you need help getting the services added?
15:10:16 <csaba> yes I think it will be good for us
15:10:27 <csaba> I think I can manage
15:10:44 <csaba> just so far I was busy with the build process
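The service-injection step discussed above could, assuming busybox's SysV-style init picks up rc scripts from /etc/init.d, look roughly like this sketch. The script name, binary paths, and daemon flags here are all assumptions for illustration, not the actual cirros layout:

```shell
#!/bin/sh
# /etc/init.d/S60fileshares -- hypothetical busybox/SysV-style init script
# for starting nfsd and smbd inside the cirros-based service image.
# Paths and daemon names below are assumptions, not verified cirros contents.
case "$1" in
  start)
    /usr/sbin/rpcbind                 # RPC portmapper, required by NFS
    /usr/sbin/rpc.nfsd 8              # start 8 NFS server threads
    /usr/sbin/exportfs -ra            # (re)export everything in /etc/exports
    /usr/sbin/smbd -D                 # Samba in daemon mode
    ;;
  stop)
    killall smbd rpc.nfsd rpcbind 2>/dev/null
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
```

Whether busybox init runs numbered S-scripts directly or needs an inittab entry would have to be checked against the image.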
15:11:12 <csaba> I'm also going to publish pre-made images
15:11:15 <bswartz> ok
15:11:19 <bswartz> cool
15:11:26 <bswartz> how big are your images compared to cirros?
15:11:39 <csaba> will be available under
15:11:56 <csaba> #link http://people.redhat.com/chenk/cirros
15:12:11 <csaba> around 75M qcow2
15:12:17 <csaba> 180M rootfs
15:13:27 <vponomaryov> Also, there is one another problem with cirros
15:13:45 <csaba> we will finish generic driver deployment and at that point we'll be able to provide images that we can confirm to work
15:13:48 <vponomaryov> if we use ssh and inline command
15:14:18 <vponomaryov> for cirros, we get "command not found", though the same command is ok without ssh. Inline ssh is the way the generic driver runs commands
15:14:34 <bswartz> vponomaryov: I don't get it
15:14:39 <bswartz> vponomaryov: do you have an example?
15:14:56 <aostapenko> PATH variable in new ssh connection is not set up
15:15:19 <vponomaryov> ssh cirros@hostname sudo ls   - inline execution
15:15:52 <bswartz> so can we do ssh cirros@hostname sudo /usr/bin/ls ?
15:16:45 <vponomaryov> bswartz: don't remember exactly
15:16:55 <bswartz> okay it doesn't seem like a huge issue
15:16:59 <bswartz> we can sort out the PATH stuff I'm sure
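The workaround bswartz hints at can be sketched as follows; a non-interactive ssh command never sources the login-shell profile, so PATH stays minimal. Hostnames and binary paths here are illustrative assumptions:

```shell
# Inline ssh runs in a non-login shell, so /etc/profile's PATH is never set.
# Option 1: call the binary by absolute path (path assumed for illustration):
ssh cirros@$HOST 'sudo /usr/bin/ls /'

# Option 2: set PATH explicitly as part of the remote command:
ssh cirros@$HOST 'PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/bin:/bin sudo ls /'
```

Either form avoids relying on the remote shell's default PATH, which is what bites the generic driver here.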
15:17:35 <bswartz> vponomaryov: what do you have for dev status
15:17:44 <vponomaryov> Dev status:
15:17:47 <csaba> works4me
15:17:52 <csaba> sorry
15:18:03 <vponomaryov> 1) Generic driver's modularity: https://review.openstack.org/#/c/74154/
15:18:20 <vponomaryov> 2) Volume types: https://review.openstack.org/#/c/74772/ (client)
15:18:35 <vponomaryov> 3) Horizon's manila extension, Work in progress, no code in repos yet.
15:18:48 <vponomaryov> 4) Activation/deactivation api: https://blueprints.launchpad.net/manila/+spec/share-network-activation-api
15:18:48 <vponomaryov> (server) https://review.openstack.org/#/c/71936/
15:18:48 <vponomaryov> (client) https://review.openstack.org/#/c/71497/
15:19:00 <vponomaryov> TODO:
15:19:05 <vponomaryov> 1) Implement volume types server side
15:19:18 <vponomaryov> 2) Make drivers use activation/deactivation API
15:19:27 <vponomaryov> 3) Finish Horizon extension for Manila
15:19:34 <vponomaryov> 4) review generic driver's modularity
15:19:41 <vponomaryov> OPEN ITEMS:
15:19:48 <vponomaryov> 1) Which repo should be used for storing extended Horizon?
15:20:04 <vponomaryov> 2) lightweight image for generic driver
15:20:13 <vponomaryov> 3) update devstack plugin for using generic driver
15:20:13 <vponomaryov> dependencies: lightweight image, and reliable storage for it. Drivers with activation api for share-networks.
15:20:26 <bswartz> oh that's a good point
15:20:27 <vponomaryov> Thats all from me
15:21:33 <bswartz> well we can always use personal github repos for the initial development
15:21:37 <navneet> I'm not able to see updates on IRC
15:21:43 <navneet> are you guys facing the same issue?
15:21:48 <bswartz> that's what we did for manila before stackforge let us in after all
15:21:56 <bswartz> navneet: no
15:22:08 <navneet> ok
15:22:22 <bswartz> actually I have a better idea
15:22:37 <bswartz> I can create a NetApp fork of horizon and grant access
15:22:44 <bswartz> I think I can at any rate
15:22:54 <bswartz> #action bswartz create a NetApp fork of Horizon
15:23:09 <bswartz> vponomaryov: 2) lightweight image for generic driver
15:23:15 <vponomaryov> bswartz: Ok, I am using your repo for tempest now, IMO, for Horizon will be ok too
15:23:15 <bswartz> ^ is this addressed?
15:23:50 <vponomaryov> bswartz: lightweight image is not finished, so, it is open item
15:24:17 <bswartz> vponomaryov: okay but is there anything to discuss today?
15:24:44 <vponomaryov> hm
15:24:51 <vponomaryov> nothing special
15:25:03 <vponomaryov> I would like reviews of this change for generic driver's modularity
15:25:07 <bswartz> okay I feel like we covered the details well enough above
15:25:07 <aostapenko> I'd like to ask guys to talk about modularity
15:25:16 <rraja> aostapenko: thanks! the modularization design makes sense. so any new driver, which wants to implement network-plumbing based multi-tenancy, can make use of the service_instance module.
15:25:20 <bswartz> yeah it's on the agenda
15:25:50 <bswartz> vponomaryov: 3) update devstack plugin for using generic driver
15:25:53 <vponomaryov> bswartz: I have one proposal: make the lvm driver a plugin for the generic driver, like cinder, etc
15:25:54 <bswartz> this should be pretty easy right?
15:26:10 <vponomaryov> bswartz: right, need only dependencies for it
15:26:25 <bswartz> vponomaryov: what do you mean by your proposal?
15:26:39 <vponomaryov> now we have a single-tenant lvm driver
15:26:54 <bswartz> would the LVM driver replace the role of cinder in that case?
15:26:56 <vponomaryov> it can be changed to use the generic driver's service VM module
15:27:29 <vponomaryov> bswartz: not replace, it would be one of several
15:27:51 <bswartz> right I mean, if it was used, then cinder would not be used
15:27:53 <vponomaryov> I mean, with generic driver's modularity, it can become multitenant
15:29:04 <bswartz> it's an interesting idea
15:29:09 <bswartz> it doesn't feel high priority to me
15:29:23 <bswartz> I'd like to know what the benefits are compared to the approach we're taking now
15:29:27 <vponomaryov> agree, not high priority
15:30:04 <bswartz> alright
15:30:08 <bswartz> anything else for dev status?
15:30:26 <vponomaryov> No
15:30:35 <bswartz> esker reminded me of another important topic
15:30:43 <bswartz> #topic Atlanta Summit
15:31:06 <esker> thanks
15:31:15 <esker> Two Manila conference sessions planned:
15:31:17 <esker> https://www.openstack.org/vote-atlanta/Presentation/welcome-to-manila-the-openstack-file-share-service
15:31:25 <esker> https://www.openstack.org/vote-atlanta/Presentation/manila-the-openstack-file-share-service-technology-deep-dive
15:31:42 <bswartz> please vote for these sessions!
15:31:48 <bswartz> and get your colleagues to vote too
15:32:03 <bswartz> this will help with manila's publicity a lot I think
15:32:17 <esker> Note that in the latter we want to show demos.  NetApp will show ours... EMC contacted us about showing theirs... and we'd like to get Red Hat and IBM involved as well.
15:32:34 <esker> If anyone else has something WIP that I'm unaware of... please chime in.
15:32:40 <esker> Not trying to exclude anyone.
15:32:52 <bswartz> yes as I mentioned before we want to demo working code -- including as many drivers as we can
15:33:06 <esker> Will be short demos... pre-recorded to ensure outcome ;-)
15:33:30 <esker> The Horizon work is important in this respect as well.
15:33:46 <bswartz> esker: what, you don't like BSOD while you're on stage?
15:33:56 <esker> Would like to get folks from the various companies mentioned onstage to show their stuff.
15:34:04 <esker> Not a NetApp session alone... a Manila session.
15:34:30 <esker> Again, if I've overlooked someone w/ a Manila backend underway... please let me know.  We'd like to include you too.
15:35:17 <vbellur> esker: good idea ;)
15:35:28 <esker> I used to work for a certain very mercurial, very cantankerous fellow that was in the business of regularly demoing to very large audiences... have had enough BSOD like experiences for one lifetime.
15:35:56 <bswartz> heh
15:36:19 <bswartz> okay next topic
15:36:31 <bswartz> #topic Discussion on service VM role for generic driver and other (hypothetic) multitenant drivers
15:36:38 <esker> So to summarize... please vote... get your peers to vote.  And let's coordinate on the sessions
15:36:41 <bswartz> nice long topic csaba
15:36:46 <csaba> yes :)
15:37:19 <esker> BTW, Mirantis... we'd like to get you involved on stage as well...
15:37:29 <xyang1> esker: looking forward to it
15:37:42 <csaba> so we are still just out of  the mental process of understanding how the generic driver's network plumbing looks
15:38:00 <csaba> as a side product we created a diagram
15:38:04 <bswartz> csaba: I have bad news for you there
15:38:11 <csaba> oh?
15:38:12 <bswartz> csaba: we're looking at changing that again
15:38:20 <csaba> no problem
15:38:28 <esker> bswartz: this attributed to the IPspaces thing?
15:38:46 <bswartz> it may be possible to get the router out of the picture and go to model where the service-vm is directly on the tenant subnet
15:38:55 <bswartz> esker: no this is regarding the generic driver
15:38:59 <csaba> just want to make sure our understanding of the state of the art
15:39:04 <esker> thx
15:39:13 <csaba> #link https://docs.google.com/drawings/d/1R-ycE52ukV8XpK2MmKhkBO--QJRhygCU3rJZs0Zl3Kg/edit
15:39:38 <bswartz> esker: the ipspaces thing is just a limitation on the netapp driver
15:40:10 <bswartz> csaba: nice picture!
15:40:11 <csaba> our question is, if this is correct -- why exactly do we need the service VMs?
15:40:40 <csaba> I mean tenant isolation is given by network plumbing
15:40:46 <bswartz> csaba: maybe I misunderstand the question but, where else would NFSD run?
15:40:56 <csaba> on hypervisor
15:41:09 <bswartz> csaba: ah
15:41:12 <csaba> what's wrong with that?
15:41:22 <csaba> like this
15:41:23 <bswartz> so the problem with that is what if the tenant uses Kerberos?
15:41:26 <csaba> #link https://docs.google.com/drawings/d/17R9I-PepdvoiXptBc8diHpqUHprSi8GDrVpzKwec-PI/edit
15:41:51 <csaba> certainly strong security is needed for a central nfs server
15:41:55 <aostapenko> csaba: Almost, routers do not belong to service tenant. But this architecture will be changed
15:41:57 <bswartz> each tenant could be on a separate kerberos domain
15:42:12 <bswartz> I'm not aware of an NFSD that can participate in multiple kerberos domains at the same time
15:42:13 <csaba> bswartz: yes
15:42:24 <csaba> ah
15:42:45 <csaba> so that's why you need per-tenant nfsd?
15:42:56 <bswartz> yes that's been the thinking so far
15:43:15 <bswartz> each service VM has its own DNS, its own LDAP, its own Kerberos -- all specified by the tenant
15:44:04 <bswartz> you need some form of containers/virtualization to run multiple NFSDs with all of those things different for each one
15:44:21 <bswartz> Linux doesn't support per-process DNS servers AFAIK either
15:44:26 <csaba> that was the thing I missed.. I thought they were configured by the framework
15:45:18 <csaba> I see
15:45:23 <bswartz> yeah I think virtualization is essential for some of this security stuff
15:45:27 <bill_az__> for first diagram, there is one service vm per tenant, regardless of how many hv/compute  nodes are in the configuration, right?
15:45:43 <gregsfortytwo1> sure would be nice if we weren't bottlenecking each tenant's fileshares to a single ethernet link though, wouldn't it *sigh*
15:45:48 <bswartz> bill_az__: that's my understanding
15:45:54 <csaba> bill_az__: it's of course a simplified figure
15:46:11 <vponomaryov> bill_az__: one service VM for each share-network
15:46:23 <csaba> or of a simplistic case
15:46:24 <vponomaryov> it will be scheduled by nova
15:46:34 <bill_az__> ok - thats what I was thinking - wanted to be sure we're all on the same page
15:46:57 <bswartz> gregsfortytwo1: yes the VM does become a bottleneck for share traffic
15:47:08 <bswartz> gregsfortytwo1: however it doesn't have to be 1 ethernet link
15:47:32 <bswartz> the admin could arrange for share VMs to go on a server with quad-bonded-10GbE
15:47:47 <bswartz> using nova flavors
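The flavor-based placement bswartz describes could be sketched with a host aggregate plus matching flavor extra specs; all names and metadata keys below are made up for illustration, and this relies on the scheduler having AggregateInstanceExtraSpecsFilter enabled:

```shell
# Hypothetical sketch: steer service VMs onto hosts with bonded 10GbE NICs.
# Aggregate/flavor names and the "nic" key are assumptions, not real config.
nova aggregate-create fast-net nova
nova aggregate-add-host fast-net compute-10gbe-01
nova aggregate-set-metadata fast-net nic=quad10gbe
nova flavor-create manila-service-vm auto 2048 10 2
nova flavor-key manila-service-vm set nic=quad10gbe
```

With that in place, the generic driver would just request the `manila-service-vm` flavor when booting service VMs.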
15:48:30 <esker> bswartz:  does this design allow for a potential future enhancement wherein a share might need be established across multiple tenants?
15:48:40 <bswartz> esker: NO!
15:48:49 <bswartz> shares across tenants are a big no-no in the current design
15:48:52 <esker> So how might that be accommodated?
15:49:01 <esker> concerned about a design that prevents that in the future
15:49:06 <vponomaryov> bswartz: it can be shared, if the network is shared
15:49:12 <bswartz> esker: the tenants would have to work that out between themselves
15:49:25 <bswartz> if they arrange for a network route between them then the share data could flow along that route
15:49:30 <bswartz> but the share would always be owned by 1 tenant
15:49:33 <esker> I would think that the tenants would want Manilla to work that out for them
15:49:43 <esker> sorry... Manila
15:50:01 <esker> Okay, that's fair
15:50:04 <bswartz> esker: it's too hard of a problem to solve in the general case, which is why we've punted
15:50:28 <esker> one tenant will have to own a given share... but could that tenant elect to make it available to others?  Does this design prevent that?
15:50:44 <bswartz> it's just a matter of network connectivity
15:50:47 <esker> Punting is okay for this... but concerned about a design that doesn't allow for it in the future
15:50:53 <bswartz> I guess there is one thing to consider
15:51:02 <bswartz> which is manila's access control scheme at the IP level
15:51:30 <bswartz> we just need to make sure that manila doesn't attempt to limit access requests from IPs that don't belong to the owning tenant
15:51:44 <bswartz> then again manila doesn't really know what those are anyways so it's not an issue
15:52:18 <bswartz> we have to trust that if the owning tenant wishes to grant access to an IP that they know what they're doing
15:52:34 <vponomaryov> esker: Only one tenant sets exports, if network connectivity is present, it can be shared
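The access model being described could look like the following from the owning tenant's side; exact client command names and arguments may differ in the manila client of this era, so treat this as a sketch:

```shell
# Owning tenant grants per-IP access to its share; with a network route in
# place, addresses from another tenant's network can be allowed too.
# <share-id> is a placeholder; command spelling assumed from the cinder-style CLI.
manila access-allow <share-id> ip 10.0.1.0/24   # grant a whole subnet
manila access-list  <share-id>                  # inspect current rules
manila access-deny  <share-id> <access-id>      # revoke a rule
```

Manila itself never validates which tenant an IP belongs to, which matches the "trust the owning tenant" stance above.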
15:53:03 <gregsfortytwo1> and doing anything else would require some kind of shared admin authority between the tenants, and nobody wants to do that
15:53:14 <bswartz> gregsfortytwo1: +1million
15:53:56 <bswartz> okay I think that's settled
15:54:00 <bswartz> just one more thing though
15:54:04 <esker> Right... fair enough.  Of course a single given tenant will have to have sole administrative authority... main concern is provision within Manila's design to allow for sharing to other tenant networks.
15:54:25 <bswartz> csaba: there would be special cases where the nfsd could be run on the hypervisor, as long as the security rules were lax enough that that wouldn't cause problems
15:54:45 <bswartz> csaba: I would consider that an optimization though, not a core use case
15:55:15 <bswartz> #topic open discussion
15:55:21 <bswartz> we've got 5 minutes left
15:55:23 <bswartz> anything else for today?
15:56:01 <gregsfortytwo1> I think csaba is worried about what happens when you get a tenant with 100 VMs trying to share data via a Manila share…think something like Hadoop
15:56:44 <bswartz> I know it's not the most performance optimized design
15:57:04 <bswartz> I'm not fully versed with hadoop but I thought it was a really bad idea to use shared filesystems with it
15:57:18 <bswartz> hadoop is all about bringing the compute close to where the data is
15:57:33 <gregsfortytwo1> yeah, but there's a lot of data-intensive stuff like that
15:57:37 <bswartz> virtualization schemes like openstack make that impossible or at least really difficult
15:57:50 <gregsfortytwo1> and hadoop-on-ec2 is pretty popular as I understand it
15:58:02 <gregsfortytwo1> though that might be mostly using ephemeral storage
15:58:26 <bswartz> that's interesting
15:58:57 <bswartz> I still think hadoop on bare metal would crush the performance of that -- as long as you're okay spending capex funds for hadoop rather than opex funds
15:59:04 <bswartz> but that's a big digression
15:59:08 <bswartz> we won't go there
15:59:25 <bswartz> I'll just post the links one more time
15:59:30 <bswartz> https://www.openstack.org/vote-atlanta/Presentation/welcome-to-manila-the-openstack-file-share-service
15:59:31 <bswartz> https://www.openstack.org/vote-atlanta/Presentation/manila-the-openstack-file-share-service-technology-deep-dive
15:59:40 <bswartz> don't forget to vote
15:59:43 <bswartz> thanks all
15:59:47 <aostapenko> thanks
15:59:52 <vponomaryov> thanks
15:59:52 <aostapenko> bye
16:00:11 <bswartz> #endmeeting