15:01:50 <bswartz> #startmeeting manila
15:01:51 <openstack> Meeting started Thu Oct 10 15:01:50 2013 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:52 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:54 <openstack> The meeting name has been set to 'manila'
15:02:08 <bswartz> hello folks
15:02:12 <yportnova> hi
15:02:13 <caitlin56> hi
15:02:14 <alexpecoraro> hi
15:02:18 <aostapenko> hi
15:02:19 <vponomaryov> hi
15:02:47 <bswartz> err, I just realized that I forgot to update the agenda again
15:03:09 <bswartz> and nobody else put any topics on there so we'll have to wing it again
15:03:22 <bswartz> #topic incubation
15:04:04 <bswartz> I submitted the incubation request for manila just now
15:04:14 <bswartz> the process is actually to create a wiki and then send a link to the TC
15:04:27 <bswartz> #link https://wiki.openstack.org/wiki/Manila_Overview
15:05:04 <bswartz> Anyone is welcome to take a look and fix errors, considering it's a wiki
15:05:40 <bswartz> I got this reviewed by a few people though so I'm reasonably happy with the content -- the main thing I'm unsure about is the list of developers
15:05:52 <bswartz> if I missed anyone please add yourself
15:07:17 <bswartz> I'm pretty confident that we'll be accepted, but it might not happen next week if the TC is wrapped up with Havana release issues
15:07:42 <bswartz> The TC meets on Tuesday afternoons and considers requests during those meetings
15:08:16 <bswartz> there may be some discussion on the openstack-dev list, and you're all welcome to add your opinions
15:08:20 <bswartz> enough about that though
15:08:30 <bswartz> #topic networking
15:09:05 <alexpecoraro> just quickly wanted to add - related to incubation
15:09:21 <bswartz> so we have some news on the challenges we face getting networking implemented properly
15:09:36 <alexpecoraro> shahir wanted me to let you know that he plans to submit a blueprint for an EMC Manila driver either today or tomorrow
15:09:50 <bswartz> yportnova: do you want to update the group on the issues you've discovered?
15:10:04 <yportnova> bswartz: yes
15:10:06 <bswartz> alexpecoraro: that's great
15:10:36 <yportnova> I was looking at how to implement the network plumbing case
15:10:42 <bswartz> alexpecoraro: I didn't put his name on the list of developers, I used yours
15:11:57 <yportnova> I found two solutions: 1. using Neutron's provider network extension 2. configuring VLAN interfaces on the storage (according to the tenant network's segmentation id)
15:12:30 <alexpecoraro> i'm not sure if he wanted to be listed or not, but i'll ask him to add himself and he'll get his name right - it's shamail tahir, not shahir tamail :)
15:12:35 <bswartz> yportnova: what about the problems we have which are common to the Ironic project?
15:14:00 <yportnova> We have storage that should be connected to a private network; they have bare-metal instances that also should be connected to a private network
15:14:26 <yportnova> I think the networking model for us and for them can be the same
15:14:40 <bswartz> yportnova: and as far as we know are the Neutron APIs that we need implemented or not?
15:14:45 <caitlin56> Fundamentally, we need to be able to add a VNIC to a non-VM, correct?
15:15:29 <yportnova> caitlin56: yes
15:16:19 <bswartz> caitlin56: yes, and Ironic has the same requirement
15:16:31 <yportnova> bswartz: Provider Network is implemented and we can use it
15:16:55 <bswartz> yportnova: is that what the Ironic team plans to do as well?
15:18:01 <yportnova> bswartz: provider network approach is not suitable for Ironic
15:18:04 <caitlin56> What identification of a newly attached network is ironic going to provide, and is that adequate for a NAS server to identify what file systems it is supposed to export to that network?
15:19:11 <bswartz> caitlin56: the issue here is the low level network connectivity
15:19:21 <bswartz> higher level stuff like filesystems is not relevant to Neutron
15:19:44 <caitlin56> Yes, and we may have to fill that gap ourselves.
15:19:46 <bswartz> the basic workflow needs to go like this:
15:20:15 <bswartz> 1) Ask Neutron to provision IPs for us out of the tenant network (1 per virtual NIC)
15:21:12 <bswartz> 2) Create the virtual NICs on the storage controller(s) using the assigned IPs
15:21:39 <bswartz> 3) Tell neutron where those virtual NICs are cabled in so it can plumb them through to the tenant network
15:22:08 <bswartz> 1 is existing functionality, 2 is the job of manila and the backend drivers, 3 is something I'm not sure about yet
15:22:31 <bswartz> step 4 would be to configure the backend to export the filesystems over the virtual NICs
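As an illustration of step 1, a minimal sketch assuming the python-neutronclient v2.0 API of the era; the credentials, network UUID, and port name are placeholders, not anything Manila actually ships.

    # Sketch of step 1: ask Neutron for a port (IP + MAC) on the tenant network.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='manila',
                                    password='secret',
                                    tenant_name='service',
                                    auth_url='http://controller:5000/v2.0')

    port = neutron.create_port({'port': {
        'network_id': 'TENANT-NET-UUID',    # the tenant's Neutron network
        'name': 'manila-share-if-1',
        'device_owner': 'manila:share',     # tag the port so we can find it later
    }})['port']

    fixed_ip = port['fixed_ips'][0]['ip_address']  # IP to assign to the virtual NIC
    mac = port['mac_address']                      # MAC Neutron expects to see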
15:23:30 <caitlin56> bswartz: what information would a hypervisor receive for the equivalent of steps 2 and 3 if this were a VNIC for a VM? Is this a case of just needing the same info?
15:23:49 <bswartz> yportnova: can you explain how we can do 3 with the provider network APIs? if not here then maybe produce a writeup in the form of a blueprint?
15:24:03 <jcorbin> For step 3, when a VM is instantiated Neutron will call the ML2 mechanism drivers so that the switch can configure the VLAN on the switch port connected to the host containing the VM.
15:24:39 <bswartz> jcorbin: how is the switch port determined for a nova instance?
15:24:57 <yportnova> bswartz: the one case I described is: 1) Ask neutron to provision us IPs of the tenant network (1 per virtual NIC) and get segmentation_id (VLAN) of network 2) Create corresponding vlan interfaces on storage 3) Create sec-groups allowing incoming traffic to VMs.
15:25:37 <yportnova> bswartz: and we will have 2-way communication between the VMs and the storage
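A sketch of the flow yportnova describes, assuming admin credentials and the provider extension are available; the parent interface, addresses, and prefix length are placeholders.

    # Provider-network approach: read the tenant network's VLAN id and bring up
    # a matching VLAN sub-interface on the storage host.
    import subprocess
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    net = neutron.show_network('TENANT-NET-UUID')['network']
    vlan_id = net['provider:segmentation_id']   # e.g. 100 when network_type is 'vlan'
    fixed_ip = '10.0.0.5'                       # the address Neutron allocated earlier

    parent = 'eth1'                             # physical NIC trunked to the switch
    vif = '%s.%d' % (parent, vlan_id)
    subprocess.check_call(['ip', 'link', 'add', 'link', parent,
                           'name', vif, 'type', 'vlan', 'id', str(vlan_id)])
    subprocess.check_call(['ip', 'addr', 'add', fixed_ip + '/24', 'dev', vif])
    subprocess.check_call(['ip', 'link', 'set', vif, 'up'])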
15:25:43 <jcorbin> I can only speak for Arista switches but we store the topology and can figure out which host is connected to which port by the mac address.
15:26:13 <bswartz> okay well the storage controllers will definitely be able to supply MAC addrs for the virtual NICs they use
15:26:28 <bswartz> so if that's all that's needed by neutron then this might be fairly simple
15:26:34 <bswartz> we just need to prototype it and prove that it works
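If the port-binding extension turns out to be enough, the step-3 plumbing might be prototyped roughly like this; this is only a guess at a prototype, not an agreed design, and everything except the mac_address and binding:host_id attribute names is a placeholder.

    # Guess at step 3: create the port with the storage controller's MAC and
    # bind it to the node whose switch port should be programmed, so the ML2
    # mechanism drivers (as jcorbin describes) can configure the VLAN there.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')

    port = neutron.create_port({'port': {
        'network_id': 'TENANT-NET-UUID',
        'mac_address': 'fa:16:3e:aa:bb:cc',   # MAC reported by the backend driver
    }})['port']

    # portbindings extension: admin-only attribute naming the hosting node.
    neutron.update_port(port['id'],
                        {'port': {'binding:host_id': 'storage-ctrl-1'}})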
15:27:14 <jcorbin> bswartz: I would like to help if possible.
15:27:42 <bswartz> obviously the driver interface will need some new methods to communicate with the storage controller to create virtual NICs and get their MAC addrs, etc
15:28:16 <caitlin56> Actual switches should not be involved in creating VNICs. You are *adding* VLANs to existing ethernet links.
15:28:16 <bswartz> jcorbin: that would be awesome -- grab me 1on1 after the meeting
15:28:48 <caitlin56> So the storage appliance is effectively pretending to be a Hypervisor and a Virtual Switch -- hopefully without having to fully pretend.
15:28:59 <bswartz> caitlin56: yes, it's unclear to me if it makes sense to allow neutron to pick the VLAN number or to let the storage controller pick the VLAN number
15:29:36 <caitlin56> Wouldn't neutron *have* to pick the VLAN if it is also attaching clients who are VMs?
15:29:42 <bswartz> obviously neutron does need to pick the IP address, subnet mask, and gw IP
15:30:08 <bswartz> caitlin56: most likely, unless there are switches that can perform VLAN-translation
15:30:33 <bswartz> I'm not an expert on switch technology -- VLAN translation seems like a simple trick to me but maybe it's harder than I think
15:30:37 <caitlin56> VLAN translation leads to risk of VLAN-routing, which can produce loops. I'd avoid that.
15:31:21 <bswartz> okay in that case let's assume that neutron will be telling us what VLAN to use, in addition to the IP/mask/gw
15:31:28 <caitlin56> Not that we would be creating that risk, but some network admins will be very suspicious of VLAN translation.
15:32:39 <jcorbin> caitlin56: The tenant network segmentation id is the vlan number assuming the segmentation type is vlan.
15:32:52 <bswartz> okay so ignoring the issues around how we implement this in the backend driver itself, do we know enough to prototype this necessary interactions between manila and neutron?
15:33:21 <jcorbin> caitlin56: When you provision the VM you specify the tenant network(s) to connect it to which indirectly specifies the VLAN(s) being used.
15:33:38 <caitlin56> What stable network identifiers will we be promised by neutron?
15:33:53 <bswartz> jcorbin: how does openstack handle >4094 tenants when vlan segmentation is used?
15:34:31 <jcorbin> caitlin56: Do you mean network segmentation types?
15:35:04 <caitlin56> jcorbin: no, how do we recognize the "XYZ Corp" tenant network as being the XYZ Corp tenant network?
15:35:10 <jcorbin> bswartz: It doesn't; that is the limit. VXLANs will get you beyond 4094 tenant networks.
15:36:11 <bswartz> jcorbin: is anyone doing 802.1 QinQ to get around the limit?
15:36:59 <jcorbin> bswartz: I do not know if anyone is doing 802.1 QinQ.
15:37:13 <bswartz> okay well that's enough on this topic for now I think
15:37:23 <caitlin56> bswartz: I believe the current 802.1 official solution is Provider Bridging. Which I haven't seen being adopted by virtualization folks at all.
15:37:26 <bswartz> this is a rich area and we'll get many chances to revisit it
15:37:47 <bswartz> I want to talk about...
15:37:52 <bswartz> #topic LVM driver
15:38:23 <bswartz> So the existing LVM drivers for Manila are basically single-tenant drivers
15:38:41 <bswartz> they could support multiple tenants but only if you use flat networking
15:39:08 <caitlin56> bswartz: clarify, what do you mean by "LVM driver for manila", LVM would most naturally be for Cinder.
15:39:35 <bswartz> We need to either (1) extend the LVM driver to support multitenancy or (2) create another "generic" driver that supports multitenancy
15:40:12 <bswartz> caitlin56: I'm referring to the generic driver that uses local LVM storage and the Linux NFSD or Samba
15:40:40 <caitlin56> Running generic Linux NFSD/Samba over LVM as provided by Cinder?
15:40:41 <bswartz> the generic drivers are important because devstack will use them and all of the tempest tests will rely on them working well
15:40:51 <bswartz> caitlin56: no there's no dependency on cinder
15:41:24 <bswartz> The generic drivers are also important as reference implementations for other driver authors
15:41:39 <caitlin56> Generic Linux NAS over strictly local native Linux LVM?
15:41:54 <bswartz> and lastly, I expect some poor souls will actually put them into production on white box hardware, much like people currently do with the cinder LVM drivers
15:42:41 <bswartz> caitlin56: last time I checked it was strictly local LVM
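For concreteness, a rough sketch of what "local LVM plus the kernel NFS server" amounts to; the volume group, size, paths, and export options are made up and are not what the driver literally runs.

    # Carve a logical volume per share, put a filesystem on it, and export it.
    import subprocess

    VG = 'manila_vg'
    share = 'share_demo'
    mountpoint = '/shares/%s' % share

    subprocess.check_call(['lvcreate', '-L', '2G', '-n', share, VG])
    subprocess.check_call(['mkfs.ext4', '/dev/%s/%s' % (VG, share)])
    subprocess.check_call(['mkdir', '-p', mountpoint])
    subprocess.check_call(['mount', '/dev/%s/%s' % (VG, share), mountpoint])

    # Allow a client network to reach the new NFS export.
    subprocess.check_call(['exportfs', '-o', 'rw,no_subtree_check',
                           '10.0.0.0/24:%s' % mountpoint])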
15:43:07 <bswartz> we're of course free to implement whatever we need to here -- maybe a driver that layers on top of cinder would not be a bad thing
15:43:30 <caitlin56> The real question here is whether this reference platform requires partitions to be assigned to tenants, or just file systems from shared partitions.
15:43:38 <bswartz> simplicity is important though -- insofar as the drivers will be used as reference implementations and used by automated testing
15:44:15 <bswartz> I think we need to have at least one generic driver that can support network partitions
15:44:35 <bswartz> that driver may not be the one that ends up running in devstack by default, depending on how complex it is to make it work
15:44:40 <bswartz> and that's exactly what I'd like to talk about
15:45:21 <caitlin56> So does each tenant have a root mount point in a shared file system, or its own root device?
15:45:41 <bswartz> I don't believe it's possible to have 1 Linux box join 2 kerberos domains at the same time, or to have different daemons use different DNS servers
15:45:48 <caitlin56> The latter is clearly simpler, but at most it is adequate for testing.
15:46:45 <bswartz> caitlin56: regarding the storage side of this, they could only share filesystems if the FS had quota enforcement such that one tenant could not steal another tenant's space
15:47:04 <bswartz> I suspect that separate filesystems are safer from that standpoint
15:47:29 <bswartz> not to mention general paranoia about mixing tenant data in a common filesystem
15:47:49 <bswartz> from the DNS/domain perspective though, I'm more worried
15:47:51 <caitlin56> What's the in-kernel availability of anything like Solaris zones?
15:48:27 <bswartz> I'd rather not be forced to implement full virtualization of the NFS or Samba daemons to implement multitenancy in the generic drivers
15:48:59 <bswartz> caitlin56: I was hoping someone would know the answer to that :-P
15:49:18 <bswartz> I haven't researched the topic myself
15:49:38 <caitlin56> I know there are products, but I don't know their relationship to kernel.org Linux.
15:49:46 <bswartz> glenng is going to look at it soon, but if anyone has any ideas/suggestions we'd love to hear them
15:50:45 <bswartz> My first thought is usermode Linux, but I have zero familiarity with that
15:50:58 <caitlin56> With Solaris zones we can run the daemons in a zone and access devices that are exported from the core. I believe you can do the same in Linux, but I don't know the details.
15:51:22 <bswartz> have solaris zones or BSD jails been ported to Linux directly?
15:51:47 <caitlin56> There are equivalent products. I don't know if they are mainline though.
15:51:55 <bswartz> it seems impossible that this problem hasn't been solved somewhere by somebody
15:52:09 <bswartz> I just unfortunately don't know the solution yet
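One candidate worth evaluating for the zones/jails question is Linux network namespaces (iproute2's ip netns), which also bears on the per-daemon DNS concern above; the sketch below is only a hypothetical direction, with placeholder names and addresses, and it says nothing about confining the kernel NFS server.

    # Give each tenant its own network namespace, interface, and resolv.conf,
    # then run a userspace daemon such as smbd inside it.
    import subprocess

    ns = 'tenant_a'
    vif = 'eth1.100'              # the tenant's VLAN interface from earlier steps

    subprocess.check_call(['ip', 'netns', 'add', ns])
    subprocess.check_call(['ip', 'link', 'set', vif, 'netns', ns])
    subprocess.check_call(['ip', 'netns', 'exec', ns,
                           'ip', 'addr', 'add', '10.0.0.5/24', 'dev', vif])
    subprocess.check_call(['ip', 'netns', 'exec', ns,
                           'ip', 'link', 'set', vif, 'up'])

    # 'ip netns exec' bind-mounts /etc/netns/<ns>/resolv.conf over /etc/resolv.conf,
    # so daemons in each namespace can use a different DNS server.
    subprocess.check_call(['mkdir', '-p', '/etc/netns/%s' % ns])
    with open('/etc/netns/%s/resolv.conf' % ns, 'w') as f:
        f.write('nameserver 10.0.0.2\n')

    # Per-tenant Samba instance with its own config (userspace daemon only).
    subprocess.check_call(['ip', 'netns', 'exec', ns,
                           'smbd', '-s', '/etc/samba/%s.conf' % ns])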
15:52:48 <bswartz> anyways look for more answers to these problems as we work on them in the coming weeks
15:53:23 <bswartz> I'd like to have concrete plans by Hong Kong so we can go into Icehouse and just write the code
15:53:29 <bswartz> that's all I have
15:53:34 <bswartz> #topic open discussion
15:53:38 <caitlin56> Would it be acceptable if the reference file server launched VMs for each tenant under a Hypervisor?
15:53:58 <bswartz> caitlin56: acceptable yes, but that's pretty heavyweight
15:54:16 <caitlin56> Way too heavyweight for a product, but possibly acceptable for testing.
15:54:19 <bswartz> caitlin56: that *wouldn't* be acceptable in a devstack/tempest automated testing environment I don't think
15:54:49 <bswartz> we don't want the manila tests to suck up way more resources than the rest of openstack or everyone will hate us
15:54:57 <caitlin56> But we could take that as a starting point, and look for ways to improve on it.
15:55:03 <bswartz> possibly
15:55:33 <bswartz> anyone have other topics to add?
15:56:04 <bswartz> next week let's try to get a real agenda together (me included)
15:56:33 <bswartz> okay thanks everyone
15:56:40 <bswartz> #endmeeting