15:01:50 #startmeeting manila
15:01:51 Meeting started Thu Oct 10 15:01:50 2013 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:54 The meeting name has been set to 'manila'
15:02:08 hello folks
15:02:12 hi
15:02:13 hi
15:02:14 hi
15:02:18 hi
15:02:19 hi
15:02:47 err, I just realized that I forgot to update the agenda again
15:03:09 and nobody else put any topics on there, so we'll have to wing it again
15:03:22 #topic incubation
15:04:04 I submitted the incubation request for Manila just now
15:04:14 the process is actually to create a wiki and then send a link to the TC
15:04:27 #link https://wiki.openstack.org/wiki/Manila_Overview
15:05:04 Anyone is welcome to take a look and fix errors, considering it's a wiki
15:05:40 I got this reviewed by a few people, though, so I'm reasonably happy with the content -- the main thing I'm unsure about is the list of developers
15:05:52 if I missed anyone, please add yourself
15:07:17 I'm pretty confident that we'll be accepted, but it might not happen next week if the TC is wrapped up with Havana release issues
15:07:42 The TC meets on Tuesday afternoons and considers requests during those meetings
15:08:16 there may be some discussion on the openstack-dev list, and you're all welcome to add your opinions
15:08:20 enough about that though
15:08:30 #topic networking
15:09:05 just quickly wanted to add - related to incubation
15:09:21 so we have some news on the challenges we face getting networking implemented properly
15:09:36 shahir wanted me to let you know that he plans to submit a blueprint for an EMC Manila driver either today or tomorrow
15:09:50 yportnova: do you want to update the group on the issues you've discovered?
15:10:04 bswartz: yes
15:10:06 alexpecoraro: that's great
15:10:36 I was looking at how to implement the networking plumbing case
15:10:42 alexpecoraro: I didn't put his name on the list of developers, I used yours
15:11:57 I found two solutions: 1. using Neutron's provider network extensions 2. configuring VLAN interfaces on the storage (according to the tenant network's segmentation id)
15:12:30 i'm not sure if he wanted to be listed or not, but i'll ask him to add himself and he'll get his name right - it's Shamail Tahir, not Shahir Tamail :)
15:12:35 yportnova: what about the problems we have which are common to the Ironic project?
15:14:00 We have storage that should be connected to a private network; they have bare metal instances that also should be connected to a private network
15:14:26 I think the networking model for us and for them can be the same
15:14:40 yportnova: and as far as we know, are the Neutron APIs that we need implemented or not?
15:14:45 Fundamentally, we need to be able to add a VNIC to a non-VM, correct?
15:15:29 caitlin56: yes
15:16:19 caitlin56: yes, and Ironic has the same requirement
15:16:31 bswartz: Provider Network is implemented and we can use it
15:16:55 yportnova: is that what the Ironic team plans to do as well?
15:18:01 bswartz: the provider network approach is not suitable for Ironic
15:18:04 What identification of a newly attached network is Ironic going to provide, and is that adequate for a NAS server to identify what file systems it is supposed to export to that network?
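[Editorial aside: a minimal sketch of the second option yportnova mentions above -- reading a tenant network's segmentation id through Neutron's provider extension so the backend can configure a matching VLAN interface. It assumes python-neutronclient and admin credentials; the endpoint, credentials, and function name are placeholders, not part of any agreed design.]

```python
# Sketch: look up provider network attributes for a tenant network so the
# storage backend can create a VLAN interface with the same segmentation id.
# Credentials and URLs are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

def get_segmentation_id(network_id):
    """Return (network_type, segmentation_id) for a tenant network.

    Requires admin credentials, since the provider:* attributes are
    admin-only, and assumes the deployment uses VLAN segmentation.
    """
    net = neutron.show_network(network_id)['network']
    return (net.get('provider:network_type'),
            net.get('provider:segmentation_id'))

# net_type, vlan_id = get_segmentation_id(tenant_network_id)
# The driver could then configure e.g. eth0.<vlan_id> on the storage controller.
```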
15:19:11 caitlin56: the issue here is the low level network connectivity
15:19:21 higher level stuff like filesystems is not relevant to Neutron
15:19:44 Yes, and we may have to fill that gap ourselves.
15:19:46 the basic workflow needs to go like this:
15:20:15 1) Ask neutron to provision IPs for us out of the tenant network (1 per virtual NIC)
15:21:12 2) Create the virtual NICs on the storage controller(s) using the assigned IPs
15:21:39 3) Tell neutron where those virtual NICs are cabled in so it can plumb them through to the tenant network
15:22:08 1 is existing functionality, 2 is the job of manila and the backend drivers, 3 is something I'm not sure about yet
15:22:31 step 4 would be to configure the backend to export the filesystems over the virtual NICs
15:23:30 bswartz: what information would a hypervisor receive for the equivalent of steps 2 and 3 if this were a VNIC for a VM? Is this a case of just needing the same info?
15:23:49 yportnova: can you explain how we can do 3 with the provider network APIs? if not here, then maybe produce a writeup in the form of a blueprint?
15:24:03 For step 3, when a VM is instantiated, Neutron will call the ML2 mechanism drivers so that the switch can configure the VLAN on the switch port connected to the host containing the VM.
15:24:39 jcorbin: how is the switch port determined for a nova instance?
15:24:57 bswartz: the one case I described is: 1) ask neutron to provision us IPs on the tenant network (1 per virtual NIC) and get the segmentation_id (VLAN) of the network 2) create corresponding VLAN interfaces on the storage 3) create sec-groups allowing incoming traffic to VMs.
15:25:37 bswartz: and we will have 2-way communication between VM and storage
15:25:43 I can only speak for Arista switches, but we store the topology and can figure out which host is connected to which port by the MAC address.
15:26:13 okay, well the storage controllers will definitely be able to supply MAC addrs for the virtual NICs they use
15:26:28 so if that's all that's needed by neutron, then this might be fairly simple
15:26:34 we just need to prototype it and prove that it works
15:27:14 bswartz: I would like to help if possible.
15:27:42 obviously the driver interface will need some new methods to communicate with the storage controller to create virtual NICs and get their MAC addrs, etc
15:28:16 Actual switches should not be involved in creating VNICs. You are *adding* VLANs to existing ethernet links.
15:28:16 jcorbin: that would be awesome -- grab me 1on1 after the meeting
15:28:48 So the storage appliance is effectively pretending to be a hypervisor and a virtual switch -- hopefully without having to fully pretend.
15:28:59 caitlin56: yes, it's unclear to me if it makes sense to allow neutron to pick the VLAN number or to let the storage controller pick the VLAN number
15:29:36 Wouldn't neutron *have* to pick the VLAN if it is also attaching clients who are VMs?
15:29:42 obviously neutron does need to pick the IP address, subnet mask, and gw IP
15:30:08 caitlin56: most likely, unless there are switches that can perform VLAN translation
15:30:33 I'm not an expert on switch technology -- VLAN translation seems like a simple trick to me, but maybe it's harder than I think
15:30:37 VLAN translation leads to the risk of VLAN routing, which can produce loops. I'd avoid that.
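[Editorial aside: a rough sketch of steps 1 and 3 of the workflow described above, assuming python-neutronclient and Neutron's port-binding extension. Whether binding:host_id is actually sufficient for step 3 is exactly the open question in the discussion; all names, credentials, and arguments are placeholders, not a confirmed Manila design.]

```python
# Sketch: ask Neutron for an IP on the tenant network, bound to the MAC the
# storage controller will use for its virtual NIC, and record where that NIC
# is attached. Step 2 (creating the NIC) is backend-specific; step 4 is
# exporting the filesystems over it.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

def plumb_virtual_nic(tenant_network_id, tenant_id, vnic_mac, host_id):
    # Step 1: provision an IP out of the tenant network for this virtual NIC.
    port = neutron.create_port({'port': {
        'network_id': tenant_network_id,
        'tenant_id': tenant_id,
        'mac_address': vnic_mac,
        'binding:host_id': host_id,  # step 3 (maybe): where the NIC is "cabled in"
        'admin_state_up': True,
    }})['port']

    fixed_ip = port['fixed_ips'][0]
    subnet = neutron.show_subnet(fixed_ip['subnet_id'])['subnet']

    # Step 2 happens in the backend driver: create the virtual NIC on the
    # storage controller with this IP/mask/gateway (and VLAN, if told).
    return {'ip': fixed_ip['ip_address'],
            'cidr': subnet['cidr'],
            'gateway': subnet['gateway_ip'],
            'mac': port['mac_address']}
```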
15:31:21 okay, in that case let's assume that neutron will be telling us what VLAN to use, in addition to the IP/mask/gw
15:31:28 Not that we would be creating that risk, but some network admins will be very suspicious of VLAN translation.
15:32:39 caitlin56: The tenant network segmentation id is the VLAN number, assuming the segmentation type is vlan.
15:32:52 okay, so ignoring the issues around how we implement this in the backend driver itself, do we know enough to prototype the necessary interactions between manila and neutron?
15:33:21 caitlin56: When you provision the VM, you specify the tenant network(s) to connect it to, which indirectly specifies the VLAN(s) being used.
15:33:38 What stable network identifiers will we be promised by neutron?
15:33:53 jcorbin: how does openstack handle >4094 tenants when vlan segmentation is used?
15:34:31 caitlin56: Do you mean network segmentation types?
15:35:04 jcorbin: no, how do we recognize the "XYZ Corp" tenant network as being the XYZ Corp tenant network?
15:35:10 bswartz: It doesn't; that is the limit. VXLANs will get you beyond 4094 tenant networks.
15:36:11 jcorbin: is anyone doing 802.1 QinQ to get around the limit?
15:36:59 bswartz: I do not know if anyone is doing 802.1 QinQ.
15:37:13 okay, well that's enough on this topic for now I think
15:37:23 bswartz: I believe the current official 802.1 solution is Provider Bridging, which I haven't seen being adopted by virtualization folks at all.
15:37:26 this is a rich area and we'll get many chances to revisit it
15:37:47 I want to talk about...
15:37:52 #topic LVM driver
15:38:23 So the existing LVM drivers for Manila are basically single-tenant drivers
15:38:41 they could support multiple tenants, but only if you use flat networking
15:39:08 bswartz: clarify, what do you mean by "LVM driver for manila"? LVM would most naturally be for Cinder.
15:39:35 We need to either (1) extend the LVM driver to support multitenancy or (2) create another "generic" driver that supports multitenancy
15:40:12 caitlin56: I'm referring to the generic driver that uses local LVM storage and the Linux NFSD or Samba
15:40:40 Running generic Linux NFSD/Samba over LVM as provided by Cinder?
15:40:41 the generic drivers are important because devstack will use them and all of the tempest tests will rely on them working well
15:40:51 caitlin56: no, there's no dependency on cinder
15:41:24 The generic drivers are also important as reference implementations for other driver authors
15:41:39 Generic Linux NAS over strictly local native Linux LVM?
15:41:54 and lastly, I expect some poor souls will actually put them into production on white box hardware, much like people currently do with the cinder LVM drivers
15:42:41 caitlin56: last time I checked, it was strictly local LVM
15:43:07 we're of course free to implement whatever we need to here -- maybe a driver that layers on top of cinder would not be a bad thing
15:43:30 The real question here is whether this reference platform requires partitions to be assigned to tenants, or just file systems from shared partitions.
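[Editorial aside: a minimal sketch of the "separate filesystem per share" layout being weighed here for the generic LVM/NFS driver -- one logical volume per share, so one tenant cannot consume another tenant's space. The volume group name, paths, and function names are hypothetical and this is not the actual Manila driver interface.]

```python
# Sketch: per-share logical volumes exported over NFS from local LVM storage.
# All names are placeholders; commands require root and an existing volume group.
import subprocess

VOLUME_GROUP = 'manila-shares'

def create_share(name, size_gb):
    dev = '/dev/%s/%s' % (VOLUME_GROUP, name)
    mount_point = '/shares/%s' % name
    # One logical volume and filesystem per share keeps tenants' space separate.
    subprocess.check_call(['lvcreate', '-L', '%dG' % size_gb,
                           '-n', name, VOLUME_GROUP])
    subprocess.check_call(['mkfs.ext4', dev])
    subprocess.check_call(['mkdir', '-p', mount_point])
    subprocess.check_call(['mount', dev, mount_point])
    return mount_point

def allow_access(mount_point, client_ip):
    # Export the share read/write to a single client over NFS.
    subprocess.check_call(['exportfs', '-o', 'rw,no_subtree_check',
                           '%s:%s' % (client_ip, mount_point)])
```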
15:43:38 simplicity is important though -- insofar as the drivers will be used as reference implementations and used by automated testing
15:44:15 I think we need to have at least one generic driver that can support network partitions
15:44:35 that driver may not be the one that ends up running in devstack by default, depending on how complex it is to make it work
15:44:40 and that's exactly what I'd like to talk about
15:45:21 So does each tenant have a root mount point in a shared file system, or its own root device?
15:45:41 I don't believe it's possible to have 1 Linux box join 2 kerberos domains at the same time, or to have different daemons use different DNS servers
15:45:48 The latter is clearly simpler, but it is adequate for at most testing.
15:46:45 caitlin56: regarding the storage side of this, they could only share filesystems if the FS had quota enforcement such that one tenant could not steal another tenant's space
15:47:04 I suspect that separate filesystems are safer from that standpoint
15:47:29 not to mention general paranoia about mixing tenant data in a common filesystem
15:47:49 from the DNS/domain perspective though, I'm more worried
15:47:51 What's the in-kernel availability of anything like Solaris zones?
15:48:27 I'd rather not be forced to implement full virtualization of the NFS or Samba daemons to implement multitenancy in the generic drivers
15:48:59 caitlin56: I was hoping someone would know the answer to that :-P
15:49:18 I haven't researched the topic myself
15:49:38 I know there are products, but I don't know their relationship to kernel.org Linux.
15:49:46 glenng is going to look at it soon, but if anyone has any ideas/suggestions we'd love to hear them
15:50:45 My first thought is user-mode Linux, but I have zero familiarity with that
15:50:58 With Solaris zones we can run the daemons in a zone and access devices that are exported from the core. I believe you can do the same in Linux, but I don't know the details.
15:51:22 have solaris zones or BSD jails been ported to Linux directly?
15:51:47 There are equivalent products. I don't know if they are mainline though.
15:51:55 it seems impossible that this problem hasn't been solved somewhere by somebody
15:52:09 I just unfortunately don't know the solution yet
15:52:48 anyway, look for more answers to these problems as we work on them in the coming weeks
15:53:23 I'd like to have concrete plans by Hong Kong so we can go into Icehouse and just write the code
15:53:29 that's all I have
15:53:34 #topic open discussion
15:53:38 Would it be acceptable if the reference file server launched VMs for each tenant under a hypervisor?
15:53:58 caitlin56: acceptable yes, but that's pretty heavyweight
15:54:16 Way too heavyweight for a product, but possibly acceptable for testing.
15:54:19 caitlin56: that *wouldn't* be acceptable in a devstack/tempest automated testing environment, I don't think
15:54:49 we don't want the manila tests to suck up way more resources than the rest of openstack, or everyone will hate us
15:54:57 But we could take that as a starting point, and look for ways to improve on it.
15:55:03 possibly
15:55:33 anyone have other topics to add?
15:56:04 next week let's try to get a real agenda together (me included)
15:56:33 okay, thanks everyone
15:56:40 #endmeeting
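[Editorial aside: a sketch of the lightweight-isolation idea left open above -- Linux network namespaces (mainline kernel) giving each tenant's file-serving daemon its own network stack and VLAN interface without a full VM. This is not a tested design; whether kernel NFSD can be isolated this way (versus a userspace server such as Samba) remained an open question at the meeting, and all names below are placeholders.]

```python
# Sketch: create a per-tenant network namespace with its own VLAN sub-interface,
# as a lighter alternative to running a VM per tenant. Requires root.
import subprocess

def run(cmd):
    subprocess.check_call(cmd.split())

def create_tenant_netns(tenant, parent_if, vlan_id, ip_cidr):
    ns = 'manila-%s' % tenant
    vif = '%s.%d' % (parent_if, vlan_id)
    run('ip netns add %s' % ns)
    # VLAN sub-interface on the physical NIC, moved into the tenant namespace.
    run('ip link add link %s name %s type vlan id %d' % (parent_if, vif, vlan_id))
    run('ip link set %s netns %s' % (vif, ns))
    run('ip netns exec %s ip addr add %s dev %s' % (ns, ip_cidr, vif))
    run('ip netns exec %s ip link set %s up' % (ns, vif))
    return ns

# A per-tenant Samba instance could then be started inside the namespace, e.g.:
#   ip netns exec manila-<tenant> smbd -s /etc/manila/<tenant>/smb.conf
```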