15:01:08 #startmeeting manila
15:01:09 Meeting started Thu Jan 16 15:01:08 2014 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:12 The meeting name has been set to 'manila'
15:01:15 hi
15:01:19 hi
15:01:23 hello
15:01:25 hello everyone
15:01:27 hi
15:01:29 hello
15:02:16 so I haven't updated the agenda, as usual
15:02:30 my previous meeting ran right into this one
15:03:01 let's jump right into development status -- I know a number of exciting things are getting close to completion
15:03:07 #topic dev status
15:03:29 https://review.openstack.org/#/c/66001/
15:03:39 https://review.openstack.org/#/c/62917/
15:03:41 hi
15:03:52 security services are merged
15:03:58 yportnova/vponomaryov: please share the updates we discussed on Monday
15:04:06 yportnova: thx
15:04:14 Dev status:
15:04:19 1) network-api
15:04:25 a) share-networks (approved name from previous meeting)
15:04:25 share-network on server side - https://review.openstack.org/#/c/66587/
15:04:25 share-network on client side - https://review.openstack.org/#/c/59466/
15:04:25 b) security-services has already been merged - https://review.openstack.org/#/c/66001/
15:04:25 c) api for linking 'a' and 'b' - TODO
15:04:52 2) Multitenant generic driver (draft / work in progress) - https://review.openstack.org/#/c/67182/
15:05:20 3) Multitenant NetApp driver (work in progress) - https://review.openstack.org/#/c/59100/
15:05:26 4) manila's tempest code has been merged into %manila_project%/contrib/tempest
15:05:26 it has all currently existing tests for manila
15:05:26 change to openstack-infra/config to enable tempest from that plugin for our 'devstack-job' - https://review.openstack.org/#/c/66846/
15:05:35 and a reminder for those who don't know: the generic driver will use cinder+nova to create VMs and storage on the tenant network and export it using NFSD or Samba
15:06:01 that's all for status
15:06:13 vponomaryov: that was awesome, ty
15:06:52 so this reminds me
15:07:10 bswartz: so partitions (as opposed to VMs) will be supported when nova supports them?
15:07:15 there is some overlap between the current generic driver approach and the so-called bridged networking approach
15:07:30 but first, any questions on the current status?
15:07:46 caitlin56: what do you mean by partitions?
15:07:55 do you mean disk partitions?
15:08:19 What Solaris calls zones: "lite" VMs.
15:08:38 oh, you mean something like an LXC-based VM
15:08:52 bswartz: yes
15:09:36 vponomaryov and bswartz: is the NetApp reference driver leveraging vFiler-assisted multi-tenancy while the generic driver uses hypervisor-assisted multi-tenancy? (I think the answer is yes but wanted to confirm.) Is the delivery timeline the same, or will one be complete before the other?
15:09:40 yeah, that's not something I've looked into -- I'm not sure how well nova supports LXC/OpenVZ -- but it doesn't matter so much for us
15:10:20 Using those hypervisors is significantly more efficient for the kinds of things manila needs to do, but it's just a performance optimization compared to full virtualization
15:10:24 So no specific issue was encountered, just a determination that this is probably complex enough that nova should handle it.
15:10:26 ?
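For context, a minimal sketch (not manila's actual driver code) of what the generic driver status above implies: boot a service VM with nova, back the share with a cinder volume, and attach it so the VM can export it over NFS or Samba. Credentials, image, flavor, and network IDs below are hypothetical placeholders; polling and error handling are omitted.

    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    # Hypothetical credentials and resource IDs, for illustration only.
    USER, PASSWORD, TENANT = 'manila', 'secret', 'service'
    AUTH_URL = 'http://keystone:5000/v2.0'
    SERVICE_IMAGE = 'image-id-with-nfsd-and-samba'
    SERVICE_FLAVOR = 'flavor-id'
    TENANT_NET_ID = 'tenant-network-id'

    nova = nova_client.Client(USER, PASSWORD, TENANT, AUTH_URL)
    cinder = cinder_client.Client(USER, PASSWORD, TENANT, AUTH_URL)

    # Boot the service VM with a NIC on the tenant's network.
    vm = nova.servers.create('manila-service-vm', SERVICE_IMAGE,
                             SERVICE_FLAVOR,
                             nics=[{'net-id': TENANT_NET_ID}])

    # Back the share with a cinder volume and attach it to the VM; the VM
    # then formats and mounts it and exports the mount point.
    vol = cinder.volumes.create(10, display_name='share-backing-volume')
    nova.volumes.create_server_volume(vm.id, vol.id, '/dev/vdb')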
15:10:38 or rather, a scalability optimization
15:11:00 shamail: yes, the netapp driver relies on vservers
15:11:26 shamail: the netapp driver and the generic driver are being worked on in parallel
15:11:43 Great, thanks. We'd need to look at both.
15:12:15 caitlin56: in the interest of limiting complexity for ourselves, I want to ignore the hypervisor issue and just assume that "nova will handle it"
15:12:45 bswartz: that might be worth capturing as a design rationale.
15:13:00 if someone wants to dig into what exactly nova can support, I'd encourage them to do some experiments and share the results
15:13:25 I could easily see some future enhancement in Juno where we add LXC/OpenVZ support
15:14:13 but this discussion segues nicely into the other topic I brought up
15:14:15 AFAIK, nova supports LXC, but with limitations -- why do we consider LXC at all as a possible variant to use?
15:14:25 lxc doesn't support runtime volume attachment
15:14:50 vponomaryov: mostly because you can run significantly more LXC instances on the same hardware than you can KVM instances
15:15:58 LXC/OpenVZ are dramatically more efficient virtualization technologies when you don't need the full feature set that KVM provides (i.e. full hardware virtualization)
15:16:39 also, OpenVZ requires a patched linux kernel
15:16:50 if I am not mistaken
15:17:06 bswartz: for generic driver networking we have 2 possible implementations and no consensus
15:17:13 anyways, I wanted to ignore these issues for now because they're a distraction from our core mission
15:17:36 which is to deliver shared filesystems as a service
15:17:48 bswartz: here is a doc which outlines the two solutions: https://docs.google.com/document/d/1kX7-S8aClxleydlbm6NmFEWirnyk7hsd4UhqxXcC0KA/edit?usp=sharing
15:17:49 achirko: can you explain?
15:18:10 oh yea
15:18:24 this deserves some time for discussion
15:18:58 #topic network communication between manila and service VMs
15:19:34 okay, so as we've discussed earlier, the generic driver will create nova instances (called "service VMs") to actually serve the shared filesystems into the tenants' networks
15:20:09 these service VMs will have a network interface on the tenant network, and therefore there won't automatically be a network communication path between manila and these VMs
15:20:27 obviously manila needs to be able to communicate with the VMs it creates, so there are a few options
15:20:38 thx to achirko for providing a writeup of the situation
15:21:08 the 2 basic approaches break down like this:
15:21:31 feel free to comment inside the document or here
15:21:39 1) manila creates virtual network interfaces connected to the tenant network on the manila node and uses those to talk to the VMs
15:22:11 2) we add a second IP address to the VMs which is connected to somewhere that manila can reach through its default routes
15:22:57 I don't have strong feelings about either approach, except that if we add secondary IP addresses to the service VMs, then those should be on a network that's protected from security threats
15:23:13 the "floating IP" approach implies that these VMs will have public IPs, which I don't like
15:23:50 A third option would be to have the "base server" act as an application-layer gateway, rather than as a router.
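For illustration, a rough sketch of what approach 1 might look like under Neutron's OVS plugin, mirroring what the l3 and dhcp agents do: allocate a port on the tenant network, then plug a local OVS internal interface with that port's MAC and IP on the manila node itself. All names are hypothetical, and the /24 prefix is an assumption.

    import subprocess

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='manila', password='secret',
                                    tenant_name='service',
                                    auth_url='http://keystone:5000/v2.0')

    # Ask neutron for a port (and thus a MAC/IP) on the tenant network.
    port = neutron.create_port(
        {'port': {'network_id': 'tenant-network-id',
                  'device_owner': 'manila:share'}})['port']

    dev = 'man-' + port['id'][:11]                       # hypothetical naming
    addr = port['fixed_ips'][0]['ip_address'] + '/24'    # prefix assumed

    # Plug an OVS internal port into the integration bridge; having to know
    # the bridge name ('br-int') is the low-level dependency on neutron
    # internals objected to later in the meeting.
    subprocess.check_call(
        ['ovs-vsctl', 'add-port', 'br-int', dev,
         '--', 'set', 'Interface', dev, 'type=internal',
         '--', 'set', 'Interface', dev,
         'external-ids:iface-id=' + port['id'],
         '--', 'set', 'Interface', dev,
         'external-ids:attached-mac=' + port['mac_address']])
    subprocess.check_call(['ip', 'link', 'set', dev,
                           'address', port['mac_address']])
    subprocess.check_call(['ip', 'addr', 'add', addr, 'dev', dev])
    subprocess.check_call(['ip', 'link', 'set', dev, 'up'])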
15:24:16 we would almost need to create a new type of floating IP which is for internal services rather than for use by end users
15:24:30 caitlin56: I don't follow
15:24:56 bswartz: the "floating IP" approach implies that these VMs will have public IPs -- in theory we can create a separate floating IP pool for manila
15:25:08 Hey base server, you need to tell your subsidiary server for network x this.
15:25:24 yportnova: does neutron explicitly support multiple pools of floating IPs?
15:25:33 bswartz: http://docs.openstack.org/grizzly/openstack-network/admin/content/adv_cfg_l3_agent_multi_extnet.html
15:25:41 yportnova: how hard is that to set up, and will it be a hassle for users of manila?
15:25:49 right now we do not have the ability to use 2 floating IP pools
15:26:08 caitlin56: how would that be implemented?
15:26:15 bswartz: but there is a bug: https://bugs.launchpad.net/neutron/+bug/1212947. It is not confirmed but affects some people
15:26:55 bswartz: the documentation says that we can have multiple floating IP pools, but in practice it doesn't work -- I tried it
15:27:01 you would need to add a message relay daemon on the base server; I'm presuming each vendor knows how to talk to their own vservers within their box.
15:27:06 achirko: hah!
15:27:17 who would have guessed that the openstack docs might be wrong?
15:28:12 achirko: the real trick is that you would want parallel allocation of floating IPs. Basically, "matching" terminal numbers on each subnet. That's the way they would be assigned manually, and the only way a human could ever track what was going on.
15:28:12 caitlin56: we're not talking about drivers other than the generic driver -- it's presumed that vendor drivers have ways of solving the problem of talking to their vservers
15:28:58 the issue here is that the generic driver will be using nova to create its "vservers" (actually service VMs) and that nova doesn't provide a generic mechanism for talking to them across networks
15:29:36 so it's up to us to create that mechanism
15:30:06 achirko: I'm leaning towards approach 1, because it sounds simpler for the administrator
15:30:15 achirko: what are the downsides to approach 1?
15:30:46 bswartz: more work for us, the developers
15:31:03 ah
15:31:19 so help me understand here why linux network namespaces are needed
15:31:21 bswartz: using approach 1 we will have a low-level dependency on Neutron's internal implementation
15:31:45 bswartz: we need to have a neutron agent set up and running on the manila node
15:31:50 can't we just create bridge interfaces and vnets?
15:32:35 requiring a neutron agent doesn't bother me too much -- I suspect there will be similar requirements for the gateway-based multitenancy -- which I want to get to in a minute
15:33:01 Could we do this with a firewalled router?
15:33:16 bswartz: we need net-ns for virtual interface isolation -- we can have 2 private subnets with the same cidr
15:33:46 achirko: are you sure that's allowed? I was under the impression that openstack didn't allow re-use of IP addresses
15:33:52 That is, we explicitly allow manila to have a message routed to the share-network via a router which enforces firewall rules (so the general public cannot do this).
15:34:06 bswartz: as well as for security -- to isolate the manila node from the tenant's private subnet
15:34:31 caitlin56: I think that secure firewalling is implied in any of these configurations -- it's not optional
15:34:52 caitlin56: the thornier issue is the basic routing and bridging
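An aside on the namespace point above: a quick, hypothetical demonstration that two network namespaces can carry the same CIDR without conflict, because each namespace has its own interfaces and routing tables -- the same mechanism Neutron's l3 and dhcp agents already rely on. Namespace and interface names are made up.

    import subprocess

    def sh(*cmd):
        """Run a command, raising if it fails."""
        subprocess.check_call(cmd)

    # Two namespaces, each given the identical 10.0.0.1/24 address --
    # no collision, because routing tables are per-namespace.
    for ns in ('tenant-a', 'tenant-b'):
        sh('ip', 'netns', 'add', ns)
        veth, peer = 'veth-' + ns, 'peer-' + ns
        sh('ip', 'link', 'add', veth, 'type', 'veth', 'peer', 'name', peer)
        sh('ip', 'link', 'set', peer, 'netns', ns)
        sh('ip', 'netns', 'exec', ns,
           'ip', 'addr', 'add', '10.0.0.1/24', 'dev', peer)
        sh('ip', 'netns', 'exec', ns, 'ip', 'link', 'set', peer, 'up')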
15:35:05 bswartz: true. But by default there would be no routes between tenant networks.
15:35:43 Neutron might allow routing from the base network to tenant networks, however.
15:36:37 achirko: have you actually prototyped approach 1 to show that it works?
15:36:49 or is it theoretical at this point?
15:37:09 bswartz: we can have 2 subnets with the same cidr as long as they reside in different neutron networks
15:37:32 achirko: okay, so it sounds like neutron has gotten smarter since I last checked
15:37:46 achirko: that would force those tenant networks to be fully isolated, correct?
15:38:10 you would not have a unique IP address while sitting in the common network to refer to a tenant endpoint.
15:38:30 bswartz: I'm pretty sure it will work because neutron uses a similar approach for its l3 and dhcp services
15:38:37 okay
15:38:51 netapp has supported that kind of isolation for a long time -- we call it "ipspaces"
15:39:15 it's hard to wrap your mind around though -- I worry about new users or even developers getting confused
15:39:53 caitlin56: a neutron network = an l2-isolated domain using vlan, gre, etc.
15:40:06 okay, well I'm going to cast my vote for approach 1 at this point
15:40:15 I'd like to at least try it and see if we can make it work
15:40:24 bswartz: many systems have a concept of a "common" or DMZ VLAN. Any VLAN can route to it, and it can route to any, but other VLANs cannot route to each other.
15:40:31 the floating IP approach can be a fallback plan
15:40:56 caitlin56: this is stronger separation than that
15:41:13 an ipspace or netns is a whole separate set of routing tables
15:41:28 2 networks in different namespaces don't even share the same routing universe
15:42:04 as an aside, one of the reasons this is needed is because IPv4 doesn't offer a large enough address space
15:42:28 Yes, a common universal address space almost requires IPv6 addressing.
15:43:02 +1 for the first approach
15:43:13 anyone disagree on the approach for communication between the manila generic driver and its service VMs?
15:43:18 bswartz: I'm also for the first solution
15:43:28 that's 3 for the first solution
15:43:29 bswartz: First approach
15:43:44 bswartz: I think approach 2 is better
15:43:58 How does the manila agent keep the clients distinct?
15:44:16 yportnova: but approach 2 has some dependencies on stuff that doesn't work currently
15:44:27 yportnova: can you explain why you think approach 1 is wrong?
15:45:33 bswartz: I think approach 1 is not good because manila will depend on how networking is implemented in Neutron
15:46:09 yportnova: only the generic driver will
15:46:23 the generic driver will work only with Neutron (but currently we do not have alternatives)
15:46:28 yportnova: and the generic driver already depends on several other key openstack projects, like nova and cinder
15:46:44 bswartz: sorry, I meant the generic driver
15:47:04 other drivers such as the NetApp driver won't need this because we can talk to the vservers through the cluster interface
15:47:14 bswartz: yeah, but it does not depend on how Nova spawns the VMs
15:47:30 and how Cinder provides volumes
15:48:02 Will the generic driver need to talk to a vserver instance from a child process associated with the network namespace?
15:48:32 generic driver needs to talk...
15:48:40 And floating IPs are one of the important OpenStack features that provide access to VMs. And I hope the bug with multiple pools will be fixed
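For comparison, a sketch of the floating-IP fallback (approach 2), assuming the multiple-pool bug linked above eventually gets fixed: allocate a floating IP for the service VM's port from a dedicated external network so manila can reach it over ordinary routing. The network name and port ID are hypothetical.

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='manila', password='secret',
                                    tenant_name='service',
                                    auth_url='http://keystone:5000/v2.0')

    # A dedicated external network reserved for manila management traffic,
    # so the service VMs never receive genuinely public addresses.
    ext_net = neutron.list_networks(name='manila-ext-net')['networks'][0]

    fip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': ext_net['id'],
                        'port_id': 'service-vm-port-id'}})['floatingip']

    # manila would then manage the service VM at this routable address.
    mgmt_address = fip['floating_ip_address']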
15:49:02 yes, it's a more low-level dependency; it's a dependency on core neutron architecture
15:49:04 bswartz: yes, the 1st approach introduces a dependency on Neutron architecture
15:49:55 but wait, doesn't the manila core also depend on neutron for basic things like provisioning IP addresses in tenant networks?
15:49:58 how is this different?
15:50:07 Will any vendor driver be able to implement its own solution for how to talk with its vservers?
15:50:28 bswartz: yeah, but we just communicate with its API, and do not depend on the implementation
15:50:35 caitlin56: not only will you be able to, you will have to
15:50:57 caitlin56: unless you reuse some code from the generic driver
15:50:58 bswartz: then I have no objection to either option.
15:51:16 it's like the dependency of nova's vif driver on neutron architecture, but not so complicated
15:51:40 yportnova: can you explain how this introduces a deeper dependency?
15:51:53 maybe I'm thinking about approach 1 incorrectly
15:52:36 bswartz: manila will need to know the ovs integration bridge name
15:52:41 I had assumed that it would work the same way, where we provision an IP from the tenant's network for ourselves, and the generic driver takes care of creating a bridged network interface to the tenant network and binding the IP to that interface
15:52:45 the neutron api is an interface for us, but in approach 1 we will depend on the neutron architecture that is behind the API
15:53:19 the most core architecture
15:53:30 well, I don't see a deeper dependency on neutron as a large downside, for the generic driver
15:54:03 who would really use that driver outside of an openstack configuration which included nova, cinder, AND neutron?
15:54:23 are there any credible alternatives to neutron that you could use with manila+nova+cinder?
15:54:32 bswartz: the fundamental issue is whether the IP address of the vserver for management purposes is global or not. If it is globally unique, then you can set up routing rules about who can talk with it.
15:54:52 caitlin56: it sounds pretty clear that it's not unique
15:54:55 If it is not globally unique, then you need to talk with it from a process that is scoped to the correct network namespace.
15:55:04 oh wait
15:55:31 no, the tenant-facing address is not unique, but the manila-facing address could be
15:56:09 well, I'd like to make a decision on this in the interest of forward progress
15:56:18 If the manila-facing address is unique, then neutron can set up the required routes to allow *only* the desired access.
15:56:36 yportnova: we may need to change the implementation later on if the dependency on neutron turns out to be a bad thing
15:57:11 bswartz: sure
15:57:16 #agreed manila generic driver will communicate with service VMs using neutron-provisioned virtual interfaces on the manila management node
15:57:29 okay, we used up almost all our time on that issue
15:57:36 but it was a good discussion to have
15:57:39 #topic open discussion
15:57:44 any last-minute things?
15:58:07 bswartz: the main thing for the generic driver is to make sure it will work with a standard devstack installation, and approach 1 delivers this
15:58:18 achirko: agreed
15:58:26 Any anticipated dates for the new reference drivers yet?
15:58:42 "Dates" being a loose term here -- really, a time frame
15:59:03 shamail: just a couple of weeks, tops
15:59:10 shamail: probably by the end of the month?
15:59:10 Awesome
15:59:20 Thanks.
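To make the agreed approach concrete: once manila has a neutron-provisioned interface on the tenant network (as in the earlier plugging sketch), it can reach the service VM at its tenant address and configure exports remotely. A minimal sketch using paramiko; the address, username, key path, and export line are all hypothetical.

    import paramiko

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # The service VM's tenant-network address, now reachable through the
    # virtual interface manila plugged into that network.
    ssh.connect('10.254.0.4', username='manila',
                key_filename='/etc/manila/ssh/service_vm_key')

    # Export the cinder-backed mount point over NFS and apply the change.
    ssh.exec_command(
        'sudo sh -c "echo \'/shares/share-01 10.254.0.0/24(rw)\''
        ' >> /etc/exports && exportfs -a"')
    ssh.close()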
15:59:47 okay, we need to vacate this channel -- grab me in #openstack-manila if you need anything else
15:59:49 thanks all
15:59:58 thanks
16:00:00 #endmeeting