15:01:16 #startmeeting manila
15:01:17 Meeting started Thu Jan 30 15:01:16 2014 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:20 The meeting name has been set to 'manila'
15:01:26 hello folks
15:01:27 hello, everyone
15:01:29 hi
15:01:32 hello
15:01:34 Hello
15:01:35 Hi
15:01:39 hi
15:01:47 #link https://wiki.openstack.org/wiki/ManilaMeetings
15:01:49 Hi
15:01:54 hi
15:01:56 hello
15:02:16 some of us were having a discussion earlier this week that I think the whole group would find interesting
15:02:28 (also I'm hoping someone has a genius idea that will solve the problem)
15:02:45 Hi everyone
15:03:04 so the issue is in the generic driver design, with how it connects to the tenant network
15:03:58 Our initial hope was that we could simply create a network port on the tenant's network and assign that to the service VM
15:04:32 however we don't want to make the service VM owned by the tenant for a variety of reasons
15:04:45 Hi, I'm Nagesh, from Bangalore, joining the manila meeting for the first time.
15:04:47 1) We don't want it to count against the tenant's quotas
15:04:54 2) We don't want the tenant to accidentally delete it
15:05:41 ndn9797: Hi
15:05:42 3) We want to hide manila backend details from the tenant to the extent possible
15:05:52 ndn9797: hello and welcome!
15:06:19 so it's preferable for the service VMs created by the generic driver to be owned by some special service tenant
15:06:48 however it seems that either nova or neutron or both won't let us connect a VM owned by one tenant to a network owned by another tenant
15:07:21 one potential workaround is to give the service VMs their own separate network and simply provide routing between that network and tenant networks
15:07:34 I find that workaround to be complex and error prone
15:07:56 isn't there an infrastructure for "admin" nodes or whatever to be connected to anything they want?
15:08:03 but without making changes to nova or neutron or both, it seems to be the only option
15:08:29 gregsfortytwo1: the problem is in assigning a user-tenant port to a service-tenant VM
15:08:50 oops I forgot to set the topic
15:08:59 #topic Generic Driver Networking
15:09:21 Would assigning a static IP to the service VM bypass the separate tenant/network issue? Assuming there's a way for us to do this.
15:09:23 okay so hopefully most of you understand the problem
15:09:50 I have news since yesterday: it's quite easy to configure routers, so we will not need any additional route rules in the VM on our private network
15:10:00 shamail: the IP will end up getting assigned by neutron and will be effectively static
15:10:02 I understand the problem now. Thanks
15:10:03 The issue is with floating IPs being assigned cross-tenant, no? Might be completely mistaken since I wasn't in the discussions earlier.
15:10:07 but it will be an IP from a separate network
15:10:12 has anybody talked to people in the other groups yet? I'm surprised there's not an accepted model already
15:10:41 gregsfortytwo1: I think that nobody is doing anything that requires what we want
15:10:50 since you always have the option of setting up virtual routes between tenants
15:10:51 But the assignment mechanism (e.g. Neutron) is the issue, not the address space itself, so couldn't we find an alternative way to assign?
15:11:00 everybody else is still just doing things through the hypervisor? really?
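[Editor's note: a minimal sketch, not Manila code, of the call sequence the generic driver would like to make, to make the restriction discussed above concrete. It assumes python-neutronclient and python-novaclient of that era; all credentials, UUIDs, and names are placeholders.]

```python
# Sketch only: attempt to attach a port from the user tenant's network to a
# VM owned by the service tenant. Placeholders throughout.
from neutronclient.v2_0 import client as neutron_client
from novaclient.v1_1 import client as nova_client

USER_TENANT_ID = 'USER-TENANT-UUID'
TENANT_NET_ID = 'TENANT-NETWORK-UUID'
SERVICE_IMAGE_ID = 'SERVICE-IMAGE-UUID'
FLAVOR_ID = '1'

# Clients authenticated as the special "service" tenant, not the user tenant.
neutron = neutron_client.Client(username='manila', password='secret',
                                tenant_name='service',
                                auth_url='http://keystone:5000/v2.0')
nova = nova_client.Client('manila', 'secret', 'service',
                          auth_url='http://keystone:5000/v2.0')

# Step 1: allocate a port directly on the user tenant's network (the port
# creation itself is possible with admin rights, as noted in the discussion).
port = neutron.create_port({'port': {'network_id': TENANT_NET_ID,
                                     'tenant_id': USER_TENANT_ID,
                                     'name': 'manila-service-port'}})['port']

# Step 2: boot the service VM, owned by the service tenant, with that port.
# This is where the restriction bites: attaching a port that belongs to a
# different tenant than the instance's owner is rejected.
server = nova.servers.create('manila-service-vm', SERVICE_IMAGE_ID, FLAVOR_ID,
                             nics=[{'port-id': port['id']}])
```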
15:11:05 no, it's not actually an IP issue
15:11:16 Okay.
15:11:34 we can allocate a port on the tenant's network, we just can't assign it to our VM through nova
15:11:59 we could still fake it and use the IP within the VM -- but without actual bridging of the packets that's useless
15:12:49 gregsfortytwo1: I'm not sure what you mean
15:12:57 we'd like to share docs about how the generic driver is implemented now
15:13:12 gregsfortytwo1: I think people are using their own tenant networks currently, and when they need connectivity they're setting it up through neutron routes
15:13:15 https://docs.google.com/a/mirantis.com/drawings/d/1sDPO9ExTb3zn-GkPwbl1jiCZVl_3wTGVy6G8GcWifcc/edit
15:13:25 https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit
15:13:35 https://docs.google.com/a/mirantis.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit
15:13:57 the ideal case for me is that we can create a VM that sits right on the tenant's network but is hidden from them and doesn't count against their quotas
15:14:10 Could we do this the opposite way? Rather than giving each machine in each tenant network access to the management network, could we give the management machine a floating IP on each tenant network?
15:14:29 caitlin56: yes that's an option, but it has some problems
15:15:08 in particular, the security services that the storage server needs to communicate with will be inside the tenant network
15:15:25 so the "service VM" will need routes back into the tenant network
15:15:34 There is an even better option, but I do not think neutron supports it: you place the management computer in a "DMZ VLAN" and allow routing to the DMZ from any tenant network.
15:15:43 we may be able to get that to work by just setting up the routes
15:16:05 caitlin56: we need 2-way routing though
15:16:13 The gotcha is that all tenant networks have to be NATted to a unique address on the tenant network.
15:16:20 I'm not a network engineer so this is all a bit beyond me, but what's error-prone about setting up the routes?
15:16:57 bswartz: yes, we can do so, and it's not difficult
15:16:57 gregsfortytwo: the problem is that the tenant networks can all think they are 10.0.0.0/8
15:17:10 gregsfortytwo1: perhaps error-prone was the wrong phrase -- what I meant was that there's a lot of complexity with network routes, and testing all the corner cases seems like a huge problem
15:17:49 aostapenko thinks he can make it work so it's what we're going to try
15:17:52 isn't that Somebody Else's Problem? (where Somebody Else is neutron)
15:17:59 If the ideal solution involves work from Neutron (and perhaps Nova), it might be worth trying to get a blueprint together for those teams. Neutron doesn't seem to be too locked down for new features.
15:18:09 I just wanted to throw this problem out to the larger group to see if we could find a better way
15:18:15 we don't just say "connect this IP and this network" or similar?
15:19:19 gregsfortytwo1: it's a question of the routing rules -- which packets go to which gateways
15:19:28 The trick is that you need very specific firewall and NAT rules to be set up properly. You want to allow each tenant network to use the same addresses, and you do not want to allow general traffic from the tenant networks.
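[Editor's note: a rough sketch of the router-based workaround described above -- two-way routing, without NAT, between the service subnet and a tenant subnet, set up through neutron. Admin credentials are assumed; router/subnet IDs, CIDRs, and addresses are placeholders, and this is only one possible arrangement (in practice a dedicated port, rather than the subnet's gateway address, would likely be used for the extra router interface).]

```python
# Sketch only: plumb routes between the service subnet and a tenant subnet.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin', password='secret',
                                tenant_name='admin',
                                auth_url='http://keystone:5000/v2.0')

TENANT_ROUTER_ID = 'TENANT-ROUTER-UUID'     # the tenant's existing router
SERVICE_ROUTER_ID = 'SERVICE-ROUTER-UUID'   # router on the service network
SERVICE_SUBNET_ID = 'SERVICE-SUBNET-UUID'   # subnet the service VM lives on
TENANT_CIDR = '192.168.10.0/24'             # the tenant subnet to reach
TENANT_ROUTER_SERVICE_IP = '10.254.0.3'     # tenant router's address on the service subnet

# 1) Plug the tenant's router into the service subnet so tenant instances can
#    reach the service VM through their own router.
neutron.add_interface_router(TENANT_ROUTER_ID,
                             {'subnet_id': SERVICE_SUBNET_ID})

# 2) Add a static route on the service-side router so return traffic to the
#    tenant subnet goes back via the tenant router's port on the service
#    subnet -- the "configure routers" part, no extra routes inside the VM.
neutron.update_router(SERVICE_ROUTER_ID,
                      {'router': {'routes': [
                          {'destination': TENANT_CIDR,
                           'nexthop': TENANT_ROUTER_SERVICE_IP}]}})
```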
15:20:08 consider this: the service network will have some CIDR range, and that range will need to be mapped into every tenant's routing tables
15:20:20 The setup I have seen, each tenant is NATted to the DMZ, and routing is limited to the DMZ subnet (i.e. no packets transit the DMZ).
15:20:33 if any tenant is using those same addresses for something (perhaps a VPN connection to a corporate network) we will get an address collision
15:20:50 bswartz: it's on the DMZ that we need to do the routing.
15:21:39 Wouldn't that still require something to maintain the NAT entries and provision the addresses? How would this be from a maintenance perspective?
15:21:46 The assumption is that the DMZ has a public IPv4 or IPv6 subnet - something a tenant should not be using.
15:22:14 caitlin56: you're proposing that service VMs be given public addresses?
15:22:31 You allow a route from each tenant network to that specific public subnet, and you NAT-translate every packet when routed.
15:22:57 caitlin56: how do the service VMs tunnel back into the tenant networks?
15:23:00 It can be PNAT or block NAT translation, but here PNAT seems more appropriate.
15:23:22 we'd like to avoid using NAT at all because of nfs limitations
15:23:28 That's where you need NAT rather than PNAT.
15:24:01 yeah the whole idea is to deliver a driver that works similarly to a VLAN-based multitenancy solution, which requires full 2-way network connectivity
15:24:19 NAT causes problems
15:24:26 The only way to avoid NAT is to have the management server run lots of network namespaces and essentially "route" to the correct process before doing IP routing.
15:24:37 if NAT is going to be required then I'm inclined to use the gateway-mediated multitenancy model
15:25:47 Sorry I may have missed this, but why not a Neutron admin subnet?
15:26:02 scottda: you may need to explain what that is...
15:26:16 We're using 1 service network and multiple subnets in it with different cidrs
15:26:53 Some feature, yet to be implemented, that allows an admin to set up neutron networking across tenants.
15:27:07 and subnets
15:27:26 Sorry about the drop. Anyway, there are ultimately only two choices: you create a set of unique IP addresses visible to the manager (via NAT or PNAT), or you create slave processes which have unique network namespaces and hence can join the tenant network without reservation.
15:27:56 scottda: that sounds like exactly what we need
15:28:09 scottda: if it's not implemented though then we're still left without a solution
15:28:21 bswartz: as for gateway-mediated, in the original concept the gateway was the hypervisor, but for the sake of network separation we thought we'd need service VMs in the gateway role, so as long as that view holds, I don't see how it is simpler in terms of network plumbing
15:28:25 Well, if the ideal place for this is in Neutron, we could attempt to get it into Neutron.
15:29:02 It might take a bit more time, but we will be living with this solution for a long time. Years.
15:29:05 But if our plan is to get Neutron improved first, we will need an interim workaround.
15:29:08 I do think the ideal place is in neutron
15:29:21 Or nova
15:29:39 some way to create special vms that span tenants
15:30:02 scottda: are you in a position to get something like that implemented?
15:30:08 I'm not certain, but my instinct is that it will be easier to get a change through Neutron than Nova. Just from a standpoint that Neutron is newer, and changing faster.
15:30:30 I have teammates who work on Neutron. I can certainly fly the idea past them to get some feedback.
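[Editor's note: a small illustration of the collision concern raised above -- tenant networks may all "think they are 10.0.0.0/8", so before mapping a service CIDR into a tenant's routing table it must be checked against every subnet the tenant already routes (and VPN ranges neutron cannot see remain a risk). Uses only the Python 3 stdlib ipaddress module; the CIDRs are examples.]

```python
# Sketch only: detect overlaps between a proposed service CIDR and a
# tenant's existing subnets.
import ipaddress

def find_collisions(service_cidr, tenant_cidrs):
    """Return the tenant CIDRs that overlap the proposed service CIDR."""
    service_net = ipaddress.ip_network(service_cidr)
    return [cidr for cidr in tenant_cidrs
            if service_net.overlaps(ipaddress.ip_network(cidr))]

# A tenant using 10.0.0.0/8 collides with almost any RFC 1918 range
# we might pick for the service network.
print(find_collisions('10.254.0.0/28', ['10.0.0.0/8', '192.168.10.0/24']))
# -> ['10.0.0.0/8']
```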
15:30:42 Linux namespaces provide the tools needed. Trying to bypass neutron to use them will be very tricky.
15:31:05 They might have already discussed such a thing, or have a quick answer like "sure, sounds doable" or "no way, already thought of that". I really don't know.
15:31:27 caitlin_56: there's nothing technically complicated about allowing a service VM to be directly on a tenant's network
15:31:44 it's an administrative limitation that prevents us from doing what we want
15:32:04 possibly there are security implications
15:32:26 How do you know which tenant you are talking to?
15:32:53 there's one VM per tenant
15:33:12 actually one vm per share-network
15:33:14 so you have a 1-to-1 mapping and you can track that
15:33:21 How do you talk with tenant X? You have to select the namespace before the IP address is any good.
15:33:28 well yeah, thanks aostapenko
15:33:52 caitlin_56: it's no different than if tenant X had created a new VM himself
15:34:06 however the owner of that VM needs to be us
15:34:24 or the "service tenant" as we've been saying
15:34:53 there is the issue of how the manila-share service talks to these special VMs, but that's also a solvable problem
15:35:35 I'll start a dialogue with some Neutron devs about the feasibility of an admin network. There's still the issue of the service VM.
15:36:05 scottda: thx -- I'll follow up w/ you after this meeting because I'm very interested in that approach
15:36:12 scottda: this is a general problem, not unique to NFS. So neutron should be willing to listen.
15:36:43 Yes, I'll need to get some more info as I'm not as up to speed on Manila as I'd like to be
15:36:53 okay enough on that
15:36:57 caitlin_56: agreed. I think I can sell it :)
15:37:00 let's jump to dev status
15:37:05 #topic dev status
15:37:20 i will update about it
15:37:32 vponomaryov: ty
15:37:40 Dev status:
15:37:40 1) Generic driver - https://review.openstack.org/#/c/67182/
15:37:50 The lion's share of the work is done
15:37:50 TODO:
15:37:50 a) Finalize work on routers and routes between the VM in the service tenant and the VM in the user tenant
15:37:50 b) Write unit tests
15:38:17 Info, mentioned before:
15:38:17 https://docs.google.com/a/mirantis.com/drawings/d/1sDPO9ExTb3zn-GkPwbl1jiCZVl_3wTGVy6G8GcWifcc/edit
15:38:17 https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit
15:38:17 https://docs.google.com/a/mirantis.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit
15:38:44 2) NetApp Cmode driver - https://review.openstack.org/#/c/59100/
15:38:45 TODO: bugfixing and retesting
15:39:07 3) We have an open item about share-networks that should be discussed and, if accepted, will take some time to implement.
15:39:34 vponomaryov: what open issue?
15:39:38 Should I begin with the open item?
15:39:44 vponomaryov: yes pls
15:39:57 Open item: With the current code, the Vserver (VM) will be created (with data from the share-network) only on the first share creation call.
15:40:13 it's true for both drivers
15:40:19 Problem:
15:40:19 The current implementation creates the share and the Vserver for it together, and that can fail due to improper share-network data. So users would prefer to use already activated share-networks and wait much less time.
15:40:19 Also, due to the mechanism of mapping a security service to a share network, we shouldn't try to create the Vserver (VM) on share network creation.
15:40:45 Proposal:
15:40:45 We can have a command for the share-network like "initialize" or "run", to activate it when we (the admin) are ready.
15:41:06 vponomaryov: you're saying we don't error check the parameters used for share-network-create until long after the API call succeeds?
15:41:46 we check this data on creation of the Vserver, and we do this on the first share creation call
15:42:01 well here's the thing
15:42:26 even if we create a vserver before we need to, we may need to create another one at some point in the future
15:42:44 any share-create call can result in a new vserver getting created
15:43:03 yes, the user will have a choice between active share-networks
15:43:12 which one to use
15:43:17 now I agree that it's probably worthwhile to at least validate the params passed in one time at the beginning
15:43:45 but that won't prevent us from having issues if something changes behind our back
15:44:13 so, we should not only check, we should create the Vserver
15:44:41 if its creation is successful, we have an active share-network
15:44:42 anyone else have an opinion here?
15:44:58 bswartz: that is an unsolvable problem. The fact that you cannot launch a VM right now does not mean that the configuration is incorrect - only that it probably is.
15:44:59 I think I can live with creating an empty vserver, just to validate the parameters passed in
15:45:47 caitlin_56: +1, a lot of issues won't be caused by the data itself being improper
15:46:03 we should know we have a Vserver
15:46:11 and can create shares
15:46:28 You either allow "speculative" networks or you prevent some legitimate configuration from being accepted due to a temporary network glitch.
15:47:32 caitlin_56: are you against the proposal?
15:47:49 I think it's fine to do it -- the biggest downside is wasted resources
15:48:57 okay, if no one is opposed then I say go ahead with it, vponomaryov
15:49:09 #topic open discussion
15:49:16 ok
15:49:23 bswartz: I'd raise an error, but allow an operator to override - "no, this config is correct even if you can't do it right now."
15:49:26 any other topics for this week?
15:49:55 caitlin_56: if it won't work right now the tenant can try again later when it will work
15:49:56 A couple with launchpad BPs
15:50:15 caitlin_56: there's little value in setting something up early if that thing is useless until a later time
15:51:33 bswartz: BP https://blueprints.launchpad.net/manila/+spec/join-tenant-network can be marked as implemented,
15:51:33 only bugs are left according to its changes. + the approved change for share-networks
15:51:34 vponomaryov: I'm here
15:51:55 I'm there rather
15:52:22 and second
15:52:33 If we drop XML support, BP https://blueprints.launchpad.net/manila/+spec/xml-support-to-client should be marked as invalid.
15:52:40 Do we drop XML support?
15:52:44 ack my browser is crashing
15:53:20 oh yes
15:53:39 it came to my attention this week that the Nova project is dropping support for XML in their v3 API
15:53:54 so I see no reason to support XML in any version of our API
15:54:04 fine with XML drop..
15:55:07 I have no questions
15:55:13 thanks
15:55:26 okay
15:55:29 thanks everyone
15:55:52 thanks, bye
15:55:54 #endmeeting
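[Editor's note: for reference, a rough post-meeting sketch of the share-network "initialize"/"run" step agreed in the discussion above -- validate share-network data by creating the backing Vserver/service VM up front rather than on the first share-create call. This is not actual Manila code; the function, field, and status names are hypothetical placeholders.]

```python
# Sketch only: hypothetical activation flow for a share-network.

class ShareNetworkActivationError(Exception):
    pass

def initialize_share_network(driver, share_network):
    """Bring up the share-network's backing Vserver/VM now, so bad
    share-network data surfaces before any share is requested."""
    try:
        # Creating the (empty) Vserver/service VM is the real validation:
        # it exercises the network data the tenant supplied.
        driver.setup_network(share_network)   # hypothetical driver hook
        share_network['status'] = 'ACTIVE'
    except Exception as exc:
        # Per the discussion: a failure here probably -- but not certainly --
        # means the data is bad, so mark it and let the admin retry later.
        share_network['status'] = 'ERROR'
        raise ShareNetworkActivationError(str(exc))
    return share_network
```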