15:02:39 #startmeeting manila
15:02:40 Meeting started Thu May 29 15:02:39 2014 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:44 The meeting name has been set to 'manila'
15:02:58 hello everyone
15:03:02 hi
15:03:03 hello
15:03:03 who do we have today
15:03:08 hi
15:03:40 hi
15:03:49 rushil, ameade: you here?
15:04:35 Hi
15:04:36 okay so the first thing I wanted to do was check up on people who signed up for stuff last week
15:05:01 is everyone still planning on working on the stuff they signed up for last week?
15:05:17 bswartz: Quick update, I am working on understanding cloud-init... No real update yet but I do have a question.
15:05:24 I have started investigating vlan to vxlan/gre routing
15:05:25 I'm trying to figure out how to spread the work around so that everyone who wants to has interesting stuff to work on
15:05:34 here
15:05:39 would we prefer to use puppet or chef in conjunction w/ cloud-init?
15:05:59 And I can talk briefly about my findings if we would like
15:06:16 yeah I'd like to do that
15:06:20 BRIEFLY!
15:06:29 #topic cloud-init
15:06:42 I haven't started exploring the other options yet, but so far I have only found stated support for cloud-init in RHEL and Ubuntu.
15:06:52 so I'm not sure why that would matter shamail
15:07:20 my thinking had been that cloud-init runs at boot time and can grab metadata from the hypervisor
15:07:37 therefore it's a great place to stuff mount information so that the client can just do a bunch of mounts at boot time
15:07:40 It wouldn't, just from a preference perspective... I plan to specify metadata via user data and leverage puppet/chef for the actual mount op
15:07:54 how we get the mount info from manila into a place where cloud-init can grab it is half the problem
15:08:02 Agreed
15:08:05 and how cloud-init takes that info and does the mounts is the other half
15:08:35 Solving for metadata still, the actual mount is where I was considering puppet or chef
15:08:42 shamail: personally I'd prefer using neither -- they would be an extra dependency and I don't see what they add
15:08:56 Okay, I'll dig deeper and update the team next week.
15:09:00 cloud-init should be able to invoke a mount command directly
15:09:20 if they add something valuable then let's talk about which to support
15:09:23 The amount of time I gave to this topic last week was minimal due to holiday and other commitments
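
(For illustration only: a minimal sketch of the direct-mount idea discussed above, in which a small boot-time script that cloud-init could launch from user data reads share mount info from the instance metadata and performs the mounts itself, with no puppet/chef dependency. The "manila_mounts" metadata key and its layout are hypothetical, not an agreed-on format; how the mount info gets from Manila into the metadata in the first place is exactly the open half of the problem.)

```python
#!/usr/bin/env python
# Hypothetical boot-time helper that cloud-init could run from user data.
# It assumes the deployer (or, some day, Manila itself) has stashed a JSON
# list of mounts in the instance metadata under a made-up "manila_mounts"
# key, e.g. set with `nova boot --meta manila_mounts='[...]'`.

import json
import subprocess
import urllib2

METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"


def get_mounts():
    """Fetch instance metadata and pull out the (hypothetical) mount list."""
    metadata = json.load(urllib2.urlopen(METADATA_URL))
    # Example value: '[{"export": "nfshost:/share1", "path": "/mnt/share1"}]'
    return json.loads(metadata.get("meta", {}).get("manila_mounts", "[]"))


def mount_all(mounts):
    for m in mounts:
        subprocess.check_call(["mkdir", "-p", m["path"]])
        # cloud-init can invoke mount directly; no config-management tool needed
        subprocess.check_call(["mount", "-t", "nfs", m["export"], m["path"]])


if __name__ == "__main__":
    mount_all(get_mounts())
```
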
15:09:35 #topic vlan to vxlan/gre routing
15:09:45 scottda: what's up with this?
15:10:12 So, doing multitenancy on a vxlan or gre lan using the manual admin-based Neutron provider network should be possible....
15:10:28 My next step is to test and confirm this, and then I can start on documentation.
15:10:46 The real goal is to have Manila do the routing without manual intervention.
15:10:50 is the plan to prove it works in a manual config first, then to look into how to automate?
15:10:56 And that requires some work.
15:11:06 which parts are manual and which parts are automated?
15:11:42 Right now, there is nothing similar to an agent that Manila can use to connect Neutron sub-nets.
15:11:51 But Nova has this, and DNSaaS would like this.
15:12:28 I've talked with a Neutron core dev and this feature seems viable, but it might not be doable in the Juno time frame.
15:12:55 In the long run, this is highly desirable, for the vxlan multi-tenancy and possibly other Manila features that involve Neutron.
15:13:05 scottda: what do you mean by "connect Neutron sub-nets"?
15:13:20 in theory the VLAN and the VXLAN would be part of the same subnet
15:13:42 Connect a Neutron provider network (which connects to the outside network, i.e. a file server on a VLAN)....
15:13:59 To a tenant subnet, which might be using VXLAN
15:14:02 okay
15:14:29 should some of us be attending neutron meetings and pushing for this if we want it to happen faster?
15:14:44 or do you get the sense that it's happening as fast as possible without out intervention?
15:14:45 Short answer: manual should be doable, and I will test and document. Automated is harder, and I'll continue to work with Neutron on this.
15:14:51 s/out/our/
15:14:59 I can start attending Neutron meetings and try to drive this.
15:15:11 I think intervention can only help.
15:15:33 okay so I'll see who can apply pressure or help with the effort on our side
15:15:40 your help is very much appreciated too
15:15:47 It's a pleasure :)
15:16:10 * bswartz remembers to go look up when the neutron weekly mtgs are
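
(For illustration only: a rough sketch of the manual, admin-driven provider-network step scottda describes, using python-neutronclient. The credentials, physical network name, VLAN ID, and CIDR below are placeholders, not a tested recipe; bridging this network to a tenant's VXLAN/GRE subnet still needs a Neutron router today, or the agent-style support discussed above once it exists.)

```python
# Sketch of the manual admin step: expose the VLAN that the file server
# lives on as a Neutron provider network. Names, credentials and IDs are
# placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username="admin",
                        password="secret",
                        tenant_name="admin",
                        auth_url="http://keystone:5000/v2.0")

# Provider network mapped onto the existing storage VLAN
net = neutron.create_network({"network": {
    "name": "manila-storage-net",
    "provider:network_type": "vlan",
    "provider:physical_network": "physnet1",   # must match the plugin config
    "provider:segmentation_id": 100,           # the file server's VLAN ID
    "shared": True,
}})["network"]

# Subnet matching the storage VLAN's addressing
neutron.create_subnet({"subnet": {
    "network_id": net["id"],
    "ip_version": 4,
    "cidr": "10.10.100.0/24",
    "enable_dhcp": False,
}})

# Reaching this network from a tenant's VXLAN/GRE subnet still requires a
# Neutron router (manual today) or future agent-style support in Manila.
```
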
15:16:24 #topic dev status
15:16:31 Dev status:
15:16:37 1) Share servers admin API
15:16:41 okay vponomaryov what have you been up to?
15:17:02 I am on my way
15:17:03 bp: #link https://blueprints.launchpad.net/manila/+spec/add-share-server-list-api
15:17:03 client: #link https://review.openstack.org/95187
15:17:03 server: #link https://review.openstack.org/95558
15:17:15 status: waiting for review
15:17:23 2) New CI jobs:
15:17:28 2.1) 'pylint' job - has been enabled
15:17:34 2.2) 'tempest' job (with multibackend installation)
15:17:34 bp: #link: https://blueprints.launchpad.net/manila/+spec/multibackend-installation-tempest-job
15:17:34 status: all 'manila' changes have been implemented, the 'config' project commit is in review.
15:17:34 config commit in gerrit #link https://review.openstack.org/95207
15:17:47 3) Update manilaclient with latest changes in manila
15:17:47 gerrit: #link https://review.openstack.org/96423
15:17:57 4) Update of generic_driver/service_instance modules
15:18:03 bp: #link https://blueprints.launchpad.net/manila/+spec/implement-backend-details-in-drivers
15:18:03 gerrit: #link https://review.openstack.org/#/c/96469/
15:18:03 status: work in progress
15:18:17 5) Update of Manila's API docs: #link https://wiki.openstack.org/wiki/Manila/API
15:18:17 status: work in progress
15:18:36 TODO:
15:18:36 1) Finish adding handling of share server details to the generic driver
15:18:36 2) Add handling of share server details to the cluster_mode (NetApp) driver
15:18:51 that's all
15:19:03 vponomaryov: awesome
15:19:17 bp in (4) is not approved
15:19:55 vponomaryov: the BP is a little sparse on details
15:20:12 vponomaryov: can you explain how you envision the create/use/update/delete working?
15:20:34 we're talking about new driver entry points right?
15:20:35 bswartz: it will be done by every backend driver
15:21:11 is it just 1 new method in each driver? or a few new methods?
15:21:13 create - when share server is created
15:21:26 not one
15:21:37 or is this mostly about drivers making use of the new core feature?
15:21:43 reading/updating happens in different parts of the code
15:22:21 bswartz: it is totally up to the drivers, without human influence
15:22:41 admin can see this info with the new APIs
15:23:05 yeah I get that -- but I think if drivers are going to be impacted (like the EMC driver that's not upstream yet) then it would be nice to explain what they're going to need to change for this
15:23:31 bswartz: I don't think it will be a problem
15:24:01 there will be two drivers where it will have been implemented
15:24:21 ok
15:24:41 well clearly it needs to get done -- we can discuss the details in code review so I'll update the BP
15:25:11 targeted for j-1 now
15:25:14 bswartz: ok
15:25:43 okay I had a question for csaba
15:26:10 csaba: are you still working on gateway-based multitenancy support for the glusterFS backend?
15:26:39 bswartz: you mean the NFS Ganesha driver?
15:26:48 because that's a big complex bit of work and I'd like to get you some help if it's not almost done already
15:26:48 if yes, yes :)
15:27:25 csaba: yes that, as well as the work to get manila itself aware of when it needs to invoke something like that
15:28:02 it will be great to have a POC of a gateway that can bridge a large glusterFS share into various tenant subnetworks securely
15:28:10 hm you mean the integration into the upcoming automount feature?
15:28:19 but for it to be useful manila needs to be able to make that happen transparently to the tenant
15:29:08 no no I'm talking about automating the creation of the "gateway" when there's a need for it
15:30:13 so when we have a segmented network (with VLANs, for example) and we're using the glusterFS backend in addition to other backends, manila should make shares that land on the glusterfs backend just work
15:31:05 and the key is that the mechanism manila uses to make that happen should ideally be generalized to work with any backend, not just gluster
15:31:18 it's a big complex problem with a lot of parts
15:31:34 but I think it's solvable within juno
15:31:42 well yes, we are thinking about generating the ganesha config
15:32:04 and I want to make sure you're getting enough help
15:32:32 my thought is that maybe vponomaryov and yportnova can help you out with part of that
15:32:44 OK I think we'll push forward the gluster effort, keeping in mind to be as generic as we can
15:33:10 and then for other ganesha FSALs others could contribute
15:33:44 yes of course, with the interactions between manila and the ganesha configurator we could use some help too
15:33:53 okay cool
15:34:24 #topic open discussion
15:34:30 anything else I missed?
15:34:41 close BP: https://blueprints.launchpad.net/manila/+spec/volume-type-support as implemented
15:34:51 I think we have the right blueprints now and mostly we have people looking at them
15:35:06 vponomaryov: ty for reminder
15:35:20 are we keeping the name "volume-type"?
15:35:21 vponomaryov: also ty for reminders to review code
15:35:38 xyang1: unless someone feels strongly about changing it I'm happy to leave it
15:35:42 xyang1: it has that name at the moment
15:35:53 that's fine
15:37:01 okay so if there's nothing else we can end early and use the next 24 minutes to catch up on code reviews
15:37:34 alright thanks all!
15:37:40 thanks
15:37:41 bye
15:37:41 thanks
15:37:43 #endmeeting