15:00:38 <bswartz> #startmeeting manila
15:00:39 <openstack> Meeting started Thu Jun 11 15:00:38 2015 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:43 <openstack> The meeting name has been set to 'manila'
15:00:46 <cknight> Hi
15:00:51 <xyang1> Hi
15:00:53 <ganso_> hello
15:00:55 <rraja> hi
15:00:55 <markstur> hello
15:00:57 <zhongjun2> hi
15:01:00 <bswartz> hello all
15:01:03 <vponomaryov> hello
15:01:11 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:18 <u_glide> hello
15:01:44 <bswartz> sorry I've been unavailable most of the last week
15:02:02 <bswartz> #topic Manila Midcycle Meetup
15:02:41 <bswartz> so based on all the feedback I've received, I think that July 29-30 are the best dates to hold the meetup
15:03:19 <bswartz> The only known conflict that creates is that it's right before the cinder meetup, so people who work on both (like xyang1) probably can't travel
15:03:53 <bswartz> does anyone else have a problem with that date?
15:04:04 <xyang1> bswartz: Yes, I can't travel, not because of cinder though
15:04:27 <xyang1> bswartz: I can join google hangout
15:04:49 <bswartz> yes, sorry xyang1, there was no date that would work for 100% of people
15:05:02 <xyang1> bswartz: Understand
15:06:10 <bswartz> okay I will make it official then, and people can start scheduling their travel (if possible)
15:06:10 <rraja> bswartz: csaba and I would like to join through google hangout. wasn't there a cap on the number of people who could join google hangout last time?
15:06:25 <cFouts> o/
15:06:27 <bswartz> rraja, yes google has a cap for video calls
15:06:44 <bswartz> we can do an audio bridge again which scales much better
15:07:10 <rraja> bswartz: ok. thanks!
15:08:00 <bswartz> next month we will get an agenda together for the meetup, hopefully more organized than last time
15:08:11 <bswartz> anything else about meetup?
15:08:29 <bswartz> #topic Separate ID approach for Share Migration and Replication
15:08:40 <bswartz> ganso_: you're up
15:08:51 <ganso_> ok so, we have not decided on the approach
15:09:02 <ganso_> Approach 1 can be considered as already implemented
15:09:13 <ganso_> are we going to stick with it?
15:09:19 <bswartz> #link https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg54504.html
15:10:05 <bswartz> ganso_: I believe all of the options are workable for the migration use case, and I don't have a strong preference for any of them
15:10:21 <bswartz> ganso_: the decider for me will be which option also works well for replication, and that I just don't know yet
15:10:58 <ganso_> the consequence is that all drivers will have to make some changes in order to be compliant with migration. But last time we discussed this, I brought up the fact that the current implementation still requires driver updates
15:11:54 <bswartz> why are driver updates required if we don't choose proposal 1
15:12:22 <ganso_> because those changes are related to network path and mount command
15:12:33 <ganso_> we ended up not discussing this at the summit
15:13:14 <ganso_> whoever is mounting the source and destination shares needs to reach them
15:13:28 <ganso_> unless it is a VM, a network path is needed
15:13:54 <bswartz> I'm still not convinced that the network/mount stuff needs to be in the drivers
15:13:56 <ganso_> it is not common for drivers to provide a network path, because this type of scenario has not been needed until now
15:14:22 <bswartz> ganso_: what is the difference between the "network path" and the export location?
15:14:36 <ganso_> I would prefer it not to be, but I still have not figured out a way to avoid requiring driver code... at this moment I am working on improving the copy method and error cleanup
15:15:01 <ganso_> the export location has an ip address from inside openstack network
15:15:18 <ganso_> a node from the admin network cannot reach that ip (normally)
15:15:25 <bswartz> oh
15:15:46 <bswartz> you think we need to require all drivers to make their shares also available on the admin network?
15:15:59 <bswartz> that would be a significant new requirement
15:16:18 <ganso_> so the network path is an approach that driver vendors would need to figure out in order to make that connection possible. I implemented it for the generic driver as an example
15:16:39 <ganso_> if the entity mounting the shares is on the admin network, yes
15:16:47 <ganso_> if the entity is a VM, then no
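
To make the "network path" idea above concrete, here is a minimal sketch of the kind of per-driver hook being discussed, assuming a hypothetical get_migration_path() method and migration_admin_ip option; the real proposal is the generic-driver change linked just below and may look different.

    from manila.share import driver

    class ExampleShareDriver(driver.ShareDriver):
        """Illustrative only -- method name, arguments and option are assumptions."""

        def get_migration_path(self, share, export_location, share_server=None):
            # export_location is the tenant-facing path, e.g.
            # "10.254.0.5:/shares/share-1234"; a node on the admin network
            # normally cannot reach that IP.
            _tenant_ip, share_path = export_location.split(':', 1)

            # Re-export (or NAT) the same share on an admin-network address so
            # whatever mounts it for the data copy can actually reach it.
            admin_ip = self.configuration.safe_get('migration_admin_ip')
            return '%s:%s' % (admin_ip, share_path)
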
15:17:10 <bswartz> ganso_: do you have a link to that change?
15:17:23 <ganso_> yes hold on a sec
15:17:33 <bswartz> https://review.openstack.org/#/c/179791/
15:17:38 <bswartz> found it
15:17:47 <ganso_> yea :) thanks
15:18:16 <bswartz> okay this is something I hadn't considered
15:18:42 <ganso_> the mount command is something I raised as a concern of mine, because the mount command for NFS is simple, but I am not familiar with other protocols. I know CIFS requires additional parameters
15:19:13 <ganso_> if the entity that mounts has the libraries and intelligence to do it, and does not require the parameter values from backends, then drivers would also not need to provide them
15:19:20 <bswartz> network reachability will likely cause problems with migration, but forcing drivers to make shares available on multiple networks might be too difficult for some backends
15:19:55 <ganso_> yes, I tested on HDS hardware and it was a bit more complex
15:20:03 <ganso_> that raised my concern
15:20:11 <bswartz> I wonder if there could be any other solution
15:20:31 <ganso_> if it were a VM, then I am quite positive it could be compatible with any backend
15:20:33 <bswartz> such as use of service VMs connected to multiple networks or some kind of neutron magic
15:20:43 <bswartz> if what was a VM
15:20:54 <ganso_> the entity that mounts, like the data copy service node
15:21:16 <bswartz> yeah...
15:21:24 <bswartz> that would create a new dependency again though
15:21:36 <ganso_> we initially wanted it to be a baremetal node, but maybe it will have to be a VM
15:21:50 <bswartz> I don't like VMs being the only option
15:22:21 <vponomaryov> we need to find a way to copy data without mounting the share
15:22:22 <ganso_> unfortunately I do not see another option yet, I am looking for it :\
15:22:42 <bswartz> vponomaryov: I don't think we can do that generically
15:22:44 <ganso_> vponomaryov: for a generic migration I do not think this is possible
15:22:54 <bswartz> vponomaryov: mounting and copying the data is supposed to be the universal fallback
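
A minimal sketch of the mount-and-copy fallback being described, assuming a standalone data-copy helper with rsync available; the mount points, options, and the helper itself are illustrative, not the migration code under review.

    import subprocess

    def copy_share_data(source_export, dest_export, proto='nfs'):
        src, dst = '/mnt/migration-src', '/mnt/migration-dst'
        subprocess.check_call(['mkdir', '-p', src, dst])
        # Mount the source read-only and the destination read-write.
        subprocess.check_call(['mount', '-t', proto, '-o', 'ro', source_export, src])
        subprocess.check_call(['mount', '-t', proto, dest_export, dst])
        try:
            # -a preserves permissions, ownership and timestamps.
            subprocess.check_call(['rsync', '-a', src + '/', dst + '/'])
        finally:
            subprocess.check_call(['umount', src])
            subprocess.check_call(['umount', dst])

This only works if both export paths are reachable from wherever the helper runs, which is exactly the network-path problem discussed above.
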
15:23:27 <bswartz> we could in theory remove the existing export location before adding new ones (new export locations on the admin network)
15:23:59 <bswartz> but this still creates headaches for drivers
15:24:36 <ganso_> that would be a similar approach. I don't think we need to remove it, just change it to read-only and add an additional export location to the list, and drivers would still need to ensure their share is reachable from the admin network
15:25:12 <ganso_> it does not improve a lot
15:25:25 <bswartz> I would want to poll the maintainers of the existing drivers and find out how hard it might be to add additional export locations on a different network for each one
15:25:54 <ganso_> I can send another email to the mailing list asking that
15:26:00 <ganso_> a different thread
15:26:32 <bswartz> this is only an issue for drivers that support share servers
15:26:51 <ganso_> bswartz: not really
15:27:07 <ganso_> bswartz: for DHSS = false I still had to provide the network path
15:28:40 <bswartz> well for drivers that don't support share servers, the drivers have no knowledge of networking -- it's left to the admin to ensure reachability
15:29:43 <ganso_> going back to the previous discussion for a moment: if all driver maintainers say they can provide a network path, are we going to stick with the approach that requires driver changes? In another train of thought we were discussing drivers not having to provide it, and our solution was to use a VM
15:29:46 <ganso_> bswartz: agreed
15:31:00 <bswartz> ganso_: I can't tell you my preference yet -- I need to spend time looking at the replication feature and how it interacts
15:31:17 <ganso_> how about we put that on hold for a moment and discuss the mount command, which would also be a restriction
15:31:51 <ganso_> do you think there is a way we can get rid of having to provide the mount command?
15:32:56 <bswartz> the mount command is going to be client-side specific
15:33:09 <bswartz> and protocol specific (obviously)
15:33:23 <bswartz> but if we had a data-copy service, wouldn't it mount all NFS shares the same way?
15:33:26 <bswartz> and all CIFS shares the same way?
15:33:48 <bswartz> any config options needed for mounting CIFS could be part of the manila.conf file
15:34:04 <bswartz> I can't think of a mount option that would be specific to the driver or backend
15:34:05 <ganso_> the manila.conf with those options would be on the destination backend's node
15:35:06 <ganso_> the data copy service node would need to obtain that information somehow
15:35:37 <bswartz> manila.conf should be the same for all nodes, except for the enabled_share_backends option
15:36:36 <ganso_> hmmm I never thought of that as a real scenario :) I need to learn how to read the configs
15:37:02 <bswartz> in a single node installation, it's all very easy because everything goes in 1 config
15:37:03 <ganso_> ok, so I will send an email to the mailing list asking about the network path
15:37:21 <ganso_> at least we will then know our options
15:37:36 <bswartz> in a multi node installation, you need to think about which nodes run the data copy service (assuming we add it)
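
For illustration, the CIFS parameters a data copy service would need could sit in manila.conf next to the rest of the node's configuration; enabled_share_backends is a real option, while the data_copy_* names below are hypothetical.

    [DEFAULT]
    # Real option: differs per node, lists the backends that node runs.
    enabled_share_backends = backend1,backend2

    # Hypothetical data-copy options, identical on every node:
    # data_copy_mount_point_base = /mnt/manila-migration
    # data_copy_cifs_mount_options = username=admin,password=secret,vers=3.0
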
15:37:53 <bswartz> okay we're about out of time on this topic
15:38:11 <bswartz> I wish we had more answers, but I don't want to take up the whole meeting on it
15:38:33 <bswartz> I will figure out the question of share IDs and how they affect replication by next week
15:39:05 <bswartz> anything else before we move on to next topic?
15:39:30 <bswartz> #topic manila-service-image hosting
15:39:43 <bswartz> u_glide: you're up
15:39:55 <u_glide> As you probably know, I have created the manila-image-elements project and proposed a new manila-service-image (https://review.openstack.org/187550/). Now it's time to discuss where we will host new images.
15:40:14 <u_glide> The current (old) image is hosted on vponomaryov's Dropbox, but I think it doesn't meet our new requirements for image hosting
15:40:32 <u_glide> new manila-service-images should have releases
15:40:55 <u_glide> and these releases should be hosted somewhere
15:40:55 <bswartz> yes
15:41:15 <bswartz> well I was assuming that openstack-infra would host them where they host releases of all of the various projects they build
15:42:02 <bswartz> wouldn't the qcow2 file that gets generated from the image project just go into the tarball and get uploaded to http://tarballs.openstack.org/ ?
15:43:22 <u_glide> bswartz: seems that http://tarballs.openstack.org/ hosts only source code
15:43:45 <bswartz> it hosts whatever gets put into the tarball
15:44:05 <bswartz> I'll admit that I don't know what process does that
15:44:15 <u_glide> ok, does it have enough bandwidth?
15:44:31 <bswartz> I'm pretty sure that we can work with the infra folks to solve this
15:45:24 <bswartz> if there needs to be a "release" process that pulls down the latest manila-image project and builds a qcow2 and uploads that somewhere
15:46:01 <bswartz> we can go in their channel and ask
15:46:12 <bswartz> or put an agenda item on the infra meeting and discuss it there
15:46:31 <bswartz> they meet on tuesdays at 1900 UTC
15:46:43 <u_glide> ok, sounds good but we also should plan our release policy for manila-service-image
15:47:08 <bswartz> I think the release policy should match the tarball release policy
15:47:28 <bswartz> Everything with a tag is preserved, as is HEAD from every branch
15:48:19 <bswartz> so if we want to make a new release, we just tag the change in git
15:48:46 <u_glide> it will be perfect
15:48:56 <u_glide> :)
15:49:42 <u_glide> I like the idea of automating such things
15:49:56 <bswartz> It's already automated for tarballs
15:50:08 <bswartz> for example: http://tarballs.openstack.org/manila/
15:50:39 <bswartz> http://tarballs.openstack.org/python-manilaclient/
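
Assuming the tag-driven flow described here carries over to the image project, cutting a release would come down to pushing a signed tag (the "gerrit" remote name, as configured by git-review, is an assumption):

    # Tag the commit to release; infra jobs keyed on tags can then build
    # and publish the resulting qcow2.
    git tag -s 0.1.0 -m "manila-image-elements 0.1.0"
    git push gerrit 0.1.0
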
15:51:58 <u_glide> ok, I will put an item on the infra meeting agenda
15:52:12 <u_glide> to discuss details
15:52:14 <bswartz> u_glide: will you take the lead in working with infra to get the builds to run somehow and upload the qcow2 files somewhere?
15:52:32 <bswartz> Please copy me so I can attend
15:52:45 <u_glide> bswartz: ok, np
15:53:03 <bswartz> u_glide: anything else on this topic?
15:53:38 <u_glide> no, now we have a direction to follow
15:53:44 <bswartz> #topic open discussion
15:53:55 <bswartz> alright, anything else for today?
15:54:01 <ganso_> could we discuss briefly the minimum requirements?
15:54:04 <zhongjun2> I saw the manila-minimum-driver-requirements etherpad.
15:54:05 <zhongjun2> Some features can't be supported in a manila driver, for example create_share_from_snapshot.
15:54:05 <zhongjun2> Will driver code like that be allowed to merge in Liberty?
15:54:07 <bswartz> ganso_: yes
15:54:14 <ganso_> #link https://etherpad.openstack.org/p/manila-minimum-driver-requirements
15:54:45 <bswartz> zhongjun2: snapshots are pointless if you can't use them to create new shares
15:55:10 <bswartz> what else do you think snapshots can be used for if not creating new shares?
15:55:25 <vponomaryov> bswartz: restoring an existing one
15:55:29 <vponomaryov> the data on it
15:55:47 <zhongjun2> The snapshot data can be seen under the share directory
15:55:50 <bswartz> vponomaryov: the only way to do that in manila today is to create a new share from the snapshot
15:56:03 <markstur> outside of manila they'd be used for backup and restore
15:56:15 <ganso_> vponomaryov: the workflow is to create a new snapshot from share, AFAIK
15:56:42 <bswartz> we don't have a reset-share-to-snapshot API
15:56:51 <vponomaryov> bswartz: via manila, yes, but for an admin it is definitely a use case
15:57:06 <bswartz> so the topic of snapshot support has come up before
15:57:29 <ganso_> the consequence would be 2 different ways to handle snapshots, and driver vendors would have to implement one of them?
15:57:55 <cknight> ganso_: Not excited about that
15:58:08 <bswartz> my response has always been: snapshots are an essential feature in manila -- drivers don't have to implement them very efficiently, but they do have to match the semantics expected by the user -- which are that you can create shares from snapshots and snapshots represent the state of the original share at the time the snapshot was taken
15:58:42 <bswartz> I'm not sure if anyone disagrees...
15:58:57 <cknight> bswartz: is there a reason you can't restore a share from a snapshot?
15:59:06 <ganso_> cknight: neither am I
15:59:18 <bswartz> cknight: it's not needed because you can create a new share from it, which has the same effect
15:59:29 <vponomaryov> bswartz: a new share takes quota
15:59:45 <cknight> bswartz: a new share has a different export location
15:59:47 <vponomaryov> bswartz: and if we do not need duplication, why should we create a new one?
15:59:55 <cknight> vponomaryov: +1
16:00:01 <bswartz> if there is strong demand for a restore-share-to-snapshot API we could add it
16:00:29 <cknight> bswartz: it's always struck me as an odd omission
16:00:33 <markstur> restore was asked for at the summit presentation (not that 1 question counts as strong demand)
16:00:35 <bswartz> it's tricky though because clients would observe a time warp if they had the share mounted at the time of the restore
16:01:20 <zhongjun2> restore-share-to-snapshot is a good idea
16:01:29 <markstur> +1
16:01:30 <bswartz> the idea has been to keep the API minimal while still allowing all the things a client might want to do
16:01:30 <vponomaryov> bswartz: a vulnerability is not a reason to skip implementing a feature
16:01:37 <bswartz> I'm not opposed to this new API proposal though
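
For reference, the existing semantics bswartz describes (create a brand-new share from a snapshot rather than restore in place) look roughly like this with the current CLI; the names and size are examples:

    # Snapshot an existing share, then create a new share from that snapshot.
    manila snapshot-create share-1 --name snap-of-share-1
    manila create NFS 1 --snapshot-id <snapshot-uuid> --name restored-copy
    # The new share gets its own export location and counts against quota,
    # which is the concern raised above.
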
16:01:42 <cknight> bswartz: time check
16:01:47 <ganso_> guys, please take a look at the "list of upcoming new minimum requirements" in the etherpad; I think it is important so drivers can plan ahead for the Liberty release
16:01:48 <bswartz> ah crud
16:01:51 <bswartz> we're past time
16:02:07 <xyang1> bswartz: Do we just add comments on the etherpad?
16:02:11 <bswartz> thanks everyone
16:02:13 <ganso_> xyang1: yes
16:02:15 <bswartz> xyang1: yes you're welcome to
16:02:19 <xyang1> Ok
16:02:32 <bswartz> eventually the requirements will be codified in the developer docs, and you can submit comments in gerrit for that
16:02:41 <bswartz> #endmeeting