15:00:38 <bswartz> #startmeeting manila
15:00:40 <openstack> Meeting started Thu Jun 18 15:00:38 2015 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:44 <openstack> The meeting name has been set to 'manila'
15:00:47 <cknight> Hi
15:00:49 <ganso_> hello
15:00:51 <rraja> hi
15:00:52 <vponomaryov> Hello
15:00:52 <bswartz> hello
15:00:55 <csaba> hi
15:00:57 <markstur> hi
15:01:05 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:13 <u_glide> hello
15:01:44 <bswartz> real quick before we get started, I wanted to remind everyone that L-1 is next week
15:01:47 <tbarron> hi
15:01:51 <xyang1> hi
15:02:23 <bswartz> only a few things targeted at L-1 have been merged, so there will be a lot of retargeting
15:03:10 <bswartz> but the priority for reviews the next week should be stuff that's targeted at L-1
15:03:48 <bswartz> #topic  manila-service-image hosting
15:04:13 <bswartz> u_glide: you're up
15:04:56 <u_glide> we agreed with the infra team that manila-service-image will be hosted as a regular release on the tarballs site
15:05:42 <u_glide> all releases + latest master build will be hosted
15:05:54 <u_glide> only one question is left
15:06:22 <toabctl> hi
15:06:22 <u_glide> which build will we use in the devstack plugin?
15:06:37 <u_glide> latest master or some stable release
15:06:57 <bswartz> master
15:06:59 <vponomaryov> I think we should use a stable build, with periodic updates as new ones are created
15:07:12 <bswartz> -1
15:07:16 <vponomaryov> to avoid nice surprises
15:07:32 <bswartz> this team is going to control the image project though
15:07:35 <bswartz> how can we surprise ourselves?
15:07:51 <markstur> stable branches should use stable, but I'd think master would use latest
15:07:55 <ganso_> I think for devstack we are supposed to test always the latest
15:08:00 <vponomaryov> easy: tempest tests mostly run in the gates =)
15:08:01 <u_glide> bswartz: the service image could be incompatible between releases
15:09:10 <u_glide> for example manila L will be compatible only with service image <= 0.2.0
15:09:19 <vponomaryov> I agree to use the latest build only if we run all tempest tests against that image on its project's commits
15:09:28 <bswartz> okay so if we pin the release of the service image that runs in the gate, how will we get ourselves unstuck when there's a bug that needs fixing?
15:11:06 <vponomaryov> bswartz: you mean cross-changes?
15:11:13 <csaba> can't we have a default known-good image URL recorded in the manila tree itself, and pull that (or a conf-overridden alternative if something goes wrong)?
15:12:08 <ganso_> updates to the image are going to be approved after review, correct?
15:12:21 <bswartz> hmm
15:12:34 <ganso_> if something breaks the image, thus breaking the gate, then we revert, pull, or fix
15:12:35 <vponomaryov> ganso_: yes, standard gerrit-review process
15:12:40 <bswartz> ganso_: yes the image project will have gerrit change control like everything else
15:13:03 <ganso_> the image will be tested in its own gate then, so we will know when a patch breaks it
15:13:22 <toabctl> can we just run our latest tempest tests from manila master against every image review request?
15:13:33 <vponomaryov> toabctl: yes, we can
15:13:40 <bswartz> toabctl: yes that's what I was thinking
15:13:42 <vponomaryov> toabctl: and should do
15:13:57 <toabctl> ok. that's also what I would prefer to do
15:14:17 <bswartz> okay so assuming we do it that way, is there any danger using the master version?
15:14:25 <u_glide> Do we plan to run ALL jobs, or some of them??
15:14:25 <markstur> so we can tempest test and use latest for master devstack
15:14:36 <markstur> what about someone running "stable/x" devstack?
15:14:55 <vponomaryov> markstur: for such things we keep the old image as-is
15:14:56 <u_glide> bswartz: old releases will download incompatible images
15:15:00 <toabctl> u_glide: hm. good question. I think we don't need all.
15:15:06 <bswartz> u_glide: I would say just 1 job
15:15:14 <vponomaryov> markstur: or push a new commit with a new image
15:15:44 <toabctl> the image project just has stable/liberty (and so on) branches, which are used for the corresponding manila branches.
15:15:49 <markstur> I was thinking the old branches would need to be told to use an old image.
15:16:19 <bswartz> yeah I think stable branches will use the stable versions of the image
15:16:28 <vponomaryov> markstur: actually the newly created image is going to have the same functionality plus additions, without incompatibilities
15:16:37 <bswartz> stable/liberty will track the latest commit to that branch just like master does
15:16:44 <ganso_> vponomaryov: it should, but we are not sure we can guarantee that
15:16:51 <cFouts> hi
15:17:08 <markstur> vponomaryov, backwards compat would be best, but I think u_glide already suggested we could break compat
15:17:09 <vponomaryov> ganso_: I mean the difference between the current one and the first new one
15:17:17 <markstur> oh
15:17:23 <u_glide> markstur: +1
15:17:40 <ganso_> vponomaryov: I see, like an upgrade plan
15:17:49 <ganso_> vponomaryov: always backwards compatible with the previous one
15:17:54 <toabctl> there is a time period (after an image change is accepted, while the image rebuilds and gets published) where manila uses an outdated image
15:18:50 <toabctl> hm. but that shouldn't be a problem I guess...
15:19:16 <vponomaryov> so, we agreed to run tempest tests against the image project and use the latest build for the master branch, right?
15:19:29 <vponomaryov> and stable build for stable branches?
15:19:34 <bswartz> +1
15:19:41 <u_glide> ++
15:20:02 <toabctl> +1
15:20:09 <ganso_> +1
15:20:10 <markstur> +1
15:20:41 <bswartz> okay sounds good
15:20:54 <bswartz> u_glide: you satisfied with this plan?
15:21:01 <u_glide> bswartz: yes
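(For illustration, a minimal sketch of csaba's known-good-default idea under the agreed plan: record a default image URL in the tree and allow a configuration override if a build turns out broken. The option name and the tarballs URL layout below are assumptions, not the agreed implementation.)

    # Sketch only: a default known-good service image URL recorded in the
    # tree, overridable via configuration. Master tracks the latest build;
    # stable branches would pin a released build instead. Names and the
    # URL layout are assumptions for illustration.
    from oslo_config import cfg

    CONF = cfg.CONF

    service_image_opts = [
        cfg.StrOpt('manila_service_image_url',
                   default='http://tarballs.openstack.org/'
                           'manila-image-elements/images/'
                           'manila-service-image-master.qcow2',
                   help='URL of the manila service image to download; '
                        'override to pin a specific release.'),
    ]

    CONF.register_opts(service_image_opts)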
15:21:20 <bswartz> #topic Separate ID approach for Share Migration and Replication
15:21:43 <bswartz> I replied to the mail thread here
15:21:47 <bswartz> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067376.html
15:21:55 <bswartz> we don't need to hash this one out again in the meeting
15:22:08 <bswartz> I think it makes sense to keep the discussion on the ML
15:22:43 <bswartz> but I wanted to mention that I'm now in favor of implementing "share instances" even though it will be a lot of work/code change
15:23:11 <bswartz> ganso_: this will affect your migration proposal, but hopefully in a good way
15:23:16 <ganso_> right, so if we make some progress on the share-instance idea and more people agree it looks promising, then we could step forward
15:23:52 <ganso_> bswartz: yes, I was about to start coding a new DB column, but preferred to wait for this meeting's discussion
15:24:09 <bswartz> ganso_: well the share instance stuff will take a long time
15:24:17 <bswartz> you'd be better off continuing your development in parallel
15:24:38 <bswartz> there will be a lot of work to do in the data copy service and on the network side independent of this change
15:24:42 <ganso_> bswartz: I think your email summarized my proposals in a good way
15:25:00 <bswartz> my proposal only addresses the thorny problem of how the 2 shares are tracked in the database and what IDs are used
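(For illustration, a rough sketch of what the "share instances" split could look like at the database layer, assuming SQLAlchemy models; all names here are hypothetical, not the proposal's final schema.)

    # Hypothetical sketch: the user-facing Share keeps its stable ID, while
    # each physical copy (original, migration target, replica) is a
    # ShareInstance row with its own internal ID and host.
    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class Share(Base):
        __tablename__ = 'shares'
        id = Column(String(36), primary_key=True)  # ID users see; never changes

    class ShareInstance(Base):
        __tablename__ = 'share_instances'
        id = Column(String(36), primary_key=True)   # per-copy internal ID
        share_id = Column(String(36), ForeignKey('shares.id'))
        host = Column(String(255))    # backend holding this copy
        status = Column(String(255))  # e.g. 'available', 'migrating'
        share = relationship(Share, backref='instances')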
15:25:11 <ganso_> bswartz: we have 2 big classes of changes: one that involves changing the ID, with the vendor implementing a method, and another that requires a lot of effort on the core code
15:25:55 <vponomaryov> ganso_: maybe it's better to direct effort to another blocker: the admin network that is required to mount both shares?
15:26:04 <u_glide> vponomaryov: +1
15:26:08 <vponomaryov> while ID approach is under development
15:26:41 <u_glide> there are still a lot of unresolved questions
15:26:46 <bswartz> vponomaryov: that's next topic
15:26:57 <bswartz> :-)
15:27:03 <vponomaryov> oh, right =)
15:27:12 <ganso_> vponomaryov: yes, that is also important, but bswartz mentioned that the VM approach is not interesting; if more people hop on that bandwagon, then we are set to stick with vendors implementing the necessary methods and partial support in Liberty
15:27:34 <bswartz> #topic Network accessibility for share migration
15:27:43 <ganso_> vponomaryov: and it is already coded...
15:27:52 <bswartz> this was another new problem ganso_ brought up last week
15:27:58 <ganso_> vponomaryov: there is that benefit
15:28:01 <bswartz> I replied to that thread too
15:28:04 <vponomaryov> bswartz, ganso: why exactly a VM? it can be any host accessible to the manila host and the share backends
15:28:07 <bswartz> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067378.html
15:28:47 <ganso_> vponomaryov: a VM would be inside OpenStack; a manila host is in the admin network. Admin network access to shares is an unusual use case at this time
15:28:51 <bswartz> in short, the problem is that some shares are not mountable by the manila services responsible for migrating data in the current proposal
15:29:16 <bswartz> because those shares are only exported on private/segmented tenant networks
15:29:24 <vponomaryov> for that case we should have a dependency on such a proxy that is accessible
15:30:19 <bswartz> my proposal to fix this problem is to require backends which support exporting shares on private networks (which is only backends with driver_handles_share_servers=true) to also export shares on the admin network
15:31:06 <ganso_> vponomaryov: would the use of a proxy be something standard for all driver vendors?
15:31:15 <bswartz> this should not be a very difficult technical requirement for drivers
15:31:17 <cknight> bswartz: +1  The network plug-in mechanism we have should be reusable for the admin network.
15:31:22 <ganso_> vponomaryov: like, something we can code in Core code?
15:31:47 <bswartz> vponomaryov: what kind of proxy do you have in mind?
15:31:54 <vponomaryov> ganso_: for drivers that support migration
15:32:20 <vponomaryov> bswartz: a host that is reachable by share backends with migration support
15:32:21 <bswartz> if it's based on a nova VM, then I have a problem with it because I don't want to make share migration dependent on nova
15:32:56 <vponomaryov> bswartz: it does not matter whether it is a Nova VM or not
15:33:07 <bswartz> More and more openstack clouds will be based on other forms of compute, such as ironic and magnum
15:33:21 <cknight> bswartz: +1
15:33:26 <vponomaryov> ironic is backend for Nova
15:34:08 <ganso_> bswartz: what if it is dependent on neutron? if I understand this correctly, it would be possible to work some neutron magic to make the manila host, or data copy service node, to connect to the backend directly through the openstack network, like... if admins, or even DHSS = true drivers usually use of a provider network (FLAT or VLAN) to make that
15:34:08 <ganso_> possible, what if the node in the admin network can also connect to that provider network inside openstack?
15:34:09 <bswartz> and I'm also not forgetting standalone manila without the rest of openstack
15:35:05 <bswartz> I'm concerned that a network bridge/proxy approach will be too complicated -- we already have several network plugins, and we're likely to get more
15:35:24 <bswartz> a proxy would have to understand every network system supported by manila
15:35:40 <bswartz> putting the requirement on the drivers to simply provide an accessible mount point makes the problem far easier
15:35:45 <ganso_> vponomaryov: the generic driver creates a VIF so the manila node can access the manila service network inside OpenStack; is it possible to create something like this for the provider network the backend is in?
15:36:01 <bswartz> I'm curious to know if there are drivers that can't do that however
15:36:17 <cknight> ganso_: Yes, neutron magic could be used, but that doesn't solve the non-OpenStack use case.  And neutron is already complicated without resorting to magic.  If we define a singleton admin network, it could be defined using any of our 5 network plugins.
15:36:46 <bswartz> it seems like a fairly modest requirement to provide access to shares directly on a flat network in addition to whatever tenant networks require access
15:36:56 <cknight> ganso_: And the dhss drivers already know how to handle network resources, this is just incremental to that.
15:37:44 <ganso_> cknight: +1 for the non-OpenStack use case
15:38:02 <ganso_> cknight: that can be quite a deal breaker
15:38:18 <ganso_> so, no neutron magic
15:39:08 <ganso_> so, requiring a network path is already coded in the prototype; as soon as it is merged, driver vendors will probably focus more on that
15:39:27 <bswartz> fwiw, I think the proxy approach would also work, but I feel that it's likely to be more effort to maintain over time
15:39:29 <ganso_> or we will get complaints from ops that this requirement is too complex to satisfy
15:40:37 <vponomaryov> bswartz: we can have more than one solution
15:40:59 <ganso_> vponomaryov: +1, fallback solution maybe
15:41:54 <bswartz> vponomaryov: if all of the drivers that support share_servers and segmented networking can be modified to provide an additional export_location on the admin network then no other solution is necessary
15:42:02 <bswartz> and that's only a few drivers
15:42:10 <vponomaryov> bswartz: solutions can be dedicated to different network plugins or installations in general
15:42:33 <cknight> vponomaryov:  An alternate fallback may be interesting, but let's get the primary design working first.
15:42:51 <vponomaryov> I don't mind
15:42:54 <ganso_> bswartz: what if we delegate the network plugin improvement to the driver vendors that cannot provide the network path? I remember I was going to contribute to the network plugin when I was coding my driver
15:43:06 <bswartz> I'd rather avoid doing extra work if it can be avoided
15:43:47 <bswartz> the main thing that would convince me that we need a general proxy approach would be if there are drivers that simply can't implement my proposal
15:44:14 <vponomaryov> dhss=false drivers
15:44:21 <bswartz> My current understanding of what would be needed for a proxy that would work in all cases is that it would be a ton of work
15:44:39 <bswartz> vponomaryov: those don't have this problem though
15:45:33 <bswartz> for any driver that doesn't manage networking, it is the administrator's job to ensure connectivity, both to the tenant networks and to the admin network
15:46:15 <u_glide> we should consider scaling of this operation
15:46:24 <bswartz> scaling what
15:46:40 <vponomaryov> scaling load of migration operations
15:46:46 <u_glide> scaling of copying from one backend to another
15:46:51 <cknight> u_glide: the data movement service should be horizontally scalable
15:46:58 <ganso_> that's what the data copy service is for
15:47:00 <cknight> u_glide: run as many of them as you need
15:47:18 <u_glide> cknight: ok
15:48:08 <bswartz> yes I think having a separate service for actually doing the data copying is essential to avoid bottlenecks
15:48:18 <bswartz> as cknight says, horizontally scalable
15:49:07 <bswartz> anyways there is a ML thread on this topic
15:49:51 <bswartz> my proposal is there, and I'd like for any driver maintainers that have share_server_supporting drivers to provide feedback
15:49:52 <ganso_> I guess this topic is not such a blocker for the ID topic after all
15:50:15 <bswartz> it should be really easy to add another admin-network-facing network interface to the generic driver to export the shares
15:50:27 * bswartz hopes that's true
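(For illustration, a sketch of bswartz's proposal from a driver's point of view: a DHSS=true driver reports an extra export location on the flat admin network alongside the tenant-network one. The is_admin_only flag and the helper methods below are assumptions for illustration, not a settled API.)

    # Hypothetical sketch: return one export location on the tenant network
    # plus one on the admin network, so the data copy service can mount the
    # share without reaching into tenant segmentation. Helper methods are
    # invented for illustration.
    def create_share(self, context, share, share_server=None):
        tenant_ip = self._allocate_tenant_interface(share_server)
        admin_ip = self._allocate_admin_interface(share_server)
        self._export_share(share, [tenant_ip, admin_ip])
        return [
            {'path': '%s:/shares/%s' % (tenant_ip, share['id']),
             'is_admin_only': False},  # mounted by the tenant
            {'path': '%s:/shares/%s' % (admin_ip, share['id']),
             'is_admin_only': True},   # mounted by the migration service
        ]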
15:50:47 <bswartz> #topic open discussion
15:50:49 <cknight> bswartz: NetApp and HP have already said they can do it.  What other DHSS drivers are there?
15:50:54 <bswartz> anything else today?
15:51:02 <ganso_> bswartz: I already coded that in my prototype, but it kinda looks like a workaround approach
15:51:02 <bswartz> cknight: one of EMC's I think
15:51:03 <vponomaryov> EMC
15:51:20 <ganso_> bswartz: there was another topic in the list
15:51:29 <cknight> OK, xyang1 can weigh in, then.
15:51:30 <vponomaryov> bswartz: 4. Quick review of Minimum Requirements?
15:51:42 <bswartz> oh!
15:51:51 <bswartz> someone modified the agenda mid-meeting...
15:51:59 <bswartz> sneaky
15:52:03 <bswartz> #topic Quick review of Minimum Requirements
15:52:04 <xyang1> cknight: sorry, I missed the latest discussion
15:52:12 <ganso_> bswartz: no, 1 minute prior to the meeting :P
15:52:23 <vponomaryov> ganso_: confirm =)
15:52:27 <ganso_> someone removed the Upcoming Minimum requirements
15:52:35 <cknight> xyang1: No worries, it's in the ML thread Ben linked to.
15:53:01 <bswartz> xyang1: ping me later today or tomorrow if you want to discuss
15:53:10 <bswartz> #link https://etherpad.openstack.org/p/manila-minimum-driver-requirements
15:53:12 <xyang1> cknight, bswartz: thanks, I'll take a look
15:54:12 <bswartz> ganso_: any specific questions or are you just asking if there is more feedback on what you have?
15:54:14 <ganso_> so, I had included an "upcoming minimum requirements" section so driver vendors can anticipate what's coming that they will need to support
15:54:20 <xyang1> I think QoS support is advanced
15:54:37 <ganso_> but someone removed that
15:54:46 <bswartz> yes QoS isn't required yet
15:54:57 <xyang1> It is in the minimum required list
15:55:04 <xyang1> it is not even implemented yet
15:55:28 <xyang1> not every driver can support that
15:55:32 <vponomaryov> someone from the future came to us =)
15:55:38 <rraja> ganso_: manage/unmanage snapshot and thin provisioning, right?
15:55:40 <ganso_> xyang1: I added because it was a default capability, thanks for the feedback, I will remove it :)
15:55:48 <ganso_> rraja: yes :) thanks!
15:56:01 <xyang1> ganso_: default to False, like in Cinder:)
15:56:34 <bswartz> I'm not sure we need to enumerate future minimum requirements until the features are actually merged and shown to be working
15:56:44 <ganso_> ok so, Thin Provisioning will not be a minimum requirement, but Manage/Unmanage Snapshot will, correct?
15:57:12 <bswartz> we have lots of proposals for features but without working code it's hard to even have a discussion about when they'll be required, if ever
15:57:18 <ganso_> bswartz: I can change the title to "Planned Upcoming Minimum Requirements", so it is clearer that they are still uncertain
15:57:23 <xyang1> ganso_: that is future, not implemented yet
15:57:25 <bswartz> yes that helps
15:58:08 <ganso_> btw, can access rules be RW only?
15:58:16 <vponomaryov> ro/rw
15:58:39 <toabctl> no idea if that was discussed in the last meetings, but how is the progress for the external CI systems from the different vendors? I guess that's the mini-minimum requirement...
15:58:42 <ganso_> vponomaryov: ok, thanks
15:59:00 <markstur> toabctl,   mini???
15:59:07 <markstur> mega
15:59:09 <cknight> Somewhere we should also enumerate optional features.
15:59:14 <toabctl> markstur: ah. right! :)
15:59:19 <cknight> And which drivers support them.
15:59:20 <vponomaryov> markstur: ^^
15:59:32 <bswartz> toabctl: the upcoming deadline next week is to have all the driver maintainers sign up and commit to doing CI
15:59:45 <bswartz> I'm going to track down those who haven't committed and find out why they're not paying attention
16:00:17 <toabctl> bswartz: so you are in personal contact with the different maintainers?
16:00:21 <bswartz> #link https://etherpad.openstack.org/p/manila-driver-maintainers
16:00:37 <bswartz> we know who they are from the git history
16:00:46 <bswartz> and I've talked to all of them in the past
16:01:03 <bswartz> some seem to be less active recently
16:01:04 <markstur> We should continue the "minimum" req list discussion next week
16:01:05 <bswartz> we're out of time though
16:01:14 <toabctl> bswartz: so even if they don't read openstack-dev, they all know that there is a deadline?
16:01:45 <bswartz> they should be following the manila ML at least
16:01:56 <bswartz> those that aren't I will reach out to personally
16:02:07 <toabctl> bswartz: ok. that's good news.
16:02:11 <bswartz> that's why we have this early deadline
16:02:19 <bswartz> thanks everyone
16:02:21 <bswartz> #endmeeting