15:01:08 <tbarron> #startmeeting manila
15:01:09 <openstack> Meeting started Thu Nov  1 15:01:08 2018 UTC and is due to finish in 60 minutes.  The chair is tbarron. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:12 <openstack> The meeting name has been set to 'manila'
15:01:18 <bswartz> .o/
15:01:21 <ganso> hello
15:01:22 <amito> hello
15:01:26 <markstur> hi
15:01:39 <zaneb> o/
15:02:03 <tbarron> let's wait another minute
15:02:15 <tbarron> ping xyang
15:02:17 <dustins> \o
15:02:24 <xyang> hi
15:02:39 <tbarron> ping toabctl
15:03:14 <gouthamr> o/
15:03:43 <tbarron> Hi all!
15:03:51 <tbarron> #topic announcements
15:04:16 <tbarron> Spec deadline is a week from today.
15:04:36 <tbarron> We have specs on the agenda, so 'nuff said for now.
15:04:49 <tbarron> Summit is the following week.
15:05:12 <tbarron> I will make an etherpad for our forum session and send it to the dev list.
15:05:31 <tbarron> Who here will be at summit?
15:05:45 * tbarron will, vkmc will
15:05:47 <amito> most likely won't be there... but I'll know soon.
15:06:06 <tbarron> ok, ping me if you'll be attending this one please.
15:06:13 <amito> np
15:06:18 <vkmc> o/
15:06:27 <tbarron> I don't have any other announcements, anyone else?
15:06:34 <xyang> I won't be there.  I'll be at KubeCon Shanghai that same week
15:06:57 <tbarron> xyang: cool, but remember platforms need infra :)
15:07:01 <ganso> erlon, tpsilva and I will
15:07:09 <xyang> :)
15:07:17 <tbarron> ganso: cool, the boys from brazil :)
15:07:34 <tbarron> #topic technical vision
15:07:50 <tbarron> We have zaneb as a special guest today.
15:08:07 <tbarron> I've been following the proposed tech vision in review
15:08:21 <tbarron> #link https://review.openstack.org/#/c/592205/
15:08:31 <tbarron> I personally like the vision and
15:08:44 <tbarron> think manila fits unproblematically with it.
15:08:53 <tbarron> But we should all take a look.
15:09:01 <tbarron> zaneb: you're up :)
15:09:15 <zaneb> hi everyone :)
15:09:39 <amito> hi zaneb
15:09:39 <zaneb> I'm here to listen more than to talk :)
15:09:56 <zaneb> I'd like to hear what y'all think
15:10:23 <zaneb> because I'm not an expert on Manila by any means
15:10:28 <tbarron> me too, have folks had a chance to read over the review?
15:11:01 <zaneb> so do you think this direction is right? is there anything important that we've missed? is there anything there that shouldn't be?
15:11:37 <tbarron> IMO manila provides part of basic data center mgmt via self-service shares
15:11:50 <tbarron> It virtualizes the resources underneath
15:12:05 <tbarron> It isolates tenants from one another
15:12:32 <tbarron> It provides an API so that programs can use it, not just humans
15:12:53 <tbarron> It is not particularly nova-centered since it offers
15:12:53 <dustins> I haven't had the chance to look at it (and honestly didn't know there was an ongoing effort to define a vision)
15:13:07 <tbarron> up storage over a network rather than via a hypervisor
15:13:30 <zaneb> one question I had actually, is does Manila also make it easier to scale granularly (like e.g. Swift does) or do you hit limits in terms of having to pre-allocate the size of the file system (like e.g. Cinder)?
15:13:33 <tbarron> dustins: /me recommends following the dev list
15:14:10 <zaneb> I think it's still the latter, but I was unsure of how it works in practice
15:14:26 * gouthamr isn't aware of cinder's issue needing pre-allocating size of the file-system
15:14:40 <erlon> hey
15:15:09 <tbarron> zaneb: yeah, not sure about the cinder limitation either but
15:15:13 <gouthamr> zaneb: inherently shared-file-systems are "elastic" - so manila supports expanding and shrinking
15:15:21 <tbarron> we are elastic
15:15:33 <zaneb> gouthamr: I mean, the size of a volume in Cinder is fixed. In Swift if you need to store another object, you just do it. In Cinder if the FS is full, you have much more work to do. Not that that's bad, but they're different things
15:15:39 <tbarron> cloud admins need to track usage and increase backing store
15:15:53 <gouthamr> or users
15:16:09 <tbarron> zaneb: a "share" has a fixed size (with some back ends that's just a back end quota, nothing more)
15:16:18 <tbarron> zaneb: but it can be "extended"
15:16:18 <ganso> zaneb: there is no way to create a share with "undefined", "unlimited" or "indetermined" size, the user has to choose a size and then can increase or decrease it later
15:16:58 <zaneb> ok, so it's sorta in-between
15:17:00 <tbarron> we've considered "infinite" or "auto-sizing" shares but they've not been implemented yet
15:17:15 <zaneb> you have to define a size, unlike Swift, but it's easy to change, unlike Cinder
15:17:29 <gouthamr> yep, block devices don't let you shrink a volume
15:17:37 <bswartz> On some backends it's easy to change...
15:18:09 <amito> also, increasing or decreasing its size depends on backend capabilities
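The point above — that extending or shrinking a share depends on what the backend advertises — could be sketched like this (hypothetical capability keys and helper, not Manila's actual driver interface):

```python
def can_resize(capabilities, current_gb, new_gb):
    """Return True if a backend reporting `capabilities` supports
    resizing a share from current_gb to new_gb (sizes in GiB).
    Capability names here are illustrative, not Manila's real ones."""
    if new_gb > current_gb:
        return capabilities.get("extend_support", False)
    if new_gb < current_gb:
        return capabilities.get("shrink_support", False)
    return True  # same size: nothing to do
```

A backend that only advertises extend support would thus reject a shrink request up front, before any driver call is made.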
15:18:41 * zaneb is learning a lot today :)
15:18:52 <bswartz> #TIL
15:19:40 <tbarron> there's nothing stopping a program from selecting a flexible back end (via a share type) for shares, tracking usage, and extending as appropriate
15:19:54 <zaneb> so it may be the case that Manila is also contributing to several of the other design goals as well, not just the basic data center infrastructure
15:20:17 <tbarron> but the abstraction sits over some back ends that may not allow this
15:20:25 <gouthamr> +1 - the point about "arbitrary grouping" makes sense on multiple levels for us
15:20:27 <tbarron> zaneb: +2
15:20:33 <gouthamr> for instance ^ :)
15:20:37 <tbarron> I meant +1 :)
15:20:57 * tbarron wasn't exercising any special powers
15:22:21 <gouthamr> 'ts a good read, will review today and provide feedback
15:22:49 <tbarron> zaneb: fwiw I agree with rosmaita that quotas are useful even with "central-planning" cloud economics where individual consumers aren't evaluating opportunity costs :)
15:22:57 <tbarron> but that's a nit
15:23:28 <tbarron> Anyone have any deep concerns about the proposed tech vision or manila's fit ?
15:24:23 <tbarron> zaneb: I'm not hearing any worries so I'll add my +1 as PTL as you requested.
15:24:49 <zaneb> tbarron: thanks! new patch set with responses to comments coming today probably
15:25:00 <tbarron> If anyone comes up with issues take them to the review or catch zaneb on #openstack-tc
15:25:08 <zaneb> ++
15:25:13 <tbarron> zaneb: thanks for joining us today
15:25:24 <zaneb> thanks for your time everyone, looking forward to your comments :)
15:25:32 <tbarron> #topic spec reviews
15:26:32 <tbarron> #link https://wiki.openstack.org/wiki/Manila/SteinCycle#Other_Work
15:26:53 <tbarron> ^^ That will take you to the right spot even though the topic isn't just right
15:27:18 <tbarron> As mentioned earlier, spec deadline is a week from today.
15:27:33 <tbarron> We have three outstanding spec reviews at the moment.
15:27:42 <tbarron> And *no* review comments :(
15:28:10 <tbarron> I'm going to block off some time for these today and tomorrow but
15:28:35 <tbarron> can we get more reviewers to sign up for these and give initial feedback before the weekend?
15:29:04 <tbarron> ganso: You may want to do some lobbying/arm-twisting since as
15:29:30 <tbarron> things stand now I'd say these are at risk for not making this cycle simply because we
15:29:40 <tbarron> are limited on review bandwidth.
15:29:54 <ganso> sorry I don't understand the expression "lobbying/arm-twisting"
15:30:02 <tbarron> ganso: if you have to triage, life-boat style, what is your priority?
15:30:13 <bswartz> ganso: he's saying you need to go ask people for reviews
15:30:19 <tbarron> ganso: sorry for the colloquialism, I mean ^^
15:30:37 <bswartz> There is no magical review fairy
15:31:01 <tbarron> ganso: in an ideal world there would be sufficient reviewer bandwidth and interest in these features that you wouldn't need to do that, but ...
15:31:09 <ganso> my priority would be 1) manage/unmanage DHSS=True, 2) Replication DHSS=True and 3) Create share from snapshot in other backends
15:31:54 <tbarron> ganso: so find other people with DHSS=True back ends who care about these features and get them to review and agree on the approach
15:32:31 <amito> I can try and review specs this weekend.
15:32:40 <tbarron> amito: thanks!
15:33:09 <tbarron> Anything else on this topic today?
15:34:16 <tbarron> #topic planning our work
15:34:27 <tbarron> #link https://wiki.openstack.org/wiki/Manila/SteinCycle
15:35:15 <tbarron> gouthamr: how is it going with running the lvm job in bionic instead of centos?
15:35:37 <gouthamr> tbarron: pretty good so far, late breakthrough with the quagga issue i was seeing
15:35:48 <tbarron> As a reminder, this is a step in the python3 goal since centos currently doesn't support python3
15:35:58 <gouthamr> tbarron: i want to test those changes on centos and invoke the netapp-ci which is testing this on xenial
15:36:06 <gouthamr> after which we should be good
15:36:34 <tbarron> gouthamr: sounds like good progress
15:36:56 <gouthamr> I have work to do on the base patch wrt the Grenade job - that's on my plate today - it'd be nice to get these in, and be the first project to stop testing Stein on Xenial :D
15:37:02 <tbarron> vkmc: gouthamr: what will be the next step on python3 goal after converting the lvm job?
15:37:17 <vkmc> tbarron, porting tempest tests
15:38:07 <tbarron> vkmc: will we port to zuulv3 as part of that?
15:39:00 <gouthamr> we also need to replicate these job changes on manila-tempest-plugin (and at some point look at rebuilding the service images)
15:39:49 <tbarron> gouthamr: ack
15:39:52 <gouthamr> imho zuulv3 can be a slower transition, unless infra wants us to do it right away
15:40:11 <tbarron> gouthamr: fine by me
15:40:22 <gouthamr> i haven't heard that they do, we have so much other work to get to :)
15:40:59 <tbarron> vkmc: gouthamr: you've been keeping
15:41:07 <vkmc> tbarron, yes, but migrating to zuulv3 requires some planning since this will affect third party ci's
15:41:09 <tbarron> #link https://wiki.openstack.org/wiki/Manila/SteinCycle#Python3
15:41:17 <vkmc> we need to take it slowly
15:41:31 <tbarron> pretty up to date, so please keep doing so :)
15:41:41 <vkmc> we discussed this briefly during the dublin ptg, we didn't talk about it during the denver ptg
15:41:44 <tbarron> and ack on de-coupling this from zuulv3
15:41:46 <vkmc> but it certainly is the next step
15:42:42 <vkmc> yeah, we need to decouple py3 migration from zuulv3 migration
15:42:55 <tbarron> I know everybody has been busy with downstream work and concerns lately but I'll mention
15:43:25 <tbarron> on the Manila CSI subject that we hope to have some
15:43:44 <tbarron> good discussions with CERN and OpenStack k8s sig on this subject
15:43:49 <tbarron> at the Summit
15:44:04 <bswartz> \o/
15:44:17 <tbarron> Meanwhile people are using the dynamic external storage provider for manila successfully but
15:44:44 <tbarron> some of them have raised an issue when using CephFS back end where
15:45:06 <tbarron> k8s needs the shares to be mode 775 instead of 755
15:45:34 <tbarron> because the workload running in the container, which sees the share via bind mount from the k8s host
15:45:57 <tbarron> runs with a high number (non-root) uid but is in root group
15:46:27 <bswartz> Oh yes I remember this discussion
15:46:28 <tbarron> I have an open PR in the ceph_volume_client (in ceph) for this and a dependent review in manila
15:46:43 <bswartz> We need a way for manila to set the mode/owner of the root of the share
15:46:44 <tbarron> I mention this in case any other back ends have similar issues.
15:47:13 <tbarron> bswartz: the manila patch right now is short-term and back end specific but
15:47:14 <bswartz> It's currently undefined which means it's completely up to the driver
15:47:22 <bswartz> And that's terrible
15:47:25 <tbarron> we can look at generalizations too
15:47:50 <bswartz> Some drivers just set it to 777 to avoid any problems
15:48:20 <tbarron> I think we have to get a sense of back end capabilities and defaults in this area before we can do a general solution scoped over all back ends
15:48:42 <tbarron> So I may send an email on the dev list to inquire.
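The 775-vs-755 point above comes down to the group-write bit: a workload running as a non-root uid but in the root group can only write to the share root if group write is set. A minimal illustration:

```python
import stat

def group_can_write(mode):
    """True if the group-write permission bit is set on a mode."""
    return bool(mode & stat.S_IWGRP)

# mode 755 (rwxr-xr-x): the non-root uid in group root cannot write
# mode 775 (rwxrwxr-x): group write allowed, the k8s workload can write
```

This is why drivers that default the share root to 0755 break this k8s use case, and why some drivers fall back to 0777 to sidestep the problem entirely.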
15:49:04 <tbarron> ok, moving along
15:49:12 <bswartz> Well I think it's a solvable problem
15:49:27 <bswartz> There's no reason a backend couldn't set the mode/owner -- it's just not in the manila API
15:49:35 <tbarron> Anyone have updates on any of the other planned work?
15:49:35 <bswartz> The hard part would be amending the Manila API
15:50:04 <tbarron> bswartz: agree, that's why we're doing back end specific for cephfs and cephfs/nfs as the first step
15:50:17 <tbarron> other back ends may want to do something similar
15:50:57 <tbarron> #topic Bugs
15:51:07 <tbarron> #link https://wiki.openstack.org/wiki/Manila/SteinCycle#Bug_Triage
15:51:16 <tbarron> dustins: got anything for us today?
15:51:34 <dustins> tbarron: I do not, unfortunately/fortunately
15:52:15 <tbarron> dustins: yeah I know you've been pretty busy with downstream concerns ...
15:52:27 <dustins> To put it lightly :P
15:52:28 <ganso> there is a new bug though, and I have some comments from another bug
15:52:42 <dustins> ganso: Do tell!
15:52:43 <tbarron> ganso: link?
15:52:57 <ganso> looking for it just a sec
15:53:14 <ganso> #link https://bugs.launchpad.net/manila/+bug/1799742
15:53:14 <openstack> Launchpad bug 1799742 in Manila "Wrong segmentation ID sent to drivers when using multi-segments" [Medium,New]
15:53:41 <tbarron> some Rodrigo guy raised this one :)
15:53:52 <ganso> I found this while testing the network plugins with multiple neutron segments
15:54:06 <ganso> neutron would allow each subnet to have different VLANs
15:54:32 <ganso> but the manila plugin code just iterates over the list of segments from the network (not the subnet), then picks the last one, for any subnet
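The iteration issue ganso describes could be sketched like this (illustrative data shapes, not Manila's actual plugin code; whether matching on the subnet's physical network is the right criterion is exactly what still needs triage):

```python
# Two segments on one neutron network, each with its own VLAN.
segments = [
    {"provider:segmentation_id": 100, "provider:physical_network": "physnet1"},
    {"provider:segmentation_id": 200, "provider:physical_network": "physnet2"},
]

def buggy_pick(segments):
    """Loop over all segments; the last iteration wins, regardless
    of which subnet the share network is actually bound to."""
    for seg in segments:
        seg_id = seg["provider:segmentation_id"]
    return seg_id

def pick_for_subnet(segments, subnet_physnet):
    """One possible fix: select the segment matching the subnet's
    physical network instead of blindly taking the last one."""
    for seg in segments:
        if seg["provider:physical_network"] == subnet_physnet:
            return seg["provider:segmentation_id"]
    return None
```

With the buggy loop, a subnet on physnet1 still gets segmentation ID 200, which is the wrong VLAN reported in the bug.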
15:55:20 <ganso> I checked whether it was an easy fix
15:55:24 <tbarron> ganso: so is the fix straightforward? i.e. do we have the info to know where in the loop to stop instead of just picking the last?
15:55:26 <ganso> according to my approach, it is not
15:55:57 <ganso> because the neutron segments API is not supported by our abstraction of neutron client, nor neutron client v2.0 itself
15:56:03 <ganso> it requires openstacksdk
15:56:30 <ganso> I glanced at it, and it seems the authentication system is slightly different (uses that yaml file which we wouldn't want to use)
15:56:37 <gouthamr> ganso: https://github.com/openstack/manila/blob/2b40e5618f2c1039bbbbd1a3e31b72e104b5436b/manila/network/neutron/neutron_network_plugin.py#L258
15:57:09 <ganso> gouthamr: L269 is the bug
15:57:15 <gouthamr> the note suggests "the lowest segment" - it's unclear what that is, but mkoderer may have known what was going on
15:57:17 <tbarron> well using the SDK rather than clients is arguably the right direction anyways ...
15:57:33 <ganso> gouthamr: yea, I don't know what it is supposed to mean
15:57:39 <gouthamr> i know what you're referring to, i'm suggesting that you could have a misconfiguration?
15:58:14 <ganso> gouthamr: I am not sure; it would be good for this to be triaged by someone who understands a lot more about neutron multi-segments than I do
15:58:49 <ganso> tbarron: may I proceed to the next bug?
15:58:50 <gouthamr> yes, reading through the code it looks like only one of the segments can have their "'provider:physical_network'" attr set to the physical network
15:58:59 <tbarron> ganso: maybe "reach out" to mkoderer?
15:59:06 <tbarron> ganso: sure, ++
15:59:11 <tbarron> bug++
15:59:14 <ganso> tbarron: yea we can try that
15:59:22 <ganso> bug #2 #link https://bugs.launchpad.net/manila/+bug/1786059
15:59:22 <openstack> Launchpad bug 1786059 in Manila "Cannot Connect to Share Server when Creating Share with Generic Driver (DHSS=True)" [Undecided,Invalid]
15:59:23 <tbarron> you have 45 secs
15:59:31 <ganso> for some reason, a lot of people are hitting that bug
15:59:35 <ganso> I have discussed it already
15:59:43 <ganso> we cannot reproduce it in the gate
15:59:49 <tbarron> right
15:59:52 <ganso> but we don't know why so many people are hitting that
15:59:57 <ganso> I'm receiving emails asking for help
16:00:03 <bswartz> There is an environmental issue
16:00:04 <gouthamr> i kinda have an inkling why
16:00:05 <ganso> and I don't know how to help
16:00:05 <gouthamr> :)
16:00:17 <bswartz> But I was never able to track down the root cause
16:00:19 <tbarron> let's go to #openstack-manila, I think a lot of us are getting these emails
16:00:28 <ganso> I spent a lot of time this and last week helping people debug, and I am out of ideas
16:00:37 <tbarron> Thanks everyone!
16:00:40 <tbarron> #endmeeting