15:01:53 <tbarron> #startmeeting manila
15:01:53 <openstack> Meeting started Thu Aug 23 15:01:53 2018 UTC and is due to finish in 60 minutes.  The chair is tbarron. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:54 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:57 <openstack> The meeting name has been set to 'manila'
15:02:02 <bswartz> .o/
15:02:04 <dustins> \o
15:02:09 <tbarron> #chair bswartz
15:02:10 <openstack> Current chairs: bswartz tbarron
15:02:15 <ganso> hello
15:02:19 <tbarron> in case I get bounced
15:02:32 <tbarron> ping amito
15:02:43 <tbarron> ping erlon
15:02:49 <bswartz> Sorry I was on vacation last week
15:02:52 <tbarron> ping gouthamr
15:02:56 <erlon> hey
15:02:59 <bswartz> And I forgot to mention it
15:03:00 <amito> Hi
15:03:03 <tbarron> ping vkmc
15:03:11 <vkmc> o/
15:03:11 <vkmc> hi
15:03:12 <xyang> hi
15:03:13 <tbarron> ping xyang
15:03:17 <tbarron> jinx
15:03:18 <gouthamr> o/
15:03:41 <xyang> tbarron: can you see my msg?
15:04:00 <tbarron> xyang: if it's "can you see my msg?" then yes
15:04:45 <tbarron> xyang: I was just working through the ping list and did your name about the same time you said hi
15:04:45 <xyang> tbarron: ok.  looks like I can still send msg to a channel but not private msgs.  I need to find out why
15:05:09 <tbarron> xyang: hmm, some IDENT thing maybe
15:05:17 <tbarron> Hi all!
15:05:26 <bswartz> tbarron is avoiding getting kicked for flooding
15:05:26 <tbarron> bswartz: hope you had a nice vacation!
15:05:37 <bswartz> tbarron: I did! thank you
15:05:42 <zhongjun2_> Hi
15:06:03 <tbarron> zhongjun2_: Hi there!  I didn't see your nick in this channel so didn't ping you.
15:06:15 <tbarron> Thought you might have chosen to sleep for a change.
15:06:37 <zhongjun2_> Oh, I just joined this channel
15:06:51 <tbarron> We don't have a big agenda today.
15:06:59 <tbarron> #link https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:07:05 <tbarron> #topic Announcements
15:07:12 <zhongjun2_> Okay
15:07:32 <tbarron> We released manila-tempest-plugin last Friday, more
15:07:48 <tbarron> or less synchronized with the tempest release itself, which is,
15:08:04 <tbarron> other things equal, good for distros that need to package up this stuff.
15:08:38 <tbarron> As you probably all know we haven't had any critical bug reports with our rc candidate.
15:08:48 <tbarron> So we don't have an rc-n.
15:09:19 <bswartz> Huzzah
15:09:22 <tbarron> The final rocky release is planned to be cut 29 August, next Wednesday.
15:09:40 <tbarron> Some of you were asking me when stable/rocky will open up again.
15:10:06 <tbarron> It will open after the final release; until then we're in a hold-down on that branch.
15:10:41 <tbarron> Meanwhile, please start working on Stein!
15:10:49 <tbarron> PTG planning etherpad is here:
15:11:06 <tbarron> #link https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018
15:11:37 <tbarron> I have put up a review to open up Stein specs that should merge later today I think.
15:12:00 <tbarron> zhongjun2_: it moves the access-rule-priority spec over.
15:12:19 <tbarron> all please note that there is driver work to do to complete that spec.
15:12:33 <zhongjun2_> I saw that patch. Thank you
15:12:47 <tbarron> So please plan to inspect your back end :) and see if you need to do anything.
15:13:40 <tbarron> And look at the etherpad, add to it as you wish, and try to come to PTG prepared on issues that interest you.
15:13:56 <tbarron> We want PTG to be productive as well as fun :)
15:14:12 <ganso> zhongjun2_: will you not participate in the PTG remotely?
15:14:31 <tbarron> Final announcement, speaking of fun, is that we'll have a combined dinner with the cinder team.
15:14:48 <tbarron> There's a doodle poll to select the day.
15:15:02 <bswartz> What's doodle?
15:15:03 <zhongjun2_> I will participate in part of the meeting if the time zone is suitable
15:15:11 <tbarron> #link https://doodle.com/poll/8rm3ahdyhmrtx5gp
15:15:17 <tbarron> bswartz: it's that ^^
15:15:35 <tbarron> The proposed time is 7:30 Denver time.
15:15:55 <tbarron> But the web page keeps getting set back to Eastern time for some reason.
15:16:01 <zhongjun2_> ganso: Do we have remote PTG meeting?
15:16:08 <bswartz> Ouch, 3 large banner ads, and I don't have an ad blocker on this VM
15:16:11 <tbarron> Looks to me that it will be Tuesday night.
15:16:22 <ganso> zhongjun2_: yes, it is listed in the etherpad
15:16:43 <tbarron> Thanks to jungleboyj for setting up the doodle.
15:17:10 <jungleboyj> No problem.  :-)
15:17:15 <tbarron> zhongjun2_: put your name next to topics and  we'll try to discuss those earlier ...
15:17:40 <tbarron> The planning etherpad has a place for remote attendees to indicate their timezones.
15:17:51 <zhongjun2_> tbarron: Okay:)
15:18:25 <tbarron> While we can't *promise* anything we'll do our best to run the agenda so that topics remote attendees are interested in match their availability.
15:19:20 <tbarron> That's all I have for announcements.
15:19:24 <tbarron> Anyone else?
15:20:09 <tbarron> #topic Bugs
15:20:17 <tbarron> dustins: you are up
15:20:28 <dustins> I'm actually good today!
15:20:41 <dustins> Nothing to report this time around
15:20:47 <tbarron> dustins: you are always good
15:20:57 <dustins> tbarron: I work hard at it
15:20:57 <vgreen> dustins++
15:21:04 <dustins> And aren't we all just works in progress :)
15:21:06 <tbarron> ok
15:21:18 <dustins> I'm going through the backlog of "new" bugs to see if they're actually new
15:21:29 <dustins> And as I do that I'll add some to the list to go over later
15:21:50 <dustins> I'll be out for the next couple of meetings, but I'll have some cool stuff to look at when I return
15:22:37 <tbarron> ok
15:22:46 <tbarron> #topic Open Discussion
15:23:29 <tbarron> ganso: I did a test run on stable/queens for that generic driver bug report which you were generous enough to comment on.
15:23:44 <tbarron> ganso: https://review.openstack.org/#/c/592700/
15:24:06 <tbarron> ganso: maybe when you get a chance you can point to the successful share creations there and to
15:24:07 <ganso> tbarron: as we suspected, there doesn't seem to be a problem there
15:24:19 <tbarron> ganso: the package list from the devstack log
15:24:33 <ganso> tbarron: sure, I'll add a comment to the launchpad entry
15:24:42 <tbarron> ganso: suggesting that the bug reporter may want to compare and update his setup accordingly
15:24:48 <tbarron> ganso: ty!!
15:25:09 <ganso> tbarron: thanks for submitting the debug patch
15:25:21 <tbarron> Anyone here up-to-date on placement service?
15:25:43 <tbarron> I will be studying it to understand better
15:26:04 <tbarron> how it solves scheduler races without suffering from them itself when it is run active-active
15:26:07 <gouthamr> +1
15:26:28 <tbarron> Interestingly it is supposed to be quite lightweight, and
15:26:48 <tbarron> also supposed to be capable of being used as a library instead of as an independent service.
15:27:10 <tbarron> This is relevant to positioning manila as open infrastructure that
15:27:30 <tbarron> can run with the rest of OpenStack or not, and with or without keystone.
15:28:23 <tbarron> Somewhat related, you may notice that the PTG etherpad has manila as CSI driver on it.
15:28:52 <tbarron> If you are working on this or know people who are, let's collaborate.
15:29:04 <bswartz> Who added that?
15:29:07 <tbarron> I did.
15:29:34 <bswartz> What do you know about it? Are you working on it tbarron?
15:30:20 <tbarron> learning, not working yet
15:31:08 <tbarron> there's lots of interest where I work and I notice at CERN
15:31:39 <tbarron> and with my upstream PTL hat on I think the continued vitality of manila
15:31:54 <tbarron> depends on positioning it as open infrastructure
15:32:07 <tbarron> (as you notice is happening to OpenStack anyways)
15:32:32 <tbarron> that doesn't live solely for the purpose of providing storage to nova VMs.
15:33:04 <tbarron> manila is well positioned for such a role since it serves up storage over networks rather than
15:33:14 <tbarron> through a hypervisor
15:33:45 <bswartz> Nevertheless there's a bunch of work to integrate cinder w/ containers
15:34:09 <tbarron> bswartz: yes, and a crying need for RWX storage as well :)
15:34:16 <bswartz> The hypervisor vs over-the-network issue doesn't seem to be stopping anybody
15:34:38 <tbarron> bswartz: right, cinder has "local attach" now
15:35:06 <tbarron> all I'm saying is that manila doesn't have to worry about that.
15:35:16 <bswartz> Indeed
15:35:41 <bswartz> I'm involved with the Kubernetes NFS connectivity code
15:35:52 <tbarron> or doing multi-attach and adding a write arbitrator
15:36:03 <tbarron> bswartz: awesome!
15:37:08 <zhongjun2_> bswartz: cool, do you have a code link?
15:37:59 <bswartz> Well the code is part of kubernetes today
15:38:08 <bswartz> All kubernetes has to do is perform the mount
15:38:29 <bswartz> The only interesting part of it is how mount options get communicated
15:38:56 <bswartz> CSI currently does not support mount options, but I'm fixing that
15:39:14 <bswartz> The legacy NFS stuff in Kubernetes does support mount options
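[editor's note: for readers unfamiliar with the legacy NFS support bswartz mentions, mount options are exposed through the PersistentVolume spec. A minimal sketch follows; the server address, export path, and option values are placeholders, not details from this discussion.]

```yaml
# Hypothetical in-tree NFS PersistentVolume showing mountOptions.
# Server/path/options below are illustrative placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # RWX, the access mode network shares provide
  mountOptions:            # passed through to the mount call on the node
    - vers=4.1
    - hard
  nfs:
    server: nfs.example.com
    path: /exports/share1
```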
15:39:42 <tbarron> bswartz: have you thought about multi-protocol?
15:40:09 <tbarron> a *manila* csi driver (vs just nfs) would really need that
15:40:19 <bswartz> What do you mean multiprotocol?
15:40:27 <bswartz> There's not a ton of interest in CIFS if that's what you mean
15:40:30 <tbarron> nfs, cifs, ceph native, etc.
15:40:54 <bswartz> I'm sure ceph support exists, for blocks at least
15:40:55 <tbarron> there is interest in ceph native and I am interested in manila as abstraction layer over that and nfs
15:41:14 <bswartz> Cephfs may exist, but I wouldn't know anything about it
15:41:21 <tbarron> yeah, people are doing nfs, glusterfs, cephfs separately
15:41:38 <tbarron> manila csi would abstract
15:41:42 <bswartz> A challenge for a generic manila driver would be supporting all those protocols in 1 driver
15:42:09 <bswartz> It would be far simpler to write a manila driver that only supported the NFS subset of backends
15:42:38 <bswartz> Or a manila driver that supports cephfs only, for example
15:42:50 * bswartz can't type
15:42:58 <tbarron> easier, I agree
15:43:23 <tbarron> but the abstraction layer could be quite useful for cloud admins who don't want to lock in
15:43:53 <bswartz> You'd have to look at the use cases
15:44:15 <bswartz> With kubernetes, at least, you have a million storage options, so lock-in doesn't seem like a problem
15:44:29 <bswartz> With other CO's the problem could be worse, but CSI is making it better
15:46:43 <tbarron> Well I've got lots to learn and it's good to know our original PTL knows a lot about this stuff!
15:46:50 <tbarron> Anything else for today?
15:47:37 <tbarron> OK, see y'all in #openstack-manila ...
15:47:40 <tbarron> #endmeeting