15:00:36 <tbarron> #startmeeting manila
15:00:37 <openstack> Meeting started Thu Dec 20 15:00:36 2018 UTC and is due to finish in 60 minutes.  The chair is tbarron. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:41 <openstack> The meeting name has been set to 'manila'
15:00:53 <tbarron> ping bswartz
15:00:56 <tbarron> ping gouthamr
15:00:57 <ganso> hello
15:01:00 <tbarron> ping ganso
15:01:02 <gouthamr> o/
15:01:04 <tbarron> hi ganso
15:01:14 <tbarron> ping xyang
15:01:32 <tbarron> ping toabctl
15:01:39 <vkmc> o/
15:01:41 <toabctl> hey
15:01:49 <tbarron> ping erlon
15:02:01 <tbarron> ping tpsilva
15:02:09 <tbarron> ping amito
15:02:20 <xyang> hi
15:02:23 <tbarron> ok, that's all the folks who put their names on the ping list
15:02:31 <tbarron> Hello all!
15:02:38 <amito> hey
15:02:43 <bswartz> .o/
15:02:49 <bswartz> Sorry I'm late
15:02:54 <tbarron> Agenda: https://wiki.openstack.org/w/index.php?title=Manila/Meetings&section=2
15:02:58 <tbarron> hi bswartz
15:03:02 <bswartz> Was replacing faulty hardware
15:03:16 <tbarron> we're just starting
15:03:23 <tbarron> #topic announcements
15:03:44 <tbarron> just a reminder that milestone 2 is the week of January 7
15:04:23 <tbarron> also, our stable branches were blocked but
15:04:33 <tbarron> finally https://review.openstack.org/#/c/622262/ merged
15:04:49 <amito> that's a lot of 2's
15:05:11 <tbarron> we're backporting it and only the change for ocata is pending
15:05:28 <tbarron> no controversy once people felt it was safe for stable/rocky
15:05:33 <tbarron> amito: :)
15:06:07 <tbarron> The root problem is that even though we pin all the requirements down for stable branches, OS updates happen anyway
15:06:16 <tbarron> and destabilize everything
15:06:30 <tbarron> Anyone else have announcements?
15:07:01 <tbarron> #topic 27 December meeting - cancel?
15:07:16 <bswartz> cancel
15:07:26 <tbarron> lots of folks are on holiday on 27 December so we probably won't have quorum
15:08:01 <tbarron> anyone object to canceling? the next meeting would be 3 January, two weeks from now
15:08:09 <tbarron> 3
15:08:13 <tbarron> 2
15:08:18 <tbarron> 1
15:08:41 <tbarron> ok, I hear no objections and will update the manila meetings wiki accordingly
15:08:50 <tbarron> #topic kubecon report
15:09:03 <tbarron> we had several manila folks at kubecon
15:09:20 <bswartz> \o/
15:09:29 <tbarron> In my view manila is well positioned to provide RWX storage to container orchestrators and
15:09:38 <tbarron> its lack of tight coupling to nova
15:09:54 <tbarron> which may have been a disadvantage for a nova-centric world
15:10:00 <tbarron> is now an advantage
15:10:09 <tbarron> Be that as it may (those are my views)
15:10:29 <tbarron> I'd like to hear from bswartz xyang gouthamr about kubecon
15:10:34 <tbarron> what should we know?
15:10:55 <bswartz> So there was an F2F (face to face) meeting of the storage SIG on Wednesday
15:11:00 <gouthamr> o/ it was a great gathering, the largest kubecon apparently
15:11:45 <bswartz> There weren't too many sessions in the conference proper on storage related topics (I only went to 1)
15:11:50 <gouthamr> xyang and bswartz presented on a few interesting topics in the storage meetups
15:12:02 <bswartz> But the face to face meeting was very well attended
15:12:02 <gouthamr> SIG Storage F2F notes: https://goo.gl/HjtoBH
15:12:36 <tbarron> was the f2f storage sig by any chance recorded?
15:12:48 <tbarron> gouthamr: great notes, thanks!
15:12:50 <bswartz> No
15:13:08 <bswartz> Audio was just 1 microphone, only the presenter and people on the phone could be heard
15:13:13 <bswartz> No video
15:13:19 <gouthamr> also here's a Cloud native storage whitepaper xyang contributed to: https://goo.gl/XSrL1S
15:13:20 <xyang> sorry, I'm in another meeting now
15:13:27 <tbarron> xyang: :)
15:13:34 * gouthamr and we're going ping-ping-ping on xyang
15:14:40 <bswartz> The topic of Manila did not come up
15:15:12 <tbarron> did the topic of rwx storage for COs come up?
15:15:31 <bswartz> tbarron: not sure what you mean
15:15:40 <bswartz> rwx storage has been a thing for a long time
15:15:41 <gouthamr> CSI support is GA in k8s 1.13
15:15:50 <tbarron> read write many for container orchestrators
15:16:00 <bswartz> Yeah CSI is GA now and the focus is on the next round of CSI-related features including improvements to snapshots
15:16:18 <tbarron> some of the stuff I see in the notes looks like re-doing cinder for CSI :)
15:16:31 <bswartz> At least for kubernetes, rwx has been supported as long as NFS has been around
15:16:46 <tbarron> maybe without paying attention to what we learned when going from cinder to manila
15:16:47 <bswartz> There are other FS types that support rwx too -- it's not an exotic thing
15:16:52 <bswartz> RawBlock
15:16:56 <bswartz> derp
15:17:16 <bswartz> Raw block support is relatively new in the kubernetes world, and it's getting more stable
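For context, requesting a raw block volume in Kubernetes is just a matter of setting the volume mode on the claim. Below is a minimal sketch using the kubernetes Python client, assuming a cluster with a suitable CSI driver installed; the StorageClass name is hypothetical.

    from kubernetes import client, config

    config.load_kube_config()  # use the local kubeconfig

    # volume_mode="Block" asks for a raw block device instead of a mounted
    # filesystem; "csi-rbd" is a placeholder StorageClass name.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="raw-block-claim"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],   # most raw block backends are single-attach today
            volume_mode="Block",
            storage_class_name="csi-rbd",
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc)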
15:18:28 <bswartz> tbarron: a lot of the same problems are getting rediscovered, but there's a lot of overlap in the communities so people have seen the problems before
15:18:34 <tbarron> what sort of write-arbitrator does raw block have?
15:19:00 <bswartz> The big open question is whether the kubernetes community will find better solutions than the openstack community did or not
15:19:24 <bswartz> tbarron: raw block, like filesystem, can be single-attach or multi-attach
15:19:59 <tbarron> and if multi-attach and writable there's a need for a write-arbitrator, correct?
15:20:06 <bswartz> Because of the way kubernetes works, it's actually easier to provide multi-attach raw block storage than multi-attach filesystem storage
15:20:23 <bswartz> But in reality nearly all the solutions are single-attach
15:20:38 <bswartz> That's mostly because raw block support is so new
15:20:54 <bswartz> That's not super relevant to us in Manila though
15:22:39 <tbarron> well if you can use it instead of filesystems and it's faster, that would seem to be relevant
15:22:52 <tbarron> like no need for filesystems
15:22:56 <bswartz> It's typically not about speed
15:23:37 <bswartz> Raw block support is typically interesting because there's a wider variety of ways to do multi-attach with it than filesystems
15:23:54 <bswartz> And for applications that just need a bag of bits, a filesystem is just an extra layer of abstraction
15:24:13 <bswartz> Which is unneeded complexity, and yes, a small performance penalty
15:24:51 <tbarron> a bag of bits seems more like object storage :)
15:25:22 <bswartz> Raw block is valuable in the context of storage-oriented applications like databases and shared filesystem implementations
15:25:45 <bswartz> well a bag of bits is 1 object in an object store
15:26:01 <bswartz> A whole object store does have some kind of structure to it
15:26:56 <tbarron> ok so w.r.t. the write-arbitration question, raw block is intended for use by apps like databases that coordinate their own writes across e.g. different app instances on different hosts
15:27:11 <bswartz> Yeah
15:27:17 <tbarron> got it
15:27:35 <bswartz> Obviously if you're sharing access to a block device you need a way to coordinate between the accessors
15:27:53 <tbarron> and that's one thing a file system gives the apps for free
15:27:58 <bswartz> That's a distributed systems problem though
15:28:14 <tbarron> at a performance cost
15:28:36 <bswartz> Yeah NFS has built in locking and cache coherency protocols
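To illustrate the write-arbitration point: on an NFS share the filesystem's own lock protocol lets writers on different hosts coordinate, whereas with a shared raw block device the application has to bring its own coordination. A minimal Python sketch; the mount path is hypothetical.

    import fcntl

    # POSIX byte-range locks taken with fcntl are forwarded to the NFS
    # server, so this serializes writers running on different hosts.
    # "/mnt/shared/journal.log" is an assumed path on an NFS mount.
    with open("/mnt/shared/journal.log", "a") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)      # blocks until the lock is granted
        try:
            f.write("one serialized append\n")
            f.flush()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)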
15:29:27 <tbarron> OK, anything else on kubecon report?
15:29:48 <bswartz> Can't think of anything else that would be interesting to this crowd
15:30:02 <bswartz> I will say, Seattle has a LOT of seafood
15:30:06 <tbarron> :)
15:30:18 <gouthamr> oh, yeah.. sig-openstack is going to restart meetings in January
15:30:41 <gouthamr> meeting schedule is here: https://github.com/kubernetes/community/tree/master/sig-openstack
15:31:06 <tbarron> I think it will be interesting to figure out what the role of manila, cinder, etc. will be for on-prem container clouds when stuff like rook really matures
15:31:10 <bswartz> gouthamr: who is running those?
15:31:28 <gouthamr> it'll be a good time to report progress on our work to shape up the manila provisioner
15:31:36 <gouthamr> bswartz: hodgepodge, dims
15:31:37 <tbarron> gouthamr: +1
15:32:15 <gouthamr> actually, scratch that: https://github.com/kubernetes/community/tree/master/sig-openstack#leadership :)
15:32:28 <bswartz> gouthamr: are you the one working on that?
15:32:44 <dims> o/
15:32:47 <tbarron> rook will supply storage abstractions that aren't specific to on-prem clouds ...
15:33:07 <bswartz> I meant the manila provisioner
15:33:09 <gouthamr> bswartz: you mean the manila csi implementation? tbarron, vkmc have been more involved than I, but yes.
15:33:26 <bswartz> Okay
15:33:29 <bswartz> Does it have a home?
15:34:14 <tbarron> upstream I think it would be parallel to cinder-csi but that's one thing to nail down
15:34:26 <gouthamr> bswartz: yes, cloud-provider-openstack
15:34:32 <bswartz> Link?
15:34:48 <gouthamr> https://github.com/kubernetes/cloud-provider-openstack
15:34:53 <gouthamr> #LINK https://github.com/kubernetes/cloud-provider-openstack
15:35:23 <gouthamr> ^ that's where the current manila dynamic provisioner lives
15:35:31 <bswartz> Ah yes
15:35:32 <bswartz> Ty
15:35:59 <gouthamr> (old home: https://github.com/kubernetes-incubator/external-storage)
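For reference, consuming the manila dynamic provisioner boils down to publishing a StorageClass that points at it. A sketch with the kubernetes Python client; the provisioner name and parameter keys below are assumptions for illustration only, the authoritative values live in the cloud-provider-openstack docs.

    from kubernetes import client, config

    config.load_kube_config()

    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="manila-nfs"),
        provisioner="externalstorage.k8s.io/manila",   # hypothetical provisioner name
        parameters={
            "type": "default",     # assumed key: manila share type
            "protocol": "NFS",     # assumed key: share protocol
        },
    )

    client.StorageV1Api().create_storage_class(body=sc)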
15:36:46 <tbarron> Anything else on this topic today?
15:36:58 <bswartz> Nope
15:37:10 <tbarron> #topic gate issues
15:37:32 <tbarron> besides the stable branch issues I mentioned, we still have lots of non-voting job failures
15:37:45 <tbarron> I want to turn red to green
15:38:07 <tbarron> so will be pushing people for reviews and
15:38:20 <tbarron> asking some of you (you know who you are) for
15:38:24 * gouthamr waves at dims, congratulates him for the k8s community award
15:38:34 <tbarron> help w.r.t. the generic driver problems
15:38:46 <tbarron> yeah, hi dims and congrats!
15:38:46 <dims> aww thanks :)
15:39:27 <tbarron> That's pretty much all I have to say on this topic - thanks to those helping with reviews so far.
15:40:01 <gouthamr> yeah, we still haven't found the real cause of the failures on the manila generic driver jobs
15:40:19 <tbarron> gouthamr: right, just making baby steps
15:40:36 <gouthamr> tbarron++, yep, ty!
15:40:40 <tbarron> gouthamr: the updated xenial service image has the same issue as the image from a year ago
15:40:53 <bswartz> Not bionic yet?
15:41:00 <tbarron> the bionic image has a different, apparently earlier, failure to connect to the service VM
15:41:11 <tbarron> it doesn't get as far, as far as I can tell
15:41:11 <bswartz> -_-
15:41:38 <tbarron> looks like the networking is messed up
15:41:58 <tbarron> bswartz: do you know if there's any reason for the service module to do that layer2 stitching
15:42:12 <tbarron> different for ovs, for linux bridge, etc.
15:42:14 <bswartz> Wait which service module are you referring to
15:42:15 <tbarron> nowadays?
15:42:20 <bswartz> Oh
15:42:35 <bswartz> It has to match what the cloud is using IIRC
15:42:39 <tbarron> did we do that because neutron didn't have an api for what we needed to do?
15:42:46 <bswartz> That too
15:43:04 <tbarron> maybe we can do everything with neutron then
15:43:07 <bswartz> The code to do the horrible network backdoor exists because neutron didn't support what we needed years back
15:43:33 <bswartz> The option to use linuxbridge or ovs is because the horrible backdoor has to be compatible with what neutron is doing
15:43:34 <ganso> bswartz: which is?
15:44:04 <tbarron> I'm thinking maybe nowadays we can get rid of the horrible backdoor
15:44:07 <bswartz> ganso: a way to SSH from the m-shr service to the service VM
15:44:25 <ganso> bswartz: and what's new in neutron that could allow us to accomplish that without the backdoor?
15:44:36 <tbarron> since we have to debug this issue anyways
15:44:37 <bswartz> Assuming the service VM could be on a neutron network with no routers
15:44:58 <bswartz> ganso: I'm not aware of any changes in neutron
15:45:03 <ganso> bswartz: are you thinking about using something equivalent to "ip netns exec ssh" ?
15:45:18 <bswartz> It sounds like tbarron may know of some new development I haven't heard of yet
15:45:29 <bswartz> ganso: that won't work
15:45:37 <tbarron> bswartz: I don't know but I'm hoping ...
15:46:19 <bswartz> tbarron: without help from nova this is a very hard problem to solve
15:46:48 <tbarron> we should be able to boot up an svm with two nics, one on the tenant net and one on a net that manila-share is also on
15:47:18 <ganso> tbarron: there is an option for that
15:47:29 <ganso> tbarron: it is called "connect_to_tenant_network"
15:47:32 <gouthamr> "connect to tenant network" or something?
15:47:39 <ganso> gouthamr: yep
15:47:48 <gouthamr> ^ yes! that's how we solved the recent problem with the grad student :)
15:48:01 <gouthamr> why isn't that the default?
15:48:14 <bswartz> Security issues maybe?
15:48:19 <ganso> gouthamr: it doesn't solve the current gate problem, I already tried that
15:48:51 <gouthamr> bswartz: true, but the manila service network can be secured
15:48:51 <ganso> gouthamr: I mean, it doesn't solve it because the driver will still try to use the 10.254.0.x network port to access the share server
15:49:15 <tbarron> ganso: that might be fixable
15:49:41 <ganso> tbarron: but then it would need a different route. The driver cannot access the tenant's network by default
15:49:49 <ganso> tbarron: creating the backdoor there isn't an option
15:50:44 <bswartz> connect_share_server_to_tenant_network
15:50:54 <bswartz> ^ This?
15:50:57 <ganso> bswartz: yes
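For readers following along, the option under discussion is a per-backend boolean. Below is a sketch of how such a driver option is typically declared with oslo.config; the default and help text are paraphrased here, not copied from manila, and the backend section name is hypothetical.

    from oslo_config import cfg

    service_instance_opts = [
        cfg.BoolOpt('connect_share_server_to_tenant_network',
                    default=False,   # assumed default
                    help='Attach the service VM (share server) directly to '
                         'the tenant network in addition to the service '
                         'network.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(service_instance_opts)

    # A deployer would then enable it in the backend section of manila.conf:
    #   [generic_backend]                            # hypothetical backend name
    #   connect_share_server_to_tenant_network = True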
15:51:58 <tbarron> if the share server has two faces, one on the tenant's net (and that's where its export location is published) and one back to manila-share-with-driver ...
15:52:18 <bswartz> We should consider these alternatives and try to change the defaults to something more likely to work
15:52:35 <ganso> tbarron: actually, we kinda already have the means to fix it with that admin plugin network we added back in 2015
15:52:39 <tbarron> yeah, that's what I wanted everyone to agree to ^^
15:52:46 <tbarron> what bswartz said
15:52:50 <ganso> tbarron: it would have a simpler connection to the driver
15:53:03 <bswartz> Originally when we were designing this, all the options looked bad to us, and we were hoping that neutron or nova would change to make the situation better
15:53:22 <bswartz> Unfortunately other priorities took over, and we never pushed for the improvements needed
15:54:05 <tbarron> ok, let's pursue this more outside of this meeting
15:54:06 <bswartz> I could see the admin network stuff helping here
15:54:14 <bswartz> But the security implications make me shiver
15:54:31 <ganso> bswartz: and it is hardly feasible in the gate
15:54:35 <bswartz> You really have to trust those service VMs if you go down that path
15:54:54 <ganso> bswartz: because we would need the host with 2 network interfaces to create a provider network
15:55:26 <tbarron> bswartz: I wonder if we can mitigate that with security rules; they work a lot better than when this stuff was developed
15:55:32 <ganso> bswartz: the admin network we create for testing is fake
15:55:55 <bswartz> ganso: well we'd need a real one
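On the security-rules idea raised above, one mitigation would be to narrow exposure on the admin/service network with neutron security groups so the service VMs only accept SSH from the manila-share host. A sketch with openstacksdk; the cloud name, security group name, and address are assumptions.

    import openstack

    conn = openstack.connect(cloud='devstack-admin')   # assumed clouds.yaml entry

    sg = conn.network.create_security_group(
        name='manila-service-vm',
        description='Restrict service VM access to the manila-share host')

    # Allow SSH only from the (assumed) manila-share admin-network address.
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction='ingress',
        protocol='tcp',
        port_range_min=22,
        port_range_max=22,
        remote_ip_prefix='10.254.0.1/32')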
15:56:16 <bswartz> Maybe we should schedule a breakout meeting to delve into this problem
15:56:21 <gouthamr> +1
15:56:32 <bswartz> It still needs a champion to drive it though
15:56:37 <gouthamr> also, can we have bug triage/bug squash soon-ish?
15:56:47 <gouthamr> we haven't looked at them in a while
15:56:54 <tbarron> gouthamr: well we're out of time today
15:57:26 <gouthamr> tbarron: yeah, not today.. but I wanted us to visit them in a separate meeting, as we agreed during the PTG
15:57:27 <gouthamr> :)
15:57:36 <tbarron> sure
15:57:56 <tbarron> let's talk about breakout meetings for 5 min in #openstack-manila
15:58:00 <tbarron> #topic our work
15:58:14 <bswartz> Is anyone aware of any new nova features that could serve as a channel of communication from m-shr to the service VM?
15:58:18 <tbarron> Please review the access-rule-priority change
15:58:31 <tbarron> https://review.openstack.org/#/c/572283/
15:58:39 <tbarron> thanks to toabctl and ganso for reviewing
15:58:47 <tbarron> lately
15:59:17 <tbarron> bswartz: I'm not, but let's explore
15:59:44 <tbarron> ok, let's continue briefly in #openstack-manila
15:59:48 <tbarron> Thanks everyone!!
16:00:02 <tbarron> #endmeeting