14:02:50 <hongbin> #startmeeting fuxi_stackube_k8s_storage
14:02:50 <openstack> Meeting started Tue May 23 14:02:50 2017 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:52 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:54 <openstack> The meeting name has been set to 'fuxi_stackube_k8s_storage'
14:02:55 <hongbin> #topic Roll Call
14:03:04 <dims> o/ (listening mostly)
14:03:24 <hongbin> Hongbin Lu
14:04:06 <hongbin> #topic Introduction
14:04:14 <hongbin> Hi all
14:04:20 <apuimedo> Hi
14:04:30 <feisky> hi
14:04:30 <hongbin> I will get started with a brief introduction
14:04:41 <zhonghuali> hi
14:05:11 <zengche> hello
14:05:15 <hongbin> The purpose of this meeting is to let everyone who is interested in k8s storage integration get together to figure out a development plan
14:05:32 <apuimedo> sounds good
14:05:48 <hongbin> This effort was started by fuxi; since then, a couple of people have expressed interest in working on this area
14:06:06 <hongbin> This includes another team, stackube, and feisky is the initiator
14:06:43 <hongbin> Therefore, I proposed to schedule a meeting to let all of us discuss
14:07:13 <hongbin> Ideally, we could leverage this meeting to understand each other's interests and figure out a common ground to work on in this area
14:07:26 <hongbin> That is all from my side
14:07:36 <hongbin> apuimedo: feisky you have anything to add?
14:07:42 <zengche> hongbin: is the purpose to develop a plugin that supplies storage offered by openstack to k8s?
14:07:56 <apuimedo> no. You described the goals very well
14:08:13 <feisky> no
14:08:14 <apuimedo> hongbin: it would be nice if you could list the current k8s goals of fuxi-k8s
14:08:21 <apuimedo> and then feisky could do the same
14:08:27 <apuimedo> so we can figure out common ground
14:08:27 <hongbin> apuimedo: sure
14:08:39 <hongbin> #link https://docs.openstack.org/developer/kuryr-kubernetes/specs/pike/fuxi_kubernetes.html the proposal
14:09:05 <hongbin> The idea is to develop two components: the volume provisioner and the flexvolume plugin
14:09:32 <hongbin> The volume provisioner listens to the k8s api for pvc, and creates a pv and a cinder/manila volume when an event occurs
14:09:55 <hongbin> The flexvolume plugin is for kubelet to connect to the cinder/manila volume
14:10:23 <hongbin> That is a brief summary of the design spec from fuxi
14:10:46 <hongbin> I believe there is some room to modify the design if necessary
14:10:55 <hongbin> We could discuss it based on feedback
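For reference, the flexvolume plugin hongbin describes is an executable that kubelet invokes with a subcommand and JSON arguments, and that replies with JSON on stdout. A minimal Go sketch of that call convention (the cinder-specific logic is stubbed; this illustrates the protocol, not the actual fuxi implementation):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // result is the JSON reply kubelet expects on stdout.
    type result struct {
        Status  string `json:"status"` // "Success" or "Failure"
        Message string `json:"message,omitempty"`
    }

    func emit(r result) {
        out, _ := json.Marshal(r)
        fmt.Println(string(out))
        if r.Status != "Success" {
            os.Exit(1)
        }
    }

    func main() {
        if len(os.Args) < 2 {
            emit(result{Status: "Failure", Message: "no subcommand"})
        }
        switch os.Args[1] {
        case "init":
            emit(result{Status: "Success"})
        case "mount":
            // kubelet passes the target mount dir plus a JSON blob of volume
            // options (e.g. a cinder volume ID); a real driver would attach
            // and mount the volume here -- stubbed in this sketch.
            emit(result{Status: "Success"})
        case "unmount":
            emit(result{Status: "Success"})
        default:
            emit(result{Status: "Failure", Message: "unsupported: " + os.Args[1]})
        }
    }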
14:10:55 <feisky> cool. let me make an introduction of stackube
14:11:28 <feisky> stackube aims to provide a kubernetes cluster built on openstack components, with both soft and hard multi-tenancy.
14:12:15 <feisky> It supports all k8s volumes, including flex volumes
14:12:57 <feisky> but one main issue is that k8s mounts the volume on the host regardless of runtime
14:13:56 <feisky> e.g. for hypervisors, passing the volume to the VM performs much better
14:14:12 <feisky> that is one issue we should resolve in stackube
14:14:41 <feisky> And I wonder whether fuxi could help on this issue
14:15:05 <hongbin> feisky: could you clarify "k8s mounts the volume on the host regardless of runtime"?
14:15:15 <feisky> yep
14:16:18 <feisky> for all volumes (including flex), kubelet will attach the volume to the host first and then mount it to a host path
14:16:47 <feisky> then kubelet sets container volumes based on this mountpoint
14:17:09 <feisky> the process is independent of container runtime
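Concretely, the flow feisky describes has kubelet exec the flexvolume driver against a target path under its own pod directory. A rough Go sketch of that invocation (the driver name and paths are illustrative only, following the standard kubelet plugin layout):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // kubelet execs the vendor driver binary with a subcommand, a target
        // path under /var/lib/kubelet/pods/<pod-uid>/volumes/..., and a JSON
        // options blob; the <vendor>~<driver> directory below is illustrative.
        driver := "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/openstack~fuxi/fuxi"
        out, err := exec.Command(driver, "mount",
            "/var/lib/kubelet/pods/<pod-uid>/volumes/openstack~fuxi/myvol",
            `{"volumeID": "..."}`).CombinedOutput()
        if err != nil {
            log.Fatalf("mount failed: %v: %s", err, out)
        }
        log.Printf("driver replied: %s", out)
    }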
14:17:36 <hongbin> that is true
14:17:46 <hongbin> hyper will do something differently?
14:17:48 <jgriffith> feisky stackube?  Link?
14:18:03 <feisky> https://github.com/openstack/stackube
14:18:10 <jgriffith> IMO K8’s distort/model should be completely independent
14:18:40 <jgriffith> I think there’s more value in providing an independent plugin that only cares about upstream k8’s integration
14:18:44 <jgriffith> If that makes sense?
14:19:13 <apuimedo> distort/model?
14:19:16 <jgriffith> Distribution/model… sorry typo
14:19:47 <apuimedo> ah
14:19:52 <jgriffith> :)
14:19:55 <jgriffith> Silly auto correct
14:20:54 <apuimedo> :-)
14:21:09 <jgriffith> So think libstorage…. There’s no reason Cinder shouldn’t actually be “libstorage”; it just needs the integration layer from Fuxi to make that work
14:22:20 <jgriffith> Not sure if anybody agrees, or maybe not sure what I’m referring to there?
14:22:26 <zengche> jgriffith: sorry, what would the independent plugin do?
14:23:03 <hongbin> jgriffith: yes, i think the process of mounting volumes to the host should be all the same; i am trying to figure out how hyper will do it differently
14:23:04 <jgriffith> zengche so the idea IMO would be that you have a K8’s deployment… deploy Cinder… install Fuxi… consume cinder from K8’s
14:23:23 <jgriffith> And I shouldn’t care what K8’s deployment you’re using
14:23:40 <jgriffith> Ideally the Fuxi code runs in a pod as well
14:23:58 <apuimedo> hongbin: for hyper runtime it would probably be mounted to qemu, wouldn't it feisky ?
14:24:20 <feisky> apuimedo: that's the preferred way
14:24:36 <jgriffith> apuimedo hongbin so the other thing is that if you’re using that model you can use the openstack provider in K8’s then, no?
14:25:15 <apuimedo> jgriffith: it's not pods running on VMs, the "VMs" are the pods
14:25:23 <hongbin> jgriffith: i am not familiar with the k8s cloud provider; dims, can you comment on that?
14:25:40 <apuimedo> so I'm doubtful that would work with hyper runtime
14:26:00 <jgriffith> apuimedo yeah, I don’t know anything about hyper
14:26:24 <dims> cloud provider assumes that the volume will be attached to something that was started by Nova
14:27:09 <feisky> as I said, k8s doesn't care about the runtime now, so it's impossible to mount the volume into the hyper VM
14:27:16 <apuimedo> jgriffith: simplifying. It is a qemu-kvm accelerated VM managed by the hyper runtime that only runs the pod
14:27:24 <feisky> and by the way, stackube doesn't require a cloud provider
14:27:49 <feisky> apuimedo: right. the vm is managed by hyperd
14:28:48 <jgriffith> feisky sure, not pretending to know about those two products
14:28:59 <apuimedo> feisky: I'm glad the effort I put into reading the code two years ago is still fresh in my mind
14:28:59 <hongbin> feisky: it sounds like hyper would like to develop its own volume plugin instead of using flexvolume?
14:29:02 <apuimedo> :P
14:29:15 <jgriffith> hongbin +1
14:30:10 <feisky> hongbin: no, we will still conform to the kubernetes volume plugin.
14:30:42 <hongbin> feisky: then, you would like to develop a custom flexvolume plugin for hyper?
14:30:43 <feisky> but for now, since that is impossible, we will extract the volume metadata and pass it to the runtime directly
14:31:16 <hongbin> feisky: could you elaborate on that?
14:31:18 <feisky> it's not a flex plugin
14:31:33 <feisky> e.g. we could pass the info via annotations
14:31:48 <apuimedo> feisky: that's the best approach
14:31:50 <feisky> and CRI already supports setting annotations for runtimes
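A tiny sketch of that idea: the runtime picks volume metadata out of the pod annotations and attaches the volume to the VM itself. The annotation key below is hypothetical, not an agreed-on API:

    package main

    import "fmt"

    // volumeAnnotation is a hypothetical key, not an agreed-on API.
    const volumeAnnotation = "stackube.kubernetes.io/cinder-volume-id"

    func volumeIDFromAnnotations(annotations map[string]string) (string, bool) {
        id, ok := annotations[volumeAnnotation]
        return id, ok
    }

    func main() {
        // CRI hands the runtime the pod's annotations in the sandbox config;
        // a hypervisor runtime like hyperd could attach the referenced volume
        // to the pod VM instead of relying on a host mount.
        ann := map[string]string{volumeAnnotation: "vol-0123"}
        if id, ok := volumeIDFromAnnotations(ann); ok {
            fmt.Println("attach to VM:", id)
        }
    }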
14:32:10 <jgriffith> feisky do you have a link to this hyper project that you’re referring to?
14:32:40 <feisky> jgriffith: do you mean stackube link?
14:33:02 <feisky> oh, sorry, hyper is here https://github.com/hyperhq/hyperd
14:33:11 <jgriffith> feisky :). thanks
14:33:27 <zengche> feisky: what's next when the runtime gets the volume metadata? can the runtime use the volume directly?
14:33:29 <feisky> and we have a runtime in kubernetes: https://github.com/kubernetes/frakti
14:34:02 <feisky> zengche: yep, the runtime will attach the volume to the VM
14:34:42 <zengche> feisky: got it. it would be better if you give me more details. thanks.
14:35:23 <feisky> zengche: please keep an eye on the stackube project, I will add the design docs later
14:35:47 <zhipeng> feisky any timeline on that ?
14:35:54 <zengche> feisky:ok, i have seen your project.
14:36:05 <jgriffith> I think it might be good to define and narrow scope here a bit
14:36:22 <jgriffith> There’s a lot going on in this conversation
14:36:35 <apuimedo> jgriffith: +1
14:36:53 <jgriffith> Fuxi currently provides a way to consume Cinder from Docker… that’s great
14:37:13 <jgriffith> IIRC it also allows either non-nova or nova configurations
14:37:15 <feisky> jgriffith: sorry, let's focus on the volume itself
14:37:31 <hongbin> jgriffith: yes, that is correct
14:37:33 <jgriffith> hongbin correct me if I’m wrong on any of this?
14:37:36 <jgriffith> hongbin :)
14:37:55 <jgriffith> So… what I wanted to propose was two things:
14:38:08 <jgriffith> 1. Convert the existing Fuxi python code to golang
14:38:27 <jgriffith> 2. Create a kubernetes integration
14:38:45 <hongbin> ++
14:38:50 <zhonghuali> jgriffith: +1
14:38:51 <jgriffith> Those two things are a lot of work, and #2 has a lot of details
14:39:20 <feisky> hongbin: jgriffith: stackube will support fuxi intrinsically (stackube supports all existing kubernetes volumes)
14:39:24 <hongbin> jgriffith: for #1, i guess we need to port os-brick to golang?
14:39:39 <hongbin> feisky: ack
14:39:51 <jgriffith> hongbin did a good job of outlining the use of flexvolume with a K8’s watcher/listener for provisioning
14:40:11 <smcginnis> That will be a lot of work: https://governance.openstack.org/tc/resolutions/20170329-golang-use-case.html
14:40:29 <jgriffith> Keep in mind that flexvol will (hopefully soon) have the ability to do provisioning as well, so the listener would be temporary
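For a sense of what that (temporary) listener could look like, a rough sketch using kubernetes/client-go (current signatures; the 2017-era client differed, and the cinder call is stubbed):

    package main

    import (
        "context"
        "log"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes the watcher runs in a pod
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        w, err := cs.CoreV1().PersistentVolumeClaims(metav1.NamespaceAll).
            Watch(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for ev := range w.ResultChan() {
            if ev.Type != watch.Added {
                continue
            }
            pvc := ev.Object.(*v1.PersistentVolumeClaim)
            // A real provisioner would create a cinder volume sized to the
            // claim and post a matching PV bound to it (stubbed here).
            log.Printf("would provision a cinder volume for PVC %s/%s",
                pvc.Namespace, pvc.Name)
        }
    }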
14:40:38 <jgriffith> smcginnis indeed :)
14:40:45 <hongbin> i see
14:40:56 <jgriffith> But dims has blazed a trail for us that I think will help
14:41:17 <smcginnis> jgriffith: +1 I would love to see some more movement on the golang front.
14:41:22 <jgriffith> There’s one other aspect of this as well….
14:41:31 <apuimedo> smcginnis: you mean that at some point there will not be a need for watching the k8s volume resources?
14:42:02 <jgriffith> dims showed me some work that splits cinder out of the existing cloud provider in K8’s, so it may be possible to use that plugin with or without nova
14:42:16 <jgriffith> In which case you get dynamic provisioning and attach etc
14:42:47 <jgriffith> This is where things get interesting IMO, but it’s also why I say there’s a LOT of work to be done
14:43:04 <smcginnis> apuimedo: No, just pointing out rewriting in go is more than just taking the existing code and redoing it in another language.
14:43:08 <jgriffith> And if we can all agree as a team and work on it together it could be pretty cool
14:44:04 <jgriffith> Luckily, for the go part of Fuxi we have an example we can use:  https://github.com/j-griffith/cinder-docker-driver
14:44:25 <jgriffith> But we need to decide on things like gophercloud vs openstack-golang...
14:44:31 <hongbin> #link https://github.com/j-griffith/cinder-docker-driver
14:44:33 <jgriffith> And most of all infra testing
14:45:15 <apuimedo> does everybody agree to have the docker volume driver as a base (like j-griffith/cinder-docker-driver and openstack/fuxi do)?
14:45:30 <dims> jgriffith : use gophercloud, let's settle on that
14:45:32 <hongbin> i have no problem with that
14:45:45 <apuimedo> just mentioning it because it feels very runtime specific
14:45:45 <jgriffith> dims works for me
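As a reference point for the gophercloud choice just settled on, creating a cinder volume looks roughly like this (auth from the usual OS_* environment variables; error handling kept minimal):

    package main

    import (
        "fmt"
        "log"

        "github.com/gophercloud/gophercloud"
        "github.com/gophercloud/gophercloud/openstack"
        "github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"
    )

    func main() {
        authOpts, err := openstack.AuthOptionsFromEnv() // OS_AUTH_URL, OS_USERNAME, ...
        if err != nil {
            log.Fatal(err)
        }
        provider, err := openstack.AuthenticatedClient(authOpts)
        if err != nil {
            log.Fatal(err)
        }
        cinder, err := openstack.NewBlockStorageV3(provider, gophercloud.EndpointOpts{})
        if err != nil {
            log.Fatal(err)
        }
        // Create a 1 GB volume; a provisioner would size this from the PVC.
        vol, err := volumes.Create(cinder, volumes.CreateOpts{Size: 1, Name: "fuxi-demo"}).Extract()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("created volume", vol.ID)
    }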
14:45:53 <jgriffith> apuimedo how so?
14:46:04 <apuimedo> they are docker volume API drivers
14:46:13 <apuimedo> so I wouldn't expect them to work with rkt/hyper
14:46:21 <jgriffith> apuimedo ahh.. got ya
14:46:47 <jgriffith> apuimedo so that’s a fair point, but there are two aspects to Fuxi and that driver layer
14:47:07 <apuimedo> I would like to separate the consolidation of the docker driver
14:47:25 <jgriffith> apuimedo yeah, that might work.
14:47:29 <apuimedo> from the effort of having k8s cinder/manila support for baremetal/pod-in-vm
14:47:46 <jgriffith> apuimedo What I was hoping to do is have a single package/sdk to talk to Cinder and issue the calls
14:47:49 <jgriffith> That’s all
14:48:10 <apuimedo> jgriffith: that's what fuxi-k8s proposes for now. And kuryr accepted it
14:48:23 <apuimedo> I just want to make sure that we are fine with the runtime lock-in it brings
14:48:43 <jgriffith> apuimedo I think there are ways to address that
14:48:58 <jgriffith> I agree with you that I’d love to see support for things like rkt
14:49:04 <hongbin> we could modify the spec to make it runtime agnostic in the future
14:49:10 <apuimedo> jgriffith: sure, you can have drivers both in the k8s watcher and in the flexvolume
14:49:14 <apuimedo> for different runtimes
14:49:30 <jgriffith> hongbin apuimedo we could also layer packages to make it flexible enough
14:49:32 <apuimedo> but we need to scope and define the steps
14:50:13 <hongbin> jgriffith: sounds reasonable
14:50:16 <apuimedo> jgriffith: kuryr-k8s already supports drivers, so that the same event handler could talk to a docker volume api or to something else
14:50:49 <jgriffith> hongbin apuimedo so maybe a few of us could work on defining this a bit over the next week and we can reconvene next week to see what people like/dislike etc?
14:50:54 <apuimedo> (although, tbh, I think it is cleaner for the k8s watcher to talk directly to cinder and manila without going via the docker volume api)
14:51:28 <apuimedo> for the flexvolume part, talking to the docker volume API or another driver looks good though
14:51:32 <jgriffith> apuimedo I don’t disagree with that… but the existing cinderclient is not a great fit for this IMO
14:51:45 <apuimedo> jgriffith: how so?
14:51:47 <jgriffith> That’s where the desire for another layer comes into play
14:51:54 <apuimedo> you mean python-cinderclient?
14:51:59 <jgriffith> apuimedo yes
14:52:40 <apuimedo> jgriffith: does your docker driver use plain HTTP requests or does it use gophercloud?
14:52:49 <jgriffith> apuimedo to be clear, that’s the base layer regardless… but there’s extra logic you’re likely going to want on top of it
14:53:04 <jgriffith> apuimedo gophercloud…. So it just goes down to cinderclient
14:53:15 <apuimedo> jgriffith: ok
14:53:59 <apuimedo> Are we going to use the kuryr-k8s watcher (python) or are we going to make a new one in golang?
14:54:23 <apuimedo> for the flexvolume and docker driver there seems to be a big agreement to move it to golang
14:54:31 <jgriffith> apuimedo I don’t know :). I guess if there’s a watcher that works there’s no good reason to reinvent one
14:54:46 <hongbin> +1
14:54:49 <jgriffith> But I don’t know anything about it I’m afraid :(
14:55:44 <apuimedo> jgriffith: it simply allows you to have pluggable handlers for K8s API objects
14:55:54 <apuimedo> and in the handlers you do whatever you like
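kuryr-kubernetes itself is Python, but the pluggable-handler pattern apuimedo describes boils down to roughly this (names illustrative, not kuryr's API):

    package main

    import "fmt"

    // Handler reacts to events for one kind of Kubernetes API object.
    type Handler interface {
        OnPresent(obj interface{})
        OnDeleted(obj interface{})
    }

    // PVCHandler would, e.g., provision cinder volumes for new claims.
    type PVCHandler struct{}

    func (PVCHandler) OnPresent(obj interface{}) { fmt.Println("provision for", obj) }
    func (PVCHandler) OnDeleted(obj interface{}) { fmt.Println("clean up", obj) }

    // dispatch routes a raw watch event to the registered handler.
    func dispatch(h Handler, eventType string, obj interface{}) {
        switch eventType {
        case "DELETED":
            h.OnDeleted(obj)
        default:
            h.OnPresent(obj)
        }
    }

    func main() {
        dispatch(PVCHandler{}, "ADDED", "default/my-claim")
    }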
14:56:33 <jgriffith> Cool, I’ll check it out
14:56:38 <apuimedo> It will probably move to use the new kubernetes python client
14:56:50 <apuimedo> #link https://github.com/openstack/kuryr-kubernetes
14:56:53 <hongbin> ok, we have almost run out of time
14:57:11 <hongbin> in the last 4 minutes, could we briefly summarize the next step?
14:57:43 <apuimedo> jgriffith: hongbin: Obviously, when we wanted to start doing the watcher, we would have had a much easier time in Golang, since then we could have used the official k8s golang client, but at the time the TC was still against golang
14:58:37 <hongbin> apuimedo: yes, in addition, converting everything to golang is a lot of work :)
14:58:43 <dims> apuimedo : jgriffith : folks, sent an email out on gophercloud (and stop the golang client thingy) http://lists.openstack.org/pipermail/openstack-dev/2017-May/117284.html
14:58:49 <apuimedo> fuxi is currently at the 0.1.0 release. I wonder whether the next step could be to move it to golang, taking in code from jgriffith, while keeping the API and options it currently exposes
14:59:32 <apuimedo> hongbin: would you agree to that?
14:59:54 <hongbin> apuimedo: yes, we can do that, i would try to figure out how to do it step-by-step
15:00:22 <apuimedo> dims will probably be able to help on how to do that infra wise
15:00:35 <hongbin> apuimedo: possibly, we would convert part of the code into a flexvolume plugin first
15:00:43 <dims> yep, count me in
15:01:13 <apuimedo> hongbin: where should the flexvolume adapter live? In openstack/fuxi ?
15:01:59 <hongbin> apuimedo: i don't have a specific idea for now, we could discuss it later
15:02:13 <hongbin> ok, overflow on openstack-kuryr channel
15:02:23 <hongbin> #endmeeting