14:02:50 #startmeeting fuxi_stackube_k8s_storage
14:02:50 Meeting started Tue May 23 14:02:50 2017 UTC and is due to finish in 60 minutes. The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:54 The meeting name has been set to 'fuxi_stackube_k8s_storage'
14:02:55 #topic Roll Call
14:03:04 o/ (listening mostly)
14:03:24 Hongbin Lu
14:04:06 #topic Introduction
14:04:14 Hi all
14:04:20 Hi
14:04:30 hi
14:04:30 I will get started with a brief introduction
14:04:41 hi
14:05:11 hello
14:05:15 The purpose of this meeting is to let everyone who is interested in k8s storage integration get together to figure out a development plan
14:05:32 sounds good
14:05:48 This effort was started by fuxi, then we had a couple of people who expressed interest in working on this area
14:06:06 This includes another team: stackube, and feisky is the initiator
14:06:43 Therefore, I proposed to schedule a meeting to let all of us discuss
14:07:13 Ideally, we could leverage this meeting to understand each other's interests, and figure out a common ground to work on this area
14:07:26 That is all from my side
14:07:36 apuimedo: feisky: do you have anything to add?
14:07:42 hongbin: is the purpose to develop a plugin that supplies storage offered by openstack for k8s?
14:07:56 no. You described the goals very well
14:08:13 no
14:08:14 hongbin: it would be nice if you could list the current k8s goals of fuxi-k8s
14:08:21 and then feisky could do the same
14:08:27 so we can figure out common ground
14:08:27 apuimedo: sure
14:08:39 #link https://docs.openstack.org/developer/kuryr-kubernetes/specs/pike/fuxi_kubernetes.html the proposal
14:09:05 The idea is to develop two components: the volume provisioner and the flexvolume plugin
14:09:32 The volume provisioner listens to the k8s API for PVCs, and creates a PV and a cinder/manila volume on each such event
14:09:55 The flexvolume plugin is for kubelet to connect to the cinder/manila volume
14:10:23 That is a brief summary of the design spec from fuxi
14:10:46 I believe there is some room to modify the design if necessary
14:10:55 We could discuss it based on feedback
14:10:55 cool. let me make an introduction of stackube
14:11:28 stackube aims to provide a kubernetes cluster with openstack components, with both soft and hard multi-tenancy.
14:12:15 It supports all k8s volumes, including flex volumes
14:12:57 but one main issue is that k8s mounts the volume on the host regardless of runtime
14:13:56 e.g. for hypervisors, passing the volume to the VM gives much better performance
14:14:12 that is one issue we should resolve in stackube
14:14:41 And I wonder whether fuxi could help with this issue
14:15:05 feisky: could you clarify "k8s mounts volume on the host regardless of runtime"
14:15:15 yep
14:16:18 for all volumes (including flex), kubelet will attach it to the host first and then mount the volume to a host path
14:16:47 then kubelet sets container volumes based on this mountpoint
14:17:09 the process is independent of container runtime
14:17:36 that is true
14:17:46 hyper will do something differently?
14:17:48 feisky: stackube? Link?
14:18:03 https://github.com/openstack/stackube
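[Editor's note: for readers unfamiliar with the flexvolume mechanism discussed above — kubelet execs an out-of-tree driver binary with a verb plus JSON options and reads a JSON status object from stdout — a minimal sketch in Go might look like the following. The verb handling is illustrative only (call signatures varied across k8s releases), and this is not fuxi's actual driver.]

```go
// Minimal sketch of a flexvolume-style driver entry point. kubelet
// invokes the binary as "<driver> init", "<driver> mount ...", etc.,
// and expects a JSON status object on stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type result struct {
	Status  string `json:"status"` // "Success", "Failure", or "Not supported"
	Message string `json:"message,omitempty"`
}

func emit(r result) {
	out, _ := json.Marshal(r)
	fmt.Println(string(out))
	if r.Status == "Failure" {
		os.Exit(1)
	}
	os.Exit(0)
}

func main() {
	if len(os.Args) < 2 {
		emit(result{Status: "Failure", Message: "no verb given"})
	}
	switch os.Args[1] {
	case "init":
		emit(result{Status: "Success"})
	case "mount":
		// A real driver would parse the JSON options (the last argument),
		// attach the named Cinder volume to the host, and mount it at the
		// kubelet-provided path — exactly the host-centric flow feisky
		// points out above.
		emit(result{Status: "Not supported", Message: "sketch only"})
	default:
		emit(result{Status: "Not supported"})
	}
}
```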
14:18:10 IMO K8's distort/model should be completely independent
14:18:40 I think there's more value in providing an independent plugin that only cares about upstream K8's integration
14:18:44 If that makes sense?
14:19:13 distort/model?
14:19:16 Distribution/model… sorry typo
14:19:47 ah
14:19:52 :)
14:19:55 Silly auto correct
14:20:54 :-)
14:21:09 So think libstorage…. There's no reason Cinder shouldn't actually be "libstorage"; it just needs the integration layer from Fuxi to make that work
14:22:20 Not sure if anybody agrees, or maybe not sure what I'm referring to there?
14:22:26 jgriffith: sorry, what will the independent plugin do?
14:23:03 jgriffith: yes, i think the process of mounting volumes to the host should be all the same; I'm trying to figure out how hyper will do it differently
14:23:04 zengche: so the idea IMO would be that you have a K8's deployment… deploy Cinder… install Fuxi… consume cinder from fuxi
14:23:13 err.. K8's
14:23:23 And I shouldn't care what K8's deployment you're using
14:23:40 Ideally the Fuxi code runs in a pod as well
14:23:58 hongbin: for the hyper runtime it would probably be mounted to qemu, wouldn't it feisky?
14:24:20 apuimedo: that's the preferred way
14:24:36 apuimedo hongbin: so the other thing is that if you're using that model you can use the openstack provider in K8's then, no?
14:25:15 jgriffith: it's not pods running on VMs, the "VMs" are the pods
14:25:23 jgriffith: i am not familiar with the k8s cloud provider, dims, can you comment on that?
14:25:40 so I'm doubtful that would work with the hyper runtime
14:26:00 apuimedo: yeah, I don't know anything about hyper
14:26:24 the cloud provider assumes that the volume will be attached to something that was started by Nova
14:27:09 as I said, k8s doesn't care about the runtime now, so it's impossible to mount the volume to a hyper VM now
14:27:16 jgriffith: simplifying. It is a qemu-kvm accelerated VM managed by the hyper runtime that only runs the pod
14:27:24 and by the way, stackube doesn't require a cloud provider
14:27:49 apuimedo: right. the vm is managed by hyperd
14:28:48 feisky: sure, not pretending to know about those two products
14:28:59 feisky: I'm glad the effort I put into reading the code two years ago is still fresh in my mind
14:28:59 feisky: it sounds like hyper would like to develop its own volume plugin instead of using flexvolume?
14:29:02 :P
14:29:15 hongbin +1
14:30:10 hongbin: no, we will still conform to the kubernetes volume plugin.
14:30:42 feisky: then, you would like to develop a custom flexvolume plugin for hyper?
14:30:43 but for now, since that is impossible, we will extract the volume metadata and pass it to the runtime directly
14:31:16 feisky: could you elaborate on that?
14:31:18 it's not a flex plugin
14:31:33 e.g. we could take the info via annotations
14:31:48 feisky: that's the best approach
14:31:50 and CRI already supports setting annotations for runtimes
14:32:10 feisky: do you have a link to this hyper project that you're referring to?
14:32:40 jgriffith: do you mean the stackube link?
14:33:02 oh, sorry, hyper is here https://github.com/hyperhq/hyperd
14:33:11 feisky :). thanks
14:33:27 feisky: what's next once the runtime gets the volume metadata? can the runtime use the volume directly?
14:33:29 and we have a runtime in kubernetes: https://github.com/kubernetes/frakti
14:34:02 zengche: yep, the runtime will set the volume to the VM
14:34:42 feisky: got it. it would be better if you five me more details. thanks.
14:35:03 feisky: s/five/give
14:35:23 zengche: please keep an eye on the stackube project, will make the design docs later
14:35:32 s/make/add
14:35:47 feisky: any timeline on that?
14:35:54 feisky: ok, i have seen your project.
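[Editor's note: a rough sketch of the annotation-based metadata passing feisky describes, where a hypervisor runtime reads volume info from the pod rather than relying on a host mount. The annotation key and JSON layout are invented for illustration; frakti's actual scheme may differ.]

```go
// Hypothetical shape of volume metadata carried in a pod annotation so a
// hypervisor-based runtime can attach the volume to the pod VM directly.
// The key "stackube.example.org/cinder-volume" is made up for this sketch.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

type volumeInfo struct {
	VolumeID string `json:"volumeID"` // Cinder volume UUID
	FSType   string `json:"fsType"`
}

func volumeFromPod(pod *v1.Pod) (*volumeInfo, error) {
	raw, ok := pod.Annotations["stackube.example.org/cinder-volume"]
	if !ok {
		return nil, fmt.Errorf("pod %s has no volume annotation", pod.Name)
	}
	var info volumeInfo
	if err := json.Unmarshal([]byte(raw), &info); err != nil {
		return nil, err
	}
	// A CRI runtime such as frakti would receive the same annotation via
	// the sandbox config and attach the volume inside the pod VM.
	return &info, nil
}

func main() {
	pod := &v1.Pod{}
	pod.Name = "demo"
	pod.Annotations = map[string]string{
		"stackube.example.org/cinder-volume": `{"volumeID":"<uuid>","fsType":"ext4"}`,
	}
	info, err := volumeFromPod(pod)
	fmt.Println(info, err)
}
```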
14:36:05 I think it might be good to define and narrow the scope here a bit
14:36:22 There's a lot going on in this conversation
14:36:35 jgriffith: +1
14:36:53 Fuxi currently provides a way to consume Cinder from Docker… that's great
14:37:13 IIRC it also allows either non-nova or nova configurations
14:37:15 jgriffith: sorry, let's focus on the volume itself
14:37:31 jgriffith: yes, that is correct
14:37:33 hongbin: correct me if I'm wrong on any of this?
14:37:36 hongbin :)
14:37:55 So… what I wanted to propose was two things:
14:38:08 1. Convert the existing Fuxi python code to golang
14:38:27 2. Create a kubernetes integration
14:38:45 ++
14:38:50 jgriffith: +1
14:38:51 Those two things are a lot of work, and #2 has a lot of details
14:39:20 hongbin: jgriffith: stackube will support fuxi intrinsically (stackube supports all existing kubernetes volumes)
14:39:24 jgriffith: for #1, i guess we need to port os-brick to golang?
14:39:39 feisky: ack
14:39:51 hongbin did a good job of outlining the use of flexvolume with a K8s watcher/listener for provisioning
14:40:11 That will be a lot of work: https://governance.openstack.org/tc/resolutions/20170329-golang-use-case.html
14:40:29 Keep in mind that flexvol will (hopefully soon) have the ability to do provisioning as well, so the listener would be temporary
14:40:38 smcginnis: indeed :)
14:40:45 i see
14:40:56 But dims has blazed a trail for us that I think will help
14:41:17 jgriffith: +1 I would love to see some more movement on the golang front.
14:41:22 There's one other aspect of this as well….
14:41:31 smcginnis: you mean that at some point there will not be a need for watching the k8s volume resources?
14:42:02 dims showed me some work that splits cinder out of the existing cloud provider in K8's, so it may be possible to use that plugin with or without nova
14:42:16 In which case you get dynamic provisioning and attach etc
14:42:47 This is where things get interesting IMO, but it's also why I say there's a LOT of work to be done
14:43:04 apuimedo: No, just pointing out that rewriting in go is more than just taking the existing code and redoing it in another language.
14:43:08 And if we can all agree as a team and work on it together it could be pretty cool
14:44:04 Luckily, for the go part of Fuxi we have an example we can use: https://github.com/j-griffith/cinder-docker-driver
14:44:25 But we need to decide on things like gophercloud vs openstack-golang...
14:44:31 #link https://github.com/j-griffith/cinder-docker-driver
14:44:33 And most of all infra testing
14:45:15 does everybody agree to have the docker volume driver as a base (like j-griffith/cinder-docker-driver and openstack/fuxi do)?
14:45:30 jgriffith: use gophercloud, let's settle on that
14:45:32 i have no problem with that
14:45:38 just mentioning it because it feels very runtime specific
14:45:45 dims: works for me
14:45:53 apuimedo: how so?
14:46:04 they are docker volume API drivers
14:46:13 so I won't expect them to work with rkt/hyper
14:46:21 apuimedo: ahh.. got ya
14:46:47 apuimedo: so that's a fair point, but there are two aspects to Fuxi and that driver layer
14:47:07 I would like to separate the consolidation of the docker driver
14:47:25 apuimedo: yeah, that might work.
14:47:29 and the effort on having k8s cinder/manila support for baremetal/pod-in-vm
14:47:46 apuimedo: What I was hoping to do is have a single package/sdk to talk to Cinder and issue the calls
14:47:49 That's all
14:48:10 jgriffith: that's what fuxi-k8s proposes for now. And kuryr accepted it
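[Editor's note: a minimal sketch of the "single package/sdk to talk to Cinder" idea, built on gophercloud as dims suggests above. It assumes the usual OS_* auth variables are set in the environment; the volume name and size are illustrative.]

```go
// Create a Cinder volume through gophercloud, the SDK the team settles
// on in this meeting. Sketch only, not fuxi or cinder-docker-driver code.
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes"
)

func main() {
	authOpts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		panic(err)
	}
	provider, err := openstack.AuthenticatedClient(authOpts)
	if err != nil {
		panic(err)
	}
	// Block storage (Cinder) v2 client; v3 would be analogous.
	client, err := openstack.NewBlockStorageV2(provider, gophercloud.EndpointOpts{})
	if err != nil {
		panic(err)
	}
	vol, err := volumes.Create(client, volumes.CreateOpts{
		Name: "fuxi-demo", // illustrative name
		Size: 1,           // GiB
	}).Extract()
	if err != nil {
		panic(err)
	}
	fmt.Println("created volume", vol.ID)
}
```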
14:48:23 I just want to make sure that we are fine with the runtime lock-in it brings
14:48:43 apuimedo: I think there are ways to address that
14:48:58 I agree with you that I'd love to see support for things like rkt
14:49:04 we could modify the spec to make it runtime agnostic in the future
14:49:10 jgriffith: sure, you can have drivers both in the k8s watcher and in the flexvolume
14:49:14 for different runtimes
14:49:30 hongbin apuimedo: we could also layer packages to make it flexible enough
14:49:32 but we need to scope and define the steps
14:50:13 jgriffith: sounds reasonable
14:50:16 jgriffith: kuryr-k8s already supports drivers, so that the same event handler could talk to the docker volume API or to something else
14:50:49 hongbin apuimedo: so maybe a few of us could work on defining this a bit over the next week and we can reconvene next week to see what people like/dislike etc?
14:50:54 (although, tbh, I think it is cleaner if the k8s watcher talks directly to cinder and manila without going via the docker volume API)
14:51:28 for the flexvolume part, talking to the docker volume API or another driver looks good though
14:51:32 apuimedo: I don't disagree with that… but the existing cinderclient is not a great fit for this IMO
14:51:45 jgriffith: how so?
14:51:47 That's where the desire for another layer comes into play
14:51:54 you mean python-cinderclient?
14:51:59 apuimedo: yes
14:52:40 jgriffith: does your docker driver use plain HTTP requests or does it use gophercloud?
14:52:49 apuimedo: to be clear, that's the base layer regardless… but there's extra logic you're likely going to want on top of it
14:53:04 apuimedo: gophercloud…. So it just goes down to cinderclient
14:53:15 jgriffith: ok
14:53:59 Are we going to use the kuryr-k8s watcher (python) or are we going to make a new one in golang?
14:54:23 for the flexvolume and docker driver there seems to be broad agreement to move them to golang
14:54:31 apuimedo: I don't know :). I guess if there's a watcher that works there's no good reason to reinvent one
14:54:46 +1
14:54:49 But I don't know anything about it I'm afraid :(
14:55:44 jgriffith: it simply allows you to have pluggable handlers for K8s API objects
14:55:54 and in the handlers you do whatever you like
14:56:33 Cool, I'll check it out
14:56:38 It will probably move to use the new kubernetes python client
14:56:50 #link https://github.com/openstack/kuryr-kubernetes
14:56:53 ok, we have almost run out of time
14:57:11 in the last 4 minutes, could we briefly summarize the next step?
14:57:43 jgriffith: hongbin: Obviously, when we wanted to start doing the watcher, we would have had a much easier time in Golang, since then we could have used the official k8s golang client, but at the time the TC was still against golang
14:58:37 apuimedo: yes; in addition, converting everything to golang is a lot of work :)
14:58:43 apuimedo: jgriffith: folks, I sent an email out on gophercloud (and stopping the golang client thingy) http://lists.openstack.org/pipermail/openstack-dev/2017-May/117284.html
14:58:49 fuxi is currently at the 0.1.0 release. I wonder if it is possible that the next step is to move it to golang, taking in code from jgriffith, with the API and options it currently exposes
14:59:32 hongbin: would you agree to that?
14:59:54 apuimedo: yes, we can do that, i would try to figure out how to do it step-by-step
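[Editor's note: a minimal sketch of the watcher pattern discussed above — pluggable handlers reacting to K8s API objects — written against the official k8s golang client (client-go, pre-context API). The kubeconfig path and provisioning logic are assumptions; this is not kuryr-kubernetes or fuxi code.]

```go
// Watch PersistentVolumeClaims and hand new ones to a pluggable handler
// that would, e.g., create a Cinder/Manila volume and a matching PV.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Handler is the pluggable piece: one implementation per backend
// (cinder, manila, ...), mirroring kuryr-k8s's driver idea.
type Handler func(pvc *v1.PersistentVolumeClaim)

func main() {
	// Kubeconfig path is an assumption for the sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	handle := Handler(func(pvc *v1.PersistentVolumeClaim) {
		// A real provisioner would create the backend volume here and
		// post a PersistentVolume bound to this claim.
		fmt.Println("would provision for claim:", pvc.Namespace+"/"+pvc.Name)
	})

	// Older client-go signature; newer releases take a context first.
	w, err := client.CoreV1().PersistentVolumeClaims("").Watch(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if pvc, ok := ev.Object.(*v1.PersistentVolumeClaim); ok && ev.Type == watch.Added {
			handle(pvc)
		}
	}
}
```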
15:00:22 dims will probably be able to help on how to do that infra-wise
15:00:35 apuimedo: possibly, we would convert part of the code into the flexvolume plugin first
15:00:43 yep, count me in
15:01:13 hongbin: where should the flexvolume adapter live? In openstack/fuxi?
15:01:59 apuimedo: i don't have a specific idea for now, we could discuss it later
15:02:13 ok, overflow on the openstack-kuryr channel
15:02:23 #endmeeting