16:00:01 #startmeeting kolla
16:00:05 Meeting started Wed Dec 21 16:00:01 2016 UTC and is due to finish in 60 minutes. The chair is inc0. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:10 The meeting name has been set to 'kolla'
16:00:19 #topic w00t for kolla - rollcall
16:00:21 o/
16:00:22 o/
16:00:23 woot! o/
16:00:24 howdy
16:00:25 \0/
16:00:26 hi...
16:00:32 hello
16:00:36 oops
16:00:41 o/
16:00:44 sup zhubingbing
16:00:44 o/
16:00:46 hi
16:00:50 o/
16:00:51 sup duonhq
16:01:04 o/
16:01:27 woot
16:01:38 thanks sayantan_, im not the only one now
16:01:49 haha
16:02:06 ;
16:02:32 hey eglute
16:02:35 hey egonzalez90
16:02:37 yeah something is broken with our community:/ I know! pbourke is not around, he always stand behind the woot
16:02:38 sorry eglute
16:02:42 o/
16:02:51 anyway, moving on
16:02:53 hey is chrismasing
16:02:54 woot!
16:03:00 ha!
16:03:01 there he is!
16:03:03 ;)
16:03:05 pbourke is a rebel ;)
16:03:09 #topic announcements
16:03:27 We have 2 new cores in kolla-k8s! please welcome srwilkers and portdirect:)
16:03:35 *clap*
16:03:42 * duonghq clap
16:03:46 Congrats guys!
16:03:47 clap
16:03:51 welcome to the core reviewer team folks :)
16:03:54 congrats guys
16:03:56 clap
16:03:57 congrats!
16:03:58 I mean we all welcomed them already, multiple times, but now +2 powers are new
16:04:07 o/
16:04:09 thanks :) its been great working with you all.
16:04:12 congrats!
16:04:29 next announcement: next meeting is cancelled, have a great hannukah/holidays/christmas
16:04:39 ;)
16:04:45 :)
16:04:50 or brief break from reality ;)
16:04:51 *clap*
16:05:02 or sol invictus if that's your thing
16:05:46 #topic agenda
16:05:50 * static uid/gid - SamYaple
16:05:50 * replace heka with fluentd - zhubingbing
16:05:50 * we're overloading the term "pod" in a way that can make devs/ops assume they are less functional then they are. What should we do? - this is a old one, which may not talked in last meet, we can skip it if no one care.
16:05:50 * kolla-salt! - SamYaple
16:05:50 * kolla-kubernetes UX for service layers - portdirect
16:05:50 moment
16:05:56 i have a community announcement
16:05:57 yeah
16:06:00 ahh sorry
16:06:04 if i may inc0
16:06:04 go ahead sdake
16:06:15 http://paste.openstack.org/show/593033/
16:06:21 Atlanta Sheraton where PTG is held is SOLD OUT - book hotels soon nearby
16:06:37 soon as in as soon as possible :)
16:06:39 fluentd progress
16:06:46 thanks for the heads up on that
16:06:52 need to get moving on that front on my end
16:07:03 let's rent airbnb kolla house
16:07:15 * srwilkers thinks
16:07:19 inc0 if your co will let you do that more power to ya :)
16:07:22 inc0 many big corps don't allow that
16:07:32 yeah intel doesn't too
16:07:32 i would pay out of pocket for that the more i think about it
16:07:33 inc0: also please lets us know the vitual PTG plan
16:07:42 virtual*
16:07:47 i'm going to apply for the travel support fund if anyone has experince with that? would be great to get some tips after meeting.
16:08:01 +1 for virtual access to PTG
16:08:05 portdirect Jeffrey4l has attempted it - ask him - although i think the deadline has passed
16:08:10 portdirect, me too
16:08:17 sdake, we have 2nd deadline
16:08:18 sdake, the first has passed. theres another round coming up
16:08:20 portdirect: yeah, deadline passed
16:08:34 portdirect: there is another deadline in jan 1st week
16:08:37 actually have a reminder on my phone a week out from the next deadline to bug portdirect about it
16:08:38 sdake: there is another round but I'm not holding out hope - virtual attendance would be awesome if possible
16:08:39 i will ping you the details
16:08:51 thanks coolsvap :)
16:09:07 ok moving on, agenda:)
16:09:22 portdirect do you want to add your configmap discussion to it
16:09:22 ?
16:09:34 portdirect: travel support registration is open till Jan 1st week
16:09:43 I'll bolt that onto ux, thanks for reminding me :)
16:09:49 kk
16:09:51 Can we also briefly discuss about documentation?
16:09:57 ok, so let's go with meeting:)
16:10:04 sayantan_, if time allows, yeah
16:10:08 cool
16:10:14 #topic static uid/gid - SamYaple
16:10:17 go ahead Sam
16:10:26 i dont know if i can link things... but
16:10:31 #link https://review.openstack.org/#/c/412231/
16:10:39 SamYaple, you can
16:10:48 then ive done something wrong.
16:10:51 either way, proceeding
16:11:12 thats the static uid/gid patch. it is passing the gate (with the exception of ubuntu-binary for reasons im aware of)
16:11:25 *however* the gate is not testing kvm
16:11:39 kvm is no bueno?
16:11:44 kvm should work
16:11:49 its not tested though.
16:11:50 SamYaple how are only 3 containers affected?
16:12:01 sdake:?
16:12:08 SamYaple in the commit message
16:12:16 SamYaple i'd think it affects all containers
16:12:30 sdake: unifying users as in, in ubuntu its _chrony and centos its chrony
16:12:37 the new user is now jsut chrony for both
16:12:44 oh i see
16:12:47 ill reword
16:12:51 cool
16:13:03 i'll review in the new year when its out of [wip] :)
16:13:08 I have built all centos-* and ubuntu-* containers to check all the users and groups
16:13:17 next patch wil lremove [wip]
16:13:27 im not changing anyhitng, just removing wip
16:13:30 cool, looks great SamYaple
16:13:32 question
16:13:46 upgrades - we'll need to chown stuff that already exist
16:13:51 Jeffrey4l are your concerns met?
16:13:55 yeah thanks for all your work on this - and reaching out to other groups to find the best way forward
16:13:58 sdake, yep. listening
16:14:01 inc0 that is handled by the json api
16:14:09 inc0: upgrades is a seperate concern to this patch. we have the infrastructure in place to make the upgrade possible
16:14:12 inc0, we need use json to chown.
16:14:15 yeah, but are gids/uilds correct in config.json?
16:14:16 but its not implemented here obviously
16:14:24 inc0: we dont use uid/gid there
16:14:26 we use names
16:14:32 inc0 that is a depends-on patch that needs to hit kolla-ansible repo
16:14:32 and those are based on the container at runtime
16:14:33 ah yeah, true
16:14:35 it seems to me there are some chowns in the extend_start scripts that can go now we have the json
16:14:37 am I wrong on that?
16:14:40 so let's confirm that names are correct
16:14:46 pbourke: nope. those use names too
16:14:47 i.e. a lot of bootstrap steps can go away
16:14:59 im not sure about that pbourke
16:15:10 because docker volume initially is incorrectly permissioned
16:15:15 pbourke, inc0 we have permissions in set_configs which part of dockers are already fixed in newton cycle.
16:15:29 others should be done in this cycle.
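[editor's note] The name-based ownership scheme discussed above — config.json carries user/group *names*, and the numeric uid/gid is resolved inside the container at runtime — can be sketched roughly as follows. This is a hypothetical illustration, not the actual kolla set_configs code; `resolve_owner` is an invented helper.

```python
import grp
import pwd


def resolve_owner(owner):
    """Resolve a "user[:group]" string to numeric (uid, gid).

    Names are looked up inside the running container, so images whose
    static ids differ (e.g. ubuntu's _chrony vs centos's chrony before
    the names were unified) can share one config.json as long as the
    *names* agree.
    """
    user, _, group = owner.partition(":")
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group or user).gr_gid
    return uid, gid
```

A chown driven by this lookup stays correct across the static-uid change, which is why upgrades only need a re-chown of pre-existing data rather than new config.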
16:15:39 Jeffrey4l: roger, just I think there's some duplication
16:15:50 anyway, biggest change is likely around nova-libvirt and that should be tested throughly
16:15:52 between set_configs and extend_start in some cases
16:15:59 ive got ubuntu-binary and ubuntu-source on baremetal
16:16:09 but i dont have baremetal for centos to test that works
16:16:18 qemu is obviuosly working since gate is passing
16:16:52 pbourke, yep. maybe we can remove extend_start part. and move all the permission related chown into set_config, which is more better, imo.
16:17:07 Jeffrey4l: sounds good
16:17:19 Jeffrey4l: i like that too
16:17:25 ok,can we move on?
16:17:33 if no one has more question, yea
16:17:41 btw, we are re implement a ansible module in set_configs :(
16:17:47 #topic replace heka with fluentd - zhubingbing
16:17:52 hi all
16:17:56 today I talk about the progress of flutend
16:18:10 Link http://paste.openstack.org/show/593033/
16:18:47 I will finish openstack service and apache type service to working in this week, please help review;)
16:18:57 thanks!
16:19:05 First, fluentd heka collected with the same field
16:19:50 nice work zhubingbing
16:20:30 thanks
16:20:49 is this tracked in a blueprint?
16:21:19 yes
16:21:36 and i now see heka collected log a bit of a problem, I will be optimized after the replacement
16:21:56 sweet zhubingbing :)
16:22:42 task status I will write in bp
16:22:52 kk moving on
16:22:53 ?
16:23:23 I guess that means yes:)
16:23:26 #topic overloading the term "pod"
16:23:31 portdirect you have floor
16:23:58 kfox1111?
16:24:05 but I'll have a go if you want
16:24:05 ok. so...
16:24:14 ^^
16:24:22 in some places in our api we're making the user type in pod.
16:24:37 but pod has a specific definition in k8s that is different from the way we're using it.
16:25:11 we're using it to mean, any k8s object that ends up creating 1 or more pods.
16:25:24 in k8s its just a single instance that is transient, and if it dies, goes away.
16:25:43 so the 'official' term in k8s for what we mean (daemonset, deployment etc) is pod-controller i believe - but thats a pretty horrible term
16:25:54 so may lead people to thinking our things are less funcional then they are.
16:26:00 pod-controller isn't quite right either.
16:26:11 the pod controller is the thing that takes in the thing we give it, and launches the pods.
16:26:52 so, I don't think there is a generic term for "daemonset, deployment, replication controller,' etc)
16:26:58 but we either need one,
16:27:04 how about k8s-object?
16:27:04 or just refer to the type specifically.
16:27:07 no matter what you start in kube, it gets translated into running "pod" (other than bunch of other things)
16:27:08 pod controller is the generic term that they used to use
16:27:19 so instead of say rabbit-pod, we call it rabbit-statefulset say.
16:27:19 naming is hard :)
16:27:21 but it kinda died about 1.2
16:27:37 portdirect: a controller is an extension to k8s that runs k8s.
16:27:43 portdirect: we're not doing that. we're consuming that.
16:27:52 I think using pod as a name make sense to me..
16:28:14 ok - I'm just reporting how it used to be (I hated it at the time)
16:28:25 sbezverk: but a daemonset is not a pod. but we're calling it so. :/
16:28:31 portdirect: ok.
16:28:55 if we need a name for it, im for the approach you mentioned kfox1111 -- rabbit-statefulset in that example
16:29:11 kfox1111: at the end of daemonset is POD too
16:29:28 yeah - lets actually call it what it is be it a daemonset or statefulset or whatever
16:29:29 when you do kubectl get pods
16:29:39 it returns daemonset pods
16:29:41 i'm totally with portdirect on that.
16:29:41 really at the base of it all, they're all just API objects at the end of the day
16:29:45 leads to so much confusion otherwise.
16:31:24 also determine the lifecycle cahricteristics/actions that can be performed (eg rolling updates etc)
16:31:35 I guess my vote's the more descriptive postfix too.
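[editor's note] The "descriptive postfix" proposal above (rabbit-statefulset rather than rabbit-pod) amounts to naming each resource after the k8s kind that actually manages it. A purely illustrative sketch — `resource_name` and the kind list are hypothetical, not kolla-kubernetes code:

```python
# Kinds that create one or more pods; the overloaded word "pod" itself
# is deliberately absent, since none of these *is* a pod.
POD_CREATING_KINDS = {"deployment", "daemonset", "statefulset",
                      "replicationcontroller", "job"}


def resource_name(service, kind):
    """Build a name like "rabbitmq-statefulset" from service + k8s kind."""
    if kind not in POD_CREATING_KINDS:
        raise ValueError(f"not a pod-creating kind: {kind}")
    return f"{service}-{kind}"
```

With this convention a newcomer reading "neutron-daemonset" knows immediately what lifecycle actions (rolling updates, etc.) apply, which the generic "-pod" suffix hides.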
16:31:45 I do not get what kind of confusing you have in rabbitmq-pod
16:31:47 portdirect: yeah.
16:32:10 sbezverk, i think the confusion being mentioned here could be individuals coming from outside of kolla-k8s
16:32:10 sbezverk: its confusion to new people, not those used to the way kolla-kubernetes has been using the term.
16:32:20 who want to contribute and may not be clear to others who have seen it used in a different way
16:32:30 srwilkers ++
16:32:47 yeah, if you tell a k8s person that we deploy pods they will think we are mad
16:32:53 hopefully its just a git mv aaway :)
16:32:56 yeah.
16:33:15 * portdirect is mad, but tried to hide it
16:33:20 not fooling anyone
16:33:23 im on to you
16:33:27 how about a mailing list discussion to gather ideas?
16:33:31 I can take an action on that
16:33:36 now that we understand the problem
16:33:41 sdake: that would be great
16:33:54 i can't add #action, inc0 has to do it ;)
16:33:56 sdake: works for me.
16:34:04 #chair sdake
16:34:05 Current chairs: inc0 sdake
16:34:12 sdake, now you can
16:34:13 i am having issues with mailing list. i will correct them i guess.
16:34:15 adding all these to names seems waste of time in typing and then spelling errors
16:34:23 v1k0d3n, i can help you with that
16:34:24 Yeah, at least some of our target audience is the kube-centric, not the stack-centric.
16:34:25 #action sdake to start mL discussiona around pod naming conventions
16:34:37 I bet there is going to be bunch of issues with daemonsets vs deamonset etc
16:34:40 hey wirehead_ :)
16:34:44 thanks steve. been waiting forever for emails and nothing.
16:34:49 sbezverk git mv away :)
16:34:59 Actual picture of sdake right now: http://www.newvideo.com/wp-content/uploads/boxart/NNVG1644.jpg
16:35:05 v1k0d3n need to understand the problem before it can be fixed :)
16:35:12 v1k0d3n, if you need to wait more than 1hr for new email - something is wrong
16:35:43 sdake: i can take this outside of the meeting.
16:35:46 v1k0d3n ya - anyone is free to start up a discussion on ml
16:35:54 v1k0d3n wfm :)
16:36:10 can we move on?
16:36:25 yup?
16:36:27 inc0 note i removed the dead horse beating from the agenda and added docs instead
16:36:36 thanks
16:36:39 inc0 i think it missed the cleanup :)
16:36:42 #topic kolla-salt
16:36:49 SamYaple, go ahead
16:36:53 woot!
16:36:59 #link https://etherpad.openstack.org/p/kolla-salt
16:37:12 I've been hearing of it for at least a year now
16:37:12 I am purposing a new kolla-* project, deploy kolla containers with saltstack
16:37:17 :P
16:37:24 SamYaple the correct term is deliverable - not project
16:37:31 details
16:37:34 Let’s not get salty over it, tho, inc0.
16:37:54 salt, years ago, didnt quite have the feature set that it needed to deploy complex tihngs (orchestrate them)
16:37:55 but we need to takie it with a grain of salt tho
16:38:08 thats why i used ansible, but salt is pretty awesome
16:38:21 please take a look at that etherpad. and ideas, comments, etc
16:38:29 wirehead_, :)
16:38:33 I also have a working repo up
16:38:36 #link https://github.com/SamYaple/kolla-salt
16:38:43 SamYaple, so here is how it's going to be done - you create kolla-salt project as normally is created
16:38:59 there is guide online, I'm sure you know it
16:38:59 i deploy a simple service (memcache) and a complex one (rabbitmq) so you can see how im laying it out currently
16:39:09 yea inc0. ill do the paperwork
16:39:11 and add all kolla cores to kolla-salt core team
16:39:16 + yourself ofc
16:39:45 for those of you that care about this, youll find that the repo above implement reconfigure (restart service if service config changes)
16:39:55 AS WELL AS conncurrency locking
16:40:09 it will only restart things one at a time (or 2, 3, etc if configured)
16:40:28 so thats going to be heavily used to do task locking
16:40:37 check for service recovery before unlocking
16:41:00 sounds fun
16:41:09 lots of interesting features to use in salt
16:41:09 and it's python, even more python than ansible
16:41:15 and it's not gpl v3 \o/
16:41:17 very much so
16:41:29 basically, we can template the _tasks_ files
16:41:39 heck, we can write the tasks in pure python
16:41:41 sounds like fun
16:41:54 kolla is already sweet thanks SamYaple for adding some salt flavor :)
16:41:55 have at it man:)
16:41:58 anyway. please look and comment. hit me up for more details
16:42:08 coolsvap: :)
16:42:14 thats it for me
16:42:25 wirehead_, pundemic is spreading;)
16:42:41 pandemonium more like it :)
16:42:42 You mean, kolla is being a-salted with puns?
16:42:43 k, next topic
16:42:47 #topic kolla-kubernetes UX for service layers - portdirect
16:42:53 oooh.
16:43:01 So for the user experience of kolla-k8s I'd like to dicuss two things today, the service layer (which kfox1111 , and sbezverk have been making great progress on) and configuration.
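[editor's note] The kolla-salt behaviour SamYaple describes above — restart at most N services at a time, and check for service recovery before releasing the lock — can be sketched in plain Python. The salt implementation differs; here `restart` and `healthy` are injected callables so the sketch stays self-contained.

```python
import threading


def rolling_restart(hosts, restart, healthy, max_parallel=1):
    """Restart each host's service, at most max_parallel at a time.

    A slot is released only after the restarted service is seen healthy
    again, mirroring "check for service recovery before unlocking".
    """
    slots = threading.BoundedSemaphore(max_parallel)
    failed = []

    def worker(host):
        with slots:                 # concurrency lock (1, 2, 3, ... slots)
            restart(host)
            if not healthy(host):   # verify recovery before unlocking
                failed.append(host)

    threads = [threading.Thread(target=worker, args=(h,)) for h in hosts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    if failed:
        raise RuntimeError(f"failed to recover: {failed}")
```

With `max_parallel=1` this gives the one-at-a-time rabbitmq-style restart; raising it trades safety for speed.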
16:43:13 Up until now we have used kolla-ansible for config and that has been great to get us going, but I'm concerned that its going to start holding us back
16:43:26 It the area where we have the most flexibility at the moment, but once we have defined a path will be hard to change from once we have some user's
16:43:44 I think we need to have a think about how ew do this, personally I'd like us to use helms values to perform all config, so we can ship with an overideable set of insecure defaults for testing and deveopment,
16:43:58 making it as easy as possible to the user. But it would be gre3at to get other peoples options on this, and think that the meeting we have with helm will be very usefull potentially for gettind gudiance as to what is possible.
16:44:15 I've been working on some ceph stuff for upstream, and have been exporing using a helm plugin for config there - meaning we can disconnect the configuration totally from the deployment, I'm not 100% sure its the right approach but seems promising.
16:44:30 thoughts?
16:44:38 I have the same concern with helm as I did with docker env vars. openstack's so configurble for a reason. every config option was needed to be set by someone at somepoint.
16:44:45 Indeed.
16:44:56 portdirect: +1 need in config management, we cannot relay on kolla ansible to get config
16:45:04 i agree as well
16:45:06 templating that all out could be very difficult to maintain support for all options.
16:45:25 +1 portdirect
16:45:27 so, I think there must be a way not to use helm config's.
16:45:35 yeah thats where the plugin stuff becomes interesting
16:45:38 Also, options-that-depend-on-other-options in order to not be instant footguns.
16:45:38 that being said, i wouldn't be against a batteries included option.
16:46:18 so you can load in configmaps totally seperatly from the deplyment - and so load in any config you want or a system has created for you
16:46:48 ive done alot of work with kolla configs, and i think reusing other configs is a mistake. id be ok with a configmanagement-type repo, but i worry that might be too much overhead
16:46:57 portdirect so I started this:
16:46:59 #link https://review.openstack.org/#/c/399147/
16:47:09 kolla-k8s maintaining their own configs would be best i think
16:47:21 +1 SamYaple
16:47:25 and we should revive it I think
16:47:29 SamYaple, or have central configs
16:47:32 inc0: there is lots of comments in there
16:47:35 I'm personally for trying to rewrite the templates a bit so they are more reusable across deliverables.
16:47:36 yea im against that
16:47:45 inc0: look at the comments, they are recent
16:47:45 reveiw.opensatck is down for me?
16:47:49 Do people think that it’s more likely that you’d have one or two of the services heavily customized and the rest stock?
16:47:55 and back up
16:48:05 remove logic from the templates and do it with var passing.
16:48:13 wirehead_: i think so
16:48:19 how about in the short term the kolla-kubernetes maintains its own config options with a goal in the future to unifying it (post ocata)
16:48:20 wirehead_: i think keystone is the *most* customized. followed by neutron and then nova. in my experince
16:48:21 wirehead_: esp neutron...
16:48:21 wirehead_: I heavily customize mine usually. :/
16:48:31 sdake: that works for me
16:48:37 Alternatively, for those cases, would you potentially want to have multiple discrete installs, linked, instead of a single install?
16:48:45 then we can think of a central config repo and judge if its really needed
16:48:51 SamYaple right
16:48:52 wirehead_: thats my utimate aim
16:49:31 sdake: I'm a bit worried about the overhead of maintaining a fork of config, but if people step up to do so thats ok.
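[editor's note] The "overridable set of insecure defaults" portdirect proposes maps onto the layered merge helm performs over values files. A minimal sketch of that merge, assuming options are nested dicts (the keys below are invented for illustration):

```python
def deep_merge(defaults, overrides):
    """Return defaults with overrides layered on top, recursing into
    nested dicts the way helm merges a user values file over a chart's
    built-in values. Inputs are not mutated."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```

An operator who overrides only one option (say, a database password) keeps every other shipped default, which is what makes the insecure-defaults-for-dev approach tolerable in production.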
16:49:32 Because the real problem is “Here’s the master helm chart for batteries-included, that I’ve changed to include my sick-and-twisted helm charts for Keystone and Neutron and Nova, and thus it’s going to drift away from batteries-included without active effort and break everything"
16:50:01 kfox1111: well youre just pushing that overhead onto _everyone_ else by making it central to kolla repo
16:50:08 and making changes more difficult at that
16:50:28 SamYaple: right now, we're just using kolla-ansible. so 0 overhead.
16:50:39 disagree. look at the kolla-ansible configs
16:50:42 Ido think we need to address taht at some point. just not sure when.
16:50:47 that overhead has been pushed to kolla-ansible
16:50:57 SamYaple: there have been a few patches, but very minor.
16:50:58 question...
16:51:06 just to put it out there.
16:51:18 when CRI comes (which it's out there in 1.5)
16:51:20 kfox1111: still, the overhead has been pushed. someone has to do the work, you are saying "not kolla-k8s"
16:51:32 and runtimes look different than just the normal docker one we use today.
16:51:37 if the overhead is minor, the kolla-k8s should be able to do it themselves
16:51:42 could the helm deploys take that in?
16:51:56 if the dont then we have failed
16:52:00 for instance, is the assumption always going to be that you're using kolla images?
16:52:03 SamYaple would you mind taking an action to start a thread on this topic on the ml in the 1 hour timer that inc0 as set above? :)
16:52:14 SamYaple: a fork is harder to maintain then a few config overrides. jsut my 2 cents.
16:52:30 i dont think we can solve this in our hourly meeting
16:52:35 sdake: +1
16:52:36 v1k0d3n: for kolla0k8s i think the assmption is that we are always going to use kolla images
16:52:43 or we can take this spec
16:52:45 so what our goal/thought was, is that helm is first class, so that any runtime could be used with heavy configmap data, and other projects could use that (i.e. a kolla-kubernetes for instance).
16:52:47 inc0: yea
16:52:51 v1k0d3n: but not nessiarily use the docker runtime to use them
16:52:53 I'll read through comments
16:52:53 sdake: there is a spec on this with all this detailed
16:52:56 let's use spec please
16:52:59 SamYaple cool
16:53:00 *1
16:53:03 that works for me
16:53:16 ok moving on?
16:53:20 in the meantime kolla-k8s needs to keep moving
16:53:20 i mean to be fair, and inc0 could at least speak up...i brought up helm in barcelona, so it got added to the road map because the community thought this was a good approach.
16:53:22 sounds good, tlts move on
16:53:45 ok
16:53:56 v1k0d3n: I do still think configmaps in helm as a batteries included thing is a good idea.
16:54:15 not sure if that is a focus for 0.5, 0.6, etc?
16:54:26 kfox1111 mailing list discussion should be able to answer that
16:54:30 and do they get stored in kolla repo and get pulled into kolla-kubernetes,
16:54:39 or stored as a fork in kolla-kubernetes?
16:54:52 and the rest of your qs ;)
16:54:57 sdake: +1 to ml discussion
16:55:01 lets give sayantan_ some time for docs :)
16:55:03 kfox1111: i just need a very technical explanation with the helm folks to come to the same place as you on this one.
16:55:06 we only have 5 minutes left
16:55:09 kfox1111: can you breifly summarise the sate fo the service layers
16:55:24 as you have been doing loads of work with sbezverk on that :)
16:55:36 ok, let's overflow documentation to next years meeting
16:55:46 sorry sayantan_
16:55:50 sbezverk and I have been working on prototyping some service layer packages.
16:55:51 inc0 - wfm as long as its first up :)
16:55:53 no prob :)
16:56:04 so far the microservices concept seems to be holding up well.
16:56:30 helm 2.1.2 just came out, so we have more options now for microservices api that could make the service layer cconfig more smooth.
16:56:48 a possible layout here: http://paste.openstack.org/show/86qF0iC5tc5a5o0GhV8l/
16:56:59 yep microservices and services concept looks really nice
16:57:00 I'm prototyping up an example of that in the neutron service package.
16:57:38 mariadb service is ready and glance will be ready soon..
16:58:01 neutron service is pretty much ready too.
16:58:04 i started toying with cinder last night
16:58:11 should have a WIP patchset up soon
16:58:11 but I want to get the globals thing figured out,
16:58:24 and how to get mariadb/memcached/rabbit per service package too.
16:58:39 the globals thing will require a lot of changes from the microservices though.
16:58:51 so would like to get that ironed out so we can try and get those in before 0.4.0
16:59:39 okay. are we going to merge those in for 0.4.0 then? i remember we talked a bit about that yesterday but didnt come to a conclusion
16:59:51 let's get a list of left non-microservices components
17:00:00 ok guys
17:00:02 and try to fix them before Jan release date
17:00:03 srwilkers: the changes to microservices I mean.
17:00:09 we're out of time
17:00:10 time is up :)
17:00:12 kfox1111, oh right on
17:00:12 thanks everyone
17:00:17 #endmeeting kolla