16:00:01 <inc0> #startmeeting kolla
16:00:01 <SamYaple> i dont understand this conversation
16:00:05 <openstack> Meeting started Wed Dec 21 16:00:01 2016 UTC and is due to finish in 60 minutes.  The chair is inc0. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:10 <openstack> The meeting name has been set to 'kolla'
16:00:19 <inc0> #topic w00t for kolla - rollcall
16:00:21 <SamYaple> o/
16:00:22 <duonghq> o/
16:00:23 <srwilkers> woot! o/
16:00:24 <inc0> howdy
16:00:25 <Jeffrey4l> \0/
16:00:26 <sp__> hi...
16:00:32 <portdirect> hello
16:00:36 <duonghq> oops
16:00:41 <coolsvap> o/
16:00:44 <duonghq> sup zhubingbing
16:00:44 <zhubingbing> o/
16:00:46 <zhubingbing> hi
16:00:50 <kfox1111> o/
16:00:51 <zhubingbing> sup duonghq
16:01:04 <sdake> o/
16:01:27 <sayantan_> woot
16:01:38 <srwilkers> thanks sayantan_, im not the only one now
16:01:49 <sayantan_> haha
16:02:32 <duonghq> hey eglute
16:02:35 <duonghq> hey egonzalez90
16:02:37 <inc0> yeah something is broken with our community :/ I know! pbourke is not around, he always stands behind the woot
16:02:38 <duonghq> sorry eglute
16:02:42 <egonzalez90> o/
16:02:51 <inc0> anyway, moving on
16:02:53 <SamYaple> he is christmasing
16:02:54 <pbourke> woot!
16:03:00 <inc0> ha!
16:03:01 <SamYaple> there he is!
16:03:03 <zhubingbing> ;)
16:03:05 <sdake> pbourke is a rebel ;)
16:03:09 <inc0> #topic announcements
16:03:27 <inc0> We have 2 new cores in kolla-k8s! please welcome srwilkers and portdirect:)
16:03:35 <SamYaple> *clap*
16:03:42 * duonghq clap
16:03:46 <sayantan_> Congrats guys!
16:03:47 <sp__> clap
16:03:51 <sdake> welcome to the core reviewer team folks :)
16:03:54 <sp__> congrats guys
16:03:56 <Jeffrey4l> clap
16:03:57 <kfox1111> congrats!
16:03:58 <inc0> I mean we all welcomed them already, multiple times, but now the +2 powers are new
16:04:07 <portdirect> o/
16:04:09 <srwilkers> thanks :) its been great working with you all.
16:04:12 <egonzalez90> congrats!
16:04:29 <inc0> next announcement: next meeting is cancelled, have a great hannukah/holidays/christmas
16:04:39 <zhubingbing> ;)
16:04:45 <sp__> :)
16:04:50 <sdake> or brief break from reality ;)
16:04:51 <SamYaple> *clap*
16:05:02 <inc0> or sol invictus if that's your thing
16:05:46 <inc0> #topic agenda
16:05:50 <inc0> * static uid/gid - SamYaple
16:05:50 <inc0> * replace heka with fluentd - zhubingbing
16:05:50 <inc0> * we're overloading the term "pod" in a way that can make devs/ops assume they are less functional than they are. What should we do? - this is an old one that may not have been discussed in the last meeting; we can skip it if no one cares.
16:05:50 <inc0> * kolla-salt! - SamYaple
16:05:50 <inc0> * kolla-kubernetes UX for service layers - portdirect
16:05:50 <sdake> moment
16:05:56 <sdake> i have a community announcement
16:05:57 <zhubingbing> yeah
16:06:00 <inc0> ahh sorry
16:06:04 <sdake> if i may inc0
16:06:04 <inc0> go ahead sdake
16:06:15 <zhubingbing> http://paste.openstack.org/show/593033/
16:06:21 <sdake> Atlanta Sheraton where PTG is held is SOLD OUT - book hotels soon nearby
16:06:37 <sdake> soon as in as soon as possible :)
16:06:39 <zhubingbing> fluentd  progress
16:06:46 <srwilkers> thanks for the heads up on that
16:06:52 <srwilkers> need to get moving on that front on my end
16:07:03 <inc0> let's rent airbnb kolla house
16:07:15 * srwilkers thinks
16:07:19 <sdake> inc0 if your co will let you do that more power to ya :)
16:07:22 <sdake> inc0 many big corps don't allow that
16:07:32 <inc0> yeah intel doesn't too
16:07:32 <srwilkers> i would pay out of pocket for that the more i think about it
16:07:33 <sp__> inc0: also please let us know the vitual PTG plan
16:07:42 <sp__> virtual*
16:07:47 <portdirect> i'm going to apply for the travel support fund - if anyone has experience with that, it would be great to get some tips after the meeting.
16:08:01 <sbezverk> +1 for virtual access to PTG
16:08:05 <sdake> portdirect Jeffrey4l has attempted it - ask him - although i think the  deadline has passed
16:08:10 <duonghq> portdirect,  me too
16:08:17 <duonghq> sdake, we have 2nd deadline
16:08:18 <srwilkers> sdake, the first has passed. theres another round coming up
16:08:20 <egonzalez90> portdirect: yeah, deadline passed
16:08:34 <coolsvap> portdirect: there is another deadline in jan 1st week
16:08:37 <srwilkers> actually have a reminder on my phone a week out from the next deadline to bug portdirect about it
16:08:38 <portdirect> sdake: there is another round but I'm not holding out hope - virtual attendance would be awesome if possible
16:08:39 <coolsvap> i will ping you the details
16:08:51 <portdirect> thanks coolsvap :)
16:09:07 <inc0> ok moving on, agenda:)
16:09:22 <inc0> portdirect do you want to add your configmap discussion to it
16:09:22 <inc0> ?
16:09:34 <sp__> portdirect:   travel support registration is open till Jan 1st week
16:09:43 <portdirect> I'll bolt that onto ux, thanks for reminding me :)
16:09:49 <inc0> kk
16:09:51 <sayantan_> Can we also briefly discuss documentation?
16:09:57 <inc0> ok, so let's go  with meeting:)
16:10:04 <inc0> sayantan_, if time allows, yeah
16:10:08 <sayantan_> cool
16:10:14 <inc0> #topic static uid/gid - SamYaple
16:10:17 <inc0> go ahead Sam
16:10:26 <SamYaple> i dont know if i can link things... but
16:10:31 <SamYaple> #link https://review.openstack.org/#/c/412231/
16:10:39 <duonghq> SamYaple, you can
16:10:48 <SamYaple> then ive done something wrong.
16:10:51 <SamYaple> either way, proceeding
16:11:12 <SamYaple> thats the static uid/gid patch. it is passing the gate (with the exception of ubuntu-binary for reasons im aware of)
16:11:25 <SamYaple> *however* the gate is not testing kvm
16:11:39 <inc0> kvm is no bueno?
16:11:44 <SamYaple> kvm should work
16:11:49 <SamYaple> its not tested though.
16:11:50 <sdake> SamYaple how are only 3 containers affected?
16:12:01 <SamYaple> sdake:?
16:12:08 <sdake> SamYaple in the commit message
16:12:16 <sdake> SamYaple i'd think it affects all containers
16:12:30 <SamYaple> sdake: unifying users as in, in ubuntu its _chrony and in centos its chrony
16:12:37 <SamYaple> the new user is now just chrony for both
16:12:44 <sdake> oh i see
16:12:47 <SamYaple> ill reword
16:12:51 <sdake> cool
16:13:03 <sdake> i'll review in the new year when its out of [wip] :)
16:13:08 <SamYaple> I have built all centos-* and ubuntu-* containers to check all the users and groups
16:13:17 <SamYaple> next patch will remove [wip]
16:13:27 <SamYaple> im not changing anything, just removing wip
16:13:30 <inc0> cool, looks great SamYaple
16:13:32 <inc0> question
16:13:46 <inc0> upgrades - we'll need to chown stuff that already exists
16:13:51 <sdake> Jeffrey4l are your concerns met?
16:13:55 <portdirect> yeah thanks for all your work on this - and reaching out to other groups to find the best way forward
16:13:58 <Jeffrey4l> sdake, yep. listening
16:14:01 <sdake> inc0 that is handled by the json api
16:14:09 <SamYaple> inc0: upgrades is a separate concern to this patch. we have the infrastructure in place to make the upgrade possible
16:14:12 <Jeffrey4l> inc0, we need to use json to chown.
16:14:15 <inc0> yeah, but are gids/uids correct in config.json?
16:14:16 <SamYaple> but its not implemented here obviously
16:14:24 <SamYaple> inc0: we dont use uid/gid there
16:14:26 <SamYaple> we use names
16:14:32 <sdake> inc0 that is a depends-on patch that needs to hit kolla-ansible repo
16:14:32 <SamYaple> and those are based on the container at runtime
16:14:33 <inc0> ah yeah, true
16:14:35 <pbourke> it seems to me there are some chowns in the extend_start scripts that can go now we have the json
16:14:37 <pbourke> am I wrong on that?
16:14:40 <inc0> so let's confirm that names are correct
16:14:46 <SamYaple> pbourke: nope. those use names too
16:14:47 <pbourke> i.e. a lot of bootstrap steps can go away
16:14:59 <SamYaple> im not sure about that pbourke
16:15:10 <SamYaple> because docker volume initially is incorrectly permissioned
16:15:15 <Jeffrey4l> pbourke, inc0 we have permission handling in set_configs; part of the dockers were already fixed in the newton cycle.
16:15:29 <Jeffrey4l> others should be done in this cycle.
16:15:39 <pbourke> Jeffrey4l: roger, just I think there's some duplication
16:15:50 <SamYaple> anyway, biggest change is likely around nova-libvirt and that should be tested thoroughly
16:15:52 <pbourke> between set_configs and extend_start in some cases
16:15:59 <SamYaple> ive got ubuntu-binary and ubuntu-source on baremetal
16:16:09 <SamYaple> but i dont have baremetal centos to test that it works
16:16:18 <SamYaple> qemu is obviously working since gate is passing
16:16:52 <Jeffrey4l> pbourke, yep. maybe we can remove the extend_start part and move all the permission related chowns into set_config, which is better, imo.
16:17:07 <pbourke> Jeffrey4l: sounds good
16:17:19 <SamYaple> Jeffrey4l: i like that too
16:17:25 <inc0> ok,can we move on?
16:17:33 <SamYaple> if no one has more question, yea
16:17:41 <Jeffrey4l> btw, we are re-implementing an ansible module in set_configs :(
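For context on the exchange above: the chown/permission handling lives in the per-container config.json consumed by kolla's set_configs machinery, and it works on user/group names that are resolved inside the container at runtime, which is why the static uid/gid change stays transparent to it. A minimal Python sketch of the idea, assuming a `permissions` list with `path`, `owner` and optional `recurse` keys (check set_configs.py for the authoritative schema):

```python
import grp
import json
import os
import pwd


def apply_permissions(config_json_path):
    """Chown the paths listed in a config.json-style 'permissions' section.

    Names are resolved to uids/gids inside the running container, so the
    same file works whether the image uses static or distro-assigned ids.
    """
    with open(config_json_path) as f:
        config = json.load(f)

    for entry in config.get('permissions', []):
        user, _, group = entry['owner'].partition(':')
        uid = pwd.getpwnam(user).pw_uid
        gid = grp.getgrnam(group or user).gr_gid

        paths = [entry['path']]
        if entry.get('recurse'):
            for root, dirs, files in os.walk(entry['path']):
                paths.extend(os.path.join(root, name) for name in dirs + files)

        for path in paths:
            os.chown(path, uid, gid)
```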
16:17:47 <inc0> #topic replace heka with fluentd - zhubingbing
16:17:52 <zhubingbing> hi all
16:17:56 <zhubingbing> today I will talk about the progress of fluentd
16:18:10 <zhubingbing> Link http://paste.openstack.org/show/593033/
16:18:47 <zhubingbing> I will finish getting the openstack services and the apache-type services working this week, please help review ;)
16:18:57 <inc0> thanks!
16:19:05 <zhubingbing> First, fluentd collects the same fields as heka did
16:19:50 <pbourke> nice work zhubingbing
16:20:30 <zhubingbing> thanks
16:20:49 <pbourke> is this tracked in a blueprint?
16:21:19 <zhubingbing> yes
16:21:36 <zhubingbing> and I see the heka-collected logs have a bit of a problem; I will optimize that after the replacement
16:21:56 <sdake> sweet zhubingbing :)
16:22:42 <zhubingbing> I will write the task status in the bp
16:22:52 <inc0> kk moving on
16:22:53 <inc0> ?
16:23:23 <inc0> I guess that means yes:)
16:23:26 <inc0> #topic overloading the term "pod"
16:23:31 <inc0> portdirect you have floor
16:23:58 <portdirect> kfox1111?
16:24:05 <portdirect> but I'll have a go if you want
16:24:05 <kfox1111> ok. so...
16:24:14 <portdirect> ^^
16:24:22 <kfox1111> in some places in our api we're making the user type in pod.
16:24:37 <kfox1111> but pod has a specific definition in k8s that is different from the way we're using it.
16:25:11 <kfox1111> we're using it to mean, any k8s object that ends up creating 1 or more pods.
16:25:24 <kfox1111> in k8s its just a single instance that is transient, and if it dies, goes away.
16:25:43 <portdirect> so the 'official' term in k8s for what we mean (daemonset, deployment etc) is pod-controller i believe - but thats a pretty horrible term
16:25:54 <kfox1111> so it may lead people to think our things are less functional than they are.
16:26:00 <kfox1111> pod-controller isn't quite right either.
16:26:11 <kfox1111> the pod controller is the thing that takes in the thing we give it, and launches the pods.
16:26:52 <kfox1111> so, I don't think there is a generic term for "daemonset, deployment, replication controller, etc."
16:26:58 <kfox1111> but we either need one,
16:27:04 <sdake> how about k8s-object?
16:27:04 <kfox1111> or just refer to the type specifically.
16:27:07 <sbezverk> no matter what you start in kube, it gets translated into running "pods" (along with a bunch of other things)
16:27:08 <portdirect> pod controller is the generic term that they used to use
16:27:19 <kfox1111> so instead of say rabbit-pod, we call it rabbit-statefulset say.
16:27:19 <sdake> naming is hard :)
16:27:21 <portdirect> but it kinda died about 1.2
16:27:37 <kfox1111> portdirect: a controller is an extension to k8s that runs k8s.
16:27:43 <kfox1111> portdirect: we're not doing that. we're consuming that.
16:27:52 <sbezverk> I think using pod as a name makes sense to me..
16:28:14 <portdirect> ok - I'm just reporting how it used to be (I hated it at the time)
16:28:25 <kfox1111> sbezverk: but a daemonset is not a pod. but we're calling it so. :/
16:28:31 <kfox1111> portdirect: ok.
16:28:55 <srwilkers> if we need a name for it, im for the approach you mentioned kfox1111 -- rabbit-statefulset in that example
16:29:11 <sbezverk> kfox1111: at the end of the day a daemonset is pods too
16:29:28 <portdirect> yeah - lets actually call it what it is be it a daemonset or statefulset or whatever
16:29:29 <sbezverk> when you do kubectl get pods
16:29:39 <sbezverk> it returns daemonset pods
16:29:41 <v1k0d3n> i'm totally with portdirect on that.
16:29:41 <srwilkers> really at the base of it all, they're all just API objects at the end of the day
16:29:45 <v1k0d3n> leads to so much confusion otherwise.
16:31:24 <portdirect> and they also determine the lifecycle characteristics/actions that can be performed (eg rolling updates etc)
16:31:35 <kfox1111> I guess my vote's for the more descriptive postfix too.
16:31:45 <sbezverk> I do not get what kind of confusion you have with rabbitmq-pod
16:31:47 <kfox1111> portdirect: yeah.
16:32:10 <srwilkers> sbezverk, i think the confusion being mentioned here could be for individuals coming from outside of kolla-k8s
16:32:10 <kfox1111> sbezverk: its confusion to new people, not those used to the way kolla-kubernetes has been using the term.
16:32:20 <srwilkers> who want to contribute, and it may not be clear to those who have seen the term used in a different way
16:32:30 <sdake> srwilkers ++
16:32:47 <portdirect> yeah, if you tell a k8s person that we deploy pods they will think we are mad
16:32:53 <sdake> hopefully its just a git mv aaway :)
16:32:56 <kfox1111> yeah.
16:33:15 * portdirect is mad, but tried to hide it
16:33:20 <srwilkers> not fooling anyone
16:33:23 <srwilkers> im on to you
16:33:27 <sdake> how about a mailing list discussion to gather ideas?
16:33:31 <sdake> I can take an action on that
16:33:36 <sdake> now that we understand the problem
16:33:41 <portdirect> sdake:  that would be great
16:33:54 <sdake> i can't add #action, inc0 has to do it ;)
16:33:56 <kfox1111> sdake: works for me.
16:34:04 <inc0> #chair sdake
16:34:05 <openstack> Current chairs: inc0 sdake
16:34:12 <inc0> sdake, now you can
16:34:13 <v1k0d3n> i am having issues with mailing list. i will correct them i guess.
16:34:15 <sbezverk> adding all these to names seems like a waste of time typing, and then there will be spelling errors
16:34:23 <srwilkers> v1k0d3n, i can help you with that
16:34:24 <wirehead_> Yeah, at least some of our target audience is kube-centric, not stack-centric.
16:34:25 <sdake> #action sdake to start ML discussion around pod naming conventions
16:34:37 <sbezverk> I bet there are going to be a bunch of issues with daemonsets vs daemonset etc
16:34:40 <srwilkers> hey wirehead_ :)
16:34:44 <v1k0d3n> thanks steve. been waiting forever for emails and nothing.
16:34:49 <sdake> sbezverk git mv away :)
16:34:59 <wirehead_> Actual picture of sdake right now: http://www.newvideo.com/wp-content/uploads/boxart/NNVG1644.jpg
16:35:05 <sdake> v1k0d3n need to understand the problem before it can be fixed :)
16:35:12 <inc0> v1k0d3n, if you need to wait more than 1hr for new email - something is wrong
16:35:43 <v1k0d3n> sdake: i can take this outside of the meeting.
16:35:46 <sdake> v1k0d3n ya - anyone is free to start up a discussion on ml
16:35:54 <sdake> v1k0d3n wfm :)
16:36:10 <inc0> can we move on?
16:36:25 <portdirect> yup?
16:36:27 <sdake> inc0 note i removed the dead horse beating from the agenda and added docs instead
16:36:36 <inc0> thanks
16:36:39 <sdake> inc0 i think it missed the cleanup :)
16:36:42 <inc0> #topic kolla-salt
16:36:49 <inc0> SamYaple, go ahead
16:36:53 <SamYaple> woot!
16:36:59 <SamYaple> #link https://etherpad.openstack.org/p/kolla-salt
16:37:12 <inc0> I've been hearing of it for at least a year now
16:37:12 <SamYaple> I am proposing a new kolla-* project: deploying kolla containers with saltstack
16:37:17 <SamYaple> :P
16:37:24 <sdake> SamYaple the correct term is deliverable - not project
16:37:31 <SamYaple> details
16:37:34 <wirehead_> Let’s not get salty over it, tho, inc0.
16:37:54 <SamYaple> salt, years ago, didnt quite have the feature set that it needed to deploy complex things (orchestrate them)
16:37:55 <inc0> but we need to take it with a grain of salt tho
16:38:08 <SamYaple> thats why i used ansible, but salt is pretty awesome
16:38:21 <SamYaple> please take a look at that etherpad. and ideas, comments, etc
16:38:29 <srwilkers> wirehead_, :)
16:38:33 <SamYaple> I also have a working repo up
16:38:36 <SamYaple> #link https://github.com/SamYaple/kolla-salt
16:38:43 <inc0> SamYaple, so here is how it's going to be done - you create kolla-salt project as normally is created
16:38:59 <inc0> there is guide online, I'm sure you know it
16:38:59 <SamYaple> i deploy a simple service (memcache) and a complex one (rabbitmq) so you can see how im laying it out currently
16:39:09 <SamYaple> yea inc0. ill do the paperwork
16:39:11 <inc0> and add all kolla cores to kolla-salt core team
16:39:16 <inc0> + yourself ofc
16:39:45 <SamYaple> for those of you that care about this, youll find that the repo above implements reconfigure (restart service if service config changes)
16:39:55 <SamYaple> AS WELL AS concurrency locking
16:40:09 <SamYaple> it will only restart things one at a time (or 2, 3, etc if configured)
16:40:28 <SamYaple> so thats going to be heavily used to do task locking
16:40:37 <SamYaple> check for service recovery before unlocking
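In salt this behaviour typically maps onto watch requisites (restart a service when its config file changes) plus batched execution; the generic pattern SamYaple describes - restart only on config change, limit how many restarts run at once, and confirm recovery before releasing the slot - looks roughly like the hypothetical Python sketch below. The `restart` and `is_healthy` callables are placeholders, not kolla-salt APIs:

```python
import hashlib
import threading

# Allow only one restart at a time (raise the value to permit 2, 3, ...).
restart_slots = threading.BoundedSemaphore(value=1)


def file_digest(path):
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()


def reconfigure(service, new_config, current_config, restart, is_healthy):
    """Restart `service` only if its config changed, serializing restarts."""
    if file_digest(new_config) == file_digest(current_config):
        return False  # config unchanged, nothing to do

    with restart_slots:  # concurrency lock around the restart
        restart(service)
        if not is_healthy(service):  # check recovery before unlocking
            raise RuntimeError('%s failed to recover after restart' % service)
    return True
```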
16:41:00 <inc0> sounds fun
16:41:09 <SamYaple> lots of interesting features to use in salt
16:41:09 <inc0> and it's python, even more python than ansible
16:41:15 <inc0> and it's not gpl v3 \o/
16:41:17 <SamYaple> very much so
16:41:29 <SamYaple> basically, we can template the _tasks_ files
16:41:39 <SamYaple> heck, we can write the tasks in pure python
16:41:41 <inc0> sounds like fun
16:41:54 <coolsvap> kolla is already sweet thanks SamYaple for adding some salt flavor :)
16:41:55 <inc0> have at it man:)
16:41:58 <SamYaple> anyway. please look and comment. hit me up for more details
16:42:08 <SamYaple> coolsvap: :)
16:42:14 <SamYaple> thats it for me
16:42:25 <inc0> wirehead_, pundemic is spreading;)
16:42:41 <sdake> pandemonium more like it :)
16:42:42 <wirehead_> You mean, kolla is being a-salted with puns?
16:42:43 <inc0> k, next topic
16:42:47 <inc0> #topic kolla-kubernetes UX for service layers - portdirect
16:42:53 <wirehead_> oooh.
16:43:01 <portdirect> So for the user experience of kolla-k8s I'd like to discuss two things today: the service layer (which kfox1111 and sbezverk have been making great progress on) and configuration.
16:43:13 <portdirect> Up until now we have used kolla-ansible for config and that has been great to get us going, but I'm concerned that its going to start holding us back
16:43:26 <portdirect> It's the area where we have the most flexibility at the moment, but once we have defined a path it will be hard to change once we have some users
16:43:44 <portdirect> I think we need to have a think about how we do this. personally I'd like us to use helm's values to perform all config, so we can ship with an overridable set of insecure defaults for testing and development,
16:43:58 <portdirect> making it as easy as possible for the user. But it would be great to get other people's opinions on this, and I think the meeting we have with helm will potentially be very useful for getting guidance as to what is possible.
16:44:15 <portdirect> I've been working on some ceph stuff for upstream, and have been exploring using a helm plugin for config there - meaning we can disconnect the configuration totally from the deployment. I'm not 100% sure its the right approach but it seems promising.
16:44:30 <portdirect> thoughts?
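The "overridable set of insecure defaults" portdirect is proposing is essentially the values-merge pattern helm already uses: the chart ships defaults and the operator overrides only what they need at install time. A hypothetical Python sketch of that merge behaviour (helm does this itself; the keys below are made up for illustration):

```python
def deep_merge(defaults, overrides):
    """Return `defaults` with `overrides` applied, merging nested dicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


# Shipped development defaults (deliberately insecure, easy to get going).
defaults = {'database': {'address': '127.0.0.1', 'password': 'insecure'}}
# Operator-supplied overrides for a real deployment.
site_values = {'database': {'password': 's3cr3t', 'address': '10.0.0.5'}}

print(deep_merge(defaults, site_values))
```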
16:44:38 <kfox1111> I have the same concern with helm as I did with docker env vars. openstack's so configurable for a reason. every config option needed to be set by someone at some point.
16:44:45 <wirehead_> Indeed.
16:44:56 <sbezverk> portdirect: +1 on the need for config management, we cannot rely on kolla-ansible to get config
16:45:04 <srwilkers> i agree as well
16:45:06 <kfox1111> templating that all out could make it very difficult to maintain support for all options.
16:45:25 <v1k0d3n> +1 portdirect
16:45:27 <kfox1111> so, I think there must be a way not to use helm config's.
16:45:35 <portdirect> yeah thats where the plugin stuff becomes interesting
16:45:38 <wirehead_> Also, options-that-depend-on-other-options in order to not be instant footguns.
16:45:38 <kfox1111> that being said, i wouldn't be against a batteries included option.
16:46:18 <portdirect> so you can load in configmaps totally separately from the deployment - and so load in any config you want, or config a system has created for you
16:46:48 <SamYaple> ive done a lot of work with kolla configs, and i think reusing other configs is a mistake. id be ok with a configmanagement-type repo, but i worry that might be too much overhead
16:46:57 <inc0> portdirect so I started this:
16:46:59 <inc0> #link https://review.openstack.org/#/c/399147/
16:47:09 <SamYaple> kolla-k8s maintaining their own configs would be best i think
16:47:21 <portdirect> +1 SamYaple
16:47:25 <inc0> and we should revive it I think
16:47:29 <inc0> SamYaple, or have central configs
16:47:32 <SamYaple> inc0: there is lots of comments in there
16:47:35 <kfox1111> I'm personally for trying to rewrite the templates a bit so they are more reusable across deliverables.
16:47:36 <SamYaple> yea im against that
16:47:45 <SamYaple> inc0: look at the comments, they are recent
16:47:45 <portdirect> review.openstack is down for me?
16:47:49 <wirehead_> Do people think that it’s more likely that you’d have one or two of the services heavily customized and the rest stock?
16:47:55 <portdirect> and back up
16:48:05 <kfox1111> remove logic from the templates and do it with var passing.
16:48:13 <portdirect> wirehead_:  i think so
16:48:19 <sdake> how about in the short term kolla-kubernetes maintains its own config options, with a goal of unifying them in the future (post ocata)
16:48:20 <SamYaple> wirehead_: i think keystone is the *most* customized. followed by neutron and then nova. in my experince
16:48:21 <portdirect> wirehead_:  esp neutron...
16:48:21 <kfox1111> wirehead_: I heavily customize mine usually. :/
16:48:31 <SamYaple> sdake: that works for me
16:48:37 <wirehead_> Alternatively, for those cases, would you potentially want to have multiple discrete installs, linked, instead of a single install?
16:48:45 <SamYaple> then we can think of a central config repo and judge if its really needed
16:48:51 <sdake> SamYaple right
16:48:52 <portdirect> wirehead_: thats my ultimate aim
16:49:31 <kfox1111> sdake: I'm a bit worried about the overhead of maintaining a fork of config, but if people step up to do so thats ok.
16:49:32 <wirehead_> Because the real problem is “Here’s the master helm chart for batteries-included, that I’ve changed to include my sick-and-twisted helm charts for Keystone and Neutron and Nova, and thus it’s going to drift away from batteries-included without active effort and break everything"
16:50:01 <SamYaple> kfox1111: well youre just pushing that overhead onto _everyone_ else by making it central to kolla repo
16:50:08 <SamYaple> and making changes more difficult at that
16:50:28 <kfox1111> SamYaple: right now, we're just using kolla-ansible. so 0 overhead.
16:50:39 <SamYaple> disagree. look at the kolla-ansible configs
16:50:42 <kfox1111> I do think we need to address that at some point. just not sure when.
16:50:47 <SamYaple> that overhead has been pushed to kolla-ansible
16:50:57 <kfox1111> SamYaple: there have been a few patches, but very minor.
16:50:58 <v1k0d3n> question...
16:51:06 <v1k0d3n> just to put it out there.
16:51:18 <v1k0d3n> when CRI comes (which it's out there in 1.5)
16:51:20 <SamYaple> kfox1111: still, the overhead has been pushed. someone has to do the work, you are saying "not kolla-k8s"
16:51:32 <v1k0d3n> and runtimes look different than just the normal docker one we use today.
16:51:37 <SamYaple> if the overhead is minor, then kolla-k8s should be able to do it themselves
16:51:42 <v1k0d3n> could the helm deploys take that in?
16:51:56 <portdirect> if they dont then we have failed
16:52:00 <v1k0d3n> for instance, is the assumption always going to be that you're using kolla images?
16:52:03 <sdake> SamYaple would you mind taking an action to start a thread on this topic on the ml, in the 1 hour timer that inc0 has set above? :)
16:52:14 <kfox1111> SamYaple: a fork is harder to maintain than a few config overrides. just my 2 cents.
16:52:30 <sdake> i dont think we can solve this in our hourly meeting
16:52:35 <kfox1111> sdake: +1
16:52:36 <portdirect> v1k0d3n: for kolla-k8s i think the assumption is that we are always going to use kolla images
16:52:43 <inc0> or we can take this spec
16:52:45 <v1k0d3n> so what our goal/thought was, is that helm is first class, so that any runtime could be used with heavy configmap data, and other projects could use that (i.e. a kolla-kubernetes for instance).
16:52:47 <SamYaple> inc0: yea
16:52:51 <portdirect> v1k0d3n: but not necessarily use the docker runtime to run them
16:52:53 <inc0> I'll read through comments
16:52:53 <SamYaple> sdake: there is a spec on this with all this detailed
16:52:56 <inc0> let's use spec please
16:52:59 <sdake> SamYaple cool
16:53:00 <SamYaple> *1
16:53:03 <sdake> that works for me
16:53:16 <inc0> ok moving on?
16:53:20 <sdake> in the meantime kolla-k8s needs to keep moving
16:53:20 <v1k0d3n> i mean to be fair, and inc0 could at least speak up...i brought up helm in barcelona, so it got added to the road map because the community thought this was a good approach.
16:53:22 <portdirect> sounds good, lets move on
16:53:45 <v1k0d3n> ok
16:53:56 <kfox1111> v1k0d3n: I do still think configmaps in helm as a batteries included thing is a good idea.
16:54:15 <kfox1111> not sure if that is a focus for 0.5, 0.6, etc?
16:54:26 <sdake> kfox1111 mailing list discussion should be able to answer that
16:54:30 <kfox1111> and do they get stored in kolla repo and get pulled into kolla-kubernetes,
16:54:39 <kfox1111> or stored as a fork in kolla-kubernetes?
16:54:52 <sdake> and the rest of your qs ;)
16:54:57 <kfox1111> sdake: +1 to ml discussion
16:55:01 <sdake> lets give sayantan_ some time for docs :)
16:55:03 <v1k0d3n> kfox1111: i just need a very technical explanation with the helm folks to come to the same place as you on this one.
16:55:06 <sdake> we only have 5 minutes left
16:55:09 <portdirect> kfox1111: can you briefly summarise the state of the service layers
16:55:24 <portdirect> as you have been doing loads of work with sbezverk  on that :)
16:55:36 <inc0> ok, let's overflow documentation to next year's meeting
16:55:46 <inc0> sorry sayantan_
16:55:50 <kfox1111> sbezverk and I have been working on prototyping some service layer packages.
16:55:51 <sdake> inc0 - wfm as long as its first up :)
16:55:53 <sayantan_> no prob :)
16:56:04 <kfox1111> so far the microservices concept seems to be holding up well.
16:56:30 <kfox1111> helm 2.1.2 just came out, so we have more options now for the microservices api that could make the service layer config smoother.
16:56:48 <kfox1111> a possible layout here: http://paste.openstack.org/show/86qF0iC5tc5a5o0GhV8l/
16:56:59 <sbezverk> yep microservices and services concept looks really nice
16:57:00 <kfox1111> I'm prototyping up an example of that in the neutron service package.
16:57:38 <sbezverk> mariadb service is ready and glance will be ready soon..
16:58:01 <kfox1111> neutron service is pretty much ready too.
16:58:04 <srwilkers> i started toying with cinder last night
16:58:11 <srwilkers> should have a WIP patchset up soon
16:58:11 <kfox1111> but I want to get the globals thing figured out,
16:58:24 <kfox1111> and how to get mariadb/memcached/rabbit per service package too.
16:58:39 <kfox1111> the globals thing will require a lot of changes from the microservices though.
16:58:51 <kfox1111> so would like to get that ironed out so we can try and get those in before 0.4.0
16:59:39 <srwilkers> okay. are we going to merge those in for 0.4.0 then? i remember we talked a bit about that yesterday but didnt come to a conclusion
16:59:51 <sbezverk> let's get a list of the remaining non-microservices components
17:00:00 <inc0> ok guys
17:00:02 <sbezverk> and try to fix them before Jan release date
17:00:03 <kfox1111> srwilkers: the changes to microservices I mean.
17:00:09 <inc0> we're out of time
17:00:10 <sdake> time is up :)
17:00:12 <srwilkers> kfox1111, oh right on
17:00:12 <inc0> thanks everyone
17:00:17 <inc0> #endmeeting kolla