15:01:53 <v1k0d3n> #startmeeting openstack-helm
15:01:54 <openstack> Meeting started Tue May 23 15:01:53 2017 UTC and is due to finish in 60 minutes.  The chair is v1k0d3n. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:00 <openstack> The meeting name has been set to 'openstack_helm'
15:02:06 <v1k0d3n> welcome team
15:02:12 <v1k0d3n> https://etherpad.openstack.org/p/openstack-helm-meeting-2017-05-23
15:02:30 <jaugustine> o/
15:02:45 <v1k0d3n> for those of you who have items you wish to add to the agenda, please add them.
15:02:59 <portdirect> ಠ‿ಠ
15:03:03 <alraddarla> 0/
15:03:05 <alraddarla> o/4
15:03:05 <SamYaple> o/
15:03:08 <alraddarla> i give up
15:03:27 <korzen> hello
15:03:30 <lrensing> o/
15:03:31 <dulek> o/
15:03:48 <v1k0d3n> hey SamYaple good to have you
15:03:57 <SamYaple> im my own person
15:04:01 <alanmeadows> \o/ STEVE HOLT.
15:07:48 <v1k0d3n> ok, sorry...just getting started.
15:08:01 <v1k0d3n> mostly some checks on where we're at with bugs and long-standing reviews.
15:08:11 <dulek> 1692834 - I've checked this on Helm 2.3, it's fine. But I guess it's good to have 2.4 issues listed, right?
15:08:12 <v1k0d3n> unless people have other things that they want to add
15:09:01 <v1k0d3n> dulek: https://bugs.launchpad.net/openstack-helm/+bug/1690863 ?
15:09:02 <openstack> Launchpad bug 1690863 in openstack-helm "Helm 2.4.0 Issues with Openstack-Helm" [High,New]
15:09:21 <v1k0d3n> sure, it's general. maybe we should get more specific in that launchpad bug?
15:09:24 <dulek> v1k0d3n: Yeah, but this isn't listing the issues at all.
15:09:48 <dulek> I'm fine with closing my bug and expanding description of 1690863.
15:10:20 <v1k0d3n> this was really a placeholder. i thought that some folks were working against this issue. i'm completely fine with more details, and however the team wants to address doing that is fine with me.
15:10:44 <v1k0d3n> do we want to close 1690863 in favor of more detailed helm 2.4 related bug issues?
15:10:52 <srwilkers> yes
15:10:54 <v1k0d3n> (assuming that's a bit better).
15:11:31 <v1k0d3n> cool. done. so srwilkers are you aware of other 2.4 issues that we could open launchpad bugs against?
15:12:16 <v1k0d3n> 1690863 was really just a placeholder to let users know that 2.3 is still recommended.
15:12:23 <srwilkers> not that im aware of, but if there are any, id rather see a verbose description of bugs
15:12:29 <portdirect> the only one I'm aware of is the hard failure of gotpl rendering errors
15:12:34 <srwilkers> launchpad bugs aren't really appropriate for pinning to a helm version
15:12:40 <srwilkers> should just be noted in the docs
15:12:45 <anticw> i'm using 2.4 and wondering what the issue is
15:12:57 <v1k0d3n> sure, as long as we can recommend that users stick with 2.3 for now until we're able to resolve and work through the issues.
15:13:01 <portdirect> lack of ceph secrets in helm-toolkit
15:13:07 <portdirect> anticw: ^^
15:13:13 <portdirect> renders fine if they are present
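A minimal sketch of the 2.4 failure mode portdirect describes, assuming a helm-toolkit-style secret template (the resource name and values path below are illustrative, not the actual openstack-helm code):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pvc-ceph-client-key
type: Opaque
data:
  # If the referenced value is missing (e.g. no ceph secrets defined in
  # values), this line raises a gotpl rendering error; Helm 2.4 treats
  # that as a hard failure for the whole release, whereas 2.3 was more
  # forgiving. It renders fine on both once the value is present.
  key: {{ .Values.ceph.client_key | b64enc | quote }}
```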
15:13:27 <v1k0d3n> right. +1 related to ceph issues.
15:13:29 <srwilkers> 2.3 is noted in the all-in-one docs and the multinode, so we should be good
15:13:40 <v1k0d3n> (in etherpad).
15:14:37 <v1k0d3n> ok, i'll close out the general 1690863. dulek can you mention using 2.3 in your issue until resolved?
15:15:05 <portdirect> v1k0d3n: i dont think we should close out 1690863?
15:15:42 <v1k0d3n> ok
15:15:48 <dulek> v1k0d3n: I've commented.
15:15:59 <v1k0d3n> thanks dulek. next then.
15:16:35 <v1k0d3n> are there really any other ceph issues beyond what's already been described? anticw i thought you may have run into issues, or was that only networking?
15:16:50 <v1k0d3n> (my brain is all over the place atm)
15:17:06 <alanmeadows> portdirect: really its the entire conversion of the ceph chart to the modern way -- from ceph.conf templates, secrets, to dependency checking
15:17:36 <lrensing> +1 alanmeadows
15:17:52 <v1k0d3n> good point alanmeadows. +1
15:18:14 <portdirect> alanmeadows: agreed (I think) though did you mean to direct that at me?
15:18:39 <portdirect> I'm 100% on the same page that it needs to be redone, but thought someone else was on it
15:19:13 <alanmeadows> i volunteered anticw, but just mentioning for awareness to all, ceph issues are more than just the helm 2.4 secrets
15:19:50 <portdirect> :) yeah - in addition to the points you raise we should also have a set of sane defaults for diff size clusters
15:20:03 <anticw> portdirect: oh, i have some stuffs there
15:20:37 <portdirect> nice :)
15:21:32 <anticw> longer-term we should split multinode into multinode.rst and largescale.rst
15:21:40 <portdirect> +1
15:21:52 <v1k0d3n> ok, moving on. networking issues. i think anticw has been working some of these as well. SK has some interest as well.
15:21:53 <anticw> but we don't know what the latter should really look like except in an abstract sense
15:22:41 <alanmeadows> To finish off your question, there is probably a long list of ceph issues, but I think many of them can simply be summed up with the fact that the first priority is modernizing the chart and bringing it in line with the rest of openstack-helm -- you can't override anything, you must set the proper environment variables before running sigil (huh), and you need to have magic stuff in helm-toolkit/secrets
15:23:24 <portdirect> (plug) I think we should explore using the way i did secret gen for the upstream chart?
15:24:03 <portdirect> eg: https://github.com/ceph/ceph-docker/blob/master/examples/helm/ceph/templates/jobs/configmap.yaml
15:24:18 <portdirect> which removes the req for having anything installed on the host
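A rough sketch of the in-cluster generation approach linked above: a one-shot Job creates the key and stores it as a Kubernetes secret, so nothing is required on the host. The image, service account, and script below are illustrative assumptions, not the actual ceph-docker chart contents:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ceph-admin-keyring-generator
spec:
  template:
    spec:
      # Assumes a service account bound to a role that permits creating
      # secrets in this namespace.
      serviceAccountName: ceph-secret-generator
      restartPolicy: OnFailure
      containers:
        - name: generator
          # Illustrative image; it needs ceph-authtool and kubectl on board.
          image: docker.io/ceph/daemon:latest
          command:
            - /bin/bash
            - -c
            - |
              # Generate the admin keyring entirely inside the cluster...
              ceph-authtool --create-keyring /tmp/ceph.client.admin.keyring \
                --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *'
              # ...then persist it through the API for other charts to mount.
              kubectl create secret generic ceph-client-admin-keyring \
                --from-file=/tmp/ceph.client.admin.keyring
```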
15:24:26 <anticw> v1k0d3n: the 'networking issue' i think is the choice of IPs again in ceph.conf magics
15:25:34 <alanmeadows> worth a look, sure -- I'm frankly happy with starting simple, because this is a pain point for many -- static 'secrets' defined in values.yaml might be the best place to start that advanced users can override
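As a concrete example of the simple starting point suggested here, the keys could sit in values.yaml as plain, overridable entries (the key names and placeholder contents are assumptions):

```yaml
# values.yaml -- static secrets as an easy, overridable default.
ceph:
  secrets:
    # Placeholder keyrings; advanced users override at deploy time, e.g.
    #   helm install ... --set ceph.secrets.admin_keyring="$REAL_KEYRING"
    admin_keyring: "AQ...placeholder..."
    mon_keyring: "AQ...placeholder..."
```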
15:26:50 <v1k0d3n> +1. most of the issues that i hear about are related to preparing the environment for ceph (in BM installations).
15:27:06 <v1k0d3n> easing this would be a huge win.
15:28:14 <portdirect> an alternative to consider may be passing this off to a plugin - which is what i do for my personal lab: https://github.com/portdirect/marina#quckstart
15:28:24 <v1k0d3n> @alanmeadows we have all of the items we want on the roadmap for ceph, right?
15:28:49 <v1k0d3n> perhaps we can get these items in as individual blueprints to work against.
15:29:33 <alanmeadows> This is the full issue list that was created: https://docs.google.com/document/d/1uY8U1DZgGa-IT40fYmqNlpLqUnWGC0_CbwH_n2n21Aw/edit?usp=sharing
15:29:39 <alanmeadows> It needs to likely work its way into blueprints
15:30:06 <v1k0d3n> ok. i will get with the right folks, and work on getting these in.
15:30:10 <v1k0d3n> we can move on.
15:30:30 <portdirect> is jayahn about?
15:31:07 * portdirect must be too late over that side of the big blue ball...
15:32:17 <portdirect> korzen did some great work putting together an etherpad: https://etherpad.openstack.org/p/openstack-helm-meeting-neutron-multibackend
15:32:58 <portdirect> I think that jayahn was wanting to do the linuxbridge backend
15:33:40 <portdirect> and in my own time I'd be very keen to do OVN, as I've worked on this quite a bit (and my old openstack-on-k8s was built around it)
15:33:58 <SamYaple> always pushing an agenda this one
15:34:03 <v1k0d3n> for each of these items that get worked, can we make sure there are blueprints for them?
15:34:22 <v1k0d3n> that would help the community keep track of what's outstanding and who's working it.
15:34:26 <portdirect> yeah - what we really need to determine is the path of least evil..
15:34:36 <portdirect> lines 30:32
15:34:45 <alanmeadows> I saw him sipping coffee from an OVN mug.
15:34:51 <anticw> i want coffee
15:34:59 <korzen> I have published some draft: https://review.openstack.org/#/c/466293/
15:35:09 <portdirect> we discussed this quite a bit at the summit - but we need to prototype some things i think to determine the best path
15:35:14 <korzen> LB can be incorporated in neutron chart
15:35:15 <v1k0d3n> i think @alanmeadows should probably weigh in on that etherpad
15:35:31 <portdirect> oh sweet nice korzen :)
15:35:31 <alanmeadows> I think the network node concept was something missing from the beginning, that is a welcome addition
15:35:50 <portdirect> +1
15:36:03 <portdirect> means we can prob get rid of that ovs labeling weirdness
15:36:17 <korzen> other SDNs need to have their separate charts
15:36:48 <korzen> and labeling to mark which node should operate which Neutron backend seems like a solution
15:36:50 <alanmeadows> well, you still have the need to get ovs both on compute nodes, and now network nodes
15:37:24 <portdirect> yeah - just that it could now be loosened up a bit and made more generic
15:37:34 <portdirect> ie: sdn-agent or similar
15:37:38 <alanmeadows> so the same conundrum still exists, but with different names
15:37:58 <alanmeadows> sure, openvswitch label becomes something more palatable
15:38:11 <alanmeadows> but same requirement for that multi-category label, unless someone has better ideas
15:38:42 <portdirect> we could just make it that the sdn runs on compute and network nodes?
15:38:56 <portdirect> and dump it altogether
15:40:15 <korzen> sdn can have separate labels
15:40:30 <alanmeadows> thats what we do today, with a third label; to have it scheduled to two separate labels, we would have to get more fancy than nodeSelector
15:40:47 <alanmeadows> which sounds fine to me, I just couldn't be bothered
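For reference, the "more fancy than nodeSelector" piece is roughly node affinity: nodeSelectorTerms are ORed, so a single DaemonSet can target nodes carrying either label without inventing a third combined label. A sketch with illustrative label names:

```yaml
# Fragment of a DaemonSet pod spec; label keys and values are illustrative.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        # Terms in this list are ORed: a node matching either expression
        # schedules the agent, covering compute OR network nodes.
        - matchExpressions:
            - key: openstack-compute-node
              operator: In
              values: ["enabled"]
        - matchExpressions:
            - key: openstack-network-node
              operator: In
              values: ["enabled"]
```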
15:41:15 <portdirect> lol - i think that should be a twenty min fix :P
15:41:29 * alanmeadows starts the clock.
15:41:33 * portdirect suspects that will bite him
15:41:36 <portdirect> :)
15:42:12 <v1k0d3n> so maybe the best approach alanmeadows is to weigh in on that etherpad?
15:42:24 <alanmeadows> i will, but with this etherpad
15:42:40 <alanmeadows> did this group work out whether they feel all of these options listed here, which are pretty wide reaching
15:42:48 <alanmeadows> from calico, to (open)contrail to ovs
15:42:57 <portdirect> not yet its still a wip
15:42:57 <alanmeadows> whether they can be handled in a single neutron chart
15:43:16 <alanmeadows> or whether neutron-foobar would be inevitable
15:43:20 <portdirect> nope - thats the fundamental question at this stage
15:43:32 <alanmeadows> ok so more workthrough/attempts need to be done
15:43:35 <alanmeadows> for us to answer that question
15:43:55 <portdirect> at the summit we agreed that we would prototype both approaches in PS's and then determine the most palatable path forward
15:44:26 <alanmeadows> okay
15:44:37 <portdirect> lines 34:47 are where korzen has done a great job of highlighting most of the areas we need to consider
15:45:09 <v1k0d3n> 15 minutes til top of the hour folks.
15:46:30 <v1k0d3n> let's talk about these neutron improvements in the openstack-helm channel.
15:46:58 <v1k0d3n> one thing that i think may be creeping up is cinder issues.
15:47:21 <v1k0d3n> i've heard a couple of folks mention this. can anyone go a bit deeper? anticw ?
15:47:45 <anticw> v1k0d3n: couple of issues, the default endpoint
15:47:48 <anticw> and secondly it doesn
15:47:57 <anticw> gah stupid ' button
15:48:00 <anticw> do not work
15:48:25 <anticw> the endpoint is an easy fix, we get obscure 500s without it
15:48:36 <anticw> do not work = tries to use iscsi in nova-compute for unknown reasons
15:49:30 <SamYaple> anticw: you cant do iscsi in network namespace for kernel reasons
15:49:31 <v1k0d3n> anticw: are these things you have fixes for that could be submitted/reviewed, or are you still in the process of testing through?
15:49:59 <anticw> v1k0d3n: just local because i'm not sure if it's me or something larger
15:50:07 <anticw> SamYaple: don't want iscsi ... it should be using ceph
15:50:17 <SamYaple> ah sorry misread that anticw
15:50:24 <anticw> https://pastebin.com/5Uvjk7hR
15:51:33 <anticw> SamYaple: fwiw, i think we could get iscsi working as most of that stuff seems to run in the host namespace
15:51:49 <anticw> (but that's OT)
15:51:50 <v1k0d3n> anticw: let me redeploy a bare metal install, test and see if i get some of the same. i can reach out and we can work through them if you want/have time?
15:51:58 <anticw> v1k0d3n: no
15:52:04 <anticw> err, yes
15:52:06 <SamYaple> anticw: iscsi runs in the host namespace fine, there is a kernel bug for running in network namespaces
15:52:13 <v1k0d3n> :) haha
15:52:29 <v1k0d3n> 8 mins left
15:52:35 <portdirect> SamYaple: the only thing that jumps out at me is this: https://github.com/openstack/openstack-helm/blob/master/nova/values.yaml#L322
15:52:54 <portdirect> but that should not affect volume attach right?
15:53:09 <alanmeadows> that should be how nova stores ephemeral disks
15:53:28 <anticw> cinder-api is giving cinder-volume some json which it's not dealing with right, unclear which end is to blame
15:53:41 <anticw> we can talk about this over <- there later if you like
15:53:53 <v1k0d3n> that works anticw
15:54:07 <anticw> portdirect: really easy to repro if you have 5 mins
15:54:22 <v1k0d3n> one last thing too. i think SamYaple and alanmeadows: you guys are still exploring rootwrap use...is that correct?
15:54:53 <v1k0d3n> or have you guys come to some form of agreement? maybe we should stand up an etherpad so we can take it offline?
15:55:01 <v1k0d3n> (unless there already is one).
15:55:36 <SamYaple> the summation is i say rootwrap.conf is a config (hence why it is in /etc/) and it is not in the LOCI images
15:55:48 <SamYaple> the fear is it is image-specific stuff, which i dont agree with
15:55:52 <portdirect> +1
15:55:54 <alanmeadows> I have been going through OOTB rootwraps trying to discover something that would end up being quite image-centric
15:56:08 <SamYaple> for glance and cinder if you use ceph you dont even NEED rootwrap, so its also a greater attack surface
15:56:23 <SamYaple> the deploy tool would be able to limit that attack surface but it must control the rootwrap files
15:56:32 <portdirect> for cinder you do for backup I think?
15:56:42 <SamYaple> i dont think so, but i could be wrong
15:56:52 <SamYaple> i could be wrong about ALL of this, but i havent seen anything to suggest i am
15:56:58 <SamYaple> so this is really pending more info
15:56:58 <alanmeadows> I have not been able to find an example yet, so I am willing to concede on bringing these into the fold of OSH, provided we come up with a path to customize not only rootwrap.conf, but allow injection of additional rootwrap.d files
15:57:15 <SamYaple> im happy to come back to it once/if an issue is found
15:57:30 <alanmeadows> to be honest, the main reason it was skipped for configuration overrides
15:57:35 <alanmeadows> was it was a substantial amount of work
15:57:45 <srwilkers> alanmeadows, i can tackle that
15:57:52 <SamYaple> yea sure alanmeadows, but to be fair it can be wholesale copied over at first
15:57:59 <SamYaple> maintenance on a straight copy is insignificant
15:58:17 <SamYaple> any parameterized rootwrap is a thing that comes later
15:58:25 <alanmeadows> well, but I mean if we're going to own it, we have to give end users the ability to do things with it (templates)
15:58:29 <portdirect> they are pretty well commented atm - should just be a case of wrapping in conditionals i think
15:58:33 <alanmeadows> if we own it, and then jam it in, we've done a disservice
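portdirect's conditional-wrapping idea might look roughly like this for a rootwrap.d filter template (the file path, values flag, and filter entry are illustrative; the filter line itself follows standard oslo.rootwrap syntax):

```
# Illustrative chart template for a cinder rootwrap.d filter file; the
# whole file is emitted only when the operator enables it in values.yaml.
{{- if .Values.conf.rootwrap_filters.volume.enabled }}
[Filters]
# Standard oslo.rootwrap CommandFilter: name: filter, command, run-as user.
lvs: CommandFilter, lvs, root
{{- end }}
```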
15:59:02 <alanmeadows> in fact, I think because this is not oslo-gen capable
15:59:12 <SamYaple> 'dont let perfect be the enemy of good'
15:59:21 <SamYaple> but im with you, i dont want it to sit in there
15:59:25 <alanmeadows> the function introduced with the cinder stuff, namely the helm-toolkit "toIni" function (casing correction needed)
15:59:31 <SamYaple> that doesnt mean it cant sit in there AT FIRST
15:59:37 <alanmeadows> may be perfect for rootwrap.conf
16:00:02 <SamYaple> reminder, openstack config files are not ini. that is all
16:00:12 <SamYaple> i think rootwrap follows ini still though
16:00:29 <v1k0d3n> we unfortunately have to wrap up.
16:00:35 <alanmeadows> its standard [HEADER] item=foo
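A minimal sketch of what owning rootwrap.conf in a chart could look like, with the standard INI sections either inlined or emitted from values by something like the helm-toolkit "toIni" helper mentioned above (the helper name/casing and values layout are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-rootwrap
data:
  # Plain [SECTION] key=value INI, per the format noted above; a toIni-style
  # helper could render this block from a values.yaml map so end users can
  # override it like any other config.
  rootwrap.conf: |
    [DEFAULT]
    filters_path=/etc/nova/rootwrap.d
    exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
    use_syslog=False
```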
16:00:39 <v1k0d3n> sorry that the meeting took a while today.
16:01:01 <v1k0d3n> alanmeadows and SamYaple...want to continue in #openstack-helm?
16:01:11 <SamYaple> sure i think we are basically done though
16:01:21 <v1k0d3n> sounds good.
16:01:36 <v1k0d3n> ok calling the meeting folks. thanks everyone for coming and contributing!
16:01:37 <alanmeadows> I think we can close with saying srwilkers can own this, just an etherpad with some comments to start out a trial on this
16:01:48 <SamYaple> o/
16:01:52 <srwilkers> bai
16:02:00 <v1k0d3n> #endmeeting