14:59:55 <mattmceuen> #startmeeting openstack-helm
14:59:56 <openstack> Meeting started Tue Jun 26 14:59:55 2018 UTC and is due to finish in 60 minutes.  The chair is mattmceuen. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:59:57 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:00 <openstack> The meeting name has been set to 'openstack_helm'
15:00:07 <mattmceuen> #topic rollcall
15:00:53 <mattmceuen> Hi everyone -- here's the agenda for our OSH team meeting today: https://etherpad.openstack.org/p/openstack-helm-meeting-2018-06-26
15:00:58 <rwellum> o/
15:01:04 <mattmceuen> I'll give folks a couple minutes to review it and add anything they'd like
15:02:05 <rwellum> Thanks for adding the production question mattmceuen
15:03:23 <mattmceuen> First up:
15:03:26 <mattmceuen> #topic Hooking up OSH to provider networks is not well documented
15:03:39 <mattmceuen> Thanks for bringing this up piotrrr
15:03:43 <lamt> o/
15:03:53 <srwilkers> o/
15:03:59 <mattmceuen> This is a fine line to walk IMO, for a couple of reasons:
15:04:01 <portdirect> \0
15:04:15 <mattmceuen> 1. we don't want to duplicate Neutron config in the OSH docs, just the OSH-specific bits
15:04:34 <mattmceuen> 2. we don't want to dictate how a specific deployment should configure things, as that's very deployment-specific
15:04:41 <mattmceuen> Very open to thoughts here.
15:04:43 <mattmceuen> ?
15:04:53 <portdirect> hmm
15:04:59 <portdirect> all points are valid
15:05:13 <portdirect> 0) our documentation is terrible in this regard
15:05:15 <mattmceuen> Because obviously a cloud that can't connect its VMs is not all that useful, so these are problems worth overcoming :D
15:05:21 <portdirect> 1) 100% agree
15:05:28 <piotrrr> Thanks. That's fair enough. Shouldn't we in that case have a dedicated section covering those networking aspects, and refer to it from the various OSH deployment docs wherever needed?
15:05:36 <vadym> maybe you could at least mention the need to set up bridges in the multinode manual?
15:05:51 <portdirect> 2) again agree, the reference to provider networks in the etherpad is testament to this
15:06:52 <mattmceuen> vadym:  agree, that is a good minimal starting point -- that's basically making people aware of the documentation gap :)
15:07:12 <mattmceuen> "this page left intentionally blank" kind of thing
15:07:15 <piotrrr> indeed, that's another sub-item in that agenda point - the VM networking doesn't seem to work out of the box right now with the multinode instructions.
15:09:01 <mattmceuen> There has to be some expectation of op-specific customization.  What would the right level of content be to include in OSH without duplicating OVS / Neutron docs?  Obviously 1) calling out that it's a step that needs to happen, 2) linking to the appropriate Neutron docs
15:10:41 <rwellum> I think there's a compromise - the baseline you mentioned last week. To me it's reasonable that an OSH guide would explain how to get basic networking working.
15:11:43 <piotrrr> at least having (1) would already be a good improvement over what's in the multinode docs now. but if applicable, we might as well point to external sources - it doesn't matter much. The important part is to communicate that op-specific customization is needed, and maybe give some clues as to how to approach it.
15:12:16 <mattmceuen> That sounds like a reasonable approach.  I'll add a storyboard item to track the need and capture these thoughts there - we can always fine-tune once we have a PS
15:12:26 <portdirect> rwellum, can you take this on, as part of your multinode guide work?
15:13:07 <rwellum> Sure, happy to.
15:13:16 <rwellum> Assign it to me if you want mattmceuen
15:13:24 <portdirect> nice - that's awesome dude
15:13:25 <piotrrr> yup. the big picture, I would say, is that it should be crystal clear at which points ops should, or must, plug in their own logic to cover op-specific things.
15:14:10 <mattmceuen> cool will do - thanks rwellum
15:14:35 <mattmceuen> yeah, agree piotrrr.  There's so much magic that you need to call out the places where there is no magic :)
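For context on the bridge setup mentioned above, here is a minimal sketch (Python wrapping shell commands) of the kind of op-specific step the multinode guide could call out: creating an external OVS bridge, attaching a host NIC to it, and then mapping it to a Neutron provider network via chart overrides. The bridge name, NIC name, and the override key mentioned in the comments are illustrative assumptions, not OSH defaults.

```python
import subprocess

BRIDGE = "br-ex"          # external bridge name (illustrative)
PROVIDER_NIC = "enp0s8"   # host NIC dedicated to provider traffic (illustrative)

def run(cmd):
    """Echo and run a command, failing loudly so a broken bridge setup is obvious."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the external bridge and attach the provider NIC to it (idempotent).
run(["ovs-vsctl", "--may-exist", "add-br", BRIDGE])
run(["ovs-vsctl", "--may-exist", "add-port", BRIDGE, PROVIDER_NIC])

# The matching Neutron chart override would map this bridge to a provider network
# label (e.g. a bridge_mappings entry like "public:br-ex"); the exact value path
# varies by chart version, so treat it as a placeholder.
```

The point of documenting this is exactly the "no magic here" call-out above: OSH can note that the step exists and link to the Neutron/OVS docs for the details.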
15:14:57 <mattmceuen> Next
15:15:01 <mattmceuen> #topic Loci/Kolla container images not production ready according to https://etherpad.openstack.org/p/openstack-helm-multinode-doc
15:15:18 <mattmceuen> To paste in Pete's comments:
15:15:18 <mattmceuen> As they are built in openstack-infra using unsigned packages
15:15:18 <mattmceuen> We don't want to advertise something as production ready unless we can support it with the respect `production ready` deserves
15:15:18 <mattmceuen> CVE tracking etc
15:15:43 <rwellum> Confused - aren't these the same images Kolla calls production ready?
15:16:04 <piotrrr> OSH is using kolla/loci images. kolla/loci images are not production ready. OSH in 1.0.0 will continue using those images. Will that mean that OSH 1.0.0 will *not* be production ready?
15:16:30 <mattmceuen> OSH will be production ready, and recommends that operators roll their own production ready images for prod use
15:16:46 <srwilkers> ^
15:17:12 <piotrrr> perfect, that answers my question. Is there a plan to have any docs on how an op can get started creating those?
15:17:23 <portdirect> rwellum: when did kolla start calling the images it pushes to dockerhub production ready?
15:17:23 <mattmceuen> Or at minimum, snapshot a tested, integrated set of images so that the operator can control their destiny
15:18:20 <mattmceuen> This is another instance where we should reference out to the kolla/loci build process, I think, piotrrr - I'll add it to the list
15:19:00 <rwellum> It's their mission statement: "Kolla’s mission is to provide production-ready containers and deployment tools for operating OpenStack clouds."
15:19:34 <rwellum> So they haven't accomplished their mission I guess? :)
15:20:07 <portdirect> they provide the ability to build production ready containers
15:20:08 <rwellum> And Kolla-Ansible is used in production in many places
15:20:35 <piotrrr> do Loci images aspire to be production ready at some point?
15:20:46 <portdirect> the ones pushed to dockerhub are, like ours (unless something radical has changed), for dev and evaluation only
15:20:50 <piotrrr> or do they plan to remain used primarily for dev purposes?
15:20:57 <portdirect> both Kolla and LOCI are production ready
15:21:47 <rwellum> portdirect: you're contradicting yourself?
15:21:50 <portdirect> no
15:22:13 <portdirect> you can use kolla and loci to build images for production
15:22:17 <srwilkers> he's saying they provide the toolsets to build production ready images
15:22:27 <srwilkers> not that their default, publicly available images are production ready
15:22:36 <mattmceuen> The containers on github are "reference builds", right - not the images intended to be deployed in a true prod setting?
15:22:36 <portdirect> both publish images to dockerhub, but the images on dockerhub are not recommended for production
15:22:43 <mattmceuen> *dockerhub
15:22:56 <rwellum> Even the images that are built from stable OpenStack releases?
15:23:01 <portdirect> yes
15:23:29 <mattmceuen> It's a subtle point, and I'll try to capture it in our doc to avoid misreading it as "Kolla and LOCI are not production ready"
15:23:30 <portdirect> you know the kolla images on dockerhub were for a long time built from an insecure PC in my office?
15:24:28 <piotrrr> Alright. Thanks, portdirect. I guess that explains it. All of the above should be made clear somewhere in the OSH docs I would say.
15:24:28 <rwellum> I knew it was in someone's office :). But I thought that since the build infrastructure moved to OpenStack infra, and given the level of testing they go through, a certain quality was ensured now.
15:24:59 <portdirect> it does, but openstack infra does not sign packages etc
15:25:03 <portdirect> great for dev
15:25:12 <portdirect> not for PCI compliance etc
15:25:42 <rwellum> Ok - I'm caught up.
15:26:00 <rwellum> Would be cool to ask this question on the kolla IRC channel
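To illustrate the "roll your own images" recommendation above, here is a hedged sketch of building a single service image with LOCI and pushing it to a registry the operator controls; the project, registry, tag, and build arguments are placeholders, and the exact LOCI options depend on the version in use.

```python
import subprocess

PROJECT = "keystone"                    # OpenStack service to build (illustrative)
REGISTRY = "registry.example.com/osh"   # operator-controlled registry (illustrative)
TAG = f"{REGISTRY}/{PROJECT}:queens-ubuntu"  # placeholder tag

# Build the image directly from the LOCI repo; docker accepts a git URL as build context.
subprocess.run(
    [
        "docker", "build", "https://git.openstack.org/openstack/loci.git",
        "--build-arg", f"PROJECT={PROJECT}",
        "--tag", TAG,
    ],
    check=True,
)

# Push to the private registry so deployments never pull the dockerhub reference builds.
subprocess.run(["docker", "push", TAG], check=True)
```

The chart image values can then be overridden to point at the operator-built tag, which keeps CVE tracking and signing under the operator's control.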
15:26:00 <mattmceuen> Alright - next up:
15:26:05 <mattmceuen> #topic Support multi versions of Rally - Let's have some time to think about it and discuss again next week.
15:26:33 <mattmceuen> There is some good discussion on this in the agenda, https://etherpad.openstack.org/p/openstack-helm-meeting-2018-06-26  if anyone wants to catch up
15:29:15 <mattmceuen> So the idea of tuning rally tests in `helm test` scenarios to be portable across different openstack versions resonates with me
15:29:43 <mattmceuen> as it's straightforward, and `helm test` is not intended to be a thorough functional test, but rather a sanity test to ensure the deployment was successful
15:31:45 <mattmceuen> Thoughts?  This was copied from last time, so I'm not sure who brought it up to begin with
15:32:03 <portdirect> I added a few inline
15:37:52 <rwellum> How does this relate to the 'multiple versions' aspect?
15:39:47 <mattmceuen> I think Jay brought up the topic last time - portdirect's inline responses should be good for him to see; I'll shoot them to jayahn to catch up on
15:40:02 <mattmceuen> rwellum in which sense?
15:42:57 <rwellum> Is it multiple versions of Rally or multiple versions of OpenStack? I'm reading the etherpad and it's not clear
15:43:26 <portdirect> which it are we talking about here?
15:43:46 <rwellum> The title is: "Support multi versions of Rally"
15:44:08 <rwellum> I just don't see that in the discussion
15:44:36 <rwellum> Maybe just missing the point?
15:45:03 <portdirect> lines 31-35?
15:46:32 <mattmceuen> The point being that supporting multiple versions of openstack implies needing to support different variants of Rally (code and/or configuration)
15:47:23 <mattmceuen> And how can we best approach that without having to go so far as branching OSH per OpenStack release, instead relying on things like overrides or Rally tests that run across versions
15:47:47 <mattmceuen> I shot it to Jay, we can bring it up again next time if he has any thoughts or concerns
15:47:56 <portdirect> I think the point you raise re `helm test` is critical here, though
15:48:08 <mattmceuen> Then I'll add it into the etherpad :)
15:48:10 <portdirect> in that for our purposes 0.8 is actually fine
15:48:24 <portdirect> though don't get me wrong, I'm not happy with that as an answer
15:49:05 <portdirect> and we should strive to support rally/tempest out of the box right up to master as well for helm test
15:49:27 <mattmceuen> Definitely
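As a sketch of the "overrides instead of branching" idea above, the snippet below picks a Rally test image per target OpenStack release and passes it to a chart at deploy time; the image tags and the images.tags.test value path are assumptions for illustration rather than verified OSH defaults.

```python
import subprocess

# Illustrative mapping of target OpenStack release -> Rally test image; tags are invented.
RALLY_IMAGES = {
    "ocata": "docker.io/example/rally:0.8.1",
    "queens": "docker.io/example/rally:0.10.1",
}

release = "queens"  # target OpenStack release for this deployment (assumption)

# Deploy (or upgrade) the chart with a per-release override for the helm-test image.
subprocess.run(
    [
        "helm", "upgrade", "--install", "keystone", "./keystone",
        "--set", f"images.tags.test={RALLY_IMAGES[release]}",
    ],
    check=True,
)

# `helm test keystone` would then run the sanity checks using the pinned Rally image,
# keeping helm test a portability-friendly smoke test rather than a per-branch fork.
```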
15:49:49 <mattmceuen> #topic PS needing review
15:50:04 <mattmceuen> Ok folks - any languishing PS that you'd like to request eyes on?
15:50:24 <roman_g> yours with docs update
15:50:36 <mattmceuen> :)
15:50:43 <roman_g> o/
15:51:04 <mattmceuen> I still need to push more changes to that - that's why I'm not pushing it hard - but I will get those done this week. Thx roman_g
15:51:13 <mattmceuen> Has been a week
15:51:17 <roman_g> okok
15:51:50 <mattmceuen> #topic Roundtable
15:51:58 <mattmceuen> Anything else on y'all's minds?
15:53:11 <piotrrr> Are you guys coming over to the Berlin summit?
15:53:25 <mattmceuen> That is the plan, yes :)
15:53:32 <mattmceuen> You?
15:53:46 <piotrrr> yup, we will be there
15:53:52 <mattmceuen> awesome
15:55:29 <mattmceuen> Alright everyone, thanks for your time and your effort!
15:55:34 <mattmceuen> Have a great day & week
15:55:36 <mattmceuen> #endmeeting