14:00:39 <mattmceuen> #startmeeting airship
14:00:40 <openstack> Meeting started Tue Jan  8 14:00:39 2019 UTC and is due to finish in 60 minutes.  The chair is mattmceuen. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:44 <openstack> The meeting name has been set to 'airship'
14:00:46 <mattmceuen> #topic rollcall
14:00:53 <mattmceuen> good morning / evening / middle of night everyone!
14:01:04 <evgenyl> o/
14:01:05 <hogepodge> Hi!
14:01:08 <mark-burnett> o/
14:01:10 <b-str> o/
14:01:17 <aaronsheffield> o/
14:01:33 <mattmceuen> Agenda for today: https://etherpad.openstack.org/p/airship-meeting-2019-01-08
14:01:35 <dwalt> o/
14:01:46 <mattmceuen> Please add in anything additional you'd like to discuss today
14:03:41 <mattmceuen> Alrighty
14:04:00 <mattmceuen> #topic Accessing tiller from behind a proxy
14:04:33 <mattmceuen> There was a discussion on the ML around accessing tiller from behind the proxy
14:04:44 <evgenyl> Yes, there is a problem with configuring Armada behind a proxy: grpc (the tiller client) does not support CIDR notation in no_proxy, so it cannot connect to Tiller.
14:04:51 <mattmceuen> It led to some good thoughts around how to streamline armada-tiller-shipyard communication in general
14:05:08 <mattmceuen> but first I was hoping evgenyl could walk through his use case to ground us
14:05:27 <evgenyl> Scott suggested configuring the "proxy" parameter for every repo; I'm wondering if it can be done globally.
14:06:41 <evgenyl> If we configure tiller with DNS name and use it instead of IP, this should help with no_proxy.
14:06:49 <mattmceuen> is the proxy issue you're getting happening when you pull images from the repo, or after you pull images and armada is trying to talk to tiller?
14:06:57 <sthussey> tiller does have a dns name
14:06:59 <mattmceuen> trying to understand how the repo part fits into this
14:07:04 <sthussey> there is a tiller service that gets an in-cluster DNS entry
14:07:48 <sthussey> Ignore me
14:07:56 <mattmceuen> lol
14:07:58 <sthussey> That was the previous tiller chart in OSH
14:08:01 <mattmceuen> ahh
14:08:01 <evgenyl> mattmceuen: it happens when Armada starts to pull git repos.
14:08:39 <mattmceuen> I see - that's before armada is actually talking to tiller
14:08:41 <sthussey> This is why we define proxies per repo
14:08:53 <sthussey> Because it gives the granularity needed
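For reference, the per-repository setting being discussed might look like this in an Armada chart document; the `proxy_server` key name and all values here are assumptions, so verify against the Armada chart schema:

```yaml
# Hypothetical Armada chart excerpt -- the proxy_server key name and all
# values are assumptions; check the Armada chart schema before use.
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: example-chart
data:
  chart_name: example-chart
  release: example
  namespace: example
  source:
    type: git
    location: https://git.openstack.org/openstack/openstack-helm
    subpath: mariadb
    reference: master
    proxy_server: http://proxy.example.com:8080
```

Today a setting like this has to be repeated for every chart, which is the UX issue being discussed.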
14:09:31 <mattmceuen> evgenyl can you give sthussey's idea a try and let us know if it helps, and/or follow on issues?
14:09:58 <sthussey> I think the issue Evgeny was having isn't functional, it is UX
14:10:04 <evgenyl> Yes.
14:10:17 <evgenyl> I was wondering if it can be redefined globally.
14:10:18 <sthussey> Because in the case of AIAB behind the proxy, he needed to update the proxy for every chart in versions.yaml
14:10:43 <evgenyl> Does anybody know why DNS was removed from OSH chart?
14:10:53 <sthussey> We had two charts
14:10:56 <sthussey> We stopped using OSH
14:11:10 <sthussey> And use the chart in the Armada repo
14:11:15 <mattmceuen> #topic Streamlining tiller lookup logic
14:11:19 <mattmceuen> Segue :)
14:11:26 <sthussey> We can certainly add a service to the chart
14:11:40 <sthussey> Which will yield a DNS entry that could be used
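A sketch of such a Service, assuming tiller's conventional gRPC port (44134); the metadata names and selector labels are assumptions that would need to match the actual tiller deployment:

```yaml
# Hypothetical Service definition -- metadata names and selector labels
# are assumptions; align them with the tiller pod labels in the chart.
apiVersion: v1
kind: Service
metadata:
  name: tiller-deploy
  namespace: kube-system
spec:
  selector:
    app: helm
    name: tiller
  ports:
    - name: grpc
      port: 44134
      targetPort: 44134
      protocol: TCP
```

A Service like this yields a stable ClusterIP plus an in-cluster DNS name (here, tiller-deploy.kube-system.svc.cluster.local).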
14:12:08 <portdirect> would just be a revert of: https://github.com/openstack/airship-armada/commit/eb7c112d2ee8e95ab5a4eda0076c69fdd3aeaf66
14:12:36 <evgenyl> Yes. There is also an idea to move the tiller discovery logic from Shipyard into Armada.
14:12:49 <sthussey> That likely will happen.
14:13:14 <mattmceuen> ++
14:13:21 <sthussey> Tiller will be running as a sidecar to the armada API in the near future
14:13:45 <sthussey> So resolving what address to access tiller at will need to be in Armada, so might as well make that location logic good for the CLI as well
14:13:54 <roman_g> o/
14:14:02 <mattmceuen> o/ roman_g
14:14:30 <evgenyl> sthussey: will it have a separate DNS name when running as a sidecar?
14:14:43 <evgenyl> Or will it be possible to configure?
14:14:52 <sthussey> When running as a sidecar it will be inaccessible to anything but the Armada API it runs with
14:15:19 <sthussey> So there will need to be a few deployment patterns to support use cases of the Armada CLI
14:15:34 <evgenyl> I'm trying to understand if this DNS change will make sense if Tiller lands as a sidecar.
14:16:22 <portdirect> presumably then you'd just have it listen on localhost? so the service would not be exposed at all?
14:16:41 <evgenyl> Oh, in this case PodIP discovery won't be needed.
14:16:48 <sthussey> That is in the API case
14:16:53 <sthussey> Doesn't work for the CLI case
14:16:56 <mattmceuen> sthussey, are you saying that we'd run tiller-in-armada-pod for an armada API based deployment, but to support the CLI we'd also support standalone tiller pods w/ Service (DNS) ?
14:17:15 <sthussey> Right, would need to support multiple patterns
14:17:19 <mattmceuen> gotcha
14:18:11 <mattmceuen> so to say differently, you'd choose your adventure, one of:
14:18:11 <mattmceuen> 1. Deploy the Armada chart
14:18:11 <mattmceuen> 2. Deploy the Tiller chart (which will include a Service / DNS) and use the Armada client
14:18:29 <mattmceuen> depending on the operator use case
14:19:20 <mattmceuen> So in that case
14:19:30 <evgenyl> So you can use Armada client without having Armada API service?
14:19:46 <sthussey> yes
14:19:56 <evgenyl> Oh, ok, this is clear now.
14:19:59 <dwalt> The CLI looks for the tiller pod with labels
14:20:57 <mattmceuen> The normal way to use the Armada client is  ArmadaClient->Tiller directly
14:21:09 <mattmceuen> Although it can be configured to be ArmadaClient->ArmadaAPI->Tiller as well
14:22:11 <evgenyl> So to summarize, it would look like: ArmadaClient -> (DNS) Tiller, or ArmadaClient -> ArmadaAPI -> (localhost) Tiller
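The two lookup patterns in that summary can be sketched as a small resolver; the DNS name and tiller's conventional gRPC port (44134) are assumptions here, not Armada's actual configuration:

```python
TILLER_PORT = 44134  # tiller's conventional gRPC port


def resolve_tiller(sidecar: bool,
                   dns_name: str = "tiller-deploy.kube-system") -> str:
    """Return the host:port at which to reach tiller.

    sidecar=True  -> the Armada API case: tiller runs in the same pod,
                     so it is only reachable via localhost.
    sidecar=False -> the CLI case: a Service gives tiller a stable
                     in-cluster DNS name (hypothetical name here).
    """
    host = "127.0.0.1" if sidecar else dns_name
    return f"{host}:{TILLER_PORT}"
```

Behind a proxy, the DNS branch is also what makes no_proxy workable, since a hostname can be listed there directly.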
14:22:47 <mattmceuen> Would you be running the armada client from inside the cluster or outside the cluster? Just making sure
14:22:56 <mark-burnett> AFAIK, nobody is using the configuration of Client -> API -> Tiller
14:23:36 <mattmceuen> I ask because, if inside the cluster, then the Service will be enough to provide DNS-based routability to tiller
14:24:13 <evgenyl> mattmceuen: I'm interested in a "standard" Shipyard -> ArmadaAPI -> Tiller use case
14:24:16 <mattmceuen> but if you're trying to get to it from outside the cluster, you'd additionally need an ingress for the service into the cluster, along with *publicly* accessible DNS
14:24:20 <mattmceuen> ok
14:24:47 <portdirect> or just expose a port, and use the ip
14:24:53 <mattmceuen> yeah
14:25:23 <portdirect> the reason we added the service in osh, was much less about dns, but about having a stable vip to hit tiller with
14:26:15 <evgenyl> The problem with an IP gets back to the initial issue of grpc not supporting CIDR in no_proxy, doesn't it?
14:26:38 <sthussey> Which isn't standard
14:26:42 <sthussey> So not surprising
14:26:45 <evgenyl> Yes.
14:26:46 <portdirect> yeah - it does not solve that issue, just providing context for where it came from
14:27:02 <sthussey> So there are two issues in parallel - 1) address resolution and 2) routing
14:27:40 <sthussey> Address resolution can use DNS, explicit address or label selection. Only DNS would solve the routing issue
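To make the no_proxy problem concrete: a client that compares no_proxy entries as plain strings (as grpc reportedly does) can never match a pod IP against a CIDR entry; CIDR matching requires parsing, roughly like this sketch (illustrative only, not grpc's actual code):

```python
import ipaddress


def bypass_proxy(host: str, no_proxy: str) -> bool:
    """CIDR-aware no_proxy check -- a sketch of what a plain
    string-matching client does NOT do, which is why an IP-addressed
    tiller fails even with the pod CIDR listed in no_proxy."""
    entries = [e.strip() for e in no_proxy.split(",") if e.strip()]
    for entry in entries:
        # plain host equality / domain-suffix match
        if host == entry or host.endswith("." + entry.lstrip(".")):
            return True
        try:
            # CIDR match: only possible once the entry is parsed as a network
            if ipaddress.ip_address(host) in ipaddress.ip_network(entry, strict=False):
                return True
        except ValueError:
            continue  # host or entry is not an IP/CIDR
    return False
```

With plain string matching only the first branch exists, so `no_proxy=10.96.0.0/12` silently fails and the tiller connection is sent through the proxy.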
14:28:22 <sthussey> If we add a `proxy` setting at the armada/Manifest/v1 level that will apply to all charts, it solves #2 and makes #1 moot
14:30:41 <evgenyl> So are there any objections regarding this? If not, I can start looking into it in my spare time.
14:30:45 <mattmceuen> evgenyl can you try setting the proxy for each chart like sthussey suggested earlier for now, and we can look to add a manifest-wide proxy setting in the future?
14:30:54 <mattmceuen> that would be awesome evgenyl :)
14:31:32 <evgenyl> Sure, let me update AIAB doc with this info as a quick fix.
14:31:41 <mattmceuen> thanks man
14:31:59 <mattmceuen> to wrap up the "tiller lookup" topic:
14:32:33 <mattmceuen> when we move tiller in-pod with armada, and armada changes to look tiller up via localhost, that seems like a natural point to take the lookup functionality out of shipyard - any objections?
14:32:44 <mattmceuen> any other loose ends?
14:33:13 <mattmceuen> alright - moving on:
14:33:17 <mattmceuen> #topic Read the Docs jobs not updating documentation
14:33:24 <mattmceuen> This one's yours dwalt
14:33:31 <dwalt> This is still an issue AFAIK
14:33:42 <dwalt> For example, https://review.openstack.org/#/c/628420/1 doesn't appear to have triggered an update
14:34:22 <dwalt> Has anyone heard any more official information (outside of http://lists.openstack.org/pipermail/openstack-infra/2018-December/006247.html) from OpenStack regarding this?
14:34:29 <sthussey> @roman_g
14:34:55 <evgenyl> My understanding is infra is waiting for this to be merged
14:34:57 <evgenyl> #link https://github.com/rtfd/readthedocs.org/issues/4986
14:35:32 <portdirect> https://github.com/rtfd/readthedocs.org/pull/5009
14:36:00 <portdirect> though no indication of the timeline before this is closed out
14:36:11 <mattmceuen> it seems pretty active
14:36:23 <evgenyl> Yes, it is broken for several months already.
14:36:30 <mattmceuen> I'm leaning toward doing manual rtd updates and giving this a chance to get fixed
14:36:40 <mattmceuen> yes, but it's promising that there's discussion from 5 days ago too :)
14:36:59 <evgenyl> Sorry, not several months, for almost a month :)
14:37:30 <mattmceuen> i.e. hopefully the end is in sight... we can revert to a token-based push if it does stretch out a long time, but better not to spend the cycles on it if the issue gets resolved soon is what I'm thinking.
14:37:45 <mattmceuen> agree/disagree?
14:37:45 <dwalt> Agreed.
14:37:51 <evgenyl> ++
14:38:02 <mattmceuen> awesome
14:38:05 <mattmceuen> next topic:
14:38:09 <mattmceuen> #topic Adding Airship in the CNCF Landscape
14:38:16 <sthussey> Who is doing these manual updates?
14:38:34 <mattmceuen> Kaspars is what I hear, sthussey
14:38:57 <dwalt> I am not sure if that's for all repos or just treasuremap
14:39:13 <dwalt> It has to be manually triggered via RTD from what I understand
14:39:23 <dwalt> So whoever has access to the account
14:39:24 <mattmceuen> let's bring it up in this chat when we need doc updates, kaspars or anyone with rtd access to our account can do it
14:39:38 <mattmceuen> (I think I might have access, it's been a while :) )
14:40:14 <mattmceuen> hogepodge for CNCF landscape, this one's yours!  Please educate us on what this thing is
14:40:22 <mattmceuen> https://landscape.cncf.io/grouping=no&license=open-source
14:40:28 <mattmceuen> I see Zuul in there
14:40:54 <hogepodge> So the CNCF landscape is an interactive document that tries to capture integrations between CNCF projects and other open source projects
14:41:10 <hogepodge> So, Zuul is there because it runs CI jobs for K8s on OpenStack, for example
14:41:23 <hogepodge> Or Kata because it can be a container runtime for Docker and Kubernetes.
14:41:40 <hogepodge> Since Airship is essentially deployment and hosting tooling, it makes sense to list it in the landscape
14:42:08 <portdirect> odd that it shows the market cap of the backing company for single vendor projects
14:42:09 <mattmceuen> Sounds reasonable to me
14:42:16 <hogepodge> If there are parts of Airship that can stand alone (like Armada say), there's also a possibility of listing those.
14:42:16 <portdirect> but that sounds great
14:42:22 <hogepodge> portdirect: it has some... limitations
14:42:58 <mattmceuen> is it as simple as filling out some request form hogepodge?  Or is there bureaucracy :)
14:43:08 <hogepodge> Like they want to pull all data from github, which is fine with mirrors, but for projects run in Gerrit it loses a bunch of data about issues, code reviews, and so on
14:43:23 <hogepodge> I've never actually added anything myself, I think it's a pull request
14:43:37 <hogepodge> #link https://github.com/cncf/landscapeapp
14:44:24 <hogepodge> #link https://github.com/cncf/landscape
14:44:49 <hogepodge> "If you think your company or project should be included, please open a pull request to add it to landscape.yml. For the logo, you can either upload an SVG to the hosted_logos directory or put a URL as the value, and it will be fetched."
14:45:15 <mattmceuen> Sounds reasonable to me -- I think perhaps a single PR with two entries, Airship (as a whole) and Armada, using the same Airship logo would be great
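For illustration, an entry in landscape.yml looks roughly like the following; the nesting, category placement, and field names are assumptions, so copy the structure of an existing entry in the repo when writing the PR:

```yaml
# Hypothetical landscape.yml fragment -- nesting, category, and field
# names are assumptions; mirror an existing entry before submitting.
- item:
    name: Airship
    homepage_url: https://www.airshipit.org/
    project_org: https://github.com/airshipit
    logo: airship.svg  # SVG uploaded to the hosted_logos directory
```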
14:45:28 <mattmceuen> Any volunteers for putting in this PR with CNCF?
14:45:47 <portdirect> i could if you want?
14:46:06 <hogepodge> I was about to offer, but portdirect got there first ;-)
14:46:13 <mattmceuen> lol
14:46:23 <mattmceuen> he did have a question mark
14:46:40 <mattmceuen> portdirect if you have bandwidth that would be awesome, ty
14:46:49 <portdirect> np
14:46:56 <evrardjp> voluntold
14:47:04 <mattmceuen> thanks for the  find hogepodge, free publicity!
14:47:18 <portdirect> evrardjp: quite the opposite ;)
14:47:30 <hogepodge> I've been working with Dan on some of the openstack stuff, so it's been on my mind
14:47:34 <mattmceuen> volunplease
14:48:06 <mattmceuen> anything else on this topic?
14:48:23 <mattmceuen> #topic proposed topic: design overview of ovs-dpdk integration
14:48:33 <mattmceuen> georgek - this one's yours!  What do you have in mind?
14:48:39 <georgk> ok, thanks
14:49:01 <georgk> so, as you know there is a need and some discussions around ovs-dpdk integration in Airship
14:49:22 <georgk> I wanted to bring the topic here to have a sanity check of the work items that have been identified
14:49:30 <georgk> and to figure out who is working on this
14:49:38 <georgk> #link https://wiki.akraino.org/display/AK/Support+of+OVS-DPDK+in+Airship
14:50:14 <georgk> this wiki pages lists a couple of high-level work items that I have identified which need to be done in order to get the integration done
14:50:54 <mattmceuen> georgek are there user stories in storyboard for this you could point me toward?  I'm catching up on this topic
14:51:01 <portdirect> ^^
14:51:43 <georgk> right, that would be a good next step. no, I am not aware of those
14:52:10 <georgk> but I am not a huge fan of searching in storyboard (might just be me)
14:52:26 <georgk> if this list is reasonable, I'm happy to transfer it to storyboard
14:52:45 <portdirect> it would provide more visibility for sure
14:53:23 <sthussey> that wiki seems to be a bit misinformed
14:53:24 <mattmceuen> portdirect, can we carry the OSH-related bits of this to the OSH meeting coming up next in openstack-meeting-5?
14:53:25 <hogepodge> georgk: aside, if you can file an issue about searching in storyboard in the storyboard storyboard we might have some resources coming online to take a look at it
14:53:29 <sthussey> Since OSH isn't part of Airship
14:53:58 <portdirect> i was side-stepping that for the moment sthussey
14:54:05 <mattmceuen> yeah, let's focus on the Airship-specific bits in the few mins left we have (I need to steal one min at the end for another topic)
14:54:17 <mattmceuen> "deploy neutron openvswitch agent: ensure the openvswitch agent chart is deployed"
14:54:33 <portdirect> but i think in general, if you want to get traction in airship then storyboard will be the forum to do that in
14:54:47 <portdirect> (the same applies for osh, but i digress)
14:55:13 <portdirect> as not many of us are checking the Akraino wiki often, and it doesn't allow us to tie the work in with patchsets
14:55:22 <georgk> ok, I agree that it should be in storyboard, I wanted to get a sanity check of the items first
14:55:56 <mattmceuen> for deployment of the OVS agent - from an AIrship perspective, I think that item above is just a matter of supplying the right configuration to the helm chart via deployment manifests
14:56:10 <sthussey> Most of the Airship items on that list work today
14:56:31 <portdirect> `DPDK host config: mount hugepages` this is the area i expect you'll find most sticky
14:56:31 <georgk> sthussey: even better
14:56:35 <mattmceuen> Yeah, they're mostly "configs" which are operator-specific
14:56:52 <mattmceuen> so georgek they may take the form of "as an operator, how do I ..." which we can def help answer
14:57:16 <georgk> I generally consider most things to be configuration issues, but I'd like to avoid missing major gaps in Airship that need to be addressed
14:57:24 <mattmceuen> for sure
14:57:36 <mattmceuen> and I haven't looked deep in the list, there may be enhancements to airship needed
14:57:48 <portdirect> i dont really see any tbh
14:57:55 <mattmceuen> even better :)
14:58:03 <mattmceuen> sorry to rush along, but one timely topic needed
14:58:13 <mattmceuen> #topic pre-pre-PTG planning
14:58:24 <mattmceuen> We need to reserve a room for Airship at the PTG in Denver
14:58:29 <georgk> ok, so, we continue on the openstack-helm call, I suppose
14:58:31 <mattmceuen> seems like we were just in the PTG in Denver
14:58:40 <mattmceuen> that's what I'm thinking too georgek
14:59:12 <mattmceuen> Need to have a ROUGH headcount.  Note I don't know how many folks my company will be able to send even, but broadly, who here suspects their company will be sending folks to the PTG?
14:59:41 <hogepodge> Note PTG is same week as Open Infra Summit, so you can combine both events (or pick just one)
15:00:11 <mattmceuen> two for one, how can I say no
15:00:15 <portdirect> :)
15:00:34 <mattmceuen> well give it some thought guys, I'll try to ballpark a number and get it back to the OSF today
15:00:50 <mattmceuen> we're out of time!  Thanks everyone for the discussion!
15:00:55 <mattmceuen> #endtopic
15:00:59 <mattmceuen> #endmeeting