15:00:41 <mgoddard> #startmeeting kolla
15:00:42 <openstack> Meeting started Wed Jun  5 15:00:41 2019 UTC and is due to finish in 60 minutes.  The chair is mgoddard. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:45 <hrw> o/
15:00:46 <openstack> The meeting name has been set to 'kolla'
15:00:54 <rwellum> o/
15:00:55 <chason> o/
15:01:03 <scottsol> o/
15:01:15 <jroll> \o
15:01:27 <mnasiadka> o/
15:01:55 <priteau> \o
15:01:59 <mgoddard> I guess people like having the meeting in here then :)
15:02:18 <mgoddard> #topic agenda
15:02:20 <mgoddard> * Roll-call
15:02:23 <mgoddard> * Announcements
15:02:25 <mgoddard> ** IRC meetings now held in #openstack-kolla
15:02:27 <mgoddard> ** Virtual PTG summary email on openstack-discuss: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006793.html
15:02:29 <mgoddard> ** Vote for Train cycle priorities: https://etherpad.openstack.org/p/kolla-train-priorities
15:02:31 <mgoddard> ** Lots of emails on openstack-discuss for PTG follow ups. Please read and reply
15:02:33 <mgoddard> * Review action items from last meeting
15:02:35 <mgoddard> * Kolla whiteboard https://etherpad.openstack.org/p/KollaWhiteBoard
15:02:37 <mgoddard> * Stein release status
15:02:39 <mgoddard> #topic announcements
15:02:43 <mgoddard> #info IRC meetings now held in #openstack-kolla
15:02:55 <mgoddard> #info Virtual PTG summary email on openstack-discuss: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006793.html
15:03:09 <mgoddard> #info Vote for Train cycle priorities: https://etherpad.openstack.org/p/kolla-train-priorities
15:03:19 <mgoddard> #info Lots of emails on openstack-discuss for PTG follow ups. Please read and reply
15:03:36 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006793.html
15:03:49 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006895.html
15:03:54 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006896.html
15:04:00 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006897.html
15:04:06 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006898.html
15:04:19 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006899.html
15:04:25 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006900.html
15:04:37 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006901.html
15:04:42 <mgoddard> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006902.html
15:04:52 * hrw in two meetings at same time
15:05:26 <mgoddard> I got a bit email happy. Some are requesting feedback, so please read and reply if it affects you
15:05:46 <hrw> mgoddard: thanks for doing it.
15:06:05 <mgoddard> np
15:06:20 <mgoddard> #topic Review action items from last meeting
15:06:35 <mgoddard> mgoddard to try again again to disable ocata publishing job
15:06:37 <mgoddard> mgoddard to investigate bifrost CI failures
15:06:38 <hrw> btw - ubuntu:18.04 binary images will have py2/py3 each time. python3-ldappool depends on both py2 and py3 ;(
15:07:03 <mgoddard> hrw: doh. Do we need to follow up with ubuntu?
15:07:16 <hrw> there will be point
15:07:19 <kklimonda> hrw what is the dependency path pulling python2  ?
15:07:35 <mgoddard> #action mgoddard to try again again to disable ocata publishing job
15:07:48 <mgoddard> didn't get around to it still
15:07:49 <hrw> python3-ldappool -> python3-pyldap -> python-ldap -> * -> python2
15:08:10 <kklimonda> ouch
15:08:12 <mgoddard> I did fix the bifrost jobs. They now only fail if they run on the OVH cloud
15:08:19 <kklimonda> and python3-pyldap is a transition package
15:08:36 <hrw> yep
15:08:48 <kklimonda> depending on how much we care about that, we could create a fake package that satisfies the dependency
15:08:59 <mgoddard> #topic Kolla whiteboard https://etherpad.openstack.org/p/KollaWhiteBoard
15:09:50 <hrw> kklimonda: I have an idea to fix it, but let's go with the agenda now
15:10:09 <mgoddard> CI mostly looking good
15:10:29 <mgoddard> still need this one to land to fix bifrost on rocky: https://review.opendev.org/661958
15:11:03 <hrw> +2+2+w+z2 and not landed?
15:11:19 <hrw> ah. rocky
15:11:29 <mgoddard> oops, https://review.opendev.org/#/c/662029/
15:11:31 <hrw> #link https://review.opendev.org/#/c/662029/
15:11:42 <mgoddard> thanks
15:12:09 <mgoddard> once we have train priorities we can add those to the whiteboard
15:12:28 <mnasiadka> Now it will land :)
15:12:35 <mgoddard> thanks
15:12:48 <mgoddard> anything else from the whiteboard?
15:13:29 <mgoddard> #topic Stein release status
15:13:52 <mgoddard> thanks to mnasiadka we now have Ceph nautilus on centos \o/
15:14:23 <mgoddard> the patch has been ported to stein, after which we should finally be able to use RDO Stein packages
15:15:02 <mgoddard> probably a few patches to merge at that point, then finally we'll have a stein release
15:15:10 <chason> mnasiadka, nice work, dude. :)
15:15:25 <mgoddard> anyone else know of any stein blocking issues at this point?
15:15:28 <hrw> kklimonda: https://pastebin.com/sXT52g6P should fix it
15:16:17 <mgoddard> #topic Open discussion
15:16:31 <mgoddard> Nice and short on the formalities today
15:16:54 <kklimonda> @hrw ah, so no other binary packages actually depend on `python3-pyldap` itself? if so, then yeah - seems like it should work.
15:16:56 <mgoddard> kklimonda and scottsol I think you wanted to discuss TLS?
15:17:14 <mgoddard> and stackedsax if you're awake?
15:17:15 * hrw on py3 - in queue
15:17:25 <mnasiadka> I have a question around FFU - is anybody planning to work on that?
15:17:38 <scottsol> yes :)
15:17:38 <mgoddard> we can do py3 ldap first if you like kklimonda and hrw ?
15:17:41 <kklimonda> @mgoddard yeah, I'd like to talk a bit about that and see if we can come to an agreement on how to implement that
15:17:59 <kklimonda> either is fine with me
15:18:15 <mgoddard> hrw: any more to say on py3 ldap ubuntu?
15:18:50 <hrw> mgoddard: py3 ldap? will see what can be done to solve it.
15:19:16 <mgoddard> ok. Let's move onto TLS first then FFU next
15:19:26 <kklimonda> @hrw if the patch you proposed doesn't work, we can use `equivs` to fake the package that is pulling python2
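(Editor's note: kklimonda's `equivs` fallback could look roughly like the sketch below. The package name is taken from the dependency chain hrw quoted earlier; the version string and description are illustrative assumptions, not verified against the Ubuntu 18.04 archive.)

```
# Hypothetical equivs control file (shim.ctl) producing an empty package
# that stands in for the transitional python3-pyldap, so python3-ldappool
# no longer drags python-ldap (and thus Python 2) into the image:
Section: python
Priority: optional
Standards-Version: 3.9.2
Package: python3-pyldap
Version: 99:0~shim
Description: empty shim for python3-pyldap
 Satisfies python3-ldappool's dependency without pulling in Python 2.
```

The shim would be built and installed with `equivs-build shim.ctl` followed by `dpkg -i` of the resulting .deb; whether the real py3 LDAP bindings then need listing as a `Depends:` would have to be checked against the actual packages.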
15:19:49 <hrw> mgoddard: go for it
15:20:03 <mgoddard> I'll summarise the issue for people then we can discuss
15:20:22 <mgoddard> currently in k-a we don't use TLS on the internal API network, only the external network
15:21:21 <openstackgerrit> Alex Schultz proposed openstack/kolla master: Remove tripleo-ui  https://review.opendev.org/662846
15:21:46 <mgoddard> there is a patch that disables the internal network, using external+TLS everywhere
15:21:52 <mgoddard> #link https://review.opendev.org/#/c/548407/
15:22:49 <mgoddard> this was built for a specific environment, but some people still want split API networks, but with TLS on the internal network
15:23:23 <mgoddard> there are two ways proposed to do this
15:24:02 <mgoddard> 1. use haproxy to terminate TLS, forward TCP to localhost backend or TLS to a remote backend with haproxy in front
15:24:28 <mgoddard> 2. TLS from haproxy to the backends
15:25:25 <mgoddard> 2 can save an extra TLS hop, but requires OpenStack APIs to support TLS. We'd need to make them all use apache to do this (many do already)
15:25:45 <mgoddard> hopefully that's a reasonable overview
15:25:59 <mgoddard> scottsol, kklimonda - what are your thoughts?
15:26:11 <mgoddard> (or anyone else)
15:26:31 <jroll> data point: we're doing something similar to (1) - we deploy haproxy on each controller node outside of kolla. that haproxy only fronts the services running on localhost. we have an (octavia, but can be whatever) load balancer up front that points at each haproxy as a backend.
15:26:49 <scottsol> I guess a good first step would be to evaluate how many services are missing support for running under wsgi
15:27:09 <jroll> so LB --TLS--> haproxy --noTLS--> local service
15:27:24 <kklimonda> I've started working on the second approach initially, focusing only on keystone first as that's a requirement for us. It feels like more of a classical approach, while the first approach reminds me of the sidecar pattern in Kubernetes
15:27:35 <scottsol> we'd maybe have to deploy a new container to handle the ones that don't support it or use option 1
15:27:58 <mgoddard> jroll: thanks, that's useful. 1. is like that but with kolla's haproxy providing both load balancer instances
15:28:25 <jroll> mgoddard: right :)
15:29:00 <jroll> re 2) all(?) openstack services should be able to be deployed as wsgi with apache/nginx/whatever in front, so it's doable. just need to do that work in kolla-ansible
15:29:29 <jroll> 2 seems ideal to me, it's a bit more flexible and less moving parts. just some extra work up front
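(Editor's note: as a sketch of what option 2 implies per service, a minimal apache vhost terminating TLS in front of keystone's wsgi script might look like the following. The port, paths, and cert locations are illustrative assumptions, not kolla-ansible's actual templates.)

```
Listen 5000
<VirtualHost *:5000>
    SSLEngine on
    SSLCertificateFile    /etc/keystone/certs/keystone.crt
    SSLCertificateKeyFile /etc/keystone/certs/keystone.key

    # keystone ships a wsgi entry point; other services would need
    # equivalent scripts for this pattern to apply
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
</VirtualHost>
```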
15:29:33 <kklimonda> with second approach, we also have to work on distributing certs to all the nodes
15:30:02 <mgoddard> kklimonda: isn't that true in both cases?
15:30:31 <mgoddard> kklimonda: regardless of whether haproxy or apache is terminating TLS, it needs a cert
15:30:34 <kklimonda> mgoddard: with the first approach we only need one (or three) certificates for haproxy.
15:31:12 <kklimonda> mgoddard: now, with services you'd probably need a certificate per host where the service is deployed, so that the IP/hostname matches?
15:31:52 <mgoddard> kklimonda: the cert needs to be valid not just on the VIP but on the host's IP. Or potentially we'd need separate certs
15:32:07 <kklimonda> yes, that would work
15:32:29 <mgoddard> kklimonda: because in 1. haproxy needs to listen on the host's IP, I think the problem is the same
15:32:34 <openstackgerrit> Jason Anderson proposed openstack/kolla master: Allow building images if parent was skipped  https://review.opendev.org/629275
15:32:42 <mgoddard> kklimonda: because we can have two paths
15:32:59 <kklimonda> @mgoddard yes, you are right
15:33:01 <mgoddard> haproxy - TCP -> backend
15:33:29 <mgoddard> haproxy (VIP) - TLS -> haproxy (IP) - TCP -> backend
15:33:37 <kklimonda> so no difference on that front
15:33:41 <mgoddard> yeah
15:33:50 <kklimonda> with first approach we are adding an additional optional hop
15:34:06 <kklimonda> client->haproxy->remote haproxy->service
15:34:24 <kklimonda> but I wonder how much latency that will add
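(Editor's note: the two paths mgoddard listed could be sketched in haproxy config terms roughly as below. Addresses, ports, and cert paths are placeholders; this is not kolla-ansible's actual template.)

```
# Path 1: haproxy terminates client TLS, plain TCP to a local backend
frontend keystone_internal
    bind 10.0.0.250:5000 ssl crt /etc/haproxy/vip.pem
    default_backend keystone_local

backend keystone_local
    server keystone 127.0.0.1:5000   # no TLS on this hop

# Path 2: the VIP haproxy instead re-encrypts to a per-host haproxy,
# which terminates that TLS and forwards plain TCP to its local service
backend keystone_remote
    server ctl1 10.0.0.11:5000 ssl verify required ca-file /etc/haproxy/ca.pem
```

This is also where the latency question bites: path 2 adds a second TLS handshake and an extra proxy hop per request.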
15:34:28 <mgoddard> scottsol makes a good point about assessing the existing support for apache
15:35:26 <kklimonda> mhm, I've assumed that all OS services have parity here, but that's a good point
15:35:35 <mgoddard> is that something you or stackedsax could pick up scottsol ?
15:36:27 <mgoddard> kklimonda: how far did you get with keystone? did you push a patch?
15:36:45 <scottsol> I will ask @stackedsax to do this
15:36:50 <stackedsax> yeah, I started looking into it.  I got a provisional list of which services had support and which didn't
15:36:59 <mgoddard> great
15:37:07 <scottsol> oh hey stackedsax
15:37:12 <kklimonda> can you share it? We could just make a list of all services in kolla and go one by one
15:37:13 <stackedsax> hihi
15:37:39 <mgoddard> we could start an etherpad to track this stuff?
15:37:56 <scottsol> that sounds like a good idea
15:38:15 <mgoddard> #link http://etherpad.openstack.org/p/kolla-internal-tls
15:38:16 <kklimonda> @mgoddard no, but I can push it - I'll align that with how the existing patch merges internal and public endpoints
15:38:29 <kklimonda> @mgoddard I'll do it tomorrow and open a review
15:38:38 <mgoddard> kklimonda: that would be great - please add to the etherpad
15:38:44 <kklimonda> k
15:39:23 <mgoddard> stackedsax: could you add your list to the pad?
15:39:32 * stackedsax rummages through his notes...
15:41:07 <mgoddard> I've also added this feature to the whiteboard (https://etherpad.openstack.org/p/KollaWhiteBoard)
15:42:14 <mgoddard> everyone happy to move on?
15:42:19 <kklimonda> @mgoddard one last thing
15:42:24 <mgoddard> sure
15:42:52 <kklimonda> @mgoddard is the approach of "merging" internal, admin and public endpoints, by sharing same IP and certificate, ok?
15:43:12 <kklimonda> any reason that we'd like to keep them separate?
15:43:21 <mgoddard> kklimonda: I think scottsol and stackedsax have a use case to keep them separate
15:43:51 <mgoddard> kklimonda: which do you prefer?
15:45:00 <kklimonda> @mgoddard well, so far merging resulted in less work, but if there is a use case for separation then we should support that
15:45:18 <mgoddard> scottsol: stackedsax ?
15:45:53 <scottsol> we'd prefer to keep them separate if possible so that we don't have to send admin api traffic over a public vip
15:46:53 <mgoddard> it should still be possible to use a single network by not enabling the external network
15:48:35 <kklimonda> mhm, in that case I think it makes sense to keep supporting separate endpoints
15:48:49 <mgoddard> seems reasonable
15:50:21 <mgoddard> it might be worth putting together a couple of paragraphs of design proposal for discussion once we've agreed on 1 or 2
15:50:31 <scottsol> We could be convinced otherwise if it adds a massive overhead of work, but I'd say for now we'd certainly prefer to keep the separation
15:50:32 <mgoddard> so we're all on the same page
15:50:59 <stackedsax> I added my list to the etherpad.  Can I turn off authorship colors?  I'm pleased with my magenta, but my eyes are also bleeding.
15:51:13 <kklimonda> you can change your color ;)
15:51:32 <kklimonda> or disable it altogether in settings
15:52:46 <stackedsax> yeah, I was wondering whether y'all were fine with me turning off the colors for the moment
15:53:15 <stackedsax> or if you found it useful to keep track
15:53:52 <mgoddard> I don't mind too much, it can be useful
15:54:00 <priteau> I thought Etherpad colours were a personal setting
15:54:21 <mgoddard> just added a requirement not to break existing setups with a cert only for the VIP
15:54:41 <mgoddard> to do that the second leg of encryption would need to be optional
15:54:42 <stackedsax> priteau: you can turn them off altogether if you like
15:54:51 <stackedsax> I'll leave them on... for now :D
15:55:02 <mgoddard> good discussion, let's move onto FFU for mnasiadka
15:55:30 <mgoddard> #link https://review.opendev.org/614508
15:55:40 <mgoddard> FFU spec by spsurya
15:55:59 <stackedsax> oh, one last little tidbit: I'd love for someone to go through my list with me and identify which services we should be focusing on.  Doesn't have to be now, I'd just like to know who to ping.
15:56:08 <mgoddard> There was also https://review.opendev.org/619501, for stopping containers
15:57:16 <mgoddard> stackedsax: that is quite related to our recent agreement to declare a level of support for each service
15:57:17 <scottsol> stackedsax I can help you with that
15:57:29 <mnasiadka> mgoddard: just wanted to know if somebody plans on working on it - previously it was mainly egonzalez
15:58:06 <mgoddard> mnasiadka: I'm not aware of anyone. egonzalez and spsurya were doing bits, but I don't think it was finished
15:58:25 <mnasiadka> mgoddard: it wasn’t
15:58:34 <mgoddard> mnasiadka: do you need it?
15:59:20 <mnasiadka> mgoddard: not particularly now ;)
16:00:02 <mgoddard> stackedsax: you can try 'grep -rn wsgi docker/' in kolla to see which are using apache/wsgi
16:00:36 <mnasiadka> mgoddard: also it would be bad to see it abandoned - I think egonzalez worked on this for some time
16:01:04 <stackedsax> mgoddard: will do
16:01:24 <stackedsax> scottsol: cool, will ping you offline from here
16:01:27 <mgoddard> mnasiadka: sure. If someone wants to pick it up that would be great
16:01:48 <mgoddard> mnasiadka: if you're looking for people to help you could ask the ML
16:01:56 <mnasiadka> mgoddard: will do
16:02:17 <mgoddard> would at least be interested to know what remains to be done
16:02:35 <mgoddard> ok, we're over time
16:02:47 <mgoddard> thanks everyone, some good discussion today
16:02:52 <mgoddard> #endmeeting