15:00:41 #startmeeting kolla
15:00:42 Meeting started Wed Jun 5 15:00:41 2019 UTC and is due to finish in 60 minutes. The chair is mgoddard. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:45 o/
15:00:46 The meeting name has been set to 'kolla'
15:00:54 o/
15:00:55 o/
15:01:03 o/
15:01:15 \o
15:01:27 o/
15:01:55 \o
15:01:59 I guess people like having the meeting in here then :)
15:02:18 #topic agenda
15:02:20 * Roll-call
15:02:23 * Announcements
15:02:25 ** IRC meetings now held in #openstack-kolla
15:02:27 ** Virtual PTG summary email on openstack-discuss: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006793.html
15:02:29 ** Vote for Train cycle priorities: https://etherpad.openstack.org/p/kolla-train-priorities
15:02:31 ** Lots of emails on openstack-discuss for PTG follow ups. Please read and reply
15:02:33 * Review action items from last meeting
15:02:35 * Kolla whiteboard https://etherpad.openstack.org/p/KollaWhiteBoard
15:02:37 * Stein release status
15:02:39 #topic announcements
15:02:43 #info IRC meetings now held in #openstack-kolla
15:02:55 #info Virtual PTG summary email on openstack-discuss: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006793.html
15:03:09 #info Vote for Train cycle priorities: https://etherpad.openstack.org/p/kolla-train-priorities
15:03:19 #info Lots of emails on openstack-discuss for PTG follow ups. Please read and reply
15:03:36 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006793.html
15:03:49 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006895.html
15:03:54 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006896.html
15:04:00 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006897.html
15:04:06 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006898.html
15:04:19 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006899.html
15:04:25 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006900.html
15:04:37 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006901.html
15:04:42 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006902.html
15:04:52 * hrw in two meetings at same time
15:05:26 I got a bit email happy. Some are requesting feedback, so please read and reply if it affects you
15:05:46 mgoddard: thanks for doing it.
15:06:05 np
15:06:20 #topic Review action items from last meeting
15:06:35 mgoddard to try again again to disable ocata publishing job
15:06:37 mgoddard to investigate bifrost CI failures
15:06:38 btw - ubuntu:18.04 binary images will have py2/py3 each time. python3-ldappool depends on both py2 and py3 ;(
15:07:03 hrw: doh. Do we need to follow up with ubuntu?
15:07:16 there will be point
15:07:19 hrw what is the dependency path pulling python2 ?
15:07:35 #action mgoddard to try again again to disable ocata publishing job
15:07:48 didn't get around to it still
15:07:49 python3-ldappool -> python3-pyldap -> python-ldap -> * -> python2
15:08:10 ouch
15:08:12 I did fix the bifrost jobs. They now only fail if they run on the OVH cloud
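A sketch of the equivs workaround that comes up just below (and again at 15:19:26): the control file here is only an illustration, with an assumed package name, version and description rather than anything actually built during the meeting.

  Section: python
  Priority: optional
  Standards-Version: 3.9.2
  Package: python-ldap
  Version: 99
  Description: empty stub so python3-pyldap no longer drags in python2
   Placeholder built with equivs to satisfy the chain
   python3-ldappool -> python3-pyldap -> python-ldap without installing python2.

Feeding such a file to equivs-build yields an installable stub .deb; whether that would be preferable to hrw's proposed fix was left open in the discussion.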
15:08:19 and python3-pyldap is a transition package
15:08:36 yep
15:08:48 depending on how much we care about that, we could create a fake package that satisfies the dependency
15:08:59 #topic Kolla whiteboard https://etherpad.openstack.org/p/KollaWhiteBoard
15:09:50 kklimonda: have an idea to fix it, but let's go with the agenda now
15:10:09 CI mostly looking good
15:10:29 still need this one to land to fix bifrost on rocky: https://review.opendev.org/661958
15:11:03 +2+2+w+z2 and not landed?
15:11:19 ah. rocky
15:11:29 oops, https://review.opendev.org/#/c/662029/
15:11:31 #link https://review.opendev.org/#/c/662029/
15:11:42 thanks
15:12:09 once we have train priorities we can add those to the whiteboard
15:12:28 Now it will land :)
15:12:35 thanks
15:12:48 anything else from the whiteboard?
15:13:29 #topic Stein release status
15:13:52 thanks to mnasiadka we now have Ceph nautilus on centos \o/
15:14:23 the patch has been ported to stein, after which we should finally be able to use RDO Stein packages
15:15:02 probably a few patches to merge at that point, then finally we'll have a stein release
15:15:10 mnasiadka, nice work, dude. :)
15:15:25 anyone else know of any stein blocking issues at this point?
15:15:28 kklimonda: https://pastebin.com/sXT52g6P should fix it
15:16:17 #topic Open discussion
15:16:31 Nice and short on the formalities today
15:16:54 @hrw ah, so no other binary packages actually depend on `python3-pyldap` itself? if so, then yeah - seems like it should work.
15:16:56 kklimonda and scottsol I think you wanted to discuss TLS?
15:17:14 and stackedsax if you're awake?
15:17:15 * hrw on py3 - in queue
15:17:25 I have a question around FFU - is anybody planning to work on that?
15:17:38 yes :)
15:17:38 we can do py3 ldap first if you like kklimonda and hrw ?
15:17:41 @mgoddard yeah, I'd like to talk a bit about that and see if we can come to an agreement on how to implement that
15:17:59 either is fine with me
15:18:15 hrw: any more to say on py3 ldap ubuntu?
15:18:50 mgoddard: py3 ldap? will see what can be done to solve it.
15:19:16 ok. Let's move on to TLS first, then FFU next
15:19:26 @hrw if the patch you proposed doesn't work, we can use `equivs` to fake the package that is pulling python2
15:19:49 mgoddard: go for it
15:20:03 I'll summarise the issue for people then we can discuss
15:20:22 currently in k-a we don't use TLS on the internal API network, only the external network
15:21:21 Alex Schultz proposed openstack/kolla master: Remove tripleo-ui https://review.opendev.org/662846
15:21:46 there is a patch that disables the internal network, using external+TLS everywhere
15:21:52 #link https://review.opendev.org/#/c/548407/
15:22:49 this was built for a specific environment; some people still want split API networks, but with TLS on the internal network
15:23:23 there are two ways proposed to do this
15:24:02 1. use haproxy to terminate TLS, forwarding TCP to a localhost backend or TLS to a remote backend with haproxy in front
15:24:28 2. TLS from haproxy to the backends
15:25:25 2 can save an extra TLS hop, but requires OpenStack APIs to support TLS. We'd need to make them all use apache to do this (many do already)
15:25:45 hopefully that's a reasonable overview
15:25:59 scottsol, kklimonda - what are your thoughts?
15:26:11 (or anyone else)
15:26:31 data point: we're doing something similar to (1) - we deploy haproxy on each controller node outside of kolla. that haproxy only fronts the services running on localhost. we have an (octavia, but can be whatever) load balancer up front that points at each haproxy as a backend.
15:26:49 I guess a first good step would be to evaluate how many services are missing the support for wsgi modifications
15:27:09 so LB --TLS--> haproxy --noTLS--> local service
15:27:24 I've started working on the second approach in the beginning, focusing only on keystone first as that's a requirement for us. It feels more like a classical approach, while the first approach reminds me of the sidecar pattern in kubernetes
15:27:35 we'd maybe have to deploy a new container to handle the ones that don't support it, or use option 1
15:27:58 jroll: thanks, that's useful. 1. is like that but with kolla's haproxy providing both load balancer instances
15:28:25 mgoddard: right :)
15:29:00 re 2) all(?) openstack services should be able to be deployed as wsgi with apache/nginx/whatever in front, so it's doable. just need to do that work in kolla-ansible
15:29:29 2 seems ideal to me, it's a bit more flexible and has fewer moving parts. just some extra work up front
15:29:33 with the second approach, we also have to work on distributing certs to all the nodes
15:30:02 kklimonda: isn't that true in both cases?
15:30:31 kklimonda: regardless of whether haproxy or apache is terminating TLS, it needs a cert
15:30:34 mgoddard: with the first approach we only need one (or three) certificates for haproxy.
15:31:12 mgoddard: now, with services you'd probably need a certificate per host where the service is deployed, so that the IP/hostname matches?
15:31:52 kklimonda: the cert needs to be valid not just on the VIP but on the host's IP. Or potentially we'd need separate certs
15:32:07 yes, that would work
15:32:29 kklimonda: because in 1. haproxy needs to listen on the host's IP, I think the problem is the same
15:32:34 Jason Anderson proposed openstack/kolla master: Allow building images if parent was skipped https://review.opendev.org/629275
15:32:42 kklimonda: because we can have two paths
15:32:59 @mgoddard yes, you are right
15:33:01 haproxy - TCP -> backend
15:33:29 haproxy (VIP) - TLS -> haproxy (IP) - TCP -> backend
15:33:37 so no difference on that front
15:33:41 yeah
15:33:50 with the first approach we are adding an additional optional hop
15:34:06 client->haproxy->remote haproxy->service
15:34:24 but I wonder how much latency that will add
15:34:28 scottsol makes a good point about assessing the existing support for apache
15:35:26 mhm, I've assumed that all OS services have parity here, but that's a good point
15:35:35 is that something you or stackedsax could pick up, scottsol?
15:36:27 kklimonda: how far did you get with keystone? did you push a patch?
15:36:45 I will ask @stackedsax to do this
15:36:50 yeah, I started looking into it. I got a provisional list of which services had support and which didn't
15:36:59 great
15:37:07 oh hey stackedsax
15:37:12 can you share it? We could just make a list of all services in kolla and go one by one
15:37:13 hihi
15:37:39 we could start an etherpad to track this stuff?
15:37:56 that sounds like a good idea
15:38:15 #link http://etherpad.openstack.org/p/kolla-internal-tls
15:38:16 @mgoddard no, but I can push it - I'll that with how the existing patch merges internal and public endpoints
15:38:29 @mgoddard I'll do it tomorrow and open a review
15:38:38 kklimonda: that would be great - please add to the etherpad
15:38:44 k
15:39:23 stackedsax: could you add your list to the pad?
15:39:32 * stackedsax rummages through his notes...
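To make the two options under discussion concrete, a minimal haproxy sketch (addresses, ports and certificate paths are placeholders; this is not the configuration kolla-ansible actually templates out):

  frontend keystone_internal
      bind 192.0.2.10:5000 ssl crt /etc/haproxy/internal-vip.pem
      default_backend keystone_internal_api

  # Option 1: terminate TLS at haproxy and forward plain TCP, either to a
  # service on localhost or to a second haproxy in front of the remote backend.
  backend keystone_internal_api
      server controller1 127.0.0.1:5000 check

  # Option 2: re-encrypt from haproxy to backends that serve TLS themselves,
  # e.g. the API running under Apache/mod_wsgi with a per-host certificate.
  backend keystone_internal_api_tls
      server controller1 192.0.2.11:5000 ssl verify required ca-file /etc/haproxy/internal-ca.pem check

Option 2 drops the extra hop but, as noted above, depends on every API being able to run behind Apache and on distributing per-host certificates.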
15:41:07 I've also added this feature to the whiteboard (https://etherpad.openstack.org/p/KollaWhiteBoard)
15:42:14 everyone happy to move on?
15:42:19 @mgoddard one last thing
15:42:24 sure
15:42:52 @mgoddard is the approach of "merging" internal, admin and public endpoints, by sharing the same IP and certificate, ok?
15:43:12 any reason that we'd like to keep them separate?
15:43:21 kklimonda: I think scottsol and stackedsax have a use case to keep them separate
15:43:51 kklimonda: which do you prefer?
15:45:00 @mgoddard well, so far merging resulted in less work, but if there is a use case for separation then we should support that
15:45:18 scottsol: stackedsax ?
15:45:53 we'd prefer to keep them separate if possible so that we don't have to send admin api traffic over a public vip
15:46:53 it should still be possible to use a single network by not enabling the external network
15:48:35 mhm, in that case I think it makes sense to keep supporting separate endpoints
15:48:49 seems reasonable
15:50:21 it might be worth putting together a couple of paragraphs of design proposal for discussion once we've agreed on 1 or 2
15:50:31 We could be convinced if it adds a massive overhead of work, but I'd say for now we'd certainly prefer to keep the separation
15:50:32 so we're all on the same page
15:50:59 I added my list to the etherpad. Can I turn off authorship colors? I'm pleased with my magenta, but my eyes are also bleeding.
15:51:13 you can change your color ;)
15:51:32 or disable it altogether in settings
15:52:46 yeah, I was wondering whether y'all were fine with me turning off the colors for the moment
15:53:15 or if you found it useful to keep track
15:53:52 I don't mind too much, it can be useful
15:54:00 I thought Etherpad colours were a personal setting
15:54:21 just added a requirement not to break existing setups with a cert only for the VIP
15:54:41 to do that the second leg of encryption would need to be optional
15:54:42 priteau: you can turn them off altogether if you like
15:54:51 I'll leave them on... for now :D
15:55:02 good discussion, let's move on to FFU for mnasiadka
15:55:30 #link https://review.opendev.org/614508
15:55:40 FFU spec by spsurya
15:55:59 oh, one last little tidbit: I'd love for someone to go through my list with me and identify which services we should be focusing on. Doesn't have to be now, I'd just like to know who to ping.
15:56:08 There was also https://review.opendev.org/619501, for stopping containers
15:57:16 stackedsax: that is quite related to our recent agreement to declare a level of support for each service
15:57:17 stackedsax I can help you with that
15:57:29 mgoddard: just wanted to know if somebody plans on working on it - previously it was mainly egonzalez
15:58:06 mnasiadka: I'm not aware of anyone. egonzalez and spsurya were doing bits, but I don't think it was finished
15:58:25 mgoddard: it wasn't
15:58:34 mnasiadka: do you need it?
15:59:20 mgoddard: not particularly now ;)
16:00:02 stackedsax: you can try 'grep -rn wsgi docker/' in kolla to see which are using apache/wsgi
16:00:36 mgoddard: also would be bad to see it abandoned - I think egonzalez worked on this for some time
16:01:04 mgoddard: will do
16:01:24 scottsol: cool, will ping you offline from here
16:01:27 mnasiadka: sure. If someone wants to pick it up that would be great
16:01:48 mnasiadka: if you're looking for people to help you could ask the ML
16:01:56 mgoddard: will do
16:02:17 would at least be interested to know what remains to be done
16:02:35 ok, we're over time
16:02:47 thanks everyone, some good discussion today
16:02:52 #endmeeting