15:02:46 #startmeeting kolla
15:02:47 Meeting started Wed Aug 21 15:02:46 2019 UTC and is due to finish in 60 minutes. The chair is mgoddard. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:50 The meeting name has been set to 'kolla'
15:02:52 #topic rollcall
15:03:05 Raise those hands now
15:03:10 o/
15:03:12 o/
15:03:18 Lo
15:03:27 o/
15:03:37 o/ (for about 30 mins)
15:04:00 o/
15:04:05 #topic agenda
15:04:07 * Roll-call
15:04:10 * Announcements
15:04:12 ** Kayobe RC1 and stable/stein branches created
15:04:14 * Review action items from last meeting
15:04:16 * Kolla whiteboard https://etherpad.openstack.org/p/KollaWhiteBoard
15:04:18 * Kayobe Stein release status
15:04:20 * Train release planning
15:04:22 * Ceph ansible migration
15:04:24 * Kolla Ansible TLS Internal API
15:04:26 #topic announcements
15:04:28 #info Kayobe RC1 and stable/stein branches created
15:04:53 finally :-)
15:05:00 congrats kayobees
15:05:05 I think there are a couple more bugs we might want to fix and/or backport
15:05:14 then we should be good to release
15:05:24 Any others?
15:05:57 #topic review action items from last meeting
15:06:00 mgoddard to ask infra about restarting gerrit
15:06:03 mgoddard or someone else to check stable backports
15:06:29 1 - just did it, clarkb said they will schedule one soon
15:06:40 2 - I backported a bunch of stuff last week, mostly merged now
15:06:55 I guess we should do stable branch releases
15:06:58 * yoctozepto the merge ensurer
15:07:10 #action mgoddard to release stable branches
15:07:19 sure, why not, just merge me those latest Zun fixes
15:07:20 #topic Kolla whiteboard https://etherpad.openstack.org/p/KollaWhiteBoard
15:07:50 had some CI hiccups last week, thanks for fixing
15:08:00 I think we're green again now?
15:08:06 green again
15:08:17 I think TripleO is back in green as well
15:08:31 nope
15:08:38 was for a moment though, for sure
15:08:43 could be something random
15:08:52 cloudnull will let us know when they fix themselves
15:09:25 k
15:09:38 please everyone keep priority feature status up to date
15:10:29 moved ubuntu py3 to done
15:11:31 ok, move on
15:11:34 #topic Kayobe Stein release status
15:11:40 Covered it earlier
15:12:10 Any patches in particular we need?
15:12:21 priteau, dougsz, jovial?
15:13:15 Would be nice to get the venvs update merged for the release, still need to review it
15:13:17 https://review.opendev.org/#/c/666635/ could be nice
15:13:26 priteau: +1
15:13:37 (that's the one)
15:13:46 added RP+1
15:13:58 I think that's the main one
15:14:09 What about https://review.opendev.org/#/c/670502/?
15:14:19 (iPXE boot with UEFI compute nodes)
15:14:20 I was about to suggest it
15:14:28 It's quite useful to have it
15:14:30 oh yeah
15:14:35 that would also be good
15:14:53 What about the Arista code from Stig?
15:15:48 too late I think
15:15:51 he's out this week
15:16:19 If the code is ready, we could merge and add the release note ourselves?
15:16:36 that's true
15:16:59 is that something you have time for, priteau?
15:17:21 Writing the release note? Yes, shouldn't take long
15:17:25 thanks
15:17:47 Are we fairly confident that the Arista code works well enough to be shipped?
15:18:02 Well, I assume Stig tested it...
15:18:11 it looked ok to me
15:18:15 I guess we can do a 6.0.1 if needed
15:18:19 yeah
15:18:41 ok, I think we're good.
Let's aim to have a release before next meeting
15:18:50 hi jovial
15:19:00 just finishing up the kayobe stein discussion
15:19:02 hey
15:19:27 if there are any patches you want in the release, ping me
15:19:39 #topic Train release planning
15:20:26 #link https://releases.openstack.org/train/schedule.html
15:21:00 It's probably a good time to define our own delayed release schedule
15:21:21 Main feature freeze is Sep 09 - Sep 13
15:21:59 I think for Stein we lagged by 3 weeks, so Sep 30 - Oct 04?
15:22:30 there is no new Ceph, so that sounds reasonable :D
15:23:43 let's be optimistic and aim for a 3 week lag on the release :)
15:23:44 sounds reasonable
15:23:49 main release is Oct 14 - Oct 18
15:24:07 so we should aim for Nov 04 - Nov 08
15:24:34 of course we're limited by RDO and others here, so let's not kick ourselves if we miss :)
15:24:47 but hopefully we don't have last minute mariadb issues
15:25:14 mgoddard: only binaries, sources should be safer
15:25:24 let's aim for that on our sources
15:26:20 there is also CentOS 8 to trip us up this time
15:27:00 I've added those dates to the release status section on the whiteboard
15:27:09 and added some risks to the release
15:27:22 any other risks, apart from RDO and centos 8?
15:27:36 well, for binaries we can test packages from trunk.rdoproject.org for CentOS at least
15:27:38 centos 8 is not ready yet
15:27:56 yoctozepto: well, it's semi-ready, so in two weeks we might be in a different position
15:28:01 nor is 7.7 for that matter, there is some serious lag
15:28:17 #link https://wiki.centos.org/About/Building_8
15:28:45 do we have to promise releasing for centos 8?
centos-7-based images should be fine for train
15:28:46 only one step left incomplete
15:29:02 it means we don't easily get py3 on centos
15:29:19 yoctozepto: for now rdo builds on top of centos7, so we are safe
15:29:22 not sure if RDO train will support centos7
15:29:39 but I remember them claiming they will support centos 8 only
15:29:44 +1
15:30:08 oh noez
15:30:14 -100
15:30:44 anything else to discuss for train?
15:30:51 oh, hi, sorry I'm late
15:30:59 any feature progress that needs to be discussed?
15:31:02 hi stackedsax, np
15:31:33 ok, let's get to the main course
15:31:46 #topic Ceph ansible migration
15:31:55 oh boy
15:31:58 mnasiadka has been doing some good investigation here
15:32:21 yes, hacking ceph-ansible stable-4.0 so it works
15:32:33 but the investigation is in the spec, more or less
15:32:43 https://review.opendev.org/#/c/544980/
15:33:16 bottom line is - there are no ceph container images for Ubuntu
15:33:26 mnasiadka to provide them
15:33:28 there are for Debian, openSUSE and CentOS
15:33:46 yoctozepto: if I'm bored - then maybe :)
15:34:06 did you collect intel as to why they are gone?
15:34:27 for now I propose to work on CentOS, so we get some experience with deployment, migration, etc
15:34:41 yes, definitely
15:34:44 yoctozepto: simply there was no maintainer; funny thing is, ceph-ansible started with Ubuntu as the only supported platform
15:34:55 yeah, I heard so
15:34:56 yoctozepto: but you know, Red Hat and so on :)
15:35:45 Ubuntu shouldn't be hard to add back based on the PR that removed it - the problematic thing is nfs-ganesha 2.8 vs 2.7 in CentOS, and it's only built for xenial - but we could probably survive
15:36:06 rebuild ganesha?
15:36:09 should not be hard
15:36:11 I don't think there are a lot of users of ceph-nfs
15:36:20 me neither
15:36:29 I think CephFS and Manila have a bigger user base in the OpenStack world
15:37:25 So, bottom line is - please review the spec; I'll share my deployment experiences in the form of updating the external Ceph docs
15:37:34 I think we need to know what they would expect in terms of maintenance for ubuntu, and whether anyone is prepared to take it on
15:37:52 mgoddard: judging by the fact Debian has not been updated in 8 months - I don't think that's a lot :)
15:38:19 perhaps not, but maybe they would want someone from their core team to be interested
15:38:45 it's the key thing that could block the whole proposal, right
15:38:48 mgoddard: and still the ceph-container builder pushes only centos builds to docker hub - so we would need to add container image building on our side, I think
15:39:14 mgoddard: I'll try to find out what maintaining it means; if it's not a lot of work, I think I can do it
15:39:22 is that process automated?
15:40:46 mgoddard: you mean the builds? there is a travis job that builds the centos ones; it's as simple as make FLAVORS="what you want to build" build, and then pushing it somewhere
15:41:12 would be better if they just published ubuntu
15:41:30 they don't publish debian, so I guess that's the same department
15:41:33 I'll ask about it
15:41:57 and what if we have to do baremetal ceph
15:42:07 how well is it supported there?
15:42:13 I suppose we'd need to document building their images, but I wouldn't really want to start pushing them to our dockerhub or anything
15:42:26 yoctozepto: better than containers, I think :)
15:44:00 any more to say on ceph?
15:44:12 mnasiadka: ^ baremetal
15:44:58 baremetal means not containerized, just packages installed in the OS?
15:45:07 yes
15:46:41 well, that's two fewer settings in group_vars than containerized?
15:47:00 Do we want to have that tested in the CI as well?
I don't think it goes well with the Kolla philosophy :)
15:47:11 ceph is not openstack
15:47:16 if it feels better outside
15:47:24 then so be it
15:48:04 do we have any info on how many people use ceph containers vs baremetal?
15:48:05 though I would prefer containers
15:49:15 mgoddard: nope, the ceph user survey did not include that question in 2018
15:49:46 ok
15:49:57 The free-of-charge Linux distributions - Ubuntu (65.9%), Debian (8.6%), CentOS (28%), openSUSE (0.34%) - combined make up the largest share of deployments. Red Hat Enterprise Linux is used by 8.9%, and SUSE Linux Enterprise by 2.3%.
15:50:05 but that shows Ubuntu is on top :D
15:50:46 https://ceph.com/ceph-blog/ceph-user-survey-2018-results/
15:51:01 that suggests either people are using old releases of ceph-containers, or not using ceph-containers
15:51:14 i.e. baremetal
15:51:56 unless we have any more, let's move on to the last topic
15:52:05 Most users are using ceph-deploy instead of ceph-ansible
15:52:12 so that answers your question
15:52:19 ok
15:52:34 #topic Kolla Ansible TLS Internal API
15:52:54 stackedsax, scottsol, generalfuzz, kklimonda: how's it going?
15:53:16 generalfuzz has been working away at this
15:53:24 essentially, we'll be pursuing the plan that seemed to be the consensus from our discussion last week: enabling TLS termination with each and every service
15:53:53 I am working on bringing the already in-progress transactions to completion
15:54:03 (generalfuzz had to step away, so I'm speaking for him at the moment)
15:54:06 oh, or that
15:54:11 hi generalfuzz
15:54:19 great, the frontend patch is looking close
15:54:51 I replied to your comments about documentation, we can talk about that after the meeting
15:54:59 sure
15:55:08 so what are the next steps from here?
15:55:33 after that, we wanted to help get some of scottsol's reviews worked through, and then start the long road of going through each service and getting them going with TLS
15:55:50 Next I am tackling https://review.opendev.org/548407, which introduces a new test that needs fixing
15:56:43 With the goal of completing all transactions in https://review.opendev.org/#/q/topic:bp/add-ssl-internal-network+(status:open+OR+status:merged)
15:57:18 generalfuzz: I think there's a bit of overlap between that one and https://review.opendev.org/664517
15:57:50 are all PRs linked to bp/add-ssl-internal-network? it would ease the pain of reviewing :)
15:58:28 Ok, do you have a recommendation on which one I should focus on?
15:59:54 https://review.opendev.org/548407 uses a slightly different approach to the currently proposed design
16:00:28 we really weren't keen on 548407's approach
16:00:57 the changes to the tests and certificate generation are probably worth extracting, but I think much of the rest is covered in https://review.opendev.org/#/c/664517/2
16:01:04 I'm on board with not being keen
16:01:10 or rather, scottsol and co. would prefer not to go that route for our own infrastructure
16:01:55 I quite like that they are separate patches, so maybe we need a new patch (or two) with the internal certificate generation and test updates?
16:02:18 keeping the self-signed certs patch as it is
16:02:21 does that make sense?
16:02:45 then we can abandon Clint's patch and stop confusing people :)
16:02:53 Yes. One patch for internal certificate generation and testing
16:03:25 sure
16:04:03 ok, we're out of time
16:04:16 thanks for joining, everyone
16:04:23 Just FYI, I'm on vacation next week, but I will be back at it on September 3rd
16:04:26 \o
16:04:29 #endmeeting