15:01:14 #startmeeting kolla
15:01:18 Meeting started Wed Apr 10 15:01:14 2019 UTC and is due to finish in 60 minutes. The chair is mgoddard. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:22 The meeting name has been set to 'kolla'
15:01:30 o/
15:01:34 Hello everyone
15:01:37 #topic rollcall
15:01:41 o/
15:01:44 o/
15:02:31 \o\
15:02:34 o/
15:03:48 #topic announcements
15:03:58 #info Kolla master is no longer in feature freeze
15:04:08 blockers unblocked
15:04:14 Development can now start for Train cycle features
15:04:38 stein released then?
15:04:49 JamesBenson: not yet, but we have a stable branch
15:04:54 k
15:05:01 #info RC1 and stable/stein branches have been created
15:05:26 this will not be our last RC, but it is good progress
15:05:35 #info Stable branch releases this week (hopefully)
15:05:58 Waiting for reviews on https://review.openstack.org/650411 and https://review.openstack.org/650412
15:06:13 (from release team)
15:06:49 #action mgoddard to nudge release team about stable releases
15:06:55 #info mgoddard away 2019-04-16 to 2019-04-23, then at summit/PTG, returning 2019-05-07
15:07:27 is someone else able to take the meeting next week?
15:08:15 if not, please cancel it
15:08:18 #topic Review action items from last meeting
15:08:40 Only chason and I were present for the last meeting, so no action items :)
15:08:48 #topic Kolla whiteboard https://etherpad.openstack.org/p/KollaWhiteBoard
15:09:00 CI is green \o/
15:09:42 mostly focusing on bugs & stability right now, but please keep the whiteboard up to date with features being worked on for Train
15:10:14 #topic Stein release status
15:10:30 I think we're looking fairly good right now
15:10:35 We have a stein branch
15:10:51 Need to switch to stein RDO repos when available
15:11:11 #action mgoddard to propose stein RDO switch patch
15:11:30 Anyone else have thoughts on how it's looking?
15:11:35 Anything we should be doing?
15:12:03 sorry to say, but I have not looked much at how we are with stein in k-a
15:12:04 Sorry for being late. ;(
15:13:14 chason: np
15:13:15 RDO Stein packages might take a while, https://bugs.centos.org/view.php?id=15991
15:13:16 mgoddard I think we should give more love to the kolla-cli project.
15:13:37 chason: I did think about that earlier. We haven't touched it
15:14:08 #link https://bugs.centos.org/view.php?id=15991
15:14:28 kolla-cli is in a bad state, even functional tox jobs are failing
15:14:55 does anyone use it?
15:15:18 I doubt it, it might be PTG material to discuss its future.
15:15:38 http://mirror.centos.org/centos-7/7/cloud/x86_64/openstack-stein/ is starting to be populated
15:16:17 are we obliged to release it, given that it's under kolla governance?
15:16:27 x86-64 only for now, but we may switch now
15:16:47 never used kolla-cli, not even looked at it ;(
15:17:18 mgoddard: looking at branches/tags - it was never released
15:17:20 I think it was mostly an Oracle code drop, then they stopped contributing :)
15:17:30 mgoddard: true
15:17:45 mgoddard Maybe kolla-cli is not good enough, so many people did not use it.
15:17:46 data point for "does anyone use it": I've been using placement for about 6 months, and I just learned of kolla-cli now :)
15:17:55 er, s/placement/kolla/
15:18:26 chason: so then I think it needs a lot of love :)
15:18:55 I don't know if it's about the quality of the tool, more about awareness
15:19:45 And its documentation is not comprehensive enough either.
15:19:57 it looked interesting, but overlaps with kayobe so we have never tried it
15:20:19 And CI is broken, so in order to extend it we would need to fix it.
15:20:33 ok, I suggest we continue to not release it until someone complains
15:20:42 we can discuss its future at the PTG
15:20:54 anyone disagree?
15:21:26 agree, let's discuss this stuff at the PTG.
15:21:43 +1
15:21:45 makes sense to me
15:22:13 I've added it to the PTG agenda
15:22:16 #topic Bug review & triage process
15:22:28 Did you add this one mnasiadka?
15:23:11 I think we both did, just to remark that we need to start keeping an eye on bugs :)
15:23:46 Yeah. I've started doing it a bit more often, and going through the backlog every so often
15:24:14 I think we have a lot of debt here
15:24:31 Good, me too - we have a lot of bugs that have been moved between milestones for n releases
15:24:43 it would be nice if we could share the burden of cleaning up
15:24:56 any volunteers?
15:25:44 I knew that would make it go quiet :)
15:27:13 guess it's me and mnasiadka then
15:27:33 just have to take it slow
15:27:53 we could do with a process to make sure we're not going over the same bugs
15:28:14 I think it would be good to have some... bug report/dashboard, and at least start handling new bugs
15:28:43 mgoddard mnasiadka I will help you if I have time. :)
15:28:50 thanks chason
15:28:57 But I haven't investigated that yet, launchpad default pages are not the best :)
15:29:28 I do wonder how much effort we should be putting into LP if we are going to be forced to use storyboard at some point
15:30:08 I wonder how teams are managing assigning bugs to milestones between storyboard and lp
15:31:04 my guess is they're not
15:32:05 So that's the only reason to stick to lp now, it's nice to see the bugs that need to be fixed before we do a release.
15:32:45 yeah. it could be done with tags, but I don't know how you track which release it's fixed in
15:33:11 And we can use the lp api to list bugs closed after the latest release, to have a pointer on whether we need to do a release
15:33:21 mgoddard: this is another thing ;)
15:33:44 mnasiadka: do you think you could write up a short process for bug triage and backlog grooming?
15:34:13 mgoddard: yeah, I'll do it.
15:34:27 mnasiadka: some way that a distributed team could share the load without going over the same issues
15:34:46 e.g. check for NEW bugs, move to another state when processed
15:34:57 obviously a bit more detail than that
15:35:20 Obviously ;)
15:36:00 #action mnasiadka to write up a short bug triage & grooming process in docs
15:36:07 thanks
15:36:15 #topic Ocata and Pike - Extended Maintenance vs EOL
15:36:28 let's EOL ocata
15:36:42 +1
15:36:49 for context - ocata and (soon) pike are in extended maintenance
15:36:58 in this phase, there are no more releases
15:37:00 no opinion on pike - for me it can go eol too
15:37:05 but the branch may be kept alive
15:37:22 first branch I used was queens
15:37:23 officially, the branch may move to EOL after CI has been failing for 6 months
15:37:47 ocata images have now been failing for 2-3 months
15:37:48 and ocata CI is broken due to kafka
15:38:04 we could EOL in july if we leave it
15:38:04 But we are not obliged to fix bugs if somebody raises them, when a branch is in EM?
15:38:20 https://review.openstack.org/651112 fixes ocata but is abandoned to make it EOL
15:38:20 mnasiadka: I don't think so
15:38:39 are we ever obliged to fix bugs? :)
15:38:49 sorry for being late
15:38:56 hi egonzalez, no
15:38:58 mgoddard: one may joke that it depends on who reports them
15:38:59 np
15:39:09 hrw: true :)
15:39:11 we're not obligated to maintain
15:39:28 only people who wish to keep the branch active should do it
15:39:36 yep, what egonzalez said
15:39:50 btw, I just remembered I said we might use ocata in that thread - update, we won't be using ocata :)
15:40:16 so if someone turns up and wants ocata, I'm happy to review patches to keep it alive, but I don't want to have to make patches myself
15:40:19 cores still have to review, but not maintain anything
15:40:36 if it is broken, it is up to the people who want to maintain it
15:40:40 sounds like we're happy to let ocata die (RIP)
15:41:03 currently, the periodic publishing job runs and fails every day.
I propose we disable it on ocata
15:41:27 I feel embarrassed every time I see the email :)
15:42:29 +2
15:42:31 Yeah, this is embarrassing :)
15:42:46 #action mgoddard to stop ocata publish job
15:43:04 how about pike? are we happy to keep it in EM for now?
15:43:56 when it comes to backports I always try to find time to review them.
15:44:37 never used pike
15:44:50 +1, backports should be implicit RP+1 IMO
15:45:11 agree
15:45:31 ok, let's keep pike alive for now
15:45:40 #topic Virtual PTG
15:46:17 we should schedule this soon
15:46:43 shall we go for 2x 4 hour slots?
15:46:56 too short? too long?
15:47:53 It depends how many topics we have and how many people will attend
15:48:18 #link https://etherpad.openstack.org/p/kolla-train-ptg
15:48:38 Last time it was like... 4 people on webex the last day, and people coming in and going out ;)
15:48:48 5 attendees, 13 topics
15:49:25 only 1 topic not proposed by me... does anyone else want to talk about anything?
15:49:58 So +1 for 2x 4 hrs, we can extend later if we feel it's required
15:50:01 we could go for 1x 4 hour slot with a reduced topic list and schedule another later
15:51:11 Yeah, maybe not two days in a row, might be easier for people to join :)
15:51:46 ok, let's schedule 2 sessions, we can cancel the second if we don't need it
15:52:01 maybe something in the middle, 1x 5 or 6 hours
15:52:06 #action mgoddard to send a doodle poll to ML about PTG
15:52:21 then a midcycle meeting
15:52:31 egonzalez: my concern with a long session is with timezones
15:52:42 that's true, good point
15:52:56 I understand you have to take time off though
15:53:18 I'll include hours in the poll and see how much overlap we get
15:53:54 #topic Libvirt TLS https://review.openstack.org/650448 (klindgren)
15:54:09 klindgren was supposed to be joining but hasn't
15:54:29 they wanted to add TLS encryption to libvirt
15:54:46 the difficult part (as always) is around certificates
15:54:56 and they wanted some guidance on direction
15:55:30 in their use case, they have some external tool that generates a server & client cert for each host and puts it on the box.
15:56:02 when that isn't possible, kolla ansible needs to put certs in place on each host
15:56:13 unlike the API cert, these are typically different for each host
15:56:25 options are:
15:57:14 1. copy cert files from localhost e.g. /etc/kolla/config/nova/libvirt-tls//.cert
15:57:55 2. store files in YAML config, e.g. nova_libvirt_client_tls_cert.
15:58:05 hm. klindgren uses cacert.
15:58:38 hrw: yeah, that would typically be necessary for an internal CA?
15:58:55 no idea
15:59:20 I was happy when let's encrypt started, because then I could just run a script and ignore all cert issues
15:59:39 hrw: I think that only works if your host is publicly accessible
15:59:44 yep
16:00:12 so I think klindgren is looking for feedback on which approach to go for
16:01:32 I was wary of providing too strong an opinion without anyone else agreeing
16:01:32 I probably prefer 1., since it avoids putting cert files in YAML
16:01:32 anyway, time's up
16:01:32 thanks for coming everyone
16:01:33 #endmeeting
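[Editor's note] Option 1 from the libvirt TLS discussion (copying per-host cert files from the deploy host) could be sketched as an Ansible task along the following lines. This is a minimal illustration only, not the agreed design: the per-host source layout under `/etc/kolla/config/nova/libvirt-tls/`, the file names, and the destination under `node_config_directory` are all assumptions for the sake of the example.

```yaml
# Illustrative sketch only. Assumes per-host certs are staged on the deploy
# host under /etc/kolla/config/nova/libvirt-tls/<hostname>/ - this layout is
# a hypothetical, not the layout proposed in the review.
- name: Copy libvirt TLS certificates for this host
  copy:
    src: "/etc/kolla/config/nova/libvirt-tls/{{ inventory_hostname }}/{{ item }}"
    dest: "{{ node_config_directory }}/nova-libvirt/{{ item }}"
    mode: "0600"
  become: true
  with_items:
    - cacert.pem
    - servercert.pem
    - serverkey.pem
    - clientcert.pem
    - clientkey.pem
```

Option 2 would instead carry the certificate contents in per-host YAML variables (e.g. `nova_libvirt_client_tls_cert`) and template them onto each host, which avoids a staged directory tree but puts key material into config files, which is the drawback mgoddard notes at the end of the meeting.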