15:00:10 <gmann> #startmeeting tc
15:00:10 <opendevmeet> Meeting started Thu Jul 15 15:00:10 2021 UTC and is due to finish in 60 minutes. The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:10 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:10 <opendevmeet> The meeting name has been set to 'tc'
15:00:24 <gmann> #topic Roll call
15:00:24 <gmann> o/
15:00:33 <ricolin> o/
15:00:49 <mnaser> hola
15:00:51 <dansmith> o/
15:01:17 <gmann> we have 3 members absent today.
15:01:18 <gmann> yoctozepto on PTO
15:01:24 <gmann> spotz on PTO
15:01:30 <gmann> jungleboyj on PTO
15:02:21 <gmann> let's start
15:02:26 <gmann> #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions
15:02:35 <gmann> ^^ today's agenda
15:02:50 <gmann> #topic Follow up on past action items
15:02:59 <gmann> two AIs from last meeting
15:03:00 <gmann> clarkb to convey the ELK service shutdown deadline on ML
15:03:15 <gmann> clarkb sent it to the ML #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023578.html
15:03:28 <gmann> gmann to send ML to fix warnings and oslo side changes to convert them to errors
15:03:31 <gmann> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023646.html
15:04:24 <gmann> gibi also mentioned the SQLAlchemy warnings, which need the keystone fix to merge to get oslo.db 10.0.0 in g-r
15:04:29 <diablo_rojo_phone> O/
15:04:41 <gmann> #link https://review.opendev.org/c/openstack/keystone/+/799672
15:04:53 <gmann> seems like there are fewer active members in keystone.
15:05:03 <gmann> knikolla: ^^ if you see this msg
15:05:53 <gmann> stephen already pinged the keystone team on the keystone channel, so let's see if we can merge it soon
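(Context for the warning discussion above, not part of the meeting log: the work gmann refers to is about promoting deprecation warnings to errors so they fail jobs instead of scrolling by unnoticed. A minimal Python sketch of that general technique, not the actual oslo.db or keystone patch, could look like this:)

    import warnings

    # Promote DeprecationWarning to a hard error so deprecated usage (for
    # example the SQLAlchemy warnings mentioned above) fails the run instead
    # of being lost in the logs. The filter scope here is illustrative.
    warnings.filterwarnings("error", category=DeprecationWarning)

    try:
        warnings.warn("this call is deprecated", DeprecationWarning)
    except DeprecationWarning as exc:
        print(f"now raised as an error: {exc}")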
15:06:02 <gmann> #topic Gate health check (dansmith/yoctozepto)
15:06:17 <gmann> dansmith: any update you would like to share?
15:06:17 <dansmith> gate has seemed fairly good to me lately, hard to complain much
15:07:15 <ricolin> also check-arm64 was blocked last week, but back to normal now
15:07:20 <gmann> one issue I am aware of, and it is fixed now: tempest-full-py3 was broken on ussuri due to python3 being disabled via the base job
15:07:29 <dansmith> oh,
15:07:29 <gmann> +1
15:07:30 <tosky> (tempest-slow-py3)
15:07:39 <gmann> yeah tempest-slow-py3
15:07:46 <dansmith> not really gate, but is the depends-on still broken?
15:08:05 <gmann> dansmith: that is fixed now
15:08:10 <fungi> i don't believe so, i saw the message about it mention an immediate revert
15:08:11 <dansmith> okay cool
15:08:15 <gmann> worked for the tempest-slow-py3 fix testing
15:08:41 <clarkb> yes, as soon as we identified the issue we pushed and landed a revert of the change that broke depends-on. Then restarted as soon as that had applied to the servers
15:08:44 <fungi> don't believe it to still be broken, i meant
15:08:56 <gmann> clarkb: +1
15:09:02 <dansmith> okay I thought it was broken for a while
15:09:27 <clarkb> dansmith: from Sunday evening to about Tuesday noonish relative to our timezone
15:09:37 <dansmith> ah okay
15:10:06 <dansmith> anyway, nothing else gate-ish from me
15:10:17 <gmann> ok, let's move next then
15:10:30 <gmann> #topic Migration from 'Freenode' to 'OFTC' (gmann)
15:10:59 <gmann> while doing this for the deprecation repos, I found a few repos not deprecated or retired properly. Also some setup on the project-config side needed updates
15:11:06 <gmann> project-config side things are merged
15:11:50 <gmann> for retired repos, I am leaving the OFTC ref update because 1. there are many repos 2. we need to add setup in project-config to get it updated to the github repo
15:12:06 <gmann> if anyone has time I will not object.
15:12:49 <gmann> #topic PTG Planning
15:13:05 <gmann> Doodle poll for slot selection
15:13:25 <gmann> please vote with your availability/preference
15:14:07 <diablo_rojo_phone> Will do today.
15:14:13 <gmann> thanks
15:14:22 <gmann> ricolin: jungleboyj you too
15:14:36 <gmann> we need to book the slot by 21st July
15:15:08 <ricolin> I thought I already voted, but will check again
15:15:20 <gmann> also I sent a doodle poll for the TC+PTL interaction session, which is 2 hrs either on Monday or Tuesday
15:15:22 <gmann> #link https://doodle.com/poll/ua72h8aip4srsy8s
15:15:33 <gmann> ricolin: i think you voted on the TC+PTL sessions, not on the TC PTG
15:15:36 <gmann> please check
15:15:46 <ricolin> you're right
15:15:51 <ricolin> will vote right now
15:15:59 <dansmith> too many doodles
15:16:01 <gmann> ricolin: thanks
15:16:44 <gmann> for TC sessions, I am thinking of booking slots for two days, 4 hrs each day?
15:16:58 <gmann> that should be enough? what do you all say?
15:17:24 <dansmith> I'll have a hard time making all of that, as usual, but sure
15:17:32 <ricolin> done
15:18:21 <ricolin> gmann, I think that's good enough
15:18:30 <gmann> k
15:18:33 <gmann> and this is the etherpad to collect topics #link https://etherpad.opendev.org/p/tc-yoga-ptg
15:18:56 <gmann> please start adding the topics you would like to discuss
15:19:28 <gmann> anything else on the PTG?
15:19:31 <diablo_rojo_phone> I assume we also want to coordinate with the k8s folks for some time?
15:20:15 <ricolin> diablo_rojo_phone, +1
15:20:56 <gmann> sure, last time the k8s folks did not join, but we are always fine if they would like to. We can have a 1 hr slot for that if it is ok for them
15:21:04 <diablo_rojo_phone> Something to keep in mind.
15:21:34 <diablo_rojo_phone> Yeah I think with more heads up, and if we dictate a time to them and put an ical on their ml, we should get more engagement.
15:22:12 <gmann> sure, I did that on the ML last time also. I can do it this time too.
15:22:27 <ricolin> IMO we like to include that, maybe we need more than 8 hours (4 a day)
15:22:45 <gmann> ricolin: the time slot is not an issue I think.
15:22:50 <ricolin> s/we like/if we like/
15:22:55 <ricolin> gmann, Okay
15:22:59 <gmann> but yes we can extend if needed
15:24:07 <gmann> added this in the etherpad
15:24:09 <gmann> anything else?
15:24:33 <gmann> #topic ELK services plan and help status
15:24:36 <gmann> Help status
15:24:59 <gmann> I think there is no help yet. clarkb, fungi: anything you have heard from anyone?
15:25:09 <clarkb> I have not
15:25:21 <gmann> k
15:25:22 <gmann> Reducing the size of the existing system
15:25:29 <gmann> clarkb: ^^ go ahead
15:25:31 <clarkb> Since increasing the log workers to 60% of our total, it is keeping up much better than before
15:26:24 <clarkb> On the Elasticsearch cluster side of things we are using ~2.4TB of disk right now. We have 6TB total but only 5TB is usable. The reason for this is we have 6 nodes with 1TB each and we are resilient to a single node failure, which means we need to fit within the disk available on 5 instances
15:26:55 <clarkb> Given that the current disk usage is 2.4TB or so we can probably reduce the cluster size to 5 nodes. Then we would have 5TB total and 4TB usable.
15:27:15 <clarkb> If we reduce to 4 nodes then we get 3TB usable and I think that is too close for comfort
15:27:58 <clarkb> One thing to keep in mind is that growing the system again, if we shrink and it falls over, is likely to be difficult. For this reason I think we can take our time. Keep monitoring usage patterns a bit before we commit to anything
15:28:23 <clarkb> But based on the numbers available today I would say we should shrink the log workers to 60% of their total size now and reduce the elasticsearch cluster size by one instance
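(An illustrative aside on the sizing math above, not from the meeting itself: with 1TB per Elasticsearch node and resilience to a single node failure, usable capacity is whatever fits on n-1 nodes. A small Python sketch using the numbers clarkb quotes:)

    # Capacity arithmetic from the discussion above: 6 x 1TB nodes, data must
    # still fit if one node fails, current usage ~2.4TB. The numbers are the
    # ones quoted in the meeting; the helper itself is only an illustration.
    PER_NODE_TB = 1.0
    CURRENT_USAGE_TB = 2.4

    def usable_tb(nodes: int) -> float:
        # With single-node-failure resilience, data must fit on nodes - 1.
        return (nodes - 1) * PER_NODE_TB

    for nodes in (6, 5, 4):
        headroom = usable_tb(nodes) - CURRENT_USAGE_TB
        print(f"{nodes} nodes: {usable_tb(nodes):.0f}TB usable, "
              f"{headroom:.1f}TB headroom")
    # -> 5TB, 4TB and 3TB usable; 4 nodes is the "too close for comfort" case.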
15:28:38 <gmann> is 2.4 TB the usual usage, like during the peak time of a release etc, or just the current one?
15:29:12 <clarkb> gmann: just during the current usage. It's hard to look at numbers historically like that because cacti doesn't give us great resolution. But maybe fungi or corvus have tricks to find that data more accurately
15:29:31 <gmann> 'shrink the log workers to 60% of their total size' - shrink or increase?
15:30:00 <gmann> initially you mentioned increasing
15:30:10 <clarkb> gmann: shrink. We have 20 instances now. I have disabled the processes on 8 of them and we seem to be keeping up. That means we can shrink to 60% I think
15:30:29 <clarkb> gmann: last week I had it set to 50% and we were not keeping up, so I increased that to 60%, but that is still a shrink compared to 100%
15:30:39 <gmann> ohk got it
15:31:00 <gmann> i though 60% more from what we had :)
15:31:15 <gmann> *thought
15:31:31 <gmann> I think this is a reasonable proposal.
15:31:41 <clarkb> Anyway that is what the current data says we can do. Let's watch it a bit more and see if more data changes anything
15:31:54 <clarkb> But if this stays consistent we can probably go ahead and make those changes more permanent
15:32:45 <gmann> clarkb: is it fine to monitor until the Xena release?
15:32:56 <gmann> or do you think we should decide earlier than that?
15:33:36 <clarkb> it's probably ok to monitor until then, particularly during feature freeze as that is when demand tends to be highest
15:33:45 <gmann> yeah
15:35:02 <clarkb> that was all I had. We can watch it and, if those numbers hold up, make the changes after the xena release (or maybe after feature freeze)
15:35:20 <gmann> +1 sounds perfect
15:35:56 <gmann> clarkb: anything else you would like to keep discussing on this in the TC meeting, or is it fine to remove it from the agenda for now and re-add it around the Xena release?
15:36:25 <clarkb> Should be fine to remove for now
15:36:31 <gmann> ok
15:36:48 <gmann> thanks a lot clarkb for reporting on the data and helping on this.
15:37:14 <gmann> #topic Open Reviews
15:37:17 <gmann> #link https://review.opendev.org/q/projects:openstack/governance+is:open
15:37:34 <gmann> many open reviews, let's check them quickly and vote accordingly
15:38:11 <ricolin> will do
15:38:15 <gmann> tc-members please vote on the Yoga testing runtime #link https://review.opendev.org/c/openstack/governance/+/799927
15:38:39 <gmann> which is the same as what we had in Xena
15:38:58 <gmann> centos-stream9 can be added later once that is released
15:39:21 <clarkb> gmann: are you planning to support both 8 and 9?
15:39:37 <clarkb> my selfish preference is that you pick only one (as it allows us to delete images more quickly)
15:39:38 <gmann> clarkb: no, just one, meaning updating 8->9
15:39:41 <clarkb> got it
15:40:56 <gmann> need one more vote on this project-update #link https://review.opendev.org/c/openstack/governance/+/799826
15:42:12 <gmann> others either have the required votes or are waiting for the depends-on/zuul fix.
15:42:14 <ricolin> voted
15:42:17 <gmann> thanks
15:42:31 <gmann> ricolin: this quick one for governance-sig https://review.opendev.org/c/openstack/governance-sigs/+/800135
15:42:47 <gmann> anything else we need to discuss for today's meeting?
15:42:54 <ricolin> done
15:43:00 <gmann> thanks
15:43:03 <ricolin> yes
15:43:33 <ricolin> one thing, sorry for the delay but I sent the ML to collect pain points: http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023659.html
15:44:08 <gmann> +1
15:44:09 <ricolin> for the 'eliminate pain points' idea
15:44:28 <ricolin> Let's see if we can get valid pain point feedback from teams
15:45:07 <gmann> thanks ricolin for doing that.
15:45:17 <ricolin> NP, will keep tracking
15:45:25 <gmann> sure
15:45:52 <gmann> anything else?
15:46:20 <gmann> thanks all for joining, let's close the meeting
15:46:23 <gmann> #endmeeting