16:00:26 <cgoncalves> #startmeeting Octavia
16:00:27 <openstack> Meeting started Wed Oct  9 16:00:26 2019 UTC and is due to finish in 60 minutes.  The chair is cgoncalves. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:31 <openstack> The meeting name has been set to 'octavia'
16:00:37 <rm_work> o/
16:00:46 <johnsom> o/
16:00:51 <gthiemonge> Hi!
16:01:00 <cgoncalves> hello!
16:01:09 <haleyb> hi
16:01:33 <cgoncalves> I'm leading the meeting today, sorry about that :) our PTL is currently on the road
16:01:43 <cgoncalves> #topic Announcements
16:02:29 <rm_work> Well, I opted to wait to drive after the meeting but still only on cellphone ;)
16:02:50 <cgoncalves> a security vulnerability was found last week. I want to acknowledge f0o and rm_work for their work on discovering and fixing it promptly!
16:02:52 <cgoncalves> [OSSA-2019-005] Octavia Amphora-Agent not requiring Client-Certificate (CVE-2019-17134)
16:03:25 <johnsom> +1 great work
16:03:30 <rm_work> We'll skip acknowledging me for also causing it ;)
16:03:43 <cgoncalves> the team released point releases for all maintained stable branches (down to queens). we also backported the fix to pike and ocata (extended maintenance, no point releases anymore)
16:04:16 <cgoncalves> we released Train RC2 to include this fix, too
16:04:40 <cgoncalves> #link  http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010031.html
16:04:55 <cgoncalves> Train release due next week
16:05:01 <cgoncalves> #link  https://releases.openstack.org/train/schedule.html
16:05:43 <cgoncalves> please let the team know if there's any urgent bug fix patch that should be part of Train GA so that we can release Train RC3 first
16:05:54 <cgoncalves> otherwise, Train GA will be Train RC2
16:06:16 <johnsom> Deadline is basically today BTW.
16:06:45 <cgoncalves> right
16:06:58 <rm_work> It's too late to add that DIB bump as a bugfix? :)
16:07:24 <johnsom> We could backport that and get it in an RC3.
16:07:40 <cgoncalves> we might also want to make another octavia-dashboard RC to include translations, right johnsom?
16:07:48 <johnsom> It sets a higher minimum for DIB to get a fix for pypi mirrors.
16:07:57 <cgoncalves> +1
16:08:03 <rm_work> Ah just a higher MINIMUM?
16:08:27 <johnsom> Our gates of course pull the newest version, so not impacted. It's really just for packagers.
16:08:30 <cgoncalves> hmm. not sure of the backport post-GA. it might not fly well with the release team
16:08:39 <rm_work> It's just a minimum
16:08:49 <johnsom> The dashboard RC is already proposed to pick up the latest translations.
16:09:29 <cgoncalves> I know, yet it's a requirement change. we were called out recently on rocky and stein point releases because of that, as we assert the stable-policy tag
16:09:34 <johnsom> Right, it just "declares" that we want a minimum of DIB 2.24.0
16:10:09 <johnsom> Right, if we don't do this, and you set a pypi mirror, you might break
16:10:43 <johnsom> I would lean towards doing the backport and just getting it in. What are others thoughts?
16:11:16 <cgoncalves> could we still try to have it in RC3?
16:11:43 <johnsom> That is the question on the table. It already merged on master
16:12:02 <johnsom> If we backport it, approve it, get it merged, we can add it to RC3
16:12:22 <cgoncalves> right. have you proposed the backport to stable/train?
16:12:37 <johnsom> No, it just merged and I wanted to bring it up here
16:13:02 <johnsom> Because if it doesn't go in RC3, we don't want to backport it.
16:13:08 <rm_work> I'm ok to do it
16:13:10 <cgoncalves> ok. +1 to try to have it in RC3
16:13:14 <johnsom> That would be a stable policy issue
16:13:32 <openstackgerrit> Michael Johnson proposed openstack/octavia stable/train: Bump diskimage-builder minimum to 2.24.0  https://review.opendev.org/687610
16:13:44 <johnsom> Ok, there is the proposal for train.
16:13:51 <cgoncalves> +2
16:14:17 <cgoncalves> thank you. let's propose Train RC3 as soon as it merges
16:14:21 <johnsom> rm_work if you +A, I will do the RC3 once that lands
16:15:48 <cgoncalves> anything else on this topic of Train release?
16:16:21 <cgoncalves> three items to share on the upcoming Shanghai Summit and PTG:
16:16:36 <cgoncalves> 1. PTG schedule is available
16:16:44 <cgoncalves> #link https://www.openstack.org/ptg
16:16:53 <cgoncalves> 2. "Meet the project leaders" opportunities in Shanghai
16:17:01 <cgoncalves> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009925.html
16:17:34 <rm_work> Yep will do
16:17:53 <cgoncalves> the Foundation is inviting PTLs, core reviewers and other types of community leaders to meet individual contributors
16:17:54 <rm_work> I added myself to the Etherpad list for that
16:18:05 <rm_work> And I just got my passport back with my Visa
16:18:22 <cgoncalves> cool!
16:18:50 <rm_work> So assuming nothing dumb happens re: US+China in the next month, I'll be attending both of those sessions
16:19:07 <rm_work> You coming too?
16:19:16 <cgoncalves> so, folks, don't hesitate to come and meet!
16:19:49 <cgoncalves> I still need to organize my calendar for that week
16:20:07 <cgoncalves> I will give it high priority, sure
16:20:20 <cgoncalves> 3. Forum Schedule
16:20:22 <johnsom> I will not be attending in person, but if there are sessions at the PTG you would like me to join I will via video or IRC.
16:20:36 <cgoncalves> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009939.html
16:21:27 <cgoncalves> definitely. having you and others connected remotely can be arranged
16:22:26 <cgoncalves> so, on Forum schedule. the schedule is published. please have a look. there are some interesting sessions
16:23:14 <cgoncalves> this is all we have in today's agenda for announcements. anything else?
16:23:52 <cgoncalves> #topic Brief progress reports / bugs needing review
16:24:57 <johnsom> I am back from vacation. Mostly caught up on e-mail. I have some rebasing to do, then I plan to look at a bug about secrets being deleted from barbican. After that, I hope to get back to working on the failover flow(s).
16:25:24 <cgoncalves> nothing much worth sharing from my side. with my stable liaison hat on, we released point releases ensuring the CVE fix was also included
16:26:11 <cgoncalves> colin-, ^ your contact point for the secrets delete issue ;)
16:28:39 <cgoncalves> folks are playing shy today :)
16:28:49 <cgoncalves> #topic Open Discussion
16:29:13 <sapd1_> hello cgoncalves
16:29:20 <sapd1_> Did you get my question?
16:29:58 <cgoncalves> sapd1_, I did. sorry I have not replied yet :/
16:30:24 <cgoncalves> someone added two items to the agenda for this topic
16:30:34 <johnsom> sapd1_ Is the spec for that posted? Last I remember we wanted to have some discussion on that feature
16:30:38 <cgoncalves> sapd1_, if you want we can discuss after those
16:31:15 <gthiemonge> cgoncalves: yep, I've found an issue with octavia-worker graceful termination, it was not properly handled
16:31:43 <sapd1_> cgoncalves, ok. We will discuss later.
16:31:49 <gthiemonge> I fixed it, and now we have a 60-second timeout when the service shuts down
16:32:01 <cgoncalves> #link https://review.opendev.org/#/c/684201/
16:32:31 <johnsom> I think we need a longer timeout due to the time it can take nova to boot the VM.
16:32:31 <gthiemonge> it means that if an existing flow lasts more than 60 seconds, it may be interrupted (and then we could see some resources in PENDING_* statuses)
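[editor's note: if the fix plugs into oslo.service's standard shutdown handling, the timeout discussed here maps to the `graceful_shutdown_timeout` option (default 60 seconds). That wiring is an assumption; verify against the merged patch at https://review.opendev.org/#/c/684201/ before relying on it.]

```ini
# /etc/octavia/octavia.conf -- sketch, assuming the graceful-termination fix
# uses oslo.service's standard [DEFAULT] graceful_shutdown_timeout option.
[DEFAULT]
# Give in-flight flows up to 5 minutes to finish before the worker exits
# (the 60-second default can cut off slow nova boots mid-flow).
graceful_shutdown_timeout = 300
```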
16:33:32 <gthiemonge> johnsom: you're right, so I'd like to have some feedback... what could be a good timeout value?
16:34:35 <cgoncalves> it would be super interesting to hear from octavia operators
16:35:07 <johnsom> I know that if nested virtualization is used, RAX hosts can take up to 18 minutes. Though I don't think we should target for nested virt.
16:35:25 <cgoncalves> colin-, eandersson, lxkong : ping :)
16:35:59 <johnsom> I would throw out a proposal of 5 minutes. However, it would be awesome if folks could look in their worker logs and give us an idea of how long an LB create is taking.
16:36:18 <cgoncalves> agreed. it would be ideal to agree on a good default for prod envs and then override it for CI
16:36:28 <johnsom> I wonder if we can create an awk/grep/something that could pull those numbers out of the log?
16:37:00 <gthiemonge> johnsom: I have a script, but it needs some cleanup, then I can share it
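[editor's note: gthiemonge's actual script was not shared in this log. A minimal sketch of the idea — pairing start/finish lines in an octavia-worker log and printing the elapsed time — might look like the following. The log line markers and format below are hypothetical; adjust them to match real worker output.]

```python
# Sketch: estimate load balancer create durations from octavia-worker logs.
# The line format and the "started"/"finished" markers are HYPOTHETICAL --
# adapt them to the actual log lines your worker emits.
import re
from datetime import datetime

# Hypothetical sample log excerpt; real octavia-worker lines will differ.
LOG = """\
2019-10-09 16:01:00.123 12345 INFO octavia.controller.worker [-] create_load_balancer started for lb-1
2019-10-09 16:03:05.456 12345 INFO octavia.controller.worker [-] create_load_balancer finished for lb-1
"""

# Leading timestamp, e.g. "2019-10-09 16:01:00.123"
TS_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)")


def create_durations(log_text,
                     start_mark="create_load_balancer started",
                     end_mark="create_load_balancer finished"):
    """Pair start/finish lines in order and return durations in seconds."""
    starts, durations = [], []
    for line in log_text.splitlines():
        m = TS_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        if start_mark in line:
            starts.append(ts)
        elif end_mark in line and starts:
            durations.append((ts - starts.pop(0)).total_seconds())
    return durations


if __name__ == "__main__":
    for d in create_durations(LOG):
        print(f"LB create took {d:.1f}s")
```

Feeding a real worker log through this (with corrected markers) would give the per-create numbers johnsom asks for below, to inform the timeout default.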
16:37:16 <johnsom> Ok, cool!
16:37:29 <cgoncalves> nice!
16:38:08 <johnsom> I know my workstation is about 30 seconds, but that seems to be faster than others are seeing. It would be super handy for us to have some real numbers to refer to
16:38:22 <cgoncalves> how should we move forward? should we set something like 5 minutes as suggested for now and later tune it to something different if needed?
16:38:42 <cgoncalves> or should we wait for feedback before merging gthiemonge's patch?
16:39:21 <johnsom> Yeah, I would propose something based on the feedback we get. We know that later, with the jobboard work this will be less of an issue so we could drop the timeout value.
16:40:15 <cgoncalves> right. the reason I asked is that this is a bug right now, and it should be fixed and backported to stable branches
16:41:19 <cgoncalves> any timeout (configurable) would be better than what we have today
16:43:13 <gthiemonge> 1min works great for me in devstack ;-)
16:43:37 <johnsom> lol, would work for my workstation too.
16:44:29 <cgoncalves> my preference would go to a higher timeout for now. 5 minutes as it was suggested
16:45:05 <gthiemonge> I think 5min is the default timeout for devstack's systemd services
16:45:09 <johnsom> Let's do this. We have a patch posted. It needs review. If we can post a script for folks, let's give it until next meeting to commit to a value.
16:45:18 <cgoncalves> but, again, anything is better than what we have today so... :)
16:45:19 <gthiemonge> ack
16:45:20 <johnsom> We can nag people to run the script throughout the week
16:45:40 <cgoncalves> sounds good
16:47:54 <cgoncalves> are we good here? shall we discuss the other item?
16:48:11 <johnsom> I think so...???
16:49:38 <cgoncalves> ok, we can talk about it next week
16:50:01 <cgoncalves> anything else? we have 10 minutes left
16:51:16 <cgoncalves> all right. thanks everyone for joining!
16:51:24 <gthiemonge> cgoncalves: thank you!
16:51:30 <cgoncalves> #endmeeting