13:59:51 <mlavalle> #startmeeting networking
13:59:52 <openstack> Meeting started Tue Jul  3 13:59:51 2018 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59:53 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:59:55 <openstack> The meeting name has been set to 'networking'
13:59:59 <annp_> hi
13:59:59 <manjeets_> hi
14:00:09 <njohnston> o/
14:00:14 <hongbin> o/
14:01:08 <mlavalle> Hi there!
14:01:15 <rubasov> o/
14:01:28 <cliffparsons> hi there
14:01:35 <haleyb> hi
14:01:36 <mlavalle> manjeets_, njohnston: more people in the US than I expected ;-)
14:01:52 <mlavalle> haleyb: also
14:02:09 <mlavalle> #topic Announcements
14:03:23 <mlavalle> The Rocky-3 milestone is fast approaching. It is less than 3 weeks away: July 23 - 27
14:04:08 <mlavalle> The call for presentations for the Berlin Summit will be open until July 17th
14:04:23 <mlavalle> I look forward to many nice submissions from the Neutron team
14:04:52 <mlavalle> Any other announcements?
14:04:58 <boden> mlavalle I think the release candidate for client libs (including neutron-lib) is July 19th, no?
14:05:06 <boden> finalized by the 26th
14:05:12 <mlavalle> Yes
14:05:45 <boden> ok so that date is rapidly approaching, if there are any things we want in neutron-lib for Rocky, it seems now is the time :)
14:06:05 <mlavalle> Thanks for the reminder boden
14:06:26 <mlavalle> I'll add it to the agenda in wiki
14:06:54 <mlavalle> ok, moving on
14:07:01 <mlavalle> #topic Blueprints
14:08:10 <mlavalle> These are the blueprints we are targeting for Rocky 3:
14:08:16 <mlavalle> #link https://launchpad.net/neutron/+milestone/rocky-3
14:08:43 <mlavalle> any updates on blueprints?
14:09:16 <boden> for https://blueprints.launchpad.net/neutron/+spec/neutron-lib-decouple-db I’ll reiterate that it isn’t realistic for Rocky anymore
14:09:38 <boden> but if we can try to land https://review.openstack.org/#/c/565593/  it will allow some more consumption patches to start flowing
14:10:11 <mlavalle> boden: I will review it today
14:10:21 <boden> thanks mlavalle
14:10:38 <mlavalle> rubasov: any updates on bandwidth based scheduling?
14:10:40 <rubasov> for https://blueprints.launchpad.net/neutron/+spec/strict-minimum-bandwidth-support I'd like to call the attention/judgement of reviewers familiar with hierarchical port binding use cases to this change: https://review.openstack.org/574783
14:11:06 <mlavalle> you read my mind ;-)
14:11:15 <rubasov> mlavalle: :-)
14:11:53 <rubasov> also I hope to upload more non-wip patches this week
14:12:02 <mlavalle> I assume that tempest-full-py3 is not related to the change, right?
14:12:22 <rubasov> mlavalle: yep, that's unrelated
14:12:47 <mlavalle> ok, cool. I will take a look between today and tomorrow
14:12:54 <rubasov> mlavalle: thank you
14:13:02 <mlavalle> and encourage others to do the same
14:13:10 <lajoskatona_> mlavalle/rubasov: I am back now, so after my jet lag I'll start working again ;)
14:13:24 <rubasov> lajoskatona_: welcome back
14:13:38 <mlavalle> nice to see you back, lajoskatona_
14:13:41 <amotoki> hi, sorry for being late...
14:14:33 <mlavalle> In regards to multiple port bindings, the entire series is here: https://review.openstack.org/#/q/topic:bp/live-migration-portbinding+(status:open+OR+status:merged)
14:15:19 <mlavalle> I addressed some comments from haleyb and mriedem in the patch that originates the series: https://review.openstack.org/#/c/414251/
14:15:31 <mlavalle> and rebased the entire series yesterday
14:16:22 <mlavalle> I have some re-checking to do, but as soon as zuul comes back green on all of them, I think the series is good to go
14:16:37 <mlavalle> please take a look when you have a chance
14:18:07 <mlavalle> Thanks to haleyb and manjeets for reviewing the port forwarding series: https://review.openstack.org/#/q/topic:bp/port_forwarding+(status:open+OR+status:merged)
14:18:46 <mlavalle> njohnston also reviewed the neutron-lib patch. thanks
14:18:55 <mlavalle> please keep it up
14:19:07 <mlavalle> I will go over the series today and tomorrow
14:20:01 <mlavalle> annp_: do you want to talk about wsgi server?
14:20:14 <annp_> mlavalle, thanks.
14:20:39 <annp_> Thanks slawek, hongbin and Manjee for your great reviews. I’ve just uploaded a new patch set to address your comments. Could you please take a look at https://review.openstack.org/#/c/555608/
14:21:01 <manjeets__> ack
14:21:03 <annp_> Regarding the fwaas rpc timeout: the firewall plugin tried to initialize its rpc listener itself. I think the fwaas rpc listener should be served by the neutron-rpc-server worker. I’ve just pushed a patch to fix that: https://review.openstack.org/#/c/579433/. Now I’m waiting for feedback from zigo on the test results.
14:21:17 <annp_> :)
14:21:27 <njohnston> nice +1
14:21:29 <zigo> o/
14:21:39 <zigo> I'm sorry but I couldn't work on it today.
14:21:44 <zigo> I'll check on it tomorrow.
14:21:56 <zigo> Thanks a lot annp !
14:22:06 <annp_> zigo, no problem! :) I'll be waiting for you. :)
14:22:19 <annp_> Regarding Neutron WSGI integration in devstack, https://review.openstack.org/#/c/473718/: all tempest tests passed; however, the neutron-grenade gate still fails on cinder volume removal.
14:22:23 <amotoki> annp_: do we have similar pattern in other service plugins other than fwaas?
14:23:18 <annp_> amotoki, I've checked the metering plugin.
14:24:02 <annp_> amotoki, the metering plugin and the l3_router_plugin initialize their rpc listeners in the rpc worker.
14:24:21 <amotoki> annp_: thanks. it sounds nice.
14:25:23 <annp_> amotoki, yeah! :)
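A rough sketch (not the actual fwaas patch) of the pattern annp_ and amotoki describe above: the service plugin starts no RPC server in __init__ and instead returns its listener from start_rpc_listeners(), the hook neutron's RPC workers invoke, so the listener ends up in neutron-rpc-server rather than in the WSGI/API process. The class names, topic and callbacks below are illustrative assumptions, and a working transport_url is expected in the oslo.config configuration.

```python
import socket

from oslo_config import cfg
import oslo_messaging


class ExampleRpcCallbacks(object):
    """Illustrative RPC endpoint exposed by the plugin (assumption)."""

    def get_example_info(self, context, **kwargs):
        return {}


class ExampleServicePlugin(object):

    def __init__(self):
        # No RPC server here: the API (WSGI) workers import the plugin too,
        # and they must not consume RPC messages.
        self.rpc_server = None

    def start_rpc_listeners(self):
        # Called only from the RPC workers (neutron-rpc-server), so the
        # listener lives where it belongs.
        transport = oslo_messaging.get_rpc_transport(cfg.CONF)
        target = oslo_messaging.Target(topic='example-plugin',
                                       server=socket.gethostname())
        self.rpc_server = oslo_messaging.get_rpc_server(
            transport, target, [ExampleRpcCallbacks()])
        self.rpc_server.start()
        return [self.rpc_server]
```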
14:25:53 <annp_> amotoki, do you have any idea related to neutron wsgi integration issue?
14:26:00 <mlavalle> ok, thanks annp for your hard work. and thanks to zigo for all his support
14:26:26 <annp_> mlavalle, thanks :)
14:26:42 <zigo> One more thing: if that works, would it be hard to do a backport of the patch to Queens?
14:27:58 <mlavalle> zigo: of https://review.openstack.org/#/c/579433/?
14:29:11 <amotoki> perhaps zigo is thinking of all the wsgi patches, each of which is small.
14:30:11 <mlavalle> ack, haleyb let's take a look at it after the meeting
14:30:15 <zigo> Well, both? :)
14:30:32 <zigo> WSGI support was supposed to be a Queens release goal ... :P
14:30:43 <mlavalle> yeap, I realize that
14:30:54 <mlavalle> I knew you would bring it up ;-)
14:31:03 <zigo> Sorry, I'm a pain ... :P
14:31:21 <mlavalle> zigo: you are not. you are very helpful. just kidding
14:31:34 <zigo> I got something else that I want to bring up, not sure if that's the time.
14:31:46 <mlavalle> go ahead
14:33:05 <mlavalle> mhhh, zigo, let's talk about it at the end of the meeting
14:33:12 <mlavalle> #topic Community Goals
14:33:13 <zigo> Ok.
14:33:33 <annp_> mlavalle, that's all for WSGI.
14:33:45 <mlavalle> amotoki do you want to talk about policy in code?
14:33:46 <annp_> please go ahead.
14:33:53 <mlavalle> annp_: Thanks!
14:34:10 <amotoki> sorry for the slow progress on policy-in-code. I've managed to block off all meetings this Wednesday and Friday. I guess it will be about two whole days of work.
14:34:33 <mlavalle> is there anything we can do to help?
14:35:07 <amotoki> patches will consist of neutron-lib policy migration and policy-in-code.
14:35:39 <amotoki> I will resolve the merge conflicts this week, and testing will be helpful
14:36:33 <mlavalle> thanks for the update amotoki. looking forward to those patches
14:36:39 <amotoki> (1) neutron-lib policy, (2) remove policy.py from neutron, (3) policy conversion in neutron, (4) policy conversion in subprojects (like fwaas, dynamic-routing)
14:37:00 <amotoki> (1)-(3) would be straightforward and I am refreshing my memory
14:37:16 <amotoki> (4) might need some help
14:37:43 <mlavalle> ok, let us know how we can help
14:37:51 <amotoki> I will send a mail to summarize patches once I push them
14:38:30 <amotoki> the lib freeze deadline is on my mind.
14:40:24 <mlavalle> I also want to remind the team that we have this etherpad: https://etherpad.openstack.org/p/py3-neutron-pike
14:40:34 <mlavalle> for py35 support tasks
14:40:50 <mlavalle> please pick some up to help us move toward this goal
14:41:35 <amotoki> is it covered by the CI meeting?
14:42:01 <njohnston> mlavalle: reading the py35 etherpad, it is not clear where the available items to work on are.  Is that the "Old Failure categories" section?
14:43:05 <amotoki> njohnston: perhaps we need to check again how they behave now.
14:43:20 <mlavalle> yeah
14:44:15 * zigo wrote everything in a buffer and is ready for end of meeting topics ...
14:44:28 <mlavalle> njohnston: we probably need to re-order that etherpad
14:44:39 <amotoki> "python3 first" will be one of the Stein goal, so we need to refresh the etherpad
14:44:58 <njohnston> +1
14:45:01 <mlavalle> I will take a look over the next few days
14:45:25 <mlavalle> njohnston: are you interested in helping with that?
14:45:58 <njohnston> I wanted to see if I could pitch in, yes
14:46:17 <mlavalle> ok, I'll take a look this week and ping you in channel
14:46:32 <mlavalle> #topic neutron-lib
14:46:44 <mlavalle> boden: anything on neutron-lib?
14:46:53 <boden> a few things, I will try to be brief
14:47:14 <boden> we released neutron-lib 1.17.0 last week, FYI. If you want to use it, bump your requirements and lower-constraints
14:47:47 <boden> a friendly reminder that many networking projects are not fully set up for zuul v3 and local testing: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131801.html
14:48:14 <boden> ^ causes friction when trying to land neutron-lib patches, so if you get a chance please have a look
14:49:19 <amotoki> what do you mean by "not fully set up for zuul v3"?
14:49:19 <boden> another reminder: if your networking project wants to stay “Current” and receive neutron-lib patches, we ask that you please help land them, as mentioned in http://lists.openstack.org/pipermail/openstack-dev/2018-June/131841.html
14:49:30 <boden> amotoki pls see email and related link
14:49:31 <manjeets_> boden for 1.17.0 https://review.openstack.org/#/c/577245/
14:49:34 <boden> it's covered in detail there
14:49:49 <boden> manjeets_ ack
14:49:57 <amotoki> boden: thanks. perhaps networking projects and horizon plugins have the same problem
14:50:12 <boden> amotoki yes this is about the networking projects in general
14:50:39 <njohnston> boden: I was wondering, was there ever a solution for projects that do "from neutron.tests import base"?  ISTR that was on the list of "not sure how to solve that" issues a while back.
14:50:47 <boden> amotoki I hope the etherpad linked in it is helpful, and I think it outlines the “situation”, but if not feel free to ping me
14:51:25 <amotoki> boden: yes, that is totally helpful. From a quick look, I am on the same page.
14:52:06 <mlavalle> ok, let's move on
14:52:10 <boden> njohnston we have a base test class in neutron-lib, though I don’t know that it’s fully compatible yet
14:52:20 <mlavalle> #topic CLI / SDK
14:52:28 <mlavalle> amotoki: anything on this topic?
14:52:58 <amotoki> I will look through the client stuff next week, after policy-in-code.
14:53:09 <mlavalle> ok, thanks
14:53:13 <amotoki> the client freeze will be after the lib freeze, so we have time.
14:53:31 <amotoki> on the SDK, we don't have much planned for Rocky
14:53:46 <amotoki> that's all from me
14:53:48 <mlavalle> #topic Open Agenda
14:53:58 <mlavalle> zigo: please go on
14:54:32 <zigo> In neutron/agent/l3/dvr_local_router.py, the function _set_subnet_arp_info() is called:
14:54:32 <zigo> 1/ Each time the l3 agent is restarted
14:54:32 <zigo> 2/ Each time an interface of a VM is added to a compute node, if no other VM of that tenant was already there
14:54:32 <zigo> Problem is, in our use case at Infomaniak, we have 2000 VMs, most of them in a single tenant. As a result, this operation takes about 15 minutes, with more than 4200 calls to "neutron-rootwrap ip neigh add": one call per IP & MAC association. It's just horrible for a customer to wait 15 minutes for their VM to be up, especially since the L2 peers (east-west traffic) are established before any north-bound connectivity
14:54:32 <zigo> work is done.
14:54:32 <zigo> Many times, the call is not even needed ("ip neigh show" shows the entry is already there most of the time).
14:54:34 <zigo> So my question is: what do you think would be the best strategy to optimize this, keeping in mind that more than half of this time is spent invoking neutron-rootwrap?
14:55:33 <zigo> I thought about using the batch mode for the ip command.
14:55:46 <zigo> Like, write a file, call once ...
14:56:35 <zigo> That, and switching to oslo.privsep.
14:56:49 <zigo> Good ideas or not?
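One way to picture the batch-mode idea: collect all the neighbour entries and apply them with a single "ip -batch" invocation, so one privileged process is spawned instead of one per entry. A minimal sketch under stated assumptions: plain sudo stands in for neutron's rootwrap/privsep, and the helper name, device and addresses are illustrative, not actual neutron code.

```python
import subprocess
import tempfile


def add_arp_entries_batch(device, entries):
    """Apply many (ip, mac) neighbour entries with one `ip -batch` call."""
    with tempfile.NamedTemporaryFile('w', suffix='.batch') as f:
        for ip_address, mac in entries:
            # One `neigh replace` line per entry; `replace` is idempotent,
            # so entries that already exist are simply refreshed.
            f.write('neigh replace %s lladdr %s dev %s nud permanent\n'
                    % (ip_address, mac, device))
        f.flush()
        # In neutron this would go through rootwrap or privsep; plain sudo
        # here only keeps the sketch self-contained.
        subprocess.check_call(['sudo', 'ip', '-batch', f.name])


# Example: thousands of entries, still a single privileged process spawn.
# add_arp_entries_batch('qr-1234abcd', [('10.0.0.5', 'fa:16:3e:aa:bb:cc')])
```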
14:57:15 <mlavalle> zigo: seems ok to me in principle.
14:57:33 <mlavalle> would you mind filing a bug so we can take it from there?
14:57:41 <zigo> Ok, I'll try to come up with a patch then.
14:57:49 <zigo> Will do.
14:58:05 <mlavalle> we can discuss it in the L3 meeting, which takes place on Thursday at 1500 UTC
14:58:13 <zigo> IMO, it's best if I try it, because the issue is kind of hard to reproduce, since it takes thousands of VMs.
14:58:23 <zigo> Oh ok, thanks.
14:58:25 <mlavalle> oh, I know
14:58:45 <mlavalle> the hard part is to reproduce the situation
14:59:18 <mlavalle> ok, thanks for attending
14:59:21 <amotoki> it would be super helpful if we could emulate such a situation
14:59:36 <mlavalle> talk to you next week
14:59:40 <manjeets_> see you guys !
14:59:42 <mlavalle> #endmeeting