16:11:04 <Sukhdev> #startmeeting networking_ml2
16:11:04 <openstack> Meeting started Wed Dec 13 16:11:04 2017 UTC and is due to finish in 60 minutes.  The chair is Sukhdev. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:11:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:11:07 <openstack> The meeting name has been set to 'networking_ml2'
16:11:12 <Sukhdev> rkukura : thanks
16:11:20 <rkukura> no problem
16:11:43 <Sukhdev> #agenda
16:11:50 <Sukhdev> #link: https://wiki.openstack.org/wiki/Meetings/ML2#Meeting_December_13.2C_2017
16:12:13 <Sukhdev> #topic: Announcements
16:12:56 <Sukhdev> This meeting has been changed from once a week to once a month on 2nd Wednesday of every month
16:13:22 <Sukhdev> We can hold this meeting earlier, if there is an issue to be discussed
16:13:44 <Sukhdev> In such a case, simply add the agenda to the ML2
16:14:09 <rkukura> If things get busy, we could even switch back to weekly meetings, or could hold extra meetings later in the month
16:14:38 <Sukhdev> rkukura : +1
16:14:58 <yamamoto> do you have irc-meetings change?
16:15:18 <Sukhdev> yamamoto : I intend to make that change
16:15:32 <Sukhdev> yamamoto : will push the patch
16:16:13 <Sukhdev> Another announcement - OpenStack PTG is happening in the last week of Feb 2018
16:16:31 <yamamoto> what do you mean by "add the agenda to the ML2"?  wiki?
16:17:37 <Sukhdev> yamamoto : add to the agenda here - https://wiki.openstack.org/wiki/Meetings/ML2
16:17:57 <Sukhdev> yamamoto : like I added for today's meeting
16:18:59 <Sukhdev> Any other announcement?
16:19:01 <yamamoto> so, we are supposed to watch the wiki?
16:19:40 <Sukhdev> yamamoto : Ideally, add to the wiki and shoot an email on the dev list
16:19:49 <yamamoto> ok
16:19:59 <Sukhdev> so that interested folks can join/participate
16:20:04 <rkukura> I’d suggest sending to openstack-dev, and the ML2 co-chairs
16:20:38 <Sukhdev> yamamoto : Or you can wait until the next official slot of the meeting
16:21:08 <rkukura> For the scheduled monthly meetings, anyone can add to the agenda, with no email necessary. If someone wants an extra meeting, then email is important.
16:22:03 <Sukhdev> #topic: Agenda
16:22:11 <rkukura> If we give up the slot except for once a month, then actually scheduling an IRC channel might be difficult
16:22:58 <Sukhdev> I had no specific agenda for today - the purpose was to make these announcements
16:23:18 <Sukhdev> yamamoto caboucha rkukura : do you have any agenda item to discuss today?
16:23:24 <rkukura> nothing from me
16:23:33 <caboucha> me neither
16:23:43 <yamamoto> i have
16:23:53 <Sukhdev> yamamoto : please do
16:24:20 <yamamoto> i guess this bug is interesting to people here  https://bugs.launchpad.net/neutron/+bug/1736650
16:24:21 <openstack> Launchpad bug 1736650 in neutron "linuxbrige manages non linuxbridge ports" [Undecided,Confirmed]
16:24:56 <Sukhdev> #link:  https://bugs.launchpad.net/neutron/+bug/1736650
16:26:44 <yamamoto> i'm wondering what's the best way to fix it
16:28:22 <yamamoto> the bug is about linuxbridge vs midonet but it seems the problem is not specific to those MDs.
16:28:22 <Sukhdev> yamamoto : hmmm... interesting issue indeed
16:29:14 <Sukhdev> yamamoto : I have run many times with ovsdriver and other drivers and never seen this, but, have never tried with linuxbridge
16:29:24 <rkukura> yamamoto: It sounds like the agents that see the device get created have no way to know whether they are bound to it until after they do the RPC to get the device details, right?
16:30:10 <yamamoto> rkukura: that's my understanding yes
16:31:52 <rkukura> does midonet use this RPC in the same way as linuxbridge?
16:32:47 <yamamoto> midonet doesn't use agent rpc at all
16:33:08 <rkukura> I see
16:34:38 <Sukhdev> yamamoto : your explanation on the bug makes sense -
16:34:43 <rkukura> so it seems that the RPC handler in neutron-server would somehow need to verify that the RPC came from an agent of a bound MD, and return some sort of error if its not from an agent that should be involved in the binding
16:35:54 <rkukura> it would be ideal if neutron-server could be updated to do this based on info already in the RPC message, so we don’t need to version the RPC
16:37:12 <yamamoto> i guess it's possible to reject such a request as get_device_details takes agent_id.
16:37:13 <rkukura> I think the agent_id and the host_id would be available already, right?
16:37:25 <yamamoto> yes
16:38:06 <rkukura> We’d need ML2’s RPC handler to check with the bound MDs to see if any of them use that agent
16:40:08 <Sukhdev_> I don't believe that information is available - which driver is using which agent
16:40:32 <rkukura> right - it would probably require adding a new method to the ML2 driver API
16:40:35 <Sukhdev_> Sorry my computer rebooted....switching to phone
16:41:21 <yamamoto> there's no clean way. but there are some code already which do "getattr(mech_driver.obj, 'agent_type', None)"
16:41:25 <rkukura> way back when we designed ML2, we had originally thought that RPCs like this would get handled by mechanism drivers, but that never happened
16:42:01 <rkukura> yamamoto: something like that might do the trick
16:44:59 <Sukhdev_> yamamoto: that getattr probably does not work because the other side is not plumbed
16:45:12 <rkukura> I’m not seeing where that check_segment_for_agent method is currently called
16:46:52 <yamamoto> rkukura: it's called by neutron/services/segments/db.py
16:46:57 <yamamoto> another getattr!
16:48:07 <rkukura> So that probably doesn’t have anything to do with the get_device_details RPC handler, but a similar approach could be used for it
16:48:12 <yamamoto> Sukhdev_: what do you mean by the other side?
16:49:17 <yamamoto> rkukura: yes, and hopefully with a reasonable api this time.
16:49:29 <Sukhdev_> Binding of the driver to agent
16:50:37 <rkukura> Sukhdev_: I think the agent_id is already sent from the agent to the server in the RPC request message
16:50:38 <Sukhdev_> yamamoto: I mean mapping of driver to agent
16:51:25 <rkukura> the port binding identifies the MDs
16:51:56 <rkukura> and this getattr could be used to ask those MDs if they recognize the agent that made the RPC request
16:53:57 <yamamoto> it might not work if there are multiple MDs which are compatible with an agent type. is it possible?
16:55:43 <Sukhdev_> yamamoto: possible, but, not probable....
16:55:52 <rkukura> I’d think we’d need to loop over the MDs that the port is bound to, not the ones registered as in check_segment_for_agent
16:57:12 <rkukura> If multiple MDs are compatible with the same agent type, and at least one of those MDs are part of the port binding, then we are probably OK to respond normally to the get_device_details RPC
16:57:15 <Sukhdev_> Time check.... 3 minutes left
16:58:11 <yamamoto> rkukura: i guess it's the best we can do without changing rpc or db
16:58:11 <rkukura> Its the case where none of the bound MDs use the agent that we may not want to set the port status, etc.
16:59:35 <rkukura> I probably would have defined the agent_type as a list, so that a single MD could work with multiple agent types, but that is probably not critical, and if the agent_type attribute is already available on the OVS and LB agents, I’d say just use it
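[Editor's note: the check discussed above can be sketched roughly as follows. This is a minimal illustration, not actual Neutron code; the helper name `agent_is_bound` and the shape of the `bound_drivers` argument are assumptions. The only part taken from the discussion is the `getattr(driver, 'agent_type', None)` pattern and the idea of looping over the mechanism drivers that are part of the port binding.]

```python
def agent_is_bound(bound_drivers, requesting_agent_type):
    """Return True if any bound mechanism driver uses this agent type.

    bound_drivers: the mechanism driver objects that are part of the
    port binding (hypothetical parameter shape for illustration).
    requesting_agent_type: the agent type of the agent that issued
    the get_device_details RPC.
    """
    for driver in bound_drivers:
        # Mirrors the existing "getattr(mech_driver.obj, 'agent_type', None)"
        # pattern mentioned in the discussion; drivers without an agent
        # (e.g. midonet) report None and are skipped, so the server could
        # return an error instead of setting port status for them.
        if getattr(driver, 'agent_type', None) == requesting_agent_type:
            return True
    return False
```

[As yamamoto notes, multiple MDs can share one agent type; rkukura's point is that as long as at least one *bound* MD matches, responding normally is probably fine.]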
17:00:25 <rkukura> looks like we are out of time
17:00:40 <Sukhdev_> Time is up folks, let's take it to neutron channel
17:00:49 <yamamoto> i'll put a link to this discussion in the bug
17:01:00 <rkukura> hopefully it was helpful!
17:01:09 <Sukhdev_> Thanks for attending today's meeting
17:01:18 <rkukura> see you in January!
17:01:25 <yamamoto> thank you
17:01:32 <Sukhdev_> #endmeeting
17:01:35 <rkukura> bye
17:01:42 <rkukura> thanks Sukhdev_!
17:02:13 <Sukhdev_> rkukura: can you issue endmeeting
17:02:24 <rkukura> #endmeeting
17:02:30 <rkukura> I’m not a chair
17:02:47 <Sukhdev_> #endmeeting
17:02:55 <rkukura> Sukhdev_: you may need to do that when you get back in as Sukhdev
17:03:21 <Sukhdev_> Darn it. My laptop rebooted:-)
17:04:23 <yamamoto> or wait and retry.  i think anyone can endmeeting after 1 hour.
17:04:57 <Sukhdev_> I am power cycling the laptop.. Hopefully will work
17:05:38 <rkukura> #endmeeting
17:06:47 <Sukhdev> #endmeeting