16:05:05 #startmeeting networking_ml2
16:05:05 Meeting started Wed Jul 5 16:05:05 2017 UTC and is due to finish in 60 minutes. The chair is rkukura. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:05:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:05:09 The meeting name has been set to 'networking_ml2'
16:05:40 hi tbachman, trevormc, yamamoto_, sadasu
16:05:49 anyone else here for the ML2 meeting?
16:05:58 rkukura: hi :)
16:06:09 #topic Agenda
16:06:34 #link https://wiki.openstack.org/wiki/Meetings/ML2#Meeting_July_5.2C_2017
16:06:42 hi Sukhdev!
16:06:46 just getting started
16:07:06 nothing specific on today’s agenda - is there anything anyone wants to be sure to cover?
16:07:11 Hi
16:07:27 rkukura: I have a generic port-binding question, if no one has anything else
16:07:37 Sorry for being late
16:07:56 tbachman: let's cover that under Open Discussion
16:08:14 #topic Announcements
16:08:41 Only announcement I have is that I will most likely miss next week’s IRC meeting
16:08:50 Any other announcements?
16:09:26 I shall cover next week
16:09:33 #topic RFEs/Bugs
16:09:38 Sukhdev: thanks
16:09:58 trevormc: Any update on your SR-IOV related work?
16:10:19 I did push up an interface to iplex
16:10:22 but it still needs some work
16:10:38 #link https://review.openstack.org/#/c/476547/
16:10:54 It also needs to run on third-party CI! :)
16:11:33 Just a minor update, working on it more this week.
16:12:07 I have another update on the QinQ refactor patch too. Just a little speed bump.
16:12:27 trevormc: thanks!
16:12:48 anything else on these?
16:13:06 nothing on SR-IOV but one for QinQ
16:13:38 trevormc: go ahead
16:13:43 The OVO integration for type_vlan merged (see https://review.openstack.org/#/c/367810/); this required me to move the logic to helpers.py. Not sure if this was the correct approach, but you can see it here
16:13:53 #link https://review.openstack.org/#/c/458531/
16:14:15 might need to tag the author of the integration patch as a reviewer..
16:14:42 I updated the commit msg and made inline comments about it.
16:15:04 that's all the updates from me, thanks rkukura
16:16:16 thanks trevormc!
16:16:23 any other questions on these?
16:16:33 any other bugs/RFEs to discuss?
16:17:23 guess not…
16:17:28 #topic Open Discussion
16:17:44 * tbachman raises hand
16:17:49 tbachman: go ahead
16:18:00 I have more of a “best practices” question
16:18:02 for port binding
16:18:29 I’m looking into handling port binding for heterogeneous port types
16:19:00 it seems like one of the idioms is to depend on agents running on the host that the port is being bound to
16:19:29 I was curious if there was ever any thought into introspection of nova in order to discern this information, rather than rely on a given agent
16:19:35 when there is an agent involved, port binding needs to ensure the agent is there and can provide the connectivity
16:19:46 and also wasn’t sure how multiple agents on the same host worked
16:20:18 (e.g. can a single host support multiple port bindings)
16:20:20 tbachman: not all ML2 mech drivers rely on agents
16:20:53 sadasu: good point. What mechanism is used then to determine whether or not they should bind?
16:21:03 (if this can be generalized)
16:21:07 I think supporting SR-IOV and normal OVS on the same host is possible
16:21:22 tbachman: yes, a single host can bind multiple types of ports
16:22:12 I guess my question is: is it a bad idea to introspect nova to get information for port binding?
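For context on the agent-based idiom mentioned at 16:19:00, a minimal sketch of how an agent-dependent ML2 mechanism driver typically decides whether to bind is shown below. This is illustrative only, not code discussed in the meeting: the driver class, agent type string, and the OVS VIF details are hypothetical choices, while bind_port(), PortContext.host_agents(), segments_to_bind, and set_binding() are the standard ML2 mechanism-driver hooks.

    # Sketch of the agent-based binding idiom: bind only when this driver's
    # L2 agent is alive on the host the port is being bound to.
    # "ExampleAgentDriver", AGENT_TYPE_EXAMPLE, and the OVS VIF type are
    # illustrative placeholders.

    from neutron_lib.api.definitions import portbindings
    from neutron_lib.plugins.ml2 import api

    AGENT_TYPE_EXAMPLE = 'Example L2 agent'  # hypothetical agent type string


    class ExampleAgentDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def bind_port(self, context):
            # host_agents() consults the agents_db for agents of the given
            # type registered on context.host (the binding host).
            for agent in context.host_agents(AGENT_TYPE_EXAMPLE):
                if not agent['alive']:
                    continue
                for segment in context.segments_to_bind:
                    context.set_binding(
                        segment[api.ID],
                        portbindings.VIF_TYPE_OVS,
                        {portbindings.CAP_PORT_FILTER: True})
                    return
            # No live agent on this host: decline to bind so another
            # mechanism driver gets a chance to bind the port.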
16:22:18 I can see an argument against it
16:22:25 Any mechanism driver needs some way to determine whether or not it can bind for a particular host, but it's up to the driver to use whatever info makes the most sense - the agents_db is just one option
16:22:28 for example, since nova is already calling into neutron to do port binding
16:22:35 as far as I know, only Cisco SR-IOV ports don't need an agent to complete binding
16:23:06 sadasu: thanks for that input — will have a look!
16:23:18 tbachman: what info from nova do you think might be useful for this?
16:23:20 tbachman: introspecting nova and not using an agent is fine
16:23:34 rkukura: I guess in this case, hypervisor type
16:23:38 e.g. ESX
16:23:41 KVM
16:23:42 as long as you have all the information you are looking for, from Nova
16:24:08 sadasu: I also need to figure out if I can get that info (so good point ;-) )
16:24:39 Hypervisor type is available
16:24:43 anyway, I don’t want to dominate the Open Discussion section with this — just wanted to find out if there are other examples. It sounds like there are.
16:25:10 Sukhdev: +1. yes, hypervisor type is available from nova
16:25:22 and if folks thought it was a bad idea for neutron to call back into nova during nova’s call to neutron for port-binding
16:25:36 Port binding happens outside any transaction, so having an MD’s bind_port make REST calls on nova is reasonable, I think
16:25:44 rkukura: thx!
16:26:15 tbachman: anything else on this?
16:26:26 rkukura: not now. Thanks all for the input!
16:26:48 anything else to discuss today before we wrap up?
16:26:54 or could you use the port binding extension to get this information from nova during the first bind_port call, instead of another REST API request/response?
16:27:45 sadasu: that would be great. I’m not sure it would be easy to get that into nova, however :(
16:28:56 tbachman: Agreed. Just giving you options.
16:29:04 I have seen deployments where they pass this information via config
16:29:37 Sukhdev: yeah — I prefer dynamic stuff (i.e. no restart of services needed for changes). Thanks for pointing that out, tho!
16:29:41 sadasu: yeah — thx!
16:30:37 * tbachman steps aside to allow other discussions
16:30:44 rkukura: thanks for the floor
16:30:46 :)
16:30:53 thanks tbachman!
16:31:04 any other topics for discussion, or are we done for today?
16:31:19 We are good
16:31:39 ok, last call…
16:31:39 PTG attendees? Sukhdev is a No, rkukura is a maybe
16:31:40 not from me. thanks
16:31:47 Maybe from me.
16:31:59 sadasu: still a maybe
16:32:32 sadasu: are you going?
16:32:32 i'm a maybe
16:32:40 I am maybe too!
16:33:10 :-)
16:33:10 ok, let's wrap up for today
16:33:14 thanks everyone!
16:33:20 o/
16:33:22 #endmeeting
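The nova-introspection alternative discussed above (checking the hypervisor type of the binding host from nova instead of relying on an agent) could look roughly like the sketch below. It is a sketch under assumptions: the credential parameters are placeholders that would normally come from driver config, and matching nova's hypervisor_hostname to neutron's binding host may need adjustment in a real deployment; novaclient's hypervisors.search() is a standard way to look up a hypervisor by hostname pattern.

    # Sketch: ask nova for the hypervisor type of the host a port is being
    # bound to, so a mechanism driver can decide whether to bind there.
    # Auth values are placeholders, not a recommended credential scheme.

    from keystoneauth1 import loading as ks_loading
    from keystoneauth1 import session as ks_session
    from novaclient import client as nova_client
    from novaclient import exceptions as nova_exc


    def get_hypervisor_type(host, auth_url, username, password, project_name):
        """Return nova's hypervisor_type for `host` (e.g. 'QEMU'), or None."""
        loader = ks_loading.get_plugin_loader('password')
        auth = loader.load_from_options(
            auth_url=auth_url, username=username, password=password,
            project_name=project_name,
            user_domain_name='Default', project_domain_name='Default')
        nova = nova_client.Client(
            '2.1', session=ks_session.Session(auth=auth))
        try:
            # os-hypervisors search by hostname pattern; the binding host is
            # what an ML2 driver sees as PortContext.host.
            hypervisors = nova.hypervisors.search(host)
        except nova_exc.NotFound:
            return None
        for hyp in hypervisors:
            # hypervisor_hostname may be an FQDN while the binding host is a
            # short name (or vice versa), so exact matching may need care.
            if hyp.hypervisor_hostname == host:
                return hyp.hypervisor_type
        return None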