16:01:44 #startmeeting networking_ml2
16:01:44 Meeting started Wed Jan 29 16:01:44 2014 UTC and is due to finish in 60 minutes. The chair is rkukura. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:47 The meeting name has been set to 'networking_ml2'
16:01:48 Good morning, all
16:01:53 Hi
16:01:58 #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
16:03:02 I'm trinath, working on the FSL SDN mechanism driver.
16:03:17 #topic Action Item Review
16:03:35 hi trinaths! welcome!
16:03:51 hi
16:04:07 Hi
16:04:25 Hi! Working on the UCS manager mechanism driver supporting SR-IOV
16:04:34 looks like the only AI was for me to send a summary around ML2 binding to the list
16:05:09 this is my first time at this meeting.. how do we start?
16:05:33 I just posted a proposal which covers the flow of info from a bound MechanismDriver to nova's GenericVIFDriver
16:05:48 #link http://lists.openstack.org/pipermail/openstack-dev/2014-January/025812.html
16:06:05 rkukura: would you add binding:profile to the discussion?
16:06:16 irenab: agreed
16:06:17 rkukura: just went through your summary on portbinding changes wrt ML2
16:06:44 thanks... like the generic solution for passing info to the GenericVIFDriver via bound mech drivers
16:07:10 any feedback on the proposal to use binding:vif_details for both VIF security and PCI details?
16:07:56 rkukura: will follow up on the mailing list after a deeper review
16:08:20 This proposal is purely for data flowing out from the plugin/driver, not for input data, so it is read-only
16:08:47 but not sure if it makes sense to add PCI details to any port (even as None)
16:08:59 rkukura: agree with the generic idea, is it possible to go into more detail?
16:09:10 As irenab mentioned, we will be filing a BP to implement binding:profile in ML2 to handle data flowing into the plugin/driver
16:09:42 I have not read the proposal (will review it later) - so, the mechanism drivers will push the info back to the ML2 plugin and then this info gets pushed to nova, right?
16:10:00 can a port have both VIF security and PCI address info attached to it?
16:10:09 will this proposal handle that case too?
16:10:14 sadasu: It's really what's already in Nachi's patches, but just renamed from binding:vif_security to binding:vif_details so it can be used for other things.
16:10:56 rkukura: will this vif_info be available via get_device_details for agents?
16:11:01 sadasu: yes - the set of key/value pairs in binding:vif_details depends on the value of binding:vif_type
16:12:48 irenab: I think we need a separate effort to involve the bound MD in responding to the get_device_details RPC. Is that needed for PCI-passthru?
16:12:57 rkukura: ok.. agreed
16:14:02 rkukura: I think I need it for my case.. will have to get back to you
16:14:12 rkukura: not sure, maybe needed. So the current patch does not extend the device_details with vif_info, right?
16:14:16 Let's discuss feedback on the binding:vif_details proposal on the list, and hopefully get nachi onboard with a plan to finally resolve the VIF security issue
16:14:32 irenab, rkukura: this looks like asomya's proposal, the MD should be able to add info to get_device_details
16:14:51 matrohon: Yes, that is what I was saying is a separate effort.
16:14:55 but maybe not the same as what is returned to nova
16:14:58 matrohon: can you please post a link?
16:15:08 We need to know whether it's a priority for icehouse
16:15:13 hi, i just read rkukura's proposal of vif_details in the dev list.
16:15:22 binding:* attributes are all vif_details....
16:15:47 do we go with vif_details as a generic dictionary?
16:16:13 amotoki: True, but do we want a proliferation of lots of top-level attributes that aren't for end users?
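The binding:vif_details idea discussed above - a generic dictionary whose keys depend on the value of binding:vif_type - can be sketched roughly as follows. This is an illustrative assumption only; the vif_type names and detail keys here are examples, not the final API:

```python
# Illustrative sketch: binding:vif_details as a generic dict whose
# contents depend on binding:vif_type. All names below are assumed
# for illustration, not taken from the actual proposal text.

def build_vif_details(vif_type):
    """Return the MD -> GenericVIFDriver output details for a binding."""
    if vif_type == 'ovs':
        # VIF security info for a normal OVS binding
        return {'port_filter': True}
    if vif_type == 'hw_veb':
        # PCI details for an SR-IOV / PCI-passthru binding
        return {'pci_slot': '0000:03:10.2', 'vlan': 100}
    return {}

# A bound port would then carry both attributes together:
port = {
    'binding:vif_type': 'hw_veb',
    'binding:vif_details': build_vif_details('hw_veb'),
}
```

This matches the read-only framing in the discussion: the dictionary flows out from the plugin/driver, and the consumer interprets it based on the vif_type it accompanies.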
Or is one dictionary sufficient for the MD->VIFDriver path?
16:17:27 amotoki: Yes, the proposal is for binding:vif_details to be a generic dictionary whose contents are interpreted based on the value of binding:vif_type.
16:17:43 understood.
16:18:05 irenab: https://docs.google.com/document/d/1ZHb2zzPmkSOpM6PR8M9sx2SJOJPHblaP5eVXHr5zOFg/edit#
16:18:10 Any more quick questions/comments on that proposal now, or we can take it to the list
16:18:18 i am not sure now.. it seems we can split binding attrs into subcategories: MD->VIF, VIF->MD, VIF<->MD.
16:18:20 matrohon: thanks
16:18:39 rkukura: does this review cover the issue with the vlan # being accessible for delete_port_postcommit()?
16:18:56 rcurran: Trying to get to that
16:19:07 on this commit?
16:19:49 amotoki: This proposal covers MD->VIF. Looking at using binding:profile for inputs to the MD for binding purposes.
16:20:11 rkukura: what is the difference between binding:vif_details and binding:profile?
16:20:55 Regarding my action item, I've been trying to get this port attribute stuff resolved so I can post a clear description of the proposed changes to port binding regarding transactions and access to original vs. new binding details.
16:21:17 binding:vif_details is output from the MD, binding:profile is input to the MD
16:21:24 matrohon: right now binding:profile is reserved as an attr which a plugin (a driver in the ML2 case) can use freely
16:21:37 binding:profile is a bidirectional attribute.
16:21:57 amotoki, rkukura: thanks
16:22:31 amotoki: I'm not aware of current cases where binding:profile is used for output data, and was trying to avoid the complication of merging input data with output data during updates.
16:23:19 Let's take the binding:profile and binding:vif_details discussion to the list, and move on with the agenda
16:23:37 ok.
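The distinction rkukura draws above (binding:profile as input to the mechanism driver, binding:vif_details as output from it) can be illustrated with a minimal sketch. The function name, profile keys, and detail keys are all assumptions made for the example, not real ML2 code:

```python
# Sketch of the data-flow direction described above (names assumed):
# binding:profile carries input INTO the mechanism driver (e.g. a PCI
# request from nova), while binding:vif_details carries output FROM the
# driver back toward the GenericVIFDriver after a successful binding.

def bind_port_sketch(port):
    profile = port.get('binding:profile') or {}   # input to the MD
    details = {'port_filter': True}               # output, read-only to callers
    if 'pci_slot' in profile:
        # echo PCI info from the request into the binding result
        details['pci_slot'] = profile['pci_slot']
    port['binding:vif_details'] = details
    return port

bound = bind_port_sketch({'binding:profile': {'pci_slot': '0000:03:10.2'}})
```

This also shows why keeping the two attributes separate avoids the merge complication amotoki and rkukura discuss: input and output never share a dictionary.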
16:24:44 I plan to post the proposal that rcurran was asking about to openstack-dev in the next day or two, so I'll keep this action item open
16:25:37 #action rkukura to post proposal for portbinding changes to call MDs outside of transactions and with all needed info
16:26:04 #topic bugs
16:26:17 fyi rkukura - i plan on pushing up a review with the workaround of saving off the vlan in delete_port_precommit() (for use in postcommit). i'll change this once your code gets in
16:27:00 rcurran: Great! Sorry I've taken so long to write that up, but I think you have the general idea from these meetings
16:27:21 #link https://bugs.launchpad.net/neutron/+bugs?field.tag=ml2
16:27:58 I just reported a new potential bug and tagged it with ml2
16:28:02 https://bugs.launchpad.net/neutron/+bug/1274160
16:28:26 3 high priority bugs, 2 in progress
16:28:45 * mestery walks in very late.
16:29:12 and it looks like safchain is taking the 3rd: https://bugs.launchpad.net/neutron/+bug/1237807
16:29:29 hi mestery - we just moved from AIs to bugs
16:29:33 regarding the db migration issue, there are two opinions and seemingly no consensus.
16:29:35 rkukura: Thanks!
16:30:24 amotoki: What's the disagreement?
16:31:24 whoops... slow connection. the question is how to handle havana migration.
16:33:54 amotoki: Were you going to describe the issue?
16:34:33 I see there are links to discussions - how can we bring this to conclusion?
16:36:02 on other bugs, let's fix what we can, review fixes, and hopefully work through these soon
16:36:43 Please speak up if any seem to have wrong priorities - we'll look at the high and maybe medium ones in these meetings to make sure we are progressing
16:37:19 sure
16:37:26 I will check the situation of the db migration again. several fixes are related.
16:37:27 #topic ovs-firewall-driver
16:37:33 asadoughi: any update on this?
16:38:13 hi. no news. no new reviews were made because of the gating issues and no new code pushed for the same reason.
moving forward with it now. are our cores allowed to review again?
16:38:57 asadoughi: I think we've been allowed (expected) to keep reviewing, just not approve
16:39:08 I'll admit I'm behind on reviews
16:39:33 ah, well, again, i'd like to get reviews on the code that's already out there if possible.
16:39:46 asadoughi: Can you paste the review here please?
16:39:47 https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/ovs-firewall-driver,n,z
16:39:56 asadoughi: You are a psychic. :)
16:40:05 We should prioritize reviewing fixes for gate issues, but need to keep making progress
16:40:17 rkukura: i agree with that sentiment
16:40:37 rkukura: +1
16:40:54 rkukura: any ETA on the gate issue fixes?
16:41:34 I don't have any current info on the gate issues - does anyone else?
16:42:36 i don't have a concise idea, but anyone interested can look for the "state of the gate" e-mails.
16:42:44 I have nothing either. the neutron channel is a good place to check the status.
16:42:56 right
16:42:59 yes, #openstack-neutron too
16:43:13 Let's make sure to fix/review any ML2 issues affecting the gate ASAP
16:43:28 asadoughi: sorry for the delay. my understanding was completely wrong on source-port. i will resume the review.
16:43:57 amotoki: ok. thanks.
16:44:06 asadoughi: I'll try to look these over this week as well
16:44:22 #topic new MechanismDrivers
16:45:16 mestery: What do you have in mind for covering these in this meeting? Should we go through the status of each, or just see if there are any general issues/questions?
16:45:30 can we discuss my FSL mechanism driver?
16:45:33 rkukura: Maybe just general issues now.
16:45:41 Such as what trinaths wants to discuss :)
16:45:47 trinaths: Please go ahead.
16:45:51 trinaths: Sure
16:46:05 thank you mestery... :)
16:46:33 * mestery has to step out now.
16:46:54 We have developed an ML2 mechanism driver to post the network/subnet/port related data to our Cloud Resource Discovery Service..
16:46:56 gate issue: eventually infra is doing a tox upgrade
16:47:03 I have submitted the code for review..
16:47:14 got a few comments
16:47:23 I was not clear on one comment
16:47:35 trinaths: sounds great
16:47:59 on the unit test case for the driver
16:48:31 the driver needs the CRD client.. to send data to the CRD server.
16:48:39 but in unit testing that's not possible..
16:49:06 rkukura: do you want to discuss the vnic_type port attribute we talked about during the PCI passthru meeting?
16:49:19 i think trinaths is referring to the now required 3rd party testing for all ML2 mech drivers
16:49:27 trinaths: Need to fake it
16:49:34 trinaths: you can mock the operation - look into the Arista ML2 driver
16:49:40 yes, truly said.. I need to fake it..
16:50:10 trinaths: Is this a new dependency on a client python library?
16:50:28 can anyone check the code in the link here as guidance for me on how to fake it.. am I on the right path?
16:50:48 #link https://review.openstack.org/#/c/69838/1/neutron/tests/unit/ml2/drivers/test_fslsdn_mech.py
16:50:57 no rkukura.. !
16:51:16 Where does the crdclient module come from?
16:51:20 we have a CRD client.. something like the neutron client
16:51:39 the CRD client is something we developed ourselves
16:52:10 trinaths: please look into the unit test example in the Arista driver - we mock the sync operation. You can do something similar
16:52:18 #link https://review.openstack.org/#/c/69838/1
16:52:34 okay.. let me check the same.. sukhdev
16:53:05 Don't the other drivers all include the needed client, and just need to mock in the unit tests because there is no server to talk to?
16:53:31 rkukura: yes
16:53:41 cisco_nexus does mock our server api
16:54:08 trinaths: Have you looked at the Tail-F NCS driver? I believe they are doing a similar thing to what you want to do.
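The "fake it" advice above - mocking an external client library that is not importable in the unit test environment - can be sketched with the standard mock-an-import technique (the approach the stackoverflow link later in the meeting describes). The `crdclient` module name and its `Client` API are assumptions based on the discussion; the real FSL CRD client may look different:

```python
# Sketch of mocking an unavailable client library in unit tests:
# install a fake module in sys.modules BEFORE the code under test
# imports it. "crdclient" and Client(...) are assumed names here.
import sys
from unittest import mock

sys.modules['crdclient'] = mock.MagicMock()

import crdclient  # now resolves to the fake module, no real install needed

# Code under test would construct and use the client as usual:
client = crdclient.Client(endpoint='http://crd.example.com:9797')
client.create_network({'id': 'net-1'})

# The mock records every call, so the test can assert on it:
client.create_network.assert_called_once_with({'id': 'net-1'})
```

This is the same pattern the Arista and cisco_nexus drivers' unit tests use in spirit: the server-facing operation is replaced with a mock, and the test verifies the driver invoked it with the expected data.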
16:54:13 So, there are plenty of examples
16:54:20 So it seems trinaths's patch needs to include crdclient, or set it up to be pip-installed or something
16:54:56 #link http://stackoverflow.com/questions/8658043/how-to-mock-an-import
16:55:24 yes rkukura.. the crd client needs to be installed like the neutron client
16:55:33 only 5 minutes left - let's move this discussion to the review, or to IRC or email
16:55:54 Any other issues regarding new drivers?
16:56:20 Before we run out of time: Please review this and see if the approach is right: https://review.openstack.org/#/c/69792/
16:56:36 irenab: the SR-IOV mech drivers depend on binding:profile, we need it at high priority
16:56:43 I think it's the brocade driver that needs to disable bulk ops - would be good to discuss that, but not much time
16:56:57 https://review.openstack.org/#/c/68996/
16:57:07 BigSwitch needs no bulk
16:57:21 So I proposed that approach
16:58:00 ktbenton1: Is this because bulk isn't properly implemented in ML2, or something else?
16:58:29 It's because the backend for the driver doesn't support bulk operations
16:58:44 so we need a way to change the native_bulk flag that ML2 advertises
16:59:18 Don't the bulk operations get implemented as non-bulk operations?
16:59:51 Not with native_bulk enabled
17:00:13 Let's work through this in the review, we are out of time here
17:00:17 ok
17:00:17 This is what I mentioned last week; it deals with mechanism drivers raising exceptions in postcommit ops. Need to add more unit tests. https://review.openstack.org/#/c/69792/
17:01:08 banix: Thanks. Looks like it's got some good review input. I'll take a look too
17:01:15 We are out of time
17:01:25 #endmeeting
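The native_bulk exchange at the end of the meeting comes down to this: when a plugin advertises native bulk support, the API layer sends the whole batch to it in one call, so a backend that cannot handle batches breaks; when the flag is off, the API layer emulates bulk by looping over single operations. A minimal sketch of that dispatch, with all names assumed for illustration rather than taken from the real ML2 code:

```python
# Minimal sketch (illustrative, not real Neutron code) of native vs
# emulated bulk: if the plugin does not advertise native_bulk_support,
# a bulk create is emulated as a loop of single creates.

def create_networks_bulk(plugin, networks):
    if getattr(plugin, 'native_bulk_support', False):
        # one batched call straight to the backend
        return plugin.create_network_bulk(networks)
    # emulated bulk: one backend call per item
    return [plugin.create_network(n) for n in networks]

class NoBulkPlugin:
    """Stand-in for a plugin whose backend cannot batch operations."""
    native_bulk_support = False

    def create_network(self, net):
        return dict(net, status='ACTIVE')

result = create_networks_bulk(NoBulkPlugin(), [{'id': 'a'}, {'id': 'b'}])
```

This is why ktbenton1's patch wants a way for a mechanism driver to turn the advertised flag off: with it forced on, the emulation path is never taken.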