15:01:37 <carl_baldwin> #startmeeting neutron_l3
15:01:38 <openstack> Meeting started Thu Apr 30 15:01:37 2015 UTC and is due to finish in 60 minutes.  The chair is carl_baldwin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:39 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:39 <johnbelamaric> hi
15:01:42 <openstack> The meeting name has been set to 'neutron_l3'
15:01:45 <carl_baldwin> #topic Announcements
15:01:47 <yalie> mlavalle: thanks
15:01:50 <carl_baldwin> #link https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
15:02:01 <saggi1> mlavalle: We just want to talk a bit about it, and try to get our heads around how we fit into Liberty
15:02:02 <carl_baldwin> Kilo released today!
15:02:26 * pc_m yay!
15:03:03 <carl_baldwin> Also, if you have not yet heard, there are changes coming to the feature proposal process.  Not merged yet but appears to be gaining momentum.
15:03:05 <carl_baldwin> #link https://review.openstack.org/#/c/177342/
15:04:42 <carl_baldwin> From my perspective, what this means is that there may be less work involved in proposing features for a cycle.  The more detailed design discussions can happen more independently when there is more confidence that a feature can be included in the scope of a release.
15:05:17 <carl_baldwin> If this is the first you’ve heard, I encourage you to go take a look at the review I linked.
15:05:36 <gsagie_> ok, thanks carl
15:05:40 <carl_baldwin> Any other announcements?
15:05:42 <yalie> thanks
15:05:43 <vikram__> thanks
15:06:23 <carl_baldwin> #topic Bugs
15:06:43 <carl_baldwin> I bumped up the priority of bug 1438819
15:06:43 <openstack> bug 1438819 in neutron "Router gets address allocation from all new gw subnets" [High,In progress] https://launchpad.net/bugs/1438819 - Assigned to Andrew Boik (drewboik)
15:07:33 <carl_baldwin> Unfortunately, we didn’t understand the full effect of the bug until it was too late for Kilo.  So, it is release noted.
15:08:20 <carl_baldwin> I didn’t mark it critical because it is not a common use case to add a subnet to an external network.
15:08:30 <carl_baldwin> Any other bugs we should be aware of?
15:09:50 <carl_baldwin> #topic Dragonflow
15:10:06 * carl_baldwin hopes he got that right.
15:11:00 <gsagie_> You got it right
15:11:02 <carl_baldwin> saggi1: Do you have links to introduce this to the team?
15:11:07 <carl_baldwin> gsagie_: ^
15:11:31 <saggi1> Sure, https://launchpad.net/dragonflow
15:11:50 <saggi1> carl_baldwin: Already pointed it out in a previous meeting
15:11:59 <gsagie_> And you can also read the great blog posts from Eran Gampel here: http://blog.gampel.net
15:12:22 <mlavalle> saggi1: this is the repo, right: https://github.com/stackforge/dragonflow
15:12:26 <saggi1> mlavalle: yes
15:12:29 <saggi1> Basically we don't try to be a new full driver
15:12:58 <saggi1> We base ourselves on the LegacyRouter and try to optimize where we can using OVS flows on the compute node
15:13:29 <saggi1> For instance, if you have two VMs from different subnets on the same host, we detect that and install a flow that bypasses the normal routing
15:13:29 <gsagie_> The idea in a nutshell is to achieve DVR on top of "legacy" L3 implementation using an SDN controller (based on Ryu framework) so we achieve DVR without the need of compute node L3 agents and without namespaces
15:13:44 <carl_baldwin> saggi1: gsagie_:  Are there db side changes too?
15:13:52 <saggi1> carl_baldwin: no
15:14:06 <saggi1> carl_baldwin: We use the topology information for our optimizations
15:14:18 <saggi1> tenants, routers, subnets
15:14:19 <gsagie_> and i believe in the future we will be able to eliminate the L2 agent as well, since we can also leverage a security group driver that configures flows remotely
15:14:23 <carl_baldwin> This is all contained within the L3 agent process?
15:14:28 <mlavalle> saggi1: so, the router namespaces in the network node still exist?
15:14:37 <gsagie_> mlavalle : no
15:14:49 <gsagie_> we use OpenFlow flows to achieve the DVR functionality
15:15:07 <gsagie_> mlavalle : sorry, for SNAT we use them still
15:15:26 <tidwellr1> so it's completely flow-based otherwise?
15:15:27 <gsagie_> mlavalle : but we have future plans to distribute that as well; design-wise it's possible
15:15:44 <mlavalle> gsagie_: also with flows, right?
15:15:46 <vikram__> gsagie: does this support VPN as well?
15:16:10 <saggi1> We want to offload as much as we can to openflow flows
15:16:25 <yalie> gsagie_: is there a dependency on the version of OVS?
15:16:36 <saggi1> 1.3 currently
15:16:39 <saggi1> yalie: ^
15:16:40 <gsagie_> mlavalle : in the current design SNAT/FIP is still using legacy L3
15:16:44 <gsagie_> yalie: 2.1.3
15:17:18 <gsagie_> but we have plans to also implement that using flows, so it's mostly a matter of work resources
15:17:54 <mlavalle> gsagie_: got it
15:18:48 <saggi1> The design allows us to add more features as we go, falling back to other, non-flow packet delivery for anything we don't implement.
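The E/W optimization saggi1 describes above (two VMs from different subnets on the same host get a flow that bypasses normal routing) can be sketched as pure decision logic. A minimal sketch; `Port`, `should_install_bypass`, and the topology shape are all hypothetical names for illustration, not Dragonflow's actual code:

```python
# Hypothetical sketch of the E/W bypass decision described above.
# The data model and function names are assumed for illustration;
# Dragonflow's real topology model differs.
from dataclasses import dataclass


@dataclass(frozen=True)
class Port:
    host: str     # compute node the VM runs on
    subnet: str   # subnet the port belongs to
    router: str   # router joining the subnets


def should_install_bypass(a: Port, b: Port) -> bool:
    """Install a direct OVS flow only when routing can be short-cut:
    both VMs sit on the same host, on different subnets that are
    attached to the same router."""
    return (
        a.host == b.host
        and a.subnet != b.subnet
        and a.router == b.router
    )


vm1 = Port(host="compute-1", subnet="10.0.1.0/24", router="r1")
vm2 = Port(host="compute-1", subnet="10.0.2.0/24", router="r1")
vm3 = Port(host="compute-2", subnet="10.0.2.0/24", router="r1")
```

Anything the predicate rejects simply falls back to the non-flow packet delivery path, which is what keeps the design incremental.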
15:19:03 <gsagie_> I think that getting rid of the agents on the compute node might prove to be a big complexity reducer, and i think the design is simple enough on the controller side that this might be a good reference design for SDN
15:19:04 <carl_baldwin> So, to be clear, you have E/W routing working and N/S routing is still done with the LegacyRouter implementation?
15:19:21 <gsagie_> carl_baldwin: currently yes
15:19:37 <carl_baldwin> gsagie_: It is a good start.
15:20:08 <mlavalle> gsagie_: what are the benefits performance / throughput wise?
15:20:56 <gsagie_> mlavalle: we are working on benchmarking this versus current DVR, no results yet, but management complexity wise, i think you can see that not needing the namespaces is an improvement
15:21:35 <saggi1> mlavalle: We don't have a lot of data on it. We are still trying to assess how to properly test "real world" use cases. So any suggestions about what to benchmark would be most appreciated.
15:22:19 <vikram__> gsagie: how it's different from other open controllers like open-contrail?
15:22:19 <saggi1> Hammering E/W communication saw a 20% increase in throughput IIRC, but it was a very simple benchmark.
15:22:25 <carl_baldwin> I’m very happy to see this work being done.  I’d like to continue to support this effort.  How has been your experience integrating with the current L3 implementation?  I’m hoping your perspective might help to improve the modularity of the code.
15:22:30 <mlavalle> saggi1: I work for a big deployer. We have a tendency to do things based on flows. I am going to bring this up to my coworkers and we may get back to you
15:23:14 <saggi1> carl_baldwin: It's been problematic. But it's getting better.
15:24:14 <saggi1> carl_baldwin: A lot of DVR-specific code is causing odd bugs for us.
15:24:39 <saggi1> That being said in the last couple of months there was a lot of work in decoupling l2 from l3 and it's been very helpful.
15:24:53 <gsagie_> vikram_ : we are trying to build this in a very simple way, we leverage Ryu for its simplicity but any other SDN controller could be used; we don't want to introduce more complexity than we need to
15:24:57 <carl_baldwin> saggi1: gsagie_:  I wonder if we could get together with a little more time to discuss it.  Either at summit — which may or may not be possible for me — or at another time.
15:25:09 <carl_baldwin> saggi1: I’m glad to hear that we’re moving in a good direction.
15:25:23 <vikram__> Carl: +1
15:25:27 <mlavalle> saggi1: by the way, we are doing things with Ryu, so there's affinity already
15:25:36 <vikram__> gsagie: thanks got it
15:25:51 <gsagie_> carl_baldwin: We would love to meet in the summit and further discuss it, hopefully we will also have some numbers to show you
15:26:19 <mlavalle> gsagie_, carl_baldwin: I am available, willing to meet at summit
15:26:38 <pc_m> would be interesting to learn more
15:26:43 <gsagie_> I think we would love to make this a joint effort, because i think that idea wise this design can be a good reference
15:26:48 <tidwellr1> I would love to learn a little more too
15:27:13 <vikram__> idea sounds really interesting. I am in.
15:27:24 <carl_baldwin> We’ll find some time at the summit for it.  I’m not sure about my complete schedule but I’d like to fit this in.  My focus will be on what steps we can take to allow your efforts to continue successfully in parallel with the existing L3 implementation.
15:28:01 <yalie> so, this is a new implementation of DVR; will it reuse the current API?
15:28:05 <saggi1> carl_baldwin: That is exactly what we want to nail down. Since, as you can see, we depend on a lot of core l3 code
15:28:56 <gsagie_> carl_baldwin: is the L3 reference implementation going out of tree?
15:29:16 <carl_baldwin> I’m thinking about the Friday as a “contributor meetup”
15:29:31 <mlavalle> carl_baldwin: +1
15:29:47 <vikram__> timing?
15:30:36 <carl_baldwin> gsagie_: I’m not 100% sure yet.  So far, there are no immediate plans.
15:30:59 <carl_baldwin> gsagie_: saggi1:  Will you be staying until Friday?
15:31:31 <gsagie_> carl_baldwin: yes
15:32:52 <carl_baldwin> #action carl_baldwin to setup a time and place on Friday for a contributor meetup on L3 modularity and supporting development of dragonflow.
15:33:23 <gsagie_> thanks carl !
15:33:28 <saggi1> thanks
15:34:07 <carl_baldwin> gsagie_: saggi1:  Thank you for coming to the meeting.  I look forward to hearing more about your work.  I will read the blog posts and look through the code repository that you have linked.
15:34:16 <carl_baldwin> Anything else on this topic for now?
15:34:30 <gsagie_> not from me, thanks
15:34:45 <yalie> I have a question
15:34:54 <gsagie_> feel free to approach any of us online if you have any question
15:35:00 <gsagie_> yalie: yes?
15:35:11 <yalie> about the gateway of subnet, when a VM act as a router
15:35:23 <mlavalle> gsagie_: you hang out in #openstack-neutron?
15:35:28 <gsagie_> yes
15:35:32 <saggi1> mlavalle: yes
15:35:34 <yalie> we can't assign the gateway with a VM' port IP
15:35:52 <mlavalle> :-)
15:35:53 <yalie> but when the VM as a service like router, we need it
15:36:05 <yalie> could we remove this limitation?
15:36:38 <saggi1> yalie: We would need to get that information from the DB, to know that this VM is a router, since this is where we get the topology information from
15:36:39 <carl_baldwin> yalie: Could we postpone this for Open Discussion.  Or, we could discuss in the neutron channel just after the meeting.
15:36:53 <saggi1> carl_baldwin: sure
15:36:53 <yalie> carl_baldwin: yes, thanks
15:37:27 <carl_baldwin> #topic bgp-dynamic-routing
15:37:41 <carl_baldwin> devessa cannot make it today.
15:37:54 <carl_baldwin> tidwellr1: Do you want to give an update quickly?
15:37:59 <tidwellr1> sure
15:38:53 <tidwellr1> I'm working through the tutorial here https://wiki.openstack.org/wiki/Neutron/DynamicRouting/TestingDynamicRouting
15:39:40 <tidwellr1> I'm deviating from it slightly as I'm interested in how to go about automated testing of BGP advertisements
15:40:26 <tidwellr1> I'm mixing in extra quagga instances, but following the instructions otherwise
15:41:40 <carl_baldwin> tidwellr1: Sounds like good progress but we’ve caught you in the middle of getting it on its feet.
15:41:55 <tidwellr1> that's OK
15:42:09 <tidwellr1> I'm really coming at this from the testing perspective
15:42:21 <carl_baldwin> tidwellr1: understood.  Keep up the good work.
15:43:22 <tidwellr1> I'll have more to share next week
15:43:36 <carl_baldwin> tidwellr1: Do you think it would be difficult to run quagga on the same VM instance with devstack, adding routes to br-ex instead of eth0?  It may be very difficult still to get automated testing in the gate needing more than one instance.
15:45:02 <carl_baldwin> Something to think about… We should probably move on and get to ipam before the meeting time is over.
15:45:10 <carl_baldwin> #topic neutron-ipam
15:45:15 <tidwellr1> yeah, that's a concern. I have a couple ideas I'm going to play with, I'll report back next week
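For context on the tutorial tidwellr1 mentions above, a quagga BGP speaker of the kind it sets up is driven by a small bgpd.conf. The fragment below is a hypothetical illustration only; the ASNs, router-id, neighbor address, and prefix are invented, and the wiki page linked above has the actual values used for testing:

```
! Hypothetical bgpd.conf fragment -- all numbers are made up
router bgp 64512
 bgp router-id 192.0.2.1
 ! peer with the speaker advertising neutron routes
 neighbor 203.0.113.2 remote-as 64513
 ! advertise a tenant-facing prefix
 network 198.51.100.0/24
```

Running quagga alongside devstack on one VM, as carl_baldwin suggests below, would mean pointing such a peer at br-ex rather than a physical interface.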
15:45:29 <carl_baldwin> johnbelamaric: tidwellr1: pavel_bondar:  ping
15:45:37 <pavel_bondar> pong
15:45:43 <johnbelamaric> carl_baldwin: hello
15:46:07 <carl_baldwin> I’ve been watching the progress with getting the tests to pass.
15:46:12 <carl_baldwin> Nice work so far.
15:46:21 <pavel_bondar> it's close:)
15:46:35 <carl_baldwin> I also started reviewing the patch but I did not finish.  It is a pretty large patch.
15:47:49 <pavel_bondar> I switched back to passing subnet_id in the interface instead of subnet_dict and have a workaround for the issue with the OpenContrail tests
15:48:38 <johnbelamaric> carl_baldwin, pavel_bondar, tidwellr1: I think there is one comment left to address from Ryan regarding duplicate code
15:49:30 <pavel_bondar> Also I sent to the ML my findings about the original issue I had with the OpenContrail tests and fetching the subnet via the plugin. #link http://lists.openstack.org/pipermail/openstack-dev/2015-April/063004.html
15:49:49 <pavel_bondar> john, could you please point which comment?
15:50:39 <johnbelamaric> *looking*
15:50:42 <carl_baldwin> pavel_bondar: Thanks for pointing out the ML post.  I have not visited the ML yet today.  :)
15:51:14 <tidwellr1> I struggled with the OpenContrail tests as well, they extend the db_plugin tests in some interesting ways
15:53:00 <pavel_bondar> tidwellr1: yeah, they do not call super methods directly, but instead create a new http request, so they are quite different from the others
15:54:07 <tidwellr1> I ended up writing new test cases that the OpenContrail tests wouldn't extend as my hack, but that's not really an option here is it :)
15:55:39 <johnbelamaric> pavel_bondar: nevermind, you fixed it in PS 51
15:57:08 <pavel_bondar> johnbelamaric: yeah, right, workaround was used to bypass plugin level and call _get_subnet directly from db_base
15:57:31 <johnbelamaric> pavel_bondar: I was referring to the open comment I thought there was, it was done in 51
15:57:54 <pavel_bondar> ah:)
15:57:56 <pavel_bondar> got it
15:59:13 <pavel_bondar> but yeah, the OpenContrail issue is not high priority for now since I have a workaround, but it is still interesting why it deletes the port on fetching the subnet
15:59:34 <carl_baldwin> We can take this to the neutron room.  We’re out of time.  :)
15:59:49 <carl_baldwin> Thanks for all your work.
15:59:55 <carl_baldwin> #endmeeting