09:00:38 #startmeeting Dragonflow
09:00:39 Meeting started Mon Jun 12 09:00:38 2017 UTC and is due to finish in 60 minutes. The chair is oanson. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:40 Hey
09:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:43 The meeting name has been set to 'dragonflow'
09:00:49 Hello. Who is here for the Dragonflow weekly?
09:01:18 Hi all
09:02:09 Let's wait another minute. Maybe nick-ma or xiaohhui will join
09:02:11 Then we can start
09:03:31 All right, then
09:03:59 hi
09:04:06 #info lihi dimak leyal irenab in meeting
09:04:12 #info Roadmap
09:04:18 #topic Roadmap
09:04:38 Pike roadmap is coming along nicely.
09:05:00 IPv6 is almost complete. lihi, I see you uploaded the last patch?
09:05:40 Yes, let's cross fingers that it will pass the gate :)
09:05:47 Hear! Hear!
09:06:04 Northbound API refactor is finally complete. Last patches were merged a few minutes ago
09:06:20 Which broke the VLAN aware VMs patch - but I rebased it. It should be good to go.
09:06:25 🎊
09:06:49 Lastly - SFC. dimak, any updates?
09:07:12 I'm still working on getting it to play nice in multinode
09:07:22 I'll rebase once it works
09:07:44 Sure. Do you have an ETA?
09:08:11 Around 2 weeks-ish with all the ongoing stuff
09:08:23 I'll need to make some changes to network access apps
09:08:28 All right.
09:08:49 I started working on LBaaS. I hope to have a spec up by the end of the week
09:09:25 Anything else for roadmap?
09:10:37 #topic Open Discussion
09:10:49 Any open issues anyone wants to discuss?
09:11:15 I took a look at the l3 flavor extension
09:11:18 oanson, LBaaS driver for Octavia?
09:11:30 It will be part of the LBaaS spec
09:11:50 The design will include L4, L7.
09:12:04 Pools, monitors, classification, etc.
09:12:05 Did you get to look at ovs+ebpf?
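[Editor's sketch: the LBaaS design discussed above mentions L4/L7, pools, monitors, and classification. The following is a minimal, purely illustrative model of how such a northbound design could be structured. All class and field names are hypothetical; none of this comes from the actual Dragonflow spec.]

```python
# Hypothetical sketch of an LBaaS northbound model with L4 pools,
# health monitors, and L7 classification rules. Not Dragonflow code.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class HealthMonitor:
    protocol: str          # e.g. 'TCP' or 'HTTP'
    interval_sec: int = 5  # probe interval
    retries: int = 3       # failures before a member is marked down


@dataclass
class Member:
    address: str
    port: int


@dataclass
class Pool:
    name: str
    protocol: str  # L4 protocol of the pool
    members: List[Member] = field(default_factory=list)
    monitor: Optional[HealthMonitor] = None


@dataclass
class L7Rule:
    """Classify traffic to a pool, e.g. by HTTP path prefix."""
    match_path_prefix: str
    pool: Pool


@dataclass
class LoadBalancer:
    vip: str
    default_pool: Pool
    l7_rules: List[L7Rule] = field(default_factory=list)

    def classify(self, path: str) -> Pool:
        # First matching L7 rule wins; otherwise fall back to the L4 pool.
        for rule in self.l7_rules:
            if path.startswith(rule.match_path_prefix):
                return rule.pool
        return self.default_pool
```

The point of the sketch is only the shape of the model: L7 classification selects a pool, and each pool carries its own members and monitor.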
09:12:16 It just takes time to sift through the docs
09:12:22 It's internal for DF, but from the OpenStack perspective, it will be a driver for Octavia, right?
09:12:30 dimak, no. Do you have a link?
09:12:50 Wait, I have it
09:12:51 http://openvswitch.org/support/ovscon2016/7/1120-tu.pdf
09:12:59 Yes
09:13:10 Plus there's bcc
09:13:20 They have a P4 to eBPF compiler
09:13:32 Yes. That looks cool
09:13:33 And some examples
09:13:45 Like the HTTP monitor
09:14:02 oanson, is this supposed to be used instead of, or in parallel to, OpenFlow?
09:14:49 dimak, did you look at ODL's southbound abstraction?
09:14:58 I haven't
09:14:59 lihi, we haven't decided
09:15:20 lihi, I read some slides that delegate some actions to eBPF programs
09:15:24 In general, we're having issues deciding how all the applications play together when not tightly-coupled.
09:15:33 And some slides that implement OpenFlow in eBPF
09:15:54 Having the apps implement themselves partly in OpenFlow, partly eBPF, partly P4, adds another layer of complexity.
09:15:54 But I'm not sure any of this work was open-sourced
09:16:12 oanson, about the coupling
09:16:19 Yes
09:16:21 oanson, it seems the issue of tightly coupled apps is not related to the southbound API
09:16:45 As long as we have some notion of apps and their boundaries, each app can implement itself as it sees best
09:16:56 dimak, +1
09:17:14 The datapath will just glue all the ports together
09:17:29 irenab, true. I'm just saying that making the southbound API more flexible makes coupling/de-coupling the apps more complicated
09:17:35 You can create a linux bridge + iptables and have taps for input/output
09:17:40 I am still concerned whether this way will lead to an optimised DP
09:17:42 Then an ovs bridge with ports
09:17:45 dimak, how will you define these boundaries?
09:17:47 Or ebpf
09:18:14 By describing their method, be it taps, ovs tables, veth pairs
09:18:56 I'm not sure I want to throw iptables, veth pairs, etc. into the mix
09:19:08 Sounds like there can be some meta-language to define the app, and it can be 'compiled' to any SB API
09:19:26 P4?
09:19:34 irenab, yes, but it has to be defined.
09:19:40 Agree
09:19:41 P4 is a good direction, I think
09:19:47 But I was talking about something else
09:19:54 We need to consider troubleshooting too, not to over-complicate
09:19:57 You can let each app have its own s/b
09:20:10 And just connect the apps to create the pipeline
09:20:34 irenab, yes. If we used a known language (e.g. P4) we could use/write general-purpose tools
09:20:57 dimak, yes, but I want to be able to 'glue' apps with something better than connecting their ports. Otherwise we're just doing SFC graphs
09:21:33 I don't want the l2 app to get the packet via a port and write to a port. I want it to look more like a 'P4 function'
09:21:34 I think it's best to have some language for all apps
09:22:02 But that also restricts you in some way
09:22:46 dimak, yes. But I think this will allow us to have better dataplane performance in the long run. I think once we use ports, we will have to pay for them.
09:23:03 Ports/interfaces/iptables/veth pairs/etc.
09:23:15 The thing is, if we have OVS-only apps, we don't have to use ports :)
09:23:19 We can optimize that
09:23:35 And also reorder the tables ahead of time to avoid resubmitting
09:24:03 Then I'm confused about what you said before. Could you start over?
09:24:17 But every so often OpenFlow won't cut it, and you'll packet-in or go to another datapath
09:25:04 True
09:25:16 I meant that we can define some DSL (P4) for OVS apps
09:25:22 But I was hoping that eBPF/P4 will make these cases smaller
09:25:42 And we don't have to pay much performance to glue those
09:25:52 dimak, yes.
09:26:05 But maybe we'd like to offer "bring your own datapath backend" for third-party apps
09:26:41 If they provide a DSL code block (e.g. P4 function, eBPF function, etc.) that's a non-issue
09:26:50 If they want to work with a port-pair, they can use SFC
09:26:59 (That's what it's there for)
09:27:15 What if they want to work with a HW card on the node?
09:27:29 Or switch
09:27:33 Yeah
09:27:46 I think most smart-NICs support eBPF/P4. Dunno about switches (I think they're still OpenFlow)
09:28:42 Anyway, I am not sure it's a use case for the cloud
09:29:51 In the end, we want to be an application platform. That's also a cloud use-case. If you want network functionality added, just add an app to Dragonflow. That's why we're unhappy with the SB in the first place.
09:31:50 All right. I think we've run this conversation into the ground. Any action items so far, or dimak, do you want to take this offline?
09:32:06 We can take this offline
09:32:19 I also mentioned l3 flavors earlier :)
09:32:47 dimak, sorry. I missed that. Say again?
09:32:52 You mean having a DF router with neutron reference L2?
09:32:55 Yes
09:33:11 It would require quite a lot of heavy lifting at the moment
09:33:22 But with the SB API maybe we can do it more easily
09:33:39 It will mean that we'll have some reduced DF controller with l3+snat apps
09:33:55 Plus a new app for classification/dispatch from OVS agent tables
09:34:01 Yes. What heavy lifting is needed?
09:34:29 Plus it means that we still need our NB database just for the agenty
09:34:42 the l3 agent
09:34:52 Yes. We can't do without our database.
09:35:11 And pub-sub, though we can implement rabbit drivers and piggyback on it
09:35:15 But using the etcd backend makes this a non-issue. etcd already exists, and by definition we can assume it's there and use it
09:35:23 Because any neutron deployment already uses rabbit
09:35:51 Yes. The pub_sub driver can be either rabbitmq or etcd. I'd prefer etcd, but both is even better :)
09:37:08 dimak, you mentioned a design for the l3 flavour (previous meeting). How is that coming along?
09:37:25 You could mention there all the things we're missing.
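[Editor's sketch: the exchange above considers a pub_sub driver backed by either rabbitmq or etcd. A pluggable driver boundary like the one implied there could look roughly as follows; the interface names and the toy in-memory backend are hypothetical, not Dragonflow's actual pub-sub API.]

```python
# Hypothetical pluggable pub-sub driver interface. A real backend would
# use etcd watches or a rabbitmq exchange; the in-process driver below
# exists only to demonstrate the boundary.
import abc
from collections import defaultdict
from typing import Callable, Dict, List


class PubSubDriver(abc.ABC):
    @abc.abstractmethod
    def publish(self, topic: str, event: dict) -> None:
        """Broadcast a northbound DB change event to subscribers."""

    @abc.abstractmethod
    def subscribe(self, topic: str,
                  callback: Callable[[dict], None]) -> None:
        """Register a callback for events on a topic."""


class InProcessDriver(PubSubDriver):
    """Toy in-memory backend, useful for tests of the interface."""

    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        # Deliver synchronously to every subscriber of this topic.
        for cb in self._subs[topic]:
            cb(event)
```

The value of the abstraction is that the controller code only sees `publish`/`subscribe`, so swapping etcd for rabbitmq becomes a deployment choice rather than a code change.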
09:37:37 I think that's about it
09:37:49 It might help us cookie-cut southbound API requirements.
09:38:16 A southbound API can help here a lot, I think
09:38:26 To make the apps more composable
09:38:44 The classification/dispatch app should be easy (the current one is less than 100 lines)
09:39:25 We can try making the table assignment more dynamic, for now.
09:39:30 If needed.
09:39:32 Yes
09:39:53 There are also the fields we use, need to see if the ovs agent does that
09:40:10 Technically dragonflow should support starting with only L3 and an snat app. It's a configuration issue. It needs to be tested though
09:40:42 We can try to make the field names configurable
09:41:04 Umm, will we require the ml2 plugin for port/network events?
09:41:40 We may need a watered-down plugin here too.
09:42:02 Might be interesting to break that apart into what we need. I think irenab's been saying that for a while now :)
09:42:16 Indeed
09:42:26 So now we also have a use-case
09:42:40 If we decide to push it for Rocky we should write a spec for it
09:43:02 Rocky?
09:43:14 Don't we have Queens first?
09:43:19 Oh right
09:43:52 I don't know if it's a lot of work. I think it's mostly gluing things we have together and testing them.
09:43:56 Or am I missing something?
09:44:05 There is also the mixed deployment use case, running both ovs and DF in the same cluster
09:44:06 It's mostly that
09:44:19 irenab, I think we should already support that
09:44:26 Should this have higher priority?
09:44:31 And if we don't, we should regardless of this scenario
09:44:43 oanson, we don't have that yet
09:44:51 I think it may be helpful for migration from ovs to df
09:44:54 Well, that's a shame :(
09:45:18 We should see if ovs is going to be superseded by ovn,
09:45:30 Anyway, it seems we may need some priority over all the interesting items that were raised
09:45:32 dimak, wait, what
09:45:33 ?
09:45:56 irenab, yes. dimak, why don't you add all this to the spec. We'll try to carve out a list of priorities and time-tables?
09:46:02 Sure
09:46:03 We'll see who can chip in next week.
09:46:13 (Or during the week, if you get the spec up earlier)
09:46:18 I'll open a bp and upload it
09:46:27 dimak, a bug, not a bp.
09:46:46 I think it would be best to phase the bps out. It's easier to manage the open tasks when everything's in one place
09:46:58 Ok
09:47:02 You can add a 'feature', 'rfe', or 'blueprint' tag to the bug if you'd like
09:47:03 I don't mind
09:47:27 Great. Thanks.
09:47:37 Anything else?
09:48:00 All right, then.
09:48:10 Thanks everyone for coming.
09:48:27 Thanks, bye
09:48:27 #endmeeting
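
[Editor's sketch: the discussion mentions a classification/dispatch app of under 100 lines that would classify packets from OVS agent tables and dispatch them into the DF pipeline. A rough, hypothetical illustration of such a stage follows; plain dicts stand in for real OpenFlow flow-mods, and all table numbers and names are invented for the example, not taken from Dragonflow.]

```python
# Hypothetical classification/dispatch stage: classify packets arriving
# on a local port by tagging metadata with the port's network id, then
# jump to the first application table. Dicts model OpenFlow flow-mods.

CLASSIFICATION_TABLE = 0   # invented table numbers for illustration
FIRST_APP_TABLE = 10


def classification_flows(port_to_network):
    """Build one flow per local port: match in_port, set metadata to
    the port's network id, and goto the first application table."""
    flows = []
    for ofport, network_id in port_to_network.items():
        flows.append({
            'table': CLASSIFICATION_TABLE,
            'match': {'in_port': ofport},
            'actions': [
                {'set_field': {'metadata': network_id}},
                {'goto_table': FIRST_APP_TABLE},
            ],
        })
    return flows
```

This also illustrates the point raised about configurable field names and dynamic table assignment: both are just parameters of the flow builder, so aligning with the OVS agent's tables would be a configuration matter rather than a redesign.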