09:02:03 #startmeeting Dragonflow
09:02:03 Meeting started Mon Jun 26 09:02:03 2017 UTC and is due to finish in 60 minutes. The chair is oanson. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:02:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:02:06 The meeting name has been set to 'dragonflow'
09:02:34 All right. Let's wait another minute to see if anyone is coming in late (and let me update the agenda)
09:02:42 Hi
09:03:21 All right. Agenda is updated ( https://wiki.openstack.org/wiki/Meetings/Dragonflow )
09:04:39 All right. Let's start
09:04:43 #topic Roadmap
09:04:53 I'll start by announcing that our gate is broken.
09:04:55 Again.
09:04:59 :(
09:05:24 Fix is available here. It's in Neutron. I hope it'll pass review tonight. https://review.openstack.org/#/c/476511/
09:05:58 Many thanks to Kevin and Armando (not using nicks so as not to alert them for no reason) for their great help.
09:06:21 And this one: https://review.openstack.org/#/c/477206/ for the Nova issue
09:06:49 I'd recommend adding the Neutron fix as a Depends-On, and rebasing all new patches on top of dimak's fix
09:07:03 Roadmap! (Really, this time)
09:07:33 I can add a Depends-On to my patch and see if we get a Verified+1
09:07:34 SFC - I didn't see many updates this week. But dimak, you were tackling the DNAT issue, right?
09:07:48 I worked on both
09:07:57 Won't hurt
09:08:02 Managed to get SFC working on multinode
09:08:11 I'll upload my patches today
09:08:31 I also got rid of that split L2 table patch
09:08:33 Cool.
09:08:45 Great. Happy to hear it
09:08:52 And I won't need to touch the tunnel/provider apps
09:09:19 I also put in some hours on the DNAT effort
09:09:23 You mean for SFC
09:09:31 Yes
09:09:35 I'm guessing for DNAT you will need to touch provider
09:09:41 yes
09:09:59 Found several things that needed fixing
09:10:41 I'll break them out of the DNAT patch chain today or tomorrow
09:10:50 To make the merging easier :)
09:11:36 Yes. That would be great.
09:11:36 itamaro, I also had some changes in the provider app, I'd be happy if you could take a look
09:11:47 sure
09:12:03 thanks
09:12:43 Anything else for SFC?
09:12:59 I have an updated change in the gen_mapping which I'd like you to take a look at.
09:13:13 itamaro, linky?
09:13:34 it will be uploaded after tests today.
09:13:34 https://review.openstack.org/#/c/475166/
09:13:45 I have posted a comment there
09:14:11 All right. Let's get back to this one in the open discussion section
09:14:17 I think having 2 patches is preferable in the case of SNAT
09:14:23 sure
09:15:23 All right. LBaaS
09:15:53 This time I'm uploading a spec
09:16:03 It is *very* much WIP
09:16:08 But I wanted to show some progress
09:16:14 https://review.openstack.org/477463
09:16:53 There are a bunch of things I am not happy about yet, and I hope that by next week I can resolve them
09:17:24 L3 flavour
09:17:26 https://review.openstack.org/#/c/475174/
09:17:45 I'll take a look today
09:17:50 The spec breaks the gate... So it won't be merged
09:17:54 (the LBaaS one)
09:17:55 dimak, I wouldn't bother.
09:18:06 ok then
09:18:13 It's there to show that I am not ignoring it. But it's very WIP
09:18:31 I'll take a look
09:18:43 the L3 flavor spec didn't get much attention
09:18:59 Yes. Sorry about that
09:19:23 I'll try to get to it this week
09:19:36 ETCD publisher
09:19:39 lihi, ?
09:20:24 Not much progress. Only yesterday (EOD) I managed to get a working env with etcd
09:20:55 All right.
Please also review https://review.openstack.org/#/c/476401/
09:21:02 sure
09:21:21 RPM packaging - dimak, I know we haven't mentioned this in a while
09:21:33 Yes, I haven't touched it since :(
09:21:50 But I seem to remember you volunteered, and if I recall, it isn't a big undertaking
09:21:54 (Unless I'm wrong)
09:21:55 juggling too many things
09:21:59 All right.
09:22:14 I don't think it's a huge endeavor
09:22:24 btw we should also take a look at the DF kolla images
09:22:27 Anyone want to take RPM packaging? I don't think it should take long, and I think it will have a good impact on PR
09:22:34 and update them if needed
09:22:40 Yes
09:22:48 But we're getting spread a bit thin
09:23:08 I think we need to add the BGP service there (iirc)
09:23:10 Why don't I take the RPM packaging, and lihi, can you look at the kolla deployment in parallel?
09:23:24 Let's start with setting it up and testing it.
09:23:54 I'll take this time to add that there are 3 gate jobs I think are important to add: OSA, kolla, and devstack/multinode
09:24:19 Do we need both OSA and Kolla?
09:24:28 The OSA gate already exists, but it will probably remain unstable until lihi finishes the etcd publisher work (which is why I think it's so important)
09:24:37 Sure
09:24:42 dimak, because both deployments are already written, and we want to support as much as possible
09:24:55 More gates, more breakage ;)
09:25:06 non-voting gates, obviously
09:25:18 and so far the etcd gate seems to be worth its salt
09:25:24 How long does it take to get through DF in OSA?
09:25:58 to gate*
09:26:49 Dunno. The gate currently fails after 4-5 minutes
09:27:04 Oh
09:27:06 a shame
09:27:37 Yeah.
09:28:19 Once the etcd publisher is finished, I think we can make it much more stable. Most of the issues were that the ZMQ publisher wasn't able to guess its container's IP.
09:28:57 Do we plan to drop ZMQ after we have the etcd publisher?
09:29:12 No
09:29:30 We plan to support both. ZMQ might be better suited in some scenarios
09:29:42 Since we're driver-based
09:30:07 A deployment can choose what fits it best
09:30:08 +1
09:31:08 All right. In that case, dimak, I'm guessing we're also holding off on Skydive?
09:31:17 yeah
09:31:48 Anything else for roadmap?
09:32:19 #topic Bugs
09:32:30 I am looking at bug 1690775
09:32:31 bug 1690775 in DragonFlow "Remove special handling for lport/ofport in local controller" [High,In progress] https://launchpad.net/bugs/1690775 - Assigned to Omer Anson (omer-anson)
09:33:02 I am looking at getting the NbAPI/controller to fire events for all dependent instances.
09:33:35 This came up because in the classifier app, I need both the lport and the OVS port. So I need to write a mechanism to make the classifier app wait for both events
09:33:40 But this is something reusable.
09:33:47 Yes, it could save us the ".get_object() is not None" checks
09:33:50 So I'm thinking of letting the framework/IS do it
09:34:16 I have an early draft written. Once it passes unit tests I'll upload it to see what the gate thinks of it
09:34:29 (Assuming it will be non-broken for an hour)
09:34:41 Anything else for bugs?
09:35:08 We don't have a bug for that
09:35:21 Sorry?
09:35:29 but we need to put some effort into the topology/refresher/consistency mechanism
09:35:39 it doesn't cover all models
09:35:51 only the ones that come with a topic (and a version, for consistency)
09:36:13 so topicless objects like listeners/chassis are never refreshed
09:37:13 Yes
09:37:21 This is a long-standing problem
09:37:44 I also opened another bug a few weeks ago
09:37:52 about db drivers ignoring topic
09:37:56 (except for redis)
09:38:09 And redis is doing it badly
09:38:10 which makes selective topology distribution not so selective
09:38:20 yes, redis is far from optimal
09:38:24 dimak, are there bugs for all these issues?
09:39:00 Let's open bugs on these. The redis mis-implementation and the db drivers ignoring topics are Medium, since they're performance issues.
09:39:11 I have a bug on the latter
09:39:23 The topology thing is a High. It would also do well to organise all our thoughts there
09:39:54 https://bugs.launchpad.net/dragonflow/+bug/1696142
09:39:55 Launchpad bug 1696142 in DragonFlow "Topic is ignored in NbApi.get_all" [Undecided,New]
09:40:42 All right. Could you also open bugs for the other two?
09:40:44 BTW, just came across https://bugs.launchpad.net/dragonflow/+bug/1697439
09:40:44 Launchpad bug 1697439 in DragonFlow "router_port_rarp_cache and floatingip_rarp_cache dictionaries consider just mac address as key" [High,New]
09:40:51 I'll take it, as I already fixed it
09:40:55 in the DNAT patches
09:41:04 (the L3 part is not uploaded yet, but it's done)
09:41:16 Sounds good
09:41:39 I'll open bugs for the sync/topology
09:42:02 Great. Thanks!
09:42:05 Anything else for bugs?
09:42:45 #topic Open Discussion
09:42:51 itamaro, you had something to say here?
09:43:46 there is a newly discovered bug related to BUM flows
09:43:57 Yes
09:44:01 https://bugs.launchpad.net/dragonflow/+bug/1700473
09:44:03 Launchpad bug 1700473 in DragonFlow "bum flow on tunnel vtp is limited to 989 replications" [Undecided,New]
09:44:19 which needs some thinking
09:44:29 You can use groups to overcome it, I think
09:44:59 Check the max number of buckets OVS will handle
09:45:42 ok. let's take it offline...
09:45:55 👍
09:45:58 That's actually a cool solution. I like it!
09:46:40 The floor is free?
09:46:54 I think https://bugs.launchpad.net/dragonflow/+bug/1647362 now deserves a lower priority. Does it block anyone?
09:46:55 Launchpad bug 1647362 in DragonFlow "Fullstack tests does not simulate actual networks settings" [High,In progress] - Assigned to Lihi Wishnitzer (lihiwish)
09:47:29 Bumped down to Medium. It blocked me with trunk ports, but that's solved now
09:48:11 Anything else?
09:48:55 All right. Thanks everyone
09:49:02 Come again next week.
09:49:09 #endmeeting