08:00:18 #startmeeting Dragonflow
08:00:19 Meeting started Mon Jul 24 08:00:18 2017 UTC and is due to finish in 60 minutes. The chair is oanson. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:00:21 Hello.
08:00:22 The meeting name has been set to 'dragonflow'
08:00:24 Hi
08:00:25 Hi
08:00:28 Hey
08:00:31 Who's here for the Dragonflow weekly?
08:00:36 hi
08:01:00 Cool. Let's start
08:01:03 #topic Roadmap
08:01:22 SFC
08:01:29 Almost in :)
08:01:30 dimak, Are we done?
08:02:11 I think it is pretty much done
08:02:16 What do you mean? How long does it take to get a tiny patch of 2222 lines reviewed?
08:02:19 Unless someone has more comments
08:02:20 https://review.openstack.org/#/c/424146/
08:02:34 😉
08:02:50 😇
08:03:00 On it
08:03:15 Thanks!
08:03:43 I'll report that I didn't do squat last week. In my defense, I was sick. But LBaaS and RPM packaging didn't see any CPU time
08:04:23 L3 flavour - dimak, I saw you updated the patch. I asked a question there
08:04:33 dimak, are there some networking-sfc test cases that you checked?
08:04:40 lihi, leyal, do you have any questions about it, so we can move forward?
08:04:55 oanson, I'll check and get back to your comment
08:05:05 irenab, I'm looking into that as well
08:05:10 Nope
08:05:23 dimak, I'd rather discuss it here so the spec can be accepted today, if everyone agrees
08:05:37 I'll upload the L3 service provider for Dragonflow today or tomorrow
08:05:49 based on my recent experience with Trunk ports, we should pay more attention to test coverage
08:06:01 My plan is to retire DF's L3 plugin in favor of using the vanilla L3 router plugin with a DF service provider (driver)
08:06:19 irenab, there is only so much a test can cover - it's not an integration test.
08:06:44 I asked if the solution in the spec *requires* modifications to the other L2 implementations. We should support the other implementations out-of-the-box if possible
08:07:07 oanson, maybe we can discuss it after the roadmap, but coverage should be improved
08:07:11 dimak, that's L3's mechanism similar to L2's ML2, right?
08:07:20 Yes, as I said above, I'll check the ref implementation first and post a reply
08:07:27 oanson, yes
08:07:27 irenab, agreed. But I recommend low expectations :)
08:07:46 oanson, it's a long journey
08:08:01 irenab, sorry?
08:08:57 oanson, adding more tests and improving stability takes time. So, agreed about expectations
08:09:09 Sure
08:09:29 dimak, regarding the L3 flavor - did you answer my question? (I may have missed it)
08:09:40 Yes
08:09:48 regarding the L3 flavor, it should not impact the L2 plugin, unless there is something broken there
08:09:57 Service provider is akin to a mech driver in ML2
08:10:05 Well, not exactly
08:10:08 there is similar work done by yamamoto for Midonet
08:10:12 dimak, I meant regarding my comment on the spec.
08:10:28 Like irenab said, we shouldn't have to modify other people's L2 implementations
08:10:32 oanson, I answered that as well, I need time to check the ref impl
08:10:38 I see.
08:10:50 Sorry - I didn't realise that was the answer.
08:10:57 All right. Let's continue then.
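
(A note for readers on the L3 flavour item above: with Neutron's flavor framework, the vanilla L3 router plugin dispatches to backend drivers registered as service providers in neutron.conf. A minimal sketch of such a registration is shown below; the Dragonflow driver path is a hypothetical placeholder, the real class would come from dimak's service-provider patch. Creating the matching flavor and service profile through the API is a separate step, not shown here.)

    [service_providers]
    # Format: service_provider = <service type>:<provider name>:<driver class>[:default]
    # The driver path below is hypothetical and only illustrates the shape of
    # the entry; use the class shipped by the actual Dragonflow patch.
    service_provider = L3_ROUTER_NAT:Dragonflow:dragonflow.neutron.services.l3_router.DfL3ServiceProvider:default
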
08:11:04 ETCD publisher
08:11:07 lihi, ?
08:11:26 I thought I had made the new library work, but I was fooling myself. But I made some progress, so hopefully this time it will work (I'm still testing it)
08:11:59 If you need some extra eyes, upload a patch :)
08:12:22 Tag it with 🚧
08:13:09 Does gerrit support emojis?
08:13:33 I'm not sure :(
08:13:41 dimak, Under construction?
08:13:46 Yes
08:13:56 No emoji for work in progress?
08:14:28 I guess that letting the gate fail will be faster than opening the systemd logs in my VM
08:14:42 Or anywhere
08:15:34 All right, let's move on
08:15:48 leyal, what about the PXE/DHCP?
08:16:21 I see the spec is merged
08:16:36 2 patches are waiting for review, and 1 still needs some changes from me
08:16:52 links?
08:16:55 The spec and the change in the DB..
08:17:15 https://review.openstack.org/#/c/475167/27
08:17:24 https://review.openstack.org/#/c/475718/23
08:18:54 Sure. We'll look at them
08:19:10 I also wrote a draft (in dragonflow) about the requirements for Ironic support..
08:19:27 That would be cool to see
08:19:54 leyal, not sure if related, but there are weird failures at the gate on the second patch
08:20:23 non-voting
08:20:26 irenab, I think it's a gate thing
08:20:28 I will look at that..
08:20:55 oanson, any known issues on the gate?
08:21:04 Not that I know
08:21:05 The gate should be good for now
08:21:28 leyal, I think you should leave the gate issue for now
08:21:39 Ok
08:21:51 If this shows up more often we'll assign someone to look into it (then it may be you)
08:22:14 Anything else for roadmap?
08:22:57 #topic Bugs
08:23:20 dimak was nice enough to close bug 1690775 for us :)
08:23:20 bug 1690775 in DragonFlow "Remove special handling for lport/ofport in local controller" [High,In progress] https://launchpad.net/bugs/1690775 - Assigned to Omer Anson (omer-anson)
08:23:32 Not closed yet though :(
08:23:52 The last one kinda fails
08:24:29 still the py35 thing?
08:24:42 Nope, the one that removes update_lport
08:25:03 https://review.openstack.org/#/c/486411/
08:25:21 But most of the code is gone
08:26:19 Do you need another set of eyes?
08:26:48 Nope, it's the issue we discussed earlier today, about port locality
08:27:13 Sure
08:27:39 Anything else for bugs?
08:27:47 I've opened some bugs with irenab on the trunk app
08:28:08 most of those have been fixed or have a fix at the gate
08:28:20 This one still isn't, though: https://bugs.launchpad.net/dragonflow/+bug/1705503
08:28:20 Launchpad bug 1705503 in DragonFlow "Trunk subport will not be available after controller restart" [High,Confirmed]
08:28:20 oanson, there is one on the child port not being set to Active
08:28:24 Sure. Were they classified with importance?
08:29:34 Yes
08:29:35 irenab, this one: bug 1705397 ?
08:29:35 bug 1705397 in DragonFlow "Sub port of Trunk port is not updated to Active status" [Undecided,New] https://launchpad.net/bugs/1705397
08:29:46 yes
08:29:58 irenab, is it blocking integration with kuryr?
08:30:26 similar to nova, kuryr expects the port to be Active before moving on with processing the Pod creation request
08:30:37 yes, blocking
08:31:07 without the port being Active, it assumes that the data plane is not set up yet
08:31:19 Sure. Then set it to High.
08:31:43 oanson, I do not have the power ...
08:31:44 I'll try and fix it this week.
08:31:53 Already done.
08:32:36 We have many bugs of Undecided importance
08:33:03 Anyone feel like taking a look at those? Just to sort out the importance?
08:33:12 Sure
08:33:26 dimak, thanks.
08:33:40 Anything else for bugs?
08:34:32 #topic Open Discussion
08:34:43 I want the floor
08:35:04 I will be on PTO next week, and the one after that.
08:35:10 Anyone want to take over the meetings?
08:35:49 Cool
08:35:57 I can take the one next week, the week after I am on PTO
08:36:15 irenab, thanks.
08:36:34 oanson, it will be the US-friendly time, right?
08:36:35 Worst comes to worst, we'll cancel the one in two weeks
08:36:38 Yes
08:36:39 Yes
08:37:05 Anything else on this?
08:37:30 Second item is test coverage.
08:38:36 tox has a target that shows the coverage from the unit tests
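
(A note on the coverage report mentioned above: in the usual OpenStack tox setup this is generated by a dedicated environment; a minimal invocation, assuming Dragonflow's tox.ini defines the standard cover target, would be:)

    # Run the unit tests under coverage; with the standard OpenStack
    # setup the HTML report is written to the cover/ directory.
    tox -e cover
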
08:39:00 And the fullstack tests are a bit of a mess, seeing as they test flows rather than functionality.
08:39:18 maybe we can do more tempest
08:39:28 dimak, we need to get tempest to work first
08:39:30 or at the very least make the job we have green
08:39:36 Yeah
08:39:49 But then, yeah, tempest is the best fullstack thing we can do
08:40:03 Maybe we should roadmap it
08:40:18 Let's finish the DNAT issue first, and take it from there
08:40:37 I promise the DNAT patch will be done one day :)
08:40:37 That is our most critical point right now
08:40:47 Before the PTG?
08:41:03 Note that the metadata service is not getting any requests either
08:41:18 dimak, oanson, context/bug?
08:41:40 Tempest is broken because DNAT doesn't work out of the box. That is bug 1636829
08:41:40 bug 1636829 in DragonFlow "Conflict between flat network and DNAT app" [Critical,In progress] https://launchpad.net/bugs/1636829 - Assigned to Dima Kuznetsov (dimakuz)
08:41:41 1 moment
08:42:17 I have a fix that puts DNAT on top of the provider app
08:42:17 Not *just* because of DNAT, but DNAT is the first thing we need to fix
08:42:40 Linky?
08:42:47 With the lport/ofport bug mostly out of the way, I hope I'll have a cleaner approach
08:42:58 https://review.openstack.org/#/c/475362/
08:43:18 It is somewhat half-baked right now
08:43:44 dimak, could we maybe have a workaround: increase the priority for DNAT just for DNAT's IPs?
08:43:58 I can try that
08:44:10 It will at least make the bug High, not Critical
08:44:13 🤢
08:44:24 I don't have that emoji
08:44:48 😷*
08:45:25 Sorry, :(
08:45:59 I am releasing the floor
08:46:06 Anyone want it?
08:47:09 Anything else for open discussion?
08:47:17 Anything else in general?
08:47:31 Lunch!
08:47:40 Hooray. Let's go eat!
08:47:46 Thanks everyone for coming.
08:47:55 #endmeeting