09:00:06 #startmeeting Dragonflow
09:00:07 Meeting started Mon Jan 23 09:00:06 2017 UTC and is due to finish in 60 minutes. The chair is oanson. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:10 The meeting name has been set to 'dragonflow'
09:00:24 hello
09:00:25 Hey
09:00:28 Hi
09:00:41 Hi
09:00:47 #info xiaohhui dimak lihi in meeting
09:01:27 Let's wait another minute.
09:01:33 hi
09:01:37 I'll update the schedule in the meantime :)
09:02:15 #info nick-ma rajivk also in meeting
09:02:25 All right. Let's begin!
09:02:27 #topic Roadmap
09:02:33 IPv6 - lihi, take it away
09:03:15 I have made some progress, I hope to upload a patch in the next few days (router advertisement)
09:03:28 I've managed to send the router advertisement as a response. I just want to make sure that I'm advertising all the possible messages.
09:03:55 Once that happens, that means that IPv6-enabled VMs will work?
09:03:57 With SLAAC?
09:05:09 And this means that DHCPv6 and Security Groups are left, right?
09:05:17 I think it should work already :)
09:05:20 Yes
09:05:46 and another small section of the ND
09:06:06 That's great. It would be great if you could also write up how you test it - what VM images you use, and how you set them up (cloud-init script, etc.)
09:06:06 But I think it is less important than DHCPv6 (Redirect)
09:07:11 Great. Thanks! Anything else?
09:07:25 OK, sure
09:07:47 nope
09:07:49 As far as I know, there was no progress in SFC
09:08:08 I actually rebased my work on the refactoring patches
09:08:20 and managed to throw away a lot of the gluework patches
09:08:28 dimak, that's good.
09:08:35 Less to review :)
09:08:46 😛
09:08:57 Service health: there's the patch https://review.openstack.org/#/c/415997/
09:09:23 From the looks of it, there are only cosmetic changes. Nothing major.
09:09:28 rajivk, anything to report?
09:09:28 I have not been able to work on it.
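[Editor's note: a quick sketch of what "SLAAC should work" means for the VM side, for anyone testing lihi's router-advertisement patch. With SLAAC, a VM combines the advertised /64 prefix with a modified EUI-64 interface identifier derived from its MAC (RFC 4291). The function name and example values below are illustrative, not Dragonflow code:]

```python
import ipaddress

def slaac_address(prefix, mac):
    """Derive the IPv6 address a VM would auto-configure via SLAAC,
    using the modified EUI-64 interface identifier (RFC 4291)."""
    octets = [int(b, 16) for b in mac.split(':')]
    octets[0] ^= 0x02  # flip the universal/local bit
    # insert ff:fe in the middle of the MAC to form the 64-bit IID
    eui64 = octets[:3] + [0xff, 0xfe] + octets[3:]
    iid = int.from_bytes(bytes(eui64), 'big')
    return ipaddress.IPv6Network(prefix)[iid]

# fa:16:3e:... is OpenStack's default MAC OUI
print(slaac_address('2001:db8::/64', 'fa:16:3e:aa:bb:cc'))
# → 2001:db8::f816:3eff:feaa:bbcc
```

This is also a handy check when writing up the test procedure: the address the guest configures should match this derivation for the advertised prefix.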
I will submit a patch soon.
09:09:41 Maybe tomorrow.
09:09:48 Great. Thanks!
09:10:18 TAPaaS - yuli_s, are you here?
09:10:26 Hi
09:10:44 Hi
09:10:46 All right. I'll try and catch him offline.
09:10:53 ishafran, itamaro hi
09:11:02 #info ishafran itamaro are also in meeting
09:11:14 ishafran, right on time! It's Anonymous sNAT time!
09:11:19 I saw there's a new patch
09:11:24 :)
09:11:37 I pushed a new review request for the code
09:11:43 https://review.openstack.org/#/c/417799/2
09:12:11 Great. Thanks.
09:12:29 Going to start on the second implementation alternative
09:12:31 Everyone, please review the spec: https://review.openstack.org/#/c/397992/ so we can move forward on this
09:13:01 ishafran, you mean the one marked 'Model alternative [2]' in the spec?
09:13:11 yes
09:13:18 Great!
09:13:59 Anything else for sNAT?
09:14:18 no
09:14:27 NB Refactor and API.
09:14:45 dimak uploaded many patches. dimak, are they ready for review?
09:15:10 I hope they will be soon
09:15:23 Jenkins is happy with almost all of them :)
09:15:47 All right. You mentioned you're going to unify some of them, like the namespace patch. Is that still planned?
09:16:12 I wanted to discuss the new models and their impact on df_publisher_service, but I can bring it up in the open discussion
09:16:22 I think that would be best
09:16:29 irenab, are you in?
09:16:49 I don't plan on unifying at the moment
09:17:10 dimak, great. Thanks!
09:17:27 and I'll drop the namespace patch if xiaohhui insists on it. I've updated it to address the concerns raised
09:17:52 will check it.
09:17:58 thanks :)
09:17:58 irenab isn't here. There is a spec in WF-1, but I think it's fairly advanced. I guess she'll update during the week.
09:18:21 Anything else for the roadmap?
09:18:58 #topic bugs
09:19:38 There are many 'High' priority bugs.
09:19:56 Those around sNAT/dNAT will be solved with dimak's and ishafran's work
09:20:03 (I'll close the one on me)
09:20:32 Bug 1655935 will be solved by the NB refactor
09:20:32 bug 1655935 in DragonFlow "df-controller does not create tunnel to newer controllers" [High,New] https://launchpad.net/bugs/1655935
09:20:32 My chassis/tunnels bug needs an update
09:20:40 This one
09:20:59 It's probably due to df_publisher being down because of a bind error
09:21:02 (address in use?)
09:21:17 which happens all the time in devstack with zmq
09:21:19 Yes. That looks like bug 1651643
09:21:19 bug 1651643 in DragonFlow "metadata service cannot start due to zmq binding conflict" [High,In progress] https://launchpad.net/bugs/1651643 - Assigned to Li Ma (nick-ma-z)
09:21:45 We are actually using a virtual tunnel port now, so there will be no tunnel between every 2 controllers
09:22:28 Yes
09:22:56 dimak, I'm putting 1655935 on you. Either way, your work (or xiaohhui's previous patches) should solve the issue
09:23:14 Ok
09:23:16 nick-ma_, are you managing with bug 1651643, or do you want help?
09:23:16 bug 1651643 in DragonFlow "metadata service cannot start due to zmq binding conflict" [High,In progress] https://launchpad.net/bugs/1651643 - Assigned to Li Ma (nick-ma-z)
09:23:37 The design of the zmq driver implies that only the neutron server starts a publisher, but some applications break that assumption. So we need to rethink the pubsub implementation.
09:24:00 The metadata service is running now.
09:24:13 That's good.
09:24:32 Do you have a list of which apps break the assumption?
09:25:57 I think maybe all in-code pub/sub (in zmq) should use the multiproc pub/sub. Only the df_publisher service should use the TCP port-binding publisher.
09:26:20 We'll make sure in deployment (devstack, etc.) that the publisher service is up if it's needed.
09:26:30 nick-ma_, does that make sense? Or am I missing something completely?
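[Editor's note: the "address in use" failure discussed above is a plain TCP bind conflict: only one process can bind the publisher's tcp:// port, so any second service that tries to start its own port-binding publisher fails with EADDRINUSE. That is why only the dedicated df_publisher service should own the TCP port, with in-process pub/sub going over a shared (multiproc) channel instead. A minimal stand-alone illustration with raw sockets (no zmq needed; the helper name is ours):]

```python
import errno
import socket

def try_bind(port):
    """Try to bind a TCP socket the way a tcp:// publisher would."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(('127.0.0.1', port))
        return s          # caller now owns the port
    except OSError as e:
        s.close()
        if e.errno == errno.EADDRINUSE:
            return None   # another publisher already holds the port
        raise

first = try_bind(0)                  # bind an ephemeral port
port = first.getsockname()[1]
second = try_bind(port)              # a second bind on the same port fails
print(second is None)                # True: "address in use"
first.close()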
09:28:54 All right, I'll add that question to the bug, and we'll continue there
09:29:02 Anything else for bugs?
09:29:52 #topic Open Discussion
09:30:06 The floor is up for the taking.
09:30:15 My network seems not good
09:30:22 nick-ma, in case you missed it, I said we'll continue the discussion on the bug page
09:30:29 OK
09:30:30 (It's a little slower, but more stable (: )
09:30:43 I wanted to bring up the publisher thing I mentioned earlier
09:31:20 Sure. Go ahead
09:31:22 sure.
09:31:42 It seems that for some models we notify events from NbApi, but for others from the publisher service
09:32:11 You mean the use of table monitors, and such?
09:32:19 And it makes sense, because that way we can filter out updates of fields deemed irrelevant to other nodes (e.g. timestamps)
09:32:23 Yeah
09:33:27 dimak, yes?
09:33:33 It creates a centralized POF
09:33:53 and several neutron servers will notify the same events, right?
09:34:02 spof?
09:34:07 yes
09:34:10 sorry
09:34:18 Yes. Looks like it.
09:34:36 POF - Point of Failure, right?
09:34:38 yes
09:34:57 Yes, I was also thinking of that when I was trying to fix the bug.
09:35:21 I think that instead of bouncing the updates through the publisher
09:35:27 This was our way to avoid controller-side publishers, since chassis are updated by controllers.
09:35:31 we can decide locally whether to notify or not
09:36:30 We can define some fields as 'volatile' on the new models; then, if an update touches only those fields, there is no need to send an event
09:36:50 but overall, all models behave the same
09:37:03 dimak, this sort of change will have to wait until after we solve the bug
09:37:14 since this sort of behaviour is causing us issues already
09:37:23 and updates will be sent regardless of whether the publisher service is up or not
09:38:14 That makes sense - if a chassis can't send the update, it is not really up. Same for a publisher.
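[Editor's note: the 'volatile fields' idea dimak proposes above can be decided locally, with no central publisher in the loop: each sender checks whether an update touches any non-volatile field before publishing. A minimal sketch; the model and field names are illustrative, not Dragonflow's actual API:]

```python
# Fields whose changes are irrelevant to other nodes (e.g. timestamps).
# Updates touching only these fields are not republished.
VOLATILE_FIELDS = {
    'Chassis': {'last_seen', 'heartbeat_ts'},
}

def should_publish(model_name, changed_fields):
    """Publish only if at least one changed field is non-volatile."""
    volatile = VOLATILE_FIELDS.get(model_name, set())
    return bool(set(changed_fields) - volatile)

print(should_publish('Chassis', ['heartbeat_ts']))        # False: skip
print(should_publish('Chassis', ['heartbeat_ts', 'ip']))  # True: publish
```

Because every sender applies the same rule, all models behave uniformly and no single publisher process becomes a point of failure.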
09:39:04 We can keep monitoring for stale entries in the publisher service and issue state changes to 'down' if needed
09:40:27 What do you mean about the state?
09:40:43 like marking a chassis as 'down'
09:40:53 Then we can decide if this monitoring is done in the publisher service or in the controllers themselves, since it will be done rarely (once a minute, or so)
09:40:54 and notifying all other nodes
09:41:30 I think this is a good direction. But I want to wait with it until after the bug is solved.
09:41:37 sure
09:41:46 Thanks for bringing this up.
09:41:49 Yes, we can discuss it later.
09:42:06 The floor is free again.
09:43:41 All right. I think we can head for lunch/dinner early.
09:43:49 Thanks everyone for the great work. Keep it up!
09:43:55 thanks.
09:44:06 #endmeeting
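[Editor's note: the stale-entry monitoring discussed near the end of the meeting amounts to a periodic sweep: any chassis that has not refreshed its entry within a timeout is marked 'down' and the other nodes are notified. A hypothetical sketch of that check; names and the timeout value are illustrative, not Dragonflow code:]

```python
import time

TIMEOUT = 60.0  # seconds; the "once a minute, or so" cadence mentioned above

def find_stale(chassis_last_seen, now=None):
    """Return the chassis whose entries are stale and should go 'down'.

    chassis_last_seen maps a chassis name to the timestamp (seconds)
    of its last refresh.
    """
    now = time.time() if now is None else now
    return [name for name, seen in chassis_last_seen.items()
            if now - seen > TIMEOUT]

entries = {'node-1': 1000.0, 'node-2': 1090.0}
print(find_stale(entries, now=1100.0))   # → ['node-1']
```

Whether this sweep runs in the publisher service or in each controller, the rule is the same; running it rarely keeps the cost negligible either way.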