19:00:00 #startmeeting Dragonflow
19:00:00 Meeting started Mon Aug 28 19:00:00 2017 UTC and is due to finish in 60 minutes. The chair is oanson. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:05 The meeting name has been set to 'dragonflow'
19:00:13 Hi all.
19:00:19 Who is here for the Dragonflow weekly?
19:00:53 hello
19:01:00 o/
19:01:04 Hi
19:01:12 I really hope it's not just the two of us
19:01:40 mlavalle just joined the channel :)
19:01:46 o/
19:01:54 mlavalle, hi. Welcome
19:02:01 thaks1
19:02:04 thanks1
19:02:06 1
19:02:09 !
19:02:09 Let's wait another half-minute. Maybe leyal or irenab will join too
19:02:34 mlavalle, holding up against the floods and the tornadoes?
19:02:36 mlavalle, post-lunch drowsiness?
19:03:02 oanson: no, just clumsiness
19:03:16 I see.
19:03:20 All right. Let's begin!
19:03:22 #topic roadmap
19:03:32 Reminder, the agenda is here: https://wiki.openstack.org/wiki/Meetings/Dragonflow#Meeting_28th_August.2C_2017_.281900_UTC.29
19:03:48 LBaaS: Updated the spec according to irenab's and pino's notes
19:03:54 It passes the gate :)
19:04:19 Cool, I promise to give it another go tomorrow
19:04:20 L3 flavour - dimak, you were busy with bugs this week. Any progress on this?
19:04:37 dimak, no rush on the LBaaS. It's a Queens feature.
19:04:39 Nope, it was mainly bugs
19:04:48 Trying to get tempest happy
19:04:53 Sure, no worries.
19:05:03 I saw it *was* happy today, but we'll get to that
19:05:18 etcd publisher - lihi got the etcd3gw-based db driver in.
19:05:56 There is a small snag with the publisher - the code assumes utf-8, but we want it to be raw binary data (we use msgpack, which is very much not utf-8 compatible)
19:05:58 oanson, I opened a bug earlier on the unique key allocation race
19:06:10 dimak, linky?
19:06:22 https://bugs.launchpad.net/dragonflow/+bug/1713496
19:06:22 Launchpad bug 1713496 in DragonFlow "etcd driver implement allocate_unique_key in a raceful manner" [Undecided,New]
19:06:40 I don't think it's just etcd. I think it's all drivers.
19:06:55 I recall redis having similar behaviour (from reading the code)
19:07:00 I looked at redis and it should be ok
19:07:29 Then the solution shouldn't be too complex.
19:07:34 I'll send something in tomorrow
19:07:40 INCR key - Increments the number stored at key by one. If the key does not exist, it is set to 0 before performing the operation
19:08:13 Yep. Looks all right.
19:08:15 I think we should move it to df-db init anyway
19:08:55 there we can iterate over all models and create the keys for those that have unique_key
19:08:56 We could do both. Putting it in df-db init means scanning all models and identifying the ones that have unique_key.
19:09:06 ^
19:09:09 Glad we think alike :)
19:09:39 All right. I'll consult lihi and see who does what, since she touched that code last
19:09:48 Sure
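[Editor's note: for context on bug 1713496 above - a naive get-then-put allocator lets two controllers read the same counter value and hand out the same "unique" key. Redis INCR is atomic, which is why that driver is fine; etcd needs a compare-and-swap transaction instead. Below is a minimal sketch of the CAS loop, written against the python-etcd3 client's get()/replace() API; the actual Dragonflow driver is built on etcd3gw, whose interface differs, and the key layout and function name here are hypothetical.]

    import etcd3  # assumption: python-etcd3, not the etcd3gw client Dragonflow uses

    COUNTER_KEY = '/df/unique_key/{model}'  # hypothetical key layout

    def allocate_unique_key(client, model):
        key = COUNTER_KEY.format(model=model)
        while True:
            value, _metadata = client.get(key)
            if value is None:
                # Per the discussion above, df-db init would pre-create the
                # counter for every model that has a unique_key.
                raise RuntimeError('counter %s not initialized' % key)
            new_value = str(int(value) + 1)
            # replace() is a single etcd transaction: the write succeeds only
            # if the stored value is still the one we read, so two concurrent
            # allocators can never both win with the same old value.
            if client.replace(key, value, new_value):
                return int(new_value)

    # usage sketch: allocate_unique_key(etcd3.client(), 'lport')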
19:10:00 RPM packaging - sadly I've ignored this. I apologize.
19:10:25 I've added a new item to the roadmap - grenade gate. I want us to start behaving properly with past versions
19:10:30 That means having an upgrade path.
19:10:48 Now with the NB model framework, this shouldn't be too difficult to maintain.
19:11:08 yes
19:11:19 I've uploaded two relevant patches. Once gerrit cooperates, I'll also have the links
19:11:46 https://review.openstack.org/496837 and https://review.openstack.org/401210 . Actually the second one isn't mine
19:11:55 I'm renewing the work xiaohhui did in the past
19:12:16 I remember that
19:12:27 we should consider controllers with old versions
19:12:39 so we can allow gradual upgrade
19:12:45 That's a good idea.
19:12:50 Any thoughts on how to do that?
19:13:11 Or just have a deprecation period on destructive DB changes?
19:13:15 We can do what nova does
19:13:26 What does nova do?
19:13:58 If an agent receives an object it does not support (by version, they use OVO)
19:14:11 then they bounce it off nova-scheduler
19:14:19 and it translates it to an older version
19:14:44 Hmm... Give me a day to think if we can do this without OVO
19:15:04 We should be able to. We have the database version, and the nb_api can hide all this logic
19:15:16 OVO is there to handle object metadata, its type and version
19:15:31 and some other fields
19:15:53 I'll read more about it too. I saw Neutron have been working for almost two cycles (I think) to make the transition.
19:16:26 #action oanson read about OVO. Add migration and upgrade spec
19:16:44 Anything else for roadmap?
19:16:47 the only issue with that approach is that all conductor instances have to be upgraded first (and maybe at once?)
19:17:03 Nope
19:17:31 dimak, let's defer this conversation to the spec. If it's just us, I suspect we will have to repeat this conversation when the others join :)
19:17:49 yeah
19:17:53 Cool
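[Editor's note: a minimal sketch of the nova-style mechanism discussed above, using oslo.versionedobjects. Dragonflow's NB models do not use OVO (that is exactly the open question here), so the LogicalPort class, its fields, and the version numbers are hypothetical; the point is only the pattern - a newer node down-levels an object before handing it to a controller running an older version.]

    from oslo_utils import versionutils
    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields

    @ovo_base.VersionedObjectRegistry.register
    class LogicalPort(ovo_base.VersionedObject):
        # Version 1.0: initial version
        # Version 1.1: added allowed_address_pairs
        VERSION = '1.1'

        fields = {
            'id': fields.StringField(),
            'allowed_address_pairs': fields.ListOfStringsField(default=[]),
        }

        def obj_make_compatible(self, primitive, target_version):
            # Called by obj_to_primitive(target_version=...): strip any field
            # an older consumer would not understand.
            super(LogicalPort, self).obj_make_compatible(primitive,
                                                         target_version)
            target = versionutils.convert_version_to_tuple(target_version)
            if target < (1, 1):
                primitive.pop('allowed_address_pairs', None)

    # An up-to-date node would serve a controller still on 1.0 with:
    #   port.obj_to_primitive(target_version='1.0')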
19:17:53 #topic Bugs
19:18:06 I'm pleased to report that we have only two High bugs left
19:18:21 Bug #1707496
19:18:22 bug 1707496 in DragonFlow "Stabilize tempest job" [High,In progress] https://launchpad.net/bugs/1707496 - Assigned to Dima Kuznetsov (dimakuz)
19:18:27 there's another one I caught today actually
19:18:40 Shame on us... :(
19:18:43 well, not exactly today
19:18:59 One bug at a time. Let's finish with these two. They'll be quick
19:19:05 but dnat doesn't delete flows properly when fip is deleted before it is disassociated
19:19:29 dimak, I saw that the last item on this bug is a devstack patch.
19:19:52 It's the only one at the moment
19:20:03 I suggest that if there's no response by, say, Wednesday, we do a workaround locally. We can remove it in Queens once the devstack patch is merged.
19:20:04 but we should look into bgp tests
19:20:17 they still fail occasionally
19:20:23 Yes.
19:20:33 I'll set up an environment and see if I can reproduce locally
19:20:50 I finally managed to get an env up with DR so I can run them a few times
19:20:58 Otherwise, I will contact the Neutron DR guys. I think I saw these gates are non-voting on their end. I can ask why.
19:21:02 let's discuss this tomorrow
19:21:04 Sure
19:21:27 But I think the BGP issues do not block the bug. If that's the only issue, it can be closed.
19:21:36 Bug #1712503
19:21:38 bug 1712503 in DragonFlow "L3 application does not take care of allowed address pairs" [High,In progress] https://launchpad.net/bugs/1712503 - Assigned to Dima Kuznetsov (dimakuz)
19:22:13 I saw there is a patch up for this one. The gate fails for some reason on an ARP request
19:22:14 I have a fix up for that as well
19:22:40 I'm still debugging
19:23:15 dimak, sorry?
19:23:38 I didn't understand - you have a fix, or it's not 100% yet?
19:23:39 Still debugging the failing fullstack test
19:24:03 Sure. Got it.
19:24:16 Not sure yet whether the fault is with the test or the fix :)
19:25:03 It looks like it's failing on an ARP packet. But I haven't dug any deeper
19:25:33 If it's taking too long, we could consider a tempest test for now. Add a fullstack test in another patch
19:25:49 If there is a tempest test ready-made for it, that is
19:25:57 Sure
19:26:20 Now what's this new bug you've found? "dnat doesn't delete flows properly when fip is deleted before it is disassociated"
19:27:01 Deleting an associated FIP will leave its flows in place
19:27:09 Apart from flow pollution (which I agree is bad), what else does it cause?
19:27:38 Stale NAT rule in the network
19:28:01 That means dNAT still exists after it's been removed?
19:28:03 I haven't tried it but I assume you'd still be able to access the port by the FIP's address
19:28:13 I'll confirm it tomorrow
19:28:32 Mostly I want to see if we can say it's Medium, and bump it to next week
19:29:05 Ok
19:29:08 I'll report back
19:29:15 Cheers.
19:29:18 Anything else for bugs?
19:29:25 Not on my end
19:29:42 mlavalle, I saw you uploaded a patch to fix our vagrant deployment?
19:29:50 correct
19:30:05 and the problem is exactly the line you commented on
19:30:21 I'll remove it and try again
19:30:53 I ran into it in the all-in-one configuration. We need to go over them and make sure they're all working. I did etcd and redis, but there are many more.
19:30:57 mlavalle, great. Thanks!
19:31:28 #topic Open Discussion
19:31:33 yeah, for the time being, I want to get it working with etcd and once that works, I'll try others
19:31:45 mlavalle, sounds like a plan!
19:32:18 I added two items for open discussion today: vPTG and Tagging
19:32:26 Let's start with tagging: That happens on Friday
19:32:30 LOL, vPTG
19:32:51 <_pino> Is it vPTG or VTG?
19:33:14 I like vPTG
19:33:15 I think it's Virtual Project Team Gathering. But then again, vPTG is not a TLA.
19:33:15 <_pino> Eshed and I will be in California, but I will try to join remotely.
19:34:17 <_pino> How many sessions/hours were you thinking of doing for DF? Is there already an etherpad with the topics?
19:34:24 The PTG itself has a remote option? I recall last PTG they were playing around with the option, but it wasn't 100% successful
19:34:44 I think it really depends on each team
19:34:56 I can raise the idea with the Neutron team
19:35:15 _pino, we haven't built an agenda yet. In any case, it will probably be a week after the pPTG.
19:35:39 mlavalle, that would be great for us. Then we can listen in from here.
19:36:03 ok, the weekly meeting is tomorrow morning. I'll bring it up
19:36:11 Cool. Thanks.
19:36:41 I suspect we won't be the only project doing a vPTG. It might be worthwhile to collaborate and work out a framework where projects can inter-connect
19:36:46 as far as the vPTG goes, I will be attending New Employee orientation on the 19th-20th
19:37:16 and flying back from Santa Clara on the 21st
19:37:30 please take that into consideration
19:37:39 I think the 18th-20th are the only dates we have, due to local holidays. It's either that or the end of September (I haven't checked the dates).
19:37:41 The 20th is a holiday here
19:38:01 Let me review the calendar and see what we can do.
19:38:55 I would have said I don't mind working the holidays here (I don't like holidays), but some toddler might object :)
19:39:22 is it Yom Kippur?
19:39:41 No, New Year's (I think).
19:40:18 Yep, New Year's. Yom Kippur is the 30th.
19:40:36 5778?
19:41:08 No, you got me. I stopped counting when I finished middle school, and I don't remember that far back. :)
19:41:38 LOL
19:41:59 mlavalle, I think you're a few billion years off :P
19:42:13 about a dozen
19:42:40 well, yeah, it depends on your frame of reference
19:42:57 <_pino> It just occurred to me - so you're not going to do the vPTG during the week of Sept. 11 - that would make attendance very low? So you can do it any other time of our choosing?
19:43:13 I like the frame of reference where the universe is 47 years, 7 months, and about 28 days old
19:43:21 ++
19:43:42 _pino, yeah. But we want to keep it near the original pPTG. Since that's the start of the cycle.
19:43:57 I think that makes sense
19:44:42 There are a couple of items we want to discuss in the vPTG, which might take the cycle: Selective-proactive, upgrades, deployment, etc.
19:45:49 Let's continue this offline. I also want to talk to the other projects which do this to see how they handle things like that.
19:46:10 This will also give mlavalle a chance to inquire about adding virtual presence to the Neutron PTG.
19:46:19 yes
19:46:55 Next item I want to discuss is tagging. It happens this Friday, or any split-second when there are no High/Critical bugs.
19:47:19 Once we tag, we should also inform OSA. They need to know when the tag is ready.
19:47:52 Any new bugs we discover can be back-ported, but within reason.
19:48:15 Anything else for Open Discussion?
19:48:41 Anything else in general?
19:48:59 Have a great evening
19:49:23 Yep. Thanks everyone for coming.
19:49:30 dimak: did you watch the season finale of GoT?
19:49:36 Not yet
19:49:45 The meeting is holding it up
19:49:47 :)
19:49:48 enjoy!
19:49:53 Thanks
19:50:21 #endmeeting