14:00:37 #startmeeting networking
14:00:38 Log: http://eavesdrop.openstack.org/meetings/senlin/2017/senlin.2017-01-17-13.00.log.html
14:00:48 Meeting started Tue Jan 17 14:00:37 2017 UTC and is due to finish in 60 minutes. The chair is jlibosva. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:51 The meeting name has been set to 'networking'
14:00:57 o/
14:00:58 hi
14:00:58 o/
14:00:59 Hello friends!
14:01:04 Hi
14:01:05 o/
14:01:06 howdy
14:01:11 :)
14:01:19 #topic Announcements
14:01:30 Hi
14:01:40 hi
14:01:42 The Project Team Gathering (PTG) is approaching fast. Please read the following email
14:01:45 o/
14:01:50 o/
14:01:55 #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/110040.html
14:02:19 If you have a topic or idea that you think should be discussed there, feel free to write it down on this etherpad
14:02:21 #link https://etherpad.openstack.org/p/neutron-ptg-pike
14:02:25 hi o/
14:02:34 o/
14:02:57 Note that there is also a PTG Travel Support Program that can help with funding if you are for some reason unable to join the gathering
14:03:00 hi
14:03:13 The deadline for applications to this program has been extended and ends at the end of the day TODAY
14:03:19 #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/110031.html
14:03:53 * jlibosva slows down a bit with links but more are to come :)
14:04:19 Yesterday a new neutron-lib 1.1.0 was released. yay
14:04:28 Congratulations to all who made it happen! Good stuff.
14:04:29 \o/
14:04:46 yay!
14:04:50 woop!
14:04:56 You can read the enthusiastic announcement and a lot more here
14:04:58 link http://lists.openstack.org/pipermail/release-announce/2017-January/000372.html
14:05:00 #link http://lists.openstack.org/pipermail/release-announce/2017-January/000372.html
14:05:00 Hi
14:05:13 have we bumped the minimum already?
14:05:23 ihrachys: i didn't see this yet.
14:06:31 this is all I wanted to announce
14:06:39 Does anybody have anything else to announce?
14:06:52 yes. friendly reminder: next week is FF
14:07:09 so, just one week left to squeeze in all changes
14:07:11 we already have neutron-lib>=1.1.0 now in master
14:07:47 dasm: I think it is better to release neutronclient this week
14:07:47 amotoki: hmm... this one shows 1.0.0 :/
14:07:49 https://github.com/openstack/neutron/blob/master/requirements.txt#L19
14:08:05 dasm: maybe it's not synced with global reqs yet?
14:08:08 amotoki: ack. we still have one week, but we can work on this
14:08:13 to avoid a situation where our client breaks others
14:08:31 dasm: I will ping you after checking the situation
14:08:40 amotoki: ack, thanks
14:09:04 we tend to release our client late in the cycle and have broken something several times.... let's avoid this
14:09:36 dasm: thanks for the FF reminder
14:09:51 anything else?
14:09:54 dasm: fyi http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n110
14:09:58 jlibosva: amotoki: you're both right. global-requirements already has neutron-lib 1.1.0
14:10:08 amotoki: thanks, just noticed the same
14:11:03 moving on
14:11:11 #topic Blueprints
14:11:19 #link https://launchpad.net/neutron/+milestone/ocata-3
14:11:31 We're getting to the end of milestone 3 very soon
14:11:39 amotoki dasm: https://review.openstack.org/#/c/419345/
14:12:04 ah, there it goes :)
14:12:07 hichihara: thanks
14:12:10 hichihara: thanks. now just wait for the effect on all gates :D
14:12:33 and let's pray for no failures ;)
14:12:52 So back to milestone 3, which per the planned schedule is Jan 23 - Jan 27
14:13:03 which is the same week as the mentioned FF
14:13:34 Does anybody want to raise here any bug/patch/blueprint that lacks proper attention and must get into ocata-3?
14:13:44 Hi!
14:13:50 hi
14:13:57 I've got 3 patches that are ready and waiting for some reviews: https://review.openstack.org/#/c/419815/ https://review.openstack.org/#/c/415226/ https://review.openstack.org/#/c/404182/
14:14:40 ataraday_: good, thanks for bringing this up
14:14:49 I have one ready for review: https://review.openstack.org/273546
14:15:09 It has been working for a long time; the functional tests are now fixed
14:15:33 would also be good to give this OVO patch some love: https://review.openstack.org/#/c/306685/
14:15:39 korzen: cool, thanks. I'm sure jschwarz will love it ;)
14:16:02 and to make review progress on the port bindings rework that will be used for multiple port bindings: https://review.openstack.org/#/c/407868/ and https://review.openstack.org/#/c/404293/
14:16:10 ihrachys: do you want a dedicated topic for that? I saw no patches on the wiki
14:16:15 jlibosva: nah
14:16:28 I think I mentioned already what's really important
14:16:43 ihrachys: ok, thanks
14:18:05 any other patches ready to land that are worth attention?
14:18:52 https://review.openstack.org/#/c/396651/
14:19:17 I have this one, refactoring the QoS drivers to something more decoupled
14:19:18 ajo: thanks, this one is huuuge :)
14:19:24 I'm sorry, yes ':D
14:19:34 and I broke it in the last changes, but I should push a new one now :)
14:20:04 ajo: do you think the related bug is doable in the ocata-3 timeframe?
14:20:17 jlibosva seems huge, but it's more moving stuff around than creating new logic
14:21:06 I'm unsure, but it would be beneficial to let driver implementers switch to the new driver model as soon as they can
14:21:13 ajo: is it ready for another review run?
14:21:36 ihrachys it is if you want, I have a -1 from jenkins I'm fixing now, but it must be a small change
14:21:40 I see qos tests failing
14:21:50 yes
14:22:01 ok, ping me when everything is in shape Jenkins-wise
14:22:17 apparently passing unit tests locally is not a guarantee :)
14:22:18 ack, it should be good in a couple of hours
14:22:20 I'll ping you, thanks ihrachys
14:22:37 ajo: I asked because the bug is not set for milestone 3 and that could hide it from reviewers who prioritize o-3 bugfixes
14:23:21 oh, thanks jlibosva, maybe we should set it for milestone-3, or add it as a separate bug on milestone-3
14:23:32 jlibosva: john-davidge reminded me about this handy link to all o-3 related changes
14:23:33 ajo: yeah, I was also thinking about a separate bug
14:23:34 #link
14:23:36 https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Fnetworking%2Dofagent+OR+project%3Aopenstack%2Fnetworking%2Dbgpvpn+OR+project%3Aopenstack%2Fnetworking%2Dovn+OR+project%3Aopenstack%2Fnetworking%2Dmidonet+OR+project%3Aopenstack%2Fnetworking%2Dbagpipe+OR+project%3Aopenstack%2Fneutron%2Dlib+OR+project%3Aopenstack%2Fnetworking%2Dsfc+OR+project%3Aopenstack%2Fpython%2Dneutronclient+OR+project%3Aopenstack%2Fneutron%2Dspecs+OR+project%3Aopenstack%2Fnetworking%2Dodl+OR+project%3Aopenstack%2Fneutron%2Dfwaas+OR+project%3Aopenstack%2Fneutron+OR+project%3Aopenstack%2Fneutron%2Ddynamic%2Drouting%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D%2D1+NOT+label%3ACode%2DReview%3E%3D%2D2%2Cself+branch%3Amaster&title=Neutron+ocata%2D3+Review+Inbox&Approved+RFE+neutron=%28message%3A1458890+OR+message%3A1463784+OR+message%3A1468366+OR+message%3A1492714+OR+message%3A1498987+OR+message%3A1504039+OR+message%3A1507499+OR+message%3A1516195+OR+message%3A1520719+OR+message%3A1521291+OR+message%3A1522102+OR+message%3A1525059+OR+message%3A1560961+OR+message%3A1561824+OR+message%3A1563967+OR+message%3A1566520+OR+message%3A1577488+OR+message%3A1578989+OR+message%3A1579068+OR+message%3A1580327+OR+message%3A1583184+OR+message%3A1585770+OR+message%3A1586056%29&High+Bugs+neutron=%28message%3A1365461+OR+message%3A1375625+OR+message%3A1506567+OR+message%3A1570122+OR+message%3A1580648+OR+message%3A1599936+OR+message%3A1610483+OR+message%3A1611626+OR+message%3A1626010+OR+message%3A1634123+OR+message%3A1642223+OR+message%3A1644415+OR+message%3A1647432+OR+message%3A1649124+OR+message%3A1649317+OR+message%3A1649503+OR+message%3A1654991+OR+message%3A1655281%29&Blueprints+neutron=%28topic%3Abp%2Fadopt%2Doslo%2Dversioned%2Dobjects%2Dfor%2Ddb+OR+topic%3Abp%2Fneutron%2Dlib+OR+topic%3Abp%2Fonline%2Dupgrades+OR+topic%3Abp%2Fpush%2Dnotifications+OR+topic%3Abp%2Frouted%2Dnetworks+OR+topic%3Abp%2Fagentless%2Ddriver+OR+topic%3Abp%2Fenginefacade%2Dswitch+OR+topic%3Abp%2Ffwaas%2Dapi%2D2.0+OR+topic%3Abp%2Fl2%2Dapi%2Dextensions+OR+topic%3Abp%2Fneutron%2Din%2Dtree%2Dapi%2Dref+OR+topic%3Abp%2Fsecurity%2Dgroup%2Dlogging+OR+topic%3Abp%2Ftroubleshooting%29&Approved+RFE+python%2Dneutronclient=%28message%3A1457556%29&High+Bugs+python%2Dneutronclient=%28message%3A1549876+OR+message%3A1643849%29
14:23:37 to be honest, the whole thing is probably not m-3 doable
14:23:45 this refactor: yes
14:23:48 :( sorry
14:23:49 dasm: !!!
14:23:49 dasm: is it a link or spam?
14:23:52 #link http://status.openstack.org/reviews/
14:24:01 * ihrachys passes a prize to dasm
14:24:02 dasm: Haha! That's why I didn't try to link you directly to it :P
14:24:11 john-davidge: ;)
14:24:13 lol
14:24:14 lol
14:24:17 ok, I'm adding a separate bug for it, thanks jlibosva
14:24:21 ajo: thanks
14:24:26 in fact, I thought I had it hmm
14:24:42 I also have one o-3 patch that lacks eyes and love - https://review.openstack.org/#/c/402174/
14:25:03 and thanks dasm for the link :)
14:25:17 jlibosva oh right, I think that one is good to go probably
14:25:19 it's simple
14:25:32 jlibosva: I'll take a look later today
14:25:33 I committed a new patch fixing a tiny typo in comments
14:25:45 jlibosva: the patchset I meant
14:25:53 mlavalle: thank you! :)
14:26:42 so if there are no other patches/blueprints to highlight, we can move on to the next topic
14:27:00 and the next topic is
14:27:00 Sorry, I have one that needs more attention: https://review.openstack.org/#/c/203509
14:27:01 #topic Bugs and gate failures
14:27:07 #undo
14:27:08 Removing item from minutes: #topic Bugs and gate failures
14:27:50 annp but that looks like a spec, makes sense for pike
14:28:03 yeah, that is a spec
14:28:17 I thought jlibosva was asking about code patches that need attention due to FF
14:28:18 annp: I think it has gathered enough attention these weeks. active discussion has been happening recently
14:28:45 annp: thanks for bringing this up
14:29:10 yeah, even though it already links some patches, it'll likely be discussed further in the next cycle
14:29:57 anything else?
14:30:09 #topic Bugs and gate failures
14:30:10 Ok, I understand, please go ahead
14:30:14 annp: thanks :)
14:30:29 We started experiencing a lack of memory on gate jobs
14:30:35 #link https://bugs.launchpad.net/neutron/+bug/1656386
14:30:36 Launchpad bug 1656386 in neutron "Memory leaks on Neutron jobs" [Critical,New]
14:30:53 At first it appeared to be only linuxbridge jobs, but then I saw other multinode jobs fail because of insufficient memory too
14:31:25 I wanted to bring this to attention in case there is someone who loves memory leaks and stuff :)
14:31:50 ouch
14:31:55 jlibosva: do we have an entry in logstash for that one?
14:32:16 jlibosva: just for monitoring the number of hits
14:32:22 jlibosva it would be great to have some sort of memory usage output at the end of test runs
14:32:24 electrocucaracha: good point, I think we don't have that
14:32:42 ajo: the oom-killer dumps the processes before picking a victim
14:32:51 ah, nice
14:33:01 ajo: and also I think worlddump collects that as well
14:33:14 jlibosva: ok, I'll double-check and maybe add something there
14:33:20 electrocucaracha: thanks!
14:33:44 jlibosva: worlddump is called in grenade only
14:33:44 ajo: i tried to investigate it a little. it seems like toward the end of the tempest run, swap goes through the roof and the oom-killer tries to "solve" this by killing something
14:33:55 jlibosva : have we run a full tempest job on a local (like devstack) node to check?
14:34:05 ihrachys: oh, I thought it's called on every failure. ok, nevermind, thanks for correcting me
14:34:18 oh and we have ps output: http://logs.openstack.org/73/373973/13/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/295d92f/logs/ps.txt.gz
14:35:03 reedip_: i didn't see any local problems with this issue. probably a good idea would be to try to reproduce on an env similar to the gate (like 8gb ram + 2gb swap)
14:35:38 in the neutron-full failures in the bug comment, we got "Out of memory: Kill process 20219 (mysqld) score 34 or sacrifice child".
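As a rough illustration of the "memory usage output at the end of test runs" idea floated above: a minimal sketch, assuming a Linux job node with /proc/meminfo and ps available. The helper below is hypothetical and is not part of devstack, worlddump, or any existing gate hook.

```python
#!/usr/bin/env python
"""Sketch of a post-run memory summary that could be appended to job logs."""
import subprocess


def meminfo(fields=("MemTotal", "MemAvailable", "SwapTotal", "SwapFree")):
    """Return selected /proc/meminfo values, in kB."""
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            if key in fields:
                values[key] = int(rest.strip().split()[0])
    return values


if __name__ == "__main__":
    for key, kb in sorted(meminfo().items()):
        print("%s: %d kB" % (key, kb))
    # Top RSS consumers, similar to the ps.txt.gz artifact linked above.
    ps = subprocess.check_output(
        ["ps", "-eo", "rss,comm", "--sort=-rss"]).decode()
    print("\n".join(ps.splitlines()[:15]))
```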
14:35:44 dasm : Hmm, that can be done, and we can probably use ps --forest to get a better view of the process tree
14:36:01 reedip_: I looked at project-config and it seems all full runs are multinode
14:36:24 dasm, jlibosva on those ps listings I don't see anything neutron-related outstanding in numbers
14:36:29 jlibosva : oh, then reproducing it as dsvm wouldn't be helpful unless it's also failing
14:36:31 I see cinder using a lot of memory though
14:37:19 well, where "a lot of memory" is 0.8GB, not huge
14:37:26 ajo: IIRC I saw nova-api and mysqld being big. But we can dig into it later so we don't spend the whole meeting on a single bug
14:37:28 how much memory do the test VMs have?
14:37:32 ack
14:37:34 ajo: 8G I think
14:37:36 makes sense
14:37:49 yes, 8GB
14:37:55 bug deputy was boden for last week but I don't see him around
14:38:11 and we don't have a bug deputy for this week!
14:38:32 so unless there is some other critical bug that you are aware of, I'd like to find a volunteer :)
14:38:51 for this week, starting probably yesterday
14:38:51 I haven't done it before, but I can give it a shot
14:39:08 janzian++
14:39:17 janzian: you're very welcome to do it :)
14:39:20 janzian: thank you
14:39:23 janzian: thanks
14:39:46 we should also pick a deputy for next week
14:39:57 is there any other hero that will server next week?
14:40:05 sorry, serve* :)
14:41:07 it's a very prestigious role
14:41:49 ok, so I take next week
14:41:59 selling used cars is not for you :)
14:42:17 jlibosva let me take it
14:42:18 It's been a long time for me
14:42:33 haleyb: maybe I should wave my hands more :)
14:42:34 thanks Tocayo!
14:42:43 :D
14:42:49 ajo: alright, sold to ajo :)
14:43:02 \m/
14:43:32 #topic Docs
14:43:41 john-davidge: hello :)
14:43:47 jlibosva: Hello :)
14:43:51 john-davidge: do you want to update?
14:44:20 One interesting bug to raise #link https://bugs.launchpad.net/openstack-manuals/+bug/1656378
14:44:20 Launchpad bug 1656378 in openstack-manuals "Networking Guide uses RFC1918 IPv4 ranges instead of RFC5737" [High,Confirmed]
14:44:45 There will be an effort across the networking guide to address that, possibly devref too if it's needed
14:45:30 If anybody is interested in seeking out and destroying instances of non-compliance it would be much appreciated
14:45:36 john-davidge: it already uses 2001:db8 for IPv6 right?
14:45:42 otherwise our top priority remains the migration to OSC
14:45:46 haleyb: Yes
14:45:57 cool
14:46:09 RFC5737 defines IP ranges for documentation. It is worth checking.
14:46:29 haleyb: Obviously the IPv6 team is always on the ball :)
14:46:58 john-davidge: obviously :)
14:47:09 lol
14:48:18 That's all from me
14:48:26 john-davidge: cool, thanks for the link :)
14:48:33 #topic Transition to OSC
14:48:41 amotoki: do you want to update about OSC?
14:48:50 yeah
14:49:12 A patch in discussion is FIP associate/disassociate https://review.openstack.org/#/c/383025/
14:49:36 It seems we need a discussion with Dean.
14:49:42 #link https://review.openstack.org/#/c/383025/
14:49:54 If you are interested, please share your opinion.
14:50:03 I had an opinion to change the options
14:50:27 I haven't checked the overall status. sorry for the delay, but it will be reported at latest this week.
14:50:44 * the end of this week
14:50:53 amotoki: ok, thank you for the update. I hope the discussion will continue on that patch
14:51:34 what I am not sure about is which OSC plugin patches need to be merged in the Ocata neutronclient release.
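For reference on the documentation-address bug raised in the Docs topic above: RFC 5737 reserves 192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24 for IPv4 examples, and RFC 3849 reserves 2001:db8::/32 for IPv6 (the range the guide already uses). A quick sketch of how one might flag non-compliant example addresses with the Python stdlib; the helper name is made up for illustration and is not part of the docs tooling.

```python
import ipaddress

# RFC 5737 IPv4 documentation ranges plus the RFC 3849 IPv6 prefix.
DOC_NETWORKS = [
    ipaddress.ip_network(u"192.0.2.0/24"),     # TEST-NET-1
    ipaddress.ip_network(u"198.51.100.0/24"),  # TEST-NET-2
    ipaddress.ip_network(u"203.0.113.0/24"),   # TEST-NET-3
    ipaddress.ip_network(u"2001:db8::/32"),    # RFC 3849
]


def is_documentation_address(addr):
    """Return True if addr falls inside one of the documentation ranges."""
    ip = ipaddress.ip_address(addr)
    # Membership tests against a network of a different IP version are False.
    return any(ip in net for net in DOC_NETWORKS)


print(is_documentation_address(u"10.0.0.1"))      # False: RFC 1918, not docs
print(is_documentation_address(u"203.0.113.10"))  # True: TEST-NET-3
```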
14:51:51 the next topic should be neutron-lib, but since I don't see boden here, we can move to the on-demand agenda, as there is a topic there. So unless anybody wants to discuss neutron-lib, I'd pass on that
14:53:03 jli
14:53:21 amotoki: maybe dasm can help as release liaison?
14:53:50 jlibosva: nothing about neutron-lib. but afaik the majority of things were merged
14:53:50 jlibosva: yes, as we discussed at the beginning
14:54:02 ok, thanks, moving on
14:54:07 #topic Disable security group filter refresh on DHCP port changes
14:54:20 mdorman: do you want the stage? :)
14:54:39 sure. really i'm just looking for advice on how to go forward with https://review.openstack.org/#/c/416380/
14:55:11 for us, personally, we will probably just turn off DHCP to work around the problem (we don't really use it anyway), but this seems like a scalability thing that could affect others.
14:56:16 but currently we allow users to change IP addresses of dhcp ports after DHCP ports are created.
14:56:30 it would be nice if we had an alternative.
14:56:33 the idea of that patch was to stop refreshing all security group filters on all ports any time a dhcp port changes. but it turns out that is actually a breaking fix, because there are inbound rules on the port specific to the dhcp agents on that network. so i think the proposal in the comments is to do away with those specific inbound rules and replace them with a blanket rule that would allow all dhcp traffic in.
14:56:37 seems like there is some kind of discussion going on on that patch
14:56:46 amotoki: correct. that's the current issue
14:57:07 yes. i just wanted to raise the issue and try to get some more eyeballs
14:57:10 wouldn't it be reasonable to allow any dhcp in from the specific DHCP servers?
14:57:34 ajo that's the current behavior i believe.
14:57:37 hmm
14:57:50 and wouldn't that only be an issue if you move the dhcp server IPs around?
14:57:53 the problem is when a dhcp agent is added/removed/changed, then the rules on all ports in the network have to be updated
14:57:55 mdorman: yep, more eyes are definitely useful :) thanks for bringing this up
14:57:56 * ajo opens the review
14:58:09 ajo: correct
14:58:17 let's continue the discussion and questions on #-neutron or the review!!
14:58:17 mdorman aha, makes sense
14:58:27 so it becomes a scalability issue in such a case
14:58:36 for ovsfw we could use conjunctive rules...
14:58:49 I wonder if for iptables we could use a generic chain shared by all ports for that
14:58:55 well... by all ports on specific networks
14:58:58 ajo: yup, exactly. we run only provider networks, in some cases with 1000s of ports. so any time a dhcp agent changes, there is an avalanche of RPCs to neutron-server to refresh all the rules
14:58:58 amotoki: +1
14:59:01 one chain per network or so
14:59:04 we are out of time....
14:59:05 we're running out of time anyway
14:59:11 ack
14:59:28 fair enough. happy to move to the neutron channel
14:59:46 thanks everyone for showing up :) and have a good day
14:59:48 mdorman: thanks for raising it anyway
14:59:54 #endmeeting
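To make the trade-off discussed in the last topic concrete: a minimal sketch contrasting per-DHCP-server ingress rules (which force a rule refresh on every port in the network whenever a DHCP port changes) with the single blanket DHCP allow rule proposed in the review. The dictionaries and field names below are simplified illustrations for this discussion, not Neutron's actual firewall driver data model.

```python
# Current-style behaviour: one ingress rule per DHCP server IP, so every
# add/remove/change of a DHCP port means recomputing the rule set of every
# port on that network (the avalanche of RPCs described above).
def per_server_dhcp_rules(dhcp_server_ips):
    return [
        {"direction": "ingress", "protocol": "udp",
         "source_ip": ip, "source_port": 67, "dest_port": 68}
        for ip in dhcp_server_ips
    ]


# Proposed-style behaviour: a single blanket rule allowing DHCP replies no
# matter which agent answers, so DHCP port changes no longer trigger a
# network-wide security group refresh.
BLANKET_DHCP_RULE = {
    "direction": "ingress", "protocol": "udp",
    "source_port": 67, "dest_port": 68,
}

print(per_server_dhcp_rules(["10.0.0.2", "10.0.0.3"]))
print(BLANKET_DHCP_RULE)
```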