Thursday, 2014-01-02

*** mili__ has joined #openstack-neutron00:07
*** mili_ has quit IRC00:11
*** mili__ has quit IRC00:15
*** mili_ has joined #openstack-neutron00:15
*** clev has quit IRC00:27
*** mili_ has quit IRC00:43
*** aymenfrikha has quit IRC00:43
*** yongli has joined #openstack-neutron01:05
*** banix has joined #openstack-neutron01:10
*** banix has quit IRC01:18
*** banix has joined #openstack-neutron01:20
*** banix has quit IRC01:36
*** otherwiseguy has quit IRC01:41
*** Jianyong has joined #openstack-neutron01:46
*** WackoRobie has quit IRC02:54
*** dguitarbite has joined #openstack-neutron03:22
*** dguitarbite has quit IRC03:34
*** banix has joined #openstack-neutron03:35
<openstackgerrit> A change was merged to openstack/neutron: Remove unused imports  https://review.openstack.org/64605  03:45
*** nati_ueno has joined #openstack-neutron04:04
*** changbl has quit IRC04:05
*** changbl has joined #openstack-neutron04:10
*** nati_ueno has quit IRC04:16
<openstackgerrit> ZhiQiang Fan proposed a change to openstack/python-neutronclient: Refactor tests/unit/test_shell.py  https://review.openstack.org/41615  04:28
*** bashok has joined #openstack-neutron04:42
*** julim has joined #openstack-neutron04:46
*** jecarey has joined #openstack-neutron04:53
*** banix has quit IRC04:57
*** h6w has joined #openstack-neutron05:07
*** bashok has quit IRC05:09
*** bashok has joined #openstack-neutron05:09
<h6w> Quick question.  If I want my VMs to be accessible to other machines on the VLAN, which range is supposed to overlap with it: the "Public" address range, the "Floating IP" address range, both, or something else?  05:11
<h6w> It seems that if I connect the VM to the "Public" network, it's assigned an IP according to Horizon, but there's no DHCP on that network so the VM never actually gets that IP.  05:15
<h6w> However, if I assign it the IP given by Horizon (ip a a <ipaddr> dev eth0) then I can ping/access the rest of the same network.  05:29
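For anyone following along, h6w's shorthand expands to the full iproute2 form below; the address, prefix length and interface are placeholders for illustration, not values taken from this log:

    # assumed example: manually give the guest the address Horizon reported
    ip address add 203.0.113.25/24 dev eth0
    ip link set eth0 up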
*** ashaikh_ has joined #openstack-neutron05:37
*** ashaikh has quit IRC05:38
*** ashaikh_ is now known as ashaikh05:38
*** bashok_ has joined #openstack-neutron05:38
*** bashok__ has joined #openstack-neutron05:39
*** bashok_ has quit IRC05:39
*** bashok has quit IRC05:42
*** yfried has quit IRC05:46
*** chandankumar has joined #openstack-neutron05:48
*** networkstatic has quit IRC06:01
*** jistr has joined #openstack-neutron06:08
*** h6w has left #openstack-neutron06:09
*** networkstatic has joined #openstack-neutron06:22
<openstackgerrit> Jenkins proposed a change to openstack/neutron: Imported Translations from Transifex  https://review.openstack.org/63877  06:34
*** zhhuabj has quit IRC06:44
*** zhhuabj has joined #openstack-neutron06:44
*** evgenyf has joined #openstack-neutron06:50
*** zigo has joined #openstack-neutron06:56
*** yfried has joined #openstack-neutron06:59
*** Jianyong has quit IRC07:03
*** garyk has joined #openstack-neutron07:03
*** Jianyong has joined #openstack-neutron07:06
*** ashaikh has quit IRC07:21
*** irenab_ has joined #openstack-neutron07:29
*** sputnik13 has joined #openstack-neutron07:37
*** majopela has joined #openstack-neutron08:04
*** amuller has joined #openstack-neutron08:07
*** dguitarbite_ has joined #openstack-neutron08:12
*** ihrachys has joined #openstack-neutron08:17
*** pasquier-s has joined #openstack-neutron08:21
<yfried> salv-orlando: ping  08:24
*** jlibosva has joined #openstack-neutron  08:28
<irenab_> hi, any chance someone can review https://review.openstack.org/#/c/53609/ ?  08:35
<majopela> hi irenab_, on it,  08:41
<majopela> :)  08:41
<irenab_> majopela: thanks!  08:42
*** amuller has quit IRC08:46
*** amuller_ has joined #openstack-neutron08:46
*** jpich has joined #openstack-neutron09:00
*** ygbo has joined #openstack-neutron09:03
*** amuller_ has quit IRC09:15
*** amuller_ has joined #openstack-neutron09:24
*** amuller_ is now known as amuller09:25
*** tziOm has joined #openstack-neutron09:29
*** networkstatic has quit IRC10:00
*** sputnik13 has quit IRC10:17
<openstackgerrit> Sean M. Collins proposed a change to openstack/neutron: Ensure entries in dnsmasq belong to a subnet using DHCP  https://review.openstack.org/64578  10:25
*** Jianyong has quit IRC10:30
*** dguitarbite_ has quit IRC10:43
<yfried> salv-orlando: ping  10:43
<tziOm> Anyone with some deeper insight into the ovs/vxlan implementation used in ovs/neutron?  10:59
*** pcm has joined #openstack-neutron  11:13
*** pcm has quit IRC  11:14
*** pcm has joined #openstack-neutron  11:14
<majopela> tziOm, what do you want to know exactly? (just curious, probably I don't have a deep enough insight)  11:15
<tziOm> From what I can see, one must specify an endpoint ip or "flow" .. the way I am thinking, this kind of breaks the good concept of vxlan: using multicast to do neighbour discovery (arp) and having a bridge "arp" table for the vxlan link..  11:18
<tziOm> afaik this implementation demands that one plugs in an openflow switch (?) for the control, instead of the vxlan "interface" doing the job..  11:19
<majopela> I'm not sure if openvswitch does already implement vxlan multicast  11:23
<majopela> and, about openflow, it does use it under the hood, yes  11:24
<majopela> but in software, you don't need to use a physical openflow switch...  11:25
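For context, the "specify an endpoint ip" style tziOm refers to is a point-to-point OVS VXLAN port, created roughly like this; the bridge name, port name, peer address and VNI below are made up for illustration:

    # assumed example: one statically pinned tunnel port per remote hypervisor
    ovs-vsctl add-port br-tun vxlan-peer1 -- set interface vxlan-peer1 \
        type=vxlan options:remote_ip=192.0.2.11 options:key=1001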
*** bashok__ has quit IRC11:28
*** jroovers has joined #openstack-neutron11:30
*** jorisroovers has joined #openstack-neutron11:39
*** jroovers has quit IRC11:41
*** WackoRobie has joined #openstack-neutron12:06
*** armax has joined #openstack-neutron12:10
*** WackoRobie has quit IRC12:10
<openstackgerrit> Akihiro Motoki proposed a change to openstack/neutron: Return request-id in API response  https://review.openstack.org/58270  12:13
*** aymenfrikha has joined #openstack-neutron12:16
*** zzelle has joined #openstack-neutron12:25
*** jdev789 has joined #openstack-neutron12:32
*** jistr has quit IRC12:39
*** aymenfrikha has quit IRC12:48
*** jdev789 has quit IRC12:48
*** jorisroovers has quit IRC12:49
*** markvoelker has joined #openstack-neutron12:52
*** b3nt_pin` is now known as beagles12:59
*** [1]evgenyf has joined #openstack-neutron13:00
*** beagles is now known as Guest797513:00
*** evgenyf has quit IRC13:02
*** [1]evgenyf is now known as evgenyf13:02
*** aymenfrikha has joined #openstack-neutron13:04
*** jdev789 has joined #openstack-neutron13:05
<openstackgerrit> A change was merged to openstack/neutron: Fix empty network deletion in db_base_plugin for postgresql  https://review.openstack.org/63597  13:05
*** jistr has joined #openstack-neutron13:07
*** Guest7975 has quit IRC13:08
*** b3nt_pin has joined #openstack-neutron13:12
*** Jianyong has joined #openstack-neutron13:12
<openstackgerrit> enikanorov proposed a change to openstack/neutron: Fix race in get_network(s) in OVS plugin  https://review.openstack.org/63918  13:15
*** sputnik13 has joined #openstack-neutron13:22
*** sputnik13 has quit IRC13:27
*** yfried has quit IRC13:30
*** WackoRobie has joined #openstack-neutron13:35
<anteaya> whoever tobbe at tail-f dot com is, let's get some stable urls and logs in the Tail-f NCS Jenkins third party testing messages, failure messages with no logs are pretty useless to developers https://review.openstack.org/#/c/63918/  13:42
*** b3nt_pin is now known as beagles  13:44
<openstackgerrit> Sean M. Collins proposed a change to openstack/neutron: Ensure entries in dnsmasq belong to a subnet using DHCP  https://review.openstack.org/64578  13:44
<sc68cal> anteaya: ++++++++ a million  13:49
<openstackgerrit> Paul Michali proposed a change to openstack/neutron: Remove duplication for get_resources in services  https://review.openstack.org/64676  13:49
*** bvandenh has joined #openstack-neutron13:57
*** irenab_ has quit IRC13:58
*** aymenfrikha has quit IRC14:01
*** aymenfrikha has joined #openstack-neutron14:01
*** rkukura has left #openstack-neutron14:05
*** s3wong has joined #openstack-neutron14:06
*** peristeri has joined #openstack-neutron14:07
*** aymenfrikha has quit IRC14:11
<anteaya> sc68cal: http://lists.openstack.org/pipermail/openstack-dev/2014-January/023292.html  14:11
*** rkukura has joined #openstack-neutron  14:12
*** [1]evgenyf has joined #openstack-neutron  14:15
<sc68cal> anteaya: thanks - I'm not surprised that this topic is causing a vendor plugin to barf - https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/dnsmasq-mode-keyword,n,z  14:15
<sc68cal> But we're trying to make IPv6 work correctly in Neutron with the Comcast IPv6 deployment  14:16
<sc68cal> as well as make it more flexible, for some of the other ways v6 is being deployed by other Neutron users  14:16
*** evgenyf has quit IRC  14:17
*** [1]evgenyf is now known as evgenyf  14:17
<sc68cal> But no logs mean we have no way to figure out what to fix  14:18
<anteaya> no logs for any reason is unhelpful  14:18
<anteaya> logs need to come before voting  14:19
<anteaya> we are working on that in the documentation  14:19
<anteaya> now if you want to take a look at two projects that really get it right, check out smokestack and turbo-hipster  14:19
<sc68cal> anteaya: Gotcha. Yeah I might spend a sprint doing a spike on third party testing. We've got a full lab environment that I have set up with DevStack for v6 testing  14:20
<sc68cal> so I might build a bot to start running tempest tests on it - although our deployment has some differences, so we'd need to add some config knobs to Tempest  14:21
<sc68cal> For instance, we don't run the l3 agent, so no floating IPs, and Tempest assumes that floating IPs always exist  14:21
<sc68cal> I think it requests some, and when it comes back with none it just sits there, confused.  14:22
<anteaya> here is a great example of how to respond to an email about third party testing: http://lists.openstack.org/pipermail/openstack-dev/2013-December/023247.html  14:22
<anteaya> this is useful  14:22
<anteaya> for those listening at home  14:22
<anteaya> hmmm  14:22
<anteaya> well, ensure the logs cover those changes  14:22
<anteaya> so we know what is being tested  14:23
<sc68cal> yup  14:23
<anteaya> you know you can stand up a jenkins and get a gerrit plugin to listen to our output stream?  14:23
<sc68cal> anteaya: yep  14:23
<anteaya> so you can minimize your work  14:23
<anteaya> and you can take the code for our entire testing infra if you want it  14:24
<anteaya> puppet stands it up  14:24
<anteaya> I haven't done that one myself yet, I can't find the time *sigh*  14:24
<anteaya> but I want to  14:24
<sc68cal> Yeah - we'd need to slice up our lab env a bit for that, since the 3 machines are currently being used for dev/qa  14:25
<sc68cal> before we could hand it totally over to puppet  14:25
<sc68cal> might be able to do that for some of the machines currently dedicated to QA  14:25
*** jprovazn has joined #openstack-neutron  14:28
<anteaya> sc68cal: interesting  14:32
<anteaya> keep me informed on your progress and I will do what I can to help  14:32
<anteaya> keeping in mind, I haven't sailed that ship myself yet  14:32
*** aymenfrikha has joined #openstack-neutron14:35
*** jdev789 has quit IRC14:45
*** jdev789 has joined #openstack-neutron14:46
*** jdev789 has quit IRC14:46
*** clev has joined #openstack-neutron14:52
*** markmcclain has quit IRC14:54
*** Jianyong has quit IRC14:54
*** amuller has quit IRC14:58
*** jdev789 has joined #openstack-neutron14:59
*** jdev789 has quit IRC15:01
*** yfried has joined #openstack-neutron15:04
*** otherwiseguy has joined #openstack-neutron15:06
<anteaya> otherwiseguy: hello there  15:12
<anteaya> are you free to help the neutron quality cause?  15:12
<anteaya> we could use you  15:12
*** dkehn_ is now known as dkehn15:17
*** clev has quit IRC15:20
*** s3wong has quit IRC15:25
*** markmcclain has joined #openstack-neutron15:26
*** aymenfrikha has quit IRC15:35
*** clev has joined #openstack-neutron15:35
*** bvandenh has quit IRC15:36
<openstackgerrit> Amir Sadoughi proposed a change to openstack/python-neutronclient: Added --source-port-range-min, --source-port-range-max  https://review.openstack.org/62130  15:44
*** banix has joined #openstack-neutron15:44
*** haleyb has joined #openstack-neutron15:51
*** clev has quit IRC15:52
*** clev has joined #openstack-neutron15:53
*** ashaikh has joined #openstack-neutron15:54
*** alagalah has joined #openstack-neutron16:02
*** thedodd has joined #openstack-neutron16:08
*** jgrimm has joined #openstack-neutron16:12
*** thedodd has quit IRC16:16
<ihrachys> hi. is it enough to set 'qpid_topology_version = 2' in neutron.conf to make sure neutron uses the new topology for its interactions? How can I check which topology is currently in use by neutron (or any other openstack module)?  16:17
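For reference, the option ihrachys mentions normally sits in the [DEFAULT] section of the service's config; a minimal sketch assuming the qpid RPC driver of that era (the rpc_backend value is an assumption, check your deployment's actual setting):

    [DEFAULT]
    # assumed example: qpid RPC driver with the newer exchange topology
    rpc_backend = neutron.openstack.common.rpc.impl_qpid
    qpid_topology_version = 2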
*** jprovazn has quit IRC16:19
*** thedodd has joined #openstack-neutron16:21
*** markmcclain has quit IRC16:21
*** jlibosva has quit IRC16:25
*** jlibosva has joined #openstack-neutron16:28
*** mlavalle has joined #openstack-neutron16:29
*** yfried is now known as yfried_brb16:30
*** jprovazn has joined #openstack-neutron16:30
*** jistr has quit IRC16:32
<anteaya> lots of test failures happening right now, debugging is happening in -infra  16:38
<anteaya> early direction might be a new MySQL_python release that happened today  16:38
<anteaya> if so, global requirements should send out a version pin which will be a high priority patch  16:40
<anteaya> watch for it, I will probably be offline when it happens  16:41
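For anyone unfamiliar with what such a pin looks like, a cap in openstack/requirements' global-requirements.txt is a one-line change of roughly this shape; the version numbers below are illustrative only, not the pin that was actually proposed:

    # assumed example: cap the library below the release suspected of breaking the gate
    MySQL-python>=1.2.3,<1.2.5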
*** yfried_brb is now known as yfried  16:42
<openstackgerrit> A change was merged to openstack/neutron: Imported Translations from Transifex  https://review.openstack.org/63877  16:44
<openstackgerrit> Yves-Gwenael Bourhis proposed a change to openstack/neutron: Changed DictModel to dict with attribute access.  https://review.openstack.org/64696  16:46
<anteaya> my cab is here, see you  16:49
*** carl_baldwin has joined #openstack-neutron16:57
*** amuller has joined #openstack-neutron17:01
*** garyk has quit IRC17:06
*** zz_ajo is now known as ajo17:07
*** SumitNaiksatam has quit IRC17:18
*** ygbo has quit IRC17:18
*** jaypipes has joined #openstack-neutron17:18
*** jaypipes has quit IRC17:19
*** markmcclain has joined #openstack-neutron17:24
*** rwsu has joined #openstack-neutron17:27
*** aymenfrikha has joined #openstack-neutron17:29
*** mlavalle has quit IRC17:32
*** jpich has quit IRC17:33
*** evgenyf has quit IRC17:35
<enikanorov__> salv-orlando: Hi  17:35
<salv-orlando> hi  17:35
<enikanorov__> good you are here :) I wanted to ask about the NeutronDbPluginV2.register_dict_extend_funcs capability. You have introduced this, haven't you?  17:36
<salv-orlando> yes, I did.  17:37
<salv-orlando> I'm the one who's guilty of that crime.  17:37
<enikanorov__> the question would be: what were the reasons to make it a class method and store the dict in the class rather than in the instance  17:37
<salv-orlando> the fact that we don't have a reliable way of invoking __init__ for a mixin  17:38
<salv-orlando> or to have a mechanism in the base db class which initialises the mixins before using them  17:38
<salv-orlando> as the plugins are instructed to *NOT* call super.__init__  17:38
<enikanorov__> ok, I knew there should be reasons...  17:39
<enikanorov__> so here's the problem I'm trying to fix:  17:39
<enikanorov__> i'm trying to add a relationship and another dict_extend method for the ovs plugin  17:40
*** alagalah has quit IRC  17:40
<enikanorov__> in the usual way it is done in mixins  17:40
*** jaypipes has joined #openstack-neutron  17:40
<enikanorov__> but i'm getting lots of ut failures, because the mapping persists in NeutronDbPluginV2  17:41
*** jlibosva has quit IRC  17:41
*** jlibosva has joined #openstack-neutron  17:41
<enikanorov__> I'm a bit stuck because the way register_dict_extend_funcs works is for the whole lifetime of the application  17:42
<enikanorov__> so i can't reset it in a reliable way  17:42
<salv-orlando> since the method is added to the class  17:42
<enikanorov__> (by saying 'application' i mean unit tests)  17:42
<salv-orlando> it stays even when you switch plugins  17:42
<enikanorov__> yeah  17:43
<salv-orlando> but I think we solved this by allowing to use a stringified form  17:43
<salv-orlando> do you have this patch in gerrit?  17:43
<enikanorov__> yes  17:43
<salv-orlando> link?  17:43
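A minimal sketch of why the class-level registry behaves the way enikanorov__ and salv-orlando describe; the names and bodies below are simplified stand-ins, not the actual Neutron code:

    # assumed illustration: a class attribute is shared across all plugin subclasses,
    # so whatever one test's plugin registers is still visible after switching plugins
    class NeutronDbPluginV2(object):
        _dict_extend_functions = {}

        @classmethod
        def register_dict_extend_funcs(cls, resource, funcs):
            cls._dict_extend_functions.setdefault(resource, []).extend(funcs)

    class PluginA(NeutronDbPluginV2):
        pass

    class PluginB(NeutronDbPluginV2):
        pass

    PluginA.register_dict_extend_funcs('networks', ['extend_a'])
    # a later test case loads PluginB, but the earlier registration persists:
    print(NeutronDbPluginV2._dict_extend_functions)  # {'networks': ['extend_a']}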
<jaypipes> enikanorov__, salv-orlando: Happy New Year :) Do you agree with Amir on this bug: https://bugs.launchpad.net/neutron/+bug/1264608 that the bug should go into a blueprint?  17:43
<enikanorov__> salv-orlando: https://review.openstack.org/#/c/63918/  17:43
<salv-orlando> jaypipes: I did not see this bug beforehand. looking at it.  17:45
<jaypipes> cheers  17:45
<salv-orlando> enikanorov__: looking at your patch too  17:47
*** layer427expert has joined #openstack-neutron  17:47
<enikanorov__> running two threads, huh?  17:47
<enikanorov__> :-)  17:47
*** layer427expert has quit IRC  17:49
*** harlowja has joined #openstack-neutron  17:49
*** layer427expert has joined #openstack-neutron  17:50
*** layer427_ has joined #openstack-neutron  17:50
*** layer427expert has quit IRC  17:50
<salv-orlando> enikanorov__: left a comment for you on gerrit. I think I have a hint about the reason for the failure.  17:53
<enikanorov__> let me see. also i think i've found a solution  17:53
<enikanorov__> yeah  17:53
<enikanorov__> that's what i think too. i just need to rename it  17:54
<salv-orlando> jaypipes: the bug is definitely valid. To me it could be either a bug or a blueprint, depending on the extent of the changes needed.  17:54
<salv-orlando> Are you confident that just by grouping ovs-vsctl calls you'll get better performance?  17:54
<salv-orlando> In the past I did not notice much difference (gain <10%)  17:54
<salv-orlando> because that's something I tried for other reasons...  17:55
<jaypipes> salv-orlando: yes. It took >1 hour to start the plugin with only 90 compute nodes.  17:55
<salv-orlando> jaypipes: so I guess you did restart the agent on all compute nodes.  17:55
<jaypipes> salv-orlando: during this time the l3 router node just crunched away at about 10 load avg.  17:56
<jaypipes> salv-orlando: yep.  17:56
<jaypipes> oh, .... no, not the compute nodes.  17:56
<jaypipes> only restarted the l3 router node agent plugin.  17:56
<jaypipes> salv-orlando: OVS 2.0 would certainly help, but we were on OVS 1.11 on this particular deployment.  17:57
<salv-orlando> jaypipes: got it. Did you restart the l3-agent as well?  17:57
*** garyk has joined #openstack-neutron  17:57
<jaypipes> salv-orlando: yes.  17:57
<salv-orlando> help me… I do not remember if OVS 1.11 has a multithreaded vswitchd  17:57
<jaypipes> salv-orlando: nope, 2.0 has multi-threading  17:57
<salv-orlando> jaypipes: thanks. Are startup times for the l3 agent long as well or are they 'normal'?  17:59
*** mlavalle has joined #openstack-neutron  17:59
<jaypipes> salv-orlando: can't remember :( since the L3 agent calls out to the ovs-plugin-agent, I think it's affected as well, but I'm not entirely sure  18:00
<salv-orlando> I'm asking this because actually you restarted only one agent - so the size of the zone in terms of compute nodes is probably not the problem, which might instead lie in the load being handled by the l3 agent in terms of number of routers and interfaces  18:00
<jaypipes> salv-orlando: the size of the zone dictates how many calls to ovs-vsctl add-br et al are made.  18:01
<jaypipes> salv-orlando: since for each compute node, at least an add-br, add-tun is called.  18:01
<jaypipes> or rather, add br-int, add br-tun  18:01
<salv-orlando> jaypipes: sure, but you just told me that you restarted the plugin agent on one node only (the one hosting the network node), and not on all compute nodes :)  18:02
<jaypipes> salv-orlando: and all of those calls are so slow because they all go through a rootwrap process  18:02
<jaypipes> salv-orlando: if you combine all those calls into a single call to rootwrap ovs-vsctl add-br-int -- add br-tun ... -- add br-int ... -- add br-tun ..., etc ... then the call is much much faster to complete.  18:03
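As an illustration of the batching jaypipes describes: ovs-vsctl accepts several commands separated by "--" in one invocation, so one rootwrap/ovs-vsctl process can replace many. The bridge, port names and addresses below are placeholders, not values from this deployment:

    # assumed example: two port additions in a single ovs-vsctl (and single rootwrap) call
    ovs-vsctl add-port br-tun vxlan-0a000001 \
        -- set interface vxlan-0a000001 type=vxlan options:remote_ip=10.0.0.1 \
        -- add-port br-tun vxlan-0a000002 \
        -- set interface vxlan-0a000002 type=vxlan options:remote_ip=10.0.0.2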
<jaypipes> salv-orlando: yes, only restarted the agent on the L3 router node, not the compute nodes.  18:03
<jaypipes> salv-orlando: but it still calls out to add a bunch of bridge int patches and tuns on restart.  18:04
<jaypipes> salv-orlando: perhaps that is the true bug? :)  18:04
<salv-orlando> yeah perhaps… because it should create, for each node, be it compute or network, only ONE integration bridge and only ONE tunnel bridge  18:05
<salv-orlando> even if you're using provider networks mapped to vlans  18:05
*** bjornar has joined #openstack-neutron  18:05
<jaypipes> salv-orlando: yes, but with a hundred compute nodes, that's 200 interfaces, each of which gets its own rootwrap'd external process.  18:05
*** layer427_ has quit IRC  18:05
<salv-orlando> jaypipes: ok, you're talking about the ports, not the bridges :) Yes, it's clearer now. I got misled because you said 'ovs-vsctl add' and I thought add bridge  18:06
<jaypipes> salv-orlando: so, when you restart the L3 agent, and do a ps aux | grep python | grep ovs (or similar), you see, for about two hours, various python processes spawning and running rootwrap'd ovs-vsctl commands setting up each and every int bridge and tun for each compute node.  18:06
<jaypipes> salv-orlando: ah, sorry, ports, yes...  18:07
<jaypipes> salv-orlando: sorry about that!  18:07
<jaypipes> salv-orlando: /me still relatively new to these things.  18:07
<salv-orlando> no problem. I've been doing this for longer, but most of these things are still new to me.  18:07
<jaypipes> hehe :)  18:07
*** layer427expert has joined #openstack-neutron  18:07
<jaypipes> salv-orlando: as peter feiner has pointed out on numerous occasions, rootwrap is deadly slow, so combining commands into a single call to a rootwrap'd process is a big time win.  18:09
*** layer427expert has quit IRC  18:09
*** amuller has quit IRC  18:12
<salv-orlando> jaypipes: I am looking at the code to refresh my memory of the current status. root wrap is an issue, but single command execution might become challenging because tunnel port setup also requires flow setup (ovs-ofctl); in a nutshell I seem to recall 1 ovs-vsctl call and 2 ovs-ofctl calls for each port. So in your case you should be able to go down from 600 calls to 401, which is already a good win  18:12
<salv-orlando> jaypipes: and on another note, for a much better handling of tunnels, you should enable l2_population on the ovs agent; but I think you've already done that  18:14
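For reference, l2_population is switched on in the OVS agent's configuration; a minimal sketch assuming the Havana-era ovs_neutron_plugin.ini layout (the server side also needs the l2population mechanism driver enabled in ML2, which is not shown here):

    [agent]
    # assumed example: VXLAN tunnels with ARP/FDB pre-population instead of flooding
    tunnel_types = vxlan
    l2_population = True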
<jaypipes> salv-orlando: well, you could go even further than that, and do all the vsctl commands in one rootwrap'd call and all the ovs-ofctl calls in a second one.  18:15
<jaypipes> salv-orlando: yes, we enabled l2_population.  18:15
<salv-orlando> jaypipes: as these commands have runtime-dependent parameters, you should probably create on-the-fly a sort of script to feed to root wrap and then execute it.  18:16
<salv-orlando> not sure how folks will view executing code generated by the code itself.  18:16
<salv-orlando> but that is surely a way to work around the root wrap overhead  18:17
<jaypipes> salv-orlando: yes, exactly.  18:17
*** layer427expert has joined #openstack-neutron  18:18
<salv-orlando> on the other hand you're observing huge startup times, which will make your clouds just useless  18:19
*** networkstatic has joined #openstack-neutron  18:20
<salv-orlando> so I wonder what percentage of this time is the root wrap overhead. If you have 200 nodes, assuming at least 1 vm on each node, you will need 199 tunnels on each host.  18:20
<jaypipes> correct.  18:21
<openstackgerrit> enikanorov proposed a change to openstack/neutron: Fix race in get_network(s) in OVS plugin  https://review.openstack.org/63918  18:21
*** markmcclain has quit IRC  18:21
<salv-orlando> and if setting up the tunnels takes 3600 seconds in total, that is about 18 seconds per tunnel, which is a lot. It would be great to measure how much of that is the root wrap overhead.  18:22
<salv-orlando> Because I've also been chasing failures due to ovs-vsctl taking a lot of time, but in my case switching to OVS 2.0 solved everything.  18:22
<jaypipes> salv-orlando: but if you use the batched version of the ovs-vsctl CLI tool, you do a singular transaction within OVS to create multiple ports... so you get both the speedup of removing the multiple rootwrap calls as well as the speedup from having a single call to OVS...  18:24
<salv-orlando> jaypipes: while I agree on the former, I tried the latter and did not get much improvement, because the cli tool makes a distinct ovsdb call for each command you pass on the command line  18:25
<jaypipes> salv-orlando: oh? I did not realize that... that is certainly not clear from the man pages.  18:26
<salv-orlando> the man pages are from the user perspective, and yes that's a single transaction.  18:26
<salv-orlando> but the interactions with the kernel driver are not atomic  18:27
<salv-orlando> I mean sorry… they are atomic  18:27
<salv-orlando> but they're passed one at a time.  18:27
<salv-orlando> anyway - I noticed from the code that your bottleneck appears to be in tunnel_sync  18:28
<salv-orlando> which is executed in the first iteration of rpc_loop  18:28
<jaypipes> k  18:28
<salv-orlando> is that correct?  18:28
*** layer427expert has quit IRC  18:28
<jaypipes> salv-orlando: tunnel_sync was one of the places, yes... the others were the methods that began with setup_xxx() in __init__().  18:31
<salv-orlando> jaypipes: right. From my analysis the setup_xxx methods should have a fixed execution time (i.e.: no loops there), but setup_tunnel_sync might take a long time, unless l2_population is enabled, which you have. So I'm still puzzled.  18:32
*** markwash has joined #openstack-neutron18:32
<sdague> salv-orlando: any idea if ovs 2.0 is in the cloud archive for ubuntu?  18:33
<salv-orlando> I did not find it last time I checked; I installed it from source  18:34
*** mlavalle has quit IRC18:35
*** markmcclain has joined #openstack-neutron18:37
*** layer427expert has joined #openstack-neutron18:37
*** networkstatic has quit IRC18:38
*** chandankumar has quit IRC18:38
*** nati_ueno has joined #openstack-neutron18:39
<salv-orlando> jaypipes: another thing to check with people more expert than me on tunnelling in the ovs agent is whether destroying and recreating the tunnel bridge at each restart is really needed  18:40
<jaypipes> salv-orlando: indeed, it probably isn't, but because of the use of --existing, the code is executing it anyway.  18:41
*** jroovers has joined #openstack-neutron  18:41
*** layer427expert has quit IRC  18:42
<salv-orlando> yeah, but to me it seems the code is actually destroying the bridge and recreating it. Which means that it will take a lot of time to recreate ports which are exactly as they were before, and also will cause a data plane outage, which is something that should always be avoided.  18:44
<salv-orlando> I will check this probably tomorrow, I don't have any multi-host environment right now.  18:44
*** layer427expert has joined #openstack-neutron18:45
*** nati_ueno has quit IRC18:47
*** jroovers has quit IRC18:48
*** nati_ueno has joined #openstack-neutron18:49
*** networkstatic has joined #openstack-neutron18:49
*** SumitNaiksatam has joined #openstack-neutron18:49
*** nati_ueno has quit IRC18:54
<bjornar> Is neutron using the vxlan implementation in ovs or the kernel one?  18:55
*** thedodd has quit IRC  18:58
<mestery> OVS  18:58
<mestery> Although, with kernels 3.12 and OVS >= 2.0, the two are one and the same, save for the multicast support in the kernel version which OVS doesn't use.  18:58
*** markvoelker has quit IRC18:59
*** gizmoguy_ has quit IRC18:59
*** sc68cal has quit IRC18:59
<bjornar> mestery, one and the same with 3.12? that means it uses the kernel implementation in 3.12+ and ovs >= 2.0?  19:00
*** evgenyf has joined #openstack-neutron  19:00
<mestery> bjornar: Yes. If you have OVS >= 2.0, and linux kernel >= 3.12, the same API calls for VXLAN encap/decap are used by the OVS VXLAN code and the kernel VXLAN ports.  19:01
<bjornar> Because I was playing a bit with it the other day (kernel with vxlan and ovs >= 2.0) and I don't really understand the ovs implementation (when it comes to option remote_ip and key .. as opposed to the kernel's much more useful multicast group and vni...  19:01
*** mili_ has joined #openstack-neutron  19:01
<bjornar> mestery, the ovs version is more of a "ptp-vxlan" as long as you don't have an openflow controller, am i right?  19:02
<mestery> bjornar: Yes, that's correct.  19:03
*** clev has quit IRC  19:03
<bjornar> So is it planned to make the ovs implementation work "correctly", that is, out of the box without an openflow controller?  19:03
*** markmcclain has quit IRC  19:04
<mestery> bjornar: Personally, I see VXLAN as useful with and without multicast, depends on your comfort level with multicast and the scaling of your controller with the lack of multicast.  19:04
<mestery> bjornar: I don't understand what you mean by correctly.  19:04
*** mili_ has quit IRC  19:04
<bjornar> mestery, correctly as in works with multicast  19:04
<bjornar> ..and keeps an "arp" cache  19:05
*** markmcclain has joined #openstack-neutron  19:05
*** markvoelker has joined #openstack-neutron  19:06
*** gizmoguy_ has joined #openstack-neutron  19:06
*** sc68cal has joined #openstack-neutron  19:06
*** mili_ has joined #openstack-neutron  19:06
<bjornar> say I have three vms in a tenant network running on 3 different nodes, with kernel vxlan I could add an interface with vni=3 and some multicast address to each bridge, and everything would work  19:06
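The kernel-native setup bjornar describes looks roughly like this on each node; the VNI, multicast group, NIC and bridge names are illustrative assumptions, not values from this log:

    # assumed example: multicast-learning VXLAN device attached to a Linux bridge
    ip link add vxlan3 type vxlan id 3 group 239.1.1.3 dev eth1
    brctl addif br-tenant vxlan3
    ip link set vxlan3 up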
<mestery> bjornar: There has been some interest in that, yes. But no one has submitted patches upstream into OVS for that yet.  19:06
<bjornar> mestery, but couldn't the "patches" just use the kernel version and move the interface onto the bridge  19:07
<bjornar> I mean.. in its current state ovs-vxlan is worth "nothing" standalone with ovs  19:07
<mestery> bjornar: Yes, you could conceivably do that.  19:08
<mestery> bjornar: Worth nothing? How is being able to program tunnel interfaces from your OpenFlow controller worth nothing?  19:08
<bjornar> with option:multicast_grp:239.1.1.8 option:vni=123 .. it would be a killer  19:09
<mestery> bjornar: Through OVSDB and OpenFlow, you can control OVS completely from a remote controller. Seems like there might be some value there.  19:09
<bjornar> mestery, I said worth nothing _standalone_  19:09
*** mili_ has quit IRC  19:10
<mestery> bjornar: If you add with multicast, then I agree.  19:10
<bjornar> openflow and sdn are fine and all, but not everyone needs it  19:10
<mestery> bjornar: Choice is always good.  19:10
<bjornar> and vxlan doesn't "need" it ... with multicast ;)  19:10
*** mili_ has joined #openstack-neutron  19:10
*** mili_ has joined #openstack-neutron  19:11
*** mili_ has quit IRC  19:11
<bjornar> mestery, but thinking the openflow way.. what are my options for a controller that will handle vxlan with openstack/neutron atm?  19:11
*** mili_ has joined #openstack-neutron  19:12
<mestery> bjornar: Ryu likely supports this, I haven't checked recently. OpenDaylight will support this when it releases in a few weeks. And VMware NSX/NVP supports this as well.  19:12
<bjornar> mestery, and it's part of openflow 1.3?  19:13
<mestery> bjornar: You would currently need to use OVSDB and OpenFlow combined. The tunnel stuff is not part of OpenFlow at the moment, though I have not looked at the latest 1.4 stuff recently.  19:14
<bjornar> mestery, ok.. could I check this out using odl from git atm?  19:15
<mestery> Follow my handy-dandy instructions here (http://www.siliconloons.com/?p=523), and pop over to #opendaylight-ovsdb with questions.  19:15
<mestery> :)  19:15
*** markmcclain has quit IRC  19:15
*** markmcclain has joined #openstack-neutron  19:17
*** mlavalle has joined #openstack-neutron  19:18
<bjornar> mestery, will look at it.. just installed odl some hours ago..  19:19
<mestery> bjornar: Good luck and have fun!  19:19
<bjornar> mestery, so what is the remote_ip=flow ..?  19:20
<bjornar> what does "flow" resolve to?  19:20
<mestery> bjornar: Instead of pinning a single IP to a VXLAN port, you can set the remote IP used on a per-flow basis.  19:20
<bjornar> mestery, Can the ovs vxlan implementation even be called vxlan when it does not support multicast?  19:21
<mestery> bjornar: It allows ovs-vswitchd to maintain IP to port mappings, and pass that info down to the datapath for encap and use a single VXLAN port representation in the kernel.  19:21
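A rough sketch of the flow-based variant mestery is describing, with the tunnel endpoint chosen per flow rather than pinned to the port; the bridge name, MAC, IP, VNI and output port below are made-up placeholders:

    # assumed example: a single flow-keyed VXLAN port
    ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 \
        type=vxlan options:remote_ip=flow options:key=flow
    # a controller (or ovs-ofctl by hand) then picks the destination per flow:
    ovs-ofctl add-flow br-tun \
        "dl_dst=fa:16:3e:00:00:01,actions=set_field:192.0.2.12->tun_dst,set_tunnel:1001,output:1"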
<mestery> bjornar: I think it depends on your view of VXLAN. We may be getting philosophical now. :)  19:21
*** thedodd has joined #openstack-neutron19:29
<openstackgerrit> enikanorov proposed a change to openstack/neutron: Fix race in get_network(s) in OVS plugin  https://review.openstack.org/63918  19:30
*** larsks has quit IRC19:35
<bjornar> mestery, isn't it like "it depends on your view of TCP"? .. ;)  19:38
<mestery> bjornar: Heh :)  19:38
<bjornar> But I understand the interest of certain companies :) .. in vxlan not supporting multicast  19:38
<mestery> bjornar: I don't know of those interests of which you speak, I only know of people who won't touch multicast with a ten-foot pole because of previous experience with it.  19:39
<bjornar> multicast is fine on l3 with some decent routing  19:42
*** jlibosva has quit IRC19:45
<bjornar> mestery, any way to run some mininet/ovs/vxlan testing without the neutron parts?  19:49
*** markmcclain has quit IRC  19:49
<mestery> With ODL, yes. With ODL and Neutron, no.  19:50
<bjornar> So how would I go about doing this with odl?  19:51
*** mili__ has joined #openstack-neutron19:51
*** hermatize has joined #openstack-neutron19:52
*** markmcclain has joined #openstack-neutron19:53
*** mili_ has quit IRC19:53
<openstackgerrit> Aaron Rosen proposed a change to openstack/python-neutronclient: Combine debug and verbose commandline options  https://review.openstack.org/62469  19:54
<mestery> bjornar: I think you've crossed into the need to pop over to #opendaylight and/or #opendaylight-ovsdb now. :)  19:55
<bjornar> yeah  19:56
<hermatize> Hello - I would like to start contributing code to OpenStack, can anyone point me in the right direction?  19:58
<hermatize> I'm a network/security guy during the day, learning software dev at night :-)  19:58
*** jgrimm has quit IRC20:00
*** dave_tucker_zzz is now known as dave_tucker20:07
*** markmcclain has quit IRC20:09
*** hermatize has quit IRC20:09
<peristeri> hermatize: you could start looking at https://bugs.launchpad.net/neutron  20:10
*** markmcclain has joined #openstack-neutron20:11
*** jprovazn has quit IRC20:14
*** mrsnivvel has quit IRC20:14
*** mrsnivvel has joined #openstack-neutron20:15
*** dave_tucker is now known as dave_tucker_zzz20:20
*** zzelle has quit IRC20:25
<openstackgerrit> Aaron Rosen proposed a change to openstack/neutron: Bump api_workers from 0 to 4  https://review.openstack.org/59787  20:29
*** hermatize has joined #openstack-neutron20:35
*** hermatize has quit IRC20:37
*** zzelle_ has joined #openstack-neutron20:38
<bjornar> mestery, what kind of tunneling does the "handy-dandy instructions" use?  20:39
*** hermatize has joined #openstack-neutron20:41
*** zzelle_ has quit IRC20:42
*** networkstatic has quit IRC20:43
*** zzelle_ has joined #openstack-neutron20:43
<openstackgerrit> Carl Baldwin proposed a change to openstack/neutron: Remove unnecessary call to get_dhcp_port from DeviceManager  https://review.openstack.org/63492  20:44
*** zzelle__ has joined #openstack-neutron20:46
*** zzelle__ has quit IRC20:46
*** SumitNaiksatam has quit IRC20:48
*** zzelle_ has quit IRC20:49
*** RajeshMohan has quit IRC20:49
<mestery> bjornar: By default, GRE, but there are instructions on changing that to VXLAN I can send as well.  20:50
*** RajeshMohan has joined #openstack-neutron  20:50
*** zzelle_ has joined #openstack-neutron  20:50
<bjornar> mestery, would be nice.. did not see any reference to either..  20:50
* mestery nods.  20:50
*** zzelle_ is now known as zzelle  20:51
*** zzelle has quit IRC20:52
*** zzelle has joined #openstack-neutron20:52
*** layer427expert has quit IRC20:55
<openstackgerrit> Carl Baldwin proposed a change to openstack/neutron: Refactor to remove _recycle_ip  https://review.openstack.org/64724  20:58
*** zzelle has quit IRC21:00
*** zzelle has joined #openstack-neutron21:01
*** layer427expert has joined #openstack-neutron21:02
*** thedodd has quit IRC21:05
*** majopela has quit IRC21:05
<bjornar> mestery, did you have the link? or do you need to write it?  21:07
<mestery> bjornar: Let's move this over to #opendaylight-ovsdb, I know some folks there that can help you with that ASAP.  21:08
*** yfried has quit IRC21:13
*** majopela has joined #openstack-neutron21:20
*** clev has joined #openstack-neutron21:26
*** dave_tucker_zzz is now known as dave_tucker21:29
*** banix has quit IRC21:31
*** evgenyf has quit IRC21:31
*** sputnik13 has joined #openstack-neutron21:45
*** networkstatic has joined #openstack-neutron21:46
*** beagles has quit IRC21:50
*** layer427expert has quit IRC21:59
*** majopela has quit IRC21:59
*** peristeri has quit IRC22:00
<openstackgerrit> Carl Baldwin proposed a change to openstack/neutron: Refactor to remove _recycle_ip  https://review.openstack.org/64724  22:07
*** layer427expert has joined #openstack-neutron22:10
*** thedodd has joined #openstack-neutron22:10
*** ashaikh has quit IRC22:12
*** majopela has joined #openstack-neutron22:12
*** hermatize has quit IRC22:15
*** WackoRobie has quit IRC22:15
*** h6w has joined #openstack-neutron22:16
*** majopela has quit IRC22:17
*** armax has left #openstack-neutron22:17
<openstackgerrit> Carl Baldwin proposed a change to openstack/neutron: Simplify ip allocation/recycling to relieve db pressure  https://review.openstack.org/58017  22:18
*** hermatize has joined #openstack-neutron22:19
*** ijw has quit IRC22:22
*** ijw has joined #openstack-neutron22:23
*** markwash has quit IRC22:27
*** sputnik13 has quit IRC22:29
*** bjornar has quit IRC22:31
*** markmcclain has quit IRC22:32
*** rwsu has quit IRC22:33
<h6w> Is it possible to have my instances allocated IPs from the same range as my physical network?  I thought that was what Neutron-VLAN was for.  But it seems to be NATing instead of sharing.  22:34
<h6w> Which range is the correct one to be overlapping with my physical network?  Public or Floating IP, or both?  22:34
<h6w> Sorry, I mean NATing instead of bridging.  22:35
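One common answer to h6w's question, though it is never given in this log, is a provider network mapped straight onto the physical VLAN, so instances draw addresses from the physical range with no router/NAT in between. A sketch with made-up names, VLAN ID and addresses, assuming the physical_network label is already configured in the plugin:

    # assumed example: shared provider network bridged to the physical VLAN segment
    neutron net-create physnet-vlan100 --shared \
        --provider:network_type vlan \
        --provider:physical_network physnet1 \
        --provider:segmentation_id 100
    neutron subnet-create physnet-vlan100 192.0.2.0/24 --name vlan100-subnet \
        --gateway 192.0.2.1 --allocation-pool start=192.0.2.50,end=192.0.2.99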
*** clev has quit IRC22:38
<openstackgerrit> Amir Sadoughi proposed a change to openstack/python-neutronclient: Added --source-port-range-min, --source-port-range-max  https://review.openstack.org/62130  22:38
*** rwsu has joined #openstack-neutron22:40
*** hermatize has quit IRC22:44
*** ashaikh has joined #openstack-neutron22:44
*** layer427expert has quit IRC22:50
*** rwsu has quit IRC22:59
*** ijw has quit IRC23:01
<openstackgerrit> Berezovsky Irena proposed a change to openstack/neutron: Add update from agent to plugin on device up  https://review.openstack.org/53609  23:09
*** zzelle has quit IRC23:09
*** thedodd has quit IRC23:11
<anteaya> okay group this is markmcclain's bug and could be considered the top gate bug right now: https://bugs.launchpad.net/tempest/+bug/1253896  23:11
<openstackgerrit> Carl Baldwin proposed a change to openstack/neutron: Simplify ip allocation/recycling to relieve db pressure  https://review.openstack.org/58017  23:12
<anteaya> 83 hits in the last 7 days excluding experimental jobs  23:12
*** rwsu has joined #openstack-neutron  23:12
<anteaya> who has the ability to lend a hand?  23:12
<openstackgerrit> Carl Baldwin proposed a change to openstack/neutron: Simplify ip allocation/recycling to relieve db pressure  https://review.openstack.org/58017  23:26
<openstackgerrit> Carl Baldwin proposed a change to openstack/neutron: Refactor to remove _recycle_ip  https://review.openstack.org/64724  23:27
<openstackgerrit> Armando Migliaccio proposed a change to openstack/neutron: Rename nicira configuration elements to match new naming structure  https://review.openstack.org/64747  23:31
*** mlavalle has quit IRC23:33
*** bjornar has joined #openstack-neutron23:56
*** markwash has joined #openstack-neutron23:58
