05:30:15 #startmeeting taas
05:30:16 Meeting started Wed Jul 27 05:30:15 2016 UTC and is due to finish in 60 minutes. The chair is anil_rao. Information about MeetBot at http://wiki.debian.org/MeetBot.
05:30:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
05:30:19 The meeting name has been set to 'taas'
05:30:24 hi
05:30:26 Hello
05:30:28 hi
05:30:29 hi
05:30:57 #topic Performance measurement
05:31:12 link: http://lists.openstack.org/pipermail/openstack-dev/attachments/20160727/04c7bcbc/attachment-0001.pdf
05:31:32 Thanks Kaz for providing the new results.
05:31:38 There was a mistake in the graph I showed at last week's IRC meeting.
05:31:49 The attached file is a corrected edition.
05:32:40 I was reading on a blog that OVS is able to do around 260K pps.
05:33:00 Apparently this is a new version with lots of perf enhancements.
05:33:39 could you share the URL of the blog?
05:33:54 sure, give me a minute
05:34:19 #link http://blog.ipspace.net/2014/11/open-vswitch-performance-revisited.html
05:34:26 was the vhost-net thread sharing the cpu with something cpu-consuming?
05:35:06 anil_rao: thank you
05:36:55 CPU pinning was enabled
05:37:48 kaz: so the cpu was dedicated to the vhost-net instance?
05:39:56 yamamot__: vhost-net runs on the host OS, so it was not dedicated, I think.
05:41:37 you mean the vcpu threads are pinned?
05:42:01 yamamot__: yes
05:42:42 do you have a record of which cpu was running which thread?
05:43:26 No, I do not.
05:44:51 If my understanding of the graph is correct, it appears that when mirroring is not enabled, iperf is able to drive 95K pps.
05:45:35 When mirroring is enabled you get 70K pps of production traffic + 70K pps of mirrored traffic.
05:45:50 anil_rao: yes
05:46:28 so when mirroring is off, the 95K limit comes either from iperf or from the VMs themselves, and not from OVS. is this correct?
05:46:46 the screenshot on p4 is for which case?
05:47:59 yamamot__: this is for the mirroring-enabled case.
05:48:44 anil_rao: yes, I guess
05:49:01 I am curious to know why the non-mirrored case tops off at 95K pps. Was vhost-net consuming 100% cpu in that case too?
05:51:38 I was logged out of the IRC chat a few minutes back. Did the blog link I sent out reach you all?
05:52:10 we know that vhost-net consumes 100% cpu, but we do not know the reason.
05:52:27 anil_rao: okay
05:52:40 soichi: Thanks
05:54:27 This is an interesting result. I will need some time to fully understand the behavior.
05:54:52 we will try to investigate
05:55:05 I mean, when mirroring is enabled we are essentially seeing 140K pps (70K production + 70K mirrored).
05:55:23 yes
05:56:08 So it seems to indicate that the 95K non-mirrored limit is perhaps a limit of the destination VM not being able to receive faster than that rate.
05:56:49 Hi, sorry, I forgot that today we had a meeting
05:57:01 Hi Reedip
05:57:06 reedip: hi
05:57:11 hi
05:57:31 going through the meeting logs...
05:58:40 Our lab is down because of an AC upgrade, so I will only have results for you all next week.
05:59:48 any more on this topic? otherwise we can move to the next one on the agenda.
06:00:11 hi
06:00:15 I guess the 95K limit comes from vhost-net on the source VM being CPU bound
06:02:39 we will make additional experiments and report the results
06:02:53 soichi: Thanks
06:03:00 let's go on to the next topic
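
The packet-rate reasoning above can be made concrete with a small sketch. The figures (95K pps without mirroring, 70K + 70K pps with mirroring) are the ones reported in the meeting; the script below is only an illustrative aggregation, not part of the measurement setup.

    # Illustrative only: aggregate the packet rates reported above to compare
    # the non-mirrored and mirrored cases. The figures come from the meeting
    # log; nothing here is measured by this script.
    REPORTED_PPS = {
        "no_mirror": {"production": 95_000, "mirrored": 0},
        "mirror": {"production": 70_000, "mirrored": 70_000},
    }

    def total_pps(case):
        """Total packets per second the host datapath (OVS + vhost-net) moves."""
        return case["production"] + case["mirrored"]

    for name, case in REPORTED_PPS.items():
        print("%-9s production=%6d pps, total through datapath=%7d pps"
              % (name, case["production"], total_pps(case)))

    # The mirrored case pushes ~140K pps through the host datapath, well above
    # the 95K seen without mirroring, which is consistent with the guesses in
    # the meeting that the 95K ceiling sits at the VM/vhost-net boundary
    # rather than inside OVS itself.
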
06:03:13 #topic Design of traffic isolation by using flow based tunneling (cont.)
06:04:00 Kaz found that the Neutron code was updated
06:04:28 soichi: In what manner?
06:05:13 I found that it is no longer possible to use ":taas" in the neutron database, because the new version of neutron has a strict type check on IP addresses.
06:06:38 So we need to consider a new method of discovering the IP address.
06:07:02 yes, we need to revise our proposal
06:08:47 we will submit a revised one when we find a good idea.
06:09:05 what's wrong with adding a column?
06:09:54 yamamot__: that could be an idea, thank you
06:11:16 move to the next topic?
06:11:53 soichi, kaz: I was wondering if we can return the right IP when the destination replies to the broadcast packet(s).
06:13:17 broadcast packets need to be sent to the appropriate tunnel.
06:15:04 it means sending to the tunnels for production traffic
06:15:14 for taas we will always send via the TAAS_SEND_FLOOD table, right?
06:16:23 soichi, kaz: I will study this some more and we can discuss options in a later meeting.
06:16:54 before MAC learning, we send via the TAAS_SEND_FLOOD table
06:17:45 after MAC learning we will send via the TAAS_SEND_UCAST table
06:19:29 kaz: In today's exercise we are not really performing MAC learning. Instead, we just learn the location of the destination of a tap service instance. I am assuming that the same will be true with your flow based tunneling.
06:21:11 thank you for your comment. I would like to discuss this next week.
06:21:40 sure. Let's move to the next topic.
06:21:49 #topic Cannot capture the packet if the two VMs are on the same host
06:22:09 #link Media:ingress_packet-20160720-r1.png
06:24:04 Thanks for the detailed description of the problem. I am examining this issue... it looks like the port VLAN is not visible to the OVS flow table.
06:25:28 I have reproduced the issue on my setup. I'll report back when I have a better handle on this.
06:26:02 anil_rao: thank you
06:26:43 What I did notice is that traffic coming from outside the host gets properly tagged by the flows in OVS.
06:27:33 so this only affects ingress monitoring when the source is on the same host.
06:28:11 anil_rao: that's right
06:28:40 #topic Open Discussion
06:28:48 Let's vote for TaaS and related presentations.
06:28:56 I found Vinay's presentation, "Tapping in NFV cloud: A real world showcase by Swisscom, Ericsson and IXIA," in the Telecom/NFV Operations track.
06:29:09 +1
06:30:50 we have run out of time. Next week then?
06:30:57 #endmeeting
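
As an illustration of the flood-then-learn behaviour discussed at 06:15:14-06:17:45, the sketch below builds the two tables with plain ovs-ofctl flow specifications. It is not the tap-as-a-service driver's actual code: the bridge name (br-tap), the table numbers standing in for TAAS_SEND_UCAST and TAAS_SEND_FLOOD, the tunnel port numbers, and the learn_destination() helper are all placeholders assumed for the example.

    # Hedged sketch, not the real driver: mirrored traffic for an unknown
    # destination falls through to a flood table; once the tap-service
    # location is learned, a unicast entry short-circuits the flood.
    import subprocess

    BRIDGE = "br-tap"       # assumed tap bridge name
    SEND_UCAST_TABLE = 20   # placeholder for TAAS_SEND_UCAST
    SEND_FLOOD_TABLE = 21   # placeholder for TAAS_SEND_FLOOD
    TUNNEL_PORTS = [2, 3]   # placeholder OpenFlow port numbers of the tunnels
    DRY_RUN = True          # set to False only on a host that has the bridge

    def add_flow(spec):
        """Install one flow (or just print the command in dry-run mode)."""
        cmd = ["ovs-ofctl", "add-flow", BRIDGE, spec]
        if DRY_RUN:
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)

    # Unknown destinations fall through from the unicast table to the flood table.
    add_flow("table=%d,priority=0,actions=resubmit(,%d)"
             % (SEND_UCAST_TABLE, SEND_FLOOD_TABLE))

    # Flood table: replicate mirrored traffic to every candidate tunnel.
    add_flow("table=%d,priority=0,actions=%s"
             % (SEND_FLOOD_TABLE,
                ",".join("output:%d" % p for p in TUNNEL_PORTS)))

    def learn_destination(mac, port):
        """Once the tap-service host is known, send its traffic directly."""
        add_flow("table=%d,priority=1,dl_dst=%s,actions=output:%d"
                 % (SEND_UCAST_TABLE, mac, port))

    learn_destination("fa:16:3e:00:00:01", 2)   # example learned destination

The sketch only shows the table structure; wiring ingress classification into the unicast table and removing stale learned entries are left out.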