05:31:15 #startmeeting taas
05:31:17 Meeting started Wed Sep 7 05:31:15 2016 UTC and is due to finish in 60 minutes. The chair is anil_rao. Information about MeetBot at http://wiki.debian.org/MeetBot.
05:31:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
05:31:20 The meeting name has been set to 'taas'
05:31:23 hi
05:31:26 hi
05:31:27 Hi
05:32:04 #topic Performance measurement (progress report)
05:33:10 #link: https://wiki.openstack.org/w/images/2/22/Increasing_Source_VMs-20160906.png
05:33:14 I uploaded a document about performance measurement last week.
05:33:35 I guess the cause is that the softirqs are unbalanced, concentrated on one physical CPU.
05:34:28 So we tried to balance the softirqs among CPUs.
05:34:58 kaz: That is interesting to see.
05:35:23 It seems that the received packet rates without mirroring have increased.
05:35:31 i guess because of softirq balancing.
05:37:10 Can you please clarify what you mean by "when source VMs are increased"
05:38:17 please see last week's slide, page 1
05:38:43 #link: https://wiki.openstack.org/w/images/7/74/Increasing_Source_VMs-20160831.png
05:39:24 this means that the number of source VMs is increased while the number of destination VMs and monitor VMs is fixed.
05:39:51 Thanks.
05:40:13 kaz: so overall the total number of flows being monitored is increased
05:40:39 yes
05:40:51 So, if I am reading this right, when mirroring is enabled, we halve the throughput of the receiving VM but send at the same rate to the monitor VM.
05:41:57 yes, i think so
05:42:00 anil_rao: looks like
05:44:29 last week, i got several valuable comments from anil
05:44:42 1) it would be better to also measure the TCP case
05:45:37 2) it would be better to use the SIPp benchmark, too
05:46:29 soichi: I am not sure if those cases are better but they would serve to highlight other aspects. :-)
05:46:48 okay, i see
05:48:38 kaz: I did not fully understand the last (2nd) bullet item below the graph in slide #4.
05:49:34 Compared to the results in last week's graph, both cases (with and without mirroring) have improved after IRQ balancing.
05:55:22 Without mirroring, the receiving VM is getting between 200K and 250K pps. With mirroring it gets between 100K and 130K pps, but the monitor VM also receives at the same rate.
05:56:17 Both the receiving VM and the monitor VM are on the same host, so we are essentially dealing with the same volume of traffic, split between 2 VMs.
05:56:42 Shouldn't this be expected?
05:57:30 +1
05:58:24 sorry, i don't know why, yet.
05:59:52 What iperf is doing is maxing out the bandwidth (for any given packet size). So once we have reached that point without mirroring and then turn on mirroring, we can expect the performance to be halved.
06:00:25 +1
06:00:57 It would be interesting to drive the receiving VM at say 100K pps without mirroring and then turn on mirroring. In that case we should expect no change in the performance of the receiving VM.
06:01:30 i think so, too
06:01:37 The receiving VM should continue to get 100K pps but the monitor VM should get the same rate too.
06:01:50 sure
06:02:15 I will try.
06:02:35 Thanks kaz.
06:02:44 thanks
06:03:38 Looking at last week's result I see the same behavior there too. I.e. without IRQ balancing we were still seeing the case where, when mirroring was turned on, the receiving VM + monitor VM together were getting the same rate as just the receiving VM without mirroring.
06:04:07 IRQ balancing has definitely helped improve the overall host throughput.
06:04:30 yes
06:04:37 yeah
06:05:12 These are good results!
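(The log above does not record how the softirq load was actually spread across CPUs; irqbalance, explicit IRQ affinity, or Receive Packet Steering are all common options. The sketch below shows only the RPS variant, and the interface name and CPU mask are illustrative assumptions, not values from the meeting.)

```python
#!/usr/bin/env python3
"""Minimal RPS sketch: let several CPUs share receive-side softirq work.

Assumptions (not from the meeting): traffic arrives on eth0 and
CPUs 0-3 should share the packet-processing work. Requires root.
"""
import glob

IFACE = "eth0"     # hypothetical NIC carrying the iperf traffic
CPU_MASK = "f"     # hex bitmask: CPUs 0-3

# Writing the mask to rps_cpus for every RX queue tells the kernel to
# steer received packets (and their softirq processing) to those CPUs.
for path in glob.glob(f"/sys/class/net/{IFACE}/queues/rx-*/rps_cpus"):
    with open(path, "w") as f:
        f.write(CPU_MASK)
    print(f"{path} <- {CPU_MASK}")
```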
06:05:31 thank you
06:06:28 To actually demonstrate the overhead of monitoring, it might be better not to saturate the system's bandwidth limit, i.e. we keep enough room for the extra volume generated by mirroring. This way we should be able to show that mirroring doesn't affect the receiving VM (or at least that is the goal).
06:06:28 hi
06:07:00 reedip: Hi
06:07:05 reedip: hi
06:07:13 sorry, was late, reading up the logs
06:08:03 anil_rao: agree
06:08:14 anil_rao: +1
06:09:40 Here is a proposal for the test.
06:09:54 Compute the highest throughput for the receiving VM.
06:10:17 Send at less than half that rate to the receiving VM (for multiple source VMs).
06:10:24 Enable mirroring.
06:10:47 See the difference in the rate at the receiving VM and the monitor VM.
06:11:01 Expected result: No change to the receiving VM. Same rate at the monitor VM.
06:11:23 In reality there might be a little difference and we should report that.
06:12:15 OK, i will try that.
06:12:52 Thanks kaz. I look forward to the results.
06:12:54 i guess we can see an increase in CPU usage on the host
06:13:21 after enabling mirroring
06:13:38 soichi: Yes. That would be nice to measure.
06:13:43 soichi: I think so
06:14:00 If we don't hit 100% we get the true overhead; otherwise the overhead is clipped and we don't get a worthwhile result.
06:14:19 +1
06:14:39 I agree
06:15:10 If folks are interested, we can discuss the TaaS bug related to ingress side mirroring.
06:15:24 sure
06:15:24 anil_rao: +1
06:15:28 +1
06:15:36 #topic Open Discussion
06:16:18 I had sent out a mail to the Neutron mailing list with a detailed description of the problem, the root cause, and a proposal to move forward.
06:17:10 In summary, given the way OVS treats VLAN tagged ports on a host, we don't have any options left for solutions completely within the scope of TaaS.
06:18:00 We will need the core Neutron OVS driver to explicitly tag VLAN ids for packets coming in to br-int from the 'instance' ports.
06:18:15 +1
06:18:47 I am prototyping this solution and will report back to the mailing list when I have a working version.
06:18:59 anil_rao: Can we handle this specific case by not forwarding the mirrored traffic to br-tap but handling it in br-int?
06:19:25 it will be a crude solution but might work
06:20:03 vnyyad: We cannot avoid forwarding to br-tap because the mirror destination may be on a different host.
06:20:30 Here is the basic problem.
06:20:36 hmmm... yes, true, realized it...
06:20:46 OVS does not tag packets flowing within the same host's br-int.
06:21:12 Neutron specifies that port MACs are unique only within a network.
06:21:39 This means that it is (technically) possible for two ports on different networks but on the same host to have the same MAC.
06:21:50 yes
06:22:15 If these two networks belong to different tenants, TaaS would have really broken tenant isolation because we would leak traffic of one tenant to another.
06:22:34 yes
06:22:40 +1
06:22:49 yes... a thin chance of happening but nevertheless it can happen
06:23:06 vnyyad: Yes. :-(
06:24:20 My prototype involves having the Neutron driver explicitly add a VLAN tag (corresponding to the port) for all packets coming in via that port. After that TaaS works without any modification.
06:24:45 +1
06:25:08 This way our current solution for broadcast/multicast ingress traffic also works as is.
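(A toy illustration of the MAC-collision concern discussed above: Neutron only requires MAC uniqueness per network, so identifying mirrored traffic by source MAC alone is ambiguous on a host carrying two networks, while adding the per-network local VLAN id makes the key unique. The port records, MAC address, and VLAN ids below are made up for illustration.)

```python
# Two ports on different networks may legally share a MAC, so keying
# mirror classification on MAC alone can leak one tenant's traffic to
# another tenant's monitor VM.
ports = [
    {"id": "port-a", "network": "net-1", "mac": "fa:16:3e:00:00:01", "local_vlan": 3},
    {"id": "port-b", "network": "net-2", "mac": "fa:16:3e:00:00:01", "local_vlan": 4},
]

by_mac = {}
for p in ports:
    by_mac.setdefault(p["mac"], []).append(p["id"])
print(by_mac)          # {'fa:16:3e:00:00:01': ['port-a', 'port-b']} -> ambiguous

# Once packets entering br-int carry the network's local VLAN id, the
# (MAC, VLAN) pair identifies the port unambiguously.
by_mac_vlan = {(p["mac"], p["local_vlan"]): p["id"] for p in ports}
print(by_mac_vlan)     # each key maps to exactly one port
```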
06:25:34 any rationale why they don't tag? maybe it's an optimization
06:25:34 it sounds good
06:25:45 but this solution should be good to have
06:26:23 i have another topic
06:26:29 When OVS works in normal mode it operates as a legacy switch and just keeps track of ports and tags internally, without having to actually tag packets.
06:27:00 Neutron has also set br-int in legacy (or normal) mode for its typical operation. So everything seemed good until now.
06:27:21 ok
06:27:29 We are the first application that is trying to detect packets ingressing a VM's vNIC in br-int. However, I am sure there will be others soon.
06:27:43 for sure
06:28:37 anil_rao: would you please submit to the vBrownBag Tech Talks at Barcelona Summit? Submission deadline: Sep. 15th
06:28:38 Looks like we are running out of time. Any other topics?
06:28:38 in my understanding, the speakers will be anil, kaz, and reedip (3 min. each?)
06:28:53 soichi: I will do that tomorrow morning.
06:29:03 okay, thank you
06:30:32 We'll continue the discussion next week.
06:30:38 bye
06:30:42 bye
06:30:43 #endmeeting
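(One possible way to drive the rate-limited test kaz agreed to run: hold the offered load well below the receiving VM's measured maximum, then repeat with mirroring enabled and compare the rates at the receiving and monitor VMs. This is only a sketch; the address, rate, and duration are assumptions, not values from the meeting, and it uses plain iperf2 client syntax.)

```python
#!/usr/bin/env python3
"""Sketch of the proposed test: offer a fixed UDP load below saturation,
with and without mirroring, and compare receiver/monitor rates.

RECEIVER, RATE, and DURATION are hypothetical values.
Run an iperf server (iperf -s -u) on the receiving VM and the monitor VM.
"""
import subprocess

RECEIVER = "10.0.0.10"   # hypothetical address of the receiving VM
RATE = "50M"             # well under half of the measured maximum
DURATION = "60"          # seconds per run

def run_iperf_udp(label: str) -> None:
    """Offer a fixed UDP load to the receiver (iperf2 client flags)."""
    print(f"--- {label} ---")
    subprocess.run(
        ["iperf", "-c", RECEIVER, "-u", "-b", RATE, "-t", DURATION, "-i", "10"],
        check=True,
    )

# Step 1: baseline without mirroring.
run_iperf_udp("mirroring disabled")

# Step 2: enable mirroring out of band via the TaaS API, then repeat.
# Expected: the receiving VM still sees the same rate, and the monitor
# VM's iperf server reports roughly the same rate as the receiver.
run_iperf_udp("mirroring enabled")
```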