05:33:13 <anil_rao> #startmeeting taas
05:33:14 <openstack> Meeting started Wed Jul 20 05:33:13 2016 UTC and is due to finish in 60 minutes.  The chair is anil_rao. Information about MeetBot at http://wiki.debian.org/MeetBot.
05:33:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
05:33:17 <openstack> The meeting name has been set to 'taas'
05:33:21 <anil_rao> Hi
05:33:23 <soichi> hi
05:33:25 <kaz> hi
05:33:51 <anil_rao> Let's get started
05:33:59 <anil_rao> #topic Performance measurement
05:34:27 <anil_rao> #link http://lists.openstack.org/pipermail/openstack-dev/attachments/20160518/724a5f6d/attachment-0001.pdf
05:35:26 <kaz> we measured performance between VMs on a single host.
05:36:11 <kaz> Please see page 2 of the attached PDF file
05:36:18 <anil_rao> kaz: Sure.
05:36:46 <yamamoto> hi
05:36:53 <anil_rao> Hi
05:37:08 <kaz> The left graph shows received packets per second
05:37:44 <kaz> and the right graph shows the throughput
05:38:31 <kaz> The left graph shows that, compared with the case where port
05:38:43 <kaz> mirroring is disabled, the packets-per-second value is reduced
05:39:02 <kaz> when mirroring is enabled.
05:39:15 <kaz> What do you think?
05:39:33 <anil_rao> I have some comments ... Can we discuss?
05:39:41 <kaz> sure
05:40:07 <anil_rao> I am curious...how were you measuring the PPS and throughput in the Monitor VM?
05:41:44 <kaz> By using our analysis tool.
05:42:06 <kaz> Throughput was calculated from the pps.
05:42:35 <anil_rao> OK. And I am assuming you are doing the same for the Dst VM (or is that via iperf itself)?
05:42:35 <yamamoto> our analysis tool?
05:43:32 <soichi> yamamoto: developed at Fujitsu Lab.
05:43:55 <yamamoto> how does the tool measure pps?
05:44:27 <kaz> anil_rao: yes
05:45:03 <soichi> yamamoto: i guess it is just counting the number of received packets
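
For reference, a minimal sketch of that kind of measurement, deriving throughput from a received-packet count; the interface name and the packet size are placeholder assumptions, not details of the Fujitsu tool:

    import time

    # Minimal sketch: sample the receive-packet counter on an interface over a
    # one-second window and derive throughput from the observed pps.
    # "eth1" and packet_size_bytes=1470 are placeholder assumptions.
    def read_rx_packets(ifname="eth1"):
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(ifname + ":"):
                    # after the colon: rx_bytes rx_packets rx_errs ...
                    return int(line.split(":")[1].split()[1])
        raise ValueError("interface not found: " + ifname)

    def measure(ifname="eth1", interval=1.0, packet_size_bytes=1470):
        before = read_rx_packets(ifname)
        time.sleep(interval)
        after = read_rx_packets(ifname)
        pps = (after - before) / interval
        throughput_mbps = pps * packet_size_bytes * 8 / 1e6
        return pps, throughput_mbps

    if __name__ == "__main__":
        pps, mbps = measure()
        print("pps=%.0f  throughput=%.1f Mbps" % (pps, mbps))
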
05:45:51 <anil_rao> Looking at the left graph and its first data point,
05:46:24 <anil_rao> it appears that without monitoring, PPS is around 120k.
05:46:46 <kaz> anil_rao: The PPS on Dst VM was measured by using our tool
05:46:54 <anil_rao> I am guessing that is the highest that iperf was able to push between Src and Dst
05:47:32 <kaz> anil_rao: I think so.
05:49:55 <anil_rao> kaz: You all are guessing that this limit of 70k PPS is the result of vhost-net getting pegged at 100% CPU utilization
05:50:40 <kaz> I think so
05:51:24 <anil_rao> Did you happen to notice vhost-net CPU usage when mirroring was disabled? It would be nice to know whether the non-mirroring curve is limited by iperf or by vhost-net
05:51:27 <yamamoto> does the vhost thread eat 100% regardless of monitoring?
05:52:40 <kaz> yamamoto: yes
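
One way to confirm that on the host is sketched below, assuming psutil is installed and that the kernel names these threads "vhost-<pid>"; running it with mirroring on and off would show whether vhost-net, rather than iperf, is the bottleneck:

    import time
    import psutil

    # Sketch: report CPU utilization of vhost-net kernel threads while the
    # iperf test is running. Assumes Linux names the threads "vhost-<qemu pid>"
    # and that psutil is available.
    def vhost_cpu_usage(sample_seconds=5.0):
        vhost = []
        for p in psutil.process_iter():
            try:
                if p.name().startswith("vhost-"):
                    vhost.append(p)
            except psutil.NoSuchProcess:
                continue
        for p in vhost:
            p.cpu_percent(None)          # prime the per-process counter
        time.sleep(sample_seconds)       # sampling window
        return {p.pid: p.cpu_percent(None) for p in vhost}

    if __name__ == "__main__":
        for pid, pct in sorted(vhost_cpu_usage().items()):
            print("vhost thread %d: %.1f%% CPU" % (pid, pct))
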
05:55:36 <soichi> we are planning to measure in an environment where more VMs are deployed (on the same host) and the number of tap_flows is increased.
05:55:36 <anil_rao> The throughput for the non-mirror case also seems very low.
05:57:03 <anil_rao> If both the source and destination VMs are on the same host, their traffic should be restricted to just br-int.
05:57:34 <kaz> We will check it again
05:58:13 <anil_rao> A max of 900 bps for nearly 1500 byte packets seems very low.
05:59:15 <kaz> sorry, the unit of throughput is Mbps
05:59:46 <anil_rao> Yes, that is what I had guessed. :-)
06:00:06 <kaz> thank you
06:00:51 <anil_rao> So if we are seeing around 900 Mbps without mirroring and around 800 Mbps with mirroring, it doesn't look too bad.
06:01:33 <anil_rao> I am still curious why the PPS line with mirroring is so flat.
06:02:12 <anil_rao> Definitely looks like we have hit some upper bound ... perhaps maxed out the cpu.
06:03:45 <anil_rao> Since this is a single-host experiment, when mirroring is turned on we are mostly just incurring the cost of packet duplication; the tunneling cost is absent.
06:04:34 <soichi> yes
06:04:36 <anil_rao> We had a slight delay in our office. I am going to get the new hardware only later this week. I will set it up and try some experiments from my side.
06:04:59 <anil_rao> I'll get back with both single node and multi-node experiments.
06:05:11 <anil_rao> Sorry for the delay.
06:05:12 <soichi> okay, thank you
06:05:18 <soichi> no problem
06:05:54 <anil_rao> I'll also look more into vhost-net. This has me really interested now.
06:06:06 <anil_rao> Thanks for the experiments and data.
06:07:29 <anil_rao> Any other thoughts? Otherwise we can move to the next topic.
06:07:59 <soichi> let's move to the next topic
06:08:01 <kaz> sure
06:08:21 <anil_rao> #topic Design of traffic isolation by using flow based tunneling
06:08:38 <anil_rao> #link Media:flow_based_tunneling-20160720-r1.png
06:09:44 <kaz> We are designing a way to isolate production and mirror traffic in the underlay network by using flow-based tunneling
06:11:40 <soichi> The overview is shown on slide 1.
06:12:22 <soichi> we need to configure OVS to use flow-based tunneling, as shown on slide 2.
06:13:43 <soichi> Kaz found that he needs to modify not only TaaS but also Neutron, as shown on slide 4.
06:14:33 <anil_rao> Is slide 3 for production traffic or mirrored traffic?
06:15:21 <soichi> both
06:16:19 <soichi> the same algorithm is used for both production and mirror traffic
06:16:23 <yamamoto> why do you use local_ip="0"?
06:17:42 <kaz> I referred to the networking-ofagent source code.
06:18:37 <yamamoto> i vaguely remember it was necessary for the ovs version available at that time.  is it still necessary?
06:19:25 <kaz> Sorry, i did not check it
06:19:49 <yamamoto> i guess it's ok as far as it works though.
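
For context, the flow-based tunneling configuration referred to on slide 2 generally looks roughly like the sketch below: the tunnel port carries no fixed remote_ip, and each flow selects the tunnel destination and key itself. The bridge/port names, table number, VNI, MAC address, output port, and remote VTEP address are placeholder assumptions, and local_ip handling is deliberately left out because that is the open question above.

    import subprocess

    # Rough sketch of flow-based VXLAN tunneling in OVS. All names and numbers
    # (br-tun, vxlan-flow, table 22, VNI 100, 192.0.2.20, output:1, the MAC)
    # are placeholders, not values from the proposal.
    def sh(cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    # 1. Tunnel port whose remote IP and tunnel key are taken from each flow.
    sh(["ovs-vsctl", "--may-exist", "add-port", "br-tun", "vxlan-flow",
        "--", "set", "interface", "vxlan-flow", "type=vxlan",
        "options:remote_ip=flow", "options:key=flow"])

    # 2. A per-destination flow then selects the tunnel endpoint and VNI.
    sh(["ovs-ofctl", "add-flow", "br-tun",
        "table=22,priority=1,dl_dst=fa:16:3e:00:00:01,"
        "actions=set_field:192.0.2.20->tun_dst,set_tunnel:100,output:1"])
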
06:21:31 <soichi> Slide 6 shows how to discover the remote IP for mirror traffic.
06:23:21 <soichi> Kaz proposes a ":taas" suffix to identify the IP address
06:25:59 <yamamoto> soichi: it sounds like a bit of a hack to me
06:26:09 <anil_rao> The changes to the flows, both Neutron and TaaS, are quite minimal. I am guessing that the routing tables on the respective hosts will ensure that tunnel traffic gets out via the right NICs (based on their IP addresses).
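
A quick way to check that routing assumption on each host (a sketch; 192.0.2.20 stands in for a remote VTEP address) is to ask the kernel which device and source IP it would pick for the tunnel peer:

    import subprocess

    # Sketch: ask the routing table which NIC and source address would be used
    # to reach a given tunnel peer. 192.0.2.20 is a placeholder remote VTEP IP.
    def route_for(remote_ip="192.0.2.20"):
        out = subprocess.check_output(["ip", "route", "get", remote_ip])
        return out.decode()

    if __name__ == "__main__":
        # Expected output resembles: "192.0.2.20 dev eth2 src 192.0.2.10 ..."
        print(route_for())
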
06:29:15 <soichi> We have only 2 minutes, so let's continue the discussion offline and/or on next week's IRC
06:29:55 <anil_rao> soichi: Sure. Let me examine this some more and I'll provide some comments.
06:30:10 <anil_rao> Thanks for sharing the proposal. It is quite interesting.
06:30:13 <soichi> yes, please. thank you
06:30:31 <anil_rao> We are out of time. We'll continue next week folks.
06:30:39 <anil_rao> #endmeeting