06:30:08 <anil_rao> #startmeeting taas
06:30:09 <openstack> Meeting started Wed Feb 17 06:30:08 2016 UTC and is due to finish in 60 minutes.  The chair is anil_rao. Information about MeetBot at http://wiki.debian.org/MeetBot.
06:30:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
06:30:13 <openstack> The meeting name has been set to 'taas'
06:30:15 <kaz> hi
06:30:33 <anil_rao> Hi
06:30:38 <vasubabu> hi
06:31:16 <fawadkhaliq> hello folks
06:31:25 <anil_rao> Looks like there is only one topic today
06:31:40 <anil_rao> Shall we get started
06:32:22 <anil_rao> #topic tap-service-create specify dest port or create destination port
06:33:24 <reedip> anil_rao: whats your take on this?
06:33:29 <anil_rao> I notice that the latest version of the spec is suggesting that tap-service-create create the destination port
06:33:46 <reedip> anil_rao : yup , thats right
06:33:55 <fawadkhaliq> I think both should be supported.
06:34:01 <fawadkhaliq> There is no harm, is there?
06:34:12 <anil_rao> Actually, that is not how it is currently supported, despite what the spec seems to suggest.
06:34:35 <anil_rao> One problem I see is this (let me explain)
06:34:57 <anil_rao> If tap-service-create were to instantiate the destination port, there would be no monitoring VM at that time.
06:35:49 <anil_rao> So the port doesn't really get bound to a compute node. Later, when the monitoring VM is instantiated and consumes that port, we will need a callback in order to place the flows for receiving traffic on the host where the destination port is bound
06:36:50 <anil_rao> Not saying that this can't be done, but the current implementation has taken the easy route of working with an existing port that is already being used by the monitoring VM.
06:37:47 <anil_rao> I am curious to know what workflow folks expect to see when tap-service-create instantiates the destination port.
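[Editor's note: for context, the "existing port" workflow the current implementation assumes could be sketched as below. This is illustrative only: the image/flavor names, net and port IDs are made up, and the exact TaaS CLI flag spellings (e.g. --port-id vs --port) varied across revisions, so check them against your client.]

```shell
# 1. Boot the monitoring VM first, so Nova creates and binds a port for it.
nova boot --image monitor-img --flavor m1.small --nic net-id=$NET_ID monitor-vm

# 2. Find the Neutron port attached to monitor-vm (e.g. via `neutron port-list`);
#    $MONITOR_PORT_ID is that port's UUID.

# 3. Point the tap-service at the already-bound port...
neutron tap-service-create --name ts1 --port-id $MONITOR_PORT_ID

# 4. ...and mirror a source port into it.
neutron tap-flow-create --name tf1 --tap-service ts1 --port $SRC_PORT_ID --direction BOTH
```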
06:37:53 <fawadkhaliq> anil_rao: makes sense. We will probably have to support this anyway later because an operator may attach/detach VM ports at any time.
06:38:30 <fawadkhaliq> since the functionality is not there, no worries. From a design perspective, we keep it in and track the lack of support via a bug report maybe and a release note?
06:39:00 <yamamoto> anil_rao: do you mean the current impl is relying on the VM being launched beforehand?
06:39:18 <anil_rao> yamamoto: Yes
06:39:47 <yamamoto> it sounds like a bug
06:39:47 <anil_rao> I do not fully understand what it means to create a tap-service instance before the monitoring VM is instantiated.
06:40:27 <anil_rao> Ideally I'd like to see multiple tap-service instances sending traffic to the same destination port (and thereby to the same monitoring VM)
06:41:01 <anil_rao> How can we achieve this if every tap-service instance creates its own destination port?
06:42:01 <fawadkhaliq> Since TaaS interfaces with the Neutron port resource, dependency on VM presence is really outside the scope and it will be hard to enforce such a workflow.
06:42:47 <fawadkhaliq> From user perspective, I don't see how TaaS can enforce it at the code level
06:42:48 <anil_rao> Well, we cannot deliver traffic to a port that is not consumed by some entity. What am I missing here?
06:42:52 <yamamoto> anyway, let's concentrate on finishing "the easy route" right now
06:43:01 <fawadkhaliq> So lets say..
06:43:26 <fawadkhaliq> If a user creates a port (not attached to any VM) and then creates a TaaS service, we will still have the same problem
06:43:56 <anil_rao> We can fail tap-service-create if the port is not bound to some host. Just a thought
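[Editor's note: anil_rao's suggestion to fail tap-service-create on an unbound port could look roughly like the sketch below in the plugin's validation path. It assumes Neutron's `binding:host_id` port attribute; the function name is hypothetical and not part of the actual TaaS code.]

```python
# Hypothetical validation sketch: reject a tap-service whose destination
# port is not bound to any host (i.e. no monitoring VM consumes it yet).

def validate_destination_port(port):
    """Return the bound host name, or raise if the port is unbound."""
    host = port.get('binding:host_id')
    if not host:
        raise ValueError(
            "tap-service destination port %s is not bound to any host"
            % port.get('id'))
    return host
```

Under this check, tap-service-create against a bound port proceeds, while a port with an empty `binding:host_id` is rejected before any flows are programmed.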
06:44:11 <yamamoto> fawadkhaliq: right
06:44:55 <fawadkhaliq> anil_rao: agree and understand the limitation. Do you think from API design perspective we should block it?
06:45:14 <anil_rao> My question is this: someone creates a tap-service instance, there is no consumer of the destination port, and tap-flows are attached to it. Where and how do we move packets to the destination port?
06:45:46 <fawadkhaliq> anil_rao: in the reference implementation, we can return an error from the TaaS backend (agent etc).
06:46:30 <fawadkhaliq> anil_rao: as you mentioned, the requirement would be to make sure the port is bound to something. So an error can be raised until this is supported by having update_port implemented as well.
06:46:34 <anil_rao> We can return an error but I want to know what folks feel about this situation. Do folks agree that we need a consumer of the destination port for the tap-service to be functional
06:47:21 <reedip> anil_rao : if we want to limit the use-cases for Mitaka Release, I think we should have a consumer of the port for Tap-service to be operational
06:47:54 <anil_rao> What about the long term solution. I wanted to understand what it means for a (floating) destination port to be associated with a tap-service instance.
06:48:04 <reedip> fawadkhaliq, yamamoto: what you are saying is correct, but we can update it in the next iteration of TaaS, keeping the current behavior, sort of a limited release.
06:48:27 <fawadkhaliq> reedip: anil_rao sure, no issues on what we have right now :-)
06:49:05 <yamamoto> file a bug, document the limitation for now.  i guess it can be fixed by using the existing handle_port callback
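[Editor's note: yamamoto's handle_port idea could be shaped roughly like the sketch below: defer flow programming until the destination port actually gets bound. All class and method names here are hypothetical stand-ins, and a Python list stands in for the OVS flows a real agent would program.]

```python
# Sketch: lazily install mirror flows from a port-binding callback
# instead of requiring the monitoring VM at tap-service-create time.

class TaasAgentSketch:
    def __init__(self):
        self.pending = {}   # port_id -> tap_service_id, awaiting binding
        self.flows = []     # stand-in for flows programmed into OVS

    def create_tap_service(self, tap_service_id, port):
        if port.get('binding:host_id'):
            self._install_flows(tap_service_id, port)
        else:
            # No consumer yet: remember the request and wait for binding.
            self.pending[port['id']] = tap_service_id

    def handle_port(self, port):
        # Invoked when a port becomes bound, e.g. a monitoring VM boots.
        tap_service_id = self.pending.pop(port['id'], None)
        if tap_service_id and port.get('binding:host_id'):
            self._install_flows(tap_service_id, port)

    def _install_flows(self, tap_service_id, port):
        # Real code would program OVS on the host in port['binding:host_id'].
        self.flows.append((tap_service_id, port['binding:host_id']))
```

Until handle_port fires, no flows exist, which also matches the "packets are just dropped" semantics discussed below.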
06:49:17 <fawadkhaliq> yamamoto: +1
06:49:51 <fawadkhaliq> thank you openstackstatus ;-)
06:50:20 <anil_rao> yamamoto: Can you please describe the workflow for the case where the destination port is not claimed.
06:51:09 <yamamoto> my expectation for that situation is that packets are just dropped.
06:51:45 <fawadkhaliq> anil_rao: here is a use-case I can think of
06:53:39 <fawadkhaliq> A user creates a TaaS service with a particular port and it stays static. The operator chooses to move the port around because, let's say, there is an upgrade of the (destination) VM or a replacement has to be done. In this case, a simple detach and attach will work without changing anything in TaaS, and in the period when the port is not connected, packets will be dropped.
06:53:58 <anil_rao> yamamoto: We will then also need logic to ensure that we don't mirror (duplicate) packets. Dropping after duplicating can be very wasteful, especially if there are a lot of tap-flows associated with a tap-service
06:54:19 <fawadkhaliq> anil_rao: agree on that; an optimization will need to be added.
06:55:02 <yamamoto> anil_rao: agreed
06:55:15 <reedip> anil_rao : that is true when considering long term implications
06:55:23 <anil_rao> I am therefore not seeing the real need for supporting such a scenario. I would sincerely appreciate some reasoning behind going this route. I mean what do we gain from supporting a floating destination port
06:56:04 <anil_rao> Perhaps we can discuss this later, if necessary. :-)
06:56:32 <reedip> anil_rao: +1 ( for later discussion as this may be a major use-case )
06:56:47 <fawadkhaliq> the floating port is an internal concept anyway and has no relationship with the API at this point, right?
06:57:10 <fawadkhaliq> block can be implemented at the agent layer
06:57:18 <reedip> fawadkhaliq: I guess yes
06:57:46 <anil_rao> If we just look at the API definition that is correct. However, I would like to think that our API represents something more tangible, hence my question. Please pardon me if I am off base here
06:59:05 <anil_rao> There is another thing that I wanted to discuss.
07:00:11 <anil_rao> I created a multi-node DevStack setup but I am having problems getting it up. My controller node complains that ovs-ofctl is not found, although I don't intend to have the controller node be part of the dataplane. Does anyone have any ideas about what may be going on?
07:00:47 <anil_rao> My local.conf used to work with earlier DevStack versions but not anymore. :-(
07:01:14 <vasubabu> which build?
07:01:27 <anil_rao> Latest
07:01:49 <fawadkhaliq> anil_rao: I can share my local.conf, that works for me.
07:02:05 <vasubabu> i am using liberty
07:02:17 <anil_rao> fawadkhaliq: Thanks that would be great.
07:02:31 <fawadkhaliq> anil_rao: npp, will share shortly
07:03:01 <anil_rao> Please note that I typically use separate nodes for controller, network and compute since I like to keep the dataplane separate from the controller node.
07:03:51 <anil_rao> yamamoto: I have a question regarding the local.conf setup for enabling TaaS
07:05:47 <yamamoto> ?
07:06:30 <anil_rao> do the instructions in README.rst under "devstack" work for multi-node setups?
07:07:39 <yamamoto> taas_openvswitch_agent is only for the compute/network nodes
07:08:06 <vasubabu> README.rst should be updated
07:08:17 <yamamoto> and "taas" is for controller
07:08:29 <anil_rao> yamamoto: OK. Thanks. :-)
07:08:51 <yamamoto> otherwise i expect it works.  but i haven't tried by myself. :-)
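[Editor's note: putting yamamoto's split into local.conf terms, a multi-node layout might look like the fragment below. The enable_plugin URL and service names follow the TaaS README of the time; treat this as a sketch and verify against your checkout.]

```ini
# controller node local.conf
[[local|localrc]]
enable_plugin tap-as-a-service https://github.com/openstack/tap-as-a-service
enable_service taas

# compute/network node local.conf
[[local|localrc]]
enable_plugin tap-as-a-service https://github.com/openstack/tap-as-a-service
enable_service taas_openvswitch_agent
```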
07:09:17 <anil_rao> I should get past the DevStack setup issues tomorrow and then I'll have data to report back to the group.
07:10:08 <vasubabu> anil_rao: i have one question on taas standard tap flows
07:10:27 <anil_rao> vasubabu: Yes?
07:11:34 <vasubabu> In a multi-node setup, if we bring up an instance on a compute node, all standard tap flows created by taas_agent are getting deleted
07:11:53 <vasubabu> have you observed this issue?
07:12:33 <anil_rao> vasubabu: Can you kindly elaborate on "bringup instance on compute node"?
07:13:14 <vasubabu> just spawn instance
07:14:07 <anil_rao> vasubabu: I have not observed this but I can surely verify this particular case as soon as my new setup is available.
07:14:27 <vasubabu> it is happening with the 1st instance
07:14:33 <kaz> I found this.
07:15:39 <kaz> I have posted on gerrit.
07:15:55 <yamamoto> vasubabu: it sounds like a known "agent uuid cookie" bug
07:16:47 <kaz> yamamoto: yes
07:17:17 <yamamoto> https://bugs.launchpad.net/neutron/+bug/1525775
07:17:17 <openstack> Launchpad bug 1525775 in neutron "When ovs-agent is restarted flows creatd by other than ovs-agent are deleted." [Undecided,In progress] - Assigned to SUZUKI, Kazuhiro (kaz-k)
07:18:19 <vasubabu> ok, is there any workaround for this?
07:18:46 <vasubabu> currently I am creating all the flows again using ovs-ofctl
07:20:22 <kaz> I posted a workaround for it, but I cannot find the link.
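[Editor's note: the manual recovery vasubabu describes, re-creating the flows with ovs-ofctl after the ovs-agent restart wipes them, would look something like the sketch below. The bridge name, match fields, and cookie value are purely illustrative; the real values have to come from your own flow dump taken before the restart.]

```shell
# Inspect which flows survived the ovs-agent restart.
sudo ovs-ofctl dump-flows br-tap

# Re-add the missing TaaS flows by hand (illustrative match/actions only).
sudo ovs-ofctl add-flow br-tap "cookie=0xdead,table=0,priority=1,in_port=2,actions=output:3"
```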
07:21:57 <anil_rao> Any other topics for discussion?
07:22:11 <anil_rao> #topic open
07:22:36 <fawadk> sorry I missed a few minutes; what's the plan of action with the spec, given it has been updated to its original state?
07:23:30 <yamamoto> anil_rao: can you consider to triage bugs or give a permission to someone?
07:23:37 <vasubabu> anil_rao: what if a user triggers migration of an instance attached to a tap-service/tap-flow port?
07:23:43 <anil_rao> fawadk: I think one item that needs correction there is the issue of the destination port that we discussed today.
07:24:09 <fawadk> anil_rao: okay, let's take the discussion on the spec for that.
07:24:21 <anil_rao> yamamoto: Vinay and I will send email to the mailing list with recommendations for some more cores. I think that will help speed things up.
07:24:47 <anil_rao> We should get this started in the next day or so.
07:25:04 <fawadk> anil_rao: great, thanks
07:25:40 <yamamoto> anil_rao: LP bugs permission doesn't need to be tied to core reviewers.
07:26:27 <anil_rao> yamamoto: Sure. I agree. Feel free to triage the bugs. I'll do my part too.
07:26:46 <yamamoto> i don't have a permission to change priority etc
07:27:13 <anil_rao> We'll try to get that sorted out as soon as possible.
07:27:19 <yamamoto> anil_rao: thank you
07:27:52 <anil_rao> does anyone have anything new to share regarding the big-tent discussion?
07:28:10 <reedip> anil_rao: I think, as the project is owned by vnyad, it might be difficult to triage the bugs
07:28:38 <anil_rao> reedip: We'll get that sorted out.
07:29:30 <yamamoto> anil_rao: the stadium discussion is still in progress.   https://review.openstack.org/#/c/275888/
07:29:57 <anil_rao> yamamoto: Thanks. We should all keep an eye out for that discussion
07:30:03 <reedip> yamamoto: however, this has been merged : https://review.openstack.org/#/c/278025/3
07:30:09 <anil_rao> We have run out of time. Thanks all.
07:30:24 <yamamoto> reedip: sure
07:30:27 <anil_rao> Lets continue next week.
07:30:31 <yamamoto> bye
07:30:37 <reedip> tc
07:30:38 <anil_rao> #endmeeting