14:01:16 <njohnston__> #startmeeting neutron_qos
14:01:16 <openstack> Meeting started Wed May 18 14:01:16 2016 UTC and is due to finish in 60 minutes.  The chair is njohnston__. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:21 <openstack> The meeting name has been set to 'neutron_qos'
14:01:21 <moshele> ajo_: hi
14:01:26 <njohnston__> #chair ajo_
14:01:27 <openstack> Current chairs: ajo_ njohnston__
14:01:30 <ajo_> o/ moshele njohnston_  :)
14:01:38 <moshele> njohnston__: hi
14:01:41 <njohnston__> ok, lets get started
14:01:42 <ajo_> thanks for chairing mr njohnston_ :)
14:01:49 <tmorin> hi everyone (i'll be mostly listening)
14:01:50 <njohnston__> #topic Announcements
14:01:56 <hichihara> hi
14:01:56 <njohnston__> I just want to note that we are now 2 weeks from N-1.  Tempus fugit!
14:02:07 <ajo_> really fugit....
14:02:24 <njohnston__> moving along smartly...
14:02:24 <njohnston__> #topic Late-Breaking News
14:02:30 <njohnston__> moshele was wondering if we remove openflow rules when a VM is deleted.  I did not check this, did anyone else?
14:03:10 <ajo_> njohnston_, I didn't; moshele, did you find it failing, or do you just think it's a possibility?
14:03:18 <ajo_> in any case, we shall verify and provide a bug report
14:03:21 <njohnston__> I plan to give it a go later today, but I have to recreate my testing environment.
14:03:40 <ajo_> moshele, can you comment what you said on #openstack-neutron so we have it registered here on the meeting log?
14:03:52 <moshele> ajo_: I don't know how to check ovs, but I saw that in the extension_manager we are passing port=None
14:04:08 <moshele> ajo_: sure
14:04:34 <moshele> did you test the ovs qos when deleting vm ? does it remove the openflow rules?
14:04:56 <njohnston__> I think we did, but my memory is hazy enough at this point that I want to recheck
14:05:10 <njohnston__> This one should be relatively easy to prove or disprove experimentally.
14:05:38 <moshele> so what I see is the in https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L451 we are getting port=None
14:05:43 <njohnston__> just compare the output of ovs-ofctl dump-flows pre and post VM deletion
14:05:56 <moshele> and I think  it is because we delete it in https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py#L786
14:06:04 <ajo_> ok, so let's get it tested, and report a bug if necessary,
14:06:42 <moshele> ok we will try it
14:06:42 <ajo_> if moshele's suspicions are right, we may need to introduce some sort of memory to keep the ofport/vif port, etc., or any detail necessary for cleanup
14:06:45 <njohnston__> #action Test ovs-ofctl dump-flows output pre and post VM deletion to make sure that QoS flows are removed. njohnston
14:07:04 <ajo_> #undo
14:07:05 <openstack> Removing item from minutes: <ircmeeting.items.Action object at 0xa035210>
14:07:10 <ajo_> #action njohnston Test ovs-ofctl dump-flows output pre and post VM deletion to make sure that QoS flows are removed.
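[Editor's note: a minimal sketch of the pre/post comparison described in the action item above — capture `ovs-ofctl dump-flows` output before and after deleting the VM, and check that no QoS-action flows survive. The flow strings and the choice of QoS markers below are illustrative assumptions, not output from a real environment.]

```python
def leftover_qos_flows(pre_dump, post_dump, markers=("set_queue", "mod_nw_tos")):
    """Return QoS-related flows that appear both before and after VM deletion."""
    def qos_flows(dump):
        # keep only flow lines whose actions mention a QoS-related marker
        return {line.strip() for line in dump.splitlines()
                if any(m in line for m in markers)}
    return qos_flows(pre_dump) & qos_flows(post_dump)

# Illustrative dumps: one normal flow plus one QoS flow before deletion,
# only the normal flow afterwards.
pre = """
 cookie=0x0, table=0, priority=1 actions=NORMAL
 cookie=0x0, table=0, in_port=5 actions=mod_nw_tos:64,NORMAL
"""
post = """
 cookie=0x0, table=0, priority=1 actions=NORMAL
"""
assert not leftover_qos_flows(pre, post)  # empty set: cleanup worked
```

If the returned set were non-empty, its members would be the stale flows to cite in the bug report.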
14:07:11 <ajo_> O:)
14:07:29 <njohnston__> moshele: If I verify that you are correct do you mind filing the bug with the proper references in the code?
14:07:32 <moshele> ajo_: we just encountered this because we wanted to write an l2 extension for ovs
14:07:49 <moshele> njohnston__: sure
14:08:11 <moshele> and on the delete port we got port=None
14:08:13 <njohnston__> #action moshele File bug report after njohnston verifies bug truthiness.
14:08:23 <ajo_> moshele, so probably, that's the case, I suspect we will have to make the cleanup process more robust
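[Editor's note: the "memory for cleanup" idea ajo_ mentions could look roughly like the small cache below — remember the details an extension needs (ofport, vif id, ...) when a port is handled, so delete handling can still clean up even when the agent only has port=None. Class and method names are hypothetical, not neutron's actual API.]

```python
class QosCleanupCache:
    """Remember per-port details so cleanup can run after the vif is gone."""

    def __init__(self):
        self._details = {}  # port_id -> details needed for cleanup

    def remember(self, port_id, details):
        self._details[port_id] = details

    def forget(self, port_id):
        # pop() tolerates ports we never saw, returning None instead of raising
        return self._details.pop(port_id, None)

cache = QosCleanupCache()
cache.remember("port-1", {"ofport": 5, "vif_id": "abc"})
details = cache.forget("port-1")   # still available even though the port is gone
assert details == {"ofport": 5, "vif_id": "abc"}
```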
14:08:46 <njohnston__> ok, let's move on.
14:08:49 <njohnston__> #topic Bugs
14:08:50 <ajo_> for bw limit it wasn't necessary, because things were cleaned up automatically when the vif was deleted.
14:09:03 <njohnston__> #link http://bit.ly/1WhXlzm
14:09:14 <moshele> ajo_: as you remember I had cleanup issues with SR-IOV
14:09:20 <njohnston__> Congrats to slaweq for merging https://review.openstack.org/#/c/300210/
14:09:49 <njohnston__> #link https://bugs.launchpad.net/bugs/1515533
14:09:51 <openstack> Launchpad bug 1515533 in neutron "QoS policy is not enforced when using a previously used port" [Low,In progress] - Assigned to Ilya Chukhnakov (ichukhnakov)
14:09:54 <slaweq> hello
14:09:57 <ajo_> slaweq++
14:09:57 <ajo_> :-)
14:09:58 <njohnston__> This was just picked up by Ilya Chukhnakov (ichukhnakov)
14:10:05 <njohnston__> Change: https://review.openstack.org/317655
14:10:22 <moshele> ajo_: I see
14:11:04 <njohnston__> next bug is #link https://launchpad.net/bugs/1486607
14:11:05 <openstack> Launchpad bug 1486607 in neutron "tenants seem like they were able to detach admin enforced QoS policies from ports or networks" [Medium,In progress] - Assigned to Nate Johnston (nate-johnston)
14:11:26 <njohnston__> This is mine.  Since we don't test permutations of the server policies, we don't need a new API test.  This is now ready to merge I think.  After that I will look at backports.
14:11:27 <ajo_> ohhh, thanks, she found the reason :) instead of doing a workaround :D
14:11:28 <ajo_> nice
14:11:59 <njohnston__> ajo_: Yeah, she got to the root of it I think, it's good work.
14:12:00 <ajo_> njohnston_, were you able to verify manually ?
14:12:19 <njohnston__> ajo_: yes, I was able to verify manually
14:12:32 <ajo_> njohnston_, I should re-review and verify, probably it's good to go by my last look at the code
14:12:45 <ajo_> njohnston_, thanks, wonderful :D
14:12:49 <njohnston__> ajo_: thank you sir
14:13:23 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1507654
14:13:25 <openstack> Launchpad bug 1507654 in neutron "Use VersionedObjectSerializer for RPC push/pull interfaces" [Low,In progress] - Assigned to Artur Korzeniewski (artur-korzeniewski)
14:13:36 <njohnston__> I pinged Artur in the bug to provide a change he is working on.
14:14:10 <njohnston__> he responded with change https://review.openstack.org/#/c/269056/ so we can look at that
14:14:19 <ajo_> it's unclear to me
14:14:20 <ajo_> ihrachys,  ^
14:14:33 <ajo_> can you comment on that bug  and artur's response ?
14:15:06 <njohnston__> Speaking of ihrachys...
14:15:07 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1507761
14:15:08 <openstack> Launchpad bug 1507761 in neutron "qos wrong units in max-burst-kbps option (per-second is wrong)" [Low,In progress] - Assigned to Slawek Kaplonski (slaweq)
14:15:13 <njohnston__> This one appears to be dead, with Ihar preferring a documentation change.  Time to abandon it?
14:15:23 <slaweq> this can't be solved for now
14:15:27 <ihrachys> ajo_: I think korzen described everything clearly there?..
14:15:29 <slaweq> I made some note in docs
14:15:32 <ihrachys> ajo_: we don't plan to pursue it right now
14:15:35 <slaweq> and that's all I can do with it
14:15:40 <ihrachys> I think the bug should be closed for now
14:15:53 <ajo_> ihrachys, so, the serializer will be switched when other rpc endpoints start sending ovos,
14:15:56 <ajo_> did I get it right?
14:16:21 <ihrachys> ajo_: when/if we send real objects (and not dicts that represent them, as we do with callbacks)
14:16:32 <ajo_> I'd propose closing this as won't fix, until that's changed
14:16:36 <ajo_> and then, we reconsider
14:16:39 <ihrachys> if that's the case, we would be able to pass objects directly into oslo.messaging.
14:16:48 <ihrachys> but there is no use case for that right now
14:17:02 <ajo_> beyond rpc callbacks
14:17:03 <ajo_> ok
14:17:23 <njohnston__> #action njohnston close bug 1507654 as "won't fix"
14:17:24 <openstack> bug 1507654 in neutron "Use VersionedObjectSerializer for RPC push/pull interfaces" [Low,In progress] https://launchpad.net/bugs/1507654 - Assigned to Artur Korzeniewski (artur-korzeniewski)
14:17:51 <njohnston__> back to bug 1507761
14:17:52 <openstack> bug 1507761 in neutron "qos wrong units in max-burst-kbps option (per-second is wrong)" [Low,In progress] https://launchpad.net/bugs/1507761 - Assigned to Slawek Kaplonski (slaweq)
14:18:08 <slaweq> njohnston__: as I said, change in docs is done
14:18:16 <slaweq> and nothing else can be done here for now
14:18:41 <njohnston__> slaweq: Time to abandon https://review.openstack.org/#/c/291633/ then?
14:18:56 <slaweq> ajo_: ihrachys: what do you think?
14:18:58 <slaweq> IMHO yes
14:19:00 <ajo_> njohnston_, I have moved the bug into won't fix
14:19:24 <ajo_> yes slaweq , you can kill it for now
14:19:27 <njohnston__> ajo_: Thanks!
14:19:40 <ajo_> it's nice to clean bugs :P :)
14:19:47 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1515564
14:19:48 <openstack> Launchpad bug 1515564 in neutron "Internal server error when running qos-bandwidth-limit-rule-update as a tenant" [Low,In progress] - Assigned to Liyingjun (liyingjun)
14:19:55 <njohnston__> Change is https://review.openstack.org/#/c/244680/
14:20:12 <njohnston__> This is in need of core reviews - it has no negative reviews, but nobody has reviewed it since April.
14:20:43 <ihrachys> slaweq: yes
14:21:17 <ajo_> ack, I will review
14:21:24 <njohnston__> thanks ajo_
14:21:53 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1558614
14:21:54 <openstack> Launchpad bug 1558614 in neutron "The QoS notification_driver is just a service_provider, and we should look into moving to that" [Low,In progress] - Assigned to Ching Sun (ching-sun)
14:22:06 <ajo_> as a note
14:22:17 <ajo_> that will clash with mfranc213_ work on the qos plugin refactor
14:22:31 <ajo_> https://review.openstack.org/#/c/294463/
14:22:33 <ajo_> #link https://review.openstack.org/#/c/294463/
14:22:57 <njohnston__> Duly noted, I'm not sure if she is watching but I'll make sure she's aware.
14:23:14 <ajo_> about service providers...
14:23:19 <ajo_> I'm not sure that bug is really valid
14:23:37 <ajo_> I believe service providers were made to allow several service providers at once,...
14:23:48 <ajo_> and then you could be able to pick the right one when you create the resources
14:24:03 <ajo_> in the case of the qos plugin, this is only an interface to the backend implementation(s)
14:24:22 <ihrachys> right.
14:24:23 <ajo_> and the user can't pick anything,
14:24:28 <njohnston__> agreed
14:24:49 <njohnston__> I think we can mark it 'opinion' and move on
14:24:57 <ajo_> njohnston_, makes sense
14:25:08 <ajo_> I will do it, and put the explanation I just gave
14:25:24 <njohnston__> ajo_: Thanks!
14:25:29 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1515533
14:25:30 <openstack> Launchpad bug 1515533 in neutron "QoS policy is not enforced when using a previously used port" [Low,In progress] - Assigned to Ilya Chukhnakov (ichukhnakov)
14:25:46 <njohnston__> #undo
14:25:47 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0x9cb9c90>
14:26:39 <njohnston__> #topic Approved RFEs
14:26:48 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1468353
14:26:50 <openstack> Launchpad bug 1468353 in neutron "[RFE] QoS DSCP marking rule support" [Wishlist,Fix committed] - Assigned to David Shaughnessy (david-shaughnessy)
14:26:53 <njohnston__> This is the RFE for DSCP.  It's just waiting on the fullstack change, which looks good to merge - just needs core reviews.
14:27:04 <njohnston__> Change is: https://review.openstack.org/#/c/288392
14:27:19 <njohnston__> and
14:27:21 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1560961
14:27:22 <openstack> Launchpad bug 1560961 in neutron "[RFE] Allow instance-ingress bandwidth limiting" [Wishlist,In progress] - Assigned to Slawek Kaplonski (slaweq)
14:27:32 <njohnston__> Change is: https://review.openstack.org/#/c/303626/
14:27:41 <njohnston__> looks like slaweq is working on this with gsagie
14:27:46 <ajo_> I will re-review, sorry for being a bit slow on this
14:27:49 <slaweq> this is going forward
14:28:03 <slaweq> I have working ingress limit for both linuxbridge and ovs agents
14:28:10 <njohnston__> slaweq: That's great
14:28:19 <slaweq> I have some comments in review which I need to address
14:28:38 <slaweq> and also I found that there is no functional tests for linuxbridge agent
14:28:57 <ajo_> slaweq, keep an eye on mfranc213_ plugin refactor, you may need to adopt her changes, although it will be easy
14:29:02 <njohnston__> I was shocked when I read that, slaweq
14:29:04 <slaweq> so I started adding it in separate patch: https://review.openstack.org/#/c/317099/
14:29:11 <ajo_> mfranc213_, : I suspect we should prioritize your patch, so everything lands on top of your refactor
14:29:24 <slaweq> ajo_: of course I will rebase my patch when hers is merged
14:29:35 <ajo_> and, said that, it was already on good condition IMHO, so I will do a final pass after meeting to see if here's something left
14:29:43 <slaweq> I think that her patch is closer to merge
14:29:46 <slaweq> :)
14:30:12 <njohnston__> #topic RFEs that don't have the rfe-approved tag
14:30:15 <slaweq> so maybe I will have to change mine a little bit :)
14:30:32 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1505627
14:30:33 <openstack> Launchpad bug 1505627 in neutron "[RFE] QoS Explicit Congestion Notification (ECN) Support" [Wishlist,Confirmed] - Assigned to Reedip (reedip-banerjee)
14:31:02 <ajo_> that one, I discussed it with reedip
14:31:03 <njohnston__> I was unclear on whether amotoki's concerns had been addressed
14:31:29 <ajo_> and needs a thought, as per my discussions with him on the summit, it was clear he wanted to leverage ECN, but the mechanisms were not clear
14:31:31 <njohnston__> Ah, what did reedip say ajo_?
14:31:45 <ajo_> how do you detect congestion, when, .. etc
14:32:03 <ajo_> njohnston_, I'd mark that as incomplete for now
14:32:07 <njohnston__> I think that is all handled in the IP stack and in the intervening network devices between two ECN-enabled nodes
14:32:16 <ajo_> until he provides more input, or we have more people willing to work on it
14:32:17 <njohnston__> the congestion detection, I mean
14:32:20 <mfranc213_> ajo_ ack (and sorry late)
14:32:22 <davidsha> Thats the impression I was under as well
14:32:38 <ajo_> njohnston_, yes, but if we do something about ecn...
14:32:50 <ajo_> the hypervisor, or the network nodes, become network devices
14:32:54 <ajo_> which may act upon congestion
14:33:17 <njohnston__> correct
14:33:46 <ajo_> one way could be ... altering bw limit rules, to trigger ECN over bw limits, instead of dropping
14:34:17 <njohnston__> this sounds like something that should be handled at the OVS layer (or LB equivalent)
14:34:27 <njohnston__> because it's processing at a per-packet level
14:34:36 <ajo_> yes,
14:34:54 <ajo_> my thinking, is that we wait for more feedback from reedip, or any other interested party on ECN
14:34:54 <njohnston__> as opposed to just making sure that ECN is marked as enabled and letting whatever congestion behavior is built into OVS etc. handle it
14:35:00 <njohnston__> agreed
14:35:31 <njohnston__> ajo_: Do you mind commenting on the bug since you already have the dialogue with reedip?
14:35:42 <ajo_> njohnston_, correct, that makes sense
14:35:47 <njohnston__> thanks
14:35:48 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1505631
14:35:49 <openstack> Launchpad bug 1505631 in neutron "[RFE] QoS VLAN 802.1p Support" [Wishlist,Confirmed] - Assigned to Reedip (reedip-banerjee)
14:36:08 <slaweq> ajo_: but isn't this ECN related to Your specs about openflows?
14:37:04 <davidsha> slaweq: flow management is it?
14:37:23 <slaweq> davidsha: yep
14:37:37 <ajo_> slaweq, eventually, it could be, yes
14:37:38 <davidsha> slaweq: If it's done like dscp it shouldn't have a problem
14:38:18 <njohnston__> And it should be done like DSCP, probably pretty well integrated into the DSCP implementation given that DSCP and ECN share the same byte in the IP header.
14:38:21 <slaweq> ok, I just thought that if it is another flow in the bridges then it will be harder and harder to maintain
14:38:32 <slaweq> without this flow management :)
14:38:42 <njohnston__> no, it would just become a mod to DSCP
14:38:48 <ajo_> davidsha is right; the problem becomes more vicious when we have external extensions, but in-tree extensions know about each other, generally
14:38:55 <njohnston__> so if you set DSCP mark  16 the flow would set nw_tos to 64
14:39:04 <slaweq> njohnston__: ok, I'm not very familiar with that, thx for explanation
14:39:13 <njohnston__> but if you set DSCP 16 and ECN enabled then the flow would set nw_tos to 65
14:39:15 <davidsha> slaweq: dscp modifies the packet at the highest priority and then resubmits it back to the table for more processing.
14:39:41 <ajo_> njohnston_, yes, we may need something that sets a bit,
14:39:45 <njohnston__> so a single flow rule, with the correct nw_tos value that is the combination of ECN and DSCP values
14:40:07 <slaweq> ok, that sounds easy then :)
14:40:11 <mfranc213_> +1
14:40:11 <davidsha> njohnston__: I think there is another command for setting the ecn bits
14:40:28 <mfranc213_> that would be better, if there is
14:40:33 <njohnston__> davidsha: I defer to your greater knowledge then :-)
14:40:44 <ajo_> yeah, I think there was something like that
14:40:51 <davidsha> njohnston__: I'll double check :)
14:40:57 <ajo_> but, let's not use more minutes on ECN until that's clear :D
14:41:05 <njohnston__> ok, back to https://bugs.launchpad.net/neutron/+bug/1505631
14:41:06 <openstack> Launchpad bug 1505631 in neutron "[RFE] QoS VLAN 802.1p Support" [Wishlist,Confirmed] - Assigned to Reedip (reedip-banerjee)
14:41:21 <njohnston__> I don't have a good grasp on what the status and/or trajectory is for this
14:41:26 <ajo_> mod_nw_ecn:ecn
14:41:27 <ajo_> Sets the ECN bits in the IPv4 ToS or IPv6 traffic class field to ecn, which must be a value between 0 and 3, inclusive.
14:41:27 <ajo_> This action does not modify the six most significant bits of the field (the DSCP bits).
14:41:27 <davidsha> yup: mod_nw_ecn
14:41:28 <ajo_> :-)
14:41:41 <mfranc213_> nice
14:42:10 <davidsha> It can also set the congestion bit too which I didn't expect
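[Editor's note: a worked example of the ToS-byte arithmetic discussed above. DSCP occupies the six high bits of the IPv4 ToS / IPv6 traffic-class byte and ECN the low two, which is why `mod_nw_ecn` can set ECN without disturbing the DSCP bits.]

```python
def tos_byte(dscp, ecn=0):
    """Combine a DSCP codepoint (0-63) and ECN field (0-3) into one ToS byte."""
    assert 0 <= dscp <= 63 and 0 <= ecn <= 3
    return (dscp << 2) | ecn

assert tos_byte(16) == 64         # DSCP mark 16 -> nw_tos 64
assert tos_byte(16, ecn=1) == 65  # same mark with an ECN bit set -> 65
```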
14:42:36 <njohnston__> moving on to
14:42:37 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1527671
14:42:38 <openstack> Launchpad bug 1527671 in neutron "[RFE] traffic classification in Neutron QoS" [Wishlist,Incomplete]
14:42:47 <njohnston__> Looks like this was split to be two bugs, and this one was reduced to just the traffic classification part.
14:42:54 <ajo_> yes,
14:43:00 <njohnston__> I believe this has been an area of activity of late
14:43:14 <ajo_> davidsha, was looking at this, and we had a meeting with the flow-classifier guys from SFC
14:43:26 <ajo_> let me look for the meeting link
14:43:47 <ajo_> #link http://eavesdrop.openstack.org/meetings/network_common_flow_classifier/2016/network_common_flow_classifier.2016-05-17-17.02.html
14:44:16 <ajo_> an RFE will be filed, to ask for flow classifier objects in the core
14:44:32 <ajo_> and an API, I guess
14:44:40 <ajo_> that discussion may take some time
14:44:54 <ajo_> davidsha, did I miss anything?
14:45:33 <davidsha> ajo_: No, that's pretty much it
14:45:45 <ajo_> ok, njohnston_ , next? :)
14:45:51 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1557457
14:45:52 <openstack> Launchpad bug 1557457 in neutron "[RFE] rate-limit external connectivity traffic." [Wishlist,Incomplete]
14:46:09 <njohnston__> I guess this is already marked as incomplete, moving on
14:46:15 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1560963
14:46:16 <openstack> Launchpad bug 1560963 in neutron "[RFE] Minimum bandwidth support (egress)" [Wishlist,Confirmed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)
14:46:24 <njohnston__> Thanks Rodolfo for stepping in and taking this on!  r
14:46:35 <njohnston__> Spec filed: https://review.openstack.org/#/c/316082/
14:46:38 <ralonsoh> Please, take a look at the spec
14:46:52 <ajo_> ohh, nice
14:46:55 <ajo_> I didn't notice
14:47:30 <njohnston__> It's good stuff
14:47:34 <hichihara> I guess that we need to propose something (about the generic resource pool) to nova so that VMs are scheduled aware of minimum bandwidth.
14:47:43 <hichihara> Do we have work items in nova side about minimum bandwidth feature?
14:48:14 <ralonsoh> Not now, but the scheduler will have the generic resource pool
14:48:22 <ajo_> hichihara, we have another RFE for that
14:48:26 <ajo_> I discussed with them
14:48:27 <ralonsoh> I will propose a spec for the scheduler
14:48:37 <hichihara> ajo_: link?
14:48:44 <ajo_> sec
14:49:06 <ajo_> #link https://bugs.launchpad.net/neutron/+bug/1578989
14:49:07 <openstack> Launchpad bug 1578989 in neutron "[RFE] Strict minimum bandwidth support (egress)" [Wishlist,New]
14:49:20 <davidsha> Just with the ingress bandwidth limit and bandwidth guarantees. If we mark the bandwidth limit on the QoS table, will that limit traffic going in and out of those Queues?
14:49:27 <davidsha> nvm
14:49:28 <hichihara> ajo_: Thank you!
14:49:30 <ajo_> hichihara, see link [3]
14:49:40 <ajo_> to my understanding, we need their framework to be up
14:49:43 <ajo_> and then we're just another resource
14:49:45 <ajo_> NIC_BW
14:49:55 <ajo_> I was lately thinking, that we can separate per direction
14:50:01 <ajo_> EGRESS_NIC_BW
14:50:06 <ajo_> and may be per physnet
14:50:27 <njohnston__> last RFE
14:50:28 <njohnston__> #link https://bugs.launchpad.net/neutron/+bug/1580149
14:50:29 <openstack> Launchpad bug 1580149 in neutron "[RFE] Rename API options related to QoS bandwidth limit rule" [Wishlist,Confirmed] - Assigned to Slawek Kaplonski (slaweq)
14:50:46 <ajo_> hichihara, , ralonsoh follow the thread and see if there's something missing,
14:50:55 <ajo_> btw hi ralonsoh !!! I didn't see you were on IRC :D
14:50:57 <ralonsoh> ajo_: ok
14:50:59 <slaweq> I didn't start work on it yet
14:51:03 <ajo_> thanks for taking on the min bw :D !!!
14:51:20 <hichihara> ajo_: Thanks, I will catch up it. and also I will review the spec.
14:51:23 <ralonsoh> ajo_: no problem!
14:51:44 <njohnston__> Last comment on 1580149, slaweq seemed to defer to ajo_ and ihrachys to confirm his position
14:52:10 <njohnston__> slaweq: Are you good to start regardless, or would it be better to get more comments in the bug?
14:52:13 <slaweq> yes, it would be great if You can confirm it
14:52:20 <ajo_> slaweq, about that (1580149) I see armax chimed In, I'll try to discuss with him
14:52:25 <ajo_> to see if he has a different opinion
14:52:37 <slaweq> I will start it soon
14:52:49 <slaweq> ok, thx ajo_
14:52:51 <njohnston__> #topic Other Changes
14:52:58 <njohnston__> #link https://review.openstack.org/#/c/317645/
14:53:13 <njohnston__> That is the docs change slaweq mentioned earlier: "[networking] Add information about default burst value"
14:53:29 <njohnston__> #topic Open Discussion
14:53:35 <slaweq> njohnston__: earlier I mentioned other change (which is merged already) :)
14:53:53 <njohnston__> slaweq: Ah yes I got them crossed in my head :-)
14:53:55 <slaweq> this one is related to patch: https://review.openstack.org/#/c/311609/
14:54:04 <ajo_> and
14:54:07 <ajo_> #link https://review.openstack.org/294463
14:54:20 <ajo_> thanks again mfranc213_  ;)
14:54:51 <njohnston__> Anything else that folks want to discuss?  Or any changes that aren't on my tracking radar?
14:54:53 <mfranc213_> :)
14:54:54 <davidsha> Just the previous comment I made:
14:55:04 <davidsha> "Just with the ingress bandwidth limit and bandwidth guarantees. If we mark the bandwidth limit on the QoS table, will that limit traffic going in and out of those Queues?"
14:55:58 <ajo_> davidsha, what do you mean?
14:57:11 <davidsha> ajo_: So the Queues are tied to the QoS table and I'm wondering if the bandwidth limit on the QoS table in slaweq patch will interfere with the min guarantee in ralonsoh
14:57:26 <ajo_> ah, no
14:57:42 <ajo_> davidsha, they are queues in different points
14:58:03 <ajo_> btw, ralonsoh I have to share with you the ideas they're having in OVN,
14:58:22 <ajo_> may be they have a way to do it without patch_ports -> veths
14:58:33 * njohnston__ is interested in that as well
14:58:45 <ajo_> instead of setting the queues in the veths, they set the queues in the physical interfaces of the host
14:59:00 <ralonsoh> that could be a good solution
14:59:10 <ajo_> and... I don't know how... they connect the queue to ovs/openflow (To have the queue id)
14:59:25 <ajo_> ralonsoh, that works for egress, but for min bw ingress.. it may not work... :)
14:59:29 <ajo_> anyway,
14:59:34 <ajo_> it's better if we don't affect performance :)
14:59:44 <njohnston__> time -1, any last minute items?
14:59:48 <ajo_> we can make the veths if people have interest in min bw ingress
14:59:53 <ralonsoh> I know, I now need to modify the incoming flows in OVS
15:00:11 <ajo_> ralonsoh, lots of fun, I will try to help on that
15:00:21 <ajo_> I think we should endmeeting
15:00:23 <ralonsoh> ajo_: no problem!
15:00:24 <njohnston__> All right, that's time?  All right, have a wonderful day and talk to you in 2 weeks.
15:00:27 <ralonsoh> bye
15:00:28 <njohnston__> #endmeeting