22:01:27 #startmeeting neutron_drivers
22:01:28 Meeting started Thu Jul 27 22:01:27 2017 UTC and is due to finish in 60 minutes. The chair is kevinbenton. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:01:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:01:31 The meeting name has been set to 'neutron_drivers'
22:01:31 ihrachys: you broke your chair?
22:02:06 this week is feature freeze
22:02:06 armax, no, just falling asleep
22:02:23 starting next week any feature related things that need to merge will have to get an FFE
22:02:33 oh
22:02:41 boooo
22:02:59 excluding simple things like docs
22:03:04 or additional API tests are always welcome
22:03:06 etc
22:03:32 kevinbenton: how are you planning to handle FFE requests?
22:03:41 what's the process? email to openstack-dev?
22:03:46 yeah, email to openstack-dev
22:04:07 well, I used to do it when preparing the postmortem
22:04:12 who decided before, the ptl?
22:04:33 but you’re free to do it differently
22:04:55 let me think about it and i'll let you know next week
22:05:41 before I forget, let's take a quick look at https://bugs.launchpad.net/neutron/+bug/1705719 because the patches are ready to go
22:05:41 Launchpad bug 1705719 in neutron "[RFE] QinQ network driver" [Wishlist,Triaged] - Assigned to Trevor McCasland (twm2016)
22:05:51 (well, they were at least close when i reviewed yesterday)
22:07:09 https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bug/1705719
22:07:49 * armax goes back to the discussion we had last week
22:08:21 the change will break everyone who based their type drivers on SegmentTypeDriver. smth to consider close to the end of the cycle.
22:09:05 http://codesearch.openstack.org/?q=SegmentTypeDriver&i=nope&files=&repos=
22:09:26 bagpipe in the list
22:09:55 ihrachys: i'm not sure segmenttypedriver changes with the refactor
22:10:10 the argument to __init__ changes from db model to OVO
22:10:21 oooh
22:10:22 look here https://review.openstack.org/#/c/483020/3..5/neutron/plugins/ml2/drivers/helpers.py
22:10:24 the OVO component
22:11:01 ihrachys: don't we need that in the switch to OVO anyway?
22:11:23 it's a good change, just a churn at the end of cycle, that's all I have to say.
22:11:44 ihrachys: i suppose we can put in a compatibility shim that detects if what is passed in is OVO, if we are worried
22:11:44 we'll need to advertise to consumers; prepare patch for bagpipe at least
22:11:54 I asked about that before in the patch, not sure if any of that happened
22:12:13 kevinbenton: in the last meeting you said this driver will be used by the sriov mech driver
22:12:41 but I see no link anywhere
22:13:10 armax: i don't think they have that part up yet
22:13:33 OK, but shouldn’t that be captured somewhere so that we can more meaningfully review the code that’s currently under review?
22:13:40 what if we screw up the migration?
22:13:45 what migration?
22:13:49 the DB migration
22:14:05 I mean what if we overlook a detail in the model because we don’t have the full picture
22:14:47 so we can't approve the RFE until we see an implementation?
22:14:53 no
22:14:57 i'm fine blocking from merging this cycle
22:14:58 I didn’t say that
22:15:01 if there is no impl
22:15:08 but the RFE has nothing
22:15:14 except a skinny description
22:15:32 no spec either
22:15:34 isn't that the point of an RFE?
22:15:55 describe the use case
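[A minimal sketch of the compatibility-shim idea raised at 22:11:44, assuming the refactor changes the class handed to SegmentTypeDriver's __init__ from a SQLAlchemy model to an OVO. The attribute name segment_obj and the helper _model_to_ovo are illustrative assumptions, not part of the patch under review.]

    # Sketch only: detect whether the caller passed an OVO class or a legacy
    # DB model class, so out-of-tree type drivers (e.g. networking-bagpipe)
    # keep working while they migrate to the new signature.
    from neutron.objects import base as obj_base

    def _model_to_ovo(model_class):
        """Hypothetical helper mapping a DB model class to its OVO class."""
        raise NotImplementedError("mapping left out of this sketch")

    class SegmentTypeDriver(object):

        def __init__(self, segment_class):
            if isinstance(segment_class, type) and issubclass(
                    segment_class, obj_base.NeutronDbObject):
                # New-style caller: already an OVO class.
                self.segment_obj = segment_class
            else:
                # Legacy caller: still passing a DB model; translate it.
                self.segment_obj = _model_to_ovo(segment_class)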
22:16:11 RFE itself is a feature request. If it turns out we need more discussion on the direction of the implementation during review, can't we switch to a blueprint?
22:16:12 the use case mentions nothing of SRIOV
22:16:57 the RFE’s description is rather inadequate IMO to understand what’s going on
22:17:00 I quote
22:17:01 We can implement this by first refactoring VLAN's allocation logic and then extending it to handle a second layer of VLAN tagging. Essentially replacing vlan_id with a s_tag:c_tag pair
22:17:03 because that's an implementation detail?
22:17:05 that’s not even supposed to be in there
22:17:16 OK, ignore me
22:17:45 did we lose him?
22:18:06 * ihrachys pulls popcorn closer
22:18:18 it’s not an implementation detail
22:18:35 it’s an important part of the end-to-end solution required for the feature being requested
22:18:41 it helps us understand what documentation is required etc
22:19:23 armax: put what you want to see on the RFE and we can come back to it next week when he replies
22:19:51 well, do you first agree with me?
22:19:56 talking about which driver should be supported feels like an implementation detail to me
22:20:00 otherwise I am not gonna waste my time
22:20:15 agree on armax's point. this is about the implementation approach.
22:20:22 but that defeats the point of the whole RFE process
22:20:34 which you’re diluting into a rubberstamping process
22:20:52 armax: no, RFE is IMO a user story thing
22:20:52 if we go straight to review without even looking at the bigger picture
22:21:03 dude, have you read the description of the RFE?
22:21:13 i don't want to have this conversation again
22:21:38 we can ask what driver the implementation needs to be for on the RFE
22:21:49 and discuss next week
22:21:51 for what it’s worth
22:21:58 I don’t see why we’re even refactoring at this point
22:22:13 we might be better off duplicating allocation logic for now
22:22:26 as refactoring feels like an early optimization
22:22:35 that said, I don’t even understand why it’s in the RFE description
22:23:13 I am somewhat concerned that we’re shortcutting the process and that may backfire spectacularly
22:23:16 that’s all
22:23:22 I am voicing my concern
22:23:31 you seem annoyed that I do
22:23:42 so kick me out of the team, let’s be done with this farce
22:24:00 i don't like RFEs getting too bogged down with implementation details
22:24:07 i agree that the mention of the refactor should be removed
22:24:28 an RFE should describe a use case
22:24:55 a use case is a description of a step-wise process of interaction between user and system or system to system
22:25:01 it doesn’t have to have implementation details
22:25:23 but I see no use case there
22:25:36 QinQ encap for isolation of vlans
22:25:37 if you magically see it, then enlighten us, that’s all I am asking
22:25:57 OK, never mind
22:26:01 let’s move on
22:26:10 lets you get out of the 4096 limit
22:26:28 let’s move on
22:26:51 #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe
22:27:10 #link https://bugs.launchpad.net/neutron/+bug/1604222
22:27:10 Launchpad bug 1604222 in neutron "[RFE] Implement vlan transparent for openvswitch ML2 driver" [Wishlist,Triaged]
22:27:26 so at this point, I don't think we have anyone to implement this
22:28:20 However, IIRC we switched to using push/pop vlan in the OVS agent
22:28:37 so once we have a version with qinq support maybe it will magically work?
22:29:09 either way, i think this will just have to go to rfe-deferred
22:29:29 anyone disagree?
22:30:03 go for it. we can revive.
22:30:05 I wonder what is the difference between this and QinQ RFE..
22:30:06 yes let's postpone
22:30:25 amotoki: QinQ rfe is neutron allocating the inner and outer tags
22:30:43 kevinbenton: ah I see!
22:30:57 this one is just encapsulating whatever the tenant gives in a neutron-provided encap
22:31:08 QinQ as a big range of labels.
22:32:05 yeah
22:32:24 amotoki: this is called QinQ because internally OVS needs the support to double-tag
22:32:32 on the other hand, in this rfe, tenants specify the inner tag and neutron assigns the outer tag.
22:32:38 yep
22:32:39 i totally understand
22:33:17 #link https://bugs.launchpad.net/neutron/+bug/1669630
22:33:17 Launchpad bug 1669630 in neutron "Network RBAC acceptance workflow" [Wishlist,Triaged]
22:33:25 not sure what state that should go into
22:33:39 just postponed until Adrian reaches out with time?
22:33:52 yes
22:35:00 #link https://bugs.launchpad.net/neutron/+bug/1682247
22:35:00 Launchpad bug 1682247 in neutron "Neutron should be able to fetch hostkeys for ports" [Wishlist,Triaged]
22:35:26 mordred: you happen to be around?
22:35:51 I'm thinking we can go to rfe-postponed for now on this one
22:38:39 kevinbenton: yes - that's fine - it'll be a little while until I can get to that one again
22:43:25 he mentioned a couple of weeks ago that he was contemplating an alternative
22:43:25 so if we don't hear from him, it is reasonable to postpone
22:44:31 thoughts?
22:45:11 sounds good to postpone
22:45:25 kevinbenton: you missed a couple lines from mlavalle but mlavalle is back so will let them replay for you
22:45:25 * mlavalle got disconnected
22:45:47 oh, my bouncer might have disconnected as well
22:45:57 yeah, there was a lag
22:45:59 kevinbenton: yes, excess flood
22:46:02 i didn't get anything from anyone since i started blathering
22:46:20 last message was "which would obviate the need for this spec", then disconnect, then connect, then "thoughts?"
22:46:24 I ended up re-connecting
22:46:34 holy crap!
22:46:41 i wrote a whole story about another spec
22:46:43 :)
22:46:47 one sec
22:46:50 kevinbenton: ya which you pasted and flooded out
22:47:01 kevinbenton: so you'll need to paste a few lines at a time to avoid being kicked
22:47:18 clarkb: i think my bouncer actually had the connection issue and then dumped them all at once
22:47:20 ok
22:47:22 here it comes
22:47:31 #link https://bugs.launchpad.net/neutron/+bug/1689830
22:47:36 freenode was dead for me for a while, I guess it's everyone
22:47:39 So we got more details on https://bugs.launchpad.net/neutron/+bug/1689830
22:47:45 use case is that you have multiple tenants on a shared network
22:47:51 and they want to run VRRP between their VMs or some kind of IP sharing mechanism like it
22:47:51 ihrachys: yeah, it wasn't only you
22:47:57 so each VM needs to be able to use the shared IP
22:48:02 normally you would add this to allowed_address_pairs
22:48:09 but currently we block that on shared networks because you can add any address you want to allowed_address_pairs, including those of other tenants' ports
22:48:18 So this RFE is to provide a mechanism to only allow addresses that are not in use by other tenants
22:48:27 Right now I'm thinking we should just have a separate attribute from allowed address pairs with completely independent API validation/policy
22:48:35 To avoid overcomplicating the allowed address pairs code
22:48:40 thoughts?
22:48:44 (end of message) :)
22:49:47 Launchpad bug 1689830 in neutron "[RFE] advanced policy for allowed addres pairs" [Wishlist,Triaged]
22:49:51 This might even be a good time to formalize the notion of a port being impersonated by another port
22:50:38 like an attribute on the port called 'can_impersonate' which takes a list of other port UUIDs
22:50:40 or we can allow specifying a port in the allowed-address-pairs of another port.
22:51:10 amotoki: yeah, i think we're suggesting the same thing
22:51:49 by cross referencing another port directly we don't have to worry about stale IPs in allowed address pairs if a tenant deletes the shared port
22:52:32 armax: how do you feel about the use case now that the submitter has clarified?
22:52:39 and the port specified in allowed address pairs is also owned by the same tenant
22:52:50 mlavalle: right
22:52:59 it’s fine
22:53:04 standard policy forcing a tenant match unless you're an admin
22:53:12 correct
22:53:28 mlavalle: do you think a port on a different network can be specified?
22:53:41 i am not sure it works well.
22:53:41 i think no
22:53:47 me neither
22:54:02 on the same page
22:54:08 let me capture this in the RFE and see if that would work for the submitter
22:54:29 i just confirmed what is meant by "by the same tenant"....
22:54:29 and we can maybe target this for Queens
22:54:59 amotoki: yeah, logic would be a regular user can only set up a port to impersonate another port on the same network with the same tenant
22:55:46 that's right
22:55:57 agent-side code and backends will need to be updated to read from this new attribute so Pike is unrealistic
22:56:10 kevinbenton: yes. this simplifies a workflow: currently a user needs to create a port to reserve an IP address and specify the IP in allowed-address-pairs.
22:56:27 amotoki: ++
22:58:04 ok, i left a comment
22:58:10 good
22:58:12 we can revisit at the next meeting
22:58:20 that's all the time we have
22:58:29 any last minute announcements or anything?
22:58:36 not from me?
22:58:42 mlavalle: i'm not sure?
22:58:45 :)
22:58:49 I am sure ;-)
22:58:50 ok
22:58:55 thanks everyone
22:58:55 LOL
22:58:58 o/
22:59:06 #endmeeting
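[A rough sketch of the validation constraints converged on in the 1689830 discussion above: a non-admin may only point the proposed per-port attribute (working name 'can_impersonate') at ports on the same network owned by the same tenant. Neither the attribute nor this helper exists in Neutron; names and structure are assumptions for illustration only.]

    # Hypothetical validation sketch for the 'can_impersonate' idea; not real
    # Neutron code, just the same-network / same-tenant-unless-admin rules.
    from neutron_lib import exceptions as n_exc

    def validate_can_impersonate(context, port, target_ports):
        for target in target_ports:
            if target['network_id'] != port['network_id']:
                raise n_exc.BadRequest(
                    resource='port',
                    msg="impersonated port must be on the same network")
            if not context.is_admin and target['tenant_id'] != port['tenant_id']:
                raise n_exc.BadRequest(
                    resource='port',
                    msg="impersonated port must belong to the same tenant")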