14:00:03 <slaweq_> #startmeeting neutron_drivers
14:00:03 <openstack> Meeting started Fri May  8 14:00:03 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:07 <openstack> The meeting name has been set to 'neutron_drivers'
14:00:16 <slaweq_> #chair slaweq
14:00:16 <openstack> Warning: Nick not in channel: slaweq
14:00:17 <openstack> Current chairs: slaweq slaweq_
14:00:24 <slaweq> hello
14:00:29 <mlavalle> o/
14:00:46 <haleyb> hi, glad we have a backup slaweq in the channel :)
14:00:50 <njohnston_> o/
14:01:00 <yamamoto> hi
14:01:03 <slaweq> :)
14:01:12 <amotoki> hi
14:01:21 <slaweq> I think we can start
14:01:26 <slaweq> #topic RFEs
14:01:33 <slaweq> we have 2 RFEs for today
14:01:43 <slaweq> first one:
14:01:44 <ralonsoh> hi
14:01:46 <slaweq> https://bugs.launchpad.net/neutron/+bug/1875516 - [RFE] Allow sharing security groups as read-only
14:01:46 <openstack> Launchpad bug 1875516 in neutron "[RFE] Allow sharing security groups as read-only" [Wishlist,New]
14:02:03 <slaweq> proposed by rm_work
14:03:27 <njohnston_> there is also a proposed spec here: https://review.opendev.org/#/c/724207/2/specs/ussuri/share-security-groups-readonly.rst,unified
14:03:27 <patchbot> patch 724207 - neutron-specs - Allow sharing security groups as read-only - 2 patch sets
14:03:54 <slaweq> thx njohnston_ :)
14:04:03 <njohnston_> Personally I think this makes a lot of sense
14:04:34 <njohnston_> and it will be very useful in a private cloud situation where a centralized group can develop and share security policies
14:04:50 <mlavalle> +1
14:04:52 <amotoki> it makes sense to me too. I think it is a common use case to share an SG as read-only
14:05:10 <slaweq> I'm also fine with this rfe and spec
14:05:41 <slaweq> I think that this is also like that in fwaas, right?
14:05:56 <slaweq> but I'm not sure if it can be readonly there
14:06:53 <ralonsoh> +1 to this idea, makes sense
14:06:57 <amotoki> IIRC in the case of fwaas we just support 'shared', so it would be read-only
14:07:18 <slaweq> amotoki: ok, thx for confirmation
14:08:05 <njohnston_> Yes, the shared attribute served much the same purpose
14:08:17 <slaweq> mlavalle: do You know if rm_work is going to implement that?
14:08:30 <mlavalle> he is
14:08:40 <slaweq> great
14:08:50 <mlavalle> and if he doesn't, we, Verizon Media, will implement it
14:08:59 <mlavalle> we need it
14:09:00 <slaweq> and this can be later reused for other resources if needed
14:09:12 <slaweq> so I'm +1 for this RFE
14:09:13 <njohnston_> indeed
14:09:21 <njohnston_> I am glad that we're incorporating some of the value of fwaas into mainline Neutron since fwaas is at end of life
14:09:54 <mlavalle> we are also doing it with the address group stuff
14:10:06 <mlavalle> incorporating it into SGs
14:10:17 <njohnston_> yep
14:10:28 <njohnston_> it is a good thing
14:10:31 <slaweq> haleyb: yamamoto: any thoughts?
14:10:51 <yamamoto> this rfe seems fine to me
14:10:59 <haleyb> +1 from me, seems useful to more than just SG perhaps later
14:11:25 <slaweq> great, so I'm going to mark this rfe as approved then and track it for Victoria
14:11:27 <slaweq> thx
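For context on what was just agreed: the proposal builds on Neutron's existing RBAC mechanism (the same one used for sharing networks). Below is a minimal openstacksdk sketch of how sharing a security group read-only with another project might look. The "access_as_shared" action for security groups, the project UUID placeholder, and the cloud name are assumptions for illustration only; the exact action and semantics are what the spec under review will define.

```python
import openstack

# Assumes a clouds.yaml entry named "mycloud"; all names below are illustrative.
conn = openstack.connect(cloud="mycloud")

# The central security team creates the security group in its own project.
sg = conn.network.create_security_group(
    name="org-baseline",
    description="Organization-wide baseline rules",
)

# Share it with a consumer project via an RBAC policy. Whether security groups
# reuse the existing "access_as_shared" action or get a dedicated read-only
# action is exactly what the spec will decide; treat this as a placeholder.
conn.network.create_rbac_policy(
    object_type="security_group",
    object_id=sg.id,
    action="access_as_shared",
    target_project_id="<consumer-project-uuid>",
)

# The consumer project can then reference sg.id on its ports, but it cannot
# modify the group's rules -- that is the "read-only" part of the RFE.
```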
14:11:33 <slaweq> let's move on to the second one
14:11:43 <slaweq> https://bugs.launchpad.net/neutron/+bug/1873091/ - [RFE] Neutron ports dns_assignment does not match the designate DNS records for Neutron port
14:11:43 <openstack> Launchpad bug 1873091 in neutron "[RFE] Neutron ports dns_assignment does not match the designate DNS records for Neutron port" [Wishlist,Triaged]
14:11:55 <slaweq> this is coming back to us again :)
14:12:15 <slaweq> mlavalle: as you were involved in this in the past, maybe you can give a short introduction to the problem :)
14:12:27 <mlavalle> yes
14:13:55 <mlavalle> we implemented internal DNS resolution back in Liberty according to this spec: https://specs.openstack.org/openstack/neutron-specs/specs/liberty/internal-dns-resolution.html
14:14:23 <mlavalle> and that's the behavior that we have enforced so far
14:14:58 <mlavalle> however, that behavior is proving to be too restrictive for some use cases that have arisen more recently
14:15:10 <amotoki> the original spec did not cover dns_domain field for a port. "dns_domain for ports" extension was implemented later.
14:15:34 <amotoki> is my understanding correct?
14:15:39 <mlavalle> yes, but what is restricting this behavior is dns_domain in neutron.conf
14:16:51 <mlavalle> the dns_assignment field is built from what is configured in neutron.conf
14:17:36 <njohnston_> I just have to say that I have always found the whole DNS situation between nova, neutron, and designate very confusing so I would welcome anything that makes it simpler.
14:17:46 <mlavalle> and then, yes, dns_domain was added as an attribute to networks and ports later for the purpose of integrating with Designate or another external DNSaaS
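To illustrate the behavior mlavalle describes, here is a rough openstacksdk sketch (assuming the DNS integration extension is enabled; the cloud name, network UUID, hostname, and IP are made up for illustration): with dns_domain set in neutron.conf, a port's dns_assignment is derived from that config value, independent of any dns_domain set on the network or port for Designate.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # illustrative cloud name

# Suppose neutron.conf has:
#   [DEFAULT]
#   dns_domain = example.org.
#
# Creating a port with a dns_name then yields a dns_assignment built from
# that *config* value:
port = conn.network.create_port(network_id="<net-uuid>", dns_name="web-1")

for entry in port.dns_assignment or []:
    # Each entry looks roughly like:
    #   {"hostname": "web-1",
    #    "ip_address": "10.0.0.5",
    #    "fqdn": "web-1.example.org."}
    print(entry["hostname"], entry["ip_address"], entry["fqdn"])
```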
14:19:19 <slaweq> I remember we had such discussion in the past - we even merged some patches to change that behaviour and later we reverted them
14:19:38 <slaweq> and I agree that it is a mess now
14:19:59 <mlavalle> yes, because at that point we didn't have strong use cases to expand the behavior. But now we have two of them, indicated in the RFE
14:21:23 <mlavalle> so my thinking is that we should open up the behavior to encompass new use cases while maintaining backwards compatibility, the lack of which is what triggered the revert
14:22:21 <mlavalle> we could go through a detailed spec to normalize the behavior to everybody's liking while maintaining backwards compatibility
14:22:29 <njohnston_> I agree with mlavalle.  Does this require any changes that would need to be coordinated with designate or can we do all the work internal to neutron?
14:22:42 <mlavalle> it's internal
14:22:48 <njohnston_> very good
14:23:00 <mlavalle> the integration with external DNSaaS won't be touched
14:23:35 <amotoki> I think the main point of this RFE is that the dns_assignment field in a port can be different from the corresponding DNS records in Designate.
14:24:11 <mlavalle> yes
14:24:17 <amotoki> I don't fully understand what happens when we sync dns_assignments field in this case.
14:24:20 <mlavalle> that's part of the issue
14:24:35 <amotoki> anyway I agree we need to clarify our behavior and how we can improve it.
14:25:04 <slaweq> so in fact we have:
14:25:11 <mlavalle> with the experience that we have now, we can write a good spec and move forward with this
14:25:20 <slaweq> 1. domain_name in the config file - this one is used to build dns_assignment
14:25:47 <slaweq> 2. domain_name in the network object - this one is used by Designate
14:25:53 <slaweq> is that correct?
14:25:58 <mlavalle> and
14:26:07 <mlavalle> 3. dns_domain in port
14:26:20 <mlavalle> the last 2 won't be affected by this proposal
14:26:43 <mlavalle> because those are for external DNSaaS integration
14:26:45 <slaweq> sorry s/domain_name/dns_domain in my points :)
14:27:02 <mlavalle> only point 1 will be worked on
14:27:28 <mlavalle> with a spec in the middle
14:27:37 <slaweq> mlavalle: dns_domain in port can override dns_domain set for network, right?
14:27:47 <mlavalle> yes
14:28:03 <slaweq> ok, so at least those 2 are related to each other :)
14:29:04 <mlavalle> yeah, I never said those two were unrelated... only that they are targeted at external dns integration
14:29:04 <ralonsoh> (It would be great to see a good document as an output of this RFE, just to have a good reference for developers and users)
14:29:32 <slaweq> mlavalle: I know, I was just summarizing it for myself :)
14:29:52 <mlavalle> ;-)
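As a companion to the summary above, this is roughly how points 2 and 3 (the Designate-facing attributes, which this RFE leaves untouched) are driven through openstacksdk; the cloud name, UUID placeholders, and domain values are assumptions for illustration:

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # illustrative cloud name

# dns_domain on the *network* is the default zone used when Neutron publishes
# records to an external DNS service such as Designate.
conn.network.update_network("<net-uuid>", dns_domain="team-a.example.org.")

# With the "dns_domain for ports" extension, a per-port dns_domain overrides
# the network's value for that port's records (point 3 in the summary above).
conn.network.update_port("<port-uuid>", dns_domain="special.example.org.")
```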
14:30:22 <slaweq> I agree that we will need spec to move forward with this
14:30:49 <slaweq> but my feeling is that we will then run into the same issue as in https://bugs.launchpad.net/neutron/+bug/1826419 ;)
14:30:49 <openstack> Launchpad bug 1826419 in Ubuntu Cloud Archive queens "dhcp agent configured with mismatching domain and host entries" [Undecided,Fix committed]
14:30:56 <mlavalle> hjensas is also interested in this
14:30:57 <slaweq> so "revert of revert" :)
14:31:49 <mlavalle> well, that is why I emphasized that for the purposes of the upcoming spec, we need to be careful in preserving the current behavior for those users who want it
14:32:06 <njohnston_> I'm all in favor of this, I think this is an area that needs reform and it is most welcome.  A spec is definitely necessary and should also help demystify and disseminate understanding of how the dns integration works.
14:32:07 <slaweq> mlavalle: are you going to propose such a spec?
14:32:22 <mlavalle> yes
14:32:39 <slaweq> thx mlavalle
14:32:44 <mlavalle> together with one of my co-workers, hamza
14:34:01 <slaweq> so I think we can approve this RFE with a note that we first need a detailed spec to demystify the current situation and a good plan for how to proceed with this
14:34:07 <slaweq> what do You think?
14:34:29 <amotoki> +1
14:34:42 <njohnston_> +1
14:34:42 <ralonsoh> +1
14:34:54 <haleyb> +1
14:35:59 <yamamoto> +1
14:36:12 <slaweq> thx
14:36:23 <slaweq> I will summarize this discussion in a comment on the LP bug
14:36:31 <slaweq> that's all I had for today
14:36:47 <slaweq> #topic On Demand Agenda
14:36:53 <slaweq> I have one short reminder
14:37:02 <slaweq> the (virtual) PTG will be in less than a month
14:37:21 <slaweq> please add your ideas for Neutron topics to https://etherpad.opendev.org/p/neutron-victoria-ptg
14:37:23 <slaweq> :)
14:37:31 <slaweq> and that's all from my side for today
14:37:39 <slaweq> anything else You want to discuss today?
14:38:04 <ralonsoh> no thanks
14:38:10 <amotoki> nothing from me
14:38:19 <njohnston_> nothing here.  thanks all!
14:38:39 <yamamoto> nothing from me
14:38:51 <haleyb> nothing here
14:38:52 <amotoki> slaweq: it was really nice to have an agenda in the list
14:39:22 <njohnston_> +1 thanks for doing that
14:39:28 <slaweq> amotoki: thx, I will do my best to send it every week
14:39:41 <slaweq> just so everyone can be better prepared for this meeting :)
14:39:50 <slaweq> ok, thx for attending
14:39:57 <slaweq> and have a great weekend :)
14:40:00 <ralonsoh> bye
14:40:01 <slaweq> bye
14:40:06 <slaweq> #endmeeting