14:00:13 <haleyb> #startmeeting neutron_drivers
14:00:13 <opendevmeet> Meeting started Fri Jun 13 14:00:13 2025 UTC and is due to finish in 60 minutes.  The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:13 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:13 <opendevmeet> The meeting name has been set to 'neutron_drivers'
14:00:15 <haleyb> Ping list: ykarel, mlavalle, mtomaska, slaweq, obondarev, tobias-urdin, lajoskatona, haleyb, ralonsoh
14:00:16 <mlavalle> \o
14:00:32 <slaweq> o/
14:00:33 <ralonsoh> hello
14:00:50 <jimin3shin[m]> Hello
14:01:00 <lajoskatona> o/
14:01:31 <haleyb> there was only one item on the agenda, Dai said he would join
14:01:40 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2112446
14:02:52 <jimin3shin[m]> I added my rfe on the agenda. Is that not going to be discussed today?
14:03:47 <haleyb> jimin3shin[m]: i don't see it on the agenda, unless i have the wrong nick for you?
14:03:54 <ralonsoh> https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
14:03:57 <ralonsoh> nothing there
14:04:04 <ralonsoh> let's first discuss the open topic
14:04:15 <ralonsoh> daidv, hello!
14:04:46 <daidv2> Hi ralonsoh
14:05:15 <jimin3shin[m]> ralonsoh: It's in the "Ready for discussion" section.
14:05:16 <ralonsoh> please, present your RFE
14:06:15 <opendevreview> Merged openstack/neutron-tempest-plugin master: [CI][2025.1] Skip more failing test in ubuntu jammy ovn job  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/952546
14:06:22 <haleyb> jimin3shin[m]: i see it now, can talk about it after the current one from daidv2
14:06:52 <jimin3shin[m]> Okay, thank you!
14:07:13 <haleyb> daidv2: can you give an overview of your RFE?
14:08:35 <haleyb> i don't know if he's having connection issues
14:09:25 <mlavalle> he is having trouble with the connection
14:09:40 <haleyb> so maybe we can discuss jimin3shin[m]'s RFE while that is figured out
14:09:42 <daidv> sorry ralonsoh, I got disconnected
14:10:04 <daidv> I'm online now
14:10:10 <haleyb> daidv: great
14:11:07 <daidv> So, can we discuss my proposal first?
14:11:27 <mlavalle> yes
14:11:30 <mlavalle> go ahead
14:11:32 <haleyb> daidv: yes, since you're back online we can discuss yours
14:12:51 <daidv> Okay, normally OpenStack Neutron provides internal DNS resolution/"dns-proxy" via the DHCP agent (dnsmasq)
14:13:13 <daidv> we already implemented distributed DHCP, aka flow-based DHCP
14:13:55 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/1900934 for reference
14:14:18 <daidv> My idea is to provide a dns-proxy the same way, so internal DNS resolution can work like AWS/GCP via 169.254.169.253/169.254.169.254
14:14:48 <daidv> based on ovs flow and an os-ken app work as dns-proxy
14:14:54 <mlavalle> or similar to OVN, right?
14:15:08 <daidv> 100% yes
14:15:55 <daidv> OVN also checks the northbound DB
14:16:11 <ralonsoh> that means only stored records in Neutron will be provided locally?
14:16:15 <daidv> and then decides if it needs to forward to the real DNS
14:16:34 <daidv> ralonsoh: no, my idea is dns proxy only
14:16:36 <lajoskatona> you mean external DNS service?
14:17:06 <daidv> yeah, not really external DNS but the internal cloud provider DNS, for both purposes
14:17:30 <daidv> 1. resolve like ***.database.openstack.local to private IP
14:18:04 <daidv> 2. it can even resolve public domains, depending on the real DNS server deployed by the cloud owner
14:18:41 <daidv> The use case on my side is that I want to resolve the domain of the database service without the DHCP agent and without OVN
14:19:11 <daidv> But we can do more than that, for any cloud service which needs a domain and failover
14:19:58 <daidv> in case of a database service, when we fail over the database instance the IP changes, so DNS is a common solution to deal with it
14:20:10 <daidv> to make in-VPC instances connect to the database in that VPC
14:20:14 <daidv> VPC=VXLAN
14:22:14 <lajoskatona> thanks, as I see this can be useful not only for db instances but for a wider audience
14:23:56 <daidv> sure, for example, vpn cluster high available based on dns
14:24:17 <ralonsoh> how do we store the DNS records in Neutron?
14:24:22 <ralonsoh> I'm trying to find it
14:24:53 <daidv> Now, Neutron has a plugin to trigger creating new DNS records in Designate
14:25:13 <daidv> but that's different work
14:25:43 <daidv> 1. We don't normally need to create a DNS record for each VM
14:26:16 <daidv> 2. after Designate stores a DNS record in its DB, it syncs with miniDNS <-> PowerDNS/Bind9
14:26:49 <daidv> So, we still need a way to connect to our PowerDNS/Bind9, via dnsmasq inside the DHCP agent/via OVN
14:27:04 <daidv> or we need to provide a VLAN for the VM and let the network team handle it
14:28:02 <haleyb> that seems outside the scope of this rfe?
14:28:12 <daidv> I think so
14:29:09 <ralonsoh> ok, I think that could be a discussion for the spec, I would like to know what new information will be needed to be passed, via RPC or whatever, to the OVS agent
14:29:10 <daidv> we dont care about how to create dns record, it can be created by neutron-designate or create by CMP-outside openstack
14:29:30 <mlavalle> ralonsoh: ++
14:30:08 <daidv> ah okay, we need to provide the DNS server IP for the OVS agent via configuration
14:30:53 <daidv> and we will add an OVS flow (automatically, every time the OVS agent restarts) to capture packets going to 169.254.169.253:53
14:31:04 <daidv> and then send them to CONTROLLER:0
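A minimal sketch of the flow daidv describes: match UDP traffic to the link-local DNS address on port 53 and punt it to the local controller. The table, priority, and the `build_dns_proxy_flow` helper are illustrative assumptions, not from the RFE or Neutron code:

```python
# Hypothetical sketch of the flow described above; table and priority
# values are illustrative, not taken from the RFE or the OVS agent.
DNS_PROXY_IP = "169.254.169.253"

def build_dns_proxy_flow(table=60, priority=100):
    """Build an ovs-ofctl style flow string punting DNS queries to CONTROLLER:0."""
    return (f"table={table},priority={priority},udp,"
            f"nw_dst={DNS_PROXY_IP},tp_dst=53,actions=CONTROLLER:0")

# The agent would install something like this on br-int, e.g. via
#   ovs-ofctl add-flow br-int "<flow string>"
print(build_dns_proxy_flow())
```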
14:31:45 <lajoskatona> can't that flow be installed by the agent extension?
14:31:47 <daidv> inside the controller, we make an os-ken app like the current DHCP app, but it will work as a DNS proxy
14:32:01 <lajoskatona> I think I asked that in the wip review as well
14:32:33 <daidv> lajoskatona: sure, we can: when the extension is enabled --> we restart the OVS agent --> the flow will be installed
14:32:46 <lajoskatona> but perhaps ralonsoh is right about spec to see this and have space for questions
14:33:29 <daidv> https://specs.openstack.org/openstack/neutron-specs/specs/wallaby/distributed_dhcp.html
14:33:42 <daidv> The solution is similar to this spec
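A rough sketch of the forwarding part of such an os-ken app, reduced to plain sockets under assumptions: the `proxy_dns_query` helper and the upstream address are hypothetical, and real code would receive the query as a packet-in from the controller flow and re-inject the reply with a packet-out:

```python
import socket

def proxy_dns_query(payload: bytes, upstream: tuple,
                    timeout: float = 2.0) -> bytes:
    """Forward one raw DNS query to the upstream resolver and return its reply.

    In the real agent this logic would live inside the os-ken app: the query
    arrives as a packet-in (via the CONTROLLER:0 flow) and the reply goes back
    to the VM as a packet-out. The upstream address would come from the OVS
    agent configuration mentioned above.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(payload, upstream)
        reply, _ = sock.recvfrom(4096)
    return reply
```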
14:34:00 <slaweq> so you simply pass all dns requests to the u/s dns servers which are '2606:4700:4700::1111', '1.1.1.1' in your PoC? Do I get it right?
14:34:12 <lajoskatona> yes, that's good that we already have similar example
14:34:37 <ralonsoh> I have the same question as slaweq, that doesn't match with your previous comments
14:34:58 <daidv> slaweq: only packets matching 169.254.169.253:53, and the DNS servers can be internal
14:35:30 <daidv> It depends on the design of each system
14:35:40 <slaweq> but how you will resolve internal names like for e.g. those db servers which you've mentioned earlier?
14:36:13 <slaweq> with dhcp agent we have dnsmasq which knows that such hostname (vm name) is associated with this IP address and replies to such requests IIRC
14:36:18 <daidv> my internal DNS server holds those DNS records
14:36:47 <daidv> the DNS records are created by Designate or my own CMP
14:37:22 <daidv> imagine I use Trove, so Trove generates a DNS record in Designate
14:37:29 <slaweq> ok, so it is not like in e.g. ovn so that neutron-ovs-agent will have this knowledge, but it will behave only as proxy to pass dns requests to some external dns server  (configured by you)
14:37:32 <slaweq> correct?
14:38:07 <daidv> correct
14:38:38 <ralonsoh> then why don't set this DNS server in the DHCP information?
14:38:41 <slaweq> so what's the point of having all of this, can't you just configure your dns server in the vm?
14:38:52 <ralonsoh> ^^ same comment
14:38:55 <slaweq> ralonsoh exactly, you were faster asking this :)
14:39:17 <daidv> how can you make a VM inside a VXLAN network, without an external network,
14:39:25 <daidv> connect to a DNS server?
14:39:46 <daidv> so we can make it possible
14:39:58 <ralonsoh> ok, now I understand
14:40:11 <ralonsoh> so only the compute node needs to have access to this DNS server
14:40:11 <slaweq> yeah, makes sense if it is for the isolated networks
14:40:16 <slaweq> I missed that piece too
14:40:18 <daidv> of course, we will configure 169.254.169.253 as the subnet DNS server
14:40:45 <daidv> ralonsoh: yes, only the compute node
14:40:56 <slaweq> now it makes sense for me, thx for explanation
14:41:08 <ralonsoh> then OK for me, that extension makes sense, I think I'm ready to vote
14:41:28 <mlavalle> will we still have a spec?
14:41:34 <ralonsoh> yes for sure
14:41:40 <haleyb> i would like to see a spec as well
14:42:01 <mlavalle> in that case I'm ready to vote, then
14:42:12 <haleyb> we can vote, then daidv can you work on a spec and we can discuss more details there?
14:42:19 <ralonsoh> +1 (plus a spec)
14:42:26 <mlavalle> +1 with a spec
14:42:29 <haleyb> +1
14:42:38 <lajoskatona> +1
14:42:51 <slaweq> +1 from me
14:43:06 <slaweq> just remember about updating our docs too :)
14:43:27 <haleyb> ack. i'll mark it approved and add a note
14:43:36 <haleyb> thanks daidv for presenting!
14:43:45 <daidv> thank you all
14:44:19 <daidv> I will write a spec and comment on the RFE
14:44:43 <haleyb> jimin3shin[m]: ok, thanks for waiting, i think we have time to discuss your rfe
14:44:50 <haleyb> #link https://bugs.launchpad.net/neutron/+bug/2107316
14:45:18 <haleyb> and i'll admit i missed your response in the bug about attending today
14:46:59 <jimin3shin[m]> oh, I didn't notice that I have to put this in the on-demand agenda
14:47:25 <jimin3shin[m]> do I have to describe the RFE?
14:47:27 <haleyb> jimin3shin[m]: np, it's maybe not obvious
14:47:44 <mlavalle> that's ok, let's move on, we are running out of time
14:49:01 <jimin3shin[m]> I suggest adding more information to the network-ip-availabilities response, to calculate the number of available IPs "in the whole network" and the number of available IPs "in the allocation pools".
14:49:19 <jimin3shin[m]> The current API response doesn't have enough information to calculate them.
14:50:07 <jimin3shin[m]> This is spec for the change.
14:50:07 <jimin3shin[m]> https://review.opendev.org/c/openstack/neutron-specs/+/947826
14:50:43 <lajoskatona> thanks for spec
14:52:20 <ralonsoh> so basically you are adding used_ips_in_allocation_pool and allocation_pool_ips
14:52:23 <ralonsoh> right?
14:53:23 <ralonsoh> (and changing used_ips with used_ips_in_total)
14:54:05 <daidv> This looks like "summary" information instead of a port list for a specific subnet
14:54:09 <ralonsoh> hello?
14:54:21 <haleyb> so my first question is can we just add more info in the GET without adding an extension?
14:54:44 <ralonsoh> if we change the API that will require an extension
14:54:52 <opendevreview> Elvira García Ruiz proposed openstack/neutron stable/2025.1: Consider logging options when using OVNdbsync  https://review.opendev.org/c/openstack/neutron/+/952583
14:54:57 <ralonsoh> but we can add fields only, instead of changing the current ones
14:55:02 <ralonsoh> this is always easiest
14:55:05 <jimin3shin[m]> oh I see
14:55:43 <jimin3shin[m]> then I think it's better to add more fields instead of changing them, because in the spec I was changing the value of total_ips
14:55:44 <haleyb> from reading the older bug https://bugs.launchpad.net/neutron/+bug/1843211 it was hard to see what the plan was
14:56:40 <jimin3shin[m]> in the current API response, total_ips differs depending on whether the network has allocation pools
14:57:31 <jimin3shin[m]> so I suggested changing total_ips to "total_ips and allocation_pool_ips"
14:58:15 <jimin3shin[m]> but I think it's better to keep total_ips and add two more fields for "total_ips and allocation_pool_ips" in the spec
14:58:23 <haleyb> and totals for used ips correct?
14:58:39 <slaweq> there is one thing to remember regarding allocation_pools - IIRC even if allocation_pool(s) are defined, admin can still allocate manually IP address from outside of the allocation pool for the port. It is only for the automated allocation for the regular users
14:58:41 <jimin3shin[m]> yes
14:58:53 <slaweq> so I think that proposed change has a lot of sense
14:59:22 <ralonsoh> agree, it's just a matter of knowing exactly what each parameter means
14:59:36 <slaweq> total_ips, total_available_ips and then available_allocation_pool_ips makes sense IMO
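With fields along the lines discussed here, a client could compute both availability numbers. A small sketch; the exact field names below follow the conversation but are hypothetical until the spec settles, and the values are made up:

```python
# Hypothetical extended network-ip-availability entry; field names follow
# the discussion above but are illustrative until the spec is finalized.
availability = {
    "total_ips": 254,                  # all usable IPs in the subnet
    "used_ips": 7,                     # all allocated IPs, in or out of pools
    "allocation_pool_ips": 100,        # IPs inside the allocation pools
    "used_ips_in_allocation_pool": 5,  # allocated IPs inside the pools
}

available_in_network = availability["total_ips"] - availability["used_ips"]
available_in_pools = (availability["allocation_pool_ips"]
                      - availability["used_ips_in_allocation_pool"])
print(available_in_network, available_in_pools)  # prints: 247 95
```

This also illustrates slaweq's point: an admin can allocate outside the pools, so the two numbers are genuinely independent.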
14:59:38 <ralonsoh> the documentation in the Neutron API must be sharp
14:59:52 <slaweq> ralonsoh ++
15:00:00 <lajoskatona> +1
15:00:09 <mlavalle> +1
15:00:18 <haleyb> yes, if we are updating this let's make sure it's clear
15:00:19 <ralonsoh> in any case, as slaweq mentioned, this is good
15:00:23 <ralonsoh> so +1 for it
15:00:30 <haleyb> +1
15:00:43 <lajoskatona> +1 from me also for the idea and RFE
15:00:43 <slaweq> +1
15:01:12 <haleyb> jimin3shin[m]: ok, i'll approve it, we can continue review in the spec you already proposed
15:01:22 <haleyb> #link https://review.opendev.org/c/openstack/neutron-specs/+/947826
15:01:52 <haleyb> ok, we are at time, thanks for attending everyone, have a good weekend!
15:01:56 <haleyb> #endmeeting