14:00:13 #startmeeting neutron_drivers
14:00:13 Meeting started Fri Jun 13 14:00:13 2025 UTC and is due to finish in 60 minutes. The chair is haleyb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:13 The meeting name has been set to 'neutron_drivers'
14:00:15 Ping list: ykarel, mlavalle, mtomaska, slaweq, obondarev, tobias-urdin, lajoskatona, haleyb, ralonsoh
14:00:16 \o
14:00:32 o/
14:00:33 hello
14:00:50 Hello
14:01:00 o/
14:01:31 there was only one item on the agenda, Dai said he would join
14:01:40 #link https://bugs.launchpad.net/neutron/+bug/2112446
14:02:52 I added my rfe to the agenda. Is that not going to be discussed today?
14:03:47 jimin3shin[m]: i don't see it on the agenda, unless i have the wrong nick for you?
14:03:54 https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
14:03:57 nothing there
14:04:04 let's first discuss the open topic
14:04:15 daidv, hello!
14:04:46 Hi ralonsoh
14:05:15 ralonsoh: It's in the "Ready for discussion" section.
14:05:16 please, present your RFE
14:06:15 Merged openstack/neutron-tempest-plugin master: [CI][2025.1] Skip more failing test in ubuntu jammy ovn job https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/952546
14:06:22 jimin3shin[m]: i see it now, we can talk about it after the current one from daidv2
14:06:52 Okay, thank you!
14:07:13 daidv2: can you give an overview of your RFE?
14:08:35 i don't know if he's having connection issues
14:09:25 he is having trouble with the connection
14:09:40 so maybe we can discuss jimin3shin[m]'s RFE while that is figured out
14:09:42 sorry ralonsoh, i was disconnected
14:10:04 I'm online now
14:10:10 daidv: great
14:11:07 So, can we discuss my proposal first, or?
14:11:27 yes
14:11:30 go ahead
14:11:32 daidv: yes, since you're back online we can discuss yours
14:12:51 Ok, normally openstack neutron provides internal dns resolution / a "dns-proxy" via the dhcp agent (dnsmasq)
14:13:13 we already implemented distributed DHCP, aka flow-based dhcp
14:13:55 #link https://bugs.launchpad.net/neutron/+bug/1900934 for reference
14:14:18 My idea is to provide the dns-proxy the same way, then we can make internal dns resolution work like aws/gcp over 169.254.169.253/169.254.169.254
14:14:48 based on ovs flows and an os-ken app working as a dns-proxy
14:14:54 or similar to OVN, right?
14:15:08 100% yes
14:15:55 ovn also checks the northbound db
14:16:11 that means only records stored in Neutron will be served locally?
14:16:15 and then decides if it needs to forward to the real dns
14:16:34 ralonsoh: no, my idea is a dns proxy only
14:16:36 you mean an external DNS service?
14:17:06 yeah, not really external DNS but an internal cloud provider dns, for both purposes
14:17:30 1. resolve names like ***.database.openstack.local to a private IP
14:18:04 2. it can even resolve public domains, depending on the real DNS server deployed by the cloud owner
14:18:41 The case study from my side is that i want to resolve the domain of the database service without the dhcp agent, without ovn
14:19:11 But we can do more than that, for any cloud service which needs a domain and failover
14:19:58 in the case of a database service, when we fail over the database instance the ip changes, so dns is a common solution to deal with it
14:20:10 to make an in-VPC instance connect to the database in that VPC
14:20:14 VPC=VXLAN
14:22:14 thanks, as I see it this can be useful not only for db instances but for a wider audience
14:23:56 sure, for example a highly available vpn cluster based on dns
14:24:17 how do we store the DNS records in Neutron?
14:24:22 I'm trying to find it
14:24:53 Right now, neutron has a plugin to trigger creating new dns records in Designate
14:25:13 but that's different work
14:25:43 1. We do not normally need to create a dns record for each VM
14:26:16 2. after Designate stores the dns record in its DB, it syncs it via miniDNS <-> PowerDNS/Bind9
14:26:49 So we still need a way to reach our PowerDNS/Bind9, via dnsmasq inside the DHCP Agent or via OVN
14:27:04 or we need to provide a VLAN for the VM and let the network team handle it
14:28:02 that seems outside the scope of this rfe?
14:28:12 I think so
14:29:09 ok, I think that could be a discussion for the spec, I would like to know what new information will need to be passed, via RPC or whatever, to the OVS agent
14:29:10 we don't care about how the dns record is created, it can be created by neutron-designate or by a CMP outside openstack
14:29:30 ralonsoh: ++
14:30:08 ah ok, we need to provide the DNS Server IP to the OVS Agent via configuration
14:30:53 and we will add an ovs flow (automatically, every time the ovs agent restarts) to capture packets going to 169.254.169.253:53
14:31:04 and then send them to CONTROLLER:0
14:31:45 can't that flow be installed by the agent extension?
14:31:47 inside the controller, we make an os-ken app like the dhcp app now, but it will work like a dns proxy
14:32:01 I think I asked that in the wip review as well
14:32:33 lajoskatona: sure, we can: when the extension is enabled --> we restart the ovs agent --> the flow will be installed
14:32:46 but perhaps ralonsoh is right about a spec to review this and have space for questions
14:33:29 https://specs.openstack.org/openstack/neutron-specs/specs/wallaby/distributed_dhcp.html
14:33:42 The solution is similar to this spec
14:34:00 so you simply pass all dns requests to the u/s dns servers, which are '2606:4700:4700::1111', '1.1.1.1' in your PoC? Do I get it right?
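The mechanism described above (an ovs flow punting traffic for the link-local DNS VIP to the controller, where an os-ken handler relays it to the operator's resolver) can be sketched roughly as follows. This is a minimal illustration, not code from the RFE or its PoC: names like `build_dns_capture_flow` and `relay_dns_query` are made up for this sketch, the upstream resolver address is an assumed config value, and the real implementation would live inside an ovs-agent extension and an os-ken packet-in handler.

```python
# Illustrative sketch only; function names and the upstream resolver address
# are assumptions, not actual Neutron code.
import socket

DNS_PROXY_IP = "169.254.169.253"   # link-local VIP advertised via the subnet DNS option
UPSTREAM_DNS = ("10.0.0.53", 53)   # operator-configured resolver (assumed config value)


def build_dns_capture_flow(table=60, priority=200):
    """Return an ovs-ofctl style flow that punts VIP-bound DNS to CONTROLLER:0.

    The agent extension would (re)install this flow on every ovs agent restart.
    """
    match = f"table={table},priority={priority},udp,nw_dst={DNS_PROXY_IP},tp_dst=53"
    return f"{match},actions=CONTROLLER:0"


def relay_dns_query(payload, upstream=UPSTREAM_DNS, timeout=2.0):
    """Forward a raw DNS query payload to the upstream resolver and return the answer.

    In the real extension this would run inside the os-ken packet-in handler,
    with the answer sent back to the VM via a packet-out; only the compute
    node needs reachability to the resolver, so isolated VXLAN networks work.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(payload, upstream)
        answer, _ = s.recvfrom(4096)
    return answer
```

The key property discussed later in the meeting follows from this shape: the VMs only ever see 169.254.169.253, while reachability to the real resolver is required only from the compute nodes.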
14:34:12 yes, it's good that we already have a similar example
14:34:37 I have the same question as slaweq, that doesn't match your previous comments
14:34:58 slaweq: only packets matching 169.254.169.253:53, and the dns servers can be internal
14:35:30 It depends on the design of each system
14:35:40 but how will you resolve internal names, e.g. those db servers you mentioned earlier?
14:36:13 with the dhcp agent we have dnsmasq, which knows that such a hostname (vm name) is associated with this IP address and replies to such requests IIRC
14:36:18 my internal DNS server holds those dns records
14:36:47 the dns records are created by designate or by my own cmp
14:37:22 imagine I use trove, so Trove generates the dns record in Designate
14:37:29 ok, so it is not like in e.g. ovn where neutron-ovs-agent would have this knowledge; it will behave only as a proxy passing dns requests to some external dns server (configured by you)
14:37:32 correct?
14:38:07 correct
14:38:38 then why not set this DNS server in the DHCP information?
14:38:41 so what's the point of having all of this, can't you just configure your dns server in the vm?
14:38:52 ^^ same comment
14:38:55 ralonsoh exactly, you were faster asking this :)
14:39:17 how can you make a VM inside a VXLAN network without an external network
14:39:25 connect to a DNS Server?
14:39:46 so we can make that possible
14:39:58 ok, now I understand
14:40:11 so only the compute node needs to have access to this DNS server
14:40:11 yeah, makes sense if it is for the isolated networks
14:40:16 I missed that piece too
14:40:18 of course, we will configure 169.254.169.253 in the Subnet DHCP
14:40:25 Subnet DNS
14:40:45 ralonsoh: that, only the compute
14:40:56 now it makes sense to me, thx for the explanation
14:41:08 then OK for me, that extension makes sense, I think I'm ready to vote
14:41:28 will we still have a spec?
14:41:34 yes for sure
14:41:40 i would like to see a spec as well
14:42:01 in that case I'm ready to vote, then
14:42:12 we can vote, then daidv can you work on a spec and we can discuss more details there?
14:42:19 +1 (plus a spec)
14:42:26 +1 with a spec
14:42:29 +1
14:42:38 +1
14:42:51 +1 from me
14:43:06 just remember about updating our docs too :)
14:43:27 ack. i'll mark it approved and add a note
14:43:36 thanks daidv for presenting!
14:43:45 thank you all
14:44:19 I will make a spec, and comment on the RFE
14:44:43 jimin3shin[m]: ok, thanks for waiting, i think we have time to discuss your rfe
14:44:50 #link https://bugs.launchpad.net/neutron/+bug/2107316
14:45:18 and i'll admit i missed your response in the bug about attending today
14:46:59 oh I didn't notice that I had to put this on the on demand agenda
14:47:25 do I have to describe the rfe?
14:47:27 jimin3shin[m]: np, it's maybe not obvious
14:47:44 that's ok, let's move on, we are running out of time
14:49:01 I suggest adding more information to the network-ip-availabilities response, to calculate the number of available ips "in the whole network" and the number of available ips "in the allocation pools".
14:49:19 The current api response doesn't have enough information to calculate them.
14:50:07 This is the spec for the change.
14:50:07 https://review.opendev.org/c/openstack/neutron-specs/+/947826
14:50:43 thanks for the spec
14:52:20 so basically you are adding used_ips_in_allocation_pool and allocation_pool_ips
14:52:23 right?
14:53:23 (and changing used_ips to used_ips_in_total)
14:54:05 This looks like "summary" information instead of a port list by specific subnet
14:54:09 hello?
14:54:21 so my first question is can we just add more info in the GET without adding an extension?
14:54:44 if we change the API that will require an extension
14:54:52 Elvira García Ruiz proposed openstack/neutron stable/2025.1: Consider logging options when using OVNdbsync https://review.opendev.org/c/openstack/neutron/+/952583
14:54:57 but we can add fields only, instead of changing the current ones
14:55:02 this is always easiest
14:55:05 oh I see
14:55:43 then I think it's better to add more fields instead of changing them, because in the spec I was changing the value of total_ips
14:55:44 from reading the older bug https://bugs.launchpad.net/neutron/+bug/1843211 it was hard to see what the plan was
14:56:40 in the current api response, total_ips is different depending on whether the network has allocation pools
14:57:06 Elvira García Ruiz proposed openstack/neutron stable/2025.1: Consider logging options when using OVNdbsync https://review.opendev.org/c/openstack/neutron/+/952583
14:57:31 so I suggested changing total_ips to "total_ips and allocation_pool_ips"
14:58:15 but I think it's better to keep total_ips and add two more fields for "total_ips and allocation_pool_ips" in the spec
14:58:23 and totals for used ips, correct?
14:58:39 there is one thing to remember regarding allocation_pools - IIRC even if allocation_pool(s) are defined, an admin can still manually allocate an IP address from outside of the allocation pool for a port. It is only the automated allocation that is restricted to the pools for regular users
14:58:41 yes
14:58:53 so I think that proposed change makes a lot of sense
14:59:22 agree, it is just a matter of knowing exactly what each parameter means
14:59:36 total_ips, total_available_ips and then available_allocation_pool_ips makes sense IMO
14:59:38 the documentation in the Neutron API must be sharp
14:59:52 ralonsoh ++
15:00:00 +1
15:00:09 +1
15:00:18 yes, if we are updating this let's make sure it's clear
15:00:19 in any case, as slaweq mentioned, this is good
15:00:23 so +1 for it
15:00:30 +1
15:00:43 +1 from me also for the idea and RFE
15:00:43 +1
15:01:12 jimin3shin[m]: ok, i'll approve it, we can continue review in the spec you already proposed
15:01:22 #link https://review.opendev.org/c/openstack/neutron-specs/+/947826
15:01:52 ok, we are at time, thanks for attending everyone, have a good weekend!
15:01:56 #endmeeting
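The per-subnet counters discussed in the second RFE can be sketched as below. This is a minimal illustration under stated assumptions, not the API change itself: the field names follow the meeting discussion but the exact shape is still to be settled in the spec, the `ip_availability` helper is made up for this sketch, and it handles the simple IPv4 case (excluding network and broadcast addresses) only.

```python
# Illustrative sketch only; ip_availability is a hypothetical helper, not
# Neutron code, and the field names are those discussed in the meeting.
import ipaddress


def ip_availability(cidr, pools, used):
    """Derive the discussed counters from a subnet CIDR, its allocation
    pools (list of (start, end) address pairs), and the used addresses.

    Note that used addresses may fall outside the pools, since an admin can
    manually allocate an IP outside the allocation pools for a port.
    """
    net = ipaddress.ip_network(cidr)
    total_ips = net.num_addresses - 2  # exclude network/broadcast (IPv4 case)
    pool_ints = {ip for start, end in pools
                 for ip in range(int(ipaddress.ip_address(start)),
                                 int(ipaddress.ip_address(end)) + 1)}
    used_ints = {int(ipaddress.ip_address(u)) for u in used}
    return {
        "total_ips": total_ips,
        "used_ips_in_total": len(used_ints),
        "allocation_pool_ips": len(pool_ints),
        "used_ips_in_allocation_pool": len(used_ints & pool_ints),
    }
```

With these four counters a client can compute both figures the RFE asks for: availability in the whole network (`total_ips - used_ips_in_total`) and availability inside the pools (`allocation_pool_ips - used_ips_in_allocation_pool`), which the current response does not allow.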