14:00:07 <ralonsoh> #startmeeting neutron_drivers
14:00:07 <opendevmeet> Meeting started Fri Jun 30 14:00:07 2023 UTC and is due to finish in 60 minutes.  The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:07 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:07 <opendevmeet> The meeting name has been set to 'neutron_drivers'
14:00:10 <ralonsoh> hello all!
14:00:14 <ykarel> o/
14:00:21 <haleyb> o/
14:00:29 <slaweq> o/
14:00:31 <ralonsoh> just before starting: please check https://review.opendev.org/c/openstack/neutron-specs/+/885324
14:00:45 <ralonsoh> next week will be the last one to merge a neutron spec for this cycle
14:00:53 <ralonsoh> that's all (before starting)
14:01:18 <mlavalle> \o/
14:01:20 <ralonsoh> frickler?
14:01:43 <ralonsoh> I think we have quorum, let's start
14:01:51 <ralonsoh> 1) [RFE] Formalize use of subnet service-type for draining subnets
14:01:57 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2024921
14:02:20 <ralonsoh> is frickler here?
14:02:30 <obondarev> hi
14:02:56 <ralonsoh> ok, let's move to the second one for now
14:02:58 <ralonsoh> 2) [RFE] Caching instance id in the metadata agent to have less RPC messages sent to server
14:03:03 <ralonsoh> https://bugs.launchpad.net/neutron/+bug/2024581
14:03:07 <ralonsoh> slaweq, please
14:03:24 <slaweq> I opened it based on some feedback from the forum in Vancouver
14:04:05 <slaweq> basically someone in the large scale deployments room was saying that adding caching of the instance_id in metadata agent lowered load on rabbitmq significantly
14:04:44 <slaweq> and as we checked with ralonsoh it seems that we are asking neutron server for instance id with every request to the metadata service
14:05:16 <slaweq> maybe we should ask for it once and then cache it for short time locally
14:05:44 <slaweq> so when guest vm is booted and cloud-init is doing many requests to the metadata server it will do just one rpc query to neutron-server
14:05:46 <mlavalle> so, each agent in each compute would have a cache?
14:05:48 <lajoskatona> sahid added a comment saying that, if I understand well, they already use caching for metadata in their env
14:05:54 <slaweq> that's the whole idea
14:06:11 <ralonsoh> the OVS RPC cache implementation is better than using oslo_cache
14:06:11 <lajoskatona> without code change, would be interesting to try it (I never tried it personally but happy to test it)
14:06:29 <ralonsoh> the RPC cache will subscribe to the resource events (ports in this case)
14:06:38 <ralonsoh> and will have always the updated information
14:06:54 <slaweq> lajoskatona I didn't know we can use that memcache there
14:07:08 <slaweq> maybe that's the solution then, I will need to test it
14:07:12 <ralonsoh> and will run faster because it won't issue any RPC call (regardless of whether it would be caught by oslo cache)
14:07:15 <lajoskatona> neither did I, so something new to check for me at least :-)
14:07:19 <slaweq> thx a lot
14:07:37 <mlavalle> yeah, let's give it a try
14:07:55 <slaweq> so I will check it and we can come back to that rfe later
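[Editor's note: the short-lived local cache slaweq describes above could be sketched as follows. This is a hypothetical illustration, not the actual metadata-agent code; the class name and the loader callback are invented for the example.]

```python
import time


class TTLCache:
    """Cache instance_id lookups for a short time so that the many
    cloud-init requests at boot trigger only one RPC call to
    neutron-server instead of one per request."""

    def __init__(self, ttl=30):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and entry[1] > now:
            return entry[0]               # cache hit: no RPC call made
        value = loader(key)               # cache miss: one RPC call
        self._store[key] = (value, now + self.ttl)
        return value
```

As ralonsoh points out in the discussion, the trade-off is that a time-based cache can serve stale data for up to `ttl` seconds, unlike a cache subscribed to resource events.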
14:09:36 <ralonsoh> ok, I'm against using the oslo cache, just for the records
14:09:53 <slaweq> why?
14:10:05 <ralonsoh> because that won't have the most updated information
14:10:19 <ralonsoh> oslo cache caches the RPC call results and stores the info
14:10:34 <ralonsoh> but it doesn't store the latest DB info
14:10:46 <ralonsoh> as we can achieve with the OVS RPC cache implementation
14:10:52 <ralonsoh> OVS agent
14:11:28 <ralonsoh> in any case, the oslo cache is just configuration, no code is needed
14:11:33 <lajoskatona> I think if it is documented as an option with all its effects, it can be a choice for the operator, of course without knowing now if it really works
14:11:48 <slaweq> yep
14:12:07 <ralonsoh> that's ok, but are we going to go further?
14:12:15 <slaweq> it has some (small) cons, so if we test it and document it properly, it can work IMO
14:12:45 <slaweq> I will test this oslo cache thing and will then see if we need any changes in docs or somewhere else
14:12:54 <ralonsoh> ok, so the output of this RFE is documentation, right?
14:13:07 <slaweq> if I need any other discussion about it, I will get back to you and bring it here :)
14:13:12 <ralonsoh> perfect
14:13:15 <lajoskatona> +1
14:13:20 <mlavalle> +1
14:13:25 <lajoskatona> thanks slaweq for bringing it here
14:13:48 <ralonsoh> so we have 2 votes in favor of this RFE
14:13:50 <ralonsoh> +1 mine too
14:13:58 <ralonsoh> please, vote for this RFE
14:14:14 <haleyb> +1 from me
14:14:21 <ralonsoh> obondarev?
14:15:11 <ralonsoh> ok, we have enough votes I think
14:15:17 <ralonsoh> so RFE approved
14:15:20 <ykarel> +1
14:15:23 <obondarev__> which one? sorry got disconnected
14:15:24 <mlavalle> he seems to have dropped off
14:15:47 <ralonsoh> obondarev__, https://bugs.launchpad.net/neutron/+bug/2024581
14:16:06 <obondarev> thanks ralonsoh, lgtm , +1
14:16:09 <ralonsoh> thanks
14:16:14 <ralonsoh> I'll update the LP bug
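[Editor's note: the oslo.cache-only approach discussed above is plain configuration. The section and option names below are standard oslo.cache options; whether the metadata agent honors them in a given release is exactly what slaweq plans to verify, so treat this as a sketch.]

```ini
[cache]
enabled = true
backend = dogpile.cache.memcached
memcache_servers = 127.0.0.1:11211
# Keep the TTL short: as ralonsoh notes in the discussion, this cache
# does not subscribe to resource events, so cached port data can be
# stale for up to expiration_time seconds.
expiration_time = 30
```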
14:16:40 <ralonsoh> ok, let's go for the third one
14:16:43 <ralonsoh> 3) [rfe][ml2] Add a new API that supports cloning a specified security group
14:16:51 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2025055
14:17:04 <ralonsoh> I don't know the nickname of Liu Xie
14:17:27 <ralonsoh> in any case, did you check the proposal?
14:17:42 <ralonsoh> what is proposed is to have an API to clone SG+rules
14:18:45 <ralonsoh> any feedback? comment?
14:19:12 <obondarev> not sure how common this use case is
14:19:40 <obondarev> when one would need an SG with same rules?
14:19:48 <mlavalle> In principle I side with Luyong, comment 2
14:20:00 <slaweq> IMO this can be easily scripted using existing API and having that in neutron would be overcomplicating things
14:20:01 <obondarev> +1 https://bugs.launchpad.net/neutron/+bug/2025055/comments/2
14:20:13 <slaweq> but that's just my opinion
14:20:28 <mlavalle> luyong and I agree
14:20:36 <mlavalle> with slaweq I mean
14:20:41 <lajoskatona> +1 for Liu's comment
14:20:56 <ralonsoh> and I agree too, the Neutron API should be "atomic"
14:21:33 <haleyb> to me it seemed possibly related to the default SG template work, i.e. admin wants to define the default set of SG rules? I didn't get a response to my question though
14:21:34 <mlavalle> I liked the way Yulong put it: "concise and fundamental"
14:21:58 <ralonsoh> haleyb, could help, for sure
14:22:01 <obondarev> haleyb: well noted
14:22:06 <ralonsoh> and slaweq is working on it
14:22:46 <ralonsoh> ok, I think we all think the same here, if I'm not wrong
14:23:04 <ralonsoh> let's vote first
14:23:06 <ralonsoh> -1
14:23:11 <obondarev> -1
14:23:13 <slaweq> -1
14:23:13 <haleyb> -1
14:23:16 <ykarel> -1
14:23:30 <ralonsoh> I'll update the LP bug with the feedback provided in this conversation
14:23:31 <lajoskatona> +1
14:23:39 <lajoskatona> -1
14:23:40 <mlavalle> =1
14:23:41 <ralonsoh> ahh
14:23:45 <lajoskatona> I mean (sorry)
14:23:47 <mlavalle> +1
14:24:15 <ralonsoh> mlavalle, +1?
14:24:52 <ralonsoh> anyway, the RFE is not approved, I'll update the LP today
14:24:55 <ralonsoh> thanks
14:24:56 <haleyb> he would need a +7
14:25:00 <mlavalle> +1
14:25:08 <slaweq> haha
14:25:32 <obondarev> already +3 :)
14:25:36 <slaweq> even +2+W is just kind of +3 in total, so far from +7 :P
14:26:02 <ralonsoh> think about people reading these logs 10 years later
14:26:15 <slaweq> LOL
14:26:16 <ralonsoh> ok, let's jump again to the first RFE
14:26:21 <ralonsoh> 1) [RFE] Formalize use of subnet service-type for draining subnets
14:26:27 <ralonsoh> #link https://bugs.launchpad.net/neutron/+bug/2024921
14:26:35 <ralonsoh> I'll try to summarize it
14:26:42 <ralonsoh> but I think you know this proposal
14:26:53 <mlavalle> it's just documentation, isn't it?
14:26:58 <ralonsoh> not really
14:27:05 <obondarev> this one is basically doc update, right?
14:27:07 <slaweq> IIUC this rfe is about making officially supported something that already works in Neutron
14:27:18 <slaweq> doc and some testing maybe
14:27:24 <ralonsoh> we need to add this service type to the IPAM module
14:27:36 <ralonsoh> in order to avoid it when assigning IPs
14:27:46 <ralonsoh> so no, it is not only documentation
14:28:08 <ralonsoh> but we need to decide on the constant to be used as the service type
14:28:32 <ralonsoh> but apart from this, what do you think about this proposal?
14:28:38 <ralonsoh> apart from the implementation
14:28:56 <slaweq> sounds reasonable for me
14:29:01 <slaweq> so +1
14:29:04 <mlavalle> seems the use case is justified
14:29:11 <obondarev> I thought any "unknown" service type would prevent IPAM from allocating IPs from this subnet. The author said they are already using it like this
14:29:16 <lajoskatona> yeah some testing (tempest maybe) is necessary to keep the functionality working
14:29:16 <obondarev> am I missing something?
14:29:37 <mlavalle> that's what I understood
14:29:41 <mlavalle> same as obondarev
14:29:46 <ralonsoh> obondarev, yes, but that's the point: not using any random value
14:30:04 <obondarev> got it, fair enough, thanks
14:30:16 <obondarev> +1
14:30:27 <ralonsoh> but yes, now IPAM will skip any service type not null
14:30:35 <haleyb> we can bikeshed on the name later :)
14:30:43 <ralonsoh> exactly
14:30:54 <ralonsoh> so +1 from me, makes sense to have this feature
14:31:01 <mlavalle> +1
14:31:07 <haleyb> +1, especially to get it documented on how to use
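[Editor's note: the draining trick discussed above relies on how IPAM matches subnet service types against a port's device_owner. The simplified model below illustrates that behavior; the function is invented for the example, and "network:drain" stands in for the yet-to-be-named constant (haleyb's bikeshed).]

```python
def candidate_subnets(subnets, device_owner):
    """Simplified model of IPAM subnet selection: a subnet with
    service_types only serves ports whose device_owner is listed;
    a subnet with no service_types serves any port."""
    return [s for s in subnets
            if not s["service_types"] or device_owner in s["service_types"]]

# A subnet tagged with a value no real device_owner ever matches (the
# placeholder "network:drain" here) receives no new allocations, which
# is the draining behavior the RFE wants to formalize and document.
```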
14:31:39 <ralonsoh> ok, that was quite productive today!
14:31:43 <ralonsoh> 3 RFEs in 30 mins
14:31:52 <ralonsoh> I'll update the LP bugs now
14:32:03 <ralonsoh> anything else you want to bring here??
14:32:26 <ralonsoh> so thank you all for attending the meeting
14:32:32 <ralonsoh> and have a nice weekend
14:32:37 <ralonsoh> #endmeeting