15:01:50 <TheJulia> #startmeeting ironic
15:01:50 <opendevmeet> Meeting started Mon Nov 17 15:01:50 2025 UTC and is due to finish in 60 minutes. The chair is TheJulia. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:50 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:50 <opendevmeet> The meeting name has been set to 'ironic'
15:01:51 <alegacy> o/
15:01:53 <kubajj> o/
15:01:54 <iurygregory> o/
15:01:56 <clif> o/
15:02:04 <TheJulia> Good morning fellow followers of Bare Metal Irony!
15:02:10 <cardoe> Anyone know if Harald Jensås is on IRC?
15:02:16 <JayF> \o
15:02:36 <JayF> hjensas: ^
15:02:42 <cardoe> (I know the meeting started but before I forget) TheJulia: https://review.opendev.org/c/openstack/networking-baremetal/+/967367 might intersect a little with our VXLAN
15:02:54 <cardoe> Thank you JayF.
15:02:55 <dtantsur> o/
15:02:59 <rpittau> o/
15:03:02 <cardoe> o/ for attendance :D
15:03:08 <TheJulia> cardoe: you may want to add to the etherpad I created in the agenda
15:03:12 <TheJulia> Anyhow!
15:03:22 <TheJulia> Does everyone have coffee?
15:03:32 <clif> some, but never enough
15:03:37 <TheJulia> (This is now a prompt on our agenda, so trying to make sure we're all awake!)
15:03:50 <TheJulia> #topic Announcements / Reminders
15:03:58 <TheJulia> #undo
15:03:58 <opendevmeet> Removing item from minutes: #topic Announcements / Reminders
15:04:05 <TheJulia> Our agenda can be located on the wiki!
15:04:14 <TheJulia> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_November_17.2C_2025
15:04:18 <TheJulia> #topic Announcements / Reminders
15:04:33 <cid> o/
15:04:34 <TheJulia> Our standing reminder to review items labeled with the hashtag "ironic-week-prio".
15:04:45 <TheJulia> #link https://tinyurl.com/ironic-weekly-prio-dash
15:05:16 <TheJulia> This week is week R-19, meaning we're past the first milestone of the overall OpenStack project for the current development cycle.
15:05:18 <cardoe> Yes. ALL OF YOU REVIEW!
15:05:25 <TheJulia> #link https://releases.openstack.org/gazpacho/schedule.html
15:05:58 <cardoe> Anytime that link has a second page we should aim to whittle that down to 1 page. That might involve kicking patches out of that list if they just aren't ready to land this week.
15:06:06 <TheJulia> A general reminder as well, we're quickly approaching the holidays
15:06:28 <TheJulia> Some folks only have ~3-4 more weeks left in the year. So... Get stuff posted and under review!
15:06:45 <TheJulia> Does anyone else have anything to announce or remind us of?
15:06:59 <dtantsur> #link https://groups.google.com/g/metal3-dev/c/GnyEuOCk5Gc Metal3 virtual meetup this Thursday
15:07:04 <TheJulia> cardoe: I was thinking about that, and then I was thinking "I need to review harder!"
15:07:08 <dtantsur> not very NA friendly time, unfortunately
15:07:43 <TheJulia> Seriously though, the key right now is to advance and good is better than nothing and perfection is the enemy of good.
15:07:54 <TheJulia> dtantsur: oh well :(
15:08:02 <TheJulia> Anything else, or are we safe to proceed?
15:08:23 * dtantsur has nothing
15:08:37 <TheJulia> Okay then!
15:08:52 <TheJulia> #topic Working Group Updates
15:08:59 <TheJulia> We have 3 working group etherpads now!
15:09:09 <TheJulia> First up is Standalone networking!
15:09:23 <alegacy> Thanks for the reviews on my new split patch series.
15:09:23 <TheJulia> alegacy: since you've been driving that, anything to update us on aside from you splitting the patches up last week?
15:09:38 <alegacy> I've updated my patches with fixes to those comments.
15:09:43 <TheJulia> Excellent!
15:09:49 <alegacy> except for a couple that are questions I was unclear about.
15:10:15 <alegacy> having some trouble with the zuul job on this one (https://review.opendev.org/c/openstack/bifrost/+/962038) and the next one up for it... not sure I did anything to break it though.
15:11:03 <alegacy> other than that... just in a holding pattern.
15:11:11 <TheJulia> Any determinable failure pattern emerging?
15:11:24 <alegacy> seems like a timeout to boot a node
15:11:42 <alegacy> same thing happened several updates ago but cleared after a recheck
15:11:53 <TheJulia> best to check the console log then. I suspect it may be unlikely for you to have broken it at that point, but still good to look
15:11:55 <alegacy> this time it didn't clear up
15:12:15 <janders> o/
15:12:21 <TheJulia> Greetings janders!
15:12:23 <janders> (sorry for being late, in transit)
15:12:25 <TheJulia> Okay, onward!
15:12:31 <TheJulia> Next up, asyncio!
15:12:57 <janders> if that's OK I have a couple updates/questions - would be awesome if I can squeeze in after the current topic
15:13:19 <cid> I doubt we have any updates in regards to asyncio yet!
15:13:35 <cid> clear
15:13:45 <TheJulia> cid: fair
15:13:51 <dtantsur> nothing from me either
15:13:57 <TheJulia> Next up, our new addition! VXLAN Networking
15:14:13 <TheJulia> This etherpad is fresh and new, having been created just half an hour ago!
15:14:22 <TheJulia> #link https://etherpad.opendev.org/p/ironic-vxlan
15:14:29 <cardoe> Still has that fresh out of the oven smell
15:14:32 <TheJulia> I think the tl;dr is we're trying to spread context and awareness
15:14:35 <TheJulia> cardoe: indeed!
15:15:05 <TheJulia> There is a new version of the ironic spec. It seems like the Neutron one to get the idea across is where they are wanting a demo
15:15:14 <TheJulia> I guess, more discussion and consensus building will be necessary!
15:15:20 <cardoe> I'm looking to update my docs a little bit on L2VNI vs L3VNI so that we can convey the segment binding.
15:16:03 <TheJulia> cardoe: check out my spec, I detail all the why of "L3VNI" and why "existing overlay VXLAN" doesn't work.
15:16:22 <TheJulia> I also have a crazy idea, support geneve since internally ovn entirely ignores the segmentation_id
15:16:31 <cardoe> Yeah I saw you add that which made me think I should add some details.
15:16:33 <TheJulia> and we still need to plumb an attachment
15:16:37 <TheJulia> cool cool
15:16:44 <TheJulia> Onward if there is nothing else
15:17:10 <TheJulia> We have no standing Discussion topics right now, so onward to the Bug Deputy!
15:17:19 <TheJulia> #topic Bug Deputy Updates
15:17:34 <TheJulia> cid: the floor is yours
15:17:45 <cid> There were two bugs and two RFEs.
15:18:20 <cid> One I'm not certain is worthy of an RFE
15:18:21 <cid> https://bugs.launchpad.net/ironic/+bug/2131055 - Support segmented serial console port range (RFE?)
15:18:21 <cid> https://bugs.launchpad.net/networking-generic-switch/+bug/2114451 - Ports should contain a reference to the segment_id they are bound to
15:18:58 <TheJulia> so that n-g-s one, I think it is actually a bug. Honestly, if it didn't exist and wasn't already highlighted in neutron code as being problematic, I might have raised it under embargo.
15:19:33 <TheJulia> I'm +1 to the first RFE
15:20:13 <dtantsur> No objections to the RFE either
15:20:13 <cid> ++
15:21:16 <TheJulia> Is there anything further to discuss regarding these RFEs?
15:22:20 <cardoe> Can n-g-s store extra data in a binding on a neutron port? Cause if not, we need it to be fixed in neutron.
15:22:51 <TheJulia> Technically, we might be able to if the field exists.
15:23:03 <TheJulia> But it *is* an awful bug
15:23:22 <TheJulia> I added it to NGS because the same pattern is referenced, fwiw
15:24:28 <cardoe> yep agreed.
15:24:39 <TheJulia> So onward to Open Discussion?
15:24:55 <janders> I have two items (if my transit comms allow)
15:25:14 <TheJulia> #topic Open Discussion
15:25:19 <TheJulia> janders: the floor is yours
15:25:23 <janders> 1) Fujitsu
15:25:39 <janders> the iRMC deprecation patch is merged
15:25:52 <janders> communications with FJ went relatively well
15:26:03 <janders> my main question is: what is our plan for removal of iRMC
15:26:22 <janders> normally we would want it deprecated for a release and then move on to removal
15:26:33 <janders> but from PTG my impression is we want to move as soon as practical
15:26:51 <janders> I'd be interested in your thoughts, in particular TheJulia and JayF
15:27:16 <JayF> Was iRMC in that list of dying drivers a couple years ago?
15:27:23 <TheJulia> JayF: yes
15:27:41 <TheJulia> Deprecation ages ago, actually
15:28:00 <dtantsur> We only deprecated it today officially
15:28:06 <JayF> https://specs.openstack.org/openstack/ironic-specs/priorities/2024-1-workitems.html#marking-multiple-drivers-for-removal
15:28:14 <JayF> we did not list iRMC then
15:28:16 <TheJulia> Yeah, the internals were only marked today
15:28:21 <JayF> we put it in current work items
15:28:27 <JayF> so that means an 18-month timer /should/ start
15:28:29 <TheJulia> There is a prior deprecation release note
15:28:32 <JayF> oh, good
15:28:36 <dtantsur> Where?
15:28:47 <TheJulia> janders went to edit it and I -1'ed editing it
15:29:09 <JayF> what's 18 months from when that was posted? That's what we technically owe; although realistically I think technical requirements may rule over promised support timelines given the vendor bailed
15:29:31 <dtantsur> I'm only aware of a release note today
15:29:45 <TheJulia> I need to jump to another meeting in a few minutes, but I'll dig it up
15:29:59 <janders> I can't get to gerrit from my current wifi at the moment so if someone could pull it out that would be awesome
15:30:00 <JayF> In any event, we can't keep the snmp driver around longer than this cycle
15:30:16 <JayF> so I am of the opinion iRMC goes away next cycle even if it's a promise breaker
15:30:28 <JayF> is it great? No. Is it better than keeping vulnerable SNMP libraries around even longer? yes.
15:30:34 <dtantsur> I concur
15:30:40 <janders> ++
15:30:57 <janders> but - no removal this cycle, right?
15:31:45 <janders> during PTG we were thinking of moving quickly so trying to quantify how quickly, it makes sense to me that we need to give folks some minimal notice
15:31:47 <JayF> SNMP driver is gone *next* cycle
15:31:54 <JayF> iRMC depends on SNMP driver
15:32:01 <JayF> so that serves as a hard end date
15:32:06 <janders> agreed
15:32:25 <janders> ok so this sounds like a sufficient answer to me
15:32:30 <janders> any closing thoughts on the iRMC topic for now?
15:32:37 <dtantsur> For Metal3, I'm planning removal after the upcoming releases
15:32:40 <dtantsur> basically, early next year
15:33:48 <janders> thanks for your inputs, folks
15:34:20 <opendevreview> Merged openstack/ironic-python-agent-builder stable/2025.2: Wait up to 30 seconds for config drive https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/966822
15:34:29 <TheJulia> change If0d124352dec7072d7f806d60628eefe3619a8b0
15:34:52 <TheJulia> it was first put in a release note in 2020
15:35:12 <JayF> so we could kill it this cycle if we wanted
15:35:13 <TheJulia> so I think alignment with SNMP works.
15:35:19 <JayF> yep
15:35:32 <TheJulia> cool cool
15:35:41 <TheJulia> Anyway, anything else for Open Discussion?
15:36:18 <dtantsur> TheJulia: this was reverted
15:36:21 <dtantsur> anyway
15:36:23 <janders> if no further thoughts on iRMC deprecation, I'd like to raise the redfish monitoring topic. I can't pull up the patch atm (wifi issues) but I wanted to clarify how to go about making healthchecks configurable and
15:36:25 <TheJulia> doh!
15:36:33 <janders> see if we can reach consensus on that
15:36:37 <TheJulia> ... wait, we left the release note?
15:37:15 <dtantsur> TheJulia: added another one in 8bd138ca85cf80911153064ab9286f6a3fd90118
15:37:34 <TheJulia> gaaaah
15:37:38 * TheJulia sighs
15:37:48 <TheJulia> It has to go with the snmp driver anyhow
15:37:51 <TheJulia> so *shrugs*
15:38:13 <dtantsur> janders: I think the underlying problem is that we keep piling things into the power sync loop without really considering the wider picture
15:38:25 <dtantsur> which is not terrible, to be clear. but it does cause discussions like this one
15:38:52 <janders> should I reconsider putting the health info syncing somewhere else?
15:39:19 <dtantsur> there is no somewhere, that's the problem
15:39:23 <janders> or do we keep it as-is but start thinking about a better way (especially in case we want to pull in detailed health metrics from more/many components)
15:39:30 <janders> I understand
15:39:30 <dtantsur> and I don't think it's on you to really rethink the whole thing
15:39:43 <JayF> I think we should answer the bigger question but it doesn't have to be in your change. Like I said in the patch, I mainly want an escape hatch if someone has misbehaving hardware or if a driver exists in the future where that's a pricey call.
15:39:44 <dtantsur> (some of my thoughts are in https://bugs.launchpad.net/ironic/+bug/2049913 but who has time for that..)
15:40:18 <dtantsur> Honestly, we may need to go even further and have a step-like mechanism for periodic inspection
15:40:20 <dtantsur> but I'll shut up :)
15:40:52 <dtantsur> I don't have really hard objections to an option for disabling health status fetching, as long as we set the Node.health field to something that indicates it
15:40:52 <janders> all noted
15:41:07 <janders> so dtantsur would you be happy with me re-introducing the config option?
15:41:08 <dtantsur> (i.e. "Disabled" instead of a generic null)
15:41:18 <janders> OK
15:41:24 <janders> (sorry, laggy link)
15:41:29 <dtantsur> np
15:41:35 <janders> ok, thank you
15:41:39 <janders> that's it from me
15:41:56 <janders> I gotta run (little one getting impatient, gotta keep driving)
15:41:56 * dtantsur keeps pondering "sync steps" and potentially "sync runbooks"
15:42:03 <janders> thanks and see you next time o/
15:42:10 <dtantsur> thank you janders
15:42:36 <JayF> dtantsur: sync?
15:43:55 <JayF> dtantsur: fwiw the only issue I'd have with your RFE around integrated-inspection is that I have a specific ask to try and ensure paths exist to use Ironic w/minimal redfish surface, primarily related to us sometimes being the first folks to get hardware (and finding things like the three headed server monster) .... and IME, inspection is much more likely to break than basic boot management and power control.
15:44:19 <dtantsur> JayF: sync derived from "power sync". Like steps for periodic collection of various data.
15:44:31 <dtantsur> Think about defining a runbook that says what you want to be collected and how often.
15:44:37 <JayF> whoa
15:44:46 <dtantsur> "Please disable power sync for this node but do collect health information"
15:44:57 * dtantsur is getting carried away
15:45:00 <JayF> That's a fun idea
15:45:10 <JayF> and integrates with your existing RFE in a way that doesn't trip up my needs
15:45:19 <dtantsur> yeah
15:47:08 <TheJulia> Is there anything else to discuss today?
15:47:28 <cardoe> dtantsur: thanks for putting that RFE up there.
15:47:56 <cardoe> Today we're doing redfish inspection to create the ports and then agent inspection for other bits but that can't touch the ports.
15:48:28 <cardoe> We'd also want to run the redfish inspection even when the machine is active.
15:50:53 <TheJulia> I'm going to wrap the meeting, thanks folks!
15:51:32 <dtantsur> thanks all!
15:51:37 <TheJulia> #endmeeting