21:00:54 #startmeeting scientific-sig
21:00:54 Meeting started Tue Aug 31 21:00:54 2021 UTC and is due to finish in 60 minutes. The chair is oneswig_. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:54 The meeting name has been set to 'scientific_sig'
21:01:11 Greetings...
21:02:32 It's been a good day at the OpenStack upgrade coal face
21:03:00 hi there :)
21:03:09 (in two meetings, but listening in)
21:03:16 Hey janders, long time no see! How's things?
21:03:26 good, thank you! :)
21:03:36 Are you still in Canberra?
21:03:46 yeah sorry I've got schedule clashes on both slots, but maybe I will just start joining and listening in in the background
21:04:13 No shame in lurking :-)
21:04:17 oneswig_: no, we moved North (Queensland) early last year
21:04:40 Nice. Mind the crocs
21:04:52 oneswig_: what's happening with the upgrades? :)
21:04:57 thanks, will do! :)
21:06:23 Catching up a client deployment. The upgrade went smoothly, but then Neutron DHCP namespaces wouldn't die on one of the controllers. A reboot freshened things up but I'd prefer to know where the dangling reference came from
21:06:39 https://photos.app.goo.gl/tMgboXFndJtWE5aZ8 < he +1s your comment
21:06:58 killing the IP netns failed with "resource in use" or somesuch
21:08:05 That's a beast!
21:08:47 sorry to hear about DHCP hassles but glad it got resolved quickly
21:09:11 I guess in the greater scheme of things it's not too bad as far as upgrade issues go
21:09:34 was this related to Neutron DHCP CVE?
21:09:38 (the upgrade)
21:10:07 No, thankfully not.
21:10:26 http://lists.openstack.org/pipermail/openstack-discuss/2021-August/024568.html I spotted this going through the scrollback on IRC
21:10:49 now that you mentioned Neutron DHCP I figured I will mention in case it's relevant/useful
21:12:38 Thanks - I saw that earlier. A classic pattern for software that invokes a lot of subprocesses, I guess.
21:14:00 No backports beyond Ussuri, interestingly - that leaves out a chunk of the OpenStack installed base.
21:16:34 Was that croc photo taken by you? It looks pretty fearsome
21:18:56 anyone aboot?
21:18:59 The other bit of fun in the last week was Open Infra Live
21:19:05 hey b1airo, morning.
21:19:18 morning
21:19:23 How's things?
21:19:25 #chair b1airo
21:19:25 Current chairs: b1airo oneswig_
21:19:44 (just juggling kid setup - school at home, hooray...)
21:20:21 I know that feeling!
21:21:05 don't get me started o_0
21:21:38 Open Infra Live passed me by, what'd we miss?
21:21:44 #link Open Infra Live - Software-defined Supercomputing https://www.youtube.com/watch?v=fOJTHanmOFg
21:21:59 buncha folks yappin', usual kind of thing
21:22:05 :-)
21:22:43 It was great to talk with some people working on the now and the next of HPC infrastructure management
21:24:22 ah, this was the one SteveQ mentioned
21:24:45 That would be the one.
21:25:28 curious to hear the CSCS one given we missed Sadif on the ISC panel
21:26:24 She's referring to the new Cray system, "Alps", which is sprouting some APIs for software-defined infrastructure
21:28:29 hmm, sounds dubious assuming such APIs are vendor implementations
21:29:20 It's likely coming from a team who used to be Scalable Informatics. It would be interesting to know more about what they can do.
21:29:31 Nice to have the concept validated!
21:31:59 janders: are you still working on Ironic?
21:32:34 oneswig_: yes! :)
21:33:14 https://review.opendev.org/c/openstack/ironic/+/800001 < that's what I've been up to lately
21:34:02 Verify steps. Yet another kind of step (to add to clean and deploy steps). Can be used to execute custom steps on node enroll/verification.
21:34:13 useful for cleaning lifecycle controllers, iDRAC resets, etc.
21:34:30 preemptive maintenance kind of a thing
21:34:52 Neat! If only the BMCs could be fixed in the first place :-)
21:35:19 I'm still hoping to get some storage cleaning improvements into Xena (kind of a follow-up to NVMe cleaning, but for hybrid HDD/NVMe nodes)
21:35:21 Can't think why you're describing this in the terminology of Dell's product portfolio of course :-P
21:35:31 :D
21:36:21 IMHO this only speaks highly of Dell
21:36:37 they've done a lot of good work implementing fixes to these issues
21:36:57 I can say hand on my heart that having to reset BMCs every now and then is a cross-vendor problem
21:37:03 We've tried to use the iDRAC driver and given up for this exact reason - the iDRAC getting its jobs confused. janders: are you saying it's worth revisiting?
21:37:06 I don't think any vendor is immune to that
21:37:50 oneswig_: when this merges, it should fix the problem of a repurposed node having leftover LC jobs from its previous life
21:38:04 which tends to cause really obscure problems, especially in vMedia-driven deployments
21:38:47 I can imagine
21:39:23 if LC jobs are causing issues in circumstances other than repurposing the node, I think that's a separate problem this mechanism probably won't fix
21:41:25 Last time I think it was BIOS parameter reconfiguration as a trait-driven deploy step
21:43:46 breakfast time, will be on and off again
21:44:04 in case I'm not back in the next 15 minutes, it was great to chat with you guys! :)
21:44:06 man, I just had second helpings of dessert :-)
21:44:09 will check out the YT video
21:44:19 thanks for sharing
21:44:20 Perhaps we should wrap up here - bon appétit janders, good to see you
21:44:34 thank you oneswig_
21:44:43 b1airo: anything else to cover at your end?
21:44:48 have a good night (and have a good day b1airo) :)
21:44:55 same to you!
21:45:08 thanks gents - I'm watching the vid :-)
21:45:40 but yes, I reckon the lifecycle management function in iDRACs is a dog
21:45:44 :-P
21:50:40 OK, shall we wrap up there b1airo? The day/night beckons
21:51:02 yep sorry, didn't realise you were waiting on me!
21:51:20 one hand on Eva's chromebook, one eye on Teams, etc.
21:51:44 wasn't waiting on you, working through the loose ends of the day
21:53:10 Cheers then - have a good one
21:53:36 #endmeeting
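
A footnote on the stuck Neutron DHCP namespaces mentioned near the start of the log: when "ip netns delete" reports the namespace as busy, the usual first check is whether a process (often a stale dnsmasq or metadata proxy) is still running inside it. Below is a minimal diagnostic sketch, assuming the standard qdhcp- prefix Neutron uses for DHCP namespaces and root access on the affected controller; it only reports what it finds and is not the root-cause analysis discussed in the meeting.

#!/usr/bin/env python3
"""Report Neutron DHCP namespaces that still have processes inside them.

Diagnostic sketch only: run as root on the controller where
'ip netns delete' is failing with a "resource busy"-style error.
"""
import subprocess


def run(cmd):
    """Run a command and return its stdout as text, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


# Neutron names its DHCP namespaces "qdhcp-<network-uuid>".
namespaces = [
    line.split()[0]
    for line in run(["ip", "netns", "list"]).splitlines()
    if line.startswith("qdhcp-")
]

for ns in namespaces:
    # 'ip netns pids <name>' lists PIDs whose network namespace is <name>.
    pids = run(["ip", "netns", "pids", ns]).split()
    if not pids:
        print(f"{ns}: no processes inside; the reference may be a mount or an open fd")
        continue
    for pid in pids:
        comm = run(["ps", "-o", "comm=", "-p", pid]).strip()
        print(f"{ns}: pid {pid} ({comm}) is still running inside the namespace")

If nothing shows up, the busy reference is often a bind-mount of the namespace file or a file descriptor held by another process, which takes more digging to track down.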
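
For context on the step types compared in the Ironic discussion above: clean steps and deploy steps are already expressed as decorated methods on a driver's hardware interfaces, and the verify-steps review (https://review.opendev.org/c/openstack/ironic/+/800001) is described in the log as a third category of the same general shape that runs at node enroll/verification, useful for things like clearing leftover Lifecycle Controller jobs. The sketch below shows only the existing clean_step/deploy_step decorator pattern from ironic.drivers.base; the class name, priorities, and method bodies are illustrative, and the decorator for verify steps is deliberately not shown because its final name and signature should be taken from the linked review rather than guessed here.

# Sketch of Ironic's decorator-driven step pattern (clean and deploy steps).
# Illustrative only: the interface subclass is a placeholder and the required
# abstract methods of ManagementInterface are omitted for brevity.
from ironic.drivers import base


class ExampleManagement(base.ManagementInterface):
    """Example hardware interface carrying custom driver steps."""

    @base.clean_step(priority=0)  # priority 0: only runs when explicitly requested
    def reset_bmc(self, task):
        """Out-of-band BMC reset, runnable as part of node cleaning."""
        # Vendor-specific reset call would go here.

    @base.deploy_step(priority=80)  # non-zero priority: runs automatically in order
    def apply_bios_settings(self, task):
        """Apply BIOS settings during deployment (e.g. trait-driven tuning)."""
        # Vendor-specific BIOS configuration would go here.

Verify steps, per the discussion, would sit alongside these but trigger when a node is enrolled/verified, which is what makes them a fit for purging stale LC jobs before a repurposed node goes back into service.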