11:00:36 #startmeeting scientific-sig
11:00:37 Meeting started Wed Nov 18 11:00:36 2020 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:00:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
11:00:40 The meeting name has been set to 'scientific_sig'
11:00:47 greetings
11:02:04 g'day oneswig
11:02:08 how are things?
11:02:41 Hi janders - going well thanks. I have again not put any thought into preparing for the SIG meeting, alas :-(
11:03:08 Was just thinking about Lustre and Manila again - seems like a lot of people would be interested in this
11:03:33 yeah... I was never a fan of having NFS gateways in between
11:04:08 Morning. We certainly would be. Some new hardware specifically for Secure Lustre testing will be arriving soon.
11:04:11 It does take away much of the performance advantage
11:04:25 Hello verdurin, how timely!
11:04:41 * janders is looking up a picture that illustrates the issue very well
11:05:05 With the SIG spanning the globe, it's hard to bring every party to one place and time.
11:06:25 janders: your former colleagues at CSIRO were using BeeGFS, right? Were they also using Lustre?
11:07:46 parallel filesystem / native: https://www.travelweekly.com.au/wp-content/uploads/2019/05/Qantas-Dreamliner.png
11:08:06 parallel filesystem + NFS re-export: https://c8.alamy.com/comp/FJCA88/cook-transport-low-loader-truck-taking-a-wide-load-consisting-of-a-FJCA88.jpg
11:08:12 I reckon it's not far off...
11:08:43 ha ha! Good analogy
11:08:52 oneswig: I think they were exploring Lustre towards the end of my time with CSIRO
11:09:56 janders: anyone there who would be a good contact for a discussion on this?
11:12:48 There's been some interesting talk recently about DDN Lustre and Kubernetes CSI
11:13:04 An improbable pairing that apparently works
11:17:09 oneswig: not sure :(
11:17:28 regarding k8s + Lustre - not entirely surprised
11:18:04 Lustre's a user-space driver so may containerise well.
11:19:34 The question always arises: doing the development is one thing, supporting and sustaining it is another matter altogether
11:23:01 agreed
11:23:33 but I think there is something in containers directly consuming filesystems, pending a reasonable security model
11:24:04 feels like one of the ways of the future (superfast object being another one)
11:25:09 I think it has good potential too.
11:26:20 janders: this might interest you: https://www.stackhpc.com/sc20-top500.html
11:27:12 We got a machine deployed with 1274 bare metal nodes using Ironic, and into the top 100 (just)
11:27:52 that is awesome! :)
11:28:04 can I re-post this on #openstack-ironic? :)
11:28:05 It's getting redeployed for production now. There's going to be some mixed bare metal and virt, which will be implemented *somewhat* like SuperCloud
11:28:21 janders: of course :-)
11:28:26 ...and the idea lives on - fantastic!
11:28:42 I am very glad to hear this
11:28:54 The method isn't an exact copy of your approach, but hypervisors will exist in the overcloud Ironic.
11:30:42 Looks nice. Does the main text refer to the machine at the Other Place?
11:30:52 Not the UM6P one?
11:31:32 verdurin: They are somewhat similar and borrow from each other.
11:32:34 The telemetry graph (I don't think it says this) is from a day of LINPACK benchmarking. You can see the carbon footprint of HPC
11:42:16 I got the free pass to Supercomputing but must admit I've yet to use it. Anyone followed the keynotes or other parts?
11:43:48 I watched a few bits last week, have lacked the time to look at anything this week.
11:48:21 alas a similar situation here.
11:48:32 Any other business to raise?
11:48:37 same here with Kubecon :(
11:48:54 I think we're good
11:48:59 it was great to chat! :)
11:49:22 Likewise, thanks janders verdurin
11:49:27 Yes, bye.
11:49:32 I will follow up with ideas on Lustre
11:49:46 until next time
11:49:48 #endmeeting
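The Lustre-and-Manila thread above is about exposing a parallel filesystem to tenants through OpenStack Manila rather than hiding it behind an NFS gateway. As a point of reference, below is a minimal sketch of what the tenant-facing side looks like with python-manilaclient, regardless of the backend driver. It is not the SIG's implementation: the share type name "lustre-native" is a hypothetical placeholder (no specific Manila driver was named in the meeting), and the Keystone credentials are assumed to come from the usual OS_* environment variables.

```python
# Minimal sketch: requesting a Manila share with python-manilaclient.
# Assumptions: OS_* environment variables provide Keystone credentials,
# and a share type named "lustre-native" exists on the cloud -- that name
# is a hypothetical placeholder, not something confirmed in the meeting.

import os

from keystoneauth1 import loading, session
from manilaclient import client as manila_client


def make_session():
    """Build a Keystone session from standard OS_* environment variables."""
    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url=os.environ["OS_AUTH_URL"],
        username=os.environ["OS_USERNAME"],
        password=os.environ["OS_PASSWORD"],
        project_name=os.environ["OS_PROJECT_NAME"],
        user_domain_name=os.environ.get("OS_USER_DOMAIN_NAME", "Default"),
        project_domain_name=os.environ.get("OS_PROJECT_DOMAIN_NAME", "Default"),
    )
    return session.Session(auth=auth)


def create_share(sess, name, size_gb, share_type):
    """Create an NFS share of the given size using the named share type."""
    manila = manila_client.Client("2", session=sess)
    return manila.shares.create(
        share_proto="NFS",
        size=size_gb,
        name=name,
        share_type=share_type,
    )


if __name__ == "__main__":
    sess = make_session()
    share = create_share(sess, "scratch-01", 10, "lustre-native")
    print(share.id, share.status)
```

The tenant workflow is the same whether the backend re-exports the filesystem over NFS or hands out native parallel-filesystem access; the performance difference discussed above comes entirely from which data path the driver sets up behind this API call.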