11:00:17 #startmeeting scientific-sig
11:00:18 Meeting started Wed Jan 30 11:00:17 2019 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:00:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
11:00:22 The meeting name has been set to 'scientific_sig'
11:00:36 greetings
11:00:38 g'day
11:00:48 hey janders
11:00:48 particularly exciting agenda today I see..
11:00:50 evening
11:00:54 Ironic SIG and GPFS-Manila
11:00:58 how good is that? :)
11:01:01 right up your street?
11:01:11 hell yeah! :)
11:01:22 #link Agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_January_30th_2019
11:02:03 With GPFS+Manila we were seeking people with experience of it, which may be harder to find
11:02:39 Indeed.. Can't contribute myself from that angle, at least not yet. Very interested though!
11:03:26 Well let's cover the other item...
11:03:30 #topic baremetal SIG
11:03:54 #link Etherpad for Baremetal SIG https://etherpad.openstack.org/p/bare-metal-sig
11:04:13 See some familiar names there already
11:04:35 usual suspects... indeed! :)
11:05:18 hey Blair! :)
11:05:33 what are the temperatures like across the ditch?
11:05:38 How does this group differ from Ironic itself? A purpose of advocacy?
11:05:43 evening
11:05:45 heard it's almost as hot as here
11:05:52 Hi b1airo, good to see you
11:05:55 #chair b1airo
11:05:56 Current chairs: b1airo oneswig
11:06:13 not as hot as across there janders ! but still pretty damn warm for this part of the world
11:06:23 Those poor sheep
11:06:35 they had it coming
11:06:45 I saw 37C in Nelson over the weekend and went whooa
11:06:47 oh wait, you're talking about the heat?
11:06:54 Nice and cool in all that merino wool
11:07:14 yeah i believe nelson area records have been broken
11:08:08 the humidity is so much lower here and atmosphere so clean you really feel the sun and dryness more than in e.g. melbourne
11:08:41 yeah the sun is brutal over there
11:08:47 where are you based janders ?
11:09:07 Canberra gets a bit of that too.. not so much at higher latitudes and lower altitudes :)
11:09:08 Canberra
11:09:33 toasty, no doubt.
11:09:43 aah that's right, an in-betweener ;-)
11:09:50 Bare metal SIG anyone? :-)
11:10:09 just throwing that out there oneswig ?
11:10:32 how is it different.. from Chris's email this group sounds a bit like promoting an "ironic first" approach
11:10:33 It's great to hear you guys shooting the breeze, all the same...
11:11:42 I wasn't sure how much help Ironic needed here, but it might be interesting if the SIG was to help try to position it against (say) Foreman and xCAT
11:12:25 OMG.. is foreman still alive?
11:13:02 Certainly is, we've got a project starting next week with a site that uses it for all deployment.
11:13:08 wow
11:13:43 I have horror stories with foreman from two different major projects so I'm pretty badly biased
11:14:09 The principal argument against change is usually "if it ain't broke..." but perhaps from what you say this doesn't apply here
11:14:24 but - how does it stand against xCAT and that Ubuntu thing (BMaaS)? - from my perspective much more standard APIs
11:14:34 there's a ton of ansible out there ready to run against ironic - with or without nova
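To make the "with or without nova" point concrete, here is a minimal sketch of driving Ironic directly from openstacksdk, the same kind of step the Ansible playbooks mentioned above would automate. The cloud name "mycloud" and node name "node-0" are placeholders, and the calls are taken from openstacksdk's baremetal proxy; exact method names can vary slightly between SDK releases.

```python
#!/usr/bin/env python3
"""Minimal sketch: driving Ironic directly (no Nova) with openstacksdk.

Assumptions: a clouds.yaml entry named "mycloud" and a node called "node-0";
both are placeholders, and the calls are from openstacksdk's baremetal proxy,
which may differ slightly between releases.
"""
import openstack

conn = openstack.connect(cloud="mycloud")

# List registered bare metal nodes with their provision and power states.
for node in conn.baremetal.nodes(details=True):
    print(node.name, node.provision_state, node.power_state)

# Ask Ironic to deploy one node - the same "infrastructure as code" step that
# an Ansible playbook or a Nova boot would normally trigger.
node = conn.baremetal.find_node("node-0")
if node and node.provision_state == "available":
    conn.baremetal.set_node_provision_state(node, "active")
    conn.baremetal.wait_for_nodes_provision_state([node], "active")
```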
11:14:53 Have you guys played with ElastiCluster much?
11:15:01 yes, a bit.
11:15:17 I recently ran it with a baremetal flavor instead of the usual VMs
11:15:26 Last time I tried there were problems using it with new OpenStack
11:15:27 other than a glitch with secgroups, it *just worked*
11:15:38 I think it depends on a deprecated client
11:15:40 VMs, baremetals, whatever. Here is your slurm
11:16:00 I don't think more niche products like xCAT or BMaaS have this sort of capability
11:16:18 Ironic has the potential of bringing Infrastructure-as-code to the metal
11:16:27 exactly
11:16:27 which from my perspective is an absolute killer
11:16:28 janders: true and good
11:17:06 That's more positioning openstack (bare metal in this case) against provisioning systems though
11:17:46 so - to link back to your question I think the Ironic SIG might be an opportunity to promote ironic-centric thinking about OpenStack
11:18:02 janders: when did you use ElastiCluster? I'm wondering if the issue I hit has been fixed, or if you haven't encountered it yet.
11:18:06 where VMs are sort of secondary and not expected to hang around for very long
11:18:11 last week
11:18:28 Yeah, I think the Foundation is aware that Ironic is a significant asset, especially as more people move to deploy their apps on top of Kubernetes rather than inside VMs. It makes sense to advertise it more widely.
11:18:28 it's an opportunity to get more people deploying with OpenStack from the ground up, I guess
11:18:37 other than the secgroups (I had them disabled, which generated API responses from Neutron that EC couldn't handle) it literally *just worked*
11:18:42 janders: sounds encouraging, will try again.
11:18:58 I will dig out how I deployed it
11:19:12 hi priteau
11:19:12 it's a PITA but I've got a friend who is relying on EC for his clinical work
11:19:23 he gave me some hints
11:19:37 I think latest git + virtualenv on el7.6 is the way to go
11:19:59 indeed priteau , though the truly security conscious folks will still want VMs to run each tenant's Kubernetes in
11:20:59 Perhaps this SIG can help with gap analysis for bare metal
11:21:29 Although I think the Ironic team are very responsive and interactive on that stuff (and welcome your patches...)
11:22:08 OK, I guess we should move on, that's covered now
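For anyone wanting to reproduce the ElastiCluster-on-baremetal experiment described above, here is a rough sketch of the kind of configuration involved: from ElastiCluster's point of view a bare metal deployment is mostly just a different flavor name. Key names follow ElastiCluster's documented INI layout and may differ between releases; every value shown is a placeholder.

```ini
# ~/.elasticluster/config -- illustrative sketch only, all values are placeholders
[cloud/mycloud]
provider=openstack
# credentials are normally picked up from the usual OS_* environment variables

[login/centos]
image_user=centos
user_key_name=mykey
user_key_private=~/.ssh/id_rsa
user_key_public=~/.ssh/id_rsa.pub

[setup/slurm]
provider=ansible
master_groups=slurm_master
worker_groups=slurm_worker

[cluster/slurm-baremetal]
cloud=mycloud
login=centos
setup=slurm
master_nodes=1
worker_nodes=4
ssh_to=master
image_id=<glance-image-uuid>
flavor=my-ironic-flavor
network_ids=<neutron-network-uuid>
security_group=default
```

With something like this in place, "elasticluster start slurm-baremetal" drives the whole Slurm build over Ansible, whether the nodes behind the flavor are VMs or Ironic instances.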
11:22:39 #topic Denver!
11:22:46 Are we all going?
11:22:55 +1
11:23:03 I expect to (but haven't booked anything yet)
11:23:38 I hope to be there and presenting - let's see how it goes
11:23:41 :)
11:23:42 By default I assume we carry on with the usual format of SIG activity - meeting plus lightning talks
11:23:55 not me but Sanger are hoping to send someone
11:24:00 good luck janders, sure it'll go well :-)
11:24:07 Hi daveholland
11:24:24 thanks heaps to Blair and Tim for reviewing my presentation proposal
11:24:30 I think I saw Mike Lowe and Tim Randles saying they would probably attend
11:24:51 possibly also Jon Mills
11:25:04 Tim is planning a trip to the baseball for interested SIG members, a day or two after the summit.
11:25:26 Denver is (almost) back to back with the Red Hat Summit - it would be a nice trip if the stars align..
11:26:08 Colorado Rockies, on the Friday - mail tim.randles@gmail.com
11:26:11 baseball again huh
11:26:38 that sounds like a better plan!
11:27:15 janders: yes we spotted that too
11:27:17 janders: when/where is the RH summit?
11:27:27 Boston AFAIK
11:27:49 is that Red Hat OPEN?
11:28:23 open for registration? preso submission? else?
11:28:33 rego yes, presos not anymore
11:28:50 no I think that was a different RH email I'm mixing up
11:29:26 I missed the WHEN bit - 7-9 May
11:29:48 back to topic, I think the forum format we conventionally use seems to work OK. Any thoughts on changing it?
11:31:23 I think it's good
11:31:46 ain't broke ;-)
11:31:54 works for me (TM)
11:31:55 I haven't seen where/when SIG session submission goes in, will keep an eye out for that.
11:32:35 This time there are also PTG sessions planned to run the same week, which should provide more technical meat for those that seek it.
11:33:22 OK, next topic?
11:33:26 it's good to have the two together again
11:33:39 (mainstream Summit and PTG)
11:33:40 #action oneswig to submit SIG sessions in the same format as previous summits.
11:33:48 janders: +1, agreed.
11:34:11 Certainly works for a small company on a finite travel budget :-)
11:34:24 yep, don't think i've heard anyone say they really thought the split was positive
11:34:50 also really helps balancing travel with getting stuff done
11:35:07 Time for GPFS?
11:35:09 esp for the long haulers
11:35:10 possibly true of a small cohort of core devs that didn't like the extra distractions though
11:35:33 #topic GPFS+Manila
11:35:41 ..and the ones that write code that's really bad from the operational standpoint :P
11:35:57 any GPFS aficionado here today?
11:36:10 I suspect we've all come looking for experience but does anyone here have experience of using GPFS with Manila?
11:36:24 I'm interested but don't have much to share (yet)
11:36:29 Not with Manila, no
11:36:39 is there a native driver for that these days?
11:37:10 My last GPFS-backed system was Mitaka - and back then I don't think Manila integration existed
11:37:54 There is a driver. I'm asking around for GPFS users who have tried it.
11:38:00 nova/cinder/glance (and I think swift) *just worked* though
11:38:11 excellent
11:38:36 I anticipate my team will be working on Queens-GPFS integration in the coming weeks, will report back as we learn more
11:38:48 This might be one to follow up on. If anyone here can find someone who has used it, we can invite them along to talk about it.
11:39:04 i'm not sure if the driver works at fileset level per share or does something else (horrific) like NFS sharing a loopback-mounted local filesystem atop the GPFS...
11:39:41 would you be interested in an appliance-based model (say a DDN brick with a GPFS personality) or a JBOD approach?
11:40:06 i'm interested in both the Manila integration and detail of the underlying security model to enable multi-tenant GPFS
11:40:12 https://docs.openstack.org/manila/latest/admin/gpfs_driver.html
11:40:19 I will likely start with the former (just because I have a spare brick) but what I'm really after is the latter
11:40:25 Hi tbarron!
11:40:29 dunno that anyone is maintaining this
11:40:30 Thanks for dropping in
11:40:33 ears burning?
11:40:33 ^^
11:40:38 hi
11:40:59 oneswig: :)
11:41:20 looks like it's *supposed* to work either native or with ganesha
11:41:38 THIS PAGE LAST UPDATED: 2017-09-28 09:26:05
11:41:44 point taken and noted
11:41:52 On a related matter, I didn't hear back from Andrew Elwell, who was previously looking for interest in developing the same for Lustre+Manila
11:42:13 tbarron: do you get enquiries on that?
11:42:33 oneswig: only from scientific HPC types :)
11:42:45 no surprise there
11:42:47 oneswig: seriously, it comes up but
11:43:10 not from anyone who has time (or who gets paid) to work on it
11:43:24 right!
11:43:35 some of the largest GPFS deployments are enterprise customers
11:43:42 I'd *love* to have manila working with these
11:43:42 I guess with the GPFS driver as a case in point, ongoing maintenance is the bigger issue than the initial development
11:43:46 looks like it's not their use case then..
11:44:55 daveholland: does this come up at Sanger?
11:46:03 do you guys have any experience with running GPFS with OpenStack instances accessing the filesystem, without Manila?
11:46:06 oneswig: we can definitely see the use case (and it would save users messing with running their own NFS server inside a tenant) but we haven't had time to get it working, nor RH support AIUI
11:46:26 I'm thinking of a fabric-based access model, similar to what you're doing with BeeGFS oneswig
11:47:23 we do that at NeSI janders
11:47:27 we are a bit prejudiced against GPFS but that's based on 10-year-old bad experiences. Also we find users running GPFS inside the tenant (atop Cinder volumes) to provide Kubernetes storage. It's all turtles
11:47:37 janders: that would prevent the performance baby going out with the cloud-native bathwater
11:47:52 I wonder if there's anything in the guts of GPFS that would get upset with the operators doing anything like this
11:48:12 It looks like the main action for today's session is for us to go to our networks and find people who have experience of the Manila driver.
11:48:19 Lustre can be tricky like this (certain bits of LNet need to be done in software)
11:48:35 sounds good oneswig
11:48:52 I am happy to report back as I make progress with my GPFS work
11:49:04 janders: that would be great.
11:49:09 hopefully my storage gurus won't get scared and run away
11:49:18 they are keen so far
11:49:25 but you know what I'm capable of
11:49:33 (but with one eye on the door handle?)
11:49:48 OK, let's do that.
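As a starting point for the follow-up action above, here is a hedged sketch of what a GPFS backend stanza in manila.conf looks like, based on the (dated) driver documentation linked earlier. Option names and values are illustrative and should be checked against the docs for your release; whether shares really map to independent filesets is exactly the detail worth confirming with someone who has run it.

```ini
# manila.conf -- illustrative GPFS backend sketch, adapted from the driver docs above
[DEFAULT]
enabled_share_backends = gpfs
enabled_share_protocols = NFS

[gpfs]
share_backend_name = GPFS
share_driver = manila.share.drivers.ibm.gpfs.GPFSShareDriver
driver_handles_share_servers = False
# path under which shares are created and exported
gpfs_mount_point_base = /gpfs/fs1/manila
gpfs_share_export_ip = 10.0.0.10
# "CES" re-exports over the integrated NFS-Ganesha service; "KNFS" uses kernel NFS
gpfs_nfs_server_type = CES
# set to True when manila-share runs on a node that is itself part of the GPFS cluster
is_gpfs_node = True
```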
11:49:58 #topic AOB
11:49:59 "Where's the NETAPP badge?"
11:50:05 yeah.. the VMware guys never quite recovered from the idea of running VMware on Ironic :)
11:50:06 b1airo: ha!
11:50:24 man.. don't get me started on that one
11:50:27 :D
11:50:29 janders:
11:50:34 so cruel
11:50:50 I had a couple of small things
11:51:05 would you like me to send you the details of my EC setup?
11:51:10 The CERN OpenStack Day has a website now - https://openstackdayscern.web.cern.ch/
11:51:45 Similarly there will be a scientific computing track at OpenInfra Days London - https://openinfradays.co.uk/
11:52:22 I don't think there's a formal CFP method for either yet.
11:52:53 i'm keen to talk to folks who are running Singularity on their clusters, in particular their chosen deployment config from a security perspective and the impact of how they support users on it
11:53:39 b1airo: you should have a chat with Tim Randles and Michael Jennings about that (although their opinions can be somewhat forceful, they are well informed!)
11:54:06 The major concern as I understand it is the setuid launch exec
11:54:11 b1airo: we are offering Singularity, I don't have the details, Pete will - I'll ping him
11:55:22 There was also a CFP for HPCAC Lugano closing I believe this week: http://www.hpcadvisorycouncil.com/events/2019/swiss-workshop/submissions.php
11:55:47 a set of setuid binaries i believe oneswig. yes that's probably a principal issue, mind you, i guess slurm is in the same boat so...
11:56:02 that's a great conference if you're in Europe, going from previous years.
11:56:35 agreed - shame I gotta be here early April
11:56:52 what's still unclear to me is what level of OS etc. is required for fully functional non-setuid operation, or if there are still gaps
11:57:51 b1airo: fair point, although in the back of my mind I've half-forgotten some disadvantage about process trees - that might have been the docker daemon though
11:59:18 Ah, we are at the hour.
11:59:21 Any more to add?
11:59:57 all good here
12:00:03 b1airo: the OS version has to be a 4.x kernel (possibly 4.7+) with rootless user namespace support configured at compile time. IIRC
12:00:03 thanks guys
12:00:08 see you next week
12:00:14 So quite special
12:00:23 Thanks all, time to stop
12:00:28 thanks, bye
12:00:30 #endmeeting
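On the rootless-Singularity question at the end: a small, hypothetical Python helper (not part of Singularity) that checks the prerequisites mentioned above, i.e. a roughly 4.x kernel and unprivileged user namespaces enabled. The two /proc/sys paths are the common RHEL and Debian/Ubuntu knobs; treat the exact names and thresholds as assumptions to verify on your distro.

```python
#!/usr/bin/env python3
"""Rough check for rootless (non-setuid) Singularity prerequisites.

Hypothetical helper: it only inspects the kernel version and the unprivileged
user-namespace knobs discussed in the meeting above.
"""
import platform


def kernel_at_least(major, minor):
    """Compare the running kernel's major.minor against a threshold."""
    release = platform.release().split("-")[0]          # e.g. "4.18.0"
    parts = [int(p) for p in release.split(".")[:2]]
    return tuple(parts) >= (major, minor)


def read_int(path):
    """Return the integer in a /proc/sys file, or None if it is absent."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None


if __name__ == "__main__":
    # Per the discussion: a 4.x kernel (roughly 4.7+) for usable unprivileged
    # user namespaces.
    print("kernel >= 4.7:", kernel_at_least(4, 7))
    # RHEL/CentOS knob: 0 means unprivileged user namespaces are disabled.
    print("user.max_user_namespaces:",
          read_int("/proc/sys/user/max_user_namespaces"))
    # Debian/Ubuntu knob: 0 means unprivileged userns creation is disabled.
    print("kernel.unprivileged_userns_clone:",
          read_int("/proc/sys/kernel/unprivileged_userns_clone"))
```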