11:00:17 <oneswig> #startmeeting scientific-sig
11:00:18 <openstack> Meeting started Wed Jan 30 11:00:17 2019 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:00:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
11:00:22 <openstack> The meeting name has been set to 'scientific_sig'
11:00:36 <oneswig> greetings
11:00:38 <janders> g'day
11:00:48 <oneswig> hey janders
11:00:48 <janders> particularly exciting agenda today I see..
11:00:50 <oneswig> evening
11:00:54 <janders> Ironic SIG and GPFS-Manila
11:00:58 <janders> how good is that? :)
11:01:01 <oneswig> right up your street?
11:01:11 <janders> hell yeah! :)
11:01:22 <oneswig> #link Agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_January_30th_2019
11:02:03 <oneswig> With GPFS+Manila we were seeking people with experience of it, which may be harder to find
11:02:39 <janders> Indeed.. Can't contribute myself from that angle, at least not yet. Very interested though!
11:03:26 <oneswig> Well let's cover the other item...
11:03:30 <oneswig> #topic baremetal SIG
11:03:54 <oneswig> #link Etherpad for Baremetal SIG https://etherpad.openstack.org/p/bare-metal-sig
11:04:13 <oneswig> See some familiar names there already
11:04:35 <janders> usual suspects... indeed! :)
11:05:18 <janders> hey Blair! :)
11:05:33 <janders> what are the temperatures like across the ditch?
11:05:38 <oneswig> How does this group differ from Ironic itself?  A purpose of advocacy?
11:05:43 <b1airo> evening
11:05:45 <janders> heard it's almost as hot as here
11:05:52 <oneswig> Hi b1airo, good to see you
11:05:55 <oneswig> #chair b1airo
11:05:56 <openstack> Current chairs: b1airo oneswig
11:06:13 <b1airo> not as hot as across there janders ! but still pretty damn warm for this part of the world
11:06:23 <oneswig> Those poor sheep
11:06:35 <b1airo> they had it coming
11:06:45 <janders> I saw 37C in Nelson over the weekend and went whooa
11:06:47 <b1airo> oh wait, you're talking about the heat?
11:06:54 <oneswig> Nice and cool in all that merino wool
11:07:14 <b1airo> yeah i believe nelson area records have been broken
11:08:08 <b1airo> the humidity is so much lower here and atmosphere so clean you really feel the sun and dryness more than in e.g. melbourne
11:08:41 <janders> yeah the sun is brutal over there
11:08:47 <b1airo> where are you based janders ?
11:09:07 <janders> Canberra gets a bit of that too.. not so much at higher latitudes and lower altitudes :)
11:09:08 <janders> Canberra
11:09:33 <oneswig> toasty, no doubt.
11:09:43 <b1airo> aah that's right, an in-betweener ;-)
11:09:50 <oneswig> Bare metal SIG anyone? :-)
11:10:09 <b1airo> just throwing that out there oneswig ?
11:10:32 <janders> how is it different.. from Chris's email this group sounds a bit like promoting an "Ironic first" approach
11:10:33 <oneswig> It's great to hear you guys shooting the breeze, all the same...
11:11:42 <oneswig> I wasn't sure how much help Ironic needed here, but it might be interesting if the SIG was to help try to position it against (say) Foreman and XCAT
11:12:25 <janders> OMG.. is foreman still alive?
11:13:02 <oneswig> Certainly is, we've got a project starting next week with a site that uses it for all deployment.
11:13:08 <janders> wow
11:13:43 <janders> I have horror stories with Foreman from two different major projects so I'm pretty badly biased
11:14:09 <oneswig> The principal argument against change is usually "if it ain't broke..." but perhaps from what you say this doesn't apply here
11:14:24 <janders> but - how does it stand against xCAT and that Ubuntu thing (MAAS) - from my perspective Ironic has much more standard APIs
11:14:34 <janders> there's a ton of ansible out there ready to run against ironic - with or without nova
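To make that point concrete: because Ironic-backed nodes are driven through the standard Nova API, ordinary cloud tooling works unchanged against baremetal. A minimal openstacksdk sketch, where 'mycloud', the flavor, image, network and key names are all hypothetical placeholders:

```python
# A minimal sketch, not a tested recipe: boot a baremetal instance through
# Nova with openstacksdk. 'mycloud', 'bm.standard', 'centos7-baremetal',
# 'provisioning-net' and 'mykey' are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud='mycloud')  # resolved from clouds.yaml

flavor = conn.compute.find_flavor('bm.standard')        # Ironic-backed flavor
image = conn.compute.find_image('centos7-baremetal')
network = conn.network.find_network('provisioning-net')

server = conn.compute.create_server(
    name='bm-node-0',
    flavor_id=flavor.id,
    image_id=image.id,
    networks=[{'uuid': network.id}],
    key_name='mykey',
)

# Baremetal deploys are much slower than VM boots, so allow a long timeout.
server = conn.compute.wait_for_server(server, wait=1800)
print(server.status)
```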
11:14:53 <janders> Have you guys played with ElastiCluster much?
11:15:01 <oneswig> yes, a bit.
11:15:17 <janders> I recently ran it with baremetal flavor instead of usual VMs
11:15:26 <oneswig> Last time I tried, there were problems using it with newer OpenStack releases
11:15:27 <janders> other than a glitch with secgroups, it *just worked*
11:15:38 <oneswig> I think it depends on a deprecated client
11:15:40 <janders> VMs, baremetals, whatever. Here is your slurm
11:16:00 <janders> I don't think more niche products like xCAT or MAAS have this sort of capability
11:16:18 <janders> Ironic has the potential of bringing Infrastructure-as-code to the metal
11:16:27 <b1airo> exactly
11:16:27 <janders> which from my perspective is an absolute killer
11:16:28 <oneswig> janders: true and good
11:17:06 <oneswig> That's more positioning openstack (bare metal in this case) against provisioning systems though
11:17:46 <janders> so - to link back to your question I think the Ironic SIG might be an opportunity to promote ironic-centric thinking about OpenStack
11:18:02 <oneswig> janders: when did you use ElastiCluster?  I'm wondering if the issue I hit has been fixed, or if you haven't encountered it yet.
11:18:06 <janders> where VMs are sort of secondary and not expected to hang around for very long
11:18:11 <janders> last week
11:18:28 <priteau> Yeah, I think the Foundation is aware that Ironic is a significant asset, especially as more people move to deploy their apps on top of Kubernetes rather than inside VMs. It makes sense to advertise it more widely.
11:18:28 <b1airo> it's an opportunity to get more people deploying with OpenStack from the ground up, i guess
11:18:37 <janders> other than the secgroups (I had them disabled, which generated API responses from Neutron that EC couldn't handle) it literally *just worked*
11:18:42 <oneswig> janders: sounds encouraging, will try again.
11:18:58 <janders> I will dig out how I deployed it
11:19:12 <oneswig> hi priteau
11:19:12 <janders> it's a PITA but I've got a friend who is relying on EC for his clinical work
11:19:23 <janders> he gave me some hints
11:19:37 <janders> I think latest git + virtualenv on el7.6 is the way to go
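For reference, ElastiCluster is configured through an INI-style file rather than code; a rough sketch of the kind of setup described above, with every name, UUID and path a placeholder (key names can vary a little between ElastiCluster releases):

```
# Hypothetical ~/.elasticluster/config sketching a Slurm cluster on a
# baremetal flavor; every name, UUID and path below is a placeholder.
[cloud/mycloud]
provider=openstack
auth_url=https://keystone.example.org:5000/v3
# credentials are normally picked up from the OS_* environment variables

[login/centos]
image_user=centos
user_key_name=mykey
user_key_private=~/.ssh/id_rsa
user_key_public=~/.ssh/id_rsa.pub

[setup/slurm]
provider=ansible
frontend_groups=slurm_master
compute_groups=slurm_worker

[cluster/slurm-bm]
cloud=mycloud
login=centos
setup=slurm
frontend_nodes=1
compute_nodes=4
flavor=bm.standard          # baremetal flavor in place of the usual VM flavor
image_id=<image uuid>
network_ids=<network uuid>
security_group=default
ssh_to=frontend
```

With that in place, running `elasticluster start slurm-bm` from inside the virtualenv should build the cluster.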
11:19:59 <b1airo> indeed priteau , though the truly security-conscious folks will still want VMs to run each tenant's Kubernetes in
11:20:59 <oneswig> Perhaps this SIG can help with gap analysis for bare metal
11:21:29 <oneswig> Although I think the Ironic team are very responsive and interactive on that stuff (and welcome your patches...)
11:22:08 <oneswig> OK, I guess we should move on, that's covered now
11:22:39 <oneswig> #topic Denver!
11:22:46 <oneswig> Are we all going?
11:22:55 <b1airo> +1
11:23:03 <oneswig> I expect to (but haven't booked anything yet)
11:23:38 <janders> I hope to be there and presenting - let's see how it goes
11:23:41 <janders> :)
11:23:42 <oneswig> By default I assume we carry on with the usual format of SIG activity - meeting plus lightning talks
11:23:55 <daveholland> not me but Sanger are hoping to send someone
11:24:00 <oneswig> good luck janders, sure it'll go well :-)
11:24:07 <oneswig> Hi daveholland
11:24:24 <janders> thanks heaps to Blair and Tim for reviewing my presentation proposal
11:24:30 <b1airo> I think I saw Mike Lowe and Tim Randles saying they would probably attend
11:24:51 <b1airo> possibly also Jon Mills
11:25:04 <oneswig> Tim is planning a trip to the baseball for interested SIG members, a day or two after the summit.
11:25:26 <janders> Denver is (almost) back to back with the Red Hat Summit - it would be a nice trip if stars align..
11:26:08 <oneswig> Colorado Rockies, on the Friday - mail tim.randles@gmail.com
11:26:11 <b1airo> baseball again huh
11:26:38 <b1airo> that sounds like a better plan!
11:27:15 <daveholland> janders: yes we spotted that too
11:27:17 <oneswig> janders: when/where is the RH summit?
11:27:27 <janders> Boston AFAIK
11:27:49 <oneswig> is that Red Hat OPEN?
11:28:23 <janders> open for registration? preso submission? else?
11:28:33 <janders> rego yes, presos not anymore
11:28:50 <oneswig> no I think that was a different RH email I'm mixing up
11:29:26 <janders> I missed the WHEN bit - 7-9 May
11:29:48 <oneswig> back to topic, I think the forum format we conventionally use seems to work OK. Any thoughts on changing it?
11:31:23 <janders> I think it's good
11:31:46 <b1airo> ain't broke ;-)
11:31:54 <daveholland> works for me (TM)
11:31:55 <oneswig> I haven't seen where/when SIG session submission goes in, will keep an eye out for that.
11:32:35 <oneswig> This time there are also PTG sessions planned to run the same week, which should provide more technical meat for those that seek it.
11:33:22 <oneswig> OK, next topic?
11:33:26 <janders> it's good to have the two together again
11:33:39 <janders> (mainstream Summit and PTG)
11:33:40 <oneswig> #action oneswig to submit SIG sessions in the same format as previous summits.
11:33:48 <oneswig> janders: +1, agreed.
11:34:11 <oneswig> Certainly works for a small company on a finite travel budget :-)
11:34:24 <b1airo> yep, don't think i've heard anyone say they really thought the split was positive
11:34:50 <janders> also really helps balancing travel with getting stuff done
11:35:07 <oneswig> Time for GPFS?
11:35:09 <janders> esp for the long haulers
11:35:10 <b1airo> possibly true of a small cohort of core devs that didn't like the extra distractions though
11:35:33 <oneswig> #topic GPFS+Manila
11:35:41 <janders> ..and the ones that write code that's really bad from the operational standpoint :P
11:35:57 <b1airo> any GPFS aficionado here today?
11:36:10 <oneswig> I suspect we've all come looking for experience but does anyone here have experience of using GPFS with Manila?
11:36:24 <janders> I'm interested but don't have much to share (yet)
11:36:29 <janders> Not with Manila, no
11:36:39 <janders> is there a native driver for that these days?
11:37:10 <janders> My last GPFS backed system was Mitaka - and back then I don't think Manila integration existed
11:37:54 <oneswig> There is a driver.  I'm asking around for GPFS users who have tried it.
11:38:00 <janders> nova/cinder/glance (and I think swift) *just worked* though
11:38:11 <janders> excellent
11:38:36 <janders> I anticipate my team will be working on Queens-GPFS integration in the coming weeks, will report back as we learn more
11:38:48 <oneswig> This might be one to follow up on.  If anyone here can find someone who has used it, we can invite them along to talk about it.
11:39:04 <b1airo> i'm not sure if the driver works at fileset level per share or does something else (horrific) like NFS sharing a loopback mounted local filesystem atop the GPFS...
11:39:41 <janders> would you be interested in an appliance-based model (say, a DDN brick with a GPFS personality) or a JBOD approach?
11:40:06 <b1airo> i'm interested in both the Manila integration and detail of the underlying security model to enable multi-tenant GPFS
11:40:12 <tbarron> https://docs.openstack.org/manila/latest/admin/gpfs_driver.html
11:40:19 <janders> I will likely start with the former (just because I have a spare brick) but what I'm really after is the latter
11:40:25 <oneswig> Hi tbarron!
11:40:29 <tbarron> dunno that anyone is maintaining this
11:40:30 <oneswig> Thanks for dropping in
11:40:33 <oneswig> ears burning?
11:40:33 <tbarron> ^^
11:40:38 <tbarron> hi
11:40:59 <tbarron> oneswig: :)
11:41:20 <tbarron> looks like it's *supposed* to work either native or with ganesha
11:41:38 <janders> THIS PAGE LAST UPDATED: 2017-09-28 09:26:05
11:41:44 <janders> point taken and noted
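For anyone following up on the driver: the client-side Manila workflow is backend-agnostic, so a GPFS-backed share type would be exercised like any other; whether the driver still works end-to-end is exactly the open question here. A minimal sketch with python-manilaclient, assuming hypothetical credentials and a hypothetical 'gpfs' share type:

```python
# A minimal, backend-agnostic sketch of the Manila share workflow with
# python-manilaclient; credentials, endpoints and the 'gpfs' share type
# are all hypothetical.
from keystoneauth1 import loading, session
from manilaclient import client as manila_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='https://keystone.example.org:5000/v3',
    username='demo',
    password='secret',
    project_name='demo',
    user_domain_id='default',
    project_domain_id='default',
)
sess = session.Session(auth=auth)
manila = manila_client.Client('2', session=sess)

# Create a 10 GiB NFS share; with the GPFS driver this would be exported
# either natively or via NFS-Ganesha, per the docs linked above.
share = manila.shares.create(
    share_proto='NFS',
    size=10,
    name='scratch',
    share_type='gpfs',
)

# Grant read/write access to a tenant subnet.
manila.shares.allow(share.id, 'ip', '10.0.0.0/24', 'rw')
```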
11:41:52 <oneswig> On a related matter, I didn't hear back from Andrew Elwell, who was previously looking for interest in developing the same for Lustre+Manila
11:42:13 <oneswig> tbarron: do you get enquiries on that?
11:42:33 <tbarron> oneswig: only from scientific HPC types :)
11:42:45 <oneswig> no surprise there
11:42:47 <tbarron> oneswig: seriously, it comes up but
11:43:10 <tbarron> not from anyone who has time (or who gets paid) to work on it
11:43:24 <janders> right!
11:43:35 <janders> some of the largest GPFS deployments are enterprise customers
11:43:42 <tbarron> I'd *love* to have manila working with these
11:43:42 <oneswig> I guess with the GPFS driver as a case in point, ongoing maintenance is the bigger issue than the initial development
11:43:46 <janders> looks like it's not their use case then..
11:44:55 <oneswig> daveholland: does this come up at Sanger?
11:46:03 <janders> do you guys have any experience with running GPFS with OpenStack instances accessing the filesystem, without Manila?
11:46:06 <daveholland> oneswig: we can definitely see the use case (and it would save users messing with running their own NFS server inside a tenant) but we haven't had time to get it working, nor RH support AIUI
11:46:26 <janders> I'm thinking fabric-based access model, similar to what you're doing with BeeGFS oneswig
11:47:23 <b1airo> we do that at NeSI janders
11:47:27 <daveholland> we are a bit prejudiced against GPFS but that's based on 10-year-old bad experiences. Also we find users running GPFS inside the tenant (atop Cinder volumes) to provide Kubernetes storage. It's all turtles
11:47:37 <oneswig> janders: that would prevent the performance baby going out with the cloud-native bathwater
11:47:52 <janders> I wonder if there's anything in the guts of GPFS that would get upset with the operators doing anything like this
11:48:12 <oneswig> It looks like the main action for today's session is for us to go to our networks and find people who have experience of the Manila driver.
11:48:19 <janders> Lustre can be tricky like this (certain bits of LNet need to be done in software)
11:48:35 <janders> sounds good oneswig
11:48:52 <janders> I am happy to report back as I make progress with my GPFS work
11:49:04 <oneswig> janders: that would be great.
11:49:09 <janders> hopefully my storage gurus won't get scared and run away
11:49:18 <janders> they are keen so far
11:49:25 <janders> but you know what I'm capable of
11:49:33 <oneswig> (but with one eye on the door handle?)
11:49:48 <oneswig> OK, let's do that.
11:49:58 <oneswig> #topic AOB
11:49:59 <b1airo> "Where's the NETAPP badge?"
11:50:05 <janders> yeah.. the VMware guys never quite recovered from the idea of running VMware on Ironic :)
11:50:06 <oneswig> b1airo: ha!
11:50:24 <janders> man.. don't get me started on that one
11:50:27 <janders> :D
11:50:29 <b1airo> janders:
11:50:34 <b1airo> so cruel
11:50:50 <oneswig> I had a couple of small things
11:51:05 <janders> would you like me to send you the details of my EC setup?
11:51:10 <oneswig> The CERN OpenStack Day has a website now - https://openstackdayscern.web.cern.ch/
11:51:45 <oneswig> Similarly there will be a scientific computing track at OpenInfra Days London - https://openinfradays.co.uk/
11:52:22 <oneswig> I don't think there's a formal CFP method for either yet.
11:52:53 <b1airo> i'm keen to talk to folks who are running Singularity on their clusters, in particular their chosen deployment config from a security perspective and how it impacts the way they support users
11:53:39 <oneswig> b1airo: you should have a chat with Tim Randles and Michael Jennings about that (although their opinions can be somewhat forceful, they are well informed!)
11:54:06 <oneswig> The major concern, as I understand it, is the setuid launch exec
11:54:11 <daveholland> b1airo: we are offering Singularity, I don't have the details, Pete will - I'll ping him
11:55:22 <oneswig> There was also a CFP for HPCAC Lugano closing I believe this week: http://www.hpcadvisorycouncil.com/events/2019/swiss-workshop/submissions.php
11:55:47 <b1airo> a set of setuid binaries i believe oneswig. yes that's probably a principal issue, mind you, i guess slurm is in the same boat so...
11:56:02 <oneswig> that's a great conference if you're in Europe, going from previous years.
11:56:35 <janders> agreed - shame I gotta be here early April
11:56:52 <b1airo> what's still unclear to me is what OS level etc is required for fully functional non-setuid operation, or if there are still gaps
11:57:51 <oneswig> b1airo: fair point although in the back of my mind I've half-forgotten some disadvantage about process trees - that might have been the docker daemon though
11:59:18 <oneswig> Ah, we are at the hour.
11:59:21 <oneswig> Any more to add?
11:59:57 <b1airo> all good here
12:00:03 <oneswig> b1airo: the OS version has to be a 4.x kernel (possibly 4.7+) with rootless user namespace support configured at compile time. IIRC
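A quick way to sanity-check a host for non-setuid operation is to inspect the kernel version and the unprivileged user-namespace knobs. A rough Python sketch; the sysctl paths and the 4.7 threshold are assumptions drawn from the discussion above, and distros expose different toggles:

```python
# Rough readiness check for rootless (non-setuid) container launch via
# unprivileged user namespaces. The sysctl paths and the 4.7 threshold
# are assumptions from the discussion above; distros differ.
import platform
from pathlib import Path

def kernel_version():
    # '4.18.0-305.el8.x86_64' -> (4, 18)
    release = platform.release().split('-')[0]
    major, minor = release.split('.')[:2]
    return int(major), int(minor)

def read_int(path):
    p = Path(path)
    return int(p.read_text()) if p.exists() else None

kver = kernel_version()
max_userns = read_int('/proc/sys/user/max_user_namespaces')
# Debian/Ubuntu-specific toggle; absent on most other distros.
userns_clone = read_int('/proc/sys/kernel/unprivileged_userns_clone')

print('kernel:', platform.release())
print('kernel >= 4.7:', kver >= (4, 7))
print('user.max_user_namespaces:', max_userns)
if userns_clone is not None:
    print('kernel.unprivileged_userns_clone:', userns_clone)
```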
12:00:03 <janders> thanks guys
12:00:08 <janders> see you next week
12:00:14 <oneswig> So quite special
12:00:23 <oneswig> Thanks all, time to stop
12:00:28 <daveholland> thanks, bye
12:00:30 <oneswig> #endmeeting