21:00:33 <martial> #startmeeting Scientific-SIG
21:00:34 <openstack> Meeting started Tue Feb 20 21:00:33 2018 UTC and is due to finish in 60 minutes.  The chair is martial. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:36 <oneswig> oh go on then :-)
21:00:37 <openstack> The meeting name has been set to 'scientific_sig'
21:00:46 <martial> oops
21:00:54 <martial> #chair oneswig
21:00:55 <openstack> Current chairs: martial oneswig
21:01:03 <armstrong> Hello
21:01:10 <oneswig> #link agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_20th_2018
21:01:16 <oneswig> Grreeetings
21:01:20 <Ender948> Greetings
21:02:04 <martial> Greetings as well :)
21:02:13 <oneswig> Hello, welcome
21:02:33 <martial> #chair b1airo
21:02:34 <openstack> Current chairs: b1airo martial oneswig
21:02:41 <martial> #topic Conference Calendar
21:02:42 <oneswig> Hi b1airo, morning
21:02:47 <b1airo> Morning
21:03:08 <Ender948> good morning to you
21:03:13 <oneswig> There were a couple of announcements here I came across
21:03:23 <b1airo> I'm a little distracted here sorry - 8 o'clock getting out the door to school (and a birthday morning)
21:03:39 <oneswig> Tim Randles forwarded the ScienceCloud conference in Tempe, Arizona.
21:03:48 <oneswig> #link https://sites.google.com/site/sciencecloudhpdc/
21:04:15 <oneswig> And there's HPCAC Lugano, which I saw jmlowe at last year
21:04:24 <oneswig> #link https://www.cscs.ch/publications/press-releases/swiss-hpc-advisory-council-conference-2018-hpcxxl-user-group/
21:04:36 <jmlowe> I'm disappointed that I probably won't make it again this year
21:04:58 <oneswig> Ah, too bad.  I'm hoping to talk on Ceph on NVME + 100G fabric
21:05:35 <jmlowe> oh, that sounds nice
21:06:20 <jmlowe> A few of us will be at https://www.pearc18.pearc.org/
21:06:20 <oneswig> I had a mail earlier today about SREcon - I went to one two years ago and thought it was an interesting trip into the parallel universe of hyperscaler infrastructure
21:07:18 <oneswig> jmlowe: I am piqued.  Looks really interesting
21:07:41 <martial> jmlowe: sounds fun indeed
21:08:11 <oneswig> martial: was there a conference Khalil was looking at for ORCA or did I imagine that?
21:08:31 <jmlowe> it used to be the TeraGrid/XSEDE conference, broadened in scope, and we have a panel accepted with PSC, LANL, IU, NASA, and MSI represented
21:08:34 <martial> oneswig: no you are not mistaken
21:09:03 <martial> oneswig: there is one for the NIST Public Working Group on Cloud Federation (IEEE P2302) and joint to the Open Research Cloud
21:09:18 <martial> it will be March 20-21 2018 (so very soon) in Gaithersburg MD
21:09:58 <martial> am hoping jmlowe, rbudden, and trandles among others could join
21:10:05 <trandles> sorry, I'm here, late
21:10:22 <oneswig> Hi trandles
21:10:32 <trandles> just ran in from another meeting...
21:10:34 <trandles> hi oneswig
21:10:51 <jmlowe> oof, that's going to be rough for me
21:10:53 <oneswig> Just talking over conferences - have you previously been to ScienceCloud?
21:11:54 <trandles> I have not been to ScienceCloud
21:12:31 <trandles> <sorry, reading back through the meetbot log now...>
21:12:58 <oneswig> OK, move on to the next item?
21:13:21 <oneswig> #topic managing dedicated capacity
21:13:30 <jmlowe> science cloud, pearc18, hpcac lugano, summit, did I miss any?
21:13:36 <oneswig> SC?
21:14:04 <jmlowe> yep and NIST Public Working Group on Cloud Federation (IEEE P2302) + ORC
21:14:49 <oneswig> This item came up from a question on how people manage reserved access to a system which has been part-funded in return for guaranteed quota
21:14:59 <jmlowe> with a schedule like this we could almost meet monthly in person
21:16:04 <oneswig> The decision-making process would be reminiscent of the part-time parliament!
21:16:08 <oneswig> #link https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf
21:16:09 <jmlowe> Would this also cover mechanisms to keep users on the portion of hardware they funded?
21:17:27 <b1airo> jmlowe: yes, that is one of the problems
21:17:38 <oneswig> jmlowe: it's an interesting question. In a pool of nodes, I guess there might be some of a different spec, which would make that easy.  Otherwise I guess it is not relevant (unless there are data protection issues?)
21:17:56 <jmlowe> ok, that is a problem I have a nontrivial chance of encountering in the next 12 months
21:18:31 <b1airo> Pretty common for researchers to want a particular thing different from standard offerings, especially when it is "their" money
21:19:11 <b1airo> We have two different classes of this problem:
21:19:42 <b1airo> 1) researcher co-investment wanting dedicated access to their own compute pool
21:20:18 <b1airo> 2) broader pool of locally prioritised capacity within a community cloud
21:21:09 <b1airo> For #1 things like dedicated flavors are ok
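    A minimal sketch of the dedicated-flavor approach mentioned for case #1, pairing a host aggregate with a project-private flavor via the AggregateInstanceExtraSpecsFilter (aggregate, host, flavor, property and project names below are illustrative, and the nova.conf option section varies by release):

        # nova.conf on the scheduler (section/name varies by release):
        #   [filter_scheduler]
        #   enabled_filters = ...,AggregateInstanceExtraSpecsFilter
        openstack aggregate create research-group-a
        openstack aggregate add host research-group-a compute-17
        openstack aggregate set --property co_invest=group-a research-group-a
        # Private flavor, visible only to the funding project and pinned to that aggregate:
        openstack flavor create --private --project group-a-project \
            --vcpus 8 --ram 32768 --disk 40 m1.group-a
        openstack flavor set --property aggregate_instance_extra_specs:co_invest=group-a m1.group-a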
21:22:05 <b1airo> For #2 it is harder as we don't really want more full sets of standard flavors just to tie to specific aggregates
21:24:09 <b1airo> I had the idea of using the AggregateImagePropertiesIsolation filter for this - i.e. set a "secret" on the aggregate(s) you are putting prioritised capacity into and tell the users you want to allow into that aggregate to set the matching property on their Glance images. But the filter (or docs) are buggy it seems :-/
21:24:46 <oneswig> In that it doesn't isolate?
21:24:48 <b1airo> Am I making any sense?
21:25:14 <b1airo> Yeah
21:25:49 <jmlowe> my specific case is that IU will purchase identical hardware as an expansion; I'd like to keep my same control plane and add additional projects, domains (with domain-specific auth), and whatever else I need to keep the IU projects on IU-funded hardware, so multiple cases of #1?
21:25:51 <b1airo> In fact we aren't even seeing the image properties in the dict the filter is using
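    A minimal sketch of the AggregateImagePropertiesIsolation approach described above (property, aggregate, host and image names are illustrative; as just noted, custom image properties may not reach the filter on some releases, and the nova.conf option section varies by release):

        # nova.conf on the scheduler:
        #   [filter_scheduler]
        #   enabled_filters = ...,AggregateImagePropertiesIsolation
        openstack aggregate create priority-pool
        openstack aggregate add host priority-pool compute-42
        # The "secret" key/value on the prioritised-capacity aggregate:
        openstack aggregate set --property os_priority_group=secret-token priority-pool
        # Users allowed into that aggregate set the matching property on their images:
        openstack image set --property os_priority_group=secret-token my-image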
21:27:14 <b1airo> Does the domain come into it in the hardware isolation side jmlowe ? I haven't seen any filters that mention it
21:28:01 <jmlowe> I don't think so, but I will need to use a different auth for those users, probably ldap
21:31:03 <jmlowe> I already use domain specific configuration to put novice users in the default domain with sql identity backend and advanced users in another domain with an ldap identity backend, what's one more domain and ldap identity backend?
21:32:48 <b1airo> Interesting jmlowe , what's the use case for that? Surely the users don't know what Keystone backend they're in?
21:32:55 <jmlowe> I can scope projects to a domain, but afaik that's as far as it goes, there's no domain filter for host aggregates
21:33:44 <oneswig> b1airo: in your case #1, does the research project funding the kit allow others to use the resources when they are not?
21:35:03 <jmlowe> no, but for the default domain the domain is masked and the users don't actually know their credentials; the Atmosphere UI does all the OpenStack calls on their behalf and they just need to use OpenID to prove their identity to Atmosphere. The other ones use the "TACC" domain because their credentials are actually the ones issued by TACC, but I digress
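    A rough sketch of the domain-specific identity configuration jmlowe describes, with an SQL-backed default domain and an LDAP-backed second domain (file paths and LDAP details are illustrative; the "TACC" domain name is taken from the discussion above):

        # /etc/keystone/keystone.conf
        [identity]
        domain_specific_drivers_enabled = True
        domain_config_dir = /etc/keystone/domains

        # /etc/keystone/domains/keystone.TACC.conf  (one file per non-default domain)
        [identity]
        driver = ldap
        [ldap]
        url = ldap://ldap.example.org
        suffix = dc=example,dc=org
        user_tree_dn = ou=People,dc=example,dc=org
        user_objectclass = inetOrgPerson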
21:35:15 <persia> Similarly in case #2, can the funding be applied via a billing model (x seconds of y machines) rather than being directly associated with specific hardware?
21:35:43 <b1airo> oneswig: that might be a 1b case, but generally not
21:36:51 <oneswig> Seems a limited benefit to stretching the same openstack over that resource, unless you've an eye to the project's ending?
21:37:14 <b1airo> persia: there are lots of different models possible for sure, but at the moment I'm just thinking about the most common ones we see today
21:38:09 <b1airo> Not sure I follow oneswig ? You suggesting to build a dedicated cloud for one to few machines?
21:38:40 <oneswig> Is that the scale?  Perhaps not then.  Just thinking, it's not very share-y, this resource
21:39:44 <b1airo> From operations perspective it is, also usability across the broader federated cloud is easier - same auth, APIs, dashboard, images, etc etc
21:40:51 <jmlowe> I'm looking at 32 nodes myself, a ~10% addition to my existing
21:41:08 <jmlowe> Plus a handful of GPUs
21:41:39 <oneswig> b1airo: good points. In our world it's a duplication of the playbook, but the clouds aren't federated like that.
21:42:08 <jmlowe> I think a domain dropdown in horizon login is new in pike (if so I can't believe it took this long)
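    If that's right, the Horizon side would look roughly like the following in local_settings.py (domain names illustrative; the dropdown settings are believed to be available only from Pike onwards):

        OPENSTACK_API_VERSIONS = {"identity": 3}
        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
        # Dropdown of selectable domains on the login page:
        OPENSTACK_KEYSTONE_DOMAIN_DROPDOWN = True
        OPENSTACK_KEYSTONE_DOMAIN_CHOICES = (
            ("Default", "Default"),
            ("TACC", "TACC"),
        )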
21:44:15 <oneswig> b1airo: does the PI also pay for network infrastructure?
21:44:35 <b1airo> Also control infrastructure overhead to consider oneswig
21:45:10 <oneswig> VMs in your setup, right?
21:46:48 <b1airo> oneswig: yeah we typically structure it so they pay for the server ports but fabric ports are covered centrally
21:47:24 <oneswig> b1airo: do you think you'll get your setup with aggregates that other projects cannot schedule to?  Or are you having a major rethink?
21:47:26 <b1airo> oneswig: yep virt control plane currently
21:48:02 <b1airo> We're debugging the filter, will see how we go
21:48:26 <b1airo> The other option for #2 is the AggregateMultiTenancyIsolation filter
21:49:08 <b1airo> But it has a major shortcoming in that it is limited to 255 characters worth of tenant IDs in the allowed access list
21:50:20 <b1airo> We could carry a patch to merge all matching aggregate keys, i.e. use multiple overlapping aggregates to cover all relevant projects/tenants. I doubt such a change would get upstream
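    For completeness, a minimal sketch of that AggregateMultiTenancyIsolation option (aggregate, host and project IDs are illustrative; as noted, the filter_tenant_id value is subject to the 255-character limit on aggregate metadata values):

        # nova.conf on the scheduler:
        #   [filter_scheduler]
        #   enabled_filters = ...,AggregateMultiTenancyIsolation
        openstack aggregate create priority-pool
        openstack aggregate add host priority-pool compute-42
        # Comma-separated project (tenant) IDs allowed onto hosts in this aggregate;
        # the whole value must fit within 255 characters:
        openstack aggregate set --property filter_tenant_id=<project-id-1>,<project-id-2> priority-pool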
21:51:30 <oneswig> Should we move on?
21:51:53 <b1airo> Yeah seems like
21:52:02 <oneswig> #topic public voting for Vancouver summit
21:52:03 <martial> did not know about this option for #2, thanks b1airo
21:52:05 <b1airo> I will report back once we've figured it out
21:52:25 <oneswig> keep us updated b1airo
21:52:37 <oneswig> OK, so public voting's underway
21:53:08 <oneswig> If anyone has a presentation proposal, or has seen an interesting one they wanted to share, let's gather them together in an etherpad
21:53:20 <oneswig> #link presentations of interest https://etherpad.openstack.org/p/Scientific-SIG-Vancouver-presentation-picks
21:54:06 <oneswig> Andrey Brito, who presented to the SIG earlier this year on SGX, has submitted a proposal for their work
21:54:11 <martial> all of those sound very interesting already
21:54:39 <oneswig> martial: you're stepping up this time, right?  Let's see the details :-)
21:55:27 <oneswig> Public votes are a guideline but could be the tie-breaker
21:56:09 <oneswig> That's all I had on that - do add your talks if you've got something proposed
21:56:17 <oneswig> #topic AOB
21:56:25 <martial> nope, not this time, likely will participate in a forum session on federation following the effort of our meeting in March
21:57:25 <oneswig> I heard that DK Panda's team has released a new version of Spark-RDMA - http://hibd.cse.ohio-state.edu/#spark - got a fix in for an issue we hit so we'll be trying that.
21:57:41 <martial> (with Khalil, Craig Lee and Robert Bohn)
21:59:21 <oneswig> We are nearly out of time - any final comments?
22:00:33 <oneswig> OK, thanks everyone
22:00:37 <oneswig> #endmeeting