21:01:04 #startmeeting scientific-wg
21:01:05 Meeting started Tue May 16 21:01:04 2017 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:08 The meeting name has been set to 'scientific_wg'
21:01:10 aloha!
21:01:17 Hi Stig
21:01:28 #chair martial
21:01:29 Current chairs: martial oneswig
21:01:30 so no use of the dedicated channel?
21:01:31 Hi Martial
21:01:37 because there are two it seems :)
21:01:50 Not as yet ...
21:01:56 #science-wg has people in it and #scientific-wg has a bot :)
21:02:19 Ah. Well I welcome feedback on https://review.openstack.org/#/c/459884/
21:02:44 #link Agenda for today https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_May_16th_2017
21:03:17 martial: you going to start a science-wg meeting as well? :-)
21:03:43 nope, not my intention, pointing people over here if anything
21:03:48 Do we have Blair today?
21:04:33 martial: np, it'll shake out. I think the review means the channel gets eavesdropped
21:05:14 OK, shall we start? Hope you had a good summit - sounds like I missed out on plenty
21:05:30 #topic Boston summit roundup
21:05:42 doh!
21:05:52 Hi all! I only got back yesterday morning so still not sure which way is up
21:05:58 Hi jmlowe, all ready for the LUG?
21:06:01 #chair b1airo
21:06:02 Current chairs: b1airo martial oneswig
21:06:04 Hi b1airo
21:06:12 well the bot one is probably the real one and I'm not in it
21:06:28 hey Mike
21:06:37 ah, but how is real defined in IRC?
21:06:42 oneswig: just signed up yesterday, I'll work registration for a bit to earn my free code
21:06:52 Hey martial
21:07:22 jmlowe: I've heard the Cambridge team are gearing up for it.
21:07:46 So how did it go at the summit?
21:07:54 oh, great, I'm trying to wind up for a big "use OpenStack for everything" pitch here, it will help
21:07:59 You mean they're not planning to ditch Lustre in favour of CephFS :-)
21:08:05 #link https://etherpad.openstack.org/p/Scientific-WG-boston
21:08:19 Ah thanks martial
21:08:19 so a lot of the conversation from the SWG happened in the Etherpad
21:09:03 Blair was kind enough to share his GPU work and some conversations he had with Nvidia
21:09:19 Yeah, good turnouts for our sessions and some great lightning talks, only negative was no one volunteering to lead anything
21:09:28 I'd love to do that some day, safety over speed and all
21:09:54 We talked about Identity Federation, more on that through the Open Research Cloud (ORC) Declaration (ORCD?)
21:09:58 b1airo: ah, too bad.
21:10:21 Did jmlowe just say he'd love to volunteer to lead?
21:10:28 science-wg events were well attended I thought
21:10:33 stig: your work was discussed too (too bad you could not be there)
21:10:37 And for next summit I'd suggest we simply do two sessions: one double-session BoF and one lightning talks
21:10:37 #link http://www.stackhpc.com/monasca-log-api.html
21:10:50 wait what? (was actually looking over the etherpad to volunteer for something)
21:10:57 b1airo: we might do two lightning talks too
21:11:05 martial: cool!
21:11:18 mike: you are still chair for the next HPC
21:11:48 right? If you are, maybe we can use the extra lightning talk for the SWG to add some of your proposed talks?
21:12:13 (through the HPC track I mean)
21:12:23 Which conference?
21:12:33 OpenStack Summit Australia
21:12:47 For the telemetry effort, I also mentioned our work here at NIST
21:12:52 dmoni?
21:12:54 How is it?
21:13:19 I met with my team today and we are going to try to release dmoni / ansible scripts / heat templates and VM config files mid-June
21:13:22 github likely
21:13:43 martial: cool, keep us informed.
21:13:47 then ask people to test it
21:13:55 Oh, um, I didn't know I had signed up to chair the fall summit HPC track, happy to review but showing up in person might be tricky
21:14:03 mike: bummer
21:14:06 martial: How did Cyborg go?
21:14:30 stig: Cyborg went well, we had a person from the team do a lightning talk
21:14:58 The special hardware forum session went reasonably well even if it got sidetracked in Nova scheduling for a while
21:15:00 stig: and the full session presented the aim of the project and how to get attached to it
21:15:17 b1airo: true, that was a good discussion as well
21:15:28 b1airo: was it clear if/how it is distinct from the newly-evolving scheduler traits?
21:15:32 #link https://etherpad.openstack.org/p/BOS-forum-special-hardware
21:16:04 hello
21:16:06 Lightning talks
21:16:08 #link https://etherpad.openstack.org/p/Scientific-WG-Boston-Lightning
21:16:15 Hi rbudden
21:16:16 Hi Robert :)
21:16:21 hi guys
21:16:33 got distracted on our ironic cluster, so apologies for being late
21:16:37 Was there a prize from Arkady for the lightning talks?
21:16:43 rbudden: that Bridges thing?
21:16:47 yep ;)
21:16:47 I've heard of it
21:17:24 o/ sorry I'm late
21:17:27 oneswig: I haven't yet been back and watched Jay's placement API talks, but I guess the main thing is that Cyborg aims to lifecycle-manage accelerators, and may provide scheduling info to Nova via placement as needed
21:17:36 stig: Google Home I think
21:17:44 As an aside, had a weird problem today - all new CentOS images built today are not starting their network, don't know why and it's bugging me...
21:17:53 Hi trandles
21:17:56 Jay was in the special hardware session and didn't pooh-pooh anything in particular
21:18:21 I might volunteer to take on the Scientific Datasets activity for this cycle
21:18:22 Actually had most of Nova core in there
21:18:32 Mike: thank you
21:18:33 jmlowe: w00t!
21:18:43 Back in 5...
21:19:09 b1airo: most of Nova core, no pressure then
21:19:10 stig: yes Scientific Datasets was the next item on the list ... Mike just solved this question :)
21:19:30 jmlowe: would be great, how is this tackled at IU?
21:19:35 stig: then we had an interesting "OpenStack bugbears" session
21:19:37 A few weeks ago we grabbed some bad CentOS cloud images, they were yanked but not before they caused us problems
21:20:14 jmlowe: bad in what way?
21:20:41 and then there was Greg and the interview. Talked to the gentleman for a bit on Thursday but he mentioned he would be around today ... is he here?
21:21:11 blair and I were also in many of the forum meetings where organization of the WG was discussed
21:21:21 No sign as yet but we have the questions, should reserve at least 20 mins for that
21:21:23 nothing too critical there yet
21:21:26 oneswig: not sure, just remember Jeremy talking at the summit about finding some terminally broken cloud images in their repo a couple of weeks back
21:22:04 it was a well-attended meeting with over 30 people in the room (and names in the Etherpad)
21:22:17 jmlowe: hmmm... I'll clear caches and try again. Would hate for this to be the root cause...
21:22:49 among the todos ...
##Todo: extend book chapter on federation (keystone / OpenID)
21:23:00 nice work martial - I see quite a few familiar folk in the etherpad, am doubly sorry to miss now!
21:23:39 stig: hopefully Australia (might be the one missing that one, reached out to the Foundation about travel support ... waiting to hear back)
21:23:43 martial: indeed, there's a pre-draft section there that needs much content
21:23:47 oneswig: scientific datasets: we have more datasets showing up than we have room for; we try to offload to Wrangler's 10PB of Lustre and re-export over NFS with some per-tenant provider VLANs; the rest we encourage to put on volumes and export over NFS to their other instances
21:24:51 jmlowe: will need to follow up about this. I've got you in my sights :-)
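A rough sketch of the volume-based pattern jmlowe describes above (a dataset lives on a Cinder volume attached to one instance, which then re-exports it over NFS to the tenant), using openstacksdk's cloud layer; the cloud name, volume size, and server name are illustrative assumptions, not details from the discussion:

```python
import openstack

# Illustrative only: "research-cloud" is assumed to be a clouds.yaml entry.
conn = openstack.connect(cloud="research-cloud")

# Create a volume to hold the shared dataset (size in GB).
volume = conn.create_volume(size=500, name="shared-dataset")

# Attach it to the instance that will mount it and re-export it over NFS
# to the tenant's other instances.
server = conn.get_server("nfs-head-node")
conn.attach_volume(server, volume, wait=True)

# Inside "nfs-head-node" one would then format/mount the device and add an
# /etc/exports entry restricted to the tenant network.
```

The Wrangler re-export path jmlowe mentions would instead hang off the per-tenant provider VLANs, i.e. Neutron provider networks configured on the operator side rather than via a tenant API call.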
21:25:06 related to ORC (I like that acronym of course :) )
21:25:07 OpenID federation with Globus Auth in Horizon is on my todo list, probably just in time for our annual review in July
21:25:09 We should also cover the cloud congress... move on?
21:25:23 #topic ORCD / cloud congress
21:25:32 take it away martial
21:26:01 topics of conversation were Federation / Promoting Teaching & Learning / Improving, Sharing, and Standardizing Operational Processes / Making federated cloud usage simple to adopt
21:26:24 Assisting with Reproducibility / Standards and Open Source / Reducing friction from Policy / Cost / Funding Models
21:27:01 Security / Governance / Support / Federation
21:27:09 a very busy couple of days
21:27:30 forgot Resource Sharing
21:27:59 the next steps are as follows:
21:28:04 how many people managed to attend and was it a good mix?
21:28:10 the commercial cloud vendors were certainly present
21:28:12 - leave the Google folder open for some time for additional input, and then we will compile the declaration. The Google folder docs will "close" for edits in 2 weeks.
21:28:35 stig: yes mike is very correct, and a few people from the research side
21:28:42 https://docs.google.com/document/d/1AmB59CaWBTklH9NIb_6vkif51eXLpapPegf_7ZyulBo/edit
21:28:59 (not sharing the link as a #link, to be safe)
21:29:16 if you want to add to it / view the discussions, follow the link
21:29:27 - next main meeting in Sydney in November around the OpenStack Summit
21:29:44 - creation of Working Groups
21:30:24 that's pretty much it on the ORC'd
21:30:52 Thanks martial for the update
21:31:03 stig: feel free to review the link I just shared
21:31:11 am looking now
21:31:15 the conversation is just starting
21:31:49 I think it's a victory if there's any cross-fertilisation here
21:31:51 same problem as the BoF ... moderator asking a lot of things akin to "does this work for everybody"
21:32:07 and nobody saying no
21:32:34 so we will see how this evolves
21:32:36 Before anything is decided, everything is possible
21:33:09 Good to hear that the effort will continue.
21:33:31 Was there much discussion on funding? I saw it on the agenda
21:33:41 yes and no
21:33:51 there were people identified as funding agencies present
21:34:00 but no real talk about funding sources
21:34:13 my colleague Robert Bohn was on the "funding agency" panel
21:34:15 when discussing funding and governance, "effort" should be capitalised... it's going to take a lot of Effort to tackle those issues
21:34:50 but he was there to talk Federation (and the effort run by his team on this matter)
21:35:22 Tim: you are very right, it was very ... chaotic
21:35:40 (now was it chaotic good or chaotic evil ...)
21:35:49 Another potential new focus area is cloud workload traces - KateK is looking for a student to work on it in Chameleon over the US summer
21:36:05 I think chaotic good actually
21:36:36 b1airo: got a link to a role description? Might know some people
21:36:45 blair: we ought to publicize this for her
21:36:47 b1airo: we have a workload effort ongoing that might benefit from discussion with a wider audience
21:36:53 (like you just did)
21:36:57 is Pierre around?
21:37:09 seems not.
21:37:26 #link http://www.nimbusproject.org/news/#421
21:37:33 :)
21:37:57 Thanks b1airo. OK, we ought to look over Gene's questions
21:38:13 or we'll be dashing madly at the end (as usual)..
21:38:26 Yes good point
21:38:43 How about I put each question as a topic and you guys chip in with some soundbites?
21:38:53 sounds good to me
21:39:13 #topic Why, as a student or researcher at a university, should I care about the Scientific Working Group?
21:39:40 That's an interesting one, given none of us are actually students and not really researchers either.
21:39:48 (everybody feel free to contribute your take on it)
21:40:16 Mostly I'd say the SWG resonates with the architects and admins of research computing services.
21:40:45 Yes agreed, those people are sometimes also (or were) researchers
21:40:59 I've heard the term "ResOps" before - people dedicated to outreach into research faculties to bring scientists onto the cloud platform most effectively.
21:40:59 It's a relatively rare opportunity to connect with those architects and admins
21:41:26 But possible focus areas like workload traces and dataset sharing are much more concretely relevant to researchers
21:41:48 It's about bringing the benefits of cloud to their workflows?
21:41:49 We have one open job, just posted last week, to hire another; Jeremy Fischer from IU is our "ResOps" person and we need another
21:42:22 #topic Why do researchers choose OpenStack as their IaaS platform?
21:42:39 or maybe: researchers and students often encounter needs for High Performance Computing or Distributed Computing, or simply for Infrastructure-as-a-Service components. The SWG helps aggregate the knowledge of users and operators who have tried to set up and use such models, and can help guide the research model towards functional solutions
21:42:41 There is also interest amongst us in scientific application sharing/packaging for cloud
21:42:45 (oops, too late on the last one)
21:43:42 The traditional HPC model is limited in what it can achieve; novel solutions based on Mesos, Kubernetes, and OpenStack allow the deployment of specialized solutions on commercial off-the-shelf as well as specialized hardware
21:43:48 Lots of reasons for that - flexibility in architecture, security, data locality
21:44:07 Research computing services see the advantages of converging a zoo of clusters into a single managed resource. Academia, as much as anywhere, suffers from beige-box "shadow IT"
21:44:14 Because it's free (as in money, and open source) with a large, very active community. I don't feel like I'll suddenly be left with an abandoned platform when choosing OpenStack.
21:44:17 Spartan talk ...
21:44:33 trandles: +1
21:44:39 #link https://www.openstack.org/videos/barcelona-2016/spartan-a-hpc-cloud-hybrid-delivering-performance-and-flexibility
21:44:49 cost and community are two major factors
21:44:54 OpenStack is free if your time costs you nothing!
21:45:04 Lol
21:45:07 lol
21:45:12 It is the de facto standard: from the campus, to the regional like Minnesota Supercomputing Institute, to the national like Jetstream and Bridges, and even the international like SKA and Nectar (international depends on where you are standing) - you have a uniform API for programmable cyber-infrastructure (tm)
21:45:40 tm included I see
21:45:49 b1airo: surprised you're letting the guys from across town get away without some comment on the local derby...
21:45:57 I could fill the rest of the meeting with discussion of that term
21:46:26 jmlowe: you ever applied for funding for something? :-)
21:46:47 as long as I get everything done that the program demands, my time is free when working on "free" software :P
21:46:48 #topic What are the key differences between scientific OpenStack clouds and other general OpenStack clouds?
21:46:50 oneswig: old news, they do what we do 12 months later :-)
21:47:06 one PI coined cyber-infrastructure, another added programmable
21:47:46 OK, this is where the bulk of the WG's value-add comes in.
21:48:05 Integration with other research infrastructure is probably the big difference, e.g., major HPC, data archives, instruments
21:48:33 The mix of memory, interconnect, networks local and upstream, experienced HPC staff, access to large parallel filesystems
21:49:07 Scientific deployments are also often quite open, e.g., outside the institutional firewall
21:49:10 Different workload characteristics (that we're struggling to characterize effectively)
21:49:13 For us, there are problems that run on our cloud that are affected by Amdahl's law. Cloud workloads typically scale out in a way that scientific applications don't (or can't). Tight coupling between instances is the principal expression of this difference in application.
21:49:15 if you are running a big pile of webservers you aren't going to have the same rule of thumb for processors to memory
21:49:22 jmlowe: +1 unique hardware definitely sets things apart
21:49:44 the SWG is about the use cases of integrating novel HPC models within a research cloud, including the use of specialized hardware (from GPUs to NUMA links) as well as specialized methodologies or distributed algorithms (MPI, ...)
21:49:47 What jmlowe said is pretty much what I meant
21:50:08 (10 minute mark)
21:50:15 #topic What kinds of workloads do researchers run on their OpenStack clouds?
21:50:35 Machine learning training models
21:50:41 Data science evaluations
21:50:47 Easy: all of the workloads, and then some
21:50:56 oneswig: you should flog your Lugano talk, the video is posted, very compelling case for doing all the above in research with OpenStack
21:51:13 Natural language processing, machine translation, video surveillance, ...
21:51:16 data science frameworks that don't play well with HPC workload managers (Dask, Spark, etc.)
21:51:47 Tim: did not say HPC in this particular case
21:51:51 simply OpenStack
21:52:05 I've got a guy from UTSA running NAMD doing MPI over our 10GigE VXLAN tenant networks
21:52:10 We've worked on a couple of generic research computing resources which take all comers. But we've also seen some very specialised applications such as medical informatics, or radio astronomy. Much of it is categorised as "the long tail of HPC", i.e. the stuff that doesn't fit well into conventional HPC infrastructure
21:52:27 but I agree with earlier comments, think of a topic ... OpenStack can likely do it
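As an aside on what "MPI over VXLAN tenant networks", mentioned a few lines up, involves in practice: a minimal smoke test one might run across a few instances before attempting a production code like NAMD. This assumes mpi4py and an MPI runtime (e.g. Open MPI) installed on each instance, plus a hostfile of tenant-network IPs; all names here are hypothetical:

```python
# mpi_ring.py - tiny MPI smoke test for a tenant network.
# Launch with at least two ranks, e.g.:
#   mpirun -np 4 --hostfile hosts python3 mpi_ring.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

print(f"rank {rank} of {size} on {MPI.Get_processor_name()}")

# Pass a token around a ring to exercise point-to-point messaging
# between instances over the (VXLAN) tenant network.
if rank == 0:
    comm.send(1, dest=1 % size)
    token = comm.recv(source=size - 1)
    print(f"token completed the ring: {token}")
else:
    token = comm.recv(source=rank - 1)
    comm.send(token + 1, dest=(rank + 1) % size)
```

If the ring completes, basic point-to-point connectivity and latency over the overlay can then be measured before scaling up to the real application.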
21:52:36 #topic How can researchers speed up their work with OpenStack?
21:52:39 (and make coffee and pancakes :) ... )
21:52:52 lots and lots of educational allocations on our clouds
21:52:54 Is this about the fabled metric of "time to paper"?
21:53:05 haha
21:54:02 One great way to speed things up is with orchestration and the higher-level OpenStack projects
21:54:05 It's about the situations where the development cycle spends as much time between keyboard and chair as it does between compute, network and storage. If researchers can get up and running (and stay up and running) faster with OpenStack, it's a win.
21:54:11 Researchers can speed up their work by using a runtime environment they control, at a scale they might not be able to afford or support.
21:54:18 so crawl with nova boot, walk with heat, run with sahara
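To make the "crawl with nova boot, walk with heat" progression concrete, a sketch using openstacksdk: the "crawl" boots one server directly against the Nova API, and the "walk" expresses the same server as a minimal Heat stack so the environment becomes a reproducible, deletable unit. All resource names are placeholders, and the Heat step assumes the SDK's orchestration support accepts an inline template:

```python
import openstack

conn = openstack.connect(cloud="research-cloud")  # placeholder clouds.yaml entry

# Crawl: boot one server directly via the compute API.
image = conn.compute.find_image("CentOS-7")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("tenant-net")
server = conn.compute.create_server(
    name="crawl-node",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
conn.compute.wait_for_server(server)

# Walk: the same server, declared as a minimal Heat template, so the whole
# environment can be created, versioned, and torn down as one stack.
template = {
    "heat_template_version": "2016-10-14",
    "resources": {
        "node": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "CentOS-7",
                "flavor": "m1.medium",
                "networks": [{"network": "tenant-net"}],
            },
        }
    },
}
conn.orchestration.create_stack(name="walk-stack", template=template)
```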
21:54:50 jmlowe: what's next after that?
21:54:52 heat templates, ansible [why is the name failing me now], VM configurations => experiment => multi-tenant + RO data access + SDN => segregated private experiment run
21:55:24 #topic What kinds of challenges do researchers face when using OpenStack clouds in their organization?
21:55:25 => repeatability
21:55:37 I had a guy who spent a couple of days trying to run some generic k8s heat template from a tutorial somewhere; had him just use magnum and he was off and running on his k8s cluster in 10 min - enter at the level of customization you need and forget the rest
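For reference, the Magnum path jmlowe describes boils down to a couple of calls once an operator has published a cluster template. A hedged sketch assuming openstacksdk's container-infra (Magnum) support; the cloud, template, keypair, and cluster names are invented for illustration:

```python
import openstack

conn = openstack.connect(cloud="research-cloud")  # placeholder clouds.yaml entry

# Look up an operator-provided Kubernetes cluster template...
magnum = conn.container_infrastructure_management
template = magnum.find_cluster_template("k8s-template")

# ...and ask Magnum to build a cluster from it. Node/master counts and the
# keypair name are illustrative.
cluster = magnum.create_cluster(
    name="research-k8s",
    cluster_template_id=template.id,
    keypair="my-keypair",
    master_count=1,
    node_count=3,
)
print("cluster creation requested:", cluster.id)
```

From there the user would typically fetch kubeconfig credentials with the `openstack coe cluster config` CLI and point kubectl at the new cluster, which is what makes the ten-minute turnaround in the anecdote plausible.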
21:56:08 Biggest challenge we see is that researchers are not sysadmins
21:56:55 b1airo: +1 That's how we justify our entire existence. We focus on the computing infrastructure so they can focus on being scientists.
21:57:13 I like that
21:57:14 one of the major problems I have with reproducibility is the idea that you keep everything the same; reproducibility is not me going into your lab and using your graduated cylinders etc., it is me doing it with my equipment and getting roughly the same results
21:57:17 b1airo: +1
21:57:35 Scientific-wg tagline?!
21:57:41 ;)
21:57:44 Yeah, and then the corollary challenge for us is how much effort to spend on the infrastructure versus helping with the science
21:58:11 b1airo: +1
21:58:16 flanders_ +1 :)
21:58:20 yep, ‘user services’ vs ‘facilities’
21:58:57 There's a difference in mindset. Research computing has this level of order that doesn't apply in cloud. HPC users assume they can book a number of physical nodes and network switches. There's time sharing and strict queuing. In comparison, cloud users get resources like they're crowding round an ice cream shop!
21:59:15 #topic What features are missing in OpenStack to provide better infrastructure for scientific research?
21:59:32 Next meeting perhaps...? This could take a little while
21:59:44 spot instances?
21:59:45 lol, but it's our chance to be selfish
21:59:53 preemptible instances and resource reservation have been a long-sought-after goal
21:59:53 Yeah, maybe we should carry those two over to next meeting...
22:00:03 yeah
22:00:31 Alas, we are out of time.
22:00:32 (not sure there is a meeting after, so if we need to overrun, others can tell us :) )
22:00:51 pipe up if you're waiting or we'll nail this last question...
22:01:38 (and we can move to #scientific-wg if needed)
22:01:45 seems we can go on
22:01:53 answers anybody?
22:02:28 When looking at HPC workloads on OpenStack, exposing physical resources into the virtual world has been key for hypervisor efficiency gains. The next level may be placement within the physical network. How can we deliver the benefits of cloud but pare it down to something so close to the metal?
22:03:24 that question would have been a lot easier a couple of years ago, but now I feel like a lot of gaps are being filled
22:03:54 In essence a lot of the WG members are "physicalising" the virtual resources, and somehow the OpenStack-managed infrastructure is still flexible enough to be a game changer.
22:04:20 ... final comments?
22:05:03 OK, let's wrap up - thanks everyone
22:05:12 #endmeeting