09:00:36 #startmeeting scientific_wg
09:00:37 Meeting started Wed Apr 12 09:00:36 2017 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:40 The meeting name has been set to 'scientific_wg'
09:01:05 Good morning, good afternoon, good evening
09:01:21 #link agenda https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_April_12th_2017
09:01:34 Hello oneswig
09:01:45 Hello from Lugano
09:01:57 Hi priteau, I have something to ask of you later ;-)
09:02:16 Hello ma9_ - I'm in the front row next to Mike.
09:02:21 Good morning!
09:02:22 evening /
09:02:41 hello!
09:02:46 I know, I'm Marco from CSCS
09:02:53 so this is how the other half lives?
09:02:57 ha! Hi Marco :-)
09:03:20 #chair b1airo
09:03:21 Current chairs: b1airo oneswig
09:03:29 Morning.
09:03:50 how's Laguno?
09:04:10 or Lugano even
09:04:11 b1airo: splendid - don't know how ma9_ gets any work done out here
09:04:23 :D
09:05:00 OK, shall we get started?
09:05:18 #topic Roundup on resource reservation & other forum topics
09:05:25 it's round-up time at the family station here now that daylight saving has finished, so i will be back and forth
09:05:47 OK b1airo - know how that is!
09:06:11 I had a quick one here. I put in two forum topics - BIOS/RAID config and Cinder multi-attach
09:06:41 the BIOS/RAID one was rejected :-( but it was suggested to cover it in the virt + baremetal session
09:06:58 the Cinder volume multi-attach I think was also to be covered in virt + baremetal (oddly)
09:07:15 The main discussion going on has been around resource reservation
09:07:15 that is odd
09:07:21 priteau: can you give a summary?
09:08:35 The discussion has focused on what should be handled by Nova itself and what should be managed by another service, like Blazar
09:08:36 jmlowe: it is, there's also a good deal of interest in multi-attach from the telco world it seems
09:09:15 Jay Pipes, one of the Nova core developers, thinks Nova shouldn't handle any temporal access (i.e. things happening in the future)
09:09:31 Does anybody have a multi-attach use case other than providing large read-only data sets?
09:10:16 jmlowe: would be good for a scalable bare metal cluster boot (another read-only case)
09:11:00 A discussion has also started on how to handle instance reservation (rather than just physical host reservation, which is supported in Blazar today)
09:11:23 #link https://etherpad.openstack.org/p/new-instance-reservation
09:11:40 priteau: where are things with OPIE currently? Seems OK to have a separate service for managing future reservations, but when the future becomes now, what would happen if the machine's full?
09:12:04 priteau: instance reservation - that would be a nice development in flexibility
09:12:28 The instance reservation discussion is driven by Masahito Muroi, who is the Blazar PTL - I haven't had time to read it yet
09:12:31 I have concerns about the use of host aggregates - are they thinking about a refined mechanism for Blazar to manipulate Nova?
09:13:19 I don't know much about OPIE. I hope this forum session will give time for OPIE and Blazar devs to talk about how the two systems will work together
09:13:47 yes! and us to advocate use-cases
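For readers following the Blazar thread above, here is a minimal sketch of what creating a physical-host lease looks like today, driving Blazar's v1 REST API through a keystoneauth session. The credentials, the endpoint registration under the 'reservation' service type, and the lease field values are illustrative assumptions rather than details from the meeting.

```python
# Hypothetical sketch: create a physical-host reservation (lease) via Blazar's
# REST API, authenticated with a keystoneauth session. Credentials, endpoint
# registration and lease contents are placeholder assumptions.

from keystoneauth1 import loading, session

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='https://keystone.example.org:5000/v3',   # placeholder cloud
    username='demo', password='secret',
    project_name='demo',
    user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)

# Blazar is normally registered under the 'reservation' service type.
# NB: depending on the deployment, the catalogued endpoint may or may not
# already include the '/v1' suffix.
blazar_url = sess.get_endpoint(service_type='reservation', interface='public')

lease = {
    'name': 'weekly-benchmark',
    'start_date': '2017-05-08 09:00',     # UTC
    'end_date': '2017-05-08 17:00',
    'reservations': [{
        'resource_type': 'physical:host',  # host reservation, as supported today
        'min': 1, 'max': 2,
        'hypervisor_properties': '',
        'resource_properties': '',
    }],
    'events': [],
}

resp = sess.post(blazar_url + '/leases', json=lease)
print(resp.json())
```

The instance-reservation etherpad linked above discusses extending this model from whole hosts to flavors/instances, which is where the placement-API versus host-aggregates question comes in.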
09:13:59 priteau: that would be ideal :-)
09:14:19 What I like is how the forum has stimulated some useful discussion already
09:14:45 jmlowe: Maybe we could use the placement API instead of aggregates
09:14:47 i'd like to understand jay's opposition to having any more smarts about this in nova
09:15:47 i can understand if he prefers to integrate other pluggable services in the scheduler backend, but i'm worried that the Nova API actually needs to be extended to support these use-cases nicely
09:16:59 so in terms of forum topics i also have one for hypervisor tuning
09:17:11 b1airo: yes, I didn't quite understand his reluctance either
09:17:24 b1airo: has it been approved for discussion?
09:17:43 and one for special hardware (i need to reach out to the Cyborg folks)
09:18:54 Tim Randles and I also prepared a topic about customization of Ironic deployment steps, which was rejected but would be discussed in the Ironic feedback session
09:18:59 b1airo: had a small update for you from here wrt special hardware
09:19:08 will follow up later
09:19:42 priteau: what was the issue? We have hopes of using the deployment steps as a means of BIOS/RAID reconfig
09:20:38 oneswig: it was for supporting booting to ramdisks or speeding up reboots with kexec: http://forumtopics.openstack.org/cfp/details/125
09:20:45 aha
09:21:59 Well, we still have the chance to discuss these things at the forum, within a merged session. It will be interesting to hear what the Ironic team think of it
09:22:31 Did anyone else propose forum topics they can update on?
09:23:11 OK, let's move on to the agenda
09:23:27 #topic Updates from HPCAC conference in Lugano
09:23:45 zioproto is giving an OpenStack tutorial as we speak.
09:23:57 (That's his apology for today I guess)
09:24:12 simon-AS559: not quite yet - Gabriele from Intel is currently talking burst buffers and DAOS
09:24:25 zioproto is up later on I believe
09:24:33 Right, 15 minutes according to the schedule :-)
09:24:33 then lunch
09:24:45 14:15 local time
09:24:54 We had jmlowe earlier with a great show-and-tell on Jetstream
09:25:40 oneswig: do you have a link for the meeting?
09:25:58 http://hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
09:26:06 thanks, simon-AS559
09:26:12 verdurin: I believe the sessions are filmed, haven't checked for uploaded presentations
09:26:26 DK Panda has spoken a couple of times. One on MPI and one on big data.
09:26:39 the presentations are usually published on the agenda page afterwards: http://hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
09:26:56 b1airo: DK Panda says GPUs + MVAPICH2-Virt integration is "something we'll be working on very soon"
09:26:56 The new one for me was the Spack package manager
09:27:06 some videos usually appear on http://insidehpc.com/
09:27:24 so virtualised GPUDirect is on the roadmap
09:27:24 some promise of SR-IOV live migration, but it was hard to nail him down on that
09:28:32 priteau: apparently you have Heat stacks on Chameleon for the OSU HiBD environment - is that true and are they shareable?
09:28:49 The thing about Spack that interested me was the possibility of dropping in complex HPC applications with a package manager bootstrapped by cloud-init or Heat software configs
09:28:56 (This is the hadoop/spark/memcached/hbase + RDMA technology)
09:30:33 oneswig: There is an RDMA Hadoop Heat template in progress. It will be at https://www.chameleoncloud.org/appliances/ when ready
09:31:22 ok, the existing rdma-hadoop looks to be just an image, not a Heat template?
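To illustrate the Spack-plus-cloud-init idea raised at 09:28:49: a minimal sketch, assuming the openstacksdk client library and a clouds.yaml entry named 'mycloud'. The image, flavor, network, and package names are placeholders; the point is simply that a user-data script handed to cloud-init can bootstrap Spack so an HPC software stack gets installed at first boot.

```python
# Hypothetical sketch: boot an instance whose cloud-init user data bootstraps
# the Spack package manager. All resource names below are placeholders.

import openstack

USER_DATA = """#!/bin/bash
# Minimal Spack bootstrap, executed by cloud-init on first boot.
set -e
git clone https://github.com/spack/spack.git /opt/spack
source /opt/spack/share/spack/setup-env.sh
# Example package; swap in whatever the workload needs (e.g. openmpi, hdf5).
spack install zlib
"""

conn = openstack.connect(cloud='mycloud')   # assumes a clouds.yaml entry

server = conn.create_server(
    name='spack-node',
    image='CentOS-7',            # placeholder image name
    flavor='m1.large',           # placeholder flavor name
    network='private',           # placeholder network name
    userdata=USER_DATA,          # passed to cloud-init as user data
    wait=True)

print(server.status)
```

The same user-data approach can equally be wrapped in a Heat OS::Heat::SoftwareConfig resource, which is the variant mentioned in the discussion above.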
09:31:40 priteau: Do you think the Heat templates will be portable? I'm hoping just to rename a few flavors etc. and try it out here.
09:32:12 jmlowe: What's in the catalog is just an old standalone image now, the Heat template + new image is still being developed
09:32:48 priteau: ack, just verifying
09:32:57 priteau: is that coming from OSU or UChicago?
09:33:01 OSU
09:34:08 Gabriele has just put up a slide with "BlobFS" on it - you heard it here first :-)
09:34:09 oneswig: because it's on bare metal, the Heat template actually deploys bare-metal nodes, and then runs VMs on top of that. So if you're using a virtualized OpenStack, you will have to adapt the template to remove the unnecessary code to launch VMs
09:34:28 fascinating
09:34:42 priteau: I'd be running bare metal instances in this case
09:35:03 do you mean you install KVM on the nodes?
09:35:03 how is it putting the RDMA in the VM? SR-IOV?
09:35:12 jmlowe: yes, SR-IOV
09:37:07 priteau, sounds like the "install openstack kvm cloud" and "bring up hadoop/spark/... cluster on openstack" are really two different things..?
09:37:50 oh, schedule change, OpenStack tutorial starting now by Saverio Proto
09:38:00 reading back... a 15 min OpenStack tutorial?! that's going to be 40k feet :-)
09:38:31 I attempted 10k feet, it was rough
09:39:08 b1airo: I don't understand your question
09:39:16 zioproto's up right now, we can live-stream it! Speaking at normal speed currently, I assume he'll accelerate to a singularity by the end
09:39:42 +1
09:40:04 exciting, similar raison d'être to my project, Jetstream
09:41:06 priteau: you install KVM in order to virtualise the hadoop workers - it's a two-stage process?
09:42:34 SWITCH seems to give you the option of attaching to a science DMZ or internal campus network
09:43:00 oneswig: I assume KVM is pre-installed, but there is some setup done to launch the VMs (not using OpenStack, just plain libvirt IIRC). I don't know all the details, OSU does
09:43:09 runs Mitaka, Newton coming in a few weeks
09:43:49 good god, they are running Windows VMs
09:45:03 very interesting, they bridge the customer campus networks over the SWITCH backbone with VPN
09:45:24 into a provider network on the OpenStack side
09:45:42 I like this layer-2 extension of a local intranet into OpenStack, makes it very simple for the user to assimilate the VMs into their world
09:46:00 sounds similar to what we do
09:46:02 Wonder what the side-effects are - simon-AS559 any comment?
09:46:41 One side effect is we need to get l2gateway to work :-)
09:46:45 I just did a one-off to let one tenant break out of our VXLAN and use a VLAN to get access to a Lustre filesystem
09:46:58 But it's a bit early to say because we don't really have production deployments of this.
09:47:26 Also the focus isn't really high-performance networking; more to get inside the customer's firewall to be able to access things like Active Directory and other servers.
09:47:31 I'm sure it could be refined, was lots of hand-wiring by me
09:48:18 Yes, we're trying to reduce that hand-wiring to a minimum; l2gateway is helpful there.
09:48:19 oh, lots of bridge mappings, looks more similar to what I did
09:48:42 simon-AS559: sure, the convenience is the advantage here, but I'm guessing you'll have a second network attached for non-local-intranet traffic?
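As an aside on the VXLAN-to-VLAN breakout and provider-network pattern discussed above (e.g. giving one tenant direct access to a Lustre filesystem), a hedged sketch of the "hand-wiring" an admin might do with openstacksdk follows. The physical network name, VLAN ID, project ID, and CIDR are all placeholders, and the actual deployments mentioned here (l2gateway, per-customer VPNs) involve more than this.

```python
# Hypothetical sketch: as admin, create a provider VLAN network mapped onto an
# existing storage VLAN and hand it to a single project, so that project can
# attach instances directly to (for example) a Lustre network instead of
# staying inside the tenant VXLAN overlay. All identifiers are placeholders.

import openstack

conn = openstack.connect(cloud='mycloud')   # admin credentials assumed

lustre_net = conn.network.create_network(
    name='lustre-vlan',
    project_id='tenant-project-id',          # placeholder tenant
    provider_network_type='vlan',
    provider_physical_network='physnet1',    # must match a bridge mapping
    provider_segmentation_id=301)            # placeholder VLAN ID

conn.network.create_subnet(
    network_id=lustre_net.id,
    name='lustre-subnet',
    ip_version=4,
    cidr='10.20.30.0/24',                    # placeholder storage subnet
    is_dhcp_enabled=False)                   # addressing managed externally
```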
09:50:04 Yes, although that's not strictly necessary if you manage everything from the VPC (intranet<->cloud) side
09:51:20 they are using a plugin that I know nothing about, l2gw
09:52:27 I've noticed the time, we should cover other business!
09:52:45 Update on the Boston Open Research Cloud declaration?
09:52:46 end of live stream
09:52:49 There's an agenda now
09:53:03 On https://drive.google.com/drive/folders/0B4Y7flFgUgf9dElkaFkwbUhKblU
09:53:15 "BOCD-Tentative Workshop Agenda-V4.docx"
09:53:18 Just FYI
09:53:20 #topic Boston roundup
09:53:51 thanks simon-AS559, seems to be taking shape well.
09:54:12 Yep, I like what I saw. Have a look!
09:54:54 simon-AS559: See you in Boston?
09:55:04 anybody else attending?
09:55:07 Yes! (Thanks for the live stream btw)
09:55:39 o/
09:56:22 I'll be in Boston too
09:56:33 any particular points those not attending would like us to carry with us to Boston?
09:56:42 On a different matter, we have full subsidy for an evening social and Tim's on the case for a nice venue.
09:57:00 jmlowe: of course, we all want selfies from you lot drinking beer together :-)
09:57:28 oneswig: it only counts if you can attract a locust
09:58:05 verdurin: I suspect you may need to bring one with you if you're going to pull that trick off again!
09:58:13 well, I'll drink a large quantity of beer and see what happens
09:58:57 sounds like a plan
09:59:07 One other topic to squeeze in: jmlowe had a good idea - an IRC channel for WG chat
09:59:26 Should be easy to arrange. Any thoughts?
09:59:54 would be great to spend more time chatting outside of the meetings
10:00:01 yes, good idea
10:00:04 keep the meetings a bit more focused
10:00:25 no idea how to get an "official" channel
10:00:43 have to go - bye, and thanks for the reports from Lugano
10:00:51 If you want to start a new IRC channel, please consult with the Infrastructure Team in #openstack-infra or at openstack-infra@lists.openstack.org to ensure it gets registered appropriately.
10:00:54 https://wiki.openstack.org/wiki/IRC
10:00:59 I can ask, I've done other infra stuff
10:01:02 thanks priteau
10:01:07 ah, time up.
10:01:07 perfect
10:01:11 last comments?
10:01:32 OK, thanks all!
10:01:35 #endmeeting