09:00:36 <oneswig> #startmeeting scientific_wg
09:00:37 <openstack> Meeting started Wed Apr 12 09:00:36 2017 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:40 <openstack> The meeting name has been set to 'scientific_wg'
09:01:05 <oneswig> Good morning good afternoon good evening
09:01:21 <oneswig> #link agenda https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_April_12th_2017
09:01:34 <priteau> Hello oneswig
09:01:45 <ma9_> Hello from Lugano
09:01:57 <oneswig> Hi priteau I have something to ask of you later ;-)
09:02:16 <oneswig> Hello ma9_ - I'm in the front row next to Mike.
09:02:21 <jmlowe> Good morning!
09:02:22 <b1airo> evening / <insert time appropriate greeting>
09:02:41 <dariov> hello!
09:02:46 <ma9_> I know, I'm Marco from CSCS
09:02:53 <jmlowe> so this is how the other half lives?
09:02:57 <oneswig> ha!  Hi Marco :-)
09:03:20 <oneswig> #chair b1airo
09:03:21 <openstack> Current chairs: b1airo oneswig
09:03:29 <verdurin> Morning.
09:03:50 <b1airo> how's Laguno?
09:04:10 <b1airo> or Lugano even
09:04:11 <oneswig> b1airo: splendid - don't know how ma9_ gets any work done out here
09:04:23 <mpasserini> :D
09:05:00 <oneswig> OK, shall we get started?
09:05:18 <oneswig> #topic Roundup on resource reservation & other forum topics
09:05:25 <b1airo> it's round-up time at the family station here now that daylight saving has finished so i will be back and forth
09:05:47 <oneswig> OK b1airo - know how that is!
09:06:11 <oneswig> I had a quick one here.  I put in two forum topics - BIOS/RAID config and Cinder multi-attach
09:06:41 <oneswig> the BIOS/RAID one was rejected :-( but suggested to cover in the virt + baremetal session
09:06:58 <oneswig> the Cinder volume multi-attach I think was also to be covered in virt + baremetal (oddly)
09:07:15 <oneswig> The main discussion going on has been around resource reservation
09:07:15 <jmlowe> that is odd
09:07:21 <oneswig> priteau: can you give a summary?
09:08:35 <priteau> The discussion has focused on what should be handled by Nova itself and what should be managed by another service, like Blazar
09:08:36 <oneswig> jmlowe: it is, there's also a good deal of interest on multi-attach from the telco world it seems
09:09:15 <priteau> Jay Pipes, one of the Nova core developers, thinks Nova shouldn't handle any temporal access (i.e. things happening in the future)
09:09:31 <jmlowe> Does anybody have a multi-attach use case other than providing large read only data sets?
09:10:16 <oneswig> jmlowe: would be good for a scalable bare metal cluster boot. (another read-only case)
09:11:00 <priteau> A discussion has also started on how to handle instance reservation (rather than just physical host reservation, which is supported in Blazar today)
09:11:23 <priteau> #link https://etherpad.openstack.org/p/new-instance-reservation
09:11:40 <oneswig> priteau: where are things with OPIE currently? Seems OK to have a separate service for managing future reservations, but when the future becomes now, what happens if the machine's full?
09:12:04 <oneswig> priteau: instance reservation - that would be a nice development in flexibility
09:12:28 <priteau> The instance reservation discussion is driven by Masahito Muroi who is the Blazar PTL - I haven't had time to read it yet
09:12:31 <jmlowe> I have concerns about the use of host aggregates, are they thinking about a refined mechanism for Blazar to manipulate nova?
09:13:19 <priteau> I don't know much about OPIE. I hope this forum session will give time for OPIE and Blazar devs to talk about how the two systems will work together
09:13:47 <b1airo> yes! and us to advocate use-cases
09:13:59 <oneswig> priteau: that would be ideal :-)
09:14:19 <oneswig> What I like is how the forum has stimulated some useful discussion already
09:14:45 <priteau> jmlowe: Maybe we could use the placement API instead of aggregates
09:14:47 <b1airo> i'd like to understand jay's opposition to having any more smarts about this in nova
09:15:47 <b1airo> i can understand if he prefers to integrate other pluggable services in the scheduler backend, but i'm worried that Nova API actually needs to be extended to support these use-cases nicely
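For reference, a minimal sketch of the physical-host reservation Blazar supports today (per priteau above), written against Blazar's v1 REST API. The endpoint, token, and lease values are placeholders rather than anything from the meeting:

    # Minimal sketch: create a physical-host lease via Blazar's v1 REST API.
    # Endpoint and token are placeholders; a real client would obtain both
    # from Keystone.
    import json
    import requests

    BLAZAR_URL = "http://controller:1234/v1"  # Blazar's default port
    TOKEN = "gAAAA..."                        # placeholder Keystone token

    lease = {
        "name": "hpc-run-42",
        "start_date": "2017-04-20 09:00",     # lease begins in the future...
        "end_date": "2017-04-20 17:00",       # ...and ends here
        "reservations": [{
            "resource_type": "physical:host", # host reservation, as supported today
            "min": 2,                         # at least two hosts,
            "max": 4,                         # at most four
            "hypervisor_properties": "",      # optional capability filters
            "resource_properties": "",
        }],
        "events": [],
    }

    resp = requests.post(
        BLAZAR_URL + "/leases",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        data=json.dumps(lease),
    )
    resp.raise_for_status()
    print(resp.json())

Instance reservation, as discussed in the etherpad above, would presumably add a new resource_type rather than change this flow.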
09:16:59 <b1airo> so in terms of forum topics i also have one for hypervisor tuning
09:17:11 <verdurin> b1airo: yes, I didn't quite understand his reluctance either
09:17:24 <oneswig> b1airo: has it been approved for discussion??
09:17:43 <b1airo> and one for special hardware (i need to reach out to the Cyborg folks)
09:18:54 <priteau> Tim Randles and I also prepared a topic about customization of Ironic deployment steps, which was rejected but will be discussed in the Ironic feedback session
09:18:59 <oneswig> b1airo: had a small update for you from here wrt special hardware
09:19:08 <oneswig> will follow up later
09:19:42 <oneswig> priteau: what was the issue?  We have hopes of using the deployment steps as a means of BIOS/RAID reconfig
09:20:38 <priteau> oneswig: it was for supporting booting to ramdisks or speeding up reboots with kexec: http://forumtopics.openstack.org/cfp/details/125
09:20:45 <oneswig> aha
09:21:59 <oneswig> Well we still have the chance to discuss these things at the forum, within a merged session.  Be interesting to hear what the Ironic team think of it
09:22:31 <oneswig> Did anyone else propose forum topics they can update on?
09:23:11 <oneswig> OK, let's move on with the agenda
09:23:27 <oneswig> #topic Updates from HPCAC conference in Lugano
09:23:45 <simon-AS559> zioproto is giving an OpenStack tutorial as we speak.
09:23:57 <simon-AS559> (That's his apology for today I guess)
09:24:12 <oneswig> simon-AS559: not quite yet - Gabriele from Intel currently talking burst buffers and DAOS
09:24:25 <oneswig> zioproto is up later on I believe
09:24:33 <simon-AS559> Right, 15 minutes according to schedule :-)
09:24:33 <jmlowe> then lunch
09:24:45 <jmlowe> 14:15 local time
09:24:54 <oneswig> We had jmlowe earlier with a great show-and-tell on Jetstream
09:25:40 <verdurin> oneswig: do you have a link for the meeting?
09:25:58 <simon-AS559> http://hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
09:26:06 <verdurin> thanks, simon-AS559
09:26:12 <oneswig> verdurin: I believe the sessions are filmed, haven't checked for uploaded presentations
09:26:26 <oneswig> DK Panda has spoken a couple of times.  One on MPI and one on big data.
09:26:39 <mpasserini> the presentations are usually published on the agenda page afterwards http://hpcadvisorycouncil.com/events/2017/swiss-workshop/agenda.php
09:26:56 <oneswig> b1airo: DK Panda says GPUs + MVAPICH2-Virt integration is "something we'll be working on very soon"
09:26:56 <jmlowe> The new one for me was the SPACK package manager
09:27:06 <mpasserini> some videos usually appear on http://insidehpc.com/
09:27:24 <oneswig> so virtualised gpu-direct is on the roadmap
09:27:24 <jmlowe> some promise of SR-IOV live migration, but it was hard to nail him down on that
09:28:32 <oneswig> priteau: apparently you have heat stacks on chameleon for the OSU HiBD environment - is that true and are they shareable?
09:28:49 <jmlowe> The thing about SPACK that interested me was the possibility of dropping in complex hpc applications with a package manager bootstrapped by cloud-init or heat software configs
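A rough sketch of jmlowe's idea, assuming the openstacksdk client library and a clouds.yaml entry: boot an instance whose cloud-init user-data bootstraps Spack and installs a package on first boot. Image, flavor, network, and package names are all placeholders:

    # Sketch: bootstrap the Spack package manager from cloud-init user-data
    # so an instance self-installs an HPC application on first boot.
    import openstack

    USER_DATA = """#!/bin/bash
    set -e
    git clone https://github.com/spack/spack.git /opt/spack
    source /opt/spack/share/spack/setup-env.sh
    spack install hdf5   # any Spack package; hdf5 is just an example
    """

    conn = openstack.connect(cloud="mycloud")  # clouds.yaml entry (assumed)
    server = conn.create_server(
        name="spack-node",
        image="CentOS-7",        # placeholder image
        flavor="m1.large",       # placeholder flavor
        network="private",       # placeholder network
        userdata=USER_DATA,      # the SDK handles base64 encoding
        wait=True,
    )
    print(server.status)

The same user-data could equally be delivered through a Heat OS::Nova::Server resource's user_data property.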
09:28:56 <oneswig> (This is the hadoop/spark/memcached/hbase + RDMA technology)
09:30:33 <priteau> oneswig: There is a RDMA Hadoop Heat template in progress. It will be at https://www.chameleoncloud.org/appliances/ when ready
09:31:22 <jmlowe> ok, existing rdma-hadoop looks to be just an image not a heat template?
09:31:40 <oneswig> priteau: Do you think the heat templates will be portable? I'm hoping just to rename a few flavors etc and try it out here.
09:32:12 <priteau> jmlowe: What's in the catalog is just an old standalone image now, the Heat template + new image is still being developed
09:32:48 <jmlowe> priteau: ack, just verifying
09:32:57 <oneswig> priteau: is that coming from OSU or UChicago?
09:33:01 <priteau> OSU
09:34:08 <oneswig> Gabriele has just put up a slide with "BlobFS" on it - you heard it here first :-)
09:34:09 <priteau> oneswig: because it's on bare metal, the Heat template actually deploys bare-metal nodes and then runs VMs on top of them. So if you're using a virtualized OpenStack, you'll have to adapt the template to remove the code that launches VMs
09:34:28 <jmlowe> fascinating
09:34:42 <oneswig> priteau: I'd be running bare metal instances in this case
09:35:03 <oneswig> do you mean you install kvm on the nodes?
09:35:03 <jmlowe> how is it putting the rdma in the vm? sriov?
09:35:12 <priteau> jmlowe: yes, SR-IOV
09:37:07 <b1airo> priteau, sounds like the "install openstack kvm cloud" and "bring up hadoop/spark/... cluster on openstack" are really two different things..?
09:37:50 <jmlowe> oh, schedule change, openstack tutorial starting now by Saverio Proto
09:38:00 <b1airo> reading back... a 15 min OpenStack tutorial?! that's going to be 40k feet :-)
09:38:31 <jmlowe> I attempted 10k feet, it was rough
09:39:08 <priteau> b1airo: I don't understand your question
09:39:16 <oneswig> zioproto's up right now, we can live-stream it! Speaking at normal speed currently, I assume he'll accelerate to a singularity by the end
09:39:42 <simon-AS559> +1
09:40:04 <jmlowe> exciting, similar raison d'être to my project, jetstream
09:41:06 <oneswig> priteau: you install KVM in order to virtualise the hadoop workers - it's a two-stage process?
09:42:34 <jmlowe> SWITCH seems to give you the option of attaching to a science DMZ or internal campus network
09:43:00 <priteau> oneswig: I assume KVM is pre-installed, but there is some setup done to launch the VMs (not using OpenStack, just plain libvirt IIRC). I don't know all the details, OSU does
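The plain-libvirt step priteau describes would look roughly like this with the libvirt-python bindings: pass an SR-IOV virtual function into a running VM as a PCI hostdev. The VF's PCI address and the domain name are placeholders, and OSU's actual setup may differ:

    # Sketch: attach an SR-IOV virtual function (for RDMA) to a plain-libvirt
    # domain, no OpenStack involved. PCI address and domain name are placeholders.
    import libvirt

    VF_HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("hadoop-worker-0")  # placeholder domain name
    # Apply to both the live domain and its persistent definition
    dom.attachDeviceFlags(
        VF_HOSTDEV_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG,
    )
    conn.close()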
09:43:09 <jmlowe> runs mitaka newton coming in a few weeks
09:43:49 <jmlowe> good god, they are running windows vm's
09:45:03 <jmlowe> very interesting, they bridge the customer campus networks over the SWITCH backbone with a VPN
09:45:24 <jmlowe> into a provider network on the openstack side
09:45:42 <oneswig> I like this layer-2 extension of a local intranet into OpenStack, makes it very simple for user to assimilate the VMs into their world
09:46:00 <b1airo> sounds similar to what we do
09:46:02 <oneswig> Wonder what the side-effects are - simon-AS559 any comment?
09:46:41 <simon-AS559> One side effect is we need to get l2gateway to work :-)
09:46:45 <jmlowe> I just did a one-off to let one tenant break out of our VXLAN and use a VLAN to get access to a Lustre filesystem
09:46:58 <simon-AS559> But it's a bit early to say because we don't really have production deployments of this.
09:47:26 <simon-AS559> Also the focus isn't really high-performance networking; more to get inside the customer's firewall to be able to access things like Active Directory and other servers.
09:47:31 <jmlowe> I'm sure it could be refined, was lots of hand wiring by me
09:48:18 <simon-AS559> Yes, we're trying to reduce that hand-wiring to a minimum; l2gateway is helpful there.
09:48:19 <jmlowe> oh, lots of bridge mappings, looks more similar to what I did
09:48:42 <oneswig> simon-AS559: sure, the convenience is the advantage here, but I'm guessing you'll have a second network attached for non-local-intranet traffic?
09:50:04 <simon-AS559> Yes, although that's not strictly necessary if you manage everything from the VPC (intranet<->cloud) side
09:51:20 <jmlowe> they are using a plugin that I know nothing about, l2gw
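The provider-network half of what jmlowe describes can be sketched with openstacksdk: an admin creates a VLAN-backed network that instances attach to in order to reach something outside the cloud (a Lustre filesystem in his case). The physical network name, VLAN ID, and CIDR are placeholders, and the l2gw side (networking-l2gw) is not shown:

    # Sketch: admin-created provider VLAN network, letting instances break
    # out of tenant VXLAN overlays onto a real VLAN. Values are placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")  # admin credentials assumed
    net = conn.network.create_network(
        name="lustre-vlan",
        provider_network_type="vlan",
        provider_physical_network="physnet1",  # must match a bridge mapping
        provider_segmentation_id=3010,         # VLAN ID trunked to the switch
    )
    subnet = conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr="10.30.10.0/24",
    )
    print(net.id, subnet.id)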
09:52:27 <oneswig> I've noticed time, we should cover other business!
09:52:45 <simon-AS559> Update on the Boston Open Research Cloud declaration?
09:52:46 <jmlowe> end of live stream
09:52:49 <simon-AS559> There's an agenda now
09:53:03 <simon-AS559> On https://drive.google.com/drive/folders/0B4Y7flFgUgf9dElkaFkwbUhKblU
09:53:15 <simon-AS559> "BOCD-Tentative Workshop Agenda-V4.docx"
09:53:18 <simon-AS559> Just FYI
09:53:20 <oneswig> #topic Boston roundup
09:53:51 <oneswig> thanks simon-AS559, seems to be taking shape well.
09:54:12 <simon-AS559> Yep, I like what I saw. Have a look!
09:54:54 <jmlowe> simon-AS559: See you in Boston?
09:55:04 <jmlowe> anybody else attending?
09:55:07 <simon-AS559> Yes! (Thanks for the live stream btw)
09:55:39 <b1airo> o/
09:56:22 <priteau> I'll be in Boston too
09:56:33 <jmlowe> any particular points those not attending would like us to carry with us to Boston?
09:56:42 <oneswig> On a different matter, we have full subsidy for an evening social and Tim's on the case for a nice venue.
09:57:00 <oneswig> jmlowe: of course, we all want selfies from you lot drinking beer together :-)
09:57:28 <verdurin> oneswig: it only counts if you can attract a locust
09:58:05 <oneswig> verdurin: I suspect you may need to bring one with you if you're going to pull that trick off again!
09:58:13 <jmlowe> well, I'll drink a large quantity of beer and see what happens
09:58:57 <simon-AS559> sounds like a plan
09:59:07 <oneswig> One other topic to squeeze in: jmlowe had a good idea - an IRC channel for WG chat
09:59:26 <oneswig> Should be easy to arrange.  Any thoughts?
09:59:54 <jmlowe> would be great to spend more time chatting outside of the meetings
10:00:01 <verdurin> yes, good idea
10:00:04 <jmlowe> keep the meetings a bit more focused
10:00:25 <jmlowe> no idea how to get an "official" channel
10:00:43 <verdurin> have to go - bye, and thanks for reports from Lugano
10:00:51 <priteau> If you want to start a new IRC channel, please consult with the InfrastructureTeam in #openstack-infra or at openstack-infra@lists.openstack.org to ensure it gets registered appropriately.
10:00:54 <priteau> https://wiki.openstack.org/wiki/IRC
10:00:59 <oneswig> I can ask, I've done other infra stuff
10:01:02 <oneswig> thanks priteau
10:01:07 <oneswig> ah, time up.
10:01:07 <jmlowe> perfect
10:01:11 <oneswig> last comments?
10:01:32 <oneswig> OK, thanks all!
10:01:35 <oneswig> #endmeeting