21:05:50 <b1airo> #startmeeting scientific-wg
21:05:50 <openstack> Meeting started Tue Jul 12 21:05:50 2016 UTC and is due to finish in 60 minutes.  The chair is b1airo. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:05:52 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:05:54 <openstack> The meeting name has been set to 'scientific_wg'
21:05:58 <b1airo> #chair oneswig
21:05:59 <openstack> Current chairs: b1airo oneswig
21:06:24 <oneswig> Hi everyone
21:07:06 <b1airo> #topic Review of Activity Areas and opportunities for progress
21:07:39 <b1airo> #topic Bare metal
21:08:27 <oneswig> b1airo: any development with your plans here?
21:08:50 <b1airo> purely for selfish reasons (we are planning a datacentre move and complete border, core, DC network refresh and design) i'd like to talk about Ironic's network requirements
21:09:54 <oneswig> my experience (a year old now) is that provisioning is flat but any other network can have VLAN segmentation
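(For illustration only, the flat-provisioning / VLAN-tenant split oneswig describes might be set up along these lines with the neutron CLI of that era; "physnet1", the CIDR and VLAN 200 are placeholder values, not anything from a specific deployment:)

    # flat provider network reserved for Ironic provisioning
    neutron net-create provisioning --shared \
        --provider:network_type flat --provider:physical_network physnet1
    neutron subnet-create provisioning 10.100.0.0/24 --name provisioning-subnet

    # other traffic carried on VLAN-segmented provider networks
    neutron net-create tenant-vlan-200 \
        --provider:network_type vlan --provider:physical_network physnet1 \
        --provider:segmentation_id 200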
21:10:54 <b1airo> traditionally we've used Cisco FEXs as our out-of-band switching, connecting e.g. to iLO/DRAC and we also usually have a separate 1G interface into our hypervisors (separate to the high-bandwidth front-side/fabric) just in case we have driver issues etc
21:11:47 <b1airo> oneswig, i believe there is at least some work towards segregated provisioning nets
21:12:08 <priteau> oneswig: How do you configure Ironic to perform provisioning on a different network than the one used by the instance in the end?
21:12:13 <rbudden> b1airo: we have a similar setup
21:12:14 <oneswig> There's a long-standing patch, still digging for it
21:12:55 <b1airo> when i last looked i figured FEXs wouldn't work, but since then i see Cisco seem to have broad Nexus support in an ML2 driver
21:12:59 <oneswig> priteau: we had trusted instance images, I'm not sure if you can "unconnect" the provisioning network once the node is up
21:13:10 <b1airo> anyone used it (with or without Ironic)?
21:13:50 <priteau> oneswig: I see. In Chameleon we are developing our solution for this, currently in testing phase
21:13:57 <b1airo> oneswig, priteau - sorry i should have been clear, disconnecting the BM instance from the provisioning network would be a requirement for us
21:14:07 <priteau> Will be specific to our networking hardware (Dell S6000)
21:14:29 <b1airo> so we'd be trying to use the DRAC provisioning driver in the first instance
21:15:10 <oneswig> b1airo, priteau: how do you / will you hide access to the provisioning network?
21:15:35 <b1airo> and looking for a provisioning network whose Neutron driver untrunked the relevant V[X]LAN from the BM instance's physical ports after boot
21:16:09 <priteau> oneswig: we are instrumenting OpenStack to change VLAN port config once the provisioning is done
21:16:47 <b1airo> priteau, cool - you had to do that yourself? i assumed it'd end up being Neutron driver dependent, no?
21:17:47 <priteau> b1airo: Yes we had to develop Neutron integration ourselves
21:18:22 <priteau> it's currently in testing phase
21:18:23 <b1airo> and of course we'd prefer not to have a flat provisioning network too, to avoid boot hijacking hijinx and so forth, but not strictly needed as we probably wouldn't use this for anything more than our own UnderCloud provisioning and perhaps HPC team
21:18:25 <oneswig> priteau: is there an Ironic state machine transition you can trigger this from?  I wonder if there's an 'official' way to do that
21:19:11 <priteau> oneswig: I haven't seen the code yet actually, it's developed by Northwestern University and quite recent
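(The Chameleon code isn't public yet per the above, so the following is only a generic sketch of where such a VLAN-flipping hook could live in a Neutron ML2 mechanism driver; the class name and the _set_access_vlan helper are hypothetical, not the Northwestern implementation, and the import path assumes a roughly Mitaka-era Neutron tree:)

    from neutron.plugins.ml2 import driver_api as api

    class ExampleBareMetalMechanismDriver(api.MechanismDriver):
        """Reconfigure a TOR access VLAN when a bare-metal port is rebound."""

        def initialize(self):
            pass

        def update_port_postcommit(self, context):
            port = context.current
            # only act on bare-metal ports
            if port.get('binding:vnic_type') != 'baremetal':
                return
            segment = context.top_bound_segment
            if segment and segment.get(api.NETWORK_TYPE) == 'vlan':
                # push the new access VLAN down to the physical switch port
                self._set_access_vlan(port, segment[api.SEGMENTATION_ID])

        def _set_access_vlan(self, port, vlan_id):
            # vendor-specific: drive the switch (e.g. a Dell S6000) via its
            # CLI/API here -- left as a stub in this sketch
            pass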
21:19:41 <b1airo> #action b1airo to ask about Ironic provisioning network unplugging on os-dev[ironic]
21:19:44 <oneswig> priteau: in a more general form I am sure this would find value in many other places
21:20:04 <priteau> I know there is similar work happening upstream
21:20:15 <oneswig> priteau: I assume Rackspace onmetal has already solved this
21:20:19 <priteau> it was discussed at the Austin meeting
21:21:11 <b1airo> ok, i've scratched my itch sufficiently - thanks! shall we move on unless anyone has other BM things to discuss?
21:21:37 <oneswig> nothing here
21:21:45 <b1airo> #topic Parallel filesystems
21:22:53 <b1airo> so, seems like a HPFS panel at Barcelona might be a goer
21:23:45 <oneswig> b1airo: I think it's going to be popular!
21:24:03 <oneswig> you might find it difficult to get a room big enough once the panel are all sitting down :-)
21:24:17 <jmlowe> Great, how many panelists have you found?
21:24:26 <b1airo> i particularly liked jmlowe's comment
21:24:39 <b1airo> oneswig, lol
21:25:28 <b1airo> IIUC the max for a panel session is 4 panelists plus a moderator
21:25:30 <oneswig> jmlowe: not sure exactly but I saw some quick uptake following your mail
21:25:49 <b1airo> though i can't imagine that is a hard ceiling
21:26:11 <b1airo> just on that thread we already have 4 without counting me :-)
21:26:31 <b1airo> not sure i'd make a great moderator, but i'd give it a crack
21:27:36 <b1airo> i may give it another 24 hours in the ether and then throw something into the proposals
21:27:40 <oneswig> b1airo: asking the questions not answering them?  Given your experience it makes more sense the other way?
21:28:15 <devananda> b1airo: ironic/neutron network integration has been underway for a couple cycles - we're trying to get it in, but the feature is complex, and involves nova integration as well
21:28:25 <b1airo> oneswig, may as well have a moderator that knows the lay of the land to some extent though
21:28:44 <oneswig> There was also the question on vendors vs users
21:28:49 <b1airo> hi devananda - thanks for jumping in!
21:28:50 <oneswig> Hi devananda!
21:29:00 <rbudden> hello devananda!
21:29:10 <devananda> b1airo: short version is that ironic will call out to neutron at specific stages in provisioning/cleaning to attach/detach the instance from provider networks, so that tenants are isolated from the control plane -- and from each other
21:29:39 <oneswig> devananda: any idea on an ETA?
21:30:05 <devananda> on the network side, neutron ML2 framework is used. I've been testing with OVS, which seems to have broad support (your TOR must have HW VTEP support, at least)
21:30:08 <b1airo> devananda, ok cool, so that's what i had thought was happening; is it in current versions or still in dev?
21:30:35 <devananda> b1airo: there's a partial implementation, where we can move the Node onto a special 'cleaning_network', that exists today
21:30:46 <oneswig> devananda: what is the current solution for mapping ironic ports to physical switch ports for manipulating vlan state - is it a per-driver solution?
21:30:49 <devananda> but tenant isolation and tenant-defined networks is not in-tree yet
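(For reference, the partial 'cleaning_network' support devananda mentions is driven by a single config option; a minimal sketch, with the option name as it stood around Mitaka/Newton and the UUID left as a placeholder:)

    # ironic.conf
    [neutron]
    # network nodes are switched onto while being cleaned, keeping them
    # off tenant networks during that phase
    cleaning_network_uuid = <uuid-of-dedicated-cleaning-network>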
21:31:14 <devananda> oneswig: that's in flight still. several companies have downstream solutions, but the upstream / generic solution is taking time
21:31:34 <b1airo> right, so a BM instance must be connected to a specific provider network then?
21:31:39 <oneswig> devananda: ok thanks, which solutions are leading this?
21:31:54 <devananda> as far as ETA - I'd love to have a definitive answer, but I don't. I _HOPE_ we'll land enough of this in Newton to call it functional
21:33:06 <devananda> oneswig: arista, HP, rackspace, cisco are the main contributors to the code right now. I won't speak to any vendor-specific products/solutions
21:33:42 <oneswig> devananda: LLDP-enabled ramdisk?
21:33:44 <b1airo> devananda, i was originally asking about this in the context of building a new out of band network infrastructure in a new DC. i'm assuming if we go with a policy of separate physical iLO + 1x embedded connection that we'll have covered ourselves sufficiently, but then comes the question of vendor support for Neutron
21:34:28 <devananda> b1airo: if your TOR supports OVSDB and HW-VTEP, I think it should work
21:34:48 <devananda> I've even done some testing on relatively old Juniper switches (with new firmware) and had some success
21:35:04 <devananda> oneswig: that's been proposed, but is separate work
21:35:25 <b1airo> awesome, thanks
21:35:36 <devananda> np! hope that info is helpful :)
21:35:41 <oneswig> Thanks devananda
21:36:03 <b1airo> oneswig, back to your question re. vendors in HPFS panel -
21:36:42 <oneswig> I think the "war stories" can't come from manufacturers of munitions...
21:36:57 <b1airo> i think Dean specifically would be fine, but ultimately if there are other interested end-users/deployers it'd probably be better to have them
21:37:29 <b1airo> certainly we don't want product managers or "technical" sales
21:38:31 <oneswig> It's most important to have speakers who are well informed and objective on vendor products
21:38:53 <b1airo> brb...
21:38:54 <oneswig> Perhaps he is both, I don't know
21:40:43 <oneswig> How do people envisage structuring a panel on HPFS?  Might be good to have lightning talks from each member on their use case scenario before open discussion
21:41:15 <b1airo> back
21:41:36 <b1airo> (2yo decided to fall off the bathroom stool)
21:42:01 <oneswig> all well I hope :-)
21:42:06 <b1airo> oneswig, good idea yeah
21:42:23 <b1airo> and then leave plenty of time for audience question/discussion
21:42:52 <oneswig> I wonder if it's possible to ask for more than 40 minutes for the session
21:43:34 <oneswig> Recall the Austin summit, we had 4x Ironic lightning talks and no panel discussion in 40 minutes
21:44:03 <b1airo> oh PS: i'm one of the track chairs for HPC/Research, did you ever throw your name in, oneswig?
21:44:23 <oneswig> I did, think it's come through
21:44:59 <b1airo> ok, so we are in a good position to try and get a longer spot then :-)
21:45:59 <oneswig> Perhaps we can run a couple of much longer sessions, drawing from WG activity areas
21:46:40 <b1airo> that'd be nice, 40 mins in a good energised session never seems enough
21:47:45 <b1airo> i guess bare-metal would have enough working group contributors, and it'd probably facilitate good UC <-> Dev crossover
21:48:34 <oneswig> I wonder if we could have a session in which representatives from various deploy projects (OpenStack-Ansible, TripleO, etc.) demonstrate how to deploy and configure some HPC-like capabilities using their projects
21:48:41 <b1airo> not sure about accounting/scheduling though, i feel like there are a lot of large subtopics there without a lot of clear solutions
21:48:54 <oneswig> b1airo: agreed, too nebulous right now
21:48:57 <rbudden> oneswig: that would be interesting
21:50:01 <b1airo> oneswig, think it's worth calling out to the relevant dev list topics for takers?
21:50:06 <oneswig> #action oneswig to make enquiries re: deployment and report back
21:50:14 <oneswig> b1airo: my thoughts exactly :-)
21:50:22 <b1airo> great minds
21:50:36 <oneswig> I wouldn't know...
21:50:46 <b1airo> ok, i think we'll skip accounting and scheduling unless anyone has something to raise?
21:50:57 <b1airo> (in today's meeting i mean)
21:51:21 <b1airo> #topic OpenStack & HPC white paper
21:51:36 <b1airo> anything to talk about on this oneswig ?
21:51:48 <oneswig> So this is starting to come together, here's the latest
21:51:59 <b1airo> i guess you are just in the thick of plugging away whilst also trying to help with the Cambridge deployment?
21:52:13 <oneswig> In time for SC, we are looking to generate content on 5ish topics
21:52:36 <oneswig> I'm pretty maxed out on the Cambridge deployment right now but as of this week we have a lull
21:53:14 <oneswig> Each topic, a problem statement, solution concept plus some positive words from a subject matter expert
21:53:25 <oneswig> I'll find those topics...
21:53:55 <rbudden> is this related to or in conjunction with the SC Panel?
21:54:18 <oneswig> rbudden: in addition - this is basically brochures and a whitepaper download
21:54:28 <rbudden> ok great
21:54:30 <oneswig> OK they are:
21:54:30 <b1airo> rbudden, related, though i guess oneswig might be calling in the direction of the panel for SMEs as needed
21:54:55 <oneswig> 1) Virtualisation strategies and alternatives for HPC
21:54:55 <rbudden> i missed last meeting, but did have an action item on this… I have PSC approval for passing out brochures, etc. at our booth
21:55:05 <oneswig> 2) OpenStack and HPC network fabrics
21:55:21 <oneswig> 3) OpenStack and high performance data (filesystems, objects etc)
21:55:31 <oneswig> 4) OpenStack and HPC workload management
21:55:35 <b1airo> rbudden, brilliant, thanks for following that up!
21:55:44 <oneswig> 5) OpenStack and HPC infrastructure management
21:56:13 <rbudden> b1airo: no problem. if we want to do demos/presentations that's possible with additional heads up and/or approval
21:56:15 <oneswig> Possible bonus topic on federation, but that's not HPC-specific plus I think we'll have our hands full
21:57:10 <oneswig> I'm generating much of this content as part of my contract with Cambridge but will be seeking SME input in many areas - look out y'all
21:57:43 <b1airo> #topic Other Business
21:57:51 <b1airo> time's almost up
21:58:28 <b1airo> i did quickly want to call out for contributions to the hypervisor tuning guide, particularly on hypervisors other than KVM
21:59:26 <b1airo> Joe Topjian from Cybera put it together originally from the output of various hypervisor tuning ops sessions, but we haven't quite figured out what to do with it from there
21:59:44 <oneswig> Isn't KVM something like 90% of deployments now?
21:59:49 <b1airo> would be nice to lift it out of wiki obscurity
22:00:11 <b1airo> oneswig, yes it's certainly the most popular, don't know about the actual numbers breakdown off the top of my head though
22:00:17 <rbudden> KVM is pretty dominant
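(As an illustration of the kind of KVM knobs the hypervisor tuning guide collects, a few commonly tuned settings are sketched below; the values are placeholders and the option names assume a roughly Mitaka-era Nova/libvirt setup:)

    # nova.conf on the hypervisor
    [DEFAULT]
    vcpu_pin_set = 4-63            # keep host OS/IRQ work off guest cores
    reserved_host_memory_mb = 8192

    [libvirt]
    cpu_mode = host-passthrough    # expose host CPU features to guests

    # flavor extra specs for pinned, hugepage-backed guests
    openstack flavor set hpc.large \
        --property hw:cpu_policy=dedicated \
        --property hw:mem_page_size=large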
22:00:39 <b1airo> the Xen people always have nice t-shorts though, so i have a soft spot for them :-D
22:00:41 <rbudden> i remember the Havana Summit and I was the only one doing Xen in general in an entire room :P
22:00:51 <b1airo> *t-shirts
22:00:54 <oneswig> I wonder if there's anything you can tune with Canonical's LXD
22:01:03 <b1airo> not sure what a t-short is, but it sounds uncomfortable
22:01:43 <b1airo> oneswig, i guess there would be
22:02:04 <jmlowe> sorry, re SC'16: I'm still waiting for the scheduling for our booth, it's just a question of how many time slots for booth talks, at least 30 min/day would be the minimum
22:02:18 <oneswig> #action oneswig to ask his friends at canonical re LXD tuning
22:02:58 <oneswig> We are over time
22:03:02 <b1airo> jmlowe, are you planning to show off any of your OpenStack work?
22:03:20 <rbudden> speaking of SC, one thing to add, if we want talks/demos we should prep that earlier. I can likely get approval for those but the content/timeslots need to be arranged earlier rather than later
22:04:28 <b1airo> rbudden, yes certainly need to have that organised well in advance. at this stage i suspect it'll largely be the whitepaper oneswig was talking about plus some other generic foundation materials
22:04:38 <jmlowe> b1airo: yes, I was also thinking general Openstack and HPC booth talks open to any wg member
22:05:19 <rbudden> b1airo: sounds good. I do have a pair of NUCs I can bring if we had a purpose to show a demo of sorts
22:05:55 <b1airo> in terms of demos, that'd be cool but i'm not sure we'll get anything from folks that do not already have a booth presence, unless the foundation has a wheel-in demo system already going (only things i have seen are vendor specific)
22:06:54 <b1airo> jmlowe, ok that's great to know. we may get a better idea once we see some of the summit proposals and then match up with folks who might be at SC
22:07:31 <rbudden> jmlowe: maybe the talk we are working on about Murano/Heat w/VM Repo would be interesting to demo
22:08:03 * b1airo ears prick up
22:08:20 <jmlowe> Yep, should cross your inbox as a summit talk soon
22:08:30 <rbudden> b1airo: we are working on a talk for Barcelona
22:08:37 <rbudden> we won’t ruin the surprise ;)
22:08:40 <b1airo> excellent!
22:08:42 <oneswig> Can't get you guys off the stage it seems :-)
22:08:49 <rbudden> haha, that’s the plan!
22:09:09 <b1airo> ok, must run - got to get to work o_0
22:09:11 <oneswig> Look forward to it
22:09:18 <oneswig> OK, lets wrap up?
22:09:20 <jmlowe> Cross site collaboration is a great way to wrestle loose some travel money
22:09:22 <rbudden> yeah we are a bit over anyway
22:09:38 <b1airo> thanks once again all!
22:09:43 <oneswig> thanks everyone
22:09:47 <b1airo> #endmeeting