21:05:50 #startmeeting scientific-wg
21:05:50 Meeting started Tue Jul 12 21:05:50 2016 UTC and is due to finish in 60 minutes. The chair is b1airo. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:05:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:05:54 The meeting name has been set to 'scientific_wg'
21:05:58 #chair oneswig
21:05:59 Current chairs: b1airo oneswig
21:06:24 Hi everyone
21:07:06 #topic Review of Activity Areas and opportunities for progress
21:07:39 #topic Bare metal
21:08:27 b1airo: any development with your plans here?
21:08:50 purely for selfish reasons (we are planning a datacentre move and complete border, core, DC network refresh and design) i'd like to talk about Ironic's network requirements
21:09:54 my experience (a year old now) is that provisioning is flat but any other network can have VLAN segmentation
21:10:54 traditionally we've used Cisco FEXs as our out-of-band switching, connecting e.g. to iLO/DRAC, and we also usually have a separate 1G interface into our hypervisors (separate to the high-bandwidth front-side/fabric) just in case we have driver issues etc
21:11:47 oneswig, i believe there is at least some work towards segregated provisioning nets
21:12:08 oneswig: How do you configure Ironic to perform provisioning on a different network than the one used by the instance in the end?
21:12:13 b1airo: we have a similar setup
21:12:14 There's a long-standing patch, still digging for it
21:12:55 when i last looked i figured FEXs wouldn't work, but since then i see Cisco seem to have broad Nexus support in an ML2 driver
21:12:59 priteau: we had trusted instance images, I'm not sure if you can "unconnect" the provisioning network once the node is up
21:13:10 anyone used it (with or without Ironic)?
21:13:50 oneswig: I see. In Chameleon we are developing our own solution for this, currently in testing phase
21:13:57 oneswig, priteau - sorry i should have been clear, disconnecting the BM instance from the provisioning network would be a requirement for us
21:14:07 Will be specific to our networking hardware (Dell S6000)
21:14:29 so we'd be trying to use the DRAC provisioning driver in the first instance
21:15:10 b1airo, priteau: how do you / will you hide access to the provisioning network?
21:15:35 and looking for a provisioning network whose Neutron driver untrunked the relevant V[X]LAN from the BM instance's physical ports after boot
21:16:09 oneswig: we are instrumenting OpenStack to change the VLAN port config once the provisioning is done
21:16:47 priteau, cool - you had to do that yourself? i assumed it'd end up being Neutron driver dependent, no?
21:17:47 b1airo: Yes, we had to develop the Neutron integration ourselves
21:18:22 it's currently in testing phase
21:18:23 and of course we'd prefer not to have a flat provisioning network too, to avoid boot hijacking hijinx and so forth, but not strictly needed as we probably wouldn't use this for anything more than our own UnderCloud provisioning and perhaps HPC team
21:18:25 priteau: is there an Ironic state machine transition you can trigger this from? I wonder if there's an 'official' way to do that
21:19:11 oneswig: I haven't seen the code yet actually, it's developed by Northwestern University and quite recent
21:19:41 #action b1airo to ask about Ironic provisioning network unplugging on os-dev[ironic]
21:19:44 priteau: in a more general form I am sure this would find value in many other places
21:20:04 I know there is similar work happening upstream
21:20:15 priteau: I assume Rackspace OnMetal has already solved this
21:20:19 it was discussed at the Austin meeting
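(For illustration only: a minimal sketch of the kind of post-deploy untrunking discussed above, not the Chameleon/Northwestern code, which isn't public here. It polls the Ironic provision state via python-ironicclient and then hands off to a placeholder switch call; the node/port UUIDs, switch names, credentials and VLAN ID are all made up, and the switch-side helper would in practice be vendor-specific, e.g. a Dell S6000 or Cisco Nexus driver.)

    # Hypothetical sketch only: wait for an Ironic node to finish deploying,
    # then remove the flat provisioning VLAN from its physical switch ports.
    import time

    from ironicclient import client as ironic_client


    def untrunk_provisioning_vlan(switch, switch_port, vlan_id):
        """Placeholder for the vendor-specific switch reconfiguration."""
        raise NotImplementedError


    def strip_deploy_vlan_when_active(ironic, node_uuid, port_map, deploy_vlan):
        """port_map maps Ironic port UUIDs to (switch, switch port) tuples."""
        # Poll until the deploy completes and the node goes 'active'.
        while ironic.node.get(node_uuid).provision_state != 'active':
            time.sleep(10)

        # Untrunk the provisioning VLAN from each of the node's NICs.
        for port in ironic.node.list_ports(node_uuid):
            switch, switch_port = port_map[port.uuid]
            untrunk_provisioning_vlan(switch, switch_port, deploy_vlan)


    if __name__ == '__main__':
        ironic = ironic_client.get_client(
            1, os_auth_url='http://keystone:5000/v2.0',
            os_username='admin', os_password='secret', os_tenant_name='admin')
        strip_deploy_vlan_when_active(
            ironic, 'NODE-UUID',
            {'PORT-UUID': ('tor-sw1', 'Te1/0/12')}, deploy_vlan=100)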
21:21:11 ok, i've scratched my itch sufficiently - thanks! shall we move on unless anyone has other BM things to discuss?
21:21:37 nothing here
21:21:45 #topic Parallel filesystems
21:22:53 so, seems like an HPFS panel at Barcelona might be a goer
21:23:45 b1airo: I think it's going to be popular!
21:24:03 you might find it difficult to get a room big enough once the panel are all sitting down :-)
21:24:17 Great, how many panelists have you found?
21:24:26 i particularly liked jmlowe's comment
21:24:39 oneswig, lol
21:25:28 IIUC the max for a panel session is 4 panelists plus a moderator
21:25:30 jmlowe: not sure exactly but I saw some quick uptake following your mail
21:25:49 though i can't imagine that is a hard ceiling
21:26:11 just on that thread we already have 4 without counting me :-)
21:26:31 not sure i'd make a great moderator, but i'd give it a crack
21:27:36 i may give it another 24 hours in the ether and then throw something into the proposals
21:27:40 b1airo: asking the questions not answering them? Given your experience it makes more sense the other way?
21:28:15 b1airo: ironic/neutron network integration has been underway for a couple of cycles - we're trying to get it in, but the feature is complex, and involves nova integration as well
21:28:25 oneswig, may as well have a moderator that knows the lay of the land to some extent though
21:28:44 There was also the question on vendors vs users
21:28:49 hi devananda - thanks for jumping in!
21:28:50 Hi devananda!
21:29:00 hello devananda!
21:29:10 b1airo: short version is that ironic will call out to neutron at specific stages in provisioning/cleaning to attach/detach the instance from provider networks, so that tenants are isolated from the control plane -- and from each other
21:29:39 devananda: any idea on an ETA?
21:30:05 on the network side, the neutron ML2 framework is used. I've been testing with OVS, which seems to have broad support (your TOR must have HW VTEP support, at least)
21:30:08 devananda, ok cool, so that's what i had thought was happening - in current versions, or is it still in dev?
21:30:35 b1airo: there's a partial implementation, where we can move the Node onto a special 'cleaning_network', that exists today
21:30:46 devananda: what is the current solution for mapping ironic ports to physical switch ports for manipulating vlan state - is it a per-driver solution?
21:30:49 but tenant isolation and tenant-defined networks are not in-tree yet
21:31:14 oneswig: that's in flight still. several companies have downstream solutions, but the upstream / generic solution is taking time
21:31:34 right, so a BM instance must be connected to a specific provider network then?
21:31:39 devananda: ok thanks, which solutions are leading this?
21:31:54 as far as ETA - I'd love to have a definitive answer, but I don't. I _HOPE_ we'll land enough of this in Newton to call it functional
21:33:06 oneswig: arista, HP, rackspace, cisco are the main contributors to the code right now. I won't speak to any vendor-specific products/solutions
21:33:42 devananda: LLDP-enabled ramdisk?
21:33:44 devananda, i was originally asking about this in the context of building new out-of-band network infrastructure in a new DC. i'm assuming if we go with a policy of separate physical iLO + 1x embedded connection that we'll have covered ourselves sufficiently, but then comes the question of vendor support for Neutron
21:34:28 b1airo: if your TOR supports OVSDB and HW-VTEP, I think it should work
21:34:48 I've even done some testing on relatively old Juniper switches (with new firmware) and had some success
21:35:04 oneswig: that's been proposed, but is separate work
21:35:25 awesome, thanks
21:35:36 np! hope that info is helpful :)
21:35:41 Thanks devananda
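(Again purely a sketch, assuming python-neutronclient and made-up network IDs and credentials: the general attach/detach flow devananda describes, i.e. drop the bare metal NIC's port from the provisioning network and re-create it on the tenant network, leaving the ML2 mechanism driver to reprogram the top-of-rack switch. The real upstream Ironic/Neutron integration drives this from Ironic's provision-state transitions rather than a standalone script.)

    # Illustrative only: swap a bare metal NIC from the provisioning network
    # to its tenant network via the Neutron API.  The ML2 mechanism driver
    # (e.g. an OVSDB/HW-VTEP or vendor driver) does the actual switch work.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://keystone:5000/v2.0')


    def swap_to_tenant_network(mac, provision_net_id, tenant_net_id):
        # Remove the port(s) bound to this MAC on the provisioning network.
        ports = neutron.list_ports(network_id=provision_net_id,
                                   mac_address=mac)['ports']
        for port in ports:
            neutron.delete_port(port['id'])

        # Re-create the port on the tenant network so the node only sees
        # its own (VLAN/VXLAN-segmented) network from here on.
        return neutron.create_port(
            {'port': {'network_id': tenant_net_id,
                      'mac_address': mac,
                      'admin_state_up': True}})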
21:36:03 oneswig, back to your question re. vendors in the HPFS panel -
21:36:42 I think the "war stories" can't come from manufacturers of munitions...
21:36:57 i think Dean specifically would be fine, but ultimately if there are other interested end-users/deployers it'd probably be better to have them
21:37:29 certainly we don't want product managers or "technical" sales
21:38:31 It's most important to have speakers who are well informed and objective on vendor products
21:38:53 brb...
21:38:54 Perhaps he is both, I don't know
21:40:43 How do people envisage structuring a panel on HPFS? Might be good to have lightning talks from each member on their use case scenario before open discussion
21:41:15 back
21:41:36 (2yo decided to fall off the bathroom stool)
21:42:01 all well I hope :-)
21:42:06 oneswig, good idea yeah
21:42:23 and then leave plenty of time for audience questions/discussion
21:42:52 I wonder if it's possible to ask for more than 40 minutes for the session
21:43:34 Recall the Austin summit, we had 4x Ironic lightning talks and no panel discussion in 40 minutes
21:44:03 oh PS: i'm one of the track chairs for HPC/Research, did you ever throw your name in oneswig ?
21:44:23 I did, think it's come through
21:44:59 ok, so we are in a good position to try and get a longer spot then :-)
21:45:59 Perhaps we can run a couple of much longer sessions, drawing from WG activity areas
21:46:40 that'd be nice, 40 mins in a good energised session never seems enough
21:47:45 i guess bare-metal would have enough working group contributors, and it'd probably facilitate good UC <-> Dev crossover
21:48:34 I wonder if we could have a session in which representatives from various deploy projects (OpenStack-Ansible, TripleO, etc.) demonstrate how to deploy and configure some HPC-like capabilities using their projects
21:48:41 not sure about accounting/scheduling though, i feel like there are a lot of large subtopics there without a lot of clear solutions
21:48:54 b1airo: agreed, too nebulous right now
21:48:57 oneswig: that would be interesting
21:50:01 oneswig, think it's worth calling out to the relevant dev list topics for takers?
21:50:06 #action oneswig to make enquiries re: deployment and report back
21:50:14 b1airo: my thoughts exactly :-)
21:50:22 great minds
21:50:36 I wouldn't know...
21:50:46 ok, i think we'll skip accounting and scheduling unless anyone has something to raise?
21:50:57 (in today's meeting i mean)
21:51:21 #topic OpenStack & HPC white paper
21:51:36 anything to talk about on this oneswig ?
21:51:48 So this is starting to come together, here's the latest
21:51:59 i guess you are just in the thick of plugging away whilst also trying to help with the Cambridge deployment?
21:52:13 In time for SC, we are looking to generate content on 5ish topics
21:52:36 I'm pretty maxed out on the Cambridge deployment right now but as of this week we have a lull
21:53:14 Each topic: a problem statement, solution concept, plus some positive words from a subject matter expert
21:53:25 I'll find those topics...
21:53:55 is this related to or in conjunction with the SC panel?
21:54:18 rbudden: in addition - this is basically brochures and a whitepaper download
21:54:28 ok great
21:54:30 OK they are:
21:54:30 rbudden, related, though i guess oneswig might be calling in the direction of the panel for SMEs as needed
21:54:55 1) Virtualisation strategies and alternatives for HPC
21:54:55 i missed last meeting, but did have an action item on this… I have PSC approval for passing out brochures, etc. at our booth
21:55:05 2) OpenStack and HPC network fabrics
21:55:21 3) OpenStack and high performance data (filesystems, objects etc)
21:55:31 4) OpenStack and HPC workload management
21:55:35 rbudden, brilliant, thanks for following that up!
21:55:44 5) OpenStack and HPC infrastructure management
21:56:13 b1airo: no problem. if we want to do demos/presentations that's possible with additional heads up and/or approval
21:56:15 Possible bonus topic on federation, but that's not HPC-specific plus I think we'll have our hands full
21:57:10 I'm generating much of this content as part of my contract with Cambridge but will be seeking SME input in many areas - look out y'all
21:57:43 #topic Other Business
21:57:51 time's almost up
21:58:28 i did quickly want to call out for contributions to the hypervisor tuning guide, particularly on hypervisors other than KVM
21:59:26 Joe Topjian from Cybera put it together originally from the output of various hypervisor tuning ops sessions, but we haven't quite figured out what to do with it from there
21:59:44 Isn't KVM something like 90% of deployments now?
21:59:49 would be nice to lift it out of wiki obscurity
22:00:11 oneswig, yes it's certainly the most popular, don't know the actual numbers breakdown off the top of my head though
22:00:17 KVM is pretty dominant
22:00:39 the Xen people always have nice t-shorts though, so i have a soft spot for them :-D
22:00:41 i remember the Havana Summit and I was the only one doing Xen in general in an entire room :P
22:00:51 *t-shirts
22:00:54 I wonder if there's anything you can tune with Canonical's LXD
22:01:03 not sure what a t-short is, but it sounds uncomfortable
22:01:43 oneswig, i guess there would be
22:02:04 sorry, re SC'16: I'm still waiting for the scheduling for our booth, it's just a question of how many time slots for booth talks; at least 30 min/day would be the minimum
22:02:18 #action oneswig to ask his friends at canonical re LXD tuning
22:02:58 We are over time
22:03:02 jmlowe, are you planning to show off any of your OpenStack work?
22:03:20 speaking of SC, one thing to add: if we want talks/demos we should prep that earlier. I can likely get approval for those but the content/timeslots need to be arranged earlier rather than later
22:04:28 rbudden, yes certainly need to have that organised well in advance. at this stage i suspect it'll largely be the whitepaper oneswig was talking about plus some other generic foundation materials
22:04:38 b1airo: yes, I was also thinking general OpenStack and HPC booth talks open to any wg member
22:05:19 b1airo: sounds good. I do have a pair of NUCs I can bring if we had a purpose to show a demo of sorts
22:05:55 in terms of demos, that'd be cool but i'm not sure we'll get anything from folks that do not already have a booth presence, unless the foundation has a wheel-in demo system already going (only things i have seen are vendor specific)
22:06:54 jmlowe, ok that's great to know. we may get a better idea once we see some of the summit proposals and then match up with folks who might be at SC
22:07:31 jmlowe: maybe the talk we are working on about Murano/Heat w/VM Repo would be interesting to demo
22:08:03 * b1airo ears prick up
22:08:20 Yep, should cross your inbox as a summit talk soon
22:08:30 b1airo: we are working on a talk for Barcelona
22:08:37 we won't ruin the surprise ;)
22:08:40 excellent!
22:08:42 Can't get you guys off the stage it seems :-)
22:08:49 haha, that's the plan!
22:09:09 ok, must run - got to get to work o_0
22:09:11 Look forward to it
22:09:18 OK, let's wrap up?
22:09:20 Cross-site collaboration is a great way to wrestle loose some travel money
22:09:22 yeah we are a bit over anyway
22:09:38 thanks once again all!
22:09:43 thanks everyone
22:09:47 #endmeeting