11:00:18 <oneswig> #startmeeting scientific-sig
11:00:19 <openstack> Meeting started Wed May  9 11:00:18 2018 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
11:00:23 <openstack> The meeting name has been set to 'scientific_sig'
11:00:34 <oneswig> Greetings...
11:00:52 <oneswig> #link agenda for today (such as it is) https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_May_8th_2018
11:01:05 <oneswig> Noting that actually it's May 9th...
11:01:06 <daveholland> hi
11:01:12 <daveholland> we must be in a time warp
11:01:15 <oneswig> Hi daveholland, morning
11:01:24 <oneswig> Eddies in the space-time continuum...
11:01:54 <priteau> Hello
11:02:05 <oneswig> Hi priteau!
11:02:37 <priteau> Hi oneswig!
11:02:37 <oneswig> #topic What has the Scientific SIG ever done for us :-)
11:02:47 <martial_> good topic :)
11:02:55 <oneswig> Morning martial_!
11:02:58 <oneswig> #chair martial_
11:02:59 <openstack> Current chairs: martial_ oneswig
11:03:07 <martial_> Good morning Stig, everybody
11:03:11 <oneswig> #link Etherpad for User Committee report https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_May_8th_2018
11:03:40 <oneswig> This is a gathering of OpenStack activities that are SIG-related from this cycle.
11:04:19 <oneswig> daveholland: I wondered if you've got anything you can link to about the Sanger Centre's OpenStack day?  Seems like a great contribution here.
11:04:49 <oneswig> I haven't seen anything from Pete about publishing the content
11:05:17 <daveholland> we have a list of links to presentations here: https://hpc-news.sanger.ac.uk/openstack-day-2018/
11:05:35 <martial_> that is very useful indeed
11:05:38 <daveholland> but no write-up as such :/
11:05:39 <oneswig> Ooh, thanks.  Can you add it to the list?
11:05:45 <daveholland> will do
11:05:54 <oneswig> Thanks daveholland
11:06:11 <oneswig> priteau: I think you've seen this already but was there anything to add from Chameleon?
11:07:36 <daveholland> (not sure I put it in the right place in the etherpad, but it's there)
11:07:38 <priteau> oneswig: Should we add that there was design work to push some Chameleon features back in upstream Blazar?
11:08:03 <oneswig> priteau: why not, I saw it getting discussed and it's clearly useful.
11:09:27 <oneswig> OK, shall we move on?
11:09:44 <oneswig> #topic Vancouver summit BoF
11:10:01 <oneswig> daveholland: are you or anyone else from Sanger planning to go to Vancouver?
11:10:31 <daveholland> oneswig: yes, Pete and I will be there
11:10:46 <oneswig> Oh, excellent.
11:11:04 <daveholland> I am trying to think of a lightning talk (maybe an overview of apps etc we're running on OpenStack)
11:11:20 <oneswig> I'd be interested in that, for one.
11:11:45 <oneswig> that project with a '4' in the middle of the acronym - what was it?
11:11:52 <daveholland> or, some of the admin-side landmines we've stepped on :) :(
11:12:02 <daveholland> GA4GH https://www.ga4gh.org/
11:12:23 <oneswig> That looks like it!
11:13:13 <oneswig> The responsible sharing of genomic and health-related data - think you could talk for a few minutes on that? :-)
11:13:27 <daveholland> I will ask Tim if he has any slides I can steal :-D
11:13:32 <oneswig> (something operational would be equally welcome)
11:13:57 <oneswig> Has this project just kicked off or is it already underway with productive output?
11:14:30 <daveholland> it dates back to 2013 but, I think, has been a bit of a slow-burner
11:15:20 <oneswig> In IEEE committee terms, that's barely getting going, but for the cloud world we've lived through several universes in that time frame!
11:15:29 <martial_> I might add a small "how we run automatic multimedia analytics on OpenStack, Container and Mesos" (title to be firmed up)
11:16:10 <martial_> I can also do a small "Cloud Federation: P2302 and ORCA"
11:16:43 <oneswig> All sounds good martial_.  Should we set up an Etherpad to gather these?  I'll go ahead
11:17:16 <martial_> #link https://etherpad.openstack.org/p/scientific-sig-vancouver2018-lighting-talks
11:17:57 <oneswig> ah, thanks martial_
11:18:37 <oneswig> priteau: who will be there from Chameleon - anyone?
11:20:18 <martial_> (moving to phone)
11:21:13 <oneswig> OK, shall we move on (not a lot to cover today though)
11:21:18 <oneswig> #topic AOB
11:21:27 <priteau> oneswig: no one from Chameleon this time
11:21:38 <oneswig> priteau: ah, thanks.  Too bad.
11:21:50 <priteau> lots of conflicting travel
11:22:25 <oneswig> daveholland: there's a plan for a SIG night out and you and Pete would of course be invited.  It's Tuesday night.
11:23:29 <oneswig> Smaller-scale this time, restricted to the core of the group.  It was too hard to control numbers previously.
11:23:48 <daveholland> oneswig: that's great, thanks - I will RSVP "yes" right now :)
11:24:06 <oneswig> Excellent, noted.
11:25:31 <oneswig> I didn't have anything else to cover this week - quiet week and all that.
11:25:45 <oneswig> Anything new with you?
11:26:30 <oneswig> We've been having fun with bare metal multi-tenant IB, and I'm hoping my colleague mgoddard is going to write that up for us all ... :-)
11:26:54 <daveholland> (quiet mutterings about Ceph/S3 quota difficulties, and hypervisors misconfigured to use Ceph for instance storage, just another week in the trenches)
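[For context on the Ceph/S3 quota side: RADOS Gateway quotas are managed per user or per bucket with radosgw-admin. A minimal sketch, assuming admin access on an RGW node; the user id and sizes are placeholders, and older releases may want --max-size given in bytes rather than with a suffix:

    # set and enable a 500 GiB / 1M-object quota for one RGW user (uid is a placeholder)
    radosgw-admin quota set --uid=demo-user --quota-scope=user --max-size=500G --max-objects=1000000
    radosgw-admin quota enable --uid=demo-user --quota-scope=user
    # check current usage against the quota
    radosgw-admin user stats --uid=demo-user --sync-stats
]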
11:27:22 <oneswig> daveholland: do you use anything like s3fs-fuse or goofys?  Got a preference?
11:27:39 <oneswig> trying to sort the wheat from the chaff here.
11:28:14 <oneswig> Forgot to add, b1airo sends his apologies - he's in San Francisco this week.
11:28:53 <oneswig> daveholland: are you involved with elixir?
11:29:04 <daveholland> we try to warn users off that idea (too many things blew up on us in the past), but we have had some success with s3fs, rclone mount, and a home-grown one: https://github.com/VertebrateResequencing/muxfys
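[For reference, the two off-the-shelf tools daveholland mentions are typically driven as below; the bucket, mountpoint and endpoint are placeholders, and the credential files are assumed to exist already:

    # s3fs-fuse against an S3-compatible endpoint (e.g. Ceph RGW), credentials in ~/.passwd-s3fs
    s3fs mybucket /mnt/mybucket -o url=https://rgw.example.org -o use_path_request_style

    # rclone, using an S3 remote already defined in ~/.config/rclone/rclone.conf
    rclone mount ceph-s3:mybucket /mnt/mybucket --daemon --vfs-cache-mode writes
]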
11:29:53 <daveholland> I've no contact with Elixir but if you need a contact I can find a way to get you in touch
11:31:12 <oneswig> daveholland: just wondering because I think they are fellow travellers wrt EGI and federated AAI.  I've exchanged occasional mails with David Ocana on this... probably too occasional given the common issues
11:32:59 <oneswig> OK - anything more to cover for AOB?
11:33:08 <daveholland> yes, "we" (for various degrees of nearness) could probably all benefit from being better in touch
11:33:42 <oneswig> daveholland: That's our ethos, certainly - share and enjoy
11:33:43 <priteau> oneswig: we've recently integrated https://github.com/redbo/cloudfuse in our Chameleon appliances. It seems to work fine but we haven't done extensive testing or benchmarking, it's mostly for convenience
11:34:55 <oneswig> priteau: interesting!  So this seems to cover the Swift API side of things.  I'd be interested to hear how it works (doubly so if it doesn't)
11:35:44 <martial_> Cloudfuse interesting indeed
11:36:15 <priteau> Isn't it the same in s3fs-fuse?
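[For anyone wanting to try cloudfuse: it mounts a Swift account much as the S3 tools above mount a bucket, taking credentials from a ~/.cloudfuse file. The key names below follow the project README but are worth double-checking there; the auth URL and mountpoint are placeholders:

    # ~/.cloudfuse (verify exact key names against the README)
    username=demo
    api_key=SECRET
    authurl=https://keystone.example.org:5000/v2.0

    # mount the account's containers under /mnt/swift
    cloudfuse /mnt/swift
]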
11:36:37 <oneswig> priteau: How are the new SDN switches integrating?
11:36:43 <martial_> Last commit 2 years ago
11:37:16 <priteau> oneswig: Nicely! We are now testing the functionality to attach external OpenFlow controllers, it should be available to users soon.
11:38:02 <oneswig> I would also be interested to hear about the flowvisor / network slicing stuff and how that integrates with Ironic multi-tenant network environments...
11:38:21 <daveholland> that is interesting, yes please
11:38:55 <oneswig> priteau: does it work in-band or do you need to expose the network management port to a tenant's SDN controller?
11:40:17 <priteau> Those switches (from Corsa) have the concept of network namespaces, so we managed to get the switch to contact the controller via its uplink and a dedicated IP, rather than management port
11:41:04 <oneswig> that's an important advantage
11:41:05 <priteau> And anyway, the port which needs to be exposed is the controller's port, not the switch's port
11:41:22 <priteau> So the switch can sit behind NAT, for this purpose
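[That is the usual OpenFlow direction of connection: the switch dials out to the controller's listening port (6653 by default), so only the controller needs a reachable address. For comparison, with Open vSwitch the same arrangement looks like this; bridge name and controller IP are placeholders:

    # point a bridge at an external OpenFlow controller; the switch initiates the TCP connection
    ovs-vsctl set-controller br-tenant tcp:203.0.113.10:6653
    # fall back to ordinary L2 forwarding if the controller becomes unreachable
    ovs-vsctl set-fail-mode br-tenant standalone
]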
11:41:29 <oneswig> Ah, good point
11:42:51 <priteau> Those switches do the hard work of isolating ports in separate VFCs (VFC = Virtual Forwarding Context). We modified networking-generic-switch to leverage this feature and map each Neutron network to a separate VFC.
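[For readers who haven't met it, networking-generic-switch is the Neutron ML2 mechanism driver commonly used with Ironic for bare-metal port binding, and switches are declared per device in its configuration. The Corsa/VFC support described here is Chameleon's own modification, so the device_type below is hypothetical; the stanza only illustrates the upstream config format with placeholder values:

    # ML2 configuration for networking-generic-switch (illustrative values only)
    [genericswitch:corsa-rack1]
    device_type = netmiko_corsa    # hypothetical downstream driver name
    ip = 192.0.2.10
    username = admin
    password = SECRET
]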
11:43:21 <priteau> One downside is that we're limited to 63 VFCs
11:44:00 <daveholland> 63 per switch or overall?
11:44:19 <oneswig> That is somewhat limiting, but presumably the number of users requiring OpenFlow isn't that high?
11:44:43 <oneswig> Do you have a 64th context for ol'fashioned Ethernet?
11:44:43 <priteau> 63 per switch, I assume that's a hardware limit, maybe they have 64 processing cores
11:45:42 <priteau> But in a multi-rack system you may want all your Neutron networks declared on all switches, so the limit would apply to your whole deployment
11:46:46 <daveholland> OK, you can't do something to provision the network (VLAN?) on the fly?
11:47:03 <priteau> We set up the VFCs to act as a learning switch by default; in that case an OpenFlow controller runs on the switch, the users don't see it at all, and it behaves like a normal switch
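[Their embedded-controller default gives users the behaviour you would otherwise get from the OpenFlow NORMAL action; the Open vSwitch equivalent, for comparison, is a single lowest-priority rule (bridge name is a placeholder):

    # with no external controller attached, restore ordinary MAC-learning behaviour
    ovs-ofctl add-flow br-tenant-a "priority=0,actions=NORMAL"
    ovs-ofctl dump-flows br-tenant-a
]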
11:48:08 <priteau> oneswig: yes, we assume that we won't have that many OpenFlow users, although this also applies to anyone who might want an isolated network. For now we're way below this limit.
11:48:37 <oneswig> priteau: so you can't share a VFC and have VLANs for isolation?
11:49:06 <priteau> oneswig: I believe this could be possible, but we haven't explored it yet.
11:49:41 <priteau> We wanted to give the most control first, since that's mostly why we bought those switches
11:50:14 <oneswig> Is there a public-domain equivalent to what these switches do?
11:51:46 <priteau> I suppose you could do something similar for VMs using Open vSwitch, but it's software processing
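[A rough software analogue of the per-tenant VFC isolation, along the lines priteau suggests, is one Open vSwitch bridge per tenant network, or VLAN tags on a shared bridge; bridge and port names are placeholders:

    # one bridge per tenant network keeps flow tables and ports separate
    ovs-vsctl add-br br-tenant-a
    ovs-vsctl add-port br-tenant-a vnet0
    # or isolate tenants on a shared bridge with VLAN tags per port
    ovs-vsctl add-port br-shared vnet1 tag=101
    ovs-vsctl add-port br-shared vnet2 tag=102
]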
11:52:05 <martial_> Corsa vfc switch you say?
11:52:41 <priteau> VFC is their own terminology for the isolation feature
11:53:17 <priteau> We have those switches at UC: https://www.corsa.com/products/dp2400/
11:53:31 <priteau> And one of those at TACC: https://www.corsa.com/products/dp2200/
11:54:08 <martial_> Nice thanks
11:54:15 <oneswig> Is there a migration path to P4?
11:55:30 <priteau> We have people in Chameleon looking at P4, but not using this hardware. I am not aware of any P4 support in Corsa switches.
11:55:53 <oneswig> OK - be interesting to know what their future plans are around this.
11:57:12 <oneswig> OK - nearly at the hour.  Any more to add?
11:58:20 <oneswig> #endmeeting