11:00:18 #startmeeting scientific-sig
11:00:19 Meeting started Wed May 9 11:00:18 2018 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
11:00:23 The meeting name has been set to 'scientific_sig'
11:00:34 Greetings...
11:00:52 #link agenda for today (such as it is) https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_May_8th_2018
11:01:05 Noting that actually it's May 9th...
11:01:06 hi
11:01:12 we must be in a time warp
11:01:15 Hi daveholland, morning
11:01:24 Eddies in the space-time continuum...
11:01:54 Hello
11:02:05 Hi priteau!
11:02:37 Hi oneswig!
11:02:37 #topic What has the Scientific SIG ever done for us :-)
11:02:47 good topic :)
11:02:55 Morning martial_!
11:02:58 #chair martial_
11:02:59 Current chairs: martial_ oneswig
11:03:07 Good morning Stig, everybody
11:03:11 #link Etherpad for User Committee report https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_May_8th_2018
11:03:40 This is a gathering of OpenStack activities that are SIG-related from this cycle.
11:04:19 daveholland: I wondered if you've got anything you can link to about the Sanger Centre's OpenStack day? Seems like a great contribution here.
11:04:49 I haven't seen anything from Pete about publishing the content
11:05:17 we have a list of links to presentations here: https://hpc-news.sanger.ac.uk/openstack-day-2018/
11:05:35 that is very useful indeed
11:05:38 but no write-up as such :/
11:05:39 Ooh, thanks. Can you add it to the list?
11:05:45 will do
11:05:54 Thanks daveholland
11:06:11 priteau: I think you've seen this already but was there anything to add from Chameleon?
11:07:36 (not sure I put it in the right place in the etherpad, but it's there)
11:07:38 oneswig: Should we add that there was design work to push some Chameleon features back in upstream Blazar?
11:08:03 priteau: why not, I saw it getting discussed and it's clearly useful.
11:09:27 OK, shall we move on?
11:09:44 #topic Vancouver summit BoF
11:10:01 daveholland: are you or anyone else from Sanger planning to go to Vancouver?
11:10:31 oneswig: yes, Pete and I will be there
11:10:46 Oh, excellent.
11:11:04 I am trying to think of a lightning talk (maybe an overview of apps etc we're running on OpenStack)
11:11:20 I'd be interested in that, for one.
11:11:45 that project with a '4' in the middle of the acronym - what was it?
11:11:52 or, some of the admin-side landmines we've stepped on :) :(
11:12:02 GA4GH https://www.ga4gh.org/
11:12:23 That looks like it!
11:13:13 The responsible sharing of genomic and health-related data - think you could talk for a few minutes on that? :-)
11:13:27 I will ask Tim if he has any slides I can steal :-D
11:13:32 (something operational would be equally welcome)
11:13:57 Has this project just kicked off or is it already underway with productive output?
11:14:30 it dates back to 2013 but, I think, has been a bit of a slow-burner
11:15:20 In IEEE committee terms, that's barely getting going, but for the cloud world we've lived through several universes in that time frame!
11:15:29 I might add a small "how we run automatic multimedia analytics on OpenStack, Container and Mesos" (title to be firmed up)
11:16:10 I can also do a small "Cloud Federation: P2302 and ORCA"
11:16:43 All sounds good martial_. Should we set up an Etherpad to gather these?
I'll go ahead
11:17:16 #link https://etherpad.openstack.org/p/scientific-sig-vancouver2018-lighting-talks
11:17:57 ah, thanks martial_
11:18:37 priteau: who will be there from Chameleon - anyone?
11:20:18 (moving to phone)
11:21:13 OK, shall we move on (not a lot to cover today though)
11:21:18 #topic AOB
11:21:27 oneswig: no one from Chameleon this time
11:21:38 priteau: ah, thanks. Too bad.
11:21:50 lots of conflicting travel
11:22:25 daveholland: there's a plan for a SIG night out and you and Pete would of course be invited. It's Tuesday night.
11:23:29 Smaller-scale this time, restricted to the core of the group. It was too hard to control numbers previously.
11:23:48 oneswig: that's great, thanks - I will RSVP "yes" right now :)
11:24:06 Excellent, noted.
11:25:31 I didn't have anything else to cover this week - quiet week and all that.
11:25:45 Anything new with you?
11:26:30 We've been having fun with bare metal multi-tenant IB, and I'm hoping my colleague mgoddard is going to write that up for us all ... :-)
11:26:54 (quiet mutterings about Ceph/S3 quota difficulties, and hypervisors misconfigured to use Ceph for instance storage, just another week in the trenches)
11:27:22 daveholland: do you use anything like s3fs-fuse or goofys? Got a preference?
11:27:39 trying to sort the wheat from the chaff here.
11:28:14 Forgot to add, b1airo sends his apologies - he's in San Francisco this week.
11:28:53 daveholland: are you involved with elixir?
11:29:04 we try to warn users off that idea (too many things blew up on us in the past) but, we have had some success with: s3fs, rclone mount, and a home-grown one: https://github.com/VertebrateResequencing/muxfys
11:29:53 I've no contact with Elixir but if you need a contact I can find a way to get you in touch
11:31:12 daveholland: just wondering because I think they are fellow travellers wrt EGI and federated AAI. I've exchanged occasional mails with David Ocana on this... probably too occasional given the common issues
11:32:59 OK - anything more to cover for AOB?
11:33:08 yes, "we" (for various degrees of nearness) could probably all benefit from being better in touch
11:33:42 daveholland: That's our ethos, certainly - share and enjoy
11:33:43 oneswig: we've recently integrated https://github.com/redbo/cloudfuse in our Chameleon appliances. It seems to work fine but we haven't done extensive testing or benchmarking, it's mostly for convenience
11:34:55 priteau: interesting! So this seems to cover the Swift API side of things. I'd be interested to hear how it works (doubly so if it doesn't)
11:35:44 Cloudfuse interesting indeed
11:36:15 Isn't it the same in s3fs-fuse?
11:36:37 priteau: How are the new SDN switches integrating?
11:36:43 Last commit 2 years ago
11:37:16 oneswig: Nicely! We are now testing the functionality to attach external OpenFlow controllers, it should be available to users soon.
11:38:02 I would also be interested to hear about the flowvisor / network slicing stuff and how that integrates with Ironic multi-tenant network environments...
11:38:21 that is interesting, yes please
11:38:55 priteau: does it work in-band or do you need to expose the network management port to a tenant's SDN controller?
11:40:17 Those switches (from Corsa) have the concept of network namespaces, so we managed to get the switch to contact the controller via its uplink and a dedicated IP, rather than management port
11:41:04 that's an important advantage
11:41:05 And anyway, the port which needs to be exposed is the controller's port, not the switch's port
11:41:22 So the switch can sit behind NAT, for this purpose
11:41:29 Ah, good point
11:42:51 Those switches do the hard work of isolating ports in separate VFCs (VFC = Virtual Forwarding Context). We modified networking-generic-switch to leverage this feature and map each Neutron network to a separate VFC.
11:43:21 One downside is that we're limited to 63 VFCs
11:44:00 63 per switch or overall?
11:44:19 That is slightly limited but probably your users requiring openflow are not that high?
11:44:43 Do you have a 64th context for ol'fashioned Ethernet?
11:44:43 63 per switch, I assume that's a hardware limit, maybe they have 64 processing cores
11:45:42 But in a multi-rack system, you may want all your Neutron networks to be declared on all switches, so the limit would apply on your whole deployment
11:46:46 OK, you can't do something to provision the network (VLAN?) on the fly?
11:47:03 We set up the VFCs to be acting as a learning switch by default, in that case there is an OpenFlow controller running on the switch, the users don't see it at all and it behaves like a normal switch
11:48:08 oneswig: yes, we assume that we won't have that many OpenFlow users, although this also applies to anyone who might want an isolated network. For now we're way below this limit.
11:48:37 priteau: so you can't share a VFC and have VLANs for isolation?
11:49:06 oneswig: I believe this could be possible, but we haven't explored it yet.
11:49:41 We wanted to give the most control first, since that's mostly why we bought those switches
11:50:14 Is there a public-domain equivalent to what these switches do?
11:51:46 I suppose you could do something similar for VMs using Open vSwitch, but it's software processing
11:52:05 Corsa vfc switch you say?
11:52:41 VFC is their own terminology for the isolation feature
11:53:17 We have those switches at UC: https://www.corsa.com/products/dp2400/
11:53:31 And one of those at TACC: https://www.corsa.com/products/dp2200/
11:54:08 Nice thanks
11:54:15 Is there a migration path to P4?
11:55:30 We have people in Chameleon looking at P4, but not using this hardware. I am not aware of any P4 support in Corsa switches.
11:55:53 OK - be interesting to know what their future plans are around this.
11:57:12 OK - nearly at the hour. Any more to add?
11:58:20 #endmeeting
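
Post-meeting note: the object-store mount tools discussed at 11:27-11:34 (s3fs-fuse, goofys, rclone mount, cloudfuse, muxfys) were only mentioned by name. Below is a minimal, illustrative Python sketch of driving two of them, s3fs and rclone, from a provisioning script. The bucket name, endpoint, remote name and mount points are hypothetical, and it assumes the s3fs and rclone binaries are installed with credentials already configured; it is not taken from any of the deployments discussed.

    #!/usr/bin/env python3
    # Illustrative only: bucket, endpoint and paths below are made up, and the
    # s3fs / rclone binaries plus their credentials are assumed to be in place.
    import subprocess

    def mount_s3fs(bucket, mountpoint, endpoint, passwd_file):
        """Mount an S3 bucket with s3fs-fuse (e.g. against a Ceph radosgw endpoint)."""
        subprocess.run(
            ["s3fs", bucket, mountpoint,
             "-o", f"url={endpoint}",
             "-o", "use_path_request_style",   # radosgw usually expects path-style requests
             "-o", f"passwd_file={passwd_file}"],
            check=True)

    def mount_rclone(remote_path, mountpoint):
        """Mount a bucket with 'rclone mount', using a remote set up via 'rclone config'."""
        subprocess.run(["rclone", "mount", remote_path, mountpoint, "--daemon"],
                       check=True)

    if __name__ == "__main__":
        mount_s3fs("my-bucket", "/mnt/s3", "https://rgw.example.com", "/etc/passwd-s3fs")
        mount_rclone("ceph-s3:my-bucket", "/mnt/rclone")

Wrapping the mounts like this mainly helps keep the options consistent across nodes; as noted in the meeting, exposing object storage as a filesystem has bitten people before, so treat it as a convenience rather than a supported interface.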
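Post-meeting note: a rough sketch of the Neutron-network-to-VFC mapping described at 11:42-11:48, in the style of a networking-generic-switch device driver. This is not the Chameleon code: CorsaClient is a hypothetical REST client, and the method names are only loosely modelled on the networking-generic-switch driver interface, so treat both as assumptions.

    # Illustrative only: CorsaClient and its create_vfc/attach_port/delete_vfc
    # calls are hypothetical; the real Chameleon changes may differ.

    MAX_VFCS = 63  # per-switch limit mentioned at 11:43

    class CorsaVFCDevice:
        """Map each Neutron network onto a dedicated VFC on a Corsa switch."""

        def __init__(self, client):
            self.client = client          # hypothetical switch REST client
            self.network_to_vfc = {}      # Neutron network UUID -> VFC id

        def add_network(self, segmentation_id, network_id):
            """Create a dedicated VFC when Neutron creates a network."""
            if len(self.network_to_vfc) >= MAX_VFCS:
                raise RuntimeError("VFC limit reached on this switch")
            # Default to a learning-switch VFC so tenants without an OpenFlow
            # controller just see a normal switch (per 11:47).
            vfc_id = self.client.create_vfc(mode="learning-switch")   # hypothetical call
            self.network_to_vfc[network_id] = vfc_id

        def plug_port_to_network(self, port, network_id):
            """Attach a bare metal node's switch port to its network's VFC."""
            self.client.attach_port(self.network_to_vfc[network_id], port)   # hypothetical call

        def del_network(self, segmentation_id, network_id):
            """Tear down the VFC when the Neutron network is deleted."""
            self.client.delete_vfc(self.network_to_vfc.pop(network_id))      # hypothetical call

The point of the sketch is the one-to-one mapping: because each Neutron network gets its own VFC, the 63-VFC hardware limit discussed in the meeting becomes a cap on isolated networks per switch (and, in a multi-rack deployment where every network spans every switch, per deployment).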