21:03:06 <martial> #startmeeting scientific-wg
21:03:07 <openstack> Meeting started Tue Jun 27 21:03:06 2017 UTC and is due to finish in 60 minutes.  The chair is martial. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:10 <openstack> The meeting name has been set to 'scientific_wg'
21:03:26 <martial> Blair, are you around?
21:04:08 <martial> #chair b1airo
21:04:09 <openstack> Current chairs: b1airo martial
21:04:14 <martial> Hi Blair
21:04:20 <martial> just in time, I just started the meeting
21:05:10 <martial> welcome everybody to the weekly Scientific Working group meeting
21:05:32 <b1airo> morning
21:05:58 <martial> this week will mostly be a repeat of last week with the small addition that some of my team members will do a quick introduction of their work
21:06:08 <priteau> Hello
21:06:15 <lizhong> Hi, I am Lizhong from NIST, working with martial
21:06:42 <martial> hello Pierre
21:06:45 <Maxime_> Hi, I'm Maxime from NIST too, also working with Martial
21:06:54 <Pooneet_Thaper> Hello, I am Pooneet from NIST and I am also working with Martial
21:06:55 <trandles> o/ Hi folks
21:07:02 <b1airo> sorry i am juggling kid breakfast for a moment :-/
21:07:20 <martial> #topic Optimal meeting time
21:07:24 <martial> blair: no worries
21:07:42 <martial> so the discussion last week was to be 1100UTC
21:08:14 <martial> (seems to be 7pm EST)
21:08:38 <martial> b1airo: is that for the Tuesday meeting?
21:08:52 <trandles> I think your timezone conversion is a bit off there martial
21:09:03 <b1airo> 7am here now :-)
21:09:04 <priteau> martial: I think that's AM, for the Wednesday
21:09:40 <b1airo> yep, 1100UTC will make it 9pm here
21:10:02 <martial> oops sorry, let me try again: 1100 UTC for the Wednesday meeting (so 7 AM EST)
21:10:30 <martial> (thanks Pierre)
21:11:17 <martial> I see the action items have b1airo instigate the changes
21:11:17 <priteau> Instead of 0900 UTC currently
21:12:23 <martial> so the schedule would be tuesday 2100 utc and wednesday 1100 utc
21:12:36 <priteau> yep
21:12:57 <martial> okay, good thanks for clarifying
21:13:13 <martial> next topic, then
21:13:17 <martial> #topic Scientific app catalogues on OpenStack
21:13:20 <priteau> we still needed to check for meeting slot availability though
21:14:38 <martial> Pierre: b1airo can update us on this item when he is available
21:15:04 <priteau> yes, let's move on
21:15:31 <martial> related to the new topic, stig noted that "there appear to be several groups around the world who want to get started on making app catalogues for their users"
21:15:35 <martial> priteau added "Orchestration templates (Heat or CloudFormation or something else) should be handled as well as disk images"
21:16:07 <martial> as well as "orchestrated templates"
21:16:14 <martial> #link http://eavesdrop.openstack.org/meetings/scientific_wg/2017/scientific_wg.2017-06-21-09.00.log.html
21:16:20 <martial> (the full log from last time)
21:17:40 <martial> anything to add to this?
21:18:57 <martial> I did not seem to see a consensus on the subject from the logs
21:19:30 <b1airo> ok, better half has taken over!
21:20:02 <martial> some talk about Murano, Heat templates and such
21:20:07 <martial> b1airo: welcome back
21:20:25 <martial> b1airo: question for you, have you confirmed the 1100 UTC slot availability?
21:20:28 <b1airo> hmm rather than having everyone read the logs in the meeting let's see what people are doing about app distribution and sharing on their clouds today?
21:21:23 <b1airo> no haven't looked at that yet, but it was the consensus from the ML and last week's meeting
21:22:54 <b1airo> martial, trandles - you guys weren't in last week's meeting, how do you approach "App/VM catalogs" ?
21:24:12 <trandles> We don't have any app catalog functionality.  Our "cloudy" solution, Charliecloud, uses Docker tools to build container images, so we use Docker Hub
21:24:44 <martial> As for us, I would invite my team to introduce the tool we were discussing at the beginning
21:25:00 <trandles> However I would like to implement some kind of catalog functionality for center-provided images already containing applications.  We just haven't started down that path yet.
21:25:53 <bastafidli_> We are using Juju and enabling it both on Ubuntu and CentOS
21:26:14 <lizhong> Hi, I'd like to introduce Conducere we've been developing
21:26:29 <lizhong> Here is a short introduction
21:26:57 <lizhong> Conducere, latin for conductor, is a simplified orchestration layer on top of OpenStack to instantiate a ready-to-use cluster for data science projects. It is designed to alleviate the complexity of deploying a reproducible infrastructure. It spawns a cluster and ensures creation time specialization of its nodes.
21:27:08 <b1airo> trandles, the "Contributed Images" we have on Nectar might be along the lines you are thinking - really just a bit of policy and procedure around a new tab in the Images page of Horizon
21:27:45 <priteau> lizhong: is it simplified as in simpler than Heat?
21:28:15 <lizhong> Features of Conducere:
21:28:16 <lizhong> - Spawn a cluster using OpenStack Heat
21:28:20 <lizhong> - Install tools and configure nodes and services using Ansible
21:28:24 <b1airo> bastafidli_, where is it that you use Juju ?
21:28:26 <lizhong> - Currently, supports Hadoop, Spark, Ganglia, NFS and NTP
21:28:31 <lizhong> - Customize roles of nodes: master, worker, client and storage
21:29:13 <lizhong> priteau: Actually, it uses Heat to spawn a cluster, but we have pre-defined templates
21:29:40 <b1airo> lizhong, so you essentially just use Heat to bootstrap and then hand-off to Ansible for the rest ?
21:30:00 <bastafidli_> b1airo, I am with Lenovo, we are utilizing Juju and Charms as there are many examples for existing applications, big data clusters, etc.
21:30:18 <priteau> lizhong: Are your templates public? I would be curious to see what they look like
21:30:20 <lizhong> That's correct. We also try to define roles for cluster nodes
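[Editor's note: a hypothetical sketch of the Heat-then-Ansible split lizhong describes, where Heat boots the nodes and a playbook then specializes them by role. The group and role names below are invented for illustration and are not Conducere's actual code:]

```yaml
# site.yml -- run after Heat has booted the cluster and an inventory
# has been generated from the stack's outputs
- hosts: masters
  become: true
  roles:
    - hadoop-namenode
    - spark-master
    - ganglia-server

- hosts: workers
  become: true
  roles:
    - hadoop-datanode
    - spark-worker

- hosts: all
  become: true
  roles:
    - ntp        # keep cluster clocks in sync
    - nfs-client # mount shared storage
```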
21:30:45 <b1airo> bastafidli_, yes i looked at Juju again recently (after unsteady early experience) and was impressed with how it has grown
21:30:46 <martial> priteau: we are starting the review process to put everything on github
21:31:23 <lizhong> priteau, We need to do some paperwork at NIST before publishing it, but it'll be very soon.
21:31:43 <martial> b1airo: yes, but we also have available the extra tool developed by lizhong (the Dmoni we had been speaking about for a bit) to collect overall cluster/host/program metrics
21:32:27 <b1airo> lizhong, you mentioned "reproducible infrastructure" - what do you mean by this with Conducere ?
21:33:57 <lizhong> b1airo, the cluster is defined by a config file, so with the same configuration, we should be able to get the same cluster.
21:35:14 <b1airo> right, makes sense, just wasn't sure if there was anything more to it than "infra as code"
21:35:57 <lizhong> b1airo, you are right, it's the same idea
21:36:40 <lizhong> We try to add components for Data science projects, e.g. Hadoop, Spark, etc.
21:36:45 <b1airo> lizhong, what sort of OpenStack features do your Heat templates assume the underlying cloud provides?
21:39:06 <martial> b1airo: I think a lot of that is described in the deployment manual, but we rely on the user having at least a basic understanding of OpenStack (they need the IDs and the routers set up before deploying the rest)
21:39:25 <Maxime_> The template assumes that the cloud provides router resources, and creates the networks and the nodes according to some provided parameters
21:40:38 <Maxime_> b1airo: Does that answer your question?
21:40:51 <martial> one of the core needs I explained a while back is the desire to allow our partners to bring their algorithms in house and access data sets
21:40:52 <b1airo> ok, so Tenant Networks are a requirement then?
21:41:29 <b1airo> do the templates create new networks or can they use provider nets ?
21:41:48 <martial> yes, project networking needs to be predefined
21:42:09 <Maxime_> b1airo: The template creates the router, and a virtual network
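[Editor's note: a minimal sketch of the kind of Heat resources Maxime_ describes, a router plus a tenant network. Resource and parameter names here are hypothetical, not taken from Conducere's actual templates:]

```yaml
heat_template_version: 2016-10-14

parameters:
  external_net:
    type: string
    description: Existing external network to uplink the router to

resources:
  cluster_net:
    type: OS::Neutron::Net
  cluster_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: cluster_net }
      cidr: 10.0.0.0/24
  cluster_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: external_net }
  router_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: cluster_router }
      subnet: { get_resource: cluster_subnet }
```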
21:42:53 <b1airo> ok, thanks
21:43:03 <b1airo> shall we move on then?
21:43:24 <martial> b1airo: I am hoping to get the project on github shortly (have to clear a few things internally first)
21:43:44 <martial> we welcome comments as always :)
21:44:06 <martial> #topic Security of research computing instances
21:44:45 <b1airo> when i suggested that topic i really intended to limit the discussion to network security, but that was a bit lost in the 8 mins or so we managed to spend on it last week
21:45:27 <martial> b1airo: hey you got 16 minutes :)
21:45:53 <b1airo> and this is honestly a self serving topic as i would like to introduce some firewall capabilities into our cloud, so interested in experiences with different options
21:47:34 <b1airo> right now our research cloud environment is essentially a big DMZ, we have Internet border taps to a firewall that give us some visibility, but honestly it is not that great today
21:48:17 <martial> b1airo: not much here from us at least
21:49:00 <b1airo> basically i'd like to provide opt-in firewalling to tenants
21:49:37 <b1airo> an OpenStack FWaaS doesn't seem like a real option today
21:49:37 <trandles> opt-in firewalling to tenants...does that mean tenants can opt-in to having a firewall they manage?
21:50:44 <b1airo> trandles, ideally they would get some management of it, but if that turns out to be too hard then i'll settle for having a firewall acting as the gateway on specific provider net/s
21:51:36 <b1airo> with some integration scripting to tell the firewall which IPs in the subnet belong to which tenants at any time
21:52:01 <trandles> I think "too hard" is likely.  If you do something like that, I'd like to hear how it impacted your support requirements.
21:52:21 <b1airo> i'd prefer to not force users into having tenant networks for it, as they mostly don't today
21:52:55 <trandles> we have experience with "automating" some iptables stuff to isolate an early VM-based user-defined software stack thing we did but it was fraught with peril
21:53:07 <b1airo> yes i agree, i think our itsec folks will be happy to assist if folks need customisation
21:53:28 <trandles> I put "automating" in quotes because it was largely managed by slurm prolog/epilog scripts and the users couldn't touch it
21:53:49 <trandles> too many failure modes left the firewall in a bad state
21:53:54 <priteau> b1airo: What are the limitations of FWaaS?
21:53:57 <trandles> so we ditched it entirely
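[Editor's note: for context, the Slurm prolog/epilog automation trandles mentions typically looks something like the sketch below. It is entirely hypothetical, not their actual scripts; IPT is set to echo so the sketch only prints the rules it would apply:]

```shell
#!/bin/sh
# Hypothetical Slurm prolog: punch a per-job hole in the firewall for the
# VM assigned to this job. The matching epilog would run the same commands
# with -D instead of -A; if the epilog ever fails, the rule leaks -- one of
# the failure modes that can leave the firewall in a bad state.
IPT="echo iptables"          # drop the echo to apply rules for real (as root)
VM_IP="${VM_IP:-10.0.0.42}"  # address of the job's VM (illustrative)
SUBMIT_HOST="${SUBMIT_HOST:-10.0.0.1}"

open_vm_fw() {
    # Allow SSH to the VM from the submit host only
    $IPT -A FORWARD -s "$SUBMIT_HOST" -d "$VM_IP" -p tcp --dport 22 -j ACCEPT
    # Drop everything else destined for the VM
    $IPT -A FORWARD -d "$VM_IP" -j DROP
}

open_vm_fw
```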
21:55:35 <b1airo> priteau, honestly i need to read up on FWaaS v2 as last time i looked into this it wasn't ready
21:56:54 <b1airo> the other part is that when i talked to a few NGFW vendors, the only OpenStack integrations i came across were either not really integrations or completely ignored FWaaS and wanted to install stuff across the hypervisors etc
21:57:09 <priteau> no Horizon support in v2 apparently
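[Editor's note: with no Horizon panels, tenants would drive FWaaS v2 from the CLI, roughly along the lines below. This is a sketch assuming the neutron-fwaas plugin and its OpenStack client extensions are installed; exact flags may differ by client version, the run wrapper just echoes the commands so this reads as a dry run, and the port ID is a placeholder:]

```shell
# FWaaS v2 model: rules -> policy -> firewall group bound to router ports.
run() { echo "openstack $*"; }   # dry run; use plain `openstack` on a real cloud
ROUTER_PORT="ROUTER-PORT-UUID-HERE"  # placeholder, not a real ID

# Allow inbound SSH, collect it into an ingress policy,
# then attach the policy to the tenant's router port.
run firewall group rule create --name allow-ssh \
    --protocol tcp --destination-port 22 --action allow
run firewall group policy create --name ingress-policy --firewall-rule allow-ssh
run firewall group create --name tenant-fw \
    --ingress-firewall-policy ingress-policy --port "$ROUTER_PORT"
```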
21:58:34 <b1airo> yeah it still looks rather green
21:58:58 <b1airo> oh well, guess i'll let you know how we go!
21:59:19 <b1airo> time to wrap up
21:59:40 <martial> #endmeeting