21:00:18 #startmeeting scientific-sig
21:00:19 Meeting started Tue Jan 21 21:00:18 2020 UTC and is due to finish in 60 minutes. The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:23 The meeting name has been set to 'scientific_sig'
21:00:30 up up and away!
21:00:35 hello
21:00:38 #topic who's here
21:00:48 Hi all
21:00:50 hi all
21:00:54 hey!
21:01:06 that's a topic!
21:01:33 The new informal SIG. I'm not even wearing a tie tonight :-)
21:02:37 OK, I forgot to post an agenda on the wiki, apologies.
21:02:48 #chair martial
21:02:49 Current chairs: martial oneswig
21:03:18 #topic Conferences and CFPs for 2020
21:03:31 So what's coming up?
21:04:50 If you're in London in early March, this free 1-day conference at the Francis Crick Institute is excellent: https://cloud.ac.uk/
21:04:50 GTC in March
21:05:06 OpenStack Summit in June (CFP coming out soon, I was told)
21:05:11 PEARC?
21:05:15 martial: where and when for GTC? You going?
21:05:27 GTC, LA, March
21:05:32 planning to
21:06:17 ISC, SuperCompCloud, working on the CFP and program committee (nominations taken) https://sites.google.com/view/supercompcloud
21:06:28 Once you're done with GTC, have some downtime lakeside in Ticino, Switzerland: https://www.hpcadvisorycouncil.com/events/2020/swiss-workshop/
21:06:34 PEARC in Portland at the end of July
21:07:03 SC20 in Atlanta ... November?
21:07:08 yep
21:07:10 jmlowe: you planning a presentation to submit for PEARC this year?
21:07:36 Possibly, definitely if JS2 gets funded
21:10:53 I haven't checked yet when the deadline is for OpenStack Vancouver - anyone know?
21:11:32 CFP hasn't even opened, has it?
21:11:37 don't think it is out yet
21:11:49 Ildiko mentioned it should be soon
21:12:02 There's nothing but dates and location on the normal summit page.
21:12:55 It's pretty minimal right now - https://www.openstack.org/summit/vancouver-2020/
21:13:06 Oh, looks like an eventbrite registration page is open now. Totally new format by the look of it?
21:13:12 https://www.eventbrite.com/e/opendev-ptg-vancouver-2020-tickets-88923270897?aff=opendevptg2020&_ga=2.64326319.100008772.1579641137-574136464.1557423154
21:15:32 It actually looks great - quite a grassroots feel to the way it is presented.
21:16:44 looks very different to what we are used to, indeed
21:16:46 Yeah, much less like a conference and more like a big integrated PTG?
21:17:02 sure does
21:17:40 This plenary opener - "Short kickoff with all attendees to set the goals for the day or discuss the outcomes of the previous day." - sounds a lot more "intimate", for sure.
21:18:08 indeed, it looks like the format has drastically changed
21:18:18 I think there will still be an upstream academy
21:19:20 Wonder how the various SIGs fit into the structure? Will they still support SIG meetings as part of the program?
21:19:33 Or maybe we'll have to self-organize
21:19:45 trandles: hopefully in a different way than in Shanghai, where it didn't work out at all
21:20:00 not sure yet, once the call for participation is open we can ask
21:20:23 yes, that sounds like it was less than what was expected at the time
21:20:25 "PTG: Afternoon working sessions for project teams and SIGs to continue the morning's discussions."
21:20:44 wait, is it only one day?
21:21:05 4 days
21:21:26 okay, 8-11
21:21:30 makes more sense
21:21:32 They were saying it would be different, I wasn't sure I believed them until now
21:22:01 It's 4 days and they call out 4 focus areas, I wonder if each day will be dedicated to each area
21:22:14 I like it. I think we can get stuff done here.
21:22:36 gah - "if each day will be dedicated to one area"
21:22:59 It's certainly a lot cheaper than past summits!
21:23:11 well, I look forward to learning new things
21:23:34 although the question remains about "is it a long PTG"
21:24:28 anything else?
21:24:32 definitely scope for an evening social!
21:24:40 OK, move on?
21:24:56 go
21:25:12 I appreciate social events closer to sea level
21:25:17 :)
21:25:37 #topic large-scale SIG data
21:25:58 I've been taking part in the large-scale SIG meetings
21:26:04 expand on the topic please?
21:26:10 They are looking for operator data on pain points
21:26:28 oh right, that is a lot of our SIG users :)
21:26:29 #link mailing list post http://lists.openstack.org/pipermail/openstack-discuss/2020-January/011820.html
21:27:10 #link etherpad for information on scaling https://etherpad.openstack.org/p/large-scale-sig-documentation
21:27:21 Could actually become a handy resource, that one.
21:27:28 tl;dr: did they define large-scale?
21:28:32 Not precisely. But approaching 1000 hypervisors is definitely in it. 500+ will be showing a lot of the early signs of scaling trouble
21:30:14 Our contribution so far has been around "golden signals" for scaling problems - rates, latencies, errors, and all that.
21:30:50 to which end, the most useful thing I've seen to date is telemetry from HAProxy
21:31:40 HAProxy at that scale? The OSA documents explicitly suggest you don't want to use haproxy in production at all.
21:32:10 It will give response latencies by endpoint, for example. Very cool. And a graph of 500 response codes. Highly useful.
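As an aside, here is a minimal sketch of pulling those two "golden signals" - per-backend 5xx error rates and response latency - from a Prometheus server scraping HAProxy's built-in exporter. The server address is hypothetical, and the metric names assume HAProxy's native Prometheus exporter (HAProxy >= 2.0); check them against your own /metrics output.

```python
# Sketch: query Prometheus for HAProxy "golden signals" (assumed setup).
import json
import urllib.parse
import urllib.request

PROM_URL = "http://prometheus.example.com:9090"  # hypothetical Prometheus server

def instant_query(promql):
    """Run an instant query against the Prometheus HTTP API."""
    url = PROM_URL + "/api/v1/query?" + urllib.parse.urlencode({"query": promql})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"]["result"]

# Per-backend rate of 5xx responses over the last 5 minutes (the "errors" signal)
errors = instant_query(
    'sum by (proxy) (rate(haproxy_backend_http_responses_total{code="5xx"}[5m]))'
)

# HAProxy's rolling average of backend response time (the "latency" signal)
latency = instant_query("haproxy_backend_response_time_average_seconds")

for sample in errors:
    print(sample["metric"].get("proxy"), sample["value"][1], "5xx/s")
for sample in latency:
    print(sample["metric"].get("proxy"), sample["value"][1], "s avg response")
```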
21:32:57 trandles: that's surprising. We build upon it as standard. Multiple in a deploy, sometimes.
21:33:44 As someone who appreciates a tight piece of C code, what's the objection, and indeed the alternative?
21:33:48 um, what!? no HAProxy? everything I have would fall over without it
21:33:53 Looking for the reference now, but IIRC it says something like "HAProxy is appropriate for testing but you should use a real hardware director in production"
21:34:21 what are the alternatives?
21:34:35 sure, I've got a spare couple of million for an enterprise load balancer
21:35:07 trandles: did you heed this guidance? :-)
21:35:38 I certainly didn't
21:36:04 oneswig, I can't even get a deployment to finish. I spend most of my time googling to figure out what configuration variable I need to set that is left out of the OSA guide.
21:36:06 I don't think I've seen our OpenStack APIs generate enough traffic or connections to be a concern for it. Famous last words, perhaps.
21:36:49 Maybe Vancouver should have one topic: decent documentation
21:36:52 trandles: feel your pain. My own notes are the same. I'm like, "thanks, former self"...
21:37:17 oneswig, I have started a new notebook of nothing but "gotchas and solutions" to deploying OpenStack
21:37:53 That's awesome. Next time you do it, it'll sail through!
21:39:05 +1 on docs. I feel like the HA guides are lacking, and there's an overall sense in the community that if you are doing production it's assumed you are doing OSA or something similar, which doesn't really fit all deployment models
21:39:06 I feel that there is a book in the making here
21:39:47 # load balancer
21:39:47 # Ideally the load balancer should not use the Infrastructure hosts.
21:39:47 # Dedicated hardware is best for improved performance and security.
21:39:53 you're doing something seriously wrong if the openstack APIs knock over haproxy for anything other than storing glance images
21:40:01 In the OSA config file
21:40:25 Anyway, for haproxy users, I highly recommend the telemetry. There's a prometheus endpoint, which we scrape directly or poll and store in Monasca.
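A minimal sketch of the "scrape directly" approach: fetching HAProxy's Prometheus endpoint and filtering for one metric family. It assumes the exporter has been enabled in haproxy.cfg (on HAProxy >= 2.0, roughly `http-request use-service prometheus-exporter if { path /metrics }` on a stats frontend); the host and port here are hypothetical. The same endpoint can be polled by Prometheus or Monasca, as described above.

```python
# Sketch: scrape HAProxy's Prometheus exposition endpoint directly (assumed setup).
import urllib.request

# Hypothetical address for the HAProxy stats/metrics listener
METRICS_URL = "http://haproxy.example.com:8404/metrics"

def scrape(url=METRICS_URL, prefix="haproxy_backend_http_responses_total"):
    """Fetch the Prometheus exposition text and keep one metric family."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return [line for line in text.splitlines() if line.startswith(prefix)]

for line in scrape():
    # e.g. haproxy_backend_http_responses_total{proxy="nova_api",code="5xx"} 0
    print(line)
```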
21:40:26 But that's not exactly the comment I'm remembering
21:40:42 trandles: I might read that comment as "dedicated hardware for running haproxy"
21:40:52 oneswig, I agree
21:41:13 indeed. running haproxy on control plane nodes isn't really a great idea
21:41:33 @trandles I'd be curious to chat sometime about your experiences with OSA
21:41:50 +1 on the dedicated hardware ... but that was a long time ago when we did our first deployment
21:42:13 oneswig, you said prometheus... did you replace gnocchi with it?
21:42:25 rbudden, any time
21:43:08 just from a security standpoint, you greatly reduce your attack surface by only letting haproxy touch the outside and keeping your controllers inside
21:43:20 I saw it in two configs today - one with Prometheus (and for metric storage), the other with Monasca scraping the same endpoint (ending up in influx).
21:44:04 jmlowe: +1 this is rather important for our setups
21:44:42 rbudden, the reason I'm asking about prometheus/gnocchi is because gnocchi wouldn't install via OSA without serious fiddling and someone in the OSA project pointed me to a third person's ansible for prometheus installation
21:44:54 Interesting aside, the Red Hat monitoring framework SAF apparently standardises on collectd instead.
21:45:36 the future of Gnocchi and/or its replacement is concerning
21:46:22 being knee-deep in a large-scale Train deploy at the moment, I'm curious if/when there will be a consensus in the community as to the direction of Telemetry
21:46:51 rbudden, I'm deploying Train via OSA, we definitely need to have a call very soon
21:47:12 sounds like a plan
21:47:38 prometheus is getting wide adoption but the back-end storage has limitations, such as control over retention period. We've done a lot with influx (but never paid for the enterprise HA version)
21:47:46 we currently have a custom hybrid Puppet/Ansible model with the intent of moving it entirely to Ansible as time allows
21:48:57 Does anyone have good pointers for telemetry that they monitor?
21:49:32 for us it is heavy Prometheus and heavy Ansible (but we do only have six racks)
21:50:11 my understanding was Prometheus with Ceilometer was not appropriate for billing data
21:50:45 10 minute warning btw - in case we're in the weeds ;)
21:51:30 As a peddler of scalar telemetry, Prometheus doesn't get notifications, which can be useful for fine-grained timing.
21:51:42 I think the next topic is AOB ... so "in the weeds" works :)
21:51:53 #topic AOB
21:51:57 seamless transition
21:52:03 :)
21:52:18 Wanted to know if anyone has been using WekaIO?
21:52:25 I want my emojis to high give Stig :)
21:52:35 -g+f
21:52:43 oneswig, took me a few months to get WekaIO to stop cold calling me.
:P
21:53:02 trandles: surely after the first call they knew who you were? :-)
21:53:55 Artful Dodger kept answering my phone
21:54:22 I've been working on getting it well integrated with OpenStack clients, in a mutually beneficial way.
21:55:16 With help we got a client mounting Weka in "fast mode", i.e. SR-IOV and DPDK
21:57:07 Don't have benchmark data yet, got held up building images with very specific versions (CentOS 7.6, MLNX-OFED 4.6). It can be hard to go back a few versions on cloud software :-)
21:58:32 I did see this today: https://www.hpcwire.com/off-the-wire/openstack-software-adds-native-upstream-support-for-hdr-200-gigabit-infiniband/
21:58:53 oneswig: seems like a pain, to be honest
21:58:57 Ah, saw that too. What's the big deal, surely one version of IB is like another?
21:58:58 trandles: that is cool
21:59:31 oneswig, no idea
22:00:03 janders would probably know, but I think he's been doing this for ages anyway!
22:00:10 I don't know enough about HDR 200 to know if it has to do with virtual functions, verbs-type stuff...??
22:00:39 Don't know, will keep an eye out.
22:00:48 We are on the hour, time to close
22:00:58 thanks all
22:01:02 #endmeeting