09:00:34 #startmeeting large_scale_sig
09:00:35 Meeting started Wed Nov 27 09:00:34 2019 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:38 The meeting name has been set to 'large_scale_sig'
09:00:44 #topic Rollcall
09:00:49 hi
09:00:51 hi
09:00:54 o/
09:01:00 hello
09:01:00 hi
09:01:00 hi
09:01:02 Hello all
09:01:04 Welcome to the first of what I hope will be a long series of meetings of this SIG!
09:01:06 o/
09:01:20 I'd like to start by doing a quick round of introductions, I'll start
09:01:28 My name is Thierry Carrez, I manage the engineering team at the OpenStack Foundation
09:01:40 I'm Stig Telfer, CTO, StackHPC
09:01:40 My goal here is to facilitate a discussion between OpenStack users
09:01:50 and get them engaged to drive common improvements that will make everyone's life better
09:02:43 I'm Erkki Peura, architect for Nokia private cloud
09:02:53 Hi I'm Yusuke Tatsumi from Yahoo! JAPAN
09:03:10 I'm Pengju Jiao from China Mobile
09:03:11 I am Arnaud Morin, working for OVH in the team in charge of deploying and operating the Public Cloud infrastructure (based on openstack of course)
09:03:18 I'm Belmiro. I work at CERN deploying and maintaining our multi-cell cloud
09:03:34 I'm Masahito Muroi, working for LINE as a software engineer.
09:03:54 I'm Dinesh Bhor, I am from LINE Corp. I work as an Infrastructure Engineer.
09:04:51 Looks like we lost Yusuke
09:05:11 I've re-logged in.
09:05:18 Ah great!
09:05:29 I think we heard from everyone
09:05:32 #topic Agree on SIG name
09:05:43 We are currently using "large scale SIG" to describe this group
09:05:57 Before I formally file the paperwork to create the SIG I'd like to see if that name works
09:06:09 On one hand it's a bit vague and with a bit of a wide potential scope
09:06:21 On the other we already started to communicate with that name, so maybe it's simpler to continue
09:06:26 What is your opinion on that?
09:06:38 for me this name is correct
09:06:51 I'm here for the substance, whatever the name :-)
09:06:51 agree +1
09:06:55 +1
09:06:56 I think let's keep it the same as now.
09:06:57 +1
09:07:02 Personally I'm ok with that name, as long as we set smaller-scope objectives and don't go in every direction
09:07:02 +1
09:07:12 agree
09:07:21 Like if we are clear on what we want to do, the name doesn't matter much
09:07:22 +1
09:07:46 #agreed Keep "large scale SIG" as the group name
09:07:54 #topic Volunteers for SIG chairing
09:08:06 As I said earlier my goal here is to facilitate this discussion, and I'm happy to help chairing the group at the beginning
09:08:16 As I said earlier my goal here is to facilitate this discussion, and I'm happy to help chairing the group at the beginning
09:08:18 err
09:08:28 But I'm not running a large scale deployment of openstack myself, so I'll gladly let anyone else interested take over
09:08:41 For now we'd need at least one person that can take over organizing the meeting when I won't be available
09:08:50 Is there any volunteer?
09:09:34 We are at the measuring phase of a large-scale deployment, we don't have any length of operational experience to draw from either.
09:09:59 I can help
09:10:30 we run large scale but I am not sure I can run the group for now, I'd prefer if someone else could take the lead
09:10:35 belmoreira: thanks! I'll list you as co-chair.
I expect to take the bulk of the chairing work, but it's always good to have two names for continuity
09:11:27 #info Belmiro will co-chair with Thierry for now
09:11:38 unless there are other volunteers :)
09:12:02 We could have three chairs, especially if someone from the APAC timezones can help cover there
09:12:39 and we don't have to decide today. Two is good for now
09:12:56 Maybe I can help. We run a large scale openstack cluster in public cloud
09:13:34 jiaopengju: OK, I'll list you as co-chair as well. I like the idea of having geographic distribution for those
09:14:29 #info Pengju Jiao will co-chair with Belmiro and Thierry
09:14:39 #topic Meetings
09:14:49 Now we need to decide how we should make progress in this SIG
09:14:57 Do we need synchronous meetings like this one?
09:15:00 And if yes, how often should we have them? Is IRC fine?
09:15:10 Should we have a permanent IRC channel?
09:15:25 (like #openstack-large-scale)
09:15:46 I like the idea of having a meeting to sync.
09:15:57 +1
09:15:58 Personally I feel like we'll need regular meetings, at least at the start, to get it off the ground
09:16:09 +1
09:16:13 +1
09:16:16 +1
09:16:37 makes sense to me, but how often - every 2 weeks?
09:16:44 Should we make those weekly for now? Or every two weeks?
09:16:46 hah
09:16:57 Weekly might just be too often
09:17:07 prefer every 2 weeks
09:17:20 two weeks is ok for me :)
09:17:25 we can start with every 2 weeks
09:17:30 +1
09:17:32 Bi-weekly makes sense to me
09:17:46 #agreed IRC meeting every 2 weeks
09:17:46 every 2 weeks I think is a good compromise to start with
09:18:12 ttx: should it cover different time zones? I'm happy with this time but could go up to +12 hours from now too
09:18:34 oneswig: good question. The group is currently only Europe and APAC
09:18:44 which is why this time makes the most sense
09:18:58 If we had people from the US interested, we should probably find a way to rotate
09:19:24 but it's not the case yet... so maybe a problem for another time?
09:20:18 True. Meeting every 2 weeks at this time leaves the option of an interleaved meeting at a different time.
09:20:22 (the trick being, there is just no convenient time for China/Japan + western Europe + US east + US west)
09:20:59 If we keep that day and time every two weeks, does that work for everyone (for now)?
09:21:12 works for me
09:21:16 good for me
09:21:17 works for me
09:21:21 works for me
09:21:21 works for me
09:21:26 Fun fact, I won't be able to run the meeting two weeks from now at this time, being at a conf
09:21:30 good for me (from APAC/Japan)
09:22:10 Do you think a permanent IRC channel would help?
09:22:21 Or should we push as much comm as possible to the ML?
09:22:22 for IRC we already have the openstack-operators. I think we shouldn't create a different group ("the large scale operators") but expose everything that we discuss to all operators
09:22:54 If not, what are the candidates for IRC channels? the openstack-operators?
09:23:09 I feel like leaving communication traces on the mailing-list is a great way to be transparent and encourage others to join
09:23:54 We can definitely use #openstack-operators for one-off discussions
09:24:02 agreed - the scientific-sig has a separate IRC channel but it is not used
09:24:47 Agree with ML, for the trace and being able to catch back up on some topics
09:25:04 Actually, I'm not available on IRC at night. ML is good for me as a first contact point.
09:25:06 But I'd rather not force everyone to monitor an IRC channel all the time...
09:25:16 I have no strong opinion on the IRC channel. openstack-operators is good for me
09:25:16 masahito: yes
09:25:18 ah, I'm living in Japan.
09:26:06 IRC channel and ML are both good for me
09:26:07 For interactive communication, #openstack-operators sounds good to me.
09:26:24 I agree with ML as the main communication channel. And we can use openstack-operators for one-off discussions as ttx suggested
09:26:42 OK so let's use the mailing-list as our main means of communication... with prefix [large-scale] or [largescale-sig]
09:27:07 maybe the latter, so that it's clear it's about the SIG
09:28:34 Also we'll likely use a lot of etherpads as we draft goals and create documentation
09:28:34 that is all asynchronous and will work better across all of our timezones
09:28:38 Does that work?
09:28:48 +1
09:28:51 +1
09:28:53 +1
09:28:56 +1
09:28:58 +1
09:29:06 +1
09:29:29 +1
09:29:54 #agreed Use openstack-discuss with [largescale-sig] for SIG topics. Prefer etherpads and other asynchronous methods of communication. One-off synchronous discussions in #openstack-operators
09:30:54 ok, are there any other logistics questions we need to solve before discussing what we'll actually do?
09:31:33 #action ttx to propose large scale SIG creation changes to the openstack-sigs repository
09:32:14 I'll take that as a "no"
09:32:18 #topic Discuss initial SIG objectives
09:32:36 So first of all I think it is important to set reasonable objectives
09:32:59 In my long experience of such groups in OpenStack history, we always start with a lot of energy
09:33:24 but then if we set large goals and go in every direction, that initial energy dissipates fast
09:33:40 especially when real world commitments start to disrupt progress
09:34:18 It's a lot better to set a small goal and make steady progress toward it
09:34:37 rather than set a large goal and abandon it because nobody has enough time
09:35:02 But the group should definitely end up producing *something*
09:35:23 otherwise without a focal point the energy also dissipates fast :)
09:35:52 We had several ideas raised in the discussion we had in Shanghai
09:36:00 Notes at:
09:36:04 #link https://etherpad.openstack.org/p/PVG-large-scale-SIG
09:36:43 amorin mentioned wanting to create or modify existing docs for sensible larger-scale config defaults
09:37:20 masahito has work within Oslo to instrument bottlenecks
09:37:21 the ML thread also adds some high level information
09:37:52 does anyone want to propose a topic for the group to initially focus on?
09:38:37 instrumentation is my primary focus at present.
09:38:53 what do you mean by instrumentation?
09:39:04 what is the "large scale" definition? I think about 1k compute nodes in one cluster.
09:39:18 How about gathering the existing information on how operators are managing large deployments? During the summits we have a lot of presentations that discuss several aspects: cells, rabbit, ...
09:39:38 amorin: I'm thinking of how to detect the bottlenecks as the system grows
09:39:50 ok
09:40:00 YusukeTatsumi: yes, one issue was the difference between scale within one cluster (which was my original focus) and more generally large-size deployments (lots of clusters)
09:40:33 Personally I think if we focus on scaling within one cluster, it's already a large enough scope
09:40:44 signalling relevant presentations in a document and maybe creating a summary would help to avoid rethinking a solution that maybe was solved by someone but didn't get a lot of exposure
09:40:45 and would raise very interesting questions
09:40:56 belmoreira: agree with that, and I think we can share good practices on config params within this topic
09:41:33 ^ was about how operators are managing large scale
09:41:58 Maybe that's two different axes we can work on. (1) Scaling within one cluster, and instrumentation of the bottlenecks there
09:42:26 (2) Document large scale configuration and tips & tricks
09:42:50 +1
09:43:14 That makes sense.
09:43:56 I think we should give all the users confidence in large clusters, so at first we should tell them how large the clusters running in production are, and then show them how to do this
09:43:57 +1
09:43:58 My work ttx mentioned above is related to (1), in that direction.
09:44:32 A reasonable goal for (1) would be to identify the most obvious bottlenecks and start implementing instrumentation to actually be able to measure them
09:44:53 A reasonable goal for (2) is to produce some documentation
09:45:27 yup
09:45:40 I count masahito, oneswig interested in (1), amorin, belmoreira in (2)
09:45:59 I can join (1)
09:46:05 honestly I think we shouldn't limit ourselves explicitly to one cell. Different workloads and use cases may require small but multiple cells. I would prefer to consider the architecture choice/bottlenecks in terms of use case
09:46:21 join (1)
09:46:36 belmoreira: it's more one deployment than one cell, isn't it?
09:47:17 belmoreira: I agree we should not limit the scope of the SIG to one cell. But raising scaling limits within a single cell/cluster is useful for everyone imho
09:47:31 so it can be one of the SIG's lines of work
09:48:15 ttx of course I agree with that
09:48:45 oneswig yes, deployment considering the use-case
09:48:45 There will always be a point where you have to do multiple cells and clusters... and I agree we should also discuss that within this SIG
09:49:06 there is also the growth point of view, yep
09:49:50 Personally I can help both subgroups with their logistics and interactions with openstack project teams
09:50:06 Like to set up a repo and jobs to publish docs to a website etc
09:50:27 or grease the wheels with Oslo reviews etc
09:50:59 ttx that's great
09:51:38 Thanks.
09:51:43 OK, so in terms of immediate actions, and to make progress between now and the next meeting... maybe we can start two threads on the ML, one on each subject, to further refine plans
09:52:25 The goal generally being to have a more detailed plan to discuss at the next meeting for both areas
09:53:30 Or should we brainstorm on etherpads first before dropping it on the ML?
09:54:02 I recall there was an interesting discussion in Shanghai about prometheus endpoints being exposed by OpenStack services. I haven't seen any follow-up go by on that but it would be one interesting place to start.
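[Editorial aside, not part of the log: the "prometheus endpoints" idea mentioned above could, in its simplest form, look roughly like the minimal sketch below. It uses the Python prometheus_client library, which does exist; the port, the metric name and the collect_rpc_queue_depth() probe are hypothetical placeholders for illustration, not an existing OpenStack or Oslo interface.]

    # Minimal sketch, assuming prometheus_client is installed.
    # Metric name and the collect_rpc_queue_depth() helper are hypothetical;
    # a real service would hook into its own internals instead.
    import random
    import time

    from prometheus_client import Gauge, start_http_server

    # Hypothetical metric: pending messages on a service RPC queue.
    RPC_QUEUE_DEPTH = Gauge(
        "openstack_rpc_queue_depth",
        "Pending messages on a service RPC queue (illustrative only)",
        ["service"],
    )

    def collect_rpc_queue_depth(service):
        """Placeholder probe; a real one might query the message broker."""
        return random.randint(0, 100)

    def main():
        # Expose /metrics on an arbitrary port for a Prometheus server to scrape.
        start_http_server(9198)
        while True:
            for service in ("nova-conductor", "neutron-server"):
                RPC_QUEUE_DEPTH.labels(service=service).set(
                    collect_rpc_queue_depth(service))
            time.sleep(30)

    if __name__ == "__main__":
        main()

[A Prometheus server scraping http://<host>:9198/metrics would then give the bottleneck work in goal (1) concrete numbers to compare as a cluster grows.]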
09:55:04 One issue with discussing very early steps on the ML (compared to doing it on an etherpad) is that you'll get people outside the SIG starting to shoot down crazy ideas
09:55:05 I think it was in the context of a billing forum
09:55:13 and therefore that limits the discussion
09:55:58 What's your preference to further refine those two topics?
09:55:58 ttx yes :) the etherpad may be better to start the discussion
09:56:18 I trust your experience, etherpad is good
09:56:30 ok, so I'll post a summary of this meeting, and create two etherpads to refine those two topics
09:56:41 ok
09:57:00 #topic Next meeting
09:57:19 #action ttx to send meeting summary and create two etherpads to further refine the two initial goals
09:57:32 So as I said earlier, I won't be around at that time in two weeks
09:58:08 also some of us will have the end-of-year holidays after that
09:58:22 So I'm wondering if we should not set the next meeting to Dec 18
09:58:23 I am not available on December 11 either
09:58:38 and then January 8
09:58:45 either date works for me
09:59:04 then we can go back to every 2 weeks
09:59:12 +1
09:59:13 dec 18 works for me
09:59:15 both work for me
09:59:22 dec 18 is fine
09:59:34 have to go - thanks ttx & all - see you next time
09:59:41 18 dec works for me
09:59:48 Alright! Thanks everyone for attending
09:59:48 either day works
09:59:48 I will not be available on the 18th
10:00:49 I would propose to work on the scope and then we meet next year
10:01:02 belmoreira: will you be available to sync with amorin ahead of the meeting on Dec 18?
10:01:08 It's ok to miss a meeting
10:01:46 I will be off all week
10:01:56 arf
10:02:14 OK, let's continue that discussion on the ML, and free up people
10:02:25 #info Next meeting date to be confirmed
10:02:29 #endmeeting