16:06:26 #startmeeting Large Deployments Team August 2015 Meeting
16:06:28 Meeting started Thu Aug 27 16:06:26 2015 UTC and is due to finish in 60 minutes. The chair is VW_. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:06:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:06:31 The meeting name has been set to 'large_deployments_team_august_2015_meeting'
16:07:25 #topic Neutron and Segments
16:07:43 so, klindgren, I moved from one OpenStack event to another
16:08:00 anything happen after the LDT session where the Neutron folks dialed in?
16:12:10 guessing not :)
16:12:34 no - sorry, didn't think this started for another 20 min
16:12:55 'tis OK
16:13:18 I'm actually doing this while sitting in keynotes and OSSV15, so ...
16:13:18 trying to remember
16:13:53 I believe there was some discussion around scale, where scale in this case was relatively small
16:14:23 like 100 networks with neutron-client blowing up, and something about clients not handling multi-region well at all
16:14:34 when a common keystone was used
16:15:29 I believe all of those things were fixed upstream
16:15:42 but it points to a lack of functional testing in the gate
16:16:38 I think that was it
16:17:40 Oh - some stuff about trying to get a taxonomy group formed
16:18:30 so that people can communicate using the same language with the same definitions - since some words are overloaded in openstack/cloud
16:18:58 yeah - there was that too
16:19:07 not sure if anyone else from that conversation showed up
16:19:11 today, that is
16:19:16 Some discussion around lawful intercept as well
16:19:25 yeah
16:20:03 so, to me, that's part of the larger conversation of whether or not there needs to be a Public Cloud Team separate from the LDT
16:20:48 yeah - I think a number of large deployers *ARE* public clouds
16:21:26 I do think there should be a public cloud working group myself
16:21:41 and you can run an internal cloud like a public cloud as well.
16:21:47 we are public but not necessarily large
16:22:14 But I would be fine with scale issues being the LDT focus, and better multi-tenancy and other stuff being the focus of public cloud
16:23:51 that's fair
16:24:21 both are currently far from ideal :-)
16:24:59 indeed
16:25:01 Ok
16:25:02 I just ask that we try not to schedule LDT and public-cloud stuff together
16:25:24 #action VW_ talk to Tom F. about a Public Cloud working group of some sort
16:25:25 I think there will be a lot of people in common between both groups
16:25:37 there are a lot of common folks, gfa
16:25:41 and that is my struggle
16:26:45 I am also OK with expanding LDT to include both
16:26:57 serverascode: what blocks you from being comfortable joining the LDT and helping drive the public cloud issues too?
16:27:23 I find the LDT is mostly about large scale networks
16:27:54 as klindgren says, as long as the schedules don't overlap it will be OK
16:28:22 I think later both groups can merge back if they are too small/similar
16:28:23 there aren't very many public clouds afaik, so I would imagine it would be a small group
16:28:43 maybe experiment with 2 groups and if it does not work, go back to one group
16:30:06 those are fair comments
16:30:11 if we keep the same IRC channel and mailing list, splitting/joining the groups should be easy IMHO
16:30:26 serverascode: we are focused on networks a lot right now because of our interaction with Neutron, but we really try to think about scaling problems in general
16:31:22 yes gfa - LDT has pretty much settled on the "operators" - both list and channel - for pretty much everything
16:33:18 yeah, I might just be floating out here on my own with no real perspective :) I dunno. I'd just like a group of people with similar problems to talk to, and I don't think the LDT is that for me right now, but again I might be a one-off kind of situation
16:37:35 no, those are valid points serverascode
16:37:48 like I said, I can argue it either way
16:38:05 serverascode, gfa - are you coming to Tokyo?
16:38:19 i'm :(
16:38:21 :)
16:38:48 yeah, should be
16:39:18 ok - for now, let's do this: I'll see if we are going to get a good block of time for LDT (like we did in YVR)
16:39:35 if so, then why don't you join us and we'll talk this out for a bit in person
16:40:06 sounds good to me, thanks :)
16:40:30 sure!
16:40:41 I'll work on it and try to get an update out via the ML and the September LDT meeting
16:40:47 so, on that note
16:41:03 #topic LDT bug list for cells V1
16:41:22 ^ that is probably our next big collective push with devs
16:41:27 agree/disagree?
16:42:52 agree
16:43:54 Just want to try to consolidate the patchsets that we all carry
16:44:02 and try to get those upstreamed
16:44:18 since it looks like cells v1 is going to be around for at least 1+ years
16:44:37 yep
16:44:41 (I also want to try to do the same thing for general operators as well)
16:44:43 did we ever etherpad that?
16:45:00 we did - but only a little bit
16:45:02 yes, I think this goes beyond LDT
16:45:47 https://etherpad.openstack.org/p/PAO-LDT-cells-patches
16:45:54 sweet!
16:46:17 I think sam and I are the only ones that added anything
16:46:18 ;-)
16:46:28 *cough* VW *cough*
16:46:33 * VW_ duly notes the public shaming
16:47:11 This should be a big topic of ours in Tokyo
16:47:23 but we can do a lot of prework in the next two months
16:48:20 yea
16:48:41 ok, so I'll get my act together and get some updates on there
16:48:46 then we'll get it out to the list
16:49:00 CERN should probably have some they can put there too
16:49:01 I plan on starting another etherpad of patches that we carry for just business logic
16:49:34 That's a good idea
16:50:23 Those may be harder because all of us will probably end up with a few that are so specific they won't go upstream
16:50:46 right - but until we talk about it, we never know
16:50:56 4 people might be doing the same thing
16:51:25 like 3 people having patches to fix vif plugging under cells v1
16:52:15 but as someone else mentioned, it's possible too that instead of a patch we could add in a hook
16:52:27 if 4 people want to do something on x event but want to do different things
16:52:51 agreed
16:53:19 OK, let's put that at the top of the list for next month then
16:54:32 get some traction on the cells v1 list in the interim and potentially tee both of them up for a Tokyo working session
16:54:42 with the Public Cloud bit, of course
16:54:44 :)
16:55:00 wow - we may have just planned an agenda two months out
16:55:27 So the only other thing that I have is that we did some work to get a better handle on doing QoS/policing on vm traffic to prevent outbound ddos from vm's
16:55:39 ooh yeah
16:55:44 #topic open discussion
16:55:59 you should share some more details about that here for us to capture for all
16:56:21 but it was done via libvirt/qemu hooks to apply custom tc rules to a vm on startup/shutdown
16:56:49 interesting
16:57:26 but the requirement was to filter specific sized traffic to a lower rate, and attempt to be able to police traffic to a set threshold depending on flavor
16:57:48 what have the results been?
16:58:03 no detectable outbound ddos's so far
16:58:28 since we had a free beta going on, we had a fair amount of cruft that got in
16:58:45 what you did is cool
16:59:18 but again, it's 100% outside of openstack
17:00:13 we have talked about elasticsearch, kibana, etc. before. I don't think it's OT
17:00:13 but I figure other public cloud people probably have this issue and probably want a solution - and the default tunables in nova/neutron don't give you that ability yet
17:00:56 VW_ also no customer complaints - yet
17:01:10 Yeah, I'm really interested in where it goes. Like I told you when we chatted, our upstream Backbone folks usually detect and then our AUP suspends
17:01:19 we have used something similar on our VPS product for a few years now
17:01:22 always a good litmus test, klindgren
17:01:47 alright, well I'm happy to chat more, but I'll end the recording since we are at time
17:01:53 thank you all for joining.
17:02:04 sorry for being a bit distracted and conferencing while meeting at the same time
17:02:15 #end meeting
17:02:20 #endmeeting
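
For readers curious about the QoS/policing approach discussed around 16:55-16:57 (libvirt/qemu hooks applying custom tc rules when a VM starts or stops), the sketch below shows one way such a hook could work. It is a minimal, assumption-laden illustration, not the actual implementation described in the meeting: the per-flavor rate, burst size, and the idea of reading tap device names from the domain XML are all placeholders, and the libvirt hook calling convention is taken from the standard /etc/libvirt/hooks/qemu interface.

    #!/usr/bin/env python
    # /etc/libvirt/hooks/qemu  -- illustrative sketch only, not the deployment's real hook.
    #
    # libvirt normally invokes this hook as:
    #   qemu <guest_name> <operation> <sub-operation> -
    # with the full domain XML on stdin.  RATE and BURST are hypothetical
    # values; a real setup would derive them from flavor metadata.

    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    RATE = "200mbit"   # assumed per-flavor outbound cap
    BURST = "1m"       # assumed burst allowance

    def tap_devices(domain_xml):
        """Return the host-side tap device names found in the domain XML."""
        root = ET.fromstring(domain_xml)
        return [t.get("dev")
                for t in root.findall("./devices/interface/target")
                if t.get("dev")]

    def police_outbound(dev):
        # Traffic the guest transmits shows up on the host side of the tap
        # as ingress, so attach an ingress qdisc and a policing filter.
        subprocess.call(["tc", "qdisc", "add", "dev", dev, "handle", "ffff:", "ingress"])
        subprocess.call(["tc", "filter", "add", "dev", dev, "parent", "ffff:",
                         "protocol", "ip", "u32", "match", "u32", "0", "0",
                         "police", "rate", RATE, "burst", BURST, "drop", "flowid", ":1"])

    def clear(dev):
        # The tap may already be gone on shutdown; ignore the return code.
        subprocess.call(["tc", "qdisc", "del", "dev", dev, "ingress"])

    if __name__ == "__main__":
        operation = sys.argv[2]
        xml_desc = sys.stdin.read()
        for dev in tap_devices(xml_desc):
            if operation == "started":
                police_outbound(dev)
            elif operation == "stopped":
                clear(dev)

As noted in the meeting, the point of hooking at this layer is that it sits entirely outside nova/neutron: the default tunables there did not (at the time) offer per-flavor outbound policing, so the shaping is applied directly on the host interfaces as the guest comes up and torn down when it stops.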