16:06:26 <VW_> #startmeeting Large Deployments Team August 2015 Meeting
16:06:28 <openstack> Meeting started Thu Aug 27 16:06:26 2015 UTC and is due to finish in 60 minutes.  The chair is VW_. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:06:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:06:31 <openstack> The meeting name has been set to 'large_deployments_team_august_2015_meeting'
16:07:25 <VW_> #topic Neutron and Segments
16:07:43 <VW_> so, klindgren, I moved from one OpenStack event to another
16:08:00 <VW_> anything happen after the LDT session where the Neutron folks dialed in?
16:12:10 <VW_> guessing not :)
16:12:34 <klindgren> no - sorry, didn't think this started for another 20 min
16:12:55 <VW_> tis OK
16:13:18 <VW_> I'm actually doing this, and sitting in keynotes and OSSV15, so ....
16:13:18 <klindgren> trying to remember
16:13:53 <klindgren> I believe there was some discussion around scale, where scale in this case was relatively small
16:14:23 <klindgren> like 100 networks with neutron-client blowing up, and something about clients not handling multi-region well at all
16:14:34 <klindgren> when a common keystone was used
16:15:29 <klindgren> I believe all of those things were fixed upstream
16:15:42 <klindgren> but it points to a lack of functional testing in the gate
16:16:38 <klindgren> I think that was it
16:17:40 <klindgren> Oh - some stuff about trying to get a taxonomy group formed
16:18:30 <klindgren> so that people can communicate using the same language with the same definitions - since some words are overloaded in openstack/cloud
16:18:58 <VW_> yeah - there was that too
16:19:07 <VW_> not sure if anyone else from that conversation showed up
16:19:11 <VW_> today that is
16:19:16 <klindgren> Some discussion around lawful intercept as well
16:19:25 <VW_> yeah
16:20:03 <VW_> so, to me, that's part of the larger conversation of whether or not there needs to be a Public Cloud Team separate from the LDT
16:20:48 <klindgren> yea - I think a number of large deployers *ARE* public clouds
16:21:26 <serverascode> I do think there should be a public cloud working group myself
16:21:41 <klindgren> and you can run an internal cloud like a public cloud as well.
16:21:47 <serverascode> we are public but not necessarily large
16:22:14 <klindgren> But I would be fine with scale issues being LDT focus, and better multi-tenancy and other stuff being the focus of public cloud
16:23:51 <VW_> that's fair
16:24:21 <klindgren> both are currently far from ideal :-)
16:24:59 <VW_> indeed
16:25:01 <VW_> Ok
16:25:02 <klindgren> I just ask that we try not to schedule LDT and public-cloud stuff together
16:25:24 <VW_> #action VW_ talk to Tom F. about a Public Cloud working group of some sort
16:25:25 <gfa> i think there will be a lot of people in common between both groups
16:25:37 <VW_> there are a lot of common folks gfa
16:25:41 <VW_> and that is my struggle
16:26:45 <klindgren> I am also ok expanding LDT to include both
16:26:57 <VW_> serverascode: what blocks you from being comfortable joining the LDT and helping drive the public cloud issues too?
16:27:23 <serverascode> I find the LDT is mostly about large scale networks
16:27:54 <gfa> as klindgren says, as long as the schedules don't overlap it will be ok
16:28:22 <gfa> i think later both groups can merge back if they are too small/similar
16:28:23 <serverascode> there aren't very many public clouds afaik, so I would imagine it would be a small group
16:28:43 <gfa> maybe experiment with 2 groups and if that does not work, go back to one group
16:30:06 <VW_> those are fair comments
16:30:11 <gfa> if we keep the same IRC channel and mailing list, splitting/joining the groups should be easy IMHO
16:30:26 <VW_> serverascode: we are focused on networks a lot right now because of our interaction with Neutron, but we really try to think about scaling problems in general
16:31:22 <VW_> yes gfa - LDT has pretty much settled on the "operators" - both list and channel - for pretty much everything
16:33:18 <serverascode> yeah, I might just be floating out here on my own with no real perspective :) I dunno. I'd just like a group of ppl with similar problems to talk about, and I don't think that is the LDT for me right now, but again I might be a one-off kinda situation
16:37:35 <VW_> no, those are valid points serverascode
16:37:48 <VW_> like I said, I can argue it either way
16:38:05 <VW_> serverascode, gfa - are you coming to Tokyo?
16:38:19 <gfa> i'm :(
16:38:21 <gfa> :)
16:38:48 <serverascode> yeah should be
16:39:18 <VW_> ok - for now, let's do this.  I'll see if we are going to get a good block of time for LDT (like we did in YVR)
16:39:35 <VW_> if so, then why don't you join us and we'll talk this out for a bit in person
16:40:06 <serverascode> sounds good to me, thanks :)
16:40:30 <VW_> sure!
16:40:41 <VW_> I'll work on it and try to get an update out via the ML and the September LDT meeting
16:40:47 <VW_> so, on that note
16:41:03 <VW_> #topic LDT bug list for cells V1
16:41:22 <VW_> ^ that is probably our next big collective push with devs
16:41:27 <VW_> agree/disagree?
16:42:52 <klindgren> agree
16:43:54 <klindgren> Just want to try to consolidate the patchsets that we all carry
16:44:02 <klindgren> and try to get those upstreamed
16:44:18 <klindgren> since it looks like cells v1 is going to be around for at least another year or more
16:44:37 <VW_> yep
16:44:41 <klindgren> (I also want to try to do the same thing for general operators as well)
16:44:43 <VW_> did we ever etherpad that?
16:45:00 <klindgren> we did - but only a little bit
16:45:02 <VW_> yes, I think this goes beyond LDT
16:45:47 <klindgren> https://etherpad.openstack.org/p/PAO-LDT-cells-patches
16:45:54 <VW_> sweet!
16:46:17 <klindgren> I think sam and I are the only ones that added anything
16:46:18 <klindgren> ;-)
16:46:28 <klindgren> *cough* VW *cough*
16:46:33 * VW_ duly notes the public shaming
16:47:11 <VW_> This should be a big topic of ours in Tokyo
16:47:23 <VW_> but we can do a lot of prework in the next two months
16:48:20 <klindgren> yea
16:48:41 <VW_> ok, so I'll get my act together and get some updates on there
16:48:46 <VW_> then we'll get it out to the list
16:49:00 <VW_> Cern should probably have some they can put there too
16:49:01 <klindgren> I plan on starting another etherpad of patches that we have for just business logic
16:49:34 <VW_> That's a good idea
16:50:23 <VW_> Those may be harder because all of us will probably end up with a few that are so specific, they won't go upstream
16:50:46 <klindgren> right - but until we talk about it - we never know
16:50:56 <klindgren> 4 people might be doing the same thing
16:51:25 <klindgren> like 3 people having patches to fix vif plugging under cells v1
16:52:15 <klindgren> but as someone else mentioned, it's possible too that instead of a patch we could add in a hook
16:52:27 <klindgren> if 4 people want to do something on x event but want to do different things
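For context on the hook idea above: Nova at the time shipped an optional nova.hooks mechanism that loads operator-supplied classes from the 'nova.hooks' entry-point namespace and calls them before and after selected operations, which is roughly the kind of shared extension point being suggested here instead of four separate downstream patches. Below is a minimal sketch assuming that mechanism and its documented pre/post calling convention; the class, package name, and logging are illustrative only, not an actual LDT patch.

```python
# Hypothetical hook module, e.g. mypkg/hooks.py. Assumes Nova's (later
# deprecated) nova.hooks mechanism: classes registered under the
# 'nova.hooks' entry-point namespace get pre/post callbacks around the
# hooked Nova call.
import logging

LOG = logging.getLogger(__name__)


class InstanceEventLogger(object):
    """Illustrative hook: observe an instance event instead of patching Nova."""

    def pre(self, *args, **kwargs):
        # Called before the hooked operation, with its original arguments.
        LOG.debug("pre: args=%s kwargs=%s", args, kwargs)

    def post(self, rv, *args, **kwargs):
        # Called after the hooked operation; rv is its return value.
        LOG.debug("post: rv=%s", rv)
```

Registration would live in the deploying package's setup.cfg, e.g. an entry point such as `create_instance = mypkg.hooks:InstanceEventLogger` under `[entry_points] nova.hooks` (the hook name used here is an assumption).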
16:52:51 <VW_> agreed
16:53:19 <VW_> OK, let's put that at the top of the list for next month then
16:54:32 <VW_> get some traction on the cells v1 list in the interim and potentially tee both of them up for a Tokyo working session
16:54:42 <VW_> with the Public Cloud bit, of course
16:54:44 <VW_> :)
16:55:00 <VW_> wow - we may have just planned an agenda two months out
16:55:27 <klindgren> So the only other thing that I have is that we did some work to get a better handle on doing QoS/policing on VM traffic to prevent outbound DDoS from VMs
16:55:39 <VW_> ooh yeah
16:55:44 <VW_> #topic open discussion
16:55:59 <VW_> you should share some more details about that here for us to capture for all
16:56:21 <klindgren> but it was done via libvirt/qemu hooks to apply custom tc rules to a vm on startup/shutdown
16:56:49 <VW_> interesting
16:57:26 <klindgren> but the requirement was to filter specific sized traffic to a lower rate, and attempt to be able to police traffic to a set threshold depending on flavor
16:57:48 <VW_> what have the results been?
16:58:03 <klindgren> no detectable outbound DDoSes so far
16:58:28 <klindgren> since we had a free beta going on - we had a fair amount of cruft that got in
16:58:45 <gfa> what you did is cool
16:59:18 <klindgren> but again, it's 100% outside of openstack
17:00:13 <gfa> we have talked about elasticsearch, kibana, etc before. I don't think it's OT
17:00:13 <klindgren> but I figure other public cloud people probably have this issue and probably want a solution - and the default tuneables in nova/neutron don't give you that ability yet
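To make the libvirt/qemu hook approach described above concrete, here is a minimal sketch of the kind of script involved, assuming it is installed as /etc/libvirt/hooks/qemu (which libvirt invokes with the guest name and an operation such as "started" or "stopped"). The tap-device naming, rate, and burst values are placeholders, and the size-specific filtering and per-flavor thresholds klindgren mentions are not shown.

```python
#!/usr/bin/env python
# Sketch of a libvirt qemu hook that rate-polices a guest's outbound traffic.
# Placeholder values throughout; a real deployment would parse the domain XML
# (passed on stdin) to find the tap device and pick rates per flavor.
import subprocess
import sys


def police(tap, rate="100mbit", burst="1m"):
    # Traffic leaving the VM shows up as ingress on the host-side tap device,
    # so an ingress policer there caps the guest's outbound rate.
    subprocess.call(["tc", "qdisc", "add", "dev", tap, "ingress"])
    subprocess.call([
        "tc", "filter", "add", "dev", tap, "parent", "ffff:",
        "protocol", "ip", "u32", "match", "u32", "0", "0",
        "police", "rate", rate, "burst", burst, "drop", "flowid", ":1",
    ])


def unpolice(tap):
    subprocess.call(["tc", "qdisc", "del", "dev", tap, "ingress"])


if __name__ == "__main__":
    # libvirt calls this hook as: qemu <guest_name> <operation> <sub-op> ...
    guest, operation = sys.argv[1], sys.argv[2]
    tap = "tap-%s" % guest  # placeholder; real scripts derive this from the domain XML
    if operation == "started":
        police(tap)
    elif operation == "stopped":
        unpolice(tap)
```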
17:00:56 <klindgren> VW_ also no customer complaints - yet
17:01:10 <VW_> Yeah, I'm really interested in where it goes.  Like I told you when we chatted, our upstream Backbone folks usually detect and then our AUP suspends
17:01:19 <klindgren> we have used something similar on our VPS product for a few years now
17:01:22 <VW_> always a good litmus test, klindgren
17:01:47 <VW_> alright, well I'm happy to chat more, but I'll end the recording since we are at time
17:01:53 <VW_> thank you all for joining.
17:02:04 <VW_> sorry for being a bit distracted and conferencing while meeting at the same time
17:02:15 <VW_> #end meeting
17:02:20 <VW_> #endmeeting