15:00:43 #startmeeting satori
15:00:44 Meeting started Mon Sep 22 15:00:43 2014 UTC and is due to finish in 60 minutes. The chair is zns. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:47 The meeting name has been set to 'satori'
15:02:44 Hi! Who's here for Satori?
15:04:25 Hi samstav - anyone else here for Satori?
15:04:56 o/
15:05:31 Hi gondoi
15:05:58 Thanks for joining. Light agenda today, one topic, but it could get gamey.
15:06:06 #topic Action Items
15:06:13 No action items, I believe.
15:06:51 gondoi: could you help us review/push through https://review.openstack.org/#/q/status:open+project:stackforge/satori,n,z
15:07:09 samstav - could you check https://review.openstack.org/#/c/118843/
15:07:20 will do
15:07:43 #topic Affinity Check
15:08:33 We've received a request to be able to detect guests running on the same host within a tenant.
15:10:33 I think this would be a good opportunity to have satori provide that. We'd get some of our blueprints moving (ex. opinions: https://blueprints.launchpad.net/satori/+spec/poc-resource-opinions, plugins: https://blueprints.launchpad.net/satori/+spec/satori-plugin-support)
15:11:43 Cool. I was looking through the existing blueprints to pick out some relevant ones. That's them ^^
15:12:34 btw samstav is an imposter. I am the real sam
15:12:41 What would be a good first step?
15:12:44 "sam I am"
15:13:32 http://img2.wikia.nocookie.net/__cb20120523053951/seuss/images/e/e3/Samiam.jpg
15:14:22 It feels to me like this would be an operation on a tenant, whereas all our functionality right now operates on one address.
15:14:35 That could be the first thing to address.
15:15:47 So we would have to run a discovery that would return all resources under a tenant, not just one IP.
15:16:03 To perform the affinity check, you would be required to run satori with input == tenant id?
15:16:03 And then we would run some logic over that data to check for host affinity.
15:17:14 Yes. I think... should it be a different mode (ex. address/netloc discovery vs. tenant discovery) or should we have a generic "target", where the target could be a tenant id?
15:17:22 In other words, I think we would still want to run that opinion even if the input target is a netloc.
15:18:21 Hmm. Good question. Satori doesn't support a variety of input targets yet. Mode or modeless...
15:19:06 Opinions should probably have some logic in them that allows them to self-select and determine whether or not they should run on the data from a discovery. They could use the arguments from a discovery or the data.
15:20:48 Correct me if I'm wrong. I think the logic for the affinity check would have the same expense whether you want to know if VM1 shares a host with any of the tenant's other VMs, vs. whether you want to know if *any* VMs share a host w/ *any* of the tenant's VMs.
15:21:27 A con for modes is that we would have to add modes for different targets as we expand our scope, and then have to handle situations where modes might overlap. I prefer just one target input, and if we need to disambiguate it we could. For example "satori key=value, ..." like "satori discover tenantId=12345" or "satori discover ip=10.1.2.3" or, when combined, "satori discover tenantId=12345 address=https://myapp.com"
15:22:14 I think you're right. It would be the same expense. But could there be opinions in the future that would have higher expense?
15:23:26 The key/values are basically things we know; i.e. given these facts, find these unknowns.
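(For illustration: a minimal sketch of the key=value "target" parsing discussed above. The parse_targets helper is hypothetical, not part of satori today; tenantId and address are just the argument names used as examples in the discussion.)

```python
# Hypothetical sketch: turn free-form key=value "facts" from the
# command line into a dict of knowns. Not an existing satori API.
import sys


def parse_targets(argv):
    """Turn ['tenantId=12345', 'address=https://myapp.com'] into
    {'tenantId': '12345', 'address': 'https://myapp.com'}.
    """
    targets = {}
    for arg in argv:
        key, sep, value = arg.partition('=')
        if not sep:
            raise ValueError("expected key=value, got %r" % (arg,))
        targets[key] = value
    return targets


if __name__ == '__main__':
    # e.g. python parse_targets.py tenantId=12345 address=https://myapp.com
    print(parse_targets(sys.argv[1:]))
```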
15:23:51 I think there could be opinions in the future that have a higher expense, yes.
15:24:07 So, actually, maybe add a list of things to find: satori discover tenantId=12345 address=https://myapp.com find=dns-info,host,ip-address, ...
15:25:12 "I prefer just one target input, and if we need to disambiguate it we could." -- yes
15:25:26 What would the command look like for running the affinity check on a tenant?
15:25:52 Let's throw some ideas out there...
15:26:13 I was thinking the opinion would just "self-select" as you mentioned
15:27:18 *unless* opinions are turned off? Maybe... opinions are on by default, you turn them all off with something like `--no-opinions`, and, since they are on by default, you selectively exclude individual opinions by opinion ID
15:28:08 like grep has --exclude and --exclude-dir
15:28:46 I like that. We use the environment variables commonly set up for an openstack client today (ex. OS_AUTH, OS_TENANT). We could actually make this work with the simplest of commands: "satori" - just that. By default it would run on the configured tenant in the environment, collect all the resources, generate any opinions on them, and print them out.
15:29:23 That is great
15:29:33 Can't get simpler than that.
15:29:39 Yep
15:30:20 Not collect more than 100 servers unless explicitly asked to.
15:30:33 Does this mean satori doesn't need the `discover` command?
15:30:53 How many subcommands does it have now?
15:30:55 Sorry, overtyped "We'd have to handle tenants with many resources and rate limit satori."
15:31:51 I guess not. It's implied in our command today and we've not received a use case for any other action.
15:32:20 Ok
15:32:56 re: rate limiting, yes
15:33:12 What would the output look like for that? Maybe this is where a blueprint is needed...
15:34:09 By 'that', you mean satori infers running a "tenant-mode" discovery based on environment variables and applies the affinity check opinion?
15:34:24 Do we need a plugin for this, or, just as we support python-novaclient and openstack by default now, do we just include this opinion as a built-in? Plugins can come later when a request comes in that needs them.
15:34:44 Yes
15:34:48 cool
15:35:56 And how would resource limiting look: satori --max-resources=1000?
15:36:14 satori --max-servers=1000 --max-networks=20
15:36:25 Two options here.
15:36:28 Good question re: plugin. I guess it does look like satori will support "anything" openstack by default.
15:36:38 Yep.
15:37:22 And if we start supporting opinion plugins we could turn "built-in" ones on and off if it makes sense.
15:38:52 Cool. I think the `--max-xyz` option sounds good.
15:38:57 option(s)
15:39:46 I was trying to think a little bit about discovering similar information from different clouds and being able to feed that data into the same opinion. Not sure we need to talk about that right this minute.
15:39:49 I imagine we would loop over the service catalog and get all resources from all service endpoints... maybe in the future... but we're just doing servers using python-novaclient now. But for when that future comes, would we want a generic option that says "get maximum one thousand items from each endpoint, no matter what the endpoint is"?
15:40:28 Or should it be determined by the number of pages that come back from each service: --max-pages=20?
15:40:55 To continue that previous sentence... would/should the affinity check or other opinions need to be coupled with openstack? I personally would not know what is applicable on another cloud platform.
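(For illustration: a hedged sketch of the affinity check itself, grouping a tenant's discovered servers by host identifier and reporting hosts that run more than one. The find_host_affinity helper is hypothetical; 'hostId' is assumed here as the nova field name, where the discussion only says the "host" key.)

```python
# Hypothetical sketch of the affinity check opinion's core logic.
# Assumes each discovered server is a dict exposing 'id' and a host
# identifier under 'hostId' (as nova returns it); not satori code.
from collections import defaultdict


def find_host_affinity(servers):
    """Return {host_id: [server ids]} for hosts running 2+ servers."""
    by_host = defaultdict(list)
    for server in servers:
        host = server.get('hostId')
        if host:  # skip servers that don't expose host info
            by_host[host].append(server['id'])
    return {host: ids for host, ids in by_host.items() if len(ids) > 1}


# Example with fabricated data:
servers = [
    {'id': 'vm-1', 'hostId': 'abc'},
    {'id': 'vm-2', 'hostId': 'abc'},
    {'id': 'vm-3', 'hostId': 'def'},
]
print(find_host_affinity(servers))  # {'abc': ['vm-1', 'vm-2']}
```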
15:41:44 Yes. I think so. For now. We would check the "host" key. I don't know what that key is on other providers.
15:42:42 ok. Can different hosts in different regions have the same host id?
15:42:58 * samstav_ hopes not
15:42:59 We've adopted ohai-solo as our data model for data plane discovery. We could just adopt openstack as our model for cloud resources and, when we add support for other providers, transform their data into it.
15:43:23 cool
15:43:35 I don't know if we have the bandwidth to take on creating or moving to some other generic standard. I'm sure some exist...
15:44:40 Something like http://www.dmtf.org/sites/default/files/standards/documents/DSP0264_1.0.0.pdf
15:44:52 As a user I'd probably prefer --max-resources to --max-pages
15:45:25 I just don't feel we would be able to make any meaningful progress towards adopting that. We don't have the traction and resourcing yet...
15:45:48 Agreed. --max-resources means more. It's more certain.
15:46:39 Better link: #link http://www.dmtf.org/standards/cmwg
15:47:46 I suggest we punt on that, and if we have to rewrite a new version that anchors its schema in CIMI (or some other standard) we just take the hit then.
15:48:38 agreed
15:49:05 How does --max-resources get distributed across endpoints? Would --max-resources=100 be just 100 servers, or 80 servers and 20 networks?
15:49:23 #agreed - schema will not be CIMI or another standard, but openstack for now.
15:50:23 #agreed - tenant discovery will be the default discovery if no netloc/address is provided.
15:51:53 #agreed - one 'target' input and no "modes" supplied at the command line (ex. NOT mode=tenant-discovery)
15:51:53 re: distributing 'max-resources', I am not sure.
15:52:42 Maybe we just start with servers and log an error if we reach the maximum resources.
15:53:32 That sounds good
15:53:53 Sounds like we have enough alignment to create a blueprint. I'll create one.
15:54:09 #action zns create a blueprint for tenant affinity check
15:54:28 Cool. I think we are almost out of time, but I was hoping we could talk about this for a minute: https://wiki.openstack.org/wiki/Satori/OpinionsProposal
15:54:47 #topic Opinions Proposal
15:54:51 Go for it.
15:55:07 Can we state that the blueprint proposal still looks good?
15:55:39 We had an in-depth discussion about it a while back, I think. Did the discussion results make it into the proposal?
15:56:57 The opinion(s) data structure in the proposal is pretty loose as far as having some kind of mapping to the discovery results, primarily the discovered resources.
15:57:44 Oh, I didn't read far enough.
15:58:18 The opinions live inside of the resource data structure.
15:58:42 So basically, the opposite of what I said a few lines ago.
15:58:54 I think the proposal looks good. We just need to look up the discussion we had and make sure it includes the outcome of that discussion.
15:59:04 Ok.
15:59:09 Cool.
15:59:31 Thanks, samstav_ (not samstav the imposter)
15:59:46 Thank you, gondoi, for joining.
15:59:50 #endmeeting