*** xtoddx has left #openstack | 00:11 | |
*** hisak has quit IRC | 00:16 | |
*** jakedahn has quit IRC | 00:41 | |
*** jkelly has joined #openstack | 00:41 | |
*** jakedahn has joined #openstack | 00:42 | |
*** zooko has quit IRC | 00:53 | |
*** maplebed has quit IRC | 00:55 | |
*** burris has quit IRC | 01:04 | |
*** joearnold has quit IRC | 01:05 | |
*** jkelly has quit IRC | 01:08 | |
*** aliguori has quit IRC | 01:26 | |
*** mtaylor has quit IRC | 01:33 | |
*** burris has joined #openstack | 01:38 | |
*** zooko` has joined #openstack | 01:43 | |
*** e1mer has joined #openstack | 01:48 | |
*** ArdRigh has quit IRC | 01:59 | |
*** miclorb has quit IRC | 02:09 | |
*** zooko` has quit IRC | 02:15 | |
*** zooko` has joined #openstack | 02:17 | |
*** Cybo-mobile has joined #openstack | 02:29 | |
*** jkelly has joined #openstack | 02:30 | |
*** adjohn has joined #openstack | 02:34 | |
*** Cybodog has quit IRC | 02:34 | |
*** mtaylor has joined #openstack | 02:35 | |
*** ChanServ sets mode: +v mtaylor | 02:35 | |
*** jkelly has left #openstack | 02:36 | |
*** silassewell has quit IRC | 02:41 | |
*** tobym has joined #openstack | 02:44 | |
*** Glacee has quit IRC | 02:52 | |
*** zooko` is now known as zooko | 02:52 | |
*** littleidea has quit IRC | 02:56 | |
*** miclorb_ has joined #openstack | 03:10 | |
*** littleidea has joined #openstack | 03:14 | |
*** zooko has quit IRC | 03:17 | |
*** deshantm_cosi has joined #openstack | 03:20 | |
deshantm_cosi | anybody have a recommended distro for installing openstack? | 03:21 |
deshantm_cosi | I'm setting up one or more systems for testing this week | 03:21 |
*** chewbranca_ has quit IRC | 03:22 | |
creiht | deshantm_cosi: most of us use ubuntu server, so that will probably give you the best results | 03:23 |
deshantm_cosi | great | 03:23 |
deshantm_cosi | thanks | 03:23 |
*** vvuksan has joined #openstack | 03:30 | |
*** vvuksan has quit IRC | 03:30 | |
*** johndoy__ has joined #openstack | 03:32 | |
*** johndoy_ has quit IRC | 03:36 | |
*** kashyapc has joined #openstack | 03:44 | |
*** kashyapc has quit IRC | 04:01 | |
*** kashyapc has joined #openstack | 04:01 | |
*** adjohn_ has joined #openstack | 04:02 | |
*** adjohn has quit IRC | 04:04 | |
*** adjohn_ is now known as adjohn | 04:04 | |
*** Cybo-mobile has quit IRC | 04:08 | |
*** zooko has joined #openstack | 04:13 | |
*** zooko has quit IRC | 04:25 | |
*** omidhdl has joined #openstack | 04:29 | |
*** zooko has joined #openstack | 04:31 | |
*** dele_ted has joined #openstack | 04:39 | |
*** e1mer has quit IRC | 04:42 | |
*** bitmonk has quit IRC | 04:43 | |
*** zooko has quit IRC | 04:47 | |
*** bitmonk has joined #openstack | 04:48 | |
*** miclorb_ has quit IRC | 04:49 | |
*** miclorb has joined #openstack | 04:49 | |
*** miclorb__ has joined #openstack | 04:50 | |
*** miclorb has quit IRC | 04:54 | |
*** dele_ted has quit IRC | 04:55 | |
*** f4m8_ is now known as f4m8 | 04:57 | |
*** arcane has quit IRC | 05:01 | |
*** zooko has joined #openstack | 05:12 | |
*** rbergeron has quit IRC | 05:22 | |
*** allsystemsarego has joined #openstack | 05:55 | |
*** ibarrera has joined #openstack | 05:57 | |
*** mtaylor has quit IRC | 05:59 | |
*** sirp1 has quit IRC | 06:01 | |
*** zooko has quit IRC | 06:07 | |
*** zooko has joined #openstack | 06:11 | |
*** mtaylor has joined #openstack | 06:17 | |
*** ChanServ sets mode: +v mtaylor | 06:17 | |
*** burris has quit IRC | 06:20 | |
uvirtbot | New bug: #637805 in nova "keypairs shouldn't be in LDAP" [Low,New] https://launchpad.net/bugs/637805 | 06:21 |
*** tobym has quit IRC | 06:30 | |
*** zooko has quit IRC | 06:34 | |
*** abecc has quit IRC | 06:44 | |
*** debauer_ has joined #openstack | 06:46 | |
*** debauer__ has quit IRC | 06:46 | |
*** zooko has joined #openstack | 06:46 | |
*** kashyapc has quit IRC | 06:49 | |
uvirtbot | New bug: #637818 in swift "Add "Hand off" visibility to swift-get-nodes" [Undecided,New] https://launchpad.net/bugs/637818 | 06:57 |
*** brd_from_italy has joined #openstack | 07:01 | |
*** dele_ted has joined #openstack | 07:03 | |
*** calavera has joined #openstack | 07:07 | |
soren | vishy: As I understood it, the scoped session would be thread local. | 07:32 |
*** debauer_ has quit IRC | 07:33 | |
*** debauer_ has joined #openstack | 07:34 | |
soren | vishy: I've always thought that the way an ORM automagically turned these references into object attributes was one of the most convenient features of having an ORM to begin with. | 07:36 |
soren | vishy: ...but I guess we can revisit it for Austin+1. | 07:36 |
*** kashyapc has joined #openstack | 07:42 | |
*** jakedahn has quit IRC | 08:03 | |
*** adjohn has quit IRC | 08:11 | |
*** jkakar has joined #openstack | 08:13 | |
*** dele_ted has quit IRC | 08:34 | |
*** michalis has joined #openstack | 08:37 | |
*** omidhdl has quit IRC | 08:44 | |
*** miclorb__ has quit IRC | 08:46 | |
*** miclorb has joined #openstack | 08:46 | |
*** DubLo7 has quit IRC | 08:47 | |
*** miclor___ has joined #openstack | 08:49 | |
*** omidhdl has joined #openstack | 08:49 | |
*** miclorb has quit IRC | 08:50 | |
*** littleidea has quit IRC | 09:21 | |
*** zheng_li has joined #openstack | 09:26 | |
*** littleidea has joined #openstack | 09:40 | |
*** jkakar has quit IRC | 09:47 | |
*** littleidea has quit IRC | 09:49 | |
*** ctennis has quit IRC | 10:25 | |
*** ctennis has joined #openstack | 10:38 | |
*** ctennis has joined #openstack | 10:38 | |
*** omidhdl has left #openstack | 10:44 | |
*** kashyapc has quit IRC | 11:06 | |
*** vvuksan has joined #openstack | 11:09 | |
*** vvuksan1 has joined #openstack | 11:12 | |
*** vvuksan has quit IRC | 11:13 | |
*** vvuksan1 has quit IRC | 11:17 | |
*** vvuksan has joined #openstack | 11:17 | |
*** kashyapc has joined #openstack | 11:18 | |
*** michalis has quit IRC | 11:20 | |
*** vvuksan has quit IRC | 11:22 | |
*** arcane has joined #openstack | 11:24 | |
*** DubLo7 has joined #openstack | 11:29 | |
*** vvuksan has joined #openstack | 11:35 | |
*** DubLo7 has quit IRC | 11:38 | |
*** vvuksan has quit IRC | 11:40 | |
*** kashyapc has quit IRC | 11:43 | |
*** vvuksan has joined #openstack | 11:48 | |
*** tobym has joined #openstack | 12:00 | |
*** michalis has joined #openstack | 12:10 | |
*** miclor___ has quit IRC | 12:18 | |
*** Podilarius has joined #openstack | 12:21 | |
*** vvuksan has quit IRC | 12:22 | |
*** DubLo7 has joined #openstack | 12:24 | |
*** DubLo71 has joined #openstack | 12:28 | |
*** DubLo7 has quit IRC | 12:30 | |
*** tobym has quit IRC | 12:39 | |
*** tobym has joined #openstack | 12:40 | |
*** kuttan_1 has joined #openstack | 12:41 | |
*** tobym has quit IRC | 12:42 | |
*** kuttan_1 has quit IRC | 12:44 | |
*** aliguori has joined #openstack | 12:59 | |
*** jkakar has joined #openstack | 13:04 | |
*** tobym has joined #openstack | 13:06 | |
*** jkakar has quit IRC | 13:09 | |
*** jkakar has joined #openstack | 13:09 | |
*** hornbeck has quit IRC | 13:12 | |
*** jkakar has quit IRC | 13:15 | |
*** jkakar_ has joined #openstack | 13:15 | |
*** Cybodog has joined #openstack | 13:20 | |
*** klord has joined #openstack | 13:20 | |
*** gundlach has joined #openstack | 13:26 | |
*** burris has joined #openstack | 13:34 | |
*** gundlach has quit IRC | 13:35 | |
*** gundlach has joined #openstack | 13:36 | |
*** burris has joined #openstack | 13:36 | |
gundlach | when's today's release meeting? | 13:39 |
dendrobates | gundlach: 4pm cst, check the wiki for local times | 13:41 |
dendrobates | http://goo.gl/3ZYo | 13:41 |
gundlach | dendrobates: ty | 13:46 |
*** rnewson has joined #openstack | 13:47 | |
*** zooko has quit IRC | 13:50 | |
*** jkakar_ has quit IRC | 13:51 | |
creiht | http://djangozoom.com/ | 13:55 |
*** DubLo71 has quit IRC | 13:56 | |
*** dendrobates is now known as dendro-afk | 13:59 | |
soren | creiht: Anyone we know? | 13:59 |
creiht | A friend of mine, thought some people might find it of interest | 14:00 |
soren | Could be. I've requested an invite. It's always interesting to see how people use our stuff. | 14:01 |
*** mdomsch has joined #openstack | 14:04 | |
*** dendro-afk is now known as dendrobates | 14:05 | |
*** f4m8 is now known as f4m8_ | 14:08 | |
* soren pauses | 14:09 | |
*** tobym has quit IRC | 14:09 | |
*** npmap has joined #openstack | 14:11 | |
*** tobym has joined #openstack | 14:16 | |
dendrobates | https://launchpad.net/sprints/ods-b | 14:20 |
blamar | creiht: thanks for the link! invite requested | 14:34 |
*** pharkmillups has joined #openstack | 14:40 | |
*** jkakar has joined #openstack | 14:43 | |
*** ded has joined #openstack | 14:44 | |
*** sirp1 has joined #openstack | 14:50 | |
*** hornbeck has joined #openstack | 14:52 | |
*** amscanne has joined #openstack | 14:52 | |
dendrobates | register for the upcoming Openstack design summit. https://launchpad.net/sprints/ods-b/+attend | 14:57 |
*** littleidea has joined #openstack | 15:01 | |
*** zooko has joined #openstack | 15:01 | |
*** DubLo7 has joined #openstack | 15:10 | |
*** mtaylor has quit IRC | 15:13 | |
*** Cybodog has quit IRC | 15:16 | |
*** anm_ has quit IRC | 15:20 | |
*** zooko has quit IRC | 15:24 | |
*** anm_ has joined #openstack | 15:24 | |
gundlach | dendrobates: i've got some code i had to write for OpenStack that is independent enough to be a project on pypi (a ratelimiting package). should i upload it to pypi as ratelimiting and then consume it in OpenStack, or must it be kept in OpenStack? | 15:29 |
gundlach | [i'm leaning toward the former, and pvo agrees, but suggested checking w/ you too] | 15:30 |
dendrobates | gundlach: either is fine with me. But we would need to package it for fedora and ubuntu to make sure it is easy to use as a dependency | 15:30 |
gundlach | dendrobates: aren't we using pip to install python dependencies? | 15:31 |
gundlach | e.g. we also require webob, and i thought we just install that via pip | 15:31 |
dendrobates | we can use that, but distros frown on that and want packages so their installers/updaters know the state of the system | 15:32 |
* gundlach is not a packaging expert by any means | 15:32 | |
dendrobates | is this going into Austin? | 15:32 |
gundlach | y | 15:32 |
dendrobates | can we ship a copy with austin? | 15:32 |
dendrobates | and pull it out in Bexar | 15:33 |
gundlach | we can do whatever is best; it's just a 2 or 3 file package | 15:33 |
gundlach | certainly | 15:33 |
dendrobates | Bexar == Austin++ | 15:33 |
gundlach | right | 15:33 |
gundlach | does this imply that all the other python modules i've been requiring lately (eventlet, webob, routes) need to also be vetted? I have just been adding them to pip-requires willy-nilly | 15:34 |
gundlach | ratelimiting, once i uploaded it to pypi, would just be a 4th module in the same class as those 3 | 15:34 |
dendrobates | I think that is fine for now, but if we want to be shipped in a distro, by default, we will need to make sure all our dependencies are packaged. | 15:35 |
dendrobates | the distros will take care of it for us, mostly | 15:35 |
gundlach | ok, so 'fine for now' means i don't need to go make sure eventlet/webob/routes are already packaged for Fedora+Ubuntu? | 15:36 |
dendrobates | yeah, not your problem. the package maintainers will handle it. | 15:39 |
dendrobates | if we make our own packages, we just need to be sure we let them know | 15:40 |
gundlach | dendrobates: ok. | 15:40 |
dendrobates | $(package_maintainers) == soren | 15:40 |
gundlach | [interestingly, i just noticed on PyPI that Ian Bicking *just* released WebOb 1.0 within the last few minutes] | 15:41 |
gundlach | ok, thanks :) i'll bug soren when i release ratelimiting to pypi after austin. | 15:41 |
*** smithpg1002 has joined #openstack | 15:42 | |
creiht | gundlach: do you have a link to this rate limiting code? | 15:42 |
* creiht is curious | 15:42 | |
gundlach | creiht: not yet, i was holding off on setting up a google code project, etc. | 15:42 |
creiht | We need better rate limiting code in swift and wanted to see what you have :) | 15:42 |
gundlach | dendrobates is of the opinion that i should hold off until Bexar on putting it in pypi, so i'll be adding it to openstack | 15:42 |
gundlach | creiht: ok, i'll check whether jaypipes has made openstack.common yet, and if so i'll drop it in there. | 15:43 |
creiht | gundlach: is it rate limiting in terms of refusing requests after a certain rate, or does it slow down requests after a certain rate? | 15:43 |
*** calavera has quit IRC | 15:43 | |
*** dele_ted has joined #openstack | 15:43 | |
gundlach | creiht: refusing, and telling you how many seconds to wait before retrying | 15:43 |
gundlach | (the Rackspace API needs this kind of functionality) | 15:44 |
jaypipes | gundlach: yes, I have :) | 15:44 |
gundlach | here you go: http://paste.openstack.org/show/24/ | 15:44 |
gundlach | jaypipes: ohai | 15:44 |
creiht | k | 15:44 |
gundlach | jaypipes: great, it's in trunk? i'll drop my code in it. | 15:44 |
creiht | we need more of a slow down requests if they are doing too much | 15:44 |
jaypipes | gundlach: though it's still not in Nova trunk, no | 15:44 |
gundlach | creiht: what does 'slow down' mean? | 15:44 |
gundlach | creiht: a request comes in -- do you want to queue it in memory? | 15:44 |
jaypipes | gundlach: so may be best to package it into pypi for right now.. | 15:44 |
gundlach | jaypipes: dendro vetoed that. | 15:44 |
creiht | so an example may be, wait a couple of milliseconds before returning the response | 15:45 |
dendrobates | gundlach: I didn't veto it. | 15:45 |
jaypipes | gundlach: ah, ok. well, there's not much to openstack.common yet, because I'm still working on proposals... but certainly you could add it there. I'd still need packaging help from mtaylor though :( | 15:45 |
creiht | in addition to an actual cap like what you are talking about (we already have that in swift) | 15:45 |
dendrobates | I just said can we ship a copy too, for this release | 15:45 |
gundlach | dendrobates: not to imply dictatorship, just that you said it would be better to hold off | 15:45 |
gundlach | oh, ship a copy *too*. i see. eh, i think i'll just wait until Bexar, so i don't have to fork the code. no biggie. | 15:46 |
dendrobates | we are past the ubuntu freeze so adding new dependencies is hard | 15:46 |
gundlach | creiht: hmm, lemme think for a sec about how i'd add that | 15:46 |
gundlach | creiht: would you be using this as WSGI middleware? | 15:47 |
creiht | I would like to if possible | 15:47 |
gundlach | creiht: and you want to say 'each user may make no more than N requests per minute, and if they try to we'll start delaying them?' | 15:47 |
gundlach | what's the algorithm for how much to delay them? | 15:47 |
gholt | It'd need some modification to put rate limiting info into memcache as well. | 15:48 |
gundlach | (you could do a quick-and-dirty version by just sleeping the # of seconds until they're allowed to make a request) | 15:48 |
gundlach | gholt: i'm not using memcache -- i wrote a simple WSGI app instead, so you could pull off rate limiting in one request rather than 2 or 3 | 15:49 |
gundlach | makes atomicity easier as well | 15:49 |
gholt | Well, we can't shard on user, for instance. | 15:49 |
gundlach | gholt: yep, you can -- there's a note in the code i pasted above which talks about that | 15:49 |
creiht | gundlach: We need to be able to rate limit across all the proxies | 15:49 |
gholt | Oh, this is an app, not middleware, sorry. | 15:49 |
gundlach | just make a WSGI app that shards on username and fwds to the right backend. (which i didn't write because i'm pretty positive it exists in the wild) | 15:50 |
creiht | hrm | 15:50 |
gundlach | gholt: yeah, any middleware would be specific to an application, so i didn't include middleware in the package which i expected to ship to pypi | 15:50 |
*** ambo has joined #openstack | 15:50 | |
gholt | Yeah, sorry, I started off on the complete wrong track. :) | 15:50 |
creiht | hehe | 15:50 |
gundlach | creiht: so you put middleware in each one which calls out sideways to the WSGI app running on a separate server | 15:50 |
creiht | that doesn't sound web scale :) | 15:51 |
gholt | I think it'd be better to have middleware that shares state in memcache servers. | 15:51 |
gundlach | creiht: hmm, i thought carefully about it to make it scale properly. what sounds wrong? | 15:51 |
creiht | gundlach: how do you scale the rate limiting service? | 15:51 |
gundlach | gholt: i started down that path, with a rate limiter object that has different backends -- Local, list_of_memcacheds, redis -- but thought the current approach was better | 15:51 |
gundlach | creiht: shard by username. if you need 10 times the capacity that one server can support, then you: | 15:52 |
gundlach | start 10 WSGI apps, and put 1 or more proxies in front which are stateless but shard by incoming username. | 15:52 |
gundlach | whoever wishes to consume the rate limiting service hits one of those N stateless proxies, which fwds to the right shard | 15:53 |
gundlach | if a shard goes down, the proxy stops rate limiting those users until the shard is replaced | 15:53 |
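A minimal sketch of the sharding gundlach describes, assuming a hypothetical list of back-end rate-limit apps; the URLs, function name, and hashing choice are illustrative, not taken from the pasted code:

```python
import hashlib

# Hypothetical back ends: the ten WSGI rate-limit apps sitting behind the
# stateless proxies in gundlach's example.
BACKENDS = ['http://ratelimit-%d.internal:8080' % i for i in range(10)]

def shard_for(username):
    # Hash the username so every request for a given user is always
    # forwarded to the same rate-limit shard.
    digest = hashlib.md5(username.encode('utf-8')).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]
```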
* creiht ponders | 15:53 | |
* gundlach isn't sure if that was clear -- ask me to say again differently if needed | 15:54 | |
creiht | I understand | 15:54 |
gholt | All that should work. I don't think it'd be a great fit for Swift where we already have spread proxy servers using a ring of memcache servers. | 15:54 |
gundlach | at a higher level -- if one machine can't scale, you can shard to many machines, and put frontends in front of the shards which fwd to the correct shard. | 15:54 |
gholt | Plus, we'd prefer not to have to manage additional machines and services if at all possible. | 15:55 |
gundlach | gholt: what do you mean 'spread proxy servers'? and does a 'ring' of memcacheds mean something different than just a bunch of them running in a cluster? | 15:55 |
* creiht doesn't want to manage a rate limiting cluster on top of the swift cluster that he already has to manage :) | 15:55 | |
gundlach | gholt, creiht: yeah, so here's the tradeoff that made me think another service was worth it: | 15:55 |
gundlach | if you use memcached to store your rate limiting info (a list of timestamps per action per user), then each time a request comes in from the web, you have to make multiple round trips to memcached | 15:56 |
gundlach | and i don't know that you can guarantee atomicity (e.g. if 5 requests come in on different proxies, they might trip over each other writing to memcached) | 15:56 |
gundlach | i didn't see immediately how to reuse memcached or otherwise, while still making a correct implementation (and not making lots of hops, sending a potentially large list of timestamps across the wire twice per request) | 15:57 |
gholt | We were using incr with time-based-key with memcache. | 15:57 |
gundlach | gholt: how's that work? e.g. i limit to 3 requests per minute | 15:58 |
gholt | It does mean that you can go over for a given second time span if that second overlaps two actual seconds. | 15:58 |
gholt | Oh, well, we're talking about limiting at 100s 100s per second. :) | 15:58 |
gholt | That was supposed to be 100s or 1000s, hehe | 15:58 |
gundlach | oh, gotcha. yes, at that rate you should use counters and memcache would suffice :) | 15:59 |
creiht | haha | 15:59 |
creiht | :) | 15:59 |
gundlach | um, hm how do you make it atomic? | 15:59 |
gundlach | i assume you're essentially pulling a counter, adding one to it, writing it back? | 15:59 |
gholt | Memcache incr is atomic (supposedly) | 15:59 |
gundlach | ah, i hadn't heard of that | 16:00 |
*** brd_from_italy has quit IRC | 16:00 | |
jero | redis is | 16:01 |
gundlach | jero: right -- though i didn't like the idea of keeping rate limiting counters in a persistent store :) | 16:02 |
*** abecc has joined #openstack | 16:05 | |
*** zheng_li has quit IRC | 16:12 | |
*** xtoddx has joined #openstack | 16:17 | |
*** tmarble_ is now known as tmarble | 16:17 | |
*** dele_ted has quit IRC | 16:22 | |
*** vvuksan has joined #openstack | 16:24 | |
*** maplebed has joined #openstack | 16:24 | |
*** jkakar has quit IRC | 16:27 | |
redbo | Did you mention swift needs to be able to rate limit some types of requests differently? If we used a rate limiting service, it'd probably be better if it'd just accept a key instead of semantically defining it as a username. | 16:29 |
*** ibarrera has quit IRC | 16:30 | |
*** rlucio has joined #openstack | 16:31 | |
gholt | Well, he's got an action_name parameter that can be used for that. | 16:32 |
redbo | oh, okay | 16:32 |
*** pharkmillups has quit IRC | 16:32 | |
*** vvuksan has quit IRC | 16:32 | |
gundlach | heh, i had just hopped to this window to ask if you guys need to rate limit on key, as does nova :) | 16:33 |
*** joearnold has joined #openstack | 16:33 | |
gundlach | gholt, creiht: so i'm reconsidering my approach with the knowledge that memcache supports atomic incr. can you think of a way to support rolling limits [e.g. 100 reqs/day] without storing 100 timestamps? | 16:35 |
redbo | I'm assuming that's the account name we'd rate limit on, not the authenticated user. | 16:35 |
*** brd_from_italy has joined #openstack | 16:35 | |
gundlach | redbo: username is just an arbitrary string that defines a separate set of ratelimiting buckets. | 16:35 |
gundlach | redbo: if swift supports multiple users per account, then yes you'd probably want the accountname. | 16:36 |
gholt | gundlach: You just incr a timestamped-named-key with a timeout of a second (or two to be safe). | 16:36 |
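A rough sketch of the timestamped-key counter gholt describes, assuming the python-memcached client; the two-second expiry mirrors the conversation, everything else is illustrative:

```python
import time
import memcache  # python-memcached client, assumed to be installed

mc = memcache.Client(['127.0.0.1:11211'])

def over_limit(key, max_per_second):
    # One counter per (key, current second). add() is a no-op when the
    # counter already exists, and incr() bumps it atomically server-side.
    bucket = '%s:%d' % (key, int(time.time()))
    mc.add(bucket, 0, time=2)  # expire shortly after the second ends
    count = mc.incr(bucket)
    return count is not None and count > max_per_second
```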
gundlach | [if you can think of a more generic name than 'username' i'd love to change it] | 16:36 |
gundlach | gholt: but that doesn't allow for *rolling* limits, does it? | 16:36 |
gundlach | e.g. i allow 100 requests a day. dude performs 100 requests at 11:59pm on thursday, then on Friday we allow 100 more at midnight? | 16:37 |
*** replicant has joined #openstack | 16:37 | |
redbo | okay. we'll hopefully have generic ACLs soon, so any user could access any account, and all of our scaling limitations are per account. | 16:37 |
gholt | Yeah, that's what I mentioned earlier. It wasn't too big a deal at 1s intervals, but might be at larger ones. | 16:37 |
redbo | well, per container and per account | 16:38 |
gholt | gundlach: For 100 per day, you could limit at 50 per twelve hours. :) | 16:38 |
gundlach | gholt: right. i think that it'll also lead to spiky request behavior, which is just what we're trying to avoid | 16:38 |
gundlach | gholt: i thought of that and abandoned it, because they're not the same thing... | 16:38 |
*** burris has quit IRC | 16:39 | |
gundlach | gholt: why not limit at 1 every 16 minutes or whatever that works out to? | 16:39 |
gholt | If you're trying to prevent a spike over 100, it should be the same. | 16:39 |
gundlach | gholt: (because then we're forcing regularity across time) | 16:39 |
dendrobates | release meeting today 21:00 UTC. Localtime: http://goo.gl/3ZYo Agenda at wiki.openstack.org/Meetings | 16:39 |
gundlach | i guess it does prevent the spike, but a 1-per-second rate limit is different from a 3600-per-hour rate limit, so i don't think i can cheat by dividing like that | 16:40 |
gholt | I'm not sure what/why you're limiting, but with us, we're just trying to manage system resources, not paid service levels or anything. So just preventing prolonged spikes works in our case. | 16:42 |
gundlach | yep, nova's managing system resources, but the limits are like 100/min or 100/hour. | 16:43 |
*** zheng_li has joined #openstack | 16:43 | |
gholt | Ah, as in you can pop 100 servers today, all in one second, or throughout the day? | 16:43 |
gundlach | gholt: correct -- all at once, or throughout the hour. | 16:44 |
gundlach | though, when you put it that way, if a user hasn't been doing anything for the last hour, there's nothing stopping him from making a spike... | 16:44 |
*** KanGouLya has quit IRC | 16:44 | |
gholt | So you're really trying to limit space resources within a time span, got it. We don't have that (except for basic bandwidth itself). Even with users eating up space as fast as they can, we should be able to add more capacity in time. It's the spikes in CPU usage and small request overload we have to manage. | 16:46 |
cory_ | the rolling window concept is tough as you have to keep track of every access | 16:46 |
gholt | Though, I suppose somebody with their own Swift cluster might want time limited quotas like that.. Hmm. | 16:46 |
gundlach | cory_: right -- i was keeping a ring of timestamps per action per user | 16:46 |
cory_ | and that ring has to be large enough for any possible rate limit | 16:47 |
cory_ | that's going to be fun :) | 16:47 |
gundlach | cory_: well, it's just a list at the moment -- even if we had '1000 per period' that's only 4k | 16:47 |
cory_ | that's true | 16:47 |
gundlach | i worked out the math for the Rackspace API (the reason i wrote this code in the first place) and we can handle 200k users | 16:47 |
gundlach | with one node | 16:47 |
cory_ | I guess it is just a list of every timestamp | 16:48 |
gundlach | cory_: to be clear -- each ring is sized based on the action's limit. e.g. we have a 5-length list for a 5/minute action, and a 100-length list for a 100/second action. | 16:48 |
cory_ | right | 16:48 |
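For reference, a small in-memory sketch of the ring-of-timestamps idea being discussed here; it illustrates the scheme and is not the pasted OpenStack code:

```python
import time

class RollingWindowLimiter:
    """Keep the last N request times per key; a new request is allowed
    only once the oldest of them has fallen outside the window."""

    def __init__(self, max_requests, per_seconds):
        self.max_requests = max_requests
        self.per_seconds = float(per_seconds)
        self.history = {}  # key -> list of timestamps, oldest first

    def attempt(self, key):
        now = time.time()
        times = self.history.setdefault(key, [])
        if len(times) >= self.max_requests and now - times[0] < self.per_seconds:
            # Refused: also report how many seconds to wait before retrying.
            return False, times[0] + self.per_seconds - now
        times.append(now)
        del times[:-self.max_requests]  # keep only the newest N entries
        return True, 0.0
```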
gundlach | cory_: except now gholt and creiht are making me question my wisdom in not using memcache instead and throwing out the 'rolling window' concept :) | 16:49 |
cory_ | well, you seem to have completely different wants | 16:49 |
gholt | Hehe, well I'm pretty sure we're trying to solve to different, but admittedly similar, problems. | 16:49 |
*** blpiatt has quit IRC | 16:50 | |
cory_ | yeah, the ideas are very similar but the constraints are quite a bit different | 16:50 |
gundlach | cory_: i dunno -- whether you're trying to smooth out CPU spikes or I'm trying to smooth out load on the VM hosts, both are trying to smooth out the incoming rate of requests... | 16:50 |
*** pharkmillups has joined #openstack | 16:50 | |
gundlach | tell me again why they're different? | 16:51 |
cory_ | because of the 100 requests per day example | 16:51 |
*** pharkmillups has quit IRC | 16:51 | |
gholt | One wants to limit how fast you do something, the other how much you do something. | 16:51 |
cory_ | what he said | 16:52 |
gundlach | gholt: i don't think that's a real difference. you want to limit how fast users send read/write requests. i want to limit how fast users send reboot requests. both take a finite amount of resources to respond to behind the scenes. | 16:52 |
*** amscanne_ has joined #openstack | 16:52 | |
gundlach | i must be in the wrong because it's 2 against one, but i still don't buy it :) | 16:52 |
gholt | Hehehe | 16:52 |
cory_ | hehe | 16:52 |
cory_ | that's irc for you | 16:53 |
cory_ | it's kind of just a difference in how the "windows" behave | 16:53 |
gundlach | cory_: so if i did convert 100/day into 4/hour, and kept a simple counter per hour, now i've gotten rid of the rolling window, and if someone has lots of requests to perform then they'll end up saturating each hour with 4 requests and end up performing 100/day like they wanted. | 16:53 |
cory_ | you're right that it may just be a semantic measurement difference | 16:54 |
gundlach | only users who don't have much work to do will be shafted by the 4/hour interpretation of the 100/day limit, and they don't mind because they, well, don't have a lot to do. | 16:54 |
cory_ | you mean they have 100 tiny requests and have to wait a whole day to get them done? | 16:54 |
*** amscanne has quit IRC | 16:55 | |
gundlach | cory_: if our rate limit is 100/day, i argue that the requests wouldn't be tiny | 16:55 |
gundlach | or that wouldn't be our rate limit. instead, they are requests that we think we can't handle more than 100 of in a day per user | 16:55 |
cory_ | no, I'm just trying to follow your last example | 16:55 |
cory_ | gotcha | 16:55 |
cory_ | I kind of don't follow this part: "users who don't have much work to do " | 16:56 |
gundlach | cory_: users who only have 12 requests per day to accomplish, for instance. | 16:56 |
gundlach | they would like to get them all done at once but must stretch them over 3 hours | 16:56 |
cory_ | that's rate limiting, right? :) | 16:57 |
gholt | If you're limiting reboots independently, wouldn't you just limit to whatever each host can reasonably support? Would you really want to limit on user in that case? | 16:57 |
gundlach | gholt: reboots were a made-up example; i think RS actually rate limits at a less granular level, e.g. "100 POST requests per hour" where POST may be a server create, or a reboot, or backup | 16:58 |
gundlach | well, i give up -- i think i'll leave the code as is at least until austin is over with, for the sake of moving forward :) | 16:59 |
gholt | Hmm. Interesting... :) | 16:59 |
gholt | Hehe | 16:59 |
cory_ | it's definitely a fun discussion | 16:59 |
gundlach | thanks for the discussion and feedback, guys | 16:59 |
eday | gundlach: jumping in late here, but for rate limiting, I would just do a time-decay algorithm, that way they can hit their quota in 1 second, but then it takes <some time interval> to drop back to 0 (and incrementally allows more if they keep requesting) | 17:00 |
*** blpiatt has joined #openstack | 17:01 | |
gundlach | eday: how do i convert '100 requests per minute' into a decay function? | 17:01 |
eday | gundlach: tweak this: http://oddments.org/wiki/programming:python:rate_limit.py (and instead of sleep when limit is hit, return an error code) | 17:02 |
eday | gundlach: this requires storing a single key with current rate | 17:02 |
cory_ | ok, decay is a much simpler answer | 17:03 |
cory_ | that's nice :) | 17:03 |
gholt | Yeah, shiny. Can you build that into memcache? :) | 17:04 |
cory_ | takes too much math for my brain to figure out what the decay functions mean in terms of X per time interval | 17:05 |
gundlach | eday: cool -- i'm trying to figure out if it would work for slower rates like 100/day -- maybe using float max_rate and rate would help | 17:06 |
gundlach | eday: and if i could make it work in memcache without race conditions. thanks for the tip! | 17:06 |
eday | yeah, it needs some more parameters to see how much to increase per 'hit' and decay window, but the algo should be the same | 17:07 |
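A hedged sketch of the time-decay approach eday points to; the linked rate_limit.py is the authoritative version, and the parameter handling below is only inferred from the discussion:

```python
import time

class DecayRateLimiter:
    """Store one (rate, last_time) pair per key; the stored rate decays
    linearly over the window, so a client can burst up to the limit but
    sustained traffic is held to max_rate per per_seconds."""

    def __init__(self, max_rate, per_seconds):
        self.max_rate = float(max_rate)
        self.per_seconds = float(per_seconds)
        self.state = {}  # key -> (rate, last_time); stand-in for a memcache entry

    def attempt(self, key):
        now = time.time()
        rate, last = self.state.get(key, (0.0, now))
        # Decay the stored rate by the fraction of the window that has elapsed.
        rate = max(0.0, rate - (now - last) * self.max_rate / self.per_seconds)
        if rate + 1 > self.max_rate:
            # Over the limit: report how long until one unit has decayed away.
            wait = (rate + 1 - self.max_rate) * self.per_seconds / self.max_rate
            return False, wait
        self.state[key] = (rate + 1, now)
        return True, 0.0
```

DecayRateLimiter(100, 60), for instance, would allow roughly 100 requests per rolling minute while still permitting the full burst up front.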
*** rlucio has quit IRC | 17:07 | |
*** burris has joined #openstack | 17:07 | |
gholt | Man, that'd be cool built into memcache. An atomic call like rate(key, max, interval) or somesuch. | 17:09 |
*** joearnold has quit IRC | 17:10 | |
*** michalis has quit IRC | 17:10 | |
*** joearnold has joined #openstack | 17:12 | |
*** niczero has left #openstack | 17:27 | |
*** aliguori has quit IRC | 17:27 | |
DubLo7 | Just caught up on the rate limiting. Doesn't that imply adding authentication of some sort to memcached as well as tracking call count per user / ip? Sounds like an expensive addition. | 17:29 |
gundlach | DubLo7: yeah, i don't think we'd actually modify memcache -- we'd just use memcache as the storage of the map from key->(last_timestamp, counter) | 17:30 |
gundlach | where key is e.g. 'user michael performing a reboot action' | 17:30 |
gholt | Memcache as it is might work fine for slower rates, but at higher rates the read then write would miss stuff. | 17:32 |
DubLo7 | gundlach: I see. Well a bit more work because you want counts over a period of time. key -> (array_of_timestamps) might work… | 17:33 |
*** pharkmillups has joined #openstack | 17:34 | |
gundlach | DubLo7: that's actually the alg that i have written at the moment, but eday points out that his approach does about the same thing with much less memory usage | 17:34 |
_0x44 | gundlach: Why memcached instead of Redis (that we already 'sort-of' have)? | 17:34 |
gundlach | _0x44: because that's a persistent store, which feels wrong for tracking rate limiting. | 17:34 |
DubLo7 | this would be super simple in redis though… yeah | 17:34 |
gundlach | _0x44: and, in Swift's case, because Swift already has a bunch of memcaches up and wanted to see if they could use this code as well | 17:34 |
_0x44 | gundlach: Tell them no and shake your fist at them. | 17:35 |
* _0x44 helps. | 17:35 | |
*** mtaylor has joined #openstack | 17:35 | |
*** ChanServ sets mode: +v mtaylor | 17:35 | |
gholt | :) | 17:35 |
gundlach | _0x44: that is one approach | 17:35 |
gholt | Honestly, Swift's probably going to have to go with a different approach anyway. We were just curious if one-could-fit-all. Hehe | 17:36 |
_0x44 | I think my concern is really the expansion of datastores we're using. Doesn't this addition mean that memcached is now a dependency in addition to whatever other DB on the backend of the ORM? | 17:36 |
gundlach | gholt: what's wrong with what you're doing today? | 17:36 |
gundlach | _0x44: no, the rate limiter would have pluggable backends (as do several other pieces of openstack) | 17:36 |
gundlach | so it could store in memcache for swift, or in local memory for small deployments. | 17:37 |
gholt | gundlach: Well, right now we just return a refusal immediately on hitting the rate limit. We want to change that to just make the request delay executing the amount of time we want them to wait anyway. That doesn't work well with a 24hr rate limit though, hehe. | 17:37 |
_0x44 | gundlach: Oh, okay then, thanks. | 17:38 |
gundlach | _0x44: i don't think of memcached as a datastore -- it's a cache. i don't think it serves the same purpose as an ORM or redis. | 17:38 |
_0x44 | gundlach: It's a cache but data that isn't stored anywhere else would be stored there, no? | 17:38 |
DubLo7 | redis solution. key = ratelimit-userhash-timestamp, expires = rate limit time. previous_access_count = redis keys "ratelimit-userhash*", allow_access = (previous_access_count < max_allowed ) | 17:38 |
gundlach | _0x44: correct -- it would be stored there until the memcache server crashed | 17:39 |
_0x44 | gundlach: So if you wanted to query on that at all (for whatever reason), from the perspective of a user it's another datastore | 17:39 |
DubLo7 | I've done this already in my own project. It works, it's fast. | 17:39 |
_0x44 | (Despite being a cache) | 17:39 |
DubLo7 | You'll want to put it into its own database and key is ratelimit-processname-userhash-timestamp. | 17:40 |
DubLo7 | plus it replicates across servers so easy. | 17:40 |
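A small sketch of the expiring-key scheme DubLo7 outlines, assuming the redis-py client; note that KEYS is an O(n) scan of the keyspace, which matches the description but would hurt at high request rates:

```python
import time
import redis  # redis-py client, assumed to be installed

r = redis.Redis()

def allow(user_hash, max_allowed, window_seconds):
    # One key per request, expiring after the rate-limit window; current
    # usage is however many of those keys are still alive for this user.
    previous = len(r.keys('ratelimit-%s-*' % user_hash))
    if previous >= max_allowed:
        return False
    key = 'ratelimit-%s-%s' % (user_hash, time.time())
    r.set(key, 1)
    r.expire(key, window_seconds)
    return True
```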
gundlach | _0x44: when you say you're concerned about the expansion of datastores, i think that's missing the point. memcache serves a purpose -- i need to store some stuff in memory but many servers need to access the same memory. redis/mysql/cassandra/gfs serve another purpose: i need to persist some data that many servers can access. | 17:41 |
gundlach | now, if i were arguing for using memcached for one purpose, and at the same time requiring some other in-memory store for another purpose, then that would be concerning | 17:42 |
gundlach | [though i don't know of any other in-memory stores because memcached is so ubiquitous] | 17:42 |
_0x44 | gundlach: I agree, except that you can accomplish what you're trying to with memcached in redis. | 17:43 |
gundlach | _0x44, DubLo7: we're discussing in #openstack-meeting today whether redis will be deprecated; so i'll have to wait till after that to talk about depending on redis :) | 17:44 |
gundlach | bbiab | 17:44 |
DubLo7 | gundlach: Is this all nasa stuff now btw? | 17:44 |
gholt | gundlach: Wasn't the only reason for not using redis the persistence? If it wasn't persisting, would it matter? | 17:44 |
DubLo7 | ah, ok | 17:44 |
creiht | can redis provide the same performance as memcache? | 17:45 |
gholt | Not sure it matters at slow rates. :) | 17:45 |
creiht | heh.. true | 17:46 |
* creiht has too much swift on the brain | 17:46 | |
_0x44 | creiht: According to antirez it's just as fast but supports more datastructures | 17:46 |
DubLo7 | creiht: redis is on par with it, and the programmer lists the O(n) cost of each command. In my experience it's comparable speed, and being able to query keys and extra key types makes any loss of speed so worth it | 17:47 |
DubLo7 | memcached is a blackbox. Once you put something in you don't know what's still there. | 17:48 |
creiht | can you make redis not snapshot the memory to disk? | 17:51 |
creiht | and I wonder how much of a performance impact snapshoting has | 17:51 |
eday | gundlach: i still don't really consider redis a first class persistent store, it's more of a cache snapshot to disk (which some memcached versions have as well). we're just using it as a persistent store :) | 17:52 |
*** dendrobates is now known as dendro-afk | 17:52 | |
*** kapil__ has joined #openstack | 17:52 | |
eday | creiht: I'm pretty sure you can disable the snapshots | 17:52 |
DubLo7 | it's open source and cleanly written. You could disable the command very easily. Setting the db to /dev/null could work, but I'm not sure what would happen on startup | 17:53 |
DubLo7 | I'll try it out... | 17:53 |
*** replicant has quit IRC | 17:54 | |
*** mdomsch has quit IRC | 17:59 | |
*** littleidea has quit IRC | 18:02 | |
gundlach | gholt: re 'slow rates' -- while an individual user would have slow rates, we're targeting supporting 1 million users, so the overall load should be high. | 18:06 |
DubLo7 | snapshots can be disabled, but the normal rdb database can't be disabled as-is | 18:06 |
DubLo7 | chmod -r dump.rdb and redis lives happily with a 10 byte db | 18:06 |
gundlach | _0x44: i hadn't heard that redis was as performant as memcache... i am surprised | 18:06 |
gholt | gundlach: Yeah, that was more a tease, and not a great one either, heh. | 18:07 |
gundlach | oh, ok :) | 18:07 |
DubLo7 | I've submitted patches to redis before. I can add a memory only option if you guys think it's important. Although I doubt it will be picked up in the official system. It's about 20 lines of code or less. | 18:10 |
gundlach | DubLo7: i don't think it's important -- we've got a working rate limiting system now, and were just talking about how to optimize it. | 18:11 |
gundlach | we aren't hurting to optimize (yet). | 18:11 |
gundlach | thanks, though! | 18:12 |
DubLo7 | no problem. | 18:14 |
*** dendro-afk is now known as dendrobates | 18:16 | |
*** littleidea has joined #openstack | 18:20 | |
*** pvo has joined #openstack | 18:26 | |
*** ChanServ sets mode: +v pvo | 18:26 | |
*** tobym has quit IRC | 18:27 | |
*** technoid_ has joined #openstack | 18:28 | |
*** joearnold has quit IRC | 18:44 | |
*** aliguori has joined #openstack | 18:51 | |
*** mtaylor has quit IRC | 18:52 | |
*** joearnold has joined #openstack | 18:53 | |
*** p-scottie has joined #openstack | 18:59 | |
*** stewart has quit IRC | 19:08 | |
*** tobym has joined #openstack | 19:11 | |
*** DogWater has quit IRC | 19:16 | |
*** User113 has joined #openstack | 19:16 | |
*** zooko has joined #openstack | 19:16 | |
*** maple_bed has joined #openstack | 19:17 | |
*** jakedahn has joined #openstack | 19:18 | |
*** maplebed has quit IRC | 19:18 | |
*** jakedahn has quit IRC | 19:27 | |
*** amscanne__ has joined #openstack | 19:28 | |
*** skippyish has joined #openstack | 19:29 | |
*** rlucio has joined #openstack | 19:29 | |
*** amscanne_ has quit IRC | 19:31 | |
*** amscanne_ has joined #openstack | 19:35 | |
*** amscanne__ has quit IRC | 19:39 | |
soren | dendrobates: I'm not sure what detail to add to https://blueprints.edge.launchpad.net/nova/+spec/austin-xen/+edit, really. There's not much to say. "Check that it works. If it doesn't, fix it." | 19:45 |
soren | Well, that's not entirely true. It can probably be summed up as "Add a libvirt template for Xen," though. | 19:47 |
soren | dendrobates: Would that be better? | 19:48 |
*** dabo has joined #openstack | 19:49 | |
dendrobates | sure. | 19:49 |
*** skippyish has quit IRC | 19:49 | |
*** ctennis has quit IRC | 19:50 | |
*** maple_be1 has quit IRC | 19:50 | |
*** jakedahn has joined #openstack | 19:51 | |
*** anotherjesse has joined #openstack | 19:51 | |
*** rlucio has quit IRC | 19:55 | |
*** keshav2 has joined #openstack | 19:57 | |
*** kapil__ has quit IRC | 19:57 | |
*** vishy has quit IRC | 19:59 | |
*** devcamcar has joined #openstack | 19:59 | |
*** vishy has joined #openstack | 19:59 | |
*** tobym has quit IRC | 20:00 | |
*** Rudd-O has joined #openstack | 20:03 | |
Rudd-O | hello guys | 20:04 |
vishy | hello | 20:06 |
*** devcamcar has quit IRC | 20:06 | |
uvirtbot | New bug: #638396 in swift "saio add-user command needs to be code" [High,New] https://launchpad.net/bugs/638396 | 20:06 |
Rudd-O | hey guys | 20:07 |
Rudd-O | whats up | 20:07 |
joshuamckenty | one hour from game time | 20:07 |
dendrobates | yup | 20:08 |
*** devcamcar has joined #openstack | 20:08 | |
Rudd-O | hey dendrobates | 20:09 |
Rudd-O | you are with rackspace, right? | 20:09 |
*** allsystemsarego has quit IRC | 20:09 | |
dendrobates | Rudd-O: yep | 20:09 |
Rudd-O | ah cool :-) | 20:09 |
dendrobates | and you are? | 20:09 |
Rudd-O | name's Manuel | 20:09 |
Rudd-O | I'm with Cloud.com | 20:09 |
Rudd-O | and VERY interested in Nova | 20:09 |
dendrobates | ah, nice to meet you. | 20:09 |
Rudd-O | likewise! | 20:10 |
*** ctennis has joined #openstack | 20:10 | |
dendrobates | joshuamckenty: are you back in Canada? | 20:10 |
joshuamckenty | No, still in Italy | 20:11 |
anotherjesse | back in italy | 20:11 |
joshuamckenty | yeah, that | 20:11 |
joshuamckenty | Actually, going to Switzerland tomorrow | 20:11 |
joshuamckenty | But I'm trying to figure out how to get back for the design summit | 20:11 |
joshuamckenty | (Can't believe it's going to be in Texas, though...) | 20:11 |
* anotherjesse recommends plane ... but an ocean voyage might be nice | 20:11 |
*** jakedahn has quit IRC | 20:12 | |
creiht | A rocket would be really fast :) | 20:12 |
* joshuamckenty recommends getting someone to buy him a plane ticket... | 20:12 | |
joshuamckenty | ooh, I *do* work for NASA... | 20:12 |
Rudd-O | ahhh switzerland | 20:12 |
*** sirp1 has quit IRC | 20:13 | |
Rudd-O | beautiful landscape | 20:13 |
joshuamckenty | I wouldn't know - I'm never outside of the office, and the hotel | 20:13 |
joshuamckenty | No point, really, with the quality of the swiss beer | 20:13 |
*** tobym has joined #openstack | 20:15 | |
Rudd-O | hahha | 20:16 |
Rudd-O | understandable | 20:16 |
burris | does it bother you that SWIFT_HASH_PATH_SUFFIX has a default value and if you neglect to set the environment variable then a server might use a different suffix and cause chaos? | 20:16 |
*** ambo has left #openstack | 20:16 | |
creiht | gholt: -^ :) | 20:17 |
gholt | burris: Good point. And I hate that thing anyway. :) | 20:18 |
*** joearnold has joined #openstack | 20:18 | |
*** benoitc has quit IRC | 20:18 | |
*** ded has quit IRC | 20:19 | |
gholt | burris: It'd be good to patch the code to blow up with no SWIFT_HASH_PATH_SUFFIX yet. Not sure how that'd affect all the automated test environments though. | 20:19 |
*** benoitc has joined #openstack | 20:19 | |
joshuamckenty | I think Hudson can handle some ENV variables, if necessary | 20:21 |
*** devcamcar has quit IRC | 20:21 | |
gholt | chuck: I think you're the one who added that line of code anyway. :P And notmyname the original line before OpenStack. :P :P | 20:21 |
*** devcamcar has joined #openstack | 20:22 | |
gholt | creiht: ^^ stupid multi-nicks hehe | 20:22 |
Rudd-O | hudson can indeed handle environment variables | 20:22 |
*** ded has joined #openstack | 20:23 | |
*** devcamcar has quit IRC | 20:23 | |
*** benoitc has quit IRC | 20:23 | |
joshuamckenty | Oh, speaking of which: is it soren or dendrobates who has the sexy pylint setup for hudson? | 20:24 |
creiht | gholt: I thought you added the hash stuff :) | 20:24 |
gholt | Not that endcap thing. Man that was a whole thing. Remember why it was added? | 20:24 |
anotherjesse | is the hudson setup in a bzr repo? so others can set it up if they want | 20:24 |
notmyname | creiht: I added the hash stuff | 20:25 |
creiht | ahh | 20:25 |
gholt | Under duress though, as I recall. :) | 20:25 |
joshuamckenty | all kludgey code is under duress, in my experience. | 20:25 |
joshuamckenty | Sometimes the duress is alcohol, though | 20:25 |
gholt | Heh | 20:25 |
joshuamckenty | I feel somewhat privileged that my worst code was never open source | 20:26 |
joshuamckenty | which should shock those of you who've looked carefully at nova - yes, sadly, I've done worse ;) | 20:26 |
gholt | lol | 20:27 |
joshuamckenty | Is there a good convention for an IRC "show of hands"? | 20:27 |
*** joearnold has quit IRC | 20:27 | |
gholt | o/ ? | 20:28 |
dendrobates | i think mordred did most of the hudson setup. | 20:28 |
*** sirp1 has joined #openstack | 20:28 | |
joshuamckenty | ah, right. Thanks | 20:29 |
dendrobates | release meeting in #openstack-meeting in 30 min. | 20:29 |
*** davidg has quit IRC | 20:31 | |
Rudd-O | I always have had the belief that it's better to release crappy code than to not release at all | 20:31 |
Rudd-O | however crappy it is | 20:31 |
*** littleidea has quit IRC | 20:31 | |
joshuamckenty | Oh, I agree completely. I wrote a lot of software under contract, however. | 20:32 |
*** devcamcar has joined #openstack | 20:33 | |
joshuamckenty | NASA and OpenStack was actually the first occasion where I won the open source argument with my client. | 20:33 |
joshuamckenty | And, to be fair, anotherjesse actually won the fight. I just started it. | 20:33 |
burris | gholt, joshuamckenty I think it is also possible to wrap the test function in setup.py with a func that sets the environment variable and unsets it before/after the test run | 20:33 |
dendrobates | joshuamckenty: really? | 20:33 |
*** p-scottie has quit IRC | 20:34 | |
joshuamckenty | dendrobates: Well, I worked on Flock when it was open source, but that was a decision they made ahead of time. | 20:34 |
zul | a lot of the canadian government still doesn't use open source unfortunately | 20:34 |
joshuamckenty | Netscape was SUPPOSED to be open source when we worked on it, but we never won that fight with AOL legal. | 20:34 |
burris | gholt, joshuamckenty but I think it would be better if SWIFT_HASH_PATH_SUFFIX wasn't an environment variable but lived in a configuration file | 20:34 |
gholt | burris: True, that'd work with unit tests and probably probe tests, but not functional tests which could be run against a distant cluster. | 20:34 |
joshuamckenty | I generally prefer config over ENV as well, but not a mix of both | 20:35 |
joshuamckenty | are there many other swift ENV variables? | 20:35 |
burris | I don't think there are any other env variables for swift | 20:35 |
gholt | Good idea on conf value. | 20:35 |
burris | this one is dangerous | 20:35 |
creiht | well there is the MAKE_SWIFT_WORK env variable | 20:36 |
creiht | :) | 20:36 |
gholt | Only other env vars are optional test ones and optional command line tools one. | 20:36 |
joshuamckenty | ah, yeah. I always forget to set that one | 20:36 |
creiht | We started with a config variable, but there are several different configs, and they would all need that variable | 20:36 |
*** DubLo7 has quit IRC | 20:37 | |
gholt | It could be its own config file. Lame I know, but it'd work. | 20:37 |
joshuamckenty | you need config file includes | 20:37 |
creiht | heh | 20:37 |
burris | I could fix it for regular use and unit tests but it would break your automated tests and installs | 20:37 |
creiht | gholt: didn't you go down this road for a while before notmyname took it over? | 20:37 |
creiht | because didn't we also think about putting it in the ring? | 20:38 |
gholt | Ah, probably. You know my memory. | 20:38 |
creiht | heh | 20:38 |
burris | object server uses hash_path (which uses SWIFT_HASH_PATH_SUFFIX) but it doesn't use the ring... | 20:39 |
gholt | creiht: See? burris remembers the whole thing, hehe | 20:40 |
*** devcamcar has quit IRC | 20:40 | |
creiht | burris: right, and why we dropped the ring idea | 20:40 |
creiht | hah | 20:40 |
*** devcamcar has joined #openstack | 20:41 | |
gholt | Making it its own configuration file is wonky, but easy to do and distribute and doesn't have the environ issues. | 20:41 |
burris | I don't think having its own config file is that lame, considering it's very important that every node agree on the salt for the entire lifetime of the cluster | 20:41 |
*** anotherjesse has quit IRC | 20:43 | |
*** devcamcar has quit IRC | 20:43 | |
burris | it also doesn't make sense to put it in the ring since the ring changes but the salt never does | 20:43 |
*** devcamcar has joined #openstack | 20:44 | |
joshuamckenty | I've been out of touch, but have we looked at standardizing the CLI / config file handling between nova and swift at all? | 20:44 |
Rudd-O | so guys | 20:44 |
Rudd-O | why gflags rather than optparse or argparse? | 20:44 |
creiht | I think I gave jaypipes too hard of a time, and he gave up :/ | 20:44 |
joshuamckenty | true answer, or cool answer? | 20:44 |
gholt | joshuamckenty: Yes, talked, but pushed it out past Austin release for now. | 20:44 |
joshuamckenty | we went with gflags cause termie liked it more than we liked anything else | 20:45 |
joshuamckenty | If I was going to switch now I would consider cement | 20:45 |
joshuamckenty | actually, the way that gflags are declared within the module that they're used is both cool, and annoying | 20:45 |
joshuamckenty | gholt: makes sense | 20:46 |
creiht | we used optparse because it didn't add another dependency :) | 20:46 |
joshuamckenty | gholt: I just want us to have some idea of where we're going to seek commonality before the next four major openstack components get started | 20:46 |
burris | so if I fixed hash_path to get the suffix out of a config file would you guys accept the patch? what would need to be done to increase the likelihood of acceptance? | 20:46 |
joshuamckenty | I'm always tempted to say "beer" to that question, but it's not really true | 20:47 |
burris | the source and solution to all of life's problems... | 20:47 |
creiht | burris: unless anyone else comes up with a better idea, I'm open to that option | 20:47 |
burris | I just don't want to create too much extra work for you so I'm worried about what will break | 20:47 |
gholt | burris: Go for it. Just need to be on the contributors list or an employee of a member company. | 20:48 |
burris | I work for Cloudscaling | 20:48 |
creiht | as long as you set the default to what we had before, should be fine | 20:48 |
creiht | at least shouldn't break anything (including tests) | 20:48 |
*** dgoetz has joined #openstack | 20:48 | |
gholt | burris: You're set to contribute then. :) | 20:49 |
creiht | burris: the worst that can happen is that you will get half way done then gholt will realize that he had already implemented it in a branch somewhere and beat you to the punch :) | 20:49 |
*** anotherjesse has joined #openstack | 20:49 | |
burris | I think the issue is there would be no default since it would be in a config file, I haven't looked to see how those are setup or if the ones that are there are particular to our install of it | 20:49 |
creiht | burris: there should be a reasonable default, but we could print a warning when something is run with the default | 20:50 |
creiht | (not set in the config) | 20:51 |
gholt | Hrm. I like the idea of no default personally. :P | 20:51 |
gholt | Blow up until you make the proper conf file. | 20:51 |
burris | that would be bad in production because people don't read the logs to see the warning, then you have a server whose hash_path returns different values than all the others | 20:51 |
burris | yeah the default is the thing that is really bothering us, env variables are also lame | 20:51 |
gholt | Of course, that goes against what other folks want with a set of packages that installs a working system. .... Unless we make a swift-lame-settings package? | 20:53 |
*** devcamcar has quit IRC | 20:53 | |
*** adjohn has joined #openstack | 20:53 | |
*** devcamcar has joined #openstack | 20:54 | |
creiht | yeah I'm torn both ways... trying to find a decent balance between the two | 20:55 |
*** devcamcar has quit IRC | 20:55 | |
*** p-scottie has joined #openstack | 20:57 | |
*** devcamcar has joined #openstack | 20:57 | |
*** benoitc has joined #openstack | 20:57 | |
burris | right now the swift setup.py doesn't install a working system, isn't the only thing that does so is swift-solo? | 20:57 |
creiht | burris: there are some other scripts as well | 20:58 |
creiht | but basically do the same thing | 20:58 |
creiht | there was talk about setting up the ubuntu packaging so that it would set up a simple self-contained system | 20:59 |
jaypipes | creiht: ha ha. | 20:59 |
creiht | :) | 21:00 |
joshuamckenty | yeah, I thought that was SAIO | 21:00 |
joshuamckenty | or some such | 21:00 |
jaypipes | creiht: no, just haven't gotten around to writing the proposal email to the ML :) | 21:00 |
jaypipes | creiht: the wiki proposal is done though... | 21:00 |
burris | which scripts? they could be modified to read a bunch of bytes out of /dev/urandom and put them in the config file | 21:00 |
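Something along those lines could be as small as the following, purely illustrative snippet run once at install time:

```python
import binascii
import os

# Generate a random suffix once and write it into the (hypothetical)
# config file, so every node in the cluster shares the same value.
suffix = binascii.hexlify(os.urandom(16))
```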
joshuamckenty | meeting time | 21:01 |
creiht | someone made a bash script and blogged about it, I think the NASA guys have some puppet scripts | 21:01 |
creiht | there isn't anything official though | 21:01 |
joshuamckenty | we have chef scripts, I think | 21:01 |
joshuamckenty | might have puppet scripts, too | 21:01 |
joshuamckenty | it's a bake-off | 21:01 |
burris | it's going to break a lot of peoples stuff but I think its important, I wonder how many people are running the default and don't even know they needed to change it before storing anything in their cluster? | 21:01 |
creiht | burris: I also meant to document that, but got lost in the shuffle | 21:02 |
burris | it's not too late to change how it works then :-) | 21:03 |
burris | I'll whip something up | 21:03 |
creiht | hehe | 21:03 |
*** ded has quit IRC | 21:03 | |
creiht | burris: but yeah the current chef scripts are just for dev (for which the string isn't so important) | 21:04 |
creiht | burris: but it would probably be a good idea to get something better in before the austin release | 21:05 |
gholt | Go for the blow up if not set? Packaging/scripts can handle the rest? | 21:07 |
burris | that sounds good to me | 21:08 |
creiht | sounds reasonable to me | 21:08 |
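A sketch of the "blow up if not set" behaviour agreed on here; the file path, section, and option names are assumptions for illustration, not the eventual swift implementation:

```python
try:
    from configparser import RawConfigParser  # Python 3
except ImportError:
    from ConfigParser import RawConfigParser  # Python 2, swift's era

def load_hash_path_suffix(conf_path='/etc/swift/swift.conf'):
    # Refuse to run with a default: every node must share the same suffix.
    parser = RawConfigParser()
    if (not parser.read(conf_path)
            or not parser.has_section('swift-hash')
            or not parser.has_option('swift-hash', 'swift_hash_path_suffix')):
        raise SystemExit('swift_hash_path_suffix must be set in %s' % conf_path)
    return parser.get('swift-hash', 'swift_hash_path_suffix')
```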
Rudd-O | I still get the problem with pidfile | 21:09 |
Rudd-O | I am trying to start nova-compute | 21:09 |
*** joearnold has joined #openstack | 21:09 | |
Rudd-O | it spits out AttributeError: pidfile | 21:10 |
Rudd-O | is there a commandline parameter I need to pass? | 21:10 |
*** burris has quit IRC | 21:10 | |
xtoddx | Rudd-O: never seen that. are you passing a --flagfile= flag? | 21:11 |
*** perestrelka has quit IRC | 21:11 | |
uvirtbot | New bug: #638449 in nova "Cannot update the flat network IP address list" [Undecided,New] https://launchpad.net/bugs/638449 | 21:11 |
*** perestrelka has joined #openstack | 21:11 | |
*** vvuksan has joined #openstack | 21:13 | |
*** btorch has quit IRC | 21:16 | |
*** devcamcar has quit IRC | 21:16 | |
*** pandemicsyn has quit IRC | 21:16 | |
*** pandemicsyn has joined #openstack | 21:16 | |
*** littleidea has joined #openstack | 21:16 | |
*** btorch has joined #openstack | 21:16 | |
*** ChanServ sets mode: +v pandemicsyn | 21:17 | |
creiht | oh man where did burris go | 21:18 |
creiht | https://bugs.launchpad.net/swift/+bug/638457 | 21:18 |
uvirtbot | Launchpad bug 638457 in swift "Refactor SWIFT_HASH_PATH_SUFFIX to be in a config file" [High,New] | 21:18 |
creiht | :) | 21:18 |
Rudd-O | xtoddx | 21:25 |
Rudd-O | flagfile????????????? | 21:25 |
Rudd-O | how's that work? mind if I ask even if it is a stupid question? | 21:25 |
xtoddx | you can collect flags and stick them in a place like /etc/nova/proxy-server.conf | 21:25 |
*** devcamcar has joined #openstack | 21:26 | |
xtoddx | and --flagfile=/etc/nova/proxy-server.conf and store all your flags there | 21:26 |
*** p-scottie has quit IRC | 21:26 | |
xtoddx | some of the binaries have it baked in, others use the default, which i think is nova.conf in the current dir | 21:26 |
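To make the flagfile pattern concrete, a hypothetical example; the path and the flags inside the file are illustrative, and which flags are actually honoured depends on the nova binary:

```
# /etc/nova/nova.conf -- one gflags-style flag per line (contents illustrative)
--verbose
--sql_connection=sqlite:////var/lib/nova/nova.sqlite

# then point the service at it:
nova-compute --flagfile=/etc/nova/nova.conf
```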
uvirtbot | New bug: #638457 in swift "Refactor SWIFT_HASH_PATH_SUFFIX to be in a config file" [High,New] https://launchpad.net/bugs/638457 | 21:27 |
*** joschi___ has joined #openstack | 21:31 | |
*** devcamcar has quit IRC | 21:31 | |
*** joschi has quit IRC | 21:31 | |
Rudd-O | ohhhh | 21:31 |
Rudd-O | I see | 21:31 |
*** klord has quit IRC | 21:32 | |
*** p-scottie has joined #openstack | 21:33 | |
*** pharkmillups has quit IRC | 21:33 | |
*** pharkmillups has joined #openstack | 21:33 | |
*** anotherjesse has quit IRC | 21:37 | |
*** p-scottie has quit IRC | 21:37 | |
*** blpiatt has quit IRC | 21:38 | |
*** burris has joined #openstack | 21:38 | |
*** devcamcar has joined #openstack | 21:40 | |
*** rnewson has quit IRC | 21:42 | |
*** devcamcar has quit IRC | 21:42 | |
*** devcamcar has joined #openstack | 21:43 | |
*** p-scottie has joined #openstack | 21:44 | |
*** devcamcar has quit IRC | 21:44 | |
*** maple_be1 has joined #openstack | 21:44 | |
*** devcamcar has joined #openstack | 21:46 | |
creiht | burris: https://bugs.launchpad.net/swift/+bug/638457 | 21:46 |
uvirtbot | Launchpad bug 638457 in swift "Refactor SWIFT_HASH_PATH_SUFFIX to be in a config file" [High,New] | 21:46 |
burris | yes thanks! | 21:47 |
creiht | Take that over if you don't mind (I don't know what user you are on launchpad) | 21:47 |
burris | I will, I think I have to create a new user | 21:48 |
creiht | cool | 21:48 |
*** vvuksan has quit IRC | 21:49 | |
*** ded has joined #openstack | 21:52 | |
*** joearnold has quit IRC | 21:56 | |
*** stewart has joined #openstack | 21:57 | |
*** jdarcy has quit IRC | 21:57 | |
eday | vishy: conflicts :) | 22:03 |
*** devcamcar has quit IRC | 22:03 | |
*** devcamcar has joined #openstack | 22:04 | |
*** dabo has quit IRC | 22:07 | |
*** devcamcar has quit IRC | 22:07 | |
*** devcamcar has joined #openstack | 22:09 | |
vishy | eday: on it | 22:11 |
*** p-scottie has quit IRC | 22:12 | |
*** stewart has quit IRC | 22:13 | |
*** pvo has quit IRC | 22:14 | |
*** devcamcar has quit IRC | 22:14 | |
*** devcamcar has joined #openstack | 22:15 | |
*** silassewell has joined #openstack | 22:19 | |
*** devcamcar has quit IRC | 22:19 | |
*** devcamcar has joined #openstack | 22:20 | |
*** amscanne_ has quit IRC | 22:21 | |
vishy | eday: resolved | 22:23 |
*** DubLo7 has joined #openstack | 22:23 | |
*** rlucio has joined #openstack | 22:25 | |
*** devcamcar has quit IRC | 22:25 | |
*** pharkmillups has quit IRC | 22:26 | |
*** stewart has joined #openstack | 22:30 | |
*** rnewson has joined #openstack | 22:30 | |
*** DubLo7 has quit IRC | 22:30 | |
*** adjohn has quit IRC | 22:31 | |
*** jakedahn has joined #openstack | 22:34 | |
*** devcamcar has joined #openstack | 22:35 | |
*** rnewson has quit IRC | 22:40 | |
*** devcamcar has quit IRC | 22:40 | |
*** devcamcar has joined #openstack | 22:41 | |
*** miclorb_ has joined #openstack | 22:47 | |
*** jakedahn_ has joined #openstack | 22:49 | |
*** npmap has quit IRC | 22:50 | |
*** devcamcar has quit IRC | 22:50 | |
*** jakedahn has quit IRC | 22:53 | |
*** gundlach has quit IRC | 22:53 | |
*** jakedahn_ has quit IRC | 22:53 | |
*** gundlach has joined #openstack | 22:57 | |
*** jkakar has joined #openstack | 22:58 | |
*** p-scottie has joined #openstack | 22:59 | |
*** devcamcar has joined #openstack | 23:00 | |
*** vvuksan has joined #openstack | 23:00 | |
*** aliguori has quit IRC | 23:03 | |
*** maple_bed has quit IRC | 23:03 | |
*** sirp1 has quit IRC | 23:08 | |
*** sirp1 has joined #openstack | 23:09 | |
*** joearnold has joined #openstack | 23:12 | |
*** dendrobates is now known as dendro-afk | 23:13 | |
*** devcamcar has quit IRC | 23:13 | |
*** devcamcar has joined #openstack | 23:15 | |
*** gasbakid has joined #openstack | 23:16 | |
*** devcamcar has quit IRC | 23:16 | |
*** devcamcar has joined #openstack | 23:17 | |
*** pvo has joined #openstack | 23:22 | |
*** ChanServ sets mode: +v pvo | 23:22 | |
*** skippyish has joined #openstack | 23:25 | |
*** amscanne_ has joined #openstack | 23:26 | |
*** ded has quit IRC | 23:37 | |
*** devcamcar has quit IRC | 23:37 | |
*** devcamcar has joined #openstack | 23:38 | |
*** sirp1 has quit IRC | 23:39 | |
*** gundlach has quit IRC | 23:39 | |
*** Rudd-O has quit IRC | 23:39 | |
*** zheng_li has quit IRC | 23:39 | |
*** zheng_li has joined #openstack | 23:40 | |
*** pvo has quit IRC | 23:42 | |
*** pvo has joined #openstack | 23:43 | |
*** pvo has joined #openstack | 23:44 | |
*** ChanServ sets mode: +v pvo | 23:44 | |
*** stewart has quit IRC | 23:44 | |
*** zheng_li has quit IRC | 23:47 | |
*** devcamcar has quit IRC | 23:47 | |
*** ArdRigh has joined #openstack | 23:47 | |
*** devcamcar has joined #openstack | 23:48 | |
*** pvo has quit IRC | 23:48 | |
*** Rudd-O has joined #openstack | 23:52 | |
*** devcamcar has quit IRC | 23:52 | |
*** tobym has quit IRC | 23:57 |