Tuesday, 2010-09-14

03:21 &lt;deshantm_cosi&gt; anybody have a recommended distro for installing openstack?
03:21 &lt;deshantm_cosi&gt; I'm setting up one or more systems for testing this week
03:23 &lt;creiht&gt; deshantm_cosi: most of us use ubuntu server, so that will probably give you the best results
06:21 &lt;uvirtbot&gt; New bug: #637805 in nova "keypairs shouldn't be in LDAP" [Low,New] https://launchpad.net/bugs/637805
06:57 &lt;uvirtbot&gt; New bug: #637818 in swift "Add "Hand off" visibility to swift-get-nodes" [Undecided,New] https://launchpad.net/bugs/637818
07:32 &lt;soren&gt; vishy: As I understood it, the scoped session would be thread local.
07:36 &lt;soren&gt; vishy: I've always thought that the way an ORM automagically turned these references into object attributes was one of the most convenient features of having an ORM to begin with.
07:36 &lt;soren&gt; vishy: ...but I guess we can revisit it for Austin+1.
13:39 &lt;gundlach&gt; when's today's release meeting?
13:41 &lt;dendrobates&gt; gundlach: 4pm CST, check the wiki for local times
13:46 &lt;gundlach&gt; dendrobates: ty
13:59 &lt;soren&gt; creiht: Anyone we know?
14:00 &lt;creiht&gt; A friend of mine, thought some people might find it of interest
14:01 &lt;soren&gt; Could be. I've requested an invite. It's always interesting to see how people use our stuff.
14:09 * soren pauses
14:34 &lt;blamar&gt; creiht: thanks for the link! invite requested
14:57 &lt;dendrobates&gt; register for the upcoming OpenStack design summit: https://launchpad.net/sprints/ods-b/+attend
15:29 &lt;gundlach&gt; dendrobates: i've got some code i had to write for OpenStack that is independent enough to be a project on pypi (a ratelimiting package).  should i upload it to pypi as ratelimiting and then consume it in OpenStack, or must it be kept in OpenStack?
15:30 &lt;gundlach&gt; [i'm leaning toward the former, and pvo agrees, but suggested checking w/ you too]
15:30 &lt;dendrobates&gt; gundlach: either is fine with me.  But we would need to package it for Fedora and Ubuntu to make sure it is easy to use as a dependency
15:31 &lt;gundlach&gt; dendrobates: aren't we using pip to install python dependencies?
15:31 &lt;gundlach&gt; e.g. we also require webob, and i thought we just install that via pip
15:32 &lt;dendrobates&gt; we can use that, but distros frown on that and want packages so their installers/updaters know the state of the system
15:32 * gundlach is not a packaging expert by any means
15:32 &lt;dendrobates&gt; is this going into Austin?
15:32 &lt;dendrobates&gt; can we ship a copy with Austin?
15:33 &lt;dendrobates&gt; and pull it out in Bexar
15:33 &lt;gundlach&gt; we can do whatever is best; it's just a 2 or 3 file package
15:33 &lt;dendrobates&gt; Bexar == Austin++
15:34 &lt;gundlach&gt; does this imply that all the other python modules i've been requiring lately (eventlet, webob, routes) need to also be vetted?  I have just been adding them to pip-requires willy-nilly
15:34 &lt;gundlach&gt; ratelimiting, once i uploaded it to pypi, would just be a 4th module in the same class as those 3
15:35 &lt;dendrobates&gt; I think that is fine for now, but if we want to be shipped in a distro by default, we will need to make sure all our dependencies are packaged.
15:35 &lt;dendrobates&gt; the distros will take care of it for us, mostly
15:36 &lt;gundlach&gt; ok, so 'fine for now' means i don't need to go make sure eventlet/webob/routes are already packaged for Fedora+Ubuntu?
15:39 &lt;dendrobates&gt; yeah, not your problem.  the package maintainers will handle it.
15:40 &lt;dendrobates&gt; if we make our own packages, we just need to be sure we let them know
15:40 &lt;gundlach&gt; dendrobates: ok.
15:40 &lt;dendrobates&gt; $(package_maintainers) == soren
15:41 &lt;gundlach&gt; [interestingly, i just noticed on PyPI that Ian Bicking *just* released WebOb 1.0 within the last few minutes]
15:41 &lt;gundlach&gt; ok, thanks :)  i'll bug soren when i release ratelimiting to pypi after austin.
15:42 &lt;creiht&gt; gundlach: do you have a link to this rate limiting code?
15:42 * creiht is curious
15:42 &lt;gundlach&gt; creiht: not yet, i was holding off on setting up a google code project, etc.
15:42 &lt;creiht&gt; We need better rate limiting code in swift and wanted to see what you have :)
15:42 &lt;gundlach&gt; dendrobates is of the opinion that i should hold off until Bexar on putting it in pypi, so i'll be adding it to openstack
15:43 &lt;gundlach&gt; creiht: ok, i'll check whether jaypipes has made openstack.common yet, and if so i'll drop it in there.
15:43 &lt;creiht&gt; gundlach: is it rate limiting in terms of refusing requests after a certain rate, or does it slow down requests after a certain rate?
15:43 &lt;gundlach&gt; creiht: refusing, and telling you how many seconds to wait before retrying
15:44 &lt;gundlach&gt; (the Rackspace API needs this kind of functionality)
15:44 &lt;jaypipes&gt; gundlach: yes, I have :)
15:44 &lt;gundlach&gt; here you go: http://paste.openstack.org/show/24/
15:44 &lt;gundlach&gt; jaypipes: ohai
15:44 &lt;gundlach&gt; jaypipes: great, it's in trunk?  i'll drop my code in it.
15:44 &lt;creiht&gt; we need more of a slow down requests if they are doing too much
15:44 &lt;jaypipes&gt; gundlach: though it's still not in Nova trunk, no
15:44 &lt;gundlach&gt; creiht: what does 'slow down' mean?
15:44 &lt;gundlach&gt; creiht: a request comes in -- do you want to queue it in memory?
15:44 &lt;jaypipes&gt; gundlach: so may be best to package it into pypi for right now..
15:44 &lt;gundlach&gt; jaypipes: dendro vetoed that.
15:45 &lt;creiht&gt; so an example may be, wait a couple of milliseconds before returning the response
15:45 &lt;dendrobates&gt; gundlach: I didn't veto it.
15:45 &lt;jaypipes&gt; gundlach: ah, ok.  well, there's not much to openstack.common yet, because I'm still working on proposals... but certainly you could add it there. I'd still need packaging help from mtaylor though :(
15:45 &lt;creiht&gt; in addition to an actual cap like what you are talking about (we already have that in swift)
15:45 &lt;dendrobates&gt; I just said can we ship a copy too, for this release
15:45 &lt;gundlach&gt; dendrobates: not to imply dictatorship, just that you said it would be better to hold off
15:46 &lt;gundlach&gt; oh, ship a copy *too*.  i see.  eh, i think i'll just wait until Bexar, so i don't have to fork the code.  no biggie.
15:46 &lt;dendrobates&gt; we are past the ubuntu freeze so adding new dependencies is hard
15:46 &lt;gundlach&gt; creiht: hmm, lemme think for a sec about how i'd add that
15:47 &lt;gundlach&gt; creiht: would you be using this as WSGI middleware?
15:47 &lt;creiht&gt; I would like to if possible
15:47 &lt;gundlach&gt; creiht: and you want to say 'each user may make no more than N requests per minute, and if they try to we'll start delaying them?'
15:47 &lt;gundlach&gt; what's the algorithm for how much to delay them?
15:48 &lt;gholt&gt; It'd need some modification to put rate limiting info into memcache as well.
15:48 &lt;gundlach&gt; (you could do a quick-and-dirty version by just sleeping the # of seconds until they're allowed to make a request)
15:49 &lt;gundlach&gt; gholt: i'm not using memcache -- i wrote a simple WSGI app instead, so you could pull off rate limiting in one request rather than 2 or 3
15:49 &lt;gundlach&gt; makes atomicity easier as well
15:49 &lt;gholt&gt; Well, we can't shard on user, for instance.
15:49 &lt;gundlach&gt; gholt: yep, you can -- there's a note in the code i pasted above which talks about that
15:49 &lt;creiht&gt; gundlach: We need to be able to rate limit across all the proxies
15:49 &lt;gholt&gt; Oh, this is an app, not middleware, sorry.
15:50 &lt;gundlach&gt; just make a WSGI app that shards on username and fwds to the right backend.  (which i didn't write because i'm pretty positive it exists in the wild)
15:50 &lt;gundlach&gt; gholt: yeah, any middleware would be specific to an application, so i didn't include middleware in the package which i expected to ship to pypi
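The middleware arrangement gundlach sketches (refuse the request and report a wait time) can be illustrated in a few lines of WSGI. This is a hypothetical sketch, not the code from the paste: the limiter interface (`allow()` returning `(ok, retry_after)`) and the 413 status choice are assumptions for illustration.

```python
class RateLimitMiddleware:
    """Minimal WSGI middleware of the kind discussed above: refuse
    over-limit requests and tell the client how long to wait.

    The limiter object and its allow() signature are illustrative
    assumptions, not the interface from gundlach's paste."""

    def __init__(self, app, limiter):
        self.app = app
        self.limiter = limiter

    def __call__(self, environ, start_response):
        # Key on the authenticated user; REMOTE_USER is one common choice.
        user = environ.get("REMOTE_USER", "anonymous")
        ok, retry_after = self.limiter.allow(user)
        if not ok:
            # Refuse, and say how many seconds to wait before retrying.
            start_response("413 Request Entity Too Large",
                           [("Retry-After", str(int(retry_after)))])
            return [b"Rate limit exceeded\n"]
        return self.app(environ, start_response)
```

The same limiter could instead sleep for `retry_after` seconds before forwarding, which is the "slow down" behavior creiht asks about.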
15:50 &lt;gholt&gt; Yeah, sorry, I started off on the complete wrong track. :)
15:50 &lt;gundlach&gt; creiht: so you put middleware in each one which calls out sideways to the WSGI app running on a separate server
15:51 &lt;creiht&gt; that doesn't sound web scale :)
15:51 &lt;gholt&gt; I think it'd be better to have middleware that shares state in memcache servers.
15:51 &lt;gundlach&gt; creiht: hmm, i thought carefully about it to make it scale properly.  what sounds wrong?
15:51 &lt;creiht&gt; gundlach: how do you scale the rate limiting service?
15:51 &lt;gundlach&gt; gholt: i started down that path, with a rate limiter object that has different backends -- Local, list_of_memcacheds, redis -- but thought the current approach was better
15:52 &lt;gundlach&gt; creiht: shard by username.  if you need 10 times the capacity that one server can support, then you:
15:52 &lt;gundlach&gt; start 10 WSGI apps, and put 1 or more proxies in front which are stateless but shard by incoming username.
15:53 &lt;gundlach&gt; whoever wishes to consume the rate limiting service hits one of those N stateless proxies, which fwds to the right shard
15:53 &lt;gundlach&gt; if a shard goes down, the proxy stops rate limiting those users until the shard is replaced
15:53 * creiht ponders
15:54 * gundlach isn't sure if that was clear -- ask me to say again differently if needed
15:54 &lt;creiht&gt; I understand
15:54 &lt;gholt&gt; All that should work. I don't think it'd be a great fit for Swift where we already have spread proxy servers using a ring of memcache servers.
15:54 &lt;gundlach&gt; at a higher level -- if one machine can't scale, you can shard to many machines, and put frontends in front of the shards which fwd to the correct shard.
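The shard-by-username routing gundlach describes reduces to a stateless, deterministic mapping from username to backend. A minimal sketch, assuming a hypothetical list of shard URLs (the backend names and hash choice are illustrative, not from the actual code):

```python
import hashlib

# Hypothetical rate-limiter shard URLs.  Every stateless proxy holds the
# same list, so no coordination between proxies is needed.
BACKENDS = [
    "http://ratelimit-1:8080",
    "http://ratelimit-2:8080",
    "http://ratelimit-3:8080",
]

def shard_for(username):
    """Map a username to one backend deterministically.

    Every proxy computes the same answer, so all requests for a given
    user land on the same shard and that shard's counters see the
    user's full request stream."""
    digest = hashlib.md5(username.encode("utf-8")).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]
```

Because the mapping is pure, a proxy that finds its chosen shard down can simply skip rate limiting for those users until the shard is replaced, as gundlach notes.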
15:55 &lt;gholt&gt; Plus, we'd prefer not to have to manage additional machines and services if at all possible.
15:55 &lt;gundlach&gt; gholt: what do you mean 'spread proxy servers'?  and does a 'ring' of memcacheds mean something different than just a bunch of them running in a cluster?
15:55 * creiht doesn't want to manage a rate limiting cluster on top of the swift cluster that he already has to manage :)
15:55 &lt;gundlach&gt; gholt, creiht: yeah, so here's the tradeoff that made me think another service was worth it:
15:56 &lt;gundlach&gt; if you use memcached to store your rate limiting info (a list of timestamps per action per user), then each time a request comes in from the web, you have to make multiple round trips to memcached
15:56 &lt;gundlach&gt; and i don't know that you can guarantee atomicity (e.g. if 5 requests come in on different proxies, they might trip over each other writing to memcached)
15:57 &lt;gundlach&gt; i didn't see immediately how to reuse memcached or otherwise, while still making a correct implementation (and not making lots of hops, sending a potentially large list of timestamps across the wire twice per request)
15:57 &lt;gholt&gt; We were using incr with time-based keys with memcache.
15:58 &lt;gundlach&gt; gholt: how's that work?  e.g. i limit to 3 requests per minute
15:58 &lt;gholt&gt; It does mean that you can go over for a given one-second time span if that span overlaps two actual seconds.
15:58 &lt;gholt&gt; Oh, well, we're talking about limiting at 100s 100s per second. :)
15:58 &lt;gholt&gt; That was supposed to be 100s or 1000s, hehe
15:59 &lt;gundlach&gt; oh, gotcha.  yes, at that rate you should use counters and memcache would suffice :)
15:59 &lt;gundlach&gt; um, hm how do you make it atomic?
15:59 &lt;gundlach&gt; i assume you're essentially pulling a counter, adding one to it, writing it back?
15:59 &lt;gholt&gt; Memcache incr is atomic (supposedly)
16:00 &lt;gundlach&gt; ah, i hadn't heard of that
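The incr-with-time-based-keys scheme gholt describes amounts to a fixed-window counter. A minimal sketch, with a plain dict standing in for memcache (in the real thing each increment would be one atomic memcached `incr` on a key that expires after the window passes; the class and parameter names here are illustrative):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Time-bucketed counters, mimicking the memcache incr scheme
    gholt describes.  A dict stands in for memcache here; real memcache
    would also evict old buckets for us via key expiry."""

    def __init__(self, max_per_window, window_seconds=1, clock=time.time):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self.clock = clock
        self.counters = defaultdict(int)

    def allow(self, user):
        # The key includes the current window number, so counters reset
        # automatically when the window rolls over.
        window = int(self.clock() // self.window_seconds)
        key = (user, window)
        self.counters[key] += 1  # one atomic INCR in memcache
        return self.counters[key] <= self.max_per_window
```

This also makes concrete the flaw gholt concedes: a burst straddling a window boundary can briefly see up to twice the limit, which matters more at large windows than at one-second ones.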
16:01 &lt;jero&gt; redis is
16:02 &lt;gundlach&gt; jero: right -- though i didn't like the idea of keeping rate limiting counters in a persistent store :)
16:29 &lt;redbo&gt; Did you mention swift needs to be able to rate limit some types of requests differently?  If we used a rate limiting service, it'd probably be better if it'd just accept a key instead of semantically defining it as a username.
16:32 &lt;gholt&gt; Well, he's got an action_name parameter that can be used for that.
16:32 &lt;redbo&gt; oh, okay
16:33 &lt;gundlach&gt; heh, i had just hopped to this window to ask if you guys need to rate limit on key, as does nova :)
16:35 &lt;gundlach&gt; gholt, creiht: so i'm reconsidering my approach with the knowledge that memcache supports atomic incr.  can you think of a way to support rolling limits [e.g. 100 reqs/day] without storing 100 timestamps?
16:35 &lt;redbo&gt; I'm assuming that's the account name we'd rate limit on, not the authenticated user.
16:35 &lt;gundlach&gt; redbo: username is just an arbitrary string that defines a separate set of ratelimiting buckets.
16:36 &lt;gundlach&gt; redbo: if swift supports multiple users per account, then yes you'd probably want the accountname.
16:36 &lt;gholt&gt; gundlach: You just incr a timestamp-named key with a timeout of a second (or two to be safe).
16:36 &lt;gundlach&gt; [if you can think of a more generic name than 'username' i'd love to change it]
16:36 &lt;gundlach&gt; gholt: but that doesn't allow for *rolling* limits, does it?
16:37 &lt;gundlach&gt; e.g. i allow 100 requests a day.  dude performs 100 requests at 11:59pm on Thursday, then on Friday we allow 100 more at midnight?
16:37 &lt;redbo&gt; okay.  we'll hopefully have generic ACLs soon, so any user could access any account, and all of our scaling limitations are per account.
16:37 &lt;gholt&gt; Yeah, that's what I mentioned earlier. It wasn't too big a deal at 1s intervals, but might be at larger ones.
16:38 &lt;redbo&gt; well, per container and per account
16:38 &lt;gholt&gt; gundlach: For 100 per day, you could limit at 50 per twelve hours. :)
16:38 &lt;gundlach&gt; gholt: right.  i think that it'll also lead to spiky request behavior, which is just what we're trying to avoid
16:38 &lt;gundlach&gt; gholt: i thought of that and abandoned it, because they're not the same thing...
16:39 &lt;gundlach&gt; gholt: why not limit at 1 every 16 minutes or whatever that works out to?
16:39 &lt;gholt&gt; If you're trying to prevent a spike over 100, it should be the same.
16:39 &lt;gundlach&gt; gholt: (because then we're forcing regularity across time)
16:39 &lt;dendrobates&gt; release meeting today 21:00 UTC. Localtime: http://goo.gl/3ZYo Agenda at wiki.openstack.org/Meetings
16:40 &lt;gundlach&gt; i guess it does prevent the spike, but a 1-per-second rate limit is different from a 3600-per-hour rate limit, so i don't think i can cheat by dividing like that
16:42 &lt;gholt&gt; I'm not sure what/why you're limiting, but with us, we're just trying to manage system resources, not paid service levels or anything. So just preventing prolonged spikes works in our case.
16:43 &lt;gundlach&gt; yep, nova's managing system resources, but the limits are like 100/min or 100/hour.
16:43 &lt;gholt&gt; Ah, as in you can pop 100 servers today, all in one second, or throughout the day?
16:44 &lt;gundlach&gt; gholt: correct -- all at once, or throughout the hour.
16:44 &lt;gundlach&gt; though, when you put it that way, if a user hasn't been doing anything for the last hour, there's nothing stopping him from making a spike...
16:46 &lt;gholt&gt; So you're really trying to limit space resources within a time span, got it. We don't have that (except for basic bandwidth itself). Even with users eating up space as fast as they can, we should be able to add more capacity in time. It's the spikes in CPU usage and small request overload we have to manage.
16:46 &lt;cory_&gt; the rolling window concept is tough as you have to keep track of every access
16:46 &lt;gholt&gt; Though, I suppose somebody with their own Swift cluster might want time limited quotas like that.. Hmm.
16:46 &lt;gundlach&gt; cory_: right -- i was keeping a ring of timestamps per action per user
16:47 &lt;cory_&gt; and that ring has to be large enough for any possible rate limit
16:47 &lt;cory_&gt; that's going to be fun :)
16:47 &lt;gundlach&gt; cory_: well, it's just a list at the moment -- even if we had '1000 per period' that's only 4k
16:47 &lt;cory_&gt; that's true
16:47 &lt;gundlach&gt; i worked out the math for the Rackspace API (the reason i wrote this code in the first place) and we can handle 200k users
16:47 &lt;gundlach&gt; with one node
16:48 &lt;cory_&gt; I guess it is just a list of every timestamp
16:48 &lt;gundlach&gt; cory_: to be clear -- each ring is sized based on the action's limit.  e.g. we have a 5-length list for a 5/minute action, and a 100-length list for a 100/second action.
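The ring-of-timestamps approach gundlach describes maps naturally onto a bounded deque: keep the last N request times per (user, action), and refuse when the oldest of them is still inside the window. A sketch reconstructed from the description above, not the code from the paste (names are illustrative):

```python
import time
from collections import defaultdict, deque

class RollingWindowLimiter:
    """Sliding-window limiter along the lines gundlach describes: one
    ring of timestamps per (user, action), sized to the action's limit.

    An illustrative reconstruction, not gundlach's actual code."""

    def __init__(self, max_requests, per_seconds, clock=time.time):
        self.max_requests = max_requests
        self.per_seconds = per_seconds
        self.clock = clock
        # deque(maxlen=N) is the "ring": it never holds more than N stamps.
        self.rings = defaultdict(lambda: deque(maxlen=max_requests))

    def allow(self, user, action):
        now = self.clock()
        ring = self.rings[(user, action)]
        if len(ring) == self.max_requests and now - ring[0] < self.per_seconds:
            # The oldest of the last N requests is still inside the
            # window: refuse, and say how long until a slot frees up.
            return False, self.per_seconds - (now - ring[0])
        ring.append(now)  # maxlen silently drops the oldest stamp
        return True, 0.0
```

Memory is bounded by the limit itself, which is the "5-length list for a 5/minute action" sizing gundlach mentions; the cost is those N timestamps per bucket, versus a single counter for the fixed-window scheme.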
16:49 &lt;gundlach&gt; cory_: except now gholt and creiht are making me question my wisdom in not using memcache instead and throwing out the 'rolling window' concept  :)
16:49 &lt;cory_&gt; well, you seem to have completely different wants
16:49 &lt;gholt&gt; Hehe, well I'm pretty sure we're trying to solve two different, but admittedly similar, problems.
16:50 &lt;cory_&gt; yeah, the ideas are very similar but the constraints are quite a bit different
16:50 &lt;gundlach&gt; cory_: i dunno -- whether you're trying to smooth out CPU spikes or I'm trying to smooth out load on the VM hosts, both are trying to smooth out the incoming rate of requests...
16:51 &lt;gundlach&gt; tell me again why they're different?
16:51 &lt;cory_&gt; because of the 100 requests per day example
16:51 &lt;gholt&gt; One wants to limit how fast you do something, the other how much you do something.
16:52 &lt;cory_&gt; what he said
16:52 &lt;gundlach&gt; gholt: i don't think that's a real difference.  you want to limit how fast users send read/write requests.  i want to limit how fast users send reboot requests.  both take a finite amount of resources to respond to behind the scenes.
16:52 &lt;gundlach&gt; i must be in the wrong because it's 2 against one, but i still don't buy it :)
16:53 &lt;cory_&gt; that's irc for you
16:53 &lt;cory_&gt; it's kind of just a difference in how the "windows" behave
16:53 &lt;gundlach&gt; cory_: so if i did convert 100/day into 4/hour, and kept a simple counter per hour, now i've gotten rid of the rolling window, and if someone has lots of requests to perform then they'll end up saturating each hour with 4 requests and end up performing 100/day like they wanted.
16:54 &lt;cory_&gt; you're right that it may just be a semantic measurement difference
16:54 &lt;gundlach&gt; only users who don't have much work to do will be shafted by the 4/hour interpretation of the 100/day limit, and they don't mind because they, well, don't have a lot to do.
16:54 &lt;cory_&gt; you mean they have 100 tiny requests and have to wait a whole day to get them done?
16:55 &lt;gundlach&gt; cory_: if our rate limit is 100/day, i argue that the requests wouldn't be tiny
16:55 &lt;gundlach&gt; or that wouldn't be our rate limit.  instead, they are requests that we think we can't handle more than 100 of in a day per user
16:55 &lt;cory_&gt; no, I'm just trying to follow your last example
16:56 &lt;cory_&gt; I kind of don't follow this part: "users who don't have much work to do"
16:56 &lt;gundlach&gt; cory_: users who only have 12 requests per day to accomplish, for instance.
16:56 &lt;gundlach&gt; they would like to get them all done at once but must stretch them over 3 hours
16:57 &lt;cory_&gt; that's rate limiting, right? :)
16:57 &lt;gholt&gt; If you're limiting reboots independently, wouldn't you just limit to whatever each host can reasonably support? Would you really want to limit on user in that case?
16:58 &lt;gundlach&gt; gholt: reboots were a made-up example; i think RS actually rate limits at a less granular level, e.g. "100 POST requests per hour" where POST may be a server create, or a reboot, or backup
16:59 &lt;gundlach&gt; well, i give up -- i think i'll leave the code as is at least until austin is over with, for the sake of moving forward :)
16:59 &lt;gholt&gt; Hmm. Interesting... :)
16:59 &lt;cory_&gt; it's definitely a fun discussion
16:59 &lt;gundlach&gt; thanks for the discussion and feedback, guys
17:00 &lt;eday&gt; gundlach: jumping in late here, but for rate limiting, I would just do a time-decay algorithm, that way they can hit their quota in 1 second, but then it takes &lt;some time interval&gt; to drop back to 0 (and incrementally allows more if they keep requesting)
17:01 &lt;gundlach&gt; eday: how do i convert '100 requests per minute' into a decay function?
17:02 &lt;eday&gt; gundlach: tweak this: http://oddments.org/wiki/programming:python:rate_limit.py   (and instead of sleep when limit is hit, return an error code)
17:02 &lt;eday&gt; gundlach: this requires storing a single key with current rate
17:03 &lt;cory_&gt; ok, decay is a much simpler answer
17:03 &lt;cory_&gt; that's nice :)
17:04 &lt;gholt&gt; Yeah, shiny. Can you build that into memcache? :)
17:05 &lt;cory_&gt; takes too much math for my brain to figure out what the decay functions mean in terms of X per time interval
17:06 &lt;gundlach&gt; eday: cool -- i'm trying to figure out if it would work for slower rates like 100/day -- maybe using float max_rate and rate would help
17:06 &lt;gundlach&gt; eday: and if i could make it work in memcache without race conditions. thanks for the tip!
17:07 &lt;eday&gt; yeah, it needs some more parameters to see how much to increase per 'hit' and decay window, but the algo should be the same
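The time-decay idea eday suggests stores one number per key and drains it continuously, which is the classic leaky-bucket formulation. A sketch in that spirit, assuming leaky-bucket semantics; it is a guess at the shape of the approach, not the contents of eday's rate_limit.py:

```python
import time

class DecayLimiter:
    """Time-decay limiter in the spirit of eday's suggestion: a single
    decaying level per key instead of a list of timestamps.

    A hedged sketch, not the code behind eday's link; class and
    parameter names are illustrative."""

    def __init__(self, max_rate, per_seconds, clock=time.time):
        self.max_rate = float(max_rate)        # e.g. 100 requests ...
        self.per_seconds = float(per_seconds)  # ... per 60 seconds
        self.clock = clock
        self.levels = {}  # key -> (current_level, last_update)

    def allow(self, key):
        now = self.clock()
        level, last = self.levels.get(key, (0.0, now))
        # Drain the bucket at max_rate/per_seconds units per second, so
        # full capacity returns after per_seconds of silence.
        level = max(0.0, level - (now - last) * self.max_rate / self.per_seconds)
        if level + 1.0 > self.max_rate:
            self.levels[key] = (level, now)
            return False  # over quota; capacity comes back incrementally
        self.levels[key] = (level + 1.0, now)
        return True
```

This answers gundlach's memory concern directly: per key only `(level, last_update)` is stored, whatever the rate, and bursts up to max_rate are still allowed all at once.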
17:09 &lt;gholt&gt; Man, that'd be cool built into memcache. An atomic call like rate(key, max, interval) or somesuch.
17:29 &lt;DubLo7&gt; Just caught up on the rate limiting.  Doesn't that imply adding authentication of some sort to memcached as well as tracking call count per user / ip?  Sounds like an expensive addition.
17:30 &lt;gundlach&gt; DubLo7: yeah, i don't think we'd actually modify memcache -- we'd just use memcache as the storage of the map from key->(last_timestamp, counter)
17:30 &lt;gundlach&gt; where key is e.g. 'user michael performing a reboot action'
17:32 &lt;gholt&gt; Memcache as it is might work fine for slower rates, but at higher rates the read then write would miss stuff.
17:33 &lt;DubLo7&gt; gundlach: I see.  Well a bit more work because you want counts over a period of time.  key -> (array_of_timestamps) might work…
17:34 &lt;gundlach&gt; DubLo7: that's actually the alg that i have written at the moment, but eday points out that his approach does about the same thing with much less memory usage
17:34 &lt;_0x44&gt; gundlach: Why memcached instead of Redis (that we already 'sort-of' have)?
17:34 &lt;gundlach&gt; _0x44: because that's a persistent store, which feels wrong for tracking rate limiting.
17:34 &lt;DubLo7&gt; this would be super simple in redis though… yeah
17:34 &lt;gundlach&gt; _0x44: and, in Swift's case, because Swift already has a bunch of memcaches up and wanted to see if they could use this code as well
17:35 &lt;_0x44&gt; gundlach: Tell them no and shake your fist at them.
17:35 * _0x44 helps.
17:35 &lt;gundlach&gt; _0x44: that is one approach
17:36 &lt;gholt&gt; Honestly, Swift's probably going to have to go with a different approach anyway. We were just curious if one-could-fit-all. Hehe
17:36 &lt;_0x44&gt; I think my concern is really the expansion of datastores we're using. Doesn't this addition mean that memcached is now a dependency in addition to whatever other DB on the backend of the ORM?
17:36 &lt;gundlach&gt; gholt: what's wrong with what you're doing today?
17:36 &lt;gundlach&gt; _0x44: no, the rate limiter would have pluggable backends (as do several other pieces of openstack)
17:37 &lt;gundlach&gt; so it could store in memcache for swift, or in local memory for small deployments.
17:37 &lt;gholt&gt; gundlach: Well, right now we just return a refusal immediately on hitting the rate limit. We want to change that to just make the request delay executing the amount of time we want them to wait anyway. That doesn't work well with a 24hr rate limit though, hehe.
17:38 &lt;_0x44&gt; gundlach: Oh, okay then, thanks.
17:38 &lt;gundlach&gt; _0x44: i don't think of memcached as a datastore -- it's a cache.  i don't think it serves the same purpose as an ORM or redis.
17:38 &lt;_0x44&gt; gundlach: It's a cache, but data that isn't stored anywhere else would be stored there, no?
17:38 &lt;DubLo7&gt; redis solution.  key = ratelimit-userhash-timestamp, expires = rate limit time.  previous_access_count = redis keys "ratelimit-userhash*", allow_access = (previous_access_count < max_allowed)
17:39 &lt;gundlach&gt; _0x44: correct -- it would be stored there until the memcache server crashed
17:39 &lt;_0x44&gt; gundlach: So if you wanted to query on that at all (for whatever reason), from the perspective of a user it's another datastore
17:39 &lt;DubLo7&gt; I've done this already in my own project.  It works, it's fast.
17:39 &lt;_0x44&gt; (Despite being a cache)
17:40 &lt;DubLo7&gt; You'll want to put it into its own database, and key is ratelimit-processname-userhash-timestamp.
17:40 &lt;DubLo7&gt; plus it replicates across servers so easily.
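DubLo7's expiring-key pattern can be sketched without a redis server: write one key per request with a TTL equal to the limit window, and count the still-live keys for that user to decide. A dict with explicit expiry stands in for redis below (in redis this would be SETEX plus a KEYS "ratelimit-userhash*" scan); the sequence-number key suffix is an illustrative substitute for DubLo7's timestamp suffix:

```python
import time

class ExpiringKeyLimiter:
    """DubLo7's one-key-per-request scheme, sketched in pure Python.

    A dict maps key -> expiry time, standing in for redis keys with
    TTLs; names and the seq suffix are illustrative assumptions."""

    def __init__(self, max_allowed, window_seconds, clock=time.time):
        self.max_allowed = max_allowed
        self.window_seconds = window_seconds
        self.clock = clock
        self.store = {}  # key -> expiry time
        self.seq = 0     # stands in for the timestamp suffix in the keys

    def allow(self, userhash):
        now = self.clock()
        # Drop expired keys; redis does this for us via per-key TTLs.
        self.store = {k: exp for k, exp in self.store.items() if exp > now}
        prefix = "ratelimit-%s-" % userhash
        # Count this user's still-live keys: KEYS "ratelimit-<hash>*".
        if sum(1 for k in self.store if k.startswith(prefix)) >= self.max_allowed:
            return False
        self.store["%s%d" % (prefix, self.seq)] = now + self.window_seconds
        self.seq += 1
        return True
```

Unlike the fixed-window counter, this gives a true rolling window (each request ages out exactly one window after it was made), at the cost of one key per request, which echoes the memory tradeoff discussed earlier.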
17:41 &lt;gundlach&gt; _0x44: when you say you're concerned about the expansion of datastores, i think that's missing the point.  memcache serves a purpose -- i need to store some stuff in memory but many servers need to access the same memory.  redis/mysql/cassandra/gfs serve another purpose: i need to persist some data that many servers can access.
17:42 &lt;gundlach&gt; now, if i were arguing for using memcached for one purpose, and at the same time requiring some other in-memory store for another purpose, then that would be concerning
17:42 &lt;gundlach&gt; [though i don't know of any other in-memory stores because memcached is so ubiquitous]
_0x44gundlach: I agree, except that you can accomplish what you're trying to with memcached in redis.17:43
gundlach_0x44, DubLo7: we're discussing in #openstack-meeting today whether redis will be deprecated; so i'll have to wait till after that to talk about depending on redis :)17:44
DubLo7gundalch: Is this all nasa stuff now btw?17:44
gholtgundlach: Wasn't the only reason for not using redis the persistence? If it wasn't persisting, would it matter?17:44
DubLo7ah, ok17:44
creihtcan redis provide the same performance as memcache?17:45
gholtNot sure it matters at slow rates. :)17:45
creihtheh.. true17:46
* creiht has too much swift on the brain17:46
_0x44creiht: According to antirez it's just as fast but supports more datastructures17:46
DubLo7creiht: redis is on part with it an the programmer lists the o(n) cost of each command.  In my experience it's comparible speed and being able to query keys and extra key types makes any loss of speed so worth it17:47
DubLo7memcached is a blackbox.  Once you put something in you don't know what's still there.17:48
creihtcan you make redis not snapshot the memory to disk?17:51
creihtand I wonder how much of a performance impact snapshotting has17:51
edaygundlach: i still don't really consider redis a first class persistent store, it's more of a cache that snapshots to disk (which some memcached versions have as well). we're just using it as a persistent store :)17:52
*** dendrobates is now known as dendro-afk17:52
*** kapil__ has joined #openstack17:52
edaycreiht: I'm pretty sure you can disable the snapshots17:52
DubLo7it's open source and cleanly written.  You could disable the command very easily.  Setting the db to /dev/null could work, but I'm not sure what would happen on startup17:53
DubLo7I'll try it out...17:53
*** replicant has quit IRC17:54
*** mdomsch has quit IRC17:59
*** littleidea has quit IRC18:02
gundlachgholt: re 'slow rates' -- while an individual user would have slow rates, we're targeting supporting 1 million users, so the overall load should be high.18:06
DubLo7snapshots can be disabled, but the normal rdb database can't be disabled as-is18:06
DubLo7chmod -r dump.rdb and redis lives happily with a 10 byte db18:06
gundlach_0x44: i hadn't heard that redis was as performant as memcache... i am surprised18:06
gholtgundlach: Yeah, that was more a tease, and not a great one either, heh.18:07
gundlachoh, ok :)18:07
DubLo7I've submitted patches to redis before.  I can add a memory only option if you guys think it's important.  Although I doubt it will be picked up in the official system.  It's about 20 lines of code or less.18:10
gundlachDubLo7: i don't think it's important -- we've got a working rate limiting system now, and were just talking about how to optimize it.18:11
gundlachwe aren't hurting to optimize (yet).18:11
gundlachthanks, though!18:12
DubLo7no problem.18:14
*** dendro-afk is now known as dendrobates18:16
*** littleidea has joined #openstack18:20
*** pvo has joined #openstack18:26
*** ChanServ sets mode: +v pvo18:26
*** tobym has quit IRC18:27
*** technoid_ has joined #openstack18:28
*** joearnold has quit IRC18:44
*** aliguori has joined #openstack18:51
*** mtaylor has quit IRC18:52
*** joearnold has joined #openstack18:53
*** p-scottie has joined #openstack18:59
*** stewart has quit IRC19:08
*** tobym has joined #openstack19:11
*** DogWater has quit IRC19:16
*** User113 has joined #openstack19:16
*** zooko has joined #openstack19:16
*** maple_bed has joined #openstack19:17
*** jakedahn has joined #openstack19:18
*** maplebed has quit IRC19:18
*** jakedahn has quit IRC19:27
*** amscanne__ has joined #openstack19:28
*** skippyish has joined #openstack19:29
*** rlucio has joined #openstack19:29
*** amscanne_ has quit IRC19:31
*** amscanne_ has joined #openstack19:35
*** amscanne__ has quit IRC19:39
sorendendrobates: I'm not sure what detail to add to https://blueprints.edge.launchpad.net/nova/+spec/austin-xen/+edit, really. There's not much to say. "Check that it works. If it doesn't, fix it."19:45
sorenWell, that's not entirely true. It can probably be summed up as "Add a libvirt template for Xen," though.19:47
sorendendrobates: Would that be better?19:48
*** dabo has joined #openstack19:49
*** skippyish has quit IRC19:49
*** ctennis has quit IRC19:50
*** maple_be1 has quit IRC19:50
*** jakedahn has joined #openstack19:51
*** anotherjesse has joined #openstack19:51
*** rlucio has quit IRC19:55
*** keshav2 has joined #openstack19:57
*** kapil__ has quit IRC19:57
*** vishy has quit IRC19:59
*** devcamcar has joined #openstack19:59
*** vishy has joined #openstack19:59
*** tobym has quit IRC20:00
*** Rudd-O has joined #openstack20:03
Rudd-Ohello guys20:04
*** devcamcar has quit IRC20:06
uvirtbotNew bug: #638396 in swift "saio add-user command needs to be code" [High,New] https://launchpad.net/bugs/63839620:06
Rudd-Ohey guys20:07
Rudd-Owhats up20:07
joshuamckentyone hour from game time20:07
*** devcamcar has joined #openstack20:08
Rudd-Ohey dendrobates20:09
Rudd-Oyou are with rackspace, right?20:09
*** allsystemsarego has quit IRC20:09
dendrobatesRudd-O: yep20:09
Rudd-Oah cool :-)20:09
dendrobatesand you are?20:09
Rudd-Oname's Manuel20:09
Rudd-OI'm with Cloud.com20:09
Rudd-Oand VERY interested in Nova20:09
dendrobatesah, nice to meet you.20:09
*** ctennis has joined #openstack20:10
dendrobatesjoshuamckenty: are you back in Canada?20:10
joshuamckentyNo, still in Italy20:11
anotherjesseback in italy20:11
joshuamckentyyeah, that20:11
joshuamckentyActually, going to Switzerland tomorrow20:11
joshuamckentyBut I'm trying to figure out how to get back for the design summit20:11
joshuamckenty(Can't believe it's going to be in Texas, though...)20:11
* anotherjesse recommends plane ... but a ocean voyage might be nice20:11
*** jakedahn has quit IRC20:12
creihtA rocket would be really fast :)20:12
* joshuamckenty recommends getting someone to buy him a plane ticket...20:12
joshuamckentyooh, I *do* work for NASA...20:12
Rudd-Oahhh switzerland20:12
*** sirp1 has quit IRC20:13
Rudd-Obeautiful landscape20:13
joshuamckentyI wouldn't know - I'm never outside of the office, and the hotel20:13
joshuamckentyNo point, really, with the quality of the swiss beer20:13
*** tobym has joined #openstack20:15
burrisdoes it bother you that SWIFT_HASH_PATH_SUFFIX has a default value and if you neglect to set the environment variable then a server might use a different suffix and cause chaos?20:16
*** ambo has left #openstack20:16
creihtgholt: -^ :)20:17
gholtburris: Good point. And I hate that thing anyway. :)20:18
*** joearnold has joined #openstack20:18
*** benoitc has quit IRC20:18
*** ded has quit IRC20:19
gholtburris: It'd be good to patch the code to blow up with no SWIFT_HASH_PATH_SUFFIX yet. Not sure how that'd affect all the automated test environments though.20:19
*** benoitc has joined #openstack20:19
joshuamckentyI think Hudson can handle some ENV variables, if necessary20:21
*** devcamcar has quit IRC20:21
gholtchuck: I think you're the one who added that line of code anyway. :P And notmyname the original line before OpenStack. :P :P20:21
*** devcamcar has joined #openstack20:22
gholtcreiht: ^^ stupid multi-nicks hehe20:22
Rudd-Ohudson can indeed handle environment variables20:22
*** ded has joined #openstack20:23
*** devcamcar has quit IRC20:23
*** benoitc has quit IRC20:23
joshuamckentyOh, speaking of which: is it soren or dendrobates who has the sexy pylint setup for hudson?20:24
creihtgholt: I thought you added the hash stuff :)20:24
gholtNot that endcap thing. Man that was a whole thing. Remember why it was added?20:24
anotherjesseis the hudson setup in a bzr repo? so others can set it up if they want20:24
notmynamecreiht: I added the hash stuff20:25
gholtUnder duress though, as I recall. :)20:25
joshuamckentyall kludgey code is under duress, in my experience.20:25
joshuamckentySometimes the duress is alchohol, though20:25
joshuamckentyI feel somewhat privileged that my worst code was never open source20:26
joshuamckentywhich should shock those of you who've looked carefully at nova - yes, sadly, I've done worse ;)20:26
joshuamckentyIs there a good convention for an IRC "show of hands"?20:27
*** joearnold has quit IRC20:27
gholto/  ?20:28
dendrobatesi think mordred did most of the hudson setup.20:28
*** sirp1 has joined #openstack20:28
joshuamckentyah, right. Thanks20:29
dendrobatesrelease meeting in #openstack-meeting in 30 min.20:29
*** davidg has quit IRC20:31
Rudd-OI always have had the belief that it's better to release crappy code than to not release at all20:31
Rudd-Ohowever crappy it is20:31
*** littleidea has quit IRC20:31
joshuamckentyOh, I agree completely. I wrote a lot of software under contract, however.20:32
*** devcamcar has joined #openstack20:33
joshuamckentyNASA and OpenStack was actually the first occasion where I won the open source argument with my client.20:33
joshuamckentyAnd, to be fair, anotherjesse actually won the fight. I just started it.20:33
burrisgholt, joshuamckenty I think it is also possible to wrap the test function in setup.py with a func that sets the environment variable and unsets it before/after the test run20:33
dendrobatesjoshuamckenty: really?20:33
*** p-scottie has quit IRC20:34
joshuamckentydendrobates: Well, I worked on Flock when it was open source, but that was a decision they made ahead of time.20:34
zula lot of the canadian government still doesn't use open source unfortunately20:34
joshuamckentyNetscape was SUPPOSED to be open source when we worked on it, but we never won that fight with AOL legal.20:34
burrisgholt, joshuamckenty but I think it would be better if SWIFT_HASH_PATH_SUFFIX wasn't an environment variable but lived in a configuration file20:34
gholtburris: True, that'd work with unit tests and probably probe tests, but not functional tests which could be run against a distant cluster.20:34
joshuamckentyI generally prefer config over ENV as well, but not a mix of both20:35
joshuamckentyare there many other swift ENV variables?20:35
burrisI don't think there are any other env variables for swift20:35
gholtGood idea on conf value.20:35
burristhis one is dangerous20:35
creihtwell there is the MAKE_SWIFT_WORK env variable20:36
gholtOnly other env vars are optional test ones and optional command line tools one.20:36
joshuamckentyah, yeah. I always forget to set that one20:36
creihtWe started with a config variable, but there are several different configs, and they would all need that variable20:36
*** DubLo7 has quit IRC20:37
gholtIt could be its own config file. Lame I know, but it'd work.20:37
joshuamckentyyou need config file includes20:37
burrisI could fix it for regular use and unit tests but it would break your automated tests and installs20:37
creihtgholt: didn't you go down this road for a while before notmyname took it over?20:37
creihtbecause didn't we also think about putting it in the ring?20:38
gholtAh, probably. You know my memory.20:38
burrisobject server uses hash_path (which uses SWIFT_HASH_PATH_SUFFIX) but it doesn't use the ring...20:39
gholtcreiht: See? burris remembers the whole thing, hehe20:40
*** devcamcar has quit IRC20:40
creihtburris: right, and why we dropped the ring idea20:40
*** devcamcar has joined #openstack20:41
gholtMaking it its own configuration file is wonky, but easy to do and distribute and doesn't have the environ issues.20:41
burrisI don't think having its own config file is that lame, considering it's very important that every node agree on the salt for the entire lifetime of the cluster20:41
*** anotherjesse has quit IRC20:43
*** devcamcar has quit IRC20:43
burrisit also doesn't make sense to put it in the ring since the ring changes but the salt never does20:43
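For context on why a mismatched suffix causes the chaos burris warns about: hash_path salts the storage path with the shared suffix before hashing. This is a simplified sketch of that behaviour, not Swift's exact implementation:

```python
import hashlib

# Placeholder value; the whole discussion is that every node in a cluster
# must share one suffix, set once and never changed.
HASH_PATH_SUFFIX = 'changeme'


def hash_path(account, container=None, obj=None, suffix=HASH_PATH_SUFFIX):
    """Simplified sketch: hash the account/container/object path salted
    with the cluster-wide suffix, so on-disk locations aren't guessable."""
    paths = [p for p in (account, container, obj) if p]
    return hashlib.md5(
        ('/' + '/'.join(paths) + suffix).encode()).hexdigest()
```

Two nodes that disagree on the suffix compute different hashes for the same object and so look in different places on disk, which is why it must never vary across a cluster or over its lifetime.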
*** devcamcar has joined #openstack20:44
joshuamckentyI've been out of touch, but have we looked at standardizing the CLI / config file handling between nova and swift at all?20:44
Rudd-Oso guys20:44
Rudd-Owhy gflags rather than optparse or argparse?20:44
creihtI think I gave jaypipes too hard of a time, and he gave up :/20:44
joshuamckentytrue answer, or cool answer?20:44
gholtjoshuamckenty: Yes, talked, but pushed it out past Austin release for now.20:44
joshuamckentywe went with gflags cause termie liked it more than we liked anything else20:45
joshuamckentyIf I was going to switch now I would consider cement20:45
joshuamckentyactually, the way that gflags are declared within the module that they're used is both cool, and annoying20:45
joshuamckentygholt: makes sense20:46
creihtwe used optparse because it didn't add another dependency :)20:46
joshuamckentygholt: I just want us to have some idea of where we're going to seek commonality before the next four major openstack components get started20:46
burrisso if I fixed hash_path to get the suffix out of a config file would you guys accept the patch?  what would need to be done to increase the likelihood of acceptance?20:46
joshuamckentyI'm always tempted to say "beer" to that question, but it's not really true20:47
burristhe source and solution to all of life's problems...20:47
creihtburris: unless anyone else comes up with a better idea, I'm open to that option20:47
burrisI just don't want to create too much extra work for you so I'm worried about what will break20:47
gholtburris: Go for it. Just need to be on the contributors list or an employee of a member company.20:48
burrisI work for Cloudscaling20:48
creihtas long as you set the default to what we had before, should be fine20:48
creihtat least shouldn't break anything (including tests)20:48
*** dgoetz has joined #openstack20:48
gholtburris: You're set to contribute then. :)20:49
creihtburris: the worst that can happen is that you will get half way done then gholt will realize that he had already implemented it in a branch somewhere and beat you to the punch :)20:49
*** anotherjesse has joined #openstack20:49
burrisI think the issue is there would be no default since it would be in a config file, I haven't looked to see how those are setup or if the ones that are there are particular to our install of it20:49
creihtburris: there should be a reasonable default, but we could print a warning when something is run with the default20:50
creiht(not set in the config)20:51
gholtHrm. I like the idea of no default personally. :P20:51
gholtBlow up until you make the proper conf file.20:51
burristhat would be bad in production because people don't read the logs to see the warning, then you have a server whose hash_path returns different values than all the others20:51
burrisyeah the default is the thing that is really bothering us, env variables are also lame20:51
gholtOf course, that goes against what other folks want with a set of packages that installs a working system. .... Unless we make a swift-lame-settings package?20:53
*** devcamcar has quit IRC20:53
*** adjohn has joined #openstack20:53
*** devcamcar has joined #openstack20:54
creihtyeah I'm torn both ways... trying to find a decent balance between the two20:55
*** devcamcar has quit IRC20:55
*** p-scottie has joined #openstack20:57
*** devcamcar has joined #openstack20:57
*** benoitc has joined #openstack20:57
burrisright now the swift setup.py doesn't install a working system; isn't swift-solo the only thing that does?20:57
creihtburris: there are some other scripts as well20:58
creihtbut basically do the same thing20:58
creihtthere was talk about setting up the ubuntu packaging so that it would set up a simple self-contained system20:59
jaypipescreiht: ha ha.20:59
joshuamckentyyeah, I thought that was SAIO21:00
joshuamckentyor some such21:00
jaypipescreiht: no, just haven't gotten around to writing the proposal email to the ML :)21:00
jaypipescreiht: the wiki proposal is done though...21:00
burriswhich scripts?  they could be modified to read a bunch of bytes out of /dev/urandom and put them in the config file21:00
joshuamckentymeeting time21:01
creihtsomeone made a bash script and blogged about it, I think the NASA guys have some puppet scripts21:01
creihtthere isn't anything official though21:01
joshuamckentywe have chef scripts, I think21:01
joshuamckentymight have puppet scripts, too21:01
joshuamckentyit's a bake-off21:01
burrisit's going to break a lot of people's stuff but I think it's important, I wonder how many people are running the default and don't even know they needed to change it before storing anything in their cluster?21:01
creihtburris: I also meant to document that, but got lost in the shuffle21:02
burrisit's not too late to change how it works then :-)21:03
burrisI'll whip something up21:03
*** ded has quit IRC21:03
creihtburris: but yeah the current chef scripts are just for dev (for which the string isn't so important)21:04
creihtburris: but it would probably be a good idea to get something better in before the austin release21:05
gholtGo for the blow up if not set? Packaging/scripts can handle the rest?21:07
burristhat sounds good to me21:08
creihtsounds reasonable to me21:08
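What was just agreed to — the suffix lives in a config file, startup blows up if it's missing, and packaging scripts generate it once from /dev/urandom — could look roughly like this. The file path, section name, and option name here are illustrative assumptions, not the eventual implementation:

```python
import os
from configparser import ConfigParser, NoOptionError, NoSectionError


def read_hash_suffix(conf_path):
    """Blow up if the suffix isn't set: no default, no env fallback."""
    parser = ConfigParser()
    if not parser.read(conf_path):
        raise SystemExit('Unable to read %s' % conf_path)
    try:
        return parser.get('swift-hash', 'swift_hash_path_suffix')
    except (NoSectionError, NoOptionError):
        raise SystemExit(
            'swift_hash_path_suffix must be set in %s' % conf_path)


def write_hash_suffix(conf_path):
    """What a packaging/install script might do: generate the suffix
    once from the kernel's entropy pool and persist it."""
    suffix = os.urandom(16).hex()
    with open(conf_path, 'w') as f:
        f.write('[swift-hash]\nswift_hash_path_suffix = %s\n' % suffix)
    return suffix
```

Generating the value at install time is one way to keep "blow up if not set" compatible with gholt's concern that the packages should still install a working system.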
Rudd-OI still get the problem with pidfile21:09
Rudd-OI am trying to start nova-compute21:09
*** joearnold has joined #openstack21:09
Rudd-Oit spits out AttributeError: pidfile21:10
Rudd-Ois there a commandline parameter I need to pass?21:10
*** burris has quit IRC21:10
xtoddxRudd-O: never seen that.  are you passing a --flagfile= flag?21:11
*** perestrelka has quit IRC21:11
uvirtbotNew bug: #638449 in nova "Cannot update the flat network IP address list" [Undecided,New] https://launchpad.net/bugs/63844921:11
*** perestrelka has joined #openstack21:11
*** vvuksan has joined #openstack21:13
*** btorch has quit IRC21:16
*** devcamcar has quit IRC21:16
*** pandemicsyn has quit IRC21:16
*** pandemicsyn has joined #openstack21:16
*** littleidea has joined #openstack21:16
*** btorch has joined #openstack21:16
*** ChanServ sets mode: +v pandemicsyn21:17
creihtoh man where did burris go21:18
uvirtbotLaunchpad bug 638457 in swift "Refactor SWIFT_HASH_PATH_SUFFIX to be in a config file" [High,New]21:18
Rudd-Ohow's that work? mind if I ask even if it is a stupid question?21:25
xtoddxyou can collect flags and stick them in a place like /etc/nova/proxy-server.conf21:25
*** devcamcar has joined #openstack21:26
xtoddxand --flagfile=/etc/nova/proxy-server.conf and store all your flags there21:26
*** p-scottie has quit IRC21:26
xtoddxsome of the binaries have it baked in, others use the default, which i think is nova.conf in the current dir21:26
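The --flagfile mechanism xtoddx describes works by expanding each non-comment line of the named file into an argument before flag parsing. python-gflags does this itself; this stand-alone sketch just illustrates the expansion step:

```python
def expand_flagfiles(argv):
    """Replace each --flagfile=PATH argument with the flags listed in
    that file, one flag per line, skipping blanks and # comments."""
    out = []
    for arg in argv:
        if arg.startswith('--flagfile='):
            with open(arg.split('=', 1)[1]) as f:
                for line in f:
                    line = line.strip()
                    if line and not line.startswith('#'):
                        out.append(line)
        else:
            out.append(arg)
    return out
```

(The real implementation also handles flagfiles that reference further flagfiles; this sketch ignores that.) So a file like /etc/nova/nova.conf containing one `--flag=value` per line ends up equivalent to passing those flags on the command line.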
uvirtbotNew bug: #638457 in swift "Refactor SWIFT_HASH_PATH_SUFFIX to be in a config file" [High,New] https://launchpad.net/bugs/63845721:27
*** joschi___ has joined #openstack21:31
*** devcamcar has quit IRC21:31
*** joschi has quit IRC21:31
Rudd-OI see21:31
*** klord has quit IRC21:32
*** p-scottie has joined #openstack21:33
*** pharkmillups has quit IRC21:33
*** pharkmillups has joined #openstack21:33
*** anotherjesse has quit IRC21:37
*** p-scottie has quit IRC21:37
*** blpiatt has quit IRC21:38
*** burris has joined #openstack21:38
*** devcamcar has joined #openstack21:40
*** rnewson has quit IRC21:42
*** devcamcar has quit IRC21:42
*** devcamcar has joined #openstack21:43
*** p-scottie has joined #openstack21:44
*** devcamcar has quit IRC21:44
*** maple_be1 has joined #openstack21:44
*** devcamcar has joined #openstack21:46
creihtburris: https://bugs.launchpad.net/swift/+bug/63845721:46
uvirtbotLaunchpad bug 638457 in swift "Refactor SWIFT_HASH_PATH_SUFFIX to be in a config file" [High,New]21:46
burrisyes thanks!21:47
creihtTake that over if you don't mind (I don't know what user you are on launchpad)21:47
burrisI will, I think I have to create a new user21:48
*** vvuksan has quit IRC21:49
*** ded has joined #openstack21:52
*** joearnold has quit IRC21:56
*** stewart has joined #openstack21:57
*** jdarcy has quit IRC21:57
edayvishy: conflicts :)22:03
*** devcamcar has quit IRC22:03
*** devcamcar has joined #openstack22:04
*** dabo has quit IRC22:07
*** devcamcar has quit IRC22:07
*** devcamcar has joined #openstack22:09
vishyeday: on it22:11
*** p-scottie has quit IRC22:12
*** stewart has quit IRC22:13
*** pvo has quit IRC22:14
*** devcamcar has quit IRC22:14
*** devcamcar has joined #openstack22:15
*** silassewell has joined #openstack22:19
*** devcamcar has quit IRC22:19
*** devcamcar has joined #openstack22:20
*** amscanne_ has quit IRC22:21
vishyeday: resolved22:23
*** DubLo7 has joined #openstack22:23
*** rlucio has joined #openstack22:25
*** devcamcar has quit IRC22:25
*** pharkmillups has quit IRC22:26
*** stewart has joined #openstack22:30
*** rnewson has joined #openstack22:30
*** DubLo7 has quit IRC22:30
*** adjohn has quit IRC22:31
*** jakedahn has joined #openstack22:34
*** devcamcar has joined #openstack22:35
*** rnewson has quit IRC22:40
*** devcamcar has quit IRC22:40
*** devcamcar has joined #openstack22:41
*** miclorb_ has joined #openstack22:47
*** jakedahn_ has joined #openstack22:49
*** npmap has quit IRC22:50
*** devcamcar has quit IRC22:50
*** jakedahn has quit IRC22:53
*** gundlach has quit IRC22:53
*** jakedahn_ has quit IRC22:53
*** gundlach has joined #openstack22:57
*** jkakar has joined #openstack22:58
*** p-scottie has joined #openstack22:59
*** devcamcar has joined #openstack23:00
*** vvuksan has joined #openstack23:00
*** aliguori has quit IRC23:03
*** maple_bed has quit IRC23:03
*** sirp1 has quit IRC23:08
*** sirp1 has joined #openstack23:09
*** joearnold has joined #openstack23:12
*** dendrobates is now known as dendro-afk23:13
*** devcamcar has quit IRC23:13
*** devcamcar has joined #openstack23:15
*** gasbakid has joined #openstack23:16
*** devcamcar has quit IRC23:16
*** devcamcar has joined #openstack23:17
*** pvo has joined #openstack23:22
*** ChanServ sets mode: +v pvo23:22
*** skippyish has joined #openstack23:25
*** amscanne_ has joined #openstack23:26
*** ded has quit IRC23:37
*** devcamcar has quit IRC23:37
*** devcamcar has joined #openstack23:38
*** sirp1 has quit IRC23:39
*** gundlach has quit IRC23:39
*** Rudd-O has quit IRC23:39
*** zheng_li has quit IRC23:39
*** zheng_li has joined #openstack23:40
*** pvo has quit IRC23:42
*** pvo has joined #openstack23:43
*** pvo has joined #openstack23:44
*** ChanServ sets mode: +v pvo23:44
*** stewart has quit IRC23:44
*** zheng_li has quit IRC23:47
*** devcamcar has quit IRC23:47
*** ArdRigh has joined #openstack23:47
*** devcamcar has joined #openstack23:48
*** pvo has quit IRC23:48
*** Rudd-O has joined #openstack23:52
*** devcamcar has quit IRC23:52
*** tobym has quit IRC23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!