Tuesday, 2010-12-21

00:08 *** jordandev has quit IRC
00:12 *** jbaker has quit IRC
00:12 *** jordandev has joined #openstack
00:16 *** rogue780 has quit IRC
00:19 *** joearnold has quit IRC
00:21 <uvirtbot> New bug: #692803 in nova "instances are not resumed after node reboot" [Undecided,New] https://launchpad.net/bugs/692803
00:24 *** winston-d has joined #openstack
00:25 *** dfg_ has quit IRC
00:27 *** jordandev has quit IRC
00:37 <uvirtbot> New bug: #692805 in nova "manually running euca-reboot-instances fails after node reboot" [Undecided,New] https://launchpad.net/bugs/692805
00:41 *** HouseAway is now known as AimanA
00:44 *** _skrusty has quit IRC
00:53 *** ar1 has joined #openstack
00:56 *** _skrusty has joined #openstack
00:59 *** irahgel1 has left #openstack
01:04 *** daleolds has joined #openstack
01:07 <vishy> eday: feel like helping me debug eventlet + rpc.call issues?
01:07 <vishy> termie isn't around atm
01:09 <vishy> eday: so nested calls are broken
01:09 <vishy> sometimes they just never return
01:10 <eday> where are they nested?
01:10 <vishy> and sometimes they raise an exception
01:11 <vishy> did you get my pm or did i paste too many lines?
01:11 <eday> I don't see how they could be nested
01:11 *** sophiap_ has joined #openstack
01:11 *** sophiap has quit IRC
01:11 *** sophiap_ is now known as sophiap
01:12 <vishy> meaning if you are in the middle of a call and you try to do another one
01:12 <vishy> as in a call to compute that calls network
01:12 <vishy> it goes boom if it is on the same host
01:13 <eday> is this when not going through rabbit?
01:14 <eday> might need to pool the connection objects for everything, not just temp consumers then
01:16 <vishy> this is the test that blows things up
01:17 <vishy> it doesn't blow up when using real rabbit
01:17 *** zaitcev has joined #openstack
01:18 <vishy> eday: i get this RuntimeError: Second simultaneous read on fileno 8 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_multiple_reader_prevention(False)
01:18 <vishy> eday: when i try to listen on multiple queues with the same object
01:19 *** kevnfx has joined #openstack
01:19 <eday> try this in rpc.cast
01:19 <eday> -    conn = Connection.instance()
01:19 <eday> +    conn = Connection.instance(True)
01:20 <eday> err, .call
01:20 *** gaveen has quit IRC
01:21 *** dendro-afk is now known as dendrobates
01:24 *** Ryan_Lane has quit IRC
01:24 <eday> the rpc.Connection class assumes only one instance/process, unless you pass True to it
01:25 <vishy> correction: it is broken with real rabbit too
01:25 <eday> which is why we pass True for temp queues, since it needs to create one for return first and one for sending
01:25 <vishy> but only half of the time
01:25 <vishy> and that change didn't help
01:28 <vishy> so i tried modifying instance to return a new object every time
01:29 <vishy> still get simultaneous reads
01:29 <vishy> in fetch
01:29 <eday> thats with rabbit?
01:30 <eday> so, without rabbit, I think it locks up because the thread is blocking on a wait (what I'm seeing) before it can run the nested call
01:31 <vishy> yeah what i don't get is how is the callback hitting before wait_msg.__call__() is going
01:32 <vishy> this is where monkeypatching can get painful
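The Connection.instance() behavior eday describes, a process-wide singleton unless you pass True, can be sketched like this. This is a simplified illustration following the discussion above, not the actual nova rpc code:

```python
class Connection:
    """Sketch of a per-process singleton connection.

    Passing new=True bypasses the singleton, which is what eday's
    `Connection.instance(True)` suggestion does: a nested rpc call then
    gets its own connection instead of sharing one socket between two
    greenthreads (the cause of the "second simultaneous read" error).
    """
    _instance = None

    def __init__(self):
        self.connected = True  # stands in for opening a real AMQP socket

    @classmethod
    def instance(cls, new=False):
        if new:
            return cls()           # fresh connection for this caller only
        if cls._instance is None:
            cls._instance = cls()  # shared, process-wide connection
        return cls._instance

shared_a = Connection.instance()
shared_b = Connection.instance()
private = Connection.instance(True)
assert shared_a is shared_b
assert private is not shared_a
```

With the default, every consumer in the process multiplexes one socket; the flag trades that footprint saving for isolation between greenthreads.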
01:33 *** odyi has quit IRC
01:34 <vishy> i'm blaming the consumer.wait(limit=1)
01:34 <eday> hmm, fakerabbit is a singleton too, I wonder if that's borking it
01:34 *** dragondm has quit IRC
01:35 *** odyi has joined #openstack
01:35 *** reldan has quit IRC
01:36 <vishy> let me try switching
01:37 <vishy> hmm that just made it fail completely
01:38 *** daleolds has quit IRC
01:38 <vishy> i'm blaming the greenthread.sleep(0)
01:40 <eday> hmm, still not sure what that is doing :)
01:41 <eday> I guess that is a cooperative yield
01:41 *** roamin9 has joined #openstack
01:44 *** reldan has joined #openstack
01:46 <vishy> I emailed termie, so hopefully he has an idea.
01:46 <vishy> unless you have any other brilliant things we could try
01:47 <eday> well, creating a new connection for each rpc call should fix rabbit
01:48 <vishy> it doesn't though
01:48 <vishy> still get duplicate reads on fetch
01:49 <vishy> oo just noticed there is one in msg_reply as well
01:49 <vishy> let me try there
01:50 <eday> I would just change it at the top
01:50 <eday> change the default in Connection.instance to True
01:52 <vishy> yeah i tried that
01:52 <vishy> still gives me duplicate reads
01:53 <eday> I'll have to look further, removing the singleton fds should have done it
01:53 <eday> gotta run now though
01:54 <vishy> that is what I'm trying to do
01:59 <vishy> ok might have figured something out
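eday's earlier suggestion to "pool the connection objects for everything" amounts to something like the following. This is a generic checkout/checkin pool sketched with a plain queue, not the eventlet-specific pool nova would actually use:

```python
import queue

class ConnectionPool:
    """Minimal connection pool: each caller checks a connection out for
    the duration of its rpc call, so no two concurrent callers ever
    read from the same socket (avoiding eventlet's simultaneous-read
    error)."""

    def __init__(self, factory, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def get(self):
        # blocks if all connections are checked out
        return self._pool.get()

    def put(self, conn):
        self._pool.put(conn)

# Hypothetical usage: each rpc call borrows its own connection.
pool = ConnectionPool(lambda: object(), size=2)
c1 = pool.get()
c2 = pool.get()
assert c1 is not c2      # two in-flight calls never share a connection
pool.put(c1)
assert pool.get() is c1  # returned connections are reused, not recreated
```

The same shape works whether "connection" means an AMQP channel or a raw socket; the invariant is simply one reader per file descriptor at a time.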
01:59 *** _skrusty has quit IRC
02:00 *** Ryan_Lane has joined #openstack
02:06 <nelson__> is swift extensible, short of modifying the source, using plugins?
02:06 <nelson__> and, is there a nagios script? (I couldn't find one.)
02:12 *** _skrusty has joined #openstack
02:13 *** hadrian has quit IRC
02:17 *** opengeard has joined #openstack
02:21 *** reldan has quit IRC
02:29 <notmyname> nelson__: still there?
02:29 <nelson__> are you?
02:29 <notmyname> nelson__: swift is extensible with wsgi middleware. familiar with it?
02:29 <nelson__> yes! Thanks, that's the answer for that checkbox.
02:30 <nelson__> and ... nagios? We can probably write that if needed.
02:31 <notmyname> since we (Rackspace) use swift as cloud files, we don't have all the parts of the product open-sourced. all the storage stuff is there, but the billing, auth, and cdn stuff is product specific and kept in house. most of that is really just wsgi middleware we deploy internally
02:32 <notmyname> we don't have any special hooks for nagios or other monitoring stuff (other than looking at the swift logs)
02:32 <nelson__> sure, that makes sense.
02:32 <nelson__> okay, I'll put in a "no".
02:32  * notmyname wonders if the cloudkick acquisition could result in some openstack monitoring goodness
02:33 <notmyname> nelson__: what's the nagios question, specifically? I can ask our ops guys
02:33 *** jdurgin has quit IRC
02:33 *** jbaker has joined #openstack
02:34 <nelson__> it's an open source version of What'sUp?
02:34 <nelson__> (I think). We get real-time reports of service accessibility.
02:34 <notmyname> ya, but what's the question you are trying to answer?
02:35 <nelson__> Nagios is just a framework. You need a per-service script which tests the service and then reports back to the framework.
02:35 <notmyname> ah, ok
02:35 *** reldan has joined #openstack
02:35 <nelson__> I don't think it's going to be that hard to write one; but why write one if one exists?
02:35  * nelson__ hugs opensource.
02:35 *** dirakx has joined #openstack
02:35 <notmyname> I'm not really familiar with nagios either, so I'd like to ask our ops guys who run the monitoring stuff
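The per-service script nelson__ describes is mostly a matter of mapping a probe result to Nagios exit codes (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN). A sketch of the decision logic, with the actual HTTP probe of the swift proxy left out since any endpoint here would be hypothetical:

```python
# Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def swift_status_to_nagios(http_status, latency_s, warn_latency=1.0):
    """Map an HTTP probe of the swift proxy to a Nagios (exit_code, message).

    http_status is the response code (None if the proxy was unreachable),
    latency_s is how long the probe took.
    """
    if http_status is None:
        return CRITICAL, "CRITICAL: swift proxy unreachable"
    if http_status >= 500:
        return CRITICAL, "CRITICAL: swift proxy returned %d" % http_status
    if latency_s > warn_latency:
        return WARNING, "WARNING: swift responded in %.2fs" % latency_s
    return OK, "OK: swift proxy healthy (%d)" % http_status

assert swift_status_to_nagios(200, 0.1)[0] == OK
assert swift_status_to_nagios(503, 0.1)[0] == CRITICAL
assert swift_status_to_nagios(None, 0.0)[0] == CRITICAL
assert swift_status_to_nagios(200, 5.0)[0] == WARNING
```

In a real check you would time a HEAD/GET against the proxy, then `code, msg = swift_status_to_nagios(status, elapsed)`, print the message, and `sys.exit(code)` so Nagios can pick it up.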
02:45 <nelson__> http://wikitech.wikimedia.org/view/Media_server/Distributed_File_Storage_choices <--- if anybody wants to do a once-over on that, I'd appreciate it. Obviously only bother with the OpenStack parts. :)
02:47 <notmyname> nelson__: of course, I take issue with a few things :-)
02:48 <nelson__> Oh, feel free to correct me. I was mostly working off the documentation and whatever else I could find, so I almost certainly have some things wrong / incomplete.
02:49 <notmyname> let's start with the matrix
02:49 <notmyname> I'd say direct HTTP is supported with swift, unless you mean something other than what I'm thinking
02:50 <nelson__> probably. for me, "direct http" means "can we expose this to the squids / varnishes"?
02:50 <nelson__> if we need to have a web server in front, then the answer is "no".
02:51 <nelson__> I expect that we might have to force the caches to include an authorization token.
02:51 <nelson__> I also expect that's just configuration on their end.
02:51 <notmyname> all we have in front of swift is a load balancer
02:52 <notmyname> but swift talks HTTP direct to the client
02:52 <nelson__> client software or web browser?
02:52 <notmyname> either, depends on your auth (which is another thing I'd like to go over in a bit)
02:53 <notmyname> but for now, think of browser as a subset of client software
02:53 <notmyname> so creating stuff is a PUT, fetching is a GET, etc
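The PUT/GET shape notmyname describes can be sketched as plain request construction. The account, container, and token names below are hypothetical; only the `/v1/<account>/<container>[/<object>]` path shape and the X-Auth-Token header follow the Cloud Files-style API under discussion:

```python
def swift_request(method, account, container, obj=None,
                  token="AUTH_tk_example", extra_headers=None):
    """Build the (method, path, headers) triple for a swift call.

    The token would come from whatever auth middleware you deploy;
    here it is a placeholder.
    """
    path = "/v1/%s/%s" % (account, container)
    if obj is not None:
        path += "/" + obj
    headers = {"X-Auth-Token": token}
    headers.update(extra_headers or {})
    return method, path, headers

# create a container, upload an object, fetch it back:
assert swift_request("PUT", "AUTH_acct", "media")[:2] == ("PUT", "/v1/AUTH_acct/media")
assert swift_request("PUT", "AUTH_acct", "media", "photo.jpg")[1] == "/v1/AUTH_acct/media/photo.jpg"
assert swift_request("GET", "AUTH_acct", "media", "photo.jpg")[0] == "GET"
```

Each triple maps one-to-one onto an HTTP request against the proxy, which is why squids/varnishes can sit directly in front of it.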
02:53 <nelson__> We have caches in front of our media server, so no web browser touches it directly. The real question is whether our caches can access swift directly. It sounds like your answer is "definitely maybe". :)
02:54 <notmyname> we don't currently support form uploads (POST for uploads; POST currently only supports updating metadata)
02:54 <nelson__> we'll have code to do that; always.
02:54 <notmyname> if you have the containers marked as public, then definitely yes
02:55 <notmyname> if you don't, it depends on your auth system
02:55 <nelson__> okay, I'll change that then.
02:55 <nelson__> we have some private wikis. We'll have to use auth for them.
02:56 <notmyname> that being said, because swift is (can be) used for large files, I don't know that I'd recommend caching entire objects. it will work, but you have to be careful. what if someone tries to download a huge object?
02:56 <notmyname> ok, so moving on
02:56 <notmyname> under "load balanced for the same data"
02:56 <nelson__> we don't have any really huge objects yet. we'll have to deal with streaming for them.
02:57 <notmyname> the client talks to the swift proxy server. the proxy server handles all of the talking on the internal network to the storage nodes. the proxies can (or should) be load balanced
02:58 <notmyname> that may be a little different than what you are asking, but IMO you don't have to worry about that detail. swift hides it from you
02:59 <notmyname> so, internally, the proxy server does request the object from <num_replicas> servers and returns the object from the first storage node to respond, but you don't have to load balance any of that
02:59 <nelson__> I think I understand that.
03:00 <notmyname> was that similar to what you were asking, or am I on the wrong track?
03:00 <nelson__> yes, it's exactly what I was asking.
03:00 <nelson__> and yes, we're poking into the internals a bit.
03:00 <notmyname> ok, that's good. understand your stack :-)
03:01 <notmyname> I'll skip "authentication" and come back to that later
03:01 <notmyname> what is "supports unpublished files..."
03:01 <notmyname> ...for new upload
03:02 <nelson__> We accept file uploads before we've established the copyright permission.
03:02 <nelson__> Need to make sure that they're not visible to the web.
03:03 <nelson__> but I think we can upload to a user that isn't public and then transfer it afterwards.
03:03 <notmyname> ya that makes sense
03:03 *** sirp1 has quit IRC
03:03 *** sirp1 has joined #openstack
03:03 <notmyname> we support ACLs, and they are based on the swift account. it's also dependent on your auth system
03:04 <notmyname> so you can give one account access to read or write to a container in another account
03:04 <nelson__> yep, and I think even if that doesn't work, we can have an area which is not visible to the web, and then rename the file into the correct place (copy/delete if necessary).
03:05 <notmyname> ya, that works too
03:05 <notmyname> ok. "max file size"
03:05 <notmyname> 5GB is right. and it's not entirely right
03:05 <notmyname> swift now (as of this morning) supports arbitrarily sized objects
03:06 <nelson__> but.... ?
03:06 <notmyname> upload the chunks of the objects as normal objects (each with the 5GB limit--or whatever you have changed the constant to)
03:07 <notmyname> then create a zero-byte object with an X-Object-Manifest header. this is the "manifest object" that ties the others together
03:07 <nelson__> pretty sure we're going to want to have our own chunking system. Otherwise it gets crazy with such huge files.
03:07 <notmyname> by GET'ing the manifest object, all of the parts are cat'ed out to the client as one object.
03:07 <nelson__> oh! Interesting.
03:07 <notmyname> so swift supports unlimited file sizes, uploaded in up to 5GB chunks
03:07 <nelson__> okay, I'll put that in.
03:08 <notmyname> and the 5GB limit is simply a constant in the code that can be changed to anything.
03:08 <notmyname> there are things to take into consideration, of course
03:08 <notmyname> but it's 5GB by default
03:09 <notmyname> you will still have access to all the chunks. and side effects of this system allow you to actually append to an object or even insert in or maybe even sync chunks of the large object
03:09 <nelson__> we expect to upload in chunks; if only to make sure that uploads actually run to completion.
03:09 <notmyname> you could have 10K 10-byte objects if you wanted
03:09 <notmyname> or 100K. or more
03:10 <notmyname> the chunks also give you a pseudo pause/resume for uploads
03:11 <nelson__> right; our thought as well.
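The segment-plus-manifest upload notmyname describes can be sketched as a request plan. All the names (account, containers, segment prefix) are hypothetical; the one piece taken from the discussion is the zero-byte manifest object whose X-Object-Manifest header names the segments' container and common prefix:

```python
def manifest_upload_plan(account, segments_container, prefix, object_path, num_segments):
    """Sketch the request sequence for a large-object upload: each
    segment is PUT as a normal object (each under the 5GB limit), then
    a zero-byte manifest object is PUT with X-Object-Manifest pointing
    at the segments' common prefix."""
    base = "/v1/%s" % account
    plan = []
    for i in range(num_segments):
        # zero-padded names so the segments list (and concatenate) in order
        plan.append(("PUT", "%s/%s/%s%08d" % (base, segments_container, prefix, i), {}))
    manifest_headers = {"X-Object-Manifest": "%s/%s" % (segments_container, prefix),
                        "Content-Length": "0"}
    plan.append(("PUT", "%s/%s" % (base, object_path), manifest_headers))
    return plan

plan = manifest_upload_plan("AUTH_acct", "segments", "big/", "videos/big", 2)
assert plan[0][1] == "/v1/AUTH_acct/segments/big/00000000"
assert plan[-1][2]["X-Object-Manifest"] == "segments/big/"
# a GET of /v1/AUTH_acct/videos/big then streams both segments as one object
```

Because the segments remain ordinary objects, appending is just uploading another segment under the same prefix, which is the "append to an object" side effect mentioned above.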
03:11 <notmyname> "maximum number of files"--where did you get the one million number from? adrian otto's blog?
03:11 <notmyname> again, that's right, but not the complete story
03:12 <notmyname> containers get less performant (esp. for PUTs) as they grow. but object GETs are unaffected by container size
03:13 <notmyname> we recommend that you start sharding your content across containers as you get more objects
03:13 <notmyname> but there is no hard number
03:13 <notmyname> we've put billions of objects in a container before (in testing)
03:14 <notmyname> there is nothing in the code that will stop you. it's a pain threshold thing
03:14 <notmyname> if you put a bunch of objects, then only read them, you can have a huge number of objects with no problems
03:15 <notmyname> if you have an active container (lots of concurrent reads and writes), I'd limit that closer to 1-10 million
03:16 <notmyname> and, again, there is no limit. it depends on your use case and your users
03:16 <nelson__> so it's more that writes are slower when you get so many entries.
03:17 <notmyname> wreese: feel free to jump in any time :-)
03:18 <notmyname> in our testing it has a pretty linear degradation of performance
03:19 <nelson__> We can probably live with that. We'll definitely be testing it!
03:19 <nelson__> I've saved my edits; what else?
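The container sharding notmyname recommends can be as simple as hashing each object name into one of N containers, so PUT load spreads and no single container's listing grows unbounded. A sketch with hypothetical names (the shard count and naming scheme are entirely up to the deployer):

```python
import hashlib

def shard_container(base, object_name, shard_count=16):
    """Pick one of shard_count containers for an object by hashing its
    name. The mapping is deterministic, so readers need no lookup
    table: the same name always lands in the same container."""
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()
    shard = int(digest, 16) % shard_count
    return "%s_%02d" % (base, shard)

assert shard_container("media", "Example.jpg") == shard_container("media", "Example.jpg")
names = ["img%d.jpg" % i for i in range(1000)]
used = {shard_container("media", n) for n in names}
assert 1 < len(used) <= 16   # writes spread across the shard containers
```

Raising shard_count later means rehashing (or double-reading during a migration), so it is worth overprovisioning shards up front for a write-heavy workload.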
03:20 <notmyname> my answer for replication is similar to my answer for load balancing. rsync is what we use internally (with some optimizations on what we sync) to sync data, but swift hides that from you
03:20 <notmyname> the "time before data is replicated" is off
03:20 <notmyname> we won't return success unless we have at least 2 good writes (assuming 3 replicas)
03:21 <notmyname> but that doesn't have anything to do with replication.
03:21 <notmyname> replication runs as a daemon on the storage nodes. I believe it has a config variable that can be tuned to set the aggressiveness of the checks
03:22 <notmyname> by default, I think it's 300 seconds
03:22 <notmyname> so that ends up being your eventual consistency window.
03:23 <notmyname> and it's all quite tunable
03:23 <nelson__> Okay, but when a file gets written, it gets written to two copies, right?
03:24 <nelson__> and then if it gets fetched, only the new version is returned?
03:24 <notmyname> no, the fastest responder is returned :-)
03:25 <notmyname> if the one that didn't succeed comes back online and responds first, that is the content that is returned by the proxy
03:25 <notmyname> and replication will take care of getting the newer content on that node (eventually)
03:25 <notmyname> we've talked about changing that, but there are concerns about performance. and frankly, we need a good use case
03:27 <nelson__> okay ... what if a file doesn't exist?
03:27 <nelson__> as in ... new upload.
03:28 <notmyname> ok. I was a little off on how the proxy responds
03:29 <notmyname> it doesn't do it concurrently. but it does respond with the first storage node that gives a 2xx response
03:29 <nelson__> ah, okay, that's good.
03:29 <notmyname> so if the first gives a 5xx, the second a 404, and the third a 200, then you get the data from the third
03:29 <nelson__> and ... we can delete the file before uploading the new one. that will deposit a tombstone everywhere, right?
03:30 <notmyname> at least, everywhere it can. again, replication handles that
03:30 <notmyname> last write wins. and DELETEs write a tombstone file
03:30 <nelson__> okay, good. that's what we'll be doing anyway. Older files are kept in a directory inaccessible to the web.
03:31 <nelson__> yeah, it's okay if there's a little weirdness when a storage node has gone away.
03:31 <notmyname> ok. I think that's it for the matrix. now the auth part
03:31 *** jbaker has quit IRC
03:31 <notmyname> swift is BYOA: bring your own auth. we include a devauth for testing, but do not recommend it for prod use
03:32 *** kashyapc has joined #openstack
03:32 <notmyname> the devauth server works like the rackspace cloud auth server (token-based), but there is no need to make your own work like that
03:32 <nelson__> Oh, okay, that's fine.
03:32 <gholt> If you run with no auth wsgi installed at all, everything would be fully public by default. ;)
03:33 <nelson__> as far as I know, we only have a few private wikis, and right now they're handled by a script that handles auth*
03:33 <notmyname> if you have your existing id system, write wsgi middleware that authenticates against that using whatever params in the request you want (like HTTP basic auth or some cookie)
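The BYOA middleware notmyname describes can be very small. A toy sketch (the token set, app names, and rejection behavior are all hypothetical; real devauth validates tokens against an auth server and swift's actual hooks carry more information):

```python
class StaticTokenAuth:
    """Toy WSGI auth middleware: reject any request that doesn't carry
    a known X-Auth-Token header. A real version would authenticate
    against your existing id system (cookie, HTTP basic auth, ...)."""

    def __init__(self, app, valid_tokens):
        self.app = app
        self.valid_tokens = valid_tokens

    def __call__(self, environ, start_response):
        token = environ.get("HTTP_X_AUTH_TOKEN")
        if token not in self.valid_tokens:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"Unauthorized\n"]
        return self.app(environ, start_response)

def storage_app(environ, start_response):
    # stands in for the rest of the swift proxy pipeline
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"object data\n"]

app = StaticTokenAuth(storage_app, valid_tokens={"sekrit"})

# smoke test with a fake WSGI environ:
statuses = []
body = app({"HTTP_X_AUTH_TOKEN": "sekrit"}, lambda s, h: statuses.append(s))
assert statuses == ["200 OK"] and body == [b"object data\n"]
app({}, lambda s, h: statuses.append(s))
assert statuses[-1] == "401 Unauthorized"
```

Because it is just WSGI, the middleware slots into the proxy's paste pipeline in front of the storage app, which is the extensibility point mentioned at the top of this conversation.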
03:33 <notmyname> nelson__: so to back up, you're evaluating this for use with wikimedia. as in backing wikipedia? (*hopes*)
03:33 <gholt> But, even with devauth or swauth (a proposed alternative) there is the concept of "public containers" where you can access files with no credentials.
03:34 <nelson__> yes; all of the media files; images of all stripes, MIDI, audio, and video files.
03:35 <notmyname> that's really, really cool to me
03:37 <nelson__> Hehe, I'm excited about it too. Right now it's all coming from one machine.
03:37 <notmyname> nelson__: gholt wants me to also point out that swauth (the proposed replacement for the current devauth) will be production ready ;-)
03:37 <nelson__> well, all the originals. The resized images come from a second machine.
03:38 <nelson__> You can probably see how excited we are about moving to a DFS.
03:38 <notmyname> looks like you've done a lot of homework
03:39 <notmyname> just saw the "rack awareness/off site" row in the matrix
03:39 <notmyname> swift supports the concept of zones. a zone is something that is as isolated as you can make it. if that's a server, ok. rack, better. data center, great
03:40 <notmyname> swift will store each copy of the object in a different zone
03:40 <nelson__> Yes, we're bringing up another data center "soonest". Right now we're very afraid of hurricanes in Florida. :)
03:41 *** reldan has quit IRC
03:43 <notmyname> if you do end up using swift (or "i'd use it if it had this one other thing"), know that we need use cases for new stuff. the large object support was directly made in response to a use case that NASA had. their users tend to have a little more data than the average rackspace user, so the different use cases are great to hear
03:43 *** AimanA is now known as HouseAway
03:44 <notmyname> and as an interesting piece of trivia, swift was written to replace an internal system that was modeled after mogilefs
03:44 <notmyname> (i see you have it as another column)
03:45 <nelson__> Hehe, yes, they seem to be incestuous, with each one trying to out-do the other. :)
03:45 <vishy> eday: I got the tests working with real rabbit; turns out i was creating queues wrong and i had to change one instance() to take True. Still broken with fake_rabbit however
03:46 <notmyname> nelson__: on nested containers in swift: http://programmerthoughts.com/programming/nested-folders-in-cloud-files/
03:49 *** schisamo has quit IRC
03:50 <nelson__> yeah, we don't actually need a hierarchy.
03:50 <notmyname> ah, ok. saw that you mentioned it, is all. it's there if you need it :-)
03:51 <nelson__> we have multiple projects plus classifications, like thumbnails, and the filename, which is unique across the entire project.
03:51 *** damon__ has joined #openstack
03:51 <nelson__> so we might have long filenames (keys), but we don't need a hierarchy.
03:51 *** damon__ has quit IRC
03:52 <notmyname> if you were to name thumbnails something like "/project_container/2010/thumbnails/cool_img.jpg" you could get a listing of all thumbnails in 2010, for example
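That pseudo-folder listing works by filtering the container listing on a name prefix. A sketch of building such a listing request; the account and container names are hypothetical, and only the `prefix` query parameter (plus `marker` for paging) is taken from the discussion:

```python
from urllib.parse import urlencode

def listing_path(account, container, prefix=None, marker=None):
    """Container listing request for pseudo-folders: objects named with
    a path-like prefix ("2010/thumbnails/cool_img.jpg") can be listed
    "by folder" via the prefix parameter; marker pages through results."""
    params = {}
    if prefix:
        params["prefix"] = prefix
    if marker:
        params["marker"] = marker
    qs = ("?" + urlencode(params)) if params else ""
    return "/v1/%s/%s%s" % (account, container, qs)

# all 2010 thumbnails in project_container:
assert listing_path("AUTH_acct", "project_container", prefix="2010/thumbnails/") == \
    "/v1/AUTH_acct/project_container?prefix=2010%2Fthumbnails%2F"
```

Since names are flat keys, the "hierarchy" costs nothing at write time; it only exists in how listings are filtered.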
03:53 <notmyname> nelson__: any other questions? I'm hoping to stay up to see the eclipse, so I've got lots of time :-)
03:53 <nelson__> Hehe, yeah, me too. :)
03:53 <nelson__> although I can see it out the window from my bed, getting horizontal at that time of night is dangerous. :)
03:54 <notmyname> no kidding
03:54 <notmyname> I find myself thinking, "why is it so hard to stay up? I did this all the time in college."
03:54 *** roamin9 has quit IRC
03:55 *** roamin9 has joined #openstack
03:56 <notmyname> nelson__: I'd love to see your final comparison of storage systems. it would be pretty cool to read about how you made the decision and implemented it
03:57 <nelson__> We're pretty open about everything we do ... being a nonprofit, and dedicated to encyclopaedic publishing.
04:13 *** jakedahn has joined #openstack
04:35 *** kashyapc_ has joined #openstack
04:38 *** kashyapc has quit IRC
04:40 *** omidhdl has joined #openstack
04:41 *** miclor___ has joined #openstack
04:43 *** miclorb has quit IRC
04:46 *** omidhdl has quit IRC
04:46 *** omidhdl1 has joined #openstack
05:10 *** miclor___ has quit IRC
05:15 <jeremyb> notmyname: i'm still halfway through scrollback, but is it possible for auth middleware to weigh in on object writes/reads, or only container/acct-wide decisions?
05:17 <jeremyb> i.e. with the current API is an "append only" / WORM container possible? so, some users can write new files but not delete, and read anything; some can just read; and then maybe there's a user that can do anything (i guess the actual acct owner)
05:20 <jeremyb> hrmmmm, so can you use empty objects with manifests as symlinks? can they cross from one container to another?
05:21 <jeremyb> what if you have perms to read the object with the manifest but not what it points to? does a read fail?
05:21 <jeremyb> nelson__: ^^^
05:21 <nelson__> ya got me.
05:22 <jeremyb> nelson__: i just thought you might be interested in the answers
05:22 <nelson__> what answers? :)
05:22 *** omidhdl has joined #openstack
05:22 <jeremyb> nelson__: they haven't come in yet
05:22 <gholt> jeremyb: You can set read and write acls. No real way to limit deletes/overwrites because of the consistency window.
05:23 *** omidhdl1 has quit IRC
05:23 <jeremyb> gholt: don't follow
05:23 <jeremyb> i understand you can write to 2 different nodes and they'll fight over who wrote last
05:24 <gholt> If you try to limit overwrites, what if one node doesn't have the original object and accepts the write? Then you have to backtrack to get rid of the one new write.
05:24 <jeremyb> but if the object's days old and the nodes are all up to date...
05:24 <gholt> Yeah, for now we just erred on the side of what can truly be accomplished. To say we limit overwrites but not be able to truly enforce it would be bad.
05:25 <jeremyb> i'm saying if the node doesn't know then let them fight, but if it does know then allow it to block
05:25 <gholt> I guess: It's a possibility if it's really needed, but would probably have to be a toggleable feature, hehe.
05:26 <gholt> A node could go down at any time, meaning older objects would also be affected by such a window
05:26 <jeremyb> sure. it can even be a wsgi api feature that has big scary warnings in the Sphinx docs and doesn't get implemented in swauth
05:27 <gholt> Well, you do need the hooks in Swift itself, to pass the acl information back, but yeah, it could be done. Just not atm and not for the next Bexar release.
05:27 <gholt> Oh, and acls are currently only for containers, no object-level acls at this time.
05:28 <gholt> For manifests, yes they can go across containers; the acls for both must be met in such a case.
05:29 <jeremyb> is bexar planned for end of jan?
05:29 <gholt> I think so, lemme check...
05:30 *** omidhdl has quit IRC
05:30 <gholt> Feb 3
05:30 <jeremyb> woot, thanks
05:31 <jeremyb> you don't know olpc's holt, do you?
05:31 <gholt> The basics of what you can control with your own auth wsgi middleware is on a request as it's inbound, and then a final callback with any x-container-read or x-container-write information.
05:32 <jeremyb> which is stored with the container itself (on nodes)
05:32 <gholt> To add what you're talking about, you'd just add an x-container-delete and x-container-overwrite acl or somesuch.
05:32 <gholt> Yeah, in the metadata for the container.
05:33 <gholt> I bet it'd be acceptable to all if that was added but off by default, with the warnings you mentioned that it's not failsafe. :)
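Setting the container ACLs gholt mentions is a metadata POST carrying x-container-read / x-container-write headers. A sketch of building that request; the account names are hypothetical, and the ACL value syntax is whatever your auth middleware interprets:

```python
def container_acl_post(account, container, read_acl=None, write_acl=None):
    """Build the POST that stores container ACLs in the container's
    metadata via the x-container-read / x-container-write headers.
    The ACL strings themselves are opaque to swift; the auth
    middleware gives them meaning."""
    headers = {}
    if read_acl:
        headers["X-Container-Read"] = read_acl
    if write_acl:
        headers["X-Container-Write"] = write_acl
    return "POST", "/v1/%s/%s" % (account, container), headers

# let one (hypothetical) account read this container, and another write to it:
method, path, headers = container_acl_post("AUTH_acct", "project_container",
                                           read_acl="otheraccount",
                                           write_acl="writeraccount")
assert method == "POST" and path == "/v1/AUTH_acct/project_container"
assert headers["X-Container-Read"] == "otheraccount"
```

An x-container-delete or x-container-overwrite header, as floated above, would slot into the same pattern; it just needs the middleware hooks to enforce it.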
05:33 <jeremyb> if you can read an object in a container, there's no way to block listing of the contents of that container?
05:34 <jeremyb> e.g. so you can read if you know the name but not if you don't
05:34 <gholt> Not currently. We didn't have many use cases to go on. And some of those use cases, if infrequent, could be served with their own container.
05:34 *** sirp1 has quit IRC
05:35 <gholt> Keep the code as simple as possible, but no simpler. Hehe
05:35 <jeremyb> is there a "leader" for the various ring servers? like accounts or containers. e.g. will all reads/writes for a container or account listing (not objects themselves) try the leader first and then others if the leader's not available?
05:36 <jeremyb> or is it like with objects, first responder
05:36 <gholt> For the symlinks/manifests, it's important to note they only follow the one jump, btw.
05:37 <jeremyb> also, last i checked there was little to no support/testing for replica counts != 3
05:37 <gholt> For the ring, for a given item, the three (by default) nodes are considered in order. So yeah, GETs will hit the primary first.
05:37 <jeremyb> any work being done with that or an outline of what should be tested
05:37 <gholt> True, there's been more lately, and some bugs found in that area. But I think they're all (I hope) cleaned up now.
05:37 <jeremyb> more what?
05:37 <gholt> The main thing was grepping for the number 3. hehe
05:38 <jeremyb> it's 11 in binary...
05:38 <gholt> We've had bugs reported that there was a spot in the replicator that assumed 3 nodes on handoff, for instance. But that's fixed in trunk now.
05:38 <jeremyb> but that would just result in the wrong number getting replicated or what?
05:39 <jeremyb> s/number getting replicated/eventual replica count/
05:39 <gholt> Hmm. I don't remember exactly. Something like the handoff node would get confused as to whether it should delete its copy or not after getting it back to one of the primary nodes.
05:40 <jeremyb> i mean was there any remote risk of dataloss or returning the wrong data?
05:40 *** achew22 has joined #openstack
05:40 <gholt> Checking. Launchpad is slow. :)
05:40 <uvirtbot> Launchpad bug 685730 in swift "object-replicator: replica deletion decision is wrong if replica_count != 3?" [Undecided,New]
05:41 <jeremyb> ok, i'll read that in the morning
05:41 <achew22> When I start openstack using novascript's run I get a "nova.exception.NotFound: Class get_connection cannot be found"; is there anything I did wrong in setting this up?
05:41 <gholt> If you had 2 replicas, it'd mean you'd never get rid of handoff copies. If you had 4, it'd confuse itself and delete the fourth copy constantly (I believe).
05:42 <jeremyb> ok, but you'd never have less than 2 and it'd never return one object when you asked for a different one?
05:42 <gholt> achew22: I'm not much for nova information, sorry. :/ Hopefully somebody's on right now that is.
05:42 <jeremyb> or half an object or complete garbage or something
05:42 <achew22> no worries, things are pre-alpha and I'm amazed that I even got one instance running
05:43 <achew22> I threw a little party :)
05:43 <jeremyb> any checksums on read or is that left to the periodic checksum service? (forget the name)
05:43 <gholt> jeremyb: No, no chance of that. Things are atomic. They're either there or they aren't. And yeah, in this case I think you're right about never being under 2. But it's all just 'thoughts', not tested to see what the bug did really. :)
05:44 <jeremyb> well if someone could write up a little on what situations to test that would be nice :-)
05:44 <gholt> ETags on objects are the md5 sum of the contents. The auditor checks that on occasion. Though that sucker is a fight, because you don't want the load too high, but you don't want the checks too seldom.
05:45 <jeremyb> so, clients check md5 by default? (ewwwww, md5) what if a client finds an error? can it let someone know to get the replica fixed?
05:45 <gholt> Yeah, for the != 3 stuff we need to get a lab set up and some automated tests that check the backend copies, etc.
05:45 *** omidhdl has joined #openstack
05:46 <gholt> Clients don't have to, they can if they want. The auditor, if it finds a bad checksum, will quarantine the offender into a separate dir and the replicator will put a fresh copy in place from another node
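The optional client-side check gholt describes is straightforward: for a plain (non-manifest) object, the ETag is the hex md5 of the body, so a client can recompute it after a GET. A sketch:

```python
import hashlib

def verify_etag(body, etag):
    """Client-side integrity check: swift's ETag for a non-manifest
    object is the hex md5 of its contents. Quotes and case are
    normalized since some HTTP stacks quote the header value."""
    return hashlib.md5(body).hexdigest() == etag.strip('"').lower()

body = b"some object data"
etag = hashlib.md5(body).hexdigest()   # what the server would have computed
assert verify_etag(body, etag)
assert not verify_etag(body + b"corrupted", etag)
```

As noted above there is currently no way for a client to report a mismatch back for prioritized auditing; the recourse is simply to re-GET and hope a healthy replica answers.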
05:47 <jeremyb> but if the client does check and finds a problem can it get the auditor to prioritize that object?
05:47 *** f4m8_ is now known as f4m8
05:48 <gholt> Hmm, not right now. Good idea though. I think it'd be better implemented such that the object server does the checksum on GETs automatically and quarantines the offender right away.
05:48 <jeremyb> sure, but maybe not on every request if it's requested dozens of times a min
05:49 <jeremyb> i guess there's no way for it to know if a request was served from the OS disk cache or from real disk
05:49 <gholt> Yeah, maybe not. Though md5 isn't too bad on cpu and we've had headroom there. Also, wouldn't work for range requests at all, hehe.
05:50 <jeremyb> yeah, heh
05:50 <jeremyb> how do range reqs work with manifests?
05:50 <jeremyb> (i'm assuming the OS disk cache in memory is less vuln. to corruption than real spinning disk)
05:50 <gholt> Range requests on manifests was fun. :)
05:51 <gholt> Think of it how you might do it manually. A container listing has all the segments that comprise the whole file. The json formatted listing also has the length of each segment.
05:52 <gholt> So seeking is just a matter of jumping over segments until you get where you want, then a range request on that first served segment (probably), and then serving until you get to your end point.
05:52 <jeremyb> does the manifest itself not have the lengths?
05:52 <gholt> No, the manifest simply has the info to do the container listing.
05:53 <gholt> You could actually append to a manifested large object that way if you wanted.
05:53 *** omidhdl has quit IRC
05:53 <gholt> But if you do a HEAD on the manifest, you'll get the total length, unless it's more than 10k segments.
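The "jumping over segments" gholt describes is just prefix-sum arithmetic over the lengths that the json container listing provides. A sketch of mapping a byte offset in the logical object to (segment index, offset within segment):

```python
def locate_offset(segment_lengths, offset):
    """Given segment lengths from a json container listing, find which
    segment a byte offset of the whole object falls in, and the offset
    within that segment -- the seek step of a ranged GET on a manifest."""
    skipped = 0
    for index, length in enumerate(segment_lengths):
        if offset < skipped + length:
            return index, offset - skipped
        skipped += length
    raise ValueError("offset past end of object")

# Object made of 3 segments of 100, 50 and 200 bytes:
assert locate_offset([100, 50, 200], 0) == (0, 0)
assert locate_offset([100, 50, 200], 120) == (1, 20)
assert locate_offset([100, 50, 200], 349) == (2, 199)
```

A ranged GET then issues a Range request against that first segment and streams subsequent segments whole until the end of the requested range.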
jeremybhaha, that's 48+ TB for one object05:54
jeremybdoes NASA do that?05:54
gholt:) Well, yeah, but you can also use smaller, each-not-even-the-same-size segments.05:54
jeremyb(i'm not seeing the modifying in the middle use case being common)05:54
jeremybso, what if you write 1000 smallish segments and then you want to compact them into one segment? can you do a copy or you have to download and upload?05:55
gholtNah, middle modifying would be weird. Maybe for a tileset game that let you pull the whole board with a manifest or something, hehe05:55
gholtThe COPY request will let you do that, limited to 5G of course.05:56
jeremybi mean if you copy the manifest05:56
jeremybthat won't just make a new manifest?05:56
gholtOh wait, I'm wrong on that.05:56
gholtAnd I coded it, lol.05:56
gholtCOPY on the manifest just makes another manifest (zero bytes taken up).05:56
gholtWe discussed that, I remember now. Though what you're saying would be darn useful. Hmmm.05:57
jeremybe.g. for something that does frequent writes but doesn't really have a need for all the pieces in the end run. could do them incrementally and periodically compact05:58
* gholt is taking notes. :)05:58
* jeremyb wonders if transparent compression is anywhere on the horizon05:58
gholtChanging the manifest COPY to compact makes sense. If you really want another manifest it's cheap to HEAD the original and PUT the new with the same X-Object-Manifest header.05:59
gholtIt's not cheap to pull down the whole object and reupload it to compact.05:59
jeremybright, agree on cheap05:59
jeremybnot sure about api change though05:59
jeremybis there any api versioning?06:00
gholtWell, the manifest code is for Bexar, so it's not set in stone yet anyway06:00
jeremyboh, right06:00
gholtIt just merged today, in fact, heheh06:00
gholtThe compression was talked about, but it costs cpu and saves disk. Cheaper for the customer, more expensive for the provider, if you're only charging for requests/space-used.06:01
jeremybbut if you're running your own cluster...06:01
gholtIt'd have to be charged differently and it'd work, I'm sure. Which means it'd have to have a bit of extra logging to indicate a compression-request or something.06:01
gholtYeah, that too. :D06:02
gholtBut honestly, if at all possible, it's better to use the client's CPU. :D06:02
jeremybi was thinking clients could request to have the compressed form and then you save on the wire too and only have to compress once06:02
jeremybso, really cost to provider would be different for writes but no different for reads assuming a compression capable client06:03
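jeremyb's scheme, compress once on the client and store/serve the compressed form, can be sketched with zlib; this is only an illustration of the cost argument, not anything Swift implements:

```python
import zlib

# Compress once on the client's CPU; the cluster stores and serves the
# compressed bytes, so both PUT and GET save wire bytes and the provider
# never spends CPU on compression.

original = b"the quick brown fox " * 500   # 10,000 bytes of redundant data

compressed = zlib.compress(original, 9)    # done once, client side
stored = compressed                        # what the cluster keeps on disk

# a compression-capable client simply inflates on read
roundtrip = zlib.decompress(stored)

assert roundtrip == original
print("wire/disk bytes: %d -> %d" % (len(original), len(compressed)))
```

The provider-side wrinkle gholt raises remains: storage used is the compressed size, so billing on the uncompressed size (or logging a compression-request flag) would be needed to keep charging consistent.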
gholtAh damn, it's getting late and I have to be at work early tomorrow. Great ideas! Keep them coming and bring them up again if you don't see them for next release cycle (Cactus, Feb 3rd start).06:05
* jeremyb was just reading http://wiki.openstack.org/CactusReleaseSchedule06:05
gholtTrue, it'd be different on storage used. I guess one could charge on the uncompressed size, lol.06:05
jeremybi have to be sleeping too06:06
gholtCentral for me. 12am06:06
jeremybaha, America/New_York here06:06
*** omidhdl has joined #openstack06:06
jeremybnelson__: look up :)06:06
gholtI hear there's some moon thing happening. Hehe06:06
jeremybyeah, i heard, will miss it06:06
gholtLooks to be near 100% overcast here anyways06:07
gholtAh well, night!06:08
jeremybbtw, (not necessarily now...) i'm wondering how if at all log aggregation is done. e.g. for billing. is it just concentrated on the proxy servers and you're responsible for any further consolidation?06:08
jeremybgood night!06:08
*** DubLo7 has quit IRC06:08
gholtAh, that's notmyname's realm. Ping him at some point and he'll give you his run down. :D06:09
jeremybgholt: btw, you should take a look at tahoe lafs at some point. i have some more ideas (that i've mentioned here before actually) that come from that direction06:12
* jeremyb sleeps06:12
*** DubLo7 has joined #openstack06:19
achew22Sorry to repeat. When I start openstack using novascript's run I get a "nova.exception.NotFound: Class get_connection cannot be found" is there anything I did wrong in setting this up?06:25
*** jakedahn has quit IRC06:25
*** ramkrsna has joined #openstack06:29
*** Ryan_Lane has quit IRC06:30
*** kashyapc_ has quit IRC06:31
*** aimon has quit IRC06:39
*** aimon has joined #openstack06:39
*** miclorb_ has joined #openstack06:52
*** guigui1 has joined #openstack06:59
*** kevnfx has quit IRC07:01
notmynamejeremyb: ask me tomorrow about the stats. there is a fairly complete stats system that can be used to feed a billing system included in swift07:06
*** achew22 has quit IRC07:07
*** achew22 has joined #openstack07:07
*** achew22 has left #openstack07:15
xtoddx_cerberus_: if you get a chance can you take a look at lp:~anso/nova/paste to see that no openstack api stuff is broken or forgotten in that change? (other than the subdomain stuff, which I'll need to work up a mapper for, similar to Paste.urlmap)07:24
*** miclorb_ has quit IRC07:37
*** Ryan_Lane has joined #openstack07:59
*** rcc has joined #openstack08:02
*** brd_from_italy has joined #openstack08:07
*** Ryan_Lane has quit IRC08:10
*** zaitcev has quit IRC08:21
*** Cybodog has quit IRC08:27
*** doude has joined #openstack08:29
*** miclorb has joined #openstack08:30
*** larstobi has joined #openstack08:32
*** kashyapc has joined #openstack08:43
*** allsystemsarego has joined #openstack08:44
*** miclorb has quit IRC08:48
*** miclorb has joined #openstack09:18
*** irahgel1 has joined #openstack09:19
*** Abd4llA has joined #openstack09:19
*** Cybodog has joined #openstack09:37
*** roamin9 has quit IRC09:44
*** ar1 has quit IRC09:45
*** omidhdl1 has joined #openstack09:59
*** omidhdl has quit IRC10:00
*** aimon has quit IRC10:03
*** HugoKuo has joined #openstack10:04
*** irahgel1 has left #openstack10:06
*** miclorb has quit IRC10:12
*** jordandev has joined #openstack10:15
*** HugoKuo_ has joined #openstack10:23
*** vish1 has joined #openstack10:26
*** zns has joined #openstack10:27
*** [ack]_ has joined #openstack10:27
*** zns has quit IRC10:27
*** cw_ has joined #openstack10:28
*** mattt_ has joined #openstack10:28
*** termie_ has joined #openstack10:28
*** kashyapc_ has joined #openstack10:31
*** HugoKuo has quit IRC10:32
*** kashyapc has quit IRC10:32
*** dirakx has quit IRC10:32
*** [ack] has quit IRC10:32
*** cw has quit IRC10:32
*** termie has quit IRC10:32
*** pquerna has quit IRC10:32
*** clayg has quit IRC10:32
*** mattt has quit IRC10:32
*** vishy has quit IRC10:32
*** asksol has quit IRC10:32
*** clayg_ has joined #openstack10:32
*** asksol has joined #openstack10:33
*** pquerna has joined #openstack10:33
*** clayg_ is now known as clayg10:34
*** omidhdl1 has quit IRC11:04
*** Abd4llA has quit IRC11:11
*** Abd4llA has joined #openstack11:17
*** ibarrera has joined #openstack11:38
*** miclorb has joined #openstack11:40
*** miclorb has quit IRC11:43
*** miclorb has joined #openstack11:44
*** miclorb has quit IRC11:46
*** smaresca has quit IRC11:48
*** omidhdl has joined #openstack11:51
ttxsandywalsh: o/11:53
*** reldan has joined #openstack11:53
*** bigd_ has joined #openstack11:58
bigd_someone alive?11:59
sandywalshsomewhat :)11:59
uvirtbotbigd_: Error: "^" is not a valid command.11:59
bigd_im pretty new to all this cloud/cluster stuff, so may i ask some (hopefully not dumb) questions?12:00
*** smaresca has joined #openstack12:01
sandywalshgo for it ... if I can't help I'm sure someone can.12:01
*** jordandev has quit IRC12:02
bigd_how does openstack distribute the work? one or more VMs per node or can i combine the resources of all nodes to power one "big" VM?12:03
*** smaresca has quit IRC12:08
sandywalshbigd_, AFAIK you can partition groups of hosts within zones. There are schedulers allocated per zone. Additionally network and compute services reside in each zone.12:09
sandywalshbigd_, so you can decide how you want to partition. Machine, geography, business line, etc.12:10
bigd_what do you mean with "compute services"?12:11
sandywalshbigd_, the compute service is the business logic for controlling the openstack nova modules.12:14
sandywalshbigd_, compute divvies up the work and puts the tasks in rabbitmq queues. Then the various services (network, etc.) pick them up and do the work.12:16
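The flow sandywalsh describes (tasks published to per-topic queues, services consuming them) can be sketched with `queue.Queue` and worker threads. Real nova uses rabbitmq topic exchanges; everything below is a stand-in for illustration only:

```python
import queue
import threading

# Toy version of the dispatch model: "compute" publishes tasks by topic,
# and one worker per topic (network, volume, ...) picks them up.

topics = {"network": queue.Queue(), "volume": queue.Queue()}
results = []

def worker(topic):
    while True:
        task = topics[topic].get()
        if task is None:            # shutdown sentinel
            break
        results.append((topic, task))
        topics[topic].task_done()

threads = [threading.Thread(target=worker, args=(t,)) for t in topics]
for t in threads:
    t.start()

# "compute" divvies up the work by topic
topics["network"].put("allocate_ip")
topics["volume"].put("create_volume")

for q in topics.values():
    q.join()        # wait for outstanding tasks
    q.put(None)     # then shut the worker down
for t in threads:
    t.join()

print(sorted(results))
```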
bigd_im not sure if i got it right ... by defining a zone with 10 nodes i am able to put one VM (lets say Win2008) in this zone and this VM would use all resources?12:17
bigd_all resources of the current zone12:18
*** feather has quit IRC12:18
sandywalshbigd_, well, you need to decide which hypervisor you're going to use: kvm, xenserver, etc. That hypervisor may run many instances of the guest os (linux, windows, etc). Hopefully I'm understanding your question?12:19
*** smaresca has joined #openstack12:21
*** WonTu has joined #openstack12:22
*** irahgel has left #openstack12:22
*** WonTu has left #openstack12:22
bigd_we are getting closer ;-) ... sure there is an underlying engine like KVM etc... my point is if one instance would be running on a certain node of the zone12:22
*** reldan has quit IRC12:23
sandywalshbigd_, yes. From what I understand, the scheduler decides where an instance is run.12:24
sandywalshbigd_, there are migration operations for moving instances/snapshots from one host to another12:25
bigd_hmm ... for our goal thats what we dont want as each node would be a desktop pc ... we want to combine their power12:27
patri0tbigd_: http://wiki.openstack.org/Overview12:28
patri0tbigd_: http://nova.openstack.org/service.architecture.html12:28
patri0tbigd_: http://www.box.net/shared/static/ussls7gp2j.png12:28
patri0tbigd_: these may help in general12:29
*** ctennis has quit IRC12:29
bigd_as openstack cant do what we need, you have maybe a suggestion what could be worth a look?12:35
patri0tcan you restate your goal?12:38
bigd_or to be more precise: you know any solution to combine many hosts/PCs into one "big" one? Or lets say at least that an operating system would see it as one system/host12:41
patri0tIt more depends on what exactly you want to do12:43
patri0tthe difference between cloud computing/grid computing may address the same issue12:43
patri0tif you want to do one task using several machines grid computing should be a good approach12:45
patri0totherwise if you have several tasks and several workers and you may go for cloud computing12:46
bigd_patri0t ... the point is that we have got requests to find a solution to help doing renderjobs (FX, 3d, etc) AND common computing tasks (simulations or calculations)12:46
*** ctennis has joined #openstack12:46
*** ctennis has joined #openstack12:46
fabiandbigd_: I suppose that it will also depend on the specific application what kind of cluster/grid/cloud will help you12:48
patri0tbigd_: again it depends how you do those tasks, should they be done in parallel? or regularly you do one task first then start the next one (probably from the input of the previous task)12:48
bigd_thats my problem ... that varies with every task12:49
bigd_and thats why i would like a "virtual base"12:50
bigd_so i dont have to change the hole setup12:50
bigd_for example for rendering openstack would fit perfectly12:50
bigd_as you need a lot of instances12:51
bigd_and i have a lot of (small) nodes/hosts12:51
fabiandbigd_: sure, you can also use the instances as mpi-nodes for e.g. simulations12:51
*** hazmat has joined #openstack12:52
*** schisamo has joined #openstack12:52
bigd_doesnt that eat up too much power as it would be another logical layer on top?12:53
fabiandthere will be some performance loss because they are just instances, at least you have got an infrastructure to deploy many nodes ..12:53
fabiandand the performance loss depends on the instance configuration (hypervisor, ..)12:55
bigd_i think give it a try, thanks13:00
*** DubLo7 has quit IRC13:03
*** reldan has joined #openstack13:05
bigd_btw, can openstack handle changes of the available nodes/hosts at runtime in it's current version? lets say 3 of 10 nodes get turned off or lose network connection without any warning13:11
*** ramkrsna has quit IRC13:16
*** westmaas has joined #openstack13:19
*** nelson__ has quit IRC13:21
*** nelson__ has joined #openstack13:22
*** doude has quit IRC13:24
*** doude has joined #openstack13:25
*** hadrian has joined #openstack13:35
*** Podilarius has joined #openstack13:36
*** krish has joined #openstack13:44
*** aliguori has joined #openstack13:47
*** Abd4llA has quit IRC13:48
*** krish has left #openstack13:48
olivier_Hi all13:55
olivier_My swift storage nodes indicate in their log files "object-replicator dev/sdb1 is not mounted"13:56
olivier_But my disk is mounted.... How to fix that ?13:56
*** kevnfx has joined #openstack13:57
uvirtbotNew bug: #692994 in nova "nova-compute will not recover from loss of its xapi session" [Undecided,New] https://launchpad.net/bugs/69299414:01
*** DubLo7 has joined #openstack14:03
*** alekibango has quit IRC14:04
*** Daviey has quit IRC14:08
*** Daviey has joined #openstack14:17
*** alekibango has joined #openstack14:17
*** nati has joined #openstack14:25
*** omidhdl has quit IRC14:26
*** ppetraki has joined #openstack14:36
*** irahgel has joined #openstack14:37
*** nati has quit IRC14:37
notmynameolivier_: are you running a stand-alone system or in the SAIO?14:42
notmynamemore specifically, are you mounting real devices or loopback devices?14:43
olivier_I'm mounting real device14:46
olivier_What do you mean by SAIO ?14:46
*** larstobi has quit IRC14:46
*** f4m8 is now known as f4m8_14:47
olivier_I've got a lab with VM (1 proxy and 3 storage)14:47
olivier_and for each storage node, the mount point is a real device (/dev/sdb1)14:49
notmynameSAIO == swift all in one. the VM system we use for dev and is good for a "get your feet wet" test14:55
notmynameif you are running it in a VM, you should probably add "mount_check = false" to the [DEFAULT] section of /etc/swift/object-server.conf14:56
olivier_But swift didn't know that I'm using a VM ?14:57
*** jdarcy has joined #openstack14:58
notmynameolivier_: what's happening is that "os.path.ismount(dev_path)" is returning false for your storage node15:06
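The check notmyname quotes is easy to see in isolation: a directory that is not its own mount point (e.g. a path backed by the root filesystem inside a test VM) fails `os.path.ismount()`, which is exactly why setting `mount_check = false` is the fix here. The `check_device` helper below is a loose sketch of swift's guard, not its actual code:

```python
import os
import tempfile

# Swift's storage servers refuse a device unless os.path.ismount() is true
# for it. A plain directory on an existing filesystem fails the check.

root = os.path.sep                 # "/" is always a mount point
plain_dir = tempfile.mkdtemp()     # just a directory, not a mount point

print(os.path.ismount(root))       # True
print(os.path.ismount(plain_dir))  # False

def check_device(dev_path, mount_check=True):
    """Loosely mimic swift's guard: complain unless mounted or disabled."""
    if mount_check and not os.path.ismount(dev_path):
        return "%s is not mounted" % dev_path
    return "ok"

assert check_device(plain_dir).endswith("is not mounted")
assert check_device(plain_dir, mount_check=False) == "ok"
```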
*** gondoi has joined #openstack15:09
*** reldan has quit IRC15:12
olivier_ok, I've added "mount_check = false" in all 3 configuration files (account-server.conf, object-server.conf, container-server.conf) on each of my 3 storage nodes15:13
olivier_And I didn't have this error message anymore: Thanks15:13
olivier_But this didn't resolve my problem for creating a user with swift-auth-add-user (Update failed: 503 Service Unavailable)15:15
notmynameyou are running the proxy and the 3 storage nodes in one VM?15:16
fabiand(does someone know if this channel is logged somewhere?)15:17
notmynameare these the instructions you used? http://swift.openstack.org/development_saio.html15:17
notmynamefabiand: http://eavesdrop.openstack.org/irclogs/15:17
olivier_notmyname: I've got 4 VMs15:18
fabiandnotmyname: thank you.15:19
olivier_And I've used: http://swift.openstack.org/howto_installmultinode.html15:19
notmynameolivier_: ok, so you have 4 VMs that you put swift on (1 proxy + 3 storage nodes)? I think I'm following now15:20
*** dendrobates is now known as dendro-afk15:23
*** dendro-afk is now known as dendrobates15:24
notmynameolivier_: you are running the auth server, right?15:24
*** jkakar_ has joined #openstack15:25
olivier_here my log and configuration files: http://pastebin.com/qaGY7ZLD15:25
*** jkakar has quit IRC15:26
*** kevnfx has quit IRC15:26
*** bigd_ has quit IRC15:29
notmynameolivier_: honestly, I don't know. nothing jumps out at me, but I'm not an expert in that part of the code. unfortunately, the people who are are either on vacation or in an all-day meeting15:30
olivier_ok, thanks for your time15:32
_0x44jaypipes: You around?15:33
pikenmorning everyone15:33
jaypipes_0x44: yup15:33
_0x44jaypipes: I'm looking through your merge req, and I noticed you're using iteritems() a lot. Are we not targetting compatibility with py3k?15:34
_0x44(This isn't a complaint or a nitpick about the patch, just curiosity)15:35
jaypipes_0x44: :( I'm not a py3k expert... could you advise on what to change there?15:36
*** irahgel has quit IRC15:36
_0x44jaypipes: iteritems went away, so all that's left is items()15:36
_0x44jaypipes: I dunno why they chose to get rid of it15:36
jaypipes_0x44: I didn't know that.  So, I can just use items() where I am now using iteritems()?15:36
_0x44jaypipes: Yup :)15:37
jaypipes_0x44: good to know! I will change things as I edit files. Thanks for the heads up!15:37
_0x44You're welcome :)15:37
*** johnpur has joined #openstack15:37
*** ChanServ sets mode: +v johnpur15:37
* jaypipes checks off his "learn one thing a day" task...15:37
jaypipes_0x44: you know, old dog, new tricks, and all that ;)15:38
_0x44That's a really good task, I probably should have one like that. Omniscience makes that so difficult though. ;)15:38
jaypipes_0x44: indeed, it would :P15:38
_0x44One thing to note is that iteritems returns an iterator and items returns a list of tuples15:38
*** hggdh has quit IRC15:38
jaypipes_0x44: but effectively, the same usage...15:39
_0x44Yes, same interface just one is "nicer" for some value of "nicer"15:39
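The compatibility point above is quick to verify on Python 3, where `dict.iteritems()` is gone and `dict.items()` returns a lazy view (much like the old iterator) rather than a materialized list, making `items()` the portable spelling:

```python
# py3k behavior of dict.items(), per the discussion above

d = {"ram": 2048, "vcpus": 2}

assert not hasattr(d, "iteritems")     # removed in py3k

view = d.items()                       # a view, not a list
assert sorted(view) == [("ram", 2048), ("vcpus", 2)]

d["disk"] = 20
assert ("disk", 20) in view            # views reflect later changes

# when a snapshot list (the old py2 items() behavior) is really needed:
snapshot = list(d.items())
print(snapshot)
```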
jaypipes_0x44: hehe15:39
jaypipes_0x44: how's Italy, btw?  having a good time?15:39
_0x44jaypipes: It's great! A bit chilly, though. :) My girlfriend came in on Saturday, and we're going to hit Florence for Christmas15:40
jaypipes_0x44: I don't want to hear about chilly :)  hasn't gotten above freezing here in 10 days or so...15:41
_0x44Pavia is hovering around freezing, but is really humid15:41
jaypipes_0x44: when I say to my dogs "let's go for a walk" and they don't even get out of their beds, you know it's cold.15:41
_0x44Give me 15F and dry over 32F and snow/sleeting :)15:42
_0x44Oh man, you should get your dogs some boots and parkas, I've seen them all over the place here.15:42
*** dfg_ has joined #openstack15:42
*** hggdh has joined #openstack15:43
*** reldan has joined #openstack15:44
dendrobatesjaypipes: testing your i18n branch...15:44
*** calavera has quit IRC15:45
*** jkakar_ is now known as jkakar15:45
dendrobatesoops, you already pushed it.15:46
jaypipesdendrobates: ya, it's already been reviewed... about 15 times.15:47
dendrobatesI know,  what ended up being the problem with the tests?15:47
jaypipesdendrobates: not sure...still waiting to see if it bombs again :(15:48
* ttx crosses fingers15:49
*** MarkAtwood has quit IRC15:49
ttxjaypipes: about the glance api, looks like I should be tracking the subspecs rather than the parent one, would you agree ?15:49
jaypipesttx: yep, and they are all up 2 date.15:50
ttxok, will untarget api for bexar, even if that sounds weird.15:51
jaypipesttx: why? it's done...15:52
jaypipesttx: the final piece is in code review right now..15:52
ttxjaypipes: I keep "unified-api", but remove "api" from the list, since it's just a master spec15:52
ttxjaypipes: what are you using "beta available" for ?15:53
jaypipesttx: using that for "it's in trunk, but before release"15:54
ttxjaypipes: I use it for "testable but not proposed for trunk yet"15:54
jaypipesttx: ah, sorry. feel free to corect me :)15:54
*** irahgel has joined #openstack15:54
ttxjaypipes: I use "Implemented" for "Merged in trunk"15:55
ttxjaypipes: ok, will do15:55
jaypipesttx: then feel free to mark implemented :) I'll remember that for future :)15:55
* ttx likes to see green.15:55
*** MarkAtwood has joined #openstack15:55
ttxjaypipes: otherwise they don't show up as completed... so they still appear in the deps graph.15:56
jaypipesttx: understood.15:56
*** hazmat has quit IRC15:59
*** MarkAtwood has quit IRC16:02
dendrobatesjaypipes: the i18n patch is still hanging on test_create_instance_associates_security_groups for me.16:03
*** jbaker has joined #openstack16:04
*** kevnfx has joined #openstack16:04
jaypipesdendrobates: gah. :(  do we know what is actually hanging on?16:04
jaypipesdendrobates: all tests pass for me locally, so I'm unsure how to fix. :(16:04
dendrobatesI am looking at the traceback.  will paste it16:04
*** MarkAtwood has joined #openstack16:07
*** jero is now known as jero_market16:07
dendrobatesnothing useful afaics  http://paste.openstack.org/show/325/16:07
dendrobatesI'm trying it again16:08
*** jero_market is now known as jero16:09
*** infernix has quit IRC16:09
*** sophiap_ has joined #openstack16:10
*** sophiap has quit IRC16:10
*** sophiap_ is now known as sophiap16:10
dendrobatesjaypipes: are you using the virt_env to test?16:11
*** dirakx has joined #openstack16:12
tr3buchetanyone else having trouble getting the nova-compute to run with latest trunk?16:17
*** dragondm has joined #openstack16:17
tr3buchetkeep getting this: http://pastie.org/139358616:17
*** guigui1 has quit IRC16:17
dabotr3buchet: running fine for me. Do you have a get_connection() method in nova/virt/connection.py?16:19
tr3buchetyes, line 3416:19
tr3buchetdabo ^16:19
tr3buchetand --connection_type=xenapi  as a flag16:20
rbergerondendrobates: are you still planning on coming to fudcon? :)16:21
jaypipesdendrobates: yes16:21
dendrobatesrbergeron: I cannot come due to a conflict.  I am trying to get someone to go in my place16:22
* rbergeron makes a sadface16:22
dabotr3buchet: try adding these debugging lines: http://pastie.org/139535916:24
dabotr3buchet: when I run that, I get this in my compute session: http://pastie.org/139536116:25
tr3buchetyeah me too16:29
rbergerondendrobates: well lmk if/when you find someone - we'd love to have additional openstack folks present16:29
tr3buchetand in api i get a lot of import failed16:29
tr3buchetfor nova.image.s3.S3ImageService and nova.network.manager.FlatManager,16:30
*** seshu has joined #openstack16:30
dabotr3buchet: yeah, it's not a "failure" so much as it is not a value that can be imported using __import__() directly.16:30
*** jkakar has quit IRC16:32
dabothe thing is, your paste showed that you are getting a "Class get_connection cannot be found" error. If that were the case, you'd never see the "CLS <function get_connection at 0x30088c0>" line in the debug output.16:32
dendrobatesI know some of you must have opinions on my proposal for adding new core devs...16:32
tr3buchetoutput from compute16:32
dabotr3buchet: did you add the vertical pipes to the debug output?16:33
tr3buchetyes, i usually do that to check whether the problem is some trailing garbage in strings16:34
dabotr3buchet: and you didn't get the "CLS" line like I pasted.16:34
tr3bucheti put it in the function...16:34
*** kashyapc_ has quit IRC16:34
tr3buchetwell that's because i'm getting the error on the import_class line probably16:35
tr3buchetyep, look at my paste, it never gets to the CLS line16:35
dabotr3buchet: that's why the "CLS" line is there - to verify that the import_class() call succeeded.16:35
tr3buchetit did not succeed16:37
tr3buchetwhen it calls import_class() on nova.virt.connection.get_connection it fails16:39
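The shape of the helper tr3buchet's traceback goes through can be sketched as below (a minimal stand-in, not nova's actual `utils.import_class`): import the module half of `pkg.module.attr` and `getattr` the rest. Note that any ImportError raised while the target module executes, including a missing transitive dependency such as the python-cheetah one diagnosed later in this log, is swallowed into the generic "Class ... cannot be found" error, which is why the real cause is hard to spot:

```python
import sys

def import_class(import_str):
    """Import a dotted path like 'nova.virt.connection.get_connection'."""
    mod_str, _sep, class_str = import_str.rpartition(".")
    try:
        __import__(mod_str)
        return getattr(sys.modules[mod_str], class_str)
    except (ImportError, AttributeError, ValueError):
        # the catch-all that hides the underlying missing dependency
        raise RuntimeError("Class %s cannot be found" % class_str)

join = import_class("os.path.join")
print(join("a", "b"))

try:
    import_class("no_such_module.thing")
except RuntimeError as e:
    print(e)                     # Class thing cannot be found
```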
dabotr3buchet: I don't know what to tell you. I just grabbed a fresh copy of trunk from lp, and it runs fine.16:40
tr3buchetthat's exactly what i just did :(16:40
dabotr3buchet: don't know why this would give you your error, but did you do all the usual stuff? source novarc, run as root, etc?16:41
tr3buchetyes, i did the same thing i did when it ran before i pulled trunk16:41
tr3buchetthe only change was pulling trunk16:41
dabotr3buchet: sorry, I'm out of ideas16:42
tr3buchetrm -rf nova && bzr branch lp:nova/trunk nova16:42
* tr3buchet is waiting for bzr16:42
jk0you don't need to run it as root btw16:43
tr3buchetsame error16:43
*** ibarrera has quit IRC16:43
tr3bucheti run it with sudo16:43
jk0it doesn't need root privs to run16:44
tr3buchetwithout sudo, same error16:44
jk0not saying that is the cause, just in general16:44
jaypipesdendrobates: any thoughts on why the i18n branch is failing? :(16:45
dabojk0: really? When I first started running nova, everyone told me to run as root16:45
jk0I've never had to do it16:45
*** WonTu has joined #openstack16:45
*** WonTu has left #openstack16:45
comstuddabo, i think it'll need to run as root to talk to xenstore for the RS agent16:46
dabocomstud: why? are the xenstore read/write methods only available to root?16:47
dabojk0: heh, just tried running as me, and it looks like everything is working.16:47
dendrobatesjaypipes: Does where it is hanging give you any clues?  I need to squeeze some more verbose debugging out of the tests.16:48
*** jkakar has joined #openstack16:49
comstuddabo- they are in the guest.. i'm not sure about the host side, now16:50
jk0the problem you guys are seeing is cheetah was added as a dep16:50
jaypipesdendrobates: no, unfortunately it doesn't...16:50
jk0install python-cheetah and things will work fine16:50
dabotry: pip install Cheetah==
tr3bucheti used apt-get16:52
tr3buchetat any rate that solved it, thanks guys16:52
tr3buchetno idea how you came up with that16:52
*** kashyapc_ has joined #openstack16:52
*** kevnfx has quit IRC16:53
jk0use packages if you can16:53
jk0everything we need is in apt16:53
*** kevnfx has joined #openstack16:53
jk0tr3buchet: I ran the unit tests16:53
tr3buchetah, good idea16:53
dragondmfyi: it's a good idea, when pulling from trunk, to do: sudo pip install -r tools/pip-requires16:55
dabojk0: sorry, I'm not used to using packages for language dependencies. pip works cross-platform, cross-distro, etc.16:55
jk0my thoughts are to stick with packages since that's what will be used in the releases16:56
jk0at least if they install thru apt16:56
* ttx will be back in a couple hours16:57
dendrobatesjaypipes: trying pdb to see if I can get more info16:57
dabojk0: understood. It's best to do that for compatibility's sake now; long-term I don't think we should depend on a single distro for all OpenStack work.16:57
jk0afaik, we only officially support one distro (10.10)16:58
ttxsuspense on the i18n branch is killing me :)16:58
dabojk0: correct. You do see my point that 2-3 years from now, we most likely won't be only supporting ubu 10.10. When that happens, we won't be able to rely on apt packages for consistency.16:59
jk0I understand, but my point is that right now, we're only supporting one distro, so we need to make sure we're developing against those packages and not something that might be in pip17:00
dragondmtrue, an we need to make sure our deps in pip-requires are accurate17:00
dragondmsilly question: how is our hudson build handling deps?17:01
dragondmdoes it use the pip-requires?17:02
spectorclanOpenStack Design Summit - Forming a Program Committee ; more info at http://www.openstack.org/blog/2010/12/openstack-design-summit-program-committee/17:03
*** jkakar has quit IRC17:08
*** Ryan_Lane has joined #openstack17:11
*** kevnfx has quit IRC17:12
jk0jaypipes: would you have a sec to take another look at https://code.launchpad.net/~jk0/nova/diagnostics-per-instance/+merge/44251 please?17:13
*** doude has quit IRC17:21
*** jkakar has joined #openstack17:25
*** pquerna has quit IRC17:35
*** pquerna has joined #openstack17:35
*** jkakar has quit IRC17:43
jaypipesjk0: yup, doing so now.17:44
jaypipesjk0: approved.17:45
jk0thanks jau17:45
*** kevnfx has joined #openstack17:45
jaypipesjk0: no probs :)17:45
*** Ryan_Lane has quit IRC17:47
*** kevnfx_ has joined #openstack17:48
*** kevnfx has quit IRC17:48
*** kevnfx_ is now known as kevnfx17:48
jk0tr3buchet: how did you and sandywalsh test your pause/unpause API?17:50
jk0you guys write up an docs on that?17:50
tr3buchetwe tested using cloudservers api17:51
jk0ah ok, thanks17:51
tr3buchethe added pause and unpause17:51
tr3buchetif you pull from his branch17:51
openstackhudsonProject nova-tarmac build #45,496: ABORTED in 2 hr 5 min: http://hudson.openstack.org/job/nova-tarmac/45496/17:53
jaypipesdendrobates: can you pm me the server creds to log into the hudson box that runs the tarmac job?  I need to modify nova-test.sh to output more verbose stuff...17:54
*** joearnold has joined #openstack17:55
*** jkakar has joined #openstack17:57
dendrobatesjaypipes: I don't have credentials on that box.  soren and mtaylor do.17:58
jaypipesdendrobates: k.17:58
*** westmaas has quit IRC17:58
*** jdurgin has joined #openstack17:59
openstackhudsonProject nova build #313: SUCCESS in 1 min 14 sec: http://hudson.openstack.org/job/nova/313/17:59
*** ccustine has joined #openstack18:00
dendrobatesjaypipes: if you send me the changes I'll run them locally and paste the results18:11
jaypipesdendrobates: I don't know what test_nova.sh looks like :(18:12
dendrobatesjaypipes: right, but since I seem to be able to reproduce it, you could try editing run_tests.py to get what you want18:13
jaypipesdendrobates: how can you reproduce it?18:13
*** kevnfx_ has joined #openstack18:14
*** kevnfx has quit IRC18:14
*** kevnfx_ is now known as kevnfx18:14
dendrobatesI get the same result, hanging at 100% of a cpu just running the test locally18:14
dendrobatesall other branches work fine18:14
dendrobatesonly yours hangs18:15
jaypipesdendrobates: ah, when not running in a VM?18:15
jaypipesdendrobates: s/VM/virtenv18:15
jaypipesdendrobates: gotcha.  lemme see what I can uncover.18:15
*** daleolds has joined #openstack18:19
*** joearnol_ has joined #openstack18:20
*** joearnold has quit IRC18:23
edayvish1: where do things stand with the nested rpc stuff? all fixed, or is the fakerabbit one still hanging?18:24
vish1achew22: it is because of a recent dependency on cheetah apt-get-install python-cheetah will get you going.  I've updated the github novascript to include it18:24
vish1eday: fakerabbit is still hanging.  Termie suggested that we probably need to start a greenthread for each call (i was going to experiment with that in fakerabbit this morning)18:25
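Termie's suggestion can be sketched with plain threads standing in for eventlet greenthreads (all names below are illustrative, not nova's rpc module): if the consumer loop runs handlers inline, a handler that issues a nested rpc.call through the same loop blocks forever waiting for a reply the loop can never get to; dispatching each message in its own thread frees the loop to consume the nested request:

```python
import queue
import threading

requests = queue.Queue()

def call(topic, arg):
    """Toy rpc.call: publish a request, then block on a private reply queue."""
    reply = queue.Queue()
    requests.put((topic, arg, reply))
    return reply.get(timeout=5)

def handle(topic, arg):
    if topic == "compute":
        # nested call: compute asks network for an address mid-request
        return "instance with %s" % call("network", arg)
    return "ip-for-%s" % arg

def dispatch(topic, arg, reply):
    reply.put(handle(topic, arg))

def consumer():
    while True:
        item = requests.get()
        if item is None:
            break
        # the fix: one thread per message, so this loop stays free to pick
        # up the nested request the handler is about to publish
        threading.Thread(target=dispatch, args=item).start()

t = threading.Thread(target=consumer)
t.start()
result = call("compute", "vm-1")
requests.put(None)
t.join()
print(result)
```

Handling everything inline in `consumer()` instead would hang exactly the way the fakerabbit test does: the nested request sits unconsumed while its publisher waits.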
*** vish1 is now known as vishy18:25
*** hggdh has quit IRC18:25
*** hggdh has joined #openstack18:26
edayvishy: ahh, ok18:27
edaylet me know if I can help18:27
vishytr3buchet: same issue ^^ python cheetah dependency18:28
*** kevnfx has quit IRC18:28
vishyhah I should have finished scrollback, looks like jk0 beat me to it18:29
*** kashyapc_ has quit IRC18:29
mtaylorjaypipes: I have credentials everywhere18:31
jaypipesmtaylor: hey :)18:32
jaypipesmtaylor: trying to figure out what the heck is going on when merging my i18n-strings branch...18:32
mtaylorjaypipes: it hates bunnies18:32
jaypipesmtaylor: in virt_env, works flawlessly, outside of virt_env, hangs with cpu 100%...18:32
mtaylorjaypipes: well, fwiw, the sum total of test_nova.sh is:18:33
mtaylorpep8 --repeat --show-pep8 --show-source bin/* nova && python run_tests.py && nos18:33
mtayloretests -w nova/tests/api && python setup.py sdist18:33
mtaylorexcept all on one line18:33
jaypipesmtaylor: k, I'm trying to diagnose on my local machine...I'll let you know if I get stuck any furhter...18:34
mtaylorjaypipes: ok.18:34
*** kevnfx has joined #openstack18:34
*** kevnfx_ has joined #openstack18:35
*** kevnfx has quit IRC18:35
*** kevnfx_ is now known as kevnfx18:35
*** nelson__ has quit IRC18:50
*** kevnfx_ has joined #openstack18:56
*** kevnfx has quit IRC18:56
*** kevnfx_ is now known as kevnfx18:56
*** joearnol_ has quit IRC18:57
sorenmtaylor: Did you see my question last night about permissions on the hudson box?18:58
mtaylorsoren: nope, sorry. missed it. what's up?18:58
soren22:10 <+soren> -rwxr-x--- 1 hudson adm  26338 Sep 15 17:08 /usr/lib/python2.6/os.py18:59
soren22:10 <+soren> -rwxr-x--- 1 hudson adm  26303 Dec  7 17:20 /usr/lib/python2.6/os.pyc18:59
sorenFor instance.18:59
sorenThere's a bunch like it.18:59
*** kevnfx has quit IRC19:05
sorenmtaylor: Any idea what heck that is all about?19:09
mtayloruh. no19:10
mtaylorthat's... very weird19:10
sorenmtaylor: They all have the same timestamps.19:12
sorenWaay in the past. There's no way they've been that way since Sep 15.19:13
sorenI've most certainly run python stuff as myself on there since then.19:13
soren...and that's not possible now.19:13
sorenEr... Well, I guess a few things work.19:13
sorenbzr doesn't at all, for instance (due to not being able to read os.py.19:13
mtaylorsoren: what the hell19:16
sorenWhat the hell indeed.19:16
sorenDec 7 may be accurate.19:16
sorenDec  7 17:31:20 openstack-hudson sudo:  mordred : TTY=pts/0 ; PWD=/home/mordred ; USER=root ; COMMAND=/bin/su -19:18
* soren glances at mtaylor 19:18
mtaylorwhy would I have done a chown/chmod on /usr/lib though...19:19
sorenI don't know.19:19
sorenBunch of half interesting things in dpkg.log from Dec 7 at exatly that time :)19:19
*** sophiap has quit IRC19:20
sorenmtaylor: Hey, that's the day you upgraded to Maverick, wasn't it?19:21
*** WonTu has joined #openstack19:22
vishyeday: not making a whole lot of progress here, I might just wait for termie to hack on it.  My knowledge of eventlet/greenthreads is clearly not complete enough.19:22
*** WonTu has left #openstack19:22
edayvishy: if you want to push a branch I can look19:23
mtaylorsoren: oh - yes it was!19:23
vishyeday: lp:~/vishvananda/nova/move-ip-allocation - try python run_tests.py RpcTestCase19:24
sorenmtaylor: It seems to be contained to just those files.19:25
mtaylorsoren: just os.py and os.pyc ?19:25
vishyand of course there is an extra slash between ~ and vishvananda19:25
sorenmtaylor: No, err..19:25
sorenmtaylor: Just stuff from python2.6-minimal.19:25
henrichrubinhi, does anyone have any comments on this? https://blueprints.launchpad.net/nova/+spec/frontend-heterogenous-architecture-support19:25
mtaylorsoren: that's really f-ing messed up19:25
edayvishy: looking19:26
vishyhenrichrubin: i don't see why you need to add architecture to instance types19:26
*** larstobi has joined #openstack19:26
vishyit should be based on the image19:26
vishyand perhaps stored in the instance table19:27
henrichrubinif we have several different architectures, then we have to check if the instance matches the physical host19:27
sorenmtaylor: Oh, we can't use the timestamps.19:27
vishyright but this is not an instance type requirement19:27
henrichrubinand the image matches both as well19:27
vishythis is an image type requirement19:28
vishyinstance type is how much ram/cpu/disk space the instance should get19:28
sorenmtaylor: Ha hah!19:28
vishythat should be orthogonal to the type of processor it is running19:28
mtaylorsoren: found it?19:28
sorenmtaylor: ctime for those files is 17:28 on Dec 7.19:28
sorenmtaylor: Looking at auth.log, there's some su'ing around that time.19:29
henrichrubinunderstand.  but how can the scheduler determine which node to run on?19:29
vishyhenrichrubin: it looks at the architecture of the image and schedules to a matching node19:30
henrichrubini think a HOSTS table is needed that contains information about a physical host19:30
edayvishy: how was that last failing for you? I get exceptions.AttributeError: 'WaitMessage' object has no attribute 'result'19:30
vishyhenrichrubin: absolutely, I would initially add it to the service table19:30
vishyeday: yeah that is it although occasionally it just hangs19:30
henrichrubinor host_arch could be specified in nova.conf19:30
edayvishy: ok, just making sure19:31
vishyhenrichrubin: yes you could do it with a flag as well.  That is how i'm specifying network_host in one of my branches, but I think eventually we will need more data about a host as you are saying.19:31
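The scheduling approach vishy sketches above — match the image's architecture against a compatible host instead of baking it into the instance type — can be illustrated with a small sketch. All names here (the service-record shape, the `architecture` keys) are hypothetical, not nova's actual data model:

```python
# Hypothetical sketch of architecture-aware scheduling as discussed above.
# The service records and image metadata shapes are assumptions, not nova's
# real scheduler or service table.

def pick_host(image, services):
    """Return the first compute host whose architecture matches the image's."""
    wanted = image.get("architecture", "x86_64")
    for svc in services:
        if svc["topic"] == "compute" and svc.get("architecture") == wanted:
            return svc["host"]
    raise RuntimeError("no compute host for architecture %r" % wanted)

services = [
    {"host": "node1", "topic": "compute", "architecture": "x86_64"},
    {"host": "node2", "topic": "compute", "architecture": "arm"},
]
print(pick_host({"architecture": "arm"}, services))
```

As vishy notes, the per-host architecture could come from a flag initially and from a richer host/service record later; either way the matching logic stays the same.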
sorenmtaylor: Getting closer.19:32
sorenmtaylor: The ctime for all the fucked files are Dec 7 17:28:37-17:28:38.19:33
sorenmtaylor: From dpkg.log:19:33
* vishy hands soren a magnifying glass19:33
vishygo, sherlock!19:33
soren2010-12-07 17:28:33 status half-configured hudson 1.38819:34
soren2010-12-07 17:28:38 status installed hudson 1.38819:34
ttxif sherlock could unfuck the i18n merge, I'd buy him a new pipe.19:34
* soren goes to stare at hudson.19:34
jaypipesttx: hell, I'd buy him two.19:35
henrichrubinvishy:  thanks.  i'll have to investigate how the scheduler knows the image's architecture, before it selects a node.  any idea?19:35
* ttx stares at third-party packaging with usual arrogance of distribution developers19:35
sorenhudson's postinst does indeed chown and chmod a bunch of things.19:35
* ttx sighs19:35
* mtaylor sighs at the hudson packaging19:36
sorenIt seems to be good about it, though.19:36
soren        find /var/lib/hudson -path "*jobs" -prune -o -exec chown hudson:adm {} + || true19:36
*** opengeard has quit IRC19:36
sorenThat looks benign to me.19:36
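The `find -prune -o -exec` pattern soren quotes skips anything matching `*jobs` and applies the action to everything else. A small reproducible demo (directory layout made up for illustration, `-print` standing in for the postinst's `chown`):

```shell
# Demonstrate the find -prune pattern from hudson's postinst: paths matching
# "*jobs" are pruned (descent stops, nothing runs for them), everything else
# falls through to the action after -o. Directory names are made up.
tmp=$(mktemp -d)
mkdir -p "$tmp/var/lib/hudson/jobs/job1" "$tmp/var/lib/hudson/plugins"
touch "$tmp/var/lib/hudson/plugins/a.hpi" "$tmp/var/lib/hudson/jobs/job1/config.xml"
# The jobs tree is never printed; everything else is.
find "$tmp/var/lib/hudson" -path "*jobs" -prune -o -print | sort
rm -rf "$tmp"
```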
*** Abd4llA has joined #openstack19:36
mtaylorsoren: symlink traversal perhaps?19:37
sorenDoes running tests create symlinks?19:37
sorenI'm not sure.19:37
mtaylorperhaps if something is creating a venv19:37
sorenI hate those things.19:38
vishyhenrichrubin: yeah it is a little tough because we don't have images in the datamodel.  I would add a field to the instances table, and modify compute.api to set the field when it retrieves data about the image in create_image.19:38
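What vishy describes — copying the image's architecture onto the instance record at creation time so the scheduler can read it later — might look roughly like this. The function and field names are illustrative only, not nova's actual compute.api:

```python
# Hypothetical sketch of vishy's suggestion: when the compute API creates an
# instance, record the image's architecture on the instance. Names are
# illustrative, not nova's real API.

def create_instance(image_service, instance_values, image_id):
    image = image_service.show(image_id)
    instance_values["architecture"] = image.get("architecture", "x86_64")
    return instance_values

class FakeImageService:
    def show(self, image_id):
        return {"id": image_id, "architecture": "arm"}

inst = create_instance(FakeImageService(), {"image_id": "ami-1"}, "ami-1")
print(inst["architecture"])
```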
mtaylorthat might symlink to the /usr/lib python lib?19:38
sorenIt's possible.19:38
sorenThat would certainly explain it.19:38
sorenBah. It seems there's no compromise involved.19:39
sorenAny objection to my just reinstalling python2.6-minimal?19:39
vishysoren: add -h to chown?19:41
ttxvishy: not sure soren wants to work on Hudson packaging19:41
sorenI'm sure I don't.19:42
sorenmtaylor: 19:39 <+soren> Any objection to my just reinstalling python2.6-minimal?19:42
mtaylorsoren: not at all19:43
mtaylorpackaging java for debian still royally sucks19:43
sorenPython works again.19:43
sorenI'd love to blame this on Java, but I can't really.19:43
henrichrubinvishy:  thanks.  i'll take a stab at it.  only difficulty i see is the scheduler.19:43
mtaylorwell, it's marginally related - it's so hard to package properly19:44
mtaylorso it means we're left with things like hudson which are packaged "good enough" but not really19:44
*** henrichrubin has quit IRC19:45
*** henrichrubin has joined #openstack19:45
sorenjaypipes: bzr on the hudson box should be functional now.19:45
*** westmaas has joined #openstack19:50
*** henrichrubin has quit IRC19:55
*** henrichrubin has joined #openstack19:55
*** hazmat has joined #openstack19:56
sorenjaypipes: It looks very much rpc related to me.19:57
*** sophiap has joined #openstack20:00
ttxTeam meeting in one hour in #openstack-meeting !20:00
vishyeday: out for lunch. msg me if you eureka20:04
edayvishy: I think I see the problem, working on a fix20:06
edaybackend is assuming only 1 consumer at a time20:06
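The class of bug eday describes — a fake in-process backend that assumes one consumer at a time, so a nested rpc.call (a call made while servicing another call on the same host) collides with the outer one — can be sketched with a backend that keeps a queue per routing key. This is purely illustrative, not nova's fakerabbit implementation:

```python
# Hypothetical sketch: an in-memory fake message backend that keeps one queue
# per routing key, so multiple simultaneous consumers (e.g. a topic consumer
# plus a temporary reply consumer for a nested rpc.call) don't clobber each
# other. Not the actual fakerabbit code.
from collections import defaultdict, deque

class FakeBackend:
    def __init__(self):
        self.queues = defaultdict(deque)  # routing key -> pending messages

    def publish(self, routing_key, message):
        self.queues[routing_key].append(message)

    def consume(self, routing_key):
        """Pop the next message for this key, or None if the queue is empty."""
        if self.queues[routing_key]:
            return self.queues[routing_key].popleft()
        return None

backend = FakeBackend()
# A nested call means a reply queue coexists with the topic queue.
backend.publish("compute", {"method": "run"})
backend.publish("msg_reply_1", {"result": "ok"})
print(backend.consume("compute"))
print(backend.consume("msg_reply_1"))
```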
*** miclorb_ has joined #openstack20:16
rbergeron /win 4820:18
edaywow, 48?20:24
* eday thought 20 was a lot20:24
*** reldan has quit IRC20:29
*** HouseAway is now known as AimanA20:40
soreneday: Really? This is /win 61. #rabbitmq is /win 175.20:44
edaysoren: heh, I like to keep a tidy set of channels I guess20:46
*** dragondm has quit IRC20:48
*** iammartian has left #openstack20:49
*** dragondm has joined #openstack20:50
*** zaitcev has joined #openstack20:51
*** rcc has quit IRC20:52
edayvish: fixed it :)20:54
dendrobates openstack meeting in 4 min in #openstack-meeting20:56
dendrobatesttx lost internet, so I'll be filling in.20:57
vishyeday: nice! plz to have the fix?20:57
* jeremyb wonders why there's just a big combined channel for both nova and swift20:58
edaypushing now, one sec20:58
*** Ryan_Lane has joined #openstack20:58
sorenjeremyb: If it gets overwhelming, we can split them. For now, there's really little point.20:58
jeremybsoren: i have sometimes been overwhelmed20:59
sorenjeremyb: We also don't split developer talk from user help.20:59
jeremyb(i come from the swift interest first, maybe nova in 3-6 months time perspective)20:59
jeremybright, i don't think it's a user vs. dev issue i'm seeing20:59
* vishy likes soren when he's on vacation20:59
jeremybwas mostly just wondering if it had ever been discussed before and what the rationale was21:00
jeremybRyan_Lane: you missed the meeting announce... starting any sec in #openstack-meeting (FYI)21:00
sorenjeremyb: No no, I'm not suggesting that. I'm just pointing another sort of split that we also don't do.21:00
jeremybsoren: right21:01
vishyI'm pretty good at blocking out the swift stuff21:01
sorenjeremyb: It's a volume question, really. Until there's enough volume to sustain two channels, we'd rather try to grow synergy or something.21:01
vishyIf i see a bunch of comments by notmyname, I scroll through them quickly :p21:01
edayvishy: lp:~eday/nova/fakerabbit-fix21:01
jeremybyeah, volume is exactly why i brought it up21:01
notmynamevishy: where's the love?21:02
vishyeday: thx do you want to propose for merge separately? or should i just merge it and propose with mine?21:02
sorenvishy: "bzr lp-open lp:~eday/nova/fakerabbit-fix" <--- not sure if you know that trick21:02
edayvishy: merge into yours and check it out21:02
sorenmtaylor: <kick> 21:02 < dendrobates> ACTION: mtaylor to bring down the openstack-discuss group21:02
vishysoren: no didn't know that21:02
sorenvishy: There's also "bzr merge --preview 21:02 < dendrobates> ACTION: mtaylor to bring down the openstack-discuss group21:03
sorenGah. I suck at cut-and-paste.21:03
sorenvishy: There's also "bzr merge --preview lp:~eday/nova/fakerabbit-fix" from another branch.21:03
vishysoren: weird the lp open is trying to make me login21:05
vishywith links21:05
*** jbaker has quit IRC21:06
sorenvishy: Ah.21:06
Ryan_LaneI'm in SLC airport :)21:06
vishyeday: you're a star.  It works with RpcTestCase and CloudTestCase21:07
vishyeday: ship it!21:07
notmynamejeremyb: I think this channel has been about 70/30 split for nova/swift conversations, so the nova guys tune us out and the swift guys show up when someone mentions swift.21:07
Ryan_LaneI also don't seem to be able to join21:07
jeremybRyan_Lane: you're in now21:07
edayvishy: hehe, glad it works for you too21:08
sorenjaypipes: That branch from eday may fix your stuff, too.21:08
*** Ryan_Lane_ has joined #openstack21:09
*** Ryan_Lane has quit IRC21:12
*** reldan has joined #openstack21:12
*** kevnfx has joined #openstack21:19
*** Ryan_Lane_ has quit IRC21:24
vishysoren: are the upstart scripts working in the ppa?21:25
*** johnpur has quit IRC21:25
sorenvishy: There's some weirdness with the objectstore one that I can't completely work out.21:26
vishysoren: ah, good ol' objectstore21:26
vishysoren: someone needs to write an extension/proxy to glance to allow for uploading bundles and registering images21:27
vishysoren: so we can kill it21:27
sorenIsn't that what Glance does.21:27
vishysoren: i don't think it replicates the S3 api or knows how to decrypt manifests/bundles21:28
vishyeday: hmm proposing based on yours didn't work so well :)21:29
alekibangosomeone, when will this work? https://blueprints.launchpad.net/nova/+spec/instance-migration21:29
*** Abd4llA has quit IRC21:29
*** masumotok_ has joined #openstack21:29
vishyalekibango: don't think it is targeted yet21:31
*** jdarcy has quit IRC21:31
alekibangoimho thats one of really important things, along with restart after reboot :)21:32
edayvishy: hmm? I would just merge it into yours and propose yours21:32
vishyeday: yeah that is what i did.  I didn't realize that you had based it off my branch21:32
*** DubLo7 has quit IRC21:33
alekibangovishy: i would try to  help with the reboot issue, as it bites me now, if you would give me some tips21:33
edayvishy: that should still work, it should only merge in my commit21:33
jk0anyone up for a review? :) https://code.launchpad.net/~jk0/nova/diagnostics-per-instance/+merge/4439421:34
vishyeday: yeah but it doesn't work to put in your branch as a prereq cuz it shows a 2 line diff. :)21:35
*** guynaor has left #openstack21:35
*** reldan has quit IRC21:36
*** kevnfx has left #openstack21:37
*** reldan has joined #openstack21:38
*** hggdh has quit IRC21:39
alekibangoit would be cool to have a functional test telling us if after a merge nova would still work21:40
*** gondoi_ has joined #openstack21:40
alekibangoautomated on changes on trunk and merge requests21:41
*** gondoi_ has quit IRC21:41
edayalekibango: we talked about that at the summit, and there is a blueprint, but no one has time to work on it yet21:41
alekibangoi think that should be high priority for nova...21:41
edayalekibango: me too, as do others21:42
*** hggdh has joined #openstack21:42
alekibangoi am willing to help this happen, but i dont feel confident yet to lead on this21:43
alekibangoso if you  will need help, ask me21:43
alekibangoeday: maybe we should together organize some  sprint for this21:44
alekibangoor at least little talk/etherpad session for start21:44
edayalekibango: jaypipes was leading this at the summit, may want to talk with him too21:44
alekibangojaypipes: automated functional tests!   talk to me if you will need help21:45
*** irahgel has quit IRC21:45
*** kevnfx has joined #openstack21:46
alekibangoeday: now i can install nova clusters automatically in ~300-400 seconds (using fai)21:47
alekibangoon real hw21:47
alekibangothat might be way to test it somehow21:47
vishyalekibango: lp:~vishvananda/nova/fix-reboot21:48
vishy(untested but i think i got everything)21:48
alekibangowill test21:48
sorenalekibango: We don't really need real hardware, though.21:48
sorenalekibango: That's why I added the UML backend.21:48
sorenalekibango: So that we could actually test everything in the cloud21:49
alekibangowell, you do if you also want to test xen and kvm21:49
alekibangoits sweet :)21:49
sorenAnd VMWare and blah and foo and bar..21:49
sorenSure, hardware is needed for that.21:49
alekibangoif someone will donate few servers, i could help by installing/configuring  fai there21:50
soren..but the vast majority of Nova doesn't care about the differences between UML and KVM.21:50
alekibangocan install debian or ubuntu  servers  with nova -- and test even more platforms21:50
alekibangoeven fedora etc21:50
alekibangosoren: i agree - but i also noted how hard it is for xen people to get reviews21:51
sorenalekibango: That's really an artifact of poor unit tests (which they are addressing).21:51
sorenIn an ideal world, to review some new code, I would only have to review the unit tests.21:52
alekibangowell, unit tests can test everything21:52
sorenIf I believe they're correct and they pass, everything should be fine.21:52
edayalekibango: see smoketests for some other integration tests, I don't think they work anymore though21:53
soreneday: They do.21:53
edayoh, cool21:53
soreneday: They were fixed reasonably recently.21:53
soreneday: r45621:53
edaywasnt sure if that had hit trunk21:53
alekibangowell still nothing beats functional tests  :)21:54
alekibangoand automated ones are great move forward21:54
alekibangothats why i am talking about them... i feel the need - even reading those tests would help people to use nova correctly :)21:55
alekibangoso they will help documenting21:55
alekibangoi see strong synergy possibilities with automated functional tests... :)21:56
sorenvishy: Ok, the next packages will be teh awesome.21:57
sorenvishy: nova-objectstore now plays nicely.21:57
vishysoren: hawt21:58
* soren climbs back under his rock21:58
*** nelson__ has joined #openstack21:58
*** irahgel has joined #openstack21:59
*** jbaker has joined #openstack22:00
*** ctennis has quit IRC22:03
*** westmaas has quit IRC22:05
nelson__a question I can't find an answer to on openstack.com, or on the wiki: who is selling commercial support for openstack?22:07
zykes-nelson__: you can buy Midokura stack22:08
zykes-which is supported - based on openstack22:08
vishynelson__: various companies are doing support22:08
zykes-or you can wait until canonical starts to support it22:08
nelson__Okay, so why are they so shy?22:08
zykes-or vishy :p22:08
alekibangovishy: will test tomorrow, my body needs some sleep22:08
vishyanso labs (the company i work for) is one.22:08
vishyalekibango: ok let me know if it works and i'll propose for review22:09
zykes-vishy: ain't you working for rspace ?22:09
nelson__why isn't there a page in the wiki entitled "commercial support"?22:09
alekibangovishy: will do22:09
vishyzykes-: no,22:09
vishyzykes-: I work for anso, which is the company that does the development on NASA's cloud22:10
jeremybto clarify, i think nelson__ is primarily interested in swift atm22:10
jeremybnot support for ryan's compute nodes, right?22:10
vishynelson__: ah, we are primarily nova experts, but we will be offering wider support as well22:10
vishynelson__: the reason there isn't more info about commercial support is openstack is still very young, so most companies are still organizing their support22:11
annegentlenelson__: yep, we're young but definitely looking at ways to provide support/services at a pro-level. what would you have in mind?22:12
annegentlenelson__: for the wiki page? Just an explanation of the current state?22:12
vishynelson__: it sounds like it would be something that would be great to add to the wiki :)22:12
vishynelson__: if you are at liberty to disclose, which company do you work for?22:13
nelson__annegentle: see http://qmail.org/top.html#paidsup22:13
nelson__vishy: contracting.22:13
annegentlenelson__: ah qmail, she says fondly. Such polite error messages they have.22:14
nelson__yeah, before people got used to them, they thought a person was apologizing for failure to deliver. :)22:15
nelson__annegentle: that's the kinda of page I was looking for for openstack swift.22:15
alekibangonelson__: see how helpful those people are? you call that no support? take your time to try calling microsoft once, heh...22:15
*** Ryan_Lane has joined #openstack22:16
alekibangonelson__: if you will wait few months, there will be lots of companies supporting openstack, thats sure... its still so young22:18
nelson__"still so young"??  Who would want to adopt brand new code??22:19
* nelson__ runs away screaming.22:19
XenithI'm certainly pushing for my company to support openstack. Would be nice to add that to our contracting services.22:19
* nelson__ teaches alekibango how not to market his software. :)22:20
alekibangoits not my software yet ... but potentially will be, as my merge request is waiting for review22:20
annegentlenelson__: while it's not fair to compare swift and nova ( just like you shouldn't compare children!), nova's experiencing more code enhancements right now22:21
dabonelson__: swift has been used in production for Rackspace for some time. It's the OpenStack project that's "so young"22:21
alekibangonelson__: there are some big companies who love openstack... http://www.openstack.org/community/22:21
Ryan_LaneI'm a fan so far :)22:22
alekibangodabo: right, i still think of nova when i say openstack22:22
nelson__dabo: yes, I know. just picking on alekibango. otherwise I really WOULD be running away screaming.22:22
Ryan_Lanenelson__: see even WMF loves it :D22:22
annegentlenelson__: hee hee22:22
alekibangonelson__: :)22:22
Ryan_Lanenelson__: ;)22:22
Ryan_Laneof course I'm using nova22:22
Ryan_Lane(nelson__ is also working with WMF :D )22:23
alekibangonelson__: but fact is, nova is missing a few important features to be useful in production. but i believe bexar release will be the bomb22:23
alekibangoif we will not fail22:23
Ryan_Lanealekibango: will it have persistent images?22:23
alekibangoRyan_Lane: i for one hope for sheepdog integration22:23
alekibangoor similar system22:24
jk0would anyone in core mind reviewing https://code.launchpad.net/~jk0/nova/diagnostics-per-instance/+merge/44394 ?22:24
Ryan_Lanesome storage architecture direction would be nice :)22:24
*** ctennis has joined #openstack22:25
Ryan_LaneI also like to see vm migration22:25
alekibangolive migration?22:25
alekibangoor migration22:25
*** gondoi has quit IRC22:25
Ryan_Laneif the storage architecture is shared, storage migration isn't as big of a deal22:25
alekibangothen we could implement live rescheduler :)22:25
zykes-isn't there a ceph integration thing going on ?22:25
alekibangoyes it is22:25
alekibangobut ceph is still highly experimental code22:26
zykes-for bexar ?22:26
alekibangonot sure if for bexar22:26
alekibangobut someone is testing it22:26
alekibangobrowse irc logs to find who22:26
alekibango(keyword rbd or ceph)22:26
*** _skrusty has quit IRC22:27
Ryan_Lanesheepdog would be really nice :)22:29
Ryan_Lanelooks like it supports snapshots and thin provisioning22:29
Ryan_Laneand cloning \o/22:29
alekibangoyes yes22:29
alekibangoand its fast, reliable22:29
alekibangoand decouples cpu from harddrive22:30
alekibangoso you can have diskless nodes22:30
alekibangofor nova-compute22:30
alekibangothats my goal for next few months22:30
Ryan_Laneawesome it has conversion support for existing images too22:32
alekibangoyes i love it and even  shaun the sheep loves this dog22:36
*** _skrusty has joined #openstack22:41
alekibangobtw the man talking about testing ceph/rbd was KyleM1...22:41
zykes-ok: )22:43
zykes-is sheepdog support coming for bexar ?22:43
zaitcevI can almost guarantee that sheepdog support will not appear in Bexar. It was talked about, but not for Bexar.22:44
*** kevnfx has quit IRC22:45
zaitcevPoke jcsmith about it. He tried to find a sensible way to attach it... I did not get details but they were thinking about emulating one of the NBD derivatives.22:45
zaitcevThe main problem is that sheepdog is only available in qemu.22:46
zaitcevIf your hypervisor uses qemu to emulate its devices, you're in luck. If not, no sheepdog for you.22:46
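zaitcev's point — sheepdog volumes are only reachable through qemu's block layer — means a KVM guest would attach one through qemu's sheepdog protocol driver. A hedged sketch of what the libvirt-style disk configuration might look like; the exact element names and support depend on libvirt/qemu versions, and the volume name is made up:

```xml
<!-- Hypothetical libvirt disk element for a sheepdog-backed volume.
     Only works where the hypervisor's devices are emulated by qemu,
     which is the limitation zaitcev describes above. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='sheepdog' name='nova-volume-1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```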
*** Ryan_Lane has quit IRC22:48
zykes-does kvm use qemu ?22:50
tr3buchetcan someone help me out with a merge?  https://code.launchpad.net/~tr3buchet/nova/os_api_images/+merge/4418122:50
*** kevnfx has joined #openstack22:51
*** allsystemsarego has quit IRC22:52
*** dendrobates is now known as dendro-afk22:55
*** dirakx has quit IRC22:57
*** bolapara has joined #openstack22:58
openstackhudsonProject nova build #314: SUCCESS in 1 min 15 sec: http://hudson.openstack.org/job/nova/314/22:59
*** termie_ is now known as termie23:00
*** termie has joined #openstack23:00
*** dendro-afk is now known as dendrobates23:01
jk0tr3buchet: those spaces aren't gone yet :)23:04
*** seshu has left #openstack23:05
mtaylorsoren: on it23:06
*** joearnold has joined #openstack23:07
*** ppetraki has quit IRC23:08
*** aliguori has quit IRC23:09
uvirtbotNew bug: #693211 in swift "variable name collision in access processor" [High,In progress] https://launchpad.net/bugs/69321123:21
*** joearnol_ has joined #openstack23:27
*** joearnold has quit IRC23:28
vishynotmyname: no lack of love here, I'm just glad that you have the swift questions handled.  I have my hands full with nova atm.23:29
*** MarkAtwood has quit IRC23:32
*** MarkAtwood has joined #openstack23:32
*** kevnfx has quit IRC23:36
*** schisamo has quit IRC23:42
*** jbaker has quit IRC23:53
*** sirp1 has joined #openstack23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!