*** dragondm has quit IRC | 00:02 | |
*** Tv has quit IRC | 00:20 | |
*** antonyy has joined #openstack-dev | 00:22 | |
*** Binbingone is now known as Binbin | 00:50 | |
*** jdurgin has quit IRC | 00:55 | |
*** mgius has quit IRC | 00:57 | |
*** Tv_ has joined #openstack-dev | 00:59 | |
*** cloudgroups has joined #openstack-dev | 01:06 | |
*** cloudgroups has left #openstack-dev | 01:25 | |
*** Binbin has quit IRC | 02:06 | |
*** Arminder-Office has joined #openstack-dev | 02:10 | |
*** Arminder has quit IRC | 02:13 | |
*** Tv_ has quit IRC | 04:40 | |
*** sirp__ has quit IRC | 05:37 | |
*** zaitcev has quit IRC | 05:51 | |
*** Binbin has joined #openstack-dev | 05:58 | |
anotherj1sse | hmm qemu powered libvirt is broken on maverick vms on rackspace cloud servers? | 06:23 |
anotherj1sse | ls: cannot access /sys/devices/virtual/dmi: No such file or directory | 06:23 |
*** cloudgroups has joined #openstack-dev | 06:29 | |
soren | anotherj1sse: When do you see that error? | 06:29 |
anotherj1sse | soren: the error I see is that virsh list fails | 06:50 |
anotherj1sse | soren: as does python-libvirt | 06:50 |
anotherj1sse | soren: http://pastie.org/1969950 <- also I private messaged you root access | 06:52 |
*** antonyy has quit IRC | 06:54 | |
*** openpercept has joined #openstack-dev | 07:42 | |
*** cloudgroups has left #openstack-dev | 07:59 | |
*** mancdaz has joined #openstack-dev | 08:12 | |
*** mattray has joined #openstack-dev | 10:10 | |
*** BinaryBlob has joined #openstack-dev | 10:30 | |
*** mattray has quit IRC | 11:06 | |
sandywalsh | soren, just replied to your awesome flavor-queue idea. | 11:42 |
*** BinaryBlob has quit IRC | 11:42 | |
soren | sandywalsh: I have a bunch of notes on the subject from way back.. Trying to find them now. | 11:53 |
soren | sandywalsh: Weird. I remember stumbling upon them less than a month ago, but now I can't find them. | 11:55 |
sandywalsh | soren, I know the feeling | 11:57 |
soren | Found it! | 12:10 |
soren | Just where it should be, but apparently I can't type. | 12:11
* soren pastebings | 12:11 | |
soren | See? I can't type properly. | 12:11 |
sandywalsh | a piece of crumpled up napkin behind the waste basket? | 12:11 |
*** BinaryBlob has joined #openstack-dev | 12:11 | |
soren | No, in my $HOME/Reference/OpenStack folder. There's only half a dozen things in there, not sure how I missed it first time I looked. | 12:12 |
sandywalsh | "Note on scheduler idea: make it awesome. <details to follow>" | 12:12 |
sandywalsh | :) | 12:12 |
*** BinaryBlob has quit IRC | 12:16 | |
*** adiantum has joined #openstack-dev | 12:24 | |
*** BinaryBlob has joined #openstack-dev | 12:30 | |
*** Arminder-Office has quit IRC | 12:30 | |
jaypipes | *yawn* | 12:50 |
*** thatsdone has joined #openstack-dev | 12:52 | |
*** Binbin is now known as Binbingone | 13:01 | |
*** thatsdone has quit IRC | 13:05 | |
*** ameade has joined #openstack-dev | 13:09 | |
ttx | http://wiki.openstack.org/reviewslist/ now features appropriate prioritization of stuff targeted to the next milestone | 13:24 |
ttx | next time you don't know what you should be reviewing, use it ^^ | 13:25 |
ameade | :) | 13:30 |
*** foxtrotgulf has joined #openstack-dev | 13:31 | |
*** BinaryBlob has quit IRC | 13:39 | |
*** dprince has joined #openstack-dev | 13:46 | |
*** dprince has quit IRC | 13:55 | |
*** openpercept has quit IRC | 14:02 | |
jaypipes | ttx: I can hardly keep up with your prolific bug management the last few days :) | 14:07 |
ttx | jaypipes: Phase 1: fix obvious bad status, Phase 2: touch new/undecided bugs, Phase 3: refresh Incomplete bugs... | 14:08 |
jaypipes | ttx: step 4: inundate Jay with emails :P | 14:08 |
ttx | yep :) | 14:09 |
*** troytoman-away is now known as troytoman | 14:23 | |
*** jkoelker has joined #openstack-dev | 14:24 | |
*** Arminder has joined #openstack-dev | 14:26 | |
* soren runs some errands | 14:35 | |
*** pyhole has quit IRC | 14:52 | |
*** pyhole has joined #openstack-dev | 14:52 | |
*** dragondm has joined #openstack-dev | 15:00 | |
*** troytoman is now known as troytoman-away | 15:11 | |
annegentle | can anyone troubleshoot/fix http://eavesdrop.openstack.org? | 15:14 |
*** sirp__ has joined #openstack-dev | 15:18 | |
*** openpercept has joined #openstack-dev | 15:19 | |
*** openpercept is now known as Guest57742 | 15:20 | |
dabo | Got a quick, simple (1 line), but critical bug fix merge prop. Can I get some nova-core love for https://code.launchpad.net/~ed-leafe/nova/lp785843/+merge/62321? | 15:22 |
markwash | tr3buchet: there? | 15:30 |
markwash | dabo: looking now | 15:33 |
dabo | thx | 15:34 |
*** Tv_ has joined #openstack-dev | 15:34 | |
markwash | dabo: I presume you were hitting the error case and getting the wrong exception, and that's what led to finding this problem? | 15:34
dabo | markwash: exactly. The exception being raised itself caused a different exception. | 15:35 |
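The failure mode dabo and markwash are discussing is a Python `except` block that raises a new exception referencing a variable the failed `try` block never bound. A minimal sketch of the pattern (names are illustrative, not Nova's actual code):

```python
def get_instance_ref(session, instance_id):
    # stand-in for a DB lookup that fails before anything is assigned
    raise LookupError("db unavailable")

def get_opaque_ref_buggy(session, instance_id):
    try:
        instance_obj = get_instance_ref(session, instance_id)
        return instance_obj
    except LookupError:
        # BUG: instance_obj was never assigned, so this line raises
        # UnboundLocalError instead of the intended exception
        raise ValueError("instance %s not found" % instance_obj.id)

def get_opaque_ref_fixed(session, instance_id):
    try:
        instance_obj = get_instance_ref(session, instance_id)
        return instance_obj
    except LookupError:
        # fix in the spirit of the merged change: raise an exception
        # that does not need the (unbound) instance object
        raise ValueError("instance not found")
```

The merged fix took the second shape: raise an exception that carries no reference to the object the lookup failed to produce.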
*** rnirmal has joined #openstack-dev | 15:38 | |
*** johnpur has joined #openstack-dev | 15:42 | |
*** ChanServ sets mode: +v johnpur | 15:42 | |
openstackjenkins | Project nova build #936: SUCCESS in 2 min 45 sec: http://jenkins.openstack.org/job/nova/936/ | 15:49 |
openstackjenkins | Tarmac: The code for getting an opaque reference to an instance assumed that there was a reference to an instance obj available when raising an exception. I changed this from raising an InstanceNotFound exception to a NotFound, as this is more appropriate for the failure, and doesn't require an instance ID. | 15:49 |
openstackjenkins | Project nova-tarball-bzr-delta build #181: FAILURE in 12 sec: http://jenkins.openstack.org/job/nova-tarball-bzr-delta/181/ | 15:58 |
openstackjenkins | * Tarmac: Created new libvirt directory, moved libvirt_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities. | 15:58 |
openstackjenkins | * Tarmac: The code for getting an opaque reference to an instance assumed that there was a reference to an instance obj available when raising an exception. I changed this from raising an InstanceNotFound exception to a NotFound, as this is more appropriate for the failure, and doesn't require an instance ID. | 15:58 |
*** dprince has joined #openstack-dev | 16:00 | |
*** rnirmal_ has joined #openstack-dev | 16:02 | |
*** antonyy has joined #openstack-dev | 16:04 | |
openstackjenkins | Project nova build #937: SUCCESS in 2 min 41 sec: http://jenkins.openstack.org/job/nova/937/ | 16:04 |
openstackjenkins | Tarmac: Created new libvirt directory, moved libvirt_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities. | 16:04 |
*** rnirmal_ has quit IRC | 16:04 | |
*** rnirmal has quit IRC | 16:06 | |
*** rnirmal has joined #openstack-dev | 16:06 | |
*** adiantum has quit IRC | 16:33 | |
tr3buchet | markwash: what's up? | 16:37 |
anotherj1sse | pvo (or other cloud servers guys) - anyone know why qemu/libvirt doesn't work on maverick on cloud servers? - log here: http://pastie.org/1969950 - I can give root access | 16:38 |
sirp__ | if anyone is interested in dist-sched work, we'd love some feedback on https://code.launchpad.net/~sandy-walsh/nova/dist-sched-2a/+merge/61245 and https://code.launchpad.net/~rconradharris/nova/dist-sched-2b/+merge/61352 | 16:39 |
pvo | anotherj1sse: soren might know. I thought he had played with it in the past. I hadn't worked with libvirt as much. | 16:40 |
pvo | could it be the kernel? | 16:40 |
anotherj1sse | pvo: that is what sleepsonthefloor was thinking ... | 16:41 |
pvo | what kernel are you running? | 16:41 |
anotherj1sse | pvo: Ubuntu 10.10 - uname -a reports: 2.6.35.4-rscloud #8 SMP Mon Sep 20 15:54:33 UTC 2010 x86_64 GNU/Linux | 16:43 |
anotherj1sse | so it is your fault :) | 16:43 |
pvo | ha. msg me your slice_id and I can poke around . | 16:44 |
dprince | anotherj1sse: Are you using PPA? | 16:46 |
dprince | anotherj1sse: The PPA packages currently require libvirt 0.8.8. I think Soren built a version of libvirt 0.8.8 specifically for the PPA. | 16:47 |
dprince | anotherj1sse: The Cloud Servers kernel doesn't support libvirt 0.8.8. So that is the cause of your error. | 16:47
dprince | anotherj1sse: Essentially my guess would be that your libvirt installation isn't working. | 16:48
dprince | anotherj1sse: What I do for SmokeStack (which currently runs on Cloud Servers until I can get it running on a Nova installation) is just use libvirt 0.8.3 which works great. | 16:49 |
dprince | anotherj1sse: Libvirt 0.8.3 won't support LXC but it should work fine otherwise. | 16:49 |
anotherj1sse | dprince: it should be - checking | 16:49 |
*** adiantum has joined #openstack-dev | 16:49 | |
anotherj1sse | ii libvirt0 0.8.8-1ubuntu3~ppamaverick1 library for interfacing with different virtualization systems | 16:50 |
dprince | dprince: Yep. That version won't work. At least I couldn't get it to. | 16:50 |
anotherj1sse | dprince: thx | 16:50 |
dprince | anotherj1sse: I love it when I talk to myself. :) | 16:50 |
anotherj1sse | I see the source | 16:51 |
dprince | anotherj1sse: If you know a better solution I'm all ears. | 16:51 |
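dprince's workaround amounts to gating on a known-good libvirt version (0.8.3 works on the Cloud Servers kernel, 0.8.8 does not). A rough sketch of such a check, assuming a naive dotted-version comparison; real Debian version ordering (epochs, `~`, revisions) is more involved:

```python
import re

def parse_version(v):
    # naive: grab the first three numeric components of a package
    # version string like "0.8.8-1ubuntu3~ppamaverick1"
    return tuple(int(x) for x in re.findall(r"\d+", v)[:3])

def libvirt_usable(installed, max_known_good="0.8.3"):
    # per dprince: anything past the known-good ceiling needs kernel
    # support the Cloud Servers kernel lacks
    return parse_version(installed) <= parse_version(max_known_good)
```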
*** zaitcev has joined #openstack-dev | 16:56 | |
anotherj1sse | pvo: my account - anotherjesse - responds "too many requests" when I try to create a cloud server ... are there quotas I'm hitting? | 16:57 |
*** jdurgin has joined #openstack-dev | 16:57 | |
pvo | yep, there are. | 16:57 |
pvo | i can't adjust those, but I know who can. | 16:57 |
anotherj1sse | awesome - want an email or is this good enough? | 16:58 |
dprince | anotherj1sse: file a ticket in the Cloud Servers control panel and they'll bump them for you. | 16:58 |
pvo | if you can email me, I know who to send it to. | 16:58 |
*** bcwaldon has joined #openstack-dev | 16:58 | |
anotherj1sse | dprince: you hit it with smokestack too eh? | 16:59 |
dprince | anotherj1sse: My POST limits /servers are bumped on one account. | 16:59 |
dprince | anotherj1sse: We have another account we use in Titan that I still need to get the limits increased on. | 17:00 |
jaypipes | markwash, bcwaldon: see my post re: pagination? | 17:09 |
jaypipes | sirp__: my comment about making register_models() just do the db_sync make sense? | 17:10 |
sirp__ | jaypipes: yeah, i think so, im gonna take a shot at auto-migrating | 17:11 |
jaypipes | sirp__: should just be like a couple lines of code to put in register_models.. | 17:11 |
sirp__ | jaypipes: although, i do like the idea of having setup as an explicit separate step... | 17:11 |
jaypipes | sirp__: I'm not too thrilled with that, tbh. | 17:12 |
jaypipes | sirp__: especially since it's worked automatically for a while now. | 17:12 |
sirp__ | jaypipes, well, to be honest, it "worked" in a very broken way... left db's entirely inconsistent w/ no easy road to recovery | 17:12 |
jaypipes | sirp__: the easy way to recovery was to simply update the migration table to the latest version... | 17:13 |
sirp__ | jaypipes, we'll have to eyeball the schemas to figure out which version was auto-created, since the migrate_version wasn't populated | 17:13 |
sirp__ | jaypipes, that works if they were running the latest version... | 17:13 |
sirp__ | jaypipes, should be the case most of the time, but, is it a safe assumption in general? (not sure) | 17:14 |
jaypipes | sirp__: k. I still think this bug is a bit obscure and not something that would happen without the manual intervention from graham. | 17:14 |
sirp__ | jaypipes: hmm, it wasn't hard to replicate; really wouldn't consider it too obscure. You load up glance; it generates tables. You go to upgrade and it s'plodes :) | 17:15 |
sirp__ | jaypipes, seems like that's going to happen a lot unless we come up with a way of preventing db from being inconsistent | 17:16
sirp__ | jaypipes: either take the rails approach of, always make db:migrate a pre-req step; or, auto migrate | 17:16 |
jaypipes | sirp__: how was the "load up glance" done, though? | 17:17 |
jaypipes | sirp__: on install, glance should have db_sync run. | 17:17 |
jaypipes | sirp__: so I don't see really how doing an upgrade on a regular install would produce the bug. | 17:18 |
sirp__ | jaypipes, good question. One way to trigger it, is to grab source and just run glance-registry | 17:19 |
sirp__ | jaypipes: agreed, on a normal install, this probably wouldn't be triggered | 17:19 |
jaypipes | sirp__: ok. so why not the solution I proposed, which is register_models() does a check for versioning and runs migrate.sync if no versioning is found? that would solve the bug. | 17:20 |
sirp__ | jaypipes, yeah, i'm okay with that as a solution. I personally would prefer making the migration a separate required step (makes things more explicit and all); but, handling this automatically doesn't seem terrible either | 17:21 |
jaypipes | sirp__: I hear ya. I prefer the automated step vs. the manual one in this case, partly because that is inline with nova, too. | 17:22 |
jaypipes | sirp__: though I think the same "bug" exists in nova, too. | 17:22 |
sirp__ | jaypipes, yeah, im a little confused as to why nova hasn't experienced something like this yet... was going to look into that shortly. Regardless, I'll go ahead and toss in auto-migration and re-propose | 17:23 |
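The auto-migration jaypipes proposes: register_models() checks whether the database is under version control and runs the sync itself if not, instead of blindly creating tables and leaving migrate_version unpopulated. A self-contained sketch of the idea using sqlite3; table and function names are illustrative (Glance actually uses sqlalchemy-migrate):

```python
import sqlite3

# illustrative migration list; real migrations live in versioned scripts
MIGRATIONS = [
    "CREATE TABLE images (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE images ADD COLUMN status TEXT",
]

def db_version(conn):
    row = conn.execute(
        "SELECT name FROM sqlite_master WHERE name = 'migrate_version'"
    ).fetchone()
    if row is None:
        return None  # database not under version control
    return conn.execute("SELECT version FROM migrate_version").fetchone()[0]

def db_sync(conn):
    current = db_version(conn)
    if current is None:
        conn.execute("CREATE TABLE migrate_version (version INTEGER)")
        conn.execute("INSERT INTO migrate_version VALUES (0)")
        current = 0
    # apply only the migrations past the recorded version
    for version, stmt in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(stmt)
        conn.execute("UPDATE migrate_version SET version = ?", (version,))

def register_models(conn):
    # the proposed fix: sync to the latest migration on registration,
    # so a fresh source checkout never produces an unversioned schema
    db_sync(conn)
```

Running register_models() twice is harmless: the second call finds the version table already at the latest migration and applies nothing.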
*** Guest57742 is now known as openpercept | 17:24 | |
*** openpercept has quit IRC | 17:24 | |
*** openpercept has joined #openstack-dev | 17:24 | |
*** rnirmal has quit IRC | 17:24 | |
jaypipes | sirp__: cheers mate | 17:25 |
sirp__ | jaypipes: good deal | 17:25 |
sandywalsh | jaypipes, damn you for bringing up pagination ... I was hoping that problem would just go away if we ignored it :) (got the same problem with GET /servers/ ) | 18:01 |
bcwaldon | soren: I would *really* appreciate feedback on this MP | 18:02 |
bcwaldon | soren: http://www.openstack.org/blog/2011/05/openstack-conference-and-design-summit-spring-2011-overview-video/ | 18:02 |
jaypipes | sandywalsh: hehe | 18:02 |
bcwaldon | nope | 18:02 |
bcwaldon | soren: https://code.launchpad.net/~rackspace-titan/nova/osapi-serialization/+merge/61656 | 18:03 |
markwash | jaypipes: why is marker the id of the *first* record returned in the original query? what if the first record in the original query has a very old updated_at time? | 18:05 |
markwash | jaypipes: for example if we are sorting ascending by updated_at | 18:07 |
jaypipes | markwash: sorry, I was incorrect in stating that. It should have been that marker is the timestamp of the initial query. | 18:11 |
bcwaldon | jaypipes: could you also use created_at? | 18:12 |
jaypipes | bcwaldon: I suppose. | 18:14 |
markwash | bcwaldon: but you really want the maximum of all created_at and all deleted_at times in the table | 18:15 |
markwash | bcwaldon: maybe even all updated_at times too | 18:16 |
markwash | bcwaldon: so you might as well use now() | 18:16 |
jaypipes | markwash: no, you are comparing created_at <= @time_of_initial_query | 18:19 |
jaypipes | markwash: or even created_at <= @time_of_initial_query AND deleted_at <= @time_of_initial_query | 18:20 |
*** mgius has joined #openstack-dev | 18:22 | |
markwash | jaypipes: agree, but i believe there is no effective difference in behavior for times between max(created_at across rows, deleted_at across rows) and now() | 18:23 |
markwash | jaypipes: just doing it based on the max is presumably an expensive lookup | 18:23 |
jaypipes | markwash: there's a big difference in efficiency :) | 18:24 |
markwash | jaypipes: agree :-) | 18:24 |
jaypipes | markwash: plus, the user may wait quite some time before going to page 2. | 18:24
markwash | jaypipes: I was attempting to make perhaps an academic point to bcwaldon | 18:24 |
jaypipes | markwash: and I'm proposing to ensure that the page 2 the user sees corresponds exactly to the page 2 of the initial query (shouldn't include new rows or be missing deleted stuff) | 18:24 |
jaypipes | markwash: you and your academia! | 18:25 |
markwash | jaypipes: I far prefer macadamia | 18:25 |
bcwaldon | seconded | 18:25 |
jaypipes | markwash: lol | 18:25 |
* jaypipes partial to cashews. | 18:25 | |
markwash | jaypipes: agree that the timestamp must be the same across separate queries, i was just talking about valid values of the timestamp when it generated on the first query | 18:25 |
* bcwaldon I welcome all nuts | 18:25 | |
jaypipes | bcwaldon: you ARE a nut. :) | 18:27 |
bcwaldon | well... | 18:27 |
jaypipes | markwash: understood | 18:27 |
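The scheme the thread converges on: record a snapshot timestamp at the initial query and reuse it on every subsequent page, so page 2 contains no rows created after the snapshot and is not missing rows deleted after it. A sketch with an illustrative schema, not Nova's actual tables:

```python
import sqlite3

def fetch_page(conn, snapshot_ts, limit, offset):
    # pin every page to the same snapshot timestamp: rows created after
    # it are invisible, rows deleted after it are still visible
    return conn.execute(
        """SELECT id FROM servers
           WHERE created_at <= ?
             AND (deleted_at IS NULL OR deleted_at > ?)
           ORDER BY id LIMIT ? OFFSET ?""",
        (snapshot_ts, snapshot_ts, limit, offset),
    ).fetchall()
```

As markwash notes, any timestamp at or after the newest created_at/deleted_at in the table behaves the same as now() for the first page, so now() is the cheap choice.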
*** rnirmal has joined #openstack-dev | 18:28 | |
*** blamar_ has quit IRC | 18:32 | |
*** blamar_ has joined #openstack-dev | 18:34 | |
*** blamar_ has quit IRC | 18:37 | |
*** blamar_ has joined #openstack-dev | 18:39 | |
*** blamar_ has quit IRC | 18:43 | |
*** blamar_ has joined #openstack-dev | 18:44 | |
*** dprince has quit IRC | 18:49 | |
vishy | markwash: any comments on tasks? | 19:21 |
*** adiantum has quit IRC | 19:29 | |
*** antonyy has quit IRC | 19:30 | |
*** antonyy_ has joined #openstack-dev | 19:30 | |
markwash | vishy: inded | 19:30 |
markwash | vishy: indeed even | 19:31 |
markwash | vishy: looking at that code it is pretty awesome | 19:31 |
markwash | vishy: maybe kind of scary awesome? | 19:31 |
markwash | vishy: I'm a little worried about something like that running in production--mostly around the information in the task table | 19:31 |
markwash | vishy: do we have to migrate that data? is there maybe some other place we could store it? | 19:32 |
markwash | vishy: also, I am curious if this solves our "many writers" problem? currently there are many places other than the 'api' server that use the various api classes | 19:32 |
markwash | vishy: like compute manager makes references to network.api | 19:32 |
markwash | vishy: perhaps there are more such cases, which would mean we still have to configure a ton of nodes for writing | 19:33 |
markwash | vishy: as nifty as it all is, I keep going back to having nova-writer :-/ | 19:33 |
markwash | vishy: and I'm wondering if I need to make some code to back that up as a real proposal | 19:33 |
vishy | markwash: many writers is fine. Large number of writers is not | 19:34 |
vishy | markwash: yes, migrations seem like they could be scary | 19:34 |
markwash | vishy: yeah, I'm wondering where the line is and what side of it we're on with these general no-db-messaging approaches | 19:35 |
vishy | but i think we can get around that by enforcing clearing of the task queue before migrating | 19:35
markwash | like would all the compute nodes need write permission? | 19:35 |
vishy | markwash: the purpose of moving it this way is to take write away from compute nodes | 19:35 |
vishy | I don't think it is really an "api" concern specifically. It is really business layer | 19:36 |
vishy | markwash: I'm trying to make the nodes into stupid task executors | 19:36 |
markwash | vishy: maybe I was mistaken... I see now that the compute manager has direct references to the network and volume managers, but it also has references to network_api | 19:37
markwash | vishy: and presumably the network_api would need write permissions | 19:37 |
vishy | ultimately everything should be going through the api | 19:38 |
markwash | vishy: going through which api? | 19:39 |
vishy | as long as a given "api" can own the task...it tells the compute/volume nodes to execute chunks of the tasks and keeps track of the task | 19:39 |
vishy | compute should only talk to network through network.api | 19:39 |
vishy | etc.. | 19:39 |
vishy | i really hate calling these api's, it is really more of a supervisor | 19:39 |
markwash | so, if we take the volume create approach and apply it to the networking code | 19:41 |
vishy | so compute communicates through the api to the volume supervisor. The volume supervisor owns the tasks and the database writing. then sends idempotent messages to the workers to achieve the task | 19:41 |
markwash | okay, I think I buy the idea of the supervisor, but is that in the example code anywhere? | 19:42 |
vishy | markwash: network is a little different at the moment because it doesn't have edge-workers | 19:42 |
markwash | vishy: ah yes good point--maybe its not the best example then | 19:42 |
vishy | markwash: the code in volume.api is the supervisor, there isn't a separate object at the moment | 19:43 |
vishy | so essentially in the current code nova-api is acting as the supervisor | 19:43 |
vishy | (because it is the actual process that imports volume.api and runs the code) | 19:44 |
markwash | vishy: I kinda love the supervisor approach if we move it out from the volume api | 19:44 |
vishy | markwash: yes, I suppose I'm a little worried about having another class/worker | 19:45 |
vishy | that essentially just passes around the same requests | 19:45 |
markwash | vishy: I think we should probably look at the naming around api and manager as you were saying | 19:45 |
markwash | vishy: really, api is just the same interface as manager, but with the implication that its operating through messaging | 19:45 |
markwash | vishy: so its more like volume.Manager and volume.RemoteManager | 19:46 |
markwash | not that I love the name Manager :-) | 19:46 |
vishy | right | 19:46 |
vishy | I don't know that we need volume.api | 19:46 |
vishy | perhaps we just rename volume.api to volume.supervisor | 19:47 |
vishy | the supervisor has an api | 19:47 |
vishy | just like the manager has an api | 19:47 |
vishy | seems like it would be nice to modify scheduler so it could run in-process as well | 19:48 |
markwash | vishy: that sounds like it could work | 19:48 |
vishy | although i'm not sure if that works with the zone stuff | 19:48 |
markwash | vishy: I kind of like how flexible that is in the context of moving from in project libraries to separate services | 19:49 |
markwash | vishy: because we could separate out at either the manager or the supervisor layer, depending on where the very front end wants to get its data | 19:49 |
markwash | vishy: maybe I'm running with this in a direction you're not so fond of :-) | 19:51 |
vishy | not sure what you mean by separate out | 19:51 |
vishy | the scheduler? | 19:51 |
markwash | when we replace nova code with external services, we can do it either by replacing the manager and keeping the supervisor around to update the nova db | 19:52 |
markwash | or we can stop reading from the nova db and replace all of the supervisor and manager code for that | 19:52 |
markwash | but I'm probably missing something wrt zones | 19:53 |
markwash | vishy: I guess I still come back to nova-writer in a way because I really like CQRS | 19:54 |
markwash | vishy: if you're interested I think maybe you can get a good sense of it from http://www.udidahan.com/2009/12/09/clarified-cqrs/ | 19:54 |
markwash | vishy: but I'm not suggesting that as a blocker on the supervisor approach necessarily | 19:55 |
vishy | ah interesting | 19:56 |
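A toy rendering of the supervisor pattern vishy describes: the supervisor owns the task table and all database writes, while workers are "stupid task executors" that receive idempotent messages and only report results back. All names are illustrative, not Nova's actual classes:

```python
class VolumeWorker:
    """Executes chunks of a task; never touches the database."""

    def execute(self, task_id, action, payload):
        # idempotent by design: re-running the same message is safe
        return {"task_id": task_id, "result": "%s:%s" % (action, payload)}


class VolumeSupervisor:
    """Owns the task table and all writes; dispatches to workers."""

    def __init__(self, worker):
        self.db = {}      # stand-in for the real database
        self.tasks = {}   # the task table the thread worries about migrating
        self.worker = worker

    def create_volume(self, name):
        task_id = len(self.tasks) + 1
        self.tasks[task_id] = "running"
        reply = self.worker.execute(task_id, "create", name)
        # only the supervisor writes state back; workers stay write-free
        self.db[name] = reply["result"]
        self.tasks[task_id] = "done"
        return task_id
```

In the code under discussion, volume.api plays the supervisor role in-process inside nova-api; the sketch just makes that role a named object.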
jaypipes | vishy: pls see https://bugs.launchpad.net/nova/+bug/788295. | 20:03 |
uvirtbot | Launchpad bug 788295 in nova "UnboundLocalError in exception handling" [Undecided,New] | 20:03 |
vishy | jaypipes: my guess is that has something to do with not using glance to store images | 20:05 |
vishy | he said it was an old install right? | 20:05 |
jaypipes | vishy: hmm, not sure... I've asked somik to join us here. | 20:06 |
*** somik_ has joined #openstack-dev | 20:07 | |
jaypipes | somik_: heya. vishy had a question for you. | 20:08 |
somik_ | sure- go ahead | 20:08 |
jaypipes | dabo: you marked the bug duplicate... good to know. maybe you have some input for somik_ . | 20:08 |
dabo | jaypipes: somik_: I don't know about the underlying bug in glance; my fix was only for the additional bug in the exception handler | 20:09 |
vishy | i don't know if there as an underlying bug | 20:10 |
somik_ | dabo: I tried the exception handler fix but to no avail. | 20:10 |
vishy | perhaps the settings are just old | 20:10 |
somik_ | so I am wondering if its just a config issue | 20:10 |
somik_ | vishy: what do you mean by settings are just old? any specific config i should look at? | 20:11
dabo | somik_: you should have gotten the "StorageError: Timeout waiting for device sdb to be created" error, but not the "Error: local variable 'instance_obj' referenced before assignment" message | 20:11 |
jaypipes | dabo: see the pastebin link in the bug report... | 20:12 |
dabo | jaypipes: ok, that's what I would expect | 20:12 |
vishy | somik_: do you have anything in your initial install that you need? I might suggest just wiping it and starting over | 20:13 |
*** rnirmal has quit IRC | 20:13 | |
somik_ | vishy: this is a new install with trunk from 2 days ago, so i havent been able to create any VMs since then. | 20:13 |
*** rnirmal has joined #openstack-dev | 20:14 | |
vishy | oh! | 20:15 |
vishy | somik_: i thought you had upgraded an old install | 20:15 |
vishy | did you follow the xen install guide? | 20:16 |
somik_ | vishy: correct, with few changes to nova.conf | 20:16 |
vishy | somik_: not sure how helpful I'm going to be here | 20:16 |
somik_ | because i have two hypervisors running compute and a cloud controller running api, scheduelr, glance and dashboard | 20:17 |
vishy | i haven't done any troubleshooting for xen issues | 20:17 |
*** lorin1 has joined #openstack-dev | 20:18 | |
somik_ | i guess if you guys are running trunk and can spawn VMs without issues then thats a positive, maybe i'll check-in with josh kearney since he wrote the xen guide. | 20:18 |
jk0 | what's up? | 20:19 |
jk0 | somik_: there was a fix we merged today that may fix what you've been seeing | 20:19 |
jk0 | try pulling trunk and spawning again | 20:20 |
somik_ | +jk0: i can try that, i was talking about this bug - https://bugs.launchpad.net/nova/+bug/788295 just fyi for others | 20:20 |
uvirtbot | somik_: Error: Could not parse data returned by Launchpad: timed out | 20:20 |
somik_ | i'll try to pull trunk again then may be its all fixed | 20:22 |
jk0 | somik_: yeah, I believe that was taken care of | 20:22 |
somik_ | +jk0: cool thanks then, I'll be back if it wasn't ;) | 20:23 |
openstackjenkins | Project nova build #938: SUCCESS in 2 min 42 sec: http://jenkins.openstack.org/job/nova/938/ | 20:39 |
openstackjenkins | * Tarmac: Several changes designed to bring the openstack api 1.1 closer to spec | 20:39 |
openstackjenkins | - add ram limits to the nova compute quotas | 20:39 |
openstackjenkins | - enable injected file limits and injected file size limits to be overridden in the quota database table | 20:39 |
openstackjenkins | - expose quota limits as absolute limits in the openstack api 1.1 limits resource | 20:40 |
openstackjenkins | - add support for controlling 'unlimited' quotas to nova-manage | 20:40 |
openstackjenkins | * Tarmac: During the API create call, the API would kick off a build and then loop in a greenthread waiting for the scheduler to pick a host for the instance. After API would see a host was picked, it would cast to the compute node's set_admin_password method. | 20:40 |
openstackjenkins | The API server really should not have to do this. The password to set should be pushed along with the build request, instead. The compute node can then set the password after it detects the instance has booted. This removes a greenthread from the API server, a loop that constantly checks the DB for the host, and finally a cast to the compute node. | 20:40 |
ttx | soren: around ? | 20:45 |
ttx | soren: see my email, and let's discuss that tomorrow if needed. | 20:48 |
soren | ttx: I am. | 20:49 |
soren | ttx: But don't tell anyone. | 20:49 |
ttx | soren: I won't, unless you tell someone I was here. | 20:49
ttx | soren: does my email make sense ? | 20:49 |
soren | ttx: No :) | 20:50 |
ttx | soren: ok... I don't think I have enough energy left to explain what I mean, so let's do that tomorrow ? | 20:50 |
soren | ttx: Ok. The question on my mind is: Why does it need to be different from what Nova is doing? | 20:51
soren | ttx: ..but you can answer that tomorrow :) | 20:51 |
ttx | soren: I'm not sure it would. And I'll explain that tomorrow. | 20:52 |
ttx | good night :) | 20:52 |
soren | G'night :) | 20:52 |
somik_ | +jk0: I pulled the latest trunk and the result is pretty much the same, at least to my naive eyes - http://paste.openstack.org/show/1418/ | 20:59
jk0 | that's a different exception | 20:59 |
jk0 | are you running the compute node inside xenserver? | 20:59 |
jk0 | and running as root? | 20:59 |
somik_ | +jk0: yup, i am running compute with sudo inside an ubuntu VM on the xenserver | 21:01
jk0 | ah, what version? | 21:03 |
jk0 | if Maverick, make sure this is on your flagfile: --xenapi_remap_vbd_dev=true | 21:03 |
bcwaldon | soren: since you're here, may I ask a favor of you? | 21:04 |
somik_ | +jk0: its maverick and i have that on the flagfile.. | 21:04 |
bcwaldon | soren: I would LOVE to get some feedback on this MP: https://code.launchpad.net/~rackspace-titan/nova/osapi-serialization/+merge/61656 | 21:05 |
*** lorin1 has left #openstack-dev | 21:05 | |
somik_ | +jk0: it's lucid but i have the maverick flag, maybe that's the issue. lemme retry | 21:10
jk0 | yes, that would do it | 21:10 |
soren | bcwaldon: I'm not really here. Well, I am, but I'm not working. | 21:16 |
*** bcwaldon has quit IRC | 21:18 | |
openstackjenkins | Project nova build #939: SUCCESS in 2 min 42 sec: http://jenkins.openstack.org/job/nova/939/ | 21:18 |
openstackjenkins | Tarmac: Fixed the mistyped line referred to in bug 787023 | 21:18 |
uvirtbot | Launchpad bug 787023 in nova "rate limits builder uses incorrect key" [Medium,In progress] https://launchpad.net/bugs/787023 | 21:18 |
*** RobertLaptop has quit IRC | 21:21 | |
somik_ | +jk0: that was it! thanks! the vm spawned fine now | 21:26 |
jk0 | awesome | 21:27 |
jk0 | no problem | 21:27 |
*** RobertLaptop has joined #openstack-dev | 21:33 | |
*** ameade has quit IRC | 21:37 | |
vishy | considering proposing the following diff for merge: http://pastie.org/1973500 | 21:56 |
vishy | it is good on so many levels | 21:56 |
*** foxtrotgulf has quit IRC | 22:04 | |
*** mgius has quit IRC | 22:10 | |
*** jkoelker has quit IRC | 22:40 | |
*** somik_ has quit IRC | 23:03 | |
*** dragondm has quit IRC | 23:25 | |
*** rnirmal has quit IRC | 23:32 | |
*** Tv_ has quit IRC | 23:33 | |
*** johnpur has quit IRC | 23:37 | |
*** bcwaldon has joined #openstack-dev | 23:51 | |
*** antonyy_ has quit IRC | 23:57 | |
cloudnod | hm, irccloud complaining that my usage exceeds beta allowance | 23:58 |
cloudnod | but now i'm hooked. #smart | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!