*** rnirmal has quit IRC | 00:03 | |
*** pvo has quit IRC | 00:05 | |
*** msassak has quit IRC | 00:06 | |
mtaylor | jaypipes: talk to me about what you want installed and from where? | 00:07 |
*** adjohn has joined #openstack | 00:08 | |
mtaylor | jaypipes: is swift 1.2.0-0ubuntu1~maverick0 good enough? | 00:10 |
*** hadrian_ has joined #openstack | 00:11 | |
*** bcwaldon has quit IRC | 00:12 | |
jaypipes | mtaylor: yep, thx! | 00:13 |
mtaylor | jaypipes: done | 00:13 |
*** hadrian has quit IRC | 00:14 | |
*** hadrian__ has joined #openstack | 00:14 | |
jaypipes | mtaylor: u rock. thx man. | 00:16 |
*** hadrian_ has quit IRC | 00:17 | |
*** burris has joined #openstack | 00:24 | |
*** ovidwu has joined #openstack | 00:26 | |
*** burris has quit IRC | 00:29 | |
*** burris has joined #openstack | 00:31 | |
*** z0 has joined #openstack | 00:32 | |
Code_Bleu | i added floating ips...now i cant run my instance...it says "connection refused". Im looking at logs now, but any ideas why? | 00:34 |
*** oldbam has left #openstack | 00:35 | |
*** aliguori has quit IRC | 00:36 | |
*** littleidea has joined #openstack | 00:39 | |
Code_Bleu | nevermind, nova-api wasnt running | 00:40 |
vishy | Code_Bleu: if you have ips you don't want to add, just manually remove them from the database | 00:42 |
Code_Bleu | vishy: thats what i ended up doing ;-) | 00:43 |
vishy | Code_Bleu: or you can just add them all individually | 00:43 |
Code_Bleu | vishy: how do you add them individually? | 00:43 |
vishy | for floating ips the cidr is a convenience | 00:43 |
vishy | nova-manage floating create <host> 10.0.0.5/32 | 00:43 |
Code_Bleu | vishy: when i run an instance...how does it know to get a fixed or floating ip ? | 00:44 |
vishy | actually you can use floating delete to remove individual ones too | 00:44 |
vishy | it always gets a fixed | 00:44 |
Code_Bleu | vishy: confused....so how can i assign a public ip to my instance then? | 00:44 |
vishy | floating ips are controlled by euca-allocate-address and euca-associate-address | 00:45 |
vishy | just like elastic ips on amazon | 00:45 |
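The floating-IP flow above, gathered into one command sequence. The host name, instance id, and addresses here are placeholders, and the `floating delete` form is inferred from vishy's remark rather than quoted verbatim:

```shell
# Register floating addresses with nova; the CIDR is a convenience,
# so a /32 registers exactly one address.
nova-manage floating create myhost 10.0.0.5/32
# Individual addresses can also be removed again:
nova-manage floating delete 10.0.0.5/32

# Instances always boot with a fixed IP. A floating IP is layered on top,
# like an Elastic IP on EC2:
euca-allocate-address                    # reserve a floating IP for the project
euca-associate-address -i i-00000001 10.0.0.5
```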
uvirtbot | New bug: #731010 in nova "nova.sh project zipfile fails on first run" [Low,In progress] https://launchpad.net/bugs/731010 | 00:46 |
Code_Bleu | vishy: i associated an address to an instance but when i try to ssh it doesnt automatically login with "ubuntu" like it did before...its prompting for a password now | 00:47 |
vishy | where are you logging in from? | 00:47 |
Code_Bleu | vishy: and i cant access it outside of the server | 00:47 |
vishy | is it in the same subnet as the ip of your host? | 00:48 |
Code_Bleu | vishy: first attempt was from the OpenStack server | 00:48 |
Code_Bleu | vishy: yes | 00:48 |
vishy | did you euca-authorize -P tcp -p 22 default | 00:48 |
*** ccustine has quit IRC | 00:48 | |
Code_Bleu | vishy: yes...it worked whenever i was using a fixed ip | 00:50 |
vishy | fixed_ips don't need to be authorized | 00:51 |
vishy | sounds like ip forwarding is having trouble | 00:52 |
Code_Bleu | vishy: well i just did what the manual said...i guess it assumed i was using floating | 00:52 |
vishy | cat /proc/sys/net/ipv4/ip_forward | 00:52 |
*** aliguori has joined #openstack | 00:52 | |
*** lionel has quit IRC | 00:52 | |
vishy | Code_Bleu: nope floating is just if you allocate associate | 00:52 |
Code_Bleu | vishy: its set to 0 | 00:53 |
vishy | that needs to be one | 00:53 |
vishy | echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward | 00:54 |
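The one-liner above only changes the running kernel; to check the current state first and keep forwarding enabled across reboots (the sysctl.conf step is an addition, not something from the conversation):

```shell
cat /proc/sys/net/ipv4/ip_forward                  # 0 means forwarding is off
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # enable for the running kernel
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf   # persist across reboots
```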
*** lionel has joined #openstack | 00:54 | |
Code_Bleu | vishy: ok, i can ping now from another machine | 00:55 |
*** littleidea has quit IRC | 00:55 | |
Code_Bleu | vishy: ok, i had the command wrong... i had ssh -i mykey...not ssh -i mykey.priv it works | 00:56 |
vishy | good deal | 00:56 |
vishy | :) | 00:56 |
Code_Bleu | vishy: yesterday whenever i got the fixed ip version working..I was trying another instance of another image, and it did the metadata error thing...am trying again now | 00:57 |
Code_Bleu | vishy: maybe its a image thing..i dont know | 00:57 |
vishy | is the other image a desktop image? | 00:59 |
Code_Bleu | vishy: i see..i guess when you do floating it just redirects to the fixed ip? describe instances now shows '<public ip> <private ip>' | 00:59 |
Code_Bleu | vishy: yes | 00:59 |
vishy | yup | 00:59 |
vishy | ok for desktop images | 00:59 |
vishy | you need to do: | 00:59 |
Code_Bleu | vishy: i got the ubuntu 10.10 64 bit desktop image from the ubuntu euc | 01:00 |
vishy | ip addr add 169.254.169.254/32 scope link dev br100 | 01:00 |
*** pvo has joined #openstack | 01:00 | |
vishy | it actually needs to have the ip assigned for desktop images to work | 01:00 |
vishy | because they send out an arp packet | 01:00 |
vishy | then the desktop one should boot properly | 01:00 |
Code_Bleu | vishy: only desktop images do this? | 01:01 |
vishy | well it is actually any image that has certain networking setup | 01:01 |
vishy | the desktop images have a local route set up inside the instance so it tries to arp first instead of just sending out the http packet through the default gateway | 01:01 |
vishy | in any case, nova-network should probably just automatically add the address, but I'm not sure that is totally safe in all situations so it is manual right now | 01:02 |
*** dragondm has quit IRC | 01:02 | |
mtaylor | jaypipes: ping | 01:05 |
Code_Bleu | vishy: still getting: DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id | 01:05 |
Code_Bleu | vishy: should it be a scope global? instead of the scope link when i add the ip addr? | 01:06 |
vishy | Code_Bleu: no but it probably got put first in the list | 01:06 |
vishy | so you probably need to remove and re-add the 10.3.3.1 address | 01:06 |
Code_Bleu | vishy: yes, it is listed first | 01:07 |
Code_Bleu | vishy: ok | 01:07 |
vishy | so you need to do an ip addr del <paste info between inet and br100 for 10.3 address> dev br100 | 01:07 |
vishy | then ip addr add ^^ | 01:08 |
*** kang__ has joined #openstack | 01:09 | |
*** rlucio has quit IRC | 01:09 | |
Code_Bleu | vishy: i did: ip addr del 10.3.3.1/24 brd 10.3.3.255 scope global dev br100 | 01:09 |
Code_Bleu | vish: then replaced del with add, still listed second | 01:10 |
vishy | ? | 01:10 |
vishy | did you sudo? | 01:10 |
Code_Bleu | vishy: im root | 01:10 |
*** enigma1 has left #openstack | 01:11 | |
vishy | do an ip addr between | 01:11 |
vishy | see if it actually disappears | 01:11 |
vishy | you might have to killall dnsmasq to actually get it to go away since it is listening on that ip | 01:12 |
Code_Bleu | vishy: yes, it disappears...i even restarted networking in between too...still second | 01:12 |
vishy | hmm you might have to remove the 169 and add it as global | 01:13 |
vishy | maybe it lists the locals first? | 01:13 |
Code_Bleu | vishy: that worked with global | 01:14 |
vishy | cool | 01:14 |
vishy | you could probably also just stick it on another bridge by itself, but I've never tried that method | 01:15 |
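The whole metadata-address dance from this exchange, as one sequence. The 10.3.3.1/24 gateway address is the one from this conversation, and `scope global` (rather than the `scope link` vishy first suggested) is what ultimately kept the address ordering right here:

```shell
# Desktop-style images ARP for the metadata IP instead of routing it through
# the default gateway, so the bridge must actually hold the address:
ip addr add 169.254.169.254/32 scope global dev br100

# If it now sorts ahead of the gateway address, delete and re-add the gateway:
ip addr del 10.3.3.1/24 brd 10.3.3.255 scope global dev br100
ip addr add 10.3.3.1/24 brd 10.3.3.255 scope global dev br100
ip addr show dev br100                   # verify the gateway is listed first
```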
Code_Bleu | vishy: run starts and instance, terminate kills the instance...is there a stop or shutdown...that keeps the same instance there, but just not on? | 01:16 |
vishy | no | 01:16 |
vishy | but you can do a shutdown -h now from inside the instance | 01:16 |
vishy | and then a euca-reboot-instances to restart it | 01:16 |
vishy | (it will still show as running in the instance list at the moment) | 01:17 |
Code_Bleu | vishy: i havent been successful with the reboot. It hangs the instance with a status of shutdown and i have to just do another run instance and terminate the previous one | 01:19 |
vishy | Code_Bleu: really? euca-reboot-instances is broken? sounds like you should make a bug report | 01:20 |
Code_Bleu | vishy: i could be wrong...i would have to try again...but i do remember instances getting hung in shutdown | 01:20 |
vishy | hmm i just tried on my build and it works | 01:20 |
*** pvo has quit IRC | 01:22 | |
*** rlucio has joined #openstack | 01:27 | |
Code_Bleu | vishy: so if i want to rdp or vnc into the desktop image, do i have to run euca-authorize the ports for each? | 01:31 |
*** j05h has quit IRC | 01:31 | |
vishy | yes | 01:35 |
uvirtbot | New bug: #731030 in glance "test_db_sync_downgrade_then_upgrade fails with mysql and drizzle" [Undecided,New] https://launchpad.net/bugs/731030 | 01:36 |
*** nelson has quit IRC | 01:37 | |
*** nelson has joined #openstack | 01:37 | |
*** HugoKuo has joined #openstack | 01:38 | |
Code_Bleu | vishy: so what if i wanted rdp allowed for 10.0.0.100, but not 10.0.0.101? if i do the -P tcp -p 3389 default..thats for all instances on my floating ip range? correct? | 01:38 |
vishy | you need to create a new security group | 01:39 |
vishy | and start the instance in that security group | 01:39 |
vishy | then you can control the rules individually | 01:39 |
*** rlucio has quit IRC | 01:39 | |
vishy | euca-add-group desktop | 01:39 |
vishy | euca-authorize -P tcp -p 3389 desktop | 01:40 |
Code_Bleu | vishy: this is a stupid ?, but when you terminate an instance, you basically delete everything? so if i set passwords, install apps..etc...its all gone when i do a terminate? | 01:40 |
vishy | euca-run-instances -g desktop | 01:40 |
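vishy's per-group firewalling, spelled out. The group description and image id are placeholders, and depending on the euca2ools version `euca-add-group` may require the `-d` description flag:

```shell
euca-add-group -d "desktop instances" desktop   # create a new security group
euca-authorize -P tcp -p 3389 desktop           # allow RDP for this group only
euca-run-instances -g desktop ami-xxxxxxxx      # start the instance in that group
```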
vishy | Code_Bleu: correct | 01:40 |
Code_Bleu | vishy: ok, cool...thanks | 01:40 |
vishy | Code_Bleu: you can euca-bundle-image to create a new base image from your instance | 01:40 |
Code_Bleu | vishy: i will be writing OpenStack code before you know it ;-) | 01:41 |
vishy | sorry euca-bundle-vol that is | 01:41 |
vishy | good deal | 01:41 |
*** gregp76 has quit IRC | 01:41 | |
Code_Bleu | vishy: now i just need to run 3 instances within my current OpenStack install, and create an OpenStack install on those 3 instances...so i can learn more about multiple hosts...im sure that will probably get hairy | 01:43 |
*** joearnold has quit IRC | 01:44 | |
*** hadrian__ has quit IRC | 01:44 | |
*** joearnold has joined #openstack | 01:45 | |
vishy | heh yes | 01:50 |
vishy | it will | 01:50 |
*** dysinger has quit IRC | 02:00 | |
*** justinsb has joined #openstack | 02:02 | |
*** j05h has joined #openstack | 02:03 | |
*** bcwaldon has joined #openstack | 02:22 | |
*** MarkAtwood has quit IRC | 02:25 | |
*** cascone has quit IRC | 02:27 | |
*** bcwaldon has quit IRC | 02:27 | |
*** freeflying has quit IRC | 02:35 | |
*** zul has quit IRC | 02:35 | |
*** freeflying has joined #openstack | 02:35 | |
*** zul has joined #openstack | 02:37 | |
HugoKuo | how should I arrange my network if I use two machines for playing with nova? | 02:45 |
HugoKuo | two NIC on each machine ? | 02:46 |
*** Ryan_Lane has joined #openstack | 02:52 | |
*** rds__ has quit IRC | 02:52 | |
*** EndEng|Desktop has joined #openstack | 02:57 | |
*** EndEng|Alt has quit IRC | 03:01 | |
Code_Bleu | what do i do after i run euca-bundle-vol -u <#> <instance id> --no-inherit? | 03:09 |
*** MarkAtwood has joined #openstack | 03:09 | |
*** EndEng|Alt has joined #openstack | 03:10 | |
*** EndEng|Desktop has quit IRC | 03:14 | |
HugoKuo | upload | 03:14 |
HugoKuo | then register | 03:14 |
HugoKuo | the last step is run it | 03:14 |
*** aliguori has quit IRC | 03:14 | |
Code_Bleu | im doing the euca-bundle-vol on a running instance...so i need to terminate, then upload, register, then run? | 03:16 |
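The full cycle HugoKuo outlined, as a sketch. The size, destination directory, bucket, and ids are placeholders, and exact flags vary between euca2ools versions:

```shell
# Inside the running instance: snapshot it into a bundle
euca-bundle-vol -u <account-id> -s 2048 -d /mnt --no-inherit
# Upload and register the bundle, then boot from the new image
euca-upload-bundle -b mybucket -m /mnt/image.manifest.xml
euca-register mybucket/image.manifest.xml
euca-run-instances <new-ami-id>
```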
*** gregp76 has joined #openstack | 03:16 | |
*** EndEng|Desktop has joined #openstack | 03:20 | |
Code_Bleu | ran bundle-vol and ...no hd activity light on my server anymore either and my screen is still sitting at: This filesystem will be automatically checked every 38 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override | 03:21 |
*** EndEng|Alt has quit IRC | 03:24 | |
*** bcwaldon has joined #openstack | 03:37 | |
Code_Bleu | how can i change the path to where my images and instances are? | 03:38 |
*** bcwaldon has quit IRC | 03:41 | |
*** cascone has joined #openstack | 03:45 | |
HugoKuo | sorry , I were in Lab | 03:48 |
HugoKuo | I have no idea for your last question :< | 03:49 |
*** dirakx has joined #openstack | 03:49 | |
Code_Bleu | how can i change the default path to where images and instances are created? I ran my / partition out of space...i have another partition with a lot of space in it..and i would like to have my images and instances in there | 03:49 |
*** uvirtbot has quit IRC | 04:08 | |
*** uvirtbot` has joined #openstack | 04:08 | |
*** MarkAtwood has quit IRC | 04:16 | |
*** EndEng|Alt has joined #openstack | 04:43 | |
Code_Bleu | What do i need to check: im getting this error now: AMQP server on localhost:5672 is unreachable | 04:44 |
*** Ryan_Lane has quit IRC | 04:45 | |
Code_Bleu | i just symlinked /var/lib/nova; my / partition ran out of space. I created a new logical volume and moved the nova folder there and just symlinked to it. Now im getting the AMQP error | 04:45 |
*** EndEng|Desktop has quit IRC | 04:47 | |
*** miclorb_ has quit IRC | 05:06 | |
*** ovidwu has quit IRC | 05:37 | |
*** f4m8_ is now known as f4m8 | 05:40 | |
*** ovidwu has joined #openstack | 05:42 | |
*** MarkAtwood has joined #openstack | 05:48 | |
*** kashyap has joined #openstack | 05:52 | |
*** kashyap has quit IRC | 05:57 | |
*** ramkrsna has joined #openstack | 06:07 | |
vishy | restart amqp? | 06:16 |
vishy | . /etc/init.d/rabbitmq-server restart | 06:18 |
vishy | probably the db is corrupted if you ran out of space, so you might need to rm -rf /var/lib/rabbitmq/mnesia | 06:18 |
vishy | then do a start | 06:18 |
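vishy's recovery steps for a RabbitMQ broken by a full disk, in order. Note that destroying the mnesia directory throws away any queued messages:

```shell
sudo /etc/init.d/rabbitmq-server stop
sudo rm -rf /var/lib/rabbitmq/mnesia     # wipe the corrupted message store
sudo /etc/init.d/rabbitmq-server start
sudo rabbitmqctl status                  # confirm the broker is back on :5672
```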
*** guynaor has joined #openstack | 06:20 | |
*** guynaor has left #openstack | 06:20 | |
*** mgoldmann has joined #openstack | 06:45 | |
*** fayce has joined #openstack | 06:55 | |
*** fayce has quit IRC | 07:07 | |
*** dinu has joined #openstack | 07:09 | |
*** fayce has joined #openstack | 07:09 | |
*** gasbakid has joined #openstack | 07:16 | |
*** naehring has joined #openstack | 07:16 | |
*** ericrw has joined #openstack | 07:19 | |
*** kashyap has joined #openstack | 07:21 | |
*** ericrw has quit IRC | 07:28 | |
*** dirakx has quit IRC | 07:29 | |
*** mahadev has quit IRC | 07:31 | |
*** nerens has joined #openstack | 07:32 | |
*** 13WAAA6R2 has joined #openstack | 07:36 | |
*** miclorb_ has joined #openstack | 07:38 | |
*** ramkrsna has quit IRC | 07:50 | |
*** ramkrsna has joined #openstack | 07:50 | |
*** gregp76 has quit IRC | 07:50 | |
*** lionel has quit IRC | 07:55 | |
*** lionel has joined #openstack | 07:55 | |
*** justinsb has quit IRC | 08:01 | |
*** omidhdl has joined #openstack | 08:06 | |
*** azneita has quit IRC | 08:07 | |
*** justinsb has joined #openstack | 08:08 | |
*** CloudChris has joined #openstack | 08:09 | |
*** maple_bed has quit IRC | 08:12 | |
*** skiold has joined #openstack | 08:18 | |
*** mariolone has joined #openstack | 08:18 | |
*** mariolone has left #openstack | 08:18 | |
*** rcc has joined #openstack | 08:18 | |
*** taihen has joined #openstack | 08:21 | |
*** maplebed has joined #openstack | 08:23 | |
*** eikke has joined #openstack | 08:37 | |
*** miclorb_ has quit IRC | 08:40 | |
*** Nacx has joined #openstack | 08:42 | |
eikke | in swift, is it normal test.unit.container.test_updater.TestContainerUpdater.test_run_once takes a very long time to run, or is something broken on my system? (hangs in a poll() call which is restarted every minute) | 08:45 |
*** dirakx has joined #openstack | 08:47 | |
*** MarcMorata has joined #openstack | 08:51 | |
*** hadrian has joined #openstack | 08:56 | |
*** phoexer has joined #openstack | 08:59 | |
*** phoexer has quit IRC | 09:06 | |
*** allsystemsarego has joined #openstack | 09:07 | |
*** nerens has quit IRC | 09:10 | |
*** JohnBergoon has joined #openstack | 09:12 | |
*** JohnBergoon has left #openstack | 09:13 | |
HugoKuo | bonjour | 09:22 |
HugoKuo | should I install nova-compute in CC? | 09:22 |
HugoKuo | my archi is two machine one is CC another is compute-node | 09:23 |
dinu | hello all. Just a simple question. Any success with having nova working with XenAPI per this documentation ? http://wiki.openstack.org/XenServerDevelopment | 09:26 |
*** MarcMorata has quit IRC | 09:32 | |
*** mariolone has joined #openstack | 09:39 | |
*** mariolone has left #openstack | 09:39 | |
*** MarcMorata has joined #openstack | 09:41 | |
*** drico has quit IRC | 09:43 | |
*** MarkAtwood has quit IRC | 09:49 | |
*** rds__ has joined #openstack | 09:57 | |
*** MarcMorata has quit IRC | 09:59 | |
*** MarcMorata has joined #openstack | 10:01 | |
*** hadrian has quit IRC | 10:02 | |
*** dabo has quit IRC | 10:06 | |
*** sirp_ has quit IRC | 10:06 | |
*** spectorclan has quit IRC | 10:06 | |
*** dabo has joined #openstack | 10:07 | |
*** miclorb_ has joined #openstack | 10:08 | |
*** sirp has joined #openstack | 10:08 | |
*** spectorclan has joined #openstack | 10:08 | |
*** MarcMorata has quit IRC | 10:22 | |
*** MarcMorata has joined #openstack | 10:23 | |
zykes- | hmm, amqp has an over-the-wire messaging format, no? | 10:31 |
CloudChris | Hi guys ... a question about the communication between the components when a new image is spawned: | 10:43 |
CloudChris | If a user uses EC2 over nova-api to spawn an image - does nova-api then contact the nova-scheduler via AMQP to get the right NODE where to talk (again via AMQP) to nova-network,compute,volume OR does the scheduler do all the communication to compute,network,volume? | 10:43 |
HugoKuo | how come my 2 core machine can run up over 6 m1.tiny instance @_@ it's cool | 10:46 |
*** nerens has joined #openstack | 10:52 | |
*** naehring has quit IRC | 10:53 | |
*** naehring has joined #openstack | 10:53 | |
*** naehring has quit IRC | 11:02 | |
*** adjohn has quit IRC | 11:02 | |
*** adjohn has joined #openstack | 11:03 | |
*** adjohn has quit IRC | 11:04 | |
*** adjohn has joined #openstack | 11:05 | |
*** iRTermite has quit IRC | 11:07 | |
*** iRTermite has joined #openstack | 11:10 | |
*** 13WAAA6R2 has quit IRC | 11:15 | |
*** naehring has joined #openstack | 11:16 | |
*** miclorb_ has quit IRC | 11:21 | |
*** miclorb has joined #openstack | 11:35 | |
*** DigitalFlux has joined #openstack | 11:37 | |
*** miclorb has quit IRC | 11:40 | |
soren | CloudChris: The latter. | 11:54 |
soren | CloudChris: The api server basically sends a "this is what I want. Make it happen. kthxbai" to the scheduler. | 11:54 |
*** rds__ has quit IRC | 12:05 | |
CloudChris | soren: thanks :) | 12:19 |
*** kashyap has quit IRC | 12:21 | |
*** ctennis has quit IRC | 12:23 | |
*** omidhdl has quit IRC | 12:26 | |
*** dovetaildan has quit IRC | 12:29 | |
*** dovetaildan has joined #openstack | 12:31 | |
*** dovetaildan has quit IRC | 12:33 | |
*** naehring has quit IRC | 12:33 | |
*** freeflying has quit IRC | 12:33 | |
*** z0 has quit IRC | 12:33 | |
*** jfluhmann has quit IRC | 12:33 | |
*** Failbait1 has quit IRC | 12:33 | |
*** gcc has quit IRC | 12:33 | |
*** filler has quit IRC | 12:33 | |
*** magritte has quit IRC | 12:33 | |
*** mgoldmann has quit IRC | 12:35 | |
*** mahadev has joined #openstack | 12:36 | |
*** ctennis has joined #openstack | 12:37 | |
*** dovetaildan has joined #openstack | 12:39 | |
*** naehring has joined #openstack | 12:39 | |
*** freeflying has joined #openstack | 12:39 | |
*** z0 has joined #openstack | 12:39 | |
*** jfluhmann has joined #openstack | 12:39 | |
*** Failbait1 has joined #openstack | 12:39 | |
*** gcc has joined #openstack | 12:39 | |
*** magritte has joined #openstack | 12:39 | |
*** filler has joined #openstack | 12:39 | |
soren | oh, ffs. | 12:40 |
*** mahadev has quit IRC | 12:41 | |
*** mahadev has joined #openstack | 12:42 | |
*** mahadev has quit IRC | 12:50 | |
HugoKuo | got a question | 12:56 |
HugoKuo | my CC can run 9 m1.tiny instances@_@ | 12:56 |
HugoKuo | there's only 2 cores on that machine | 12:56 |
HugoKuo | that's cool... | 12:56 |
HugoKuo | but how come | 12:56 |
HugoKuo | btw , bcz when I used eucalyptus before ....one core could just run one instance | 12:57 |
HugoKuo | thanks | 12:57 |
jaypipes | mtaylor: pong | 12:57 |
HugoKuo | is there any cmd can show me that how many resources I have in my pool ? | 12:58 |
HugoKuo | such as euca-describe-availability-zones verbose | 12:58 |
*** ryker has quit IRC | 12:59 | |
soren | You can try using euca-describe-availability-zones verbose | 13:01 |
HugoKuo | thanks I did that and it show me | 13:11 |
HugoKuo | which service is work on which machine | 13:11 |
jaypipes | soren: morning. | 13:11 |
HugoKuo | is it normal that I can run 9 instances on a compute node @@? | 13:12 |
HugoKuo | it has only 2 cores | 13:12 |
ttx | jaypipes: good morning to you ! | 13:12 |
ttx | jaypipes: done with jury duty ? | 13:13 |
jaypipes | ttx: ah, morning :) | 13:13 |
soren | HugoKuo: It's not unexpected. | 13:13 |
jaypipes | ttx: yes, as of Thursday last week. thus the uptick in getting things done from my end ;) | 13:13 |
ttx | jaypipes: I'd need some status update on Glance specs; whenever you have 5 min. | 13:13 |
HugoKuo | ooh thanks soren | 13:13 |
jaypipes | ttx: they are all up-to-date. | 13:14 |
CloudChris | I have an understanding problem with the term "cloud controller". | 13:14 |
CloudChris | It is used in Laurent Luce's Blog http://www.laurentluce.com/?p=227&cpage=1 | 13:14 |
CloudChris | and also in the OpenSTack Wiki http://nova.openstack.org/service.architecture.html | 13:14 |
CloudChris | Laurent writes: | 13:15 |
CloudChris | "Cloud controller: handles the communication between the compute nodes, the networking controllers, the API server and the scheduler." | 13:15 |
CloudChris | On the Wiki it says: | 13:15 |
CloudChris | "Communication to and from the cloud controller is by HTTP requests through multiple API endpoints." | 13:15 |
CloudChris | So where is this Cloud Controller piece, or what is it exactly? | 13:15 |
soren | I have no clue. | 13:16 |
ttx | jaypipes: right, I had a few extra questions. Like do you have a branch for the checksumming stuff (marked "started"), can I mark image-conversion deferred... | 13:16 |
soren | In the diagram on http://www.laurentluce.com/?p=227&cpage=1 the thing called cloud controller can only really be the amqp server. | 13:17 |
CloudChris | that was exactly what I thought should be there .. | 13:17 |
soren | Who is this Laurent person anyway? | 13:17 |
jaypipes | ttx: yes, I do. actually, was just merging it with trunk locally. be psuhing it to LP shortly. | 13:17 |
ttx | jaypipes: + except from image-conversion is everything still a target, or is support-ssl also probably deferred ? | 13:17 |
jaypipes | ttx: and yes on deferring conversion. | 13:17 |
jaypipes | ttx: support-ssl will be fine. | 13:18 |
ttx | jaypipes: ok, thanks | 13:18 |
*** dirakx has quit IRC | 13:22 | |
HugoKuo | well after two days work .... I can already ssh and ping an instance ......network mode is FlatManager.......but I found that the instance did not have the correct dns server in resolv.conf | 13:23 |
*** h0cin has joined #openstack | 13:23 | |
*** h0cin has joined #openstack | 13:24 | |
HugoKuo | so I try to change the mode into FlatDHCPManager mode........but the instance still does not get the correct DNS nameserver in resolv.conf ..... | 13:24 |
*** aliguori has joined #openstack | 13:25 | |
HugoKuo | is there any flag for dhcp configuration ? | 13:26 |
soren | HugoKuo: Whether it will do injection or use dhcp is defined in the database when you create the network. | 13:26 |
soren | If you change the networkmanager later on, it'll still do what it was told to do to begin with. | 13:26 |
HugoKuo | !!!!! | 13:26 |
openstack | HugoKuo: Error: "!!!!" is not a valid command. | 13:26 |
HugoKuo | oh !!!! thanks | 13:27 |
HugoKuo | so I should clean up the network table , then restart all service to make the change work ? | 13:29 |
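Roughly, yes: since the injection-vs-DHCP choice is stored in the networks table at creation time, the network has to be recreated under the new manager. A hedged sketch only — the `nova-manage network create` arguments and the CIDR are illustrative for this era of nova, and deleting the rows is destructive:

```shell
sudo service nova-network stop
# Remove the old network definition (destructive; do this on a test cloud):
mysql nova -e 'DELETE FROM fixed_ips; DELETE FROM networks;'
# Recreate with nova configured for FlatDHCPManager (set in nova.conf first):
nova-manage network create 10.0.0.0/24 1 256   # CIDR, num networks, network size
sudo service nova-network start
```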
*** hazmat has joined #openstack | 13:31 | |
*** dirakx has joined #openstack | 13:35 | |
*** nijaba has quit IRC | 13:36 | |
*** z0_ has joined #openstack | 13:37 | |
*** ericrw has joined #openstack | 13:37 | |
*** dirakx has quit IRC | 13:37 | |
*** dirakx has joined #openstack | 13:38 | |
*** nijaba has joined #openstack | 13:38 | |
*** z0 has quit IRC | 13:38 | |
*** fayce has quit IRC | 13:42 | |
*** mgoldmann has joined #openstack | 13:44 | |
uvirtbot` | New bug: #731304 in glance "Missing full functional tests" [High,Triaged] https://launchpad.net/bugs/731304 | 13:46 |
*** z0_ has quit IRC | 13:47 | |
*** rds__ has joined #openstack | 13:48 | |
*** pvo has joined #openstack | 13:48 | |
soren | uvirtbot`: nick uvirtbot | 13:49 |
*** uvirtbot` is now known as uvirtbot | 13:49 | |
*** hadrian has joined #openstack | 13:52 | |
nerens | Hi guys, where can I get the ami-cloudpipe image from? | 13:52 |
*** z0_ has joined #openstack | 13:52 | |
*** f4m8 is now known as f4m8_ | 14:01 | |
*** dprince has joined #openstack | 14:06 | |
patri0t | soren: can we have several scheduler services/nodes in a deployment? like compute controller, ... | 14:17 |
pvo | patri0t: sure | 14:20 |
pvo | should be able to | 14:21 |
patri0t | pvo: it's not mentioned here: http://docs.openstack.org/openstack-compute/admin/content/ch02s03.html | 14:22 |
patri0t | pvo: second para | 14:23 |
pvo | it kinda does, but you are right. it isn't very clear. | 14:24 |
pvo | by it being message based, any node should be able to pull messages off its queue and process. | 14:25 |
pvo | some may have race conditions at the moment | 14:25 |
dabo | patri0t: this blueprint spec may clear that up: http://wiki.openstack.org/MultiClusterZones | 14:25 |
*** gasbakid has quit IRC | 14:26 | |
patri0t | pvo: great, that's really useful | 14:26 |
patri0t | dabo: takk | 14:27 |
naehring | hi! is the list of roles (cloudadmin etc.) static or can it be extended by self-defined roles? Is there any way like "nova-manage role list" to get all existing roles ? | 14:30 |
*** hazmat has quit IRC | 14:32 | |
*** hazmat has joined #openstack | 14:32 | |
*** cascone has quit IRC | 14:38 | |
uvirtbot | New bug: #731341 in nova "nova-manage is missing some core functionality" [Undecided,New] https://launchpad.net/bugs/731341 | 14:41 |
pvo | naehring: the roles can be defined by an operator. | 14:43 |
pvo | the end goal is to have these come out of an auth system and not be defined only in nova | 14:43 |
naehring | ah, this is the reason why the ldap-implementation contains several roles? | 14:43 |
pvo | yep | 14:43 |
naehring | are the roles aimed on a project base or "system-wide"? | 14:44 |
naehring | this way both should be possible. | 14:44 |
*** dirakx has joined #openstack | 14:46 | |
*** nerens has quit IRC | 14:49 | |
*** jero has quit IRC | 14:54 | |
*** fayce has joined #openstack | 14:55 | |
*** jero has joined #openstack | 14:56 | |
uvirtbot | New bug: #731350 in nova "KeyError: 'type' when creating servers with OS API w/ Glance" [Undecided,New] https://launchpad.net/bugs/731350 | 14:56 |
*** winston-d has joined #openstack | 15:02 | |
*** fayce has quit IRC | 15:03 | |
winston-d | can somebody help me with a Swift problem? | 15:03 |
notmyname | winston-d: I only have a few minutes before a meeting, but I can try | 15:04 |
winston-d | I got '503 Service Unavailable' error while trying to 'swift-auth-add-user' | 15:04 |
notmyname | check syslog (or wherever you are redirecting it). you are intending to use devauth and not swauth? | 15:04 |
winston-d | notmyname: hi, there. i am following the doc in swift.openstack.org to use devauth. | 15:05 |
winston-d | notmyname: syslog said ' proxy-server Account PUT returning 503 for [503, 503, 503] (txn: txdfcbf0f9-74e9-4a0c-a5c3-1e6ea5ef5084)' | 15:05 |
notmyname | grep syslog for the txn to see all entries related to that request | 15:06 |
winston-d | notmyname: this used to work with previous swift release. | 15:06 |
creiht | winston-d: yeah I would grep for that txn on the account servers to see why they 503d | 15:07 |
winston-d | creiht: hey, nice to talk to you again. | 15:07 |
winston-d | here's the txn info in syslog: http://paste.openstack.org/show/816/ | 15:08 |
*** msassak has joined #openstack | 15:09 | |
creiht | winston-d: yeah that is saying that all 3 account servers it tried to PUT that account to returned a 503 | 15:09 |
creiht | so if you grep the logs of your account servers for that transaction, it should give you a more relevant error | 15:09 |
winston-d | creiht: i see. let me do that. | 15:09 |
winston-d | creiht: it says " Permission denied: '/srv/node/sdb1/accounts'" | 15:11 |
creiht | sounds like you have a permission issue then :) | 15:11 |
winston-d | creiht: am i supposed to chown /srv/node to swift:swift? | 15:11 |
creiht | make sure /srv/node/sd* is owned by swift.swift | 15:11 |
creiht | yeah | 15:11 |
winston-d | creiht: OK | 15:11 |
*** gregp76 has joined #openstack | 15:12 | |
notmyname | winston-d: /srv/node should be owned by root:root | 15:12 |
notmyname | /srv/node/sd* should be swift:swift | 15:12 |
winston-d | notmyname: Oh, thanks. it works now. | 15:13 |
notmyname | if /srv/node is owned by swift, failure scenarios where the volume is unmounted will still allow the process to write to that location (rather than erroring out and using a handoff node) | 15:13 |
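The debugging path from this thread, condensed: follow the transaction id from the proxy log to the account servers, then fix the ownership that caused the 503s:

```shell
# On each account server, find the real error behind the proxy's 503:
grep txdfcbf0f9 /var/log/syslog          # txn id from the proxy log entry
# "Permission denied: '/srv/node/sdb1/accounts'" -> fix the device ownership.
# Only the mounted devices become swift:swift; /srv/node itself stays
# root:root so writes fail over to a handoff node if a disk is unmounted:
sudo chown swift:swift /srv/node/sd*
ls -ld /srv/node /srv/node/sd*
```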
*** msassak has quit IRC | 15:14 | |
winston-d | notmyname: good point. What is the difference between devauth and swauth? | 15:14 |
*** msassak has joined #openstack | 15:15 | |
winston-d | I remember there was no such thing as swauth back in 1st swift release, at least not seen in documentation | 15:15 |
creiht | swauth is an implementation of auth that is backended by the swift cluster | 15:15 |
creiht | the goal is for it to be more reliable/scalable than the devauth | 15:16 |
notmyname | devauth is deprecated and will be removed from the release in the future | 15:17 |
winston-d | creiht: then I will start to use swauth. :) | 15:18 |
creiht | it is highly recommended :) | 15:18 |
*** thatsdone has joined #openstack | 15:22 | |
*** imsplitbit has joined #openstack | 15:22 | |
winston-d | creiht: can devauth and swauth work together ? | 15:24 |
*** ramkrsna has quit IRC | 15:24 | |
creiht | winston-d: In theory, maybe :) | 15:26 |
creiht | Each would need to use a different reseller prefix, and a separate auth middleware setup for each | 15:26 |
*** dirakx has quit IRC | 15:27 | |
winston-d | creiht: but the account created by devauth cannot be used by swauth, right? | 15:27 |
*** dragondm has joined #openstack | 15:29 | |
*** dendrobates is now known as dendro-afk | 15:31 | |
winston-d | creiht: or, if i have used swift-auth-add-user to create a root user, and I'd like to switch to swauth, should i use swauth-add-user to create root user again? | 15:31 |
*** dendro-afk is now known as dendrobates | 15:31 | |
*** spectorclan_ has joined #openstack | 15:33 | |
spectorclan_ | Governance Nominations and Election Process Details - http://www.openstack.org/blog/2011/03/openstack-governance-nominations-and-election-process/ | 15:33 |
creiht | winston-d: swauth has a helper script that will import your accounts from devauth | 15:34 |
winston-d | creiht: i tried 'swauth-add-user' but it hangs there and never return | 15:35 |
creiht | winston-d: did you set your configs to use swauth? | 15:36 |
*** rnirmal has joined #openstack | 15:36 | |
*** hadrian has quit IRC | 15:36 | |
*** ramkrsna has joined #openstack | 15:37 | |
winston-d | creiht: i added [filter:swauth] in proxy-server.conf, but there's still [app:auther-server] section in auth-server.conf | 15:37 |
winston-d | creiht: where is the 'switch' for swauth? | 15:38 |
creiht | winston-d: the recent docs for both saio and multi-server install have instructions on how to setup swauth | 15:39 |
creiht | http://swift.openstack.org/howto_installmultinode.html | 15:39 |
creiht | http://swift.openstack.org/development_saio.html | 15:39 |
imsplitbit | what options do you need to set to make nova use mysql? I set --sql_connection but apparently that isn't it | 15:42 |
imsplitbit | cause my nova instance is still using sqlite | 15:42 |
naehring | imsplitbit, did you initialize the db? | 15:45 |
winston-d | creiht: i think i followed the instructions in multinode doc, but still swauth-add-user takes forever to return and there's nothing in syslog | 15:47 |
*** maplebed has quit IRC | 15:49 | |
imsplitbit | naehring: yes | 15:49 |
*** maplebed has joined #openstack | 15:49 | |
creiht | winston-d: my first guess is maybe a connection is timing out? | 15:50 |
creiht | gholt: -^ ? | 15:50 |
jaypipes | sandywalsh: hey, so what's the remaining thing that is required to merge zones2? | 15:50 |
naehring | do you have the neccessary tables in your database? And what exactly do you mean with "instance is still using sqlite"? | 15:50 |
sandywalsh | jaypipes, novaclient in the ppa and getting some final word on the copyright notices | 15:51 |
jaypipes | sandywalsh: ah, ok. | 15:51 |
imsplitbit | naehring: when I do nova-manage network list it is displaying information from an sqlite file, not the nova database in mysql | 15:51 |
sandywalsh | jaypipes, as per ttx's email | 15:52 |
*** gregp76 has quit IRC | 15:53 | |
naehring | imsplitbit, did you restart the nova services after switching the datasource? | 15:54 |
imsplitbit | naehring: yes | 15:55 |
winston-d | creiht: should be proxy-server timed out? | 15:55 |
creiht | the script should time out eventually | 15:56 |
creiht | if that is the case, then it isn't even getting to the proxy | 15:56 |
winston-d | it's been like almost 10 minutes | 15:58 |
gholt | Any firewall rules? iptables w/e | 15:59 |
imsplitbit | glance is using mysql but nova isn't even though my nova.conf has an sql_connection setting | 15:59 |
imsplitbit | I wonder if it isn't actually parsing the conf file | 15:59 |
imsplitbit | hmmm | 15:59 |
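For reference, a minimal sketch of the flag file imsplitbit is describing, in the gflags-style format nova.conf used in that era (connection string and values are illustrative, not taken from the log); every nova-* service reads this file, so all of them need a restart after it changes:

```
# /etc/nova/nova.conf -- illustrative values only
--sql_connection=mysql://nova:secret@localhost/nova
--verbose
```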
*** gdusbabek has quit IRC | 16:00 | |
*** gdusbabek has joined #openstack | 16:00 | |
*** naehring has quit IRC | 16:00 | |
*** dirakx has joined #openstack | 16:01 | |
gholt | winston-d: swauth-add-user by default tries to connect to http://127.0.0.1:8080/auth/ I'd try to telnet <ip> <port> and just see if you can connect at all. | 16:02 |
winston-d | gholt: for iptables, i've flushed all INPUT chain's rule and set default rule to ACCEPT | 16:03 |
winston-d | gholt: there's a connection when telnetting to 127.0.0.1 8080 | 16:04 |
gholt | So where's your proxy configured to run? | 16:05 |
winston-d | gholt: same place where i ran 'swauth-add-user' command | 16:06 |
gholt | No I mean, what ip and port? | 16:07 |
gholt | Check your /etc/swift/proxy-server.conf Is there a bind_ip or bind_port setting? | 16:07 |
*** MarkAtwood has joined #openstack | 16:08 | |
*** CloudChris has left #openstack | 16:08 | |
winston-d | ip is 192.168.4.101, port is 8080 | 16:09 |
winston-d | things like 'default_swift_cluster = local#https://192.168.4.101:8080/v1' | 16:09 |
gholt | winston-d: Okay. But there was no bind_ip or bind_port settings? | 16:11 |
winston-d | gholt: nope | 16:13 |
gholt | winston-d: Okay, are you using SSL, are there cert_file and key_file settings in the proxy conf? | 16:17 |
winston-d | gholt: yes, i'm using SSL. here's my proxy-server config: http://paste.openstack.org/show/818/ | 16:18 |
*** ccustine has joined #openstack | 16:19 | |
gholt | winston-d: Okay, so you do have a bind_port setting. :P I think the problem is that you're using SSL. Try adding -A https://192.168.4.101:8080/v1 to your swauth-add-user command line | 16:20 |
winston-d | gholt: Oops, yes, i did have bind_port. let me try. | 16:22 |
winston-d | gholt: running " swauth-add-user -A https://192.168.4.101:8080/v1 -K swauthkey -a system root testpass" and there is error. "swauth-add-user", line 67, in <module> parsed.path += '/'AttributeError: can't set attribute | 16:25 |
gholt | Oh, I led you astray a bit there, sorry. Change the /v1 to /auth/ | 16:26 |
gholt | And there's a known bug with the trailing slash, be sure to include it. | 16:26 |
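The `parsed.path += '/'` crash winston-d pasted comes from `urlparse` returning an immutable result object, so the script cannot append the trailing slash itself. A sketch in modern Python 3 (the channel was on Python 2's `urlparse` module, but the behavior is the same):

```python
from urllib.parse import urlparse  # urlparse.urlparse on Python 2

parsed = urlparse("https://192.168.4.101:8080/auth")
try:
    # ParseResult is a namedtuple subclass, so attributes are read-only
    parsed.path += "/"
except AttributeError as err:
    print("same crash as swauth-add-user:", err)

# namedtuples are "mutated" by building a modified copy
fixed = parsed._replace(path=parsed.path + "/")
print(fixed.path)  # /auth/
```

This is why the workaround in the channel is simply to include the trailing slash in the `-A` argument yourself.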
*** KnightHacker has joined #openstack | 16:27 | |
winston-d | gholt: hmm, instantly returned. 'Account creation failed: 401'. | 16:28 |
gholt | Ah good, we're getting somewhere. :) | 16:28 |
gholt | Did you include a -K <yourswauthkey> ? | 16:29 |
winston-d | gholt: yes, i did | 16:29 |
winston-d | gholt: and I checked the key is correct (the same as the on in proxy-server.conf) | 16:29 |
gholt | Hmmm. | 16:29 |
gholt | Do the proxy logs (syslog) give any clues? | 16:30 |
winston-d | gholt: proxy logs, no. but the account server has a 404 | 16:31 |
gholt | Ah, [smacks head], you probably haven't run swauth-prep successfully yet. | 16:32 |
gholt | Try a swauth-prep -A https://192.168.4.101:8080/auth/ -K <key> | 16:32 |
*** czajkowski has joined #openstack | 16:33 | |
winston-d | gholt: err.. 'Auth subsystem prep failed: 401' | 16:34 |
*** taihen has quit IRC | 16:35 | |
gholt | That part's weird. And the multi node docs are missing the prep call, I'll submit a patch for that. | 16:35 |
* gholt wishes he could just tail the proxy logs from here. :) | 16:36 | |
winston-d | gholt: does the CloudFiles API support swauth? | 16:36 |
gholt | Anything in those logs from the swauth prep that gives a clue? | 16:36 |
winston-d | gholt: there's only one line for swauth prep. POST /auth/v2/.prep HTTP/1.0 401 - - - - - - tx06c5962d-4865-4450-9a6f-e2cfa13e9a7a | 16:37 |
*** skiold has quit IRC | 16:37 | |
gholt | winston-d: Hmm. It should have tried to make a AUTH_.auth account, a few containers in that account, and maybe some other stuff too. Did the -K <key> match super_admin_key in proxy conf this time? :) | 16:39 |
winston-d | gholt: hmm. i think i located the root cause. it's my proxy-server.conf lacks of swauth in [pipeline:main] part. | 16:41 |
gholt | Ah, didn't even notice that myself. Hehe | 16:41 |
winston-d | gholt: swauth-prep works now. and in multinodes doc, swauth is _not_ in[pipeline]. I check AIO doc and found that. | 16:42 |
winston-d | gholt: maybe you can also update that part of multi node doc | 16:42 |
gholt | Yes, the setup docs all assume devauth still. There's a sample swauth pipeline in the docs though | 16:42 |
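A sketch of the proxy-server.conf pieces winston-d's fix touches, roughly as the Swift 1.2-era swauth docs showed them (the key and cluster URL are placeholders; check your install's docs for exact settings):

```
[pipeline:main]
# the missing piece: swauth must appear in the pipeline before proxy-server
pipeline = healthcheck cache swauth proxy-server

[filter:swauth]
use = egg:swift#swauth
super_admin_key = CHANGEME
default_swift_cluster = local#https://192.168.4.101:8080/v1
```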
jaypipes | dubs: mind re-reviewing https://code.launchpad.net/~jaypipes/glance/glance-cli-tool/+merge/51322? I updated to fix your review comments... | 16:44 |
jaypipes | sirp: also, a review from you on https://code.launchpad.net/~jaypipes/glance/glance-cli-tool/+merge/51322 would be great :) | 16:45 |
winston-d | gholt: swauth-add-user now has 500, because object-server timed out. | 16:45 |
sirp | jaypipes: sure, blocked at the moment on api+disk_format issues, but once those are fixed up, ill move on to that | 16:46 |
*** rds__ has quit IRC | 16:46 | |
jaypipes | sirp: blocked on stuff in Glance, or Nova? | 16:47 |
openstackhudson | Project nova build #603: SUCCESS in 1 min 51 sec: http://hudson.openstack.org/job/nova/603/ | 16:48 |
openstackhudson | Tarmac: This fix is an updated version of Todd's lp720157. Adds SignatureVersion checking for Amazon EC2 API requests, and resolves bug #720157. | 16:48 |
uvirtbot | Launchpad bug 720157 in nova "Nova returns HTTP 400 for SignatureVersion=1 requests" [Medium,In progress] https://launchpad.net/bugs/720157 | 16:48 |
winston-d | gholt: here's proxy log for 'swauth-add-user' | 16:48 |
sirp | jaypipes: the chain is something like, 1) add related_images, which needs 2) api_disk_format to work, 3) which needs migrations to work, which would be better if 3) glance-manage respected the config file | 16:48 |
jaypipes | sirp: gotcha. anything I can assist with? | 16:49 |
jaypipes | sirp: need me to take bugs off you or anything? | 16:49 |
sirp | jaypipes: dubs and i are going to have a skype call to compare notes on what we're having issues with | 16:49 |
*** KnightHacker has left #openstack | 16:51 | |
jaypipes | sirp: k. lemme know the result and if I can help. | 16:51 |
sirp | jaypipes: cool, thx | 16:51 |
gholt | winston-d: Did you miss a link to the proxy log? | 16:52 |
*** eikke has quit IRC | 16:53 | |
*** hub_cap has joined #openstack | 16:54 | |
winston-d | gholt: sorry, :) http://paste.openstack.org/show/819/ | 16:54 |
*** hadrian has joined #openstack | 16:57 | |
gholt | winston-d: Ah okay, and you're sure the object servers are running out at 192.168.4.57:6000 and 192.168.4.58:6000 and do their logs give any clues? | 16:57 |
openstackhudson | Project nova build #604: SUCCESS in 1 min 51 sec: http://hudson.openstack.org/job/nova/604/ | 16:58 |
openstackhudson | Tarmac: refactored up nova/virt/xenapi/vmops _get_vm_opaque_ref() | 16:58 |
openstackhudson | no longer inspects the param to check to see if it is an opaque ref | 16:58 |
openstackhudson | works better for unittests | 16:58 |
gholt | winston-d: I'm going to be out for lunch for a little while... | 17:01 |
*** rlucio has joined #openstack | 17:01 | |
*** dirakx has quit IRC | 17:02 | |
*** dirakx1 has joined #openstack | 17:02 | |
*** zenmatt has quit IRC | 17:02 | |
winston-d | gholt: yes, object servers are running on .57 and .58, but the log only has container/account server entries for that txn. | 17:02 |
winston-d | gholt: OK, thanks for your help. I've to leave for sleep. it's already 1:00 in the morning. | 17:03 |
winston-d | gholt: have a nice day~ | 17:03 |
*** ramkrsna has quit IRC | 17:04 | |
*** ctennis_ has joined #openstack | 17:07 | |
*** ctennis_ has joined #openstack | 17:07 | |
*** ctennis has quit IRC | 17:09 | |
*** ctennis_ is now known as ctennis | 17:09 | |
uvirtbot | New bug: #731448 in nova "Image Format Instability" [Undecided,New] https://launchpad.net/bugs/731448 | 17:12 |
*** gregp76 has joined #openstack | 17:12 | |
*** rlucio has quit IRC | 17:16 | |
*** rds__ has joined #openstack | 17:17 | |
*** mahadev has joined #openstack | 17:17 | |
*** mray has joined #openstack | 17:18 | |
*** winston-d has quit IRC | 17:20 | |
*** bostonmike has joined #openstack | 17:21 | |
*** galstrom has joined #openstack | 17:21 | |
*** eikke has joined #openstack | 17:22 | |
btorch | has anyone here had issues sshing into the lucid-server-uec-amd64 image? keep getting access denied due to key .. I have tried several keys already | 17:23 |
*** mahadev has quit IRC | 17:23 | |
sirp | jaypipes: dubs and i just ran all of the tests with the disk_format migration, they all passed; not sure why they broke before | 17:25 |
sirp | jaypipes: if they pass for you, maybe we should just go ahead and add 003 to trunk | 17:26 |
jaypipes | sirp: k, I'll push a branch with it... not sure why it was failing before. | 17:28 |
mtaylor | jaypipes: ok - I got all of the glance test stuff sorted out | 17:30 |
*** mahadev has joined #openstack | 17:30 | |
*** photron has joined #openstack | 17:30 | |
mtaylor | jaypipes: only issue I'm having from perfection right now is that the migration test uses os.unlink rather than drop_all to clear out the database | 17:31 |
mtaylor | jaypipes: but the test suite is a few steps of abstraction away from having access to a sqlalchemy context - any ideas? | 17:31 |
*** joearnold has joined #openstack | 17:32 | |
jaypipes | sirp: any ideas on mtaylor's query above ^^? | 17:33 |
sirp | mtaylor: why is unlink causing a problem? | 17:35 |
mtaylor | sirp: if you run the test-cases with not-sqlite | 17:35 |
mtaylor | sirp: unlink doesn't so much get rid of anything :) | 17:36 |
sirp | mtaylor: oh I see, didn't know we were using anything besides sqlite for testing... | 17:36 |
sirp | hmm | 17:36 |
mtaylor | sirp: well, we weren't | 17:36 |
mtaylor | sirp: I just made a patch which allows one to pick a different DB URI for the test suite so we can ensure things do work with other backends (they do, btw, yay sqlalchemy) | 17:37 |
mtaylor | so it works - but running the test suite twice at the moment involves a manual drop/create schema | 17:37 |
sirp | mtaylor: oh right, i follow... | 17:37 |
*** mgoldmann has quit IRC | 17:37 | |
sirp | mtaylor: can we replace `unlink` with something like from glance.registry.db import api ; api.unregister_models() ? | 17:40 |
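The portability problem with `os.unlink` can be sketched with plain sqlite3: unlinking the file only "resets" a file-backed SQLite database, while dropping the tables resets any backend. The SQLAlchemy analogue of the loop below is `metadata.drop_all(engine)` (or Glance's `unregister_models`, as sirp suggests):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY)")

# materialize the table list first, then drop each one --
# this works whether the backend is sqlite, mysql, etc.
tables = [name for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
for name in tables:
    conn.execute(f"DROP TABLE {name}")

print(conn.execute(
    "SELECT count(*) FROM sqlite_master WHERE type='table'").fetchone()[0])  # 0
```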
*** MarcMorata has quit IRC | 17:40 | |
mtaylor | sirp: lemme try | 17:40 |
uvirtbot | New bug: #731470 in nova "Nova should use disk_format and container_format" [High,In progress] https://launchpad.net/bugs/731470 | 17:42 |
*** mahadev has quit IRC | 17:43 | |
*** ram___ has joined #openstack | 17:46 | |
*** galstrom has quit IRC | 17:46 | |
*** MotoMilind has joined #openstack | 17:50 | |
MotoMilind | Hi! My "euca-describe-instances" command takes a long time, and then only returns an error as follows: | 17:51 |
MotoMilind | root@openstack01:~# euca-describe-instances | 17:51 |
MotoMilind | [Errno 111] Connection refused | 17:51 |
MotoMilind | Where do I start to look for some debug information? | 17:51 |
MotoMilind | Thanks | 17:51 |
kpepple | MotoMilind: /var/log/nova/nova-api.log | 17:52 |
*** ram___ is now known as ramd | 17:52 | |
*** MarcMorata has joined #openstack | 17:52 | |
kpepple | MotoMilind: chances are that you have that port blocked by your firewall or nova-api isn't running | 17:53 |
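kpepple's two suspects (firewall vs. nova-api not running) can be separated by testing the TCP connect directly. A hedged sketch; 8773 was the usual EC2 API port for nova of that era, so adjust host and port to your deployment:

```python
import socket

def api_reachable(host="127.0.0.1", port=8773, timeout=2.0):
    """Return True if something is listening on the given port.

    ConnectionRefusedError here usually means the service is down;
    a hang/timeout points more toward a firewall dropping packets.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False
```

Running `euca-describe-instances` only after this returns True saves a long client-side timeout.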
mtaylor | sirp: oh - ok, yes. slightly more hoop jumping - but it now works | 17:54 |
mtaylor | jaypipes, sirp: https://code.launchpad.net/~mordred/glance/alternate-test-dburi/+merge/52587 ... all works for me now | 17:55 |
jaypipes | mtaylor: cheers | 17:56 |
MotoMilind | Hmm, the log file has today's date (5 hour old timestamp), but it is empty | 17:58 |
*** aliguori has quit IRC | 17:58 | |
MotoMilind | must be that nova-api is not running | 17:58 |
kpepple | MotoMilind: is nova-api running ? also, make sure you have --verbose in your /etc/nova/nova.conf file | 17:58 |
MotoMilind | Yep, you called it | 17:58 |
MotoMilind | Thanks | 17:59 |
kpepple | MotoMilind: np | 17:59 |
*** rcc has quit IRC | 17:59 | |
ramd | Hello, having issues running nova-network on RHEL 6.0 | 18:03 |
ramd | It wouldn't start | 18:03 |
*** MarkAtwood has quit IRC | 18:03 | |
ramd | Here is the trace : | 18:03 |
ramd | nova.root): TRACE: Stderr: '\ndnsmasq: cannot run lease-init script /usr/bin/nova-dhcpbridge: No such file or directory\n' | 18:04 |
ramd | (nova.root): TRACE: | 18:04 |
kpepple | ramd: just paste your trace at paste.openstack.org and provide a link | 18:04 |
*** thatsdone has quit IRC | 18:04 | |
ramd | If I try just nova-dhcpbridge it gives me an error - IndexError: list index out of range | 18:04 |
ramd | [root@c3l-openstack5 creds]# nova-dhcpbridge | 18:05 |
ramd | 2011-03-08 10:00:09,338 CRITICAL nova.root [-] list index out of range | 18:05 |
ramd | (nova.root): TRACE: Traceback (most recent call last): | 18:05 |
*** MarcMorata has quit IRC | 18:05 | |
ramd | (nova.root): TRACE: File "/usr/bin/nova-dhcpbridge", line 131, in <module> | 18:05 |
ramd | (nova.root): TRACE: main() | 18:05 |
ramd | (nova.root): TRACE: File "/usr/bin/nova-dhcpbridge", line 118, in main | 18:05 |
ramd | (nova.root): TRACE: action = argv[1] | 18:05 |
ramd | (nova.root): TRACE: IndexError: list index out of range | 18:05 |
ramd | (nova.root): TRACE: | 18:05 |
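The traceback above boils down to the script reading `argv[1]` unconditionally: dnsmasq normally invokes nova-dhcpbridge with an action argument, so a bare invocation from a shell has nothing at index 1. A minimal illustration of the failure mode and a guard (hypothetical `main`, not nova's actual code):

```python
def main(argv):
    # dnsmasq calls the lease script as: nova-dhcpbridge <add|del|old> <mac> <ip> ...
    # so argv[1] is the action; invoked by hand there is no argv[1]
    if len(argv) < 2:
        return "usage: nova-dhcpbridge <add|del|old> ..."
    return argv[1]

print(main(["nova-dhcpbridge"]))          # usage: nova-dhcpbridge <add|del|old> ...
print(main(["nova-dhcpbridge", "add"]))   # add
```

So ramd's by-hand IndexError is expected; the "No such file or directory" from dnsmasq is the separate, real problem.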
kpepple | ramd: are you running from trunk (via bzr branch lp:nova) or using the RHEL 6 packages ? | 18:05 |
ramd | Help, please | 18:05 |
*** mahadev has joined #openstack | 18:05 | |
*** dprince has quit IRC | 18:08 | |
*** dprince has joined #openstack | 18:09 | |
ramd | oops.Sorry. Sure. will do. | 18:09 |
ramd | Using packages from GridDynamics repo | 18:09 |
*** DigitalFlux has quit IRC | 18:10 | |
*** joearnold has quit IRC | 18:10 | |
ramd | Trace is at http://paste.openstack.org/show/820/ | 18:11 |
MotoMilind | Aah, so nova-api won't run, with the following error in the log file: | 18:12 |
MotoMilind | (nova.root): TRACE: File "/usr/lib/pymodules/python2.6/nova/api/ec2/cloud.py", line 116, in setup | 18:12 |
MotoMilind | (nova.root): TRACE: os.chdir(FLAGS.ca_path) | 18:12 |
MotoMilind | (nova.root): TRACE: OSError: [Errno 13] Permission denied: '/root/src/nova/trunk/CA' | 18:12 |
MotoMilind | However, if I try that on the Python interpreter, I can indeed os.chdir just fine | 18:12 |
MotoMilind | odd | 18:12 |
MotoMilind | Let me double-check my ca_path value | 18:13 |
kpepple | ramd: does /usr/bin/nova-dhcpbridge exist ? | 18:14 |
*** brd_from_italy has joined #openstack | 18:15 | |
ramd | kpepple: yes. It is available in /usr/bin | 18:15 |
*** zul has quit IRC | 18:15 | |
ramd | when I execute nova-dhcpbridge it errors out | 18:16 |
MotoMilind | Hmm, I remember I had to set the value in /etc/nova/nova.conf specifically for "nova-manage vpn create <myproject>" | 18:16 |
kpepple | ramd: can you paste you /etc/nova/nova.conf file into paste.openstack.org ? i think you are missing a flag for --dhcpbridge_flagfile= | 18:16 |
*** jfluhmann has quit IRC | 18:17 | |
MotoMilind | I see. When I remove that config, it starts nova-api just fine | 18:17 |
MotoMilind | Now let me check if I can start cloudpipe as well, without the two settings | 18:18 |
ramd | kpepple: here it is http://paste.openstack.org/show/821/ ; I do see the dhcpbridge_flagfile pointing to /etc/nova/nova.conf | 18:19 |
*** zul has joined #openstack | 18:20 | |
*** hadrian has quit IRC | 18:20 | |
*** dprince has quit IRC | 18:20 | |
*** larissa has quit IRC | 18:21 | |
*** jfluhmann has joined #openstack | 18:21 | |
*** larissa has joined #openstack | 18:21 | |
kpepple | ramd: run "$ /usr/bin/nova-dhcpbridge --networks_path=/var/lib/nova/networks --verbose" and see what it says | 18:23 |
kpepple | ramd: you may need to run this as root | 18:23 |
kpepple | ramd: sorry, mean "$ /usr/bin/nova-dhcpbridge --dhcpbridge_flagfile=/etc/nova/nova.conf" | 18:23 |
ramd | kpepple: It errors out with "no such table: networks". Trace is at http://paste.openstack.org/show/822/ | 18:26 |
kpepple | ramd: no, run "/usr/bin/nova-dhcpbridge --dhcpbridge_flagfile=/etc/nova/nova.conf" ... you have some extra characters in there | 18:28 |
kpepple | ramd: also, have you sync'd the db ? | 18:28 |
mtaylor | anybody know of the top of their head how to get from a sqlalchemy Session object to its corresponding metadata object? | 18:28 |
mtaylor | jaypipes: ^^ ? | 18:28 |
sirp | mtaylor: just a guess, but does session -> engine -> metadata work? | 18:31 |
ramd | kpepple: I did nova-manage db sync and ran the command with single underscore..still the same error trace is at http://paste.openstack.org/show/823/ | 18:31 |
mtaylor | sirp: not that I can tell... | 18:32 |
mtaylor | sirp: I think i can see a path ... lemme try something | 17:32 |
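On mtaylor's question: a SQLAlchemy Session carries a bind (an engine), not a MetaData, so the usual route is to get the engine from the session and reflect into a fresh MetaData. A sketch against current SQLAlchemy (the 0.6 API of 2011 spelled some of this slightly differently):

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer
from sqlalchemy.orm import Session

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
Table("images", metadata, Column("id", Integer, primary_key=True))
metadata.create_all(engine)

session = Session(engine)
# the Session knows its engine...
bound_engine = session.get_bind()
# ...but not the MetaData the tables were defined on, so reflect one
reflected = MetaData()
reflected.reflect(bind=bound_engine)
print(sorted(reflected.tables))  # ['images']
```

Either the reflected MetaData or the application's own module-level MetaData can then be handed to `drop_all`.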
kpepple | ramd: no, you need to type it exactly as i have it with two dashes before the dhcpbridge_flagfile. copy and paste this: | 18:33 |
kpepple | ramd: /usr/bin/nova-dhcpbridge --dhcpbridge_flagfile=/etc/nova/nova.conf | 18:33 |
ramd | Oops sorry. Now back to the old error http://paste.openstack.org/show/824/ | 18:35 |
*** lucas_ has joined #openstack | 18:38 | |
lucas_ | hi | 18:38 |
lucas_ | I was wondering: what is the largest know openstack deployment? | 18:39 |
*** taihen has joined #openstack | 18:39 | |
* kpepple looks thru nova-dhcpbridge code | 18:41 | |
*** mahadev has quit IRC | 18:41 | |
*** pvo has quit IRC | 18:41 | |
kpepple | ramd: do you have a br0 interface ? | 18:42 |
ramd | no I don't have one. I've the VLANManager and it did create vlan100 and br100 | 18:44 |
ramd | also do I need sqlite? I'm using mysql and defined it in nova.conf | 18:45 |
kpepple | ramd: no, you shouldn't need sqlite if you have mysql configured | 18:46 |
jarrod | when im booting my xen instance, it sticks at busybox v1.13.3 (initramfs) | 18:47 |
jarrod | anyone have an idea of where i went wrong? | 18:47 |
jarrod | before that it's "Gave up waiting for root device" | 18:48 |
kpepple | ramd: i don't have the exact same version of code that you do (i'm not on RHEL so not using those packages) but linux_net.py seems to be breaking on creating your bridge file or pid file. can you make sure that the user that you are using to run nova-network has permissions for both of those directories ? | 18:50 |
*** mahadev has joined #openstack | 18:54 | |
ramd | kpepple: yes it does..I'm starting the nova-network as root | 18:56 |
*** paltman has quit IRC | 18:59 | |
kpepple | ramd: do you already have a dnsmasq running ? check ps -ef | 19:00 |
kpepple | ramd: if not, what happens when you run "sudo -E dnsmasq --strict-order --bind-interfaces --conf-file= --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.0.1 --except-interface=lo --dhcp-range=192.168.0.3,static,120s --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro" ? | 19:00 |
jarrod | My Xen instance stops after "Begin: Running /scripts/init-bottom ...\nDone." -- any ideas how I can move past this? | 19:01 |
*** littleidea has joined #openstack | 19:01 | |
MotoMilind | Ok, so I believe I can summarize my current issue succinctly now. On Ubuntu, I installed bexar. The release didn't have certain files, such as bootstrap.template for cloudpipe and geninter.sh. Downloading nova/trunk found those files. To point to those, I need to set --use_project_ca=True and --ca_path to the location of nova trunk. But having those configs prevents nova-api from starting. As a workaround, I start nova-api without the configs | 19:02 |
*** Nacx has quit IRC | 19:03 | |
*** littleidea has quit IRC | 19:04 | |
*** MarkAtwood has joined #openstack | 19:07 | |
*** mgoldmann has joined #openstack | 19:09 | |
*** gaveen has joined #openstack | 19:10 | |
*** littleidea has joined #openstack | 19:16 | |
jarrod | i guess xen support is out the window | 19:16 |
sirp | jaypipes: think you'll have migration 003-in-trunk mergeprop'd soon? | 19:17 |
jaypipes | sirp: yes, within an hour. sorry, had some lunch. | 19:17 |
sirp | jaypipes: oh no worries | 19:19 |
ramd | kpepple: tried it...same kind of "no table error" wondering anything on DB side http://paste.openstack.org/show/825/ | 19:19 |
*** aliguori has joined #openstack | 19:22 | |
*** dprince has joined #openstack | 19:28 | |
kpepple | ramd: yes, there is definitely that. log into your mysql database and make sure you have a network table ("select * from networks"). you probably do ... not sure why it's not finding your correct db ... | 19:29 |
*** DigitalFlux has joined #openstack | 19:30 | |
*** sirp has quit IRC | 19:32 | |
*** sirp_ has joined #openstack | 19:36 | |
*** maplebed has quit IRC | 19:37 | |
jaypipes | dubs: still getting this error when running the migrate script from that bug: http://paste.openstack.org/show/826/ :( | 19:40 |
ramd | kpepple: Something goofy on the netmask...I remember doing nova-manage network create 192.168.0.0/24 ...but here it shows as /25 http://paste.openstack.org/show/827 | 19:42 |
sirp_ | jaypipes: hmm, i thought dubs was running it without problems, did that just crop up? what changed? | 19:42 |
jaypipes | sirp_: that's always been the error I get from running the test suite with that migrate script in the migrate repo. That's the reason I did not include the migrate script in the orig merge prop in the first place. :) | 19:43 |
ramd | kpepple: is there anyway I can delete these networks and create new one | 19:43 |
kpepple | ramd: nova-manage network delete ? | 19:44 |
sirp_ | jaypipes: right, but dubs and i have been running the tests w/o issue lately? i'm wondering what suddenly changed to cause that problem for him and i | 19:45 |
jaypipes | sirp_: I didn't say it caused problems for you. I said *I* was seeing the errror... | 19:45 |
ramd | kpepple: delete is not there ...delete doesn't match any options | 19:46 |
sirp_ | jaypipes: gotcha, thought you said dubs was seeing the problem, nm | 19:47 |
*** mahadev has quit IRC | 19:49 | |
kpepple | ramd: you'll have to pull them from the db | 19:50 |
kpepple | ramd: by hand | 19:50 |
dubs | jaypipes: i haven't been able to reproduce that. it just works for me (not often that this is said by the person who *didn't* write the code) | 19:52 |
jaypipes | sirp_: and you can't reproduce the issue either, right? | 19:53 |
jaypipes | dubs, sirp_: what version of sqlalchemy-migrate and sqlalchemy are you on, btw? | 19:53 |
dweimer | When using swauth, what's the best way to get a list mapping the swauth account names to their corresponding swift AUTH_... account identifiers? | 19:54 |
sirp_ | jaypipes: i *was* able to reproduce it originally, but haven't been able to lately | 19:54 |
*** h0cin has quit IRC | 19:55 | |
*** DigitalFlux has quit IRC | 19:58 | |
gholt | dweimer: There is not a cool tool for that atm. You can use swift-list to get to that info though. | 19:58 |
*** zenmatt has joined #openstack | 19:59 | |
gholt | dweimer: If you'd like to write a tool to do the mapping, I can explain what would need to be done. Not sure if you're asking for a one time need or something longer term. | 19:59 |
dubs | jaypipes: ubuntu maverick packages of python-migrate, 0.6-2 | 20:00 |
sirp_ | jaypipes: running under ubuntu, i just reproduced a error again (different than the one ref by jaypipes above); mac os x seems to pass | 20:01 |
*** DigitalFlux has joined #openstack | 20:01 | |
btorch | do I need euca2ools installed on a compute node box ? I have that on the cloud-controller node which also happens to be a compute-node on this test environment I got .. adding a compute node now | 20:02 |
jaypipes | dubs: hmm, I'm on 0.6-1... | 20:02 |
jaypipes | sirp_: what's the other error? | 20:02 |
jaypipes | sirp_: and is there a version diff between ubuntu and your mac? | 20:03 |
sirp_ | this beauty: http://paste.openstack.org/show/828/ | 20:03 |
sirp_ | checking now | 20:03 |
sirp_ | darn, no __version__ | 20:04 |
ramd | kpepple: I'm confused...I deleted it and recreated using nova-manage....Still it creates it with /25 netmask | 20:05 |
jaypipes | sirp_: I'm jealous of your merge proposals to Glance going so smoothly. :P | 20:05 |
sirp_ | jaypipes: heh | 20:06 |
eday | sandywalsh: do you have any q's regarding the licensing stuff still? | 20:06 |
jaypipes | sirp_: that should be easily fixed. Put a "useexisting=True" kwargs in the Table constructor in the 001 migrate script. | 20:06 |
sirp_ | jaypipes: will do; i didn't do that when i was first creating the migrate stuff b/c i thought that was papering over a fundamental misunderstanding of how things worked | 20:07 |
jaypipes | sirp_: though technically, in the test, the database should *not* exist before running the upgrade to 001... did you remove that os.unlink()? ;) | 20:07 |
jaypipes | sirp_: s/database/table | 20:08 |
sandywalsh | eday, yes, what do we do with jacobians old non-marked files | 20:08 |
sandywalsh | eday, should we add a BSD copyright for him | 20:08 |
sandywalsh | eday, or just add ours (where appropriate) | 20:08 |
sandywalsh | eday, currently, his files are untouched and we have our copyright on new/heavily altered files | 20:08 |
jaypipes | sirp_, dubs: BTW, if you feel the checksum branch is good to go, feel free to set it to Approved. | 20:08 |
ttx | sandywalsh: also there is no mention of BSD anymore, I think, in the current tarballs | 20:09 |
sirp_ | jaypipes: one of the reasons i used that "define_table_blah" pattern was to avoid the useexisting stuff; by not defining that in the module scope, i thought we could avoid that messiness | 20:09 |
sandywalsh | ttx no, in license I have both | 20:09 |
ttx | sandywalsh: oh, ok | 20:10 |
*** gregp76_ has joined #openstack | 20:10 | |
sandywalsh | ttx https://github.com/rackspace/python-novaclient/blob/master/LICENSE | 20:10 |
sandywalsh | ttx scroll to bottom | 20:10 |
ttx | sandywalsh: haven't seen the latest :) | 20:10 |
jaypipes | sirp_: yeah, I understand. I used that technique in my migrate script, too, btw. I'm no SA wizard, though, so some of this stuff is just too "magical" especially considering the horrific state of the migrate documentation regarding anything more than "here, add a little table". | 20:11 |
*** gregp76 has quit IRC | 20:12 | |
*** gregp76_ is now known as gregp76 | 20:12 | |
sirp_ | jaypipes: oh yeah, having major regrets with SA migrate | 20:12 |
eday | sandywalsh: we should add his name+bsd license for all old files | 20:12 |
sirp_ | jaypipes: unfortunately, not sure of a better path, so guess fix-up-for-now is the only way forward | 20:12 |
eday | sandywalsh: only add ours+apache to new or heavily modified (which you already did) | 20:12 |
sandywalsh | eday, ok, I'll go back and fix up ... thx | 20:13 |
jaypipes | sirp_: ya | 20:13 |
eday | sandywalsh: cool :) wanted to make sure we get this fixed so your branch can land soon! | 20:13 |
sandywalsh | eday, ha, no kidding | 20:13 |
ttx | eday, sandywalsh: setup.py only mentions Apache as a license, is that an issue ? | 20:13 |
eday | ttx: ahh, should probably mention both | 20:14 |
sandywalsh | ttx, eday ... hmm, good question. I don't think PyPi will take dual licenses | 20:14 |
eday | if we only get one, I'd use apache since it's more restrictive | 20:14 |
sandywalsh | checking | 20:14 |
ttx | Team meeting in 45 minutes in #openstack-meeting | 20:16 |
uvirtbot | New bug: #731558 in glance "Empty fields get turned in to "None" when passing over http. " [Low,In progress] https://launchpad.net/bugs/731558 | 20:17 |
jaypipes | vishy: thx for taking the time to fix bugs in Glance, man. Appreciate it. | 20:17 |
*** littleidea has quit IRC | 20:18 | |
jaypipes | sirp_: ever figure out the version difference between your ubuntu and mac boxen for sqlalchemy and sqlalchemy-migrate? | 20:18 |
*** rlucio has joined #openstack | 20:18 | |
vishy | jaypipes no prob jay | 20:18 |
*** rds__ has quit IRC | 20:19 | |
sirp_ | jaypipes: so, unless i'm missing something, `migrate` doesn't embed any version metadata in the source code | 20:20 |
*** rlucio has left #openstack | 20:20 | |
*** msassak has quit IRC | 20:21 | |
*** DigitalFlux has quit IRC | 20:21 | |
jaypipes | sirp_: no way to find out the package installed on a mac? | 20:23 |
dubs | sirp_: how did you install it? | 20:23 |
sirp_ | jaypipes: dubs: on my mac i'm using sqlalchemy_migrate-0.6-py2.6.egg | 20:24 |
dweimer | gholt: I would be willing to write a tool to do it. This would be a longer term need for us as we are hoping to use it for account billing purposes. | 20:25 |
*** dillon-w has quit IRC | 20:26 | |
dweimer | gholt: In trying to see how the swauth middleware does it, I think I can get the information from AUTH_.auth. I'm still trying to figure out exactly how that would work though. | 20:27 |
jaypipes | sirp_: k. mine is sqlalchemy_migrate-0.6.1-py2.6.egg... | 20:27 |
jaypipes | sirp_: your mac works fine, right. seems that 0.6-1 is the culprit. I'm going to pull 0.6-2 (the one dubs has) and see if that fixes things. | 20:28 |
sirp_ | jaypipes: cool | 20:28 |
*** blueadept has joined #openstack | 20:29 | |
openstackhudson | Project swift build #213: SUCCESS in 27 sec: http://hudson.openstack.org/job/swift/213/ | 20:31 |
openstackhudson | Tarmac: change internal proxy to make calls to handle_request with a copy of the request rather than the request itself | 20:31 |
*** ddumitriu has joined #openstack | 20:39 | |
*** gaveen has quit IRC | 20:44 | |
sirp_ | jaypipes: found some wonkiness; my migrate_version is set to 0, when it should be 3 | 20:44 |
sirp_ | jaypipes: that explains why it was trying to upgrade a table that was already there *grumble* | 20:45 |
jaypipes | sirp_: grumble grumble. :) | 20:45 |
sirp_ | jaypipes: well that fixed it | 20:47 |
jaypipes | heh | 20:47 |
sirp_ | jaypipes: not sure how they got out of sync | 20:47 |
sirp_ | jaypipes: thought upgrades were atomic, but maybe an exception caused it to upgrade without writing the version variable | 20:48 |
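The out-of-sync state sirp_ hit can be pictured with the bookkeeping table sqlalchemy-migrate keeps: a one-row `migrate_version` table records the schema level, and if it says 0 while the tables for level 3 already exist, re-running the upgrade trips over "table already exists". Sketch only; the column layout is as I recall it from sqlalchemy-migrate of that era, so verify against your own database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# sqlalchemy-migrate's bookkeeping table (assumed layout)
conn.execute("CREATE TABLE migrate_version "
             "(repository_id TEXT, repository_path TEXT, version INTEGER)")
conn.execute("INSERT INTO migrate_version VALUES ('Glance Migrations', '.', 0)")

# manual fix when the recorded level lags the real schema:
conn.execute("UPDATE migrate_version SET version = 3")
(version,) = conn.execute("SELECT version FROM migrate_version").fetchone()
print(version)  # 3
```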
*** MarcMorata has joined #openstack | 20:48 | |
jaypipes | sirp_: interesting... | 20:49 |
dubs | heh | 20:50 |
sirp_ | jaypipes: so from my perspective, i guess all is well, no errors... | 20:50 |
sirp_ | jaypipes: so we're just waiting to see if your upgrade fixes the shenanigans | 20:50 |
gholt | dweimer: See if this helps: http://paste.openstack.org/show/830/ | 20:52 |
jaypipes | sirp_: I can't seem to get pip-requires to respect the sqlalchemy-migrate>=0.6.1 (b/c they used a dash -1 instead of a .1)... grr this shit makes my head hurt. | 20:53 |
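[Editor's note: the reason `sqlalchemy-migrate>=0.6.1` can't be satisfied by "0.6-1" is that a trailing `-1` reads as a post-release of 0.6, which sorts after 0.6 but before 0.6.1. A rough, stdlib-only sketch of that ordering (not pip's actual resolver):]

```python
def version_key(v):
    """Rough version key: a trailing '-N' is a post-release of the
    base version, so it sorts after 0.6 but before 0.6.1."""
    base, _, post = v.partition("-")
    parts = [int(p) for p in base.split(".")]
    parts += [0] * (3 - len(parts))          # pad to major.minor.micro
    parts.append(int(post) if post else 0)   # post-release rank
    return tuple(parts)

# sqlalchemy-migrate's "0.6-1" does not satisfy >=0.6.1
satisfies = version_key("0.6-1") >= version_key("0.6.1")
```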
sirp_ | jaypipes: maybe just grab from google-code trunk? | 20:53 |
jaypipes | sirp_: yeah (dealing with 5 things up in the air right now...) I'll try that | 20:54 |
gholt | dweimer: There isn't a mapping back the other way though, which is what you probably really need now that I think about it. | 20:55 |
sirp_ | jaypipes: cool, since i've got glance working for the moment, im gonna start work on getting Nova to respect the disk_format + container_format stuff | 20:55 |
ttx | Team meeting in 5 min. in #openstack-meeting | 20:55 |
gholt | dweimer: There has been some talk on mappings (user/group mostly, but this would be the same) in swauth. Swauth is pretty young so it hasn't (well... I haven't) got that far yet. | 20:56 |
*** pvo has joined #openstack | 20:57 | |
*** pvo has quit IRC | 20:57 | |
*** pvo has joined #openstack | 20:57 | |
*** ChanServ sets mode: +v pvo | 20:57 | |
*** MarcMorata has quit IRC | 20:58 | |
*** hub_cap has quit IRC | 20:58 | |
nelson | Argh. Why is 'st' giving me a 499 error? tcpdump says that it's written the full 7168 bytes of the file, but when 'st' tries to recvfrom(), it hits an EAGAIN; it polls; does the recvfrom() again and gets an immediate EOF. | 20:59 |
*** syah has quit IRC | 20:59 | |
nelson | The object server never sees a single byte of data. | 20:59 |
dweimer | gholt: Thanks for the curl example, that does help. For the short term we can manage the overall account list elsewhere. I'm interested in looking into it further though. Would adding an account name field to the accounts sqlite database make sense? | 21:00 |
kpepple | ramd: are you trying to bridge to your own network? | 21:00 |
pandemicsyn | nelson: anything in proxy.error ? | 21:00 |
pandemicsyn | or where ever your logging proxy errors | 21:00 |
*** ChanServ sets mode: +v pandemicsyn | 21:01 | |
nelson | Not that I can see. It's the object server which isn't reading anything. | 21:01 |
gholt | dweimer: Well, here's a thing you can do: Use -s <suffix> with your swauth-add-account and you'll know what the AUTH_<suffix> is. If you just use <suffix> = <swauth-account-name> the mapping will just be 'prepend AUTH_'. | 21:02 |
nelson | The code is reacting correctly to the results of the syscalls. | 21:02 |
gholt | dweimer: In the long term, swauth should manage a back-mapping itself, which means two-phase commits and such. We can't really store it on the account itself without munging the auth/swift separation we want. | 21:03 |
*** bostonmike has quit IRC | 21:04 | |
dweimer | gholt: Thanks, the suffix change should work for us. It would make things like mapping public urls to internal urls a bit easier as well. | 21:07 |
gholt | dweimer: Cool, the only reason for the crazy mapping actually is just history. :) | 21:07 |
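[Editor's note: with `-s <suffix>` equal to the swauth account name, the mapping gholt describes reduces to a prefix operation in both directions. A minimal sketch (helper names hypothetical; `AUTH_` is swift's default reseller prefix):]

```python
RESELLER_PREFIX = "AUTH_"  # swift's default reseller prefix

def storage_account(swauth_name):
    # forward mapping: 'prepend AUTH_'
    return RESELLER_PREFIX + swauth_name

def swauth_account(storage_name):
    # back-mapping for billing lookups: 'strip AUTH_'
    assert storage_name.startswith(RESELLER_PREFIX)
    return storage_name[len(RESELLER_PREFIX):]
```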
*** dkocher has joined #openstack | 21:08 | |
*** dkocher has quit IRC | 21:12 | |
*** pothos_ has joined #openstack | 21:20 | |
*** MarcMorata has joined #openstack | 21:22 | |
*** pothos has quit IRC | 21:22 | |
*** dprince has quit IRC | 21:22 | |
*** pothos_ is now known as pothos | 21:22 | |
*** allsystemsarego has quit IRC | 21:28 | |
*** MarcMorata has quit IRC | 21:28 | |
*** rds__ has joined #openstack | 21:29 | |
*** eikke has quit IRC | 21:29 | |
jarrod | is someone working on snapshots with kvm & glance? | 21:33 |
sirp_ | jarrod: termie had mentioned something about that, not sure how far along he is | 21:35 |
jarrod | that seems to be the only thing kvm is missing | 21:35 |
jarrod | because other than that, it works perfect out of the box | 21:35 |
jarrod | at least with openstack | 21:36 |
nelson | CRAP. The problem is in my code. WHY DO ALL THE BUGS HAVE TO BE IN my CODE?? | 21:41 |
jarrod | i think "my" was the only thing that really needed to be capitalized | 21:42 |
*** lvaughn_ has quit IRC | 21:43 | |
*** enigma has joined #openstack | 21:44 | |
*** lvaughn has joined #openstack | 21:45 | |
soren | vishy: Fresh qemu-kvm is in the ppa, btw. | 21:45 |
creiht | lol | 21:46 |
j05h | nelson: it's open source. take heart in the fact that none of the code is yours. or dismay. your call. | 21:48 |
soren | j05h: You've got it the wrong way around. All the code is yours, not none. :) | 21:49 |
j05h | touche. | 21:49 |
j05h | nelson: sorry about that bug i put in your code. | 21:49 |
*** lvaughn_ has joined #openstack | 21:50 | |
*** mgoldmann has quit IRC | 21:51 | |
*** lvaughn has quit IRC | 21:54 | |
xtoddx | soren: i'm taking Jesse's review day over today. i'll update the wiki to show the change | 21:54 |
xtoddx | soren: do we have docs of the process we should follow on review days? (ie, prioritize things that come in that day vs. backlog?) | 21:55 |
soren | xtoddx: Just do it. No need to inform me :) | 21:55 |
*** spectorclan_ has quit IRC | 21:55 | |
soren | xtoddx: Nothing apart from what's on the ReviewDays page. | 21:55 |
vishy | soren: thanks | 21:55 |
soren | xtoddx: I was hoping we'd fill it in as we went along and learned how to do it best. | 21:55 |
vishy | soren: LockFailed: failed to create /usr/lib/pymodules/python2.6/cybera16.Dummy-7-4166 | 21:55 |
vishy | i think we need to use a different lockfile dir? | 21:56 |
soren | vishy: cybera16.Dummy-7-4166 means nothing to me, but you're supposed to set a lock_path to somewhere writable. | 21:56 |
vishy | soren: oh didn't realize there was a lock_path flag | 21:56 |
soren | vishy: the default follows the same pattern as logdir and instances_path etc. | 21:57 |
soren | vishy: It's in the cross-binary-sync branch. | 21:57 |
vishy | soren: interesting, I wonder why it didn't use my state_path then | 21:57 |
soren | vishy: I'm assuming you're testing the iptables stuff? | 21:57 |
vishy | soren: aye | 21:57 |
vishy | testing from packages | 21:57 |
soren | vishy: Oh, because state_path is something else. | 21:57 |
soren | vishy: Lock files don't belong in /var/lib. They belong in /var/lock. | 21:58 |
vishy | ah | 21:58 |
vishy | so packages need to create /var/lock/nova/ and set it? | 21:58 |
*** brd_from_italy has quit IRC | 21:58 | |
*** miclorb_ has joined #openstack | 21:58 | |
*** ChanServ sets mode: +v redbo | 21:59 | |
soren | Yeah. | 21:59 |
soren | vishy: FHS says so. | 21:59 |
*** mahadev has joined #openstack | 21:59 | |
vishy | hmm the other ones are rw by root only | 21:59 |
vishy | i assume we own these by nova | 21:59 |
soren | Which other ones? | 22:00 |
openstackhudson | Project swift build #214: SUCCESS in 28 sec: http://hudson.openstack.org/job/swift/214/ | 22:01 |
openstackhudson | Tarmac: Modifying the multi-node install to fix a bug reported (700894) and to put in changes based on feedback from Russell Nelson. | 22:01 |
*** ctennis has quit IRC | 22:01 | |
*** ericrw has left #openstack | 22:02 | |
*** ericrw has joined #openstack | 22:02 | |
vishy | the other stuff in /var/lock (lvm and iscsi are in there) | 22:02 |
soren | vishy: Oh. | 22:03 |
soren | vishy: Yeah, add another dir, make it owned by nova. | 22:03 |
vishy | and chmod 700? | 22:03 |
soren | vishy: I intentionally haven't adjusted the packaging yet. | 22:03 |
soren | vishy: Yeah, should be fine. | 22:03 |
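[Editor's note: the lock_path discussion above boils down to: packaging creates a nova-owned, mode-0700 directory under /var/lock and the service takes locks there. A minimal stdlib sketch of that pattern (paths under tempdir here; not nova's actual locking code):]

```python
import fcntl
import os
import tempfile

# stand-in for nova's lock_path flag; packaging would point this at
# /var/lock/nova owned by the nova user, per the FHS discussion above
lock_dir = os.path.join(tempfile.gettempdir(), "nova-locks")
os.makedirs(lock_dir, mode=0o700, exist_ok=True)

lock_file = os.path.join(lock_dir, "iptables.lock")
with open(lock_file, "w") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # serialize the critical section
    # ... mutate iptables rules here ...
    fcntl.flock(f, fcntl.LOCK_UN)

exists = os.path.exists(lock_file)
```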
*** jamiec has joined #openstack | 22:08 | |
vishy | soren: sick, just ran 32 m1.smalls on 4 machines in less than 1 minute -- no failures | 22:08 |
soren | vishy: That's what I've been trying to say.. :) | 22:08 |
soren | It works, bitches. | 22:08 |
soren | :) | 22:08 |
*** lionel has quit IRC | 22:10 | |
*** lionel has joined #openstack | 22:10 | |
*** photron has quit IRC | 22:12 | |
*** littleidea has joined #openstack | 22:14 | |
*** ctennis has joined #openstack | 22:15 | |
*** ctennis has joined #openstack | 22:15 | |
justinsb | sandywalsh, dabo: I'd really like to see the distributed scheduler & multi-zone stuff make Cactus. Anything I can help with? What's the blocker? | 22:15 |
*** msassak has joined #openstack | 22:15 | |
dabo | Multi-zone will make cactus. Distributed scheduler has been redefined several times, because of changes to multi-zone. | 22:18 |
dabo | There are also other structural changes that may force yet another approach | 22:18 |
*** viirya has quit IRC | 22:20 | |
*** mahadev has quit IRC | 22:20 | |
*** viirya has joined #openstack | 22:20 | |
*** mahadev has joined #openstack | 22:20 | |
ericrw | yeah, so my point about execvp is that I can polish it up and make a merge request, but it will need a lot more testing than I can personally give it. | 22:20 |
nelson | jarrod: when everything is in caps except one or two words, those words are in SUPER-caps. | 22:21 |
ericrw | before it can go into a release | 22:21 |
*** littleidea has quit IRC | 22:24 | |
sandywalsh | justinsb, the anso team has proposed we assume a single db across zones. I'm still working through the implications of this change. | 22:25 |
westmaas_ | sandywalsh: like a singledb worldwide? | 22:26 |
*** mahadev has quit IRC | 22:26 | |
*** mahadev has joined #openstack | 22:27 | |
justinsb | sandywalsh, dabo: What's a zone then? Is the idea that we'll have a "rackspace-us-south" front end zone, but that this will actually be a front end to "dfw.rackspace-us-south" and "sat.rackspace-us-south"? | 22:27 |
*** blakeyeager has joined #openstack | 22:28 | |
dabo | a zone is anything you define it to be. It's simply a way of logically organizing services. | 22:28 |
sandywalsh | westmaas_, that's the assumption | 22:28 |
justinsb | dabo: OK, but that doesn't really help us figure out what we need to code! | 22:29 |
dabo | westmaas_: yes, a single db. I'm hoping they select something like cassandra rather than rely on replication | 22:29 |
westmaas_ | yes please :) | 22:29 |
sandywalsh | justinsb, my definition for it has been 'zone=nova deployment' | 22:29 |
sandywalsh | justinsb, the value of the single db is twofold: | 22:30 |
dabo | justinsb: yes, it does. You can't assume anything about zones except: a) they may contain child zones and b) they may contain compute services | 22:30 |
eday | sandywalsh: wait, when did this happen? I thought we were doing one db/zone? | 22:30 |
sandywalsh | justinsb, 1. we can do server best match in the db (mostly) which is very efficient | 22:30 |
sandywalsh | justinsb, 2. we can ask questions like get_all_customer_instances(cust X) | 22:30 |
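[Editor's note: sandywalsh's second point is that with one shared schema, a cross-zone question like "all instances for customer X" is a single query instead of a fan-out. A toy sqlite illustration (schema invented for the example):]

```python
import sqlite3

# toy shared db: instances from every zone land in one schema
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instances (id INTEGER, zone TEXT, customer TEXT)")
db.executemany("INSERT INTO instances VALUES (?, ?, ?)",
               [(1, "dfw", "x"), (2, "sat", "x"), (3, "dfw", "y")])

# one indexed query answers get_all_customer_instances(cust X); with
# per-zone dbs this would be a scatter-gather across every zone
rows = db.execute("SELECT id, zone FROM instances "
                  "WHERE customer = ? ORDER BY id", ("x",)).fetchall()
```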
dabo | eday: yes, it's a very recent rebirth of the idea | 22:30 |
eday | *sigh* | 22:31 |
sandywalsh | eday, it's not definitive ... I'm still pondering the implications | 22:31 |
justinsb | sandywalsh, But didn't you do the magic to assemble answers from sub-zones? Isn't that what your zones patch is all about? | 22:31 |
sandywalsh | justinsb, yes | 22:31 |
sandywalsh | :/ | 22:31 |
justinsb | Any idea what's motivating moving away from that? | 22:31 |
justinsb | I think the multi-zone stuff is "da bomb" | 22:31 |
dabo | justinsb: depends on what you mean by 'assemble answers'. The zones can summarize capabilities of child zones/hosts | 22:32 |
*** gregp76 has quit IRC | 22:32 | |
westmaas_ | if a zone can have sub zones...I am confused about what it means for a zone to have its own db | 22:32 |
sandywalsh | justinsb, some queries are very expensive (like the 'get me all instances for customer X' example above) | 22:32 |
westmaas_ | child zones* | 22:32 |
*** dendrobates is now known as dendro-afk | 22:32 | |
*** sirp_ is now known as sirp | 22:32 | |
dabo | justinsb: consider the question: give me all the compute nodes that have instances for customer X | 22:33 |
justinsb | sandywalsh, dabo: Don't I just query each sub-zone, and then combine the answers? | 22:33 |
sandywalsh | westmaas_, I'm not sure it still would have its own db | 22:33 |
sandywalsh | justinsb, that was the assumption I was going under ... it may change | 22:33 |
justinsb | sandywalsh: Sorry, by I, I meant your "front-end zone" magic. I see it as a sort of proxy on a bunch of zones | 22:34 |
dabo | that sort of query has a different way of combining than: can you accommodate a linux instance with 4gb RAM and 500GB disk? | 22:34 |
sandywalsh | right, largely | 22:34 |
westmaas_ | seems like we could do some kind of caching proxy...yeah what you guys are already talking about | 22:34 |
* westmaas_ stops typing | 22:34 | |
eday | justinsb: no, you need to cache subset of info at each zone | 22:34 |
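[Editor's note: justinsb's proxy model, query each child zone and combine the answers, can be sketched as a recursive scatter-gather over a zone tree. All class and method names here are hypothetical:]

```python
class Zone:
    def __init__(self, name, instances=(), children=()):
        self.name = name
        self._instances = {}          # customer -> [instance ids]
        for cust, inst in instances:
            self._instances.setdefault(cust, []).append(inst)
        self.children = list(children)

    def instances_for(self, customer):
        # scatter-gather: answer locally, then fan out to child zones
        result = list(self._instances.get(customer, []))
        for child in self.children:
            result.extend(child.instances_for(customer))
        return result

dfw = Zone("dfw", instances=[("x", "i-1"), ("y", "i-3")])
sat = Zone("sat", instances=[("x", "i-2")])
top = Zone("rackspace-us-south", children=[dfw, sat])
found = top.instances_for("x")
```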
sandywalsh | anyway, it's early yet and it's hard to pin the guys down. I'll post to the group once I can formulate a summary | 22:35 |
eday | sandywalsh: who is driving this, and why is it not on the ML? | 22:35 |
justinsb | Well, not to pull the process card (when I hate process), but isn't this agreed for Cactus? | 22:35 |
eday | sandywalsh: the community has been very involved in these discussions and we shouldn't have backroom design coming out like this | 22:35 |
sandywalsh | eday, it was a casual conversation, nothing formal. But now that I'm done zone3 it's my focus. The intention is to be open with it. | 22:36 |
justinsb | I think the 'create instance' query is more fun, but still very doable in a proxy model | 22:36 |
dabo | eday: I was just starting to type up an email to ask about this when we started this irc discussion. :) | 22:36 |
justinsb | Maybe we should wait for the ML then! | 22:36 |
sandywalsh | yeah, hang tight, I plan on emailing by tomorrow at the latest (deadlines and all :) | 22:37 |
justinsb | It seems to me that no change to the plan has been agreed, so we should proceed with the plan as agreed | 22:37 |
eday | for the record, I think a shared, global DB cluster (assuming mysql/pg since we're pretty tied to sqlalchemy) is a horrible idea. | 22:37 |
justinsb | eday: ++ | 22:37 |
eday | goes against our stated tenets on the wiki, so we'll need to update those if this happens | 22:38 |
jaypipes | mtaylor, soren: did Swift get uninstalled from the Hudson box, or is this build done on a different machine (and needs Swift installed?) http://launchpadlibrarian.net/65915831/buildlog_ubuntu-maverick-i386.glance_2011.2~bzr83-0ubuntu0ppa1~maverick1_FAILEDTOBUILD.txt.gz | 22:38 |
justinsb | Although there are some people out there that can scale relational DBs :-), I think there are a whole bunch of interesting possibilities if we allow looser coupling | 22:38 |
eday | it's not just about scale | 22:38 |
eday | it's reliability too | 22:38 |
justinsb | And non-boring stuff as well :-) | 22:39 |
justinsb | Like cloud-bursting | 22:39 |
eday | and what happens with zones operated by different orgs, we can't all share a mysql cluster | 22:39 |
justinsb | eday: Yes - and federation! | 22:39 |
mtaylor | jaypipes: that's an ubuntu package builder in the ubuntu autobuilders | 22:39 |
mtaylor | jaypipes: so it means that perhaps swift is missing from the build deps in the debian/control file | 22:39 |
eday | sandywalsh: looking forward to your email with more details :) | 22:40 |
sandywalsh | eday, I agree fully. I wonder which approach would have the higher performance cost. | 22:40 |
sandywalsh | eday, yup, it's front of mind for me now. | 22:40 |
jaypipes | mtaylor: can you assist with that? | 22:42 |
eday | sandywalsh: if we embrace eventual consistency between zones and use active caching, we can perform as much as we need to | 22:42 |
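[Editor's note: eday's alternative, eventual consistency with active caching, means children push capability summaries and parents serve possibly-stale answers rather than querying a shared database. A minimal TTL-cache sketch (names and TTL hypothetical; an explicit clock is passed in to keep it testable):]

```python
class ZoneCapabilityCache:
    """Parent zones serve possibly-stale summaries that child zones
    actively push, instead of hitting a shared global database."""
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self._entries = {}   # zone name -> (timestamp, capabilities)

    def push(self, zone, caps, now):
        self._entries[zone] = (now, caps)

    def get(self, zone, now):
        ts, caps = self._entries.get(zone, (None, None))
        if ts is None or now - ts > self.ttl:
            return None      # stale or unknown: wait for the next push
        return caps

cache = ZoneCapabilityCache(ttl=30.0)
cache.push("dfw", {"free_ram_mb": 4096}, now=100.0)
fresh = cache.get("dfw", now=110.0)   # within ttl: served from cache
stale = cache.get("dfw", now=200.0)   # past ttl: treated as unknown
```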
jaypipes | justinsb: going through your (many :) ) merge props... fakerabbit sent off to Tarmac. | 22:42 |
mtaylor | jaypipes: yes - but I'm about to walk out the door for a moment- I'll get it when I get back | 22:43 |
justinsb | jaypipes: Thanks! I sent you that overview - I think most of them are actually on-plan | 22:43 |
jaypipes | justinsb: going through them one by one :) | 22:43 |
jaypipes | mtaylor: ya, no worries. appreciate it. | 22:43 |
openstackhudson | Project nova build #605: SUCCESS in 1 min 50 sec: http://hudson.openstack.org/job/nova/605/ | 22:48 |
openstackhudson | Tarmac: Fix the bug where fakerabbit is doing a sort of prefix matching on the AMQP routing key | 22:48 |
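[Editor's note: a toy reproduction of the fakerabbit bug fixed above. With prefix matching, a message routed to 'compute' also lands in the 'compute.host1' queue; a direct-exchange binding should match the routing key exactly. This is an illustration, not fakerabbit's actual code:]

```python
queues = {"compute": [], "compute.host1": []}

def publish_prefix(routing_key, msg):
    # the bug: startswith() delivers to every queue sharing the prefix
    for key, q in queues.items():
        if key.startswith(routing_key):
            q.append(msg)

def publish_exact(routing_key, msg):
    # the fix: exact routing-key match
    for key, q in queues.items():
        if key == routing_key:
            q.append(msg)

publish_prefix("compute", "m1")
hit_both = (len(queues["compute"]), len(queues["compute.host1"]))
queues = {"compute": [], "compute.host1": []}
publish_exact("compute", "m2")
hit_one = (len(queues["compute"]), len(queues["compute.host1"]))
```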
*** johan_ has joined #openstack | 22:51 | |
jaypipes | xtoddx, termie: got a chance, a re-review of justinsb's https://code.launchpad.net/~justin-fathomdb/nova/servicify-nova-api/+merge/50857 would be useful. thx. (trying to push through the backlog here...) | 22:52 |
*** rds__ has quit IRC | 22:52 | |
*** drico has joined #openstack | 22:52 | |
comstud | hey all | 22:53 |
comstud | johan_ == latest member of rackspace cloud servers dev (ozone) team | 22:53 |
jaypipes | johan_: welcome to chaosville. | 22:53 |
xtoddx | jaypipes: will do | 22:53 |
comstud | give him a big welcome.. he'll have questions soon, i'm sure | 22:53 |
comstud | :) | 22:53 |
johan_ | thanks | 22:54 |
*** sirp has quit IRC | 22:54 | |
*** sirp has joined #openstack | 22:54 | |
*** Vek has quit IRC | 22:54 | |
*** sirp has quit IRC | 22:54 | |
jaypipes | justinsb: hey, on https://code.launchpad.net/~justin-fathomdb/nova/justinsb-openstack-api-volumes/+merge/50868 do you have an additional pre-req of your keys branch? I'm seeing Keys code in there, yet don't see any proposed keys branch (though I do remember you submitting one...) | 22:54 |
jaypipes | johan_: feel free to ping us if you have any questions on bzr, launchpad, or the code base. we're here to help. | 22:55 |
*** sirp_ has joined #openstack | 22:55 | |
justinsb | jaypipes: The keys and volume branches became intertwined, so they're now one and the same. Maybe I should change the commit message on the MP? | 22:55 |
jaypipes | justinsb: hmm.. | 22:56 |
johan_ | jaypipes: thanks. i have nova/glance working on my development hardware now | 22:56 |
*** imsplitbit has quit IRC | 22:56 | |
jaypipes | justinsb: well, I'd actually prefer to see smaller patches... these large ones are difficult to review... but then again, you've already got a dozen branches proposed ;) | 22:56 |
jaypipes | johan_: cool | 22:57 |
*** matiu has joined #openstack | 22:57 | |
justinsb | jaypipes: They start off separate, but the longer they're outstanding the more I tend to need to make changes to all of them and they tend to become interconnected | 22:58 |
johan_ | but i did run into a problem that i had to work around (fix?) | 22:58 |
justinsb | jaypipes: But I'll try harder to keep them clean! | 22:58 |
johan_ | the trunk version of glance doesn't appear to be compatible with the trunk version of nova | 22:59 |
johan_ | the 'type' key seems to have moved to the 'properties' dict | 22:59 |
jaypipes | justinsb: I understand you :) | 22:59 |
*** MarcMorata has joined #openstack | 23:00 | |
johan_ | i had to make these two small changes to get the code to work: http://paste.openstack.org/show/832/ | 23:01 |
jaypipes | johan_: there is a fix already going to hudson: https://code.launchpad.net/~rconradharris/nova/lp731470/+merge/52620 | 23:01 |
jaypipes | johan_: give it about 15 minutes, then bzr pull your local trunk branch and merge trunk into your topic branch... should be all set. | 23:01 |
johan_ | ahh, thanks | 23:01 |
*** azneita has joined #openstack | 23:02 | |
openstackhudson | Project swift build #215: SUCCESS in 29 sec: http://hudson.openstack.org/job/swift/215/ | 23:03 |
openstackhudson | * Tarmac: Fix for object auditor invalidate hashes location bug | 23:03 |
openstackhudson | * Tarmac: fix ring hash check fix in obj replicator tests. | 23:03 |
*** guynaor has joined #openstack | 23:07 | |
*** MarcMorata has quit IRC | 23:09 | |
*** guynaor has left #openstack | 23:10 | |
*** ironcame1 is now known as ironcamel | 23:11 | |
jaypipes | sirp_, dabo: so, sirp_'s fix for the 'type' => 'disk_format' thing has failed. I think it's because the Hudson box does not have the latest version of Glance? what do you think? | 23:12 |
*** ferdy has joined #openstack | 23:14 | |
*** Dimm_ has joined #openstack | 23:14 | |
*** ppetraki has quit IRC | 23:15 | |
Dimm_ | Hi! Can somebody please help with installing swift on RH? The services are dying silently, no errors anywhere... | 23:15 |
*** hadrian has joined #openstack | 23:16 | |
*** blueadept has quit IRC | 23:18 | |
*** gregp76 has joined #openstack | 23:18 | |
*** cascone has joined #openstack | 23:20 | |
*** ferdy has left #openstack | 23:24 | |
uvirtbot | New bug: #731668 in nova "No way to cleanly shut down WSGI" [Undecided,New] https://launchpad.net/bugs/731668 | 23:27 |
dubs | jaypipes: i think even with the latest version of glance it would fail because the migration is not there yet | 23:36 |
openstackhudson | Project nova build #606: SUCCESS in 1 min 48 sec: http://hudson.openstack.org/job/nova/606/ | 23:38 |
openstackhudson | Tarmac: Use disk_format and container_format in place of image type. | 23:38 |
*** mray has quit IRC | 23:39 | |
*** Dimm_ has quit IRC | 23:45 | |
*** aliguori has quit IRC | 23:45 | |
openstackhudson | Project nova build #607: SUCCESS in 1 min 50 sec: http://hudson.openstack.org/job/nova/607/ | 23:48 |
openstackhudson | Tarmac: Fixes lp730960 - mangled instance creation in virt drivers due to improper merge conflict resolution | 23:48 |
vishy | soren: apparently just adding it the same way as other dirs is a bit bad: E: nova-common: dir-or-file-in-var-lock var/lock/nova/ | 23:50 |
*** lionel has quit IRC | 23:58 | |
*** lionel has joined #openstack | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!