*** MotoMilind has quit IRC | 00:02 | |
aixenv | is this nova installer v1.1 script known to be broken? | 00:03 |
aixenv | it's not creating my /root/creds/novarc | 00:03 |
aixenv | "/usr/bin/python /usr/bin/nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip &>> $LOGFILE" seems to be failing | 00:04 |
aixenv | on a new Ubuntu 10.10 install; any reason why? | 00:04 |
*** nelson has quit IRC | 00:05 | |
*** nelson has joined #openstack | 00:05 | |
*** joearnold has quit IRC | 00:05 | |
n1md4 | I created it with my sudo user, in the home directory, seems to work. | 00:07 |
n1md4 | It didn't seem necessary to create it in the /root directory; although I could be wrong. | 00:08 |
n1md4 | What error are you getting? | 00:08 |
aixenv | the script is supposed to make the directory if it doesn't exist | 00:08 |
aixenv | so I'm wondering if the nova-manage commands above are failing | 00:09 |
aixenv | ok, I echoed the variables right before that line | 00:10 |
aixenv | and the echoes came out right | 00:10 |
*** z0 has quit IRC | 00:12 | |
*** enigma has quit IRC | 00:12 | |
n1md4 | I didn't have those problems. Can you run the failing commands manually? | 00:12 |
aixenv | I'm not even sure what's failing; one sec, I'll pastebin the script output | 00:13 |
aixenv | http://pastebin.com/R924mDzz | 00:14 |
*** gregp76 has quit IRC | 00:15 | |
*** clauden_ has quit IRC | 00:15 | |
n1md4 | Ah! Don't worry about that, you can create it afterwards. | 00:16 |
n1md4 | It will work otherwise. | 00:16 |
aixenv | but it won't create the creds manually either | 00:16 |
aixenv | I have an idea, real quick, one sec | 00:17 |
*** sparkycollier has quit IRC | 00:18 | |
*** sebastianstadil has joined #openstack | 00:19 | |
*** ewindisch has joined #openstack | 00:20 | |
aixenv | bah no dice | 00:20 |
aixenv | the 'nova-manage project zipfile $project $project_user /path/to/creds.zip' command doesn't work either; no error, it just doesn't create anything | 00:20 |
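The installer line quoted earlier sends both stdout and stderr into $LOGFILE via `&>>`, so any failure from nova-manage is invisible at the prompt. A quick way to surface it is to run the same command by hand with nothing redirected and check the exit status; the log path below is a guess at a typical nova install, not something confirmed in this conversation:

```sh
# Run the failing command directly, with stderr left on the terminal:
/usr/bin/python /usr/bin/nova-manage project zipfile "$NOVA_PROJECT" "$NOVA_PROJECT_USER" /root/creds/novacreds.zip
echo "exit status: $?"          # non-zero means nova-manage did fail, even if silently

# Check whether anything was actually written:
ls -l /root/creds/ && unzip -l /root/creds/novacreds.zip

# nova-manage typically logs under /var/log/nova/ (assumed location):
tail -n 50 /var/log/nova/nova-manage.log
```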
*** enigma1 has joined #openstack | 00:21 | |
n1md4 | try 'sudo /usr/bin/python /usr/bin/nova-manage project zipfile $project $project_user novacreds.zip' as your sudo user, in your home directory | 00:23 |
*** adiantum has quit IRC | 00:23 | |
aixenv | I'm root | 00:25 |
*** pharkmillups has quit IRC | 00:25 | |
n1md4 | I've another problem! I only want 1 controller, but after manually installing all nova- packages, and then scp'ing nova.conf to the cloud-2, I get duplicate scheduler and network binaries http://paste.openstack.org/show/1023/ | 00:26 |
n1md4 | aixenv: try running it as your regular user, without root. | 00:26 |
aixenv | well, I ran this same command history previously without issues at this point (as root) | 00:27 |
*** dendro-afk is now known as dendrobates | 00:27 | |
*** adiantum has joined #openstack | 00:28 | |
*** joearnold has joined #openstack | 00:28 | |
*** enigma1 has quit IRC | 00:38 | |
*** MotoMilind has joined #openstack | 00:40 | |
*** maplebed has quit IRC | 00:43 | |
*** joearnold has quit IRC | 00:44 | |
*** bluetux has joined #openstack | 00:46 | |
*** adiantum has quit IRC | 00:52 | |
winston-d | How can I enable CDN feature of Swift, if it's possible? | 00:53 |
creiht | winston-d: do you have a CDN provider to work with, or do you just want to make containers public? | 00:55 |
winston-d | creiht: I just want to make container public. | 00:55 |
creiht | hrm | 00:56 |
* creiht makes a note to annegentle to point out that public containers are not in the swift dev guide | 00:57 | |
winston-d | creiht : I stored a website's files in swift and now I'd like to use them via the PHP bindings | 00:57 |
creiht | http://swift.openstack.org/misc.html#module-swift.common.middleware.acl | 00:58 |
*** benbenhappy has joined #openstack | 00:58 | |
creiht | short answer is you want to set the X-Container-Read metadata on the container to .r:* | 00:58 |
creiht | which basically says that any referrer can read any objects in that container | 00:58 |
openstackjenkins | Project nova build #736: SUCCESS in 2 min 21 sec: http://hudson.openstack.org/job/nova/736/ | 00:58 |
openstackjenkins | Tarmac: Displays an error message to the user if an exception is raised. This is vital because if logfile is set, the exception shows up in the log and the user has no idea something went wrong. | 00:58 |
creiht | so with st that might look like | 01:00 |
*** littleidea has quit IRC | 01:00 | |
winston-d | creiht : great. I guess I should use 'st' to set X-Container-Read | 01:00 |
*** enigma1 has joined #openstack | 01:00 | |
creiht | st -A auth_url -U user -K key post -r '.r:*' container_name | 01:01 |
* winston-d taking note of the 'st' magic | 01:02 | |
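For anyone without `st` installed, the same ACL can be set directly against the Swift API; the auth endpoint, credentials, and container name below are placeholders for illustration, not values from this conversation:

```sh
# Get a token (tempauth/swauth-style v1.0 auth; credentials are illustrative):
curl -i -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' \
    http://swift-proxy.example.com:8080/auth/v1.0

# Use the returned X-Storage-Url and X-Auth-Token to mark the container public:
curl -i -X POST -H "X-Auth-Token: $TOKEN" -H 'X-Container-Read: .r:*' \
    "$STORAGE_URL/container_name"
```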
*** enigma1 has quit IRC | 01:03 | |
*** adiantum has joined #openstack | 01:04 | |
winston-d | creiht : last time I encountered an issue with the Java bindings, and it turned out that changing the user from 'test' to 'system:test' solved the problem. I'm a bit confused, isn't 'system' the account name? | 01:04 |
creiht | ahh.. yes | 01:05 |
creiht | That is mostly due to how swift has evolved historically | 01:05 |
creiht | at Rackspace there are only accounts for Cloud Files, and no notion of users | 01:06 |
creiht | users were added in to the internal auth (now swauth) for other reasons | 01:06 |
aixenv | so swift doesn't know the concept of users, just specific clouds? | 01:06 |
creiht | and thus having to send the account:username as the account string to get around that | 01:06 |
creiht | aixenv: that is mostly up to your auth implementation :) | 01:07 |
creiht | by using the account string like that, it allowed us to add users to swift but still be backward compatible with the bindings | 01:07 |
creiht | aixenv: swift uses opaque strings for the account identifier, so it is up to auth to map the users account (however it is designated) to that string | 01:08 |
creiht | and by default that string is just a UUID | 01:08 |
creiht | so my testing account of 'test:tester' (test being the account and tester being the user) may map to a string that looks like AUTH_a3c717f5b17e40a1b38464fb8da02ea4 | 01:10 |
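That opaque account string is what comes back in the storage URL at auth time, which is where the 'account:user' convention creiht describes shows up in practice. A sketch with placeholder host and credentials:

```sh
# Authenticate as account 'test', user 'tester':
curl -i -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' \
    http://swift-proxy.example.com:8080/auth/v1.0
# The response headers include something like:
#   X-Storage-Url: http://swift-proxy.example.com:8080/v1/AUTH_a3c717f5b17e40a1b38464fb8da02ea4
#   X-Auth-Token:  AUTH_tk...
# i.e. 'test:tester' has been mapped to the opaque AUTH_<uuid> account string.
```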
*** enigma has joined #openstack | 01:12 | |
*** dirakx has quit IRC | 01:14 | |
*** benbenhappy has quit IRC | 01:15 | |
*** sebastianstadil has quit IRC | 01:15 | |
*** winston-d has quit IRC | 01:22 | |
*** enigma has quit IRC | 01:22 | |
*** gregp76 has joined #openstack | 01:29 | |
*** enigma has joined #openstack | 01:31 | |
*** aliguori has quit IRC | 01:33 | |
*** z0 has joined #openstack | 01:33 | |
*** MotoMilind has quit IRC | 01:35 | |
*** adiantum has quit IRC | 01:36 | |
*** enigma1 has joined #openstack | 01:38 | |
*** enigma has quit IRC | 01:38 | |
zul | yay.... | 01:39 |
*** enigma1 has quit IRC | 01:39 | |
*** adiantum has joined #openstack | 01:42 | |
*** santhosh has joined #openstack | 01:42 | |
*** benbenhappy has joined #openstack | 01:43 | |
*** dendrobates is now known as dendro-afk | 01:43 | |
*** santhosh has quit IRC | 01:44 | |
*** santhosh has joined #openstack | 01:45 | |
*** gregp76 has quit IRC | 01:45 | |
*** enigma has joined #openstack | 01:48 | |
aixenv | hey guys, I spawned an instance.. 'euca-describe-instances' shows it running, but I can't 'ssh -i cloudadmin.priv root@10.0.0.2' into it (where 10.0.0.2 is the IP apparently assigned, based on describe-instances). I'm looking in the logs and not seeing anything too helpful | 01:50 |
aixenv | any ideas? | 01:50 |
aixenv | also, my 'ifconfig -a | grep 10.0.0' returns no entries, so that might be an issue | 01:51 |
*** littleidea has joined #openstack | 01:52 | |
aixenv | and if I can give any more information, I'd be happy to provide it | 01:53 |
*** enigma has quit IRC | 01:54 | |
*** Ryan_Lane has quit IRC | 01:55 | |
*** winston-d has joined #openstack | 01:56 | |
*** bcwaldon has quit IRC | 01:57 | |
*** z0 has quit IRC | 01:57 | |
*** z0 has joined #openstack | 01:58 | |
*** rchavik has quit IRC | 02:01 | |
*** dendro-afk is now known as dendrobates | 02:02 | |
*** enigma has joined #openstack | 02:04 | |
*** woleium has joined #openstack | 02:07 | |
*** adiantum has quit IRC | 02:08 | |
devdvd | I get "error: internal error character device (null) is not using a PTY" | 02:08 |
devdvd | when I try to console into my instance with virsh | 02:08 |
devdvd | thoughts? | 02:08 |
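That libvirt error usually means the guest was defined without a PTY-backed console device. A couple of hedged checks (the domain name and the expected XML are assumptions about devdvd's setup, not something shown in the channel):

```sh
# Inspect the console/serial devices the domain actually defines:
virsh dumpxml instance-00000001 | grep -A 3 -E '<(serial|console)'
# For 'virsh console' to work you would expect entries like:
#   <serial type='pty'> ... </serial>
#   <console type='pty'> ... </console>
# If they are missing or of another type, that matches the
# "character device (null) is not using a PTY" message.
```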
*** littleidea has quit IRC | 02:09 | |
*** adiantum has joined #openstack | 02:13 | |
*** enigma has quit IRC | 02:13 | |
aixenv | should my instance IP be from a network that is configured on my cloud/node controller server? I'm thinking yes, and that is currently not the case | 02:18 |
*** enigma has joined #openstack | 02:22 | |
aixenv | another question - does the "desired network + CIDR for project" have to be contained within the "Controller network range" ? | 02:23 |
*** z0 has quit IRC | 02:25 | |
*** enigma has quit IRC | 02:27 | |
*** littleidea has joined #openstack | 02:30 | |
*** littleidea has quit IRC | 02:32 | |
*** enigma has joined #openstack | 02:36 | |
*** miclorb_ has quit IRC | 02:37 | |
*** enigma has quit IRC | 02:41 | |
*** bluetux has quit IRC | 02:48 | |
*** miclorb has joined #openstack | 02:50 | |
*** enigma has joined #openstack | 02:50 | |
*** enigma has quit IRC | 02:51 | |
uvirtbot | New bug: #744708 in glance "kernel_id and ramdisk_id marked as deleted by default" [Undecided,New] https://launchpad.net/bugs/744708 | 03:01 |
*** lool- has joined #openstack | 03:01 | |
*** AimanA is now known as HouseAway | 03:01 | |
*** troytoma` has joined #openstack | 03:05 | |
*** miclorb has quit IRC | 03:06 | |
*** ewindisch has quit IRC | 03:06 | |
*** kapil has quit IRC | 03:06 | |
*** koolhead11|afk has quit IRC | 03:06 | |
*** viirya has quit IRC | 03:06 | |
*** westmaas_ has quit IRC | 03:06 | |
*** dweimer has quit IRC | 03:06 | |
*** mark has quit IRC | 03:06 | |
*** HugoKuo_ has quit IRC | 03:06 | |
*** zykes- has quit IRC | 03:06 | |
*** nijaba has quit IRC | 03:06 | |
*** karmabot has quit IRC | 03:06 | |
*** ksteward has quit IRC | 03:06 | |
*** kpepple_ has quit IRC | 03:06 | |
*** sleepsonthefloor has quit IRC | 03:06 | |
*** colinnich has quit IRC | 03:06 | |
*** jamiec has quit IRC | 03:06 | |
*** huleboer has quit IRC | 03:06 | |
*** kang__ has quit IRC | 03:06 | |
*** arreyder has quit IRC | 03:06 | |
*** widodh has quit IRC | 03:06 | |
*** adiantum has quit IRC | 03:06 | |
*** Daviey has quit IRC | 03:06 | |
*** stewart has quit IRC | 03:06 | |
*** jfluhmann has quit IRC | 03:06 | |
*** eday has quit IRC | 03:06 | |
*** laurensell has quit IRC | 03:06 | |
*** letterj has quit IRC | 03:06 | |
*** taihen has quit IRC | 03:06 | |
*** MarkAtwood has quit IRC | 03:06 | |
*** xtoddx has quit IRC | 03:06 | |
*** jheiss has quit IRC | 03:06 | |
*** deepy has quit IRC | 03:06 | |
*** gdusbabek has quit IRC | 03:06 | |
*** benbenhappy has quit IRC | 03:06 | |
*** burris has quit IRC | 03:06 | |
*** nid0 has quit IRC | 03:06 | |
*** markwash has quit IRC | 03:06 | |
*** fysa has quit IRC | 03:06 | |
*** duffman_ has quit IRC | 03:06 | |
*** Adri2000 has quit IRC | 03:06 | |
*** s1cz has quit IRC | 03:06 | |
*** n1md4 has quit IRC | 03:06 | |
*** johan_ has quit IRC | 03:06 | |
*** Vek has quit IRC | 03:06 | |
*** jiboumans has quit IRC | 03:06 | |
*** ironcamel has quit IRC | 03:06 | |
*** dubs has quit IRC | 03:06 | |
*** annegentle has quit IRC | 03:06 | |
*** PiotrSikora has quit IRC | 03:06 | |
*** ramd has quit IRC | 03:06 | |
*** kyzh has quit IRC | 03:06 | |
*** Xenith has quit IRC | 03:06 | |
*** alekibango has quit IRC | 03:06 | |
*** odyi has quit IRC | 03:06 | |
*** jeremyb has quit IRC | 03:06 | |
*** gcc has quit IRC | 03:06 | |
*** justinsb has quit IRC | 03:06 | |
*** arun has quit IRC | 03:06 | |
*** doude has quit IRC | 03:06 | |
*** santhosh has quit IRC | 03:06 | |
*** paltman has quit IRC | 03:06 | |
*** aixenv has quit IRC | 03:06 | |
*** Glace has quit IRC | 03:06 | |
*** herki has quit IRC | 03:06 | |
*** troytoman-away has quit IRC | 03:06 | |
*** morfeas has quit IRC | 03:06 | |
*** ioso has quit IRC | 03:06 | |
*** londo_ has quit IRC | 03:06 | |
*** Beens has quit IRC | 03:06 | |
*** _cerberus_ has quit IRC | 03:06 | |
*** jbarratt_ has quit IRC | 03:06 | |
*** mtaylor has quit IRC | 03:06 | |
*** asksol has quit IRC | 03:06 | |
*** yamahata has quit IRC | 03:06 | |
*** yosh has quit IRC | 03:06 | |
*** comstud has quit IRC | 03:06 | |
*** cclien has quit IRC | 03:06 | |
*** lool has quit IRC | 03:06 | |
*** ironcamel2 has quit IRC | 03:06 | |
*** chmouel has quit IRC | 03:06 | |
*** redbo has quit IRC | 03:06 | |
*** klumpie has quit IRC | 03:06 | |
*** sunech has quit IRC | 03:06 | |
*** winston-d has quit IRC | 03:06 | |
*** soosfarm has quit IRC | 03:06 | |
*** lionel has quit IRC | 03:06 | |
*** dsockwell has quit IRC | 03:06 | |
*** ke4qqq has quit IRC | 03:06 | |
*** cdbs has quit IRC | 03:06 | |
*** antonym has quit IRC | 03:06 | |
*** jlmjlm has quit IRC | 03:06 | |
*** btorch has quit IRC | 03:06 | |
*** flashn_ has quit IRC | 03:06 | |
*** [ack] has quit IRC | 03:06 | |
*** ctennis has quit IRC | 03:06 | |
*** arun_ has quit IRC | 03:06 | |
*** vernhart has quit IRC | 03:06 | |
*** jaypipes has quit IRC | 03:06 | |
*** zul has quit IRC | 03:06 | |
*** shawn has quit IRC | 03:06 | |
*** pothos has quit IRC | 03:06 | |
*** soren has quit IRC | 03:06 | |
*** smoser has quit IRC | 03:06 | |
*** Ep5iloN_ has quit IRC | 03:06 | |
*** hadrian has quit IRC | 03:06 | |
*** dabo has quit IRC | 03:06 | |
*** freeflying has quit IRC | 03:06 | |
*** localhost has quit IRC | 03:06 | |
*** iRTermite has quit IRC | 03:06 | |
*** kim0 has quit IRC | 03:06 | |
*** purpaboo has quit IRC | 03:06 | |
*** ianweller has quit IRC | 03:06 | |
*** patri0t has quit IRC | 03:06 | |
*** devcamcar has quit IRC | 03:06 | |
*** konetzed has quit IRC | 03:06 | |
*** pquerna has quit IRC | 03:06 | |
*** dovetaildan has quit IRC | 03:06 | |
*** m_3 has quit IRC | 03:06 | |
*** cloud0 has quit IRC | 03:06 | |
*** vishy has quit IRC | 03:06 | |
*** tr3buchet has quit IRC | 03:06 | |
*** openstackjenkins has quit IRC | 03:06 | |
*** filler has quit IRC | 03:06 | |
*** h1nch has quit IRC | 03:06 | |
*** lstoll has quit IRC | 03:06 | |
*** termie has quit IRC | 03:06 | |
*** czajkowski has quit IRC | 03:06 | |
*** JordanRinke has quit IRC | 03:06 | |
*** KnightHacker has quit IRC | 03:06 | |
*** f4m8_ has quit IRC | 03:06 | |
*** keekz has quit IRC | 03:06 | |
*** RJD22 has quit IRC | 03:06 | |
*** mdomsch has joined #openstack | 03:14 | |
*** sunech has joined #openstack | 03:14 | |
*** burris has joined #openstack | 03:14 | |
*** nid0 has joined #openstack | 03:14 | |
*** markwash has joined #openstack | 03:14 | |
*** fysa has joined #openstack | 03:14 | |
*** duffman_ has joined #openstack | 03:14 | |
*** Adri2000 has joined #openstack | 03:14 | |
*** s1cz has joined #openstack | 03:14 | |
*** n1md4 has joined #openstack | 03:14 | |
*** johan_ has joined #openstack | 03:14 | |
*** Vek has joined #openstack | 03:14 | |
*** jiboumans has joined #openstack | 03:14 | |
*** ironcamel has joined #openstack | 03:14 | |
*** dubs has joined #openstack | 03:14 | |
*** annegentle has joined #openstack | 03:14 | |
*** PiotrSikora has joined #openstack | 03:14 | |
*** yosh has joined #openstack | 03:18 | |
*** ianweller has joined #openstack | 03:18 | |
*** _morfeas has joined #openstack | 03:18 | |
*** ctennis has joined #openstack | 03:18 | |
*** arun_ has joined #openstack | 03:18 | |
*** vernhart has joined #openstack | 03:18 | |
*** jaypipes has joined #openstack | 03:18 | |
*** zul has joined #openstack | 03:18 | |
*** shawn has joined #openstack | 03:18 | |
*** pothos has joined #openstack | 03:18 | |
*** soren has joined #openstack | 03:18 | |
*** smoser has joined #openstack | 03:18 | |
*** niven.freenode.net sets mode: +v soren | 03:18 | |
*** santhosh has joined #openstack | 03:18 | |
*** jbarratt has joined #openstack | 03:18 | |
*** londo_ has joined #openstack | 03:18 | |
*** ramd has joined #openstack | 03:18 | |
*** kyzh has joined #openstack | 03:18 | |
*** Xenith has joined #openstack | 03:18 | |
*** alekibango has joined #openstack | 03:18 | |
*** odyi has joined #openstack | 03:18 | |
*** jeremyb has joined #openstack | 03:18 | |
*** gcc has joined #openstack | 03:18 | |
*** justinsb has joined #openstack | 03:18 | |
*** arun has joined #openstack | 03:18 | |
*** chmouel has joined #openstack | 03:19 | |
*** Glace has joined #openstack | 03:19 | |
*** Ep5iloN_ has joined #openstack | 03:19 | |
*** hadrian has joined #openstack | 03:19 | |
*** dabo has joined #openstack | 03:19 | |
*** freeflying has joined #openstack | 03:19 | |
*** localhost has joined #openstack | 03:19 | |
*** purpaboo has joined #openstack | 03:19 | |
*** iRTermite has joined #openstack | 03:19 | |
*** kim0 has joined #openstack | 03:19 | |
*** pquerna has joined #openstack | 03:19 | |
*** patri0t has joined #openstack | 03:19 | |
*** devcamcar has joined #openstack | 03:19 | |
*** konetzed has joined #openstack | 03:19 | |
*** ianweller is now known as Guest31864 | 03:19 | |
*** cclien_ has joined #openstack | 03:19 | |
*** asksol_ has joined #openstack | 03:19 | |
*** winston-d has joined #openstack | 03:19 | |
*** soosfarm has joined #openstack | 03:19 | |
*** lionel has joined #openstack | 03:19 | |
*** dsockwell has joined #openstack | 03:19 | |
*** ke4qqq has joined #openstack | 03:19 | |
*** cdbs has joined #openstack | 03:19 | |
*** antonym has joined #openstack | 03:19 | |
*** jlmjlm has joined #openstack | 03:19 | |
*** btorch has joined #openstack | 03:19 | |
*** flashn_ has joined #openstack | 03:19 | |
*** [ack] has joined #openstack | 03:19 | |
*** niven.freenode.net sets mode: +v antonym | 03:19 | |
*** redbo has joined #openstack | 03:19 | |
*** ironcame12 has joined #openstack | 03:19 | |
*** sebastianstadil has joined #openstack | 03:19 | |
*** adiantum has joined #openstack | 03:19 | |
*** Daviey has joined #openstack | 03:19 | |
*** stewart has joined #openstack | 03:19 | |
*** jfluhmann has joined #openstack | 03:19 | |
*** eday has joined #openstack | 03:19 | |
*** laurensell has joined #openstack | 03:19 | |
*** letterj has joined #openstack | 03:19 | |
*** niven.freenode.net sets mode: +vv eday letterj | 03:19 | |
*** Guest31864 is now known as ianweller | 03:19 | |
*** ianweller has joined #openstack | 03:19 | |
*** ewindisch has joined #openstack | 03:20 | |
*** kapil has joined #openstack | 03:20 | |
*** koolhead11|afk has joined #openstack | 03:20 | |
*** viirya has joined #openstack | 03:20 | |
*** westmaas_ has joined #openstack | 03:20 | |
*** dweimer has joined #openstack | 03:20 | |
*** mark has joined #openstack | 03:20 | |
*** HugoKuo_ has joined #openstack | 03:20 | |
*** zykes- has joined #openstack | 03:20 | |
*** nijaba has joined #openstack | 03:20 | |
*** karmabot has joined #openstack | 03:20 | |
*** ksteward has joined #openstack | 03:20 | |
*** kpepple_ has joined #openstack | 03:20 | |
*** sleepsonthefloor has joined #openstack | 03:20 | |
*** colinnich has joined #openstack | 03:20 | |
*** huleboer has joined #openstack | 03:20 | |
*** jamiec has joined #openstack | 03:20 | |
*** kang__ has joined #openstack | 03:20 | |
*** arreyder has joined #openstack | 03:20 | |
*** widodh has joined #openstack | 03:20 | |
*** doude has joined #openstack | 03:20 | |
*** miclorb has joined #openstack | 03:20 | |
*** yamahata has joined #openstack | 03:20 | |
*** klumpie_ has joined #openstack | 03:20 | |
*** Beens_ has joined #openstack | 03:20 | |
*** mtaylor_ has joined #openstack | 03:20 | |
*** enigma has joined #openstack | 03:20 | |
*** aixenv2 has joined #openstack | 03:20 | |
*** taihen has joined #openstack | 03:20 | |
*** MarkAtwood has joined #openstack | 03:20 | |
*** xtoddx has joined #openstack | 03:20 | |
*** deepy has joined #openstack | 03:20 | |
*** jheiss has joined #openstack | 03:20 | |
*** gdusbabek has joined #openstack | 03:20 | |
*** _cerberu` has joined #openstack | 03:20 | |
*** herki_ has joined #openstack | 03:20 | |
*** dovetaildan has joined #openstack | 03:20 | |
*** KnightHacker has joined #openstack | 03:20 | |
*** m_3 has joined #openstack | 03:20 | |
*** cloud0 has joined #openstack | 03:20 | |
*** vishy has joined #openstack | 03:20 | |
*** tr3buchet has joined #openstack | 03:20 | |
*** openstackjenkins has joined #openstack | 03:20 | |
*** filler has joined #openstack | 03:20 | |
*** czajkowski has joined #openstack | 03:20 | |
*** h1nch has joined #openstack | 03:20 | |
*** lstoll has joined #openstack | 03:20 | |
*** termie has joined #openstack | 03:20 | |
*** JordanRinke has joined #openstack | 03:20 | |
*** f4m8_ has joined #openstack | 03:20 | |
*** keekz has joined #openstack | 03:20 | |
*** RJD22 has joined #openstack | 03:20 | |
aixenv2 | http://pastebin.com/vEnyyYfy <= anyone care to give me a clue wtf is going on? | 03:21 |
*** paltman has joined #openstack | 03:25 | |
*** ioso has joined #openstack | 03:26 | |
aixenv2 | anyone there? | 03:26 |
*** enigma has left #openstack | 03:29 | |
HugoKuo_ | how did you run the script? | 03:34 |
*** mray has joined #openstack | 03:34 | |
aixenv2 | ./nova-CC-install-v1.1.sh | 03:34 |
HugoKuo_ | the latest version ? | 03:34 |
aixenv2 | I believe so - I went to the GitHub repo and grabbed the one from Feb 17, which appears to be the latest, yes | 03:34 |
aixenv2 | https://github.com/dubsquared/OpenStack-NOVA-Installer-Script/blob/master/nova-CC-install-v1.1.sh | 03:35 |
HugoKuo_ | remove /var/log/nova folder | 03:35 |
HugoKuo_ | remove /root/creds | 03:36 |
aixenv2 | it's not there | 03:36 |
aixenv2 | it's a simple shell script too, it just runs commands; it's supposed to create the folder, and if that fails it just errors to STDERR and goes on to the next command | 03:36 |
aixenv2 | there's no if-foo-else-bar type of logic in it | 03:37 |
*** miclorb has quit IRC | 03:39 | |
aixenv2 | it's just a running list of commands and echoed comments | 03:39 |
HugoKuo_ | so your problem is ? | 03:39 |
aixenv2 | the /root/creds/* files are not being created | 03:40 |
aixenv2 | nor is the /root/creds/novarc | 03:40 |
HugoKuo_ | is there any error if you try to create manually? | 03:40 |
aixenv2 | /usr/bin/python /usr/bin/nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip | 03:41 |
aixenv2 | no error, it just doesn't do it | 03:41 |
aixenv2 | drops to a prompt | 03:41 |
aixenv2 | I'm going to re-run: I'm purging everything related to nova, going to find all the nova dirs and delete them, then restart and retry | 03:43 |
*** hadrian has quit IRC | 03:51 | |
aixenv2 | ok, I have it all reinstalled.. about to run that script | 03:53 |
aixenv2 | should my instance IP be from a network that is configured on my cloud/node controller server? I'm thinking yes, and that is currently not the case | 03:57 |
aixenv2 | another question - does the "desired network + CIDR for project" have to be contained within the "Controller network range"? | 03:57 |
aixenv2 | btw, the fresh install didn't work via that script either | 03:57 |
*** mdomsch has quit IRC | 03:58 | |
*** arun has quit IRC | 04:02 | |
*** odyi has quit IRC | 04:02 | |
*** arun has joined #openstack | 04:03 | |
*** bluetux has joined #openstack | 04:04 | |
*** joearnold has joined #openstack | 04:08 | |
*** miclorb has joined #openstack | 04:11 | |
*** ewindisch has quit IRC | 04:15 | |
aixenv2 | I must be missing something with the networking here.. I can spin up an instance but I can't ssh to it | 04:16 |
aixenv2 | btw that script doesn't work; I decided to do it the manual way, and I got an instance spun up that way | 04:16 |
aixenv2 | anyone using Ubuntu 10.10 care to show me what their /etc/network/interfaces looks like (you can x.x.x out the company-related public IPs)? | 04:20 |
*** adiantum has quit IRC | 04:26 | |
aixenv2 | only br100 (which has our public IP); virbr0 has a 192.168.122.1 IP, no clue where that came from, it isn't specified in /etc/network/interfaces; all the other interfaces show up but aren't showing any IPs | 04:26 |
*** ewindisch has joined #openstack | 04:26 | |
aixenv2 | http://pastebin.com/rJu1wtg3 <== /etc/network/interfaces | 04:28 |
aixenv2 | http://pastebin.com/XccBbJcG <== /etc/nova/nova.conf ; what am i missing here? | 04:29 |
*** adiantum has joined #openstack | 04:30 | |
*** fysa_ has joined #openstack | 04:31 | |
*** fysa has quit IRC | 04:33 | |
*** fysa_ is now known as fysa | 04:33 | |
*** joearnold has quit IRC | 04:34 | |
*** aixenv has joined #openstack | 04:35 | |
*** f4m8_ is now known as f4m8 | 04:44 | |
aixenv | argh, I suppose I'm going to resort to the mailing list | 04:46 |
*** woleium has quit IRC | 04:49 | |
*** _cerberu` is now known as _cerberus_ | 04:55 | |
*** gregp76 has joined #openstack | 05:11 | |
*** kashyap has joined #openstack | 05:12 | |
alekibango | aixenv: using vlanmanager you should not define that bridge | 05:12 |
alekibango | ... you know how to use euca-authorize? | 05:13 |
alekibango | aixenv: nova will define the bridge for you | 05:13 |
*** guigui1 has joined #openstack | 05:13 | |
aixenv | I did the authorize lines | 05:21 |
aixenv | ICMP and SSH should be allowed | 05:21 |
aixenv | I can't seem to figure out what I'm doing wrong networking-wise | 05:21 |
aixenv | especially when I'm trying to configure this all on 1 box just to test a proof of concept | 05:22 |
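For reference, the "authorize lines" usually meant opening SSH and ICMP in the default security group with euca2ools; the group name and source CIDR below are the common defaults, assumed rather than quoted from aixenv's setup:

```sh
# Allow SSH (tcp/22) from anywhere into the 'default' security group:
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
# Allow all ICMP (so ping works):
euca-authorize -P icmp -t -1:-1 default
# Confirm what the group currently permits:
euca-describe-groups
```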
alekibango | remove the bridge configuration from network/interfaces and reboot :) | 05:22 |
alekibango | I made this mistake too... | 05:22 |
*** bluetux has quit IRC | 05:23 | |
aixenv | so just have my eth0 configured? | 05:23 |
aixenv | like a normal server? | 05:23 |
alekibango | yes | 05:23 |
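What alekibango is suggesting, roughly: configure only the physical NIC and let nova-network create br100 itself. A minimal /etc/network/interfaces sketch under that assumption (addresses are placeholders, not aixenv's real ones):

```sh
# /etc/network/interfaces -- single-NIC host; nova-network creates the bridge
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 203.0.113.10     # placeholder public IP
    netmask 255.255.255.0
    gateway 203.0.113.1
```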
aixenv | ok testing | 05:23 |
aixenv | ty for the assistance | 05:23 |
aixenv | im about to pull my hair out | 05:23 |
alekibango | np.. most are sleeping now... | 05:23 |
alekibango | aixenv: i know how you feel | 05:24 |
alekibango | americans will be up in 8-9 hours | 05:24 |
aixenv | bah, stinky Americans :P | 05:24 |
* aixenv is American.. just works 24/7 | 05:24 | |
aixenv | i should be able to test this in a second | 05:25 |
alekibango | aixenv: many of them do... still the economy crash is coming | 05:25 |
aixenv | alekibango: that's ok the world ends in december :) | 05:25 |
alekibango | you know you can thank the FED for this | 05:25 |
alekibango | aixenv: not yet, that is lies :) | 05:25 |
aixenv | the mayans would differ with you haha | 05:26 |
alekibango | but this year, for many the world as they know it will be gone | 05:26 |
aixenv | alekibango: you're about as depressing as my dentist lol | 05:26 |
alekibango | aixenv: hands on my clock will not end on whole hour... they just restart | 05:26 |
alekibango | or continue | 05:27 |
HugoKuo_ | I can show you all my configuration | 05:27 |
alekibango | HugoKuo_: great | 05:27 |
HugoKuo_ | what's your network mode ? | 05:27 |
alekibango | he has vlan | 05:27 |
HugoKuo_ | I'm using FlatDHCP | 05:27 |
HugoKuo_ | :< | 05:27 |
alekibango | HugoKuo_: share it still | 05:27 |
HugoKuo_ | ok | 05:27 |
aixenv | static ip on eth0 , vlans yes but untagged | 05:27 |
alekibango | dont forget to add /etc/network/interfaces | 05:27 |
*** benbenhappy has joined #openstack | 05:28 | |
HugoKuo_ | I have two OpenStack environments; I'll show you the one that is combined with our corporate network | 05:28 |
HugoKuo_ | it's a single-machine box | 05:29 |
aixenv | networking-wise - I thought I was doing a very simple proof of concept | 05:29 |
aixenv | HugoKuo_: that's what mine is | 05:29 |
HugoKuo_ | wait a second | 05:29 |
aixenv | ok | 05:29 |
alekibango | aixenv: how many hosts do you have? | 05:29 |
aixenv | alekibango: I'm just trying 1, dude, lol | 05:30 |
alekibango | I was not asking how many dudes :) | 05:32 |
*** benbenhappy has quit IRC | 05:33 | |
aixenv | here's the setup | 05:33 |
alekibango | aixenv: you should get 5 for proof of concept | 05:33 |
alekibango | 20 for test | 05:33 |
alekibango | 40+ for prod | 05:33 |
aixenv | eth0 has my public IP, nothing else is configured (eth1, etc.), and I'm trying to run the cloud controller, node controller, etc. etc. on it | 05:33 |
aixenv | alekibango: well yes, but first I need to get 1 spun up | 05:33 |
alekibango | ic | 05:33 |
alekibango | just remember those numbers | 05:33 |
alekibango | it might help | 05:34 |
aixenv | ok | 05:34 |
aixenv | the server isn't coming back up; putting it on a KVM | 05:34 |
*** miclorb has quit IRC | 05:35 | |
HugoKuo_ | http://pastebin.com/Pw86A7sZ | 05:35 |
HugoKuo_ | here you go | 05:35 |
alekibango | best is using IPMI | 05:35 |
alekibango | HugoKuo_: tried to use sheepdog? | 05:35 |
HugoKuo_ | not yet | 05:36 |
*** koolhead11|afk is now known as koolhead11 | 05:36 | |
HugoKuo_ | I'm working on a multi-server setup | 05:36 |
HugoKuo_ | I'm trying to separate nova-network onto its own box | 05:36 |
aixenv | yeah, that's next; I'm sure that'll be a big headache too, lol | 05:36 |
HugoKuo_ | I've been stuck on this step for several days | 05:37 |
HugoKuo_ | don't worry | 05:37 |
*** gregp76 has quit IRC | 05:37 | |
HugoKuo_ | if you just want to add more compute nodes | 05:37 |
HugoKuo_ | it's easy | 05:37 |
HugoKuo_ | in another environment, I've got 1 cloud controller and 2 more compute nodes | 05:38 |
HugoKuo_ | it works fine | 05:38 |
HugoKuo_ | but when I separate out nova-network, the nova-network host can never route to the external network | 05:40 |
HugoKuo_ | I don't know where the problem is, so I'm going to use the Puppet deployment tool instead of doing it manually | 05:40 |
HugoKuo_ | btw, when you go multi-node, there are some things that need to be rewritten in the database | 05:41 |
HugoKuo_ | such as the instances' gateways | 05:42 |
zigo-_- | Can someone try this on an Ubuntu box, and tell me the output for me? dpkg-vendor --derives-from ubuntu | 05:46 |
*** benbenhappy has joined #openstack | 05:47 | |
HugoKuo_ | trying | 05:47 |
*** ewindisch has quit IRC | 05:48 | |
HugoKuo_ | the answer is nothing | 05:49 |
zigo-_- | :( | 05:49 |
HugoKuo_ | wrong params @@ | 05:50 |
*** guigui1 has quit IRC | 05:50 | |
zigo-_- | Oh, no, it was right! | 05:50 |
zigo-_- | It only is a test... | 05:50 |
zigo-_- | Hang on. | 05:50 |
HugoKuo_ | root@Jack:~# dpkg-vendor --drivers-from ubuntu | 05:51 |
HugoKuo_ | dpkg-vendor: unknown option `--drivers-from' | 05:51 |
*** miclorb has joined #openstack | 05:51 | |
zigo-_- | if dpkg-vendor --derives-from ubuntu ; then echo "I am Ubuntu" ; else echo "I am not" ; fi | 05:52 |
zigo-_- | What does this? | 05:52 |
zigo-_- | It is derives-from, not drivers-from btw... | 05:52 |
HugoKuo_ | oh | 05:52 |
zigo-_- | So? Does it say "I am Ubuntu" ? :) | 05:53 |
HugoKuo_ | wait @! | 05:53 |
HugoKuo_ | yes | 05:54 |
zigo-_- | Cool! | 05:54 |
HugoKuo_ | it echoes "I am Ubuntu" | 05:54 |
*** kapil has quit IRC | 05:54 | |
zigo-_- | So, that's the workaround for the debian/*.upstart thing then. | 05:54 |
* zigo-_- is writing the patch. | 05:55 | |
*** adjohn has joined #openstack | 05:55 | |
HugoKuo_ | cool | 05:55 |
HugoKuo_ | it's fun XD | 05:56 |
*** ewindisch has joined #openstack | 05:56 | |
*** dmd17 has joined #openstack | 05:57 | |
*** nerens has joined #openstack | 05:59 | |
*** aixenv2 has quit IRC | 06:00 | |
*** aixenv2 has joined #openstack | 06:01 | |
alekibango | zigo-_-: use just this: dpkg-vendor some && echo "1" || echo "2" | 06:02 |
alekibango | zigo-_-: you are debian dev? | 06:03 |
zigo-_- | Yes I am. | 06:03 |
koolhead11 | heh | 06:03 |
zigo-_- | Trying to package Openstack in Debian. | 06:03 |
alekibango | you are trying to package nova for debian? | 06:03 |
zigo-_- | :P | 06:03 |
alekibango | ok i have small patch for you then | 06:03 |
*** guigui1 has joined #openstack | 06:03 | |
zigo-_- | Thanks. | 06:03 |
zigo-_- | Please send to zigo@debian.org | 06:03 |
alekibango | its short... :) | 06:03 |
alekibango | and i hate mails | 06:03 |
zigo-_- | Oh ok! | 06:04 |
zigo-_- | if dpkg-vendor --derives-from ubuntu ; then \ | 06:04 |
zigo-_- | for i in *.upstart.in ; do \ | 06:04 |
zigo-_- | MYPKG=`echo $i | cut -d. -f1` ; \ | 06:04 |
zigo-_- | cp $MYPKG.upstart.in $MYPKG.upstart ; \ | 06:04 |
zigo-_- | done | 06:04 |
zigo-_- | fi | 06:04 |
zigo-_- | That's what is needed to work around the upstart stuff ... | 06:04 |
alekibango | zigo-_-: do you have some git? | 06:04 |
zigo-_- | And of course, rename the upstart files as .in | 06:04 |
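Put together, zigo's snippet amounts to the shell fragment below; the debian/ glob is an assumption about where the renamed *.upstart.in files live, and the parameter expansion is just a tidier equivalent of the `cut` call above:

```sh
# Only generate the .upstart files when building on Ubuntu or a derivative:
if dpkg-vendor --derives-from ubuntu ; then
    for i in debian/*.upstart.in ; do
        cp "$i" "${i%.in}"    # e.g. nova-compute.upstart.in -> nova-compute.upstart
    done
fi
```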
zigo-_- | What do you mean? | 06:04 |
zigo-_- | I use and have git repositories yes. | 06:05 |
alekibango | some source code repo | 06:05 |
alekibango | for tis packaging sources | 06:05 |
alekibango | those* | 06:05 |
zigo-_- | http://git.gplhost.com/gitweb/ | 06:05 |
zigo-_- | Well, I thought that OS was using bzr? | 06:05 |
alekibango | OS is :) | 06:05 |
alekibango | but i am independent and i love git more | 06:06 |
zigo-_- | I kind of dislike bzr already, too. | 06:06 |
zigo-_- | :) | 06:06 |
zigo-_- | I hadn't tried it until a few days ago. | 06:06 |
alekibango | i just cannot get used to it | 06:06 |
zigo-_- | By the way, what's the way to send patches once they're committed to my local bzr repo? | 06:07 |
alekibango | you push them on your account on lp | 06:07 |
zigo-_- | Ah... | 06:07 |
alekibango | and do a merge request | 06:07 |
zigo-_- | What's the command for that? | 06:08 |
alekibango | (aka pull request) | 06:08 |
alekibango | zigo-_-: there is a nice wiki, wiki.openstack.org | 06:08 |
alekibango | where it's described in detail | 06:08 |
alekibango | zigo-_-: so where are your nova packaging sources? | 06:09 |
alekibango | will you do swift too? | 06:09 |
zigo-_- | I'm building a private repo for the moment. | 06:09 |
alekibango | ok i will wait | 06:09 |
zigo-_- | I need nova/swift to work with Squeeze. | 06:09 |
alekibango | zigo-_-: it can be done | 06:10 |
zigo-_- | When it does, then I will work on having it work in SID and upload. | 06:10 |
alekibango | i did that few times already | 06:10 |
zigo-_- | Cool. | 06:10 |
alekibango | but i am not dd | 06:10 |
zigo-_- | I sent a mail to the openstack list, but it seems it didn't get through ... :( | 06:11 |
alekibango | zigo-_-: i have 60k mails to read | 06:12 |
alekibango | :( | 06:12 |
* zigo-_- goes to eat lunch | 06:12 | |
alekibango | zigo-_-: will you also cover sheepdog? | 06:12 |
zigo-_- | What's that? | 06:13 |
alekibango | dont worry, go eat... i will give you all needed info to have success with debian :) | 06:13 |
zigo-_- | Cool ! | 06:13 |
*** Ryan_Lane has joined #openstack | 06:16 | |
alekibango | heh, that was my record: reading 6000 mails in 2 minutes | 06:19 |
HugoKuo_ | 6000 mails ....................... | 06:20 |
alekibango | I am glad sheepdog continues to release code... but I am worried about the developers, who are Japanese... | 06:22 |
alekibango | the situation in Fukushima is very bad.. Chernobyl on steroids... | 06:22 |
aixenv | such a terrible state of affairs | 06:23 |
HugoKuo_ | it did not succeed? @@ | 06:23 |
*** adiantum has quit IRC | 06:24 | |
*** kashyap has quit IRC | 06:25 | |
*** zenmatt has quit IRC | 06:25 | |
alekibango | HugoKuo_: no, they have just lied in the media from the start about how everything is (or soon will be) under control... | 06:26 |
alekibango | it's bigger than Chernobyl now... and will keep getting bigger... for weeks or months | 06:26 |
HugoKuo_ | hmm... | 06:26 |
alekibango | HugoKuo_: they lied to people about every major nuclear accident | 06:27 |
*** adiantum has joined #openstack | 06:27 | |
alekibango | (governments, media) | 06:27 |
HugoKuo_ | haha | 06:27 |
alekibango | this is no joke... many people died thanks to this... | 06:27 |
alekibango | even my father almost died after chernobyl | 06:27 |
alekibango | and we are 900 km away | 06:28 |
HugoKuo_ | ............ | 06:28 |
HugoKuo_ | I'm sorry about that | 06:28 |
alekibango | almost... he had his thyroid removed | 06:28 |
HugoKuo_ | I thought the nuclear accident was almost under control | 06:29 |
alekibango | they told us to go to the parade on May 1st... | 06:29 |
alekibango | HugoKuo_: those are lies | 06:29 |
HugoKuo_ | at least that's what the international news I see in TW says | 06:29 |
HugoKuo_ | wtf.......... | 06:29 |
HugoKuo_ | pray for japan | 06:29 |
alekibango | HugoKuo_: really do | 06:29 |
alekibango | if the worst-case scenario plays out, Japan will be empty | 06:30 |
alekibango | everyone will run away or die | 06:30 |
alekibango | if they fail with the rescue efforts | 06:31 |
HugoKuo_ | my ex-girlfriend got married and moved to Japan | 06:31 |
HugoKuo_ | I'm worried about her now | 06:31 |
alekibango | I mean if those reactors all melt, including the spent fuel pools nearby | 06:31 |
alekibango | let's just hope it will not happen... | 06:31 |
alekibango | the probability is not zero | 06:32 |
HugoKuo_ | that's horrible | 06:32 |
HugoKuo_ | I don't like hearing this kind of news, even when it's the truth | 06:32 |
alekibango | best scenario: <30k dead soon... more 50k-200k in years ahead thanks to cancer | 06:32 |
HugoKuo_ | .............. | 06:33 |
alekibango | i might be wrong | 06:33 |
HugoKuo_ | ..................best scenario | 06:33 |
alekibango | but thats what i think after watching all news carefully... | 06:33 |
HugoKuo_ | News seems like the tool of politician | 06:34 |
alekibango | .. problem is tepco lies still - no truth from them or government... so we do not really know for sure | 06:34 |
alekibango | HugoKuo_: they SURE are | 06:34 |
alekibango | and believe me, they have all the info... but they do not want us to see it | 06:35 |
HugoKuo_ | of course they don't | 06:35 |
alekibango | they love plausible deniability too (we didnt know its that bad!) | 06:35 |
HugoKuo_ | it's crazy damage to their interests | 06:36 |
alekibango | HugoKuo_: and btw, I do not think we can rule out possible HAARP action in this -- if you look at this http://137.229.36.30/cgi-bin/magnetometer/gak-mag.cgi with plot width = 1 month.... | 06:37 |
alekibango | there was a lot of activity on 03/11 | 06:37 |
HugoKuo_ | linking | 06:39 |
alekibango | I've seen more useful info on the joke Twitter account than on the real one... http://twitter.com/tep_co (joke page) | 06:39 |
*** miclorb has quit IRC | 06:40 | |
HugoKuo_ | damn tepco | 06:41 |
alekibango | this joke page has better signal/noise ratio | 06:41 |
alekibango | :) | 06:41 |
HugoKuo_ | sorry about my poor English, I cannot properly express how angry I am | 06:42 |
*** mgoldmann has joined #openstack | 06:43 | |
alekibango | few links for you http://www.washingtonpost.com/world/radiation-levels-reach-new-highs-as-conditions-worsen-for-workers/2011/03/27/AFsMLFiB_print.html http://www.infowars.com/meltdown-plutonium-found-in-soil-at-fukushima-as-cover-up-continues/ | 06:43 |
alekibango | btw TEPCO has already admitted a few times that they lied the whole time | 06:43 |
alekibango | and they have now been ordered (!) by the government not to publish BS anymore... which means cover-up 2.0 will continue... | 06:44 |
HugoKuo_ | I'll post them on my facebook | 06:46 |
alekibango | well, sorry for the OT.. please take care of yourself if you are near.. watch the maps of radioactive fallout | 06:46 |
*** kashyap has joined #openstack | 06:46 | |
alekibango | even in the EU we measured the fallout coming ... (still very low) | 06:46 |
HugoKuo_ | there's a high possibility that radioactive fallout will affect Taiwan | 06:47 |
alekibango | one last OT line: it's not the same as being irradiated in a hospital... that will not leave radioactive dust particles in your body until you die... | 06:47 |
alekibango | HugoKuo_: try getting some potassium iodide (ask your doc too)... don't go outside much... hygiene... | 06:48 |
alekibango | HugoKuo_: and follow maps of radiation | 06:49 |
HugoKuo_ | thanks for your advice . | 06:49 |
alekibango | like this one... (prediction, modelled) http://www.woweather.com/weather/news/fukushima?LANG=us&VAR=eurad2500&HH=0 | 06:49 |
aixenv | VlanManager or FlatDHCPManager and why? | 06:49 |
alekibango | aixenv: VlanManager is easier to start with... | 06:50 |
HugoKuo_ | you're making me nervous, I've got to have a cigar | 06:50 |
alekibango | FlatDHCP allows running unchanged Windows | 06:50 |
alekibango | that might kill you sooner.. as the wind is not going toward Taiwan ... | 06:50 |
alekibango | and please remember... it's not comparable to radiation in a hospital... | 06:51 |
alekibango | whoever compares them is lying to you | 06:51 |
*** CloudChris has joined #openstack | 07:01 | |
aixenv | is something wrong with my instances? both of the IPs in 'euca-describe-instances' are the same | 07:02 |
aixenv | I thought the first IP was supposed to be different | 07:02 |
alekibango | heh, it should be.. what are the numbers? | 07:03 |
aixenv | 192.168.0.5 | 07:03 |
aixenv | INSTANCE i-00000003 ami-5c6cfc0e 192.168.0.5 192.168.0.5 running mykey (cloud01, openstack01) 0 m1.tiny 2011-03-29T07:01:38Z nova | 07:03 |
aixenv | which of course times out | 07:04 |
* aixenv sighs | 07:04 | |
aixenv | ssh: connect to host 192.168.0.5 port 22: No route to host | 07:04 |
alekibango | strange | 07:05 |
*** MarkAtwood has quit IRC | 07:07 | |
aixenv | I'd be happy to do a gtm and show you; want me to set it up? | 07:07 |
alekibango | gtm=? | 07:07 |
*** flopflip has quit IRC | 07:07 | |
aixenv | it's like WebEx, a remote session so you could see my setup and look at my server | 07:07 |
alekibango | aixenv: I am not sure I am the best one for this :) | 07:08 |
aixenv | well, yours works, mine doesn't, so.. you're better than me :) | 07:08 |
alekibango | aixenv: you should learn to love NX | 07:08 |
alekibango | (nomachine.com) | 07:08 |
alekibango | or freenx | 07:08 |
aixenv | same kinda thing, we use this for my company | 07:09 |
*** flopflip has joined #openstack | 07:09 | |
*** guigui1 has quit IRC | 07:11 | |
zigo-_- | back | 07:12 |
*** lionel has quit IRC | 07:12 | |
zigo-_- | So, what is sheepdog? | 07:12 |
*** lionel has joined #openstack | 07:12 | |
soren | aixenv: Looks like you've used the same ip range for fixed as well as floating ips. | 07:12 |
winston-d | sheepdog? distributed block storage for KVM? | 07:13 |
alekibango | zigo-_-: see images http://www.osrg.net/sheepdog/ | 07:14 |
aixenv | soren: should my cloud controller network and my project ranges be dif? | 07:14 |
alekibango | its cluster storage | 07:14 |
aixenv | i thought the project range had to be within the subnet of the cloud controller network | 07:14 |
alekibango | aixenv: ha, that might be the case... :) | 07:14 |
soren | aixenv: There's no such thing as a cloud controller, and I don't know what a "project range" is either. | 07:15 |
soren | Where are people getting these terms from? | 07:15 |
alekibango | zigo-_-: it's a way to make reliable and fast storage for nova clusters (not for whole clouds, as it only scales up to hundreds of servers) | 07:15 |
winston-d | alekibango : r u using sheepdog already? | 07:15 |
alekibango | trying to :) | 07:16 |
alekibango | i have now some problems with my setup, fixing it | 07:16 |
* winston-d is also interested in it | 07:16 | |
alekibango | winston-d: it seems to be very promising... and also maybe young a bit | 07:16 |
winston-d | sheepdog is developed by NTT, right? | 07:16 |
aixenv | my apologies, from this - https://github.com/dubsquared/OpenStack-NOVA-Installer-Script/ | 07:16 |
alekibango | winston-d: 2 developers in Japan, basically. Not sure what NTT is | 07:17 |
aixenv | the wiki for openstack said to use that; I found it broken, so I did the steps manually | 07:17 |
aixenv | soren: but the terminology stuck | 07:17 |
alekibango | zigo-_-: do you know fai? iam doing fai installs | 07:17 |
winston-d | alekibango : yes, it's developed by NTT. morita.kazutaka at lab.ntt.co.jp | 07:18 |
aixenv | let me restate: I thought my project_CIDR had to be within my 'fixed_range' (I used a project_cidr of 192.168.0.0/27, and a fixed_range of 192.168.0.0/24) | 07:19 |
alekibango | zigo-_-: you can get the Ubuntu Natty package of sheepdog... it required a fix in the start/stop script... but otherwise it works | 07:19 |
aixenv | I would advise having that link removed from the wiki; it seems to be a rather official document related to the OpenStack documentation | 07:20 |
winston-d | NTT is also contributing a Fault-Tolerance feature to KVM. | 07:20 |
aixenv | "read -p "Controller network range for ALL projects (normally x.x.x.x/12):" FIXED_RANGE" | 07:20 |
aixenv | so it appears the "controller network range" == the fixed_range | 07:20 |
soren | It would appear so. | 07:20 |
* soren wishes people wouldn't invent their own terminology. | 07:21 | |
alekibango | soren :) | 07:21 |
aixenv | I wish that wouldn't be tied to the official wiki :) | 07:21 |
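To map the installer's wording onto nova's own terms: the "controller network range" is just the fixed_range flag, and each project's network is carved out of it. A hedged sketch of the flags and the Cactus-era nova-manage call (the positional argument order should be verified against `nova-manage network create` on the installed version):

```sh
# nova.conf flags of the era:
#   --fixed_range=192.168.0.0/24    # pool all instance (fixed) IPs come from
#   --network_size=32               # addresses per project network

# Create one 32-address (/27) project network inside that range;
# arguments are: fixed_range  number_of_networks  network_size
nova-manage network create 192.168.0.0/24 1 32
```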
zigo-_- | alekibango: I know FAI yes. It's a great tool. | 07:21 |
alekibango | isnt nova.sh part of nova now? | 07:22 |
zigo-_- | Maybe, once OS packaging is done, we can make a "Debian pure blend" CD. | 07:22 |
alekibango | zigo-_-: I would like to work with you on this one | 07:22 |
zigo-_- | Something that would ask only an IP address and hostname, and do the rest automatically. | 07:22 |
*** santhosh has quit IRC | 07:22 | |
zigo-_- | Preseeding isn't hard. | 07:22 |
aixenv | what might be wrong, given that the controller's IP and the instance IP are the same? | 07:23 |
zigo-_- | Anyway, that's for later. | 07:23 |
alekibango | still, I would rather have a Debian pure blend FAI server with the nova config included | 07:23 |
alekibango | where you will just define which machines to install | 07:23 |
alekibango | and it will be up in 10 minutes | 07:23 |
*** adiantum has quit IRC | 07:23 | |
* zigo-_- needs to learn bzr more. | 07:23 | |
alekibango | would you like to help me with this one? | 07:23 |
*** nijaba has quit IRC | 07:23 | |
alekibango | I will release my FAI configs soon | 07:24 |
alekibango | so others might use them | 07:24 |
*** adiantum has joined #openstack | 07:25 | |
aixenv | maybe it's just associating a controller IP but not the actual instance IP | 07:25 |
alekibango | zigo-_-: look at the bzr-git package if you feel like it | 07:25 |
aixenv | 'euca-describe-addresses' = no output - I imagine this is wrong | 07:26 |
*** nijaba has joined #openstack | 07:27 | |
*** Nacx has joined #openstack | 07:28 | |
soren | zigo-_-: What exactly is the difficulty with bzr? | 07:28 |
alekibango | zigo-_-: do you use jabber? | 07:28 |
zigo-_- | I don't use jabber. | 07:29 |
zigo-_- | soren: Mainly, that I don't know it! :) | 07:29 |
zigo-_- | It's only going to take some time... | 07:29 |
alekibango | zigo-_-: http://wiki.openstack.org/LifeWithBzrAndLaunchpad | 07:29 |
zigo-_- | thanks | 07:30 |
alekibango | http://doc.bazaar.canonical.com/migration/en/survival/bzr-for-git-users.html | 07:30 |
soren | You commit things with "bzr commit". You pull stuff from places with "bzr pull". You push stuff to places with "bzr push". You merge stuff from places with "bzr merge"... | 07:30 |
zigo-_- | GREAT ! | 07:30 |
zigo-_- | That's exactly what I needed. | 07:30 |
zigo-_- | soren: That's things I know already, thank you. | 07:30 |
alekibango | zigo-_-: its not much more complicated when you are using launchpad | 07:31 |
zigo-_- | Things I am lost with are: how do I list branches? | 07:31 |
aixenv | soren: any ideas why I might be seeing this duplicate IP, or where to look? | 07:31 |
alekibango | zigo-_-: you are a git user.. :) in git, branching is very cheap :) here they usually use another directory for a branch | 07:32 |
soren | aixenv: Yes, you've very likely specified the same ip range as both your floating ip and fixed ip range. | 07:32 |
aixenv | where is the floating IP range configured? nova.conf? | 07:33 |
soren | They're added using "nova-manage floating create" | 07:33 |
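Floating (public) addresses live in their own pool, separate from the fixed range. The commands below follow the nova-manage usage of that era, so the exact arguments are an assumption to double-check against `nova-manage floating create` on the installed release; the range itself is a placeholder:

```sh
# List floating addresses already registered (empty here, per aixenv):
nova-manage floating list

# Register a small public range for this host to hand out as floating IPs;
# it must NOT overlap the fixed_range:
nova-manage floating create $(hostname) 203.0.113.32/28
```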
aixenv | 'nova-manage floating list' has nothing | 07:34 |
alekibango | zigo-_-: when packaging for squeeze, you should repackage qemu too, as sqeeeze version does not support sheepdog storage | 07:35 |
aixenv | and this URL I've been using as my reference doesn't mention this at all, lovely | 07:35 |
zigo-_- | alekibango: I'm mainly interested in having OS working with Xen. | 07:35 |
zigo-_- | We (at GPLHost) are Xen specialists. | 07:36 |
zigo-_- | We have hundreds of VPS customers using Xen. | 07:36 |
alekibango | zigo-_-: i think kvm is more lovely :) | 07:36 |
zykes- | can't Xen be made to talk to sheepdog? | 07:37 |
soren | aixenv: Ok, that's fine, then. | 07:37 |
alekibango | can't be, for now | 07:37 |
soren | aixenv: Floating ip's are not required. | 07:37 |
alekibango | but someone is working on it imho | 07:37 |
aixenv | soren: oh ok, phew | 07:37 |
zigo-_- | The thing is, we have already existing boxes running Xen. | 07:37 |
zykes- | alekibango: working on that ? | 07:37 |
alekibango | zigo-_-: ic | 07:37 |
zykes- | what, | 07:37 |
alekibango | zykes-: on xen + sheepdog | 07:37 |
zigo-_- | In the ideal world, I'd like to mix both cloud and VPS things. | 07:37 |
zykes- | ah | 07:37 |
zigo-_- | Our dom0 controler is written in Python. | 07:37 |
alekibango | cloud of vps = nova... cloud of storage = swift | 07:38 |
aixenv | soren: would it help for me to post my nova.conf and network/interfaces to pastebin? | 07:38 |
zigo-_- | It has monitoring, installation, etc. | 07:38 |
zigo-_- | And it's in Python too. | 07:38 |
alekibango | python = love story | 07:38 |
soren | aixenv: I don't know. What is your problem, exactly? | 07:38 |
zigo-_- | I'm sure some of it could be reused. | 07:38 |
zykes- | zigo-_-: what company? | 07:38 |
zigo-_- | GPLHost | 07:38 |
zigo-_- | I'm the founder, CEO, and software main author. | 07:38 |
zykes- | ah | 07:38 |
aixenv | soren: well, #1 the same IP shows twice in 'euca-describe-instances', and #2 I can't for the life of me actually ping or ssh into a running instance, even after doing the authorize icmp/ssh commands | 07:39 |
soren | aixenv: Which network manager? | 07:39 |
aixenv | vlan | 07:39 |
soren | Ok. And where are you trying to connect from? | 07:40 |
aixenv | soren: everything on the same server (proof of concept) | 07:40 |
soren | aixenv: Is that also where you're trying to connect fomr? | 07:40 |
aixenv | yes sir | 07:40 |
soren | s/fomr/from/ | 07:40 |
soren | Ok. | 07:40 |
soren | pastebin "ip route", please. | 07:40 |
aixenv | ok | 07:41 |
*** Ryan_Lane has quit IRC | 07:41 | |
aixenv | soren: http://pastebin.com/BEuNGDLD | 07:42 |
aixenv | I'm not sure where the /28 is coming from, as I'm setting the fixed_range to 192.168.0.0/24 | 07:43 |
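One plausible explanation, inferred rather than confirmed in the channel: with VlanManager the fixed_range is split into per-project networks of network_size addresses, and a 16-address project network is exactly a /28. The arithmetic:

```sh
# Prefix length for a project network of NETWORK_SIZE addresses:
NETWORK_SIZE=16   # assumed value; 32 would give the /27 aixenv expected
awk -v n="$NETWORK_SIZE" 'BEGIN { printf "/%d\n", 32 - log(n)/log(2) }'   # prints /28
```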
winston-d | zigo-_- : what's VPS? | 07:44 |
alekibango | virtual private server | 07:45 |
*** allsystemsarego has joined #openstack | 07:45 | |
*** allsystemsarego has joined #openstack | 07:45 | |
winston-d | alekibango : thanks. hosting service provider? | 07:45 |
alekibango | i am not with him, but i think so | 07:46 |
zigo-_- | winston-d: A VPS is a Virtual Private Server. So it could be a VM, but also a container like with virtuozzo. | 07:46 |
zigo-_- | We do only Xen VMs though. | 07:46 |
alekibango | zigo-_-: would you like to use my fai configs somehow? | 07:47 |
winston-d | zigo-_- : I see. :) you never know what your server really is. | 07:47 |
zigo-_- | Because we believe it's of better quality than VZ or others. | 07:47 |
zigo-_- | winston-d: You do. ls /proc ... :) | 07:47 |
soren | aixenv: I'm not going to help you if you destroy my debug information. | 07:47 |
winston-d | zigo-_- : just out of curiosity, do you use PV guests or HVM? | 07:47 |
zigo-_- | PV only. | 07:47 |
aixenv | soren: can i PM it to you? | 07:47 |
zigo-_- | We have HVM support, but I don't like it. | 07:47 |
zigo-_- | It's CPU demanding for no valid reasons. | 07:48 |
soren | aixenv: If you don't want anyone else to help, sure. | 07:48 |
zigo-_- | I maintain "xen-qemu-dm" in Debian (the part that does the HVM drivers). | 07:48 |
soren | Not sure what can be so secret about a routing table. | 07:48 |
winston-d | zigo-_- : i guess so. maybe that's the reason you don't like KVM? | 07:48 |
alekibango | zigo-_-: I don't like that you are pinned to a specific kernel with Xen | 07:48 |
zigo-_- | Frankly, this lead me to hate Qemu! :) | 07:48 |
aixenv | I can't share the IP range, soren | 07:48 |
soren | Then I can't help you. | 07:48 |
aixenv | I'll pm you. | 07:48 |
soren | Maybe I'm an evil hacker. | 07:49 |
soren | Who knows? | 07:49 |
zigo-_- | alekibango: That was true until a few weeks ago. | 07:49 |
alekibango | zigo-_-: ?? | 07:49 |
zigo-_- | alekibango: Mainline 2.6.39 now includes dom0 support!!! | 07:49 |
alekibango | wow | 07:49 |
alekibango | missed this one | 07:49 |
winston-d | zigo-_- : really? | 07:49 |
alekibango | might make me rethink my strategy | 07:49 |
zigo-_- | So, you are either stuck with older versions of the kernel with Xen, or you run bleeding edge kernels! :) | 07:49 |
* winston-d checking lwn.net | 07:49 | |
antonym | yeah, they finally got dom0 support in | 07:49 |
*** bkkrw has joined #openstack | 07:50 | |
*** daveiw has joined #openstack | 07:50 | |
zigo-_- | A lot of people failed to realize all the huge work that has gone into Xen over the last 2 years. | 07:50 |
alekibango | that's great news for Xen | 07:50 |
zigo-_- | And got caught by the Red Hat marketing crap... | 07:50 |
alekibango | congrats ! | 07:50 |
antonym | yeah, it is, they've been needing that for a while | 07:50 |
zigo-_- | In 2.6.38 (already released), there's dom0 support as well. | 07:50 |
zigo-_- | But no dom0 backend. | 07:50 |
alekibango | zigo-_-: I know RH treats Xen as a dying technology | 07:50 |
zigo-_- | So, you get a server that can run VMs with ... a ramdisk and that's it! | 07:51 |
zigo-_- | :) | 07:51 |
alekibango | lolo | 07:51 |
zigo-_- | alekibango: It's going to be funny to see how RedHat will *not* be able to patch the mainline kernels to actually *remove* xen support! | 07:51 |
zigo-_- | :) | 07:51 |
alekibango | i love free software :) | 07:52 |
alekibango | its life... | 07:52 |
winston-d | anyone post a URL for 2.6.39 news? | 07:52 |
alekibango | proprietary one = system of death | 07:52 |
zigo-_- | At the end of the story, we are all happy. Xen improved, mainline kernel improved, overall stuff are all better, including KVM. | 07:52 |
zykes- | zigo-_-: it's cool to have both kvm and xen | 07:53 |
zigo-_- | If you guys want to try the bleeding edge Xen, you got to pull the Jeremy Git repo. | 07:53 |
zykes- | people prefer their own choices | 07:53 |
zigo-_- | zykes-: EXACTLY ! | 07:53 |
alekibango | yes, that's what I mean | 07:53 |
zigo-_- | Jeremy's Git repo is 2.6.39 plus patches. | 07:53 |
winston-d | zigo-_- : have you evaluated the 2.6.39 dom0 against the previous pv_ops dom0, from a performance perspective? | 07:53 |
zigo-_- | And more and more, the difference is small. | 07:54 |
zykes- | what I like with KVM though over Xen, is that I've had better experiences with KVM over Xen to run on "cheaper" hardware / newer, Xen's crashed too many times for my taste | 07:54 |
zigo-_- | winston-d: I just use Debian's kernel. | 07:54 |
zigo-_- | I see no point in wasting my time building my own. | 07:54 |
zykes- | though Xen is cool that you can run paravirt on old hw :p | 07:54 |
zigo-_- | I used to, I gave up on that. | 07:54 |
zigo-_- | Well, PV isn't only cool because of old hardware, IMHO. | 07:54 |
zykes- | ;) | 07:54 |
zigo-_- | It's just faster than using Qemu. | 07:54 |
alekibango | zigo-_-: I still need to do that for music | 07:54 |
alekibango | I miss low-latency, realtime desktop kernels in Debian | 07:55 |
zykes- | zigo-_-: though 2.6.39 | 07:55 |
zykes- | isn't stable yet. | 07:55 |
zigo-_- | One of the issue as well, with Xen, is that it doesn't fit at all for desktop use. | 07:55 |
zigo-_- | Running it with a wlan card is a nightmare. | 07:55 |
zigo-_- | So it lost popularity because of that. | 07:55 |
winston-d | alekibango : maybe you'd try direct IO stuff. really low latency, high performance. | 07:56 |
alekibango | xen is also complicated to start with | 07:56 |
zykes- | winston-d: direct io what | 07:56 |
zigo-_- | zykes-: Yup, and Xen backend just got merged because there was a merge window ! :) | 07:56 |
zigo-_- | In 3 months, we're all saved from big troubles. | 07:56 |
alekibango | winston-d: what do you mean direct io? | 07:56 |
winston-d | zykes- : VT-d, aka PCI pass-through, or even SR-IOV | 07:56 |
alekibango | winston-d: i need to use jackd | 07:56 |
zykes- | sr-iov from who winston-d ? | 07:56 |
zigo-_- | SR-IOV was in Xen even before the hardware was available. | 07:57 |
zigo-_- | It was done in Intel Shanghai, few kms away from my home! :) | 07:57 |
winston-d | zykes- : that's hardware capability. | 07:57 |
winston-d | zigo-_- : r u in PRC? | 07:57 |
zigo-_- | Yup. | 07:57 |
zigo-_- | French guy living in Shanghai. | 07:57 |
alekibango | aha that direct io :) | 07:58 |
winston-d | zigo-_- : cool. r u running a business in PRC too? | 07:58 |
zigo-_- | That's not very easy to do. | 07:58 |
winston-d | zigo-_- : :) | 07:58 |
zigo-_- | I'm trying, but hosting is hard to do when you don't know the right people in China. | 07:58 |
winston-d | we've seen performance issues with PV. | 07:58 |
winston-d | at large scale | 07:58 |
zigo-_- | However, the gov. here is pushing SO HARD to have a big cloud computing platform. | 07:58 |
zigo-_- | I'm aiming at gov. funds right now. | 07:58 |
alekibango | they have capitalism under leadership of commie party :) | 07:58 |
zigo-_- | They might give a lot ... | 07:59 |
alekibango | zigo-_-: openstack is capable of satisfy as BIG CLOUD PLATFORM | 07:59 |
alekibango | to* | 07:59 |
winston-d | zigo-_- : hmm, i'm interested. | 07:59 |
alekibango | heh my englush... | 07:59 |
winston-d | zigo-_- : btw, i'm in Shanghai too | 08:00 |
zigo-_- | winston-d: Are you? Where ? :) | 08:00 |
zigo-_- | Are you Chinese? | 08:00 |
winston-d | zigo-_- : yes, i'm Chinese. Right now i'm in Minhang district, if you know | 08:00 |
zigo-_- | I do! | 08:01 |
zigo-_- | winston-d: We should meet each other then!!! | 08:01 |
benbenhappy | jiaoda? | 08:01 |
zigo-_- | I'm in KangQiao | 08:01 |
winston-d | benbenhappy : hmm, not really, but quite close | 08:01 |
zigo-_- | winston-d: Ever heard about the Shanghai Linux User Group (SHLUG)? | 08:01 |
zigo-_- | benbenhappy: Are you in SH too? | 08:02 |
benbenhappy | me too | 08:02 |
winston-d | zigo-_- : really? That's not far from my home. SHLUG, not really. | 08:02 |
benbenhappy | and I know zigo | 08:02 |
winston-d | and zigo-_- doesn't know u? ha | 08:02 |
zigo-_- | benbenhappy & winston-d: You guys should come next Thursday to the weekly meet-up ! | 08:03 |
benbenhappy | we have talked about openstack together | 08:03 |
zigo-_- | benbenhappy: When ? | 08:03 |
benbenhappy | zigo-_-:I am du | 08:03 |
winston-d | benbenhappy zigo-_-: in person? | 08:03 |
zigo-_- | Du YuJie? | 08:03 |
benbenhappy | yes | 08:03 |
zigo-_- | Ahahah ! :) | 08:03 |
zigo-_- | Lol. | 08:03 |
alekibango | make a shanghai openstack group :) | 08:04 |
zigo-_- | benbenhappy: Do you know winston-d ? | 08:04 |
winston-d | zigo-_- : when & where is SHLUG meet-up? | 08:04 |
winston-d | zigo-_-: i guess not | 08:04 |
benbenhappy | really, we want to make a shanghai openstack group | 08:04 |
zigo-_- | winston-d: Every thursday in NanJing Lu. | 08:04 |
* winston-d sighs. that's downtown | 08:05 | |
winston-d | zigo-_- : how many VMs r u running for one physical system? have u met any IO issue w/ PV? | 08:07 |
*** miclorb_ has joined #openstack | 08:07 | |
zigo-_- | winston-d: It really depends on the hardware. | 08:07 |
zigo-_- | We tend not to put too many customers on a single one, because we want quality. | 08:08 |
zigo-_- | We have no issues with IO, because ... we monitor it and do a lot of policing! :) | 08:08 |
zigo-_- | Also, RAID10 helps a lot. | 08:08 |
zigo-_- | Most customers are doing almost nothing with their VM anyway... :D | 08:08 |
winston-d | zigo-_- : then a more practical question is: what's the ratio of vCPUs to pCPUs? | 08:08 |
benbenhappy | maybe 10 VMs in one server that has 4G mem? | 08:08 |
zigo-_- | Yup. | 08:09 |
zigo-_- | Up to 30 with newer core i7 with raid10 | 08:09 |
winston-d | zigo-_- : so light-weighted VM? | 08:09 |
zigo-_- | I'd say 10 customers per core is reasonable. | 08:09 |
*** uksysadmin has joined #openstack | 08:09 | |
zigo-_- | Xen is lightweight, really... | 08:09 |
* winston-d is impressed by light-weighted VMs | 08:10 | |
zigo-_- | Run 30 VMs, and you get a 0.0.6 load. | 08:10 |
benbenhappy | the problem is domU | 08:10 |
*** doude has quit IRC | 08:10 | |
zigo-_- | 0.06 | 08:10 |
winston-d | zigo-_- : we did much much heavier load | 08:10 |
zigo-_- | winston-d: Who is "we" ? Your company? | 08:10 |
zigo-_- | What company? | 08:10 |
benbenhappy | maybe in school:) | 08:11 |
winston-d | in our lab. | 08:12 |
benbenhappy | cloud computing lab? | 08:12 |
zigo-_- | I ran out of cigarette, I'm going out to buy some. See ya. | 08:12 |
winston-d | benbenhappy : not really. it's a small lab inside a big company. :) | 08:13 |
*** drico has quit IRC | 08:14 | |
*** drico has joined #openstack | 08:15 | |
*** adiantum has quit IRC | 08:15 | |
*** jeffjapan has quit IRC | 08:15 | |
winston-d | can anyone post a link for the news about Dom0 getting into 2.6.39? I did a rough search but couldn't find it. http://www.h-online.com/open/features/Kernel-Log-Development-of-2-6-39-under-way-series-33-revived-1212988.html | 08:18 |
*** lool- is now known as lool | 08:19 | |
*** lool has joined #openstack | 08:19 | |
uvirtbot | New bug: #744814 in nova ""python setup.py sdist" yields a bunch of useless errors" [Undecided,New] https://launchpad.net/bugs/744814 | 08:21 |
*** adiantum has joined #openstack | 08:22 | |
winston-d | zigo-_- : can you help? | 08:22 |
benbenhappy | he is smoking now | 08:23 |
winston-d | benbenhappy : so r u in SHLUG too? that's where u met zigo? | 08:23 |
*** ewindisch has quit IRC | 08:23 | |
benbenhappy | we talked about that on the shlug mailing list, maybe you can search for it | 08:24 |
benbenhappy | we met at Xen Summit 2009 | 08:24 |
winston-d | xen summit in SH? | 08:25 |
benbenhappy | yes | 08:25 |
winston-d | hmm, i guess you might have seen me there | 08:25 |
benbenhappy | he gave a talk there | 08:25 |
*** freeflying has quit IRC | 08:25 | |
winston-d | interesting. what's zigo's topic? | 08:26 |
benbenhappy | did you give a talk at the xen summit? | 08:26 |
*** irahgel has joined #openstack | 08:26 | |
winston-d | i found zigo's slides for xen summit | 08:32 |
benbenhappy | so ,what's yours ? | 08:32 |
zigo-_- | winston-d: http://www.gplhost.com/software-dtc-xen_5-Xen_Summit_Asia_2009_at_intel_shanghai.html | 08:33 |
*** doude has joined #openstack | 08:33 | |
*** ton_katsu has joined #openstack | 08:34 | |
ton_katsu | hello | 08:34 |
ton_katsu | I'm from japan. | 08:34 |
zigo-_- | ton_katsu: Hi there. | 08:34 |
winston-d | zigo-_- : i'd like to read more information about Dom0 getting into 2.6.39 kernel. can you help? | 08:34 |
zigo-_- | I hope you aren't in the northern part. | 08:35 |
zigo-_- | winston-d: You should read the xen-devel list then. | 08:35 |
zigo-_- | You have news about it every day, nearly. | 08:35 |
zigo-_- | There's nothing much to know anyway... | 08:35 |
zigo-_- | Just that it's getting in. | 08:35 |
*** bkkrw has quit IRC | 08:36 | |
winston-d | zigo-_- : i used to. but now i'm reading linux-kvm list instead. :) | 08:36 |
*** bkkrw has joined #openstack | 08:36 | |
zigo-_- | Switch back! :) | 08:36 |
benbenhappy | winston-d :why use kvm instead? | 08:36 |
ton_katsu | I am trying openstack(nova). | 08:36 |
*** nerens has quit IRC | 08:37 | |
zigo-_- | benbenhappy: Really, the virtualization tech. doesn't matter much. | 08:37 |
zigo-_- | Both are good. | 08:37 |
zigo-_- | After that, it's rather a matter of taste... | 08:37 |
ton_katsu | but , instance not start. | 08:38 |
zigo-_- | I guess I'm using Xen because that was the only usable virt. tech. after UML (which wasn't good at all). | 08:38 |
benbenhappy | ton_katsu: do you have a problem with bzr? | 08:38 |
winston-d | xen sucks for its PV (not that virtio is doing better but looks more promising) and its scheduler | 08:38 |
zigo-_- | What's wrong with Xen scheduler ? | 08:39 |
zigo-_- | I think it's great. | 08:39 |
ton_katsu | nova-compute.log = [libvir: QEMU error : Domain not found: no domain with matching name 'instance-00000003'] | 08:39 |
benbenhappy | winston-d:what's your topics on xen summit? | 08:40 |
ton_katsu | please help me | 08:40 |
ton_katsu | nova version is 2011.1~bzr645-0ubuntu0ppa1~lucid1 | 08:41 |
ton_katsu | OS: ubuntu10.04LTS | 08:41 |
ton_katsu | Hypervisor: kvm | 08:41 |
ton_katsu | wget http://uec-images.ubuntu.com/releases/10.04/release/ubuntu-10.04-server-uec-amd64.tar.gz | 08:42 |
ton_katsu | was registered | 08:43 |
winston-d | zigo-_- : if you have ~40 vms, all doing some computation & IO (w/ PV) on top of 24 cores, you'll see some perf overhead. if the scale gets bigger, the scheduler overhead becomes more significant. | 08:44 |
ton_katsu | euca-describe-instances is ... | 08:44 |
zigo-_- | winston-d: I don't overload servers that way! :) | 08:44 |
ton_katsu | RESERVATION r-igweqspn project default | 08:44 |
ton_katsu | INSTANCE i-00000003 ami-tn4sf0ud 192.168.1.3 192.168.1.3 pending mykey | 08:44 |
ton_katsu | please help me. | 08:44 |
winston-d | benbenhappy, zigo-_-: my slides for xen summit 2009. http://www.xen.org/files/xensummit_intel09/xensummit2009_IOVirtPerf.pdf | 08:45 |
winston-d | zigo-_- : well, others might do | 08:45 |
ton_katsu | pending...pending...pending... ..... .... | 08:46 |
ton_katsu | why? | 08:46 |
zigo-_- | winston-d: So you are the one that did this SR-IOV thing? | 08:47 |
zigo-_- | It was a great presentation. | 08:47 |
*** benbenhappy has left #openstack | 08:47 | |
*** freeflying has joined #openstack | 08:48 | |
zigo-_- | freeflying: Hello! :) | 08:48 |
winston-d | zigo-_- : but it was in Chinese (that was intentional). i hope it was difficult for u | 08:48 |
*** rds__ has joined #openstack | 08:49 | |
zigo-_- | winston-d: If you were doing it again, I guess I would understand a lot more. | 08:50 |
winston-d | zigo-_- : that's easy. there's video recording out there in xen.org. :D | 08:50 |
zigo-_- | Well, now I know how great SR-IOV is! :) | 08:50 |
zigo-_- | I just wonder: will it one day be implemented in PV Xen ? | 08:51 |
winston-d | zigo-_- : i heard they already did that for PV guest. | 08:51 |
zigo-_- | Cool. | 08:51 |
*** kyzh has quit IRC | 08:52 | |
*** kyzh has joined #openstack | 08:52 | |
zigo-_- | How do I apply a patch to a branch after I did "bzr send -o my.patch" ? | 08:52 |
*** kyzh has quit IRC | 08:52 | |
*** kyzh has joined #openstack | 08:52 | |
zigo-_- | (eg: what's the "git am" equivalent?) | 08:54 |
winston-d | have 2 run. talk 2 u guys next time. | 08:54 |
zigo-_- | I hope to see you one day at the SHLUG meetup then! | 08:54 |
zigo-_- | Bye. | 08:54 |
uvirtbot | New bug: #744833 in nova "python-suds shouldn't be a hard dependency" [Undecided,New] https://launchpad.net/bugs/744833 | 08:56 |
*** z0 has joined #openstack | 08:59 | |
*** adiantum has quit IRC | 09:06 | |
zigo-_- | I'm beginning to hate bzr, considering how slowly it fetches over the network! | 09:07 |
zykes- | what's a good "lab" server to get the most for one's money? | 09:09 |
zykes- | i was thinking of Dell R410 with 2*nehalem cpus | 09:10 |
zigo-_- | Dell are power eaters. | 09:10 |
aixenv | and come with broadcom chips, which suck | 09:11 |
zigo-_- | and which has a non-free firmware included... | 09:12 |
zykes- | zigo-_-: what to get then ? | 09:12 |
zykes- | hp or ibm ? | 09:12 |
aixenv | that doesn't stop the broadcom chip from sucking | 09:12 |
* zigo-_- buys exclusively from Supermicro. | 09:12 | |
aixenv | which suck even more | 09:12 |
zigo-_- | Ah ? :) | 09:12 |
zigo-_- | Why that? | 09:12 |
aixenv | supermicros are crap | 09:13 |
zigo-_- | I'm really happy with them. | 09:13 |
zigo-_- | Fast, cheap, reliable. | 09:13 |
aixenv | but im also not gonna get into that, we've had nothing but bad luck with them | 09:13 |
zigo-_- | And with nice KVMs. | 09:13 |
aixenv | they dont last that's one thing | 09:13 |
zigo-_- | That's not what I can tell with my statistics. | 09:13 |
zigo-_- | I'd say I have a failure rate of less than 1% per year. | 09:13 |
zykes- | what's something to get ? | 09:14 |
zigo-_- | The thing is, don't get the PSU they include, ask for a bigger one. | 09:14 |
zykes- | i can choose from dell, ibm, hp | 09:14 |
zigo-_- | Most of the time, they'd sell you a PSU that's too small, and then the trouble starts. | 09:14 |
aixenv | you'll see failures as you get more age on them | 09:15 |
*** z0 has quit IRC | 09:15 | |
aixenv | zykes: get what you normally get, or get what makes the most sense financially, what hardware your DC is used to, etc | 09:15 |
zigo-_- | Please define "more age". | 09:15 |
zigo-_- | I've been 7 years in the business. Should I wait more until I see my failure rate going high? | 09:16 |
aixenv | we saw failures at 5, so you're lucky | 09:16 |
zigo-_- | :) | 09:16 |
aixenv | the supermicro P4SXX series have been nothing but problems | 09:17 |
zigo-_- | That's pretty old. | 09:17 |
aixenv | but again those are old | 09:17 |
aixenv | yes they are circa 2005ish | 09:17 |
zigo-_- | We currently use X8STi-F with core i7 or Xeon (depending on availability). | 09:17 |
aixenv | all the hardware issues kinda soured our taste for supermicro | 09:18 |
zigo-_- | How many servers are we talking about here? | 09:18 |
aixenv | not that i like who we use now either tho | 09:18 |
aixenv | 1000s | 09:18 |
zigo-_- | Fair enough then. | 09:18 |
zigo-_- | So, how do I apply a "bzr send -o my.patch" patch file to a local repo? | 09:19 |
aixenv | like i said we use debian as our core o/s and i have to compile a custom initrd and preseed to make the dells happy with the broadcom nic, so annoying | 09:19 |
zigo-_- | soren ? | 09:19 |
soren | eh? | 09:20 |
zigo-_- | So, how do I apply a "bzr send -o my.patch" patch file to a local repo? | 09:20 |
zigo-_- | I've been looking for ages ... | 09:21 |
soren | zigo-_-: What exactly are you trying to do? | 09:21 |
zigo-_- | Fixing the fact that I did work on "trunk" when in fact I wanted to work in another branch. | 09:22 |
zigo-_- | Then send you patches ... :) | 09:22 |
soren | Ok, just go to the branch where you wanted to do your work.. | 09:22 |
soren | Did you commit the changes on trunk? | 09:22 |
zigo-_- | Yup. | 09:23 |
soren | Ok. | 09:23 |
*** nerens has joined #openstack | 09:23 | |
soren | So, go to the branch where you wanted to do your work, and do "bzr pull ../trunk" | 09:23 |
zykes- | geez, why does it have to come with broadcom | 09:23 |
soren | (replace ../trunk with whereever your trunk is) | 09:23 |
aixenv | zykes: i ask myself that everytime i hit buy | 09:24 |
zigo-_- | Ok, cool! | 09:24 |
zigo-_- | Then I shall just push to my own launchpad repo, right? | 09:24 |
soren | Ignore "bzr send". We don't need it | 09:24 |
zykes- | aixenv: is there any better alternatives ? | 09:24 |
soren | zigo-_-: That's the plan, yes. | 09:24 |
aixenv | zykes: i'd get what your DC guys are most used to dealing with; if that isn't an issue, i've always wanted to switch to HP, and at a previous job i used IBM without any issues | 09:25 |
aixenv | but as long as you're buying quality components/formfactor/etc im sure you'll be fine | 09:25 |
zykes- | what's bad with broadcoms ? | 09:26 |
aixenv | horrible in bandwidth tests, which is enough | 09:26 |
aixenv | depending on your o/s of choice, it can cause headaches since debian for example has the firmware in non-free | 09:27 |
zigo-_- | And they need firmware-linux-nonfree, which is annoying. | 09:27 |
aixenv | aye totally, although if you're debian based i can share with you a preseed i did to fix that | 09:27 |
zigo-_- | Is that normal that "bzr launchpad-login thomas-goirand" takes ages to run? | 09:28 |
soren | No. | 09:31 |
soren | Takes about a second for me. | 09:31 |
uvirtbot | New bug: #744853 in nova "euca-xxx commands : EC2ResponseError: 403 Forbidden" [Undecided,New] https://launchpad.net/bugs/744853 | 09:31 |
zykes- | hmms, why does all the servers come with broadcom chips.. | 09:33 |
*** aixenv2 has quit IRC | 09:33 | |
*** aixenv2 has joined #openstack | 09:35 | |
zigo-_- | zigo@GPLHost:buzdev>_ ~/sources/bzr/nova/zigo$ bzr launchpad-login thomas-goirand | 09:35 |
zigo-_- | bzr: ERROR: pycurl.error: (28, 'gnutls_handshake() failed: A TLS packet with unexpected length was received.') | 09:35 |
*** aixenv has quit IRC | 09:37 | |
*** aixenv has joined #openstack | 09:39 | |
aixenv | i notice my nova.conf doesnt have 'routing_source' is that an optional parameter? | 09:40 |
aixenv | erm sorry 'routing_source_ip' | 09:40 |
*** drico has quit IRC | 09:40 | |
*** drico has joined #openstack | 09:41 | |
*** nerens has quit IRC | 09:43 | |
zigo-_- | soren: Just did bzr push lp:~thomas-goirand/nova/debian | 09:44 |
zigo-_- | Can you tell me if it's ok? | 09:44 |
*** nerens has joined #openstack | 09:44 | |
zigo-_- | Packaging ... | 09:44 |
*** miclorb_ has quit IRC | 09:45 | |
* zigo-_- has to go to buy food, bbl | 09:46 | |
*** ewindisch has joined #openstack | 09:50 | |
*** adjohn has quit IRC | 09:51 | |
*** dmd17 has quit IRC | 09:53 | |
*** bkkrw has quit IRC | 10:00 | |
*** bkkrw has joined #openstack | 10:05 | |
*** uksysadmin has quit IRC | 10:06 | |
*** ramkrsna has joined #openstack | 10:12 | |
*** zaccone has joined #openstack | 10:16 | |
zaccone | helo | 10:16 |
*** joloughlin has joined #openstack | 10:18 | |
joloughlin | whats the recommended way of adding images to glance ? | 10:19 |
joloughlin | i can glance add with no problems but euca-describe-images throws an error | 10:19 |
joloughlin | adding images via nova-manage or uec-publish causes errors as well | 10:20 |
zigo-_- | back | 10:22 |
joloughlin | is there a recommended way to add images to nova when its configured to use glance ? | 10:25 |
*** adiantum has joined #openstack | 10:27 | |
*** smaresca has quit IRC | 10:28 | |
*** z0 has joined #openstack | 10:34 | |
*** drico has quit IRC | 10:35 | |
*** drico has joined #openstack | 10:38 | |
joloughlin | are there problems with glance and nova ? | 10:39 |
*** smaresca has joined #openstack | 10:40 | |
*** ovidwu has quit IRC | 10:43 | |
zul | can someone review this branch one more time? https://code.launchpad.net/~zulcss/nova/nova-lxc/+merge/55260 | 10:43 |
*** ovidwu has joined #openstack | 10:43 | |
freeflying | zigo-_-: hi | 10:44 |
zaccone | I have got a question. I've been trying to install OpenStack (the full set) on a single VirtualBox guest machine. I have a 32bit system, so i tried downloading two system images - i386 and i686 - yet neither of them works. After running euca-run-instances i only get the message that the instance is scheduling; after that the instance changes its state to networking and dies. No IP is assigned. | 10:46 |
zaccone | any clues, where i can start looking for solution? | 10:46 |
aixenv | /var/log/nova/nova-compute.log, *and the other /var/log/nova/ log files* | 10:47 |
freeflying | zedas: /var/log// | 10:47 |
soren | zigo-_-: Why do you want to package 2011.1.1 (according to your ITP)? | 10:50 |
zaccone | well First of all sqlalchemy raises an exception: (nova): TRACE: raise db.NoMoreNetworks() | 10:51 |
*** metoikos has joined #openstack | 10:51 | |
zaccone | (nova): TRACE: NoMoreNetworks: None | 10:51 |
joloughlin | is there a recommended way to add images to nova when its configured with glance ? | 10:53 |
joloughlin | i have tried uec-publish and nova-manage images | 10:53 |
joloughlin | both throw an error | 10:53 |
joloughlin | i can add to glance directly with glance add | 10:53 |
joloughlin | but then euca-describe-images throws an error | 10:53 |
zaccone | joloughlin: what error? | 10:54 |
joloughlin | UnknownError: An unknown error has occurred. Please try your request again | 10:55 |
joloughlin | this is after adding via glance add | 10:55 |
joloughlin | glance details shows the image | 10:55 |
*** z0 has quit IRC | 10:56 | |
*** z0 has joined #openstack | 10:56 | |
*** bkkrw has quit IRC | 11:03 | |
*** joloughlin has quit IRC | 11:03 | |
*** Jordandev has joined #openstack | 11:10 | |
*** Jordandev has quit IRC | 11:12 | |
*** bkkrw has joined #openstack | 11:15 | |
*** drico has quit IRC | 11:18 | |
*** miclorb_ has joined #openstack | 11:22 | |
*** ovidwu has quit IRC | 11:28 | |
*** ovidwu has joined #openstack | 11:28 | |
*** ovidwu has quit IRC | 11:29 | |
*** adjohn has joined #openstack | 11:30 | |
*** ovidwu has joined #openstack | 11:31 | |
*** z0 has quit IRC | 11:40 | |
*** z0 has joined #openstack | 11:52 | |
*** miclorb_ has quit IRC | 11:54 | |
*** ton_katsu has quit IRC | 11:58 | |
*** dprince has joined #openstack | 12:11 | |
*** hggdh has quit IRC | 12:12 | |
*** adiantum has quit IRC | 12:14 | |
*** hggdh has joined #openstack | 12:14 | |
*** ctennis has quit IRC | 12:14 | |
*** adiantum has joined #openstack | 12:26 | |
*** foutchy has joined #openstack | 12:32 | |
*** adjohn has quit IRC | 12:35 | |
*** z0 has quit IRC | 12:38 | |
*** h0cin has joined #openstack | 12:39 | |
n1md4 | afternoon. What's the fundamental difference between release and trunk? | 12:39 |
BK_man | n1md4: developer version (unstable) and last tested release | 12:40 |
n1md4 | BK_man: thank you :) | 12:41 |
foutchy | morning, installing swift on a dev VM - is it better to use a partition or a loopback device? | 12:43 |
n1md4 | Using VlanManaged do I need to set compute nodes with br100? | 12:50 |
*** koolhead11 is now known as koolhead11|afk | 12:54 | |
*** mray has joined #openstack | 12:57 | |
*** mray1 has joined #openstack | 12:58 | |
*** zenmatt has joined #openstack | 12:58 | |
*** aliguori has joined #openstack | 12:59 | |
*** aliguori_ has joined #openstack | 12:59 | |
*** aliguori has quit IRC | 13:00 | |
*** aliguori_ has quit IRC | 13:00 | |
*** aliguori has joined #openstack | 13:00 | |
*** mray has quit IRC | 13:02 | |
n1md4 | I installed controller and node with install scripts and wasn't able to ping instances. I then installed controller and node manually, the controller is able to access the instances, but nodes do not even show up in the services table. Ideas? | 13:04 |
soren | This is getting ridiculous... | 13:10 |
soren | EVERYONE: Has anyone had any success with dubsquared's install script in recent history? | 13:11 |
*** mray has joined #openstack | 13:12 | |
aixenv | no | 13:12 |
soren | Can everyone please stop using it, then? kthxbai! | 13:12 |
aixenv | can openstack take it off the official wiki? | 13:12 |
soren | It's a.... | 13:13 |
aixenv | people are going to use the resources provided by the openstack community | 13:13 |
soren | wait for it... | 13:13 |
soren | WIKI! | 13:13 |
soren | Anyone can. | 13:13 |
aixenv | well it should be a managed wiki | 13:13 |
n1md4 | soren: certainly not plug and play :-/ Can they be configured to use VlanManager by default too? | 13:13 |
n1md4 | :) | 13:13 |
soren | FlatManager is the | 13:13 |
soren | *worst* possible default choice. | 13:13 |
n1md4 | soren: right. as a beginner to openstack I found this an odd choice ;) | 13:14 |
*** Zangetsu has joined #openstack | 13:18 | |
zigo-_- | soren: Did you merge my changes? | 13:19 |
*** santhosh has joined #openstack | 13:19 | |
*** z0 has joined #openstack | 13:20 | |
soren | zigo-_-: Sorry, have been tied up all day | 13:20 |
*** bcwaldon has joined #openstack | 13:20 | |
zigo-_- | It's ok! :) | 13:20 |
*** ppetraki has joined #openstack | 13:21 | |
*** hadrian has joined #openstack | 13:25 | |
*** Zangetsu_ has joined #openstack | 13:26 | |
*** aixenv has quit IRC | 13:27 | |
*** Zangetsu has quit IRC | 13:29 | |
*** Zangetsu_ is now known as Zangetsu | 13:29 | |
*** rds__ has quit IRC | 13:33 | |
* zul would love it if someone can merge lxc ;) | 13:39 | |
*** rds__ has joined #openstack | 13:41 | |
jaypipes | termie: heya, blamar has updated the docstrings according to your request on https://code.launchpad.net/~blamar/nova/openstack-api-1-1-images/+merge/53942. Please feel free to set to Approved if there are no further comments. Thanks. | 13:45 |
*** chuck_ has joined #openstack | 13:45 | |
dendrobates | zul: I'll look at it. | 13:45 |
chuck_ | dendrobates: thanks | 13:45 |
*** zul has quit IRC | 13:45 | |
*** chuck_ is now known as zul | 13:46 | |
*** zul has joined #openstack | 13:46 | |
*** f4m8 is now known as f4m8_ | 13:46 | |
soren | zul: Did you fix those things I pointed out? | 13:47 |
zul | yep | 13:47 |
* soren looks | 13:47 | |
zigo-_- | Hi. | 13:56 |
soren | zul: nova/virt/libvirt_conn.py is missing a whitespace after a ','. | 13:58 |
zul | soren: where? | 13:59 |
zul | as in what line number | 13:59 |
soren | Oh, sorry, I thought I wrote that. | 13:59 |
soren | Line 350. | 13:59 |
soren | cut and paste mistake. | 13:59 |
zul | gimme a sec | 13:59 |
soren | (from me) | 13:59 |
zul | soren: just pushed | 14:02 |
*** foutchy has left #openstack | 14:02 | |
*** adjohn has joined #openstack | 14:11 | |
n1md4 | Hi. What's required to have a node show in the controller services table? I'd like to debug why it's not appearing. | 14:15 |
*** mdomsch has joined #openstack | 14:18 | |
*** jero has joined #openstack | 14:19 | |
n1md4 | Hmmm figured it out, python-suds was not installed. Correcting that enabled the node to appear | 14:22 |
soren | n1md4: Known problem: https://bugs.launchpad.net/nova/+bug/744833 | 14:24 |
uvirtbot | Launchpad bug 744833 in nova "python-suds shouldn't be a hard dependency" [Undecided,New] | 14:24 |
n1md4 | soren: "There are no FAQs for OpenStack Compute (nova) matching “suds ”." How was I to know :P | 14:28 |
n1md4 | :) | 14:28 |
soren | You're the second person to ask. | 14:33 |
soren | It's not frequent yet :) | 14:33 |
*** flopflip has quit IRC | 14:34 | |
*** flopflip has joined #openstack | 14:34 | |
n1md4 | soren: What does this generally mean "Found instance 'instance-00000017' in DB but no VM.". | 14:35 |
soren | smoser: Ideally, it means that an instance has crashed. | 14:36 |
soren | whoops | 14:36 |
soren | smoser: Not for you :) | 14:36 |
soren | n1md4: Ideally, it means that an instance has crashed. | 14:36 |
*** z0 has quit IRC | 14:36 | |
soren | n1md4: ...alternatively, it might mean that the clean up job is a bit too quick to clean up dead instances. | 14:36 |
dprince | jaypipes: you there? | 14:36 |
jaypipes | dprince: yup | 14:37 |
*** adjohn has quit IRC | 14:38 | |
dprince | jaypipes: Not sure if you noticed my Glance bug last night. My revision 99 commit in Glance kind of hosed us. Glance trunk can't be used to spawn instances w/ AMI style images. | 14:38 |
dprince | jaypipes: I just pushed a branch that fixes it. Essentially I made it so that external requests to update images 'purge properties/metadata'. | 14:39 |
dprince | jaypipes: internally however Glance makes many calls to update_images (for status changes, etc.) and these can still use the classic style update_image call which doesn't purge props. | 14:40 |
jaypipes | dprince: ah, yes indeed. oops. | 14:40 |
jaypipes | dprince: looks like an /images/<ID>/prop API is needed. | 14:40 |
dprince | jaypipes: tested it well this morning. Won't say it is the cleanest patch but it resolves the problem I introduced in rev 99 and still works perfectly with image metadata. | 14:40 |
dprince | jaypipes: Yes. We need a subresource for image metadata. | 14:41 |
jaypipes | dprince: k. still pushing through inbox. will get to it very shortly. | 14:41 |
jaypipes | dprince: properties, please. not metadata :P | 14:41 |
dprince | jaypipes: Sure. Anyway. I felt like a new sub-resource was too big a fix for now. That would certainly be the cleanest solution however but would likely involve client changes as well. | 14:42 |
jaypipes | dprince: yup, no worries. I should have been more vigilant on that. sorry. :( | 14:44 |
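(A minimal sketch of the behaviour dprince describes above, for readers following along -- this is not the actual Glance code; the function signature and field names are illustrative only: API-driven updates replace the stored properties, while internal updates such as status changes leave them alone.)

```python
# Illustrative sketch only -- not the real Glance registry code.
def update_image(image, values, purge_props=False):
    """Apply an update; optionally replace ("purge") the image properties."""
    props = values.pop('properties', None)
    image.update(values)
    if props is not None:
        if purge_props:
            image['properties'] = dict(props)    # external API update: replace
        else:
            image['properties'].update(props)    # internal update: merge/keep
    return image

image = {'status': 'queued', 'properties': {'type': 'machine'}}
update_image(image, {'status': 'active'})                      # props untouched
update_image(image, {'properties': {'arch': 'x86_64'}}, True)  # props replaced
print(image)
```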
*** ctennis has joined #openstack | 14:53 | |
*** dragondm has joined #openstack | 14:53 | |
* soren takes off for dinner | 14:55 | |
*** jero has quit IRC | 14:57 | |
*** spectorclan_ has joined #openstack | 15:00 | |
uvirtbot | New bug: #745016 in nova "Unconditionally injects network configuration" [High,New] https://launchpad.net/bugs/745016 | 15:02 |
*** jero has joined #openstack | 15:08 | |
*** dillon-w has joined #openstack | 15:09 | |
*** comstud has joined #openstack | 15:10 | |
*** ChanServ sets mode: +v comstud | 15:10 | |
dillon-w | is there a place to set 'cdnManagementURL'? | 15:10 |
*** odyi has joined #openstack | 15:11 | |
*** odyi has joined #openstack | 15:11 | |
blamar | dillon-w, line 149 of nova/api/openstack/auth.py? | 15:12 |
dillon-w | blamar : thanks, let me check | 15:13 |
ttx | tr3buchet, _cerberus_ : if you're ok with it, please switch https://code.launchpad.net/~citrix-openstack/nova/xenapi-vlan-network-manager/+merge/53660 status to "approved" | 15:13 |
_cerberus_ | Can do | 15:13 |
tr3buchet | was waiting on termie | 15:14 |
*** pharkmillups has joined #openstack | 15:14 | |
_cerberus_ | tr3buchet: I went ahead and flipped it. | 15:15 |
creiht | dillon-w: no, since that is part of cloudfiles, but not swift | 15:15 |
tr3buchet | _cerberus_: saw | 15:15 |
ttx | tr3buchet: right, same for https://code.launchpad.net/~blamar/nova/openstack-api-1-1-images/+merge/53942 | 15:15 |
*** ccustine has joined #openstack | 15:16 | |
creiht | dillon-w: though you could probably add it to the auth | 15:16 |
ttx | we'll wait a bit more to give him a last-chance review on that one. | 15:16 |
ttx | dendrobates, vishy, termie, jaypipes, soren: https://code.launchpad.net/~zulcss/nova/nova-lxc/+merge/55260 seems to have addressed the review concerns, please rereview | 15:17 |
dillon-w | creiht : hmm. actually what i want is to make my objects in swift publicly accessible. ACL is one thing, a public URL is another. | 15:17 |
dillon-w | creiht : there's a public_uri() method in the PHP bindings which returns the CDN URL for objects. it seems that doesn't work with Swift? | 15:18 |
ttx | creiht: https://code.launchpad.net/~david-goetz/swift/obj_server_file_checker/+merge/54765 looks unreviewed, would be great if it could make it today. | 15:19 |
*** bkkrw has quit IRC | 15:24 | |
dillon-w | creiht : now I can upload files to Swift with the Java bindings; can I also use the Java bindings to modify the ACL? my guess was the cdnEnableContainer() method, but that doesn't work if 'cdnManagementURL' is not set. | 15:24 |
*** johnpur has joined #openstack | 15:25 | |
*** ChanServ sets mode: +v johnpur | 15:25 | |
*** daveiw has quit IRC | 15:27 | |
blamar | tr3buchet: Updated https://code.launchpad.net/~blamar/nova/openstack-api-1-1-images/+merge/53942 with trunk merge + conflict resolution | 15:27 |
openstackjenkins | Project nova build #737: SUCCESS in 2 min 25 sec: http://hudson.openstack.org/job/nova/737/ | 15:28 |
openstackjenkins | Tarmac: Added VLAN networking support for XenAPI | 15:28 |
*** nid0 has quit IRC | 15:31 | |
ttx | yay, one down. | 15:32 |
zul | ooh....me next? :) | 15:34 |
*** pharkmillups has quit IRC | 15:35 | |
*** guigui has joined #openstack | 15:36 | |
uvirtbot | New bug: #745027 in nova "OSAPI versions response returns incorrect Content-Type" [Undecided,New] https://launchpad.net/bugs/745027 | 15:36 |
*** ramkrsna has quit IRC | 15:37 | |
*** ramkrsna has joined #openstack | 15:37 | |
*** ramkrsna has joined #openstack | 15:37 | |
*** MotoMilind has joined #openstack | 15:39 | |
*** kashyap has quit IRC | 15:39 | |
*** enigma has joined #openstack | 15:39 | |
jaypipes | sirp: around for a quick chat? | 15:40 |
sirp | sure thing | 15:41 |
*** meganwohlford has joined #openstack | 15:41 | |
*** meganwohlford has left #openstack | 15:42 | |
*** guigui has left #openstack | 15:45 | |
uvirtbot | New bug: #745043 in nova "OSAPI fault responses returning invalid content-type/body" [Undecided,New] https://launchpad.net/bugs/745043 | 15:46 |
*** rds__ has quit IRC | 15:49 | |
jaypipes | sirp, sandywalsh, dabo: another core review on https://code.launchpad.net/~rackspace-titan/nova/osapi-pass-update-lp744567/+merge/55239 would be great, thx | 15:49 |
*** dillon-w has quit IRC | 15:50 | |
*** sparkycollier has joined #openstack | 15:56 | |
*** sparkycollier has quit IRC | 15:59 | |
*** aliguori has quit IRC | 16:02 | |
*** kapil has joined #openstack | 16:02 | |
markwash | jaypipes: not saying Glance changed recently--just that it changed since Ed and Rick deployed it | 16:03 |
jaypipes | markwash: Glance, or the GlanceImageService in Nova? | 16:04 |
markwash | jaypipes: Glance | 16:04 |
jaypipes | markwash: ok, thx. just wanted to make sure... can be very confusing. :) | 16:04 |
*** cjreyn has joined #openstack | 16:05 | |
*** dirakx has joined #openstack | 16:06 | |
cjreyn | hi all, I've just upgraded my Cloud via apt and its screwed the compute nodes | 16:07 |
cjreyn | they're just spitting out errors (very) rapidly: Inner Exception: No module named suds from (pid=1990) import_class /usr/lib/pymodules/python2.6/nova/utils.py:65 | 16:07 |
cjreyn | any ideas? | 16:07 |
zul | sudo apt-get install python-suds | 16:09 |
jk0 | ttx: updated https://bugs.launchpad.net/nova/+bug/695587 (it's done) | 16:09 |
uvirtbot | Launchpad bug 695587 in nova "Instance action recording fails with "primary key must be unique" error" [Undecided,Fix committed] | 16:09 |
*** zigo-_- has quit IRC | 16:15 | |
cjreyn | how come this isn't caught as a dependency? | 16:15 |
*** sparkycollier has joined #openstack | 16:16 | |
jk0 | cjreyn: there's a bug in for it now | 16:16 |
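(For readers hitting the same error: a rough sketch of why a missing dependency shows up as "Class ... cannot be found". This is not the actual nova/utils.py code -- the names and the exception type are illustrative -- but a dynamic-import helper of this shape masks the underlying ImportError behind a generic message.)

```python
import sys

def import_class(import_str):
    """Import a class given a dotted path such as 'pkg.module.ClassName'."""
    mod_str, _sep, class_str = import_str.rpartition('.')
    try:
        __import__(mod_str)
        return getattr(sys.modules[mod_str], class_str)
    except (ImportError, ValueError, AttributeError):
        # The real cause (e.g. "No module named suds" raised while importing
        # the module) is hidden behind this generic error.
        raise ValueError('Class %s cannot be found' % class_str)
```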
*** zaccone has quit IRC | 16:22 | |
*** zigo-_- has joined #openstack | 16:23 | |
*** zigo-_- has quit IRC | 16:24 | |
*** aliguori has joined #openstack | 16:27 | |
*** Nacx has quit IRC | 16:30 | |
*** shentonfreude has joined #openstack | 16:30 | |
*** dendrobates is now known as dendro-afk | 16:32 | |
blamar | jaypipes, fixed import anomaly when you get a sec :) | 16:37 |
*** nid0 has joined #openstack | 16:37 | |
tr3buchet | blamar i'll let jaypipes mark it approved | 16:38 |
tr3buchet | if he doesn't do it quickly, let me know | 16:38 |
blamar | ty | 16:38 |
*** kyzh has quit IRC | 16:39 | |
*** gregp76 has joined #openstack | 16:42 | |
*** MarkAtwood has joined #openstack | 16:43 | |
jaypipes | blamar: yes, just doing a final check. | 16:43 |
jaypipes | blamar: not going to comment on the MP, but in the future, lines like this: 658+ """ % (locals())) You don't need the parens around locals()... | 16:44 |
jaypipes | blamar: off to tarmac pit. | 16:45 |
*** Ryan_Lane has joined #openstack | 16:45 | |
*** zigo-_- has joined #openstack | 16:46 | |
*** maplebed has joined #openstack | 16:48 | |
*** lionel has quit IRC | 16:49 | |
*** lionel has joined #openstack | 16:49 | |
*** zigo has joined #openstack | 16:54 | |
*** zigo-_- has joined #openstack | 16:55 | |
*** mgoldmann has quit IRC | 16:56 | |
jaypipes | blamar: image1.1 merged. | 16:56 |
*** mgoldmann has joined #openstack | 16:56 | |
blamar | jaypipes, yeah, did I do that again? I thought I fixed that? | 16:56 |
jaypipes | blamar: in the test cases, yeah, but you can clean that up any time, no worries. | 16:56 |
blamar | jaypipes: thanks a ton | 16:58 |
*** dendro-afk is now known as dendrobates | 16:58 | |
openstackjenkins | Project nova build #738: SUCCESS in 2 min 27 sec: http://hudson.openstack.org/job/nova/738/ | 16:58 |
openstackjenkins | Tarmac: Adds support for versioned requests on /images through the OpenStack API. | 16:58 |
*** jero has quit IRC | 16:59 | |
openstackjenkins | Project swift build #229: SUCCESS in 33 sec: http://hudson.openstack.org/job/swift/229/ | 17:03 |
openstackjenkins | Tarmac: changing /usr/bin/python to /usr/bin/env python in bins | 17:03 |
*** joearnold has joined #openstack | 17:04 | |
cjreyn | ok, now I'm seeing two errors preventing instance spawning. The first seems to relate to obtaining images | 17:05 |
cjreyn | File "/usr/lib/pymodules/python2.6/nova/virt/images.py", line 51, in fetch | 17:05 |
cjreyn | (nova.compute.manager): TRACE: metadata = image_service.get(elevated, image_id, image_file) | 17:05 |
cjreyn | (nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/image/local.py", line 113, in get | 17:05 |
cjreyn | (nova.compute.manager): TRACE: raise exception.NotFound | 17:05 |
cjreyn | (nova.compute.manager): TRACE: NotFound: None | 17:05 |
cjreyn | the images are listed as available in euca-describe images | 17:07 |
cjreyn | the second relates to this trace: | 17:09 |
cjreyn | TypeError: f(ile) should be int, str, unicode or file, not <open GreenPipe '<fd:16>', mode 'wb' at 0x455d560> | 17:09 |
*** jero has joined #openstack | 17:10 | |
*** st-14258 has joined #openstack | 17:10 | |
blamar | jaypipes, looks like it merged, but hudson kicked back my MP, should I just manually mark it merged? | 17:11 |
*** dendrobates is now known as dendro-afk | 17:11 | |
*** comstud is now known as dutsmoc | 17:12 | |
jk0 | cjreyn: need to use our PPA for eventlet | 17:12 |
cjreyn | i just installed from the ppa | 17:12 |
jk0 | you restart the services by chance? | 17:13 |
cjreyn | jk0: yes | 17:15 |
jk0 | hmm | 17:15 |
jk0 | that GreenPipe exception should go away with our eventlet PPA | 17:15 |
cjreyn | I have two compute nodes up, the first suffers from the green pipe exception... | 17:16 |
cjreyn | the other from this "TypeError | 17:16 |
sandywalsh | jaypipes, looking | 17:16 |
*** vernhart has quit IRC | 17:18 | |
*** zaccone has joined #openstack | 17:19 | |
*** maplebed has quit IRC | 17:23 | |
*** matiu has joined #openstack | 17:24 | |
*** maplebed has joined #openstack | 17:27 | |
*** irahgel has left #openstack | 17:31 | |
*** westmaas_ is now known as westmaas | 17:31 | |
*** kashyap has joined #openstack | 17:31 | |
*** burris has quit IRC | 17:32 | |
*** clauden_ has joined #openstack | 17:34 | |
*** vernhart has joined #openstack | 17:38 | |
cjreyn | ok these are the tty images. The main error I'm getting seems to be related to fetching images. I've upgraded via the ppa from an old release (bzr706) which worked fine before. What changed? | 17:40 |
cjreyn | and is there a way to go back to this old release via ubuntu's ppa? | 17:41 |
devcamcar | mtaylor: ever get a chance to look at the adminclient pypi stuff? | 17:41 |
*** comstud has joined #openstack | 17:42 | |
*** ChanServ sets mode: +v comstud | 17:43 | |
*** kbringard has joined #openstack | 17:45 | |
*** dendro-afk is now known as dendrobates | 17:49 | |
zigo-_- | In http://docs.openstack.org/openstack-compute/admin/content/ch03s02.html#d5e194, it's written that we should install python-software-properties and rabbitmq-server | 17:49 |
zigo-_- | Why aren't these in the nova dependencies? | 17:49 |
zigo-_- | And python-greenlet python-mysqldb? | 17:50 |
*** pharkmillups has joined #openstack | 17:53 | |
*** spectorclan_ has quit IRC | 17:53 | |
*** imsplitbit has joined #openstack | 17:54 | |
uvirtbot | New bug: #745138 in nova "content-type for unauthorized requests is set incorrectly" [Undecided,In progress] https://launchpad.net/bugs/745138 | 17:56 |
*** gregp76 has left #openstack | 18:03 | |
*** photron has joined #openstack | 18:05 | |
*** CloudChris has quit IRC | 18:10 | |
cjreyn | im still suffering from this bug despite a fix. I presume the fix applies to the ppa? | 18:11 |
cjreyn | https://bugs.launchpad.net/nova/+bug/702741 | 18:11 |
uvirtbot | Launchpad bug 702741 in nova "failed to retrieve chardev info with 'info chardev'" [Critical,Fix released] | 18:11 |
*** nelson has quit IRC | 18:12 | |
*** nelson has joined #openstack | 18:12 | |
cjreyn | Its not clear whether this was a permissions problem, or a problem with the image disk types | 18:13 |
*** zaccone has quit IRC | 18:13 | |
uvirtbot | New bug: #745152 in nova "floating ip address allocated for project1 can be associated with the instances running for project2" [Undecided,New] https://launchpad.net/bugs/745152 | 18:16 |
*** zenmatt has quit IRC | 18:19 | |
*** zenmatt has joined #openstack | 18:20 | |
*** CloudChris has joined #openstack | 18:21 | |
ttx | jk0: cool, thanks | 18:25 |
ttx | soren: could you look at lxc and give it the final push ? Should be ok now | 18:26 |
ttx | soren: https://code.launchpad.net/~zulcss/nova/nova-lxc/+merge/55260 | 18:26 |
*** spectorclan_ has joined #openstack | 18:28 | |
*** st-14258 has quit IRC | 18:28 | |
zigo-_- | Starting nova api: nova-api2011-03-29 18:28:38,664 AUDIT nova.api [-] Starting nova-api node (version 2011.1.1-workspace:tarmac-20110224184504-4e19t5nx33b8gpy9) | 18:29 |
zigo-_- | 2011-03-29 18:28:38,664 ERROR nova.api [-] No paste configuration found for: nova-api.conf | 18:29 |
zigo-_- | . | 18:29 |
zigo-_- | Quite ugly ... :) | 18:29 |
zigo-_- | Is there a way to start the daemon so that it doesn't shout on the console? | 18:29 |
*** purpaboo is now known as lurkaboo | 18:30 | |
*** kashyap has quit IRC | 18:31 | |
*** matiu has quit IRC | 18:35 | |
annegentle | um wow. We have nearly doubled the page count of the Compute Admin guide PDF this release... | 18:37 |
*** johnpur has quit IRC | 18:37 | |
*** gasbakid has joined #openstack | 18:38 | |
annegentle | 44 pages -> 82 pages. | 18:38 |
termie | annegentle: is that a good thing? | 18:38 |
kpepple_ | zigo-_-: add --daemonize to your /etc/nova/nova.conf file | 18:40 |
zul | termie: ping where in the lxc branch? | 18:43 |
termie | zul: heya | 18:44 |
termie | zul: what is your question? | 18:44 |
zigo-_- | I don't need to, I'm using start-stop-daemon. | 18:44 |
*** dirakx has quit IRC | 18:44 | |
zul | termie: im not sure where the extra white spaces you are talking about are | 18:44 |
termie | zul: i have commented twice on your branch and given exact line numbers | 18:44 |
termie | zul: please re-read my comments | 18:44 |
justinsb | zul: Or you could try using the termie bot! | 18:44 |
termie | zul: perhaps you need to look at it in the web interface if you have been using the email interface? | 18:45 |
zul | termie: probably :) | 18:45 |
termie | justinsb: i really expected that pep8 would catch two spaces between {'asd': 'foo'} | 18:45 |
*** burris has joined #openstack | 18:45 | |
justinsb | termie: Oh.. the termiebot won't catch that. Does pep8 not catch it? I'll check... | 18:46 |
zul | it doesnt i just ran pep8 nova/tests/test_virt.py | 18:46 |
*** mgoldmann has quit IRC | 18:47 | |
termie | zul: weird, that seems very much the kind of thing it would catch | 18:47 |
*** gondoi has joined #openstack | 18:50 | |
*** burris has quit IRC | 18:50 | |
zul | termie: also i changed the docstrings to the following: http://pastebin.ubuntu.com/586986/ | 18:51 |
termie | zul: awesome | 18:51 |
termie | zul: i think we technically want there to be an extra newline before the closing quotes | 18:52 |
termie | zul: but that is some thing for emacs users and as a vim user i have yet to see anybody make use of it so i tend to let it slide | 18:52 |
termie | zul: i think termiebot will catch it | 18:53 |
zul | where is the termiebot anyways? | 18:53 |
termie | zul: it is still a branch justin is working on | 18:53 |
zul | ah ok | 18:53 |
termie | https://code.launchpad.net/~justin-fathomdb/nova/termie-bot/+merge/55031 | 18:53 |
justinsb | termie: Just added OneSpaceInDictionary rule: ":\s(\s+)'" | 18:54 |
justinsb | termie: One false positive (confused by a docstring, but otherwise seems to just catch real issues) | 18:55 |
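(A minimal, self-contained sketch of the rule justinsb describes -- the constant name and the little harness are illustrative: the regex flags a dict literal with more than one space after the colon.)

```python
import re

# Flag dictionary literals that have extra whitespace after the ':'.
ONE_SPACE_IN_DICTIONARY = re.compile(r":\s(\s+)'")

for line in ["d = {'asd': 'foo'}", "d = {'asd':  'foo'}"]:
    if ONE_SPACE_IN_DICTIONARY.search(line):
        print("extra whitespace after ':' in %r" % line)
```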
justinsb | termie: Are docstrings allowed to be in single quotes (''') ? | 18:55 |
*** burris has joined #openstack | 18:55 | |
termie | justinsb: why does checker_base_classes even exist? | 18:55 |
termie | justinsb: they probably technically are but I'd say stick to the normal dbl quotes | 18:56 |
*** zaccone has joined #openstack | 18:56 | |
justinsb | termie: So that it doesn't try to run Checker and SimpleRegexChecker (the helper base classes) | 18:56 |
termie | justinsb: i don't see why there is a need to check whether something is a checker base class | 18:56 |
justinsb | termie: I suspect there's a more Pythonic way... | 18:56 |
termie | justinsb: i'd probably just make a registry and list the tests you want to run | 18:56 |
*** ctennis has quit IRC | 18:56 | |
*** ctennis has joined #openstack | 18:57 | |
zul | termie: ok pushed | 18:57 |
*** burris has quit IRC | 18:57 | |
termie | so at the end just say 'checks = [OneSpaceInTodoOrNotes, ClaimYourTodoAndNotes]' | 18:57 |
justinsb | termie: If I'm going to have a registry, I'd rather have just two exceptions in there, rather than dealing with forgetting to put real checks into the registry each time | 18:57 |
*** burris has joined #openstack | 18:57 | |
termie | justinsb: "each time" ? | 18:58 |
justinsb | termie: Each time we add a test | 18:58 |
termie | justinsb: that's not the largest mental burden in the world | 18:58 |
justinsb | termie: True, but I can see myself screwing it up :-) | 18:59 |
*** nelson has quit IRC | 18:59 | |
*** drico has joined #openstack | 18:59 | |
*** nelson has joined #openstack | 18:59 | |
ttx | Meeting in 2 hours in #openstack-meeting ! | 18:59 |
termie | justinsb: checking that stuff is derived from a base class is not really the pythonic way of handling this | 19:00 |
termie | justinsb: we do duck-typing, any class that conforms to the interface should be fine | 19:00 |
blamar | If any Cores have a second to check out a High priority bug fix, it would be much appreciated by myself and dprince: https://code.launchpad.net/~rackspace-titan/nova/lp742204/+merge/55200 | 19:00 |
justinsb | termie: I think you're trying to lure me into a language flame war... :-) | 19:01 |
*** estranho has joined #openstack | 19:01 | |
termie | justinsb: not at all, just trying to get rid of your extra code ;) | 19:01 |
*** sparkycollier has quit IRC | 19:01 | |
*** aixenv2 has quit IRC | 19:01 | |
eday | justinsb: not sure if it's a good fit for this issue, but from what you're saying here it sounds like you may want to use Python metaclasses (can be used for automatic registry hooks) | 19:02 |
termie | eday: aye but i rarely want to suggest metaclasses to people, big rabbit hole | 19:02 |
termie | eday: just a decorator is enough | 19:02 |
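(A minimal sketch of the decorator-based registry termie suggests as a lighter alternative to metaclasses -- the class and function names are made up for the example, not taken from the termie-bot branch.)

```python
CHECKS = []

def register(cls):
    """Class decorator: remember the checker so the runner can find it."""
    CHECKS.append(cls)
    return cls

@register
class OneSpaceInDictionary(object):
    def check(self, line):
        # Fail lines with two spaces after a ':' inside a dict literal.
        return ':  ' not in line

def run_all(line):
    return all(checker().check(line) for checker in CHECKS)

print(run_all("d = {'a': 'b'}"))   # True
print(run_all("d = {'a':  'b'}"))  # False
```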
*** burris has quit IRC | 19:02 | |
*** aixenv has joined #openstack | 19:02 | |
*** sparkycollier has joined #openstack | 19:04 | |
justinsb | eday, termie: Off to read about metaclasses | 19:05 |
mtaylor_ | jaypipes: ola hombre | 19:05 |
justinsb | termie: I think if you don't derive everything from Checker the line count will go up... | 19:07 |
termie | justinsb: i'm not saying not to derive it from checker | 19:07 |
termie | justinsb: i am saying it doesn't matter where things are derived from | 19:07 |
justinsb | termie: Oh... the 'check' discovery side | 19:07 |
justinsb | termie: Don't know why I put that in quotes. More coffee needed I think | 19:08 |
*** burris has joined #openstack | 19:09 | |
jaypipes | mtaylor_: ya? | 19:10 |
mtaylor_ | jaypipes: are there blueprints in places that talk about automated testing needs? | 19:11 |
*** mtaylor_ is now known as mtaylor | 19:11 | |
kbringard | is it still required that you specify image_type=machine when putting an image in glance? | 19:11 |
termie | justinsb: http://pastie.org/1732213 you can do something like that if you want it to be dynamic | 19:11 |
termie | just build the CHECKERS in main() | 19:11 |
termie | justinsb: somewhat unusual to use 'do_*' also, ftr | 19:11 |
jaypipes | mtaylor: one sec | 19:12 |
jaypipes | mtaylor: this was about as far as I got: http://wiki.openstack.org/NovaTestingHudson. There is a link to a blueprint on there, but lots of folks have gone in varying directions over the past 2 months. Nothing coordinated. | 19:15 |
jaypipes | dprince: merging lp:~dan-prince/glance/purge_props. Not a huge fan, but as you say, it's a hotfix for right now. we can discuss a nicer approach at the summit. | 19:17 |
justinsb | termie: run_check a better name than do_check? | 19:17 |
justinsb | termie: I have an aversion to catching exceptions without either logging or rethrowing, which is the problem with the pastie approach. | 19:18 |
justinsb | termie: e.g. I want to be told if a class forgot to implement the correct check method | 19:19 |
justinsb | termie: Or if someone upstream decided to rename do_check -> run_check :-) | 19:19 |
*** MarkAtwood has left #openstack | 19:20 | |
*** burris has quit IRC | 19:26 | |
dabo | justinsb: that's because exceptions in Python are not errors | 19:26 |
justinsb | termie, vishy: Can I get you to look at this pls... trying to figure out what should happen in terms of restarting / shutting down instances: | 19:27 |
justinsb | termie, vishy: http://bazaar.launchpad.net/~justin-fathomdb/nova/restart-instance/view/head:/nova/virt/driver.py | 19:27 |
dabo | they are no different than if/else tests | 19:27 |
justinsb | dabo: Well, that's good to know. It's not a bug, it's a feature :-) | 19:27 |
dabo | no, it's a design | 19:27 |
dprince | jaypipes: sure. I couldn't think of a better way to pull it off without adding a major new interface changing feature. Sounds good. | 19:28 |
markwash | termie: can you check out https://code.launchpad.net/~rackspace-titan/nova/change-password-v1-1/+merge/54917 again to see if I adequately fixed things up in response to your comments? | 19:28 |
openstackjenkins | Project nova build #739: SUCCESS in 2 min 26 sec: http://hudson.openstack.org/job/nova/739/ | 19:28 |
openstackjenkins | Tarmac: Glance used to return None when a date field wasn't set, now it returns ''. | 19:28 |
openstackjenkins | Glance used to return dates in format "%Y-%m-%dT%H:%M:%S", now it returns "%Y-%m-%dT%H:%M:%S.%f". | 19:28 |
openstackjenkins | Fixed to allow for all cases. | 19:28 |
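(A minimal sketch of what that fix amounts to -- the helper name is hypothetical: accept Glance timestamps with or without fractional seconds, and treat the empty string, the new "not set" value, as None.)

```python
from datetime import datetime

def parse_glance_timestamp(value):
    """Parse either Glance timestamp format; '' or None means not set."""
    if not value:
        return None
    for fmt in ("%Y-%m-%dT%H:%M:%S.%f", "%Y-%m-%dT%H:%M:%S"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized timestamp: %r" % value)

print(parse_glance_timestamp("2011-03-29T19:28:00"))
print(parse_glance_timestamp("2011-03-29T19:28:00.123456"))
```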
dabo | justinsb: here's what I mean. The second example is a much more Pythonic way of handling things. http://pastie.org/1732300 | 19:30 |
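(The pastie link has since expired; this is only a guess at the shape of the contrast dabo is drawing, using the do_check naming from this discussion: check-before-call versus simply calling and treating the exception as ordinary control flow.)

```python
def run_lbyl(checker, line):
    # "Look before you leap": test for the method explicitly.
    if hasattr(checker, 'do_check'):
        return checker.do_check(line)
    return None

def run_eafp(checker, line):
    # The style dabo calls more Pythonic: just call it, and treat the
    # AttributeError like an if/else branch rather than a fatal error.
    try:
        return checker.do_check(line)
    except AttributeError:
        return None
```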
justinsb | dabo: But suppose upstream renames do_check to run_check. How do you detect that? | 19:31 |
dabo | justinsb: not sure what you're describing. | 19:31 |
dabo | how would that happen? | 19:31 |
justinsb | dabo: You write a checker that implemented do_check. It checks something important. I then rename do_check to run_check. You don't realize, so your test isn't being run. That shouldn't happen silently. | 19:32 |
kbringard | whoever just fixed that glance bug (I think it was Tarmac) thanks, you rule | 19:33 |
Ryan_Lane | vishy: 2 seconds for a role lookup? This seems insanely high. how many roles are on this ldap server? | 19:34 |
Ryan_Lane | vishy: is the ldap server doing proper indexing? | 19:34 |
dabo | justinsb: I'm not sure why you would rename an essential function. Things *should* blow up when someone does something stupid like that. But you don't litter your code with tons of defensive checks to prevent someone from doing stupid things - that's what code reviews are for. | 19:34 |
justinsb | dabo: I would rename the essential function because termie suggested I do it above. | 19:35 |
Ryan_Lane | I avoided doing in-memory caching of roles, but maybe I'll take the time to implement it | 19:35 |
Ryan_Lane | it adds a lot of complexity | 19:35 |
justinsb | dabo: Who's to say what you're doing in your derived branch, so it won't be part of the code review | 19:35 |
justinsb | dabo: I'd suggest that the number of defensive checks are higher in the Pythonic way than in the OO way | 19:36 |
vishy | Ryan_Lane: it is specifically the way that the roles are looked up | 19:36 |
Ryan_Lane | I also avoided some performance improvements because it would cause major changes in the ldap code… I can likely implement those too | 19:36 |
vishy | Ryan_Lane: it makes 5 or 6 requests to ldap | 19:36 |
dabo | justinsb: isn't that for a new function? Suggesting clear names is always done for new functions; rarely does a function that is already implemented throughout the code get its name changed | 19:36 |
Ryan_Lane | yep | 19:36 |
dprince | kbringard: I broke glance in 99. Sorry for that. With that latest commit is should be good to go again. | 19:36 |
Ryan_Lane | I can make that less | 19:36 |
Ryan_Lane | but it's going to take me time because I'm going to have to make some fairly substantial changes | 19:37 |
vishy | Ryan_Lane: it is the number of requests that makes it slow, but we have a very simple cache on roles to make it go really fast | 19:37 |
dabo | justinsb: if it's never going to be merged into trunk, then you're right - you can do whatever you want to it. | 19:37 |
dabo | justinsb: And you could suggest that, but you'd be wrong. | 19:37 |
* Ryan_Lane nods | 19:37 | |
vishy | Ryan_Lane: check out the linked branch if you want, it is pretty simple | 19:37 |
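(The linked branch isn't shown in this log; below is a minimal sketch of the kind of per-process cache being described -- the function names and the 60-second TTL are assumptions, not the actual nova code: remember each lookup briefly so repeated role checks don't each make several LDAP round-trips.)

```python
import time

_ROLE_CACHE = {}
_ROLE_TTL = 60  # seconds; assumed value for the example

def cached_roles(user_id, project_id, lookup):
    """Return cached roles, falling back to lookup(user_id, project_id)."""
    key = (user_id, project_id)
    hit = _ROLE_CACHE.get(key)
    if hit is not None and time.time() - hit[0] < _ROLE_TTL:
        return hit[1]
    roles = lookup(user_id, project_id)
    _ROLE_CACHE[key] = (time.time(), roles)
    return roles
```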
Ryan_Lane | ah ok. you already implemented fixes. | 19:38 |
vishy | Ryan_Lane: in diablo, we're moving all projects/groups/roles into authn | 19:38 |
vishy | and out of nova | 19:38 |
termie | justinsb: sorry was afk... 'check' is a better name than 'do_check' i'd say | 19:38 |
Ryan_Lane | what's that mean? | 19:38 |
vishy | Ryan_Lane: it would be a great opportunity to make it work how it should work | 19:38 |
kbringard | dprince: no worries, was it the GLANCE_FMT instead of ISO_FMT in the return? | 19:38 |
vishy | Ryan_Lane: I'm sure I could use some help to make the IDM more "Ldappy" | 19:39 |
termie | justinsb: re catching exceptions and renaming and whatnot, this is still a very small and easy to understand file | 19:39 |
Ryan_Lane | IDM as in using something like SAML? | 19:39 |
Ryan_Lane | I noticed that discussion | 19:39 |
Ryan_Lane | most IDMs have an LDAP backend | 19:39 |
justinsb | termie: I just don't get it. It just seems like it costs us nothing to engineer this 'right'. We can do something more Pythonic, but we give up a lot of help from the compiler/runtime, and I don't see what we're getting in return. | 19:40 |
*** bkkrw has joined #openstack | 19:40 | |
Ryan_Lane | vishy: so how does this affect the LDAP support? | 19:40 |
Ryan_Lane | I'm relying on it fairly heavily right now :) | 19:40 |
dprince | kbringard: Sorry. That was a Glance image service bug. Lamar got that one. Glance was returning ISO_FMT(in the old code). The new code accepts both formats. | 19:40 |
kbringard | dprince: nice, well good work you guys :-) | 19:41 |
kbringard | how long does it usually take for the new build to prop out to the apt repo? | 19:41 |
vishy | Ryan_Lane: we could do something like saml, but my thinking was to just take the ldap stuff we have now and fix it | 19:42 |
Ryan_Lane | ah ok | 19:42 |
Ryan_Lane | sounds good | 19:42 |
*** vernhart has quit IRC | 19:42 | |
vishy | i.e.: remove the pseudo-roles like project_manager | 19:42 |
vishy | turn everything into groups, instead of differentiating projects and roles | 19:42 |
termie | justinsb: it isn't 'right', it is overly complex; it is more important to be able to easily iterate and make changes than it is to prevent future coders from being able to break something | 19:42 |
vishy | etc. | 19:42 |
Ryan_Lane | how would that work for multiple projects? | 19:43 |
vishy | Ryan_Lane: I could probably use your help with it | 19:43 |
Ryan_Lane | sounds good | 19:43 |
termie | justinsb: your current implementation locks down which classes can be used | 19:43 |
Ryan_Lane | I'll be at the design summit | 19:43 |
vishy | Ryan_Lane: business are requesting more control | 19:43 |
Ryan_Lane | back in like 1 hour. lunch :) | 19:43 |
Ryan_Lane | sorry | 19:43 |
vishy | so they need organizations, business units etc. | 19:43 |
vishy | np. | 19:43 |
*** Ryan_Lane is now known as Ryan_Lane|lunch | 19:43 | |
termie | justinsb: if somebody somehow makes a mistake and doesn't name their method correctly it will be pretty obvious to them when they test it | 19:43 |
justinsb | termie: Yes. There's a contract that must be implemented by a checker. It has to implement do_check (or whatever we call it). That contract can either be explicit or implicit. | 19:43 |
soren | vishy: You said you ran your tests yesterday against current trunk? | 19:44 |
justinsb | termie: But ... why bother? Why not get some help from the computer? | 19:44 |
soren | vishy: How did you make it past this? https://bugs.launchpad.net/nova/+bug/745016 | 19:44 |
uvirtbot | Launchpad bug 745016 in nova "Unconditionally injects network configuration" [High,New] | 19:44 |
termie | justinsb: it isn't help | 19:44 |
termie | justinsb: it is just weight | 19:44 |
termie | justinsb: what you are trying to accomplish is a very easy task in python and people are used to dealing with the pattern that i pastie'd | 19:46 |
termie | justinsb: they are not used to explicit contracts, and they needn't be as the language provides a lot of flexibility as to how to fulfill implicit contracts | 19:47 |
soren | termie: Is this about the isinstance check? | 19:48 |
soren | isinstance-ish. | 19:48 |
justinsb | termie: I believe a lot of our nova bugs are related to not having explicit contracts. | 19:48 |
zul | termie: hey i think i fixed the extra spaces can you have a look? | 19:49 |
*** adiantum has quit IRC | 19:49 | |
termie | soren: not really, more about the approach to validating whether something can be used as a checker | 19:49 |
termie | zul: 'updating diff' so it'll be a minute | 19:49 |
justinsb | termie: I thought it was about the use of type-checking vs duck-typing, which boils down to the isinstance-ish check? | 19:50 |
*** estranho has quit IRC | 19:50 | |
termie | justinsb: it boils down in implementation to your isinstance-ish test, yes, and type-checking vs duck-typing is what i said as far as i can tell | 19:50 |
termie | justinsb: i am mostly trying to convince you on how python does things | 19:51 |
termie | justinsb: the implementation at this point is trivial compared to the conversation, it is only 10 lines of code either way | 19:52 |
justinsb | termie: And I'm trying to convince you that just because a high priest tells you to do something, it doesn't mean it's right :-) | 19:52 |
justinsb | termie: I agree that we've probably blown it out of proportion :-) | 19:52 |
termie | justinsb: i'm not claiming any high priests are involved, i am saying one way is how a person normally solves this problem in python and the other way will be unusual to anybody else looking at the code | 19:53 |
soren | just for reference: I'm with termie on this. isinstance is generally frowned upon. If we really wanted to be more explicit about contracts, something like zope-interfaces are more widely accepted, but I'm not sure we want to go down that path. | 19:53 |
termie | also, if soren agrees with me it must be true :p | 19:53 |
soren | True that. | 19:53 |
soren | :) | 19:53 |
justinsb | termie: I thought interfaces in OO were generally accepted | 19:54 |
justinsb | termie, soren: I don't think I've ever seen the two of you agree :-) Why don't you talk about LP ;-) | 19:54 |
soren | justinsb: I won't make a habit of it, I promise :) | 19:54 |
justinsb | termie: I'd be happy to do it the Pythonic way if I thought we got something in return for giving up the missing-implementation checking | 19:55 |
justinsb | termie: So what do we get in return? Not having to derive from Checker? | 19:56 |
termie | justinsb: we get readability and flexibility | 19:56 |
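For readers skimming the checker debate above, here is a minimal sketch of the two styles being argued. The names CheckerBase, do_check and StandaloneChecker are hypothetical stand-ins, since the actual pastie and branch code are not in this log; the point is only the shape of the type-checking approach versus the duck-typed one termie advocates.

```python
# Sketch only: CheckerBase, do_check and StandaloneChecker are invented
# names standing in for whatever the real branch uses.

class CheckerBase(object):
    """Explicit contract: subclasses must implement do_check()."""
    def do_check(self, resource):
        raise NotImplementedError()


def run_explicit(checker, resource):
    # type-checking style: reject anything that isn't a CheckerBase
    if not isinstance(checker, CheckerBase):
        raise TypeError("checker must derive from CheckerBase")
    return checker.do_check(resource)


def run_duck_typed(checker, resource):
    # duck-typing style: anything with a do_check() method is a checker;
    # a misnamed method fails loudly the first time the tests call it
    return checker.do_check(resource)


class StandaloneChecker(object):
    """Fulfills the implicit contract without inheriting from anything."""
    def do_check(self, resource):
        return resource is not None


print(run_duck_typed(StandaloneChecker(), {'id': 1}))   # True
# run_explicit(StandaloneChecker(), {'id': 1})          # would raise TypeError
```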
*** j05h has quit IRC | 19:57 | |
*** burris has joined #openstack | 19:58 | |
soren | I would really appreciate another review of https://code.launchpad.net/~openstack-gd/nova/lp745016/+merge/55397 It's rather devastating to my tests. | 19:58 |
jk0 | on it | 19:59 |
ttx | we need a quotes page on the wiki. I'd certainly add that one: "<termie> also, if soren agrees with me it must be true :p" | 19:59 |
*** jfluhmann has quit IRC | 19:59 | |
*** colinnich has left #openstack | 20:00 | |
*** jfluhmann has joined #openstack | 20:00 | |
ttx | (Meeting in one hour in #openstack-meeting !) | 20:00 |
soren | mtaylor: You're much more of a Jenkins guru than I.. Say I wanted to automatically (programmatically, that is) trigger a parameterised build, and run a bunch of other tests afterwards, and somehow get back single pass/fail.. Can I do that? | 20:00 |
justinsb | termie: Can we talk about the power state stuff instead? | 20:01 |
mtaylor | soren: hehe. sort of but not really | 20:01 |
soren | mtaylor: darn it. | 20:01 |
termie | justinsb: yssir | 20:01 |
mtaylor | soren: are you wanting to have tarmac trigger a jenkins job and get a result back? | 20:01 |
termie | justinsb: too many windows open | 20:01 |
soren | mtaylor: s/tarmac/something/ but yes. | 20:02 |
termie | justinsb: why did you link to a file? | 20:02 |
mtaylor | soren: yeah - so, I've got some todo list items on my plate to support that | 20:02 |
justinsb | termie: No worries. Because I wanted to just talk about the behaviour, not the whole patch. | 20:02 |
termie | k | 20:02 |
btorch | I just installed nova-compute 2011.2~bzr907-0ubuntu0ppa1 on a new box and nova-compute.log keeps on giving "NotFound: Class get_connection cannot be found" <- box is set up the same way as the other compute nodes that have a lower version | 20:02 |
justinsb | termie: Just about what should happen in various scenarios | 20:03 |
btorch | stupid suds | 20:03 |
soren | mtaylor: I have some ideas about how to get to where I want, but I was just wondering if there was a built-in sort of rpc mechanism. Ok. | 20:03 |
justinsb | termie: Then, if it's a straight boolean, I can rework the rest of the patch | 20:03 |
termie | btorch: usually you get that when missing a dependency | 20:03 |
btorch | termie: yeah python suds | 20:03 |
mtaylor | soren: there is to a degree - you can trigger jobs via REST | 20:03 |
mtaylor | soren: it's the chaining of jobs together and getting a result back that's the real trick - BUT - you can have jobs take post-complete actions | 20:04 |
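As a concrete illustration of the "trigger jobs via REST" remark: parameterized Hudson/Jenkins jobs expose a buildWithParameters endpoint that can be hit remotely. The job name and parameter below are placeholders, not the project's real configuration, and newer Jenkins versions may require a POST with an auth token rather than a plain GET.

```python
# Hedged sketch: 'nova-install' and BRANCH_URL are invented placeholders;
# buildWithParameters is the stock remote-trigger endpoint for
# parameterized Hudson/Jenkins jobs.
import urllib
import urllib2

jenkins = 'http://hudson.openstack.org'
params = urllib.urlencode({'BRANCH_URL': 'lp:~someone/nova/some-branch'})
urllib2.urlopen('%s/job/nova-install/buildWithParameters?%s'
                % (jenkins, params))
```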
justinsb | termie: It's just Action and PersistenceMode (and let's try to avoid talking about whether it's Pythonic until we agree the behaviour) :-) | 20:04 |
termie | justinsb: where is your merge prop? oh i bet it isn't on the active page | 20:04 |
mtaylor | soren: so if it's possible for your calling thing to then set itself into a state where it listens for callback responses, then you could set something up | 20:04 |
termie | justinsb: can it wait a little bit? vish is eating a sandwich | 20:05 |
soren | mtaylor: What I'm thinking is this: | 20:05 |
termie | and he wants to be involved | 20:05 |
justinsb | termie: It's in WIP. It's here: https://code.launchpad.net/~justin-fathomdb/nova/restart-instance/+merge/54652 | 20:05 |
*** omidhdl has joined #openstack | 20:05 | |
justinsb | termie: Can definitely wait for vish to eat | 20:05 |
justinsb | termie: I hope it's not another questionable meatball sub :-) | 20:05 |
*** j05h has joined #openstack | 20:05 | |
jk0 | soren: approved | 20:05 |
termie | actually... | 20:05 |
*** dprince has quit IRC | 20:06 | |
soren | mtaylor: The parameterized job (which installs nova from a given branch) would store the branch url in an artifact. The actual test jobs would grab this artifact. They will track whether they passed or failed along with the branch url somewhere. | 20:06 |
soren | mtaylor: Once they've all reported in, I could check if they all passed. If they did, merge the branch. | 20:07 |
* soren hugs jk0 | 20:07 | |
jk0 | I was actually already looking at it when you asked :P | 20:08 |
mtaylor | soren: yes. although there's a slightly easier way ... | 20:08 |
*** jfluhmann has quit IRC | 20:08 | |
mtaylor | soren: which is there is a promoted build plugin, which allows you to have a job take actions only if all of its child jobs returned success | 20:09 |
mtaylor | soren: the problem is, it doesn't work with the parameterized build plugin | 20:09 |
mtaylor | soren: also, what you're talking about is something I was planning on taking care of as part of moving the logic of tarmac inside of jenkins | 20:10 |
mtaylor | soren: because at the end of the day - the fact that there are two tools and not one doing this task is, of course, silly | 20:11 |
btorch | termie: is the failed output of a simple nova-manage cmd such as "nova-manage db version" also a dep issue that you may have seen before ? "Command failed, please check log for more info" | 20:12 |
btorch | termie: btw the command works fine though | 20:13 |
termie | btorch: i don't really ever use nova-manage so i can't help much there | 20:13 |
openstackjenkins | Project nova build #740: SUCCESS in 2 min 30 sec: http://hudson.openstack.org/job/nova/740/ | 20:14 |
openstackjenkins | Tarmac: This branch adds support for linux containers (LXC) to nova. It uses the libvirt LXC driver to start and stop the instance. | 20:14 |
devcamcar | mtaylor: i'm back from vacation and harassing you about pypi again :) | 20:14 |
soren | mtaylor: I very much agree. | 20:16 |
mtaylor | soren: cool. I'll write up some blueprints and stuff | 20:18 |
*** lvaughn_ has quit IRC | 20:18 | |
openstackjenkins | Project nova build #741: SUCCESS in 2 min 27 sec: http://hudson.openstack.org/job/nova/741/ | 20:20 |
openstackjenkins | Tarmac: Now checking that exists at least one network marked injected (libvirt and xenapi) | 20:20 |
pquerna | hey, is anyone around who hacks on burrow? is it still going forward? | 20:20 |
termie | pquerna: eday ^^ | 20:20 |
pquerna | eday: hey.. | 20:21 |
*** nerens has quit IRC | 20:21 | |
*** HouseAway is now known as AimanA | 20:25 | |
zul | woot...thanks for the merge | 20:26 |
*** jfluhmann has joined #openstack | 20:26 | |
ttx | (Team meeting in 30 min. in #openstack-meeting) | 20:30 |
openstackjenkins | Project swift build #230: SUCCESS in 30 sec: http://hudson.openstack.org/job/swift/230/ | 20:32 |
openstackjenkins | Tarmac: Check the md5sum against metadata ETag on object GETs, and zero byte checks on GETs, HEADs, and POSTs. | 20:32 |
soren | mtaylor: Ok, so a job will go through merge proposals.. If it finds an approved one, it'll run the unit tests. If they pass, it'll trigger another (set of) job(s) passing them the branch url. These jobs will install Nova and trigger a bunch of other jobs that run some smoketests against the freshly installed nova. When all these things have run, a job will be triggered. If they all were successful, it'll merge the branch and push it. If they failed, it'l | 20:32 |
uvirtbot | New bug: #745231 in nova "nova-manage reports "Command Failed"" [Undecided,New] https://launchpad.net/bugs/745231 | 20:32 |
*** jero has left #openstack | 20:32 | |
soren | mtaylor: something like that? | 20:32 |
vishy | soren: I didn't, I was using ami-tty, which is immune to injection | 20:32 |
soren | vishy: clever. | 20:32 |
mtaylor | soren: yes | 20:32 |
*** vernhart has joined #openstack | 20:33 | |
soren | mtaylor: Ok. The tricky bits (as in: the parts that my current setup cannot easily do) are: making sure that no new install jobs are run while still testing a previous one.. | 20:33 |
*** jakedahn has joined #openstack | 20:33 | |
mtaylor | soren: already have a solution for that one ... | 20:34 |
soren | mtaylor: ...and running a particular job after another set of tests has run. | 20:34 |
soren | mtaylor: ...or running a particular job after another number of runs of a set of tests has run. | 20:34 |
eday | pquerna: hey! | 20:34 |
mtaylor | soren: using ccustine's work that finished my jclouds plugin - so that we can spin up brand new machines for each test | 20:34 |
eday | pquerna: I can answer any burrow q's | 20:34 |
soren | mtaylor: That assumes that tests can run virtualised. | 20:35 |
mtaylor | soren: the two tricky bits in my brain are the reduction of results into a single pass/fail (running the merge job after other jobs have run) but I believe we can get hudson to do the hard bits of this for us already | 20:35 |
mtaylor | soren: indeed | 20:35 |
soren | mtaylor: That's not the case. | 20:35 |
mtaylor | soren: but if they can't, hudson already knows how to only run one job on a given build slave at a time, so still not a problem | 20:35 |
soren | mtaylor: Well, yes it is :) | 20:36 |
vishy | btorch: that is a new addition if the command throws an exception | 20:36 |
soren | mtaylor: See, what I want is: | 20:36 |
mtaylor | soren: oh - you want one job to do the install and other jobs to test that install? | 20:36 |
soren | mtaylor: On one and the same slave, what I want to happen is... | 20:36 |
soren | mtaylor: Yes, exactly. | 20:36 |
mtaylor | soren: k. yeah- we'll have to sort that | 20:36 |
vishy | justinsb: done eating | 20:37 |
mtaylor | the other problem is sensibly reporting errors back to the merge prop, which becomes a little bit hairier once we're all complex like this | 20:37 |
soren | mtaylor: I can tell Jenkins to not run a job if its upstream jobs are running. | 20:37 |
Ryan_Lane|lunch | vishy: back | 20:37 |
*** Ryan_Lane|lunch is now known as Ryan_Lane | 20:37 | |
soren | mtaylor: ...but that's only half the problem. | 20:37 |
mtaylor | yup. | 20:37 |
btorch | vishy: my commands are not throwing any exceptions | 20:38 |
soren | mtaylor: I don't think that'll be particularly hard. | 20:38 |
mtaylor | soren: it may just be some patches to the promoted build plugin | 20:38 |
mtaylor | soren: no - neither are particularly hard- they just involve a bit of code | 20:38 |
soren | mtaylor: We can just aggregate links back to the test results on the jenkins instance. | 20:38 |
vishy | btorch: so there are no errors in nova-manage.log? | 20:38 |
btorch | vishy: or at least not being logged I guess | 20:38 |
btorch | vishy: nope | 20:38 |
soren | mtaylor: No need to copy the data. | 20:38 |
*** omidhdl has quit IRC | 20:38 | |
soren | mtaylor: Hmm... Well, I gues there is. | 20:38 |
mtaylor | soren: that's not the hard part there - the hard part is the reporting itself | 20:38 |
btorch | vishy: https://bugs.launchpad.net/nova/+bug/745231 | 20:38 |
uvirtbot | Launchpad bug 745231 in nova "nova-manage reports "Command Failed"" [Undecided,New] | 20:38 |
mtaylor | soren: well - when I say "hard part" - I mean the part that necessitates writing some code | 20:39 |
*** gondoi has quit IRC | 20:39 | |
soren | mtaylor: He :) | 20:39 |
soren | heh, even. | 20:39 |
vishy | hahaha | 20:39 |
mtaylor | soren: because without jenkins actually understanding that this is what we're doing - it's going to get really kludgy really fast | 20:39 |
*** devdvd has quit IRC | 20:39 | |
vishy | btorch: i see the problem | 20:39 |
vishy | time for a ninja patch | 20:40 |
*** gondoi has joined #openstack | 20:40 | |
mtaylor | soren: but once jenkins itself understands how to report results back to merge props, then we can use its error consolidation abilities to produce the message | 20:40 |
Ryan_Lane | vishy: so it's the multi-tenant and shared auth threads that are prompting the authn change? | 20:40 |
vishy | sys.exit(0) apparently throws an exception | 20:40 |
vishy | Ryan_Lane: yes both | 20:40 |
Ryan_Lane | I skimmed those threads. hard to keep up with that much text though :) | 20:41 |
mtaylor | soren: so as I see it, the things jenkins needs to understand are "get list of approved merge props" "reject merge prop" "merge merge prop" ... and then "this set of jobs need to work in concert to produce a single result" | 20:41 |
Ryan_Lane | I'll get caught up before the design summit | 20:41 |
mtaylor | soren: does it sound like I'm missing anything there? | 20:41 |
justinsb | vishy, termie: When you're done ninja patching, the state transitions I want to agree are here: http://bazaar.launchpad.net/~justin-fathomdb/nova/restart-instance/view/head:/nova/virt/driver.py | 20:42 |
justinsb | vishy, termie: Action and PersistenceMode and the two PersistenceMode constants. First 120 lines | 20:43 |
justinsb | And anyone else that is interested in what should happen to instances when they are shut down, or when the host reboots etc! | 20:43 |
*** pothos has quit IRC | 20:43 | |
vishy | btorch: https://code.launchpad.net/~vishvananda/nova/fix-nova-manage-log/+merge/55432 | 20:45 |
*** pothos has joined #openstack | 20:45 | |
kpepple_ | vishy: does that (^^^) merge also take care of lp 745231 ? | 20:46 |
uvirtbot | Launchpad bug 745231 in nova "nova-manage reports "Command Failed"" [Undecided,New] https://launchpad.net/bugs/745231 | 20:46 |
soren | mtaylor: Nope, that sounds pretty much accurate. | 20:46 |
vishy | kpepple_: yes didn't know there was a bug for it | 20:46 |
kpepple_ | vishy: just got filed 10 minutes ago | 20:47 |
vishy | kpepple_: linked | 20:47 |
*** pothos has quit IRC | 20:48 | |
kbringard | this may be a silly question... but is it possible to set a default kernel and ramdisk when using glance? | 20:48 |
*** pothos has joined #openstack | 20:48 | |
kbringard | when I upload an image with the euca2ools I just do --kernel and --ramdisk | 20:48 |
vishy | kbringard: you can do it if you use nova-manage | 20:50 |
kbringard | ah, OK | 20:50 |
vishy | or you can set kernel_id and ramdisk_id in the properties section of the glance metadata | 20:50 |
kbringard | ahhh, ok, that's what I was looking for | 20:50 |
kbringard | perfect | 20:50 |
kbringard | I'd prefer to do it that way so I can do it all via the API | 20:51 |
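A hedged sketch of what "set kernel_id and ramdisk_id in the properties section of the glance metadata" could look like over the HTTP API, assuming the x-image-meta-property-* header convention applies to this release; the host, port, image id and kernel/ramdisk ids are placeholders, so check the glance documentation for your version before relying on the exact update semantics.

```python
# Assumption-heavy sketch: image 42 and ids 1/2 are placeholders, and the
# x-image-meta-property-* header convention is assumed rather than verified
# against this particular glance release.
import httplib

conn = httplib.HTTPConnection('127.0.0.1', 9292)
conn.request('PUT', '/images/42', body='', headers={
    'x-image-meta-property-kernel_id': '1',
    'x-image-meta-property-ramdisk_id': '2',
})
print(conn.getresponse().status)
```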
vishy | justinsb: so i think we need another option | 20:51 |
*** adiantum has joined #openstack | 20:51 | |
kbringard | thanks vishy | 20:51 |
vishy | justinsb: that just updates the state of the instance without doing anything | 20:51 |
justinsb | vishy: Sounds good. Do you have a concrete use case in mind? | 20:52 |
justinsb | vishy: Like an operations system that can fix things, and just wants the 'least magic'? | 20:52 |
vishy | justinsb: sure, i think it is fine to show the state as crashed to the user and allow the user to decide whether to relaunch it or not | 20:53 |
vishy | justinsb: in fact i think that is the "easiest", in your persistence, what happens if a machine crashes over and over and over | 20:54 |
justinsb | vishy: OK, so on_host_failure=Action.LEAVE_AS_IS, on_guest_shutdown=Action.LEAVE_AS_IS, on_live_migrate=Action.MAKE_RUNNING | 20:54 |
justinsb | vishy: So live migrate is allowed, but otherwise we leave as-is | 20:54 |
vishy | yeah, although i don't really get what on_live_migrate is there for? | 20:54 |
justinsb | vishy: If a machine continuously crashes then we'll keep rebooting it | 20:54 |
vishy | justinsb: also, why is this stuff in the driver instead of the manager? | 20:55 |
justinsb | vishy: I'd like to talk about those sorts of issues separately, if that's OK (code structure) | 20:56 |
vishy | justinsb: ok | 20:56 |
justinsb | vishy: The on_live_migrate is to say whether an instance should be live-migratable | 20:56 |
justinsb | vishy: I'm thinking some people may prefer to just have their instance shut down | 20:56 |
justinsb | vishy: For example, a webserver in a pool | 20:57 |
justinsb | vishy: I don't really care if you shut it down | 20:57 |
justinsb | vishy: But I do care if it runs slow / not at all for a few seconds | 20:57 |
justinsb | vishy: Legitimate or overkill? | 20:57 |
*** jakedahn has quit IRC | 20:57 | |
vishy | justinsb: I don't really understand how this fits in with persistence mode? | 20:57 |
vishy | justinsb: wouldn't that be instance metadata? | 20:58 |
*** dmshelton has joined #openstack | 20:58 | |
*** adiantum has quit IRC | 20:58 | |
justinsb | vishy: Well, for one thing we're not allowed instance metadata/properties at the moment | 20:58 |
ttx | Meeting starting in 2 minutes in #openstack-meeting, join NOW! | 20:58 |
vishy | justinsb: gotta pop into the meeting | 20:58 |
jk0 | thx for the reminder | 20:58 |
justinsb | vishy: But I'm thinking that 'persistence mode' is all about what efforts I want the cloud to go to to keep my machine running | 20:58 |
justinsb | vishy: ttyl | 20:59 |
vishy | justinsb: i see | 20:59 |
justinsb | vishy: It's not completely cut and dry... this is a discussion :-) | 20:59 |
vishy | justinsb: ok we'll chat more after the meeting | 20:59 |
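For context while the meeting runs: a hedged reconstruction of the Action / PersistenceMode idea under discussion, built only from the names mentioned above (on_host_failure, on_guest_shutdown, on_live_migrate, LEAVE_AS_IS, MAKE_RUNNING). The real definitions live in justinsb's restart-instance branch and may differ in names and defaults.

```python
# Hedged reconstruction, not the actual branch contents: names are taken
# from the discussion above, everything else is a guess.

class Action(object):
    """What to do with an instance after a lifecycle event."""
    LEAVE_AS_IS = 'leave_as_is'      # record the new state, touch nothing
    MAKE_RUNNING = 'make_running'    # bring the instance back up


class PersistenceMode(object):
    """Per-instance policy bundling one Action per event."""
    def __init__(self, on_host_failure, on_guest_shutdown, on_live_migrate):
        self.on_host_failure = on_host_failure
        self.on_guest_shutdown = on_guest_shutdown
        self.on_live_migrate = on_live_migrate


# The "leave it crashed, let the user decide" mode vishy asks for:
LEAVE_AS_IS_MODE = PersistenceMode(on_host_failure=Action.LEAVE_AS_IS,
                                   on_guest_shutdown=Action.LEAVE_AS_IS,
                                   on_live_migrate=Action.MAKE_RUNNING)
```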
*** miclorb has joined #openstack | 21:00 | |
*** anotherjesse has joined #openstack | 21:00 | |
*** Ep5iloN_ has quit IRC | 21:01 | |
*** mray has quit IRC | 21:03 | |
*** ctennis has quit IRC | 21:03 | |
*** benb_ has joined #openstack | 21:06 | |
*** Ep5iloN_ has joined #openstack | 21:06 | |
*** benb_ has quit IRC | 21:09 | |
*** h0cin has quit IRC | 21:09 | |
*** littleidea has joined #openstack | 21:09 | |
*** bcwaldon has quit IRC | 21:09 | |
*** adiantum has joined #openstack | 21:11 | |
openstackjenkins | Project nova build #742: SUCCESS in 2 min 25 sec: http://hudson.openstack.org/job/nova/742/ | 21:14 |
openstackjenkins | Tarmac: Stop nova-manage from reporting an error every time. Apparently except: catches sys.exit(0). | 21:14 |
*** sparkycollier has quit IRC | 21:14 | |
*** pandemicsyn has quit IRC | 21:14 | |
soren | Yeah, sys.exit throws SystemQuit or something, right? | 21:14 |
soren | Ah, SystemExit. | 21:14 |
jk0 | I think so, yeah | 21:14 |
jk0 | it's the clean version compared to the one in os | 21:15 |
jk0 | os._exit() or something | 21:15 |
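The ninja patch above comes down to a Python detail worth spelling out: sys.exit() raises SystemExit, and a bare except: clause catches it (SystemExit derives from BaseException, not Exception), so even a clean exit gets reported as "Command failed". A minimal reproduction:

```python
import sys

def command():
    print("work done")
    sys.exit(0)            # raises SystemExit(0) rather than returning

# A bare "except:" catches BaseException, so the clean exit above is
# reported as a failure -- this is the nova-manage bug:
try:
    command()
except:
    print("Command failed, please check log for more info")

# Catching Exception instead lets SystemExit propagate, so the script
# exits cleanly here after printing "work done":
try:
    command()
except Exception:
    print("Command failed, please check log for more info")
```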
*** pharkmillups has quit IRC | 21:16 | |
*** ctennis has joined #openstack | 21:17 | |
*** ctennis has joined #openstack | 21:17 | |
mtaylor | soren: did we decide if we wanted to use openstack-build or openstack-ci to track this work? | 21:19 |
soren | mtaylor: No. I asked about openstack-ci vs. openstack-devel a couple of days ago... Either I missed your response, or forgot. | 21:19 |
mtaylor | soren: I'm not sure if I made a good response ... do you have a pref? | 21:20 |
*** reldan has joined #openstack | 21:20 | |
soren | mtaylor: I've tried to form an opinion. I've failed. | 21:22 |
*** bcwaldon has joined #openstack | 21:22 | |
mtaylor | soren: ok. I'm going to file some blueprints - so I'll just pick one :) | 21:22 |
*** bcwaldon has quit IRC | 21:22 | |
*** syah has quit IRC | 21:25 | |
*** syah has joined #openstack | 21:26 | |
kbringard | running the newest trunk, when I use glance as the backend and run euca-describe-images, I get this in my API log: http://paste.openstack.org/show/1028/ | 21:26 |
kbringard | any ideas/ | 21:26 |
*** photron has quit IRC | 21:27 | |
*** gasbakid has quit IRC | 21:27 | |
kbringard | or perhaps I should say, is this a known issue? I searched around the bugs and answers pages, but didn't see anything... | 21:27 |
*** MotoMilind has quit IRC | 21:31 | |
*** dmshelton has quit IRC | 21:33 | |
*** ppetraki has quit IRC | 21:34 | |
justinsb | kbringard: I think that's this: https://code.launchpad.net/~justin-fathomdb/nova/ec2-api-with-glance | 21:34 |
justinsb | kbringard: It got a bit confusing | 21:35 |
kbringard | ok... I set some custom fields in properties so it shouldn't be empty :-/ | 21:37 |
kbringard | but it is odd that when I do a glance details it doesn't show them | 21:38 |
justinsb | kbringard: If you're on the latest code (or even if you're not), then maybe file a bug? Get more eyes on it... | 21:38 |
kbringard | okie dokie, I just wanted to make sure it wasn't a known issue first | 21:38 |
kbringard | thanks! | 21:39 |
justinsb | kbringard: It probably isn't getting the attention it deserves at the moment with FF | 21:39 |
kbringard | yea | 21:39 |
kbringard | no worries | 21:39 |
soren | mtaylor: Do you know if there's a plugin for Jenkins that will hold off on running a job until it's polled its SCM and not found any changes? | 21:40 |
kbringard | it's not super critical, I'm just sorting out glance and trying to document these things as I come across them | 21:40 |
kbringard | thanks justinsb | 21:40 |
mtaylor | soren: the different scm plugins usually deal with scm polling | 21:40 |
*** gasbakid has joined #openstack | 21:40 | |
mtaylor | soren: like, that's implemented in the bzr plugin - are you saying you want it to wait until it's done two polls with no changes after having found changes before (like a quiet period?) | 21:41 |
mtaylor | soren: because we could add that the to the bzr plugin pretty easily | 21:41 |
mtaylor | soren: (don't know if you know this, but i maintain the bzr plugin) | 21:41 |
*** pharkmillups has joined #openstack | 21:41 | |
soren | mtaylor: Yes, a "quiet period" is exactly what I want. I couldn't find a good phrase for it. | 21:42 |
soren | mtaylor: I actually need it for the HTTP polling thing (not bzr). | 21:42 |
*** gasbakid has quit IRC | 21:42 | |
mtaylor | soren: oh, there's an http polling thing? | 21:42 |
*** imsplitbit has quit IRC | 21:43 | |
soren | mtaylor: Yeah. I use it to notice when there are updates to the trunk ppa. | 21:43 |
soren | mtaylor: Every once in a while, the three ubuntu versions won't have built their new nova versions at the same time. | 21:44 |
*** mdomsch has quit IRC | 21:44 | |
mtaylor | soren: ah. | 21:44 |
soren | mtaylor: Or, sometimes there are updates in rapid succession and my downstream tests get rather upset when the install job uninstalls nova while it's being tested. | 21:44 |
*** gasbakid has joined #openstack | 21:44 | |
*** gasbakid has quit IRC | 21:46 | |
*** syah has quit IRC | 21:46 | |
soren | mtaylor: http://jenkins-ci.org/content/quiet-period-feature | 21:47 |
*** matiu has joined #openstack | 21:47 | |
*** matiu has joined #openstack | 21:47 | |
*** gasbakid has joined #openstack | 21:48 | |
*** enigma has quit IRC | 21:48 | |
soren | mtaylor: It's already there. Under "advanced settings". | 21:49 |
*** gondoi has quit IRC | 21:52 | |
mtaylor | soren: w00t | 21:53 |
openstackjenkins | Project nova build #743: SUCCESS in 2 min 24 sec: http://hudson.openstack.org/job/nova/743/ | 21:53 |
openstackjenkins | Tarmac: Make dnsmasq_interface configurable. | 21:53 |
*** mray has joined #openstack | 21:55 | |
*** hvaldivia has joined #openstack | 21:55 | |
hvaldivia | hello everybody | 21:56 |
hvaldivia | I'm running two types of instances | 21:56 |
hvaldivia | m1.tiny and m1.medium | 21:56 |
hvaldivia | now, my medium instance is always pending | 21:57 |
hvaldivia | my host is an i5-4GB dell | 21:57 |
hvaldivia | does anybody know how i can debug the vm behavior? a log maybe? | 21:58 |
*** cjreyn_ has joined #openstack | 21:59 | |
Ryan_Lane | hvaldivia: is there enough free memory for the instance to launch? | 22:00 |
*** adiantum_ has joined #openstack | 22:00 | |
*** santhosh has quit IRC | 22:00 | |
*** spectorclan_ has quit IRC | 22:00 | |
Ryan_Lane | if not, it'll just sit in the pending state | 22:01 |
vishy | hvaldivia: sounds like you are running out of memory | 22:01 |
vishy | yeah that | 22:01 |
*** ewanmellor has joined #openstack | 22:01 | |
Ryan_Lane | which would be a great bug to squash one day :) | 22:01 |
hvaldivia | ryan_lan: yeah I think so too, but I can't get any message. | 22:01 |
Ryan_Lane | yeah. there's essentially no logs for this condition | 22:02 |
hvaldivia | ryan_lan: 1.7GB is in use and I have 2.1GB of free memory | 22:02 |
tr3buchet | so long story short, i need to wait for the PTL. I've written the changes for xenapi, and grid dynamics has handled the libvirt side of things. But I wasn't sure what to do with the rest of the hypervisors | 22:02 |
*** kbringard has quit IRC | 22:02 | |
tr3buchet | This also affects the ec2 api | 22:02 |
*** adiantum has quit IRC | 22:02 | |
vishy | tr3buchet: are we breaking the ec2_api? | 22:03 |
hvaldivia | if I have 2.1GB of RAM, I think I have enough memory, don't I? | 22:03 |
vishy | hvaldivia: no, a medium tries to use 2G | 22:03 |
*** dirakx has joined #openstack | 22:03 | |
vishy | and you only have .4 left it sounds like | 22:03 |
tr3buchet | vishy: instance['mac_address'] is referred to in the ec2 api | 22:03 |
Ryan_Lane | he has 2.1 free | 22:03 |
vishy | tr3buchet, ah so we just need to fix it | 22:04 |
tr3buchet | yes | 22:04 |
Ryan_Lane | but the system likely won't allow that much memory use | 22:04 |
*** santhoshtw has joined #openstack | 22:04 | |
tr3buchet | vishy: and i haven't really looked into it yet, found it grepping, made a note. | 22:04 |
hvaldivia | wait, wait. I will kill my tiny instances and I will try to run just the medium instance, I think it would run | 22:04 |
vishy | tr3buchet: seems like a compatibility fix would be to add a mac_address property in the orm | 22:05 |
vishy | tr3buchet: and get the first one in the list | 22:05 |
vishy | tr3buchet: but for this change that might be overkill. We can assume that we really do wan't multiple macs in all of the hypervisors. | 22:05 |
*** adiantum_ has quit IRC | 22:06 | |
hvaldivia | now I have 3.5GB, but it's still pending :( | 22:07 |
*** lvaughn has joined #openstack | 22:07 | |
hvaldivia | well I will try to run the instance on a adm64-8GB | 22:07 |
tr3buchet | vishy: so multi-nic won't be supported by some hypervisors? | 22:08 |
cjreyn_ | so in the new ppa, objectstore can no longer be used by compute nodes to fetch images? | 22:08 |
vishy | tr3buchet: i would hope that they all would | 22:08 |
vishy | s/wan't/want/ | 22:09 |
*** comstud has quit IRC | 22:09 | |
*** comstud has joined #openstack | 22:09 | |
tr3buchet | doesn't multi-nic imply multiple macs? | 22:09 |
cjreyn_ | is glance the only way images can be fetched? | 22:10 |
vishy | tr3buchet: yes, sorry maybe my line wasn't clear | 22:10 |
vishy | we can assume that all hypervisors should support multiple nics, so perhaps compatibility mode may not be required for this change | 22:11 |
*** adiantum_ has joined #openstack | 22:11 | |
vishy | and we can focus instead on making all of the hypervisors work | 22:11 |
*** allsystemsarego has quit IRC | 22:12 | |
btorch | where does nova keep track of the available ip pool? I thought it was under the fixed_ips table | 22:12 |
n1md4 | I've a new error I've not seen "Error: (OperationalError) (1054, "Unknown column 'networks_1.ra_server' in 'field list'")" followed by a huge mysql query. | 22:12 |
vishy | For future changes, I think we should attempt to have a compatibility version, which would mean making instance['mac_address'] work as expected in the new data structure | 22:13 |
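A minimal sketch of the compatibility shim vishy describes, i.e. a mac_address property that returns the first interface's mac. The field names (virtual_interfaces, 'address') are invented for illustration; the real ORM model lives in nova's sqlalchemy models and may differ.

```python
# Sketch with invented field names; the point is only the shape of the
# compatibility property, not the real model.

class Instance(object):
    def __init__(self, virtual_interfaces):
        # with multi-nic an instance carries a list of interfaces,
        # each with its own mac address
        self.virtual_interfaces = virtual_interfaces

    @property
    def mac_address(self):
        """Legacy accessor: first interface's mac, or None."""
        if self.virtual_interfaces:
            return self.virtual_interfaces[0]['address']
        return None


inst = Instance([{'address': '02:16:3e:00:00:01'},
                 {'address': '02:16:3e:00:00:02'}])
print(inst.mac_address)   # 02:16:3e:00:00:01
```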
hvaldivia | okey I got it | 22:13 |
n1md4 | (I can pastebin that if needed) | 22:13 |
vishy | n1md4: nova-manage db sync | 22:13 |
hvaldivia | m1.tiny is a 512MB instance, and m1.medium is a 4GB instance | 22:13 |
cjreyn_ | i have a multinode install and the compute node fails to pull images. this didn't happen before the upgrade to the latest ppa. | 22:14 |
hvaldivia | Where can I find the description of the types? | 22:14 |
vishy | tr3buchet: I hope i'm not just confusing you | 22:14 |
vishy | hvaldivia: nova-manage instance_types list | 22:14 |
hvaldivia | I dont have instance_types among my options | 22:15 |
vishy | s/instance_types/instance_type | 22:15 |
vishy | sorry :) | 22:15 |
n1md4 | vishy: thanks. That's not fixed it, anything else? | 22:16 |
n1md4 | I'll add, there's an instance hanging at 'scheduling'. | 22:16 |
uvirtbot | New bug: #745309 in nova "LocalImageService images only accessable by admin user" [Undecided,New] https://launchpad.net/bugs/745309 | 22:16 |
hvaldivia | vishy: instance_type is implemented in bexar? I do not have that option | 22:16 |
soren | mtaylor: http://wiki.jenkins-ci.org/display/JENKINS/Join+Plugin looks *very* handy. | 22:17 |
vishy | hvaldivia: oh sorry didn't know you were on bexar | 22:18 |
*** jk0 has quit IRC | 22:18 | |
kpepple_ | hvaldivia: no, dynamic instance_types are not implemented in Bexar. | 22:18 |
*** drico has quit IRC | 22:18 | |
vishy | hvaldivia: it is in nova/compute/instance_types i believe | 22:18 |
vishy | kpepple_: did you do instance_types? | 22:18 |
hvaldivia | vishy: yeah I found it thanks :) | 22:19 |
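For reference, Bexar shipped instance types as a hard-coded table in nova/compute/instance_types.py rather than database rows. A rough sketch of its shape follows; only the sizes stated in the log (m1.tiny 512MB, m1.medium 4GB) are confirmed above, the remaining fields and values are illustrative guesses.

```python
# Rough sketch of the Bexar-era hard-coded table; field names and the
# unconfirmed values are illustrative, not verified against the tree.
INSTANCE_TYPES = {
    'm1.tiny':   dict(memory_mb=512,  vcpus=1, local_gb=0,  flavorid=1),
    'm1.medium': dict(memory_mb=4096, vcpus=2, local_gb=40, flavorid=3),
}

print(INSTANCE_TYPES['m1.medium']['memory_mb'])   # 4096
```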
kpepple_ | vishy: yes | 22:19 |
vishy | found a bug! | 22:19 |
vishy | :) | 22:19 |
*** drico has joined #openstack | 22:19 | |
cjreyn_ | also, how do you specify a glance "endpoint/url" from which images are pulled by compute nodes? | 22:19 |
kpepple_ | vishy: in that case, no i did not do it | 22:19 |
vishy | hehe | 22:20 |
*** st-16473 has joined #openstack | 22:20 | |
*** jk0 has joined #openstack | 22:21 | |
*** ChanServ sets mode: +v jk0 | 22:21 | |
vishy | kpepple_: https://bugs.launchpad.net/nova/+bug/745320 | 22:23 |
uvirtbot | Launchpad bug 745320 in nova "nova-manage delete / create can't replace instance types." [Undecided,New] | 22:23 |
vishy | cjreyn_: they use the glance client. I believe it looks in various places for config | 22:23 |
kpepple_ | vishy: there is logic for permanently deleting ... use the --purge option to delete (yeah ... not sure if that is standard for nova but no one objected at the time). | 22:24 |
kpepple_ | vishy: however, i thought i had caught the error so that it didn't stack trace ... i'm assigning it to myself and will work it | 22:25 |
vishy | cjreyn_: FLAGS.glance_host, FLAGS.glance_port | 22:25 |
vishy | kpepple_: oh missed the purge option | 22:26 |
dabo | ok, I'm an idiot - I created a blueprint for a discussion at the summit, but created it under nova instead of under the summit. How do I move this to the right section? Or delete it and re-create it correctly? | 22:26 |
kpepple_ | vishy: it's still a problem, because we know people will try and do that | 22:26 |
cjreyn_ | vishy: that flag should be added to nova.conf? And pulling images from objectstore is now broken? | 22:26 |
vishy | cjreyn_: correct you have to pull from glance for multinode | 22:27 |
*** reldan has quit IRC | 22:27 | |
vishy | you can probably use default port so just --glance_host is all you need | 22:27 |
*** aliguori has quit IRC | 22:28 | |
vishy | cjreyn_: you could theoretically use local image service writing to a shared folder as well | 22:28 |
cjreyn_ | vishy: why did you scrap objectstore? and I presume images are still cached on the compute node? | 22:28 |
btorch | anyone here having issues getting instances to respond on 2011.2~bzr907 ? | 22:28 |
cjreyn_ | vishy: yeah over nfs etc | 22:28 |
vishy | cjreyn_: it wasn't meant to be production level code | 22:29 |
vishy | cjreyn_: and it was the last bit of code still using twisted | 22:29 |
vishy | cjreyn_: so we scrapped instead of rewriting the whole thing | 22:29 |
*** hvaldivia has left #openstack | 22:30 | |
cjreyn_ | vishy: ok fair. thanks for the info. Can someone update the docs for the multinode install, cos this killed me this afternoon | 22:30 |
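For anyone hitting the same multinode problem: the flags involved are the ones vishy names above. A hypothetical flagfile snippet, with placeholder addresses and an image_service value that should be checked against your release:

```
# Hypothetical nova.conf flagfile snippet -- the address is a placeholder
# and the image_service class path should be verified for your release.
--image_service=nova.image.glance.GlanceImageService
--glance_host=192.168.0.10
--glance_port=9292
```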
*** reldan has joined #openstack | 22:30 | |
*** littleidea has quit IRC | 22:32 | |
uvirtbot | New bug: #745320 in nova "nova-manage delete / create can't replace instance types." [Undecided,New] https://launchpad.net/bugs/745320 | 22:32 |
pvo | dabo: I'd just delete it and recreate it. | 22:33 |
dabo | pvo: i'd love to. Where do I delete a bp? | 22:33 |
pvo | link? | 22:34 |
dabo | https://blueprints.launchpad.net/nova/+spec/nova-instance-referencing | 22:35 |
pvo | re-target blueprint? upper right corner? | 22:36 |
pvo | or mark superseded | 22:37 |
dabo | that only moves it to another project | 22:37 |
dabo | for superseded I would have to create a second bp | 22:37 |
dabo | that'd work, but it'd be messy | 22:37 |
pvo | I'm not sure what you mean, created under nova instead of under the summit? | 22:38 |
*** littleidea has joined #openstack | 22:38 | |
dabo | It should be listed on https://launchpad.net/sprints/ods-d/+specs, but it's on https://blueprints.launchpad.net/nova | 22:39 |
dabo | So now it exists, but it isn't under the design summit discussion bps | 22:40 |
pvo | no idea then. :/ | 22:40 |
pvo | sorry | 22:40 |
dabo | thx anyway. I'll wait until Thierry wakes up :) | 22:41 |
*** cjreyn_ has quit IRC | 22:41 | |
*** _vinay has joined #openstack | 22:42 | |
_vinay | Hello, | 22:43 |
_vinay | I am wondering if there are some tests written for openstack | 22:43 |
termie | there are some, yes | 22:43 |
_vinay | I installed openstack (nova) and am wondering if the system works properly | 22:44 |
mtaylor | soren: ++ | 22:44 |
_vinay | ok | 22:44 |
termie | _vinay: the easiest test is probably to start up an instance? | 22:44 |
_vinay | I am looking at nova/tests/ dir | 22:44 |
_vinay | starting up an instance, I have done that | 22:45 |
_vinay | all that works | 22:45 |
_vinay | was looking for some pre-written tests which I can run | 22:45 |
*** vernhart has quit IRC | 22:46 | |
termie | _vinay: the smoketests are what we use, but i am not sure if you want to run them against your prod stuff | 22:46 |
_vinay | no not prod.. | 22:46 |
_vinay | I am just evaluating openstack ... so not prod yet :) | 22:47 |
_vinay | ok so nova/tests/ dir has smoketests right? | 22:47 |
termie | nope, root dir | 22:47 |
termie | smoketests | 22:47 |
_vinay | oh let me look | 22:47 |
termie | smoketests/test_sysadmin.py | 22:48 |
termie | for example | 22:48 |
*** st-16714 has joined #openstack | 22:48 | |
_vinay | ok I see that now | 22:48 |
*** adiantum_ has quit IRC | 22:49 | |
_vinay | so smoketests/ has smoke tests | 22:49 |
_vinay | and nova/tests/ has what ... unittests? | 22:49 |
n1md4 | Still getting "Error: (OperationalError) (1054, "Unknown column 'networks_1.ra_server' in 'field list'")" even after nova-manage db sync, any more ideas as to how to troubleshoot a hanging 'scheduling' instance? | 22:50 |
n1md4 | Error: (OperationalError) (1054, "Unknown column 'networks_1.ra_server' in 'field list'")" even after | 22:51 |
n1md4 | sorry, trackpad belm! | 22:51 |
termie | _vinay: yeah, unittests | 22:52 |
termie | smoketests run against a running system though | 22:52 |
termie | which is what you want | 22:54 |
justinsb | termie, vishy: Want to talk about persistence_mode now? (Maybe want to review the volumes-api patch first?) | 22:55 |
termie | justinsb: i thought y'all had talked about that | 22:55 |
termie | justinsb: but sure i suppose, i am a little worn out on philosophical debates | 22:55 |
justinsb | termie: I hope this is as non-philosophical as it comes | 22:55 |
_vinay | termie : yes .. actually I want both.. | 22:55 |
*** adiantum_ has joined #openstack | 22:55 | |
justinsb | termie: Just want to focus on 'what should we do when e.g. the host crashes' | 22:55 |
_vinay | I also want some perf tests. Are there any? | 22:56 |
pvo | justinsb: do you mean should the system auto restart them? | 22:56 |
termie | _vinay: not particularly | 22:56 |
pvo | or persistent in should the system keep them around? | 22:56 |
pvo | I think you mean #2 | 22:56 |
justinsb | pvo: Both actually | 22:56 |
justinsb | pvo: I tried codifying what I'm talking about... | 22:56 |
pvo | I see ops scripts for #1 and #2 is something that should be configurable | 22:57 |
justinsb | pvo: First 120 lines here: http://bazaar.launchpad.net/~justin-fathomdb/nova/restart-instance/view/head:/nova/virt/driver.py | 22:57 |
*** st-16714 has quit IRC | 22:57 | |
pvo | sometimes you *dont* want to bring that host back online | 22:57 |
vishy | justinsb: i think default mode should just be do nothing | 22:57 |
pvo | for the vms, I mean | 22:57 |
justinsb | pvo: Exactly, which is what persistence_mode is supposed to be about | 22:57 |
*** dutsmoc has quit IRC | 22:57 | |
justinsb | vishy: OK... so we need a third mode which just updates the state but doesn't do anything else | 22:57 |
justinsb | vishy: i.e. updates the DB state | 22:58 |
*** santhoshtw_ has joined #openstack | 22:58 | |
justinsb | vishy: And if a host restarts, it leaves it as shutdown | 22:58 |
justinsb | vishy: And that will be the default | 22:58 |
vishy | justinsb: I'd prefer the compute worker to be pretty dumb | 22:58 |
*** santhoshtw has quit IRC | 22:58 | |
*** santhoshtw_ is now known as santhoshtw | 22:58 | |
vishy | justinsb: the persistence stuff you are talking about should be at a different layer IMO | 22:58 |
pvo | vishy: really? | 22:59 |
_vinay | termie: ok .. thx. So If I want to understand how my openstack installation behaves when I create lots of VMs, then I have to write my own tests ... correct? | 22:59 |
pvo | won't the compute node be the place to make that decision when the host reboots though? | 22:59 |
justinsb | vishy: Some sort of "nova supervisor system"? | 22:59 |
vishy | pvo: yeah, some "persistence modes" will actually move the instance to a new box via migration | 22:59 |
vishy | pvo: if the compute host comes back and says oh, let me restart this and it is already running on another system, you have issues | 23:00 |
termie | _vinay: at this point yeah | 23:00 |
vishy | pvo, justinsb: I could get behind some very simple modes that are on the compute host, such as restart all of my instances if i reboot | 23:00 |
_vinay | cool.. thanks | 23:00 |
vishy | i think even that is dangerous though | 23:01 |
pvo | justinsb: I think this is why we introduced the 'bootlock' code | 23:01 |
pvo | at least in xenserver you can set a node to not boot ever | 23:01 |
*** mray has quit IRC | 23:01 | |
vishy | what if there was a power outage and the iscsi export on host b hasn't come back up yet | 23:01 |
uvirtbot | New bug: #745340 in nova "euca-attach-volume fails when using iSCSI on XenServer" [Undecided,New] https://launchpad.net/bugs/745340 | 23:01 |
vishy | perhaps we want to wait until the volume is available before rebooting the instance | 23:01 |
pvo | in xen we move the config out of the way to 'lock' it | 23:01 |
justinsb | vishy: My goal is to fix the basic behaviour, while leaving you the option for advanced behaviour | 23:01 |
vishy | justinsb: right. So basic behavior in my mind is one flag for A) do nothing B) db wins C) hypervisor wins | 23:02 |
vishy | then we push more complicated logic a layer up | 23:02 |
justinsb | vishy: OK, so I can add a third flag for 'do nothing' mode | 23:02 |
pvo | vishy: do you see the db as the owner of state for the vm? | 23:02 |
pvo | or the compute node itself? | 23:03 |
vishy | pvo: i think it depends on the deployment | 23:03 |
justinsb | I think 'db wins' is what I call 'Relaunch' | 23:03 |
pvo | doesn't that introduce some confusion as to the behavior? | 23:03 |
justinsb | And I think 'hypervisor wins' is "Ephemeral" | 23:03 |
justinsb | Relaunch = Cloud Servers and Ephemeral = EC2, in my mind | 23:04 |
pvo | if you reboot a host, that isn't really ephemeral, since the instance will still exist; it'll just be off. | 23:04 |
justinsb | pvo: Well, right now, if you reboot a host it is marked deleted in the DB | 23:04 |
justinsb | pvo: And deleted from libvirt | 23:04 |
justinsb | pvo: That's the 'hair on fire' bug | 23:04 |
vishy | justinsb: db wins is slightly more than that but yes | 23:05 |
*** adiantum_ has quit IRC | 23:05 | |
vishy | justinsb: can we do a quick bugfix which is just update the state in the db | 23:05 |
vishy | without deleting it | 23:05 |
justinsb | vishy: Probably... | 23:06 |
justinsb | vishy: But I feel it's not really addressing the underlying issue | 23:06 |
justinsb | vishy: Which is that we don't have a defined behaviour for what should happen in the various circumstances | 23:06 |
vishy | justinsb: i think it is, i prefer leaving that to orchestration | 23:07 |
justinsb | vishy: If we can agree one persistence mode set of options, then persistence mode goes away | 23:07 |
pvo | I like #1 ... let the operator figure it out, but don't destroy it. | 23:07 |
vishy | there are obviously multiple sets of options | 23:07 |
vishy | and some providers may want to expose the functionality | 23:08 |
vishy | per instance | 23:08 |
justinsb | vishy: I would like to see a 'bring it back up' option | 23:08 |
pvo | it's a requested feature for us too | 23:08 |
justinsb | vishy: I think that's pretty useful... | 23:08 |
justinsb | vishy: e.g. for the VM running my orchestration system :-) | 23:08 |
pvo | I can see it being in the next compute api | 23:09 |
justinsb | vishy: I can either fix this now (the code's there), or we can just defer it | 23:09 |
*** dendrobates is now known as dendro-afk | 23:09 | |
vishy | justinsb: sure bring it up is a great option | 23:09 |
vishy | justinsb: but there is a massive bug right now | 23:09 |
justinsb | vishy: Indeed :-) | 23:09 |
vishy | justinsb: and i don't think we have a good place to put this option | 23:09 |
vishy | it should be per vm and optionally supported by the deployment | 23:10 |
justinsb | vishy: You mean to expose this option? | 23:10 |
vishy | justinsb: yes | 23:10 |
justinsb | vishy: That's fair | 23:10 |
justinsb | vishy: But it' | 23:10 |
justinsb | vishy: But it's easier to expose an option we have than one we don't | 23:10 |
*** adiantum has joined #openstack | 23:10 | |
justinsb | vishy: I can rework so it's just a 'don't delete my instances please' patch | 23:11 |
vishy | justinsb: if you want a simple thing right now, i think a simple flag called auto-relaunch will do what you want | 23:11 |
*** Zangetsu has quit IRC | 23:11 | |
justinsb | vishy: Yes | 23:11 |
vishy | justinsb: the rest is more like a new feature | 23:12 |
justinsb | vishy: I'm fine with that. I'll probably just do a new patch and keep the rest as a proposed new feature | 23:12 |
vishy | justinsb: sounds good | 23:12 |
justinsb | vishy: For Cactus, of course :-) | 23:12 |
*** santhoshtw has quit IRC | 23:12 | |
vishy | justinsb: about your feature, you'll have to explain the live migration thing a little more, and if there really is a way to tell the diff between a guest shutdown and a host reboot | 23:13 |
*** Daviey has quit IRC | 23:13 | |
justinsb | vishy: I think I'll leave that for the design summit / BP once I've thought through it all more | 23:13 |
*** aliguori has joined #openstack | 23:14 | |
justinsb | vishy: It rather depends on how the operator is managing live migration etc | 23:14 |
vishy | justinsb: another reason for pushing it up a level | 23:14 |
*** st-16890 has joined #openstack | 23:14 | |
vishy | justinsb: none of this code actually runs if the host dies | 23:14 |
*** anotherjesse_ has joined #openstack | 23:14 | |
vishy | justinsb: so we have to solve high availability at a higher level anyway | 23:14 |
justinsb | vishy: Not yet, but that's why I wanted to focus on those 120 lines that defined the desired behaviour | 23:14 |
justinsb | vishy: I think we've achieved that | 23:14 |
justinsb | vishy: Decision: "Too hard; just stop deleting instances for Cactus" | 23:15 |
vishy | justinsb: i think this is a good subject for the design summit for sure | 23:15 |
justinsb | vishy: Yeah, it would hopefully be less contentious than some of the others | 23:15 |
*** anotherjesse has quit IRC | 23:16 | |
*** anotherjesse_ is now known as anotherjesse | 23:16 | |
aixenv | hey guys, I'm still having a problem where none of my instances are pingable/sshable - I'm happy to share whatever documentation you need | 23:16 |
*** ewanmellor has quit IRC | 23:16 | |
aixenv | network type is vlan | 23:16 |
*** anotherjesse has quit IRC | 23:16 | |
vishy | aixenv: are you on one box? | 23:17 |
aixenv | vishy: yes sir, trying proof of concept on 1 box | 23:17 |
justinsb | Nova-Core: Volumes-API could use a review: https://code.launchpad.net/~justin-fathomdb/nova/volumes-api/+merge/54464 | 23:17 |
*** vernhart has joined #openstack | 23:17 | |
*** bkkrw has quit IRC | 23:18 | |
*** dirakx has quit IRC | 23:20 | |
*** Daviey has joined #openstack | 23:21 | |
justinsb | vishy: Should I mark instances in the SHUTOFF state deleted in the DB? No, right? | 23:21 |
vishy | no | 23:21 |
*** dmshelton has joined #openstack | 23:23 | |
*** ewanmellor has joined #openstack | 23:28 | |
justinsb | jaypipes: I think I've reverted the extra code; are you OK abstaining on volumes-api now? https://code.launchpad.net/~justin-fathomdb/nova/volumes-api/+merge/54464 Thanks :-) | 23:32 |
*** pharkmillups has quit IRC | 23:32 | |
jaypipes | justinsb: going back in there now :) | 23:32 |
*** jk0 has quit IRC | 23:33 | |
*** jk0 has joined #openstack | 23:33 | |
*** ChanServ sets mode: +v jk0 | 23:33 | |
justinsb | jaypipes: Thanks! Happy to fix anything you still see in there... | 23:33 |
*** jk0 has quit IRC | 23:34 | |
*** jk0 has joined #openstack | 23:35 | |
*** ChanServ sets mode: +v jk0 | 23:35 | |
*** jk0 has quit IRC | 23:35 | |
*** jk0 has joined #openstack | 23:41 | |
*** ChanServ sets mode: +v jk0 | 23:41 | |
*** ewanmellor has quit IRC | 23:42 | |
*** jeffjapan has joined #openstack | 23:43 | |
*** nelson has quit IRC | 23:44 | |
*** nelson has joined #openstack | 23:45 | |
*** clauden_ has quit IRC | 23:46 | |
*** lionel has quit IRC | 23:47 | |
*** syah has joined #openstack | 23:47 | |
*** lionel has joined #openstack | 23:48 | |
*** gasbakid has quit IRC | 23:49 | |
*** _vinay has quit IRC | 23:50 | |
*** st-16890 has quit IRC | 23:51 | |
*** dsockwell has quit IRC | 23:52 | |
*** jk0 has quit IRC | 23:53 | |
*** st-17178 has joined #openstack | 23:54 | |
*** jk0 has joined #openstack | 23:56 | |
*** ChanServ sets mode: +v jk0 | 23:56 | |
justinsb | jaypipes: Thanks for the abstain | 23:57 |
Ryan_Lane | fun with semantic mediawiki, and the openstack manager mediawiki extension: http://nova-controller.tesla.usability.wikimedia.org/trunk.1/Resource_query_examples | 23:57 |
jaypipes | justinsb: np | 23:58 |
Ryan_Lane | when instances are created, their info is pulled from nova, placed in a mediawiki template that marks the data up with semantic properties | 23:58 |
*** jaypipes is now known as jaypipes-afk | 23:58 | |
Ryan_Lane | the semantic properties can be queried | 23:58 |
zedas | hey, question about the stats/ code in swift: is anyone actually running this? | 23:59 |