*** bsza has joined #openstack | 00:01 | |
*** krow has quit IRC | 00:01 | |
Kiall | bwong_, any luck? | 00:02 |
bwong_ | http://paste.openstack.org/show/3234/ | 00:02 |
bwong_ | nope | 00:02 |
bwong_ | nova-volume will not start. | 00:02 |
bwong_ | well it will start, then it will go down | 00:03 |
Kiall | Ah yea, That.. | 00:03 |
Kiall | You need a nova-volumes LVM group for nova-volume to start | 00:03 |
*** MarkAtwood has quit IRC | 00:03 | |
Kiall | it shouldnt stop nova-test.sh from working though | 00:03 |
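For reference, the nova-volumes volume group Kiall mentions can be faked with a loopback device on a test box. A sketch only: the image path, loop device, and 2 GB size are arbitrary assumptions, and the commands need root plus the lvm2 tools.

```shell
# Create a file-backed LVM volume group named "nova-volumes"
# (assumed path/size; needs root and lvm2 installed).
dd if=/dev/zero of=/var/lib/nova-volumes.img bs=1M count=2048
losetup /dev/loop2 /var/lib/nova-volumes.img   # pick a free loop device
pvcreate /dev/loop2
vgcreate nova-volumes /dev/loop2
vgs nova-volumes   # verify the group exists before restarting nova-volume
```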
*** vernhart has quit IRC | 00:05 | |
*** po has quit IRC | 00:10 | |
*** jeromatron has quit IRC | 00:11 | |
*** n81 has quit IRC | 00:13 | |
*** nati2 has quit IRC | 00:14 | |
*** nati2 has joined #openstack | 00:15 | |
bwong_ | Kiall nova-test.sh could be a separate problem. | 00:16 |
*** bsza-I has joined #openstack | 00:17 | |
bwong_ | it keeps saying invalid credentials. I'll post what I see. hold on | 00:17 |
*** Guest79151 is now known as med_out | 00:18 | |
*** med_out has joined #openstack | 00:18 | |
bwong_ | http://paste.openstack.org/show/3235/ | 00:19 |
*** bsza has quit IRC | 00:20 | |
Kiall | and you havent changed the details in settings since setting up keystone? | 00:20 |
dosdawg | anyone installed openstack on fedora 16 yet? | 00:21 |
bwong_ | kiall: you mean settings file? | 00:23 |
Kiall | yea | 00:23 |
bwong_ | if so, nope haven't touched it since the beginning where it said to change it | 00:23 |
Kiall | and was this a clean ubuntu install before you started? | 00:24 |
*** vernhart has joined #openstack | 00:24 | |
bwong_ | ya | 00:24 |
Kiall | or had you tried the ubuntu packages/devstack etc etc on it before? | 00:24 |
bwong_ | i installed it specifically just for openstack | 00:24 |
Kiall | can you pastebin the output of `dpkg -l | grep -E "(nova|glance|openstack)"` and `find /usr/local/` ? | 00:25 |
*** dtroyer has quit IRC | 00:25 | |
Kiall | (Just to be sure its not using a mix of packages/manually installed stuff) | 00:26 |
bwong_ | Kiall: http://paste.openstack.org/show/3236/ | 00:27 |
*** rnorwood has quit IRC | 00:27 | |
Kiall | Right, looks clean.. 1 set of stuff rather than a mix of multiple | 00:28 |
Kiall | Ohh maybe, what did you set region to in the settings file? | 00:29 |
bwong_ | was it supposed to be left as dub01? | 00:29 |
Kiall | Whatever region you wanted .. | 00:30 |
Kiall | but, you need to set nova to the same.. eg http://paste.openstack.org/show/3238/ in nova.conf | 00:30 |
Kiall | not 100% sure if that would cause it, but maybe... | 00:31 |
*** MarkAtwood has joined #openstack | 00:31 | |
bwong_ | hmm | 00:31 |
bwong_ | dont have those in my nova.conf | 00:31 |
bwong_ | i will put those in there. | 00:31 |
vidd | Kiall, does this mean if i have 3 nodes and three zones, i can set node one to only lauch zone A vm's, node 2 zone B and so on? | 00:32 |
vidd | and if so, can a machine have multiple zones available? | 00:33 |
Kiall | vidd, kinda, but i havent actually looked at what has been implemented so far | 00:33 |
bwong_ | Kiall: Going to restart the services, just checking: I'm supposed to restart all services that begin with "nova" right? | 00:33 |
Kiall | yea | 00:33 |
Kiall | vidd, each zone needs a full set of the services... | 00:33 |
vidd | can one controller handle multiple zones? | 00:34 |
bwong_ | Kiall: ok all service restarted. | 00:34 |
Kiall | vidd, I doubt it very much | 00:35 |
Kiall | vidd, http://wiki.openstack.org/DistributedScheduler | 00:35 |
*** dragondm has quit IRC | 00:38 | |
bwong_ | Yeah, not working. Should I just re-do the installation again. | 00:39 |
Kiall | bwong_, I've gotta run, but.. heh beat me to it.. | 00:39 |
Kiall | if you dpkg -P all the packages listed in http://paste.openstack.org/show/3236/ | 00:40 |
*** mszilagyi has quit IRC | 00:40 | |
bwong_ | ya | 00:40 |
Kiall | and then `rm -rf /etc/nova /etc/glance /etc/keystone /var/lib/nova /var/lib/glance /var/lib/keystone` it will be as good as a fresh ubuntu install | 00:40 |
Kiall | no traces left.. | 00:41 |
Kiall | but backup the /etc folders, there might be a setting or two in there you might want to remember ;) | 00:41 |
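Put together, the reset Kiall describes is roughly the following. This is destructive, needs root, and the backup path is an arbitrary choice:

```shell
# Back up configs first -- settings worth keeping live in /etc.
tar czf /root/openstack-etc-backup.tar.gz /etc/nova /etc/glance /etc/keystone
# Purge every installed nova/glance/keystone/openstack package...
dpkg -l | awk '/nova|glance|keystone|openstack/ && $1 == "ii" {print $2}' \
  | xargs -r dpkg -P
# ...then remove leftover config and state: as good as a fresh install.
rm -rf /etc/nova /etc/glance /etc/keystone \
       /var/lib/nova /var/lib/glance /var/lib/keystone
```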
bwong_ | Ok | 00:41 |
bwong_ | Thanks for your help Kiall | 00:41 |
Kiall | re the region setting, change it to "nova" thats the default and requires no changes to the config | 00:41 |
stevegjacobs | Kiall - I looked briefly but can't see where the config is for dashboard | 00:42 |
Kiall | and, ignore nova-volume until after everything else is working.. | 00:42 |
bwong_ | ok | 00:42 |
bwong_ | alright | 00:42 |
Kiall | stevegjacobs, heya.. | 00:42 |
*** adjohn has joined #openstack | 00:43 | |
Kiall | stevegjacobs, /etc/openstack-dashboard/* + /etc/apache2/conf.d/dashboard.conf+ /etc/apache2/sites-available/default | 00:43 |
Kiall | Anyway - 1am and I have meeting at 9am, stevegjacobs I'll be in touch tomorrow re that coffee... cyas.. good luck bwong_ ;) | 00:44 |
stevegjacobs | there is nothing in sites-available/default that refers to dashboard | 00:45 |
Kiall | sure, but the /etc/apache2/conf.d/dashboard.conf file kinda combines with /etc/apache2/sites-available/default | 00:45 |
stevegjacobs | ok - I'll try to figure it out. | 00:46 |
stevegjacobs | not urgent anyway | 00:46 |
Kiall | You were trying to change the port? `grep -r '80' /etc/apache2` | 00:46 |
Kiall | change anything in that list that looks like a port, and restart apache.. | 00:46 |
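On a stock Ubuntu Apache the port usually lives in two places, so a minimal sketch of the change (assuming the default layout, 8080 as the new port, and root access) is:

```shell
# Change the listening port in ports.conf and the default vhost,
# then restart Apache (8080 is an arbitrary example port).
sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
sed -i 's/<VirtualHost \*:80>/<VirtualHost *:8080>/' \
  /etc/apache2/sites-available/default
service apache2 restart
```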
Kiall | anyway - gotta sleep, cyas.. | 00:47 |
stevegjacobs | ok thanks | 00:47 |
stevegjacobs | I'm heading that way too | 00:47 |
stevegjacobs | g'nite | 00:47 |
vidd | stevegjacobs, how goes it | 00:47 |
*** jakedahn has quit IRC | 00:47 | |
vidd | paste me your /etc/apache2/conf.d/dashboard.conf file | 00:47 |
stevegjacobs | got a mostly working stack, set up four small web server vm's and one other bespoke app today :-) | 00:48 |
*** negronjl has quit IRC | 00:48 | |
vidd | nice | 00:49 |
*** supriya_ has joined #openstack | 00:49 | |
stevegjacobs | a few things still not working - mainly snapshots | 00:49 |
*** supriya_ has quit IRC | 00:49 | |
vidd | im working on rebuilding my openstack and scripting the process | 00:49 |
stevegjacobs | Mine is mostly based on Kiall's scripts, with a few diversions in the setup | 00:50 |
livemoon | morning | 00:57 |
*** jeromatron has joined #openstack | 00:59 | |
*** jollyfoo has quit IRC | 01:09 | |
vidd | hello livemoon | 01:09 |
livemoon | hi,vidd | 01:24 |
vidd | hello livemoon | 01:24 |
*** bsza-I has quit IRC | 01:24 | |
*** rnorwood has joined #openstack | 01:24 | |
livemoon | a question: did you install keystone and dashboard in the same server? | 01:24 |
vidd | livemoon, yes | 01:24 |
vidd | currently, i only have the one machine capable of running VMs | 01:25 |
*** lorin1 has joined #openstack | 01:26 | |
livemoon | ok | 01:26 |
vidd | livemoon, still having issues? | 01:29 |
*** vernhart has quit IRC | 01:31 | |
*** GeoDud has quit IRC | 01:32 | |
*** bwong_ has quit IRC | 01:33 | |
livemoon | yes | 01:33 |
vidd | what problem? | 01:34 |
*** webx has quit IRC | 01:36 | |
livemoon | I can log in but nothing shows | 01:39 |
livemoon | I will do it in my vmware machine today | 01:39 |
*** reed has quit IRC | 01:40 | |
*** dysinger has quit IRC | 01:44 | |
stevegjacobs | vidd: tell me more about the scripting that you are doing | 01:44 |
vidd | there isnt much to tell...im writing a script that will walk anyone thru setting up a metal-to-active full stack install | 01:45 |
Gollen | keystone can not works on openstack 2011.3 version? how to configure it? | 01:45 |
stevegjacobs | On one machine or multiple? | 01:45 |
vidd | stevegjacobs, one machine | 01:45 |
vidd | to add multiple machines, you just add compute and mysqlclient on the additional machines and copy your nova.conf file to the other machines | 01:47 |
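vidd's recipe for extra nodes can be sketched as follows. A rough outline only, assuming the Ubuntu packages and that `controller` resolves to the first machine:

```shell
# On each additional machine: install the compute service plus the
# MySQL client bindings, reuse the controller's nova.conf, restart.
apt-get install -y nova-compute python-mysqldb
scp controller:/etc/nova/nova.conf /etc/nova/nova.conf
service nova-compute restart
nova-manage service list   # the new node should show up in the list
```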
stevegjacobs | I've got three in my stack right now | 01:47 |
*** rsampaio has joined #openstack | 01:48 | |
stevegjacobs | two are nice new machines, but the third one is older and I was hoping to configure it to just do nova-volume or swift | 01:49 |
stevegjacobs | It's sitting on the stack but not doing anything at the moment :-) | 01:49 |
stevegjacobs | We have a couple other machines that I hope to add in later | 01:50 |
stevegjacobs | so I am trying to figure out what is best practice for expanding bit by bit | 01:50 |
*** isaacfinnegan has quit IRC | 01:51 | |
vidd | stevegjacobs, what have you put on it currently? | 01:51 |
stevegjacobs | don't ask - I think I've made a mess of it | 01:53 |
vidd | stevegjacobs, so..."nothing"? | 01:54 |
stevegjacobs | more like everything | 01:54 |
vidd | what are the specs of the machine | 01:54 |
*** Hakon|mbp has joined #openstack | 01:55 | |
vidd | im considering taking a relic machine i have here and setting it up as the MySQL/Keystone/Dashboard machine | 01:55 |
*** jakedahn has joined #openstack | 01:55 | |
vidd | its a PIII with 80Gb hard drive | 01:56 |
uvirtbot | New bug: #888370 in glance "glance show prints invalid URI" [Low,In progress] https://launchpad.net/bugs/888370 | 01:56 |
*** GeoDud has joined #openstack | 01:56 | |
*** ton_katsu has joined #openstack | 01:57 | |
stevegjacobs | vidd: the older server doesn't support kvm - older intel xeon processor, 6x2tb drives | 02:00 |
vidd | 6x2tb hd's? | 02:00 |
stevegjacobs | yup | 02:01 |
vidd | swift server...definitely =] | 02:01 |
vidd | http://swift.openstack.org/development_saio.html# | 02:01 |
stevegjacobs | I have swift installed but I know it's not configured right. | 02:02 |
vidd | set it up with out keystone first | 02:02 |
vidd | once that works, tie keystone and dash into it | 02:03 |
vidd | then finally link glance to it | 02:03 |
stevegjacobs | this looks interesting | 02:03 |
*** bhall has quit IRC | 02:04 | |
vidd | i feel so stupid..... | 02:04 |
stevegjacobs | I have oneiric and swift packages installed | 02:05 |
vidd | the reason i had so many issues with keystone was that one tiny file was missing.....python-mysqldb.....could not understand why it failed.... | 02:05 |
vidd | stevegjacobs, i know nothing of swift | 02:05 |
stevegjacobs | Is it worth it to start over - the link is saying to start from lucid | 02:06 |
vidd | i just know it takes ALOT of hard drive space =] | 02:06 |
stevegjacobs | vidd: ok, thats my problem too :-) | 02:06 |
vidd | stevegjacobs, i would assume "lucid" was just imported from the last version of the documentation....and you should be fine with current ubuntu install | 02:07 |
stevegjacobs | I can feel my brain cells frying and dying from trying to get my head around compute this past weeks! | 02:07 |
vidd | stevegjacobs, ive given up a month of my life for this | 02:08 |
vidd | and they dont want to pay me for the development time =\ | 02:08 |
stevegjacobs | I started at the beginning of August | 02:09 |
*** Otter768 has joined #openstack | 02:09 | |
stevegjacobs | but can't do it full time. | 02:10 |
uvirtbot | New bug: #888371 in swift "swift bug with python webob 1.2b2" [Undecided,New] https://launchpad.net/bugs/888371 | 02:10 |
uvirtbot | New bug: #888372 in glance "glance cache-reap-invalid causes 'NoneType' object is not subscriptable" [Undecided,New] https://launchpad.net/bugs/888372 | 02:11 |
vidd | stevegjacobs, the issue i have is i dont have server-grade equipment right now....im working with low-grade desktop-centric machines | 02:11 |
stevegjacobs | Our company has bought a couple of new machines to get started with, but they want me to figure out something useful to do with some older ones | 02:12 |
vidd | and they cant understand why a "simple" 2-gb ram vm takes so long to do anything...the host machine only has 2 gb! | 02:12 |
vidd | stevegjacobs, take those 2tb drives and distribute them out between 3 servers and do a "proper" swift cluster | 02:13 |
stevegjacobs | Thats where I was at the beginning - installing stackops on cast-off desktops :-) | 02:13 |
*** GeoDud has quit IRC | 02:14 | |
vidd | they are promising me one "new[ish]" machine, and then i'll move existing servers onto it to free up new machines to convert | 02:14 |
stevegjacobs | Yeah - first step is to migrate some existing loads onto what I've got set up now so that I can retire a couple of them | 02:15 |
vidd | the chief engineer says "we can reduce enough to free up anymore racks" | 02:15 |
*** jeromatron has joined #openstack | 02:16 | |
vidd | i tell him "i dont want to free up racks...if this goes as expected, i will be filling up the holes we already have in the racks with more machines [and paying customers]" | 02:16 |
*** jdurgin has quit IRC | 02:17 | |
stevegjacobs | Then I'll lash some new big drives and maybe a bit of memory into the older machines and create a swift cluster | 02:17 |
stevegjacobs | Once I can figure out how to get everything working together | 02:18 |
stevegjacobs | I gotta go to bed now. | 02:19 |
stevegjacobs | g'night | 02:20 |
*** stevegjacobs has quit IRC | 02:21 | |
*** GeoDud has joined #openstack | 02:24 | |
uvirtbot | New bug: #888382 in glance "glance-cache-cleaner causes 'Driver' object has no attribute 'delete_incomplete_files'" [Undecided,New] https://launchpad.net/bugs/888382 | 03:01 |
uvirtbot | New bug: #888383 in glance "glance-cache-prefetcher causes Unknown Scheme errors when using 'file://' images" [Undecided,New] https://launchpad.net/bugs/888383 | 03:01 |
*** ton_katsu has quit IRC | 03:06 | |
HugoKuo__ | morning | 03:07 |
uvirtbot | New bug: #888385 in nova "Failure when installing Dashboard - python tools/install_venv.py" [Undecided,New] https://launchpad.net/bugs/888385 | 03:13 |
vidd | zykes-, you here? | 03:42 |
*** vidd is now known as vidd-away | 04:13 | |
livemoon | afternoon | 05:37 |
foexle | hiho | 08:02 |
*** Razique has joined #openstack | 08:22 | |
*** mnour has joined #openstack | 08:24 | |
foexle | ahoi Razique ;) | 08:24 |
Razique | hey foexle | 08:24 |
Razique | 'sup ? :d | 08:25 |
foexle | sup ? :) | 08:25 |
foexle | what you mean ? | 08:25 |
Razique | what's up ? :) | 08:26 |
foexle | i was very tired yesterday :D .... | 08:26 |
Razique | haha no way :p | 08:27 |
*** halfss has joined #openstack | 08:27 | |
foexle | today i'll do documentation and then bind a new compute node and swift to the cloud :D | 08:27 |
foexle | so wish me good luck hahaha :D | 08:27 |
halfss | hi: when i use curl to resize instance: # curl -X POST localhost:8774/v1.1/21/servers/143/action -H "Content-Type: application/json" -H "X-Auth-Token:232b0e48-c826-45b0-a564-dff2d4537244" -H "Accept:application/xml" -d '{"resize":{"flavorRef":" http://localhost:8774/v1.1/21/flavors/3"}}' | 08:28 |
halfss | <badRequest code="400" xmlns="http://docs.openstack.org/compute/api/v1.1"> | 08:28 |
halfss | <message> | 08:28 |
halfss | Unable to locate requested flavor. | 08:28 |
halfss | </message> | 08:28 |
halfss | </badRequest> | 08:28 |
*** reidrac has joined #openstack | 08:28 | |
halfss | is some one can help me ? | 08:28 |
efcasado | try to type: "flavorRef": "3" | 08:29 |
efcasado | instead of the whole URL | 08:30 |
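Spelled out, the corrected call looks something like this. The endpoint, tenant (21), server (143), and token are the values from the paste above; the key change is the bare flavor ID:

```shell
# Resize request body with a bare flavor ID rather than a flavor URL --
# the URL form triggered "Unable to locate requested flavor".
BODY='{"resize": {"flavorRef": "3"}}'
echo "$BODY"
# The actual call (only meaningful where nova-api is listening):
curl -s -X POST http://localhost:8774/v1.1/21/servers/143/action \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: 232b0e48-c826-45b0-a564-dff2d4537244" \
  -d "$BODY" || echo "nova-api not reachable from this shell"
```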
halfss | ok | 08:30 |
halfss | oh yes | 08:30 |
halfss | but at nova-api.log:(nova.rpc): TRACE: Traceback (most recent call last): | 08:31 |
halfss | (nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py", line 620, in _process_data | 08:31 |
halfss | (nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args) | 08:31 |
halfss | (nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 98, in wrapped | 08:31 |
halfss | (nova.rpc): TRACE: return f(*args, **kw) | 08:31 |
halfss | (nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 117, in decorated_function | 08:31 |
halfss | (nova.rpc): TRACE: function(self, context, instance_id, *args, **kwargs) | 08:31 |
halfss | (nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 931, in prep_resize | 08:31 |
halfss | (nova.rpc): TRACE: raise exception.Error(msg) | 08:31 |
halfss | (nova.rpc): TRACE: Error: Migration error: destination same as source! | 08:31 |
halfss | i have one compute node | 08:31 |
halfss | if i want to resize an instance, the instance will migrate and then resize? | 08:32 |
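The "destination same as source!" failure is expected with a single compute node: resize is implemented as a migration to another host. If the installed nova supports it (an assumption for this Diablo-era setup), the usual workaround is a flag in nova.conf:

```shell
# Hypothetical workaround: allow the scheduler to pick the same host
# as the resize destination, then restart the nova services.
echo "--allow_resize_to_same_host=true" >> /etc/nova/nova.conf
for svc in nova-api nova-scheduler nova-compute; do restart "$svc"; done
```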
*** javiF has joined #openstack | 08:34 | |
zykes- | vidd-away: now ! | 08:35 |
zykes- | ;p | 08:37 |
*** alexn6 has joined #openstack | 08:37 | |
efcasado | Does anyone know how to list the virtual interfaces for a given instance? (using the restful interface) | 08:37 |
livemoon | hi.all | 08:37 |
*** tjikkun has quit IRC | 08:37 | |
*** stevegjacobs has joined #openstack | 08:40 | |
*** troya has joined #openstack | 08:43 | |
foexle | efcasado: do you mean the mapped IP addresses ? or the nic names in each vm ? | 08:44 |
*** vdo has joined #openstack | 08:45 | |
*** troya has quit IRC | 08:46 | |
livemoon | does someone know "verified_claims = {'user': token_info['access']['user']['name']," when I use glance | 08:47 |
stevegjacobs | I have one instance that seems to have crashed - I've tried to terminate it but it won't terminate | 08:48 |
Razique | halfss: use paste :) | 08:50 |
Razique | hi zykes- livemoon stevegjacobs ! | 08:51 |
*** miclorb_ has joined #openstack | 08:51 | |
livemoon | hi | 08:58 |
livemoon | Razique | 08:58 |
livemoon | I meet new problem today | 08:58 |
livemoon | everyday I always meet new bugs | 08:58 |
*** uksysadmin has joined #openstack | 08:58 | |
uvirtbot | New bug: #888448 in keystone "auth_token.py of keystone error when I use glance" [Undecided,New] https://launchpad.net/bugs/888448 | 09:01 |
*** pixelbeat has joined #openstack | 09:01 | |
zykes- | Razique: . | 09:03 |
zykes- | which bug livemoon ? | 09:03 |
*** wariola has quit IRC | 09:04 | |
livemoon | https://bugs.launchpad.net/keystone/+bug/888448 | 09:05 |
livemoon | zykes: have you meet it? | 09:05 |
*** uksysadmin has quit IRC | 09:05 | |
zykes- | don't remember | 09:06 |
*** jakedahn has quit IRC | 09:06 | |
zykes- | i haven't touched my deployment in a few weeks | 09:06 |
*** redconnection has quit IRC | 09:09 | |
*** anticw has quit IRC | 09:09 | |
livemoon | I first meet it | 09:09 |
livemoon | because today I install latest version in my server | 09:09 |
*** jj0hns0n has joined #openstack | 09:11 | |
livemoon | who know this coding " verified_claims = {'user': token_info['access']['user']['name']," | 09:14 |
Razique | livemoon: wasn't the bug linked to that temporary Keystone hack, the -A flag ? | 09:23 |
*** uksysadmin has quit IRC | 09:25 | |
*** stevegjacobs_ has joined #openstack | 09:25 | |
*** marrusl has quit IRC | 09:26 | |
livemoon | Razique: not only glance, I use python-novaclient, also this error | 09:27 |
stevegjacobs_ | Don't know what is going on, but one of the servers on my stack has crashed and disappeared | 09:27 |
Razique | stevegjacobs: an instance ? | 09:28 |
Razique | I mean, an instance has disappeared ? | 09:28 |
stevegjacobs_ | yeah I still see a readout using nova show <serverID> | 09:28 |
*** redconnection has joined #openstack | 09:29 | |
stevegjacobs_ | but I can't ping or ssh into it. It was running a web site that was visible and that's gone | 09:29 |
Razique | stevegjacobs: that happened to me in my lab | 09:30 |
Razique | (nova): TRACE: Error: Domain not found: no domain with matching name 'instance-0000004d' | 09:30 |
livemoon | Razique: | 09:30 |
Razique | while the server was running, it occurred after I restarted the compute node | 09:30 |
livemoon | I have meet it in my lab too | 09:30 |
*** Razique has quit IRC | 09:31 | |
stevegjacobs_ | I also tried nova reboot --hard | 09:31 |
livemoon | when I delete an instance, it also happened | 09:31 |
stevegjacobs_ | I want to get it back if possible | 09:31 |
stevegjacobs_ | You mean if you delete an instance, another instance disapears?? | 09:31 |
*** Gollen has quit IRC | 09:32 | |
*** Razique has joined #openstack | 09:33 | |
stevegjacobs_ | I have pasted nova-api.log from attempt to reboot it http://paste.openstack.org/show/3239/ | 09:33 |
stevegjacobs_ | Instance was fine last night but gone this morning | 09:33 |
Razique | sorry bug | 09:34 |
Razique | soren: | 09:35 |
Razique | stevegjacobs: livemoon | 09:35 |
uvirtbot | New bug: #888458 in openstack-ci "Stable branches should only be +2 by stable team maintainers" [High,New] https://launchpad.net/bugs/888458 | 09:36 |
Razique | I think it happends when the connection from nova-scheduler to the compute node is lost | 09:36 |
Razique | that makes the compute node think the instance no longer exists, and then it removes it | 09:36 |
Razique | (I mean nova-compute does the virsh destroy domain) and the rm -rf /var/lib/nova/instances/instance | 09:37 |
Razique | I already had that in production I think ; which is, believe me…. scary | 09:37 |
stevegjacobs_ | I just did nova show <serverID> and it is showing the status as REBOOT | 09:38 |
*** dirkx_ has quit IRC | 09:38 | |
soren | Razique: Sorry, what? | 09:39 |
*** dirkx_ has joined #openstack | 09:39 | |
*** wulianmeng has joined #openstack | 09:39 | |
Razique | soren: mistype, sorry ^^ | 09:39 |
Razique | stevegjacobs: does the instance exists ? | 09:39 |
*** dirkx_ has quit IRC | 09:39 | |
Razique | (I mean it's files) | 09:40 |
soren | Razique: Ah, ok :) | 09:40 |
Razique | Where are you from soren ? | 09:40 |
HugoKuo__ | does any docs talk about public a swift container in Swift 1.4.4+ ? | 09:40 |
uvirtbot | New bug: #888460 in openstack-ci "nova-milestone-tarball job fails to run on "nova" slave" [Medium,New] https://launchpad.net/bugs/888460 | 09:41 |
uvirtbot | New bug: #888461 in openstack-ci "Extraneous glance tarball should be cleaned up on nova.openstack.org/tarballs" [Low,New] https://launchpad.net/bugs/888461 | 09:41 |
HugoKuo__ | in Bexar , doc mentioned CDN , but CDN seems been removed from Swift .... Am I right ? | 09:41 |
stevegjacobs_ | arrgh - found the problem | 09:41 |
stevegjacobs_ | it's on a node that has crashed! | 09:42 |
wulianmeng | Is there anybody who install openstack with xen? | 09:42 |
*** darraghb has joined #openstack | 09:43 | |
Razique | stevegjacobs: doesn't really surprise me | 09:46 |
*** dnjaramba_ has joined #openstack | 09:48 | |
*** dnjaramba has quit IRC | 09:48 | |
*** nerens has quit IRC | 09:51 | |
Razique | any success with live migration here | 09:54 |
Razique | ? | 09:54 |
zykes- | Razique: i can try next week ;p | 09:54 |
Razique | I'm trying :D | 09:55 |
Razique | does someones knows if HA works | 09:55 |
zykes- | HugoKuo__: CDN is a seperate service which was announced @ diablo conference | 09:55 |
Razique | for instance : instance running on a node | 09:55 |
Razique | node gone, nova restarts the instance somewhere else | 09:55 |
*** dendrobates has quit IRC | 09:56 | |
HugoKuo__ | zykes- thanks | 09:57 |
*** livemoon has left #openstack | 10:00 | |
*** statik has joined #openstack | 10:02 | |
*** statik has quit IRC | 10:02 | |
*** statik has joined #openstack | 10:02 | |
foexle | Razique: you have defined an alias "glance -A "$OS_AUTH_KEY"" ... but this sys-var is not set. So if I understand correctly, -A is the API key ... right ? | 10:02 |
*** wariola has quit IRC | 10:06 | |
*** apevec has joined #openstack | 10:11 | |
*** syah_ has joined #openstack | 10:17 | |
uvirtbot | New bug: #888479 in openstack-ci "Bug should not be set to FixCommitted on non-master merge" [High,New] https://launchpad.net/bugs/888479 | 10:21 |
Razique | foexle: yup | 10:40 |
Razique | I've disabled it since I don't use Keystone | 10:40 |
Razique | but in order to integrate glance/ glance tools (eg glance index) with keystone | 10:40 |
Razique | an -A (temporary- flag) has been added | 10:40 |
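The alias in question looks roughly like this. The token value is a placeholder; -A was the temporary flag for passing a Keystone token to the glance CLI:

```shell
# Placeholder admin token -- use the one configured in keystone.
export OS_AUTH_KEY="999888777666"
# Wrap the glance CLI so every call sends the token via the
# temporary -A flag discussed above.
alias glance='glance -A "$OS_AUTH_KEY"'
glance index   # should now authenticate against keystone
```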
foexle | Razique: ok thx :) | 10:41 |
*** nati2_ has quit IRC | 10:43 | |
stevegjacobs_ | what could cause a lightly loaded node to crash? | 10:44 |
Razique | stevegjacobs: kern panic | 10:46 |
Razique | or hardware issue | 10:46 |
Razique | maybe a bug | 10:46 |
Razique | stevegjacobs: check kern.log | 10:47 |
Razique | foexle: will know what we are talking about here :D | 10:47 |
foexle | hahahah oh yeah ;) | 10:47 |
*** nerens has joined #openstack | 10:50 | |
tyska | hello guys!!! | 10:50 |
tyska | how u r doing? | 10:50 |
tyska | or better saying, how r u doing? =) | 10:50 |
Kiall | stevegjacobs_: node crashed? any idea why? | 10:51 |
stevegjacobs_ | Nope - still waiting on someone at the data centre to push a button for me | 10:52 |
Kiall | ouch get them to take a photo of the screen | 10:52 |
Kiall | kernel panics never get logged! | 10:52 |
Kiall | and .. `echo "kernel.panic = 20" > /etc/sysctl.d/30-panic.conf && sysctl -p` | 10:53 |
Kiall | ie .. auto reboot after 20 seconds if the kernel panics | 10:53 |
stevegjacobs_ | I would have gone out myself already but I let my wife have the car... | 10:54 |
Razique | erf :/ | 10:54 |
Kiall | and, to get the logs.. you can use netconsole to ship them to another server https://wiki.ubuntu.com/Kernel/Netconsole | 10:54 |
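Kiall's two suggestions condense into a small config sketch (addresses, ports, and MAC below are placeholders; note that plain `sysctl -p` only reads /etc/sysctl.conf, so the new file must be loaded explicitly):

```
# /etc/sysctl.d/30-panic.conf — auto-reboot 20 seconds after a kernel panic
kernel.panic = 20

# load it now without waiting for a reboot:
#   sysctl -p /etc/sysctl.d/30-panic.conf

# ship console output (including panic messages) to another box via netconsole;
# syntax: netconsole=[src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-mac]
#   modprobe netconsole netconsole=6665@10.0.0.2/eth0,6666@10.0.0.1/00:16:3e:00:00:01
```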
*** marrusl has quit IRC | 10:54 | |
Razique | stevegjacobs: how many instances on it ? | 10:54 |
stevegjacobs_ | Only one that counts - | 10:54 |
Razique | stevegjacobs: u have it backedup ? | 10:55 |
Razique | backed-up* | 10:55 |
Kiall | stevegjacobs: seriously do get them to take a photo of the screen! Otherwise, you'll have no idea what went wrong | 10:55 |
Razique | u could launch it then somewhere else (if u have another node) | 10:55 |
tyska | hey Razique | 10:56 |
stevegjacobs_ | It was a web site that was transferred to run live on this server last night. It's back up on its original server | 10:56 |
Kiall | last night? bad timing | 10:56 |
Razique | hi tyska :) | 10:57 |
Razique | Kiall: Murphy's law :( | 10:57 |
stevegjacobs_ | yes it is bad timing | 10:57 |
Kiall | Mind me asking what brand of server it was? I've been having some issues with a HP + Oneiric... | 10:57 |
tyska | Razique: my problem remains =( see it here and say something to me https://answers.launchpad.net/nova/+question/178209 | 10:58 |
stevegjacobs_ | It's a brand new dell | 10:58 |
tyska | Razique: you are the man and you put an end to my suffering =) | 10:58 |
Kiall | tyska: what kind of switch are you using? and have you configured it for vlans? | 10:58 |
tyska | Kiall: im using no switch, just direct connection between server1 and server2 | 10:59 |
Kiall | on eth1? | 10:59 |
stevegjacobs_ | I remember it acting a bit funny at the time I was installing Os | 10:59 |
tyska | yeah, eth1 of server1 is connected directly to eth0 of server2 | 11:00 |
tyska | through a cross-over cable | 11:00 |
stevegjacobs_ | two switches -both vlan capable | 11:00 |
Kiall | so shouldnt --vlan_interface be set to eth1 them? | 11:00 |
Kiall | then* | 11:00 |
Kiall | stevegjacobs: yea, oneiric has a huge bug preventing me from installing it on our 1950 and 2950 servers... -_- | 11:01 |
*** BasTichelaar has joined #openstack | 11:01 | |
stevegjacobs_ | One for public network one for private - | 11:01 |
Kiall | tyska: oh, you're trying to run the second server on a single interface? | 11:02 |
*** jakedahn_ has joined #openstack | 11:03 | |
tyska | Kiall: yeah, second server will use just 1 interface | 11:05 |
*** jakedahn has quit IRC | 11:05 | |
*** jakedahn_ is now known as jakedahn | 11:05 | |
tyska | Kiall: i want him isolated from others subnets, him will just comunicate with server1 | 11:05 |
tyska | it will* | 11:05 |
Razique | tyska: yah same setup here | 11:06 |
Razique | eth0 public eth1 <--> eth1 nova-com | 11:07 |
Razique | weird | 11:07 |
tyska | Razique: my br100 on nova-compute(server2) has no ip, is this right? | 11:07 |
Kiall | assuming br100 is the first network, then it should have an IP | 11:07 |
Razique | in vlan mode, no need to manually setup the bridge | 11:08 |
Razique | a bridge is created per network | 11:08 |
*** stevegjacobs has quit IRC | 11:08 | |
Razique | with its own vlan | 11:08 |
Kiall | yea - if you manually setup a br100, then remove it.. nova will make what it needs | 11:09 |
tyska | i did not manually setup | 11:09 |
tyska | i just thought that was weird | 11:10 |
tyska | since on server2 there is no interface configured in the private IP subnet | 11:10 |
tyska | 172.16.50.0/24 | 11:10 |
tyska | and consequently no route to this subnet | 11:10 |
tyska | then how does the machine handle packets to this subnet? | 11:10 |
tyska | server1 has a compute node too, and his br100 is configured in the subnet 172.16.50.0/24 | 11:11 |
Kiall | Oh wait, your using 1 nova-network rather than 1 per nova-compute... | 11:11 |
Kiall | The interface will not get an IP in that case | 11:11 |
Kiall | Since, its just a bridge, it does no routing | 11:12 |
Kiall | tyska: can S2 ping 192.168.1.254 ? | 11:13 |
Kiall | actually - it has to be able to.. | 11:13 |
Kiall | nevermind | 11:13 |
Kiall | I'm out of ideas.. ;) | 11:13 |
tyska | Kiall: yeah | 11:13 |
tyska | S2 can ping to 192.168.1.254 | 11:13 |
tyska | and even can run instances from S1, using euca-run-instances, in s2 | 11:14 |
*** ahasenack has joined #openstack | 11:14 | |
Kiall | if you assign a floating IP to the instance on S2, does it show in `ip addr show` on S1? (Or, Is the nova setup network for multi-host..) | 11:15 |
Razique | tyska: u mind sharing SSH access ? | 11:17 |
tyska | Razique: np, but i will need to fix a configuration problem here first, because these machines are without internet connection right now | 11:18 |
*** ahasenack has quit IRC | 11:18 | |
*** ahasenack has joined #openstack | 11:19 | |
Razique | ok ok =) | 11:20 |
Razique | again :p I'm interested here about node failure with running instances one it | 11:20 |
Razique | on* | 11:20 |
Razique | if someone has info to share. Here is what I've come up with so far | 11:21 |
*** Hakon|mbp has joined #openstack | 11:21 | |
Razique | shared storage for instances across nodes : useless, virsh will complain "Could not find filter 'nova-instance-instance-00000056-02163e1cd831'" | 11:21 |
Razique | non shared storage and rescue (Diablo feature) doesn't seem to work | 11:21 |
Razique | in fact nova doesn't seem aware it lost a node while there were instances on it | 11:22 |
Razique | so reboot : useless since the node no longer exists | 11:22 |
Kiall | shared storage for /var/lib/nova/instances should work? | 11:22 |
Razique | Kiall: already tried | 11:22 |
Razique | but the network part doesn't handle that scenario | 11:23 |
Razique | in fact my conclusion is : 1- custom heartbeat script | 11:23 |
Razique | 2- make a shared storage for instances | 11:24 |
Razique | 3- require migration ; that doesn't seem to work here :D | 11:24 |
Kiall | Razique: or 3.. http://www.gluster.com/community/documentation/index.php/OSConnect | 11:25 |
Kiall | (Havent used it...) | 11:25 |
Razique | 4- DB field update : useless, since the only thing nova complains about is "in DB = XXX instances, running = 0" | 11:25 |
*** nerens has quit IRC | 11:25 | |
Razique | but the scheduler doesn't take the initiative to respawn | 11:25 |
Kiall | Yea - the scheduler should not respawn the instance IMO.. | 11:25 |
Razique | 5- if you relaunch the node, then… use my script haha https://github.com/Razique/BashStuff/blob/master/SCR_5006_V00_NUAC-OPENSTACK-DRP-OpenStack.sh | 11:26 |
Kiall | What if the instance is running, and you spawn a second copy? | 11:26 |
Razique | you are right…black hole here | 11:26 |
Kiall | # We reset the database so the volumes are reset to an available state | 11:26 |
Kiall | or .. nova-manage volume reattach $id | 11:26 |
Razique | thanks for that info Kiall | 11:27 |
Razique | I don't really use nova-manage for instances admin actually :p | 11:27 |
Kiall | sure.. you still need to find the ID's, but I can only imagine it will be more reliable than trying to DIY ;) | 11:28 |
Razique | DIY ? | 11:28 |
Kiall | "do it yourself" | 11:28 |
Razique | ah :) | 11:28 |
Kiall | Guess you're not a native English speaker ;) | 11:28 |
Razique | Kiall: haha french :p | 11:29 |
Razique | it took me 5 minutes to figure out that BYOB means Bring Your Own Booze | 11:29 |
Kiall | lol | 11:29 |
Razique | Let's try Gluster<-> nova then | 11:29 |
Razique | if it's the "only" stable solution | 11:30 |
Razique | nova rescue seems full of potential , but atm, not really useable | 11:30 |
*** miclorb_ has quit IRC | 11:31 | |
Kiall | nova rescue is nothing to do with downed nodes? it basically the OS equivalent of booting to a shell off a live CD | 11:31 |
Kiall | It's basically* | 11:31 |
Razique | https://blueprints.launchpad.net/nova/+spec/host-aggregates | 11:31 |
Razique | wouldn't it be useful in such a case ? | 11:31 |
Razique | ah no, not really since the node is missing | 11:32 |
*** PotHix has joined #openstack | 11:33 | |
*** cmagina has quit IRC | 11:33 | |
*** cmagina has joined #openstack | 11:34 | |
Razique | Kiall: on a pure let's say…. logical plan | 11:35 |
Razique | GlusterFS brings no more than clustered storage for running instances | 11:35 |
Razique | so a NFS ++++ | 11:35 |
Razique | I'm not saying at all GlusterFS is like NFS :p but the goal here | 11:36 |
Razique | is to make sure our instance files are available across all nodes | 11:36 |
Kiall | and thats exactly (mostly) what GFS does.. | 11:37 |
Razique | that bring us back to my conclusions : how to restart then on a new node | 11:37 |
Kiall | Glusterfs is kinda like a poor man's SAN | 11:37 |
Razique | I have via NFS the instance file on the other node | 11:37 |
foexle | any know how i can get with euca tools a list with all instance types ? | 11:37 |
Razique | now what matters here is to say to nova "ok I've the instance files, but the instance doesn't run, let's start it !" | 11:38 |
zykes- | Razique: what about sheepdogg? | 11:38 |
Razique | foexle: nova-manage flavor list | 11:38 |
Razique | zykes-: same issue here | 11:38 |
foexle | Razique: thx again | 11:38 |
Kiall | zykes-: isnt sheepdog for nova-volume? | 11:39 |
Razique | nova is not aware that it lost running instances | 11:39 |
zykes- | Kiall: no it's for instances | 11:39 |
Kiall | zykes-: are you sure? (I just checked.. http://wiki.openstack.org/SheepdogSupport) | 11:39 |
*** osier has quit IRC | 11:39 | |
Razique | Kiall: the project itself is for instances | 11:40 |
zykes- | https://code.launchpad.net/~morita-kazutaka/nova/sheepdog/+merge/45093 | 11:40 |
Razique | but you seem to be right regarding its implementation in nova | 11:40 |
zykes- | ah ok then Kiall | 11:40 |
Kiall | zykes-: look at step 4 of that link ;) | 11:41 |
Kiall | nova-volume --volume_driver=nova.volume.driver.SheepdogDriver | 11:41 |
zykes- | yeh, i saw | 11:41 |
Razique | You should consider Sheepdog if you are looking for clustered storage that: | 11:41 |
zykes- | what's the difference on gluster then ? | 11:41 |
Razique | zykes-: consider them on a different approach I'd say | 11:41 |
Kiall | you can mount gluster @ /var/lib/nova/instances | 11:41 |
zykes- | ah | 11:41 |
Razique | GFS : stupid replication for instances | 11:41 |
Razique | Sheepdog qemu-kvm aware replication | 11:42 |
zykes- | Razique: say gluster instead, when you say GFS i think of GlobalFileSystem | 11:42 |
Razique | zykes-: sure | 11:42 |
zykes- | which is a totally different filesystem again :p | 11:42 |
*** jeromatron has quit IRC | 11:42 | |
halfss | i am test glfs too | 11:42 |
Razique | u right | 11:42 |
halfss | glusterfs | 11:42 |
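Kiall's Gluster suggestion is just a mount over nova's instance store; a hedged sketch, with the server hostname and volume name as placeholders:

```
# /etc/fstab — mount a GlusterFS volume over nova's instance store on every
# compute node (requires the glusterfs-client package; names are placeholders)
gluster1:/nova-instances  /var/lib/nova/instances  glusterfs  defaults,_netdev  0 0

# one-off equivalent:
#   mount -t glusterfs gluster1:/nova-instances /var/lib/nova/instances
```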
Razique | so i'm now trying to respawn an instance | 11:43 |
Razique | and update the changes regarding the network | 11:43 |
Razique | hehe I think I know what prevents migration https://bugs.launchpad.net/nova/+bug/746821 | 11:46 |
zykes- | :) | 11:46 |
zykes- | i can't test anything that does outside 1 box at the moment which is horribly boring | 11:47 |
Razique | OpenStack Compute (nova) 2011.2 "cactus" ;. | 11:47 |
Razique | :/ | 11:47 |
Razique | ok now we definitely know the issue: restarting instances somewhere else is linked to the network | 11:48 |
Razique | http://libvirt.org/firewall.html | 11:48 |
zykes- | Razique: what network cards do you run ? like what speeds brand etc? | 11:51 |
Razique | for which part ? | 11:52 |
Razique | oh guys it's moving ! | 11:54 |
Razique | the missing files are the custom per-instance firewall rules | 11:54 |
Razique | located in /etc/libvirt/nwfilter/instance-* | 11:55 |
Razique | if you also sync that dir | 11:55 |
Razique | you make sure the network filter for each instance is also available | 11:55 |
alexn6 | btw, what do you think about the fact that kvm caches image files (in the _base dir) and a running instance only needs a very small "diff" of those images (in the instance-NN dir)? Wouldn't it be better to share all base images across hosts in advance, and for live migration only transfer the small files, e.g. via rsync? Is it an interesting feature to ask the developers for? | 11:56 |
*** praefect has joined #openstack | 11:58 | |
Razique | I did it o/ | 12:01 |
Razique | IT WORKED ! | 12:02 |
*** vernhart has joined #openstack | 12:02 | |
Razique | i've been able to down a node and make the same instance restart on the other node | 12:02 |
halfss | Razique:how did you make it ? | 12:03 |
Kiall | Razique: i wonder if the path for /etc/libvirt/nwfilter/instance-* is customizable? | 12:03 |
Razique | halfss: 1- sync of instances + nwfilter | 12:03 |
Razique | restart libvirt | 12:04 |
Razique | update the DB in order to update the instance's host | 12:04 |
Razique | then launch nova reboot/ euca-reboot | 12:04 |
Razique | now the iptables rules are recreated | 12:04 |
Razique | then voila :) | 12:04 |
Razique | the instance is reachable, etc... | 12:04 |
Razique | Kiall: leme check | 12:05 |
halfss | Razique:the most important is update the DB,change the instance's host ? right? | 12:05 |
Razique | halfss: as important as restarting libvirt in order to import the filters | 12:05 |
Razique | and restarting the instance in order to recreate the network rules | 12:06 |
Razique | "They are all stored in /etc/libvirt/nwfilter, but don't edit the files there directly. Use virsh nwfilter-define to update them. This ensures the guests have their iptables/ebtables rules recreated. | 12:06 |
Razique | " | 12:06 |
Razique | in fact I bypass that =D | 12:06 |
foexle | Razique: if i map a external ip address to a new instance i get this from kvm Connected to domain instance-00000015 | 12:07 |
foexle | Escape character is ^] | 12:07 |
foexle | error: internal error character device (null) is not using a PTY | 12:07 |
*** stevegjacobs has joined #openstack | 12:07 | |
Razique | time to lunch for me :p | 12:07 |
Razique | foexle: let's see that later shall we :p | 12:07 |
halfss | so i should copy the instance's xml from /etc/libvirt/nwfilter on the down host to the good host, and then restart libvirt on the good host | 12:08 |
Razique | after lunch i'll quickly write a script and ask for u guys to kindly try it :) | 12:08 |
Razique | halfss: | 12:08 |
Razique | noe | 12:08 |
foexle | sure :D good hunger :) | 12:08 |
halfss | and then change db,reboot instance | 12:08 |
Razique | nope | 12:08 |
halfss | ? | 12:08 |
Razique | halfss: make sure /etc/libvirt/nwfilter and /var/lib/nova/instances are synched | 12:09 |
Razique | on both hosts, these two dirs should have the same files | 12:09 |
Razique | got it ? | 12:09 |
Razique | instance : for instances files | 12:09 |
halfss | oh yes | 12:09 |
Razique | nwfilter : livbirt security rules | 12:09 |
Razique | then restart libvirt | 12:09 |
Razique | service libvirt-bin restart | 12:09 |
halfss | no i use glusterfs sotre /var/lib/nova/instance | 12:10 |
Razique | update the database for the instance: set the two fields 'host' and 'launched_on' to the dest node | 12:10 |
Razique | then euca-reboot or nova-reboot | 12:10 |
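Razique's recovery steps above condense into a short script. This is a dry-run sketch that only prints the commands; it presumes /var/lib/nova/instances and /etc/libvirt/nwfilter are already synced between the nodes, and the DB id, node name, and euca-style hex id mapping are assumptions to verify against your own database:

```shell
# Dry-run sketch of the manual failover: prints the commands rather than
# running them. Assumes instance files and nwfilter XML are already in sync.
failover() {
  id=$1     # the instance's DB id, e.g. 56 for instance-00000056 (assumption)
  dest=$2   # the surviving compute node
  # 1- restart libvirt on the surviving node so it imports the synced filters
  echo "service libvirt-bin restart"
  # 2- point the instance at its new host in the nova database
  echo "mysql nova -e \"UPDATE instances SET host='$dest', launched_on='$dest' WHERE id=$id;\""
  # 3- reboot the instance so nova recreates its iptables rules
  printf 'euca-reboot-instances i-%08x\n' "$id"
}

failover 56 node2
```

Piping the output to `sh` would execute the steps for real once the names are confirmed.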
Razique | halfss: no use, if the NFS perf are ok for you | 12:11 |
*** vidd-away has quit IRC | 12:11 | |
Razique | halfss: we only see here that Gluster FS is a performant NFS, no more (in our context) | 12:11 |
halfss | NFS's speed is not good | 12:11 |
Razique | halfss: yah I know, but for validating here it was ok | 12:12 |
halfss | yes,i know | 12:12 |
Razique | ok lunch :p | 12:12 |
Razique | I love to talk here but that's the issue =d | 12:12 |
Razique | time goes by | 12:12 |
halfss | Razique:have you see this:https://github.com/Mirantis/openstack-utils/blob/master/nova-compute | 12:12 |
Razique | be back in an hour guys ;) | 12:12 |
*** livemoon has joined #openstack | 12:12 | |
Razique | haha awesome thanks ! | 12:13 |
halfss | this script can reboot instances (shared store) from a bad host on another good host | 12:13 |
halfss | but it looks like it works on cactus, not diablo | 12:14 |
halfss | can you make it work on diablo? | 12:14 |
Razique | will try to :p | 12:15 |
halfss | ok,if you done tell me,thanks | 12:15 |
halfss | ok? | 12:15 |
*** GeoDud has quit IRC | 12:16 | |
*** rods has joined #openstack | 12:17 | |
*** reidrac has quit IRC | 12:18 | |
*** vidd has joined #openstack | 12:20 | |
*** vidd has joined #openstack | 12:20 | |
*** reidrac has joined #openstack | 12:21 | |
tyska | \join #ubuntu-server | 12:23 |
tyska | wrong side =) | 12:23 |
livemoon | ....... | 12:24 |
*** livemoon has left #openstack | 12:24 | |
*** livemoon has joined #openstack | 12:24 | |
tyska | /join instead \join | 12:24 |
tyska | =) | 12:24 |
livemoon | :) | 12:24 |
vidd | tyska, happens to everyone | 12:25 |
tyska | vidd: =) | 12:25 |
livemoon | vidd | 12:25 |
vidd | first time i did that i was in a busy channel and tried to ghost myself...there was my password in chat =] | 12:25 |
livemoon | today I install the lastest nova, glance and keystone. but it cannot be started | 12:26 |
tyska | someone here already have to configure a server to use a authenticated proxy? | 12:26 |
*** mattstep has quit IRC | 12:27 | |
vidd | livemoon, you installed them over existing or fresh install? | 12:29 |
livemoon | fresh | 12:29 |
livemoon | a clean 10.10 | 12:29 |
vidd | if you installed them from git, you have to tell them to start | 12:30 |
livemoon | nono | 12:30 |
vidd | then what doe the error logs say? | 12:31 |
livemoon | I mean it does not work fine | 12:32 |
livemoon | It all can be started | 12:32 |
livemoon | but nova and glance don't work with keystone | 12:32 |
livemoon | error show | 12:32 |
livemoon | https://bugs.launchpad.net/keystone/+bug/888448 | 12:34 |
livemoon | vidd here | 12:34 |
foexle | livemoon: it works, but only with many bypasses make sure you use the newest nova-client | 12:36 |
*** GheRivero has quit IRC | 12:37 | |
livemoon | I use the newest nova-client | 12:37 |
livemoon | when I use glance command ,the error also occur | 12:37 |
foexle | which one ? :D | 12:38 |
livemoon | what do you mean? | 12:38 |
vidd | livemoon, keystone from apt or git? | 12:38 |
livemoon | all of them from git | 12:38 |
foexle | livemoon: which error ? could you paste? | 12:39 |
vidd | and keystone is running? | 12:39 |
livemoon | yes | 12:39 |
livemoon | foexle: https://bugs.launchpad.net/keystone/+bug/888448 | 12:39 |
foexle | ah ok | 12:39 |
vidd | livemoon, backup your "/usr/share/pyshared/keystone/middleware/auth_token.py" and replace it with this one: https://github.com/managedit/keystone/blob/master/keystone/middleware/auth_token.py | 12:46 |
*** bsza has joined #openstack | 12:46 | |
vidd | make sure you back up your existing....dont just replace it | 12:46 |
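vidd's backup-then-replace advice as a small helper (the helper name is hypothetical; the pyshared path and the github file are the ones mentioned in channel, and the raw-file URL form is an assumption):

```shell
# Back up a file before dropping a replacement in its place (hypothetical helper)
backup_and_replace() {  # $1 = file to replace, $2 = replacement file
  cp -a "$1" "$1.orig" && cp "$2" "$1"
}

# Usage against the file vidd mentions (raw URL form is an assumption):
#   wget -O /tmp/auth_token.py \
#     https://raw.github.com/managedit/keystone/master/keystone/middleware/auth_token.py
#   backup_and_replace /usr/share/pyshared/keystone/middleware/auth_token.py /tmp/auth_token.py
```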
*** javiF has quit IRC | 12:50 | |
*** dirkx_ has joined #openstack | 12:52 | |
*** wulianmeng has quit IRC | 12:53 | |
*** lorin1 has joined #openstack | 12:54 | |
livemoon | tomorrow I will try | 12:54 |
*** nelson has joined #openstack | 12:54 | |
livemoon | this is whose? | 12:54 |
vidd | livemoon, that is Kiall's work | 12:56 |
Kiall | huh? | 12:57 |
vidd | i dont know how well it will mingle with new git stuff...that's why i say back up the old before dropping his in | 12:57 |
*** popux has joined #openstack | 12:57 | |
vidd | Kiall, your auth_token script for keystone | 12:57 |
Kiall | thats just the stable/diablo version.. | 12:58 |
*** Otter768 has quit IRC | 12:58 | |
*** stevegjacobs has quit IRC | 12:58 | |
*** Rajaram has quit IRC | 12:58 | |
vidd | livemoon, remember that the "master" branch of openstack is all experimental | 12:58 |
*** n81 has joined #openstack | 12:58 | |
livemoon | Kiall, which stable version you fork? | 12:58 |
livemoon | vidd , thanks | 12:59 |
Kiall | my fork is simply packaging stuff.. | 12:59 |
Kiall | and my "master" branch is a openstack's "stable/diablo" branch + packaging stuff | 12:59 |
vidd | livemoon, he has created a ppa that has keystone and dashboard working from apt-get | 12:59 |
livemoon | what is packaging stuff? | 12:59 |
*** nerens has joined #openstack | 13:00 | |
vidd | livemoon, https://launchpad.net/~managedit/+archive/openstack/ | 13:00 |
*** lts has joined #openstack | 13:00 | |
livemoon | really? that's good work | 13:00 |
Kiall | changes for building .deb's | 13:00 |
livemoon | I want tu use Kiall's ppa in my production | 13:00 |
vidd | livemoon, Kiall's statue is being built as we speak =] | 13:00 |
Kiall | lol | 13:01 |
vidd | Kiall, have you applied to the ubuntu repo's yet? | 13:01 |
*** kaigan has quit IRC | 13:02 | |
tyska | Razique: are u there? | 13:02 |
Kiall | No, I probably wont.. I can't get my head around bzr where they keep everything (kinda dont want to more like..) | 13:02 |
Kiall | 99% of the changes in my packages are literally just updating to the right versions of stuff | 13:03 |
vidd | Kiall, then how about just applying for your packages =] | 13:03 |
*** lorin1 has quit IRC | 13:04 | |
Kiall | If they _wanted_ to have working packages, they could do it in a few hours at most. | 13:04 |
Kiall | but, I don't think they want to update them, due to policies around updated versions.. and the stable/diablo branches being a moving target rather than an actual point release | 13:05 |
vidd | i just wish that openstack would release a month before ubuntu freeze | 13:05 |
Kiall | if OS released a 2011.3.1, I think ubuntu will update. But unless that happens, I don't think they will. | 13:06 |
Kiall | (I could be completely wrong BTW) | 13:06 |
vidd | perhaps ill pop in on my old xubuntu friends and ask if someone can take your ppa up the food chain =] | 13:07 |
Kiall | lol.. | 13:08 |
*** kaigan has joined #openstack | 13:08 | |
Kiall | vidd: or contact the maintainers first, rather then going above their heads ;) | 13:08 |
vidd | i was not aware OS had maintainers=] | 13:09 |
vidd | i thought THAT was the problem | 13:09 |
Kiall | These guys AFAIK https://launchpad.net/~ubuntu-server-dev | 13:09 |
livemoon | Kiall vidd : Do you mean diablo in ubuntu will not update ? | 13:09 |
vidd | livemoon, they havent updated the packages in a month...so it does not look promising | 13:10 |
Kiall | livemoon: as far as I know, ubuntu policy prevents them updating until openstack makes a release. | 13:10 |
livemoon | This means we only wait for essex | 13:11 |
Kiall | livemoon: nope, thats a major version change, that has to wait for the next release of ubuntu | 13:11 |
vidd | livemoon, you can expect the same issue there | 13:11 |
vidd | OS will release after the ubuntu freeze | 13:12 |
Kiall | (again - as far as I know, I'm only vaguely familiar with ubuntu polices) | 13:12 |
vidd | so more out-of-sync issues | 13:12 |
livemoon | but ubuntu is just release 11.10 | 13:13 |
livemoon | I think it maybe cost some time to release next | 13:13 |
vidd | ubuntu releases every 6 months | 13:14 |
vidd | between the 10th and the 25th of the month | 13:14 |
Kiall | livemoon: yes, 12.04 is the next release.. and its an LTS release, so they will be very cautious about what they put in... | 13:14 |
livemoon | essex maybe the same time release next year | 13:14 |
vidd | livemoon, right...which means OS will miss the ubuntu freeze and we will most likely be stuck with the broken stuff we already have | 13:15 |
*** Rajaram has joined #openstack | 13:15 | |
Kiall | vidd: at least keystone+dash will be core essex projects, and will be available from the official openstack PPAs | 13:16 |
Kiall | (ie https://launchpad.net/~openstack-release/+archive/2011.3 ) | 13:16 |
*** emid has quit IRC | 13:17 | |
livemoon | but now the keystone in ppa is older | 13:17 |
vidd | Kiall, i dont see alot of backporting in this project =\ | 13:17 |
Kiall | vidd: they only just decided to do stable branches | 13:17 |
vidd | perhaps because of the ubuntu 11.10 snafu? | 13:18 |
Kiall | AFAIK, at the time diablo was released, there was no official plan for stable branches.. It came a little after... | 13:18 |
*** shang has joined #openstack | 13:18 | |
vidd | Kiall, on a different subject.... | 13:19 |
vidd | im writing my script, and i am having visudo launch to fix the sudo rights issue with nova-volume.... | 13:19 |
vidd | when the runner closes visudo, will a bash script continue? | 13:20 |
Kiall | nova volume doesnt have issues with sudo? | 13:20 |
Kiall | at least, not with my packages, you're using them right? | 13:20 |
vidd | Kiall, yes | 13:20 |
Kiall | what needs chaning? | 13:20 |
Kiall | changing* | 13:20 |
livemoon | vidd | 13:20 |
Kiall | as in, what are you changing with visudo.. I'll just update the packages with whatever command is needed | 13:21 |
livemoon | I found nova-volume sometimes cannot remove the lv because of tgt | 13:21 |
vidd | Kiall, http://docs.openstack.org/diablo/openstack-compute/admin/content/managing-volumes.html | 13:21 |
Kiall | vidd: have you tested it yet? ;) | 13:21 |
*** bsza has quit IRC | 13:21 | |
Kiall | cat /etc/sudoers.d/nova_sudoers | 13:22 |
vidd | tested the script? | 13:22 |
*** Rajaram_ has joined #openstack | 13:22 | |
Kiall | The packages handle getting the sudo rights in place | 13:22 |
livemoon | vidd, you use nova-volume from repo? | 13:23 |
vidd | yes...i'm using everything...from Kiall 's ppa | 13:23 |
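For reference, the grant installed by the packages is visible with the `cat` Kiall ran earlier; on diablo-era packaging it is typically a blanket rule (hedged — verify locally; later releases narrow this down via nova-rootwrap):

```
# /etc/sudoers.d/nova_sudoers as typically shipped by diablo-era packages
# (verify with: cat /etc/sudoers.d/nova_sudoers)
nova ALL = (root) NOPASSWD: ALL
```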
zykes- | vidd: aloha | 13:23 |
vidd | i havent gotten to dashboard yet...and i dont see nova-vncproxy in there | 13:24 |
livemoon | kiall's ppa is stable? | 13:24 |
*** Rajaram has quit IRC | 13:24 | |
livemoon | I wil try it | 13:24 |
vidd | zykes-, how did you chat with OS go? | 13:24 |
Kiall | vidd: Lots of the steps in the openstack docs can be skipped when using my packages (and ubuntus, at least for nova) | 13:24 |
*** Rajaram has joined #openstack | 13:24 | |
livemoon | Kiall, do you have docs about your ppa? | 13:25 |
Kiall | vidd: nova-vncproxy is included, but frankly, I haven't got it working -_- | 13:25 |
Kiall | livemoon: https://github.com/managedit/openstack-setup | 13:25 |
vidd | zykes-, did you get an answer about DNSaaS? | 13:25 |
*** Rajaram_ has quit IRC | 13:26 | |
zykes- | vidd: chat with who ? no not yet :( sent a mail to the ML but noone answered | 13:26 |
livemoon | kiall, it's your git? | 13:26 |
* vidd has found a need for it | 13:26 | |
zykes- | vidd: oh rly ? | 13:26 |
vidd | zykes-, yes.... | 13:26 |
Kiall | lionel: yea | 13:26 |
Kiall | livemoon: yes* | 13:26 |
livemoon | ok, fork you | 13:27 |
*** derjohn_mob has joined #openstack | 13:27 | |
vidd | zykes-, i have existing servers that i want to convert into VM's.... | 13:27 |
vidd | when there is need to load-balance, there may be a need to spawn additional instances... | 13:28 |
*** vernhart has quit IRC | 13:28 | |
zykes- | vidd: add stuff you mean to the etherpad in that case.. | 13:28 |
vidd | if the load balancer detects the need, it launches the new instance and the DNSaaS helps with the locating | 13:29 |
zykes- | vidd: the problem is that DNS performance doesn't just help with just spawning new instances :p | 13:29 |
livemoon | vidd: DNSaas ? | 13:29 |
livemoon | teach me | 13:29 |
vidd | etherpad....github....launchpad....pretty soon, im going to have accounts on half the internet =\ | 13:30 |
zykes- | no need for a account vidd | 13:30 |
zykes- | http://etherpad.openstack.org/HkEvt4crw9 | 13:30 |
Razique | Kiall: yah | 13:31 |
vidd | livemoon, i do not know anything about DNSaaS ... that is zykes- thing =] | 13:35 |
*** AlanClark has joined #openstack | 13:35 | |
livemoon | zykes-: it needs a dns image first, doesn't it? | 13:36 |
Razique | yah it's zykes- stuff :D | 13:36 |
*** anonymous_ has joined #openstack | 13:37 | |
vidd | Razique, nice work on the migration page ... i see you looked into my review and made some updates =] | 13:37 |
Razique | oh u have written to first one ? :D | 13:38 |
uvirtbot | New bug: #888546 in nova "Extended Status Admin API extension fails in multi-zone mode" [Undecided,New] https://launchpad.net/bugs/888546 | 13:38 |
livemoon | vidd, razique: you are both good boy | 13:38 |
*** sandywalsh_ has joined #openstack | 13:38 | |
Razique | hehe so are you livemoon :) | 13:38 |
vidd | livemoon, not me...im the evil twin | 13:38 |
livemoon | twin? | 13:39 |
vidd | =] | 13:39 |
Razique | =D | 13:39 |
vidd | yes...im a twin...and the family joke is i can never say "i didnt do it, it was my evil twin because i AM the evil twin" | 13:40 |
zykes- | i haven't bother to do anything with it cause noone's willing to talk about how to do it so :) | 13:40 |
anonymous_ | ? | 13:40 |
Razique | zykes-: DNSaaS antonym | 13:40 |
Razique | anonymous_: | 13:40 |
vidd | zykes-, if i knew how to do it, id help =] | 13:41 |
Razique | is there a blueprint for that ? | 13:41 |
zykes- | Razique nop | 13:41 |
zykes- | just the etherpad at the moment | 13:41 |
anonymous_ | sorry about the ?; /help was ignored by this webclient. Or maybe I was typing prolog :-) | 13:41 |
*** sandywalsh has quit IRC | 13:41 | |
Razique | np :) | 13:41 |
foexle | Razique: i found the issue ;) | 13:42 |
Razique | oh ? what was that ? | 13:42 |
*** PeteDaGuru has joined #openstack | 13:42 | |
foexle | my mistake :) .... i forgot the keypair option ! | 13:42 |
foexle | normally should start the instance without that | 13:42 |
*** hallyn has quit IRC | 13:42 | |
anonymous_ | anyone have any comments about the solaris 11 announcement with "cloud support" and Zones. My intuition is that zones are not high enough isolation compared to a real VM | 13:43 |
foexle | but seemingly not :) | 13:43 |
*** hallyn has joined #openstack | 13:45 | |
*** sandywalsh_ has quit IRC | 13:45 | |
zykes- | anyone of you familiar with openvz ? | 13:45 |
*** sandywalsh_ has joined #openstack | 13:46 | |
*** chemikadze has joined #openstack | 13:50 | |
foexle | if i try to delete a volume with nova-manage, it first writes zeros over every block with dd and then erases the lv ? | 13:50 |
*** zul has quit IRC | 13:50 | |
foexle | thats pretty i/o intensive | 13:50 |
Razique | foexle: yah | 13:50 |
Razique | it's a sec. measure | 13:51 |
foexle | can i disable that ? | 13:51 |
Kiall | foexle: i dont think so, not without changing code anyway | 13:51 |
foexle | hmmm ok | 13:52 |
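The secure-wipe step foexle ran into is plain dd-zeroing of the LV before removal; a hedged reconstruction (the LV name is hypothetical), with a tiny helper showing the write volume involved:

```shell
# Roughly what nova-volume's LVM delete does (hedged reconstruction):
#   dd if=/dev/zero of=/dev/nova-volumes/volume-00000015 bs=1M   # zero it all
#   lvremove -f /dev/nova-volumes/volume-00000015
# Zeroing writes the volume's full size, which is why deletes hammer I/O:
zeroed_bytes() {  # $1 = volume size in GiB
  echo $(( $1 * 1024 * 1024 * 1024 ))
}
zeroed_bytes 10   # deleting a 10 GiB volume means ~10 GiB of writes first
```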
*** mnour has quit IRC | 13:53 | |
*** zul has joined #openstack | 13:53 | |
*** halfss has quit IRC | 13:53 | |
*** halfss has joined #openstack | 13:54 | |
livemoon | hi.my all friends | 13:54 |
livemoon | see you tomorrow,bye, sleep | 13:55 |
Razique | bye livemoon ;) | 13:55 |
*** kbringard has joined #openstack | 13:55 | |
zykes- | and another thing is that i don't have any vms / hw to run on :( | 13:55 |
*** livemoon has left #openstack | 13:56 | |
*** uksysadmin has joined #openstack | 13:56 | |
*** tyska has quit IRC | 13:56 | |
*** ldlework has joined #openstack | 13:57 | |
foexle | ok next issue :D .... i try to resolve all issues alone, but this one is strange ... (nova.rpc): TRACE: TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType | 13:59 |
foexle | after i create a volume with euca tools | 14:00 |
Kiall | foexle: what version are you running? | 14:00 |
foexle | the volume are existing now | 14:00 |
foexle | Kiall: diablo stable | 14:00 |
stevegjacobs_ | back from the data centre :-) | 14:00 |
Kiall | from ubuntu repos, or? | 14:00 |
foexle | but if i describe volumes they are in error state | 14:00 |
Kiall | stevegjacobs_: took you're time ;) | 14:00 |
foexle | Kiall: yes | 14:00 |
Kiall | I had similar issues, if i remember right, the real exception is being covered up.. | 14:01 |
stevegjacobs_ | took digital pix what was on the screen - spent some time troubleshooting while out there | 14:01 |
Kiall | stevegjacobs_: what was the panic message in the end? | 14:01 |
Razique | I love to read the technical doc about cloud ; always the same words :D "Enterprises can scale capacity, performance, and availability on demand, with | 14:01 |
Razique | no vendor lock-in, across on-premise, public cloud, and hybrid environments." | 14:01 |
praefect | hi guys, good morning to everyone... | 14:02 |
*** msivanes has joined #openstack | 14:02 | |
Razique | hi praefect :) | 14:02 |
praefect | do you get something via "nova zone-info"? I get a 404 | 14:02 |
praefect | hi Razique! | 14:02 |
Kiall | (stevegjacobs_, I'm asking in case its the same issue as I had.. might not be limited to this particular HP server) | 14:03 |
*** nerens has quit IRC | 14:03 | |
foexle | Kiall: so i cant find any other error in volume log | 14:05 |
stevegjacobs_ | Kiall: I don't know how to interpret - I'll get the picture up somewhere you can have a look | 14:05 |
*** ldlework has quit IRC | 14:05 | |
Kiall | foexle: yea, it was a PITA to debug last time I had one of those | 14:06 |
*** stevegjacobs has joined #openstack | 14:07 | |
*** misheska has joined #openstack | 14:08 | |
*** nerens has joined #openstack | 14:08 | |
foexle | Kiall: so i cant use euca tools to create /delete volumes ? .... ApiError: Volume status must be available | 14:09 |
foexle | ups | 14:10 |
foexle | want to paste this one https://bugs.launchpad.net/nova/+bug/716847 | 14:10 |
foexle | its my issue | 14:10 |
sandywalsh_ | praefect, you need --allow_admin_api defined in nova.conf for zone-info to work | 14:10 |
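For anyone following along: the diablo-era nova.conf is a flagfile, so sandywalsh_'s suggestion is a one-line change. A minimal sketch, written to a scratch path rather than the real /etc/nova/nova.conf (and nova-api needs a restart after a real change):

```shell
# Sketch only: append the admin-API flag to a scratch copy instead of
# the real /etc/nova/nova.conf, then confirm it landed.
conf=/tmp/nova.conf.example
printf '%s\n' '--allow_admin_api=true' >> "$conf"
grep -- '--allow_admin_api' "$conf"
```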
*** kaigan has quit IRC | 14:12 | |
*** javiF has joined #openstack | 14:12 | |
*** bsza has joined #openstack | 14:12 | |
*** mdomsch has joined #openstack | 14:13 | |
vidd | Kiall, i dont see how your scripts get the env | 14:13 |
Kiall | "env" ? | 14:13 |
Kiall | you mean like NOVA_BLA etc? | 14:13 |
vidd | yes | 14:13 |
Kiall | they source the settings file.. | 14:13 |
praefect | thanks sandywalsh_ it works | 14:13 |
Razique | I just installed gluster fs for openstack | 14:13 |
Razique | let's try it with HA :D | 14:14 |
*** stevegjacobs has quit IRC | 14:14 | |
*** dprince has joined #openstack | 14:17 | |
*** zul has quit IRC | 14:17 | |
*** jeromatron has joined #openstack | 14:18 | |
*** zul has joined #openstack | 14:19 | |
*** chuck__ has joined #openstack | 14:20 | |
*** chuck__ is now known as zul | 14:20 | |
*** ldlework has joined #openstack | 14:20 | |
*** bcwaldon has joined #openstack | 14:22 | |
*** localhost has quit IRC | 14:23 | |
*** localhost has joined #openstack | 14:25 | |
*** stuntmachine has joined #openstack | 14:26 | |
*** halfss has quit IRC | 14:26 | |
Razique | that glusterfs is quite neat to administrate :) | 14:26 |
Razique | love the cli | 14:26 |
*** lborda has joined #openstack | 14:30 | |
*** stuntmachine has quit IRC | 14:31 | |
n81 | is there anybody here who knows their iSCSI stuff? | 14:32 |
*** mcclurmc has quit IRC | 14:32 | |
Razique | yh | 14:32 |
Razique | ask | 14:32 |
*** mcclurmc has joined #openstack | 14:32 | |
Razique | I'm no expert but just ask :) | 14:33 |
n81 | Raz: haha ok…I got my first OS cloud setup and after a few source code tweaks got it running. I originally installed using packages from Ubuntu so found out those were a bit out-dated. So instead switched over to Kiall's managed PPA packages. And the cloud works great, except I'm getting an error attaching volumes. | 14:34 |
vidd | n81, do you have --iscsi_ip_prefix= in your novaconf? | 14:35 |
BasTichelaar | sandywalsh_: got zones working with LXC | 14:35 |
n81 | vidd: here's the strange part. I got iscsi and volumes to work before | 14:35 |
n81 | vidd: and I'm using the exact same configuration file on a fresh clean Ubuntu 11.10 install using Kiall's PPA packages | 14:36 |
sandywalsh_ | BasTichelaar, nice! | 14:36 |
n81 | let me put the error in pastebin | 14:36 |
BasTichelaar | sandywalsh_: there was indeed a bug with libvirt: https://bugs.launchpad.net/nova/+bug/887805 | 14:36 |
n81 | vidd/raz: the error is with iscsiadm command…not with OS. If I run the iscsiadm command OS is trying to run on my compute node I get the same error as OS, but I'm at a loss for why the iscsiadm command is now timing out…I think it could be a firewall issue, maybe? | 14:37 |
BasTichelaar | sandywalsh_: only issue is that libvirt doesnt provide vcpus_used for LXC, will file a bug report for that | 14:37 |
*** stuntmachine has joined #openstack | 14:38 | |
*** livemoon has joined #openstack | 14:38 | |
livemoon | hi, is anyone here? | 14:38 |
n81 | vidd/raz: http://paste.openstack.org/show/3241/ | 14:39 |
Razique | yh | 14:39 |
livemoon | vidd razique? | 14:39 |
vidd | n81 there does need to be a firewall path open if a remote machine is trying to access them | 14:39 |
anonymous_ | yes | 14:39 |
livemoon | https://bugs.launchpad.net/bugs/888448 | 14:39 |
livemoon | look | 14:39 |
livemoon | this bug, can anyone tell me what means | 14:39 |
n81 | vidd/raz: I keep getting this failed to receive a PDU back | 14:39 |
livemoon | it is reported by me and now someone reply me | 14:39 |
*** Tsel has quit IRC | 14:40 | |
Razique | n81: is that host resolvable cloudcntlr | 14:40 |
foexle | Razique: i get this error too | 14:40 |
Razique | livemoon: have u added glance -A ? | 14:40 |
n81 | raz: at first it wasn't =P….but it is now…I just needed to make sure all my machines were on the same sub-domain... | 14:40 |
foexle | Razique: compute node try to do iscsi stuff | 14:41 |
livemoon | not only glance, if I use novaclient, this error also occur | 14:41 |
*** jeromatron has quit IRC | 14:41 | |
Razique | n81: telnet 192.1.253.194 3260 | 14:41 |
Razique | from the node | 14:41 |
n81 | raz: Trying 192.1.253.194... | 14:42 |
n81 | Connected to 192.1.253.194. | 14:42 |
n81 | Escape character is '^]'. | 14:42 |
n81 | Connection closed by foreign host. | 14:42 |
*** snet has joined #openstack | 14:42 | |
vidd | livemoon, you need to have the users = {...} match auth['user'] = {...} | 14:42 |
livemoon | ok | 14:43 |
sandywalsh_ | BasTichelaar, yeah, vcpu support is scanty at best generally | 14:43 |
livemoon | thanks, I decided to learn English well | 14:43 |
vidd | change one or the other | 14:43 |
vidd | livemoon, or go with Kiall 's repos =] | 14:44 |
livemoon | no. now I am at home mid-night | 14:44 |
*** dongxu has joined #openstack | 14:44 | |
vidd | hehe | 14:44 |
livemoon | I will go to office tomorrow to try | 14:44 |
vidd | livemoon, you cant reach office from home? | 14:45 |
livemoon | night is the time chat with you | 14:45 |
livemoon | vpn is broken today | 14:45 |
*** dongxu has left #openstack | 14:46 | |
livemoon | vidd: | 14:47 |
livemoon | vidd: you are twins? you have brother? | 14:48 |
*** lborda has quit IRC | 14:50 | |
Razique | hehe | 14:51 |
BasTichelaar | sandywalsh_: how does the distributedscheduler by default decide where to run an instance? | 14:51 |
vidd | livemoon, yes | 14:51 |
Razique | BasTichelaar: funny u ask | 14:51 |
vidd | he's a microsoft junkie...and they call ME the evil twin =] | 14:51 |
BasTichelaar | Razique: why? | 14:51 |
livemoon | I have a twin name "deadsun" | 14:52 |
n81 | vidd/raz: should my telnet connection be terminated immediately into the iscsitarget service? | 14:52 |
*** joesavak has joined #openstack | 14:52 | |
livemoon | BasTichelaar: I am looking at this http://nova.openstack.org/devref/distributed_scheduler.html?highlight=zones | 14:52 |
*** jsavak has joined #openstack | 14:53 | |
BasTichelaar | livemoon: yes, me too | 14:53 |
vidd | n81, i have no idea...like Razique im no expert =] | 14:53 |
foexle | which iscsi packages does the compute node need? the volume-manager runs on another node | 14:53 |
sandywalsh_ | BasTichelaar, right now it filters on available ram and disk | 14:53 |
sandywalsh_ | BasTichelaar, ram is usually the gating metric | 14:53 |
Razique | BasTichelaar: because http://www.mail-archive.com/openstack@lists.launchpad.net/msg05317.html | 14:54 |
Razique | :p | 14:54 |
foexle | n81: i think the package iscsitarget are not installed on the compute node .... but dont know i'm testing | 14:54 |
Razique | n81: not necessarily | 14:54 |
BasTichelaar | sandywalsh_: ahh ok, thought it did something with CPU as well | 14:54 |
foexle | n81: have the same issue | 14:54 |
BasTichelaar | Razique: hot topic :) | 14:54 |
Razique | BasTichelaar: yup | 14:54 |
sandywalsh_ | BasTichelaar, you can if you define extra parameters on the instance type | 14:54 |
Razique | n81: run that from the node iscsiadm -m discovery -t st -p 192.1.253.194 | 14:55 |
sandywalsh_ | look at nova/scheduler/filters/instance_type_filter.py | 14:55 |
BasTichelaar | sandywalsh_: thanks! | 14:55 |
Razique | iscsiadm -m session -t st -p 192.1.253.194 | 14:55 |
sandywalsh_ | np | 14:55 |
livemoon | and iscsiadm -m node | 14:55 |
BasTichelaar | Razique: what is the output of nova zone-info for each of your zones? | 14:55 |
*** dongxu has joined #openstack | 14:56 | |
BasTichelaar | Razique: hmm, I see your nodes are in the same zone :) | 14:56 |
Razique | BasTichelaar yah the default one - nova | 14:56 |
*** dongxu has left #openstack | 14:56 | |
Razique | I've never played with zones so far | 14:56 |
BasTichelaar | Razique: ok, so my question is a little bit different | 14:57 |
*** joesavak has quit IRC | 14:57 | |
BasTichelaar | :) | 14:57 |
n81 | livemoon: thor@cloudnc1:~$ sudo iscsiadm -m session | 14:57 |
n81 | iscsiadm: No active sessions. | 14:57 |
Razique | n81: and mine ? :p | 14:57 |
livemoon | first you should use iscsiadm -m discovery -t st -p IP | 14:57 |
Razique | BasTichelaar: oh u were asking across zones ? | 14:57 |
*** dongxu has joined #openstack | 14:57 | |
livemoon | then use iscsiadm -m node | 14:57 |
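The checks livemoon and Razique describe, collected into one initiator-side sketch (`-t st` is shorthand for sendtargets discovery; the target IP is the volume server from this conversation, and the function bails out cleanly when open-iscsi isn't installed):

```shell
# Sketch of the initiator-side checks discussed above. Assumes the
# nova-volume host exports targets on the default iSCSI port (3260).
discover_targets() {
    host=$1
    if ! command -v iscsiadm >/dev/null 2>&1; then
        echo "open-iscsi not installed; skipping"
        return 0
    fi
    iscsiadm -m discovery -t sendtargets -p "$host"  # 1. what does the portal export?
    iscsiadm -m node                                 # 2. node records created by discovery
    iscsiadm -m session || true                      # 3. logged-in sessions (none until a volume is attached)
    return 0
}

discover_targets 192.1.253.194   # volume server from the log
```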
BasTichelaar | Razique: yes, I'm setting up two separated zones with shared-nothing | 14:58 |
BasTichelaar | Razique: using trunk and the distributedscheduler | 14:58 |
Razique | oh ok, sorry ^^ | 14:58 |
BasTichelaar | Razique: apart from a few bugs it seems to work out quite ok | 14:58 |
BasTichelaar | Razique: np :) | 14:58 |
Razique | good to know then :) | 14:58 |
BasTichelaar | Razique: it's only the lack of documentation and the chaos of different schedulers that makes it difficult | 14:59 |
*** robbiew has joined #openstack | 14:59 | |
BasTichelaar | Razique: maybe I should write some blog post about it :) | 14:59 |
Razique | that would be nice from you | 14:59 |
*** deshantm_laptop has joined #openstack | 14:59 | |
Razique | and we could ; if you want; update the doc accordingly | 14:59 |
Razique | exit | 14:59 |
Razique | opps | 15:00 |
n81 | livemoon: same error, failed to receive a PDU | 15:00 |
BasTichelaar | Razique: yes, would be a good idea, there is a lot of legacy stuff in the current diablo docs | 15:00 |
Razique | n81: what target discovery gives ? | 15:00 |
Razique | BasTichelaar: fantastic | 15:00 |
*** bsza has quit IRC | 15:01 | |
n81 | raz/livemoon: here's my iscsiadm discovery on level 8 verbose mode: =P http://paste.openstack.org/show/3242/ | 15:01 |
*** katkee has quit IRC | 15:02 | |
*** lborda has joined #openstack | 15:02 | |
*** rnorwood has joined #openstack | 15:02 | |
Razique | nothing here mmmm | 15:03 |
Razique | appart that PDU | 15:03 |
n81 | what the hell is that PDU =P | 15:03 |
n81 | I mean…my understanding is my node is making a valid discovery request and something on the cloud controller is not responding | 15:04 |
n81 | either is getting blocked via firewall or is erroring or is not even accepting the incoming request | 15:04 |
n81 | b/c the log shows a valid connection | 15:04 |
n81 | on port 3260 | 15:04 |
livemoon | look at your 192.1.253.194 | 15:04 |
livemoon | does service worked well in it? | 15:05 |
Razique | n81: is nova-volume running without error ? | 15:05 |
Razique | restart nova-volume and turn up the debug mode | 15:05 |
*** lorin1 has joined #openstack | 15:05 | |
livemoon | I think it is not nova-volume problem | 15:06 |
livemoon | just tgt and openiscsi | 15:06 |
Razique | depends, since nova-volume takes care of the iscsi part | 15:07 |
Razique | I see what you mean | 15:08 |
Razique | but if during the setup something went fubar, nova-volume.log should show it I hope | 15:08 |
*** Rajaram has quit IRC | 15:08 | |
livemoon | ok | 15:08 |
BasTichelaar | sandywalsh_: the distributedscheduler checks the zone info every 120 seconds | 15:10 |
*** livemoon has left #openstack | 15:10 | |
BasTichelaar | sandywalsh_: so when I fire up 10 instances at once, they will definitely get to the same zone, correct? | 15:10 |
sandywalsh_ | BasTichelaar, no, the distributed_scheduler._schedule() method will ask each child zone for a build plan. It will decide from there where the instance should go. | 15:11 |
sandywalsh_ | BasTichelaar, the polling of the child zones is only to check online and general capabilities | 15:11 |
n81 | Raz/livemoon: I restarted nova-volume…not seeing any errors or traces in the debug log | 15:12 |
sandywalsh_ | BasTichelaar, decision making is done at the time of request | 15:12 |
n81 | Running cmd (subprocess): sudo ietadm --op new --tid=1 --params Name=iqn.2010-10.org.openstack:volume-00000001 | 15:13 |
n81 | Running cmd (subprocess): sudo ietadm --op new --tid=1 --lun=0 --params Path=/dev/nova-volumes/volume-00000001,Type=fileio | 15:13 |
BasTichelaar | sandywalsh_: ok, clear | 15:13 |
n81 | those are the commands I'm seeing in my nova-volume.log on startup….seems to be 'mounting' the iscsi volume ok | 15:13 |
*** jollyfoo has joined #openstack | 15:13 | |
Razique | n81 ok | 15:13 |
n81 | raz/livemoon: do you know if iscsiadm keeps a log somewhere? | 15:13 |
snet | in swift, do you have to be an admin user to peform a HEAD (aka stat) on an account ? | 15:14 |
*** neogenix has quit IRC | 15:14 | |
*** dongxu1 has joined #openstack | 15:14 | |
Razique | now restart open-iscsi on the node; iscsitarget on the server 1 | 15:14 |
*** dongxu has quit IRC | 15:14 | |
Razique | and then restart nova-volume | 15:14 |
Razique | then again run the discovery | 15:14 |
*** mgoldmann has quit IRC | 15:14 | |
Razique | -m discovery -t st -p | 15:14 |
*** jfluhmann has joined #openstack | 15:15 | |
n81 | Raz: Will nova-volume start up iscsi-target automatically? | 15:15 |
*** dtroyer has joined #openstack | 15:15 | |
Razique | n81: don't think so | 15:16 |
Razique | look guys http://pacemaker-cloud.org/ | 15:17 |
*** Rajaram has joined #openstack | 15:17 | |
BasTichelaar | sandywalsh_: can I force an instance to get build in a specified zone? | 15:19 |
*** nphase has joined #openstack | 15:20 | |
*** stevegjacobs has joined #openstack | 15:21 | |
*** imsplitbit has joined #openstack | 15:23 | |
n81 | raz: ok…so here's some more info | 15:24 |
n81 | raz: I still can't get it to work even restarting, but I installed iscsiadm on my cloud controller | 15:25 |
n81 | raz: when I try to run this command: sudo iscsi_discovery 192.1.253.194 | 15:25 |
Razique | n81: why is that ? | 15:25 |
n81 | raz: I get the same error messages about failed to receive a PDU | 15:25 |
n81 | raz: but when I run: iscsi_discovery 127.0.0.1 | 15:25 |
n81 | raz: on the node controller…I get this: | 15:25 |
n81 | raz: discovered 1 targets at 127.0.0.1 | 15:26 |
n81 | raz: wait nevermind…I didn't let the 192 finish | 15:27 |
n81 | raz: in the end it finds a target too | 15:27 |
n81 | raz: discovered 1 targets at 192.1.253.194 | 15:27 |
foexle | n81: autostart you can configure in /etc/iscsi/iscsid.conf | 15:29 |
n81 | foexle: you mean set it to create session automatically? | 15:30 |
foexle | sry no not a session i mean spawn the luns on server startup | 15:30 |
foexle | so i think i found my problem .... the firewall on the compute node drops isci connection | 15:32 |
*** marrusl has joined #openstack | 15:33 | |
n81 | foexle: hmm…I'm thinking that's my problem too…everything seems to be working, but traffic is not reaching node | 15:33 |
n81 | foexle: how did you troubleshoot? did you just drop your firewalls on your compute node? | 15:34 |
sandywalsh_ | BasTichelaar, not currently. That would need a special host filter I suspect | 15:34 |
foexle | n81: yeah i can discovery on the node where the iscsi service runs, but not from the compute node | 15:34 |
foexle | i don't have troubleshoot atm i'm searching :) | 15:35 |
BasTichelaar | sandywalsh_: ok, and do the availability zones work together with the zones? | 15:35 |
BasTichelaar | sandywalsh_: so I create an availability zone inside a zone, and specify that as parameter? | 15:36 |
*** stevegjacobs has quit IRC | 15:36 | |
n81 | foexle: do you have: --iscsi_helper=tgtadm | 15:37 |
n81 | in your nova.conf? | 15:37 |
foexle | no | 15:37 |
n81 | foexle: see I do…I wonder if that's messing something up | 15:38 |
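This is the mismatch n81 is circling: the helper flag has to agree with the target software actually installed (ietadm commands in the log suggest iscsitarget, while his conf names tgtadm). An illustrative flagfile fragment, written to a scratch path rather than the real /etc/nova/nova.conf, with placeholder values from this conversation:

```shell
# Illustrative values only -- the prefix must match the address compute
# nodes use to reach the volume server, and the helper must match the
# installed target software (ietadm <-> iscsitarget, tgtadm <-> tgt).
conf=/tmp/nova-iscsi.conf.example
printf '%s\n' '--iscsi_ip_prefix=192.1.253.194' >> "$conf"
printf '%s\n' '--iscsi_helper=ietadm' >> "$conf"
cat "$conf"
```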
*** hugokuo has joined #openstack | 15:38 | |
foexle | ok its not the firewall | 15:39 |
foexle | PORT STATE SERVICE | 15:39 |
foexle | 3260/tcp open iscsi | 15:39 |
foexle | MAC Address: 00:30:48:66:18:6F (Supermicro Computer | 15:39 |
*** marrusl has quit IRC | 15:40 | |
n81 | I had iscsi working before…then did a clean re-install with new packages and now no luck | 15:41 |
*** joesavak has joined #openstack | 15:41 | |
*** blamar has joined #openstack | 15:42 | |
*** jsavak has quit IRC | 15:42 | |
Kiall | n81: `cat /etc/default/iscsitarget` true or false? | 15:46 |
foexle | Kiall: without true the service won't to start | 15:46 |
*** jeremy has joined #openstack | 15:46 | |
n81 | Kiall: thanks…unfortunately, it's true =( | 15:47 |
Kiall | open-iscsi will, iscsitarget wont.. just a quick check ;) | 15:47 |
*** tyska has joined #openstack | 15:47 | |
*** uksysadmin has quit IRC | 15:48 | |
Kiall | and `iscsiadm -m session` on the compute node? | 15:48 |
n81 | thor@cloudnc1:~$ sudo iscsiadm -m session | 15:48 |
n81 | iscsiadm: No active sessions. | 15:48 |
foexle | root@test3-os:~# iscsiadm -m session | 15:48 |
foexle | iscsiadm: No active sessions. | 15:48 |
foexle | :D | 15:48 |
vidd | Kiall, do euca commands work for your ppa? | 15:48 |
*** mies has quit IRC | 15:48 | |
Kiall | vidd: yes, you need to get the right env vars tho | 15:48 |
Kiall | and i believe there is a bug in euca-tools preventing image uploads from working in combo with keystone | 15:49 |
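The env vars in question are the EC2-style ones euca2ools reads. A hypothetical example with placeholder values (the real ones come from the settings file the scripts source):

```shell
# Placeholder values -- substitute your own endpoint and keys.
export EC2_URL="http://127.0.0.1:8773/services/Cloud"
export EC2_ACCESS_KEY="accesskey:projectname"
export EC2_SECRET_KEY="secretkey"   # per the discussion in this log, nova may not actually check this

# euca2ools then picks the variables up automatically, e.g.:
#   euca-describe-availability-zones
echo "EC2 environment set for $EC2_URL"
```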
sandywalsh_ | BasTichelaar, there's no specific support for availability zones within zones (that is, no tests for that combination). They're unfortunately just similarly named. | 15:49 |
*** dolphm has joined #openstack | 15:50 | |
vidd | Kiall, im almost done with my scripting =] keept forgetting my source file needs "export" | 15:50 |
Kiall | ;) | 15:50 |
sandywalsh_ | jaypipes, will making mysql HA affect its row-locking ability? | 15:51 |
*** nerens has quit IRC | 15:53 | |
*** obino has quit IRC | 15:53 | |
*** Rajaram has quit IRC | 15:53 | |
*** rnirmal has joined #openstack | 15:53 | |
n81 | foexle: ok, you're right…definitely not firewall. I've cleared/shutdown firewalls on both machines and I get the same PDU error | 15:54 |
n81 | so it must be something with the iscsitarget service on the cloud controller | 15:54 |
foexle | yap | 15:54 |
*** nerens has joined #openstack | 15:54 | |
tyska | Razique: are u there? | 15:56 |
*** TheOsprey has quit IRC | 15:56 | |
*** misheska has quit IRC | 15:56 | |
foexle | i see the targets from the local machine | 15:56 |
foexle | and there only 4 targets °° | 15:56 |
*** rsampaio has joined #openstack | 15:57 | |
*** dgags has joined #openstack | 15:57 | |
*** hezekiah_ has joined #openstack | 15:58 | |
n81 | so on the machine running iscsitarget…you can run the same command and you get 4 targets? | 15:58 |
*** obino has joined #openstack | 15:58 | |
*** adjohn has joined #openstack | 15:59 | |
Razique | tyska: yah | 15:59 |
Razique | finishing the HA script | 15:59 |
tyska | Razique: did you found something? | 16:00 |
Razique | I asked you on a pm :p | 16:00 |
*** andy-hk has joined #openstack | 16:01 | |
*** uksysadmin has joined #openstack | 16:01 | |
*** mies has joined #openstack | 16:02 | |
*** andy-hk has quit IRC | 16:03 | |
*** code_franco has joined #openstack | 16:04 | |
*** dragondm has joined #openstack | 16:04 | |
*** andy-hk has joined #openstack | 16:04 | |
*** andy-hk has quit IRC | 16:05 | |
*** kieron has joined #openstack | 16:08 | |
*** mmetheny has quit IRC | 16:09 | |
*** mmetheny_ has joined #openstack | 16:09 | |
*** reidrac has left #openstack | 16:09 | |
*** reidrac has quit IRC | 16:09 | |
uksysadmin | I've a question on authentication | 16:10 |
*** oubiwann1 has quit IRC | 16:10 | |
uksysadmin | I'm not using keystone... but I have my access and secret keys... | 16:10 |
uksysadmin | is the secret key used? | 16:10 |
uksysadmin | I can launch instances with random strings in my EC2_SECRET_KEY | 16:10 |
*** Shentonfreude has joined #openstack | 16:11 | |
hezekiah_ | if you are using nova, then you can. it doesn't use the secret key ( I believe ) | 16:13 |
vidd | it just needs something there =] | 16:13 |
*** swill has joined #openstack | 16:14 | |
uksysadmin | don't know whether to laugh or cry | 16:14 |
Razique | ok guys the hand-made HA script works | 16:15 |
Razique | :D | 16:15 |
Razique | node with running instances crashes | 16:15 |
uksysadmin | I've just checked the docs and it says to use no auth set it in api-paste.ini and it does have in the pipelines ec2noauth | 16:15 |
Razique | the script now does only two things: update the database | 16:15 |
Razique | and reboot the instance | 16:15 |
Razique | instance is now up on the other node o/ | 16:15 |
kieron | has anyone seen | 16:16 |
swill | i am trying to build a swift authentication middleware to authenticate against a cloudstack installation. is the only way to build it using tokens? | 16:16 |
*** obino has quit IRC | 16:16 | |
*** javiF has quit IRC | 16:16 | |
kieron | (oops) has anyone seen "you are not authorized to access /syspanel/" when trying to log in to dashboard. Can't figure out what I've missed. | 16:16 |
swill | right now i can not figure out a way to actually authenticate a cloudstack user with only a token. | 16:17 |
swill | cause a token is not enough for me to be able to connect to the cloudstack api and verify. | 16:17 |
notmyname | swill: have you read the swift docs on writing your own auth middleware? | 16:17 |
swill | i did | 16:17 |
swill | more than once | 16:18 |
notmyname | :-) | 16:18 |
swill | :) | 16:18 |
*** sannes has quit IRC | 16:18 | |
swill | i am assuming these are the only available references? http://swift.openstack.org/development_auth.html and http://swift.openstack.org/overview_auth.html | 16:18 |
*** cp16net has joined #openstack | 16:19 | |
*** neogenix has joined #openstack | 16:19 | |
notmyname | swill: you should be able to implement whatever kind of auth you want. let me look at it a little more before I say something that isn't true | 16:19 |
*** krow has joined #openstack | 16:20 | |
swill | that was my assumption as well. for some reason i am having trouble getting my head around how. | 16:20 |
*** andy-hk has joined #openstack | 16:23 | |
notmyname | swill: there may be some implicit assumptions in swift about using a token. however, I think you should be able to use whatever you want. so, for example, your middleware's __call__() method should be able to check what you need and set up the authorize() callback. your authorize() method gets the request and can then look at anything you want | 16:23 |
notmyname | swill: perhaps another reference would be looking at the included swift3 middleware. it implements the S3 request signing for auth | 16:23 |
*** andy-hk has quit IRC | 16:24 | |
swill | notmyname: i will take a look at that now. thank you for your input. | 16:24 |
chmouel_ | I found swauth code pretty good to look at if you want to implement you own auth middleware | 16:25 |
*** Shentonfreude has quit IRC | 16:25 | |
notmyname | swill: and you can make things a little simpler perhaps if you don't write your auth middleware to work with other auth middlewares that may be running. that's inadvisable, but it really depends on your use case. all the auth middlewares (eg tempauth and swauth) assume that they may be running alongside other auth middlewares | 16:25 |
swill | notmyname: right. | 16:26 |
*** chmouel_ is now known as chmouel | 16:27 | |
*** bsza has joined #openstack | 16:27 | |
*** vladimir3p has joined #openstack | 16:28 | |
*** gyee has joined #openstack | 16:29 | |
swill | chmouel: thanks, i will look at that one as well. | 16:30 |
*** dolphm has quit IRC | 16:30 | |
swill | thanks for the help. i am sure i will figure something out looking at these two references. | 16:30 |
notmyname | cool. I hope it helps | 16:30 |
*** dolphm has joined #openstack | 16:30 | |
swill | when i have something working, i will share it with you guys. | 16:31 |
foexle | n81: solved | 16:31 |
*** marrusl has joined #openstack | 16:31 | |
*** hezekiah_ has quit IRC | 16:32 | |
*** dolphm_ has joined #openstack | 16:35 | |
*** dolphm has quit IRC | 16:35 | |
*** derjohn_mob has quit IRC | 16:35 | |
jaypipes | sandywalsh_: no, it will not affect row-locking ability at all. That's dependent on the underlying storage engine. If you are using InnoDB, you have row-level locking in almost all situations except where in situations where InnoDB can predict that a data-modification query would affect a large percentage of rows in a table, in which case it might modify the lock to be on a page (or in extreme cases, the table) | 16:35 |
n81 | foexle: oh yeah? how so? | 16:37 |
sandywalsh_ | jaypipes, cool ... I may have some sqlalchemy Q's for you later :) | 16:37 |
jaypipes | sandywalsh_: I'll try my best :) | 16:37 |
*** mattstep has joined #openstack | 16:37 | |
foexle | n81: restart on compute node /etc/init.d/open-iscsi | 16:38 |
foexle | resolved my problem | 16:38 |
*** sandywalsh has joined #openstack | 16:39 | |
*** vdo has quit IRC | 16:40 | |
*** cp16net has quit IRC | 16:41 | |
*** cp16net has joined #openstack | 16:41 | |
*** tdi has joined #openstack | 16:41 | |
*** krow has quit IRC | 16:41 | |
tdi | hello | 16:42 |
swill | chmouel: where do i find the swauth middleware code to reference? i am assuming it is not part of swift by default: https://github.com/openstack/swift | 16:44 |
*** anonymous_ is now known as avian | 16:44 | |
*** avian is now known as rfc1149 | 16:45 | |
*** rfc1149 is now known as rfc2549 | 16:45 | |
swill | chmouel: this? https://github.com/gholt/swauth/blob/master/swauth/middleware.py | 16:45 |
chmouel | yep | 16:45 |
swill | ty. :) | 16:45 |
rfc2549 | we switched from eucalyptus to openstack because we kept on having instances that were DOA, either never getting to "running" or never getting assigned a public IP. Did any of you see that on Eucalyptus? Is it much better on openstack? | 16:47 |
*** krow has joined #openstack | 16:48 | |
*** clauden_ has quit IRC | 16:48 | |
*** dobber has quit IRC | 16:48 | |
*** joesavak has quit IRC | 16:48 | |
*** clauden_ has joined #openstack | 16:48 | |
*** uksysadmin has quit IRC | 16:48 | |
*** marrusl has quit IRC | 16:49 | |
n81 | rfc: we experienced the same issue. We were getting an 8-15% DOA rate | 16:50 |
uvirtbot | New bug: #888621 in nova "exception for decalre consumer in the case of socket error" [Undecided,New] https://launchpad.net/bugs/888621 | 16:51 |
*** dongxu1 has quit IRC | 16:51 | |
*** dongxu has joined #openstack | 16:53 | |
foexle | Razique: ok volumes are running now :), but one question. i find in every instance /dev/vdb mounted in /mnt whats that ? its not an attached volume | 16:53 |
rfc2549 | thanks n81. Is it all better for you now on openstack? | 16:54 |
n81 | we haven't done as much extensive testing with openstack but in our limited uses to date we've seen better reliability | 16:55 |
*** dongxu has quit IRC | 16:55 | |
*** popux has quit IRC | 16:55 | |
*** jog0 has joined #openstack | 16:56 | |
*** jog0 has quit IRC | 16:57 | |
*** jog0 has joined #openstack | 16:57 | |
*** reed_ has joined #openstack | 16:57 | |
*** negronjl has joined #openstack | 16:57 | |
rfc2549 | the netflix tech blog says that they also have DOA and other badness on AWS. anyone have experience on other clouds and seeing DOAs? | 16:57 |
tdi | can somebody explain to me please, how is the versioning done in openstack? for example in ubuntu 11.10 ive got 2011.3, is it diablo ? | 16:57 |
*** reed_ is now known as reed | 16:58 | |
vidd | tdi yes, 2011.3 is diablo | 16:58 |
tdi | vidd: thanks, is the documentation for it up to date ? | 16:59 |
*** exprexxo has joined #openstack | 16:59 | |
vidd | tdi, it all depends on what documentation you are looking at | 16:59 |
tdi | vidd: just want to install it :) | 16:59 |
vidd | tdi, there's more to it than "just installing" it | 17:00 |
vidd | once installed, it needs to be properly configured | 17:00 |
tdi | vidd: ofc, you are right, configuration and running it is what I meant | 17:01 |
*** hezekiah_ has joined #openstack | 17:01 | |
vidd | and the stock ubuntu keystone and dashboard will not work properly with stock nova and glance | 17:01 |
*** jeromatron has joined #openstack | 17:02 | |
tdi | vidd: ok, do you know whether the official doc is the proper one to get started? | 17:02 |
vidd | tdi, what parts are you trying to use? | 17:02 |
Kiall | if anyone else was using my packages, and is having issues with volumes and "Login I/O error, failed to receive a PDU" .. We think (n81 and I) have sorted it.. updated packages soon ;) | 17:02 |
Kiall | we think we have sorted it* | 17:03 |
tdi | vidd: maybe I just say what setup i want: got 7 machines connected to fast iSCSI storage, I want to give users possibility to manage their own machines and create new | 17:03 |
tdi | vidd: users, as in employees of the university, not outside users | 17:03 |
tdi | vidd: so i think this would be nova compute, volume and storage ? | 17:04 |
vidd | tdi, are you putting keystone and dashboard in or not? | 17:04 |
tdi | vidd: yes, I would like to give them GUI | 17:04 |
*** dolphm_ has quit IRC | 17:04 | |
*** Hakon|mbp has quit IRC | 17:05 | |
vidd | then the docs are good for nova parts | 17:05 |
tdi | great | 17:05 |
*** dolphm has joined #openstack | 17:05 | |
vidd | keystone and dashboard not so much | 17:05 |
tdi | vidd: do you then know a doc for keystone and dashboard? | 17:05 |
Kiall | the stock ubuntu packages for keystone and dashboard are broken (really broken) | 17:06 |
*** marrusl has joined #openstack | 17:06 | |
tdi | Kiall: is there a ppa ? | 17:06 |
vidd | tdi, Kiall has a ppa that works with all parts (have not tested swift) and an installer script to walk you thru it | 17:06 |
Kiall | yea ;) | 17:06 |
Kiall | PPA : https://launchpad.net/~managedit/+archive/openstack | 17:07 |
tdi | nice, | 17:07 |
Kiall | And some bare minimum setup scripts @ https://github.com/managedit/openstack-setup | 17:07 |
Kiall | Closer to bash docs that scripts, but they give you (almost) all the steps.. | 17:08 |
Kiall | (tell me what I forgot ;)) | 17:08 |
tdi | Kiall: thanks, should I apt-get remove --purge all nova things before I begin? | 17:08 |
Kiall | I would `dpkg -l | grep -E "(nova|glance|swift|keystone)"` and purge all those.. | 17:09 |
tdi | oki thanks | 17:09 |
tdi | so I start ;) | 17:09 |
Kiall | then rm -rf /etc/{nova,glance,swift,keystone} and /var/lib/{nova,glance,swift,keystone} .. since some stuff seems to stay even with dpkg -P | 17:09 |
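Expanded into concrete commands, the purge Kiall describes might look like this sketch. It only prints the plan (drop the echoing to actually execute it), and the package and path names are the ones from the chat:

```shell
# Print (not run) a purge plan for any installed nova/glance/swift/
# keystone packages plus their leftover config and state directories.
purge_plan() {
    dpkg -l 2>/dev/null | awk '/nova|glance|swift|keystone/ {print "apt-get -y purge " $2}'
    for d in nova glance swift keystone; do
        printf 'rm -rf /etc/%s /var/lib/%s\n' "$d" "$d"
    done
}
purge_plan
```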
tdi | Yes, last time also /etc/sudoers.d/nova cut me off | 17:10 |
Kiall | but - I'd give it a few mins before installing, I'm just sorting a packaging bug with n81 at the moment... | 17:10 |
*** dolphm has quit IRC | 17:10 | |
tdi | Kiall: ill wait, ill be the tester | 17:10 |
*** stevegjacobs_ has quit IRC | 17:11 | |
*** maplebed has quit IRC | 17:11 | |
*** lelin has quit IRC | 17:12 | |
*** dtroyer has quit IRC | 17:12 | |
*** dprince has quit IRC | 17:14 | |
Kiall | tdi: new packages uploading, launchpad will take 20 mins or so to build them.. | 17:15 |
Kiall | All the packages, bar nova-volume (which you can leave till the end) are fine.. | 17:15 |
tdi | Kiall: oki | 17:15 |
Kiall | all the current packages* | 17:15 |
*** foexle has quit IRC | 17:15 | |
hugokuo | good night | 17:16 |
*** hugokuo has left #openstack | 17:16 | |
tdi | Kiall: so I can just use your shell scripts for the installation ? | 17:16 |
tdi | they will suck in launchpad packages? | 17:16 |
Kiall | yea, watch out for nova.sh installing the broken nova-volume package.. | 17:16 |
Kiall | and, install the stuff from the readme first | 17:17 |
tdi | ok, when will nova-volume be fixed? | 17:17 |
Kiall | The scripts setup an all in 1 server, probably a smart move until it all works, then add more servers with specific roles after | 17:17 |
Kiall | 20 mins, whenever this finishes building+publishing: https://launchpad.net/~managedit/+archive/openstack/+build/2916227 | 17:18 |
*** tyska has quit IRC | 17:18 | |
tdi | Kiall: ok, sorry thought the volume is still broken, despite the launchpad update | 17:19 |
Kiall | Ah no, the current packages bar nova-volume are fine... | 17:19 |
Kiall | and a fixed nova-volume is on its way up.. | 17:19 |
*** deshantm_laptop has quit IRC | 17:24 | |
*** Rajaram has joined #openstack | 17:27 | |
*** TheOsprey has joined #openstack | 17:28 | |
*** wawa has joined #openstack | 17:30 | |
wawa | ii | 17:31 |
*** nacx has quit IRC | 17:31 | |
*** obino has joined #openstack | 17:32 | |
*** bsza has quit IRC | 17:34 | |
*** bsza has joined #openstack | 17:34 | |
rfc2549 | trying again: we switched from eucalyptus to openstack because we kept on having instances that were DOA, either never getting to "running" or never getting assigned a public IP. Did any of you see that on Eucalyptus? Is it much better on openstack? | 17:35 |
Kiall | rfc2549: all the time.. and I've had the same conversation with someone else (cant remember who) | 17:36 |
Kiall | Once everything is setup right, I've not seen any DOA's with OS | 17:36 |
*** tyska has joined #openstack | 17:37 | |
rfc2549 | Kiall: thanks. | 17:38 |
rfc2549 | the netflix tech blog says that they also have DOA and other badness on AWS. anyone have experience on other clouds and seeing DOAs? [repeat] | 17:38 |
tyska | Razique: are u still there? | 17:38 |
tdi | Kiall: you do not have swift scripts ? | 17:38 |
Razique | yup on ur servers | 17:38 |
tyska | Razique: =) | 17:38 |
Kiall | tdi: no, I've no use for swift | 17:38 |
tyska | Razique: did you found something? | 17:38 |
tyska | did you find* (sry for my language mistakes) =) | 17:39 |
*** wawa has quit IRC | 17:39 | |
tdi | Kiall: when I go in 10k machines, ill need it :) | 17:39 |
*** negronjl has quit IRC | 17:39 | |
Razique | tyska: i'm looking :) | 17:39 |
Kiall | yea - for that, you might ;) | 17:40 |
*** jakedahn has quit IRC | 17:40 | |
Razique | i'll let u know when I figure, but don't worry, we definitely will :) | 17:40 |
tyska | Razique: did you receive my msg that said i think that ext3 message is not the problem? | 17:40 |
Razique | yah u were right :) | 17:40 |
tyska | it appears too on the instance i can reach | 17:40 |
*** devcamcar has joined #openstack | 17:42 | |
Razique | tyska: I think I found | 17:43 |
Razique | let's see:) | 17:43 |
* tyska is praying | 17:43 | |
tyska | =) | 17:43 |
*** nati2 has joined #openstack | 17:47 | |
*** po has joined #openstack | 17:48 | |
*** jaypipes has quit IRC | 17:49 | |
tdi | Kiall: one more question about the networks in openstack, ive got bridge with 10.50.0.0/16 network, where I want machines to be stored, this is FlatManager yes? | 17:50 |
Kiall | yea, flat for flat DHCP.. | 17:50 |
Kiall | VLAN will work aswell.. | 17:50 |
uvirtbot | New bug: #888649 in nova "Snapshots left in undeletable state" [Undecided,New] https://launchpad.net/bugs/888649 | 17:50 |
Kiall | where 10.50.0.0/16 = the public IPs and some other range is the internal range | 17:51 |
*** maplebed has joined #openstack | 17:51 | |
tdi | Kiall: I do not use any public ips | 17:51 |
Kiall | tdi: those packages are built+up | 17:51 |
*** dprince has joined #openstack | 17:51 | |
tdi | got internal network, both nova nodes and virtuals need to be in it | 17:52 |
Kiall | novaa "public ips" dont have to be internet routable... | 17:52 |
tdi | Kiall: yes, I already installed it, now working on network | 17:52 |
Kiall | nova's* | 17:52 |
Kiall | Its probably better to reserve the LAN accessible range as floating ips | 17:52 |
Kiall | otherwise you have no choice over server IPs | 17:52 |
Kiall | servers + DHCP is always fun... | 17:52 |
Kiall | Probably worth a read: http://docs.openstack.org/diablo/openstack-compute/admin/content/networking-options.html | 17:54 |
Kiall | esp the fixed vs floating IP part | 17:54 |
tdi | yes I am reading it now | 17:54 |
*** obino has quit IRC | 17:56 | |
tyska | Razique: ? | 17:56 |
tyska | someone here already tried to use windows with openstack? | 17:57 |
*** snet has quit IRC | 17:58 | |
Razique | tyska: yup | 17:58 |
*** bcwaldon_ has joined #openstack | 17:58 | |
Razique | works pretty well :) | 17:58 |
*** bcwaldon has quit IRC | 17:58 | |
*** jfluhmann has quit IRC | 17:58 | |
tyska | im with problems to create the image =( | 17:59 |
tyska | more specifically with virtio | 17:59 |
*** jdurgin has joined #openstack | 17:59 | |
tyska | first i tried using this: http://docs.openstack.org/cactus/openstack-compute/admin/content/creating-a-windows-image.html | 17:59 |
*** exprexxo has quit IRC | 17:59 | |
Razique | arf I should update that doc | 18:00 |
Razique | I had to find drivers I dunno where | 18:00 |
Razique | and make some extra stuff in order to install the rights virtio drivers | 18:00 |
Razique | another thing todo | 18:00 |
*** aliguori has quit IRC | 18:00 | |
tyska | after i run that command to create the image | 18:01 |
tyska | nothing happens | 18:01 |
tyska | and shell still freeze | 18:01 |
tyska | and cant even cancel with CTRL + C | 18:01 |
*** alexn6 has left #openstack | 18:01 | |
tyska | now im trying to create using this http://blogs.poolsidemenace.com/2011/06/16/porting-windows-to-openstack/ | 18:02 |
tyska | but with no success too =/ | 18:02 |
tyska | my question to god is: why everything needs to be so hard??? =) | 18:02 |
*** bcwaldon_ has quit IRC | 18:02 | |
*** blamar has quit IRC | 18:02 | |
*** blamar has joined #openstack | 18:02 | |
Razique | my other question is : why are we still trying to make it work | 18:03 |
*** joesavak has joined #openstack | 18:03 | |
*** bcwaldon has joined #openstack | 18:03 | |
*** pixelbeat has quit IRC | 18:03 | |
tyska | hahaha | 18:04 |
*** Ryan_Lane has joined #openstack | 18:04 | |
tyska | because if it works well, it will bring a lot of benefits to us | 18:04 |
tyska | that was easy | 18:04 |
tyska | =) | 18:04 |
vidd | Kiall, what does "sed -e "s,999888777666,$SERVICE_TOKEN,g" local_settings.py.tmpl > local_settings.py" do? | 18:05 |
Razique | tyska: yah ;) | 18:05 |
Razique | tyska Openstack makes me having nightmares | 18:05 |
Kiall | vidd: replaces the default service token with a real one | 18:06 |
vidd | does this make the change in local_settings.py.tmpl and then cp the whole local_settings.py.tmpl to local_settings.py? | 18:06 |
tyska | Razique: but at least the basic of your architecture is working, what you say for my case? | 18:06 |
Kiall | vidd: yea.. | 18:07 |
tyska | Razique: days and days working to solve the problems, just to see the basic working | 18:07 |
Kiall | wait no | 18:07 |
Kiall | it never changes .tmpl | 18:07 |
vidd | ok, so it copies tmpl to the real file and changes the real file | 18:07 |
Razique | tyska: I say it's the fact that u use Diablo + multi nic :) | 18:07 |
Razique | but I already had this issue on my pre-prod | 18:08 |
Razique | that has the same settiings | 18:08 |
*** jsavak has joined #openstack | 18:08 | |
vidd | Kiall, can ya tell im new to scripting ?=] | 18:08 |
*** krow has quit IRC | 18:08 | |
Kiall | vidd .. kinda .. sed spits the updated version to STDOUT, the > redirects it into a new file | 18:09 |
vidd | i have a grasp on sed | 18:09 |
vidd | but this was the first i looked at the piping | 18:09 |
*** lorin11 has joined #openstack | 18:10 | |
*** lorin11 has quit IRC | 18:10 | |
vidd | took me a bit to understand the difference between sed-i {data} file and sed -e{data} -i file | 18:10 |
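The sed pattern being discussed can be reproduced standalone. The token value below is a placeholder; the file names follow the script quoted above. `sed -e` writes the substituted copy to stdout and the `>` redirect creates the new file, so the `.tmpl` input is never touched (whereas `sed -i` edits its input in place):

```shell
# Placeholder token standing in for the real keystone service token.
SERVICE_TOKEN="abc123-placeholder-token"

# A one-line stand-in for local_settings.py.tmpl.
printf 'OPENSTACK_ADMIN_TOKEN = "999888777666"\n' > local_settings.py.tmpl

# Comma delimiters in s,old,new,g avoid escaping if the token contains "/".
sed -e "s,999888777666,${SERVICE_TOKEN},g" local_settings.py.tmpl > local_settings.py
```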
*** obino has joined #openstack | 18:11 | |
*** joesavak has quit IRC | 18:12 | |
*** lorin1 has quit IRC | 18:12 | |
*** jaypipes has joined #openstack | 18:13 | |
*** superjudge has quit IRC | 18:17 | |
Kiall | ah fair enough :) | 18:18 |
*** shang has quit IRC | 18:21 | |
*** webx has joined #openstack | 18:23 | |
*** med_out has quit IRC | 18:23 | |
*** llang629 has joined #openstack | 18:24 | |
*** llang629 has left #openstack | 18:24 | |
*** redconnection has quit IRC | 18:24 | |
*** aliguori has joined #openstack | 18:26 | |
*** mszilagyi has joined #openstack | 18:27 | |
*** dtroyer has joined #openstack | 18:29 | |
*** tyska has quit IRC | 18:29 | |
*** Rajaram has quit IRC | 18:30 | |
*** magg has joined #openstack | 18:30 | |
*** jakedahn has joined #openstack | 18:34 | |
*** guigui1 has quit IRC | 18:35 | |
*** obino has quit IRC | 18:35 | |
*** tyska has joined #openstack | 18:37 | |
tyska | Razique: nothing? | 18:37 |
*** zaitcev has joined #openstack | 18:42 | |
magg | kiall | 18:43 |
magg | u there | 18:43 |
*** djw_ has joined #openstack | 18:43 | |
*** djw_ is now known as rfc2549_ | 18:44 | |
rfc2549 | q | 18:44 |
*** rfc2549 has quit IRC | 18:44 | |
magg | vidd? | 18:44 |
magg | u there | 18:45 |
*** rfc2549_ is now known as rfc2549 | 18:45 | |
vidd | yes magg | 18:45 |
magg | so i installed kiall packages | 18:45 |
magg | everything is working | 18:46 |
Kiall | magg: glad to hear :) | 18:46 |
vidd | nice | 18:46 |
magg | i try to create an instance in the dashboard | 18:46 |
magg | it says build | 18:46 |
magg | and never becomes active | 18:46 |
Kiall | magg: check your nova-compute and nova-network logs.. | 18:47 |
Kiall | probably nova-network from experience... | 18:47 |
magg | oh | 18:47 |
*** oubiwann has joined #openstack | 18:47 | |
magg | so i have a question, when using keystone i no longer have to create a user in nova | 18:47 |
Kiall | exactly, ignore nova's users and projects | 18:48 |
magg | ohh | 18:48 |
magg | i no longer need the creds? | 18:48 |
*** krow has joined #openstack | 18:50 | |
*** jollyfoo has quit IRC | 18:50 | |
magg | well compute says | 18:51 |
*** jollyfoo has joined #openstack | 18:51 | |
*** jollyfoo has quit IRC | 18:51 | |
magg | table nova.instances doesnt exist | 18:51 |
Kiall | that might be a problem ;) | 18:51 |
*** nycko has quit IRC | 18:51 | |
Kiall | you probably didnt run nova-manage db-sync | 18:51 |
*** jollyfoo has joined #openstack | 18:51 | |
Kiall | or | 18:51 |
Kiall | havent restarted all the nova services after you updated the config | 18:52 |
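Kiall's two recovery steps, collected into a sketch for the controller. This is a dry run (drop the `echo` wrapper to execute); the service names assume the stock Ubuntu packaging:

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# 1. Create/upgrade the nova tables (fixes "table nova.instances doesn't exist").
run nova-manage db sync

# 2. Restart the nova services so they reread nova.conf and the synced schema.
for svc in nova-api nova-scheduler nova-network nova-compute; do
    run service "$svc" restart
done
```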
*** jakedahn has quit IRC | 18:53 | |
*** nati2 has quit IRC | 18:53 | |
*** bsza has quit IRC | 18:53 | |
*** mies has quit IRC | 18:54 | |
*** nati2 has joined #openstack | 18:55 | |
*** jsavak has quit IRC | 18:55 | |
*** mies has joined #openstack | 18:57 | |
*** reed has quit IRC | 18:58 | |
magg | how do i check all nova services are ok without euca-descre | 18:58 |
*** rsampaio has quit IRC | 19:01 | |
soren | magg: Why? | 19:04 |
*** tyska has quit IRC | 19:04 | |
*** jsavak has joined #openstack | 19:05 | |
Kiall | magg: you can use "nova-manage service list" | 19:06 |
*** Razique has quit IRC | 19:06 | |
*** nitram_macair has quit IRC | 19:06 | |
*** lorin1 has joined #openstack | 19:06 | |
*** Razique has joined #openstack | 19:07 | |
*** Light has joined #openstack | 19:08 | |
*** Light is now known as Guest58417 | 19:08 | |
*** mgius has joined #openstack | 19:09 | |
*** reed has joined #openstack | 19:10 | |
*** bsza has joined #openstack | 19:10 | |
*** dtroyer has quit IRC | 19:10 | |
*** dtroyer has joined #openstack | 19:11 | |
*** magg has quit IRC | 19:11 | |
*** Guest58417 has quit IRC | 19:14 | |
*** daMaestro has joined #openstack | 19:14 | |
*** jakedahn has joined #openstack | 19:14 | |
daMaestro | Anyone here from grid dynamics? | 19:14 |
*** magg has joined #openstack | 19:14 | |
magg | yo | 19:15 |
daMaestro | You need to publish your src.rpm in your repo, please. (Yes, I'm aware everything is on https://github.com/griddynamics/openstack-rhel) | 19:15 |
magg | nova-manage db sync gets me command failed | 19:15 |
Kiall | magg: with what error? | 19:16 |
daMaestro | I'm working on merging in spec stuff into the Fedora build system... and I find it odd you don't have a SRPM tree. | 19:16 |
*** rsampaio has joined #openstack | 19:16 | |
magg | command failed, please check log for more info | 19:16 |
magg | which log should i check | 19:16 |
*** imsplitbit has quit IRC | 19:18 | |
*** TheOsprey has quit IRC | 19:18 | |
Kiall | /var/log/nova/nova-manage.log | 19:18 |
webx | I was reading a press release from SDSC about their new cluster (https://cloud.sdsc.edu/hp/docs/SDSC_Cloud_Press_Release.pdf) and noticed this quote | 19:19 |
webx | "The HTTP-based SDSC Cloud supports the RackSpace Swift and Amazon S3 APIs and is accessible from any web browser, clients for Windows, OSX, UNIX, and mobile devices." | 19:19 |
webx | by default, does openstack support the s3 api and tools like s3cmd, etc ? | 19:20 |
webx | s/openstack/openstack swift/ | 19:20 |
*** negronjl has joined #openstack | 19:22 | |
*** stevegjacobs has joined #openstack | 19:23 | |
*** bsza has quit IRC | 19:24 | |
*** sloop has joined #openstack | 19:24 | |
*** bsza has joined #openstack | 19:24 | |
*** BasTichelaar has quit IRC | 19:25 | |
*** mcclurmc has quit IRC | 19:27 | |
*** mcclurmc has joined #openstack | 19:28 | |
*** BasTichelaar has joined #openstack | 19:28 | |
daMaestro | webx, there is a pipeline you have to add for the compatibility layer... but basically yes | 19:28 |
daMaestro | http://docs.openstack.org/trunk/openstack-object-storage/admin/content/configuring-openstack-object-storage-with-s3_api.html | 19:31 |
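For reference, a sketch of the proxy-server.conf change that document describes: splice the S3 filter into the proxy pipeline ahead of auth and declare its filter section. The filter name and egg path follow the Diablo-era middleware (swift3 shipped inside swift itself back then); double-check against your version's docs. A stand-in file is used here rather than the real config:

```shell
# Minimal sample pipeline standing in for /etc/swift/proxy-server.conf.
cat > proxy-server.conf.sample <<'EOF'
[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server
EOF

# Insert "swift3" before the auth middleware, as the linked doc describes.
sed -i 's/^pipeline = healthcheck cache tempauth/pipeline = healthcheck cache swift3 tempauth/' \
    proxy-server.conf.sample

# Declare the filter section the pipeline entry refers to.
cat >> proxy-server.conf.sample <<'EOF'

[filter:swift3]
use = egg:swift#swift3
EOF
```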
*** gyee has quit IRC | 19:31 | |
*** bsza has quit IRC | 19:31 | |
*** Razique has quit IRC | 19:31 | |
*** redconnection has joined #openstack | 19:33 | |
*** shang has joined #openstack | 19:33 | |
*** bsza has joined #openstack | 19:34 | |
*** gyee has joined #openstack | 19:34 | |
*** jakedahn has quit IRC | 19:34 | |
*** Nadeem has joined #openstack | 19:34 | |
Nadeem | guys i installed openstack via devstack nova.sh script | 19:35 |
*** dirkx_ has joined #openstack | 19:35 | |
Nadeem | however on reboot i couldnt login anymore on http://localhost | 19:35 |
Nadeem | keystone wasnt runing anymore on localhost:5000/2.0 | 19:35 |
Nadeem | any pointers how to start this keystone service manually? | 19:36 |
sloop | umm.. use the cloud? | 19:37 |
*** dnjaramba has joined #openstack | 19:38 | |
uvirtbot | New bug: #888685 in glance "Stacktrace from cache_image_iter" [Undecided,New] https://launchpad.net/bugs/888685 | 19:38 |
*** dnjaramba_ has quit IRC | 19:38 | |
*** binbash_ has quit IRC | 19:39 | |
webx | daMaestro: thanks for that link | 19:40 |
webx | daMaestro: can we still use the swift cli binary with s3 api enabled? | 19:41 |
*** nitram_macair has joined #openstack | 19:41 | |
*** egant has quit IRC | 19:42 | |
*** bsza has quit IRC | 19:44 | |
*** adjohn has quit IRC | 19:45 | |
magg | nop i still cant get the instance to say active | 19:45 |
*** mszilagyi_ has joined #openstack | 19:47 | |
*** mszilagyi has quit IRC | 19:48 | |
*** mszilagyi_ is now known as mszilagyi | 19:48 | |
*** krow has quit IRC | 19:48 | |
magg | compute and network cant find a table | 19:48 |
magg | nova.network and nova.instances | 19:49 |
*** redconnection has quit IRC | 19:49 | |
magg | help | 19:49 |
*** rfc2549 has quit IRC | 19:50 | |
*** binbash_ has joined #openstack | 19:52 | |
magg | kiall | 19:57 |
magg | help | 19:57 |
*** dtroyer has quit IRC | 19:57 | |
*** redconnection has joined #openstack | 19:57 | |
*** dprince has quit IRC | 19:58 | |
Kiall | magg: check the logs ;) | 19:58 |
Kiall | and, are they connecting to the right DB | 19:59 |
magg | compute and network? | 19:59 |
Kiall | yea | 19:59 |
*** nacx has joined #openstack | 19:59 | |
*** lorin1 has left #openstack | 20:00 | |
*** dirkx_ has quit IRC | 20:00 | |
*** lorin1 has joined #openstack | 20:01 | |
uvirtbot | New bug: #888711 in glance "assertGreaterEqual not in Python 2.6" [Undecided,New] https://launchpad.net/bugs/888711 | 20:01 |
magg | http://pastebin.com/QYXHNuuJ | 20:04 |
*** Nadeem has quit IRC | 20:05 | |
*** redconnection has quit IRC | 20:05 | |
Kiall | magg: you need to disable DNSmasq .. edit /etc/default/dnsmasq | 20:05 |
magg | http://pastebin.com/2Rs3d44T | 20:05 |
*** catintheroof has joined #openstack | 20:05 | |
Kiall | cc/ tdi ... you probably should edit that aswell | 20:05 |
*** catintheroof has quit IRC | 20:06 | |
Kiall | magg: then, killall dnsmasq and restart nova-network+compute for the first one... | 20:06 |
Kiall | same again for the second it seems | 20:06 |
magg | wait what do i edit in dnsmaq? | 20:06 |
*** n0ano has quit IRC | 20:07 | |
magg | cc/tdi? | 20:07 |
Kiall | change the enabled setting to ENABLED=0 | 20:07 |
Kiall | as in cc tdi the person in the channel ;) | 20:07 |
magg | LOL | 20:07 |
*** n0ano has joined #openstack | 20:08 | |
*** dtroyer has joined #openstack | 20:09 | |
*** dolphm has joined #openstack | 20:09 | |
magg | alright i have now an IP for my instance | 20:09 |
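Collected, the dnsmasq fix above looks like this. A stand-in file is used below; on the real host the path is /etc/default/dnsmasq and the commented commands run as root. The point is that nova-network launches its own dnsmasq instances, so the system service must not already hold the port:

```shell
set -u
DEFAULTS=./dnsmasq.defaults                 # stands in for /etc/default/dnsmasq
printf 'ENABLED=1\n' > "$DEFAULTS"

# 1. Disable the system dnsmasq service.
sed -i 's/^ENABLED=1$/ENABLED=0/' "$DEFAULTS"

# 2. On the real host, then clear the stray daemons and restart nova:
#    killall dnsmasq
#    service nova-network restart && service nova-compute restart
```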
*** dtroyer has quit IRC | 20:13 | |
*** dtroyer has joined #openstack | 20:14 | |
*** imsplitbit has joined #openstack | 20:15 | |
uvirtbot | New bug: #888719 in nova "openvswitch-nova runs after firstboot scripts" [Undecided,In progress] https://launchpad.net/bugs/888719 | 20:16 |
magg | but it doesnt say Active | 20:16 |
magg | :( | 20:16 |
*** bcwaldon has quit IRC | 20:17 | |
*** dolphm has quit IRC | 20:17 | |
*** dolphm has joined #openstack | 20:18 | |
*** darraghb has quit IRC | 20:18 | |
*** dtroyer has quit IRC | 20:21 | |
magg | ok it worked | 20:21 |
magg | now i cant connec to the vnc console | 20:22 |
*** dtroyer has joined #openstack | 20:22 | |
*** tdi has quit IRC | 20:22 | |
*** dolphm_ has joined #openstack | 20:23 | |
*** dolphm has quit IRC | 20:23 | |
magg | do i need to install noVNC? | 20:24 |
*** webx has quit IRC | 20:24 | |
*** webx has joined #openstack | 20:25 | |
*** jsavak has quit IRC | 20:28 | |
*** ahasenack has quit IRC | 20:28 | |
*** nati2 has quit IRC | 20:28 | |
*** ahasenack has joined #openstack | 20:29 | |
*** paltman has quit IRC | 20:29 | |
*** joesavak has joined #openstack | 20:29 | |
*** duffman has quit IRC | 20:30 | |
*** duffman has joined #openstack | 20:30 | |
*** dgags has quit IRC | 20:32 | |
*** dgags has joined #openstack | 20:32 | |
*** jeromatron has quit IRC | 20:33 | |
*** GheRivero has joined #openstack | 20:34 | |
*** paltman has joined #openstack | 20:34 | |
*** jeblair has quit IRC | 20:37 | |
uvirtbot | New bug: #888730 in nova "vmwareapi suds debug logging very verbose" [Undecided,In progress] https://launchpad.net/bugs/888730 | 20:38 |
*** dpippenger has joined #openstack | 20:39 | |
*** nerens has quit IRC | 20:40 | |
*** jeblair has joined #openstack | 20:42 | |
*** PeteDaGuru has quit IRC | 20:43 | |
*** PeteDaGuru has joined #openstack | 20:45 | |
*** lborda has quit IRC | 20:46 | |
*** apevec has quit IRC | 20:47 | |
*** dtroyer has quit IRC | 20:47 | |
*** dtroyer has joined #openstack | 20:49 | |
*** nacx has quit IRC | 20:52 | |
*** statik has quit IRC | 20:52 | |
daMaestro | webx, i don't know | 20:56 |
daMaestro | webx, i will be finding out shortly i think .... | 20:56 |
*** mnour has joined #openstack | 20:56 | |
*** GheRivero has quit IRC | 20:57 | |
daMaestro | webx, what you *can* do is have multiple proxy pools ... one with the s3 rest api and one without | 20:57 |
daMaestro | webx, more than likely that is how it's supposed to be done | 20:57 |
*** marrusl has quit IRC | 21:00 | |
*** marrusl has joined #openstack | 21:00 | |
*** stevegjacobs_ has joined #openstack | 21:00 | |
*** marrusl has quit IRC | 21:00 | |
webx | daMaestro: ah, that makes sense. | 21:04 |
webx | daMaestro: do you happen to know how to point s3cmd to a swift installation? | 21:05 |
*** PotHix has quit IRC | 21:06 | |
*** Hakon|mbp has joined #openstack | 21:08 | |
*** magg has quit IRC | 21:12 | |
uvirtbot | New bug: #888753 in glance "Glance configs should use new Keystone auth_port" [Undecided,New] https://launchpad.net/bugs/888753 | 21:15 |
uvirtbot | New bug: #888755 in nova "stale external locks causing deadlock" [Undecided,New] https://launchpad.net/bugs/888755 | 21:15 |
daMaestro | webx, and i just confirmed it does not work (swift client) when the filter is installed | 21:19 |
daMaestro | so just run multiple proxy servers | 21:19 |
daMaestro | webx, just like you would to amazon | 21:20 |
*** mattstep has quit IRC | 21:21 | |
zykes- | Hakon|mbp: you a norwegian openstack user ? | 21:21 |
webx | daMaestro: interesting. for us, we'd probably prefer to run everything in s3 compatability if possible. I'll probably have one proxy that's non-s3 though, just in case. | 21:22 |
*** msivanes has quit IRC | 21:22 | |
guaqua | you need proxy redundancy, so 2 + 2 at minimum | 21:23 |
*** shang has quit IRC | 21:23 | |
guaqua | that's what i'm thinking | 21:23 |
webx | yea, we'll have much more than 2 running in s3 compat, but just the one pair in 'native' mode.. provided we can get s3cmd to work with swift. | 21:24 |
zykes- | anyone read http://www.slideshare.net/oldbam/security-issues-in-openstack ? | 21:24 |
*** joesavak has quit IRC | 21:26 | |
gnu111 | quick swift question. I am currently using /dev/sda3 which is mounted to /srv/node/sda3. I have a new disk and partition /dev/sdb1. I want to add that as a device. Should it be /srv/node/sdb1 ? | 21:28 |
*** dirkx_ has joined #openstack | 21:28 | |
guaqua | does it really matter where they are mounted? | 21:30 |
guaqua | (i actually don't know and would like to know) | 21:30 |
gnu111 | guaqua: not sure. I am trying to figure out if I can mount /srv/node/sda3 with /dev/sda3 and /srv/node/sdb1 with /dev/sdb1. I am not sure if this will properly mount. | 21:31 |
*** joesavak has joined #openstack | 21:32 | |
*** mattstep has joined #openstack | 21:32 | |
*** jakedahn has joined #openstack | 21:33 | |
*** shang has joined #openstack | 21:36 | |
*** dirkx_ has quit IRC | 21:39 | |
gnu111 | guaqua: it seemed to work. I think /srv/node is not manged by anything...that's the part I was confused about. | 21:39 |
*** dolphm_ has quit IRC | 21:39 | |
*** lorin1 has quit IRC | 21:41 | |
guaqua | my main question is, what is the ring device name really? | 21:41 |
guaqua | is it handled by the server and queried from the mount point? | 21:42 |
guaqua | or is it something else | 21:42 |
guaqua | because it looks a whole lot like a mount point and it isn't really defined anywhere on the storage nodes as such | 21:43 |
vidd | zykes-, read that article | 21:43 |
*** mrevell has joined #openstack | 21:44 | |
gnu111 | guaqua: When I added this new device. it said this: Device z1-192.168.0.12:6002/sdb1 I also have another device Device z1-192.168.0.12:6002/sda3 they are both in the same storage node. | 21:46 |
*** krow has joined #openstack | 21:46 | |
gnu111 | I think the way to identify is d0z1 that means device id zero in zone one. | 21:46 |
guaqua | the port is the same, is that correct? | 21:47 |
*** dolphm has joined #openstack | 21:47 | |
guaqua | gnu111: that can't be the same | 21:48 |
gnu111 | guaqua: Yes. same port but different disks. | 21:48 |
guaqua | hmm | 21:48 |
guaqua | oh, so is that definition basically just a definition for rsync path? | 21:48 |
guaqua | now i'm getting it... | 21:49 |
guaqua | this is simpler than i thought... | 21:49 |
gnu111 | I think so.... | 21:49 |
gnu111 | it seemed to add the device and rebalance fine for me here. I didn't see any errors..so far. | 21:49 |
guaqua | oh well. better it's simpler, not more complicated :) | 21:49 |
*** MarcMorata has joined #openstack | 21:50 | |
guaqua | but i'm off to bed now! good stuff! | 21:50 |
gnu111 | guaqua: good night! | 21:50 |
*** magg has joined #openstack | 21:51 | |
*** lvaughn has quit IRC | 21:51 | |
gnu111 | guaqua: I see some rsync erros. so need to look at it carefully... | 21:51 |
*** rods has quit IRC | 21:51 | |
*** lvaughn has joined #openstack | 21:51 | |
*** joesavak has quit IRC | 21:52 | |
*** miclorb_ has joined #openstack | 21:52 | |
webx | anyone happen to know if the patch and configuration file described here will work with swift? http://open.eucalyptus.com/wiki/s3cmd | 21:52 |
*** lvaughn has quit IRC | 21:52 | |
*** lvaughn has joined #openstack | 21:53 | |
*** arBmind has joined #openstack | 21:53 | |
*** nati2 has joined #openstack | 21:54 | |
gnu111 | it is trying to write in /sdb1 instead of /srv/node/sdb1. I added it to an existing zone which was in sda3. maybe this needs to be in a new zone. | 21:54 |
*** negronjl has quit IRC | 21:56 | |
*** AlanClark has quit IRC | 21:59 | |
*** neogenix has quit IRC | 22:02 | |
*** lvaughn has quit IRC | 22:02 | |
*** lvaughn has joined #openstack | 22:02 | |
*** praefect has quit IRC | 22:02 | |
*** dolphm has quit IRC | 22:02 | |
*** stuntmachine has quit IRC | 22:03 | |
*** dolphm has joined #openstack | 22:03 | |
*** lvaughn has quit IRC | 22:04 | |
*** lvaughn_ has joined #openstack | 22:04 | |
*** rods has joined #openstack | 22:05 | |
*** dolphm has quit IRC | 22:07 | |
*** jeromatron has joined #openstack | 22:08 | |
*** marrusl has joined #openstack | 22:08 | |
*** irctc193 has joined #openstack | 22:12 | |
magg | hey kiall u there? | 22:13 |
magg | im trying to add second node | 22:13 |
magg | but compute log says this host is not allowed to connect to mysql server | 22:14 |
magg | anybody? | 22:14 |
magg | help plz | 22:14 |
webx | http://paste.openstack.org/show/3251/ | 22:15 |
webx | the only 'real' buckets are "bbartlett", "myfiles", and "builders" | 22:15 |
webx | any idea what that other stuff is ? | 22:15 |
vidd | magg, im here | 22:15 |
vidd | you have mysqlclient installed on the remote machine? | 22:16 |
*** lvaughn_ has quit IRC | 22:16 | |
*** lvaughn has joined #openstack | 22:16 | |
*** neogenix has joined #openstack | 22:16 | |
irctc193 | I'm new here. migrating from bexar to diablo. I am trying to understand how to bundle image for glance. We were using euca-bundle which creates a manifest file and multi-part files. But how to bundle into a single image from a running instance for glance? | 22:17 |
magg | vidd, dont think so | 22:17 |
*** lvaughn has quit IRC | 22:17 | |
magg | do i need it? | 22:17 |
*** lvaughn has joined #openstack | 22:17 | |
vidd | magg, yes...that way your remote host has something to carry the mysql stuff to the mysqlserver =] | 22:18 |
vidd | irctc193, this is for cactus to diablo...hope it helps http://docs.openstack.org/diablo/openstack-compute/admin/content/migrating-from-cactus-to-diablo.html | 22:19 |
*** negronjl has joined #openstack | 22:19 | |
*** apevec has joined #openstack | 22:20 | |
magg | oks thanks i will install it | 22:20 |
*** dgags has quit IRC | 22:22 | |
*** Rajaram has joined #openstack | 22:22 | |
*** Rajaram has quit IRC | 22:23 | |
*** tdi has joined #openstack | 22:23 | |
irctc193 | Thnx for the link vidd. But I don't see a way to bundle images in that doc | 22:25 |
*** df1 has joined #openstack | 22:26 | |
*** bcwaldon has joined #openstack | 22:26 | |
irctc193 | I mean a way to bundle images for glance | 22:27 |
irctc193 | from a running instance | 22:28 |
*** sandywalsh_ has quit IRC | 22:28 | |
vidd | ah...sorry...have not learned that yet =] | 22:28 |
vidd | have you tried to snapshot it? | 22:28 |
irctc193 | snapshot will have everything including any sensitive data | 22:29 |
irctc193 | I want to be able to bundle it someway | 22:29 |
irctc193 | so that I can make it available for others | 22:30 |
uvirtbot | New bug: #888784 in devstack "devstack need dnsmasq-utils which is not available on natty" [Undecided,New] https://launchpad.net/bugs/888784 | 22:31 |
vidd | irctc193, so...you want to take the peices you used to make the running instance and bundle them? | 22:31 |
*** ldlework has quit IRC | 22:31 | |
vidd | or you want the actual running parts? | 22:31 |
irctc193 | yes | 22:32 |
irctc193 | for example, I have a base ubuntu oneiric instance running | 22:32 |
irctc193 | I have installed some packages to it | 22:33 |
irctc193 | Now, I want to be able to bundle it and make it public | 22:33 |
vidd | then you take a snapshot and upload it | 22:33 |
irctc193 | But I have some sensitive data in the instance | 22:34 |
irctc193 | that I don't want to share | 22:34 |
magg | vidd, i installed mysql-client and i still get the error http://pastebin.com/SSBW4PVP | 22:34 |
irctc193 | In euca-bundle, you can exclude some directories and bundle it | 22:35 |
*** jj0hns0n has joined #openstack | 22:35 | |
*** neogenix has quit IRC | 22:35 | |
vidd | magg, check the mysql tag in your cloudHQ2 nova.conf file and make sure it matches.... | 22:35 |
vidd | also, on your controller, run "sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf ; service mysql restart" | 22:36 |
irctc193 | vidd, am I hearing that in Diablo, we can create image from .iso, .vdk etc... or snapshot are the options? | 22:37 |
*** bcwaldon has quit IRC | 22:37 | |
vidd | your msql server may be set to only listen to requests from within | 22:37 |
vidd | irctc193, yes | 22:37 |
irctc193 | k, Thnx | 22:37 |
vidd | irctc193, but i have not had much experience with glance | 22:38 |
*** kieron has quit IRC | 22:38 | |
magg | vidd: i have the same tag | 22:38 |
*** robbiew has quit IRC | 22:39 | |
*** negronjl has quit IRC | 22:39 | |
vidd | run "sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf ; service mysql restart" on your controller magg | 22:39 |
*** jog0 has quit IRC | 22:39 | |
magg | vidd: already did | 22:39 |
*** irctc193 has left #openstack | 22:40 | |
vidd | restart nova-* on controller? | 22:40 |
*** irctc963 has joined #openstack | 22:41 | |
uvirtbot | New bug: #850644 in quantum "Quantum needs proper packaging" [High,Fix committed] https://launchpad.net/bugs/850644 | 22:41 |
magg | i did that alo | 22:41 |
magg | also** | 22:41 |
vidd | magg, on your cloudHQ2 what is the URL of the mysql? | 22:42 |
vidd | irctc963, sorry i cant be more helpful =\ | 22:42 |
*** aliguori has quit IRC | 22:42 | |
magg | --sql_connection=mysql://root:123456@10.10.10.2/nova | 22:42 |
vidd | and your controller ip is 10.10.10.2? | 22:43 |
magg | yep | 22:43 |
vidd | is the port open on the controller? | 22:43 |
*** jog0 has joined #openstack | 22:43 | |
vidd | magg, test that the controller is accepting traffic on port 3306 ... | 22:44 |
vidd | magg, from another machine on the local network run "telnet 10.10.10.2 3306" | 22:45 |
uvirtbot | New bug: #888790 in quantum "Query extensions supported by plugin" [Medium,New] https://launchpad.net/bugs/888790 | 22:45 |
magg | Trying 10.10.10.2... | 22:46 |
magg | Connected to 10.10.10.2. | 22:46 |
magg | Escape character is '^]'. | 22:46 |
magg | AHost 'cloudHQ2' is not allowed to connect to this MySQL serverConnection closed by foreign host. | 22:46 |
magg | user@cloudHQ2:~$ | 22:46 |
*** irctc963 has quit IRC | 22:46 | |
magg | i think that's a no | 22:46 |
*** hezekiah_ is now known as isaacfinnegan | 22:46 | |
vidd | magg, the port is open | 22:47 |
magg | oh | 22:47 |
magg | but i cant connect | 22:47 |
vidd | magg, if the port was not open, you never would have gotten the "not allowed" message....you just would have timed out | 22:47 |
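The distinction vidd draws here (port open but access denied, versus port closed or filtered) can be scripted instead of eyeballed through telnet. A small helper, assuming bash (for its /dev/tcp pseudo-device) and coreutils `timeout`; the host and port are the ones from the conversation:

```shell
# Exit 0 if something accepts the TCP connection, non-zero if the port is
# closed or filtered. An auth error like "not allowed to connect" still
# counts as open -- the TCP handshake itself succeeded.
port_open() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open 10.10.10.2 3306; then
    echo "mysqld reachable: remaining errors are auth, not network"
else
    echo "port 3306 closed or filtered"
fi
```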
*** GeoDud has joined #openstack | 22:47 | |
magg | ohh | 22:48 |
magg | so? | 22:48 |
vidd | magg, so the issue is that your mysql on the controller is not taking requests | 22:48 |
magg | basically | 22:48 |
magg | how do i fix it | 22:49 |
vidd | this is the reason i set each of my databases up with thier own usernames =] | 22:49 |
*** mrevell has quit IRC | 22:50 | |
*** mrevell has joined #openstack | 22:50 | |
*** jorgew has joined #openstack | 22:50 | |
magg | OOOHH | 22:50 |
vidd | by default, the user "root" is only allowed to connect from "localhost" "127.0.0.1" and your server by hostname | 22:50 |
vidd | magg, MAJOR security flaw to have root being allowed in from anyone | 22:51 |
magg | yeah | 22:51 |
magg | i get it | 22:51 |
vidd | *anywhere | 22:52 |
vidd | also, when i set up my database users, i only give them god rights to their own stuff....they need to keep their patties off other ppl's stuff =] | 22:53 |
vidd | each app has its own database, its own username, its own password =] | 22:54 |
vidd | and never the twain shall meet | 22:55 |
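vidd's one-database/one-user/one-password advice can be sketched as below; the database name `nova`, the user, the password, and the `10.10.10.%` subnet pattern are placeholders, not values from the log:

```shell
# Create a dedicated user for the nova database instead of exposing root
# over the network. The host pattern 'nova'@'10.10.10.%' limits logins to
# machines on the management subnet, and the grant covers only nova.*.
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'10.10.10.%' IDENTIFIED BY 'novapass';
FLUSH PRIVILEGES;
SQL
```

The nova flag from earlier in the log would then point at the new user instead of root, e.g. `--sql_connection=mysql://nova:novapass@10.10.10.2/nova`.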
*** isaacfinnegan has left #openstack | 22:55 | |
*** negronjl has joined #openstack | 22:55 | |
*** jakedahn has quit IRC | 22:55 | |
*** jakedahn has joined #openstack | 22:56 | |
*** mrevell has quit IRC | 22:58 | |
magg | yeah i got it fixed | 22:59 |
magg | thanks vidd | 22:59 |
vidd | no problem | 22:59 |
*** Vek has quit IRC | 22:59 | |
vidd | now if only i could get my dashboard to talk to nova | 22:59 |
vidd | is Kiall in the house? | 23:00 |
vidd | i must have missed a spot in my script =\ | 23:00 |
*** gnu111 has quit IRC | 23:00 | |
vidd | magg, you used Kiall 's ppa's right? | 23:00 |
*** rsampaio has quit IRC | 23:01 | |
*** mdomsch has quit IRC | 23:01 | |
*** rnirmal has quit IRC | 23:01 | |
*** arBmind has quit IRC | 23:02 | |
*** jorgew has left #openstack | 23:03 | |
*** magg has quit IRC | 23:04 | |
*** mdomsch has joined #openstack | 23:07 | |
*** kbringard has left #openstack | 23:08 | |
*** apevec has quit IRC | 23:08 | |
*** mgius has quit IRC | 23:09 | |
*** jakedahn has quit IRC | 23:10 | |
uvirtbot | New bug: #888802 in glance "glance-prefetcher requires authorization to run" [Critical,In progress] https://launchpad.net/bugs/888802 | 23:10 |
*** Teknix has joined #openstack | 23:12 | |
uvirtbot | New bug: #888795 in quantum "Condense source tree directories" [Low,Confirmed] https://launchpad.net/bugs/888795 | 23:13 |
*** lts has quit IRC | 23:14 | |
*** code_franco has quit IRC | 23:17 | |
*** apevec has joined #openstack | 23:21 | |
*** webx has quit IRC | 23:22 | |
*** webx has joined #openstack | 23:22 | |
*** mnour has quit IRC | 23:25 | |
stevegjacobs_ | Something seems to be wrong on one of my compute nodes | 23:25 |
tdi | stevegjacobs_: if you have only one node, then you are in a very dark place | 23:27 |
stevegjacobs_ | only one vm is picking up a fixed ip | 23:27 |
vidd | stevegjacobs are you using --auto-assign? | 23:27 |
vidd | fixed ip...nvmd | 23:28 |
vidd | stevegjacobs what does compute error log say | 23:28 |
stevegjacobs_ | vidd should that be a flag in nova.conf? | 23:28 |
vidd | stevegjacobs i was thinking floating....the question is irrelevant | 23:29 |
stevegjacobs_ | 2011-11-10 23:30:03,087 INFO nova.compute.manager [-] Updating host status | 23:30 |
stevegjacobs_ | 2011-11-10 23:30:04,792 INFO nova.compute.manager [-] Found 3 in the database and 1 on the hypervisor. | 23:30 |
stevegjacobs_ | I launched a number of instances using dashboard | 23:31 |
stevegjacobs_ | new ones, after I got dashboard working two days ago | 23:31 |
stevegjacobs_ | and those that were assigned to this particular node don't seem to have got their networking set up correctly | 23:33 |
stevegjacobs_ | need to do a bit more digging but I think I am seeing the correct number of instances (files) in /var/lib/nova/instances | 23:34 |
vidd | stevegjacobs you need to pastebin that stuff...it's hard to read here | 23:35 |
vidd | stevegjacobs have you restarted compute on that node? | 23:36 |
uvirtbot | New bug: #888809 in devstack "screen not working for me" [Undecided,New] https://launchpad.net/bugs/888809 | 23:36 |
stevegjacobs_ | not just now, but not very long ago | 23:36 |
stevegjacobs_ | this node had kernel panic earlier today too | 23:36 |
stevegjacobs_ | but one instance is still running on it | 23:37 |
vidd | stevegjacobs how much ram does that machine have? | 23:37 |
vidd | wait...did the instances work befor the kernel panic? | 23:38 |
stevegjacobs_ | 32G | 23:38 |
stevegjacobs_ | one did for sure - the one that is still running | 23:38 |
stevegjacobs_ | maybe not the others because I only launched them yesterday evening and hadn't done anything with them yet | 23:39 |
vidd | stevegjacobs have you tried rebooting those instances? the issue may be with the instances and not the node =] | 23:39 |
stevegjacobs_ | ok - worth a try :-) | 23:39 |
vidd | and do you have nova-network running on all machines? | 23:40 |
stevegjacobs_ | vidd: I rebooted one and it's working! you are a genius | 23:42 |
stevegjacobs_ | yes nova network on all | 23:42 |
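The fix that worked here (rebooting the affected instances rather than the node) can also be done from the command line. A sketch assuming the nova CLI is installed and credentials are exported; the instance name is a placeholder:

```shell
# List instances, then soft-reboot the one with broken networking; on
# reboot the guest re-runs its DHCP client and should re-acquire its
# fixed IP from nova-network.
nova list
nova reboot my-instance
```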
tdi | is there some proper way to attach iscsi to openstack ? | 23:42 |
vidd | stevegjacobs nah...just throwing stuff against the wall to see what sticks =] | 23:43 |
tdi | or can i just add luns to the nova-volumes group and im done? | 23:43 |
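On tdi's question: since nova-volume simply carves logical volumes out of the nova-volumes VG, an iSCSI LUN can indeed be added to that group as a physical volume. A sketch; the portal IP, target IQN, and device name are placeholders that depend on the array:

```shell
# Discover and log in to the iSCSI target, then fold the resulting block
# device into the nova-volumes volume group that nova-volume allocates from.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2011-11.com.example:lun0 -p 192.168.1.50 --login
pvcreate /dev/sdc                 # initialize the new LUN as an LVM PV
vgextend nova-volumes /dev/sdc    # grow the pool nova-volume draws from
```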
*** rods has quit IRC | 23:44 | |
stevegjacobs_ | vidd: well thanks anyway! | 23:46 |
*** BasTichelaar has quit IRC | 23:46 | |
uvirtbot | New bug: #888811 in quantum "Brokenness in ubuntu oneiric" [High,New] https://launchpad.net/bugs/888811 | 23:46 |
uvirtbot | New bug: #888813 in horizon "Duplicate dependencies/Dependency management problems" [Undecided,New] https://launchpad.net/bugs/888813 | 23:50 |
*** imsplitbit has quit IRC | 23:53 | |
*** rods has joined #openstack | 23:56 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!