Thursday, 2011-11-10

*** bsza has joined #openstack00:01
*** krow has quit IRC00:01
Kiallbwong_, any luck?00:02
bwong_http://paste.openstack.org/show/3234/00:02
bwong_nope00:02
bwong_nova-volume will not start.00:02
bwong_well it will start, then it will go down00:03
KiallAh yea, That..00:03
KiallYou need a nova-volumes LVM group for nova-volume to start00:03
*** MarkAtwood has quit IRC00:03
Kiallit shouldnt stop nova-test.sh from working though00:03
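A minimal sketch of creating the nova-volumes group Kiall refers to, assuming /dev/sdb is a spare disk or partition (the device name is an assumption; a loopback-backed file also works for testing):

    # assumption: /dev/sdb is an unused block device
    pvcreate /dev/sdb
    vgcreate nova-volumes /dev/sdb
    vgs nova-volumes            # confirm the group exists
    service nova-volume restart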
*** vernhart has quit IRC00:05
*** po has quit IRC00:10
*** jeromatron has quit IRC00:11
*** n81 has quit IRC00:13
*** nati2 has quit IRC00:14
*** nati2 has joined #openstack00:15
bwong_Kiall nova-test.sh could be a separate problem.00:16
*** bsza-I has joined #openstack00:17
bwong_it keep saying invalid credentials. ill post what I see. hold on00:17
*** Guest79151 is now known as med_out00:18
*** med_out has joined #openstack00:18
bwong_http://paste.openstack.org/show/3235/00:19
*** bsza has quit IRC00:20
Kialland you havent changed the details in settings since setting up keystone?00:20
dosdawganyone installed openstack on fedora 16 yet?00:21
bwong_kiall: you mean settings file?00:23
Kiallyea00:23
bwong_if so, nope haven't touched it since the beginning where it said to change it00:23
Kialland was this a clean ubuntu install before you started?00:24
*** vernhart has joined #openstack00:24
bwong_ya00:24
Kiallor had you tried the ubuntu packages/devstack etc etc on it before?00:24
bwong_i installed it specifically just for openstack00:24
Kiallcan you pastebin the output of `dpkg -l | grep -E "(nova|glance|openstack)"` and `find /usr/local/` ?00:25
*** dtroyer has quit IRC00:25
Kiall(Just to be sure its not using a mix of packages/manually installed stuff)00:26
bwong_Kiall: http://paste.openstack.org/show/3236/00:27
*** rnorwood has quit IRC00:27
KiallRight, looks clean.. 1 set of stuff rather than a mix of multiple00:28
KiallOhh maybe, what did you set region to in the settings file?00:29
bwong_was it supposed to be left as dub01?00:29
KiallWhatever region you wanted ..00:30
Kiallbut, you need to set nova to the same.. eg http://paste.openstack.org/show/3238/ in nova.conf00:30
Kiallnot 100% sure if that would cause it, but maybe...00:31
*** MarkAtwood has joined #openstack00:31
bwong_hmm00:31
bwong_dont have those in my nova.conf00:31
bwong_i will put those in there.00:31
viddKiall, does this mean if i have 3 nodes and three zones, i can set node one to only lauch zone A vm's, node 2 zone B and so on?00:32
viddand if so, can a machine have multiple zones available?00:33
Kiallvidd, kinda, but i havent actually looked at what has been implemented so far00:33
bwong_Kiall: Going to restart the services, just checking: I'm supposed to restart all services that begin with "nova" right?00:33
Kiallyea00:33
Kiallvidd, each zone needs a full set of the services...00:33
viddcan one controller handle multiple zones?00:34
bwong_Kiall: ok all service restarted.00:34
Kiallvidd, I doubt it very  much00:35
Kiallvidd, vidd, http://wiki.openstack.org/DistributedScheduler00:35
*** dragondm has quit IRC00:38
bwong_Yeah, not working. Should I just re-do the installation again.00:39
Kiallbwong_, I've gotta run, but.. heh beat me to it..00:39
Kiallif you dpkg -P all the packages listed in http://paste.openstack.org/show/3236/00:40
*** mszilagyi has quit IRC00:40
bwong_ya00:40
Kialland then `rm -rf /etc/nova /etc/glance /etc/keystone /var/lib/nova /var/lib/glance /var/lib/keystone` it will be as good as a fresh ubuntu install00:40
Kiallno traces left..00:41
Kiallbut backup the /etc folders, there might be a setting or two in there you might want to remember ;)00:41
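A sketch of the clean-up sequence Kiall describes, assuming the package list matches the earlier dpkg output and that a backup directory is created first (paths are illustrative):

    mkdir -p /root/openstack-etc-backup
    cp -a /etc/nova /etc/glance /etc/keystone /root/openstack-etc-backup/
    # purge every installed nova/glance/keystone/openstack package
    dpkg -l | grep -E "(nova|glance|keystone|openstack)" | awk '{print $2}' | xargs dpkg -P
    # remove leftover config and state so the box is as good as a fresh install
    rm -rf /etc/nova /etc/glance /etc/keystone /var/lib/nova /var/lib/glance /var/lib/keystone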
bwong_Ok00:41
bwong_Thanks for your help Kiall00:41
Kiallre the region setting, change it to "nova" thats the default and requires no changes to the config00:41
stevegjacobsKiall - I looked briefly but can't see where the config is for dashboard00:42
Kialland, ignore nova-volume until after everything else is working..00:42
bwong_ok00:42
bwong_alright00:42
Kiallstevegjacobs, heya..00:42
*** adjohn has joined #openstack00:43
Kiallstevegjacobs, /etc/openstack-dashboard/*  + /etc/apache2/conf.d/dashboard.conf+ /etc/apache2/sites-available/default00:43
KiallAnyway - 1am and I have meeting at 9am, stevegjacobs I'll be in touch tomorrow re that coffee... cyas.. good luck bwong_ ;)00:44
stevegjacobsthere is nothing in sites-available/default that refers to dashboard00:45
Kiallsure, but the /etc/apache2/conf.d/dashboard.conf file kinda combines with /etc/apache2/sites-available/default00:45
stevegjacobsok - I'll try to figure it out.00:46
stevegjacobsnot urgent anyway00:46
KiallYou were trying to change the port? `grep -r '80' /etc/apache2`00:46
Kiallchange anything in that list that looks like a port, and restart apache..00:46
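A minimal sketch of the change Kiall suggests, assuming the stock Ubuntu apache layout (ports.conf plus the default vhost; the new port 8080 is arbitrary):

    grep -r '80' /etc/apache2                 # find everything that looks like a port
    sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
    # a NameVirtualHost *:80 line in ports.conf may need the same edit
    sed -i 's/<VirtualHost \*:80>/<VirtualHost *:8080>/' /etc/apache2/sites-available/default
    service apache2 restart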
Kiallanyway - gotta sleep, cyas..00:47
stevegjacobsok thanks00:47
stevegjacobsI'm heading that way too00:47
stevegjacobsg'nite00:47
viddstevegjacobs, how goes it00:47
*** jakedahn has quit IRC00:47
viddpaste me your /etc/apache2/conf.d/dashboard.conf file00:47
stevegjacobsgot a mostly working stack, set up four small web server vm's and one other bespoke app today :-)00:48
*** negronjl has quit IRC00:48
viddnice00:49
*** supriya_ has joined #openstack00:49
stevegjacobsa few things still not working - mainly snapshots00:49
*** supriya_ has quit IRC00:49
viddim working on rebuilding my openstack and scripting the process00:49
stevegjacobsMine is mostly based on Kiall's scripts, with a few diversions in the setup00:50
*** vernhart1 has joined #openstack00:51
*** vernhart has quit IRC00:53
*** vernhart has joined #openstack00:54
*** vernhart1 has quit IRC00:55
*** livemoon has joined #openstack00:56
livemoonmorning00:57
*** jeromatron has joined #openstack00:59
*** jollyfoo has quit IRC01:09
viddhello livemoon01:09
*** Gollen has joined #openstack01:11
*** thingee has left #openstack01:13
*** jeromatron has quit IRC01:13
*** statik has joined #openstack01:14
*** bwong has quit IRC01:19
*** lorin1 has quit IRC01:22
*** lorin1 has joined #openstack01:22
*** lorin1 has left #openstack01:23
livemoonhi,vidd01:24
viddhello livemoon01:24
*** bsza-I has quit IRC01:24
*** rnorwood has joined #openstack01:24
livemoona question: did you install keystone and dashboard in the same server?01:24
viddlivemoon, yes01:24
viddcurrently, i only have the one machine capable of running VMs01:25
*** lorin1 has joined #openstack01:26
livemoonok01:26
viddlivemoon, still having issues?01:29
*** vernhart has quit IRC01:31
*** GeoDud has quit IRC01:32
*** bwong_ has quit IRC01:33
livemoonyes01:33
viddwhat problem?01:34
*** webx has quit IRC01:36
livemoonI can login but nothing show01:39
livemoonI will do it in my vmware machine today01:39
*** reed has quit IRC01:40
*** dysinger has quit IRC01:44
stevegjacobsvidd: tell me more about the scripting that you are doing01:44
viddthere isnt much to tell...im writing a script that will walk anyone thru setting up a metal-to-active full stack install01:45
Gollenkeystone does not work on the openstack 2011.3 version? how do I configure it?01:45
stevegjacobsOn one machine or multiple?01:45
viddstevegjacobs, one machine01:45
viddto add multiple machines, you just add compute and mysqlclient on the additional machines and copy your nova.conf file to the other machines01:47
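A sketch of what vidd describes for bringing up an extra compute node, assuming the same Ubuntu/PPA packages and that nova.conf already points at the controller's MySQL and RabbitMQ (the hostname "controller" is a placeholder):

    # on the new node
    apt-get install -y nova-compute python-mysqldb
    scp controller:/etc/nova/nova.conf /etc/nova/nova.conf
    service nova-compute restart
    # back on the controller, the new host should show up:
    nova-manage service list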
stevegjacobsI've got three in my stack right now01:47
*** rsampaio has joined #openstack01:48
stevegjacobstwo are nice new machines, but the third one is older and I was hoping to configure it to just do nova-volume or swift01:49
stevegjacobsIt's sitting on the stack but not doing anything at the moment :-)01:49
stevegjacobsWe have a couple other machines that I hope to add in later01:50
stevegjacobsso I am trying to figure out what is best practice for expanding bit by bit01:50
*** isaacfinnegan has quit IRC01:51
viddstevegjacobs, what have you put on it currently?01:51
stevegjacobsdon't ask - I think I've made a mess of it01:53
viddstevegjacobs, so..."nothing"?01:54
stevegjacobsmore like everything01:54
viddwhat are the specs of the machine01:54
*** Hakon|mbp has joined #openstack01:55
viddim considering taking a relic machine i have here and setting it up as the MySQL/Keystone/Dashboard machine01:55
*** jakedahn has joined #openstack01:55
viddits a PIII with 80Gb hard drive01:56
uvirtbotNew bug: #888370 in glance "glance show prints invalid URI" [Low,In progress] https://launchpad.net/bugs/88837001:56
*** GeoDud has joined #openstack01:56
*** ton_katsu has joined #openstack01:57
stevegjacobsvidd: the older server doesn't support kvm - older intel xeon processor,  6x2tb drives02:00
vidd6x2tb hd's?02:00
stevegjacobsyup02:01
viddswift server...definitely =]02:01
viddhttp://swift.openstack.org/development_saio.html#02:01
stevegjacobsI have swift installed but I know it's not configured right.02:02
viddset it up with out keystone first02:02
viddonce that works, tie keystone and dash into it02:03
viddthen fianlly link glance to it02:03
stevegjacobsthis looks interesting02:03
*** bhall has quit IRC02:04
viddi feel so stupid.....02:04
stevegjacobsI have oneiric  and swift packages installed02:05
viddthe reason i had so many issues with keystone was that one tiny file was missing.....python-mysqldb.....could not understand why it failed....02:05
viddstevegjacobs, i know nothing of swift02:05
stevegjacobsIs it worth it to start over - the link is saying to start from lucid02:06
viddi just know it takes ALOT of hard drive space =]02:06
stevegjacobsvidd: ok, thats my problem too :-)02:06
viddstevegjacobs, i would assume "lucid" was just imported from the last version of the documentation....and you should be fine with current ubuntu install02:07
stevegjacobsI can feel my brain cells frying and dying from trying to get my head around compute this past weeks!02:07
viddstevegjacobs, ive given up a month of my life for this02:08
viddand they dont want to pay me for the development time =\02:08
stevegjacobsI started at the beginning of August02:09
*** Otter768 has joined #openstack02:09
stevegjacobsbut can't do it full time.02:10
uvirtbotNew bug: #888371 in swift "swift bug with python webob 1.2b2" [Undecided,New] https://launchpad.net/bugs/88837102:10
uvirtbotNew bug: #888372 in glance "glance cache-reap-invalid causes 'NoneType' object is not subscriptable" [Undecided,New] https://launchpad.net/bugs/88837202:11
viddstevegjacobs, the issue i have is i dont have server-grade equipment right now....im working with low-grade desktop-centric machines02:11
stevegjacobsOur company has bought a couple of new machines to get started with, but they want me to figure out something useful to do with some older ones02:12
viddand they cant understand why a "simple" 2-gb ram vm takes so long to do anything...the host machine only has 2 gb!02:12
viddstevegjacobs, take those 2tb drives and distribute them out between 3 servers and do a "proper" swift cluster02:13
stevegjacobsThats where I was at the beginning - installing stackops on cast-off desktops :-)02:13
*** GeoDud has quit IRC02:14
viddthey are promising me one "new[ish]" machine, and then i'll move existing servers onto it to free up new machines to convert02:14
stevegjacobsYeah - first step is to migrate some existing loads onto what I've got set up now so that I can retire a couple of them02:15
viddthe chief engineer says "we can reduce enough to free up anymore racks"02:15
*** jeromatron has joined #openstack02:16
viddi tell him "i dont want to free up racks...if this goes as expected, i will be filling up the holes we already have in the racks with more machines [and paying customers]"02:16
*** jdurgin has quit IRC02:17
stevegjacobsThen I'll lash some new big drives and maybe a bit of memory into the older machines and create a swift cluster02:17
stevegjacobsOnce I can figure out how to get everything working together02:18
stevegjacobsI gotta go to bed now.02:19
stevegjacobsg'night02:20
*** stevegjacobs has quit IRC02:21
*** GeoDud has joined #openstack02:24
*** rods has quit IRC02:26
*** vladimir3p has quit IRC02:27
*** osier has joined #openstack02:28
*** gyee has quit IRC02:32
*** Hakon|mbp has quit IRC02:37
*** daMaestro has quit IRC02:37
*** vernhart has joined #openstack02:43
*** redconnection has joined #openstack02:45
*** katkee has joined #openstack02:50
*** egant has joined #openstack02:58
uvirtbotNew bug: #888382 in glance "glance-cache-cleaner causes 'Driver' object has no attribute 'delete_incomplete_files'" [Undecided,New] https://launchpad.net/bugs/88838203:01
uvirtbotNew bug: #888383 in glance "glance-cache-prefetcher causes Unknown Scheme errors when using 'file://' images" [Undecided,New] https://launchpad.net/bugs/88838303:01
*** ton_katsu has quit IRC03:06
HugoKuo__morning03:07
*** Ryan_Lane1 has joined #openstack03:09
*** Ryan_Lane has quit IRC03:09
*** Ryan_Lane1 has quit IRC03:11
uvirtbotNew bug: #888385 in nova "Failure when installing Dashboard - python tools/install_venv.py" [Undecided,New] https://launchpad.net/bugs/88838503:13
*** rnorwood has quit IRC03:14
*** bsza has joined #openstack03:15
*** winston-d has joined #openstack03:22
*** jingizu_ has quit IRC03:22
*** jingizu_ has joined #openstack03:23
*** rnorwood has joined #openstack03:27
*** nati2_ has joined #openstack03:29
*** rnorwood has quit IRC03:32
*** nati2 has quit IRC03:32
*** stuntmachine has joined #openstack03:33
*** bsza has quit IRC03:37
*** bsza has joined #openstack03:37
viddzykes-, you here?03:42
*** rnorwood has joined #openstack03:52
*** v0id has joined #openstack03:53
*** lorin1 has quit IRC03:56
*** jog0 has quit IRC04:01
*** mmetheny has quit IRC04:08
*** mmetheny has joined #openstack04:09
*** vidd is now known as vidd-away04:13
*** lionel has quit IRC04:14
*** lionel has joined #openstack04:15
*** emid has joined #openstack04:15
*** nati2 has joined #openstack04:24
*** nati2_ has quit IRC04:24
*** vernhart1 has joined #openstack04:27
*** vernhart has quit IRC04:31
*** stuntmachine has quit IRC04:31
*** tokuz has quit IRC04:44
*** bsza-I has joined #openstack04:49
*** bsza has quit IRC04:50
*** neogenix has joined #openstack04:52
*** PeteDaGuru has left #openstack04:59
*** bsza-I has quit IRC05:01
*** jwalcik has quit IRC05:05
*** bhall has joined #openstack05:07
*** bhall has quit IRC05:07
*** bhall has joined #openstack05:07
*** markwash has quit IRC05:11
*** blamar has quit IRC05:11
*** bsza has joined #openstack05:14
*** AlanClark has quit IRC05:16
*** rnorwood has quit IRC05:19
*** bsza has quit IRC05:35
livemoonafternoon05:37
*** jj0hns0n has quit IRC05:38
*** v0id has quit IRC05:49
*** zaitcev has quit IRC05:56
*** jamespage has quit IRC05:57
*** chadh has quit IRC05:57
*** paltman has quit IRC05:58
*** superbobry has joined #openstack05:59
*** obino has joined #openstack05:59
*** Blah1 has joined #openstack05:59
*** chadh has joined #openstack06:00
*** giroro_ has joined #openstack06:00
*** dosdawg__ has joined #openstack06:00
*** pfibiger` has joined #openstack06:01
*** oubiwann has quit IRC06:01
*** rwmjones has quit IRC06:02
*** dosdawg has quit IRC06:02
*** chmouel_ has joined #openstack06:02
*** statik has quit IRC06:02
*** pfibiger has quit IRC06:02
*** Ruetobas has quit IRC06:02
*** chmouel has quit IRC06:02
*** n0ano has quit IRC06:02
*** nci has quit IRC06:02
*** n0ano has joined #openstack06:02
*** jedi4ever has joined #openstack06:02
*** oubiwann1 has joined #openstack06:02
*** pvo has quit IRC06:02
*** nci has joined #openstack06:02
*** pvo has joined #openstack06:03
*** osier has quit IRC06:03
*** paltman has joined #openstack06:03
*** osier has joined #openstack06:03
*** Bryanstein has quit IRC06:03
*** jamespage has joined #openstack06:03
*** jamespage has joined #openstack06:04
*** miclorb_ has quit IRC06:04
*** llang629 has joined #openstack06:11
*** llang629 has left #openstack06:13
*** rwmjones has joined #openstack06:14
*** vernhart1 has quit IRC06:17
*** nerens has joined #openstack06:18
*** vernhart has joined #openstack06:20
*** markwash has joined #openstack06:22
*** exprexxo has joined #openstack06:23
*** Blah1 has quit IRC06:29
*** hingo has joined #openstack06:43
*** derrick has quit IRC06:48
*** derrick has joined #openstack06:49
*** hezekiah_ has joined #openstack06:53
*** winston-d has quit IRC06:58
*** TheOsprey has joined #openstack07:07
*** dnjaramba has joined #openstack07:09
*** Rajaram has joined #openstack07:09
*** kaigan has joined #openstack07:13
*** bhall has quit IRC07:14
*** Bryanstein has joined #openstack07:14
*** Bryanstein has quit IRC07:19
*** wariola has joined #openstack07:19
*** winston-d has joined #openstack07:21
*** krow has joined #openstack07:23
*** Bryanstein has joined #openstack07:27
*** nati2_ has joined #openstack07:27
*** nati2 has quit IRC07:30
*** adjohn has quit IRC07:38
*** adjohn has joined #openstack07:46
*** guigui1 has joined #openstack07:48
*** Ryan_Lane has joined #openstack07:53
*** superbobry2 has joined #openstack07:57
*** superbobry2 has left #openstack07:57
*** krow has quit IRC07:58
*** rsampaio has quit IRC08:01
*** foexle has joined #openstack08:02
foexlehiho08:02
*** exitdescription has joined #openstack08:03
*** taihen has quit IRC08:04
*** mgoldmann has joined #openstack08:05
*** Ryan_Lane has quit IRC08:06
*** adjohn has quit IRC08:06
*** dirkx_ has joined #openstack08:09
*** efcasado has joined #openstack08:09
*** nRy_ has quit IRC08:16
*** binbash_ has joined #openstack08:17
*** Razique has joined #openstack08:22
*** mnour has joined #openstack08:24
foexleahoi Razique ;)08:24
Raziquehey foexle08:24
Razique'sup ? :d08:25
foexlesup ? :)08:25
foexlewhat you mean ?08:25
Raziquewhat's up ? :)08:26
foexlei was verry tired yesterday :D ....08:26
Raziquehaha no way :p08:27
*** halfss has joined #openstack08:27
foexletoday i'll do documentation and i'll add a new compute-node and swift to the cloud :D08:27
foexleso wish me good luck hahaha :D08:27
halfsshi: when i use curl to resize instance: # curl -X POST  localhost:8774/v1.1/21/servers/143/action -H "Content-Type: application/json" -H "X-Auth-Token:232b0e48-c826-45b0-a564-dff2d4537244" -H "Accept:application/xml" -d '{"resize":{"flavorRef":" http://localhost:8774/v1.1/21/flavors/3"}}'08:28
halfss<badRequest code="400" xmlns="http://docs.openstack.org/compute/api/v1.1">08:28
halfss    <message>08:28
halfss        Unable to locate requested flavor.08:28
halfss    </message>08:28
halfss</badRequest>08:28
*** reidrac has joined #openstack08:28
halfssis some one can help me ?08:28
efcasadotry to type: "flavorRef": "3"08:29
efcasadoinstead of the whole URL08:30
halfssok08:30
halfssoh  yes08:30
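Putting efcasado's fix together, the resize request would look roughly like this (the token and tenant id are the ones from halfss' original command):

    curl -X POST localhost:8774/v1.1/21/servers/143/action \
         -H "Content-Type: application/json" \
         -H "X-Auth-Token: 232b0e48-c826-45b0-a564-dff2d4537244" \
         -H "Accept: application/xml" \
         -d '{"resize": {"flavorRef": "3"}}'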
halfssbut at nova-api.log:(nova.rpc): TRACE: Traceback (most recent call last):08:31
halfss(nova.rpc): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py", line 620, in _process_data08:31
halfss(nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)08:31
halfss(nova.rpc): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 98, in wrapped08:31
halfss(nova.rpc): TRACE:     return f(*args, **kw)08:31
halfss(nova.rpc): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 117, in decorated_function08:31
halfss(nova.rpc): TRACE:     function(self, context, instance_id, *args, **kwargs)08:31
halfss(nova.rpc): TRACE:   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 931, in prep_resize08:31
halfss(nova.rpc): TRACE:     raise exception.Error(msg)08:31
halfss(nova.rpc): TRACE: Error: Migration error: destination same as source!08:31
halfssi have one compute node08:31
halfssif i want to resize an instance,the instance will migrate and then resize?08:32
*** javiF has joined #openstack08:34
zykes-vidd-away: now !08:35
zykes-;p08:37
*** alexn6 has joined #openstack08:37
efcasadoDoes anyone know how to list the virtual interfaces for a given instance? (using the restful interface)08:37
livemoonhi.all08:37
*** tjikkun has quit IRC08:37
*** stevegjacobs has joined #openstack08:40
*** troya has joined #openstack08:43
foexleefcasado: du you mean the mapped ip-adresses ? or the nic names in each vm ?08:44
*** vdo has joined #openstack08:45
*** troya has quit IRC08:46
livemoondoes someone know "verified_claims = {'user': token_info['access']['user']['name']," when I use glance08:47
stevegjacobsI have one instance that seems to have crashed - I've tried to terminate it but it won't terminate08:48
Raziquehalfss: use paste :)08:50
Raziquehi zykes- livemoon stevegjacobs !08:51
*** miclorb_ has joined #openstack08:51
livemoonhi08:58
livemoonRazique08:58
livemoonI meet new problem today08:58
livemooneveryday I alwasy meet new bugs08:58
*** uksysadmin has joined #openstack08:58
uvirtbotNew bug: #888448 in keystone "auth_token.py of keystone error when I use glance" [Undecided,New] https://launchpad.net/bugs/88844809:01
*** pixelbeat has joined #openstack09:01
zykes-Razique: .09:03
zykes-which bug livemoon ?09:03
*** wariola has quit IRC09:04
livemoonhttps://bugs.launchpad.net/keystone/+bug/88844809:05
livemoonzykes: have you meet it?09:05
*** uksysadmin has quit IRC09:05
zykes-don't remember09:06
*** jakedahn has quit IRC09:06
zykes-i haven't touched my deployment in a few weeks09:06
*** redconnection has quit IRC09:09
*** anticw has quit IRC09:09
livemoonI first meet it09:09
livemoonbecause today I install latest version in my server09:09
*** jj0hns0n has joined #openstack09:11
livemoonwho know this coding " verified_claims = {'user': token_info['access']['user']['name'],"09:14
*** anticw has joined #openstack09:15
*** jj0hns0n has quit IRC09:17
*** hezekiah_ has quit IRC09:17
*** wariola has joined #openstack09:17
*** uksysadmin has joined #openstack09:19
*** jj0hns0n has joined #openstack09:21
*** anticw has quit IRC09:21
*** dobber has joined #openstack09:21
*** jj0hns0n has quit IRC09:22
*** jakedahn has joined #openstack09:23
*** anticw has joined #openstack09:23
Raziquelivemoon: wasn't the bug linked to that temporary Keystone hack, the -A flag ?09:23
*** uksysadmin has quit IRC09:25
*** stevegjacobs_ has joined #openstack09:25
*** marrusl has quit IRC09:26
livemoonRazique: not only glance, I use python-novaclient, also this error09:27
stevegjacobs_Don't know what is going on, but one of the servers on my stack has crashed and disappeared09:27
Raziquestevegjacobs: an instance ?09:28
RaziqueI mean, an instance has disappeared ?09:28
stevegjacobs_yeah I still see a readout using nova show <serverID>09:28
*** redconnection has joined #openstack09:29
stevegjacobs_but I can't ping or ssh into it. It was running a web site that was visible and that's gone09:29
Raziquestevegjacobs: that happened to me in my lab09:30
Razique(nova): TRACE: Error: Domain not found: no domain with matching name 'instance-0000004d'09:30
livemoonRazique:09:30
Raziquewhile the server was running, it occurred after I restarted the compute node09:30
livemoonI have meet it in my lab too09:30
*** Razique has quit IRC09:31
stevegjacobs_I also tried nova reboot --hard09:31
livemoonwhen I delete an instance, it also happened09:31
stevegjacobs_I want to get it back if possible09:31
stevegjacobs_You mean if you delete an instance, another instance disappears??09:31
*** Gollen has quit IRC09:32
*** Razique has joined #openstack09:33
stevegjacobs_I have pasted nova-api.log from attempt to reboot it http://paste.openstack.org/show/3239/09:33
stevegjacobs_Instance was fine last night but gone this morning09:33
Raziquesorry bug09:34
Raziquesoren:09:35
Raziquestevegjacobs: livemoon09:35
uvirtbotNew bug: #888458 in openstack-ci "Stable branches should only be +2 by stable team maintainers" [High,New] https://launchpad.net/bugs/88845809:36
RaziqueI think it happens when the connection from nova-scheduler to the compute node is lost09:36
Raziquethat makes the compute node think the instance no longer exists and then it removes it09:36
Razique(I mean nova-compute does the virsh destroy domain) and the rm -rf /var/lib/nova/instances/instance09:37
RaziqueI already had that in production I think ; which is, believe me…. scary09:37
stevegjacobs_I just did nova show <serverID> and it is showing the status as REBOOT09:38
*** dirkx_ has quit IRC09:38
sorenRazique: Sorry, what?09:39
*** dirkx_ has joined #openstack09:39
*** wulianmeng has joined #openstack09:39
Raziquesoren: mistype, sorry ^^09:39
Raziquestevegjacobs: does the instance exists ?09:39
*** dirkx_ has quit IRC09:39
Razique(I mean it's files)09:40
sorenRazique: Ah, ok :)09:40
RaziqueWhere are you from soren ?09:40
HugoKuo__do any docs talk about making a swift container public in Swift 1.4.4+ ?09:40
uvirtbotNew bug: #888460 in openstack-ci "nova-milestone-tarball job fails to run on "nova" slave" [Medium,New] https://launchpad.net/bugs/88846009:41
uvirtbotNew bug: #888461 in openstack-ci "Extraneous glance tarball should be cleaned up on nova.openstack.org/tarballs" [Low,New] https://launchpad.net/bugs/88846109:41
HugoKuo__in Bexar , the doc mentioned CDN , but CDN seems to have been removed from Swift .... Am I right ?09:41
stevegjacobs_arrgh - found the problem09:41
stevegjacobs_it's on a node that has crashed!09:42
wulianmengIs there anybody who install openstack with xen?09:42
*** darraghb has joined #openstack09:43
Raziquestevegjacobs: doesn't really surprise me09:46
*** dnjaramba_ has joined #openstack09:48
*** dnjaramba has quit IRC09:48
*** nerens has quit IRC09:51
Raziqueany success with live migration here09:54
Razique?09:54
zykes-Razique: i can try next week ;p09:54
RaziqueI'm trying :D09:55
Raziquedoes someone know if HA works09:55
zykes-HugoKuo__: CDN is a seperate service which was announced @ diablo conference09:55
Raziquefor instance : instance running on a node09:55
Raziquenode gone, nova restarts the instance somewhere else09:55
*** dendrobates has quit IRC09:56
HugoKuo__zykes- thanks09:57
*** livemoon has left #openstack10:00
*** statik has joined #openstack10:02
*** statik has quit IRC10:02
*** statik has joined #openstack10:02
foexleRazique: you have defined an alias "glance -A "$OS_AUTH_KEY"" ... but this sys-var is not set. So if i understand correctly, -A is the API key ... right ?10:02
*** wariola has quit IRC10:06
*** apevec has joined #openstack10:11
*** syah_ has joined #openstack10:17
uvirtbotNew bug: #888479 in openstack-ci "Bug should not be set to FixCommitted on non-master merge" [High,New] https://launchpad.net/bugs/88847910:21
*** syah has quit IRC10:21
*** dendrobates has joined #openstack10:24
*** anticw has quit IRC10:25
*** mcclurmc has quit IRC10:27
*** anticw has joined #openstack10:27
*** tyska has joined #openstack10:27
*** mcclurmc has joined #openstack10:27
*** taihen has joined #openstack10:30
*** marrusl has joined #openstack10:30
*** lelin has joined #openstack10:35
*** vernhart has quit IRC10:39
Raziquefoexle: yup10:40
RaziqueI've disabled it since I don't use Keystone10:40
Raziquebut in order to integrate glance/ glance tools (eg glance index) with keystone10:40
Raziquean -A (temporary) flag has been added10:40
foexleRazique: ok thx :)10:41
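For reference, the temporary -A flag Razique describes is passed on each glance CLI call with the keystone token, e.g. (assuming OS_AUTH_KEY already holds a valid token):

    export OS_AUTH_KEY="<keystone token>"
    glance -A "$OS_AUTH_KEY" index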
*** nati2_ has quit IRC10:43
stevegjacobs_what could cause a lightly loaded node to crash?10:44
Raziquestevegjacobs: kern panic10:46
Raziqueor hardware issue10:46
Raziquemaybe a bug10:46
Raziquestevegjacobs: check kern.log10:47
Raziquefoexle: will know what we are talking about here :D10:47
foexlehahahah oh yeah ;)10:47
*** nerens has joined #openstack10:50
tyskahello guys!!!10:50
tyskahow u r doing?10:50
tyskaor better saying, how r u doing? =)10:50
Kiallstevegjacobs_: node crashed? any idea why?10:51
stevegjacobs_Nope - still waiting on someone at the data centre to push a button for me10:52
Kiallouch get them to take a photo of the screen10:52
Kiallkernel panics never get logged!10:52
Kialland .. `echo "kernel.panic = 20" > /etc/sysctl.d/30-panic.conf && sysctl -p`10:53
Kiallie .. auto reboot after 20 seconds if the kernel panics10:53
stevegjacobs_I would have gone out myself already but I let my wife have the car...10:54
Raziqueerf :/10:54
Kialland, to get the logs.. you can use netconsole to ship them to another server https://wiki.ubuntu.com/Kernel/Netconsole10:54
*** marrusl has quit IRC10:54
Raziquestevegjacobs: how many instances on it ?10:54
stevegjacobs_Only one that counts -10:54
Raziquestevegjacobs: u have it backedup ?10:55
Raziquebacked-up*10:55
Kiallstevegjacobs: seriously do get them to take a photo of the screen! Otherwise, you'll have no idea what went wrong10:55
Raziqueu could launch it then somewhere else (if u have another node)10:55
tyskahey Razique10:56
stevegjacobs_It was a web site that was transfered to run live on this server lst night. It's back up on it's original server10:56
Kialllast night? bad timing10:56
Raziquehi tyska :)10:57
RaziqueKiall: Murphy's law :(10:57
stevegjacobs_yes it is bad timing10:57
KiallMind me asking what brand of server it was? I've been having some issues with a HP + Oneiric...10:57
tyskaRazique: my problem remains =( see it here and say something to me https://answers.launchpad.net/nova/+question/17820910:58
stevegjacobs_It's a brand new dell10:58
tyskaRazique: you are the man and you put an end to my suffering =)10:58
Kialltyska: what kind of switch are you using? and have you configured it for vlans?10:58
tyskaKiall: im using no switch, just direct connection between server1 and server210:59
Kiallon eth1?10:59
stevegjacobs_I remember it acting a bit funny at the time I was installing Os10:59
tyskayeah, eth1 of server1 is connected directly to eth0 of server211:00
tyskathrough a cross-over cable11:00
stevegjacobs_two switches -both vlan capable11:00
Kiallso shouldnt --vlan_interface be set to eth1 them?11:00
Kiallthen*11:00
Kiallstevegjacobs: yea, oneiric has a huge bug preventing me from installing it on out 1950 and 2950 servers... -_-11:01
*** BasTichelaar has joined #openstack11:01
stevegjacobs_One for public network one for private -11:01
Kialltyska: oh, your trying to make the second server a single interface?11:02
*** jakedahn_ has joined #openstack11:03
tyskaKiall: yeah, second server will use just 1 interface11:05
*** jakedahn has quit IRC11:05
*** jakedahn_ is now known as jakedahn11:05
tyskaKiall: i want him isolated from other subnets, him will just communicate with server111:05
tyskait will*11:05
Raziquetyska: yah same setup here11:06
Raziqueeth0 public eth1 <--> eth1 nova-com11:07
Raziqueweird11:07
tyskaRazique: my br100 on nova-compute(server2) has no ip, is this right?11:07
Kiallassuming br100 is the first network, then it should have an IP11:07
Raziquein vlan mode, no need to manually setup the bridge11:08
Raziquea bridge is created per network11:08
*** stevegjacobs has quit IRC11:08
Raziquewith it's own vlan11:08
Kiallyea - if you manually setup a br100, then remove it.. nova will make what it needs11:09
tyskai did not manually setup11:09
tyskai just thought that was weird11:10
tyskasince on server2 there is no interface configured in the private ip subnet11:10
tyska172.16.50.0/2411:10
tyskaand consequently no route to this subnet11:10
tyskathen how does the machine handle packets to this subnet?11:10
tyskaserver1 has a compute node too, and his br100 is configured in the subnet 172.16.50.0/2411:11
KiallOh wait, your using 1 nova-network rather than 1 per nova-compute...11:11
KiallThe interface will not get an IP in that case11:11
KiallSince, its just a bridge, it does no routing11:12
Kialltyska: can S2 ping 192.168.1.254 ?11:13
Kiallactually - it has to be able to..11:13
Kiallnevermind11:13
KiallI'm out of ideas.. ;)11:13
tyskaKiall: yeah11:13
tyskaS2 can ping to 192.168.1.25411:13
tyskaand even can run instances from S1, using euca-run-instances, in s211:14
*** ahasenack has joined #openstack11:14
Kiallif you assign a floating IP to the instance on S2, does it show in `ip addr show` on S1? (Or, is the nova network set up for multi-host..)11:15
Raziquetyska: u mind sharing SSH access ?11:17
tyskaRazique: np, but i will need to fix a configuration problem here first, because these machines are without an internet connection right now11:18
*** ahasenack has quit IRC11:18
*** ahasenack has joined #openstack11:19
Raziqueok ok =)11:20
Raziqueagain :p I'm interested here about node failure with running instances one it11:20
Raziqueon*11:20
Raziqueif someone has info to share. Here is what I've come up with so far11:21
*** Hakon|mbp has joined #openstack11:21
Raziqueshared storage for instances across nodes : useless, virsh will complain about a Could not find filter 'nova-instance-instance-00000056-02163e1cd83111:21
Raziquenon shared storage and rescue (Diablo feature) doesn't seem to work11:21
Raziquein fact nova doesn't seem aware it lost a node while there were instances on it11:22
Raziqueso reboot : useless since the node no longer exists11:22
Kiallshared storage for /var/lib/nova/instances should work?11:22
RaziqueKiall: already tried11:22
Raziquebut the network part doesn't handle that scenario11:23
Raziquein fact my conclusion is : 1- custom heartbeat script11:23
Razique2- make a shared storage for instances11:24
Razique3- require migration ; that doesn't seem to work here :D11:24
KiallRazique: or 3.. http://www.gluster.com/community/documentation/index.php/OSConnect11:25
Kiall(Havent used  it...)11:25
Razique4- DB field update : useless, since the only thing nova complains about is "in DB = XXX instance, running = 0"11:25
*** nerens has quit IRC11:25
Raziquebut the scheduler doesn't take the initiative to respawn11:25
KiallYea - the scheduler should not respawn the instance IMO..11:25
Razique5- if you relaunch the node, then… use my script haha https://github.com/Razique/BashStuff/blob/master/SCR_5006_V00_NUAC-OPENSTACK-DRP-OpenStack.sh11:26
KiallWhat if the instance is running, and you spawn a second copy?11:26
Raziqueyou are right…black hole here11:26
Kiall# We reset the database so the volumes are reset to an available state11:26
Kiallor .. nova-manage volume reattach $id11:26
Raziquethanks for that info Kiall11:27
RaziqueI don't really use nova-manage for instances admin actually :p11:27
Kiallsure.. you still need to find the ID's, but I can only imagine it will be more reliable than trying to DIY ;)11:28
RaziqueDIY ?11:28
Kiall"do it yourself"11:28
Raziqueah :)11:28
KiallGuess you're not a native English speaker ;)11:28
RaziqueKiall: haha french :p11:29
Raziqueit took me 5 minutes to figure out that BYOB means Bring Your Own Booze11:29
Kialllol11:29
RaziqueLet's try Gluster<-> nova then11:29
Raziqueif it's the "only" stable solution11:30
Raziquenova rescue seems full of potential , but atm, not really useable11:30
*** miclorb_ has quit IRC11:31
Kiallnova rescue is nothing to do with downed nodes? it basically the OS equivalent of booting to a shell off a live CD11:31
KiallIt's basically*11:31
Raziquehttps://blueprints.launchpad.net/nova/+spec/host-aggregates11:31
Raziquewouldn't it be useful in such a case ?11:31
Raziqueah no, not really since the node is missing11:32
*** PotHix has joined #openstack11:33
*** cmagina has quit IRC11:33
*** cmagina has joined #openstack11:34
RaziqueKiall: on a purely, let's say…. logical level11:35
RaziqueGlusterfs brings no more than clustered storage for running instances11:35
Raziqueso a NFS ++++11:35
RaziqueI'm not saying at all GFS is like NFS :p but the goal here11:36
Raziqueis to make sure our instances files are available accross all ndes11:36
Raziquenodes11:36
Kialland thats exactly (mostly) what GFS does..11:37
Raziquethat brings us back to my conclusion : how to restart them on a new node11:37
KiallGlusterfs is kinda like a poor man's SAN11:37
RaziqueI have via NFS the instance file on the other node11:37
foexleany know how i can get with euca tools a list with all instance types ?11:37
Raziquenow what matters here is to say to nova "ok I"ve the instance files, but the instances doesn't run, let's start it !"11:38
zykes-Razique: what about sheepdogg?11:38
Raziquefoexle: nova-manage flavor list11:38
Raziquezykes-: same issue here11:38
foexleRazique: thx again11:38
Kiallzykes-: isnt sheepdog for nova-volume?11:39
Raziquenova is not aware of the fact that it lost running instances11:39
zykes-Kiall: no it's for instances11:39
Kiallzykes-: are you sure? (I just checked.. http://wiki.openstack.org/SheepdogSupport)11:39
*** osier has quit IRC11:39
RaziqueKiall: the project itself is for instances11:40
zykes-https://code.launchpad.net/~morita-kazutaka/nova/sheepdog/+merge/4509311:40
Raziquebut you seem to be right regarding its implementation in nova11:40
zykes-ah ok then Kiall11:40
Kiallzykes-: look at step 4 of that link ;)11:41
Kiallnova-volume --volume_driver=nova.volume.driver.SheepdogDriver11:41
zykes-yeh, i saw11:41
RaziqueYou should consider Sheepdog if you are looking for clustered storage that:11:41
zykes-what's the difference on gluster then ?11:41
Raziquezykes-: consider them on a different approach I'd say11:41
Kiallyou can mount gluster @ /var/lib/nova/instances11:41
zykes-ah11:41
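A minimal sketch of the mount Kiall mentions, assuming a gluster volume named "nova" is already served from a host called gfs1 (both names are placeholders):

    apt-get install -y glusterfs-client
    mount -t glusterfs gfs1:/nova /var/lib/nova/instances
    # make it persistent across reboots
    echo "gfs1:/nova /var/lib/nova/instances glusterfs defaults,_netdev 0 0" >> /etc/fstab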
RaziqueGFS : stupid replication for instances11:41
RaziqueSheepdog qemu-kvm aware replication11:42
zykes-Razique: say gluster instead, when you say GFS i think of GlobalFileSystem11:42
Raziquezykes-: sure11:42
zykes-which is a totally different filesystem again :p11:42
*** jeromatron has quit IRC11:42
halfssi am test glfs too11:42
Raziqueu right11:42
halfssglusterfs11:42
Raziqueso i'm now trying to respawn an instance11:43
Raziqueand update the changes regarding the network11:43
Raziquehehe I think I know what prevent migration https://bugs.launchpad.net/nova/+bug/74682111:46
zykes-:)11:46
zykes-i can't test anything that goes outside 1 box at the moment which is horribly boring11:47
RaziqueOpenStack Compute (nova) 2011.2 "cactus" ;.11:47
Razique:/11:47
Raziqueok now we definitely know the issue here : restarting instances somewhere else is linked to the network11:48
Raziquehttp://libvirt.org/firewall.html11:48
zykes-Razique: what network cards do you run ? like what speeds brand etc?11:51
Raziquefor which part ?11:52
Raziqueoh guys it's moving !11:54
Raziquethe missing files are these custom instance firewall rules11:54
Raziquelocated into /etc/libvirt/nwfilter/instance-* filter11:55
Raziqueif you sync also that dir11:55
Raziqueyou make sure the network filter per instance is also available11:55
alexn6btw, what do you think about the fact that kvm caches image files (in the _base dir) and a running instance only needs a very small "diff" of those images (in the instance-NN dir) - so would it be better to share all base images across all hosts in advance and, for live migration, only transfer those small diff files, for example rsynced? Is that an interesting feature to ask the developers for?11:56
*** praefect has joined #openstack11:58
RaziqueI did it o/12:01
RaziqueIT WORKED !12:02
*** vernhart has joined #openstack12:02
Raziquei've been able to down a node and make the same instance restart on the other node12:02
halfssRazique:how did you make it ?12:03
KiallRazique: i wonder if the path for /etc/libvirt/nwfilter/instance-* is customizable?12:03
Raziquehalfss: 1- sync of instances + nwfilter12:03
Raziquerestart libvirt12:04
Raziqueupdate the DB in order to update the instance's host12:04
Raziquethen launch nova reboot/ euca-reboot12:04
Raziquenow the iptables rules are recreated12:04
Raziquethen voila :)12:04
Raziquethe instance is reachable, etc...12:04
RaziqueKiall: leme check12:05
halfssRazique:the most important is update the DB,change the instance's host ? right?12:05
Raziquehalfss: as important as restarting libvirt in order to import the filters12:05
Raziqueand restarting the instance in order to recreate the network rules12:06
Razique"They are all stored in /etc/libvirt/nwfilter, but don't edit the files there directly. Use virsh nwfilter-define to update them. This ensures the guests have their iptables/ebtables rules recreated.12:06
Razique"12:06
Raziquein fact I bypass that =D12:06
foexleRazique: if i map a external ip address to a new instance i get this from kvm Connected to domain instance-0000001512:07
foexleEscape character is ^]12:07
foexleerror: internal error character device (null) is not using a PTY12:07
*** stevegjacobs has joined #openstack12:07
Raziquetime to lunch for me :p12:07
Raziquefoexle: let's see that later shall we :p12:07
halfssi should copy the instance's xml in /etc/libvirt/nwfilter from the down host to the good host, and then restart libvirt on the good host12:08
Raziqueafter lunch i'll quickly write a script and ask for u guys to kindly try it :)12:08
Raziquehalfss:12:08
Raziquenoe12:08
foexlesure :D good hunger :)12:08
halfssand then change db,reboot instance12:08
Raziquenope12:08
halfss?12:08
Raziquehalfss: make sure  /etc/libvirt/nwfilter and /var/lib/nova/instances are synched12:09
Raziqueon both hosts, these two dirs should have the same files12:09
Raziquegot it ?12:09
Raziqueinstance  : for instances files12:09
halfssoh  yes12:09
Raziquenwfilter : libvirt security rules12:09
Raziquethen restart libvirt12:09
Raziqueservice libvirt-bin restart12:09
halfssno i use glusterfs to store /var/lib/nova/instances12:10
Raziqueupdate the database for the instance, set the two fields 'host' and 'launched_on' to the dest node12:10
Raziquenode12:10
Raziquethen euca-reboot or nova-reboot12:10
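A rough sketch of the recovery steps Razique lists, assuming /var/lib/nova/instances and /etc/libvirt/nwfilter were already shared or kept in sync before the node died, a MySQL nova database, and hand-filled placeholders ($ID, $GOOD_NODE):

    # 1. both dirs must already hold the dead node's files on the surviving node
    #    (shared storage, or a sync job that ran while the node was still up)
    ls /var/lib/nova/instances/ /etc/libvirt/nwfilter/
    # 2. reload libvirt so it imports the nwfilter definitions
    service libvirt-bin restart
    # 3. point the instance at the surviving node
    mysql nova -e "UPDATE instances SET host='$GOOD_NODE', launched_on='$GOOD_NODE' WHERE id=$ID;"
    # 4. reboot it so the iptables/ebtables rules are recreated
    nova reboot $ID        # or: euca-reboot-instances i-xxxxxxxx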
Raziquehalfss: no use, if the NFS perf are ok for you12:11
*** vidd-away has quit IRC12:11
Raziquehalfss: we only see here that Gluster FS is a performant NFS, no more (in our context)12:11
halfssNFS's speed is not good12:11
Raziquehalfss: yah I know, but for validating here it was ok12:12
halfssyes,i know12:12
Raziqueok lunch :p12:12
RaziqueI love to talk here but that's the issue =d12:12
Raziquetimes goes12:12
halfssRazique:have you see this:https://github.com/Mirantis/openstack-utils/blob/master/nova-compute12:12
Raziquebe back in an hour guys ;)12:12
*** livemoon has joined #openstack12:12
Raziquehaha awesome thanks !12:13
halfssthis script can reboot the instance (shared store) from a bad host to another good host12:13
halfssbut it looks like it works on cactus, not diablo12:14
halfsscan you make it work on diablo?12:14
Raziquewill try to :p12:15
halfssok, if you get it done tell me, thanks12:15
halfssok?12:15
*** GeoDud has quit IRC12:16
*** rods has joined #openstack12:17
*** reidrac has quit IRC12:18
*** vidd has joined #openstack12:20
*** vidd has joined #openstack12:20
*** reidrac has joined #openstack12:21
tyska\join #ubuntu-server12:23
tyskawrong side =)12:23
livemoon.......12:24
*** livemoon has left #openstack12:24
*** livemoon has joined #openstack12:24
tyska /join instead \join12:24
tyska=)12:24
livemoon:)12:24
viddtyska, happens to everyone12:25
tyskavidd: =)12:25
livemoonvidd12:25
viddfirst time i did that i was in a busy channel and tried to ghost myself...there was my password in chat =]12:25
livemoontoday I install the lastest nova, glance and keystone. but it cannot be started12:26
tyskasomeone here already have to configure a server to use a authenticated proxy?12:26
*** mattstep has quit IRC12:27
viddlivemoon, you installed them over existing or fresh install?12:29
livemoonfresh12:29
livemoona clean 10.1012:29
viddif you installed them from git, you have to tell them to start12:30
livemoonnono12:30
viddthen what do the error logs say?12:31
livemoonI mean it cannot work fine12:32
livemoonIt can all be started12:32
livemoonbut nova and glance don't work with keystone12:32
livemoonerror show12:32
livemoonhttps://bugs.launchpad.net/keystone/+bug/88844812:34
livemoonvidd here12:34
foexlelivemoon: it works, but only with many bypasses make sure you use the newest nova-client12:36
*** GheRivero has quit IRC12:37
livemoonI use the newest nova-client12:37
livemoonwhen I use glance command ,the error also occur12:37
foexlewhich one ? :D12:38
livemoonwhat do you mean?12:38
viddlivemoon, keystone from apt or git?12:38
livemoonall of them from git12:38
foexlelivemoon: which error ? could you paste?12:39
viddand keystone is running?12:39
livemoonyes12:39
livemoonfoexle: https://bugs.launchpad.net/keystone/+bug/88844812:39
foexleah ok12:39
viddlivemoon, backup your "/usr/share/pyshared/keystone/middleware/auth_token.py" and replace it with this one: https://github.com/managedit/keystone/blob/master/keystone/middleware/auth_token.py12:46
*** bsza has joined #openstack12:46
viddmake sure you back up your existing....dont just replace it12:46
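A sketch of the swap vidd suggests, assuming the stable/diablo auth_token.py has been downloaded by hand from the linked repo (the .bak path and the services to restart are assumptions):

    cp /usr/share/pyshared/keystone/middleware/auth_token.py \
       /usr/share/pyshared/keystone/middleware/auth_token.py.bak
    # copy the downloaded file over the original, then restart whatever loads the middleware
    service glance-api restart
    service nova-api restart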
*** javiF has quit IRC12:50
*** dirkx_ has joined #openstack12:52
*** wulianmeng has quit IRC12:53
*** lorin1 has joined #openstack12:54
livemoontomorrow I will try12:54
*** nelson has joined #openstack12:54
livemoonthis is whose?12:54
viddlivemoon, that is Kiall's work12:56
Kiallhuh?12:57
viddi dont know how well it will mingle with new git stuff...that's why i say back up the old before dropping his in12:57
*** popux has joined #openstack12:57
viddKiall, your auth_token script for keystone12:57
Kiallthats just the stable/diablo version..12:58
*** Otter768 has quit IRC12:58
*** stevegjacobs has quit IRC12:58
*** Rajaram has quit IRC12:58
viddlivemoon, remember that the "master" branch of openstack is all experimental12:58
*** n81 has joined #openstack12:58
livemoonKiall, which stable version you fork?12:58
livemoonvidd , thanks12:59
Kiallmy fork is simply packaging stuff..12:59
Kialland my "master" branch is a openstack's "stable/diablo" branch + packaging stuff12:59
viddlivemoon, he has created a ppa that has keystone and dashboard working from apt-get12:59
livemoonwhat is packaging stuff?12:59
*** nerens has joined #openstack13:00
viddlivemoon, https://launchpad.net/~managedit/+archive/openstack/13:00
*** lts has joined #openstack13:00
livemoonreally? that's good work13:00
Kiallchanges for building .deb's13:00
livemoonI want tu use Kiall's ppa in my production13:00
viddlivemoon, Kiall's statue is being built as we speak =]13:00
Kialllol13:01
viddKiall, have you applied to the ubuntu repo's yet?13:01
*** kaigan has quit IRC13:02
tyskaRazique: are u there?13:02
KiallNo, I probably wont.. I can't get my head around bzr where they keep everything (kinda dont want to more like..)13:02
Kiall99% of the changes in my packages are literally just updating to the right versions of stuff13:03
viddKiall, then how about just applying for your packages =]13:03
*** lorin1 has quit IRC13:04
KiallIf they _wanted_ to have working packages, they could do it in a few hours at most.13:04
Kiallbut, I don't think they want to update them, due to policies around updated versions.. and the stable/diablo branches being a moving target rather than an actual point release13:05
viddi just wish that openstack would release a month before ubuntu freeze13:05
Kiallif OS released a 2011.3.1, I think ubuntu will update. But unless that happens, I don't think they will.13:06
Kiall(I could be completely wrong BTW)13:06
viddperhaps ill pop in on my old xubuntu friends and ask if someone can take your ppa up the food chain =]13:07
Kialllol..13:08
*** kaigan has joined #openstack13:08
Kiallvidd: or contact the maintainers first, rather then going above their heads ;)13:08
viddi was not aware OS had maintainers=]13:09
viddi thought THAT was the problem13:09
KiallThese guys AFAIK https://launchpad.net/~ubuntu-server-dev13:09
livemoonKiall vidd : Do you mean diablo in ubuntu will not update ?13:09
viddlivemoon, they havent updated the packages in a month...so it does not look promising13:10
Kialllivemoon: as far as I know, ubuntu policy prevents them updating until openstack makes a release.13:10
livemoonThis means we only wait for essex13:11
Kialllivemoon: nope, thats a major version change, that has to wait for the next release of ubuntu13:11
viddlivemoon, you can expect the same issue there13:11
viddOS will release after the ubuntu freeze13:12
Kiall(again - as far as I know, I'm only vaguely  familiar with ubuntu polices)13:12
viddso more out-of-sync issues13:12
livemoonbut ubuntu is just release 11.1013:13
livemoonI think it maybe cost some time to release next13:13
viddubuntu releases every 6 months13:14
viddbetween the 10th and the 25th of the month13:14
Kialllivemoon: yes, 12.04 is the next release.. and its an LTS release, so they will be very cautious about what they put in...13:14
livemoonessex maybe the same time release next year13:14
viddlivemoon, right...which means OS will miss the ubuntu freeze and we will most likely be stuck with the broken stuff we already have13:15
*** Rajaram has joined #openstack13:15
Kiallvidd: at least keystone+dash will be core essex projects, and will be available from the official openstack PPAs13:16
Kiall(ie https://launchpad.net/~openstack-release/+archive/2011.3 )13:16
*** emid has quit IRC13:17
livemoonbut now the keystone in ppa is older13:17
viddKiall, i dont see alot of backporting in this project =\13:17
Kiallvidd: they only just decided to do stable branches13:17
viddperhaps because of the ubuntu 11.10 snafu?13:18
KiallAFAIK, at the time diablo was released, there was no official plan for stable branches.. It came a little after...13:18
*** shang has joined #openstack13:18
viddKiall, on a different subject....13:19
viddim writing my script, and i am having visudo launch to fix the sudo rights issue with nova-volume....13:19
viddwhen the runner closes visudo, will a bash script continue?13:20
Kiallnova volume doesnt have issues with sudo?13:20
Kiallat least, not with my packages, you're using them right?13:20
viddKiall, yes13:20
Kiallwhat needs chaning?13:20
Kiallchanging*13:20
livemoonvidd13:20
Kiallas in, what are you changing with visudo.. I'll just update the packages with whatever command is needed13:21
livemoonI found nova-volume cannot remove lv sometime since of tgt13:21
viddKiall, http://docs.openstack.org/diablo/openstack-compute/admin/content/managing-volumes.html13:21
Kiallvidd: have you tested it yet? ;)13:21
*** bsza has quit IRC13:21
Kiallcat /etc/sudoers.d/nova_sudoers13:22
viddtested the script?13:22
*** Rajaram_ has joined #openstack13:22
KiallThe packages handle getting the sudo rights in place13:22
livemoonvidd, you use nova-volume from repo?13:23
viddyes...i'm using everything...from Kiall 's ppa13:23
zykes-vidd: aloha13:23
viddi havent gotten to dashboard yet...and i dont see nova-vncproxy in there13:24
livemoonkiall's ppa is stable?13:24
*** Rajaram has quit IRC13:24
livemoonI wil try it13:24
viddzykes-, how did you chat with OS go?13:24
Kiallvidd: Lots of the steps in the openstack docs can be skipped when using my packages (and ubuntus, at least for nova)13:24
*** Rajaram has joined #openstack13:24
livemoonKiall, do you have docs about your ppa?13:25
Kiallvidd: nova-vncproxy is included, by frankly, I havn't got it working -_-13:25
Kialllivemoon: https://github.com/managedit/openstack-setup13:25
viddzykes-, did you get an answer about DNSaaS?13:25
*** Rajaram_ has quit IRC13:26
zykes-vidd: chat with who ? no not yet :( sent a mail to the ML but noone answered13:26
livemoonkiall, it's your git?13:26
* vidd has found a need for it13:26
zykes-vidd: oh rly ?13:26
viddzykes-, yes....13:26
Kialllionel: yea13:26
Kialllivemoon: yes*13:26
livemoonok, fork you13:27
*** derjohn_mob has joined #openstack13:27
viddzykes-, i have existing servers that i want to convert into VM's....13:27
viddwhen there is need to load-balance, there may be a need to spawn additional instances...13:28
*** vernhart has quit IRC13:28
zykes-vidd: add stuff you mean to the etherpad in that case..13:28
viddif the load balancer detects the need, the load balancer launches the new instance and DNSaaS helps with locating it13:29
zykes-vidd: the problem is that DNS performance doesn't just help with just spawning new instances :p13:29
livemoonvidd: DNSaas ?13:29
livemoonteach me13:29
viddetherpad....github....launchpad....pretty soon, im going to have accounts on half the internet =\13:30
zykes-no need for a account vidd13:30
zykes-http://etherpad.openstack.org/HkEvt4crw913:30
RaziqueKiall: yah13:31
viddlivemoon, i do not know anything about DNSaaS ... that is zykes-  thing =]13:35
*** AlanClark has joined #openstack13:35
livemoonzykes: it needs a dns image first, doesn't it?13:36
Raziqueyah it's zykes- stuff :D13:36
*** anonymous_ has joined #openstack13:37
viddRazique, nice work on the migration page ... i see you looked into my review and made some updates =]13:37
Raziqueoh u have written to first one ? :D13:38
uvirtbotNew bug: #888546 in nova "Extended Status Admin API extension fails in multi-zone mode" [Undecided,New] https://launchpad.net/bugs/88854613:38
livemoonvidd, razique: you are both good boy13:38
*** sandywalsh_ has joined #openstack13:38
Raziquehehe so are you livemoon :)13:38
viddlivemoon, not me...im the evil twin13:38
livemoontwin?13:39
vidd=]13:39
Razique=D13:39
viddyes...im a twin...and the family joke is i can never say "i didnt do it, it was my evil twin because i AM the evil twin"13:40
zykes-i haven't bother to do anything with it cause noone's willing to talk about how to do it so :)13:40
anonymous_?13:40
Raziquezykes-: DNSaaS antonym13:40
Raziqueanonymous_:13:40
viddzykes-, if i knew how to do it, id help =]13:41
Raziqueis there a blueprint for that ?13:41
zykes- Razique nop13:41
zykes-just the etherpad at the moment13:41
anonymous_sorry about the ?; /help was ignored by this webclient.  Or maybe I was typing prolog :-)13:41
*** sandywalsh has quit IRC13:41
Raziquenp :)13:41
foexleRazique: i found the issue ;)13:42
Raziqueoh ? what was that ?13:42
*** PeteDaGuru has joined #openstack13:42
foexlemy mistake :) .... i forgot the keypair option !13:42
foexlenormally should start the instance without that13:42
*** hallyn has quit IRC13:42
anonymous_anyone have any comments about the solaris 11 announcement with "cloud support" and Zones.  My intuition is that zones are not high enough isolation compared to a real VM13:43
foexlebut seemingly not :)13:43
*** hallyn has joined #openstack13:45
*** sandywalsh_ has quit IRC13:45
zykes-anyone of you familiar with openvz ?13:45
*** sandywalsh_ has joined #openstack13:46
*** chemikadze has joined #openstack13:50
foexleif i try to delete a volume with nova-manage, it first writes every block with zeros using dd and then erases the lv ?13:50
*** zul has quit IRC13:50
foexlethats pretty i/o intensive13:50
Raziquefoexle: yah13:50
Raziqueit's a sec. measure13:51
foexlecan i disable that ?13:51
Kiallfoexle: i dont think so, not without changing code anyway13:51
foexlehmmm ok13:52
*** mnour has quit IRC13:53
*** zul has joined #openstack13:53
*** halfss has quit IRC13:53
*** halfss has joined #openstack13:54
livemoonhi.my all friends13:54
livemoonsee you tomorrow,bye, sleep13:55
Raziquebye livemoon ;)13:55
*** kbringard has joined #openstack13:55
zykes-and another thing is that i don't have any vms / hw to run on :(13:55
*** livemoon has left #openstack13:56
*** uksysadmin has joined #openstack13:56
*** tyska has quit IRC13:56
*** ldlework has joined #openstack13:57
foexleok next issue :D .... i try to resolve all issues alone, but this one is strange ... (nova.rpc): TRACE: TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType13:59
foexleafter i create a volume with euca tools14:00
Kiallfoexle: what version are you running?14:00
foexlethe volume are existing now14:00
foexleKiall: diable stable14:00
foexleo14:00
stevegjacobs_back from the data centre :-)14:00
Kiallfrom ubuntu repos, or?14:00
foexlebut if i describe volumes they are in error state14:00
Kiallstevegjacobs_: took your time ;)14:00
foexleKiall: yes14:00
KiallI had similar issues, if i remember right, the real exception is being covered up..14:01
stevegjacobs_took digital pix of what was on the screen - spent some time troubleshooting while out there14:01
Kiallstevegjacobs_: what was the panic message in the end?14:01
RaziqueI love to read the technical doc about cloud ; always the same words :D "Enterprises can scale capacity, performance, and availability on demand, with14:01
Raziqueno vendor lock-in, across on-premise, public cloud, and hybrid environments."14:01
praefecthi guys, good morning to everyone...14:02
*** msivanes has joined #openstack14:02
Raziquehi praefect :)14:02
praefectdo you get something via "nova zone-info"? I get a 40414:02
praefecthi Razique!14:02
Kiall(stevegjacobs_, I'm asking in case its the same issue as I had.. might not be limited to this particular HP server)14:03
*** nerens has quit IRC14:03
foexleKiall: so i cant find any other error in volume log14:05
stevegjacobs_Kiall: I don't know how to interpret - I'll get the picture up somewhere you can have a look14:05
*** ldlework has quit IRC14:05
Kiallfoexle: yea, it was a PITA to debug last time I had one of those14:06
*** stevegjacobs has joined #openstack14:07
*** misheska has joined #openstack14:08
*** nerens has joined #openstack14:08
foexleKiall: so i cant use euca tools to create /delete volumes ? .... ApiError: Volume status must be available14:09
foexleups14:10
foexlewant to paste this one https://bugs.launchpad.net/nova/+bug/71684714:10
foexleits my issue14:10
sandywalsh_praefect, you need --allow_admin_api defined in nova.conf for zone-info to work14:10
*** kaigan has quit IRC14:12
*** javiF has joined #openstack14:12
*** bsza has joined #openstack14:12
*** mdomsch has joined #openstack14:13
viddKiall, i dont see how your scripts get the env14:13
Kiall"env" ?14:13
Kiallyou mean like NOVA_BLA etc?14:13
viddyes14:13
Kiallthey source the settings file..14:13
praefectthanks sandywalsh_ it works14:13
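A minimal sketch of that fix, assuming the flag-file style /etc/nova/nova.conf used in this era:
    echo "--allow_admin_api=true" | sudo tee -a /etc/nova/nova.conf
    sudo service nova-api restart
    nova zone-info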
RaziqueI just installed gluster fs for openstack14:13
Raziquelet's try it with HA :D14:14
*** stevegjacobs has quit IRC14:14
*** dprince has joined #openstack14:17
*** zul has quit IRC14:17
*** jeromatron has joined #openstack14:18
*** zul has joined #openstack14:19
*** chuck__ has joined #openstack14:20
*** chuck__ is now known as zul14:20
*** ldlework has joined #openstack14:20
*** bcwaldon has joined #openstack14:22
*** localhost has quit IRC14:23
*** localhost has joined #openstack14:25
*** stuntmachine has joined #openstack14:26
*** halfss has quit IRC14:26
Raziquethat glusterfs is quite neat to administrate :)14:26
Raziquelove the cli14:26
*** lborda has joined #openstack14:30
*** stuntmachine has quit IRC14:31
n81is there anybody here who knows their iSCSI stuff?14:32
*** mcclurmc has quit IRC14:32
Raziqueyh14:32
Raziqueask14:32
*** mcclurmc has joined #openstack14:32
RaziqueI'm no expert but just ask :)14:33
n81Raz: haha ok…I got my first OS cloud setup and after a few source code tweaks got it running. I originally installed using packages from Ubuntu so found out those were a bit out-dated. So instead switched over to Kiall's managed PPA packages.  And the cloud works great, except I'm getting an error attaching volumes.14:34
viddn81, do you have --iscsi_ip_prefix= in your novaconf?14:35
BasTichelaarsandywalsh_: got zones working with LXC14:35
n81vidd: here's the strange part. I got iscsi and volumes to work before14:35
n81vidd: and I'm using the exact same configuration file on a fresh clean Ubuntu 11.10 install using Kiall's PPA packages14:36
sandywalsh_BasTichelaar, nice!14:36
n81let me put the error in pastebin14:36
BasTichelaarsandywalsh_: there was indeed a bug with libvirt: https://bugs.launchpad.net/nova/+bug/88780514:36
n81vidd/raz: the error is with the iscsiadm command…not with OS. If I run the iscsiadm command OS is trying to run on my compute node, I get the same error as OS, but I'm at a loss for why the iscsiadm command is now timing out…I think it could be a firewall issue, maybe?14:37
BasTichelaarsandywalsh_: only issue is that libvirt doesnt provide vcpus_used for LXC, will file a bug report for that14:37
*** stuntmachine has joined #openstack14:38
*** livemoon has joined #openstack14:38
livemoonhi, is anyone here?14:38
n81vidd/raz: http://paste.openstack.org/show/3241/14:39
Raziqueyh14:39
livemoonvidd razique?14:39
viddn81 there does need to be a firewall path open if a remote machine is trying to access them14:39
anonymous_yes14:39
livemoonhttps://bugs.launchpad.net/bugs/88844814:39
livemoonlook14:39
livemoonthis bug, can anyone tell me what means14:39
n81vidd/raz: I keep getting this failed to receive a PDU back14:39
livemoonit is reported by me and now someone reply me14:39
*** Tsel has quit IRC14:40
Raziquen81: is that cloudcntlr host resolvable?14:40
foexleRazique: i get this error to14:40
Raziquelivemoon: have u added glance -A ?14:40
n81raz: at first it wasn't =P….but it is now…I just needed to make sure all my machines were on the same sub-domain...14:40
foexleRazique: compute node try to do iscsi stuff14:41
livemoonnot only glance, if I use novaclient, this error also occur14:41
*** jeromatron has quit IRC14:41
Raziquen81: telnet 192.1.253.194 326014:41
Raziquefrom the node14:41
n81raz: Trying 192.1.253.194...14:42
n81Connected to 192.1.253.194.14:42
n81Escape character is '^]'.14:42
n81Connection closed by foreign host.14:42
*** snet has joined #openstack14:42
viddlivemoon, you need to have the uers = {...} match  auth['user'] = {...}14:42
livemoonok14:43
sandywalsh_BasTichelaar, yeah, vcpu support is scanty at best generally14:43
livemoonthanks, I decided to learn English well14:43
viddchange one or the other14:43
viddlivemoon, or go with Kiall 's repos =]14:44
livemoonno. now I am at home mid-night14:44
*** dongxu has joined #openstack14:44
viddhehe14:44
livemoonI will go to office tomorrow to try14:44
viddlivemoon, you cant reach office from home?14:45
livemoonnight is the time chat with you14:45
livemoonvpn is broken today14:45
*** dongxu has left #openstack14:46
livemoonvidd:14:47
livemoonvidd: you are twins? you have brother?14:48
*** lborda has quit IRC14:50
Raziquehehe14:51
BasTichelaarsandywalsh_: how does the distributedscheduler by default decide where to run an instance?14:51
viddlivemoon, yes14:51
RaziqueBasTichelaar: funny u ask14:51
viddhe's a microsoft junkie...and they call ME the evil twin =]14:51
BasTichelaarRazique: why?14:51
livemoonI have a twin name "deadsun"14:52
n81vidd/raz: should my telnet connection be terminated immediately into the iscsitarget service?14:52
*** joesavak has joined #openstack14:52
livemoonBasTichelaar: I am looking at this http://nova.openstack.org/devref/distributed_scheduler.html?highlight=zones14:52
*** jsavak has joined #openstack14:53
BasTichelaarlivemoon: yes, me too14:53
viddn81, i have no idea...like Razique  im no expert =]14:53
foexlewhich iscsi packages needs the compute node ? the volume-manager runs on a other node14:53
sandywalsh_BasTichelaar, right now it filters on available ram and disk14:53
sandywalsh_BasTichelaar, ram is usually the gating metric14:53
RaziqueBasTichelaar: because http://www.mail-archive.com/openstack@lists.launchpad.net/msg05317.html14:54
Razique:p14:54
foexlen81: i think the package iscsitarget are not installed on the comppute node .... but dont know i'm testing14:54
Raziquen81: not neceserally14:54
BasTichelaarsandywalsh_: ahh ok, thought it did something with CPU as well14:54
foexlen81: have the same issue14:54
BasTichelaarRazique: hot topic :)14:54
RaziqueBasTichelaar: yup14:54
sandywalsh_BasTichelaar, you can if you define extra parameters on the instance type14:54
Raziquen81: run that from the node iscsiadm -m discovery -t st -p 192.1.253.19414:55
sandywalsh_look at nova/scheduler/filters/instance_type_filter.py14:55
BasTichelaarsandywalsh_: thanks!14:55
Raziqueiscsiadm -m session -t st -p 192.1.253.19414:55
sandywalsh_np14:55
livemoonand iscsiadm -m node14:55
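Consolidated, the checks being suggested, run from the compute node against the volume host in the paste (sendtargets is what -t st abbreviates):
    sudo iscsiadm -m discovery -t sendtargets -p 192.1.253.194:3260
    sudo iscsiadm -m node
    sudo iscsiadm -m session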
BasTichelaarRazique: what is the output of nova zone-info for each of your zones?14:55
*** dongxu has joined #openstack14:56
BasTichelaarRazique: hmm, I see your nodes are in the same zone :)14:56
Razique BasTichelaar yah the default one - nova14:56
*** dongxu has left #openstack14:56
RaziqueI've never played with zones so far14:56
BasTichelaarRazique: ok, so my question is a little bit different14:57
*** joesavak has quit IRC14:57
BasTichelaar:)14:57
n81livemoon: thor@cloudnc1:~$ sudo iscsiadm -m session14:57
n81iscsiadm: No active sessions.14:57
Raziquen81: and mine ? :p14:57
livemoonfirst you should use iscsiadm -m discovery -t st -p IP14:57
RaziqueBasTichelaar: oh u were asking accross zones ?14:57
*** dongxu has joined #openstack14:57
livemoonthen use iscsiadm -m node14:57
BasTichelaarRazique: yes, I'm setting up two separated zones with shared-nothing14:58
BasTichelaarRazique: using trunk and the distributedscheduler14:58
Raziqueoh ok, sorry ^^14:58
BasTichelaarRazique: apart from a few bugs it seems to work out quite ok14:58
BasTichelaarRazique: np :)14:58
Raziquegood to know then :)14:58
BasTichelaarRazique: its only the lack of documentation and the chaos of different schedulers that makes it difficult14:59
*** robbiew has joined #openstack14:59
BasTichelaarRazique: maybe I should write some blog post about it :)14:59
Raziquethat would be nice from you14:59
*** deshantm_laptop has joined #openstack14:59
Raziqueand we could ; if you want; update the doc accordingly14:59
Raziqueexit14:59
Raziqueopps15:00
n81livemoon: same error, failed to receive a PDU15:00
BasTichelaarRazique: yes, would be a good idea, there is a lot of legacy stuff in the current diablo docs15:00
Raziquen81: what target discovery gives ?15:00
RaziqueBasTichelaar: fantastic15:00
*** bsza has quit IRC15:01
n81raz/livemoon: here's my iscsiadm discovery on level 8 verbose mode: =P http://paste.openstack.org/show/3242/15:01
*** katkee has quit IRC15:02
*** lborda has joined #openstack15:02
*** rnorwood has joined #openstack15:02
Raziquenothing here mmmm15:03
Raziqueappart that PDU15:03
n81what the hell is that PDU =P15:03
n81I mean…my understanding is my node is making a valid discovery request and something on the cloud controller is not responding15:04
n81either is getting blocked via firewall or is erroring or is not even accepting the incoming request15:04
n81b/c the log shows a valid connection15:04
n81on port 326015:04
livemoonlook at your 192.1.253.19415:04
livemoondoes service worked well in it?15:05
Raziquen81: is nova-volume running without error ?15:05
Raziquerestart nova-volume and dump up the debug mode15:05
*** lorin1 has joined #openstack15:05
livemoonI think it is not nova-volume problem15:06
livemoonjust tgt and openiscsi15:06
Raziquedepends, since nova-volume takes care of the iscsi part15:07
RaziqueI see what you mean15:08
Raziquebut if during the setup something went fubar, nova-volume.log should show it I hope15:08
*** Rajaram has quit IRC15:08
livemoonok15:08
BasTichelaarsandywalsh_: the distributedscheduler checks the zone info every 120 seconds15:10
*** livemoon has left #openstack15:10
BasTichelaarsandywalsh_: so when I fire up 10 instances at once, they will definitely get to the same zone, correct?15:10
sandywalsh_BasTichelaar, no, the distributed_scheduler._schedule() method will ask each child zone for a build plan. It will decide from there where the instance should go.15:11
sandywalsh_BasTichelaar, the polling of the child zones is only to check online and general capabilities15:11
n81Raz/livemoon: I restarted nova-volume…not seeing any errors or traces in the debug log15:12
sandywalsh_BasTichelaar, decision making is done at the time of request15:12
n81Running cmd (subprocess): sudo ietadm --op new --tid=1 --params Name=iqn.2010-10.org.openstack:volume-0000000115:13
n81Running cmd (subprocess): sudo ietadm --op new --tid=1 --lun=0 --params Path=/dev/nova-volumes/volume-00000001,Type=fileio15:13
BasTichelaarsandywalsh_: ok, clear15:13
n81those are the commands I'm seeing in my nova-volume.log on startup….seems to be 'mounting' the iscsi volume ok15:13
*** jollyfoo has joined #openstack15:13
Raziquen81 ok15:13
n81raz/livemoon: do you know if iscsiadm keeps a log somewhere?15:13
snetin swift, do you have to be an admin user to peform a HEAD (aka stat) on an account ?15:14
*** neogenix has quit IRC15:14
*** dongxu1 has joined #openstack15:14
Raziquenow restart open-iscsi on the node; iscsitarget on the server 115:14
*** dongxu has quit IRC15:14
Raziqueand then restart nova-volume15:14
Raziquethen again run the discovery15:14
*** mgoldmann has quit IRC15:14
Razique-m discovery -t st -p15:14
*** jfluhmann has joined #openstack15:15
n81Raz: Will nova-volume start up iscsi-target automatically?15:15
*** dtroyer has joined #openstack15:15
Raziquen81: don't think so15:16
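Spelled out, the restart-and-rediscover sequence Razique suggests (Ubuntu service names; nova-volume will not start iscsitarget for you, per the answer above):
    sudo service iscsitarget restart     # on the volume host
    sudo service nova-volume restart     # on the volume host
    sudo service open-iscsi restart      # on the compute node
    sudo iscsiadm -m discovery -t sendtargets -p 192.1.253.194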
Raziquelook guys http://pacemaker-cloud.org/15:17
*** Rajaram has joined #openstack15:17
BasTichelaarsandywalsh_: can I force an instance to get build in a specified zone?15:19
*** nphase has joined #openstack15:20
*** stevegjacobs has joined #openstack15:21
*** imsplitbit has joined #openstack15:23
n81raz: ok…so here's some more info15:24
n81raz: I still can't get it to work even restarting, but I installed iscsiadm on my cloud controller15:25
n81raz: when I try to run this command: sudo iscsi_discovery 192.1.253.19415:25
Raziquen81: why is that ?15:25
n81raz: I get the same error messages about failed to receive a PDU15:25
n81raz: but when I run: iscsi_discovery 127.0.0.115:25
n81raz: on the node controller…I get this:15:25
n81raz: discovered 1 targets at 127.0.0.115:26
n81raz: wait nevermind…I didn't let the 192 finish15:27
n81raz: in the end it finds a target too15:27
n81raz: discovered 1 targets at 192.1.253.19415:27
foexlen81: autostart can you configure in /etc/iscsi/iscsi.conf15:29
n81foexle: you mean set it to create session automatically?15:30
foexlesry no not a session i mean spawn the luns on server startup15:30
foexleso i think i found my problem .... the firewall on the compute node drops the iscsi connection15:32
*** marrusl has joined #openstack15:33
n81foexle: hmm…I'm thinking that's my problem too…everything seems to be working, but traffic is not reaching node15:33
n81foexle: how did you troubleshoot? did you just drop your firewalls on your comput node?15:34
sandywalsh_BasTichelaar, not currently. That would need a special host filter I suspect15:34
foexlen81: yeah i can discovery on the node where the iscsi service runs, but not from the compute node15:34
foexlei don't have troubleshoot atm i'm searching :)15:35
BasTichelaarsandywalsh_: ok, and do the availability zones work together with the zones?15:35
BasTichelaarsandywalsh_: so I create an availability zone inside a zone, and specify that as parameter?15:36
*** stevegjacobs has quit IRC15:36
n81foexle: do you have: --iscsi_helper=tgtadm15:37
n81 in your nova.conf?15:37
foexleno15:37
n81foexle: see I do…I wonder if that's messing something up15:38
*** hugokuo has joined #openstack15:38
foexleok its not the firewall15:39
foexlePORT     STATE SERVICE15:39
foexle3260/tcp open  iscsi15:39
foexleMAC Address: 00:30:48:66:18:6F (Supermicro Computer15:39
*** marrusl has quit IRC15:40
n81I had iscsi working before…then did a clean re-install with new packages and now no luck15:41
*** joesavak has joined #openstack15:41
*** blamar has joined #openstack15:42
*** jsavak has quit IRC15:42
Kialln81: `cat /etc/default/iscsitarget` true or false?15:46
foexleKiall: without true the service won't to start15:46
*** jeremy has joined #openstack15:46
n81Kiall: thanks…unfortunately, it's true =(15:47
Kiallopen-iscsi will, iscsitarget wont.. just a quick check ;)15:47
*** tyska has joined #openstack15:47
*** uksysadmin has quit IRC15:48
Kialland `iscsiadm -m session` on the compute node?15:48
n81thor@cloudnc1:~$ sudo iscsiadm -m session15:48
n81iscsiadm: No active sessions.15:48
foexleroot@test3-os:~# iscsiadm -m session15:48
foexleiscsiadm: No active sessions.15:48
foexle:D15:48
viddKiall, do euca commands work for your ppa?15:48
*** mies has quit IRC15:48
Kiallvidd: yes, you need to get the right env vars tho15:48
Kialland i believe there is a bug in euca-tools preventing image uploads from working in combo with keystone15:49
sandywalsh_BasTichelaar, there's not specific support for availability zones within zones (that is, no tests for that combination). They're unfortunately just similarly named.15:49
*** dolphm has joined #openstack15:50
viddKiall, im almost done with my scripting =] keept forgetting my source file needs "export"15:50
Kiall;)15:50
sandywalsh_jaypipes, will making mysql HA affect its row-locking ability?15:51
*** nerens has quit IRC15:53
*** obino has quit IRC15:53
*** Rajaram has quit IRC15:53
*** rnirmal has joined #openstack15:53
n81foexle: ok, you're right…definitely not firewall. I've cleared/shutdown firewalls on both machines and I get the same PDU error15:54
n81so it must be something with the iscsitarget service on the cloud controller15:54
foexleyap15:54
*** nerens has joined #openstack15:54
tyskaRazique: are u there?15:56
*** TheOsprey has quit IRC15:56
*** misheska has quit IRC15:56
foexlei see the targets from the local machine15:56
foexleand there only 4 targets °°15:56
*** rsampaio has joined #openstack15:57
*** dgags has joined #openstack15:57
*** hezekiah_ has joined #openstack15:58
n81so on the machine running iscsitarget…you can run the same command and you get 4 targets?15:58
*** obino has joined #openstack15:58
*** adjohn has joined #openstack15:59
Raziquetyska: yah15:59
Raziquefinishing the HA script15:59
tyskaRazique: did you found something?16:00
RaziqueI asked you on a pm :p16:00
*** andy-hk has joined #openstack16:01
*** uksysadmin has joined #openstack16:01
*** mies has joined #openstack16:02
*** andy-hk has quit IRC16:03
*** code_franco has joined #openstack16:04
*** dragondm has joined #openstack16:04
*** andy-hk has joined #openstack16:04
*** andy-hk has quit IRC16:05
*** kieron has joined #openstack16:08
*** mmetheny has quit IRC16:09
*** mmetheny_ has joined #openstack16:09
*** reidrac has left #openstack16:09
*** reidrac has quit IRC16:09
uksysadminI've a question on authentication16:10
*** oubiwann1 has quit IRC16:10
uksysadminI'm not using keystone... but I have my access and secret keys...16:10
uksysadminis the secret key used?16:10
uksysadminI can launch instances with random strings in my EC2_SECRET_KEY16:10
*** Shentonfreude has joined #openstack16:11
hezekiah_if you are using nova, then you can. it doesn't use the secret key ( I believe )16:13
viddit just needs something there =]16:13
*** swill has joined #openstack16:14
uksysadmindon't know whether to laugh or cry16:14
Raziqueok guys the hand-made HA script works16:15
Razique:D16:15
Raziquenode with running instances crashes16:15
uksysadminI've just checked the docs and it says to use no auth you set it in api-paste.ini, and it does have ec2noauth in the pipelines16:15
Raziquethe script now does only two things: update the database16:15
Raziqueand reboot the instance16:15
Raziqueinstance is now up on the other node o/16:15
kieronhas anyone seen16:16
swilli am trying to build a swift authentication middleware to authenticate against a cloudstack installation.  is the only way to build it using tokens?16:16
*** obino has quit IRC16:16
*** javiF has quit IRC16:16
kieron(oops) has anyone seen "you are not authorized to access /syspanel/" when trying to log in to dashboard.  Can't figure out what I've missed.16:16
swillright now i can not figure out a way to actually authenticate a cloudstack user with only a token.16:17
swillcause a token is not enough for me to be able to connect to the cloudstack api and verify.16:17
notmynameswill: have you read the swift docs on writing your own auth middleware?16:17
swilli did16:17
swillmore than once16:18
notmyname:-)16:18
swill:)16:18
*** sannes has quit IRC16:18
swilli am assuming these are the only available references?  http://swift.openstack.org/development_auth.html  and http://swift.openstack.org/overview_auth.html16:18
*** cp16net has joined #openstack16:19
*** neogenix has joined #openstack16:19
notmynameswill: you should be able to implement whatever kind of auth you want. let me look at it a little more before I say something that isnt' true16:19
*** krow has joined #openstack16:20
swillthat was my assumption as well.  for some reason i am having trouble getting my head around how.16:20
*** andy-hk has joined #openstack16:23
notmynameswill: there may be some implicit assumptions in swift about using a token. however, I think you should be able to use whatever you want. so, for example, your middleware's __call__() method should be able to check what you need and set up the authorize() callback. your authorize() method gets the request and can then look at anything you want16:23
notmynameswill: perhaps another reference would be looking at the included swift3 middleware. it implements the S3 request signing for auth16:23
*** andy-hk has quit IRC16:24
swillnotmyname: i will take a look at that now.  thank you for your input.16:24
chmouel_I found swauth code pretty good to look at if you want to implement you own auth middleware16:25
*** Shentonfreude has quit IRC16:25
notmynameswill: and you can make things a little simpler perhaps if you don't write your auth middleware to work with other auth middleware's that may be running. that's inadvisable, but it really depends on your use case. all the auth middlewares (eg tempauth and swauth) assume that they may be running along side other auth middlewares16:25
swillnotmyname: right.16:26
*** chmouel_ is now known as chmouel16:27
*** bsza has joined #openstack16:27
*** vladimir3p has joined #openstack16:28
*** gyee has joined #openstack16:29
swillchmouel: thanks, i will look at that one as well.16:30
*** dolphm has quit IRC16:30
swillthanks for the help.  i am sure i will figure something out looking at these two references.16:30
notmynamecool. I hope it helps16:30
*** dolphm has joined #openstack16:30
swillwhen i have something working, i will share it with you guys.16:31
foexlen81: solved16:31
*** marrusl has joined #openstack16:31
*** hezekiah_ has quit IRC16:32
*** dolphm_ has joined #openstack16:35
*** dolphm has quit IRC16:35
*** derjohn_mob has quit IRC16:35
jaypipessandywalsh_: no, it will not affect row-locking ability at all. That's dependent on the underlying storage engine. If you are using InnoDB, you have row-level locking in almost all situations except where in situations where InnoDB can predict that a data-modification query would affect a large percentage of rows in a table, in which case it might modify the lock to be on a page (or in extreme cases, the table)16:35
n81foexle: oh yeah? how so?16:37
sandywalsh_jaypipes, cool ... I may have some sqlalchemy Q's for you later :)16:37
jaypipessandywalsh_: I'll try my best :)16:37
*** mattstep has joined #openstack16:37
foexlen81: restart /etc/init.d/open-iscsi on the compute node16:38
foexleresolved my problem16:38
*** sandywalsh has joined #openstack16:39
*** vdo has quit IRC16:40
*** cp16net has quit IRC16:41
*** cp16net has joined #openstack16:41
*** tdi has joined #openstack16:41
*** krow has quit IRC16:41
tdihello16:42
swillchmouel: where do i find the swauth middleware code to reference?  i am assuming it is not part of swift by default: https://github.com/openstack/swift16:44
*** anonymous_ is now known as avian16:44
*** avian is now known as rfc114916:45
*** rfc1149 is now known as rfc254916:45
swillchmouel: this?  https://github.com/gholt/swauth/blob/master/swauth/middleware.py16:45
chmouelyep16:45
swillty.  :)16:45
rfc2549we switched from eucalyptus to openstack because we kept on having instances that were DOA, either never getting to "running" or never getting assigned a public IP.  Did any of you see that on Eucalyptus?  Is it much better on openstack?16:47
*** krow has joined #openstack16:48
*** clauden_ has quit IRC16:48
*** dobber has quit IRC16:48
*** joesavak has quit IRC16:48
*** clauden_ has joined #openstack16:48
*** uksysadmin has quit IRC16:48
*** marrusl has quit IRC16:49
n81rfc: we experienced the same issue. We were getting an 8-15% DOA rate16:50
uvirtbotNew bug: #888621 in nova "exception for decalre consumer in the case of socket error" [Undecided,New] https://launchpad.net/bugs/88862116:51
*** dongxu1 has quit IRC16:51
*** dongxu has joined #openstack16:53
foexleRazique: ok volumes are running now :), but one question. i find in every instance /dev/vdb mounted on /mnt, what's that? it's not an attached volume16:53
rfc2549thanks n81. Is it all better for you now on openstack?16:54
n81we haven't done as much extensive testing with openstack but in our limited uses to date we've seen better reliabilty16:55
*** dongxu has quit IRC16:55
*** popux has quit IRC16:55
*** jog0 has joined #openstack16:56
*** jog0 has quit IRC16:57
*** jog0 has joined #openstack16:57
*** reed_ has joined #openstack16:57
*** negronjl has joined #openstack16:57
rfc2549 the netflix tech blog says that they also have DOA and other badness on AWS.  anyone have experience on other clouds and seeing DOAs?16:57
tdican somebody explain to me please, how is the versioning done in openstack? for example in ubuntu 11.10 i've got 2011.3, is it diablo?16:57
*** reed_ is now known as reed16:58
viddtdi yes, 2011.3 is diablo16:58
tdividd: thanks, is the documentation for it up to date ?16:59
*** exprexxo has joined #openstack16:59
viddtdi, it all depends on what documentation you are looking at16:59
tdividd: just want to install it :)16:59
viddtdi, there's more to it than "just installing" it17:00
viddonce installed, it needs to be properly configured17:00
tdividd: ofc, you are right, configuration and running it is what I meant17:01
*** hezekiah_ has joined #openstack17:01
viddand the stock ubuntu keystone and dashboard will not work properly with stock nova and glance17:01
*** jeromatron has joined #openstack17:02
tdividd: ok, do you know whether the official doc is the proper one to get started?17:02
viddtdi, what parts are you trying to use?17:02
Kiallif anyone else was using my packages, and is having issues with volumes and "Login I/O error, failed to receive a PDU" .. We think (n81 and I) have sorted it.. updated packages soon ;)17:02
Kiallwe think we have sorted it*17:03
tdividd: maybe I just say what setup i want: got 7 machines connected to fast iSCSI storage, I want to give users possibility to manage their own machines and create new17:03
tdividd: users, as in employees of the university, not outside users17:03
tdividd: so i think this would be nova compute, volume and storage ?17:04
viddtdi, are you putting keystone and dashboard in or not?17:04
tdividd: yes, I would like to give them GUI17:04
*** dolphm_ has quit IRC17:04
*** Hakon|mbp has quit IRC17:05
viddthen the docs are good for nova parts17:05
tdigreat17:05
*** dolphm has joined #openstack17:05
viddkeystone and dashboard not so much17:05
tdividd: do you then know a doc for keystone and dashboard?17:05
Kiallthe stock ubuntu packages for keystone and dashboard are broken (really broken)17:06
*** marrusl has joined #openstack17:06
tdiKiall: is there a ppa ?17:06
viddtdi, Kiall has a ppa that works with all parts (have not tested swift) and an installer script to walk you thru it17:06
Kiallyea ;)17:06
KiallPPA : https://launchpad.net/~managedit/+archive/openstack17:07
tdinice,17:07
KiallAnd some bare minimum setup scripts @ https://github.com/managedit/openstack-setup17:07
KiallCloser to bash docs that scripts, but they give you (almost) all the steps..17:08
Kiall(tell me what I forgot ;))17:08
tdiKiall: thanks, should I apt-get remove --purge all nova things before I begin?17:08
KiallI would `dpkg -l | grep -E (nova|glance|swift|keystone)` and purge all those..17:09
tdioki thanks17:09
tdiso I start ;)17:09
Kiallthen rm -rf /etc/(nova|glance|swift|keystone) and /var/lib/(nova|glance|swift|keystone) .. since some stuff seems to stay even with dpkg -P17:09
tdiYes, last time also /etc/sudoers.d/nova cut me off17:10
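A literal version of that clean-out (destructive, so back up the /etc directories first; the brace expansion is a concrete stand-in for the shorthand above):
    dpkg -l | grep -E "(nova|glance|swift|keystone)" | awk '{print $2}' | xargs sudo dpkg -P
    sudo rm -rf /etc/{nova,glance,swift,keystone} /var/lib/{nova,glance,swift,keystone}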
Kiallbut - I'd give it a few mins before installing, I'm just sorting a packaging bug with n81 at the moment...17:10
*** dolphm has quit IRC17:10
tdiKiall: ill wait, ill be the tester17:10
*** stevegjacobs_ has quit IRC17:11
*** maplebed has quit IRC17:11
*** lelin has quit IRC17:12
*** dtroyer has quit IRC17:12
*** dprince has quit IRC17:14
Kialltdi: new packages uploading, launchpad will take 20 mins or so to build them..17:15
KiallAll the packages, bar nova-volume (which you can leave till the end) are fine..17:15
tdiKiall: oki17:15
Kiallall the current packages*17:15
*** foexle has quit IRC17:15
hugokuogood night17:16
*** hugokuo has left #openstack17:16
tdiKiall: so I can just use your shell scripts for the installation ?17:16
tdithey will suck in launchpad packages?17:16
Kiallyea, watch out for nova.sh installing the broken nova-volume package..17:16
Kialland, install the stuff from the readme first17:17
tdiok, when will nova-volume be fixed?17:17
KiallThe scripts setup an all in 1 server, probably a smart move until it all works, then add more servers with specific roles after17:17
Kiall20 mins, whenever this finishes building+publishing: https://launchpad.net/~managedit/+archive/openstack/+build/291622717:18
*** tyska has quit IRC17:18
tdiKiall: ok, sorry thought the volume is still broken, despite the lanuchpad update17:19
KiallAh no, the current packages bar nova-volume are fine...17:19
Kialland a fixed nova-volume is on its way up..17:19
*** deshantm_laptop has quit IRC17:24
*** Rajaram has joined #openstack17:27
*** TheOsprey has joined #openstack17:28
*** wawa has joined #openstack17:30
wawaii17:31
*** nacx has quit IRC17:31
*** obino has joined #openstack17:32
*** bsza has quit IRC17:34
*** bsza has joined #openstack17:34
rfc2549trying again: we switched from eucalyptus to openstack because we kept on having instances that were DOA, either never getting to "running" or never getting assigned a public IP.  Did any of you see that on Eucalyptus?  Is it much better on openstack?17:35
Kiallrfc2549: all the time.. and I've had the same conversation with someone else (cant remember who)17:36
KiallOnce everything is setup right, I've not seen any DOA's with OS17:36
*** tyska has joined #openstack17:37
rfc2549Kiall: thanks.17:38
rfc2549the netflix tech blog says that they also have DOA and other badness on AWS.  anyone have experience on other clouds and seeing DOAs? [repeat]17:38
tyskaRazique: are u still there?17:38
tdiKiall: you do not have swift scripts ?17:38
Raziqueyup on ur servers17:38
tyskaRazique: =)17:38
Kialltdi: no, I've no use for swift17:38
tyskaRazique: did you found something?17:38
tyskadid you find* (sry for my language mistakes) =)17:39
*** wawa has quit IRC17:39
tdiKiall: when I go in 10k machines, ill need it :)17:39
*** negronjl has quit IRC17:39
Raziquetyska: i'm looking :)17:39
Kiallyea - for that, you might ;)17:40
*** jakedahn has quit IRC17:40
Raziquei'll let u know when I figure, but don't worry, we definitely will :)17:40
tyskaRazique: did you receive my msg that says i think that ext3 message is not the problem?17:40
Raziqueyah u were right :)17:40
tyskait appears too on the instance i can reach17:40
*** devcamcar has joined #openstack17:42
Raziquetyska: I think I found17:43
Raziquelet's see:)17:43
* tyska is praying17:43
tyska=)17:43
*** nati2 has joined #openstack17:47
*** po has joined #openstack17:48
*** jaypipes has quit IRC17:49
tdiKiall: one more question about the networks in openstack, ive got bridge with 10.50.0.0/16 network, where I want machines to be stored, this is FlatManager yes?17:50
Kiallyea, flat for flat DHCP..17:50
KiallVLAN will work aswell..17:50
uvirtbotNew bug: #888649 in nova "Snapshots left in undeletable state" [Undecided,New] https://launchpad.net/bugs/88864917:50
Kiallwhere 10.50.0.0/16 = the public IPs and some other range is the internal range17:51
*** maplebed has joined #openstack17:51
tdiKiall: I do not use any public ips17:51
Kialltdi: those packages are built+up17:51
*** dprince has joined #openstack17:51
tdigot internal network, both nova nodes and virtuals need to be in it17:52
Kiallnovaa "public ips" dont have to be internet routable...17:52
tdiKiall: yes, I already installed it, now working on network17:52
Kiallnova's*17:52
KiallIts probably better to reserve the LAN accessible range as floating ips17:52
Kiallotherwise you have no choice over server IPs17:52
Kiallservers + DHCP is always fun...17:52
KiallProbably worth a read: http://docs.openstack.org/diablo/openstack-compute/admin/content/networking-options.html17:54
Kiallesp the fixed vs floating IP part17:54
tdiyes I am reading it now17:54
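A hedged sketch of that fixed/floating split, treating a slice of tdi's 10.50.0.0/16 LAN range as floating IPs and an example private range as fixed (flag spellings vary between releases):
    sudo nova-manage network create private 10.0.0.0/24 1 256
    sudo nova-manage floating create --ip_range=10.50.0.0/24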
*** obino has quit IRC17:56
tyskaRazique: ?17:56
tyskasomeone here already tried to use windows with openstack?17:57
*** snet has quit IRC17:58
Raziquetyska: yup17:58
*** bcwaldon_ has joined #openstack17:58
Raziqueworks pretty well :)17:58
*** bcwaldon has quit IRC17:58
*** jfluhmann has quit IRC17:58
tyskaim with problems to create the image =(17:59
tyskamore specifically with virtio17:59
*** jdurgin has joined #openstack17:59
tyskafirst i tried using this: http://docs.openstack.org/cactus/openstack-compute/admin/content/creating-a-windows-image.html17:59
*** exprexxo has quit IRC17:59
Raziquearf I should update that doc18:00
RaziqueI had to find drivers I dunno where18:00
Raziqueand make some extra stuff in order to install the rights virtio drivers18:00
Raziqueanother thing todo18:00
*** aliguori has quit IRC18:00
tyskaafter i run that command to create the image18:01
tyskanothing happens18:01
tyskaand shell still freeze18:01
tyskaand cant even cancel with CTRL + C18:01
*** alexn6 has left #openstack18:01
tyskanow im trying to create using this http://blogs.poolsidemenace.com/2011/06/16/porting-windows-to-openstack/18:02
tyskabut with no success too =/18:02
tyskamy question to god is: why everything needs to be so hard??? =)18:02
*** bcwaldon_ has quit IRC18:02
*** blamar has quit IRC18:02
*** blamar has joined #openstack18:02
Raziquemy other question is : why are we still trying to make it work18:03
*** joesavak has joined #openstack18:03
*** bcwaldon has joined #openstack18:03
*** pixelbeat has quit IRC18:03
tyskahahaha18:04
*** Ryan_Lane has joined #openstack18:04
tyskabecause if it works well, it will bring a lot of benefits to us18:04
tyskathat was easy18:04
tyska=)18:04
viddKiall, what does "sed -e "s,999888777666,$SERVICE_TOKEN,g" local_settings.py.tmpl > local_settings.py" do?18:05
Raziquetyska: yah ;)18:05
Raziquetyska Openstack makes me having nightmares18:05
Kiallvidd: replaces the default service token with a real one18:06
vidddoes this make the change in local_settings.py.tmpl and then cp the whole local_settings.py.tmpl to local_settings.py?18:06
tyskaRazique: but at least the basics of your architecture are working, what do you say about my case?18:06
Kiallvidd: yea..18:07
tyskaRazique: days and days working to solve the problems, just to see the basic working18:07
Kiallwait no18:07
Kiallit never changes .tmpl18:07
viddok, so it copies tmpl to the real file and changes the real file18:07
Raziquetyska: I say it's the fact that u use Diablo + multi nic :)18:07
Raziquebut I already had this issue on my pre-prod18:08
Raziquethat has the same settiings18:08
*** jsavak has joined #openstack18:08
viddKiall, can ya tell im new to scripting ?=]18:08
*** krow has quit IRC18:08
Kiallvidd .. kinda .. sed spits the updated version to STDOUT, the  > redirects it into a new file18:09
viddi have a grasp on sed18:09
viddbut this was the first i looked at  the piping18:09
*** lorin11 has joined #openstack18:10
*** lorin11 has quit IRC18:10
viddtook me a bit to understand the difference between sed -i {data} file and sed -e {data} -i file18:10
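To illustrate the two invocations being compared (the token value is a placeholder):
    SERVICE_TOKEN=abc123token
    sed -e "s,999888777666,$SERVICE_TOKEN,g" local_settings.py.tmpl > local_settings.py   # .tmpl stays untouched
    sed -i "s,999888777666,$SERVICE_TOKEN,g" local_settings.py                             # edits the file in place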
*** obino has joined #openstack18:11
*** joesavak has quit IRC18:12
*** lorin1 has quit IRC18:12
*** jaypipes has joined #openstack18:13
*** superjudge has quit IRC18:17
Kiallah fair enough :)18:18
*** shang has quit IRC18:21
*** webx has joined #openstack18:23
*** med_out has quit IRC18:23
*** llang629 has joined #openstack18:24
*** llang629 has left #openstack18:24
*** redconnection has quit IRC18:24
*** aliguori has joined #openstack18:26
*** mszilagyi has joined #openstack18:27
*** dtroyer has joined #openstack18:29
*** tyska has quit IRC18:29
*** Rajaram has quit IRC18:30
*** magg has joined #openstack18:30
*** jakedahn has joined #openstack18:34
*** guigui1 has quit IRC18:35
*** obino has quit IRC18:35
*** tyska has joined #openstack18:37
tyskaRazique: nothing?18:37
*** zaitcev has joined #openstack18:42
maggkiall18:43
maggu there18:43
*** djw_ has joined #openstack18:43
*** djw_ is now known as rfc2549_18:44
rfc2549q18:44
*** rfc2549 has quit IRC18:44
maggvidd?18:44
maggu there18:45
*** rfc2549_ is now known as rfc254918:45
viddyes magg18:45
maggso i installed kiall packages18:45
maggeverything is working18:46
Kiallmagg: glad to hear :)18:46
viddnice18:46
maggi try to create an instance in the dashboard18:46
maggit says build18:46
maggand never becomes active18:46
Kiallmagg: check your nova-compute and nova-network logs..18:47
Kiallprobably nova-network from experience...18:47
maggoh18:47
*** oubiwann has joined #openstack18:47
maggso i have a question, when using keystone i no longer have to create a user in nova18:47
Kiallexactly, ignore nova's users and projects18:48
maggohh18:48
maggi no longer need the creds?18:48
*** krow has joined #openstack18:50
*** jollyfoo has quit IRC18:50
maggwell compute says18:51
*** jollyfoo has joined #openstack18:51
*** jollyfoo has quit IRC18:51
maggtable nova.intances doesnt exists18:51
Kiallthat might be a problem ;)18:51
*** nycko has quit IRC18:51
Kiallyou probably didnt run nova-manage db-sync18:51
*** jollyfoo has joined #openstack18:51
Kiallor18:51
Kiallhavent restarted all the nova services after you updated the config18:52
*** jakedahn has quit IRC18:53
*** nati2 has quit IRC18:53
*** bsza has quit IRC18:53
*** mies has quit IRC18:54
*** nati2 has joined #openstack18:55
*** jsavak has quit IRC18:55
*** mies has joined #openstack18:57
*** reed has quit IRC18:58
magghow do i check all nova services are ok without euca-descre18:58
*** rsampaio has quit IRC19:01
sorenmagg: Why?19:04
*** tyska has quit IRC19:04
*** jsavak has joined #openstack19:05
Kiallmagg: you can use "nova-manage service list"19:06
*** Razique has quit IRC19:06
*** nitram_macair has quit IRC19:06
*** lorin1 has joined #openstack19:06
*** Razique has joined #openstack19:07
*** Light has joined #openstack19:08
*** Light is now known as Guest5841719:08
*** mgius has joined #openstack19:09
*** reed has joined #openstack19:10
*** bsza has joined #openstack19:10
*** dtroyer has quit IRC19:10
*** dtroyer has joined #openstack19:11
*** magg has quit IRC19:11
*** Guest58417 has quit IRC19:14
*** daMaestro has joined #openstack19:14
*** jakedahn has joined #openstack19:14
daMaestroAnyone here from grid dynamics?19:14
*** magg has joined #openstack19:14
maggyo19:15
daMaestroYou need to publish your src.rpm in your repo, please. (Yes, I'm aware everything is on https://github.com/griddynamics/openstack-rhel)19:15
maggnova-manage db sync gets me command failed19:15
Kiallmagg: with what error?19:16
daMaestroI'm working on merging in spec stuff into the Fedora build system... and I find it odd you don't have a SRPM tree.19:16
*** rsampaio has joined #openstack19:16
maggcommand failed, please check log for more info19:16
maggwhich log should i check19:16
*** imsplitbit has quit IRC19:18
*** TheOsprey has quit IRC19:18
Kiall/var/log/nova/nova-manage.log19:18
webxI was reading a press release from SDSC about their new cluster (https://cloud.sdsc.edu/hp/docs/SDSC_Cloud_Press_Release.pdf) and noticed this quote19:19
webx"The HTTP-based SDSC Cloud supports the19:19
webxRackSpace Swift and Amazon S3 APIs and is accessible from any web browser, clients19:19
webxfor Windows, OSX, UNIX, and mobile devices."19:19
webxby default, does openstack support the s3 api and tools like s3cmd, etc ?19:20
webxs/openstack/openstack swift/19:20
*** negronjl has joined #openstack19:22
*** stevegjacobs has joined #openstack19:23
*** bsza has quit IRC19:24
*** sloop has joined #openstack19:24
*** bsza has joined #openstack19:24
*** BasTichelaar has quit IRC19:25
*** mcclurmc has quit IRC19:27
*** mcclurmc has joined #openstack19:28
*** BasTichelaar has joined #openstack19:28
daMaestrowebx, there is a pipeline you have to add for the compatibility layer... but basically yes19:28
daMaestrohttp://docs.openstack.org/trunk/openstack-object-storage/admin/content/configuring-openstack-object-storage-with-s3_api.html19:31
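A hedged excerpt of the proxy-server.conf change that doc describes (the swift3 filter as shipped with diablo-era swift; the surrounding pipeline entries are an example):
    [pipeline:main]
    pipeline = healthcheck cache swift3 tempauth proxy-server

    [filter:swift3]
    use = egg:swift#swift3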
*** gyee has quit IRC19:31
*** bsza has quit IRC19:31
*** Razique has quit IRC19:31
*** redconnection has joined #openstack19:33
*** shang has joined #openstack19:33
*** bsza has joined #openstack19:34
*** gyee has joined #openstack19:34
*** jakedahn has quit IRC19:34
*** Nadeem has joined #openstack19:34
Nadeemguys i installed openstack via devstack nova.sh script19:35
*** dirkx_ has joined #openstack19:35
Nadeemhowever on reboot i couldnt login anymore on http://localhost19:35
Nadeemkeystone wasnt runing anymore on localhost:5000/2.019:35
Nadeemany pointers how to start this keystone service manually?19:36
sloopumm.. use the cloud?19:37
*** dnjaramba has joined #openstack19:38
uvirtbotNew bug: #888685 in glance "Stacktrace from cache_image_iter" [Undecided,New] https://launchpad.net/bugs/88868519:38
*** dnjaramba_ has quit IRC19:38
*** binbash_ has quit IRC19:39
webxdaMaestro: thanks for that link19:40
webxdaMaestro: can we still use the swift cli binary with s3 api enabled?19:41
*** nitram_macair has joined #openstack19:41
*** egant has quit IRC19:42
*** bsza has quit IRC19:44
*** adjohn has quit IRC19:45
maggnop i still cant get the instance to say active19:45
*** mszilagyi_ has joined #openstack19:47
*** mszilagyi has quit IRC19:48
*** mszilagyi_ is now known as mszilagyi19:48
*** krow has quit IRC19:48
maggcompute and network cant find a table19:48
maggnova.network and nova.instances19:49
*** redconnection has quit IRC19:49
magghelp19:49
*** rfc2549 has quit IRC19:50
*** binbash_ has joined #openstack19:52
maggkiall19:57
magghelp19:57
*** dtroyer has quit IRC19:57
*** redconnection has joined #openstack19:57
*** dprince has quit IRC19:58
Kiallmagg: check the logs ;)19:58
Kialland, are they connecting to the right DB19:59
maggcompute and network?19:59
Kiallyea19:59
*** nacx has joined #openstack19:59
*** lorin1 has left #openstack20:00
*** dirkx_ has quit IRC20:00
*** lorin1 has joined #openstack20:01
uvirtbotNew bug: #888711 in glance "assertGreaterEqual not in Python 2.6" [Undecided,New] https://launchpad.net/bugs/88871120:01
magghttp://pastebin.com/QYXHNuuJ20:04
*** Nadeem has quit IRC20:05
*** redconnection has quit IRC20:05
Kiallmagg: you need to disable DNSmasq .. edit /etc/default/dnsmasq20:05
magghttp://pastebin.com/2Rs3d44T20:05
*** catintheroof has joined #openstack20:05
Kiallcc/ tdi ... you probably should edit that aswell20:05
*** catintheroof has quit IRC20:06
Kiallmagg: then, killall dnsmasq and restart nova-network+compute for the first one...20:06
Kiallsame again for the second it seems20:06
maggwait what do i edit in dnsmaq?20:06
*** n0ano has quit IRC20:07
maggcc/tdi?20:07
Kiallchange the enabled setting to ENABLED=020:07
Kiallas in cc tdi the person in the channel ;)20:07
maggLOL20:07
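The dnsmasq fix Kiall describes, spelled out for the host running nova-network (the sed pattern is an assumption about how the ENABLED line is written):
    sudo sed -i 's/^ENABLED=.*/ENABLED=0/' /etc/default/dnsmasq
    sudo killall dnsmasq
    sudo service nova-network restart
    sudo service nova-compute restart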
*** n0ano has joined #openstack20:08
*** dtroyer has joined #openstack20:09
*** dolphm has joined #openstack20:09
maggalright i have now an IP for my instance20:09
*** dtroyer has quit IRC20:13
*** dtroyer has joined #openstack20:14
*** imsplitbit has joined #openstack20:15
uvirtbotNew bug: #888719 in nova "openvswitch-nova runs after firstboot scripts" [Undecided,In progress] https://launchpad.net/bugs/88871920:16
maggbut it doesnt say Active20:16
magg:(20:16
*** bcwaldon has quit IRC20:17
*** dolphm has quit IRC20:17
*** dolphm has joined #openstack20:18
*** darraghb has quit IRC20:18
*** dtroyer has quit IRC20:21
maggok it worked20:21
maggnow i cant connec to the vnc console20:22
*** dtroyer has joined #openstack20:22
*** tdi has quit IRC20:22
*** dolphm_ has joined #openstack20:23
*** dolphm has quit IRC20:23
maggdo i need to install noVNC?20:24
*** webx has quit IRC20:24
*** webx has joined #openstack20:25
*** jsavak has quit IRC20:28
*** ahasenack has quit IRC20:28
*** nati2 has quit IRC20:28
*** ahasenack has joined #openstack20:29
*** paltman has quit IRC20:29
*** joesavak has joined #openstack20:29
*** duffman has quit IRC20:30
*** duffman has joined #openstack20:30
*** dgags has quit IRC20:32
*** dgags has joined #openstack20:32
*** jeromatron has quit IRC20:33
*** GheRivero has joined #openstack20:34
*** paltman has joined #openstack20:34
*** jeblair has quit IRC20:37
uvirtbotNew bug: #888730 in nova "vmwareapi suds debug logging very verbose" [Undecided,In progress] https://launchpad.net/bugs/88873020:38
*** dpippenger has joined #openstack20:39
*** nerens has quit IRC20:40
*** jeblair has joined #openstack20:42
*** PeteDaGuru has quit IRC20:43
*** PeteDaGuru has joined #openstack20:45
*** lborda has quit IRC20:46
*** apevec has quit IRC20:47
*** dtroyer has quit IRC20:47
*** dtroyer has joined #openstack20:49
*** nacx has quit IRC20:52
*** statik has quit IRC20:52
daMaestrowebx, i don't know20:56
daMaestrowebx, i will be finding out shortly i think ....20:56
*** mnour has joined #openstack20:56
*** GheRivero has quit IRC20:57
daMaestrowebx, what you *can* do is have multiple proxy pools ... one with the s3 rest api and one without20:57
daMaestrowebx, more then likely that is how it's supposed to be done20:57
*** marrusl has quit IRC21:00
*** marrusl has joined #openstack21:00
*** stevegjacobs_ has joined #openstack21:00
*** marrusl has quit IRC21:00
webxdaMaestro: ah, that makes sense.21:04
webxdaMaestro: do you happen to know how to point s3cmd to a swift installation?21:05
*** PotHix has quit IRC21:06
*** Hakon|mbp has joined #openstack21:08
*** magg has quit IRC21:12
uvirtbotNew bug: #888753 in glance "Glance configs should use new Keystone auth_port" [Undecided,New] https://launchpad.net/bugs/88875321:15
uvirtbotNew bug: #888755 in nova "stale external locks causing deadlock" [Undecided,New] https://launchpad.net/bugs/88875521:15
daMaestrowebx, and i just confirmed it does not work (swift client) when the filter is installed21:19
daMaestroso just run multiple proxy servers21:19
daMaestrowebx, just like you would to amazon21:20
*** mattstep has quit IRC21:21
zykes-Hakon|mbp: you a norwegian openstack user ?21:21
webxdaMaestro: interesting.  for us, we'd probably prefer to run everything in s3 compatability if possible.  I'll probably have one proxy that's non-s3 though, just in case.21:22
*** msivanes has quit IRC21:22
guaquayou need proxy redundancy, so 2 + 2 at minimum21:23
*** shang has quit IRC21:23
guaquathat's what i'm thinking21:23
webxyea, we'll have much more than 2 running in s3 compat, but just the one pair in 'native' mode.. provided we can get s3cmd to work with swift.21:24
zykes-anyone read http://www.slideshare.net/oldbam/security-issues-in-openstack ?21:24
*** joesavak has quit IRC21:26
gnu111quick swift question. I am currently using /dev/sda3 which is mounted to /srv/node/sda3. I have a new disk and partition /dev/sdb1. I want to add that as a device. Should it be /srv/node/sdb1 ?21:28
*** dirkx_ has joined #openstack21:28
guaquadoes it really matter where they are mounted?21:30
guaqua(i actually don't know and would like to know)21:30
gnu111guaqua: not sure. I am trying to figure out if I can mount /srv/node/sda3 with /dev/sda3 and /srv/node/sdb1 with /dev/sdb1. I am not sure if this will properly mount.21:31
*** joesavak has joined #openstack21:32
*** mattstep has joined #openstack21:32
*** jakedahn has joined #openstack21:33
*** shang has joined #openstack21:36
*** dirkx_ has quit IRC21:39
gnu111guaqua: it seemed to work. I think /srv/node is not manged by anything...that's the part I was confused about.21:39
*** dolphm_ has quit IRC21:39
*** lorin1 has quit IRC21:41
guaquamy main question is, what is the ring device name really?21:41
guaquais it handled by the server and queried from the mount point?21:42
guaquaor is it something else21:42
guaquabecause it looks a whole lot like a mount point and it isn't really defined anywhere on the storage nodes as such21:43
viddzykes-, read that article21:43
*** mrevell has joined #openstack21:44
gnu111guaqua: When I added this new device. it said this: Device z1-192.168.0.12:6002/sdb1 I also have another device Device z1-192.168.0.12:6002/sda3 they are both in the same storage node.21:46
*** krow has joined #openstack21:46
gnu111I think the way to identify is d0z1 that means device id zero in zone one.21:46
guaquathe port is the same, is that correct?21:47
*** dolphm has joined #openstack21:47
guaquagnu111: that can't be the same21:48
gnu111guaqua: Yes. same port but different disks.21:48
guaquahmm21:48
guaquaoh, so is that definition basically just a definition for rsync path?21:48
guaquanow i'm getting it...21:49
guaquathis is simpler than i thought...21:49
gnu111I think so....21:49
gnu111it seemed to add the device and rebalance fine for me here. I didn't see any errors..so far.21:49
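A hedged sketch of the add-a-device steps being discussed (weight 100 is an example; 6000/6001/6002 are the usual object/container/account ports):
    sudo mkfs.xfs -i size=1024 /dev/sdb1
    sudo mkdir -p /srv/node/sdb1
    sudo mount /dev/sdb1 /srv/node/sdb1
    swift-ring-builder object.builder add z1-192.168.0.12:6000/sdb1 100
    swift-ring-builder container.builder add z1-192.168.0.12:6001/sdb1 100
    swift-ring-builder account.builder add z1-192.168.0.12:6002/sdb1 100
    swift-ring-builder object.builder rebalance   # repeat for container.builder and account.builder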
guaquaoh well. better it's simpler, not more complicated :)21:49
*** MarcMorata has joined #openstack21:50
guaquabut i'm off to bed now! good stuff!21:50
gnu111guaqua: good night!21:50
*** magg has joined #openstack21:51
*** lvaughn has quit IRC21:51
gnu111guaqua: I see some rsync erros. so need to look at it carefully...21:51
*** rods has quit IRC21:51
*** lvaughn has joined #openstack21:51
*** joesavak has quit IRC21:52
*** miclorb_ has joined #openstack21:52
webxanyone happen to know if the patch and configuration file described here will work with swift?  http://open.eucalyptus.com/wiki/s3cmd21:52
*** lvaughn has quit IRC21:52
*** lvaughn has joined #openstack21:53
*** arBmind has joined #openstack21:53
*** nati2 has joined #openstack21:54
gnu111it is trying to write in /sdb1 instead of /srv/node/sdb1. I added it to an existing zone which was in sda3. maybe this needs to be in a new zone.21:54
*** negronjl has quit IRC21:56
*** AlanClark has quit IRC21:59
*** neogenix has quit IRC22:02
*** lvaughn has quit IRC22:02
*** lvaughn has joined #openstack22:02
*** praefect has quit IRC22:02
*** dolphm has quit IRC22:02
*** stuntmachine has quit IRC22:03
*** dolphm has joined #openstack22:03
*** lvaughn has quit IRC22:04
*** lvaughn_ has joined #openstack22:04
*** rods has joined #openstack22:05
*** dolphm has quit IRC22:07
*** jeromatron has joined #openstack22:08
*** marrusl has joined #openstack22:08
*** irctc193 has joined #openstack22:12
magghey kiall u there?22:13
maggim trying to add second node22:13
maggbut compute log says this host is not allowed to connect to mysql server22:14
magganybody?22:14
magghelp plz22:14
webxhttp://paste.openstack.org/show/3251/22:15
webxthe only 'real' buckets are "bbartlett", "myfiles", and "builders"22:15
webxany idea what that other stuff is ?22:15
viddmagg, im here22:15
viddyou have mysqlclient installed on the remote machine?22:16
*** lvaughn_ has quit IRC22:16
*** lvaughn has joined #openstack22:16
*** neogenix has joined #openstack22:16
irctc193I'm new here.  migrating from bexar to diablo.  I am trying to understand how to bundle image for glance.  We were using euca-bundle which creates a manifest file and multi-part files.  But how to bundle into a single image from a running instance for glance?22:17
maggvidd, dont think so22:17
*** lvaughn has quit IRC22:17
maggdo i need it?22:17
*** lvaughn has joined #openstack22:17
viddmagg, yes...that way your remote host has something to carry the mysql stuff to the mysqlserver =]22:18
viddirctc193, this is for catus to diablo...hope it helps http://docs.openstack.org/diablo/openstack-compute/admin/content/migrating-from-cactus-to-diablo.html22:19
*** negronjl has joined #openstack22:19
*** apevec has joined #openstack22:20
maggoks thanks i will install it22:20
*** dgags has quit IRC22:22
*** Rajaram has joined #openstack22:22
*** Rajaram has quit IRC22:23
*** tdi has joined #openstack22:23
irctc193Thnx for the link vidd.  But I don't see a way to bundle images in that doc22:25
*** df1 has joined #openstack22:26
*** bcwaldon has joined #openstack22:26
irctc193I mean a way to bundle images for glance22:27
irctc193from a running instance22:28
*** sandywalsh_ has quit IRC22:28
viddah...sorry...have not learned that yet =]22:28
viddhave you tried to snapshot it?22:28
irctc193snapshot will have everything including any sensitive data22:29
irctc193I want to be able to bundle it someway22:29
irctc193so that I can make it available for other22:30
irctc193s22:30
uvirtbotNew bug: #888784 in devstack "devstack need dnsmasq-utils which is not available on natty" [Undecided,New] https://launchpad.net/bugs/88878422:31
viddirctc193, so...you want to take the pieces you used to make the running instance and bundle them?22:31
*** ldlework has quit IRC22:31
viddor you want the actual running parts?22:31
irctc193yes22:32
irctc193for example, I have a base ubuntu oneiric instance running22:32
irctc193I  have installed some packages to it22:33
irctc193Now, I want to be able to bundle it and make it public22:33
viddthen you take a snapshot and upload it22:33
irctc193But I have some sensitive data in the instance22:34
irctc193that I don't want to share22:34
maggvidd, i installed mysql-client and i still get the error http://pastebin.com/SSBW4PVP22:34
irctc193In euca-bundle, you can exclude some directories and bundle it22:35
*** jj0hns0n has joined #openstack22:35
*** neogenix has quit IRC22:35
viddmagg, check the mysql tag in your cloudHQ2 nova.conf file and make sure it matches....22:35
viddalso, on your controller, run "sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf ; service mysql restart"22:36
irctc193vidd, am I hearing that in Diablo, we can create image from .iso, .vdk etc... or snapshot are the options?22:37
*** bcwaldon has quit IRC22:37
viddyour msql server may be set to only listen to requests from within22:37
viddirctc193, yes22:37
irctc193k, Thnx22:37
viddirctc193, but i have not had much experience with glance22:38
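A hedged sketch of the diablo-era glance upload once you have a single image file (the name, formats, and filename are examples):
    glance add name="oneiric-custom" is_public=true disk_format=qcow2 container_format=ovf < oneiric-custom.qcow2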
*** kieron has quit IRC22:38
maggvidd: i have the same tag22:38
*** robbiew has quit IRC22:39
*** negronjl has quit IRC22:39
viddrun "sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf ; service mysql restart" on your controller magg22:39
*** jog0 has quit IRC22:39
maggvidd: already did22:39
*** irctc193 has left #openstack22:40
viddrestart nova-* on controller?22:40
*** irctc963 has joined #openstack22:41
uvirtbotNew bug: #850644 in quantum "Quantum needs proper packaging" [High,Fix committed] https://launchpad.net/bugs/85064422:41
maggi did that alo22:41
maggalso**22:41
viddmagg, on your cloudHQ2 what is the URL of the mysql?22:42
viddirctc963, sorry i cant be more helpful =\22:42
*** aliguori has quit IRC22:42
magg--sql_connection=mysql://root:123456@10.10.10.2/nova22:42
viddand your controller ip is 10.10.10.2?22:43
maggyep22:43
viddis the port open on the controller?22:43
*** jog0 has joined #openstack22:43
viddmagg, test that the controller is accepting traffic on port 3306 ...22:44
viddmagg, from another machine on the local network run "telnet 10.10.10.2 3306"22:45
uvirtbotNew bug: #888790 in quantum "Query extensions supported by plugin" [Medium,New] https://launchpad.net/bugs/88879022:45
maggTrying 10.10.10.2...22:46
maggConnected to 10.10.10.2.22:46
maggEscape character is '^]'.22:46
maggAHost 'cloudHQ2' is not allowed to connect to this MySQL serverConnection closed by foreign host.22:46
magguser@cloudHQ2:~$22:46
*** irctc963 has quit IRC22:46
maggi think that's a no22:46
*** hezekiah_ is now known as isaacfinnegan22:46
viddmagg, the port is open22:47
maggoh22:47
maggbut i cant connect22:47
viddmagg, if the port was not open, you never would have gotten the "not allowed" message....you just would have timed out22:47
*** GeoDud has joined #openstack22:47
maggohh22:48
maggso?22:48
viddmagg, so the issue is that your mysql on the controller is not taking requests22:48
maggbasically22:48
magghow do i fix it22:49
viddthis is the reason i set each of my databases up with thier own usernames =]22:49
*** mrevell has quit IRC22:50
*** mrevell has joined #openstack22:50
*** jorgew has joined #openstack22:50
maggOOOHH22:50
viddby default, the user "root" is only allowed to connect from "localhost" "127.0.0.1" and your server by hostname22:50
viddmagg, MAJOR security flaw to have root being allowed in from anyone22:51
maggyeah22:51
maggi get it22:51
vidd*anywhere22:52
viddalso, when i set up my database users, i only give them god rights to their own stuff....they need to keep their paws off other ppl's stuff =]22:53
viddeach app has its own database, its own username, its own password =]22:54
viddand never the twain shall meet22:55
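A minimal sketch of the per-service database user vidd describes (database, user, and password are examples):
    mysql -u root -p <<'EOF'
    CREATE DATABASE IF NOT EXISTS nova;
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novapass';
    FLUSH PRIVILEGES;
    EOF
The --sql_connection in nova.conf would then point at mysql://nova:novapass@10.10.10.2/nova instead of the root account.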
*** isaacfinnegan has left #openstack22:55
*** negronjl has joined #openstack22:55
*** jakedahn has quit IRC22:55
*** jakedahn has joined #openstack22:56
*** mrevell has quit IRC22:58
maggyeah i got it fixed22:59
maggthanks vidd22:59
viddno problem22:59
*** Vek has quit IRC22:59
viddnow if only i could get my dashboard to talk to nova22:59
viddis Kiall in the house?23:00
viddi must have missed a spot in my script =\23:00
*** gnu111 has quit IRC23:00
viddmagg, you used Kiall 's ppa's right?23:00
*** rsampaio has quit IRC23:01
*** mdomsch has quit IRC23:01
*** rnirmal has quit IRC23:01
*** arBmind has quit IRC23:02
*** jorgew has left #openstack23:03
*** magg has quit IRC23:04
*** mdomsch has joined #openstack23:07
*** kbringard has left #openstack23:08
*** apevec has quit IRC23:08
*** mgius has quit IRC23:09
*** jakedahn has quit IRC23:10
uvirtbotNew bug: #888802 in glance "glance-prefetcher requires authorization to run" [Critical,In progress] https://launchpad.net/bugs/88880223:10
*** Teknix has joined #openstack23:12
uvirtbotNew bug: #888795 in quantum "Condense source tree directories" [Low,Confirmed] https://launchpad.net/bugs/88879523:13
*** lts has quit IRC23:14
*** code_franco has quit IRC23:17
*** apevec has joined #openstack23:21
*** webx has quit IRC23:22
*** webx has joined #openstack23:22
*** mnour has quit IRC23:25
stevegjacobs_Something seems to be wrong on one of my compute nodes23:25
tdistevegjacobs_: if you have only one node, then you are in a very dark place23:27
stevegjacobs_only one vm is picking up a fixed ip23:27
viddstevegjacobs are you using --auto-assign23:27
viddfixed ip...nvmd23:28
viddstevegjacobs what does compute error log say23:28
stevegjacobs_vidd should that be a flag in the nova-conf?23:28
viddstevegjacobs i was thinking floating....the question is irrelevant23:29
stevegjacobs_2011-11-10 23:30:03,087 INFO nova.compute.manager [-] Updating host status23:30
stevegjacobs_2011-11-10 23:30:04,792 INFO nova.compute.manager [-] Found 3 in the database and 1 on the hypervisor.23:30
stevegjacobs_I launched a number instances using dashboard23:31
stevegjacobs_new ones, after I got dashboard working two days ago23:31
stevegjacobs_and those that were assigned to this particular node don't seem to have got their networking set up correctly23:33
stevegjacobs_need to do a bit more digging but I think I am seeing the correct number of instances (files) in /var/lib/nova/instances23:34
viddstevegjacobs you need to pastebin that stuff...it s hard to read here23:35
viddstevegjacobs have you restarted compute on that node?23:36
uvirtbotNew bug: #888809 in devstack "screen not working for me" [Undecided,New] https://launchpad.net/bugs/88880923:36
stevegjacobs_not just now, but not very long ago23:36
stevegjacobs_this node had kernel panic earlier today too23:36
stevegjacobs_but one instance is still running on it23:37
viddstevegjacobs how much ram does that machine have?23:37
viddwait...did the instances work befor the kernel panic?23:38
stevegjacobs_32G23:38
stevegjacobs_one did for sure - the one that is still running23:38
stevegjacobs_maybe not the others because I only launched them yesterday evening and hadn't done anything with them yet23:39
viddstevegjacobs have you tried rebooting those instances? the issue may be with the instances and not the node =]23:39
stevegjacobs_ok - worth a try :-)23:39
viddand do you have nova-network running on all machines?23:40
stevegjacobs_vidd: I rebooted one and it's working! you are a genius23:42
stevegjacobs_yes nova network on all23:42
tdiis there some proper way to attach iscsi to openstack ?23:42
viddstevegjacobs nah...just throwing stuff against the wall to see what sticks =]23:43
tdior can i just add luns to the nova-volumes group and im done?23:43
*** rods has quit IRC23:44
stevegjacobs_vidd: well thanks anyway!23:46
*** BasTichelaar has quit IRC23:46
uvirtbotNew bug: #888811 in quantum "Brokenness in ubuntu oneiric" [High,New] https://launchpad.net/bugs/88881123:46
uvirtbotNew bug: #888813 in horizon "Duplicate dependencies/Dependency management problems" [Undecided,New] https://launchpad.net/bugs/88881323:50
*** imsplitbit has quit IRC23:53
*** rods has joined #openstack23:56

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!