Tuesday, 2011-11-08

00:24 <vidd> i seem to be having issues working with a volume added to a vm
00:34 <stevegjacobs> vidd: dashboard issues after those updates: 'Settings' object has no attribute 'SWIFT_ENABLED'
00:35 <vidd> swift_enabled = False
00:42 <stevegjacobs> I added that line in but still have the django error - do I have to do something to get it to re-read the settings file?
00:44 <vidd> stevegjacobs: yes - service apache2 restart
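The exchange above hinges on two details worth spelling out: Django settings are ordinary case-sensitive module attributes, so the lowercase `swift_enabled` pasted earlier would not satisfy a lookup of `SWIFT_ENABLED`; and mod_wsgi caches the settings module in the running process, which is why apache has to be restarted. A minimal sketch of the lookup behaviour (the `_Settings` stand-in is illustrative, not the dashboard's actual settings object):

```python
# The attribute name comes from the error message above; Django
# settings are plain module attributes and lookups are case-sensitive.
SWIFT_ENABLED = False

class _Settings:          # tiny stand-in for django.conf.settings
    pass

settings = _Settings()
settings.swift_enabled = False            # what the lowercase line produces
print(getattr(settings, "SWIFT_ENABLED", "missing"))   # -> missing
settings.SWIFT_ENABLED = SWIFT_ENABLED    # the upper-case fix
print(getattr(settings, "SWIFT_ENABLED", "missing"))   # -> False
```

Even with the file fixed, the change only takes effect after the apache/mod_wsgi process is restarted, since the cached settings module is only re-imported then.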
00:46 <bwong> wow these instructions are pretty smooth
00:46 <vidd> bwong, told ya... doesn't get much easier than that =]
00:47 <vidd> i'm telling ya... just a touch more work and Kiall will have himself an apt-based devstack =]
00:48 <bwong> that would be awesome.
00:48 <bwong> still configuring, hopefully I don't run into any problems. if I do, I'll let him know.
01:00 *** vidd is now known as vidd-away
01:06 <stevegjacobs> I lost my connection
01:06 <stevegjacobs> vidd: you still here?
01:08 <livemoon> vidd: ping
01:09 <stevegjacobs> looks like vidd is away
01:09 <stevegjacobs> bwong - how is your setup going?
01:10 <bwong> configuring nova.sh right now
01:10 <bwong> I mean, going through the steps AFTER nova.sh
01:10 <stevegjacobs> I did all that just last night :-)
01:10 <bwong> hey, do you remember that part where you're supposed to chown glance:glance?
01:11 <bwong> what directory or file was it referring to? because all it said was "chown glance:glance it"
01:11 <bwong> I just assumed "it" was the glance dir
01:13 <stevegjacobs> It was the glance-registry.conf and glance-api.conf files that you were to copy into /etc/glance/
01:15 <stevegjacobs> chown glance:glance /etc/glance/glance-registry.conf - after you copy the ones that were generated by the script into the /etc/glance folder
01:17 <bwong> so only those files should be glance:glance?
01:17 <bwong> because when I chowned the directory, all the files inside are owned by glance.
01:19 <stevegjacobs> don't think it hurts
01:20 <stevegjacobs> all of mine are too - just need to make sure that the new ones you copy in are owned by glance as well
01:22 <bwong> alright, thanks for the tip
02:21 <mandela123> since nova changed its openstackx plugin, I want to know how to make the dashboard work well with it
02:49 <livemoon> hi, anyone here?
02:49 <livemoon> dashboard cannot be used, help
02:49 <livemoon> Exception Type: NameError
02:49 <livemoon> Exception Value: name '_' is not defined
02:51 <uvirtbot> New bug: #887402 in nova "can't terminate instance with attached volumes" [Undecided,New] https://launchpad.net/bugs/887402
03:24 <winston-d> livemoon: hi
03:25 <livemoon> hi
03:25 <livemoon> winston-d
03:25 <livemoon> do you know python?
03:25 <winston-d> livemoon: i know some
03:25 <livemoon> when I use dashboard, the web page shows "name '_' is not defined"
03:26 <livemoon> in /usr/local/lib/python2.7/dist-packages/glance-2012.1-py2.7.egg/glance/common/exception.py
03:26 <livemoon> it defines message = _("An unknown exception occurred")
03:31 <winston-d> can you post the whole error message?
03:32 <livemoon> http://paste.openstack.org/show/3155/
03:33 <winston-d> hmm... I suppose you see a similar error when using the 'glance' or 'nova' CLI tools?
03:34 <livemoon> I can use glance and nova with keystone fine
03:37 <winston-d> livemoon: can you do 'glance add'?
03:37 <livemoon> of course
03:37 <livemoon> I think it is something with python
03:37 <winston-d> livemoon: no error?
03:38 <livemoon> because it shows NameError: name '_' is not defined
03:38 <livemoon> in /usr/local/lib/python2.7/dist-packages/glance-2012.1-py2.7.egg/glance/common/exception.py", line
03:38 <livemoon> I think it needs to define '_'
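livemoon is right that `_` has to be defined somewhere: glance code of this era wraps messages in `_()` on the assumption that `gettext.install()` has already injected `_` into builtins (normally done in the service's entry point), so importing a module like `common/exception.py` from a context that skipped that step raises exactly this NameError. A minimal sketch of the mechanism (the "glance" domain name here is illustrative):

```python
import gettext

# Before install(), "_" is not a builtin, so code like
#   message = _("An unknown exception occurred")
# raises NameError when run standalone.
try:
    _("hello")
except NameError as e:
    print("without install():", e)

# gettext.install() injects "_" into builtins for every module;
# with no translation catalog loaded it returns the msgid unchanged.
gettext.install("glance")
print(_("An unknown exception occurred"))
```

So the fix is usually not to edit exception.py, but to make sure whatever entry point imports it has done the gettext setup first.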
03:42 <winston-d> sorry, i can barely read any useful information out of your error log
03:43 <winston-d> btw, can you tell me how to set the ENV variables for glance, in order to use keystone?
03:46 <livemoon> I don't set any ENV for glance
03:46 <livemoon> just in the glance conf files
03:47 <winston-d> livemoon: really? no ENV?
03:48 <winston-d> livemoon: then how do you configure credentials for glance in the conf file?
03:48 <livemoon> https://github.com/livemoon/openstack
03:52 <winston-d> i don't see any configuration related to authentication. weird.
03:52 <livemoon> I use "glance -A token [command]"
03:54 <winston-d> livemoon: the token is like the password for the user?
03:57 <winston-d> I always get this error: 'Not authorized to make this request. Check your credentials (OS_AUTH_USER, OS_AUTH_KEY, ...)'
03:58 <winston-d> livemoon: doesn't matter, i figured it out. thx
04:02 <livemoon> winston-d: :)
04:03 <livemoon> I'm waiting for someone to share a dashboard installation doc
07:46 <Telamon> Does anyone know of some recent install docs for keystone? I'm using http://docs.openstack.org/diablo/openstack-identity/admin/content/configuring-the-identity-service.html but I'm not sure how up to date they are...
08:02 <yeming> I'm experimenting with FlatManager. 'euca-describe-instances' shows I can get an IP address, but I cannot ping or ssh into the instance. Anything I missed? Last time I succeeded with FlatDHCPManager.
08:41 <yeming> In FlatManager mode, how does the instance get its IP? Is it written into the image before starting?
08:46 <Telamon> yeming: I think the cloud-service package grabs it from a kernel parameter
08:46 <Telamon> So if your image doesn't have that cloud-service package, you won't get one.
08:46 <Telamon> I'd try booting the instance and using the VNC console to manually set an IP, and see if that works first.
08:47 <Telamon> Do you by any chance have keystack working?
08:47 <Telamon> Sorry, keystone.
08:48 <yeming> Telamon: No, I just started playing with Nova, nothing else yet.
08:50 <Telamon> Ah. Avoid keystone. It will drive you insane. ;-)
08:54 <Razique> Hi all :)
08:54 <Razique> haha, I just came in and read Telamon's last sentence
08:55 <Razique> Keystone is definitely driving us all crazy =D
08:55 <Telamon> Hah, but I'm crazy like a fox now! I figured out that the new keystone DB has encrypted passwords.
08:55 <Telamon> Now I can make the damned docs work again...
08:56 <Telamon> Of course all the ports in the docs are wrong, but that's just to weed out the people who take their meds....
08:58 <yeming> Hi Razique
08:58 <Telamon> Any ideas on how to test if nova/glance are properly using keystone?
08:59 <Razique> Telamon: yeah, you check the keystone logs
08:59 <Razique> and check that the tokens returned by the request are the ones in the Keystone db
09:00 <livemoon> hi, help
09:01 <Telamon> Razique: Okay, but what command do I use that will trigger a request? The euca stuff doesn't seem to work once you switch to keystone
09:01 <Razique> livemoon: yup?
09:01 <Razique> Telamon: I've been able to integrate euca2ools along with Keystone, but it was a pain
09:01 <Razique> Telamon: use curl to make auth hits against both nova and glance
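"Making an auth hit" against the identity API of that era means POSTing credentials to the v2.0 tokens endpoint and checking that a token comes back. A sketch of the request and response shapes (endpoint, username, password, and tenant are placeholders, and the reply dict is a canned example of the shape, not a live response):

```python
import json

# Rough curl equivalent (endpoint assumed):
#   curl -s -H "Content-Type: application/json" -d @body.json \
#        http://127.0.0.1:5000/v2.0/tokens
body = json.dumps({
    "auth": {
        "passwordCredentials": {"username": "admin", "password": "secret"},
        "tenantName": "admin",
    }
})

# Shape of a successful v2.0 reply: the token to compare against
# keystone's DB sits under access -> token -> id.
reply = {"access": {"token": {"id": "abc123", "expires": "2011-11-09T00:00:00Z"}}}
print(json.loads(body)["auth"]["tenantName"], reply["access"]["token"]["id"])
```

If the returned token id matches a row in keystone's token table, the auth path is working; nova and glance can then be hit with that token in the X-Auth-Token header.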
09:01 <livemoon> Razique: can you see my mail in the mailing list?
09:02 <Razique> yup; lemme check
09:04 <Telamon> Hmm, I think I'm going to blow away my keystone.db and re-init it with the devstack script. It looks to be more fleshed out than the keystone install docs...
09:11 <Razique> Telamon: the devstack one seems not to contain all the entries
09:11 <Razique> especially the "roles" table lacks the KeystoneService and KeystoneServiceAdmin entries
09:12 <Telamon> Crap. Any docs around that do contain them all?
09:13 <livemoon> does anyone use dashboard?
09:15 <BasTichelaar> does anyone have knowledge of zones and schedulers in nova?
09:18 <Razique> Telamon: not yet, but I can give you my sql dump
09:21 <livemoon> hi, Razique, have you installed dash?
09:23 <Razique> livemoon: nope, I tried it via the devstack script
09:23 <Razique> what about u, livemoon?
09:25 <livemoon> I followed the devstack script
09:25 <livemoon> but failed
09:25 <livemoon> it says "name '_' is not defined"
09:30 <Jigen90> Hi guys, I need a little help understanding the meaning of a vcpu.
09:30 <Jigen90> What's the link between 1 vcpu and 1 core of my processor? Are they related?
09:31 <tjoy> good question
09:34 <Jigen90> When I spawn an instance with 1 vcpu, how many cores of my processor are used?
09:34 <Jigen90> All, or only one?
09:38 <Razique> livemoon: during the install?
09:39 <livemoon> Razique: while running the test script
09:42 <Razique> have you run the install first?
09:43 <Razique> python setup.py build && python setup.py install
09:44 <Telamon> Razique: Sorry, I lost my net connection. How do you go about loading images when you are using keystone?
09:44 <Razique> Telamon: in fact, see Keystone as an intermediary, no more, so use the traditional commands
09:45 <Razique> it's Glance that handles the communication with Keystone for its operations
09:45 <Razique> while you only send Keystone the temp. token and tenant token
09:45 <zykes-> Razique: wazzup
09:46 <Razique> Here is a scheme I did, in the review: http://img845.imageshack.us/img845/5205/sch5002v00nuackeystonen.png
09:46 <Razique> hey zykes- :)
09:46 <Razique> playing with live migration
09:46 <Razique> We are about to sign a 50-instance customer
09:47 <Razique> I'd like to make sure the whole HA stuff is ready :D
09:56 <zykes-> what are you using for storing vm instances, Razique?
09:56 <zykes-> sheepdog?
09:59 <Razique> atm the instances themselves are stored locally on every node, while the volumes use an iSCSI SAN
09:59 <Razique> I asked myself just yesterday whether I shouldn't put the instances on the SAN also
10:00 <Razique> but when I benched the instances, the fact that they were stored locally gave me outstanding performance
10:01 <Razique> http://img820.imageshack.us/img820/507/plop20111010181702.jpg
10:02 <Razique> If I'm working through boot-from-volumes, maybe I could also try to bench an on-SAN solution
10:14 <Telamon> Anyone know why "euca-describe-availability-zones verbose" throws this error: Warning: failed to parse error message from AWS: <unknown>:1:0: syntax error
10:16 <livemoon> Razique: hi
10:16 <Razique> Telamon: incorrect endpoint
10:16 <Razique> check both ec2_url and ec2_host
10:17 <Razique> and the file you source
10:17 <Razique> EC2_URL
10:17 <Razique> livemoon: 'sup?
10:17 <livemoon> I haven't completed dashboard
10:17 <livemoon> I think it is hard for me
10:19 <Razique> :(
10:19 <Razique> before Dashboard -
10:19 <Telamon> Razique: In my nova.conf I have --ec2_url=http://192.168.2.254:8773/services/Cloud and in my env I have EC2_URL=http://192.168.2.254:8773/services/Cloud so that looks good.
10:19 <Razique> is Keystone integrated?
10:20 <Razique> Telamon: does euca-describe-instances work?
10:20 <Telamon> Nope, same error.
10:23 <Razique> Telamon: you use keystone?
10:23 * Razique fears the answer
10:23 <Telamon> Yep. I'm seeing this in my api log: POST /services/Cloud/ None:None 400 [Boto/2.0 (linux2)] application/x-www-form-urlencoded text/plain
10:24 <Razique> ok, so add this
10:24 <Razique> nova.conf: --keystone_ec2_url=http://172.16.40.11:5000/v2.0/ec2tokens
10:24 <Razique> and in keystone/middleware/ec2_token.py
10:24 <Razique> make sure that you have: # o = urlparse(FLAGS.keystone_ec1_url)
10:24 <Razique> o = urlparse(FLAGS.keystone_ec2_url)
10:25 <Razique> and: # token_id = result['auth']['token']['id']
10:25 <Razique> token_id = result['access']['token']['id']
10:25 <Razique> restart nova-api and it should work
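The two edits Razique describes boil down to: parse the correctly spelled keystone_ec2_url flag, and read the token id from the 'access' key that keystone v2.0 actually returns rather than the old 'auth' key. A standalone sketch of just those two lines (the flag value and response dict are stand-ins; the real middleware reads FLAGS and the live keystone reply):

```python
from urllib.parse import urlparse  # "from urlparse import urlparse" on the Python 2 of the era

# stand-in for FLAGS.keystone_ec2_url
keystone_ec2_url = "http://172.16.40.11:5000/v2.0/ec2tokens"
o = urlparse(keystone_ec2_url)   # the broken code parsed a misspelled keystone_ec1_url flag

# stand-in for the JSON body keystone v2.0 sends back: the token is
# nested under 'access', not 'auth' as the old middleware assumed
result = {"access": {"token": {"id": "abc123"}}}
token_id = result["access"]["token"]["id"]

print(o.hostname, o.port, o.path, token_id)
```

With the wrong key, the middleware dies on a KeyError before it ever validates the EC2 request, which shows up client-side as exactly the kind of unparseable error Telamon is seeing.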
10:31 <Telamon> Nope, same thing.
10:32 <Telamon> When I use curl to go to the ec2tokens URL I get a 404...
10:33 <livemoon> quit
10:34 <Razique> Telamon: what version of keystone are u using?
10:34 <Razique> http://paste.openstack.org/show/3157/ - migration doesn't work :(
10:34 <Razique> I don't have any errors here, have I?
10:35 <Telamon> Razique: 1.0~d4+20111106-0mit1 from Kiall's PPA
10:35 <Telamon> It's the only one reported to work...
10:35 <Razique> Telamon: I've only been able to with the trunk one
10:36 <Razique> (from github)
10:36 <uvirtbot> New bug: #887495 in horizon "Error authenticating with keystone: Unhandled error" [Undecided,New] https://launchpad.net/bugs/887495
10:37 <Telamon> Ahh... Okay. I can load images directly with glance, and start them from Dashboard, so I think I'm going to just leave it as-is for the moment. I don't want to upgrade to trunk and have it break some other component.
10:39 <zykes-> Razique: where are you from? Spain?
10:39 <Razique> zykes-: France :D
10:39 <zykes-> oh
10:40 <zykes-> Razique: working for a large hosting provider, or?
10:40 <Razique> not at all
10:40 <Razique> Young sysadmin in a company we created last year :D
10:40 <zykes-> :p
10:40 <Razique> We first wanted to use eucalyptus
10:40 <zykes-> but then OS came along?
10:40 <Razique> but after some months of operation, and a lot of troubles, I started the migration to OS
10:41 <Razique> We offer hosting and cloud-based server management
10:41 <Razique> the first year, that was KVM only, worked pretty well :)
10:41 <Razique> what about you? :)
10:42 <zykes-> Norway :)
10:42 <zykes-> Norway, trying to convince folks here, but it's not easy
10:43 <Razique> mmm well, if you want some use cases, you can ask
10:43 <zykes-> fire away captain
10:43 <ryan_fox1985> Hi, I'm installing swift with swauth, and when I start the proxy-server the service doesn't start.
10:43 <Razique> cool country :D
10:43 <ryan_fox1985> In the syslog appears (newlines decoded from #012):
    Nov  8 11:41:35 proxy proxy-server UNCAUGHT EXCEPTION
    Traceback (most recent call last):
      File "/usr/bin/swift-proxy-server", line 22, in <module>
        run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)
      File "/usr/lib/pymodules/python2.6/swift/common/wsgi.py", line 126, in run_wsgi
        app = loadapp('config:%s' % conf_file, global_conf={'log_name': log_name})
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 204, in loadapp
        return loadobj(APP, uri, name=name, **kw)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 224, in loadobj
        global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in loadcontext
        global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 278, in _loadconfig
        return loader.get_context(object_type, name, global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 405, in get_context
        global_additions=global_additions)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 503, in _pipeline_app_context
        for name in pipeline[:-1]]
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 409, in get_context
        section)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 431, in _context_from_use
        object_type, name=use, global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 361, in get_context
        global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in loadcontext
        global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 285, in _loadegg
        return loader.get_context(object_type, name, global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 561, in get_context
        object_type, name=name)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 587,
10:44 <Razique> ryan_fox1985: use paste please :)
10:44 <Razique> here: http://paste.openstack.org/
10:44 <ryan_fox1985> ok, sorry
10:46 <ryan_fox1985> I already installed the swauth library
10:47 <ryan_fox1985> And I already configured the proxy-server.conf
10:53 <Telamon> ryan_fox1985: If you want to pastebin your proxy-server.conf I can take a look at it. Swift was one of the few parts I got working easily. :)
10:54 <ryan_fox1985> Ok, one moment
10:55 <ryan_fox1985> http://pastebin.com/1aYeKZ1f proxy-server.conf
10:56 <ryan_fox1985> I downloaded swauth from git and ran python setup.py install
10:57 <Telamon> Okay, first thing to check: do the /etc/swift/cert.* files exist?
10:57 <ryan_fox1985> yes, I created them
10:58 <Telamon> Okay, and your machine's IP is 10.30.239.198?
10:58 <ryan_fox1985> cd /etc/swift && openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
10:58 <ryan_fox1985> yes, I checked with ifconfig eth0
11:00 <Telamon> Okay, can you pastebin the error from above? It looks like the last few lines are missing.
11:00 <foexle> hi guys, I've uploaded an image to glance => all done. Now I try to start an instance and nova calls glance to get the requested image, but glance says it can't find it. So I looked at the SQL query glance runs. If I try this query (http://pastebin.com/bR6WnYLs) with the 2 arguments (False, 2), where False=deleted and id=2, I don't get a result. If I try it with (0, 2) I get the image. Is this a bug?
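foexle's (False, 2) vs (0, 2) observation fits a common trap rather than a bug: the `deleted` column is stored as an integer flag (0/1), and the ORM binds the Python boolean `False` as 0, so the parameters shown in the query log work when bound by the driver but fail if you re-type them as the literal string 'False' in a SQL client. A sketch of the difference, using an in-memory SQLite table as a stand-in for his glance database:

```python
import sqlite3

# Minimal stand-in for glance's images table: deleted is stored
# as an integer flag, not the text "False".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER, deleted INTEGER)")
conn.execute("INSERT INTO images VALUES (2, 0)")

# Re-typing the logged parameter as the literal 'False' matches nothing:
miss = conn.execute(
    "SELECT id FROM images WHERE deleted = 'False' AND id = 2").fetchone()

# Binding the Python boolean (what the ORM actually does) adapts it to 0:
hit = conn.execute(
    "SELECT id FROM images WHERE deleted = ? AND id = ?", (False, 2)).fetchone()

print(miss, hit)
```

So a logged parameter tuple of (False, 2) and a hand-typed (0, 2) are the same query to the database; only the literal string 'False' is not.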
11:01 <ryan_fox1985> http://pastebin.com/iURmkmCR
11:02 <ryan_fox1985> it's all that there is in /var/log/syslog
11:02 <Telamon> ryan_fox1985: You seem to be missing a few lines of the error there. After the "line 587," it should have a function name and an error description.
11:02 <ryan_fox1985> the log finishes with a comma
11:03 <Telamon> ryan_fox1985: Hmm, try running it from the command line: swift-init main start
11:03 <ryan_fox1985> ok
11:04 <Telamon> As root. So you might need to put sudo in front of that
11:04 <ryan_fox1985> an error appears
11:04 <ryan_fox1985> main-server.conf not found
11:05 <Telamon> ryan_fox1985: Hmm, do you have a bunch of other conf files and folders (say object-server.conf) in /etc/swift?
11:06 <ryan_fox1985> in the /etc/swift folder
11:06 <ryan_fox1985> One moment, I'll create a pastebin
11:07 <ryan_fox1985> http://pastebin.com/kA3604Qy
11:08 <ryan_fox1985> this is my proxy node with swauth
11:08 <foexle> no one has any idea?
11:09 <Telamon> ryan_fox1985: Okay, I think you are missing some of the setup steps. 1 sec...
11:10 <ryan_fox1985> I did the steps from os-objectstorage-adminguide-trunk.pdf
11:11 <Telamon> ryan_fox1985: Try using these: http://docs.openstack.org/diablo/openstack-object-storage/admin/content/  I don't actually know which are more recent, but the website ones seem to work for me.
11:11 <ryan_fox1985> did you already install swift with swauth?
11:11 <Telamon> foexle: What does euca-describe-images say?
11:12 <foexle> Telamon: doesn't work
11:12 <foexle> auth required
11:12 <foexle> i think it's the combination of keystone and glance
11:12 <foexle> but the glance client works
11:12 <Telamon> ryan_fox1985: I installed it with keystone for the auth backend, but it's pretty much the same except for the password lookups.
11:13 <Telamon> foexle: Hmm, I dunno then. Are you using dashboard? Do you see them in there?
11:13 <foexle> yes and yes
11:13 <foexle> :)
11:14 <ryan_fox1985> The web page has the same steps as the manual I have
11:15 <Telamon> Hmm, did you check in the system panel -> images that the image file has the right ID for the kernel and ramdisk file? I don't know, just guessing...
11:17 <foexle> hmmmm you're right ....
11:17 <foexle> kernel and ramdisk id = 123
11:17 <foexle> hmmm
11:17 <Telamon> ryan_fox1985: You should have a bunch more config files in /etc/swift then. http://pastebin.com/R3PSykCw
11:18 <Telamon> foexle: If it turns out that I actually properly diagnosed an OpenStack problem I may have a heart attack... Don't tease me like that! ;-)
11:18 <foexle> :>
11:19 <foexle> but i followed the glance howto -.-
11:19 <Telamon> The 123 means nothing uploaded (it's probably in a grey font). If you have a separate kernel uploaded, just grab its ID from its own edit page and pop it in for the image one. Then you can start an instance.
11:19 <ryan_fox1985> the account-server.conf, proxy-server.conf and object-server.conf I have on the storage nodes
11:21 <Telamon> ryan_fox1985: Ah, the docs must be slightly different... I don't know then, sorry. You might want to double-check your syslog though. Your error message is definitely getting cut off in the middle. syslog doesn't always flush the cache, so that can happen.
11:21 <Telamon> Anyone know the default logins to the uec-images.ubuntu.com images?
11:23 <ryan_fox1985> how can I see the whole log?
11:24 <Telamon> Probably tail /var/log/syslog
11:24 <ryan_fox1985> ok, I'll start the proxy-server again
11:24 <ryan_fox1985> and I'll paste the logs
11:26 <ryan_fox1985> http://pastebin.com/Hwp3bQfG
11:27 <ryan_fox1985> I think that swift doesn't find the swauth module
11:27 <Telamon> Yeah, still missing part of the error message. Are you running Ubuntu?
11:29 <ryan_fox1985> yes
11:29 <ryan_fox1985> ubuntu 10.04 LTS
11:30 <ryan_fox1985> server
11:30 <ryan_fox1985> 32 bits
11:31 <Telamon> Hmm, I dunno then. It may very well be that the swauth module is missing, but I can't really say more without the rest of the error message. Sorry I can't be of more help.
11:32 <ryan_fox1985> Ok, thanks!
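If swauth really isn't importable, the failure point is exactly where ryan_fox1985's traceback dies: paste.deploy resolving the egg named in the proxy pipeline. For reference, a historically typical swauth-based proxy-server.conf fragment of that era looked roughly like this (section names and keys follow the swauth README of the time; the super_admin_key value is a placeholder):

```ini
[pipeline:main]
# paste.deploy imports each filter's egg entry point in turn; a missing
# swauth install fails right here, with the kind of loadwsgi.py
# traceback pasted above
pipeline = healthcheck cache swauth proxy-server

[filter:swauth]
use = egg:swauth#swauth
super_admin_key = swauthkey
```

A quick sanity check is `python -c "import swauth"` as the same user swift runs as: if that fails, no amount of conf editing will help.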
11:40 <Nathariel> Hey guys. While installing the nova-common package on Ubuntu 11.10 I get "/usr/lib/python2.7/dist-packages/migrate/changeset/schema.py:124: MigrateDeprecationWarning: Passing a Column object to alter_column is deprecated. Just pass in keyword parameters instead." Any thoughts?
11:44 <Telamon> Nathariel: That's probably not a problem. It's just saying the package is using an upgrade option for SQLAlchemy that isn't officially supported any more. It should still work.
11:47 <Nathariel> Thanks, Telamon
11:47 <foexle> Telamon: i think the main problem is the auth ..... if i try to get a response from syspanel -> tenants it's unauthorized too
11:48 <foexle> this user has the sysadmin, admin and netadmin roles in keystone
11:48 <foexle> and the creds are correct too
11:48 <foexle> -.- oh man :D ...
11:50 <Telamon> foexle: Are you using devstack or packages? And if packages, did you use the devstack keystone_data.sh script?
*** termie has quit IRC11:50
*** rods has joined #openstack11:52
foexleTelamon: i'm using packages and no11:54
foexleTelamon: i dont use any automate install scripts11:55
ninkotechhi, i would like to use swift like solution, but i need to be able to configure number of copies of the blob when i upload data into it...  would it be hard to achieve this with swift somehow?11:56
*** BasTichelaar has quit IRC11:57
*** termie has joined #openstack11:58
Telamonfoexle: Okay, the docs for keystone on the website are missing a bunch of setup commands, which might be causing your problems.  Try using this:  https://answers.launchpad.net/swift/+question/175595  just make sure to change the token for admin/admin to the one from your /etc/nova/api-paste.ini file11:58
*** supriya has joined #openstack11:59
*** PeteDaGuru has joined #openstack12:01
foexleTelamon: ok .... the role MUST BE Admin ....12:05
foexleTelamon: i configured in keystone conf with KeystoneAdmin12:05
foexlebut i think the communication between nova and keystone requires this role "Admin"12:05
foexleok Tenants show works now :)12:06
foexleok the last error euca-describe-images :D .... thx Telamon12:07
TelamonHeh, let me know if you get that one working.  No joy for me.12:07
foexleother euca commands are running without errors .... hmmmm :) ...12:08
foexleTelamon: ok i'll do12:08
*** supriya has quit IRC12:10
*** livemoon has joined #openstack12:15
*** praefect has joined #openstack12:15
*** yeming has joined #openstack12:22
*** Turicas has joined #openstack12:22
*** yeming has quit IRC12:27
*** Vek has joined #openstack12:29
Raziqueback guys !12:31
*** nerdstein has joined #openstack12:32
zykes-Razique: !12:32
zykes-:d12:32
Raziqueyup12:33
Razique'sup :p12:33
lelinis there a way to create an image from a running system?12:36
Raziquelelin: yup, but I haven't tried yet12:37
Raziqueit's into my todo list :p12:37
lelinRazique,  can you point me to some docs pls?12:37
Raziqueonly that atm https://lists.launchpad.net/openstack/msg03825.html12:38
Raziquegood luck!12:38
*** Telamon has quit IRC12:38
*** dirkx_ has joined #openstack12:39
Raziquelelin: https://github.com/canarie/vm-toolkit#readme12:39
*** stevegjacobs_ has joined #openstack12:43
*** Turicas has quit IRC12:44
stevegjacobs_I have set up a web server instance, working perfectly, and I would like to figure out best way to snapshot and then make it available as a new image.12:45
*** stuntmachine has joined #openstack12:47
*** Turicas has joined #openstack12:49
*** stuntmachine has quit IRC12:49
lelintnx Razique12:49
livemoonhi13:03
livemoonlelin13:04
livemooncreate an image from a running system, you can use python-novaclient13:04
*** openpercept has quit IRC13:05
*** shang has joined #openstack13:05
*** cmagina_ has joined #openstack13:06
lelinlivemoon, tnx i ll give a try also to that. does it mandatory needs a volume attached?13:06
*** lorin1 has joined #openstack13:06
livemoonNo13:06
lelincool13:07
livemoonbut I haven't to try snapshot a server attaching a volume13:08
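[editor's note: livemoon's python-novaclient route, sketched. This assumes the Diablo-era v1.1 API (`novaclient.v1_1`, `servers.find`, `servers.create_image`) and placeholder names throughout; check the API of your installed novaclient before relying on the exact signatures:]

```python
try:
    from novaclient.v1_1 import client   # Diablo-era module path
except ImportError:                      # novaclient not installed: sketch only
    client = None

def snapshot_server(nt, server_name, image_name):
    """Snapshot a running instance by name.

    `nt` is assumed to be a client.Client("user", "apikey", "project",
    "http://keystone-host:5000/v2.0/") instance -- all placeholder values.
    """
    server = nt.servers.find(name=server_name)          # the running VM
    return nt.servers.create_image(server, image_name)  # createImage action
```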
*** cmagina has quit IRC13:09
*** dprince has joined #openstack13:10
*** PeteDaGuru has quit IRC13:16
*** nRy has joined #openstack13:19
*** andredieb has joined #openstack13:20
lelinlivemoon, i'm using "nova image-create" but after several minutes, nova image-list still shows state as "saving". does it take so long for you too?13:25
nRyHello13:26
livemoonaccording your size of instances13:26
livemoonbut you can see compute.log of your host13:26
*** bcwaldon has joined #openstack13:26
nRyDoes anyone know of some instructions, possibly a web link with info on "Starting" Amazon EC2 instances using a component of Openstack?13:27
nRyor possibly Openstack with Chef?13:27
*** andredieb has quit IRC13:27
Raziquelivemoon: thanks for the nova stuff :D13:28
Raziqueomg, i'll test that13:28
Raziquecreate image from instance, another thing I need to chekc13:28
*** stuntmachine has joined #openstack13:30
livemoonRazique: I see you email about migration13:30
Raziquelivemoon: yah, i'm starting to be desperate here :d13:30
kaigan41713:30
kaiganerr13:30
livemoonI need know how to block migration13:30
livemoonif you know, tell me13:31
Raziqueyup, I'm trying to do both (live and block)13:32
Raziquebut not success atm :D13:32
Raziquelelin: ok I just tried13:32
livemoongo on13:32
RaziqueI tried to create an image from a running instance13:33
Raziquethe process goes well, and Glance gets the new image as a private one13:33
Raziquethen I create a new server based on that image13:33
Raziqueits boots, and I can connect to it13:33
Razique… but :D13:33
DuncanTIf you update an api extension, such as os-volume, in a back-compatible manner, should you update the 'updated' timestamp in its attributes, or leave it alone?13:33
RaziqueI created a file into the instance "razique"13:33
Raziquethat is missing from the new server13:34
DuncanTExample change: https://review.openstack.org/#change,120213:34
livemoonwhy?13:34
RaziqueI dunno13:35
* Razique is starting to like novaclient tool13:35
lelinRazique, for me is still in "creating" state. but i think is because nova-volume is not configured (i need a different partition for that, right?)13:35
Raziquelelin: not necesseraly13:35
lelinok13:35
RaziqueHere I don't use nova-volume for the instance13:35
RaziqueI've a small test instance on a server13:36
Raziquea ttylinux13:36
lelinso what could be the problem Razique ? i have no evidence in the logs. now i ll try to make a snap of the tty13:36
*** stuntmachine has quit IRC13:36
Raziquelelin: I would check nova-compute in verbose mode13:36
zykes-novaclient tool Razique ?13:37
lelinRazique, i will13:38
Raziquezykes-: the nova client13:39
*** Nadeem has joined #openstack13:39
Nadeemguys13:39
Raziqueyou know the replacement for euca2ools13:39
Nadeemam not too familiar with git13:39
praefectRazique: to do your create image from instance test, do you use something like "nova rebuild xxx y" ?13:39
Nadeembut i am getting this:13:39
Raziqueweird because I see "qemu-img convert -f qcow2 -O raw -s 20eb85f3af074bf6a5c94c97932b7999"13:39
Nadeem+ git clone https://github.com/openstack/nova.git /root/cloud/nova/nova Cloning into /root/cloud/nova/nova... warning: remote HEAD refers to nonexistent ref, unable to checkout.13:39
Raziquepraefect: i've a running instance that I create image from13:39
praefectRazique: ok but you use "nova rebuild" right?13:39
Raziquepraefect: I don't13:40
Raziquerebuild "Shutdown, re-image, and re-boot a server."13:40
praefectRazique: what do you do then?13:40
praefectI'm lost13:40
RaziqueI do nova image-create $server $name13:40
praefectthanks13:40
Raziquethen glance index shows me the image13:40
Raziquethen nova boot --image $clone --flavor $flavor13:40
Raziquebut it's like it uses the backing files since I don't find the test file I created13:41
Raziqueand 2011-11-08 14:38:10,283 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img convert -f qcow2 -O raw -s 20eb85f3af074bf6a5c94c97932b7999 /var/lib/nova/instances/instance-00000036/d13:41
Raziquewe see that is create a clone from the running instance's disk, not the backing file13:41
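[editor's note: Razique's snapshot-and-boot sequence from the lines above, collected in one place. Server, image, and flavor names are placeholders, and the commands need a sourced novarc; a usage sketch, not a tested recipe:]

```shell
nova image-create web01 web01-snap     # snapshot the running instance
glance index                           # the new image appears (private)
nova boot --image web01-snap --flavor m1.tiny web01-clone
nova list                              # watch the clone build and boot
```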
*** ninkotech has quit IRC13:42
Raziqueok weirder :D13:43
*** stuntmachine has joined #openstack13:43
Raziquenow the new server has the file I created13:43
Raziquebut I created a second file, re-image and re-boot13:44
*** imsplitbit has joined #openstack13:44
RaziqueNow the second file doesn't exist13:44
Raziquehaha13:44
praefectRazique: this is very interesting, I've noticed that "glance index" never lists anything for me but I'm pretty sure I'm using glance... I have 10+ images in there and glance index shows nothing13:44
Raziqueu mean the image ?13:44
Raziqueor all ur images ?13:44
praefecteven with 10+ images in glance, "glance index" never returns anything, and if I look at glance-api.log I see entries with GET HEAD POST for all my image manipulation...13:45
Raziquepraefect: u use Keystone ?13:46
uvirtbotNew bug: #887572 in keystone "Error authenticating with keystone: Unhandled error" [Undecided,New] https://launchpad.net/bugs/88757213:46
praefectRazique: no nothing keystone related..13:46
Raziquegreat13:46
Raziquesince Keystone requires extra changes in order to be able to use glance shell13:46
Raziqueare the images private or public13:47
praefectok at least, nova image-list shows me an image that is SAVING...13:47
Razique does nova image-list shows them ?13:47
praefectRazique: yes13:47
*** msivanes has joined #openstack13:47
*** rsampaio has joined #openstack13:47
praefectand one is in SAVING state (the test I just did clone a VM)13:47
Raziqueput glance in debug mode13:47
Raziquewhile u run glance index, and get the sql request13:48
praefectRazique: glance-api restarted with debug=true13:48
livemoonrazique13:49
livemoon give me your backup mysql script again.13:50
*** livemoon has left #openstack13:50
*** livemoon has joined #openstack13:50
Raziquelivemoon: https://github.com/Razique/BashStuff13:51
Raziquehelp urself :p13:51
*** Arminder has quit IRC13:51
livemoonfork your branch13:51
Raziquesure13:52
foexleHey guys, if i run euca-describe-images i get this error AttributeError: keystone_ec1_url13:52
Raziquepraefect: ok, so It's like the clone of the server is one step back13:52
Raziquefoexle: Keystone user :p13:53
foexleother euca commands are running13:53
stevegjacobs_I am just installed python-novaclient and I'm trying to use it for the first time13:53
praefectRazique: mine is still in SAVING state...13:53
foexleRazique: keystone user ?13:53
*** naehring has quit IRC13:53
foexleRazique: this user has all roles and tenants they are need :/13:53
Raziquefoexle: do u have that file ? /usr/local/lib/python2.6/dist-packages/keystone-1.0-py2.6.egg/keystone/middleware/ec2_token.py13:53
Raziquepraefect: what ur's setup ?13:53
foexleRazique: yes13:54
Raziqueok13:54
Raziquepaste it please :)13:54
stevegjacobs_I'm used to authenticating with euca-tools, but I need some instruction on how to do this with keystone and novaclient#13:54
praefectRazique: not sure what you mean but my server is a xeon workstation with SATA disks13:55
praefectif that's what you wanted to know13:55
foexleRazique: http://pastebin.com/WZyGe9Nj13:55
Raziquestevegjacobs_: I've written up a guide here http://docs.openstack.org/trunk/openstack-compute/admin/content/migrating-from-cactus-to-diablo.html13:56
Raziquelook the end of the doc13:56
Raziquefoexle: the good one http://paste.openstack.org/show/3161/13:56
foexleRazique: ok thx13:57
Raziquetwo lines have changed, the o=  url parse...13:57
Raziqueand the array at the end of file13:57
*** Turicas has quit IRC13:57
Raziquepraefect: look into nova-compute.log13:57
*** Turicas has joined #openstack13:57
*** aliguori has joined #openstack13:58
Raziquemaybe it hangs of the clone creation13:58
Raziqueor maybe nova-api.log since it's trying to put it into glance13:58
livemoonyes,look at nova-compute.log13:58
praefectRazique: thanks, yes nova-compute is full of errors13:58
Raziqueok paste o/13:58
*** dendro-afk is now known as dendrobates13:59
praefectRazique: qemu-img command failed: invalid option "-s"13:59
Razique-s ?14:00
praefectpaste.openstack.org/show/316214:00
Raziquewhat version of nova are u running ?14:00
livemoonit is older14:01
livemoonpython is 2.614:01
livemoonin my server. it is python 2.714:01
praefectRazique: I've had problems before because of qemu-img (I'm on centos using packages from griddynamics) (nova-manage version = 2011.3 (2011.3-LOCALBRANCH:LOCALREVISION))14:01
Raziquepraefect: diablo from trunk :/14:02
Raziquemmm I use diablo stable14:02
praefectRazique: like I said they are packages from griddynamics, the thing is - it's pretty hard to come up with the latest version of qemu-img on centos...14:03
praefectpackage problems..14:03
foexleRazique: in your doc last line export NOVA_URL=http://$KEYSTONe-IP:5000/v.20/ should be /v2.0/ ?14:03
Raziquepraefect: whatt's ur qemu version ?14:03
livemoonRazique:you use stable?14:03
Raziquelivemoon: diablo stable yah14:03
Raziquefoexle: yup14:03
livemoonbut your keystone and novaclient is git?14:03
Raziquelivemoon: yah14:03
livemooncool14:03
praefectqemu-img 0.12.1 .. simply not enough14:03
Raziquehere 0.14-014:03
Raziquepraefect: changelog og 0.14 version http://repo.or.cz/w/qemu.git/commitdiff/51ef67270b1d10e1fcf3de7368dccad1ba0bf9d114:05
Razique"The following patch adds a new option in "qemu-img": qemu-img convert -f qcow2 -O qcow2 -s snapshot_name src_img bck_img.14:05
Razique"14:05
Raziquepraefect: that's our answer here :)14:05
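[editor's note: the version gap in command form — qemu-img grew the `-s` (snapshot) flag to `convert` in 0.14, which is exactly the option nova-compute passes and which 0.12.1 rejects with "invalid option". Filenames and the snapshot name are placeholders:]

```shell
qemu-img --version   # needs >= 0.14 for "convert -s"
# Copy a named internal snapshot out of a qcow2 image (what the
# nova image-create code path shells out to):
qemu-img convert -f qcow2 -O qcow2 -s my_snapshot base.qcow2 clone.qcow2
```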
Raziquelivemoon: I bypass Keystone remember :p14:06
Raziquebreaks existing cactus project :/14:06
praefectRazique: yes, just got qemu-img 0.15.0 installed... going to retry14:07
livemoonrazique: you are so clover14:07
*** antenagora has joined #openstack14:08
foexleRazique:     token_id = result['access']['token']['id']14:08
foexleKeyError: 'access'14:08
livemoonclever14:08
*** kbringard has joined #openstack14:08
*** dirkx_ has quit IRC14:09
Raziquelivemoon: u'll laugh14:09
Raziquecheck the mail I just sent to the ML14:09
livemoontoday I run dashboard in virtualenv, it can running14:10
livemoonbut I run with apache, somethins error14:10
*** sandywalsh_ has joined #openstack14:10
praefectRazique: new test "nova image-create 298 up" now complains about Unknown file format "ami" ... with qemu-img 0.15.0 -----> http://paste.openstack.org/show/316314:10
*** joesavak has joined #openstack14:11
Raziquelivemoon: what are you on actually ?14:11
livemoonI don't understand14:11
Raziquepraefect: I'd say it's the format of the image but not sure14:11
Raziquepraefect: can u look into Glance DB14:12
*** shang has quit IRC14:12
Raziqueespecially the disk_format field ? (for the base image)14:12
*** imsplitbit has quit IRC14:12
Raziquelivemoon: with the devstack script ?14:12
livemoonno,just according README in git14:13
praefectRazique: could you just run "qemu-img" on your system and look at the last line "Supported formats"... do you see ami there? and what about the qemu-img command that gets run on your compute node, is it trying to output in ami format?14:13
*** cmagina has joined #openstack14:13
Raziquefoexle: can I see our nova.conf ?14:13
livemoonaccording to devstack, it cannot be running14:13
Raziquepraefect: mine uses raw http://paste.openstack.org/show/3164/14:13
foexleRazique: i think it's an 401 in keystone14:13
*** lts has joined #openstack14:14
foexlekeystone gets ec2key xxx-xxx-xx:<tenant_name>14:14
praefectRazique: thanks for that14:14
Raziquepraefect: ami means amazon machine image14:14
Raziquefoexle: whuch version of KS are u using ?14:14
livemoonhi14:14
*** dirkx_ has joined #openstack14:15
livemoonI want to know how openstack used in your countrie?14:15
Raziquepraefect: that's why I think u must have the wrong format defined for a disk14:15
livemooncontries14:15
*** imsplitbit has joined #openstack14:15
foexleRazique: hmpf .... where i can get this informations -.-14:16
Raziquefoexle: dunno… how did u install it ?14:16
foexlecant see it in log after restart14:16
livemoonI want develop openstack in our city and country14:16
livemoonhow to do it?14:16
foexleand keystone --version gets keystone <function version at 0x7fec002516e0>14:16
foexlei installed keystone as deb package14:17
praefectRazique: would you be so kind as to confirm that you have raw instead of ami in one of these column from the glance DB? http://paste.openstack.org/show/316514:17
foexleRazique: oh wait .... no deb package ... hmm14:18
*** cereal_bars has joined #openstack14:18
foexleno was manually git checkout14:18
praefectRazique:so I've got a problem with my glance: (1) it doesn't list anything with "glance index" and (2) it uses ami as an image format which does not make sense...14:18
Raziquepraefect: hehe that's what I told u :p14:19
Raziquehere I've raw14:19
Raziquedisk_format either raw or qcow14:19
Raziqueqcow/ qcow214:19
*** shawn has quit IRC14:19
guaquaanyone else having trouble with autocreation of accounts in swift?14:19
guaquai'm trying to go through this, but running into trouble: http://swift.openstack.org/howto_installmultinode.html14:20
*** supriya has joined #openstack14:20
Raziquepraefect: the image creation extracts the info from Glance db imho in order to create the same type for the clone14:20
Raziquepraefect: how do u populate ur glance repo ?14:20
Raziquevia nova-manage or glance directly ?14:21
praefectRazique: euca-bundle etc... only14:21
praefectnever imported an image otherwise14:21
Raziquepraefect: erf… don't use euca2ools for image upload14:21
praefectI won't if that doesn't work properly.. what's the preferred method?14:21
Raziqueit splits files, duplicate files into local and glance repo14:21
praefectit does and it takes forever14:21
Raziquepraefect: nova-manage image image_register/ kernel_register / ramdisk_register :)14:22
Raziquepraefect: yah, check /var/lib/nova/images14:22
RaziqueI bet u have images here14:22
*** localhost has quit IRC14:22
Raziqueyou can also use native glance tools, but I like nova-manage image's way14:22
Raziquepraefect: if you can, don't use euca2ools14:23
Raziqueor only to "consult" infos14:23
*** marrusl has joined #openstack14:23
*** rsampaio has quit IRC14:24
gnu111Are the node_timeout and conn_timeout options in swift conf files in seconds?14:24
*** ldlework has joined #openstack14:24
*** localhost has joined #openstack14:24
*** Nadeem has quit IRC14:25
*** ldlework has quit IRC14:29
*** ldlework has joined #openstack14:29
dweimergnu111: Yes.14:30
gnu111dweimer: Thanks. I am using the defaults now and testing with files > 200GB. Are there any recommended values?14:31
*** shawn has joined #openstack14:32
livemoonRazique14:34
livemoonbye14:34
livemoongood night14:34
Raziquegood bye my friend :)14:35
*** livemoon has left #openstack14:35
Raziquepraefect: ok so I think i've figured out a way to create the images14:35
praefectRazique: thanks to you I managed to clone a VM, I will boot it in 5 sec, I'm listening14:36
Raziqueso well If I create two files, and image, the last one is missing14:36
Raziqueif I create 5 files, I'll have 414:36
Raziqueso its like a delay happened everytime14:37
praefectok do you do sync on the shell before cloning?14:37
praefectsync;14:37
Raziquemmm what is that ?14:37
praefectI'm sure you know about that14:37
Raziquehaha14:37
praefectit flushes pendiong io to disk14:37
Raziqueno :D14:37
Raziqueahh14:37
Raziqueyes14:37
praefectwell... if that's your problem I'm happy I could help14:37
RaziqueI thought that was a nova stuff14:37
Raziqueok leme try14:38
Razique:p14:38
praefectI always do sync, an old habit that predates good linux systems that flushes io on reboot etc...14:38
Raziqueso I create my file, sync14:39
Raziquethen nova-image-create14:39
praefectthat's right14:39
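[editor's note: the sync habit praefect describes, as it applies here. Only the first two lines run (inside the guest); the nova call is left as a comment to show where the flush fits:]

```shell
# Inside the instance, just before snapshotting:
echo done > /tmp/deploy-marker   # the last file written to the guest
sync                             # flush the page cache to disk
# Then, from a management host with novarc sourced:
#   nova image-create <server> <snapshot-name>
# The marker now survives into the clone instead of being lost in cache.
```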
*** cmagina_ has quit IRC14:39
nRyHello, any Openstack people interested in some freelance work? :-)14:39
lelindo you know if is there any plan to support selinux on openstack?14:39
praefectRazique: I jsut booted my clonde, both files are there (up and down) and I'm pretty sure I did a sync... let me know14:41
*** shawn has quit IRC14:41
Raziqueman… I owe u one!14:41
RaziqueIt's working !!!14:41
Raziquesick14:41
RaziquenRy: sure :D14:42
nRyI am looking for someone who is familar with Openstack to help with a cool project ;-)14:42
RaziquenRy: can you give us few details or is it kinda private ?14:43
nRywell I can say, without giving too much away14:43
Raziqueofc :)14:44
nRywe have some servers running on AWS, and some of our own private servers14:44
nRyby private servers I mean ones we own that are located in other datacenters besides AWS resources14:44
nRywe want to use Openstack to create a centralized management system for all of the servers14:45
*** nerens has quit IRC14:45
*** dendrobates is now known as dendro-afk14:46
*** lborda has joined #openstack14:46
*** shang has joined #openstack14:47
*** dtroyer has joined #openstack14:49
nRyRazique: what do you think?14:49
sandywalsh_ttx around?14:49
ttxsandywalsh: yes14:50
sandywalsh_ttx, hey! I moved the UTC time for the orch meeting to keep the CST time the same. What is everyone else doing?14:50
sandywalsh_ttx (so I don't tread on toes)14:50
ttxsandywalsh: everyone else should not move time14:50
ttxSo far we tried to keep the meeting times consistent in UTC14:51
sandywalsh_ttx, ok, I'll keep the UTC the same and start an hour earlier ... thanks14:51
*** AlanClark has joined #openstack14:51
ttxsandywalsh: great14:51
praefectRazique: you rock.. the files are there but more importantly you showed me the path of nova-manage bundling14:52
dweimergnu111: It depends on your setup. If you have enough storage nodes to handle the load then you may not need to increase them at all. As your storage server I/O starts to go higher there's a larger chance of node_timeout. That's been our experience anyway.14:56
*** shawn has joined #openstack14:56
praefect..14:56
*** robbiew has joined #openstack14:57
dweimergnu111: When the different timeouts are hit, it will be logged on the proxies. Because of segmentation the actual size of the files doesn't matter as much. Consider that uploading a 200GB file is equivalent to uploading 40 5GB files if you use 5GB segments. If you use the standard swift client, I believe it will do 10 threads at once by default.14:59
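[editor's note: dweimer's segmentation arithmetic, worked through — 5 GB segments and 10 client threads are the defaults he cites:]

```python
SEGMENT_BYTES = 5 * 1024 ** 3    # 5 GB swift segment size
upload_bytes = 200 * 1024 ** 3   # the 200 GB file under test

# Ceiling division: a 200 GB upload becomes 40 segment PUTs...
segments = -(-upload_bytes // SEGMENT_BYTES)
# ...spread over the client's default of 10 concurrent threads:
batches = -(-segments // 10)

print(segments, batches)   # 40 segments, 4 waves of parallel PUTs
```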
*** AlanClark has quit IRC14:59
*** datajerk has quit IRC14:59
*** dirkx_ has quit IRC15:00
*** dirkx_ has joined #openstack15:00
*** dirkx_ has quit IRC15:01
*** rnirmal has joined #openstack15:01
uvirtbotNew bug: #887596 in glance "Allow syslog facility to be selected" [Undecided,New] https://launchpad.net/bugs/88759615:01
*** datajerk has joined #openstack15:02
*** dendro-afk is now known as dendrobates15:03
*** Rajaram has quit IRC15:04
Raziquepraefect: fantastic15:04
*** supriya has quit IRC15:05
*** marrusl has quit IRC15:06
gnu111dweimer: Once in a while I noticed when I run list, not all files show up. For instance, I have one directory with a 8G, 10G and 20GB file. If I run list repeatedly, sometime the 10G and 20G do not show up. is this a rsync issue?15:06
*** jwalcik has joined #openstack15:07
*** dolphm has joined #openstack15:07
*** dragondm has joined #openstack15:07
*** marrusl has joined #openstack15:08
*** dgags has quit IRC15:10
*** dgags has joined #openstack15:10
*** antenagora has quit IRC15:11
*** AlanClark has joined #openstack15:11
*** winston-d_ has joined #openstack15:15
winston-d_hi, all15:16
*** winston-d_ has left #openstack15:17
foexleRazique: i need your help again :(. I'm sorry ...15:18
dweimergnu111: It does sound like an issue with the container replicator, which I believe uses rsync. Make sure that you have the container-replicator and rsyncd running on all of the storage nodes. The other thing it could be is that some of the nodes may have outdated ring files.15:18
Raziquefoexle: sure!15:18
*** shawn has quit IRC15:18
foexleRazique: i think the problem are deeper .... if i try now with nova-manage image xxxxxx i get every time 401 -.-15:18
dweimergnu111: It's a good idea to monitor the ring file md5sums on all of the storage and proxy nodes. md5sum /etc/swift/*.ring.gz should be the same across all of the nodes.15:19
foexleRazique: i check all logs15:19
gnu111dweimer: ok. will do that.15:19
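[editor's note: dweimer's md5sum check, as a loop. Hostnames are placeholders and key-based ssh to every node is assumed; when all nodes agree, the distinct-checksum count equals the number of ring files:]

```shell
for host in proxy1 storage1 storage2 storage3; do
  ssh "$host" 'md5sum /etc/swift/*.ring.gz'
done | awk '{print $1}' | sort -u
# one line per distinct checksum -- any extra lines mean a stale ring
```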
foexleand nova-manage dont call keystone or nova-api or glance-register.log15:19
foexleso i dont see anything15:20
*** hggdh has quit IRC15:20
*** hggdh has joined #openstack15:20
*** dirkx_ has joined #openstack15:20
*** dendrobates is now known as dendro-afk15:21
Raziquefoexle: have u tried to bypass Keystone, or do u need it ?15:21
foexlei need it :/15:22
Raziqueok15:22
Raziqueleme 10 minutes to validate something here15:23
Raziquethen I'll look15:23
Raziquehelp*15:23
kbringardthe —allow_admin_api is just inherent now? doesn't work if you pass it a value?15:23
foexlehttp://pastebin.com/6tPyiELr15:23
Raziquewould u mind sharing SSH access ?15:23
foexleRazique: thx15:23
Raziquekbringard: it is required if you need to do operations like pause /suspend15:23
*** dendro-afk is now known as dendrobates15:23
Raziquekbringard: just use it as is into nova.conf15:23
kbringardright, but I mean15:23
kbringardit used to be --allow_admin_api=true15:24
Raziqueahhh15:24
Raziquesorry :D15:24
kbringardbut I'm seeing this15:24
kbringardhttps://skitch.com/aub17/gg6ba/dreamweaver15:24
RaziqueI use it like this "--allow_admin_api"15:24
kbringardyea, I'd always done =true15:24
kbringard:shrug:15:24
RaziqueI think passing an non-value option into conf files make them evaluated as true15:26
Raziqueexcept if you add=False15:26
Raziquebut I'm not that sure :/15:26
kbringardwell, the error specifically says "option does not take a valie"15:26
kbringardvalue*15:26
kbringardso I'd assume putting the flag in makes it true15:26
kbringardand not putting it in makes it false15:26
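[editor's note: kbringard's conclusion matches how boolean switches commonly parse — presence is the value. nova used python-gflags at the time (booleans there are `--flag`/`--noflag`); the same rejection of an explicit value can be reproduced with stdlib argparse, used here purely as an analogous illustration:]

```python
import argparse

parser = argparse.ArgumentParser()
# Boolean switch: presence means True, absence means False.
parser.add_argument("--allow_admin_api", action="store_true")

print(parser.parse_args(["--allow_admin_api"]).allow_admin_api)  # True
print(parser.parse_args([]).allow_admin_api)                     # False
# parse_args(["--allow_admin_api=true"]) errors out instead -- the
# option takes no value, much like the nova message in the screenshot.
```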
*** bcwaldon has quit IRC15:26
Raziquekbringard: yah u totally could be right on that15:27
kbringardno biggie, though, was just askin'15:27
*** epsas has joined #openstack15:29
*** shawn has joined #openstack15:29
*** code_franco has joined #openstack15:31
annegentlekbringard: I've asked about that as well (boolean or blank so true if present) and hear it works either way15:31
kbringardannegentle: doesn't seem to anymore :-)15:32
kbringardhttps://skitch.com/aub17/gg6ba/dreamweaver15:32
annegentlekbringard: hm there were flag changes recently in trunk?15:32
kbringardthat's running essex trunk from  2012.1~e1~20111021.11232-0ubuntu0ppa1~natty115:32
kbringardI'm not sure when it changed, I was troubleshooting that for someone else15:32
gnu111dweimer: You were right! The md5sum wasn't the same. I fixed that.15:33
kbringardso I'm not 100% sure what they updated from15:33
kbringardbut it was a diablo trunk build (I think in D2 or D3)15:33
kbringardnot that it matters, just good to know15:33
*** MarkAtwood has joined #openstack15:33
annegentlekbringard: sometime last week?15:34
kbringardit looks like his build is from 10/2115:34
kbringardhe updated from a pretty old diablo build, like June or July15:35
*** termie has quit IRC15:36
*** negronjl has joined #openstack15:37
*** jedi4ever has joined #openstack15:39
dweimergnu111: Changing the rings will replicate the data to the new locations. Once that is done you shouldn't have the differing container listings any more.15:41
*** troytoman-away is now known as troytoman15:41
*** vidd-away has quit IRC15:41
*** jdg has joined #openstack15:45
*** dirakx1 has quit IRC15:46
*** Arminder has joined #openstack15:48
*** cereal_bars has quit IRC15:48
*** rnorwood has joined #openstack15:48
*** supriya has joined #openstack15:48
*** shawn has quit IRC15:48
*** rsampaio has joined #openstack15:49
stevegjacobs_Razique - I was on earlier asking about using novaclient - about getting authenticated. Got completely sidetracked for a bit but I am trying to follow your instructions now15:50
*** termie has joined #openstack15:51
stevegjacobs_My installation is based on Kiall's ppa15:51
Raziquestevegjacobs_: hehe ok15:52
*** shang has quit IRC15:52
Raziquefoexle: u there ?15:52
stevegjacobs_but it modified - I have three nodes15:52
Kiallstevegjacobs_: have a look at the settings file from the scripts..15:52
foexleRazique: !:)15:52
Raziqueok ok15:52
*** shang has joined #openstack15:52
Kiallit has a pile of NOVA_* settings, those are what would go into novarc for python-novaclient auth15:52
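[editor's note: the NOVA_* variables Kiall means, collected into a novarc sketch. The names match the Diablo-era python-novaclient environment lookup, but every value is a placeholder — copy the real ones from the settings file in Kiall's scripts:]

```shell
# novarc sketch -- placeholder values throughout
export NOVA_USERNAME=admin
export NOVA_API_KEY=secretkey
export NOVA_PROJECT_ID=admin
export NOVA_URL=http://keystone-host:5000/v2.0/
export NOVA_VERSION=1.1
# then:  source novarc && nova image-list
```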
* Kiall has had a horrible day.. dell storage array decided to mark all disks as dead -_-15:53
*** Hakon|mbp has quit IRC15:54
stevegjacobs_ok Kiall - I'll have a look - sorry to hear about your woes!15:54
*** termie has quit IRC15:55
Kialllol .. everything is back in order now.. nothing lost :)15:55
*** adjohn has joined #openstack15:56
*** rsampaio has quit IRC15:56
*** n81 has joined #openstack15:56
*** dirkx_ has quit IRC15:57
*** rnorwood has quit IRC15:57
*** krow has joined #openstack15:59
*** rsampaio has joined #openstack15:59
*** rwmjones has joined #openstack16:00
*** termie has joined #openstack16:00
*** termie has quit IRC16:00
*** termie has joined #openstack16:00
*** rnorwood has joined #openstack16:02
*** nerens has joined #openstack16:02
*** po has joined #openstack16:03
*** rnorwood has quit IRC16:04
*** alperkanat has joined #openstack16:07
*** alperkanat has joined #openstack16:07
*** llang629 has joined #openstack16:09
alperkanatanybody here experienced in JBOD setup for HP Smart Array P410?16:09
*** llang629 has left #openstack16:09
*** PotHix has quit IRC16:11
stevegjacobs_Kiall: thanks - got that to work - made a new file with some of the credentials, sourced it and then nova image-list gives me an output16:11
KiallCool - BTW You'll need a pile more settings in it for the euca-* tools to work.. :)16:12
stevegjacobs_Still have some issues with dashboard - especially a strange error when trying to do snapshots16:12
*** reidrac has quit IRC16:12
Kiallis the nova-compute node running my packages, or the original ubuntu ones?16:13
stevegjacobs_Yeah I saw a document on getting both to work - some kind of plumbing job16:13
Kiallthere was a bug in the ubuntu packages, thats been fixed in stable/diablo (so, its included in my packages)16:13
stevegjacobs_umm - you're talking about that second node?16:13
uvirtbotNew bug: #887611 in horizon "Console Log should have a nice message if instance state isn't running" [Undecided,New] https://launchpad.net/bugs/88761116:13
*** hezekiah_ has joined #openstack16:14
stevegjacobs_I don't think I need to install everything from your packages on there do I?16:14
*** TheOsprey has quit IRC16:14
Kiallyea - the server with nova-compute installed16:14
Kiallit just needs the nova-copmpute package on the second node, which will bring in the right libs etc16:15
KiallStaceyTien: just got back to your email BTW .. sorry for the delay.. Its been one hell of a day ;)16:16
Kiallstevegjacobs_: *16:16
*** webx has joined #openstack16:16
*** jaypipes has quit IRC16:17
*** jaypipes has joined #openstack16:17
*** krow has quit IRC16:18
webxwhich package provides the swauth command?  I'm using the diablo-centos repo found here: http://yum.griddynamics.net/yum/diablo-centos/16:20
*** rnorwood has joined #openstack16:21
stevegjacobs_Kiall: when upgrading the packages, is it a good idea to add the 'recommended' packages as well? in this case: radvd python-suds16:23
*** clauden___ has quit IRC16:24
*** clauden_ has joined #openstack16:24
KiallI haven't installed any of them recommended packages, my dep lists came from the official ubuntu ones, so I left them as is and only fixed what was broken16:24
Kiallany of the*16:24
*** alperkanat has quit IRC16:26
*** cp16net has joined #openstack16:26
rwmjonesI want to boot a VM from external kernel + initrd + root disk ...  is there any documentation on doing this?16:27
*** shawn has joined #openstack16:28
*** alperkanat has joined #openstack16:28
*** alperkanat has joined #openstack16:28
alperkanatnotmyname: ping16:29
notmynamealperkanat: good morning. get anywhere with performance last night?16:29
*** jdg has quit IRC16:30
alperkanatnope.. i couldn't do the tests.. but i want to tell you something16:30
alperkanatleaseweb confirmed that our RAID controllers are backed with batteries16:30
*** stevegjacobs has joined #openstack16:30
alperkanathowever the we seem to have RAID0 and not JBOD16:30
epsashmm -- building out tools with right_aws now16:31
alperkanati now enabled write-cache battery on one of the servers16:31
alperkanathoping to have increase for writes16:31
epsasis anybody using right_aws with openstack?16:31
alperkanatnotmyname: do you think that RAID0 is responsible for slow performance?16:31
*** dprince has quit IRC16:32
*** guigui has quit IRC16:32
notmynamealperkanat: no. RAID0 should give you better performance, in fact16:32
*** nati2 has joined #openstack16:32
alperkanatnotmyname: hmm i see16:32
stevegjacobs_Kiall: updated compute node. Still have a problem with dashboard when trying to create snapshot16:32
stevegjacobs_Unexpected error: The server has either erred or is incapable of performing the requested operation.16:33
*** supriya has quit IRC16:33
notmynamealperkanat: since you have the battery backed cache, you should be able to go with the nobarrier option with no problem16:33
alperkanatok i'm enabling write cache for all storage nodes 1 by 1 now16:34
stevegjacobs_I tried this on a new small instance as well as the old one from pre-keystone - same error16:34
*** shawn has quit IRC16:34
notmynamealperkanat: once that's done, I think your best bet is to look at each component independently. do the walt test (and perhaps the bonnie test).16:34
hezekiah_anyone seeing libvirtd restarting over and over on natty?16:34
alperkanatnotmyname: component?16:34
stevegjacobs_Kiall - not sure what logs to look at to see if I can tell where the error is coming from16:35
*** bcwaldon has joined #openstack16:35
alperkanatnotmyname: can you try this on your servers? dd if=/dev/urandom of=testfile bs=1M count=10016:35
*** dolphm has quit IRC16:35
*** zaitcev has joined #openstack16:36
*** juddm has joined #openstack16:36
*** dolphm has joined #openstack16:36
hezekiah_anyone see anything like this?16:36
hezekiah_http://paste.openstack.org/show/3172/16:36
alperkanatmy result on a cache enabled server and disabled server is almost the same: 104857600 bytes (105 MB) copied, 18,2724 s, 5,7 MB/s ///// 104857600 bytes (105 MB) copied, 18,3694 s, 5,7 MB/s16:36
*** primeministerp has joined #openstack16:36
*** dobber has quit IRC16:37
*** jsavak has joined #openstack16:38
*** lelin has quit IRC16:39
alperkanatnotmyname: http://cl.ly/BcYg (logical drive status on storage node), write cache battery stat: http://cl.ly/Bcu316:40
notmynamealperkanat: I get 5.3 MB/sec on my VM. (but I don't think /dev/urandom is a good test)16:40
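notmyname's point can be sketched as a hedged alternative (assumptions: GNU dd on Linux): /dev/urandom is usually CPU-bound at a few MB/s, so it benchmarks the kernel RNG rather than the disk. Writing zeros with conv=fdatasync forces the data to disk before dd reports its timing, which is closer to what Swift's object writes actually see.

```shell
# measure disk write throughput, not RNG throughput;
# conv=fdatasync flushes to disk before dd prints its rate
dd if=/dev/zero of=testfile bs=1M count=100 conv=fdatasync
rm -f testfile
```

Running this on one cache-enabled and one cache-disabled node should then show a real difference if the battery-backed write cache is doing anything.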
alperkanatnotmyname: the reason i haven't tried walt and bonnie is that get-nodes does not provide correct information and i didn't know how to use them16:40
*** rsampaio has quit IRC16:41
*** dolphm has quit IRC16:41
*** lvaughn_ has joined #openstack16:42
*** lvaughn has quit IRC16:42
hezekiah_I'm building nova-compute boxes with puppet16:42
hezekiah_and I keep seeing16:42
hezekiah_Nov  8 10:41:06 m0005048 libvirtd: 10:41:06.349: 8030: error : virGetGroupID:2882 : Failed to find group record for name 'kvm': Numerical result out of range16:42
*** GheRivero_ has joined #openstack16:42
hezekiah_and libvirtd goes into a restart loop16:42
*** smeier00 has joined #openstack16:43
*** joesavak has quit IRC16:43
*** alperkanat has left #openstack16:43
*** alperkanat has joined #openstack16:43
webxhttp://paste.openstack.org/show/3173/16:44
webxI've been following the instructions here: http://swift.openstack.org/howto_installmultinode.html16:44
*** GheRivero_ has quit IRC16:45
*** GheRivero_ has joined #openstack16:45
webxnow that I'm to the part where I actually test, I'm seeing the error in the paste.  any ideas?16:45
*** smeier00 has left #openstack16:45
*** GheRivero_ has quit IRC16:46
notmynamewebx: can you paste your proxy config?16:46
*** GheRivero_ has joined #openstack16:46
webxproxy-server.conf ?16:46
*** rsampaio has joined #openstack16:46
alperkanatnotmyname: have you written something or maybe i missed it?16:47
webxnotmyname. http://paste.openstack.org/show/3174/16:47
notmynamealperkanat: nope16:47
*** popux has joined #openstack16:47
alperkanatnotmyname: ok..16:48
*** dolphm has joined #openstack16:48
notmynamewebx: add "account_autocreate = true" to the [app:proxy-server] section. then reload the proxy and you should be good16:49
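notmyname's fix as a config fragment - a minimal sketch, assuming a stock tempauth setup (the `use =` line comes from Swift's sample proxy-server.conf, not from webx's paste):

```ini
# /etc/swift/proxy-server.conf (fragment)
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
```

After editing, `swift-init proxy reload` (or restart) picks up the change.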
Kiallstevegjacobs_: humm.. I'd check the nova-compute, nova-api and glance logs16:50
webxnotmyname. same error after adding that and swift-init proxy restart16:51
notmynamewebx: hmm16:52
webxhttp://paste.openstack.org/show/3175/16:52
*** datajerk has quit IRC16:52
*** arBmind_ has joined #openstack16:55
*** rnirmal has quit IRC16:55
*** termie has quit IRC16:56
*** cp16net has quit IRC16:57
*** cp16net has joined #openstack16:57
*** arBmind has quit IRC16:58
*** arBmind_ is now known as arBmind16:58
*** jog0 has joined #openstack16:58
*** misheska has joined #openstack16:59
*** rsampaio has quit IRC16:59
*** jj0hns0n has joined #openstack17:00
*** termie has joined #openstack17:00
*** termie has quit IRC17:00
*** termie has joined #openstack17:00
*** datajerk has joined #openstack17:01
*** javiF has quit IRC17:01
*** lvaughn has joined #openstack17:02
*** lvaughn_ has quit IRC17:02
*** kaigan has quit IRC17:02
*** TheOsprey has joined #openstack17:03
*** datajerk has quit IRC17:05
stevegjacobs_Kiall: haven't checked logs yet but I get the same error when I run nova image-create <server> <name>17:05
stevegjacobs_The server has either erred or is incapable of performing the requested operation. (HTTP 500)17:06
KiallI haven't created a snapshot in a while.. let me see if I get the same error..17:06
*** termie has quit IRC17:06
Kiallthe image-create command completed without error, is that the point where you get the error?17:07
*** rsampaio has joined #openstack17:08
Kiallstevegjacobs: BTW .. get a real IRC client so you get notified when your name is mentioned ;)17:08
*** krow has joined #openstack17:08
*** ambo has quit IRC17:09
*** ambo has joined #openstack17:10
*** MarkAtwood has quit IRC17:12
stevegjacobsis this a real irc client?17:12
*** arun has quit IRC17:12
KiallI thought you were on one of the web browser based ones?17:12
*** obino has quit IRC17:12
Kiall(Must have been someone else .. whoops)17:12
notmynamewebx: check the storage servers to see if there is an error there. you could probably see if there are any other errors in the proxy logs17:12
KiallBTW - My snapshot has gone from queue -> snapshotting -> saving...17:13
stevegjacobsI have two open right now - a web based and a gnome one17:13
*** gyee has joined #openstack17:13
*** termie has joined #openstack17:13
Kiall->active17:13
stevegjacobsXchat gnome on ubuntu17:13
webxNov  8 17:13:35 netops-z3-a proxy-server Account HEAD returning 503 for []17:13
Kiallsame ;)17:13
webxNov  8 17:13:35 netops-z3-a proxy-server - - 08/Nov/2011/17/13/35 HEAD /v1/AUTH_system HTTP/1.0 503 - TempAuth - - - - - - 0.013517:13
webxnotmyname. that's all I see on the proxy server in messages..17:14
stevegjacobstbh  I've never irc'd much till the last few days17:14
*** arBmind has quit IRC17:14
*** misheska has quit IRC17:14
notmynamewebx: ok, check your account server logs17:15
webxnotmyname. there are no configured logs in the conf... wouldn't that mean that they default to syslog?17:16
notmynamewebx: yes. it should be in /var/log/syslog unless you've changed your syslog config17:16
*** tyska has joined #openstack17:16
webxit's /var/log/messages, but yea.. that's the place I'm looking17:17
stevegjacobs_Kiall: check this paste out - /var/log/nova-api.log http://paste.openstack.org/show/3176/17:17
tyskaHello guys!17:17
tyskawhat's up?17:17
webxNov  8 15:34:42 netops-z3-a-1 account-replicator Skipping sdb1 as it is not mounted17:17
webxNov  8 15:34:42 netops-z3-a-1 account-replicator Beginning replication run17:17
webxNov  8 15:34:42 netops-z3-a-1 account-replicator Replication run OVER17:17
webxhmm, maybe I mistyped the storage location ?17:18
gnu111webx: I had this error once. not sure exactly why. I rebuilt my system without RAID now. Do you see a "accounts" folder in your /srv/node/sda3 ?17:18
webxwould that cause what we're seeing?17:18
*** ccustine has joined #openstack17:18
gnu111webx: not sure. I had seven nodes. with one node proxy which was also storage node. When I ran curl, it created the accounts folder only in the proxy node not in the others.17:19
*** shawn_ has joined #openstack17:19
Kiallstevegjacobs_: humm thats a nova log?17:19
webxgnu111. no accounts folder.  I think I see why though17:19
*** krow has quit IRC17:20
webxDevices:    id  zone      ip address  port      name weight partitions balance meta17:20
webx             0     1   10.68.224.190  6002      sdb1 100.00       6144    0.0017:20
webxname should be sdb, not sdb117:20
webxI think17:20
Kiallstevegjacobs_: and, thats "/var/log/nova-api.log"? Not /var/log/nova/nova-api.log ?17:20
webxis there a way to edit that setting?17:20
*** dirkx_ has joined #openstack17:22
stevegjacobs/var/log/nova/nova-api.log17:22
gnu111webx: swift-ring-builder <builder-file> remove <ip_address>/<device_name>. Not sure if there is a modify function. So you have to remove the node and add it again then rebalance.17:22
Kiallstevegjacobs_: anyway .. the error is in glance it seems, rather than nova. Somewhere around "2011-11-08 17:10" in the glance api logs17:22
webxk, I'll just remove and re-add17:22
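gnu111's remove-and-re-add procedure, sketched under assumptions (builder file name, zone, port, and weight are taken from webx's paste above; there is no in-place rename, so the device must be dropped and re-added, then the rebuilt ring pushed out):

```shell
swift-ring-builder account.builder remove 10.68.224.190/sdb1        # drop the wrong device name
swift-ring-builder account.builder add z1-10.68.224.190:6002/sdb 100   # re-add with the right name
swift-ring-builder account.builder rebalance
# then copy the rebuilt account.ring.gz to every proxy and storage node
```

The same would apply to the container and object builders if they carry the same typo.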
Kialllook for "DEBUG [glance.api.middleware.version_negotiation] Processing request: POST /v1/image"17:22
*** dolphm has quit IRC17:22
tyskaguys, i'm having trouble connecting to instances running on server2 of a dual node openstack architecture, can anyone help me?17:23
*** dolphm has joined #openstack17:23
Kiallanything interesting in/around it?17:23
*** maplebed has joined #openstack17:23
*** code_franco has quit IRC17:24
*** Ruetobas has quit IRC17:24
*** dprince has joined #openstack17:27
*** javiF has joined #openstack17:27
*** dolphm has quit IRC17:27
*** Ruetobas has joined #openstack17:28
*** heckj has joined #openstack17:28
stevegjacobsKiall: http://paste.openstack.org/show/3178/ - ends with a key error?17:29
stevegjacobsThis is from /var/log/glance/api.log17:30
KiallHeh - I just had a double take at that log, thinking it was from my servers..17:30
Kiallthe IP is nearly identical ;)17:30
*** dolphm has joined #openstack17:31
*** neogenix has joined #openstack17:31
*** misheska has joined #openstack17:32
Kiallstevegjacobs_: okay the next line that should be in your logs, before the stacktrace, is17:33
Kiall2011-11-08 17:07:11    DEBUG [glance.registry] Returned image metadata from call to RegistryClient.add_image():17:33
KiallFrankly - I don't know where that calls out to though...17:33
webxgnu111/notmyname: it was that my partition name was wrong.  after I deleted, re-added, then re-balanced, the account creation seems to work now17:34
gnu111webx: great!17:34
*** shang has quit IRC17:35
*** cmagina has quit IRC17:35
uvirtbotNew bug: #887672 in glance "internationalization bug in exceptions - missing import" [Undecided,New] https://launchpad.net/bugs/88767217:35
webxalthough I don't follow how users are managed, I'll remain ignorant for the sake of continuing the testing17:35
*** mtaylor has quit IRC17:36
*** mtaylor has joined #openstack17:36
*** mtaylor has quit IRC17:36
*** mtaylor has joined #openstack17:36
*** ChanServ sets mode: +v mtaylor17:36
*** misheska has quit IRC17:38
Kiallstevegjacobs_: it might be that there is a mix of packages installed.. can you pastebin the output of `dpkg -l | grep -E "(nova|glance|openstack)"` on both servers?17:38
*** popux has quit IRC17:39
*** datajerk has joined #openstack17:40
*** neogenix has quit IRC17:41
*** neogenix has joined #openstack17:41
*** jiva has quit IRC17:42
*** MarkAtwood has joined #openstack17:42
*** jiva has joined #openstack17:43
*** dirkx_ has quit IRC17:43
*** dolphm has quit IRC17:44
*** exprexxo has joined #openstack17:44
*** dolphm has joined #openstack17:45
*** dysinger has joined #openstack17:45
*** jdurgin has joined #openstack17:46
*** dolphm_ has joined #openstack17:46
*** thingee has joined #openstack17:47
*** BasTichelaar has joined #openstack17:48
*** haji has joined #openstack17:48
BasTichelaaranyone here who can help me with zones implementation in nova?17:48
hajihey Kiall how do i use the keystone_data script17:48
*** dolphm has quit IRC17:49
*** alperkanat has quit IRC17:50
*** PotHix has joined #openstack17:50
*** rnorwood has quit IRC17:51
*** nacx has quit IRC17:53
stevegjacobsKiall: l don't see too much difference in the packages that are on both servers17:53
stevegjacobshttp://paste.openstack.org/show/3179/17:53
webxis there a way to convert utilities like s3cmd to use a local swift installation?17:53
webx(as the back-end store, instead of s3)17:54
*** obino has joined #openstack17:56
*** tyska has quit IRC17:56
*** Nathariel has quit IRC17:57
*** hugokuo has joined #openstack17:57
Kiallstevegjacobs: I think I see the issue17:57
Kiallon the compute node, you have the original ubuntu python-glance package installed17:58
stevegjacobsI just noticed that too17:58
stevegjacobsdoes it need to be there at all?17:58
Kialland, python-novaclient17:58
Kialloff the top of my head, I'm not sure..17:59
hugokuohi all18:00
*** javiF has quit IRC18:00
Kiallstevegjacobs: yea, it does need to be installed.. (Just checked)18:01
KiallI'd bet if you apt-get install python-glance python-novaclient things will work...18:01
*** arun has joined #openstack18:01
*** arun has joined #openstack18:01
*** dirkx_ has joined #openstack18:03
stevegjacobsjust did that in x.x.x.199 - trying things out now18:03
*** tyska has joined #openstack18:03
stevegjacobsgoing to restart apache18:03
stevegjacobsand also try it from nova-client18:03
KiallThe controller node had the right stuff, it looks like it was just the compute node that needed the updates+a restart of the nova components18:04
*** cp16net has quit IRC18:05
*** exprexxo has quit IRC18:06
*** tyska has quit IRC18:06
webx(root@netops-z3-a-2) ~ > du -sh splunksearch-backups.tar18:06
webx877M    splunksearch-backups.tar18:06
webx(root@netops-z3-a-2) ~ > swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass upload bbartlett splunksearch-backups.tar18:06
webxObject PUT failed: https://10.68.224.147:8080/v1/AUTH_system/bbartlett/splunksearch-backups.tar 503 Service Unavailable18:06
webxhmm.. is there a filesize limit that I may be hitting here?18:07
*** Hakon|mbp has joined #openstack18:07
*** dgags has joined #openstack18:07
stevegjacobsnope - still no joy either from dashboard or novaclient :-(18:08
*** lorin1 has quit IRC18:08
Kiallhumm..18:08
*** mdomsch has quit IRC18:09
Kiallstevegjacobs: you restarted the various nova-* services on the compute node after installing?18:10
BasTichelaaranyone who can help me with zones and the basescheduler?18:12
*** Hakon|mbp has quit IRC18:13
*** nati2 has quit IRC18:15
*** nati2 has joined #openstack18:15
*** Ryan_Lane has joined #openstack18:16
*** hugokuo has left #openstack18:16
*** shang has joined #openstack18:16
uvirtbotNew bug: #887692 in nova "The QuantumManager could use some refactoring" [Undecided,Confirmed] https://launchpad.net/bugs/88769218:18
*** jdg has joined #openstack18:20
*** negronjl has quit IRC18:22
*** obino has quit IRC18:24
*** obino has joined #openstack18:25
*** Hakon|mbp has joined #openstack18:25
*** stevegjacobs has quit IRC18:27
webxhttp://paste.openstack.org/show/3180/18:27
webxis there a way to enable debug or something so that the logs will tell me which storage server(s) it's trying to connect to?  trying to troubleshoot this is a magical nightmare right now18:27
*** dirkx_ has quit IRC18:30
*** hadrian has joined #openstack18:30
*** shang_ has joined #openstack18:30
uvirtbotNew bug: #887706 in quantum "Exlude pyc files from pep8 verifications" [Undecided,New] https://launchpad.net/bugs/88770618:31
hajikiall18:32
Kiallyup?18:32
hajihow do i use the keystone_data script18:32
Kiallthe one from my repo, or the devstack one?18:33
hajithe one from your repo18:33
*** thingee has left #openstack18:33
KiallOh sure.. you just run it, once the keystone.sh script has already been run...18:33
hajibut...18:34
hajiwhere is keystone.sh18:34
Kiallthey're all in the repo ;) https://github.com/managedit/openstack-setup18:34
hajii just installed the packages18:34
Kiallah..18:34
Kiallhave a look at that link above..18:35
Kiallit handles installing everything (doesn't matter if you have the packages installed already), generates some config files, and guides you through most of the steps to install everything..18:35
hajioohhh18:36
hajiNICE!18:36
hajithanks18:36
uvirtbotNew bug: #887708 in nova "xenapi returns HANDLE_INVALID randomly" [Undecided,New] https://launchpad.net/bugs/88770818:36
KiallYea - I found myself doing the same steps over and over, forgetting 1 each time, and decided to just script it ;)18:36
*** lorin1 has joined #openstack18:36
*** fulanito has joined #openstack18:37
hajigreat work!18:38
Kiall;)18:38
KiallThe scripts are all of 100 lines or so.. It's really not much, it's mostly just generating config files based on some templates, and then giving a list of instructions18:38
*** ccustine has quit IRC18:39
webxI seem to be able to create and list containers, but I am not able to upload files to my swift cluster18:39
*** clopez has quit IRC18:40
webxand then I see really fun behavior where a container shows up in a list, but if I ask to see the contents of the container, swift reports it as non-existent18:40
hajikiall: in the install instructions, why don't you install nova scheduler and objstore?18:41
webxfor example: http://paste.openstack.org/show/3181/18:41
uvirtbotNew bug: #887712 in openstack-qa "instance_update with uuid as instance_Id and metadata fails" [Medium,Confirmed] https://launchpad.net/bugs/88771218:41
*** mszilagyi has joined #openstack18:41
*** hadrian has quit IRC18:43
dolphm_sandywalsh: my apologies for my order word of choice18:44
Kiallhaji: really? whoops ;)18:44
Kiallthe scripts do install it I'm sure though ;)18:44
hajioh18:44
KiallI should probably update the PPA instructions to just point at the scripts ;)18:44
*** py___ has joined #openstack18:45
*** _jeh_ has joined #openstack18:45
*** _jeh_ has joined #openstack18:45
*** fulanito has quit IRC18:45
*** shang has quit IRC18:45
*** dgags has quit IRC18:45
*** datajerk has quit IRC18:45
*** hezekiah_ has quit IRC18:45
*** primeministerp has quit IRC18:45
*** rwmjones has quit IRC18:45
*** jeh has quit IRC18:45
*** jamespage has quit IRC18:45
*** nci has quit IRC18:45
*** crayon has quit IRC18:45
*** py has quit IRC18:45
*** pfibiger has quit IRC18:45
*** paltman has quit IRC18:45
*** chadh has quit IRC18:45
*** n0ano has quit IRC18:45
*** troytoman has quit IRC18:45
*** pvo has quit IRC18:45
*** notmyname has quit IRC18:45
*** chmouel has quit IRC18:45
*** dgags has joined #openstack18:45
*** notmyname has joined #openstack18:46
*** ChanServ sets mode: +v notmyname18:46
*** fulanito has joined #openstack18:46
*** shang has joined #openstack18:46
*** datajerk has joined #openstack18:46
*** primeministerp has joined #openstack18:46
*** hezekiah_ has joined #openstack18:46
*** rwmjones has joined #openstack18:46
*** jamespage has joined #openstack18:46
*** nci has joined #openstack18:46
*** crayon has joined #openstack18:46
*** pfibiger has joined #openstack18:46
*** paltman has joined #openstack18:46
*** chadh has joined #openstack18:46
*** n0ano has joined #openstack18:46
*** troytoman has joined #openstack18:46
*** pvo has joined #openstack18:46
*** chmouel has joined #openstack18:46
*** hezekiah_ has quit IRC18:46
*** hezekiah_ has joined #openstack18:46
*** nati2_ has joined #openstack18:51
*** nati2 has quit IRC18:51
hajikiall: the warning is a joke right?18:51
hajiahahha18:51
Kiall;)18:51
Kiallkinda18:51
mtuclouddoes anyone know why i cant access the public ip of an instance even with it properaly allocated, associated and put in security group?18:51
mtucloudprivate ip works fine18:52
*** darraghb has quit IRC18:52
Kiallmtucloud: not much to go on there ;) does `ip addr show` show the IP listed anywhere?18:52
mtucloudKiall:  yep.18:53
mtucloudunder eth0:   inet 192.168.0.100/24 brd 192.168.0.255 scope global eth018:54
mtucloud    inet 192.168.0.225/32 scope global eth018:54
Kialland `iptables -t nat -L` has a NAT rule for it?18:54
mtucloudthe ip the vm is associated with is 192.168.0.25518:54
*** code_franco has joined #openstack18:54
mtucloudDNAT       all  --  anywhere             192.168.0.225       to:10.0.0.218:54
mtucloudthe 10.0.0.0 network is my eth1 private network18:55
Kialland 10.0.0.2 is the instances private IP?18:55
mtucloudyes sir18:55
mtucloudand i can ssh and ping to that fine18:55
mtucloudthis is just a dual node arch as well18:55
Kialland, can you ping the public IP from the compute node?18:55
hajifulanito: did u installed openstack already??18:55
mtucloudlet me check18:56
Kiallor just unable to get to it from outside the compute/network node18:56
fulanitoyes its working fine18:56
hajifulanito: awesome18:56
*** fulanito has quit IRC18:56
mtucloudi can get to 10.0.0.2 from the controller node fine18:56
mtucloudbut the network node is also running on the same node18:57
mtucloudi cant ping the public ip from the compute node18:57
*** rnorwood has joined #openstack18:57
Kiallmtucloud: weird.. all the basics look covered off18:58
Kiallyou sure the instance is part of the security group you set the rules on?18:58
*** hggdh has quit IRC18:59
mtucloudya its just the default group18:59
*** haji has quit IRC18:59
mtucloudbut hold on - for the compute node iptables, should i be seeing those specific rules for that public ip?19:00
*** jeromatron has joined #openstack19:00
jeromatronjust wanting to make sure I'm not in left field - openstack was deployed as part of rackspace UK from day 1, right?19:01
KiallOff the top of my head, I can't remember .. And - I've gotta run, sorry :)19:01
*** hggdh has joined #openstack19:01
*** code_franco has quit IRC19:01
mtucloudkiall, thanks for the help19:02
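The checks Kiall walked mtucloud through, gathered into one hedged sketch (IPs from mtucloud's paste; interface names and the security-group commands are assumptions - a missing ICMP/SSH rule in the default group is a common reason a floating IP pings privately but not publicly):

```shell
ip addr show eth0 | grep 192.168.0.225       # floating IP bound as a /32 alias?
iptables -t nat -L -n | grep 192.168.0.225   # DNAT floating -> fixed (10.0.0.2)?
euca-authorize -P icmp -t -1:-1 default      # open ping in the default group, if missing
euca-authorize -P tcp -p 22 default          # open ssh, if missing
```

If the DNAT rule and the /32 alias both exist but the IP is still unreachable from outside, the next suspects are the upstream route to the floating range and a missing SNAT rule for return traffic.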
*** cp16net has joined #openstack19:03
DuncanTjeromatron: I'm not sure, but articles like http://www.theregister.co.uk/2011/11/08/rackspace_openstack_private_cloud/ suggest not...19:03
*** tjikkun has quit IRC19:03
*** AlanClark has quit IRC19:05
jeromatronDuncanT: Okay, when I worked at rackspace, when they talked about the UK DC, it always sounded like they were planning on doing openstack there from day 1.  maybe it was just openstack files or something...19:05
*** AlanClark has joined #openstack19:05
annegentlejeromatron: probably Cloud Files as an OpenStack project19:05
annegentlebut definitely always question The Register's journalistic chops :)19:06
DuncanTjeromatron: I've never worked for rackspace, so I can only guess based on their blogs and articles like the above19:06
BasTichelaaranyone who can help with zone scheduling within nova?19:06
jeromatronannegentle: yeah - that would make sense.  (this is jeremy, previously from the austin office btw, worked on cassandra/hadoop there)19:06
gnu111webx:  I had the exact same issue you are describing. I wasn't able to solve it. Not sure if this is a network issue or not. Did you check rsyncd settings? I also tried chmod 777 /srv/node. you can give that a try.19:07
annegentlehey jeromatron nice nick :)19:07
jeromatronDuncanT: yeah - that's why I wanted to clarify since it sounds like there's misinformation out there, at least partly.  Would be good to clarify I think where it's deployed.19:07
*** tjikkun has joined #openstack19:07
*** tjikkun has joined #openstack19:07
jeromatronannegentle: thanks :) just something unique.19:07
*** hezekiah_ has quit IRC19:08
notmynamewebx: are you running all the consistency servers (swift-init rest start)? are the rings consistent? do they all have the same md5 hash?19:08
DuncanTjeromatron: I'm intrigued by the answer... I'm off now but I'll read the channel logs later to see if you get a response from a rackspacer19:08
*** cereal_bars has joined #openstack19:14
*** bhall has joined #openstack19:14
*** cereal_bars has quit IRC19:20
*** datajerk has quit IRC19:21
*** dirkx_ has joined #openstack19:22
*** dnjaramba has quit IRC19:24
*** aliguori has quit IRC19:25
*** syah has joined #openstack19:27
*** krow has joined #openstack19:27
*** hezekiah_ has joined #openstack19:29
*** reed has joined #openstack19:30
*** krow has quit IRC19:30
*** cmagina has joined #openstack19:31
webxnotmyname. I use swift-init all start.  would that work the same?19:33
*** smeier001 has joined #openstack19:33
*** nati2 has joined #openstack19:34
*** nati2_ has quit IRC19:35
*** cmagina has quit IRC19:37
*** jakedahn has joined #openstack19:38
*** krow has joined #openstack19:39
*** cmagina has joined #openstack19:39
webxnotmyname. I only have one storage node with anything in it.  shouldn't that be distributed out?19:39
*** jj0hns0n has quit IRC19:40
webxahh.. I didn't chown /srv/node like I was supposed to on all of them19:42
hezekiah_has anyone seen this?19:43
hezekiah_Nov  8 13:43:16 m0005048 libvirtd: 13:43:16.085: 11810: error : virGetGroupID:2882 : Failed to find group record for name 'kvm': Numerical result out of range19:43
*** marrusl has quit IRC19:44
*** vladimir3p has joined #openstack19:45
*** smeier001 has left #openstack19:45
notmynamewebx: ya, all start starts everything on that box (that may not actually be what you want, though)19:45
webxyea, I just don't put a proxy config on the storage nodes and then start everything19:46
webxseems like an easier way than manually starting every service19:46
webxnotmyname. is there a filesize limitation that we can configure somewhere?19:48
notmynamewebx: yes. there is a constant in the code (swift/common/constraints.py) that specifies the maximum object size. it's set to 5GB. I wouldn't recommend changing it unless you have a very good understanding of your use case19:49
notmynamewebx: however, larger objects can be saved with the object manifest feature19:50
notmynamehttp://swift.openstack.org/overview_large_objects.html19:50
*** stuntmachine has quit IRC19:51
*** primeministerp has quit IRC19:51
webxI understand our use case, but I don't know what sort of impact it will have on the swift system if we up that to the 25gb neighborhood19:51
*** oubiwann has quit IRC19:51
uvirtbotNew bug: #887743 in keystone "User within ServiceCatalog need to change." [Undecided,New] https://launchpad.net/bugs/88774319:51
notmynamewebx: how varied will the object sizes be in your cluster? 0 bytes to 25 GB? or something with a much smaller range?19:52
*** marrusl has joined #openstack19:52
*** andyandy has quit IRC19:52
*** andyandy_ has quit IRC19:52
webxyea, it will vary greatly.  primary use in terms of file count will be in the 1mb or less, but the heaviest usage storage-wise will be 10-25gb backup blobs19:53
notmynameah ok19:53
uvirtbotNew bug: #887739 in keystone "Issues in Rackspace style Legacy Authentication" [Undecided,New] https://launchpad.net/bugs/88773919:53
uvirtbotNew bug: #887740 in keystone "Elements in RAX-KSADM-users.xsd  not used in contract." [Undecided,New] https://launchpad.net/bugs/88774019:53
*** stevegjacobs has joined #openstack19:54
notmynamewebx: then I strongly recommend not changing it from 5GB. there is a high-level explanation of why you don't want to change it in an old blog post of mine (http://programmerthoughts.com/openstack/the-story-of-an-openstack-feature/). the summary is that the variance of fullness across all of your storage volumes will be greater, and therefore it's much harder to capacity plan and efficiently use all of your disk space. that and a 25GB upload generally would take so long the opportunity for a connection problem (and therefore losing all of the data uploaded so far) is much higher19:55
webxnotmyname. do I understand right in that swift would prefer to have non-raided drives as separate partitions instead of raid'ing them together and making a single partition ?19:55
*** jakedahn has quit IRC19:55
notmynamewebx: correct (in the general sense)19:55
notmynamewebx: raid5 or raid 6 + swift is a bad idea. performance will suffer and raid rebuild times take forever19:56
*** rsampaio has quit IRC19:56
webxyea.. I just tested with an 877M file and it took 60 seconds.19:56
notmynamewebx: but raid 10 for dedicated account and container nodes could be a very good idea for large clusters19:56
webx(~14mb/sec... and it's raid6)19:57
*** alperkanat has joined #openstack19:57
*** alperkanat has joined #openstack19:57
alperkanatnotmyname: http://paste.openstack.org/show/3182/19:58
webxnotmyname. thanks for the 5gb recommendation.  can you point me to docs on how swift deals with files >5gb?19:58
notmynamewebx: the link I pasted above about large objects19:59
*** rsampaio has joined #openstack19:59
*** j^2 has quit IRC19:59
notmynamehttp://swift.openstack.org/overview_large_objects.html19:59
webxk, will read.19:59
*** jakedahn has joined #openstack19:59
*** rsampaio has quit IRC19:59
rmkHas anyone been able to get vnc working with the diablo dash?20:00
*** termie has quit IRC20:00
notmynamewebx: you are just building a POC right now, right? as your cluster gets very large, there are certain things you will need to keep in mind and lessons we have learned that we can share20:00
rmkI always get "server disconnected" immediately.  Several different setups.20:00
webxnotmyname. yea, POC right now.  only 4 storage nodes and 1 proxy, but one of the tests we want to do are with those 10-25gb backups.20:03
*** shang_ has quit IRC20:03
*** shang has quit IRC20:03
notmynamewebx: ya, that's not an issue.20:03
*** termie has joined #openstack20:04
*** termie has joined #openstack20:04
webxnotmyname. I can set the segment size to 5gb, and then swift does all the magic behind the scenes when I retrieve it?20:04
notmynamewebx: "swift" as in the cli tool that ships with the code?20:04
webxright.. I'm guessing there's more work being done on the backend as well.  basically, my user would just have to know the filename to retrieve and they'll be good to go20:05
*** rsampaio has joined #openstack20:06
webx.. right? :)20:06
webxaccording to this: http://swift.openstack.org/overview_large_objects.html -- when you download the file, all you need is the container and filename.  I was just checking to make sure we wouldn't have to give them any more info about how we sliced the 'bigfile' when uploading20:07
*** po has quit IRC20:07
notmynamewebx: right. as a smart client, the swift cli tool can split the data and upload the parts for you. swift the storage system (the server-side) doesn't do any automatic splitting of large objects20:07
notmynamewebx: correct. just the container and filename (of the manifest file)20:08
webxright, thanks20:08
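The flow notmyname describes, as a hedged sketch (credentials, container, and filename are from webx's earlier paste; -S is the swift CLI's segment-size option, in bytes - the client uploads the pieces plus a manifest object, and download needs only the container and the manifest's name):

```shell
# upload in 5 GB segments; the tool writes the segments and a manifest
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass \
    upload -S 5368709120 bbartlett splunksearch-backups.tar

# download by container and (manifest) object name only
swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass \
    download bbartlett splunksearch-backups.tar
```

The server side does no automatic splitting; a client that uploads without -S past the 5 GB cap simply gets an error.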
webxis there a way to shortcut the size?  ie, -s 1gb instead of -s 107374182420:09
webxor 1g, 100m, etc.20:09
notmynamewebx: not sure. check the --help message20:09
*** stevegjacobs has quit IRC20:10
*** stevegjacobs has joined #openstack20:12
stevegjacobshi20:12
*** arBmind has joined #openstack20:14
*** rsampaio has quit IRC20:15
Spirilisdoesn't look like there is a shorthand for that -S option fyi20:15
Spirilisthe python code just takes int(segment_size) without analyzing for suffixes from what I can tell by cursory glance20:15
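Since the CLI takes only a raw byte count, a tiny wrapper can provide the missing shorthand. This is a hypothetical helper, not part of the swift tool (the trailing `swift upload` line shows intended usage with names from the paste above):

```shell
# to_bytes: convert a human-friendly size (1g, 100m, 512k, or plain bytes)
# into the raw byte count that `swift upload -S` expects
to_bytes() {
    case "$1" in
        *[gG]) echo $(( ${1%?} * 1024 * 1024 * 1024 )) ;;
        *[mM]) echo $(( ${1%?} * 1024 * 1024 )) ;;
        *[kK]) echo $(( ${1%?} * 1024 )) ;;
        *)     echo "$1" ;;
    esac
}

to_bytes 1g    # prints 1073741824
# e.g.: swift ... upload -S "$(to_bytes 1g)" bbartlett splunksearch-backups.tar
```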
alperkanatnotmyname: http://paste.openstack.org/show/3183/20:16
notmynamealperkanat: use a much bigger concurrency (like 50) and something more like 1000+ for the number of requests20:17
alperkanatnotmyname: i'm out of options about this performance problem. this afternoon, i retried creating a new account, started a new proxy at 9090 without SSL, checked if the direct proxy url was correct and still no change20:18
webx(root@netops-z3-a-2) ~ > swift --version20:19
webxswift 1.020:19
webxis that the latest ?20:19
*** po has joined #openstack20:20
*** aliguori has joined #openstack20:20
uvirtbotNew bug: #887762 in quantum "document keystone integration in Admin Guide" [Undecided,New] https://launchpad.net/bugs/88776220:20
notmynamewebx: unfortunately that's the version of the cli tool, not the installed swift version. check dpkg (or whatever) for you installed version of swift. the current version is 1.4.320:21
alperkanatnotmyname: http://paste.openstack.org/show/3184/20:21
webx(root@netops-z3-a-2) ~ > rpm -qf `which swift`20:21
webxopenstack-swift-1.4.3-b447.noarch20:21
webxyea, I have the latest.. just confusing versioning I guess20:22
notmynamealperkanat: that looks better. do all storage nodes give similar results?20:22
alperkanatnotmyname: checking20:22
notmynamewebx: ya, sorry about that20:22
*** Turicas has quit IRC20:24
webxnotmyname. is there a way to default the split size automatically?20:24
alperkanatnotmyname: http://paste.openstack.org/show/3185/20:24
notmynamewebx: I don't think so20:24
webxk20:24
notmynamealperkanat: so the hard question is "why are your storage servers getting so much better performance than the entire set of servers?" perhaps that means there is a config issue in the proxy server20:25
*** PotHix has quit IRC20:26
alperkanatpasting proxy conf20:26
alperkanatnotmyname: http://paste.openstack.org/show/3186/20:27
notmynamealperkanat: not much there to go wrong20:28
alperkanatnotmyname: what else would it be?20:30
alperkanatnotmyname: http://paste.openstack.org/show/3187/20:31
notmynamealperkanat: it's hard to test things in isolation because you are already running it in prod. it could be something in the proxy config (check the available options in the docs or in the sample proxy config file). it could be networking settings somewhere in your cluster20:31
uvirtbotNew bug: #887766 in horizon "Tenant switch list is empty after architecture merge" [Undecided,New] https://launchpad.net/bugs/88776620:31
uvirtbotNew bug: #887767 in horizon "Tenant switch list shouldn't change when on syspanel tenants panel" [Undecided,New] https://launchpad.net/bugs/88776720:31
uvirtbotNew bug: #887768 in horizon "Duplicate code in auth views" [Undecided,New] https://launchpad.net/bugs/88776820:31
uvirtbotNew bug: #887770 in horizon "user_home function in dashboard views redirects to wrong dashboard name" [Undecided,New] https://launchpad.net/bugs/88777020:31
*** krow has quit IRC20:32
webxnotmyname. do you guys have any benchmarks on expected performance given specific hardware ?20:34
alperkanatnotmyname: the only networking change we made was for NAT. the storage nodes go online through the proxy (for testing purposes and system updates)20:35
alperkanatnotmyname: and you just checked my prod. proxy conf which seems ok20:35
alperkanati know it's hard to test but i'm clueless why this may happen20:36
notmynamewebx: there are a lot of variables. generally, think about 1K req/sec/proxy as a decent number (-ish)20:36
notmynamewebx: but again, there are a ton of factors there20:37
webxnotmyname. yea, and what about throughput?  as an example, on a raid10 device over local gigabit, I just transferred a file at ~14mb/sec which seems very low to me.20:38
webxbut maybe that's about right?20:38
*** dolphm_ has quit IRC20:38
*** stuntmac_ has joined #openstack20:38
*** dolphm has joined #openstack20:38
*** clopez has joined #openstack20:40
*** dolphm_ has joined #openstack20:40
*** dolphm has quit IRC20:43
*** haji has joined #openstack20:45
hajikiall: the settings file theres a bridge variable, you dont use the vlan set up?20:46
hezekiah_ugh20:46
hezekiah_I can't get any traction on this libvirt-bin issue20:46
hezekiah_Nov  8 14:17:11 m0005048 libvirtd: 14:17:11.796: 12918: error : virGetGroupID:2882 : Failed to find group record for name 'kvm': Numerical result out of range20:46
hezekiah_it looks like it just can't find the group20:46
hezekiah_but that group is  there20:46
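hezekiah_'s error comes out of libvirt's virGetGroupID, which resolves the name via getgrnam_r. "Numerical result out of range" is ERANGE, which from getgrnam_r classically means the supplied buffer was too small for the group entry (e.g. a 'kvm' group with many members), not that the group is missing — a hedged guess at the diagnosis, not a confirmed one. A minimal check of the same NSS lookup from Python, which sizes its buffer itself:

```python
# Sanity-check the same group lookup libvirt's virGetGroupID performs.
# If this succeeds for 'kvm' while libvirtd reports ERANGE, that supports
# the too-small-buffer theory (an assumption, not a confirmed libvirt bug
# on this particular box).
import grp

def resolve_group(name):
    """Return the gid for *name*, or None if NSS can't resolve it."""
    try:
        return grp.getgrnam(name).gr_gid
    except KeyError:
        return None
```

Running resolve_group('kvm') on the affected host would separate "group truly absent" from "lookup failing for another reason".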
notmynamewebx: give me a bit. multitasking now20:47
webxnotmyname. not a problem20:49
*** rsampaio has joined #openstack20:51
*** alperkanat has quit IRC20:56
notmynamewebx: sorry I was wrong with the earlier numbers20:56
*** Matzie has joined #openstack20:57
notmynamewebx: think 1-2K req/sec per proxy (with 4 storage nodes). the network throughput should be roughly whatever your NICs can support20:57
webxso you guys don't anticipate much slowdown with the processing and storage of the files?20:58
notmynamewebx: no. swift is fast enough to saturate your network before CPU runs out20:58
notmynamewell, don't run it on a 386 or anything :-)20:58
*** troytoman is now known as troytoman-away20:59
*** dprince has quit IRC20:59
webxhmm, interesting... I wonder why I have such poor performance20:59
webxI'll dig20:59
Matziehi... qn re nova volume... does it need exclusive access to the LVM volume group defined as volume_group (ie, "nova-volumes" usually, but I want to use a different value)20:59
*** arBmind_ has joined #openstack21:00
*** rnorwood has quit IRC21:02
*** rnorwood1 has joined #openstack21:02
*** mtucloud has quit IRC21:02
mjforkMatzie: i ran it in a shared group21:04
*** j^2 has joined #openstack21:05
mjforkit was testing/POC, but it seemed to work21:05
*** Zinic has joined #openstack21:05
*** oubiwann has joined #openstack21:06
ZinicAnyone here work with devstack?21:06
*** praefect has quit IRC21:06
*** nerdstein has quit IRC21:07
*** vkp has joined #openstack21:09
hajikiall?21:10
hajiu there21:11
haji??21:11
Matziethanks mjfork21:11
vishyZinic: lots of us21:12
webxnotmyname: where would I look for potential issues with the swift backend being slow?  I've tested simple scp transfers between the proxy and the storage nodes, and all of them are well over 55MB/sec.  When I send a file from a proxy server to swift, using the swift binary, I'm getting max speeds of ~15MB/sec.21:12
vkp#openstack-meeting21:12
Zinicvishy: has anyone run into an issue where rabbitmq-server refuses to start? if not, I'll keep digging on my end to see what's up but I thought I'd pop in to see if this was something others had run into.21:14
vishyi have not21:14
vishybut i have had rabbit fail many times21:14
vishyoften you have to delete /var/lib/rabbitmq/mnesia20:15
vishyto get it to start21:15
vishybut I've never had it fail with devstack yet21:15
notmynamewebx: hmm..I wonder if it's at all similar to what I've been talking to alperkanat about21:15
notmynamewebx: first look at your proxy logs. see if there are any errors there21:16
webxI haven't been following.21:16
notmynamewebx: no worries. we haven't found the issue yet :-)21:16
notmynamewebx: then check the proxy for memory or I/O contention21:16
notmynamethen check the same on the storage nodes21:16
notmynamewebx: the best place to get numbers is to use the included swift-bench tool. since all deployments have that, it's easier to get numbers that can be compared to one another21:17
Zinicvishy: it fails to start for me on initial install using a fresh natty server (RS cloud server) - I'll keep poking around on my end just to make sure I haven't missed anything simple21:17
webxthey're brand new servers, and they're only being used for these tests.  dual 6 core, 96gb ram.. a lot of beef so I'm hoping there's no contention21:17
webxI'll check again while a big upload is going21:18
vishyZinic: natty cloud server won't work21:18
vishyyou need a pvgrub server with oneiric21:18
notmynamewebx: ya, doesn't sound like an issue. how many workers are you running on your proxy and storage nodes? it should be at least 1 per core21:18
Zinicaha21:18
notmynamewebx: you are using tempauth, right?21:18
webxnotmyname: hmm, any issues with hyperthreading?  yes, using tempauth21:18
notmynamewebx: I'm not sure about hyperthreading. that would be an interesting issue if there is one21:19
webxnotmyname: workers = 8 right now, but there are 24 reported to cpuinfo because of HT..21:19
webxI'll increase that to 24 just to see21:19
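notmyname's sizing rule (at least one worker per core) can be computed mechanically; whether hyperthreads count as cores is exactly the open question here, and note that Python's cpu_count() reports threads, which is why webx's box shows 24. A small sketch, with the per-core multiplier as an invented knob:

```python
# Sketch of the "at least 1 worker per core" rule of thumb from the channel.
# multiprocessing.cpu_count() counts hyperthreads (24 on webx's dual 6-core
# box), so choosing physical cores (12) vs threads (24) is the experiment
# webx is about to run -- this code just does the arithmetic.
import multiprocessing

def recommended_workers(per_core=1):
    return max(1, multiprocessing.cpu_count() * per_core)
```

On the hardware described, recommended_workers() would suggest 24 with hyperthreading counted, versus the 8 currently configured.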
Zinicgotcha. thanks vishy, that's exactly what I needed21:19
notmynamewebx: if you have a moment, run swift-bench from your proxy server (use --help to see usage)21:19
*** troytoman-away is now known as troytoman21:20
webx(root@netops-z3-a-2) swift > locate swift-bench21:20
webx(root@netops-z3-a-2) swift >21:20
webxwhich package provides that?21:20
notmynamewebx: that depends on which packages you're using :-)21:20
webxdiablo-centos from griddynamics repo21:20
webxhttp://yum.griddynamics.net/yum/diablo-centos/21:21
notmynamewebx: I'm not familiar with that one. I'm not sure if they've included it21:21
webxah.. do you have 'official' centos rpms I can use instead?21:21
notmynamewebx: https://github.com/openstack/swift/blob/master/bin/swift-bench21:21
notmynamewebx: no, the only "official" stuff is debs (since we target ubuntu 10.04)21:22
*** arBmind has quit IRC21:23
notmynamewebx: worst case, you can grab the code, run `sudo python ./setup.py develop && swift-bench --help`21:23
webxyea, going to do that now21:23
notmynamewebx: http://paste.openstack.org/show/3188/  <-- running swift-all-in-one on a 1GB slicehost VPS (so your numbers should be _much_ better)21:24
*** sebastianstadil has quit IRC21:25
*** msivanes has quit IRC21:26
*** Matzie has quit IRC21:28
webxhttp://paste.openstack.org/show/3190/21:30
notmynamewebx: ya that looks good21:30
*** stuntmac_ has quit IRC21:30
*** rnorwood1 has quit IRC21:30
webxya21:30
notmynamewebx: just to verify (since I see it in your swift-bench command), are you running ssl direct to the proxy?21:31
*** miclorb_ has joined #openstack21:31
webxyea.. this is all direct to the proxy, from the proxy21:32
webxno F5s or gateways in between21:32
webxthere are a few switches in between the proxy and storage nodes, but that's it.. all on the same network21:32
notmynamewebx: ok. don't do that. you should terminate ssl at your load balancer. ssl + python seems to have some problems under load21:32
webxhmm21:33
notmynamewebx: we recommend using either zeus (commercial and pricy, but the best throughput) or pound (free and slightly less throughput) for your LB21:33
webxnotmyname: can I try it with http to see if that's a problem?21:33
webxnotmyname: we have plenty of F5s, which is what I'd guess we'll use when this gets to production21:33
notmynamewebx: your numbers don't indicate a problem. it's something that you would probably see at scale21:33
webxnotmyname: ah, I'll show you the problem.  :)21:34
*** rnorwood has joined #openstack21:34
notmynamewebx: the trick is how much ssl throughput you can get. ssl connections/sec is not the issue. sustained throughput is. we are able to get 7-8Gb/sec with zeus (and 6-7Gb/sec with pound)21:35
*** juddm has left #openstack21:35
notmynameour tests were run a while back, and we didn't do anything fancy like using the intel chips that offload AES21:35
notmynamebut we haven't yet found a LB that can keep up with a 10gb pipe of ssl traffic21:36
notmyname(we've got lots of 10g lines going to the LBs)21:37
*** jj0hns0n has joined #openstack21:37
webxyea21:37
webxit's something I'm interested in testing on our hardware once I get to that point :)21:37
notmynamewebx: sounds like you've got an interesting project :-)21:37
webxhttp://paste.openstack.org/show/3191/21:37
webxit's going to be :)21:38
*** dolphm_ has quit IRC21:39
notmynameya, that seems a little slow. the easiest first thing to do (especially since it's a bad idea to do at scale) is to remove ssl from the proxy21:39
*** dolphm has joined #openstack21:39
webxsounds good to me.. I'd like to try with straight http but I haven't looked yet at how to do it21:40
webxjust to narrow down if it's ssl + python even at very low load21:40
*** clauden_ has quit IRC21:40
notmynamewebx: simply remove the 2 cert config options from the proxy server (of course, there shouldn't be any other ssl configs either)21:40
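The change notmyname describes — dropping the two cert options from proxy-server.conf's [DEFAULT] section — can be scripted. A hedged sketch using Python's configparser (swift's config is paste-deploy ini, close enough for two [DEFAULT] options; the option names cert_file/key_file follow the swift sample config):

```python
# Sketch of notmyname's suggestion: disable SSL on the proxy by removing
# cert_file/key_file from proxy-server.conf's [DEFAULT] section. In
# production you'd terminate SSL at the load balancer instead, per the
# discussion above.
import configparser

def strip_ssl(conf_in, conf_out):
    parser = configparser.ConfigParser()
    parser.read(conf_in)
    for opt in ('cert_file', 'key_file'):
        parser.remove_option('DEFAULT', opt)  # DEFAULT holds the ssl opts
    with open(conf_out, 'w') as f:
        parser.write(f)
```

After this, restarting the proxy serves plain http — which is what webx tries next.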
*** clauden_ has joined #openstack21:40
uvirtbotNew bug: #887792 in keystone "get Tenants list with limit and marker get 500 unhandled ..." [Undecided,New] https://launchpad.net/bugs/88779221:41
*** medberry has joined #openstack21:41
*** medberry has quit IRC21:41
*** medberry has joined #openstack21:41
webxnotmyname: ok, trying that out now21:42
*** doorlock has quit IRC21:42
*** troytoman is now known as troytoman-away21:42
*** nati2 has quit IRC21:43
*** troytoman-away is now known as troytoman21:43
*** dolphm has quit IRC21:44
webxhttp://paste.openstack.org/show/3192/21:44
*** dirkx_ has quit IRC21:45
uvirtbotNew bug: #887797 in keystone "Create User failed with '400 Expecting User' message" [Undecided,New] https://launchpad.net/bugs/88779721:45
notmynamewebx: I'll bet you didn't update the tempauth config to return an http storage url21:45
webxI'll bet you're right21:46
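The tempauth detail notmyname is pointing at: each user line in proxy-server.conf may carry an optional explicit storage URL as its last element, and clients keep using whatever scheme that URL advertises — so an https:// URL there keeps forcing SSL even after the cert options are gone. A fragment in the shape of the swift sample config (hostname and key are invented):

```ini
# Hypothetical proxy-server.conf fragment; host and password invented.
# The trailing URL is the storage URL tempauth hands back to clients,
# so it must say http:// once SSL is disabled on the proxy.
[filter:tempauth]
use = egg:swift#tempauth
user_system_root = testpass .admin http://netops-z3-a-2:8080/v1/AUTH_system
```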
*** cp16net has quit IRC21:48
gnu111notmyname: do you have suggestions on how to test/simulate node failures?21:48
notmynamegnu111: depends on what kind of failure you want to simulate. the most extreme (and we've done this) is to walk up to a rack on a production system and pull the power plug21:49
notmynamegnu111: but more simply, you can unmount drives or turn off servers21:49
notmynamegnu111: for more complicated testing, you could go as far as to write middleware that simulates network failures21:50
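notmyname's last suggestion — middleware that simulates failures — can be sketched in a few lines, since swift middleware is WSGI-based. This toy version (class name and knobs invented for illustration; a real swift filter would also need a paste-deploy filter_factory) fails a configurable fraction of requests with a 503:

```python
# Toy fault-injection middleware in the WSGI style swift filters use.
# Everything here is illustrative, not swift's actual failure-testing
# machinery: it randomly answers 503 for a fraction of requests so a
# client or swift-bench run can be observed under simulated errors.
import random

class FlakyMiddleware(object):
    def __init__(self, app, failure_rate=0.1, seed=None):
        self.app = app
        self.failure_rate = failure_rate
        self.random = random.Random(seed)  # seedable for repeatable tests

    def __call__(self, environ, start_response):
        if self.random.random() < self.failure_rate:
            start_response('503 Service Unavailable',
                           [('Content-Type', 'text/plain')])
            return [b'simulated failure\n']
        return self.app(environ, start_response)
```

Wrapping an app with failure_rate=1.0 makes every request fail, which is handy for checking client retry behavior before dialing the rate down.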
*** jj0hns0n has quit IRC21:50
webxhttp://paste.openstack.org/show/3193/21:54
webxthat's better..21:54
*** rsampaio has quit IRC21:55
*** kieron has joined #openstack21:55
*** lorin1 has quit IRC21:58
notmynameindeed21:58
*** heckj has quit IRC22:01
*** thingee has joined #openstack22:01
gnu111notmyname: thanks. I will need to think about it.22:02
*** kieron has quit IRC22:02
*** dolphm has joined #openstack22:05
*** jsavak has quit IRC22:07
*** Tsel has joined #openstack22:09
*** lionel has quit IRC22:09
*** lionel has joined #openstack22:10
uvirtbotNew bug: #887805 in nova "Error during report_driver_status(): 'LibvirtConnection' object has no attribute '_host_state'" [Undecided,New] https://launchpad.net/bugs/88780522:10
*** jj0hns0n has joined #openstack22:21
*** jdg has quit IRC22:23
*** BasTichelaar has quit IRC22:23
*** mgoldmann has quit IRC22:23
*** vkp has quit IRC22:26
*** arun_ has joined #openstack22:29
*** arun_ has joined #openstack22:29
*** exprexxo has joined #openstack22:29
*** jeromatron has quit IRC22:33
*** cdub has quit IRC22:37
*** cdub has joined #openstack22:37
*** vidd has joined #openstack22:38
*** vidd has joined #openstack22:38
tjoyis there a new high-level HOWTO doc for Nova since the Diablo release?22:39
*** bcwaldon has quit IRC22:41
*** TheOsprey has quit IRC22:41
*** MarkAtwood has quit IRC22:44
annegentletjoy: http://docs.openstack.org/diablo/openstack-compute/starter/content/ is the Diablo Starter Guide - but doesn't address Identity (Keystone) or Dashboard (Horizon)22:45
tjoycool. How's documentation for XCP?22:45
tjoyas it relates to interfacing with nova22:45
*** dgags has quit IRC22:46
*** GheRivero_ has quit IRC22:47
medberrytjoy, heh... I think that could be described as a work in progress.22:47
tjoyhas the xcp driver been demonstrated to work?22:48
*** ldlework has quit IRC22:49
viddannegentle, is there any progress to a documented "full stack" with keystone and dashboard?22:49
viddaccording to launchpad, horizon/dashboard has been promoted to a "core" project22:50
viddas of the diablo release22:51
annegentletjoy: I haven't had much traction asking for people to write that. Do you know someone who would want to write about XCP with nova? the most I saw was this: http://etherpad.openstack.org/openstack-xen22:51
tjoyannegentle: xcp is supposed to be a drop-in replacement for xenserver, from what I understand.22:51
annegentlevidd: those are core projects for Essex, but I do have backporting available for docs.22:51
annegentletjoy: I keep hoping deshantm will go on a writing spree :)22:52
tjoyI could document my work setting up openstack with xcp this week22:52
tjoyfinally got some headroom wrt the dayjob22:53
zykes-vidd: sitll no progress ? M22:54
viddzykes?22:54
zykes-vidd: yeah, horizon is core now but that's essex not diablo22:54
viddthen why would it say "As of the Diablo release, Horizon is now an OpenStack Core project and is a fully supported OpenStack offering." =\22:55
vidddont matter...its not that important22:55
*** hingo has quit IRC22:57
annegentletjoy: would love that, write it up in any format and I'll take it :)22:57
tjoyok22:58
viddzykes-, does your tenant and user pages work properly?22:58
annegentlevidd: where does it say that?22:58
viddannegentle, https://launchpad.net/horizon22:58
annegentlevidd: ah will have to let devcamcar know (or heckj do you have access to that page?)22:59
viddannegentle, so its a typo? =]22:59
annegentlevidd: welll.... it's a bit misleading... during the Diablo timeframe, the PPB voted Dashboard in, but incubation doesn't track very well with project release management....23:00
annegentlevidd: so yeah I'd say it's inaccurate slightly23:00
*** Zinic has quit IRC23:01
annegentleok, I'm outta here, gets dark too early now! :)23:01
viddlater annegentle23:01
*** dolphm_ has joined #openstack23:06
*** Ruetobas has quit IRC23:08
*** imsplitbit has quit IRC23:09
*** dolphm has quit IRC23:09
*** dendrobates is now known as dendro-afk23:11
*** Jamey___ has joined #openstack23:11
*** Ruetobas has joined #openstack23:13
*** AlanClark has quit IRC23:13
*** AlanClark has joined #openstack23:14
*** dendro-afk is now known as dendrobates23:15
*** jeromatron has joined #openstack23:17
*** dosdawg has joined #openstack23:21
*** Jamey___ has quit IRC23:23
*** Jamey___ has joined #openstack23:23
*** AlfaBetaGamma has joined #openstack23:25
*** s1n4 has joined #openstack23:26
*** exprexxo has quit IRC23:27
*** robbiew has quit IRC23:31
*** lborda has quit IRC23:35
*** nerdstein has joined #openstack23:37
*** mmetheny has quit IRC23:37
*** mmetheny has joined #openstack23:38
*** jj0hns0n has quit IRC23:39
*** aliguori has quit IRC23:39
*** jj0hns0n has joined #openstack23:39
Ryan_Laneis there no way to get and set quotas via the API?23:40
*** troytoman is now known as troytoman-away23:40
*** Razique has quit IRC23:42
*** uneti has joined #openstack23:44
*** neogenix has quit IRC23:44
*** uneti has quit IRC23:45
*** MarkAtwood has joined #openstack23:46
*** nati2 has joined #openstack23:47
*** AlfaBetaGamma has quit IRC23:50
*** Jamey___ has quit IRC23:50
*** straylyon has joined #openstack23:52
*** s1n4 has left #openstack23:53
*** datajerk has joined #openstack23:53
*** jeromatron has quit IRC23:55

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!