*** juddm has quit IRC | 00:00 | |
*** hezekiah_1 has quit IRC | 00:01 | |
*** heckj has joined #openstack | 00:01 | |
*** syah has quit IRC | 00:02 | |
*** nati2 has quit IRC | 00:03 | |
*** dtroyer has quit IRC | 00:03 | |
*** qazwsx has quit IRC | 00:05 | |
*** syah has joined #openstack | 00:08 | |
*** oubiwann has joined #openstack | 00:09 | |
*** Gordonz has joined #openstack | 00:09 | |
*** rnorwood has joined #openstack | 00:10 | |
*** stewart has joined #openstack | 00:10 | |
*** miclorb_ has joined #openstack | 00:14 | |
*** chomping has quit IRC | 00:16 | |
*** chomping has joined #openstack | 00:17 | |
*** dolphm has quit IRC | 00:17 | |
*** obino has joined #openstack | 00:18 | |
*** dolphm has joined #openstack | 00:18 | |
*** tlehman has joined #openstack | 00:18 | |
*** tlehman has left #openstack | 00:18 | |
*** dolphm has quit IRC | 00:22 | |
vidd | i seem to be having issues working with a volume added to a vm | 00:24 |
*** heckj has quit IRC | 00:24 | |
*** rnorwood has quit IRC | 00:28 | |
*** nati2 has joined #openstack | 00:28 | |
*** syah has quit IRC | 00:31 | |
*** cmagina has joined #openstack | 00:33 | |
stevegjacobs | vidd: dashboard issues after those updates: 'Settings' object has no attribute 'SWIFT_ENABLED' | 00:34 |
vidd | SWIFT_ENABLED = False | 00:35 |
*** syah has joined #openstack | 00:38 | |
*** jakedahn has quit IRC | 00:39 | |
stevegjacobs | I added that line in but still have django error - Do I have to do something to get it to re-read the settings file? | 00:42 |
*** rods has quit IRC | 00:43 | |
vidd | stevegjacobs: yes, restart apache (service apache2 restart) | 00:44 |
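A minimal sketch of the fix being described here, assuming the Ubuntu openstack-dashboard package layout (the settings path is an assumption and may be a local_settings.py elsewhere in your install):

    # append the missing setting, then make Django re-read it
    echo "SWIFT_ENABLED = False" >> /etc/openstack-dashboard/local_settings.py
    sudo service apache2 restart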
*** egant has quit IRC | 00:46 | |
bwong | wow these instructions are pretty smooth | 00:46 |
vidd | bwong, told ya...dont get much easier than that =] | 00:46 |
*** Gordonz has quit IRC | 00:47 | |
vidd | im telling ya...just a touch more work and Kiall will have himself an apt-based devstack =] | 00:47 |
bwong | that would be awesome. | 00:48 |
*** dragondm has quit IRC | 00:48 | |
bwong | still configuring, hopefully I don't run into any problems. if I do, ill let him know. | 00:48 |
*** syah has joined #openstack | 00:48 | |
*** clopez has quit IRC | 00:49 | |
*** egant has joined #openstack | 00:50 | |
*** stevegjacobs has quit IRC | 00:50 | |
*** stevegjacobs__ has quit IRC | 00:50 | |
*** dolphm has joined #openstack | 00:50 | |
*** marrusl has quit IRC | 00:51 | |
*** miclorb_ has quit IRC | 00:51 | |
*** n8 has joined #openstack | 00:52 | |
*** n8 is now known as Guest90300 | 00:52 | |
*** syah has quit IRC | 00:54 | |
*** winston-d has joined #openstack | 00:55 | |
*** rsampaio has joined #openstack | 00:56 | |
*** syah has joined #openstack | 00:56 | |
*** syah has quit IRC | 01:00 | |
*** dolphm has quit IRC | 01:00 | |
*** vidd is now known as vidd-away | 01:00 | |
*** dolphm has joined #openstack | 01:00 | |
*** stevegjacobs has joined #openstack | 01:03 | |
*** misheska has joined #openstack | 01:04 | |
*** dolphm has quit IRC | 01:05 | |
stevegjacobs | I lost my connection | 01:06 |
*** dolphm has joined #openstack | 01:06 | |
stevegjacobs | vidd: you still here? | 01:06 |
*** AlanClark has joined #openstack | 01:07 | |
*** livemoon has joined #openstack | 01:07 | |
livemoon | vidd ping | 01:08 |
*** Guest90300 has quit IRC | 01:08 | |
stevegjacobs | looks like vidd is away | 01:09 |
stevegjacobs | bwong - how is your setup going? | 01:09 |
bwong | configuring nova.sh right now | 01:10 |
bwong | I mean, going through the steps | 01:10 |
bwong | AFTER nova.sh | 01:10 |
stevegjacobs | I did all that just last night :-) | 01:10 |
bwong | hey | 01:10 |
bwong | do you remember that part where you're supposed to chown glance:glance | 01:11 |
bwong | what directory or file was it referring to? | 01:11 |
bwong | because all it said was chown glance:glance it | 01:11 |
bwong | I just assume "it" was the glance dir | 01:11 |
*** stewart has quit IRC | 01:12 | |
*** stewart has joined #openstack | 01:13 | |
stevegjacobs | It was the glance-registry.conf and glance-api.conf files that you were to copy into /etc/glance/ | 01:13 |
*** jakedahn has joined #openstack | 01:14 | |
stevegjacobs | chown glance:glance /etc/glance/glance-registry.conf - after you copy the ones that were generated by the script into the /etc/glance folder | 01:15 |
*** jakedahn_ has joined #openstack | 01:16 | |
bwong | so only those files should be glance glance? | 01:17 |
bwong | because when I chown the directory | 01:17 |
bwong | all the files inside are owned by glance. | 01:17 |
*** mandela123 has joined #openstack | 01:18 | |
*** jakedahn has quit IRC | 01:19 | |
*** jakedahn_ is now known as jakedahn | 01:19 | |
*** AlanClark has quit IRC | 01:19 | |
stevegjacobs | don't think it hurts | 01:19 |
stevegjacobs | all of mine are too - just need to make sure that the new ones you copy in are owned by glance as well | 01:20 |
bwong | alright, thanks for the tip | 01:22 |
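A sketch of the ownership fix discussed above, assuming the generated conf files were left in the current directory by the script:

    # copy the generated configs into place, then hand them to the glance user
    cp glance-registry.conf glance-api.conf /etc/glance/
    chown glance:glance /etc/glance/glance-registry.conf /etc/glance/glance-api.conf
    # chown -R glance:glance /etc/glance is also harmless, as noted above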
*** nerdstein has joined #openstack | 01:23 | |
*** jog0 has quit IRC | 01:26 | |
*** nphase has joined #openstack | 01:26 | |
*** lynxman has quit IRC | 01:28 | |
*** lynxman has joined #openstack | 01:29 | |
*** jdurgin has quit IRC | 01:32 | |
*** jakedahn has quit IRC | 01:35 | |
*** stevegjacobs has left #openstack | 01:38 | |
*** nerdstein has left #openstack | 01:41 | |
*** Otter768 has joined #openstack | 01:45 | |
*** n8 has joined #openstack | 01:46 | |
*** jj0hns0n has joined #openstack | 01:46 | |
*** n8 is now known as Guest29699 | 01:46 | |
*** bwong has quit IRC | 01:47 | |
*** gyee has quit IRC | 01:54 | |
*** vUNK has joined #openstack | 01:55 | |
*** dolphm has quit IRC | 01:56 | |
*** dolphm has joined #openstack | 01:56 | |
*** rnorwood has joined #openstack | 01:58 | |
*** dolphm has quit IRC | 02:00 | |
*** HugoKuo_ has joined #openstack | 02:02 | |
*** joeyang has joined #openstack | 02:02 | |
*** tokuzfunpi has quit IRC | 02:05 | |
*** hugokuo has quit IRC | 02:05 | |
*** rnorwood has quit IRC | 02:12 | |
*** Gollen has joined #openstack | 02:13 | |
*** rsampaio has quit IRC | 02:15 | |
*** osier has joined #openstack | 02:19 | |
*** maplebed has quit IRC | 02:20 | |
*** rnorwood has joined #openstack | 02:21 | |
*** vUNK has quit IRC | 02:21 | |
mandela123 | since nova changed its openstackx plugin i want to know how to make dashboard work well with it | 02:21 |
*** pixelbeat has quit IRC | 02:21 | |
*** rsampaio has joined #openstack | 02:24 | |
*** dwcramer has joined #openstack | 02:25 | |
*** cdub has joined #openstack | 02:25 | |
*** Turicas has joined #openstack | 02:34 | |
*** adjohn has quit IRC | 02:37 | |
*** negronjl has quit IRC | 02:38 | |
*** negronjl has joined #openstack | 02:39 | |
*** tokuz has joined #openstack | 02:40 | |
*** joeyang has quit IRC | 02:43 | |
*** chomping has quit IRC | 02:43 | |
livemoon | hi, anyone here? | 02:49 |
livemoon | dashboard cannot be used, help | 02:49 |
livemoon | Exception Type:NameError | 02:49 |
livemoon | Exception Value: | 02:49 |
livemoon | name '_' is not defined | 02:49 |
uvirtbot | New bug: #887402 in nova "can't terminate instance with attached volumes" [Undecided,New] https://launchpad.net/bugs/887402 | 02:51 |
*** egant has quit IRC | 02:53 | |
*** egant has joined #openstack | 02:53 | |
*** chomping has joined #openstack | 02:54 | |
*** rnorwood has quit IRC | 02:56 | |
*** syah has joined #openstack | 02:57 | |
*** osier has quit IRC | 02:59 | |
*** dolphm has joined #openstack | 03:00 | |
*** ksteward has joined #openstack | 03:01 | |
*** aliguori_ has quit IRC | 03:02 | |
*** sandywalsh_ has quit IRC | 03:07 | |
*** ksteward has quit IRC | 03:07 | |
*** neogenix has joined #openstack | 03:09 | |
*** Turicas has quit IRC | 03:10 | |
*** rnorwood has joined #openstack | 03:10 | |
*** pothos has quit IRC | 03:11 | |
*** chomping has quit IRC | 03:11 | |
*** dolphm has quit IRC | 03:13 | |
*** ldlework has quit IRC | 03:13 | |
*** dolphm has joined #openstack | 03:14 | |
*** littleidea has joined #openstack | 03:15 | |
*** Ryan_Lane has quit IRC | 03:16 | |
*** dolphm_ has joined #openstack | 03:17 | |
*** dolphm has quit IRC | 03:18 | |
*** rnorwood has quit IRC | 03:19 | |
*** neohippie has joined #openstack | 03:20 | |
*** neohippie has left #openstack | 03:21 | |
*** negronjl_ has joined #openstack | 03:23 | |
winston-d | livemoon : hi | 03:24 |
*** negronjl_ has quit IRC | 03:25 | |
livemoon | hi | 03:25 |
livemoon | winston-d | 03:25 |
*** negronjl_ has joined #openstack | 03:25 | |
livemoon | do you know python? | 03:25 |
winston-d | livemoon : i know some | 03:25 |
*** negronjl has quit IRC | 03:25 | |
livemoon | when I use dashboard, the web show "name '_' is not defined" | 03:25 |
livemoon | in /usr/local/lib/python2.7/dist-packages/glance-2012.1-py2.7.egg/glance/common/exception.py | 03:26 |
*** rnorwood has joined #openstack | 03:26 | |
livemoon | define message = _("An unknown exception occurred") | 03:26 |
*** negronjl_ is now known as negronjl | 03:27 | |
*** dwcramer has quit IRC | 03:28 | |
*** stewart has quit IRC | 03:29 | |
*** stewart has joined #openstack | 03:29 | |
winston-d | can you post the whole error message? | 03:31 |
livemoon | http://paste.openstack.org/show/3155/ | 03:32 |
winston-d | hmm... I suppose you can see similar error when using 'glance' or 'nova' CLI tool? | 03:33 |
livemoon | I can use glance and nova with keystone fine | 03:34 |
*** MarkAt2od has joined #openstack | 03:35 | |
*** MarkAtwood has quit IRC | 03:36 | |
winston-d | livemoon : can you do 'glance add' ? | 03:37 |
livemoon | of course | 03:37 |
*** jog0 has joined #openstack | 03:37 | |
livemoon | I think it is something with python | 03:37 |
winston-d | livemoon : no error? | 03:37 |
livemoon | because it shows NameError: name '_' is not defined | 03:38 |
livemoon | in /usr/local/lib/python2.7/dist-packages/glance-2012.1-py2.7.egg/glance/common/exception.py", line | 03:38 |
*** MarkAtwood has joined #openstack | 03:38 | |
livemoon | I think it needs '_' to be defined | 03:38 |
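A minimal illustration of the error livemoon is hitting (not glance's actual startup code): '_' is the gettext translation alias, normally installed as a builtin before modules such as glance/common/exception.py are imported; if that step is skipped, module-level strings raise this NameError.

    import gettext
    gettext.install('glance', unicode=1)  # defines _() as a builtin, as the entry points do
    message = _("An unknown exception occurred")  # now resolves instead of raising NameError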
*** jog0 has quit IRC | 03:39 | |
*** miclorb_ has joined #openstack | 03:40 | |
*** MarkAt2od has quit IRC | 03:40 | |
*** miclorb_ has quit IRC | 03:41 | |
winston-d | sorry, i can barely read any useful information out of your error log | 03:42 |
winston-d | btw, can you tell me how to set the ENV variables for glance? in order to use keystone? | 03:43 |
livemoon | I don't set any ENV for glance | 03:46 |
livemoon | just in glance conf files | 03:46 |
*** negronjl has quit IRC | 03:46 | |
winston-d | livemoon : really? no ENV | 03:47 |
*** openpercept has joined #openstack | 03:47 | |
winston-d | livemoon : then how do you configure credentials for glance in conf file? | 03:48 |
livemoon | https://github.com/livemoon/openstack | 03:48 |
*** rsampaio has quit IRC | 03:50 | |
winston-d | i don't see any configuration related to authentication. weird. | 03:52 |
livemoon | I use "glance -A token [command] | 03:52 |
winston-d | livemoon : the token is like the password for user? | 03:54 |
winston-d | I always got this error 'Not authorized to make this request. Check your credentials (OS_AUTH_USER, OS_AUTH_KEY, ...)' | 03:57 |
winston-d | livemoon : doesn't matter, i figured it out. thx | 03:58 |
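For reference, hedged examples of the token-based invocation livemoon describes; the token value is a placeholder for a keystone-issued or admin token:

    glance -A 999888777666 index
    glance -A 999888777666 add name="my-image" < myimage.img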
*** rsampaio has joined #openstack | 03:59 | |
*** maplebed has joined #openstack | 04:01 | |
*** PiotrSikora has quit IRC | 04:02 | |
livemoon | winston-d: :) | 04:02 |
*** PiotrSikora has joined #openstack | 04:02 | |
*** jj0hns0n has quit IRC | 04:03 | |
livemoon | I hope someone can share a dashboard installation doc | 04:03 |
*** jj0hns0n has joined #openstack | 04:04 | |
*** kashyap_ has joined #openstack | 04:06 | |
*** dysinger has quit IRC | 04:09 | |
*** ricky_99 has joined #openstack | 04:21 | |
*** littleidea has quit IRC | 04:21 | |
*** supriya_ has joined #openstack | 04:22 | |
*** miclorb_ has joined #openstack | 04:30 | |
*** hezekiah_ has joined #openstack | 04:31 | |
*** nRy_ has quit IRC | 04:39 | |
*** adjohn has joined #openstack | 04:41 | |
*** Rajaram has joined #openstack | 04:50 | |
*** dolphm_ has quit IRC | 05:02 | |
*** dolphm has joined #openstack | 05:03 | |
*** PeteDaGuru has quit IRC | 05:07 | |
*** miclorb_ has quit IRC | 05:07 | |
*** dolphm has quit IRC | 05:08 | |
*** syah has quit IRC | 05:10 | |
*** Guest29699 has quit IRC | 05:21 | |
*** rnorwood has quit IRC | 05:22 | |
*** mmetheny has quit IRC | 05:26 | |
*** MarkAtwood has quit IRC | 05:27 | |
*** hezekiah_ has quit IRC | 05:32 | |
*** MarkAtwood has joined #openstack | 05:46 | |
*** Rajaram has quit IRC | 05:50 | |
*** cdub has quit IRC | 05:53 | |
*** kashyap_ has quit IRC | 05:54 | |
*** Rajaram has joined #openstack | 05:54 | |
*** negronjl has joined #openstack | 05:55 | |
*** cdub has joined #openstack | 05:55 | |
*** nerens has joined #openstack | 05:59 | |
*** negronjl has quit IRC | 05:59 | |
*** nati2_ has joined #openstack | 06:02 | |
*** nati2 has quit IRC | 06:03 | |
*** krow has joined #openstack | 06:03 | |
*** jkoelker has quit IRC | 06:05 | |
*** jkoelker_ has joined #openstack | 06:05 | |
*** aceat64 has quit IRC | 06:06 | |
*** nerens has quit IRC | 06:07 | |
*** aceat64 has joined #openstack | 06:07 | |
*** hezekiah_ has joined #openstack | 06:08 | |
*** dysinger has joined #openstack | 06:10 | |
*** hezekiah_ has quit IRC | 06:11 | |
*** doorlock has joined #openstack | 06:11 | |
*** catintheroof has joined #openstack | 06:15 | |
*** Rajaram has quit IRC | 06:16 | |
*** Rajaram has joined #openstack | 06:16 | |
*** catintheroof has quit IRC | 06:20 | |
*** catintheroof has joined #openstack | 06:26 | |
*** dysinger has quit IRC | 06:32 | |
*** rsampaio has quit IRC | 06:37 | |
*** sebastianstadil has quit IRC | 06:42 | |
*** sebastianstadil has joined #openstack | 06:43 | |
*** MarkAtwood has quit IRC | 06:43 | |
*** arBmind has joined #openstack | 06:43 | |
*** MarkAtwood has joined #openstack | 06:43 | |
*** gohko_nao has quit IRC | 06:55 | |
*** gohko_nao has joined #openstack | 06:56 | |
*** ton_katsu has joined #openstack | 06:56 | |
*** Rajaram has quit IRC | 06:59 | |
*** kaigan has joined #openstack | 07:01 | |
*** krow has quit IRC | 07:02 | |
*** guigui has joined #openstack | 07:02 | |
*** jj0hns0n has quit IRC | 07:04 | |
*** nati2_ has quit IRC | 07:16 | |
*** hingo has joined #openstack | 07:17 | |
*** neogenix has quit IRC | 07:19 | |
*** TheOsprey has joined #openstack | 07:23 | |
*** catintheroof has quit IRC | 07:25 | |
*** ejat has joined #openstack | 07:30 | |
*** ejat has joined #openstack | 07:30 | |
*** Telamon has joined #openstack | 07:33 | |
*** sebastianstadil has quit IRC | 07:36 | |
*** yeming has joined #openstack | 07:40 | |
*** sebastianstadil has joined #openstack | 07:41 | |
*** supriya_ has quit IRC | 07:42 | |
*** nerens has joined #openstack | 07:44 | |
Telamon | Does anyone know of some recent install docs for keystone? I'm using http://docs.openstack.org/diablo/openstack-identity/admin/content/configuring-the-identity-service.html but I'm not sure how up to date they are... | 07:46 |
*** ejat- has joined #openstack | 07:49 | |
*** mgoldmann has joined #openstack | 07:49 | |
*** ejat has quit IRC | 07:49 | |
*** ejat- is now known as ejat | 07:49 | |
*** ejat has joined #openstack | 07:49 | |
*** arBmind has quit IRC | 08:00 | |
yeming | I'm experimenting with FlatManager. 'euca-describe-instances' shows I get an IP address, but I cannot ping or ssh the instance. Anything I missed? Last time I succeeded with FlatDHCPManager. | 08:02 |
*** nacx has joined #openstack | 08:04 | |
*** HugoKuo__ has joined #openstack | 08:08 | |
*** vishy has quit IRC | 08:08 | |
*** hallyn has quit IRC | 08:08 | |
*** vishy has joined #openstack | 08:09 | |
*** hallyn has joined #openstack | 08:10 | |
*** HugoKuo_ has quit IRC | 08:12 | |
*** reidrac has joined #openstack | 08:13 | |
*** Rajaram has joined #openstack | 08:20 | |
*** supriya_ has joined #openstack | 08:20 | |
*** adjohn has quit IRC | 08:24 | |
*** misheska has quit IRC | 08:32 | |
*** catintheroof has joined #openstack | 08:35 | |
*** ejat has quit IRC | 08:37 | |
*** opsnare has joined #openstack | 08:40 | |
*** adjohn has joined #openstack | 08:41 | |
yeming | In FlatManager mode, how does the instance get its IP? Is it written to the image before starting? | 08:41 |
*** Gerr1t has quit IRC | 08:44 | |
*** catintheroof has quit IRC | 08:44 | |
*** dobber has joined #openstack | 08:44 | |
*** adjohn has quit IRC | 08:45 | |
Telamon | yeming: I think the cloud-service package grabs it from a kernel parameter | 08:46 |
Telamon | So if your image doesn't have that cloud-service package, you won't get one. | 08:46 |
Telamon | I'd try booting the instance and using the VNC console to manually set an IP and see if that works first. | 08:46 |
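For context, one diablo-era mechanism (FlatManager with flat_injected) writes the address into the guest's /etc/network/interfaces before boot; a sketch of the injected file, with placeholder addresses:

    auto eth0
    iface eth0 inet static
        address 10.0.0.2
        netmask 255.255.255.0
        gateway 10.0.0.1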
Telamon | Do you by any chance have keystack working? | 08:47 |
Telamon | Sorry, keystone. | 08:47 |
yeming | Telamon: No, I just started playing with Nova. nothing else yet. | 08:48 |
*** Razique has joined #openstack | 08:49 | |
*** arBmind has joined #openstack | 08:49 | |
Telamon | Ah. Avoid keystone. It will drive you insane. ;-) | 08:50 |
Razique | Hi all :) | 08:54 |
Razique | haha, I just came and read Telamon last sentence | 08:54 |
Razique | Keystone is definitely driving us all crazy =D | 08:55 |
Telamon | Hah, but I'm crazy like a fox now! I figured out that the new keystone DB has encrypted passwords. | 08:55 |
Telamon | Now I can make the damned docs work again... | 08:55 |
Telamon | Of course all the ports in the docs are wrong, but that's just to weed out the people who take their meds.... | 08:56 |
*** ryan_fox1985 has joined #openstack | 08:56 | |
yeming | Hi Razique | 08:58 |
Telamon | Any ideas on how to test if nova/glance are properly using keystone? | 08:58 |
Razique | Telamon: yah, you check keystone logs | 08:59 |
*** ejat has joined #openstack | 08:59 | |
*** ejat has joined #openstack | 08:59 | |
Razique | and check if the tokens returned by the request are the ones into Keystone db | 08:59 |
*** ChrisAM1 has quit IRC | 08:59 | |
livemoon | hi,help | 09:00 |
Telamon | Razique: Okay, but what command do I use that will trigger a request? The euca stuff doesn't seem to work once you switch to keystone | 09:01 |
Razique | livemoon: yup ? | 09:01 |
Razique | Telamon: I've been able to integrate euca2ools along Keystone, but it was a pain | 09:01 |
Razique | Telamon: use Curl in order to make auth hits against both nova and glance | 09:01 |
livemoon | Razique: can you see me mail in maillist | 09:01 |
*** ChrisAM1 has joined #openstack | 09:02 | |
*** yeming has quit IRC | 09:02 | |
Razique | yup; leme check | 09:02 |
*** miclorb_ has joined #openstack | 09:04 | |
Telamon | Hmm, I think I'm going to blow away my keystone.db and reinit it with the devstack script. It looks to be more fleshed out than the keystone install docs... | 09:04 |
*** javiF has joined #openstack | 09:08 | |
Razique | Telamon: the devstack script seems not to contain all the entries | 09:11 |
Razique | especially the "roles" table lacks the KeystoneService and KeystoneServiceAdmin entries | 09:11 |
Telamon | Crap. Any docs around that do contain them all? | 09:12 |
*** BasTichelaar has joined #openstack | 09:12 | |
livemoon | does anyone use dashboard? | 09:13 |
BasTichelaar | anyone has knowledge of zones and schedulers in nova? | 09:15 |
*** catintheroof has joined #openstack | 09:17 | |
Razique | Telamon: not yet but i can give you my sql dump | 09:18 |
*** Telamon has quit IRC | 09:18 | |
livemoon | hi, Razique, Have you installed dash? | 09:21 |
*** mandela123 has quit IRC | 09:22 | |
Razique | livemoon: nope I tried it via the devstack script | 09:23 |
Razique | what about u livemoon ? | 09:23 |
livemoon | I followed the devstack script | 09:25 |
livemoon | but failed | 09:25 |
*** Gollen has quit IRC | 09:25 | |
livemoon | it says "name '_' is not defined" | 09:25 |
*** reidrac has quit IRC | 09:28 | |
*** Jigen90 has joined #openstack | 09:29 | |
Jigen90 | Hi guys, I need a little help understanding a meaning of vcpu. | 09:30 |
Jigen90 | What's the link between 1 vcpu and 1 core of my processor? Are they related? | 09:30 |
tjoy | good question | 09:31 |
Jigen90 | When I spawn an instance with 1 vcpu, how many cores of my processor are used? | 09:34 |
Jigen90 | All or only one? | 09:34 |
*** MarkAtwood has quit IRC | 09:36 | |
Razique | livemoon: during the install ? | 09:38 |
*** Telamon has joined #openstack | 09:39 | |
livemoon | Razique: while running the test script | 09:39 |
*** reidrac has joined #openstack | 09:40 | |
Razique | have you run the install first ? | 09:42 |
Razique | python setup.py build && python setup.py install | 09:43 |
Telamon | Razique: Sorry, I lost my net connection. How do you go about loading images when you are using keystone? | 09:44 |
Razique | Telamon: in fact see Keystone as an intermediary, no more, so use traditional commands | 09:44 |
Razique | it's Glance that handles the communication with Keystone for its operations | 09:45 |
Razique | while you only send to keystone temp. token and tenant token | 09:45 |
zykes- | Razique: wazzup | 09:45 |
Razique | Here is a scheme I did, in the review : http://img845.imageshack.us/img845/5205/sch5002v00nuackeystonen.png | 09:46 |
Razique | hey zykes- :) | 09:46 |
Razique | playing with live migration | 09:46 |
Razique | We are about to sign a 50 instances customer | 09:46 |
Razique | I'd like to make sure the whole HA stuff is ready :D | 09:47 |
*** darraghb has joined #openstack | 09:51 | |
zykes- | what you using for storing vm instances Razique ? | 09:56 |
zykes- | sheepdogg ? | 09:56 |
*** dobber has quit IRC | 09:56 | |
*** Jigen90 has quit IRC | 09:57 | |
Razique | atm the instances themselves are stored locally in every node, while the volumes use an ISCSI SAN | 09:59 |
Razique | I asked myself just yesterday if I shouldn't put the instances on the SAN also | 09:59 |
Razique | but when I benched the instance, the fact that they were stored locally gave me outstanding performance | 10:00 |
*** ninkotech has joined #openstack | 10:00 | |
Razique | http://img820.imageshack.us/img820/507/plop20111010181702.jpg | 10:01 |
Razique | If I'm working through the boot from volumes, maybe I could also try to bench an on-san solution | 10:02 |
*** supriya_ has quit IRC | 10:06 | |
*** clopez has joined #openstack | 10:06 | |
*** opsnare has quit IRC | 10:10 | |
*** pixelbeat has joined #openstack | 10:11 | |
*** opsanre has joined #openstack | 10:12 | |
*** catintheroof has quit IRC | 10:12 | |
Telamon | Anyone know why "euca-describe-availability-zones verbose" throws this error: Warning: failed to parse error message from AWS: <unknown>:1:0: syntax error | 10:14 |
livemoon | Razique: hi | 10:16 |
Razique | Telamon: incorrect endpoint | 10:16 |
Razique | check either ec2_url and ec2_host | 10:16 |
Razique | and the file you source | 10:17 |
Razique | EC2_URL | 10:17 |
Razique | livemoon: 'sup ? | 10:17 |
livemoon | I haven't completed dashboard | 10:17 |
livemoon | I think it is hard for me | 10:17 |
Razique | :( | 10:19 |
Razique | before Dashboard | 10:19 |
Telamon | Razique: In my nova.conf I have --ec2_url=http://192.168.2.254:8773/services/Cloud and in my env I have EC2_URL=http://192.168.2.254:8773/services/Cloud so that looks good. | 10:19 |
Razique | is Keystone integrated ? | 10:19 |
Razique | Telamon: does euca-describe-instances work? | 10:20 |
Telamon | Nope, same error. | 10:20 |
Razique | Telamon: you use keystone ? | 10:23 |
* Razique fears the answer | 10:23 | |
Telamon | Yep. I'm seeing this in my api log: POST /services/Cloud/ None:None 400 [Boto/2.0 (linux2)] application/x-www-form-urlencoded text/plain | 10:23 |
Razique | ok so add this | 10:24 |
Razique | nova.conf : --keystone_ec2_url=http://172.16.40.11:5000/v2.0/ec2tokens | 10:24 |
Razique | into keystone/middleware/ec2_token.py | 10:24 |
Razique | make sure that u have # o = urlparse(FLAGS.keystone_ec1_url) | 10:24 |
Razique | o = urlparse(FLAGS.keystone_ec2_url) | 10:24 |
Razique | and # token_id = result['auth']['token']['id'] | 10:25 |
Razique | token_id = result['access']['token']['id'] | 10:25 |
Razique | restart nova-api and it should work | 10:25 |
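Consolidated, the patch Razique is describing for keystone/middleware/ec2_token.py (exact surrounding code varies by version), plus the nova.conf flag:

    # o = urlparse(FLAGS.keystone_ec1_url)       # before (typo'd flag name)
    o = urlparse(FLAGS.keystone_ec2_url)         # after
    # token_id = result['auth']['token']['id']   # before
    token_id = result['access']['token']['id']   # after (v2.0 response shape)

    # and in nova.conf (the host is a placeholder):
    # --keystone_ec2_url=http://172.16.40.11:5000/v2.0/ec2tokens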
*** ton_katsu has quit IRC | 10:26 | |
*** jollyfoo has quit IRC | 10:26 | |
*** ambo has quit IRC | 10:27 | |
*** ambo has joined #openstack | 10:29 | |
Telamon | Nope, same thing. | 10:31 |
*** Vek has quit IRC | 10:32 | |
*** dosdawg has quit IRC | 10:32 | |
Telamon | When I use curl to go to the ec2tokens URL I get a 404... | 10:32 |
livemoon | quit | 10:33 |
*** livemoon has quit IRC | 10:33 | |
Razique | Telamon: what version of keystone are u using? | 10:34 |
Razique | http://paste.openstack.org/show/3157/ migration doesn't work :( | 10:34 |
Razique | I don't have any errors here, have I ? | 10:34 |
*** perestre1ka has quit IRC | 10:35 | |
*** miclorb_ has quit IRC | 10:35 | |
Telamon | Razique: 1.0~d4+20111106-0mit1 from Kiall's PPA | 10:35 |
Telamon | Only one reported to work... | 10:35 |
*** perestrelka has joined #openstack | 10:35 | |
Razique | Telamon: I've only been able to with the trunk one | 10:35 |
Razique | (from github) | 10:36 |
uvirtbot | New bug: #887495 in horizon "Error authenticating with keystone: Unhandled error" [Undecided,New] https://launchpad.net/bugs/887495 | 10:36 |
Telamon | Ahh... Okay. I can load images directly with glance, and start them from Dashboard, so I think I'm going to just leave it as is for the moment. I don't want to upgrade to trunk and have it break some other component. | 10:37 |
*** naehring has joined #openstack | 10:38 | |
zykes- | Razique: where are you from ? Spain ? | 10:39 |
Razique | zykes-: France :D | 10:39 |
zykes- | oh | 10:39 |
zykes- | Razique: working for a large hosting provider or ? | 10:40 |
Razique | not at all | 10:40 |
Razique | Young sysadmin in a company we created last year :D | 10:40 |
zykes- | :p | 10:40 |
Razique | We first wanted to use eucalyptus | 10:40 |
zykes- | but then OS came along ? | 10:40 |
Razique | but after some months of operation, and lots of trouble, I started the migration to OS | 10:40 |
Razique | We offer cloud-based hosting and server management | 10:41 |
Razique | the first year, that was KVM only, worked pretty well :) | 10:41 |
Razique | what about you ? :) | 10:41 |
zykes- | norawy :) | 10:42 |
zykes- | Norway, trying to convince folks here but not easy | 10:42 |
*** lelin has joined #openstack | 10:42 | |
Razique | mmm well if you want some use cases, you can ask | 10:43 |
zykes- | fire away captain | 10:43 |
ryan_fox1985 | Hi I'm installing swift with swauth and when I start the proxy-server the service doesn't start. | 10:43 |
Razique | cool country :D | 10:43 |
ryan_fox1985 | In the syslog appears (reflowed here; #012 is syslog's newline encoding): | 11:43 |

    Nov 8 11:41:35 proxy proxy-server UNCAUGHT EXCEPTION
    Traceback (most recent call last):
      File "/usr/bin/swift-proxy-server", line 22, in <module>
        run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)
      File "/usr/lib/pymodules/python2.6/swift/common/wsgi.py", line 126, in run_wsgi
        app = loadapp('config:%s' % conf_file, global_conf={'log_name': log_name})
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 204, in loadapp
        return loadobj(APP, uri, name=name, **kw)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 224, in loadobj
        global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in loadcontext
        global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 278, in _loadconfig
        return loader.get_context(object_type, name, global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 405, in get_context
        global_additions=global_additions)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 503, in _pipeline_app_context
        for name in pipeline[:-1]]
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 409, in get_context
        section)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 431, in _context_from_use
        object_type, name=use, global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 361, in get_context
        global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 248, in loadcontext
        global_conf=global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 285, in _loadegg
        return loader.get_context(object_type, name, global_conf)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 561, in get_context
        object_type, name=name)
      File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 587,
Razique | ryan_fox1985: use paste please :) | 10:44 |
Razique | here http://paste.openstack.org/ | 10:44 |
ryan_fox1985 | ok sorry | 10:44 |
ryan_fox1985 | I already installed the swauth library | 10:46 |
ryan_fox1985 | And I already configured the proxy-server.conf | 10:47 |
*** dobber has joined #openstack | 10:48 | |
*** stevegjacobs_ has quit IRC | 10:50 | |
*** jedi4ever has quit IRC | 10:51 | |
*** ejat has quit IRC | 10:53 | |
*** Rajaram has quit IRC | 10:53 | |
Telamon | ryan_fox1985: If you want to pastebin your proxy-server.conf I can take a look at it. Swift was one of the few parts I got working easily. :) | 10:53 |
ryan_fox1985 | Ok one moment | 10:54 |
ryan_fox1985 | http://pastebin.com/1aYeKZ1f proxy-server.conf | 10:55 |
ryan_fox1985 | I downloaded swauth from git and did python setup.py install | 10:56 |
*** foexle has joined #openstack | 10:57 | |
Telamon | Okay, first thing to check: do the /etc/swift/cert.* files exist? | 10:57 |
ryan_fox1985 | yes I created | 10:57 |
Telamon | Okay, and your machine's IP is 10.30.239.198 ? | 10:58 |
ryan_fox1985 | cd /etc/swift || openssl req -new -x509 -nodes -out cert.crt -keyout cert.key | 10:58 |
ryan_fox1985 | yes I check with ifconfig eth0 | 10:58 |
Telamon | Okay, can you pastebin the error from above? It looks like the last few lines are missing. | 11:00 |
foexle | hi guys, i've uploaded an image to glance => all done. Now i try to start an instance and nova calls glance to get the requested image, but glance says it can't find it. So I traced the sql query that glance runs. If i try this query (http://pastebin.com/bR6WnYLs) with the 2 arguments (False,2), so False=deleted and id=2, i don't get a result. If i try it with (0,2) i get the image. is this a bug? | 11:00 |
ryan_fox1985 | http://pastebin.com/iURmkmCR | 11:01 |
ryan_fox1985 | it's all that have in the /var/log/syslog | 11:02 |
Telamon | ryan_fox1985: You seem to be missing a few lines of the error there. After the "line 587," it should have a function name and an error description. | 11:02 |
ryan_fox1985 | the log finishes with a comma | 11:02 |
Telamon | ryan_fox1985: Hmm, try running it from the command line: swift-init main start | 11:03 |
ryan_fox1985 | ok | 11:03 |
Telamon | As root. So you might need to put sudo in front of that | 11:04 |
ryan_fox1985 | appears an error | 11:04 |
ryan_fox1985 | main-server.conf not found | 11:04 |
Telamon | ryan_fox1985: Hmm, do you have a bunch of other conf files and folders (say object-server.conf) in /etc/swift ? | 11:05 |
ryan_fox1985 | in the /etc/swift folder | 11:06 |
*** clopez has quit IRC | 11:06 | |
ryan_fox1985 | One moment I created a pastebin | 11:06 |
ryan_fox1985 | http://pastebin.com/kA3604Qy | 11:07 |
ryan_fox1985 | this is my proxy node with swauth | 11:08 |
foexle | no one has any idea? | 11:08 |
*** Rajaram has joined #openstack | 11:09 | |
Telamon | ryan_fox1985: Okay, I think you are missing some of the setup steps. 1 sec... | 11:09 |
*** PotHix has joined #openstack | 11:09 | |
ryan_fox1985 | I did the steps from os-objectstorage-adminguide-trunk.pdf | 11:10 |
Telamon | ryan_fox1985: Try using these: http://docs.openstack.org/diablo/openstack-object-storage/admin/content/ I don't actually know which are more recent, but the website ones seem to work for me. | 11:11 |
ryan_fox1985 | did you already install swift with swauth? | 11:11 |
Telamon | foexle: What does euca-describe-images say | 11:11 |
foexle | Telamon: doesn't work | 11:12 |
foexle | auth require | 11:12 |
foexle | i think its the combination of keystone and glance | 11:12 |
foexle | but glance client works | 11:12 |
Telamon | ryan_fox1985: I installed it with keystone for the auth backend, but it's pretty much the same except for the password lookups. | 11:12 |
Telamon | foexle: Hmm, I dunno then. Are you using dashboard? Do you see them in there? | 11:13 |
foexle | yes and yes | 11:13 |
foexle | :) | 11:13 |
ryan_fox1985 | The web page has the same steps as the manual that I have | 11:14 |
Telamon | Hmm, did you check in the system panel->images that the image file has the right ID for the kernel and ramdisk file? I don't know, just guessing... | 11:15 |
foexle | hmmmm you're right .... | 11:17 |
foexle | kernel and ramdisk id = 123 | 11:17 |
foexle | hmmm | 11:17 |
Telamon | ryan_fox1985: You should have a bunch more config files in /etc/swift then. http://pastebin.com/R3PSykCw | 11:17 |
Telamon | foexle: If it turns out that I actually properly diagnosed an OpenStack problem I may have a heart attack... Don't tease me like that! ;-) | 11:18 |
foexle | :> | 11:18 |
foexle | but i followed the glance howto -.- | 11:19 |
Telamon | The 123 means nothing uploaded (it's probably in a grey font). If you have a separate kernel uploaded, just grab its ID from its own edit page and pop it in for the image one. Then you can start an instance. | 11:19 |
ryan_fox1985 | the account-server.conf, proxy-server.conf and object-server.conf I have in the storage nodes | 11:19 |
Telamon | ryan_fox1985: Ah, the docs must be slightly different... I don't know then, sorry. You might want to double check your syslog though. Your error message is definitely getting cut off in the middle. syslog doesn't always flush the cache so that can happen. | 11:21 |
Telamon | Anyone know the default logins to the uec-images.ubuntu.com images? | 11:21 |
*** zorzar has quit IRC | 11:22 | |
*** ianloic has quit IRC | 11:22 | |
*** fujin has quit IRC | 11:23 | |
ryan_fox1985 | how can I see the whole log? | 11:23 |
*** ahasenack has joined #openstack | 11:23 | |
Telamon | Probably tail /var/log/syslog | 11:24 |
ryan_fox1985 | ok, I'll start the proxy-server again | 11:24 |
ryan_fox1985 | and paste the logs | 11:24 |
ryan_fox1985 | http://pastebin.com/Hwp3bQfG | 11:26 |
ryan_fox1985 | I think that swift can't find the swauth module | 11:27 |
Telamon | Yeah, still missing part of the error message. Are you running Ubuntu? | 11:27 |
ryan_fox1985 | yes | 11:29 |
ryan_fox1985 | ubuntu 10.04 LTS | 11:29 |
ryan_fox1985 | server | 11:30 |
ryan_fox1985 | 32 bits | 11:30 |
Telamon | Hmm, I dunno then. It may very well be that swauth module is missing, but I can't really say more without the rest of the error message. Sorry I can't be of more help. | 11:31 |
ryan_fox1985 | Oks thanks! | 11:32 |
*** arun has quit IRC | 11:35 | |
*** clopez has joined #openstack | 11:36 | |
*** arun has joined #openstack | 11:37 | |
*** arun has joined #openstack | 11:37 | |
*** zorzar has joined #openstack | 11:38 | |
*** mmetheny has joined #openstack | 11:38 | |
*** Nathariel has joined #openstack | 11:38 | |
Nathariel | Hey guys. While installing the nova-common package on Ubuntu 11.10 I get "/usr/lib/python2.7/dist-packages/migrate/changeset/schema.py:124: MigrateDeprecationWarning: Passing a Column object to alter_column is deprecated. Just pass in keyword parameters instead." Any thoughts? | 11:40 |
Telamon | Nathariel: That's probably not a problem. It's just saying the package is using an upgrade option for SQL Alchemy that isn't officially supported any more. It should still work. | 11:44 |
*** Otter768 has quit IRC | 11:45 | |
Nathariel | Thanks, Telamon | 11:47 |
foexle | Telamon: i think the main problem is the auth ..... if i try to get a response from syspanel -> tenants it's unauthorized too | 11:47 |
foexle | this user has the sysadmin, admin and netadmin roles in keystone | 11:48 |
foexle | and the creds are correct too | 11:48 |
foexle | -.- oh man :D ... | 11:48 |
Telamon | foexle: Are you using devstack or packages? And if packages, did you use the devstack keystone_data.sh script? | 11:50 |
*** termie has quit IRC | 11:50 | |
*** rods has joined #openstack | 11:52 | |
foexle | Telamon: i'm using packages and no | 11:54 |
foexle | Telamon: i dont use any automated install scripts | 11:55 |
ninkotech | hi, i would like to use a swift-like solution, but i need to be able to configure the number of copies of the blob when i upload data into it... would it be hard to achieve this with swift somehow? | 11:56 |
*** BasTichelaar has quit IRC | 11:57 | |
*** termie has joined #openstack | 11:58 | |
Telamon | foexle: Okay, the docs for keystone on the website are missing a bunch of setup commands, which might be causing your problems. Try using this: https://answers.launchpad.net/swift/+question/175595 just make sure to change the token for admin/admin to the one from your /etc/nova/api-paste.ini file | 11:58 |
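For reference, a sketch of where that token lives; the section and key names follow the diablo-era /etc/nova/api-paste.ini convention and may differ in your packaging (the token value is a placeholder):

    [filter:authtoken]
    paste.filter_factory = keystone.middleware.auth_token:filter_factory
    admin_token = 999888777666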
*** supriya has joined #openstack | 11:59 | |
*** PeteDaGuru has joined #openstack | 12:01 | |
foexle | Telamon: ok .... the role MUST BE Admin .... | 12:05 |
foexle | Telamon: i configured in keystone conf with KeystoneAdmin | 12:05 |
foexle | but i think the communication between nova and keystone requires this role "Admin" | 12:05 |
foexle | ok Tenants show works now :) | 12:06 |
foexle | ok the last error euca-describe-images :D .... thx Telamon | 12:07 |
Telamon | Heh, let me know if you get that one working. No joy for me. | 12:07 |
foexle | other euca commands are running without errors .... hmmmm :) ... | 12:08 |
foexle | Telamon: ok i'll do | 12:08 |
*** supriya has quit IRC | 12:10 | |
*** livemoon has joined #openstack | 12:15 | |
*** praefect has joined #openstack | 12:15 | |
*** yeming has joined #openstack | 12:22 | |
*** Turicas has joined #openstack | 12:22 | |
*** yeming has quit IRC | 12:27 | |
*** Vek has joined #openstack | 12:29 | |
Razique | back guys ! | 12:31 |
*** nerdstein has joined #openstack | 12:32 | |
zykes- | Razique: ! | 12:32 |
zykes- | :d | 12:32 |
Razique | yup | 12:33 |
Razique | 'sup :p | 12:33 |
lelin | is there a way to create an image from a running system? | 12:36 |
Razique | lelin: yup, but I haven't tried yet | 12:37 |
Razique | it's into my todo list :p | 12:37 |
lelin | Razique, can you point me to some docs pls? | 12:37 |
Razique | only that atm https://lists.launchpad.net/openstack/msg03825.html | 12:38 |
Razique | good luck! | 12:38 |
*** Telamon has quit IRC | 12:38 | |
*** dirkx_ has joined #openstack | 12:39 | |
Razique | lelin: https://github.com/canarie/vm-toolkit#readme | 12:39 |
*** stevegjacobs_ has joined #openstack | 12:43 | |
*** Turicas has quit IRC | 12:44 | |
stevegjacobs_ | I have set up a web server instance, working perfectly, and I would like to figure out the best way to snapshot it and then make it available as a new image. | 12:45 |
*** stuntmachine has joined #openstack | 12:47 | |
*** Turicas has joined #openstack | 12:49 | |
*** stuntmachine has quit IRC | 12:49 | |
lelin | tnx Razique | 12:49 |
livemoon | hi | 13:03 |
livemoon | lelin | 13:04 |
livemoon | to create an image from a running system, you can use python-novaclient | 13:04 |
*** openpercept has quit IRC | 13:05 | |
*** shang has joined #openstack | 13:05 | |
*** cmagina_ has joined #openstack | 13:06 | |
lelin | livemoon, tnx, i'll give that a try too. does it require a volume attached? | 13:06 |
*** lorin1 has joined #openstack | 13:06 | |
livemoon | No | 13:06 |
lelin | cool | 13:07 |
livemoon | but I haven't tried to snapshot a server with a volume attached | 13:08 |
*** cmagina has quit IRC | 13:09 | |
*** dprince has joined #openstack | 13:10 | |
*** PeteDaGuru has quit IRC | 13:16 | |
*** nRy has joined #openstack | 13:19 | |
*** andredieb has joined #openstack | 13:20 | |
lelin | livemoon, i'm using "nova image-create" but after several minutes, nova image-list still shows state as "saving". does it take so long for you too? | 13:25 |
nRy | Hello | 13:26 |
livemoon | it depends on the size of your instances | 13:26 |
livemoon | but you can see compute.log of your host | 13:26 |
*** bcwaldon has joined #openstack | 13:26 | |
nRy | Does anyone know of some instructions, possibly a web link with info on "Starting" Amazon EC2 instances using a component of Openstack? | 13:27 |
nRy | or possibly Openstack with Chef? | 13:27 |
*** andredieb has quit IRC | 13:27 | |
Razique | livemoon: thanks for the nova stuff :D | 13:28 |
Razique | omg, i'll test that | 13:28 |
Razique | create image from instance, another thing I need to check | 13:28 |
*** stuntmachine has joined #openstack | 13:30 | |
livemoon | Razique: I see you email about migration | 13:30 |
Razique | livemoon: yah, i'm starting to be desperate here :d | 13:30 |
kaigan | 417 | 13:30 |
kaigan | err | 13:30 |
livemoon | I need to know how to do block migration | 13:30 |
livemoon | if you know, tell me | 13:31 |
Razique | yup, I'm trying to do both (live and block) | 13:32 |
Razique | but not success atm :D | 13:32 |
Razique | lelin: ok I just tried | 13:32 |
livemoon | go on | 13:32 |
Razique | I tried to create an image from a running instance | 13:33 |
Razique | the process goes well, and Glance gets the new image as a private one | 13:33 |
Razique | then I create a new server based on that image | 13:33 |
Razique | it boots, and I can connect to it | 13:33 |
Razique | … but :D | 13:33 |
DuncanT | If you update an api extension, such as os-volume, in a back-compatible manner, should you update the 'updated' timestamp in its attributes, or leave it alone? | 13:33 |
Razique | I created a file into the instance "razique" | 13:33 |
Razique | that is missing from the new server | 13:34 |
DuncanT | Example change: https://review.openstack.org/#change,1202 | 13:34 |
livemoon | why? | 13:34 |
Razique | I dunno | 13:35 |
* Razique is starting to like novaclient tool | 13:35 | |
lelin | Razique, for me it's still in "creating" state. but i think it's because nova-volume is not configured (i need a different partition for that, right?) | 13:35 |
Razique | lelin: not necesseraly | 13:35 |
lelin | ok | 13:35 |
Razique | Here I don't use nova-volume for the instance | 13:35 |
Razique | I've a small test instance on a server | 13:36 |
Razique | a ttylinux | 13:36 |
lelin | so what could be the problem Razique ? i have no evidence in the logs. now i ll try to make a snap of the tty | 13:36 |
*** stuntmachine has quit IRC | 13:36 | |
Razique | lelin: I would check nova-compute in verbose mode | 13:36 |
zykes- | novaclient tool Razique ? | 13:37 |
lelin | Razique, i will | 13:38 |
Razique | zykes-: the nova client | 13:39 |
*** Nadeem has joined #openstack | 13:39 | |
Nadeem | guys | 13:39 |
Razique | you know the replacement for euca2ools | 13:39 |
Nadeem | am not too familiar with git | 13:39 |
praefect | Razique: to do your create image from instance test, do you use something like "nova rebuild xxx y" ? | 13:39 |
Nadeem | but i am getting this: | 13:39 |
Razique | weird because I see "qemu-img convert -f qcow2 -O raw -s 20eb85f3af074bf6a5c94c97932b7999" | 13:39 |
Nadeem | + git clone https://github.com/openstack/nova.git /root/cloud/nova/nova Cloning into /root/cloud/nova/nova... warning: remote HEAD refers to nonexistent ref, unable to checkout. | 13:39 |
Razique | praefect: i've a running instance that I create image from | 13:39 |
praefect | Razique: ok but you use "nova rebuild" right? | 13:39 |
Razique | praefect: I don't | 13:40 |
Razique | rebuild "Shutdown, re-image, and re-boot a server." | 13:40 |
praefect | Razique: what do you do then? | 13:40 |
praefect | I'm lost | 13:40 |
Razique | I do nova image-create $server $name | 13:40 |
praefect | thanks | 13:40 |
Razique | then glance index shows me the image | 13:40 |
Razique | then nova boot --image $clone --flavor $flavor | 13:40 |
Razique | but it's like it uses the backing files since I don't find the test file I created | 13:41 |
Razique | and 2011-11-08 14:38:10,283 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img convert -f qcow2 -O raw -s 20eb85f3af074bf6a5c94c97932b7999 /var/lib/nova/instances/instance-00000036/d | 13:41 |
Razique | we see that it creates a clone from the running instance's disk, not the backing file | 13:41 |
*** ninkotech has quit IRC | 13:42 | |
Razique | ok weirder :D | 13:43 |
*** stuntmachine has joined #openstack | 13:43 | |
Razique | now the new server has the file I created | 13:43 |
Razique | but I created a second file, re-image and re-boot | 13:44 |
*** imsplitbit has joined #openstack | 13:44 | |
Razique | Now the second file doesn't exist | 13:44 |
Razique | haha | 13:44 |
praefect | Razique: this is very interesting, I've noticed that "glance index" never lists anything for me but I'm pretty sure I'm using glance... I have 10+ images in there and glance index shows nothing | 13:44 |
Razique | u mean the image ? | 13:44 |
Razique | or all ur images ? | 13:44 |
praefect | even with 10+ images in glance, "glance index" never returns anything, and if I look at glance-api.log I see entries with GET HEAD POST for all my image manipulation... | 13:45 |
Razique | praefect: u use Keystone ? | 13:46 |
uvirtbot | New bug: #887572 in keystone "Error authenticating with keystone: Unhandled error" [Undecided,New] https://launchpad.net/bugs/887572 | 13:46 |
praefect | Razique: no nothing keystone related.. | 13:46 |
Razique | great | 13:46 |
Razique | since Keystone requires extra changes in order to be able to use the glance shell | 13:46 |
Razique | are the images private or public | 13:47 |
praefect | ok at least, nova image-list shows me an image that is SAVING... | 13:47 |
Razique | does nova image-list show them? | 13:47 |
praefect | Razique: yes | 13:47 |
*** msivanes has joined #openstack | 13:47 | |
*** rsampaio has joined #openstack | 13:47 | |
praefect | and one is in SAVING state (the test I just did clone a VM) | 13:47 |
Razique | put glance in debug mode | 13:47 |
Razique | while u run glance index, and get the sql request | 13:48 |
praefect | Razique: glance-api restarted with debug=true | 13:48 |
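For reference, the switch being toggled here lives in /etc/glance/glance-api.conf:

    [DEFAULT]
    verbose = True
    debug = True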
livemoon | razique | 13:49 |
livemoon | give me your backup mysql script again. | 13:50 |
*** livemoon has left #openstack | 13:50 | |
*** livemoon has joined #openstack | 13:50 | |
Razique | livemoon: https://github.com/Razique/BashStuff | 13:51 |
Razique | help urself :p | 13:51 |
*** Arminder has quit IRC | 13:51 | |
livemoon | fork your branch | 13:51 |
Razique | sure | 13:52 |
foexle | Hey guys, if i run euca-describe-images i get this error AttributeError: keystone_ec1_url | 13:52 |
Razique | praefect: ok, so It's like the clone of the server is one step back | 13:52 |
Razique | foexle: Keystone user :p | 13:53 |
foexle | other euca commands are running | 13:53 |
stevegjacobs_ | I just installed python-novaclient and I'm trying to use it for the first time | 13:53 |
praefect | Razique: mine is still in SAVING state... | 13:53 |
foexle | Razique: keystone user ? | 13:53 |
*** naehring has quit IRC | 13:53 | |
foexle | Razique: this user has all the roles and tenants that are needed :/ | 13:53 |
Razique | foexle: do u have that file ? /usr/local/lib/python2.6/dist-packages/keystone-1.0-py2.6.egg/keystone/middleware/ec2_token.py | 13:53 |
Razique | praefect: what's ur setup? | 13:53 |
foexle | Razique: yes | 13:54 |
Razique | ok | 13:54 |
Razique | paste it please :) | 13:54 |
stevegjacobs_ | I'm used to authenticating with euca-tools, but I need some instruction on how to do this with keystone and novaclient | 13:54 |
praefect | Razique: not sure what you mean but my server is a xeon workstation with SATA disks | 13:55 |
praefect | if that's what you wanted to know | 13:55 |
foexle | Razique: http://pastebin.com/WZyGe9Nj | 13:55 |
Razique | stevegjacobs_: I've written up a guide here http://docs.openstack.org/trunk/openstack-compute/admin/content/migrating-from-cactus-to-diablo.html | 13:56 |
Razique | look the end of the doc | 13:56 |
Razique | foexle: the good one http://paste.openstack.org/show/3161/ | 13:56 |
foexle | Razique: ok thx | 13:57 |
Razique | two lines have changed, the o= url parse... | 13:57 |
Razique | and the array at the end of file | 13:57 |
*** Turicas has quit IRC | 13:57 | |
Razique | praefect: look into nova-compute.log | 13:57 |
*** Turicas has joined #openstack | 13:57 | |
*** aliguori has joined #openstack | 13:58 | |
Razique | maybe it hangs on the clone creation | 13:58 |
Razique | or maybe nova-api.log since it's trying to put it into glance | 13:58 |
livemoon | yes,look at nova-compute.log | 13:58 |
praefect | Razique: thanks, yes nova-compute is full of errors | 13:58 |
Razique | ok paste o/ | 13:58 |
*** dendro-afk is now known as dendrobates | 13:59 | |
praefect | Razique: qemu-img command failed: invalid option "-s" | 13:59 |
Razique | -s ? | 14:00 |
praefect | paste.openstack.org/show/3162 | 14:00 |
Razique | what version of nova are u running ? | 14:00 |
livemoon | it is older | 14:01 |
livemoon | python is 2.6 | 14:01 |
livemoon | in my server. it is python 2.7 | 14:01 |
praefect | Razique: I've had problems before because of qemu-img (I'm on centos using packages from griddynamics) (nova-manage version = 2011.3 (2011.3-LOCALBRANCH:LOCALREVISION)) | 14:01 |
Razique | praefect: diablo from trunk :/ | 14:02 |
Razique | mmm I use diablo stable | 14:02 |
praefect | Razique: like I said they are packages from griddynamics, the thing is - it's pretty hard to come up with the latest version of qemu-img on centos... | 14:03 |
praefect | package problems.. | 14:03 |
foexle | Razique: in your doc last line export NOVA_URL=http://$KEYSTONe-IP:5000/v.20/ should be /v2.0/ ? | 14:03 |
Razique | praefect: whatt's ur qemu version ? | 14:03 |
livemoon | Razique:you use stable? | 14:03 |
Razique | livemoon: diablo stable yah | 14:03 |
Razique | foexle: yup | 14:03 |
livemoon | but your keystone and novaclient is git? | 14:03 |
Razique | livemoon: yah | 14:03 |
livemoon | cool | 14:03 |
praefect | qemu-img 0.12.1 .. simply not enough | 14:03 |
Razique | here 0.14-0 | 14:03 |
Razique | praefect: changelog of the 0.14 version http://repo.or.cz/w/qemu.git/commitdiff/51ef67270b1d10e1fcf3de7368dccad1ba0bf9d1 | 14:05 |
Razique | "The following patch adds a new option in "qemu-img": qemu-img convert -f qcow2 -O qcow2 -s snapshot_name src_img bck_img. | 14:05 |
Razique | " | 14:05 |
Razique | praefect: that's our answer here :) | 14:05 |
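For reference, the invocation from the quoted changelog, with placeholder names:

    qemu-img convert -f qcow2 -O qcow2 -s snapshot_name src_img bck_img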
Razique | livemoon: I bypass Keystone remember :p | 14:06 |
Razique | breaks existing cactus project :/ | 14:06 |
praefect | Razique: yes, just got qemu-img 0.15.0 installed... going to retry | 14:07 |
livemoon | razique: you are so clover | 14:07 |
*** antenagora has joined #openstack | 14:08 | |
foexle | Razique: token_id = result['access']['token']['id'] | 14:08 |
foexle | KeyError: 'access' | 14:08 |
livemoon | clever | 14:08 |
*** kbringard has joined #openstack | 14:08 | |
*** dirkx_ has quit IRC | 14:09 | |
Razique | livemoon: u'll laugh | 14:09 |
Razique | check the mail I just sent to the ML | 14:09 |
livemoon | today I ran dashboard in virtualenv, and it runs | 14:10 |
livemoon | but when I run it with apache, something errors | 14:10 |
*** sandywalsh_ has joined #openstack | 14:10 | |
praefect | Razique: new test "nova image-create 298 up" now complains about Unknown file format "ami" ... with qemu-img 0.15.0 -----> http://paste.openstack.org/show/3163 | 14:10 |
*** joesavak has joined #openstack | 14:11 | |
Razique | livemoon: what are you on actually ? | 14:11 |
livemoon | I don't understand | 14:11 |
Razique | praefect: I'd say it's the format of the image but not sure | 14:11 |
Razique | praefect: can u look into Glance DB | 14:12 |
*** shang has quit IRC | 14:12 | |
Razique | especially the disk_format field ? (for the base image) | 14:12 |
*** imsplitbit has quit IRC | 14:12 | |
Razique | livemoon: with the devstack script ? | 14:12 |
livemoon | no, just following the README in git | 14:13 |
praefect | Razique: could you just run "qemu-img" on your system and look at the last line "Supported formats"... do you see ami there? and what about the qemu-img command that gets run on your compute node, is it trying to output in ami format? | 14:13 |
*** cmagina has joined #openstack | 14:13 | |
Razique | foexle: can I see our nova.conf ? | 14:13 |
livemoon | following devstack, it cannot run | 14:13 |
Razique | praefect: mine uses raw http://paste.openstack.org/show/3164/ | 14:13 |
foexle | Razique: i think it's an 401 in keystone | 14:13 |
*** lts has joined #openstack | 14:14 | |
foexle | keystone gets ec2key xxx-xxx-xx:<tenant_name> | 14:14 |
praefect | Razique: thanks for that | 14:14 |
Razique | praefect: ami means amazon machine image | 14:14 |
Razique | foexle: whuch version of KS are u using ? | 14:14 |
livemoon | hi | 14:14 |
*** dirkx_ has joined #openstack | 14:15 | |
livemoon | I want to know how openstack is used in your countrie? | 14:15 |
Razique | praefect: that's why I think u must have the wrong format defined for a disk | 14:15 |
livemoon | countries | 14:15 |
*** imsplitbit has joined #openstack | 14:15 | |
foexle | Razique: hmpf .... where can i get this information -.- | 14:16 |
Razique | foexle: dunno… how did u install it ? | 14:16 |
foexle | cant see it in log after restart | 14:16 |
livemoon | I want to develop openstack in our city and country | 14:16 |
livemoon | how to do it? | 14:16 |
foexle | and keystone --version gets keystone <function version at 0x7fec002516e0> | 14:16 |
foexle | i installed keystone as deb package | 14:17 |
praefect | Razique: would you be so kind as to confirm that you have raw instead of ami in one of these column from the glance DB? http://paste.openstack.org/show/3165 | 14:17 |
foexle | Razique: oh wait .... no deb package ... hmm | 14:18 |
*** cereal_bars has joined #openstack | 14:18 | |
foexle | no, it was a manual git checkout | 14:18 |
praefect | Razique: so I've got a problem with my glance: (1) it doesn't list anything with "glance index" and (2) it uses ami as an image format which does not make sense... | 14:18 |
Razique | praefect: hehe that's what I told u :p | 14:19 |
Razique | here I've raw | 14:19 |
Razique | disk_format either raw or qcow | 14:19 |
Razique | qcow/ qcow2 | 14:19 |
*** shawn has quit IRC | 14:19 | |
guaqua | anyone else having trouble with autocreation of accounts in swift? | 14:19 |
guaqua | i'm trying to go through this, but running into trouble: http://swift.openstack.org/howto_installmultinode.html | 14:20 |
*** supriya has joined #openstack | 14:20 | |
Razique | praefect: the image creation extracts the info from Glance db imho in order to create the same type for the clone | 14:20 |
Razique | praefect: how do u populate ur glance repo ? | 14:20 |
Razique | via nova-manage or glance directly ? | 14:21 |
praefect | Razique: euca-bundle etc... only | 14:21 |
praefect | never imported an image otherwise | 14:21 |
Razique | praefect: erf… don't use euca2ools for image upload | 14:21 |
praefect | I won't if that doesn't work properly.. what's the preferred method? | 14:21 |
Razique | it splits files and duplicates them into the local and glance repos | 14:21 |
praefect | it does and it takes forever | 14:21 |
Razique | praefect: nova-manage image image_register/ kernel_register / ramdisk_register :) | 14:22 |
Razique | praefect: yah, check /var/lib/nova/images | 14:22 |
Razique | I bet u have images here | 14:22 |
*** localhost has quit IRC | 14:22 | |
Razique | you can also use native glance tools, but I like nova-manage image's way | 14:22 |
Razique | praefect: if you can, don't use euca2ools | 14:23 |
Razique | or only to "consult" infos | 14:23 |
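A sketch of the nova-manage path being recommended; the argument order is from diablo-era usage and may vary, so check nova-manage image --help on your build:

    nova-manage image kernel_register  <kernel-file>  <owner>
    nova-manage image ramdisk_register <ramdisk-file> <owner>
    nova-manage image image_register   <image-file>   <owner>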
*** marrusl has joined #openstack | 14:23 | |
*** rsampaio has quit IRC | 14:24 | |
gnu111 | Are the node_timeout and conn_timeout options in swift conf files in seconds? | 14:24 |
*** ldlework has joined #openstack | 14:24 | |
*** localhost has joined #openstack | 14:24 | |
*** Nadeem has quit IRC | 14:25 | |
*** ldlework has quit IRC | 14:29 | |
*** ldlework has joined #openstack | 14:29 | |
dweimer | gnu111: Yes. | 14:30 |
gnu111 | dweimer: Thanks. I am using the defaults now and testing with files > 200GB. Are there any recommended values? | 14:31 |
*** shawn has joined #openstack | 14:32 | |
livemoon | Razique | 14:34 |
livemoon | bye | 14:34 |
livemoon | good night | 14:34 |
Razique | good bye my friend :) | 14:35 |
*** livemoon has left #openstack | 14:35 | |
Razique | praefect: ok so I think i've figured out a way to create the images | 14:35 |
praefect | Razique: thanks to you I managed to clone a VM, I will boot it in 5 sec, I'm listening | 14:36 |
Razique | so, well, if I create two files and then an image, the last file is missing | 14:36 |
Razique | if I create 5 files, I'll have 4 | 14:36 |
Razique | so it's like a delay happens every time | 14:37 |
praefect | ok do you do sync on the shell before cloning? | 14:37 |
praefect | sync; | 14:37 |
Razique | mmm what is that ? | 14:37 |
praefect | I'm sure you know about that | 14:37 |
Razique | haha | 14:37 |
praefect | it flushes pending io to disk | 14:37 |
Razique | no :D | 14:37 |
Razique | ahh | 14:37 |
Razique | yes | 14:37 |
praefect | well... if that's your problem I'm happy I could help | 14:37 |
Razique | I thought that was a nova stuff | 14:37 |
Razique | ok leme try | 14:38 |
Razique | :p | 14:38 |
praefect | I always do sync, an old habit from before good linux systems flushed io on reboot etc... | 14:38 |
Razique | so I create my file, sync | 14:39 |
Razique | then nova-image-create | 14:39 |
praefect | that's right | 14:39 |
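[The flow praefect and Razique settle on, as a minimal sketch; the instance and snapshot names are illustrative:]

    # inside the instance: push pending writes out of the page cache
    sync
    # then snapshot the instance via the API
    nova image-create myserver mysnapshot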
*** cmagina_ has quit IRC | 14:39 | |
nRy | Hello, any Openstack people interested in some freelance work? :-) | 14:39 |
lelin | do you know if is there any plan to support selinux on openstack? | 14:39 |
praefect | Razique: I just booted my clone, both files are there (up and down) and I'm pretty sure I did a sync... let me know | 14:41 |
*** shawn has quit IRC | 14:41 | |
Razique | man… I owe u one! | 14:41 |
Razique | It's working !!! | 14:41 |
Razique | sick | 14:41 |
Razique | nRy: sure :D | 14:42 |
nRy | I am looking for someone who is familiar with Openstack to help with a cool project ;-) | 14:42 |
Razique | nRy: can you give us few details or is it kinda private ? | 14:43 |
nRy | well I can say, without giving too much away | 14:43 |
Razique | ofc :) | 14:44 |
nRy | we have some servers running on AWS, and some of our own private servers | 14:44 |
nRy | by private servers I mean ones we own that are located in other datacenters besides AWS resources | 14:44 |
nRy | we want to use Openstack to create a centralized management system for all of the servers | 14:45 |
*** nerens has quit IRC | 14:45 | |
*** dendrobates is now known as dendro-afk | 14:46 | |
*** lborda has joined #openstack | 14:46 | |
*** shang has joined #openstack | 14:47 | |
*** dtroyer has joined #openstack | 14:49 | |
nRy | Razique: what do you think? | 14:49 |
sandywalsh_ | ttx around? | 14:49 |
ttx | sandywalsh: yes | 14:50 |
sandywalsh_ | ttx, hey! I moved the UTC time for the orch meeting to keep the CST time the same. What is everyone else doing? | 14:50 |
sandywalsh_ | ttx (so I don't tread on toes) | 14:50 |
ttx | sandywalsh: everyone else should not move time | 14:50 |
ttx | So far we tried to keep the meeting times consistent in UTC | 14:51 |
sandywalsh_ | ttx, ok, I'll keep the UTC the same and start an hour earlier ... thanks | 14:51 |
*** AlanClark has joined #openstack | 14:51 | |
ttx | sandywalsh: great | 14:51 |
praefect | Razique: you rock.. the files are there but more importantly you showed me the path of nova-manage bundling | 14:52 |
dweimer | gnu111: It depends on your setup. If you have enough storage nodes to handle the load then you may not need to increase them at all. As your storage server I/O starts to go higher there's a larger chance of node_timeout. That's been our experience anyway. | 14:56 |
*** shawn has joined #openstack | 14:56 | |
praefect | .. | 14:56 |
*** robbiew has joined #openstack | 14:57 | |
dweimer | gnu111: When the different timeouts are hit, it will be logged on the proxies. Because of segmentation the actual size of the files doesn't matter as much. Consider that uploading a 200GB file is equivalent to uploading 40 5GB files if you use 5GB segments. If you use the standard swift client, I believe it will do 10 threads at once by default. | 14:59 |
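[For reference, a sketch of where those timeouts live; the values shown are the usual defaults, not a tuning recommendation:]

    # /etc/swift/proxy-server.conf (both values are in seconds)
    [app:proxy-server]
    use = egg:swift#proxy
    node_timeout = 10
    conn_timeout = 0.5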
*** AlanClark has quit IRC | 14:59 | |
*** datajerk has quit IRC | 14:59 | |
*** dirkx_ has quit IRC | 15:00 | |
*** dirkx_ has joined #openstack | 15:00 | |
*** dirkx_ has quit IRC | 15:01 | |
*** rnirmal has joined #openstack | 15:01 | |
uvirtbot | New bug: #887596 in glance "Allow syslog facility to be selected" [Undecided,New] https://launchpad.net/bugs/887596 | 15:01 |
*** datajerk has joined #openstack | 15:02 | |
*** dendro-afk is now known as dendrobates | 15:03 | |
*** Rajaram has quit IRC | 15:04 | |
Razique | praefect: fantastic | 15:04 |
*** supriya has quit IRC | 15:05 | |
*** marrusl has quit IRC | 15:06 | |
gnu111 | dweimer: Once in a while I noticed when I run list, not all files show up. For instance, I have one directory with a 8G, 10G and 20GB file. If I run list repeatedly, sometime the 10G and 20G do not show up. is this a rsync issue? | 15:06 |
*** jwalcik has joined #openstack | 15:07 | |
*** dolphm has joined #openstack | 15:07 | |
*** dragondm has joined #openstack | 15:07 | |
*** marrusl has joined #openstack | 15:08 | |
*** dgags has quit IRC | 15:10 | |
*** dgags has joined #openstack | 15:10 | |
*** antenagora has quit IRC | 15:11 | |
*** AlanClark has joined #openstack | 15:11 | |
*** winston-d_ has joined #openstack | 15:15 | |
winston-d_ | hi, all | 15:16 |
*** winston-d_ has left #openstack | 15:17 | |
foexle | Razique: i need your help again :(. I'm sorry ... | 15:18 |
dweimer | gnu111: It does sound like an issue with the container replicator, which I believe uses rsync. Make sure that you have the container-replicator and rsyncd running on all of the storage nodes. The other thing it could be is that some of the nodes may have outdated ring files. | 15:18 |
Razique | foexle: sure! | 15:18 |
*** shawn has quit IRC | 15:18 | |
foexle | Razique: i think the problem is deeper .... if i try now with nova-manage image xxxxxx i get a 401 every time -.- | 15:18 |
dweimer | gnu111: It's a good idea to monitor the ring file md5sums on all of the storage and proxy nodes. md5sum /etc/swift/*.ring.gz should be the same across all of the nodes. | 15:19 |
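[A minimal sketch of that check across a cluster; the hostnames are hypothetical:]

    # every proxy and storage node should print identical hashes
    for host in proxy1 storage1 storage2 storage3; do
        ssh $host "md5sum /etc/swift/*.ring.gz"
    done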
foexle | Razique: i check all logs | 15:19 |
gnu111 | dweimer: ok. will do that. | 15:19 |
foexle | and nova-manage doesn't hit keystone or nova-api, and nothing shows in glance-registry.log | 15:19 |
foexle | so i dont see anything | 15:20 |
*** hggdh has quit IRC | 15:20 | |
*** hggdh has joined #openstack | 15:20 | |
*** dirkx_ has joined #openstack | 15:20 | |
*** dendrobates is now known as dendro-afk | 15:21 | |
Razique | foexle: have u tried to bypass Keystone, or do u need it ? | 15:21 |
foexle | i need it :/ | 15:22 |
Razique | ok | 15:22 |
Razique | leme 10 minutes to validate something here | 15:23 |
Razique | then I'll look | 15:23 |
Razique | help* | 15:23 |
kbringard | the —allow_admin_api is just inherent now? doesn't work if you pass it a value? | 15:23 |
foexle | http://pastebin.com/6tPyiELr | 15:23 |
Razique | would u mind sharing SSH access ? | 15:23 |
foexle | Razique: thx | 15:23 |
Razique | kbringard: it is required if you need to do operations like pause /suspend | 15:23 |
*** dendro-afk is now known as dendrobates | 15:23 | |
Razique | kbringard: just use it as is into nova.conf | 15:23 |
kbringard | right, but I mean | 15:23 |
kbringard | it used to be --allow_admin_api=true | 15:24 |
Razique | ahhh | 15:24 |
Razique | sorry :D | 15:24 |
kbringard | but I'm seeing this | 15:24 |
kbringard | https://skitch.com/aub17/gg6ba/dreamweaver | 15:24 |
Razique | I use it like this "--allow_admin_api" | 15:24 |
kbringard | yea, I'd always done =true | 15:24 |
kbringard | :shrug: | 15:24 |
Razique | I think passing a valueless option in conf files makes it evaluate as true | 15:26 |
Razique | except if you add =False | 15:26 |
Razique | but I'm not that sure :/ | 15:26 |
kbringard | well, the error specifically says "option does not take a valie" | 15:26 |
kbringard | value* | 15:26 |
kbringard | so I'd assume putting the flag in makes it true | 15:26 |
kbringard | and not putting it in makes it false | 15:26 |
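[In other words, boolean flags in a diablo/essex-era nova.conf flagfile are true simply by being present; a sketch:]

    # nova.conf (flagfile style)
    --allow_admin_api
    # the old form, --allow_admin_api=true, fails on newer builds with
    # "option does not take a value"; omit the flag entirely for false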
*** bcwaldon has quit IRC | 15:26 | |
Razique | kbringard: yah u totally could be right on that | 15:27 |
kbringard | no biggie, though, was just askin' | 15:27 |
*** epsas has joined #openstack | 15:29 | |
*** shawn has joined #openstack | 15:29 | |
*** code_franco has joined #openstack | 15:31 | |
annegentle | kbringard: I've asked about that as well (boolean or blank so true if present) and hear it works either way | 15:31 |
kbringard | annegentle: doesn't seem to anymore :-) | 15:32 |
kbringard | https://skitch.com/aub17/gg6ba/dreamweaver | 15:32 |
annegentle | kbringard: hm there were flag changes recently in trunk? | 15:32 |
kbringard | that's running essex trunk from  2012.1~e1~20111021.11232-0ubuntu0ppa1~natty1 | 15:32 |
kbringard | I'm not sure when it changed, I was troubleshooting that for someone else | 15:32 |
gnu111 | dweimer: You were right! The md5sum wasn't the same. I fixed that. | 15:33 |
kbringard | so I'm not 100% sure what they updated from | 15:33 |
kbringard | but it was a diablo trunk build (I think in D2 or D3) | 15:33 |
kbringard | not that it matters, just good to know | 15:33 |
*** MarkAtwood has joined #openstack | 15:33 | |
annegentle | kbringard: sometime last week? | 15:34 |
kbringard | it looks like his build is from 10/21 | 15:34 |
kbringard | he updated from a pretty old diablo build, like June or July | 15:35 |
*** termie has quit IRC | 15:36 | |
*** negronjl has joined #openstack | 15:37 | |
*** jedi4ever has joined #openstack | 15:39 | |
dweimer | gnu111: Changing the rings will replicate the data to the new locations. Once that is done you shouldn't have the differing container listings any more. | 15:41 |
*** troytoman-away is now known as troytoman | 15:41 | |
*** vidd-away has quit IRC | 15:41 | |
*** jdg has joined #openstack | 15:45 | |
*** dirakx1 has quit IRC | 15:46 | |
*** Arminder has joined #openstack | 15:48 | |
*** cereal_bars has quit IRC | 15:48 | |
*** rnorwood has joined #openstack | 15:48 | |
*** supriya has joined #openstack | 15:48 | |
*** shawn has quit IRC | 15:48 | |
*** rsampaio has joined #openstack | 15:49 | |
stevegjacobs_ | Razique - I was on earlier asking about using novaclient - about getting authenticated. Got completely sidetracked for a bit but I am trying to follow your instructions now | 15:50 |
*** termie has joined #openstack | 15:51 | |
stevegjacobs_ | My installation is based on Kiall's ppa | 15:51 |
Razique | stevegjacobs_: hehe ok | 15:52 |
*** shang has quit IRC | 15:52 | |
Razique | foexle: u there ? | 15:52 |
stevegjacobs_ | but it modified - I have three nodes | 15:52 |
Kiall | stevegjacobs_: have a look at the settings file from the scripts.. | 15:52 |
foexle | Razique: !:) | 15:52 |
Razique | ok ok | 15:52 |
*** shang has joined #openstack | 15:52 | |
Kiall | it has a pile of NOVA_* settings, those are what would go into novarc for python-novaclient auth | 15:52 |
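[A sketch of such a novarc file, with variable names as diablo-era python-novaclient reads them; all values are placeholders:]

    export NOVA_USERNAME=steve
    export NOVA_API_KEY=secret
    export NOVA_PROJECT_ID=myproject
    export NOVA_URL=http://keystone-host:5000/v2.0/
    export NOVA_VERSION=1.1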
* Kiall has had a horrible day.. dell storage array decided to mark all disks as dead -_- | 15:53 | |
*** Hakon|mbp has quit IRC | 15:54 | |
stevegjacobs_ | ok Kiall - I'll have a look - sorry to hear about your woes! | 15:54 |
*** termie has quit IRC | 15:55 | |
Kiall | lol .. everything is back in order now.. nothing lost :) | 15:55 |
*** adjohn has joined #openstack | 15:56 | |
*** rsampaio has quit IRC | 15:56 | |
*** n81 has joined #openstack | 15:56 | |
*** dirkx_ has quit IRC | 15:57 | |
*** rnorwood has quit IRC | 15:57 | |
*** krow has joined #openstack | 15:59 | |
*** rsampaio has joined #openstack | 15:59 | |
*** rwmjones has joined #openstack | 16:00 | |
*** termie has joined #openstack | 16:00 | |
*** termie has quit IRC | 16:00 | |
*** termie has joined #openstack | 16:00 | |
*** rnorwood has joined #openstack | 16:02 | |
*** nerens has joined #openstack | 16:02 | |
*** po has joined #openstack | 16:03 | |
*** rnorwood has quit IRC | 16:04 | |
*** alperkanat has joined #openstack | 16:07 | |
*** alperkanat has joined #openstack | 16:07 | |
*** llang629 has joined #openstack | 16:09 | |
alperkanat | anybody here experienced in JBOD setup for HP Smart Array P410? | 16:09 |
*** llang629 has left #openstack | 16:09 | |
*** PotHix has quit IRC | 16:11 | |
stevegjacobs_ | Kiall: thanks - got that to work - made a new file with some of the credentials, sourced it and then nova image-list gives me an output | 16:11 |
Kiall | Cool - BTW You'll need a pile more settings in it for the euca-* tools to work.. :) | 16:12 |
stevegjacobs_ | Still have some issues with dashboard - especially a strange error when trying to do snapshots | 16:12 |
*** reidrac has quit IRC | 16:12 | |
Kiall | is the nova-compute node running my packages, or the original ubuntu ones? | 16:13 |
stevegjacobs_ | Yeah I saw a document on getting both to work - some kind of plumbing job | 16:13 |
Kiall | there was a bug in the ubuntu packages, thats been fixed in stable/diablo (so, its included in my packages) | 16:13 |
stevegjacobs_ | umm - you're talking about that second node? | 16:13 |
uvirtbot | New bug: #887611 in horizon "Console Log should have a nice message if instance state isn't running" [Undecided,New] https://launchpad.net/bugs/887611 | 16:13 |
*** hezekiah_ has joined #openstack | 16:14 | |
stevegjacobs_ | I don't think I need to install everything from your packages on there do I? | 16:14 |
*** TheOsprey has quit IRC | 16:14 | |
Kiall | yea - the server with nova-compute installed | 16:14 |
Kiall | it just needs the nova-copmpute package on the second node, which will bring in the right libs etc | 16:15 |
Kiall | stevegjacobs_: just got back to your email BTW .. sorry for the delay.. It's been one hell of a day ;) | 16:16 |
*** webx has joined #openstack | 16:16 | |
*** jaypipes has quit IRC | 16:17 | |
*** jaypipes has joined #openstack | 16:17 | |
*** krow has quit IRC | 16:18 | |
webx | which package provides the swauth command? I'm using the diablo-centos repo found here: http://yum.griddynamics.net/yum/diablo-centos/ | 16:20 |
*** rnorwood has joined #openstack | 16:21 | |
stevegjacobs_ | Kiall: when upgrading the packages, is it a good idea to add the 'recommended' packages as well? in this case: radvd python-suds | 16:23 |
*** clauden___ has quit IRC | 16:24 | |
*** clauden_ has joined #openstack | 16:24 | |
Kiall | I haven't installed any of the recommended packages, my dep lists came from the official ubuntu ones, so I left them as is and only fixed what was broken | 16:24 |
*** alperkanat has quit IRC | 16:26 | |
*** cp16net has joined #openstack | 16:26 | |
rwmjones | I want to boot a VM from external kernel + initrd + root disk ... is there any documentation on doing this? | 16:27 |
*** shawn has joined #openstack | 16:28 | |
*** alperkanat has joined #openstack | 16:28 | |
*** alperkanat has joined #openstack | 16:28 | |
alperkanat | notmyname: ping | 16:29 |
notmyname | alperkanat: good morning. get anywhere with performance last night? | 16:29 |
*** jdg has quit IRC | 16:30 | |
alperkanat | nope.. i couldn't do the tests.. but i want to tell you something | 16:30 |
alperkanat | leaseweb confirmed that our RAID controllers are backed with batteries | 16:30 |
*** stevegjacobs has joined #openstack | 16:30 | |
alperkanat | however the we seem to have RAID0 and not JBOD | 16:30 |
epsas | hmm -- building out tools with right_aws now | 16:31 |
alperkanat | i now enabled write-cache battery on one of the servers | 16:31 |
alperkanat | hoping to have increase for writes | 16:31 |
epsas | is anybody using right_aws with openstack? | 16:31 |
alperkanat | notmyname: do you think that RAID0 is responsible for slow performance? | 16:31 |
*** dprince has quit IRC | 16:32 | |
*** guigui has quit IRC | 16:32 | |
notmyname | alperkanat: no. RAID0 should give you better performance, in fact | 16:32 |
*** nati2 has joined #openstack | 16:32 | |
alperkanat | notmyname: hmm i see | 16:32 |
stevegjacobs_ | Kiall: updated compute node. Still have a problem with dashboard when trying to create snapshot | 16:32 |
stevegjacobs_ | Unexpected error: The server has either erred or is incapable of performing the requested operation. | 16:33 |
*** supriya has quit IRC | 16:33 | |
notmyname | alperkanat: since you have the battery backed cache, you should be able to go with the nobarrier option with no problem | 16:33 |
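[What that typically looks like for an XFS-backed swift device, as a sketch; the device path follows the surrounding discussion and the options follow the usual swift deployment advice:]

    # safe only with battery-backed write cache
    mount -o remount,noatime,nodiratime,nobarrier /srv/node/sdb1
    # persisted in /etc/fstab, e.g.:
    # /dev/sdb1  /srv/node/sdb1  xfs  noatime,nodiratime,nobarrier,logbufs=8  0 0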
alperkanat | ok i'm enabling write cache for all storage nodes 1 by 1 now | 16:34 |
stevegjacobs_ | I tried this on a new small instance as well as the old one from pre-keystone - same error | 16:34 |
*** shawn has quit IRC | 16:34 | |
notmyname | alperkanat: once that's done, I think your best bet is to look at each component independently. do the walt test (and perhaps the bonnie test). | 16:34 |
hezekiah_ | anyone seeing libvirtd restarting over and over on natty? | 16:34 |
alperkanat | notmyname: component? | 16:34 |
stevegjacobs_ | Kiall - not sure what logs to look at to see if I can tell where the error is coming from | 16:35 |
*** bcwaldon has joined #openstack | 16:35 | |
alperkanat | notmyname: can you try this on your servers? dd if=/dev/urandom of=testfile bs=1M count=100 | 16:35 |
*** dolphm has quit IRC | 16:35 | |
*** zaitcev has joined #openstack | 16:36 | |
*** juddm has joined #openstack | 16:36 | |
*** dolphm has joined #openstack | 16:36 | |
hezekiah_ | anyone see anything like this? | 16:36 |
hezekiah_ | http://paste.openstack.org/show/3172/ | 16:36 |
alperkanat | my result on a cache enabled server and disabled server is almost the same: 104857600 bytes (105 MB) copied, 18,2724 s, 5,7 MB/s ///// 104857600 bytes (105 MB) copied, 18,3694 s, 5,7 MB/s | 16:36 |
*** primeministerp has joined #openstack | 16:36 | |
*** dobber has quit IRC | 16:37 | |
*** jsavak has joined #openstack | 16:38 | |
*** lelin has quit IRC | 16:39 | |
alperkanat | notmyname: http://cl.ly/BcYg (logical drive status on storage node), write cache battery stat: http://cl.ly/Bcu3 | 16:40 |
notmyname | alperkanat: I get 5.3 MB/sec on my VM. (but I don't think /dev/urandom is a good test) | 16:40 |
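[/dev/urandom is CPU-bound, so it benchmarks the kernel PRNG rather than the disk; a less misleading quick test, as a sketch:]

    # /dev/zero removes the PRNG bottleneck; oflag=direct bypasses the page cache
    dd if=/dev/zero of=testfile bs=1M count=100 oflag=direct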
alperkanat | notmyname: the reason i haven't tried walt and bonnie is that get-nodes does not provide correct information and i didn't know how to use them | 16:40 |
*** rsampaio has quit IRC | 16:41 | |
*** dolphm has quit IRC | 16:41 | |
*** lvaughn_ has joined #openstack | 16:42 | |
*** lvaughn has quit IRC | 16:42 | |
hezekiah_ | I'm building nova-compute boxes with puppet | 16:42 |
hezekiah_ | and I keep seeing | 16:42 |
hezekiah_ | Nov 8 10:41:06 m0005048 libvirtd: 10:41:06.349: 8030: error : virGetGroupID:2882 : Failed to find group record for name 'kvm': Numerical result out of range | 16:42 |
*** GheRivero_ has joined #openstack | 16:42 | |
hezekiah_ | and libvirtd goes into a restart loop | 16:42 |
*** smeier00 has joined #openstack | 16:43 | |
*** joesavak has quit IRC | 16:43 | |
*** alperkanat has left #openstack | 16:43 | |
*** alperkanat has joined #openstack | 16:43 | |
webx | http://paste.openstack.org/show/3173/ | 16:44 |
webx | I've been following the instructions here: http://swift.openstack.org/howto_installmultinode.html | 16:44 |
*** GheRivero_ has quit IRC | 16:45 | |
*** GheRivero_ has joined #openstack | 16:45 | |
webx | now that I'm to the part where I actually test, I'm seeing the error in the paste. any ideas? | 16:45 |
*** smeier00 has left #openstack | 16:45 | |
*** GheRivero_ has quit IRC | 16:46 | |
notmyname | webx: can you paste your proxy config? | 16:46 |
*** GheRivero_ has joined #openstack | 16:46 | |
webx | proxy-server.conf ? | 16:46 |
*** rsampaio has joined #openstack | 16:46 | |
alperkanat | notmyname: have you written something or maybe i missed it? | 16:47 |
webx | notmyname. http://paste.openstack.org/show/3174/ | 16:47 |
notmyname | alperkanat: nope | 16:47 |
*** popux has joined #openstack | 16:47 | |
alperkanat | notmyname: ok.. | 16:48 |
*** dolphm has joined #openstack | 16:48 | |
notmyname | webx: add "account_autocreate = true" to the [app:proxy-server] section. then reload the proxy and you should be good | 16:49 |
Kiall | stevegjacobs_: humm.. I'd check the nova-compute, nova-api and glance logs | 16:50 |
webx | notmyname. same error after adding that and swift-init proxy restart | 16:51 |
notmyname | webx: hmm | 16:52 |
webx | http://paste.openstack.org/show/3175/ | 16:52 |
*** datajerk has quit IRC | 16:52 | |
*** arBmind_ has joined #openstack | 16:55 | |
*** rnirmal has quit IRC | 16:55 | |
*** termie has quit IRC | 16:56 | |
*** cp16net has quit IRC | 16:57 | |
*** cp16net has joined #openstack | 16:57 | |
*** arBmind has quit IRC | 16:58 | |
*** arBmind_ is now known as arBmind | 16:58 | |
*** jog0 has joined #openstack | 16:58 | |
*** misheska has joined #openstack | 16:59 | |
*** rsampaio has quit IRC | 16:59 | |
*** jj0hns0n has joined #openstack | 17:00 | |
*** termie has joined #openstack | 17:00 | |
*** termie has quit IRC | 17:00 | |
*** termie has joined #openstack | 17:00 | |
*** datajerk has joined #openstack | 17:01 | |
*** javiF has quit IRC | 17:01 | |
*** lvaughn has joined #openstack | 17:02 | |
*** lvaughn_ has quit IRC | 17:02 | |
*** kaigan has quit IRC | 17:02 | |
*** TheOsprey has joined #openstack | 17:03 | |
*** datajerk has quit IRC | 17:05 | |
stevegjacobs_ | Kiall: haven't checked logs yet but I get the same error when I run nova image-create <server> <name> | 17:05 |
stevegjacobs_ | The server has either erred or is incapable of performing the requested operation. (HTTP 500) | 17:06 |
Kiall | I havent created a SS in a while.. let me see if I get the same error.. | 17:06 |
*** termie has quit IRC | 17:06 | |
Kiall | the image-create command completed without error, is that the point you get the error? | 17:07 |
*** rsampaio has joined #openstack | 17:08 | |
Kiall | stevegjacobs: BTW .. get a real IRC client so you get notified when your name is mentioned ;) | 17:08 |
*** krow has joined #openstack | 17:08 | |
*** ambo has quit IRC | 17:09 | |
*** ambo has joined #openstack | 17:10 | |
*** MarkAtwood has quit IRC | 17:12 | |
stevegjacobs | is this a real irc client? | 17:12 |
*** arun has quit IRC | 17:12 | |
Kiall | I thought you were on one of the web browser based ones? | 17:12 |
*** obino has quit IRC | 17:12 | |
Kiall | (Must have been someone else .. whoops) | 17:12 |
notmyname | webx: check the storage servers to see if there is an error there. you could probably see if there are any other errors in the proxy logs | 17:12 |
Kiall | BTW - My snapshot has gone from queue -> snapshotting -> saving... | 17:13 |
stevegjacobs | I have two open right now - a web based and a gnome one | 17:13 |
*** gyee has joined #openstack | 17:13 | |
*** termie has joined #openstack | 17:13 | |
Kiall | ->active | 17:13 |
stevegjacobs | Xchat gnome on ubuntu | 17:13 |
webx | Nov 8 17:13:35 netops-z3-a proxy-server Account HEAD returning 503 for [] | 17:13 |
Kiall | same ;) | 17:13 |
webx | Nov 8 17:13:35 netops-z3-a proxy-server - - 08/Nov/2011/17/13/35 HEAD /v1/AUTH_system HTTP/1.0 503 - TempAuth - - - - - - 0.0135 | 17:13 |
webx | notmyname. that's all I see on the proxy server in messages.. | 17:14 |
stevegjacobs | tbh I've never irc'd much till the last few days | 17:14 |
*** arBmind has quit IRC | 17:14 | |
*** misheska has quit IRC | 17:14 | |
notmyname | webx: ok, check your account server logs | 17:15 |
webx | notmyname. there are no configured logs in the conf... wouldn't that mean that they default to syslog? | 17:16 |
notmyname | webx: yes. it shoudl be in /var/log/syslog unless you've changed your syslog config | 17:16 |
*** tyska has joined #openstack | 17:16 | |
webx | it's /var/log/messages, but yea.. that's the place I'm looking | 17:17 |
stevegjacobs_ | Kiall: check this paste out - /var/log/nova-api.log http://paste.openstack.org/show/3176/ | 17:17 |
tyska | Hello guys! | 17:17 |
tyska | what's up? | 17:17 |
webx | Nov 8 15:34:42 netops-z3-a-1 account-replicator Skipping sdb1 as it is not mounted | 17:17 |
webx | Nov 8 15:34:42 netops-z3-a-1 account-replicator Beginning replication run | 17:17 |
webx | Nov 8 15:34:42 netops-z3-a-1 account-replicator Replication run OVER | 17:17 |
webx | hmm, maybe I mistyped the storage location ? | 17:18 |
gnu111 | webx: I had this error once. not sure exactly why. I rebuilt my system without RAID now. Do you see a "accounts" folder in your /srv/node/sda3 ? | 17:18 |
webx | would that cause what we're seeing? | 17:18 |
*** ccustine has joined #openstack | 17:18 | |
gnu111 | webx: not sure. I had seven nodes, with one proxy node which was also a storage node. When I ran curl, it created the accounts folder only on the proxy node, not on the others. | 17:19 |
*** shawn_ has joined #openstack | 17:19 | |
Kiall | stevegjacobs_: humm thats a nova log? | 17:19 |
webx | gnu111. no accounts folder. I think I see why though | 17:19 |
*** krow has quit IRC | 17:20 | |
webx | Devices: id zone ip address port name weight partitions balance meta | 17:20 |
webx | 0 1 10.68.224.190 6002 sdb1 100.00 6144 0.00 | 17:20 |
webx | name should be sdb, not sdb1 | 17:20 |
webx | I think | 17:20 |
Kiall | stevegjacobs_: and, thats "/var/log/nova-api.log"? Not /var/log/nova/nova-api.log ? | 17:20 |
webx | is there a way to edit that setting? | 17:20 |
*** dirkx_ has joined #openstack | 17:22 | |
stevegjacobs | /var/log/nova/nova-api.log | 17:22 |
gnu111 | webx: swift-ring-builder <builder-file> remove <ip_address>/<device_name>. Not sure if there is a modify function. So you have to remove the node and add it again then rebalance. | 17:22 |
Kiall | stevegjacobs_: anyway .. the error is in glance it seems, rather than nova. Somewhere around "2011-11-08 17:10" in the glance api logs | 17:22 |
webx | k, I'll just remove and re-add | 17:22 |
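[That remove/re-add cycle for the account ring, sketched with the zone, port and weight from webx's paste:]

    swift-ring-builder account.builder remove 10.68.224.190/sdb1
    swift-ring-builder account.builder add z1-10.68.224.190:6002/sdb 100
    swift-ring-builder account.builder rebalance
    # then push the regenerated account.ring.gz out to every node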
Kiall | look for "DEBUG [glance.api.middleware.version_negotiation] Processing request: POST /v1/image" | 17:22 |
*** dolphm has quit IRC | 17:22 | |
tyska | guys, im having trouble to connect in instances that are running in the server2 of a dual node openstack architecture, can anyone help me? | 17:23 |
*** dolphm has joined #openstack | 17:23 | |
Kiall | anything interesting in/around it? | 17:23 |
*** maplebed has joined #openstack | 17:23 | |
*** code_franco has quit IRC | 17:24 | |
*** Ruetobas has quit IRC | 17:24 | |
*** dprince has joined #openstack | 17:27 | |
*** javiF has joined #openstack | 17:27 | |
*** dolphm has quit IRC | 17:27 | |
*** Ruetobas has joined #openstack | 17:28 | |
*** heckj has joined #openstack | 17:28 | |
stevegjacobs | Kiall: http://paste.openstack.org/show/3178/ - ends with a key error? | 17:29 |
stevegjacobs | This is from /var/log/glance/api.log | 17:30 |
Kiall | Heh - I just had a double take at that log, thinking it was from my servers.. | 17:30 |
Kiall | the IP is nearly identical ;) | 17:30 |
*** dolphm has joined #openstack | 17:31 | |
*** neogenix has joined #openstack | 17:31 | |
*** misheska has joined #openstack | 17:32 | |
Kiall | stevegjacobs_: okay the next line that should be in your logs, before the stacktrace, is | 17:33 |
Kiall | 2011-11-08 17:07:11 DEBUG [glance.registry] Returned image metadata from call to RegistryClient.add_image(): | 17:33 |
Kiall | Frankly - I dont know where that calls out to though... | 17:33 |
webx | gnu111/notmyname: it was that my partition name was wrong. after I deleted, re-added, then re-balanced, the account creation seems to work now | 17:34 |
gnu111 | webx: great! | 17:34 |
*** shang has quit IRC | 17:35 | |
*** cmagina has quit IRC | 17:35 | |
uvirtbot | New bug: #887672 in glance "internationalization bug in exceptions - missing import" [Undecided,New] https://launchpad.net/bugs/887672 | 17:35 |
webx | although I don't follow how users are managed, I'll remain ignorant for the sake of continuing the testing | 17:35 |
*** mtaylor has quit IRC | 17:36 | |
*** mtaylor has joined #openstack | 17:36 | |
*** mtaylor has quit IRC | 17:36 | |
*** mtaylor has joined #openstack | 17:36 | |
*** ChanServ sets mode: +v mtaylor | 17:36 | |
*** misheska has quit IRC | 17:38 | |
Kiall | stevegjacobs_: it might be that there is a mix of packages installed.. can you pastebin the output of `dpkg -l | grep -E "(nova|glance|openstack)"` on both servers? | 17:38 |
*** popux has quit IRC | 17:39 | |
*** datajerk has joined #openstack | 17:40 | |
*** neogenix has quit IRC | 17:41 | |
*** neogenix has joined #openstack | 17:41 | |
*** jiva has quit IRC | 17:42 | |
*** MarkAtwood has joined #openstack | 17:42 | |
*** jiva has joined #openstack | 17:43 | |
*** dirkx_ has quit IRC | 17:43 | |
*** dolphm has quit IRC | 17:44 | |
*** exprexxo has joined #openstack | 17:44 | |
*** dolphm has joined #openstack | 17:45 | |
*** dysinger has joined #openstack | 17:45 | |
*** jdurgin has joined #openstack | 17:46 | |
*** dolphm_ has joined #openstack | 17:46 | |
*** thingee has joined #openstack | 17:47 | |
*** BasTichelaar has joined #openstack | 17:48 | |
*** haji has joined #openstack | 17:48 | |
BasTichelaar | anyone here who can help me with zones implementation in nova? | 17:48 |
haji | hey Kiall how do i use the keystone_data script | 17:48 |
*** dolphm has quit IRC | 17:49 | |
*** alperkanat has quit IRC | 17:50 | |
*** PotHix has joined #openstack | 17:50 | |
*** rnorwood has quit IRC | 17:51 | |
*** nacx has quit IRC | 17:53 | |
stevegjacobs | Kiall: l don't see too much difference in the packages that are on both servers | 17:53 |
stevegjacobs | http://paste.openstack.org/show/3179/ | 17:53 |
webx | is there a way to convert utilities like s3cmd to use a local swift installation? | 17:53 |
webx | (as the back-end store, instead of s3) | 17:54 |
*** obino has joined #openstack | 17:56 | |
*** tyska has quit IRC | 17:56 | |
*** Nathariel has quit IRC | 17:57 | |
*** hugokuo has joined #openstack | 17:57 | |
Kiall | stevegjacobs: I think I see the issue | 17:57 |
Kiall | on the compute node, you have the original ubuntu python-glance package installed | 17:58 |
stevegjacobs | I just noticed that too | 17:58 |
stevegjacobs | does it need to be there at all? | 17:58 |
Kiall | and, python-novaclient | 17:58 |
Kiall | off the top of my head, I'm not sure.. | 17:59 |
hugokuo | hi all | 18:00 |
*** javiF has quit IRC | 18:00 | |
Kiall | stevegjacobs: yea, it does need to be installed.. (Just checked) | 18:01 |
Kiall | I'd bet if you apt-get install python-glance python-novaclient things will work... | 18:01 |
*** arun has joined #openstack | 18:01 | |
*** arun has joined #openstack | 18:01 | |
*** dirkx_ has joined #openstack | 18:03 | |
stevegjacobs | just did that in x.x.x.199 - trying things out now | 18:03 |
*** tyska has joined #openstack | 18:03 | |
stevegjacobs | going to restart apache | 18:03 |
stevegjacobs | and also try it from nova-client | 18:03 |
Kiall | The controller node had the right stuff, it looks like it was just the compute node that needed the updates+a restart of the nova components | 18:04 |
*** cp16net has quit IRC | 18:05 | |
*** exprexxo has quit IRC | 18:06 | |
*** tyska has quit IRC | 18:06 | |
webx | (root@netops-z3-a-2) ~ > du -sh splunksearch-backups.tar | 18:06 |
webx | 877M splunksearch-backups.tar | 18:06 |
webx | (root@netops-z3-a-2) ~ > swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass upload bbartlett splunksearch-backups.tar | 18:06 |
webx | Object PUT failed: https://10.68.224.147:8080/v1/AUTH_system/bbartlett/splunksearch-backups.tar 503 Service Unavailable | 18:06 |
webx | hmm.. is there a filesize limit that I may be hitting here? | 18:07 |
*** Hakon|mbp has joined #openstack | 18:07 | |
*** dgags has joined #openstack | 18:07 | |
stevegjacobs | nope - still no joy either from dashboard or novaclient :-( | 18:08 |
*** lorin1 has quit IRC | 18:08 | |
Kiall | humm.. | 18:08 |
*** mdomsch has quit IRC | 18:09 | |
Kiall | stevegjacobs: you restarted the various nova-* services on the compute node after installing? | 18:10 |
BasTichelaar | anyone who can help me with zones and the basescheduler? | 18:12 |
*** Hakon|mbp has quit IRC | 18:13 | |
*** nati2 has quit IRC | 18:15 | |
*** nati2 has joined #openstack | 18:15 | |
*** Ryan_Lane has joined #openstack | 18:16 | |
*** hugokuo has left #openstack | 18:16 | |
*** shang has joined #openstack | 18:16 | |
uvirtbot | New bug: #887692 in nova "The QuantumManager could use some refactoring" [Undecided,Confirmed] https://launchpad.net/bugs/887692 | 18:18 |
*** jdg has joined #openstack | 18:20 | |
*** negronjl has quit IRC | 18:22 | |
*** obino has quit IRC | 18:24 | |
*** obino has joined #openstack | 18:25 | |
*** Hakon|mbp has joined #openstack | 18:25 | |
*** stevegjacobs has quit IRC | 18:27 | |
webx | http://paste.openstack.org/show/3180/ | 18:27 |
webx | is there a way to enable debug or something so that the logs will tell me which storage server(s) it's trying to connect to? trying to troubleshoot this is a magical nightmare right now | 18:27 |
*** dirkx_ has quit IRC | 18:30 | |
*** hadrian has joined #openstack | 18:30 | |
*** shang_ has joined #openstack | 18:30 | |
uvirtbot | New bug: #887706 in quantum "Exlude pyc files from pep8 verifications" [Undecided,New] https://launchpad.net/bugs/887706 | 18:31 |
haji | kiall | 18:32 |
Kiall | yup? | 18:32 |
haji | how do i use the keystone_data script | 18:32 |
Kiall | the one from my repo, or the devstack one? | 18:33 |
haji | the one from your repo | 18:33 |
*** thingee has left #openstack | 18:33 | |
Kiall | Oh sure.. you just run it, once the keystone.sh script has already been run... | 18:33 |
haji | but... | 18:34 |
haji | where is keystone.sh | 18:34 |
Kiall | they're all in the repo ;) https://github.com/managedit/openstack-setup | 18:34 |
haji | i just installed the packages | 18:34 |
Kiall | ah.. | 18:34 |
Kiall | have a look at that link above.. | 18:35 |
Kiall | it handles installing everything (doesn't matter if you have the packages installed already), generates some config files, and guides you through most of the steps to install everything.. | 18:35 |
haji | oohhh | 18:36 |
haji | NICE! | 18:36 |
haji | thanks | 18:36 |
uvirtbot | New bug: #887708 in nova "xenapi returns HANDLE_INVALID randomly" [Undecided,New] https://launchpad.net/bugs/887708 | 18:36 |
Kiall | Yea - I found myself doing the same steps over and over, forgetting 1 each time, and decided to just script it ;) | 18:36 |
*** lorin1 has joined #openstack | 18:36 | |
*** fulanito has joined #openstack | 18:37 | |
haji | great work! | 18:38 |
Kiall | ;) | 18:38 |
Kiall | The scripts are all of 100 lines or so.. It's really not much; it's mostly just generating config files based on some templates and then giving a list of instructions | 18:38 |
*** ccustine has quit IRC | 18:39 | |
webx | I seem to be able to create and list containers, but I am not able to upload files to my swift cluster | 18:39 |
*** clopez has quit IRC | 18:40 | |
webx | and then I see really fun behavior where a container shows up in a list, but if I ask to see the contents of the container, swift reports it as non-existant | 18:40 |
haji | kiall: in the install instructions, why don't you install nova-scheduler and objstore? | 18:41 |
webx | for example: http://paste.openstack.org/show/3181/ | 18:41 |
uvirtbot | New bug: #887712 in openstack-qa "instance_update with uuid as instance_Id and metadata fails" [Medium,Confirmed] https://launchpad.net/bugs/887712 | 18:41 |
*** mszilagyi has joined #openstack | 18:41 | |
*** hadrian has quit IRC | 18:43 | |
dolphm_ | sandywalsh: my apologies for my word choice | 18:44 |
Kiall | haji: really? whoops ;) | 18:44 |
Kiall | the scripts do install it I'm sure though ;) | 18:44 |
haji | oh | 18:44 |
Kiall | I should probably update the PPA instructions to just point at the scripts ;) | 18:44 |
*** py___ has joined #openstack | 18:45 | |
*** _jeh_ has joined #openstack | 18:45 | |
*** _jeh_ has joined #openstack | 18:45 | |
*** fulanito has quit IRC | 18:45 | |
*** shang has quit IRC | 18:45 | |
*** dgags has quit IRC | 18:45 | |
*** datajerk has quit IRC | 18:45 | |
*** hezekiah_ has quit IRC | 18:45 | |
*** primeministerp has quit IRC | 18:45 | |
*** rwmjones has quit IRC | 18:45 | |
*** jeh has quit IRC | 18:45 | |
*** jamespage has quit IRC | 18:45 | |
*** nci has quit IRC | 18:45 | |
*** crayon has quit IRC | 18:45 | |
*** py has quit IRC | 18:45 | |
*** pfibiger has quit IRC | 18:45 | |
*** paltman has quit IRC | 18:45 | |
*** chadh has quit IRC | 18:45 | |
*** n0ano has quit IRC | 18:45 | |
*** troytoman has quit IRC | 18:45 | |
*** pvo has quit IRC | 18:45 | |
*** notmyname has quit IRC | 18:45 | |
*** chmouel has quit IRC | 18:45 | |
*** dgags has joined #openstack | 18:45 | |
*** notmyname has joined #openstack | 18:46 | |
*** ChanServ sets mode: +v notmyname | 18:46 | |
*** fulanito has joined #openstack | 18:46 | |
*** shang has joined #openstack | 18:46 | |
*** datajerk has joined #openstack | 18:46 | |
*** primeministerp has joined #openstack | 18:46 | |
*** hezekiah_ has joined #openstack | 18:46 | |
*** rwmjones has joined #openstack | 18:46 | |
*** jamespage has joined #openstack | 18:46 | |
*** nci has joined #openstack | 18:46 | |
*** crayon has joined #openstack | 18:46 | |
*** pfibiger has joined #openstack | 18:46 | |
*** paltman has joined #openstack | 18:46 | |
*** chadh has joined #openstack | 18:46 | |
*** n0ano has joined #openstack | 18:46 | |
*** troytoman has joined #openstack | 18:46 | |
*** pvo has joined #openstack | 18:46 | |
*** chmouel has joined #openstack | 18:46 | |
*** hezekiah_ has quit IRC | 18:46 | |
*** hezekiah_ has joined #openstack | 18:46 | |
*** nati2_ has joined #openstack | 18:51 | |
*** nati2 has quit IRC | 18:51 | |
haji | kiall: the warning is a joke right? | 18:51 |
haji | ahahha | 18:51 |
Kiall | ;) | 18:51 |
Kiall | kinda | 18:51 |
mtucloud | does anyone know why i can't access the public ip of an instance even with it properly allocated, associated and put in a security group? | 18:51 |
mtucloud | private ip works fine | 18:52 |
*** darraghb has quit IRC | 18:52 | |
Kiall | mtucloud: not much to go on there ;) does `ip addr show` show the IP listed anywhere? | 18:52 |
mtucloud | Kiall: yep. | 18:53 |
mtucloud | under eth0: inet 192.168.0.100/24 brd 192.168.0.255 scope global eth0 | 18:54 |
mtucloud | inet 192.168.0.225/32 scope global eth0 | 18:54 |
Kiall | and `iptables -t nat -L` has a NAT rule for it? | 18:54 |
mtucloud | the ip the vm is associated with is 192.168.0.225 | 18:54 |
*** code_franco has joined #openstack | 18:54 | |
mtucloud | DNAT all -- anywhere 192.168.0.225 to:10.0.0.2 | 18:54 |
mtucloud | the 10.0.0.0 network is my eth1 private network | 18:55 |
Kiall | and 10.0.0.2 is the instances private IP? | 18:55 |
mtucloud | yes sir | 18:55 |
mtucloud | and i can ssh and ping to that fine | 18:55 |
mtucloud | this is just a dual node arch as well | 18:55 |
Kiall | and, can you ping the public IP from the compute node? | 18:55 |
haji | fulanito: did u installed openstack already?? | 18:55 |
mtucloud | let me check | 18:56 |
Kiall | or just unable to get to it from outside the compute/network node | 18:56 |
fulanito | yes its working fine | 18:56 |
haji | fulanito: awesome | 18:56 |
*** fulanito has quit IRC | 18:56 | |
mtucloud | i can get to 10.0.0.2 from the controller node fine | 18:56 |
mtucloud | but the network node is also running on the same node | 18:57 |
mtucloud | i cant ping the public ip from the compute node | 18:57 |
*** rnorwood has joined #openstack | 18:57 | |
Kiall | mtucloud: weird.. all the basics look covered off | 18:58 |
Kiall | you sure the instance is part of the security group you set the rules on? | 18:58 |
*** hggdh has quit IRC | 18:59 | |
mtucloud | ya its just the default group | 18:59 |
*** haji has quit IRC | 18:59 | |
mtucloud | but hold on for the compute node iptables, should i be seeing those specific rules for that public ip? | 19:00 |
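[The checks Kiall has walked through so far, collected as a sketch; addresses are as given in this exchange:]

    ip addr show eth0                            # is the floating IP bound?
    iptables -t nat -L -n | grep 192.168.0.225   # is the DNAT rule to 10.0.0.2 present?
    ping -c 3 192.168.0.225                      # does the public IP answer locally?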
*** jeromatron has joined #openstack | 19:00 | |
jeromatron | just wanting to make sure I'm not in left field - openstack was deployed as part of rackspace UK from day 1, right? | 19:01 |
Kiall | Off the top of my head, I can't remember .. And - I've gotta run, sorry :) | 19:01 |
*** hggdh has joined #openstack | 19:01 | |
*** code_franco has quit IRC | 19:01 | |
mtucloud | kiall, thanks for the help | 19:02 |
*** cp16net has joined #openstack | 19:03 | |
DuncanT | jeromatron: I'm not sure, but articles like http://www.theregister.co.uk/2011/11/08/rackspace_openstack_private_cloud/ suggest not... | 19:03 |
*** tjikkun has quit IRC | 19:03 | |
*** AlanClark has quit IRC | 19:05 | |
jeromatron | DuncanT: Okay, when I worked at rackspace, when they talked about the UK DC, it always sounded like they were planning on doing openstack there from day 1. maybe it was just openstack files or something... | 19:05 |
*** AlanClark has joined #openstack | 19:05 | |
annegentle | jeromatron: probably Cloud Files as an OpenStack project | 19:05 |
annegentle | but definitely always question The Register's journalistic chops :) | 19:06 |
DuncanT | jeromatron: I've never worked for rackspace, so I can only guess based on their blogs and articles like the above | 19:06 |
BasTichelaar | anyone who can help with zone scheduling within nova? | 19:06 |
jeromatron | annegentle: yeah - that would make sense. (this is jeremy, previously from the austin office btw, worked on cassandra/hadoop there) | 19:06 |
gnu111 | webx: I had the exact same issue you are describing. I wasn't able to solve it. Not sure if this is a network issue or not. Did you check the rsyncd settings? I also tried chmod 777 /srv/node. you can give that a try. | 19:07 |
annegentle | hey jeromatron nice nick :) | 19:07 |
jeromatron | DuncanT: yeah - that's why I wanted to clarify since it sounds like there's misinformation out there, at least partly. Would be good to clarify I think where it's deployed. | 19:07 |
*** tjikkun has joined #openstack | 19:07 | |
*** tjikkun has joined #openstack | 19:07 | |
jeromatron | annegentle: thanks :) just something unique. | 19:07 |
*** hezekiah_ has quit IRC | 19:08 | |
notmyname | webx: are you running all the consistency servers (swift-init rest start)? are the rings consistent? do they all have the same md5 hash? | 19:08 |
DuncanT | jeromatron: I'm intregued by the answer... I'm off now but I'll read the channel logs later to see if you get a response from a rackspacer | 19:08 |
*** cereal_bars has joined #openstack | 19:14 | |
*** bhall has joined #openstack | 19:14 | |
*** cereal_bars has quit IRC | 19:20 | |
*** datajerk has quit IRC | 19:21 | |
*** dirkx_ has joined #openstack | 19:22 | |
*** dnjaramba has quit IRC | 19:24 | |
*** aliguori has quit IRC | 19:25 | |
*** syah has joined #openstack | 19:27 | |
*** krow has joined #openstack | 19:27 | |
*** hezekiah_ has joined #openstack | 19:29 | |
*** reed has joined #openstack | 19:30 | |
*** krow has quit IRC | 19:30 | |
*** cmagina has joined #openstack | 19:31 | |
webx | notmyname. I use swift-init all start. would that work the same? | 19:33 |
*** smeier001 has joined #openstack | 19:33 | |
*** nati2 has joined #openstack | 19:34 | |
*** nati2_ has quit IRC | 19:35 | |
*** cmagina has quit IRC | 19:37 | |
*** jakedahn has joined #openstack | 19:38 | |
*** krow has joined #openstack | 19:39 | |
*** cmagina has joined #openstack | 19:39 | |
webx | notmyname. I only have one storage node with anything in it. shouldn't that be distributed out? | 19:39 |
*** jj0hns0n has quit IRC | 19:40 | |
webx | ahh.. I didn't chown /srv/node like I was supposed to on all of them | 19:42 |
hezekiah_ | has anyone seen this? | 19:43 |
hezekiah_ | Nov  8 13:43:16 m0005048 libvirtd: 13:43:16.085: 11810: error : virGetGroupID:2882 : Failed to find group record for name 'kvm': Numerical result out of range | 19:43 |
*** marrusl has quit IRC | 19:44 | |
*** vladimir3p has joined #openstack | 19:45 | |
*** smeier001 has left #openstack | 19:45 | |
notmyname | webx: ya, all start starts everything on that box (that may not actually be what you want, though) | 19:45 |
webx | yea, I just don't put a proxy config on the storage nodes and then start everything | 19:46 |
webx | seems like an easier way than manually starting every service | 19:46 |
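[For reference, swift-init already groups the services, so per-role startup is a one-liner either way; a sketch:]

    swift-init proxy start   # proxy nodes only
    swift-init main start    # account, container and object servers
    swift-init rest start    # replicators, auditors, updaters, etc.
    swift-init all start     # everything configured on the box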
webx | notmyname. is there a filesize limitation that we can configure somewhere? | 19:48 |
notmyname | webx: yes. there is a constant in the code (swift/common/constraints.py) that specifies the maximum object size. it's set to 5GB. I wouldn't recommend changing it unless you have a very good understanding of your use case | 19:49 |
notmyname | webx: however, larger objects can be saved with the object manifest feature | 19:50 |
notmyname | http://swift.openstack.org/overview_large_objects.html | 19:50 |
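[The constant notmyname mentions, roughly as it appears in diablo-era swift; shown for orientation, not as an invitation to change it:]

    # swift/common/constraints.py defines roughly:
    #   MAX_FILE_SIZE = 5 * 1024 * 1024 * 1024 + 2    # 5 GiB + 2 bytes
    python -c 'from swift.common import constraints; print constraints.MAX_FILE_SIZE'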
*** stuntmachine has quit IRC | 19:51 | |
*** primeministerp has quit IRC | 19:51 | |
webx | I understand our use case, but I don't know what sort of impact it will have on the swift system if we up that to the 25gb neighborhood | 19:51 |
*** oubiwann has quit IRC | 19:51 | |
uvirtbot | New bug: #887743 in keystone "User within ServiceCatalog need to change." [Undecided,New] https://launchpad.net/bugs/887743 | 19:51 |
notmyname | webx: how varied will the object sizes be in your cluster? 0 bytes to 25 GB? or something with a much smaller range? | 19:52 |
*** marrusl has joined #openstack | 19:52 | |
*** andyandy has quit IRC | 19:52 | |
*** andyandy_ has quit IRC | 19:52 | |
webx | yea, it will vary greatly. primary use in terms of file count will be in the 1mb or less, but the heaviest usage storage-wise will be 10-25gb backup blobs | 19:53 |
notmyname | ah ok | 19:53 |
uvirtbot | New bug: #887739 in keystone "Issues in Rackspace style Legacy Authentication" [Undecided,New] https://launchpad.net/bugs/887739 | 19:53 |
uvirtbot | New bug: #887740 in keystone "Elements in RAX-KSADM-users.xsd not used in contract." [Undecided,New] https://launchpad.net/bugs/887740 | 19:53 |
*** stevegjacobs has joined #openstack | 19:54 | |
notmyname | webx: then I strongly recommend not changing it from 5GB. there is a high-level explanation of why you don't want to change it in an old blog post of mine (http://programmerthoughts.com/openstack/the-story-of-an-openstack-feature/). the summary is that the variance of fullness across all of your storage volumes will be greater, and therefore it's much harder to capacity plan and efficiently use all of your disk space. that and a 25GB upload generally would take so long the opportunity for a connection problem (and therefore losing all of the data uploaded so far) is much higher | 19:55 |
webx | notmyname. do I understand right in that swift would prefer to have non-raided drives as separate partitions instead of raid'ing them together and making a single partition ? | 19:55 |
*** jakedahn has quit IRC | 19:55 | |
notmyname | webx: correct (in the general sense) | 19:55 |
notmyname | webx: raid5 or raid 6 + swift is a bad idea. performance will suffer and raid rebuild times take forever | 19:56 |
*** rsampaio has quit IRC | 19:56 | |
webx | yea.. I just tested with an 877M file and it took 60 seconds. | 19:56 |
notmyname | webx: but raid 10 for dedicated account and container nodes could be a very good idea for large clusters | 19:56 |
webx | (~14mb/sec... and it's raid6) | 19:57 |
*** alperkanat has joined #openstack | 19:57 | |
*** alperkanat has joined #openstack | 19:57 | |
alperkanat | notmyname: http://paste.openstack.org/show/3182/ | 19:58 |
webx | notmyname. thanks for the 5gb recommendation. can you point me to docs on how swift deals with files >5gb? | 19:58 |
notmyname | webx: the link I pasted above about large objects | 19:59 |
*** rsampaio has joined #openstack | 19:59 | |
*** j^2 has quit IRC | 19:59 | |
notmyname | http://swift.openstack.org/overview_large_objects.html | 19:59 |
webx | k, will read. | 19:59 |
*** jakedahn has joined #openstack | 19:59 | |
*** rsampaio has quit IRC | 19:59 | |
rmk | Has anyone been able to get vnc working with the diablo dash? | 20:00 |
*** termie has quit IRC | 20:00 | |
notmyname | webx: you are just building a POC right now, right? as your cluster gets very large, there are certain things you will need to keep in mind and lessons we have learned that we can share | 20:00 |
rmk | I always get "server disconnected" immediately. Several different setups. | 20:00 |
webx | notmyname. yea, POC right now. only 4 storage nodes and 1 proxy, but one of the tests we want to do are with those 10-25gb backups. | 20:03 |
*** shang_ has quit IRC | 20:03 | |
*** shang has quit IRC | 20:03 | |
notmyname | webx: ya, that's not an issue. | 20:03 |
*** termie has joined #openstack | 20:04 | |
*** termie has joined #openstack | 20:04 | |
webx | notmyname. I can set the segment size to 5gb, and then swift does all the magic behind the scenes when I retrieve it? | 20:04 |
notmyname | webx: "swift" as in the cli tool that ships with the code? | 20:04 |
webx | right.. I'm guessing there's more work being done on the backend as well. basically, my user would just have to know the filename to retrieve and they'll be good to go | 20:05 |
*** rsampaio has joined #openstack | 20:06 | |
webx | .. right? :) | 20:06 |
webx | according to this: http://swift.openstack.org/overview_large_objects.html -- when you download the file, all you need is the container and filename. I was just checking to make sure we wouldn't have to give them any more info about how we sliced the 'bigfile' when uploading | 20:07 |
*** po has quit IRC | 20:07 | |
notmyname | webx: right. as a smart client, the swift cli tool can split the data and upload the parts for you. swift the storage system (the server-side) doesn't do any automatic splitting of large objects | 20:07 |
notmyname | webx: correct. just the container and filename (of the manifest file) | 20:08 |
webx | right, thanks | 20:08 |
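[A sketch of that segmented upload, reusing webx's earlier command with a 5 GB segment size; -S takes bytes, and 5 GiB = 5368709120:]

    swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass \
        upload -S 5368709120 bbartlett splunksearch-backups.tar
    # swift uploads the segments plus a manifest; downloads need only
    # the container and object name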
webx | is there a way to shortcut the size? ie, -s 1gb instead of -s 1073741824 | 20:09 |
webx | or 1g, 100m, etc. | 20:09 |
notmyname | webx: not sure. check the --help message | 20:09 |
*** stevegjacobs has quit IRC | 20:10 | |
*** stevegjacobs has joined #openstack | 20:12 | |
stevegjacobs | hi | 20:12 |
*** arBmind has joined #openstack | 20:14 | |
*** rsampaio has quit IRC | 20:15 | |
Spirilis | doesn't look like there is a shorthand for that -S option fyi | 20:15 |
Spirilis | the python code just takes int(segment_size) without analyzing for suffixes from what I can tell by cursory glance | 20:15 |
alperkanat | notmyname: http://paste.openstack.org/show/3183/ | 20:16 |
notmyname | alperkanat: use a much bigger concurrency (like 50) and something more like 1000+ for the number of requests | 20:17 |
alperkanat | notmyname: i'm out of options about this performance problem. today afternoon, i retried creating a new account, started a new proxy at 9090 without SSL, checked if the direct proxy url was correct and still no change | 20:18 |
webx | (root@netops-z3-a-2) ~ > swift --version | 20:19 |
webx | swift 1.0 | 20:19 |
webx | is that the latest ? | 20:19 |
*** po has joined #openstack | 20:20 | |
*** aliguori has joined #openstack | 20:20 | |
uvirtbot | New bug: #887762 in quantum "document keystone integration in Admin Guide" [Undecided,New] https://launchpad.net/bugs/887762 | 20:20 |
notmyname | webx: unfortunately that's the version of the cli tool, not the installed swift version. check dpkg (or whatever) for your installed version of swift. the current version is 1.4.3 | 20:21 |
alperkanat | notmyname: http://paste.openstack.org/show/3184/ | 20:21 |
webx | (root@netops-z3-a-2) ~ > rpm -qf `which swift` | 20:21 |
webx | openstack-swift-1.4.3-b447.noarch | 20:21 |
webx | yea, I have the latest.. just confusing versioning I guess | 20:22 |
notmyname | alperkanat: that looks better. do all storage nodes give similar results? | 20:22 |
alperkanat | notmyname: checking | 20:22 |
notmyname | webx: ya, sorry about that | 20:22 |
*** Turicas has quit IRC | 20:24 | |
webx | notmyname. is there a way to default the split size automatically? | 20:24 |
alperkanat | notmyname: http://paste.openstack.org/show/3185/ | 20:24 |
notmyname | webx: I don't think so | 20:24 |
webx | k | 20:24 |
notmyname | alperkanat: so the hard question is "why are your storage servers getting so much better performance than the entire set of servers?" perhaps that means there is a config issue in the proxy server | 20:25 |
*** PotHix has quit IRC | 20:26 | |
alperkanat | pasting proxy conf | 20:26 |
alperkanat | notmyname: http://paste.openstack.org/show/3186/ | 20:27 |
notmyname | alperkanat: not much there to go wrong | 20:28 |
alperkanat | notmyname: what else would it be? | 20:30 |
alperkanat | notmyname: http://paste.openstack.org/show/3187/ | 20:31 |
notmyname | alperkanat: it's hard to test things in isolation because you are already running it in prod. it could be something in the proxy config (check the available options in the docs or in the sample proxy config file). it could be networking settings somewhere in your cluster | 20:31 |
uvirtbot | New bug: #887766 in horizon "Tenant switch list is empty after architecture merge" [Undecided,New] https://launchpad.net/bugs/887766 | 20:31 |
uvirtbot | New bug: #887767 in horizon "Tenant switch list shouldn't change when on syspanel tenants panel" [Undecided,New] https://launchpad.net/bugs/887767 | 20:31 |
uvirtbot | New bug: #887768 in horizon "Duplicate code in auth views" [Undecided,New] https://launchpad.net/bugs/887768 | 20:31 |
uvirtbot | New bug: #887770 in horizon "user_home function in dashboard views redirects to wrong dashboard name" [Undecided,New] https://launchpad.net/bugs/887770 | 20:31 |
*** krow has quit IRC | 20:32 | |
webx | notmyname. do you guys have any benchmarks on expected performance given specific hardware ? | 20:34 |
alperkanat | notmyname: the only networking settings we made were for NAT. the storage nodes go online through the proxy (for testing purposes and system updates) | 20:35 |
alperkanat | notmyname: and you just checked my prod. proxy conf which seems ok | 20:35 |
alperkanat | i know it's hard to test but i'm clueless why this may happen | 20:36 |
notmyname | webx: there are a lot of variables. generally, think about 1K req/sec/proxy as a decent number (-ish) | 20:36 |
notmyname | webx: but again, there are a ton of factors there | 20:37 |
webx | notmyname. yea, and what about throughput? as an example, on a raid10 device over local gigabit, I just transferred a file at ~14mb/sec which seems very low to me. | 20:38 |
webx | but maybe that's about right? | 20:38 |
*** dolphm_ has quit IRC | 20:38 | |
*** stuntmac_ has joined #openstack | 20:38 | |
*** dolphm has joined #openstack | 20:38 | |
*** clopez has joined #openstack | 20:40 | |
*** dolphm_ has joined #openstack | 20:40 | |
*** dolphm has quit IRC | 20:43 | |
*** haji has joined #openstack | 20:45 | |
haji | kiall: the settings file theres a bridge variable, you dont use the vlan set up? | 20:46 |
hezekiah_ | ugh | 20:46 |
hezekiah_ | I can't get any traction on this libvirt-bin issue | 20:46 |
hezekiah_ | Nov 8 14:17:11 m0005048 libvirtd: 14:17:11.796: 12918: error : virGetGroupID:2882 : Failed to find group record for name 'kvm': Numerical result out of range | 20:46 |
hezekiah_ | it looks like it just can't find the group | 20:46 |
hezekiah_ | but that group is there | 20:46 |
notmyname | webx: give me a bit. multitasking now | 20:47 |
webx | notmyname. not a problem | 20:49 |
*** rsampaio has joined #openstack | 20:51 | |
*** alperkanat has quit IRC | 20:56 | |
notmyname | webx: sorry I was wrong with the earlier numbers | 20:56 |
*** Matzie has joined #openstack | 20:57 | |
notmyname | webx: think 1-2K req/sec per proxy (with 4 storage nodes). the network throughput should be roughly whatever your NICs can support | 20:57 |
webx | so you guys don't anticipate much slowdown with the processing and storage of the files? | 20:58 |
notmyname | webx: no. swift is fast enough to saturate your network before CPU runs out | 20:58 |
notmyname | well, don't run it on a 386 or anything :-) | 20:58 |
*** troytoman is now known as troytoman-away | 20:59 | |
*** dprince has quit IRC | 20:59 | |
webx | hmm, interesting... I wonder why I have such poor performance | 20:59 |
webx | I'll dig | 20:59 |
Matzie | hi... qn re nova volume... does it need exclusive access to the LVM volume group defined as volume_group (ie, "nova-volumes" usually, but I want to use a different value) | 20:59 |
*** arBmind_ has joined #openstack | 21:00 | |
*** rnorwood has quit IRC | 21:02 | |
*** rnorwood1 has joined #openstack | 21:02 | |
*** mtucloud has quit IRC | 21:02 | |
mjfork | Matzie: i ran it in a shared group | 21:04 |
*** j^2 has joined #openstack | 21:05 | |
mjfork | it was testing/POC, but it seemed to work | 21:05 |
*** Zinic has joined #openstack | 21:05 | |
*** oubiwann has joined #openstack | 21:06 | |
Zinic | Anyone here work with devstack? | 21:06 |
*** praefect has quit IRC | 21:06 | |
*** nerdstein has quit IRC | 21:07 | |
*** vkp has joined #openstack | 21:09 | |
haji | kiall? | 21:10 |
haji | u there | 21:11 |
haji | ?? | 21:11 |
Matzie | thanks mjfork | 21:11 |
vishy | Zinic: lots of us | 21:12 |
webx | notmyname: where would I look for potential issues with the swift backend being slow? I've tested simple scp transfers between the proxy and the storage nodes, and all of them are well over 55MB/sec. When I send a file from a proxy server to swift, using the swift binary, I'm getting max speeds of ~15MB/sec. | 21:12 |
vkp | #openstack-meeting | 21:12 |
Zinic | vishy: has anyone run into an issue where rabbitmq-server refuses to start? if not, I'll keep digging on my end to see what's up but I thought I'd pop in to see if this was something others had run into. | 21:14 |
vishy | i have not | 21:14 |
vishy | but i have had rabbit fail many times | 21:14 |
vishy | often you have to delete /var/lib/rabbitmq/mnesia | 21:15 |
vishy | to get it to start | 21:15 |
vishy | but I've never had it fail with devstack yet | 21:15 |
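A sketch of the reset vishy describes, with the path spelled out (Ubuntu's default Mnesia location; this wipes all queues, exchanges, and users, so it is strictly a last resort):

```python
import shutil
import subprocess

# WARNING: destructive - deletes RabbitMQ's entire Mnesia store
subprocess.call(['service', 'rabbitmq-server', 'stop'])
shutil.rmtree('/var/lib/rabbitmq/mnesia', ignore_errors=True)
subprocess.call(['service', 'rabbitmq-server', 'start'])
```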
notmyname | webx: hmm..I wonder if it's at all similar to what I've been talking to alperkanat about | 21:15 |
notmyname | webx: first look at your proxy logs. see if there are any errors there | 21:16 |
webx | I haven't been following. | 21:16 |
notmyname | webx: no worries. we haven't found the issue yet :-0 | 21:16 |
notmyname | webx: then check the proxy for memory or I/O contention | 21:16 |
notmyname | then check the same on the storage nodes | 21:16 |
notmyname | webx: the best place to get numbers is to use the included swift-bench tool. since all deployments have that, it's easier to get numbers that can be compared to one another | 21:17 |
Zinic | vishy: it fails to start for me on initial install using a fresh natty server (RS cloud server) - I'll keep poking around on my end just to make sure I haven't missed anything simple | 21:17 |
webx | they're brand new servers, and they're only being used for these tests. dual 6 core, 96gb ram.. a lot of beef so I'm hoping there's no contention | 21:17 |
webx | I'll check again while a big upload is going | 21:18 |
vishy | Zinic: natty cloud server won't work | 21:18 |
vishy | you need a pvgrub server with oneiric | 21:18 |
notmyname | webx: ya, doesn't sound like an issue. how many workers are you running on your proxy and storage nodes? it should be at least 1 per core | 21:18 |
Zinic | aha | 21:18 |
notmyname | webx: you are using tempauth, right? | 21:18 |
webx | notmyname: hmm, any issues with hyperthreading? yes, using tempauth | 21:18 |
notmyname | webx: I'm not sure about hyperthreading. that would be an interesting issue if there is one | 21:19 |
webx | notmyname: workers = 8 right now, but there are 24 reported to cpuinfo because of HT.. | 21:19 |
webx | I'll increase that to 24 just to see | 21:19 |
Zinic | gotcha. thanks vishy, that's exactly what I needed | 21:19 |
notmyname | webx: if you have a moment, run swift-bench from your proxy server (use --help to see usage) | 21:19 |
*** troytoman-away is now known as troytoman | 21:20 | |
webx | (root@netops-z3-a-2) swift > locate swift-bench | 21:20 |
webx | (root@netops-z3-a-2) swift > | 21:20 |
webx | which package provides that? | 21:20 |
notmyname | webx: that depends on which packages you're using :-) | 21:20 |
webx | diablo-centos from griddynamics repo | 21:20 |
webx | http://yum.griddynamics.net/yum/diablo-centos/ | 21:21 |
notmyname | webx: I'm not familiar with that one. I'm not sure if they've included it | 21:21 |
webx | ah.. do you have 'official' centos rpms I can use instead? | 21:21 |
notmyname | webx: https://github.com/openstack/swift/blob/master/bin/swift-bench | 21:21 |
notmyname | webx: no, the only "official" stuff is debs (since we target ubuntu 10.04) | 21:22 |
*** arBmind has quit IRC | 21:23 | |
notmyname | webx: worst case, you can grab the code, run `sudo python ./setup.py develop && swift-bench --help` | 21:23 |
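A plausible swift-bench invocation of that vintage, pointing at tempauth on the local proxy (credentials, counts, and sizes are invented; confirm the flags against --help on your build):

```
swift-bench -A http://127.0.0.1:8080/auth/v1.0 -U system:root -K testpass \
    -c 10 -s 65536 -n 1000 -g 100
```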
webx | yea, going to do that now | 21:23 |
notmyname | webx: http://paste.openstack.org/show/3188/ <-- running swift-all-in-one on a 1GB slicehost VPS (so your numbers should be _much_ better) | 21:24 |
*** sebastianstadil has quit IRC | 21:25 | |
*** msivanes has quit IRC | 21:26 | |
*** Matzie has quit IRC | 21:28 | |
webx | http://paste.openstack.org/show/3190/ | 21:30 |
notmyname | webx: ya that looks good | 21:30 |
*** stuntmac_ has quit IRC | 21:30 | |
*** rnorwood1 has quit IRC | 21:30 | |
webx | ya | 21:30 |
notmyname | webx: just to verify (since I see it in your swift-bench command), are you running ssl direct to the proxy? | 21:31 |
*** miclorb_ has joined #openstack | 21:31 | |
webx | yea.. this is all direct to the proxy, from the proxy | 21:32 |
webx | no F5s or gateways in between | 21:32 |
webx | there are a few switches in between the proxy and storage nodes, but that's it.. all on the same network | 21:32 |
notmyname | webx: ok. don't do that. you should terminate ssl at your load balancer. ssl + python seems to have some problems under load | 21:32 |
webx | hmm | 21:33 |
notmyname | webx: we recommend using either zeus (commercial and pricy, but the best throughput) or pound (free and slightly less throughput) for your LB | 21:33 |
webx | notmyname: can I try it with http to see if that's a problem? | 21:33 |
webx | notmyname: we have plenty of F5s, which is what I'd guess we'll use when this gets to production | 21:33 |
notmyname | webx: your numbers don't indicate a problem. it's something that you would probably see at scale | 21:33 |
webx | notmyname: ah, I'll show you the problem. :) | 21:34 |
*** rnorwood has joined #openstack | 21:34 | |
notmyname | webx: the trick is how much ssl throughput you can get. ssl connections/sec is not the issue. sustained throughput is. we are able to get 7-8Gb/sec with zeus (and 6-7Gb/sec with pound) | 21:35 |
*** juddm has left #openstack | 21:35 | |
notmyname | our tests were run a while back, and we didn't do anything fancy like using the intel chips that offload AES | 21:35 |
notmyname | but we haven't yet found a LB that can keep up with a 10gb pipe of ssl traffic | 21:36 |
notmyname | (we've got lots of 10g lines going to the LBs) | 21:37 |
*** jj0hns0n has joined #openstack | 21:37 | |
webx | yea | 21:37 |
webx | it's something I'm interested in testing on our hardware once I get to that point :) | 21:37 |
notmyname | webx: sounds like you've got an interesting project :-) | 21:37 |
webx | http://paste.openstack.org/show/3191/ | 21:37 |
webx | it's going to be :) | 21:38 |
*** dolphm_ has quit IRC | 21:39 | |
notmyname | ya, that seems a little slow. the easiest first thing to try (especially since ssl on the proxy is a bad idea at scale anyway) is to remove ssl from the proxy | 21:39 |
*** dolphm has joined #openstack | 21:39 | |
webx | sounds good to me.. I'd like to try with straight http but I haven't looked yet at how to do it | 21:40 |
webx | just to narrow down if it's ssl + python even at very low load | 21:40 |
*** clauden_ has quit IRC | 21:40 | |
notmyname | webx: simply remove the 2 cert config options from the proxy server (of course, there shouldn't be any other SSL configs either) | 21:40 |
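In the stock proxy-server.conf those are the cert_file and key_file options under [DEFAULT]; with both gone, the proxy serves plain HTTP (the paths here are illustrative):

```
[DEFAULT]
bind_port = 8080
# comment out or delete both of these to disable SSL at the proxy:
# cert_file = /etc/swift/cert.crt
# key_file = /etc/swift/cert.key
```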
*** clauden_ has joined #openstack | 21:40 | |
uvirtbot | New bug: #887792 in keystone "get Tenants list with limit and marker get 500 unhandled ..." [Undecided,New] https://launchpad.net/bugs/887792 | 21:41 |
*** medberry has joined #openstack | 21:41 | |
*** medberry has quit IRC | 21:41 | |
*** medberry has joined #openstack | 21:41 | |
webx | notmyname: ok, trying that out now | 21:42 |
*** doorlock has quit IRC | 21:42 | |
*** troytoman is now known as troytoman-away | 21:42 | |
*** nati2 has quit IRC | 21:43 | |
*** troytoman-away is now known as troytoman | 21:43 | |
*** dolphm has quit IRC | 21:44 | |
webx | http://paste.openstack.org/show/3192/ | 21:44 |
*** dirkx_ has quit IRC | 21:45 | |
uvirtbot | New bug: #887797 in keystone "Create User failed with '400 Expecting User' message" [Undecided,New] https://launchpad.net/bugs/887797 | 21:45 |
notmyname | webx: I'll bet you didn't update the tempauth config to return an http storage url | 21:45 |
webx | I'll bet you're right | 21:46 |
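The giveaway: tempauth can pin the storage URL it returns as the optional last field of each user line, so an https:// left there keeps clients on SSL even after the proxy stops terminating it. A hypothetical user line with the scheme flipped to plain HTTP:

```
[filter:tempauth]
use = egg:swift#tempauth
# format: user_<account>_<user> = <key> [groups...] [storage_url]
user_system_root = testpass .admin http://proxy.example.com:8080/v1/AUTH_system
```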
*** cp16net has quit IRC | 21:48 | |
gnu111 | notmyname: do you have suggestions on how to test/simulate node failures? | 21:48 |
notmyname | gnu111: depends on what kind of failure you want to simulate. the most extreme (and we've done this) is to walk up to a rack on a production system and pull the power plug | 21:49 |
notmyname | gnu111: but more simply, you can just unmount drives or turn off servers | 21:49 |
notmyname | gnu111: for more complicated testing, you could go as far as to write middleware that simulates network failures | 21:50 |
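Such middleware needn't be elaborate; a toy WSGI wrapper that fails a configurable fraction of requests (the class name and rate are invented for illustration) already exercises most retry paths:

```python
import random


class FaultInjector(object):
    """Toy WSGI middleware: randomly fail requests to mimic flaky nodes."""

    def __init__(self, app, failure_rate=0.05):
        self.app = app
        self.failure_rate = failure_rate

    def __call__(self, environ, start_response):
        if random.random() < self.failure_rate:
            # pretend the backend fell over mid-request
            start_response('503 Service Unavailable', [('Content-Length', '0')])
            return []
        return self.app(environ, start_response)
```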
*** jj0hns0n has quit IRC | 21:50 | |
webx | http://paste.openstack.org/show/3193/ | 21:54 |
webx | that's better.. | 21:54 |
*** rsampaio has quit IRC | 21:55 | |
*** kieron has joined #openstack | 21:55 | |
*** lorin1 has quit IRC | 21:58 | |
notmyname | indeed | 21:58 |
*** heckj has quit IRC | 22:01 | |
*** thingee has joined #openstack | 22:01 | |
gnu111 | notmyname: thanks. I will need to think about it. | 22:02 |
*** kieron has quit IRC | 22:02 | |
*** dolphm has joined #openstack | 22:05 | |
*** jsavak has quit IRC | 22:07 | |
*** Tsel has joined #openstack | 22:09 | |
*** lionel has quit IRC | 22:09 | |
*** lionel has joined #openstack | 22:10 | |
uvirtbot | New bug: #887805 in nova "Error during report_driver_status(): 'LibvirtConnection' object has no attribute '_host_state'" [Undecided,New] https://launchpad.net/bugs/887805 | 22:10 |
*** jj0hns0n has joined #openstack | 22:21 | |
*** jdg has quit IRC | 22:23 | |
*** BasTichelaar has quit IRC | 22:23 | |
*** mgoldmann has quit IRC | 22:23 | |
*** vkp has quit IRC | 22:26 | |
*** arun_ has joined #openstack | 22:29 | |
*** exprexxo has joined #openstack | 22:29 | |
*** jeromatron has quit IRC | 22:33 | |
*** cdub has quit IRC | 22:37 | |
*** cdub has joined #openstack | 22:37 | |
*** vidd has joined #openstack | 22:38 | |
tjoy | is there a new high-level HOWTO doc for Nova since the Diablo release? | 22:39 |
*** bcwaldon has quit IRC | 22:41 | |
*** TheOsprey has quit IRC | 22:41 | |
*** MarkAtwood has quit IRC | 22:44 | |
annegentle | tjoy: http://docs.openstack.org/diablo/openstack-compute/starter/content/ is the Diablo Starter Guide - but doesn't address Identity (Keystone) or Dashboard (Horizon) | 22:45 |
tjoy | cool. How's documentation for XCP? | 22:45 |
tjoy | as it relates to interfacing with nova | 22:45 |
*** dgags has quit IRC | 22:46 | |
*** GheRivero_ has quit IRC | 22:47 | |
medberry | tjoy, heh... I think that could be described as a work in progress. | 22:47 |
tjoy | has the xcp driver been demonstrated to work? | 22:48 |
*** ldlework has quit IRC | 22:49 | |
vidd | annegentle, is there any progress to a documented "full stack" with keystone and dashboard? | 22:49 |
vidd | according to launchpad, horizon/dashboard has been promoted to a "core" project | 22:50 |
vidd | as of the diablo release | 22:51 |
annegentle | tjoy: I haven't had much traction asking for people to write that. Do you know someone who would want to write about XCP with nova? the most I saw was this: http://etherpad.openstack.org/openstack-xen | 22:51 |
tjoy | annegentle: xcp is supposed to be a drop-in replacement for xenserver, from what I understand. | 22:51 |
annegentle | vidd: those are core projects for Essex, but I do have backporting available for docs. | 22:51 |
annegentle | tjoy: I keep hoping deshantm will go on a writing spree :) | 22:52 |
tjoy | I could document my work setting up openstack with xcp this week | 22:52 |
tjoy | finally got some headroom wrt the dayjob | 22:53 |
zykes- | vidd: still no progress? | 22:54 |
vidd | zykes? | 22:54 |
zykes- | vidd: yeah, horizon is core now but that's essex not diablo | 22:54 |
vidd | then why would it say "As of the Diablo release, Horizon is now an OpenStack Core project and is a fully supported OpenStack offering." =\ | 22:55 |
vidd | dont matter...its not that important | 22:55 |
*** hingo has quit IRC | 22:57 | |
annegentle | tjoy: would love that, write it up in any format and I'll take it :) | 22:57 |
tjoy | ok | 22:58 |
vidd | zykes-, does your tenant and user pages work properly? | 22:58 |
annegentle | vidd: where does it say that? | 22:58 |
vidd | annegentle, https://launchpad.net/horizon | 22:58 |
annegentle | vidd: ah will have to let devcamcar know (or heckj do you have access to that page?) | 22:59 |
vidd | annegentle, so its a typo? =] | 22:59 |
annegentle | vidd: welll.... it's a bit misleading... during the Diablo timeframe, the PPB voted Dashboard in, but incubation doesn't track very well with project release management.... | 23:00 |
annegentle | vidd: so yeah I'd say it's inaccurate slightly | 23:00 |
*** Zinic has quit IRC | 23:01 | |
annegentle | ok, I'm outta here, gets dark too early now! :) | 23:01 |
vidd | later annegentle | 23:01 |
*** dolphm_ has joined #openstack | 23:06 | |
*** Ruetobas has quit IRC | 23:08 | |
*** imsplitbit has quit IRC | 23:09 | |
*** dolphm has quit IRC | 23:09 | |
*** dendrobates is now known as dendro-afk | 23:11 | |
*** Jamey___ has joined #openstack | 23:11 | |
*** Ruetobas has joined #openstack | 23:13 | |
*** AlanClark has quit IRC | 23:13 | |
*** AlanClark has joined #openstack | 23:14 | |
*** dendro-afk is now known as dendrobates | 23:15 | |
*** jeromatron has joined #openstack | 23:17 | |
*** dosdawg has joined #openstack | 23:21 | |
*** Jamey___ has quit IRC | 23:23 | |
*** Jamey___ has joined #openstack | 23:23 | |
*** AlfaBetaGamma has joined #openstack | 23:25 | |
*** s1n4 has joined #openstack | 23:26 | |
*** exprexxo has quit IRC | 23:27 | |
*** robbiew has quit IRC | 23:31 | |
*** lborda has quit IRC | 23:35 | |
*** nerdstein has joined #openstack | 23:37 | |
*** mmetheny has quit IRC | 23:37 | |
*** mmetheny has joined #openstack | 23:38 | |
*** jj0hns0n has quit IRC | 23:39 | |
*** aliguori has quit IRC | 23:39 | |
*** jj0hns0n has joined #openstack | 23:39 | |
Ryan_Lane | is there no way to get and set quotas via the API? | 23:40 |
*** troytoman is now known as troytoman-away | 23:40 | |
*** Razique has quit IRC | 23:42 | |
*** uneti has joined #openstack | 23:44 | |
*** neogenix has quit IRC | 23:44 | |
*** uneti has quit IRC | 23:45 | |
*** MarkAtwood has joined #openstack | 23:46 | |
*** nati2 has joined #openstack | 23:47 | |
*** AlfaBetaGamma has quit IRC | 23:50 | |
*** Jamey___ has quit IRC | 23:50 | |
*** straylyon has joined #openstack | 23:52 | |
*** s1n4 has left #openstack | 23:53 | |
*** datajerk has joined #openstack | 23:53 | |
*** jeromatron has quit IRC | 23:55 |