*** enigma has quit IRC | 00:01 | |
*** markwash has quit IRC | 00:02 | |
*** kazu has joined #openstack | 00:05 | |
*** bcwaldon_ has joined #openstack | 00:06 | |
*** dprince has quit IRC | 00:06 | |
gholt | devcamcar: You're using swauth right? I think you need to curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: mypass' http://(host):8080/auth/v1.0 | 00:07 |
gholt | Since swauth actually runs within the proxy software itself, you need the /auth/ to route the request properly as an auth request rather than a standard storage request. | 00:07 |
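A successful request of that shape hands the storage URL and auth token back in the response headers. A sketch with placeholder values (the header names are standard for the v1.0 auth protocol; host, password, and token here are illustrative):

```sh
# Token request against swauth running inside the proxy (placeholders assumed)
curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: mypass' \
    http://<host>:8080/auth/v1.0
# On success, the interesting response headers look roughly like:
#   X-Storage-Url: http://<host>:8080/v1/AUTH_<account-uuid>
#   X-Auth-Token:  AUTH_tk<random>
```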
*** bcwaldon has quit IRC | 00:08 | |
devcamcar | gholt: yea, i munged that part, with /auth/v1.0 i get a 500: | 00:09 |
devcamcar | Feb 25 00:07:56 (host) Exception: Could not obtain services info: /v1/AUTH_.auth/system/.services 404 Not Found | 00:09 |
gholt | Bah, it never ends, eh? :) | 00:09 |
devcamcar | haha yea | 00:09 |
gholt | You created an account named 'system' before? | 00:09 |
devcamcar | gholt: system:root | 00:10 |
devcamcar | gholt: actually, hrm | 00:10 |
*** lamar has quit IRC | 00:10 | |
devcamcar | i only did swauth-add-user, i thought that would create the account | 00:10 |
*** bcwaldon_ has quit IRC | 00:11 | |
gholt | Try swauth-add-user -K <key> system root <password> again and see what happens | 00:11 |
devcamcar | do i need to explicitly do swauth-add-account as well? | 00:11 |
devcamcar | ok | 00:11 |
gholt | Nah, you shouldn't | 00:11 |
devcamcar | gholt: on 2nd run, it returned without error, just like first time | 00:11 |
gholt | I wonder if something was broken when you made the account at first... Can you try making a different user? | 00:15 |
gholt | In a different account | 00:15 |
*** johnpur has quit IRC | 00:16 | |
devcamcar | gholt: i haven't changed anything configuration wise since i made the account, but i can try with a different one | 00:16 |
devcamcar | gholt: i'm seeing tons of these in logs: http://paste.openstack.org/show/769/ | 00:16 |
*** kazu has quit IRC | 00:16 | |
gholt | Well, shoot, actually that doesn't make much sense either (looking at the code, which specifically does things in a certain order so that half-made accounts don't cause problems when remade) | 00:16 |
uvirtbot | New bug: #724654 in glance "Image Type is Erroneously Required" [Undecided,New] https://launchpad.net/bugs/724654 | 00:16 |
gholt | devcamcar: That sounds like you're running an incorrect version of rsync, but I'm fuzzy on that part of Swift, redbo or chuck would know better there. That shouldn't affect the account problems you're having though. | 00:18 |
devcamcar | gholt: second account works | 00:18 |
devcamcar | weird! | 00:19 |
gholt | Grr. Must've missed a race condition somewhere then on the first. | 00:19 |
devcamcar | gholt: though i have tried creating system:root user before, maybe it had something half way done when i got it going the second time | 00:19 |
devcamcar | gholt: whats best way to nuke that account so i can recreate it? | 00:19 |
gholt | Can you do a st -A http://127.0.0.1:8080/auth/v1.0 -U .super_admin:.super_admin -K swauthkey list system ? | 00:20 |
gholt | I'm curious what's in there. :) | 00:20 |
devcamcar | sure, sec | 00:21 |
devcamcar | gholt: 401 unauthorized, weird | 00:22 |
devcamcar | oh i see why | 00:22 |
*** joearnold has quit IRC | 00:22 | |
devcamcar | sec, it forwarded that through my load balancer, which doesn't actually exist yet :) | 00:22 |
gholt | Ah, hehe | 00:23 |
devcamcar | gholt: when does it use the public and private urls defined in default_cluster_url? | 00:23 |
devcamcar | gholt: for now i should probably leave everything just on the private while i get this worked out | 00:23 |
gholt | The public one is given to users of the system, the private one is used by swauth itself to make accounts, users, etc. | 00:23 |
gholt | Yeah, you can see what an account has with swauth-list -K swauthkey account | 00:24 |
gholt | And you can update it with swauth-set-account-service | 00:25 |
gholt | But we're getting into advanced stuff when the (supposedly) easier stuff is acting funny, hehe. | 00:25 |
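For context, those URLs live in the swauth filter section of proxy-server.conf. A minimal sketch; the option spelling has varied across versions (default_cluster_url vs. default_swift_cluster), so treat the exact key names as assumptions:

```ini
# Hypothetical [filter:swauth] snippet in /etc/swift/proxy-server.conf.
# The first URL is the public one handed to users; the second is the
# private one swauth itself uses to create accounts and users.
[filter:swauth]
use = egg:swift#swauth
super_admin_key = swauthkey
default_swift_cluster = local#http://<public-lb>:8080/v1#http://127.0.0.1:8080/v1
```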
*** zul has joined #openstack | 00:31 | |
devcamcar | gholt: cool, just fixed my load balancers, i'll see whats going on now | 00:32 |
*** kashyap has quit IRC | 00:33 | |
*** troytoman is now known as troytoman-away | 00:34 | |
devcamcar | gholt: so just noticed, when i do st list system i am actually getting a few things before it dies | 00:35 |
devcamcar | gholt: st blah blah blah list system | 00:35 |
devcamcar | .services | 00:35 |
devcamcar | root | 00:35 |
gholt | Okay, with just one user that's what it should look like. | 00:35 |
gholt | Or does it display some error too? | 00:35 |
devcamcar | __main__.ClientException: Container GET failed: http://load-balancer-host:80/v1/AUTH_.auth/system?format=json&marker=root 401 Unauthorized | 00:35 |
devcamcar | yea dumps a stack trace after that | 00:35 |
gholt | Hmm. How many proxy servers do you have? I wonder if one is acting funny | 00:36 |
devcamcar | it's consistent | 00:37 |
devcamcar | on either system or test account | 00:37 |
gholt | And I guess double check the configs for them and make sure they're the same (minus maybe the allow_account_management thing) | 00:37 |
*** maple_bed has joined #openstack | 00:38 | |
devcamcar | gholt: so our setup is going to look like: a cluster of load balancers running pound and then all services running on the storage nodes | 00:38 |
devcamcar | so proxy is running everywhere and we're going to do ssl termination with pound | 00:39 |
devcamcar | so right now all the proxies have allow_account_management set to true | 00:39 |
devcamcar | they're all the same | 00:39 |
gholt | Ah I see. It just seems like one is having trouble validating you is why I ask, but others aren't | 00:40 |
gholt | But the super_admin_key is in the proxy-server.conf, so, that isn't making sense to me, hehe | 00:40 |
*** maplebed has quit IRC | 00:41 | |
devcamcar | gholt: seems to be consistent, i think maybe my load balancers are still messed up, i'm respushing new proxy-conf with private only dns now | 00:42 |
*** Ryan_Lane has joined #openstack | 00:45 | |
*** kashyap has joined #openstack | 00:50 | |
gholt | devcamcar: To answer your earlier question, you should be able to swauth-delete-user system root and then swauth-delete-account system to get rid of the account to be able to recreate it fresh. That failing, you can st delete system similar to the st list system you did. | 00:50 |
gholt | But that's all "should" since st list etc. are 401ing on you sometimes and sometimes not. :/ | 00:51 |
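Spelled out, that cleanup sequence looks like the following (a sketch; the admin key and the system:root names are taken from the conversation above):

```sh
# Remove the half-made account so it can be recreated fresh
swauth-delete-user -K swauthkey system root
swauth-delete-account -K swauthkey system
# Fallback: delete the account's container directly, as with the earlier list
st -A http://127.0.0.1:8080/auth/v1.0 -U .super_admin:.super_admin \
    -K swauthkey delete system
```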
devcamcar | gholt: works now that i took my load balancers out of the equation | 00:51 |
*** Ryan_Lane has quit IRC | 00:51 | |
gholt | Ah, well, at least that helps a bit to figure it all out. :) | 00:51 |
devcamcar | so now i know where to focus :) | 00:51 |
devcamcar | thanks for the help! i'm gonna take a break now that there's some progress | 00:52 |
gholt | When everything's new, everything's suspect, lol | 00:52 |
*** gregp76 has quit IRC | 00:52 | |
*** dendro-afk is now known as dendrobates | 00:52 | |
*** dendrobates is now known as dendro-afk | 00:53 | |
devcamcar | gholt: odd, I was able to swauth-delete-user system root but i can't swauth-delete-account system | 00:56 |
devcamcar | gholt: nm, it looks like file system permissions issue | 00:57 |
devcamcar | i forgot a chown somewhere | 00:57 |
*** joearnold has joined #openstack | 00:57 | |
*** hggdh has quit IRC | 00:58 | |
*** oneiropolo has joined #openstack | 00:59 | |
oneiropolo | hello | 00:59 |
devcamcar | gholt: does this look normal? | 00:59 |
devcamcar | drwxr-xr-x 2 nobody swift 6 2011-02-25 00:50 4af061388b2465a1a69b21edcc5023dc | 00:59 |
devcamcar | nobody:swift instead of swift:swift ? | 00:59 |
gholt | Not even. Everything should be running as the swift user. It deliberately drops privs to that, or whatever you have set as user = | 01:01 |
*** hggdh has joined #openstack | 01:01 | |
gholt | I wonder if rsync did that. Hmm | 01:02 |
*** KenD has quit IRC | 01:03 | |
openstackhudson | Project nova build #576: SUCCESS in 1 min 42 sec: http://hudson.openstack.org/job/nova/576/ | 01:07 |
openstackhudson | Tarmac: Make tests start with a clean database for every test. | 01:07 |
devcamcar | gholt: user is set to swift in all my configs | 01:08 |
devcamcar | gholt: this must be related to all those strange rsync errors i'm getting in the logs | 01:09 |
devcamcar | gholt: crap I think I see it | 01:09 |
devcamcar | argh! | 01:09 |
devcamcar | gholt: yea i see it | 01:09 |
devcamcar | gholt: i have uuid instead of uid in my rsync config, fat fingered it somehow | 01:09 |
devcamcar | they can't protect me from myself | 01:10 |
gholt | Damn. So that's what that earlier invalid option uuid was, lol | 01:10 |
devcamcar | yea hah, makes a lot of sense now | 01:12 |
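For reference, the stanza the typo lived in; a minimal sketch of a storage node's rsync config along the lines of the Swift deployment guide, where the key line is uid (not uuid):

```sh
# /etc/rsyncd.conf (sketch) -- "uid = swift" makes the rsync daemon write
# replicated files as the swift user; with it misspelled, transfers fall
# back to rsync's default user "nobody", producing nobody:swift ownership.
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/run/object.lock
```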
*** mahadev has quit IRC | 01:15 | |
*** hadrian has quit IRC | 01:21 | |
*** hadrian has joined #openstack | 01:21 | |
*** winston-d has joined #openstack | 01:22 | |
devcamcar | gholt: was able to delete system account now, had to reset the account service on it first | 01:22 |
*** hadrian has joined #openstack | 01:22 | |
winston-d | I've one question regarding the 'zone' in Swift | 01:22 |
devcamcar | gholt: and it actually works, yay | 01:23 |
winston-d | are the machines in the same zone supposed to have exactly the same hardware configuration, such as the same hard drives? | 01:25 |
notmyname | winston-d: no, the zones are there to separate groups of servers as much as possible (to avoid system-wide failures like a switch going out or a cabinet losing power) | 01:28 |
notmyname | with no zones (the same as only one zone), multiple copies of each entity could be affected by a cab failure (or even a single storage node failure) | 01:30 |
notmyname | so it's simply a concept to separate things. the servers can be heterogeneous. | 01:30 |
winston-d | there can be multiple servers in one zone, right? then they can be heterogeneous? | 01:32 |
winston-d | actually, now i have 14 machines (10 of the same type, 4 of the other type of config). and I have 12 SATA HDDs, and 8 SSDs. I'm considering how to deploy Nova & Swift | 01:34 |
winston-d | any suggestions or hint? | 01:35 |
*** hggdh has quit IRC | 01:36 | |
*** hggdh has joined #openstack | 01:38 | |
*** hazmat has joined #openstack | 01:42 | |
*** mahadev has joined #openstack | 01:44 | |
*** mahadev has quit IRC | 01:48 | |
notmyname | winston-d: sorry to leave you hanging there... | 01:51 |
notmyname | multiple servers in one zone is good | 01:52 |
winston-d | notmyname: it's ok. | 01:52 |
*** clauden has quit IRC | 01:52 | |
notmyname | are the 20 drives across all 14 machines? | 01:53 |
winston-d | notmyname: i haven't installed them yet. | 01:53 |
winston-d | notmyname: so the first thing is to decide how to install those drives | 01:54 |
notmyname | but 20 drives total? and 14 servers total? | 01:54 |
winston-d | notmyname: that's right | 01:54 |
*** mahadev has joined #openstack | 01:55 | |
winston-d | i was thinking 14 servers for Nova and some of them would also be configured as Swift nodes. | 01:56 |
notmyname | 4 zones with 3 SATA drives each would be a good start: one box per zone, each with 3 SATA HDDs | 01:57 |
notmyname | with the caveat that the SSDs will of course give you better performance | 01:57 |
notmyname | but the 4 "other config" boxes with 3 drives each, plus perhaps one other box as the proxy (for 5 total swift boxes) | 01:58 |
notmyname | that leaves 9 or 10 boxes for nova (but short on drives...) | 01:59 |
winston-d | notmyname: so you don't suggest mixing Nova & Swift together? | 01:59 |
notmyname | or use the 8 SSDs as 4 zones with 2 drives each | 01:59 |
notmyname | it would probably work, but I don't think there is much experience doing that | 02:00 |
*** Dumfries has quit IRC | 02:00 | |
notmyname | for an ideal performance setup, I would think they need to be optimized differently (IOPS vs CPU) | 02:00 |
notmyname | maybe you could run nova and use the VMs as the storage nodes. OpenStack Inception ;-) | 02:01 |
winston-d | notmyname: that's cool idea. :) | 02:02 |
notmyname | of course, that would probably hurt the swift performance pretty bad (just a guess) | 02:02 |
notmyname | so you've got a few good options and a few others if this isn't for a prod setup :-) | 02:02 |
winston-d | I remember someone once said there should be at least 5 zones? | 02:04 |
notmyname | for swift, one of the bottlenecks will be the networking. a 1Gbps connection is slower than a SATA HDD for large GETs/PUTs | 02:04 |
notmyname | 5 is a recommended start | 02:04 |
notmyname | 3 is the minimum | 02:04 |
notmyname | 4, in your case, may be good since you have limited amounts of drives | 02:05 |
winston-d | i see. | 02:05 |
winston-d | 4 zones with SATA hdd and 1 zone with SSDs, is that OK? | 02:06 |
winston-d | but clearly the one with SSD has less capacity | 02:06 |
notmyname | yes. if they are different sizes you should adjust the weights in the rings accordingly | 02:06 |
notmyname | swift won't put more than one copy of something in a zone, so the number of zones is a balancing act of how much you can partition single points of failure. ex, in your case 20 zones would protect against a single drive failing, but not against a box failing. 5 zones (with one server and multiple drives in each zone) would protect against drive and server failure. separate those boxes in different cabs, and you protect against a cab failure | 02:09 |
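A sketch of how those zone assignments get expressed when building a ring, with hypothetical IPs and devices; the create arguments are <part_power> <replicas> <min_part_hours>:

```sh
# 3 replicas spread across 5 zones, one box per zone (illustrative values;
# weight ~ drive size in GB, as suggested later in this discussion)
swift-ring-builder object.builder create 17 3 1
swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 2000
swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 2000
swift-ring-builder object.builder add z3-10.0.0.3:6000/sdb1 2000
swift-ring-builder object.builder add z4-10.0.0.4:6000/sdb1 2000
swift-ring-builder object.builder add z5-10.0.0.5:6000/sdb1 2000
swift-ring-builder object.builder rebalance
```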
winston-d | notmyname: sorry, i don't get it. how do 5 zones (one box w/ multiple drives each) protect against box failure? | 02:11 |
oneiropolo | is there anybody can tell me about partition power? | 02:17 |
oneiropolo | i don't understand the meaning about the partition power | 02:17 |
winston-d | notmyname: my understanding is data replicas will be put in different zones; in a deployment of 3 replicas & 5 zones (one box w/ multiple drives each), one box or one drive failure won't hurt since data is replicated to 3 boxes. | 02:17 |
notmyname | winston-d: in your case with 20 drives, one zone per drive (since you only have 14 boxes) could result in 2 copies (or all 3) on the same physical box (but different drives) | 02:18 |
notmyname | so you could lose data, at least temporarily if a box goes down | 02:18 |
notmyname | but with boxes in different zones, you are protected exactly as you said: one copy in distinct zones, so you are protected against box and drive failure | 02:20 |
winston-d | notmyname: do you mean that one can configure one box into two zones? | 02:20 |
notmyname | winston-d: the ring manages volumes (ie drives or RAID volumes), so yes | 02:20 |
winston-d | oh, i thought the ring manages boxes | 02:21 |
notmyname | oneiropolo: the partition power determines the maximum size of your swift cluster (in storage volumes) and how well you can balance the storage across all of the storage volumes | 02:21 |
winston-d | so that's the tricky part, i should configure the drives in the same box in the same zone | 02:22 |
notmyname | yes | 02:22 |
winston-d | got it. thanks! | 02:23 |
notmyname | one entry in the ring is <ip address + mount location> | 02:23 |
notmyname | actually, I guess it would be <zone + ip + mount point> | 02:23 |
winston-d | notmyname: how to calculate the weight of the ring according to capacity? | 02:26 |
notmyname | the weight is a dimensionless number that only makes sense in relation to the other weights. a simple method is to use the number of GB the drive has for the drive weight | 02:27 |
notmyname | so 2000 for a 2T drive, 1500 for a 1.5T, etc | 02:27 |
oneiropolo | what it means that size of swift cluster ? | 02:29 |
winston-d | i see. thanks. | 02:29 |
oneiropolo | is that disk size? or number of nodes? | 02:29 |
notmyname | oneiropolo: the total number of nodes in the system | 02:30 |
oneiropolo | oh, i see, thanks a lot ~ | 02:31 |
winston-d | if swift is deployed with mixed drives (say, slow SATA & fast SSD), the performance of swift is unpredictable, right? | 02:33 |
notmyname | oneiropolo: so, for example, based on your DC space available, budget, etc you may know that you can only ever have 1000 boxes in a cluster, each with 36 drives. then multiply by 100 so you can easily manage 1% of the ring space. (=3600000). then find the closest power of 2 bigger than that number | 02:33 |
*** guynaor has joined #openstack | 02:33 | |
*** guynaor has left #openstack | 02:33 | |
notmyname | winston-d: it depends on how they are deployed :-). you could do dedicated account/container nodes with SSDs and object nodes with HDDs. but if the SSDs and HDDs were mixed throughout, then performance would be unpredictable | 02:35 |
*** acmurthy has joined #openstack | 02:35 | |
*** RobertLaptop has joined #openstack | 02:36 | |
*** vvuksan has quit IRC | 02:37 | |
winston-d | hmm, interesting, i thought account/container/object are all required for each node | 02:38 |
winston-d | didn't know one can separate them to different boxes. | 02:39 |
notmyname | it's all very flexible ;-) | 02:41 |
jaypipes | sandywalsh: damn you for creating a rift. :P | 02:42 |
*** bcwaldon has joined #openstack | 02:45 | |
*** MarkAtwood has joined #openstack | 02:46 | |
*** nelson has joined #openstack | 02:50 | |
sandywalsh | jaypipes, heh, sorry. I needed to draw a line somewhere. | 03:05 |
sandywalsh | jaypipes, all very valid arguments and a great debate, but I got donuts to make. | 03:05 |
*** lamar has joined #openstack | 03:07 | |
*** hazmat has quit IRC | 03:14 | |
*** hadrian has quit IRC | 03:23 | |
*** maple_bed has quit IRC | 03:36 | |
*** maplebed has joined #openstack | 03:39 | |
*** pvo has quit IRC | 03:40 | |
*** rchavik has joined #openstack | 03:42 | |
*** sateesh has joined #openstack | 03:48 | |
*** mdomsch has joined #openstack | 03:48 | |
uvirtbot | New bug: #724719 in nova "OpenStack-Nova-Compute-ScriptInstallation Errors and Hung" [Undecided,New] https://launchpad.net/bugs/724719 | 04:06 |
creiht | devcamcar: btw, the "name lookup failed for (ip):" error in the rsync logs is due to rsync doing a reverse lookup on the ips | 04:06 |
creiht | unfortunately in the current rsync it can't be disabled | 04:07 |
creiht | we have a patched rsync where I just disabled the reverse lookups | 04:07 |
creiht | I should see about getting that into a ppa | 04:07 |
*** dragondm has quit IRC | 04:08 | |
*** MarkAtwood has quit IRC | 04:21 | |
*** acmurthy has quit IRC | 04:21 | |
*** Nick_ has joined #openstack | 04:40 | |
*** rchavik has quit IRC | 04:40 | |
*** Nick_ is now known as Guest54872 | 04:40 | |
*** sateesh has quit IRC | 04:42 | |
oneiropolo | if I have 10 server machines with 1 disk each | 04:45 |
oneiropolo | how should I choose the partition policy, for example? | 04:45 |
*** joearnold has quit IRC | 04:52 | |
*** Guest54872 has quit IRC | 04:54 | |
*** Guest54872 has joined #openstack | 04:55 | |
*** kashyap has quit IRC | 04:56 | |
*** paltman has quit IRC | 05:01 | |
*** Guest54872 has quit IRC | 05:04 | |
*** Guest54872 has joined #openstack | 05:05 | |
*** gregp76 has joined #openstack | 05:08 | |
*** zenmatt has quit IRC | 05:14 | |
*** kashyap has joined #openstack | 05:17 | |
*** king has joined #openstack | 05:23 | |
*** king is now known as Guest99883 | 05:24 | |
*** Guest99883 has quit IRC | 05:30 | |
*** Guest54872 has quit IRC | 05:31 | |
*** Guest54872 has joined #openstack | 05:36 | |
*** bcwaldon has quit IRC | 05:42 | |
*** MarkAtwood has joined #openstack | 05:42 | |
*** blpiatt has joined #openstack | 05:42 | |
*** omidhdl has joined #openstack | 05:44 | |
*** f4m8_ is now known as f4m8 | 05:45 | |
*** lamar has quit IRC | 05:56 | |
*** acmurthy has joined #openstack | 06:01 | |
*** acmurthy has quit IRC | 06:05 | |
*** mdomsch has quit IRC | 06:10 | |
*** acmurthy has joined #openstack | 06:11 | |
*** bcwaldon has joined #openstack | 06:15 | |
*** Jbain has quit IRC | 06:17 | |
*** Jbain has joined #openstack | 06:17 | |
*** bcwaldon has quit IRC | 06:20 | |
*** gregp76 has quit IRC | 06:20 | |
*** bcwaldon has joined #openstack | 06:21 | |
*** bcwaldon has quit IRC | 06:28 | |
*** kazu has joined #openstack | 06:37 | |
*** MarkAtwood has quit IRC | 06:41 | |
*** miclorb_ has quit IRC | 07:01 | |
*** naehring has joined #openstack | 07:15 | |
*** Guest54872 has quit IRC | 07:18 | |
*** acmurthy has quit IRC | 07:20 | |
ttx | vishy: to report a bug against packaging: https://bugs.launchpad.net/ubuntu/+source/nova | 07:25 |
ttx | (just make it clear in desc that it's for trunk, not any released packages) | 07:25 |
winston-d | has anyone encountered errors using 'nova-manage db sync'? | 07:25 |
*** naehring has quit IRC | 07:28 | |
kpepple | winston-d: what kind of errors ? | 07:35 |
winston-d | kpepple: here's the output http://paste.openstack.org/show/770/ | 07:37 |
winston-d | kpepple: i'm using Bexar release on RHEL 6. Manually compiled. | 07:37 |
kpepple | winston-d: looking ... haven't seen this before | 07:39 |
*** omidhdl has quit IRC | 07:39 | |
kpepple | winston-d: what database are you using ? it's defined in your /etc/nova/nova.conf file with the --sql_connection flag | 07:40 |
*** omidhdl has joined #openstack | 07:40 | |
winston-d | kpepple: it is MYSQL | 07:40 |
*** ramkrsna has joined #openstack | 07:41 | |
*** ramkrsna has joined #openstack | 07:41 | |
winston-d | kpepple: --sql_connection=mysql://nova:nova@192.168.4.1/nova | 07:41 |
* ttx suspects a wrong version of python-migrate | 07:42 | |
winston-d | //winston-d feels the same as ttx | 07:42 |
ttx | we are using 0.6.x | 07:42 |
kpepple | winston-d: i agree with ttx ... 0.2.2 seems really old -- i think you need something like 0.6 | 07:42 |
ttx | I think it works with 0.5, but certainly not with 0.2 | 07:42 |
winston-d | Best match: sqlalchemy-migrate 0.6.1 | 07:42 |
winston-d | Adding sqlalchemy-migrate 0.6.1 to easy-install.pth file | 07:42 |
winston-d | Installing migrate script to /usr/bin | 07:42 |
winston-d | Installing migrate-repository script to /usr/bin | 07:42 |
kpepple | winston-d: make sure you either install all the pre-req packages or use pip to install all the new eggs thru pip-requires | 07:43 |
winston-d | this is what i got when 'easy_install sqlalchemy-migrate' | 07:43 |
ttx | /usr/lib/python2.6/site-packages/migrate-0.2.2-py2.6.egg/migrate/versioning/unique_instance.py | 07:43 |
ttx | apparently your system is using something else | 07:43 |
winston-d | well, how do i change that? remove the old one? | 07:44 |
* ttx is not a pip/venv/egg creature. I admit using only packages, so i can't really help you on that one. | 07:45 | |
kpepple | winston-d: i run through the virtualenv ... otherwise, you'll probably need to upgrade or remove the old one | 07:47 |
winston-d | well, i am trying to do that. | 07:48 |
*** CloudChris has joined #openstack | 07:48 | |
winston-d | setting up Nova has always been a nightmare... | 07:48 |
kpepple | winston-d: sadly, it's much easier on ubuntu ... we need to work harder on the redhat/centos side | 07:49 |
kpepple | winston-d: can you just uninstall with the easy_install -m ? | 07:50 |
winston-d | kpepple: let me try that | 07:50 |
*** mgoldmann has joined #openstack | 07:51 | |
winston-d | kpepple: doesn't work. | 07:52 |
winston-d | 'nova-manage db sync' reports the same error. | 07:53 |
kpepple | winston-d: do you have virtualenv installed ? if so, run the test script (./run_tests.sh) and it will install all the necessary (and correct) eggs locally ... | 07:54 |
winston-d | kpepple: thanks for the hint. i'm downloading virtualenv. | 07:56 |
*** guigui has joined #openstack | 08:00 | |
*** romain_lenglet_ has joined #openstack | 08:05 | |
*** GasbaKid has joined #openstack | 08:13 | |
*** rcc has joined #openstack | 08:15 | |
*** romain_lenglet_ has quit IRC | 08:15 | |
*** Guest54872 has joined #openstack | 08:21 | |
*** berendt has joined #openstack | 08:21 | |
*** skiold has joined #openstack | 08:22 | |
*** czajkowski has quit IRC | 08:24 | |
winston-d | kpepple: ./run_tests.sh is still ongoing. really slow internet connection. | 08:25 |
kpepple | winston-d: this takes forever --- the twisted module is really large | 08:25 |
*** littleidea has quit IRC | 08:26 | |
*** Nacx has joined #openstack | 08:30 | |
*** littleidea has joined #openstack | 08:30 | |
*** Guest54872 has quit IRC | 08:31 | |
*** Guest54872 has joined #openstack | 08:31 | |
winston-d | kpepple: it finished! | 08:36 |
winston-d | but there seems to be some errors | 08:36 |
kpepple | winston-d: make sure you activate the venv before you run db sync | 08:37 |
kpepple | winston-d: you'll see some errors ... hopefully nothing fatal | 08:37 |
winston-d | how to activate 'venv' ? | 08:37 |
openstackhudson | Project nova build #577: SUCCESS in 1 min 40 sec: http://hudson.openstack.org/job/nova/577/ | 08:37 |
openstackhudson | Tarmac: The proposed bug fix stubs out the _is_vdi_pv routine for testing purposes. | 08:37 |
kpepple | winston-d: ". .nova-venv/bin/activate" | 08:38 |
winston-d | kpepple: where's that nova-venv? 'whereis activate' turns up nothing | 08:40 |
kpepple | winston-d: you are going to source the .nova-venv/bin/activate (it's inside your nova source code folder) | 08:41 |
winston-d | kpepple: no, i don't see it. | 08:42 |
openstackhudson | Project nova build #578: SUCCESS in 1 min 39 sec: http://hudson.openstack.org/job/nova/578/ | 08:42 |
openstackhudson | Tarmac: Cleanup db method names for dealing with auth_tokens to follow standard naming pattern. | 08:42 |
kpepple | winston-d: when you did the ./run_tests.sh script, it created it in the same folder (top level of trunk) | 08:43 |
winston-d | kpepple: oh, sorry i missed that. grrr. | 08:45 |
winston-d | kpepple: 'nova-manage db sync' has the same error report | 08:45 |
kpepple | winston-d: even after you sourced the "activate" script ? | 08:46 |
winston-d | yes | 08:46 |
*** drico has quit IRC | 08:46 | |
kpepple | winston-d: just to check, do this (will paste something , hold on) | 08:47 |
*** naehring has joined #openstack | 08:49 | |
*** daveiw has joined #openstack | 08:49 | |
kpepple | winston-d: try this paste - http://paste.openstack.org/show/771/ | 08:49 |
kpepple | winston-d: basically, you will use nova-manage to open a shell, then find out what migrate it is using | 08:50 |
winston-d | kpepple: here's my output: http://paste.openstack.org/show/772/ | 08:52 |
kpepple | winston-d: it's still pointing to the old migrate :( | 08:52 |
winston-d | still the very old version | 08:52 |
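The paste isn't preserved here, but from context the check amounts to asking Python which migrate it actually imports; a sketch:

```sh
# Which sqlalchemy-migrate does the interpreter pick up, and from where?
python -c "import migrate; print migrate.__file__"
# e.g. /usr/lib/python2.6/site-packages/migrate-0.2.2-py2.6.egg/... (too old)
python -c "import migrate; print getattr(migrate, '__version__', 'unknown')"
```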
*** calavera has joined #openstack | 08:53 | |
*** adjohn has quit IRC | 08:53 | |
kpepple | winston-d: i am vaguely remembering that we've seen this before ... something about migrate not versioning properly ... | 08:53 |
*** miclorb has joined #openstack | 08:54 | |
winston-d | kpepple: it's weird because the 'easy_install sqlalchemy-migrate' output seems OK, but there's no such thing inside /usr/lib/python2.6/site-packages | 08:54 |
kpepple | winston-d: did you easy_install after you sourced the activate ? | 08:55 |
*** littleidea has quit IRC | 08:56 | |
kpepple | winston-d: or was this part of the ./run_tests script ? | 08:56 |
winston-d | kpepple: i did that _before_ sourcing venv. let me try that | 08:56 |
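The sequence he's about to try, sketched out (the venv path comes from kpepple's hint above; pinning migrate to 0.6.1 is an assumption based on the earlier easy_install output):

```sh
# Install the egg *inside* the virtualenv, then run the tool from it
. .nova-venv/bin/activate
easy_install sqlalchemy-migrate==0.6.1
nova-manage db sync
```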
*** allsystemsarego has joined #openstack | 08:59 | |
*** allsystemsarego has joined #openstack | 08:59 | |
*** blpiatt has quit IRC | 09:02 | |
winston-d | kpepple: this time, easy_install did install the right version of migrate in the venv, but 'nova-manage db sync' has another error. but I have to run for now. thank you very much for your kind help | 09:05 |
kpepple | winston-d: no worries, best of luck | 09:05 |
winston-d | kpepple: this is the error: http://paste.openstack.org/show/773/ | 09:06 |
*** adjohn has joined #openstack | 09:08 | |
*** miclorb has quit IRC | 09:17 | |
*** mahadev has quit IRC | 09:20 | |
*** sateesh has joined #openstack | 09:22 | |
*** miclorb_ has joined #openstack | 09:24 | |
*** miclorb_ has quit IRC | 09:31 | |
*** miclorb_ has joined #openstack | 09:37 | |
ttx | vishy: created the bug, will take care of it: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/724844 | 09:47 |
uvirtbot | Launchpad bug 724844 in nova "[trunk] Add lvdisplay to nova_sudoers" [High,In progress] | 09:47 |
*** miclorb_ has quit IRC | 09:50 | |
*** Guest54872 has quit IRC | 09:56 | |
*** sateesh has quit IRC | 09:56 | |
berendt | after uploading images to nova-objectstore they only show the status "decrypting" for a long time (http://paste.openstack.org/show/775/) | 09:57 |
berendt | is this normal? | 09:57 |
*** miclorb_ has joined #openstack | 09:58 | |
ttx | berendt: define "long time" ? | 10:02 |
berendt | 10 minutes | 10:02 |
naehring | same here. | 10:02 |
berendt | in the past they were available after a few seconds or minutes | 10:02 |
ttx | berendt: yes | 10:02 |
berendt | also i have no cpu consuming processes on the system.. | 10:03 |
ttx | berendt: looks like a regression to me... current trunk ? | 10:03 |
berendt | nearly.. 732 | 10:03 |
berendt | i'll open a bugreport | 10:03 |
ttx | berendt: if you can reproduce it reliably, file a bug, I'll try to reproduce locally to confirm | 10:03 |
berendt | i tried it two times on my setup and naehring has the same problems on another setup | 10:04 |
berendt | hmm.. probably an error with the rsa stuff | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] Process Process-3: | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] Traceback (most recent call last): | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] File "/usr/lib64/python2.6/multiprocessing/process.py", line 231, in _bootstrap | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] self.run() | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] File "/usr/lib64/python2.6/multiprocessing/process.py", line 88, in run | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] self._target(*self._args, **self._kwargs) | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] File "/usr/lib64/python2.6/site-packages/nova/objectstore/image.py", line 242, in register_aws_image | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] cloud_private_key, decrypted_filename) | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] File "/usr/lib64/python2.6/site-packages/nova/objectstore/image.py", line 263, in decrypt_image | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] % err) | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] Error: Failed to decrypt private key: RSA operation error | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] 1918:error:0407106B:rsa routines:RSA_padding_check_PKCS1_type_2:block type is not 02:rsa_pk1.c:190: | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] 1918:error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed:rsa_eay.c:596: | 10:06 |
berendt | 2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] | 10:06 |
berendt | i'll check the ca.. i moved the objectstore to an other system | 10:06 |
berendt | chronos:/var/lib/nova/CA # ./genrootca.sh | 10:07 |
berendt | Not installing, it's already done. | 10:07 |
*** miclorb_ has quit IRC | 10:07 | |
berendt | ok.. i copied /var/lib/nova/CA from the old system and now it's working | 10:11 |
berendt | seems to be something wrong with the generated CA on the new system | 10:12 |
*** adjohn has quit IRC | 10:12 | |
*** adjohn has joined #openstack | 10:13 | |
naehring | well, I've removed all of the content below /var/lib/nova and reset my database. now the process was just as fast as before. Maybe something broke while updating | 10:15 |
uvirtbot | New bug: #724853 in nova "after uploading images to nova-objectstore they are only decrypting " [Undecided,New] https://launchpad.net/bugs/724853 | 10:22 |
*** darea has joined #openstack | 10:23 | |
*** oneiropolo has quit IRC | 10:24 | |
darea | hi, can someone help me with configuring swift ? got some probs with the ring-builder | 10:28 |
*** piken has quit IRC | 10:37 | |
*** piken has joined #openstack | 10:37 | |
ttx | vishy: done, should appear soon in builds. | 10:48 |
*** MarkAtwood has joined #openstack | 10:49 | |
*** MarcMorata has joined #openstack | 10:53 | |
*** h0cin has joined #openstack | 11:24 | |
*** omidhdl has left #openstack | 11:24 | |
*** alekibango_ has quit IRC | 11:27 | |
*** alekibango has joined #openstack | 11:28 | |
*** Daviey has quit IRC | 11:53 | |
*** MarkAtwood has quit IRC | 11:55 | |
*** Daviey has joined #openstack | 11:58 | |
*** rcc has quit IRC | 12:00 | |
*** calavera has quit IRC | 12:22 | |
*** reldan has joined #openstack | 12:34 | |
*** zenmatt has joined #openstack | 12:36 | |
*** hazmat has joined #openstack | 12:37 | |
*** maplebed has quit IRC | 12:38 | |
*** ctennis has quit IRC | 12:40 | |
*** Daviey has quit IRC | 12:42 | |
*** reldan has quit IRC | 12:43 | |
*** Daviey has joined #openstack | 12:47 | |
*** rcc has joined #openstack | 12:52 | |
*** naehring has quit IRC | 12:53 | |
*** GasbaKid has quit IRC | 12:54 | |
*** GasbaKid has joined #openstack | 13:04 | |
*** kashyap has quit IRC | 13:05 | |
*** Daviey has quit IRC | 13:06 | |
*** Daviey has joined #openstack | 13:13 | |
*** j05h has joined #openstack | 13:13 | |
*** paltman has joined #openstack | 13:17 | |
*** doude has joined #openstack | 13:20 | |
*** eikke has joined #openstack | 13:21 | |
*** ctennis has joined #openstack | 13:23 | |
*** f4m8 is now known as f4m8_ | 13:32 | |
*** j05h has quit IRC | 13:33 | |
*** h0cin has quit IRC | 13:34 | |
*** j05h has joined #openstack | 13:36 | |
*** masumotok has joined #openstack | 13:39 | |
*** reldan has joined #openstack | 13:39 | |
*** masumotok has quit IRC | 13:40 | |
*** reldan has quit IRC | 13:44 | |
*** reldan has joined #openstack | 13:47 | |
*** omidhdl has joined #openstack | 13:49 | |
berendt | is there any documentation on how to use nova-ajax-console-proxy? | 13:51 |
*** zenmatt has quit IRC | 13:55 | |
*** zenmatt has joined #openstack | 13:57 | |
*** johnpur has joined #openstack | 13:59 | |
*** ChanServ sets mode: +v johnpur | 13:59 | |
*** jmckenty has joined #openstack | 14:02 | |
*** dendro-afk is now known as dendrobates | 14:05 | |
*** h0cin has joined #openstack | 14:09 | |
*** patcoll has joined #openstack | 14:10 | |
*** patcoll has quit IRC | 14:14 | |
*** patcoll has joined #openstack | 14:14 | |
*** hadrian has joined #openstack | 14:16 | |
*** acmurthy has joined #openstack | 14:19 | |
*** ppetraki has joined #openstack | 14:20 | |
*** ramkrsna has quit IRC | 14:21 | |
creiht | darea: howdy | 14:24 |
annegentle | berendt: unfortunately, no doc yet on the AJAX console, but sleepsonthefloor and I are emailing about it now. | 14:27 |
berendt | annegentle: sounds good :) | 14:27 |
annegentle | berendt: so all we have to go on right now is the wiki page: http://wiki.openstack.org/WebBasedSerialConsole and then there's some info in the code checkin too, which I can't put my hands on right now, sorry. | 14:29 |
*** olivier__ is now known as olivier_ | 14:30 | |
*** Ryan_Lane has joined #openstack | 14:31 | |
*** rcc has quit IRC | 14:33 | |
*** reldan has quit IRC | 14:33 | |
*** reldan has joined #openstack | 14:34 | |
*** blpiatt has joined #openstack | 14:36 | |
*** acmurthy has quit IRC | 14:37 | |
Ryan_Lane | ttx: updated the bug where you asked for a test case | 14:37 |
ttx | Ryan_Lane: cool ! thanks. | 14:38 |
Ryan_Lane | ttx: last week, this week, and next I'm doing a datacenter buildout, so I'm not really near a computer | 14:38 |
Ryan_Lane | otherwise I'd write an actual test | 14:38 |
*** littleidea has joined #openstack | 14:38 | |
*** acmurthy has joined #openstack | 14:42 | |
*** gondoi has joined #openstack | 14:43 | |
*** kashyap has joined #openstack | 14:47 | |
*** kashyap_ has joined #openstack | 14:47 | |
*** acmurthy1 has joined #openstack | 14:48 | |
*** acmurthy has quit IRC | 14:52 | |
ttx | Ryan_Lane: I'll try to do it myself. | 14:53 |
*** pvo has joined #openstack | 14:54 | |
jaypipes | sirp-: k, fixed up all things from review, added lots of unit tests, and added documentation on the new disk and container formats: https://code.launchpad.net/~jaypipes/glance/api-image-format/+merge/51064 | 15:01 |
*** reldan has quit IRC | 15:03 | |
*** bcwaldon has joined #openstack | 15:07 | |
*** acmurthy1 has quit IRC | 15:08 | |
*** imsplitbit has joined #openstack | 15:10 | |
*** pvo has quit IRC | 15:11 | |
*** lwollney has joined #openstack | 15:14 | |
*** acmurthy has joined #openstack | 15:14 | |
*** kazu_ has joined #openstack | 15:16 | |
*** jmckenty has quit IRC | 15:16 | |
*** jmckenty has joined #openstack | 15:16 | |
*** reldan has joined #openstack | 15:16 | |
*** hazmat has quit IRC | 15:18 | |
*** hazmat has joined #openstack | 15:18 | |
*** raygtrejo has joined #openstack | 15:18 | |
*** acmurthy1 has joined #openstack | 15:20 | |
*** vvuksan has joined #openstack | 15:22 | |
*** acmurthy has quit IRC | 15:23 | |
*** mahadev has joined #openstack | 15:25 | |
*** pvo has joined #openstack | 15:27 | |
doude | Hi all, I've got a problem with the serial console with libvirt. I copied the XML template of an instance and set the parameters manually. I created the domain with the 'virsh' command but I cannot access the serial console through the command 'virsh console mydomain' | 15:30 |
doude | my image should support the serial console at boot (ttyS0) | 15:30 |
doude | If I remove the '<serial type="file">' part from the XML file, the serial console is available | 15:31 |
*** patcoll has left #openstack | 15:32 | |
doude | Does it work for you? | 15:32 |
*** mahadev has quit IRC | 15:32 | |
*** mahadev has joined #openstack | 15:34 | |
*** KOL has joined #openstack | 15:37 | |
KOL | Hi, I have 8 desktop pcs with quad cores, 2 harddisks and 6 GB of ram each. Using these machines, I want to make a ton of virtual machines. Only 8 of those virtual machines will be running at one time, but the others should be available to boot into. Is openstack what I need to virtualize the storage and processor capacity in each machine, and to run a virtualization solution on top of it? | 15:39 |
*** Seoman has joined #openstack | 15:41 | |
*** MarcMorata has quit IRC | 15:41 | |
*** reldan has quit IRC | 15:41 | |
*** GasbaKid has quit IRC | 15:46 | |
*** mahadev has quit IRC | 15:50 | |
*** GasbaKid has joined #openstack | 15:50 | |
*** aliguori has quit IRC | 15:51 | |
*** dragondm has joined #openstack | 15:53 | |
pvo | KOL: it can do this. | 15:53 |
pvo | nova can do the proc/ram/disk provisioning | 15:54 |
*** daveiw has left #openstack | 15:55 | |
KOL | pvo awesome | 15:56 |
KOL | thanks | 15:56 |
*** mahadev has joined #openstack | 15:57 | |
*** Oneiropolo has joined #openstack | 15:59 | |
Oneiropolo | hello , | 15:59 |
Oneiropolo | i have one thing to ask | 15:59 |
Oneiropolo | i can't understand exactly the meaning of partition. | 16:00 |
Oneiropolo | if i assume that i have only 1 disk (device) | 16:00 |
Oneiropolo | 1 disk | 16:00 |
Oneiropolo | that means i have 1 device | 16:00 |
Oneiropolo | and i can have 100 partitions (as recommended) | 16:01 |
Oneiropolo | what is the relation between partition and object? | 16:01 |
Oneiropolo | could 1 object = 1 partition be possible? | 16:01 |
Oneiropolo | is there anyone can help? | 16:02 |
annegentle | Oneiropolo: are you asking specifically about the ring and partitions as related to Object Storage (swift)? | 16:03 |
Oneiropolo | yes i am | 16:04 |
notmyname | Oneiropolo: for a very technical description, gholt has a 5 part series on his blog about how the swift ring works (http://tlohg.wordpress.com/) | 16:04 |
Oneiropolo | notmyname: oh Thanks | 16:04 |
Oneiropolo | but at this time .. i wonder about the partition. | 16:05 |
creiht | Oneiropolo: there is also a ring overview here: http://swift.openstack.org/overview_ring.html | 16:06 |
creiht | and http://swift.openstack.org/deployment_guide.html#preparing-the-ring | 16:06 |
annegentle | Oneiropolo, notmyname wow last time I saw that series it was still 2 parts :) | 16:06 |
annegentle | nice job gholt | 16:06 |
Oneiropolo | what is the relation between objects and partitions . | 16:06 |
Oneiropolo | 1 partition has many objects? is that right? | 16:07 |
creiht | yes | 16:07 |
notmyname | yes | 16:07 |
Oneiropolo | so, if i have 100GB disk | 16:07 |
Oneiropolo | and I set 100 partitions on 1 disk (device) | 16:07 |
*** BK_man has joined #openstack | 16:07 | |
Oneiropolo | it means that 1 partition is a 1GB block | 16:07 |
Oneiropolo | is it right? | 16:07 |
notmyname | no | 16:08 |
Oneiropolo | umm... | 16:08 |
notmyname | partitions are a logical grouping, not a quota allocation | 16:08 |
Oneiropolo | oh... | 16:08 |
Oneiropolo | so it's not related to the size of the disk? | 16:08 |
notmyname | well balanced partitions may end up using 1GB in your example, but that's a side-effect | 16:08 |
*** zenmatt has joined #openstack | 16:09 | |
Oneiropolo | it's just a side effect .. | 16:09 |
*** KOL has quit IRC | 16:09 | |
Oneiropolo | and could it be possible that 1 partition = 1 object? | 16:10 |
notmyname | not directly, but the partitions come from the ring power you use when creating the ring and the weights for each node. drive size is a consideration when determining the node weights, so drive size only indirectly affects the number of partitions | 16:10 |
Oneiropolo | okay, i see. | 16:11 |
notmyname | it's possible that a partition can only have 1 object in it, but that would happen with a nearly-empty cluster | 16:11 |
Oneiropolo | ring power and weights. It's a little bit complicated for me. | 16:11 |
notmyname | hm...that above (not directly....) wasn't worded the best, but I hope you get the idea | 16:12 |
Oneiropolo | no it's very helpful. | 16:12 |
Oneiropolo | i appreciate it. | 16:12 |
creiht | another way to think about it is that it is a logical abstraction | 16:12 |
creiht | objects hash to a partition | 16:12 |
creiht | and there is a mapping of partitions to devices in the cluster | 16:13 |
Oneiropolo | so partition is a kind of bucket | 16:13 |
notmyname | creiht: much better said. thanks. :-) | 16:13 |
Oneiropolo | and buckets are mapped onto devices. | 16:14 |
jarrod | is anyone using xen with openstack in production | 16:14 |
jarrod | that is pleased with the experience | 16:14 |
notmyname | bucket is a dangerous word since it's used with S3 and means something completely different there | 16:14 |
Oneiropolo | yes, it is right. | 16:15 |
creiht | I like the idea of a mapping better | 16:15 |
*** aliguori has joined #openstack | 16:15 | |
Oneiropolo | so, a partition is just an abstracted structure for objects | 16:16 |
notmyname | the ring maps the partitions to the drives (storage volumes). when an object is hashed, its hash is used to find the partitions each replica goes to. then the ring mapping determines what storage volume (drive) it lives on | 16:16 |
Oneiropolo | i'm getting it now. | 16:17 |
pvo | jarrod: we're using xenserver and openstack in our dev envs | 16:17 |
pvo | jarrod: pushing very hard to use in prod soon | 16:17 |
jarrod | pvo: you run into any problems when provisioning images over glance? | 16:18 |
notmyname | partitions become a logical abstraction used to balance heterogeneous drives and allow fine tuning in the case of hot spots in the cluster | 16:18 |
pvo | are you using xen.org or xenserver? | 16:18 |
notmyname | rather than mapping objects directly to storage volumes, they are mapped to partitions which are mapped to volumes | 16:18 |
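Roughly how that object-to-partition step works (a sketch: the real ring also mixes a per-cluster hash suffix into the md5, omitted here):

```sh
# Partition = first 4 bytes of md5(object path), shifted right by
# (32 - part_power); with part_power=17 this yields 0..131071.
python -c '
import struct
from hashlib import md5
part_power = 17
path = "/account/container/object"
print struct.unpack_from(">I", md5(path).digest())[0] >> (32 - part_power)
'
```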
pvo | we're using xen server 5.5x and 5.6 | 16:18 |
jarrod | i am using 5.6fp1 | 16:18 |
pvo | jarrod: I haven't. What particular problems are you seeing? | 16:19 |
Oneiropolo | so depending on the size of the disk volume (weights), the number of partitions assigned to a device could be different? | 16:19 |
notmyname | yes | 16:20 |
Oneiropolo | okay. | 16:20 |
Oneiropolo | maybe i got a clue now. | 16:21 |
notmyname | and so you can use the weights to manage different sized drives or even to drain or slowly fill old/new drives | 16:21 |
*** adiantum has joined #openstack | 16:21 | |
Oneiropolo | that means that i can control the device's weights without considering disk size (volume size)? | 16:22 |
notmyname | weights are dimensionless and only matter in relation to one another | 16:22 |
notmyname | but a good start is to use the number of GB on the disk as the weight | 16:23 |
Oneiropolo | i see | 16:23 |
Oneiropolo | to build the ring. | 16:23 |
Oneiropolo | first i need to consider the number of partitions for the whole ring? | 16:24 |
notmyname | the number of partitions is fixed and can't (realistically) be changed after it's deployed | 16:24 |
notmyname | so you choose the ring power based on how big your cluster could ever be | 16:25 |
*** zenmatt has quit IRC | 16:25 | |
Oneiropolo | if I add more nodes to the cluster, I cannot change the number of partitions? | 16:25 |
notmyname | right. the existing partitions will be rebalanced to account for the new node | 16:26 |
Oneiropolo | i see. | 16:26 |
notmyname | for example, if you know that based on budget or DC space or whatever that you can only ever have 100 storage servers with 10 drives each, you will want a total of 100*10*100 partitions | 16:27 |
notmyname | the last "*100" is to have 100 partitions on each drive in a full cluster so you can essentially control 1% of the data on the drive | 16:28 |
*** dfg has joined #openstack | 16:28 | |
notmyname | that gives you 100000 partitions | 16:28 |
Oneiropolo | 100000 = 2^14 | 16:28 |
Oneiropolo | rounded | 16:28 |
notmyname | find the nearest power of 2 that is bigger than that number and use that for the ring power | 16:28 |
notmyname | ya | 16:28 |
*** zenmatt has joined #openstack | 16:28 | |
Oneiropolo | so power is | 16:28 |
Oneiropolo | 14 | 16:28 |
Oneiropolo | but at start | 16:29 |
notmyname | no, the power would be 20, I think | 16:29 |
Oneiropolo | i just only have 20 servers | 16:29 |
Oneiropolo | but i have to use the number of partitions | 16:29 |
Oneiropolo | 2^20 ? | 16:29 |
notmyname | nm. off by a zero there | 16:30 |
*** GasbaKid has quit IRC | 16:30 | |
notmyname | use 17 | 16:30 |
Oneiropolo | yes, you're right :) | 16:30 |
notmyname | 2**17 is the smallest power of 2 greater than 100000 | 16:30 |
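The arithmetic from this exchange, checked: 100 servers x 10 drives x 100 partitions per drive = 100,000, and the smallest power of 2 above that is 2^17 = 131,072.

```sh
# ring power = ceil(log2(servers * drives * parts_per_drive))
python -c "import math; print int(math.ceil(math.log(100*10*100, 2)))"  # 17
python -c "print 2**17"  # 131072
```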
Oneiropolo | okay | 16:30 |
Oneiropolo | and if I want to scale out the cluster in the future | 16:31 |
Oneiropolo | i have to migrate the whole data to the new cluster | 16:31 |
*** darea has quit IRC | 16:32 | |
*** rcc has joined #openstack | 16:32 | |
notmyname | in this example, if you had more than 1000 storage volumes you could use the same cluster but you would have less control over the balance of the data in the cluster | 16:32 |
notmyname | the "ideal" answer is to stand up a new cluster at that point | 16:32 |
Oneiropolo | okay, i get it. | 16:33 |
Oneiropolo | but at this time. | 16:33 |
Oneiropolo | I just have only 200 storage volumes | 16:33 |
Oneiropolo | but i have to use the number of partitions (2^17) | 16:33 |
Oneiropolo | based on partition power | 16:34 |
notmyname | when choosing the ring power, don't worry about what you have now, worry about how many volumes you will have in a full cluster | 16:34 |
*** kashyap has quit IRC | 16:34 | |
Oneiropolo | doesn't that cause performance issues or problems? | 16:34 |
notmyname | full can be defined as "out of physical space", "out of network ports", "out of money", etc | 16:34 |
Oneiropolo | too many partitions on a small number of storage volumes | 16:35 |
notmyname | larger powers will make ring generation slower, but won't affect ring lookups | 16:35 |
Oneiropolo | what about the disk performance? | 16:35 |
*** enigma1 has joined #openstack | 16:36 | |
Oneiropolo | i think many partitions on one disk volume could be a problem. | 16:36 |
notmyname | it means there will be more overhead for fs metadata (more inodes storing directories, etc), but that should be a small percentage of the "real" data in your cluster | 16:37 |
notmyname | so don't say "I'm going to have a 100PB cluster!" when realistically it's going to start much smaller and never be that big | 16:38 |
Oneiropolo | i see. | 16:38 |
Oneiropolo | i learned many things now. | 16:38 |
*** kazu_ has quit IRC | 16:38 | |
Oneiropolo | thank you so much about your help | 16:40 |
jarrod | when using KVM, does openstack create LVM volumes for each instance? | 16:40 |
*** guigui has quit IRC | 16:40 | |
Oneiropolo | notmyname, it was very helpful. appreciate it. | 16:40 |
notmyname | sure. hope it works out for you | 16:40 |
Oneiropolo | maybe i need to ask you more questions in the future. :) | 16:41 |
notmyname | if I'm not here, other swift devs and users can help too ;-) | 16:42 |
berendt | jarrod: if you use nova-volume with the default configuration and you attach volumes to your instances: yes | 16:42 |
jarrod | not attached volumes | 16:43 |
jarrod | the volumes used for the actual instance | 16:43 |
*** bcwaldon has quit IRC | 16:44 | |
Oneiropolo | ya, but i hope that you would be here :) | 16:44 |
*** maplebed has joined #openstack | 16:45 | |
*** bcwaldon has joined #openstack | 16:48 | |
*** kashyap has joined #openstack | 16:50 | |
*** jmckenty has quit IRC | 16:53 | |
*** imsplitbit has quit IRC | 16:57 | |
*** Oneiropolo has quit IRC | 16:58 | |
*** ovidwu_ has joined #openstack | 16:59 | |
*** Pentheus has quit IRC | 17:00 | |
sirp- | nova-core: https://code.launchpad.net/~rconradharris/nova/xs-unified-images/+merge/50102 is pretty well wrapped up, could one more core-dev take a peek at that, and if all is well, throw an Approve on there :) | 17:01 |
*** mdomsch has joined #openstack | 17:02 | |
*** bcwaldon has quit IRC | 17:04 | |
*** bcwaldon has joined #openstack | 17:04 | |
*** et_ has joined #openstack | 17:05 | |
*** mahadev has quit IRC | 17:09 | |
*** mahadev has joined #openstack | 17:10 | |
*** mahadev has quit IRC | 17:12 | |
*** mdomsch has quit IRC | 17:17 | |
*** bcwaldon has quit IRC | 17:18 | |
*** skiold has quit IRC | 17:22 | |
*** f2f has joined #openstack | 17:22 | |
f2f | hi | 17:22 |
f2f | is this a good place to ask a few questions about the openstack architecture? | 17:23 |
jarrod | yes | 17:23 |
jarrod | may or may not get answers | 17:23 |
jarrod | depending on how people like your ? | 17:23 |
jarrod | but, yes heh | 17:23 |
f2f | my primary concern is with the filesystems OpenStack uses: how is the ObjectVault configured (what filesystem lies underneath), and in a standard config, what FS should be configured on the physical nodes running the VMs? | 17:25 |
f2f | any special requirements there? | 17:25 |
f2f | also, how is the data transferred from the object storage to the compute node? | 17:26 |
f2f | is it all http transfers to/from the object store? | 17:27 |
jarrod | yes | 17:28 |
jarrod | via the restful interface | 17:28 |
jarrod | or direct http access | 17:29 |
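For example, a single object fetch over that interface looks roughly like this (token, host, and names are placeholders):

```sh
# Download one object over Swift's HTTP API (illustrative values)
curl -H "X-Auth-Token: AUTH_tk<token>" \
    http://<proxy>:8080/v1/AUTH_<account>/<container>/<object>
```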
f2f | so it doesn't really matter what the underlying filesystem is? that is not shared? | 17:29 |
jarrod | im pretty sure you can specify what you want | 17:29 |
f2f | what are the options? | 17:31 |
*** pvo has quit IRC | 17:32 | |
*** KenD has joined #openstack | 17:35 | |
f2f | thanks for your help! | 17:35 |
*** pvo has joined #openstack | 17:36 | |
*** dfg has quit IRC | 17:36 | |
*** acmurthy1 has quit IRC | 17:37 | |
*** hazmat has quit IRC | 17:39 | |
*** hazmat has joined #openstack | 17:39 | |
*** cjb1 has joined #openstack | 17:40 | |
cjb1 | hey there everyone... | 17:41 |
cjb1 | I've been trying to get openstack to work wtih Xen (under centos) and I think I've found my issue | 17:41 |
cjb1 | seems as though Xen doesn't support qcow2 images? | 17:42 |
cjb1 | has anyone else gotten Xen (PV) to work with qcow2 images? | 17:42 |
*** RichiH has quit IRC | 17:44 | |
cjb1 | I'm beginning to wonder if this is why everyone is moving to KVM in the "cloud"? | 17:44 |
f2f | isn't qcow2 primarily a kvm/qemu format? | 17:44 |
cjb1 | yes, but it's also something that is *supposed* to be supported by Xen | 17:45 |
cjb1 | and that's how openstack has put Xen support in (using qcow2 images) by default | 17:45 |
cjb1 | so someone MUST have gotten it to work, right? | 17:45 |
cjb1 | :) | 17:45 |
*** RichiH has joined #openstack | 17:45 | |
*** rlucio has joined #openstack | 17:46 | |
cjb1 | granted the support for qcow2 images isn't until 4.0.1, but qcow is supported in 3.0.3 (which is what centos ships with) | 17:46 |
cjb1 | and I tried that as well, manually, and it also doesn't work | 17:46 |
f2f | i've tried to do it the other direction -- convert a xen image (raw img file) to qcow2 using qemu-img but was unsuccessful | 17:46 |
jaypipes | sirp-: around? | 17:47 |
*** asksol_ is now known as asksol | 17:47 | |
*** dendrobates is now known as dendro-afk | 17:49 | |
*** markwash has joined #openstack | 17:50 | |
BK_man | BTW, RHEL port of Bexar now supports qcow2 images - without NBD! | 17:51 |
jaypipes | mtaylor: around? | 17:51 |
BK_man | grab it here: http://yum.griddynamics.net/ | 17:51 |
jaypipes | BK_man: nice :) | 17:51 |
cjb1 | awesome, thanks, will check it out! | 17:51 |
jaypipes | BK_man: pls let annegentle know so she can update the docs... | 17:52 |
BK_man | jaypipes: we are using libguestfs instead of nbd - more modern lib | 17:52 |
*** rcc has quit IRC | 17:52 | |
cjb1 | bk, are these built against centos 5.5 or some such flavor? | 17:52 |
BK_man | for anybody considering RHEL build: here are install instructions: http://wiki.openstack.org/NovaInstall/RHEL6Notes | 17:53 |
*** bcwaldon has joined #openstack | 17:53 | |
BK_man | cjb1: this is RHEL6.0 x86_64 RPMs | 17:53 |
jaypipes | BK_man: excellent, thx. annegentle please check out the docs above for porting to docs.openstack.org? | 17:53 |
*** bcwaldon has quit IRC | 17:57 | |
mtaylor | jaypipes: no. I'm not here | 18:03 |
jaypipes | mtaylor: ok, never mind then :) | 18:03 |
mtaylor | jaypipes: whazzup? | 18:04 |
cjb1 | hmm, not excited about moving to RHEL6 | 18:05 |
*** f2f has quit IRC | 18:08 | |
jaypipes | mtaylor: was wondering if we could talk about the Tarmac script on Hudson for Glance and Nova? | 18:09 |
jaypipes | mtaylor: I'm hoping that we could change the process to this: when Tarmac notices a merge proposal, it pulls the branch automatically, runs all tests and if they all pass, then make a comment on the merge proposal of "Tarmac Testing SUCCESS", and if the tests don't pass, have Tarmac make a comment of "Tarmac Tests FAILED", with *only* the tests that failed displayed in the comment, and set the merge prop to Work In Progress automatically? | 18:09 |
*** acmurthy has joined #openstack | 18:10 | |
jaypipes | ironcamel: did you see that your branch here: https://code.launchpad.net/~ironcamel/nova/openstack-api-hostid/+merge/50200 has a merge conflict. You need to merge that branch with the current trunk and resolve the conflict there... let me know if you need any assistance. | 18:10 |
jaypipes | mtaylor: we could test a change to Tarmac on Glance first, since it's minimal review volume...then propose the change on the ML for Nova? | 18:11 |
jaypipes | mtaylor: if you tell me where I can find the Tarmac script, I can do it myself... just not familiar with where that code is... | 18:12 |
mtaylor | jaypipes: well.... it's more than just a script | 18:13 |
mtaylor | jaypipes: you can grab it from lp:tarmac | 18:13 |
jaypipes | mtaylor: ok. what are your thoughts about the above sugestion? | 18:13 |
mtaylor | jaypipes: _well_ ... I'm not sure what the problem we're trying to solve is here? | 18:14 |
jaypipes | mtaylor: trying to enhance the review process to have Tarmac pull and run tests automatically *before* any reviewers need to comment... and to have Tarmac only show the FAILED test output, instead of the current behaviour of showing a giant list of successful test output mixed with the fails... | 18:15 |
mtaylor | jaypipes: oh - so, that's on my list of things to do for the tarmac jenkins rewrite | 18:17 |
jaypipes | mtaylor: blueprint or bug link for me? | 18:18 |
mtaylor | jaypipes: but we wind up needing launchpad merge queues for it to work sensibly | 18:20 |
mtaylor | jaypipes: uh - no, but I have an email write-up that I can send you | 18:21 |
jaypipes | mtaylor: please, yes :) | 18:21 |
*** jlmjlm has quit IRC | 18:22 | |
mtaylor | jaypipes: we wind up in a place where we need java hacking, just to warn you | 18:22 |
jaypipes | mtaylor: hmm. ok. | 18:23 |
*** burris has quit IRC | 18:24 | |
*** markwash has quit IRC | 18:25 | |
*** burris has joined #openstack | 18:25 | |
*** mahadev has joined #openstack | 18:28 | |
*** dendro-afk is now known as dendrobates | 18:29 | |
*** rlucio has quit IRC | 18:33 | |
*** btorch_ is now known as btorch | 18:34 | |
*** blpiatt has quit IRC | 18:35 | |
*** jaypipes has quit IRC | 18:35 | |
*** rlucio_ has joined #openstack | 18:36 | |
*** tr3buchet has quit IRC | 18:40 | |
*** tr3buchet has joined #openstack | 18:42 | |
*** daveiw has joined #openstack | 18:52 | |
*** littleidea has quit IRC | 18:54 | |
*** zenmatt has quit IRC | 18:54 | |
*** littleidea has joined #openstack | 18:54 | |
*** adiantum has quit IRC | 18:54 | |
*** littleidea has quit IRC | 18:57 | |
*** littleidea has joined #openstack | 18:57 | |
*** mahadev has quit IRC | 18:58 | |
*** acmurthy has quit IRC | 18:59 | |
*** rlucio_ has quit IRC | 19:00 | |
*** bcwaldon has joined #openstack | 19:00 | |
*** mahadev has joined #openstack | 19:00 | |
*** rlucio_ has joined #openstack | 19:02 | |
openstackhudson | Project nova build #579: SUCCESS in 1 min 57 sec: http://hudson.openstack.org/job/nova/579/ | 19:03 |
openstackhudson | Tarmac: I'm working on consolidating install instructions specifically (they're the most asked-about right now) and pointing to the docs.openstack.org site for admin docs. | 19:03 |
*** littleidea has quit IRC | 19:03 | |
uvirtbot | New bug: #725176 in nova "bin/nova-ajax-console-proxy: error while trying to get URLs without a QUERY_STRING" [Undecided,New] https://launchpad.net/bugs/725176 | 19:06 |
* annegentle does a happy docs consolidation dance | 19:08 | |
*** markwash has joined #openstack | 19:10 | |
*** j05h has quit IRC | 19:15 | |
*** citral has joined #openstack | 19:20 | |
*** j05h has joined #openstack | 19:20 | |
*** gregp76 has joined #openstack | 19:23 | |
*** mdomsch has joined #openstack | 19:33 | |
*** dragondm has quit IRC | 19:37 | |
*** bcwaldon has quit IRC | 19:37 | |
berendt | annegentle: thumbs up :) | 19:38 |
*** MarkAtwood has joined #openstack | 19:39 | |
*** drico has joined #openstack | 19:40 | |
annegentle | berendt: baby steps, but thanks :) | 19:41 |
*** dendrobates is now known as dendro-afk | 19:53 | |
*** berendt has quit IRC | 19:53 | |
*** reldan has joined #openstack | 19:55 | |
openstackhudson | Project nova build #580: SUCCESS in 1 min 43 sec: http://hudson.openstack.org/job/nova/580/ | 19:57 |
openstackhudson | Tarmac: Add tests for 718999, fix a little brittle code introduced by the committed fix. | 19:57 |
openstackhudson | Also fix and test for a 500 if the auth token doesn't exist in the database. | 19:57 |
vishy | BK_man: awesome! How does it work without nbd? | 20:00 |
*** kang_ has joined #openstack | 20:01 | |
vishy | mtaylor, jaypipes: if we run the unittests using --with-xunit, jenkins can parse the xml output and show failing tests | 20:01 |
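vishy's point, concretely: nose's xunit plugin writes a JUnit-style XML report that Hudson/Jenkins can pick up with its standard test-result publisher. A minimal sketch (the output filename is just a convention):

```python
# Equivalent to running: nosetests --with-xunit --xunit-file=nosetests.xml
import nose

nose.run(argv=["nosetests", "--with-xunit",
               "--xunit-file=nosetests.xml"])
```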
kang_ | Does libvirt_type=kvm support instance snapshots? | 20:01 |
vishy | kang_: not yet, although the naive version should be pretty easy to add | 20:03 |
*** mdomsch has quit IRC | 20:03 | |
vishy | kang_: snapshots generally refer to two different things | 20:04 |
kang_ | I am looking at doing something regarding savevm | 20:04 |
kang_ | I just want to save a copy of my instance for either backup, or replication | 20:04 |
vishy | one is snapshotting an individual vm for restore | 20:04 |
vishy | one is backup to launch later | 20:04 |
*** omidhdl has quit IRC | 20:04 | |
vishy | the first is super easy with qcow2 images | 20:05 |
kang_ | i would think one snapshot would accomplish the same thing | 20:05 |
vishy | kang_: qcow2 snapshots are internal to the file. It is super fast | 20:05 |
kang_ | oh yes, i forgot that it stores it in the same image | 20:05 |
vishy | kang_: backing that up into an external service so it can be relaunched is a little tougher | 20:06 |
vishy | kang_: we were discussing backing up the entire cow image, but that could get unwieldy if there are a lot of snapshots | 20:06 |
*** blpiatt has joined #openstack | 20:07 | |
vishy | kang_: so the backup version would probably have to mount the whole system and dd it into a new file | 20:07 |
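A sketch of the two operations being discussed, using qemu-img: a fast internal qcow2 snapshot for restore, and a flattened standalone copy for backup/relaunch. The flatten step uses `qemu-img convert` as a stand-in for the mount-and-dd approach vishy mentions; the paths and snapshot name are illustrative.

```python
import subprocess

def snapshot_in_place(image, name):
    # Internal qcow2 snapshot: stored inside the same image file.
    subprocess.check_call(["qemu-img", "snapshot", "-c", name, image])

def flatten_for_backup(image, backup):
    # Collapse the image (including any backing file) into one raw file
    # that can be uploaded to an external service and relaunched later.
    subprocess.check_call(["qemu-img", "convert", "-O", "raw",
                           image, backup])

snapshot_in_place("instance-0001.qcow2", "pre-upgrade")
flatten_for_backup("instance-0001.qcow2", "instance-0001-backup.img")
```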
*** h0cin has quit IRC | 20:08 | |
*** jaypipes has joined #openstack | 20:08 | |
kang_ | i may try to write that | 20:10 |
*** bcwaldon has joined #openstack | 20:10 | |
kang_ | i see the mechanisms in place and the space where it should be implemented in the code | 20:10 |
*** markwash has quit IRC | 20:11 | |
*** mdomsch has joined #openstack | 20:11 | |
*** markwash has joined #openstack | 20:12 | |
markwash | anybody here want to put themselves forward as a wsgi expert? | 20:16 |
uvirtbot | New bug: #725210 in swift "internal proxy needs to handle retries better" [Undecided,New] https://launchpad.net/bugs/725210 | 20:16 |
*** patcoll has joined #openstack | 20:17 | |
*** patcoll has left #openstack | 20:17 | |
*** bcwaldon has quit IRC | 20:18 | |
creiht | markwash: what's the question? | 20:19 |
annegentle | markwash: there's a new NovaCore wiki page at http://wiki.openstack.org/NovaCore where people list some of their areas of expertise, too | 20:20 |
uvirtbot | New bug: #725215 in swift "swift.stats.log_processor.run_once() needs to be refactored and tested" [Undecided,New] https://launchpad.net/bugs/725215 | 20:21 |
*** bcwaldon has joined #openstack | 20:22 | |
uvirtbot | New bug: #725219 in swift "log_processor can reprocess files if processed_files.pickle.gz can't be found" [Undecided,New] https://launchpad.net/bugs/725219 | 20:22 |
markwash | annegentle: thanks | 20:23 |
markwash | creiht: I'm just wondering about the appropriate scope of middleware, esp. in nova | 20:23 |
creiht | ahh... not sure that I can answer for nova | 20:24 |
markwash | I can move my question over to #wsgi too | 20:24 |
creiht | of course you can just ask, and see if someone answers :) | 20:26 |
*** mahadev has quit IRC | 20:31 | |
*** bcwaldon has quit IRC | 20:32 | |
*** markwash has quit IRC | 20:33 | |
*** clauden_ has joined #openstack | 20:35 | |
*** bcwaldon has joined #openstack | 20:37 | |
*** DIgitalFlux has joined #openstack | 20:38 | |
*** mdomsch has quit IRC | 20:39 | |
*** zenmatt has joined #openstack | 20:40 | |
*** dragondm has joined #openstack | 20:40 | |
*** zul has quit IRC | 20:40 | |
*** dragondm has quit IRC | 20:41 | |
*** dragondm has joined #openstack | 20:41 | |
*** DIgitalFlux has quit IRC | 20:41 | |
jaypipes | sirp-: around? | 20:42 |
*** bcwaldon has quit IRC | 20:43 | |
*** MarkAtwood has quit IRC | 20:45 | |
*** Nacx has quit IRC | 20:47 | |
*** mdomsch has joined #openstack | 20:50 | |
*** Pentheus has joined #openstack | 20:51 | |
*** rlucio_ has quit IRC | 20:53 | |
*** dendro-afk is now known as dendrobates | 20:54 | |
vishy | hey guys | 20:56 |
*** littleidea has joined #openstack | 20:56 | |
vishy | i added a crosstable to the NovaCore page, i thought it might be easier http://wiki.openstack.org/NovaCore | 20:56 |
jk0 | yeah, I like that better | 20:58 |
eday | shoud we just remove the first table then? | 21:05 |
eday | seems redundant | 21:05 |
*** MarkAtwood has joined #openstack | 21:06 | |
*** mahadev has joined #openstack | 21:07 | |
*** Pentheus has quit IRC | 21:07 | |
*** Pentheus has joined #openstack | 21:09 | |
annegentle | vishy: how did you edit that? In plain text it would be a bear to keep the column alignment straight, wouldn't it? | 21:10 |
jarrod | dang vish | 21:12 |
jarrod | you are busy | 21:12 |
jarrod | you and eric day | 21:12 |
eday | annegentle: gui edit | 21:13 |
eday | I added an 'auth' column too, in case you're an auth expert | 21:13 |
annegentle | okay, gui edit has given me fits. and I do mean FITS! :) | 21:13 |
annegentle | eday: but if it works, cool | 21:14 |
eday | annegentle: this is the first I've used it, seems to work :) | 21:15 |
eday | I just added another 'zones' column, for the multi-zone work too | 21:15 |
*** aliguori has quit IRC | 21:15 | |
tr3buchet | there is no longer output from the binaries such as bin/nova-compute | 21:19 |
tr3buchet | is there a way to get this output back? | 21:19 |
tr3buchet | i'm running with --nodaemon | 21:20 |
sirp- | jaypipes: ping | 21:24 |
jaypipes | sirp-: hey, got time for a chat? | 21:25 |
sirp- | jaypipes: sure, skype? | 21:25 |
mtaylor | vishy: yes | 21:26 |
jaypipes | sirp-: ya, starting up... | 21:26 |
mtaylor | vishy: except also we're using tarmac to run the unittests, which means they're happening in a tmpdir that jenkins doesn't know about | 21:26 |
mtaylor | vishy, jaypipes: this is the reason we need tighter integration ... | 21:27 |
* mtaylor is talking with jaypipes about the needs we have to take this to the next level | 21:27 | |
openstackhudson | Project nova build #581: SUCCESS in 1 min 47 sec: http://hudson.openstack.org/job/nova/581/ | 21:27 |
openstackhudson | Tarmac: Fixes FlatDHCP by making it inherit from NetworkManager and moving some methods around. | 21:27 |
Vek | vishy: Re 715618: I proposed saving the last exception and adding it to the error message, rather than including the stack trace; I still think the trace is too much noise, and the information you're really interested in will likely be in that last exception. That sound reasonable to you? (Alternatively, add the exception message to the "is unreachable" messages in the exception handler...) | 21:29 |
Vek | (715618 is the "cannot reach AMQP" stack trace bug) | 21:29 |
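A sketch of the approach Vek proposes for bug 715618: retry the AMQP connection, remember only the last exception, and report it in a one-line error instead of a full stack trace. The `connect` callable, retry count, and delay are illustrative assumptions, not the actual fix.

```python
import time

def connect_with_retries(connect, retries=5, delay=1):
    last_exc = None
    for _ in range(retries):
        try:
            return connect()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    # Surface the last failure concisely: "dude, I can't connect" plus why.
    raise RuntimeError("AMQP server is unreachable: %s: %s"
                       % (type(last_exc).__name__, last_exc))
```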
*** brd_from_italy has joined #openstack | 21:32 | |
jaypipes | mtaylor: I think just having Tarmac only show the failing errors and/or pep8 failures instead of the entire test run output would be a fantastic first step. Are you saying that that would require Java? | 21:46 |
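One way to get "failures only" without Java, sketched here as an assumption rather than the eventual Tarmac/Jenkins design: parse the xunit XML report from the test run above and extract just the failing cases.

```python
import xml.etree.ElementTree as ET

def failed_tests(path="nosetests.xml"):
    # xunit reports list <testcase> elements; failing ones carry a
    # <failure> or <error> child with a message attribute.
    root = ET.parse(path).getroot()
    for case in root.iter("testcase"):
        problem = case.find("failure")
        if problem is None:
            problem = case.find("error")
        if problem is not None:
            name = "%s.%s" % (case.get("classname"), case.get("name"))
            yield name, (problem.get("message") or "").strip()

for name, message in failed_tests():
    print("FAILED: %s -- %s" % (name, message))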
*** aliguori has joined #openstack | 21:47 | |
*** zenmatt has quit IRC | 21:48 | |
mtaylor | jaypipes: mostly | 21:49 |
jaypipes | mtaylor: fail. | 21:50 |
mtaylor | jaypipes: sorry - I will send you that email and expand on what we need to get there | 21:50 |
mtaylor | jaypipes: I know EXACTLY what you guys want... there are various ways to get there | 21:50 |
jaypipes | mtaylor: cheers | 21:51 |
mtaylor | jaypipes: running out right now - I will get it to you by tomorrow | 21:51 |
uvirtbot | New bug: #725281 in glance "No way to remove a custom image property" [Low,Confirmed] https://launchpad.net/bugs/725281 | 21:51 |
jaypipes | mtaylor: no worries, thx man | 21:51 |
*** mdomsch has quit IRC | 21:53 | |
openstackhudson | Project nova build #582: SUCCESS in 1 min 45 sec: http://hudson.openstack.org/job/nova/582/ | 21:58 |
openstackhudson | Tarmac: check if QUERY_STRING is empty or not before building the request URL in bin/nova-ajax-console-proxy | 21:58 |
*** rlucio has joined #openstack | 22:01 | |
*** ctennis has quit IRC | 22:08 | |
*** mdomsch has joined #openstack | 22:09 | |
*** mdomsch has quit IRC | 22:15 | |
*** johnpur has quit IRC | 22:17 | |
*** vvuksan has quit IRC | 22:26 | |
*** ctennis has joined #openstack | 22:31 | |
*** ctennis has joined #openstack | 22:31 | |
*** MarkAtwood has quit IRC | 22:36 | |
*** raygtrejo has left #openstack | 22:40 | |
*** trbs2 has joined #openstack | 22:44 | |
*** cjb1 has left #openstack | 22:47 | |
*** tr3buchet has quit IRC | 22:51 | |
*** tr3buchet has joined #openstack | 22:51 | |
*** MarkAtwood has joined #openstack | 22:57 | |
*** blpiatt has quit IRC | 22:59 | |
*** dinnerjacket has joined #openstack | 22:59 | |
dinnerjacket | hey guys, quick question: cloud-init on my instances is receiving this when it hits the metadata service: {"versions": [{"status": "CURRENT", "id": "v1.0"}]} | 23:01 |
*** mgoldmann has quit IRC | 23:01 | |
dinnerjacket | the instance then breaks bad... I assume it's supposed to translate that to xml before sending? | 23:01 |
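A quick check of what the instance is actually being served: the EC2-style metadata cloud-init expects is plain text under /latest/meta-data/, so a JSON version list like the one above suggests the request is landing on an API root rather than the metadata handler. 169.254.169.254 is the standard metadata address; the snippet below (Python 2, matching the era) is just a diagnostic sketch.

```python
import urllib2

for path in ("/", "/latest/meta-data/"):
    url = "http://169.254.169.254" + path
    try:
        body = urllib2.urlopen(url, timeout=5).read()
        print("%s -> %r" % (url, body[:200]))
    except Exception as exc:
        print("%s -> error: %s" % (url, exc))
```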
uvirtbot | New bug: #725328 in nova "removing of iSCSI volumes failed because "Device or resource busy."" [Undecided,New] https://launchpad.net/bugs/725328 | 23:06 |
*** enigma1 has left #openstack | 23:08 | |
*** clauden_ has quit IRC | 23:14 | |
*** brd_from_italy has quit IRC | 23:14 | |
*** rlucio has quit IRC | 23:15 | |
*** gondoi has quit IRC | 23:19 | |
vishy | tr3buchet: --nodaemon doesn't exist. do you have a --logdir in your flagfile? If you remove it, it will output normally | 23:26 |
vishy | Vek: it is only one stack trace when the binary crashes completely; that doesn't seem too noisy to me | 23:27 |
*** dinnerjacket has quit IRC | 23:32 | |
tr3buchet | ah ok, thanks vishy, wasn't aware the --logdir flag suppressed console output | 23:35 |
tr3buchet | networks aren't getting assigned project IDs for some reason, any ideas? | 23:36 |
vishy | tr3buchet, in vlan? | 23:36 |
vishy | networks are only assigned to projects in vlan mode | 23:36 |
*** trbs2 has quit IRC | 23:37 | |
*** et_ has quit IRC | 23:38 | |
*** pvo has quit IRC | 23:39 | |
dragondm | question: what's up w/ the Mac OS X binaries that were checked into nova trunk under /test/bin/? Was that intentional? | 23:40 |
Vek | except that the binary isn't crashing. There isn't even really an exception at that point! The only problem is that it can't connect to the AMQP server. | 23:40 |
Vek | that seems to me to be common enough that you just want it to tell you, "dude, I can't connect" | 23:41 |
Vek | stack traces should be for when the programmer screwed up | 23:41 |
tr3buchet | thanks again vishy | 23:43 |
vishy | Vek: ok it is reasonable to just print the exception type i suppose | 23:45 |
vishy | Vek: I wonder if we should change the exception handler in general to only print the stack trace if --verbose is specified | 23:46 |
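A sketch of the handler vishy is suggesting: one concise line by default, with the full stack trace only under verbose logging. The flag plumbing here is illustrative, not nova's actual flags code.

```python
import logging
import sys

def run_guarded(main, verbose=False):
    try:
        main()
    except Exception as exc:
        if verbose:
            logging.exception("Unhandled error")  # message plus traceback
        else:
            logging.error("Unhandled error: %s: %s",
                          type(exc).__name__, exc)  # no traceback noise
        sys.exit(1)
```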
*** vvuksan has joined #openstack | 23:47 | |
Vek | Perhaps, but that's probably a little out of scope for what I'm doing :) | 23:47 |
Vek | I can look into that for the future, though. | 23:48 |
vishy | sure | 23:48 |
vishy | :) | 23:48 |
*** dirakx is now known as dirakx_afk | 23:50 | |
*** MarkAtwood has quit IRC | 23:54 |