Friday, 2011-02-25

*** enigma has quit IRC00:01
*** markwash has quit IRC00:02
*** kazu has joined #openstack00:05
*** bcwaldon_ has joined #openstack00:06
*** dprince has quit IRC00:06
gholtdevcamcar: You're using swauth right? I think you need to curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: mypass' http://(host):8080/auth/v1.0 00:07
gholtSince swauth actually runs within the proxy software itself, you need the /auth/ to route the request properly as an auth request rather than a standard storage request.00:07
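For context: swauth-in-the-proxy means the auth filter sits in the proxy pipeline itself. A minimal sketch of the relevant proxy-server.conf sections, assuming the swauth filter that shipped with Swift in this era (the default_cluster_url option name is taken from this conversation, not a verified config; the URL is a placeholder):

    [pipeline:main]
    pipeline = healthcheck cache swauth proxy-server

    [filter:swauth]
    use = egg:swift#swauth
    super_admin_key = swauthkey
    default_cluster_url = http://127.0.0.1:8080/v1

With this layout, anything under /auth/ is answered by the swauth filter and everything else falls through to the proxy's normal storage handling.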
*** bcwaldon has quit IRC00:08
devcamcargholt: yea, i munged that part, with /auth/v1.0 i get a 500:00:09
devcamcarFeb 25 00:07:56 (host) Exception: Could not obtain services info: /v1/AUTH_.auth/system/.services 404 Not Found00:09
gholtBah, it never ends, eh? :)00:09
devcamcarhaha yea00:09
gholtYou created an account named 'system' before?00:09
devcamcargholt: system:root00:10
devcamcargholt: actually, hrm00:10
*** lamar has quit IRC00:10
devcamcari only did swauth-add-user, i thought that would create the account00:10
*** bcwaldon_ has quit IRC00:11
gholtTry swauth-add-user -K <key> system root <password> again and see what happens00:11
devcamcardo i need to explicitly do swauth-add-account as well?00:11
devcamcarok00:11
gholtNah, you shouldn't00:11
devcamcargholt: on 2nd run, it returned without error, just like the first time00:11
gholtI wonder if something was broken when you made the account at first... Can you try making a different user?00:15
gholtIn a different account00:15
*** johnpur has quit IRC00:16
devcamcargholt: i haven't changed anything configuration wise since i made the account, but i can try with a different one00:16
devcamcargholt: i'm seeing tons of these in logs: http://paste.openstack.org/show/769/00:16
*** kazu has quit IRC00:16
gholtWell, shoot, actually that doesn't make much sense either (looking at the code, which specifically does things in a certain order so that half-made accounts don't cause problems when remade)00:16
uvirtbotNew bug: #724654 in glance "Image Type is Erroneously Required" [Undecided,New] https://launchpad.net/bugs/72465400:16
gholtdevcamcar: That sounds like you're running an incorrect version of rsync, but I'm fuzzy on that part of Swift, redbo or chuck would know better there. That shouldn't affect the account problems you're having though.00:18
devcamcargholt: second account works00:18
devcamcarweird!00:19
gholtGrr. Must've missed a race condition somewhere then on the first.00:19
devcamcargholt: though i have tried creating system:root user before, maybe it had something half way done when i got it going the second time00:19
devcamcargholt: whats best way to nuke that account so i can recreate it?00:19
gholtCan you do a st -A http://127.0.0.1:8080/auth/v1.0 -U .super_admin:.super_admin -K swauthkey list system ?00:20
gholtI'm curious what's in there. :)00:20
devcamcarsure, sec00:21
devcamcargholt: 401 unauthorized, weird00:22
devcamcaroh i see why00:22
*** joearnold has quit IRC00:22
devcamcarsec it forwarded that through my load balancer, which doesn't actually exist yet :)00:22
gholtAh, hehe00:23
devcamcargholt: when does it use the public and private urls defined in default_cluster_url?00:23
devcamcargholt: for now i should probably leave everything just on the private while i get this worked out00:23
gholtThe public one is given to users of the system, the private one is used by swauth itself to make accounts, users, etc.00:23
gholtYeah, you can see what an account has with swauth-list -K swauthkey account00:24
gholtAnd you can update it with swauth-set-account-service00:25
gholtBut we're getting into advanced stuff when the (supposedly) easier stuff is acting funny, hehe.00:25
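The two admin tools gholt names, roughly as they would be invoked (the super-admin key and account name come from this conversation; the storage URL argument is a placeholder, and argument order is per the swauth tools of this era):

    swauth-list -K swauthkey                 # list all accounts
    swauth-list -K swauthkey system          # list users and services in 'system'
    swauth-set-account-service -K swauthkey system storage local \
        http://<private-host>:8080/v1/AUTH_<account-uuid>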
*** zul has joined #openstack00:31
devcamcargholt: cool, just fixed my load balancers, i'll see whats going on now00:32
*** kashyap has quit IRC00:33
*** troytoman is now known as troytoman-away00:34
devcamcargholt: so just noticed, when i do st list system i am actually getting a few things before it dies00:35
devcamcargholt: st blah blah blah list system00:35
devcamcar.services00:35
devcamcarroot00:35
gholtOkay, with just one user that's what it should look like.00:35
gholtOr does it display some error too?00:35
devcamcar__main__.ClientException: Container GET failed: http://load-balancer-host:80/v1/AUTH_.auth/system?format=json&marker=root 401 Unauthorized00:35
devcamcaryea dumps a stack trace after that00:35
gholtHmm. How many proxy servers do you have? I wonder if one is acting funny00:36
devcamcarits consistent00:37
devcamcaron either system or test account00:37
gholtAnd I guess double check the configs for them and make sure they're the same (minus maybe the allow_account_management thing)00:37
*** maple_bed has joined #openstack00:38
devcamcargholt: so our setup is going to look like: a cluster of load balancers running pound and then all services running on the storage nodes00:38
devcamcarso proxy is running everywhere and we're going to do ssl termination with pound00:39
devcamcarso right now all the proxies have allow_account_management set to true00:39
devcamcarthey're all the same00:39
gholtAh I see. It just seems like one is having trouble validating you is why I ask, but others aren't00:40
gholtBut the super_admin_key is in the proxy-server.conf, so, that isn't making sense to me, hehe00:40
*** maplebed has quit IRC00:41
devcamcargholt: seems to be consistent, i think maybe my load balancers are still messed up, i'm respushing new proxy-conf with private only dns now00:42
*** Ryan_Lane has joined #openstack00:45
*** kashyap has joined #openstack00:50
gholtdevcamcar: To answer your earlier question, you should be able to swauth-delete-user system root and then swauth-delete-account system to get rid of the account to be able to recreate it fresh. That failing, you can st delete system similar to the st list system you did.00:50
gholtBut that's all "should" since st list etc. are 401ing on you sometimes and sometimes not. :/00:51
devcamcargholt: works now that i took my load balancers out of the equation00:51
*** Ryan_Lane has quit IRC00:51
gholtAh, well, at least that helps a bit to figure it all out. :)00:51
devcamcarso now i know where to focus :)00:51
devcamcarthanks for the help! i'm gonna take a break now that there's some progress00:52
gholtWhen everything's new, everything's suspect, lol00:52
*** gregp76 has quit IRC00:52
*** dendro-afk is now known as dendrobates00:52
*** dendrobates is now known as dendro-afk00:53
devcamcargholt: odd, I was able to swauth-delete-user system root but i can't  swauth-delete-account system00:56
devcamcargholt: nm, it looks like file system permissions issue00:57
devcamcari forgot a chown somewhere00:57
*** joearnold has joined #openstack00:57
*** hggdh has quit IRC00:58
*** oneiropolo has joined #openstack00:59
oneiropolohello00:59
devcamcargholt: does this look normal?00:59
devcamcardrwxr-xr-x 2 nobody swift 6 2011-02-25 00:50 4af061388b2465a1a69b21edcc5023dc00:59
devcamcarnobody:swift instead of swift:swift ?00:59
gholtNot even. Everything should be running as the swift user. It deliberately drops privs to that, or whatever you have set as user =01:01
*** hggdh has joined #openstack01:01
gholtI wonder if rsync did that. Hmm01:02
*** KenD has quit IRC01:03
openstackhudsonProject nova build #576: SUCCESS in 1 min 42 sec: http://hudson.openstack.org/job/nova/576/01:07
openstackhudsonTarmac: Make tests start with a clean database for every test.01:07
devcamcargholt: user is set to swift in all my configs01:08
devcamcargholt: this must be related to all those strange rsync errors i'm getting in the logs01:09
devcamcargholt: crap I think I see it01:09
devcamcarargh!01:09
devcamcargholt: yea i see it01:09
devcamcargholt: i have uuid instead of uid in my rsync config, fat fingered it somehow01:09
devcamcarthey can't protect me from myself01:10
gholtDamn. So that's what that earlier invalid option uuid was, lol01:10
devcamcaryea hah, makes a lot of sense now01:12
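The misspelled option explains the earlier symptoms: rsyncd runs module transfers as the user named by uid, and its built-in default is nobody, hence the nobody-owned partition directories. A sketch of the corrected module stanza, with paths as in the standard Swift deployment guide rather than devcamcar's actual config:

    uid = swift            # was mistyped as "uuid = swift"
    gid = swift

    [object]
    path = /srv/node/
    max connections = 2
    lock file = /var/lock/object.lock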
*** mahadev has quit IRC01:15
*** hadrian has quit IRC01:21
*** hadrian has joined #openstack01:21
*** winston-d has joined #openstack01:22
devcamcargholt: was able to delete system account now, had to reset the account service on it first01:22
*** hadrian has joined #openstack01:22
winston-dI've one question regarding the 'zone' in Swift01:22
devcamcargholt: and it actually works, yay01:23
winston-dare the machines in the same zone supposed to have exactly the same hardware configuration, such as the same hard drives?01:25
notmynamewinston-d: no, the zones are there to separate groups of servers as much as possible (to avoid system-wide failures like a switch going out or a cabinet losing power)01:28
notmynamewith no zones (the same as only one zone), multiple copies of each entity could be affected by a cab failure (or even a single storage node failure)01:30
notmynameso it's simply a concept to separate things. the servers can be heterogeneous.01:30
winston-dthere can be multiple servers in one zone, right? then they can be heterogeneous?01:32
winston-dactually, now i have 14 machines (10 of same type, 4 of the other type of config). and I have 12 SATA HDDs, and 8 SSDs.  I'm considering how to deploy Nova & Swift01:34
winston-dany suggestions or hint?01:35
*** hggdh has quit IRC01:36
*** hggdh has joined #openstack01:38
*** hazmat has joined #openstack01:42
*** mahadev has joined #openstack01:44
*** mahadev has quit IRC01:48
notmynamewinston-d: sorry to leave you hanging there...01:51
notmynamemultiple servers in one zone is good01:52
winston-dnotmyname: it's ok.01:52
*** clauden has quit IRC01:52
notmynameare the 20 drives across all 14 machines?01:53
winston-dnotmyname: i haven't installed them yet.01:53
winston-dnotmyname: so the first thing is to decide how to install those drives01:54
notmynamebut 20 drives total? and 14 servers total?01:54
winston-dnotmyname: that's right01:54
*** mahadev has joined #openstack01:55
winston-di was thinking 14 servers for Nova and some of them would also be configured as Swift nodes.01:56
notmyname4 zones, one box per zone with 3 SATA HDDs each, would be a good start01:57
notmynamewith the caveat that the SSDs will of course give you better performance01:57
notmynamebut the 4 "other config" boxes with 3 drives each, plus perhaps one other box as the proxy (for 5 total swift boxes)01:58
notmynamethat leaves 9 or 10 boxes for nova (but short on drives...)01:59
winston-dnotmyname: so you don't suggest mixing Nova & Swift together?01:59
notmynameor use the 8 SSDs as 4 zones with 2 drives each01:59
notmynameit would probably work, but I don't think there is much experience doing that02:00
*** Dumfries has quit IRC02:00
notmynamefor an idea performance setup, I would think they need to be optimized differently (IOPS vs CPU)02:00
notmynames/idea/ideal02:00
notmynamemaybe you could run nova and use the VMs as the storage nodes. OpenStack Inception ;-)02:01
winston-dnotmyname: that's cool idea. :)02:02
notmynameof course, that would probably hurt the swift performance pretty bad (just a guess)02:02
notmynameso you've got a few good options and a few others if this isn't for a prod setup :-)02:02
winston-dI remembered once someone said there should be at least 5 zones?02:04
notmynamefor swift, one of the bottlenecks will be the networking. a 1Gbps connection is slower than a SATA HDD for large GETs/PUTs02:04
notmyname5 is a recommended start02:04
notmyname3 is the minimum02:04
notmyname4, in your case, may be good since you have limited amounts of drives02:05
winston-di see.02:05
winston-d4 zones with SATA hdd and 1 zone with SSDs, is that OK?02:06
winston-dbut clearly the one with SSD has less capacity02:06
notmynameyes. if they are different sizes you should adjust the weights in the rings accordingly02:06
notmynameswift won't put more than one copy of something in a zone, so the number of zones is a balancing act of how much you can partition single points of failure. ex, in your case 20 zones would protect against a single drive failing, but not against a box failing. 5 zones (with one server and multiple drives in each zone) would protect against drive and server failure. separate those boxes in different cabs, and you protect against a cab failure02:09
winston-dnotmyname: sorry, i don't get it.  how do 5 zones (one box w/ multiple drives) protect against box failure?02:11
oneiropolois there anybody can tell me about partition power?02:17
oneiropoloi don't understand the meaning about the partition power02:17
winston-dnotmyname: my understanding is data replications will be put to different zones, in the deployment of 3 replicas & 5 zones (one box w/ multiple drives each).  one box or one drive failure won't hurt since data is replicated to 3 boxes.02:17
notmynamewinston-d: in your case with 20 drives, one zone per drive (since you only have 14 boxes) could result in 2 copies (or all 3) on the same physical box (but different drives)02:18
notmynameso you could lose data, at least temporarily if a box goes down02:18
notmynamebut with boxes in different zones, you are protected exactly as you said: one copy in distinct zones, so you are protected against box and drive failure02:20
winston-dnotmyname: do you mean that one can configure one box into two zones?02:20
notmynamewinston-d: the ring manages volumes (ie drives or RAID volumes), so yes02:20
winston-doh, i thought the ring manages boxes02:21
notmynameoneiropolo: the partition power determines the maximum size of your swift cluster (in storage volumes) and how well you can balance the storage across all of the storage volumes02:21
winston-dso that's the tricky part, i should configure the drives in the same box in the same zone02:22
notmynameyes02:22
winston-dgot it.  thanks!02:23
notmynameone entry in the ring is <ip address + mount location>02:23
notmynameactually, I guess it would be <zone + ip + mount point>02:23
winston-dnotmyname: how to calculate the weight of the ring according to capacity?02:26
notmynamethe weight is a dimensionless number that only makes sense in relation to the other weights. a simple method is to use the number of GB the drive has for the drive weight02:27
notmynameso 2000 for a 2T drive, 1500 for a 1.5T, etc02:27
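In ring-builder terms, the weights-as-gigabytes convention looks like this (builder file name, zones, IPs, and device names are made up for illustration):

    swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 2000   # 2T drive
    swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 1500   # 1.5T drive
    swift-ring-builder object.builder rebalance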
oneiropolowhat does the size of a swift cluster mean?02:29
winston-di see. thanks.02:29
oneiropolois that disk size? or number of nodes?02:29
notmynameoneiropolo: the total number of nodes in the system02:30
oneiropolooh, i see, thanks a lot ~02:31
winston-dif swift is deployed with mixed drives (say, slow SATA & fast SSD), the performance of swift is unpredictable, right?02:33
notmynameoneiropolo: so, for example, based on your DC space available, budget, etc you may know that you can only ever have 1000 boxes in a cluster, each with 36 drives. then multiply by 100 so you can easily manage 1% of the ring space. (=3600000). then find the closest power of 2 bigger than that number02:33
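Spelled out, that sizing arithmetic (using the hypothetical 1000-box, 36-drive figures from the example) yields the partition power passed to ring creation:

    python -c "import math; print(int(math.ceil(math.log(1000*36*100, 2))))"   # -> 22
    swift-ring-builder object.builder create 22 3 1   # part_power, replicas, min_part_hours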
*** guynaor has joined #openstack02:33
*** guynaor has left #openstack02:33
notmynamewinston-d: it depends on how they are deployed :-). you could do dedicated account/container nodes with SSDs and object nodes with HDDs. but if the SSDs and HDDs were mixed throughout, then performance would be unpredictable02:35
*** acmurthy has joined #openstack02:35
*** RobertLaptop has joined #openstack02:36
*** vvuksan has quit IRC02:37
winston-dhmm, interesting, i thought account/container/object are all required for each node02:38
winston-ddidn't know one could separate them onto different boxes.02:39
notmynameit's all very flexible ;-)02:41
jaypipessandywalsh: damn you for creating a rift. :P02:42
*** bcwaldon has joined #openstack02:45
*** MarkAtwood has joined #openstack02:46
*** nelson has joined #openstack02:50
sandywalshjaypipes, heh, sorry. I needed to draw a line somewhere.03:05
sandywalshjaypipes, all very valid arguments and a great debate, but I got donuts to make.03:05
*** lamar has joined #openstack03:07
*** hazmat has quit IRC03:14
*** hadrian has quit IRC03:23
*** maple_bed has quit IRC03:36
*** maplebed has joined #openstack03:39
*** pvo has quit IRC03:40
*** rchavik has joined #openstack03:42
*** sateesh has joined #openstack03:48
*** mdomsch has joined #openstack03:48
uvirtbotNew bug: #724719 in nova "OpenStack-Nova-Compute-ScriptInstallation Errors and Hung" [Undecided,New] https://launchpad.net/bugs/72471904:06
creihtdevcamcar: btw, the "name lookup failed for (ip):" error in the rsync logs is due to rsync doing a reverse lookup on the ips04:06
creihtunfortunately in the current rsync it can't be disabled04:07
creihtwe have a patched rsync where I just disabled the reverse lookups04:07
creihtI should see about getting that into a ppa04:07
*** dragondm has quit IRC04:08
*** MarkAtwood has quit IRC04:21
*** acmurthy has quit IRC04:21
*** Nick_ has joined #openstack04:40
*** rchavik has quit IRC04:40
*** Nick_ is now known as Guest5487204:40
*** sateesh has quit IRC04:42
oneiropoloif I have 10 server machines with 1 disk.04:45
oneiropolohow could I make the partition policy? for example?04:45
*** joearnold has quit IRC04:52
*** Guest54872 has quit IRC04:54
*** Guest54872 has joined #openstack04:55
*** kashyap has quit IRC04:56
*** paltman has quit IRC05:01
*** Guest54872 has quit IRC05:04
*** Guest54872 has joined #openstack05:05
*** gregp76 has joined #openstack05:08
*** zenmatt has quit IRC05:14
*** kashyap has joined #openstack05:17
*** king has joined #openstack05:23
*** king is now known as Guest9988305:24
*** Guest99883 has quit IRC05:30
*** Guest54872 has quit IRC05:31
*** Guest54872 has joined #openstack05:36
*** bcwaldon has quit IRC05:42
*** MarkAtwood has joined #openstack05:42
*** blpiatt has joined #openstack05:42
*** omidhdl has joined #openstack05:44
*** f4m8_ is now known as f4m805:45
*** lamar has quit IRC05:56
*** acmurthy has joined #openstack06:01
*** acmurthy has quit IRC06:05
*** mdomsch has quit IRC06:10
*** acmurthy has joined #openstack06:11
*** bcwaldon has joined #openstack06:15
*** Jbain has quit IRC06:17
*** Jbain has joined #openstack06:17
*** bcwaldon has quit IRC06:20
*** gregp76 has quit IRC06:20
*** bcwaldon has joined #openstack06:21
*** bcwaldon has quit IRC06:28
*** kazu has joined #openstack06:37
*** MarkAtwood has quit IRC06:41
*** miclorb_ has quit IRC07:01
*** naehring has joined #openstack07:15
*** Guest54872 has quit IRC07:18
*** acmurthy has quit IRC07:20
ttxvishy: to report a bug against packaging: https://bugs.launchpad.net/ubuntu/+source/nova07:25
ttx(just make it clear in desc that it's for trunk, not any released packages)07:25
winston-dhas anyone encountered errors using 'nova-manage db sync'?07:25
*** naehring has quit IRC07:28
kpepplewinston-d: what kind of errors ?07:35
winston-dkpepple: here's the output http://paste.openstack.org/show/770/07:37
winston-dkpepple: i'm using Bexar release on RHEL 6.  Manually compiled.07:37
kpepplewinston-d: looking ... haven't seen this before07:39
*** omidhdl has quit IRC07:39
kpepplewinston-d: what database are you using ? it's defined in your /etc/nova/nova.conf file with the --sql_connection flag07:40
*** omidhdl has joined #openstack07:40
winston-dkpepple: it is MYSQL07:40
*** ramkrsna has joined #openstack07:41
*** ramkrsna has joined #openstack07:41
winston-dkpepple: --sql_connection=mysql://nova:nova@192.168.4.1/nova07:41
* ttx suspects a wrong version of python-migrate07:42
winston-d//winston-d feels the same as ttx07:42
ttxwe are using 0.6.x07:42
kpepplewinston-d: i agree with ttx ... 0.2.2 seems really old -- i think you need something like 0.607:42
ttxI think it works with 0.5, but certainly not with 0.207:42
winston-dBest match: sqlalchemy-migrate 0.6.107:42
winston-dAdding sqlalchemy-migrate 0.6.1 to easy-install.pth file07:42
winston-dInstalling migrate script to /usr/bin07:42
winston-dInstalling migrate-repository script to /usr/bin07:42
kpepplewinston-d: make sure you either install all the pre-req packages or use pip to install all the new eggs thru pip-requires07:43
winston-dthis is what i got when 'easy_install sqlalchemy-migrate'07:43
ttx/usr/lib/python2.6/site-packages/migrate-0.2.2-py2.6.egg/migrate/versioning/unique_instance.py07:43
ttxapparently your system is using something else07:43
winston-dwell, how do i change that? remove the old one?07:44
* ttx is not a pip/venv/egg creature. I admit using only packages, so i can't really help you on that one.07:45
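For the record, a blunt sketch of what "remove the old one" could look like, using the egg path ttx quoted above (an egg deleted by hand also needs its entry cleaned out of easy-install.pth):

    python -c "import migrate; print(migrate.__file__)"   # confirm which egg python picks up
    rm -rf /usr/lib/python2.6/site-packages/migrate-0.2.2-py2.6.egg
    easy_install "sqlalchemy-migrate>=0.6"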
kpepplewinston-d: i run through the virtualenv ... otherwise, you'll probably need to upgrade or remove the old one07:47
winston-dwell, i am trying to do that.07:48
*** CloudChris has joined #openstack07:48
winston-dsetting up Nova has always been a nightmare...07:48
kpepplewinston-d: sadly, it's much easier on ubuntu ... we need to work harder on the redhat/centos side07:49
kpepplewinston-d: can you just uninstall with the easy_install -m ?07:50
winston-dkpepple: let me try that07:50
*** mgoldmann has joined #openstack07:51
winston-dkpepple: doesn't work.07:52
winston-d'nova-manage db sync' reports the same error.07:53
kpepplewinston-d: do you have virtualenv installed ? if so, run the test script (./run_tests.sh) and it will install all the necessary (and correct) eggs locally ...07:54
winston-dkpepple: thanks for the hint. i'm downloading virtualenv.07:56
*** guigui has joined #openstack08:00
*** romain_lenglet_ has joined #openstack08:05
*** GasbaKid has joined #openstack08:13
*** rcc has joined #openstack08:15
*** romain_lenglet_ has quit IRC08:15
*** Guest54872 has joined #openstack08:21
*** berendt has joined #openstack08:21
*** skiold has joined #openstack08:22
*** czajkowski has quit IRC08:24
winston-dkpepple: ./run_tests.sh is still ongoing.  really slow internet connection.08:25
kpepplewinston-d: this takes for ever --- the twisted module is really large08:25
*** littleidea has quit IRC08:26
*** Nacx has joined #openstack08:30
*** littleidea has joined #openstack08:30
*** Guest54872 has quit IRC08:31
*** Guest54872 has joined #openstack08:31
winston-dkpepple: it finished!08:36
winston-dbut there seems to be some errors08:36
kpepplewinston-d: make sure you activate the venv before you run db sync08:37
kpepplewinston-d: you'll see some errors ... hopefully nothing fatal08:37
winston-dhow to activate 'venv' ?08:37
openstackhudsonProject nova build #577: SUCCESS in 1 min 40 sec: http://hudson.openstack.org/job/nova/577/08:37
openstackhudsonTarmac: The proposed bug fix stubs out the _is_vdi_pv routine for testing purposes.08:37
kpepplewinston-d: ". .nova-venv/bin/activate"08:38
winston-dkpepple: where's that? nova-venv.  there's nothing 'whereis activate'08:40
kpepplewinston-d: you are going to source the .nova-venv/bin/activate (it's inside your nova source code folder)08:41
winston-dkpepple: no, i don't see it.08:42
openstackhudsonProject nova build #578: SUCCESS in 1 min 39 sec: http://hudson.openstack.org/job/nova/578/08:42
openstackhudsonTarmac: Cleanup db method names for dealing with auth_tokens to follow standard naming pattern.08:42
kpepplewinston-d: when you did the ./run_tests.sh script, it made it in the same folder (top level of trunk)08:43
winston-dkpepple: oh, sorry i missed that.  grrr.08:45
winston-dkpepple: 'nova-manage db sync' has the same error report08:45
kpepplewinston-d: even after you sourced the "activate" script ?08:46
winston-dyes08:46
*** drico has quit IRC08:46
kpepplewinston-d: just to check, do this (will paste something , hold on)08:47
*** naehring has joined #openstack08:49
*** daveiw has joined #openstack08:49
kpepplewinston-d: try this paste - http://paste.openstack.org/show/771/08:49
kpepplewinston-d: basically, you will use nova-manage to open a shell, then find out what migrate it is using08:50
winston-dkpepple: here's my output: http://paste.openstack.org/show/772/08:52
kpepplewinston-d: it's still pointing to the old migrate :(08:52
winston-dstill the very old version08:52
*** calavera has joined #openstack08:53
*** adjohn has quit IRC08:53
kpepplewinston-d: i am vaguely remembering that we've seen this before ... something about migrate not versioning properly ...08:53
*** miclorb has joined #openstack08:54
winston-dkpepple: it's weird because the 'easy_install sqlalchemy-migrate' output seems OK, but there's no such thing inside /usr/lib/python2.6/site-packages08:54
kpepplewinston-d: did you easy_install after you sourced the activate ?08:55
*** littleidea has quit IRC08:56
kpepplewinston-d: or was this part of the ./run_tests script ?08:56
winston-dkpepple: i did that _before_ sourcing venv. let me try that08:56
*** allsystemsarego has joined #openstack08:59
*** allsystemsarego has joined #openstack08:59
*** blpiatt has quit IRC09:02
winston-dkpepple: this time, easy_install did install the right version of migrate in the venv, but 'nova-manage db sync' has another error.  but I have to run for now.  thank you very much for your kind help09:05
kpepplewinston-d: no worries, best of luck09:05
winston-dkpepple: this is the error: http://paste.openstack.org/show/773/09:06
*** adjohn has joined #openstack09:08
*** miclorb has quit IRC09:17
*** mahadev has quit IRC09:20
*** sateesh has joined #openstack09:22
*** miclorb_ has joined #openstack09:24
*** miclorb_ has quit IRC09:31
*** miclorb_ has joined #openstack09:37
ttxvishy: created the bug, will take care of it: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/72484409:47
uvirtbotLaunchpad bug 724844 in nova "[trunk] Add lvdisplay to nova_sudoers" [High,In progress]09:47
*** miclorb_ has quit IRC09:50
*** Guest54872 has quit IRC09:56
*** sateesh has quit IRC09:56
berendtafter uploading images to nova-objectstore i only have the status "decrypting" for a long time (http://paste.openstack.org/show/775/)09:57
berendtis this normal?09:57
*** miclorb_ has joined #openstack09:58
ttxberendt: define "long time" ?10:02
berendt10 minutes10:02
naehringsame here.10:02
berendtin the past they were available after a few seconds or minutes10:02
ttxberendt: yes10:02
berendtalso i have no cpu consuming processes on the system..10:03
ttxberendt: looks like a regression to me... current trunk ?10:03
berendtnearly.. 73210:03
berendti'll open a bugreport10:03
ttxberendt: if you reproduce steadily, file a bug, I'll try to reproduce locally to confirm10:03
berendti tried it two times on my setup and naehring has the same problems on another setup10:04
berendthmm.. probably an error with the rsa stuff10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] Process Process-3:10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] Traceback (most recent call last):10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]   File "/usr/lib64/python2.6/multiprocessing/process.py", line 231, in _bootstrap10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]     self.run()10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]   File "/usr/lib64/python2.6/multiprocessing/process.py", line 88, in run10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]     self._target(*self._args, **self._kwargs)10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]   File "/usr/lib64/python2.6/site-packages/nova/objectstore/image.py", line 242, in register_aws_image10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]     cloud_private_key, decrypted_filename)10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]   File "/usr/lib64/python2.6/site-packages/nova/objectstore/image.py", line 263, in decrypt_image10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]     % err)10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] Error: Failed to decrypt private key: RSA operation error10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] 1918:error:0407106B:rsa routines:RSA_padding_check_PKCS1_type_2:block type is not 02:rsa_pk1.c:190:10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130] 1918:error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed:rsa_eay.c:596:10:06
berendt2011-02-25 10:46:16+0100 [HTTPChannel,4,192.168.2.130]10:06
berendti'll check the ca.. i moved the objectstore to another system10:06
berendtchronos:/var/lib/nova/CA # ./genrootca.sh10:07
berendtNot installing, it's already done.10:07
*** miclorb_ has quit IRC10:07
berendtok.. i copied /var/lib/nova/CA from the old system and now it's working10:11
berendtseems to be something wrong with the generated CA on the new system10:12
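Roughly what berendt did, in command form (hostname is a placeholder; the key point is that images bundled against the old CA can only be decrypted by that same CA):

    rsync -a oldhost:/var/lib/nova/CA/ /var/lib/nova/CA/
    # then restart nova-objectstore so it picks up the copied CA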
*** adjohn has quit IRC10:12
*** adjohn has joined #openstack10:13
naehringwell, I've removed all of the content below /var/lib/nova and reset my database. now the process was just as fast as before. Maybe something went wrong while updating10:15
uvirtbotNew bug: #724853 in nova "after uploading images to nova-objectstore they are only decrypting " [Undecided,New] https://launchpad.net/bugs/72485310:22
*** darea has joined #openstack10:23
*** oneiropolo has quit IRC10:24
dareahi, can someone help me with configuring swift ? got some probs with the ring-builder10:28
*** piken has quit IRC10:37
*** piken has joined #openstack10:37
ttxvishy: done, should appear soon in builds.10:48
*** MarkAtwood has joined #openstack10:49
*** MarcMorata has joined #openstack10:53
*** h0cin has joined #openstack11:24
*** omidhdl has left #openstack11:24
*** alekibango_ has quit IRC11:27
*** alekibango has joined #openstack11:28
*** Daviey has quit IRC11:53
*** MarkAtwood has quit IRC11:55
*** Daviey has joined #openstack11:58
*** rcc has quit IRC12:00
*** calavera has quit IRC12:22
*** reldan has joined #openstack12:34
*** zenmatt has joined #openstack12:36
*** hazmat has joined #openstack12:37
*** maplebed has quit IRC12:38
*** ctennis has quit IRC12:40
*** Daviey has quit IRC12:42
*** reldan has quit IRC12:43
*** Daviey has joined #openstack12:47
*** rcc has joined #openstack12:52
*** naehring has quit IRC12:53
*** GasbaKid has quit IRC12:54
*** GasbaKid has joined #openstack13:04
*** kashyap has quit IRC13:05
*** Daviey has quit IRC13:06
*** Daviey has joined #openstack13:13
*** j05h has joined #openstack13:13
*** paltman has joined #openstack13:17
*** doude has joined #openstack13:20
*** eikke has joined #openstack13:21
*** ctennis has joined #openstack13:23
*** f4m8 is now known as f4m8_13:32
*** j05h has quit IRC13:33
*** h0cin has quit IRC13:34
*** j05h has joined #openstack13:36
*** masumotok has joined #openstack13:39
*** reldan has joined #openstack13:39
*** masumotok has quit IRC13:40
*** reldan has quit IRC13:44
*** reldan has joined #openstack13:47
*** omidhdl has joined #openstack13:49
berendtdoes any documentation exist on how to use nova-ajax-console-proxy?13:51
*** zenmatt has quit IRC13:55
*** zenmatt has joined #openstack13:57
*** johnpur has joined #openstack13:59
*** ChanServ sets mode: +v johnpur13:59
*** jmckenty has joined #openstack14:02
*** dendro-afk is now known as dendrobates14:05
*** h0cin has joined #openstack14:09
*** patcoll has joined #openstack14:10
*** patcoll has quit IRC14:14
*** patcoll has joined #openstack14:14
*** hadrian has joined #openstack14:16
*** acmurthy has joined #openstack14:19
*** ppetraki has joined #openstack14:20
*** ramkrsna has quit IRC14:21
creihtdarea: howdy14:24
annegentleberendt: unfortunately, no doc yet on the AJAX console, but sleepsonthefloor and I are emailing about it now.14:27
berendtannegentle: sounds good :)14:27
annegentleberendt: so all we have to go on right now is the wiki page: http://wiki.openstack.org/WebBasedSerialConsole and then there's some info in the code checkin too, which I can't put my hands on right now, sorry.14:29
*** olivier__ is now known as olivier_14:30
*** Ryan_Lane has joined #openstack14:31
*** rcc has quit IRC14:33
*** reldan has quit IRC14:33
*** reldan has joined #openstack14:34
*** blpiatt has joined #openstack14:36
*** acmurthy has quit IRC14:37
Ryan_Lanettx: updated the bug where you asked for a test case14:37
ttxRyan_Lane: cool ! thanks.14:38
Ryan_Lanettx: last week, this week, and next I'm doing a datacenter buildout, so I'm not really near a computer14:38
Ryan_Laneotherwise I'd write an actual test14:38
*** littleidea has joined #openstack14:38
*** acmurthy has joined #openstack14:42
*** gondoi has joined #openstack14:43
*** kashyap has joined #openstack14:47
*** kashyap_ has joined #openstack14:47
*** acmurthy1 has joined #openstack14:48
*** acmurthy has quit IRC14:52
ttxRyan_Lane: I'll try to do it myself.14:53
*** pvo has joined #openstack14:54
jaypipessirp-: k, fixed up all things from review, added lots of unit tests, and added documentation on the new disk and container formats: https://code.launchpad.net/~jaypipes/glance/api-image-format/+merge/5106415:01
*** reldan has quit IRC15:03
*** bcwaldon has joined #openstack15:07
*** acmurthy1 has quit IRC15:08
*** imsplitbit has joined #openstack15:10
*** pvo has quit IRC15:11
*** lwollney has joined #openstack15:14
*** acmurthy has joined #openstack15:14
*** kazu_ has joined #openstack15:16
*** jmckenty has quit IRC15:16
*** jmckenty has joined #openstack15:16
*** reldan has joined #openstack15:16
*** hazmat has quit IRC15:18
*** hazmat has joined #openstack15:18
*** raygtrejo has joined #openstack15:18
*** acmurthy1 has joined #openstack15:20
*** vvuksan has joined #openstack15:22
*** acmurthy has quit IRC15:23
*** mahadev has joined #openstack15:25
*** pvo has joined #openstack15:27
doudeHi all, I've got a problem with the serial console with libvirt. I copied the XML template of an instance and set the parameters manually. I created the domain with the 'virsh' command but I cannot access the serial console through the command 'virsh console mydomain'15:30
doudemy image must support serial console at boot (ttyS0)15:30
doudeIf I remove the part '<serial type="file">' in the XML file, the serial console is available15:31
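For comparison, a hedged sketch of a console-friendly device stanza in plain libvirt XML: virsh console needs a pty-backed device, whereas a type="file" serial port only logs output to a file (this is generic libvirt syntax, not nova's actual template):

    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target port='0'/>
    </console>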
*** patcoll has left #openstack15:32
doudeDoes it work for you?15:32
*** mahadev has quit IRC15:32
*** mahadev has joined #openstack15:34
*** KOL has joined #openstack15:37
KOLHi, I have 8 desktop pcs, each with a quad core, 2 hard disks and 6 GB of ram. Using these machines, I want to make a ton of virtual machines. Only 8 of those virtual machines will be running at one time, but the others should be available to boot into. Is openstack what I need to virtualize the storage and processor capacity in each machine, and to run a virtualization solution on top of it?15:39
*** Seoman has joined #openstack15:41
*** MarcMorata has quit IRC15:41
*** reldan has quit IRC15:41
*** GasbaKid has quit IRC15:46
*** mahadev has quit IRC15:50
*** GasbaKid has joined #openstack15:50
*** aliguori has quit IRC15:51
*** dragondm has joined #openstack15:53
pvoKOL: it can do this.15:53
pvonova can do the proc/ram/disk provisioning15:54
*** daveiw has left #openstack15:55
KOLpvo awesome15:56
KOLthanks15:56
*** mahadev has joined #openstack15:57
*** Oneiropolo has joined #openstack15:59
Oneiropolohello ,15:59
Oneiropoloi have one thing to ask15:59
Oneiropoloi can't understand exactly the meaning of partition.16:00
Oneiropoloif i assumed that i have only 1 disk (device)16:00
Oneiropolo1 disk16:00
Oneiropolothat means i have 1 device16:00
Oneiropoloand i can have 100 partitions (for recommend)16:01
Oneiropolowhat is the relation between partition and object?16:01
Oneiropolo1 object = 1 partition could possible?16:01
Oneiropolois there anyone can help?16:02
annegentleOneiropolo: are you asking specifically about the ring and partitions as related to Object Storage (swift)?16:03
Oneiropoloyes i am16:04
notmynameOneiropolo: for a very technical description, gholt has a 5 part series on his blog about how the swift ring works (http://tlohg.wordpress.com/)16:04
Oneiropolonotmyname: oh Thanks16:04
Oneiropolobut at this time .. i wonder about the partition.16:05
creihtOneiropolo: there is also a ring overview here: http://swift.openstack.org/overview_ring.html16:06
creihtand http://swift.openstack.org/deployment_guide.html#preparing-the-ring16:06
annegentleOneiropolo, notmyname wow last time I saw series that it was still 2 parts :)16:06
annegentlenice job gholt16:06
Oneiropolowhat is the relation between objects and partitions .16:06
Oneiropolo1 partition has many object? is it right?16:07
creihtyes16:07
notmynameyes16:07
Oneiropoloso, if i have 100GB disk16:07
Oneiropoloand I set 100 partitions on 1 disk (device)16:07
*** BK_man has joined #openstack16:07
Oneiropoloit means that 1 partition is 1GB block16:07
Oneiropolois it right?16:07
notmynameno16:08
Oneiropoloumm...16:08
notmynamepartitions are a logical grouping, not a quota allocation16:08
Oneiropolooh...16:08
Oneiropoloso it's not related with the size of disk?16:08
notmynamewell balanced partitions may end up using 1GB in your example, but that's a side-effect16:08
*** zenmatt has joined #openstack16:09
Oneiropoloit's just a side effect ..16:09
*** KOL has quit IRC16:09
Oneiropoloand it could possible that 1 partition = 1 object?16:10
notmynamenot directly, but the partitions come from the ring power you use when creating the ring and the weights for each node. drive size is a consideration when determining the node weights, so drive size only indirectly affects the number of partitions16:10
Oneiropolookay, i see.16:11
notmynameit's possible that a partition can only have 1 object in it, but that would happen with a nearly-empty cluster16:11
Oneiropoloring power and weights. It's a little bit complicated for me.16:11
notmynamehm...that above (not directly....) wasn't worded the best, but I hope you get the idea16:12
Oneiropolono it's very helpful.16:12
Oneiropoloi appreciate it.16:12
creihtanother way to think about it is that it is a logical abstraction16:12
creihtobjects hash to a partition16:12
creihtand there is a mapping of partitions to devices in the cluster16:13
Oneiropoloso a partition is a kind of bucket16:13
notmynamecreiht: much better said. thanks. :-)16:13
Oneiropoloand buckets are mapped to devices.16:14
jarrodis anyone using xen with openstack in production16:14
jarrodthat is pleased with the experience16:14
notmynamebucket is a dangerous word since it's used with S3 and means something completely different there16:14
Oneiropoloyes, it is right.16:15
creihtI like the idea of a mapping better16:15
*** aliguori has joined #openstack16:15
Oneiropoloso, a partition is just an abstracted structure for objects16:16
notmynamethe ring maps the partitions to the drives (storage volumes). when an object is hashed, its hash is used to find the partitions each replica goes to. then the ring mapping determines what storage volume (drive) it lives on16:16
Oneiropoloi'm getting it now.16:17
pvojarrod: we're using xenserver and openstack in our dev envs16:17
pvojarrod: pushing very hard to use in prod soon16:17
jarrodpvo: you run into any problems when provisioning images over glance?16:18
notmynamepartitions become a logical abstraction used to balance heterogeneous drives and allow fine tuning in the case of hot spots in the cluster16:18
pvoare you using xen.org or xenserver?16:18
notmynamerather than mapping objects directly to storage volumes, they are mapped to partitions which are mapped to volumes16:18
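A toy illustration of that first step, the object-to-partition hash (this mirrors the shape of Swift's lookup but is not its exact code; the path and the part power of 17 are made up):

    # md5 the object path, keep the top 17 bits of the digest as the partition:
    python -c "import hashlib, struct; h = hashlib.md5(b'/account/container/object').digest(); print(struct.unpack('>I', h[:4])[0] >> (32 - 17))"

The ring's job is then the second step: a lookup table from that partition number to the devices holding each replica.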
pvowe're using xen server 5.5x and 5.616:18
jarrodi am using 5.6fp116:18
pvojarrod: I haven't. What particular problems are you seeing?16:19
Oneiropoloso depending on the size of the disk volume (weight), the number of partitions assigned to a device could be different?16:19
notmynameyes16:20
Oneiropolookay.16:20
Oneiropolomaybe i got a clue now.16:21
notmynameand so you can use the weights to manage different sized drives or even to drain or slowly fill old/new drives16:21
*** adiantum has joined #openstack16:21
Oneiropolothat means that i can control the device's weights without considering disk size (volume size)?16:22
notmynameweights are dimensionless and only matter in relation to one another16:22
notmynamebut a good start is to use the number of GB on the disk as the weight16:23
Oneiropoloi see16:23
Oneiropoloto build the ring.16:23
Oneiropolofirst i need to consider the number of partitions for the whole ring?16:24
notmynamethe number of partitions is fixed and can't (realistically) be changed after it's deployed16:24
notmynameso you choose the ring power based on how big your cluster could ever be16:25
*** zenmatt has quit IRC16:25
Oneiropoloif I add more nodes to the cluster, I cannot change the number of partitions?16:25
notmynameright. the existing partitions will be rebalanced to account for the new node16:26
Oneiropoloi see.16:26
notmynamefor example, if you know that based on budget or DC space or whatever that you can only ever have 100 storage servers with 10 drives each, you will want a total of 100*10*100 partitions16:27
notmynamethe last "*100" is to have 100 partitions on each drive in a full cluster so you can essentially control 1% of the data on the drive16:28
*** dfg has joined #openstack16:28
notmynamethat gives you 100000 partitions16:28
Oneiropolo100000 = 2^1416:28
Oneiropolorounded16:28
notmynamefind the nearest power of 2 that is bigger than that number and use that for the ring power16:28
notmynameya16:28
*** zenmatt has joined #openstack16:28
Oneiropoloso power is16:28
Oneiropolo1416:28
Oneiropolobut at start16:29
notmynameno, the power would be 20, I think16:29
Oneiropoloi just only have 20 servers16:29
Oneiropolobut i have to use the number of partitions16:29
Oneiropolo2^20 ?16:29
notmynamenm. off by a zero there16:30
*** GasbaKid has quit IRC16:30
notmynameuse 1716:30
Oneiropoloyes, you right :)16:30
notmyname2**17 is the smallest power of 2 greater than 10000016:30
Oneiropolookay16:30
Oneiropoloand if I want to scale out the cluster in the future16:31
Oneiropoloi have to migrate all the data to the new cluster16:31
*** darea has quit IRC16:32
*** rcc has joined #openstack16:32
notmynamein this example, if you had more than 1000 storage volumes you could use the same cluster but you would have less control over the balance of the data in the cluster16:32
notmynamethe "ideal" answer is to stand up a new cluster at that point16:32
Oneiropolookay, i get it.16:33
Oneiropolobut at this time.16:33
OneiropoloI just have only 200 storage volumes16:33
Oneiropolobut i have to use the number of partitions (2^17)16:33
Oneiropolobased on partition power16:34
notmynamewhen choosing the ring power, don't worry about what you have now, worry about how many volumes you will have in a full cluster16:34
*** kashyap has quit IRC16:34
Oneiropolothat doesn't cause a performance issue or problems?16:34
notmynamefull can be defined as "out of physical space", "out of network ports", "out of money", etc16:34
Oneiropolotoo many partitions on small number of storage volumes16:35
notmynamelarger powers will make ring generation slower, but won't affect ring lookups16:35
Oneiropolowhat about the disk performance?16:35
*** enigma1 has joined #openstack16:36
Oneiropoloi think many partitions on one disk volume could be a problem.16:36
notmynameit means there will be more overhead for fs metadata (more inodes storing directories, etc), but that should be a small percentage of the "real" data in your cluster16:37
notmynameso don't say "I'm going to have a 100PB cluster!" when realistically it's going to start much smaller and never be that big16:38
Oneiropoloi see.16:38
Oneiropoloi learned many things now.16:38
*** kazu_ has quit IRC16:38
Oneiropolothank you so much about your help16:40
jarrodwhen using KVM, does openstack create LVM volumes for each instance?16:40
*** guigui has quit IRC16:40
Oneiropolonotmyname, it was very helpful. appreciate it.16:40
notmynamesure. hope it works out for you16:40
Oneiropolomaybe i need to ask you more questions in the future . :)16:41
notmynameif I'm not here, other swift devs and users can help too ;-)16:42
berendtjarrod: if you use nova-volume with the default configuration and you attach volumes to your instances: yes16:42
jarrodnot attached volumes16:43
jarrodthe volumes used for the actual instance16:43
*** bcwaldon has quit IRC16:44
Oneiropoloya, but i hope that you would be here :)16:44
*** maplebed has joined #openstack16:45
*** bcwaldon has joined #openstack16:48
*** kashyap has joined #openstack16:50
*** jmckenty has quit IRC16:53
*** imsplitbit has quit IRC16:57
*** Oneiropolo has quit IRC16:58
*** ovidwu_ has joined #openstack16:59
*** Pentheus has quit IRC17:00
sirp-nova-core: https://code.launchpad.net/~rconradharris/nova/xs-unified-images/+merge/50102  is pretty well wrapped up, could one more core-dev take a peek at that, and if all is well, throw an Approve on there :)17:01
*** mdomsch has joined #openstack17:02
*** bcwaldon has quit IRC17:04
*** bcwaldon has joined #openstack17:04
*** et_ has joined #openstack17:05
*** mahadev has quit IRC17:09
*** mahadev has joined #openstack17:10
*** mahadev has quit IRC17:12
*** mdomsch has quit IRC17:17
*** bcwaldon has quit IRC17:18
*** skiold has quit IRC17:22
*** f2f has joined #openstack17:22
f2fhi17:22
f2fis this a good place to ask a few questions about the openstack architecture?17:23
jarrodyes17:23
jarrodmay or may not get answers17:23
jarroddepending on how people like your ?17:23
jarrodbut, yes heh17:23
f2fmy primary concern is with the filesystems OpenStack uses: how is the ObjectVault configured (what filesystem lies underneath), and in a standard config, what FS should be configured on the physical nodes running the VMs?17:25
f2fany special requirements there?17:25
f2falso, how is the data transferred from the object storage to the compute node?17:26
f2fis it all http transfers to/from the object store?17:27
jarrodyes17:28
jarrodvia the restful interface17:28
jarrodor direct http access17:29
f2fso it doesn't really matter what the underlying filesystem is? that is not shared?17:29
jarrodim pretty sure you can specify what you want17:29
f2fwhat are the options?17:31
*** pvo has quit IRC17:32
*** KenD has joined #openstack17:35
f2fthanks for your help!17:35
*** pvo has joined #openstack17:36
*** dfg has quit IRC17:36
*** acmurthy1 has quit IRC17:37
*** hazmat has quit IRC17:39
*** hazmat has joined #openstack17:39
*** cjb1 has joined #openstack17:40
cjb1hey there everyone...17:41
cjb1I've been trying to get openstack to work with Xen (under centos) and I think I've found my issue17:41
cjb1seems as though Xen doesn't support qcow2 images?17:42
cjb1has anyone else gotten Xen (PV) to work with qcow2 images?17:42
*** RichiH has quit IRC17:44
cjb1I'm beginning to wonder if this is why everyone is moving to KVM in the "cloud"?17:44
f2fisn't qcow2 primarily a kvm/qemu format?17:44
cjb1yes, but it's also something that is *supposed* to be supported by Xen17:45
cjb1and that's how openstack has put Xen support in (using qcow2 images) by default17:45
cjb1so someone MUST have gotten it to work, right?17:45
cjb1:)17:45
*** RichiH has joined #openstack17:45
*** rlucio has joined #openstack17:46
cjb1granted the support for qcow2 images isn't until 4.0.1, but qcow is supported in 3.0.3 (which is what centos ships with)17:46
cjb1and I tried that as well, manually, and it also doesn't work17:46
f2fi've tried to do it the other direction -- convert a xen image (raw img file) to qcow2 using qemu-img but was unsuccessful17:46
jaypipessirp-: around?17:47
*** asksol_ is now known as asksol17:47
*** dendrobates is now known as dendro-afk17:49
*** markwash has joined #openstack17:50
BK_manBTW, RHEL port of Bexar now supports qcow2 images - without NBD!17:51
jaypipesmtaylor: around?17:51
BK_mangrab it here: http://yum.griddynamics.net/17:51
jaypipesBK_man: nice :)17:51
cjb1awesome, thanks, will check it out!17:51
jaypipesBK_man: pls let annegentle know so she can update the docs...17:52
BK_manjaypipes: we are using libguestfs instead of nbd - more modern lib17:52
*** rcc has quit IRC17:52
cjb1bk, are these built against centos 5.5 or some such flavor?17:52
BK_manfor anybody considering RHEL build: here are install instructions: http://wiki.openstack.org/NovaInstall/RHEL6Notes17:53
*** bcwaldon has joined #openstack17:53
BK_mancjb1: this is RHEL6.0 x86_64 RPMs17:53
jaypipesBK_man: excellent, thx. annegentle please check out the docs above for porting to docs.openstack.org?17:53
*** bcwaldon has quit IRC17:57
mtaylorjaypipes: no. I'm not here18:03
jaypipesmtaylor: ok, never mind then :)18:03
mtaylorjaypipes: whazzup?18:04
cjb1hmm, not excited about moving to RHEL618:05
*** f2f has quit IRC18:08
jaypipesmtaylor: was wondering if we could talk about the Tarmac script on Hudson for Glance and Nova?18:09
jaypipesmtaylor: I'm hoping that we could change the process to this: when Tarmac notices a merge proposal, it pulls the branch automatically, runs all tests and if they all pass, then make a comment on the merge proposal of "Tarmac Testing SUCCESS", and if the tests don't pass, have Tarmac make a comment of "Tarmac Tests FAILED", with *only* the tests that failed displayed in the comment, and set the merge prop to Work In Progress automatically?18:09
*** acmurthy has joined #openstack18:10
jaypipesironcamel: did you see that your branch here: https://code.launchpad.net/~ironcamel/nova/openstack-api-hostid/+merge/50200 has a merge conflict. You need to merge that branch with the current trunk and resolve the conflict there... let me know if you need any assistance.18:10
jaypipesmtaylor: we could test a change to Tarmac on Glance first, since it's minimal review volume...then propose the change on the ML for Nova?18:11
jaypipesmtaylor: if you tell me where I can find the Tarmac script, I can do it myself... just not familiar with where that code is...18:12
mtaylorjaypipes: well.... it's more than just a script18:13
mtaylorjaypipes: you can grab it from lp:tarmac18:13
jaypipesmtaylor: ok. what are your thoughts about the above sugestion?18:13
mtaylorjaypipes: _well_ ... I'm not sure what the problem we're trying to solve is here?18:14
jaypipesmtaylor: trying to enhance the review process to have Tarmac pull and run tests automatically *before* any reviewers need to comment... and to have Tarmac only show the FAILED test output, instead of the current behaviour of showing a giant list of successful test output mixed with the fails...18:15
mtaylorjaypipes: oh - so, that's on my list of things to do for the tarmac jenkins rewrite18:17
jaypipesmtaylor: blueprint or bug link for me?18:18
mtaylorjaypipes: but we wind up needing launchpad merge queues for it to work sensibly18:20
mtaylorjaypipes: uh - no, but I have an email write-up that I can send you18:21
jaypipesmtaylor: please, yes :)18:21
*** jlmjlm has quit IRC18:22
mtaylorjaypipes: we wind up in a place where we need java hacking, just to warn you18:22
jaypipesmtaylor: hmm. ok.18:23
*** burris has quit IRC18:24
*** markwash has quit IRC18:25
*** burris has joined #openstack18:25
*** mahadev has joined #openstack18:28
*** dendro-afk is now known as dendrobates18:29
*** rlucio has quit IRC18:33
*** btorch_ is now known as btorch18:34
*** blpiatt has quit IRC18:35
*** jaypipes has quit IRC18:35
*** rlucio_ has joined #openstack18:36
*** tr3buchet has quit IRC18:40
*** tr3buchet has joined #openstack18:42
*** daveiw has joined #openstack18:52
*** littleidea has quit IRC18:54
*** zenmatt has quit IRC18:54
*** littleidea has joined #openstack18:54
*** adiantum has quit IRC18:54
*** littleidea has quit IRC18:57
*** littleidea has joined #openstack18:57
*** mahadev has quit IRC18:58
*** acmurthy has quit IRC18:59
*** rlucio_ has quit IRC19:00
*** bcwaldon has joined #openstack19:00
*** mahadev has joined #openstack19:00
*** rlucio_ has joined #openstack19:02
openstackhudsonProject nova build #579: SUCCESS in 1 min 57 sec: http://hudson.openstack.org/job/nova/579/19:03
openstackhudsonTarmac: I'm working on consolidating install instructions specifically (they're the most asked-about right now) and pointing to the docs.openstack.org site for admin docs.19:03
*** littleidea has quit IRC19:03
uvirtbotNew bug: #725176 in nova "bin/nova-ajax-console-proxy: error while trying to get URLs without a QUERY_STRING" [Undecided,New] https://launchpad.net/bugs/72517619:06
* annegentle does a happy docs consolidation dance19:08
*** markwash has joined #openstack19:10
*** j05h has quit IRC19:15
*** citral has joined #openstack19:20
*** j05h has joined #openstack19:20
*** gregp76 has joined #openstack19:23
*** mdomsch has joined #openstack19:33
*** dragondm has quit IRC19:37
*** bcwaldon has quit IRC19:37
berendtannegentle: thumbs up :)19:38
*** MarkAtwood has joined #openstack19:39
*** drico has joined #openstack19:40
annegentleberendt: baby steps, but thanks :)19:41
*** dendrobates is now known as dendro-afk19:53
*** berendt has quit IRC19:53
*** reldan has joined #openstack19:55
openstackhudsonProject nova build #580: SUCCESS in 1 min 43 sec: http://hudson.openstack.org/job/nova/580/19:57
openstackhudsonTarmac: Add tests for 718999, fix a little brittle code introduced by the committed fix.19:57
openstackhudsonAlso fix and test for a 500 if the auth token doesn't exist in the database.19:57
vishyBK_man: awesome! How does it work without nbd?20:00
*** kang_ has joined #openstack20:01
vishymtaylor, jaypipes: if we run unittests using --with-xunit jenkins can parse the xml output and show failing tests20:01
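In command form, vishy's suggestion is nose's xunit plugin, whose XML report Jenkins can then consume:

    nosetests --with-xunit --xunit-file=nosetests.xml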
kang_Does libvirt_type=kvm support instance snapshots?20:01
vishykang_: not yet, although the naive version should be pretty easy to add20:03
*** mdomsch has quit IRC20:03
vishykang_: snapshots generally refer to to different things20:04
kang_I am looking at doing something regarding savevm20:04
vishy* two20:04
kang_I just want to save a copy of my instance for either backup, or replication20:04
vishyone is snapshotting an individual vm for restore20:04
vishyone is backup to launch later20:04
*** omidhdl has quit IRC20:04
vishythe first is super easy with qcow2 images20:05
kang_i would think one snapshot would accomplish the same thing20:05
vishykang_: qcow2 snapshots are internal to the file.  It is super fast20:05
kang_oh yes, i forgot that it stores it in the same image20:05
vishykang_: backing that up into an external service so it can be relaunched is a little tougher20:06
vishykang_: we were discussing backing up the entire cow image, but that could get unwieldy if there are a lot of snapshots20:06
*** blpiatt has joined #openstack20:07
vishykang_: so the backup version would probably have to mount the whole system and dd it into a new file20:07
*** h0cin has quit IRC20:08
*** jaypipes has joined #openstack20:08
kang_i may try to write that20:10
*** bcwaldon has joined #openstack20:10
kang_i see the mechanisms in place and the space where it should be implemented in the code20:10
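The two flavors vishy contrasts map onto qemu-img operations roughly like this (file names are hypothetical; nova typically keeps instance disks under its instances directory):

    qemu-img snapshot -c mysnap disk      # fast internal qcow2 snapshot, for restore
    qemu-img snapshot -l disk             # list internal snapshots
    qemu-img convert -O raw disk disk-backup.raw   # flatten for external backup/relaunch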
*** markwash has quit IRC20:11
*** mdomsch has joined #openstack20:11
*** markwash has joined #openstack20:12
markwashanybody here want to put themselves forward as a wsgi expert?20:16
uvirtbotNew bug: #725210 in swift "internal proxy needs to handle retries better" [Undecided,New] https://launchpad.net/bugs/72521020:16
*** patcoll has joined #openstack20:17
*** patcoll has left #openstack20:17
*** bcwaldon has quit IRC20:18
creihtmarkwash: what's the question?20:19
annegentlemarkwash: there's a new NovaCore wiki page at http://wiki.openstack.org/NovaCore where people list some of their areas of expertise, too20:20
uvirtbotNew bug: #725215 in swift "swift.stats.log_processor.run_once() needs to be refactored and tested" [Undecided,New] https://launchpad.net/bugs/72521520:21
*** bcwaldon has joined #openstack20:22
uvirtbotNew bug: #725219 in swift "log_processor can reprocess files if processed_files.pickle.gz can't be found" [Undecided,New] https://launchpad.net/bugs/72521920:22
markwashannegentle: thanks20:23
markwashcreiht: I'm just wondering about the appropriate scope of middleware, esp. in nova20:23
creihtahh... not sure that I can answer for nova20:24
markwashI can move my question over to #wsgi too20:24
creihtof course you can just ask, and see if someone answers :)20:26
*** mahadev has quit IRC20:31
*** bcwaldon has quit IRC20:32
*** markwash has quit IRC20:33
*** clauden_ has joined #openstack20:35
*** bcwaldon has joined #openstack20:37
*** DIgitalFlux has joined #openstack20:38
*** mdomsch has quit IRC20:39
*** zenmatt has joined #openstack20:40
*** dragondm has joined #openstack20:40
*** zul has quit IRC20:40
*** dragondm has quit IRC20:41
*** dragondm has joined #openstack20:41
*** DIgitalFlux has quit IRC20:41
jaypipessirp-: around?20:42
*** bcwaldon has quit IRC20:43
*** MarkAtwood has quit IRC20:45
*** Nacx has quit IRC20:47
*** mdomsch has joined #openstack20:50
*** Pentheus has joined #openstack20:51
*** rlucio_ has quit IRC20:53
*** dendro-afk is now known as dendrobates20:54
vishyhey guys20:56
*** littleidea has joined #openstack20:56
vishyi added a crosstable to the NovaCore page, i thought it might be easier http://wiki.openstack.org/NovaCore20:56
jk0yeah, I like that better20:58
edayshoud we just remove the first table then?21:05
edayseems redundant21:05
*** MarkAtwood has joined #openstack21:06
*** mahadev has joined #openstack21:07
*** Pentheus has quit IRC21:07
*** Pentheus has joined #openstack21:09
annegentlevishy: how did you edit that? In text that would be a bear to know the column alignment, wouldn't it?21:10
jarroddang vish21:12
jarrodyou are busy21:12
jarrodyou and eric day21:12
edayannegentle: gui edit21:13
edayI added an 'auth' column too, in case you're an auth expert21:13
annegentleokay, gui edit has given me fits. and I do mean FITS! :)21:13
annegentleeday: but if it works, cool21:14
edayannegentle: this is the first I've used it, seems to work :)21:15
edayI just added another 'zones' column, for the multi-zone work too21:15
*** aliguori has quit IRC21:15
tr3buchetthere is no longer output from the binaries such as bin/nova-compute21:19
tr3buchetis there a way to get this output back?21:19
tr3bucheti'm running with --nodaemon21:20
sirp-jaypipes: ping21:24
jaypipessirp-: hey, got time for a chat?21:25
sirp-jaypipes: sure, skype?21:25
mtaylorvishy: yes21:26
jaypipessirp-: ya, starting up...21:26
mtaylorvishy: except also we're using tarmac to run the unittests, which means they're happening in a tmpdir that jenkins doesn't know about21:26
mtaylorvishy, jaypipes: this is the reason we need tighter integration ...21:27
* mtaylor is talking with jaypipes about the needs we have to take this to the next level21:27
openstackhudsonProject nova build #581: SUCCESS in 1 min 47 sec: http://hudson.openstack.org/job/nova/581/21:27
openstackhudsonTarmac: Fixes FlatDHCP by making it inherit from NetworkManager and moving some methods around.21:27
Vekvishy: Re 715618: I proposed saving the last exception and adding it to the error message, rather than including the stack trace; I still think the trace is too much noise, and the information you're really interested in will likely be in that last exception.  That sound reasonable to you?  (Alternatively, add the exception message to the "is unreachable" messages in the exception handler...)21:29
Vek(715618 is the "cannot reach AMQP" stack trace bug)21:29
*** brd_from_italy has joined #openstack21:32
jaypipesmtaylor: I think just having Tarmac only show the failing errors and/or pep8 failures instead of the entire test run output would be a fantastic first step. Are you saying that that would require Java?21:46
*** aliguori has joined #openstack21:47
*** zenmatt has quit IRC21:48
mtaylorjaypipes: mostly21:49
jaypipesmtaylor: fail.21:50
mtaylorjaypipes: sorry - I will send you that email and I will expand on the needs to get what we need21:50
mtaylorjaypipes: I know EXACTLY what you guys want... there are various ways to get there21:50
jaypipesmtaylor: cheers21:51
mtaylorjaypipes: running out right now - I will get it to you by tomorrow21:51
uvirtbotNew bug: #725281 in glance "No way to remove a custom image property" [Low,Confirmed] https://launchpad.net/bugs/72528121:51
jaypipesmtaylor: no worries, thx man21:51
*** mdomsch has quit IRC21:53
openstackhudsonProject nova build #582: SUCCESS in 1 min 45 sec: http://hudson.openstack.org/job/nova/582/21:58
openstackhudsonTarmac: check if QUERY_STRING is empty or not before building the request URL in bin/nova-ajax-console-proxy21:58
*** rlucio has joined #openstack22:01
*** ctennis has quit IRC22:08
*** mdomsch has joined #openstack22:09
*** mdomsch has quit IRC22:15
*** johnpur has quit IRC22:17
*** vvuksan has quit IRC22:26
*** ctennis has joined #openstack22:31
*** ctennis has joined #openstack22:31
*** MarkAtwood has quit IRC22:36
*** raygtrejo has left #openstack22:40
*** trbs2 has joined #openstack22:44
*** cjb1 has left #openstack22:47
*** tr3buchet has quit IRC22:51
*** tr3buchet has joined #openstack22:51
*** MarkAtwood has joined #openstack22:57
*** blpiatt has quit IRC22:59
*** dinnerjacket has joined #openstack22:59
dinnerjackethey guys, quick question: cloud-init on my instances is receiving this when it hits the metadata service: {"versions": [{"status": "CURRENT", "id": "v1.0"}]}23:01
*** mgoldmann has quit IRC23:01
dinnerjacketthe instance then breaks bad... I assume it's supposed to translate that to xml before sending?23:01
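That payload is the OpenStack API's version document, which suggests cloud-init's metadata request is being answered by the wrong endpoint; an EC2-style metadata service answers the root path with a plain-text list of date versions. A hedged check from inside the instance:

    curl http://169.254.169.254/
    # EC2-style metadata (what cloud-init expects) returns lines like:
    #   1.0
    #   2007-01-19
    # getting {"versions": ...} instead means the request hit the
    # OpenStack API rather than the metadata handler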
uvirtbotNew bug: #725328 in nova "removing of iSCSI volumes failed because "Device or resource busy."" [Undecided,New] https://launchpad.net/bugs/72532823:06
*** enigma1 has left #openstack23:08
*** clauden_ has quit IRC23:14
*** brd_from_italy has quit IRC23:14
*** rlucio has quit IRC23:15
*** gondoi has quit IRC23:19
vishytr3buchet --nodaemon doesn't exist.  do you have a --logdir in your flagfile?  If you remove it, it will output normally23:26
vishyVek: it is only one stack trace when the binary crashes completely, that doesn't seem too noisy to me23:27
*** dinnerjacket has quit IRC23:32
tr3buchetah ok, thanks vishy, wasn't aware the logfile circumvented output23:35
tr3buchetnetworks aren't getting assigned project IDs for some reason, any ideas?23:36
vishytr3buchet, in vlan?23:36
vishynetworks are only assigned to projects in vlan mode23:36
*** trbs2 has quit IRC23:37
*** et_ has quit IRC23:38
*** pvo has quit IRC23:39
dragondmquestion: what's up w/ the Mac OS X binaries that were checked into nova trunk under /test/bin/?   Was that intentional?23:40
Vekexcept that the binary isn't crashing.  There isn't even really an exception at that point!  The only problem is that it can't connect to the AMQP server.23:40
Vekthat seems to me to be common enough that you just want it to tell you, "dude, I can't connect"23:41
Vekstack traces should be for when the programmer screwed up23:41
tr3buchetthanks again vishy23:43
vishyVek: ok it is reasonable to just print the exception type i suppose23:45
vishyVek: I wonder if we should change the exception handler in general to only print the stack trace if --verbose is specified23:46
*** vvuksan has joined #openstack23:47
VekPerhaps, but that's probably a little out of scope for what I'm doing :)23:47
VekI can look into that for the future, though.23:48
vishysure23:48
vishy:)23:48
*** dirakx is now known as dirakx_afk23:50
*** MarkAtwood has quit IRC23:54
