Wednesday, 2011-01-12

*** hggdh has quit IRC00:00
*** troytoman has quit IRC00:03
*** adiantum has quit IRC00:10
*** adiantum has joined #openstack00:15
dragondmbtw, having the CLOUDSERVERS vars in novarc is nice, but there is a bug.  CLOUD_SERVERS_URL has the wrong port00:20
dragondmoops, wrong channel.00:21
*** phymata has quit IRC00:22
rluciodubsquared1: did you log a bug for the vm launch failure you saw earlier?00:24
rluciodubsquared1: b/c i am getting the same issue now too00:24
uvirtbotNew bug: #701731 in nova "icmp rules created with euca-authorize improperly use port-range for icmp type/code" [Undecided,New] https://launchpad.net/bugs/70173100:31
*** kashyapc has quit IRC00:34
*** rnirmal has joined #openstack00:36
*** jt_zg has joined #openstack00:44
*** sophiap has joined #openstack00:44
jt_zgHey all, I was wondering where I can find the api key that cyberduck is asking for00:44
*** jdurgin has quit IRC00:44
*** rlucio has quit IRC00:45
vishysoren: about the iscsi, it seems safe to add the rule to iscsi directly if we use the full targetname00:46
vishyfor example: iqn.2010-10.org.openstack:volume-0000000100:46
colinnichjt_zg: username is your account and username combined ie account:username and api key is your password00:46
jt_zgthanks colinnich I'll try that00:47
jt_zgcolinnich, that worked! Thanks00:47
colinnichjt_zg: cool, no problem00:48
uvirtbotNew bug: #701734 in nova "vm launch fails if security-group chain file already exists" [Undecided,New] https://launchpad.net/bugs/70173400:51
jt_zgcolinnich, Cyberduck seems to hang when connecting/listing directories. Is this common?00:52
*** Ryan_Lane has quit IRC00:52
*** kashyapc has joined #openstack00:53
vishysoren: without iscsidev.sh finding the actual sd* device is a little tough00:53
colinnichjt_zg: It worked for me last time I tried it, but I'm now using swauth for authentication and cyberduck doesn't seem to be compatible00:55
jt_zgcolinnich, thanks for letting me know. Any recommendations for 'pretty front-ends' to demo Swift to my boss?00:56
colinnichjt_zg: Not really, no - cyberduck is the only one I know of. And I wouldn't say it was pretty :-)00:57
jt_zgcolinnich, fair :P I was being generous00:57
jt_zgSo, it's safe to say I'm writing an API if I want to get the most bang for my buck with Swift?00:57
colinnichjt_zg: if you are using it in-house, then yes probably.00:58
jt_zgcolinnich, that's fair. I guess I'd better get to work!00:59
colinnichjt_zg: And I'd better get to bed, it's 1am01:00
jt_zgcolinnich, night! thanks for the help01:00
*** adiantum has quit IRC01:09
colinnichjt_zg: just thought of something... I take it cyberduck isn't on the same machine as swift?01:09
jt_zgcolinnich, right01:09
jt_zgI have swift running on 5 storage, 1 auth, 1 proxy server(dedicated). I have cyberduck running in an XP VM on my desktop machine01:10
colinnichjt_zg: have you changed the swift cluster url away from 127.0.0.1?01:10
colinnichjt_zg: either the default or at least your account01:10
jt_zgcolinnich, hmm, let me check. That's in the auth configs?01:10
jt_zgcolinnich, Hmm, it seems to be pointing to an internal IP01:11
colinnichjt_zg: the default is, yes. To find out what your account is, you could look in /etc/swift/auth.db01:11
colinnichjt_zg: that would explain cyberduck hanging if it couldn't connect to that ip01:11
jt_zgcolinnich, certainly does. I'll switch it to the external IP01:12
colinnichjt_zg: good luck, I'm definitely off to bed now01:12
jt_zgcolinnich, I believe you :P01:12
jt_zgthanks again01:12
*** dfg_ has quit IRC01:12
colinnichjt_zg: np01:12
*** adiantum has joined #openstack01:15
*** ccustine has quit IRC01:17
*** Ryan_Lane has joined #openstack01:19
*** ehazlett has joined #openstack01:21
ehazlettgreetings...  i'm following the novainstall doc on virtualbox -- when launching the instance i see it running but get "No route to host" when trying to ssh... any ideas?01:22
*** sophiap has quit IRC01:33
openstackhudsonProject nova build #387: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/387/01:33
openstackhudsonTarmac: Fixes bug #701575: run_tests.sh fails with a meaningless error if virtualenv is not installed. Proposed fix tries to use easy_install to install virtualenv if not present.01:33
uvirtbotLaunchpad bug 701575 in nova "install_venv.py does not install virtualenv if missing" [Undecided,Fix committed] https://launchpad.net/bugs/70157501:33
openstackhudsonTest by doing "run_tests.sh -V" on a system that has easy_install installed but not virtualenv.01:33
*** sophiap has joined #openstack01:41
xtoddxrlane: around?01:43
uvirtbotNew bug: #701749 in nova "volume creation doesn't recover from failure well" [Medium,Triaged] https://launchpad.net/bugs/70174901:46
*** adiantum has quit IRC01:46
uvirtbotNew bug: #701748 in nova "nova-volume is too hard to set up" [Low,In progress] https://launchpad.net/bugs/70174801:47
*** adiantum has joined #openstack01:52
*** joearnold has quit IRC01:55
*** dragondm has quit IRC01:57
*** ehazlett has quit IRC02:03
openstackhudsonProject nova build #388: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/388/02:03
openstackhudsonTarmac: Changing DN creation to do searches for entries.02:03
openstackhudsonThis change adds additional interoperability (as many directory servers and LDAP admins use cn, or another attribute, as the naming attribute). DN creation will incur a slight performance penalty for doing so, as DNs must be searched for now. User and project creation skip this performance penalty, as there is no need to search for an entry that is being created.02:03
*** adiantum has quit IRC02:03
*** dirakx has joined #openstack02:04
*** adiantum has joined #openstack02:09
*** schisamo has quit IRC02:11
*** Cybo has quit IRC02:13
creihtjt_zg: It also depends on your use case02:27
jt_zgcreiht, sorry, what does?02:27
creihtIf you are doing mainly backup there are tools that can integrate with swift (like duplicity)02:28
creihtin reference to whether or not you are going to need to write something to the api02:28
jt_zgMakes sense. I think we're going to be using Mezeo as a partner but we also want to go the EC2 bucket route also02:29
*** mray has quit IRC02:33
*** adiantum has quit IRC02:34
*** pvo has quit IRC02:38
*** Jordandev has joined #openstack02:38
*** adiantum has joined #openstack02:39
*** reldan has joined #openstack02:43
*** pvo_away has joined #openstack03:00
*** pvo_away has quit IRC03:03
*** pvo_away has joined #openstack03:03
*** pvo_away has quit IRC03:07
*** pvo_away has joined #openstack03:08
*** pvo_away has quit IRC03:08
*** pvo_away has joined #openstack03:15
*** pvo_away has joined #openstack03:16
*** pvo_away is now known as pvo03:16
*** pvo is now known as pvo_away03:17
*** pvo_away is now known as pvo03:17
*** pvo has joined #openstack03:18
*** ChanServ sets mode: +v pvo03:18
*** pvo is now known as pvo_away03:19
*** maple_bed has joined #openstack03:21
*** lorinh1 has joined #openstack03:21
*** maple_bed has quit IRC03:21
*** lorinh1 has left #openstack03:22
*** maplebed has quit IRC03:23
*** pvo_away is now known as pvo03:25
*** pvo has quit IRC03:27
*** adiantum has quit IRC03:34
*** adiantum has joined #openstack03:39
*** sophiap has quit IRC03:42
*** pvo_away has joined #openstack03:44
*** pvo_away is now known as pvo03:44
*** pvo has joined #openstack03:44
*** ChanServ sets mode: +v pvo03:44
*** daleolds has joined #openstack03:50
*** sophiap has joined #openstack03:55
*** lorin1 has joined #openstack03:59
*** lorin1 has quit IRC04:01
*** lorin1 has joined #openstack04:01
*** lorin1 has quit IRC04:02
*** adiantum has quit IRC04:03
*** sophiap has quit IRC04:04
*** kashyapc has quit IRC04:07
*** reldan has quit IRC04:08
*** adiantum has joined #openstack04:08
*** rnirmal has quit IRC04:20
*** pvo is now known as pvo_away04:26
*** ramkrsna has joined #openstack04:26
*** ramkrsna has joined #openstack04:26
jt_zgHow do you accomplish data spanning with Swift over several servers? I.e., you have 2 servers, each with capacity for 10Tb, but you have 11Tb of information. Do the containers handle that automagically?04:35
jt_zgOr are you limited by the largest continuous disk size on a server?04:36
jeremybjt_zg: 1st of all you never just have 2 servers04:43
jt_zgjeremyb, I understand that04:43
jeremybthe recommended minimum cluster size is 5 zones with 2 servers each04:43
jeremyberr04:43
jeremyb1 server each04:43
jeremybso 5 servers total04:44
jt_zgjeremyb, wasn't really my point. What I mean is, what if you have a cluster, but that cluster isn't large enough?04:44
jt_zgcan you span data to another cluster?04:44
jeremybit will fill up?04:44
jeremybthere's a finite max limit of what a given swift cluster can store04:44
jeremybthere's a configurable max size for an individual object04:44
jeremybyou can't have 11tb with the default of 5GB (and no one changes it)04:45
jt_zgthat's not my question...04:45
jeremybeach object is stored with 3 complete copies so you need at least 3x the original size04:45
*** kashyapc has joined #openstack04:46
jeremybwell please clarify then04:46
jt_zgImagine I have a 10Tb server. I fill it up with one client. That client has 11Tb of data. Can I stretch the container to another set of servers?04:46
jeremyb?04:47
jeremyba 10tb server or 10tb cluster?04:47
jt_zg10tb server. 10tb cluster of 3 storage nodes. I want to incorporate another set of 3 storage nodes to make up for that spare 1tb04:47
jt_zgIn essence, stretching and increasing that clients container to grow beyond the first set of storage bricks, on to another set to grow their container04:49
jeremybi still don't understand the scenario04:50
jeremybyou'd probably just add them to the cluster so you'd end up with a bigger cluster04:50
notmynamejt_zg: essentially, what you are talking about is how to expand a logical cluster04:52
jt_zgnotmyname, exactly04:52
notmynamea swift cluster is defined by the ring(s)04:52
notmynameso add servers to the appropriate ring, and you have more storage04:52
jt_zgoh, that's pretty easy04:53
notmynameyou can either add to existing zones or add new zones04:53
jeremybbut then the rings have to be pushed to all nodes04:53
notmynameyes04:53
notmynamebut that's as complicated as rsync04:53
* jeremyb can't remember if there's a built in tool to do it04:53
jt_zgso you just add other servers to a ring with a different mount device and the magic happens?04:53
notmynameessentially. replication will handle moving the appropriate partitions to the new servers04:54
jeremybis it a big deal if the rings are out of sync a little? what if it takes 5 mins to get to all the machines?04:54
jt_zgnotmyname, very cool. I wasn't sure if it was possible. And wanted to confirm before deploying more hardware :P04:54
notmynameheh, we do it all the time with cloud files04:54
jt_zgnotmyname, thanks, as always04:55
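The expansion recipe in the exchange above (add devices to the appropriate ring, push the new ring to all nodes, let replication move partitions) can be illustrated with a toy hash ring. This is not Swift's actual ring code; the rendezvous-hash scheme and the device names are illustrative assumptions only. The point it demonstrates is the one notmyname makes: when you grow the cluster, only the partitions claimed by the new device move, everything else stays put.

```python
import hashlib

def assign(partitions, devices):
    """Map each partition to the device with the highest hash score
    (rendezvous hashing): a toy stand-in for Swift's ring."""
    def score(part, dev):
        return hashlib.md5(f"{part}:{dev}".encode()).hexdigest()
    return {p: max(devices, key=lambda d: score(p, d))
            for p in range(partitions)}

# Hypothetical devices; grow a 3-device cluster to 4.
before = assign(1024, ["sda", "sdb", "sdc"])
after = assign(1024, ["sda", "sdb", "sdc", "sdd"])
moved = sum(1 for p in before if before[p] != after[p])
# every partition that moved now lives on the new device;
# partitions not claimed by "sdd" kept their old assignment
```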
*** ramkrsna has quit IRC04:55
jeremybis cloud files one huge cluster?04:55
notmynameper DC04:55
jeremybcan you say how big?04:55
notmynamewe have one cluster per DC04:55
jeremybhuh04:55
pandemicsynjeremyb: no built in tool, we build the rings in a directory served out by nginx04:55
notmynameI'm not really supposed to, as far as I know :-)04:55
jeremybdoes it have multi region yet?04:55
jt_zgWe tested Gluster. Data spanning...not a futile task :S04:56
pandemicsynthen we just dsh a script on the nodes that wgets the rings and verifies checksums and stuff04:56
jt_zgSwift...makes it a trivial task with rings apparently04:56
jeremybpandemicsyn: ohhhh, you can make them only editable from one node...04:56
notmynamepandemicsyn: am I right in thinking that one should keep zones balanced. if you have a few servers, add to existing zones. if you have a bunch of servers, add a new zone04:57
pandemicsynyea, it's better if they're similar04:57
*** dirakx has quit IRC04:58
notmynamejt_zg: ^ that04:58
pandemicsynif you have a zone thats significantly smaller you could have it at a lower weight in the ring though i guess04:58
pandemicsynthat would just help traffic wise though i guess04:58
jeremybso what about the window where some nodes have the new rings and some have old?04:59
notmynamejeremyb: it would be handled by each side (old and new) the same way an object failure would04:59
jeremybas in 404?04:59
notmynameso the servers with the old ring would try to push to one server, and replication will eventually move it to the right place05:00
jeremybfor GETS05:00
jeremyberr, GETs*05:00
notmynameno, it should be a 2xx response, but you may not be able to read the writes immediately05:00
notmynamethe proxy may have a new ring that doesn't know where the data is until replication moves it05:01
jeremybi mean for something written days earlier05:01
jeremybwhen i GET, how does it find it if it has the new ring?05:01
notmynameit checks the mtime on the ring file and reloads it if necessary, but the ring-builder tool doesn't allow you to shoot yourself in the foot that much. it prevents things from getting too out of sync. gholt wrote that code. he would be the expert on it05:02
notmynamepandemicsyn: do you remember those ring-builder options?05:03
pandemicsynsorry, R.app just locked my mac up05:04
*** trin_cz has quit IRC05:04
pandemicsynnotmyname: which ringer builder option ? the freeze/lockout time or whatever ?05:04
jeremybi guess that's a rackfiles client not a GNU R IDE? :)05:04
pandemicsynjeremyb: lol no its the R ide05:05
* pandemicsyn has a thing for graphs and stats 05:05
*** ivan has quit IRC05:05
jeremybhrmmm05:06
notmynamepandemicsyn: ya, I think I found it in the docstrings05:06
creihtWhen you make changes to the ring, it will only allow you to move one replica of a partition05:06
pandemicsynjust run "ring-builder" without args05:07
creihtthis allows the data to be still available while things are moving around05:07
pandemicsynits the "min_part_hours" part you want05:07
notmynamejeremyb: run swift-ring-builder with no options and you will see a bunch of text on how that works (only moving one replica of a partition at a time, etc)05:07
jeremybcreiht: but many partitions at a time?05:07
creihta minimal number05:07
jeremyb(partitions are the thing i have like 10,000 of, right?)05:07
creihtusually millions05:08
jeremybi was going to do 500k05:08
jeremybi'm having trouble imagining more than 50 spindles05:08
creihtjeremyb: how big do you plan on your cluster getting?05:08
creihtjeremyb: are you positive about that?05:09
creihtYou can almost get 50 spindles on one machine :)05:10
jeremybheh05:10
jeremybhttp://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/ :P05:10
*** ivan has joined #openstack05:11
creihtheh05:11
creihtnot the backblaze server again :)05:11
pandemicsyncreiht: usb port replicators FTW05:11
creihtlol05:11
creihtjeremyb: what size spindles?05:12
creiht2T, 1.5T?05:12
*** joearnold has joined #openstack05:12
creihtif they are 2T then that maxes out at 33T user storage05:13
jeremyb30.2TB05:14
creihthehe05:14
creihtyes05:14
creiht:)05:14
creihtk05:14
notmynameassuming perfect distribution and no buffer :-)05:14
creihtjeremyb: if that is all you need, then that will be more than plenty partitions05:15
creihtactually probably too many :)05:16
jeremybcreiht: current size (on one big ext3) is ~8TB. let's say 300TB max size (i was originally thinking 25TB but you got me thinking bigger). that's ~220 spindles at 1.5TB each. (some 1TB, some 2TB, maybe some 1.5TB)05:16
notmynameisn't the "goal" to have about 100 partitions on each node (assuming uniform nodes) when the cluster is full? This gives you control of about 1% of the space per partition05:16
jeremybhow hard is it to change partition count later?05:17
notmynameimpossible05:17
creihtnotmyname: A rough guideline is to have a minimum of 100 partitions per device at the max cluster size that you might get to05:17
creihtnot impossible, but lets pretend it is :)05:17
notmynameya, that's what I thought I remembered05:17
notmynameheh05:17
jeremybso, node is spindle not host, right?05:18
notmynamejeremyb: ok, not impossible, but it would require lots of downtime and code that isn't yet written05:18
jeremybsure05:18
creihtjeremyb: in the ring, that is correct05:18
notmynamenode == spindle == drive05:18
creihtand we aren't entirely sure what notmyname is suggesting would even work reasonably :)05:19
notmynameI'm not suggesting it! just being optimistic :-)05:19
creihthehe05:19
*** joearnold has quit IRC05:19
jeremybi'm sorry, make that 660 spindles (forgot to x3)05:19
creihthehe05:19
creihtalright lets round off to 70005:20
creiht:)05:20
jeremyb660 vs. my original plan of 500k is pretty good (original plan made while trying to sleep a month ago, don't remember what variables i used and it's not recorded anywhere)05:20
creihtso if you figured that 700 was the absolute max, then you want at least 70k partitions (700*100)05:21
creihthehe05:21
jeremyboh, i'm multiplying by 1k not 100...05:21
jeremybok, so i think i'll just go with 500k (5k spindles) and forget about it05:22
notmynameyou just increased your cluster size by an order of magnitude...05:22
creihtgive yourself a little headroom and 2**17 is 131K parts, or a little more with 2**18 for 262K parts05:23
creiht2**17 has a max recommended number of 1310 spindles05:23
jeremybyeah, maybe i'm making some irrational (or baseless) judgement about how expensive excess partitions are05:23
creiht2**18 has a max of 2621 spindles05:23
creihtthey aren't too bad, but you don't want to way over estimate05:24
notmynamemore partitions translates into more overhead for the system (data lost to fs metadata, time lost to creating/rebalancing rings, etc)05:24
creihtso I would say based on the minimal amount of information that you have given me so far, 2**17 or 2**18 should be fine for you05:24
jeremybright05:25
jeremybi'm partly limited because i can't imagine having the budget to have so many spindles (even if we had a cluster filled to the brim)05:26
creihtI should make a chart of the ring sizes, with the max spindles for each in the docs05:26
creihtright05:26
*** burris has quit IRC05:27
creihtWe estimated ours based on at what point we would run out of physical space/network/power05:27
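The sizing rule of thumb from this exchange (at least 100 partitions per drive at the maximum cluster size you might ever reach, rounded up to a power of two) can be checked with a quick back-of-the-envelope sketch; nothing here comes from the Swift codebase, it just reproduces the arithmetic above:

```python
import math

def part_power(max_spindles, parts_per_drive=100):
    """Smallest power-of-two exponent giving at least
    parts_per_drive partitions per drive at max cluster size."""
    return math.ceil(math.log2(max_spindles * parts_per_drive))

# 700 drives max -> need >= 70,000 partitions -> 2**17 = 131,072
# which matches the 1310-spindle ceiling quoted for 2**17
```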
notmynamecreiht: http://robhirschfeld.com/2011/01/12/openstack-swift-demo-in-a-browser/05:28
creiht:)05:29
creihtthat sounds familiar :)05:29
creihtthough he just has the auth working there05:29
notmynameI told myself that I wouldn't stay up past midnight working tonight05:34
* jeremyb too05:34
jeremybugh, why did i df? dead nfs wait! oh it's back05:35
* jeremyb has another question: what happens when one of the servers (proxy/container/storage/account) dies in the middle of a request? the client transparently retries?05:38
jeremybor if something != proxy broke does the proxy retry?05:38
pandemicsynyep05:39
creihtif the proxy dies, then the request dies05:39
creihtif something dies behind the proxy, it will try to work around it if it can05:39
creihtmore or less05:39
creiht:)05:39
creihtit depends a lot on the request05:39
jeremybhrmm05:39
jeremybanyone considered making client libs retry?05:39
jeremyb(configurably)05:40
creihtjeremyb: swift/common/client.py has that05:40
creiht:)05:40
jeremybcreiht: oh, well it was part of the original question and it sounded like the answer was no05:41
jeremybanyway, thanks05:41
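The retry behaviour creiht points to in swift/common/client.py can be sketched generically like this. The function name, the broad exception handling, and the doubling backoff are illustrative assumptions for this sketch, not the actual client code:

```python
import time

def with_retries(fn, attempts=5, backoff=1.0):
    """Call fn(), retrying on failure with doubling sleeps between
    attempts; re-raise the last error once attempts are exhausted.
    (Catching bare Exception is for illustration only.)"""
    delay = backoff
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2
```

A client built this way hides transient server failures from the caller, which is what the question about mid-request deaths was driving at: the proxy works around failed backends where it can, and the client library retries the whole request where it can't.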
*** f4m8_ is now known as f4m805:42
*** burris has joined #openstack05:42
jeremybnacht!05:43
creihthttp://etherpad.openstack.org/SwiftRingCapacity05:46
creihtIf anyone finds that useful to get a rough idea of what the different ring sizings will give you05:47
*** hadrian has quit IRC05:53
*** ramkrsna has joined #openstack05:59
*** adiantum has quit IRC06:03
jt_zgcreiht, thanks for that link!06:05
*** adiantum has joined #openstack06:09
*** adiantum has quit IRC06:14
*** jdurgin has joined #openstack06:14
*** mray has joined #openstack06:16
mtayloreday: ping06:17
*** adiantum has joined #openstack06:20
*** ramkrsna has quit IRC06:21
*** DubLo7 has quit IRC06:24
*** DubLo7 has joined #openstack06:25
*** ramkrsna has joined #openstack06:34
*** kashyapc has quit IRC06:36
*** kashyapc has joined #openstack06:40
*** adiantum has quit IRC06:46
*** adiantum has joined #openstack06:47
*** daleolds has quit IRC06:49
*** Jordandev has quit IRC07:08
*** aimon has quit IRC07:11
*** aimon has joined #openstack07:11
*** winston-d has quit IRC07:15
*** sandywalsh has quit IRC07:20
*** jfluhmann_ has quit IRC07:21
*** jdurgin has quit IRC07:32
*** miclorb has quit IRC07:37
*** ibarrera has joined #openstack07:43
*** maplebed has joined #openstack07:54
*** befreax has joined #openstack07:57
*** rcc has joined #openstack07:58
*** brd_from_italy has joined #openstack08:01
*** adiantum has quit IRC08:03
*** adiantum has joined #openstack08:09
*** maplebed has quit IRC08:15
*** adiantum has quit IRC08:19
*** calavera has joined #openstack08:19
*** adiantum has joined #openstack08:24
*** adiantum has quit IRC08:36
*** rcc has quit IRC08:41
*** adiantum has joined #openstack08:41
*** arthurc has joined #openstack08:48
*** adiantum has quit IRC08:59
*** MarkAtwood has joined #openstack09:00
*** adiantum has joined #openstack09:01
*** opengeard_ has joined #openstack09:10
*** adiantum has quit IRC09:11
*** opengeard_ has quit IRC09:11
*** adiantum has joined #openstack09:16
*** irahgel has joined #openstack09:23
openstackhudsonProject nova build #389: SUCCESS in 1 min 24 sec: http://hudson.openstack.org/job/nova/389/09:23
openstackhudsonTarmac: This branch adds web based serial console access.  Here is an overview of how it works (for libvirt):09:23
openstackhudson1. User requests an ajax console for an instance_id (either through OS api, or tools/euca-get-ajax-console)09:23
openstackhudsona. api server calls compute worker to complete request09:23
openstackhudsonb. compute worker parses an instance's xml to locate its pseudo terminal (/dev/pts/x)09:23
openstackhudsonc. compute worker spawns an ajaxterm daemon, bound to a random port in a specified range.  socat is used to connect to /dev/pts/x.  Note that ajaxterm was modified in the following ways:09:23
openstackhudsoni. dies after 5 minutes of inactivity09:23
openstackhudsonii. now requires token authentication.  Previously it was trivial to hijack an ajaxterm09:23
openstackhudsond. compute worker returns ajaxterm connect information to the api server: port, host, token09:23
openstackhudsone. api server casts connect information to the nova-ajax-console-proxy (a new service)09:23
openstackhudsonf. api server returns a url for the ajaxterm (eg. http://nova-ajax-console-proxy/?token=123)09:23
openstackhudson2. User now has a url, and can paste it in a browser09:23
openstackhudsona. Browser sends request to https://nova-ajax-console-proxy/?token=12309:23
openstackhudsonb. nova-ajax-console-proxy maps token to connect information09:23
openstackhudsonc. nova-ajax-console-proxy constructs a proxy to the ajaxterm that is running on the host machine.  This is now done with eventlet, though previously it was done using twisted09:23
openstackhudson3. User interacts with console through web browser09:23
openstackhudsonNOTE: For this to work as expected, serial console login must be enabled in the instance.  Instructions for how to do this on ubuntu can be found here: https://help.ubuntu.com/community/SerialConsoleHowto.  Note that you must actively log out of the serial console when you are finished, otherwise the console will remain open even after the ajaxterm term session has ended.09:23
openstackhudsonAlso note that nova.sh has been modified in this branch to launch nova-ajax-console-proxy.09:23
ttxyay09:23
*** littleidea has quit IRC09:31
*** littleidea has joined #openstack09:34
*** adiantum has quit IRC09:36
*** trin_cz has joined #openstack09:36
*** adiantum has joined #openstack09:41
*** adiantum has quit IRC09:52
*** MarkAtwood has quit IRC10:13
*** tomo_bot_______5 has joined #openstack10:32
*** tomo_bot_______4 has quit IRC10:32
*** allsystemsarego has joined #openstack10:32
*** tomo_bot_______6 has joined #openstack10:34
*** alekibango has quit IRC10:35
*** tomo_bot_______5 has quit IRC10:36
*** BK_man has joined #openstack10:38
sorenWeird. We have a couple of tests that fail with Python2.710:46
*** guigui has joined #openstack10:47
sorenErr.. Occasionally fail with python 2.710:47
ttxsoren: what does it take to autoclose a bug on branch merge ? It seems to happen sometimes but not all the time10:48
sorenttx: Example of where it didn't work=10:48
soren?10:48
ttxhttps://bugs.launchpad.net/nova/+bug/68116410:49
uvirtbotLaunchpad bug 681164 in nova "Use a search to find DNs instead of creating them directly from attributes" [Medium,Fix committed]10:49
ttxI just closed it myself10:49
sorenttx: I'm not sure. http://hudson.openstack.org/job/nova-tarmac/51641/console looks ok.10:52
sorenOh.10:53
sorenheh.10:53
sorenIt requires that the bzr branch actually itself claims to fix the bug.10:53
sorenthat one didn't.10:53
ttxthe --fixes stuff ?10:53
sorenSomeone must have manually linked the branch and the bug.10:53
sorenYes.10:53
sorenThis is correct behaviour, IMO.10:53
*** trin_cz has quit IRC10:54
sorenJust linking a bug and a branch doesn't necessarily mean that the branch fixes the bug. They're just somehow related.10:54
ttxright, but then we need to encourage use of --fixes if that's the only way to autoclose10:54
sorenCertainly.10:55
sorenOk, this makes no sense.10:55
sorenI'm on a Natty box.10:55
soren/usr/bin/python points to /usr/bin/python2.710:55
sorenIf I run "python run_tests.py", it fails. If I run "python2.7 run_tests.py", it works.10:56
ttxthat sounds strange10:58
ttxabout --fixes, does using it automatically link the resulting branch to the bug ?10:58
sorenYes.10:59
ttxok10:59
*** ewanmellor has quit IRC11:00
*** tomo_bot_______6 has quit IRC11:03
ttxsoren: could you set https://code.launchpad.net/~annegentle/nova/fixnewscript/+merge/45085 to Approved ? I think it's good now11:03
*** tomo_bot_______6 has joined #openstack11:04
sorenttx: Done.11:04
*** tomo_bot_______7 has joined #openstack11:05
*** fabiand_ has joined #openstack11:08
*** tomo_bot_______6 has quit IRC11:08
*** colinnich has quit IRC11:12
*** colinnich has joined #openstack11:12
openstackhudsonProject nova build #390: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/390/11:13
openstackhudsonTarmac: Had to abandon the other branch (~annegentle/nova/newscript) because the diffs weren't working right for me. This is a fresh branch that should be merged correctly with trunk. Thanks for your patience. :)11:13
sorenThat's going to look awesome in the ChangeLog.11:14
ttxheh11:14
ttxsoren: https://code.launchpad.net/~openstack-gd/nova/nova-avail-zones/+merge/44878 pep8 woes were fixed so it is also ready for an Approved switch11:15
ttxthis proxying is a bit inefficient, I should fix that.11:16
sorenUm.. No.11:19
sorenHe fixed a fraction of the pep8 problems.11:19
ttxah? I just checked them...11:19
ttxmaybe I'm not using the same --anal-checks flags11:20
sorenLooking at the million things hudson complained about, and then looking at the one patch he seems to have added since then: http://bazaar.launchpad.net/~openstack-gd/nova/nova-avail-zones/revision/49011:20
sorenOr was 489 after the attempted merge, too?11:21
* ttx digs deeper11:21
sorenHm...You seem to be right.11:21
ttxthat message about rev490 is a bit confusing indeed11:24
ttxsince the new merge proposal includes 489 and 490.11:24
sorenOk, reviewed. It looks good.11:25
sorenre-approved.11:26
*** reldan has joined #openstack11:30
*** tomo_bot_______7 has quit IRC11:30
*** tomo_bot_______7 has joined #openstack11:30
openstackhudsonProject nova build #391: SUCCESS in 1 min 23 sec: http://hudson.openstack.org/job/nova/391/11:33
openstackhudsonTarmac: Added support of availability zones for compute.11:33
openstackhudsonmodels.Service got additional field availability_zone and was created ZoneScheduler that make decisions based on this field.11:33
openstackhudsonAlso replaced fake 'nova' zone in EC2 cloud api.11:33
*** littleidea has quit IRC11:35
*** tomo_bot_______8 has joined #openstack11:35
*** tomo_bot_______7 has quit IRC11:35
uvirtbotNew bug: #701864 in nova "nova.tests.test_cloud.CloudTestCase.test_associate_disassociate_address fails under python 2.7" [Medium,In progress] https://launchpad.net/bugs/70186411:35
*** kashyapc has quit IRC11:40
*** trin_cz has joined #openstack11:44
*** sandywalsh has joined #openstack11:50
sorenMuhahah.11:51
* soren updates http://wiki.openstack.org/Nova/EucalyptusFeatureComparison for the second time today.11:51
sandywalsho/11:52
*** ctennis has quit IRC12:02
*** skrusty has quit IRC12:04
*** gustavomzw has joined #openstack12:14
*** skrusty has joined #openstack12:16
*** gustavomzw has quit IRC12:16
WhoopAnyone running OpenStack successfully on Ubuntu Lucid?12:27
*** ctennis has joined #openstack12:27
ttxWhoop: Nova, Swift ? Version ?12:28
WhoopSorry, Nova - don't mind the version (I've never used OpenStack prior - I just dont want to attempt on an OS with known problems)12:28
ttxI have been running the bleeding-edge trunk successfully... but there is lots of flux right now with FeatureFreeze coming up tomorrow, so YMMV12:29
uvirtbotNew bug: #701880 in nova "[BFE] Merge ~citrix-openstack/nova/xenapi-glance-2 for bexar-xenapi-support-for-glance blueprint" [Undecided,New] https://launchpad.net/bugs/70188012:31
WhoopAt a guess, how long do you reckon it'd take for someone (me) to install from scratch if they've never used it before?12:31
sorenWhoop: Depends. Single machine install... 5 minutes.12:37
sorenWhoop: Multi-node install. More.12:37
Whoopwell I mean the central management machine :P12:39
Whoopnot on each node :)12:39
Whoop(each node is easy - tis all the same, can auto deploy that stuff)12:39
*** DubLo7 has quit IRC12:40
sorenThere is no "central management machine" in Nova.12:40
WhoopHmmm ok12:40
WhoopGuess I best go read shit12:40
Whoopta12:40
*** kashyapc has joined #openstack12:52
*** rackerhacker is now known as rkrhkr12:53
*** rkrhkr is now known as rackerhacker12:53
*** pvo_away is now known as pvo12:57
*** pvo is now known as pvo_away12:58
*** pvo_away is now known as pvo12:58
*** reldan has quit IRC12:59
*** sandywalsh has quit IRC13:03
*** sandywalsh has joined #openstack13:05
*** ramkrsna has quit IRC13:08
*** DubLo7 has joined #openstack13:08
sorenSo, Swift people... What are the odds the S3 API branch is going to land for Bexar?13:24
notmynamecreiht: ^13:31
ttxit's proposed, pending reviews, I think the odds are reasonably good13:33
*** drico has quit IRC13:36
*** reldan has joined #openstack13:47
*** hadrian has joined #openstack13:48
soren\o/13:54
soren\o/ * 2, even.13:54
soren\o/ that the S3 API branchis likely to land, and \o/ that deja-dup now has Cloud Files support.13:54
*** jdarcy has joined #openstack13:55
*** westmaas has joined #openstack13:56
*** adiantum has joined #openstack13:59
uvirtbotNew bug: #701904 in nova "Logging handler warning when running nova-manage" [Undecided,New] https://launchpad.net/bugs/70190414:06
*** jfluhmann has joined #openstack14:14
*** gustavomzw has joined #openstack14:15
* soren takes a (long) break14:16
*** gustavomzw has quit IRC14:16
*** gondoi has joined #openstack14:23
*** ppetraki has joined #openstack14:26
*** opengeard has quit IRC14:28
*** pvo is now known as pvo_away14:31
*** adiantum_ has joined #openstack14:36
*** nelson__ has quit IRC14:38
*** nelson__ has joined #openstack14:39
*** jdarcy has quit IRC14:44
*** pvo_away is now known as pvo14:47
*** dirakx has joined #openstack14:55
*** jdarcy has joined #openstack14:57
*** zul has joined #openstack15:01
*** sandywalsh has quit IRC15:18
*** rcc has joined #openstack15:19
*** zul has quit IRC15:21
*** sandywalsh has joined #openstack15:25
*** hadrian has quit IRC15:26
*** reldan has quit IRC15:40
*** hggdh has joined #openstack15:43
*** zul has joined #openstack15:51
*** reldan has joined #openstack15:53
*** abecc has joined #openstack15:55
*** dragondm has joined #openstack15:56
*** rnirmal has joined #openstack15:57
*** hggdh has quit IRC15:59
*** hggdh has joined #openstack15:59
*** fabiand_ has quit IRC16:09
*** Guest57763 has joined #openstack16:10
*** fabiand_ has joined #openstack16:11
*** abecc has quit IRC16:12
*** fabiand_ has quit IRC16:12
*** gustavomzw has joined #openstack16:15
*** f4m8 is now known as f4m8_16:16
*** gustavomzw has quit IRC16:18
*** dfg_ has joined #openstack16:23
*** blamar has joined #openstack16:26
*** henrichrubin has joined #openstack16:32
henrichrubinhi, anyone know how to debug this:  "ERROR:root:AMQP server on localhost:5672 is unreachable. Trying again in 10 seconds."  occurs on nova-compute during "nova.sh run".  i was previously able to run nova using this same machine and code branch.16:33
*** kashyapc has quit IRC16:34
jaypipesannegentle: pls see my note to you here: https://code.launchpad.net/~morita-kazutaka/nova/sheepdog/+merge/45093. thanks!16:42
jaypipeshenrichrubin: do you have rabbit-mq running?16:43
*** calavera has quit IRC16:44
henrichrubinjaypipes:  yes "# ps -ef | grep rabbit16:45
henrichrubinrabbitmq  1865     1  0 Jan07 ?        00:00:00 /usr/lib/erlang/erts-5.7.4/bin/epmd -daemon"16:45
*** dragondm has quit IRC16:45
jaypipeshenrichrubin: does it only occur when running nova.sh or does it happen also when running nova-compute (or nova-scheduler) separately?16:46
*** gustavomzw has joined #openstack16:47
*** gustavomzw has quit IRC16:48
*** Cybo has joined #openstack16:50
*** kashyapc has joined #openstack16:51
*** Guest57763 has left #openstack16:54
henrichrubinjaypipes:  i killed the rabbitmq process and manually restarted.  same error occurs using either nova.sh or manually running nova-compute.16:54
jaypipeshenrichrubin: k.  please report a bug then. that shouldn't be happening :)16:55
jaypipeshenrichrubin: pls mention in the bug report that same error occurs regardless of using nova.sh or standalones..16:55
jaypipeshenrichrubin: ty!16:55
henrichrubinjaypipes:  ok, i'll report a bug.  any other ideas how to fix it?  or why the server is unreachable, what can i do to test manually?16:57
*** Ryan_Lane is now known as Ryan_Lane|away16:58
*** jfluhmann has quit IRC16:59
jaypipeshenrichrubin: not sure. vishy, any suggestions?16:59
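Worth noting before filing the bug: the `ps` output pasted above only shows `epmd` (the Erlang port mapper daemon), not the rabbit beam process itself, which may explain the refused connections. A minimal, self-contained way to check whether anything is actually listening on the AMQP port (5672 is rabbit's default); the function name and defaults here are illustrative, not nova code:

```python
import socket

def amqp_reachable(host="localhost", port=5672, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this prints False, the "AMQP server unreachable" error is a plain
# connectivity problem (rabbitmq-server not running, wrong port, or a
# firewall), not a nova bug.
print(amqp_reachable())
```

If it prints False, `rabbitmqctl status` (run as root) will usually say whether the broker itself is up.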
*** brd_from_italy has quit IRC16:59
*** dendrobates is now known as dendro-afk17:01
*** rnirmal_ has joined #openstack17:01
*** dendro-afk is now known as dendrobates17:02
*** sophiap has joined #openstack17:03
*** rnirmal has quit IRC17:04
*** rnirmal_ is now known as rnirmal17:04
*** dragondm has joined #openstack17:06
edaymtaylor: pong17:08
annegentlejaypipes: taking a look now.17:08
mtayloreday: I was going to ask you a question about that exception test you did a while back, but I got it sorted17:09
colinnichnotmyname: Hi. What DNS library does cname_lookup require?17:09
notmynamednspython17:09
edaymtaylor: ahh, ok17:09
colinnichnotmyname: I feared that - installing that library seems to break my system17:10
colinnichnotmyname: proxy-server doesn't start properly and st hangs too17:10
notmynamedid you install it with the deb or with easy_install17:10
colinnichwith apt-get17:10
colinnichapt-get install python-dnspython17:11
*** ksteward has quit IRC17:11
notmynameya, some other people have said that it breaks with the debian package. easy_install it, and it should work. we probably should make a more recent deb and put it in a ppa17:12
colinnichnotmyname: ok, I'll give that a go17:12
*** ccustine has joined #openstack17:13
colinnichnotmyname: result. All is well, and proxy-server starts with cname_lookup loaded... now to test.. thanks17:14
notmynameglad it works17:16
*** jfluhmann has joined #openstack17:16
colinnichnotmyname: was planning on doing some documentation for cname and domain_remap. Need to get contribution permission first from my boss....17:17
notmynamegreat!17:17
colinnichnotmyname: also fixed a small problem with domain_remap relating to lower case reseller prefix17:17
colinnichnotmyname: cnames working :-)17:21
notmynameyay!17:21
colinnichnotmyname: everything working great for me now, using trunk (and swauth)17:22
creihtcolinnich: nice17:23
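For context on the domain_remap fix colinnich mentions: the middleware rewrites a vanity hostname like `container.account.<storage-domain>` into a swift storage path. A toy sketch of that mapping — names, defaults, and behavior here are illustrative assumptions, not swift's actual code (the real middleware lives in swift's source tree):

```python
def remap_domain(host, storage_domain="example.com", reseller_prefix="AUTH"):
    """Map 'container.account.<storage_domain>' to a swift storage path."""
    if not host.endswith("." + storage_domain):
        return None  # not a storage domain; leave the request alone
    parts = host[:-len(storage_domain) - 1].split(".")
    if len(parts) == 2:
        container, account = parts
    elif len(parts) == 1:
        container, account = None, parts[0]
    else:
        return None
    prefix = reseller_prefix + "_"
    if account[:len(prefix)].upper() == prefix:
        # DNS names are case-insensitive, so normalize the prefix's case;
        # a case-sensitive comparison here is the sort of lower-case
        # reseller-prefix bug mentioned above.
        account = prefix + account[len(prefix):]
    else:
        account = prefix + account
    return "/v1/%s/%s" % (account, container) if container else "/v1/%s" % account
```

For example, `remap_domain("cont.acct.example.com")` yields `/v1/AUTH_acct/cont`, and a hostname with a lower-cased `auth_` prefix is normalized rather than double-prefixed.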
*** guigui has quit IRC17:24
*** befreax has quit IRC17:24
*** ibarrera has quit IRC17:33
*** hadrian has joined #openstack17:36
*** mray has joined #openstack17:37
*** rnirmal has quit IRC17:45
*** rnirmal has joined #openstack17:46
*** rlucio has joined #openstack17:47
*** jdurgin has joined #openstack17:48
uvirtbotNew bug: #702010 in nova "novarc template points CloudServer Auth URL to wrong port" [Undecided,New] https://launchpad.net/bugs/70201017:51
*** maplebed has joined #openstack17:51
*** joearnold has joined #openstack17:53
*** maplebed has quit IRC17:56
*** sophiap has quit IRC17:59
*** sophiap_ has joined #openstack17:59
jk0has anything changed recently in trunk that would prevent API requests from showing up in nova-api stdout?18:04
*** adiantum_ has quit IRC18:04
*** irahgel has left #openstack18:05
*** rlucio has quit IRC18:06
*** Ryan_Lane|away is now known as Ryan_Lane18:07
*** maplebed has joined #openstack18:07
edayjk0: the newlog branch perhaps, it might be going to a proper log no18:09
edaynow18:09
jk0ah18:09
jk0hm18:09
*** dirakx has quit IRC18:10
jk0doesn't look like it is -- I'm running verbose and nodaemon18:10
dabook, I've fixed the merge problems with https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537.18:13
daboFor some reason bazaar removed and re-added several files that had been added in a previous trunk merge, making for a messy diff. Sorry about that.18:13
*** troytoman has joined #openstack18:14
*** irahgel has joined #openstack18:19
*** ccustine has quit IRC18:23
*** rlucio has joined #openstack18:30
sandywalshsuper fast review for some keen core devs? https://code.launchpad.net/~sandy-walsh/nova/lp702010/+merge/4602218:33
jk0it's got my approval :)18:35
sandywalshnice ... one more core and we're golden (thx to vishy)18:37
*** ccustine has joined #openstack18:37
vishyany devs feel like tackling this one? https://bugs.launchpad.net/nova/+bug/70204018:42
uvirtbotLaunchpad bug 702040 in nova "Virtio net drivers do not work" [Undecided,New]18:42
openstackhudsonProject nova build #392: SUCCESS in 1 min 26 sec: http://hudson.openstack.org/job/nova/392/18:48
openstackhudsonTarmac: My previous modifications to novarc had CLOUDSERVER_AUTH_URL pointing to the ec2 api port. Now it's correctly pointing to os api port.18:48
*** trin_cz has quit IRC18:49
*** rcc has quit IRC18:49
sandywalsheday, are you suggesting I take all the flags out of -api and -combined and put them in flags?18:50
edaysandywalsh: at least that group of 4 (port/host)18:50
xtoddxsandywalsh: look at the lp:~anso/nova/wsgirouter branch, it involves moving that stuff away and into wsgi configuration18:50
xtoddxs/wsgi configuration/paste.deploy configuration/18:51
edayoh, probably better just to wait for that then18:51
edayignore me :)18:51
sandywalshxtoddx, the reason I needed it in flags was for the template generation, it isn't required in flags otherwise18:52
uvirtbotNew bug: #702040 in nova "Virtio net drivers do not work" [Undecided,New] https://launchpad.net/bugs/70204018:52
sandywalsheday, this is a small fix for a real pita. Perhaps it's easier for the wsgirouter branch to adjust rather than wait?18:54
xtoddxsandywalsh: i didn't think that flag was being used else where18:54
xtoddxsandywalsh: go ahead and merge yours and i'll fix mine later (yours is pretty small)18:55
sandywalshxtoddx, it wasn't, but I added cloudservers vars to novarc18:55
sandywalshxtoddx, thx!18:55
*** lcfseth has joined #openstack19:01
*** kpepple has joined #openstack19:04
*** reldan has quit IRC19:06
*** belred has joined #openstack19:07
belredi want to look into using openstack for my company.   I'm looking at the openstack web site, and I'm having trouble finding a starting place to lean about it, what it offers and a user's guide.  Where should I start?19:09
kpepplebelred: are you looking for features / functionality or more architectural / technical discussions ?19:10
belredI want to read both19:11
belredi guess features and functionality would a be simpler to start with.19:12
belredI've clicked through so many pages, but I feel lost.19:12
kpepplesoooo … openstack is long on technical / code detail but fairly short on overview / deployment advice. Although this probably isn't what you want to hear, the best way to get what you want may be to pull the code and compile the docs :(19:13
kpepplethese docs are migrating to the wiki (and should be mostly there by Bexar release in early February) but mostly aren't there today.19:14
rluciobelred: you look at this? http://nova.openstack.org/19:14
rluciothat should be the most up-to-date info19:14
rlucioand at least the key concepts should give you an idea about whats going on19:14
belredthanks19:15
kpepplerlucio: thanks, didn't know that was up19:15
kpepplebelred: the link that rlucio gave is the docs from source19:15
belredSo, the complete documentation is checked into bzr?19:15
rlucionp19:15
*** hggdh has quit IRC19:15
rlucioyes19:15
kpepplebelred: i believe all docs are in the doc/ directory in bzr19:15
rluciothe docs are checked in19:15
sandywalshbelred, annegentle handles openstack docs19:15
fraggelnnova != swift right?19:16
rlucioright :)19:16
*** belred has quit IRC19:16
sandywalshfraggeln, correct. nova = compute, swift = storage19:16
sandywalshfraggeln, glance = swift for nova images19:16
*** mdomsch has joined #openstack19:17
soren\o/19:17
* soren now has code in bzr19:17
sorenI mean.. I've got a patch accepted into bzr. I've had code managed "in bzr" for a while :)19:18
Whoop:q19:18
Whoopdoh, ignore >_<19:18
sorenIt's spelled "/quit"19:19
soren:)19:19
*** EdwinGrubbs is now known as Edwin-afk219:22
jdurginsoren: I updated https://code.launchpad.net/~jdurgin/nova/rbd_volume/+merge/45091 if you could take another look when you have a moment that'd be great19:23
*** paultag has joined #openstack19:25
czajkowskipaultag: aloha19:28
czajkowskipaultag: meet soren19:28
czajkowskisoren: meet paultag19:28
sorenjdurgin: approved19:29
sorenpaultag: o/19:29
*** odyi has quit IRC19:32
paultagheyya soren19:32
paultagsoren: czajkowski insisted that I get in touch with you guys while I go about my job searches :)19:32
paultagI can't say that I complained too much ;)19:33
czajkowskiI said talk to not insist wise ass19:33
paultagoi oi!19:33
sorenpaultag: Heh :)19:35
soren-> /msg19:36
paultagcheers :)19:36
openstackhudsonProject nova build #393: SUCCESS in 1 min 24 sec: http://hudson.openstack.org/job/nova/393/19:38
openstackhudsonTarmac: This branch adds a backend for using RBD (RADOS Block Device) volumes in nova via libvirt/qemu.19:38
openstackhudsonThis is described in the blueprint here: https://blueprints.launchpad.net/nova/+spec/ceph-block-driver19:38
openstackhudsonTesting requires Ceph and the latest qemu and libvirt from git. Instructions for installing these can be found on the Ceph wiki (http://ceph.newdream.net/wiki/#Getting_Ceph and http://ceph.newdream.net/wiki/QEMU-RBD).19:38
*** lcfseth has left #openstack19:41
sorenjdurgin: ^ congrats :)19:43
*** ccustine has quit IRC19:44
daboThanks to eday, I've fixed the merge issues. Reviews please! https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/4553719:44
*** brd_from_italy has joined #openstack19:47
*** sophiap_ has quit IRC19:48
jdurginsoren: thanks!19:50
sorenjdurgin: Thank *you*!19:51
jt_zgIs there a way to contribute to the documentation here: http://swift.openstack.org/howto_installmultinode.html ?19:53
*** brd_from_italy has quit IRC19:53
kpepplejt_zg: that's probably a rst doc in the doc/ tree within the swift bzr repo.19:55
jt_zgkpepple, thanks for the tip. I'll check that out!19:55
annegentlejt_zg: yep, that's right. it's an RST doc. (kpepple is fast on the keyboard!)19:57
jt_zgannegentle, Indeed he is. I felt my hair whooshing around.19:57
*** arthurc has quit IRC19:57
jt_zgannegentle, also want to say hi. Just joined your doc group :)19:58
kpeppleannegentle: what are you using to edit the rst docs ? is there a good editor .. beyond just using vim ?19:58
*** reldan has joined #openstack19:59
*** opengeard has joined #openstack19:59
annegentlejt_zg: oh, hi! Thanks for making the connection. I just use a text editor. With a spell check :)19:59
*** westmaas has quit IRC20:01
jt_zgannegentle, is there any set of docs that require some work?20:02
devcamcarhey all, after lunch i'll be dropping the nebula dashboard code into launchpad20:03
devcamcartoday is the day20:03
sorenI would really appreciate a review of https://code.launchpad.net/~soren/nova/lp701864/+merge/45976   It's quite straight forward and it's blocking package builds on Natty.20:03
creihtwoot20:03
sorendevcamcar: Project name?20:03
kpeppledevcamcar: is this the django dashboard ?20:03
*** brd_from_italy has joined #openstack20:04
devcamcarit will be 2 launchpad repos20:04
devcamcarone called django-nova which is the django module20:04
devcamcarand openstack-dashboard-django or something like that, need to talk to ttx about naming, but this one is a simple reference django site that uses the module20:04
*** westmaas has joined #openstack20:06
sorendevcamcar: Cool. Let me know when it's up. I'm really, really looking forward to seeing it.20:08
devcamcarsoren: here is a sneak peek: https://launchpad.net/django-nova20:08
devcamcarit needs documentation20:08
annegentlejt_zg: here's the set of goals for Bexar (the current release) http://www.openstack.org/blog/2010/11/doc-plans-for-upcoming-openstack-releases/ Some of it is done, but tutorials are not yet complete.20:08
devcamcarand i haven't pushed the reference site to make use of it yet, but there is code to browse20:09
*** johnpur has joined #openstack20:09
*** ChanServ sets mode: +v johnpur20:09
sorendevcamcar: It's really the reference site I'm dying to see :)20:09
devcamcaryep, after lunch, which i am now off to :)20:10
sorendevcamcar: Enjoy.20:10
devcamcari'll let you know when its up20:10
annegentledevcamcar: I wonder if just one new document on the nova site would cover it? Feel free to ping me for doc ideas.20:10
devcamcarannegentle: will do :)20:10
*** MarkAtwood has joined #openstack20:11
*** BK_man has quit IRC20:11
*** kpepple has left #openstack20:15
*** adiantum has quit IRC20:15
xtoddxquick 2-line bugfix review needed: https://code.launchpad.net/~anso/nova/managelog/+merge/4603320:16
*** mray has quit IRC20:21
jt_zgannegentle, I'll do some reading and get back to you if you'd like20:22
annegentlejt_zg: please do, feel free to email anne@openstack.org or just talk to me here on IRC20:23
jt_zgannegentle, will do. #5 on that list caught my eye20:24
*** opengeard has quit IRC20:24
*** trin_cz has joined #openstack20:24
annegentlejt_zg: excellent. there's a huge need for tutorials :)20:28
jt_zgannegentle, my area of expertise! I'll spin up some VPS' this weekend and try and bang something out for 5.1 and 5.220:29
daboxtoddx: reviewed and approved20:31
xtoddxdabo: thx20:31
uvirtbotNew bug: #702106 in swift "Functional Tests in test/functional Should Look For Config File in /etc" [Undecided,New] https://launchpad.net/bugs/70210620:36
*** reldan has quit IRC20:39
*** westmaas has quit IRC20:40
uvirtbotNew bug: #702107 in swift "swauth bins should default to the right port if no port is given" [Low,Triaged] https://launchpad.net/bugs/70210720:41
*** fabiand_ has joined #openstack20:42
*** westmaas has joined #openstack20:43
*** hggdh has joined #openstack20:44
*** sophiap has joined #openstack20:46
*** reldan has joined #openstack20:55
*** fabiand_ has quit IRC20:58
*** mdomsch has quit IRC20:58
*** ctennis has quit IRC21:00
*** Cybo has quit IRC21:00
*** hggdh has quit IRC21:08
*** hggdh has joined #openstack21:08
*** hggdh has quit IRC21:10
*** hggdh_ has joined #openstack21:10
sorenSave the Ubuntu Natty builds! Approve https://code.launchpad.net/~soren/nova/lp701864/+merge/45976 !21:10
*** daleolds has joined #openstack21:11
*** BK_man has joined #openstack21:13
*** Edwin-afk2 is now known as EdwinGrubbs21:14
*** kpepple has joined #openstack21:16
*** DubLo7 has quit IRC21:17
*** nati has joined #openstack21:22
*** ctennis has joined #openstack21:22
*** ctennis has joined #openstack21:22
*** abecc has joined #openstack21:23
_0x44jaypipes: I made those updates you requested.21:23
vishysome of you may find this useful: http://unchainyourbrain.com/openstack/12-testing-nova-openstack-compute-with-vagrant-and-chef21:28
*** westmaas has quit IRC21:30
devcamcarxtoddx: approved managelog branch21:32
xtoddxdevcamcar: thanks21:33
openstackhudsonProject nova build #394: SUCCESS in 1 min 22 sec: http://hudson.openstack.org/job/nova/394/21:33
openstackhudsonTarmac: _wait_with_callback was changed out from under suspend/resume. fixed.21:33
*** ccustine has joined #openstack21:33
henrichrubinjaypipes:  i was able to fix the rabbit-mq error by manually removing and reinstalling it. note, that i had to use dpkg --purge for it to work fully.21:34
henrichrubinbut now i have this error when i try to retrieve novarc from nova-manage "AttributeError: 'unicode' object has no attribute 'access'."  any idea?21:35
natiI think there is a bug in get_environment_rc in auth/manager.py21:37
natiMethod signature is wrong  self.__generate_rc(user.access, user.secret, pid, use_dmz)21:38
natiThis should be  self.__generate_rc(user,  pid, use_dmz)21:38
annegentlevishy: nice... added a link to it on http://wiki.openstack.org/NovaVirtually21:40
*** miclorb_ has joined #openstack21:40
openstackhudsonProject nova build #395: SUCCESS in 1 min 23 sec: http://hudson.openstack.org/job/nova/395/21:43
openstackhudsonTarmac: Initialize logging in nova-manage so we don't see errors about missing handlers.21:43
*** gustavomzw has joined #openstack21:44
sorenvishy: https://code.launchpad.net/~soren/nova/lp701864/+merge/45976 Can you have another look see?21:47
*** kpepple has left #openstack21:49
sorenvishy: How slow is your network really? (looking at bug 702040)21:52
uvirtbotLaunchpad bug 702040 in nova "Virtio net drivers do not work" [Undecided,New] https://launchpad.net/bugs/70204021:52
*** dirakx has joined #openstack21:52
vishyone sec, looking up my tests21:53
henrichrubinnati:  thx.  you are right.  do you want me to file a bug report or do you?21:54
natihenrichrubin: If you don't mind, would you please send report? I'm working on IPV6 now :)21:55
vishysoren: as i recall it was approx 100 MB/s vs 300 MB/s21:55
vishyboth are slow, but non-virtio is just terrible21:55
henrichrubinnati:  ok, i'll do it.21:57
sorenvishy: 100 MB/s is slow?!? What sort of alien gear do you guys have at NASA?21:57
vishy10GE21:58
soren100MB/s is exactly what you can expect when you're emulating a gigabit nic.21:58
vishyk apparently i misremembered or perhaps the network is a bit more loaded21:59
vishyvirtio: 571.4375 MB /  10.00 sec =  479.2965 Mbps 97 %TX 35 %RX 0 retrans 1.52 msRTT21:59
sorenYeah, that's not impressive.21:59
vishywhich is what 60?21:59
sorenWhich kernel is this on? (host, I mean)21:59
vishyI seem to remember it better than that21:59
vishyhost kernel is old21:59
vishylucid kernel and probably outdated22:00
sorenI forget when macvtap landed.22:00
sorenPost-lucid for sure.22:00
*** odyi has joined #openstack22:01
sorenBut yeah, without virtio you're not going to get above a gigabit.22:02
vishysoren: non-virtio to virtio 237.1875 MB /  10.00 sec =  198.9326 Mbps 99 %TX 35 %RX 0 retrans 1.16 msRTT22:03
vishysoren: non to non  205.3750 MB /   9.99 sec =  172.3916 Mbps 99 %TX 70 %RX 0 retrans 0.63 msRTT22:05
vishysoren: virtio to non-virtio 369.7717 MB /  10.00 sec =  310.0667 Mbps 7 %TX 87 %RX 0 retrans 0.97 msRTT22:06
vishyseems odd that we are well under a gig with virtio22:07
vishyconsidering it is 10G underneath.22:07
vishyperhaps we just need to upgrade the hosts22:07
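On vishy's "which is what 60?": the benchmark output (the format looks like nuttcp's, though that's an assumption) reports megabits per second, so dividing by 8 gives MB/s — the 479.2965 Mbps virtio run is indeed roughly 60 MB/s, well under the 10GE line rate:

```python
def mbps_to_mbytes_per_sec(mbps):
    """Convert megabits/s to megabytes/s (8 bits per byte)."""
    return mbps / 8.0

# The throughput figures quoted above, converted:
for label, rate in [("virtio -> virtio", 479.2965),
                    ("non-virtio -> virtio", 198.9326),
                    ("non-virtio -> non-virtio", 172.3916),
                    ("virtio -> non-virtio", 310.0667)]:
    print("%-26s %8.4f Mbps = %5.1f MB/s" % (
        label, rate, mbps_to_mbytes_per_sec(rate)))
```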
*** drico has joined #openstack22:10
*** ChumbyJay has joined #openstack22:10
*** hggdh_ has quit IRC22:14
*** ppetraki has quit IRC22:15
sorenvishy: It would be an interesting experiment at least.22:16
*** mdomsch has joined #openstack22:20
openstackhudsonProject nova build #396: SUCCESS in 1 min 24 sec: http://hudson.openstack.org/job/nova/396/22:23
openstackhudsonTarmac: Fix test failures on Python 2.7 by eagerly loading the fixed_ip attribute on instances. No clue why it doesn't affect python 2.6, though.22:23
soren\o/22:23
devcamcarkerplunk22:24
devcamcarhttps://code.launchpad.net/openstack-dashboard22:24
devcamcarhttps://launchpad.net/django-nova22:24
devcamcarhttps://launchpad.net/openstack-dashboard is better link22:25
devcamcarso openstack-dashboard is a reference django implementation of django-nova22:25
devcamcardjango-nova is a django module meant to be reused by however many different django sites22:25
*** rnirmal has quit IRC22:26
vishysexy!22:27
*** sophiap has quit IRC22:28
dabosoren: can you re-review https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537 for the changes I made?22:28
sorendevcamcar: Screenshots!22:28
vishydevcamcar: yeah test deploy up somewhere!22:29
*** fcarsten has joined #openstack22:29
devcamcaryes, lots to be done22:29
* vishy wants to see jake's ui changes22:29
devcamcari for one am happy that it now has a README22:29
devcamcarnot all of jake's ui stuff is in, its still a bit ugly in its current state22:29
devcamcarbut not too bad22:30
devcamcarhe's going to be fixing it up tomorrow22:30
devcamcarhah, just received an email from ewan asking when dashboard will be released22:31
soren"Someone" needs to be more on IRC :)22:31
vishydevcamcar: now that dashboard is up we just need to make it not suck :)22:32
* vishy is referring specifically to the auth through the ec2 api22:32
devcamcarvishy: indeed!22:32
devcamcari think its a good candidate to use easyapi22:33
*** burris has quit IRC22:33
devcamcarso hopefully we can rip out all the ec2 specific stuff soon22:33
* vishy agrees22:33
*** gustavomzw has quit IRC22:35
fcarstenswift (beginner) question: Is there a way to tell whether a swift installation has reached consistency (i.e. is currently consistent)?22:36
*** brd_from_italy has quit IRC22:36
creihtfcarsten: a good indicator is the dispersion report22:36
creihtfcarsten: well that depends on what do you mean by consistent22:37
fcarstencreiht: where do I find that report?22:37
creihtfcarsten: http://swift.openstack.org/admin_guide.html#cluster-health22:37
*** burris has joined #openstack22:38
creihtThat will let you know in general if replicas of object or container partitions can not be reached22:38
fcarstencreiht: What I mean: Swift replication claims to become eventually consistent. I want to check if my swift installation works (well or at all) by checking whether (and how fast) it reaches consistency.22:38
creihtand is useful if you are adding new storage to the system22:38
fcarstencreiht: thanks I'll have a look :-)22:38
creihtthe above will only tell you on the consistency of the partitions in the event of a ring change (or outtage)22:39
devcamcarcreiht: howdy! refresh my memory - is there any disadvantage to starting with a small number of zones, say 4?22:39
creihtdevcamcar: hey!  The main disadvantage is availability in the case of failure scenarios22:40
devcamcarcreiht: is 5 the magic number?22:40
creihtfcarsten: the other things that you can look at are in the logs to see how long replication is running, and how much was replicated22:40
creihtyes22:40
devcamcarcreiht: awesome, thanks22:41
creihtwe saw a pretty dramatic difference in testing between 4 and 5 zones22:41
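A back-of-envelope way to see why 5 zones behaves noticeably better than 4: with 3 replicas each placed in a distinct zone, a single-zone failure touches every partition whose replica set includes that zone, i.e. a 3/Z fraction of them. This is my own arithmetic under a uniform-placement assumption, not a figure from the swift docs:

```python
def affected_fraction(zones, replicas=3):
    """Fraction of partitions that lose a replica when one zone fails,
    assuming each partition's replicas land in `replicas` distinct zones
    chosen uniformly at random."""
    return float(replicas) / zones

for z in (4, 5, 6):
    print("%d zones: %.0f%% of partitions lose a replica on a zone failure"
          % (z, 100 * affected_fraction(z)))
```

So going from 4 to 5 zones drops the affected fraction from 75% to 60%, and the surviving zones have proportionally more room to absorb handoff traffic — consistent with the "dramatic difference" creiht saw in testing.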
*** aimon_ has joined #openstack22:41
fcarstencreiht: Thanks again.22:42
creihtnp22:43
*** hky has quit IRC22:43
*** hky has joined #openstack22:44
*** daleolds has quit IRC22:44
*** daleolds1 has joined #openstack22:44
*** aimon has quit IRC22:45
*** aimon_ is now known as aimon22:45
*** mray has joined #openstack22:46
devcamcar3pm pst is a crappy time for launchpad to go down for maintenance.22:47
devcamcarthat is all.22:47
edaydevcamcar: it's a great time where it's hosted :)22:48
xtoddxsandywalsh: can you review https://code.launchpad.net/~anso/nova/wsgirouter22:50
sandywalshxtoddx, sure thing22:50
xtoddxsandywalsh: Mostly I want confirmation I didn't mess up the templating in the way nova-api translates the paste config into flags, and how I renamed the flags22:51
*** hggdh has joined #openstack22:52
sandywalshxtoddx, ok, I'll keep that in mind. I have to run for a bit, but will review tonight if that works?22:52
sorendabo: I think I've exhausted my reviewing energy for the day. First thing in the morning!22:52
dabosoren: ok, thanks!22:53
*** mdomsch has quit IRC22:54
openstackhudsonProject nova-tarmac build #51,894: FAILURE in 15 sec: http://hudson.openstack.org/job/nova-tarmac/51894/23:02
sandywalshxtoddx, hehe, trying to get the final diff before lp goes offline23:03
xtoddxsandywalsh: good luck!23:03
*** piken_ has joined #openstack23:05
*** piken has quit IRC23:05
*** troytoman has quit IRC23:05
* soren heads bedwards23:06
openstackhudsonProject nova-tarmac build #51,895: STILL FAILING in 37 sec: http://hudson.openstack.org/job/nova-tarmac/51895/23:07
*** ksteward has joined #openstack23:07
*** DubLo7 has joined #openstack23:07
openstackhudsonProject nova-tarmac build #51,896: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51896/23:12
openstackhudsonProject nova-tarmac build #51,897: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51897/23:17
mjmacnova question, sorry if this is a FAQ, but not finding the answer on nova.openstack.org...  how do volumes work?  are they block devices shared via AoE/iSCSI?23:17
mjmacah...  just found the service arch doc23:19
openstackhudsonProject nova-tarmac build #51,898: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51898/23:22
*** phymata has joined #openstack23:26
openstackhudsonProject nova-tarmac build #51,899: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51899/23:27
openstackhudsonProject nova-tarmac build #51,900: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51900/23:32
*** gondoi has quit IRC23:33
*** pvo is now known as pvo_away23:33
fcarstencreiht: Hmmm swift-stats-populate -d just sits there and doesn't do anything. So far 0 containers or objects created. Do I need to install something first (apart from swift) before it works?23:35
*** abecc has quit IRC23:36
openstackhudsonProject nova-tarmac build #51,901: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51901/23:37
fcarstencreiht: never mind. Found the problem: bad auth server URL23:40
*** deshantm has quit IRC23:42
openstackhudsonProject nova-tarmac build #51,902: STILL FAILING in 36 sec: http://hudson.openstack.org/job/nova-tarmac/51902/23:42
*** nate_h has joined #openstack23:42
*** deshantm has joined #openstack23:45
openstackhudsonProject nova-tarmac build #51,903: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51903/23:47
*** hggdh has quit IRC23:50
openstackhudsonProject nova-tarmac build #51,904: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51904/23:52
*** pvo_away is now known as pvo23:57
openstackhudsonProject nova-tarmac build #51,905: STILL FAILING in 35 sec: http://hudson.openstack.org/job/nova-tarmac/51905/23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!